I’ve been working on a web app that will support offline access. So, after fetching data from the network, I save it to a local store on the client. My first request saved just over 600 records, and the app will undoubtedly need to scale to many times that.
In this case, I’m saving records to IndexedDB, and as a proper database, it supports transactions: the ability to group multiple operations so that if one fails, they all roll back and the database is left cleanly in its pre-failure state. That ability, while advantageous, can affect performance.
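As a quick sketch (assuming an open db handle and a 'customers' object store, the same store name the test code later in this post uses), grouping writes looks like this; if either put fails, the whole transaction aborts and the store is left untouched:

const tx = db.transaction('customers', 'readwrite');
const store = tx.objectStore('customers');
store.put({ id: 1, value: 0.1 });
store.put({ id: 2, value: 0.2 });
tx.oncomplete = () => console.log('both writes committed');
tx.onabort = () => console.log('rolled back, store unchanged');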
One aspect I always wondered about was how IndexedDB “auto” commits transactions. Here’s a great article on how it works and why. Basically, the agent/browser auto-commits when there is no way left for the transaction to transition from the inactive to the active state. This most commonly happens when there are no remaining callbacks tied to a previous get/put request.
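One practical consequence, sketched below (again assuming an open db handle): awaiting anything that isn’t an IndexedDB request yields to the event loop, and once no callbacks are pending, the transaction commits out from under you.

async function readTwo(db) {
  const tx = db.transaction('customers', 'readonly');
  const store = tx.objectStore('customers');

  store.get(1).onsuccess = (event) => {
    // Inside a request callback the transaction is active, so
    // queueing further requests here would be fine.
    console.log(event.target.result);
  };

  // Awaiting unrelated async work yields to the event loop; the get(1)
  // callback runs, nothing else is queued, and the browser auto-commits.
  await new Promise((resolve) => setTimeout(resolve, 0));

  store.get(2); // throws TransactionInactiveError
}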
You might ask why “auto” commit rather than calling commit explicitly. As the previous article explains, it is to prevent devs from leaving transactions open, or holding them open too long. So initially IndexedDB shipped with only auto-commit and no explicit way to call commit. An explicit commit() was added later, and as mentioned, “auto” commit is here to stay.
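Calling it explicitly looks like this (a sketch; the feature check matters because older browsers don’t implement IDBTransaction.commit()):

const tx = db.transaction('customers', 'readwrite');
tx.objectStore('customers').put({ id: 1, value: 0.5 });
// Signal that no further requests are coming. Without this, the
// browser simply auto-commits once the request queue drains.
if (tx.commit) tx.commit();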
Knowing the overhead of transactions, and being worried about the initial batch of writes, into the thousands, I looked for a way to write all the records at once: a putAll method. I was surprised not to find one, though a sister getAll method exists.
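Rolling your own is straightforward, though. Here’s a hypothetical putAll helper (the name and shape are mine, not a standard API) that queues every record inside a single transaction:

function putAll(db, storeName, records) {
  return new Promise((resolve, reject) => {
    const tx = db.transaction(storeName, 'readwrite');
    const store = tx.objectStore(storeName);
    for (const record of records) store.put(record);
    tx.oncomplete = resolve;
    tx.onerror = () => reject(tx.error);
  });
}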
After some digging, I found a way to make 1 vs. 1,000 transactions perform fairly similarly. The improvement comes from relaxing the durability of transactions, rather than the atomic rollback behavior I mentioned earlier: by default, the browser may wait until data is flushed to storage before reporting a transaction complete, while relaxed durability lets it report success sooner. If you can tolerate the small risk of losing an acknowledged write in a crash (here the data can simply be re-fetched from the network), you can set the transaction’s durability option to “relaxed”:
db.transaction('customers', 'readwrite', { durability: 'relaxed' })
This makes a significant performance difference. Check out these results; each run processes the same total number of records:
Default durability, with a total data size of 100k records, split into various batch sizes.
Batch size: 100 Time: 27150.79999999702
Batch size: 1000 Time: 10298.70000000298
Batch size: 10000 Time: 7891.399999991059
Batch size: 100000 Time: 7732.5
In the extreme case, 1 huge transaction vs. 1,000 smaller ones for the same amount of data, the single transaction is roughly 3.5x faster (7,732 ms vs. 27,151 ms).
Here are the results with the “relaxed” durability explicitly set:
Batch size: 100 Time: 8261.5
Batch size: 1000 Time: 8481.800000011921
Batch size: 10000 Time: 7841.29999999702
Batch size: 100000 Time: 7716.79999999702
In this case, the same comparison, 1 huge transaction vs. 1,000 smaller ones, yields a difference of only about 7%.
No surprise, then, that the idea of implementing putAll was dropped by the Chromium developers.
Below is modified test code based on a script from the Chromium team.
<!doctype html>
<meta charset="utf-8">
<title>IndexedDB population performance test</title>
<pre id="console"></pre>
<script>
'use strict';

const testRelaxed = false;

function consoleLog(message) {
  const console = document.getElementById('console');
  console.innerText = console.innerText + message + "\r\n";
}

function createTestData() {
  const data = [];
  for (let i = 0; i < 100000; i++)
    data.push({ id: i, value: Math.random() });
  return data;
}

const kTestData = createTestData();

function createDb(db_name) {
  return new Promise((resolve, reject) => {
    const request = indexedDB.open(db_name);
    request.onblocked = reject;
    request.onerror = reject;
    request.onupgradeneeded = () => {
      const db = request.result;
      db.createObjectStore('customers', { keyPath: 'id' });
    };
    request.onsuccess = () => {
      const db = request.result;
      resolve(db);
    };
  });
}

function deleteDb(db_name) {
  return new Promise((resolve, reject) => {
    const request = indexedDB.deleteDatabase(db_name);
    request.onsuccess = resolve;
    request.onerror = reject;
    request.onblocked = reject;
  });
}

function writeData(db, data, start, count) {
  return new Promise((resolve, reject) => {
    // Toggle testRelaxed above to compare default vs. relaxed durability.
    const transaction = db.transaction('customers', 'readwrite',
        testRelaxed ? { durability: 'relaxed' } : {});
    const store = transaction.objectStore('customers');
    for (let i = 0; i < count; ++i)
      store.put(data[start + i]);
    // Commit explicitly where supported; otherwise auto-commit kicks in.
    if (transaction.commit)
      transaction.commit();
    transaction.oncomplete = resolve;
    transaction.onerror = reject;
    transaction.onblocked = reject;
  });
}

async function testRun(db_name, batch_size) {
  const db = await createDb(db_name);
  const start_time = performance.now();
  for (let i = 0; i < kTestData.length; i += batch_size) {
    await writeData(db, kTestData, i, batch_size);
  }
  const end_time = performance.now();
  db.close();
  await deleteDb(db_name);
  consoleLog(`Batch size: ${batch_size} Time: ${end_time - start_time}`);
}

async function runAllTests() {
  await testRun('test1', 100);
  await testRun('test2', 1000);
  await testRun('test3', 10000);
  await testRun('test4', 100000);
}

runAllTests();
</script>