
We recently had a client that needed their application to work on tablets in warehouses. If you've ever been in a warehouse, you know the Wi-Fi situation: thick concrete walls, metal shelving everywhere, and dead zones that seem to move around just to spite you. Users needed to scan inventory, record shipments, and log activities without data loss, no matter where they were in the building or how strong their Wi-Fi signal was.
This is what the local-first approach is all about -- the user shouldn't notice a difference if the device completely loses connection. Thanks to the IndexedDB browser API, it's entirely possible for the application to have its own local persistent data store. The data is persisted locally and sent to the backend when the connection is reestablished.
It sounds great in theory, but anyone who's tried to build it knows that the implementation is anything but simple. Not only do you need to track which entities were created or modified while offline and how that compares to what's in the backend database, you also have the challenge of keeping your application's frontend state in sync with the data in IndexedDB. To make this easier, we're excited to announce that c3kit-bucket now ships with a full IndexedDB implementation that does all the hard parts of making a Lo-Fi application.
If you've used Bucket before -- whether with Datomic on the backend, JDBC, or the in-memory implementation for testing
and frontend state -- you already know the API. The same db/tx, db/find-by, and db/entity functions you've been
writing now work with IndexedDB in the browser. They update your application's frontend state store and persist to
IndexedDB asynchronously, making sure your application state never gets out of sync with IndexedDB. It's configurable
for use as your application's primary data store or as an offline cache that syncs with the database when the
connection is reestablished. It can be used in plain ClojureScript projects or configured with
the Bucket ReMemory
implementation for Reagent applications, providing fine-grained reactive state management and offline persistence in one
fell swoop.
Setting up IndexedDB in Bucket looks almost identical to setting up the memory implementation. Here's the configuration:
(ns my-app.config
  (:require [c3kit.bucket.api :as db]
            [c3kit.bucket.re-indexeddb] ;; registers :re-indexeddb impl
            [my-app.schema :as my-schema]
            [reagent.core :as reagent]))

(def bucket-config
  {:impl         :re-indexeddb
   :db-name      "my-app"
   :store        (reagent/atom nil) ;; optional -- :re-indexeddb defaults to a Reagent atom
   :online?      #(.-onLine js/navigator)
   :idb-strategy :cache})

(defn install-db! []
  (db/set-impl! (db/create-db bucket-config my-schema/full-schema)))
If you've configured Bucket before, this should look familiar. The options are:

- :impl: Either :indexeddb or :re-indexeddb. :indexeddb can be thought of as Bucket's :memory implementation plus IndexedDB; :re-indexeddb is Bucket's :re-memory implementation for Reagent plus IndexedDB.
- :db-name: The name of the IndexedDB database in the browser. This is also used as a localStorage key prefix for schema version tracking.
- :online?: A function that returns whether the app is currently online. This controls ID generation, dirty tracking, and cache behavior. For most web applications, #(.-onLine js/navigator) is the right choice. Avoid checks that won't be ready at init time, like WebSocket connection state.
- :idb-strategy: Either :primary or :cache.
  - :primary: Use this when your app has no server-side database; IndexedDB is the only persistent store. On page load, all data is rehydrated from IndexedDB into memory, and data persists across page reloads indefinitely. This can work well for apps with no backend at all.
  - :cache: Use this when your app has a server-side database (Datomic, Postgres, etc.) that is the authoritative source of truth when online. IndexedDB acts as a local cache for offline resilience.

When :idb-strategy is set to :cache and :online? returns true on init!, Bucket clears stale cached data from both the IndexedDB entity stores and the in-memory store -- but dirty entities (unsynced offline changes) are preserved. This makes sure that when the page is refreshed, the data fetched from the server isn't competing with stale data from IndexedDB, while data created or modified offline remains available for the app to retrieve and sync to the backend. For example, if a user goes offline and, in the meantime, another user deletes some entities, the first user's application would still display those entities once back online, because the data still exists in their local IndexedDB.
NOTE: Bucket does not currently handle conflict reconciliation when two users modify the same entity while offline. When an offline entity syncs, sync-tx* merges the incoming changes on top of whatever is currently in the server database -- incoming sync wins. If two users edit different fields on the same entity, both changes survive. But if they edit the same field, whichever user syncs last silently overwrites the other's change. If your app needs to detect or resolve these conflicts, you'll need to implement that logic on the server side during sync.
After creating the database, call init! to open the IndexedDB connection and load data:
(require '[c3kit.bucket.idb-common :as idb])

(-> (idb/init!)
    (.then (fn [_] (sync-offline-data!)))
    (.then (fn [_] (render-app))))
init! returns a js/Promise. It opens the IndexedDB database, creating or migrating object stores based on your
schema, then either rehydrates the in-memory store from IndexedDB or clears it depending on your strategy and online
status.
IMPORTANT: Always call your sync function after init! returns. If the user refreshes the page while already online, the browser's "online" event won't fire, but there may still be dirty entities from a previous offline session waiting to be synced.
Once initialized, it's the same Bucket API you already know:
(require '[c3kit.bucket.api :as db])
;; Create
(db/tx {:kind :task :title "Scan aisle 4" :warehouse "Phoenix"})
;; Read
(db/entity :task 123)
(db/find-by :task :warehouse "Phoenix")
(db/ffind-by :task :warehouse "Phoenix") ;; first match
;; Update
(db/tx (assoc task :title "Scan aisle 5"))
;; Delete
(db/delete task)
;; Count
(db/count-by :task :warehouse "Phoenix")
Behind the scenes, Bucket is doing quite a bit of work depending on whether you're online or offline:
When offline, db/tx assigns a temporary negative ID (e.g., -1, -2) to new entities, adds them to the dirty set for
later syncing, and persists to IndexedDB.
When online, db/tx assigns positive IDs via the in-memory ID generator and persists to IndexedDB as clean data.
In both cases, the in-memory store updates immediately so the frontend can re-render (optimistic updates), and IndexedDB persistence happens asynchronously. If IndexedDB persistence fails, the in-memory store is automatically rolled back. This also makes reads fast and synchronous: data comes from the atom backing the database store rather than from IndexedDB itself, which can only be queried asynchronously via Promises.
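As a sketch of what this looks like from application code (assuming the configuration shown earlier and a device that is currently offline):

```clojure
;; Offline: :online? returns false, so db/tx assigns a temporary negative ID
;; and adds the entity to the dirty set for later syncing.
(def task (db/tx {:kind :task :title "Scan aisle 4" :warehouse "Phoenix"}))
(:id task) ;; a temporary negative ID, e.g. -1

;; Reads are synchronous because they hit the in-memory store, not IndexedDB:
(db/ffind-by :task :title "Scan aisle 4")
```

Once the entity syncs, the temporary ID is replaced by a real server-assigned one, as described in the sync lifecycle below.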
This is where local-first gets interesting. The sync lifecycle is the flow of data from "created offline on the device" to "safely persisted on the server with a real ID." There are four phases.
The user creates, edits, and deletes entities while offline. Each operation updates the in-memory store immediately,
persists to IndexedDB asynchronously, and adds the entity's {:id :kind} to the dirty set in IndexedDB's _meta store.
The user doesn't notice anything different. The app just works.
When connectivity returns, your app calls sync!:
(require '[c3kit.bucket.idb-common :as idb])

(defn sync-callback [dirty-entities]
  (when (seq dirty-entities)
    (let [dirty-ids (set (map :id dirty-entities))]
      (send-to-server dirty-entities
                      (fn [server-response]
                        (idb/sync-complete! dirty-ids (:payload server-response)))))))

(idb/sync! sync-callback)
sync! reads the dirty set from IndexedDB, fetches the actual entity data for each dirty entry, and passes the full
entities to your callback. It's then your responsibility to send them to the server however your app communicates (AJAX,
WebSocket, etc.).
It is highly recommended that your payload include a unique :sync-id, e.g. a hash of the payload content, for idempotency on the server side. We will discuss this more in step 3b.
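One way to derive such an ID is to hash the dirty entities themselves, so retrying the same batch produces the same :sync-id. This is a sketch, not a Bucket API -- the send-to-server transport and the /api/sync route are placeholders for whatever your app already uses:

```clojure
(defn payload-sync-id
  "Deterministic ID for a batch: the same entities produce the same ID on retry."
  [dirty-entities]
  (str (hash (sort-by :id dirty-entities))))

(defn send-to-server [dirty-entities on-response]
  ;; The POST helper and payload shape are assumptions; adapt to your transport.
  (post! "/api/sync"
         {:updates dirty-entities
          :sync-id (payload-sync-id dirty-entities)}
         on-response))
```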
A good place to trigger sync is both after init! (to catch dirty entities from a previous session) and on the
browser's "online" event:
(.addEventListener js/window "online" (fn [] (sync-offline-data!)))
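Putting both triggers together, app startup might be sketched like this (start! is hypothetical; install-db!, sync-offline-data!, and render-app are the app's own functions from the earlier examples):

```clojure
(defn start! []
  (install-db!)
  ;; Sync whenever connectivity returns while the page is open:
  (.addEventListener js/window "online" (fn [_] (sync-offline-data!)))
  ;; And once after init!, to catch dirty entities from a previous session:
  (-> (idb/init!)
      (.then (fn [_] (sync-offline-data!)))
      (.then (fn [_] (render-app)))))
```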
On the server side, entities created offline arrive with negative temporary IDs. The server needs to strip these IDs so
the database can assign real ones. Bucket provides idbc/sync-tx* to handle this:
(require '[c3kit.bucket.idbc :as idbc])

(defn handle-sync [{:keys [body]}]
  (let [{:keys [entities id-map]} (idbc/sync-tx* (:updates body))]
    (run! db/delete (:deletions body))
    (ajax/ok entities)))
sync-tx* strips negative IDs, lets the database assign real ones, and returns an id-map of
{old-negative-id new-real-id} for remapping cross-references. The returned entities have their real server-assigned
IDs.
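If your offline entities reference each other by ID, the id-map lets you rewrite those cross-references after the real IDs are assigned. A sketch (the :parent-id field is hypothetical, for illustration only):

```clojure
(let [{:keys [entities id-map]} (idbc/sync-tx* updates)]
  ;; Replace any lingering negative cross-references with their real IDs;
  ;; IDs not present in id-map are left untouched.
  (map (fn [e] (update e :parent-id #(get id-map % %))) entities))
```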
If the server crashes mid-sync, the client will retry the entire batch, since it didn't receive a response from the
server to know which entities have been processed. Without protection, sync-tx* would create duplicates for entities
that were already persisted on the first attempt. To prevent this, pass a dedup-keys-by-kind map:
(idbc/sync-tx* updates
               {:task     [:employee :date :operation]
                :timecard [:employee :date]})
For each offline entity whose kind appears in the map, sync-tx* checks whether an entity with matching attribute
values already exists. If so, it updates the existing entity instead of creating a duplicate.
Network retries, background sync, and service workers can all send the same offline data to the server multiple times.
Bucket provides idbc/claim-sync! to deduplicate at the endpoint level:
(defn handle-sync [{:keys [body]}]
  (let [sync-id (:sync-id body)
        claimed (idbc/claim-sync! sync-id)]
    (if claimed
      (ajax/ok (process-sync body)) ;; first time -- process normally
      (ajax/ok []))))               ;; duplicate -- return empty collection
The client generates a deterministic sync ID from the entity data, e.g. a hash of the payload content. Your application
is responsible for providing this. claim-sync! returns true only on the first call for a given ID. Combined with
the sync-complete! no-op behavior on empty responses (more on that in a moment), this creates a safe retry loop: the
server skips duplicate work, the client keeps its data until a sync actually succeeds.
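Combining claim-sync! with sync-tx* from the earlier examples, a complete endpoint might be sketched like this (the dedup keys and response helpers follow those examples; the exact route wiring is up to your app):

```clojure
(defn handle-sync [{:keys [body]}]
  (if (idbc/claim-sync! (:sync-id body))
    ;; First time we've seen this sync-id: process normally.
    (let [{:keys [entities]} (idbc/sync-tx* (:updates body)
                                            {:task [:employee :date :operation]})]
      (run! db/delete (:deletions body))
      (ajax/ok entities))
    ;; Duplicate: the empty response makes the client's sync-complete! no-op,
    ;; so its local data stays queued rather than being deleted.
    (ajax/ok [])))
```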
Back on the frontend, after the server responds with entities that now have real IDs, call sync-complete! to clean up:
(idb/sync-complete! dirty-id-set server-entities)
This does three things in order:
1. Replaces the temporarily-ID'd entities in the in-memory store with the server entities and their real IDs
2. Persists those server entities to IndexedDB
3. Removes the synced entries from the _meta store so they won't be synced again
sync-complete! no-ops when server-entities is empty. This is intentional: if the server rejects a duplicate sync via claim-sync!, returning an empty response prevents the client from deleting its local data. The dirty entities stay queued for the next sync attempt, preventing data loss.
Everything we've covered so far syncs offline data when the user has the page open and goes back online. But what if the user makes changes offline, closes the browser, and leaves? When their device regains connectivity, there's no page open to trigger the sync.
That's where service workers come in. A service worker can run in the background -- even when all tabs are closed -- and sync dirty entities from IndexedDB to the server when connectivity returns. The service worker and the main app share the same IndexedDB database, so the service worker can read the dirty set and POST it to the server just like the main app does.
Bucket doesn't ship a service worker (every app's sync logic is different), but it does provide
c3kit.bucket.idb-reader -- a lightweight, promise-based namespace that reads dirty entities directly from IndexedDB
without needing the full Bucket infrastructure. This keeps the service worker's dependency footprint minimal. The
IndexedDB guide in the c3kit.bucket docs covers
service worker integration in detail, including coordination between the main app and the service worker to avoid race
conditions on the shared IndexedDB connection.
One last thing worth knowing: you don't need to manually manage IndexedDB object stores. Bucket creates them
automatically from your entity schemas -- each entity kind becomes an object store (e.g., :task becomes a "task"
store), plus a _meta store for dirty tracking.
When your schema changes (new kinds, new fields), Bucket detects the change via a hash of the schema and increments the
IndexedDB version. This triggers IndexedDB's onupgradeneeded event, which creates new stores, removes old ones, and
handles the migration automatically. Existing stores with new fields just work -- IndexedDB is schemaless within a
store.
Building offline-capable applications has historically been one of those things that sounds simple, feels important, and turns into an engineering nightmare the moment you actually try to do it. The number of edge cases is staggering: temporary IDs that need to be replaced, dirty sets that need to survive page refreshes, sync retries that can't create duplicates, UI state that needs to stay consistent through all of it.
With Bucket's IndexedDB implementation, the hard parts are handled for you:
- db/tx, db/find-by, and db/entity work exactly as they always have
- sync!, sync-complete!, and server-side deduplication via claim-sync!
- :re-indexeddb for fine-grained re-renders

The best part is that if you're already using Bucket, adopting IndexedDB requires very little rewriting of your frontend code.
c3kit-bucket is an open-source repository, and pull requests are welcome. We'd love to hear about your experience building local-first applications with it.
Happy coding!