10 comments

  • carlsverre 1 hour ago
    You might be interested in taking a look at Graft (https://graft.rs/). I have been iterating in this space for the last year, and have learned a lot about it. Graft has a slightly different set of goals, one of which is to keep writes fast and small and optimize for partial replication. That said, Graft shares several design decisions, including the use of framed ZStd compression to store pages.

    I do like the B-tree aware grouping idea. This seems like a useful optimization for larger scan-style workloads. It helps eliminate the need to vacuum as much.

    Have you considered doing other kinds of optimizations? Empty pages, free pages, etc.

    • russellthehippo 1 hour ago
      Very cool, thanks. I hadn’t seen Graft before, but it sounds pretty adjacent in a lot of interesting ways. I'll look at the repo and see what I can apply.

      I've tried out all sorts of optimizations - for free pages, I've considered leaving empty space in each S3 object and serving those as free pages to get efficient writes without shuffling pages too much. My current bias has been to over-store a little if it keeps the read path simpler, since the main goal so far has been making cold reads plausible rather than maximizing space efficiency. Especially because free pages compress well.

      I have two related roadmap items: hole-punching and LSM-like writing. For local non-HDD storage, we can evict empty pages automatically by releasing their space back to the OS. For writes, an LSM approach is best because it groups related things together, which is exactly what we need, but it would mean doing a lot of rewriting on checkpoint. So both of these feel a little premature to optimize for versus other things.

      • carlsverre 30 minutes ago
        Both of those roadmap items make sense! Excited to see how you evolve this project!
  • agosta 1 hour ago
    This is awesome! With all of the projects/teams working on improving sqlite, it feels like it's just a matter of time before it becomes a better default than postgres for serious projects.

    I do wonder - for projects that do ultimately enforce single writer sqlite setups - it still feels to me as if it would always be better to keep the sqlite db local (and then rsync/stream backups to whatever S3 storage one prefers).

    The nut I've yet to see anyone crack on such setup is to figure out a way to achieve zero downtime deploys. For instance, adding a persistent disk to VMs on Render prevents zero downtime deploys (see https://render.com/docs/disks#disk-limitations-and-considera...) which is a real unfortunate side effect. I understand that the reason for this is because a VM instance is attached to the volume and needs to be swapped with the new version of said instance...

    There are so many applications where merely scaling up a single VM as your product grows simplifies devops / product maintenance so much that it's a very compelling choice vs managing a cluster/separate db server. But getting forced downtime between releases to achieve that isn't acceptable in a lot of cases.

    Not sure if it's truly a cheaply solvable problem. One potential option is to use a tool like turbolite as a parallel data store and, only during deployments, use it to keep the application running for the 10 to 60 seconds during a release swap. During this time, writes to the db are slower than usual but entirely online. And then, when your new release is live, it can sync the difference of data written to s3 back to the local db. In this way, during regular operation, we get the performance of local IO and fallback onto s3 backed sqlite during upgrades for persistent uptime.

    Sounds like a fraught thing to build. But man it really is hard/impossible to beat the speed of local reads!

    • hrmtst93837 0 minutes ago
      Keeping the DB local cuts the worst latency spikes, but then you trade away the whole pitch of ephemeral compute and just-in-time scaling, so you end up glued to old-school infra patterns in disguise, plus node affinity and warm-cache babysitting that look a lot like the stuff SQLite was supposed to let you dodge. Add a few readers on volatile nodes and it gets ugly fast.
  • bob1029 55 minutes ago
    Have you considered using techniques like conditional PUT to enable multiple writers?

    https://aws.amazon.com/about-aws/whats-new/2024/08/amazon-s3...

    https://docs.aws.amazon.com/AmazonS3/latest/userguide/condit...
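
    Roughly, the compare-and-swap idea behind conditional PUT: publish a small manifest object, and only let a writer replace it if the etag hasn't changed since that writer read it. A minimal sketch of those semantics, simulated in memory (all names made up, not a real S3 client):

    ```rust
    use std::collections::HashMap;

    // In-memory stand-in for a bucket: key -> (etag, bytes). Real S3
    // conditional writes use If-Match / If-None-Match headers; this only
    // simulates the compare-and-swap semantics they give you.
    struct FakeBucket {
        objects: HashMap<String, (u64, Vec<u8>)>,
        next_etag: u64,
    }

    impl FakeBucket {
        fn new() -> Self {
            FakeBucket { objects: HashMap::new(), next_etag: 1 }
        }

        // PUT with an expected etag (None = "object must not exist yet").
        // Succeeds and returns the new etag only if the precondition
        // still holds; otherwise another writer won the race.
        fn put_if_match(&mut self, key: &str, expected: Option<u64>, body: Vec<u8>) -> Result<u64, ()> {
            let current = self.objects.get(key).map(|(etag, _)| *etag);
            if current != expected {
                return Err(()); // precondition failed: lost the race
            }
            let etag = self.next_etag;
            self.next_etag += 1;
            self.objects.insert(key.to_string(), (etag, body));
            Ok(etag)
        }
    }
    ```

    Two writers that both read etag E and try to publish will have exactly one succeed; the loser re-reads and retries, which is enough to serialize manifest updates without a separate coordination service.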

  • russellthehippo 2 hours ago
    Also I want to acknowledge the other projects in adjacent parts of this space — raw SQLite range-request VFSes, Litestream/LiteFS-style replication approaches, libSQL/Turso, Neon, mvsqlite, etc. I took a lot of inspiration from them, thanks!
  • russellthehippo 2 hours ago
    A bit more color on what I found interesting building this:

    The motivating question for me was less “can SQLite read over the network?” and more “what assumptions break once the storage layer is object storage instead of a filesystem?”

    The biggest conceptual shift was around *layout*.

    What felt most wrong in naive designs was that SQLite page numbers are not laid out in a way that matches how you want to fetch data remotely. If an index is scattered across many unrelated page ranges, then “prefetch nearby pages” is kind of a fake optimization. Nearby in the file is not the same thing as relevant to the query.

    That pushed me toward B-tree-aware grouping. Once the storage layer starts understanding which table or index a page belongs to, a lot of other things get cleaner: more targeted prefetch, better scan behavior, less random fetching, and much saner request economics.
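
    As a sketch of what the grouping buys (simplified, not the actual implementation; the page-to-tree mapping is assumed as input, in practice it comes from walking the btrees, e.g. via SQLite's dbstat virtual table):

    ```rust
    use std::collections::BTreeMap;

    // Group SQLite page numbers by the btree (table or index) that owns
    // them, so pages a scan will touch together land in the same object.
    // `owner_of` is a (page number, owning tree) list; the result is the
    // page list for each stored object.
    fn group_pages(owner_of: &[(u32, &str)], pages_per_object: usize) -> Vec<Vec<u32>> {
        let mut by_tree: BTreeMap<&str, Vec<u32>> = BTreeMap::new();
        for &(page, tree) in owner_of {
            by_tree.entry(tree).or_default().push(page);
        }
        let mut objects = Vec::new();
        for (_tree, mut pages) in by_tree {
            pages.sort_unstable();
            // Pack each tree's pages into fixed-size objects; a tree never
            // shares an object with an unrelated tree, so fetching one
            // object pulls in pages that are actually related.
            for chunk in pages.chunks(pages_per_object) {
                objects.push(chunk.to_vec());
            }
        }
        objects
    }
    ```

    With this layout, "prefetch nearby" becomes "prefetch the rest of the tree you're already in," which is the relevance signal the raw file layout doesn't give you.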

    Another thing that became much more important than I expected is that *different page types matter a lot*. Interior B-tree pages are tiny in footprint but disproportionately important, because basically every query traverses them. That changed how I thought about the system: much less as “a database file” and much more as “different classes of pages with very different value on the critical path.”

    The query-plan-aware “frontrun” part came from the same instinct. Reactive prefetch is fine, but SQLite often already knows a lot about what it is about to touch. If the storage layer can see enough of that early, it can start warming the right structures before the first miss fully cascades. That’s still pretty experimental, but it was one of the more fun parts of the project.
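
    A toy version of the frontrun idea, keying off the text of plan rows (the detail strings here are illustrative; real EXPLAIN QUERY PLAN output varies by SQLite version, and the real thing hooks in lower than string parsing):

    ```rust
    // Pull the table/index names out of plan detail strings and return
    // the trees worth warming before the first page miss. Only handles
    // the "SEARCH ... USING INDEX x" and "SCAN t" shapes.
    fn trees_to_warm(plan_lines: &[&str]) -> Vec<String> {
        let mut trees = Vec::new();
        for line in plan_lines {
            if let Some(rest) = line.split("USING INDEX ").nth(1) {
                trees.push(rest.split_whitespace().next().unwrap_or("").to_string());
            } else if let Some(rest) = line.split("SCAN ").nth(1) {
                trees.push(rest.split_whitespace().next().unwrap_or("").to_string());
            }
        }
        trees
    }
    ```

    Once you know "this query will walk idx_users_email and then scan orders," the storage layer can start fetching those groups concurrently instead of discovering them one miss at a time.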

    A few things I learned building this:

    1. *Cold point reads and small joins seem more plausible than I expected.* Not local-disk fast, obviously, but plausible for the “many mostly-cold DBs” niche.

    2. *The real enemy is request count more than raw bytes.* Once I leaned harder into grouping and prefetch by tree, the design got much more coherent.

    3. *Scans are still where reality bites.* They got much less bad, but they are still the place where remote object storage most clearly reminds you that it is not a local SSD.

    4. *The storage backend is super important.* Different storage backends (S3, S3 Express, Tigris) have very different round-trip latencies, and that's the single most important factor in how to tune prefetching.

    Anyway, happy to talk about the architecture, the benchmark setup, what broke, or why I chose this shape instead of raw-file range GETs / replication-first approaches / etc.

    • hgo 1 hour ago
      I really appreciate this post. Freely and humbly sharing real insights from an interesting project. I almost feel like I got a significant chunk of the reward for your investment into this project just by reading.

      Thank you for sharing.

  • alex_hirner 1 hour ago
    What are your thoughts on eviction, re how easy to add some basic policy?
    • russellthehippo 1 hour ago
      Great question. I have some eviction functions in the Rust library; I don’t expose them through the extension/VFS yet. The open question is less “can I evict?” and more “when should eviction fire?”: via user action, via policy, or both.

      The obvious policy-driven versions are things like:

      - when cache size crosses a limit

      - on checkpoint

      - every N writes (kind of like autocheckpoint)

      - after some idle / age threshold

      My instinct is that for the workload I care about, the best answer is probably hybrid. The VFS should have a tier-aware policy internally that users can configure with separate policies for interior/index/data pages. But the user/application may still be in the best position to say “this tenant/session DB is cold now, evict aggressively.”
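
      For concreteness, the rough shape a tier-aware policy could take (hypothetical types, not the current API):

      ```rust
      // Interior pages get a larger budget than leaf/data pages because
      // basically every query traverses them; within a tier, evict LRU.
      #[derive(Clone, Copy, PartialEq)]
      enum Tier { Interior, Leaf }

      struct CachedPage {
          tier: Tier,
          last_used: u64, // logical clock; higher = more recently used
      }

      // Evict least-recently-used pages within each tier until that tier
      // fits its budget; returns the indices of evicted pages.
      fn evict(pages: &[CachedPage], budget: impl Fn(Tier) -> usize) -> Vec<usize> {
          let mut evicted = Vec::new();
          for tier in [Tier::Interior, Tier::Leaf] {
              let mut idx: Vec<usize> =
                  (0..pages.len()).filter(|&i| pages[i].tier == tier).collect();
              idx.sort_by_key(|&i| pages[i].last_used); // oldest first
              let keep = budget(tier);
              if idx.len() > keep {
                  evicted.extend_from_slice(&idx[..idx.len() - keep]);
              }
          }
          evicted
      }
      ```

      The application-level "this tenant is cold, evict aggressively" case then just means passing a much smaller budget for every tier.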

    • russellthehippo 1 hour ago
      Also, it’s super easy to add them; it’s definitely on the roadmap. The manifest tracks the data locally and could quickly evict on any operation that accesses it. Very safe post-checkpoint.
  • inferense 1 hour ago
    very cool!
  • jijji 51 minutes ago
    i wonder how much that costs per hour to run any normal load? what benefit does this have versus using mysql (or any similar rdbms) for the queries? mysql/pgsql/etc is free, remember, so using S3 obviously charges by the request, or am i wrong?
    • nthypes 37 minutes ago
      im also curious. GCS also charges per GET request
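
      Back-of-envelope, with assumed pricing (S3 Standard GETs have been around $0.0004 per 1,000 requests; check current rate cards, and note storage and egress are billed separately):

      ```rust
      // Hourly GET cost for a steady request rate. The per-request price
      // is an assumption passed in, not hardcoded from any provider.
      fn get_cost_per_hour(gets_per_second: f64, usd_per_1000_gets: f64) -> f64 {
          gets_per_second * 3600.0 / 1000.0 * usd_per_1000_gets
      }
      ```

      At $0.0004 per 1,000 GETs, a sustained 100 GETs/sec works out to roughly $0.14/hour, which is why cutting request count (grouping, prefetch, caching) matters more than raw bytes here.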
  • michaeljelly 2 hours ago
    Really cool