A contribution to Serverpod that enables parallel processing of FutureCalls, eliminating bottlenecks and giving full control over scheduling.
Sometimes it’s the smallest additions that unlock the biggest improvements.
Until now, FutureCalls in Serverpod were executed sequentially. That worked, but it also meant that if one job took a long time, every other one had to wait.
I just contributed a small but powerful change to the Serverpod package: You can now configure
✅ how often the database is scanned for new calls
✅ how many calls are processed in parallel
In short: FutureCalls are no longer a bottleneck. You get full control over scheduling behavior and server load.
Configuration is simple and flexible — and officially documented: 👉 https://docs.serverpod.dev/concepts/scheduling#configuration
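To give a rough idea, here is a minimal sketch of what the server config could look like. The key names below (futureCall, scanIntervalInMs, concurrencyLimit) are my placeholders, not necessarily the real ones, so check the linked docs for the exact names and defaults:

```yaml
# config/development.yaml (sketch only; key names are assumptions, see docs above)
futureCall:
  # How often the database is scanned for new calls (hypothetical key name)
  scanIntervalInMs: 1000
  # How many calls may be processed in parallel (hypothetical key name)
  concurrencyLimit: 5
```

The idea is simple: the scan interval controls how quickly queued calls get picked up, and the concurrency limit caps how many run at once, so you can match the scheduling behavior to your server's load budget.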
I ran into this myself while building a feature-heavy app. Instead of working around the bottleneck, I decided to fix it — and share it. Open Source, the way it’s meant to be. 🤝
Curious how you’re using FutureCalls in your projects. Any patterns or use cases worth sharing?