2025 in Review: Worktree's First Year

This year was Worktree’s first, with our official soft-launch happening on February 23rd. In the past ten-ish months, we’ve grown well past what could have been expected, and we have an incredibly ambitious and exciting roadmap in front of us heading into 2026.
For the uninitiated, Worktree is a new software supply chain, fully hosted and built in Canada. Worktree Code is a code hosting and collaboration platform, offering a fast and competitive alternative to GitHub; and Worktree Cloud is a new from-scratch Cloud platform, offering static site hosting, a CDN, and object storage – also all built and hosted in Canada.
Worktree Code
In numbers:
- 626 Git repositories
- 251 users
- ~2TiB average data sent per month
- ~75 million average HTTP requests per month
- ~150 average SQL queries per second
As the true nerds know, Worktree is forked from the Gitea open-source project. But we didn’t just slap a theme on it and call it done; we’ve been ripping the codebase apart, doing invasive refactors in the name of security, reliability, and scalability.
Since forking from Gitea, we’ve modified over 30k lines of code and, despite implementing numerous new features, have still removed about 9k lines of code overall.
So what have we been doing?
Completely rewritten Git backend – we’re gradually migrating the codebase to a new libgit2-based Git interface. This replaces the previous system, which relied on a mix of direct Git CLI execution and go-git. We’re happy to report that so far we’ve fully removed go-git in favour of our new backend, which immediately cut our RAM usage in production by half. This also contributes significantly to SHA-256 hash support, which should be rolling out soon.
More robust GPG signing – with our new libgit2 backend, we were also able to ditch the clunky and nightmarish reliance on GnuPG for system-initiated Git operations like repo initialization and merging pull requests. These now use a pure Go OpenPGP library and are fully API-driven; essentially, we’re no longer bottlenecked by the myriad inefficiencies of the GnuPG-driven system. The new system is more secure, more scalable, and faster, and it’s easier to maintain and set up. The new keys are stored encrypted at rest and only decrypted in memory at the moment they are used. And because it’s all API-driven, we can harden it further with HSMs in the future with minimal changes.
UX and design improvements – we shipped several redesigns of key pages across the app: user and organization profiles, repo creation and settings pages, and a complete rebuild of the projects UI. We’ve also been removing legacy cruft and bloat; we recently removed VueJS from the codebase entirely in favour of native web components and/or HTMX. Expect to see many more UX uplifts in the near future.
Custom Actions Runner – our internal fork of act_runner has been dutifully serving public CI jobs so far, utilizing cloud VMs as its security boundary. We have a lot of in-progress work coming soon to improve our Actions environment and speed; more on that later.
Fixing blockers for high availability – we’ve been gradually refactoring legacy parts of the codebase that rely on in-memory locks, shared state, or other sins that prevent the app from running distributed across multiple machines. We’re nearly there, with only a few small issues left to resolve.
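One common way to make that kind of refactor incremental is to hide lock acquisition behind an interface, so a single-node backend and a shared-store backend (Redis, SQL, etcd) are interchangeable at the call site. A minimal Go sketch, with illustrative names rather than Worktree’s actual code:

```go
package main

import (
	"fmt"
	"sync"
)

// KeyLocker abstracts lock acquisition so the same call sites work whether
// the backend is process-local (single node) or shared across machines.
type KeyLocker interface {
	TryLock(key string) bool
	Unlock(key string)
}

// memLocker is the single-node implementation; swapping in a shared-store
// implementation behind the same interface is what unblocks running the
// app distributed across multiple machines.
type memLocker struct {
	mu   sync.Mutex
	held map[string]bool
}

func newMemLocker() *memLocker { return &memLocker{held: make(map[string]bool)} }

func (l *memLocker) TryLock(key string) bool {
	l.mu.Lock()
	defer l.mu.Unlock()
	if l.held[key] {
		return false
	}
	l.held[key] = true
	return true
}

func (l *memLocker) Unlock(key string) {
	l.mu.Lock()
	defer l.mu.Unlock()
	delete(l.held, key)
}

func main() {
	var locks KeyLocker = newMemLocker()
	fmt.Println(locks.TryLock("repo:42")) // true: lock acquired
	fmt.Println(locks.TryLock("repo:42")) // false: already held
	locks.Unlock("repo:42")
}
```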
Infrastructure migrations – to save time (and money) when we launched, we started on AWS EC2 instances. While this worked, it was misaligned with the goals of the platform, and quickly got prohibitively expensive as we grew. In July we migrated to bare metal leased from OVH as a stop-gap. In Q4 we started purchasing our own hardware, and expect to finally migrate to our own colocated servers at some point in Q1 2026.
As an aside, the move to bare metal also helped us absorb the numerous aggressive attacks unleashed by various nation-states and AI companies – we’ve seen upwards of 2000 requests per second hitting our load balancers, despite our attempts to block this traffic. Thankfully, our new infrastructure can handle this load with relative ease.
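A per-client token bucket is the classic primitive behind this kind of traffic shedding: each client gets a burst allowance that refills at a steady rate, and requests beyond it are rejected cheaply at the edge. This Go sketch is purely illustrative – the post doesn’t describe Worktree’s actual blocking mechanism:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// tokenBucket limits one client: up to burst tokens, refilled at rate
// tokens per second. Allow spends one token if any are available.
type tokenBucket struct {
	mu     sync.Mutex
	tokens float64
	burst  float64
	rate   float64
	last   time.Time
}

func newBucket(rate, burst float64) *tokenBucket {
	return &tokenBucket{tokens: burst, burst: burst, rate: rate, last: time.Now()}
}

func (b *tokenBucket) Allow() bool {
	b.mu.Lock()
	defer b.mu.Unlock()
	// Refill based on elapsed time, capped at the burst size.
	now := time.Now()
	b.tokens += now.Sub(b.last).Seconds() * b.rate
	if b.tokens > b.burst {
		b.tokens = b.burst
	}
	b.last = now
	if b.tokens < 1 {
		return false
	}
	b.tokens--
	return true
}

func main() {
	b := newBucket(1, 2) // 1 request/second, burst of 2
	fmt.Println(b.Allow(), b.Allow(), b.Allow()) // two pass, the third is throttled
}
```

In practice one bucket would be kept per client IP (or subnet), typically in an LRU map so hostile traffic can’t exhaust memory.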
Worktree Cloud
In numbers:
- 12 static sites, deployed over 100 times
- 18 CDN endpoints
- 99.999% uptime
- ~50GiB of objects stored
An enormous amount of engineering effort in 2025 went into building the foundations for Worktree Cloud. Because we decided not to use an off-the-shelf platform like OpenStack, progress has been slower, but the long-term benefits will be immeasurably greater. Despite the slower pace, we’ve still managed to ship a lot in 2025:
In-house Object Storage – we wrote our own distributed, erasure-coding, consistent-hashing-based backend (open sourced as blobfs), and then reverse-engineered the S3 API and built a cell-based, object-aware router that uses blobfs as its storage layer. Since October, all of Worktree has been running on this bespoke object storage platform, and we expect to start rolling out public access in Q1 2026. You should join the waiting list!
Static Sites – our versioned, deploy-driven static site platform launched earlier this year. All of our websites have been self-hosted on Worktree Cloud since July. Sites was also our first production use case for Object Storage, and was running on our internal prototype of Object Storage long before Worktree Code was!
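The consistent hashing mentioned above for blobfs can be illustrated in a few dozen lines: nodes are hashed onto a circle at several virtual points, and each key goes to the first node clockwise from its own hash, so adding or removing a node only remaps the keys adjacent to its points. This is a generic Go sketch with hypothetical node names, not blobfs’s real implementation:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
	"strconv"
)

// ring maps keys to nodes via consistent hashing with virtual nodes.
type ring struct {
	points []uint32          // sorted hash positions on the circle
	owner  map[uint32]string // position -> node name
}

func hashOf(s string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(s))
	return h.Sum32()
}

func newRing(nodes []string, vnodes int) *ring {
	r := &ring{owner: make(map[uint32]string)}
	for _, n := range nodes {
		// Multiple virtual points per node smooth out the key distribution.
		for i := 0; i < vnodes; i++ {
			p := hashOf(n + "#" + strconv.Itoa(i))
			r.points = append(r.points, p)
			r.owner[p] = n
		}
	}
	sort.Slice(r.points, func(i, j int) bool { return r.points[i] < r.points[j] })
	return r
}

// Lookup returns the node owning key: the first point at or after its hash.
func (r *ring) Lookup(key string) string {
	h := hashOf(key)
	i := sort.Search(len(r.points), func(i int) bool { return r.points[i] >= h })
	if i == len(r.points) {
		i = 0 // wrap around the circle
	}
	return r.owner[r.points[i]]
}

func main() {
	r := newRing([]string{"cell-a", "cell-b", "cell-c"}, 64)
	fmt.Println(r.Lookup("bucket/object-1"))
}
```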
CDN Endpoints – alongside Sites, we also launched CDN, our from-scratch caching layer-7 edge proxy. It’s built to be fast, and we definitely achieved that goal: our average response time (measured from Bell Fibre in Montréal) is ~38ms, including content transfer.
Foundational Infrastructure – while not user-facing, a lot of work went into the supporting infrastructure behind the Cloud: VRRP/ARP/NDP IP failover, a WireGuard overlay, HAProxy, Kubernetes… In the end, we have an architecture that will take us well into the future, and is directly transferable to future projects and future colocation plans.
What’s Next
We have a tonne of projects in the pipeline that we can’t wait to share. Here’s a peek at what’s coming up in the next few months.
Object Storage Launch – We expect to open up Object Storage to the general public early next year, pending a few more hardware purchases and colocation deployments.
Actions 2.0 – We’ve been chipping away at a fully rebuilt, clean-room implementation of an Actions runtime for Worktree. We’re not using the Gitea act_runner, which eliminates a significant amount of runtime complexity and offers a massive overall security uplift. As a side effect, this new runtime will also allow us to migrate Actions off of AWS EC2, which will remove our last AWS dependency and bring newer Ubuntu runtimes along with it.
Container Apps – Our new Actions runtime will itself run on the prototype of the new Container Apps Platform-as-a-Service (PaaS) that we’re building as our first compute product for Worktree Cloud. Built on lightweight KVM VMs and tightly integrated with the other Cloud products, Container Apps will let you deploy containers as easily as a static site, load-balance HA deployments, and even expose dual-stack TCP services directly.
Open Source Release – We’ve held off on open sourcing Worktree itself, as so much is in flux that the added overhead of cutting versions, writing documentation, and otherwise managing an open source project would be a burden. However, as some of our largest refactors come to fruition, things are looking up. We hope to officially open source our fork in H1 2026.
As always, we invite you to join the Discord and follow us on Mastodon, X, or Bluesky to get the latest updates and become part of the community.
Here’s to 2026!