We want to do for hosting online games what WordPress did for hosting websites.
WordPress made web publishing universal with two things: a free, open-source codebase that runs on a small commodity VPS, and a managed tier for people who never wanted to touch a server. Around 40% of the web runs on it. We want a similar setup for multiplayer game backends, because games keep dying when they shouldn’t.
The Problem
When a publisher shuts a live-service game’s servers down, the game stops working. The studio went bankrupt, or the title isn’t earning, or a corporate strategy shift orphaned the project; the cause varies, the result doesn’t. A thing players paid for becomes a thing they can’t play.
The Stop Killing Games movement has spent the last few years arguing this should change. The asks are reasonable: when you sunset a multiplayer title, give players a self-host path, an offline mode, or a refund.
The honest counter from the industry is that retrofitting a self-host path into a shipped multiplayer game is brutal. The auth layer talks to PSN, Xbox Live, Steam, or Epic. Player progression is woven through proprietary services. Matchmaking sits behind closed-source orchestration. Friends and presence depend on platform-specific social graphs, and the data inside them is regulated by GDPR. Handing the community a working server isn’t a flag flip; it’s months of unfunded engineering on a title that already isn’t earning.
Both sides have a point. The community is asking for something the industry doesn’t know how to deliver cheaply.
Sustainable Value for Studios and Players
Studios that ship on ggscale get an authoritative game-server fleet, a P2P relay, auth, storage, leaderboards, lobbies, and matchmaking, for less than the alternatives. The same architecture also gives them a clean exit when the title winds down, with no rewrite at end-of-life. Players get games that keep working. Nobody has to wait for legislation.
We want deployment to feel as simple as WordPress’s. The control plane is a single Go binary. Drop it on any Linux box, point it at a Postgres URL (managed service or self-hosted), and it runs. Go makes that kind of distribution easy: no Java or Python runtime to install, no system dependencies to track. There’s also no Redis, Kafka, service mesh, or separate cache cluster.
If you’d rather have everything in one command, the published docker-compose.yml brings up ggscale-server, Postgres, Prometheus, and the dev observability stack together. Prometheus and the observability tools are optional; the binary doesn’t depend on them. Postgres can run alongside the binary inside compose, or the binary can point at a managed service (Neon, Supabase, RDS) via DATABASE_URL. The compose file CI runs on every pull request is the same one self-hosters use, so the production setup doesn’t diverge from what the project tests.
Portability is the source, the binary, and your pg_dump. Two compose files cover the convenience layer: a default for the control plane, and a second that adds k3s + Agones for studios that want to run their own dedicated game-server fleet. Studios that prefer not to operate any of it can use ggscale Cloud (the hosted version we run), the same codebase as a managed tier with usage-based pricing.
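"Point it at a Postgres URL" implies the binary should fail fast with a clear message when DATABASE_URL is missing or malformed. A minimal sketch of that startup check; the function and variable names here are illustrative, not ggscale's actual code:

```go
package main

import (
	"fmt"
	"net/url"
	"os"
)

// validateDatabaseURL checks that DATABASE_URL looks like a usable
// Postgres URL before the server tries to connect: a postgres:// or
// postgresql:// scheme and a non-empty host. Illustrative only.
func validateDatabaseURL(raw string) (host string, err error) {
	u, err := url.Parse(raw)
	if err != nil {
		return "", fmt.Errorf("DATABASE_URL is not a valid URL: %w", err)
	}
	if u.Scheme != "postgres" && u.Scheme != "postgresql" {
		return "", fmt.Errorf("DATABASE_URL must use a postgres:// scheme, got %q", u.Scheme)
	}
	if u.Host == "" {
		return "", fmt.Errorf("DATABASE_URL has no host")
	}
	return u.Host, nil
}

func main() {
	host, err := validateDatabaseURL(os.Getenv("DATABASE_URL"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("connecting to Postgres at", host)
}
```

The same check works whether the URL points at localhost, a compose sibling, or a managed service.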
Three Audiences, One Platform
ggscale serves three groups, and the same backend code handles all of them.
Studios launching new games. Indie studios and smaller publishers get a cheap, no-lock-in backend. They self-host on a small VPS or use ggscale Cloud as a managed service. The open-source path also gives them a clean answer for what happens when the title eventually winds down.
Publishers ending a live-service title. When a studio decides to shut a multiplayer game down, ggscale offers a paid service to port it onto the open-source stack. The community can then run the game themselves, the publisher hands off ownership cleanly, and nobody has to write an offline-mode patch from scratch.
Player communities keeping games alive. When a publisher sunsets a multiplayer title, an active community can pick up the same backend the studio used and keep playing. For current games that ship on ggscale, communities can stand up their own servers: with mods, in their own region, for tournaments, or just for a group of friends. None of it depends on the publisher’s central infrastructure.
A studio that ships on ggscale at launch can become a sunset customer years later, and the same infrastructure keeps hosting the game for its community after that.
How Preservation Actually Works
Open-source licensing. ggscale ships under Apache 2.0. A studio that builds on it owns a working backend they can keep running, fork, or hand to a community without renegotiating any license. Nothing critical to operating the game lives behind a proprietary contract.
SDKs that simplify the move to open source. The Go and C# SDKs follow standard patterns and sit behind a small set of well-defined interfaces. A studio swapping in ggscale to replace a proprietary backend integrates against that surface instead of taking on an open-ended rewrite.
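To make "a small set of well-defined interfaces" concrete, here is the kind of surface a game would integrate against, with an in-memory stand-in where a ggscale-backed implementation would go. The names are illustrative, not the actual SDK API:

```go
package main

import "fmt"

// Entry is one leaderboard row.
type Entry struct {
	PlayerID string
	Score    int64
}

// Leaderboard is the narrow interface a game codes against. Swapping a
// proprietary backend for ggscale means writing one adapter that satisfies
// interfaces like this, not rewriting game code.
type Leaderboard interface {
	Submit(playerID string, score int64) error
	Top(n int) ([]Entry, error)
}

// memoryLeaderboard is a stand-in backend used here for illustration.
type memoryLeaderboard struct{ entries []Entry }

func (m *memoryLeaderboard) Submit(playerID string, score int64) error {
	m.entries = append(m.entries, Entry{playerID, score})
	return nil
}

func (m *memoryLeaderboard) Top(n int) ([]Entry, error) {
	// Copy, then sort by score descending (simple selection sort).
	sorted := append([]Entry(nil), m.entries...)
	for i := range sorted {
		for j := i + 1; j < len(sorted); j++ {
			if sorted[j].Score > sorted[i].Score {
				sorted[i], sorted[j] = sorted[j], sorted[i]
			}
		}
	}
	if n > len(sorted) {
		n = len(sorted)
	}
	return sorted[:n], nil
}

func main() {
	var lb Leaderboard = &memoryLeaderboard{}
	lb.Submit("ana", 120)
	lb.Submit("bo", 300)
	top, _ := lb.Top(1)
	fmt.Println(top[0].PlayerID) // bo
}
```

Because game code only sees the interface, the same build can run against the studio's backend today and a community-hosted ggscale instance later.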
Self-hosted or managed cloud. Studios that want to operate their own backend can run the Go binary on a Linux box. Studios that don’t want to operate anything can use ggscale Cloud. Either path eliminates the bespoke-backend hosting burden; nobody has to maintain a proprietary stack forever.
Architecture
The simplest self-hosted setup is one Linux box. The Go binary (ggscale-server) is the control plane: auth, matchmaking, lobbies, leaderboards, storage, and friends. It needs a Postgres database, which can sit on the same box or live anywhere reachable by URL.
```mermaid
graph TB
    P["Players"]
    subgraph host["One Linux box"]
        CP["ggscale-server<br/>(Go binary)"]
    end
    DB[("Postgres<br/>local or managed")]
    P --> CP
    CP --> DB
```
Two optional add-ons sit alongside the core, picked per game.
k3s + Agones for dedicated game servers. Studios that want authoritative game servers (the kind needed for ranked, esports, or anything where cheating cannot be tolerated) can add a k3s + Agones cluster. The control plane allocates pods, the game runs the simulation server-side, and rolling deploys, graceful drain, and CCU metering come included. A separate compose file brings this up in one command.
```mermaid
graph TB
    P["Players"]
    subgraph host["Control plane host"]
        CP["ggscale-server"]
        DB[("Postgres")]
    end
    subgraph k8s["k3s cluster"]
        AG["Agones controller"]
        GS["Game server pods"]
        AG --> GS
    end
    P --> CP
    P -->|"game traffic"| GS
    CP --> DB
    CP -->|"allocate"| AG
```
pion/turn for peer-to-peer relay. For co-op, casual, and cost-sensitive titles, the control plane can issue short-lived TURN credentials for peer connections. Players connect directly when their NAT allows it, with a TURN relay fallback otherwise. Bandwidth-only sessions are 10x to 100x cheaper than running a dedicated game server per match.
The studio picks per session which transport fits the game; same matchmaker, same identity, same SDK. Most games end up using both: ranked modes on the authoritative fleet, casual modes through the relay.
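The per-session choice above reduces to a small policy: map each game mode to a transport at matchmaking time. A sketch of that mapping; the mode names and types are illustrative:

```go
package main

import "fmt"

// Transport is chosen per session: same matchmaker and identity,
// different wire path for game traffic.
type Transport int

const (
	DedicatedServer Transport = iota // authoritative pod from the Agones fleet
	PeerRelay                        // P2P with TURN fallback
)

// transportFor sketches a studio-configured policy: cheat-sensitive modes
// go to the authoritative fleet, everything else rides the cheaper relay.
func transportFor(mode string) Transport {
	switch mode {
	case "ranked", "tournament":
		return DedicatedServer
	default:
		return PeerRelay
	}
}

func main() {
	fmt.Println(transportFor("ranked") == DedicatedServer)
	fmt.Println(transportFor("casual-coop") == PeerRelay)
}
```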