Digital sovereignty is no longer just a policy topic. It’s starting to show up as a real infrastructure concern in places where most developers never had to think about it before.
For years, deploying modern web applications has been almost frictionless. Platforms like Netlify, Vercel, AWS, and Cloudflare made it easy to ship full-stack apps quickly and cheaply, with strong defaults and excellent performance. That convenience shaped our habits. It wasn’t a deliberate tradeoff where everyone consciously chose “speed over control.” Most of us simply didn’t see it as a risk.
When Infrastructure Stops Being Neutral
Then a political moment changed that for me. In 2025, the United States imposed sanctions on the International Criminal Court, and Microsoft, bound by US law, complied and shut down the email account of the ICC's chief prosecutor.
That incident made one thing very concrete: infrastructure is not neutral. If you rely on systems that ultimately fall under another country's political and legal reality, you accept that your ability to operate can be affected from the outside.
That’s what got me started. But what kept me going was that it turned into a technical problem I genuinely wanted to understand. Beyond the political concern, I got pulled into the engineering side of it: how do these deployment models actually work, and what happens if you stop defaulting to the big US platforms?
Seeking Modern Developer Experience on EU Soil
I wanted to find out what it would take to deploy a modern full-stack Astro application on European infrastructure, in a way that feels comparable to what we’re used to on American platforms. Not comparable in the sense of matching Cloudflare’s edge architecture one-to-one, because that’s unrealistic.
Instead, I defined "comparable" in terms of three core requirements for a modern workflow:
- Automation: I need to be able to build and deploy via scripts and CI/CD, not by clicking in a dashboard.
- Performance: The setup must support a hybrid app, meaning it serves static assets fast while running server logic dynamically.
- Trust: The workflow must be reliable enough to run in production without constant manual intervention.
Astro was my test case simply because I like using it and it matches the kind of applications I want to build. Astro can produce static output, but it also has server routes and hybrid rendering. That’s where infrastructure decisions start to matter. You’re no longer deploying “just a static website.” You’re deploying static files and server logic together, and you need a platform that supports both sides well.
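To make that concrete, here is what the hybrid split looks like on the Astro side. This is just a minimal sketch, and the route path and response are placeholders:

```ts
// src/pages/api/status.ts: a route that has to run at request time.
// Routes flagged with `prerender = false` end up in the server bundle;
// prerendered pages are emitted as plain static files.
import type { APIRoute } from 'astro';

export const prerender = false;

export const GET: APIRoute = async () => {
  return new Response(
    JSON.stringify({ ok: true, time: new Date().toISOString() }),
    { headers: { 'Content-Type': 'application/json' } },
  );
};
```

The build then produces two halves, a set of static assets and a server bundle, and it is the adapter that decides where each half actually runs.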
The question I tried to answer was straightforward: can I deploy an Astro app on European infrastructure while hitting those three criteria?
Why Containers Are the Wrong Tool for the Job
My first attempt was Bunny. Bunny often comes up when you search for European alternatives, and for good reason. It’s performance-focused, and it offers building blocks that sound like they should fit modern web workloads. I started with Bunny Magic Containers because it was the simplest path to get something running quickly.
I already knew I could run a Node.js application inside a Docker container, so the runtime itself wasn’t the unknown. The real goal of this step was the automation requirement: could I write deployment scripts that deploy an Astro app reliably, and support feature-branch deployments and cleanup through CI?
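For context, what ran inside the container was just the standard Node build of the app. A minimal sketch, assuming the official @astrojs/node adapter:

```ts
// astro.config.mjs: build the app as a standalone Node server,
// which is what the container image wraps and runs.
import { defineConfig } from 'astro/config';
import node from '@astrojs/node';

export default defineConfig({
  output: 'server',                      // render non-prerendered routes at request time
  adapter: node({ mode: 'standalone' }), // emits dist/server/entry.mjs with its own HTTP server
});
```

The image then simply starts `node ./dist/server/entry.mjs`, so the deployment scripts are mostly about building that image, pushing it, and pointing the platform at the new tag.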
I managed to get it working, but it wasn’t the end goal. Containers were never the best solution here. Putting an entire website inside a container is like using a truck to deliver a single sandwich. It works, but it’s heavy, and it comes with overhead you don’t want long term. Ideally, static files should be delivered fast and cheaply, while only the server logic runs dynamically. So Magic Containers were a useful stepping stone, but they confirmed what I already expected: to get closer to modern deployment models, I would need something function-based.
Bunny Edge Scripting looked closer to what I actually wanted, but I struggled to make progress there. Not because I proved it was impossible, but because I couldn’t get enough clarity to confidently build a deployment flow around it. The documentation wasn’t enough for me to move quickly, and while experimenting I kept running into errors I couldn’t easily reason about. At some point it stopped being productive, and I decided to explore a different direction.
Shifting to a Serverless Architecture on Scaleway
That’s when I started looking into Scaleway. Scaleway stood out because it offered a broader set of services and felt more like a complete platform. It looked like the kind of place where the pieces I needed could realistically exist: serverless functions, object storage, and enough networking and automation options to build something closer to the setup I had in mind. It also felt more direct and less sales-driven. The docs were clear about how things work and how you’re expected to automate them.
That automation part mattered a lot. Dashboards are useful, but they don’t scale as a workflow. If you want something you can actually use in real projects, you need infrastructure that can be driven through APIs, scripted cleanly, and deployed through CI/CD.
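To give an idea of what that looks like in practice, here is the shape of the function-provisioning step in my deploy script. Treat it as a sketch: the endpoint paths and payload fields follow Scaleway's Functions API as I understand it and should be checked against the current docs, and `SCW_SECRET_KEY`, `SCW_REGION`, `SCW_NAMESPACE_ID`, and `BRANCH_NAME` are environment variables from my own setup.

```ts
// ensure-function.ts: sketch of the provisioning step, run from CI.
// Assumption: Scaleway's Serverless Functions API under
// functions/v1beta1/regions/{region}, authenticated via an X-Auth-Token header.
// Verify paths, fields, and available runtimes against the current docs.
const region = process.env.SCW_REGION ?? 'fr-par';
const API = `https://api.scaleway.com/functions/v1beta1/regions/${region}`;
const headers = {
  'X-Auth-Token': process.env.SCW_SECRET_KEY ?? '',
  'Content-Type': 'application/json',
};

// Look up a function by name inside the project's namespace.
async function findFunction(name: string): Promise<{ id: string } | null> {
  const url = `${API}/functions?namespace_id=${process.env.SCW_NAMESPACE_ID}&name=${encodeURIComponent(name)}`;
  const res = await fetch(url, { headers });
  if (!res.ok) throw new Error(`listing functions failed: ${res.status}`);
  const body = (await res.json()) as { functions?: { id: string }[] };
  return body.functions?.[0] ?? null;
}

// Create the function on the first deploy of a branch, reuse it afterwards.
// Uploading the zipped build and triggering the actual deploy are further
// calls with the same pattern, omitted here.
async function ensureFunction(name: string): Promise<string> {
  const existing = await findFunction(name);
  if (existing) return existing.id;

  const res = await fetch(`${API}/functions`, {
    method: 'POST',
    headers,
    body: JSON.stringify({
      namespace_id: process.env.SCW_NAMESPACE_ID,
      name,
      runtime: 'node22', // use whichever Node runtime the platform currently offers
      privacy: 'public',
    }),
  });
  if (!res.ok) throw new Error(`creating function failed: ${res.status}`);
  return ((await res.json()) as { id: string }).id;
}

// One function per branch gives cheap, disposable preview environments.
ensureFunction(`astro-${process.env.BRANCH_NAME ?? 'main'}`)
  .then((id) => console.log(`function ready: ${id}`))
  .catch((err) => { console.error(err); process.exit(1); });
```

The important part is not the exact payload but the fact that creating, listing, and deleting functions are all plain HTTP calls, which is what makes feature-branch environments and automated cleanup possible in the first place.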
Reverse Engineering a Custom Adapter
To run Astro server routes in a serverless function, you need a handler that translates incoming requests into whatever shape Astro expects at runtime. As a proof of concept, I first used an AI agent to generate something that worked inside a Scaleway serverless function. It did run, but the implementation was messy: it contained patterns that didn't feel intentional and logic that didn't seem necessary. It was a perfect example of something important: AI can produce code that works but is still fundamentally wrong in how it's structured.
So I reverse engineered it. I spent time reading Astro docs and Scaleway docs, and I discussed ideas with Scaleway support to validate how certain parts should be done. Then I rebuilt the adapter into something smaller and easier to understand.
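The rebuilt adapter is essentially that translation layer and very little else. Here is a minimal sketch of the idea. The Astro side (`App` from `astro/app`, `createExports` receiving the manifest) is the documented adapter API; the incoming event shape is my assumption about the platform's function format and needs to match what Scaleway actually passes in.

```ts
// entry.ts: the serverless handler the adapter emits at build time.
// Astro's build passes the SSR manifest to this module; `App` turns
// standard Request objects into rendered Response objects.
import { App } from 'astro/app';
import type { SSRManifest } from 'astro';

// Assumed incoming event shape (API Gateway style); verify against the
// platform's actual function event format.
interface HttpEvent {
  httpMethod: string;
  path: string;
  headers: Record<string, string>;
  body?: string;
}

export function createExports(manifest: SSRManifest) {
  const app = new App(manifest);

  async function handle(event: HttpEvent) {
    // Translate the platform event into a standard Request...
    const url = new URL(event.path, `https://${event.headers.host ?? 'localhost'}`);
    const request = new Request(url, {
      method: event.httpMethod,
      headers: event.headers,
      body: ['GET', 'HEAD'].includes(event.httpMethod) ? undefined : event.body,
    });

    // ...let Astro render it...
    const response = await app.render(request);

    // ...and translate the Response back into what the platform expects.
    return {
      statusCode: response.status,
      headers: Object.fromEntries(response.headers.entries()),
      body: await response.text(),
    };
  }

  return { handle };
}
```

Laid out like this, the adapter's only real job is translating in and out; Astro's `App` does the actual rendering, which is why the rebuilt version could stay so much smaller than the generated one.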
At this point, the core idea is working. I can deploy the server logic as a serverless function, automate deployments through scripts, and spin up separate environments per feature branch. Looking back at my criteria, though, performance is where the next step lies. Static assets are still bundled inside the function package for now. That works, but it bloats the function, and a bloated function means slower cold starts. If you want lightweight functions, quick startup times, and a clean scaling model, static assets need to live somewhere else.
The Challenge of Bridging the Integration Gap
The obvious next step is separating server and static environments properly: functions for server logic, object storage for static assets, and a clean way to connect the two.
And that’s where this gets interesting again, because the remaining challenge isn’t “can I deploy something.” It’s the network layer and the integration story. Cloudflare has an advantage here. When I look at their approach, I see an assets fetch mechanism wired into the worker environment. My assumption is that this is heavily optimized behind the scenes. I don’t know all the details, but it clearly helps make the connection between “server logic” and “static assets” feel native and fast. On European platforms, you can build the same idea in principle, but the integration isn’t always as obvious. Even when you have the right building blocks, you still need to connect them through routing, caching behavior, and clean request handling. The goal isn’t to perfectly replicate Cloudflare. The goal is to see how far you can get with a similar architecture, and what the real gaps are.
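To make that gap concrete, the glue on the function side could look roughly like this. It is a sketch under assumptions: the static half of the build has been synced to an object storage bucket that is publicly readable at `STATIC_BASE_URL` (a placeholder), and in a real setup the asset routing and caching would ideally happen at the CDN layer instead of inside the function.

```ts
// Sketch of connecting server logic (the function) with static assets
// (an object storage bucket), reusing the Astro App API from the adapter above.
import { App } from 'astro/app';
import type { SSRManifest } from 'astro';

// Placeholder: the bucket's public endpoint where dist/client was uploaded.
const STATIC_BASE_URL = process.env.STATIC_BASE_URL ?? '';

export function createHandler(manifest: SSRManifest) {
  const app = new App(manifest);

  return async function handle(request: Request): Promise<Response> {
    // Requests that match an Astro route are rendered by the server bundle.
    if (app.match(request)) {
      return app.render(request);
    }

    // Everything else is treated as a static asset and fetched from the
    // bucket. Doing this in front of the function, with proper cache rules,
    // is exactly the integration work that still needs to be figured out.
    const { pathname } = new URL(request.url);
    return fetch(`${STATIC_BASE_URL}${pathname}`);
  };
}
```

Cloudflare hides this behind an assets binding in the worker environment; here the same connection has to be assembled explicitly from routing rules, cache headers, and the storage service.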
Digital Sovereignty is a Gradient
This is also where I think the “Europe is behind” narrative is too simplistic. I don’t believe Europe lacks the knowledge or talent to build systems like this. The bigger gap is scale and momentum. There are fewer companies building global-scale infrastructure products, fewer default “it just works” paths, and less shared tooling around modern deployment workflows. Not because people here can’t build it, but because building platforms at that scale takes years, massive investment, and a lot of iteration.
This is where the story pauses for now. Not because the experiment is finished, but because the remaining work is integration work: static separation, storage, routing, and figuring out what a clean model looks like for real-world deployments.
If there’s one conclusion I’m confident about so far, it’s this: digital sovereignty is not a switch you flip. It’s a gradient. And we can move toward it.
But that movement won’t happen automatically. It needs demand. It needs developers to create pressure, to build adapters, to develop and share deployment strategies for European platforms, and ideally to involve those platforms in the process if they’re open to it. American platforms didn’t become effortless by accident. Their ecosystems matured because people built the tooling, refined the workflows, and treated deployment as a first-class developer experience problem.
Wrapping up
If we want European infrastructure to become a serious default option for modern web apps, we need to start building that ecosystem ourselves. Even if it’s messy at first. Even if the first versions are only proof of concept. That’s how you create the foundation for something that eventually becomes reliable enough for everyone to use.
