Boosteroid SVP: Since 2019, we have been solving the problems the AI industry only faces today

Boosteroid is a global leader in cloud gaming, serving over 8 million users across Europe and the Americas. By removing the need for expensive hardware, Boosteroid enables high-end gaming on almost any screen—from desktops, smartphones and handhelds to TVs and even vehicles.

While it is a gaming company at heart, Boosteroid is fundamentally an infrastructure company. It operates as a huge intercontinental cloud-based gaming rig that offloads heavy computation from millions of devices to its powerful GPU servers. At the core of this sophisticated operation are the people who build and maintain it.

Today, we are interviewing Antonina Batova, Senior Vice President of Infrastructure, who manages Boosteroid’s entire global footprint.

1) To start, could you outline what exactly goes into your role as SVP of Infrastructure?

My role is basically to manage the physical reality of our cloud and determine our expansion strategy. It starts way before any hardware is purchased. I have to figure out if a specific market and data center actually make sense for us, not just in terms of latency and user demand, but whether the facility can meet our power and cooling requirements and provide qualified on-site personnel.

Then come negotiations and contracts with DCs and ISPs. A big part of my job is taking their technical specs and commercial offers and making sure they align with our business model. I have to say ‘yes’ or ‘no’ to a provider based on whether their infrastructure can support our deployment.

On the hardware side, I oversee the procurement of GPU clusters, the logistics of getting gear to the site, and the entire lifecycle afterward, including RMAs. I also make sure our internal teams – network and server engineers – are on the same page when we are rolling out a new site. Currently, I’m running 28 of these locations worldwide, so I’m dealing with everything from long-term contract negotiations to the day-to-day technical health of the network.

2) When evaluating offers from different data centers, why is it so difficult to determine if they fit the Boosteroid business model? What variables are included?

No two data centers structure their bills the same way. You have to look at the ‘on-site realities’ of each facility. For example, some might have a higher PUE (Power Usage Effectiveness), so they bundle those extra cooling costs inside the kWh price. Others give you an all-in rate – space, power allocation, and cooling – but then you have to dig into how they handle ‘committed’ vs. ‘overcommitted’ usage.

One of the biggest deal-breakers for me is power allocation per rack. If a facility can’t deliver the density we need, the floor space is irrelevant. This is where rightsizing becomes critical. I have to look at the actual projected consumption of our hardware rather than just accepting a standard power buffer. Data centers usually quote costs based on the maximum theoretical power draw of the hardware just to stay on the safe side. I look for providers flexible enough to match our specific density requirements so we aren’t overpaying for capacity we won’t use.

To make a smart decision, I have to normalize all these variables to find a common denominator. I essentially strip away the marketing and find the true cost of running our specific hardware in that specific room.
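A back-of-the-envelope sketch of that normalization might look like the following. All figures here (PUE values, prices, rack counts) are hypothetical, and `effective_monthly_cost` is an illustrative helper, not Boosteroid's actual model:

```python
# Hypothetical normalization of two differently structured DC offers.
# All prices, PUE values and rack counts below are illustrative only.

def effective_monthly_cost(racks, kw_per_rack, price_per_kwh, rack_rent,
                           pue=1.0, cooling_bundled=True, hours=730):
    """Reduce an offer to one number: total monthly cost for a deployment.

    If cooling is bundled, the quoted kWh price already covers PUE
    overhead; otherwise the IT load is multiplied by PUE to get the
    billed energy.
    """
    it_kwh = racks * kw_per_rack * hours
    billed_kwh = it_kwh if cooling_bundled else it_kwh * pue
    return billed_kwh * price_per_kwh + racks * rack_rent

# Offer A: all-in rate, cooling baked into a higher kWh price
offer_a = effective_monthly_cost(racks=10, kw_per_rack=20,
                                 price_per_kwh=0.18, rack_rent=800)

# Offer B: cheaper kWh, but cooling billed via a PUE multiplier
offer_b = effective_monthly_cost(racks=10, kw_per_rack=20,
                                 price_per_kwh=0.12, rack_rent=1000,
                                 pue=1.5, cooling_bundled=False)
```

On these made-up numbers, the 'cheaper' kWh rate in Offer B actually comes out more expensive once the PUE multiplier and rack rent are factored in, which is exactly why offers have to be normalized before comparison.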

Connectivity is another layer of complexity. Sometimes it makes sense to use the data center’s own blend of ISPs, but in many cases, I’ll decide to leverage our existing global relationships with Tier-1 carriers like Cogent or negotiate directly with a local provider. It’s a constant battle for cost, latency, and reliability.

3) We hear a lot about the energy demands of modern compute. How common is it to find data centers that can support the high power density – like 20kW per rack and higher – that GPU setups require?

Until recently, our deployment was considered very high density because our servers can hit 1.8 kW at peak. Back in 2019-2022, the gold standard for a solid Tier 3 data center in Europe or the US was 15-16 kW per rack, and in places like Brazil, you were looking at only 8-10 kW or even less. In fact, since 2019, we have been solving the problems the AI industry only faces today.

It was a real stroke of luck to find those ‘heavy lifters’ that could deliver 22-23 kW per rack without requiring major modifications. Often, if they had to modify their infrastructure for you, those costs would just be passed down as higher NRCs (non-recurring charges). But the real issue wasn’t just allocating the 20 kW of power per rack. It was the ability to air-cool that heat and keep cold-aisle temperatures stable and in line with ASHRAE standards.
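The rack-fit arithmetic behind those numbers is straightforward. A quick sketch using the peak draws and budgets mentioned above; the 90% headroom factor is an assumed safety margin for illustration, not a quoted Boosteroid figure:

```python
# Illustrative rack-fit arithmetic; the 0.9 headroom factor is an
# assumed safety margin, not a quoted Boosteroid figure.

def servers_per_rack(rack_w, server_peak_w, headroom=0.9):
    """How many servers fit a rack's power budget while keeping
    peak draw safely below the allocated feed."""
    return int(rack_w * headroom // server_peak_w)

standard = servers_per_rack(16_000, 1_800)      # 2019-2022 Tier 3 norm
heavy_lifter = servers_per_rack(23_000, 1_800)  # rare 23 kW/rack facility
```

At 1.8 kW peak per server, moving from a 16 kW to a 23 kW rack budget fits roughly a third more machines into the same floor space, which is why those ‘heavy lifters’ mattered so much.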

Today, it’s much more common to see 20 kW, 30 kW or even 70 kW being offered, but there is a shift in how that’s done. Most data centers are modernizing for liquid-cooled AI racks with built-in in-row or in-unit CDUs, rather than for traditional air-cooled servers. If you look at where the industry is heading, AI racks can go up to 140-150 kW now. That is becoming the new standard if a data center wants to be competitive and sell their premises to serious AI projects.

4) We are living through the boom in AI and high-performance computing. From your experience on the ground, are standard data centers actually ready to handle the power requirements of modern GPU clusters?

In practice, there is no single definition of a ‘standard’ data center anymore. In my work across multiple regions, I consistently see a split market. On one hand, you have independent, non-corporate providers – especially in places like Sweden and Finland – who were ahead of the curve, offering liquid cooling long before this AI-driven demand exploded.

On the other hand, you have massive networks like Digital Realty that have the capital to upgrade their infrastructure, and they are doing it right now. The real constraint today is not demand, but deployable capacity. Even when negotiating cloud gaming infrastructure, it becomes clear how difficult it is to find a site that can realistically support a 10–20 MW deployment with the right cooling. I’m not talking about a ‘powered shell’; I’m talking about infrastructure that can actually handle that load without failing.

This is exactly why we see such a push for modular data centers. Every major vendor, from Huawei to Supermicro, is now offering containerized solutions to address this gap. From my perspective in the industry, the market is in a state of catch-up. Companies can no longer wait years for traditional builds, so they are forced to look at these rapid-deployment, high-spec modular options to keep their projects viable.

5) It is easy to focus just on the GPUs, but successful deployment requires much more. What other factors do you have to consider today?

The hardware used to be the easiest part, but now it’s a constant struggle because of the AI boom. You’re competing for everything — lead times, components and power availability. Even getting a final price and a delivery date from vendors takes forever now because everything is in short supply.

The biggest risk is the timing. If it takes six months just to get your GPUs, but your data center lease is already signed, you’re stuck paying for empty space. When you’re managing budgets at this level, you can’t afford that kind of waste. My job is to time everything so the hardware and the facility are ready at the same moment. In this market, a successful rollout is really about making sure you aren’t burning money while waiting for gear to arrive.
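The cost of mistiming is easy to quantify. A minimal sketch, with a hypothetical lead time and lease rate:

```python
# Hypothetical idle-lease cost: the six-month lead time and the monthly
# rate are made-up numbers for illustration, not actual contract terms.

def idle_lease_cost(lease_start_month, gear_ready_month, monthly_lease):
    """Money burned on empty data center space while waiting for hardware."""
    idle_months = max(0, gear_ready_month - lease_start_month)
    return idle_months * monthly_lease

# Lease signed at month 0, GPUs arrive after a six-month lead time
burned = idle_lease_cost(0, 6, 50_000)
```

Synchronizing the lease start with the hardware delivery date drives that figure to zero, which is the whole point of timing the rollout.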

6) Boosteroid operates in 28 data center locations worldwide. Do you find that infrastructure standards are consistent globally, or do you run into specific regional difficulties when trying to deploy high-density setups?

On paper, the standards are at the same level all over the globe. Most enterprise-grade facilities aim for Tier III compliance, with some moving toward Tier IV, and either hold Uptime Institute certification or claim compliance with those standards. As I already mentioned, it’s not impossible, but it is pretty difficult to find what you need in emerging markets like Brazil, as opposed to well-established locations in the US or Europe.

Speaking of Brazil, it was in this market that we first faced a facility failing these standards. It was a very unexpected turn, and one of several cases where the provider simply couldn’t maintain the environment we needed; in one instance, extreme weather led to power outages and, as a result, cooling equipment failure. This was our first real case of a provider failing our SLA so fundamentally. In such cases, a ‘Tier 3’ label doesn’t always guarantee that the infrastructure will handle the actual workload and regional climate risks.

7) There is a rush to deploy hardware right now, but technology moves fast. What is the biggest strategic mistake you see companies making regarding the mid-term lifecycle of their equipment?

I actually don’t see top-tier companies making a mistake here. If anything, the market is moving in the right direction. We are seeing a shift toward repurposing older-generation GPUs for less demanding tasks like inference, research, or media processing. If you check the offerings of ‘neoclouds’ like Nebius or CoreWeave, their pricing models are already built around different GPU grades. This is how they manage a shorter AI GPU lifecycle of 2–3 years compared to the traditional 5–10 years in other sectors.

We experienced this firsthand in cloud gaming, where hardware requirements evolve so rapidly that you have to update infrastructure constantly to keep up with AAA titles. However, even with recycling, reselling, or downgrading, the industry will still face an enormous amount of servers and supporting equipment needing decommissioning at scale.

Another smart move I see is AI companies opting for ready-made solutions from neoclouds instead of committing to long-term data center contracts themselves. It allows them to scale without the risk of being stuck with obsolete infrastructure as the hardware evolves. These neoclouds will continue to expand, but the real bottleneck has now shifted from hardware availability to power supply. Finding even 10 MW of available capacity today is an incredibly difficult task, and that is the main strategic challenge for the next few years.
