Cloud Computing is Fractional Reserve Banking
My employer operates a Platform as a Service, a PaaS. We host "customer workloads", which is to say we run their web apps and anything that makes it easier for the people building those web apps to deliver them. I imagine our founder (my boss) started building this platform the day after the concept of cgroups was announced on some Linux mailing list, that is to say - we have always been a containerized platform.
My work lately has been allocating costs (from our cloud bills on the left side) out to the tens or hundreds of thousands of containers that make use of the resources we buy (the right side), so I've been deep, deep, deep in the weeds of what resources are being allocated where. Part of the point of containerization (and virtualization in general) is the ability to allocate more resources to $things than you actually have on hand. An example:
We purchase a 4CPU VM from a provider, a "host". We then place containerized customer $things on that host and allocate resources to those containers. It's extraordinarily rare that all containers need to use CPU time simultaneously, and so via the magic of containers we're able to allocate a multiple of that 4CPU to the containers on that host, say 40CPU - a 10x "overcommit". We have systems in place in case a host becomes overloaded, but in general this works very, very well for web workloads. They're spiky both in the long term and the short term - big traffic waves come and go, and each individual web request often only requires a relative handful of cycles to fulfill, leaving the CPU otherwise idle. We make use of that otherwise idle time in another container.
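To make the arithmetic concrete, here's a minimal Python sketch - the host size and per-container limits are made up for illustration, not pulled from our fleet - that sums the CPU limits handed out on one host and reports the resulting overcommit ratio.

```python
# Minimal sketch: CPU overcommit on a single host.
# Host size and per-container limits are illustrative, not real fleet data.

HOST_CPUS = 4  # the 4-CPU VM we bought from the provider

# Per-container CPU limits, the way you'd express them via cgroup cpu.max
# or an orchestrator's resource limits: 80 containers, 40 "CPUs" handed out.
container_cpu_limits = [1.0, 0.5, 0.25, 0.25] * 20

allocated = sum(container_cpu_limits)
overcommit = allocated / HOST_CPUS

print(f"host capacity: {HOST_CPUS} CPUs")
print(f"allocated:     {allocated:g} CPUs across {len(container_cpu_limits)} containers")
print(f"overcommit:    {overcommit:.0f}x")
```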
This might surprise some folks who are not super familiar with the cloud computing industry, but it occurred to me yesterday that there's an analogy in the world of finance that's been in use for centuries and underpins essentially all of modern life - fractional reserve banking.
In fractional reserve banking, banks are only required to hold a fraction of their customers' deposits as reserves, while the rest can be lent out to other customers. This allows them to create more credit than the cash reserves they actually have on hand. Similarly, in cloud computing, virtualization technology allows providers - us, and AWS before us - to sell more compute resources than they physically have on hand.
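To spell the parallel out with back-of-the-envelope numbers (both sides are illustrative, and the textbook deposit multiplier ignores plenty of real-world friction): the bank's leverage comes from its reserve ratio, the host's from its overcommit ratio.

```python
# Illustrative arithmetic only; both ratios are made up for the analogy.

# Fractional reserve banking: a 10% reserve requirement means each dollar of
# reserves can, in the textbook limit, support 1 / 0.10 = 10 dollars of deposits.
reserve_ratio = 0.10
deposit_multiplier = 1 / reserve_ratio

# Cloud overcommit: a 4-CPU host carrying 40 CPUs of container limits is
# effectively holding a 10% "reserve" of physical compute against what it sold.
host_cpus = 4
allocated_cpus = 40
overcommit_ratio = allocated_cpus / host_cpus
compute_reserve_ratio = host_cpus / allocated_cpus

print(f"bank:  reserve ratio {reserve_ratio:.0%} -> deposit multiplier {deposit_multiplier:.0f}x")
print(f"cloud: reserve ratio {compute_reserve_ratio:.0%} -> overcommit {overcommit_ratio:.0f}x")
```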
How virtualization actually works is well covered on the internet, but to torture a different analogy - picture an apartment building where the landlord rents out your apartment to someone else when you go to work each day. They do this throughout the building, and have a very well-oiled understanding of how much "empty apartment" capacity is available at any moment to rent out to someone. That's basically how virtualization works, but the landlord is the OS kernel.
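For the curious, here's roughly what the landlord's ledger looks like on a Linux host using cgroup v2 - cpu.max and cpu.stat are the kernel's standard interface files, though the container cgroup names below are hypothetical stand-ins.

```python
# Rough illustration of the kernel-as-landlord on a cgroup v2 host.
# cpu.max and cpu.stat are standard cgroup v2 files; the cgroup names
# below are hypothetical stand-ins for real container cgroups.

from pathlib import Path

CGROUP_ROOT = Path("/sys/fs/cgroup")

def describe(cgroup_name: str) -> None:
    cg = CGROUP_ROOT / cgroup_name
    # cpu.max holds "<quota> <period>" in microseconds, or "max" for no limit.
    quota, period = (cg / "cpu.max").read_text().split()
    # cpu.stat holds "key value" lines; usage_usec is CPU time actually consumed.
    stats = dict(line.split() for line in (cg / "cpu.stat").read_text().splitlines())
    allowed = "unlimited" if quota == "max" else f"{int(quota) / int(period):.2f} CPUs"
    used_seconds = int(stats["usage_usec"]) / 1e6
    print(f"{cgroup_name}: allowed {allowed}, used {used_seconds:.1f}s of CPU time so far")

# Hypothetical container cgroup names:
for name in ["app-1234.scope", "app-5678.scope"]:
    describe(name)
```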
An obvious difference between the two is that fractional reserve banking is a centuries-old practice that has been codified into law and regulation, while cloud computing is a relatively new, opaque, wildly profitable market that is still evolving. However, both involve the creation of resources that are backed by the creditworthiness or trustworthiness of the provider, rather than physical assets.
In both cases, the goal is to create more value than would be possible through physical means alone. By creating virtual resources that can be sold or lent out, providers can generate more revenue and profit than they could through traditional means. However, this also creates the potential for risk and instability, as the value or reliability of these resources can fluctuate based on a variety of factors. In the cloud business, capacity planning is the very serious business of ensuring that if there's a "run on the bank", our customers' apps don't fall over as a result.
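As a sketch of what guarding against that bank run might look like (the host names, numbers, and threshold here are all invented), the important part is watching actual usage against physical capacity rather than against what was sold:

```python
# Hypothetical capacity-planning check; thresholds and fleet data are invented.
# The "bank run" risk isn't how much CPU we've sold on a host, it's how much
# is actually in use at once - so we compare real usage to physical capacity
# and flag hosts that need workloads rebalanced away.

from dataclasses import dataclass

@dataclass
class Host:
    name: str
    physical_cpus: float
    allocated_cpus: float  # sum of container limits (can exceed physical)
    used_cpus: float       # measured actual usage right now

HEADROOM_THRESHOLD = 0.80  # start rebalancing well before saturation

def needs_rebalance(host: Host) -> bool:
    return host.used_cpus / host.physical_cpus >= HEADROOM_THRESHOLD

fleet = [
    Host("host-a", physical_cpus=4, allocated_cpus=40, used_cpus=1.2),  # fine
    Host("host-b", physical_cpus=4, allocated_cpus=36, used_cpus=3.5),  # a "run"
]

for host in fleet:
    status = "REBALANCE" if needs_rebalance(host) else "ok"
    print(f"{host.name}: {host.used_cpus}/{host.physical_cpus} CPUs in use "
          f"({host.allocated_cpus} sold) -> {status}")
```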
Additionally, there are varying levels of "capital reserve requirements" for the various classes of technology resources. Compute is easily leveraged, and that leverage is what I've been thinking of lately as our potential profit margin. Computing is a time-bound activity, but once one task completes the CPU can move on to the next with virtually no penalty. Picture a cook in a diner, knocking out customer orders one after the other. Whether busy or not, the cook is getting paid the same, so best to keep them busy (from the capitalist viewpoint).
Storage, on the other hand, is more complicated, as saving state for later is its entire job. It fills up like a filing cabinet, so you have to keep excess capacity on hand lest disks reach 100%. It's possible to make use of any unused disk space, but it's a much more specialized practice. Likewise for bandwidth, which is simply water flowing through a pipe. There is no way to multiply it just because it's not being used somewhere else. In fact, it's all being used by definition - unused water doesn't flow through the pipe.
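A quick hypothetical (numbers invented) to show why storage resists that kind of leverage: bytes, unlike CPU cycles, don't free themselves up between requests, so the planning question is 'how long until this volume is full' rather than 'how far can we safely overcommit'.

```python
# Hypothetical storage headroom calculation; all numbers are made up.
# Disk usage only grows unless someone deletes data, so the question is
# time-until-full rather than a safe overcommit ratio.

disk_size_gb = 500
used_gb = 380
growth_gb_per_day = 4.0  # observed average growth, illustrative

remaining_gb = disk_size_gb - used_gb
days_until_full = remaining_gb / growth_gb_per_day

print(f"{used_gb}/{disk_size_gb} GB used, growing ~{growth_gb_per_day} GB/day")
print(f"~{days_until_full:.0f} days until full - buy more capacity well before then")
```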
This is a fresh spitball session, but I can't help trying to connect this to something like Bretton Woods, where the world decided on USD as the global reserve currency, and the platform power that decision has given the US ever since. Likewise, the marketing teams at AWS convincing us all that "Capex bad, Opex good, nobody's running their own servers anymore" was clever and has spawned countless innovations as well as an entire industry, but it bears a closer look as your company (hopefully) scales.
I am also forced to ponder the implications of building on somebody else's platform, and the wisdom and downsides of that once you've proven out your business model. To wit - if your company is too successful at adding value, if the business built in rented space does too well, what's to stop the landlord from raising your rent? Anyway, this is fresh creative magma, so tread carefully...
Until next time.