FinOps is performance optimization
Wherein we present a simple mental model for what FinOps really is.
In the early days of the automobile, aerodynamics were not a huge concern. "Cars" were basically horse carriages fitted with outboard motors rather than being pulled by horses. It wasn't until a little later, as the engines became more powerful and the cars became faster, that the shape of the car started to matter in many different ways. The point of the car hasn't changed since the late 1800s, but the innovating humans do on its various optimizations hasn't stopped since.
I was a web dev once. I'd be stunned if anyone reading this hadn't heard the old yarn that Amazon once did a study and concluded that every 100ms of additional page load time cost them 1% of total sales. You know what a web dev does the first time they hear this one? Well, if it's 2008 or so, you go and run your site through Google's PageSpeed test, an evolved version of which lives on to this day.
You know what it told you to do? It told you to take your 400 JavaScript files and smush them all into one file, then maybe do some magic called "minification", and then gzip them all, and darned if it didn't make your site faster. You didn't have to rewrite your site and you definitely didn't have to fuss with the functionality; you just made some optimizations to better handle how the code physically travels over the wire to your customer. The automation to handle all this has been an open source commodity for over a decade now.
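For the curious, here's a toy sketch of that last mile (concatenate and compress), just to make the idea concrete. It's not how you'd do it today (real build tools own this job), the file paths are made up for illustration, and minification would be handled by a dedicated minifier, so it's skipped here.

```python
import gzip
from pathlib import Path

# Hypothetical layout: a pile of individual script files under static/js/.
scripts = sorted(Path("static/js").glob("*.js"))

# Step 1: smush them all into one file.
bundle = "\n".join(p.read_text() for p in scripts)

# Step 2: minification would go here, via a real JS minifier (skipped in this sketch).

# Step 3: gzip the bundle so it travels over the wire in far fewer bytes.
Path("static/js/bundle.js.gz").write_bytes(gzip.compress(bundle.encode("utf-8")))
print(f"bundled {len(scripts)} files into one gzipped payload")
```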
We are in the early days of cloud computing, and it's analogous to the early days of many new technology paradigms. People are, for the most part, just trying to get their heads around it and to carry their old ways of thinking and doing over into these new tools. They bring their preconceptions and previous mental models with them. This is healthy.
This gives us simple analogies to grab on to when explaining FinOps.
Instead of measuring network latencies or CPU or memory utilization, we're zooming out a little bit and looking more broadly at the whole system. Which parts cost a lot of money? Should they cost a lot of money? How can that part be made more cost effective without sacrificing functionality for our users and quality of life for the engineers building it?
Posing the problem this way is especially effective for one of your two primary stakeholders as a FinOps practitioner: the engineering org. Performance optimization is a fun part of the craft of building things. Engineers typically have all manner of debugging and profiling tools available when they want to figure out which parts of the system are (under)performant, but this wealth of tooling and visibility and awareness hasn't completely made it into the area of cloud costs yet.
That is part of our job, as I see it. We make otherwise inscrutable billing info meaningful by translating it into the language of our engineering orgs. Once we do that it becomes, essentially, profiling data. It profiles the efficiencies and inefficiencies of our cloud infrastructure, and then we can start getting creative about how to make it more performant within the same functional footprint. It allows the process of optimizing your company's cloud infrastructure to start happening on its own, without your needing to push it too much.
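To make "billing info as profiling data" a little more concrete, here is a minimal sketch that rolls a hypothetical cost export up by a team tag, producing a crude hot-spot profile of where the money goes. The file name and column names ("service", "tag_team", "unblended_cost") are assumptions for illustration, not any particular provider's schema.

```python
import csv
from collections import defaultdict

# Assumed columns in a hypothetical cost export CSV:
#   service, tag_team, unblended_cost
# Real exports (e.g. AWS CUR) are messier; this only sketches the shape of the idea.
costs_by_team: dict[str, float] = defaultdict(float)

with open("cost_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        team = row.get("tag_team") or "untagged"
        costs_by_team[team] += float(row["unblended_cost"])

# The "profile": which parts of the system cost a lot of money, in engineering terms.
for team, cost in sorted(costs_by_team.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{team:<24} ${cost:,.2f}")
```

Once spend is broken down along the same lines engineers already think in (services, teams, features), it reads like any other profiler output, and they tend to know what to do with it.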
Happy Friday, and thanks for reading.