(2-3 minute read)
TLDR: We talked to Fluid CTO Alexander Turner about which businesses Fluid can help and how implementing Fluid makes life easier for the user.
What businesses do you see Fluid helping the most?
Fluid is a broadly applicable tool. It makes sense for anyone who is unhappy with their cloud spend and wants to take back control and regain flexibility. It is especially helpful if you're running Kubernetes for high-demand, high-throughput workloads.
One of the challenges with the cloud is that, while it appears to be an infinite pool of resources, it is limited by how fast you can get data to it. Not only do you have to organize that connectivity yourself, but if you're running workloads between offices, premises, or studios, you must organize that connectivity too. You also pay for every bit you send into or out of the cloud.
This is where Fluid really differentiates itself. By building edge workloads and giving you control of the edge, those data costs can disappear entirely unless they serve internet users. The storage and overheads you pay simply by virtue of using someone else's intellectual property no longer exist. From my vantage point in the media industry, for example, this is truly exciting because it provides scale on-premises, where it's otherwise difficult to run these high-bandwidth workloads efficiently and effectively.
One of the reasons people move away from on-premises infrastructure is that it's generally challenging to run and requires a lot of engineering resources. We've put a great deal of effort into making Fluid as easy as possible to deploy and manage, and with that in mind, it is attractive for a cloud engineer to run and deploy. Instead of requiring an on-premises engineering team or VMware engineers, a large, complex cluster can be deployed and managed by a data center's remote hands.
How does implementing Fluid make the user’s life easier?
Fluid brings that cloud management experience to your on-premises edge workloads.
The time you would otherwise spend setting up the network and configuring and booting compute is gone. We leverage the same technologies that large-scale cloud providers and hyperscalers use to automatically boot servers, configure them, and configure the network, and we present all of that in a single management-pane interface, so you can simply jump into the cloud portal and manage it.
You can plug things in, turn them on, and they will PXE boot and configure themselves automatically. It's touchless, it's hands-free, and it's a really simple way of deploying. The trade-off is network configuration flexibility, and we think that's a good thing. We see the same trend in cloud providers today: they have one way of configuring the network, designed for scale, security, and resiliency. We have taken that model and applied it on-premises, so there's no architectural decision to be made. There is one architecture, and we make it as easy as possible for the end user to conform to it.
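The touchless flow described here follows the standard PXE network-boot pattern: a newly powered-on server asks DHCP for an address and a boot file, fetches that file over TFTP, and configures itself from there. Purely as an illustration (the tool, interface, addresses, and boot image below are assumptions, not Fluid's actual stack), a minimal dnsmasq configuration enabling PXE boot on a provisioning network might look like:

```
# Illustrative dnsmasq.conf sketch for PXE booting bare-metal servers.
# All values are placeholder assumptions, not Fluid's real provisioning setup.

# Serve DHCP/TFTP only on the provisioning network interface
interface=eth0

# Hand out addresses to servers as they power on
dhcp-range=192.168.1.100,192.168.1.200,12h

# Built-in TFTP server hosting the network boot files
enable-tftp
tftp-root=/srv/tftp

# Boot file the firmware fetches and executes (BIOS PXE example)
dhcp-boot=pxelinux.0
```

From the operator's side this matches the "plug in and turn on" experience: the server's firmware broadcasts for DHCP, receives the boot filename, pulls it over TFTP, and the rest of the configuration proceeds automatically.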
We know it scales, and we know it supports all sorts of workloads, from complex Kubernetes deployments all the way to Windows Server 2008 VMs running payment applications. We've had a wide range of applications and tooling run on top of the platform. It's easy.