ASE CASE STUDY

Repurposing existing hardware was easily done because we didn’t have to adhere to a strict compatibility matrix.


SERVICE PROVIDER

Fluid

CLIENT

ASE

BACKGROUND

ASE is a Managed Service Provider (MSP) headquartered in Sydney, Australia, with a global footprint and clients around the world. The company is a global leader in data and cloud services, providing its partners with technological solutions designed to make digital performance more efficient, deliver greater value, and meet future demand.

THE CHALLENGE

ASE had existing virtualization environments provided by VMware that weren’t adding value: they were decreasing available physical compute resources due to the overhead of VMware service add-ons, and they came with an over-engineered licensing cost model that was always a nightmare to calculate.

ASE also found a lack of innovation, particularly in relation to containerization and how the inevitable transition from VMs to containers would happen. Stepping back, they identified a lot of legacy features and bloatware in the product that weren’t relevant to a modern architecture.

More and more of ASE’s customers were choosing hybrid and multi-cloud environments, bringing AWS or Azure into their mix, and the need to easily orchestrate connectivity from on-premises to the cloud kept growing. ASE needed a new solution.

THE SOLUTION

ASE made the decision to deploy Fluid to replace their VMware virtualization environments internally as it was simply a more streamlined, cloud-friendly way of evolving their infrastructure. 

Fluid accelerated ASE’s ability to innovate with customers and to evolve their offerings, because they could action items such as deploying containers in Kubernetes. The integration provided a modern way of shifting all of ASE’s and their existing customers’ virtual environments to an all-in-one, code-friendly stack that is CI/CD-pipeline compatible for version control.
 

OUTCOME

With Fluid in place, ASE’s team cut out a large amount of the time they would have spent on rollouts and onboarding. Those resource savings can be passed on to the client or used to increase profit.

With their previous infrastructure environment, ASE would be looking at potentially 2-3 days to onboard new customers. With Fluid, the ASE team can now onboard and deploy in just a couple of minutes. That is a massive time and resource saving that has allowed ASE to keep focusing on client solutions rather than routine rollout work, and to pass the cost savings on to their clients.

Curious?

Top 3 reasons CTOs need to take note of Fluid

Cloud manager

(2-3 minute read) 

TLDR: We talked to Fluid CTO Alexander Turner about why Fluid will make your life easier. Think edge, K8s, containerization, and cost vantage points.

Alex, what are your top three points for why CTOs should take note of Fluid?

Alexander Turner, Fluid CTO
  1. Your competitors and peers are moving to the edge, right?  

    That means that their services to their end customers are going to be faster and they’re going to be quicker to deploy.  
     
    There’s no denying the drive to move more compute and workload closer to your end-user or consumer. Edge computing promises just that, but how?
     
    With cloud computing providers being forced into larger, more central facilities, how can you deploy scalable workloads on the edge? Fluid adds value in its ability to transform legacy or on-premise computing infrastructure into your own internet-accessible cloud.
     
    Fluid makes provisioning hardware a task simple enough for remote hands to complete in a few hours. All that needs to happen is cabling and rack-and-stack. Fluid’s orchestration platform automatically boots and installs your servers, then presents them on our globally accessible management portal. Fluid makes it trivial to build your own mini-cloud wherever and whenever you want.
     
  2. Future workloads are container-first.
     
    Modern organizations need to be able to quickly adapt their product and scale to customer requirements at the press of a button. The use of public cloud services has normalized the expectation of easily deploying applications with a single API call. Containers are the building block of rapid scale, and they empower innovation in product by abstracting away time spent managing and converting it into time spent building. Present and future workloads and applications start with containers. They are small, nimble services that scale fast and run anywhere. Kubernetes has proven itself to be the de facto standard when it comes to container orchestration and portability. Kubernetes empowers applications to run across multiple cloud platforms and on-premise with tools like Fluid, so your technical teams can focus on creating IP, not keeping it running.
     
  3. On-premise VM-backed workloads will burn a hole in your pocket.

    Let’s start with the amount of resources it takes just to manage and run them! Building an alternative strategy that doesn’t rely solely on VMs, supports cost reduction in the cloud, or supports your cloud repatriation focus is required. The answer may or may not be the cloud, but the cloud itself comes with bill shock. With the consumption-based model, it doesn’t matter if you’re running one VM or a thousand: you’re charged for everything you use.
     
    With Fluid you have a running cost per node. You can run whatever resources you please. There’s no bill shock, it’s completely controlled. Traditional workloads need to run somewhere smart, somewhere that’s future-proofed and somewhere they aren’t going to drown the business in cost.

Fluid makes CTOs’ work lives easier. Want to know more? Book a demo today.

What it takes to build a scalable, multi-cloud, On-premise computing environment in minutes

Cloud manager

(2-3 minute read) 

TLDR: We talked to Fluid CTO, Alexander Turner about the challenges of building Fluid from a technical standpoint. From orchestrating scalable Kubernetes environments that stand up on their own, to maintaining cluster quorum, and orchestrating complicated cloud networks. 

What was the most complex element of Fluid to bring together? 

There are obviously a couple of elements of Fluid that have been challenging.  

A couple of major elements really stand out:

Alexander Turner, Fluid CTO
  • Orchestrating scalable Kubernetes environments to stand up by themselves  
  • How do you maintain cluster quorum as servers are booting and as the clusters are being built?  
  • Networking. How do we orchestrate complicated cloud networks? 
    Networks that not only allow us to get higher resiliency down to a particular server, but also allow us to orchestrate turning up cloud providers and cloud provider links to bridge that gap between internet and high-performance switching and leverage that architecture? 

For anyone who’s played in the on-premise game before, you’ll know there are some challenges, especially the first time, until you get some cadence. Take activities such as connecting a cloud provider, ethernet service, or global interconnect to your network. We’re talking about an AWS Direct Connect or Azure ExpressRoute, and how many steps are involved. It usually means finding a third party to deliver the VLAN to you, connecting to them, turning up IP on the service, configuring the cloud provider side, configuring BGP, accepting the circuit in AWS or ExpressRoute, dancing between all of those, and then realizing: oh, I actually need a virtual gateway. I need this, I need that…

We’ve built a platform that automates all of that, including accepting the link. You simply provision a Virtual Cross Connect (VXC) with a provider like Megaport or PCCW Console Connect.

Here’s the total steps to connect to Fluid:  

  1. Deliver a circuit to your Fluid switch, connected directly into your infrastructure or switch.
  2. Run a very simple wizard that takes your cloud provider’s credentials. You configure the API credentials for your VXC provider, like Console Connect or Megaport.
  3. Give it the virtual gateway or VNet you want to connect to in Amazon, Google Cloud, or Microsoft Azure, answer a couple of simple questions, and the wizard orchestrates the end-to-end connectivity.
  4. Test it. You’ll get a green light when it’s on and there’s connectivity.

We took that multi-step, very cumbersome process, and made it a single flow that is really easy to operate and use. You can just insert data, get that connectivity in, and then start scaling and building your network.  
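The single flow described above can be sketched, very roughly, in code. Everything here is hypothetical: Fluid’s actual API is not public, so the function and stage names below are invented purely to illustrate collapsing the manual Direct Connect/ExpressRoute dance into one ordered pipeline.

```python
# Illustrative sketch only: the function and stage names are hypothetical,
# not Fluid's actual API. It models the single-flow orchestration described
# above: each manual step becomes one stage in an automated pipeline.

def orchestrate_vxc(provider: str, cloud: str, gateway: str) -> list:
    """Run the end-to-end connect flow and return the completed stages."""
    stages = [
        f"provision VXC via {provider}",      # e.g. Megaport or Console Connect
        f"accept circuit in {cloud}",          # no manual acceptance step
        f"attach virtual gateway {gateway}",   # VGW / VNet supplied by the wizard
        "configure BGP on both ends",
        "verify end-to-end connectivity",      # the 'green light' test
    ]
    completed = []
    for stage in stages:
        # A real orchestrator would call provider and cloud APIs here and
        # roll back on failure; the sketch just records the sequence.
        completed.append(stage)
    return completed

result = orchestrate_vxc("Megaport", "AWS", "vgw-example")
```

The point of the sketch is the ordering: circuit acceptance and gateway attachment happen inside the same flow, so there is no hand-off between teams mid-process.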

With Fluid, cloud connectivity couldn’t be easier, especially at volume.  

CTO Interview | Who should use Fluid and why?

Cloud manager

(2-3 minute read) 

TLDR: We talked to Fluid CTO, Alexander Turner about which businesses Fluid can help and how implementing Fluid makes life easier for the user.  

What businesses do you see Fluid helping the most?  

Fluid is a ubiquitous tool. It makes sense for anyone who is unhappy with their cloud spend and wants to take back control and regain flexibility. It helps if you’re using Kubernetes in high-demand and high-throughput workloads.
 
One of the challenges with the cloud is that it is a seemingly infinite pool of resources, yet it is limited by how fast you can get to it. Not only do you have to organize that connectivity yourself; if you’re running workloads between offices, premises, or studios, you must organize that connectivity too. You also pay for every bit that you send into or out of the cloud.
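To make that last point concrete, here is a back-of-the-envelope sketch. The per-GB rate and the traffic volume are assumed, illustrative figures, not quoted prices from any provider.

```python
# Back-of-the-envelope sketch; the per-GB rate is an assumed, illustrative
# figure, not a quoted price from any cloud provider.

def monthly_egress_cost(gb_per_day: float, rate_per_gb: float) -> float:
    """Cloud egress billed per GB transferred out, over a 30-day month."""
    return gb_per_day * 30 * rate_per_gb

# A media workload pushing an assumed 500 GB/day out of the cloud at an
# assumed $0.09/GB adds up quickly; traffic that stays on-premises at the
# edge never hits this meter.
cloud_bill = monthly_egress_cost(500, 0.09)  # 500 * 30 * 0.09 = 1350.0
```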

Alexander Turner, Fluid CTO

This is where Fluid really differentiates itself: building edge workloads and giving you control of the edge means those data costs can completely disappear, unless they’re for internet users. The storage and overheads that you’re paying for, just by virtue of using someone else’s intellectual property, don’t exist anymore. From my vantage point, in the media industry for example, this is truly exciting because it provides scale on-premises, where it’s difficult to run these high-bandwidth workloads off-premises efficiently and effectively.

One of the reasons people move away from on-premises infrastructure is that it’s generally challenging to run. It requires a lot of engineering resources. We’ve really put a lot of effort into making Fluid as easy as possible to deploy and manage, and with that in mind, it is attractive to a cloud engineer to run and deploy. Instead of requiring an on-premises engineering team or VMware engineers, a large complex cluster can be deployed by a data center’s remote hands and managed by them instead.  

How does implementing Fluid make the user’s life easier?  

Fluid brings that cloud management experience to your edge on-premises workloads.  

The time that you would otherwise spend setting up the network, configuring compute, and booting compute is gone. We leverage the same technologies that large-scale cloud providers and hyperscalers use to automatically boot servers, automatically configure servers, automatically configure the network, and present it all in one single-pane management interface, so you can simply jump into the cloud portal and manage it.

You can plug things in and turn them on, and they will PXE boot and configure themselves automatically. It’s touchless, it’s hands-free, it’s a really simple way of deploying. That convenience comes at the cost of network configuration flexibility, and we think that’s a good thing. The same trend we see in cloud providers today is that they have one way of configuring the network, designed for scale, security, and resiliency. We have taken that model and applied it on-premises, so there’s no consideration to be made around architecture. There is an architecture, and we make it as easy as possible for the end-user to conform to it.
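The touchless-boot idea can be sketched roughly as follows. This is not Fluid’s implementation; the inventory structure, boot profiles, and MAC addresses are invented to illustrate how an orchestrator might automatically enrol unknown hardware the moment it appears on the network.

```python
# Hypothetical sketch of touchless provisioning: an orchestrator sees a
# new server's MAC address on the network and hands back a boot profile,
# so the machine PXE-boots and configures itself with no human input.
# Profile contents and MAC addresses are invented for illustration.

DEFAULT_PROFILE = {"image": "fluid-node", "role": "worker"}

def assign_boot_profile(inventory: dict, mac: str) -> dict:
    """Return the boot profile for a known node, or enrol an unknown one."""
    if mac not in inventory:
        # Unknown hardware is enrolled automatically; this is what makes
        # rack-and-stack by remote hands sufficient.
        inventory[mac] = dict(DEFAULT_PROFILE)
    return inventory[mac]

inventory = {"aa:bb:cc:00:00:01": {"image": "fluid-node", "role": "control-plane"}}
new_node = assign_boot_profile(inventory, "aa:bb:cc:00:00:02")
```

The design choice this mirrors is the one described in the interview: a fixed architecture means the orchestrator never asks a human how a new box should be wired in.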

We know it scales, and we know it supports all sorts of workloads, from complicated Kubernetes workloads all the way to VMs running Windows Server 2008 for payment applications. We’ve had a wide range of applications and tooling supported on top of the platform. It’s easy.

CTO Interview | Behind the scenes: Fluid foundations

CTO

(2-3 minute read) 

TLDR: We spoke to Fluid CTO, Alexander Turner regarding why he saw there had to be a better way for on-premises environments to be created, and his favorite Fluid features, such as the bridge between cloud and on-premises infrastructure. 

Alexander, how did you become involved with Fluid?  

Coming out of a recent tenure with one of the large cloud providers, I was looking to instantiate and drive change in an industry where I felt we had some real sticking points. I met Andrew Sjoquist (Fluid Founder), and he had brilliant ideas for Fluid. It was a perfect match: Andrew had sharp insight into where the pain points were for his customers, we shared a vision, and we both wanted to make a change.

Alexander Turner, Fluid CTO

Why was Fluid created?  

Fluid was created because there had to be a better way to build on-premises environments.

Today, it’s become cool to move workloads to the cloud, and that cloud simplicity makes a lot of sense. Cloud cost and data ingress can be challenging, but ultimately it comes down to the fact that the cloud is not everywhere. The cloud deployments we’re seeing are very large; when one of the big three deploys a new cloud environment, it’s generally not in a small city or a small footprint. It’s fairly high demand, and there’s a high ROI required to build the infrastructure.

We noted a trend towards edge workloads. We want to provide end-users and customers ultimate flexibility in the location of their data and their workloads. The challenge today is that it’s still really hard to deploy things. AWS, Microsoft, and Google have their products to deploy on-premises, but they do tie you to their cloud and their ecosystem, which may not necessarily be a good fit.

We believe that the future of multi and hybrid cloud is driven by Kubernetes. We’ve been on a mission to democratize Kubernetes on-premises and make it as easy as possible for anyone to deploy infrastructure. We make it as easy as if you were deploying a large cluster in the cloud. We want you to be able to just plug some boxes together, power them on, run a one-line command and then take control from your portal.  

What do you think is the most awesome feature of Fluid?  

I really love the bridge between the cloud and on-premise infrastructure.  

As an engineer myself, one of the most frustrating things I’ve found working in big corporates or complicated network environments was always: how do you get access to things? Even if you’re deploying a VMware cluster, it’s often a headache getting your vSphere console accessible to the internet or accessible globally. It involves multiple teams; it’s a real stuff-around.

We’ve built a secure reverse-tunneling tool that leverages two levels of encryption, so you can use and access our cloud-hosted portal as if you were accessing a service from the cloud. Simply put, you just go to the portal at Fluid HQ, enter your cloud pairing token, and you’re online and managing a cluster remotely.
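As a rough illustration of the pairing-token idea only (Fluid’s actual mechanism is not public, so the HMAC scheme below is an assumption invented for this sketch), a cluster can prove its identity to a portal with a token derived from a shared secret:

```python
# Conceptual sketch of pairing-token verification. Fluid's real mechanism
# is not documented here; this HMAC-over-a-cluster-ID scheme is an assumed,
# illustrative stand-in for how a cluster might prove its identity to a
# cloud portal before a tunnel is established.
import hashlib
import hmac

def make_pairing_token(cluster_id: str, shared_secret: bytes) -> str:
    """Derive a token the portal can verify without storing it."""
    return hmac.new(shared_secret, cluster_id.encode(), hashlib.sha256).hexdigest()

def verify_pairing(cluster_id: str, token: str, shared_secret: bytes) -> bool:
    expected = make_pairing_token(cluster_id, shared_secret)
    # Constant-time comparison avoids leaking the token via timing.
    return hmac.compare_digest(expected, token)

secret = b"example-shared-secret"
token = make_pairing_token("cluster-01", secret)
```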

That bridge between cloud-managed and on-premises really makes on-premises and those edge workloads feel like the cloud again.  

Distributed Storage Solutions (DSS) Case Study

By utilising Fluid, DSS deployed its new footprint ready for scaling whilst taking advantage of advanced data centre networking concepts.

Distributed Storage Solutions

SERVICE PROVIDER

ASE

CLIENT

DSS

Distributed Storage Solutions

BACKGROUND

Distributed Storage Solutions (DSS) is a client of service provider ASE. DSS is involved with cryptocurrency backed by decentralised storage environments.

DSS is dedicated to sustainable and robust data storage infrastructure in the Filecoin Network developed on the IPFS (InterPlanetary File System) protocol.

Decentralised storage platforms break apart the users’ files and distribute them across multiple nodes on their network, resulting in no single point of failure. Filecoin is a decentralised storage blockchain developed by Protocol Labs on the IPFS protocol. Data is stored and retrieved via a series of cryptographic proofs by the storage user and the storage provider.

THE CHALLENGE

DSS was looking to scale out a proof-of-concept lab environment into a production enterprise-scale Filecoin deployment. This larger deployment required significantly more bandwidth between the computational and storage components of the solution.

There was also a requirement to ensure new computational hosts and storage capacity could be deployed simply and without significant human resources, whilst maintaining 100% uptime for the environment during the changes that came with scaling.

THE SOLUTION

ASE applied the Fluid network operating system not only across the new 100-gigabit infrastructure but also across the on-premise server hardware, as ASE transitioned DSS into an Equinix data centre.

OUTCOME

ASE installed Fluid, completing the networking component of DSS’s project in just one day. This networking project would usually take one to two weeks.

By utilising Fluid, DSS deployed its new footprint ready for scaling whilst taking advantage of advanced data centre networking concepts such as routing on host (ROH), BGP, and Equal Cost MultiPath (ECMP) balancing, as well as extracting the maximum possible bandwidth from their storage architecture. This has allowed for an increase in the production of proven storage capacity and, consequently, an increase in the amount of FIL (Filecoin) generated for the business.
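Of those concepts, ECMP is the one that most directly multiplies usable bandwidth: a per-flow hash spreads traffic across equal-cost links while keeping each flow on a single path. A minimal sketch follows, with invented link names and SHA-256 standing in for whatever hash real switching silicon uses:

```python
# Illustrative sketch of Equal Cost MultiPath (ECMP) next-hop selection:
# a hash of the flow's 5-tuple picks one of several equal-cost links, so
# packets of the same flow stay on one path (no reordering) while the
# aggregate traffic spreads across all links. Link names are invented.
import hashlib

def ecmp_next_hop(src_ip, dst_ip, src_port, dst_port, proto, links):
    """Pick a link deterministically per flow (hash of the 5-tuple)."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return links[digest % len(links)]

links = ["100g-leaf1", "100g-leaf2"]
hop_a = ecmp_next_hop("10.0.0.1", "10.0.1.1", 40000, 443, "tcp", links)
hop_b = ecmp_next_hop("10.0.0.1", "10.0.1.1", 40000, 443, "tcp", links)
# Same 5-tuple yields the same link, preserving flow affinity.
```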

Fluid makes networking simple and allows high-throughput data to flow from DSS’s servers to their storage infrastructure. The ability to maintain 100 gigabits and more across the entire network was a core element, as networking is critical. Integrating Fluid into the solution improved DSS’s network performance considerably and increased their productivity.

Curious?

Ezypay: A Fluid infrastructure transition

We’ve loved partnering with you because you’ve come with us on the journey, as we’ve grown, so have you.

Andrew So, CIO

SERVICE PROVIDER

ASE

CLIENT

Ezypay

BACKGROUND

Ezypay is Australia’s leading solution for subscription and direct debit billing. The company offers a cloud-based subscription payment platform to manage recurring direct debit payments across multiple sites, multiple payment methods, and multiple currencies to businesses and their customers across the Asia Pacific region.

Ezypay has been a client of service provider ASE for over eight years. During that time ASE has delivered colocation, connectivity, and unified communications services.

After a technology analysis of their business, ASE originally relocated Ezypay into an Equinix colocation facility, out of their offices in Chatswood, NSW, Australia. To take advantage of the better availability and uptime of services, ASE shifted Ezypay infrastructure and workloads, which also came with a host of other advantages from a connectivity and security perspective, as well as access to cloud services.

THE CHALLENGE

Ezypay’s business evolved to make more use of cloud services, particularly on Amazon Web Services (AWS). A requirement remained to continue existing colocated infrastructure, due to several compliance and operational reasons. It was critical to keep data close, out of the clouds, or available between multiple clouds.

The infrastructure that Ezypay had colocated comprised VMware, an IBM SAN, and IBM Lenovo servers, and it served them for several years. Eventually, that hardware reached end of life and they had to migrate in some way, shape, or form.

Without a need for a full refresh, or to redeploy a VMware environment and traditional legacy SAN and server application environment, Fluid came into the picture as a solution.

THE SOLUTION

Ezypay looked to Fluid to transition to a more agile and modern architecture.

Fluid provided the capability not only to deploy new containerised services, which their developers and other teams can create simply and quickly using infrastructure-as-code practices, but also for Ezypay to migrate and run their legacy VMware environment as containers inside of Fluid.

Ezypay received the best of both worlds, to have the forward-looking approach of being able to deploy new containerised services into the environment, but also to support the legacy environment as well. They maximised their investment for hardware, general maintenance, and other overheads that come with maintaining an environment.

Fluid’s integration with NetApp Cloud Manager and Trident (the container storage interface (CSI) driver for Kubernetes, deployed automatically out of the box when you deploy Fluid) provided visibility and control of all the compute, network, and data storage elements that make up the solution. The ability to manage data as a discrete and highly prized asset by utilising NetApp technology such as Snapshot and data replication improved resiliency and peace of mind. Not only does Ezypay use Fluid, but they also use NetApp Private Storage (NPS) as a service offering from ASE.

OUTCOME

Ezypay has moved from an environment where they were using legacy infrastructure, with legacy backups and data management, through to implementing Fluid to be agile with how they go forward with their product enhancements and general technology operations as well.

Implementing Fluid has resulted in Ezypay being able to place more focus on their application and therefore their users, as opposed to maintaining legacy infrastructure. They had already started to see the benefits of doing that when they moved from their office out to Equinix, at the recommendation of ASE.

Next, ASE will retire the co-location altogether. Ezypay has now consolidated an entire cabinet full of equipment down to about 2RU of rack space, thanks to the solution Fluid has enabled.

By making data more open and accessible whilst maintaining protection and privacy, Ezypay will be able to accelerate its product development across any cloud and in any region where it operates.

The ability to manage the Fluid environment from within Cloud Manager has also resulted in exposure to other NetApp services such as Snapshot Control, as well as a range of other services, like Cloud Insights, that they can easily utilise from the same portal.

Curious?