“We saw the amazing community that’s grown up around Kubernetes, and we wanted to be part of that. We wanted to benefit from added velocity and reduced cost, and also align with the rest of the industry on best practices and tools.” — JAI CHAKRABARTI, DIRECTOR OF ENGINEERING, INFRASTRUCTURE AND OPERATIONS, SPOTIFY
What is Kubernetes?
Kubernetes (also known as k8s or “kube”) is an open source container orchestration platform that automates many of the manual processes involved in deploying, managing, and scaling containerized applications.
The name Kubernetes originates from Greek, meaning helmsman or pilot. Google open-sourced the Kubernetes project in 2014. Kubernetes combines over 15 years of Google’s experience running production workloads at scale with best-of-breed ideas and practices from the community.
Kubernetes architecture and how it works
Kubernetes defines a set of building blocks (“primitives”), which collectively provide mechanisms that deploy, maintain, and scale applications based on CPU, memory, or custom metrics. Kubernetes is loosely coupled and extensible, so it can accommodate different workloads. This extensibility is provided in large part by the Kubernetes API, which is used by internal components as well as by extensions and containers that run on Kubernetes. The platform exerts its control over compute and storage resources by defining them as Objects, which can then be managed as such. Kubernetes follows a primary/replica architecture: its components can be divided into those that manage an individual node and those that are part of the control plane.
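To make this concrete, here is a minimal illustration of such an Object: a Deployment declared in YAML. The names (`web`) and the image (`nginx:1.25`) are placeholders for this sketch, not anything prescribed by Kubernetes; the control plane reads this desired state through the API and continuously reconciles the cluster toward it.

```yaml
apiVersion: apps/v1
kind: Deployment          # the Object kind managed via the Kubernetes API
metadata:
  name: web               # placeholder name
spec:
  replicas: 3             # desired state: keep three Pods running
  selector:
    matchLabels:
      app: web
  template:               # Pod template the control plane stamps out
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # placeholder image
          ports:
            - containerPort: 80
```

Applying this manifest (for example with `kubectl apply -f`) hands the desired state to the control plane, which schedules the Pods onto nodes and replaces them if they fail.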
Where can you use Kubernetes?🤔
There are few restrictions on where you can use Kubernetes: almost any option is possible, thanks to the many installation paths it offers and to the many solutions that are integrating it into their architectures. This gives us a wide range of flavors in which to run K8s.
- Bare Metal: we can deploy our cluster directly on physical machines using multiple operating systems: Fedora, CentOS, Ubuntu, etc.
- Virtualization on-premise: if we want to mount our cluster on-premise, but with virtual machines, the possibilities grow. We can use Vagrant, CloudStack, VMware, OpenStack, CoreOS, oVirt, Fedora, etc.
- Cloud solutions: if we want all the advantages of Kubernetes without having to manage everything underneath, we have all these alternatives in the cloud:
- OpenShift: the leading PaaS integrates Kubernetes and, when using it in its different editions (enterprise, online, etc.), we will be using managed K8s clusters.
- Google Container Engine (now Google Kubernetes Engine): a managed service offered by Google, which is responsible for managing the underlying Compute Engine instances. It also handles monitoring, logging, the health of the instances, and updating Kubernetes to the latest available version.
- Cloud Foundry offers Kubernetes in its Container Runtime.
- Others: Azure, IBM, Kube2Go, and GiantSwarm also offer managed Kubernetes services.
What specifically can Kubernetes do for us?
Kubernetes can drive five fundamental business capabilities in the enterprise, be it large or small. And to add teeth to these use cases, we have identified some real-world examples to validate the value that enterprises are getting from their Kubernetes deployments:
- Faster time to market
- IT cost optimization
- Improved scalability and availability
- Multi-cloud (and hybrid cloud) flexibility
- Effective migration to the cloud
How Spotify uses Kubernetes
Spotify is a Swedish audio streaming and media services provider, launched in October 2008. The platform is owned by Spotify AB, a publicly traded company on the New York Stock Exchange since 2018 through its holding company Spotify Technology S.A. based in Luxembourg.
Challenges Faced by Spotify:
Launched in 2008, the audio-streaming platform has grown to over 200 million monthly active users across the world. Spotify’s intention is to provide a quality service to its customers: to empower creators and enable a truly immersive listening experience for all of its consumers.
An early adopter of microservices and Docker, Spotify had containerized microservices running across its fleet of VMs with a homegrown container orchestration system called Helios. By late 2017, it became clear that having a small team working on the features was just not as efficient as adopting something that was supported by a much bigger community.
Kubernetes solved this challenge: it benefited Spotify by increasing velocity and reducing cost. The migration, which would happen in parallel with Helios running, could go smoothly because “Kubernetes fit very nicely as a complement and now as a replacement to Helios,” says Chakrabarti. Spotify has also started to use gRPC and Envoy, replacing existing homegrown solutions, just as it had with Kubernetes.
The biggest service currently running on Kubernetes takes about 10 million requests per second as an aggregate service and benefits greatly from autoscaling.
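Spotify’s actual configuration is not public, but autoscaling of this kind is typically expressed in Kubernetes with a HorizontalPodAutoscaler Object. The sketch below (with a placeholder Deployment named `web` and illustrative replica bounds and target) shows the general shape:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:          # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: web              # placeholder name
  minReplicas: 3           # illustrative bounds
  maxReplicas: 50
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU exceeds 70%
```

The control plane periodically compares observed metrics against the target and adjusts the replica count within the stated bounds, which is what lets a service absorb large swings in request volume.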
Previously, they had to wait up to an hour to create a new service and get an operational host to run it in production; with Kubernetes, they can do that in seconds and minutes. In addition, with Kubernetes’s bin-packing and multi-tenancy capabilities, CPU utilization has improved on average two- to threefold.
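The bin-packing mentioned above works because each container can declare resource requests and limits; the scheduler then fits Pods onto nodes according to their requests, and the limits keep tenants from starving one another. A hypothetical container spec fragment (names and numbers are illustrative, not Spotify’s):

```yaml
spec:
  containers:
    - name: web            # placeholder name
      image: nginx:1.25    # placeholder image
      resources:
        requests:          # the scheduler bin-packs Pods onto nodes by these
          cpu: "250m"
          memory: "256Mi"
        limits:            # hard caps, enabling safe multi-tenancy on shared nodes
          cpu: "500m"
          memory: "512Mi"
```

Because requests are typically set well below peak node capacity, many Pods from different teams can share a node safely, which is what drives the higher average CPU utilization.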
For further queries or suggestions, feel free to connect with me on LinkedIn.