KUBERNETES IN DETAIL WITH USE CASES
So, Kubernetes is basically a container management tool. But what are these containers? And where does Kubernetes come in?
Containers are like a small version of virtual machines. They are lightweight executables that have their own file systems, memory and CPU shares among other resources, but they don't require their own operating systems.
They simply package up code and its dependencies, making sure applications run quickly and reliably between different cloud computing environments.
Containers run on both Windows and Linux. You can therefore easily create your packaged application on your development platform, and then deploy it on any platform that supports containers.
There’s a challenge though:
Containers don’t work best on their own. You need a way to manage, automate and scale the deployment process. You need a container orchestration system. And that’s where Kubernetes (K8s) comes in.
WHY KUBERNETES?
Containers are hot right now. And Kubernetes is currently the go-to system for container orchestration. But, exactly what do you get when you use Kubernetes? Here’s a list of benefits:
1. It Solves The Multiple Computer Nightmare
Using multiple containers is amazing, until you realize you're running several containers across many machines and struggling to figure out how to make them all run seamlessly.
Doing that by hand can be insanely frustrating. Not with Kubernetes: this container orchestration platform ensures multiple containers work together seamlessly. It can:
- Scale containers up or down when demand changes
- Distribute load between containers
- Keep storage consistent, among other tasks
All this is done automatically, freeing up your time and resources so that you can focus on your core activities. This is one of the biggest reasons why Kubernetes is loved by developers.
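To make this concrete, here's a minimal sketch of how that automatic scaling is expressed; the names and image are illustrative, not from any real project:

```yaml
# A Deployment asks Kubernetes to keep a fixed number of identical pods running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3            # Kubernetes keeps three copies running at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25   # any container image works here
```

Scaling up or down is then a one-liner: `kubectl scale deployment web --replicas=5`.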
2. It Makes It Easy to Configure Services
The Kubernetes engine uses pods (a collection of containers deployed together on one host) to orchestrate modular parts, ensuring that no single container image is overwhelmed with too much functionality.
It groups together a collection of pods that take care of similar functions. This way, services can be configured easily for load balancing, observability and horizontal scaling.
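As a sketch of how such a grouping is exposed, a Service load-balances traffic across every pod that matches its label selector (the names and ports here are illustrative):

```yaml
# A Service gives a stable address to a set of pods and balances load across them.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web          # routes to all pods labeled app=web
  ports:
  - port: 80          # port the Service exposes inside the cluster
    targetPort: 8080  # port the containers actually listen on
```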
3. Automatic Load Balancing
Occasionally, containers can suffer from high load. If traffic gets too high, Kubernetes can balance and distribute it to keep the deployment stable. You don't get cases where some machines are doing too much while others do close to nothing.
When more memory, storage or CPU is needed, Kubernetes ramps them up. On the other hand, when less is needed, Kubernetes can shut some machines down.
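One way this ramping up and down is expressed is a HorizontalPodAutoscaler. This sketch (names and thresholds are illustrative) adds pods when average CPU use crosses a target and removes them when load drops:

```yaml
# Scales the "web" Deployment between 2 and 10 pods based on CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add pods when average CPU exceeds 70%
```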
4. It Simplifies Large Scale Updating and Deployment of Software
Using Kubernetes makes it easy to manage how software is deployed and updated, partly through Kubernetes controllers, which enable you to:
- Deploy software at scale across pods
- Quickly identify what is complete, what’s processing, and what has failed
- Automate rollbacks and rollouts
- Orchestrate rolling updates
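As a sketch, the rolling-update behavior above is configured on a Deployment's update strategy (this is only a fragment of a Deployment spec, with illustrative values):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1         # at most one extra pod is created during an update
      maxUnavailable: 0   # never drop below the desired replica count
```

A rollout that goes wrong can then be reverted with `kubectl rollout undo deployment/<name>`.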
It enables you to use different types of applications, without restricting supported language runtimes. If you can run an application in a container, you can run it in Kubernetes.
5. Add New Functionality
As an open source project, adding new functionality to Kubernetes is easy. A large community of developers and companies build extensions, integrations, and plugins that help Kubernetes users do more.
6. Run Anywhere
Run highly available and scalable Kubernetes clusters on AWS while maintaining full compatibility with your Kubernetes deployments running on-premises.
7. Run Kubernetes On AWS
AWS makes it easy to run Kubernetes. You can choose to manage Kubernetes infrastructure yourself with Amazon EC2 or get an automatically provisioned, managed Kubernetes control plane with Amazon EKS. Either way, you get powerful, community-backed integrations to AWS services like VPC, IAM, and service discovery as well as the security, scalability, and high-availability of AWS.
REMEMBER:
You pay $0.10 per hour for each Amazon EKS cluster that you create and for the AWS resources you create to run your Kubernetes worker nodes. You only pay for what you use, as you use it; there are no minimum fees and no upfront commitments.
WHAT IS KUBERNETES?
Containers are a good way to bundle and run your applications. In a production environment, you need to manage the containers that run the applications and ensure that there is no downtime. For example, if a container goes down, another container needs to start in its place.
Kubernetes manages clusters of compute instances (such as Amazon EC2) and runs containers on those instances, with processes for deployment, maintenance, and scaling. Using Kubernetes, you can run any type of containerized application using the same toolset on-premises and in the cloud.
AWS makes it easy to run Kubernetes in the cloud with scalable and highly-available virtual machine infrastructure, community-backed service integrations, and Amazon Elastic Kubernetes Service (EKS), a certified conformant, managed Kubernetes service.
HOW KUBERNETES WORKS
Kubernetes works by managing a cluster of compute instances and scheduling containers to run on the cluster based on the available compute resources and the resource requirements of each container. Containers are run in logical groupings called pods and you can run and scale one or many containers together as a pod.
Kubernetes control plane software decides when and where to run your pods, manages traffic routing, and scales your pods based on utilization or other metrics that you define. Kubernetes automatically starts pods on your cluster based on their resource requirements and automatically restarts pods if they or the instances they are running on fail. Each pod is given an IP address and a single DNS name, which Kubernetes uses to connect your services with each other and external traffic.
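Here's a hedged sketch of what those per-container resource requirements look like on a pod (the name, image, and numbers are illustrative):

```yaml
# The scheduler uses "requests" to pick a node with enough free capacity;
# "limits" are hard caps enforced while the container runs.
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: app
    image: nginx:1.25
    resources:
      requests:
        cpu: 250m        # a quarter of a CPU core
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 256Mi
```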
HOW KUBERNETES STANDS OUT IN THE MARKET
Kubernetes offers portability, and faster, simpler deployment times. This means that companies can take advantage of multiple cloud providers if needed and can grow rapidly without having to re-architect their infrastructure. It makes it easy to use and manage containerized applications. You’re then left to focus on creating new features and fixing bugs rapidly, without worrying much about deployment. It also ensures that you fully optimize your usage of machines.
WHY KUBERNETES HAS BEEN GROWING RAPIDLY
There are many benefits of using Kubernetes that span across different fields.
This system has grown so much since June 2014 that it has dominated its space. Here are three reasons for its rapid growth:
1. Optimized responsiveness: With Kubernetes, you maintain replica sets, meaning multiple pods run at the same time. You therefore don't have to worry about replicating an entire application, triggering a load balancer, and switching over to a secondary application. This keeps your application resilient, with maximum responsiveness and uptime.
2. Scalability: It allows you to multiply workloads and scale up and down with minimal issues. Its autoscaler can replicate pods automatically across nodes to maximize resource usage.
3. Flexibility: It's easy to adjust applications that are composed of granular components. Feature improvements don't have to be deployed in massive overhauls that disrupt usability; they can be rolled out in smaller, more manageable steps.
HOW TO USE KUBERNETES WITH DOCKER
Docker is an enterprise container platform; Kubernetes is a container orchestration system. The two should not be confused, since they solve different problems. In other words, Docker is like a bus, and Kubernetes is like a bus station. They work together.
With Docker, you can create containers that package and isolate software with whatever it needs to run. These containers allow you to be agile and build portable, secure apps. Docker makes it easy to pack and ship apps.
Once you’ve set up your containers in Docker, you still need them to be deployed and scaled using an orchestration system. This is where Kubernetes comes in. You don’t have to manually start servers and run commands when a container dies.
Kubernetes ensures that deployment, scaling and management of your containerized applications is done automatically.
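To sketch the handoff: you build and push an image with Docker, then point a Kubernetes Deployment at it (the registry and image names below are made up for illustration):

```yaml
# After `docker build -t registry.example.com/myapp:1.0 .` and a `docker push`,
# Kubernetes pulls the image, runs it, and restarts the container if it dies.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myapp:1.0  # the image Docker built and pushed
```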
ROLE OF KUBERNETES IN ENTERPRISES
Whatever its core business, every company is embracing digitalization, and the ability to adapt to rapid growth and competition is highly valued. Cloud-native technologies have arisen to meet these needs, providing the observability and automation necessary to manage applications and to scale with high velocity. Previously, businesses were often restricted to quarterly deployments of critical applications.
The Kubernetes declarative, API-driven infrastructure enables teams to work independently and helps operators concentrate on their business goals. These shifts in working culture have been shown to contribute to higher productivity and autonomy, while reducing the toil of development teams.
USE CASES
To give you an idea of how Kubernetes is used in companies, here are three examples:
1) CASE STUDY: adidas
Staying True to Its Culture, adidas Got 40% of Its Most Impactful Systems Running on Kubernetes in a Year
CHALLENGE
In recent years, the adidas team was happy with its software choices from a technology perspective — but accessing all of the tools was a problem. For instance, “just to get a developer VM, you had to send a request form, give the purpose, give the title of the project, who’s responsible, give the internal cost center a call so that they can do recharges,” says Daniel Eichten, Senior Director of Platform Engineering. “The best case is you got your machine in half an hour. Worst case is half a week or sometimes even a week.”
SOLUTION
To improve the process, “They started from the developer point of view,” and looked for ways to shorten the time it took to get a project up and running and into the adidas infrastructure, says Senior Director of Platform Engineering Fernando Cornago. They found the solution with containerization, agile development, continuous delivery, and a cloud native platform that includes Kubernetes and Prometheus.
IMPACT
Just six months after the project began, 100% of the adidas e-commerce site was running on Kubernetes. Load time for the e-commerce site was reduced by half. Releases went from every 4–6 weeks to 3–4 times a day. With 4,000 pods, 200 nodes, and 80,000 builds per month, adidas is now running 40% of its most critical, impactful systems on its cloud native platform.
2) CASE STUDY: IBM
Building an Image Trust Service on Kubernetes with Notary and TUF
CHALLENGE
IBM Cloud offers public, private, and hybrid cloud functionality across a diverse set of runtimes from its OpenWhisk-based function as a service (FaaS) offering, managed Kubernetes and containers, to Cloud Foundry platform as a service (PaaS). These runtimes are combined with the power of the company’s enterprise technologies, such as MQ and DB2, its modern artificial intelligence (AI) Watson, and data analytics services. Users of IBM Cloud can exploit capabilities from more than 170 different cloud native services in its catalog, including capabilities such as IBM’s Weather Company API and data services. In the later part of 2017, the IBM Cloud Container Registry team wanted to build out an image trust service.
SOLUTION
The work on this new service culminated with its public availability in the IBM Cloud in February 2018. The image trust service, called Portieris, is fully based on the Cloud Native Computing Foundation (CNCF) open source project Notary, according to Michael Hough, a software developer with the IBM Cloud Container Registry team. Portieris is a Kubernetes admission controller for enforcing content trust. Users can create image security policies for each Kubernetes namespace, or at the cluster level, and enforce different levels of trust for different images. Portieris is a key part of IBM's trust story, since it makes it possible for users to consume the company's Notary offering from within their IKS clusters. In this offering, the Notary server runs in IBM's cloud, while Portieris runs inside the IKS cluster. This lets users' IKS clusters verify that the images they load containers from contain exactly what they expect, with Portieris applying that verification.
IMPACT
IBM’s intention in offering a managed Kubernetes container service and image registry is to provide a fully secure end-to-end platform for its enterprise customers. “Image signing is one key part of that offering, and our container registry team saw Notary as the de facto way to implement that capability in the current Docker and container ecosystem,” Hough says. The company had not been offering image signing before, and Notary is the tool it used to implement that capability. “We had a multi-tenant Docker Registry with private image hosting,” Hough says. “The Docker Registry uses hashes to ensure that image content is correct, and data is encrypted both in flight and at rest. But it does not provide any guarantees of who pushed an image. We used Notary to enable users to sign images in their private registry namespaces if they so choose.”
3) CASE STUDY: Spotify
Spotify: An Early Adopter of Containers, Spotify Is Migrating from Homegrown Orchestration to Kubernetes
CHALLENGE
Launched in 2008, the audio-streaming platform has grown to over 200 million monthly active users across the world. “Our goal is to empower creators and enable a really immersive listening experience for all of the consumers that we have today — and hopefully the consumers we’ll have in the future,” says Jai Chakrabarti, Director of Engineering, Infrastructure and Operations. An early adopter of microservices and Docker, Spotify had containerized microservices running across its fleet of VMs with a homegrown container orchestration system called Helios. By late 2017, it became clear that “having a small team working on the features was just not as efficient as adopting something that was supported by a much bigger community,” he says.
SOLUTION
“We saw the amazing community that had grown up around Kubernetes, and we wanted to be part of that,” says Chakrabarti. Kubernetes was more feature-rich than Helios. Plus, “we wanted to benefit from added velocity and reduced cost, and also align with the rest of the industry on best practices and tools.” At the same time, the team wanted to contribute its expertise and influence in the flourishing Kubernetes community. The migration, which would happen in parallel with Helios running, could go smoothly because “Kubernetes fit very nicely as a complement and now as a replacement to Helios,” says Chakrabarti.
IMPACT
The team spent much of 2018 addressing the core technology issues required for a migration, which started late that year and is a big focus for 2019. “A small percentage of our fleet has been migrated to Kubernetes, and some of the things that we’ve heard from our internal teams are that they have less of a need to focus on manual capacity provisioning and more time to focus on delivering features for Spotify,” says Chakrabarti. The biggest service currently running on Kubernetes takes about 10 million requests per second as an aggregate service and benefits greatly from autoscaling, says Site Reliability Engineer James Wen. Plus, he adds, “Before, teams would have to wait for an hour to create a new service and get an operational host to run it in production, but with Kubernetes, they can do that on the order of seconds and minutes.” In addition, with Kubernetes’s bin-packing and multi-tenancy capabilities, CPU utilization has improved on average two- to threefold.
WHEN DID KUBERNETES START?
Google announced Kubernetes in June 2014, and version 1.0 was released on July 21, 2015. It is not like a conventional, all-inclusive Platform as a Service (PaaS) system. Since it operates at the container level rather than the hardware level, Kubernetes has made it much more comfortable for enterprises to develop container-based applications while offering many features common to PaaS systems, including load balancing, deployment, logging, scaling, and monitoring.
However, Kubernetes is not monolithic; its default solutions are optional and pluggable. It succeeds in providing building blocks for developer platforms while preserving flexibility and user choice where essential. Kubernetes has become one of the most positively impactful technologies in the industry, shaping how software is deployed for scaling and flexibility in big open-source projects.
RISE OF KUBERNETES (K8s)
The rise of application containers has proven to be the most critical precursor to Kubernetes. Docker was the first tool that made containers usable by a wide range of businesses; it came to market as an open-source project in 2013. It enables developers to achieve easier application deployment, better management, flexibility, and scalability.
Containers have made stateless applications more flexible to scale and provide an immutable deployment artifact. They have drastically reduced the number of variables that previously differed between test and production systems. But while containers offered substantial stand-alone value for developers and businesses, the next challenge was managing and delivering services, architectures, and applications that span multiple containers and hosts.
Google had already encountered these issues in its own IT infrastructure: running the world's most popular search engine, with millions of users and products, drove its innovation and adoption of containers. Kubernetes was inspired by Borg, Google's internal platform used for managing and scheduling billions of containers to implement its services.
Since its launch, Kubernetes has proven to be more than just Borg for everyone. It distills the most reliable API patterns and architectures of prior software and couples them with modern authorization policies, load balancing, and the other features required to manage and run applications at massive scale. In turn, this provides developers with cluster abstractions that enable true portability across clouds.
With the explosion of innovation around Kubernetes, businesses began analyzing the obstacles to full adoption. Many industry giants have invested resources in it and entrusted it with mission-critical workloads, and this wave of adoption has swept Kubernetes to the forefront of the crowded container-management space.
DRAWBACKS OF KUBERNETES
- Kubernetes can be overkill for simple applications.
- Kubernetes is very complex and can reduce productivity.
- The transition to Kubernetes can be cumbersome.
- Kubernetes can be more expensive than its alternatives.
Kubernetes: A Remarkable Breakthrough for Developers.
Kubernetes enables you to deliver and scale consistently and predictably. It makes it easy to use and manage containerized applications. You’re then left to focus on creating new features and fixing bugs rapidly, without worrying much about deployment.
It also ensures that you fully optimize your usage of machines. For that reason, it lowers the cost of cloud subscriptions and simplifies operations.
Once again, Kubernetes is one powerful system that you simply can’t ignore!
RECENT NEWS:
Docker support is being deprecated in Kubernetes
A tweet by Kubernetes SIG Security co-chair Ian Coldwater didn't help matters, either: “Docker support is being deprecated in Kubernetes. You need to pay attention to this and plan for it. THIS WILL BREAK YOUR CLUSTERS.”
The move might come as a shock to anyone who’s been busy spinning up containers and not paying attention to the development of Kubernetes. But it really isn’t such a big deal.
If you’re wondering what’s a container runtime, it’s best explained in a trending tweet by Google Cloud Platform’s Staff Developer Advocate Kelsey Hightower: “Docker != Containers. There are container images. Docker can build them. There are container registries. Docker can push and pull from them. There are container runtimes. Docker is one of them. There are container processes. Docker can create them but Linux is still the boss.”
So all that has happened is that Kubernetes is deprecating (and will eventually remove) Docker as a container runtime in favor of runtimes that use the Container Runtime Interface (CRI), such as containerd and CRI-O.
For end users of Kubernetes, there shouldn't be much fallout from this move, as the developers explain: “Docker-produced images will continue to work in your cluster with all runtimes, as they always have.”
However, if you're rolling your own clusters, you'll need to make sure that you don't use Docker as a container runtime going forward. If you do, you'll get a deprecation warning with the current v1.20 release.
If you don’t want your clusters to break, make sure you switch to one of the compliant container runtimes before the runtime support for Docker is removed, which is currently planned for v1.22 due in late 2021.
WHY SHOULD ONE LEARN KUBERNETES?
First of all, it is definitely worth learning Kubernetes, because it is the most popular and widely used open-source platform for container orchestration. Containers themselves are widely used because they offer faster startup, processing, and execution of software. Another advantage that makes Kubernetes preferred among developers is that it can run on a local machine, such as a laptop, via Minikube. Minikube is a smaller version of Kubernetes that can perform all the tasks of a full cluster on a desktop.
If you are into containers, cloud, orchestration of containers, then it is good to learn Kubernetes.