Posted by Belbinson Toby, Thursday, April 30, 2020, 8:50 PM
A deployment represents one or more identical pods, managed by the Kubernetes Deployment Controller. A deployment defines the number of replicas (pods) to create, and the Kubernetes Scheduler ensures that if pods or nodes encounter problems, additional pods are scheduled on healthy nodes.

You can update deployments to change the configuration of pods, the container image used, or attached storage. The Deployment Controller drains and terminates a given number of replicas, creates replicas from the new deployment definition, and continues the process until all replicas in the deployment are updated.
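How aggressively the Deployment Controller replaces replicas is governed by the deployment's update strategy. As a minimal sketch (the values shown are illustrative, not required defaults), the relevant stanza of a deployment manifest looks like this:

    spec:
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1   # at most one replica may be down during the update
          maxSurge: 1         # at most one extra replica above the desired count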
Most stateless applications in AKS should use the deployment model rather than scheduling individual pods. Kubernetes can monitor the health and status of deployments to ensure that the required number of replicas run within the cluster. When you only schedule individual pods, the pods aren't restarted if they encounter a problem, and aren't rescheduled on healthy nodes if their current node encounters a problem.
If an application requires a quorum of instances to always be available for management decisions to be made, you don't want an update process to disrupt that ability. Pod Disruption Budgets can be used to define how many replicas in a deployment can be taken down during an update or node upgrade. For example, if you have 5 replicas in your deployment, you can define a Pod Disruption Budget that requires 4 replicas to remain available, which permits only one replica to be deleted or rescheduled at a time. As with pod resource limits, a best practice is to define Pod Disruption Budgets on applications that require a minimum number of replicas to always be present.
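A minimal sketch of such a Pod Disruption Budget follows; the name and the app: myapp label are placeholders, and on clusters older than Kubernetes 1.21 the apiVersion would be policy/v1beta1 rather than policy/v1:

    apiVersion: policy/v1
    kind: PodDisruptionBudget
    metadata:
      name: myapp-pdb          # hypothetical name
    spec:
      minAvailable: 4          # with 5 replicas, at most one may be disrupted at a time
      selector:
        matchLabels:
          app: myapp           # must match the labels on the deployment's pods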
Deployments are typically created and managed with kubectl create or kubectl apply. To create a deployment, you define a manifest file in the YAML (YAML Ain't Markup Language) format. The following example creates a basic deployment of the NGINX web server. The deployment specifies 3 replicas to be created, and that port 80 be open on the container. Resource requests and limits are also defined for CPU and memory.
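A manifest matching that description is sketched below; the image tag and the specific resource values are illustrative:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
    spec:
      replicas: 3                    # three identical pods
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:1.15.2      # illustrative image tag
            ports:
            - containerPort: 80
            resources:
              requests:              # used by the scheduler for placement
                cpu: 250m
                memory: 64Mi
              limits:                # hard cap enforced on the node
                cpu: 500m
                memory: 256Mi

Save the manifest (for example as nginx-deployment.yaml) and submit it with kubectl apply -f nginx-deployment.yaml.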
Posted by Belbinson Toby, 8:34 PM
Kubernetes uses pods to run an instance of your application. A pod represents a single instance of your application. Pods typically have a 1:1 mapping with a container, although there are advanced scenarios where a pod may contain multiple containers. These multi-container pods are scheduled together on the same node, and allow containers to share related resources.
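As a sketch of such a multi-container pod, the two containers below share an emptyDir volume; every name, image, and path is a placeholder:

    apiVersion: v1
    kind: Pod
    metadata:
      name: web-with-logger          # hypothetical pod name
    spec:
      volumes:
      - name: shared-logs
        emptyDir: {}                 # scratch volume shared by both containers
      containers:
      - name: web
        image: nginx:1.15.2          # placeholder image
        volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx
      - name: log-reader
        image: busybox:1.31          # placeholder sidecar image
        command: ["sh", "-c", "while true; do tail -n 5 /logs/access.log 2>/dev/null; sleep 30; done"]
        volumeMounts:
        - name: shared-logs
          mountPath: /logs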
When you create a pod, you can define resource requests to request a certain amount of CPU or memory resources. The Kubernetes Scheduler tries to schedule the pods to run on a node with available resources to meet the request. You can also specify maximum resource limits that prevent a given pod from consuming too much compute resource from the underlying node. A best practice is to include resource limits for all pods to help the Kubernetes Scheduler understand which resources are needed and permitted.
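A minimal pod spec with requests and limits might look like the following; the names and values are illustrative:

    apiVersion: v1
    kind: Pod
    metadata:
      name: myapp-pod                # hypothetical name
    spec:
      containers:
      - name: myapp
        image: nginx:1.15.2          # placeholder image
        resources:
          requests:                  # what the scheduler reserves for placement
            cpu: 100m                # 100 millicores, i.e. 0.1 CPU
            memory: 128Mi
          limits:                    # ceiling enforced on the node
            cpu: 250m
            memory: 256Mi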
For more information, see Kubernetes pods and Kubernetes pod lifecycle.
A pod is a logical resource, but the container(s) are where the application workloads run. Pods are typically ephemeral, disposable resources, and individually scheduled pods miss some of the high availability and redundancy features Kubernetes provides. Instead, pods are usually deployed and managed by Kubernetes Controllers, such as the Deployment Controller.
Posted by Belbinson Toby, 8:22 PM
To run your applications and supporting services, you need a Kubernetes node. An AKS cluster has one or more nodes; each node is an Azure virtual machine (VM) that runs the Kubernetes node components and container runtime:
- The kubelet is the Kubernetes agent that processes orchestration requests from the control plane and schedules the running of the requested containers.
- Virtual networking is handled by the kube-proxy on each node. The proxy routes network traffic and manages IP addressing for services and pods.
- The container runtime is the component that allows containerized applications to run and interact with additional resources such as the virtual network and storage. In AKS, Moby is used as the container runtime.
The Azure VM size for your nodes defines how many CPUs, how much memory, and the size and type of storage available (such as high-performance SSD or regular HDD). If you anticipate a need for applications that require large amounts of CPU and memory or high-performance storage, plan the node size accordingly. You can also scale out the number of nodes in your AKS cluster to meet demand.
In AKS, the VM image for the nodes in your cluster is currently based on Ubuntu Linux or Windows Server 2019. When you create an AKS cluster or scale out the number of nodes, the Azure platform creates the requested number of VMs and configures them. There's no manual configuration for you to perform. Agent nodes are billed as standard virtual machines, so any discounts you have on the VM size you're using (including Azure reservations) are automatically applied.
If you need to use a different host OS or container runtime, or to include custom packages, you can deploy your own Kubernetes cluster using aks-engine. The upstream aks-engine releases features and provides configuration options before they are officially supported in AKS clusters. For example, if you wish to use a container runtime other than Moby, you can use aks-engine to configure and deploy a Kubernetes cluster that meets your current needs.