Posted by Belbinson Toby, Thursday, June 4, 2020, 11:15 PM
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace ConsoleApp1
{
    // Two interfaces that declare a method with the same name.
    interface IBMW
    {
        void CreateCar();
    }

    interface IToyota
    {
        void CreateCar();
    }

    // Explicit interface implementation: CarFactory provides a separate
    // CreateCar body for each interface it implements.
    public class CarFactory : IBMW, IToyota
    {
        void IBMW.CreateCar()
        {
            Console.WriteLine("Creating BMW");
        }

        void IToyota.CreateCar()
        {
            Console.WriteLine("Creating Toyota");
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            // The explicit implementations are reachable only through the
            // interface references, not through a CarFactory reference.
            IBMW objBMW = new CarFactory();
            IToyota objToyota = new CarFactory();

            objBMW.CreateCar();      // prints "Creating BMW"
            objToyota.CreateCar();   // prints "Creating Toyota"
            Console.ReadLine();
        }
    }
}
Posted by Belbinson Toby, Thursday, April 30, 2020, 8:50 PM
A deployment represents
one or more identical pods, managed by the Kubernetes Deployment
Controller. A deployment defines the number of replicas (pods)
to create, and the Kubernetes Scheduler ensures that if pods or nodes
encounter problems, additional pods are scheduled on healthy nodes.
You
can update deployments to change the configuration of pods, container
image used, or attached storage. The Deployment Controller drains and
terminates a given number of replicas, creates replicas from the new
deployment definition, and continues the process until all replicas
in the deployment are updated.
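How many replicas the Deployment Controller replaces at a time is controlled by the rolling-update settings in the deployment spec. As a minimal sketch (a fragment of a deployment manifest, with illustrative values rather than values taken from any particular workload):

spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one replica below the desired count during the update
      maxSurge: 1         # at most one extra replica created above the desired count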
Most
stateless applications in AKS should use the deployment model rather
than scheduling individual pods. Kubernetes can monitor the health
and status of deployments to ensure that the required number of
replicas run within the cluster. When you only schedule individual
pods, the pods aren't restarted if they encounter a problem, and
aren't rescheduled on healthy nodes if their current node encounters
a problem.
If
an application requires a quorum of instances to always be available
for management decisions to be made, you don't want an update process
to disrupt that ability. Pod
Disruption Budgets can
be used to define how many replicas in a deployment can be taken down
during an update or node upgrade. For example, if you have 5 replicas
in your deployment, you can define a pod disruption of 4 to
only permit one replica from being deleted/rescheduled at a time. As
with pod resource limits, a best practice is to define pod disruption
budgets on applications that require a minimum number of replicas to
always be present.
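As a sketch of what such a budget looks like, the manifest below keeps at least four of the five replicas available during voluntary disruptions; the name and the app: nginx label selector are assumptions made here for illustration (newer clusters use the policy/v1 API, older ones policy/v1beta1):

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: nginx-pdb          # assumed name
spec:
  minAvailable: 4          # at most one of the five replicas may be evicted at a time
  selector:
    matchLabels:
      app: nginx           # assumed label on the deployment's pods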
Deployments
are typically created and managed with
kubectl
create or kubectl
apply.
To create a deployment, you define a manifest file in the YAML (YAML
Ain't Markup Language) format. The following example creates a basic
deployment of the NGINX web server. The deployment
specifies 3 replicas
to be created, and that port 80 be
open on the container. Resource requests and limits are also defined
for CPU and memory.
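The manifest below is a minimal sketch of that deployment; the image tag and the specific request and limit figures are illustrative assumptions rather than values preserved from the original example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3                          # three identical pods
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.2-alpine     # assumed image tag
        ports:
        - containerPort: 80            # open port 80 on the container
        resources:
          requests:                    # resources the scheduler reserves for the pod
            cpu: 250m
            memory: 64Mi
          limits:                      # ceiling the container may not exceed
            cpu: 500m
            memory: 256Mi

Saved to a file, the manifest is submitted with kubectl apply -f followed by the file name.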
Posted by Belbinson Toby, 8:34 PM
Kubernetes
uses pods to
run an instance of your application. A pod represents a single
instance of your application. Pods typically have a 1:1 mapping with
a container, although there are advanced scenarios where a pod may
contain multiple containers. These multi-container pods are scheduled
together on the same node, and allow containers to share related
resources.
When
you create a pod, you can define resource
requests to
request a certain amount of CPU or memory resources. The Kubernetes
Scheduler tries to schedule the pods to run on a node with available
resources to meet the request. You can also specify maximum resource
limits that prevent a given pod from consuming too much compute
resource from the underlying node. A best practice is to include
resource limits for all pods to help the Kubernetes Scheduler
understand which resources are needed and permitted.
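As a minimal sketch of how that looks in a pod spec (the pod name, image, and figures here are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx:1.15.2-alpine   # assumed image
    resources:
      requests:                  # amount the Kubernetes Scheduler reserves when placing the pod
        cpu: 100m
        memory: 128Mi
      limits:                    # maximum the container may consume on the node
        cpu: 250m
        memory: 256Mi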
For
more information, see Kubernetes
pods and Kubernetes
pod lifecycle.
A
pod is a logical resource, but the container(s) are where the
application workloads run. Pods are typically ephemeral, disposable
resources, and individually scheduled pods miss some of the high
availability and redundancy features Kubernetes provides. Instead,
pods are usually deployed and managed by Kubernetes Controllers,
such as the Deployment Controller.
Posted by Belbinson Toby, 8:22 PM
To
run your applications and supporting services, you need a
Kubernetes node.
An AKS cluster has one or more nodes; each node is an Azure virtual
machine (VM) that runs the Kubernetes node components and the container
runtime:
- The kubelet is the Kubernetes agent that processes orchestration requests from the control plane and handles scheduling and running the requested containers.
- Virtual networking is handled by the kube-proxy on each node. The proxy routes network traffic and manages IP addressing for services and pods.
- The container runtime is the component that allows containerized applications to run and interact with additional resources such as the virtual network and storage. In AKS, Moby is used as the container runtime.
The
Azure VM size for your nodes defines how many CPUs, how much memory,
and the size and type of storage available (such as high-performance
SSD or regular HDD). If you anticipate a need for applications that
require large amounts of CPU and memory or high-performance storage,
plan the node size accordingly. You can also scale out the number of
nodes in your AKS cluster to meet demand.
In
AKS, the VM image for the nodes in your cluster is currently based on
Ubuntu Linux or Windows Server 2019. When you create an AKS cluster
or scale out the number of nodes, the Azure platform creates the
requested number of VMs and configures them. There's no manual
configuration for you to perform. Agent nodes are billed as standard
virtual machines, so any discounts you have on the VM size you're
using (including Azure
reservations)
are automatically applied.
If
you need to use a different host OS or container runtime, or to include
custom packages, you can deploy your own Kubernetes cluster
using aks-engine.
The upstream
aks-engine releases
features and provides configuration options before they are
officially supported in AKS clusters. For example, if you wish to use
a container runtime other than Moby, you can use aks-engine to
configure and deploy a Kubernetes cluster that meets your current
needs.
Posted by Belbinson Toby, 8:10 PM
When
you create an AKS cluster, a control plane is automatically created
and configured. This control plane is provided as a managed Azure
resource abstracted from the user. There's no cost for the control
plane, only the nodes that are part of the AKS cluster.
The
control plane includes the following core Kubernetes components:
- kube-apiserver - The API server is how the underlying Kubernetes APIs are exposed. This component provides the interaction for management tools, such as
kubectl or the Kubernetes dashboard.
- etcd - To maintain the state of your Kubernetes cluster and configuration, the highly available etcd is a key value store within Kubernetes.
- kube-scheduler - When you create or scale applications, the Scheduler determines what nodes can run the workload and starts them.
- kube-controller-manager - The Controller Manager oversees a number of smaller Controllers that perform actions such as replicating pods and handling node operations.
AKS
provides a single-tenant control plane, with a dedicated API server,
Scheduler, etc. You define the number and size of the nodes, and the
Azure platform configures the secure communication between the
control plane and nodes. Interaction with the control plane occurs
through Kubernetes APIs, such as
kubectl or
the Kubernetes dashboard.
This
managed control plane means that you don't need to configure
components like a highly available etcd store,
but it also means that you can't access the control plane directly.
Upgrades to Kubernetes are orchestrated through the Azure CLI or
Azure portal, which upgrades the control plane and then the nodes. To
troubleshoot possible issues, you can review the control plane logs
through Azure Monitor logs.
If
you need to configure the control plane in a particular way or need
direct access to it, you can deploy your own Kubernetes cluster
using aks-engine.
Posted by Belbinson Toby, 7:51 PM
Kubernetes
is a rapidly evolving platform that manages container-based
applications and their associated networking and storage components.
The focus is on the application workloads, not the underlying
infrastructure components. Kubernetes provides a declarative approach
to deployments, backed by a robust set of APIs for management
operations.
You
can build and run modern, portable, microservices-based applications
that benefit from Kubernetes orchestrating and managing the
availability of those application components. Kubernetes supports
both stateless and stateful applications as teams progress through
the adoption of microservices-based applications.
As
an open platform, Kubernetes allows you to build your applications
with your preferred programming language, OS, libraries, or messaging
bus. Existing continuous integration and continuous delivery (CI/CD)
tools can integrate with Kubernetes to schedule and deploy releases.
Azure
Kubernetes Service (AKS) provides a managed Kubernetes service that
reduces the complexity for deployment and core management tasks,
including coordinating upgrades. The AKS control plane is managed by
the Azure platform, and you only pay for the AKS nodes that run your
applications.
Posted by Belbinson Toby, 3:07 AM
Azure
Kubernetes Service (AKS) makes it simple to deploy a managed
Kubernetes cluster in Azure. AKS reduces the complexity and
operational overhead of managing Kubernetes by offloading much of
that responsibility to Azure. As a hosted Kubernetes service, Azure
handles critical tasks like health monitoring and maintenance for
you. The Kubernetes masters are managed by Azure. You only manage and
maintain the agent nodes. As a managed Kubernetes service, AKS is
free - you only pay for the agent nodes within your clusters, not for
the masters.
You
can create an AKS cluster in the Azure portal, with the Azure CLI, or
with template-driven deployment options such as Resource Manager templates
and Terraform. When you deploy an AKS cluster, the Kubernetes master
and all nodes are deployed and configured for you. Additional
features such as advanced networking, Azure Active Directory
integration, and monitoring can also be configured during the
deployment process. Windows Server containers are supported in AKS.