Introduction
VMware Cloud on AWS enables your IT and Operations teams to add value to your investments in AWS by extending your on-premises VMware vSphere environments to the AWS cloud. VMware Cloud on AWS is an integrated cloud offering jointly developed by Amazon Web Services (AWS) and VMware. It is optimized to run on dedicated, elastic, bare-metal Amazon Elastic Compute Cloud (Amazon EC2) infrastructure.
By running VMware Tanzu within the same infrastructure as the general VM workloads, organizations can immediately start their modern application development strategy without incurring additional costs. For example, you can use spare SDDC capacity to run Tanzu Kubernetes Grid to enable next-generation application modernization, or compute power not used by disaster recovery can be used for Tanzu Kubernetes Grid clusters.
Tanzu Kubernetes Grid Managed Service Architecture
This reference architecture details how to use Tanzu services in a VMware Cloud on AWS SDDC. The Tanzu Kubernetes Grid (TKG) managed service used in this architecture deploys a pair of vSphere Namespaces and their Tanzu Kubernetes clusters. Tanzu Mission Control (TMC) deploys Fluent Bit extensions for log collection and aggregation with vRealize Log Insight Cloud. TMC can also deploy Tanzu Observability to monitor Kubernetes metrics at scale. TMC configures container and persistent storage backups through Velero, using Amazon S3 as a durable object storage location.
- Deploy the VMware Cloud on AWS SDDC in the desired region and availability zone.
- Virtual machines run within the same SDDC on their own network segments connected to the VMware Cloud Compute Gateway and protected by firewall rules. These networks are typical but not required for the Tanzu Kubernetes Grid Service.
- The cloud administrator activates the Tanzu Kubernetes Grid service, deploying the supervisor cluster into the VMware-managed network segments. This network placement is comparable to deploying vCenter and NSX appliances within VMware Cloud on AWS SDDC.
- After successfully activating the TKG service, the cloud administrator registers the TKG Supervisor Cluster as a management cluster within Tanzu Mission Control.
- The cloud administrator creates vSphere Namespaces from within the vCenter’s vSphere Client. These vSphere Namespaces provide resource and tenant isolation capabilities. The cloud administrator assigns resource limits for CPU, memory, and storage and then enables access to these resources for users via vCenter Single Sign-On. Each vSphere Namespace provisioned creates a new Tier-1 Gateway and connects a new network segment with the SDDC router. The vSphere Namespace network segments come from the CIDR ranges provided during the TKG activation process.
- Platform operators deploy new Tanzu Kubernetes clusters into their assigned vSphere Namespaces through the kubectl CLI or Tanzu Mission Control.
- The platform operators can use Tanzu Mission Control to set Kubernetes Policies for their clusters and use the TMC Catalog to deploy tools such as Fluent Bit configured to work with vRealize Log Insight Cloud for log aggregation, Tanzu Observability for capturing performance monitoring and metrics, or Harbor Image Registry for OCI-compliant images.
- Platform operators can configure an Amazon S3 storage location for a backup destination configured through Tanzu Mission Control.
TASKS
Unlike earlier labs in this workshop, you will share a 4-node SDDC with other students (you do not have a dedicated SDDC for your exclusive use). For this reason, the majority of the tasks you will carry out relate to the DevOps persona. All ITOps-related tasks, except for creating a vSphere Namespace, have been completed by the instructor. Those tasks include, but are not limited to:
- Activating the TKG service and deploying the Supervisor Cluster in vSphere
- Setting up the Gateway firewall rules to allow access for containerized applications
- Installing and configuring the CLI tools needed to manage the TKG cluster(s)
- etc.
While the creation of a vSphere Namespace is an ITOps task, you will perform it here; beyond that, all other tasks are DevOps-related.
vSphere Namespaces are set up by the vSphere Admin, run in the context of the Supervisor Cluster, and allow admins to control resource limits and other policies.
- If you are no longer logged into your VDI desktop or have lost the RDP session to the Tanzu Desktop, follow the instructions in Lab 10 Task 1 (Steps 1 through 8) before continuing to step 2
- From the Tanzu Desktop, launch Firefox, Edge, or Brave
(NOTE: You will use this browser instance to access the SDDC vCenter as CloudAdmin)
- In the browser, go to https://vmc.vmware.com/sddcs/console
- Log in as:
- vmcexpert{1|2|3}-##@27virtual.net (where ## is your student number), e.g., [email protected]
- {Password-Provided-by-Instructor}
- In the upper right-hand corner, click the dropdown next to <your-username>
- Confirm you are logged into the correct Organization; if not, select it
Note: Your correct Organization should be {Your-VMCEXPERT-Environment}-VCDR01, e.g., VMCEXPERT2-VCDR01
- In the SDDC Tile, click Open vCenter
- Click Show Credential
- Copy the Default vCenter User Password (Store this for future use)
- Click Open vCenter
- Log into vCenter as:
- User Name: [email protected]
- Password: (Paste in the Password from step 9)
- Once logged into vCenter, inspect the Hosts & Clusters inventory view
- Take note of the Supervisor Cluster control VMs in the Mgmt-ResourcePool Resource Pool
Also note the Namespaces Resource Pool
- In the upper left-hand corner, click the Hamburger Menu and select Workload Management
- Click the Supervisors tab to confirm the Supervisor Cluster exists and is in a Healthy state
- Click the Namespaces tab
- Click New Namespace
- Expand vCenter (vcenter.sddc.xx.xx.xx.xx.vmwarevmc.com) and select Cluster-1
- Type {Your Username} in the Name field, e.g., vmcexpert3-33
- Click Create
We will now add some controls around the Namespace by limiting access through RBAC, adding a Storage Policy, and selecting the VM Classes that can be used to create a Tanzu Cluster. We will also log into the Namespace via the CLI.
- On the Namespace you just created, click Add Permissions to restrict this Namespace to your DevOps user account
- In the Add Permissions dialog, choose/enter the following:
- Identity Source: 27Virtual.net
- User/Group: {Your User Name}, e.g., VMCEXPERT3-33
- Role: Owner
- Click OK
- In the Storage tile, click Add Storage
- In the Select Storage Policies dialog, select vSAN Default Storage Policy
- Click OK
- In the VM Service tile, click Add VM Class
- In the Add VM Class dialog, select best-effort-medium, best-effort-small, and best-effort-xsmall
- Click OK
- On the Status tile in the vCenter console, click Open to open the link to the Kubernetes Control Plane
- Copy and store this URL (do not include the https:// prefix or the trailing "/").
Note: You can store this in your lab input workbook
- Launch the Windows Terminal. In the PowerShell window, type the following command to access your namespace:
kubectl vsphere login --server={Kubernetes Control Plane Endpoint}
- When prompted for credentials:
- Username: {enter your DevOps Username}, e.g., [email protected]
- Password: {Password-Provided-by-Instructor}
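If the login succeeds, the kubectl vsphere plugin adds one or more contexts to your local KUBECONFIG file. As an optional sanity check (the context names shown will be specific to your environment), you can list them:

kubectl config get-contexts
kubectl config current-context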
A Tanzu Kubernetes cluster is a full distribution of the open-source Kubernetes container orchestration platform that is built, signed, and supported by VMware. You can provision and operate Tanzu Kubernetes clusters on the Supervisor Cluster by using the Tanzu Kubernetes Grid Service. A Supervisor Cluster is a vSphere cluster that is enabled with vSphere with Tanzu.
When you deploy a workload cluster, most of the configuration for the cluster is the same as the configuration of the management cluster that you use to deploy it. Because of this, the easiest way to create a configuration file for a workload is to start with a copy of the management cluster configuration file.
- Launch the Windows Terminal. In the PowerShell window, type the following command to access your namespace
kubectl vsphere login --server={Kubernetes Control Plane Endpoint} --tanzu-kubernetes-cluster-namespace={Your namespace created in task 1}
- When prompted for credentials:
- Username: {enter your DevOps Username}, e.g., [email protected]
- Password: {Password-Provided-by-Instructor}
kubectl config use-context {Your namespace created in task 1}
We will now edit and save the TKC manifest file, which we'll use to create the Tanzu Cluster; an illustrative sketch of a similar manifest appears after the note at the end of this task.
- From your Tanzu Desktop, in Windows Explorer, navigate to C:\lab_files\vce\TMC
- Locate the vmcexpert#.XX-cluster file
- Double-click the file to edit it in Visual Studio Code
- With the file opened, review its content. Observe the settings for storageClass, vmClass, and replicas that will be used for this cluster
- Edit lines 6, 7, 16 & 21:
- name: {Your Student name}-cluster (NOTE: the value should have no braces {})
- namespace: {Your student name}
- vmClass: best-effort-xsmall
- Save the file as {Your user name}-cluster.yml
- In Windows terminal, change directory to c:\Lab_Files\VCE\TMC
- List the files to confirm the cluster manifest file you created in steps 7 - 8 exists
- Run the following commands to create the cluster and verify it:
kubectl apply -f .\{your username}-cluster.yml
kubectl get cluster {your username}-cluster
NOTE: While the CLI may report that the cluster has been provisioned, it may actually take up to 5 minutes for all the vSphere- and NSX-based tasks to complete. These tasks include:
- Deploying and configuring the Kubernetes cluster VMs
- Creating a dedicated NSX network segment for the namespace
- Deploying a T1 gateway for the Namespace
- etc.
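For reference, the cluster manifest you edited above likely resembles the following sketch of a TanzuKubernetesCluster resource. This is an illustration only: the apiVersion, Tanzu Kubernetes release (tkr) name, replica counts, and storage class name are assumptions and may differ from the lab file.

apiVersion: run.tanzu.vmware.com/v1alpha2
kind: TanzuKubernetesCluster
metadata:
  name: vmcexpert3-33-cluster        # line 6: your {username}-cluster value
  namespace: vmcexpert3-33           # line 7: your vSphere Namespace
spec:
  topology:
    controlPlane:
      replicas: 1
      vmClass: best-effort-xsmall    # line 16: a VM class added to the Namespace
      storageClass: vsan-default-storage-policy    # assumed storage class name
      tkr:
        reference:
          name: v1.21.6---vmware.1-tkg.1           # assumed Kubernetes release
    nodePools:
    - name: worker-pool
      replicas: 2
      vmClass: best-effort-xsmall    # line 21 (assumed position in the file)
      storageClass: vsan-default-storage-policy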
With the cluster now provisioned, we will re-authenticate to the TKG Supervisor Service to get a new cluster context in your KUBECONFIG file.
- Run the following login command to update the KUBECONFIG file:
kubectl vsphere login --server={Kubernetes Control Plane Endpoint} --tanzu-kubernetes-cluster-namespace={Your namespace} --tanzu-kubernetes-cluster-name={Your Cluster name}
- When prompted for credentials:
- Username: {enter your DevOps Username}, e.g., [email protected]
- Password: {Password-Provided-by-Instructor}
kubectl config use-context {Your cluster name}
kubectl get nodes
- In the vSphere UI in the browser instance, switch back to "Hosts & Clusters" view
- Expand the Namespaces Resource Pool
- Identify your namespace and expand it
- Identify your cluster and expand it
- Notice the nodes (VMs) that were deployed when you created your cluster
Pods are the smallest execution unit in a Kubernetes cluster. In Kubernetes, containers do not run directly on cluster nodes; instead, one or more containers are encapsulated in a pod. All applications in a pod share the same resources and local network, easing communication between applications in a pod. Pods utilize an agent on each node, called the kubelet, to communicate with the Kubernetes API and the rest of the cluster. Although developers need API access, management of pods is transitioning to the domain of DevOps.
A Kubernetes pod is a collection of one or more Linux® containers, and is the smallest unit of a Kubernetes application. Any given pod can be composed of multiple, tightly coupled containers (an advanced use case) or just a single container (a more common use case).
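To make this concrete, a minimal single-container pod manifest looks like the following (the name and labels here are illustrative; the lab's nginx2.yaml may be organized differently):

apiVersion: v1
kind: Pod
metadata:
  name: nginx2
  labels:
    app: nginx
spec:
  containers:
  - name: nginx          # a single container in the pod
    image: nginx         # public NGINX image from Docker Hub
    ports:
    - containerPort: 80  # port the container listens on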
Difference between Kubernetes pods and nodes
Pods are an abstraction of executable code, while nodes are abstractions of computer hardware, so the comparison is a bit apples-and-oranges.
Pods are simply the smallest unit of execution in Kubernetes, consisting of one or more containers, each with one or more applications and their binaries.
Nodes are the physical servers or VMs that comprise a Kubernetes Cluster. Nodes are interchangeable and typically not addressed individually by users or IT, other than when maintenance is required.
A Kubernetes Deployment is used to tell Kubernetes how to create or modify instances of the pods that hold a containerized application. Deployments can scale the number of replica pods, enable the rollout of updated code in a controlled manner, or roll back to an earlier deployment version if necessary.
Benefits of Kubernetes Deployment
- Kubernetes automates the work and repetitive manual functions that are involved in deploying, scaling, and updating applications in production.
- Since the Kubernetes deployment controller is always monitoring the health of pods and nodes, it can replace a failed pod or bypass nodes that go down, rescheduling those pods to ensure continuity of critical applications.
- Deployments automate the launching of pod instances and ensure they are running as defined across all the nodes in the cluster. More automation translates to faster deployments with fewer errors.
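As an illustration, a Deployment that runs three replicas of an NGINX pod, similar to what this lab's deployment-manifest.yaml presumably defines, might look like this (the name and label values are assumptions):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myfirst-deployment
spec:
  replicas: 3                  # desired number of pod replicas
  selector:
    matchLabels:
      app: nginx               # the Deployment manages pods with this label
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx           # updating this image triggers a rolling update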
- The Authentication token stored in your local KUBECONFIG file expires every 10 hours. You may want to re-authenticate to the TKG Service before starting the lab to ensure you have access to the Supervisor cluster.
Run the following login command to update the KUBECONFIG file:
kubectl vsphere login --server={Kubernetes Control Plane Endpoint} --tanzu-kubernetes-cluster-namespace={Your namespace} --tanzu-kubernetes-cluster-name={Your Cluster name}
- When prompted for credentials:
- Username: {enter your DevOps Username}, e.g., [email protected]
- Password: {Password-Provided-by-Instructor}
kubectl config use-context {Your cluster name}
Now let's create our first Kubernetes Pod
- Run the following commands to create an NGINX pod and view it
kubectl run nginx --image=nginx
kubectl get pods
We will now deploy a 2nd pod from a YAML file
- In Windows Explorer, navigate to c:\Lab_Files\VCE\Pods
- Open the nginx2.yaml file
- Examine the content of the file to determine what it specifies
- In Windows terminal, change directory to c:\Lab_Files\VCE\Pods
- List the contents of the directory to confirm the nginx2 file exists
- Let's create the Pod and examine it by running the following commands:
kubectl apply -f .\nginx2.yaml
kubectl get pod nginx2
Now let's create our 3rd pod and examine how to retrieve information about deployed pods
- In Windows Terminal, run the following commands to deploy an Ubuntu pod that echoes to standard output, view details of the pod, and retrieve the standard output:
kubectl run ubuntu --image=ubuntu -- echo "Deploy human virtues of Compassion and Humanity"
kubectl get pod ubuntu
kubectl describe pod ubuntu
kubectl logs ubuntu
What does the log show and why?
- To get the YAML of running Ubuntu Pod, run the following command:
kubectl get pod ubuntu -o yaml
The result is that the API server returns the declarative YAML that can be used to build a new Kubernetes manifest.
- Now, let's clean up after ourselves and remove the pods we deployed. To do so, run the following commands:
kubectl get pods
kubectl delete pods ubuntu nginx2 nginx
kubectl get pods
Now, we will create a Kubernetes deployment.
A Deployment provides declarative updates for Pods and ReplicaSets.
You describe a desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.
We will begin these lab steps by first disabling Pod Security Policies.
Pod Security Policies (PSPs) are sometimes used to limit what access pods have within a Kubernetes cluster. For example, PSPs can be used to ensure pods don't have root-level access within the Kubernetes nodes. We disable Pod Security Policies because they have been deprecated and we are not covering them in this course. Some versions of TKG have them enabled by default.
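A common way to neutralize PSP enforcement in TKG clusters is to bind the built-in privileged PSP ClusterRole to all authenticated users. The lab's disable-psp.yaml likely follows this pattern; the binding name below is an assumption:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: all-users-privileged-psp      # assumed name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp:vmware-system-privileged  # privileged PSP ClusterRole shipped with TKG
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated          # every authenticated user and service account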
- In Windows Terminal run the following commands to disable PSP and deploy 3 nginx pods as part of your 1st deployment:
cd ..\Deployments\
ls
kubectl apply -f .\disable-psp.yaml
kubectl apply -f .\deployment-manifest.yaml
- Now let's inspect the deployment and its components with the following commands:
kubectl get deployments
kubectl get replicasets
kubectl get pods
We will now modify the deployment and also see how we can roll back to a previous version of the deployment.
- Modify the deployment and view its history and deployment details using the following commands:
kubectl apply -f .\modified-deployment-manifest.yaml
kubectl get deployment
kubectl rollout history deployment myfirst-deployment
Let's retrieve further information about this deployment by using the following command
kubectl describe deployment myfirst-deployment
We will now roll back the deployment to the previous version and clean up the deployment once done.
- Run the following commands to inspect the version history and roll back to a previous version:
kubectl rollout history deployment myfirst-deployment
kubectl rollout undo deployment myfirst-deployment --to-revision=1
kubectl rollout history deployment myfirst-deployment
- Run the following commands to clean up your deployment and verify the cleanup removed it:
kubectl delete deployment myfirst-deployment
kubectl get deployments
kubectl get replicasets
kubectl get pods
A Kubernetes service is a logical abstraction for a deployed group of pods in a cluster (which all perform the same function).
Since pods are ephemeral, a service enables a group of pods, which provide specific functions (web services, image processing, etc.), to be assigned a name and a unique IP address (clusterIP). As long as the service is running, that IP address will not change. Services also define policies for their access.
Difference between a deployment and a service
In Kubernetes, a deployment is a method of launching a pod with containerized applications and ensuring that the necessary number of replicas is always running on the cluster.
On the other hand, a service is responsible for exposing an interface to those pods, which enables network access from either within the cluster or between external processes and the service.
Kubernetes services connect a set of pods to an abstracted service name and IP address. Services provide discovery and routing between pods. For example, services connect an application front-end to its backend, each of which runs in separate deployments in a cluster. Services use labels and selectors to match pods with other applications. The core attributes of a Kubernetes service are:
- A label selector that locates pods
- The clusterIP IP address and assigned port number
- Port definitions
- Optional mapping of incoming ports to a targetPort
Services can also be defined without pod selectors, for example, to point a service to another service in a different namespace or cluster.
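Putting these attributes together, a ClusterIP service similar to the one this lab's svc-manifest1.yaml presumably defines (the curl test later in this task targets myfirst-service on port 8080) could look like the sketch below; the selector label is an assumption:

apiVersion: v1
kind: Service
metadata:
  name: myfirst-service
spec:
  selector:
    app: nginx        # label selector that locates the backing pods
  ports:
  - port: 8080        # clusterIP port the service exposes
    targetPort: 80    # container port the traffic is forwarded to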
Service Types
- ClusterIP. Exposes a service that is only accessible from within the cluster.
- NodePort. Exposes a service via a static port on each node’s IP.
- LoadBalancer. Exposes the service via the cloud provider’s load balancer.
- ExternalName. Maps a service to a predefined externalName field by returning a value for the CNAME record.
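For example, an ExternalName service, the last type in the list above, simply returns a CNAME and needs no selector (the names below are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.example.com   # DNS name returned as the CNAME record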
- In Windows Explorer, navigate to C:\Lab_Files\VCE\Services
- Open the svc-manifest1.yaml file and examine its content
- In Windows Terminal, execute the following commands to deploy the pods and service. Also, review the service deployment
cd ..\services
ls
kubectl apply -f .\svc-manifest1.yaml
kubectl get services -o wide
kubectl describe service myfirst-service
- Now, we will view the pods and delete one of them to observe what happens. Execute the following commands to do so:
kubectl get endpoints myfirst-service
kubectl get pods
kubectl delete pod {the_name_of_one_of_your_nginx_pods}
kubectl get pods
Question: How does deleting a pod affect the cluster?
- Did a new pod get created to replace that deleted pod?
- How did the endpoints change?
- How might this affect access from other applications?
- Now let's run a container that has the curl command installed in the image. We'll use the following imperative commands to deploy a curl container and exec into a shell:
kubectl run curlpod -it --image=curlimages/curl -- sh
curl myfirst-service:8080
Now, let's perform a cleanup before moving forward. To do so, we will exit the curl pod's shell, then delete the service and any remaining pods. Execute the following commands:
exit
kubectl get services
kubectl get pods
kubectl delete -f .\svc-manifest1.yaml
kubectl delete pod curlpod
A core strategy for maximizing availability and scalability, load balancing distributes network traffic among multiple backend services efficiently. A range of options for load-balancing external traffic to pods exists in the Kubernetes context, each with its own benefits and trade-offs.
Load distribution is the most basic type of load balancing in Kubernetes. At the dispatch level, load distribution is easy to implement. Both methods of load distribution that exist in Kubernetes operate through the kube-proxy feature. Services in Kubernetes use the virtual IPs that the kube-proxy feature manages.
IP addresses for Kubernetes pods are not persistent because the system assigns each new pod a new IP address. Typically, therefore, relying on direct communication with pods is impractical. However, services have their own relatively stable IP addresses which field requests from external resources. The service then dispatches the request to an available Kubernetes pod.
Kubernetes load balancing makes the most sense in the context of how Kubernetes organizes containers. Kubernetes does not view single containers or individual instances of a service, but rather sees containers in terms of the specific services or sets of services they perform or provide.
In this task, we will use a supplied YAML manifest to provision a deployment and a Load Balancer service.
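For orientation before you open the file, the service portion of lb-manifest.yaml presumably looks something like the sketch below; the actual file also includes a Deployment for the Tetris application, and the selector and ports here are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: myfirst-lbservice
spec:
  type: LoadBalancer   # asks the provider (NSX in VMC) for an external IP
  selector:
    app: tetris        # assumed label on the game's pods
  ports:
  - port: 80           # external port on the load balancer
    targetPort: 80     # container port on the pods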
- In Windows Explorer, navigate to C:\Lab_Files\VCE\Load_Balancers
- Open the lb-manifest.yaml file and review its content
- In Windows Terminal, execute the following command to provision and investigate the deployment:
cd ..\Load_Balancers
ls
kubectl apply -f .\lb-manifest.yaml
kubectl get services -o wide
kubectl describe service myfirst-lbservice
Questions:
- What is the Cluster IP of the Service?
- What is the External IP of the Service?
- Which port is the NodePort running on?
- From the Tanzu desktop, open a browser instance and navigate to the external IP of the Load Balancer service
- Try out your Tetris skills
- Delete the deployments, replica sets, pods and services by executing the following command
kubectl delete -f .\lb-manifest.yaml
Conclusion
Pods allow you to deploy closely coupled components together as separate containers. For instance, you can bundle an app and a proxy for that app that adds an encryption layer together so encrypted traffic goes in and out of the app without modifying the app container.
Pods in a Kubernetes cluster can be used in two main ways:
Pods that run a single container. The “one-container-per-Pod” model is the most common Kubernetes use case; in this case, you can think of a Pod as a wrapper around a single container, and Kubernetes manages the Pods rather than the containers directly.
Pods that run multiple containers that need to work together. A Pod might encapsulate an application composed of multiple co-located containers that are tightly coupled and need to share resources. These co-located containers might form a single cohesive unit of service: one container serving files from a shared volume to the public, while a separate “sidecar” container refreshes or updates those files. The Pod wraps these containers and storage resources together as a single manageable entity.
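A minimal sketch of such a multi-container pod, assuming an NGINX file server and a hypothetical sidecar that periodically rewrites the shared content, is shown below:

apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
  - name: shared-files
    emptyDir: {}                         # scratch volume shared by both containers
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: shared-files
      mountPath: /usr/share/nginx/html   # NGINX serves files from here
  - name: content-refresher              # the "sidecar" container
    image: busybox
    command: ["sh", "-c", "while true; do date > /pod-data/index.html; sleep 30; done"]
    volumeMounts:
    - name: shared-files
      mountPath: /pod-data               # same volume, different mount path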