
Deploy Tanzu Community Edition v0.11 - Using a Linux bootstrap VM [Part-1]



VMware Tanzu Community Edition is a fully-featured, easy-to-manage Kubernetes platform for learners and users. It is a freely available, community-supported, open source distribution of VMware Tanzu that can be installed and configured in minutes on your local workstation or your favorite cloud.

Tanzu Community Edition enables the creation of application platforms by leveraging Cluster API to deploy and manage Kubernetes clusters. It gives you the flexibility to build application platforms that meet your requirements, without having to start from scratch. For more information about Tanzu Community Edition and its architecture, refer to the project's documentation website: https://tanzucommunityedition.io/docs/v0.11/ .

In this blog post series, we will deploy a management cluster and a workload cluster to a vSphere environment using a Linux (Ubuntu 20.04) VM as the bootstrap machine. I am using the desktop version of Ubuntu for this example, simply for ease of use. We will need to carry out the following steps to set up Tanzu Community Edition in our vSphere environment:

1- Install the Tanzu CLI

2- Prepare to deploy clusters

3- Deploy a management cluster

4- Deploy a workload cluster

5- Examine the cluster


We will look at steps 1 to 3 in this blog post; steps 4 and 5 will be covered in Part-2 of this series. Let us now get started with the fun part!


Section-1: Install the Tanzu CLI

The Tanzu CLI installation requires two prerequisites to be pre-installed on the bootstrap VM: Docker and kubectl.

1- Installing Docker :

- Run the following commands to setup the docker repository:

 sudo apt-get update
 sudo apt-get install \
     ca-certificates \
     curl \
     gnupg \
     lsb-release

- Add Docker's official GPG key, set stable repository and install docker:

 curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
 echo \
   "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
 sudo apt-get update
 sudo apt-get install docker-ce docker-ce-cli containerd.io

- Setup the Docker permissions:

 sudo groupadd docker
 sudo usermod -aG docker <your user>

Reboot the VM at this point so the group membership takes effect.
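After the reboot, it is worth confirming that the group change is active and that Docker runs without sudo. The hello-world image used here is Docker's standard post-install test image:

```shell
# confirm the current user picked up the docker group after the reboot
groups | grep -qw docker || echo "not in the docker group yet - reboot or re-login first"
# run Docker's test image without sudo; it prints "Hello from Docker!" on success
command -v docker >/dev/null && docker run --rm hello-world
```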

2- Installing kubectl:

- Download the latest release of kubectl:

 curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
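Optionally, the downloaded binary can be verified against its published checksum before installing; this is the validation step from the upstream Kubernetes install documentation:

```shell
# download the checksum file matching the kubectl release just fetched
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
# verify the binary; prints "kubectl: OK" when the hashes match
echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
```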

- Install Kubectl:

sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

- Validate the installation by running the following command and ensure it completes without errors:

	kubectl version --client

With the prerequisites in place, we can now install the Tanzu CLI.


3- Install the Tanzu CLI:

In this example, we will install the Tanzu CLI using curl. The install script requires curl, grep, sed, tr, and jq in order to work. All of these are present by default on my Ubuntu box except jq. Check your Linux distro and install any missing dependencies before proceeding.
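A quick way to check for the dependencies listed above before running the script (the loop itself is just a convenience; the tool list is the one the script needs):

```shell
# report any of the install script's dependencies that are missing from PATH
for tool in curl grep sed tr jq; do
  command -v "$tool" >/dev/null 2>&1 || echo "$tool is missing - install it first (e.g. sudo apt-get install $tool)"
done
```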


- Download the Tanzu Community Edition package:

curl -H "Accept: application/vnd.github.v3.raw" \
     -L https://api.github.com/repos/vmware-tanzu/community-edition/contents/hack/get-tce-release.sh | \
     bash -s v0.11.0 linux

- Extract the file:

 tar xzvf ~/tce-linux-amd64-v0.11.0.tar.gz

- Install Tanzu Cli

 cd tce-linux-amd64-v0.11.0
 ./install.sh
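Once install.sh finishes, a quick check confirms the CLI landed on your PATH (version is a standard Tanzu CLI subcommand):

```shell
# confirm the tanzu binary is reachable and print its version
if command -v tanzu >/dev/null 2>&1; then
  tanzu version
else
  echo "tanzu not found on PATH - check the install.sh output" >&2
fi
```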

This completes the setup of the Tanzu CLI on the bootstrap Linux box. Now we will prepare our on-prem vSphere environment for deploying TCE v0.11.

Section-2: Prepare to Deploy Clusters

- Download the OVA file:

Before proceeding, please ensure your vSphere environment meets the following requirements:

- vSphere 6.7u3 or later

- A VM folder in which to collect the Tanzu Community Edition VMs

- A datastore with sufficient capacity for the control plane and workload VMs

- Traffic allowed out to vCenter Server from the network on which clusters will run

- Traffic allowed between your local bootstrap machine and port 6443 of all VMs in the clusters you create. Port 6443 is where the Kubernetes API is exposed.

- Traffic allowed between port 443 of all VMs in the clusters you create and vCenter Server. Port 443 is where the vCenter Server API is exposed.

- NTP running on all hosts, with all hosts and the bootstrap VM in time sync

- A DHCP server in the network


Download the OVA that matches your Kubernetes node OS from this link:

https://customerconnect.vmware.com/downloads/get-download?downloadGroup=TCE-0110

I used the "photon-3-kube-v1.22.5+vmware.1-tkg.2-790a7a702b7fa129fb96be8699f5baa4-tce-011.ova" OVA for this example.

- Create the VM folder and deploy the OVA as a template:

- In your vCenter content library, create a folder. This is where you will place the TKG VM template and VMs.


- Deploy the previously downloaded OVA to your cluster. The OVA deployment is straightforward.

- Once the OVA deployment completes, DO NOT power on the VM. Right-click the VM and select Template -> Convert to Template. Add the template to the folder previously created in the content library.

- Create the SSH key pair. At the prompt for the file in which to save the key, press Enter to accept the default. At the "Enter passphrase" prompt, enter and confirm a passphrase for the key pair:

ssh-keygen -t rsa -b 4096 -C "email@example.com"

- Add the private key to the SSH agent of your bootstrap VM:

 ssh-add ~/.ssh/id_rsa
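The interactive prompts above can also be scripted; a non-interactive sketch (the -f path, -N passphrase, and email comment here are placeholders, substitute your own):

```shell
# generate the key pair without prompts: -f sets the output file,
# -N sets the passphrase (placeholder values - replace with your own)
ssh-keygen -t rsa -b 4096 -C "email@example.com" -f ~/.ssh/id_rsa -N "YourPassphrase"
```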



Section-3: Deploy the Management Cluster

We will deploy the management cluster using the UI method. Let us take a look at the steps:

- On your VM's bash terminal, enter the following command to launch the UI:

tanzu management-cluster create --ui
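If your bootstrap VM were headless (no desktop browser), the installer UI can instead be bound to an address reachable from your workstation; --bind and --browser are documented flags of this command, and the address/port here is just an example:

```shell
# serve the installer UI on all interfaces, port 8080,
# without trying to open a local browser on the bootstrap VM
tanzu management-cluster create --ui --bind 0.0.0.0:8080 --browser none
```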

- The UI screen automatically opens in your browser:

- Under "VMware vSphere", click "Deploy". The "Deploy Management Cluster on vSphere" window opens. Fill in the IaaS provider details: the vCenter IP, username, and password. Select "Skip SSL Thumbprint Verification" and then click "Connect". Under "SSH Public Key", click "Browse File" and browse to the location of the SSH public key generated in the previous section. Then click "NEXT":

- Enter the management cluster settings. First, select the deployment type. Since I am setting this up in my homelab, I selected "Development" with "Instance Type" set to "small". Set a management cluster name. For "Control Plane Endpoint Provider" you can use NSX Advanced Load Balancer; the default option is "kube-vip". Since I am testing this in my homelab, I am going with the default. Select the "Worker node instance type"; in this example I selected "small". Set the "Control Plane Endpoint" IP. This IP should be in the same subnet as your bootstrap VM:

- Under "Resources", select the VM folder, datastore, and cluster where you will create the Kubernetes management cluster:

- Under "Kubernetes Network", select the port group for your Kubernetes VMs under "Network Name". You can leave all other settings at their defaults:


- Disable the "Identity Management Settings":

- In the "OS Image" setting, select the TKG VM template created in the previous section from the "OS Image" drop-down:

- Once all the settings have been configured, your UI should look like this. Click "Review Configuration":

- Scroll through and review the configuration, then click "Deploy Management Cluster".

- If all the prerequisites and configurations were done correctly, the bootstrap deployment will complete automatically in a few minutes:



This completes our TCE v0.11 management cluster deployment!

You can check your deployment by entering the following command:

tanzu management-cluster get
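To go one step further, the admin kubeconfig can be retrieved and the cluster nodes inspected with kubectl. The kubeconfig get --admin subcommand is standard Tanzu CLI; MGMT-CLUSTER below is a placeholder for the name you chose in the UI:

```shell
# merge admin credentials for the management cluster into your kubeconfig
tanzu management-cluster kubeconfig get --admin
# switch to the admin context (the context name follows the NAME-admin@NAME pattern)
kubectl config use-context MGMT-CLUSTER-admin@MGMT-CLUSTER   # substitute your cluster name
kubectl get nodes -o wide
```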

NOTE: Once the management cluster deployment is complete, set static IP reservations on your DHCP server for the management control plane VM and the management worker node VM. This is essential to ensure the VMs keep their IPs across reboots; otherwise the Kubernetes pods will not be able to start.


Stay tuned for "Part-2" where we will be setting up the workload cluster.

