
NSX Application Platform Setup: Part 1 - TCE and Environment Setup


The NSX Application Platform (NAPP) is a feature introduced in NSX-T 3.2. It is a group of Kubernetes (K8s) services that allows us to deploy additional NSX-T features such as:

- NSX Intelligence

- NSX Network Detection and Response

- NSX Malware Prevention

As you might have already guessed, the NSX Application Platform requires an upstream K8s cluster. This can be a Tanzu Kubernetes cluster or a plain K8s cluster.

In my homelab environment, I decided to use Tanzu Community Edition (TCE) to create the NAPP K8s nodes. I already have my Tanzu management cluster up and running; the steps for setting this up have been outlined in this blog post: https://www.virtualmystery.info/post/deploy-tanzu-community-edition-v0-11-using-linux-boostrap-vm-part-1

In this post, we will be focusing on deploying the workload cluster specific to the NAPP deployment requirements. The detailed NAPP setup requirements can be found at this link: https://docs.vmware.com/en/VMware-NSX/4.0/nsx-application-platform/GUID-85CD2728-8081-45CE-9A4A-D72F49779D6A.html

For ease of reference, I have added a screenshot of the requirements here:

Before we set up the K8s workload cluster, we need to ensure that the following network infrastructure is in place in our environment:

- A DHCP server in the same subnet as the ESXi hosts and the TCE management cluster. Ensure the DHCP range does not span the entire subnet; we will need some of the remaining IPs later in the procedure.

- We already have a static endpoint IP assigned to our TCE management cluster. Set aside another IP in the same subnet (outside the DHCP range); this will be used as the static control plane endpoint IP for our NAPP K8s workload cluster.

- We will be deploying the MetalLB load balancer, as I do not have an NSX Advanced Load Balancer (Avi) setup in my environment. We need to set aside a few IPs (around 10), which MetalLB will use to assign addresses to the NAPP LoadBalancer services in the K8s workload cluster.
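To make the last point concrete, here is a sketch of what that IP reservation eventually feeds into: the legacy MetalLB layer2 ConfigMap. The pool name and the 192.168.1.240-192.168.1.250 range are placeholders; substitute whatever addresses you set aside outside your DHCP range. The actual MetalLB deployment is covered in the next post, so this is just for planning:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: napp-pool                     # placeholder pool name
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250      # the ~10 reserved IPs, outside the DHCP range
```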


Now that we have our environment set up, let us dive into creating the NAPP K8s workload cluster. For my homelab setup, I went ahead with the minimum-spec form factor, i.e. the "Standard" form factor.

This requires a minimum of 1 control plane node and 3 worker nodes; in the config below, each node is allocated 4 vCPUs and 16 GB of RAM. Based on these requirements, we can use the below cluster config file - "napp.yml":

SERVICE_CIDR: 100.64.0.0/13
TKG_HTTP_PROXY_ENABLED: "false"
TKG_IP_FAMILY: ipv4
VSPHERE_CONTROL_PLANE_DISK_GIB: "200"
VSPHERE_CONTROL_PLANE_ENDPOINT: <YOUR ENDPOINT IP HERE>
VSPHERE_CONTROL_PLANE_MEM_MIB: "16384"
VSPHERE_CONTROL_PLANE_NUM_CPUS: "4"
WORKER_MACHINE_COUNT: "3"
VSPHERE_DATACENTER: /VirmystDatacenter
VSPHERE_DATASTORE: /VirmystDatacenter/datastore/AD_Datastore
VSPHERE_FOLDER: /VirmystDatacenter/vm
VSPHERE_INSECURE: "true"
VSPHERE_NETWORK: /VirmystDatacenter/network/TCENW
VSPHERE_PASSWORD: <##REMOVED##>
VSPHERE_RESOURCE_POOL: /VirmystDatacenter/host/VirmystCore/Resources
VSPHERE_SERVER: vcsa.virmyst.homelab
VSPHERE_SSH_AUTHORIZED_KEY: |-
    -----BEGIN OPENSSH PRIVATE KEY-----
    #######REMOVED###################
    -----END OPENSSH PRIVATE KEY-----
VSPHERE_TLS_THUMBPRINT: ""
VSPHERE_USERNAME: administrator@vsphere.local
VSPHERE_WORKER_DISK_GIB: "64"
VSPHERE_WORKER_MEM_MIB: "16384"
VSPHERE_WORKER_NUM_CPUS: "4"
CONTROL_PLANE_MACHINE_COUNT: "1"

Save the YAML file. Open a terminal window and use the following command to deploy the TCE workload cluster:

tanzu cluster create k8nappclst --file ~/<path to the above yaml file>
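If you would like to sanity-check the config before any VMs are provisioned, the Tanzu CLI can render the cluster manifests without applying them. This assumes the file above was saved as ~/napp.yml:

```
tanzu cluster create k8nappclst --file ~/napp.yml --dry-run > napp-manifest.yml
```

Reviewing the generated manifest is a quick way to catch typos in the vSphere paths or machine counts before the actual deployment.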

Once the workload cluster has been created, you can check its status by using the command:

tanzu cluster list
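Once the cluster shows as running, you can also pull its kubeconfig and confirm that all four nodes are up. The context name follows the standard Tanzu naming pattern for the cluster name used above:

```
tanzu cluster kubeconfig get k8nappclst --admin
kubectl config use-context k8nappclst-admin@k8nappclst
kubectl get nodes
```

All 1 control plane and 3 worker nodes should report a Ready status before we move on to the MetalLB deployment.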

This completes the first part of the blog post series. In the next post we shall be taking a look at deploying the MetalLB load balancer into the environment.
