Use Rancher to create a Kubernetes cluster in vSphere.
When creating a vSphere cluster, Rancher first provisions the specified number of virtual machines by communicating with the vCenter API and then installs Kubernetes on top of them. A vSphere cluster may consist of multiple groups of VMs with distinct properties, such as the amount of memory or the number of vCPUs. This grouping allows for fine-grained control over the sizing of nodes for the data, control, and worker planes respectively.
Note: The vSphere node driver included in Rancher currently only supports the provisioning of VMs with RancherOS as the guest operating system.
Before proceeding to create a cluster, you must ensure that you have a vSphere user with sufficient permissions. If you are planning to make use of vSphere volumes for persistent storage in the cluster, there are additional requirements that must be met.
You must also ensure that the hosts running the Rancher server are able to establish network connections to the vCenter API endpoint (HTTPS, port 443).
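A quick way to check this from a Rancher host, assuming your vCenter is reachable at vcenter.example.com (a placeholder), is to probe the HTTPS endpoint:

    # Prints an HTTP status code if the vCenter endpoint is reachable on port 443.
    curl -sk -o /dev/null -w '%{http_code}\n' https://vcenter.example.com/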
The following steps create a role with the required privileges and then assign it to a new user in the vSphere console:
From the vSphere console, go to the Administration page.
Go to the Roles tab.
Create a new role. Give it a name and select the privileges listed in the permissions table.
Go to the Users and Groups tab.
Create a new user. Fill out the form and then click OK. Make sure to note the username and password, as you will need them when configuring node templates in Rancher.
Go to the Global Permissions tab.
Create a new Global Permission. Add the user you created earlier and assign it the role you created earlier. Click OK.
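If you prefer to script these steps, the open-source govc CLI for vSphere can create the role from the command line. A minimal sketch, assuming govc is already configured through the GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD environment variables; the role name is a placeholder, and only two example privileges are shown, so take the complete list from the permissions table:

    # Create a role carrying (a subset of) the required privileges.
    govc role.create rancher-node-driver \
      Datastore.AllocateSpace Network.Assign

    # Grant the role to the provisioning user. Note that permissions.set applies
    # to an inventory object (the root folder by default), which is not identical
    # to a vCenter Global Permission; verify the result in the vSphere console.
    govc permissions.set -principal 'rancher@vsphere.local' \
      -role rancher-node-driver -propagate=true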
To create a cluster, you need to create at least one vSphere node template that specifies how VMs are created in vSphere.
Note: Once you create a node template, it is saved, and you can re-use it whenever you create additional vSphere clusters.
Log in with an admin account to the Rancher UI.
From the user settings menu, select Node Templates.
Click Add Template and then click on the vSphere icon.
Under Account Access enter the vCenter FQDN or IP address and the credentials for the vSphere user account (see Prerequisites).
As of v2.2.0, account access information is stored as a cloud credential. Cloud credentials are stored as Kubernetes secrets (see the sketch after these steps for inspecting them).
Under Instance Options, configure the number of vCPUs, memory, and disk size for the VMs created by this template.
Optional: Enter the URL pointing to a RancherOS cloud-config file in the Cloud Init field (an example file follows these steps).
Ensure that the OS ISO URL contains the URL of a VMware ISO release for RancherOS (rancheros-vmware.iso).
Optional: Provide a set of Configuration Parameters for the VMs, for example disk.EnableUUID=TRUE, which is required if you plan to use vSphere volumes for persistent storage.
Under Scheduling, enter the name/path of the Data Center to create the VMs in, the name of the VM Network to attach to, and the name/path of the Datastore to store the disks in.
Optional: Assign labels to the VMs that can be used as a base for scheduling rules in the cluster.
Optional: Customize the configuration of the Docker daemon on the VMs that will be created.
Assign a descriptive Name for this template and click Create.
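Since cloud credentials are stored as Kubernetes secrets, an administrator can inspect them on the cluster that runs Rancher, as mentioned in the Account Access step above. A sketch, assuming kubectl is pointed at the Rancher management cluster and that the credentials live in the cattle-global-data namespace (verify the namespace on your installation):

    # List the secrets backing Rancher cloud credentials.
    kubectl --namespace cattle-global-data get secrets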
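For the Cloud Init field mentioned above, a minimal RancherOS cloud-config might look like the following sketch. The SSH key and network settings are placeholders; host the file at an HTTP(S) URL that the provisioned VMs can reach:

    # Write an example RancherOS cloud-config; serve it over HTTP(S) and enter
    # its URL in the Cloud Init field.
    cat > cloud-config.yml <<'EOF'
    #cloud-config
    ssh_authorized_keys:
      - ssh-rsa AAAA... ops@example.com
    rancher:
      network:
        interfaces:
          eth0:
            dhcp: true
    EOF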
After you’ve created a template, you can use it to stand up the vSphere cluster itself.
From the Global view, click Add Cluster.
Choose vSphere.
Enter a Cluster Name.
Use Member Roles to configure user authorization for the cluster.
Use Cluster Options to choose the version of Kubernetes, which network provider to use, and whether to enable project network isolation (a YAML sketch of these options follows these steps). To see more cluster options, click Show advanced options.
Add one or more node pools to your cluster. A node pool is a collection of nodes based on a node template. A node template defines the configuration of a node, such as which operating system to use, the number of vCPUs, and the amount of memory. Each node pool must have one or more node roles assigned.
Note: Each node role (i.e., etcd, Control Plane, and Worker) should be assigned to a distinct node pool. Although it is possible to assign multiple node roles to a node pool, this should not be done for production clusters. The recommended setup is a node pool with the etcd node role and a count of three, a node pool with the Control Plane node role and a count of at least two, and a node pool with the Worker node role and a count of at least two. For more on the etcd node role, refer to the etcd Admin Guide.
Review your configuration, then click Create.
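For reference, the cluster options chosen above map to RKE-style YAML, which can typically also be edited directly in the cluster creation screen. A sketch with placeholder values (the exact Kubernetes versions on offer depend on your Rancher release):

    # Hypothetical excerpt of the cluster configuration expressed as YAML.
    cat > cluster-options.yml <<'EOF'
    rancher_kubernetes_engine_config:
      kubernetes_version: v1.13.5-rancher1-1   # placeholder; pick a version your Rancher offers
      network:
        plugin: canal                          # alternatives include flannel, calico, and weave
    EOF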
Note: If you have a cluster with DRS enabled, setting up VM-VM Affinity Rules is recommended. These rules cause the VMs assigned the etcd and control plane roles to run on separate ESXi hosts when they are assigned to different node pools. This practice ensures that the failure of a single physical host does not affect the availability of those planes.
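The govc CLI can create such a rule. A sketch, assuming a DRS-enabled vSphere cluster named Cluster-A and etcd node VMs named after their node pool (all names are placeholders):

    # Keep the etcd node VMs on separate ESXi hosts.
    govc cluster.rule.create -name rancher-etcd-anti-affinity \
      -cluster Cluster-A -enable -anti-affinity \
      etcd-pool1 etcd-pool2 etcd-pool3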
Result: The cluster is created and, once provisioning completes, it is assigned two projects:
Default, containing the namespace default
System, containing the namespaces cattle-system, ingress-nginx, kube-public, and kube-system
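Once the cluster becomes active, you can confirm these namespaces with kubectl after downloading the cluster's kubeconfig from the Rancher UI:

    # The namespaces of the Default and System projects should be listed.
    kubectl get namespaces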
The tables below describe the configuration options available in the vSphere node template.
The following table lists the permissions required for the vSphere user account configured in the node templates: