Whether you’re configuring Rancher to run in a single-node or high-availability setup, each node running Rancher Server must meet the following requirements.
Rancher is tested on the following operating systems and their subsequent non-major releases with a supported version of Docker.
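To confirm which Docker version a node is already running, you can query the Docker daemon directly (assuming Docker is installed); this check works the same on any supported operating system:

# Print the version of the Docker engine running on this node
docker version --format '{{.Server.Version}}'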
If you are using RancherOS, make sure you switch the Docker engine to a supported version using:
# Look up available versions
sudo ros engine list

# Switch to a supported version
sudo ros engine switch docker-18.09.2
If you plan to run Rancher on ARM64, see Running on ARM64 (Experimental). For instructions on installing Docker itself, see the Docker Documentation: Installation Instructions.
Hardware requirements scale based on the size of your Rancher deployment. Provision each individual node according to the requirements for your installation type, whether that is a single node or an HA install.
Disks
Rancher performance depends on the performance of etcd in the cluster. To ensure optimal speed, we recommend always using SSD disks to back your Rancher management Kubernetes cluster. On cloud providers, you will also want to use the minimum volume size that allows the maximum IOPS. In larger clusters, consider using dedicated storage devices for the etcd data and WAL directories.
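One rough way to check whether a disk is fast enough for etcd is to measure fdatasync latency with fio. The invocation below is a sketch based on commonly cited etcd benchmarking guidance (the 99th percentile of fdatasync durations should generally stay under about 10 ms); the directory path is a placeholder and fio must be installed on the node:

# Measure write + fdatasync latency on the disk backing etcd
# (the directory is a placeholder; it must exist on the disk you want to test)
fio --rw=write --ioengine=sync --fdatasync=1 --directory=/var/lib/etcd-test --size=22m --bs=2300 --name=etcd-disk-check

Look at the fsync/fdatasync percentiles in the output to judge whether the disk will keep up with etcd's write pattern.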
Each node used (whether for the Single Node Install, the High Availability (HA) Install, or in clusters) should have a static IP configured. If DHCP is used, each node should have a DHCP reservation to make sure it is always allocated the same IP address.
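How you create a DHCP reservation depends on your DHCP server. As one illustration, with ISC dhcpd a host entry in dhcpd.conf pins a MAC address to a fixed address; the hostname, MAC address, and IP below are placeholders:

# Example ISC dhcpd reservation: always hand this node the same address
host rancher-node-1 {
  hardware ethernet 52:54:00:aa:bb:cc;
  fixed-address 192.168.1.50;
}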
When deploying Rancher in an HA cluster, certain ports on your nodes must be open to allow communication with Rancher. The ports that must be open change according to the type of machines hosting your cluster nodes. For example, if you are deploying Rancher on nodes hosted by an infrastructure provider, port 22 must be open for SSH. The following diagram depicts the ports that are opened for each cluster type.
[Diagram: ports opened for each cluster type]

The node types referenced in the diagram are:

Rancher nodes: Nodes running the rancher/rancher container
etcd nodes: Nodes with the role etcd
controlplane nodes: Nodes with the role controlplane
worker nodes: Nodes with the role worker
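To spot-check that a required port on a node is reachable before installing, a quick probe with netcat works from any Linux machine; the address below is a placeholder:

# Verify that SSH (port 22) on a node is reachable
nc -zv 192.168.1.50 22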
Kubernetes healthchecks (livenessProbe and readinessProbe) are executed on the host itself. On most nodes, this is allowed by default. If you have applied strict host firewall policies (i.e. iptables) on the node, or if your nodes have multiple interfaces (multihomed), this traffic gets blocked. In this case, you have to explicitly allow this traffic in your host firewall, or, in the case of public/private cloud hosted machines (i.e. AWS or OpenStack), in your security group configuration. Keep in mind that when a security group is used as the Source or Destination in a security group rule, it only applies to the private interface of the nodes/instances.
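If host firewall rules are the cause, the fix is to permit the connections a node makes to its own addresses. A minimal iptables sketch, assuming 10.0.0.10 stands in for the node's own interface IP:

# Allow loopback traffic
iptables -A INPUT -i lo -j ACCEPT
# Allow traffic the node sends to its own address (healthchecks hit the host's own IPs)
iptables -A INPUT -s 10.0.0.10 -d 10.0.0.10 -j ACCEPT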
If you are Creating an Amazon EC2 Cluster, you can choose to let Rancher create a Security Group called rancher-nodes. The following rules are automatically added to this Security Group.
Security group: rancher-nodes
[Table: rules automatically added to the rancher-nodes security group]
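Once the nodes are provisioned, you can review the rules Rancher added with the AWS CLI, assuming it is configured with credentials for the account:

# List the rules of the auto-created security group
aws ec2 describe-security-groups --filters Name=group-name,Values=rancher-nodes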