Create control plane
Overview​
- Self-managed
- Platform-managed
When deploying a Platform-managed vCluster Standalone cluster, vCluster Platform handles automated provisioning and lifecycle management of the control plane node, including rolling upgrades.
When deploying a self-managed vCluster Standalone cluster, the assets required to install the control plane are located in the GitHub releases of vCluster.
vCluster Standalone eliminates the dependency on an external Control Plane Cluster by running directly on your infrastructure. This architectural difference means:
- `controlPlane.standalone.enabled` is automatically set when vCluster detects it's running as a binary rather than as a pod.
- `privateNodes.enabled` is automatically set because standalone mode has no Control Plane Cluster, so all worker nodes are private nodes that join directly.
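Together, these auto-set options correspond to the following effective `vcluster.yaml` state (shown for illustration only; you do not need to set them yourself in standalone mode):

```yaml
# Set automatically when vCluster runs as a standalone binary:
controlPlane:
  standalone:
    enabled: true

# Set automatically because there is no Control Plane Cluster:
privateNodes:
  enabled: true
```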
If you're familiar with traditional vCluster deployments, you'll notice this configuration has fewer required settings because there's no Control Plane Cluster relationship to configure.
For optional settings like dedicating the control plane node (not running workloads on it), configuring containerd, or customizing node registration, see the standalone configuration reference. For vcluster.yaml options that are not supported in Standalone mode, see Unsupported configuration options.
Predeployment configuration options​
Before deploying, review the configuration options that can't be updated after deployment. These options require you to deploy a new vCluster instance rather than upgrading your existing one.
Control-plane options​
- High availability - Run multiple control plane nodes
- CoreDNS - Only CoreDNS deployed by vCluster during startup is currently supported.
- Backing Store - Decide how your cluster's data is stored: either embedded SQLite (the default) or embedded etcd.
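For example, to use embedded etcd instead of the default embedded SQLite, select the backing store in `vcluster.yaml` before the first deployment (a minimal sketch; the key path follows the vCluster configuration reference):

```yaml
controlPlane:
  backingStore:
    etcd:
      embedded:
        enabled: true # cannot be changed after deployment
```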
Node roles​
Decide whether the control plane node will also act as a worker node. Once a node joins the cluster, its roles cannot change.
By default, the control plane node also acts as a worker node. To deploy a dedicated control plane that does not run workloads, set `controlPlane.standalone.joinNode.enabled` to `false`.
When the control plane node also acts as a worker, tenant workloads share the machine with the vCluster control-plane process and its credentials. For production deployments, set `controlPlane.standalone.joinNode.enabled` to `false` and use dedicated worker nodes.
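A dedicated control plane looks like this in `vcluster.yaml` (sketch based on the option named above):

```yaml
controlPlane:
  standalone:
    joinNode:
      enabled: false # control plane node does not run tenant workloads
```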
Worker nodes​
With vCluster Standalone, worker node pools can only be private nodes. Since there is no Control Plane Cluster, there is no concept of Shared Nodes.
Prerequisites​
- Access to a node that satisfies the node requirements
Install control plane node​
Self-managed​
Perform all steps on the control plane node as root.
Switch to root:

```bash
sudo su -
```

Create a directory for storing the vCluster Standalone configuration. This example uses `/etc/vcluster`; adjust the path if you want to store the `vcluster.yaml` configuration file elsewhere.

```bash
mkdir -p /etc/vcluster
```

Save a basic `vcluster.yaml` configuration file for vCluster Standalone on the control plane node:

```bash
cat <<EOF > /etc/vcluster/vcluster.yaml
controlPlane:
  distro:
    k8s:
      version: v1.34.0
EOF
```

Warning: Adding additional control plane nodes is not supported unless you follow the high availability configuration steps.
Run the installation script on the control plane node:

```bash
curl -sfL https://github.com/loft-sh/vcluster/releases/download/v0.33.1/install-standalone.sh | sh -s -- --vcluster-name standalone
```

Check that the control plane node is ready:

```bash
kubectl get nodes
```

Expected output:

```
NAME               STATUS   ROLES                  AGE   VERSION
ip-192-168-3-131   Ready    control-plane,master   11m   v1.32.1
```

Verify that the cluster components are running:

```bash
kubectl get pods -A
```

Pods should include:
- Flannel: CNI for container networking
- CoreDNS: DNS service for the cluster
- KubeProxy: Network traffic routing and load balancing
- Konnectivity: Secure control plane to worker node communication
- Local Path Provisioner: Dynamic storage provisioning
Available flags to use in the install script​
The following flags can be passed to the installation script.
| Flag | Description |
|---|---|
| `--vcluster-name` | Name of the vCluster instance |
| `--vcluster-version` | Specific vCluster version to install |
| `--config` | Path to the `vcluster.yaml` configuration file |
| `--binary` | Path to an existing vCluster binary (use with `--skip-download`) |
| `--skip-download` | Skip downloading the vCluster binary (use an existing one) |
| `--skip-wait` | Exit without waiting for vCluster to be ready |
| `--extra-env` | Additional environment variables for vCluster |
| `--join-token` | Token for joining additional nodes to the cluster |
| `--join-endpoint` | Endpoint address for joining additional nodes |
| `--vcluster-kubernetes-bundle` | Path to an air-gapped Kubernetes bundle |
| `--reset-only` | Uninstall and reset the vCluster installation without reinstalling |
| `--fips` | Enable FIPS-compliant mode |
| `--platform-access-key` | Access key for vCluster Platform integration |
| `--platform-host` | vCluster Platform host URL |
| `--platform-insecure` | Skip TLS verification for the Platform connection |
| `--platform-instance-name` | Instance name in vCluster Platform |
| `--platform-project` | Project name in vCluster Platform |
Access your cluster​
After installation, vCluster automatically configures the kubeconfig on the control plane node and sets the kubectl context to your new vCluster Standalone instance.
To use vCluster Standalone as a Control Plane Cluster for tenant clusters, set your kubectl context to the vCluster Standalone instance. You can then create and manage tenant clusters using the vCluster CLI.
To access the cluster from other machines, copy the kubeconfig from `/var/lib/vcluster/kubeconfig.yaml` on the control plane node, then replace the `server` field with an IP address or DNS name that is reachable from those machines.
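For example, the `server` field can be rewritten with `sed`. This is a sketch: it uses a placeholder kubeconfig standing in for the real file at `/var/lib/vcluster/kubeconfig.yaml`, and `203.0.113.10` is an illustrative documentation address, not a real default; substitute your control plane's reachable address.

```shell
# Placeholder kubeconfig standing in for /var/lib/vcluster/kubeconfig.yaml
cat > standalone-kubeconfig.yaml <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: standalone
  cluster:
    server: https://127.0.0.1:6443
EOF

# Point the server field at an address reachable from this machine
# (203.0.113.10 is an illustrative placeholder).
sed -i 's|server: https://.*|server: https://203.0.113.10:6443|' standalone-kubeconfig.yaml

# Confirm the rewrite
grep 'server:' standalone-kubeconfig.yaml
```

After rewriting, pass the file to kubectl with `--kubeconfig standalone-kubeconfig.yaml` or merge it into your `KUBECONFIG`.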
You can also configure vCluster to make external access easier.
The vCluster CLI is installed at /var/lib/vcluster/bin/vcluster-cli.
Platform-managed​
When managing a standalone cluster through vCluster Platform, the initial control plane node is provisioned through the platform using Auto Nodes. The platform connects to your node provider (such as AWS, GCP, or a bare metal provisioner), provisions a VM or physical server, installs the vCluster binary, and joins the node as the control plane. Once the cluster is running, the platform manages its full lifecycle, coordinating configuration updates and version upgrades as rolling operations and eliminating the need for manual SSH access.
Use vCluster Platform to:
- Add a Node Provider.
- Add the vCluster configuration (example below).
- Provision the cluster from the platform UI.
```yaml
controlPlane:
  standalone:
    enabled: true
    autoNodes:
      provider: aws # Node provider you want to use
      quantity: 1 # Number of nodes (HA requires embedded etcd or external DB)
  distro:
    k8s:
      image:
        tag: v1.35.0 # Kubernetes version you want to use

# Worker nodes
privateNodes:
  enabled: true
  autoNodes: # (optional) Add worker nodes with Auto Nodes
    - provider: aws
      dynamic:
        - name: aws-pool-1
```
After provisioning completes, vCluster Platform manages the control plane node lifecycle. Worker node lifecycle also remains managed through the platform UI when Auto Nodes are used.
Access your cluster​
To access a Standalone cluster managed by vCluster Platform, open the vCluster in the platform UI and click Connect.
Alternatively, use the `vcluster platform connect vcluster` command.