
Create control plane

Supported Configurations
vCluster Standalone runs the control plane as a binary directly on your infrastructure. When you scale out with additional worker nodes, they join as private nodes.

Overview​

When deploying a self-managed vCluster Standalone cluster, the assets required to install the control plane are located in the GitHub releases of vCluster.

Why is the configuration so minimal?

vCluster Standalone eliminates the dependency on an external Control Plane Cluster by running directly on your infrastructure. This architectural difference means:

  • controlPlane.standalone.enabled is automatically set when vCluster detects it's running as a binary rather than as a pod
  • controlPlane.privateNodes.enabled is automatically set because standalone mode has no Control Plane Cluster, so all worker nodes are private nodes that join directly

If you're familiar with traditional vCluster deployments, you'll notice this configuration has fewer required settings because there's no Control Plane Cluster relationship to configure.

For optional settings like dedicating the control plane node (not running workloads on it), configuring containerd, or customizing node registration, see the standalone configuration reference. For vcluster.yaml options that are not supported in Standalone mode, see Unsupported configuration options.

Predeployment configuration options​

Before deploying, review the configuration options that can't be updated after deployment. These options require you to deploy a new vCluster instance rather than upgrading your existing one.

Control-plane options​

  • High availability - Run multiple control plane nodes.
  • CoreDNS - Only the CoreDNS instance that vCluster deploys during startup is currently supported.
  • Backing Store - Decide how your cluster's data is stored: either embedded SQLite (the default) or embedded etcd.
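Because the backing store cannot be changed after deployment, it is worth setting explicitly. A minimal sketch of opting into embedded etcd in vcluster.yaml, assuming the standard vCluster configuration keys (verify against the configuration reference):

```yaml
# Sketch: use embedded etcd instead of the default embedded SQLite.
# This choice cannot be changed after deployment.
controlPlane:
  backingStore:
    etcd:
      embedded:
        enabled: true
```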

Node roles​

Decide whether the control plane node will also act as a worker node. Once a node joins the cluster, its roles cannot change.

By default, the control plane node also acts as a worker node. To deploy a dedicated control plane that does not run workloads, set controlPlane.standalone.joinNode.enabled to false.

warning

When the control plane node also acts as a worker, tenant workloads share the machine with the vCluster control-plane process and its credentials. For production deployments, set controlPlane.standalone.joinNode.enabled to false and use dedicated worker nodes.
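A dedicated control plane can be configured with the setting described above; a minimal vcluster.yaml fragment:

```yaml
# Sketch: keep the control plane node out of the worker pool so tenant
# workloads never share the machine with the control-plane process.
controlPlane:
  standalone:
    joinNode:
      enabled: false # control plane node does not act as a worker
```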

Worker nodes​

With vCluster Standalone, worker node pools can only be private nodes. Since there is no Control Plane Cluster, there is no concept of Shared Nodes.

Prerequisites​

Install control plane node​

Self-managed​

Control Plane Node

Perform all steps on the control plane node as root.

  1. Switch to root on the control plane node:

    sudo su -
  2. Create directory for storing vCluster Standalone configuration.

    Create config directory
    mkdir -p /etc/vcluster

    Save a basic vcluster.yaml configuration file for vCluster Standalone on the control plane node.

    Create vCluster config file
    cat <<EOF > /etc/vcluster/vcluster.yaml
    controlPlane:
      distro:
        k8s:
          version: v1.34.0
    EOF
    warning

    Adding more control plane nodes is not supported unless you follow the high availability configuration steps.

  3. Run the installation script on the control plane node:

    Install vCluster Standalone on control plane node
    curl -sfL https://github.com/loft-sh/vcluster/releases/download/v0.33.1/install-standalone.sh | sh -s -- --vcluster-name standalone
  4. Check that the control plane node is ready by running these commands:

    Check node status
    kubectl get nodes

    Expected output:

    NAME               STATUS   ROLES                  AGE   VERSION
    ip-192-168-3-131   Ready    control-plane,master   11m   v1.34.0
    Verify cluster components are running
    kubectl get pods -A

    Pods should include:

    • Flannel: CNI for container networking
    • CoreDNS: DNS service for the cluster
    • KubeProxy: Network traffic routing and load balancing
    • Konnectivity: Secure control plane to worker node communication
    • Local Path Provisioner: Dynamic storage provisioning
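As a quick sanity check, the pod list can be scanned for these components. The helper below is a sketch (not part of the installer); the component name substrings are assumptions based on the list above:

```shell
# Pipe `kubectl get pods -A` output into this function to confirm the
# expected standalone components are present. Prints any missing
# component and returns non-zero if one is absent.
check_components() {
  pods=$(cat)
  missing=0
  for c in flannel coredns kube-proxy konnectivity local-path-provisioner; do
    if ! printf '%s\n' "$pods" | grep -qi "$c"; then
      echo "missing: $c"
      missing=1
    fi
  done
  return "$missing"
}

# Usage: kubectl get pods -A | check_components
```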

Available flags to use in the install script​

The installation script accepts several flags.

Flag                            Description
--vcluster-name                 Name of the vCluster instance
--vcluster-version              Specific vCluster version to install
--config                        Path to the vcluster.yaml configuration file
--binary                        Path to an existing vCluster binary (use with --skip-download)
--skip-download                 Skip downloading the vCluster binary (use an existing one)
--skip-wait                     Exit without waiting for vCluster to be ready
--extra-env                     Additional environment variables for vCluster
--join-token                    Token for joining additional nodes to the cluster
--join-endpoint                 Endpoint address for joining additional nodes
--vcluster-kubernetes-bundle    Path to an air-gapped Kubernetes bundle
--reset-only                    Uninstall and reset the vCluster installation without reinstalling
--fips                          Enable FIPS-compliant mode
--platform-access-key           Access key for vCluster Platform integration
--platform-host                 vCluster Platform host URL
--platform-insecure             Skip TLS verification for the Platform connection
--platform-instance-name        Instance name in vCluster Platform
--platform-project              Project name in vCluster Platform
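Flags are appended after `sh -s --`, as with `--vcluster-name` in the install step. An illustrative sketch combining several of them (version and paths are placeholders, not required values):

```shell
# Illustrative only: pin the vCluster version and point the installer
# at a pre-created config file.
curl -sfL https://github.com/loft-sh/vcluster/releases/download/v0.33.1/install-standalone.sh \
  | sh -s -- \
      --vcluster-name standalone \
      --vcluster-version v0.33.1 \
      --config /etc/vcluster/vcluster.yaml
```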

Access your cluster​

After installation, vCluster automatically configures the kubeconfig on the control plane node and sets the kubectl context to your new vCluster Standalone instance.

To use vCluster Standalone as a Control Plane Cluster for tenant clusters, set your kubectl context to the vCluster Standalone instance. You can then create and manage tenant clusters using the vCluster CLI.

To access the cluster from other machines, copy the kubeconfig from /var/lib/vcluster/kubeconfig.yaml on the control plane node, then replace the server field with relevant IP or DNS to access the cluster. You can also configure vCluster to make external access easier.
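The server-field rewrite can be done with a one-line sed. A sketch, assuming the kubeconfig has already been copied to the current directory and using 203.0.113.10 as a placeholder for your control plane node's reachable address:

```shell
# Create a stand-in for the copied kubeconfig (in practice, copy
# /var/lib/vcluster/kubeconfig.yaml from the control plane node instead).
cat > kubeconfig.yaml <<'EOF'
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: standalone
EOF

# Point the server field at the externally reachable address
# (placeholder IP; the port must match your control plane's API port):
sed -i 's|server: https://.*|server: https://203.0.113.10:6443|' kubeconfig.yaml
grep 'server:' kubeconfig.yaml
```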

The vCluster CLI is installed at /var/lib/vcluster/bin/vcluster-cli.

Platform-managed​

When managing a standalone cluster through vCluster Platform, the initial control plane node is provisioned through the platform using Auto Nodes. Platform connects to your node provider (such as AWS, GCP, or a bare metal provisioner), provisions a VM or physical server, installs the vCluster binary, and joins the node as the control plane. Once the cluster is running, Platform manages its full lifecycle. Platform coordinates configuration updates and version upgrades as rolling operations, eliminating the need for manual SSH access.

Use vCluster Platform to:

  1. Add a Node Provider.
  2. Add the vCluster configuration (example below).
  3. Provision the cluster from the platform UI.
vcluster.yaml for a standalone control plane managed by vCluster Platform
controlPlane:
  standalone:
    enabled: true
    autoNodes:
      provider: aws # Node provider you want to use
      quantity: 1   # Number of nodes (HA requires embedded etcd or external DB)
  distro:
    k8s:
      image:
        tag: v1.35.0 # Kubernetes version you want to use

# Worker nodes
privateNodes:
  enabled: true
  autoNodes: # (optional) Add worker nodes with Auto Nodes
    - provider: aws
      dynamic:
        - name: aws-pool-1

After provisioning completes, vCluster Platform manages the control plane node lifecycle. Worker node lifecycle also remains managed through the platform UI when Auto Nodes are used.

Access your cluster​

To access a Standalone cluster managed by vCluster Platform, open the vCluster in the platform UI and click Connect.

As an alternative, use the vcluster platform connect vcluster command.