Installing Kubernetes on Ubuntu Server
This guide walks you through installing a Kubernetes cluster on Ubuntu Server using kubeadm, kubelet, and kubectl. Before you begin, make sure every machine meets the following prerequisites:
- Ubuntu Server 20.04 or later
- Minimum 2 GB RAM
- 2 CPUs or more
- Full network connectivity between all machines
- Unique hostname, MAC address, and product_uuid for every node
- Root or sudo privileges
Overview
You'll install three essential packages on all nodes:
- kubeadm: The command-line tool that bootstraps your Kubernetes cluster by initializing the control plane and configuring cluster components
- kubelet: The primary node agent that runs on every machine in your cluster. It manages the pod lifecycle by starting, stopping, and maintaining application containers as directed by the control plane
- kubectl: The command-line interface for interacting with your Kubernetes cluster. It communicates with the API server to deploy applications, inspect resources, and manage cluster operations
kubeadm does not install or manage kubelet or kubectl for you. You must ensure they match the version of the Kubernetes control plane that kubeadm installs. Mismatched versions can cause version skew, leading to unexpected bugs and instability.
Supported skew: The kubelet can be one minor version behind the API server (e.g., kubelet 1.34.x with API server 1.35.x), but the kubelet version may never exceed the API server version.
Step 1: Configure Required Ports
Kubernetes components communicate over specific TCP/UDP ports. Before installation, ensure your firewall allows traffic on these ports.
Control Plane Node Ports
| Protocol | Direction | Port Range | Purpose | Used By |
|---|---|---|---|---|
| TCP | Inbound | 6443 | Kubernetes API server | All |
| TCP | Inbound | 2379-2380 | etcd server client API | kube-apiserver, etcd |
| TCP | Inbound | 10250 | Kubelet API | Self, Control plane |
| TCP | Inbound | 10259 | kube-scheduler | Self |
| TCP | Inbound | 10257 | kube-controller-manager | Self |
etcd is the distributed key-value store that holds all cluster data. While these ports are listed for control plane nodes, you can also host etcd externally or on custom ports if needed.
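If your nodes run a host firewall such as ufw (common on Ubuntu), one way to open the control plane ports from the table above is sketched below; this assumes ufw is your firewall, so adapt the commands to whatever tool you actually use.
# Open control plane ports with ufw (skip or adjust if you use a different firewall)
sudo ufw allow 6443/tcp        # Kubernetes API server
sudo ufw allow 2379:2380/tcp   # etcd server client API
sudo ufw allow 10250/tcp       # Kubelet API
sudo ufw allow 10259/tcp       # kube-scheduler
sudo ufw allow 10257/tcp       # kube-controller-manager
sudo ufw status verbose        # confirm the rules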
Worker Node Ports
| Protocol | Direction | Port Range | Purpose | Used By |
|---|---|---|---|---|
| TCP | Inbound | 10250 | Kubelet API | Self, Control plane |
| TCP | Inbound | 10256 | kube-proxy | Self, Load balancers |
| TCP | Inbound | 30000-32767 | NodePort Services† | All |
| UDP | Inbound | 30000-32767 | NodePort Services† | All |
† NodePort Services expose applications on a static port on each node, making them accessible from outside the cluster.
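Likewise, on worker nodes a ufw-based setup (again assuming ufw is your firewall of choice) might look like this:
# Open worker node ports with ufw
sudo ufw allow 10250/tcp         # Kubelet API
sudo ufw allow 10256/tcp         # kube-proxy health checks
sudo ufw allow 30000:32767/tcp   # NodePort Services
sudo ufw allow 30000:32767/udp   # NodePort Services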
Testing Port Connectivity
Verify that required ports are accessible between nodes using netcat:
# Install netcat if not available (netcat-openbsd provides the nc command on Ubuntu)
sudo apt-get install -y netcat-openbsd
# On the control plane node, start a temporary listener on port 6443 (the API server port)
nc -l 6443
# From another node, test connectivity to the control plane
nc -vz <control-plane-ip> 6443
If the connection succeeds, you'll see: Connection to <control-plane-ip> 6443 port [tcp/*] succeeded!
Repeat this process for all critical ports. Start with port 6443 (API server), then test etcd ports (2379-2380), and kubelet API (10250). This ensures proper cluster communication before proceeding.
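To save typing, you can loop over the critical TCP ports with netcat. This is a quick sketch that reuses the <control-plane-ip> placeholder from above; for ports whose components aren't running yet, start a temporary listener on the target with nc -l as shown earlier.
# Test several control plane ports in one pass
for port in 6443 2379 2380 10250; do
  nc -vz <control-plane-ip> "$port"
done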
Step 2: Configure Swap Memory
The kubelet (Kubernetes node agent) has specific requirements regarding swap memory to ensure predictable performance and resource management.
Why Disable Swap?
By default, Kubernetes requires swap to be disabled because:
- Predictable performance: Swap can cause unpredictable latency when the kernel moves memory to disk
- Resource guarantees: Kubernetes resource limits (memory requests/limits) assume physical RAM, not virtual memory
- Container isolation: Swap can break the memory isolation between containers
Option 1: Disable Swap (Recommended)
# Disable swap immediately (affects current session only)
sudo swapoff -a
# Verify swap is disabled
free -h
# Look for "Swap: 0B" in the output
# Disable swap permanently across reboots
# Comment out swap entries in /etc/fstab
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
# Verify fstab changes
cat /etc/fstab | grep swap
After running these commands, swap will remain disabled even after system reboots.
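As an extra check, swapon reports active swap devices; no output means swap is fully disabled.
# No output means no active swap devices
swapon --show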
Option 2: Tolerate Swap (Advanced)
If your environment requires swap to remain enabled, you can configure the kubelet to tolerate it:
# Allow the kubelet to start even with swap enabled (set in the
# KubeletConfiguration, e.g. the file kubeadm manages at /var/lib/kubelet/config.yaml)
failSwapOn: false
# Configure swap behavior via memorySwap.swapBehavior (optional)
# NoSwap: Workloads cannot use swap (default)
# LimitedSwap: Workloads can use swap with restrictions
memorySwap:
  swapBehavior: LimitedSwap
Apply the configuration:
sudo systemctl restart kubelet
- Even with failSwapOn: false, workloads cannot use swap by default
- You must explicitly set swapBehavior to something other than NoSwap to enable swap access
- This is an advanced configuration that may impact cluster performance and stability
Step 3: Enable IPv4 Packet Forwarding
Kubernetes networking requires the Linux kernel to forward IPv4 packets between network interfaces. This is essential for:
- Pod-to-pod communication across different nodes
- Service networking to route traffic to the correct pods
- Container network interface (CNI) plugins to function correctly
# Create a sysctl configuration file for Kubernetes
# This configuration persists across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
EOF
# Apply the sysctl parameters immediately without reboot
sudo sysctl --system
Verify the configuration:
sysctl net.ipv4.ip_forward
Expected output: net.ipv4.ip_forward = 1
The net.ipv4.ip_forward parameter tells the Linux kernel to act as a router, forwarding packets between network interfaces. Without this, containers on different nodes cannot communicate, and services won't route traffic properly.
Step 4: Install Container Runtime (containerd)
Kubernetes uses the Container Runtime Interface (CRI) to interact with container runtimes. We'll install containerd, an industry-standard container runtime that manages the complete container lifecycle.
containerd is a CNCF graduated project that:
- Manages container execution and supervision
- Handles image transfer and storage
- Provides low-level storage and network attachment
- Is used by Docker, Kubernetes, and other platforms
Understanding the Installation Components
A complete containerd installation requires three components:
- containerd: The core runtime daemon
- runc: The OCI-compliant runtime that actually creates and runs containers
- CNI plugins: Network plugins for container networking
Step 4.1: Install containerd
# Visit the containerd releases page to find the latest version
# https://github.com/containerd/containerd/releases
# Download containerd (replace <VERSION> with the latest version, e.g., 1.7.13)
wget https://github.com/containerd/containerd/releases/download/v<VERSION>/containerd-<VERSION>-linux-amd64.tar.gz
# Verify the checksum (get SHA256 from the releases page)
sha256sum containerd-<VERSION>-linux-amd64.tar.gz
# Extract containerd binaries to /usr/local
sudo tar Cxzvf /usr/local containerd-<VERSION>-linux-amd64.tar.gz
This extracts several binaries:
- containerd: Main daemon
- containerd-shim-runc-v2: Interface between containerd and runc
- ctr: Command-line client for debugging
Install the systemd service file:
# Download the official containerd systemd service unit
sudo mkdir -p /usr/local/lib/systemd/system
sudo wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service \
-O /usr/local/lib/systemd/system/containerd.service
# Reload systemd to recognize the new service
sudo systemctl daemon-reload
# Enable containerd to start on boot and start it now
sudo systemctl enable --now containerd
# Verify containerd is running
sudo systemctl status containerd
systemd is the init system used by modern Linux distributions to manage services. The .service file tells systemd how to start, stop, and manage the containerd daemon.
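As a quick sanity check, you can ask the bundled ctr client to report both its own version and the daemon's; if the server section is missing, the containerd daemon isn't reachable.
# Query both the ctr client and the containerd daemon
sudo ctr version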
Step 4.2: Install runc
runc is the low-level runtime that actually creates and runs containers according to the OCI (Open Container Initiative) specification.
# Visit the runc releases page to find the latest version
# https://github.com/opencontainers/runc/releases
# Download runc (replace <VERSION> with latest, e.g., 1.1.12)
wget https://github.com/opencontainers/runc/releases/download/v<VERSION>/runc.amd64
# Verify the checksum (get SHA256 from the releases page)
sha256sum runc.amd64
# Install runc to /usr/local/sbin with executable permissions
sudo install -m 755 runc.amd64 /usr/local/sbin/runc
# Verify installation
runc --version
The runc binary is statically compiled, meaning it includes all dependencies. It should work on any Linux distribution without additional requirements.
Step 4.3: Install CNI Plugins
CNI (Container Network Interface) plugins provide networking capabilities for containers, such as:
- Creating network interfaces in containers
- Assigning IP addresses
- Setting up network routes
- Implementing network policies
# Create CNI plugins directory
sudo mkdir -p /opt/cni/bin
# Visit the CNI plugins releases page to find the latest version
# https://github.com/containernetworking/plugins/releases
# Download CNI plugins (replace <VERSION> with latest, e.g., 1.4.0)
wget https://github.com/containernetworking/plugins/releases/download/v<VERSION>/cni-plugins-linux-amd64-v<VERSION>.tgz
# Verify the checksum
sha256sum cni-plugins-linux-amd64-v<VERSION>.tgz
# Extract CNI plugins
sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v<VERSION>.tgz
This installs various CNI plugins including:
- bridge: Creates a network bridge on the host
- host-local: Allocates IP addresses from a predefined range
- loopback: Sets up the loopback interface
- portmap: Forwards ports from the host to containers
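You can confirm the plugins landed in the expected directory by listing it; the exact set of binaries varies slightly between releases.
# List the installed CNI plugin binaries
ls /opt/cni/bin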
Step 4.4: Configure containerd
Generate and configure the containerd configuration file:
# Create containerd configuration directory
sudo mkdir -p /etc/containerd
# Generate the default configuration
containerd config default | sudo tee /etc/containerd/config.toml
The /etc/containerd/config.toml file specifies daemon-level options including:
- Runtime configuration
- Plugin settings
- Image registry configuration
- Storage locations
Configure the systemd cgroup driver:
Kubernetes requires the container runtime and kubelet to use the same cgroup driver. The systemd cgroup driver is recommended for systems using cgroup v2 (default in Ubuntu 22.04+).
For containerd 2.x, edit /etc/containerd/config.toml and find the runc runtime configuration section:
[plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc]
runtime_type = "io.containerd.runc.v2"
[plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc.options]
SystemdCgroup = true
Or use sed to make the change:
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
For containerd 1.x, the configuration path is slightly different:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
Or use sed:
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
Cgroups (control groups) limit and isolate resource usage (CPU, memory, disk I/O) for containers. The systemd cgroup driver integrates with systemd's resource management, providing better integration and stability on modern Linux systems using cgroup v2.
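If you're unsure which cgroup version your host uses, the filesystem type mounted at /sys/fs/cgroup tells you: cgroup2fs means cgroup v2, while tmpfs indicates cgroup v1.
# Check the cgroup version in use
stat -fc %T /sys/fs/cgroup/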
Override the sandbox (pause) image:
The pause container is a special container that holds the network namespace for all containers in a pod. Update the image to use the official Kubernetes pause image:
# Edit /etc/containerd/config.toml
# Find the [plugins."io.containerd.grpc.v1.cri"] section and add:
[plugins."io.containerd.grpc.v1.cri"]
sandbox_image = "registry.k8s.io/pause:3.10"
Apply all configuration changes:
# Restart containerd to apply configuration
sudo systemctl restart containerd
# Verify containerd is running with new configuration
sudo systemctl status containerd
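To double-check that the cgroup driver change took effect, you can dump containerd's effective configuration and filter for the setting:
# Should print "SystemdCgroup = true" from the active configuration
sudo containerd config dump | grep SystemdCgroup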
Step 5: Install kubeadm, kubelet, and kubectl
Now we'll install the Kubernetes packages from the official package repositories.
The legacy Kubernetes repositories (apt.kubernetes.io and yum.kubernetes.io) were deprecated and frozen on September 13, 2023. You must use the new pkgs.k8s.io repositories to install Kubernetes versions released after this date.
Package Repository Structure
The new Kubernetes repositories are organized by minor version. Each Kubernetes minor version (1.34, 1.35, etc.) has its own dedicated repository. This guide covers Kubernetes v1.35.
# Update package index and install prerequisites
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
What these packages do:
- apt-transport-https: Allows APT to retrieve packages over HTTPS (may be a dummy package on newer Ubuntu versions)
- ca-certificates: Provides SSL certificates to verify HTTPS connections
- curl: Downloads files from URLs
- gpg: Verifies package signatures
Download and install the Kubernetes GPG signing key:
# Create the keyrings directory if it doesn't exist
# This directory stores GPG keys used to verify package authenticity
sudo mkdir -p -m 755 /etc/apt/keyrings
# Download the Kubernetes GPG key and convert it to GPG keyring format
# The same key works for all Kubernetes versions
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.35/deb/Release.key | \
sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
GPG (GNU Privacy Guard) keys verify that packages come from the official Kubernetes project and haven't been tampered with. The --dearmor option converts the key from ASCII armor format to binary format.
Add the Kubernetes apt repository:
# Add the Kubernetes repository to APT sources
# This tells APT where to download Kubernetes packages from
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.35/deb/ /' | \
sudo tee /etc/apt/sources.list.d/kubernetes.list
This repository contains only Kubernetes 1.35 packages. To install a different minor version, change v1.35 in the URL to your desired version (e.g., v1.34). Always check the official documentation for the version you're installing.
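If you want to see exactly which package versions the repository offers before installing (or to pin a specific patch release), apt can list them once the index is refreshed:
# Refresh the index and list available kubeadm versions from the new repository
sudo apt-get update
apt-cache madison kubeadm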
Install kubelet, kubeadm, and kubectl:
# Update the package index with the new repository
sudo apt-get update
# Install Kubernetes components
sudo apt-get install -y kubelet kubeadm kubectl
# Prevent automatic updates of Kubernetes packages
sudo apt-mark hold kubelet kubeadm kubectl
The apt-mark hold command marks packages as "held back," preventing them from being automatically upgraded during system updates (apt upgrade). This is critical for Kubernetes because:
- Version synchronization: All cluster nodes should run the same Kubernetes version
- Controlled upgrades: Kubernetes upgrades require careful planning and execution
- Compatibility: Automatic updates can break version compatibility between components
When you're ready to upgrade Kubernetes, you'll need to:
- Remove the hold: sudo apt-mark unhold kubelet kubeadm kubectl
- Upgrade following the official upgrade procedure
- Re-apply the hold: sudo apt-mark hold kubelet kubeadm kubectl
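You can confirm the holds are in place at any time:
# List packages currently held back from automatic upgrades
apt-mark showhold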
Enable and start the kubelet service:
# Enable kubelet to start automatically on boot
sudo systemctl enable --now kubelet
The kubelet will restart every few seconds in a crashloop state, waiting for kubeadm to tell it what to do. This is normal behavior. The kubelet will stabilize once you run kubeadm init on the control plane node or kubeadm join on worker nodes.
You can observe this with: sudo journalctl -u kubelet -f
Verify installation:
kubeadm version
kubectl version --client
kubelet --version
Step 6: Initialize the Control Plane
This step is performed only on the control plane node (master node). The control plane runs the core Kubernetes components that manage the cluster.
What kubeadm init Does
When you run kubeadm init, it performs several initialization phases:
- Preflight checks: Validates system requirements (swap disabled, required ports open, etc.)
- Certificate generation: Creates a self-signed Certificate Authority (CA) and certificates for all components
- Kubeconfig files: Generates kubeconfig files for kubelet, controller-manager, scheduler, and admin
- Static pod manifests: Creates manifests for API server, controller-manager, scheduler, and etcd
- Wait for control plane: Waits for the control plane components to become healthy
- Upload configuration: Stores cluster configuration in ConfigMaps
- Install addons: Deploys CoreDNS (DNS server) and kube-proxy (network proxy)
- Generate join token: Creates a token for worker nodes to join the cluster
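If you'd like to see the defaults kubeadm would apply before touching the node, you can print them; the output is a full configuration document you can save and customize.
# Print kubeadm's default init configuration without making any changes
kubeadm config print init-defaults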
Initialization Options
Choose the approach that matches your environment: basic initialization, a high-availability setup, or a custom configuration.
For a simple, single control plane setup:
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
What this does:
- Initializes the control plane with default settings
- Sets the pod network CIDR to 192.168.0.0/16
- Uses the default gateway IP as the API server advertise address
- Binds the API server to port 6443
For production clusters that require multiple control plane nodes:
# Add a DNS name or load balancer IP to /etc/hosts
echo "192.168.0.102 cluster-endpoint" | sudo tee -a /etc/hosts
# Initialize with a shared control plane endpoint
sudo kubeadm init \
--control-plane-endpoint=cluster-endpoint \
--pod-network-cidr=192.168.0.0/16 \
--upload-certs
Understanding --control-plane-endpoint:
- Sets a shared endpoint for all control plane nodes
- Can be a DNS name (recommended) or IP address of a load balancer
- Allows you to add additional control plane nodes later
- The endpoint address becomes part of the API server certificate
Why use a load balancer endpoint?
- High availability: If one control plane node fails, requests route to healthy nodes
- Scalability: Distribute API server load across multiple nodes
- Flexibility: Easily add/remove control plane nodes without changing client configurations
Converting a single control plane cluster to HA after initialization (without --control-plane-endpoint) is not supported by kubeadm. Always plan for HA from the beginning if you might need it.
For custom API server addresses and advanced options:
sudo kubeadm init \
--apiserver-advertise-address=192.168.0.102 \
--pod-network-cidr=192.168.0.0/16 \
--service-cidr=10.96.0.0/12
Key options explained:
- --apiserver-advertise-address: The IP address the API server advertises to other cluster members. This address is added to the API server's TLS certificate
- --pod-network-cidr: The IP range for pod networks. This must match your CNI plugin's requirements (Calico uses 192.168.0.0/16)
- --service-cidr: The IP range for cluster services (default: 10.96.0.0/12)
The --pod-network-cidr flag tells the control plane which IP range to use for pod networking. Different CNI plugins have different requirements:
- Calico: 192.168.0.0/16
- Flannel: 10.244.0.0/16
- Weave: Auto-configured
Check your CNI plugin's documentation for the correct CIDR.
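Instead of passing flags, kubeadm also accepts a configuration file via --config. The snippet below is a minimal, illustrative sketch roughly equivalent to the endpoint and CIDR flags used above; the values are examples to adapt, and the apiVersion shown (v1beta4) matches recent kubeadm releases, so check kubeadm config print init-defaults for the one your version expects.
# Write an illustrative kubeadm configuration file (values are examples; adjust to your environment)
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
controlPlaneEndpoint: cluster-endpoint:6443
networking:
  podSubnet: 192.168.0.0/16
  serviceSubnet: 10.96.0.0/12
EOF
# Initialize the control plane from the file instead of individual flags
sudo kubeadm init --config kubeadm-config.yaml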
Post-Initialization Steps
After successful initialization, you'll see output similar to:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a Pod network to the cluster.
You can now join any number of machines by running the following on each node:
kubeadm join 192.168.0.102:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:1234567890abcdef...
Configure kubectl for your user:
# Create the .kube directory in your home folder
mkdir -p $HOME/.kube
# Copy the admin kubeconfig file
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# Change ownership to your user
sudo chown $(id -u):$(id -g) $HOME/.kube/config
The kubeconfig file ($HOME/.kube/config) contains:
- Cluster information: API server address and CA certificate
- User credentials: Client certificate and key for authentication
- Context: Which cluster and user to use by default
kubectl reads this file to authenticate with the API server. Never share this fileβit grants full administrative access to your cluster.
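If you are operating as root, kubeadm's output also suggests a simpler alternative: point KUBECONFIG at the admin file for the current shell session instead of copying it.
# Root-only alternative: use the admin kubeconfig directly for this session
export KUBECONFIG=/etc/kubernetes/admin.conf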
Verify the control plane:
# Check node status
kubectl get nodes
# Check that control plane pods are running
kubectl get pods -n kube-system
# View cluster information
kubectl cluster-info
Initially, your node will show as NotReady because the pod network addon hasn't been installed yet.
Step 7: Install Pod Network Add-on
Kubernetes requires a Container Network Interface (CNI) plugin to enable pod-to-pod communication. Without a CNI plugin, pods cannot communicate across nodes, and the node will remain in NotReady state.
How CNI Plugins Work
CNI plugins:
- Assign IP addresses to pods
- Create virtual network interfaces
- Set up routing rules for pod traffic
- Implement network policies
- Enable pod-to-pod and pod-to-service communication
Install Calico
Calico is a popular CNI plugin that provides:
- Layer 3 networking using BGP
- Network policy enforcement
- High performance with no overlay network overhead
- Support for both IPv4 and IPv6
# Apply the Calico manifest
# This creates all necessary resources: DaemonSets, Deployments, ConfigMaps, etc.
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
Monitor the installation:
# Watch pods being created (Ctrl+C to exit)
kubectl get pods -n kube-system -w
# Check that all Calico pods are running
kubectl get pods -n kube-system | grep calico
You should see pods like:
- calico-node-xxxxx: Runs on each node to provide networking
- calico-kube-controllers-xxxxx: Manages Calico resources
- calico-typha-xxxxx: Optional scaling component for large clusters
After the CNI plugin is installed and its pods are running, your node status will change from NotReady to Ready. Check with: kubectl get nodes
Other Network Add-on Options
Flannel - Simple overlay network
Flannel creates a simple overlay network using VXLAN (by default):
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
Flannel characteristics:
- Very simple to deploy and operate
- Uses VXLAN overlay by default
- Lower performance than Calico's native routing
- Limited network policy support
- Good for small to medium clusters
Required: Use --pod-network-cidr=10.244.0.0/16 during kubeadm init
Weave Net - Automatic mesh network
Weave Net creates an automatic mesh network between nodes:
kubectl apply -f https://github.com/weaveworks/weave/releases/download/latest_release/weave-daemonset-k8s.yaml
Weave Net characteristics:
- Automatic mesh topology
- Built-in network policy support
- Encryption available
- Slightly higher CPU usage
- Auto-configures pod CIDR
Cilium - eBPF-based networking
Cilium uses eBPF (extended Berkeley Packet Filter) for high-performance networking:
# Install Cilium CLI first
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
curl -L --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-amd64.tar.gz
sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
# Install Cilium
cilium install
Cilium characteristics:
- Uses eBPF for very high performance
- Advanced observability features
- Multi-cluster networking
- Service mesh capabilities
- Requires Linux kernel 4.9+ with eBPF support
Quick comparison:
- Calico: Best for most production workloads, excellent performance, strong network policies
- Flannel: Simple setup, good for learning and development
- Weave Net: Good balance of features and simplicity
- Cilium: Best performance and features, requires newer kernels
For more CNI plugins, visit the Kubernetes networking add-ons page.
Step 8: Join Worker Nodes
After initializing the control plane and installing a CNI plugin, you can add worker nodes to your cluster.
Prerequisites for Worker Nodes
Ensure you've completed Steps 1-5 on each worker node:
- Configured required ports
- Disabled swap (or configured the kubelet to tolerate it)
- Enabled IPv4 forwarding
- Installed containerd, runc, and CNI plugins
- Installed kubeadm, kubelet, and kubectl
Join the Cluster
Run the join command from your kubeadm init output on each worker node:
sudo kubeadm join <control-plane-ip>:6443 \
--token <token> \
--discovery-token-ca-cert-hash sha256:<hash>
Understanding the join command:
- <control-plane-ip>:6443: Address of the API server
- --token: Bootstrap token for authentication (expires after 24 hours)
- --discovery-token-ca-cert-hash: SHA256 hash of the cluster CA certificate (ensures you're joining the correct cluster)
Token Management
Bootstrap tokens expire after 24 hours for security. If your token has expired or you lost the join command:
Generate a new token:
# On the control plane node
kubeadm token create --print-join-command
This outputs a complete kubeadm join command with a new token.
List existing tokens:
kubeadm token list
Manually construct a join command:
# Get the token
kubeadm token list
# Get the CA certificate hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'
# Use these to construct the join command
sudo kubeadm join <control-plane-ip>:6443 \
--token <token> \
--discovery-token-ca-cert-hash sha256:<hash>
Verify Nodes Joined
From the control plane node:
# List all nodes
kubectl get nodes
# Expected output:
# NAME STATUS ROLES AGE VERSION
# control-plane Ready control-plane 10m v1.35.x
# worker-1 Ready <none> 2m v1.35.x
# worker-2 Ready <none> 1m v1.35.x
# Get detailed node information
kubectl describe node <node-name>
# Check that all system pods are running
kubectl get pods -A
All nodes should show Ready status. If a node shows NotReady, check the kubelet logs with sudo journalctl -u kubelet.
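As an optional smoke test, you can run a small deployment and expose it through a NodePort; the names below (hello, the nginx image) are only illustrative.
# Deploy a test application with two replicas
kubectl create deployment hello --image=nginx --replicas=2
# Expose it on a NodePort (the 30000-32767 range opened in Step 1)
kubectl expose deployment hello --port=80 --type=NodePort
# Confirm the pods landed on your worker nodes and note the assigned NodePort
kubectl get pods -o wide
kubectl get service hello
# Clean up when you're done
kubectl delete service/hello deployment/hello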
Next Steps
Congratulations! Your Kubernetes cluster is now operational. Here's what you can do next:
View cluster resources:
# Get all resources in all namespaces
kubectl get all -A
# View nodes with more details
kubectl get nodes -o wide
# Check cluster component health (componentstatuses is deprecated in recent releases but still handy for a quick look)
kubectl get componentstatuses
Install Kubernetes Dashboard (Optional)
The Kubernetes Dashboard provides a web-based UI for cluster management:
# Deploy the dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
# Create an admin user (for testing only - not for production)
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kubernetes-dashboard
EOF
# Get the access token
kubectl -n kubernetes-dashboard create token admin-user
# Start the proxy
kubectl proxy
# Access the dashboard at:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
The admin user created above has full cluster access. For production environments, implement proper RBAC (Role-Based Access Control) with limited permissions.
Set Up Monitoring and Logging
Monitor your cluster's health and performance:
Metrics Server (required for kubectl top):
# Install Metrics Server
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
# For self-signed certificates, you may need to add --kubelet-insecure-tls
# Edit the metrics-server deployment if needed
# Wait for metrics-server to be ready
kubectl wait --for=condition=ready pod -l k8s-app=metrics-server -n kube-system --timeout=120s
# View resource usage
kubectl top nodes
kubectl top pods -A
Learn Key Kubernetes Concepts
Now that your cluster is running, explore these fundamental concepts:
- Pods: The smallest deployable units
- Deployments: Declarative updates for Pods and ReplicaSets
- Services: Expose applications running on a set of Pods
- Volumes: Persistent storage for containers
- ConfigMaps & Secrets: Configuration management
- Ingress: HTTP/HTTPS routing to services
- Namespaces: Virtual clusters for resource isolation
Understanding Your Cluster Architecture
Your Kubernetes cluster now consists of:
Control Plane Components (Master Node)
- kube-apiserver: The API server is the front-end for the Kubernetes control plane. All communications and operations go through it
- etcd: Consistent and highly-available key-value store used as Kubernetes' backing store for all cluster data
- kube-scheduler: Watches for newly created Pods with no assigned node, and selects a node for them to run on
- kube-controller-manager: Runs controller processes (Node Controller, Replication Controller, Endpoints Controller, Service Account Controller)
- cloud-controller-manager: Runs controllers that interact with cloud providers (if applicable)
Node Components (All Nodes)
- kubelet: An agent that runs on each node, ensuring that containers are running in Pods
- kube-proxy: A network proxy that maintains network rules on nodes, enabling service communication
- Container runtime: Software responsible for running containers (containerd in our setup)
Add-ons
- CNI Plugin (Calico): Provides networking and network policy
- CoreDNS: Provides DNS services for the cluster
- Metrics Server (optional): Provides resource metrics for nodes and pods
Best Practices for Production
As you prepare your cluster for production workloads, consider these best practices:
Security
- Enable RBAC: Control who can access what in your cluster
- Use Network Policies: Restrict traffic between pods
- Scan images: Use tools like Trivy or Clair to scan container images for vulnerabilities
- Encrypt secrets: Enable encryption at rest for Secrets in etcd
- Regular updates: Keep Kubernetes and all components up to date
- Pod Security Standards: Enforce pod security policies to prevent privileged containers
High Availability
- Multiple control plane nodes: Run at least 3 control plane nodes for HA
- External etcd: Consider running etcd outside the control plane nodes
- Load balancer: Use a load balancer for the API server
- Multiple worker nodes: Distribute workloads across multiple nodes
- Pod Disruption Budgets: Ensure availability during updates and failures
Resource Management
- Set resource requests and limits: Define CPU and memory requirements for all containers (see the sketch after this list)
- Use Horizontal Pod Autoscaling: Automatically scale applications based on load
- Implement Quality of Service (QoS): Classify pods into QoS classes (Guaranteed, Burstable, BestEffort)
- Node affinity and taints: Control pod scheduling on specific nodes
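As a concrete illustration of the first point above, here is a minimal pod spec with requests and limits; the name, image, and values are placeholders to adapt, not recommendations.
# Example pod spec with CPU/memory requests and limits (illustrative values)
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "250m"
        memory: "128Mi"
      limits:
        cpu: "500m"
        memory: "256Mi"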
Monitoring and Observability
- Centralized logging: Use tools like EFK stack (Elasticsearch, Fluentd, Kibana) or Loki
- Metrics collection: Deploy Prometheus for metrics collection
- Distributed tracing: Implement tools like Jaeger or Zipkin for microservices
- Alerting: Set up alerts for critical cluster events and resource exhaustion
Additional Resources
Official Documentation
- Kubernetes Official Documentation: Comprehensive guides and references
- kubeadm Reference Guide: Detailed kubeadm documentation
- kubectl Cheat Sheet: Quick reference for kubectl commands
- Kubernetes API Reference: Complete API documentation
Learning Resources
- Kubernetes Basics Tutorial: Interactive tutorial for beginners
- Certified Kubernetes Administrator (CKA): Official certification program
- Kubernetes YouTube Channel: Official videos and conference talks
- Kubernetes Slack: Join the community for help and discussions
Tools and Utilities
- k9s: Terminal UI for Kubernetes clusters
- Lens: Desktop IDE for Kubernetes
- Helm: Package manager for Kubernetes
- Kustomize: Configuration management tool
- kubectx/kubens: Switch between clusters and namespaces easily
Community and Support
- Kubernetes Community: Get involved with the Kubernetes community
- Stack Overflow: Ask questions and find answers
- GitHub Issues: Report bugs and request features
- Kubernetes Mailing Lists: Join developer and user discussions
Summary
You've successfully installed a Kubernetes cluster on Ubuntu Server! Here's what you've accomplished:
- Configured networking and system requirements
- Installed and configured the containerd container runtime
- Installed Kubernetes components (kubeadm, kubelet, kubectl)
- Initialized the control plane node
- Deployed a CNI network plugin
- Joined worker nodes to the cluster
Your cluster is now ready to run containerized applications. Start by deploying simple applications, then gradually explore more advanced features like StatefulSets, DaemonSets, Jobs, and CronJobs.
Remember to regularly update your cluster, monitor its health, implement proper security practices, and back up your etcd data.
Happy orchestrating!
If you encounter issues or have questions:
- Check the official Kubernetes documentation
- Review component logs with kubectl logs and journalctl
- Ask questions in the Kubernetes Slack community
- Search Stack Overflow for similar issues