Installing Kubernetes on Ubuntu Server

This guide walks you through installing a Kubernetes cluster on Ubuntu Server using kubeadm, kubelet, and kubectl.

Prerequisites
  • Ubuntu Server 20.04 or later
  • Minimum 2 GB RAM
  • 2 CPUs or more
  • Full network connectivity between all machines
  • Unique hostname, MAC address, and product_uuid for every node
  • Root or sudo privileges

Overview​

You'll install three essential packages on all nodes:

  • kubeadm: The command-line tool that bootstraps your Kubernetes cluster by initializing the control plane and configuring cluster components
  • kubelet: The primary node agent that runs on every machine in your cluster. It manages the pod lifecycle by starting, stopping, and maintaining application containers as directed by the control plane
  • kubectl: The command-line interface for interacting with your Kubernetes cluster. It communicates with the API server to deploy applications, inspect resources, and manage cluster operations
Version Compatibility

kubeadm does not install or manage kubelet or kubectl for you. You must ensure they match the version of the Kubernetes control plane that kubeadm installs; otherwise you introduce version skew, which can lead to unexpected bugs and instability.

Supported skew: The kubelet can be one minor version behind the API server (e.g., kubelet 1.34.x with API server 1.35.x), but the kubelet version may never exceed the API server version.


Step 1: Configure Required Ports​

Kubernetes components communicate over specific TCP/UDP ports. Before installation, ensure your firewall allows traffic on these ports.

Control Plane Node Ports​

Protocol   Direction   Port Range   Purpose                    Used By
TCP        Inbound     6443         Kubernetes API server      All
TCP        Inbound     2379-2380    etcd server client API     kube-apiserver, etcd
TCP        Inbound     10250        Kubelet API                Self, Control plane
TCP        Inbound     10259        kube-scheduler             Self
TCP        Inbound     10257        kube-controller-manager    Self
About etcd

etcd is the distributed key-value store that holds all cluster data. While these ports are listed for control plane nodes, you can also host etcd externally or on custom ports if needed.

Worker Node Ports​

Protocol   Direction   Port Range    Purpose              Used By
TCP        Inbound     10250         Kubelet API          Self, Control plane
TCP        Inbound     10256         kube-proxy           Self, Load balancers
TCP        Inbound     30000-32767   NodePort Services†   All
UDP        Inbound     30000-32767   NodePort Services†   All

† NodePort Services expose applications on a static port on each node, making them accessible from outside the cluster.
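
If you manage the firewall with ufw (common on Ubuntu Server), the rules below are a minimal sketch of how you might open these ports; adapt them to your own firewall tooling and restrict source addresses where possible.

# On the control plane node (sketch, assuming ufw)
sudo ufw allow 6443/tcp          # Kubernetes API server
sudo ufw allow 2379:2380/tcp     # etcd server client API
sudo ufw allow 10250/tcp         # Kubelet API
sudo ufw allow 10259/tcp         # kube-scheduler
sudo ufw allow 10257/tcp         # kube-controller-manager

# On worker nodes
sudo ufw allow 10250/tcp         # Kubelet API
sudo ufw allow 10256/tcp         # kube-proxy
sudo ufw allow 30000:32767/tcp   # NodePort Services
sudo ufw allow 30000:32767/udp   # NodePort Services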

Testing Port Connectivity​

Verify that required ports are accessible between nodes using netcat:

On control plane node - Start TCP listener
# Install netcat if not available (the nc binary is provided by netcat-openbsd on current Ubuntu releases)
sudo apt-get install -y netcat-openbsd

# Start listener on port 6443 (API server port)
nc -l 6443
On worker node - Test connection
# Test connectivity to control plane
nc -vz <control-plane-ip> 6443

If the connection succeeds, you'll see: Connection to <control-plane-ip> 6443 port [tcp/*] succeeded!

Testing Methodology

Repeat this process for all critical ports. Start with port 6443 (API server), then test etcd ports (2379-2380), and kubelet API (10250). This ensures proper cluster communication before proceeding.
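
As a convenience, a small loop like the following sketch can probe several ports in one pass from a worker node; substitute your control plane IP, and remember each port only reports success if a listener or running service is on the other end.

# Probe the key control plane ports from a worker node (sketch)
for port in 6443 2379 2380 10250; do
  nc -vz <control-plane-ip> "$port"
done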


Step 2: Configure Swap Memory​

The kubelet (Kubernetes node agent) has specific requirements regarding swap memory to ensure predictable performance and resource management.

Why Disable Swap?​

By default, Kubernetes requires swap to be disabled because:

  • Predictable performance: Swap can cause unpredictable latency when the kernel moves memory to disk
  • Resource guarantees: Kubernetes resource limits (memory requests/limits) assume physical RAM, not virtual memory
  • Container isolation: Swap can break the memory isolation between containers
# Disable swap immediately (affects current session only)
sudo swapoff -a

# Verify swap is disabled
free -h
# Look for "Swap: 0B" in the output

# Disable swap permanently across reboots
# Comment out swap entries in /etc/fstab
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# Verify fstab changes
cat /etc/fstab | grep swap

After running these commands, swap will remain disabled even after system reboots.


Step 3: Enable IPv4 Packet Forwarding​

Kubernetes networking requires the Linux kernel to forward IPv4 packets between network interfaces. This is essential for:

  • Pod-to-pod communication across different nodes
  • Service networking to route traffic to the correct pods
  • Container network interface (CNI) plugins to function correctly
# Create a sysctl configuration file for Kubernetes
# This configuration persists across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
EOF

# Apply the sysctl parameters immediately without reboot
sudo sysctl --system

Verify the configuration:

sysctl net.ipv4.ip_forward

Expected output: net.ipv4.ip_forward = 1

What This Does

The net.ipv4.ip_forward parameter tells the Linux kernel to act as a router, forwarding packets between network interfaces. Without this, containers on different nodes cannot communicate, and services won't route traffic properly.


Step 4: Install Container Runtime (containerd)​

Kubernetes uses the Container Runtime Interface (CRI) to interact with container runtimes. We'll install containerd, an industry-standard container runtime that manages the complete container lifecycle.

About containerd

containerd is a CNCF graduated project that:

  • Manages container execution and supervision
  • Handles image transfer and storage
  • Provides low-level storage and network attachment
  • Is used by Docker, Kubernetes, and other platforms

Understanding the Installation Components​

A complete containerd installation requires three components:

  1. containerd: The core runtime daemon
  2. runc: The OCI-compliant runtime that actually creates and runs containers
  3. CNI plugins: Network plugins for container networking

Step 4.1: Install containerd​

# Visit the containerd releases page to find the latest version
# https://github.com/containerd/containerd/releases

# Download containerd (replace <VERSION> with the latest version, e.g., 1.7.13)
wget https://github.com/containerd/containerd/releases/download/v<VERSION>/containerd-<VERSION>-linux-amd64.tar.gz

# Verify the checksum (get SHA256 from the releases page)
sha256sum containerd-<VERSION>-linux-amd64.tar.gz

# Extract containerd binaries to /usr/local
sudo tar Cxzvf /usr/local containerd-<VERSION>-linux-amd64.tar.gz

This extracts several binaries:

  • containerd: Main daemon
  • containerd-shim-runc-v2: Interface between containerd and runc
  • ctr: Command-line client for debugging

Install the systemd service file:

# Download the official containerd systemd service unit
sudo mkdir -p /usr/local/lib/systemd/system
sudo wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service \
-O /usr/local/lib/systemd/system/containerd.service

# Reload systemd to recognize the new service
sudo systemctl daemon-reload

# Enable containerd to start on boot and start it now
sudo systemctl enable --now containerd

# Verify containerd is running
sudo systemctl status containerd
What is systemd?

systemd is the init system used by modern Linux distributions to manage services. The .service file tells systemd how to start, stop, and manage the containerd daemon.
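
If you're curious what systemd was actually told about containerd, you can print the installed unit file:

# Show the containerd unit file as systemd sees it
systemctl cat containerd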

Step 4.2: Install runc​

runc is the low-level runtime that actually creates and runs containers according to the OCI (Open Container Initiative) specification.

# Visit the runc releases page to find the latest version
# https://github.com/opencontainers/runc/releases

# Download runc (replace <VERSION> with latest, e.g., 1.1.12)
wget https://github.com/opencontainers/runc/releases/download/v<VERSION>/runc.amd64

# Verify the checksum (get SHA256 from the releases page)
sha256sum runc.amd64

# Install runc to /usr/local/sbin with executable permissions
sudo install -m 755 runc.amd64 /usr/local/sbin/runc

# Verify installation
runc --version
Static Binary

The runc binary is statically compiled, meaning it includes all dependencies. It should work on any Linux distribution without additional requirements.
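
You can verify this claim on your own machine; ldd reports that a statically linked binary has no dynamic dependencies (a quick sanity check, not a required step):

# A statically linked binary reports no dynamic dependencies
ldd /usr/local/sbin/runc
# Expected output: "not a dynamic executable" (or "statically linked")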

Step 4.3: Install CNI Plugins​

CNI (Container Network Interface) plugins provide networking capabilities for containers, such as:

  • Creating network interfaces in containers
  • Assigning IP addresses
  • Setting up network routes
  • Implementing network policies
# Create CNI plugins directory
sudo mkdir -p /opt/cni/bin

# Visit the CNI plugins releases page to find the latest version
# https://github.com/containernetworking/plugins/releases

# Download CNI plugins (replace <VERSION> with latest, e.g., 1.4.0)
wget https://github.com/containernetworking/plugins/releases/download/v<VERSION>/cni-plugins-linux-amd64-v<VERSION>.tgz

# Verify the checksum
sha256sum cni-plugins-linux-amd64-v<VERSION>.tgz

# Extract CNI plugins
sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v<VERSION>.tgz

This installs various CNI plugins including:

  • bridge: Creates a network bridge on the host
  • host-local: Allocates IP addresses from a predefined range
  • loopback: Sets up the loopback interface
  • portmap: Forwards ports from the host to containers
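
To confirm the plugins landed where containerd and the kubelet expect them, list the directory:

# Verify the CNI plugin binaries were extracted
ls /opt/cni/bin
# You should see bridge, host-local, loopback, portmap, and several others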

Step 4.4: Configure containerd​

Generate and configure the containerd configuration file:

# Create containerd configuration directory
sudo mkdir -p /etc/containerd

# Generate the default configuration
containerd config default | sudo tee /etc/containerd/config.toml
Configuration File

The /etc/containerd/config.toml file specifies daemon-level options including:

  • Runtime configuration
  • Plugin settings
  • Image registry configuration
  • Storage locations

Configure the systemd cgroup driver:

Kubernetes requires the container runtime and kubelet to use the same cgroup driver. The systemd cgroup driver is recommended for systems using cgroup v2 (default in Ubuntu 22.04+).

Edit /etc/containerd/config.toml and find the runc runtime configuration section. For containerd 1.x (the release series used in this guide) it looks like the following; containerd 2.x renames the section to [plugins.'io.containerd.cri.v1.runtime']:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true

Or use sed to make the change:

sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
Why systemd cgroup?

Cgroups (control groups) limit and isolate resource usage (CPU, memory, disk I/O) for containers. The systemd cgroup driver integrates with systemd's resource management, providing better integration and stability on modern Linux systems using cgroup v2.
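
If you're unsure which cgroup version your host is running, the filesystem type of /sys/fs/cgroup tells you:

# Check which cgroup version the host is using
stat -fc %T /sys/fs/cgroup/
# cgroup2fs means cgroup v2; tmpfs means cgroup v1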

Override the sandbox (pause) image:

The pause container is a special container that holds the network namespace for all containers in a pod. Update the image to use the official Kubernetes pause image:

# Edit /etc/containerd/config.toml
# Find the [plugins."io.containerd.grpc.v1.cri"] section and set:

[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.10"

Apply all configuration changes:

# Restart containerd to apply configuration
sudo systemctl restart containerd

# Verify containerd is running with new configuration
sudo systemctl status containerd
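
As a quick sanity check (a sketch using the containerd CLI), you can dump the configuration containerd actually loaded and confirm both edits took effect:

# Confirm the cgroup driver and sandbox image in the effective configuration
containerd config dump | grep -E 'SystemdCgroup|sandbox_image'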

Step 5: Install kubeadm, kubelet, and kubectl​

Now we'll install the Kubernetes packages from the official package repositories.

Important Repository Change

The legacy Kubernetes repositories (apt.kubernetes.io and yum.kubernetes.io) were deprecated and frozen on September 13, 2023. You must use the new pkgs.k8s.io repositories to install Kubernetes versions released after this date.
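
If this machine ever used the legacy repositories, it's worth checking for stale APT entries before continuing; this sketch simply searches your sources for the old hostname:

# Look for leftover references to the legacy Kubernetes repository
grep -r "apt.kubernetes.io" /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null
# Remove or comment out any matching lines before adding pkgs.k8s.io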

Package Repository Structure​

The new Kubernetes repositories are organized by minor version. Each Kubernetes minor version (1.34, 1.35, etc.) has its own dedicated repository. This guide covers Kubernetes v1.35.

# Update package index and install prerequisites
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg

What these packages do:

  • apt-transport-https: Allows APT to retrieve packages over HTTPS (may be a dummy package on newer Ubuntu versions)
  • ca-certificates: Provides SSL certificates to verify HTTPS connections
  • curl: Downloads files from URLs
  • gpg: Verifies package signatures

Download and install the Kubernetes GPG signing key:

# Create the keyrings directory if it doesn't exist
# This directory stores GPG keys used to verify package authenticity
sudo mkdir -p -m 755 /etc/apt/keyrings

# Download the Kubernetes GPG key and convert it to GPG keyring format
# The same key works for all Kubernetes versions
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.35/deb/Release.key | \
sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
About GPG Keys

GPG (GNU Privacy Guard) keys verify that packages come from the official Kubernetes project and haven't been tampered with. The --dearmor option converts the key from ASCII armor format to binary format.
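
To sanity-check the keyring you just created, you can list the keys it contains and compare the fingerprints with those published in the official documentation:

# List the keys stored in the new keyring
gpg --show-keys /etc/apt/keyrings/kubernetes-apt-keyring.gpg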

Add the Kubernetes apt repository:

# Add the Kubernetes repository to APT sources
# This tells APT where to download Kubernetes packages from
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.35/deb/ /' | \
sudo tee /etc/apt/sources.list.d/kubernetes.list
Version-Specific Repositories

This repository contains only Kubernetes 1.35 packages. To install a different minor version, change v1.35 in the URL to your desired version (e.g., v1.34). Always check the official documentation for the version you're installing.

Install kubelet, kubeadm, and kubectl:

# Update the package index with the new repository
sudo apt-get update

# Install Kubernetes components
sudo apt-get install -y kubelet kubeadm kubectl

# Prevent automatic updates of Kubernetes packages
sudo apt-mark hold kubelet kubeadm kubectl
What is apt-mark hold?

The apt-mark hold command marks packages as "held back," preventing them from being automatically upgraded during system updates (apt upgrade). This is critical for Kubernetes because:

  • Version synchronization: All cluster nodes should run the same Kubernetes version
  • Controlled upgrades: Kubernetes upgrades require careful planning and execution
  • Compatibility: Automatic updates can break version compatibility between components

When you're ready to upgrade Kubernetes, you'll need to:

  1. Remove the hold: sudo apt-mark unhold kubelet kubeadm kubectl
  2. Upgrade following the official upgrade procedure
  3. Re-apply the hold: sudo apt-mark hold kubelet kubeadm kubectl
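
You can confirm the hold is in place at any time, and if every node must run one exact patch release, apt can list and install a specific version (the <version> placeholder below is illustrative):

# Confirm which packages are currently held back
apt-mark showhold

# Optional: list available versions and pin an exact one on every node
apt-cache madison kubeadm
# sudo apt-get install -y kubelet=<version> kubeadm=<version> kubectl=<version>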

Enable and start the kubelet service:

# Enable kubelet to start automatically on boot
sudo systemctl enable --now kubelet
Crashloop Behavior

The kubelet will restart every few seconds in a crashloop state, waiting for kubeadm to tell it what to do. This is normal behavior. The kubelet will stabilize once you run kubeadm init on the control plane node or kubeadm join on worker nodes.

You can observe this with: sudo journalctl -u kubelet -f

Verify installation:

kubeadm version
kubectl version --client
kubelet --version

Step 6: Initialize the Control Plane​

This step is performed only on the control plane node (master node). The control plane runs the core Kubernetes components that manage the cluster.

What kubeadm init Does​

When you run kubeadm init, it performs several initialization phases:

  1. Preflight checks: Validates system requirements (swap disabled, required ports open, etc.)
  2. Certificate generation: Creates a self-signed Certificate Authority (CA) and certificates for all components
  3. Kubeconfig files: Generates kubeconfig files for kubelet, controller-manager, scheduler, and admin
  4. Static pod manifests: Creates manifests for API server, controller-manager, scheduler, and etcd
  5. Wait for control plane: Waits for the control plane components to become healthy
  6. Upload configuration: Stores cluster configuration in ConfigMaps
  7. Install addons: Deploys CoreDNS (DNS server) and kube-proxy (network proxy)
  8. Generate join token: Creates a token for worker nodes to join the cluster
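
If you'd like to preview these phases before committing, kubeadm can run just the preflight checks or simulate a full initialization without modifying the host:

# Run only the preflight validation phase
sudo kubeadm init phase preflight

# Or simulate the entire initialization without persisting changes
sudo kubeadm init --dry-run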

Initialization Options​

For a simple, single control plane setup:

sudo kubeadm init --pod-network-cidr=192.168.0.0/16

What this does:

  • Initializes the control plane with default settings
  • Sets the pod network CIDR to 192.168.0.0/16
  • Uses the default gateway IP as the API server advertise address
  • Binds the API server to port 6443
About Pod Network CIDR

The --pod-network-cidr flag tells the control plane which IP range to use for pod networking. Different CNI plugins have different requirements:

  • Calico: 192.168.0.0/16
  • Flannel: 10.244.0.0/16
  • Weave: Auto-configured

Check your CNI plugin's documentation for the correct CIDR.

Post-Initialization Steps​

After successful initialization, you'll see output similar to:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a Pod network to the cluster.

You can now join any number of machines by running the following on each node:

kubeadm join 192.168.0.102:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:1234567890abcdef...

Configure kubectl for your user:

# Create the .kube directory in your home folder
mkdir -p $HOME/.kube

# Copy the admin kubeconfig file
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

# Change ownership to your user
sudo chown $(id -u):$(id -g) $HOME/.kube/config
What is kubeconfig?

The kubeconfig file ($HOME/.kube/config) contains:

  • Cluster information: API server address and CA certificate
  • User credentials: Client certificate and key for authentication
  • Context: Which cluster and user to use by default

kubectl reads this file to authenticate with the API server. Never share this file; it grants full administrative access to your cluster.
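
You can inspect what kubectl has loaded from this file at any time; certificate data is redacted in the output:

# Show the kubeconfig for the current context and confirm which context is active
kubectl config view --minify
kubectl config current-context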

Verify the control plane:

# Check node status
kubectl get nodes

# Check that control plane pods are running
kubectl get pods -n kube-system

# View cluster information
kubectl cluster-info

Initially, your node will show as NotReady because the pod network addon hasn't been installed yet.


Step 7: Install Pod Network Add-on​

Kubernetes requires a Container Network Interface (CNI) plugin to enable pod-to-pod communication. Without a CNI plugin, pods cannot communicate across nodes, and the node will remain in NotReady state.

How CNI Plugins Work​

CNI plugins:

  • Assign IP addresses to pods
  • Create virtual network interfaces
  • Set up routing rules for pod traffic
  • Implement network policies
  • Enable pod-to-pod and pod-to-service communication

Install Calico​

Calico is a popular CNI plugin that provides:

  • Layer 3 networking using BGP
  • Network policy enforcement
  • High performance with no overlay network overhead
  • Support for both IPv4 and IPv6
# Apply the Calico manifest
# This creates all necessary resources: DaemonSets, Deployments, ConfigMaps, etc.
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

Monitor the installation:

# Watch pods being created (Ctrl+C to exit)
kubectl get pods -n kube-system -w

# Check that all Calico pods are running
kubectl get pods -n kube-system | grep calico

You should see pods like:

  • calico-node-xxxxx: Runs on each node to provide networking
  • calico-kube-controllers-xxxxx: Manages Calico resources
  • calico-typha-xxxxx: Optional scaling component for large clusters
Node Ready Status

After the CNI plugin is installed and its pods are running, your node status will change from NotReady to Ready. Check with: kubectl get nodes

Other Network Add-on Options​

Flannel - Simple overlay network

Flannel creates a simple overlay network using VXLAN (by default):

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

Flannel characteristics:

  • Very simple to deploy and operate
  • Uses VXLAN overlay by default
  • Lower performance than Calico's native routing
  • Limited network policy support
  • Good for small to medium clusters

Required: Use --pod-network-cidr=10.244.0.0/16 during kubeadm init

Weave Net - Automatic mesh network

Weave Net creates an automatic mesh network between nodes:

kubectl apply -f https://github.com/weaveworks/weave/releases/download/latest_release/weave-daemonset-k8s.yaml

Weave Net characteristics:

  • Automatic mesh topology
  • Built-in network policy support
  • Encryption available
  • Slightly higher CPU usage
  • Auto-configures pod CIDR
Cilium - eBPF-based networking

Cilium uses eBPF (extended Berkeley Packet Filter) for high-performance networking:

# Install Cilium CLI first
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
curl -L --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-amd64.tar.gz
sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin

# Install Cilium
cilium install

Cilium characteristics:

  • Uses eBPF for very high performance
  • Advanced observability features
  • Multi-cluster networking
  • Service mesh capabilities
  • Requires a recent Linux kernel with eBPF support (see Cilium's system requirements for the minimum version)
Choosing a CNI Plugin
  • Calico: Best for most production workloads, excellent performance, strong network policies
  • Flannel: Simple setup, good for learning and development
  • Weave Net: Good balance of features and simplicity
  • Cilium: Best performance and features, requires newer kernels

For more CNI plugins, visit the Kubernetes networking add-ons page.


Step 8: Join Worker Nodes​

After initializing the control plane and installing a CNI plugin, you can add worker nodes to your cluster.

Prerequisites for Worker Nodes​

Ensure you've completed Steps 1-5 on each worker node:

  • ✅ Configured required ports
  • ✅ Disabled swap (or configured the kubelet to tolerate it)
  • ✅ Enabled IPv4 forwarding
  • ✅ Installed containerd, runc, and CNI plugins
  • ✅ Installed kubeadm, kubelet, and kubectl

Join the Cluster​

Run the join command from your kubeadm init output on each worker node:

sudo kubeadm join <control-plane-ip>:6443 \
--token <token> \
--discovery-token-ca-cert-hash sha256:<hash>

Understanding the join command:

  • <control-plane-ip>:6443: Address of the API server
  • --token: Bootstrap token for authentication (expires after 24 hours)
  • --discovery-token-ca-cert-hash: SHA256 hash of the cluster CA certificate (ensures you're joining the correct cluster)

Token Management​

Bootstrap tokens expire after 24 hours for security. If your token has expired or you lost the join command:

Generate a new token:

# On the control plane node
kubeadm token create --print-join-command

This outputs a complete kubeadm join command with a new token.

List existing tokens:

kubeadm token list

Manually construct a join command:

# Get the token
kubeadm token list

# Get the CA certificate hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'

# Use these to construct the join command
sudo kubeadm join <control-plane-ip>:6443 \
--token <token> \
--discovery-token-ca-cert-hash sha256:<hash>

Verify Nodes Joined​

From the control plane node:

# List all nodes
kubectl get nodes

# Expected output:
# NAME            STATUS   ROLES           AGE   VERSION
# control-plane   Ready    control-plane   10m   v1.35.x
# worker-1        Ready    <none>          2m    v1.35.x
# worker-2        Ready    <none>          1m    v1.35.x

# Get detailed node information
kubectl describe node <node-name>

# Check that all system pods are running
kubectl get pods -A

All nodes should show Ready status. If a node shows NotReady, check the kubelet logs with sudo journalctl -u kubelet.


Next Steps​

🎉 Congratulations! Your Kubernetes cluster is now operational. Here's what you can do next:

View cluster resources:

# Get all resources in all namespaces
kubectl get all -A

# View nodes with more details
kubectl get nodes -o wide

# Check cluster component health
# (componentstatuses has been deprecated since v1.19 but still works for a quick check)
kubectl get componentstatuses

Install Kubernetes Dashboard (Optional)​

The Kubernetes Dashboard provides a web-based UI for cluster management:

# Deploy the dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

# Create an admin user (for testing only - not for production)
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF

# Get the access token
kubectl -n kubernetes-dashboard create token admin-user

# Start the proxy
kubectl proxy

# Access the dashboard at:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
Security Note

The admin user created above has full cluster access. For production environments, implement proper RBAC (Role-Based Access Control) with limited permissions.

Set Up Monitoring and Logging​

Monitor your cluster's health and performance:

Metrics Server (required for kubectl top):

# Install Metrics Server
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# For self-signed certificates, you may need to add --kubelet-insecure-tls
# Edit the metrics-server deployment if needed
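# If the kubelet serving certificates are self-signed (typical for kubeadm clusters),
# one option, shown here only as a sketch, is to patch the deployment and append the flag:
kubectl -n kube-system patch deployment metrics-server --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"}]'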

# Wait for metrics-server to be ready
kubectl wait --for=condition=ready pod -l k8s-app=metrics-server -n kube-system --timeout=120s

# View resource usage
kubectl top nodes
kubectl top pods -A

Learn Key Kubernetes Concepts​

Now that your cluster is running, explore these fundamental concepts:

  • 📦 Pods: The smallest deployable units
  • 🚀 Deployments: Declarative updates for Pods and ReplicaSets
  • 🔌 Services: Expose applications running on a set of Pods
  • 💾 Volumes: Persistent storage for containers
  • 🔐 ConfigMaps & Secrets: Configuration management
  • 🌐 Ingress: HTTP/HTTPS routing to services
  • 🎯 Namespaces: Virtual clusters for resource isolation

Understanding Your Cluster Architecture​

Your Kubernetes cluster now consists of:

Control Plane Components (Master Node)​

  • kube-apiserver: The API server is the front-end for the Kubernetes control plane. All communications and operations go through it
  • etcd: Consistent and highly-available key-value store used as Kubernetes' backing store for all cluster data
  • kube-scheduler: Watches for newly created Pods with no assigned node, and selects a node for them to run on
  • kube-controller-manager: Runs controller processes (Node Controller, Replication Controller, Endpoints Controller, Service Account Controller)
  • cloud-controller-manager: Runs controllers that interact with cloud providers (if applicable)

Node Components (All Nodes)​

  • kubelet: An agent that runs on each node, ensuring that containers are running in Pods
  • kube-proxy: A network proxy that maintains network rules on nodes, enabling service communication
  • Container runtime: Software responsible for running containers (containerd in our setup)

Add-ons​

  • CNI Plugin (Calico): Provides networking and network policy
  • CoreDNS: Provides DNS services for the cluster
  • Metrics Server (optional): Provides resource metrics for nodes and pods

Best Practices for Production​

As you prepare your cluster for production workloads, consider these best practices:

Security​

  • Enable RBAC: Control who can access what in your cluster
  • Use Network Policies: Restrict traffic between pods
  • Scan images: Use tools like Trivy or Clair to scan container images for vulnerabilities
  • Encrypt secrets: Enable encryption at rest for Secrets in etcd
  • Regular updates: Keep Kubernetes and all components up to date
  • Pod Security Standards: Enforce pod security policies to prevent privileged containers

High Availability​

  • Multiple control plane nodes: Run at least 3 control plane nodes for HA
  • External etcd: Consider running etcd outside the control plane nodes
  • Load balancer: Use a load balancer for the API server
  • Multiple worker nodes: Distribute workloads across multiple nodes
  • Pod Disruption Budgets: Ensure availability during updates and failures

Resource Management​

  • Set resource requests and limits: Define CPU and memory requirements for all containers
  • Use Horizontal Pod Autoscaling: Automatically scale applications based on load
  • Implement Quality of Service (QoS): Classify pods into QoS classes (Guaranteed, Burstable, BestEffort)
  • Node affinity and taints: Control pod scheduling on specific nodes

Monitoring and Observability​

  • Centralized logging: Use tools like EFK stack (Elasticsearch, Fluentd, Kibana) or Loki
  • Metrics collection: Deploy Prometheus for metrics collection
  • Distributed tracing: Implement tools like Jaeger or Zipkin for microservices
  • Alerting: Set up alerts for critical cluster events and resource exhaustion

Additional Resources​

Tools and Utilities​

  • k9s: Terminal UI for Kubernetes clusters
  • Lens: Desktop IDE for Kubernetes
  • Helm: Package manager for Kubernetes
  • Kustomize: Configuration management tool
  • kubectx/kubens: Switch between clusters and namespaces easily


Summary​

You've successfully installed a working Kubernetes cluster on Ubuntu Server! Here's what you've accomplished:

✅ Configured networking and system requirements
✅ Installed and configured the containerd container runtime
✅ Installed Kubernetes components (kubeadm, kubelet, kubectl)
✅ Initialized the control plane node
✅ Deployed a CNI network plugin
✅ Joined worker nodes to the cluster

Your cluster is now ready to run containerized applications. Start by deploying simple applications, then gradually explore more advanced features like StatefulSets, DaemonSets, Jobs, and CronJobs.

Remember to regularly update your cluster, monitor its health, implement proper security practices, and back up your etcd data.

Happy orchestrating! 🚀


Need Help?

If you encounter issues or have questions:

  1. Check the official Kubernetes documentation
  2. Review component logs with kubectl logs and journalctl
  3. Ask questions in the Kubernetes Slack community
  4. Search Stack Overflow for similar issues