Introduction
Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. Originally developed by Google, it's now maintained by the Cloud Native Computing Foundation and has become the de facto standard for container orchestration.
What is Kubernetes?
Kubernetes, often abbreviated as K8s, provides a platform for:
- Container Orchestration: Managing containerized applications across a cluster of machines
- Service Discovery: Automatically finding and connecting services
- Load Balancing: Distributing traffic across multiple instances
- Auto-scaling: Automatically scaling applications based on demand
Core Architecture
Control Plane
The control plane manages the worker nodes and Pods in the cluster. It includes:
- API Server: The central hub that exposes the Kubernetes API
- etcd: Key-value store for all cluster data
- Scheduler: Assigns Pods to nodes
- Controller Manager: Manages various controllers
Worker Nodes
Worker nodes run the containerized applications. Each node contains:
- kubelet: Communicates with the control plane
- Container Runtime: Runs containers (Docker, containerd, etc.)
- kube-proxy: Handles network routing for services
Essential Kubernetes Objects
Pods
The smallest deployable unit in Kubernetes. A Pod represents a single instance of a running process and can contain one or more containers.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
```
Deployments
Provides declarative updates for Pods and ReplicaSets. Manages the desired state of your application.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16.1
```
Services
Exposes a set of Pods as a network service. Provides stable networking and load balancing.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
```
Essential kubectl Commands
Basic Operations
```shell
# Get cluster info
kubectl cluster-info

# Get nodes
kubectl get nodes

# Get pods
kubectl get pods

# Get all resources
kubectl get all
```
Creating Resources
```shell
# Create from file
kubectl apply -f deployment.yaml

# Create deployment
kubectl create deployment nginx --image=nginx

# Expose deployment
kubectl expose deployment nginx --port=80
```
Debugging
```shell
# Describe resource
kubectl describe pod nginx-pod

# Get logs
kubectl logs nginx-pod

# Execute in pod
kubectl exec -it nginx-pod -- /bin/bash
```
Scaling & Updates
```shell
# Scale deployment
kubectl scale deployment nginx --replicas=5

# Update image
kubectl set image deployment/nginx nginx=nginx:1.17

# Rollback
kubectl rollout undo deployment/nginx
```
Getting Started - Your First Application
Step 1: Create a Deployment
```shell
kubectl create deployment hello-world --image=nginx:latest
```
This creates a deployment with a single nginx pod.
Step 2: Expose the Application
```shell
kubectl expose deployment hello-world --type=NodePort --port=80
```
This creates a service to expose your application.
Step 3: Scale the Application
```shell
kubectl scale deployment hello-world --replicas=3
```
This scales your application to 3 replicas for high availability.
Step 4: Check Status
```shell
kubectl get pods
kubectl get services
kubectl describe deployment hello-world
```
Monitor your application's status and health.
Best Practices
Resource Management
- Always set resource requests and limits
- Use namespaces to organize resources
- Implement a consistent labeling strategy
- Use ConfigMaps and Secrets for configuration
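As a sketch of the first three points, here is the earlier nginx Deployment with requests, limits, a namespace, and labels added. The `web` namespace and the specific CPU/memory values are illustrative assumptions, not recommendations:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: web            # hypothetical namespace used to organize resources
  labels:
    app: nginx              # consistent labels make selectors and queries easier
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16.1
        resources:
          requests:         # what the scheduler reserves for the Pod
            cpu: 100m
            memory: 128Mi
          limits:           # hard cap enforced at runtime
            cpu: 500m
            memory: 256Mi
```

Requests drive scheduling decisions, while limits protect the node from a misbehaving container; setting both keeps cluster capacity predictable.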
Security
- Use RBAC for access control
- Enforce Pod Security Standards (Pod Security Policies were deprecated and removed in Kubernetes 1.25)
- Regularly update container images
- Use service accounts appropriately
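A minimal RBAC sketch tying the first and last points together: a Role granting read-only access to Pods, bound to a service account. The `web` namespace and `app-sa` service account are hypothetical names for illustration:

```yaml
# Role granting read-only access to Pods in one namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: web            # hypothetical namespace
  name: pod-reader
rules:
- apiGroups: [""]           # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# Bind the Role to the service account an application runs as
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: web
subjects:
- kind: ServiceAccount
  name: app-sa              # hypothetical service account
  namespace: web
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Granting the narrowest verbs and resources a workload actually needs is the core of least-privilege access in Kubernetes.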
Monitoring
- Implement health checks (readiness/liveness probes)
- Use Prometheus for metrics collection
- Set up proper logging with Fluentd
- Monitor resource utilization
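The health-check point can be sketched on the nginx container from the earlier Deployment. The probe paths and timings below are illustrative assumptions:

```yaml
containers:
- name: nginx
  image: nginx:1.16.1
  ports:
  - containerPort: 80
  readinessProbe:           # gates traffic: the Pod receives requests only when ready
    httpGet:
      path: /
      port: 80
    initialDelaySeconds: 5
    periodSeconds: 10
  livenessProbe:            # restarts the container if the check keeps failing
    httpGet:
      path: /
      port: 80
    initialDelaySeconds: 15
    periodSeconds: 20
```

Readiness failures remove the Pod from Service endpoints without killing it, while liveness failures trigger a restart; keeping the two distinct avoids restart loops during slow startups.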
Development
- Use Helm for package management
- Implement GitOps workflows
- Version your container images
- Test deployments in staging first
Next Steps
Now that you understand the basics of Kubernetes, here are some areas to explore further:
Advanced Workloads
StatefulSets, DaemonSets, Jobs, and CronJobs
Networking
Ingress, Network Policies, and Service Mesh
Storage
Persistent Volumes, Storage Classes, and CSI
Conclusion
Kubernetes is a powerful platform that can seem overwhelming at first, but by understanding the core concepts and practicing with basic commands, you'll quickly become proficient. Start with simple deployments and gradually explore more advanced features as your applications grow in complexity. The key is to start small and build your knowledge incrementally.