
Tech · 6 min read

Kubernetes Fundamentals: What It Is, What It Solves, and When You Don't Need It

Kubernetes is the dominant container orchestrator but adds significant operational complexity. The core abstractions, when it's worth the cost, and the simpler alternatives.

By Jarviix Engineering · Apr 19, 2026


Kubernetes has won the container orchestration war. It's the dominant platform for running containerized applications at scale and the de facto standard for modern cloud infrastructure. It's also notoriously complex, with a steep learning curve and significant operational overhead that's often unjustified for smaller teams.

This post covers what Kubernetes actually does, the core abstractions you need to understand, when it's worth the operational complexity, and the simpler alternatives that work for many use cases.

What Kubernetes does

Kubernetes (often abbreviated K8s) is a container orchestration platform. Given a fleet of machines, it:

  • Schedules containers onto machines based on resource requirements
  • Maintains desired state (5 replicas? K8s ensures 5 pods are always running)
  • Heals failures (restarts crashed containers, replaces failed nodes)
  • Manages rollouts and rollbacks of new versions
  • Routes traffic between containers (service discovery, load balancing)
  • Manages storage, networking, configuration, secrets
  • Scales applications based on load

It abstracts away "where things run" — you declare what you want; Kubernetes figures out how.

The core abstractions

Understanding K8s starts with understanding its primary objects.

Pod

The smallest deployable unit. One or more containers that share networking and storage. Usually one container per pod, but multi-container pods exist for tight coupling (sidecar pattern).

Pods are ephemeral — they get created, killed, replaced. Don't depend on individual pod identity in normal applications.
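A minimal Pod manifest looks like this (name, labels, and image are placeholders; in practice you rarely create bare Pods — a Deployment creates and manages them for you):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: my-app
spec:
  containers:
  - name: app
    image: nginx:1.27        # placeholder image
    ports:
    - containerPort: 80
```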

Deployment

Manages a set of identical pods. Declares "I want 5 replicas of this image." Kubernetes ensures 5 pods are always running. Handles rolling updates: deploy new version, gradually replace old pods.

This is the standard abstraction for stateless services.

StatefulSet

Like Deployment but for stateful services where each pod needs stable identity and persistent storage. Used for databases, message queues with persistent state.

More complex than Deployments; avoid unless you specifically need stateful behavior.
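A sketch of what the stable identity looks like in practice (all names and sizes are placeholders): each replica gets a predictable name (db-0, db-1, db-2) and its own PersistentVolumeClaim via volumeClaimTemplates, so storage survives pod replacement.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db            # headless Service giving each pod a stable DNS name
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: postgres
        image: postgres:16
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:      # each pod gets its own PersistentVolumeClaim
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```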

Service

Stable network endpoint for a group of pods. Pods come and go; the Service has a stable virtual IP and DNS name.

Types:

  • ClusterIP (default): internal cluster IP only
  • NodePort: exposes on a port on every node
  • LoadBalancer: provisions a cloud load balancer
  • ExternalName: alias to external DNS

Ingress

Routes external HTTP/HTTPS traffic to Services. Supports path-based and host-based routing. Implementations: NGINX Ingress, Traefik, AWS ALB Ingress, Istio Gateway.
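A minimal Ingress showing host- and path-based routing (assumes the NGINX Ingress controller is installed; hostname and service name are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-api
spec:
  ingressClassName: nginx        # which Ingress controller handles this resource
  rules:
  - host: api.example.com        # host-based routing
    http:
      paths:
      - path: /                  # path-based routing
        pathType: Prefix
        backend:
          service:
            name: my-api
            port:
              number: 80
```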

ConfigMap and Secret

Configuration data and sensitive data injected into pods as environment variables or files. Secrets are base64-encoded (NOT encrypted by default — use external secrets managers for real security).
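A minimal sketch of both objects (values are placeholders; stringData lets you write plain text and have Kubernetes do the base64 encoding for you). The Deployment example later in this post consumes this Secret via secretKeyRef.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
---
apiVersion: v1
kind: Secret
metadata:
  name: db-secret
type: Opaque
stringData:                      # plain text; stored base64-encoded, not encrypted
  url: postgres://user:pass@db:5432/app   # placeholder connection string
```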

Namespace

Logical partition within a cluster. Used to separate environments (dev/staging/prod) or teams.

Node

A worker machine in the cluster. Runs the kubelet (K8s agent) and container runtime. Pods get scheduled onto Nodes.

Job and CronJob

Run-to-completion workloads (Job) and scheduled tasks (CronJob).
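A minimal CronJob sketch (schedule uses standard cron syntax; name and image are placeholders):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report
spec:
  schedule: "0 2 * * *"          # 02:00 every day
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure   # retry the pod if the task fails
          containers:
          - name: report
            image: registry.example.com/report-job:v1   # placeholder image
```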

The control plane

The "brain" of Kubernetes. Components:

API Server

The main entry point. Everything goes through the API server (kubectl, controllers, even internal components).

etcd

Distributed key-value store for cluster state. Single source of truth. Performance and reliability of etcd determine cluster health.

Scheduler

Decides which Node a new Pod should run on, based on resource requirements, affinity rules, etc.
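Affinity rules are expressed in the pod spec. A sketch of a node-affinity constraint (assumes nodes carry a disktype label; the key and value are placeholders):

```yaml
# Fragment of a pod spec: schedule only onto nodes labeled disktype=ssd
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: disktype          # hypothetical node label
          operator: In
          values: ["ssd"]
```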

Controller Manager

Runs reconciliation loops that maintain desired state — if 5 replicas are desired but only 4 exist, a controller starts a new pod.

Cloud Controller Manager

Integrates with cloud provider for things like load balancers, persistent volumes.

In managed services (EKS, GKE, AKS), the control plane is operated by the cloud provider. You manage only the worker nodes and your applications.

A typical deployment flow

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
      - name: api
        image: registry.example.com/my-api:v1.2.3
        ports:
        - containerPort: 8000
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: db-secret
              key: url
---
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  selector:
    app: my-api
  ports:
  - port: 80
    targetPort: 8000

Apply with kubectl apply -f. Kubernetes ensures 3 replicas of my-api:v1.2.3 are running, with proper resource limits, secret-based config, and a stable Service endpoint.

When Kubernetes is justified

Multi-service architectures at scale

If you operate 20+ microservices across many machines, K8s provides the management plane that simpler tools can't.

Auto-scaling needs

HPA (Horizontal Pod Autoscaler) scales pods based on metrics. Cluster Autoscaler scales nodes. Sophisticated auto-scaling is a genuine K8s strength.
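A minimal HPA targeting the Deployment from the example above (requires a metrics source such as metrics-server; the thresholds here are placeholders):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-api
  minReplicas: 3
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add pods when average CPU exceeds 70%
```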

Multi-environment uniformity

Same K8s YAML works in dev, staging, prod. Dramatically reduces "works on my machine" issues.

Complex deployment patterns

Blue-green, canary, A/B testing — all reasonably straightforward in K8s.
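A basic canary, sketched without a service mesh: run a second Deployment behind the same Service. The Service selector matches only the shared app label, so traffic splits roughly in proportion to replica counts (about 10% here if the stable Deployment runs 9 replicas). Real canaries usually use a mesh or an ingress controller with weighted routing; names and versions below are placeholders.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api-canary
spec:
  replicas: 1                    # stable Deployment runs 9 replicas
  selector:
    matchLabels:
      app: my-api
      track: canary
  template:
    metadata:
      labels:
        app: my-api              # Service selects app: my-api only,
        track: canary            # so it balances across both tracks
    spec:
      containers:
      - name: api
        image: registry.example.com/my-api:v1.3.0-rc1   # candidate version
```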

Multi-cloud / hybrid

K8s is portable across clouds. If you need workload portability, K8s is the standard.

When Kubernetes is overkill

Small teams, few services

A 5-engineer team running 3 services on 5 servers will likely spend more time operating Kubernetes than the platform saves them.

Simple monoliths

A single Rails or Django app doesn't need orchestration. Heroku-style PaaS or VMs work fine.

Limited DevOps capacity

K8s requires dedicated platform expertise. If no one on your team specializes in it, simpler alternatives reduce operational burden.

Workloads that don't suit containers

Heavy databases, GPU workloads with complex driver requirements, workloads needing direct kernel access — often easier outside K8s.

Simpler alternatives

Managed PaaS

Render, Railway, Fly.io, Vercel, AWS App Runner. Push code; service deploys and runs it. No K8s knowledge needed.

Best for: small to medium teams, web applications, prototypes, side projects.

Docker Compose + VMs

Simple multi-container deployments on virtual machines. Limited orchestration but huge simplicity.

Best for: small teams, predictable workloads, traditional architectures.

AWS ECS / Azure Container Instances / Cloud Run

Cloud-managed container services. Less powerful than K8s but much simpler.

Best for: workloads where K8s features aren't needed.

Nomad

HashiCorp's lighter-weight orchestrator. Less feature-rich but easier to operate.

Best for: smaller teams wanting orchestration without K8s complexity.

Operational realities of running Kubernetes

Steep learning curve

Expect 6-12 months for a team to become productive. YAML, networking models, security contexts, Helm charts, Kustomize — many concepts to learn.

Constant updates

K8s releases every 3-4 months. Maintaining current versions is ongoing work.

Networking complexity

K8s networking (CNI plugins, services, ingress, network policies) is its own deep topic.

Storage challenges

Persistent volumes, dynamic provisioning, storage classes — significant complexity for stateful workloads.

Cost

K8s clusters often cost more than equivalent VM-based deployments due to system overhead, control plane costs, and the tendency to over-provision.

Debugging

Issues can span pods, services, ingress, network policies, RBAC. Debugging skills take time to develop.

Common mistakes

  • Adopting K8s too early: significant operational tax before scale justifies it
  • Self-hosting the control plane: rarely justified vs managed services
  • Treating it like a VM: SSHing into pods, modifying running containers
  • Ignoring resource requests/limits: scheduler can't make good decisions
  • Storing config in YAML files in git only: secrets management is critical
  • One giant cluster for everything: blast radius of issues becomes too large; consider multiple clusters
  • Skipping monitoring: K8s clusters need dedicated observability
  • Using Deployments for stateful workloads: without persistent volumes, data is lost when pods are replaced — use StatefulSets

Kubernetes is powerful and dominant, but adoption shouldn't be reflexive. Match the tool to the problem: at small scale, simpler platforms deliver value faster with less operational tax. At larger scale, K8s becomes essential infrastructure. Wherever you are on that spectrum, understanding the core concepts helps you make informed decisions and operate effectively when you do use it.

Frequently asked questions

Do I need Kubernetes for my project?

Probably not, unless you're operating at meaningful scale (multiple services, multiple machines, multiple environments) or have specific requirements (auto-scaling, multi-region, complex deployments). For small projects: managed PaaS (Render, Railway, Fly.io, App Runner) or VMs with Docker Compose are dramatically simpler. Kubernetes adds 6-12 months of operational learning curve and ongoing maintenance burden. The litmus test: if your team has fewer than 5 engineers, Kubernetes is probably overkill.

What's the difference between EKS, GKE, AKS, and self-hosted Kubernetes?

EKS (AWS), GKE (Google), AKS (Azure) are managed control planes — the cloud provider runs the control plane nodes, you manage worker nodes. Removes most operational pain. GKE is widely considered the most polished given Google originated Kubernetes. Self-hosted (running your own control plane) is rarely justified — the operational burden is significant and managed services are reasonably priced. For most teams, managed Kubernetes is the only sensible option.

Should I learn Kubernetes if I'm a backend engineer?

Working knowledge yes, deep expertise probably no. Most backend engineers should understand: pods, deployments, services, ingress, configmaps, secrets — enough to deploy and debug their own services. Deep Kubernetes knowledge (operators, custom resources, networking internals, security contexts) is valuable for platform/SRE roles. Use the time you'd spend mastering Kubernetes on something more directly valuable to your role unless you specifically work with K8s daily.
