All posts by Son T

Is Kubernetes administration today like Unix Systems Administration in the 1990s?


Has Kubernetes already become too complex for enterprise IT?

1. Real-World Data & Surveys: Complexity Is Rising
  • Spectro Cloud’s 2023 Report shows 75% of Kubernetes practitioners encounter issues running clusters—a significant jump from 66% in 2022. Challenges intensify at scale—especially around provisioning, security, monitoring, and multi-cluster setups. [InfoWorld, Okoone]
  • DEVOPSdigest (2025) highlights that enterprises often run multiple distributions (EKS, GKE, OpenShift), leading to tooling sprawl, operational inconsistency, and fragmented networking/security stacks, which strain platform teams significantly. [devopsdigest.com]
2. Admissions & Simplified Offerings from Google
  • Google itself acknowledged the persistent complexity of Kubernetes, even after years of improvements—prompting them to offer GKE Autopilot, a managed mode that abstracts away much configuration overhead. [The Register] [Wikipedia]
3. Structural Challenges & Knowledge Gaps
  • Harpoon’s breakdown of root causes points to:
    • Kubernetes’ intricate core architecture, multiple components, and high customizability.
    • Steep learning curve—you need command over containers, infra, networking, storage, CI/CD, automation.
    • Troubleshooting overhead—distributed nature complicates debugging. [harpoon.io]
  • Baytech Consulting (2025) identifies a scaling paradox: what works in pilot can fall apart in enterprise rollouts as complexity, cost, drift, and security fragility compound with growth. [Baytech Consulting]
  • AltexSoft reports Kubernetes salaries and licensing/infra costs can be high, with unpredictable cloud bills. Around 68% of firms report rising Kubernetes costs, mainly due to over-provisioning and scaling without proper observability. Organizations can waste ~32% of cloud spend. [AltexSoft]
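To put that waste figure in perspective, here is a quick back-of-envelope sketch (the ~32% waste rate is the AltexSoft figure above; the $1.2M annual bill is a purely hypothetical example):

```python
def wasted_spend(annual_cloud_bill: float, waste_fraction: float = 0.32) -> float:
    """Estimate the portion of an annual cloud bill lost to over-provisioning."""
    return annual_cloud_bill * waste_fraction

# Hypothetical enterprise with a $1.2M/year cloud bill at the reported ~32% waste rate
print(f"${wasted_spend(1_200_000):,.0f} wasted per year")  # → $384,000 wasted per year
```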
4. Community Voices (Reddit)

Community commentary reflects real frustration:

“One does not simply run a K8s cluster… You need additional observability, cert management, DNS, authentication…” – Reddit

“If you think you need Kubernetes, you don’t… Unless you really know what you’re doing.” – Reddit

“K8s is f… complex even with EKS. Networking is insane.” – Reddit

“Kubernetes stacks are among the most complex I’ve ever dealt with.” – Reddit

These quotes show that complexity isn’t just theoretical—it’s a real barrier to adoption and effective operation.

5. Research-Documented Failures
  • A recent academic paper, Mutiny!, reveals that even tiny faults—like single-bit flips in etcd—can cascade into major issues, including cluster-wide failures or under-provisioning, demonstrating limited fault tolerance unless actively mitigated. [arXiv]
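The single-bit-flip failure mode is easy to illustrate with a toy sketch (my own illustration, not the paper’s methodology): a replica count stored as an integer can silently balloon or shrink when one bit of the stored value is corrupted.

```python
def flip_bit(value: int, bit: int) -> int:
    """Flip one bit of an integer, simulating a storage/memory fault."""
    return value ^ (1 << bit)

replicas = 3                   # desired replica count as stored in etcd
print(flip_bit(replicas, 17))  # 131075: one flipped bit -> massive over-provisioning
print(flip_bit(replicas, 1))   # 1: a different flipped bit -> silent under-provisioning
```

Neither corrupted value is obviously invalid to a controller reconciling state, which is why the paper argues such faults need active mitigation.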
Is Kubernetes Too Complex?
| Dimension | Evidence of Complexity |
| --- | --- |
| Operational pain | 75% of practitioners reporting issues; enterprise multi-cluster/tool divergence |
| Vendor admission | Google launching Autopilot to reduce Kubernetes complexity |
| Learning & tooling | Steep curve; cumbersome YAML; dozens of moving parts (networking, storage, autoscale) |
| Financial burden | Rising cloud costs; over-provisioning; hidden infrastructure waste |
| Community sentiment | Widespread anecdotes about complexity, overhead, and misapplication |
| Technical fragility | Experimental research showing failure propagation even from single tiny errors |
Powerful with high complexity
  • Kubernetes is undeniably powerful—but that power comes with a steep complexity tax, especially for enterprises scaling across clusters, clouds, and teams.
  • Its flexibility and extensibility, while strengths, can also lead to sprawling architectures and tooling sprawl.
  • Managed services (like GKE Autopilot, EKS, AKS), GitOps, platform engineering teams, and strong governance frameworks are essential to tame this complexity.
  • For many SMBs or smaller projects, simpler alternatives (Nomad, ECS Fargate, Heroku-style platforms) might be more pragmatic unless you truly need Kubernetes’ scale benefits.

Is Systems Administration in the 1990s like managing Kubernetes today?

Systems Administration in the 1990s

In the 1990s, sysadmins typically had to manage on-premises servers, networks, and storage, often with little automation. Key traits:

  • Heterogeneous environments
    • Mix of Solaris, HP-UX, AIX, Windows NT, NetWare, early Linux.
    • Each OS had unique tooling, quirks, and administrative models.
  • Manual provisioning
    • Installing from CDs/tapes, hand-editing configs (/etc/*), patching manually.
    • Network setup via raw config files or proprietary tools.
  • Siloed tooling
    • Monitoring with Nagios, Big Brother, MRTG.
    • Backup with Veritas, Arcserve.
    • Identity with NIS or LDAP — all separate, poorly integrated.
  • High skill & resource requirements
    • A small team of “wizards” needed deep knowledge of Unix internals, networking, SCSI storage, TCP/IP stack quirks, etc.
    • Troubleshooting required understanding the whole stack, often without Google or Stack Overflow.
  • Cultural complexity
    • “Snowflake servers” (no two were alike).
    • Documentation gaps → single points of knowledge in individuals’ heads.
    • Vendor support contracts were essential.

Kubernetes Administration Today

Fast forward ~30 years: the “modern sysadmin” (platform/SRE/K8s admin) faces a similar landscape:

  • Heterogeneous environments
    • Mix of Kubernetes distros (EKS, GKE, AKS, OpenShift, Rancher).
    • Add-ons for storage (Rook, Longhorn, CSI drivers), networking (CNI plugins like Calico, Cilium), security (OPA, Kyverno).
  • Manual YAML/Helm/IaC complexity
    • Instead of hand-editing /etc, we’re hand-crafting Kubernetes YAML, Helm charts, CRDs.
    • Misconfiguration is one of the top causes of outages (akin to mis-edited config files in the 90s).
  • Siloed tooling
    • Metrics → Prometheus/Mimir/Thanos.
    • Logs → Loki/ELK.
    • Traces → Tempo/Jaeger.
    • CI/CD → ArgoCD, Flux, Jenkins.
    • Security → Falco, Kyverno, Gatekeeper.
      Each solves a slice, but integration is nontrivial — like juggling Nagios, Veritas, and LDAP in the 90s.
  • High skill & resource requirements
    • K8s admins must understand containers, networking (CNI, ingress), distributed storage, security, GitOps, cloud APIs.
    • Debugging pods across namespaces, RBAC issues, or etcd failures can feel like debugging kernel panics in the 90s.
  • Cultural complexity
    • Clusters drift if not well-managed.
    • “Pets vs. cattle” mindset is the modern equivalent of avoiding snowflake servers.
    • Knowledge often concentrated in a few “platform engineers.”

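The misconfiguration point above can be made concrete: a Deployment that omits container resource limits is accepted by the API server, yet can starve its neighbours. Below is a minimal sketch of the kind of lint check a platform team might script; the manifest is written as a plain Python dict (mirroring the Deployment schema’s field paths) so the example stays self-contained:

```python
def missing_resource_limits(deployment: dict) -> list:
    """Return names of containers in a Deployment that declare no resource limits."""
    containers = (deployment.get("spec", {})
                            .get("template", {})
                            .get("spec", {})
                            .get("containers", []))
    return [c["name"] for c in containers
            if not c.get("resources", {}).get("limits")]

# Syntactically valid manifest with no limits: the cluster accepts it silently
manifest = {
    "kind": "Deployment",
    "spec": {"template": {"spec": {"containers": [
        {"name": "web", "image": "nginx:1.27"},
    ]}}},
}
print(missing_resource_limits(manifest))  # ['web']
```

In practice policy engines like Kyverno or Gatekeeper (mentioned above) enforce exactly this class of rule at admission time.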
Parallels

| 1990s Sysadmin World | Kubernetes Admin Today |
| --- | --- |
| Mix of Unix flavors & Windows NT | Mix of Kubernetes distros (EKS, GKE, AKS, OpenShift) |
| Manual installs/patching | Manual YAML, Helm, IaC configs |
| Siloed tools (Nagios, Arcserve, LDAP) | Siloed observability & security stacks |
| Snowflake servers | Cluster drift, misconfigured CRDs |
| Need “wizards” with deep system skills | Need platform engineers/SREs with broad skills |
| Vendor support critical | Reliance on managed services (GKE Autopilot, EKS) |
| Troubleshooting = art + experience | Debugging multi-layered microservices stacks |

Key Difference

  • Scale & abstraction:
    • 1990s sysadmins often managed tens of servers.
    • Today’s K8s admins manage hundreds/thousands of pods, spread across multi-cluster, multi-cloud, multi-region environments.
  • Automation gap:
    • In the 90s, lack of automation caused complexity.
    • With Kubernetes, abundance of automation frameworks causes choice overload and integration complexity.

Kubernetes administration today feels like Unix sysadmin work in the 1990s:

  • A powerful but fragmented ecosystem, with high cognitive overhead.
  • Requires specialists (“K8s wizards”) to keep clusters stable.
  • Enterprise IT is repeating cycles: adopting bleeding-edge, complex infrastructure that eventually stabilizes into commoditized, simplified platforms (just like Linux standardization simplified 2000s IT).

We’re arguably in the “1995 equivalent” of Kubernetes — powerful but messy. In 5–10 years, we might see a “Linux-like consolidation” or abstraction layer that hides most of today’s complexity.

Timeline Analogy: 1990s Sysadmin vs. Kubernetes Admin Today

1990s: Fragmented Unix + Windows NT Era
  • Enterprises ran Solaris, HP-UX, AIX, SCO Unix, Novell NetWare, Windows NT, often side by side.
  • Each had different tooling, package managers, patching mechanisms.
  • Skills weren’t portable — a Solaris admin couldn’t easily manage NetWare.
  • Tooling was siloed (Nagios, Arcserve, MRTG, NIS, LDAP, Veritas).
  • Complexity = every vendor had its own model, and integration was painful.

Analogy to Kubernetes today:
Multiple Kubernetes distros (OpenShift, Rancher, GKE, EKS, AKS) + endless CNIs, CSIs, service meshes, observability stacks. Skills don’t fully transfer across environments.

Early 2000s: Linux Standardization & Automation Emerges
  • Linux (Red Hat, Debian, SUSE) consolidated the Unix ecosystem → standard APIs, packages, and tooling.
  • Automation tools (CFEngine → Puppet/Chef → Ansible) emerged, making configuration repeatable.
  • Virtualization (VMware, Xen) abstracted away hardware, reducing snowflake servers.
  • Enterprises got more portable skillsets and better ROI from staff.

Analogy for Kubernetes:
We’re waiting for similar consolidation in the K8s space — either a dominant “Linux of Kubernetes” (a distro that becomes the de facto enterprise standard) or stronger platform abstractions.

2010s: Cloud + DevOps + Containers
  • AWS, Azure, GCP commoditized infrastructure.
  • DevOps culture + automation pipelines became mainstream.
  • Docker simplified app packaging and delivery.
  • Enterprises shifted from sysadmin “server caretakers” → SRE/DevOps “platform enablers.”

Analogy for Kubernetes:
This was the simplification wave after the complexity of 1990s Unix. Today, K8s is at the “Unix ’95” stage — the complexity is still front and center. The simplification wave (through managed services and PaaS-like abstractions) hasn’t fully happened yet.

2020s: The Future of Kubernetes (Projection)
  • Managed services (GKE Autopilot, EKS Fargate, AKS) are becoming the equivalent of VMware in the 2000s — hiding underlying infrastructure complexity.
  • PaaS-like abstractions (Heroku-style experience on top of K8s, e.g., Render, Fly.io, Knative, Crossplane) will likely commoditize Kubernetes itself.
  • Platform engineering teams will provide “golden paths” to developers, hiding YAML and cluster admin pain.
  • Just like Linux became invisible (we use it daily without thinking), Kubernetes may fade into the substrate — invisible to developers, only visible to infra specialists.

History Repeats
| Era | Sysadmin World | Kubernetes World | Parallel |
| --- | --- | --- | --- |
| 1990s | Fragmented Unix, NT, manual ops | Fragmented K8s distros, YAML/Helm, manual configs | High complexity, vendor sprawl |
| 2000s | Linux standardizes, automation matures | (Future) K8s consolidates or is abstracted away | Reduced friction, portable skills |
| 2010s | Cloud, DevOps, containers simplify infra | (Future) PaaS & managed services simplify K8s | Devs focus on apps, not infra |
| 2020s | Linux invisible (everywhere, but hidden) | Kubernetes invisible (substrate under platforms) | Only infra teams touch it directly |
Summary

Kubernetes today is like Unix in the mid-1990s: powerful but fragmented and resource-intensive. Over the next decade, we’ll likely see Linux-like consolidation (fewer distros, stronger defaults) and/or VMware-like abstraction (managed offerings, PaaS layers) that make Kubernetes complexity mostly invisible to developers.

The Nature of IT RIFs (reduction in force aka layoffs aka mass redundancies)

If you work for an IT company and see Slack users suddenly disappearing, then your company is performing a RIF. Out of the blue, or with very short notice, a colleague or two’s Slack account is closed and you are left wondering why.

This trend has been around for a while now and has spawned sites such as https://layoffs.fyi/ documenting the unprecedented number of layoffs in the IT industry. Other sites document layoffs in other industries (e.g. UK education and civil service) too, painting a gloomy picture of the state of unemployment and an extremely tough jobs market.

My current employer is making a round of RIFs at this moment in time! Hence this article about RIFs. I was affected by a RIF a year ago at a different corporation, so I am putting in motion the things to do from the lessons I learnt last time. I hope this will help anyone affected this time…

Trust No One

When you are told by your manager or director that there is a round of RIF and that “we are not affected” or “we are safe” – do not trust them. When this announcement is made, interpret it as “you need to make plans and execute them ASAP in preparation for being affected”. Until the RIF round is officially over, consider this “unknown” period your “at-risk” period.

Under UK employment rules (the minimum that corporations will follow), your employer has to give you an “at-risk” period (different from the one above!). When they give you this notice, you are able to stop work and look for other roles – internally OR externally.

When my previous employer gave me this “at-risk” period, they had already frozen hiring and no new “req”s were being granted, making it impossible to get an internal role if you wanted to stay with your employer. In this situation you are effectively certain of being made redundant and will have to leave their employment…

You need to put things into place if you are going to survive the redundancy.

The Nature of IT RIFs

Two data points are not a pattern to draw definite conclusions from, but it seems to me that when a corporation announces a record profit-making quarter, they follow it up with a record spend, which forces them to make a RIF. This is the way…

The nature of IT recruitment and redundancies seems to have settled into a boom-and-bust pattern. Corporations overspend and over-recruit to achieve a commercial objective or goal (usually adding as much value to the corporation as possible), and when this funding period is over, they perform a RIF to be able to start the next project. This is a vicious cycle for all employees, not just those who are let go.

When a RIF occurs, there’s little rhyme or reason as to why a specific individual is affected. The main directive or goal of a RIF is to reduce costs so the corporation can make up for the huge spend or fund the new project – nowadays it is certain to be AI. The lowest-hanging fruit will be picked first, then maybe the projects that cost the most but have delivered little, and then just randomly in areas that (to the bosses) are not important. Of course, to the individual, we are all important, so we ask: why me? Why my team? Why my organisation?

There is no reason – and even if your manager or director gives you a reason, that will not be it!

Accept and Move On

Successful people turn disadvantages into advantages – they accept the situation, deal with it fast, learn from it, and move on! They do not sulk, get down, or get stuck – they learn, try again, and try something different until they succeed. This is what anyone affected by a RIF must do. When I say “accept and move on”, yes, I mean accept the severance package, start on your CV/Resume, and start job hunting… or if you are due a good package, buy that Porsche you’ve always wanted and drive it… into a traffic jam…

One form of help that might be available to someone who is “at-risk” is a free consultation with a career coach. I must admit, I was very skeptical about this free facility at first, but once I ventured out to look at the job market, I found myself turning around and being open to help, tips, advice and motivation of any kind to get a head start.

The job market has changed a lot and has got tougher and tougher with each round of redundancies. You need all the advice and coaching you can get. The successful things you did to attain the job/role that you’ve just been made redundant from will NOT work this time! You need new job-hunting skills and tools, and you must adapt to the current state of the market.

Those who have not hunted for new roles or moved jobs in the past 5 or 10 years will have to learn and act fast! I see that even talent advisors and experienced recruiters struggle to find new roles for themselves, let alone for others…

What To Do?

This is my list – it needs to be adapted to your personal needs/situation – it is just something to get you started:

  1. Update/rewrite your CV/Resume
    • Your CV/resume will not be current, so update it
    • Your CV/resume will not be in a modern format/layout
    • Your CV/resume will need to be tailored to the role
    • Your CV/resume will need to be in a format that makes auto-form-filling easier
    • Your CV/resume will need to be in a format for AI to process and not reject you without passing it on to a human!
  2. Create a generic cover letter
    • Your roles will be very similar in requirements, so a generic letter will save time
    • Leave areas for specifics, but don’t forget to change those specifics
  3. Sign up to LinkedIn and other job boards
    • These sites will have job hunting tips and advice so take advantage
    • These sites allow you to network so take advantage
    • These sites might have training courses or practise facilities
  4. Reach out to contacts and ex-colleagues
    • There may be suitable vacancies with their employers
    • Ask them to spread the message that you are looking for a new role
  5. Create a spreadsheet of job applications
    • You will soon lose track of which company, recruiter, role, etc. you’ve applied to and why – keep a spreadsheet of all the relevant info
  6. Create a routine of job searching/application and rest
    • You will need to be disciplined, so a routine that works, with rest breaks to relieve the stress, will keep you going until you are successful
  7. Practise interviews, conversations, coding tests, etc.
    • You will need to be sharp and effective in your interviews; practise and deploy all the tricks and methods for effective interviewing, e.g. the S.T.A.R. method and the like.
    • Practise in a Zoom session and record yourself; play it back to evaluate how you perform and what you should and should not do, say and not say

Good Luck!

Do not give up! And do not stop once you’ve secured a new role! Work as though you are under threat of being made redundant – there is no such thing as a safe job any more – always actively develop and progress towards the next role…

I am writing this while I am actually in my “at-risk” period… But as I started this process well before the RIF news, I think I am ahead in the job queue (although not necessarily near the very start!)

Popularity of WordPress Over the Decade

I asked ChatGPT: Give me a graph of wordpress popularity over the past decade compared to other CMS such as Joomla and Drupal

Key Insights

WordPress

  • 2025: Powers about 43.5% of all websites, holding 61.2% of the CMS market share [aquipress.com]
  • Growth: Expanded from around 21% in 2014 to a peak of 65.2% in early 2022, and has slightly declined to ~61.3–61.4% by spring 2025 [themeisle.com]
  • Stability: Experienced consistent growth across the last decade, though growth has decelerated recently [themeisle.com].

Joomla

  • 2025: Commands around 1.5% of all websites, representing 2.1% of the CMS market [w3tech.com]
  • Decline: Dropped from roughly 9% in 2014 to just over 2% in 2025; this is a decline of approximately 16% since 2024 [searchenginejournal.com]
  • Competitive Shift: Previously ranked 2nd or 3rd, it’s now overtaken by hosted platforms like Wix and Shopify [themeisle.com].

Drupal

  • 2025: Maintains around 0.8% of all websites, or 1.2% of the CMS market share [searchjournal.com]
  • Long-Term Decline: By 2025, its share fell from around 3.5% in 2014 to ~1.2%, showing a 25% drop since 2024 [tooltester.com]

🔍 Trends at a Glance

| Platform | Share of All Websites (2025) | CMS Market Share (2025) | Trend Over Past Decade |
| --- | --- | --- | --- |
| WordPress | ~43.5% | ~61% | Steady growth until early 2022, slight decline afterward |
| Joomla | ~1.5% | ~2.1% | Continuous decline |
| Drupal | ~0.8% | ~1.2% | Steady decline |