A(I) Risky Move?

It is no secret that I am leaving Oracle to join Nscale at the end of this month. My main aim was to avoid the Oracle RIF/layoffs, and while planning for the worst-case scenario, I managed to secure a role at Nscale.

Both Oracle and Nscale are betting heavily on AI. With the current worry that the AI bubble is going to burst, and the boss of Google warning that ‘no company is going to be immune’ [BBC], I asked AI these questions…

How would Oracle suffer in an AI crash?

Oracle would experience significant financial and operational setbacks if an AI crash occurs, mainly because its recent strategy has focused heavily on AI cloud infrastructure and hyperscaler partnerships. Here’s how Oracle and Oracle Cloud Infrastructure (OCI) would suffer:

Revenue Slowdown and Underutilized Data Centers

  • Oracle expects that AI-driven cloud infrastructure could make up the majority of its revenue by 2030, with multi-billion dollar GPU investments and flagship deals supporting OpenAI, Meta, and other hyperscale model trainers.webpronews+1
  • In an AI crash, client demand for large-scale training and inference workloads would sharply decline. That means Oracle’s new data centers and GPU fleets (including hundreds of thousands of Nvidia GPUs) could become underutilized, forcing write-downs and lowering ROI.fortune

Margin Compression and Volatility

  • Oracle’s current margins on AI cloud offerings are modest (around 14% in recent quarters), and plans for improvement depend on scale and full utilization of infrastructure.bloomberg+1
  • Reduced client activity would keep margins low or negative, intensifying investor scrutiny and driving downward pressure on stock price—recent volatility already shows concern over fulfilling ambitious AI cloud predictions.cnbc+1
  • Oracle would also struggle to justify continued massive investments to shareholders if growth fails to materialize.

Exposure to Contract and Customer Risk

  • Oracle is especially exposed to a few large anchor clients (e.g. OpenAI, Microsoft), who may defer or downsize contracts if the economics of large-model training weaken.webpronews+1
  • Cancellations or renegotiations on these hyperscale deals would lead to revenue instability and possible reputational harm in the cloud sector.

Delayed Application Growth and Broader Impact

  • While Oracle’s core application business (ERP, NetSuite, traditional cloud services) offers some revenue buffer, these divisions would also be affected by a broader tech and enterprise spending slowdown typical in an AI-induced market downturn.constellationr+1
  • Expansion plans and R&D investment would likely be scaled back, slowing Oracle’s ability to compete with more insulated rivals.

Summary Table: Oracle’s Pain Points in an AI Crash

Effect Area | Oracle Cloud & Business Impact
Infrastructure ROI | Underutilized data centers, GPU fleets
Margins & Profit | Margin compression, stock volatility
Contract Risk | Deferred/canceled hyperscale projects
App/Enterprise Spend | Revenue dampening, slower growth

In summary, Oracle would suffer via underperforming cloud assets, margin pressure, dependency on a handful of high-risk AI clients, and contraction of enterprise demand. While traditional non-AI applications offer some shelter, the scale of Oracle’s recent AI cloud bets makes it highly exposed to a major market correction.reuters+6

How would Nscale be affected by an AI crash?

Nscale would be severely affected by an AI crash due to its intense reliance on hyperscaler contracts, rapid capital deployment, and debt-financed AI infrastructure builds. Its business model is specifically geared toward serving large-scale AI model training and compute workloads, making it one of the highest-risk startups in a downturn scenario.

Revenue Freeze and Stranded AI Data Centers

  • If AI demand collapses, hyperscalers like Microsoft and OpenAI—Nscale’s anchor clients—would scale back their contracts, delay GPU purchases, or walk away from expansion phases, freezing Nscale’s main cashflow pipeline.computing
  • Data centers built for intensive model training could be left mostly idle, forcing Nscale to absorb heavy operational losses while continuing to service debt and lease obligations.

Investor and Debt Pressure

  • Nscale’s recent $1.1 billion Series B and $433 million SAFE financing involve a mix of industrial and venture capital (Aker, Nvidia, Dell, Blue Owl), and depend on optimistic growth projections and future contract fulfillment.cnbc+3
  • In a crash, these investors would likely mark down their stakes significantly, making further fundraising difficult. If cash flow dries up, Nscale could face covenant breaches, mass layoffs, and possibly insolvency.

Asset Write-Downs and Market Value Declines

  • Large inventories of Nvidia GPUs and purpose-built data centers for AI workloads could lose much of their resale value if a flood of surplus infrastructure hits the market simultaneously.computing
  • Hardware and campuses not suited for general-purpose workloads would be especially vulnerable—mirroring the fate of unused “dark fiber” and crypto-mining assets after previous speculative bubbles burst.

Strategic Pivot and Survival Pathways

  • With deep exposure to hyperscaler deals, Nscale’s best shot at survival would be pivoting toward government-backed sovereign compute projects, or repositioning under-utilized capacity for scientific, academic, or inference-driven workloads.computing
  • However, the overall scale-back in AI investment would make even these pivots challenging and may require restructuring, asset sales, or merger activity.

Summary Table: Nscale’s Crash Exposure

Pain Point | Crash Impact
AI contract reliance | Revenue freeze, phased-out deals, layoffs
Infrastructure assets | Write-downs, idle data centers, GPU oversupply
Investor relations | Markdowns, covenant risk, fundraising difficulty
Pivot opportunities | Limited to sovereign or non-AI workloads

In summary, Nscale would be among the hardest-hit AI infrastructure startups during a crash, facing immediate cash flow problems, asset devaluation, and existential threats unless it rapidly adapts or receives protective strategic support.uktech+4

As an employee of Oracle or Nscale – who would be worst affected in an AI crash?

As an employee, Nscale staff would likely be much worse affected by an AI crash than Oracle employees. This is due to the relative scale, business diversity, and risk tolerance of the two companies:

Nscale Employees: High Risk

  • Startup Vulnerability: Nscale is a high-growth, venture-backed startup focused almost entirely on the AI cloud hyperscaler market. If an AI crash occurs, Nscale would face abrupt revenue freezes, contract withdrawals, and rapid asset devaluation, with far less financial cushion to weather losses than a large incumbent like Oracle.
  • Layoffs and Insolvency Risk: In a crash, Nscale may be forced to lay off a substantial portion of its workforce quickly, pause projects, or even enter administration if funding evaporates or debts cannot be serviced. Employees could face sudden job loss, delayed salaries, and loss of equity value, with fewer internal opportunities for reassignment.
  • Equity and Options: Most compensation upside for Nscale staff is likely tied to startup equity. A market correction would deeply devalue or wipe out employee options/shareholdings, and prospects for another lucrative exit (e.g., IPO or acquisition) would diminish drastically.

Oracle Employees: More Stability, Less Immediate Risk

  • Diversified Business: Oracle is a global enterprise with diversified revenue streams across cloud, applications, hardware, licensing, and services. Even with heavy AI investment, its overall risk is spread out, and core business lines would remain resilient.
  • Less Severe Layoffs: Large tech companies like Oracle do enact layoffs in downturns, but they tend to be more measured and spread across divisions. Employees have more options to transfer internally or leverage severance and career support programs.
  • Compensation Impact: Public company employees with RSUs or stock options would feel a drop in value, but these tend to recover after corrections, and overall job security remains higher due to Oracle’s operational scale and financial reserves.

Summary Table

Factor | Nscale Employee Impact | Oracle Employee Impact
Layoff risk | Extremely high, abrupt | Moderate, measured
Job security | Low | High
Equity/option value | Likely wiped out | Lower, but more resilient
Internal mobility | Minimal | Significant
Corporate backup/support | Minimal/none | Substantial

In summary:
Nscale employees are far more exposed to job loss, equity wipeout, and uncertainty during an AI crash, while Oracle employees—even in affected AI-focused roles—would generally be shielded by the company’s size, global reach, and business diversity.

What are the benefits of joining Nscale for my career over staying at Oracle?

Joining Nscale instead of staying at Oracle offers a profoundly different set of career benefits and risks. Your choice should be shaped by your appetite for growth, impact, compensation profile, and risk tolerance.

Career Acceleration and Impact

  • Rapid Responsibility & Visibility: At Nscale, a high-growth AI infrastructure startup, you’re likely to hold broader responsibilities, have a direct line to executives, and see tangible impact from your contributions—versus a more specialized, hierarchical environment at Oracle.linkedin+1
  • Cutting-Edge Technology Exposure: Nscale’s business is centered around the latest in AI hyperscale, GPU clusters, and sovereign compute, putting you at the heart of Europe’s most ambitious AI and cloud projects. This hands-on work accelerates technical learning and keeps you current with elite tooling.linkedin
  • Networking Opportunity: Being at a much talked-about scaleup, you’ll interact with top-tier industry partners (Microsoft, Nvidia, blue-chip VCs), as well as the fast-moving European AI ecosystem, which can boost professional reputation and future opportunities.cnbc+1

Compensation and Equity

  • Equity Upside: As with most startups, joining Nscale comes with employee equity—potentially life-changing financial upside if the company’s rapid growth leads to a successful IPO or acquisition. At Oracle, equity is usually granted as RSUs whose upside, while substantial, is tied to a mature company’s slower growth trajectory.
  • Compensation Mix Changes: Startup salary may be lower or matched to Oracle’s, but total comp potential is usually tied more to company performance and exit events.

Innovation, Learning and Influence

  • Innovation Environment: Nscale’s pace and hunger for innovation generally surpass what’s possible at a large, diversified incumbent like Oracle. Progression can be much faster for high performers at a startup, and you have more say in technical and strategic direction.
  • Learning Curve: The breadth of challenges—from scaling infra to winning deals and iterating products for hyperscaler clients—means you’ll develop skills quickly that are valuable for future founding roles, CTO posts, or senior technical leadership.

Risks, Trade-Offs and Stability

  • Job and Financial Risk: As shown in your AI crash scenario queries, Nscale is exposed to sector volatility. Layoff, equity wipeout, or strategic pivots are real risks in a downturn, while Oracle offers much greater job security and stable, long-term career pathways.
  • Work-Life Balance: Startups typically expect longer hours and intense focus, often trading stability for the chance at rapid advancement and financial gain.

Summary Table: Nscale vs Oracle Career Benefits

Career Factor | Nscale (Startup) | Oracle (Enterprise)
Career acceleration | Very high: fast progression, impact | Slower, more structured
Tech exposure | Latest/bleeding-edge AI/GPU work | Solid, broad enterprise stack
Compensation upside | High equity, risky | Stable salary, predictable RSUs
Job security | Low: volatile, high layoff risk | High, robust to market downturns
Influence | Large, direct access to leadership | Smaller, part of larger organization
Work-life balance | Often demanding | Generally more moderate
Networking | Top-tier VCs, hyperscalers, founders | Global enterprise contacts

In summary: Nscale offers a “rocket ship” growth and learning environment for someone hungry for ownership, innovation, and possible equity windfall—but this comes with higher risk and volatility. Oracle provides greater long-term job security, measured career progression, and reliable compensation, but less potential upside and less direct influence. The right choice depends on your risk appetite, ambition, and career goals.uktech+2

What would an AI crash look like?

An AI crash would resemble a hybrid of the 1990s dot-com bust and the 2008 financial crisis—but centered around artificial intelligence infrastructure, data centers, and corporate overinvestment. It would likely begin as a sudden market correction in overvalued AI firms and GPU suppliers, then spread through the financial system and tech economy as debt and demand collapse.

Market and Investment Collapse

In early stages, overleveraged companies like OpenAI, Anthropic, or firms heavily reliant on GPU compute (e.g., Nvidia, Oracle, Microsoft) would face sharp valuation drops as AI-generated revenues fail to justify trillion-dollar capital expenditures. Investor panic could trigger a chain reaction, collapsing the leveraged network of data‑center finance. Bloomberg and the Bank of England have both warned of a “sudden correction” and circular investing between chip firms and hyperscalers that artificially props up earnings.transformernews+1

The Data Center Bust

According to historian Margaret O’Mara and business analyst Shane Greenstein, AI data centers—many purpose‑built for model training using GPUs—are highly specialized and often remote from urban demand. These centers might last only 3–5 years and have little reuse value outside AI or crypto mining. If capital inflows freeze, thousands of megawatts of compute could become stranded assets, comparable to the empty fiber networks after the dot‑com collapse.transformernews

Economic Impact

The International Monetary Fund estimates roughly a third of current US GDP growth depends on AI-related investment. If the bubble bursts, consumption could fall from loss of “AI wealth effects,” dragging global markets into recession. Analysts at Transformer News liken it to Britain’s 1840s railway mania: vast sums invested in technology that ultimately enriched the future economy—at the cost of investors’ ruin.globalcapital+2

Consequences for Jobs and Technology

For the workforce, the crash would begin with mass layoffs across the tech sector and data‑center construction, followed by second‑order layoffs in software, marketing, and education technology. However, as with the post‑dot‑com era, redundant talent and abandoned infrastructure could later fuel a new, leaner AI industry based on sustainable business models.reddit+2

Systemic and Political Risks

While the contagion risk is smaller than subprime mortgages in 2008, debt-financed AI expansion—Oracle’s $100 billion borrowing plan with OpenAI being one example—creates vulnerability for lenders and investors. Should a major firm default, cascading insolvencies could ripple through the supply chain, forcing governments to intervene. Some analysts expect this crash would prompt stricter AI regulation and financing guardrails reminiscent of those enacted after the Great Depression.transformernews

Long-Term View

If artificial general intelligence (AGI) does eventually deliver major productivity gains, early investments may appear prescient. But if not, a 2020s AI crash would leave disused GPU campuses and massive debt—an exuberant experiment that accelerated technological progress at ruinous human cost.unherd+2

Which industries would collapse first in an AI crash?

In the event of an AI crash, several sectors would be hit first and hardest — especially those that have overexpanded based on speculative expectations of AI-driven profits or infrastructure demand. The collapse would cascade through high-capex industries, ripple across financial services, and disrupt employment-dependent consumer sectors.

Semiconductor and GPU Manufacturing

The semiconductor industry would be the first to collapse due to its heavy dependence on AI demand. Data center GPUs currently drive over 90% of Nvidia’s server revenue, and the entire sector’s value nearly doubled between 2024 and 2025 based on AI compute growth forecasts. If hyperscaler demand dries up, the oversupply of GPUs, high-bandwidth memory (HBM), and AI ASICs could cause a price crash similar to the telecom equipment bust in 2002. Chip makers and startups like Groq, Cerebras, and Tenstorrent—heavily leveraged to AI workloads—would struggle to survive the sudden capital freeze.digitalisationworld

Cloud and Data Center Infrastructure

AI-heavy cloud providers such as Microsoft Azure, AWS, Google Cloud, and Oracle Cloud would see massive write-downs in data center assets. Overbuilt hyperscale and sovereign AI campuses could become stranded investments worth billions as training workloads decline and electricity costs remain high. This dynamic mirrors the way dark fiber networks from the 1990s dot-com era lay idle for years after overinvestment.digitalisationworld

Digital Advertising and Marketing

The advertising and media sector—already experiencing erosion due to AI‑generated content—would decline abruptly. Companies like WPP have already lost 50% of their stock value in 2025 due to automated ad-generation technologies cannibalizing human creative work. As AI content generation saturates the market, profit margins in marketing, online publishing, and synthetic media platforms like Shutterstock and Wix could collapse.ainvest

Financial and Staffing Services

Financial services and staffing firms are another early casualty. AI has already automated large portions of transaction processing, compliance, and manual recruitment. Firms such as ManpowerGroup and Robert Half have reportedly seen 30–50% market value declines due to these pressures. In an AI crash, their exposure to risk-laden corporate clients and shrinking demand for human labor matching would deepen losses, while regulators tighten AI governance in compliance-heavy finance.ainvest

Transportation and Logistics

The transportation and logistics sector, closely tied to AI investment through autonomous systems, faces structural weakness. Millions of driving and delivery jobs could disappear due to automation, but the firms funding autonomous fleets—such as Tesla Freight and Aurora Innovations—would hemorrhage cash if capital dries up before widespread profitability. AI‑powered routing and warehouse systems could be written down as expensive overcapacity.ainvest

Secondary Collapse: Retail and Customer Support

Finally, customer‑facing retail and support sectors would be heavily affected. With AI chatbots now handling about 80% of common queries, these labor markets are already contracting. A market shock would worsen layoffs while eroding spending power, compounding the downturn.ainvest

Summary Table

Industry | Collapse Trigger | First-Order Impact | Example Companies Affected
Semiconductors | GPU oversupply, hyperscaler pullback | Revenue crashes, fab overcapacity | Nvidia, AMD, TSMC, Cerebras (digitalisationworld)
Cloud & Data Centers | Halt in AI training spend | Idle assets, megacenter write-downs | Microsoft, AWS, Oracle Cloud (digitalisationworld)
Advertising & Media | Automated ad content cannibalization | Loss of billable creative work | WPP, Shutterstock, Wix (ainvest)
Financial & Staffing | Job automation, credit exposure | Rapid fall in client demand, defaults | ManpowerGroup, Robert Half (ainvest)
Transportation | Delay in autonomous projects | Job losses, asset devaluation | Tesla Freight, Aurora, Waymo (ainvest)
Retail & Support | Customer service automation | Widespread workforce reduction | Shopify, Zendesk (ainvest)

In short, the first phase of an AI crash would decimate GPU suppliers and infrastructure providers, followed by cascading losses in services and labor markets that relied on sustained AI adoption and speculative investor optimism.

Which hyperscalers would be most affected in an AI crash?

The hyperscalers most severely affected by an AI crash would be those that have sunk the largest capital into AI‑specific data center expansion without commensurate returns—primarily Microsoft, Amazon (AWS), Alphabet (Google Cloud), Meta, Oracle, and to a lesser extent GPU‑specialist partners like CoreWeave and Crusoe Energy Systems. These companies are deep in an investment cycle driven by trillion‑dollar valuations and multi‑gigawatt data center commitments, meaning a downturn would cripple balance sheets, strand assets, and force major write‑downs.

Microsoft

Microsoft is the hyperscaler most exposed to an AI collapse. It has committed $80 billion for fiscal 2025 to AI‑optimized data centers, largely to support OpenAI’s model training workloads on Azure. Over half this investment is in the U.S., focusing on high‑power, GPU‑dense facilities that may become stranded if demand for model training plunges. The company also co‑leads multi‑partner mega‑projects like Stargate, a $500 billion AI campus venture involving SoftBank and Oracle.ft+1

Amazon Web Services (AWS)

AWS is next in risk magnitude, with $86 billion in active AI infrastructure commitments spanning Indiana, Virginia, and Frankfurt. Many of its new campuses are dedicated to AI‑as‑a‑Service workloads and custom silicon (Trainium, Inferentia). If model‑training customers scale back, AWS faces overcapacity in power‑hungry clusters designed for sustained maximum utilization. Analysts warn that such facilities are difficult to repurpose for general cloud usage due to 10× higher rack power and cooling loads.thenetworkinstallers+1

Alphabet (Google Cloud)

Google’s parent company, Alphabet, has pledged around $75 billion in AI infrastructure spending in 2025 alone—heavily concentrated in server farms for Gemini model operations. The company’s shift to AI‑dense GPU clusters has already required ripping and rebuilding sites mid‑construction. In a crash, Alphabet’s reliance on advertising to subsidize capex would expose it to compounding financial stress.ft+1

Meta

Meta’s risk is driven by scale and ambition rather than cloud dependency. The company is investing $60–65 billion into a network of AI superclusters, including a 2 GW data center in Louisiana designed purely for model training. Mark Zuckerberg’s goal to reach “superintelligence” entails constant full‑load operation—meaning unused compute in a recession would yield enormous sunk‑cost losses.hanwhadatacenters+1

Oracle

Oracle, a late entrant to the hyperscaler race, ranks as the fourth largest hyperscaler and has become deeply tied to OpenAI’s infrastructure build. It is reportedly providing 400,000 Nvidia GPUs—worth about $40 billion—for OpenAI’s Texas and UAE campuses under the Stargate project. Oracle’s dependency on a few high‑risk customers makes it vulnerable to disproportionate collapse if those clients cut capital expenditures.ft

GPU Cloud Specialists (CoreWeave, Crusoe, Lambda)

Although smaller in scale, CoreWeave, Crusoe Energy Systems, and Lambda Labs face acute financial danger. Each is highly leveraged to GPU leasing economics that assume near‑continuous utilization. A pause in large‑model training would break their cash flow structure, causing defaults among the so‑called “neo‑cloud” providers.hanwhadatacenters

Comparative Exposure Overview

Hyperscaler | Estimated 2025 AI Capex | Primary Risk Channel | Vulnerability in a Crash
Microsoft | $80 billion | Overexposure to OpenAI workloads | Extremely high (hanwhadatacenters)
Amazon (AWS) | $86 billion | Idle compute, training-specific sites | Very high (thenetworkinstallers)
Alphabet | $75 billion | Advertising decline + AI site overbuild | High (thenetworkinstallers)
Meta | $60–65 billion | Pure AI data center utilization risk | High (hanwhadatacenters)
Oracle | $40 billion (via Stargate) | Concentrated tenant risk (OpenAI) | Very high (ft)
CoreWeave / Crusoe / Lambda | $10–15 billion range | Debt leverage and GPU lease dependence | Extreme (hanwhadatacenters)

Summary

A sustained AI market collapse would first hit these hyperscalers through GPU underutilization, stranded data‑center capacity, and debt‑heavy infrastructure financing. Microsoft, Oracle, and Meta would face the most immediate write‑downs given their recent megaproject commitments. Amazon and Google, while financially stronger, would absorb heavy revenue compression. Specialized GPU‑cloud providers—CoreWeave, Crusoe, and Lambda—could fail outright due to funding constraints and dependence on short‑term AI demand surges.thenetworkinstallers+2

AI Hyperscalers

What Are Hyperscalers?

Hyperscalers are the giants of cloud computing — companies that design, build, and operate massive, global-scale data center infrastructures capable of scaling horizontally almost without limit. The term “hyperscale” refers to architectures that can efficiently handle extremely large and rapidly growing workloads, including AI training, inference, and data processing.

Examples:

  • Amazon Web Services (AWS)
  • Microsoft Azure
  • Google Cloud Platform (GCP)
  • Alibaba Cloud
  • Oracle Cloud Infrastructure (OCI) (smaller but sometimes included)

These companies have multi-billion-dollar capital expenditures (CAPEX) in data centers, networking, and custom hardware (e.g., AWS Inferentia, Google TPU, Azure Maia).


What Are Traditional AI Compute Cloud Providers?

These are smaller or more specialized providers that focus specifically on AI workloads—especially training and fine-tuning large models—often offering GPU or accelerator access, high-bandwidth networking, and lower latency setups.

Examples:

  • CoreWeave
  • Lambda Labs (Lambda Cloud)
  • Vast.ai
  • RunPod, Paperspace, FluidStack, etc.

They often use NVIDIA GPUs (H100, A100, RTX 4090, etc.) and emphasize cost-efficiency, flexibility, or performance for ML engineers and researchers.


Key Comparison: Hyperscalers vs. AI Compute Cloud Providers

Dimension | Hyperscalers | AI Compute Cloud Providers
Scale & Reach | Global, thousands of data centers; integrated with enterprise ecosystems | Smaller scale, often regional or specialized
Hardware | Custom silicon (TPUs, Inferentia, Trainium) + NVIDIA GPUs | Almost entirely NVIDIA GPU-based
Pricing Model | Complex, pay-as-you-go; optimized for enterprise commitments (e.g., reserved instances, savings plans) | Simpler, often cheaper hourly or spot pricing; more transparent GPU pricing
Performance Focus | Balance of general-purpose and AI-specific workloads | Focused almost entirely on deep learning performance
Networking | Proprietary, very high bandwidth and reliability | Can vary; some are optimized for high interconnect (e.g., NVLink, InfiniBand)
Ecosystem & Integration | Tight integration with DevOps, databases, storage, analytics, security, identity | Lightweight, focused mainly on compute with minimal frills
Flexibility | Broad platform: supports everything from web hosting to LLM training | Narrow focus: mainly training, inference, and fine-tuning workloads
Target Users | Large enterprises, governments, global-scale AI projects | Startups, researchers, model trainers, boutique AI labs
Innovation Cycle | Slower; must maintain enterprise stability | Faster; can pivot quickly to support new GPUs or AI frameworks
Example Use Case | Multi-cloud enterprise AI strategy integrating ERP + data lakes + model deployment | Cost-effective fine-tuning or inference serving for startups

How They’re Converging

There’s an emerging hybrid trend:

  • Hyperscalers are building specialized AI infrastructure (e.g., NVIDIA DGX Cloud partnerships, Microsoft–OpenAI collaboration, Google DeepMind integration).
  • Smaller AI compute providers are adding orchestration layers (e.g., API management, distributed training schedulers) to resemble mini hyperscalers.

Some mid-tier players like Oracle, IBM Cloud, and Tencent Cloud are positioning themselves between these two worlds — offering both enterprise reliability and AI specialization.


In Summary

  • Hyperscalers = scale, reliability, and enterprise integration; ideal for end-to-end AI systems.
  • AI Compute Clouds = agility, affordability, and specialization; ideal for developers or teams training and serving models directly.


Some Emergent Hyperscalers

  • Nscale — a vertically-integrated AI data-centre / GPU cloud scale-play (rapidly expanding via large GPU supply deals and data-centre buildouts). nscale.com+1
  • CoreWeave — GPU-first cloud operator focused on ML/graphics workloads; positions itself as lower-cost, fast access to new NVIDIA hardware. CoreWeave
  • Lambda Labs (Lambda Cloud) — ML-first cloud and appliances for researchers and enterprises; early to H100/HGX and sells private clusters. lambda.ai
  • Vast.ai — a marketplace/aggregator that connects buyers to third-party GPU providers for low-cost, on-demand GPU rentals. Vast AI
  • RunPod — developer-friendly, pay-as-you-go GPU pods and serverless inference/fine-tuning; emphasizes per-second billing and broad GPU options. Runpod+1
  • Paperspace (Gradient / DigitalOcean partnership) — easy UX for ML workflows, managed notebook/cluster services; targets researchers and smaller teams. paperspace.com+1
  • FluidStack — builds and operates large GPU clusters / AI infrastructure for enterprises; touts low cost and large cluster deliveries (recent colocation/HPC deals). fluidstack.io+1
  • Nebius — full-stack AI cloud aiming at hyperscale enterprise contracts (recent large Microsoft capacity agreements and public listing activity). Nebius+1
  • Iris Energy (IREN) — originally a bitcoin miner now pivoting to GPU colocation / AI cloud (scaling GPU fleet and data-centre capacity). Data Center Dynamics+1

Comparison table

Provider | Business model | Typical hardware | Pricing model | Typical customers | Notable strength / recent news
Nscale | Build-own-operate AI data centres + sell GPU capacity | NVIDIA GB/B-class & other datacentre GPUs (mass GPU allocations) | Enterprise deals / reservations + cloud access | Large enterprises, cloud partners | Large GPU supply deals with Microsoft; fast expansion. nscale.com+1
CoreWeave | Purpose-built GPU cloud operator | Latest NVIDIA GPUs (A100/H100, etc.) | On-demand, reserved; claims competitive price/perf | ML teams, render farms, game studios | ML-focused architecture, early access to new GPUs. CoreWeave
Lambda Labs | ML-focused cloud + private on-prem appliances | A100/H100/HGX offerings; turnkey clusters | On-demand + private cluster contracts | Researchers, enterprises needing private clusters | Early H100/HGX on-demand; private “caged” clusters. lambda.ai
Vast.ai | Marketplace / broker: spot, community & datacenter providers | Varies (user-supplied & datacenter GPUs) | Market pricing / spot-style auctions, often cheapest | Hobbyists, researchers, cost-sensitive teams | Highly price-competitive via marketplace model. Vast AI
RunPod | On-demand pods, serverless inference & dev UX | Wide range: H100, A100, RTX 40xx, etc. | Per-second billing, pay-as-you-go | Individual devs, startups, ML teams experimenting | Per-second billing, fast spin-up, developer tooling. Runpod+1
Paperspace | Managed ML platform (Gradient), notebooks, VMs | H100/A100 and consumer GPUs via partners | Subscription tiers + hourly GPU rates | Students, researchers, startups | Easiest UX for notebooks + learning resources. paperspace.com+1
FluidStack | Large-scale cluster operator & managed AI infra | Large fleets of datacenter GPUs | Custom / enterprise pricing (claims big cost savings) | Labs, enterprises training frontier models | Big colocation/HPC deals; expanding capacity via mining/colocation partners. fluidstack.io+1
Nebius | Full-stack AI cloud (aims at hyperscale) | NVIDIA datacenter GPUs (scale focus) | Enterprise contracts / cloud offerings | Enterprises chasing hyperscale AI capacity | Large multi-year capacity deals (e.g., Microsoft). Nebius+1
Iris Energy (IREN) | Data-centre owner / ex-miner pivoting to AI cloud | Building GPU capacity (B300/GB300, etc.) alongside ASICs | Colocation + AI cloud contracts / asset monetisation | Enterprises, HPC customers; also investor community | Pivot from bitcoin mining to GPU/AI colocation and cloud. Data Center Dynamics+1

Practical differences that matter when you pick one

  1. Business model & reliability
    • Marketplace providers (Vast.ai) are great for cheap, experimental runs but carry variability in host reliability and support. Vast AI
    • Dedicated GPU clouds (CoreWeave, Lambda, FluidStack, Nebius, Nscale, Iris) provide more predictable SLAs and engineering support for production/federated training. nscale.com+4CoreWeave+4lambda.ai+4
  2. Access to bleeding-edge hardware
    • Lambda and CoreWeave emphasize fast access to the newest NVIDIA stacks (H100, HGX/B200, etc.). Good if you need peak FLOPS. lambda.ai+1
  3. Pricing predictability vs lowest cost
    • RunPod / Vast.ai / Paperspace often win on price for small / short jobs (per-second billing, spot marketplaces). For large, sustained runs, enterprise contracts with Nebius / Nscale / FluidStack or reserved capacity at Lambda/CoreWeave may be more cost-efficient. Runpod+2Vast AI+2
  4. Scale & strategic partnerships
    • Nebius and Nscale are scaling via huge supply agreements and data-centre builds aimed at enterprise contracts (Microsoft news for both). That makes them candidates if you need tens of thousands of GPUs or long-term buying power. Reuters+1
  5. Operational maturity & support
    • CoreWeave, Lambda, and Paperspace have mature dev experience / tooling and are used widely by ML teams. FluidStack and the miner pivots (Iris Energy) are moving fast into HPC/colocation and can offer very large capacity but may require more custom engagement. irisenergy.gcs-web.com+4CoreWeave+4lambda.ai+4

Which should you pick for common scenarios?

Managed notebooks, easy onboarding: Paperspace (Gradient). paperspace.com

Experimentation / cheapest short runs: Vast.ai or RunPod. Vast AI+1

Research / fast access to newest GPUs: Lambda or CoreWeave. lambda.ai+1

Large-scale, enterprise training / long contracts: Nebius, Nscale, FluidStack, or Iris (colocation + committed capacity). Data Center Dynamics+3Reuters+3Reuters+3

Oracle Cloud Infrastructure (OCI) vs Nscale (as of October 19, 2025). I’ll cover: company profiles, business models, core products & hardware, scale & geography, networking/storage, pricing & commercial approach, enterprise features & ecosystem, strengths/weaknesses, risk factors, and recommended fit / use cases. I’ll call out the most important, source-backed facts inline so you can follow up.


OCI vs Nscale

OCI (Oracle Cloud Infrastructure) — Enterprise-grade public cloud from Oracle with a full-stack platform (150+ services), strong emphasis on bare-metal GPU instances, low-latency RDMA networking, and purpose-built AI infrastructure (OCI Supercluster) for very large-scale model training and enterprise workloads. Oracle+1

Nscale — A rapidly-scaling, GPU-focused AI infrastructure company and data-center operator (spinout from mining heritage) that is building hyperscale GPU campuses and selling large blocks of GPU capacity to hyperscalers and cloud partners — recently announced a major multi-year / multi-100k GPU deal with Microsoft and is positioning itself as an AI hyperscaler engine. Reuters+1


1) Business model & target customers

  • OCI: Full public cloud operator (IaaS + PaaS + SaaS) selling compute, storage, networking, database, AI services, and enterprise apps to enterprises, large ISVs, governments, and cloud-native teams. OCI competes with AWS/Azure/GCP on breadth and with a particular push on enterprise and large AI workloads. Oracle+1
  • Nscale: Data-centre owner / AI infrastructure supplier that builds, owns, and operates GPU campuses and sells/leases capacity (colocation, wholesale blocks, and managed deployments) to hyperscalers and strategic partners (e.g., Microsoft). Nscale’s customers are large cloud/hyperscale buyers and enterprises needing multi-thousand-GPU scale. Reuters+1

Takeaway: OCI is a full cloud platform for a wide range of workloads; Nscale is focused on delivering raw GPU capacity and hyperscale AI facilities to large customers and cloud partners.


2) Scale, footprint & recent milestones

  • OCI: Global cloud regions and an enterprise-grade service footprint; OCI advertises support for Supercluster-scale deployments (hundreds of thousands of accelerators per cluster in design) and already offers H100/L40S/A100/AMD MI300X instance families. OCI emphasizes multi-region enterprise availability and managed services. Oracle+1
  • Nscale: Growing extremely fast — public reports (October 2025) show Nscale signing an expanded agreement to supply roughly ~200,000 NVIDIA GB300 GPUs to Microsoft across data centers in Europe and the U.S., plus earlier multi-year deals and very large funding rounds to build GW-scale campuses. This positions Nscale as a major new source of hyperscale GPU capacity. (news: Oct 15–17, 2025). Reuters+1

Takeaway: OCI provides a mature, globally distributed cloud platform; Nscale is an emergent, fast-growing specialist whose business is specifically bulking up GPU supply and datacenter capacity for hyperscalers.


3) Hardware & AI infrastructure

  • OCI: Provides bare-metal GPU instances (claimed as unique among majors), broad GPU families (NVIDIA H100, A100, L40S, GB200/B200 variants, AMD MI300X), and specialized offerings like the OCI Supercluster (designed to scale to many tens of thousands of accelerators with ultralow-latency RDMA networking). OCI highlights very large local storage per node for checkpointing and RDMA networking with microsecond-level latencies. Oracle+1
  • Nscale: Focused on the latest hyperscaler-class silicon (publicly reported deal to supply NVIDIA GB300 / GB-class chips at scale) and on designing campuses with the power/networking needed to host very high-density GPU racks. Nscale’s value prop is enabling massive, contiguous blocks of the newest accelerators for customers who need scale. nscale.com+1

Takeaway: OCI offers a broad, immediately available catalogue of GPU instances inside a full cloud stack (VMs, bare-metal, networking, storage). Nscale promises extremely large, tightly-engineered deployments of the very latest chips (built around wholesale supply deals) — ideal when you need huge contiguous blocks of identical GPUs.


4) Networking, storage, and cluster capabilities

  • OCI: Emphasizes ultrafast RDMA cluster networking (very low latency), substantial local NVMe capacity per GPU node for checkpointing and training, and integrated high-performance block/file/object storage for distributed training. OCI’s Supercluster design targets the network and storage patterns of large-scale ML training. Oracle+1
  • Nscale: As a data-centre builder, Nscale’s engineering focus is on supplying enough power, cooling, and high-bandwidth infrastructure to run dense GPU deployments at hyperscale. Exact publicly-documented RDMA/InfiniBand topology details will depend on the specific deployment/sale (e.g., Microsoft campus). Data Center Dynamics+1

Takeaway: OCI is explicit about turnkey low-latency cluster networking and storage integrated into a full cloud. Nscale provides the raw site-level infrastructure (power, capacity, racks) which customers — or partner hyperscalers — will integrate with their preferred networking and orchestration stacks.


5) Pricing & commercial model

  • OCI: Typical cloud commercial models (pay-as-you-go VMs, bare-metal by the hour, reserved/committed pricing, enterprise contracts). Oracle often positions OCI GPU VMs/bare metal as price-competitive vs AWS/Azure for GPU workloads and offers enterprise purchasing options. Exact on-demand vs reserved comparisons depend on instance type and region. Oracle+1
  • Nscale: Business-to-business, large-block commercial contracts (multi-year supply/colocation agreements, reserved capacity). Pricing is negotiated at scale — Nscale’s publicized Microsoft deal is a wholesale/supply/managed capacity arrangement rather than per-hour public cloud list pricing. For organizations that need thousands of GPUs, Nscale will typically offer custom commercial terms. Reuters+1

Takeaway: OCI is priced and packaged for on-demand to enterprise-committed cloud customers; Nscale sells large committed capacity and colocation — better for multi-year, high-volume needs where custom pricing and term structure matter.


6) Ecosystem, integrations & managed services

  • OCI: Deep integration with Oracle’s enterprise software (databases, Fusion apps), full platform services (Kubernetes, observability, security), and AI developer tooling. OCI customers benefit from a full-stack cloud ecosystem and enterprise SLAs. Oracle
  • Nscale: Ecosystem strategy centers on partnerships with hyperscalers and OEMs (e.g., Dell involvement in recent deals) and with chip vendors (NVIDIA). Nscale’s role is primarily infrastructure supply; customers will typically integrate their own orchestration and cloud stack or rely on partner hyperscalers for higher-level platform services. nscale.com+1

Takeaway: OCI is a one-stop cloud platform. Nscale is infrastructure-first and will rely on partner ecosystems for platform and application services.


7) Strengths & weaknesses (practical lens)

OCI strengths

  • Full cloud platform with enterprise services and AI-optimized bare-metal GPUs. Oracle+1
  • Designed for low-latency distributed training at scale (Supercluster, RDMA). Oracle
  • Broad GPU/accelerator families (NVIDIA + AMD options). Oracle

OCI weaknesses / risks

  • Market share and ecosystem mindshare still behind AWS/Azure/GCP in many regions; vendor lock-in concerns for Oracle-centric enterprises.

Nscale strengths

  • Ability to deliver huge contiguous GPU volumes (100k–200k+ scale) quickly via supply contracts and purpose-built campuses — attractive to hyperscalers and large cloud partners. Recent publicized Microsoft deal is a major signal. Reuters+1
  • Investor & OEM backing that accelerates buildout (Dell, Nokia, others reported). nscale.com

Nscale weaknesses / risks

  • New entrant: rapid growth introduces execution risk (power availability, construction timelines, operational maturity). Big deals depend on multi-year delivery and integration with hyperscaler networks. Financial Times+1

8) Risk & due diligence items

If you’re choosing between them (or evaluating using both), check:

  1. Availability & timeline: OCI instances are available now; Nscale’s large campuses are in active buildout — confirm delivery timelines for GPU blocks you plan to consume. (Nscale’s big deal timelines: deliveries beginning next year in some facilities per press). TechCrunch+1
  2. Network topology & RDMA: If you need low-latency multi-node training, verify the network fabric (OCI documents RDMA / microsecond latencies; for Nscale verify whether customers get InfiniBand/RDMA within the purchased footprint). Oracle+1
  3. Commercial terms: Nscale = custom wholesale/colocation contracts; OCI = public cloud, enterprise agreements and committed-use discounts. Get TCO comparisons for sustained runs (a rough sketch follows this list). Oracle+1
  4. Operational support & SLAs: OCI provides full cloud SLAs and platform support; Nscale will likely provide data-centre/ops SLAs but may require integration effort depending on the buyer/partner model. Oracle+1
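
To make the TCO point concrete, here is a minimal Python sketch that compares a sustained training footprint under an on-demand hourly rate, a committed-use discount, and a wholesale/colocation lease. Every number in it (rates, discount, lease, setup fee, cluster size) is a hypothetical placeholder, not OCI or Nscale pricing; the point is only the shape of the comparison.

```python
# Hypothetical GPU TCO comparison for a sustained training footprint.
# All rates, discounts, and fees below are illustrative placeholders, not vendor pricing.

HOURS_PER_MONTH = 730  # average hours in a month

def on_demand_cost(gpus: int, months: int, hourly_rate: float) -> float:
    """Pay-as-you-go: every GPU-hour billed at the on-demand rate."""
    return gpus * months * HOURS_PER_MONTH * hourly_rate

def committed_cost(gpus: int, months: int, hourly_rate: float, discount: float) -> float:
    """Committed-use / reserved capacity: same meter, discounted rate."""
    return on_demand_cost(gpus, months, hourly_rate) * (1.0 - discount)

def wholesale_cost(gpus: int, months: int, monthly_lease_per_gpu: float,
                   fixed_setup: float) -> float:
    """Wholesale block / colocation: flat monthly lease per GPU plus a one-off setup fee."""
    return gpus * months * monthly_lease_per_gpu + fixed_setup

if __name__ == "__main__":
    gpus, months = 1024, 24  # hypothetical sustained cluster over two years
    hourly = 4.00            # hypothetical $/GPU-hour on demand
    scenarios = {
        "on-demand": on_demand_cost(gpus, months, hourly),
        "committed (30% off)": committed_cost(gpus, months, hourly, 0.30),
        "wholesale lease": wholesale_cost(gpus, months,
                                          monthly_lease_per_gpu=1800.0,
                                          fixed_setup=2_000_000.0),
    }
    for name, total in scenarios.items():
        per_gpu_hour = total / (gpus * months * HOURS_PER_MONTH)
        print(f"{name:>20}: ${total:,.0f} total  (~${per_gpu_hour:.2f}/GPU-hour)")
```

The crossover between the models depends heavily on sustained utilization: idle committed or leased capacity still costs money, which is exactly the stranded-asset risk discussed earlier.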

9) Who should pick which?

  • Pick OCI if you want: Immediate, production-ready cloud with GPU bare-metal/VM options, integrated platform services (K8s, databases, monitoring), and predictable on-demand/reserved pricing — especially if you value managed services and global regions. Oracle+1
  • Pick Nscale if you want: Multi-thousand to multi-hundred-thousand contiguous GPU capacity under a negotiated multi-year/colocation deal (hyperscaler-scale training, or to supply a cloud product), and you can accept a bespoke onboarding/ops model in exchange for potentially lower per-GPU cost at massive scale. (Recent Microsoft deal signals Nscale’s focus and capability). Reuters+1

Short recommendation & practical next steps

  • If you’re an enterprise or team needing immediate GPU clusters with full cloud services -> evaluate OCI’s GPU bare-metal and Supercluster options and request price/perf for your model. Use OCI if you want plug-and-play with enterprise services. Oracle+1
  • If you are planning hyperscale capacity (thousands→100k GPUs) and want to reduce per-GPU cost through long-term committed deployments -> open commercial discussions with Nscale (and other infrastructure suppliers) now; verify delivery schedule, power, networking fabric, and integration model. Reuters+1

Major WordPress Vulnerabilities Over the Last Decade

Major WordPress Vulnerability Categories

  1. SQL Injection (SQLi)
    • Attackers inject malicious SQL queries through unsanitized input fields, gaining access to databases, user credentials, or site content.
    • Example: Old WordPress versions before 4.8.3 (2017) had SQLi flaws in the $wpdb->prepare() function.
  2. Cross-Site Scripting (XSS)
    • Malicious scripts injected into websites, often via comments or poorly coded plugins/themes.
    • Common in plugins like WP GDPR Compliance (2018) and Slider Revolution (2014).
  3. Privilege Escalation
    • Bugs that allow attackers to elevate their access (e.g., from subscriber → admin).
    • Example: REST API vulnerability (2017, WordPress 4.7.0/4.7.1) allowed unauthenticated users to modify posts.
  4. Remote Code Execution (RCE)
    • Attackers upload or execute arbitrary code. Usually comes from plugin/theme flaws.
    • Example: File Manager plugin (2020) let unauthenticated users upload and execute PHP files.
  5. Authentication Bypass
    • Weaknesses in login/auth functions letting attackers impersonate users.
    • Example: WordPress 5.7 (2021) had an object injection vulnerability in PHPMailer that could lead to bypass in some configurations.
  6. Cross-Site Request Forgery (CSRF)
    • Tricks users into performing unwanted actions while authenticated.
    • Found often in plugins like Yoast SEO (2015).
  7. File Upload Vulnerabilities
    • Poor validation allows attackers to upload malicious files (e.g., PHP shells).
    • Example: Gravity Forms (2016) suffered from a file upload issue.

Notable WordPress Vulnerabilities by Year

  • 2014: Slider Revolution Vulnerability
    • A premium plugin bundled with many themes. Severe flaw allowed attackers to download sensitive files (wp-config.php), leading to mass exploits.
  • 2015: Cross-Site Scripting in WordPress Core
    • Versions <4.2.1 had a critical XSS flaw that allowed attackers to compromise millions of sites.
  • 2016: REST API Issues & Plugin Flaws
    • Multiple XSS and file upload vulnerabilities reported. The W3 Total Cache plugin exposed database information.
  • 2017: REST API Content Injection (WordPress 4.7.0/4.7.1)
    • Allowed attackers to modify posts without authentication. Over 1.5 million sites defaced before patch.
  • 2018: GDPR Compliance Plugin
    • 100,000+ sites vulnerable to privilege escalation + CSRF. Attackers could create admin accounts.
  • 2019: Social Warfare Plugin RCE
    • Popular plugin (70k+ installs) let attackers inject malicious scripts via reflected XSS → RCE.
  • 2020: File Manager Plugin Vulnerability
    • A zero-day flaw let anyone upload malicious PHP files. Exploited widely; affected 700k+ sites.
  • 2021: WP Database Reset Plugin CSRF + Privilege Escalation
    • Attackers could reset entire sites, lock out admins, and create rogue accounts.
  • 2022: Elementor & Essential Addons
    • Widely used site builder plugin had critical RCE and SQLi vulnerabilities, impacting millions.
  • 2023: WooCommerce Payments Vulnerability
    • Authentication bypass allowed attackers to impersonate admin users. Urgent patch issued.
  • 2024 (recent): WP Automatic Plugin (200k installs)
    • Zero-day RCE exploited in the wild. Attackers uploaded malicious PHP code to gain site access.

Summary

  • Core WordPress is relatively secure today, thanks to fast patching and automatic updates.
  • Plugins & themes remain the biggest attack surface, especially those with large install bases.
  • Zero-days in popular plugins (File Manager, WooCommerce, Elementor, etc.) are the most exploited.
  • Hardening strategies: keep WordPress, themes, and plugins updated; minimize plugin use; use WAF/firewalls; restrict file permissions.

Here are some of the most severe WordPress-related vulnerabilities from roughly the past decade—specifically, those assigned the highest CVSS (Common Vulnerability Scoring System) scores, indicating critical risk:

Top WordPress Vulnerabilities by CVSS Score

1. CVE-2023-5199 – PHP to Page plugin – CVSS 9.9

  • Type: Authenticated (Subscriber+) Local File Inclusion leading to Remote Code Execution (RCE)
  • Impact: A subscriber-level user can abuse a shortcode vulnerability to include and execute arbitrary files, potentially enabling full server compromise.
  • Severity: One of the highest-ever scores for a WordPress plugin.
    wordfence.com

2. CVE-2025-4394 – Alone – Charity Multipurpose Non-profit WordPress Theme – CVSS 9.8

  • Type: Arbitrary file upload allowing Remote Code Execution
  • Impact: Attackers could upload ZIP archives containing PHP backdoors, enabling full site takeover—including admin account creation, malware deployment, phishing redirects, etc.
  • Note: Exploitation started on July 12, 2025, just two days before public disclosure, affecting approximately 200 live sites. Patched in version 7.8.5 on June 16, 2025.
    TechRadar

3. CVE-2020-36837 – ThemeGrill Demo Importer plugin – CVSS 9.9

  • Type: Authentication bypass that resets the database
  • Impact: Attackers already authenticated can reset the site’s entire database—potentially wiping data and creating admin-level access.
    SANS Institute

4. Other Critical CVSS 9.8-Rated Vulnerabilities

Numerous high-severity vulnerabilities (all scored at 9.8) have been identified within the last few years. Here are highlights:

  • CVE-2019-25213 – Advanced Access Manager plugin allowed unauthenticated file reads (e.g., wp-config.php).
  • CVE-2019-25217 – SiteGround Optimizer plugin had Remote Code Execution and Local File Inclusion via auth bypass.
  • CVE-2020-36832 – Ultimate Membership Pro plugin permitted unauthenticated logins as any user, including admins.
  • CVE-2021-4443 – WordPress Mega Menu plugin enabled arbitrary file creation leading to RCE.
    SANS Institute
  • CVE-2024-9265 – Echo RSS Feed Post Generator plugin allowed unauthenticated registration as administrator.
  • CVE-2024-9289 – WordPress & WooCommerce Affiliate Program plugin had authentication bypass to login as admin.
  • CVE-2024-5150 – Login with Phone Number plugin allowed unauthenticated login as any existing user.
    SANS Institute+1
  • CVE-2024-11642 – Post Grid Master plugin suffered from Local File Inclusion via locate_template, enabling RCE.
  • CVE-2025-9636 – Post Grid & Gutenberg Blocks plugin allowed unauthenticated admin registration via privilege escalation.
  • CVE-2024-13446 – Workreap theme plugin permitted account takeover.
  • CVE-2024-11284 / 11285 / 11286 – WP JobHunt plugin had privilege escalation, email takeover, and auth-bypass issues—each scored 9.8.
  • CVE-2025-2232 – Realteo (Real Estate Plugin) allowed unauthenticated admin account registration.
  • CVE-2025-1771 – Traveler theme had Local File Inclusion enabling arbitrary file execution.
  • CVE-2024-13560 – WP Foodbakery plugin allowed unauthenticated arbitrary file uploads and admin registration.
    SANS Institute

5. Additional Noteworthy High-Severity Plugins

  • CVE-2024-10960 (Brizy – Page Builder) – Arbitrary file uploads (CVSS 9.9) leading to possible RCE.
  • CVE-2024-12213 (WP Job Board Pro) – Privilege escalation to register as admin (CVSS 9.8).
    Reddit+1
  • CVE-2021-24284 – Kaswara Modern WPBakery Addons plugin enabled unauthenticated arbitrary file upload and code execution. Reportedly the CVSS was rated 10.0.
    Reddit

Summary Table

CVE ID | Component & Issue | CVSS Score | Impact Summary
CVE-2023-5199 | PHP to Page plugin – LFI → RCE via shortcode | 9.9 | Authenticated subscriber → RCE
CVE-2025-4394 | Alone theme – Arbitrary file upload → RCE | 9.8 | Full site takeover
CVE-2020-36837 | ThemeGrill Demo Importer – Auth bypass → DB reset | 9.9 | Site reset, admin access
Others (e.g., CVE-2019-25213, 2019-25217, 2024-11642, etc.) | Various plugins – File Inclusions, Privilege Escalation, Auth Bypass | 9.8 | Data exposure, admin account creation, RCE
CVE-2021-24284 | WPBakery Addons – Arbitrary upload → RCE | 10.0 | Complete site compromise

Recommendations

  1. Plugins & Themes Are the Real Attack Surface
    Almost all of these high CVSS issues stem from third-party plugins/themes, not WordPress core.
  2. Minimize Risk by Keeping Software Updated
    Always update plugins/themes immediately when patches are released.
  3. Reduce Attack Surface
    Use only necessary, well-reviewed plugins and themes. Delete unused ones.
  4. Use Defense-in-Depth
    Employ Web Application Firewalls (WAF), restrict file permissions, and monitor logs for anomalies.

CVE-2021-24284 – Kaswara Modern WPBakery Page Builder Addons

This vulnerability, rated CVSS 10.0, allowed unauthenticated arbitrary file uploads leading to code execution and full site compromise.

Scope of exposure:

  • Estimated number of vulnerable sites: Between 4,000 and 8,000 WordPress installations still had the plugin active at the time of reports (The Hacker News, Dark Reading).
  • Sites targeted in attack campaigns: Security researchers observed that approximately 1.6 million websites were scanned in total, though the majority targeted were not actually running the vulnerable plugin (Brandefense, Security Affairs).
  • Wordfence noted that over 1,000 websites under their protection were still running the plugin and thus continually targeted (The Hacker News).

Unfortunately, no publicly disclosed lists or identities of specific affected websites were available—likely due to privacy and the sensitive nature of security incidents.

Summary:

  • Estimated vulnerable sites: 4,000–8,000 still installed the plugin.
  • Total sites scanned in attacks: Around 1.6 million.
  • Confirmed protected sites running the plugin: Over 1,000, tracked by Wordfence.

CVE-2023-5199 – PHP to Page Plugin

This vulnerability involved a Local File Inclusion (LFI) that could escalate to Remote Code Execution (RCE). It’s less clear which—or how many—websites were actually impacted.

Findings:

  • I found no publicly available information naming specific affected sites or providing broad estimations of numbers affected.
  • The known impact stems from the vulnerability leveraged by authenticated users (subscribers or above) using the shortcode—but no data on scale or reported attacks was available from the sources reviewed (NVD, wiz.io).

Summary Table

CVE ID | Affected Websites / Scope of Impact | Notes
CVE-2021-24284 | ~4,000–8,000 still had plugin installed; ~1.6 million scanned | No specific site names disclosed; active exploitation observed
CVE-2023-5199 | No specific data available on affected sites | No published numbers or site identities

Notes

While industry sources provide a strong sense of scale—thousands of vulnerable sites and millions scanned—they do not reveal actual site names or URLs affected, likely to protect site owners and avoid enabling further attacks.

1. How to Detect if a Site is Vulnerable

  • Check Plugin/Theme Versions
    • Look up the affected plugin/theme in the WordPress dashboard under Plugins → Installed Plugins or Appearance → Themes.
    • Compare the installed version to the patched version from the vendor’s site or WordPress.org plugin directory.
    • Example: For Kaswara Modern WPBakery Addons, the plugin was removed from WordPress.org and never patched → if installed, it’s vulnerable.
  • Scan with Security Tools
    • Tools like WPScan, Wordfence, or Sucuri SiteCheck can flag known vulnerable plugins/themes and outdated versions.
    • Example WPScan command: wpscan --url https://example.com --enumerate vp (--enumerate vp checks vulnerable plugins)
  • Server Log Inspection
    • Look for suspicious requests (e.g., POST requests uploading .php files, or requests to /wp-content/uploads/).
    • Many mass-exploitation campaigns leave traces in access.log, such as repeated POSTs to /wp-admin/admin-ajax.php invoking a plugin’s upload action (a small log-scanning sketch follows this list).
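
As a concrete illustration of the log-inspection step, here is a minimal Python sketch that scans an Apache/Nginx-style access log for the two patterns mentioned: POSTs to admin-ajax.php and direct requests for .php files under /wp-content/uploads/. The log path and the regular expressions are assumptions to adapt to your own server, not a complete detection rule set.

```python
#!/usr/bin/env python3
"""Flag suspicious requests in a web server access log (illustrative sketch only)."""
import re
import sys

# Assumed patterns: POSTs to admin-ajax.php (often abused by plugin upload actions)
# and any direct hit on a .php file inside the uploads directory.
SUSPICIOUS = [
    re.compile(r'"POST /wp-admin/admin-ajax\.php', re.IGNORECASE),
    re.compile(r'/wp-content/uploads/[^" ]*\.php', re.IGNORECASE),
]

def scan(log_path: str) -> None:
    # Print every line that matches one of the suspicious patterns.
    with open(log_path, errors="replace") as log:
        for lineno, line in enumerate(log, start=1):
            if any(pattern.search(line) for pattern in SUSPICIOUS):
                print(f"{log_path}:{lineno}: {line.rstrip()}")

if __name__ == "__main__":
    # Usage: python scan_access_log.py /var/log/apache2/access.log
    for path in sys.argv[1:] or ["access.log"]:
        scan(path)
```

A match is a prompt to investigate, not proof of compromise: legitimate plugins also use admin-ajax.php, so correlate hits with the plugins actually installed.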

2. Mitigation & Patch Strategies

  • Immediate Actions
    • Disable and delete vulnerable plugins/themes if no patch exists (like Kaswara).
    • If a patch exists (e.g., Alone Theme CVE-2025-4394), update immediately.
  • Harden WordPress
    • Restrict file permissions:
      • wp-config.php → 400 or 440
      • /wp-content/uploads/ → disallow execution of PHP files via .htaccess: <Files *.php> deny from all </Files> (Apache 2.2 syntax; a fuller hardening sketch follows after this list)
    • Disable direct file editing in wp-config.php: define('DISALLOW_FILE_EDIT', true);
  • Web Application Firewall (WAF)
    • Cloudflare, Wordfence, or Sucuri WAF can block exploit signatures.
    • Example: Wordfence blocked the Kaswara WPBakery exploit in July 2022 before it could execute.
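
As a rough sketch of the hardening bullets above (not a definitive recipe), the following commands assume an Apache host with the site root at /var/www/html; the .htaccess rule uses Apache 2.4 syntax, whereas the inline example above shows the older 2.2 "deny from all" form.

  # Lock down wp-config.php (400 or 440 per the guidance above; the file owner must be the PHP user)
  chmod 400 /var/www/html/wp-config.php

  # Block direct execution of PHP inside the uploads directory (Apache 2.4 syntax)
  printf '%s\n' '<FilesMatch "\.php$">' '  Require all denied' '</FilesMatch>' \
    > /var/www/html/wp-content/uploads/.htaccess

  # Disable the built-in plugin/theme file editor by adding this line to wp-config.php,
  # above the "That's all, stop editing!" comment:
  #   define('DISALLOW_FILE_EDIT', true);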

3. Exploit Breakdown (Simplified Example)

Let’s take CVE-2021-24284 (Kaswara WPBakery Addons):

  1. Attacker sends a malicious POST request to the vulnerable AJAX action (kaswara_ajax_upload).
  2. No authentication or nonce check is required.
  3. Attacker uploads a .php web shell disguised as an image.
  4. Attacker accesses the file directly via /wp-content/uploads/…/shell.php.
  5. Now they have remote code execution (RCE) → can add admin accounts, modify files, or pivot further into the server.

Actions:

  • If the vulnerable plugin/theme is still installed, the site is exploitable.
  • The safest action for unpatched plugins (like Kaswara) is removal, not just deactivation.
  • Ongoing monitoring via WAF + log inspection is essential since many of these campaigns involve automated bots scanning millions of sites.
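
To make the monitoring point concrete, a simple check like the one below can be run regularly (for example from cron) to look for the tell-tale artefact of steps 3–4 above: PHP files sitting inside the uploads directory. The /var/www/html path is a placeholder.

  # Apart from empty index.php placeholders, PHP files have no business living under wp-content/uploads;
  # anything found here is a strong indicator of a dropped web shell.
  find /var/www/html/wp-content/uploads -type f -name '*.php' ! -name 'index.php' -print

  # After a suspected incident, also list files modified in the last 24 hours anywhere in the docroot
  find /var/www/html -type f -mtime -1 -print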

Keeping Track of WordPress Vulnerabilities and Rogue Plugins/Themes

Here are the most trusted sources:

1. WPScan Vulnerability Database

  • The most widely used public WordPress vulnerability database.
  • Includes vulnerabilities in:
    • Core WordPress releases
    • Plugins
    • Themes
  • Each entry lists: CVE ID (if assigned), severity (CVSS score), description, affected versions, and fix status.
  • Actively updated and also powers many security scanners.

2. Wordfence Threat Intelligence

  • Commercial security vendor for WordPress.
  • Provides blog advisories, real-time threat feeds, and detailed write-ups on actively exploited vulnerabilities.
  • Often one of the first to spot mass exploitation campaigns (e.g., the Kaswara WPBakery Addons campaign).

3. Patchstack Database

  • Another excellent vulnerability tracker, focusing on plugin and theme flaws.
  • Each entry has severity ratings, exploitability details, and patch status.
  • Patchstack also highlights “unpatched” plugins/themes that should be removed immediately — a big help in avoiding seriously bad plugins.

4. Sucuri Security Blog

  • Regularly posts about active exploits and malware campaigns.
  • Focuses more on real-world compromises than raw vulnerability data.
  • Great for keeping track of what’s being exploited in the wild (not just theoretical risks).

5. NVD / CVE Details

  • NVD is the U.S. government's (NIST) vulnerability database; CVE Details is a third-party site that aggregates the same CVE data.
  • Useful for finding the official CVSS scores and technical details.
  • Less WordPress-specific, but authoritative for severity and tracking.

“Bad Plugin” Watchlists

  • Wordfence & Patchstack both maintain advisories about plugins that are either:
    • Abandoned / removed from the WordPress.org repository
    • Unpatched with known exploits
    • Actively abused in malware campaigns

Examples of plugins often blacklisted:

  • Kaswara Modern WPBakery Addons (CVE-2021-24284) → removed, never patched
  • Slider Revolution (2014 LFI) → bundled with themes, long exploited
  • WP File Manager (2020 zero-day RCE) → heavily targeted

Recommendation for site owners:

  • Subscribe to WPScan or Patchstack vulnerability alerts (free tiers available).
  • Use Wordfence Security plugin or similar, which automatically blocks exploit attempts.
  • Regularly audit installed plugins/themes → remove anything unmaintained or flagged as unpatched.
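
As a hedged example of the auditing advice above, the WPScan database can be queried from the command line, and wp-cli can list what is actually installed. This assumes a free-tier WPScan API token exported as WPSCAN_API_TOKEN; https://example.com and /var/www/html are placeholders.

  # Remote scan against the WPScan vulnerability database (vp = vulnerable plugins, vt = vulnerable themes)
  wpscan --url https://example.com --api-token "$WPSCAN_API_TOKEN" --enumerate vp,vt

  # Local audit: anything with a pending update, and anything inactive that could simply be deleted
  wp plugin list --path=/var/www/html --update=available
  wp plugin list --path=/var/www/html --status=inactive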

Is Kubernetes administration today like Unix Systems Administration in the 1990s?

Has Kubernetes already become unnecessarily complex for enterprise IT?

1. Real-World Data & Surveys: Complexity Is Rising
  • Spectro Cloud’s 2023 Report shows 75% of Kubernetes practitioners encounter issues running clusters—a significant jump from 66% in 2022. Challenges intensify at scale—especially around provisioning, security, monitoring, and multi-cluster setups. [InfoWorld, Okoone]
  • DEVOPSdigest (2025) highlights that enterprises often run multiple distributions (EKS, GKE, OpenShift), leading to tooling sprawl, operational inconsistency, and fragmented networking/security stacks, which strain platform teams significantly. [devopsdigest.com]
2. Admissions & Simplified Offerings from Google
  • Google itself acknowledged the persistent complexity of Kubernetes, even after years of improvements—prompting them to offer GKE Autopilot, a managed mode that abstracts away much configuration overhead. [The Register] [Wikipedia]
3. Structural Challenges & Knowledge Gaps
  • Harpoon’s breakdown of root causes points to:
    • Kubernetes’ intricate core architecture, multiple components, and high customizability.
    • Steep learning curve—you need command over containers, infra, networking, storage, CI/CD, automation.
    • Troubleshooting overhead—distributed nature complicates debugging. [harpoon.io]
  • Baytech Consulting (2025) identifies a scaling paradox: what works in pilot can fall apart in enterprise rollouts as complexity, cost, drift, and security fragility compound with growth. [Baytech Consulting]
4. Financial Burden
  • AltexSoft reports Kubernetes salaries and licensing/infra costs can be high, with unpredictable cloud bills. Around 68% of firms report rising Kubernetes costs, mainly due to over-provisioning and scaling without proper observability. Organizations can waste ~32% of cloud spend. [AltexSoft]
5. Community Voices (Reddit)

Community commentary reflects real frustration:

“One does not simply run a K8s cluster… You need additional observability, cert management, DNS, authentication…” (Reddit)

“If you think you need Kubernetes, you don’t… Unless you really know what you’re doing.” (Reddit)

“K8s is f… complex even with EKS. Networking is insane.” (Reddit)

“Kubernetes stacks are among the most complex I’ve ever dealt with.” (Reddit)

These quotes show that complexity isn’t just theoretical—it’s a real barrier to adoption and effective operation.

6. Published Research on Failures
  • A recent academic paper, Mutiny!, reveals that even tiny faults (such as single-bit flips in etcd) can cascade into major issues, including cluster-wide failures or under-provisioning. This demonstrates limited fault tolerance unless faults are actively mitigated. [arXiv]
Is Kubernetes Too Complex?

Dimension | Evidence of Complexity
Operational pain | 75% practitioners reporting issues; enterprise multi-cluster/tool divergence
Vendor admission | Google launching Autopilot to reduce Kubernetes complexity
Learning & tooling | Steep curve; cumbersome YAML; dozens of moving parts (networking, storage, autoscale)
Financial burden | Rising cloud costs; over-provisioning; hidden infrastructure waste
Community sentiment | Widespread anecdotes about complexity, overhead, and misapplication
Technical fragility | Experimental research showing failure propagation even from single tiny errors

Powerful with high complexity
  • Kubernetes is undeniably powerful—but that power comes with a steep complexity tax, especially for enterprises scaling across clusters, clouds, and teams.
  • Its flexibility and extensibility, while strengths, can also lead to sprawling architectures and tooling sprawl.
  • Managed services (like GKE Autopilot, EKS, AKS), GitOps, platform engineering teams, and strong governance frameworks are essential to tame this complexity.
  • For many SMBs or smaller projects, simpler alternatives (Nomad, ECS Fargate, Heroku-style platforms) might be more pragmatic unless you truly need Kubernetes’ scale benefits.

Is Systems Administration in the 1990s like managing Kubernetes today?

Systems Administration in the 1990s

In the 1990s, sysadmins typically had to manage on-premises servers, networks, and storage, often with little automation. Key traits:

  • Heterogeneous environments
    • Mix of Solaris, HP-UX, AIX, Windows NT, NetWare, early Linux.
    • Each OS had unique tooling, quirks, and administrative models.
  • Manual provisioning
    • Installing from CDs/tapes, hand-editing configs (/etc/*), patching manually.
    • Network setup via raw config files or proprietary tools.
  • Siloed tooling
    • Monitoring with Nagios, Big Brother, MRTG.
    • Backup with Veritas, Arcserve.
    • Identity with NIS or LDAP — all separate, poorly integrated.
  • High skill & resource requirements
    • A small team of “wizards” needed deep knowledge of Unix internals, networking, SCSI storage, TCP/IP stack quirks, etc.
    • Troubleshooting required understanding the whole stack, often without Google or Stack Overflow.
  • Cultural complexity
    • “Snowflake servers” (no two were alike).
    • Documentation gaps → single points of knowledge in individuals’ heads.
    • Vendor support contracts were essential.

Kubernetes Administration Today

Fast forward ~30 years: the “modern sysadmin” (platform/SRE/K8s admin) faces a similar landscape:

  • Heterogeneous environments
    • Mix of Kubernetes distros (EKS, GKE, AKS, OpenShift, Rancher).
    • Add-ons for storage (Rook, Longhorn, CSI drivers), networking (CNI plugins like Calico, Cilium), security (OPA, Kyverno).
  • Manual YAML/Helm/IaC complexity
    • Instead of hand-editing /etc, we’re hand-crafting Kubernetes YAML, Helm charts, CRDs.
    • Misconfiguration is one of the top causes of outages (akin to mis-edited config files in the 90s); a small pre-flight validation sketch follows after this list.
  • Siloed tooling
    • Metrics → Prometheus/Mimir/Thanos.
    • Logs → Loki/ELK.
    • Traces → Tempo/Jaeger.
    • CI/CD → ArgoCD, Flux, Jenkins.
    • Security → Falco, Kyverno, Gatekeeper.
      Each solves a slice, but integration is nontrivial — like juggling Nagios, Veritas, and LDAP in the 90s.
  • High skill & resource requirements
    • K8s admins must understand containers, networking (CNI, ingress), distributed storage, security, GitOps, cloud APIs.
    • Debugging pods across namespaces, RBAC issues, or etcd failures can feel like debugging kernel panics in the 90s.
  • Cultural complexity
    • Clusters drift if not well-managed.
    • “Pets vs. cattle” mindset is the modern equivalent of avoiding snowflake servers.
    • Knowledge often concentrated in a few “platform engineers.”
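
Picking up the misconfiguration point from the list above, here is a small sketch of pre-flight checks that catch broken YAML before it ever reaches a cluster; deployment.yaml is a placeholder manifest name and kubectl is assumed to be configured against the target cluster.

  # Client-side check: catches structural and schema mistakes without touching the cluster
  kubectl apply --dry-run=client -f deployment.yaml

  # Server-side dry run: full API validation and admission webhooks, but nothing is persisted
  kubectl apply --dry-run=server -f deployment.yaml

  # Show the exact changes the manifest would make to the live cluster
  kubectl diff -f deployment.yaml

None of this removes the complexity, but it is the modern equivalent of proofreading /etc before rebooting a 90s box.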

Parallels

1990s Sysadmin World | Kubernetes Admin Today
Mix of Unix flavors & Windows NT | Mix of Kubernetes distros (EKS, GKE, AKS, OpenShift)
Manual installs/patching | Manual YAML, Helm, IaC configs
Siloed tools (Nagios, Arcserve, LDAP) | Siloed observability & security stacks
Snowflake servers | Cluster drift, misconfigured CRDs
Need “wizards” with deep system skills | Need platform engineers/SREs with broad skills
Vendor support critical | Reliance on managed services (GKE Autopilot, EKS)
Troubleshooting = art + experience | Debugging multi-layered microservices stacks

Key Difference

  • Scale & abstraction:
    • 1990s sysadmins often managed tens of servers.
    • Today’s K8s admins manage hundreds/thousands of pods, spread across multi-cluster, multi-cloud, multi-region environments.
  • Automation gap:
    • In the 90s, lack of automation caused complexity.
    • With Kubernetes, abundance of automation frameworks causes choice overload and integration complexity.

Kubernetes today feels like Unix sysadmin in the 90s:

  • A powerful but fragmented ecosystem, with high cognitive overhead.
  • Requires specialists (“K8s wizards”) to keep clusters stable.
  • Enterprise IT is repeating cycles: adopting bleeding-edge, complex infrastructure that eventually stabilizes into commoditized, simplified platforms (just like Linux standardization simplified 2000s IT).

We’re arguably in the “1995 equivalent” of Kubernetes — powerful but messy. In 5–10 years, we might see a “Linux-like consolidation” or abstraction layer that hides most of today’s complexity.

Timeline Analogy: 1990s Sysadmin vs. Kubernetes Admin Today

1990s: Fragmented Unix + Windows NT Era
  • Enterprises ran Solaris, HP-UX, AIX, SCO Unix, Novell NetWare, Windows NT, often side by side.
  • Each had different tooling, package managers, patching mechanisms.
  • Skills weren’t portable — a Solaris admin couldn’t easily manage NetWare.
  • Tooling was siloed (Nagios, Arcserve, MRTG, NIS, LDAP, Veritas).
  • Complexity = every vendor had its own model, and integration was painful.

Analogy to Kubernetes today:
Multiple Kubernetes distros (OpenShift, Rancher, GKE, EKS, AKS) + endless CNIs, CSIs, service meshes, observability stacks. Skills don’t fully transfer across environments.

Early 2000s: Linux Standardization & Automation Emerges
  • Linux (Red Hat, Debian, SUSE) consolidated the Unix ecosystem → standard APIs, packages, and tooling.
  • Automation tools (CFEngine → Puppet/Chef → Ansible) emerged, making configuration repeatable.
  • Virtualization (VMware, Xen) abstracted away hardware, reducing snowflake servers.
  • Enterprises got more portable skillsets and better ROI from staff.

Analogy for Kubernetes:
We’re waiting for similar consolidation in the K8s space — either a dominant “Linux of Kubernetes” (a distro that becomes the de facto enterprise standard) or stronger platform abstractions.

2010s: Cloud + DevOps + Containers
  • AWS, Azure, GCP commoditized infrastructure.
  • DevOps culture + automation pipelines became mainstream.
  • Docker simplified app packaging and delivery.
  • Enterprises shifted from sysadmin “server caretakers” → SRE/DevOps “platform enablers.”

Analogy for Kubernetes:
This was the simplification wave after the complexity of 1990s Unix. Today, K8s is at the “Unix ’95” stage — the complexity is still front and center. The simplification wave (through managed services and PaaS-like abstractions) hasn’t fully happened yet.

2020s: The Future of Kubernetes (Projection)
  • Managed services (GKE Autopilot, EKS Fargate, AKS) are becoming the equivalent of VMware in the 2000s — hiding underlying infrastructure complexity.
  • PaaS-like abstractions (Heroku-style experience on top of K8s, e.g., Render, Fly.io, Knative, Crossplane) will likely commoditize Kubernetes itself.
  • Platform engineering teams will provide “golden paths” to developers, hiding YAML and cluster admin pain.
  • Just like Linux became invisible (we use it daily without thinking), Kubernetes may fade into the substrate — invisible to developers, only visible to infra specialists.

History Repeats

Era | Sysadmin World | Kubernetes World | Parallel
1990s | Fragmented Unix, NT, manual ops | Fragmented K8s distros, YAML/Helm, manual configs | High complexity, vendor sprawl
2000s | Linux standardizes, automation matures | (Future) K8s consolidates or is abstracted away | Reduced friction, portable skills
2010s | Cloud, DevOps, containers simplify infra | (Future) PaaS & managed services simplify K8s | Devs focus on apps, not infra
2020s | Linux invisible (everywhere, but hidden) | Kubernetes invisible (substrate under platforms) | Only infra teams touch it directly

Summary

Kubernetes today is like Unix in the mid-1990s: powerful but fragmented and resource-intensive. Over the next decade, we’ll likely see Linux-like consolidation (fewer distros, stronger defaults) and/or VMware-like abstraction (managed offerings, PaaS layers) that make Kubernetes complexity mostly invisible to developers.

The Nature of IT RIFs (reduction in force aka layoffs aka mass redundancies)

If you work for any IT company and see Slack users suddenly disappearing, then your company is performing a RIF. Out of the blue, or with very short notice, a colleague or two’s Slack account is closed and you are left wondering why.

This trend has been around for a while now and has spawned sites such as https://layoffs.fyi/ documenting the unprecedented number of layoffs in the IT industry. Other sites document layoffs in other industries (e.g. UK education and the civil service) too, and they paint a gloomy picture of the state of unemployment and an extremely tough jobs market.

My current employer is making a round of RIFs at this very moment! Hence this article about RIFs. I was affected by a RIF a year ago at a different corporation, so I am putting into motion the things to do that I learnt from last time. I hope this will help anyone affected this time…

Trust No One

When your manager or director tells you that there is a round of RIFs and that “we are not affected” or “we are safe”, do not trust them. When this announcement is made, interpret it as “you need to make plans and execute them ASAP in preparation for being affected”. Until the RIF round is officially over, consider this “unknown” period to be your “at-risk” period.

Under UK employment rules (the minimum that corporations will follow), your employer has to give you an “at-risk” period (different from the one above!). When they give you this notice, you are able to stop work and look for other roles, internally OR externally.

When my previous employer gave me this “at-risk” period, they had already frozen hiring and no new “req”s were being granted, making it impossible to get an internal role even if you wanted to stay. In this situation you are effectively certain of being made redundant and will have to leave their employment…

You need to put things into place if you are going to survive the redundancy.

The Nature of IT RIFs

Two data points are not enough to draw definite conclusions from, but it seems to me that when a corporation announces a record profit-making quarter, they follow it up with a record spend, which forces them to make a RIF. This is the way…

The nature of IT recruitment and redundancies seems to have settled into a boom-and-bust pattern. Corporations overspend and over-recruit to achieve a commercial objective or goal (usually adding as much value to the corporation as possible) and, when this funding period is over, they then perform a RIF to be able to start the next project. This is a vicious cycle for all employees, not just those who are let go.

When a RIF occurs, there’s little rhyme or reason as to why a specific individual is affected. The main directive or goal of a RIF is to reduce costs so the corporation can make up for the huge spend or fund the new project – nowadays that is certain to be AI. The lowest-hanging fruit will be picked first, then maybe the projects that are costing the most but have delivered little, and then just randomly in areas that (to the bosses) are not important. Of course, to the individual, we are all important, so we ask the question: why me? Why my team? Why my organisation?

There is no reason – even if your manager or director gives you a reason – this will not be it!

Accept and Move On

Successful people turn disadvantages to advantages – they accept the situation, deal with it fast – learn from the situation – and move on! They do not “sulk” or “get down” or “get stuck” – they learn, try again, try something different until they succeed. This is what anyone affected by RIF must do. When I say “accept and move on” yes, I mean accept the severance package and start on your CV/Resume and start job hunting… or if you are due a good package, buy that Porsche you’ve always wanted and drive it… into a traffic jam…

One form of help that might be available to someone who is “at-risk” is a free consultation with a career coach. I must admit, I was very skeptical about this free facility at first, but once I ventured out to look at the job market, I found myself turning around and being open to help, tips, advice and motivation of any kind to get a head start.

The job market has changed a lot and has also gotten tougher and tougher with each round of redundancies. You need all the advice and coaching you can get. The successful things you did to attain the job/role that you’ve just been made redundant from will NOT work this time! You need new job-hunting skills and tools, and you need to be adaptable to the current state of the market.

Those who have not hunted for new roles or moved jobs in the past 5 or 10 years will have to learn and act fast! I see that even talent advisors and experienced recruiters struggle to find new roles for themselves, let alone for others…

What To Do?

This is my list – it needs to be adapted for your personal needs/situation – it is just to give you something to start with:

  1. Update/rewrite your CV/Resume
    • Your CV/resume will not be current, so update it
    • Your CV/resume will not be in a modern format/layout
    • Your CV/resume will need to be tailored to the role
    • Your CV/resume will need to be in a format that makes auto-form-filling easier
    • Your CV/resume will need to be in a format for AI to process and not reject you without passing it on to a human!
  2. Create a generic cover letter
    • Your roles will be very similar in requirements, so a generic letter will save time
    • Leave areas for specifics, but don’t forget to change those specifics
  3. Sign up to LinkedIn and other job boards
    • These sites will have job hunting tips and advice so take advantage
    • These sites allow you to network so take advantage
    • These sites might have training courses or practice facilities
  4. Reach out to contacts and ex-colleagues
    • There may be suitable vacancies with their employers
    • Ask them to spread the message that you are looking for a new role
  5. Create a spreadsheet of job applications
    • You will soon lose track of which company, recruiter, role, etc. you’ve applied to and why – keep a spreadsheet of all the relevant info
  6. Create a routine of job searching/application and rest
    • You will need to be disciplined, so a routine that includes rest breaks to relieve the stress will keep you going until you are successful
  7. Practise interviews, conversations, coding tests, etc.
    • You will need to be sharp and effective in your interviews; practise and deploy all the tricks and methods for effective interviewing, e.g. the S.T.A.R. method and the like.
    • Practise in a Zoom session and record yourself, then play it back to evaluate how you perform: what you should and should not do, and what you should and should not say

Good Luck!

Do not give up! And do not stop once you’ve achieved a new role! Work as though you are under threat of being made redundant – there is no such thing as a safe job any more – always actively develop and progress to the next role…

I am writing this while I am actually in my “at-risk” period… But as I started this process well before the RIF news, I think I am ahead in the job queue (although not necessarily near the very start!)

Popularity of WordPress Over the Decade

I asked ChatGPT: Give me a graph of wordpress popularity over the past decade compared to other CMS such as Joomla and Drupal

Key Insights

WordPress

  • 2025: Powers about 43.5% of all websites, holding 61.2% of the CMS market share [aquipress.com]
  • Growth: Expanded from around 21% of all websites in 2014 to a peak CMS market share of 65.2% in early 2022, which has slightly declined to ~61.3–61.4% by spring 2025 [themeisle.com]
  • Stability: Experienced consistent growth across the last decade, though growth has decelerated recently [themeisle.com].

Joomla

  • 2025: Commands around 1.5% of all websites, representing 2.1% of the CMS market [w3tech.com]
  • Decline: Dropped from roughly 9% in 2014 to just over 2% in 2025; this is a decline of approximately 16% since 2024 [searchenginejournal.com]
  • Competitive Shift: Previously ranked 2nd or 3rd, it’s now overtaken by hosted platforms like Wix and Shopify [themeisle.com].

Drupal

  • 2025: Maintains around 0.8% of all websites, or 1.2% of the CMS market share [searchjournal.com]
  • Long-Term Decline: By 2025, its share fell from around 3.5% in 2014 to ~1.2%, showing a 25% drop since 2024 [tooltester.com]

🔍 Trends at a Glance

Platform | Share of All Websites (2025) | CMS Market Share (2025) | Trend Over Past Decade
WordPress | ~43.5% | ~61% | Steady growth until early 2022, slight decline afterward
Joomla | ~1.5% | ~2.1% | Continuous decline
Drupal | ~0.8% | ~1.2% | Steady decline