Two completely different systems. Two completely different workloads. Two completely different architectures. And yet, somehow, nearly identical numbers on the Kill A Watt meter.

Ubuntu Workstation: 61W (i7-8700K / 1080 Ti / 2x NVMe)
vs.
Unraid NAS: 68W (Pentium 8505 / 4x HDD / 10GbE)

I stared at these numbers for a while. One system has a 95W TDP processor overclocked to 5GHz and a 250W graphics card. The other has a 15W mobile chip and no GPU at all. How do they end up within 7 watts of each other at idle?

The answer is that every system has a tax. The workstation pays the silicon tax - old, leaky transistors and an overbuilt GPU doing absolutely nothing. The NAS pays the spinning rust tax - Newtonian physics demanding tribute from four platters of rotating metal. Different debts, same collector.


Ubuntu - The Silicon Tax (61W)

The workstation is a 2017-era gaming build repurposed for development. It draws 61W at the wall doing nothing but displaying a terminal cursor. Here's where every watt goes.

CPU: 18W of leaky silicon

The i7-8700K is a 14nm part from 2017. At idle, it should be sipping power. Instead it pulls 18W, and three physics problems explain why:

1. Transistor leakage. At 14nm, transistors never switch fully off: electrons tunnel through the thin gate dielectric and leak under the channel even when a transistor is "off." Every one of the 3 billion transistors leaks a tiny current. Multiply tiny by 3 billion and you get real watts. Newer nodes fight this with taller fins, gate-all-around channels, and refined gate stacks - improvements a 2017-era 14nm design never received.

2. Overclock voltage floor. This chip runs a 5GHz all-core overclock, which sets a minimum voltage around 0.9V. Stock idle would drop to ~0.6V. Power scales with V², so:

$ python3 -c "print(f'Voltage ratio: {0.9/0.6:.2f}x')"
Voltage ratio: 1.50x

$ python3 -c "print(f'Power ratio (V^2): {(0.9/0.6)**2:.2f}x')"
Power ratio (V^2): 2.25x

That's 2.25x the dynamic power from voltage alone, before counting any frequency difference. The overclock doesn't just cost watts under load - it sets a higher floor at idle.

3. Shallow C-states. The overclock prevents the CPU from entering its deepest sleep states (C6/C7). Instead of parking cores at near-zero power, they hover in C1/C3 - partially awake, partially consuming.
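The voltage ratio above tells only half the story, because a locked all-core overclock can also pin the clock. A back-of-envelope sketch of the combined V²·f scaling - the idle voltages and clocks here are illustrative assumptions, not measurements:

```python
# Dynamic power scales roughly as P ∝ V² · f.
# Assumed idle operating points (not measured):
#   locked 5GHz overclock: ~0.9V, clock pinned at 5.0GHz
#   stock idle:            ~0.6V, ~0.8GHz in the lowest P-state
oc_v, oc_f = 0.9, 5.0e9
stock_v, stock_f = 0.6, 0.8e9

ratio = (oc_v / stock_v) ** 2 * (oc_f / stock_f)
print(f"Combined dynamic power ratio: {ratio:.1f}x")
```

Even if dynamic power is only a couple of watts of the 18W idle figure, a better-than-10x multiplier on it compounds with the leakage and C-state penalties.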

GPU: 8W of dead weight

The GTX 1080 Ti is a 471mm² die on 16nm - one of the largest consumer GPUs ever built. Even in P8 (deepest idle state), it can't escape physics:

1080 Ti Idle Power Breakdown

Component                            | Draw
-------------------------------------|------
Die leakage (471mm² @ 16nm)          | 3-4W
GDDR5X refresh (11GB, always active) | 1-2W
PCIe link maintenance                | ~1W
VRM quiescent draw                   | ~1W
Total idle                           | ~8W

Eight watts for a component producing zero useful output. Pure dead weight. The card exists in the system for the rare occasions I need CUDA, but 99% of the time it's a space heater with a fan attached.

PSU: the efficiency valley

Here's a fun one. The system has a 1000W power supply drawing 61W. That's roughly 5% load. And 80 Plus efficiency curves have a dirty secret: they crater below 10% load.

The PSU Problem

80 Plus certification only tests at 20%, 50%, and 100% load (Titanium adds a 10% point) - below that, anything goes. At 5% load, even an 80 Plus Gold PSU typically manages only 80-82% efficiency, so the power supply wastes 12-15W as heat just converting AC to DC.

A right-sized 300W PSU at the same 61W draw would operate at ~20% load and ~87% efficiency, saving roughly 3W. Not world-changing, but it's free watts left on the table by over-provisioning.
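The arithmetic behind those figures is worth making explicit. Both efficiency numbers are assumptions read off typical 80 Plus Gold curves, not measurements of this unit:

```python
# Estimate conversion loss from the 61W wall figure.
wall = 61.0
eff_oversized = 0.82    # assumed efficiency of a 1000W Gold unit at ~5% load
eff_rightsized = 0.87   # assumed efficiency of a 300W unit at ~20% load

dc_load = wall * eff_oversized           # watts actually delivered to components
loss_now = wall - dc_load                # dissipated inside the oversized PSU
wall_new = dc_load / eff_rightsized      # wall draw with a right-sized PSU

print(f"DC load: {dc_load:.1f}W, PSU loss today: {loss_now:.1f}W")
print(f"Right-sized wall draw: {wall_new:.1f}W (saves {wall - wall_new:.1f}W)")
```

Same DC load, different converter, a few watts back.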

The efficient parts

Not everything is wasteful. The modern components are remarkably efficient:

Ubuntu Workstation - Full Breakdown

Component                               | Draw
----------------------------------------|-------
CPU (i7-8700K @ 5GHz, idle)             | 18W
PSU loss (1000W @ 5% load)              | 12-15W
Motherboard (VRMs, chipset, USB, audio) | ~10W
GPU (1080 Ti, P8 idle)                  | ~8W
2x NVMe SSDs                            | ~5W
32GB DDR4 (2 DIMMs)                     | ~3W
iGPU (Intel UHD 630, driving display)   | ~0W*
Total at the wall                       | ~61W

*iGPU power is included in the CPU figure - it adds roughly 1-2W when active but is essentially free relative to the discrete GPU alternative.
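As a sanity check, summing the table (midpoints for the ranged estimates) should land near the measured wall figure:

```python
# Component estimates from the breakdown table, midpoints for ranges.
components = {
    "CPU (idle)": 18.0,
    "PSU loss": 13.5,        # midpoint of 12-15W
    "Motherboard": 10.0,
    "GPU (P8 idle)": 8.0,
    "2x NVMe SSDs": 5.0,
    "32GB DDR4": 3.0,
}
total = sum(components.values())
print(f"Estimated total: {total:.1f}W vs 61W measured")
```

The few-watt gap is estimate slack: case fans, the iGPU, and rounding in every row.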


Unraid - The Spinning Rust Tax (68W)

The NAS is a UGREEN DXP4800+ running Unraid. A Pentium Gold 8505 with integrated graphics, no discrete GPU, 38GB of RAM, four enterprise HDDs, and a 10GbE NIC. It draws 68W at idle. But the power profile is completely inverted.

CPU: 5W of modern efficiency

The Pentium Gold 8505 is everything the 8700K isn't. Built on Intel 7 (10nm ESF), it's a 2023 hybrid architecture chip with one P-core and four E-cores. At idle:

  • Voltage drops to ~0.6V at 400MHz
  • Deep C10 sleep states park cores at near-zero power
  • E-cores handle background tasks without waking the P-core
  • 10nm gate oxide dramatically reduces transistor leakage

This is what a modern idle CPU looks like: 5W. The 8700K pays 3.6x more to do the same nothing.

GPU: 0W (the right answer)

No discrete GPU. The Intel UHD iGPU handles the rare video output needed for initial setup. Its power draw is folded into the CPU's 5W figure. This is the correct answer for a headless server.

HDDs: 25-28W of Newtonian physics

And here's the tax. Four enterprise drives spinning at 7200 RPM, continuously, whether anyone is reading data or not.

This isn't semiconductor inefficiency. This is mechanical work. An Exos 24TB drive has roughly 10 platters - actual discs of aluminum or glass with real mass, spinning 120 times per second. The bearings that hold the spindle generate friction, air drag resists the platters, and both convert kinetic energy to heat. Heat is waste. This is thermodynamics, not transistor physics.
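For a sense of scale, here's the rotational arithmetic. Platter mass and radius are rough assumptions (a 3.5" platter is about 95mm across; the mass is a guess), so treat this as order-of-magnitude only:

```python
import math

rpm = 7200
platters = 10
mass_kg = 0.020     # per-platter mass (assumption)
radius_m = 0.0475   # platter radius (assumption)

omega = rpm / 60 * 2 * math.pi           # angular velocity in rad/s
inertia = 0.5 * mass_kg * radius_m ** 2  # solid-disc moment of inertia
energy = 0.5 * inertia * omega ** 2 * platters

print(f"{rpm / 60:.0f} revolutions per second ({omega:.0f} rad/s)")
print(f"Stored rotational energy: ~{energy:.0f} J across {platters} platters")
```

That stored energy is paid once, at spin-up. The steady 6-7W per drive goes entirely to bearing friction and air drag, hour after hour.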

  • Parity - Exos 24TB (ST24000NM000C, ~10 platters) 7.2W
  • Disk 3 - Exos 22TB (ST22000NM000C, ~9 platters) 6.5W
  • Disk 1 - OOS 16TB (OOS16000G) 5.8W
  • Disk 2 - Barracuda 16TB (ST16000NM001G) 5.5W

Plus about 1W of electronics per drive for the controller, cache DRAM, and head actuator standby. Total: 25-28W just to keep platters spinning.

Why Not Spin Down?

Unraid can spin down idle drives, and I do use this for the array disks. But spindown has costs: 10-15 second latency on access, increased wear from thermal cycling, and the spin-up surge itself draws 25-30W per drive for a few seconds. For a NAS with 37 Docker containers and periodic background tasks, drives spin up frequently enough that the savings are modest and the wear trade-off is real.
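A rough sketch of that trade-off - every figure here is an assumption for illustration, not a measurement of this array:

```python
# Compare spin-up surge energy against spindown savings for one drive.
surge_w, surge_s = 30.0, 10.0   # assumed surge draw and duration
save_w = 6.0                    # assumed savings while spun down
spinups_per_day = 12            # assumption: background jobs wake the drive
hours_down_per_day = 8          # assumption

surge_wh = spinups_per_day * surge_w * surge_s / 3600
saved_wh = save_w * hours_down_per_day
print(f"Daily surge cost: {surge_wh:.1f} Wh")
print(f"Daily spindown savings: {saved_wh:.0f} Wh")
```

The surge turns out to be energetically trivial. The real costs of spindown are the latency and the thermal-cycling wear, and the savings hinge entirely on how many hours the drives actually stay down.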

10GbE NIC: 5W of always-on signaling

The Aquantia AQC113 10GbE transceiver draws about 5W at idle. Even with no traffic, it maintains link synchronization - continuously sending and receiving idle symbols to keep the 10Gbps connection alive. A 1GbE NIC would draw ~1W. The 10x bandwidth costs 5x the power, even when unused.

Unraid NAS - Full Breakdown

Component                        | Draw
---------------------------------|--------
4x Enterprise HDDs (7200 RPM)    | 25-28W
PSU + power delivery losses      | ~10W
Motherboard (chipset, USB, BMC)  | ~8W
CPU (Pentium Gold 8505, idle)    | ~5W
10GbE NIC (AQC113)               | ~5W
38GB DDR4 (2 DIMMs)              | ~4W
2x NVMe SSDs (cache + minicache) | ~5W
iGPU (Intel UHD, headless)       | ~0W
Total at the wall                | ~68W

Why They Converge

The interesting question isn't "why are they close" - it's "what would happen if you removed each system's tax?"

Strip the workstation's silicon tax: remove the discrete GPU (-8W), run the CPU at stock voltage (-8W), right-size the PSU (-3W). You'd land around ~42W.

Strip the NAS's spinning rust tax: replace the four HDDs with SSDs (roughly -25W net, after the SSDs' own idle draw), swap the 10GbE for 2.5GbE (-3W). You'd land around ~40W.

Power Composition

Ubuntu: base ~38W + tax ~23W = 61W
Unraid: base ~40W + tax ~28W = 68W

Both systems converge on a ~35-40W baseline. That's the cost of having a computer turned on in 2026. Call it the existence tax:

The Existence Tax - What Every Computer Pays

Component                                   | Draw
--------------------------------------------|--------
Motherboard (VRMs, chipset, clock gen, USB) | 8-10W
PSU conversion overhead                     | 8-12W
CPU (modern node, deep C-states)            | 3-5W
RAM (2-4 DIMMs, refresh)                    | 3-4W
NVMe storage (1-2 drives)                   | 3-5W
NIC (1-10 GbE)                              | 1-5W
Baseline at idle                            | ~35-40W

You can't optimize below this floor without eliminating components entirely. Every motherboard needs a voltage regulator. Every DRAM chip needs periodic refresh. Every NVMe controller has a minimum operating power. The PSU always loses some energy to heat. This is the cost of being a computer that exists and is plugged in.

It's like two cars both getting 25 MPG - one is a V8 with great aerodynamics, the other is a 4-cylinder towing a trailer. Different problems, same gas station receipt.

The Combined Bill

Both systems run 24/7. The workstation doubles as an Ubuntu server. Together:

$ python3 -c "
ubuntu, unraid = 61, 68
total = ubuntu + unraid
monthly_kwh = total * 24 * 30 / 1000
annual_kwh = total * 24 * 365 / 1000
cost = annual_kwh * 0.28 # CA electricity
print(f'Combined idle: {total}W')
print(f'Monthly: {monthly_kwh:.0f} kWh')
print(f'Annual: {annual_kwh:.0f} kWh')
print(f'Annual cost (@$0.28/kWh): ${cost:.0f}')
"
Combined idle: 129W
Monthly: 93 kWh
Annual: 1130 kWh
Annual cost (@$0.28/kWh): $316

With the third node (a 14900K build that runs intermittently) and network gear, the homelab averages around ~157W always-on. That's about $385/year in California electricity just to keep the lights blinking.

Is it worth optimizing? Maybe. Undervolting the 8700K and pulling the 1080 Ti would save ~16W, or about $39/year. Replacing the HDDs with SSDs would save ~20W, or about $49/year, but the cost of 70TB of SSD storage is... not $49. Every optimization has a payback period, and most of them are measured in years.
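The payback math generalizes into a small helper. The electricity rate comes from above; the upgrade cost is a hypothetical placeholder, not a real quote:

```python
RATE = 0.28  # $/kWh, the California rate used throughout

def annual_cost(watts: float) -> float:
    """Annual electricity cost of a constant 24/7 load."""
    return watts * 24 * 365 / 1000 * RATE

savings = annual_cost(16)   # undervolt + pull the 1080 Ti
cost = 150.0                # hypothetical upgrade cost
print(f"16W saved: ${savings:.0f}/year")
print(f"Payback on a ${cost:.0f} change: {cost / savings:.1f} years")
```

Run any proposed optimization through this and most of them come back measured in years, not months.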

The real lesson is architectural. When I eventually build the next workstation, the checklist is clear: modern process node (lower leakage), no discrete GPU unless needed daily, right-sized PSU, and SSDs wherever possible. Not because any single optimization is dramatic, but because they compound. A well-architected modern system can idle at 20-25W. Getting there from 61W isn't one fix - it's removing every tax at once.