Linux vmstat Command Deep Dive: From Kernel Statistics to Performance Bottleneck Diagnosis

Published: May 11, 2026 10:08

When tuning Linux system performance, you’ve likely used top or htop. But when you need to dig into CPU scheduling, memory paging, and I/O wait, vmstat is the real powerhouse. Dating back to early BSD Unix, this tool remains the go-to choice for sysadmins diagnosing performance bottlenecks.

The Six Dimensions of vmstat Output

Running vmstat directly gives you a table:

procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs  us sy id wa st
 2  0      0 123456  12345 234567    0    0    12    15  120  350   5  2 92  1  0

These six groups cover processes, memory, swap, block I/O, system events (interrupts and context switches), and CPU time distribution. Each value is a real-time statistic read from the /proc filesystem (chiefly /proc/stat, /proc/meminfo, and /proc/vmstat).
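You can peek at the raw sources yourself. A minimal sketch, assuming a standard procfs layout (see procfs(5)); the counters shown are among those vmstat aggregates:

```shell
# procs_running and procs_blocked feed the r and b columns
grep -E '^procs_(running|blocked)' /proc/stat

# memory figures come largely from /proc/meminfo
grep -E '^(MemFree|Buffers|Cached):' /proc/meminfo
```

This is also a handy sanity check when vmstat's numbers look suspicious: compare them against the raw counters directly.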

Process States: r and b Columns

r is the run queue length: the number of processes running or waiting for a CPU. If this consistently exceeds your CPU core count, CPU is your bottleneck.

b is the number of processes blocked waiting for I/O to complete (uninterruptible sleep). A b value that stays above zero and keeps growing usually indicates disk I/O blocking.

// Simplified sketch of how the kernel aggregates the per-CPU counters
// behind procs_running and procs_blocked in /proc/stat, which vmstat
// turns into the r and b columns (cf. kernel/sched/core.c)
for_each_possible_cpu(cpu) {
    struct rq *rq = cpu_rq(cpu);
    running += rq->nr_running;               // feeds the r column
    blocked += atomic_read(&rq->nr_iowait);  // feeds the b column
}
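To see which processes are currently stuck in uninterruptible sleep (the D state that typically shows up behind a rising b), a quick ps filter works; the state column is standard procps output:

```shell
# List processes in uninterruptible sleep (state D), the usual
# suspects when the b column stays above zero
ps -eo state,pid,comm | awk '$1 ~ /^D/ {print}'
```

If the same process names keep appearing here, that is where to focus your I/O investigation.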

Memory and Swap: Key Signals from si and so

si (memory swapped in from disk) and so (memory swapped out to disk), both reported per second, are the most critical memory metrics here. If they are consistently non-zero, the system is actively swapping, which points to a memory shortage.

# Refresh every second to observe swap activity
vmstat 1

If you see si/so consistently > 0 with low free memory, consider:

  • Adding physical memory
  • Adjusting vm.swappiness (reduce swap tendency)
  • Checking for memory leaks
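The swappiness tweak from the list above can be checked and applied directly; a minimal sketch, assuming a standard procfs layout (the value 10 is illustrative, not a universal recommendation):

```shell
# Show the current swap tendency (default is 60; lower values make the
# kernel prefer dropping page cache over swapping out process memory)
cat /proc/sys/vm/swappiness

# To lower it (requires root; persist in /etc/sysctl.d/ to survive reboot):
#   sysctl vm.swappiness=10
```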

CPU Time Distribution: us/sy/id/wa/st

These five columns together account for 100% of CPU time:

  • us: User-space CPU, actual application usage
  • sy: Kernel-space CPU, system call overhead
  • id: Idle, higher is better
  • wa: I/O wait, direct signal of disk bottleneck
  • st: Time stolen by virtualization (common in cloud servers)

# When wa is too high
# 1. Check I/O details
iostat -x 1

# 2. Find high I/O processes
iotop -o

# 3. Check for deleted but open files
lsof | grep deleted

Sampling Interval and the First Sample Trap

vmstat 2 5 samples every 2 seconds, 5 times in total. But the first sample reports averages since boot, which are of little diagnostic value.

# Correct usage: drop the first sample (line 3 of the output; lines 1-2 are headers)
vmstat 1 3 | sed '3d'

# Or continuous observation
watch -n 1 'vmstat 1 2 | tail -1'

This is a detail many overlook: the first line might show 99% idle CPU while the system is actually under heavy load right now.

Advanced Options: -s, -d, -m

vmstat -s shows detailed memory statistics, more comprehensive than free:

$ vmstat -s
      8167832 K total memory
      1234567 K used memory
      2345678 K active memory
      1234567 K inactive memory
       456789 K free memory
       ...

vmstat -d displays disk statistics—reads, writes, and sectors per device. Useful for disk performance troubleshooting.
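Under the hood, vmstat -d is built from /proc/diskstats. A rough awk sketch of the same data (field positions per the kernel's iostats documentation; assumes device names without spaces):

```shell
# Field 3 is the device name, field 4 completed reads, field 8 completed writes
awk '{printf "%-10s reads=%s writes=%s\n", $3, $4, $8}' /proc/diskstats
```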

vmstat -m shows slab allocator statistics (reading /proc/slabinfo usually requires root), great for diagnosing kernel memory leaks:

# Sort slab caches by total object count (column 3)
sudo vmstat -m | sort -k3 -n -r | head -10

Real-World Case: CPU or I/O Bottleneck?

Imagine your production service slows down, and vmstat 1 shows:

r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs  us sy id wa st
15 0      0 123456  12345 234567    0    0    15    20  1500 8000  80 15  5  0  0

Analysis:

  • r=15, long run queue, CPU is busy
  • us=80%, high user-space usage
  • wa=0, no I/O wait

Conclusion: CPU compute-bound, need to optimize code or scale up.

Another scenario:

r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs  us sy id wa st
2  8      0 123456  12345 234567    0    0   500   300   200  500   5  3 20 72  0

Analysis:

  • b=8, many processes blocked on I/O
  • wa=72%, CPU mostly waiting for disk
  • bi/bo are high, active disk read/write

Conclusion: Disk I/O bottleneck, consider SSD upgrade or query optimization.

vmstat with Other Tools

vmstat provides a system-level overview, but pinpointing specific issues requires companion tools:

# CPU bottleneck → use top to find high-CPU processes
top -H -p <pid>

# I/O bottleneck → use iostat for device details
iostat -x 1

# Memory bottleneck → use free and slabtop
free -m
slabtop

# Network bottleneck → use sar
sar -n DEV 1

Web Implementation: Real-time vmstat Monitoring

Reading the /proc filesystem from Node.js lets you build similar real-time monitoring:

// Read /proc/stat for CPU times
const fs = require('fs');

function readCpuStats() {
  const stat = fs.readFileSync('/proc/stat', 'utf-8');
  const lines = stat.split('\n');
  const cpu = lines[0].split(/\s+/).slice(1).map(Number);
  // user, nice, system, idle, iowait, irq, softirq
  return {
    user: cpu[0] + cpu[1],
    system: cpu[2] + cpu[5] + cpu[6],
    idle: cpu[3],
    iowait: cpu[4]
  };
}

// Calculate usage between samples (prev must be initialized first)
let prev = readCpuStats();

setInterval(() => {
  const curr = readCpuStats();
  const diff = {
    user: curr.user - prev.user,
    system: curr.system - prev.system,
    idle: curr.idle - prev.idle,
    iowait: curr.iowait - prev.iowait
  };
  const total = diff.user + diff.system + diff.idle + diff.iowait;
  console.log(`CPU: user ${(diff.user/total*100).toFixed(1)}%, system ${(diff.system/total*100).toFixed(1)}%, iowait ${(diff.iowait/total*100).toFixed(1)}%`);
  prev = curr;
}, 1000);

Summary

The core value of vmstat is quickly identifying bottleneck type: CPU, memory, or I/O. Remember these key indicators:

  • r > CPU cores → CPU bottleneck
  • si/so > 0 → Memory shortage
  • wa > 20% → I/O bottleneck
  • b > 0 persistent → Check disk or lock contention
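The first rule of thumb above can be wired into a quick shell check. A minimal sketch using the same counter vmstat reads for r (procs_running in /proc/stat); treat the threshold as illustrative:

```shell
# Compare the instantaneous run queue against the core count
r=$(awk '/^procs_running/ {print $2}' /proc/stat)
cores=$(nproc)
if [ "$r" -gt "$cores" ]; then
    echo "run queue $r > $cores cores: possible CPU bottleneck"
else
    echo "run queue $r within $cores cores"
fi
```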

Next time your system slows down, run vmstat 1 first, then dig deeper.

