Solaris Performance and Monitoring Tools

Here’s a detailed overview of key performance and monitoring tools in Solaris, categorized by functionality:


1. General System Monitoring Tools

| Tool | Description | Common Usage |
| --- | --- | --- |
| prstat | Similar to Linux top; displays active processes and CPU/memory usage dynamically. | prstat -a → per-user and per-process statistics |
| mpstat | Reports CPU usage per processor; helps detect CPU bottlenecks. | mpstat 5 → refresh every 5 seconds |
| vmstat | Reports system-wide virtual memory, process, CPU, and paging statistics. | vmstat 5 → monitor performance trends |
| iostat | Displays I/O statistics for devices and partitions; useful for disk performance tuning. | iostat -xn 5 → extended output every 5 seconds |
| sar | Collects, reports, and saves system activity data (CPU, I/O, memory, etc.). | sar -u 5 5 → CPU utilization, 5 samples at 5-second intervals |
| uptime | Quick summary of system load averages and uptime. | uptime → 1-, 5-, and 15-minute load averages |
| dtrace | Dynamic tracing framework for kernel and user-space debugging/performance tuning. | Used for detailed custom tracing scripts |

2. CPU Performance Tools

| Tool | Description |
| --- | --- |
| mpstat | Monitors per-CPU utilization and identifies load imbalance across CPUs. |
| psrinfo | Displays processor status, speed, and online/offline information. |
| psradm | Enables/disables CPUs; useful for performance testing or power management. |
| cpustat | Collects detailed CPU performance counter data (cache misses, cycles, instructions). |
| pbind | Binds a process to a specific CPU (for performance tuning). |
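As a minimal sketch of how these tools are scripted in practice, the state column of psrinfo can be filtered with awk to count usable processors. The psrinfo lines below are hypothetical sample output, embedded so the pipeline runs without a Solaris box; on a live system you would pipe the real command instead.

```shell
# Hypothetical sample of `psrinfo` output (CPU id, state, state-change time).
# On Solaris: psrinfo | awk '$2 == "on-line" { n++ } END { print n + 0 }'
sample='0  on-line   since 01/01/2024 10:00:00
1  on-line   since 01/01/2024 10:00:00
2  off-line  since 01/02/2024 09:30:00'

# Count processors whose state (field 2) is "on-line".
online=$(printf '%s\n' "$sample" | awk '$2 == "on-line" { n++ } END { print n + 0 }')
echo "on-line CPUs: $online"
```

The same pattern (skip decorative columns, test one field, count or print) covers most ad hoc checks built on these utilities.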

3. Memory and Paging Tools

| Tool | Description |
| --- | --- |
| vmstat | Monitors virtual memory activity, paging, and process queues. |
| sar -r | Reports memory and swap space utilization. |
| swap | Shows current swap space usage and configuration. |
| mdb | Modular debugger; can inspect kernel memory data structures (advanced). |

4. Disk and File System Tools

| Tool | Description |
| --- | --- |
| iostat | Monitors disk I/O performance; extended options show service times and queue lengths. |
| df / du | Report file system usage. |
| fsstat | Reports file system-level statistics (reads, writes, operations). |
| metastat | Monitors Solaris Volume Manager disk sets and RAID volumes. |
| zpool / zfs | Monitor and manage ZFS file systems and pools. |
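The extended iostat -xn output lends itself to simple saturation checks: the %b column is the percentage of time the device was busy. A sketch over hypothetical sample output (the 80% threshold is an illustrative rule of thumb, not a fixed standard):

```shell
# Hypothetical sample of `iostat -xn` extended output.
# Columns: r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
sample='    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  120.0   80.0 1540.2  920.7  0.0  4.1    0.0   20.5   0  95 c0t0d0
    2.1    0.5   33.0    8.2  0.0  0.0    0.0    1.2   0   3 c0t1d0'

# Print device names (field 11) whose %b (field 10) is at or above 80.
busy=$(printf '%s\n' "$sample" | awk 'NR > 1 && $10 >= 80 { print $11 }')
echo "saturated: $busy"
```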

5. Network Performance Tools

| Tool | Description |
| --- | --- |
| netstat | Displays network connections, routing tables, interface stats, and protocol usage. |
| snoop | Solaris-native packet sniffer (like tcpdump on Linux). |
| dlstat | Monitors network data-link activity (bandwidth, throughput). |
| kstat | Provides kernel statistics; can display NIC statistics and error counters. |
| ndd | Views and sets network driver and protocol parameters. |
| ping, traceroute | Basic connectivity and latency measurement tools. |
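One quick use of netstat -i is spotting interfaces with input errors (the Ierrs column). A sketch over hypothetical sample output, so the pipeline runs without Solaris hardware:

```shell
# Hypothetical sample of `netstat -i` output.
# Columns: Name Mtu Net/Dest Address Ipkts Ierrs Opkts Oerrs Collis Queue
sample='Name  Mtu  Net/Dest      Address        Ipkts    Ierrs Opkts    Oerrs Collis Queue
lo0   8232 loopback      localhost      1054     0     1054     0     0      0
e1000g0 1500 host        host           8422310  132   7120114  0     0      0'

# Print interface names (field 1) with a non-zero input-error count (field 6).
errs=$(printf '%s\n' "$sample" | awk 'NR > 1 && $6 > 0 { print $1 }')
echo "interfaces with input errors: $errs"
```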

6. Comprehensive and Advanced Monitoring Tools

| Tool | Description |
| --- | --- |
| DTrace | The most powerful Solaris performance framework; enables dynamic tracing of kernel and user-space events for deep system diagnosis. |
| kstat | Kernel statistics access command; allows fine-grained performance metrics. |
| PerfMon / Sun Management Center | GUI-based system performance monitoring and alerting tools (legacy, but still used in enterprise environments). |
| svcs / svcprop | Monitor SMF (Service Management Facility) services: system service status and dependencies. |
| fmadm / fmstat | Monitor the Fault Management Architecture (FMA), which detects and diagnoses hardware faults. |
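As a sketch of how svcs output is typically post-processed, the snippet below pulls out services stuck in the maintenance state. The service lines are hypothetical sample output so the pipeline runs without a Solaris box; on a live system you would pipe svcs -a directly.

```shell
# Hypothetical sample of `svcs` output (STATE, STIME, FMRI).
sample='STATE          STIME    FMRI
online         10:15:22 svc:/system/filesystem/local:default
maintenance    10:15:30 svc:/network/http:apache22
online         10:15:31 svc:/network/ssh:default'

# Print the FMRI (field 3) of every service in the maintenance state.
bad=$(printf '%s\n' "$sample" | awk 'NR > 1 && $1 == "maintenance" { print $3 }')
echo "in maintenance: $bad"
```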

7. Logging and Auditing

| Tool | Description |
| --- | --- |
| /var/adm/messages | General system log for kernel and hardware messages. |
| dmesg | Displays recent kernel messages (boot- and hardware-related). |
| logadm | Manages and rotates system logs. |
| auditreduce / praudit | View and filter audit trail logs. |
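Kernel warnings in /var/adm/messages carry a kern.warning facility tag, which makes them easy to count with grep. A sketch over hypothetical sample log lines (on a live system you would run grep against the real file):

```shell
# Hypothetical sample lines in the style of /var/adm/messages.
sample='Jan 10 03:12:44 host scsi: [ID 107833 kern.warning] WARNING: /pci@0/scsi@2 (mpt0): timeout
Jan 10 03:12:45 host genunix: [ID 936769 kern.info] sd1 is /pci@0/scsi@2/sd@1,0
Jan 10 03:13:02 host scsi: [ID 107833 kern.warning] WARNING: /pci@0/scsi@2 (mpt0): reset'

# Count kernel warnings; on Solaris: grep -c 'kern.warning' /var/adm/messages
warnings=$(printf '%s\n' "$sample" | grep -c 'kern.warning')
echo "kernel warnings: $warnings"
```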

8. Example Performance Diagnosis Workflow

A typical Solaris performance analysis session might involve:

prstat -a                     # Check CPU usage per process/user
mpstat 5                      # See per-CPU utilization
vmstat 5                      # Monitor memory, swap, and run queue
iostat -xn 5                  # Monitor disk throughput and latency
netstat -i                    # Check network interface stats
zpool iostat 5                # If using ZFS, check storage pool I/O
sar -u 5 5                    # Log CPU usage samples

For deeper analysis:

dtrace -n 'syscall:::entry { @[probefunc] = count(); }'  # Trace syscalls
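During a session like the one above it helps to keep each command's output for later comparison. The wrapper below is a sketch; the capture function name and the perf-session.log file are illustrative conveniences, not standard Solaris tools.

```shell
#!/bin/sh
# capture: run a command and append its output, with a header line,
# to a log file so successive samples can be compared after the session.
logfile="perf-session.log"

capture() {
    printf '=== %s ===\n' "$*" >> "$logfile"
    "$@" >> "$logfile" 2>&1
}

# In a real session: capture prstat -a 5 1 ; capture iostat -xn 5 3 ; etc.
# Demonstrated here with a portable command:
capture uname -a
```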

DTrace scripts for CPU, memory, or I/O tracing are often the most powerful part of Solaris performance tuning.

Solaris is a technically exceptional UNIX.

Many of Solaris’s performance and monitoring tools later inspired Linux utilities, but some remain unique to Solaris (either entirely, or in how deeply they integrate with the kernel).

Here’s a breakdown:


Tools and Features Unique (or Originally Native) to Solaris

1. DTrace (Dynamic Tracing Framework)

  • Originally unique to Solaris (later ported to FreeBSD and macOS).
  • Description:
    DTrace is a kernel-level and user-level dynamic tracing framework for live system instrumentation. It can safely trace any event in real time — system calls, function calls, I/O, scheduler behavior, etc. — without rebooting or recompiling.
  • Why it’s special:
    It allows deep observability with almost zero overhead when inactive.
  • Linux equivalent:
    Later attempts to emulate it include:
    • SystemTap (inspired by DTrace, less stable and flexible)
    • eBPF / bpftrace (modern and close in spirit but architecturally different)
  • Conclusion:
    DTrace was a Solaris original, and Linux has only recently approached its level with eBPF (15+ years later).

2. FMA (Fault Management Architecture)

  • Unique to Solaris.
  • Tools: fmadm, fmstat
  • Description:
    A kernel-level predictive self-healing framework that detects, diagnoses, and isolates hardware faults (CPU, memory, I/O, etc.) before they cause system crashes.
  • Why it’s special:
    FMA was part of the “Predictive Self-Healing” architecture in Solaris 10; it can automatically take faulty components (such as CPUs or DIMMs) offline at runtime.
  • Linux equivalent:
    None fully equivalent. Linux has some hardware error handling via mcelog or RAS Daemon, but not a unified predictive framework like FMA.
  • Conclusion:
    Completely Solaris-unique in scope and integration.

3. Service Management Facility (SMF)

  • Tools: svcs, svcadm, svcprop
  • Description:
    Framework for managing system services with dependencies, restart policies, and states.
  • Why it’s special:
    Replaced traditional init.d scripts with dependency-aware, fault-tolerant service management.
  • Linux equivalent:
    systemd (modern Linux init system) was heavily inspired by SMF concepts, though developed independently.
  • Conclusion:
    SMF is a Solaris innovation, later mirrored in Linux via systemd.

4. ZFS and Related Tools

  • Tools: zpool, zfs, fsstat
  • Description:
    Advanced 128-bit filesystem with integrated volume management, snapshots, and checksumming.
  • Why it’s special:
    ZFS was revolutionary: it unified file system + volume manager + data integrity.
  • Linux equivalent:
    OpenZFS now exists on Linux, but ZFS was originally developed for Solaris; OpenZFS was forked from the OpenSolaris codebase.
  • Conclusion:
    ZFS is a Solaris-born technology that Linux later adopted externally.

5. kstat / kstat Interface

  • Description:
    Solaris’s kernel statistics interface for retrieving detailed kernel metrics (per-device, per-driver, etc.).
  • Tools: kstat, kstat(3KSTAT) API.
  • Why it’s special:
    Provides consistent, hierarchical kernel metrics access — used by many Solaris monitoring tools internally.
  • Linux equivalent:
    Linux has /proc and /sys but no unified kernel statistics API equivalent to kstat.
  • Conclusion:
    Unique to Solaris, though conceptually related to /proc.
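kstat -p prints each statistic as a colon-separated hierarchical name (module:instance:name:statistic) followed by a tab and the value, which makes it easy to script. A sketch over hypothetical sample output (the e1000g counter names are illustrative):

```shell
# Hypothetical sample of `kstat -p` output: module:instance:name:statistic <TAB> value
sample=$(printf 'e1000g:0:mac:ierrors\t132\ne1000g:0:mac:oerrors\t0\ne1000g:0:mac:rbytes64\t912882183\n')

# Split on ":" and tab, then report non-zero error counters (field 4 is the
# statistic name, field 5 the value, fields 1-2 the module and instance).
report=$(printf '%s\n' "$sample" | awk -F'[:\t]' '$4 ~ /errors$/ && $5 > 0 { print $1 ":" $2 " " $4 "=" $5 }')
echo "$report"
```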

6. kvmstat, cpustat, pbind, psrinfo

  • Description:
    Tools that give low-level access to CPU hardware counters, processor binding, and kernel VM stats.
  • Why it’s special:
    Very tight integration with Solaris kernel subsystems.
  • Linux equivalent:
    Some analogues exist (numactl, taskset, perf), but Solaris’s CPU observability was deeper and more unified.
  • Conclusion:
    Originally Solaris-only, partially imitated in Linux perf tools.

7. snoop (Solaris-native packet sniffer)

  • Description:
    Built-in packet capture and decoding utility.
  • Why it’s special:
    Integrated directly into Solaris networking stack (STREAMS-based), faster than early tcpdump.
  • Linux equivalent:
    tcpdump and wireshark perform similar functions now, but snoop was native long before these were standard on UNIX.
  • Conclusion:
    Solaris-originated concept, later paralleled on Linux.

Common or Shared Tools (Not Unique)

These tools or equivalents existed long before Solaris or exist identically in Linux:

| Tool | Origin / Availability |
| --- | --- |
| vmstat, iostat, sar, mpstat, uptime | Common to all UNIX systems (from AT&T UNIX / BSD). |
| prstat | Solaris version of top; Linux equivalents: top, htop. |
| netstat, ping, traceroute | Common across all UNIX variants. |
| df, du, ps, dmesg, swap | Standard UNIX commands. |

Summary Table

| Solaris Tool/Feature | Unique to Solaris? | Linux Equivalent / Derivative |
| --- | --- | --- |
| DTrace | Yes | SystemTap, eBPF, bpftrace |
| FMA (fmadm, fmstat) | Yes | Partial RAS tools |
| SMF (svcs, svcadm) | Yes | systemd (inspired by SMF) |
| ZFS (zpool, zfs) | Yes | OpenZFS (ported) |
| kstat | Yes | /proc, /sys (conceptually similar) |
| snoop | Originally Solaris | tcpdump later filled the role |
| kvmstat, cpustat, psrinfo, pbind | Originally Solaris | perf, taskset, numactl (similar idea) |
| vmstat, iostat, sar, prstat, mpstat, netstat | Common UNIX heritage | Shared across all UNIX/Linux systems |

In summary:

  • Solaris-original innovations: DTrace, FMA, SMF, ZFS, kstat
  • Later imitated or ported to Linux: systemd, eBPF, OpenZFS, bpftrace
  • Traditional UNIX tools: shared across all UNIX flavors.

Here’s a comprehensive comparison chart between Solaris and Linux system performance/monitoring tools, including what each does, how to use it, and the closest equivalent command or concept in Linux.


Solaris vs Linux System Performance and Monitoring Tools

| Category | Solaris Tool | Linux Equivalent | Purpose / Notes |
| --- | --- | --- | --- |
| CPU Monitoring | prstat | top, htop | Displays real-time CPU and memory usage per process or per user. |
| | mpstat | mpstat (sysstat package) | CPU usage per processor/core; syntax is almost identical. |
| | psrinfo | lscpu | Displays processor status, speed, and count. |
| | psradm | echo 0 > /sys/devices/system/cpu/cpuX/online | Enables/disables CPUs; the Solaris command is more straightforward. |
| | cpustat | perf stat, perf top | Collects CPU hardware counter statistics. |
| | pbind | taskset | Binds a process to a specific CPU (CPU affinity). |

| Memory Monitoring | vmstat | vmstat | Same function—reports memory, paging, and process info. |
| | sar -r | sar -r (sysstat package) | Reports memory and swap usage. |
| | swap -l / swap -s | swapon -s, free -h | Shows swap devices and usage. |
| | mdb (kernel debugger) | crash, gdb | Advanced kernel/memory inspection tool. |


| Disk and Filesystem Monitoring | iostat | iostat (sysstat package) | I/O statistics by device, with similar syntax. |
| | df, du | df, du | Disk space usage. |
| | fsstat | (no exact equivalent) | Solaris-only; detailed file system operations stats. |
| | metastat | mdadm --detail | For Solaris Volume Manager / Linux software RAID. |
| | zpool, zfs | zpool, zfs (OpenZFS) | Originally Solaris-only, now ported to Linux as OpenZFS. |


| Network Monitoring | netstat | netstat, ss, ip | Displays sockets, interfaces, and routing. |
| | snoop | tcpdump, wireshark | Solaris-native packet sniffer (older but very efficient). |
| | dlstat | ifstat, ip -s link | Monitors data-link level network throughput. |
| | ndd | ethtool, sysctl | Views/sets network driver and protocol parameters. |
| | ping, traceroute | ping, traceroute | Identical purpose. |


| System Activity Logging | sar | sar (sysstat package) | Collects system statistics over time. Identical toolset. |
| | /var/adm/messages | /var/log/messages | Main system log file. |
| | dmesg | dmesg | Kernel and boot-time messages. |
| | logadm | logrotate | Log rotation and management. |


| Service and Process Management | svcs, svcadm, svcprop | systemctl, journalctl | Solaris SMF vs Linux systemd — both manage services with dependencies and states. |
| | init.d scripts (legacy) | /etc/init.d/ | Classic UNIX service scripts (now mostly replaced by SMF/systemd). |


| Fault & Hardware Management | fmadm, fmstat | Partial: mcelog, rasdaemon | Solaris Fault Management Architecture (FMA) detects and isolates hardware faults; Linux equivalents are partial. |
| | cfgadm | lshw, udevadm info | Manages dynamic reconfiguration of hardware. |


| Comprehensive Tracing & Diagnostics | DTrace | SystemTap, perf, ftrace, eBPF, bpftrace | Solaris’s most famous tool — dynamic kernel/user tracing. Linux tools were all later attempts to replicate its power. |
| | kstat | /proc, /sys, sysstat tools | Solaris kernel statistics interface; Linux uses virtual filesystems for similar info. |


| Virtualization / Zones | zoneadm, zlogin | lxc, docker | Solaris Zones were the precursor to Linux containers — same isolation concept. |
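Zones, like the other tools above, report in columnar output that is easy to script. A sketch that lists running zones from zoneadm list -cv, using hypothetical sample output (the zone names are illustrative) so the pipeline runs anywhere:

```shell
# Hypothetical sample of `zoneadm list -cv` output.
sample='  ID NAME     STATUS     PATH                 BRAND    IP
   0 global   running    /                    native   shared
   1 webzone  running    /zones/webzone       native   shared
   - dbzone   installed  /zones/dbzone        native   shared'

# Collect the names (field 2) of zones whose status (field 3) is "running".
running=$(printf '%s\n' "$sample" | awk 'NR > 1 && $3 == "running" { out = out ? out " " $2 : $2 } END { print out }')
echo "running zones: $running"
```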


Summary by Category

| Area | Solaris Original or Superior Tool | Linux Later Counterpart |
| --- | --- | --- |
| Deep Tracing | DTrace | eBPF, bpftrace, SystemTap |
| Service Management | SMF (svcs/svcadm) | systemd |
| Filesystem | ZFS | OpenZFS (ported) |
| Fault Management | FMA (fmadm/fmstat) | Partial: mcelog, rasdaemon |
| Kernel Stats Interface | kstat | /proc, /sys |
| Networking Sniffer | snoop | tcpdump |
| Containerization | Zones | LXC, Docker |
| Common UNIX Tools | iostat, vmstat, mpstat, sar | Same tools, shared UNIX heritage |

Key Takeaways

  • Solaris was the innovator — DTrace, ZFS, SMF, and Zones were years ahead of Linux equivalents.
  • Linux caught up by borrowing the concepts, not the implementations (due to licensing differences).
  • Shared tools (iostat, vmstat, sar) come from the common UNIX System V heritage.
  • For deep kernel-level debugging and self-healing, Solaris still has unmatched elegance (especially with DTrace + FMA).