A beautiful terminal dashboard for monitoring GPUs. Supports AMD, NVIDIA, and Apple Silicon.
```
╔══════════════════════════════════════════════════════════╗
║                      P I C O M O N                       ║
║                    GPU Monitoring TUI                    ║
╚══════════════════════════════════════════════════════════╝
```
- Multi-GPU Support: Works with AMD (via `amd-smi`), NVIDIA (via `nvidia-smi`), and Apple Silicon
- Beautiful TUI: Modern terminal interface built with Textual
- Multiple Screens: Dashboard, System overview, GPU details, and shareable Rig Card
- Live Metrics: GPU utilization, memory usage, power draw, temperature
- Sparkline History: Visual history of metrics over time
- Shareable Rig Cards: Generate ASCII art summaries to share your setup
The dashboard shows an overview of all GPUs with live metrics and sparklines:
```
┌─────────────────────────────────────────────────────────────────────────┐
│                            PICOMON GPU Monitor                          │
│                     ▲ 30min history │ 3.0s refresh                      │
├─────────────────────────────────────────────────────────────────────────┤
│   ┌──────────────────────────────┐   ┌──────────────────────────────┐   │
│   │ GPU 0  Apple M3 Max GPU      │   │ GPU 1  RTX 4090              │   │
│   │                              │   │                              │   │
│   │ GFX ████████░░░░░░░ 52%      │   │ GFX ██████████████░░ 78%     │   │
│   │ UMC ███░░░░░░░░░░░░ 18%      │   │ UMC █████████░░░░░░ 56%      │   │
│   │ PWR ██░░░░░░░░░░░░░ 6W       │   │ PWR █████████████░░ 320W     │   │
│   │ VRAM █████████████░░ 80%     │   │ VRAM ████████░░░░░░░ 52%     │   │
│   │                              │   │                              │   │
│   │ ▁▂▃▄▅▆▇█▇▆▅▄▃▂▁▂▃▄▅▆ GFX     │   │ ▂▃▅▆▇█████▇▆▅▄▃▂▁▂▃▄ GFX     │   │
│   │ ▁▁▂▂▃▄▅▆▇█▇▆▅▄▃▂▁▁▂▃ PWR     │   │ ▂▃▄▅▆▇████▇▆▅▄▃▂▂▃▄▅ PWR     │   │
│   └──────────────────────────────┘   └──────────────────────────────┘   │
├─────────────────────────────────────────────────────────────────────────┤
│ TOTAL: 2 GPUs │ 326/450W (72%) │ 20.5/40.0 GB VRAM (51%) │ Avg GFX: 65% │
└─────────────────────────────────────────────────────────────────────────┘
```
Shareable ASCII art summary - press `r` to view, `c` to copy:
```
╔══════════════════════════════════════════════════════════╗
║                      P I C O M O N                       ║
║                    GPU Monitoring TUI                    ║
║──────────────────────────────────────────────────────────║
║  SYSTEM  my-workstation                                  ║
║  CPU     AMD Ryzen 9 7950X                               ║
║  RAM     128 GB                                          ║
║──────────────────────────────────────────────────────────║
║  GPU CLUSTER                                             ║
║  ────────────────────────────────────────                ║
║    2 × NVIDIA RTX 4090                                   ║
║    VRAM  48 GB (24 GB × 2)                               ║
║    TDP   900 W (450 W × 2)                               ║
║──────────────────────────────────────────────────────────║
║  LIVE STATS                                              ║
║  GPU Load    ████████████████░░░░░░░░░░░░░░  52.3%       ║
║  Power Draw  ██████████░░░░░░░░░░░░░░░░░░░░  380W        ║
║  VRAM Used   ████████████░░░░░░░░░░░░░░░░░░  20GB        ║
║══════════════════════════════════════════════════════════║
║  Generated by picomon | 2025-12-26 12:00                 ║
╚══════════════════════════════════════════════════════════╝
```
- Python 3.9+
- One of the following (auto-detected):
  - AMD GPUs: `amd-smi` CLI on PATH
  - NVIDIA GPUs: `nvidia-smi` CLI on PATH
  - Apple Silicon: macOS with Metal support
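To see what picomon will auto-detect, you can check for the vendor CLIs yourself. A minimal shell sketch (not part of picomon; `command -v` simply tests PATH lookup):

```bash
# Check which GPU vendor CLIs are on PATH; picomon auto-detects these backends
for tool in amd-smi nvidia-smi; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: $(command -v "$tool")"
  else
    echo "$tool: not found"
  fi
done
# Apple Silicon needs no separate CLI: picomon looks for macOS with Metal support
```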
Install from PyPI and launch the TUI:

```bash
pip install picomon
picomon
```

| Option | Description | Default |
|---|---|---|
| `--update-interval` | Seconds between refreshes | 3.0 |
| `--history-minutes` | Rolling history window (minutes) | 30 |
| `--provider` | Force a specific provider (`amd`, `nvidia`, `apple`) | auto |
| `--list-providers` | List available GPU providers | - |
| `--classic` | Use the legacy curses UI (AMD only) | - |
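For example, combining the flags above (the values here are arbitrary):

```bash
# Refresh every second, keep an hour of history, and pin the NVIDIA backend
picomon --update-interval 1.0 --history-minutes 60 --provider nvidia

# List the GPU providers detected on this machine
picomon --list-providers
```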
Key bindings:

| Key | Action |
|---|---|
| `1` | Dashboard (GPU overview) |
| `2` | System overview |
| `3`-`9` | GPU detail pages |
| `r` | Rig Card (shareable summary) |
| `?` | Help screen |
| `Tab` | Next screen |
| `Shift+Tab` | Previous screen |
| `Escape` | Go back / Close |
| `q` | Quit |
While viewing the Rig Card:

| Key | Action |
|---|---|
| `c` | Copy rig card to clipboard |
| `s` | Save rig card to file |
| `m` | Toggle between full and compact mode |
The main screen shows all detected GPUs in a grid layout. Each GPU card displays:
- Current GFX (compute) utilization
- UMC (memory controller) utilization
- Power draw vs TDP limit
- VRAM usage
- Sparkline history for GFX and power
Click on a GPU card or press the corresponding number key (`3`-`9`) to see detailed stats.
Press `2` to see system-wide statistics:
- Hostname, OS, kernel version, uptime
- CPU model, cores, and live usage with sparkline
- RAM and swap usage
- Aggregate GPU cluster stats (total VRAM, TDP, averages)
Press `3`-`9` to view detailed per-GPU information:
- Full metric history charts
- Temperature, clock speeds, fan speed (when available)
- PCI bus information
- Architecture details
Press `r` to generate a shareable ASCII art summary of your system. Perfect for:
- Sharing your setup on social media
- Quick system documentation
- Showing off your GPU cluster
Picomon can be imported and used programmatically in your Python applications. The module provides access to GPU monitoring capabilities, metrics collection, and the provider system.
```python
from picomon import PicomonConfig, run_monitor

# Run the TUI with custom settings
config = PicomonConfig(update_interval=1.5, history_minutes=60)
run_monitor(["--update-interval", str(config.update_interval)])
```

Use the provider system to collect GPU metrics programmatically:
```python
from picomon.providers import detect_providers

# Auto-detect available providers
providers = detect_providers()
print(f"Found {len(providers)} GPU providers")

for provider in providers:
    print(f"{provider.name}: {provider.get_gpu_count()} GPUs")

    # Get static GPU information
    static_info = provider.get_static_info()
    for gpu_id, info in static_info.items():
        print(f"  GPU {gpu_id}: {info.name} ({info.vram_total_mb:.0f} MB VRAM)")

    # Get current metrics
    metrics = provider.get_metrics()
    for metric in metrics:
        print(f"  GPU {metric.gpu_idx}: {metric.gpu_utilization:.1f}% load, "
              f"{metric.power_draw_w:.1f}W power, "
              f"{metric.vram_used_mb:.0f} MB VRAM used")
```

Create a simple monitoring loop that tracks GPU utilization:
```python
import time

from picomon.providers import get_provider

def monitor_gpus(duration_seconds=60, interval=1.0):
    """Monitor GPU metrics for a specified duration."""
    provider = get_provider()
    if not provider:
        print("No GPU provider available")
        return

    print(f"Monitoring {provider.name} GPUs for {duration_seconds} seconds...")

    # VRAM totals are static, so fetch them once up front
    static_info = provider.get_static_info()

    start_time = time.time()
    while time.time() - start_time < duration_seconds:
        metrics = provider.get_metrics()

        # Clear screen and print current status
        print("\033[2J\033[H", end="")  # ANSI escape codes to clear screen
        print(f"{'GPU':<4} {'Utilization':<12} {'Power':<8} {'VRAM':<10}")
        print("-" * 40)

        for metric in metrics:
            vram_total = static_info[metric.gpu_idx].vram_total_mb
            vram_percent = (metric.vram_used_mb / vram_total) * 100
            print(f"{metric.gpu_idx:<4} "
                  f"{metric.gpu_utilization:<12.1f}% "
                  f"{metric.power_draw_w:<8.1f}W "
                  f"{metric.vram_used_mb:<10.0f}MB ({vram_percent:.0f}%)")

        time.sleep(interval)

# Run monitoring for 30 seconds
monitor_gpus(30, 2.0)
```

For systems with GPUs from multiple vendors, use the MultiProvider:
```python
from picomon.providers import MultiProvider

# Create a multi-provider that aggregates GPUs from every detected vendor
multi_provider = MultiProvider()
print(f"Total GPUs: {multi_provider.get_gpu_count()}")

# Get all GPU information across vendors
all_info = multi_provider.get_static_info()
for gpu_id, info in all_info.items():
    print(f"GPU {gpu_id}: {info.vendor} {info.name}")

# Get metrics from all GPUs
all_metrics = multi_provider.get_metrics()
total_power = sum(m.power_draw_w for m in all_metrics)
avg_utilization = sum(m.gpu_utilization for m in all_metrics) / len(all_metrics)
print(f"System total power: {total_power:.1f}W")
print(f"Average GPU utilization: {avg_utilization:.1f}%")
```

Create custom monitoring applications by extending the provider system:
```python
from datetime import datetime

import psutil  # Example: integrate with system-level monitoring

from picomon.providers import get_all_providers

class CustomMonitoringApp:
    def __init__(self):
        self.providers = get_all_providers()
        self.system_monitor = psutil

    def get_system_snapshot(self):
        """Get a complete system snapshot including GPUs."""
        snapshot = {
            'timestamp': datetime.now(),
            'cpu_percent': self.system_monitor.cpu_percent(),
            'memory_percent': self.system_monitor.virtual_memory().percent,
            'gpus': []
        }
        for provider in self.providers:
            metrics = provider.get_metrics()
            static_info = provider.get_static_info()
            for metric in metrics:
                snapshot['gpus'].append({
                    'id': metric.gpu_idx,
                    'vendor': provider.vendor,
                    'utilization': metric.gpu_utilization,
                    'power': metric.power_draw_w,
                    'vram_used': metric.vram_used_mb,
                    'vram_total': static_info[metric.gpu_idx].vram_total_mb
                })
        return snapshot

    def log_high_utilization(self, threshold=80.0):
        """Alert when GPU utilization exceeds a threshold."""
        snapshot = self.get_system_snapshot()
        for gpu in snapshot['gpus']:
            if gpu['utilization'] > threshold:
                print(f"ALERT: GPU {gpu['id']} ({gpu['vendor']}) "
                      f"utilization at {gpu['utilization']:.1f}% > {threshold}%")

# Usage
monitor = CustomMonitoringApp()
monitor.log_high_utilization(85.0)
```

Export GPU metrics for analysis:
```python
import csv
import json
import time
from datetime import datetime

from picomon.providers import get_provider, get_all_providers

def export_metrics_to_csv(filename="gpu_metrics.csv", duration=300):
    """Export GPU metrics to a CSV file for analysis."""
    provider = get_provider()
    if not provider:
        return

    with open(filename, 'w', newline='') as csvfile:
        writer = csv.writer(csvfile)
        writer.writerow(['timestamp', 'gpu_id', 'utilization', 'power_w', 'vram_used_mb'])

        start_time = time.time()
        while time.time() - start_time < duration:
            metrics = provider.get_metrics()
            for metric in metrics:
                writer.writerow([
                    metric.timestamp.isoformat(),
                    metric.gpu_idx,
                    metric.gpu_utilization,
                    metric.power_draw_w,
                    metric.vram_used_mb
                ])
            time.sleep(5)  # Sample every 5 seconds

    print(f"Metrics exported to {filename}")

def export_system_info_json(filename="system_info.json"):
    """Export complete system information to JSON."""
    providers = get_all_providers()
    system_info = {
        'timestamp': datetime.now().isoformat(),
        'providers': []
    }

    for provider in providers:
        provider_info = {
            'name': provider.name,
            'vendor': provider.vendor,
            'gpu_count': provider.get_gpu_count(),
            'gpus': []
        }
        static_info = provider.get_static_info()
        for gpu_id, info in static_info.items():
            provider_info['gpus'].append({
                'id': gpu_id,
                'name': info.name,
                'architecture': info.architecture,
                'vram_total_mb': info.vram_total_mb,
                'power_limit_w': info.power_limit_w,
                'pcie_bus': info.pcie_bus
            })
        system_info['providers'].append(provider_info)

    with open(filename, 'w') as f:
        json.dump(system_info, f, indent=2)

    print(f"System info exported to {filename}")

# Example usage
export_system_info_json()
export_metrics_to_csv(duration=60)  # Export 1 minute of data
```

Picomon uses a pluggable provider system:
```python
from picomon.providers import detect_providers, get_provider

# Auto-detect available providers
providers = detect_providers()
for p in providers:
    print(f"{p.name}: {p.get_gpu_count()} GPUs")

# Get a specific provider
nvidia = get_provider("nvidia")
if nvidia:
    info = nvidia.get_static_info()
    metrics = nvidia.get_metrics()
```

To work on picomon locally:

```bash
# Clone the repo
git clone https://github.com/omarkamali/picomon.git
cd picomon
# Install in development mode
pip install -e ".[dev]"
# Run tests
pytest
# Run the app
python -m picomon
```

I built picomon because:
- `nvtop` kept crashing on some AMD devices with assertion errors
- I wanted multi-vendor support: one tool for AMD, NVIDIA, and Apple Silicon
- I needed something lightweight, with no heavy GUI dependencies
- Shareability: the rig card feature makes it easy to show off your setup
MIT - Omar Kamali