# PostgreSQL Advanced Monitoring
I/O statistics, background writer, WAL stats, and wait events.
Beyond query performance and table health, Basira collects PostgreSQL internals that help diagnose storage, checkpoint, and contention issues.
## I/O Statistics (PostgreSQL 16+)
The I/O Stats page shows read and write operations broken down by backend type, object type, and context.
| Dimension | Examples |
|---|---|
| Backend type | client backend, autovacuum worker, checkpointer, background writer |
| Object type | relation, temp relation |
| Context | normal, vacuum, bulkread, bulkwrite |
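These dimensions come from the `pg_stat_io` view (PostgreSQL 16+). A sketch of a query over the same breakdown, filtering out idle rows to keep the output readable:

```sql
-- I/O operations by backend type, object type, and context (PostgreSQL 16+).
SELECT backend_type, object, context,
       reads, writes, extends, hits, evictions, fsyncs
FROM pg_stat_io
WHERE coalesce(reads, 0) > 0
   OR coalesce(writes, 0) > 0
   OR coalesce(extends, 0) > 0
ORDER BY coalesce(reads, 0) + coalesce(writes, 0) DESC;
```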
### What to look for

- High read I/O from client backends — queries are reading from disk instead of shared buffers. Increase `shared_buffers` or optimize the queries.
- High I/O from autovacuum — autovacuum is doing significant work. This is normal on write-heavy tables but can impact query latency.
- Bulk read spikes — sequential scans or large sorts hitting disk. Check for missing indexes.
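To check the first point directly, you can compute a buffer-cache hit ratio for client backends from `pg_stat_io` (a sketch; `hits` counts reads served from shared buffers, `reads` counts those that went to disk):

```sql
-- Share of client-backend relation reads served from shared buffers.
SELECT sum(hits)  AS cache_hits,
       sum(reads) AS disk_reads,
       round(sum(hits)::numeric / nullif(sum(hits) + sum(reads), 0), 4) AS hit_ratio
FROM pg_stat_io
WHERE backend_type = 'client backend'
  AND object = 'relation';
```

A ratio well below 0.99 on a steady workload is a hint that the working set no longer fits in shared buffers.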
## Background Writer & Checkpoints
The BGWriter page tracks how PostgreSQL writes dirty buffers to disk.
### Checkpoint Metrics
| Metric | Description |
|---|---|
| Checkpoints timed | Checkpoints triggered by `checkpoint_timeout` (scheduled) |
| Checkpoints requested | Checkpoints triggered by `max_wal_size` (forced) |
| Checkpoint write time | Time spent writing dirty buffers during checkpoints |
| Checkpoint sync time | Time spent syncing files to disk |
### Buffer Metrics
| Metric | Description |
|---|---|
| Buffers checkpoint | Buffers written during checkpoints |
| Buffers clean | Buffers written by the background writer |
| Buffers backend | Buffers written directly by backends (expensive) |
| Backend fsyncs | Direct fsyncs by backends (very expensive) |
| Max written clean | Times the background writer stopped because it wrote too many buffers |
| Buffers allocated | Total buffers allocated |
### What to look for

- High checkpoints requested — WAL is filling up before `checkpoint_timeout`. Increase `max_wal_size` or reduce write volume.
- High buffers backend — backends are flushing dirty pages themselves because the background writer and checkpoints can't keep up. Increase `bgwriter_lru_maxpages` or `checkpoint_completion_target`.
- Backend fsyncs > 0 — backends are doing synchronous I/O. This is expensive and indicates the OS or filesystem isn't handling writeback well.
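These counters come from `pg_stat_bgwriter` (note that on PostgreSQL 17+ the checkpoint counters moved to `pg_stat_checkpointer` and `buffers_backend` was dropped). A sketch of a query for PostgreSQL 16 and earlier that surfaces the ratios discussed above:

```sql
-- Checkpoint and buffer-write balance (PostgreSQL 16 and earlier).
SELECT checkpoints_timed,
       checkpoints_req,
       round(checkpoints_req::numeric
             / nullif(checkpoints_timed + checkpoints_req, 0), 2) AS req_ratio,
       buffers_checkpoint,
       buffers_clean,
       buffers_backend,        -- high values: backends flushing their own pages
       buffers_backend_fsync,  -- should stay at 0
       maxwritten_clean
FROM pg_stat_bgwriter;
```

A `req_ratio` approaching 1 means most checkpoints are forced by WAL volume rather than the timer, which is the signal to raise `max_wal_size`.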
## WAL Statistics (PostgreSQL 14+)
The WAL Stats page shows Write-Ahead Log generation over time.
| Metric | Description |
|---|---|
| WAL records | Number of WAL records generated |
| WAL bytes | Bytes of WAL generated |
### What to look for

- WAL spike — correlates with bulk writes, large transactions, or `CREATE INDEX`. Expected during data loads.
- Sustained high WAL — heavy write workload. Ensure replication can keep up and disk throughput is sufficient.
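The underlying counters live in the `pg_stat_wal` view (PostgreSQL 14+); a sketch of a direct query:

```sql
-- WAL generated since the last stats reset (PostgreSQL 14+).
SELECT wal_records,
       pg_size_pretty(wal_bytes) AS wal_size,
       wal_fpi,       -- full-page images, often the bulk of WAL right after a checkpoint
       stats_reset
FROM pg_stat_wal;
```

Sampling this periodically and diffing `wal_bytes` gives a WAL generation rate you can compare against replication and disk throughput.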
## Wait Events
Wait events show what PostgreSQL processes are waiting on. See Active Queries for the real-time view.
Common wait event types:
| Type | Description | Common causes |
|---|---|---|
| IO | Waiting for I/O | Disk bottleneck, shared buffer miss |
| Lock | Waiting for heavyweight lock | Concurrent DDL or row-level conflicts |
| LWLock | Waiting for lightweight lock | Internal contention (buffer mapping, WAL insert, etc.) |
| Client | Waiting for client activity | `idle in transaction` sessions |
| Activity | Background process waiting | Normal for idle background workers |
High `LWLock:BufferMapping` waits suggest shared buffer contention — consider increasing `shared_buffers`. High `Lock:transactionid` waits indicate row-level contention between concurrent transactions.
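For a point-in-time view of the same data, you can group `pg_stat_activity` by wait event (a sketch; each row in the result is a distinct wait state with the number of backends currently in it):

```sql
-- Current wait-event distribution across active backends.
SELECT wait_event_type, wait_event, count(*) AS backends
FROM pg_stat_activity
WHERE state = 'active'
  AND wait_event IS NOT NULL
GROUP BY wait_event_type, wait_event
ORDER BY backends DESC;
```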