# stdout Sink
The stdout sink writes every CDC event as a self-contained JSON object on its own line. Output goes to the process’s standard output, making it trivially composable with any tool in the Unix pipeline: jq, grep, wc, stream processors, or a redirect to a file for later replay via the stdin source.
stdout is the fastest way to verify that CDC is working. Point a pipeline at stdout before wiring up a cloud sink — you’ll see events flowing within seconds of starting the pipeline, with no IAM, no credentials, and no cloud dependency.
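For a quick smoke test, pipe the first few events to the terminal (the file name pipeline.yaml is illustrative; any pipeline whose sink is a stdout connection works):

```bash
# Stream the first five events, then exit
nanosync apply --file pipeline.yaml | head -n 5
```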
## Output format
Each event is one JSON object terminated by a newline (\n). Fields:
| Field | Description |
|---|---|
| _ns_op | Operation type: INSERT, UPDATE, or DELETE |
| _ns_table | Fully qualified source table name (e.g. public.orders, dbo.orders) |
| _ns_lsn | Log sequence number of the change on the source database |
| _ns_committed_at | UTC timestamp when the transaction committed on the source |
| _ns_before | For UPDATE and DELETE events: object containing column values before the change. Present only when include_before is true (the default). |
| (column names) | All column values after the change. For DELETE events, only columns available in the source change stream are included (primary key columns at minimum; full row with REPLICA IDENTITY FULL on Postgres or CDC mode on SQL Server). |
## Example events
INSERT:

```json
{"_ns_op":"INSERT","_ns_table":"public.orders","_ns_lsn":"0/1A2B3C4","_ns_committed_at":"2025-01-15T10:32:00Z","id":1001,"customer_id":42,"total":99.95,"status":"pending","created_at":"2025-01-15T10:31:58Z"}
```

UPDATE:

```json
{"_ns_op":"UPDATE","_ns_table":"public.orders","_ns_lsn":"0/1A2B400","_ns_committed_at":"2025-01-15T10:33:12Z","_ns_before":{"id":1001,"customer_id":42,"total":99.95,"status":"pending","created_at":"2025-01-15T10:31:58Z"},"id":1001,"customer_id":42,"total":99.95,"status":"shipped","created_at":"2025-01-15T10:31:58Z"}
```

DELETE:

```json
{"_ns_op":"DELETE","_ns_table":"public.orders","_ns_lsn":"0/1A2B450","_ns_committed_at":"2025-01-15T10:40:00Z","_ns_before":{"id":1001,"customer_id":42,"total":99.95,"status":"shipped","created_at":"2025-01-15T10:31:58Z"},"id":1001}
```
## Configuration
```yaml
connections:
  - name: local-out
    type: stdout

pipelines:
  - name: debug-pipeline
    source:
      connection: prod-postgres
      tables:
        - public.orders
    sink:
      connection: local-out
```
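With the file above saved as pipeline.yaml, starting the pipeline prints events straight to your terminal:

```bash
nanosync apply --file pipeline.yaml
```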
## Sink properties
| Property | Default | Description |
|---|---|---|
| pretty | false | Pretty-print JSON output with indentation. Useful for manual inspection. Breaks line-by-line parsing tools that expect one event per line. |
| include_before | true | Include the _ns_before field on UPDATE and DELETE events. Set to false to reduce output volume when before-images are not needed. |
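For example, to trim output volume on a busy debug stream, before-images can be switched off (a sketch; this assumes sink properties are set on the stdout connection entry, alongside type):

```yaml
connections:
  - name: local-out
    type: stdout
    pretty: false          # one event per line (the default)
    include_before: false  # drop _ns_before from UPDATE/DELETE events
```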
## Composing with jq
Because every event is a valid JSON object on its own line, stdout output integrates directly with jq:
```bash
# Count events by operation type
nanosync apply --file pipeline.yaml | jq -r '._ns_op' | sort | uniq -c

# Watch only DELETE events as they arrive
nanosync apply --file pipeline.yaml | jq 'select(._ns_op == "DELETE")'

# Extract the IDs of newly inserted rows
nanosync apply --file pipeline.yaml | jq 'select(._ns_op == "INSERT") | .id'

# Measure end-to-end latency: diff between commit time and now
nanosync apply --file pipeline.yaml | jq 'now - (._ns_committed_at | fromdateiso8601)'

# Save events to a file for later replay
nanosync apply --file pipeline.yaml > events.jsonl
```
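The filters also compose for narrower slices, such as projecting specific columns from one table into CSV (the column list below matches the example events above):

```bash
# CSV of new orders: id, customer_id, total, status
nanosync apply --file pipeline.yaml \
  | jq -r 'select(._ns_op == "INSERT" and ._ns_table == "public.orders")
           | [.id, .customer_id, .total, .status] | @csv'
```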
## Capturing and replaying events
stdout and stdin form a capture-replay pair. Capture from a live source:
```bash
nanosync apply --file postgres-pipeline.yaml > events.jsonl
```
Replay later against a different sink (e.g. a dev BigQuery dataset) by pointing a pipeline at the stdin source:
```bash
cat events.jsonl | nanosync apply --file replay-pipeline.yaml
```
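A replay pipeline swaps the source for a stdin connection. A minimal sketch (the stdin and bigquery connection type names and the dev-bigquery settings are illustrative; see the stdin source and BigQuery sink pages for their exact options):

```yaml
connections:
  - name: local-in
    type: stdin
  - name: dev-bigquery
    type: bigquery
    # dataset and credential settings for the dev dataset go here

pipelines:
  - name: replay-pipeline
    source:
      connection: local-in
    sink:
      connection: dev-bigquery
```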
## Limitations
- At-least-once delivery only. stdout has no persistence layer and no deduplication. If the process is killed and restarted, nanosync re-reads from the last checkpoint and re-emits any events that were written to stdout but not yet checkpointed. Consumers that need exactly-once behavior must deduplicate downstream (see the sketch after this list).
- No backpressure. If the consuming process (e.g. jq) is slower than the source, events buffer in the OS pipe. For sustained high-throughput streams, use a sink with a persistent write path (BigQuery, Kafka, local files) instead.
- Not suitable for production replication. stdout is a diagnostic and development tool. For production pipelines, use a sink that provides durable storage and idempotent writes.
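If a restart has left duplicates in a captured file, a best-effort cleanup before replay can key on table plus LSN (a sketch; it assumes _ns_lsn uniquely identifies a change within a table, and note that jq's unique_by re-orders events by that key):

```bash
# Keep one event per (_ns_table, _ns_lsn) pair.
# -s slurps the file into an array; output comes back sorted by the key.
jq -cs 'unique_by([._ns_table, ._ns_lsn]) | .[]' events.jsonl > events.dedup.jsonl
```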