# Local Testing with stdout and stdin

Local testing lets you validate connector config, inspect event shape, and simulate full pipelines without pointing nanosync at a real sink. No cloud credentials, no running server, no committed state.
## The quickest test — stream to stdout

`nanosync stream` reads CDC events from a source and writes them to stdout as JSON lines. No server, no YAML, no sink config needed:

```shell
nanosync stream \
  --source "postgres://user:pass@localhost/mydb" \
  --tables "public.orders"
```

Nanosync connects, creates a temporary replication slot, and begins streaming. Events appear on stdout immediately. The slot is cleaned up when the command exits.
## Output format

Every event is a JSON object on its own line. The nanosync metadata fields are prefixed with `_ns_`:

```json
{"_ns_op":"INSERT","_ns_table":"public.orders","_ns_lsn":"0/1A2B3C","_ns_committed_at":"2024-11-14T09:23:41.123Z","id":42,"customer_id":7,"total_cents":4999,"status":"pending"}
{"_ns_op":"UPDATE","_ns_table":"public.orders","_ns_lsn":"0/1A2B3D","_ns_committed_at":"2024-11-14T09:23:42.005Z","id":42,"customer_id":7,"total_cents":4999,"status":"confirmed"}
{"_ns_op":"DELETE","_ns_table":"public.orders","_ns_lsn":"0/1A2B3E","_ns_committed_at":"2024-11-14T09:23:43.811Z","id":42}
```
| Field | Description |
|---|---|
| `_ns_op` | Operation: `INSERT`, `UPDATE`, or `DELETE` |
| `_ns_table` | Source table in `schema.table` format |
| `_ns_lsn` | Log Sequence Number at commit |
| `_ns_committed_at` | Wall-clock commit timestamp (UTC, RFC 3339) |
| all other fields | Column values from the row |
`UPDATE` events include the full after-image. The before-image is included under `_ns_before` only if the source table has `REPLICA IDENTITY FULL` set.
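If you need before-images, you can enable full replica identity with standard Postgres DDL. A sketch via `psql`, reusing the placeholder connection string from the examples above:

```shell
# Make Postgres log the entire old row for UPDATE/DELETE on public.orders.
# Trade-off: this increases WAL volume for write-heavy tables.
psql "postgres://user:pass@localhost/mydb" \
  -c "ALTER TABLE public.orders REPLICA IDENTITY FULL;"
```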
## Filtering events with jq

Pipe stdout directly into `jq` to inspect specific operations or tables:

```shell
nanosync stream \
  --source "postgres://user:pass@localhost/mydb" \
  --tables "public.orders" \
  | jq 'select(._ns_op == "INSERT")'
```
Filter by table when streaming multiple tables:

```shell
nanosync stream \
  --source "postgres://user:pass@localhost/mydb" \
  --tables "public.orders,public.order_items" \
  | jq 'select(._ns_table == "public.order_items")'
```
Extract specific fields:

```shell
nanosync stream \
  --source "postgres://user:pass@localhost/mydb" \
  --tables "public.orders" \
  | jq '{op: ._ns_op, id: .id, status: .status}'
```
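Beyond per-event filters, `jq` can slurp the whole stream and aggregate it. This sketch counts events per operation; the here-doc stands in for `nanosync stream` output, which you would pipe in the same way:

```shell
# -s slurps all lines into one array, group_by buckets by _ns_op (sorted),
# and map emits one {op, count} summary per bucket. -c prints compact JSON.
jq -sc 'group_by(._ns_op) | map({op: .[0]._ns_op, count: length})' <<'EOF'
{"_ns_op":"INSERT","_ns_table":"public.orders","id":1}
{"_ns_op":"INSERT","_ns_table":"public.orders","id":2}
{"_ns_op":"UPDATE","_ns_table":"public.orders","id":1}
EOF
# prints: [{"op":"INSERT","count":2},{"op":"UPDATE","count":1}]
```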
## Saving events to a file

Redirect stdout to capture a real event stream for replay later:

```shell
nanosync stream \
  --source "postgres://user:pass@localhost/mydb" \
  --tables "public.orders" \
  > events.jsonl
```
Press `Ctrl-C` when you have enough events. The file is valid JSONL — one complete JSON object per line — and can be replayed as many times as needed.
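Before replaying a capture, it is worth confirming it is well-formed. A sketch: run every line through `jq -e`, which exits non-zero if any line fails to parse. The sample file below stands in for a real capture:

```shell
# Write a tiny sample capture (a real one comes from `nanosync stream`)
cat > events.jsonl <<'EOF'
{"_ns_op":"INSERT","_ns_table":"public.orders","id":42,"status":"pending"}
{"_ns_op":"DELETE","_ns_table":"public.orders","id":42}
EOF

# Parse every line; fail loudly on the first malformed one
jq -e . events.jsonl > /dev/null \
  && echo "valid JSONL: $(wc -l < events.jsonl | tr -d ' ') events"
```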
## Replaying saved events through a pipeline

The `stdin` source type reads events from standard input. Use it to replay a captured event file through any sink without needing the original source database:

```yaml
connections:
  - name: captured-events
    type: stdin
  - name: my-bigquery
    type: bigquery
    properties:
      project_id: my-project
      dataset_id: replication
      credentials_file: /path/to/key.json

pipelines:
  - name: replay-test
    source:
      connection: captured-events
    sink:
      connection: my-bigquery
```
Apply it with events piped in:

```shell
cat events.jsonl | nanosync apply --file pipeline.yaml
```
Nanosync reads until EOF, flushes the final batch to the sink, and exits. This is the fastest way to test a new sink connector: capture 1,000 real events, replay them, and verify they land correctly.
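You rarely need the whole capture for a replay test. Standard tools can trim it first; the sample file here is hypothetical, and either result can be piped into `nanosync apply` exactly as above:

```shell
# Sample capture standing in for a real one
printf '%s\n' \
  '{"_ns_op":"INSERT","_ns_table":"public.orders","id":1}' \
  '{"_ns_op":"INSERT","_ns_table":"public.order_items","id":10}' \
  > events.jsonl

# Keep only the first N events
head -n 1 events.jsonl > sample.jsonl

# Keep only one table's events (-c keeps one object per line, i.e. valid JSONL)
jq -c 'select(._ns_table == "public.orders")' events.jsonl > orders-only.jsonl
```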
## Using a stdout sink in a full pipeline

Define `stdout` as a named sink connection in your pipeline YAML. Useful for CI pipelines where you want to assert on event output without a real sink:

```yaml
connections:
  - name: prod-postgres
    type: postgres
    dsn: "postgres://..."
  - name: local-out
    type: stdout

pipelines:
  - name: debug-pipeline
    source:
      connection: prod-postgres
      tables: [public.orders]
    sink:
      connection: local-out
```
Start the server and apply normally:

```shell
nanosync start dev
nanosync apply --file pipeline.yaml
```
Events appear in the server’s stdout. Pipe or redirect the server process output to jq or a log file as needed.
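One way to wire that up in a shell session — a sketch only, and it assumes the server writes events to stdout and diagnostics to stderr, which may not match your logging configuration:

```shell
# Run the server in the background with events captured to a file
nanosync start dev > pipeline-events.jsonl 2> server.log &
nanosync apply --file pipeline.yaml

# Later, inspect what the pipeline emitted
jq 'select(._ns_op == "DELETE")' pipeline-events.jsonl
```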
## CI assertions on stdout output

In a CI script, pipe the stream into `jq -e` and assert on its exit status:

```shell
nanosync stream \
  --source "postgres://user:pass@localhost/testdb" \
  --tables "public.orders" \
  --timeout 5s \
  | jq -e 'select(._ns_op == "INSERT") | .id' > /dev/null \
  && echo "✓ INSERT events received"
```
`--timeout` stops the stream after the specified duration, making it safe for scripted use.
stdout + jq is the fastest way to verify a new connector config before pointing it at a real sink. Get the event shape right locally, then swap the sink connection.