Quickstart
Install nanosync and stream your first database changes in under 2 minutes.
One binary. No Kafka. No config files.
Install
curl -fsSL https://nanosync.dev/install.sh | sh

Or with Homebrew:

brew install nanosync/tap/nanosync

Or with Docker:

docker pull ghcr.io/nanosyncorg/nanosync:latest

See the Downloads page for all platforms and architectures.
# macOS Apple Silicon
curl -Lo nanosync.tar.gz https://github.com/nanosyncorg/nanosync-public/releases/download/v0.0.4/nanosync_v0.0.4_darwin_arm64.tar.gz
mkdir -p ~/.nanosync/bin
tar -xzf nanosync.tar.gz && mv nanosync ~/.nanosync/bin/
export PATH="$HOME/.nanosync/bin:$PATH"

See data moving
No IAM, no YAML, no WAL config. Just point at a database and go:
nanosync stream \
--source "postgres://user:pass@localhost:5432/mydb" \
--tables "public.orders" \
--snapshot-only
You’ll see every row as a JSON event hitting stdout:
INF snapshot started tables=1 workers=4 partitions=48
INF snapshot progress done=48/48 rows=4,800,000 elapsed=2.1s
{"_ns_op":"INSERT","_ns_table":"public.orders","id":9022,"status":"pending","amount":149.99}
{"_ns_op":"INSERT","_ns_table":"public.orders","id":9021,"status":"shipped","amount":149.99}
That’s nanosync. No daemon, no state written anywhere. Ctrl-C and it’s done.
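Because events arrive as newline-delimited JSON on stdout, the output composes with ordinary Unix tools. A minimal sketch, using two of the sample events above in place of a live stream, that filters for pending orders with grep:

```shell
# Simulate two nanosync events (copied from the output above)
# and keep only rows whose status is "pending":
printf '%s\n' \
  '{"_ns_op":"INSERT","_ns_table":"public.orders","id":9022,"status":"pending","amount":149.99}' \
  '{"_ns_op":"INSERT","_ns_table":"public.orders","id":9021,"status":"shipped","amount":149.99}' \
  | grep '"status":"pending"'
```

In practice you would pipe the real stream the same way, e.g. `nanosync stream ... | grep '"status":"pending"'`, or swap grep for jq if you need structured filtering.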
Want live CDC instead of a snapshot? Remove --snapshot-only — nanosync will snapshot existing rows, then stay attached and stream inserts, updates, and deletes as they happen.
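For example, the same command as above with the flag dropped (this needs the running Postgres instance from the earlier step, so it is shown here rather than run):

```shell
# Snapshot existing rows, then stay attached and stream
# inserts, updates, and deletes until you Ctrl-C:
nanosync stream \
  --source "postgres://user:pass@localhost:5432/mydb" \
  --tables "public.orders"
```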
Ready for production? See the Guides section to set up a persistent pipeline into BigQuery, Kafka, or files.
Next steps
- Postgres → BigQuery guide — end-to-end production pipeline
- SQL Server → BigQuery guide — SQL Server CDC, tlog mode
- Configuration reference — all YAML options
- CLI reference — full command listing