New SQL Connectors in Public Preview
- Alexander Komyagin

- Mar 9
- 3 min read

We're excited to share that support for SQL Server, Oracle, and PostgreSQL via our SQLBatch connector is now in public preview. The SQLBatch connector supports both initial sync and CDC (all query-based), and is available in our Open Source Dsync. Check it out on GitHub.
Usage
The SQLBatch connector accepts a YAML configuration file with user-defined SQL queries declaring "virtual namespaces" to be migrated:

```yaml
driver: sqlserver # or oracle, postgres
connectionstring: sqlserver://user:password@host:port?database=DbName
mappings:
  - namespace: db.users
    query: "SELECT id, name, email FROM users"
    partitionquery: "SELECT id FROM users WHERE id % 4 = 0 ORDER BY id"
    cols: [id]
    limit: 1000
    changes:
      - initialcursorquery: "SELECT MAX(updated_at) FROM users"
        query: "SELECT id, 'U', updated_at FROM users WHERE updated_at > $1 ORDER BY updated_at LIMIT 1000"
        interval: 5s
```

Each virtual namespace has the following key properties:
- Unique name
- Query to get the full count
- Query to partition the dataset
- Query to retrieve the objects
- One or more Change Tracking queries: one for the main table and one for each embedded object
To start Dsync with the SQLBatch connector and the config saved in cfg.yml:

```shell
dsync sqlbatch --config=cfg.yml /dev/null --log-json
```

A complete example for migrating or syncing the sample Microsoft AdventureWorks database from SQL Server to Cosmos DB for NoSQL is available here.
Read more about the connector usage in the docs.
To help create or bootstrap the config file, you can use your favorite AI coding agent (like Claude Code, Codex, Gemini, or Factory.AI Droid) with the Skills framework, or convert MongoDB's Relational Migrator mappings using our AI Agent. Both assets will be published soon, but feel free to contact us in the meantime if you need them.
Note that the Open Source version of Dsync only supports a single namespace for CDC. For CDC with multiple namespaces, consider our Enterprise version.
How it works
Dialects
In the SQLBatch connector, dialects or drivers abstract away the differences between SQL database vendors (SQL Server, PostgreSQL, DB2, etc.). Each dialect handles:
1. SQL syntax variations - Different databases use different syntax for pagination (LIMIT vs TOP), string concatenation, date functions, etc.
2. Data type mappings - How native types (e.g., NVARCHAR, BIGINT, DATETIME2) translate to the target document store
3. CDC mechanisms - Change Data Capture works differently across databases (SQL Server uses CT/CDC tables, PostgreSQL uses logical replication slots, DB2 uses journals)
4. Query generation - The dialect generates vendor-specific queries for initial loads and incremental syncs
When you configure a SQLBatch source, you specify the dialect (e.g., driver: sqlserver or driver: postgres), and the connector uses the appropriate SQL generation and type conversion logic for that database platform.
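To make the dialect idea concrete, here is a minimal Python sketch of how pagination syntax could be abstracted per driver. The class and method names are hypothetical illustrations, not Dsync's actual API:

```python
# Hypothetical sketch: each dialect knows its vendor's pagination syntax.
class Dialect:
    def paginate(self, query: str, limit: int) -> str:
        raise NotImplementedError

class SQLServerDialect(Dialect):
    def paginate(self, query: str, limit: int) -> str:
        # SQL Server places TOP immediately after SELECT
        return query.replace("SELECT", f"SELECT TOP {limit}", 1)

class PostgresDialect(Dialect):
    def paginate(self, query: str, limit: int) -> str:
        # PostgreSQL appends a LIMIT clause at the end
        return f"{query} LIMIT {limit}"

dialects = {"sqlserver": SQLServerDialect(), "postgres": PostgresDialect()}

q = "SELECT id, name FROM users"
print(dialects["sqlserver"].paginate(q, 1000))
print(dialects["postgres"].paginate(q, 1000))
```

The same pattern extends to type mappings and CT query generation: the connector picks one `Dialect` implementation at startup based on the `driver` field and routes all SQL generation through it.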
Denormalization
The connector uses a user-defined query to denormalize the data at the source. Recent versions of the major SQL databases have strong JSON support, and their query engines can perform complex JOINs far more efficiently than any streaming processor.
For additional transformations beyond what the RDBMS can do, such as converting JSON into native BSON types, Dsync supports extensible transformers.
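As an illustration of source-side denormalization, a query for a users namespace might embed related rows as a JSON array. This sketch uses PostgreSQL syntax and assumes a hypothetical `orders` table alongside the `users` table from the config example:

```sql
-- Embed each user's orders as a JSON array (PostgreSQL syntax).
-- The orders table and its columns are illustrative assumptions.
SELECT u.id,
       u.name,
       u.email,
       (SELECT json_agg(json_build_object('id', o.id, 'total', o.total))
        FROM orders o
        WHERE o.user_id = u.id) AS orders
FROM users u;
```

Because the JOIN and aggregation happen inside the database, each row arrives at the connector already shaped like the target document.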
Change Tracking / CDC
The SQLBatch connector relies on a change tracking (CT) query and a configurable polling interval to determine what objects have changed at the source. Since the final JSON objects are often composed from several tables, the connector supports multiple CT queries per namespace - one for each of the source tables that are embedded in the object.
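For instance, a namespace whose documents embed rows from a second table could declare one CT query per source table. This config fragment is illustrative; the `orders` table and its columns are assumptions, not part of the sample config above:

```yaml
# Illustrative: two CT queries for one namespace - one for the main
# users table, one for an embedded orders table (hypothetical).
changes:
  - initialcursorquery: "SELECT MAX(updated_at) FROM users"
    query: "SELECT id, 'U', updated_at FROM users WHERE updated_at > $1 ORDER BY updated_at LIMIT 1000"
    interval: 5s
  - initialcursorquery: "SELECT MAX(updated_at) FROM orders"
    query: "SELECT user_id, 'U', updated_at FROM orders WHERE updated_at > $1 ORDER BY updated_at LIMIT 1000"
    interval: 5s
```

A change to either table surfaces the affected document's key, so the composed object gets refreshed regardless of which underlying table changed.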
The modified or deleted objects are then efficiently refetched in batches using the main query - hence the name "SQLBatch". Compared to WAL-based log parsing, this approach eliminates a whole class of possible data integrity issues, requires no custom software on database servers, and needs only regular read access.
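The poll-and-refetch cycle described above can be sketched in Python roughly as follows; all function names are illustrative stand-ins for the connector's internals, not Dsync's actual API:

```python
import time

# Hypothetical sketch of the CT polling loop: run the CT query from a
# cursor, refetch changed objects in batches via the main query, repeat.
def poll_changes(cursor, fetch_changed_ids, refetch_batch, apply,
                 interval=5.0, batch_size=1000, max_cycles=None):
    cycles = 0
    while max_cycles is None or cycles < max_cycles:
        # CT query returns changed keys plus an advanced cursor position
        changed_ids, cursor = fetch_changed_ids(cursor)
        # Refetch full objects with the main query, in batches
        for i in range(0, len(changed_ids), batch_size):
            batch = changed_ids[i:i + batch_size]
            apply(refetch_batch(batch))
        cycles += 1
        if max_cycles is None or cycles < max_cycles:
            time.sleep(interval)
    return cursor
```

The cursor (e.g., a max `updated_at` value) is what makes each poll incremental: only rows past the previous high-water mark are reported as changed.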
At Adiom, we're making usually complex and messy data migration and replication easy. You can check the SQLBatch connector out on GitHub and read more about its usage in the docs.
We also have DB2 support for SQLBatch available in Private Preview. Contact us to request access.
