Things You Need To Know About SQL Data Sync Service

The following is a collection of things I’m usually asked, things I usually encounter and clarify in forums, and some other little-known facts. So, I thought I’d post it here and update it from time to time.

1. Provisioning

Before a member database can participate in synchronization, the Data Sync Service needs to create database objects to track, enumerate and apply changes. This step is what is commonly referred to as Provisioning.

Provisioning is the process of preparing the databases to participate in the sync community. It creates triggers and tracking tables for each table being synched, creates stored procedures to select and apply changes, and adds a user-defined table type that’s used for bulk change application.

Adding a table to more than one sync group also increases the number of these objects.

Provisioning also initially populates the tracking tables with metadata for existing rows.
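As a rough sketch of what provisioning adds, here is the kind of tracking table and trigger a tracked table gets. All object names, columns and types below are hypothetical illustrations of the pattern, not the service’s actual naming convention or metadata layout:

```sql
-- Illustrative only: sketches the shape of what provisioning creates for a
-- hypothetical table dbo.Product (ProductId int PK).
CREATE TABLE [DataSync].[Product_dss_tracking] (
    ProductId             int NOT NULL PRIMARY KEY, -- PK of the tracked table
    sync_row_is_tombstone bit NOT NULL DEFAULT 0,   -- 1 once the row is deleted
    last_change_datetime  datetime NULL             -- when the row last changed
    -- ... other change-tracking metadata columns
);
GO

CREATE TRIGGER [dbo].[Product_dss_insert_trigger]
ON [dbo].[Product] AFTER INSERT
AS
BEGIN
    -- record change metadata for every inserted row
    INSERT INTO [DataSync].[Product_dss_tracking]
        (ProductId, sync_row_is_tombstone, last_change_datetime)
    SELECT i.ProductId, 0, GETUTCDATE()
    FROM inserted AS i;
END;
```

Update and delete triggers follow the same pattern, with the delete trigger flipping the tombstone flag rather than removing the tracking row.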

Since the Data Sync Service is largely based on Sync Framework, the blog post Sync Framework Provisioning more or less describes what gets provisioned, except that the objects created by the Data Sync Service follow a different naming convention.

More information: Provisioning Destination Databases

2. Change Tracking

Change tracking in the Data Sync Service is done via triggers and changes are recorded in the tracking tables.

As such, there is an added overhead of triggers firing on insert, update and delete operations.

Likewise, there’s also the extra overhead of storing change tracking metadata in terms of space consumed. The tracking table contains the PK of the table being synched, plus any column used in filtering and a few other columns for change tracking metadata. For narrow tables, you will sometimes find that the tracking table is even bigger than the table being tracked, since the tracking table contains metadata for both existing and deleted rows.

You should take the change tracking into consideration when sizing the Azure SQL Database.

The change tracking tables are initially populated with metadata for existing rows during provisioning, and those rows are treated as inserts.

Since change tracking is done via triggers, bulk operations that normally don’t fire triggers (bcp, bulk insert, truncate, the SSMS Import/Export wizard) will not update the change tracking tables. As such, changes made using these bulk operations won’t be picked up by the service. This leaves the table being synched and its corresponding tracking table out of sync. If you need to use bulk operations, make sure to override the default behaviour; see: Controlling Trigger Execution When Bulk Importing Data
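For example, BULK INSERT supports a FIRE_TRIGGERS option that forces the change-tracking triggers to run, so the tracking tables stay in step (the table and file names here are placeholders):

```sql
-- BULK INSERT skips triggers by default; FIRE_TRIGGERS makes them fire.
BULK INSERT dbo.Customer
FROM 'C:\data\customer.dat'
WITH (FIRE_TRIGGERS);
```

The bcp equivalent is the `-h "FIRE_TRIGGERS"` hint. TRUNCATE TABLE has no such option since it never fires triggers; use DELETE instead if you need the removals tracked.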

Note also that change tracking is at row level. The service can’t tell which column has changed; it can only tell that the row has changed. If you change a column in a twenty-column table that is being synched, the entire row is sent, not just the column that was changed.

Change tracking also does not keep a chronological record of the changes made to a row. When synching changes, the service is not trying to replay the actions that occurred for a row. If you insert a row and then update it, you will not get one record for the insert and another for the subsequent update, with each replayed separately when the change is applied. If a row undergoes many updates, no history of those updates is kept. Only the last update is synched.

Additional Information: Considerations for Using Azure Data Sync, Managing Synchronization Conflicts, Database Considerations and Constraints

3. Initial Synchronization

When synching for the first time between the hub and a member, make sure that only one of them contains data.

As previously mentioned, Provisioning initially populates the tracking tables with metadata.

Since the pair has not synched before, and the synchronization knowledge is initially empty and only populated after a sync, there is no way for the service to tell whether the two databases already contain the same set of data.

So, if the pair has been prepopulated with data, the service can’t tell whether they hold identical data; it will assume the rows from a member are all inserts and will try to insert them on the other member. If the data happens to be identical (same PKs), this will cause conflicts and slow down the sync, and there is a maximum conflict threshold that, when reached, will cause the sync to fail.

Please remember this as well when deprovisioning or removing a sync group and recreating it. When you recreate a sync group and both the member and the hub already contain data, you will run into this same issue. The same applies when you remove a previously synched member from a sync group and add it back.

More information: Initial Synchronization

4. Data sync not schema sync

The Data Sync Service syncs data only. While it’s capable of creating the tables being synched on a member database during provisioning, it only creates the bare minimum needed to sync. It will not create or sync stored procedures, triggers or views from the source database. If you want the target database members to have a full-fidelity copy of the schema from the source, script the source database and run the script on the target database members first.

Moreover, if you change the schema of your member databases, these changes will not be picked up and synched by the service. Only columns explicitly selected as part of the dataset in the Sync Group will be synched. If you want to include or remove columns as a result of a schema change, you will have to modify the sync group.

Additional Information: Provisioning Destination Databases, Edit Sync Group

5. Hub-spoke

The Data Sync Service uses a hub-and-spoke sync topology.

Each member syncs with the hub only; no member syncs with another member directly. Changes from a member go through the hub first, and that’s where the other members pick them up. Internally, the service creates a “pairing” between each member and the hub and isolates the synchronization knowledge for every member-hub pair.

If you’re familiar with Sync Framework, this means there is a 1-to-1 correspondence in a Sync Group between a member scope and a hub scope, such that the number of scopes in the hub is equal to the number of members in the Sync Group.

Note that only an Azure SQL Database can be designated as a Hub.

6. Sync Order

The order in which the members are synched is non-deterministic. A member may be the first to sync on a particular sync run and the last to sync on the next.

The Data Sync Service loops through each member and syncs it with the hub, and there is no way to specify the order in which the members synchronize with the hub.

Additional Information: Managing Synchronization Conflicts

7. Conflict resolution

A conflict occurs when a change is made on both the source and the destination. For example, a row was updated on both a member and the hub, or a row was updated on a member and the same row was deleted on the hub. To handle conflicts, you need to set a conflict resolution policy.
– Hub wins means that if a conflict is encountered between client data and hub data, the client data is ignored and the hub data wins.
– Client wins means that if a conflict is encountered between client and hub data, the client data wins. In a multi-member sync group, this means the last update from the last client to sync in a sync run is the change recorded in the hub and eventually synched to the other members. (Last writer wins.)

Additional Information: Managing Synchronization Conflicts, Conflict Resolution

8. Data Convergence

Note that a sync run does not necessarily result in data convergence on all members. Syncs happen between a member and the hub. Assuming you have 3 members, that’s 3 syncs: Member 1/Hub, Member 2/Hub and Member 3/Hub.

The Data Sync Service is hub-and-spoke synchronization. All data syncs go through the hub; the members do not sync with one another. A change needs to reach the hub before it reaches the other members.

So in a sync run, assuming Member 1 syncs first, it will not have any changes that were subsequently synched from Members 2 and 3. Members 2 and 3, though, will have the changes from Member 1, since by the time they sync, Member 1 has already uploaded its changes.

In that scenario, to get them all fully synched, you may have to perform two sync runs. On the first run, Member 1 misses the changes from Members 2 and 3. On the second run, the changes from Members 2 and 3 are already in the hub, so Member 1 finally gets them.

Additional Information: Managing Synchronization Conflicts

9. Batching

Internally, when sending changes, the Data Sync Service sends them in batches. The current batch size is 10MB.

During synchronization, the service applies each batch as a transaction. So if you have 100MB worth of changes, you get 10 batches, and each batch is committed separately. For example, if the service successfully applies the first 5 batches and gets a fatal error on the 6th, the first 5 batches have already been synched and subsequent syncs will start where the service left off, at the 6th batch.

10. Transactional Consistency

Sync transactions are different from business transactions. The Data Sync Service uses transactions to apply changes in a batch; however, it is not aware of the business transactions that created those changes.

For example, suppose you have an Order and an OrderDetail table. In your application, you will most likely add an order by inserting a row into the Order table and the corresponding child rows into the OrderDetail table in a single transaction.

The Data Sync Service is not aware of these transaction boundaries, though. In fact, you can create two different sync groups to synchronize the Order and OrderDetail tables separately. The service tracks changes for every row at the table level: a change has happened, and that is all it knows. So you can run into cases where the parent row wasn’t applied, and when the child rows are applied they fail because of FK constraints. What was synched on one table is completely invisible to the other table being synched; the synching of the OrderDetail table is completely ignorant of the result of synching the Order table.
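To make the failure mode concrete, here is a hypothetical Order/OrderDetail pair; if change application delivers the child row before the parent has arrived, the insert fails on the FK (schema and values are illustrative):

```sql
CREATE TABLE dbo.[Order] (
    OrderId int NOT NULL PRIMARY KEY
);
CREATE TABLE dbo.OrderDetail (
    OrderDetailId int NOT NULL PRIMARY KEY,
    OrderId       int NOT NULL
        REFERENCES dbo.[Order](OrderId)  -- child rows need the parent first
);

-- Effectively what change application does when the child row arrives first:
INSERT INTO dbo.OrderDetail (OrderDetailId, OrderId)
VALUES (1, 100);
-- Fails with an FK violation because order 100 has not been applied yet.
```

Syncing both tables in the same sync group, with the parent table applied first, avoids this particular ordering problem.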

A common misconception about the all-or-nothing nature of transactions is that if we try to sync 10 rows and one row encounters an error while being applied, the service will roll back the other nine rows. This is not the case in the sync service; the failure of a specific row is raised as a conflict and handled through conflict resolution. So you get a sync result that says 9 rows were successfully applied and 1 failed.

Scenarios that will cause an entire batch to roll back include a fatal error and errors/conflicts reaching the maximum threshold (currently 1,000).

Additional Information: Managing Synchronization Conflicts

11. Primary Key Updates

The Data Sync Service currently does not pick up or sync changes in primary key values.

If you modify the primary key value of a row, the corresponding PK stored in the tracking table does not change. This is because the update trigger attempts to update the tracking table by joining the inserted logical table to the corresponding entry in the tracking table. The inserted logical table contains the new PK value whereas the tracking table contains the old PK value, so the join condition in the trigger fails.

The easiest workaround for this is to insert a new row with the new PK and delete the old one. Of course, this gets more complicated if you have child tables with FK constraints, as you have to update them to point to the new PK value as well.
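A sketch of that workaround, using hypothetical Customer and Order tables where a customer’s PK changes from 100 to 200:

```sql
BEGIN TRANSACTION;

-- 1. Insert a copy of the row under the new PK (tracked as an insert).
INSERT INTO dbo.Customer (CustomerId, Name)
SELECT 200, Name
FROM dbo.Customer
WHERE CustomerId = 100;

-- 2. Repoint any child rows at the new PK.
UPDATE dbo.[Order]
SET CustomerId = 200
WHERE CustomerId = 100;

-- 3. Remove the row with the old PK (tracked as a delete).
DELETE FROM dbo.Customer
WHERE CustomerId = 100;

COMMIT TRANSACTION;
```

Because the insert and delete fire the change-tracking triggers normally, the service picks up the “PK change” as an insert plus a delete, which it can sync.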

More information: Sync fails in the following scenarios

12. Filtering

Every column used in a filter is added to the tracking tables in addition to the PK.

If you filter based on two columns, those two columns are duplicated in the tracking tables. The more columns you use as filters, the more values you duplicate in the tracking tables. Again, for narrow tables, this may result in a tracking table that is bigger than the table being synched.
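Illustratively, a tracking table for a Customer table filtered on State might look like this (object names and columns are hypothetical, not the service’s actual layout); note the filter column duplicated next to the PK:

```sql
CREATE TABLE [DataSync].[Customer_dss_tracking] (
    CustomerId            int NOT NULL PRIMARY KEY, -- PK of the tracked table
    [State]               char(2) NULL,             -- duplicated filter column
    sync_row_is_tombstone bit NOT NULL DEFAULT 0    -- 1 once the row is deleted
    -- ... other change-tracking metadata columns
);
```

Keeping the filter value in the tracking table lets changes be evaluated against the filter without joining back to the base table, including for deleted rows.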

The Data Sync service also does not support the automatic deletion of rows that no longer satisfy a filter condition.

For example, let’s say we filter based on State, and we define one sync group with the filter State=WA and another with the filter State=CA. When a member in the WA sync group syncs, it gets the rows that pass the filter condition and receives rows having State=WA. The same thing happens when a member in the CA sync group syncs: it gets rows with State=CA.

We then update a row whose State=WA to State=CA. The row has now gone out of scope: it no longer satisfies the filter condition of the WA sync group, and it enters the CA sync group.

When we sync the CA sync group, it gets the row whose state was changed to CA.

However, when we sync the WA sync group, members that previously received the row when its value was State=WA will not get an update instruction to change the state to CA, nor will they get a delete instruction to remove the row that no longer satisfies the filter condition. You now have a member database holding rows that have since moved out of its filter condition and now belong to another sync group.

13. Row Size

Because of the batch size mentioned in item #9, you cannot have a row that exceeds the 10MB batch size.

If you’re synching blobs (images, docs, etc.), you will most likely hit this limit and encounter an error. The reason you cannot sync a row bigger than this batch size is that the service will not and cannot split a row into two separate batches. A row is the smallest unit of change it can sync.

Imagine the overhead of sending the first 5 columns of a row because that’s what fits in the batch, then sending the next 3 columns and reassembling the row before applying it. Applying the partial 5 columns first and the remaining 3 columns next is also problematic: you would then have applied a single row change from the source as two separate changes at the destination.

14. Sync knowledge size

The Sync knowledge stores information about the replicas/copies that are synching and associated metadata related to timestamp ranges of changes synched between replicas.

The sync knowledge is stored in a row inside DataSync.scope_info_dss and is exchanged between replicas trying to sync with each other.

The batch size mentioned above also applies to the sync knowledge: a sync knowledge that grows to more than 10MB will cause the sync to fail. Filters and a lot of deletes may fragment the sync knowledge and cause it to grow, so you might run into this issue.

15. Same Table in multiple sync groups

A table can be added to more than one sync group.

You should, however, make sure that this does not result in sync loops.

Sync groups on the same table with different column counts are also permitted but may cause issues.

For example, you may have a table with three columns: Id – PK, Col1 – not nullable, Col2 – not nullable.

You can create a Sync Group that has all three columns and another Sync Group that has only two columns: Id and Col1 only.

The second Sync Group, though, will fail to sync an insert, since Col2 is not nullable and is not part of its dataset.
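A hypothetical repro of that failure: because the second sync group’s dataset excludes Col2, applying one of its inserts at the destination effectively omits the column:

```sql
CREATE TABLE dbo.Demo (
    Id   int NOT NULL PRIMARY KEY,
    Col1 int NOT NULL,
    Col2 int NOT NULL  -- excluded from the second sync group's dataset
);

-- Effectively what the narrower sync group runs when applying an insert:
INSERT INTO dbo.Demo (Id, Col1)
VALUES (1, 10);
-- Fails: cannot insert NULL into Col2 (not nullable, no default).
```

Making Col2 nullable or giving it a DEFAULT constraint on the destination would let the narrower sync group’s inserts succeed.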

16. Synchronization Loops

A Synchronization loop normally occurs when you have tables that are part of more than one sync group.

What happens is that a change applied by one sync group is detected as a change by the other sync group, and the change made by that other sync group is in turn picked up by the first, creating a loop in which the same change goes back and forth.

Additional Information: Synchronization Loops, Design to Avoid Synchronization Loops

17. Metadata Cleanup

The Data Sync Service periodically does metadata clean-up, removing entries from the tracking tables that are older than the retention period.

For example, there is no point keeping the metadata for deleted rows when these changes have long been propagated to the member databases.

Currently, this retention period is set to 45 days. That means delete metadata for rows deleted more than 45 days ago is cleaned up. If a member has not synched within this retention period, the service will detect it as an outdated member and prevent it from synching.

Additional Information: Why is my database status “Out-of-Date”?, Avoid Out-of-Date Databases and Sync Groups

18. Data Sync Agent

The Data Sync Agent is used to allow on-premises SQL Server databases to participate in a sync group.

The Data Sync Agent does not connect directly to Azure SQL Database, and the Azure-based Data Sync Service does not connect directly to the on-premises SQL Server databases either.

The Agent uses WCF for communicating with the Azure-based Data Sync Service.

So, you don’t need to open up port 1433 on-premises to allow the Data Sync Service to connect to the on-premises database, since the service never connects to it directly. Neither do you need to allow the Data Sync Agent’s IP address on the Azure SQL Database firewall, since the agent never connects directly to the Azure SQL Database.

Also, the minimum requirement for the on-premises database is SQL Server 2005 SP2. However, you will find that Data Sync also works with SQL Server Express and even with a SQL Server 2012 LocalDB instance.

Additional Information: Data Security, SQL Server 2012, SQL Express 2012 LocalDB, Sync Framework, Sync Framework v4 and SQL Azure Data Sync

And lastly, it’s still a Preview

So provide feedback and vote for features.

Additional Information: SQL Azure Forums and SQL Data Sync Feature Voting Site

For more information on FAQs, known issues, best practices and troubleshooting guide, see the following:

Best Practices

SQL Data Sync Limitations

FAQ (SQL Data Sync)

Troubleshooting Guide (SQL Data Sync)

Known Issues (SQL Data Sync)



  1. Hello June, thanks for this overview. I had problems with slow synchronization when initial data is present. Now I know that performance is impacted when initial data is present. June, is there any way to handle the pre-populated row conflicts manually? How should I start: do I need to build my own provider, SqlSyncProvider, or can I override one? What method do I need to handle? Is it possible to handle it in code and not have this performance impact?

    1. I modified the script for the default inserts into the tracking tables, filtered by date, so that it filters the records. But that is not enough because I can’t filter all tables. So I need a better solution and control over the process. Where can I start? Do I need to build a custom provider, and how? Thanks for any pointer in the right direction.

      1. as I have mentioned, the current service has no API, so there is no programmatic way to hook into it and influence the sync. have you thought of creating multiple sync groups instead with different filter values?

    2. the Azure Data Sync Service has no API right now and it uses a slightly different SqlSyncProvider. The latest release of the service has improvements on initial sync, so you might want to consider not pre-populating member databases.

  2. Hello June and thank you for the reply. I am using Sync Framework in .NET. Basically I am creating the SqlSyncProvider in code and calling Synchronize on SyncOrchestrator. But it looks like the performance is not so good on a large DB with pre-existing data. I filtered all the data that I could; now I need better control. I need the data, because it is a real-life requirement. So basically I want to code my own provider in C# or change the SQL scripts, maybe to manually filter the data in code after the population of the tracking data. Any pointer or possibility to get better control of this performance by building my own provider?

    1. the slow performance is not because of the size of the data but because the pre-existing data is firing conflicts. are the two databases exactly the same? if they are, just provision one of them, back it up, restore it on the other one and perform PerformPostRestoreFixup

  3. Excellent article, sir. Cleared many doubts and concepts.

  4. Helpful article. Thanks. The problem I’m having is contending with schema changes. Is there a best practice for the workflow, i.e. deprovision client > modify schema > provision client > deprovision server > modify server > provision server? What happens to changes not yet synchronized? Are they lost?

    1. when you deprovision, the change tracking metadata is removed as well (assuming you deprovision the store or the last scope against the table). so whatever has been tagged as changed loses that tagging. i suggest you sync first before you deprovision.

  5. Pradeep Kumar:

    Hello June, thanks for this overview. Helpful article.
    I have a problem handling multiple versions of a database in Sync Fx 2.1.
    We have a server DB, and some clients with the same database structure are already
    syncing with the server DB. In the future, if we have to add some columns or tables for 3 clients,
    how do we handle both versions of the database in synching, given that the rest of the clients (older
    database version) and the changed clients (new database version) must run against the same changed server database?

    Thank you in advance.

    1. depends on the schema changes you’re making…are you just adding columns? resizing? removing?

  6. Excellent article (Y).
    Can you explain, step by step, how to implement synchronization between Android and MS SQL?

    1. there’s no Sync Framework sync provider for Android. you will have to write your own using the Sync Framework Toolkit.

  7. Sir,
    When I provision MS SQL Server 2008 as the server, I get the error: User does not have permission to alter database ‘@db’, the database does not exist, or the database is not in a state that allows access checks. ALTER DATABASE statement failed. Please guide me.

    1. check that the account you use to connect to the database has enough permissions.

  8. Wish you could specify what time of day a sync will occur when the sync frequency is daily or longer.

  9. Stephan:

    JuneT, excellent article. Thank you very much for this.

    I have a question which I haven’t been able to find an answer to: if the hub and spoke databases go out of sync for any reason, what is the correct process to get them all back into sync with each other?

    It seems to me, from our experience with using Azure SQL Data Sync, that once the databases go out of sync, there’s no way to get them back into sync, even when subsequent data syncs run successfully.

    1. unfortunately, there’s no supported or publicly documented way to work around it. the service uses Sync Framework, but it’s not easy/safe to apply the same hacks to the service.

  10. Excellent post. Thank you so much for this. Pertinent even today.
    Saved me a ton of time trying to figure out why the sync was not working on an AzureAWS setup. (Turned out to be the bulk import issue.) Setting the .FireTrigger flag in the bulk importer code solved this for me.

    Thanks again for taking the time to put this together. With the sparse documentation around Sync, this was very much required.
