Part 2 – Bidirectional Synchronization where the Client and Server Primary Keys are different

As a follow-up to Part 1 – Upload Synchronization where the Client and Server Primary Keys are different, let’s extend the code from the previous post to enable bidirectional synchronization where the client and server tables have different PKs.

Let’s extend the scenario so that we now want to download changes to the rows belonging to a particular client. In the last post, we identified each client with a GUID ClientId. We’ll use the same ClientId to filter the download.

Here’s the revised code for the server side provisioning:

Code Snippet
//setup the server connection
var serverConn = new SqlConnection("Data Source=localhost; Initial Catalog=Aggregator; Integrated Security=True");

// define a new scope named TestScope
var scopeDesc = new DbSyncScopeDescription("TestScope");

// get the description of the TestTable table from the server database
var tableDesc = SqlSyncDescriptionBuilder.GetDescriptionForTable("TestTable", serverConn);

// add the table description to the sync scope definition
scopeDesc.Tables.Add(tableDesc);

// create provisioning scope based on the scope definition
var serverProvision = new SqlSyncScopeProvisioning(serverConn, scopeDesc);

// set the filter column on the TestTable table to ClientId
serverProvision.Tables["TestTable"].AddFilterColumn("ClientId");

// set the filter value to this client's GUID
serverProvision.Tables["TestTable"].FilterClause = "[side].[ClientId] = '89198182-24C9-4CD7-8427-F6790270E62C'";

if (!serverProvision.ScopeExists("TestScope"))
{
    // skip the creation of the table since it already exists on the server
    serverProvision.SetCreateTableDefault(DbSyncCreationOption.Skip);

    //apply the scope definition
    serverProvision.Apply();
}
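For completeness, here’s a minimal sketch of the matching client-side provisioning, assuming the setup from Part 1: the client database has the same TestTable but with only the Id primary key and no ClientId column. The "ClientDb" catalog name is an assumption for illustration.

```csharp
// setup the client connection (catalog name is an assumption for this sketch)
var clientConn = new SqlConnection("Data Source=localhost; Initial Catalog=ClientDb; Integrated Security=True");

// define the same scope name on the client side
var clientScopeDesc = new DbSyncScopeDescription("TestScope");

// describe the client's TestTable (Id PK only, no ClientId column)
clientScopeDesc.Tables.Add(SqlSyncDescriptionBuilder.GetDescriptionForTable("TestTable", clientConn));

var clientProvision = new SqlSyncScopeProvisioning(clientConn, clientScopeDesc);

if (!clientProvision.ScopeExists("TestScope"))
{
    // the client table already exists as well, so skip table creation
    clientProvision.SetCreateTableDefault(DbSyncCreationOption.Skip);
    clientProvision.Apply();
}
```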

Notice that we specified a static filter on the ClientId column, using the same value we stamped on the rows during the upload.

We’ll change the sync direction to SyncDirectionOrder.UploadAndDownload to enable bidirectional sync. We then subscribe to the ChangesSelected event on the server side this time, so we can remove the ClientId column before the change dataset reaches the client.

Code Snippet
// create the sync orchestrator
var syncOrchestrator = new SyncOrchestrator();

// set the local provider of the orchestrator
syncOrchestrator.LocalProvider = new SqlSyncProvider("TestScope", clientConn);

// set the remote provider of the orchestrator
syncOrchestrator.RemoteProvider = new SqlSyncProvider("TestScope", serverConn);

// set the direction of the sync session to upload and download
syncOrchestrator.Direction = SyncDirectionOrder.UploadAndDownload;

// subscribe to the ChangesSelected event on both providers
((SqlSyncProvider)syncOrchestrator.LocalProvider).ChangesSelected += ClientChangesSelected;
((SqlSyncProvider)syncOrchestrator.RemoteProvider).ChangesSelected += ServerChangesSelected;

// execute the synchronization process
var syncStats = syncOrchestrator.Synchronize();

// print statistics
Console.WriteLine("Start Time: " + syncStats.SyncStartTime);
Console.WriteLine("Total Changes Uploaded: " + syncStats.UploadChangesTotal);
Console.WriteLine("Total Changes Downloaded: " + syncStats.DownloadChangesTotal);
Console.WriteLine("Complete Time: " + syncStats.SyncEndTime);
Console.WriteLine(String.Empty);
Console.ReadKey();

The sync knowledge on the client side recognizes only the Id column as the PK of the client table, so when the client checks whether it already has the rows coming from the server, the comparison must be made on the Id column alone.

So let’s take out the ClientId column:

Code Snippet
static void ServerChangesSelected(object sender, DbChangesSelectedEventArgs e)
{
    // check whether we're synching the table we're interested in
    if (e.Context.DataSet.Tables.Contains("testtable"))
    {
        var columns = e.Context.DataSet.Tables["testtable"].Columns;

        if (columns.Contains("ClientId"))
        {
            // remove the server-only column before the changes reach the client
            columns.Remove("ClientId");
        }
    }
}

There you go: the client should now be able to receive changes from the server for rows whose ClientId matches its own.
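For reference, the ClientChangesSelected handler wired up above is the upload-side counterpart from Part 1. The sketch below shows the idea, assuming Part 1’s approach of adding the ClientId column to the change dataset and stamping each uploaded row with this client’s GUID (the same value used in the server filter):

```csharp
static void ClientChangesSelected(object sender, DbChangesSelectedEventArgs e)
{
    // check whether we're synching the table we're interested in
    if (e.Context.DataSet.Tables.Contains("testtable"))
    {
        var table = e.Context.DataSet.Tables["testtable"];

        // add the ClientId column the server schema expects
        if (!table.Columns.Contains("ClientId"))
        {
            table.Columns.Add("ClientId", typeof(Guid));
        }

        // stamp every uploaded row with this client's identifier
        foreach (System.Data.DataRow row in table.Rows)
        {
            row["ClientId"] = new Guid("89198182-24C9-4CD7-8427-F6790270E62C");
        }
    }
}
```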

Again, I’d love to hear any feedback if this works for you as well.



  1. The problem with this solution is that each server record is tied to only one client. This creates a business rule whereby Client A could only ever sync to Client A records and Client B could only ever sync Client B records … without this rule, there would be a conflict whenever multiple Client records with the same ID value were retrieved by the filter.

    1. in this scenario, the client records have duplicate/overlapping IDs already even before they’re synced. If each client needs to have both Client A and Client B records, then they should not have been designed with overlapping keys in the first place. And as mentioned, this is for scenarios where the schema can’t be changed on the client and data needs to be aggregated/consolidated on the server.

  2. […] Part 1 – Upload Synchronization where the Client and Server Primary Keys are different  and Part 2 – Bidirectional Synchronization where the Client and Server Primary Keys are different ) and in both approach, we relied on intercepting the change dataset to alter the structure of the […]

  3. Is it possible to use a column other than the primary key to sync on?

    Server has a table Customers, it has an int Id primary key and a UID guid column
    Client has a int Id primary key, and an UID guid column

    The UID is not part of the key, but when syncing it is the value I want to use to determine if one row is equal to another across server and client, on insert on both sides I am happy with auto increment values for the Id’s as I will not be syncing on them.

    Any suggestions?

  4. This is a nice concept, but this example covers only one table. I want to do it with, say, 100 tables. What is the best way to achieve that?
