Okay, I think you and I are suggesting roughly the same solution from
two different vantage points, Jonathan. I still think in the end mine
will require a lot less storage, however. If both your solution and
mine require two writes, why not empty the temp tables at the end of the
sync, write changes to both the temp and main tables, and then use the
temp tables to determine what to sync to the Palm? I know you're still
thinking this equates to a fancy "dirty bit" solution, but with your
version of the solution I still don't see how you'll know what changed
without doing a record-by-record compare. In my solution you'll have a
full database and just a changes database, which requires a lot less
storage. I hope I'm still not missing something here.
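The "changes table" idea above can be sketched in a few lines. This is a minimal illustration using Python's sqlite3 as a stand-in for the Access store; the table and column names (contacts, contacts_changes) are invented for the example.

```python
# Sketch of the changes-table approach: every write goes to both the
# main table and a temp changes table; at sync time only the changes
# table is read, then emptied. sqlite3 stands in for Access here.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE contacts_changes (id INTEGER PRIMARY KEY, name TEXT)")

def upsert_contact(cid, name):
    """Each change is written twice: main table and changes table."""
    for table in ("contacts", "contacts_changes"):
        conn.execute(
            f"INSERT OR REPLACE INTO {table} (id, name) VALUES (?, ?)",
            (cid, name),
        )

upsert_contact(1, "Ada")
upsert_contact(2, "Grace")

# At sync time, the changes table alone tells you what to push to the Palm...
changed = conn.execute("SELECT id, name FROM contacts_changes").fetchall()
# ...and it is emptied once the sync completes.
conn.execute("DELETE FROM contacts_changes")
```

Storage stays proportional to the change set rather than the whole database, which is the point being made above.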
As for Access being a disappointment, you're right and wrong. If people
had used it for what it was, which was a nice desktop data store,
everything would have been great. Instead, people thought that because
it was easy to use it would make a nice, cheap enterprise solution, and
as it turns out they were wrong. I only say this because I've been
through all of that myself. And ironically enough, an old co-worker and
I actually wrote an SQL Server implementation of a project that used the
trigger solution to create a bitmask change fieldset for syncing
purposes (though the target for that project was not a Palm database).
That was actually a pretty slick solution for a data sync need, though
we never got to implement it in a production environment.
From: %%email.bounce%% [mailto:%%email.bounce%%] On Behalf Of Jonathan
Sent: Tuesday, February 12, 2008 1:20 AM
To: Conduit Developer Forum
Subject: Re: Performing fast sync with Access database
First of all thanks for your detailed explanation of your solution!
That is an excellent idea.
As for my previous suggestion, I didn't mean to create a copy after each
sync. You copy the database just once. Upon sync, you can use this middle-man
database copy as a way to determine which field in every record was
modified in the main database (as an alternative to the bit masks).
After completing the synchronization, the main database and the copy
database should hold exactly the same data.
That way you're not copying the database during each sync. You're just
making each change twice - to the middle database and to the "main" one.
I am still thinking about this solution as a way to avoid adding special
code to handle the bitmasks. There are quite a lot of update/add
functions in my project and that will take some time to accomplish.
However, I am still struggling to find a way to pull out just the modified
records using this copy of the database, as I wrote in my first post.
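For what it's worth, pulling the modified records out of the copy comes down to a record-by-record, field-by-field compare. Here is a minimal sketch of that diff, with purely illustrative record data (the dict-of-dicts shape is just a stand-in for two result sets keyed by primary key):

```python
# Diff each record in the main store against the middle-man copy,
# returning only the records (and fields) that actually changed.
def diff_records(main, copy):
    """Return {record_id: [changed field names]} between two snapshots."""
    changes = {}
    for rid, record in main.items():
        old = copy.get(rid, {})
        dirty = [f for f, v in record.items() if old.get(f) != v]
        if dirty:
            changes[rid] = dirty
    return changes

main_db = {1: {"name": "Ada", "phone": "555-0100"},
           2: {"name": "Grace", "phone": "555-0199"}}
copy_db = {1: {"name": "Ada", "phone": "555-0111"},
           2: {"name": "Grace", "phone": "555-0199"}}

# Only record 1's phone field differs, so only it needs to go to the Palm.
```

The cost of this full scan on every sync is exactly the objection raised elsewhere in the thread; the bitmask keeps the same information without the scan.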
Another thing I didn't mention is that it is not possible
<http://www.thescripts.com/forum/post725997-2.html> to create triggers
in Access. Triggers are a way in SQL Server to have the database perform
actions automatically upon an update, addition or deletion of a record.
That could have been extremely useful as well - you could just write the
code for the bit masks in these triggers - but, as I said, Access is a
disappointment there.
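For readers unfamiliar with the idea, here is roughly what such a trigger would look like, sketched via sqlite3 (which does support triggers) rather than SQL Server or Access; the table, trigger, and bit assignments are all invented for the example. Each AFTER UPDATE OF trigger ORs its column's bit into the delta-set mask:

```python
# One trigger per tracked column: updating that column ORs the
# column's bit into the table's delta-set mask automatically.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE contacts (id INTEGER PRIMARY KEY, name TEXT, phone TEXT);
CREATE TABLE delta_set (fld_tbl_name TEXT PRIMARY KEY, mask INTEGER);
INSERT INTO delta_set VALUES ('contacts', 0);

-- bit 0 = name, bit 1 = phone (an arbitrary assignment for this sketch)
CREATE TRIGGER trg_name AFTER UPDATE OF name ON contacts
BEGIN UPDATE delta_set SET mask = mask | 1 WHERE fld_tbl_name = 'contacts'; END;
CREATE TRIGGER trg_phone AFTER UPDATE OF phone ON contacts
BEGIN UPDATE delta_set SET mask = mask | 2 WHERE fld_tbl_name = 'contacts'; END;
""")
conn.execute("INSERT INTO contacts VALUES (1, 'Ada', '555-0100')")
conn.execute("UPDATE contacts SET phone = '555-0111' WHERE id = 1")
mask = conn.execute(
    "SELECT mask FROM delta_set WHERE fld_tbl_name = 'contacts'").fetchone()[0]
# Only the phone bit ends up set.
```

In Access, lacking triggers, the equivalent OR has to be done by hand in every update/add function, which is exactly the bookkeeping being weighed in this thread.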
If this "two DBs" solution won't see light soon, I will abandon it and
start implementing your bit-mask technique. Thanks.
This is becoming quite an interesting thread, and I'm sure it will be
helpful to more people in the future. Thanks for participating.
On Feb 12, 2008 6:41 PM, Mark Janveaux <mark@phys...> wrote:
Creating a copy of the database each time it syncs is a little much!
That will not scale well at all.
Off the top of my head, you're really interested in tracking a 'delta
set' and possibly the last
modification date of each member in the delta set so as to figure out
how to merge each member (db field).
The fastest (and in my opinion, an elegant) way of doing this would be
to maintain a bitmask for each table.
For example, an 8 byte array would be able to track the dirty status of
up to 64 columns per row.
You wouldn't have to necessarily modify your existing tables. You would
just create one table which
would have a schema along the lines of:
column: fld_tbl_name (unique, possibly PK)
column: fld_delta_set (8 byte array)
The update operations for each table would have to be able to update the
appropriate bit for the field
in the mask (delta set), but that is very very simple arithmetic. The
ordinal number of the column would
correspond to the appropriate bit to set in the mask (i.e. column n =
bit n).
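The arithmetic really is that simple. A minimal sketch of the "column n = bit n" bookkeeping (the function names here are invented, not part of any proposed schema):

```python
# Set and test per-column dirty bits in a table's delta-set mask.
def mark_dirty(mask: int, column_ordinal: int) -> int:
    """Called by every update function: set the modified column's bit."""
    return mask | (1 << column_ordinal)

def is_dirty(mask: int, column_ordinal: int) -> bool:
    """The per-bit AND test done at sync time."""
    return (mask & (1 << column_ordinal)) != 0

mask = 0
mask = mark_dirty(mask, 3)   # column 3 was updated
mask = mark_dirty(mask, 5)   # column 5 was updated
# is_dirty(mask, 3) is True; is_dirty(mask, 4) is False
```

The mask itself would be persisted in the fld_delta_set column described above, one row per table.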
Upon sync, you would start by looking at this version of the table on
both sides. You would do a simple
AND operation to figure out if there were any changes made at all.
Furthermore, to figure out
exactly what fields you need to merge you would simply loop through the
mask and add a field to your query
whenever its AND test proved true (i.e. the bit was set). You could use a 'handheld wins'
/ 'server wins' type rule to make
this very easy. Alternatively you could have a separate table which
holds an update time stamp for every
single delta set member (the primary key would be the tablename + the
bit number in the mask). This
table could be a two column table to make it real simple:
column: fld_pk (string, table_name+bit) (e.g. values tbl_sample0,
tbl_sample1, corresponding to the first and second bits (fields) of
tbl_sample)
column: fld_last_modified (timestamp)
If you went ahead and implemented the timestamp table you would have the
ability to merge, on a last-modified basis, every
single field of the database and would only pull the records you were
interested in pulling without having to keep multiple copies of the
database.
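Putting the pieces together, the merge loop looks something like this. It is a sketch under stated assumptions: masks are plain integers, the per-bit timestamps are passed in as dicts, and the 'last modified wins' rule stands in for whatever conflict policy you actually pick; all names and data are illustrative.

```python
# Walk both sides' delta-set masks bit by bit and decide, per column,
# which side's value to keep: handheld, server, or neither (unchanged).
def merge_plan(hh_mask, srv_mask, hh_times, srv_times, n_columns):
    """Return a per-column decision list: 'handheld', 'server', or None."""
    plan = []
    for n in range(n_columns):
        bit = 1 << n
        hh, srv = bool(hh_mask & bit), bool(srv_mask & bit)
        if hh and srv:                    # both changed it: last modified wins
            plan.append("handheld" if hh_times[n] >= srv_times[n] else "server")
        elif hh:
            plan.append("handheld")
        elif srv:
            plan.append("server")
        else:
            plan.append(None)             # field untouched on both sides
    return plan

# Handheld changed columns 0 and 2; server changed columns 1 and 2;
# the handheld's change to column 2 is the more recent one.
plan = merge_plan(0b101, 0b110, {0: 10, 2: 50}, {1: 20, 2: 30}, 3)
```

Only the columns with a non-None decision ever need to appear in the sync query, which is the "only pull what you're interested in" property claimed above.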
Basically, what I'm describing is 'flattening' the audit data into a
single field which can be compared using bitmask operations. It would
be extremely efficient and very simple to implement. I've written a
lot of sync engines and have done this in the past.