Creating a copy of the database each time it syncs is a little much! That will not scale well at all ...
Off the top of my head, you're really interested in tracking a 'delta set' and possibly the last
modification date of each member in the delta set so as to figure out how to merge each member (db field).
The fastest (and, in my opinion, an elegant) way of doing this would be to maintain a bitmask for each table.
For example, an 8-byte array would be able to track the dirty status of up to 64 columns per row.
You wouldn't necessarily have to modify your existing tables. You would just create one table which
would have a schema along the lines of:
column: fld_row_pk (string, table_name + row primary key)
column: fld_dirty_mask (binary, one bit per column)
The update operations for each table would have to set the appropriate bit for the field
in the mask (delta set), but that is very simple arithmetic: the ordinal number of the column
corresponds to the bit to set in the mask (i.e. column n = bit n).
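The column-to-bit arithmetic above can be sketched in a few lines. This is a minimal illustration; the function names and the 8-byte mask size are my own choices, not from any real conduit API:

```python
# Hypothetical sketch: marking column n as dirty in a per-row bitmask.
# An 8-byte mask covers up to 64 columns; grow the array for wider tables.

MASK_BYTES = 8  # 64 columns

def mark_dirty(mask: bytearray, column: int) -> None:
    """Set the bit for `column` (0-based ordinal): column n -> bit n."""
    mask[column // 8] |= 1 << (column % 8)

def is_dirty(mask: bytearray, column: int) -> bool:
    """Test the bit for `column` with a simple AND."""
    return bool(mask[column // 8] & (1 << (column % 8)))

mask = bytearray(MASK_BYTES)
mark_dirty(mask, 3)   # fourth column was modified
mark_dirty(mask, 10)  # eleventh column was modified
```

Each table's update trigger (or update routine) would call the equivalent of `mark_dirty` for every field it touches.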
Upon sync, you would start by looking at this table on both sides. A quick check that either mask is
non-zero tells you whether any changes were made at all. Furthermore, to figure out
exactly which fields you need to merge, you would simply loop through the mask, testing each bit
with an AND, and build your query from the bits that tested dirty. You could use a 'handheld wins' / 'server wins' type rule to make
this very easy. Alternatively, you could have a separate table which holds an update timestamp for every
single delta-set member (the primary key would be the table name + the bit number in the mask) ... this
table could be a two-column table to keep it real simple:
column: fld_pk (string, table_name + bit) (e.g. tbl_sample0, tbl_sample1, corresponding to the first and second bits (fields) of tbl_sample)
column: fld_last_modified (timestamp)
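The bit-testing merge loop described above might look something like this. It is only a sketch under my own assumptions (one mask byte, a `handheld_wins` flag for the conflict rule); nothing here comes from a real sync engine:

```python
# Hypothetical sketch: deciding which fields to merge from two per-row masks.
# handheld_mask / server_mask are the dirty masks for the same row on each side.

def fields_to_merge(handheld_mask: bytes, server_mask: bytes,
                    num_columns: int, handheld_wins: bool = True):
    """Yield (column, source) pairs; conflicts go to the winning side."""
    for col in range(num_columns):
        byte, bit = col // 8, 1 << (col % 8)
        h = bool(handheld_mask[byte] & bit)
        s = bool(server_mask[byte] & bit)
        if h and s:  # changed on both sides: apply the conflict rule
            yield col, "handheld" if handheld_wins else "server"
        elif h:
            yield col, "handheld"
        elif s:
            yield col, "server"

h = bytes([0b00000101])  # columns 0 and 2 dirty on the handheld
s = bytes([0b00000110])  # columns 1 and 2 dirty on the server
plan = list(fields_to_merge(h, s, 8))
```

The resulting plan tells you exactly which columns to include in the UPDATE you build for each side.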
If you went ahead and implemented the timestamp table, you would have the ability to merge every
single field of the database on a last-modified basis, and would only pull the records you were interested in pulling, without having to keep multiple copies of the database.
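With the timestamp table in place, the per-field conflict rule reduces to a comparison of two timestamps. A minimal sketch, assuming the tablename+bit key described above (the function and dictionary names are illustrative):

```python
# Hypothetical sketch of last-modified-wins using the timestamp table.
# Keys are table_name + bit number, exactly as in the fld_pk column above.

def pick_side(table: str, bit: int,
              handheld_times: dict, server_times: dict) -> str:
    """Return which side's value to keep for one field (delta-set member)."""
    key = f"{table}{bit}"
    h = handheld_times.get(key, 0.0)  # 0.0 = never modified on that side
    s = server_times.get(key, 0.0)
    return "handheld" if h >= s else "server"

handheld_times = {"tbl_sample0": 1202770000.0, "tbl_sample1": 1202770100.0}
server_times   = {"tbl_sample0": 1202770050.0}
```

Here the server edited field 0 more recently, so its value wins, while field 1 was only touched on the handheld.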
Basically, what I'm describing is 'flattening' the audit data into a single bitmask field which can be compared using bitwise operations. It would
be extremely efficient and very simple to implement. I've written a lot of sync engines and have done this in the past.
Let me know if this makes sense to you...
From: %%email.bounce%% [mailto:jonathan.carse@gmai...]
Sent: Monday, February 11, 2008 10:41 PM
To: Conduit Developer Forum
Subject: Re: Performing fast sync with Access database
Yes, indeed that seems to be the most logical solution.
Before I joined my company and wrote the conduit for them, they used Appforge's Universal Conduit.
This is a conduit written by Appforge (I think they were purchased by Oracle) that creates a mirror of an Access database (well, any database that can be DSN'ed into the system, actually) onto a Palm device.
And that conduit worked fantastically:
1. If no changes were made on either side, the synchronization took no time at all.
2. If changes were made, only the affected records were sync'ed.
3. If changes were made to the same record on both sides, the changes were merged, so no data was lost.
All of this was accomplished without manually embedding a dirty bit.
The third bullet above indicates that there is more than just a dirty bit per record - it keeps track of every field, not just every record. Sort of a dirty bit for each field, if you will.
So I'm assuming Appforge found a way to track the data cleverly enough to make synchronizations fast and accurate, without having to modify the existing database or rely on a stronger database engine than Access.
My bet is that they make a copy of the database each time it is sync'ed. Then, on the next sync, they have 3 versions for each record - the PC's, the Palm's, and the record that was saved after the last sync. That way they know which fields were changed on each side.
I tried to do the same thing -
I made a copy of my MDB file, and used the linked tables option to link my two MDB files. So now I have two versions of each table - the current table and the table as it was saved after the last sync.
But now I'm stuck because I can't think of a fast enough SELECT clause that will pull only the modified records out of the current table.
If Access supported the "EXCEPT" SQL operator (http://msdn2.microsoft.com/en-us/library/ms188055.aspx), it could be done using the following simple query:
SELECT * FROM tblCurrentTable
EXCEPT
SELECT * FROM tblLastSyncTable
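To show what that query would return, here is the same shape run against SQLite, which (unlike Access/Jet) does support EXCEPT. The table names come from the query above; the sample data is invented purely for illustration:

```python
# Illustration only: the EXCEPT semantics the query above relies on,
# demonstrated with SQLite instead of Access. Sample rows are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tblCurrentTable  (id INTEGER, name TEXT);
    CREATE TABLE tblLastSyncTable (id INTEGER, name TEXT);
    INSERT INTO tblCurrentTable  VALUES (1, 'alice'), (2, 'bob-edited');
    INSERT INTO tblLastSyncTable VALUES (1, 'alice'), (2, 'bob');
""")
changed = conn.execute(
    "SELECT * FROM tblCurrentTable EXCEPT SELECT * FROM tblLastSyncTable"
).fetchall()
# Only the row modified since the last sync survives the EXCEPT.
```

Row (1, 'alice') appears in both tables and is filtered out; only the edited row comes back, which is exactly the "modified records" set the conduit needs.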
Does anybody have any better ideas?
I really don't want to start adding dirty bits to all my tables, and I want a much more elegant and generic solution to the problem.
If you're wondering why I'm not just using Appforge's solution, the answer is that they only support creating a mirror between the Access database and the palm database. I had to write a conduit that synchronizes only part of the "big" Access database to the "small" palm database.
2008/2/12, epankoke@comc...:
I worked on a project once that used an Access Database, and what we did was create our own "dirty" flag at the end of the record. Basically, a one byte flag that was 0 for unchanged and 1 for changed (we actually had a more complex setup, but this would be the basic route to take). Then all you have to look at on the Access side is the recordset where DirtyFlag = 1.