So let's say we have three datacenters, each named after its physical location:
canada.site.com
usa.site.com
uk.site.com
When a user connects to site.com, it tries to forward them to the closest site possible.
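A minimal sketch of how that redirect decision might look, assuming some geolocation lookup has already turned the visitor's IP into a country code (the mapping table, the default fallback, and the country codes here are illustrative assumptions; a real setup would more likely use GeoDNS or a CDN than an application-level redirect):

```python
# Map a visitor's country code to the nearest regional hostname.
# These entries and the fallback are assumptions for illustration only.
REGIONAL_SITES = {
    "CA": "canada.site.com",
    "US": "usa.site.com",
    "GB": "uk.site.com",
}
DEFAULT_SITE = "usa.site.com"  # fallback when we can't place the visitor

def closest_site(country_code: str) -> str:
    """Return the hostname of the datacenter closest to the visitor."""
    return REGIONAL_SITES.get(country_code, DEFAULT_SITE)

# e.g. a request from a Canadian IP would be answered with a redirect like
#   302 Location: https://canada.site.com/index
print(closest_site("CA"))  # -> canada.site.com
```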
So Andy, who is Canadian, lands on canada.site.com.
His request for the index page goes to Apache; Apache then queries the SQL server at sql.canada.site.com, and the SQL data comes back through Apache and is visible in Andy's browser.
So Jeff, who is American, lands on usa.site.com.
Like Andy, his request for the index page goes to Apache, then to sql.usa.site.com, and the SQL data comes back through Apache and is visible in Jeff's browser.
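A rough sketch of that read path, with Python's built-in sqlite3 standing in for the regional SQL server (the schema, table, and function names are assumptions made up for this example; the point is just that each Apache node reads only from its own region's database):

```python
import sqlite3

# Stand-in for sql.canada.site.com / sql.usa.site.com: an in-memory database
# that only this region's Apache node talks to.
local_db = sqlite3.connect(":memory:")
local_db.execute("CREATE TABLE pages (path TEXT PRIMARY KEY, body TEXT)")
local_db.execute("INSERT INTO pages VALUES ('/index', 'hello from this region')")

def render_page(db, path: str) -> str:
    """What Apache does for Andy or Jeff: fetch the page body from the local SQL server."""
    row = db.execute("SELECT body FROM pages WHERE path = ?", (path,)).fetchone()
    return row[0]

print(render_page(local_db, "/index"))  # this is what ends up in the visitor's browser
```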
Now John has been using the site and is about to make an update.
John's data is sent to the Apache server at uk.site.com; Apache tells the SQL server at sql.uk.site.com to update with the new data, and John can then see the change when he views the page.
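The write path looks roughly like this, again with sqlite3 standing in for sql.uk.site.com (the schema and function names are the same illustrative assumptions as in the read-path sketch):

```python
import sqlite3

# Stand-in for sql.uk.site.com, the only database John's write touches.
uk_db = sqlite3.connect(":memory:")
uk_db.execute("CREATE TABLE pages (path TEXT PRIMARY KEY, body TEXT)")
uk_db.execute("INSERT INTO pages VALUES ('/index', 'old content')")

def save_update(db, path: str, new_body: str) -> None:
    """What Apache at uk.site.com does: commit John's change to the local SQL server only."""
    db.execute("UPDATE pages SET body = ? WHERE path = ?", (new_body, path))
    db.commit()

save_update(uk_db, "/index", "John's new content")
# John sees his change right away; the Canadian and US databases are still stale.
print(uk_db.execute("SELECT body FROM pages WHERE path = '/index'").fetchone()[0])
```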
HOWEVER, when Andy or Jeff visit the page, they will not see John's update; because they are not on the same servers, they are not loading from the same SQL server.
Similarly, if Andy or Jeff makes an update, the others will not see it.
This is where we implement the syncing script. When John made his update, the Apache server at uk.site.com also sent out a "warning" to canada.site.com and usa.site.com that a change was made and where the change is located.
canada.site.com and usa.site.com then go and get the new data and update their respective SQL servers.
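A very rough sketch of what that syncing step could look like, with in-memory sqlite3 databases standing in for the three regional SQL servers and a plain function call standing in for the HTTP "warning" between datacenters (every name here is an assumption for illustration, not the actual script):

```python
import sqlite3

def make_region_db():
    """Stand-in for one region's SQL server (sql.uk / sql.canada / sql.usa)."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE pages (path TEXT PRIMARY KEY, body TEXT)")
    db.execute("INSERT INTO pages VALUES ('/index', 'old content')")
    return db

regions = {
    "uk.site.com": make_region_db(),
    "canada.site.com": make_region_db(),
    "usa.site.com": make_region_db(),
}

def notify_peers(origin: str, path: str) -> None:
    """The 'warning': tell the other datacenters that `path` changed at `origin`.
    In reality this would be an HTTP call from Apache to Apache; here each peer
    immediately fetches the new row from the origin and applies it locally."""
    new_body = regions[origin].execute(
        "SELECT body FROM pages WHERE path = ?", (path,)).fetchone()[0]
    for host, db in regions.items():
        if host != origin:
            db.execute("UPDATE pages SET body = ? WHERE path = ?", (new_body, path))
            db.commit()

# John's update lands on uk.site.com first, then the warning fans out.
regions["uk.site.com"].execute(
    "UPDATE pages SET body = ? WHERE path = ?", ("John's new content", "/index"))
regions["uk.site.com"].commit()
notify_peers("uk.site.com", "/index")

# Andy (Canada) and Jeff (USA) now read the same data from their own servers.
for host, db in regions.items():
    print(host, db.execute("SELECT body FROM pages WHERE path = '/index'").fetchone()[0])
```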
The whole update takes approximately 2 times the server-to-server latency plus the SQL-to-Apache latency. Since the databases are typically located on fibre backbones, the update MAY propagate faster than the originating user gets their own page back.
(Taking that math into account, a fast user in the States might actually see the change before the user who made it in the UK does.)
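To make the arithmetic concrete, here is the same estimate with made-up round numbers (these are not measurements, just placeholders to show why the parenthetical above can be true):

```python
# Illustrative latencies only.
server_to_server_ms = 40   # uk.site.com <-> usa.site.com over the backbone
sql_to_apache_ms = 5       # within a datacenter
user_to_uk_ms = 120        # John's home connection to uk.site.com and back

# Time for the update to reach the US database after John's write commits:
# warning goes out, peer fetches the data back, then applies it locally.
propagation_ms = 2 * server_to_server_ms + sql_to_apache_ms
print(propagation_ms)  # 85 ms

# Time for John's own confirmation page to travel back to him:
print(user_to_uk_ms)   # 120 ms -> a US visitor could see the change first
```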
