Simple MySQL Replication Using FTP

This problem came up while I was finishing another web-based project not so long ago. The scenario was hard to imagine at first: both servers had to stay synchronized and hold the same records almost in real time. Let's see the picture below:



From the picture above, the website visitors submit their data to the web server, while the local server also serves the local data transactions on the LAN. So what is the point?

Yes! The thing is … the servers are separate from each other. The first server (dedicated as the data center) serves public internet access and lives on a commercial hosting machine. Like any commercial web hosting, this web server is restricted in almost every way, except for the user or web administrator who connects from the browser (through cPanel). There may be no free SMTP port to use, no direct MySQL connection from third-party software, and limits on connecting to the FTP service automatically. Logically, there seems to be no possible way to synchronize both servers automatically.

So, is there any plan to keep them synchronized all the time, 24/7? Yes there is! Thanks to Linux for allowing me to do this. With the full support of the cron daemon and bash scripting, it can be done. Changes are involved on both the web server and the local server. On the web server, there are a few things to do, such as dumping the database into a file and storing it in the specified directory:

/usr/bin/mysqldump -uweb_server_user --password=web_server_password database_name table_name > /path/to/file/specified


The command above must run periodically, as frequently as possible; the closest the cron daemon can schedule is once every minute, so you will need crond to execute it. The complete crontab entry is written below:

* * * * * /usr/bin/mysqldump -uweb_server_user --password=web_server_password database_name table_name > /path/to/file/specified
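
Since the dump file is rewritten so often, the FTP transfer on the other side could catch it while it is still being written. As a small precaution (just a sketch, reusing the placeholder paths above plus a hypothetical .tmp suffix and script name), dump to a temporary file first and then move it into place, since the move replaces the published file in a single step on the same filesystem:

#!/bin/sh
# Hypothetical /path/to/dump_and_publish.sh: dump into a temporary file,
# then move it over the published file so the FTP client never sees a partial dump.
/usr/bin/mysqldump -uweb_server_user --password=web_server_password database_name table_name > /path/to/file/specified.tmp
mv /path/to/file/specified.tmp /path/to/file/specified


The crontab entry can then call this script instead of mysqldump directly:

* * * * * /path/to/dump_and_publish.sh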


Enough for now; let's move on to the local server. The local server's jobs are quite simple: first, get the database dump file from the web server; then parse the file, run its SQL, and merge and commit it into the local database server; then dump the result to a file again; and last, put the file back up to the available web server path.

Getting the database dump file from the web server:

ftp ftp.host.address <<FTP-session
binary
get /path/to/file/specified
bye
FTP-session


Parsing the file, running its SQL, and merging and committing it into the local database server:

$syncfile = new SyncDBContent("local_server_host","database_name","local_server_user","local_server_password");
$syncfile->exec_syncfile('/path/to/file/specified');
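
SyncDBContent here is a custom helper class (its code is not shown). If you do not have something similar, the same step can be done with the stock mysql command-line client, which simply replays the dump file against the local database. This is only a minimal sketch, assuming mysql is installed at /usr/bin/mysql like mysqldump:

# Alternative sketch: replay the dump file straight into the local database
# with the mysql client instead of the PHP class.
/usr/bin/mysql -ulocal_server_user --password=local_server_password database_name < /path/to/file/specified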


Again, dumping the merged database result into a file:


/usr/bin/mysqldump -ulocal_server_user --password=local_server_password database_name table_name > /path/to/file/specified


Last, putting the file back up to the web server:

ftp ftp.host.address <<FTP-session
binary
put /path/to/file/specified
bye
FTP-session
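
For reference, all of the local server steps above can be collected into one script and scheduled with cron, just like the web server side. This is only a sketch that reuses the placeholders above; the script path /path/to/sync_local.sh is hypothetical, and the SQL is applied with the plain mysql client instead of the PHP class:

#!/bin/sh
# Hypothetical /path/to/sync_local.sh: fetch the dump, apply it locally,
# dump the merged result, and push it back to the web server.

# 1. Get the dump file from the web server (login handled by ~/.netrc, described below)
ftp ftp.host.address <<FTP-session
binary
get /path/to/file/specified
bye
FTP-session

# 2. Replay the dump against the local database
/usr/bin/mysql -ulocal_server_user --password=local_server_password database_name < /path/to/file/specified

# 3. Dump the merged result again
/usr/bin/mysqldump -ulocal_server_user --password=local_server_password database_name table_name > /path/to/file/specified

# 4. Put the file back on the web server
ftp ftp.host.address <<FTP-session
binary
put /path/to/file/specified
bye
FTP-session


Scheduled from crontab in the same way:

* * * * * /path/to/sync_local.sh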


The actions above are executed automatically, so create a file named .netrc in the home directory so that the FTP login happens without prompting. The contents of the .netrc file are declared below:

machine ftp.host.address
login web_server_user
password web_server_password
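
Since the .netrc file stores a password, most ftp clients refuse to use it for auto-login unless it is readable only by its owner, so tighten its permissions first:

chmod 600 ~/.netrc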


Again, you must combine the process above with the crond daemon and make sure the script is executed every minute. Once that is done, you are finished with both servers, and you have synchronization between the web server and the local server. Before doing all the rest of the process, don't forget to back up the servers; there is no guarantee that the plan above will work correctly for you. So, good luck and always take care. Cheers…





  1. Anonymous said,

    Saturday, July 07, 2007 10:33:00 AM

    If you use crontab to 'sync' the file (via FTP) on such a short interval, then during the transmission of your file to the local server you might get a zero-size file or a file-write-denied error on your remote site (since all the data is dumped into just one file per run).

    Unless each transaction uses one flat file for synchronization purposes (or you make use of an XML file for data synchronization/mapping).

    Just my thoughts.

  2. Eko Wahyudiharto said,

    Monday, July 09, 2007 8:37:00 AM

    Actually, my plan worked completely. There was no zero-size file or file-write-denied error on the remote side (of course, I did chmod on it first).

    Based on the article above, I use a file containing the database dump, so there is no chance of losing anything.

    Anyway, thanks for your comments.

  3. Anuj Sharma said,

    Tuesday, July 27, 2010 12:32:00 PM

    I found a good article on
    http://www.winservers.co.in/2010/07/all-about-ftp-must-read//
