Setting up replication from a Live database

What happens when you have a mission-critical, somewhat large database (45GB in this case) and your slave breaks? How do you get replication back up with zero downtime? It's a good question, and one I had to solve the other day. On a smaller database it might make sense to put up a maintenance page, stop access to the db, do a dump, bring the site back up, and then set up replication. But when the app is mission-critical and the database is large enough that the dump itself takes significant time, you'll want to do your MySQL dump a different way.

I'm assuming you already know how to set up replication and just need to know how to do the dump. Well, here it is:
mysqldump -u root -p [database you want to back up] \
--routines --single-transaction \
--skip-add-locks --skip-lock-tables \
--master-data > [database]-[date].sql
Note that --single-transaction only gives a consistent snapshot for InnoDB tables, and --master-data requires binary logging to be enabled on the master.
If you want to gzip the dump as it happens, replace the > with
 | gzip -9 > [database]-[date].sql.gz
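Put together, the piped version of the whole command looks like this. The database name appdb and the date format are placeholders of mine; substitute your own:

```shell
# Dump straight into gzip so the uncompressed .sql file never hits disk.
# "appdb" is a hypothetical database name; you'll be prompted for the password.
mysqldump -u root -p \
  --routines --single-transaction \
  --skip-add-locks --skip-lock-tables \
  --master-data appdb \
  | gzip -9 > "appdb-$(date +%Y-%m-%d).sql.gz"
```

Piping through gzip trades some CPU for a much smaller file, which also matters when you copy the dump over to the slave.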
Once that's done, you can find the master's binlog position recorded in the dump by doing
gzip -dc [database]-[date].sql.gz | head -30
Look for the CHANGE MASTER TO line near the top of the output.
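Since --master-data writes a CHANGE MASTER TO statement near the top of the dump, you can also grep it out directly instead of eyeballing the first 30 lines. The filename and the coordinate values shown are hypothetical:

```shell
# Pull the recorded binlog coordinates out of the compressed dump.
# head -30 keeps us from decompressing the whole (potentially huge) file.
gzip -dc appdb-2024-01-15.sql.gz | head -30 | grep 'CHANGE MASTER'
# Typical line (values here are made up):
# CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000042', MASTER_LOG_POS=1337;
```

Those two values, MASTER_LOG_FILE and MASTER_LOG_POS, are what you need on the slave.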
Because the dump was taken with --master-data, importing it into the slave should technically change the master coordinates for you automatically, but I run CHANGE MASTER TO manually again just in case. Start your slave, and watch it catch up to the master. Profit!

Note: this method is considerably slower than just doing a normal dump. I would only do it if the database is indeed mission-critical and you can't afford any downtime on it. Good luck!
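For completeness, the slave-side steps (import, point at the master, start replication) might look like the sketch below. The host, credentials, and binlog coordinates are all placeholders; substitute the values from your own CHANGE MASTER line:

```shell
# Import the dump on the slave ("appdb" and the filename are hypothetical).
gzip -dc appdb-2024-01-15.sql.gz | mysql -u root -p appdb

# Point the slave at the master and start it. All values are placeholders;
# this assumes a replication user ("repl") already exists on the master.
mysql -u root -p <<'SQL'
CHANGE MASTER TO
  MASTER_HOST='master.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='mysql-bin.000042',
  MASTER_LOG_POS=1337;
START SLAVE;
SHOW SLAVE STATUS\G
SQL
```

In SHOW SLAVE STATUS, watch Seconds_Behind_Master drop toward zero as the slave catches up.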