Historical backups

Historical backups are multiple snapshots taken over time. When something goes slightly wrong (e.g. a product is accidentally deleted), you can go back in time and restore the appropriate snapshot (or part thereof).

For Magento Go Big and Excellence Nodes, historical backups are made every day and kept for 7 days. We also keep 1 backup per week for 3 weeks, so you have 4 weeks' worth of backups in total. If you need a historical backup, please contact us via support@byte.nl. For Magento Start and Grow hosting plans, this page provides instructions on how to set up periodic backups yourself, optionally copied to an off-site location.

If you have a Magento 2 installation, use n98-magerun2 instead and point the --root-dir option in the commands below at your Magento 2 root.
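For example, the nightly dump command from the crontab below would then become something like this (assuming your Magento 2 installation lives in ~/magento2; adjust the path to your own setup):

n98-magerun2 db:dump --root-dir=~/magento2 --compression=gz --no-interaction --strip @stripped ~/backup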

Database

The most important volatile data is the database.

Creating

Add the following lines to your crontab (crontab -e). The dump does not lock your database, so it can safely be run at busy times. This example runs nightly.

PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=youremailaddress@here
0 2 * * * mkdir -p ~/backup; flock -n ~/.mysqldump chronic n98-magerun db:dump --root-dir=~/public --compression=gz --no-interaction --strip @stripped ~/backup
0 5 * * * chronic find ~/backup/ -type f -mtime +7 -delete

Notes:

* This requires some free space on your Hypernode. For a typical 1 GB database, the compressed dump file takes about 20 MB (the index data is not copied). You can use df -h ~ to verify the amount of available space.
* chronic will only mail the output if there is a failure.
* flock ensures that only a single instance runs. If, for some reason, a backup run takes a long time, it will never run twice at the same time.

You can also use this script to create daily, weekly and monthly dumps of your database: databaseBackup.sh.
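If you would rather write such a rotation yourself, a minimal sketch could look like the following (the directory layout and retention periods are example choices, not requirements):

#!/bin/sh
# Sketch of a daily/weekly/monthly database dump rotation
BACKUPDIR=~/backup
mkdir -p $BACKUPDIR/daily $BACKUPDIR/weekly $BACKUPDIR/monthly
DUMP=$BACKUPDIR/daily/mysql-$(date +%F).sql.gz
n98-magerun db:dump --root-dir=~/public --compression=gz --no-interaction --strip @stripped $DUMP
# Keep a weekly copy on Sundays and a monthly copy on the 1st
[ "$(date +%u)" = "7" ] && cp $DUMP $BACKUPDIR/weekly/
[ "$(date +%d)" = "01" ] && cp $DUMP $BACKUPDIR/monthly/
# Prune old dumps: 7 days of dailies, 4 weeklies, 3 monthlies
find $BACKUPDIR/daily -type f -mtime +7 -delete
find $BACKUPDIR/weekly -type f -mtime +28 -delete
find $BACKUPDIR/monthly -type f -mtime +92 -delete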

Off-site backup

If you want to copy your data to an off-site location, we recommend the external “tarsnap” service. It encrypts your data and then stores it on the Amazon S3 storage platform. S3 is extremely reliable, as the data is replicated across at least three different data centers. Nobody but you has access to the data, as it is encrypted using the best open standards. Not even the NSA can get to it.

Tarsnap charges $0.25 per GB per month (for storage and bandwidth), but only transfers and stores changed data (incremental updates).

How to get up and running in 10 minutes

First, register for an account at tarsnap and deposit some prepaid funds. For an average Magento shop, €100 will pay for a year of backup service.

Second, you will need a private key to encrypt your data. On your Hypernode, run tarsnap-keygen --keyfile ~/tarsnap.key --user <email> --machine <mysite>. You should copy this tarsnap.key file to a safe location, as your backup is inaccessible without it! A local USB stick is a good idea.
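One way to get the key file off your Hypernode is to pull it down with scp; run this from your local machine (the hostname is an example, substitute your own app name):

scp app@yourapp.hypernode.io:tarsnap.key .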

Third, create a tarsnap configuration file:

mkdir -p ~/.tarsnapcache
cat > ~/.tarsnaprc <<EOM
cachedir ~/.tarsnapcache
keyfile ~/tarsnap.key
print-stats
humanize-numbers

exclude .git
exclude *.log
exclude *.zip
exclude *.lock
exclude *.gz
exclude .tarsnapcache
exclude data/web/*/var/*/*
exclude data/web/*/media/catalog/product/cache/*
exclude data/web/*/media/js/*
exclude data/web/*/media/css/*.css
EOM

Fourth, test this setup. The initial upload could take up to several hours.

# Create a backup archive called "backup-WEEKDAY"
tarsnap -c -f backup-`date "+%A"` -v ~

Fifth, implement this as a daily cron job. To make optimal use of Tarsnap's deduplication, you should NOT compress your local database dump and you should always dump to the same file. Let's put this in a tiny script first. Paste the following code into your SSH session:

cat > ~/backup/makebackup.sh <<EOM
#!/bin/sh
TODAY=\$(date "+%A");
flock -n ~/.mysqldump chronic n98-magerun db:dump --root-dir=~/public --no-interaction --strip @stripped ~/backup/mysql-latest.sql; 
flock -n ~/.tarsnap.lock tarsnap -d -f backup-\$TODAY 2>/dev/null; 
flock -n ~/.tarsnap.lock chronic tarsnap -c -f backup-\$TODAY ~ 

EOM
chmod 755 ~/backup/*.sh

NB: If you don't paste the above code but instead create the makebackup.sh file first and then type the lines of code in by hand, you'll have to change part of the code: leave out the \ escape characters before $TODAY and $(date "+%A"), or the script will not work.

If you want to save some space on your Hypernode, you can delete the local dump after each run by adding rm ~/backup/mysql-latest.sql to your backup script, as shown below. This does mean there is no local backup present for recovery, so in case of emergency you will have to retrieve your backup from tarsnap first.
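The end of makebackup.sh would then read as follows (shown here as it appears in the file itself, so without the heredoc \ escapes):

flock -n ~/.tarsnap.lock chronic tarsnap -c -f backup-$TODAY ~
rm ~/backup/mysql-latest.sql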

So, with the database dump now included in the script, your crontab -e would become:

PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=youremailaddress@here
0 2 * * * flock -n ~/.makebackup.lock ~/backup/makebackup.sh 

This will create an archive for every day of the week, overwriting the previous week's archive of the same name. You will only pay for the unique, compressed amount of data; for an average shop, this is about a third of the original size.
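To see how much unique data tarsnap is actually storing for you (and therefore what you are paying for), you can print the global statistics:

tarsnap --print-stats --humanize-numbers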

Restoring

Restore your database:

n98-magerun db:import --root-dir=~/public <backup_filename>
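The nightly crontab entry at the top of this page writes compressed dumps; db:import can read those directly if you pass the compression option (the filename below is an example):

n98-magerun db:import --root-dir=~/public --compression=gz ~/backup/yourdb.sql.gz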

List the archives:

tarsnap --list-archives --verbose

List the contents of a specific archive:

tarsnap -t -f <archive>

Extract a file:

tarsnap -x -f <archive> <file or files>
tarsnap -x -f backup-Monday data/web/site.com/Nieuwsbrief.png

Note: restoring uses the same directory structure as the archive. The command above will recreate “data/web/site.com” relative to the current working directory, so do a cd / first to overwrite the old files in the right location.
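Putting this together, restoring the example file in place would look like:

cd /
tarsnap -x -f backup-Monday data/web/site.com/Nieuwsbrief.png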
