  Using XtraBackup for online incremental backup and recovery of MySQL databases
  Add Date : 2017-08-31      
First, Percona XtraBackup overview

1. Files in the XtraBackup bin directory

1) innobackupex

innobackupex is a symbolic link to xtrabackup. It still supports all of the features and syntax of version 2.2, but is now deprecated and will be removed in the next major release.

2) xtrabackup

A compiled C binary that can back up MySQL database instances containing MyISAM, InnoDB, and XtraDB tables.

3) xbcrypt

Used to encrypt or decrypt backup data.

4) xbstream

Used to create or extract files in the xbstream archive format.

5) xbcloud

A utility for uploading or downloading a full or partial xbstream archive to or from cloud storage.

2. About Percona XtraBackup

Percona XtraBackup is a free, open-source hot backup tool for MySQL. It performs non-blocking backups of InnoDB and XtraDB tables (MyISAM tables are backed up under a table lock).

You can achieve the following benefits (https://www.percona.com/doc/percona-xtrabackup/2.3/intro.html):
Backups that complete quickly and reliably
Uninterrupted transaction processing during backups
Savings on disk space and network bandwidth
Automatic backup verification
Higher uptime due to faster restore time

Create hot InnoDB backups without pausing your database
Make incremental backups of MySQL
Stream compressed MySQL backups to another server
Move tables between MySQL servers on-line
Create new MySQL replication slaves easily
Backup MySQL without adding load to the server
3. How XtraBackup's incremental backup of InnoDB works

1. InnoDB internally maintains a redo/undo log, also called the transaction log, which records every modification to InnoDB table data. When InnoDB starts, it checks the data files against the transaction log and performs two steps: it applies (rolls forward) committed transactions recorded in the log, and rolls back modifications that were never committed.

2. When xtrabackup starts, it records the current log sequence number (LSN) and then copies all the data files. Copying takes time, so if the data files change meanwhile, the copies would reflect different points in time. To handle this, xtrabackup runs a background process that monitors the transaction log and copies each new change out of it. It must do this continuously, because InnoDB writes the transaction log in a circular fashion and log space is reused. From start to finish, xtrabackup records every modification to the data files that appears in the transaction log.

3. That is the backup phase. Next comes the prepare phase: xtrabackup applies the copied transaction log to the copied data files, performing crash recovery on the individual data files just as MySQL does at startup. Once this finishes, the backup can be restored. This logic is implemented in the compiled xtrabackup binary. The innobackupex wrapper adds convenience and the ability to back up MyISAM tables and .frm files: it starts xtrabackup, waits until the InnoDB data files have been copied, executes FLUSH TABLES WITH READ LOCK to flush MyISAM data to disk and block new writes, copies the MyISAM data files, and then releases the lock.

4. The backed-up MyISAM and InnoDB data end up consistent with each other after the prepare phase: the InnoDB data is rolled forward to the point where the backup ended, not rolled back to the point where xtrabackup started. That end point coincides with the moment FLUSH TABLES WITH READ LOCK was taken, so the MyISAM data and the InnoDB data are synchronized. As in Oracle terminology, InnoDB's prepare step can be thought of as recovery, and the MyISAM copy step as restore.

5. Both xtrabackup and innobackupex offer many features not mentioned here; the manual describes each in detail. In short, by copying data files, copying log files, and applying the log to the data files (rolling forward), these tools implement a variety of sophisticated backup types such as streaming and incremental backups.

Second, installing XtraBackup

1. Installation

yum -y install perl perl-devel libaio libaio-devel
yum -y install perl-DBI perl-DBD-MySQL perl-TermReadKey perl-devel perl-Time-HiRes
cd /usr/local/src
wget -c https://www.percona.com/downloads/XtraBackup/Percona-XtraBackup-2.3.2/binary/tarball/percona-xtrabackup-2.3.2-Linux-x86_64.tar.gz
tar -zxf percona-xtrabackup-2.3.2-Linux-x86_64.tar.gz
cd percona-xtrabackup-2.3.2-Linux-x86_64/
mkdir /usr/local/xtrabackup
mv bin /usr/local/xtrabackup/
ln -s /usr/local/xtrabackup/bin/* /usr/bin/

2. Modify my.cnf

datadir = /var/lib/mysql
innodb_data_home_dir = /data/mysql/ibdata
innodb_log_group_home_dir = /data/mysql/iblogs
innodb_data_file_path = ibdata1:10M;ibdata2:10M:autoextend
innodb_log_files_in_group = 2
innodb_log_file_size = 1G

Third, full backup and restore

1. Backup

// Back up all databases
innobackupex --user=root --password=123456 /data/backup/
// Back up a single database
innobackupex --user=root --password=123456 --database=backup_test /data/backup/
// Multiple databases
innobackupex --user=root --password=123456 --include='dba.*|dbb.*' /data/backup/
// Multiple tables
innobackupex --user=root --password=123456 --include='dba.tablea|dbb.tableb' /data/backup/
// Backup with compression
log=zztx01_`date +%F_%H-%M-%S`.log
db=zztx01_`date +%F_%H-%M-%S`.tar.gz
innobackupex --user=root --stream=tar /data/backup 2> /data/backup/$log | gzip 1> /data/backup/$db
// Note: extraction must be done manually, and tar needs the -i flag, otherwise not all files are extracted
// If an error occurs, add --defaults-file=/etc/my.cnf
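The compressed-backup commands above can be wrapped in a small script. This is only an illustrative sketch, not part of XtraBackup: the zztx01 prefix and /data/backup path come from the example above, and the innobackupex invocation (which needs a running MySQL server) is left commented out.

```shell
#!/bin/sh
# Sketch of a dated, compressed full backup using the naming scheme above.
BACKUP_DIR=/data/backup
STAMP=$(date +%F_%H-%M-%S)
LOG="zztx01_${STAMP}.log"
DB="zztx01_${STAMP}.tar.gz"
echo "log file: $LOG"
echo "archive:  $DB"
# The actual backup (requires a running MySQL server and innobackupex on $PATH):
# innobackupex --user=root --stream=tar "$BACKUP_DIR" \
#     2> "$BACKUP_DIR/$LOG" | gzip > "$BACKUP_DIR/$DB"
# Remember: extract the streamed archive with `tar -izxf`; -i is required.
```

Because the timestamp format sorts chronologically, archives listed alphabetically are also listed oldest-first.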

2. Restore

service mysqld stop
mv /data/mysql /data/mysql_bak && mkdir -p /data/mysql
// The --apply-log option prepares the backup so the mysql service can be started on it
innobackupex --defaults-file=/etc/my.cnf --user=root --apply-log /data/backup/2015-09-18_16-35-12
// The --copy-back option copies the data, indexes, and logs from the backup directory back to the locations specified in my.cnf
innobackupex --defaults-file=/etc/my.cnf --user=root --copy-back /data/backup/2015-09-18_16-35-12
chown -R mysql:mysql /data/mysql
service mysqld start
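The restore steps above can be sketched as a guarded dry-run script. This is a hypothetical helper, not an XtraBackup tool: `restore_full` only prints the commands so they can be reviewed (or piped to sh), and it checks the one precondition that most often trips people up, that the target datadir must be empty before --copy-back.

```shell
#!/bin/sh
# Dry-run sketch of the full restore sequence; prints commands instead of
# executing them. restore_full is a hypothetical helper, not an official tool.
restore_full() {
    backup_dir=$1
    datadir=$2
    # --copy-back refuses to run into a non-empty datadir, so check first.
    if [ -d "$datadir" ] && [ -n "$(ls -A "$datadir" 2>/dev/null)" ]; then
        echo "refusing to restore: $datadir is not empty" >&2
        return 1
    fi
    echo "service mysqld stop"
    echo "innobackupex --defaults-file=/etc/my.cnf --apply-log $backup_dir"
    echo "innobackupex --defaults-file=/etc/my.cnf --copy-back $backup_dir"
    echo "chown -R mysql:mysql $datadir"
    echo "service mysqld start"
}

# Paths from the example above (the second path is the datadir to restore into):
restore_full /data/backup/2015-09-18_16-35-12 /data/mysql
```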

Fourth, incremental backup and restore

1. Create a test database and table

create database backup_test; // create the database
CREATE TABLE `backup` ( // create the table
  `name` varchar(20) NOT NULL DEFAULT '',
  `del` tinyint(1) NOT NULL DEFAULT '0'
);
2. Incremental backup

# --incremental: directory in which to store the incremental backup
# --incremental-basedir: the existing backup the increment is taken against
// First incremental backup
mysql> INSERT INTO backup (name) VALUES ('xx'), ('xxxx'); // insert data
innobackupex --user=root --incremental-basedir=/data/backup/2015-09-18_16-35-12 --incremental /data/backup/
// Second incremental backup
mysql> INSERT INTO backup (name) VALUES ('test'), ('testd'); // insert data
innobackupex --user=root --incremental-basedir=/data/backup/2015-09-18_18-05-20 --incremental /data/backup/
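Each new increment must point --incremental-basedir at the most recent backup. Since the timestamped directory names used above (YYYY-MM-DD_HH-MM-SS) sort chronologically, the newest backup can be picked automatically. `latest_backup_dir` is a hypothetical helper written for this sketch, not an XtraBackup command:

```shell
#!/bin/sh
# latest_backup_dir: return the lexically last subdirectory of the given
# directory; with timestamped names this is the most recent backup.
latest_backup_dir() {
    ls -d "$1"/*/ 2>/dev/null | sort | tail -n 1
}

BASE=$(latest_backup_dir /data/backup)
echo "next increment will use --incremental-basedir=$BASE"
# innobackupex --user=root --incremental --incremental-basedir="$BASE" /data/backup/
```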
3. Inspect the incremental backup checkpoint files

[root@localhost 2015-09-18_16-35-12]# cat xtrabackup_checkpoints // full backup directory
backup_type = full-prepared
from_lsn = 0 // a full backup always starts from LSN 0
to_lsn = 23853959
last_lsn = 23853959
compact = 0
[root@localhost 2015-09-18_18-05-20]# cat xtrabackup_checkpoints // first incremental backup directory
backup_type = incremental
from_lsn = 23853959
to_lsn = 23854112
last_lsn = 23854112
compact = 0
[root@localhost 2015-09-18_18-11-43]# cat xtrabackup_checkpoints // second incremental backup directory
backup_type = incremental
from_lsn = 23854112
to_lsn = 23854712
last_lsn = 23854712
compact = 0
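The from_lsn/to_lsn values above form a chain: each increment must start exactly where the previous backup ended. A small sketch that validates this across a list of backup directories (`check_chain` is a hypothetical helper written here, not part of XtraBackup; it only reads the xtrabackup_checkpoints files):

```shell
#!/bin/sh
# Verify that each incremental backup's from_lsn equals the previous
# backup's to_lsn, by reading the xtrabackup_checkpoints file in each dir.
check_chain() {
    prev=""
    for dir in "$@"; do
        from=$(sed -n 's/^from_lsn = //p' "$dir/xtrabackup_checkpoints")
        to=$(sed -n 's/^to_lsn = //p' "$dir/xtrabackup_checkpoints")
        if [ -n "$prev" ] && [ "$from" != "$prev" ]; then
            echo "broken chain at $dir: from_lsn=$from, expected $prev"
            return 1
        fi
        prev=$to
    done
    echo "chain OK, final to_lsn=$prev"
}
# Usage with the directories from the example above:
# check_chain /data/backup/2015-09-18_16-35-12 \
#             /data/backup/2015-09-18_18-05-20 \
#             /data/backup/2015-09-18_18-11-43
```

Running this before a merge catches a missing or out-of-order increment early, instead of failing halfway through --apply-log.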
After the incremental backups are done, drop the backup_test database (drop database backup_test;) so the restore can be verified against it later.
4. Incremental restore

Two steps:
innobackupex --apply-log /path/to/BACKUP-DIR
// After this step the data can be accessed; the --use-memory option sets the amount of memory used to speed up the prepare step (default 100M)
innobackupex --copy-back /path/to/BACKUP-DIR
// Reads the datadir, innodb_data_home_dir, and innodb_data_file_path variables from my.cnf
// MyISAM tables are copied first, then InnoDB tables, and finally the log files; the datadir must be empty
Merging the increments:
innobackupex --apply-log --redo-only /data/backup/2015-09-18_16-35-12
innobackupex --apply-log --redo-only --incremental /data/backup/2015-09-18_16-35-12 --incremental-dir=/data/backup/2015-09-18_18-05-20
innobackupex --apply-log --incremental /data/backup/2015-09-18_16-35-12 --incremental-dir=/data/backup/2015-09-18_18-11-43
# /data/backup/2015-09-18_16-35-12 full backup directory
# /data/backup/2015-09-18_18-05-20 first incremental backup directory
# /data/backup/2015-09-18_18-11-43 second incremental backup directory
# Note: --redo-only is used for the base backup and every increment except the last;
# the last increment is applied without it so that uncommitted transactions are rolled back.
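The merge sequence generalizes to any number of increments: --redo-only for the base and all increments except the last, then the last increment without it. A dry-run sketch that prints the commands in the required order (`merge_plan` is a hypothetical helper written for illustration, not an XtraBackup command):

```shell
#!/bin/sh
# merge_plan: print the innobackupex merge commands for a base backup plus
# one or more increments, in the required order. Pipe the output to sh to run.
merge_plan() {
    full=$1; shift
    echo "innobackupex --apply-log --redo-only $full"
    while [ $# -gt 1 ]; do
        # intermediate increments keep --redo-only
        echo "innobackupex --apply-log --redo-only --incremental $full --incremental-dir=$1"
        shift
    done
    # last increment: no --redo-only, so uncommitted transactions are rolled back
    echo "innobackupex --apply-log --incremental $full --incremental-dir=$1"
}

# Directories from the example above:
merge_plan /data/backup/2015-09-18_16-35-12 \
    /data/backup/2015-09-18_18-05-20 \
    /data/backup/2015-09-18_18-11-43
```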
Data recovery:
service mysqld stop
innobackupex --copy-back /data/backup/2015-09-18_16-35-12
chown -R mysql:mysql /data/mysql
service mysqld start

Fifth, common innobackupex parameters

--defaults-file
Same as the xtrabackup --defaults-file parameter.

--apply-log
A wrapper around xtrabackup's --prepare parameter.

--copy-back
When restoring, copies the backed-up data files to the MySQL server's datadir.

--remote-host=HOSTNAME
Stores the backup on the given remote host over ssh.

--stream=[tar]
Output format of the backup. tar streaming uses tar4ibd, which ships in the XtraBackup binary package; when --stream=tar is specified, the tar4ibd binary must be in your $PATH.
While streaming with --stream=tar, xtrabackup_logfile may be placed temporarily in /tmp. If there are many concurrent writes during the backup, xtrabackup_logfile can grow very large (5G+) and fill /tmp; use the --tmpdir parameter to point it at another directory.

--tmpdir=DIRECTORY
When --remote-host or --stream is used, the directory where the transaction log is stored temporarily; defaults to the tmpdir specified in the MySQL configuration file.

--redo-only (used with --apply-log)
Forces only the redo phase of log application, skipping rollback. This is necessary when applying incremental backups.

--use-memory=#
Used during prepare; controls the amount of memory the embedded InnoDB instance uses.

--throttle=IOS
Same as the xtrabackup --throttle parameter.

--sleep=
For ibbackup: specifies how many milliseconds the copy process pauses after each 1MB of data backed up, to minimize the impact of the backup on the normal workload; see the ibbackup manual for details.

--compress[=LEVEL]
Compresses the backup data; supported only by ibbackup, not yet implemented in xtrabackup.

--include=REGEXP
A wrapper for xtrabackup's --tables parameter, also supported by ibbackup. Specifies which databases and tables to back up, e.g. --include="test.*" backs up all tables in the test database. For a full backup, omit this parameter; to back up only tables test1 and test2 in database test, write --include="test.test1|test.test2". Wildcards also work, e.g. --include="test.test*".

--databases=LIST
Lists the databases to back up; if not specified, all databases containing MyISAM and InnoDB tables are backed up.

Decompressing backed-up data files is supported by ibbackup; xtrabackup has not yet implemented this feature.

--slave-info
When backing up from a slave, adds a file named xtrabackup_slave_info to the backup directory containing the master's log file name and offset, with contents like: CHANGE MASTER TO MASTER_LOG_FILE='', MASTER_LOG_POS=0

--socket=SOCKET
Specifies the location of mysql.sock so the backup process can log in to mysql.