Fabric is a MySQL cluster management tool developed by Oracle that provides both sharding and read/write splitting. In my opinion the current version still has quite a few rough edges, but it should improve over time and become a genuinely useful tool.
Step One: Download and install Fabric
Fabric is distributed as part of MySQL Utilities, which can be downloaded from the official MySQL site: http://dev.mysql.com/downloads/utilities/
I downloaded the source package, mysql-utilities-1.5.6.zip. Since it is written in Python, installation differs from a typical C-based package:
$ unzip mysql-utilities-1.5.6.zip
$ cd mysql-utilities-1.5.6/
$ python setup.py build
$ sudo python setup.py install
After installation, the Python scripts (with the .py suffix removed) are placed in /usr/local/bin/ by default and can be executed directly.
Step Two: Deploy multiple MySQL 5.6 instances
Before running Fabric we first need to prepare a number of MySQL databases; how many depends on our requirements. For basic read/write splitting we need a master-slave deployment. Fabric sets up master-slave replication using MySQL's GTID feature, so MySQL 5.6 or later is required. Also, because MariaDB implements GTID differently from MySQL, Fabric does not support MariaDB and will report an error if you try to use it; only MySQL 5.6 and above works.
To see Fabric in action we need at least three MySQL instances, listed below:
Role                   Address     Port    Data path             Config file
Fabric metadata store  localhost   10000   /dev/shm/data/fa00    fabric/fa00.cnf
Business database 1    localhost   10011   /dev/shm/data/fa11    fabric/fa11.cnf
                       localhost   10012   /dev/shm/data/fa12    fabric/fa12.cnf
TIP: Since I am starting many databases on a single machine, I put the data files on the memory-backed filesystem; on my machine this defaults to /dev/shm, while on other machines it may be /run/shm or similar. The exact path does not matter, as long as the parent directory is created in advance. Because memory is being used as disk, the memory left for MySQL itself shrinks, so I also lowered MySQL's own memory requirements; adjust this to your situation. Note: a production environment must not be configured this way.
First we write the configuration files; taking fa00.cnf as an example, it should read as follows:
# Adjust the numbers in the six lines below for each instance; no two instances may use the same values. Here we change the digits corresponding to "00".
[client]
port = 10000
socket = /tmp/fa00.sock
[mysqld]
port = 10000
socket = /tmp/fa00.sock
datadir = /dev/shm/data/fa00
server-id = 10000
user = lyw
# Master-slave replication related
log-bin = mysql-bin
gtid-mode = on
log-slave-updates = true
enforce-gtid-consistency = true
# File and memory sizes kept small to save memory.
innodb_buffer_pool_size = 32M
innodb_log_file_size = 5M
After preparing the three configuration files fa00.cnf, fa11.cnf, and fa12.cnf, we initialize the data directories and start the instances. To cut down on manual work we do this in batch: in the MySQL installation directory create the script init_start.sh and the database initialization file fabric.sql shown below, then run init_start.sh to create and start all the databases.
fabric.sql content:
use mysql;
delete from user where user = '';
flush privileges;
grant all on *.* to 'fabric'@'%' identified by '123456';
create database lyw;
reset master;
init_start.sh content:
#!/bin/bash
mkdir -p /dev/shm/data
for cnf in `ls fabric/*.cnf`
do
    scripts/mysql_install_db --defaults-file=$cnf
    bin/mysqld --defaults-file=$cnf &
done
# Wait a moment for the mysqld instances to finish starting
sleep 3
for cnf in `ls fabric/*.cnf`
do
    bin/mysql --defaults-file=$cnf -uroot < fabric.sql
done
Once the scripts are ready, run init_start.sh to initialize and start all the databases; you can then connect with the mysql client to verify that initialization succeeded. Note that we do not need to run the change master to statement ourselves to set up replication: Fabric executes it for us. With the databases ready, the real Fabric configuration begins.
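Before handing the instances over to Fabric, you can quickly confirm that GTID replication is enabled on each one. A minimal check in Python (a sketch; it assumes the mysql-connector-python package is installed and uses the fabric account created by fabric.sql):
import mysql.connector

# Ports of the instances started by init_start.sh
for port in (10000, 10011, 10012):
    c = mysql.connector.connect(host='127.0.0.1', port=port, user='fabric', password='123456')
    cur = c.cursor()
    cur.execute("SELECT @@server_id, @@gtid_mode, @@log_bin")
    print("%d: %s" % (port, cur.fetchall()))
    c.close()
Each instance should report gtid_mode ON and the binary log enabled.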
Step Three: Configure Fabric master-slave read/write splitting
Fabric's default configuration file path is /usr/local/etc/mysql/fabric.cfg; with other installation methods it may be /etc/mysql/fabric.cfg (it varies by system, so check your own setup). To keep the following steps simple we use the default path, but you can also point to a configuration file explicitly with the --config option.
fabric.cfg reads as follows:
[DEFAULT]
prefix = /usr/local
sysconfdir = /usr/local/etc
logdir = /var/log
# The [storage] section points to the database where Fabric stores its metadata
[storage]
address = localhost: 10000
user = fabric
password = 123456
database = fabric
auth_plugin = mysql_native_password
connection_timeout = 6
connection_attempts = 6
connection_delay = 1
[servers]
user = fabric
password = 123456
backup_user = fabric_backup
backup_password = secret
restore_user = fabric_restore
restore_password = secret
unreachable_timeout = 5
# Protocol Fabric exposes to clients; this one is XML-RPC
[protocol.xmlrpc]
address = localhost: 32274
threads = 5
user = admin
password = 123456
disable_authentication = yes
realm = MySQL Fabric
ssl_ca =
ssl_cert =
ssl_key =
# Protocol Fabric exposes using the MySQL wire protocol; you connect with a mysql client, but the operations differ from a normal database
[protocol.mysql]
address = localhost: 32275
user = admin
password = 123456
disable_authentication = yes
ssl_ca =
ssl_cert =
ssl_key =
[executor]
executors = 5
[logging]
level = INFO
url = file:///var/log/fabric.log
[sharding]
mysqldump_program = /usr/bin/mysqldump
mysqlclient_program = /usr/bin/mysql
[statistics]
prune_time = 3600
[failure_tracking]
notifications = 300
notification_clients = 50
notification_interval = 60
failover_interval = 0
detections = 3
detection_interval = 6
detection_timeout = 1
prune_time = 3600
[connector]
ttl = 1
Once the configuration is in place, initialize the Fabric metadata:
$ mysqlfabric manage setup
......
Finishing initial setup
=======================
Password for admin user is not yet set.
Password for admin/xmlrpc: (set the admin password here)
Repeat Password: (repeat the password)
Password set.
Password set.
At this point the fa00 instance contains a fabric database holding Fabric's metadata, which we can inspect with the mysql client.
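If you are curious what was written, you can peek at the metadata schema directly. A small sketch that connects to the backing store on port 10000 with the credentials from fabric.cfg:
import mysql.connector

c = mysql.connector.connect(host='127.0.0.1', port=10000, user='fabric', password='123456', database='fabric')
cur = c.cursor()
cur.execute("SHOW TABLES")   # the group, server and shard metadata tables live here
for (table,) in cur.fetchall():
    print(table)
c.close()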
Then start Fabric:
$ mysqlfabric manage start
Fabric is now running, but it is not yet associated with the two business databases we configured earlier, so we need to establish that link from the command line. Let's first look at the group-related commands in mysqlfabric's help:
$ mysqlfabric help group
Commands available in group 'group' are:
group activate group_id [--synchronous]
group description group_id [--description=NONE] [--synchronous]
group deactivate group_id [--synchronous]
group create group_id [--description=NONE] [--synchronous]
group remove group_id server_id [--synchronous]
group add group_id address [--timeout=NONE] [--update_only] [--synchronous]
group health group_id
group lookup_servers group_id [--server_id=NONE] [--status=NONE] [--mode=NONE]
group destroy group_id [--synchronous]
group demote group_id [--update_only] [--synchronous]
group promote group_id [--slave_id=NONE] [--update_only] [--synchronous]
group lookup_groups [--group_id=NONE]
First we create a group named group-1:
$ mysqlfabric group create group-1
Then add the two business databases, 10011 and 10012, to the group:
$ mysqlfabric group add group-1 127.0.0.1:10011
$ mysqlfabric group add group-1 127.0.0.1:10012
Now we can check the status of group-1:
$ mysqlfabric group lookup_servers group-1
Fabric UUID: 5ca1ab1e-a007-feed-f00d-cab3fe13249e
Time-To-Live: 1
server_uuid address status mode weight
------------------------------------ -------------- ----------- --------- ------
d4919ca2-754a-11e5-8a5e-34238703623c 127.0.0.1:10011 SECONDARY READ_ONLY 1.0
d6597f06-754a-11e5-8a5e-34238703623c 127.0.0.1:10012 SECONDARY READ_ONLY 1.0
As the status and mode fields show, the two servers we just added are not yet active: both are read-only secondaries, and no real replication relationship exists between them. We need to promote one of them to a writable primary:
$ mysqlfabric group promote group-1
$ mysqlfabric group lookup_servers group-1
Fabric UUID: 5ca1ab1e-a007-feed-f00d-cab3fe13249e
Time-To-Live: 1
server_uuid address status mode weight
------------------------------------ -------------- ----------- ------ ----------
d4919ca2-754a-11e5-8a5e-34238703623c 127.0.0.1:10011 SECONDARY READ_ONLY 1.0
d6597f06-754a-11e5-8a5e-34238703623c 127.0.0.1:10012 PRIMARY READ_WRITE 1.0
After the promote command, one of the servers is promoted to primary and the other becomes its slave. You can also verify the relationship from the mysql client: running the command below on the slave produces the output shown, while running it on the master returns nothing.
mysql> show slave status \G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 127.0.0.1
Master_User: fabric
Master_Port: 10012
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 151
Relay_Log_File: lyw-hp-relay-bin.000002
Relay_Log_Pos: 361
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
......
At this point the basic master-slave setup is usable, and we can't wait to try it from a client. Official connectors exist for Python and Java, so if you use Fabric it is best if your application is written in one of those two languages.
Fabric itself is written in Python, so we use the Python client as the example here. Open an interactive Python session:
$ python
Python 2.7.9 (default, Apr 2 2015, 15:33:21)
[GCC 4.9.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import mysql.connector
>>> from mysql.connector import fabric
>>> conn = mysql.connector.connect(fabric={"host": "localhost", "port": 32274, "username": "lyw", "password": "123456"}, user="fabric", password="123456", autocommit=True)
>>> conn.set_property(group='group-1', scope=fabric.SCOPE_GLOBAL, mode=fabric.MODE_READWRITE)
>>> cur = conn.cursor()
>>> cur.execute('create database lyw')
>>> cur.execute('use lyw;')
>>> cur.execute('create table t1 (id int, v varchar(32))')
>>> cur.execute('insert into t1 values (1, "aaa"), (2, "bbb")')
>>> cur.execute('select * from t1')
>>> cur.fetchall()
[(1, u'aaa'), (2, u'bbb')]
# Switch to read-only mode so the following statements run on the slave; after switching we need to run 'use lyw' again
>>> conn.set_property(group='group-1', scope=fabric.SCOPE_GLOBAL, mode=fabric.MODE_READONLY)
>>> cur.execute('use lyw')
>>> cur.execute('select * from t1')
>>> cur.fetchall()
[(1, u'aaa'), (2, u'bbb')]
This Python example shows the basic way of using Fabric with a master-slave group.
Note: do not perform writes on the slave, or the data will not match expectations and will not be synchronized to the master.
At this point, if the primary database goes down, writes become unavailable, and Fabric will not fail over automatically. To enable automatic failover, run:
$ mysqlfabric group activate group-1
A few seconds after the master goes down:
$ mysqlfabric group lookup_servers group-1
Fabric UUID: 5ca1ab1e-a007-feed-f00d-cab3fe13249e
Time-To-Live: 1
server_uuid address status mode weight
------------------------------------ -------------- --------- ---------- ------
a30d844d-7550-11e5-8a84-34238703623c 127.0.0.1:10011 PRIMARY READ_WRITE 1.0
a4d66a55-7550-11e5-8a84-34238703623c 127.0.0.1:10012 FAULTY READ_WRITE 1.0
If we now kill the master and check the status a few seconds later, we find that the slave has been switched to primary, while the original primary is marked FAULTY.
However, when the crashed master is restarted it is not automatically added back to the cluster; you need to remove it and then re-add it:
$ mysqlfabric group remove group-1 127.0.0.1:10012
$ mysqlfabric group add group-1 127.0.0.1:10012
$ mysqlfabric group lookup_servers group-1
Fabric UUID: 5ca1ab1e-a007-feed-f00d-cab3fe13249e
Time-To-Live: 1
server_uuid address status mode weight
------------------------------------ -------------- ----------- ------ ----------
a30d844d-7550-11e5-8a84-34238703623c 127.0.0.1:10011 PRIMARY READ_WRITE 1.0
a4d66a55-7550-11e5-8a84-34238703623c 127.0.0.1:10012 SECONDARY READ_ONLY 1.0
It comes back as a read-only slave, and if you check its data you will find that the rows written during its downtime have been replicated back (this takes a little while).
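On the application side, the simplest way to ride out such a failover is to catch the error, reconnect, and retry, letting Fabric route the new connection to the freshly promoted primary. A rough sketch using only the connector calls shown above (the retry count and delay are arbitrary; tune them for your environment):
import time
import mysql.connector
from mysql.connector import fabric

FABRIC = {"host": "localhost", "port": 32274, "username": "lyw", "password": "123456"}

def write_with_retry(sql, retries=3):
    for attempt in range(retries):
        try:
            conn = mysql.connector.connect(fabric=FABRIC, user="fabric", password="123456", autocommit=True)
            conn.set_property(group='group-1', scope=fabric.SCOPE_GLOBAL, mode=fabric.MODE_READWRITE)
            cur = conn.cursor()
            cur.execute(sql)
            return
        except mysql.connector.Error:
            time.sleep(2)   # give Fabric a few seconds to promote the slave
    raise RuntimeError("write failed after %d attempts" % retries)

write_with_retry("insert into lyw.t1 values (3, 'ccc')")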
Step Four: Sharding plus master-slave deployment
With the scripts we prepared earlier, deployment is easy. Here I first wipe the previous data and redeploy from scratch; readers can proceed however they prefer.
This time we deploy the databases listed below:
Role                      Address     Port    Data path             Config file
Fabric metadata store     localhost   10000   /dev/shm/data/fa00    fabric/fa00.cnf
Global business database  localhost   10091   /dev/shm/data/fa91    fabric/fa91.cnf
                          localhost   10092   /dev/shm/data/fa92    fabric/fa92.cnf
Business database 1       localhost   10011   /dev/shm/data/fa11    fabric/fa11.cnf
                          localhost   10012   /dev/shm/data/fa12    fabric/fa12.cnf
Business database 2       localhost   10021   /dev/shm/data/fa21    fabric/fa21.cnf
                          localhost   10022   /dev/shm/data/fa22    fabric/fa22.cnf
Business database 3       localhost   10031   /dev/shm/data/fa31    fabric/fa31.cnf
                          localhost   10032   /dev/shm/data/fa32    fabric/fa32.cnf
Copy the earlier configuration files and adjust the key values for each instance, so that the nine config files fa00.cnf through fa32.cnf are ready.
Then initialize the environment:
$ sh init_start.sh
$ mysqlfabric manage setup
$ mysqlfabric manage start
Just three commands and initialization is done; easy enough.
Next we create four Fabric groups: group-g, group-1, group-2, and group-3:
$ mysqlfabric group create group-g
$ mysqlfabric group create group-1
$ mysqlfabric group create group-2
$ mysqlfabric group create group-3
Then place the databases into the four groups:
$ mysqlfabric group add group-g 127.0.0.1:10091
$ mysqlfabric group add group-g 127.0.0.1:10092
$ mysqlfabric group add group-1 127.0.0.1:10011
$ mysqlfabric group add group-1 127.0.0.1:10012
$ mysqlfabric group add group-2 127.0.0.1:10021
$ mysqlfabric group add group-2 127.0.0.1:10022
$ mysqlfabric group add group-3 127.0.0.1:10031
$ mysqlfabric group add group-3 127.0.0.1:10032
And promote a master in each group:
$ mysqlfabric group promote group-g
$ mysqlfabric group promote group-1
$ mysqlfabric group promote group-2
$ mysqlfabric group promote group-3
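Typing all of these commands by hand gets tedious, so the same setup can be scripted. A small helper sketch that shells out to mysqlfabric (it assumes mysqlfabric is on the PATH and uses the port assignments from the table above):
import subprocess

def fab(*args):
    # Run one mysqlfabric command and fail loudly if it returns an error
    subprocess.check_call(['mysqlfabric'] + list(args))

groups = {
    'group-g': ['127.0.0.1:10091', '127.0.0.1:10092'],
    'group-1': ['127.0.0.1:10011', '127.0.0.1:10012'],
    'group-2': ['127.0.0.1:10021', '127.0.0.1:10022'],
    'group-3': ['127.0.0.1:10031', '127.0.0.1:10032'],
}

for name, servers in sorted(groups.items()):
    fab('group', 'create', name)
    for addr in servers:
        fab('group', 'add', name, addr)
    fab('group', 'promote', name)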
So far we have only created four independent groups, using nothing but group-related commands. Next comes the highlight, sharding, which uses the sharding commands. Let's look at the sharding help first:
$ mysqlfabric help sharding
Commands available in group 'sharding' are:
sharding list_definitions
sharding remove_definition shard_mapping_id [--synchronous]
sharding move_shard shard_id group_id [--update_only] [--synchronous]
sharding disable_shard shard_id [--synchronous]
sharding remove_table table_name [--synchronous]
sharding split_shard shard_id group_id [--split_value=NONE] [--update_only] [--synchronous]
sharding create_definition type_name group_id [--synchronous]
sharding add_shard shard_mapping_id groupid_lb_list [--state=DISABLED] [--synchronous]
sharding add_table shard_mapping_id table_name column_name [--synchronous]
sharding lookup_table table_name
sharding enable_shard shard_id [--synchronous]
sharding remove_shard shard_id [--synchronous]
sharding list_tables sharding_type
sharding prune_shard table_name [--synchronous]
sharding lookup_servers table_name key [--hint=LOCAL]
Well, first we need to create a definition:
$ mysqlfabric sharding create_definition RANGE group-g
Fabric UUID: 5ca1ab1e-a007-feed-f00d-cab3fe13249e
Time-To-Live: 1
uuid finished success result
------------------------------------ -------- ------ --------
c106bd7a-e8f8-405e-97ec-6886cec87346 1 1 1
Note the number in the result field: this is the shard_mapping_id, which the following operations need.
Next add a table definition; the three parameters are the shard_mapping_id, the table name, and the sharding column:
$ mysqlfabric sharding add_table 1 lyw.table1 id
Then add the shard ranges, which is where the three business groups come into play:
$ mysqlfabric sharding add_shard 1 "group-1/0, group-2/10000, group-3/20000" --state=ENABLED
Because we are using RANGE sharding here, each group name is followed by the starting value of its range.
The trailing --state=ENABLED makes the shards take effect immediately.
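To make the boundary semantics concrete, here is a tiny stand-alone illustration (plain Python, not Fabric code) of how a key falls into a group under the ranges 0 / 10000 / 20000 defined above:
import bisect

# Lower bounds and their groups, in the same order as passed to add_shard
bounds = [0, 10000, 20000]
groups = ['group-1', 'group-2', 'group-3']

def group_for(key):
    # A key belongs to the last range whose lower bound is <= key
    return groups[bisect.bisect_right(bounds, key) - 1]

for key in (5, 9999, 10000, 19999, 25000):
    print("%d -> %s" % (key, group_for(key)))
# 5 and 9999 go to group-1, 10000 and 19999 to group-2, 25000 to group-3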
The RANGE-based sharding is now in place, and we can't wait to test it. Again we use the Python interface:
$ python
Python 2.7.9 (default, Apr 2 2015, 15:33:21)
[GCC 4.9.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import mysql.connector
>>> from mysql.connector import fabric
>>> conn = mysql.connector.connect(fabric={"host": "localhost", "port": 32274, "username": "lyw", "password": "123456"}, user="fabric", password="123456", autocommit=True)
>>> conn.set_property(group='group-g', scope=fabric.SCOPE_GLOBAL, mode=fabric.MODE_READWRITE)
>>> cur = conn.cursor()
>>> cur.execute('create database lyw')
>>> cur.execute('use lyw;')
>>> cur.execute('create table table1 (id int, v varchar(32))')
If we now look directly at the eight business databases, we find that every one of them has the lyw database and the table1 table. Because we used scope=fabric.SCOPE_GLOBAL, the statements were executed on the global master 10091 and then replicated to all the other databases; keep in mind that the data on the other databases lags slightly behind.
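You can verify the propagation yourself by connecting directly to one of the shard servers, bypassing Fabric. A quick sketch (10011 is group-1's first instance here; after a promote the primary may be the other member of the group):
import mysql.connector

# Direct connection to a shard server, not through Fabric
c = mysql.connector.connect(host='127.0.0.1', port=10011, user='fabric', password='123456', database='lyw')
cur = c.cursor()
cur.execute("SHOW TABLES")
print(cur.fetchall())   # table1 should appear once replication has caught up
c.close()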
Next we insert data into the shards. Fabric's sharded writes are a little cumbersome, so two helper functions make them more convenient; the following code can be pasted straight into the interactive session.
import random

def ins(conn, table, key):
    conn.set_property(tables=[table], key=key, scope=fabric.SCOPE_LOCAL, mode=fabric.MODE_READWRITE)
    cur = conn.cursor()
    # cur.execute('use lyw;')
    cur.execute("insert into %s values ('%s', 'aaa')" % (table, key))

def rand_ins(conn, table, count):
    for i in range(count):
        key = random.randint(0, 30000)
        ins(conn, table, key)
Then call rand_ins in the interactive session to insert a batch of rows with random keys:
>>> connl = mysql.connector.connect(fabric={"host": "localhost", "port": 32274, "username": "lyw", "password": "123456"}, user="fabric", password="123456", autocommit=True)
>>> rand_ins(connl, 'lyw.table1', 100)
This inserts 100 rows with random keys. Connecting with the mysql client directly to the six business libraries, we can see that each group holds its own slice of the data: the first group only has rows with keys below 10,000, the second group has keys from 10,000 to 20,000, and the third group has keys of 20,000 and above; the total row count across the groups is 100, matching the number we inserted.
Note: with sharding, whenever the key changes we must set the routing property again and obtain a new cursor and database, i.e. repeat the first three lines of the ins function; it is simplest to write the database-qualified table name directly in the SQL statement. Otherwise the data will not be inserted where you expect. A read helper following the same pattern is sketched below.
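Reads follow exactly the same pattern: set the routing property with the key first, then query. A small companion to ins, sketched under the same assumptions (read-only mode routes the query to a slave in the owning group, so very recent writes may not be visible yet):
def sel(conn, table, key):
    # Route the query to the shard that owns this key; read-only is enough
    conn.set_property(tables=[table], key=key, scope=fabric.SCOPE_LOCAL, mode=fabric.MODE_READONLY)
    cur = conn.cursor()
    cur.execute("select * from %s where id = %s" % (table, key))
    return cur.fetchall()
For example, sel(connl, 'lyw.table1', 12345) returns the matching rows from whichever group owns key 12345.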
Besides RANGE sharding, Fabric also provides HASH, RANGE_DATETIME, and RANGE_STRING.
Let's look at HASH sharding next.
HASH sharding
$ mysqlfabric sharding create_definition HASH group-g
Fabric UUID: 5ca1ab1e-a007-feed-f00d-cab3fe13249e
Time-To-Live: 1
uuid finished success result
------------------------------------ -------- ------ --------
9ba5c378-9f99-43bc-8f54-7580cff565f6 1 1 2
$ mysqlfabric sharding add_table 2 lyw.table2 id
$ mysqlfabric sharding add_shard 2 "group-1, group-2, group-3" --state=ENABLED
That completes the configuration: just three commands, and in HASH mode you do not append a value after each group name, because the system derives the target group automatically from the hash of the key.
Let's test it:
>>> conn = mysql.connector.connect(fabric={"host": "localhost", "port": 32274, "username": "lyw", "password": "123456"}, user="fabric", password="123456", autocommit=True)
>>> conn.set_property(group='group-g', scope=fabric.SCOPE_GLOBAL, mode=fabric.MODE_READWRITE)
>>> cur = conn.cursor()
>>> cur.execute('use lyw;')
>>> cur.execute('create table table2 (id int, v varchar(32))')
>>> connl = mysql.connector.connect(fabric={"host": "localhost", "port": 32274, "username": "lyw", "password": "123456"}, user="fabric", password="123456", autocommit=True)
>>> rand_ins(connl, 'lyw.table2', 100)
Again every library ends up with some of the data, but there is no obvious pattern, and you may find that the row counts per database differ widely, sometimes by as much as a factor of ten depending on luck. In my view Fabric's HASH method is not good enough: the distribution is too uneven, so I do not recommend the HASH approach.
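To see the skew for yourself, count the rows on each shard master directly (a sketch assuming the ports from the table above and that the first instance of each group is still the primary):
import mysql.connector

for port in (10011, 10021, 10031):   # masters of group-1, group-2, group-3
    c = mysql.connector.connect(host='127.0.0.1', port=port, user='fabric', password='123456', database='lyw')
    cur = c.cursor()
    cur.execute("SELECT COUNT(*) FROM table2")
    print("%d: %s rows" % (port, cur.fetchone()[0]))
    c.close()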
RANGE_DATETIME sharding
$ mysqlfabric sharding create_definition RANGE_DATETIME group-g
$ mysqlfabric sharding add_table 3 lyw.table3 dt
$ mysqlfabric sharding add_shard 3 'group-1/2015-1-1, group-2/2015-2-1, group-3/2015-3-1' --state=ENABLED
Three commands are enough. Remember that when writing data the key must be a datetime.date value; datetime.datetime cannot be used.
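For example, assuming lyw.table3 has been created on the global group with columns (dt datetime, v varchar(32)) like the earlier tables, routing a write for a particular day looks like this; note the datetime.date key:
import datetime

key = datetime.date(2015, 2, 15)   # falls into group-2 (range 2015-2-1 .. 2015-3-1)
connl.set_property(tables=['lyw.table3'], key=key, scope=fabric.SCOPE_LOCAL, mode=fabric.MODE_READWRITE)
cur = connl.cursor()
cur.execute("insert into lyw.table3 values ('%s', 'aaa')" % key)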
You can partition by time this way when you need to, but personally I think time-series data is usually hottest for the most recent period, so Fabric sharding is not a great fit; MySQL's own time-based partitioning works better.
Note: the current version of Fabric has a bug in its datetime handling; if you need hour/minute/second precision you have to patch three places in the code before running it. I only did a brief test, and there may be other bugs, so I will not post the diff here.
RANGE_STRING sharding
$ mysqlfabric sharding create_definition RANGE_STRING group-g
$ mysqlfabric sharding add_table 4 lyw.table4 name
$ mysqlfabric sharding add_shard 4 'group-1/a, group-2/c, group-3/e' --state=ENABLED
With this setup, rows whose name starts with 'a' or 'b' go to group-1, rows starting with 'c' or 'd' go to group-2, and anything greater than or equal to 'e' goes to group-3. This style of sharding is quite useful, and I recommend it. You can test it with the ins function above, as sketched below.
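A quick sketch of routing with a string key (assuming lyw.table4 has been created on the global group with columns (name varchar(32), v varchar(32))):
key = 'david'   # starts with 'd', so the row lands in group-2 (range c .. e)
connl.set_property(tables=['lyw.table4'], key=key, scope=fabric.SCOPE_LOCAL, mode=fabric.MODE_READWRITE)
cur = connl.cursor()
cur.execute("insert into lyw.table4 values ('%s', 'aaa')" % key)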
These are the basic functions of Fabric. Digging a little deeper you will also find shortcomings that may make it unsuitable for your business scenario. I have also written about deploying and using a Cobar cluster, which you can refer to, and in a few days I plan to write an article comparing Fabric and Cobar to help you choose between them.