Percona XtraDB Cluster Debian howto
Installing a Percona XtraDB MySQL cluster on Debian Squeeze
Advantages of the Percona XtraDB MySQL Cluster:
- Synchronous replication
- Multi-master replication support
- Parallel replication
- Automatic node provisioning
Required for this installation:
Three servers/VMs running Debian Squeeze or later
Preparing the Debian servers for installation
Create the hosts entries.
192.168.70.110   percona-mysql01.example.local   percona-mysql01
192.168.70.111   percona-mysql02.example.local   percona-mysql02
192.168.70.112   percona-mysql03.example.local   percona-mysql03
Preparing APT
gpg -a --export CD2EFD2A | apt-key add -
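The export assumes the Percona key CD2EFD2A is already in the local GnuPG keyring. If it is not, it can be fetched from a keyserver first (a sketch; the keyserver hostname is an assumption, not part of this howto):

```shell
# Fetch the Percona repository key into the local keyring first
gpg --keyserver keys.gnupg.net --recv-keys CD2EFD2A
```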
Edit the APT sources list
#
# Percona XtraDB MySQL
#
deb http://repo.percona.com/apt squeeze main
deb-src http://repo.percona.com/apt squeeze main
Install the required packages
apt-get update
apt-get install percona-xtradb-cluster-client-5.5 percona-xtradb-cluster-server-5.5 percona-xtradb-cluster-galera-2.x percona-xtrabackup mytop
During the installation a root password can be assigned.
This is not strictly required, but it should be set.
Store the root password in /root/.my.cnf
[client]
user=root
password=ROOT_PASSWORT
Because of the way the Debian packages work, the password of the debian-sys-maint user must also be changed. It has to be identical on all servers.
To change the password in the MySQL database, run:
GRANT ALL PRIVILEGES ON *.* TO 'debian-sys-maint'@'localhost' IDENTIFIED BY 'GLEICHES_PASSWORT' WITH GRANT OPTION;
FLUSH PRIVILEGES;
Enter the chosen password in /etc/mysql/debian.cnf.
# Automatically generated for Debian scripts. DO NOT TOUCH!
[client]
host     = localhost
user     = debian-sys-maint
password = GLEICHES_PASSWORT
socket   = /var/run/mysqld/mysqld.sock
[mysql_upgrade]
host     = localhost
user     = debian-sys-maint
password = GLEICHES_PASSWORT
socket   = /var/run/mysqld/mysqld.sock
basedir  = /usr
Once all of this has been done on all three nodes, stop MySQL for now.
On nodes 02 and 03, delete the contents of the /var/lib/mysql directory.
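The two steps above can be sketched as the following commands (an illustration; the init script name mysql matches the Debian package, and the rm is destructive, so only run it on nodes 02 and 03):

```shell
# On all three nodes: stop MySQL
/etc/init.d/mysql stop

# On nodes 02 and 03 ONLY: clear the data directory so the
# cluster performs a full state transfer (SST) on first join
rm -rf /var/lib/mysql/*
```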
Configure the first node
[mysqld]
wsrep_provider=/usr/lib/libgalera_smm.so
wsrep_cluster_address=gcomm://percona-mysql02:4567,percona-mysql03:4567,0.0.0.0:4567
wsrep_slave_threads=8
wsrep_node_name=node1
wsrep_sst_method=xtrabackup
wsrep_sst_auth=root:ROOT_PASSWORT
wsrep_cluster_name=MYSQL_CLUSTER_NAME
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
innodb_file_per_table
datadir=/var/lib/mysql
user=mysql
Explanation of the settings:
- wsrep_cluster_address=gcomm://
  The participating nodes are listed here.
- wsrep_slave_threads=8
  The number of replication threads.
- wsrep_node_name=node1
  The name of the node; ideally always node1, node2, node3, ...
- wsrep_sst_method=xtrabackup
  The replication method used when the node is out of date. The options are rsync and xtrabackup.
- wsrep_sst_auth=root:ROOT_PASSWORT
  xtrabackup needs the root password, so it is entered here.
- wsrep_cluster_name=MYSQL_CLUSTER_NAME
  The internal name of the Percona cluster.
Start the first node.
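On Debian Squeeze this is done with the usual init script (a sketch; the service name mysql is the one installed by the Percona packages):

```shell
/etc/init.d/mysql start
```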
Configure the second and third nodes
Node2
[mysqld]
wsrep_provider=/usr/lib/libgalera_smm.so
wsrep_cluster_address=gcomm://percona-mysql01:4567,percona-mysql03:4567,0.0.0.0:4567
wsrep_slave_threads=8
wsrep_node_name=node2
wsrep_sst_method=xtrabackup
wsrep_sst_auth=root:ROOT_PASSWORT
wsrep_cluster_name=MYSQL_CLUSTER_NAME
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
innodb_file_per_table
datadir=/var/lib/mysql
user=mysql
Node3
[mysqld]
wsrep_provider=/usr/lib/libgalera_smm.so
wsrep_cluster_address=gcomm://percona-mysql01:4567,percona-mysql02:4567,0.0.0.0:4567
wsrep_slave_threads=8
wsrep_node_name=node3
wsrep_sst_method=xtrabackup
wsrep_sst_auth=root:ROOT_PASSWORT
wsrep_cluster_name=MYSQL_CLUSTER_NAME
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
innodb_file_per_table
datadir=/var/lib/mysql
user=mysql
When node2 is now started, you can watch the node joining the cluster in /var/log/mysql/mysql.err.
130310 17:39:11 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
130310 17:39:12 mysqld_safe WSREP: Running position recovery with --log_error=/tmp/tmp.2xPW5HbXFa
130310 17:39:13 mysqld_safe WSREP: Failed to recover position:
130310 17:39:13 [Note] WSREP: Read nil XID from storage engines, skipping position init
130310 17:39:13 [Note] WSREP: wsrep_load(): loading provider library '/usr/lib/libgalera_smm.so'
130310 17:39:13 [Note] WSREP: wsrep_load(): Galera 2.3(r143) by Codership Oy <info@codership.com> loaded succesfully.
130310 17:39:13 [Warning] WSREP: Could not open saved state file for reading: /var/lib/mysql//grastate.dat
130310 17:39:13 [Note] WSREP: Found saved state: 00000000-0000-0000-0000-000000000000:-1
130310 17:39:13 [Note] WSREP: Preallocating 134219048/134219048 bytes in '/var/lib/mysql//galera.cache'...
130310 17:39:13 [Note] WSREP: Passing config to GCS: base_host = 192.168.70.112; base_port = 4567; cert.log_conflicts = no; gcache.dir = /var/lib/mysql/; gcache.keep_pages_size = 0; gcache.mem_size = 0; gcache.name = /var/lib/mysql//galera.cache; gcache.page_size = 128M; gcache.size = 128M; gcs.fc_debug = 0; gcs.fc_factor = 1; gcs.fc_limit = 16; gcs.fc_master_slave = NO; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 9223372036854775807; gcs.recv_q_soft_limit = 0.25; gcs.sync_donor = NO; replicator.causal_read_timeout = PT30S; replicator.commit_order = 3
130310 17:39:13 [Note] WSREP: Assign initial position for certification: -1, protocol version: -1
130310 17:39:13 [Note] WSREP: wsrep_sst_grab()
130310 17:39:13 [Note] WSREP: Start replication
130310 17:39:13 [Note] WSREP: Setting initial position to 00000000-0000-0000-0000-000000000000:-1
130310 17:39:13 [Note] WSREP: protonet asio version 0
130310 17:39:13 [Note] WSREP: backend: asio
130310 17:39:13 [Note] WSREP: GMCast version 0
130310 17:39:13 [Note] WSREP: (0a83f9bb-89a1-11e2-0800-08c508ce7632, 'tcp://0.0.0.0:4567') listening at tcp://0.0.0.0:4567
130310 17:39:13 [Note] WSREP: (0a83f9bb-89a1-11e2-0800-08c508ce7632, 'tcp://0.0.0.0:4567') multicast: , ttl: 1
130310 17:39:13 [Note] WSREP: EVS version 0
130310 17:39:13 [Note] WSREP: PC version 0
130310 17:39:13 [Note] WSREP: gcomm: connecting to group 'MYSQL_CLUSTER_NAME', peer 'percona-mysql01:4567,percona-mysql02:4567,0.0.0.0:4567'
130310 17:39:13 [Note] WSREP: declaring 11e6a6ea-88b8-11e2-0800-f482b3a2f316 stable
130310 17:39:13 [Note] WSREP: declaring bb98bc76-88b9-11e2-0800-6d63e5403959 stable
130310 17:39:13 [Note] WSREP: view(view_id(PRIM,0a83f9bb-89a1-11e2-0800-08c508ce7632,29) memb { 0a83f9bb-89a1-11e2-0800-08c508ce7632, 11e6a6ea-88b8-11e2-0800-f482b3a2f316, bb98bc76-88b9-11e2-0800-6d63e5403959, } joined { } left { } partitioned { })
130310 17:39:13 [Note] WSREP: gcomm: connected
130310 17:39:13 [Note] WSREP: Changing maximum packet size to 64500, resulting msg size: 32636
130310 17:39:13 [Note] WSREP: Shifting CLOSED -> OPEN (TO: 0)
130310 17:39:13 [Note] WSREP: Opened channel 'MYSQL_CLUSTER_NAME'
130310 17:39:13 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 0, memb_num = 3
130310 17:39:13 [Note] WSREP: Waiting for SST to complete.
130310 17:39:13 [Note] WSREP: STATE_EXCHANGE: sent state UUID: 0ad0b4d7-89a1-11e2-0800-bcb798ad9fa5
130310 17:39:13 [Note] WSREP: STATE EXCHANGE: sent state msg: 0ad0b4d7-89a1-11e2-0800-bcb798ad9fa5
130310 17:39:13 [Note] WSREP: STATE EXCHANGE: got state msg: 0ad0b4d7-89a1-11e2-0800-bcb798ad9fa5 from 0 (node3)
130310 17:39:13 [Note] WSREP: STATE EXCHANGE: got state msg: 0ad0b4d7-89a1-11e2-0800-bcb798ad9fa5 from 1 (node2)
130310 17:39:13 [Note] WSREP: STATE EXCHANGE: got state msg: 0ad0b4d7-89a1-11e2-0800-bcb798ad9fa5 from 2 (node1)
130310 17:39:13 [Note] WSREP: Quorum results: version = 2, component = PRIMARY, conf_id = 28, members = 2/3 (joined/total), act_id = 1046947, last_appl. = -1, protocols = 0/4/2 (gcs/repl/appl), group UUID = ac789a2c-88af-11e2-0800-3d09894f5600
130310 17:39:13 [Note] WSREP: Flow-control interval: [28, 28]
130310 17:39:13 [Note] WSREP: Shifting OPEN -> PRIMARY (TO: 1046947)
130310 17:39:13 [Note] WSREP: State transfer required: Group state: ac789a2c-88af-11e2-0800-3d09894f5600:1046947 Local state: 00000000-0000-0000-0000-000000000000:-1
130310 17:39:13 [Note] WSREP: New cluster view: global state: ac789a2c-88af-11e2-0800-3d09894f5600:1046947, view# 29: Primary, number of nodes: 3, my index: 0, protocol version 2
130310 17:39:13 [Warning] WSREP: Gap in state sequence. Need state transfer.
130310 17:39:15 [Note] WSREP: Running: 'wsrep_sst_xtrabackup - alternative way to do SST --role 'joiner' --address '192.168.70.112' --auth 'root:ROOT_PASSWORT' --datadir '/var/lib/mysql/' --defaults-file '/etc/mysql/my.cnf' --parent '19067''
130310 17:39:16 [Note] WSREP: Prepared SST request: xtrabackup - alternative way to do SST|192.168.70.112:4444/xtrabackup_sst
130310 17:39:16 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
130310 17:39:16 [Note] WSREP: Assign initial position for certification: 1046947, protocol version: 2
130310 17:39:16 [Warning] WSREP: Failed to prepare for incremental state transfer: Local state UUID (00000000-0000-0000-0000-000000000000) does not match group state UUID (ac789a2c-88af-11e2-0800-3d09894f5600): 1 (Operation not permitted) at galera/src/replicator_str.cpp:prepare_for_IST():442. IST will be unavailable.
130310 17:39:16 [Note] WSREP: Node 0 (node3) requested state transfer from '*any*'. Selected 1 (node2)(SYNCED) as donor.
130310 17:39:16 [Note] WSREP: Shifting PRIMARY -> JOINER (TO: 1046947)
130310 17:39:16 [Note] WSREP: Requesting state transfer: success, donor: 1
130310 17:42:24 [Note] WSREP: 1 (node2): State transfer to 0 (node3) complete.
130310 17:42:24 [Note] WSREP: Member 1 (node2) synced with group.
130310 17:42:39 [Note] WSREP: SST complete, seqno: 1048400
130310 17:42:39 [Note] Plugin 'FEDERATED' is disabled.
130310 17:42:39 InnoDB: The InnoDB memory heap is disabled
130310 17:42:39 InnoDB: Mutexes and rw_locks use GCC atomic builtins
130310 17:42:39 InnoDB: Compressed tables use zlib 1.2.3
130310 17:42:39 InnoDB: Using Linux native AIO
130310 17:42:39 InnoDB: Initializing buffer pool, size = 256.0M
130310 17:42:39 InnoDB: Completed initialization of buffer pool
130310 17:42:39 InnoDB: highest supported file format is Barracuda.
130310 17:42:39 InnoDB: Waiting for the background threads to start
130310 17:42:40 Percona XtraDB (http://www.percona.com) 1.1.8-rel29.1 started; log sequence number 374651553292
130310 17:42:40 [Note] Server hostname (bind-address): '0.0.0.0'; port: 3306
130310 17:42:40 [Note] - '0.0.0.0' resolves to '0.0.0.0';
130310 17:42:40 [Note] Server socket created on IP: '0.0.0.0'.
130310 17:42:40 [Note] Event Scheduler: Loaded 0 events
130310 17:42:40 [Note] WSREP: Signalling provider to continue.
130310 17:42:40 [Note] WSREP: SST received: ac789a2c-88af-11e2-0800-3d09894f5600:1048400
130310 17:42:40 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.29-55-log' socket: '/var/run/mysqld/mysqld.sock' port: 3306 Percona XtraDB Cluster (GPL), wsrep_23.7.2.r3843
130310 17:42:40 [Note] WSREP: 0 (node3): State transfer from 1 (node2) complete.
130310 17:42:40 [Note] WSREP: Shifting JOINER -> JOINED (TO: 1048668)
130310 17:42:41 [Note] WSREP: Member 0 (node3) synced with group.
130310 17:42:41 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 1048668)
130310 17:42:41 [Note] WSREP: Synchronized with group, ready for connections
130310 17:42:41 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
Depending on the size of the databases and the network speed, this can take a while.
Check that replication works
SHOW STATUS LIKE 'wsrep_local_state_comment';
If the output looks like this, everything is fine:
+---------------------------+------------+
| Variable_name             | Value      |
+---------------------------+------------+
| wsrep_local_state_comment | Synced (6) |
+---------------------------+------------+
1 row in set (0.00 sec)
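Beyond wsrep_local_state_comment, a few other Galera status variables are worth a look when verifying the cluster (a sketch of the usual checks; the expected values follow standard Galera semantics and are not taken from this howto):

```sql
-- Number of nodes currently in the cluster; should be 3 in this setup
SHOW STATUS LIKE 'wsrep_cluster_size';
-- ON when this node is ready to accept queries
SHOW STATUS LIKE 'wsrep_ready';
-- PRIMARY when this node is part of the primary component
SHOW STATUS LIKE 'wsrep_cluster_status';
```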