The Situation
MariaDB, a fork of MySQL, has had multi-master clustering support since the initial version 10 release. However, the more recent releases have made it increasingly easy to set up a multi-master database cluster. And by easy, I mean it. But first, what is a multi-master cluster?
A multi-master cluster is one where each database instance is, of course, a master. The cluster contains no read replicas, slave nodes, or second-class instances; every instance is a master. The upside is no replication lag; the downside is that every instance has to confirm each write. So, the big caveat here is that the network throughput and latency between all the instances need to be as good as possible, because the cluster's performance is limited by its slowest machine.
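As a toy illustration (this is not Galera code, just a sketch of the idea with made-up per-node times): because a synchronous write only completes once every node has confirmed it, the effective commit latency is roughly that of the slowest node.

```shell
#!/bin/sh
# Toy model: a synchronous commit finishes only after every node
# confirms, so the cluster's commit time is the slowest node's time.
latencies_ms="5 8 120"   # hypothetical confirm times for DB-A, DB-B, DB-C
slowest=0
for l in $latencies_ms; do
  if [ "$l" -gt "$slowest" ]; then slowest=$l; fi
done
echo "commit latency: ${slowest} ms"
```

One slow or far-away node drags down every write on the whole cluster, which is why placement and network quality matter so much.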
Preflight Requirements
- Linux
- CLI Terminal
- (optional) Terraform 0.12.x
- AWS account
(Optional) I put together a small Terraform project that starts three EC2 instances. Feel free to use it to stand up the example environment resources.
git clone https://github.com/davidjeddy/database_clustering_with_mariadb_and_galera.git
cd ./database_clustering_with_mariadb_and_galera
export AWS_ACCESS_KEY_ID=YOUR_API_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=YOUR_API_SECRET_KEY
export AWS_PEM_KEY_NAME=NAME_OF_YOUR_PEM_KEY
terraform init
terraform plan -out plan.out -var "key_name=${AWS_PEM_KEY_NAME}"
terraform apply plan.out
Once completed, the output should look like this:
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
Outputs:
db-a-key = maria_with_galera
db-a-ssh = ec2-3-84-95-153.compute-1.amazonaws.com
db-b-key = maria_with_galera
db-b-ssh = ec2-3-95-187-84.compute-1.amazonaws.com
db-c-key = maria_with_galera
db-c-ssh = ec2-54-89-180-243.compute-1.amazonaws.com
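If you plan to loop over the instances later, one optional trick is to pull the SSH hostnames out of that output. A minimal sketch, assuming the `name = value` output format shown above (here fed from a pasted sample; in practice you could pipe `terraform output` instead):

```shell
#!/bin/sh
# Save the terraform outputs to a file (pasted sample for this sketch),
# then extract just the *-ssh hostnames for use in later loops.
cat > /tmp/tf-output.txt <<'EOF'
db-a-key = maria_with_galera
db-a-ssh = ec2-3-84-95-153.compute-1.amazonaws.com
db-b-key = maria_with_galera
db-b-ssh = ec2-3-95-187-84.compute-1.amazonaws.com
db-c-key = maria_with_galera
db-c-ssh = ec2-54-89-180-243.compute-1.amazonaws.com
EOF
awk -F' = ' '/-ssh/ {print $2}' /tmp/tf-output.txt > /tmp/hosts.txt
cat /tmp/hosts.txt
```

Each line of /tmp/hosts.txt is then a hostname you can hand straight to ssh.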
If that is what you get, we are ready to move on to the next part.
Setup
Now that we have three EC2 instances up and running, we can dig into the configuration of each database service. Open three new terminals, for a total of four: localhost, DB-A, DB-B, and DB-C. Using ssh, log into the three database EC2 instances. After each login we should see something similar to the output below.
ssh -i ~/.ssh/maria_with_galera.pem ubuntu@ec2-3-84-95-153.compute-1.amazonaws.com
The authenticity of host 'ec2-3-84-95-153.compute-1.amazonaws.com (3.84.95.153)' can't be established.
ECDSA key fingerprint is SHA256:rxmG0jtvI47tH3Yf3fAls9IsMPkho4DaRcSfA+NWNNs.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ec2-3-84-95-153.compute-1.amazonaws.com,3.84.95.153' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-1054-aws x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
System information as of Wed Nov 27 16:07:48 UTC 2019
System load: 0.0 Processes: 85
Usage of /: 13.6% of 7.69GB Users logged in: 0
Memory usage: 30% IP address for eth0: 172.31.40.213
Swap usage: 0%
0 packages can be updated.
0 updates are security updates.
The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
Take note of the IP address for eth0 on each instance. This is the private IP address that will be needed later. On each of the DB instances, run the following commands to update the machine and install the MariaDB service and its dependencies.
sudo apt-get update -y
sudo apt-get install -y mariadb-server rsync
The output this time is very long, but the ending should look like this.
...
Created symlink /etc/systemd/system/mysql.service → /lib/systemd/system/mariadb.service.
Created symlink /etc/systemd/system/mysqld.service → /lib/systemd/system/mariadb.service.
Created symlink /etc/systemd/system/multi-user.target.wants/mariadb.service → /lib/systemd/system/mariadb.service.
Setting up mariadb-server (1:10.1.43-0ubuntu0.18.04.1) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Processing triggers for systemd (237-3ubuntu10.31) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Processing triggers for ureadahead (0.100.0-21) ...
To be extra sure we have everything installed, let's check the versions of both MariaDB and rsync.
ubuntu@ip-172-31-40-213:~$ mysql --version && rsync --version
mysql Ver 15.1 Distrib 10.1.43-MariaDB, for debian-linux-gnu (x86_64) using readline 5.2
rsync version 3.1.2 protocol version 31
...
are welcome to redistribute it under certain conditions. See the GNU
General Public Licence for details.
Since we still need to configure clustering, go ahead and stop the MariaDB service on each instance using the standard stop command.
sudo systemctl stop mysql
sudo systemctl status mysql # always double check
You may have noticed that the command is mysql and not mariadb. This is because MariaDB is a fork of MySQL, and the MariaDB team wants to keep binary compatibility with MySQL. This helps projects migrate with the least amount of headache.
Now do this same process on the DB-B and DB-C instances.
Configurations
Here is where the magic happens! We are going to create a new configuration file on each node at /etc/mysql/conf.d/galera.cnf. Open the file and add the following content. Where the configuration says [DB-A IP], [DB-B IP], or [DB-C IP], replace it with the PRIVATE IP address of that instance, which we saw when we logged into each instance in the previous section. Also replace [DB-A NAME] with the name of the cluster node: DB-A, DB-B, or DB-C, depending on which EC2 instance the file is located on.
[mysqld]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0
# Galera Provider Configuration
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
# Galera Cluster Configuration
wsrep_cluster_name="test_cluster"
wsrep_cluster_address="gcomm://[DB-A IP],[DB-B IP],[DB-C IP]"
# Galera Synchronization Configuration
wsrep_sst_method=rsync
# Galera Node Configuration
wsrep_node_address="[DB-A IP]"
wsrep_node_name="[DB-A NAME]"
So, DB-A configuration should look like this:
[mysqld]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0
# Galera Provider Configuration
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
# Galera Cluster Configuration
wsrep_cluster_name="test_cluster"
wsrep_cluster_address="gcomm://172.31.40.213,172.31.39.251,172.31.38.71"
# Galera Synchronization Configuration
wsrep_sst_method=rsync
# Galera Node Configuration
wsrep_node_address="172.31.40.213"
wsrep_node_name="DB-A"
All three configurations should be basically the same, minus the wsrep_node_address and wsrep_node_name values being adjusted for each node.
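To avoid hand-editing three nearly identical files, one optional approach is to template the per-node values. This is just a sketch using the sample private IPs from above; on each node you would change NODE_IP and NODE_NAME before copying the file into place.

```shell
#!/bin/sh
# Hypothetical helper: generate a galera.cnf for one node. The IPs are
# placeholders; use each instance's real private eth0 address.
NODE_IP="172.31.40.213"
NODE_NAME="DB-A"
CLUSTER_IPS="172.31.40.213,172.31.39.251,172.31.38.71"

cat > /tmp/galera.cnf <<EOF
[mysqld]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0
# Galera Provider Configuration
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
# Galera Cluster Configuration
wsrep_cluster_name="test_cluster"
wsrep_cluster_address="gcomm://${CLUSTER_IPS}"
# Galera Synchronization Configuration
wsrep_sst_method=rsync
# Galera Node Configuration
wsrep_node_address="${NODE_IP}"
wsrep_node_name="${NODE_NAME}"
EOF

grep wsrep_node_name /tmp/galera.cnf
# then: sudo cp /tmp/galera.cnf /etc/mysql/conf.d/galera.cnf
```

Writing to /tmp first and copying with sudo afterwards avoids needing a root shell for the whole script.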
Bringing It All Together
Now for a very important step: when starting the database on the first instance, aka DB-A, we have to bootstrap the cluster. Since no other instances are running, the bootstrap process tells the database, hey, you're the first one, chill out, when it does not detect any other cluster members. After that, though, DB-B and DB-C should join the cluster without an issue. To start this first node, use the following command on the DB-A instance.
ubuntu@ip-172-31-40-213:~$ sudo galera_new_cluster
ubuntu@ip-172-31-40-213:~$ sudo systemctl status mysql
● mariadb.service - MariaDB 10.1.43 database server
Loaded: loaded (/lib/systemd/system/mariadb.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2019-11-27 16:32:03 UTC; 5s ago
Docs: man:mysqld(8)
https://mariadb.com/kb/en/library/systemd/
...
Nov 27 16:32:03 ip-172-31-40-213 /etc/mysql/debian-start[5129]: Checking for insecure root accounts.
Nov 27 16:32:03 ip-172-31-40-213 /etc/mysql/debian-start[5133]: Triggering myisam-recover for all MyISAM tables and aria-recover for all Aria tables
The important part here is Active: active (running). Now that we have the first cluster node running, let's check the cluster status.
ubuntu@ip-172-31-40-213:~$ sudo mysql -u root -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster%';"
+--------------------------+--------------------------------------+
| Variable_name | Value |
+--------------------------+--------------------------------------+
| wsrep_cluster_conf_id | 1 |
| wsrep_cluster_size | 1 |
| wsrep_cluster_state_uuid | 71780aba-1133-11ea-a814-beaa932daf25 |
| wsrep_cluster_status | Primary |
+--------------------------+--------------------------------------+
Hey, check that out! We have a single-instance cluster running. Awesome. Now we need to start DB-B and DB-C. Switch to each of those terminals and run not the bootstrapping command, but the normal service start command.
ubuntu@ip-172-31-39-251:~$ sudo systemctl start mysql
ubuntu@ip-172-31-39-251:~$ sudo systemctl status mysql
● mariadb.service - MariaDB 10.1.43 database server
Loaded: loaded (/lib/systemd/system/mariadb.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2019-11-27 16:36:45 UTC; 11s ago
...
Nov 27 16:36:45 ip-172-31-39-251 /etc/mysql/debian-start[15042]: Checking for insecure root accounts.
Nov 27 16:36:45 ip-172-31-39-251 /etc/mysql/debian-start[15046]: Triggering myisam-recover for all MyISAM tables and aria-recover for all Aria tables
Again, Active: active (running) is the important part. Switch back to DB-A and run the global status check command, like we did after starting the DB-A service.
ubuntu@ip-172-31-40-213:~$ sudo mysql -u root -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster%';"
+--------------------------+--------------------------------------+
| Variable_name | Value |
+--------------------------+--------------------------------------+
| wsrep_cluster_conf_id | 3 |
| wsrep_cluster_size | 3 |
| wsrep_cluster_state_uuid | 71780aba-1133-11ea-a814-beaa932daf25 |
| wsrep_cluster_status | Primary |
+--------------------------+--------------------------------------+
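If you wanted to check the size programmatically (say, in a health-check script), you could parse that table output. A sketch, assuming the table format shown above; here it parses a pasted sample line, but on a real node you would pipe the mysql command into the same awk.

```shell
#!/bin/sh
# Sketch: pull the wsrep_cluster_size value out of the status table.
# The sample line stands in for the real `mysql ... | grep` output.
sample='| wsrep_cluster_size        | 3                                    |'
size=$(echo "$sample" | awk -F'|' '{gsub(/ /, "", $3); print $3}')
echo "cluster size: $size"
```

A cluster size below the expected node count is a quick signal that a member has dropped out.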
Yea buddy! A three node database cluster up and running!
Confirmation
To be super sure everything is running and replicating as expected, let's execute a few SQL commands to change the state of the database, then check the new state. On DB-A, let's add a new schema and a table with a single data point.
sudo mysql -u root -e "CREATE DATABASE testing; CREATE TABLE testing.table1 (id int null); INSERT INTO testing.table1 SET id = 1;"
Now let's run a select statement on DB-C:
ubuntu@ip-172-31-38-71:~$ sudo mysql -u root -e "SELECT * FROM testing.table1;"
+------+
| id |
+------+
| 1 |
+------+
YES! The new schema, table, and data replicated from DB-A to DB-C. We can run the select command on DB-B and see the same result! We can write to DB-C and see it replicated on DB-A and DB-B. Each node takes reads and writes, then replicates the changes to all the other nodes!
Boom! A three-node multi-master database cluster up and running! Log into one of the instances (it does not matter which, since this is multi-master) and create a new schema. Then exit and check the status of the cluster again. See the state value change? Yea, replication at work!
Conclusion
This is just the tip of the functionality iceberg that is database clustering. I have had to skip over a very large number of topics, like replication lag, placement geography, read-only replicas, binlog formats, and so much more. But this gives you a solid introduction to the concept of database clustering. Have fun!