Three server MySQL Cluster

In this guide we will show how to create a MySQL cluster using three physical servers (the minimum number required): two storage nodes (each also running a SQL node) and one management node. The management node can run on very modest hardware.

This guide is very similar to one I produced many years ago, when MySQL Cluster was still beta software. There are many other references available when considering building a MySQL Cluster, one of which is my book. Please feel free to contact me with any questions - I will do my best to help, but please include as many details as you can and allow some time for a response.

Please note that all cluster nodes must be in the same local network, with no firewall between them. It is strongly recommended that this network is dedicated for use by the MySQL Cluster.

Installing the management node

From the download page at http://dev.mysql.com, download the "Cluster storage engine management" package (MySQL-Cluster-gpl-management-7.a.b-c.rhel5.x86_64.rpm) and the "Cluster storage engine basic tools" package (MySQL-Cluster-gpl-tools-7.a.b-c.rhel5.x86_64.rpm). Install these two RPMs:


[root@node5 ~]# cd ~/
[root@node5 ~]# mkdir mysql
[root@node5 ~]# cd mysql
[root@node5 mysql]# wget http://dev.mysql.com/get/Downloads/MySQL-Cluster-7.0/MySQL-Cluster-gpl-management-7.0.6-0.rhel5.x86_64.rpm/from/http://mirrors.dedipower.com/www.mysql.com/
--16:26:09--  http://dev.mysql.com/get/Downloads/MySQL-Cluster-7.0/MySQL-Cluster-gpl-management-7.0.6-0.rhel5.x86_64.rpm/from/http://mirrors.dedipower.com/www.mysql.com/
<snip>
16:26:10 (9.78 MB/s) - `MySQL-Cluster-gpl-management-7.0.6-0.rhel5.x86_64.rpm' saved [1316142/1316142]
[root@node5 ~]# wget http://dev.mysql.com/get/Downloads/MySQL-Cluster-7.0/MySQL-Cluster-gpl-tools-7.0.6-0.rhel5.x86_64.rpm/from/http://mirrors.ukfast.co.uk/sites/ftp.mysql.com/
--18:45:57--  http://dev.mysql.com/get/Downloads/MySQL-Cluster-7.0/MySQL-Cluster-gpl-tools-7.0.6-0.rhel5.x86_64.rpm/from/http://mirrors.ukfast.co.uk/sites/ftp.mysql.com/
<snip>
18:46:00 (10.2 MB/s) - `MySQL-Cluster-gpl-tools-7.0.6-0.rhel5.x86_64.rpm' saved [9524521/9524521]
[root@node5 mysql]# rpm -ivh MySQL-Cluster-*.rpm
Preparing...                ########################################### [100%]
   1:MySQL-Cluster-gpl-manag########################################### [ 50%]
   2:MySQL-Cluster-gpl-tools########################################### [100%]

Now create the simplest possible global cluster configuration file in /usr/local/mysql-cluster/config.ini:


[ndb_mgmd]
HostName=10.0.0.5
DataDir=/var/lib/mysql-cluster


[ndbd default]
DataDir=/var/lib/mysql-cluster
NoOfReplicas=2


[ndbd]
HostName=10.0.0.1


[ndbd]
HostName=10.0.0.2


[mysqld]
HostName=10.0.0.1


[mysqld]
HostName=10.0.0.2


[mysqld]


[mysqld]

Now, create the management node DataDirectory and start the management node. Once started, run the management client:

[root@node5 mysql-cluster]# mkdir -p /var/lib/mysql-cluster
[root@node5 mysql-cluster]# ndb_mgmd --config-file=/usr/local/mysql-cluster/config.ini
2009-06-28 22:14:01 [MgmSrvr] INFO -- NDB Cluster Management Server. mysql-5.1.34 ndb-7.0.6
2009-06-28 22:14:01 [MgmSrvr] INFO -- Loaded config from '//mysql-cluster/ndb_1_config.bin.1'
[root@node5 mysql-cluster]# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> SHOW
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=3 (not connected, accepting connect from 10.0.0.1)
id=4 (not connected, accepting connect from 10.0.0.2)


[ndb_mgmd(MGM)] 1 node(s)
id=1 @node5 (mysql-5.1.34 ndb-7.0.6)


[mysqld(API)] 4 node(s)
id=11 (not connected, accepting connect from 10.0.0.1)
id=12 (not connected, accepting connect from 10.0.0.2)
id=13 (not connected, accepting connect from any host)
id=14 (not connected, accepting connect from any host)

Now, switch to the storage/SQL nodes. To save servers, we will run both a storage node and a SQL node on each of these two machines; note, however, that this is not best practice, as a SQL node can use a large amount of RAM during a query, which could cause the co-located storage node to crash. Execute the following commands on each server.

Firstly, download and install the following RPMs:
  • "Cluster storage engine basic tools" (MySQL-Cluster-gpl-tools-7.a.b-c.rhel5.x86_64.rpm)
  • "Cluster storage engine extra tools" (MySQL-Cluster-gpl-storage-7.a.b-c.rhel5.x86_64.rpm)
  • "Server" (from the cluster section; MySQL-Cluster-gpl-server-7.a.b-c.rhel5.x86_64.rpm)
  • "Client" (this is identical to the "standard" MySQL Client and has the same filename; MySQL-client-community-5.a.b-c.rhel5.x86_64.rpm)

[root@node1 ~]# cd ~/
[root@node1 ~]# mkdir mysql-storagenode
[root@node1 ~]# cd mysql-storagenode/
[root@node1 mysql-storagenode]# wget http://dev.mysql.com/get/Downloads/MySQL-Cluster-7.0/MySQL-Cluster-gpl-storage-7.0.6-0.rhel5.x86_64.rpm/from/http://mirrors.ukfast.co.uk/sites/ftp.mysql.com/
--21:17:04--  http://dev.mysql.com/get/Downloads/MySQL-Cluster-7.0/MySQL-Cluster-gpl-storage-7.0.6-0.rhel5.x86_64.rpm/from/http://mirrors.ukfast.co.uk/sites/ftp.mysql.com/
Resolving dev.mysql.com... 213.136.52.29
<snip>
21:18:06 (9.25 MB/s) - `MySQL-Cluster-gpl-storage-7.0.6-0.rhel5.x86_64.rpm' saved [4004834/4004834]

[root@node1 mysql-storagenode]# wget http://dev.mysql.com/get/Downloads/MySQL-Cluster-7.0/MySQL-Cluster-gpl-tools-7.0.6-0.rhel5.x86_64.rpm/from/http://mirrors.ukfast.co.uk/sites/ftp.mysql.com/
<snip>

[root@node1 mysql-storagenode]# wget http://dev.mysql.com/get/Downloads/MySQL-Cluster-7.0/MySQL-Cluster-gpl-server-7.a.b-c.rhel5.x86_64.rpm/from/http://mirrors.ukfast.co.uk/sites/ftp.mysql.com/
<snip>


[root@node1 mysql-storagenode]# wget http://dev.mysql.com/get/Downloads/MySQL-Cluster-7.0/MySQL-client-community-5.a.b-c.rhel5.x86_64.rpm/from/http://mirrors.ukfast.co.uk/sites/ftp.mysql.com/
<snip>


[root@node1 mysql-storagenode]# rpm -ivh MySQL-*.rpm
Preparing...                ########################################### [100%]
   1:MySQL-Cluster-gpl-stora########################################### [ 25%]
   2:MySQL-Cluster-gpl-tools########################################### [ 50%]
   3:MySQL-Cluster-gpl-serve########################################### [ 75%]
   4:MySQL-client-community-########################################### [100%]

Now, modify /etc/my.cnf:
  • Change the [mysqld] section to tell the MySQL server ("SQL node") where the management node is, and to enable the MySQL Cluster (NDB) storage engine
  • Change the [mysql_cluster] section to tell the MySQL Cluster binaries, such as the storage engine process (ndbd), where the management node is

[mysqld]
# Enable MySQL Cluster
ndbcluster
# Tell this node where to find its management node 
#ndb-connectstring=IP_OF_MGM_NODE
ndb-connectstring=10.0.0.5
[mysql_cluster]
#ndb-connectstring=IP_OF_MGM_NODE
ndb-connectstring=10.0.0.5
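The MySQL server may already have been started by the RPM install; restart it so that it picks up the new settings. The service name below is the one the MySQL RPMs install as an init script, and may differ on your system:

[root@node1 ~]# service mysql restart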

Finally, create the data directory and start the storage node:

[root@node1 ~]# mkdir -p /var/lib/mysql-cluster
[root@node1 ~]# ndbd --initial
2009-07-06 00:31:57 [ndbd] INFO     -- Configuration fetched from '10.0.0.5:1186', generation: 1
[root@node1 ~]# 

Return to the management node and run the SHOW command again; you should see the new storage nodes move from starting to started.
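If you prefer a quick, non-interactive check, ndb_mgm can also run a single command and exit. For example, on the management node:

[root@node5 ~]# ndb_mgm -e show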

On either storage/SQL node (node 1 or node 2), enter the following commands:


mysql
use test;
CREATE TABLE ctest (i INT) ENGINE=NDBCLUSTER;
INSERT INTO ctest () VALUES (1);
SELECT * FROM ctest;

You should see one row returned (with the value 1).

If this works, go to the other server, run the same SELECT, and check the result. Then insert a row from that host, return to the first host, and confirm it is visible there too. If so, congratulations: your cluster works!
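As a concrete example, from the second server (host name here is illustrative) you can check the table created above with a one-liner:

[root@node2 ~]# mysql -e "SELECT * FROM test.ctest;"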

The final test is to kill one server and see what happens. If you have physical access to the machine, simply unplug its network cable and check that the other server keeps working (try the SELECT query). If you don't have physical access, do the following:

[root@node1 ~]# ps aux | grep ndbd

You should get output like this:


root      5578  0.0  0.3  6220 1964 ?        S    03:14   0:00 ndbd
root      5579  0.0 20.4 492072 102828 ?     R    03:14   0:04 ndbd
root     23532  0.0  0.1  3680  684 pts/1    S    07:59   0:00 grep ndbd

In this case, ignore the "grep ndbd" entry (the last line); the two ndbd processes are the angel process and the data node itself. Kill them both by issuing the command kill -9 pid pid:

kill -9 5578 5579

Then try the SELECT on both servers. While you are at it, run a SHOW command on the management node to confirm that the node has died. To restart the storage node, just issue

ndbd

Note that there is no --initial this time: that option is only used the very first time a storage node starts (or when deliberately rebuilding it), as it wipes the node's local data.
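Incidentally, kill -9 is only for simulating a failure. For a clean stop of a single storage node you can instead use the management client, giving the node id shown by SHOW (id 3 here, as an example):

ndb_mgm> 3 STOP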
