Since I recently configured and installed a MySQL-cluster, I thought I'd share the procedure. A lot of the examples out there explain how to set it all up on a single machine for "testing purposes", which, in theory, is the same as setting it up on different machines. I'll be explaining the latter, that is, installing it onto different machines.
To achieve true redundancy in a MySQL-cluster, you need at least three separate, physical machines: two data-nodes, and one management-node. The management-node can be a virtual machine, as long as it doesn't run on either of the two data-nodes (which means you still need at least three physical machines). You can also use the management-node as a mysql-proxy for transparent failover/load-balancing for the clients.
My setup was done using two physical machines (db0 and db1) running Ubuntu 8.04 (Hardy Heron), and one virtual machine (mysql-mgmt) running Debian 6 (Squeeze). The VM is not running on the two physical machines. db0 and db1 are the actual data-nodes/servers, and mysql-mgmt is going to be used as the management-node for the cluster. In addition, mysql-mgmt will also be configured with mysql-proxy, so that we have transparent failover/load-balancing for the clients.
Update 2011-10-26: I've changed the setup a bit, compared to my original walkthrough. I hit some memory-limits when using the NDB-engine. This caused MySQL to fail when inserting new rows (stating that the table was full). There are some variables you can set (DataMemory and IndexMemory) to increase the memory available to the ndb-process (which was what caused the issues). Since I had a limited amount of memory available on the mysql-mgmt virtual machine (and lots on db0/db1), I decided to run ndb_mgmd on db0 + db1 instead. Apparently you can do this, and it's still redundant. The post has been changed to reflect this.
My setup was done using two physical machines (db0 and db1) running Ubuntu 8.04 (Hardy Heron), and one virtual machine (mysql-proxy) running Debian 6 (Squeeze). Previously, the virtual machine ran ndb_mgmd, but due to the above-mentioned issues, both db0 and db1 now run their own ndb_mgmd-processes. The virtual machine is now only used to run mysql-proxy (and hence its hostname has changed to reflect this).
Update 2012-01-30: morphium pointed out that /etc/my.cnf needed its own [mysql_cluster]-section, so that ndbd and ndb_mgmd connect to something other than localhost (which is the default if no explicit host is defined). The post has been updated to reflect this.
1. Prepare db0 + db1
Go to MySQL's homepage, and find the download-link for the latest MySQL Cluster-package. Then proceed as shown below (changing the 'datadir' to your liking);
root@db0:~# cd /usr/local
root@db0:/usr/local# wget -q http://dev.mysql.com/get/Downloads/MySQL-Cluster-7.1/mysql-cluster-gpl-7.1.10-linux-x86_64-glibc23.tar.gz/from/http://mirror.switch.ch/ftp/mirror/mysql/
root@db0:/usr/local# mv index.html mysql-cluster-gpl-7.1.10-linux-x86_64-glibc23.tar.gz
root@db0:/usr/local# tar -xzf mysql-cluster-gpl-7.1.10-linux-x86_64-glibc23.tar.gz
root@db0:/usr/local# rm -f mysql-cluster-gpl-7.1.10-linux-x86_64-glibc23.tar.gz
root@db0:/usr/local# ln -s mysql-cluster-gpl-7.1.10-linux-x86_64-glibc23 mysql
root@db0:/usr/local# ls -ald mysql*
lrwxrwxrwx 1 root root 45 2011-03-04 18:30 mysql -> mysql-cluster-gpl-7.1.10-linux-x86_64-glibc23
drwxr-xr-x 13 root mysql 4096 2011-03-05 00:42 mysql-cluster-gpl-7.1.10-linux-x86_64-glibc23
root@db0:/usr/local# cd mysql
root@db0:/usr/local/mysql# mkdir /opt/oracle/disk/mysql_cluster
root@db0:/usr/local/mysql# mkdir /opt/oracle/disk/mysql_cluster/mysqld_data
root@db0:/usr/local/mysql# groupadd mysql
root@db0:/usr/local/mysql# useradd -g mysql mysql
root@db0:/usr/local/mysql# chown mysql:root /opt/oracle/disk/mysql_cluster/mysqld_data
root@db0:/usr/local/mysql# scripts/mysql_install_db --user=mysql --no-defaults --datadir=/opt/oracle/disk/mysql_cluster/mysqld_data/
root@db0:/usr/local/mysql# cp support-files/mysql.server /etc/init.d/
root@db0:/usr/local/mysql# chmod +x /etc/init.d/mysql.server
root@db0:/usr/local/mysql# update-rc.d mysql.server defaults
Repeat on db1. Do not start the MySQL-server yet.
root@db0:/usr/local/mysql# vim /etc/my.cnf
Put the following in the my.cnf-file;
[mysqld]
basedir=/usr/local/mysql
datadir=/opt/oracle/disk/mysql_cluster/mysqld_data
event_scheduler=on
default-storage-engine=ndbcluster
ndbcluster
ndb-connectstring=db0.internal,db1.internal # IP/host of the NDB_MGMD-nodes
key_buffer = 512M
key_buffer_size = 512M
sort_buffer_size = 512M
table_cache = 1024
read_buffer_size = 512M
[mysql_cluster]
ndb-connectstring=db0.internal,db1.internal # IP/host of the NDB_MGMD-nodes
Repeat on db1. And (again), do not start the MySQL-server yet.
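The ndb-connectstring is a comma-separated list of management hosts (the default management port is 1186); clients try them in order, which is what makes running two ndb_mgmd-processes redundant. A minimal sketch of how such a list splits, using the hostnames from the config above:

```shell
# Split an ndb-connectstring the way a client walks it (sketch only).
connstring='db0.internal,db1.internal'
IFS=',' read -ra mgm_hosts <<< "$connstring"
for host in "${mgm_hosts[@]}"; do
  # On a live setup you could ping/resolve each candidate here.
  echo "management node candidate: $host"
done
```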
2. Prepare ndb_mgmd
We can now prepare ndb_mgmd as follows;
root@db0:~# cd /usr/local/mysql
root@db0:/usr/local/mysql# chmod +x bin/ndb_mgm*
root@db0:/usr/local/mysql# mkdir /var/lib/mysql-cluster
root@db0:/usr/local/mysql# vim /var/lib/mysql-cluster/config.ini
Put the following into the config.ini-file;
[NDBD DEFAULT]
NoOfReplicas=2
DataDir=/var/lib/mysql-cluster
DataMemory=8G
IndexMemory=4G
[MYSQLD DEFAULT]
[NDB_MGMD DEFAULT]
[TCP DEFAULT]
# 2 Management Servers
[NDB_MGMD]
HostName=db0.internal # IP/host of first NDB_MGMD-node
NodeId=1
[NDB_MGMD]
HostName=db1.internal # IP/host of second NDB_MGMD-node
NodeId=2
# 2 Storage Engines
[NDBD]
HostName=db0.internal # IP/host of first NDBD-node
NodeId=3
[NDBD]
HostName=db1.internal # IP/host of second NDBD-node
NodeId=4
# 2 MySQL Clients
# Leave these blank to allow rapid changes of the MySQL clients.
[MYSQLD]
[MYSQLD]
There are two variables in the above config you'd want to pay attention to: DataMemory and IndexMemory. These values need to be set according to how large your tables are. If left unset, they default to 80MB (DataMemory) and 18MB (IndexMemory), which is not much (after around 200,000 rows, you'll get messages stating that the table is full when trying to insert new rows). My values are probably way too high for most cases, but since we have a few tables with a lot of rows, and lots of RAM, I just set them a bit high to avoid issues. Keep in mind that NDBD will allocate/reserve the amount of memory you set for DataMemory (so with my config above, NDBD uses 8GB of memory from the second the service is started).
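If you're unsure where to start with DataMemory, you can get a ballpark figure from your expected row count and average row size. This is only a sketch (the numbers below are made-up assumptions, and real usage also depends on indexes and per-row overhead; the ndb_size.pl script shipped with MySQL Cluster can produce a proper estimate from an existing database):

```shell
# Ballpark DataMemory estimate (sketch; the inputs are assumptions).
rows=5000000          # expected number of rows across NDB tables (assumption)
avg_row_bytes=256     # assumed average in-memory row size, incl. overhead
data_memory_bytes=$((rows * avg_row_bytes))
data_memory_mb=$((data_memory_bytes / 1024 / 1024))
# With NoOfReplicas=2 and two data-nodes, each node holds a full copy of the
# data, so DataMemory is sized per data-node.
echo "Suggested DataMemory >= ${data_memory_mb} MB per data-node"
```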
Now we're ready to start the management-server for the first time. Note the use of the '--initial' parameter; it should only be used the very first time you start the service. After that, start it without '--initial'.
root@db0:/usr/local/mysql/bin# ndb_mgmd -f /var/lib/mysql-cluster/config.ini --initial --config-dir=/var/lib/mysql-cluster/
MySQL Cluster Management Server mysql-5.1.51 ndb-7.1.10
Repeat on db1.
When done, we go back to the storage-servers.
3. Finalize db0 + db1
Now we make the ndb data-dirs, and start the ndb-service;
root@db0:/usr/local/mysql# mkdir /var/lib/mysql-cluster
root@db0:/usr/local/mysql# cd /var/lib/mysql-cluster
root@db0:/var/lib/mysql-cluster# /usr/local/mysql/bin/ndbd --initial
2011-03-04 22:51:54 [ndbd] INFO -- Angel connected to 'localhost:1186'
2011-03-04 22:51:54 [ndbd] INFO -- Angel allocated nodeid: 3
Repeat on db1.
We're now going to alter some of the system tables, so that they use the 'ndbcluster'-engine. This is to ensure that user/host-privileges also get synced (so that if you add a user on one server, it gets replicated to the other).
root@db0:/var/lib/mysql-cluster# /etc/init.d/mysql.server start
Starting MySQL.. *
root@db1:/var/lib/mysql-cluster# /etc/init.d/mysql.server start
Starting MySQL.. *
root@db0:/usr/local/mysql# mysql
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.1.51-ndb-7.1.10-cluster-gpl MySQL Cluster Server (GPL)
Copyright (c) 2000, 2010, Oracle and/or its affiliates. All rights reserved.
This software comes with ABSOLUTELY NO WARRANTY. This is free software,
and you are welcome to modify and redistribute it under the GPL v2 license
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> use mysql
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> ALTER TABLE mysql.user ENGINE=NDBCLUSTER;
Query OK, 6 rows affected (0.25 sec)
Records: 6 Duplicates: 0 Warnings: 0
mysql> ALTER TABLE mysql.db ENGINE=NDBCLUSTER;
Query OK, 2 rows affected (0.16 sec)
Records: 2 Duplicates: 0 Warnings: 0
mysql> ALTER TABLE mysql.host ENGINE=NDBCLUSTER;
Query OK, 0 rows affected (0.18 sec)
Records: 0 Duplicates: 0 Warnings: 0
mysql> ALTER TABLE mysql.tables_priv ENGINE=NDBCLUSTER;
Query OK, 0 rows affected (0.16 sec)
Records: 0 Duplicates: 0 Warnings: 0
mysql> ALTER TABLE mysql.columns_priv ENGINE=NDBCLUSTER;
Query OK, 0 rows affected (0.16 sec)
Records: 0 Duplicates: 0 Warnings: 0
mysql> SET GLOBAL event_scheduler=1;
Query OK, 0 rows affected (0.00 sec)
mysql> CREATE EVENT `mysql`.`flush_priv_tables` ON SCHEDULE EVERY 30 second ON COMPLETION PRESERVE DO FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
mysql> exit
Now we'll stop the MySQL-service on both servers, and copy the data on db0 over to db1. Then we'll start the servers again.
root@db0:/usr/local/mysql# /etc/init.d/mysql.server stop
root@db1:/usr/local/mysql# /etc/init.d/mysql.server stop
root@db0:/opt/oracle/disk/mysql_cluster# scp -r mysqld_data/ db1:/opt/oracle/disk/mysql_cluster/
root@db0:/usr/local/mysql# /etc/init.d/mysql.server start
Starting MySQL.. *
root@db1:/usr/local/mysql# /etc/init.d/mysql.server start
Starting MySQL.. *
4. Testing
So far, so good. We're now going to check that the management/control of the cluster works as it should.
root@db0:~# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: db0.internal:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=3 @10.10.10.10 (mysql-5.1.51 ndb-7.1.10, Nodegroup: 0)
id=4 @10.10.10.20 (mysql-5.1.51 ndb-7.1.10, Nodegroup: 0, Master)
[ndb_mgmd(MGM)] 2 node(s)
id=1 @10.10.10.10 (mysql-5.1.51 ndb-7.1.10)
id=2 @10.10.10.20 (mysql-5.1.51 ndb-7.1.10)
[mysqld(API)] 2 node(s)
id=5 @10.10.10.10 (mysql-5.1.51 ndb-7.1.10)
id=6 @10.10.10.20 (mysql-5.1.51 ndb-7.1.10)
Seems to be working just fine! However, we also want to make sure that replication works. We're going to populate a database with some data, and check that it replicates to the other server. We're also going to shut down one server, alter some data, and start it again, to see if the data synchronizes.
root@db0:/var/lib/mysql-cluster# mysql
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.1.51-ndb-7.1.10-cluster-gpl MySQL Cluster Server (GPL)
Copyright (c) 2000, 2010, Oracle and/or its affiliates. All rights reserved.
This software comes with ABSOLUTELY NO WARRANTY. This is free software,
and you are welcome to modify and redistribute it under the GPL v2 license
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> use test
Database changed
mysql> CREATE TABLE loltest (i INT) ENGINE=NDBCLUSTER;
Query OK, 0 rows affected (0.15 sec)
mysql> INSERT INTO loltest () VALUES (1);
Query OK, 1 row affected (0.00 sec)
mysql> SELECT * FROM loltest;
+------+
| i |
+------+
| 1 |
+------+
1 row in set (0.00 sec)
We now move over to db1 to check if it got replicated.
root@db1:/var/lib/mysql-cluster# mysql
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.1.51-ndb-7.1.10-cluster-gpl MySQL Cluster Server (GPL)
Copyright (c) 2000, 2010, Oracle and/or its affiliates. All rights reserved.
This software comes with ABSOLUTELY NO WARRANTY. This is free software,
and you are welcome to modify and redistribute it under the GPL v2 license
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> use test
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> SELECT * FROM loltest;
+------+
| i |
+------+
| 1 |
+------+
1 row in set (0.00 sec)
That seems to be working just fine, as well. However, we also want to test that the servers synchronize when a server comes back up after being down (so that changes done to the other server get synchronized).
root@db0:/var/lib/mysql-cluster# /etc/init.d/mysql.server stop
Shutting down MySQL..... *
root@db0:~# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: db0.internal:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=3 @10.10.10.10 (mysql-5.1.51 ndb-7.1.10, Nodegroup: 0)
id=4 @10.10.10.20 (mysql-5.1.51 ndb-7.1.10, Nodegroup: 0, Master)
[ndb_mgmd(MGM)] 2 node(s)
id=1 @10.10.10.10 (mysql-5.1.51 ndb-7.1.10)
id=2 @10.10.10.20 (mysql-5.1.51 ndb-7.1.10)
[mysqld(API)] 2 node(s)
id=5 (not connected, accepting connect from any host)
id=6 @10.10.10.20 (mysql-5.1.51 ndb-7.1.10)
root@db1:/var/lib/mysql-cluster# mysql
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.1.51-ndb-7.1.10-cluster-gpl MySQL Cluster Server (GPL)
Copyright (c) 2000, 2010, Oracle and/or its affiliates. All rights reserved.
This software comes with ABSOLUTELY NO WARRANTY. This is free software,
and you are welcome to modify and redistribute it under the GPL v2 license
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> use test
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> INSERT INTO loltest () VALUES (99);
Query OK, 1 row affected (0.00 sec)
mysql> INSERT INTO loltest () VALUES (999);
Query OK, 1 row affected (0.00 sec)
mysql> INSERT INTO loltest () VALUES (100);
Query OK, 1 row affected (0.00 sec)
mysql> SELECT * FROM loltest;
+------+
| i |
+------+
| 1 |
| 100 |
| 99 |
| 999 |
+------+
4 rows in set (0.00 sec)
root@db0:/var/lib/mysql-cluster# /etc/init.d/mysql.server start
Starting MySQL. *
root@db0:/var/lib/mysql-cluster# mysql
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.1.51-ndb-7.1.10-cluster-gpl MySQL Cluster Server (GPL)
Copyright (c) 2000, 2010, Oracle and/or its affiliates. All rights reserved.
This software comes with ABSOLUTELY NO WARRANTY. This is free software,
and you are welcome to modify and redistribute it under the GPL v2 license
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> use test
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> SELECT * FROM loltest;
+------+
| i |
+------+
| 99 |
| 999 |
| 1 |
| 100 |
+------+
4 rows in set (0.00 sec)
That also seems to be working just fine, which concludes the configuration of the cluster itself.
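As a side note, the 'show' output we've been eyeballing can also be parsed from a script via ndb_mgm's -e option, which is handy for a cron-based health check. A sketch, using the sample output from above as stand-in input (on a live cluster you'd capture output=$(ndb_mgm -e show) instead):

```shell
# Count connected ndbd data-nodes: connected nodes show an '@<ip>' after the
# id, while empty slots show '(not connected, ...)' and are not counted.
output='[ndbd(NDB)] 2 node(s)
id=3 @10.10.10.10 (mysql-5.1.51 ndb-7.1.10, Nodegroup: 0)
id=4 @10.10.10.20 (mysql-5.1.51 ndb-7.1.10, Nodegroup: 0, Master)'
connected=$(printf '%s\n' "$output" | grep -c '^id=[0-9]*[[:space:]]*@')
echo "Connected ndbd nodes: $connected"
if [ "$connected" -lt 2 ]; then
  echo "WARNING: a data-node is down" >&2
fi
```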
5. Failover/redundancy/load-balancing
The only thing that's left is to set up a mysql-proxy, which all the clients will use as their MySQL hostname. This mysql-proxy is then 'the middleman', completely transparent to both the servers and the clients. Should a server go down, the clients won't notice it. It also does automatic load-balancing. If you proceed with this, keep in mind that the mysql-proxy becomes a single point of failure in the setup (which somewhat defeats the purpose of the whole MySQL-cluster).
In my setup, I chose to install the mysql-proxy on the former mysql-mgmt machine (now renamed mysql-proxy).
I've installed mysql-proxy on its own virtual machine. Since this is virtualized, it's also redundant should something happen. You could also use two physical machines and Linux HA etc., however that's quite a bit more complex than using a VM (at least if you already have virtualization available).
root@mysql-proxy:~# apt-get install mysql-proxy
root@mysql-proxy:~# mkdir /etc/mysql-proxy
root@mysql-proxy:~# cd /etc/mysql-proxy
root@mysql-proxy:/etc/mysql-proxy# vim mysql-proxy.conf
Add the following to the mysql-proxy.conf-file;
[mysql-proxy]
daemon = true
keepalive = true
proxy-address = mysql-proxy.internal:3306
# db0
proxy-backend-addresses = db0.internal:3306
# db1
proxy-backend-addresses = db1.internal:3306
Then you can start the mysql-proxy service;
root@mysql-proxy:/etc/mysql-proxy# mysql-proxy --defaults-file=/etc/mysql-proxy/mysql-proxy.conf
Now point your clients to use the hostname of the mysql-proxy server, and you're good to go!
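If the proxy misbehaves, it helps to enable logging. The following additions to mysql-proxy.conf are a sketch based on mysql-proxy 0.8.x option names; verify them against mysql-proxy --help-all for your version:

```ini
[mysql-proxy]
# Write diagnostics to a file instead of stdout (path is an example).
log-file = /var/log/mysql-proxy.log
# Verbosity: error, warning, info, message or debug.
log-level = debug
```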
6. Init-scripts (automatic startup at boot)
The 'ndbd'- and 'ndb_mgmd'-services need init-scripts in order to be started automatically at boot. Since they don't seem to be provided in the MySQL Cluster-package, I made them myself.
The init-script that’s included with the mysql-proxy service didn’t work for me, so I wrote my own for that as well.
Copy them and save them in /etc/init.d/. Then make them executable (chmod +x /etc/init.d/<filename>). Finally, add them to rc.d so that they're loaded at boot: update-rc.d <filename> defaults.
/etc/init.d/ndbd
#!/bin/bash
# Linux Standard Base comments
### BEGIN INIT INFO
# Provides: ndbd
# Required-Start: $local_fs $network $syslog $remote_fs
# Required-Stop: $local_fs $network $syslog $remote_fs
# Should-Start:
# Should-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: mysql cluster data node daemon
# Description: mysql cluster data node daemon (ndbd)
### END INIT INFO
ndbd_bin=/usr/local/mysql/bin/ndbd
if ! test -x $ndbd_bin; then
echo "Can't execute $ndbd_bin";
exit;
fi
start_ndbd(){
number_of_ndbd_pids=`ps aux|grep -iv "grep"|grep -i "/usr/local/mysql/bin/ndbd"|wc -l`
if [ $number_of_ndbd_pids -eq 0 ]; then
$ndbd_bin
echo "ndbd started."
else
echo "ndbd is already running."
fi
}
stop_ndbd(){
number_of_ndbd_pids=`ps aux|grep -iv "grep"|grep -i "/usr/local/mysql/bin/ndbd"|wc -l`
if [ $number_of_ndbd_pids -ne 0 ]; then
ndbd_pids=`pgrep ndbd`
for ndbd_pid in $(echo $ndbd_pids); do
kill $ndbd_pid 2> /dev/null
done
number_of_ndbd_pids=`ps aux|grep -iv "grep"|grep -i "/usr/local/mysql/bin/ndbd"|wc -l`
if [ $number_of_ndbd_pids -eq 0 ]; then
echo "ndbd stopped."
else
echo "Could not stop ndbd."
fi
else
echo "ndbd is not running."
fi
}
case "$1" in
'start' )
start_ndbd
;;
'stop' )
stop_ndbd
;;
'restart' )
stop_ndbd
start_ndbd
;;
*)
echo "Usage: $0 {start|stop|restart}" >&2
;;
esac
/etc/init.d/ndb_mgmd
#!/bin/bash
# Linux Standard Base comments
### BEGIN INIT INFO
# Provides: ndb_mgmd
# Required-Start: $local_fs $network $syslog $remote_fs
# Required-Stop: $local_fs $network $syslog $remote_fs
# Should-Start:
# Should-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: mysql cluster manager
# Description: mysql cluster manager
### END INIT INFO
ndb_mgmd=/usr/local/mysql/bin/ndb_mgmd
config_file=/var/lib/mysql-cluster/config.ini
config_dir=/var/lib/mysql-cluster
if ! test -x $ndb_mgmd; then
echo "Can't execute $ndb_mgmd"
exit;
fi
start_ndb_mgmd(){
number_of_ndb_mgmd_pids=`ps aux|grep -iv "grep"|grep -i "$ndb_mgmd"|wc -l`
if [ $number_of_ndb_mgmd_pids -eq 0 ]; then
$ndb_mgmd -f $config_file --config-dir=$config_dir
echo "ndb_mgmd started."
else
echo "ndb_mgmd is already running."
fi
}
stop_ndb_mgmd(){
number_of_ndb_mgmd_pids=`ps aux|grep -iv "grep"|grep -i "$ndb_mgmd"|wc -l`
if [ $number_of_ndb_mgmd_pids -ne 0 ]; then
ndb_mgmd_pids=`pgrep ndb_mgmd`
for ndb_mgmd_pid in $(echo $ndb_mgmd_pids); do
kill $ndb_mgmd_pid 2> /dev/null
done
number_of_ndb_mgmd_pids=`ps aux|grep -iv "grep"|grep -i "$ndb_mgmd"|wc -l`
if [ $number_of_ndb_mgmd_pids -eq 0 ]; then
echo "ndb_mgmd stopped."
else
echo "Could not stop ndb_mgmd."
fi
else
echo "ndb_mgmd is not running."
fi
}
case "$1" in
'start' )
start_ndb_mgmd
;;
'stop' )
stop_ndb_mgmd
;;
'restart' )
stop_ndb_mgmd
start_ndb_mgmd
;;
*)
echo "Usage: $0 {start|stop|restart}" >&2
;;
esac
/etc/init.d/mysql-proxy
#! /bin/bash
### BEGIN INIT INFO
# Provides: mysql-proxy
# Required-Start: $local_fs $network $syslog $remote_fs
# Required-Stop: $local_fs $network $syslog $remote_fs
# Should-Start:
# Should-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: MySQL Proxy
# Description: MySQL Proxy
### END INIT INFO
mysql_proxy=/usr/bin/mysql-proxy
config_file=/etc/mysql-proxy/mysql-proxy.conf
if ! test -x $mysql_proxy; then
echo "Can't execute $mysql_proxy"
exit;
fi
start_mysql_proxy(){
number_of_mysql_proxy_pids=`ps aux|grep -iv "grep"|grep -i "/usr/bin/mysql-proxy"|wc -l`
if [ $number_of_mysql_proxy_pids -eq 0 ]; then
$mysql_proxy --defaults-file=$config_file
echo "mysql-proxy started."
else
echo "mysql-proxy is already running."
fi
}
stop_mysql_proxy(){
number_of_mysql_proxy_pids=`ps aux|grep -iv "grep"|grep -i "/usr/bin/mysql-proxy"|wc -l`
if [ $number_of_mysql_proxy_pids -ne 0 ]; then
mysql_proxy_pids=`pgrep mysql-proxy`
for mysql_proxy_pid in $(echo $mysql_proxy_pids); do
kill $mysql_proxy_pid 2> /dev/null
done
number_of_mysql_proxy_pids=`ps aux|grep -iv "grep"|grep -i "/usr/bin/mysql-proxy"|wc -l`
if [ $number_of_mysql_proxy_pids -eq 0 ]; then
echo "mysql-proxy stopped."
else
echo "Could not stop mysql-proxy."
fi
else
echo "mysql-proxy is not running."
fi
}
case "$1" in
'start' )
start_mysql_proxy
;;
'stop' )
stop_mysql_proxy
;;
'restart' )
stop_mysql_proxy
start_mysql_proxy
;;
*)
echo "Usage: $0 {start|stop|restart}" >&2
;;
esac