Chris.Sonnier:

I needed an appliance that I could deploy from one template and have the web application connect to a local MySQL (so there is no configuration needed on the application), but read/write to one MySQL database, a.k.a. multi-master. We also wanted to reuse the extra HDD space on each appliance to form an HA storage volume distributed across all nodes, providing redundancy for those files (and eliminating the need for Samba to share files between appliances).

Below are the installed apps on an Ubuntu 10.04 Turnkey Image:

  • MySQL
  • RoR
  • Apache
  • Gluster
  • Galera for MySQL
  • Samba (not always necessary)

Result: a single image that, with small configuration changes, provides a cluster of multi-master MySQL servers connected to your web application, with clustered storage.

If you have any questions, feel free to ask.

-Chris

Comments

Jeremy Davis:

But I definitely think it would have applications, and it could be a cool project.

Feel free to create a blueprint detailing your thoughts (or even just link to this...). Also, you could consider creating a TKLPatch.
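
If I remember the TKLPatch layout right, it's basically a directory with an executable conf script (run inside the appliance's root filesystem) plus an optional overlay of files to copy in, so a very rough sketch for this idea might look like the following (names are just placeholders):

      gluster-galera/
          conf                  <- script that builds/installs gluster, galera, etc.
          overlay/              <- files copied over the root filesystem, e.g.
              etc/mysql/conf.d/wsrep.cnf

      # then apply it to a base image:
      tklpatch path/to/base.iso gluster-galera/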

isk:

... and I'd also be interested to know how you got GlusterFS (which version?) running on an Ubuntu 10.04 appliance, since there only appears to be an x64 Debian package available at version 3.3 and the apt-get repo is hopelessly out of date (and doesn't work).

Cheers

Chris.Sonnier:

glusterfs 3.2.5 

I believe you will have to compile and build it yourself. I did install a PAE kernel for Ubuntu 10.04, but I'm not sure if that has anything to do with it. Here is a link to the version that I am running: http://download.gluster.com/pub/gluster/glusterfs/3.2/3.2.5/
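
For what it's worth, the build follows the standard autotools flow; roughly the following, assuming the tarball name matches the directory listing at the link above (I didn't note the exact dependency list, so expect configure to tell you what's missing):

      wget http://download.gluster.com/pub/gluster/glusterfs/3.2/3.2.5/glusterfs-3.2.5.tar.gz
      tar -xzf glusterfs-3.2.5.tar.gz
      cd glusterfs-3.2.5
      ./configure
      make && make install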

Let me know if you have any issues and I will try to help you out.

 

Good Day!

isk:

Thanks for the offer. I bit the bullet and attempted a source build of the 3.3 version and managed to get a pair of appliances replicating in no time. Here's what I did (apologies if this is the wrong place for this stuff; I always keep notes of what I do and hope someone else will find them useful):

Installing GlusterFS 3.3 from source on Turnkey Linux Appliance (Ubuntu 10.04)
==============================================================================

 1. Log in via SSH as "root".
 2. Stay in the home folder:

      cd ~
    
 3. Download the GlusterFS 3.3 source package:

      wget http://download.gluster.org/pub/gluster/glusterfs/LATEST/glusterfs-3.3.0...
    
 4. Unpack the downloaded package:

      tar -xvzf ./glusterfs-3.3.0.tar.gz
    
 5. Change to the package directory:

      cd glusterfs-3.3.0
    
 6. Install package dependencies:

      apt-get update
      apt-get install gcc flex bison libreadline5-dev
    
 7. Run the configuration utility:

    ./configure
   
       GlusterFS configure summary
        ===========================
        FUSE client        : yes
        Infiniband verbs   : no
        epoll IO multiplex : yes
        argp-standalone    : no
        fusermount         : no
        readline           : yes
        georeplication     : yes
       
 8. Build GlusterFS:

      make                                (put kettle on for nice cup of tea)
      make install
    
 9. Make sure the shared library can be found:

    echo "include /usr/local/lib" >> /etc/ld.so.conf
    ldconfig
   
10. Verify the installed version:

        glusterfs --version

        glusterfs 3.3.0 built on Jun  8 2012 21:34:47

        Repository revision: git://git.gluster.com/glusterfs.git
        Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
        GlusterFS comes with ABSOLUTELY NO WARRANTY.
        You may redistribute copies of GlusterFS under the terms of the GNU General Public License.

11. Use Webmin to open the following ports in the firewall (** harden later **); an iptables equivalent is sketched after step 12:

        tcp    111
        udp    111
        tcp    24007:24011
        tcp    38465:38485
       
12. Start GlusterFS daemon:

        service glusterd start
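
If you'd rather do it from the command line than Webmin, something along these lines should open the same ports (just a sketch; it doesn't persist across reboots, so save/restore the rules however you normally do):

        iptables -A INPUT -p tcp --dport 111 -j ACCEPT
        iptables -A INPUT -p udp --dport 111 -j ACCEPT
        iptables -A INPUT -p tcp --dport 24007:24011 -j ACCEPT
        iptables -A INPUT -p tcp --dport 38465:38485 -j ACCEPT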
       
Configuring GlusterFS 3.3 for two server file system replication
================================================================

1. Perform GlusterFS source installation procedure as above for each Turnkey Linux node appliance.
2. Make sure each node appliance can resolve the others via DNS.
3. Add servers to trusted storage pool:

         From server1.yourdomain:
         
                 gluster peer probe server2.yourdomain
                 Probe successful
                
         From server2.yourdomain:
         
                 gluster peer probe server1.yourdomain
                 Probe successful

4. Confirm peers can now see each other:

            From server1.yourdomain:
         
                  gluster peer status
                  
                  Number of Peers: 1

                 Hostname: server2.yourdomain
                 Uuid: df3811cc-3593-48e0-ac59-d82338543327
                 State: Peer in Cluster (Connected)

            From server2.yourdomain:
         
                  gluster peer status
                  
                  Number of Peers: 1

                 Hostname: server1.yourdomain
                 Uuid: 47619cc6-eba2-4bae-a0ad-17b745150c2d
                 State: Peer in Cluster (Connected)

5. Create replicated volumes:

            From server1.yourdomain:
           
                 gluster volume create your-volume-name replica 2 transport tcp server1.yourdomain:/exp1 server2.yourdomain:/exp2

                 Creation of volume your-volume-name has been successful. Please start the volume to access data.
                 
6. Start the volume:

            From server1.yourdomain:
           
                 gluster volume start your-volume-name
                   
                 Starting volume your-volume-name has been successful

7. Display volume information:

            From server1.yourdomain:
           
                 gluster volume info your-volume-name
                 
                 Volume Name: your-volume-name
                 Type: Replicate
                 Volume ID: b9ff3770-53d9-4209-9df6-c0006ade6dde
                 Status: Started
                 Number of Bricks: 1 x 2 = 2
                 Transport-type: tcp
                 Bricks:
                 Brick1: server1.yourdomain:/exp1
                 Brick2: server2.yourdomain:/exp2

 8. Add the FUSE loadable kernel module (LKM) to the Linux kernel for each client node:

          From server1.yourdomain:
         
               modprobe fuse
               dmesg | grep -i fuse
               fuse init (API version 7.13)
               
          From server2.yourdomain:
         
               modprobe fuse
               dmesg | grep -i fuse
               fuse init (API version 7.13)


9. Mount the volume on each server node:

            From server1.yourdomain:
           
                 mkdir /mnt/glusterfs
                 mount -t glusterfs server1.yourdomain:/your-volume-name /mnt/glusterfs
                 
            From server2.yourdomain:
           
                 mkdir /mnt/glusterfs
                 mount -t glusterfs server2.yourdomain:/your-volume-name /mnt/glusterfs                

10. Test the replication:

          From server1.yourdomain:
         
             touch /mnt/glusterfs/hello.world
            
          From server2.yourdomain:
         
               ls -l /mnt/glusterfs
               
               total 1
                 -rw-r--r-- 1 root root    0 Jun  8 22:48 hello.world
                 
11. To do...

         - Harden firewall
         - More testing
         - Add additional nodes (using snapshots)
         - Autostart daemons (rough sketch below)
         - Automount glusterfs (rough sketch below)
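
In case it helps with the last two items, this is roughly what I'm planning; it assumes the source install left a glusterd init script in /etc/init.d (if not, there should be one under extras/ in the source tree to copy in), and it uses the native client mount in /etc/fstab:

        # start the gluster daemon at boot
        update-rc.d glusterd defaults

        # mount the volume at boot; _netdev delays the mount until networking is up
        echo "server1.yourdomain:/your-volume-name /mnt/glusterfs glusterfs defaults,_netdev 0 0" >> /etc/fstab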

More reading: http://www.gluster.org/community/documentation/index.php/Main_Page

Enjoy!

Chris.Sonnier:

Good to hear you got it working! Thanks for the notes; I may give this a try and update our template appliance. I'll be sure to note your contribution when we finally get some time to host a public image of our appliance, and I'll send you the link when one is available.

isk:

My next task was to get MySQL multi-master replication working with Galera/wsrep, but I can't get the second node to sync. I believe the problem lies in the MySQL 5.1 version that comes with the LAMP appliance and the available memory in a micro instance; there are reported issues of high memory usage in wsrep in older builds.

Have you managed to get Galera working with 5.1?

When I get a moment I intend to try the Percona XtraDB Cluster, which is built for MySQL 5.5 and the latest Galera/wsrep, on a core Ubuntu 10.04 LTS (Lucid), for which they have a Debian package and a multi-node deployment tool. I may also try the SeveralNines ClusterControl for MySQL 5.5 Galera, which uses an additional instance as a cluster monitor.

Chris.Sonnier:

I did have a look at SeveralNines and it looked great, but since we wanted a base template that contained everything necessary to run our application standalone or in a cluster, I decided to pass on ClusterControl, since it requires a different setup than the rest of the cluster. I think it is worth looking back at, and possibly integrating into a single 'master' monitor that can handle monitoring for the database, filesystem, server hardware, etc.

Chris.Sonnier:

Yes, Galera is running on MySQL 5.1.69, and we have not experienced any high memory usage problems. Here is where I got all the Galera files for our build:

  1. https://launchpad.net/codership-mysql/5.1/5.1.59-22.2
  2. https://launchpad.net/galera/1.x/22.1.1

Did you make sure to uninstall MySQL before installing the MySQL version you need for Galera? Galera will only sync InnoDB tables, so be sure all your table storage engines are set to InnoDB.
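
If it helps, something like this will list any tables that aren't already InnoDB and convert one of them (the database and table names below are just placeholders):

      mysql -u root -p -e "SELECT table_schema, table_name, engine FROM information_schema.tables WHERE engine <> 'InnoDB' AND table_schema NOT IN ('mysql','information_schema');"
      mysql -u root -p -e "ALTER TABLE yourdb.yourtable ENGINE=InnoDB;"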

This was a helpful site to get started:

http://www.magicposition.com/2012/01/16/installing-galera-mysql-clusteri...

isk:

Hi Chris

Actually I used slightly different packages for the 5.1.62 build, which is the version that came with the LAMP appliance. As such I only needed to remove the mysql-server-5.1, mysql-server, and mysql-server-core-5.1 packages, and left the client and common packages as they were (roughly as sketched after the links below):

1. https://launchpad.net/codership-mysql/5.1/5.1.62-23.4/+download/mysql-se...

2. https://launchpad.net/galera/2.x/23.2.1/+download/galera-23.2.1-i386.deb
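
For the record, the package swap itself was along these lines (the wsrep-enabled server .deb filename is a guess based on the launchpad link above, so check what you actually downloaded; apt-get -f install just pulls in any missing dependencies):

      apt-get remove mysql-server mysql-server-5.1 mysql-server-core-5.1
      dpkg -i mysql-server-wsrep-5.1.62-23.4-i386.deb galera-23.2.1-i386.deb
      apt-get -f install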

I haven't got to the InnoDB part yet, as I was trying to sync two nodes with just the basic installation. As far as I can see all the permissions are correct and I set the firewalls wide open. The second node can see the cluster and the UUID for the cluster is correct, but it gets stuck on the initial SST(4) sync stage, then crashes MySQL, restarts, gets stuck, crashes, and so on...

The bootstrap node runs fine and phpMyAdmin reports the correct client and server version and the wsrep parameters are all good. The second node has a blank client UUID and the ready status stays OFF.

The tutorial at magicposition looks promising and is pretty much what I did, with a few minor exceptions. I placed the wsrep parameter settings in the config file at /etc/mysql/conf.d/wsrep.cnf, which includes the bind to 0.0.0.0 (having commented out the bind to localhost in my.cnf). I'll try some of the different settings they give, such as binding to the actual IP of the server...
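
For context, the wsrep part of my config currently looks roughly like this (the provider path is where the galera .deb put the library on my box; addresses, names and credentials are placeholders, and wsrep_sst_auth needs a MySQL user with enough privileges for mysqldump):

      [mysqld]
      bind-address=0.0.0.0
      wsrep_provider=/usr/lib/galera/libgalera_smm.so
      wsrep_cluster_name="my-cluster"
      # empty gcomm:// on the bootstrap node; point the others at an existing node
      wsrep_cluster_address="gcomm://"
      wsrep_sst_method=mysqldump
      wsrep_sst_auth=sst-user:sst-password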

Cheers

Ian

isk:

My bad. I had a typo in the wsrep_sst_auth parameter setting so mysqldump didn't have the authority to perform the sync. Fixed now and all working with three nodes. Happy days
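
For anyone following along, a quick sanity check on each node is to look at the wsrep status variables; wsrep_cluster_size should match the node count (3 here) and wsrep_ready should be ON:

      mysql -u root -p -e "SHOW STATUS LIKE 'wsrep%';"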
