GlusterFS and NFS-Ganesha integration

Over the past few years, there has been an enormous increase in the number of user-space filesystems being developed and deployed. One common challenge all of these filesystems' users faced was the huge performance hit when their filesystems were exported via kernel-NFS (the well-known and widely used network protocol). To address this issue, a few of them started implementing the NFS protocol as part of the filesystem itself (e.g. Gluster-NFS), but those implementations were limited in protocol compliance and in the versions they supported. Hence in 2007, a group of people from CEA, France, decided to develop a user-space NFS server which

  • supports both NFS versions 3 & 4
  • is protocol-compliant
  • can access various filesystems
  • can manage very large data and metadata caches
  • scales as much as possible
  • is free software
  • is portable to any Unix-like filesystem.

This user-space NFS server is called NFS-Ganesha, and it is now being widely deployed by many filesystems.

NFS-Ganesha now supports the NFS (v3, 4.0, 4.1, pNFS) and 9P (from the Plan 9 operating system) protocols concurrently. It provides a FUSE-compatible File System Abstraction Layer (FSAL) that allows filesystem developers to plug in their own storage mechanism and access it from any NFS client.

With NFS-Ganesha, the NFS client talks to the NFS-Ganesha server instead, which already runs in user address space. NFS-Ganesha can access FUSE filesystems directly through its FSAL without copying any data to or from the kernel, thus potentially improving response times. Of course, the network streams themselves (TCP/UDP) are still handled by the Linux kernel when using NFS-Ganesha.

GlusterFS, too, has recently been integrated with NFS-Ganesha to export the volumes created via GlusterFS, using "libgfapi". libgfapi is a new user-space library developed to access data in GlusterFS. It performs I/O on Gluster volumes directly, without a FUSE mount. It is a filesystem-like API which runs in the application process context (which here is NFS-Ganesha) and eliminates the use of FUSE and the kernel VFS layer from GlusterFS volume access. By integrating NFS-Ganesha with libgfapi, speed and latency are improved compared to FUSE-mount access.

In this post, I will guide you through the steps to set up NFS-Ganesha (the V2.1 release) using GlusterFS as the backend filesystem.

Pre-requisites

i) Before starting to set up NFS-Ganesha, you need to create a GlusterFS volume. Please refer to the document below to set up and create GlusterFS volumes.

http://www.gluster.org/community/documentation/index.php/QuickStart

ii) Disable the kernel-NFS and Gluster-NFS services on the system using the following commands:

  • service nfs stop
  • gluster vol set <volname> nfs.disable ON (Note: this command has to be repeated for all the volumes in the trusted pool; a small loop that does this for every volume is sketched below)
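If the trusted pool contains many volumes, a small loop saves repeating that command by hand (a sketch; it assumes the gluster CLI is available on the node you run it from):

#for vol in $(gluster volume list); do gluster vol set "$vol" nfs.disable ON; done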

iii) Usually the libgfapi.so* files are installed in "/usr/lib" or "/usr/local/lib", depending on whether you installed GlusterFS from RPMs or from sources. Verify whether those libgfapi.so* files are also linked in "/usr/lib64" and "/usr/local/lib64". If not, create links for those .so files in those directories.
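For example, on a 64-bit system where the libraries ended up in "/usr/lib", the links could be created roughly as follows (a sketch; adjust the source directory and library version to wherever the libgfapi.so* files actually are on your system):

#ls /usr/lib/libgfapi.so*
#ln -s /usr/lib/libgfapi.so.0 /usr/lib64/libgfapi.so.0
#ln -s /usr/lib/libgfapi.so /usr/lib64/libgfapi.so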

iv) IPv6 should be enabled on the system. To enable IPv6 support, ensure that the line "options ipv6 disable=1" in /etc/modprobe.d/ipv6.conf is commented out or removed. This change requires a reboot of the machine.
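A quick way to check and apply this change (a sketch; the file may not exist at all on your distribution, in which case there is nothing to do):

#grep "options ipv6 disable=1" /etc/modprobe.d/ipv6.conf
#sed -i 's/^options ipv6 disable=1/#options ipv6 disable=1/' /etc/modprobe.d/ipv6.conf
#reboot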

Installing nfs-ganesha

i) using rpm install

* nfs-ganesha RPMs are available in Fedora 19 and later. To install nfs-ganesha, run

#yum install nfs-ganesha

* For CentOS or EL, RPMs are available at the link below –

        http://download.gluster.org/pub/gluster/glusterfs/nfs-ganesha

Note: “ganesha.nfsd” will be installed in “/usr/bin”

ii) using sources

#cd /root
#git clone git://github.com/nfs-ganesha/nfs-ganesha.git
#cd nfs-ganesha
#git submodule update --init
#git checkout -b next origin/next
(Note: origin/next is the current development branch. To go to a specific release, say V2.1, use the command "git checkout -b next V2.1")
#rm -rf ~/build; mkdir ~/build ; cd ~/build
#cmake -DUSE_FSAL_GLUSTER=ON -DCURSES_LIBRARY=/usr/lib64 -DCURSES_INCLUDE_PATH=/usr/include/ncurses -DCMAKE_BUILD_TYPE=Maintainer /root/nfs-ganesha/src
(For a debug build use -DDEBUG_SYMS=ON; for dynamic exports use -DUSE_DBUS=ON)
#make; make install

Note: cmake-2.8 or higher, libcap-devel, libnfsidmap, dbus-devel, doxygen and ncurses* packages need to be installed prior to compiling the code. On Fedora, libjemalloc and libjemalloc-devel are also required.
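On Fedora, these build dependencies can be pulled in roughly as follows (a sketch; exact package names may differ slightly between distributions and releases):

#yum install -y cmake libcap-devel libnfsidmap-devel dbus-devel doxygen ncurses-devel jemalloc jemalloc-devel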

When installed via sources, “ganesha.nfsd” will be copied to “/usr/local/bin”

Exporting GlusterFS volume

Here I will provide details of how one can manually export GlusterFS volumes via nfs-ganesha. There are a few CLI options and D-Bus commands available to dynamically export/unexport volumes; I will explain how to use those in another post.

Step1:

To export any GlusterFS volume or directory, create an EXPORT block for each of those entries in a .conf file, for example export.conf. The following is the minimal set of parameters required to export any entry.

# cat export.conf 

EXPORT{    
	Export_Id = 1 ;   # Export ID unique to each export
	Path = "volume_path";  # Path of the volume to be exported. Eg: "/test_volume"

	FSAL { 
		name = GLUSTER;
		hostname = "10.xx.xx.xx";  # IP of one of the nodes in the trusted pool
		volume = "volume_name";	 # Volume name. Eg: "test_volume"
	}

	Access_type = RW;	 # Access permissions
	Squash = No_root_squash; # To enable/disable root squashing
	Disable_ACL = TRUE;	 # To enable/disable ACL
	Pseudo = "pseudo_path";	 # NFSv4 pseudo path for this export. Eg: "/test_volume_pseudo"
	Protocols = "3", "4" ;	 # NFS protocols supported
	Transports = "UDP", "TCP" ; # Transport protocols supported
	SecType = "sys";	 # Security flavors supported
}

For more parameters available, please refer to “/root/nfs-ganesha/src/config_samples/export.txt” or https://github.com/nfs-ganesha/nfs-ganesha/blob/master/src/config_samples/export.txt.

Note: This file is sufficient to export volumes with the default set of parameters. To start with, you can choose to rename this file to "nfs-ganesha.conf" and execute Step 4.

Step2:

In addition to the above, to configure any global options, define the corresponding blocks containing them, along with their new values, in the "nfs-ganesha.conf" file. For example,

NFSv4
{
    # Lifetime for NFSv4 Leases
    Lease_Lifetime = 60 ;
}

A sample file containing various such options is available as the README file in the git repo – https://github.com/soumyakoduri/nfs-ganesha/blob/README/src/FSAL/FSAL_GLUSTER/README .

Note: An exhaustive list of all such options can be found at "/root/nfs-ganesha/src/config_samples/config.txt" or https://github.com/nfs-ganesha/nfs-ganesha/blob/master/src/config_samples/config.txt

Step3:

When there are many volumes/sub-directories to be exported, for better readability you can choose to define the EXPORT blocks for each of those entries in separate files and include them at the end of nfs-ganesha.conf, as shown below.

%include "export.conf"

Step4:

To start nfs-ganesha manually, execute the following command:

#ganesha.nfsd -f <location_of_nfs-ganesha.conf_file> -L <location_of_log_file> -N <log_level> -d

For example:

#ganesha.nfsd -f /root/build/nfs-ganesha.conf -L nfs-ganesha.log -N NIV_DEBUG -d

where:

nfs-ganesha.log is the log file for the ganesha.nfsd process.

NIV_DEBUG is the log level.

The above 4 steps should get you started with nfs-ganesha. After following them, verify that the volume is exported.

To check if nfs-ganesha has started, execute the following command:

#ps aux | grep ganesha

To check if the volume is exported, run

#showmount -e localhost
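You can also verify access from an NFS client by mounting the export (a sketch; replace the server address with your own, and note that for NFSv4 it is the Pseudo path from the EXPORT block that gets mounted):

#mount -t nfs -o vers=3 10.xx.xx.xx:/test_volume /mnt
#mount -t nfs -o vers=4 10.xx.xx.xx:/test_volume_pseudo /mnt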

Additional Notes:

To switch back to gluster-nfs/kernel-nfs, kill the ganesha daemon and start those services using the below commands –

#pkill ganesha
#service nfs start (for kernel-nfs)
#gluster vol set <volname> nfs.disable OFF

I hope this document helps you configure NFS-Ganesha using GlusterFS. For any queries or troubleshooting, please leave a comment; I will be glad to help you out.

References:

https://github.com/nfs-ganesha/nfs-ganesha/wiki

http://archive09.linux.com/feature/153789

https://forge.gluster.org/nfs-ganesha-and-glusterfs-integration/pages/Home

http://humblec.com/libgfapi-interface-glusterfs/

Web Interface to manage Gluster Nodes

oVirt is an open-source tool used to create and manage Gluster nodes through an easy-to-use web interface.

This document covers how Gluster can be used with oVirt.

Want to manage Gluster nodes with ease using oVirt? Create your own oVirt setup by following these simple steps.

Machine Requirements :

  • Fedora 19 with 4GB of memory (minimum) and 20GB of hard disk space.
  • Recommended browsers:
      • Mozilla Firefox 17
      • IE9 and above for the web admin portal.

Installation steps:

  • Download and install the Fedora 19 ISO.
  • Add the official oVirt repository for Fedora: "yum localinstall http://resources.ovirt.org/releases/ovirt-release.noarch.rpm"
  • Install ovirt-engine by running the command "yum install -y ovirt-engine".
  • Once the installation is completed, run "engine-setup" to set up oVirt with Gluster. (The full command sequence is summarized below.)
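Put together, the installation boils down to the following commands (a sketch; the repository URL is the one from the step above):

#yum localinstall http://resources.ovirt.org/releases/ovirt-release.noarch.rpm
#yum install -y ovirt-engine
#engine-setup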

Once you run the above command, you will be prompted with the questions below. Provide the answers as follows.

The installer will take you through a series of interactive questions as listed in the following example. If you do not enter a value when prompted, the installer uses the default settings which are stated in [ ] brackets.

The default ports 80 and 443 must be available to access the manager on HTTP and HTTPS respectively.

[ INFO ] Stage: Initializing
[ INFO ] Stage: Environment setup
Configuration files: [‘/etc/ovirt-engine-setup.conf.d/10-packaging.conf’]
Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20140411224650.log
Version: otopi-1.2.0 (otopi-1.2.0-1.fc19)
[ INFO ] Stage: Environment packages setup
[ INFO ] Yum Status: Downloading Packages
[ INFO ] Yum Download/Verify: iproute-3.12.0-2.fc19.x86_64
[ INFO ] Yum Status: Check Package Signatures
[ INFO ] Yum Status: Running Test Transaction
[ INFO ] Yum Status: Running Transaction
[ INFO ] Yum update: 1/2: iproute-3.12.0-2.fc19.x86_64
[ INFO ] Yum updated: 2/2: iproute
[ INFO ] Yum Verify: 1/2: iproute.x86_64 0:3.12.0-2.fc19 – u
[ INFO ] Yum Verify: 2/2: iproute.x86_64 0:3.9.0-1.fc19 – ud
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup
[ INFO ] Stage: Environment customization

–== PRODUCT OPTIONS ==–

–== PACKAGES ==–

[ INFO ] Checking for product updates…
[ INFO ] No product updates found

–== NETWORK CONFIGURATION ==–

Host fully qualified DNS name of this server [localhost.localdomain]:  [ Give the FQDN  / locally resolvable host]

If the user does not provide an FQDN, setup will produce the following warning.

[WARNING] Failed to resolve localhost.localdomain using DNS, it can be resolved only locally
Setup can automatically configure the firewall on this system.
Note: automatic configuration of the firewall may overwrite current settings.

Do you want Setup to configure the firewall? (Yes, No) [Yes] : Yes

[ INFO ] firewalld will be configured as firewall manager.

–== DATABASE CONFIGURATION ==–

Where is the Engine database located? (Local, Remote) [Local]: Local

Setup can configure the local postgresql server automatically for the engine to run. This may conflict with existing applications.
Would you like Setup to automatically configure postgresql and create Engine database, or prefer to perform that manually? (Automatic, Manual) [Automatic]: Automatic

–== OVIRT ENGINE CONFIGURATION ==–

Application mode (Both, Virt, Gluster) [Both]: Gluster (The input provided here is Gluster, as we are interested only in monitoring Gluster nodes).

Engine admin password:  [provide a password, which would be used to login]
Confirm engine admin password: [confirm the password]

If the password provided is weak, setup gives a warning like the one below.

[WARNING] Password is weak: it is based on a dictionary word
Use weak password? (Yes, No) [No]: [answer Yes if you want to keep the weak password]

–== PKI CONFIGURATION ==–

Organization name for certificate [localdomain]: ABCD

–== APACHE CONFIGURATION ==–

Setup can configure apache to use SSL using a certificate issued from the internal CA.
Do you wish Setup to configure that, or prefer to perform that manually? (Automatic, Manual) [Automatic]: [Automatic]

Setup can configure the default page of the web server to present the application home page. This may conflict with existing applications.
Do you wish to set the application as the default page of the web server? (Yes, No) [Yes]: Yes

–== SYSTEM CONFIGURATION ==–

Configure WebSocket Proxy on this machine? (Yes, No) [Yes]: [No]

[ INFO ] NFS configuration skipped with application mode Gluster

–== MISC CONFIGURATION ==–

–== END OF CONFIGURATION ==–

[ INFO ] Stage: Setup validation
[WARNING] Cannot validate host name settings, reason: resolved host does not match any of the local addresses
[WARNING] Less than 16384MB of memory is available

If the system has less than 16GB of memory, setup displays the above warning (16GB is the recommended amount of memory).

–== CONFIGURATION PREVIEW ==–

Engine database name : engine
Engine database secured connection : False
Engine database host : localhost
Engine database user name : engine
Engine database host name validation : False
Engine database port : 5432
PKI organization : ABCD
Application mode : gluster
Firewall manager : firewalld
Update Firewall : True
Configure WebSocket Proxy : True
Host FQDN : localhost.localdomain
Configure local Engine database : True
Set application as default page : True
Configure Apache SSL : True

Please confirm installation settings (OK, Cancel) [OK]:

The installation commences. The following message displays, indicating that the installation was successful.

[ INFO ] Stage: Transaction setup
[ INFO ] Stopping engine service
[ INFO ] Stopping websocket-proxy service
[ INFO ] Stage: Misc configuration
[ INFO ] Stage: Package installation
[ INFO ] Stage: Misc configuration
[ INFO ] Creating PostgreSQL ‘engine’ database
[ INFO ] Configuring PostgreSQL
[ INFO ] Creating Engine database schema
[ INFO ] Creating CA
[ INFO ] Configuring WebSocket Proxy
[ INFO ] Generating post install configuration file ‘/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf’
[ INFO ] Stage: Transaction commit
[ INFO ] Stage: Closing up

–== SUMMARY ==–

SSH fingerprint: <SSH_FINGERPRINT>
Internal CA: <CA_FINGERPRINT>
Web access is enabled at:
http://example.ovirt.org:80/ovirt-engine
https://example.ovirt.org:443/ovirt-engine
Please use the user “admin” and password specified in order to login into oVirt Engine

–== END OF SUMMARY ==–

[ INFO ] Starting engine service
[ INFO ] Restarting httpd
[ INFO ] Restarting nfs services
[ INFO ] Generating answer file ‘/var/lib/ovirt-engine/setup/answers/20140310163837-setup.conf’
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20140310163604.log
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ INFO ] Execution of setup completed successfully

`Installation completed successfully`

Great!! You are almost there.

Now browse to the URL "https://<ip>/ovirt-engine", provide the user name as admin, and use the password you gave during the setup.

Add your Gluster nodes to the console and enjoy features like adding new or importing existing clusters, creating/deleting volumes, adding/deleting bricks, setting/resetting volume options, optimizing volumes for virt store, rebalancing, and removing bricks.

Another fantastic way to manage your Gluster nodes through a UI

Not interested in performing all of the above steps, but still want to do the actions mentioned above?

Is it possible? Yes, why not? Go through the steps below.

1) Install docker on your machine by running the command "yum install -y docker".

2) Now start docker by running the command "systemctl start docker".

3) Search for the image by running the command "docker search kasturidocker/centos_ovirt_3.5".

4) Just log in to the above Linux container by running the command "docker run -i -t kasturidocker/centos_ovirt_3.5 /bin/bash".

That is it for the setup; only two quick checks remain.

5) Check if ovirt-engine is running by running the command "service ovirt-engine status"; if not, start it.

6) Get the IP of the system and browse to the URL "http://<ip>/ovirt-engine".
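In command form, the whole flow looks like this (a sketch; the image name kasturidocker/centos_ovirt_3.5 is the one referenced above, and the last two commands are run inside the container, starting the engine only if the status check shows it is stopped):

#yum install -y docker
#systemctl start docker
#docker search kasturidocker/centos_ovirt_3.5
#docker run -i -t kasturidocker/centos_ovirt_3.5 /bin/bash
#service ovirt-engine status
#service ovirt-engine start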

Your web console is ready in just 6 steps; start adding Gluster nodes and managing them.


Ovirt + Docker

A small blog post on how to run oVirt inside a Docker container.

Install docker on your system.

Get a Docker account.

Pull a base image from Docker that oVirt supports, for example Fedora or CentOS.

Let us install oVirt on CentOS by pulling the CentOS base image from Docker.

Instructions to follow:

docker run -i -t centos /bin/bash – this logs you into the bash prompt of the CentOS container.

Now install all the packages which oVirt requires on top of the CentOS base image.

As a first step, add the official oVirt repository by running the command "yum localinstall http://resources.ovirt.org/releases/ovirt-release.noarch.rpm"

Add / enable the epel repo by running the command “rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm”

Install ovirt-engine by running the command “yum install -y ovirt-engine”

Before running the engine-setup, make sure /etc/sysconfig/network is present with the following contents

NETWORKING=yes
HOSTNAME=localhost.localdomain

Without the above, postgresql will not start and the database will not be created.
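A quick way to create this file inside the container (a sketch):

#cat > /etc/sysconfig/network <<EOF
NETWORKING=yes
HOSTNAME=localhost.localdomain
EOF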

Run the engine-setup command, and when prompted with the question "Do you wish to setup firewall as IPtables?", answer "No"; otherwise, execution will fail.

Once engine-setup is done, browse to the URL "https://<ip>/ovirt-engine".

Pushing the image back to the Docker central repository

To push the image, the user needs to perform the steps below.

Exit from the CentOS base image with oVirt installed.

Run the command to commit the recently created image: "docker commit <commitid> <docker_username>/<name for your image>"

Now push the Docker image to the central repository, so that others can make use of it, by running the command "docker push <docker_username>/<name of the newly created image>"
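As a concrete example (a sketch; myuser and centos_ovirt are hypothetical names, and the container ID is the one shown by "docker ps -a" after you exit the container):

#docker ps -a
#docker commit <container_id> myuser/centos_ovirt
#docker push myuser/centos_ovirt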

That is it!!! The newly created image, i.e. CentOS + oVirt, is now pushed to the central repository.

You can verify this by searching for the pushed image. Run the command "docker search <newly created docker image>".

There you go; in the result set returned, you should see the image pushed by you.