GlusterFS and NFS-Ganesha integration

Over the past few years, there has been an enormous increase in the number of user-space filesystems being developed and deployed. One common challenge all of their users faced was a huge performance hit when those filesystems were exported via kernel NFS (a well-known and widely used network protocol). To address this issue, a few of these projects started implementing the NFS protocol as part of the filesystem itself (e.g., Gluster-NFS), but with limitations in protocol compliance and the versions supported. Hence, in 2007, a group of people from CEA, France, decided to develop a user-space NFS server which

  • supports both NFS versions 3 and 4
  • is protocol-compliant
  • can access various filesystems
  • can manage very large data and metadata caches
  • scales as much as possible
  • is free software
  • is portable to any Unix-like system.

This user-space NFS server is NFS-Ganesha, which is now widely deployed with many filesystems.

NFS-Ganesha now supports the NFS (v3, v4.0, v4.1 with pNFS) and 9P (from the Plan 9 operating system) protocols concurrently. It provides a FUSE-compatible File System Abstraction Layer (FSAL) that allows filesystem developers to plug in their own storage mechanism and access it from any NFS client.

With NFS-GANESHA, the NFS client talks to the NFS-GANESHA server instead, which already runs in the user address space. NFS-GANESHA can access FUSE filesystems directly through its FSAL without copying any data to or from the kernel, thus potentially improving response times. Of course, the network streams themselves (TCP/UDP) are still handled by the Linux kernel when using NFS-GANESHA.

GlusterFS, too, has recently been integrated with NFS-Ganesha to export the volumes created via GlusterFS using “libgfapi”. libgfapi is a new userspace library developed to access data in GlusterFS. It performs I/O on Gluster volumes directly, without a FUSE mount: it is a filesystem-like API that runs in the application's process context (NFS-Ganesha here) and eliminates FUSE and the kernel VFS layer from GlusterFS volume access. Thus, by integrating NFS-Ganesha and libgfapi, speed and latency have improved compared to FUSE-mount access.

In this post, I will guide you through the steps to set up NFS-Ganesha (V2.1 release) using GlusterFS as the backend filesystem.


i) Before setting up NFS-Ganesha, you need to create a GlusterFS volume. Please refer to the below document to set up and create GlusterFS volumes.
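If you do not yet have a volume, the creation can be sketched as follows; the node names (server1, server2) and brick paths are assumptions for illustration only:

```shell
# Create and start a 2-way replicated volume (names and paths are examples)
gluster volume create test_volume replica 2 \
    server1:/bricks/brick1/test_volume \
    server2:/bricks/brick1/test_volume
gluster volume start test_volume
gluster volume info test_volume   # verify the volume is in Started state
```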

ii) Disable the kernel-NFS and Gluster-NFS services on the system using the following commands:

  • service nfs stop
  • gluster vol set <volname> nfs.disable ON (Note: this command has to be repeated for all the volumes in the trusted-pool)
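Repeating the nfs.disable setting by hand for every volume is error-prone; a sketch that loops over all volumes in the trusted pool (volume names come from `gluster volume list`):

```shell
# Stop kernel NFS, then disable Gluster's built-in NFS on every volume
service nfs stop
for vol in $(gluster volume list); do
    gluster volume set "$vol" nfs.disable ON
done
```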

iii) Usually the shared-library (.so) files are installed in “/usr/lib” or “/usr/local/lib”, depending on whether you installed GlusterFS from RPMs or from sources. Verify that those .so files are also linked in “/usr/lib64” and “/usr/local/lib64”. If not, create links for those .so files in those directories.

iv) IPv6 should be enabled on the system. To enable IPv6 support, ensure that the line “options ipv6 disable=1” in /etc/modprobe.d/ipv6.conf is commented out or removed. This change requires a machine reboot.
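You can check the current IPv6 state before rebooting; a quick sketch using the paths mentioned above:

```shell
# No output here means the disabling option is absent or commented out
grep -i "ipv6 disable" /etc/modprobe.d/ipv6.conf
# A value of 0 here means IPv6 is currently enabled at runtime
cat /proc/sys/net/ipv6/conf/all/disable_ipv6
```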

Installing nfs-ganesha

i) using rpm install

* nfs-ganesha RPMs are available in Fedora 19 and later. To install nfs-ganesha, run

#yum install nfs-ganesha

* For CentOS or EL, RPMs are available at the below link –

Note: “ganesha.nfsd” will be installed in “/usr/bin”

ii) using sources

#cd /root
#git clone git://
#cd nfs-ganesha
#git submodule update --init
#git checkout -b next origin/next
(Note: origin/next is the current development branch. To go to a specific release, say V2.1, use the command "git checkout -b next V2.1")
#rm -rf ~/build; mkdir ~/build ; cd ~/build
#cmake -DUSE_FSAL_GLUSTER=ON -DCURSES_LIBRARY=/usr/lib64 -DCURSES_INCLUDE_PATH=/usr/include/ncurses -DCMAKE_BUILD_TYPE=Maintainer /root/nfs-ganesha/src
(For a debug build use -DDEBUG_SYMS=ON; for dynamic exports use -DUSE_DBUS=ON)
#make; make install

Note: cmake-2.8 or a higher version, libcap-devel, libnfsidmap, dbus-devel, doxygen and the ncurses* packages need to be installed prior to compiling the code. On Fedora, libjemalloc and libjemalloc-devel are also required.
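On an RPM-based system, the build dependencies above can be installed in one go; the exact package names (e.g. ncurses-devel for the ncurses* packages) may vary slightly by distribution:

```shell
# Install the build prerequisites listed above (Fedora adds the jemalloc pair)
yum install -y cmake libcap-devel libnfsidmap dbus-devel \
    doxygen ncurses-devel libjemalloc libjemalloc-devel
```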

When installed from sources, “ganesha.nfsd” will be copied to “/usr/local/bin”.

Exporting GlusterFS volume

Here I will provide details of how one can export GlusterFS volumes via NFS-Ganesha manually. A few CLI options and D-Bus commands are available to dynamically export/unexport volumes; I will explain their usage in another post.


To export any GlusterFS volume or directory, create an EXPORT block for each of those entries in a .conf file, for example export.conf. The following is the minimal set of parameters required to export any entry.

# cat export.conf 

EXPORT {
	Export_Id = 1 ;   # Export ID unique to each export
	Path = "volume_path";  # Path of the volume to be exported. Eg: "/test_volume"

	FSAL { 
		name = GLUSTER;
		hostname = "10.xx.xx.xx";  # IP of one of the nodes in the trusted pool
		volume = "volume_name";	 # Volume name. Eg: "test_volume"
	}

	Access_type = RW;	 # Access permissions
	Squash = No_root_squash; # To enable/disable root squashing
	Disable_ACL = TRUE;	 # To enable/disable ACL
	Pseudo = "pseudo_path";	 # NFSv4 pseudo path for this export. Eg: "/test_volume_pseudo"
	Protocols = "3", "4" ;	 # NFS protocols supported
	Transports = "UDP", "TCP" ; # Transport protocols supported
	SecType = "sys";	 # Security flavors supported
}
For more available parameters, please refer to “/root/nfs-ganesha/src/config_samples/export.txt”.

Note: This file is sufficient to export volumes with the default set of parameters. To start with, you can choose to rename this file to “nfs-ganesha.conf” and execute Step 4.


In addition to the above, to configure any global options, define the corresponding blocks containing them, along with their new values, in the “nfs-ganesha.conf” file. For example,

    # Lifetime for NFSv4 Leases
    Lease_Lifetime = 60 ;
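An option like this sits inside its corresponding block; a hedged sketch (the `NFSv4` block name is an assumption based on Ganesha sample configs and should be verified against config.txt):

```
NFSv4 {
	# Lifetime for NFSv4 Leases
	Lease_Lifetime = 60 ;
}
```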

A sample file containing various such options is available as a README file in the git repo.

Note: An exhaustive list of all such options can be found at “/root/nfs-ganesha/src/config_samples/config.txt”.


When there are many volumes/sub-directories to be exported, for better readability you can choose to define the EXPORT blocks for each of those entries in separate files and include them at the end of nfs-ganesha.conf, as shown below.

%include "export.conf"
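Putting this together, the top-level nfs-ganesha.conf can stay tiny; a sketch (file names are the examples used above):

```
# nfs-ganesha.conf
# Global option blocks, if any, go here

%include "export.conf"        # EXPORT block(s) for the first volume
%include "export_vol2.conf"   # hypothetical file for a second volume
```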


To start nfs-ganesha manually, execute the following command:

#ganesha.nfsd -f <location_of_nfs-ganesha.conf_file> -L <location_of_log_file> -N <log_level> -d

For example:

#ganesha.nfsd -f /root/build/nfs-ganesha.conf -L nfs-ganesha.log -N NIV_DEBUG -d


nfs-ganesha.log is the log file for the ganesha.nfsd process.

NIV_DEBUG is the log level.

The above four steps should get you started with NFS-Ganesha. After following them, verify that the volume is exported.

To check if nfs-ganesha has started, execute the following command:

#ps aux | grep ganesha

To check if the volume is exported, run

#showmount -e localhost
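Finally, the export can be mounted from an NFS client; the server IP and paths below are the example values from export.conf:

```shell
mkdir -p /mnt/gluster-nfs
# NFSv4 mounts use the Pseudo path from the EXPORT block...
mount -t nfs -o vers=4 10.xx.xx.xx:/test_volume_pseudo /mnt/gluster-nfs
# ...while NFSv3 mounts use the Path:
# mount -t nfs -o vers=3 10.xx.xx.xx:/test_volume /mnt/gluster-nfs
```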

Additional Notes:

To switch back to Gluster-NFS/kernel-NFS, kill the Ganesha daemon and start those services using the commands below –

#pkill ganesha
#service nfs start (for kernel-nfs)
#gluster vol set <volname> nfs.disable OFF

I hope this document helps you to configure NFS-Ganesha using GlusterFS. For any queries/troubleshooting, please leave a comment; I will be glad to help you out.



5 thoughts on “GlusterFS and NFS-Ganesha integration”

  1. What’s the performance impact of using NFS-Ganesha (which is pretty much fully compliant from start to finish…) versus using the GlusterNFS implementation?

  • Hi,
      I haven’t had a chance to compare it with Gluster-NFS on the performance front.
      But even though there may seem to be a hit for a few fops initially when running NFS-Ganesha, you would gradually see a performance boost because of the caching done by the Ganesha server.

      Moreover, with respect to GlusterFS at least, NFS-Ganesha is the future, with NFSv4.x protocols being developed and supported by a wider active community.

  2. Just thinking: is gluster going to support nfs-ganesha as a nfs 4.1 data server on each gluster server node, so that pNFS clients can leverage gluster’s parallel access?

