
nfs-ganesha's Introduction


NFS-Ganesha is an NFSv3, NFSv4, and NFSv4.1 file server that runs in user mode on most UNIX/Linux systems. It also supports the 9P2000.L protocol.

For more information, consult the project wiki.

CONTRIBUTING

Code contributions to Ganesha are managed by submission to GerritHub for review. We do not merge GitHub pull requests.

See src/CONTRIBUTING_HOWTO.txt for details.

BUILDING

See src/COMPILING_HOWTO.txt

nfs-ganesha's People

Contributors

achender, amitd, dang, eshelmarc, fatih-acar, ffilz, grajoria, itsdipit, jgwahlig, jtlayton, kalebskeithley, kinglongmee, kvaneesh, lieb, madhuthorat, mattbenjamin, matvore, mmakc, patlucas, paulsheer, phdeniel, rojingeorge, rongzeng, sderr, soumyakoduri, sswen, sukwoo, tfb-bull, thotz, tl-cea


nfs-ganesha's Issues

Ganesha v2.0 release: segfault in nfs3_Mkdir() and nfs3_Mknod()

There are a couple of paths through the nfs3_Mkdir() and nfs3_Mknod() functions which call cache_inode_setattr() with the "req_ctx" pointer and "is_open_write" boolean reversed in the argument list.

I don't know how this even compiles; there must be a missing prototype or a missing header include...

Here's a diff that fixes the argument ordering. It would be best to also fix the prototype includes and make the build fail for issues like this, but I don't have the expertise to provide that patch.

diff --git a/src/Protocols/NFS/nfs3_Mkdir.c b/src/Protocols/NFS/nfs3_Mkdir.c
index bd30a7d..ab4c536 100644
--- a/src/Protocols/NFS/nfs3_Mkdir.c
+++ b/src/Protocols/NFS/nfs3_Mkdir.c
@@ -169,7 +169,7 @@ int nfs_Mkdir(nfs_arg_t *arg, exportlist_t *export,
            || ((sattr.mask & ATTR_GROUP)
                && (req_ctx->creds->caller_gid != sattr.group))) {
                cache_status =
-                   cache_inode_setattr(dir_entry, &sattr, req_ctx, false);
+                   cache_inode_setattr(dir_entry, &sattr, false, req_ctx);

                if (cache_status != CACHE_INODE_SUCCESS)
                        goto out_fail;
diff --git a/src/Protocols/NFS/nfs3_Mknod.c b/src/Protocols/NFS/nfs3_Mknod.c
index 486ed18..4bb56f9 100644
--- a/src/Protocols/NFS/nfs3_Mknod.c
+++ b/src/Protocols/NFS/nfs3_Mknod.c
@@ -257,8 +257,8 @@ int nfs3_Mknod(nfs_arg_t *arg, exportlist_t *export,
                && (req_ctx->creds->caller_gid != sattr.group))) {
                cache_status = cache_inode_setattr(node_entry,
                                                   &sattr,
-                                                  req_ctx,
-                                                  false);
+                                                  false,
+                                                  req_ctx);

                if (cache_status != CACHE_INODE_SUCCESS)
                        goto out_fail;

Proxy Configuration Outdated/Not Working

Trying to follow the PROXY configuration page on the Wiki, and it isn't working:
https://github.com/nfs-ganesha/nfs-ganesha/wiki/PROXY

If I go through the included config.txt file and convert it to the "new" configuration style, things get better, but startup still fails with the following errors:

Sep  5 12:58:32 nfs-proxy nfs-ganesha[3289]: [main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Sep  5 12:58:32 nfs-proxy nfs-ganesha[3289]: [main] config_errs_to_log :CONFIG :CRIT :Config File (/etc/ganesha/ganesha.conf:49): Multiple addresses for 192.168.100.27
Sep  5 12:58:32 nfs-proxy nfs-ganesha[3289]: [main] config_errs_to_log :CONFIG :CRIT :Config File (/etc/ganesha/ganesha.conf:47): 1 (invalid param value) errors found block PROXY
Sep  5 12:58:32 nfs-proxy nfs-ganesha[3289]: [main] config_errs_to_log :CONFIG :CRIT :Config File (/etc/ganesha/ganesha.conf:41): Failed to initialize FSAL (PROXY)
Sep  5 12:58:32 nfs-proxy nfs-ganesha[3289]: [main] config_errs_to_log :CONFIG :CRIT :Config File (/etc/ganesha/ganesha.conf:41): 1 validation errors in block FSAL
Sep  5 12:58:32 nfs-proxy nfs-ganesha[3289]: [main] config_errs_to_log :CONFIG :CRIT :Config File (/etc/ganesha/ganesha.conf:41): Errors processing block (FSAL)
Sep  5 12:58:32 nfs-proxy nfs-ganesha[3289]: [main] config_errs_to_log :CONFIG :CRIT :Config File (/etc/ganesha/ganesha.conf:10): 1 validation errors in block EXPORT
Sep  5 12:58:32 nfs-proxy nfs-ganesha[3289]: [main] config_errs_to_log :CONFIG :CRIT :Config File (/etc/ganesha/ganesha.conf:10): Errors processing block (EXPORT)
Sep  5 12:58:32 nfs-proxy nfs-ganesha[3289]: [main] config_errs_to_log :CONFIG :CRIT :Config File (/etc/ganesha/ganesha.conf:47): 1 (fsal load, invalid param value, block validation) errors found block EXPORT

Not sure what I'm doing wrong here, but either the config info is still not quite right, or there's a bug in how the Srv_Addr parameter in the PROXY section is handled. Here's my config:

EXPORT_DEFAULTS
{
        SecType = sys;

        # Restrict all exports to NFS v4 unless otherwise specified
        Protocols = 4,3;
}


EXPORT
{
        Export_Id = 1;
        Path = /apps;
        Pseudo = /alpha/apps;
        Tag = apps;

        # Override the default set in EXPORT_DEFAULTS
        Protocols = 3,4;
        MaxRead = 65536;
        MaxWrite = 65536;
        PrefRead = 65536;
        PrefWrite = 65536;

        # All clients for which there is no CLIENT block that specifies a
        # different Access_Type will have RW access (this would be an unusual
        # specification in the real world since barring a firewall, this
        # export is world readable and writeable).
        Access_Type = RO;

        # FSAL block
        #
        # This is required to indicate which Ganesha File System Abstraction
        # Layer (FSAL) will be used for this export.
        #
        # The only option available for all FSALs is:
        #
        # Name (required)       The name of the FSAL
        #
        # Some FSALs have additional options, see individual FSAL documentation.

        FSAL
        {
                Name = PROXY;
        }
}

PROXY {
        Remote_Server {
                Srv_Addr = 192.168.100.27;
        }
}

crash during shutdown

While running ganesha (V2.0) under supervisord, it crashes when I execute "supervisorctl stop ganesha" with the following backtrace. Looks like the "node" argument to opr_rbtree_remove() is garbage.

This is with one connected NFSv3 client.

Cannot access memory at address 0xffeee9d318
(gdb) where
#0  0x000000012019033c in opr_rbtree_remove (head=0x12428b888, node=0x20) at /home/tad/src/ganesha/src/libntirpc/src/rbtree.c:415
#1  0x00000001201ab014 in svc_xprt_shutdown () at /home/tad/src/ganesha/src/libntirpc/src/svc_xprt.c:396
#2  0x000000012019e90c in svc_shutdown (flags=0) at /home/tad/src/ganesha/src/libntirpc/src/svc.c:954
#3  0x0000000120053944 in do_shutdown () at /home/tad/src/ganesha/src/MainNFSD/nfs_admin_thread.c:492
#4  0x0000000120053f20 in admin_thread (UnusedArg=0x0) at /home/tad/src/ganesha/src/MainNFSD/nfs_admin_thread.c:557
#5  0x000000ffeebd9c84 in ?? ()

Clean up log messages

Marc has a patch for log cleanup that was deferred because of Frank's work.
It may be appropriate to take this patchset now. TBD.

Ganesha V2.0: Export with missing path leads to segfault in mnt_Mnt()

With Ganesha V2.0, I have an export config that refers to a path that does not exist.

During initialization, the share config parsing spits out the error:

07/08/2014 10:33:42 : epoch 53e3b876 : oneblox-0062 : ganesha.nfsd-5219[main] nfs_export_get_root_entry :NFS STARTUP :CRIT :Lookup failed on path, ExportId=1 Path=/exports/Public FSAL_ERROR=(No such file or directory,2)

When mounting by tag (I haven't verified whether this also happens when mounting by path), there is an indirection through p_current_item->exp_root_cache_inode, which turns out to be a NULL pointer. This results in a Ganesha segfault.

Unfortunately we are not in a position to move to Ganesha V2.1 currently, so I can't verify if this is also a problem there.

The patch here fixes the Ganesha V2.0 crash:

diff --git a/src/Protocols/NFS/mnt_Mnt.c b/src/Protocols/NFS/mnt_Mnt.c
index a408de6..3cf75aa 100644
--- a/src/Protocols/NFS/mnt_Mnt.c
+++ b/src/Protocols/NFS/mnt_Mnt.c
@@ -116,6 +116,7 @@ int mnt_Mnt(nfs_arg_t *arg, exportlist_t *export,
        LogEvent(COMPONENT_NFSPROTO,
                 "MOUNT: Export entry for %s not found", arg->arg_mnt);

+ notfound:
        /* entry not found. */
        /* @todo : not MNT3ERR_NOENT => ok */
        switch (req->rq_vers) {
@@ -163,6 +164,13 @@ int mnt_Mnt(nfs_arg_t *arg, exportlist_t *export,
         * retrieve the associated NFS handle
         */
        if (!ispath || !strcmp(arg->arg_mnt, p_current_item->fullpath)) {
+               if (p_current_item->exp_root_cache_inode == NULL) {
+                       LogCrit(COMPONENT_NFSPROTO,
+                               "MOUNT: Export entry %s has no root cache inode (path missing?)",
+                                p_current_item->fullpath);
+                       goto notfound;
+               }
+       
                pfsal_handle = p_current_item->exp_root_cache_inode->obj_handle;
        } else {
                exp_hdl = p_current_item->export_hdl;

cmake options broken for proxy with HANDLE_MAPPING

CMake config options allow _HANDLE_MAPPING to be set ON or OFF.

FSAL_PROXY/CMakeLists.txt, however, checks for "ENABLE_HANDLE_MAPPING":

---snip---
if(ENABLE_HANDLE_MAPPING)
  SET(fsalproxy_LIB_SRCS
      ${fsalproxy_LIB_SRCS}
      handle_mapping/handle_mapping.c
      handle_mapping/handle_mapping_db.c
  )
endif(ENABLE_HANDLE_MAPPING)

Because the two names never match, the handle_mapping library is never built.

build error on CentOS 7.0

[ 92%] Building C object FSAL/FSAL_GLUSTER/CMakeFiles/fsalgluster.dir/export.c.o
/home/leol/GIT/nfs-ganesha/src/FSAL/FSAL_GLUSTER/export.c: In function ‘lookup_path’:
/home/leol/GIT/nfs-ganesha/src/FSAL/FSAL_GLUSTER/export.c:125:2: error: too few arguments to function ‘glfs_h_lookupat’
glhandle = glfs_h_lookupat(glfs_export->gl_fs, NULL, realpath, &sb);
^
In file included from /home/leol/GIT/nfs-ganesha/src/FSAL/FSAL_GLUSTER/gluster_internal.h:31:0,
from /home/leol/GIT/nfs-ganesha/src/FSAL/FSAL_GLUSTER/export.c:40:
/usr/include/glusterfs/api/glfs-handles.h:155:21: note: declared here
struct glfs_object *glfs_h_lookupat (struct glfs *fs,
^
make[2]: *** [FSAL/FSAL_GLUSTER/CMakeFiles/fsalgluster.dir/export.c.o] Error 1
make[1]: *** [FSAL/FSAL_GLUSTER/CMakeFiles/fsalgluster.dir/all] Error 2
make: *** [all] Error 2

has anyone seen this before?

../include/FSAL/FSAL_CEPH/fsal_types.h:65:5: error: unknown type name vinodeno_t

I cannot find the vinodeno_t data type defined anywhere.

I'm having difficulty compiling nfs-ganesha-1.5.0 with the CEPH option on Ubuntu 12.04.2:

./configure --with-fsal=CEPH

make

In file included from ../include/fsal_types.h:297:0,
from ../include/fsal.h:72,
from cache_inode_access.c:48:
../include/FSAL/FSAL_CEPH/fsal_types.h:65:5: error: unknown type name ‘vinodeno_t’
../include/FSAL/FSAL_CEPH/fsal_types.h:112:3: error: unknown type name ‘vinodeno_t’
../include/FSAL/FSAL_CEPH/fsal_types.h:120:3: error: unknown type name ‘Fh’
../include/FSAL/FSAL_CEPH/fsal_types.h:121:3: error: unknown type name ‘vinodeno_t’
make[1]: *** [cache_inode_access.lo] Error 1
make[1]: Leaving directory `/root/nfs-ganesha-1.5.0/Cache_inode'
make: *** [all-recursive] Error 1

question: why are you trying to refresh attrs of src file in cache_inode_rename?

        status_ref_dir_src = cache_inode_refresh_attrs_locked(dir_src);

        if (dir_src != dir_dest)
                status_ref_dir_dst =
                        cache_inode_refresh_attrs_locked(dir_dest);

        status_ref_src = cache_inode_refresh_attrs_locked(lookup_src);

        LogFullDebug(COMPONENT_CACHE_INODE, "done refreshing attributes");

        if (FSAL_IS_ERROR(fsal_status)) {
                status = cache_inode_error_convert(fsal_status);

                LogFullDebug(COMPONENT_CACHE_INODE,
                             "FSAL rename failed with %s",
                             cache_inode_err_str(status));

                goto out;
        }

        if (lookup_dst) {
                /* Force a refresh of the overwritten inode */
                status_ref_dst = cache_inode_refresh_attrs_locked(lookup_dst);
                if (status_ref_dst == CACHE_INODE_ESTALE)
                        status_ref_dst = CACHE_INODE_SUCCESS;
        }

I am looking at this code and understand that you are trying to update the attributes of the cache inodes after the FSAL rename. But how can you be sure that calling getattrs on the source file succeeds after the rename?

In particular, I think this line always fails, because the source name no longer exists in the FSAL backend:

status_ref_src = cache_inode_refresh_attrs_locked(lookup_src);

Or is the expectation that the FSAL rename method keeps the old entry?

Improve logging subsystem

Frank was working on logging subsystem changes. An earlier decision was to
postpone these to 2.1 as IBM was still testing in 1.5.

Stale file handle error

    i = ceph_ll_get_inode(export->cmount, *vi);
    if (!i)
        return ceph2fsal_error(-ESTALE);

The function ceph_ll_get_inode gets the Inode from the inode map, that is, Ceph's cache. However, the NFS client may then encounter a stale file handle error.

It would be better to call ceph_ll_lookup_inode before returning ESTALE.

question: is it possible to disable cache inode in nfs-ganesha?

Hi, all.

I am implementing an FSAL for a distributed filesystem. The upcall interface is very difficult to implement, so I am looking for a quicker way to achieve multi-head operation.

I found a setting to disable delegations (which also needs an upcall implementation, to recall the delegation), but I couldn't find a config option to disable the cache inode layer, which likewise needs an upcall implementation to invalidate stale data when another node changes it.

Which leads to my question: is it possible to disable the cache inode layer?

IP notation not accepting CIDR

For the Root_Access and RW_Access fields, CIDR notation (X.X.X.X/Y) cannot be used, but the bigger problem is that this notation is silently ignored, leading to hours of debugging...

We should first document that CIDR notation is not supported, and then warn or fail when such notation is encountered.

After that, it would be great to support this notation for every IP access field in the configuration.

nfsproxy handle mapping lib broken in ganesha pre-2.0-dev_33

Compilation of the src/FSAL/FSAL_PROXY/handle_mapping lib is broken:

[ 86%] Building C object FSAL/FSAL_PROXY/CMakeFiles/fsalproxy.dir/handle_mapping/handle_mapping.c.o
/home/ec2-user/nfs-ganesha-next/src/FSAL/FSAL_PROXY/handle_mapping/handle_mapping.c: In function ‘hash_digest_idx’:
/home/ec2-user/nfs-ganesha-next/src/FSAL/FSAL_PROXY/handle_mapping/handle_mapping.c:117: error: ‘hash_parameter_t’ has no member named ‘alphabet_length’
/home/ec2-user/nfs-ganesha-next/src/FSAL/FSAL_PROXY/handle_mapping/handle_mapping.c: In function ‘HandleMap_GetFH’:
/home/ec2-user/nfs-ganesha-next/src/FSAL/FSAL_PROXY/handle_mapping/handle_mapping.c:325: error: ‘p_in_nfs23_digest’ undeclared (first use in this function)
/home/ec2-user/nfs-ganesha-next/src/FSAL/FSAL_PROXY/handle_mapping/handle_mapping.c:325: error: (Each undeclared identifier is reported only once
/home/ec2-user/nfs-ganesha-next/src/FSAL/FSAL_PROXY/handle_mapping/handle_mapping.c:325: error: for each function it appears in.)
/home/ec2-user/nfs-ganesha-next/src/FSAL/FSAL_PROXY/handle_mapping/handle_mapping.c:335: error: ‘p_out_fsal_handle’ undeclared (first use in this function)
make[2]: *** [FSAL/FSAL_PROXY/CMakeFiles/fsalproxy.dir/handle_mapping/handle_mapping.c.o] Error 1
make[1]: *** [FSAL/FSAL_PROXY/CMakeFiles/fsalproxy.dir/all] Error 2
make: *** [all] Error 2

GPFS lock_op: conflicting_lock->lock_type and flock.l_type aren't the same type

       if (conflicting_lock != NULL) {
                if (lock_op == FSAL_OP_LOCKT
                    && glock_args.flock.l_type != F_UNLCK) {
                        conflicting_lock->lock_length = glock_args.flock.l_len;
                        conflicting_lock->lock_start = glock_args.flock.l_start;
                        conflicting_lock->lock_type = glock_args.flock.l_type;

Hi.

I suspect this is a bug. The final line in this code fragment from GPFSFSAL_lock_op assigns between different types, wrongly assuming that l_type uses the same numeric values for F_RDLCK, F_WRLCK and F_UNLCK on every architecture. That assumption breaks on architectures that actually use different numbers (e.g. Alpha: http://lxr.free-electrons.com/source/arch/alpha/include/uapi/asm/fcntl.h#L47).

I think you should explicitly convert l_type to the FSAL lock type and then assign.

Question about the error of compile source into debian package.

Hi all,
If this question shouldn't be asked here, please just ignore it.
I'm trying to compile the source into a deb package for Ubuntu 14.04 using "dpkg-buildpackage",
but I get the error "dpkg-shlibdeps: error: no dependency information found for /usr/lib/libntirpc.so.1.3".
Should I add LD_LIBRARY_PATH to the debian/rules file?
I'd appreciate any comments or suggestions.

Please find the attachment to check error log.

err.txt

missing nanoseconds in setattr atime and mtime

We're running Ganesha V2.0 and are having trouble with a proprietary NFS test suite. It is failing because setattr requests for atime and mtime drop the nanoseconds component of the time sent by the client.

The following patch fixes the problem for us, but I'm guessing there was some reason the client's nanoseconds were being overridden?

diff --git a/src/Protocols/NFS/nfs_proto_tools.c b/src/Protocols/NFS/nfs_proto_tools.c
index c0f63fb..6b4b7d0 100644
--- a/src/Protocols/NFS/nfs_proto_tools.c
+++ b/src/Protocols/NFS/nfs_proto_tools.c
@@ -3464,7 +3464,8 @@ bool nfs3_Sattr_To_FSALattr(struct attrlist *FSAL_attr, sattr3 *sattr)
                if (sattr->atime.set_it == SET_TO_CLIENT_TIME) {
                        FSAL_attr->atime.tv_sec =
                            sattr->atime.set_atime_u.atime.tv_sec;
-                       FSAL_attr->atime.tv_nsec = 0;
+                       FSAL_attr->atime.tv_nsec =
+                           sattr->atime.set_atime_u.atime.tv_nsec;
                        FSAL_attr->mask |= ATTR_ATIME;
                } else if (sattr->atime.set_it == SET_TO_SERVER_TIME) {
                        /* Use the server's current time */
@@ -3486,7 +3487,8 @@ bool nfs3_Sattr_To_FSALattr(struct attrlist *FSAL_attr, sattr3 *sattr)
                if (sattr->mtime.set_it == SET_TO_CLIENT_TIME) {
                        FSAL_attr->mtime.tv_sec =
                            sattr->mtime.set_mtime_u.mtime.tv_sec;
-                       FSAL_attr->mtime.tv_nsec = 0;
+                       FSAL_attr->mtime.tv_nsec =
+                           sattr->mtime.set_mtime_u.mtime.tv_nsec;
                        FSAL_attr->mask |= ATTR_MTIME;
                } else if (sattr->mtime.set_it == SET_TO_SERVER_TIME) {
                        /* Use the server's current time */

incorrect license header

According to its header, file src/log/log_functions.c is "under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either".

Please correct header to mention the particular version of the license -- probably "either version 3 of the License, or (at your option) any later version".

Crashes on Ubuntu 14.04

I just tried to use nfs-ganesha on my Ubuntu 14.04 system, but ganesha.nfsd crashes at startup. I tried both the current next and V2.1-stable branches with a rather minimalistic config file, but it just crashes without an error message.

Here is the stacktrace
$ sudo gdb --args ganesha.nfsd -f /usr/local/etc/ganesha.conf
GNU gdb (Ubuntu 7.7.1-0ubuntu5~14.04.2) 7.7.1
Copyright (C) 2014 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later http://gnu.org/licenses/gpl.html
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
http://www.gnu.org/software/gdb/bugs/.
Find the GDB manual and other documentation resources online at:
http://www.gnu.org/software/gdb/documentation/.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from ganesha.nfsd...(no debugging symbols found)...done.
(gdb) r
Starting program: /usr/local/bin/ganesha.nfsd -f /usr/local/etc/ganesha.conf
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7ffff57ff700 (LWP 7955)]

Program received signal SIGSEGV, Segmentation fault.
0x00000000004b3321 in init_export_cb ()
(gdb) bt
#0 0x00000000004b3321 in init_export_cb ()
#1 0x00000000004c271d in foreach_gsh_export ()
#2 0x00000000004321be in nfs_Init.isra.1 ()
#3 0x0000000000433591 in nfs_start ()
#4 0x000000000041c129 in main ()

(gdb) q

Here is the config file:
$ cat /usr/local/etc/ganesha.conf

# Export entries
EXPORT
{
        # Export Id (mandatory)
        Export_Id = 1 ;

        # Exported path (mandatory)
        Path = "/home/server/nfs";

        # Pseudo path for NFSv4 export (mandatory)
        Pseudo = "/home/server/nfs";

        Squash = No_Root_Squash;
}

Debug log when starting with the parameters -L /tmp/ganesha.log -N NIV_DEBUG
17/03/2015 00:10:05 : epoch 550762cd : j17186 : ganesha.nfsd-7941[main] main :MAIN :EVENT :ganesha.nfsd Starting: Version 2.1.1-pre, built at Mar 16 2015 19:57:51 on j17186
17/03/2015 00:10:05 : epoch 550762cd : j17186 : ganesha.nfsd-7941[main] load_config_from_parse :CONFIG :EVENT :Using defaults for LOG
17/03/2015 00:10:05 : epoch 550762cd : j17186 : ganesha.nfsd-7941[main] load_config_from_parse :CONFIG :EVENT :Using defaults for NFS_Core_Param
17/03/2015 00:10:05 : epoch 550762cd : j17186 : ganesha.nfsd-7941[main] load_config_from_parse :CONFIG :EVENT :Using defaults for NFS_IP_Name
17/03/2015 00:10:05 : epoch 550762cd : j17186 : ganesha.nfsd-7941[main] load_config_from_parse :CONFIG :EVENT :Using defaults for NFS_KRB5
17/03/2015 00:10:05 : epoch 550762cd : j17186 : ganesha.nfsd-7941[main] load_config_from_parse :CONFIG :EVENT :Using defaults for NFSv4
17/03/2015 00:10:05 : epoch 550762cd : j17186 : ganesha.nfsd-7941[main] load_config_from_parse :CONFIG :EVENT :Using defaults for CacheInode
17/03/2015 00:10:05 : epoch 550762cd : j17186 : ganesha.nfsd-7941[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
17/03/2015 00:10:05 : epoch 550762cd : j17186 : ganesha.nfsd-7941[main] init_server_pkgs :NFS STARTUP :INFO :Cache Inode library successfully initialized
17/03/2015 00:10:05 : epoch 550762cd : j17186 : ganesha.nfsd-7941[main] init_server_pkgs :NFS STARTUP :DEBUG :Now building IP/name cache
17/03/2015 00:10:05 : epoch 550762cd : j17186 : ganesha.nfsd-7941[main] init_server_pkgs :NFS STARTUP :INFO :IP/name cache successfully initialized
17/03/2015 00:10:05 : epoch 550762cd : j17186 : ganesha.nfsd-7941[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
17/03/2015 00:10:05 : epoch 550762cd : j17186 : ganesha.nfsd-7941[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
17/03/2015 00:10:05 : epoch 550762cd : j17186 : ganesha.nfsd-7941[main] load_config_from_parse :CONFIG :EVENT :Using defaults for EXPORT_DEFAULTS
17/03/2015 00:10:05 : epoch 550762cd : j17186 : ganesha.nfsd-7941[main] export_commit :CONFIG :EVENT :Export 1 created at pseudo (/home/server/nfs) with path (/home/server/nfs) and tag ((null)) perms ( , , , , , , anon_uid= -2, anon_gid= -2)
17/03/2015 00:10:05 : epoch 550762cd : j17186 : ganesha.nfsd-7941[main] pseudofs_create_export :FSAL :DEBUG :Created exp 0x7ffff58341a0 - /
17/03/2015 00:10:05 : epoch 550762cd : j17186 : ganesha.nfsd-7941[main] build_default_root :CONFIG :EVENT :Export 0 (/) successfully created
17/03/2015 00:10:05 : epoch 550762cd : j17186 : ganesha.nfsd-7941[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
17/03/2015 00:10:05 : epoch 550762cd : j17186 : ganesha.nfsd-7941[main] lower_my_caps :NFS STARTUP :EVENT :currenty set capabilities are: = cap_chown,cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_admin,cap_net_raw,cap_ipc_lock,cap_ipc_owner,cap_sys_module,cap_sys_rawio,cap_sys_chroot,cap_sys_ptrace,cap_sys_pacct,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_time,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap+ep
17/03/2015 00:10:05 : epoch 550762cd : j17186 : ganesha.nfsd-7941[main] cache_inode_lru_pkginit :INODE LRU :INFO :Attempting to increase soft limit from 1024 to hard limit of 4096
17/03/2015 00:10:05 : epoch 550762cd : j17186 : ganesha.nfsd-7941[main] cache_inode_lru_pkginit :INODE LRU :INFO :Setting the system-imposed limit on FDs to 4096.
17/03/2015 00:10:05 : epoch 550762cd : j17186 : ganesha.nfsd-7941[main] nfs_Init :NFS STARTUP :DEBUG :Now building NFSv4 ACL cache
17/03/2015 00:10:05 : epoch 550762cd : j17186 : ganesha.nfsd-7941[cache_lru] lru_run :INODE LRU :DEBUG :FD count is 0 and low water mark is 2048: not reaping.
17/03/2015 00:10:05 : epoch 550762cd : j17186 : ganesha.nfsd-7941[cache_lru] lru_run :INODE LRU :DEBUG :After work, open_fd_count:0 count:0 fdrate:1 threadwait=90

17/03/2015 00:10:05 : epoch 550762cd : j17186 : ganesha.nfsd-7941[main] nfs4_acls_init :NFS4 ACL :DEBUG :Initialize NFSv4 ACLs
17/03/2015 00:10:05 : epoch 550762cd : j17186 : ganesha.nfsd-7941[main] nfs4_acls_init :NFS4 ACL :DEBUG :sizeof(fsal_ace_t)=20, sizeof(fsal_acl_t)=80
17/03/2015 00:10:05 : epoch 550762cd : j17186 : ganesha.nfsd-7941[main] nfs4_acls_test :NFS4 ACL :DEBUG :acldata.aces = 0x7ffff5811180
17/03/2015 00:10:05 : epoch 550762cd : j17186 : ganesha.nfsd-7941[main] nfs4_acls_test :NFS4 ACL :DEBUG :acl = 0x7ffff58341f0, ref = 1, status = 0
17/03/2015 00:10:05 : epoch 550762cd : j17186 : ganesha.nfsd-7941[main] nfs4_acls_test :NFS4 ACL :DEBUG :acldata2.aces = 0x7ffff5811200
17/03/2015 00:10:05 : epoch 550762cd : j17186 : ganesha.nfsd-7941[main] nfs4_ace_free :NFS4 ACL :DEBUG :free ace 0x7ffff5811200
17/03/2015 00:10:05 : epoch 550762cd : j17186 : ganesha.nfsd-7941[main] nfs4_acl_entry_inc_ref :NFS4 ACL :DEBUG :(acl, ref) = (0x7ffff58341f0, 2)
17/03/2015 00:10:05 : epoch 550762cd : j17186 : ganesha.nfsd-7941[main] nfs4_acls_test :NFS4 ACL :DEBUG :re-access: acl = 0x7ffff58341f0, ref = 2, status = 2
17/03/2015 00:10:05 : epoch 550762cd : j17186 : ganesha.nfsd-7941[main] nfs4_acl_entry_dec_ref :NFS4 ACL :DEBUG :(acl, ref) = (0x7ffff58341f0, 1)
17/03/2015 00:10:05 : epoch 550762cd : j17186 : ganesha.nfsd-7941[main] nfs4_acls_test :NFS4 ACL :DEBUG :release: acl = 0x7ffff58341f0, ref = 1, status = 0
17/03/2015 00:10:05 : epoch 550762cd : j17186 : ganesha.nfsd-7941[main] nfs4_acl_release_entry :NFS4 ACL :DEBUG :Free ACL 0x7ffff58341f0
17/03/2015 00:10:05 : epoch 550762cd : j17186 : ganesha.nfsd-7941[main] nfs4_acl_entry_dec_ref :NFS4 ACL :DEBUG :(acl, ref) = (0x7ffff58341f0, 0)
17/03/2015 00:10:05 : epoch 550762cd : j17186 : ganesha.nfsd-7941[main] nfs4_ace_free :NFS4 ACL :DEBUG :free ace 0x7ffff5811180
17/03/2015 00:10:05 : epoch 550762cd : j17186 : ganesha.nfsd-7941[main] nfs_Init :NFS STARTUP :INFO :NFSv4 ACL cache successfully initialized
17/03/2015 00:10:05 : epoch 550762cd : j17186 : ganesha.nfsd-7941[main] init_export_root :EXPORT :DEBUG :About to lookup_path for ExportId=1 Path=/home/server/nfs

showmount -e reverses ip octets if they are specified as CIDR

Part of /etc/ganesha/ganesha.conf:

EXPORT
{
        Export_Id = 9;
        Path = "/space/common";
        Pseudo = "/common";
        FSAL {
                Name = XFS;
        }
        CLIENT
        {
                Clients = 192.168.33.56/29;
                Access_Type = RW;
        }
}

showmount -e output:

root@test:~# showmount -e stor
Export list for stor:
/space/common 56.33.168.192

But access checks are performed correctly. 192.168.33.60 box can RW access this export, and 56.33.168.192 can't.

This is nfs-ganesha from CentOS 7:

root@stor:/etc/ganesha# rpm -qi nfs-ganesha
Name        : nfs-ganesha
Version     : 2.3.1
Release     : 4.el7
Architecture: x86_64
Install Date: Mon 02 May 2016 05:15:05 PM MSK
Group       : Applications/System
Size        : 1631827
License     : LGPLv3+
Signature   : RSA/SHA256, Thu 07 Apr 2016 12:52:34 PM MSK, Key ID 6a2faea2352c64e5
Source RPM  : nfs-ganesha-2.3.1-4.el7.src.rpm
Build Date  : Wed 06 Apr 2016 01:46:58 PM MSK
Build Host  : buildvm-07.phx2.fedoraproject.org
Relocations : (not relocatable)
Packager    : Fedora Project
Vendor      : Fedora Project
URL         : https://github.com/nfs-ganesha/nfs-ganesha/wiki
Summary     : NFS Server running in user space
Description :
nfs-ganesha : NFS-GANESHA is a NFS Server running in user space.
It comes with various back-end modules (called FSALs) provided as
shared objects to support different file systems and name-spaces.

Return value of consume_ev_sig_nb() not handled ?

Hi,

In svc_rqst.c, a socket pair is created per event channel (for UDP, TCP, etc.), but I can't work out what this socket pair is actually used for.

A new event is signaled (via the ev_sig() function) whenever there are events for the socket epoll via the epoll_ctl() call; ev_sig() just writes data into the writer end of the socket pair. On the other end, consume_ev_sig_nb() is triggered whenever there are events on the socket pair's receiver end. But the caller doesn't process the value returned from consume_ev_sig_nb(); it simply ignores it:

(void)consume_ev_sig_nb(sr_rec->sv[1]);

So my question is: what is the need for this socket pair in svc_rqst.c, when the event/signal consumer doesn't perform any operation based on the event?

source-less PDF files (please convert to Markdown)

The following PDF files are generated from unknown sources which are missing from the source tree:

  • src/Docs/ganesha_logging.pdf
  • src/Docs/nfs-ganesha-adminguide.pdf

Since there are no graphics or images in those files I suggest converting 'em to markdown format.

The file src/Docs/nfs-ganesha-userguide.pdf was apparently generated from src/Docs/nfs-ganesha-userguide.rtf, but perhaps both of them could benefit from conversion to Markdown as well.

FYI pre-built source-less binaries are non-DFSG-compliant and therefore non-distributable in Debian...

Add better tracing diagnostics

There were also some "tracing" patches. One of the tasks for 2.1 was to look
into using LTTng. Small additions would be fine, but code that would be
deprecated by an LTTng effort is probably not appropriate.

nfs4_getfacl empty

Hello,

I'm trying to set up a simple NFSv4 server. I'm using the CentOS distribution and modified only /etc/ganesha/ganesha.conf, changing just the path.
/etc/idmap.conf is configured.

After starting the server, I can mount the share with NFS v4.

But when I try to get the ACLs with nfs4_getfacl, the result is empty.

The only error I see in the logs is "nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs"

Do you have any idea of a place to check, or a missing parameter, for NFSv4 ACL support?

non-free file(s) / please clarify license

src/FSAL/FSAL_PT/fsal_attrs.c contains

Copyright IBM Corp. 2012, 2012
All Rights Reserved

which indicates a non-free file: please remove it or clarify its licensing terms with IBM.
Thanks.

terminate called after throwing an instance of 'ceph::FailedAssertion'

After compiling and running Ganesha, following this blog post: http://blog.widodh.nl/2014/12/nfs-ganesha-with-libcephfs-on-ubuntu-14-04/

I used V2.3-stable and the following config:

# cat /usr/local/etc/ganesha.conf
EXPORT
{
    Export_ID = 1;
    Path = "/";
    Pseudo = "/";
    Access_Type = RW;
    NFS_Protocols = "4.1";
    Squash = No_Root_Squash;
    Transport_Protocols = "TCP", "UDP";
    SecType = "none";

    FSAL {
        Name = CEPH;
    }
}

When I mount the export from Linux, everything works fine. But when I try to mount this export from a VMware ESX host, ganesha.nfsd stops working:

<root@ceph-client-01-[LOC]:~# ganesha.nfsd -f /usr/local/etc/ganesha.conf -L /tmp/ganesha.log -N NIV_DEBUG -F
./include/xlist.h: In function 'xlist<T>::item::~item() [with T = Dentry*]' thread 7f22a47f0700 time 2016-04-15 14:21:47.887450
./include/xlist.h: 32: FAILED assert(!is_on_list())
 ceph version 9.2.1 (752b6a3020c3de74e07d2a8b4c5e48dab5a6b6fd)
 1: (()+0x225fab) [0x7f22ccb0efab]
 2: (Client::_ll_put(Inode*, int)+0x48d) [0x7f22cc9b5b4d]
 3: (Client::ll_forget(Inode*, int)+0x1d9) [0x7f22cc9b5ef9]
 4: (ceph_ll_put()+0xd) [0x7f22cc95c83d]
 5: (deconstruct_handle()+0x34) [0x7f22ceec4d6c]
 6: (()+0x3b6e) [0x7f22ceec2b6e]
 7: (mnt_Mnt()+0x570) [0x454ced]
 8: (nfs_rpc_execute()+0x2082) [0x443b0b]
 9: ganesha.nfsd() [0x44450e]
 10: ganesha.nfsd() [0x523d3b]
 11: (()+0x8182) [0x7f22d0183182]
 12: (clone()+0x6d) [0x7f22cfa5b47d]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
terminate called after throwing an instance of 'ceph::FailedAssertion'
Aborted
root@ceph-client-01-[LOC]:~# objdump -rdS /usr/bin/ganesha.nfsd > /tmp/ganesha.nfsd-dump

I attach the objdump to this ticket.

ganesha.nfsd-dump.txt

What did I do wrong, or did I hit a bug?

Unknown or unsettable key: Stats_File_Path (item NFS_Core_Param)

Many sample config files provided with nfs-ganesha refer to this config :

NFS_Core_Param
{
    Nb_Worker = 10;
    NFS_Port = 2049;
    Stats_File_Path = "/tmp/ganesha.stats";
    Stats_Update_Delay = 60;
}

But it seems variables Stats_File_Path and Stats_Update_Delay are not handled properly by nfs_read_core_conf :

[main] nfs_read_core_conf :CONFIG :CRIT :Unknown or unsettable key: Stats_File_Path (item NFS_Core_Param)
[main] nfs_set_param_from_conf :NFS STARTUP :CRIT :Error while parsing core configuration
[main] main :NFS STARTUP :FATAL :Error setting parameters from configuration file.

So either these variables can no longer be used (in which case this should appear in the documentation, and the provided samples should be updated), or the configuration parsing is buggy.

fsal_dirs.c:166: error: cannot convert to a pointer type

/libtirpc/tirpc/ -g -O2 -D_REENTRANT -Wall -Wimplicit -Wformat -Wmissing-braces -Wno-pointer-sign -I../../RPCAL/gssd -MT fsal_dirs.lo -MD -MP -MF .deps/fsal_dirs.Tpo -c fsal_dirs.c -fPIC -DPIC -o .libs/fsal_dirs.o
fsal_dirs.c: In function 'CEPHFSAL_readdir':
fsal_dirs.c:166: error: cannot convert to a pointer type
fsal_dirs.c:166: error: 'cephfsal_cookie_t' has no member named 'cookie'
fsal_dirs.c:167: error: cannot convert to a pointer type
fsal_dirs.c:167: error: 'cephfsal_cookie_t' has no member named 'cookie'
make[2]: *** [fsal_dirs.lo] Error 1
make[2]: Leaving directory `/root/nfs-ganesha-1.5.1/FSAL/FSAL_CEPH'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/root/nfs-ganesha-1.5.1/FSAL'

BSD-4-clause: src/os/freebsd/mntent_compat.c

File src/os/freebsd/mntent_compat.c is licensed under the "BSD-4-clause" (old BSD) license. Unfortunately, this GPL-incompatible license is universally deprecated.
It is not recognised as a DFSG-compatible license, hence this file cannot be included in Debian etc.

See more

Please consider removing this file, or re-licensing it under the "new" BSD (aka BSD-3-clause) license if all copyright holders agree to that.

XFS mode of VFS fsal does not handle symlinks properly in pre-2.0-dev_34

This regression has been around for a bit but we now have a root cause.

The XFS option in VFS is for Linux kernels earlier than 2.6.39. XFS on later kernels works just fine under VFS (not using the option).

The open_by_handle() libhandle function, and the syscall underneath it, do not handle opens on symlinks and return an EPERM error. This shows up whenever the server attempts to get the attributes (fstat) of a symlink. It is similar to the problem with sockets and special files, already handled with the "unopenable" union member. We have to keep the directory handle plus name around so we can do an fstatat on the symlink.

Not all files listed in Ganesha 2.1 and 2.2

I've run into a strange issue where not all files (or folders) are listed via ls, even though they are verified to exist on the exporting system. When this occurs, the missing file(s) can be listed directly by path, yet fail to appear in the output of find.

Renaming a file causes it to appear; however, renaming it back causes it to appear under both the old name and the new name.

This occurs with or without inode caching.

EXPORT
{
    # Export Id (mandatory, each EXPORT must have a unique Export_Id)
    Export_Id = 77;

    # Exported path (mandatory)
    Path = /mnt/storage/content;

    SecType = sys;
    Disable_ACL = TRUE;
    Squash = No_root_squash;
    Access_type = RW;

    # Pseudo Path (required for NFS v4)
    Pseudo = /content;

    # Exporting FSAL
    FSAL {
        Name = VFS;
        pnfs = true;
    }
}

CacheInode
{
    Directory_Expiration_Time = 0;
    Attr_Expiration_Time = 0;
    Symlink_Expiration_Time = 0;
}

VFS fsal needs a notify mechanism for exported filesystem changes

Ganesha needs to be notified when the underlying filesystem is changed by other services or local users. For example, if the system administrator creates or moves a directory within an export, ganesha will not see it until the cache entry for that directory is refreshed. Currently, that will only happen if the entry is evicted by the LRU due to inactivity. If it is an active directory, it may stay cached (stale) for a long time.

This task is to use kernel services (either fanotify or inotify) to watch cached directories and mark the entry as changed.

Better check if system supports file handle

When using Ganesha on a kernel without CONFIG_FHANDLE, the error in the log is not very understandable:
ganesha.nfsd-15726[main] vfs_create_export :FSAL :MAJ :vfs_fd_to_handle: root_path: /tmp, root_fd=4, errno=(38) Function not implemented

It would be great to have a better error message: maybe perform a specific test at launch?

CMake should not download

In 2.3~rc5 I've noticed that CMakeLists.txt may attempt to download libntirpc during build. This is wrong because it fails in clean build environments without internet access (e.g. on Debian build servers, etc.).

CMake should not download anything during the build. It is also a potential security problem.

SIGHUP does not update Exports

When the NFS-Ganesha server handles a SIGHUP, it should reload the configuration file, add new Exports, and remove Exports no longer found in the new configuration file. However, sending SIGHUP to NFS-Ganesha does not result in any changes to the Exports. I've tested this with version 2.2, but I believe the behavior is not fixed on the "next" branch either.

Looking at the source code, the signal handler in sigmgr_thread() handles a SIGHUP by calling the admin_replace_exports() function. In admin_replace_exports(), a command for the admin thread is issued, but it is the same as the current command, typically admin_none_pending. In the admin thread function admin_thread(), the while loop waits for new commands via the condition variable admin_control_cv, but the only command that leads to any action is admin_shutdown.

Thus, calling admin_replace_exports() results in no further action on the admin thread.

I also did not find any function that would actually reload the configuration file, re-parse the EXPORT blocks, and add/remove anything new. Does such a function exist and just need to be called from the admin thread?

Rewrite Pseudo filesystem as a fsal

Pseudo filesystem handling is a series of special case tests in V4 ops with calls to nfs4_pseudo. Move all the pseudo fs logic to a fsal. This eliminates all the special code scattered about in Protocols/NFS/nfs4_op_* by turning the pseudofs into the top level "export" with its own export id embedded in the handle.

Jeremy's work with pseudo fs persistent handles for 1.5 is merged into 2.0 via the fsal

Unable to start up ganesha.nfsd

Environment:
CentOS 6.4 x86_64, Ganesha V2.2 Stable.

Build command:
cmake -DUSE_9P=OFF -DCMAKE_BUILD_TYPE=Maintainer /root/nfs-ganesha/src/

When I launched ganesha.nfsd, an error occurred:

localhost nfs-ganesha[17545]: [main] nfs_Init_svc :DISP :FATAL :Cannot get udp netconfig, cannot get an entry for udp in netconfig file. Check file /etc/netconfig...

In CentOS, there is no such file as "/etc/netconfig".

How can I solve the problem?
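For reference, /etc/netconfig is normally shipped by the libtirpc package, so installing or reinstalling libtirpc should provide it. A typical copy looks like the following (check your distribution's libtirpc for the authoritative file; this is the stock layout, not something Ganesha-specific):

```
udp        tpi_clts      v     inet     udp     -       -
tcp        tpi_cots_ord  v     inet     tcp     -       -
udp6       tpi_clts      v     inet6    udp     -       -
tcp6       tpi_cots_ord  v     inet6    tcp     -       -
rawip      tpi_raw       -     inet     -       -       -
local      tpi_cots_ord  -     loopback -       -       -
unix       tpi_cots_ord  -     loopback -       -       -
```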

Unable to remount nfs 4.0 partition when using kerberos

When NFS 4.0 is used and a partition is unmounted and then mounted again on the same client, the following error occurs:

$ mount.nfs4 -o sec=krb5i,minorversion=0 example.com:/dir1/ /mnt/nfs
mount.nfs4: Operation not permitted

This error stops occurring after 60 seconds, or after Lease_Lifetime, if set.

The "Operation not permitted" is reported because the SETCLIENTID NFS call returns NFS4ERR_CLID_INUSE.

From src/Protocols/NFS/nfs4_op_setclientid.c, nfs4_op_setclientid function:

if (!nfs_compare_clientcred(&conf->cid_credential,
                            &data->credential) ...) {
    ... return NFS4ERR_CLID_INUSE ...
}

The nfs_compare_clientcred function(src/support/nfs_creds.c) compares old credential for this client with new credential:

bool nfs_compare_clientcred(nfs_client_cred_t *cred1,
                            nfs_client_cred_t *cred2)
{
    /* ... */
    switch (cred1->flavor) {
    case AUTH_UNIX:
        /* ... handling AUTH_UNIX ... */
    default:
        if (memcmp(&cred1->auth_union, &cred2->auth_union,
                   cred1->length))
            return false;
        break;
    }
}

In the case of RPCSEC_GSS, memcmp is used, which compares two structs of the form:
auth_gss = {svc, qop, gss_context_id}

The comparison of the two structs always fails, because gss_context_id is a handle that differs between sessions.

Because of this, if the same client unmounts and mounts the NFS partition within the lease time, access is denied. This will also occur in the case of connection problems.

From RFC 3530, page 67:

As a security measure, the server MUST NOT cancel a client's leased
   state if the principal established the state for a given id string is
   not the same as the principal issuing the SETCLIENTID.

(so it is possible to compare principals, not handles)

...

Note that if the id string in a SETCLIENTID request is properly
constructed, and if the client takes care to use the same principal
for each successive use of SETCLIENTID, then, barring an active
denial of service attack, NFS4ERR_CLID_INUSE should never be
returned.

However, client bugs, server bugs, or perhaps a deliberate change of
the principal owner of the id string (such as the case of a client
that changes security flavors, and under the new flavor, there is no
mapping to the previous owner) will in rare cases result in
NFS4ERR_CLID_INUSE.

Best,
Alexander Bersenev

Cache inode entry cache not handling entry delete properly; corrupting refcnt.

The cache inode entry's refcount gets corrupted, resulting in threads accessing the cache inode entry after it has already been deleted (the entry is deleted as soon as the refcount drops to 0).

The problem is in cih_remove_checked() in src/include/cache_inode_hash.h. Due to a race condition, that code can decrement the entry's refcount more than once, resulting in corruption. A probable fix, which works correctly, follows: the inavl flag is checked explicitly, under the partition lock, before the refcount is decremented, so the decrement happens only once in the lifetime of the cache inode entry.

static inline void
cih_remove_checked(cache_entry_t *entry)
{
    struct avltree_node *node;
    cih_partition_t *cp =
        cih_partition_of_scalar(&cih_fhcache, entry->fh_hk.key.hk);

    if (entry->fh_hk.inavl)
    {
        PTHREAD_RWLOCK_wrlock(&cp->lock);
        if (entry->fh_hk.inavl)
        {
            node = cih_fhcache_inline_lookup(&cp->t, &entry->fh_hk.node_k);
            if (node)
            {
                avltree_remove(node, &cp->t);
                cp->cache[cih_cache_offsetof(&cih_fhcache,
                                             entry->fh_hk.key.hk)] = NULL;
                entry->fh_hk.inavl = false;
                /* return sentinel ref */
                cache_inode_lru_unref(entry, LRU_FLAG_NONE);
            }
        }
        PTHREAD_RWLOCK_unlock(&cp->lock);
    }
}

Related stack trace:
#0 0x00000000004cb43e in cache_inode_lock_trust_attrs (entry=0x7fa70be5bc80, need_wr_lock=false) at /home/ovj/nfs-ganesha-hdvg/src/cache_inode/cache_inode_misc.c:899
#1 0x000000000047e47d in cache_entry_to_nfs3_Fattr (entry=0x7fa70be5bc80, Fattr=0x7fa6a0991f70) at /home/ovj/nfs-ganesha-hdvg/src/Protocols/NFS/nfs_proto_tools.c:3567
#2 0x0000000000479a4a in nfs_SetPostOpAttr (entry=0x7fa70be5bc80, attr=0x7fa6a0991f68) at /home/ovj/nfs-ganesha-hdvg/src/Protocols/NFS/nfs_proto_tools.c:77
#3 0x0000000000479c3c in nfs_SetWccData (before_attr=0x0, entry=0x7fa70be5bc80, wcc_data=0x7fa6a0991f48) at /home/ovj/nfs-ganesha-hdvg/src/Protocols/NFS/nfs_proto_tools.c:130
#4 0x0000000000456477 in nfs3_write (arg=0x7fa69047f740, worker=0x7fa70bc70180, req=0x7fa69047f688, res=0x7fa6a0991f40) at /home/ovj/nfs-ganesha-hdvg/src/Protocols/NFS/nfs3_write.c:269
#5 0x000000000044c919 in nfs_rpc_execute (req=0x7fa69204c280, worker_data=0x7fa70bc70180) at /home/ovj/nfs-ganesha-hdvg/src/MainNFSD/nfs_worker_thread.c:1257
#6 0x000000000044d2a5 in worker_run (ctx=0x7fa7157e2300) at /home/ovj/nfs-ganesha-hdvg/src/MainNFSD/nfs_worker_thread.c:1509

How to add and remove exports using dbus-send?

We are running nfs-ganesha as a Docker container. We would like to be able to add and remove exports without restarting the ganesha daemon. We would like to avoid ganeshactl since this is a PyQt GUI application. It increases the docker image size quite a bit among other things. It seems to me that it should be possible to send D-BUS messages via dbus-send command to add and remove exports. Can you show me how to do that?

Thanks.
