209.2 NFS Server Configuration

Weight: 3
Description: Candidates should be able to export filesystems using NFS. This objective includes access restrictions, mounting an NFS filesystem on a client and securing NFS.
Key Knowledge Areas:
    NFS version 3 configuration files
    NFS tools and utilities
    Access restrictions to certain hosts and/or subnets
    Mount options on server and client
    TCP Wrappers
    Awareness of NFSv4
Terms and Utilities:
    /etc/exports
    exportfs
    showmount
    nfsstat
    /proc/mounts
    /etc/fstab
    rpcinfo
    mountd
    portmapper

What is NFS?

NFS (Network File System) was developed by Sun Microsystems in the 1980s for sharing files and directories between Linux/Unix systems. It lets a server export its local file systems over the network so that remote hosts can mount them and work with them as if they were local. With the help of NFS, we can set up file sharing between Unix and Linux systems and vice versa.

NFS Advantages

    NFS allows local access to remote files.
    It uses a standard client/server architecture for file sharing between Unix-based machines.
    With NFS it is not necessary that both machines run the same operating system.
    With the help of NFS we can configure centralized storage solutions.
    Users get their data without needing to know its physical location.
    No manual refresh is needed to see new files.
    Newer versions of NFS also support ACLs and pseudo-root mounts.
    Can be secured with firewalls and Kerberos.

NFS Versions

Currently, there are three major versions of NFS. NFS version 2 (NFSv2) is the oldest and is widely supported. NFS version 3 (NFSv3) has more features but also some security issues and disadvantages. NFS version 4 (NFSv4) is the latest, full-featured and most secure version of NFS.
    NFS v2 – March 1989
    NFS v3 – June 1995
    NFS v4 – December 2000
    NFS v4.1 – January 2010
For the LPIC-2 exam we are required to work with NFS version 3, but we also need to know a little about NFS version 4 and the differences between the two.

NFS v3

The NFS version 3 server service includes three facilities:
    nfs: translates remote file-sharing requests into requests on the local file system.
    portmap: maps calls made from other machines to the correct RPC service; on modern distributions it is implemented by rpcbind (not required with NFSv4).
    rpcbind: redirects the client to the proper port number so it can communicate with the requested service (not required with NFSv4).

NFS v3 Disadvantage

Probably the greatest disadvantage is security. Because NFSv3 is based on RPC (remote procedure calls), it is inherently insecure and should only be used on a trusted network behind a firewall. This is not to say that steps can't be taken to secure NFS, but it will still not be ready for the wilds of the open Internet.

NFS v4

    In NFSv4 there is no more rpcbind/portmapper.
    NFSv3 needs the nfslock service, which starts the appropriate RPC processes to allow NFS clients to lock files on the server; NFSv4 has a native file-locking mechanism.
    In NFSv3 the rpc.mountd service is responsible for mounting and unmounting file systems; in NFSv4 there is no rpc.mountd.
    While NFSv3 works with both TCP and UDP ports, NFSv4 works only with TCP (a quick way to check the negotiated version is shown below).
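A quick way to see which version a client actually negotiated for a mounted share is to look at the mount options; a minimal sketch (run on an NFS client that already has a share mounted):

nfsstat -m             # lists each NFS mount with its options, including vers=3 or vers=4.x
mount | grep ':/'      # the vers= option also shows up in the mount output for NFS shares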

Installing NFSv3

Let's install the NFS server (v3) on a CentOS system:

[root@centos7-1 ~]# yum nfs-utils rpcbin
Loaded plugins: fastestmirror, langpacks
No such command: nfs-utils. Please use /bin/yum --help
[root@centos7-1 ~]# yum install nfs-utils rpcbind
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.gigenet.com
 * epel: mirror.clarkson.edu
 * extras: mirror.cs.vt.edu
 * updates: mirrors.sorengard.com
Package 1:nfs-utils-1.3.0-0.54.el7.x86_64 already installed and latest version
Package rpcbind-0.2.0-44.el7.x86_64 already installed and latest version
Nothing to do

Important NFS Configuration Files:

/etc/exports : The main configuration file of NFS; all exported files and directories are defined in this file on the NFS server.
/etc/fstab : To mount an NFS directory persistently across reboots, we need to make an entry in /etc/fstab on the client.
/etc/sysconfig/nfs : Configuration file of NFS used to control which ports rpc and other services listen on (an example appears below).
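As an example of the last file, pinning rpc.mountd to a fixed port (handy later when writing firewall rules) is done in /etc/sysconfig/nfs. The variable names below are the ones CentOS 7 ships; treat them as an assumption and check the comments in your own copy of the file:

# /etc/sysconfig/nfs (CentOS 7 style; variable names differ between distributions)
RPCMOUNTDOPTS="-p 20048"   # extra options for rpc.mountd: always listen on port 20048
RPCNFSDCOUNT=8             # number of nfsd kernel threads to start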
We will spend more time on them during the course. Now let's create something to share:

[root@centos7-1 ~]# mkdir /nfsshare
[root@centos7-1 ~]# chmod 777 /nfsshare

We will discuss NFS security later in this course, but for now keep it simple and grant 777 permissions. (Also, on Red Hat based distributions, do not forget to turn off SELinux with the setenforce 0 command.)
Next we have to export this file system to one or more clients.

/etc/exports

This is the main configuration file of NFS; all exported files and directories are defined in this file on the NFS server. A line for an exported file system has the following structure:

<export> <host1>(<options>) <hostN>(<options>)...

This file might already be populated with some examples and explanations, or it might not exist at all and have to be created, as here on CentOS:

[root@centos7-1 ~]# cat /etc/exports
/nfsshare centos7-2(rw) (ro)
In this example we have shared the /nfsshare directory with the centos7-2 client with read and write permissions. We have also let everybody else have read-only access. It is possible to use IP addresses and wildcards instead of host names. The following methods can be used to specify host names (an example /etc/exports using each form follows the list):
    single host — where one particular host is specified with a fully qualified domain name, a hostname, or an IP address.
    wildcards — where a * or ? character is used to match a group of fully qualified domain names that share a particular string of letters. Wildcards should not be used with IP addresses; however, it is possible for them to work accidentally if reverse DNS lookups fail.
    IP networks — allows the matching of hosts based on their IP addresses within a larger network. For example, 192.168.0.0/28 allows the first 16 IP addresses, from 192.168.0.0 to 192.168.0.15, to access the exported file system, but not 192.168.0.16 and higher.
    netgroups — this option is pretty old but not bad to know. It permits an NIS netgroup name, written as @<group-name>, to be used. This effectively puts the NIS server in charge of access control for this exported file system, where users can be added and removed from an NIS group without affecting /etc/exports.
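As a sketch, an /etc/exports using each of these forms could look like the following (the extra paths and the netgroup name are made up for illustration):

/nfsshare  centos7-2(rw,sync)            # single host
/srv/pub   *.example.com(ro)             # wildcard: any host under example.com
/srv/data  192.168.10.0/24(rw,sync)      # IP network
/srv/nis   @trusted-hosts(rw)            # NIS netgroup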
Some other options we can use in the /etc/exports file are as follows:
    ro: with this option we provide read-only access to the shared files, i.e. the client will only be able to read.
    rw: this option allows the client to both read and write within the shared directory.
    sync: synchronize writes to the shared file system to disk immediately; this way we reduce the chance of file system corruption.
    no_subtree_check: this option disables subtree checking. When a shared directory is a subdirectory of a larger file system, NFS scans every directory above it in order to verify its permissions and details. Disabling the subtree check may increase the reliability of NFS, but it reduces security.
    no_root_squash: this option allows root on the client to access the exported directory as root (it needs more explanation, see below).

Mapping user IDs and group IDs: what does NFS do?

The way NFS permissions work is that subdirectories within the share, as well as files created by users, carry the user ID (uid) and group ID (gid) of the user on the remote system that created them. The problem is that those user IDs and group IDs can be different on different systems, which from a security perspective sounds like trouble. Why? Let's imagine that in a client/server environment we have a user called user1 with uid 1101 on the server, and a user called user3 on the remote (client) system with the same uid 1101.
Any files created by user1 locally on the server (or from any other system) are made with permissions for uid 1101. Now imagine that user3, with the same uid 1101, is permitted to connect to the NFS server. user3 is a different person on a different computer, but because he or she has uid 1101 they get exactly the same permissions as user1.
So we have a problem synchronizing users' permissions, because they are handled by user ID and group ID. We need to map uids and gids between client and server for the users who access the share. This can be done by choosing the right uid and gid when creating the users; there are also other solutions, like OpenLDAP, to keep accounts synchronized automatically.
The option we are talking about, however, is different from the root user's perspective: root has the same uid on both client and server. So let's explain it again:
    root_squash: do not map the client's root account to the server's root account; this prevents a remote root user from having the same access to the file system as the server's root user.
    no_root_squash: do map the client's root account to the server's root account. Which one to use depends on you and the security policy of your organization (a root_squash sketch follows).
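As a sketch, a more conservative export than the one we are about to configure would keep the default root_squash, so that files created by a remote root user show up as owned by the anonymous account (nfsnobody on CentOS 7):

/nfsshare  192.168.10.0/24(rw,sync,root_squash)   # remote root gets mapped to nfsnobody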
Now that we know more options, let's complete our configuration:

[root@centos7-1 ~]# cat /etc/exports
/nfsshare centos7-2(rw,no_root_squash,sync) (ro)
Now we need to start a couple of processes: the rpcbind process and the NFS server. One supports the other, and we need both of them for the NFS server to work correctly.
[root@centos7-1 ~]# systemctl start rpcbind
[root@centos7-1 ~]# systemctl status rpcbind
● rpcbind.service - RPC bind service
   Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; disabled; vendor preset: enabled)
   Active: active (running) since Wed 2018-07-04 04:28:05 EDT; 3 days ago
  Process: 24901 ExecStart=/sbin/rpcbind -w $RPCBIND_ARGS (code=exited, status=0/SUCCESS)
 Main PID: 24902 (rpcbind)
    Tasks: 1
   Memory: 660.0K
   CGroup: /system.slice/rpcbind.service
           └─24902 /sbin/rpcbind -w

Jul 04 04:28:05 centos7-1 systemd[1]: Starting RPC bind service...
Jul 04 04:28:05 centos7-1 systemd[1]: Started RPC bind service.

[root@centos7-1 ~]# systemctl start nfs.service
[root@centos7-1 ~]# systemctl status nfs.service
● nfs-server.service - NFS server and services
   Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled; vendor preset: disabled)
   Active: active (exited) since Sun 2018-07-08 03:04:40 EDT; 43s ago
  Process: 29178 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
  Process: 29175 ExecStartPre=/bin/sh -c /bin/kill -HUP `cat /run/gssproxy.pid` (code=exited, status=0/SUCCESS)
  Process: 29172 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
 Main PID: 29178 (code=exited, status=0/SUCCESS)
    Tasks: 0
   Memory: 0B
   CGroup: /system.slice/nfs-server.service

Jul 08 03:04:40 centos7-1 systemd[1]: Starting NFS server and services...
Jul 08 03:04:40 centos7-1 exportfs[29172]: exportfs: No host name given with /nfss...ng
Jul 08 03:04:40 centos7-1 systemd[1]: Started NFS server and services.
Hint: Some lines were ellipsized, use -l to show in full.
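We started both services by hand; to have them come up automatically, and in the right order, after a reboot, we would also enable them. The unit names are the ones shown in the status output above:

systemctl enable rpcbind
systemctl enable nfs-server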
The nfsd process is the primary process that handles clients. But because RPC-based services rely on rpcbind to make all connections with incoming client requests, rpcbind must be available before any of these services start (1: rpcbind --> 2: nfs). Let's check the RPC services and take a closer look at them:
[root@centos7-1 ~]# ps ax | grep rpc
  699 ?     S<   0:00 [rpciod]
29162 ?     Ss   0:00 /usr/sbin/rpc.statd
29450 ?     Ss   0:00 /sbin/rpcbind -w
29451 ?     Ss   0:00 /usr/sbin/rpc.mountd
29452 ?     Ss   0:00 /usr/sbin/rpc.idmapd
29623 pts/0 S+   0:00 grep --color=auto rpc
Depending on the Linux distribution we are using, we might see some additional RPC services. The following RPC processes facilitate NFS services; for those who like details:
rpc.mountd — This process receives mount requests from NFS clients and verifies that the requested file system is currently exported. It is started automatically by the nfs service and does not require user configuration. It is not used with NFSv4.

rpc.nfsd — Allows explicit NFS versions and protocols the server advertises to be defined. It works with the Linux kernel to meet the dynamic demands of NFS clients, such as providing server threads each time an NFS client connects. This process corresponds to the nfs service.

rpc.lockd — Allows NFS clients to lock files on the server. If rpc.lockd is not started, file locking will fail. rpc.lockd implements the Network Lock Manager (NLM) protocol. This process corresponds to the nfslock service. It is not used with NFSv4.

rpc.statd — Implements the Network Status Monitor (NSM) RPC protocol, which notifies NFS clients when an NFS server is restarted without being gracefully brought down. It is started automatically by the nfslock service and does not require user configuration. It is not used with NFSv4.

rpc.rquotad — Provides user quota information for remote users. It is started automatically by the nfs service and does not require user configuration.

rpc.idmapd — Provides NFSv4 client and server upcalls, which map between on-the-wire NFSv4 names (strings in the form of user@domain) and local UIDs and GIDs. For idmapd to function with NFSv4, /etc/idmapd.conf must be configured. This service is required for use with NFSv4.
Among them, rpc.idmapd is for NFSv4 (and hybrid NFSv3/NFSv4 systems), which is why we see it here.
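For NFSv4 the part of /etc/idmapd.conf that usually matters is the Domain setting, which has to be identical on client and server for names to map back to the same UIDs and GIDs. A minimal sketch, with an example domain:

# /etc/idmapd.conf
[General]
Domain = example.com         # must match on NFSv4 clients and servers

[Mapping]
Nobody-User = nfsnobody      # account used for names that cannot be mapped (distribution default may differ)
Nobody-Group = nfsnobody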

rpcinfo

rpcbind provides coordination between RPC services and the port numbers used to communicate with them, so it is useful to view the status of the current RPC services when troubleshooting. The rpcinfo command shows each RPC-based service with port numbers, an RPC program number, a version number, and an IP protocol type (TCP or UDP).
rpcinfo makes an RPC call to an RPC server and reports what it finds. It has lots of options, but for now let's just look at the rpc* family of tools and then ask the portmapper what has registered with it:
[root@centos7-1 ~]# rpc<Tab><Tab>
rpcbind     rpcdebug    rpc.gssd     rpcinfo      rpc.nfsd     rpc.statd
rpcclient   rpcgen      rpc.idmapd   rpc.mountd   rpc.rquotad

[root@centos7-1 ~]# rpcinfo -?
rpcinfo: invalid option -- '?'
Usage: rpcinfo [-m | -s] [host]
       rpcinfo -p [host]
       rpcinfo -T netid host prognum [versnum]
       rpcinfo -l host prognum versnum
       rpcinfo [-n portnum] -u | -t host prognum [versnum]
       rpcinfo -a serv_address -T netid prognum [version]
       rpcinfo -b prognum versnum
       rpcinfo -d [-T netid] prognum versnum

[root@centos7-1 ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  39073  status
    100024    1   tcp  53933  status
    100005    1   udp  20048  mountd
    100005    1   tcp  20048  mountd
    100005    2   udp  20048  mountd
    100005    2   tcp  20048  mountd
    100005    3   udp  20048  mountd
    100005    3   tcp  20048  mountd
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    3   tcp   2049  nfs_acl
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    3   udp   2049  nfs_acl
    100021    1   udp  58591  nlockmgr
    100021    3   udp  58591  nlockmgr
    100021    4   udp  58591  nlockmgr
    100021    1   tcp  37458  nlockmgr
    100021    3   tcp  37458  nlockmgr
    100021    4   tcp  37458  nlockmgr
If one of the NFS services does not start up correctly, rpcbind will be unable to map RPC requests from clients for that service to the correct port. In many cases, if NFS is not present in the rpcinfo output, restarting NFS causes the service to register correctly with rpcbind and begin working. For more information and a list of options for rpcinfo, refer to its man page.
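So a typical troubleshooting round on this server would be to restart the services and confirm that the registrations are back:

systemctl restart rpcbind nfs-server
rpcinfo -p | grep -E 'nfs|mountd'    # nfs and mountd should appear in the list again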

TCP Wrappers

The rpcbind service uses TCP wrappers for access control, and access control rules for rpcbind affect all RPC-based services. Access control rules are specified in the /etc/hosts.allow and /etc/hosts.deny files.
By default both hosts.allow and hosts.deny contain nothing but comments, so access is granted to any client.
[root@centos7-1 ~]# cat /etc/hosts.allow
#
# hosts.allow   This file contains access rules which are used to
#               allow or deny connections to network services that
#               either use the tcp_wrappers library or that have been
#               started through a tcp_wrappers-enabled xinetd.
#
# See 'man 5 hosts_options' and 'man 5 hosts_access'
# for information on rule syntax.
# See 'man tcpd' for information on tcp_wrappers
#

[root@centos7-1 ~]# cat /etc/hosts.deny
#
# hosts.deny    This file contains access rules which are used to
#               deny connections to network services that either use
#               the tcp_wrappers library or that have been
#               started through a tcp_wrappers-enabled xinetd.
#
# The rules in this file can also be set up in
# /etc/hosts.allow with a 'deny' option instead.
#
# See 'man 5 hosts_options' and 'man 5 hosts_access'
# for information on rule syntax.
# See 'man tcpd' for information on tcp_wrappers
#
Keep in mind that hosts.allow overrides hosts.deny. For example, if we want to allow only a specific network range to access and use the portmapper, we specify portmap: 192.168.10.0/24 in /etc/hosts.allow and then define portmap: ALL in /etc/hosts.deny. Do not forget to restart the rpcbind and nfs services for the changes to take effect.
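Written out in the files themselves, that example would look roughly like this. One assumption to note: on CentOS 7 the daemon name is rpcbind, while older setups (and the wording above) use portmap; mountd can be restricted the same way:

# /etc/hosts.allow
rpcbind: 192.168.10.0/24
mountd:  192.168.10.0/24

# /etc/hosts.deny
rpcbind: ALL
mountd:  ALL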

exportfs

This is a utility that shows which shares are currently exported:

[root@centos7-1 ~]# exportfs
/nfsshare       centos7-1.example.com
/nfsshare       <world>
Some useful switches are:

exportfs examples    Description
exportfs -v          Displays the list of shared file systems and their options on the server
exportfs -a          Exports all shares listed in /etc/exports, or a given name
exportfs -u          Unexports a given share (with -a, all shares listed in /etc/exports)
exportfs -r          Refreshes the server's export list after modifying /etc/exports
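In practice the switch you will use most often is -r, usually combined with -a and -v: after editing /etc/exports you re-export everything and check the result without restarting the NFS service:

exportfs -rav    # re-export everything listed in /etc/exports, verbosely
exportfs -v      # show the current export table together with its options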

nfsstat

nfsstat lets us display statistics about NFS client and server activity:
[root@centos7-1 ~]# nfsstat
Server rpc stats:
calls      badcalls   badclnt    badauth    xdrcall
0          0          0          0          0

[root@centos7-1 ~]# nfsstat --help
Usage: nfsstat [OPTION]...

  -m, --mounts          Show statistics on mounted NFS filesystems
  -c, --client          Show NFS client statistics
  -s, --server          Show NFS server statistics
  -2                    Show NFS version 2 statistics
  -3                    Show NFS version 3 statistics
  -4                    Show NFS version 4 statistics
  -o [facility]         Show statistics on particular facilities.
     nfs                NFS protocol information
     rpc                General RPC information
     net                Network layer statistics
     fh                 Usage information on the server's file handle cache
     rc                 Usage information on the server's request reply cache
     all                Select all of the above
  -v, --verbose, --all  Same as '-o all'
  -r, --rpc             Show RPC statistics
  -n, --nfs             Show NFS statistics
  -Z[#], --sleep[=#]    Collects stats until interrupted.
                        Cumulative stats are then printed
                        If # is provided, stats will be output every
                        # seconds.
  -S, --since file      Shows difference between current stats and those in 'file'
  -l, --list            Prints stats in list format
      --version         Show program version
      --help            What you just did

[root@centos7-1 ~]# nfsstat -m

NFS Client Configuration

Let's configure the NFS client. First we need to install some utilities in order to mount an NFS file system (CentOS 7):

[root@centos7-2 ~]# yum install nfs-utils nfs-utils-lib rpcbind
[root@centos7-2 ~]# systemctl start rpcbind
[root@centos7-2 ~]# systemctl start nfs
Now we create a mount point under the /mnt directory before mounting the NFS file system:

[root@centos7-2 ~]# cd /mnt/
[root@centos7-2 mnt]# mkdir nfsmounthere
[root@centos7-2 mnt]# ls -l
total 0
drwxr-xr-x. 2 root root 6 Jul 10 02:39 nfsmounthere
And to demonstrate the discussion we had about user ID and group ID mapping, we create two users on the client:

[root@centos7-2 mnt]# useradd -u 1101 -m nfsuser1
[root@centos7-2 mnt]# useradd -u 1102 -m nfsuser2

[root@centos7-2 mnt]# cat /etc/passwd | grep nfsuser
nfsuser1:x:1101:1101::/home/nfsuser1:/bin/bash
nfsuser2:x:1102:1102::/home/nfsuser2:/bin/bash

[root@centos7-2 mnt]# passwd nfsuser1
Changing password for user nfsuser1.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

[root@centos7-2 mnt]# passwd nfsuser2
Changing password for user nfsuser2.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

[root@centos7-2 mnt]# getent passwd nfsuser1
nfsuser1:x:1101:1101::/home/nfsuser1:/bin/bash
[root@centos7-2 mnt]# getent passwd nfsuser2
nfsuser2:x:1102:1102::/home/nfsuser2:/bin/bash
We can use the showmount command to find out what has been shared on the server:

$ showmount --help
Usage: showmount [-adehv]
       [--all] [--directories] [--exports]
       [--no-headers] [--help] [--version] [host]
showmount command examples              Description
showmount -e                            Shows the available shares on your local machine
showmount -e <server-ip or hostname>    Lists the available shares on the remote server
showmount -d                            Lists only the directories mounted by some client
[root@centos7-2 ~]# showmount -e 192.168.10.133
clnt_create: RPC: Port mapper failure - Unable to receive: errno 113 (No route to host)
Oops. Let's check the remote RPC service with the rpcinfo command:
[root@centos7-2 ~]# rpcinfo -p 192.168.10.133
rpcinfo: can't contact portmapper: RPC: Remote system error - No route to host
Oh no, we forgot to open the required NFS ports on the server firewall, so on the server:
[root@centos7-1 ~]# firewall-cmd --permanent --add-service=rpc-bind
success
[root@centos7-1 ~]# firewall-cmd --permanent --add-service=mountd
success
[root@centos7-1 ~]# firewall-cmd --permanent --add-port=2049/tcp
success
[root@centos7-1 ~]# firewall-cmd --permanent --add-port=2049/udp
success
[root@centos7-1 ~]# firewall-cmd --reload
success
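Instead of opening port 2049 by hand, recent firewalld releases also ship an nfs service definition, so the same result can be achieved with the commands below (verify the service name on your system with firewall-cmd --get-services):

firewall-cmd --permanent --add-service=nfs
firewall-cmd --reload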
Okay, let's check it again from the client:
[root@centos7-2 ~]# rpcinfo -p 192.168.10.133
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  51943  status
    100024    1   tcp  34420  status
    100005    1   udp  20048  mountd
    100005    1   tcp  20048  mountd
    100005    2   udp  20048  mountd
    100005    2   tcp  20048  mountd
    100005    3   udp  20048  mountd
    100005    3   tcp  20048  mountd
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    3   tcp   2049  nfs_acl
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    3   udp   2049  nfs_acl
    100021    1   udp  39146  nlockmgr
    100021    3   udp  39146  nlockmgr
    100021    4   udp  39146  nlockmgr
    100021    1   tcp  46184  nlockmgr
    100021    3   tcp  46184  nlockmgr
    100021    4   tcp  46184  nlockmgr

[root@centos7-2 ~]# showmount -e 192.168.10.133
Export list for 192.168.10.133:
/nfsshare (everyone)
And now it is mounting time:
[root@centos7-2 ~]# mount 192.168.10.133:/nfsshare /mnt/nfsmounthere/

[root@centos7-2 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos_centos7--2-root 17G 3.7G 14G 22% /
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 9.1M 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/sda1 1014M 179M 836M 18% /boot
tmpfs 378M 20K 378M 1% /run/user/1000
tmpfs 378M 0 378M 0% /run/user/0
192.168.10.133:/nfsshare 46G 4.6G 41G 11% /mnt/nfsmounthere
Let's create a file there:
[root@centos7-2 ~]# cd /mnt/nfsmounthere/
[root@centos7-2 nfsmounthere]# echo "hello" > hello.txt
[root@centos7-2 nfsmounthere]# ls -l
total 4
-rw-r--r--. 1 root root 6 Jul 10 04:48 hello.txt
As we have set the no_root_squash option, if the root user creates a file it will have root permissions, whether it is created from the client machine or on the server:
[root@centos7-1 ~]# cd /nfsshare/
[root@centos7-1 nfsshare]# ls -l
total 4
-rw-r--r--. 1 root root 6 Jul 10 04:48 hello.txt
[root@centos7-1 nfsshare]# cat hello.txt
hello
See, both root users have the same permissions on this file. If we had left the default root_squash in place instead, root users would not be mapped between client and server, and files created by a remote root would get nfsnobody permissions.
Let's do another example and create two text files: nfsuser1 creates user1.txt and nfsuser2 creates user2.txt:
[root@centos7-2 nfsmounthere]# su nfsuser1
[nfsuser1@centos7-2 nfsmounthere]$ echo " user1 id 1101 from client" > user1.txt
[nfsuser1@centos7-2 nfsmounthere]$ su nfsuser2
Password:
[nfsuser2@centos7-2 nfsmounthere]$ echo " user2 id 1102 from client" > user2.txt
[nfsuser2@centos7-2 nfsmounthere]$ ls -l
total 12
-rw-r--r--. 1 root root 6 Jul 10 04:48 hello.txt
-rw-rw-r--. 1 nfsuser1 nfsuser1 27 Jul 10 04:58 user1.txt
-rw-rw-r--. 1 nfsuser2 nfsuser2 27 Jul 10 04:59 user2.txt

[nfsuser2@centos7-2 nfsmounthere]$ echo "I am user2 from client, i was here too" >> user1.txt
bash: user1.txt: Permission denied
Now on the server we create a user called nfsuser2 with uid 1101 (this is nfsuser1's uid on the client machine):
[root@centos7-1 nfsshare]# useradd -u 1101 -m nfsuser2
[root@centos7-1 nfsshare]# passwd nfsuser2
Changing password for user nfsuser2.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
[root@centos7-1 nfsshare]# su nfsuser2
[nfsuser2@centos7-1 nfsshare]$ cd /nfsshare/
[nfsuser2@centos7-1 nfsshare]$ ls -l
total 12
-rw-r--r--. 1 root root 6 Jul 10 04:48 hello.txt
-rw-rw-r--. 1 nfsuser2 nfsuser2 27 Jul 10 04:58 user1.txt
-rw-rw-r--. 1 1102 1102 27 Jul 10 04:59 user2.txt
[nfsuser2@centos7-1 nfsshare]$ echo "I am user2 from the server, I can do it HAHA" >> user1.txt
[nfsuser2@centos7-1 nfsshare]$ cat user1.txt
 user1 id 1101 from client
I am user2 from the server, I can do it HAHA
Hmm, that seems insecure, doesn't it? Be careful when using NFSv3, and consider using OpenLDAP with it.

NFS /etc/fstab options

As you have seen, we use the mount command to mount the NFS file system on the client computer. That is not permanent, and the mount would disappear after rebooting the system, so we add an entry to /etc/fstab:
[root@centos7-2 ~]# umount /mnt/nfsmounthere/
[root@centos7-2 ~]# vim /etc/fstab

[root@centos7-2 ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Sun Jun 10 02:22:51 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos_centos7--2-root / xfs defaults 0 0
UUID=b388d696-cb03-4156-9576-c2cddf954366 /boot xfs defaults 0 0
/dev/mapper/centos_centos7--2-swap swap swap defaults 0 0

# creating an entry to mount our NFS share
192.168.10.133:/nfsshare /mnt/nfsmounthere nfs defaults 0 0
And let's check it:
[root@centos7-2 ~]# mount -a

[root@centos7-2 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos_centos7--2-root 17G 3.7G 14G 22% /
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 9.1M 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/sda1 1014M 179M 836M 18% /boot
tmpfs 378M 4.0K 378M 1% /run/user/42
tmpfs 378M 20K 378M 1% /run/user/1000
192.168.10.133:/nfsshare 46G 4.6G 41G 11% /mnt/nfsmounthere

[root@centos7-2 ~]# cd /mnt/nfsmounthere/
[root@centos7-2 nfsmounthere]# ls -l
total 12
-rw-r--r--. 1 root root 6 Jul 10 04:48 hello.txt
-rw-rw-r--. 1 nfsuser1 nfsuser1 72 Jul 10 05:06 user1.txt
-rw-rw-r--. 1 nfsuser2 nfsuser2 27 Jul 10 04:59 user2.txt
As it is a network file system, it can cause problems during the boot process if the network or the server is not available, so we tune the mount options. There are a number of options which determine the behaviour of the attempted mount depending on the server's availability:
    soft or hard: with a soft mount, the client stops retrying after the retransmission limit is reached and reports an error; with a hard mount, it keeps retrying until the server answers.
    fg or bg: foreground or background. With fg, the mount attempt happens in the foreground, so the boot process waits until it succeeds or fails. With bg, the boot can continue while the system silently keeps attempting the mount in the background. bg is usually combined with a hard mount, in order to keep the system from hanging.
    timeo: the time the client waits before retransmitting a request, given in tenths of a second (so timeo=10 is one second).
    retrans: how many times the client retransmits a request before taking further action (for a soft mount, before returning an error).
    rsize and wsize: the maximum read and write request sizes used against the remote server, typically 8192 bytes. We can do performance testing if we work with very small or very large files.
    ro and rw: as you can guess, these mount the file system read-only or read-write; but if the share is exported read-only on the server side, it will be read-only regardless.
[root@centos7-2 nfsmounthere]# cd
[root@centos7-2 ~]# umount /mnt/nfsmounthere/
[root@centos7-2 ~]# vim /etc/fstab
[root@centos7-2 ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Sun Jun 10 02:22:51 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos_centos7--2-root / xfs defaults 0 0
UUID=b388d696-cb03-4156-9576-c2cddf954366 /boot xfs defaults 0 0
/dev/mapper/centos_centos7--2-swap swap swap defaults 0 0

# creating an entry to mount our NFS share
192.168.10.133:/nfsshare /mnt/nfsmounthere nfs hard,bg,timeo=300,rsize=1024,wsize=2048 0 0

[root@centos7-2 ~]# mount -a

[root@centos7-2 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos_centos7--2-root 17G 3.7G 14G 22% /
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 9.1M 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/sda1 1014M 179M 836M 18% /boot
tmpfs 378M 4.0K 378M 1% /run/user/42
tmpfs 378M 20K 378M 1% /run/user/1000
192.168.10.133:/nfsshare 46G 4.6G 41G 11% /mnt/nfsmounthere
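For comparison, a soft mount entry (a sketch, not used in this setup) would make I/O return an error instead of hanging if the server stays unreachable; remember that timeo is in tenths of a second, so timeo=30 means three seconds between retries:

192.168.10.133:/nfsshare  /mnt/nfsmounthere  nfs  soft,bg,timeo=30,retrans=3  0  0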
That's all.