NFS 3 mount options performance impact

jboothomas
Feb 21, 2023


This article covers various NFS mount options and the results obtained against two NFS servers, one a virtual machine and the other an external NFS storage array. Before diving into the results, here is the test environment:

The client is a VM on ESXi hostA. It has one mount point to a virtual machine NFS server running on ESXi hostB on a Fibre Channel flash datastore, and a second mount point to an external all-flash NFS storage array. Between each test run the client cache is cleared and the mount points are cycled (unmount/mount), which keeps VMware cache interference to a minimum. All systems are connected over a 40GbE network, although the VM-to-VM path has a network response time roughly 0.1 ms better than the VM-to-array path.

The test script used is as follows:

#!/bin/bash
# bash is required for {1..N} brace expansion and $RANDOM
echo "NFS server IP: " $1
echo "NFS server export: " $2
echo "mount path: " $3
echo "mount options: " $4
echo `date`
echo "clear cache and cycle mount point"
sync; echo 3 > /proc/sys/vm/drop_caches
umount /nfs/$3
if test -z "$4"
then
    mount $1:$2 /nfs/$3
else
    mount $1:$2 /nfs/$3 -o $4
fi
echo `date`
echo "display mount options"
grep /nfs/$3 /proc/mounts
echo "create 10000 files random 1MB"
time for i in {1..10000}
do
    # 750000 random bytes become roughly 1MB once base64 encoded
    openssl rand -out /nfs/$3/$i -base64 $(( 10**6 * 3/4 ))
done
echo "list the files ls -alh"
time ls -alh /nfs/$3 > /dev/null
echo "read contents of 1000 random files"
time for j in {1..1000}
do
    cat /nfs/$3/$(( RANDOM % 10000 + 1 )) > /dev/null
done
echo "delete the files"
time rm -rf /nfs/$3/*

The above script times the create, list, read and delete operations on the NFS mount, so we can see the impact of each set of mount options.
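
For example, the script could be invoked as follows (the script name, server address, export path and mount directory below are placeholders for illustration; the options string is passed as the fourth argument, or omitted for the defaults):

# default mount, no options (placeholder server, export and mount directory)
./nfs_test.sh 192.0.2.10 /export/test vmnfs
# same export mounted with an explicit options string
./nfs_test.sh 192.0.2.10 /export/test vmnfs "rsize=4096,wsize=4096"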

DEFAULT MOUNT — No Options specified

For our VM NFS server the default options are:

rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountvers=3,mountport=43808,mountproto=udp,local_lock=none,

For our storage array NFS server the default options are:

rw,relatime,vers=3,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountvers=3,mountport=2049,mountproto=udp,local_lock=none,
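
In both cases the command behind these defaults is simply a mount of the export with no -o string, along the lines of the following (server addresses and paths are placeholders):

mount 192.0.2.10:/export/test /nfs/vmnfs      # placeholder VM NFS server
mount 192.0.2.20:/export/test /nfs/arraynfs   # placeholder storage array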

Default results:

+----------------+------------------------+------------------------+
| TEST           | VM NFSserver           | Array NFSserver        |
+----------------+------------------------+------------------------+
| CREATE         | 2m38.942s              | 2m33.390s              |
| LIST           | 0m1.724s               | 0m0.215s               |
| READ           | 0m2.216s               | 0m2.399s               |
| DELETE         | 0m12.791s              | 0m6.568s               |
+----------------+------------------------+------------------------+

RSIZE WSIZE variations

The rsize and wsize options are set to either 4096 (the smallest value allowed) or 1048576 (the largest value allowed).
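
For example, the small and large transfer sizes are requested like this (server address and paths are placeholders); note that the server can still negotiate the values down, as seen with the array advertising 524288 by default:

mount 192.0.2.10:/export/test /nfs/vmnfs -o rsize=4096,wsize=4096          # placeholder server and path
mount 192.0.2.10:/export/test /nfs/vmnfs -o rsize=1048576,wsize=1048576    # placeholder server and path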

Results:

+----------------+-------------------------+-------------------------+
| TEST           | VM NFSserver            | Array NFSserver         |
| -o r/wsize=    | 4096       | 1048576    | 4096       | 1048576    |
+----------------+------------+------------+------------+------------+
| CREATE         | 2m49.313s  | 2m34.624s  | 3m22.727s  | 2m37.181s  |
| LIST           | 0m1.764s   | 0m1.555s   | 0m0.373s   | 0m0.218s   |
| READ           | 0m1.773s   | 0m2.097s   | 0m8.516s   | 0m2.709s   |
| DELETE         | 0m13.105s  | 0m11.051s  | 0m7.103s   | 0m6.439s   |
+----------------+------------+------------+------------+------------+

NOAC NFS options

We set the NFS noac option, as per the man pages:

“The noac option prevents clients from caching file attributes so that applications can more quickly detect file changes on the server. In addition to preventing the client from caching file attributes, the noac option forces application writes to become synchronous so that local changes to a file become visible on the server immediately. That way, other clients can quickly detect recent writes when they check the file’s attributes. Using the noac option provides greater cache coherence among NFS clients accessing the same files, but it extracts a significant performance penalty.”

Note: using noac sets the following options: acregmin=0, acregmax=0, acdirmin=0, acdirmax=0.
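
The corresponding mount is simply (placeholder server address and path):

mount 192.0.2.10:/export/test /nfs/vmnfs -o noac    # placeholder server and path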

Results:

+----------------+------------------------+------------------------+
| TEST           | VM NFSserver           | Array NFSserver        |
+----------------+------------------------+------------------------+
| CREATE         | 25m32.453s             | 13m5.242s              |
| LIST           | 0m13.362s              | 0m5.649s               |
| READ           | 0m3.068s               | 0m2.811s               |
| DELETE         | 0m21.416s              | 0m14.529s              |
+----------------+------------------------+------------------------+

ACTIMEO variations

As per the man pages: “Using actimeo sets all of acregmin, acregmax, acdirmin, and acdirmax to the same value.”
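
Each run therefore only varies the single actimeo value, for example (placeholder server address and path):

mount 192.0.2.10:/export/test /nfs/vmnfs -o actimeo=3      # placeholder server and path
mount 192.0.2.10:/export/test /nfs/vmnfs -o actimeo=600    # placeholder server and path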

Results:

+--------------+-----------+-----------+-----------+-----------+-----------+
| TEST         |                       VM NFSserver                        |
| actimeo=     |     3     |    30     |    60     |    120    |    600    |
+--------------+-----------+-----------+-----------+-----------+-----------+
| CREATE       | 2m35.783s | 2m33.524s | 2m25.882s | 2m30.370s | 2m25.207s |
| LIST         | 0m1.375s  | 0m2.028s  | 0m1.904s  | 0m1.812s  | 0m1.492s  |
| READ         | 0m1.951s  | 0m2.100s  | 0m2.240s  | 0m2.260s  | 0m1.917s  |
| DELETE       | 0m12.029s | 0m9.503s  | 0m8.772s  | 0m9.334s  | 0m10.846s |
+--------------+-----------+-----------+-----------+-----------+-----------+

+--------------+-----------+-----------+-----------+-----------+-----------+
| TEST         |                      ARRAY NFSserver                      |
| actimeo=     |     3     |    30     |    60     |    120    |    600    |
+--------------+-----------+-----------+-----------+-----------+-----------+
| CREATE       | 2m32.204s | 2m32.173s | 2m32.740s | 2m30.609s | 2m28.925s |
| LIST         | 0m0.241s  | 0m0.262s  | 0m0.237s  | 0m0.231s  | 0m0.296s  |
| READ         | 0m2.621s  | 0m2.555s  | 0m2.520s  | 0m2.373s  | 0m2.147s  |
| DELETE       | 0m7.053s  | 0m5.701s  | 0m5.797s  | 0m5.301s  | 0m5.473s  |
+--------------+-----------+-----------+-----------+-----------+-----------+

ACREGMIN or ACREGMAX or ACDIRMIN or ACDIRMAX = 0
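
Here each attribute-cache timer is zeroed individually to isolate its effect, for example (placeholder server address and path):

mount 192.0.2.10:/export/test /nfs/vmnfs -o acdirmin=0    # placeholder server and path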

Results:

+-------------+-------------------------------------------------------+
|             |                     VM NFSserver                      |
| TEST        | acregmin=0  | acregmax=0  | acdirmin=0  | acdirmax=0  |
+-------------+-------------+-------------+-------------+-------------+
| CREATE      | 2m32.796s   | 2m28.411s   | 2m30.707s   | 2m31.067s   |
| LIST        | 0m3.780s    | 0m4.830s    | 0m17.643s   | 0m10.156s   |
| READ        | 0m2.184s    | 0m2.882s    | 0m2.489s    | 0m2.300s    |
| DELETE      | 0m12.468s   | 0m11.031s   | 0m29.276s   | 0m19.557s   |
+-------------+-------------+-------------+-------------+-------------+

+-------------+-------------------------------------------------------+
|             |                    ARRAY NFSserver                    |
| TEST        | acregmin=0  | acregmax=0  | acdirmin=0  | acdirmax=0  |
+-------------+-------------+-------------+-------------+-------------+
| CREATE      | 2m33.740s   | 2m30.436s   | 2m32.701s   | 2m32.072s   |
| LIST        | 0m2.330s    | 0m2.653s    | 0m4.205s    | 0m0.266s    |
| READ        | 0m2.779s    | 0m2.891s    | 0m2.751s    | 0m2.425s    |
| DELETE      | 0m6.946s    | 0m7.259s    | 0m15.801s   | 0m6.350s    |
+-------------+-------------+-------------+-------------+-------------+

SYNC and ASYNC option
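
The client-side sync option forces every write to be committed to the server before the write call returns, while async (the default) lets the client delay and batch writes. A placeholder example:

mount 192.0.2.10:/export/test /nfs/vmnfs -o sync     # placeholder server and path
mount 192.0.2.10:/export/test /nfs/vmnfs -o async    # placeholder server and path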

Results:

+-------------+---------------------------+---------------------------+
|             |        VM NFSserver       |      ARRAY NFSserver      |
| TEST        | async       | sync        | async       | sync        |
+-------------+-------------+-------------+-------------+-------------+
| CREATE      | 2m33.138s   | 23m15.580s  | 2m30.924s   | 12m49.679s  |
| LIST        | 0m1.775s    | 0m1.573s    | 0m0.236s    | 0m0.190s    |
| READ        | 0m2.159s    | 0m1.855s    | 0m2.342s    | 0m1.634s    |
| DELETE      | 0m10.896s   | 0m10.437s   | 0m6.149s    | 0m5.533s    |
+-------------+-------------+-------------+-------------+-------------+

TCP mount protocol
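
The defaults above negotiated mountproto=udp for the auxiliary MOUNT protocol; this run presumably switches it to TCP, which would look something like the following (placeholder server address and path):

mount 192.0.2.10:/export/test /nfs/vmnfs -o mountproto=tcp    # placeholder server and path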

Results:

+----------------+------------------------+------------------------+
| TEST           | VM NFSserver           | Array NFSserver        |
+----------------+------------------------+------------------------+
| CREATE         | 2m26.336s              | 2m30.183s              |
| LIST           | 0m1.885s               | 0m0.236s               |
| READ           | 0m2.137s               | 0m2.190s               |
| DELETE         | 0m10.517s              | 0m6.205s               |
+----------------+------------------------+------------------------+

NCONNECT option

nconnect is a mount option that tells the client to open multiple connections to the server, up to a maximum of 16. It is available in newer Linux versions (from RHEL 8.3 and Ubuntu 20.04 onwards).

For multi-stream workloads, distributing the I/O across these kernel NFS connections improves performance.
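
For example, to open 16 connections (placeholder server address and path; the client kernel must support nconnect):

mount 192.0.2.10:/export/test /nfs/vmnfs -o nconnect=16    # placeholder server and path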

Conclusion

NOAC, sync, and acdirmin=0 have the most impact on the results, mainly on the CREATE and DELETE operations. Of course, these results were obtained on a small dataset of just 10,000 files in a single directory; the impact will only grow as the file and directory count increases.
