Re: [PATCH v3 3/3] NFSD: Add support for encoding multiple segments

On Thu, Mar 26, 2015 at 11:47:03AM -0400, Anna Schumaker wrote:
> On 03/26/2015 11:38 AM, J. Bruce Fields wrote:
> > On Thu, Mar 26, 2015 at 11:32:25AM -0400, Trond Myklebust wrote:
> >> On Thu, Mar 26, 2015 at 11:21 AM, Anna Schumaker
> >> <Anna.Schumaker@xxxxxxxxxx> wrote:
> >>> Here are my updated numbers!  I tested with files 5G in size: one
> >>> 100% data, one 100% hole, and one alternating between hole and
> >>> data every 4K.  I collected data for both v4.1 and v4.2 with and
> >>> without the READ_PLUS patches:
> >>>
> >>> ##########################
> >>> #                        #
> >>> #   Without READ_PLUS    #
> >>> #                        #
> >>> ##########################
> >>>
> >>>
> >>> NFS v4.1:
> >>>                             Trial
> >>> |---------|---------|---------|---------|---------|---------|---------|
> >>> |         |    1    |    2    |    3    |    4    |    5    | Average |
> >>> |---------|---------|---------|---------|---------|---------|---------|
> >>> |    Data |  8.723s |  7.243s |  8.252s |  6.997s |  6.980s |  7.639s |
> >>> |    Hole |  5.271s |  5.224s |  5.060s |  4.897s |  5.321s |  5.155s |
> >>> |   Mixed |  8.050s | 10.057s |  7.919s |  8.060s |  9.557s |  8.729s |
> >>> |---------|---------|---------|---------|---------|---------|---------|
> >>>
> >>>
> >>>
> >>>
> >>> NFS v4.2:
> >>>                             Trial
> >>> |---------|---------|---------|---------|---------|---------|---------|
> >>> |         |    1    |    2    |    3    |    4    |    5    | Average |
> >>> |---------|---------|---------|---------|---------|---------|---------|
> >>> |    Data |  6.707s |  7.070s |  6.722s |  6.761s |  6.810s |  6.814s |
> >>> |    Hole |  5.152s |  5.149s |  5.213s |  5.206s |  5.312s |  5.206s |
> >>> |   Mixed |  7.979s |  7.985s |  8.177s |  7.772s |  8.280s |  8.039s |
> >>> |---------|---------|---------|---------|---------|---------|---------|
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> #######################
> >>> #                     #
> >>> #   With READ_PLUS    #
> >>> #                     #
> >>> #######################
> >>>
> >>>
> >>> NFS v4.1:
> >>>                             Trial
> >>> |---------|---------|---------|---------|---------|---------|---------|
> >>> |         |    1    |    2    |    3    |    4    |    5    | Average |
> >>> |---------|---------|---------|---------|---------|---------|---------|
> >>> |    Data |  9.082s |  7.008s |  7.116s |  6.771s |  7.902s |  7.576s |
> >>> |    Hole |  5.333s |  5.358s |  5.380s |  5.161s |  5.282s |  5.303s |
> >>> |   Mixed |  8.189s |  8.308s |  9.540s |  7.937s |  8.420s |  8.479s |
> >>> |---------|---------|---------|---------|---------|---------|---------|
> >>>
> >>>
> >>>
> >>>
> >>> NFS v4.2:
> >>>                             Trial
> >>> |---------|---------|---------|---------|---------|---------|---------|
> >>> |         |    1    |    2    |    3    |    4    |    5    | Average |
> >>> |---------|---------|---------|---------|---------|---------|---------|
> >>> |    Data |  7.033s |  6.829s |  7.025s |  6.873s |  7.134s |  6.979s |
> >>> |    Hole |  1.794s |  1.800s |  1.905s |  1.811s |  1.725s |  1.807s |
> >>> |   Mixed |  7.590s |  8.777s |  9.423s | 10.366s |  8.024s |  8.836s |
> >>> |---------|---------|---------|---------|---------|---------|---------|
> >>>
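
For reference, the three 5G test files described above can be created
with standard tools.  A minimal sketch, assuming GNU coreutils on the
server; the file names are hypothetical, and the mixed-file loop is
illustrative (and slow) as written:

    # 100% data: 5 GiB of non-zero bytes
    dd if=/dev/urandom of=data bs=1M count=5120

    # 100% hole: 5 GiB sparse file with no blocks allocated
    truncate -s 5G hole

    # alternating 4K data / 4K hole: write every other 4K block and
    # leave the skipped blocks as holes (5 GiB = 1,310,720 4K blocks)
    for ((i = 0; i < 1310720; i += 2)); do
        dd if=/dev/urandom of=mixed bs=4k count=1 seek=$i \
            conv=notrunc status=none
    done
    truncate -s 5G mixed    # extend so the file ends in a hole

The resulting hole/data layout can be checked with "filefrag -v" on
the server.
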
> >>
> >> So there is a clear win in the 100% hole case here, but otherwise
> >> the statistical fluctuations are dominating the numbers.  Can you
> >> get us a few more stats and then perhaps run the results through
> >> nfsometer?
> > 
> > Also, could you describe the setup (are these still KVMs), and how
> > you're clearing the cache between runs?
> 
> These are still KVMs and my server is exporting an xfs filesystem.  I
> clear caches by running "echo 3 > /proc/sys/vm/drop_caches" on the
> server before every read, and I remount my client after reading each
> set of three files once.
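
A minimal sketch of one timed trial under that procedure, assuming the
export is mounted at /mnt on the client and the server is reachable
over ssh (hostnames, paths, and file names are all hypothetical):

    # drop the server's caches before each read
    ssh server 'sync; echo 3 > /proc/sys/vm/drop_caches'
    time cat /mnt/data > /dev/null

    ssh server 'sync; echo 3 > /proc/sys/vm/drop_caches'
    time cat /mnt/hole > /dev/null

    ssh server 'sync; echo 3 > /proc/sys/vm/drop_caches'
    time cat /mnt/mixed > /dev/null

    # remount the client after each set of three reads, dropping the
    # client-side page cache as well
    umount /mnt && mount -t nfs -o vers=4.2 server:/export /mnt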

What sort of device is the exported xfs filesystem on?  (Can't there
be a second level of caching on the guest, depending on how it's set
up?)
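
(For what it's worth: if the disk image is attached with host caching
enabled -- e.g. qemu's default cache=writeback -- the host page cache
keeps a second copy of the image data, and the guest's drop_caches
doesn't touch it.  A minimal sketch of clearing that level too between
runs, assuming ssh access to a hypothetical "kvm-host":

    # the guest's drop_caches does not reach the host page cache,
    # which may be caching the disk image file itself
    ssh kvm-host 'sync; echo 3 > /proc/sys/vm/drop_caches'

Attaching the image with cache=none instead makes qemu open it with
O_DIRECT, bypassing the host page cache entirely.)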

Can we get results on bare metal?  (The kvm test might be a good worst
case for READ_PLUS, as I'd expect bandwidth to be relatively high
compared to the cost of the extra memcpy or seek calls.  But it also
seems more complicated.)

--b.
