Re: Slow write times to gluster disk

Okay, good. At least this validates my suspicion. Handling O_SYNC in gluster NFS and FUSE differs a bit.

When an application opens a file with O_SYNC on a FUSE mount, each write() syscall has to be committed to disk as part of the syscall itself. NFS, by contrast, has no concept of an open: it performs the write through a file handle, flagging it as needing to be synchronous, so the write() syscall is performed first and an fsync() follows. A write on an fd opened with O_SYNC therefore becomes write+fsync. My guess is that when multiple threads perform this write+fsync() sequence on the same file, multiple writes get batched together before being written to disk, which is why the throughput on the disk increases.
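A minimal sketch of the two code paths in C (hypothetical paths and file names, not gluster code): a FUSE client sees the first pattern, while an NFS client effectively turns the same request into the second.

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    char buf[4096] = {0};

    /* FUSE-mount path: the fd carries O_SYNC, so each write()
       must reach stable storage before the syscall returns. */
    int fd1 = open("/fuse-mount/zeros.txt", O_WRONLY | O_CREAT | O_SYNC, 0644);
    write(fd1, buf, sizeof(buf));   /* blocks until the data is on disk */
    close(fd1);

    /* NFS path: no O_SYNC at open; the write is issued first
       and then forced out explicitly, i.e. write+fsync. */
    int fd2 = open("/nfs-mount/zeros.txt", O_WRONLY | O_CREAT, 0644);
    write(fd2, buf, sizeof(buf));   /* may only reach the cache */
    fsync(fd2);                     /* commit to stable storage */
    close(fd2);

    return 0;
}

With several threads doing the second pattern on the same file, the writes pending between two fsync() calls can be flushed in one batch, which would explain the higher disk throughput.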

Does that answer your question?

On Wed, May 10, 2017 at 9:35 PM, Pat Haley <phaley@xxxxxxx> wrote:

Without oflag=sync, and with only a single test of each, the FUSE mount is faster than NFS:

FUSE:
mseas-data2(dri_nascar)% dd if=/dev/zero count=4096 bs=1048576 of=zeros.txt conv=sync
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 7.46961 s, 575 MB/s


NFS:
mseas-data2(HYCOM)% dd if=/dev/zero count=4096 bs=1048576 of=zeros.txt conv=sync
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 11.4264 s, 376 MB/s



On 05/10/2017 11:53 AM, Pranith Kumar Karampuri wrote:
Could you let me know the speed without oflag=sync on both mounts? No need to collect profiles.
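For comparison, the synchronous variant of the same test adds oflag=sync (illustrative command line, same sizes as the tests above):

dd if=/dev/zero count=4096 bs=1048576 of=zeros.txt oflag=sync

Note that conv=sync, which appears in the runs above, only pads short input blocks out to the block size; it does not force synchronous writes the way oflag=sync does.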

On Wed, May 10, 2017 at 9:17 PM, Pat Haley <phaley@xxxxxxx> wrote:

Here is what I see now:

[root@mseas-data2 ~]# gluster volume info
 
Volume Name: data-volume
Type: Distribute
Volume ID: c162161e-2a2d-4dac-b015-f31fd89ceb18
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: mseas-data2:/mnt/brick1
Brick2: mseas-data2:/mnt/brick2
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
nfs.exports-auth-enable: on
diagnostics.brick-sys-log-level: WARNING
performance.readdir-ahead: on
nfs.disable: on
nfs.export-volumes: off



On 05/10/2017 11:44 AM, Pranith Kumar Karampuri wrote:
Is this the volume info you have?

>     [root@mseas-data2 ~]# gluster volume info
>
>     Volume Name: data-volume
>     Type: Distribute
>     Volume ID: c162161e-2a2d-4dac-b015-f31fd89ceb18
>     Status: Started
>     Number of Bricks: 2
>     Transport-type: tcp
>     Bricks:
>     Brick1: mseas-data2:/mnt/brick1
>     Brick2: mseas-data2:/mnt/brick2
>     Options Reconfigured:
>     performance.readdir-ahead: on
>     nfs.disable: on
>     nfs.export-volumes: off

I copied this from the old thread from 2016. This is a distribute volume. Did you change any of the options in between?
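For reference, the extra diagnostics.* options in your current output are the ones gluster turns on for profiling, typically set with something like (illustrative commands, not taken from this thread):

gluster volume profile data-volume start
gluster volume set data-volume diagnostics.brick-sys-log-level WARNING

so the difference between the two outputs may just be the profiling enabled for the earlier tests.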
-- 

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley                          Email:  phaley@xxxxxxx
Center for Ocean Engineering       Phone:  (617) 253-6824
Dept. of Mechanical Engineering    Fax:    (617) 253-8125
MIT, Room 5-213                    http://web.mit.edu/phaley/www/
77 Massachusetts Avenue
Cambridge, MA  02139-4301



--
Pranith
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users
