Re: [Gluster-devel] GlusterFS FUSE client hangs on rsyncing lots of file


 



With "performance.client-io-threads" set to "off" no hangs occurred in 3 
rsync/rm rounds. Could that be some fuse-bridge lock race? Will bring that 
option to "on" back again and try to get full statedump.

On Thursday, 21 January 2016, 14:54:47 EET Raghavendra G wrote:
> On Thu, Jan 21, 2016 at 10:49 AM, Pranith Kumar Karampuri <pkarampu@xxxxxxxxxx> wrote:
> > On 01/18/2016 02:28 PM, Oleksandr Natalenko wrote:
> >> XFS. The server side works OK, and I'm able to mount the volume again.
> >> The brick is 30% full.
> > 
> > Oleksandr,
> > 
> >       Will it be possible to get the statedump output of the client and
> > the bricks next time it happens?
> > 
> > https://github.com/gluster/glusterfs/blob/master/doc/debugging/statedump.md#how-to-generate-statedump
> We also need to dump inode information. To do that, you have to add
> "all=yes" to /var/run/gluster/glusterdump.options before you issue the
> commands to get the statedump.
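If I read that correctly, the whole sequence would be something like the
sketch below (volume name from this thread, everything else per the statedump
doc defaults -- please correct me if I got it wrong):

===
# include inode tables (and everything else) in subsequent statedumps
echo "all=yes" >> /var/run/gluster/glusterdump.options

# bricks: one dump per brick process, written on each server
gluster volume statedump asterisk_records

# FUSE client: send SIGUSR1 to the glusterfs mount process on the client box
kill -USR1 $(pidof glusterfs)
===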
> 
> > Pranith
> > 
> >> On Monday, 18 January 2016, 15:07:18 EET baul jianguo wrote:
> >>> What is your brick file system? And what is the status of the
> >>> glusterfsd process and all of its threads?
> >>> I met the same issue when a client app such as rsync stayed in D
> >>> status, and the brick process and its related threads were also in D
> >>> status. The brick device's disk utilization was at 100%.
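(On my side I suppose checking that would look roughly like the following --
standard procps/sysstat tools, nothing Gluster-specific:)

===
# show every glusterfsd thread with its scheduler state ("D" = uninterruptible sleep)
ps -eLo pid,tid,stat,wchan:30,comm | grep glusterfsd

# watch per-device utilization on the brick host; %util stuck near 100%
# points at a saturated brick disk
iostat -x 1
===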
> >>> 
> >>> On Sun, Jan 17, 2016 at 6:13 AM, Oleksandr Natalenko <oleksandr@xxxxxxxxxxxxxx> wrote:
> >>>> Wrong assumption, rsync hung again.
> >>>> 
> >>>> On Saturday, 16 January 2016, 22:53:04 EET Oleksandr Natalenko wrote:
> >>>>> One possible reason:
> >>>>> 
> >>>>> cluster.lookup-optimize: on
> >>>>> cluster.readdir-optimize: on
> >>>>> 
> >>>>> I've disabled both optimizations, and at least as of now rsync still
> >>>>> does its job with no issues. I would like to find out which option
> >>>>> causes such behavior and why. Will test more.
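(For the record, turning them off boils down to the usual volume set calls,
e.g.:)

===
gluster volume set asterisk_records cluster.lookup-optimize off
gluster volume set asterisk_records cluster.readdir-optimize off
===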
> >>>>> 
> >>>>> On Friday, 15 January 2016, 16:09:51 EET Oleksandr Natalenko wrote:
> >>>>>> Another observation: if rsyncing is resumed after a hang, rsync
> >>>>>> itself hangs a lot faster because it stats the already copied files.
> >>>>>> So the reason may not be the writing itself; massive stat calls on
> >>>>>> the GlusterFS volume may trigger it as well.
> >>>>>> 
> >>>>>> On 15.01.2016 09:40, Oleksandr Natalenko wrote:
> >>>>>>> While doing rsync over millions of files from an ordinary partition
> >>>>>>> to a GlusterFS volume, the rsync hang happens just after
> >>>>>>> approximately the first 2 million files, and the following info
> >>>>>>> appears in dmesg:
> >>>>>>> 
> >>>>>>> ===
> >>>>>>> [17075038.924481] INFO: task rsync:10310 blocked for more than 120 seconds.
> >>>>>>> [17075038.931948] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> >>>>>>> [17075038.940748] rsync           D ffff88207fc13680     0 10310  10309 0x00000080
> >>>>>>> [17075038.940752]  ffff8809c578be18 0000000000000086 ffff8809c578bfd8 0000000000013680
> >>>>>>> [17075038.940756]  ffff8809c578bfd8 0000000000013680 ffff880310cbe660 ffff881159d16a30
> >>>>>>> [17075038.940759]  ffff881e3aa25800 ffff8809c578be48 ffff881159d16b10 ffff88087d553980
> >>>>>>> [17075038.940762] Call Trace:
> >>>>>>> [17075038.940770]  [<ffffffff8160a1d9>] schedule+0x29/0x70
> >>>>>>> [17075038.940797]  [<ffffffffa023a53d>] __fuse_request_send+0x13d/0x2c0 [fuse]
> >>>>>>> [17075038.940801]  [<ffffffffa023db30>] ? fuse_get_req_nofail_nopages+0xc0/0x1e0 [fuse]
> >>>>>>> [17075038.940805]  [<ffffffff81098350>] ? wake_up_bit+0x30/0x30
> >>>>>>> [17075038.940809]  [<ffffffffa023a6d2>] fuse_request_send+0x12/0x20 [fuse]
> >>>>>>> [17075038.940813]  [<ffffffffa024382f>] fuse_flush+0xff/0x150 [fuse]
> >>>>>>> [17075038.940817]  [<ffffffff811c45c4>] filp_close+0x34/0x80
> >>>>>>> [17075038.940821]  [<ffffffff811e4ed8>] __close_fd+0x78/0xa0
> >>>>>>> [17075038.940824]  [<ffffffff811c6103>] SyS_close+0x23/0x50
> >>>>>>> [17075038.940828]  [<ffffffff81614de9>] system_call_fastpath+0x16/0x1b
> >>>>>>> ===
> >>>>>>> 
> >>>>>>> rsync blocks in D state, and to kill it, I have to do umount --lazy
> >>>>>>> on the GlusterFS mountpoint and then kill the corresponding client
> >>>>>>> glusterfs process. Only then does rsync exit.
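(Spelled out, the recovery sequence is roughly this -- the mount point path is
just an example for my setup:)

===
# detach the hung GlusterFS mount without waiting for the stuck FUSE requests
umount --lazy /mnt/asterisk_records

# then kill the client-side glusterfs process that served that mount;
# only after this does the D-state rsync exit
pkill -f 'glusterfs.*asterisk_records'
===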
> >>>>>>> 
> >>>>>>> Here is GlusterFS volume info:
> >>>>>>> 
> >>>>>>> ===
> >>>>>>> Volume Name: asterisk_records
> >>>>>>> Type: Distributed-Replicate
> >>>>>>> Volume ID: dc1fe561-fa3a-4f2e-8330-ec7e52c75ba4
> >>>>>>> Status: Started
> >>>>>>> Number of Bricks: 3 x 2 = 6
> >>>>>>> Transport-type: tcp
> >>>>>>> Bricks:
> >>>>>>> Brick1: server1:/bricks/10_megaraid_0_3_9_x_0_4_3_hdd_r1_nolvm_hdd_storage_01/asterisk/records
> >>>>>>> Brick2: server2:/bricks/10_megaraid_8_5_14_x_8_6_16_hdd_r1_nolvm_hdd_storage_01/asterisk/records
> >>>>>>> Brick3: server1:/bricks/11_megaraid_0_5_4_x_0_6_5_hdd_r1_nolvm_hdd_storage_02/asterisk/records
> >>>>>>> Brick4: server2:/bricks/11_megaraid_8_7_15_x_8_8_20_hdd_r1_nolvm_hdd_storage_02/asterisk/records
> >>>>>>> Brick5: server1:/bricks/12_megaraid_0_7_6_x_0_13_14_hdd_r1_nolvm_hdd_storage_03/asterisk/records
> >>>>>>> Brick6: server2:/bricks/12_megaraid_8_9_19_x_8_13_24_hdd_r1_nolvm_hdd_storage_03/asterisk/records
> >>>>>>> Options Reconfigured:
> >>>>>>> cluster.lookup-optimize: on
> >>>>>>> cluster.readdir-optimize: on
> >>>>>>> client.event-threads: 2
> >>>>>>> network.inode-lru-limit: 4096
> >>>>>>> server.event-threads: 4
> >>>>>>> performance.client-io-threads: on
> >>>>>>> storage.linux-aio: on
> >>>>>>> performance.write-behind-window-size: 4194304
> >>>>>>> performance.stat-prefetch: on
> >>>>>>> performance.quick-read: on
> >>>>>>> performance.read-ahead: on
> >>>>>>> performance.flush-behind: on
> >>>>>>> performance.write-behind: on
> >>>>>>> performance.io-thread-count: 2
> >>>>>>> performance.cache-max-file-size: 1048576
> >>>>>>> performance.cache-size: 33554432
> >>>>>>> features.cache-invalidation: on
> >>>>>>> performance.readdir-ahead: on
> >>>>>>> ===
> >>>>>>> 
> >>>>>>> The issue reproduces each time I rsync such a large number of files.
> >>>>>>> 
> >>>>>>> How could I debug this issue better?


_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users



