Re: xfs hang when filesystem filled

On 8/7/2012 12:54 AM, Guk-Bong, Kwon wrote:
> HI all
> 
> I tested xfs over nfs using bonnie++
> 
> xfs and nfs hang when xfs filesystem filled
> 
> What's the problem?

The problem is likely the OP has been given enough rope to hang himself. ;)

>     b. lvcreate -L 90G -n test ld1

90 GiB device (LVM's -L size suffixes are binary units)

> data     =                       bsize=4096   blocks=23592960, imaxpct=25

23592960 blocks * 4096 bytes = 96,636,764,160 bytes = exactly 90 GiB of filesystem

> bonnie++ -s 0 -n 200:1024000:1024000 -r 32G -d /test/ -u 0 -g 0 -q &

204800 files (200*1024) * 1024000 bytes = 209,715,200,000 bytes = ~195 GiB to write
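
The same arithmetic as a quick sketch, in consistent binary units (per the
bonnie++ man page, -n N:max:min creates N*1024 files sized between min and
max bytes, so here 204800 files of exactly 1024000 bytes each):

    $ echo '200 * 1024 * 1024000 / 1024^3' | bc -l   # ~195 GiB to be written
    $ echo '23592960 * 4096 / 1024^3' | bc -l        # 90 GiB available to hold it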

If my understanding of the bonnie++ options, and my math, are correct,
you are attempting to write ~195 GiB of ~1 MB files, in parallel, over
NFS, to a 90 GiB filesystem.  Adding insult to injury, you're mounting
with inode32, causing allocation to serialize on AG0, which will cause
head thrashing as the disks alternate between writing directory
information and file extents.

So, first and foremost, you're attempting to write more than twice as
many bytes as the filesystem can hold.  You're then hamstringing the
filesystem's ability to allocate in parallel; inode64 would be a better
choice here.
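
If you want to retest with it, a minimal sketch, assuming the LV path
implied by your lvcreate above (/dev/ld1/test) and a /test mount point:

    # umount /test
    # mount -o inode64 /dev/ld1/test /test

With inode64 the allocator can place new inodes, and the data near them,
in any AG instead of pinning everything to the low AGs.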

You didn't describe the underlying storage hardware, which likely plays
a role in the 120 second blocked-task warnings and the unresponsiveness,
or "hang", that you describe.

In summary, you're intentionally writing more than twice what the FS can
hold; processes block on the resulting latency, and the FS appears to hang.

What result were you expecting from intentionally trying to break things?

-- 
Stan


