Re: cephfs infernalis (ceph version 9.2.1) - bonnie++

On Mon, Mar 21, 2016 at 2:33 PM, Michael Hanscho <reset11@xxxxxxx> wrote:
> On 2016-03-21 05:07, Yan, Zheng wrote:
>> On Sat, Mar 19, 2016 at 9:38 AM, Michael Hanscho <reset11@xxxxxxx> wrote:
>>> Hi!
>>>
>>> Trying to run bonnie++ on cephfs mounted via the kernel driver on a
>>> centos 7.2.1511 machine resulted in:
>>>
>>> # bonnie++ -r 128 -u root -d /data/cephtest/bonnie2/
>>> Using uid:0, gid:0.
>>> Writing a byte at a time...done
>>> Writing intelligently...done
>>> Rewriting...done
>>> Reading a byte at a time...done
>>> Reading intelligently...done
>>> start 'em...done...done...done...done...done...
>>> Create files in sequential order...done.
>>> Stat files in sequential order...done.
>>> Delete files in sequential order...Bonnie: drastic I/O error (rmdir):
>>> Directory not empty
>>> Cleaning up test directory after error.
>>
>> Please check if there are leftover files in the test directory. This
>> looks like a readdir bug (some files are missing from the readdir
>> result) in an old kernel. Which kernel version were you using?
>
> You are right - after the error message, the bonnie++ directory and a
> 0-byte file inside it were left over.
> Kernel: 3.10.0-327.10.1.el7.x86_64 (latest CentOS 7.2 kernel)
>
> (If I run the same test (bonnie++ 1.96) on a local HD on the same
> machine, it works as expected.)

The bug was introduced by RHEL7 backports. It can be fixed by the
attached patch.
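
To make the failure mode concrete: the delete/cleanup phase walks the test
directory with readdir(), unlinks each entry it sees, then calls rmdir() on
the directory. The rough sketch below is plain POSIX C written for
illustration (it is not bonnie++'s actual code and not the attached patch);
it shows why a readdir result that silently omits entries ends in
"Directory not empty":

/*
 * Illustration only: list a directory with readdir(), unlink every
 * entry returned, then rmdir() the directory. If the kernel's readdir
 * result omits some entries, those files are never unlinked and
 * rmdir() fails with ENOTEMPTY ("Directory not empty").
 */
#include <dirent.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int delete_tree(const char *dir)
{
    DIR *d = opendir(dir);
    if (!d) {
        perror("opendir");
        return -1;
    }

    struct dirent *de;
    while ((de = readdir(d)) != NULL) {
        if (strcmp(de->d_name, ".") == 0 || strcmp(de->d_name, "..") == 0)
            continue;
        /* unlinkat() removes the entry relative to the open directory */
        if (unlinkat(dirfd(d), de->d_name, 0) != 0)
            perror("unlinkat");
    }
    closedir(d);

    /* Fails with ENOTEMPTY if readdir() did not return every entry */
    if (rmdir(dir) != 0) {
        perror("rmdir");
        return -1;
    }
    return 0;
}

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <directory>\n", argv[0]);
        return 1;
    }
    return delete_tree(argv[1]) ? 1 : 0;
}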


Thank you for reporting this.
Yan, Zheng

>
> Gruesse
> Michael
>
>
>> Regards
>> Yan, Zheng
>>
>>>
>>> # ceph -w
>>>     cluster <ID>
>>>      health HEALTH_OK
>>>      monmap e3: 3 mons at
>>> {cestor4=<IP1>:6789/0,cestor5=<IP2>:6789/0,cestor6=<IP3>:6789/0}
>>>             election epoch 62, quorum 0,1,2 cestor4,cestor5,cestor6
>>>      mdsmap e30: 1/1/1 up {0=cestor2=up:active}, 1 up:standby
>>>      osdmap e703: 60 osds: 60 up, 60 in
>>>             flags sortbitwise
>>>       pgmap v135437: 1344 pgs, 4 pools, 4315 GB data, 2315 kobjects
>>>             7262 GB used, 320 TB / 327 TB avail
>>>                 1344 active+clean
>>>
>>> Any ideas?
>>>
>>> Gruesse
>>> Michael
>

Attachment: readdir.patch
Description: Binary data

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
