On Wed, May 20, 2015 at 11:15 AM, Barclay Jameson <almightybeeij@xxxxxxxxx> wrote:
> I am trying to find out why bonnie++ is choking at creating files
> sequentially and deleting sequentially on cephfs.
> I enabled mds debug for about 30 seconds and I found a bunch of lines
> like the following:
>
> 2015-05-19 15:54:12.930829 7f6de4ecf700 1 -- 192.168.40.3:6800/461756
> --> 192.168.40.100:0/1741116310 -- client_caps(grant ino 10000004ca9
> 19656 seq 2 caps=pAsxLsXsxFsxcrwb dirty=- wanted=pAsxXsxFxwb follows 0
> size 0/4194304 ts 1 mtime 2015-05-19 15:54:09.098825) v5 -- ?+0
> 0x557a800 con 0x4fdeca0
>
> Is this statement showing that it takes 4 seconds to create an inode?

I'm not sure where you're getting that from just this line -- are you
comparing mtime and the log timestamp? This particular message is just
a message going to the client about what access capabilities (caps) it
has on that inode.

> Watching (watch --interval=.2 -d 'ceph df') ceph df, rados df, and
> ceph osd pool stats seems to show that an object is being created or
> deleted about every 4 seconds.

A 4-second file create sounds much slower than it should be. What does
your cluster look like? Can you upload the output of ceph -s and your
MDS log somewhere?
-Greg

> The test has now been running for over 24 hours with the following
> bonnie++ command:
>
> ~/bonnie++-1.03e/bonnie++ -u root:root -s 256g -r 131072 -d /cephfs/
> -m CephBench -f -b
>
> I can give more log output if needed.
>
> OSD + MON/MDS
> CentOS 7
> ceph version 0.94.1 (e4bfad3a3c51054df7e537a724c8d0bf9be972ff)
> kernel version: 3.10.0-229.el7.x86_64
>
> Client
> CentOS 6
> ceph version 0.94.1 (e4bfad3a3c51054df7e537a724c8d0bf9be972ff)
> kernel version: 4.0.0-1.el6.elrepo.x86_64
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at http://vger.kernel.org/majordomo-info.html
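
For what it's worth, if the "4 seconds" reading did come from comparing
the mtime and the log timestamp in that client_caps line, the gap
between the two values quoted above can be checked directly. A rough
sketch (it just parses the two timestamps verbatim from the log line):

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S.%f"

# Timestamps copied verbatim from the quoted MDS log line
log_ts = datetime.strptime("2015-05-19 15:54:12.930829", FMT)
mtime = datetime.strptime("2015-05-19 15:54:09.098825", FMT)

# Gap between when the cap grant was logged and the inode's mtime
gap = (log_ts - mtime).total_seconds()
print(f"{gap:.3f} s")  # prints "3.832 s" -- roughly the "4 seconds" observed
```

That said, as noted above, the gap in a single cap-grant message doesn't
by itself show inode-creation latency; correlating many such lines over
the 30-second debug window would be needed to confirm the ~4 s/object
rate seen in ceph df.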