On Sat, 30 Oct 2010, Smets, Jan (Jan) wrote:
> 3 servers, each with 2 OSDs, 1 MON and 1 MDS.

Which version of the kernel client are you using?  It's possible this is
related to efa4c120, which loosened the locking around satisfying readdir
requests from the dcache.

sage


>
> - Jan
>
> -----Original Message-----
> From: gfarnum@xxxxxxxxx [mailto:gfarnum@xxxxxxxxx] On Behalf Of Gregory Farnum
> Sent: Friday, 29 October 2010 21:50
> To: Smets, Jan (Jan)
> Cc: ceph-devel@xxxxxxxxxxxxxxx
> Subject: Re: Ceph and bonnie++
>
> On Fri, Oct 29, 2010 at 6:37 AM, Smets, Jan (Jan) <jan.smets@xxxxxxxxxxxxxxxxxx> wrote:
> > client0:/mnt/ceph# bonnie -s 40 -r 10 -u root -f
> > Using uid:0, gid:0.
> > Writing intelligently...done
> > Rewriting...done
> > Reading intelligently...done
> > start 'em...done...done...done...
> > Create files in sequential order...done.
> > Stat files in sequential order...Expected 16384 files but only got 0
> > Cleaning up test directory after error.
> >
> >
> > Any suggestions? There was a thread about this some time ago:
> Are you running this with one or many MDSes? It should be fine under a
> single MDS.
>
> bonnie++ is one of the workloads we've had issues with on a multi-MDS
> system, although I thought we had it working at this point. I'll run our
> tests again now and see if I can reproduce locally.
> -Greg
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at http://vger.kernel.org/majordomo-info.html
>
>
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
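
For reference, the step that failed above is bonnie++'s sequential create/stat pass: it creates 16384 files and then expects a readdir/stat walk of the same directory to return all of them. The sketch below is not from the thread and is not bonnie++ itself; it only mimics the shape of that pass as a minimal, standalone reproducer, and the test directory name (`readdir-test`) is made up for illustration. On a kernel client affected by the readdir-from-dcache path Sage points at, the final count could come up short of the number of files created.

    /*
     * Minimal sketch: create files in sequential order, then read the
     * directory back and stat every entry, mirroring the bonnie++ phase
     * that reported "Expected 16384 files but only got 0".
     */
    #include <dirent.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define NFILES 16384                    /* same file count bonnie++ used */

    int main(void)
    {
        const char *dir = "readdir-test";   /* hypothetical test directory */
        char path[4096];
        struct stat st;
        struct dirent *de;
        DIR *dp;
        int i, fd, count = 0;

        if (mkdir(dir, 0755) != 0 && errno != EEXIST) {
            perror("mkdir");
            return 1;
        }

        /* phase 1: create files in sequential order, as bonnie++ does */
        for (i = 0; i < NFILES; i++) {
            snprintf(path, sizeof(path), "%s/%08d", dir, i);
            fd = open(path, O_CREAT | O_WRONLY, 0644);
            if (fd < 0) {
                perror("open");
                return 1;
            }
            close(fd);
        }

        /* phase 2: read the directory back and stat each entry */
        dp = opendir(dir);
        if (!dp) {
            perror("opendir");
            return 1;
        }
        while ((de = readdir(dp)) != NULL) {
            if (!strcmp(de->d_name, ".") || !strcmp(de->d_name, ".."))
                continue;
            snprintf(path, sizeof(path), "%s/%s", dir, de->d_name);
            if (stat(path, &st) == 0)
                count++;
        }
        closedir(dp);

        printf("expected %d files, found %d via readdir\n", NFILES, count);
        return count == NFILES ? 0 : 1;
    }

Compile it with gcc and run it from inside the Ceph mount (e.g. /mnt/ceph, as in Jan's session) to compare the readdir count against the number of files created.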