RE: Ceph and bonnie++

3 servers, each with 2 OSDs, 1 MON and 1 MDS.

- Jan 
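
[Editorial note: for readers unfamiliar with Ceph's config layout, the setup Jan describes (3 hosts, each running one monitor, one MDS and two OSDs) would roughly correspond to a ceph.conf along these lines. The hostnames server0..server2 and the section naming are placeholders/assumptions, not taken from the thread:]

```ini
; Hypothetical sketch of the cluster described above.
; One MON, one MDS and two OSDs per host; hostnames are assumed.
[mon.0]
    host = server0
[mds.0]
    host = server0
[osd.0]
    host = server0
[osd.1]
    host = server0

[mon.1]
    host = server1
[mds.1]
    host = server1
[osd.2]
    host = server1
[osd.3]
    host = server1

[mon.2]
    host = server2
[mds.2]
    host = server2
[osd.4]
    host = server2
[osd.5]
    host = server2
```

With one MDS per host, this is a 3-MDS (multi-MDS) cluster, which is exactly the case Greg asks about below.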

-----Original Message-----
From: gfarnum@xxxxxxxxx [mailto:gfarnum@xxxxxxxxx] On Behalf Of Gregory Farnum
Sent: Friday 29 October 2010 21:50
To: Smets, Jan (Jan)
Cc: ceph-devel@xxxxxxxxxxxxxxx
Subject: Re: Ceph and bonnie++

On Fri, Oct 29, 2010 at 6:37 AM, Smets, Jan (Jan) <jan.smets@xxxxxxxxxxxxxxxxxx> wrote:
> client0:/mnt/ceph# bonnie -s 40 -r 10 -u root -f
> Using uid:0, gid:0.
> Writing intelligently...done
> Rewriting...done
> Reading intelligently...done
> start 'em...done...done...done...
> Create files in sequential order...done.
> Stat files in sequential order...Expected 16384 files but only got 0
> Cleaning up test directory after error.
>
>
> Any suggestions? There was a thread about this some time ago:
Are you running this with one or many MDSes? It should be fine under a single MDS.

bonnie++ is one of the workloads we've had issues with on a multi-MDS
system, although I thought we had it working at this point. I'll run our
tests again now and see if I can reproduce locally.
-Greg
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

