write performance per disk

hi,

yes, I ran a test now with 16 instances, with 16 and 32 threads each.
The absolute maximum was 1100 MB/sec, but the network was still not saturated.
All disks carried the same load, about 110 MB/sec; the maximum I got from the disks using direct access was 170 MB/sec writes.

That is not too bad a value. I will run more tests with 10 and 20 virtual machines at the same time.

Do you think 110 MB/sec per disk is the Ceph maximum (against a theoretical 170 MB/sec per disk)?
The 110 MB/sec per disk includes journal writes as well.
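The per-disk numbers can be sanity-checked with a bit of arithmetic (a rough sketch; the 2x journal factor assumes journals sit on the data disks, as in this setup):

```python
# Rough per-disk write accounting for a Ceph OSD with the journal
# on the same spindle (numbers taken from the thread above).

raw_disk_write = 170.0      # MB/s, measured with direct access
observed_per_disk = 110.0   # MB/s, seen in the 16-instance test

# With the journal on the data disk, every object write hits the
# spindle twice (journal + data), so the client-visible share is
# roughly half of what the disk actually writes.
journal_factor = 2
client_share = observed_per_disk / journal_factor   # ~55 MB/s of client data

# Fraction of the raw sequential ceiling the OSD is driving:
utilisation = observed_per_disk / raw_disk_write    # ~65%

print(f"client-visible per disk: {client_share:.0f} MB/s")
print(f"disk utilisation vs raw: {utilisation:.0%}")
```

By this model the disks are already running at roughly two thirds of their raw sequential ceiling, which suggests seek contention between journal and data writes rather than an absolute Ceph limit.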

thanks
philipp
________________________________________
From: ceph-users [ceph-users-bounces at lists.ceph.com] on behalf of Mark Nelson [mark.nelson at inktank.com]
Sent: Friday, 04 July 2014 16:10
To: ceph-users at lists.ceph.com
Subject: Re: [ceph-users] write performance per disk

On 07/03/2014 08:11 AM, VELARTIS Philipp Dörhammer wrote:
> Hi,
>
> I have a Ceph cluster set up (45 SATA disks, journals on the data disks) and get
> only 450 MB/sec sequential writes (the maximum, playing around with threads in
> rados bench) with a replica count of 2.
>
> That is about ~20 MB/sec of writes per disk (which is what you see in atop as well).
> Theoretically, with replica 2 and journals on the disks, it should be 45 x
> 100 MB/sec (SATA) / 2 (replica) / 2 (journal writes), which makes 1125 MB/sec.
> SATA disks in reality do 120 MB/sec, so the theoretical output should be even higher.
>
> I would expect between 40 and 50 MB/sec for each SATA disk.
>
> Can somebody confirm that they can reach this speed with a setup with
> journals on the SATA disks (with journals on SSD the speed should be 100 MB per disk)?
> Or does Ceph only give about 1/4 of the speed of a disk (and not the 1/2
> expected because of the journals)?
>
> My setup is 3 servers, each with: 2 x 2.6 GHz Xeons, 128 GB RAM, 15 SATA disks for
> Ceph (and SSDs for the system), 1 x 10 GbE for external traffic, and 1 x 10 GbE for
> OSD traffic.
> With reads I can saturate the network, but writes are far off. I
> would expect to at least saturate the 10 GbE with sequential writes as well.
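The arithmetic in the quoted message can be written out explicitly (a sketch of the same simple model: replica 2 doubles the writes, co-located journals double them again):

```python
# Theoretical aggregate write ceiling for the cluster described above:
# 45 SATA disks, replica 2, journals on the data disks.

disks = 45
disk_seq_write = 100.0   # MB/s, the conservative figure used in the mail
replica = 2              # each client byte is written to two OSDs
journal = 2              # each OSD write hits journal + data

theoretical = disks * disk_seq_write / replica / journal
print(theoretical)       # 1125.0 MB/s, matching the figure in the mail

observed = 450.0         # MB/s from rados bench
print(observed / theoretical)   # fraction of the theoretical ceiling achieved
```

The observed 450 MB/sec is thus about 40% of even this conservative ceiling, which is why the question about per-disk overhead is a reasonable one.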

In addition to the advice Wido is providing (which I wholeheartedly
agree with!), you might want to check your controller/disk
configuration.  If you have journals on the same disks as the data,
sometimes putting the disks into single-disk RAID0 LUNs with writeback
cache enabled can help keep journal and data writes from causing seek
contention.  This only works if you have a controller with cache and a
battery, though.

>
> Thank you
>
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>


