Re: Impact of fancy striping


 



If I understand correctly, you have one SAS disk as a journal for multiple OSDs.
If you do small synchronous writes, it will become an IO bottleneck pretty quickly:
with multiple journals on the same disk, the workload is no longer sequential writes to one journal but 4k writes to x journals, making it fully random.
I would expect a performance of 100 to 200 IOPS max.
Running iostat -x or atop should show this bottleneck immediately.
This is also the reason to go with SSDs: they have reasonable random IO performance.
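As a rough sanity check on that 100-200 IOPS figure, here is a back-of-envelope calculation. The seek time and spindle speed below are assumed typical values for a 10k RPM SAS drive, not measurements from the cluster in question:

```python
# Back-of-envelope random-write IOPS estimate for one spinning SAS disk.
# Assumed figures for a typical 10k RPM drive (not measured values).

AVG_SEEK_MS = 4.0    # assumed average seek time in milliseconds
RPM = 10_000         # assumed spindle speed

# On average the platter must rotate half a revolution before the
# target sector arrives under the head.
rotational_latency_ms = (60_000 / RPM) / 2

# Each fully random 4k write pays one seek plus rotational latency.
service_time_ms = AVG_SEEK_MS + rotational_latency_ms
iops = 1000 / service_time_ms

print(f"~{iops:.0f} IOPS for fully random 4k writes")
```

With these assumptions the estimate lands around 140 IOPS, squarely inside the 100-200 range above; a slower 7.2k RPM disk or a longer seek time pushes it toward the bottom of that range.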

Cheers,
Robert van Leeuwen

Sent from my iPad

> On 6 dec. 2013, at 17:05, "nicolasc" <nicolas.canceill@xxxxxxxxxxx> wrote:
> 
> Hi James,
> 
> Thank you for this clarification. I am quite aware of that, which is why the journals are on SAS disks in RAID0 (SSDs out of scope).
> 
> I still have trouble believing that fast-but-not-super-fast journals are the main reason for the poor performance observed. Maybe I am mistaken?
> 
> Best regards,
> 
> Nicolas Canceill
> Scalable Storage Systems
> SURFsara (Amsterdam, NL)
> 
> 
> 
> On 12/03/2013 03:01 PM, James Pearce wrote:
>>> I would really appreciate it if someone could:
>>> - explain why the journal setup is way more important than striping settings;
>> 
>> I'm not sure if it's what you're asking, but any write must be physically written to the journal before the operation is acknowledged. So the overall cluster performance (or rather write latency) is always governed by the speed of those journals. Data is then gathered up into (hopefully) larger blocks and committed to the OSDs later.
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@xxxxxxxxxxxxxx
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 



