Re: Ceph journal - isn't it a bit redundant sometimes?

The hope is that with some of Somnath's work, and perhaps additional future work, we might be able to make the journal a little smarter about how much data to keep and when to flush. While we are still ultimately bound by the backend disk performance, we might be able to absorb writes more intelligently and let more coalescing happen over longer periods than we do today.
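
For anyone who wants to experiment in the meantime, most of this behavior is governed by a handful of FileStore/journal options in ceph.conf. A rough sketch follows; the option names are the stock ones, but the values are purely illustrative, not recommendations:

    [osd]
    # how much the journal will queue before throttling clients
    journal queue max ops = 3000
    journal queue max bytes = 1073741824    # 1 GB

    # how much data goes to the journal device in a single write
    journal max write entries = 1000
    journal max write bytes = 104857600     # 100 MB

    # how long FileStore waits between syncs to the backend; a larger
    # max interval leaves more room for coalescing before the flush
    filestore min sync interval = 0.01
    filestore max sync interval = 10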

On 10/20/2015 08:41 AM, Robert LeBlanc wrote:
Given enough load, that fast journal will get filled and you will only
be as fast as the backend disk can flush (while servicing reads at the
same time). That's the situation we are in right now. We are still
seeing better performance than a raw spindle, but only 150 IOPS, not
the 15000 IOPS that the SSD can do. You are still ultimately bound by
the backend disk.
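
A back-of-the-envelope calculation shows how short that burst window
is. Assuming a 10 GB journal partition and 4 KB writes (illustrative
numbers, not our actual config):

    incoming:   15000 IOPS x 4 KB ~ 60 MB/s into the journal
    draining:     150 IOPS x 4 KB ~ 0.6 MB/s out to the spindle
    time to fill: 10 GB / ~59 MB/s ~ 170 seconds

After that, sustained throughput converges to whatever the spindle can
flush.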

Robert LeBlanc

Sent from a mobile device; please excuse any typos.

On Oct 20, 2015 2:34 AM, "Luis Periquito" <periquito@xxxxxxxxx> wrote:

    On Tue, Oct 20, 2015 at 3:26 AM, Haomai Wang <haomaiwang@xxxxxxxxx> wrote:
     > The fact is that the journal can help a lot for rbd use cases,
     > especially for small IOs. I don't think it will be a bottleneck.
     > If we just want to reduce the double write, that by itself doesn't
     > solve any performance problem.
     >

    One trick I've been using in my Ceph clusters is hiding a slow write
    backend behind a fast journal device. The write performance will be
    that of the fast (and small) journal device. This only helps on
    writes, but it can make a huge difference.
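
    For reference, that is just the usual journal-on-SSD layout. One way
    to set it up at OSD creation time (device names are placeholders):

        # data on the spindle, journal on an SSD partition
        ceph-disk prepare /dev/sdb /dev/sdc1

    or point the OSD's journal at the fast device in ceph.conf before
    creating it:

        [osd.0]
        osd journal = /dev/disk/by-partlabel/journal-osd0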

    I've even run some tests (RBD and S3) showing that the backend device
    doesn't matter: write performance is essentially the same (within
    10%) as that of the journal device fronting all the writes.
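
    A quick way to reproduce that kind of test is rados bench with small
    writes (pool name and parameters are just examples):

        # 60 seconds of 4 KB writes, 16 concurrent ops
        rados bench -p rbd 60 write -b 4096 -t 16 --no-cleanup

    then compare a run with the journal on the SSD against one with the
    journal colocated on the spindle.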

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


