Re: All SSD Ceph Journal Placement


 



Hi Jeldrik,

You are right. In this situation you are better off colocating the journals on the new SSD OSDs: a dedicated journal SSD caps the three OSDs behind it at one SSD's worth of write bandwidth, while a colocated journal only costs each SSD the double write. You can then recycle each journal SSD as an additional OSD (if its wear level allows it) once all of its attached HDD OSDs have been replaced.
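
To put rough numbers on your 1/3 vs. 1/2 estimate, here is a quick back-of-the-envelope sketch (plain Python; the 500 MB/s per-SSD write bandwidth is a made-up placeholder, substitute what you measure on your Intel DC model):

# Rough per-node write-throughput comparison of the two layouts.
# BW = sequential write bandwidth of one SSD. 500 MB/s is a made-up
# placeholder; substitute a measured value for your drive model.
BW = 500.0  # MB/s, hypothetical

# Layout A: 2 dedicated journal SSDs in front of 6 OSD SSDs.
# All writes are journaled first, so the 3 OSDs behind each journal
# SSD together get at most one SSD's worth of write bandwidth.
per_osd_a = BW / 3            # ~1/3 of an SSD per OSD
node_a = 2 * BW               # the two journal SSDs are the ceiling

# Layout B: 8 SSDs, each hosting its own journal (colocated).
# Every write hits the same SSD twice (journal + data), so each OSD
# gets ~1/2 of an SSD, but all 8 SSDs now take client writes.
per_osd_b = BW / 2
node_b = 8 * per_osd_b

print(f"dedicated: 6 OSDs at {per_osd_a:.0f} MB/s each -> {node_a:.0f} MB/s/node")
print(f"colocated: 8 OSDs at {per_osd_b:.0f} MB/s each -> {node_b:.0f} MB/s/node")
# With BW = 500: 1000 vs 2000 MB/s per node, so colocating wins on
# both aggregate bandwidth and OSD count.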
As a side note, make sure to monitor the write endurance/wear level on the SSDs.
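
If it helps, here is a minimal sketch of how you could poll the wear level with smartmontools from Python. It assumes the drives report the SMART attribute "Media_Wearout_Indicator" (ID 233 on many Intel DC SSDs; 100 = new, counting down toward 0); check the "smartctl -A /dev/sdX" output on your exact model, and note that smartctl needs root:

# Minimal wear-level poller using smartmontools' smartctl (run as root).
# Hypothetical helper -- adjust the attribute name for your model.
import subprocess

def wearout(device):
    """Return the normalized wear value for `device`, or None if absent."""
    # smartctl uses non-zero exit codes as status flags, so don't treat
    # them as fatal; just parse whatever output we got.
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        fields = line.split()
        # Attribute table columns: ID# ATTRIBUTE_NAME FLAG VALUE WORST ...
        if len(fields) >= 4 and fields[1] == "Media_Wearout_Indicator":
            return int(fields[3])
    return None

for dev in ("/dev/sda", "/dev/sdb"):  # put your journal/OSD devices here
    print(dev, wearout(dev))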

Cheers,
Maxime Guyot <mailto:maxime.guyot@xxxxxxxx>

On 20/12/16 15:59, "ceph-users on behalf of Jeldrik" <ceph-users-bounces@xxxxxxxxxxxxxx on behalf of jeldrik@xxxxxxxxxxxxx> wrote:

    Hi all,
    
    I know this topic has been discussed a few times from different
    perspectives here, but I could not quite find the answer I need.
    
    We're running a ceph cluster with the following setup:
    
    3 Nodes with 6 OSDs (HDD) and 2 Journal Disks (SSD) each. This is a more
    or less small setup for a private cluster environment. We now want to
    replace the HDDs with SSDs because the customer needs more performance.
    We use INTEL DC SSDs as journal devices and we want to use the same
    model as OSDs. Because of hardware limitations we are not able to
    upgrade the journal devices to, say, PCIe NVMe.
    
    We could easily just go and replace the HDDs one by one. But the question
    is: wouldn't the journal be the new bottleneck? The OSDs are the same
    SSD model, so they would have the same read/write performance as the
    journal, and every OSD could only get to about 1/3 of its performance
    capability, am I right? Wouldn't it be better to place the journal of
    each OSD on the very same SSD and use the old journal devices as
    additional OSDs? We would get 6 more OSDs and they would only drop to
    1/2 of their performance capabilities. At least this is what I think :-)
    
    So, am I right here that it would be better to place journal and OSD on
    the same SSD in this setup?
    
    Thanks and regards,
    
    Jeldrik
    

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


