Re: Ceph cache tier and rbd volumes/SSD primary, HDD replica crush rule!

On 12/01/2016 18:27, Mihai Gheorghe wrote:
> One more question. Seeing that the cache tier holds data on it until it
> reaches the % ratio, I suppose I must set replication to 2 or higher on
> the cache pool so as not to lose hot data that has not yet been written
> to the cold storage in case of a drive failure, right?
>
> Also, will there be any performance penalty if I set the OSD journal on
> the same SSD as the OSD? I now have one SSD dedicated to journaling
> the SSD OSDs. I know that in the case of mechanical drives this is a
> problem!
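
For reference on the first question, replication and flush/evict
thresholds on a cache pool are set like on any other pool. A minimal
sketch, assuming a hypothetical cache pool named "hot-cache" (the ratios
are placeholders, not recommendations):

    # keep 2 copies of hot data, keep serving I/O with 1 copy left
    ceph osd pool set hot-cache size 2
    ceph osd pool set hot-cache min_size 1
    # flush dirty objects to the backing pool once they reach 40% of the
    # configured cache target, start evicting clean objects at 80%
    ceph osd pool set hot-cache cache_target_dirty_ratio 0.4
    ceph osd pool set hot-cache cache_target_full_ratio 0.8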

With traditional 7200rpm SATA HDD OSDs, one DC-grade SSD for 4 to 6 OSDs
is usually advised, because it has both the bandwidth and the IOPS
needed to absorb the writes the HDDs themselves can handle. With
SSD-based OSDs I would advise against separating the journals from the
filestores (a short command-level sketch follows the list below), because:

- if you don't hit Ceph bottlenecks, it can be difficult to find a
combination of journal and filestore SSD models where one journal SSD
handles several filestores efficiently (in performance, cost and
endurance). You could end up with one journal SSD per filestore SSD to
get the best behaviour, at which point you would simply be wasting space
and reliability by underusing the journal SSDs: the theoretical IOPS
limit would be the same as giving every SSD both a filestore and its
journal, which provides nearly twice the space and, on a hardware
failure, doesn't render a second SSD useless in addition to the one
failing.
- currently Ceph itself is probably the bottleneck most of the time with
SSD-based pools, so you probably won't be able to saturate your
filestore SSDs anyway. Dedicating SSDs to journals may not help
individual OSD performance, but it does reduce the total number of OSDs,
and you want as many OSDs as possible to get the highest aggregate IOPS,
- performance would be less predictable: depending on the workload, you
could alternately hit bottlenecks on the journal SSDs or on the
filestore SSDs.
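
To make the colocated layout concrete, a minimal sketch using
filestore-era ceph-disk (the device names are placeholders):

    # colocated: with only a data device given, ceph-disk creates both a
    # data partition and a journal partition on the same SSD
    ceph-disk prepare /dev/sdb
    # separate journal SSD, which the points above argue against for
    # SSD-based OSDs
    ceph-disk prepare /dev/sdc /dev/sdd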

Best regards,

Lionel



