Re: Number of SSD for OSD journal

On 16.12.2014 10:53, Daniel Schwager wrote:
> Hallo Mike,
> 
>> There is also another way:
>> * for CONF 2,3, replace the 200GB SSD with an 800GB one and add another 1-2 SSDs to
>> each node.
>> * make a tier1 read-write cache on the SSDs
>> * you can also put the journal partitions on them if you wish - then data
>> moves from SSD to SSD before settling down on the HDDs
>> * on the HDDs you can make an erasure pool or a replica pool
> 
> Do you have any experience (performance?) with SSDs as a caching tier1? Maybe some small benchmarks? From the mailing list, I "feel" that SSD tiering is not used much in production.
> 
> regards
> Danny
> 
> 

No. But I think it's better than using SSDs only for journals. Look at
StorPool or Nutanix (in some way) - they use SSDs both as storage and as
a long-lived cache.

Cache pool tiering is a new feature in Ceph, introduced in Firefly.
That explains why cache tiering hasn't been used much in production so far.
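
For reference, the basic wiring of such a cache tier in Firefly looks
roughly like this. The pool names are just placeholders, and it assumes
your CRUSH rules already map "hot-cache" to the SSD OSDs and
"cold-storage" to the HDD OSDs; the threshold values are examples only:

  # backing pool on the HDDs (erasure-coded here, a replica pool also works)
  ceph osd pool create cold-storage 128 128 erasure
  # cache pool on the SSDs
  ceph osd pool create hot-cache 128 128

  # attach the cache pool in front of the backing pool
  ceph osd tier add cold-storage hot-cache
  ceph osd tier cache-mode hot-cache writeback
  ceph osd tier set-overlay cold-storage hot-cache

  # hit-set tracking and flush/evict thresholds
  ceph osd pool set hot-cache hit_set_type bloom
  ceph osd pool set hot-cache hit_set_count 1
  ceph osd pool set hot-cache hit_set_period 3600
  ceph osd pool set hot-cache target_max_bytes 500000000000
  ceph osd pool set hot-cache cache_target_dirty_ratio 0.4
  ceph osd pool set hot-cache cache_target_full_ratio 0.8

With the overlay set, clients keep talking to cold-storage as usual and
Ceph promotes, flushes and evicts objects through hot-cache automatically.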

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




