On Tue, 16 Dec 2014 12:10:42 +0300 Mike wrote:

> On 16.12.2014 10:53, Daniel Schwager wrote:
> > Hello Mike,
> >
> >> There is also another way:
> >> * for CONF 2,3 replace the 200GB SSD with an 800GB one and add another
> >> 1-2 SSDs to each node.
> >> * make a tier-1 read-write cache on the SSDs
> >> * you can also put journal partitions on them if you wish - then data
> >> will move from SSD to SSD before settling down on the HDDs
> >> * on the HDDs you can make an erasure pool or a replica pool
> >
> > Do you have some experience (performance?) with SSDs as a caching
> > tier 1? Maybe some small benchmarks? From the mailing list, I "feel"
> > that SSD tiering is not much used in production.
> >
> > regards
> > Danny
> >
>
> No. But I think it's better than using SSDs only for journals. Look at
> StorPool or Nutanix (in some ways) - they use SSDs as storage / as a
> long-lived cache.
>
Unfortunately a promising design doesn't make a well-rounded, working
solution.

> Cache pool tiering is a new feature in Ceph, introduced in Firefly.
> That explains why cache tiering hasn't been used in production so far.
>
If you'd followed the various discussions here, you'd know that SSD-based
cache tiers are pointless (from a performance perspective) in Firefly and
still riddled with bugs in Giant, with only minor improvements.

They show great promise/potential and I'm looking forward to using them,
but right now (and probably for the next 1-2 releases) the best bang for
the buck in speeding up Ceph is classic SSD journals for writes and lots
of RAM for reads.

Christian

--
Christian Balzer        Network/Systems Engineer
chibi@xxxxxxx           Global OnLine Japan/Fusion Communications
http://www.gol.com/
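
P.S.: For anyone who wants to experiment with a cache tier anyway, the
wiring in Firefly looks roughly like the sketch below. Pool names, PG
counts and thresholds are placeholders, and a CRUSH rule that selects the
SSD OSDs is assumed to exist already:

    # backing pool on the HDDs (erasure coded) and hot pool on the SSDs
    ceph osd pool create cold-hdd 512 512 erasure
    ceph osd pool create hot-ssd 128 128
    ceph osd pool set hot-ssd crush_ruleset 1      # assumes rule 1 maps to the SSD OSDs

    # attach the SSD pool as a writeback (read-write) cache tier
    ceph osd tier add cold-hdd hot-ssd
    ceph osd tier cache-mode hot-ssd writeback
    ceph osd tier set-overlay cold-hdd hot-ssd

    # the tiering agent needs hit sets and flush/evict targets
    ceph osd pool set hot-ssd hit_set_type bloom
    ceph osd pool set hot-ssd hit_set_count 1
    ceph osd pool set hot-ssd hit_set_period 3600
    ceph osd pool set hot-ssd target_max_bytes 200000000000   # example value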
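
And for the classic SSD journal route, a minimal sketch with ceph-deploy;
the host and device names are made up (sdb/sdc are the HDD OSDs, sdf is
the journal SSD, already partitioned), and the journal size is an example:

    # in ceph.conf, [osd] section:
    #   osd journal size = 10240    ; 10 GB journal partitions

    # create two OSDs on node1, each with its journal on an SSD partition
    ceph-deploy osd create node1:sdb:/dev/sdf1 node1:sdc:/dev/sdf2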