On 2015-07-10T15:20:23, Jacek Jarosiewicz <jjarosiewicz@xxxxxxxxxxxxx> wrote:
> We have tried both - you can see a performance gain with either, but we
> finally went toward the ceph cache tier. It's much more flexible and gives
> similar gains in terms of performance.
>
> The downside to bcache is that you can't use it on a drive that already
> has data - only new, clean partitions can be added - and (although I've
> read that bcache is quite resilient) you cannot access the raw filesystem
> once bcache is added to your partition (data is only accessible through
> bcache, so potentially if bcache goes corrupt, your data goes corrupt).
>
> The downside to flashcache is that you can only combine a partition on an
> SSD with another partition on a spinning drive, so you have to think ahead
> when planning your disc layout, i.e.: if you partition your SSD into `n'
> partitions so that it can cache your `n' spinning drives, and you then
> want to add another spinning drive, you either had to have left some space
> on the original SSD, or you have to add a new one. And if you did leave
> some space, it's just been sitting there waiting for a new spinning drive.
>
> With a cache tier you can have your cake and eat it too :) - add/remove
> SSDs on demand, and add/remove spinning drives as you wish - just tune the
> pool sizes after you change your drive layout.

Great feedback, too. The point about bcache is very valid. But then, a
cache tier does require a lot more tuning, has many more moving parts,
needs more memory, and makes for a more complex ceph setup. (I was
specifically wondering whether bcache could help in front of SMR drives,
actually.)

But it's really useful to know you're seeing similar speed-ups with cache
tiering.

Regards,
    Lars

--
Architect Storage/HA
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Dilip Upmanyu, Graham Norton, HRB 21284 (AG Nürnberg)
"Experience is the name everyone gives to their mistakes."
    -- Oscar Wilde

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
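For readers who want to try the cache-tier approach Jacek describes, the
basic setup is only a few commands. This is a minimal sketch, not a tuned
configuration: the pool names (`cold-pool` on spinning drives, `hot-pool`
on SSDs) are placeholders, and the size limit must be adjusted to your
actual SSD capacity.

```shell
# Attach the SSD pool as a cache tier in front of the backing pool.
ceph osd tier add cold-pool hot-pool

# Writeback mode lets the SSD tier absorb both reads and writes.
ceph osd tier cache-mode hot-pool writeback

# Redirect client traffic for cold-pool through the cache tier.
ceph osd tier set-overlay cold-pool hot-pool

# The cache tier needs a hit set to track object accesses.
ceph osd pool set hot-pool hit_set_type bloom

# Bound the cache so flushing/eviction kicks in - tune to your SSDs.
ceph osd pool set hot-pool target_max_bytes 500000000000
```

Removing the tier again after a drive-layout change is roughly the
reverse: flush the cache (e.g. with `rados -p hot-pool
cache-flush-evict-all`), then `ceph osd tier remove-overlay cold-pool`
and `ceph osd tier remove cold-pool hot-pool`.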