Hi Arne and James,

Ah, I misunderstood James' suggestion. Using bcache with SSDs can indeed be another viable alternative to SSD journal partitions. I think I will ultimately need to test the options myself, since very few people have experience with either cache tiering or bcache.

Thanks,
Benjamin

From: Arne Wiebalck [mailto:Arne.Wiebalck@xxxxxxx]
Sent: Tuesday, July 08, 2014 11:27 AM
To: Somhegyi Benjamin
Cc: ceph-users at lists.ceph.com
Subject: Re: Using large SSD cache tier instead of SSD journals?

Hi Benjamin,

Unless I misunderstood, I think the suggestion was to use bcache devices on the OSDs (not on the clients), so what you use them for in the end doesn't really matter.

Setting up a bcache device is pretty similar to running a mkfs, and once set up, bcache devices come up and can be mounted like any other device.

Cheers,
 Arne

--
Arne Wiebalck
CERN IT

On 08 Jul 2014, at 11:01, Somhegyi Benjamin <somhegyi.benjamin at wigner.mta.hu> wrote:

Hi James,

Yes, I've checked bcache, but as far as I can tell you need to manually configure and register the backing devices and attach them to the cache device, which is not really suitable for a dynamic environment (like RBD devices for cloud VMs).

Benjamin

-----Original Message-----
From: James Harper [mailto:james@xxxxxxxxxxxxxxxxx]
Sent: Tuesday, July 08, 2014 10:17 AM
To: Somhegyi Benjamin; ceph-users at lists.ceph.com
Subject: RE: Using large SSD cache tier instead of SSD journals?

Have you considered bcache? It has been in the kernel since 3.10, I think. It would be interesting to see comparisons between no SSD, journal on SSD, and bcache with SSD (with the journal on the same filesystem as the OSD).

James

_______________________________________________
ceph-users mailing list
ceph-users at lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
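
For context on the setup Arne describes on the OSD side: creating a bcache device, registering it, and attaching the backing disk to an SSD cache set is typically a handful of commands. This is only a rough sketch assuming bcache-tools is installed; /dev/nvme0n1 (SSD), /dev/sdb (data disk), the cache-set UUID, and the mount path are placeholders, not values taken from this thread.

    # format the SSD as a cache device and the data disk as a backing device
    make-bcache -C /dev/nvme0n1
    make-bcache -B /dev/sdb

    # register both with the kernel (udev normally does this automatically at boot)
    echo /dev/nvme0n1 > /sys/fs/bcache/register
    echo /dev/sdb > /sys/fs/bcache/register

    # attach the backing device to the cache set
    # (cset.uuid can be read with: bcache-super-show /dev/nvme0n1)
    echo <cset-uuid> > /sys/block/bcache0/bcache/attach

    # from here on /dev/bcache0 behaves like any other block device
    mkfs.xfs /dev/bcache0
    mount /dev/bcache0 /var/lib/ceph/osd/ceph-0

Once /dev/bcache0 exists, the OSD filesystem (and its journal, if kept on the same filesystem as James suggests) simply lives on top of it, which is what makes the setup feel "pretty similar to a mkfs".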