On Thu, Mar 01, 2018 at 04:57:59PM +0100, Hervé Ballans wrote:
:Can we find recent benchmarks on this performance issue related to the
:location of WAL/DBs ?

I don't have benchmarks, but I do have some anecdotes.

We previously had 4T NL-SAS (7.2k) filestore data drives with journals on
SSD (5:1 ssd:spinner). We had unpleasant latency, and at about 60% space
utilization we were at 80%+ IOPS utilization.

We decided to go with smaller 2T, but still slow 7.2k NL-SAS, drives for the
next expansion to spread the IOPS over more (but still cheap) spindles. This
coincided with bluestore going official in Luminous, so we did not spec SSD.
This worked out fairly well: the 2T drives had similar but slightly lower
IOPS utilization and dramatically improved latency.

Based on this we decided to do rolling conversions of the older 4T servers
to bluestore (they were already on Luminous), removing the SSD layer with an
eye to making a performance pool out of the SSDs later. This went poorly.
Latency improved to the same extent we saw on the newer 2T drives, but IOPS
frequently flatlined at 100% during deep scrubs, resulting in slow requests,
blocked PGs, and very sad VMs on top of it all.

We went back and re-formatted those OSDs to use bluestore with the DB on
SSD. This kept the improved latency characteristics and dropped IOPS on the
spinning disks back to about the same as filestore, maybe slightly less; not
great, but acceptable.

Much of this suffering is due to our budgetary requirements being clearer
than our performance requirements. But at least for slow spinners the SSD
can make a big impact; presumably, if we had faster disks, the SSD would
have a more marginal effect.

-Jon
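
For anyone attempting a similar rolling conversion, the per-OSD steps looked
roughly like the sketch below. This is a simplified outline rather than our
exact procedure; the OSD ID and the device paths are placeholders you would
substitute for your own layout.

    #!/bin/bash
    # Sketch: convert one filestore OSD to bluestore with its DB on SSD.
    # ID, DATA_DEV and DB_DEV are placeholders; adjust for your own hosts.
    ID=12                       # OSD to convert
    DATA_DEV=/dev/sdb           # 7.2k NL-SAS data drive
    DB_DEV=/dev/nvme0n1p1       # SSD/NVMe partition for block.db

    # Drain the OSD and wait until its data is safe to remove.
    ceph osd out ${ID}
    while ! ceph osd safe-to-destroy osd.${ID}; do sleep 60; done

    # Stop the daemon and wipe the old filestore layout.
    systemctl stop ceph-osd@${ID}
    ceph-volume lvm zap ${DATA_DEV} --destroy

    # Destroy the OSD entry but reuse the ID so it returns to the same
    # CRUSH position.
    ceph osd destroy ${ID} --yes-i-really-mean-it

    # Recreate it as bluestore, with the RocksDB/WAL on the SSD partition.
    ceph-volume lvm create --bluestore --osd-id ${ID} \
        --data ${DATA_DEV} --block.db ${DB_DEV}

Doing this one OSD (or one failure domain) at a time keeps the rebalance
traffic bounded while the cluster backfills onto the recreated OSD.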