Some people are doing hyperconverged Ceph, colocating QEMU virtualization
with Ceph OSDs. That is relevant for a decent subset of people here, so
knowing the degree of performance degradation is useful.

-- Adam

On Thu, Jan 11, 2018 at 11:38 AM, <ceph@xxxxxxxxxxxxxx> wrote:
> I don't understand how any of this is related to Ceph.
>
> Ceph runs on dedicated hardware; there is nothing there except Ceph, and
> the Ceph daemons already have full power over Ceph's data. No arbitrary
> code execution is allowed on these nodes.
>
> Thus, Spectre and Meltdown are meaningless for a Ceph node, and the
> mitigations should be disabled.
>
> Is this wrong?
>
>
> On 01/11/2018 06:26 PM, Dan van der Ster wrote:
>>
>> Hi all,
>>
>> Is anyone getting useful results with your benchmarking? I've prepared
>> two test machines/pools and don't see any definitive slowdown with
>> patched kernels from CentOS [1].
>>
>> I wonder if Ceph will be somewhat tolerant of these patches, similarly
>> to what's described here:
>> http://www.scylladb.com/2018/01/07/cost-of-avoiding-a-meltdown/
>>
>> Cheers, Dan
>>
>> [1] Ceph v12.2.2, FileStore OSDs, kernels 3.10.0-693.11.6.el7.x86_64
>> vs the ancient 3.10.0-327.18.2.el7.x86_64

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
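
For anyone reproducing these benchmarks: it helps to record whether the
mitigations are actually active on each node before and after a run. Below
is a minimal sketch, assuming a RHEL/CentOS 7 kernel with the backported
debugfs controls (/sys/kernel/debug/x86/*_enabled, as shipped in the
3.10.0-693.11.6 errata kernels); other kernels expose a different interface
under /sys/devices/system/cpu/vulnerabilities/ instead. Run as root with
debugfs mounted at /sys/kernel/debug.

    #!/usr/bin/env python
    """Report the runtime state of the Spectre/Meltdown mitigation knobs.

    Assumes the RHEL/CentOS 7 backported debugfs interface; paths may
    differ on other kernels. Requires root and a mounted debugfs.
    """
    import os

    # RHEL 7's initial patches expose one debugfs file per mitigation;
    # "1" means the mitigation is enabled, "0" means disabled.
    KNOBS = {
        "pti_enabled": "Meltdown page-table isolation (KPTI)",
        "ibrs_enabled": "Spectre v2: restricted indirect branch speculation",
        "ibpb_enabled": "Spectre v2: indirect branch prediction barrier",
    }

    DEBUGFS_DIR = "/sys/kernel/debug/x86"

    for knob, description in sorted(KNOBS.items()):
        path = os.path.join(DEBUGFS_DIR, knob)
        try:
            with open(path) as f:
                value = f.read().strip()
        except IOError:
            value = "unavailable (knob missing or debugfs not mounted)"
        print("%-13s %-50s %s" % (knob, description, value))

A before/after comparison then runs the same workload against both states,
e.g. "rados bench -p <pool> 60 write" on an otherwise idle test pool, along
the lines of Dan's kernel comparison above.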