Hi Mark,

I missed the meeting. I know the recording will be posted on the channel later, but I was curious about the details of the kernel bug that affects Intel NVMe drives. The RH patch link in the pad doesn't seem to work for me; could you please share more details on the issue, how it manifests with Ceph, symptoms, findings, etc.?

We are chasing a perf issue on OpenStack VMs with Cinder-backed RBD volumes on Jewel (sporadic high disk util and high I/O waits on the mounted volumes in the VMs) that seems to go away when deep scrubs are disabled. We are trying to validate that theory on our test clusters at this point (a rough sketch of the kind of check we are running is appended below the quoted mail). We use Intel NVMe cards for our journals, hence the interest.

Thanks,
-Pavan

On 11/29/18, 11:04 AM, "ceph-devel-owner@xxxxxxxxxxxxxxx on behalf of Mark Nelson" <mnelson@xxxxxxxxxx> wrote:

    Hi Folks,

    Perf meeting at the usual 8AM PST time (ie right now!). Only agenda
    item so far is a kernel bug found that affects Intel NVMe drives and
    Ceph deployed on top of LVM.

    Etherpad: https://pad.ceph.com/p/performance_weekly
    Bluejeans: https://bluejeans.com/908675367

    Thanks,
    Mark
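
P.S. For reference, a minimal sketch of the before/after check we are running on the test clusters. It assumes the stock ceph CLI ("ceph osd set/unset nodeep-scrub") and sysstat's iostat are available on the node it runs from; the script itself is illustrative only, not our actual tooling.

#!/usr/bin/env python3
# Illustrative sketch: toggle deep scrubbing cluster-wide and sample
# iostat, so disk util / await can be compared with and without deep
# scrubs. Assumes the 'ceph' CLI and 'iostat' (sysstat) are installed
# and the script has enough privileges to talk to the cluster.
import subprocess
import time

def set_deep_scrub(enabled: bool) -> None:
    # 'ceph osd set nodeep-scrub' disables deep scrubs cluster-wide;
    # 'ceph osd unset nodeep-scrub' re-enables them.
    action = "unset" if enabled else "set"
    subprocess.run(["ceph", "osd", action, "nodeep-scrub"], check=True)

def sample_iostat(interval: int = 10, samples: int = 5) -> str:
    # Extended per-device stats (%util, await) over a few intervals.
    out = subprocess.run(
        ["iostat", "-x", str(interval), str(samples)],
        check=True, capture_output=True, text=True,
    )
    return out.stdout

if __name__ == "__main__":
    set_deep_scrub(False)      # flag deep scrubs off
    time.sleep(60)             # give in-flight deep scrubs time to drain
    print("=== nodeep-scrub set ===")
    print(sample_iostat())
    set_deep_scrub(True)       # restore normal scrubbing behaviour
    print("=== deep scrubs re-enabled ===")
    print(sample_iostat())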