I've also seen this behavior sometimes (on real hardware, without VMs or
Ceph involved).

Somewhat related: please don't use legacy VirtIO block devices
(virtio-blk), they suck for various reasons: slow, no support for TRIM, ...
Use a VirtIO SCSI controller instead (a sketch of the libvirt config
follows the quoted message below).

Paul

--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Sun, Jan 13, 2019 at 6:08 PM Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> wrote:
>
> Is it normal or expected that LVM can show high utilization while the
> disk the logical volume is on does not? Or do I still need to do custom
> optimizations for the Ceph RBD backend?
> https://www.redhat.com/archives/linux-lvm/2013-October/msg00022.html
>
> Atop:
>
> LVM | Groot-LVroot | busy 74% | read 0 | write 428 |
>       KiB/r 0 | KiB/w 11 | MBr/s 0.0 | MBw/s 0.5 | avio 17.2 ms |
> DSK | vda          | busy  5% | read 0 | write 585 |
>       KiB/r 0 | KiB/w  8 | MBr/s 0.0 | MBw/s 0.5 | avio 0.89 ms |
>
> [@~]# pvs
>   PV         VG     Fmt  Attr PSize  PFree
>   /dev/vda2  VGroot lvm2 a--  <7.61g 788.00m
>
> Luminous 12.2.10; all hosts (Ceph nodes, libvirt host, test VM) run
> default CentOS 7.6 with the default kernel.
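
For reference, a minimal libvirt domain XML sketch of the switch Paul
suggests, assuming an RBD-backed disk: the pool/image name, monitor host,
cephx user and secret UUID below are placeholders, not values from this
thread.

  <!-- virtio-scsi controller; the disk below attaches to it via bus='scsi' -->
  <controller type='scsi' model='virtio-scsi'/>

  <disk type='network' device='disk'>
    <!-- discard='unmap' lets fstrim inside the guest reach the RBD image,
         which the legacy virtio-blk device did not support -->
    <driver name='qemu' type='raw' discard='unmap'/>
    <source protocol='rbd' name='rbd/vm-disk'>
      <host name='ceph-mon1' port='6789'/>
    </source>
    <auth username='libvirt'>
      <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
    </auth>
    <!-- bus='scsi' instead of bus='virtio' is what moves the disk
         off virtio-blk and onto the virtio-scsi controller -->
    <target dev='sda' bus='scsi'/>
  </disk>

With this, the guest sees the volume as /dev/sda (SCSI) rather than
/dev/vda (virtio-blk), the device visible in the atop output above.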