> Do you by any chance have your OSDs placed at a local directory path
> rather than on a non-utilized physical disk?

No, I have 18 disks per server, and each OSD is mapped to a physical disk. Here is the output from one server:

ansible@zrh-srv-m-cph02:~$ df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/vg01-root     28G  4.5G   22G  18% /
none                     4.0K     0  4.0K   0% /sys/fs/cgroup
udev                      48G  4.0K   48G   1% /dev
tmpfs                    9.5G  1.3M  9.5G   1% /run
none                     5.0M     0  5.0M   0% /run/lock
none                      48G   20K   48G   1% /run/shm
none                     100M     0  100M   0% /run/user
/dev/mapper/vg01-tmp     4.5G  9.4M  4.3G   1% /tmp
/dev/mapper/vg01-varlog  9.1G  5.1G  3.6G  59% /var/log
/dev/sdf1                932G   15G  917G   2% /var/lib/ceph/osd/ceph-3
/dev/sdg1                932G   15G  917G   2% /var/lib/ceph/osd/ceph-4
/dev/sdl1                932G   13G  919G   2% /var/lib/ceph/osd/ceph-8
/dev/sdo1                932G   15G  917G   2% /var/lib/ceph/osd/ceph-11
/dev/sde1                932G   15G  917G   2% /var/lib/ceph/osd/ceph-2
/dev/sdd1                932G   15G  917G   2% /var/lib/ceph/osd/ceph-1
/dev/sdt1                932G   15G  917G   2% /var/lib/ceph/osd/ceph-15
/dev/sdq1                932G   12G  920G   2% /var/lib/ceph/osd/ceph-12
/dev/sdc1                932G   14G  918G   2% /var/lib/ceph/osd/ceph-0
/dev/sds1                932G   17G  916G   2% /var/lib/ceph/osd/ceph-14
/dev/sdu1                932G   14G  918G   2% /var/lib/ceph/osd/ceph-16
/dev/sdm1                932G   15G  917G   2% /var/lib/ceph/osd/ceph-9
/dev/sdk1                932G   17G  915G   2% /var/lib/ceph/osd/ceph-7
/dev/sdn1                932G   14G  918G   2% /var/lib/ceph/osd/ceph-10
/dev/sdr1                932G   15G  917G   2% /var/lib/ceph/osd/ceph-13
/dev/sdv1                932G   14G  918G   2% /var/lib/ceph/osd/ceph-17
/dev/sdh1                932G   17G  916G   2% /var/lib/ceph/osd/ceph-5
/dev/sdj1                932G   14G  918G   2% /var/lib/ceph/osd/ceph-30
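
(For completeness, a quick way to cross-check that every OSD sits on its own block device rather than a directory, assuming ceph-disk is still shipped on this release; the grep pattern just matches the mount points shown above:

    # Correlate block devices with their mount points
    lsblk -o NAME,SIZE,MOUNTPOINT | grep /var/lib/ceph/osd

    # Or ask Ceph's own tooling how each disk/partition is used
    sudo ceph-disk list

Both should show one dedicated partition per OSD, matching the df output.)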