Hi ceph users,
I am using CephFS for file storage and I have noticed that the data is distributed very unevenly across the OSDs.
As a result, when the CephFS file system as a whole is only about 60% full, some OSDs already hit the 95% full condition and no more data can be written to the cluster.
Is there any way to force a more even distribution of PGs across OSDs? I am using the default CRUSH map, with two levels (root/host). Could any changes to the CRUSH map help? I would really like to get higher overall disk utilization than 60% without one of the 90 disks filling up so early.
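
For reference, this is roughly how I have been measuring the imbalance. It is just a quick sketch that assumes 'ceph osd df --format json' returns a 'nodes' list with 'name', 'utilization' and 'pgs' fields (that is what I see here, but the field names may differ between releases):

    #!/usr/bin/env python
    # Summarize per-OSD utilization and PG counts from 'ceph osd df'.
    # Assumes the JSON output has a 'nodes' list whose entries carry
    # 'name', 'utilization' (percent) and 'pgs' -- adjust if your
    # release formats this differently.
    import json
    import subprocess

    out = subprocess.check_output(['ceph', 'osd', 'df', '--format', 'json'])
    nodes = json.loads(out.decode('utf-8'))['nodes']

    nodes.sort(key=lambda n: n['utilization'])
    least, most = nodes[0], nodes[-1]
    print('least full: %s  %.1f%%  (%d PGs)'
          % (least['name'], least['utilization'], least['pgs']))
    print('most full:  %s  %.1f%%  (%d PGs)'
          % (most['name'], most['utilization'], most['pgs']))
    print('mean util:  %.1f%%'
          % (sum(n['utilization'] for n in nodes) / float(len(nodes))))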
Thanks,
Andras