I have seen this, and some of our larger customers have as well. I was using 8 TB HDDs, and small tests against a freshly deployed HDD setup showed very good performance. I then loaded the Ceph cluster so that each 8 TB HDD held about 4 TB of data and reran the same tests; performance was cut in half. This was with Ceph's default settings for how it creates the directories and subdirectories on each OSD. You can flatten that directory structure so it is wider rather than deeper, which improves performance. Check out the filestore_merge_threshold and filestore_split_multiple settings.

Rick
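For reference, a minimal ceph.conf sketch of the two settings mentioned above. The values 40 and 8 are illustrative tuning choices, not the defaults; note that changing them only affects future splits, so existing OSD directory trees are not reorganized automatically.

```ini
[osd]
# Defaults are filestore_merge_threshold = 10 and filestore_split_multiple = 2.
# A PG subdirectory is split once it holds roughly
#   16 * filestore_split_multiple * abs(filestore_merge_threshold)
# files, so raising these keeps the on-disk tree wider and shallower.
filestore_merge_threshold = 40
filestore_split_multiple = 8
```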
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com