Good day
I'm currently administering a Ceph cluster that consists of both HDDs and SSDs. The CRUSH rule for cephfs_data (EC) currently writes to both device classes (HDD + SSD). Since we're experiencing high disk latency, I would like to change this so that cephfs_metadata (non-EC) writes only to SSD and cephfs_data (erasure-coded, "EC") writes only to HDD.
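For context, the device-class-based rules I have in mind would look roughly like this (the rule names, profile name and k/m values below are placeholders, not our actual setup):

    # replicated rule restricted to the ssd device class, for cephfs_metadata
    ceph osd crush rule create-replicated metadata-ssd default host ssd

    # EC profile pinned to the hdd device class (k/m are placeholders),
    # plus an EC rule generated from it, for cephfs_data
    ceph osd erasure-code-profile set ec-hdd-profile k=4 m=2 \
        crush-failure-domain=host crush-device-class=hdd
    ceph osd crush rule create-erasure data-ec-hdd ec-hdd-profile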
1) The first option that comes to mind is to move each pool to a new CRUSH rule, but this would mean moving a tonne of data around (a rough sketch of what I mean follows after point 2). How is disk space accounted for during such a move: if I have 600 TB in an EC pool, do I need another 600 TB of capacity to move the data over, or does the old placement shrink as the new one grows while the data moves?
2) Alternatively, I would like to know whether the following is possible:
i.e. remove the SSDs from the host buckets under the default root (leaving everything else as it is) and move the metadata pool to an SSD-based CRUSH rule (also sketched below).
However, I'm not sure whether this is possible, since it means deleting leaves from buckets under our default root, and it raises the question: when a new SSD OSD is added later, where does it end up?
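What I mean under point 1 is simply pointing the existing pools at the new rules and letting CRUSH rebalance the data, roughly (rule names are the placeholders from above):

    # reassign the pools to the new device-class rules
    ceph osd pool set cephfs_metadata crush_rule metadata-ssd
    ceph osd pool set cephfs_data crush_rule data-ec-hdd

    # then watch the recovery/backfill and per-OSD utilisation
    ceph -s
    ceph osd df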
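And what I mean under point 2, roughly (the OSD ID and rule name are illustrative only; this is the part I suspect may not be sane, hence the question):

    # detach an SSD OSD (a leaf) from its host bucket under the default root
    ceph osd crush remove osd.42
    # ...repeat for the remaining SSD OSDs, then move the metadata pool
    ceph osd pool set cephfs_metadata crush_rule metadata-ssd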
crush map - http://pastefile.fr/6f37e7e594a61d0edd9dc947349c756b
ceph osd pool ls detail - http://pastefile.fr/0f215e1252ec58c144d9abfe1688adc8
osd tree - http://pastefile.fr/2acdd377a2db021b6af2996929b85082
If anyone has any input, it would be greatly appreciated.
Regards
--
Jeremi-Ernst Avenant, Mr.
Cloud Infrastructure Specialist
Inter-University Institute for Data Intensive Astronomy
5th Floor, Department of Physics and Astronomy,
University of Cape Town
Tel: 021 959 4137
Web: www.idia.ac.za
E-mail (IDIA): jeremi@xxxxxxxxxx
Rondebosch, Cape Town, 7600