Replacing an SSD (metadata, RocksDB) associated with HDDs (OSD block)

Hi all,

I have configured a Ceph cluster to serve as an object store using a combination of SSDs and HDDs: block.db lives on LVM on the SSDs, and the OSD block devices are on the HDDs. Each SSD stores the metadata (RocksDB) for five HDDs, which hold the OSD blocks.

During a disk-replacement test, when one SSD holding block.db fails, all five OSDs associated with it go down. Replacing and recovering the failed SSD appears to require redeploying all five OSDs, which takes too long to recover the data and has a significant performance impact.
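For reference, the OSDs were created with a ceph-volume invocation along these lines (the device names are only examples; the real devices differ):

    # One SSD (/dev/sdg) carries the block.db volumes for five HDD-backed OSDs.
    ceph-volume lvm batch --bluestore \
        /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf \
        --db-devices /dev/sdg

ceph-volume slices the SSD into five logical volumes and attaches one to each OSD as its block.db.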
So here are two questions:

- With eight SSDs and forty HDDs, is it common practice to put block.db on each SSD and associate it with five HDDs holding the OSD blocks?
  Or would it be better to use the SSDs only for the RGW index (a sketch of that layout follows this list)? I am also curious about the performance difference between the two configurations.
- If the SSDs carry block.db as described above, is it necessary to redeploy the five associated OSDs (HDDs) when one SSD fails (the redeployment steps I have been testing are sketched below)?
  Also, is there a way to reconstruct the metadata on the newly replaced SSD from the five intact HDDs?
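For the first question, the alternative layout I have in mind would pin the RGW bucket index pool to the SSD device class with a CRUSH rule, roughly like this (the rule name is made up, and the pool name assumes a default RGW zone):

    # Replicated CRUSH rule restricted to OSDs with device class "ssd".
    ceph osd crush rule create-replicated rgw-index-ssd default host ssd
    # Point the RGW bucket index pool at that rule.
    ceph osd pool set default.rgw.buckets.index crush_rule rgw-index-ssd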
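For the second question, the per-OSD redeployment I have been testing looks roughly like this (the OSD ID, devices, and LV names are examples):

    # Identify which OSDs had their block.db on the failed SSD.
    ceph-volume lvm list
    # Remove one affected OSD from the cluster entirely.
    ceph osd purge 12 --yes-i-really-mean-it
    # Wipe its HDD and recreate it with block.db on the replacement SSD.
    ceph-volume lvm zap --destroy /dev/sdb
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db ssd_vg/db1

Doing this for all five OSDs triggers a full backfill of their data, which is the slow recovery I mentioned above.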

As a novice with Ceph clusters, I would appreciate advice from experienced users.
Thank you.