Hi,
- Is it common practice to configure SSDs with block.db and
associate each of them with five HDDs to store OSD blocks when using
eight SSDs and forty HDDs?
Yes, it is common practice.
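For reference, a minimal sketch of how such a layout is often deployed
with ceph-volume, assuming hypothetical device names (five HDDs sharing
one SSD for their block.db LVs):

  # Hypothetical devices: five HDDs for data, one NVMe SSD that
  # ceph-volume carves into one block.db LV per OSD.
  ceph-volume lvm batch /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf \
      --db-devices /dev/nvme0n1

A cephadm OSD service spec with data_devices/db_devices filters
expresses the same layout declaratively.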
Or would it be better to only store the rgw index on SSDs? I am
also curious about the difference in performance between these
configurations.
With performance questions the answer is usually "it depends". If
performance is a concern, why not go all-flash?
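If you only want the RGW index pool on flash, that is usually done via
CRUSH device classes rather than block.db placement; a rough sketch,
with an assumed rule name and the default index pool name:

  # Replicated rule restricted to the ssd device class, then pin the
  # bucket index pool to it (pool name depends on your zone).
  ceph osd crush rule create-replicated rgw-index-ssd default host ssd
  ceph osd pool set default.rgw.buckets.index crush_rule rgw-index-ssd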
- If SSDs are configured with block.db as described above, will it
be necessary to reinstall the five associated OSDs (HDDs) if one SSD
fails?
Yes, most likely all OSDs depending on that single SSD will have to be
redeployed. If the old SSD is still readable, there's a chance you
might be able to rescue some of the RocksDB LVs (pvmove).
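The pvmove route only works while the old SSD can still be read; the
rough idea, with placeholder VG/device names, is to add the replacement
SSD to the DB volume group and migrate the extents over:

  pvcreate /dev/<new_ssd>
  vgextend <db_vg> /dev/<new_ssd>       # add the replacement to the DB VG
  pvmove /dev/<old_ssd> /dev/<new_ssd>  # move the RocksDB LVs' extents
  vgreduce <db_vg> /dev/<old_ssd>       # drop the failing SSD from the VG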
Regards,
Eugen
Quoting TaekSoo Lim <xxbirds@xxxxxxxxx>:
Hi all, I have configured a Ceph cluster to be used as an object
store with a combination of SSDs and HDDs, where the block.db is
stored on LVM on the SSDs and the OSD block is stored on the HDDs.
I have set up one SSD for storing metadata (RocksDB), and five HDDs
are associated with it to store the OSD blocks.
During a disk replacement test, when the SSD that stores the block.db
fails, all five associated OSDs go down.
When replacing and recovering the failed SSD, it seems necessary to
reconfigure the five OSDs, which makes data recovery take too long
and has a significant performance impact.
So here are two questions:
- Is it common practice to configure SSDs with block.db and
associate each of them with five HDDs to store OSD blocks when using
eight SSDs and forty HDDs?
Or would it be better to only store the rgw index on SSDs? I am
also curious about the difference in performance between these
configurations.
- If SSDs are configured with block.db as described above, will it
be necessary to reinstall the five associated OSDs (HDDs) if one SSD
fails?
Also, is there a way to reconstruct the metadata on the newly
replaced SSD from the remaining five intact HDDs?
As a novice with Ceph clusters, I would appreciate advice from experienced users.
Thank you.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx