Hi Dave,
For sure, ceph-bluestore-tool can be used for that. Unfortunately it
lacks the LVM tag manipulation required to properly set up a DB/WAL
volume for Ceph.
See https://tracker.ceph.com/issues/42928
This means the LVM tags have to be updated manually if
ceph-bluestore-tool is used on its own.
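To illustrate, the manual route looks roughly like the following.
The device, VG/LV names and the OSD id are placeholders, and the
ceph.* tag names below are just the ones ceph-volume puts on its LVs,
so verify them against "ceph-volume lvm list" / "lvs -o +lv_tags" on
your own nodes before copying anything:

   # stop the OSD before touching its devices
   systemctl stop ceph-osd@0

   # Case 1: the existing DB LV can simply grow (e.g. after adding
   # the new NVMe to the same VG via vgextend) - no tag changes needed
   lvextend -L +300G /dev/ceph-db-vg/osd-0-db
   ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-0

   # Case 2: the DB moves to a new, bigger LV on the new NVMe
   ceph-bluestore-tool bluefs-bdev-migrate --path /var/lib/ceph/osd/ceph-0 \
       --devs-source /var/lib/ceph/osd/ceph-0/block.db \
       --dev-target /dev/ceph-db-vg-new/osd-0-db
   # ...after which the tags have to be fixed by hand, e.g. on the block LV:
   lvchange --deltag ceph.db_device=<old db path> /dev/ceph-block-vg/osd-0-block
   lvchange --addtag ceph.db_device=/dev/ceph-db-vg-new/osd-0-db /dev/ceph-block-vg/osd-0-block
   # (the new DB LV also needs the full set of ceph.* tags the old one
   # carried, with ceph.db_device / ceph.db_uuid pointing at itself)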
Additionally, there is a pending PR
(https://github.com/ceph/ceph/pull/39580) to implement DB/WAL
manipulation in ceph-volume (which in turn relies on
ceph-bluestore-tool to perform the lower-level operations). Hence one
should either wait until it's merged and backported or do such a
backport (actually Python code only) on their own.
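Once that lands, the same operations should reduce to something like
the following (syntax as proposed in the PR, so it may still change;
the id/fsid/LV names are placeholders):

   # attach a new DB volume to an OSD that doesn't have one yet
   ceph-volume lvm new-db --osd-id 0 --osd-fsid <osd fsid> \
       --target ceph-db-vg-new/osd-0-db
   # or move an existing DB to a new (bigger) LV
   ceph-volume lvm migrate --osd-id 0 --osd-fsid <osd fsid> --from db \
       --target ceph-db-vg-new/osd-0-db

with ceph-volume taking care of the LVM tags itself.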
Thanks,
Igor
On 3/23/2021 2:37 PM, Dave Hall wrote:
Hello,
Based on other discussions in this list I have concluded that I need to add
NVMe to my OSD nodes and expand the NVMe (DB/WAL) for each OSD. Is there a
way to do this without destroying and rebuilding each OSD (after
safe removal from the cluster, of course)? Is there a way to use
ceph-bluestore-tool for this? Is it as simple as lvextend?
Why more NVMe? Frequent DB spillovers, and the recommendation that the
NVMe should be 40GB for every TB of HDD. When I did my initial setup I
thought that 124GB of NVMe for a 12TB HDD would be sufficient, but by the
above metric it should be more like 480GB of NVMe.
Thanks.
-Dave
--
Dave Hall
Binghamton University
kdhall@xxxxxxxxxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx