You just stop the OSD, flush the journal, delete the old journal partition, create the new partition with the same GUID, initialize the journal, and start the OSD.
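In commands, for one OSD that looks roughly like this (a sketch assuming FileStore OSDs whose journal symlink resolves via /dev/disk/by-partuuid; the OSD id, device names, and partition numbers are placeholders to substitute):

    # stop the OSD and flush its journal back to the data disk
    systemctl stop ceph-osd@42
    ceph-osd -i 42 --flush-journal

    # note the partition GUID the journal symlink currently points at
    ls -l /var/lib/ceph/osd/ceph-42/journal

    # drop the old journal partition, then recreate it on the new drive
    # with the same partition GUID and the Ceph journal type code, so the
    # by-partuuid symlink keeps resolving without touching the OSD
    sgdisk --delete=1 /dev/nvme0n1
    sgdisk --new=1:0:+10G --partition-guid=1:<old-guid> \
           --typecode=1:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/nvme1n1
    partprobe /dev/nvme1n1

    # write a fresh journal header on the new partition and bring the OSD back
    ceph-osd -i 42 --mkjournal
    systemctl start ceph-osd@42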
On Wed, Jun 21, 2017, 8:44 PM Brady Deetz <bdeetz@xxxxxxxxx> wrote:
Hello,

I'm expanding my 288-OSD, primarily CephFS, cluster by about 16%. I have 12 OSD nodes with 24 OSDs each. Each OSD node has 2 P3700 400GB NVMe PCIe drives providing 10GB journals for groups of 12 6TB spinning-rust drives, plus 2x LACP 40Gbps Ethernet.

Our hardware provider is recommending that we start deploying P4600 drives in place of our P3700s due to availability.

I've seen some talk on here regarding this, but wanted to throw an idea around. I was okay with throwing away 280GB of fast capacity for the purpose of providing reliable journals. But with as much free capacity as we'd have with a P4600, maybe I could use that extra capacity as a cache tier for writes on an RBD EC pool (rough wiring sketched below). If I wanted to go that route, I'd probably replace several existing P3700s with P4600s to get additional cache capacity. But that sounds risky...

What do you guys think?
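To be concrete, the wiring I'm picturing is roughly this (pool names and every sizing number below are made up and would need tuning for the workload; the cache pool would also need a CRUSH rule pinning it to the NVMe devices):

    # EC base pool plus a replicated cache pool on the NVMe capacity
    ceph osd pool create rbd-ec 1024 1024 erasure
    ceph osd pool create rbd-cache 128 128 replicated

    # put the cache pool in front of the EC pool in writeback mode
    ceph osd tier add rbd-ec rbd-cache
    ceph osd tier cache-mode rbd-cache writeback
    ceph osd tier set-overlay rbd-ec rbd-cache

    # hit-set tracking and flush/evict thresholds (guesses, not recommendations)
    ceph osd pool set rbd-cache hit_set_type bloom
    ceph osd pool set rbd-cache target_max_bytes 200000000000
    ceph osd pool set rbd-cache cache_target_dirty_ratio 0.4
    ceph osd pool set rbd-cache cache_target_full_ratio 0.8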