Right, but the procedure described in the blog can be adjusted fairly easily to do a resize.
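Something along these lines should do it (just a rough sketch, untested; the VG/LV names and the OSD id are only examples, and it assumes your ceph-bluestore-tool build has the bluefs-bdev-expand command):

  # stop the OSD and avoid rebalancing while it is down
  ceph osd set noout
  systemctl stop ceph-osd@3

  # create a new, larger LV for the DB (e.g. 20G instead of 1G)
  lvcreate -L 20G -n db-osd3 cephdb

  # copy the existing block.db bit for bit onto the larger LV
  dd if=/var/lib/ceph/osd/ceph-3/block.db of=/dev/cephdb/db-osd3 bs=1M

  # point the OSD at the new device and fix ownership
  ln -sf /dev/cephdb/db-osd3 /var/lib/ceph/osd/ceph-3/block.db
  chown -h ceph:ceph /var/lib/ceph/osd/ceph-3/block.db
  chown ceph:ceph /dev/cephdb/db-osd3

  # let BlueFS grow into the extra space, then bring the OSD back
  ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-3
  systemctl start ceph-osd@3
  ceph osd unset noout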
Thanks,
Igor
On 7/17/2018 11:10 AM, Eugen Block wrote:
Hi,
There is no way to resize the DB while the OSD is running. There is a somewhat shorter, "unofficial" but risky way than redeploying the OSD, though. But you'll need to take the specific OSD out for a while in any case. You will also need either additional free partition(s), or the initial deployment has to have been done using LVM.
See this blog for more details.
http://heiterbiswolkig.blogs.nde.ag/2018/04/08/migrating-bluestores-block-db/
Just for clarification: we did NOT resize the block.db in the described procedure! We used the exact same size for the new LVM-based block.db as before. This is also mentioned in the article.
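If it helps, this is roughly how one can make sure the new LV gets exactly the same size as the old block.db (device and VG names are just placeholders):

  # size of the current block.db device, in bytes
  blockdev --getsize64 /var/lib/ceph/osd/ceph-3/block.db
  # create the new LV with exactly that size (lvcreate rounds up to full extents)
  lvcreate -L <size-in-bytes>b -n db-osd3 cephdb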
Regards,
Eugen
Quoting Igor Fedotov <ifedotov@xxxxxxx>:
Hi Zhang,
There is no way to resize the DB while the OSD is running. There is a somewhat shorter, "unofficial" but risky way than redeploying the OSD, though. But you'll need to take the specific OSD out for a while in any case. You will also need either additional free partition(s), or the initial deployment has to have been done using LVM.
See this blog for more details.
http://heiterbiswolkig.blogs.nde.ag/2018/04/08/migrating-bluestores-block-db/
And I'd advise trying such things on a non-production cluster first.
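To see what you currently have before touching anything, something like this should show it (osd.3 is just an example; the perf counters need the OSD to be up):

  # BlueFS view of DB/WAL sizes and usage (db_total_bytes, wal_total_bytes, ...)
  ceph daemon osd.3 perf dump bluefs
  # sizes recorded in the BlueStore device labels
  ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-3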
Thanks,
Igor
On 7/12/2018 7:03 AM, Shunde Zhang wrote:
Hi Ceph Gurus,
I have installed Ceph Luminous with BlueStore using ceph-ansible. However, when I did the install I didn't set the WAL/DB sizes, so it ended up using the default values, which are quite small: a 1 GB DB and a 576 MB WAL.
Note that each OSD node has 12 OSDs, and each OSD has a 1.8 TB spinning disk for data. All 12 OSDs share one NVMe M.2 SSD for the WAL/DB.
Now the cluster is in use, and after doing some research I want to increase the DB/WAL sizes: I want to use a 20 GB DB and a 1 GB WAL. (Are those reasonable numbers?)
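(If my maths is right, that would be 12 x (20 GB + 1 GB) = 252 GB carved out of the shared NVMe.)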
I can delete one OSD at a time and re-create it with ceph-ansible, but that is troublesome.
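If I do end up re-creating them, I assume something like this in ceph.conf (e.g. via ceph_conf_overrides in ceph-ansible) would make the re-created OSDs pick up the bigger sizes; please correct me if that's wrong:

  [osd]
  # values are in bytes: 20 GB DB, 1 GB WAL
  bluestore_block_db_size = 21474836480
  bluestore_block_wal_size = 1073741824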
I wonder if there is a (simple) way to increase the size of both the DB and the WAL while an OSD is running?
Thanks in advance,
Shunde.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com