Re: resize wal/db

Hi Igor,

Thanks for the reply.
I am using an NVMe SSD for the wal/db. Do you have any instructions on how to create a partition for the wal/db on it?
For example, the type of the existing partitions is reported as unknown,
and if I create a new one, it defaults to Linux:

Disk /dev/nvme0n1: 500.1 GB, 500107862016 bytes, 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
Disk identifier: 300AA8A7-8E72-4D4D-8232-02A0EA7FEB00


#         Start          End    Size  Type            Name
 1         2048      2099199      1G  unknown         ceph block.db
 2      2099200      3278847    576M  unknown         ceph block.wal
 3      3278848      5375999      1G  unknown         ceph block.db
 4      5376000      6555647    576M  unknown         ceph block.wal
 5      6555648      8652799      1G  unknown         ceph block.db
 6      8652800      9832447    576M  unknown         ceph block.wal
 7      9832448     11929599      1G  unknown         ceph block.db
 8     11929600     13109247    576M  unknown         ceph block.wal
 9     13109248     15206399      1G  unknown         ceph block.db
10     15206400     16386047    576M  unknown         ceph block.wal
11     16386048     18483199      1G  unknown         ceph block.db
12     18483200     19662847    576M  unknown         ceph block.wal
13     19662848     21759999      1G  unknown         ceph block.db
14     21760000     22939647    576M  unknown         ceph block.wal
15     22939648     25036799      1G  unknown         ceph block.db
16     25036800     26216447    576M  unknown         ceph block.wal
17     26216448     28313599      1G  unknown         ceph block.db
18     28313600     29493247    576M  unknown         ceph block.wal
19     29493248     31590399      1G  unknown         ceph block.db
20     31590400     32770047    576M  unknown         ceph block.wal
21     32770048     34867199      1G  unknown         ceph block.db
22     34867200     36046847    576M  unknown         ceph block.wal
23     36046848     38143999      1G  unknown         ceph block.db
24     38144000     39323647    576M  unknown         ceph block.wal
25     39323648     41420799      1G  unknown         ceph block.db
26     41420800     42600447    576M  unknown         ceph block.wal
27     42600448     44697599      1G  unknown         ceph block.db
28     44697600     45877247    576M  unknown         ceph block.wal
29     45877248     47974399      1G  Linux filesyste 
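
For what it's worth, fdisk most likely shows those partitions as "unknown" simply because it has no name for the Ceph GPT type GUIDs. Below is a rough sketch of creating new db/wal partitions with sgdisk; the partition numbers and sizes are placeholders, and the type GUIDs are the ones ceph-disk has historically used for block.db and block.wal, so please verify them against your Ceph version before relying on them:

# create a new 1 GiB partition (number 30 is only an example) and tag it as a Ceph block.db partition
sgdisk --new=30:0:+1G --change-name=30:'ceph block.db' --typecode=30:30CD0809-C2B2-499C-8879-2D6B78529876 /dev/nvme0n1

# same idea for a wal partition, using the block.wal type GUID
sgdisk --new=31:0:+576M --change-name=31:'ceph block.wal' --typecode=31:5CE17FCE-4087-4169-B7FF-056CC58473F9 /dev/nvme0n1

# ask the kernel to re-read the partition table
partprobe /dev/nvme0n1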


Thanks,
Shunde

On 18 Jul 2018, at 12:08 am, Igor Fedotov <ifedotov@xxxxxxx> wrote:

For now you can expand that space up to the actual volume size using the ceph-bluestore-tool commands (bluefs-bdev-expand and set-label-key).

This is a bit tricky, though.

I'm currently working on a solution within ceph-bluestore-tool to simplify both expansion and migration.
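
For reference, a rough sketch of that sequence (the OSD id 12, the device path and the 4294967296-byte new size are placeholders; whether set-label-key is still required before bluefs-bdev-expand depends on the Ceph release, so please check the ceph-bluestore-tool documentation for your version):

# stop the OSD that owns the db/wal partition
systemctl stop ceph-osd@12

# after enlarging the underlying partition, record the new size (in bytes)
# in the bluefs device label of the db (or wal) device
ceph-bluestore-tool set-label-key --dev /var/lib/ceph/osd/ceph-12/block.db -k size -v 4294967296

# let BlueFS grow into the newly added space
ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-12

# bring the OSD back up
systemctl start ceph-osd@12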


Thanks,

Igor


On 7/17/2018 5:02 PM, Nicolas Huillard wrote:
On Tuesday 17 July 2018 at 16:20 +0300, Igor Fedotov wrote:
Right, but the procedure described in the blog can be pretty easily adjusted to do a resize.
Sure, but if I remember correctly, Ceph itself cannot use the increased size: you'll end up with a larger device with unused additional space.
Using that space may be on the TODO, though, so this may not be a complete waste of space...


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

