2017-11-09 17:52 GMT+01:00 Rudi Ahlers <rudiahlers@xxxxxxxxx>:
Hi Caspar,

Is this in the [global] or [osd] section of ceph.conf?

I've put it in the [global] section, but it may well belong in [osd]; the parameter isn't really documented in that respect.
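As a minimal sketch, this is the kind of snippet I mean (the byte values below are just example placeholders for a 20 GiB DB and a 1 GiB WAL, adjust them to your needs):

[global]
    # DB partition size in bytes (20 GiB here, just an example)
    bluestore_block_db_size = 21474836480
    # WAL partition size in bytes (1 GiB here, just an example)
    bluestore_block_wal_size = 1073741824

Note that these settings only affect OSDs created after they are in place; existing OSDs keep their current partitions.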
I am new to Ceph so this is all still very vague to me.

What is the difference between the WAL and the DB?
I suggest you first read through this page carefully to understand the differences:
And, lastly, if I want to set up the OSD in Proxmox beforehand and add the journal to it, can I make these changes afterwards?
You're not 'adding' a journal afterwards; the journal (db/wal) is already on the OSD (just not separated), so you'd have to move it to another drive. This could be done with filestore in the past, but to my knowledge not with bluestore. You can always just destroy the OSD and create a new one (with a separate journal), roughly as sketched below.
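As a rough sketch only (using osd.3 on /dev/sda with the DB/WAL going to /dev/sde, as in your listing further down; substitute your own IDs and devices, and let the cluster return to HEALTH_OK before touching the next OSD):

# take the OSD out of service and wait for backfill to finish
ceph osd out 3
systemctl stop ceph-osd@3

# remove it from CRUSH, auth and the OSD map
ceph osd crush remove osd.3
ceph auth del osd.3
ceph osd rm 3

# wipe the data disk and recreate the OSD with db/wal on the SSD;
# partition sizes come from bluestore_block_db_size / bluestore_block_wal_size in ceph.conf
ceph-disk zap /dev/sda
ceph-disk prepare --bluestore --block.db /dev/sde --block.wal /dev/sde /dev/sda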
And, how do I partition the SSD drive then?
Partitioning is done automatically according to the sizes specified in ceph.conf (so if you want a 20GB DB, use the corresponding number of bytes as the value for bluestore_block_db_size).
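For example, a 20 GiB DB works out to:

# 20 GiB expressed in bytes
echo $((20 * 1024**3))
21474836480

and 21474836480 is then the value to put into bluestore_block_db_size before creating the OSD.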
Caspar
On Thu, Nov 9, 2017 at 6:26 PM, Caspar Smit <casparsmit@xxxxxxxxxxx> wrote:

Rudi,

You can set the size of the block.db and block.wal partitions in the ceph.conf configuration file using:

bluestore_block_db_size = 16106127360 (which is 15GB, just calculate the correct number for your needs)
bluestore_block_wal_size = 16106127360

Kind regards,
Caspar

2017-11-09 17:19 GMT+01:00 Rudi Ahlers <rudiahlers@xxxxxxxxx>:

Hi Alwin,

Thanks for the help. I see now that I used the wrong wording in my email: I want to resize the journal, not upgrade it.

So, following your commands, I still end up with a 1GB journal:

root@virt1:~# ceph-disk prepare --bluestore \
> --block.db /dev/sde --block.wal /dev/sde1 /dev/sda
Setting name!
partNum is 0
REALLY setting name!
The operation has completed successfully.
prepare_device: OSD will not be hot-swappable if block.db is not the same device as the osd data
Setting name!
partNum is 1
REALLY setting name!
The operation has completed successfully.
The operation has completed successfully.
prepare_device: OSD will not be hot-swappable if block.wal is not the same device as the osd data
prepare_device: Block.wal /dev/sde1 was not prepared with ceph-disk. Symlinking directly.
Setting name!
partNum is 1
REALLY setting name!
The operation has completed successfully.
The operation has completed successfully.
meta-data=                   isize=2048   agcount=4, agsize=6400 blks
         =                   sectsz=4096  attr=2, projid32bit=1
         =                   crc=1        finobt=1, sparse=0, rmapbt=0, reflink=0
data     =                   bsize=4096   blocks=25600, imaxpct=25
         =                   sunit=0      swidth=0 blks
naming   =version 2          bsize=4096   ascii-ci=0 ftype=1
log      =internal log       bsize=4096   blocks=1608, version=2
         =                   sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none               extsz=4096   blocks=0, rtextents=0
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot or after you
run partprobe(8) or kpartx(8)
The operation has completed successfully.

root@virt1:~# partprobe
root@virt1:~# fdisk -l | grep sde
Disk /dev/sde: 372.6 GiB, 400088457216 bytes, 781422768 sectors
/dev/sde1        2048 195311615 195309568 93.1G Linux filesystem
/dev/sde2   195311616 197408767   2097152    1G unknown

On Thu, Nov 9, 2017 at 6:02 PM, Alwin Antreich <a.antreich@xxxxxxxxxxx> wrote:

Hi Rudi,
On Thu, Nov 09, 2017 at 04:09:04PM +0200, Rudi Ahlers wrote:
> Hi,
>
> Can someone please tell me what the correct procedure is to upgrade a Ceph
> journal?
>
> I'm running ceph: 12.2.1 on Proxmox 5.1, which runs on Debian 9.1
>
> For a journal I have a 400GB Intel SSD drive and it seems Ceph created a
> 1GB journal:
>
> Disk /dev/sdf: 372.6 GiB, 400088457216 bytes, 781422768 sectors
> /dev/sdf1 2048 2099199 2097152 1G unknown
> /dev/sdf2 2099200 4196351 2097152 1G unknown
>
> root@virt2:~# fdisk -l | grep sde
> Disk /dev/sde: 372.6 GiB, 400088457216 bytes, 781422768 sectors
> /dev/sde1 2048 2099199 2097152 1G unknown
>
>
> /dev/sda :
> /dev/sda1 ceph data, active, cluster ceph, osd.3, block /dev/sda2,
> block.db /dev/sde1
> /dev/sda2 ceph block, for /dev/sda1
> /dev/sdb :
> /dev/sdb1 ceph data, active, cluster ceph, osd.4, block /dev/sdb2,
> block.db /dev/sdf1
> /dev/sdb2 ceph block, for /dev/sdb1
> /dev/sdc :
> /dev/sdc1 ceph data, active, cluster ceph, osd.5, block /dev/sdc2,
> block.db /dev/sdf2
> /dev/sdc2 ceph block, for /dev/sdc1
> /dev/sdd :
> /dev/sdd1 other, xfs, mounted on /data/brick1
> /dev/sdd2 other, xfs, mounted on /data/brick2
> /dev/sde :
> /dev/sde1 ceph block.db, for /dev/sda1
> /dev/sdf :
> /dev/sdf1 ceph block.db, for /dev/sdb1
> /dev/sdf2 ceph block.db, for /dev/sdc1
> /dev/sdg :
>
>
> resizing the partition through fdisk didn't work. What is the correct
> procedure, please?
>
> Kind Regards
> Rudi Ahlers
> Website: http://www.rudiahlers.co.za
For Bluestore OSDs you need to set bluestore_block_db_size to get a bigger
partition for the DB, and bluestore_block_wal_size for the WAL.
ceph-disk prepare --bluestore \
--block.db /dev/sde --block.wal /dev/sde /dev/sdX
This gives you in total four partitions on two different disks.
I think it will be less hassle to remove the OSD and prepare it again.
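Once the recreated OSD is back in, a quick sanity check (just a sketch, the device name is an example) would be something like:

# show which partitions ceph-disk created and what they are used for
ceph-disk list

# confirm the DB/WAL partitions on the SSD now have the expected sizes
lsblk -o NAME,SIZE,TYPE /dev/sde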
--
Cheers,
Alwin
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com