Advice on sizing WAL/DB volumes for a cluster with Optane and SATA SSD disks.

Hi,

I'm building a 4-node Proxmox cluster, with Ceph for the VM disk storage.

On each node, I have:


   - 1 x 512GB M.2 SSD (for Proxmox/boot volume)
   - 1 x 960GB Intel Optane 905P (for Ceph WAL/DB)
   - 6 x 1.92TB Intel S4610 SATA SSD (for Ceph OSD)

I'm using the Proxmox "pveceph" command to set up the OSDs.

By default, this seems to pick 10% of the OSD size for the DB volume and 1%
of the OSD size for the WAL volume.
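
If my arithmetic is right, with 1.92TB OSDs those defaults work out to roughly:

   - DB:  10% x 1.92TB ≈ 192GB per OSD
   - WAL:  1% x 1.92TB ≈  19GB per OSD
   - 6 x (192GB + 19GB) ≈ 1.27TB in total, versus 960GB of Optane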

This means that after four drives, I ran out of space:

# pveceph osd create /dev/sde -db_dev /dev/nvme0n1
> create OSD on /dev/sde (bluestore)
> creating block.db on '/dev/nvme0n1'
>   Rounding up size to full physical extent 178.85 GiB
> lvcreate
> 'ceph-861ebf6d-8fee-4313-8de6-4e797dc436ee/osd-db-da591d0f-8a05-42fa-bc62-a093bf98aded'
> error:   Volume group "ceph-861ebf6d-8fee-4313-8de6-4e797dc436ee" has
> insufficient free space (45784 extents): 45786 required.
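
For reference, the remaining free space on that volume group can be checked
with the standard LVM tools, e.g.:

# vgs -o vg_name,vg_size,vg_free ceph-861ebf6d-8fee-4313-8de6-4e797dc436ee
# lvs -o lv_name,lv_size ceph-861ebf6d-8fee-4313-8de6-4e797dc436ee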


Anyway, I assume that means I need to tune my DB and WAL volumes down from
the defaults.
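
In case it matters for the answer: I'm assuming the way to do that is either
per OSD, via the db_size/wal_size options to "pveceph osd create" (sizes in
GiB, if I'm reading the docs right), or cluster-wide via
bluestore_block_db_size / bluestore_block_wal_size in ceph.conf. Something
like the following, with /dev/sdf standing in for the next OSD and the 64GiB
just a placeholder rather than a recommendation:

# pveceph osd create /dev/sdf -db_dev /dev/nvme0n1 -db_size 64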

What advice do you have on making the best use of the available space
between the WAL and DB?

What is the impact of having the WAL and DB smaller than 1% and 10% of the
OSD size, respectively?

Thanks,
Victor