Re: Advice on sizing WAL/DB cluster for Optane and SATA SSD disks.

Hi,

Vitaliy - Sure, I can use those absolute values (30GB for DB, 2GB for WAL)
you suggested.
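
For reference, this is roughly how I'm planning to pass those sizes in. I'm
assuming here that my pveceph version supports -db_size/-wal_size flags
(values in GiB), and that the bluestore_block_* settings in ceph.conf
(values in bytes) get picked up when no explicit size is given - please
correct me if either assumption is wrong:

# per OSD, sizes in GiB (flag names may differ between pveceph versions)
pveceph osd create /dev/sde -db_dev /dev/nvme0n1 -db_size 30 \
    -wal_dev /dev/nvme0n1 -wal_size 2

# or as cluster-wide defaults in /etc/pve/ceph.conf (values in bytes)
[osd]
bluestore_block_db_size = 32212254720    # 30 GiB
bluestore_block_wal_size = 2147483648    # 2 GiB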

Currently, Proxmox is defaulting to a 178.85 GiB partition for the DB/WAL
(it seems to put the DB and WAL on the same partition).
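
As a sanity check on my side (plain LVM tooling, nothing Ceph-specific), I'm
assuming something like this will list the logical volumes that
pveceph/ceph-volume carved out on the Optane device:

# show the DB/WAL logical volumes and their sizes
lvs -o lv_name,vg_name,lv_size --units g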

Using your figures, with 6 x OSDs per host that works out to 180GB for DB
plus 12GB for WAL, i.e. 192GB in total. (The Optane drive is 960GB in
capacity.)

Question 1 - Are there any advantages to using a DB partition larger than
30GB, or a WAL larger than 2GB? (Just thinking about how best to use the
entire Optane drive, if possible.)

Question 2 - How do I check the WAL size in Ceph? (Proxmox seems to be
putting the WAL on the same partition as the DB, but I don't know where its
size is specified).
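
The closest I've come up with myself is the OSD admin socket - I'm assuming
something like the command below (run on the OSD host, with osd.0 just as an
example id), and that wal_total_bytes reads 0 when the WAL shares the DB
partition, but I'd appreciate confirmation:

# BlueFS view of the DB/WAL space for one OSD
ceph daemon osd.0 perf dump bluefs | grep total_bytes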

Thanks,
Victor

On Mon, Mar 16, 2020 at 12:05 AM Виталий Филиппов <vitalif@xxxxxxxxxx>
wrote:

> WAL is 1G (you can allocate 2 to be sure), DB should always be 30G. And
> this doesn't depend on the size of the data partition :-)
>
> On 14 March 2020 at 22:50:37 GMT+03:00, Victor Hooi <victorhooi@xxxxxxxxx>
> wrote:
>>
>> Hi,
>>
>> I'm building a 4-node Proxmox cluster, with Ceph for the VM disk storage.
>>
>> On each node, I have:
>>
>>
>>    - 1 x 512GB M.2 SSD (for Proxmox/boot volume)
>>    - 1 x 960GB Intel Optane 905P (for Ceph WAL/DB)
>>    - 6 x 1.92TB Intel S4610 SATA SSD (for Ceph OSD)
>>
>> I'm using the Proxmox "pveceph" command to setup the OSDs.
>>
>> By default this seems to pick 10% of the OSD size for the DB volume, and 1%
>> of the OSD size for the WAL volume.
>>
>> This means after four drives, I ran out of space:
>>
>> # pveceph osd create /dev/sde -db_dev /dev/nvme0n1
>>
>>> create OSD on /dev/sde (bluestore)
>>> creating block.db on '/dev/nvme0n1'
>>>   Rounding up size to full physical extent 178.85 GiB
>>> lvcreate
>>> 'ceph-861ebf6d-8fee-4313-8de6-4e797dc436ee/osd-db-da591d0f-8a05-42fa-bc62-a093bf98aded'
>>> error:   Volume group "ceph-861ebf6d-8fee-4313-8de6-4e797dc436ee" has
>>> insufficient free space (45784 extents): 45786 required.
>>>
>>
>>
>> Anyway, I assume that means I need to tune my DB and WAL volumes down from
>> the defaults.
>>
>> What advice do you have in terms of making the best use of the available
>> space between WAL and DB?
>>
>> What is the impact of having WAL and DB smaller than 1% and 10% of OSD size
>> respectively?
>>
>> Thanks,
>> Victor
>>
> --
> With best regards,
> Vitaliy Filippov
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



