Re: OSD Service Advanced Specification db_slots

I somehow missed your reply earlier, but yes, I think that's useful.  I know how big my NVMe drives are (1920GB), and the maximum number of spinning disks a given node is going to have (12), so chopping up the NVMe drives shouldn't be hard.  It's a bit suboptimal since all OSDs get the same DB size regardless of OSD size, but I think it should be OK.
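Concretely, I'm thinking of a spec along these lines (untested sketch; the 160G value is just 1920 GB split across 12 OSDs, in practice probably a bit less to leave headroom for LVM overhead, and the service_id is a placeholder):

service_type: osd
service_id: hdd_with_nvme_db
placement:
   host_pattern: '*'
data_devices:
   rotational: 1
db_devices:
   rotational: 0
filter_logic: AND
block_db_size: 160G    # ~1920 GB NVMe / 12 spinners per node
unmanaged: false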

Thanks!

-----Original Message-----
From: Eugen Block <eblock@xxxxxx> 
Sent: Wednesday, September 15, 2021 4:04 AM
To: ceph-users@xxxxxxx
Subject: Re: OSD Service Advanced Specification db_slots

Hi,

db_slots is still not implemented:

pacific:~ # ceph orch apply -i osd.yml --dry-run
Error EINVAL: Failed to validate Drive Group: Filtering for <db_slots> is not supported
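The osd.yml isn't included in this mail, but any spec that sets db_slots trips that validation, roughly something like:

service_type: osd
service_id: osd_with_db_slots
placement:
   host_pattern: '*'
data_devices:
   rotational: 1
db_devices:
   rotational: 0
db_slots: 12    # this is the filter the validator rejects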


> Question 2:  If db_slots still *doesn't* work, is there a coherent way 
> to divide up a solid state DB drive for use by a bunch of OSDs when 
> the OSDs may not all be created in one go?  At first I thought it was 
> related to limit, but re-reading the advanced specification for a 4th 
> time, I don't think that's the case.  Of course this question is moot 
> if db_slots actually works.

You can use the block_db_size filter so ceph-volume won't consume the entire DB disk (SSD/NVMe), but it will still try to deploy OSDs on all available data devices unless you add more filters. You can use the --dry-run flag to try out some specs and get a feeling for what ceph-volume would actually do. As a short example from my lab (an all-in-one host with three data devices and one SSD for block_db), this spec

service_type: osd
service_id: osd_spec_hdd_ssd
service_name: osd.osd_spec_hdd_ssd
placement:
   host_pattern: '*'
data_devices:
   rotational: 1
   limit: 2
db_devices:
   rotational: 0
filter_logic: AND
block_db_size: 3G
unmanaged: false


would result in this deployment:

+---------+------------------+---------+----------+----------+-----+
|SERVICE  |NAME              |HOST     |DATA      |DB        |WAL  |
+---------+------------------+---------+----------+----------+-----+
|osd      |osd_spec_hdd_ssd  |pacific  |/dev/vdb  |/dev/vde  |-    |
|osd      |osd_spec_hdd_ssd  |pacific  |/dev/vdc  |/dev/vde  |-    |
+---------+------------------+---------+----------+----------+-----+

These are my available devices (name, rotational flag, size):

vdb                         1   10G
vdc                         1   10G
vdd                         1   10G
vde                         0    8G

So the limit filter works as expected here. If I don't specify it, I don't get any OSDs at all, because ceph-volume can't fit three 3 GB DBs onto the 8 GB disk.
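If you want all three HDDs to get their DB on the single SSD instead, the sizes just have to add up. A variant of the spec above without the limit and with smaller DBs should fit on this layout (untested sketch: 3 x 2G = 6G on the 8 GB disk):

service_type: osd
service_id: osd_spec_hdd_ssd_all
placement:
   host_pattern: '*'
data_devices:
   rotational: 1
db_devices:
   rotational: 0
filter_logic: AND
block_db_size: 2G    # 3 x 2G = 6G, fits on the 8 GB SSD
unmanaged: false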

Does that help?

Regards,
Eugen


Quoting Edward R Huyer <erhvks@xxxxxxx>:

> I recently upgraded my existing cluster to Pacific and cephadm, and 
> need to reconfigure all the (rotational) OSDs to use NVMe drives for 
> db storage.  I think I have a reasonably good idea how that's going to 
> work, but the use of db_slots and limit in the OSD service 
> specification has me scratching my head.
>
> Question 1:  Does db_slots actually work in the latest version of 
> Pacific?  It's listed here 
> https://docs.ceph.com/en/pacific/cephadm/osd/#additional-options but 
> in the advanced case section 
> https://docs.ceph.com/en/pacific/cephadm/osd/#the-advanced-case
> there's still a note saying it's not implemented.
>
> Question 2:  If db_slots still *doesn't* work, is there a coherent way 
> to divide up a solid state DB drive for use by a bunch of OSDs when 
> the OSDs may not all be created in one go?  At first I thought it was 
> related to limit, but re-reading the advanced specification for a 4th 
> time, I don't think that's the case.  Of course this question is moot 
> if db_slots actually works.
>
> Any advice or information would be appreciated.
>
> -----
> Edward Huyer
> Golisano College of Computing and Information Sciences
> Rochester Institute of Technology
> Golisano 70-2373
> 152 Lomb Memorial Drive
> Rochester, NY 14623
> 585-475-6651
> erhvks@xxxxxxx<mailto:erhvks@xxxxxxx>
>
> Obligatory Legalese:
> The information transmitted, including attachments, is intended only 
> for the person(s) or entity to which it is addressed and may contain 
> confidential and/or privileged material. Any review, retransmission, 
> dissemination or other use of, or taking of any action in reliance 
> upon this information by persons or entities other than the intended 
> recipient is prohibited. If you received this in error, please contact 
> the sender and destroy any copies of this information.
>



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


