Re: [External Email] Re: Re: Bluestore - How to review config?

Lin, Igor, Herve,

With the help of the information all of you have provided, I have now reviewed the massive amount of detail that is available for just one of my OSDs.  (All 24 were deployed with Ceph-Ansible, so they should all be configured the same.)

Like Lin's example below, I see that I don't have a WAL device. This is a place where the config documentation was a bit confusing:

Each OSD node has 8 x 12TB SAS drives and a 1.6TB PCIe NVMe. Since there are 4 additional bays I set a parameter in Ceph-Ansible to divide the NVMe into 12 'slices'.

When I was setting this up, there was something in the docs that made it seem that, by putting the DB on the NVMe, BlueStore would automatically use extra space in the NVMe slice for the WAL.  Is this correct, or does the WAL have to be specified explicitly?
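For what it's worth, the check I've come up with so far (just a sketch, using osd.0 as an example and assuming the default data directory layout) is to look for a block.wal symlink and label:

ls -l /var/lib/ceph/osd/ceph-0/block*
ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-0

My understanding is that if there is no block.wal symlink or label, the WAL simply lives inside BlueFS on the DB device (or on the main device when there is no DB), but I'd appreciate confirmation.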

Further, I'm trying to understand how my NVMe is 'sliced' or 'partitioned'.  It doesn't seem to have a partition table, so maybe it's just one big LVM device.  Either way, I'd like to be able to find out how the NVMe is divided up, how big the RocksDB is, and whether any extra space is being used for the WAL or anything else.
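So far the closest I've gotten (again a sketch, assuming a ceph-volume/LVM deployment and using osd.0 as an example) is:

# show the LVs carved out of each device and their sizes
lvs -o lv_name,vg_name,lv_size,devices
# map LVs back to OSDs and their roles (block vs. db vs. wal)
ceph-volume lvm list
# BlueFS usage counters - how much of the DB device is actually in use
ceph daemon osd.0 perf dump bluefs

but I'm not sure whether that gives the whole picture.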

Of course, if I've failed to create an optimal configuration I'm next going to ask if I can adjust it without having to wipe and reinitialize every OSD.
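(If it comes to that, ceph-bluestore-tool appears to have bluefs-bdev-new-db / bluefs-bdev-new-wal / bluefs-bdev-migrate subcommands that look like they can attach or move these devices on an existing OSD while it is stopped.  The sketch below is untested, and the target LV name is purely hypothetical:)

systemctl stop ceph-osd@0
ceph-bluestore-tool bluefs-bdev-new-wal --path /var/lib/ceph/osd/ceph-0 --dev-target /dev/nvme_vg/osd0_wal
systemctl start ceph-osd@0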

Thanks.

-Dave

Dave Hall
Binghamton University
kdhall@xxxxxxxxxxxxxx

On 5/6/2020 2:20 AM, lin yunfan wrote:
Is there a way to get the block, block.db and block.wal paths and sizes?
What if all or some of them are colocated on one disk?

I can get the info from an OSD with WAL, DB and block colocated, like below:

ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-0/
{
     "/var/lib/ceph/osd/ceph-0//block": {
         "osd_uuid": "199f2445-af9e-4172-8231-6d98858684e8",
         "size": 107268255744,
         "btime": "2020-02-15 21:53:42.972004",
         "description": "main",
         "bluefs": "1",
         "ceph_fsid": "83a73817-3566-4044-91b6-22cee6753515",
         "kv_backend": "rocksdb",
         "magic": "ceph osd volume v026",
         "mkfs_done": "yes",
         "ready": "ready",
         "require_osd_release": "\u000c",
         "whoami": "0"
     }
}
But there is no path for the block device.
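The only workaround I have found (a sketch; exact paths may differ per deployment) is to follow the symlinks in the OSD data directory:

readlink -f /var/lib/ceph/osd/ceph-0/block
readlink -f /var/lib/ceph/osd/ceph-0/block.db   # only present when there is a separate DB device
lsblk -o NAME,SIZE,TYPE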

ceph osd metadata 0
{
     "id": 0,
     "arch": "x86_64",
     "back_addr": "172.18.2.178:6801/12605",
     "back_iface": "ens160",
     "bluefs": "1",
     "bluefs_db_access_mode": "blk",
     "bluefs_db_block_size": "4096",
     "bluefs_db_dev": "8:16",
     "bluefs_db_dev_node": "sdb",
     "bluefs_db_driver": "KernelDevice",
     "bluefs_db_model": "Virtual disk    ",
     "bluefs_db_partition_path": "/dev/sdb2",
     "bluefs_db_rotational": "1",
     "bluefs_db_size": "107268255744",
     "bluefs_db_type": "hdd",
     "bluefs_single_shared_device": "1",
     "bluestore_bdev_access_mode": "blk",
     "bluestore_bdev_block_size": "4096",
     "bluestore_bdev_dev": "8:16",
     "bluestore_bdev_dev_node": "sdb",
     "bluestore_bdev_driver": "KernelDevice",
     "bluestore_bdev_model": "Virtual disk    ",
     "bluestore_bdev_partition_path": "/dev/sdb2",
     "bluestore_bdev_rotational": "1",
     "bluestore_bdev_size": "107268255744",
     "bluestore_bdev_type": "hdd",
     "ceph_version": "ceph version 12.2.13
(584a20eb0237c657dc0567da126be145106aa47e) luminous (stable)",
     "cpu": "Intel(R) Xeon(R) CPU E7-4820 v3 @ 1.90GHz",
     "default_device_class": "hdd",
     "distro": "ubuntu",
     "distro_description": "Ubuntu 18.04.3 LTS",
     "distro_version": "18.04",
     "front_addr": "172.18.2.178:6800/12605",
     "front_iface": "ens160",
     "hb_back_addr": "172.18.2.178:6802/12605",
     "hb_front_addr": "172.18.2.178:6803/12605",
     "hostname": "ceph",
     "journal_rotational": "1",
     "kernel_description": "#92-Ubuntu SMP Fri Feb 28 11:09:48 UTC 2020",
     "kernel_version": "4.15.0-91-generic",
     "mem_swap_kb": "4194300",
     "mem_total_kb": "8168160",
     "os": "Linux",
     "osd_data": "/var/lib/ceph/osd/ceph-0",
     "osd_objectstore": "bluestore",
     "rotational": "1"
}
There are paths and sizes here. Does bdev here mean the same as block in ceph-bluestore-tool?
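(My understanding is that the bluestore_bdev_* fields describe the main block device and the bluefs_db_* fields the DB device. Assuming jq is available, something like this pulls out just those fields - a sketch:)

ceph osd metadata 0 | jq 'with_entries(select(.key | test("bluestore_bdev|bluefs_db|bluefs_wal")))'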


ceph daemon osd.0 config show (excerpt):
block
     "bluestore_block_path": "",
     "bluestore_block_size": "10737418240",

db
     "bluestore_block_db_create": "false",
     "bluestore_block_db_path": "",
     "bluestore_block_db_size": "0",

wal
     "bluestore_block_wal_create": "false",
     "bluestore_block_wal_path": "",
     "bluestore_block_wal_size": "100663296",
There is no path info, and only the WAL size is shown.

What is the best way to get the path and size information for
block, block.db and block.wal?
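(The closest I have found so far is ceph-volume, which seems to report the devices per OSD when they were deployed with it - a sketch:)

ceph-volume lvm list
# or machine-readable output:
ceph-volume lvm list --format json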




linyunfan

Igor Fedotov <ifedotov@xxxxxxx> wrote on Tue, May 5, 2020 at 10:47 PM:
Hi Dave,

Wouldn't this help (particularly the "Viewing runtime settings" section):

https://docs.ceph.com/docs/nautilus/rados/configuration/ceph-conf/
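For example (using osd.0 as an illustration, run on the host where the OSD lives):

ceph daemon osd.0 config show | grep bluestore_block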


Thanks,

Igor

On 5/5/2020 2:52 AM, Dave Hall wrote:
Hello,

Sorry if this has been asked before...

A few months ago I deployed a small Nautilus cluster using
ceph-ansible.  The OSD nodes have multiple spinning drives and a PCIe
NVMe.  Now that the cluster has been stable for a while, it's time to
start optimizing performance.

While I can tell that there is a part of the NVMe associated with each
OSD, I'm trying to verify which BlueStore components are using the
NVMe - WAL, DB, Cache - and whether the configuration generated by
ceph-ansible (and my settings in osds.yml) is optimal for my hardware.

I've searched around a bit and, while I have found documentation on
how to configure, reconfigure, and repair a BlueStore OSD, I haven't
found anything on how to query the current configuration.

Could anybody point me to a command or link to documentation on this?

Thanks.

-Dave

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



