Re: Unclear on metadata config for new Pacific cluster

On Wed, 23 Feb 2022 at 11:25, Eugen Block <eblock@xxxxxx> wrote:

> Hi,
>
> if you want to have DB and WAL on the same device, just don't specify
> WAL in your drivegroup. It will be automatically created on the DB
> device, too. In your case the rotational flag should be enough to
> distinguish between data and DB.
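
For reference, a minimal sketch of what that would look like as a full
service spec (the service_id and host_pattern here are made up; the
model string is the one from the spec further down):

  # Drivegroup with no wal_devices: the WAL is then created on the
  # DB device (the NVMe) automatically.
  cat > osd-spec.yaml <<'EOF'
  service_type: osd
  service_id: hdd-osds-nvme-db   # illustrative name
  placement:
    host_pattern: '*'            # illustrative placement
  spec:
    data_devices:
      rotational: true
    db_devices:
      model: SSDPE2KE032T8L
    filter_logic: AND
    objectstore: bluestore
  EOF

  # Preview what cephadm would create before applying for real
  ceph orch apply -i osd-spec.yaml --dry-run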
>
> > based on the suggestion in the docs that this would be sufficient for
> > both DB and WAL
> > (https://docs.ceph.com/en/pacific/cephadm/services/osd/#the-simple-case).
> > That attempt ended up with metadata on the HDD data disks, as
> > demonstrated by quite a lot of space being consumed even with no
> > actual data.
>
> How exactly did you determine that there was actual WAL data on the HDDs?
>
>
>
I couldn't say exactly what it was, but around 7 TB was in use, even
with no user data at all.

With the latest iteration, only a few GB were in use immediately
after creation.
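
For what it's worth, a sketch of one way to break such usage down per
OSD (run wherever the OSD's admin socket is reachable, e.g. inside
"cephadm shell --name osd.107"; osd.107 is just the example from the
metadata below):

  # BlueFS usage counters; a non-zero slow_used_bytes would mean
  # DB/WAL data sitting on the slow (HDD) device
  ceph daemon osd.107 perf dump bluefs | \
      grep -E '"(db|wal|slow)_(total|used)_bytes"'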



> Quoting Adam Huffman <adam.huffman.lists@xxxxxxxxx>:
>
> > Hello
> >
> > We have a new Pacific cluster configured via Cephadm.
> >
> > For the OSDs, the spec is like this, with the intention for DB and WAL to
> > be on NVMe:
> >
> > spec:
> >   data_devices:
> >     rotational: true
> >   db_devices:
> >     model: SSDPE2KE032T8L
> >   filter_logic: AND
> >   objectstore: bluestore
> >   wal_devices:
> >     model: SSDPE2KE032T8L
> >
> > This was after an initial attempt like this:
> >
> > spec:
> >   data_devices:
> >     rotational: 1
> >   db_devices:
> >     rotational: 0
> >
> > based on the suggestion in the docs that this would be sufficient for
> > both DB and WAL
> > (https://docs.ceph.com/en/pacific/cephadm/services/osd/#the-simple-case).
> > That attempt ended up with metadata on the HDD data disks, as
> > demonstrated by quite a lot of space being consumed even with no
> > actual data.
> >
> > With the new spec, the usage looks more normal. However, it's not clear
> > whether both DB and WAL are in fact on the faster devices as desired.
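
(A sketch of two ways one might check: the excerpt below looks like
"ceph osd metadata" output, and ceph-volume records each OSD's backing
devices in LVM tags:)

  # flags showing whether DB and WAL got their own devices
  ceph osd metadata 107 | grep -E 'bluefs_dedicated|bluefs_single_shared'

  # on the OSD host: list each OSD's block/db/wal devices
  cephadm ceph-volume lvm list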
> >
> > Here's an excerpt of the metadata for one of the new OSDs:
> >
> >     {
> >         "id": 107,
> >         "arch": "x86_64",
> >         "back_iface": "",
> >         "bluefs": "1",
> >         "bluefs_dedicated_db": "0",
> >         "bluefs_dedicated_wal": "1",
> >         "bluefs_single_shared_device": "0",
> >         "bluefs_wal_access_mode": "blk",
> >         "bluefs_wal_block_size": "4096",
> >         "bluefs_wal_dev_node": "/dev/dm-40",
> >         "bluefs_wal_devices": "nvme0n1",
> >         "bluefs_wal_driver": "KernelDevice",
> >         "bluefs_wal_partition_path": "/dev/dm-40",
> >         "bluefs_wal_rotational": "0",
> >         "bluefs_wal_size": "355622453248",
> >         "bluefs_wal_support_discard": "1",
> >         "bluefs_wal_type": "ssd",
> >         "bluestore_bdev_access_mode": "blk",
> >         "bluestore_bdev_block_size": "4096",
> >         "bluestore_bdev_dev_node": "/dev/dm-39",
> >         "bluestore_bdev_devices": "sdr",
> >         "bluestore_bdev_driver": "KernelDevice",
> >         "bluestore_bdev_partition_path": "/dev/dm-39",
> >         "bluestore_bdev_rotational": "1",
> >         "bluestore_bdev_size": "8001561821184",
> >         "bluestore_bdev_support_discard": "0",
> >         "bluestore_bdev_type": "hdd",
> >         "ceph_release": "pacific",
> >         "ceph_version": "ceph version 16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503) pacific (stable)",
> >         "ceph_version_short": "16.2.7",
> >     8<                                                        >8
> >         "container_image": "quay.io/ceph/ceph@sha256:a39107f8d3daab4d756eabd6ee1630d1bc7f31eaa76fff41a77fa32d0b903061",
> >         "cpu": "AMD EPYC 7352 24-Core Processor",
> >         "default_device_class": "hdd",
> >         "device_ids": "nvme0n1=SSDPE2KE032T8L_PHLN0195002R3P2BGN,sdr=LENOVO_ST8000NM010A_EX_WKD2CHZL0000E02930J6",
> >         "device_paths": "nvme0n1=/dev/disk/by-path/pci-0000:c1:00.0-nvme-1,sdr=/dev/disk/by-path/pci-0000:41:00.0-scsi-0:0:41:0",
> >         "devices": "nvme0n1,sdr",
> >         "distro": "centos",
> >         "distro_description": "CentOS Stream 8",
> >         "distro_version": "8",
> >     8<                                                        >8
> >         "journal_rotational": "0",
> >         "kernel_description": "#1 SMP Thu Feb 10 16:11:23 UTC 2022",
> >         "kernel_version": "4.18.0-365.el8.x86_64",
> >         "mem_swap_kb": "4194300",
> >         "mem_total_kb": "131583928",
> >         "network_numa_unknown_ifaces": "back_iface,front_iface",
> >         "objectstore_numa_nodes": "0",
> >         "objectstore_numa_unknown_devices": "sdr",
> >         "os": "Linux",
> >         "osd_data": "/var/lib/ceph/osd/ceph-107",
> >         "osd_objectstore": "bluestore",
> >         "osdspec_affinity": "dashboard-admin-1645460246886",
> >         "rotational": "1"
> >     }
> >
> > Note:
> >
> >         "bluefs_dedicated_db": "0",
> >         "bluefs_dedicated_wal": "1",
> >         "bluefs_single_shared_device": "0",
> >
> > On one of our Nautilus clusters, we have:
> >
> > "bluefs_single_shared_device": "1",
> >
> > and the same on an Octopus cluster.
> >
> > I've heard of the WAL being hosted in the DB, but not the other way
> > around...
> >
> > Best Wishes,
> > Adam
>
>
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


