You can also test directly with ceph bench whether the WAL is on the
flash device:
https://www.clyso.com/blog/verify-ceph-osd-db-and-wal-setup/
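A quick way to exercise the WAL along the lines of that post (a sketch only: the OSD id, the byte counts, and the assumption that 4 KiB writes fall under the default deferred-write threshold are illustrative):

# write small 4 KiB blocks so they are deferred through the WAL, then
# check that bytes_written_wal in the bluefs perf dump has grown
ceph tell osd.6 bench 12288000 4096
ceph daemon osd.6 perf dump bluefs | grep bytes_written_wal

If the WAL really sits on the flash device, the bench latency should also be noticeably lower than what the rotational data disk could deliver on its own.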
Joachim
___________________________________
ceph ambassador DACH
ceph consultant since 2012
Clyso GmbH - Premier Ceph Foundation Member
https://www.clyso.com/
On 10.07.23 at 09:12, Eugen Block wrote:
Yes, because you did *not* specify a dedicated WAL device. This is
also reflected in the OSD metadata:
$ ceph osd metadata 6 | grep dedicated
"bluefs_dedicated_db": "1",
"bluefs_dedicated_wal": "0"
Only if you had specified a dedicated WAL device would you see it in
the lvm list output, so this is all as expected.
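To check this for all OSDs at once, a small loop over the metadata works as well (the loop itself is just an illustration, not from the thread):

# print the dedicated-DB/WAL flags for every OSD in the cluster
for id in $(ceph osd ls); do
  echo "osd.$id"
  ceph osd metadata $id | grep dedicated
done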
You can check out the perf dump of an OSD to see that it actually
writes to the WAL:
# ceph daemon osd.6 perf dump bluefs | grep wal
"wal_total_bytes": 0,
"wal_used_bytes": 0,
"files_written_wal": 1588,
"bytes_written_wal": 1090677563392,
"max_bytes_wal": 0,
Quoting Jan Marek <jmarek@xxxxxx>:
Hello,
but when I try to list the device configuration with ceph-volume, I can
see a DB device, but no WAL device:
ceph-volume lvm list
====== osd.8 =======

  [db]      /dev/ceph-5aa92e38-077b-48e2-bda6-5b7db7b7701c/osd-db-bfd11468-d109-4f85-9723-75976f51bfb9

      block device        /dev/ceph-eaf5f0d7-ad50-4009-9ee6-04b8204b5b1a/osd-block-26b1d4b7-2425-4a2f-912b-111cf66a5970
      block uuid          j4s9lv-wS9n-xg2W-I4Y0-fUSu-Vuvl-9gOB2P
      cephx lockbox secret
      cluster fsid        2c565e24-7850-47dc-a751-a6357cbbaf2a
      cluster name        ceph
      crush device class
      db device           /dev/ceph-5aa92e38-077b-48e2-bda6-5b7db7b7701c/osd-db-bfd11468-d109-4f85-9723-75976f51bfb9
      db uuid             d9MZ2r-ImXX-Xod0-TNDS-tqi5-oG5Y-wrXFtW
      encrypted           0
      osd fsid            26b1d4b7-2425-4a2f-912b-111cf66a5970
      osd id              8
      osdspec affinity    osd_spec_default
      type                db
      vdo                 0
      devices             /dev/nvme0n1

  [block]   /dev/ceph-eaf5f0d7-ad50-4009-9ee6-04b8204b5b1a/osd-block-26b1d4b7-2425-4a2f-912b-111cf66a5970

      block device        /dev/ceph-eaf5f0d7-ad50-4009-9ee6-04b8204b5b1a/osd-block-26b1d4b7-2425-4a2f-912b-111cf66a5970
      block uuid          j4s9lv-wS9n-xg2W-I4Y0-fUSu-Vuvl-9gOB2P
      cephx lockbox secret
      cluster fsid        2c565e24-7850-47dc-a751-a6357cbbaf2a
      cluster name        ceph
      crush device class
      db device           /dev/ceph-5aa92e38-077b-48e2-bda6-5b7db7b7701c/osd-db-bfd11468-d109-4f85-9723-75976f51bfb9
      db uuid             d9MZ2r-ImXX-Xod0-TNDS-tqi5-oG5Y-wrXFtW
      encrypted           0
      osd fsid            26b1d4b7-2425-4a2f-912b-111cf66a5970
      osd id              8
      osdspec affinity    osd_spec_default
      type                block
      vdo                 0
      devices             /dev/sdi
(part of listing...)
Sincerely
Jan Marek
On Mon, Jul 10, 2023 at 08:10:58 CEST, Eugen Block wrote:
Hi,
if you don't specify a separate device for the WAL, it will automatically
be colocated on the same device as the DB. So you're good with this
configuration.
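Only if you really wanted a dedicated WAL would the spec need an extra wal_devices section, roughly like the sketch below (the second NVMe path is hypothetical, purely to show the shape of such a spec):

service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: osd8
spec:
  block_db_size: 64G
  data_devices:
    rotational: 1
  db_devices:
    paths:
    - /dev/nvme0n1
  wal_devices:
    paths:
    - /dev/nvme1n1
  filter_logic: AND
  objectstore: bluestore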
Regards,
Eugen
Quoting Jan Marek <jmarek@xxxxxx>:
> Hello,
>
> I've tried to add to my Ceph cluster an OSD node with 12 rotational
> disks and 1 NVMe. My YAML was this:
>
> service_type: osd
> service_id: osd_spec_default
> service_name: osd.osd_spec_default
> placement:
>   host_pattern: osd8
> spec:
>   block_db_size: 64G
>   data_devices:
>     rotational: 1
>   db_devices:
>     paths:
>     - /dev/nvme0n1
>   filter_logic: AND
>   objectstore: bluestore
>
> Now I have 12 OSDs with the DB on the NVMe device, but without a WAL.
> How can I add a WAL to these OSDs?
>
> The NVMe device still has 128 GB of free space.
>
> Thanks a lot.
>
> Sincerely
> Jan Marek
> --
> Ing. Jan Marek
> University of South Bohemia
> Academic Computer Centre
> Phone: +420389032080
> http://www.gnu.org/philosophy/no-word-attachments.cs.html
--
Ing. Jan Marek
University of South Bohemia
Academic Computer Centre
Phone: +420389032080
http://www.gnu.org/philosophy/no-word-attachments.cs.html
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx