Re: CEPH orch made osd without WAL

Hello Eugen,

I've tried to specify a dedicated WAL device, but I only have
/dev/nvme0n1, so I cannot write a correct YAML file...
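
For illustration, the spec I was trying to write would look roughly like
this (only a sketch on my side; I don't know whether wal_devices may
point to the same NVMe that already carries the DB):

service_type: osd
service_id: osd_spec_default
service_name: osd.osd_spec_default
placement:
  host_pattern: osd8
spec:
  block_db_size: 64G
  data_devices:
    rotational: 1
  db_devices:
    paths:
    - /dev/nvme0n1
  wal_devices:
    paths:
    - /dev/nvme0n1
  filter_logic: AND
  objectstore: bluestore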

On Mon, Jul 10, 2023 at 09:12:29 CEST, Eugen Block wrote:
> Yes, because you did *not* specify a dedicated WAL device. This is also
> reflected in the OSD metadata:
> 
> $ ceph osd metadata 6 | grep dedicated
>     "bluefs_dedicated_db": "1",
>     "bluefs_dedicated_wal": "0"

Yes, it is exactly as you wrote.

> 
> Only if you had specified a dedicated WAL device would you see it in the lvm
> list output, so this is all as expected.
> You can check out the perf dump of an OSD to see that it actually writes to
> the WAL:
> 
> # ceph daemon osd.6 perf dump bluefs | grep wal
>         "wal_total_bytes": 0,
>         "wal_used_bytes": 0,
>         "files_written_wal": 1588,
>         "bytes_written_wal": 1090677563392,
>         "max_bytes_wal": 0,

Here I ran into a problem:

# ceph daemon osd.8 perf dump bluefs
Can't get admin socket path: unable to get conf option admin_socket for osd: b"error parsing 'osd': expected string of the form TYPE.ID, valid types are: auth, mon, osd, mds, mgr, client\n"

I'm on the host where this OSD 8 is running.
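
Could it be that with a cephadm deployment the admin socket is only
reachable inside the OSD container? If so, maybe something like this
would work (just a guess on my side, untested):

# cephadm enter --name osd.8
(and then, inside the container)
# ceph daemon osd.8 perf dump bluefs | grep wal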

My CEPH version is the latest (I hope) Quincy: 17.2.6.

Thanks a lot for your help.

Sincerely
Jan Marek

> 
> 
> Quoting Jan Marek <jmarek@xxxxxx>:
> 
> > Hello,
> > 
> > but when I try to list the device configuration with ceph-volume, I can
> > see a DB device, but no WAL device:
> > 
> > ceph-volume lvm list
> > 
> > ====== osd.8 =======
> > 
> >   [db]          /dev/ceph-5aa92e38-077b-48e2-bda6-5b7db7b7701c/osd-db-bfd11468-d109-4f85-9723-75976f51bfb9
> > 
> >       block device              /dev/ceph-eaf5f0d7-ad50-4009-9ee6-04b8204b5b1a/osd-block-26b1d4b7-2425-4a2f-912b-111cf66a5970
> >       block uuid                j4s9lv-wS9n-xg2W-I4Y0-fUSu-Vuvl-9gOB2P
> >       cephx lockbox secret
> >       cluster fsid              2c565e24-7850-47dc-a751-a6357cbbaf2a
> >       cluster name              ceph
> >       crush device class
> >       db device                 /dev/ceph-5aa92e38-077b-48e2-bda6-5b7db7b7701c/osd-db-bfd11468-d109-4f85-9723-75976f51bfb9
> >       db uuid                   d9MZ2r-ImXX-Xod0-TNDS-tqi5-oG5Y-wrXFtW
> >       encrypted                 0
> >       osd fsid                  26b1d4b7-2425-4a2f-912b-111cf66a5970
> >       osd id                    8
> >       osdspec affinity          osd_spec_default
> >       type                      db
> >       vdo                       0
> >       devices                   /dev/nvme0n1
> > 
> >   [block]       /dev/ceph-eaf5f0d7-ad50-4009-9ee6-04b8204b5b1a/osd-block-26b1d4b7-2425-4a2f-912b-111cf66a5970
> > 
> >       block device              /dev/ceph-eaf5f0d7-ad50-4009-9ee6-04b8204b5b1a/osd-block-26b1d4b7-2425-4a2f-912b-111cf66a5970
> >       block uuid                j4s9lv-wS9n-xg2W-I4Y0-fUSu-Vuvl-9gOB2P
> >       cephx lockbox secret
> >       cluster fsid              2c565e24-7850-47dc-a751-a6357cbbaf2a
> >       cluster name              ceph
> >       crush device class
> >       db device                 /dev/ceph-5aa92e38-077b-48e2-bda6-5b7db7b7701c/osd-db-bfd11468-d109-4f85-9723-75976f51bfb9
> >       db uuid                   d9MZ2r-ImXX-Xod0-TNDS-tqi5-oG5Y-wrXFtW
> >       encrypted                 0
> >       osd fsid                  26b1d4b7-2425-4a2f-912b-111cf66a5970
> >       osd id                    8
> >       osdspec affinity          osd_spec_default
> >       type                      block
> >       vdo                       0
> >       devices                   /dev/sdi
> > 
> > (part of listing...)
> > 
> > Sincerely
> > Jan Marek
> > 
> > 
> > On Mon, Jul 10, 2023 at 08:10:58 CEST, Eugen Block wrote:
> > > Hi,
> > > 
> > > if you don't specify a different device for WAL it will be automatically
> > > colocated on the same device as the DB. So you're good with this
> > > configuration.
> > > 
> > > Regards,
> > > Eugen
> > > 
> > > 
> > > Quoting Jan Marek <jmarek@xxxxxx>:
> > > 
> > > > Hello,
> > > >
> > > > I've tried to add to the CEPH cluster an OSD node with 12 rotational
> > > > disks and 1 NVMe. My YAML was this:
> > > >
> > > > service_type: osd
> > > > service_id: osd_spec_default
> > > > service_name: osd.osd_spec_default
> > > > placement:
> > > >   host_pattern: osd8
> > > > spec:
> > > >   block_db_size: 64G
> > > >   data_devices:
> > > >     rotational: 1
> > > >   db_devices:
> > > >     paths:
> > > >     - /dev/nvme0n1
> > > >   filter_logic: AND
> > > >   objectstore: bluestore
> > > >
> > > > Now I have 12 OSDs with the DB on the NVMe device, but without WAL.
> > > > How can I add a WAL to these OSDs?
> > > >
> > > > The NVMe device still has 128GB of free space.
> > > >
> > > > Thanks a lot.
> > > >
> > > > Sincerely
> > > > Jan Marek
> > > > --
> > > > Ing. Jan Marek
> > > > University of South Bohemia
> > > > Academic Computer Centre
> > > > Phone: +420389032080
> > > > http://www.gnu.org/philosophy/no-word-attachments.cs.html
> > > 
> > > 
> > > _______________________________________________
> > > ceph-users mailing list -- ceph-users@xxxxxxx
> > > To unsubscribe send an email to ceph-users-leave@xxxxxxx
> > 
> > --
> > Ing. Jan Marek
> > University of South Bohemia
> > Academic Computer Centre
> > Phone: +420389032080
> > http://www.gnu.org/philosophy/no-word-attachments.cs.html
> 
> 
> 

-- 
Ing. Jan Marek
University of South Bohemia
Academic Computer Centre
Phone: +420389032080
http://www.gnu.org/philosophy/no-word-attachments.cs.html


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
