Hello Eugen,

On Mon, Jul 10, 2023 at 10:02:58 CEST, Eugen Block wrote:
> It's fine, you don't need to worry about the WAL device; it is automatically
> created on the NVMe if the DB is there. Having a dedicated WAL device would
> only make sense if, for example, your data devices are on HDD, your RocksDB on
> "regular" SSDs, and you also have NVMe devices. But since you already use
> NVMe for the DB, you don't need to specify a WAL device.

OK :-)

> > > Here is a problem:
> > >
> > > # ceph daemon osd.8 perf dump bluefs
> > > Can't get admin socket path: unable to get conf option admin_socket for
> > > osd: b"error parsing 'osd': expected string of the form TYPE.ID, valid
> > > types are: auth, mon, osd, mds, mgr, client\n"
> > >
> > > I'm on the host on which this OSD 8 is.
>
> I should have mentioned that you need to enter into the container first
>
> cephadm enter --name osd.8
>
> and then
>
> ceph daemon osd.8 perf dump bluefs

Yes, that was the problem:

ceph daemon osd.8 perf dump bluefs | grep wal
    "wal_total_bytes": 0,
    "wal_used_bytes": 0,
    "files_written_wal": 535,
    "bytes_written_wal": 121443819520,
    "max_bytes_wal": 0,
    "alloc_unit_wal": 0,
    "read_random_disk_bytes_wal": 0,
    "read_disk_bytes_wal": 0,

So I can now see that it uses the WAL.

Once again, thanks a lot.

Sincerely
Jan Marek
--
Ing. Jan Marek
University of South Bohemia
Academic Computer Centre
Phone: +420389032080
http://www.gnu.org/philosophy/no-word-attachments.cs.html
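As a side note for anyone reading the archive: `ceph daemon ... perf dump bluefs` emits JSON, so instead of grepping you can parse it programmatically. A minimal sketch (the `wal_in_use` helper and the embedded sample are illustrative, using the counters reported for osd.8 above) that checks whether an OSD is actually writing through the BlueFS WAL:

```python
import json

# Sample output of `ceph daemon osd.8 perf dump bluefs`, trimmed to the
# WAL-related counters shown earlier in this thread. In practice you would
# capture the real output from inside the OSD container.
sample = json.loads("""
{
  "bluefs": {
    "wal_total_bytes": 0,
    "wal_used_bytes": 0,
    "files_written_wal": 535,
    "bytes_written_wal": 121443819520
  }
}
""")

def wal_in_use(perf_dump):
    """Return True if the OSD has written any data through the BlueFS WAL."""
    bluefs = perf_dump["bluefs"]
    return bluefs["bytes_written_wal"] > 0

# wal_total_bytes == 0 only means there is no *dedicated* WAL partition;
# bytes_written_wal > 0 shows WAL writes still happen (on the DB device here).
print(wal_in_use(sample))
```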
_______________________________________________ ceph-users mailing list -- ceph-users@xxxxxxx To unsubscribe send an email to ceph-users-leave@xxxxxxx