Actually, the absence of a block.wal symlink is a good enough indication that
the DB and WAL are merged. But you can also inspect the OSD startup log or
check the bluefs perf counters after some load - the corresponding WAL
counters (total/used) should be zero.
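For example (a sketch; it assumes the OSD id is 0, the admin socket is
reachable on the OSD node, and the default log location - adjust to your
setup):

$ ceph daemon osd.0 perf dump | grep -E '"wal_(total|used)_bytes"'
$ grep -i bluefs /var/log/ceph/ceph-osd.0.log

With DB and WAL on the same device, wal_total_bytes and wal_used_bytes should
stay at 0. How much the log grep shows depends on your debug_bluefs level.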
Hi all,
I'm setting up my Ceph cluster (latest release of Luminous) and
I'm currently configuring OSDs with the WAL and DB on an NVMe disk.
The OSD data are on a SATA disk, and both the WAL and DB are on the
same partition of the NVMe disk.
After creating partitions on the NVMe (raw block partitions,
without a filesystem), I created my first OSD from the admin node:
$ ceph-deploy osd create --debug --bluestore --data /dev/sda --block-db /dev/nvme0n1p1 node-osd0
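(For reference, a raw partition like that can be created with sgdisk; this is
only an illustrative sketch - the partition number, size and label are
assumptions, not values from the setup above:

$ sgdisk --new=1:0:+60G --change-name=1:'ceph block.db' /dev/nvme0n1
$ partprobe /dev/nvme0n1

ceph-deploy hands the partition to ceph-volume as-is, without creating a
filesystem on it.)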
It works perfectly, but I just want to clarify a point
regarding the WAL: I understood that if we specify a --block-db
option without a --block-wal, the WAL is stored on the same
partition as the DB.
OK, I'm sure it works like that, but how can I check where the
WAL is really stored? (There is no block.wal symbolic link in
/var/lib/ceph/osd/ceph-0 [1].)
Is there somewhere, or a Ceph command, where I can check this?
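(One way to check this directly - a sketch, assuming ceph-bluestore-tool is
available on the OSD node and using the data path from [1]; you may need the
OSD stopped for the tool to read the device labels:

$ ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-0

If no separate WAL was configured, only the main block device and the
block.db device should appear in the output.)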
I just want to be sure of my options before starting
deployment on my 120 OSDs!
Thanks for your clarifications,
Hervé
[1] # ls -l
/var/lib/ceph/osd/ceph-0/
total 48
-rw-r--r-- 1 ceph ceph 465 Aug 16 14:36 activate.monmap
lrwxrwxrwx 1 ceph ceph 93 Aug 16 14:36 block ->
/dev/ceph-766bd78c-ed1a-4e27-8b4d-7adc4c4f2f0d/osd-block-98bfb597-009b-4e88-bc5e-dd22587d21fe
lrwxrwxrwx 1 ceph ceph 15 Aug 16 14:36 block.db ->
/dev/nvme0n1p1
-rw-r--r-- 1 ceph ceph 2 Aug 16 14:36 bluefs
-rw-r--r-- 1 ceph ceph 37 Aug 16 14:36 ceph_fsid
-rw-r--r-- 1 ceph ceph 37 Aug 16 14:36 fsid
-rw------- 1 ceph ceph 55 Aug 16 14:36 keyring
-rw-r--r-- 1 ceph ceph 8 Aug 16 14:36 kv_backend
-rw-r--r-- 1 ceph ceph 21 Aug 16 14:36 magic
-rw-r--r-- 1 ceph ceph 4 Aug 16 14:36 mkfs_done
-rw-r--r-- 1 ceph ceph 41 Aug 16 14:36 osd_key
-rw-r--r-- 1 ceph ceph 6 Aug 16 14:36 ready
-rw-r--r-- 1 ceph ceph 10 Aug 16 14:36 type
-rw-r--r-- 1 ceph ceph 2 Aug 16 14:36 whoami