Re: issue on adding SSD to SATA cluster for db/wal

Hi,

Does 'show-label' reflect your changes for block.db?

---snip---
host2:~ # ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-3/
inferring bluefs devices from bluestore path
{
    "/var/lib/ceph/osd/ceph-3/block": {
[...]
    },
    "/var/lib/ceph/osd/ceph-3/block.db": {
        "osd_uuid": "ead6e380-6a17-4ee3-992d-849bbc75a091",
        "size": 3221225472,
        "btime": "2020-10-12 16:45:36.117672",
        "description": "bluefs db"
    }
}
---snip---


Is the block.db symlink present in /var/lib/ceph/osd/ceph-<ID>?
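
For example, a quick check (the OSD path here just mirrors the show-label output above):

---snip---
host2:~ # ls -l /var/lib/ceph/osd/ceph-3/block.db
---snip---

The symlink should point to the db partition or LV you attached.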

Assuming you're using ceph-volume with lvm (since you're on Nautilus), are the LV tags correct?

---snip---
[ceph: root@host2 /]# lvs -o lv_tags
WARNING: PV /dev/vda2 in VG vg01 is using an old PV header, modify the VG to update.
  LV Tags
ceph.block_device=/dev/ceph-88d97a7c-2a2a-4f81-8da4-d994d53d203b/osd-block-e5ec5f6b-333b-42d5-b760-175c1c528f8a,ceph.block_uuid=ndq0ny-CwaO-wO2d-arcc-tQqQ-B2FW-wIp1wc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f279f36-811c-3270-9f9d-58335b1bb9c0,ceph.cluster_name=ceph,ceph.crush_device_class=None,ceph.db_device=/dev/ceph-5d92c94b-9dde-4d38-ba2a-0f3e766162d1/osd-db-add92714-e279-468e-b0a2-dca494fbd5bd,ceph.db_uuid=add92714-e279-468e-b0a2-dca494fbd5bd,ceph.encrypted=0,ceph.osd_fsid=e5ec5f6b-333b-42d5-b760-175c1c528f8a,ceph.osd_id=2,ceph.osdspec_affinity=default,ceph.type=db,ceph.vdo=0
---snip---
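
If a tag is wrong or missing, it can be adjusted with lvchange; a rough sketch (the values in angle brackets are placeholders, not taken from your cluster):

---snip---
lvchange --deltag "ceph.db_device=<old value>" <vg>/<osd-block-lv>
lvchange --addtag "ceph.db_device=<path to new db LV>" <vg>/<osd-block-lv>
---snip---

ceph-volume reads these tags on activation, so they need to match the new db device.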


Regards,
Eugen


Quoting Zhenshi Zhou <deaderzzs@xxxxxxxxx>:

Hi all,

I have a 14.2.15 cluster with all SATA OSDs. Now we plan to add SSDs to the
cluster for db/wal usage. I checked the docs and found that the
'ceph-bluestore-tool' command can handle this.

I added db/wal devices to an OSD in my test environment, but in the end it
still gets this warning:
"osd.0 spilled over 64 KiB metadata from 'db' device (7 MiB used of 8.0
GiB) to slow device"

My procedure (/dev/sdd is the new disk for db/wal):

sgdisk --new=1:0:+8GB --change-name=1:bluestore_block_db_0 \
  --partition-guid=1:$(uuidgen) --mbrtogpt -- /dev/sdd
sgdisk --new=2:0:+1GB --change-name=2:bluestore_block_wal_0 \
  --partition-guid=2:$(uuidgen) --mbrtogpt -- /dev/sdd
systemctl stop ceph-osd@0
CEPH_ARGS="--bluestore-block-db-size 8589934592" ceph-bluestore-tool \
  bluefs-bdev-new-db --path /var/lib/ceph/osd/ceph-0 --dev-target /dev/sdd1
ceph-bluestore-tool bluefs-bdev-new-wal --path /var/lib/ceph/osd/ceph-0/ \
  --dev-target /dev/sdd2
ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-0/
systemctl start ceph-osd@0
ceph tell osd.0 compact
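
(For completeness: ceph-bluestore-tool also provides a bluefs-bdev-migrate command to move BlueFS data already written to the slow device onto the db device. A sketch using the same paths as above; check the man page of your release before relying on it:)

---snip---
ceph-bluestore-tool bluefs-bdev-migrate --path /var/lib/ceph/osd/ceph-0 \
  --devs-source /var/lib/ceph/osd/ceph-0/block \
  --dev-target /var/lib/ceph/osd/ceph-0/block.db
---snip---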

The warning message indicates that some metadata still resides on the slow
device.
How can I deal with this issue?

Thanks


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


