Re: rbd persistent cache configuration

On Thu, Jan 4, 2024 at 4:41 PM Peter <petersun@xxxxxxxxxxxx> wrote:
>
> I followed the document below to set up an image-level RBD persistent cache,
> however I get error output when I use the commands provided by the document.
> I have put my commands and descriptions below.
> Can anyone give some instructions? Thanks in advance.
>
> https://docs.ceph.com/en/pacific/rbd/rbd-persistent-write-back-cache/
>
> https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5/html/block_device_guide/ceph-block-devices#enabling-persistent-write-log-cache_block
> I tried the host-level client commands; I got no error, however I was not able to get any cache usage output:
> "ceph config set client rbd_persistent_cache_mode ssd
> ceph config set client rbd_plugins pwl_cache"
>
>
>
> [root@master-node1 ceph]# rbd info sas-pool/testdrive
> rbd image 'testdrive':
>         size 40 GiB in 10240 objects
>         order 22 (4 MiB objects)
>         snapshot_count: 0
>         id: 3de76a7e7c519
>         block_name_prefix: rbd_data.3de76a7e7c519
>         format: 2
>         features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
>         op_features:
>         flags:
>         create_timestamp: Thu Jun 29 02:03:41 2023
>         access_timestamp: Thu Jun 29 07:19:40 2023
>         modify_timestamp: Thu Jun 29 07:18:00 2023
>
> I checked that the exclusive-lock feature is already enabled,
> and when I run the following commands I get error output.
> [root@master-node1 ceph]# rbd config image set sas-pool/testdrive rbd_persistent_cache_mode ssd
> rbd: invalid config key: rbd_persistent_cache_mode
>
> [root@master-node1 ceph]# rbd config image set sas-pool/testdrive rbd_plugins pwl_cache
> rbd: invalid config key: rbd_plugins

Hi Peter,

What is the output of "rbd --version" on this node?

Were "ceph config set client rbd_persistent_cache_mode ssd" and "ceph
config set client rbd_plugins pwl_cache" above ran on a different node?
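
(If the rbd CLI on that node predates Pacific, it would not recognize the
PWL cache keys, which would explain the "invalid config key" errors. A
quick way to check, assuming these can be run on the node that actually
opens the image:

    rbd --version
    ceph versions
    ceph config dump | grep -E 'rbd_plugins|rbd_persistent_cache'

The last command just confirms that the "client" section settings you
applied earlier are stored in the cluster configuration database.)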

>
> root@node1:~# rbd status sas-pool/testdrive
> Watchers:
>         watcher=10.1.254.51:0/1544956346 client.39553300 cookie=140244238214096
>
>
> I expect the output to include the persistent cache state, like below:
>
> $ rbd status rbd/foo
> Watchers:
>         watcher=10.10.0.102:0/1061883624 client.25496 cookie=140338056493088
> Persistent cache state:
>         host: sceph9
>         path: /mnt/nvme0/rbd-pwl.rbd.101e5824ad9a.pool
>         size: 1 GiB
>         mode: ssd
>         stats_timestamp: Sun Apr 10 13:26:32 2022
>         present: true   empty: false    clean: false
>         allocated: 509 MiB
>         cached: 501 MiB
>         dirty: 338 MiB
>         free: 515 MiB
>         hits_full: 1450 / 61%
>         hits_partial: 0 / 0%
>         misses: 924
>         hit_bytes: 192 MiB / 66%
>         miss_bytes: 97 MiB

Normally, the output that you are expecting would be there only while
the image is open.
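
For example (a rough sketch, using a write benchmark just to keep the
image open from a librbd client; any librbd consumer such as QEMU or
rbd-nbd would do):

    rbd bench --io-type write --io-total 1G sas-pool/testdrive &
    rbd status sas-pool/testdrive

While the bench holds the image open (and the exclusive lock), the
"Persistent cache state" section should appear in the "rbd status" output.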

Thanks,

                Ilya
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



