Are setting 'ceph auth caps' and/or adding a cache pool I/O-disruptive operations?

Hi,

I'm trying to figure out whether setting auth caps and/or adding a cache pool
is an I/O-disruptive operation, i.e. whether caps reset to 'none' for a brief
moment or client I/O momentarily stops for some other reason.

For example, I had the following auth setting in my 16.2.x cluster:

client.cinder
        key: BLA
        caps: [mgr] profile rbd pool=volumes, profile rbd pool=volumes-nvme, profile rbd pool=ec-volumes-meta, profile rbd pool=ec-volumes-data, profile rbd pool=vms
        caps: [mon] profile rbd
        caps: [osd] profile rbd pool=volumes, profile rbd pool=volumes-nvme, profile rbd pool=ec-volumes-meta, profile rbd pool=ec-volumes-data, profile rbd pool=vms, profile rbd-read-only pool=images

I executed the following command to grant client 'cinder' access to the
'volumes-cache' pool:

ceph auth caps client.cinder \
    mgr 'profile rbd pool=volumes, profile rbd pool=volumes-nvme, profile rbd pool=ec-volumes-meta, profile rbd pool=ec-volumes-data, profile rbd pool=vms, profile rbd pool=volumes-cache' \
    mon 'profile rbd' \
    osd 'profile rbd pool=volumes, profile rbd pool=volumes-nvme, profile rbd pool=ec-volumes-meta, profile rbd pool=ec-volumes-data, profile rbd pool=vms, profile rbd-read-only pool=images, profile rbd pool=volumes-cache'
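
Since 'ceph auth caps' replaces the entity's whole caps string rather than
appending to it, the command above repeats every existing cap plus the new
one. The result can then be confirmed by simply re-reading the caps, e.g.:

# re-read the caps for client.cinder after the change
ceph auth get client.cinder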

That is, I just added the necessary access to the 'volumes-cache' pool,
nothing else. I then set the 'volumes-cache' pool as an overlay for
'volumes' ('volumes-cache' had previously been set up as a writeback cache
tier for 'volumes'):

ceph osd tier set-overlay volumes volumes-cache
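
For context, the cache tier itself had been created earlier with the usual
sequence, roughly:

# attach 'volumes-cache' as a cache tier of 'volumes' and switch it to writeback mode
ceph osd tier add volumes volumes-cache
ceph osd tier cache-mode volumes-cache writeback

As far as I understand, 'set-overlay' is the step that actually starts
redirecting client traffic for 'volumes' through the cache pool.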

One of these operations, i.e. either 'ceph auth caps' or 'ceph osd tier
set-overlay', resulted in a brief interruption of client I/O to the
'volumes' pool, which caused some VMs (qemu, librbd) running on the clients
to lose their virtual disks. I'm not sure which one it was, and now I'm
overly cautious about touching either of these things :-)
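
If it helps narrow this down, what I can still check after the fact is
whether the affected images lost their watchers and whether any client
addresses ended up blocklisted, e.g. something like the following
('volumes/some-image' is just a placeholder name):

# does the image still have an active watcher? (placeholder image name)
rbd status volumes/some-image

# list any client addresses currently blocklisted by the cluster
ceph osd blocklist ls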

I would very much appreciate any advice!

Best regards,
Zakhar


