Re: Are setting 'ceph auth caps' and/or adding a cache pool I/O-disruptive operations?

Yes, it was an attempt to address poor performance, which didn't go well.

Btw, this isn't the first time I've read that cache tiering is "kind of
deprecated", yet the documentation doesn't actually say so; instead it
explains how to set up a cache tier. Perhaps it should be made clearer
that adding a cache tier isn't a good idea.
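
For what it's worth, if I end up backing the tier out, my reading of the
cache-tiering documentation is that the removal sequence is roughly the
following (untested on my side, pool names as in this cluster):

# switch the cache to proxy mode so new I/O passes through to the base pool
ceph osd tier cache-mode volumes-cache proxy
# flush and evict everything still held in the cache pool
rados -p volumes-cache cache-flush-evict-all
# detach the overlay so clients talk to 'volumes' directly again
ceph osd tier remove-overlay volumes
# finally remove the tier relationship itself
ceph osd tier remove volumes volumes-cache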

/Z

On Fri, Nov 5, 2021 at 7:55 AM Anthony D'Atri <anthony.datri@xxxxxxxxx>
wrote:

> Cache tiers are kind of deprecated, they’re finicky and easy to run into
> trouble with.  Suggest avoiding.
>
> > On Nov 4, 2021, at 10:42 PM, Zakhar Kirpichenko <zakhar@xxxxxxxxx> wrote:
> >
> > Hi,
> >
> > I'm trying to figure out if setting auth caps and/or adding a cache pool
> > are I/O-disruptive operations, i.e. if caps reset to 'none' for a brief
> > moment or client I/O momentarily stops for other reasons.
> >
> > For example, I had the following auth setting in my 16.2.x cluster:
> >
> > client.cinder
> >        key: BLA
> >        caps: [mgr] profile rbd pool=volumes, profile rbd pool=volumes-nvme, profile rbd pool=ec-volumes-meta, profile rbd pool=ec-volumes-data, profile rbd pool=vms
> >        caps: [mon] profile rbd
> >        caps: [osd] profile rbd pool=volumes, profile rbd pool=volumes-nvme, profile rbd pool=ec-volumes-meta, profile rbd pool=ec-volumes-data, profile rbd pool=vms, profile rbd-read-only pool=images
> >
> > I executed the following command to grant the 'cinder' client access to
> > the 'volumes-cache' pool:
> >
> > ceph auth caps client.cinder \
> >   mgr 'profile rbd pool=volumes, profile rbd pool=volumes-nvme, profile rbd pool=ec-volumes-meta, profile rbd pool=ec-volumes-data, profile rbd pool=vms, profile rbd pool=volumes-cache' \
> >   mon 'profile rbd' \
> >   osd 'profile rbd pool=volumes, profile rbd pool=volumes-nvme, profile rbd pool=ec-volumes-meta, profile rbd pool=ec-volumes-data, profile rbd pool=vms, profile rbd-read-only pool=images, profile rbd pool=volumes-cache'
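> >
> > Side note: since 'ceph auth caps' replaces the entire cap set rather than
> > appending to it, I assume it's sensible to keep a copy of the current caps
> > around before changing them, e.g. something like:
> >
> > # dump the current key and caps to a keyring file
> > ceph auth get client.cinder -o client.cinder.keyring.bak
> > # the saved caps could then be restored with 'ceph auth import -i client.cinder.keyring.bak'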
> >
> > I.e. I just added the necessary access to the 'volumes-cache' pool,
> > nothing else. I then set the 'volumes-cache' pool as an overlay for
> > 'volumes' ('volumes-cache' had previously been set up as a writeback
> > cache tier for 'volumes'):
> >
> > ceph osd tier set-overlay volumes volumes-cache
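> >
> > As far as I can tell the resulting tier/overlay relationship can be checked
> > with something like this (the output fields below are from memory, so treat
> > them as approximate):
> >
> > # the base pool 'volumes' should now show 'volumes-cache' as its read_tier/write_tier
> > ceph osd pool ls detail | grep volumes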
> >
> > One of these operations, i.e. either 'ceph auth caps' or 'ceph osd tier
> > set-overlay', resulted in a brief interruption of client I/O towards the
> > 'volumes' pool, which caused some VMs (qemu, librbd) running on clients
> > to lose their virtual disks. I'm not sure which one, and now I'm overly
> > cautious about touching either of these things :-)
> >
> > I would very much appreciate any advice!
> >
> > Best regards,
> > Zakhar
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



