Re: rbd device name reuse frequency

Hi Shridhar,

As Ilya suggested:

Use image names instead of device names, i.e. "rbd unmap myimage"
instead of "rbd unmap /dev/rbd0".

I think this will solve the problem. You just need to advise your
orchestrator/hypervisor to use image names instead of device names like /dev/rbd0.

As for a udev rule, I don't think it would help, because Ceph creates a
brand-new device every time, but I'm not sure.
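
For what it's worth, here is a minimal sketch in Python (hypothetical
pool/image names, and assuming the rbd CLI is available on the client) of a
retry keyed on the image spec rather than on /dev/rbdN, so a retry after a
timed-out but actually successful unmap can never touch a reused device node:

    import subprocess

    def unmap_image(image_spec, retries=3):
        # Unmap by image spec ("pool/image"), not by /dev/rbdN, so that a
        # retry issued after a timed-out unmap cannot accidentally unmap a
        # device node that was reused by another volume in the meantime.
        for attempt in range(1, retries + 1):
            result = subprocess.run(["rbd", "unmap", image_spec],
                                    capture_output=True, text=True)
            if result.returncode == 0:
                return True
            # A failure here may mean the earlier unmap actually succeeded
            # (the image is no longer mapped) or that the device is still
            # busy; either way, retrying against the image name is safe.
            print("unmap attempt %d failed: %s"
                  % (attempt, result.stderr.strip()))
        return False

    if __name__ == "__main__":
        unmap_image("mypool/myimage")  # hypothetical names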

On Mon, Apr 20, 2020 at 10:27 PM Void Star Nill <void.star.nill@xxxxxxxxx>
wrote:

> Hi Jason,
>
> >
> > Why would you need to map the same image multiple times? Is that just a
> > limitation of your container management system? For example, k8s w/
> > ceph-csi will only map a PVC once per node even if it's used by multiple
> > containers. When the last container is stopped, it will then unmap the
> > image.
> >
>
> The way we orchestrate the RBD volumes today makes it difficult to do it
> the way k8s CSI does. Volume map and unmap are currently managed by
> individual hypervisor threads/subprocesses so we just keep a 1:1 mapping of
> devices to containers.
>
> Thanks,
> Shridhar
>
>
>
> >
> >
> >>
> >> Is it possible to add custom udev rules to control this behavior?
> >>
> >> Thanks,
> >> Shridhar
> >>
> >>
> >> On Mon, 20 Apr 2020 at 01:19, Ilya Dryomov <idryomov@xxxxxxxxx> wrote:
> >>
> >> > On Sat, Apr 18, 2020 at 6:53 AM Void Star Nill <void.star.nill@xxxxxxxxx>
> >> > wrote:
> >> > >
> >> > > Hello,
> >> > >
> >> > > How frequently do RBD device names get reused? For instance, when I
> >> > > map a volume on a client and it gets mapped to /dev/rbd0, and when it
> >> > > is unmapped, does a subsequent map reuse this name right away?
> >> >
> >> > Yes.
> >> >
> >> > >
> >> > > I ask this question because, in our use case, we try to unmap a
> >> > > volume and we are thinking about adding some retries in case unmap
> >> > > fails for any reason. But I am concerned about race conditions such
> >> > > as the following:
> >> > > 1. thread 1 calls unmap, but the call times out and returns in the
> >> > > process, while in the background the unmap request does go through
> >> > > and the device gets removed
> >> > > 2. thread 1 does a retry based on the device name.
> >> > >
> >> > > If between 1 and 2, another thread tries to map another volume and it
> >> > > gets mapped to the same device right after the previous unmap was
> >> > > successful, then in step 2 we will be trying to unmap a device that
> >> > > doesn't belong to the previous map.
> >> > >
> >> > > So I want to know how frequently the device names get reused, and
> >> > > whether there is a way to keep assigning new names until they wrap
> >> > > around after some maximum.
> >> >
> >> > No, there is no way.
> >> >
> >> > Use image names instead of device names, i.e. "rbd unmap myimage"
> >> > instead of "rbd unmap /dev/rbd0".
> >> >
> >> > Thanks,
> >> >
> >> >                 Ilya
> >> >
> >>
> >>
> >
> > --
> > Jason
> >
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


