Re: rbd device name reuse frequency

Thanks, Ilya.

The challenge is that, in our environment, we could have multiple
containers using the same volume on the same host, so we map the volume
multiple times and unmap by device when one of the containers
completes/terminates, so that we don't unmap the mapping that is still in
use by another container.
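
To make this concrete, the sequence looks roughly like the following (just
a sketch; the pool/image names are made up):

    $ rbd map rbdpool/shared-vol      # container A starts; rbd prints the mapped device
    /dev/rbd0
    $ rbd map rbdpool/shared-vol      # container B starts; same image, second mapping
    /dev/rbd1
    $ rbd unmap /dev/rbd0             # container A finished; B's /dev/rbd1 stays mapped
    $ rbd showmapped                  # check which mappings remain

Unmapping by device rather than by image name is what lets us tear down
only the mapping that belongs to the finished container.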

Is it possible to add custom udev rules to control this behavior?
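
If it is, something along these lines is what I had in mind (only a
sketch; I am going from memory on the stock rbd udev rule and the
ceph-rbdnamer output fields, and I realize a custom rule could probably
only add alternative names, not change which /dev/rbdN index the kernel
assigns):

    # /etc/udev/rules.d/99-rbd-per-mapping.rules (hypothetical)
    # adds a symlink that embeds the kernel device name alongside the
    # stock /dev/rbd/<pool>/<image> links, e.g.
    # /dev/rbd/by-mapping/rbdpool-shared-vol-rbd1
    KERNEL=="rbd[0-9]*", ENV{DEVTYPE}=="disk", PROGRAM="/usr/bin/ceph-rbdnamer %k", SYMLINK+="rbd/by-mapping/%c{1}-%c{2}-%k"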

Thanks,
Shridhar


On Mon, 20 Apr 2020 at 01:19, Ilya Dryomov <idryomov@xxxxxxxxx> wrote:

> On Sat, Apr 18, 2020 at 6:53 AM Void Star Nill <void.star.nill@xxxxxxxxx>
> wrote:
> >
> > Hello,
> >
> > How frequently do RBD device names get reused? For instance, when I map a
> > volume on a client and it gets mapped to /dev/rbd0 and when it is
> > unmapped, does a subsequent map reuse this name right away?
>
> Yes.
>
> >
> > I ask this question, because in our use case, we try to unmap a volume
> > and we are thinking about adding some retries in case unmap fails for
> > any reason. But I am concerned about race conditions such as the following:
> > 1. thread 1 calls unmap, but the call times out and returns in the
> > process, but in the background the unmap request does go through and
> > the device gets removed
> > 2. thread 1 does a retry based on the device name.
> >
> > If, between 1 and 2, another thread tries to map another volume and it
> > gets mapped to the same device right after the previous unmap was
> > successful, then in step 2 we will be trying to unmap a device that
> > doesn't belong to the previous map.
> >
> > So I want to know how frequently the device names get reused, and if
> > there is a way to keep using new names until they wrap around after a
> > max limit.
>
> No, there is no way.
>
> Use image names instead of device names, i.e. "rbd unmap myimage"
> instead of "rbd unmap /dev/rbd0".
>
> Thanks,
>
>                 Ilya
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


