Re: SCSI inquiry interface for RBD devices

On Fri, Aug 12, 2016 at 4:17 PM, Ilya Dryomov <idryomov@xxxxxxxxx> wrote:
> On Fri, Aug 12, 2016 at 1:38 AM, Kamble, Nitin A
> <Nitin.Kamble@xxxxxxxxxxxx> wrote:
>>
>> SCSI block devices have an SG_IO ioctl interface through which applications can identify a device uniquely, without relying on device file names or any other mechanism.
>>
>> For more details, look at the man page of the sg_inq command: https://manned.org/sg_inq
>>
>> Some non-SCSI devices also support this SG_IO inquiry interface; examples include SATA, NVMe, and virtual disks.
>>
>> It is desirable to identify every RBD device uniquely just from its device file; this would allow RBD devices to serve as drop-in replacements for SCSI/SATA devices. Currently, the RBD code in ceph-client does not support the SG_IO ioctl inquiry interface for identifying a device uniquely. Has adding the SG_IO ioctl interface to RBD devices ever been considered before?
>
> No, SG_IO hasn't come up before, and we don't really have a good
> established way of doing that at all - if we had one, wiring up SG_IO
> (assuming no objections from SCSI maintainers) wouldn't be a problem.
> One approach is to concatenate a cluster-wide UUID with pool, image and
> snapshot IDs, but the problem there is that images can be moved between
> pools and clusters and v1 images don't have an image ID.
>
> Mike recently ran into the need to uniquely identify RBD devices for
> his multipath-tools work and I think he ended up using some variant of
> the above approach, grabbing IDs from /sys/bus/rbd.  (The cluster_uuid
> and snap_id attributes currently don't exist, but will be added in the
> next kernel release.)
>
> It would make this a lot easier if we started generating a unique UUID
> for the image and each snapshot at image/snapshot creation time.  Jason?

I've filed http://tracker.ceph.com/issues/17012 for UUIDs.  Once it's
in, exporting that in various ways should be trivial.

Thanks,

                Ilya