Re: sata_nv and RAID1

Diego M. Vadell <dvadell@xxxxxxxxxxxxxx> wrote:
> On Monday 13 June 2005 13:07, Peter T. Breuer wrote:
> > > > I don't know either. For the FR1 code I implemented three new ioctls ..
> > > > all of them sent out by the FR1 (raid1) driver.
> > > >
> > > >   1) notify component that it is in an array and which
> > > >   2) notify component that it is no longer in an array and which
> > > >   3) send component a callback function through which it can
> > > >      SET_FAULTY and re-HOTADD itself to the array it knows it is in
> > > >      as need be.
> > > >
> > > > Maybe hotplugging has those facilities. I don't know.
> > > >
> > > > Cooperating devices would have to implement the ioctls.
> > >
> > >    If I understand right, even if I used FR1, it won't pass the test
> >
> > Yes it will. The device driver will detect something wrong (if the
> > device driver doesn't know, NOBODY does) and call back to the raid array
> > driver to say "set me faulty".
> >
> > That's the whole idea.
> >
> > When the device driver senses its device is well again, it will call
> > back and say "hot add me again".

> But not as it is today...


Yes, "as it is today".

> When you say "Cooperating devices would have to
> implement the ioctls", you mean that I have to touch sata_nv's source
> code to implement those ioctls, am I right?

One has to implement or have implemented those ioctls in the driver of
whichever device you are interested in, in order to cooperate properly
with fr1 (in that respect).  That goes without saying.  I merely provided
the infrastructure in fr1 - indeed, I could not have provided anything
else because I do not control the code for anything else. Fr1 will send
the ioctls I listed above (or sketched, rather) to any component device.
It is up to that component device to make use of them.

There may well be another scheme/architecture already available. I
don't know. In my abject ignorance I simply implemented an adequate
scheme for my purposes, and waited for anyone to tell me of a better
one.  If hotplugging is supported by the target device, it may well be
the case that you could implement at least ioctl (3) via hotplugging.

Ioctls (1) and (2) are merely informative. They courteously let the
target device know that it has been included in (or excluded from) a
particular md array. This allows the target device to make any mode
shifts that may be appropriate to running in a raid array, such as
exposing errors immediately to the array instead of blocking and
retrying internally (or vice versa). It also allows the target device
to decide when, or whether, to use the callback function provided in
ioctl (3), through which it can report its current state of health to
the raid arrays it has been told it belongs to.
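As a concrete sketch of how a cooperating component driver might handle the three ioctls, here is a minimal model in C. All names and command numbers are hypothetical stand-ins, not the actual fr1 identifiers (which are only sketched in this thread): the driver records which array it belongs to, stashes the callback, and phones home when its health changes.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical command numbers and callback signature -- illustrative
 * only; the real fr1 values are not given in this thread. */
#define MD_NOTIFY_INCLUDED   1  /* ioctl (1): you are now in array N  */
#define MD_NOTIFY_EXCLUDED   2  /* ioctl (2): you are no longer in N  */
#define MD_REGISTER_CALLBACK 3  /* ioctl (3): here is a phone number  */

enum md_report { MD_SET_FAULTY, MD_HOT_ADD };

/* Per-device state a cooperating driver would keep. */
struct component_state {
    int in_array;   /* which md array we belong to, -1 if none */
    void (*report)(int array, enum md_report what);  /* the callback */
};

static void component_ioctl(struct component_state *s,
                            unsigned cmd, int array,
                            void (*cb)(int, enum md_report))
{
    switch (cmd) {
    case MD_NOTIFY_INCLUDED:
        s->in_array = array;   /* remember who wants health reports */
        break;
    case MD_NOTIFY_EXCLUDED:
        if (s->in_array == array)
            s->in_array = -1;  /* nobody to report to any more */
        break;
    case MD_REGISTER_CALLBACK:
        s->report = cb;        /* stash the callback for later */
        break;
    }
}

/* When the driver senses trouble (or recovery), it phones home. */
static void component_health_event(struct component_state *s, int healthy)
{
    if (s->in_array >= 0 && s->report)
        s->report(s->in_array, healthy ? MD_HOT_ADD : MD_SET_FAULTY);
}
```

Note that the driver reports nothing once it has been excluded from the array: ioctls (1) and (2) are exactly what tell it whether anyone is listening.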


> If that is all I have to do, I will give it a try (supposing my boss does not
> make me use the raid in the motherboard).

The enbd code (ftp://oboe.it.uc3m.es/pub/Programs/enbd-2.4.32pre.tgz)
implements the ioctls in question.

If you know of another scheme, please feel free to tell me about it.

My idea was that the target device should preemptively inform the array
whether it is in good or bad health. This implies that it should know
which array it is included in, in order to know who is interested in
its health. Hence ioctls (1) and (2), which tell it who wishes to be
informed about its health, and ioctl (3), which gives it a telephone
number to use.

Ioctl (3) could be implemented the other way round - that is, it could
be simply the md array which receives an existing SETFAULTY or HOTADD
ioctl. I don't know why I chose to send across a callback function to
the target device instead. Probably because I was aware of locking
difficulties.
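Sketched in the same hypothetical style, that inverse design would put the handler on the array side: the component submits a set-faulty or hot-add request to the md driver itself, rather than holding a callback. In this direction the array, not the component, owns the locking around the state change, which is plausibly where the locking difficulties mentioned above come in. A toy model:

```c
#include <assert.h>

enum md_request { REQ_SET_FAULTY, REQ_HOT_ADD };

/* Minimal model of one mirror member's state as the array sees it. */
struct member { int faulty; };

/* Hypothetical array-side entry point: the component asks the array
 * to mark it faulty or re-add it.  Roughly the effect of the existing
 * SET_DISK_FAULTY / HOT_ADD_DISK md ioctls, reduced to a flag. */
static int md_component_request(struct member *m, enum md_request req)
{
    switch (req) {
    case REQ_SET_FAULTY:
        if (m->faulty)
            return -1;      /* already out of the array */
        m->faulty = 1;
        return 0;
    case REQ_HOT_ADD:
        if (!m->faulty)
            return -1;      /* already active */
        m->faulty = 0;      /* real md would start a resync here */
        return 0;
    }
    return -1;
}
```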


Peter

-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
