RE: Proposal: non-striping RAID4

First off, I totally agree with you that standards need to be defined. I also agree with you that in most situations you would not want to be frequently spinning drives down and up.

That being said, I was explaining particular applications where the drives are not frequently accessed, and that's where this would be useful.

Western Digital has green drives already in production (albeit without standards). See http://www.wdc.com/en/products/Products.asp?DriveID=336

I'm glad I double-checked, because I almost said these drives have variable speeds. It looks like the different models spin at different rates depending on how many platters they have. However, they do give the sleep/standby power usage as ~3W less than idle. I would imagine that number would be even larger for "non-green" drives. That can make a huge difference over time and across many drives.
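To put that ~3W delta in perspective, here's a rough back-of-the-envelope sketch. The drive count, sleep fraction, and electricity rate below are illustrative assumptions, not measurements:

```python
# Rough annual savings from letting idle drives sleep, based on the
# ~3 W idle-vs-standby delta quoted for WD's green drives. Drive
# count, sleep fraction, and electricity price are assumptions.
WATTS_SAVED_PER_DRIVE = 3.0   # ~3 W less in standby than at idle
DRIVES = 10                   # hypothetical archive array size
SLEEP_FRACTION = 0.9          # assume drives sleep ~90% of the time
PRICE_PER_KWH = 0.12          # assumed electricity rate, $/kWh

hours_asleep = SLEEP_FRACTION * 24 * 365
kwh_saved = WATTS_SAVED_PER_DRIVE * DRIVES * hours_asleep / 1000
print(f"~{kwh_saved:.0f} kWh/year saved, about "
      f"${kwh_saved * PRICE_PER_KWH:.2f}")
```

Not a fortune for one box, but it scales linearly with drive count, and the delta is presumably larger for 7200 RPM drives.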

Implementing this array type would not enforce drive spin-down, but rather allow it where standard RAID levels do not (within reason).

I think of this approach as a bunch of independent disks that sacrifice a little write performance for the peace of mind of knowing that you can rebuild one if necessary. It has limited application, but it would be a terrific option where applicable (e.g., an online archive that is infrequently accessed).
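For anyone unfamiliar with how the rebuild works: the parity scheme is the same single-parity XOR used by standard RAID 4, just applied across independent filesystems instead of stripes. A minimal Python sketch, where the byte strings stand in for same-offset blocks on each disk (contents and names are made up for illustration):

```python
from functools import reduce

def xor_blocks(*blocks: bytes) -> bytes:
    """XOR equal-length blocks together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, byte_tuple)
                 for byte_tuple in zip(*blocks))

# Three independent data disks, each with its own filesystem.
disk1 = b"archived file A."
disk2 = b"archived file B."
disk3 = b"archived file C."

# The dedicated parity disk holds the XOR of all data disks.
parity = xor_blocks(disk1, disk2, disk3)

# If any single disk dies, its blocks are the XOR of the parity
# disk and the surviving data disks.
rebuilt = xor_blocks(parity, disk1, disk3)
assert rebuilt == disk2
```

The same XOR identity is what keeps writes down to two spinning disks: the new parity can be computed as old_parity XOR old_block XOR new_block, touching only the target disk and the parity disk.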

Tony Germano


> Subject: RE: Proposal: non-striping RAID4
> Date: Thu, 22 May 2008 17:10:08 -0500
> From: david@xxxxxxxxxxxx
> To: tony_germano@xxxxxxxxxxx; linux-raid@xxxxxxxxxxxxxxx
>
> I personally have a real problem with sleepy drives. There is no ANSI
> specification, and no drive vendors are making disks (today) that are
> engineered for this. Granted spinning down disks saves power & heat,
> but since disks aren't yet engineered for frequent spin-ups, there are
> industry-wide concerns about disk life.
>
> Without the benefit of an ANSI spec for this mode (and why stop at
> sleep? There could be several lower-RPM speeds that sacrifice
> performance for heat/power savings), I just see too many problems for
> general use, so it would
> have to be limited to an appliance. The appliance vendor would probably
> have to carefully test & qualify disks, and ensure that applications
> won't constantly spin disks up and have problems with 30+ sec timeouts
> and such.
>
> I think the best next step is to write a bunch of emails to the various
> T10, T11, and T13 committee members and have them work out a spec so we
> have rules, and disks that are designed for this purpose.
>
> Undoubtedly there is a need, but without standards it will be a kludge.
>
>
> David lethe
>
> -----Original Message-----
> From: linux-raid-owner@xxxxxxxxxxxxxxx
> [mailto:linux-raid-owner@xxxxxxxxxxxxxxx] On Behalf Of Tony Germano
> Sent: Thursday, May 22, 2008 4:16 PM
> To: linux-raid@xxxxxxxxxxxxxxx
> Subject: Re: Proposal: non-striping RAID4
>
>
> I would like to bring this back to the attention of the group (from
> November 2007) since the conversation died off and it looks like a few
> key features important to me were left out of the discussion... *grin*
>
> The original post was regarding "unRAID" developed by
> http://lime-technology.com/
>
> I had an idea in my head, and "unRAID" has features almost identical to
> what I was thinking about, with the exception of a couple of
> deal-breaking design decisions. These are due to the proprietary front
> end, not the modified driver.
>
> Bad decision #1) The implementation is a NAS appliance. Files are only
> accessible through a Samba share. (Though this is great for the hordes
> of people who use it as network storage for their Windows Media Center
> PCs.)
>
> Bad decision #2) Imposed ReiserFS.
>
> Oh yeah, and it's not free in either sense of the word.
>
> The most relevant uses I can think of for this type of array are archive
> storage and low use media servers. Keeping that in mind...
>
> Good Thing #1)
> "JBOD with parity." Each usable disk is seen separately and has its own
> filesystem. This allows mixed-size disks, and lets you replace older,
> smaller drives with newer, larger ones one at a time while utilizing
> the extra capacity right away (after expanding the filesystem). In the
> event that
> two or more disks are lost, surviving non-parity disks still have 100%
> of their data. (Adding a new disk larger than the parity disk is
> possible, but takes multiple steps of converting it to the new parity
> disk and then adding the old parity disk back to the array as a regular
> disk... acceptable to me)
>
> Good Thing #2)
> You can spin down idle disks. Since there is no data striping and file
> systems don't [have to] span drives, reading a file only requires 1 disk
> to be spinning. Writing only requires 1 disk + parity disk. This is an
> important feature to the "GREEN" community. On my mythtv server, I only
> record a few shows each week. I would have disks in this setup possibly
> not accessed for weeks or even months at a time. They don't need to be
> spinning, and performance is of no importance to me as long as it can
> keep up with writing HD streams.
>
> Hopefully this brings a new perspective to the idea.
>
> Thanks,
> Tony Germano
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at http://vger.kernel.org/majordomo-info.html
>

