Re: Interesting feature request for linux raid, waking up drives

Hi Piergiorgio,

it's not a problem at all, and commercial NASes often do that... :)

In terms of maximum capacity, the PSU must be able to handle the
spin-up of all drives anyway, because that is exactly what happens
when you start the computer.

In terms of total power usage, the maximum load is there only for a
very short period of time, probably under 10 seconds. How many times a
day the drives wake up depends on the usage scenario, and you can
regulate that by setting the standby timeout. In the case of the NAS
in my office, drives wake up fewer than five times a day on average;
sometimes they sleep through the whole weekend.
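(As a side note, on most ATA drives that timeout can be set with hdparm's -S flag, whose values 1-240 encode multiples of 5 seconds. A minimal sketch; the device glob in the comment is only an example:)

```shell
# hdparm -S takes a value of 1..240 meaning (value * 5) seconds,
# so a 15-minute standby timeout corresponds to the value 180.
minutes_to_S() {
    echo $(( $1 * 60 / 5 ))
}

# Illustrative only -- adjust the device glob to your array members:
# for d in /dev/sd[b-h]; do hdparm -S "$(minutes_to_S 15)" "$d"; done
minutes_to_S 15
```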

I've attached a graph of the real power usage of this NAS; it is a
7 * 1.5 TB RAID5, and the graph is built from one-minute averages. You
can see three basic levels of power usage: under 50 W when all drives
are in standby, around 55 W when only the drive with the OS is up, and
around 80 W when all drives are running.

The spikes from waking the drives should be at the start of the 80 W
levels, but as you can see they are almost unnoticeable in one-minute
averages. The other spikes, around 85 - 90 W, are probably CPU load.

A bigger concern is whether more frequent stopping and starting
damages the drives. But with up to ten starts a day, it should be OK.
The normalized SMART value for the start/stop count typically
decreases by 1 for every 1000 start/stop cycles, so you get down to 50
only after circa 13 years. And this value becomes critical only at 0.
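To spell out that arithmetic (assuming the attribute starts at the usual normalized value of 100, so reaching 50 means losing 50 points):

```shell
starts_per_day=10
cycles_per_point=1000      # normalized value drops 1 per 1000 cycles
points_to_lose=50          # from 100 down to 50
days=$(( points_to_lose * cycles_per_point / starts_per_day ))
echo "$days days, ~$(( days / 365 )) years"
```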

On the other hand, the drives are not spinning all the time, which
puts less stress on cheaper desktop drives.
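For the curious, the core of the wake-all idea can be sketched in shell (this is not my actual PHP script; the device list, the poll interval, and the dd-based wake-up read are only illustrative assumptions):

```shell
#!/bin/sh
# Rough sketch: poll each array member's power state with hdparm -C;
# if any member is awake while another is in standby, wake the
# sleepers immediately so spin-ups overlap instead of serializing.
DRIVES="/dev/sdb /dev/sdc /dev/sdd"   # example member list

# Extract the state ("active/idle", "standby", ...) from hdparm -C output.
parse_state() {
    awk -F': *' '/drive state/ { print $2 }'
}

drive_state() {
    hdparm -C "$1" 2>/dev/null | parse_state
}

wake_watch() {
    while true; do
        any_awake=0
        for d in $DRIVES; do
            [ "$(drive_state "$d")" = "active/idle" ] && any_awake=1
        done
        if [ "$any_awake" -eq 1 ]; then
            for d in $DRIVES; do
                if [ "$(drive_state "$d")" = "standby" ]; then
                    # A small direct read forces the drive to spin up;
                    # do it in the background so all drives start together.
                    dd if="$d" of=/dev/null bs=4096 count=1 iflag=direct 2>/dev/null &
                fi
            done
            wait
        fi
        sleep 5
    done
}

# Run with: wake_watch &
```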

Patrik

On Wed, May 9, 2012 at 11:38 PM, Piergiorgio Sartor
<piergiorgio.sartor@xxxxxxxx> wrote:
> Hi Patrik,
>
> On Wed, May 09, 2012 at 11:16:33PM +0200, Patrik Horník wrote:
>> On Wed, May 9, 2012 at 10:06 PM, Larkin Lowrey
>> <llowrey@xxxxxxxxxxxxxxxxx> wrote:
>> > I second this suggestion but I don't think it's the job of the raid
>> > layer to keep track of whether the member drives are spinning or not.
>>
>> I also don't think it should be directly in the raid layer, but it is
>> a problem of linux raid, so the solution should be sought here.
>>
>> > I have implemented a similar setup to this but am suffering from the
>> > sequential spin-up problem you described. It would be nice to have a
>> > solution.
>>
>> My script is not perfect, but it eliminates the sequential spin-up
>> problem completely. If you want, use it. The sequential spin-up
>> problem was the reason I wrote it, and its main function is to detect
>> woken drives and immediately wake the other drives in the RAID.
>
> you mentioned "aggressive power saving", but, on the other
> hand, if I get it correctly, you want to spin up the HDDs
> all together.
>
> How do you manage the current peak (power peak) you create
> at spin-up? Especially in comparison with PSU efficiency.
>
> I mean, a running HDD can consume 2W~5W, while at spin-up
> it consumes 20W~30W.
> If you have, let's say, 20 HDDs NAS, at spin-up the PSU
> should provide 20*30W=600W, while in normal operation it
> will have to provide only 20*2W=40W (extreme cases, of
> course, YMMV).
> In stand-by it will be less, probably 10W for the HDDs.
>
> Now, PSU efficiency usually is tuned at a certain power
> need, typically 80% of max PSU power rating.
> Above and below that it is lower, obviously much lower at,
> let's say, 10% of max PSU power.
> Gold or similar consumer PSUs, marked 80 Plus, can keep
> high efficiency (80%~90%) down to fairly low power.
> The specification goes down to 20% of max power.
>
> In the above NAS example, the PSU will have to provide,
> at spin-up, more than 600W, I would say 700~800W,
> while in stand-by it will go down to 30~50W (or less).
> Keeping some safety margin will require a 1000W PSU,
> which will be loaded at only 3~5% in stand-by.
> At this level efficiency will be a disaster (usually),
> resulting in an effective power usage (from mains)
> probably above 100W, defeating the wanted power saving.
>
> How do you deal with this issue?
> Have you any real world data about min/max power
> consumption?
> Do you know of PSUs (consumer class, ATX) capable of
> handling a wide power range with high efficiency?
>
> Going back to the spin-up, my idea was exactly the
> opposite, i.e. to activate the HDDs with a 4~5 sec.
> delay one from the other, in order to reduce the
> peak power, thus using a PSU with a lower power rating,
> thus having high efficiency at low power.
>
> Thanks a lot in advance,
>
> bye,
>
> pg
>
>> > A userspace daemon could probably do the job. I found that relying on
>> > the drive's internal power management for spinning them down was
>> > unreliable (especially for WDC "green" drives) so I implemented a script
>> > that watches /sys/block/sdX/stat for activity and spins down the drive
>> > directly (via hdparm) when no activity has been posted for a
>> > configurable period of time. A daemon process that was responsible for
>> > spinning down the constituent drives could also be responsible for
>> > spinning them up by watching /sys/block/mdX/stat for pending transfers.
>> > Perhaps you and I could work on such a project.
>>
>> I added support for spinning down drives only as an addition, after I
>> bought my first WD Greens. It is done in the wrong way; it relies on
>> some drives in the array working correctly, and I guess your way is
>> the correct one. Do you have a specification of /sys/block/sdX/stat?
>>
>> Right now the script checks the power status of the drives via
>> hdparm. I don't know yet what is in /sys/block/sdX/stat or which
>> approach is better, but the basic principle behind my script works
>> perfectly, at least in my setups: if at least one drive from the raid
>> array is awake, wake up all of them.
>>
>> > One thing mdadm could do which would help greatly is to enumerate the
>> > member disk block devices (not just partitions or member raid devices)
>> > for a given array. This information is known since concurrent sync
>> > operations are serialized so no two sync operations occur at the same
>> > time on the same physical devices.
>>
>> Maybe Neil can give us his thoughts on the best place / form for
>> such functionality.
>>
>> Patrik
>>
>> >
>> > --Larkin
>> >
>> > On 5/9/2012 12:37 PM, Patrik Horník wrote:
>> >> Hello Neil,
>> >>
>> >> I want to propose some functionality for the linux raid subsystem
>> >> that I think will be very practical for many users: automatic waking
>> >> of drives. I have been using my own user-land script written years
>> >> ago to do that, and I don't know if there is some standard solution
>> >> now. If there is, please point me to it.
>> >>
>> >> I am using a couple of big RAID5 arrays in servers working as NASes
>> >> in a small office and at home, which are in use only a small part of
>> >> the day. I am using a low-power server and aggressive power-saving
>> >> settings on the HDDs to make power consumption substantially lower;
>> >> for example, drives go to sleep after 15 min of inactivity. Normally
>> >> the problem with such settings is the extremely long wake time when
>> >> the array is accessed. Software accessing data often first requests
>> >> only a chunk of data on the first drive in the array and waits circa
>> >> 20-30 sec for it; after processing it, it accesses data on another
>> >> drive and waits another 20-30 sec, and so on.
>> >>
>> >> I solved it with my own script in PHP, which monitors the drives'
>> >> status periodically. When it detects that a drive from the RAID
>> >> array has woken up, it immediately wakes the other drives. So the
>> >> total wake time equals the wake time of one drive plus a couple of
>> >> seconds. It has worked perfectly and smoothly for years for me.
>> >>
>> >> I attached the script from one of my servers; it is a little crude,
>> >> using hdparm and smartctl to monitor and manipulate the drives. It
>> >> is a little customized and specific to its server; for example, one
>> >> drive, detected by model, is not used to wake up the other drives,
>> >> and two drives also put one another to sleep, because I found out
>> >> the standby timeout setting was not working reliably on one drive.
>> >> But you will get the idea.
>> >>
>> >> I think it could be useful for some users if there were a
>> >> possibility to use such a feature. Do you think it would be useful?
>> >> Do you think there is some place in the linux raid infrastructure
>> >> where it could somehow be implemented? (Possibly as some user-land
>> >> tool using some kernel APIs, I don't know.)
>> >>
>> >> Best regards,
>> >>
>> >> Patrik Horník
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
>> the body of a message to majordomo@xxxxxxxxxxxxxxx
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>
> --
>
> piergiorgio

Attachment: display_measurement.png
Description: PNG image

