Re: Sleepy drives and MD RAID 6

Have you tried the dd command without nonblock, putting each one in the
background via '&'? You could then use the 'wait' builtin to wait for
them all to finish.
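Roughly what I had in mind (untested sketch; /dev/sdg and /dev/sdh are just the example devices from your test case below):

```shell
# Sketch: kick off one direct read per drive in the background so the
# spin-ups overlap, then use 'wait' to block until every dd returns.
# /dev/sdg and /dev/sdh are placeholders for the sleeping drives.
for dev in /dev/sdg /dev/sdh; do
    dd if="$dev" of=/dev/null bs=4096 count=1 iflag=direct 2>/dev/null &
done
wait    # returns once all background dd jobs have finished
echo "all drives woken"
```

If the kernel/controller path allows it, the two reads should be in flight at the same time, so the wall-clock cost is one spin-up, not two.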

I did dust off some old memories and recalled that one of my SAS
controllers (LSI) does its spin-ups serially no matter what. I ended up
moving these low-duty-cycle drives to my other SAS controller (Marvell)
and putting my always-spinning drives on the LSI. I've never seen this
behavior from any of my AHCI SATA controllers.

--Larkin

On 8/14/2014 11:50 AM, Adam Talbot wrote:
> I am running out of ideas.  Does anyone know how to wake a disk with a
> non-blocking, non-caching method?
> I have tried the following commands:
> dd if=/dev/sdh of=/dev/null bs=4096 count=1 iflag=direct,nonblock
> hdparm --dco-identify /dev/sdh   (this gets cached after the 3rd~10th
> run)
> hdparm --read-sector 48059863 /dev/sdh
>
> Any ideas?
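One thought, hedged since I haven't tried it here: if --read-sector gets cached after a few runs, vary the sector on every call so nothing upstream can satisfy it from cache. TOTAL_LBAS and /dev/sdh below are placeholders:

```shell
# Sketch: read a different random sector each time so neither the drive
# nor any layer above it can answer from cache. TOTAL_LBAS is a
# placeholder (a 2TB drive has roughly 3907029168 512-byte sectors);
# $RANDOM is bash's 15-bit random value, two are combined for range.
DEV=/dev/sdh
TOTAL_LBAS=3907029168
SECTOR=$(( (RANDOM * 32768 + RANDOM) % TOTAL_LBAS ))
hdparm --read-sector "$SECTOR" "$DEV" || true   # || true: sketch only
```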
>
> On Wed, Aug 13, 2014 at 9:07 AM, Adam Talbot <ajtalbot1@xxxxxxxxx> wrote:
>> Arg!!  Am I hitting some kind of blocking in the Linux kernel? No
>> matter what I do, I can't seem to get the drives to spin up in
>> parallel.  Any ideas?
>>
>> A simple test case trying to get two drives to spin up at once.
>> root@nas:~# hdparm -C /dev/sdh /dev/sdg
>> /dev/sdh:
>>  drive state is:  standby
>>
>> /dev/sdg:
>>  drive state is:  standby
>>
>> #Two terminal windows dd'ing sdg and sdh at the same time.
>> root@nas:~/dm_drive_sleeper# time dd if=/dev/sdh of=/dev/null bs=4096
>> count=1 iflag=direct
>> 1+0 records in
>> 1+0 records out
>> 4096 bytes (4.1 kB) copied, 14.371 s, 0.3 kB/s
>>
>> real   0m28.139s ############# WHY?! ################
>> user   0m0.000s
>> sys   0m0.000s
>>
>> #A single drive spin-up
>> root@nas:~/dm_drive_sleeper# time dd if=/dev/sdh of=/dev/null bs=4096
>> count=1 iflag=direct
>> 1+0 records in
>> 1+0 records out
>> 4096 bytes (4.1 kB) copied, 14.4212 s, 0.3 kB/s
>>
>> real   0m14.424s
>> user   0m0.000s
>> sys   0m0.000s
>>
>> On Tue, Aug 12, 2014 at 8:23 AM, Adam Talbot <ajtalbot1@xxxxxxxxx> wrote:
>>> Thank you all for the input.  At this point I think I am going to write a
>>> simple daemon to do dm power management. I still think this would be a good
>>> feature to roll into the driver stack, or into mdadm.
>>>
>>> As far as wear and tear on the disks: yes, starting and stopping the drives
>>> shortens their life span. I don't trust my disks regardless of
>>> starting/stopping; that is why I run RAID 6. Let's say I use my NAS with its
>>> 7 disks for 2 hours a day, 7 days a week, at 10 watts per drive.  The current
>>> price for power in my area is $0.11 per kilowatt-hour. That comes out to
>>> $5.62 per year to run my drives for 2 hours daily.  But if I ran my drives
>>> 24/7 it would cost me $67.45/year.  Basically it would cost me an extra
>>> $61.83/year to run the drives 24/7.  The 2TB 5400RPM SATA drives I have been
>>> picking up from local surplus or auction websites are costing me $40~$50,
>>> including shipping and tax.  In other words, I could buy a new disk every
>>> 8~10 months to replace failures and it would be the same cost. Drives don't
>>> fail that fast, even if I were starting/stopping them 10 times daily. This is
>>> also completely ignoring the fact that drive prices are falling.  Sorry to
>>> disappoint, but I am going to spin down my array and save some money.
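For what it's worth, those figures check out; a quick awk sanity check of the arithmetic (7 drives, 10 W each, $0.11/kWh):

```shell
# Recompute the yearly power cost quoted above with awk's floating point:
# 7 drives * 10 W, at $0.11/kWh, for 2 h/day versus 24/7.
awk 'BEGIN {
    drives = 7; watts = 10; rate = 0.11            # rate in $/kWh
    two_hrs = drives * watts * 2  * 365 / 1000 * rate
    always  = drives * watts * 24 * 365 / 1000 * rate
    printf "2h/day: $%.2f/yr, 24/7: $%.2f/yr, extra: $%.2f/yr\n",
           two_hrs, always, always - two_hrs
}'
# prints: 2h/day: $5.62/yr, 24/7: $67.45/yr, extra: $61.83/yr
```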
>>>
>>>
>>>
>>>
>>> On Tue, Aug 12, 2014 at 2:46 AM, Wilson, Jonathan
>>> <piercing_male@xxxxxxxxxxx> wrote:
>>>> On Tue, 2014-08-12 at 07:55 +0200, Can Jeuleers wrote:
>>>>> On 08/12/2014 03:21 AM, Larkin Lowrey wrote:
>>>>>> Also, leaving spin-up to the controller is
>>>>>> also not so hot since some controllers spin-up the drives sequentially
>>>>>> rather than in parallel.
>>>>> Sequential spin-up is a feature to some, because it avoids large power
>>>>> spikes.
>>>> I vaguely recall older drives had a jumper to set a delayed spin-up, so
>>>> they stayed in a low-power (possibly un-spun) mode when power was
>>>> applied and only woke up when a command was received (I think any
>>>> command, not a specific "wake up" one).
>>>>
>>>> Also, as mentioned, some controllers may only wake drives one after
>>>> the other. Likewise, mdraid does not care about the underlying
>>>> hardware/driver stack, only that it eventually responds, and even then I
>>>> believe it will happily wait till the end of time if no response or
>>>> error is propagated up the stack; hence the timeout lives in the
>>>> scsi_device stack, not in mdraid.
>>>>
>>>>
>>>>
>>>>> --
>>>>> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
>>>>> the body of a message to majordomo@xxxxxxxxxxxxxxx
>>>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>>>>
>>>>
>>>




