Re: Harddisk not going to standby when using NILFS

I did some measurements to check how often my nilfs2 drives are in
standby during one day (24 h). During this time only a daily backup
script has been accessing the disks (rsync from some other
directories on /dev/sda1). For comparison I have also included an
ext4 partition on /dev/sdc. Note: there are no other partitions on
these disks, so there should basically be no activity. (I am running
Ubuntu 9.10, so an occasional updatedb or similar can be expected
besides my daily backup script.)

The nilfs2 settings used during the measurement (I am trying to force
the drives into standby as much as possible, so this is not a typical
use of nilfs2):
   protection_period = 86400
   nsegments_per_clean = 20
   cleaning_interval = 86400
(86400 seconds correspond to 24h)
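
In /etc/nilfs_cleanerd.conf (the file Ryusuke mentions below) this
corresponds to entries roughly like the following; the comments are
mine, see the nilfs_cleanerd.conf man page for the exact syntax and
the defaults:

   # keep checkpoints for 24h before they may be garbage collected
   protection_period     86400
   # collect at most 20 segments per cleaner pass
   nsegments_per_clean   20
   # run a cleaner pass once every 24h
   cleaning_interval     86400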

Results (cleanerd running):
  nilfs2 /dev/sdb: spindown 34 %, count 18, max 2340 sec, accumulated 29430 sec
  nilfs2 /dev/sdd: spindown 47 %, count 26, max 2370 sec, accumulated 41220 sec
  ext4   /dev/sdc: spindown 89 %, count  5, max 53580 sec, accumulated 77430 sec

Same measurement, but this time with no cleanerd running for sdb and
sdd; the result is still in the same range:
  nilfs2 /dev/sdb: spindown 23 %, count 11, max 2340 sec, accumulated 20580 sec
  nilfs2 /dev/sdd: spindown 47 %, count 23, max 2340 sec, accumulated 41190 sec
  ext4   /dev/sdc: spindown 87 %, count  3, max 55200 sec, accumulated 75870 sec

['count' is the number of times the disk was spun down, 'max' is the
longest time the disk stayed in the spun-down state, and 'accumulated'
is the total spindown time.]

I was quite surprised to see that the nilfs2 disks were only rarely
in the spun-down state compared to the ext4 disk: nilfs2 23-47 % vs.
ext4 ~87 %. Also, stopping cleanerd seemed to have little effect on
how much the disks spun down.
Could anyone explain what is being done to the nilfs disks during this
time (or provide a reference)? I would have expected the disks to be
spun down much more of the time, for example, on par with the ext4
disk (~87% spun down). What can I do to reduce disk activity on the
nilfs2 partitions further?

Markus

ps. I have set the standby timeout on these disks using hdparm
(hdparm -S 30 /dev/xx); hdparm -B did not work for me.
In my measurements I check the disk state with 'hdparm -C', sampling
each disk every 30 seconds from a bash script.
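
The script is nothing fancy; roughly like this, stripped down to the
standby counting (device names are just examples, and the per-spindown
'count' and 'max' bookkeeping is left out):

  #!/bin/bash
  # Sample the power state of each disk every 30 seconds for 24 hours
  # and count how often 'hdparm -C' reports it as being in standby.
  # Needs to run as root for hdparm.
  DISKS="/dev/sdb /dev/sdc /dev/sdd"
  INTERVAL=30
  SAMPLES=$((24 * 3600 / INTERVAL))

  declare -A standby

  for ((i = 0; i < SAMPLES; i++)); do
      for disk in $DISKS; do
          # 'hdparm -C' issues CHECK POWER MODE, which does not wake the drive
          if hdparm -C "$disk" | grep -q standby; then
              standby[$disk]=$(( ${standby[$disk]:-0} + 1 ))
          fi
      done
      sleep "$INTERVAL"
  done

  for disk in $DISKS; do
      echo "$disk: in standby for ${standby[$disk]:-0} of $SAMPLES samples"
  done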


2010/1/3 Ryusuke Konishi <ryusuke@xxxxxxxx>:
> Hi,
> On Sat, 2 Jan 2010 19:54:29 +0100, Markus Lindgren wrote:
>> Hi,
>>
>> I just started using NILFS on two of my "backup" disks just to give
>> NILFS a try. Initially I plan to test having daily backups on these
>> disks (I know this is not a typical use scenario for NILFS, but anyhow
>> it is a way of starting with NILFS).
>>
>> To my problem... I discovered that when using NILFS the hard disk
>> never seems to enter the 'standby' power-saving state; it remains in
>> the 'active/idle' state, which causes extra noise and power usage.
>> (Which is bad since I like things when they are quiet :-)
>> I then tried mounting one of the disks without the cleanerd, but still
>> both disks remain in the 'active/idle' state. And yes, I waited
>> sufficiently long for the drives to enter the standby state.
>>
>> Does anyone know of a workaround to get the disks to enter the
>> 'standby' power state when using NILFS? (And not by forcing disks to
>> that state by using hdparm.)
>>
>> In case it is the cleanerd causing the disks not to spin down, is
>> there perhaps a way to trigger running the "cleanerd" only a few times
>> per day? In my usage scenario there is no point in constantly checking
>> for garbage on the disks.
>
> On my laptop, the standby state worked after doing 'hdparm -B 1'.
> But, as you pointed out, cleanerd prevented the device from entering
> the standby state.  After killing cleanerd by hand, this issue seems
> to get settled.
>
> If you don't want to kill cleanerd manually, another possible solution
> is to set protection_period to a large value in /etc/nilfs_cleanerd.conf.
> I think the essential solution is to improve cleanerd to suppress
> needless operation, as repeatedly discussed on this list.
>
>> On a secondary note, when I tried to remount the second disk to not
>> use cleanerd I got the following message:
>>   umount.nilfs2: cleanerd (pid=2160) still exists on /dev/sdb1.
>> waiting.....failed
>>   umount.nilfs2: /mnt/sdb1: device is busy
>>   umount.nilfs2: cleanerd (pid=2160) still exists on /dev/sdb1.
>> waiting.....failed
>>   umount.nilfs2: /mnt/sdb1: device is busy
>> Which is a bit strange, since as far as I know no files were in use
>> on that disk and no bash/shell was running from a directory on that
>> disk. It did not help to try forcing the umount either.
>
> Ahh, there seems to be no way to shut down cleanerd directly via remount.
>
> How about this?
>
>  # mount -t nilfs2 -o remount,ro /dev/xxx /dir &&
>    mount -t nilfs2 -i -o remount,rw /dev/xxx /dir
>
>> Thanks in advance,
>>   Markus Lindgren
>
> Cheers,
> Ryusuke Konishi
>
