On Tue, 28 Dec 2010 18:29:26 +0700 hansbkk@xxxxxxxxx wrote:

> This doesn't actually relate to the blocksize issue, but a caveat -
> I've heard that these "green" drives are not suitable for use in a
> RAID.
>
> The specific issue is apparently that these drives spin down very
> frequently, but most RAID implementations keep spinning them back up
> again just as frequently (perhaps unnecessarily?), thus causing undue
> wear and tear on the drives' mechanics and ultimately premature
> failure.

After a brief period of no writes, md updates the bitmap and/or the
superblock to record that the array is clean (it may update the bitmap
at other times too, but that is not relevant here). If the drive's
auto-spindown time is shorter than md's delay-before-marking-the-array-clean,
you could get extra spin-ups.

The delay before updating the superblock is set via the
md/safe_mode_delay file in sysfs, which defaults to 0.2 seconds
(200 msec).

The delay before updating the bitmap is set by mdadm's --delay option
when adding a bitmap to an array, and I think it is also exposed under
md/bitmap/ in sysfs in recent kernels. The actual delay before a write
is between 2 and 3 times this number. I think it defaults to 5 seconds
(hence a 10 to 15 second delay).

So if the drive spins down sooner than 15 seconds after the last IO,
there could be a problem, but tuning md can get rid of it. If the
drive's spin-down time is longer than 15 seconds, there should be no
unnecessary spin-ups.

If anyone has any data on default spin-down times of these "green"
drives I would be keen to hear about it.

Thanks,
NeilBrown
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
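
As a concrete sketch of the tuning described in the post (the device
names /dev/md0 and /dev/sdb are illustrative assumptions; run as root,
and exact sysfs/mdadm spellings can vary with kernel and mdadm
versions):

```shell
# Superblock "clean" delay: read the current value (default ~0.2 s),
# then raise it well past the drive's spin-down time so an idle array
# doesn't wake a sleeping drive just to mark itself clean.
cat /sys/block/md0/md/safe_mode_delay
echo 60 > /sys/block/md0/md/safe_mode_delay

# Bitmap update delay: set when (re)adding an internal bitmap.
# The effective delay before a write is 2-3x this value.
mdadm --grow /dev/md0 --bitmap=none
mdadm --grow /dev/md0 --bitmap=internal --delay=60

# For comparison, check/set a member drive's own standby timer.
# With hdparm, -S 120 means 120 * 5 s = 10 minutes; 0 disables it.
hdparm -S 120 /dev/sdb
```

With both md delays comfortably above the drives' spin-down time, a
sleeping drive is only woken by real IO rather than by housekeeping
writes.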