[I'm the original bug reporter. Sorry for getting into the conversation so late]

On Thu, Apr 05, 2007 at 12:47:49AM +0400, Michael Tokarev wrote:
> Neil Brown wrote:
> > On Tuesday April 3, newsuser@xxxxxxxxxxxxxxx wrote:
> []
> >> After the power cycle the kernel boots, devices are discovered, among
> >> which the ones holding raid. Then we try to find the device that holds
> >> swap in case of resume and / in case of a normal boot.
> >>
> >> Now comes a crucial point. The script that finds the raid array, finds
> >> the array in an unclean state and starts syncing.
> []
> > So you can start arrays 'readonly', and resume off a raid1 without any
> > risk of the resync starting when it shouldn't.
>
> But I wonder why this raid is necessary in the first place.

In the case of my original report, the array is not actually necessary, since the resume image is in another (normal) partition. The array gets resynced because the mdadm scripts in the initrd run before the resume ones, and by default they start *every* array in the system. But at least the mdadm maintainer seems to think that having the resume image on a raid device, or on an LVM logical volume inside a raid device, or other such esoteric arrangements, is a use case worth supporting.

Something I seem not to have said: it's not *all* arrays that are unclean on reboot, just one (the one used as a physical volume for LVM; I don't know if that's relevant). Also worth mentioning: kernel-space suspend on 2.6.17 did not have this problem (or at least didn't show it on my system).

After reading through the responses, I have come to think this is a kernel issue, and have posted a report (#418823) against Debian's linux-2.6 package. I'll wait to see what they have to say.
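For anyone hitting the same thing: a rough sketch of the read-only approach Neil suggested, as I understand it. The device names below (/dev/md0, /dev/sda1, /dev/sdb1) are placeholders for illustration, not my actual setup, so adjust to taste.

```shell
# Assemble the array read-only from the initrd: md will not start a
# resync, so a resume image on the array (or on LVM on top of it)
# cannot be scribbled on before the resume happens.
mdadm --assemble --readonly /dev/md0 /dev/sda1 /dev/sdb1

# ... attempt the resume here ...

# After a successful resume, or on a normal boot, switch the array
# back to read-write; any pending resync starts only at this point.
mdadm --readwrite /dev/md0

# Alternatively, the md driver itself can be told to start arrays
# read-only until the first write, via a kernel boot parameter:
#   md_mod.start_ro=1
```

Whether the Debian initrd scripts can be convinced to do this ordering is of course another question.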