On 17/08/2011 17:45, Asdo wrote:
> On 08/17/11 16:34, John Robinson wrote:
>> [...]
>> The first sector of a md RAID with metadata 1.0 is in its data area,
>> so there's no way md is writing to this area itself, it's almost
>> certainly the filesystem that's writing it.
> This is an interesting observation then. ("no way" is a bit extreme
> though)

If it did, everybody's filesystems would be getting trashed, and I don't
think this is happening.

> You are right in the sense that it might have been the
> filesystem that is doing something at the first remount, and not MD. I
> can't be sure it's MD anymore. Still, this is wrong, why should the
> filesystem wipe its own boot sector?
> It's ext3 btw. If no one pops up with an explanation here on linux-raid
> I will also ask there.
> I still stand that what I am doing is correct. I am using the partition
> boot sector properly
> http://en.wikipedia.org/wiki/Volume_boot_record
Not all filesystems, or other things you might have in a partition,
necessarily support a volume boot record. LVM doesn't. XFS doesn't. md
itself only leaves room for one if you use metadata 1.2, which puts its
superblock 4K into the device and keeps the first 4K free. As it happens
a VBR can also work with md metadata 0.9 or 1.0: those put the
superblock at the end of the device, so the start of the array is
filesystem data, and you then need a filesystem which does support a
volume boot record.
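To make the layout question concrete, here's a rough Python sketch
(mine, not anything shipped with mdadm; the magic value 0xa92b4efc is
MD_SB_MAGIC from the kernel's md_p.h) that probes the start of a
component device for a 1.1 or 1.2 superblock. Since 0.9/1.0 metadata
lives near the end of the device, finding nothing at the start means
the first sector belongs to whatever is inside the array:

import struct
import sys

MD_SB_MAGIC = 0xa92b4efc  # magic at the start of every md superblock

# Byte offsets of the v1.x superblock from the start of the component
# device: 0 for metadata 1.1, 4096 for metadata 1.2. Metadata 0.9/1.0
# sits near the *end* of the device instead.
OFFSETS = {"1.1": 0, "1.2": 4096}

def probe(path):
    with open(path, "rb") as dev:
        for version, offset in OFFSETS.items():
            dev.seek(offset)
            raw = dev.read(4)
            if len(raw) == 4 and struct.unpack("<I", raw)[0] == MD_SB_MAGIC:
                print(f"{path}: md superblock at {offset} (metadata {version})")
                return
    print(f"{path}: no superblock at the start; metadata 0.9/1.0 or none")

if __name__ == "__main__":
    probe(sys.argv[1])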
ext2/3 supports a volume boot record. The first superblock starts 1K
into the filesystem. If you are using a 1K block size, the superblock is
in block 1, so mke2fs won't touch an existing volume boot record. If you
are using the more likely 4K block size, the superblock is 1K into block
0, and mke2fs will write zeros to the first 1K.
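You can check the 1K offset for yourself; another quick sketch of mine
(the field offsets are from the ext2 on-disk superblock layout) that
reads the superblock, reports the block size, and says whether the VBR
area has been zeroed:

import struct
import sys

EXT2_MAGIC = 0xEF53   # little-endian u16 at offset 56 of the superblock
SB_OFFSET = 1024      # the first superblock always starts 1K in

def inspect(path):
    with open(path, "rb") as dev:
        boot_area = dev.read(SB_OFFSET)  # the 1K volume-boot-record area
        sb = dev.read(1024)              # the primary superblock
    if len(sb) < 1024 or struct.unpack_from("<H", sb, 56)[0] != EXT2_MAGIC:
        print(f"{path}: no ext2/3 superblock at offset 1024")
        return
    # s_log_block_size is a u32 at offset 24: block size = 1024 << value
    block_size = 1024 << struct.unpack_from("<I", sb, 24)[0]
    where = 1 if block_size == 1024 else 0
    print(f"{path}: ext2/3 with {block_size}-byte blocks; superblock in block {where}")
    if boot_area == b"\0" * SB_OFFSET:
        print("first 1K is all zeros (no VBR)")
    else:
        print("first 1K is non-zero (possibly a VBR)")

if __name__ == "__main__":
    inspect(sys.argv[1])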
Once mke2fs has been run, however, I wouldn't expect ext2/3 to overwrite
the volume boot record, given that the developers bothered to support
one in the first place, but that's an ext2/3 question.
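And if you want to catch whatever is wiping it, the cheapest test I can
think of is to checksum the VBR area around each remount; a minimal
sketch (hypothetical usage: run it against your /dev/mdX before and
after, and compare the output):

import hashlib
import sys

def vbr_hash(path, length=1024):
    # Checksum the volume-boot-record area so it can be compared
    # before and after a mount/remount cycle.
    with open(path, "rb") as dev:
        return hashlib.sha256(dev.read(length)).hexdigest()

if __name__ == "__main__":
    print(vbr_hash(sys.argv[1]))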
Cheers,
John.