Re: [PATCH e2fsprogs] Add ZFS detection to libblkid

Hi,

On Sat, 2009-04-04 at 15:25 -0600, Andreas Dilger wrote:
> I _suppose_ there is no hard requirement that the ub_magic is present in
> the first überblock slot at 128kB, but that does make it harder to find.
> In theory we would need to add 256 magic value checks, which seems
> unreasonable.  Ricardo, do you know why the zfs.img.bz2 has bad überblocks
> for the first 4 slots?

Your supposition is correct - there's no requirement that the first
uberblock that gets written to the uberblock array has to be in the
first slot.

The reason that this image has bad uberblocks in the first 4 slots is
that, in the current ZFS implementation, when you create a ZFS pool, the
first uberblock that gets written to disk has txg number 4, and the slot
that gets chosen for each uberblock is "txg_nr % nr_of_uberblock_slots".

So in fact, it's not that the first 4 uberblocks are bad, it's just that
the first 4 slots don't have any uberblocks in them yet.

However, even though it is currently txg number 4 that gets written first,
this is an implementation-specific detail that we cannot (or should not)
rely upon.

So I think you're (mostly) right - in theory, a correct implementation
would have to search all the uberblock slots in all 4 labels (2 at the
beginning of the partition and 2 at the end), for a total of 512 magic
offsets. This is not easy to do with libblkid, though, because it only
looks for magic values at hard-coded offsets (as opposed to letting a
probe implement its own search routine, which could use a simple "for"
loop).
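For reference, a rough sketch (mine, not actual libblkid code) of how such a routine could enumerate those 512 offsets. The constants follow the on-disk label layout (four 256 KB labels, two at each end of the device, with the uberblock array starting 128 KB into each label); the function name is made up:

```c
#include <stdint.h>

#define ZFS_LABEL_SIZE   (256 * 1024)  /* each label is 256 KB */
#define UB_ARRAY_OFFSET  (128 * 1024)  /* uberblock array within a label */
#define UB_SIZE          1024          /* one uberblock slot */
#define UB_COUNT         128           /* slots per label */

/* Fill 'offsets' with every byte offset at which an uberblock magic
   could live on a device of dev_size bytes.  Returns the number of
   offsets stored: 4 labels * 128 slots = 512. */
static int ub_magic_offsets(uint64_t dev_size, uint64_t *offsets)
{
    uint64_t labels[4] = {
        0,                              /* L0: start of device */
        ZFS_LABEL_SIZE,                 /* L1 */
        dev_size - 2 * ZFS_LABEL_SIZE,  /* L2: end of device */
        dev_size - ZFS_LABEL_SIZE,      /* L3 */
    };
    int n = 0;

    for (int l = 0; l < 4; l++)
        for (int s = 0; s < UB_COUNT; s++)
            offsets[n++] = labels[l] + UB_ARRAY_OFFSET
                         + (uint64_t)s * UB_SIZE;
    return n;
}
```

Checking all of these per device is exactly the kind of loop libblkid's table of hard-coded magic offsets cannot express today.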

This is why I decided to change your patch to look for VDEV_BOOT_MAGIC,
which I assumed was always present at the same place, but apparently
that is not the case.

Eric, do you know how this ZFS pool/filesystem was created?
Specifically, which Solaris/OpenSolaris version/build, or maybe zfs-fuse
version? Also, details about which partitioning scheme is being used and
whether this is a root pool would also help a lot.

BTW, I also agree that it would be useful for ext3's mkfs to zero out
the first and last 512 KB of the partition, to get rid of the ZFS labels
and magic values. Better still, if mkfs detects these magic values, it
could refuse to format the partition, forcing the user to pass some
"--force" flag (like "zpool create" does), or at least ask for
confirmation when mkfs is being run interactively, to avoid accidental
data destruction.

If this is not done, then maybe leaving the ZFS labels intact could be
better, so that the user has a chance to recover (some/most of) their
data in case they made a mistake.

Cheers,
Ricardo


--
To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html