I'd like to have multiple partitions rather than one large disk. What
if one of your paths fails during the mkfs on the single-pathed
/dev/sdX? I am adding automatic partition/format/mount capability
(triggered by udev add events), so a reboot between the sfdisk and the
mkfs.ext3 will not be possible. I would really like to operate on the
multipathed device nodes the whole way through, both to handle a
possible path failure during the process and because the ultimate
mount will be on the multipathed partition device node. Any other
suggestions?

On 9/28/07, Kevin Foote <kevin.foote@xxxxxxxxx> wrote:
> I always use the entire disk, no partitions, and I usually do mkfs on
> the underlying /dev/sdX device before I bother with multipath.
> Then I just use the /dev/mapper entries for the actual mounts; the
> underlying FS is already there.
> Just my .02
>
> mkfs.ext3 -F /dev/sdX
>
> On 9/28/07, David Strand <dpstrand@xxxxxxxxx> wrote:
> >
> > When I perform an fdisk or sfdisk on a /dev/mapper/ multipath device I
> > get a warning about it failing to re-read the partition information.
> > With fdisk it warns that I'll need to reboot before using the device;
> > with sfdisk it just complains about the ioctl that failed:
> >
> > Re-reading the partition table ...
> > BLKRRPART: Invalid argument
> >
> > Is it possible I have something configured wrong? Or is this a problem
> > with the way /dev/mapper nodes work? I get the same result when using
> > /dev/dm-* nodes. With the raw device nodes such as /dev/sd* it works
> > ok.
>
> --
> :wq!
> kevin.foote

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel
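
For what it's worth, the usual way to get partition device nodes on a
multipath map without a reboot is kpartx (from multipath-tools), which
creates the partition mappings through device-mapper rather than via
the BLKRRPART ioctl. A rough sketch only, assuming a map named mpath0,
a single whole-disk partition, and a mount point of /mnt/data (all
placeholders, not taken from this thread):

    # Write the label on the multipath map; sfdisk's BLKRRPART error
    # can be ignored here, since the kernel cannot re-read partitions
    # on a dm device anyway.
    echo ',,83' | sfdisk /dev/mapper/mpath0

    # Create the partition mapping from the new label, no reboot needed.
    # The resulting name (mpath0p1 vs. mpath0-part1) depends on the
    # kpartx -p delimiter setting.
    kpartx -a /dev/mapper/mpath0

    # Format and mount through the multipathed partition node.
    mkfs.ext3 /dev/mapper/mpath0p1
    mount /dev/mapper/mpath0p1 /mnt/data

The same kpartx -a call could be run from the udev-triggered script in
place of a partition-table re-read.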