Hi,

I've got a somewhat broken installation here, affected by LVM over dmraid, similar to https://bugs.launchpad.net/ubuntu/+source/dmraid/+bug/129285.

My system is an Intel fakeraid configuration using RAID 1. /dev/mapper/isw_bedhadgieeh_Volume0 consists of /dev/sda and /dev/sdb. Using LVM filtering, I forced the usage of /dev/sda4 as a PV.

Mountpoints:

/dev/mapper/isw_bedhadgieeh_Volume02  /
/dev/mapper/volgroup-home             /home
/dev/mapper/volgroup-var              /var
/dev/mapper/volgroup-music            /music

In order to solve the LVM trouble, I'd like to remove LVM and create my partitions directly on /dev/isw_bedhadgieeh_Volume0. My idea is to do the following:

1) mkfs.ext3 /dev/sdb4
2) copy all data from all LVs (/home, /var and /music) to /dev/sdb4

/dev/sda4 (holding LVM) and /dev/sdb4 (ext3) are out of sync now. How could this be solved? Would a 'dd if=/dev/sdb4 of=/dev/sda4' suffice? Or is there a way to tell dmraid to sync, or to use a specific disk for read operations?

When writing to the dmraid device, the write is effectively executed on both underlying devices (somewhat clear, that's RAID 1). What happens on reads? Is /dev/sda or /dev/sdb used? Is there a way to set a disk manually faulty, like in mdadm?

Thanks in advance,
Stefan

_______________________________________________
Ataraid-list mailing list
Ataraid-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/ataraid-list
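
[Editor's sketch: the proposed dd clone step can be tried out safely on image files standing in for the two partitions. The names sdb4.img and sda4.img are hypothetical stand-ins for /dev/sdb4 and /dev/sda4 — this only demonstrates that dd produces a byte-identical copy, not whether dmraid will accept the result as "in sync".]

```shell
set -e
# Stand-ins for the real partitions; nothing here touches the RAID set.
# Fill "sdb4" with arbitrary data, playing the role of the new ext3 partition.
dd if=/dev/urandom of=sdb4.img bs=1M count=4 2>/dev/null

# The proposed resync step from the post: clone sdb4 over sda4 byte for byte.
dd if=sdb4.img of=sda4.img bs=1M 2>/dev/null

# Verify the clone: cmp exits 0 only if both images are byte-identical.
cmp sda4.img sdb4.img && echo "partitions byte-identical"
```

Whether this is sufficient on the real devices depends on dmraid noticing (or not caring) that the mirror halves were rewritten behind its back, which is exactly the open question above.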