Dear reader,

Please let me know if I need to file a bug report. It does not seem like this could be a bug, because it is a very natural thing to want to do and many people must have tried it before.

The host OS is CentOS 6.5, the kernel is 2.6.32-431.11.2.el6.x86_64, and the qemu-kvm version is 2:0.12.1.2-2.415.el6_5.7 as reported by yum.

I have an LSI 2308 SAS controller with two LSI SAS expanders that provide multiple paths to two SAS SSDs. Below I describe only the relevant subset (a sort of minimal problematic example) of my broader storage configuration on these devices.

Multipathing is configured for the two SSDs, so they appear as /dev/mapper/system00 and /dev/mapper/system01. Each device has two partitions, so the partitions for one device appear as /dev/mapper/system00p1 and /dev/mapper/system00p2, and the partitions for the other appear as /dev/mapper/system01p1 and /dev/mapper/system01p2. Corresponding partitions are paired in mdadm software RAID 1, which gives me /dev/md0 and /dev/md1. /boot is mounted on /dev/md0. /dev/md1 holds an LVM physical volume with a volume group, vg_system, which contains several logical volumes. These are named host_os, db_data, vm1, vm2, vm3, ..., vmN, and appear as /dev/vg_system/NAME. host_os is mounted on /, and the others are mounted on /NAME, where NAME is simply the logical volume name.

I want to install CentOS 6.5 guests using commands like:

virt-install --name=vm1 --cpu=host --vcpus=2,maxvcpus=8 --ram=2048 \
  --os-type=linux --os-variant=rhel6 \
  --network bridge=br1,model=virtio --nographics \
  --cdrom=/tmp/CentOS-6.5-x86_64-minimal.iso \
  --disk path=/dev/vg_system/guest_os_1,bus=virtio,cache=none

Note that I am specifying the path in /dev/ to the raw LVM logical volume, which I thought was supported. However, when I get into the text-based anaconda installer for CentOS 6.5 in a guest and reach the screen where I am supposed to choose target storage for the installation, no storage devices are listed.

Curious, I tried several other devices and found the following:

/dev/md0:                    FAILS
/dev/md1:                    FAILS
/dev/vg_system/guest_os_1:   FAILS
/dev/mapper/system00:        [*] vda 190782 MB (Virtio Block Device)
/dev/mapper/system00p2:      FAILS
/dev/sda:                    [*] vda 190782 MB (Virtio Block Device)
file-based storage:          [*] vda SIZE_OF_FILE MB (Virtio Block Device)

FAILS means no disk device appears in the installer; vda means the device appears.

My theories, and why each one does not explain the facts as I understand them:

1. mdadm is the problem. I began to suspect that paravirtualization could not work with mdadm, so I tried bus=ide, but it made no difference. Additionally, if mdadm were the problem, then /dev/mapper/system00p2, which is not on top of mdadm, should work, but it does not.

2. Multipathing is the problem. In that case /dev/mapper/system00, which is a multipath device, should not work, but it does.

3. LVM is the problem. I have no way to test this, since I have no LVM volumes that are not on top of mdadm, but a) LVM is widely reported on the web as being supported as a block device backend for KVM, and b) it cannot be the only problem, because /dev/mapper/system00p2 sits below LVM in the stack (it is not an LVM device), yet it also fails.

I am very confused about what is happening, and I look forward to any insights you can provide. It is particularly strange to me that the raw disk system00 works while the partition system00p2 does not. All of these results are repeatable, not merely transient.
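For reference, here is roughly how I verify the host-side layering described above and check what libvirt actually hands to the guest. This is only a sketch: the device and guest names are the ones from my configuration, and the grep pattern is just one illustrative way to pull the disk definition out of the domain XML.

# Multipath maps and the underlying SAS paths
multipath -ll

# Software RAID 1 arrays built on the multipath partitions
cat /proc/mdstat

# LVM physical volume, volume group, and logical volumes on /dev/md1
pvs
vgs vg_system
lvs vg_system

# Device-mapper view of the stacking (LVM LVs, multipath maps, partitions)
dmsetup ls --tree

# The <disk> definition libvirt generated for the guest
virsh dumpxml vm1 | grep -A 4 '<disk'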
Best regards,
Ryan Lichtenwalter