I also want to point out some discrepancies I noticed when looking at this file:

cat /proc/mdstat
Personalities : [raid1]
md35 : active raid1 sdb2[0] sdc2[1]
      99629440 blocks super 1.0 [3/2] [UU_]

md34 : active raid1 sdbt2[0] sdbp2[1]
      830240064 blocks super 1.0 [3/2] [UU_]

md33 : active raid1 sdbu2[0] sdbs2[1]
      830240064 blocks super 1.0 [3/2] [UU_]

md32 : active raid1 sdby2[0] sdbv2[1]
      830240064 blocks super 1.0 [3/2] [UU_]

md31 : active raid1 sdbr2[0] sdbw2[1]
      830240064 blocks super 1.0 [3/2] [UU_]

md30 : active raid1 sdbq2[0] sdbx2[1]
      830240064 blocks super 1.0 [3/2] [UU_]
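As a side note, the [3/2] [UU_] status above means each array expects three members but only two are active. If it helps, the membership and state of any of these arrays can be double-checked with mdadm; a minimal sketch using one of the array names from the output above:

mdadm --detail /dev/md35    # lists member devices, their roles, and the overall array state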
We are only using the second partition of each drive for our md devices. In the log (attached previously) we see that the whole drive, the first partition, and the second partition are all "Added to device cache". Is this "device cache" some other cache, not related to the SSD cache we're trying to create?
#device/dev-cache.c:333 /dev/sdb: Added to device cache
#device/dev-cache.c:330 /dev/disk/by-id/ata-Samsung_SSD_840_EVO_120GB_S1D5NSBF109608M: Aliased to /dev/sdb in device cache
#device/dev-cache.c:330 /dev/disk/by-id/wwn-0x50025388a01d30bc: Aliased to /dev/sdb in device cache
#device/dev-cache.c:333 /dev/sdb1: Added to device cache
#device/dev-cache.c:330 /dev/disk/by-id/ata-Samsung_SSD_840_EVO_120GB_S1D5NSBF109608M-part1: Aliased to /dev/sdb1 in device cache
#device/dev-cache.c:330 /dev/disk/by-id/wwn-0x50025388a01d30bc-part1: Aliased to /dev/sdb1 in device cache
#device/dev-cache.c:319 /dev/disk/by-partlabel/metadata: Already in device cache
#device/dev-cache.c:330 /dev/disk/by-partuuid/e7d5c622-626c-4122-9d5c-ccb1ae1ff0dc: Aliased to /dev/sdb1 in device cache
#device/dev-cache.c:330 /dev/disk/by-uuid/dc8f83e2-1073-4128-b64f-9e86b9539c67: Aliased to /dev/sdb1 in device cache
#device/dev-cache.c:333 /dev/sdb2: Added to device cache
#device/dev-cache.c:330 /dev/disk/by-id/ata-Samsung_SSD_840_EVO_120GB_S1D5NSBF109608M-part2: Aliased to /dev/sdb2 in device cache
#device/dev-cache.c:330 /dev/disk/by-id/wwn-0x50025388a01d30bc-part2: Aliased to /dev/sdb2 in device cache
#device/dev-cache.c:319 /dev/disk/by-partlabel/raiddata: Already in device cache
#device/dev-cache.c:330 /dev/disk/by-partuuid/39bcecad-693c-4842-b71d-920c4eb8aaef: Aliased to /dev/sdb2 in device cache
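For comparison with the dev-cache entries above, the devices LVM actually ends up using can be listed directly; a minimal sketch, assuming the default lvm.conf filter:

lvmdiskscan                          # every block device LVM scanned
pvs -o pv_name,vg_name,dev_size      # which of those are initialized as physical volumes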
Hello Jonathan,

Thanks for the reply. Regarding the information you are seeking:

1) I've attached the output of the verbose vgchange command (see the capture sketch after item 2).
2) By crash, I mean the machine hangs and then automatically reboots itself.

Hope that helps.
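For reference, a trace like the one in 1) can be captured roughly as follows; the log file name is just an example:

vgchange -ay -vvvv 2> vgchange-ay-vvvv.log    # the -vvvv debug trace is written to stderr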
--

On Tue, Sep 2, 2014 at 6:00 PM, Brassow Jonathan <jbrassow@redhat.com> wrote:

On Aug 25, 2014, at 11:45 AM, Elvin Cako wrote:
Hello,

Kernel:
3.10.0-123.6.3.el7.x86_64 #1 SMP Wed Aug 6 21:12:36 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

Issue:
In trying to set up the new LVM caching, I've run into some issues. I have followed the steps in the lvm manual:

Thanks for trying LVM caching.

My first guess is that not all the devices are visible to LVM when it tries to construct the LVs. This could have been due to MD running after lvmetad - I'm not sure. However, testing without lvmetad (as you have done) should have yielded different results then. Even without the devices being visible, 'vgchange' should not crash.

We could use a couple pieces of information:
1) What is the verbose trace of 'vgchange -ay'? (Add '-vvvv' for "very verbose" and capture the output.)
2) What do you mean by "crash"? Does the system go down? Does the command hang or segfault?

brassow

Elvin
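For context, the cache setup steps from the lvm manual that were being followed look roughly like the sketch below; the volume group and LV names are placeholders rather than the actual ones used, and md35 is assumed to be the SSD-backed mirror from the mdstat output above:

pvcreate /dev/md35                                                    # SSD-backed RAID1 to hold the cache
vgextend vg_data /dev/md35                                            # add it to the VG holding the origin LV
lvcreate --type cache-pool -L 90G -n lv_cachepool vg_data /dev/md35   # create the cache pool on the SSD PV
lvconvert --type cache --cachepool vg_data/lv_cachepool vg_data/lv_origin   # attach it to the origin LV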
_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/