Re: lot of scsi devices bug

German Staltari wrote:
Hi, this may be a udev bug, but it bit me while I was creating an LV in a cluster, so it might help others with this configuration. When I added SCSI disks (from a SAN) to the cluster nodes, pushing them past 64 SCSI devices, udev created the device node capi20 instead of sdbm. This broke lvm when I tried to create the VGs and LVs; it started giving errors like:

Error locking on node node-06: Internal lvm error, check syslog
Error locking on node node-05: Internal lvm error, check syslog
Error locking on node node-04: Internal lvm error, check syslog
Error locking on node node-01: Internal lvm error, check syslog
Error locking on node node-02: Internal lvm error, check syslog
Error locking on node node-03: Internal lvm error, check syslog
Failed to activate new LV.

When I commented out these lines

SYSFS{dev}="68:0",              NAME="capi20"
SYSFS{dev}="191:[0-9]*",        NAME="capi/%n"
KERNEL=="capi*",                MODE="0660"

in /etc/udev/rules.d/50-udev.rules, everything worked again.
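A quick sketch of why the collision happens, assuming the classic sd major assignments (8 for the first 16 disks, then 65-71, then 128-135, with 16 minors per disk): sdbm is the 65th SCSI disk, which lands on block major 68, minor 0 -- exactly the dev string "68:0" that the capi20 rule matches on. The helper names below are illustrative, not part of any real tool:

```python
# Sketch: map the Nth SCSI disk (1-based) to its classic block major:minor,
# assuming the traditional sd major assignments: 8, then 65-71, then 128-135.
SD_MAJORS = [8] + list(range(65, 72)) + list(range(128, 136))

def sd_major_minor(disk_index):
    """Return (major, minor) of the whole-disk node for the Nth sd disk."""
    group, slot = divmod(disk_index - 1, 16)  # 16 disks per major, 16 minors each
    return SD_MAJORS[group], slot * 16

def sd_name(disk_index):
    """Return the sdX name for the Nth sd disk (sda, ..., sdz, sdaa, ...)."""
    n, name = disk_index, ""
    while n > 0:
        n, rem = divmod(n - 1, 26)
        name = chr(ord('a') + rem) + name
    return "sd" + name

# The 65th disk is sdbm with major 68, minor 0 -- the very "68:0" the
# capi20 rule matches on, so udev names the node capi20 instead.
print(sd_name(65), sd_major_minor(65))  # -> sdbm (68, 0)
```

A rule that matches only on SYSFS{dev} cannot tell char major 68 (CAPI 2.0) apart from block major 68 (sd disks 65-80), so any match that also checks the kernel name or subsystem (as the KERNEL=="capi*" line already does) would presumably avoid the clash.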

I hope this helps,
German Staltari

--

Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster


Forgot to add:
FC4 system, fully updated.

