root@maude:~# mount | grep shm
none on /run/shm type tmpfs (rw,nosuid,nodev)
root@maude:~# df -h | grep shm
none 3.9G 56M 3.9G 2% /run/shm
root@maude:~# ls -al /dev/shm
lrwxrwxrwx 1 root root 8 May 11 16:26 /dev/shm -> /run/shm
As you can see, /dev/shm is a symlink to /run/shm, which I would expect to
work. I also tried creating a new tmpfs and mounting it directly on
/dev/shm (roughly the commands sketched below). Same result: corosync dies
within a few minutes.
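For reference, this is roughly what I did to replace the symlink with a
real tmpfs (the exact options are from memory, so treat them as
approximate):

root@maude:~# rm /dev/shm     # drop the symlink to /run/shm
root@maude:~# mkdir /dev/shm
root@maude:~# mount -t tmpfs -o rw,nosuid,nodev tmpfs /dev/shm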
Any more ideas? There's nothing really unusual about our setup other
than the number of nodes (14) in the cluster. It's running on a Xen XCP
virtual machine, but I wouldn't think that would be a factor.
Thanks.
- Rob P.
On 5/9/2013 7:18 PM, Angus Salkeld wrote:
> On 08/05/13 20:10 -0500, Rob Parsons wrote:
>> I'm using libqb pulled from github on May 4th.
>
> One quick thing to check is the location of your shared memory.
> I use Travis CI for libqb, and Travis uses Ubuntu VMs, so I know
> I had to add a workaround for the shared memory location having
> been moved from /dev/shm to /run/shm.
>
> See: https://github.com/asalkeld/libqb/blob/master/.travis.yml
>
> I'd suggest having a look at the output of:
>
> mount | grep shm
> tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel)
> df -h | grep shm
> tmpfs 3.9G 2.9M 3.9G 1% /dev/shm
>
> and see if you need to run that workaround. (libqb tries /dev/shm
> first.)
>
> -Angus
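One more data point that may help anyone following along: while corosync
is running, you can check where libqb is actually creating its
shared-memory files. In the builds I've seen it names them with a qb-
prefix, so treat that pattern as an assumption:

root@maude:~# ls -l /dev/shm/qb-* /run/shm/qb-* 2>/dev/null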