Re: Disperse volumes on armhf

What is the endianness of the armhf CPU?
Are you running a 32-bit or 64-bit operating system?
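If it is easier, something like the following, run on one of the armhf nodes, should answer both. This assumes the usual util-linux/glibc tools are present, and the glusterfsd path is a guess that may differ on your install:

lscpu | grep -i -E 'architecture|byte order'   # CPU architecture and endianness
getconf LONG_BIT                               # 32 or 64: userspace word size
uname -m                                       # kernel machine type, e.g. armv7l
file /usr/sbin/glusterfsd                      # word size/endianness of the gluster binary itself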


On Fri, Aug 3, 2018 at 9:51 AM, Fox <foxxz.net@xxxxxxxxx> wrote:
Just wondering if anyone else is running into the behavior with disperse volumes described below, and what I might be able to do about it.

I am using Ubuntu 18.04 LTS on Odroid HC-2 hardware (armhf) and have installed Gluster 4.1.2 via the PPA. I have 12 member nodes, each with a single brick. I can successfully create a working volume via the command:

gluster volume create testvol1 disperse 12 redundancy 4 gluster01:/exports/sda/brick1/testvol1 gluster02:/exports/sda/brick1/testvol1 gluster03:/exports/sda/brick1/testvol1 gluster04:/exports/sda/brick1/testvol1 gluster05:/exports/sda/brick1/testvol1 gluster06:/exports/sda/brick1/testvol1 gluster07:/exports/sda/brick1/testvol1 gluster08:/exports/sda/brick1/testvol1 gluster09:/exports/sda/brick1/testvol1 gluster10:/exports/sda/brick1/testvol1 gluster11:/exports/sda/brick1/testvol1 gluster12:/exports/sda/brick1/testvol1

And start the volume:
gluster volume start testvol1
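For what it's worth, both commands complete without errors. A quick way to double-check the layout and that all 12 bricks are online, using the standard gluster CLI:

gluster volume info testvol1
gluster volume status testvol1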

Mounting the volume on an x86-64 system, it performs as expected.

Mounting the same volume on an armhf system (such as one of the cluster members), I can create directories, but as soon as I try to create a file I get an error and the filesystem unmounts/crashes:
root@gluster01:~# mount -t glusterfs gluster01:/testvol1 /mnt
root@gluster01:~# cd /mnt
root@gluster01:/mnt# ls
root@gluster01:/mnt# mkdir test
root@gluster01:/mnt# cd test
root@gluster01:/mnt/test# cp /root/notes.txt ./
cp: failed to close './notes.txt': Software caused connection abort
root@gluster01:/mnt/test# ls
ls: cannot open directory '.': Transport endpoint is not connected
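In case it is useful, the client-side mount log should have more detail around the crash. On a default install it lives under /var/log/glusterfs and is named after the mount point, so for the mount above that would be something like:

tail -n 100 /var/log/glusterfs/mnt.log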

I get many messages like this in glusterfsd.log:
The message "W [MSGID: 101088] [common-utils.c:4316:gf_backtrace_save] 0-management: Failed to save the backtrace." repeated 100 times between [2018-08-03 04:06:39.904166] and [2018-08-03 04:06:57.521895]


Furthermore, if a cluster member ducks out (reboots, loses connection, etc.) and needs healing, the self-heal daemon logs messages similar to the one above and cannot heal: there is no disk activity (verified via iotop) but very high CPU usage, and the volume heal info command (shown below) indicates the volume still needs healing.
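For reference, this is how I am checking the heal state (standard gluster CLI; glustershd.log is the self-heal daemon's log on a default install):

gluster volume heal testvol1 info
tail -f /var/log/glusterfs/glustershd.log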


I tested all of the above in a virtual environment using x86-64 VMs, and self-healing worked as expected.

Again, this only happens when using disperse volumes. Should I be filing a bug report instead?




--
Milind

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
