Re: gluster and LIO, fairly basic setup, having major issues


On 10/06/2016 04:25 PM, Michael Ciccarelli wrote:
This is the contents of the info file. Is there another config file you
would like to see?
type=2
count=2
status=1
sub_count=2
stripe_count=1
replica_count=2
disperse_count=0
redundancy_count=0
version=3
transport-type=0
volume-id=98c258e6-ae9e-4407-8f25-7e3f7700e100
username=removed just cause
password=removed just cause
op-version=3
client-op-version=3
quota-version=0
parent_volname=N/A
restored_from_snap=00000000-0000-0000-0000-000000000000
snap-max-hard-limit=256
diagnostics.count-fop-hits=on
diagnostics.latency-measurement=on
performance.readdir-ahead=on
brick-0=media1-be:-gluster-brick1-gluster_volume_0
brick-1=media2-be:-gluster-brick1-gluster_volume_0
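As an aside for anyone reading the brick lines above: glusterd stores the brick path with its '/' separators flattened to '-'. A minimal bash sketch (my own helper, not a gluster tool; the decode is ambiguous if the real path contains literal dashes) to recover the actual host:path:

```shell
# Decode a brick line from the volume's info file. Gluster flattens the
# brick path's '/' separators to '-' when writing this file, so
# media1-be:-gluster-brick1-gluster_volume_0 maps back to
# media1-be:/gluster/brick1/gluster_volume_0.
# Caveat: ambiguous if the real path itself contains '-'.
brick="media1-be:-gluster-brick1-gluster_volume_0"
host="${brick%%:*}"     # hostname before the colon
path="${brick#*:}"      # flattened path after the colon
path="${path//-//}"     # restore '/' separators (bash-only expansion)
echo "$host:$path"      # -> media1-be:/gluster/brick1/gluster_volume_0
```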

Here are some log entries from etc-glusterfs-glusterd.vol.log:
The message "I [MSGID: 106006]
[glusterd-svc-mgmt.c:323:glusterd_svc_common_rpc_notify] 0-management:
nfs has disconnected from glusterd." repeated 39 times between
[2016-10-06 20:10:14.963402] and [2016-10-06 20:12:11.979684]
[2016-10-06 20:12:14.980203] I [MSGID: 106006]
[glusterd-svc-mgmt.c:323:glusterd_svc_common_rpc_notify] 0-management:
nfs has disconnected from glusterd.
[2016-10-06 20:13:50.993490] W [socket.c:596:__socket_rwv] 0-nfs: readv
on /var/run/gluster/360710d59bc4799f8c8a6374936d2b1b.socket failed
(Invalid argument)
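For log spans like this, a quick tally of how often each MSGID occurs can help spot whether one message is dominating the log. A sketch against the format shown above (the heredoc holds stand-in sample lines; on a real system feed awk from /var/log/glusterfs/etc-glusterfs-glusterd.vol.log instead):

```shell
# Tally glusterd log messages per MSGID. The heredoc is stand-in sample
# input; redirect from the real log file on an actual system.
counts=$(awk 'match($0, /MSGID: [0-9]+/) {
                print substr($0, RSTART + 7, RLENGTH - 7)   # numeric MSGID
              }' <<'EOF' | sort | uniq -c
[2016-10-06 20:10:14.963402] I [MSGID: 106006]
[2016-10-06 20:12:11.979684] W [socket.c:596:__socket_rwv] 0-nfs: readv
[2016-10-06 20:12:14.980203] I [MSGID: 106006]
EOF
)
echo "$counts"
```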

I can provide any specific details you would like to see. Last night I
tried one more time, and it appeared to be working OK running 1 VM
under VMware, but as soon as I had 3 running, the targets became
unresponsive. I believe the gluster volume is OK, but for whatever
reason the iSCSI target daemon seems to be having some issues...

Thank you for the offer to share more details. Would it be possible to log a bug [1] with a tarball of /var/log/glusterfs from both servers attached?


here is from the messages file:
Oct  5 23:13:00 media2 kernel: MODE SENSE: unimplemented page/subpage:
0x1c/0x02
Oct  5 23:13:00 media2 kernel: MODE SENSE: unimplemented page/subpage:
0x1c/0x02
Oct  5 23:13:35 media2 kernel:
iSCSI/iqn.1998-01.com.vmware:vmware4-0941d552: Unsupported SCSI Opcode
0x4d, sending CHECK_CONDITION.
Oct  5 23:13:35 media2 kernel:
iSCSI/iqn.1998-01.com.vmware:vmware4-0941d552: Unsupported SCSI Opcode
0x4d, sending CHECK_CONDITION.
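For reference, opcode 0x4d is the SCSI LOG SENSE command, and mode page 0x1c is the Informational Exceptions Control page; LIO answering CHECK CONDITION for commands it does not implement is expected behavior rather than a fault in itself. A tiny decode sketch (the lookup table is mine, transcribed from the SCSI Primary Commands opcode list, and covers only the values seen in these messages):

```shell
# Map the SCSI opcodes named in the kernel messages above to their
# command names (table transcribed from the SPC opcode list; partial).
decode_opcode() {
  case "$1" in
    0x1a) echo "MODE SENSE(6)" ;;
    0x4d) echo "LOG SENSE" ;;
    0x5a) echo "MODE SENSE(10)" ;;
    *)    echo "unknown" ;;
  esac
}
decode_opcode 0x4d    # -> LOG SENSE
```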


There is a VMware knowledge base article for this error [2]. It could be worth checking whether the recommended steps prevent these messages.



Is it that the gluster overhead is just killing LIO/target?


Difficult to comment based on the available data. Would you happen to know if any gluster self-healing was in progress when this problem was observed?
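If it helps, pending self-heal activity can be checked with the heal CLI. A command sketch (the volume name is a placeholder, not taken from your config):

```
gluster volume heal <VOLNAME> info
gluster volume heal <VOLNAME> info split-brain
```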

Regards,
Vijay

[1] https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS

[2] https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1003278

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users


