Max thin client sessions/gdm limit?





Greetings,

I've been subscribed to this list for some time and I'd like to start off by thanking everyone who helps out on it. This is my first post to it, so please be gentle :-)

I have several offices set up with CentOS and Red Hat terminal servers, running CentOS 4.6 and RHEL 4.6, with GDM as the display manager and PXES (http://sourceforge.net/projects/pxes/) for the boot image. Each terminal server is also the font server, DHCP server, TFTP server, and DNS server, and provides an LDAP slave (replicated from our master LDAP server at our corporate office) for user accounts and authentication.
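
For context, the boot side is plain ISC dhcpd handing PXE clients to the local TFTP server. The relevant piece of our dhcpd.conf looks roughly like this (the addresses and image filename here are illustrative, not our exact values):

    # illustrative dhcpd.conf fragment -- addresses and filename are
    # placeholders, not our production values
    subnet 10.2.1.0 netmask 255.255.255.0 {
        range 10.2.1.100 10.2.1.250;
        option routers 10.2.1.1;
        option domain-name-servers 10.2.1.2;
        # point PXE clients at the local tftp server for the PXES image
        next-server 10.2.1.2;
        filename "pxelinux.0";
    }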

Everything has been working quite well for some time. However, now I seem to have hit a 50 thin client/GDM session limit. I've tested this several times by powering off all of our thin clients, restarting the terminal server, and then powering up each thin client one at a time. They all work fine up through the 50th thin client. When I power up the 51st, it grabs an IP address, fetches its boot image, proceeds to boot, X starts (gray mesh screen, large X cursor -- the gray screen of death?), and then it just sits there. The GDM login/greeter is never presented.
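
When the 51st client is wedged like this, I can at least sanity-check how many displays gdm thinks it is managing with a rough process head count on the server:

    # rough count: one gdm master plus a slave (and greeter) per
    # managed display; the [g] keeps grep from matching itself
    ps ax | grep -c '[g]dm'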

Looking at the log file on the terminal server shows:

gdm_child_action: Aborting display
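
(I'm watching these as the clients boot with a plain grep against syslog on the server:)

    # gdm logs via syslog, which lands in /var/log/messages on CentOS 4
    tail -f /var/log/messages | grep gdm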


Running gdm in debug mode gives me this (sorry for the long log dump):

May 14 03:08:21 lts-nimbus gdm[4066]: gdm_xdmcp_decode: Received opcode QUERY from client 10.2.1.200
May 14 03:08:21 lts-nimbus gdm[4066]: gdm_xdmcp_handle_query: Opcode 2 from 10.2.1.200
May 14 03:08:21 lts-nimbus gdm[4066]: gdm_xdmcp_send_willing: Sending WILLING to 10.2.1.200
May 14 03:08:24 lts-nimbus gdm[4066]: gdm_xdmcp_decode: Received opcode REQUEST from client 10.2.1.200
May 14 03:08:24 lts-nimbus gdm[4066]: gdm_xdmcp_handle_request: Got REQUEST from 10.2.1.200
May 14 03:08:24 lts-nimbus gdm[4066]: gdm_xdmcp_handle_request: xdmcp_pending=0, MaxPending=4, xdmcp_sessions=50, MaxSessions=80, ManufacturerID=
May 14 03:08:24 lts-nimbus gdm[4066]: gdm_xdmcp_display_dispose_check (200.netbiz.com:0)
May 14 03:08:24 lts-nimbus gdm[4066]: gdm_auth_secure_display: Setting up access for 200.netbiz.com:0
May 14 03:08:24 lts-nimbus gdm[4066]: gdm_auth_secure_display: Setting up access
May 14 03:08:24 lts-nimbus gdm[4066]: gdm_auth_secure_display: Setting up access for 200.netbiz.com:0 - 1 entries
May 14 03:08:24 lts-nimbus gdm[4066]: gdm_xdmcp_display_alloc: display=200.netbiz.com:0, session id=1696684507, xdmcp_pending=1
May 14 03:08:24 lts-nimbus gdm[4066]: gdm_xdmcp_send_accept: Sending ACCEPT to 10.2.1.200 with SessionID=1696684507
May 14 03:08:24 lts-nimbus gdm[4066]: gdm_xdmcp_decode: Received opcode MANAGE from client 10.2.1.200
May 14 03:08:24 lts-nimbus gdm[4066]: gdm_xdmcp_handle_manage: Got MANAGE from 10.2.1.200
May 14 03:08:24 lts-nimbus gdm[4066]: gdm_xdmcp_handle_manage: Got Display=0, SessionID=1696684507 Class=MIT-unspecified from 10.2.1.200
May 14 03:08:24 lts-nimbus gdm[4066]: gdm_xdmcp_handle_manage: Looked up 200.netbiz.com:0
May 14 03:08:24 lts-nimbus gdm[4066]: gdm_choose_indirect_lookup: Host 10.2.1.200 not found
May 14 03:08:24 lts-nimbus gdm[4066]: gdm_forward_query_lookup: Host 10.2.1.200 not found
May 14 03:08:24 lts-nimbus gdm[4066]: gdm_display_manage: Managing 200.netbiz.com:0
May 14 03:08:24 lts-nimbus gdm[4066]: loop check: last_start 0, last_loop 0, now: 1210759704, retry_count: 0
May 14 03:08:24 lts-nimbus gdm[4066]: Resetting counts for loop of death detection
May 14 03:08:24 lts-nimbus gdm[6191]: gdm_slave_start: Starting slave process for 200.netbiz.com:0
May 14 03:08:24 lts-nimbus gdm[4066]: gdm_display_manage: Forked slave: 6191
May 14 03:08:24 lts-nimbus gdm[6191]: gdm_slave_start: Loop Thingie
May 14 03:08:24 lts-nimbus gdm[6191]: gdm_slave_run: Opening display 200.netbiz.com:0
May 14 03:08:24 lts-nimbus gdm[6191]: gdm_slave_run: Sleeping 1 on a retry
May 14 03:08:25 lts-nimbus gdm[6191]: gdm_slave_run: Sleeping 3 on a retry
May 14 03:08:26 lts-nimbus gdm[4066]: (child 5747) gdm_slave_alrm_handler: 10.2.1.220:0 got ARLM signal, to ping display
May 14 03:08:26 lts-nimbus gdm[4066]: gdm_xdmcp_decode: Received opcode MANAGE from client 10.2.1.200
May 14 03:08:26 lts-nimbus gdm[4066]: gdm_xdmcp_handle_manage: Got MANAGE from 10.2.1.200
May 14 03:08:26 lts-nimbus gdm[4066]: gdm_xdmcp_handle_manage: Got Display=0, SessionID=1696684507 Class=MIT-unspecified from 10.2.1.200
May 14 03:08:26 lts-nimbus gdm[4066]: gdm_xdmcp_handle_manage: Session id 1696684507 already managed
May 14 03:08:28 lts-nimbus gdm[6191]: gdm_slave_run: Sleeping 5 on a retry
May 14 03:08:30 lts-nimbus gdm[4066]: gdm_xdmcp_decode: Received opcode MANAGE from client 10.2.1.200
May 14 03:08:30 lts-nimbus gdm[4066]: gdm_xdmcp_handle_manage: Got MANAGE from 10.2.1.200
May 14 03:08:30 lts-nimbus gdm[4066]: gdm_xdmcp_handle_manage: Got Display=0, SessionID=1696684507 Class=MIT-unspecified from 10.2.1.200
May 14 03:08:30 lts-nimbus gdm[4066]: gdm_xdmcp_handle_manage: Session id 1696684507 already managed
May 14 03:08:33 lts-nimbus gdm[6191]: gdm_slave_run: Sleeping 7 on a retry
May 14 03:08:38 lts-nimbus gdm[4066]: gdm_xdmcp_decode: Received opcode MANAGE from client 10.2.1.200
May 14 03:08:38 lts-nimbus gdm[4066]: gdm_xdmcp_handle_manage: Got MANAGE from 10.2.1.200
May 14 03:08:38 lts-nimbus gdm[4066]: gdm_xdmcp_handle_manage: Got Display=0, SessionID=1696684507 Class=MIT-unspecified from 10.2.1.200
May 14 03:08:38 lts-nimbus gdm[4066]: gdm_xdmcp_handle_manage: Session id 1696684507 already managed
May 14 03:08:40 lts-nimbus gdm[6191]: gdm_slave_run: Sleeping 9 on a retry
May 14 03:08:49 lts-nimbus gdm[6191]: gdm_slave_run: Sleeping 11 on a retry
May 14 03:08:54 lts-nimbus gdm[4066]: gdm_xdmcp_decode: Received opcode MANAGE from client 10.2.1.200
May 14 03:08:54 lts-nimbus gdm[4066]: gdm_xdmcp_handle_manage: Got MANAGE from 10.2.1.200
May 14 03:08:54 lts-nimbus gdm[4066]: gdm_xdmcp_handle_manage: Got Display=0, SessionID=1696684507 Class=MIT-unspecified from 10.2.1.200
May 14 03:08:54 lts-nimbus gdm[4066]: gdm_xdmcp_handle_manage: Session id 1696684507 already managed
May 14 03:09:00 lts-nimbus gdm[6191]: gdm_slave_run: Sleeping 13 on a retry
May 14 03:09:13 lts-nimbus gdm[6191]: gdm_slave_run: Sleeping 15 on a retry
May 14 03:09:26 lts-nimbus gdm[4066]: gdm_xdmcp_decode: Received opcode MANAGE from client 10.2.1.200
May 14 03:09:26 lts-nimbus gdm[4066]: gdm_xdmcp_handle_manage: Got MANAGE from 10.2.1.200
May 14 03:09:26 lts-nimbus gdm[4066]: gdm_xdmcp_handle_manage: Got Display=0, SessionID=1696684507 Class=MIT-unspecified from 10.2.1.200
May 14 03:09:26 lts-nimbus gdm[4066]: gdm_xdmcp_handle_manage: Session id 1696684507 already managed
May 14 03:09:28 lts-nimbus gdm[6191]: gdm_slave_run: Sleeping 17 on a retry
May 14 03:09:45 lts-nimbus gdm[6191]: gdm_slave_run: Sleeping 19 on a retry
May 14 03:09:58 lts-nimbus gdm[4066]: gdm_xdmcp_decode: Received opcode MANAGE from client 10.2.1.200
May 14 03:09:58 lts-nimbus gdm[4066]: gdm_xdmcp_handle_manage: Got MANAGE from 10.2.1.200
May 14 03:09:58 lts-nimbus gdm[4066]: gdm_xdmcp_handle_manage: Got Display=0, SessionID=1696684507 Class=MIT-unspecified from 10.2.1.200
May 14 03:09:58 lts-nimbus gdm[4066]: gdm_xdmcp_handle_manage: Session id 1696684507 already managed
May 14 03:10:04 lts-nimbus gdm[6191]: gdm_slave_quick_exit: Will kill everything from the display
May 14 03:10:04 lts-nimbus gdm[6191]: gdm_slave_quick_exit: Killed everything from the display
May 14 03:10:04 lts-nimbus gdm[4066]: mainloop_sig_callback: Got signal 17
May 14 03:10:04 lts-nimbus gdm[4066]: gdm_cleanup_children: child 6191 returned 4
May 14 03:10:04 lts-nimbus gdm[4066]: gdm_child_action: Aborting display 200.netbiz.com:0
May 14 03:10:04 lts-nimbus gdm[4066]: gdm_display_unmanage: Stopping 200.netbiz.com:0 (slave pid: 0)
May 14 03:10:04 lts-nimbus gdm[4066]: gdm_display_dispose: Disposing 200.netbiz.com:0
May 14 03:10:04 lts-nimbus gdm[4066]: gdm_display_unmanage: Display stopped
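
In case anyone wants to reproduce that output: I believe the debug logging above is just gdm's stock knob, turned on in gdm.conf and followed by a gdm restart (it all goes to syslog):

    # /etc/X11/gdm/gdm.conf -- enable verbose logging to syslog
    [debug]
    Enable=true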

Once this has happened, even powering off one of the thin clients that did get a GDM session/login and then powering up one that did not still doesn't work. However, any of the clients that were already working can be power-cycled and will get a GDM session/login again. If I power down all the thin clients and restart the terminal server, I can again get any of them to work, up to the 50th connection. I've looked through the config files for xfs and gdm but can't find anything resembling a session limit. The only thing that looked remotely close was MaxSessions under the [xdmcp] section, which is already set to 100 (oddly, the debug output above reports MaxSessions=80, so I'm not even sure the running gdm is honoring what I set).
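
For reference, here is everything limit-shaped I can find in those two files (stock RHEL4 paths; the xfs values are, as far as I know, the distribution defaults -- I don't believe we've changed them):

    # /etc/X11/gdm/gdm.conf, trimmed to the relevant [xdmcp] lines
    [xdmcp]
    Enable=true
    Port=177
    MaxPending=4      # matches the debug output above
    MaxSessions=100   # yet the debug output reports MaxSessions=80

    # /etc/X11/fs/config, trimmed -- the only client/session-style
    # limits I can see on the xfs side (believed to be defaults)
    client-limit = 10
    clone-self = on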

I spent a couple of hours searching with Google but didn't really find anything, other than a bug in a different version of GDM than ours that caused it to ignore a session-limit setting -- and I can't find where such a limit would be set in our version anyway.

I believe everything is pretty much stock distribution. We are using the gdm from rpm "gdm-2.6.0.5-7.rhel4.15".
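
To rule out a locally modified install, the package can be checked against the rpm database:

    # show the exact gdm package, then flag any files that differ
    # from what the rpm shipped (changed config files show a 'c')
    rpm -q gdm
    rpm -V gdm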

Does anyone have any ideas? I'm not sure what more info to post, but I'd be glad to send any more log output or config files.

Any help/suggestions are appreciated,

Ryan Faussett

_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos
