Hi,

I am writing a device driver. While testing the driver, I run it under high load (i.e. most of the physical memory is allocated by the driver, and hundreds of processes compete for access to the driver interface, which allows only 32 processes at a time). In that situation the machine 'almost hangs': input on the terminal is processed extremely slowly (dozens of seconds up to more than a minute for a /sbin/lsmod). This is not surprising, due both to the lack of available memory and to the high number of running processes (moreover, the machine is diskless, boots over the network, and has no swap space).

When I kill all the processes that use the driver (with killall), I sometimes observe that the driver's usage count does not drop to 0, and that the driver does not release the memory it had allocated (the driver allocates memory on behalf of a process when the process opens the driver, and releases that memory in the release() method). The symptoms look as if some of the processes had vanished without invoking the release() method.

Since I don't observe any such problems or inconsistencies unless running with excessively high load, I was wondering whether the problem might be related to the Linux OS rather than to my driver (one possibility would be that Linux removes some processes under high load without properly calling the release() method for all open files). Has anyone made similar observations?

regards
Martin

--
Supercomputing System AG     email: maletinsky@scs.ch
Martin Maletinsky            phone: +41 (0)1 445 16 05
Technoparkstrasse 1          fax:   +41 (0)1 445 16 10
CH-8005 Zurich

--
Kernelnewbies: Help each other learn about the Linux kernel.
Archive:      http://mail.nl.linux.org/kernelnewbies/
IRC Channel:  irc.openprojects.net / #kernelnewbies
Web Page:     http://www.kernelnewbies.org/