On Tue, Mar 17, 2009 at 07:56:39AM +0000, Joe Thornber wrote:
> 2009/3/16 Benjamin Marzinski <bmarzins@xxxxxxxxxx>:
> > I definitely see a problem if we use the default stack size on ia64
> > machines. In RHEL5 at least, it's 10MB per thread. With one waiter
> > thread per multipath device, you get a gigabyte of memory wasted on
> > machines with over a hundred multipath devices.
>
> You need to check whether this is 1GB of physical memory, or just a 1GB
> chunk out of the address space.
>
> Some threads need to have their stack reserved and locked into memory
> before calls into the kernel. This avoids deadlocks where the stack
> gets paged out, but the VM can't page it back in until the thread
> completes ...
>
> It sounds like you have many more threads running these days than when
> I last looked at LVM; it's not clear to me how many of them need their
> stacks mem-locked. Do you have an idea?
>
> If they don't need mem-locking, then as long as you're not forcing the
> stack to be physically allocated, I wouldn't worry too much about
> consuming address space.
>
> Hope that ramble made sense.

Yeah, sorry for missing your reply. The issue is that the event threads
occasionally need to hold mutexes that the checker thread (the one that
restores downed paths) needs. If the system was low on memory because a
number of devices had no paths to them and IO was queueing up, you could
run into a problem where an event thread was paged out while holding a
mutex. With the mutex held, the checker thread could never restore the
downed paths, let the IO complete, and free up the memory.

Christophe, if people are O.K. with these patches now, could they get in?

Thanks
-Ben

> - Joe
>
> --
> dm-devel mailing list
> dm-devel@xxxxxxxxxx
> https://www.redhat.com/mailman/listinfo/dm-devel