Thanks for the reply, Whit! Perfectly reasonable first question. The websites host user-generated content (think CMS), where people could enter that kind of markup. The likelihood of such a scenario is slim to none, but I'd rather not have that kind of vulnerability in the first place. And yes, we could also add validation and/or stripping of content that falls outside normal bounds, but the main reason I bring up this Page of Death scenario is that I worry it may be indicative of a weakness in the system, and that a different kind of load pattern could trigger the same kind of hang.

To answer the second question: running top on the Linux side during the Page of Death (with nothing else running), I see a CPU spike of anywhere between 80-110% on glusterfsd and 20% on glusterfs, with close to 22 GB of memory free. The machines are 16-core apiece, though. On the Windows side there is next to no effect on CPU, memory, or network utilization.

Ken

On Sun, Jul 17, 2011 at 8:06 PM, Whit Blauvelt <whit.gluster at transpect.com> wrote:

> On Sun, Jul 17, 2011 at 07:56:57PM -0500, Ken Randall wrote:
>
> > However, as a part of a different suite of tests is a Page of Death,
> > which contains tens of thousands of image references on a single page.
>
> Off topic response: Is there ever in real production any page, anywhere,
> that contains tens of thousands of image references? I'm all for testing
> at the extreme, and capacity that goes far beyond what's needed for
> practical purposes. Is that what this is, or do you anticipate real-life
> Page o' Death scenarios?
>
> Closer to the topic: What's going on with the load on the various
> systems? On the Linux side, have you watched each of them with something
> like htop?
>
> Whit
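As a side note, to make runs of the Page of Death comparable, the per-process sampling described above could be scripted rather than eyeballed in top. A minimal sketch (the log filename, interval, and sample count are arbitrary; assumes a Linux host with a procps-style `ps`):

```shell
# Sample CPU/memory for the gluster daemons once per second during the test.
# Falls back to a note if no gluster processes are running on this host.
for i in 1 2 3; do
  date '+%H:%M:%S'
  ps -eo pid,pcpu,pmem,comm | grep -E '[g]lusterfs' || echo 'no gluster processes running'
  sleep 1
done > gluster-load.log
```

Diffing a log like this from a healthy run against one captured during the hang would show whether the 80-110% glusterfsd spike ramps up gradually or hits all at once.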