On Wednesday, 18 March 2009 00:08:03, "Shawn O. Pearce" <spearce@xxxxxxxxxxx> wrote:
> Robin Rosenberg <robin.rosenberg.lists@xxxxxxxxxx> wrote:
> > On Tuesday, 17 March 2009 02:16:09, "Shawn O. Pearce" <spearce@xxxxxxxxxxx> wrote:
> > > If we detect a file open failure while opening a pack we halve
> > > the number of permitted open files and try again, [...]
> >
> > The output of getMessage() isn't that simple to interpret. Here it is
> > filename + " (Too many open files)", and on other platforms it is
> > probably something else. This goes for the message part of most
> > exceptions thrown from platform-specific code like file I/O, socket
> > I/O, etc. The type of exception is a FileNotFoundException, btw.
> >
> > I wonder whether your code works on any platform.
>
> Arrrgh.
>
> OK. Maybe scrap that part of the patch then?

Yes, I think so, unless you want to try something as ugly as
getMessage().toLowerCase().indexOf("(too many")? Not sure what it looks
like on Windows or OS X. We know from the JDK source it's
filename + " (" + reason + ")", and the problem here is the reason part.

> It's too bad they don't have a specific type of exception for this,

FileNotFoundException is a little more specific. Maybe in combination
with a file.exists() and file.canRead() test... (thinking out loud now)

> nor do they have a way to hold onto file descriptors under a type
> like a SoftReference where the runtime can whack them if you have
> too many.

The problem is that soft references are tied to memory pressure, not to
file descriptors. Doing a GC on FileNotFoundException (here) could help
if one uses soft references, or one could prune the cache manually.
The parameter could also be a

> I guess that's why Hadoop HBase just tells you to up your fd ulimit
> to 32767. :-)

Yeah, with gigabytes of memory that might not consume too many
resources anyway.
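For the archives, the heuristics discussed above could be sketched roughly as below: match the platform-dependent reason text in getMessage() (the JDK formats it as filename + " (" + reason + ")"), fall back to a file.exists()/canRead() probe, and halve the permitted number of open files before retrying. All class, method, and interface names here are hypothetical, not actual JGit API, and the message match carries exactly the fragility objected to in the thread.

```java
import java.io.File;
import java.io.FileNotFoundException;

// Illustrative sketch only; names are hypothetical, not JGit API.
public class PackOpenHeuristics {

    // The JDK builds the message as filename + " (" + reason + ")".
    // The reason text is platform- and locale-dependent, so this
    // substring match is fragile by design.
    public static boolean looksLikeFdExhaustion(File f, FileNotFoundException e) {
        final String msg = e.getMessage();
        if (msg != null && msg.toLowerCase().indexOf("(too many") >= 0)
            return true;
        // Fallback probe from the thread: the file exists and is readable,
        // yet opening it still failed -- descriptor exhaustion is likely.
        return f.exists() && f.canRead();
    }

    // Hypothetical callback so the retry loop can be shown self-contained.
    public interface FileOpener<T> {
        T open(File f, int openFileLimit) throws FileNotFoundException;
    }

    // Sketch of the retry idea: halve the permitted number of open
    // files and try again whenever exhaustion is suspected.
    public static <T> T openWithBackoff(FileOpener<T> opener, File f, int maxOpen)
            throws FileNotFoundException {
        int limit = maxOpen;
        while (true) {
            try {
                return opener.open(f, limit);
            } catch (FileNotFoundException e) {
                if (limit > 1 && looksLikeFdExhaustion(f, e))
                    limit /= 2; // shrink the permitted window and retry
                else
                    throw e; // genuine missing file, or nothing left to halve
            }
        }
    }
}
```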
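The soft-reference idea could look roughly like this sketch (again, all names are hypothetical, not JGit code): keep descriptors behind SoftReference so the collector may clear them, and prune the cache manually when an open fails. The caveat stated above still applies: SoftReferences are cleared under memory pressure, not descriptor pressure, so the manual prune on failure is doing the real work here.

```java
import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Hypothetical sketch, not JGit code.
public class SoftPackCache {
    private final Map<File, SoftReference<RandomAccessFile>> cache =
            new HashMap<File, SoftReference<RandomAccessFile>>();

    public RandomAccessFile open(File f) throws FileNotFoundException {
        SoftReference<RandomAccessFile> ref = cache.get(f);
        RandomAccessFile raf = ref != null ? ref.get() : null;
        if (raf != null)
            return raf;
        try {
            raf = new RandomAccessFile(f, "r");
        } catch (FileNotFoundException maybeFdExhaustion) {
            // Manual prune: close and drop every cached descriptor,
            // then retry the open once. If the file truly does not
            // exist, the retry rethrows FileNotFoundException.
            prune();
            raf = new RandomAccessFile(f, "r");
        }
        cache.put(f, new SoftReference<RandomAccessFile>(raf));
        return raf;
    }

    private void prune() {
        for (Iterator<SoftReference<RandomAccessFile>> i =
                cache.values().iterator(); i.hasNext();) {
            RandomAccessFile raf = i.next().get();
            try {
                if (raf != null)
                    raf.close();
            } catch (IOException ignored) {
                // best effort: we are only releasing descriptors
            }
            i.remove();
        }
    }
}
```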
--
robin