Christer Weinigel wrote:
> To misquote Scott Adams: I'm not anti-Reiser4, I'm anti-idiot.
>
> /Christer

So can we set the benchmarks issue aside entirely and deal with the real question: now that Hans is not pissing all over everyone, what would it actually take to get Reiser4 submitted with some hope of being accepted?

Your analysis was excellent. It makes the case that, just as there will be a few cases where Reiser4 is absolutely compelling and a few where it makes no sense at all, there are a lot of cases in the middle. I can keep coming up with examples where the attributes unique to Reiser4 are an excellent idea, and you can just as easily come up with cases where it is a bad idea. If the standard is that a filesystem must be all things to all people, then we need to quit letting any filesystems in.

Let's say John takes your advice and tweaks bonnie, or builds whatever benchmark test bed you want, and proves Reiser4 outperforms everything in all cases (highly unlikely). Does that mean it should get in? What if it is 10% slower in most normal cases? Is that a good enough reason for excluding it? Is anyone ready to claim that there are no cases (beyond the extremely rare case of compressing zeros) where Reiser4 may prove to be the best choice?

Compression itself is almost a religious-war subject: for some people it is ALWAYS a bad idea, for others it is ALWAYS a good idea. In the real world it varies. Application-level compression is generally superior to filesystem compression - the application knows more about its data, which allows a better choice of compression algorithm - and fortunately Linux provides one of the best environments for incorporating application-level compression. But generally is not the same as ALWAYS.

While I raised one specific case where, as I understand it, compressed data can produce a net overall gain in performance - network servers where compression and decompression are handled at the client - several other instances have been raised. Depending on the speed of the CPU, the size of memory, the size and speed of the cache, the speed of the disk, the type of data, and so on, compressed data will sometimes not only save space but improve performance - and sometimes it will cost performance. (A back-of-the-envelope model of that trade-off is sketched at the end of this message.) That does not make compression a bad idea. We have several scheduling algorithms because one size does not fit all.

Besides, given the complexity of the issue, I doubt John or anyone else can put together a benchmark that I can't poke holes in. There are so many cases that there is no such thing as the general case. Even performance compressing zeros starts to look almost like a rational measure once you start to calculate all the permutations in the data-compression matrix (a toy demonstration of just how degenerate that case is also follows below).

--
Dave Lynch                          DLA Systems
Software Development:               Embedded Linux
717.627.3770    dhlii@xxxxxxxxxx    http://www.dlasys.net
fax: 1.253.369.9244                 Cell: 1.717.587.7774
Over 25 years' experience in platforms, languages, and technologies too numerous to list.

"Any intelligent fool can make things bigger and more complex... It takes a touch of genius - and a lot of courage - to move in the opposite direction."
Albert Einstein
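
P.S. To make the trade-off above concrete, here is the break-even arithmetic as a toy C program. Every number in it - disk bandwidth, decompression speed, compression ratio - is an illustrative assumption of mine, not a measurement of Reiser4 or any other filesystem; plug in your own figures.

    /* Rough, hypothetical model of when on-disk compression wins a
     * sequential read.  All four numbers are illustrative assumptions.
     * Build: cc -O2 breakeven.c -o breakeven
     */
    #include <stdio.h>

    int main(void)
    {
        double disk_bw   = 60.0;    /* MB/s the disk can stream (assumed) */
        double decomp_bw = 200.0;   /* MB/s of *output* the CPU can inflate */
        double ratio     = 2.0;     /* compression ratio (assumed)        */
        double size_mb   = 1024.0;  /* logical file size in MB            */

        double t_plain = size_mb / disk_bw;            /* read uncompressed */
        double t_disk  = (size_mb / ratio) / disk_bw;  /* read fewer bytes  */
        double t_cpu   = size_mb / decomp_bw;          /* inflate them      */
        /* if read and inflate are pipelined, the slower stage dominates */
        double t_comp  = t_disk > t_cpu ? t_disk : t_cpu;

        printf("uncompressed: %5.1f s   compressed: %5.1f s\n",
               t_plain, t_comp);
        return 0;
    }

With those numbers the compressed read wins, roughly 8.5 s against 17 s, because the pipelined pair is bound by the (now halved) disk stage. Drop decomp_bw to 30 MB/s to model a slow CPU and compression loses badly. The whole argument fits in four variables, which is exactly why no single benchmark settles it.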
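
P.P.S. And since "compressing zeros" keeps coming up as the canonical degenerate case, here is a small demo of why. I'm using userspace zlib purely for convenience; nothing below is specific to what Reiser4's compression plugin actually does.

    /* Hypothetical demo of why "compressing zeros" is a degenerate
     * benchmark: zlib collapses a zero-filled buffer to almost nothing,
     * while pseudo-random data barely shrinks at all.
     * Build: cc -O2 zeros.c -lz -o zeros
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <zlib.h>

    static void try_compress(const char *label, const Bytef *src, uLong srclen)
    {
        uLongf dstlen = compressBound(srclen);
        Bytef *dst = malloc(dstlen);

        if (!dst || compress(dst, &dstlen, src, srclen) != Z_OK) {
            fprintf(stderr, "compress failed for %s\n", label);
        } else {
            printf("%-7s %8lu -> %8lu bytes (%5.1f%%)\n", label,
                   (unsigned long)srclen, (unsigned long)dstlen,
                   100.0 * dstlen / srclen);
        }
        free(dst);
    }

    int main(void)
    {
        enum { N = 1 << 20 };            /* 1 MiB test buffer */
        Bytef *buf = malloc(N);
        uLong i;

        memset(buf, 0, N);               /* all zeros: best case for any codec */
        try_compress("zeros", buf, N);

        for (i = 0; i < N; i++)          /* roughly incompressible input */
            buf[i] = rand() & 0xff;
        try_compress("random", buf, N);

        free(buf);
        return 0;
    }

On a typical build the zero buffer collapses to a fraction of a percent of its original size while the pseudo-random buffer does not shrink at all. A zeros-only benchmark measures the best case of the codec, not anything about the filesystem - which is the point.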