sending to gluster-users.

---------- Forwarded message ----------
From: Raghavendra G <raghavendra at gluster.com>
Date: Wed, Mar 17, 2010 at 9:07 PM
Subject: Re: GlusterFS 3.0.2 small file read performance benchmark
To: John Feuerstein <john at feurix.com>

Hi John,

please find the inlined comments:

On Wed, Mar 17, 2010 at 8:21 PM, John Feuerstein <john at feurix.com> wrote:
> Hello Raghavendra,
>
> > when stdout is redirected to /dev/null, tar on my laptop is not doing
> > any reads (tar cf - . > /dev/null). Can you confirm whether tar has
> > the same behaviour on your test setup? When redirected to any file
> > other than /dev/null, tar does the reads. Can you attach an strace
> > of tar?
>
> Indeed, you are right. I don't have the test systems at hand any more,
> but I have just confirmed it here on my local machine. I am sorry.
>
> This is when stdout goes to a file:
>
> > lstat("./20K-AAA", {st_mode=S_IFREG|0644, st_size=20480, ...}) = 0
> > open("./20K-AAA", O_RDONLY) = 3
> > read(3, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 9216) = 9216
> > write(1, "./\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 10240) = 10240
> > [... more reads and writes here ...]
> > fstat(3, {st_mode=S_IFREG|0644, st_size=20480, ...}) = 0
> > close(3) = 0
>
> ... and this when it goes to /dev/null:
>
> > lstat("./20K-AAA", {st_mode=S_IFREG|0644, st_size=20480, ...}) = 0
> > lstat("./20K-AAA", {st_mode=S_IFREG|0644, st_size=20480, ...}) = 0
>
> So the tests actually did not measure metadata + data read performance,
> but only metadata read performance.
>
> I can understand now that io-cache, read-ahead and quick-read could not
> possibly help here (since the design of these translators does not
> affect fetching metadata?).

Yes.

> But still, it's weird that stat-prefetch makes this test slower. It
> looks like the more translators I used, the more they worked "against"
> each other, possibly fighting for locks...?
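As a side note for anyone re-running this benchmark: GNU tar special-cases an archive destination of /dev/null and skips reading file contents, so interposing a pipe is one way to force the reads to actually happen. A minimal sketch (paths are illustrative; assumes GNU tar):

```shell
#!/bin/sh
# Force tar to really read file data even though the archive is
# ultimately discarded: a pipe hides /dev/null from tar, so it cannot
# apply its skip-the-reads optimization.
set -e
dir=$(mktemp -d)
dd if=/dev/zero of="$dir/20K-AAA" bs=1024 count=20 2>/dev/null
cd "$dir"

# "tar cf - . > /dev/null" may skip read() calls entirely (as the
# strace output above shows); piping through wc -c forces real reads
# and reports how many archive bytes were produced.
tar cf - . | wc -c
```

The printed byte count is at least the file size plus tar headers and padding, confirming the data was actually archived rather than skipped.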
As far as io-threads is concerned, it is not recommended on the client
side, since there is no blocking layer there (sockets are non-blocking).
The only reason to use it would be to make use of multiple processing
units. In that case, it might be helpful on top of the caching
translators, since searching through a cache may be computationally
intensive. As you said in your previous post, it is indeed true that
performance translators should be used judiciously depending on the use
case. For example, in the case of random reads, read-ahead will not be
useful. In such cases the performance translators may even degrade
performance, since they do some housekeeping work of their own.

> After knowing the fact that this was a metadata-only test, the only
> interesting measurement left is the final test run (basic config
> without unneeded performance translators) compared to the local test
> on ext4+VFS:
>
> real 0m38.576s
> user 0m3.356s
> sys 0m6.076s
>
> vs
>
> real 0m1.312s
> user 0m2.264s
> sys 0m3.256s
>
> So an io-cache for metadata could be great? It was just ~250MB of
> data, so even if this test had read it all, the ~40 second difference
> would still be metadata.

Though it is unfair to compare a network file system with a local file
system, I get the crux of what you are saying. stat-prefetch does do
metadata caching, but the metadata (corresponding to the dentries of a
directory) is cached when the directory is read, and the lifetime of
the cache runs from the time the dentries are read until the fd
corresponding to the directory is closed. The targeted use cases were
ls -l on huge directories, Samba, etc. As far as tar is concerned, it
does do readdir, but the stat calls on the dentries are not sent before
the fd is closed. Instead, they are sent after the fd is closed, hence
stat-prefetch does not help here.

Thanks for the detailed tests :).

> Best regards,
> John

regards,
--
Raghavendra G
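For readers who want to reproduce the "basic config without unneeded performance translators" run, a client-side volfile of that era looks roughly like the sketch below. The volume name, host, and brick are illustrative placeholders, not the actual test configuration:

```
volume client
  type protocol/client
  option transport-type tcp
  option remote-host storage1       # illustrative server hostname
  option remote-subvolume brick     # illustrative brick volume name
end-volume

# Each additional performance translator wraps the stack beneath it via
# "subvolumes"; e.g. adding io-cache on top of the client would look like:
#
# volume iocache
#   type performance/io-cache
#   subvolumes client
# end-volume
```

Every translator added this way sits in the call path for each file operation, which is why the housekeeping overhead mentioned above can outweigh the benefit when the translator's cache never gets a hit.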