-------- Original Message --------
> Date: Tue, 31 Mar 2009 15:26:00 +0300
> From: Stas Oskin <stas.oskin at gmail.com>
> To: gluster-users <gluster-users at gluster.org>
> Subject: Re: Strange issues with du and df
>
> Hi.
>
> OK, df was incorrect because one of the servers went into <defunct>
> (zombie) state, and df waits until the software is done writing or
> releases its write handles.
>
> I'm giving GlusterFS a third try, but these issues are very critical:
>
> 1) 0-size files

Welcome to the club.

> 2) AFR loses some of the files - after a day of running there is a
> mismatch between one server and the second one

Do you mean that you lose the whole file, or just the file's contents?
Somehow lately the whole AFR functionality is borked for me. I tried to
reduce everything to the minimum (no performance translators or anything
else that isn't needed) and I am still losing data. In my case,
restarting the server and client can lead to a total loss of content for
some files.

> 3) Possible memory leak

Could you provide valgrind dumps showing where the leaks are occurring?

> 4) Server going into defunct state
>
> Any idea how best to diagnose and make progress with these issues?
>
> Thanks!
>
> 2009/3/31 Stas Oskin <stas.oskin at gmail.com>
>
> > Hi.
> >
> > I'm getting very strange results with AFR:
> >
> > Client:
> >
> > df -h:
> > Filesystem            Size  Used Avail Use% Mounted on
> > glusterfs              31G   28G 1020M  97% /mnt/media
> >
> > du -sh for the mounted directory:
> > 1007M (completely different)
> >
> > Server 1:
> >
> > df -h:
> > Filesystem            Size  Used Avail Use% Mounted on
> > /dev/hda4              31G   28G 1021M  97% /media
> >
> > du -sh for the exported directory:
> > 1007M (completely different here too!)
> >
> > Server 2:
> >
> > df -h:
> > Filesystem            Size  Used Avail Use% Mounted on
> > /dev/hda4              31G  1.1G   28G   4% /media
> >
> > du -sh for the exported directory:
> > 920M
> >
> > 1) It's just as if AFR is completely broken.
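One common cause of a large df/du gap is files that were unlinked while a
process (here, possibly the defunct glusterfsd) still holds them open: du
walks the directory tree, while df asks the filesystem, which keeps counting
the blocks until the last descriptor closes. A minimal local sketch of the
effect (scratch paths, not from your setup):

```shell
# Demo: a file unlinked while still open disappears from du, but its
# blocks stay allocated (and visible to df) until the fd is closed.
dir=$(mktemp -d)
dd if=/dev/zero of="$dir/big" bs=1M count=10 2>/dev/null

exec 3<"$dir/big"    # hold the file open on fd 3
rm "$dir/big"        # unlink it; the inode survives while fd 3 is open

du -sk "$dir"        # the directory tree now looks (almost) empty
lsof +L1 2>/dev/null | grep "$dir" || true   # lists deleted-but-open files, if lsof is installed

exec 3<&-            # closing the descriptor finally frees the blocks
rmdir "$dir"
```

If df on the affected machine drops back toward the du figure after the
defunct process is killed, this is almost certainly the mechanism.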
> > 2) What's even stranger, server 1 shows completely different stats
> > for the two commands - the first time I've seen that.
> > 3) This happened after the disk space on GlusterFS got completely
> > full.
> >
> > I also have some files with 0 sizes, so the other mentioned problem
> > exists too. The version in use is 2 RC7.
> >
> > How can these issues be tracked down? They are very serious, and
> > directly affect the reliability of the whole cluster.
> >
> > Regards.
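For tracking down the AFR mismatch (files missing or zero-sized on one
replica), a blunt but effective check is to list path and size for every
file on each backend export and diff the two listings. On the real servers
you would run the find over /media on each node (e.g. via ssh); the sketch
below shows the idea on two local stand-in directories, so the names and
paths are illustrative only:

```shell
# Sketch: produce "path size" listings for two (stand-in) export
# directories and diff them. Any line present on only one side is a file
# that is missing, or has a different size, on the other replica.
s1=$(mktemp -d); s2=$(mktemp -d)
echo "same content" > "$s1/ok";  echo "same content" > "$s2/ok"
echo "only here"    > "$s1/lost"      # simulate a file lost on replica 2
: > "$s2/zero"                        # simulate a 0-size file on replica 2

list() { (cd "$1" && find . -type f -printf '%P %s\n' | sort); }
list "$s1" > /tmp/replica1.lst
list "$s2" > /tmp/replica2.lst

diff /tmp/replica1.lst /tmp/replica2.lst || true   # non-empty output = mismatch

rm -rf "$s1" "$s2" /tmp/replica1.lst /tmp/replica2.lst
```

Note that equal sizes do not guarantee equal contents; for a stronger check
you could add a checksum pass (e.g. md5sum) over the files, at the cost of
reading everything once.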