Could it be possible that you are hitting only the cache on the first
write? Perhaps the first write fills up a 512MB BBU cache on the server,
and the next write then triggers a flush of that cache? In any case,
output from a "gluster volume profile" session that was active during
the writes would probably be appreciated by the people who can read it -
a sketch of the commands follows below, after the quoted thread.

On Fri, Mar 1, 2013 at 12:42 PM, Nikita A Kardashin
<differentlocal at gmail.com> wrote:
> No, I am talking about the strange behaviour when writing to existing
> files. Maybe it is "broken", but maybe the root of the trouble is in my
> options (flush-behind or something else)?
>
> An illustration of my situation:
>
> root at virtual:~# rm testfile.bin  # removing old file
> root at virtual:~# dd if=/dev/zero of=testfile.bin bs=100M count=3  # testing speed on a new file
> 3+0 records in
> 3+0 records out
> 314572800 bytes (315 MB) copied, 0.268943 s, 1.2 GB/s
> root at virtual:~# dd if=/dev/zero of=testfile.bin bs=100M count=3  # testing speed on an existing file. WOW!
> 3+0 records in
> 3+0 records out
> 314572800 bytes (315 MB) copied, 290.361 s, 1.1 MB/s
>
> Why is writing to an existing file soooo slooooow?
>
>
> 2013/3/1 Brian Candler <B.Candler at pobox.com>
>>
>> On Fri, Mar 01, 2013 at 03:30:07PM +0600, Nikita A Kardashin wrote:
>> > If I try to execute the above command inside a virtual machine (KVM),
>> > the first time everything goes right - about 900MB/s (cache effect,
>> > I think) - but if I run the test again on the existing file, the task
>> > (dd) hangs and can be stopped only by Ctrl+C.
>> > Overall virtual system latency is poor too. For example, apt-get
>> > upgrade upgrades the system very, very slowly, freezing on "Unpacking
>> > replacement" and other IO-related steps.
>> > Does glusterfs have any tuning options that could help me?
>>
>> If you are finding that processes hang or freeze indefinitely, this is
>> not a question of "tuning"; this is simply "broken".
>>
>> Anyway, you're asking the wrong person - I'm currently in the process
>> of stripping out glusterfs, although I remain interested in the project.
>>
>> I did find that KVM performed very poorly, but KVM was not my main
>> application and that's not why I'm abandoning it. I'm stripping out
>> glusterfs primarily because it's not supportable in my environment:
>> there is no documentation on how to analyse and recover from the
>> failure scenarios which can and do happen. This point in more detail:
>> http://www.gluster.org/pipermail/gluster-users/2013-January/035118.html
>>
>> The other downside of gluster was its lack of flexibility, in
>> particular the fact that there is no usage scaling factor on bricks,
>> so even with a simple distributed setup all your bricks have to be the
>> same size. Also, the object store feature, which I wanted to use, has
>> clearly had hardly any testing (even the RPM packages don't install
>> properly).
>>
>> I *really* wanted to deploy gluster, because in principle I like the
>> idea of a virtual distribution/replication system which sits on top of
>> existing local filesystems. But for storage, I need something where
>> operational supportability is at the top of the pile.
>>
>> Regards,
>>
>> Brian.
>
>
> --
> With best regards,
> differentlocal (www.differentlocal.ru | differentlocal at gmail.com),
> System administrator.
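For reference, capturing a profile could look roughly like this. This is
only a sketch; "testvol" is a placeholder for the actual volume name:

    # On one of the gluster servers, before starting the dd runs:
    gluster volume profile testvol start
    # ... run the dd tests on the client ...
    # Dump the counters collected while the writes were running:
    gluster volume profile testvol info > profile-during-writes.txt
    gluster volume profile testvol stop

To take the client-side page cache out of the picture, dd can also be
told to flush data to disk before reporting its timing:

    dd if=/dev/zero of=testfile.bin bs=100M count=3 conv=fdatasync

And since flush-behind was mentioned as a suspect, it can be toggled for
comparison (again assuming the volume is called "testvol"):

    gluster volume set testvol performance.flush-behind off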
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users

--
Kind regards,
Torbjørn Thorsen
Developer / operations technician
Trollweb Solutions AS - Professional Magento Partner
www.trollweb.no
Daytime phone: +47 51215300
Evening/weekend phone: for customers with a service agreement
Visiting address: Luramyrveien 40, 4313 Sandnes
Postal address: Maurholen 57, 4316 Sandnes
Please note that all our standard terms and conditions always apply.