Hello,
I wrote a PHP client script (using the latest AWS S3 API library) and it did not solve my hanging-download problem at all, so the problem does not seem to be in libs3 or the s3 tool. Changing the MTU from 1500 to 9000 and back did not help either. Are there any Apache (mpm-worker), FastCGI, rgw, or librados tuning options to handle more concurrent downloads? (File sizes are between 16K and 1024K.)

Mihaly
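For reference, these are the Apache-side knobs I have been looking at. This is only a sketch: the values are placeholders, not recommendations, and the socket path is an assumption from my setup:

```apache
# mpm-worker: allow more simultaneous request threads
# (MaxClients must be <= ServerLimit * ThreadsPerChild)
<IfModule mpm_worker_module>
    ServerLimit          8
    ThreadsPerChild     64
    MaxClients         512
</IfModule>

# mod_fastcgi: the "idle timeout (30 sec)" in the error log is the
# default -idle-timeout; raising it masks, but does not fix, the hangs
FastCgiExternalServer /var/www/radosgw.fcgi -socket /tmp/radosgw.sock -idle-timeout 120
```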
2013/9/17 Mihály Árva-Tóth <mihaly.arva-toth@xxxxxxxxxxxxxxxxxxxxxx>
Hello,

I'm trying to download objects from one container (which contains 3 million objects, file sizes between 16K and 1024K) with 10 parallel threads. I'm using the "s3" binary that comes with libs3. I'm monitoring download times; 80% of response times are below 50-80 ms. But sometimes a download hangs for up to 17 seconds and Apache returns error code 500. The Apache error log shows a lot of:
[Tue Sep 17 11:33:11 2013] [error] [client 194.38.106.67] FastCGI: comm with server "/var/www/radosgw.fcgi" aborted: idle timeout (30 sec)
[Tue Sep 17 11:33:11 2013] [error] [client 194.38.106.67] FastCGI: incomplete headers (0 bytes) received from server "/var/www/radosgw.fcgi"
[Tue Sep 17 11:33:11 2013] [error] [client 194.38.106.67] Handler for fastcgi-script returned invalid result code 1

I tried both the native apache2/fastcgi Ubuntu packages and the Ceph-built apache2/fastcgi. When I turn on "rgw print continue = true" with the modified build, the result is slightly better (fewer hangs). "FastCgiWrapper Off" is set, of course.
And if I set the number of parallel GET requests to only 3 (instead of 10), the result is much better: the longest hang is only 1500 ms. So I think this depends on some resource limit, but I have no idea which one.
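Since fewer parallel requests help, I have been experimenting with the gateway's own thread pool in ceph.conf. A sketch of what I mean (the section name matches my setup and the pool size is just a guess; the dumpling default is 100):

```ini
[client.radosgw.gateway]
    rgw print continue = true
    ; assumption: a larger request thread pool might absorb the
    ; concurrent GETs that currently stall behind each other
    rgw thread pool size = 200
```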
Using ceph-0.67.4 on Ubuntu 12.04 x86_64.
I found the following issue (more than a year old): http://tracker.ceph.com/issues/2027
But it was closed as unable to reproduce. I can reproduce it every time.

Thank you,
Mihaly
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com