As far as I can tell, Gluster uses Swift for the object storage part. Swift has an arbitrarily set maximum file size of 5 GB. Beyond that you are supposed to split the file into pieces and create a manifest file that is used to concatenate the pieces when downloading. The "st" command does this automatically. That's the reason for setting client_max_body_size to 5G. (A rough sketch of the segment-plus-manifest mechanism is appended at the end of this message.)

On 2011-08-01 09:15, "Gangalwar" <gaurav at gluster.com> wrote:

> Hi,
> Thanks for reporting this issue; it will be fixed in the next release.
> Also, could I know why you are using "client_max_body_size 5G;" in the config file?
>
> Thanks,
> Gaurav
>
> ________________________________
>
> Hello,
>
> I don't know if this is the best way to report a bug, but here goes :).
>
> I have 2 Gluster servers running glusterfs-3.3beta1 on which I have
> configured the Object Storage platform. The servers are on a private
> network with no public IPs, and I was trying to load balance the
> object storage system using nginx. It worked great, except that every
> other request would be answered with a 503 error. Upon inspection of
> /var/log/swift/proxy.error I found the following traceback:
>
> Jul 29 13:28:53 storage05 proxy-server ERROR 500 Traceback (most recent call last):
>   File "/usr/local/lib/python2.6/dist-packages/swift-1.4_dev-py2.6.egg/swift/obj/server.py", line 891, in __call__
>     res = getattr(self, req.method)(req)
>   File "/usr/local/lib/python2.6/dist-packages/swift-1.4_dev-py2.6.egg/swift/obj/server.py", line 733, in GET
>     if file_obj.metadata[X-ETAG] in request.if_none_match:
> NameError: global name 'X' is not defined
> From Object Server 127.0.0.1:6010 (txn: tx2abf0954-1043-4976-a692-39da260d9271)
>
> It seems that line 733 of
> /usr/local/lib/python2.6/dist-packages/swift-1.4_dev-py2.6.egg/swift/obj/server.py
> references X-ETAG instead of X_ETAG (I think it's a typo). Replacing the
> dash with an underscore takes care of the error on my system. If it's of
> any help, here is the nginx config I used:
>
> worker_processes 1;
>
> events {
>     worker_connections 1024;
> }
>
> http {
>     include mime.types;
>     default_type application/octet-stream;
>
>     sendfile on;
>     keepalive_timeout 65;
>
>     upstream backend-secure {
>         server 192.168.5.5:443;
>         server 192.168.5.6:443;
>     }
>
>     server {
>         listen 80;
>         client_max_body_size 5G;
>         location / {
>             proxy_pass https://backend-secure;
>             proxy_set_header Host $host;
>             proxy_set_header X-Real-IP $remote_addr;
>             proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
>             proxy_set_header X-Forwarded-Proto https;
>             proxy_redirect off;
>         }
>     }
>
>     server {
>         listen 443 ssl;
>         client_max_body_size 5G;
>         ssl_certificate /etc/nginx/ssl/cert.crt;
>         ssl_certificate_key /etc/nginx/ssl/key.key;
>         location / {
>             proxy_pass https://backend-secure;
>             proxy_set_header Host $host;
>             proxy_set_header X-Real-IP $remote_addr;
>             proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
>             proxy_set_header X-Forwarded-Proto https;
>             proxy_redirect off;
>         }
>     }
> }
>
> Best regards,
> Gabriel
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
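
For what it's worth, the NameError in the traceback above happens because, inside the square brackets, Python parses X-ETAG as the subtraction X - ETAG and therefore first tries to look up a global named X. A tiny reproduction, assuming server.py defines a module-level string constant X_ETAG (the key value used below is only a guess):

    # Reproduction of the NameError, assuming a string constant X_ETAG
    # exists (its real value in server.py may differ).
    X_ETAG = "ETag"
    metadata = {"ETag": "d41d8cd98f00b204e9800998ecf8427e"}

    print(metadata[X_ETAG])   # works: X_ETAG is a defined name holding the key
    print(metadata[X-ETAG])   # NameError, because X-ETAG is parsed as the
                              # expression "X minus ETAG" and X is undefined

So the one-character fix Gabriel describes (dash to underscore) makes the lookup use the existing constant instead of evaluating a nonexistent variable.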
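
To make the first paragraph concrete, here is a minimal sketch of Swift's "dynamic large object" mechanism, which is what the "st" client automates when a file exceeds the 5 GB limit. The storage URL, auth token, and container/object names are placeholders, not values from this setup:

    # Minimal sketch of a segmented ("dynamic large object") upload in Swift.
    # STORAGE_URL, the token, and the container/object names are placeholders.
    import requests

    STORAGE_URL = "https://192.168.5.5/v1/AUTH_test"    # placeholder account URL
    HEADERS = {"X-Auth-Token": "AUTH_tk_placeholder"}   # placeholder auth token

    # 1. Upload each piece (each must stay under the 5 GB limit) as its own
    #    object under a common prefix in a segments container.
    pieces = [b"first chunk of data", b"second chunk of data"]   # stand-in data
    for i, piece in enumerate(pieces):
        requests.put("%s/segments/big.bin/%08d" % (STORAGE_URL, i),
                     headers=HEADERS, data=piece, verify=False)

    # 2. Upload a zero-byte manifest object.  X-Object-Manifest names the
    #    "<container>/<prefix>" whose objects Swift concatenates on download.
    manifest_headers = dict(HEADERS, **{"X-Object-Manifest": "segments/big.bin/"})
    requests.put("%s/files/big.bin" % STORAGE_URL,
                 headers=manifest_headers, data=b"", verify=False)

    # 3. A GET on files/big.bin now streams all the pieces back as one file.
    resp = requests.get("%s/files/big.bin" % STORAGE_URL, headers=HEADERS, verify=False)
    print(len(resp.content))

With this scheme no single PUT ever exceeds 5 GB, which is why a client_max_body_size of 5G on the nginx proxy is enough even for larger files.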