As for the PHP script that does lots of small reads/writes, I will try to find it again (I don't remember the exact syntax), but with PHP sessions the bug can be reproduced easily.
I just tested it on a brand new, very simple GlusterFS partition with no traffic (just me), and I reproduced it immediately.
Explanation:
- 2 servers running Debian Lenny (stable)
- GlusterFS 3.0.0 in distributed mode (one server and multiple clients)
- Lighttpd / PHP5 FastCGI
I just mount the GlusterFS partition on the /var/www directory.
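(For the record, the client volfile shown further down is mounted with the plain FUSE client, something along the lines of glusterfs -f /etc/glusterfs/glusterfs.vol /var/www -- the volfile path is just an example of where I keep it.)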
First of all, here is the PHP script you can execute:
<?php
session_save_path('.');
// if you want to verify that it worked:
// echo session_save_path();
session_start(); // opens (and, with the default handler, locks) the session file on the GlusterFS mount
?>
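In case it helps you reproduce it without Lighttpd/PHP: as far as I know, PHP's default "files" session handler basically opens sess_<id> in the save path during session_start() and takes an exclusive flock() that it only releases at session_write_close() or at the end of the request. So a rough stand-alone equivalent (the sess_test file name is just a placeholder I made up) would be something like:

<?php
// Rough approximation of what the "files" session handler does;
// run it from a directory on the GlusterFS mount.
// (the 'c+' mode needs PHP >= 5.2.6)
$fp = fopen('./sess_test', 'c+');   // O_CREAT|O_RDWR, no truncation
flock($fp, LOCK_EX);                // exclusive lock held for the whole "request"
$data = stream_get_contents($fp);   // read the current session data
// ... normally the page code would run here, lock still held ...
rewind($fp);
fwrite($fp, $data);                 // write the (possibly updated) data back
fflush($fp);
flock($fp, LOCK_UN);                // what session_write_close() would do
fclose($fp);
?>

Running that a couple of times against the locks-enabled server should, if my understanding of the handler is right, show the same behaviour as the session script above.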
Secondly, here are two GlusterFS configurations and, of course, one works and one does not.
The client configuration is the same in both cases:
glusterfs.vol
volume test-1
  type protocol/client
  option transport-type tcp
  option remote-host test
  option transport.socket.nodelay on
  option transport.remote-port 6996
  option remote-subvolume brick1
end-volume

volume writebehind
  type performance/write-behind
  option cache-size 4MB
  subvolumes test-1
end-volume

volume readahead
  type performance/read-ahead
  option page-count 4
  subvolumes writebehind
end-volume

volume iocache
  type performance/io-cache
  option cache-size 1GB
  option cache-timeout 1
  subvolumes readahead
end-volume

volume quickread
  type performance/quick-read
  option cache-timeout 1
  option max-file-size 64kB
  subvolumes iocache
end-volume

volume statprefetch
  type performance/stat-prefetch
  subvolumes quickread
end-volume
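Since this client volfile is identical in both runs, whatever goes wrong has to come from the server side: the only moving part is whether features/locks sits between storage/posix and performance/io-threads, as you can see in the two glusterfsd.vol variants below.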
Now the server configuration:
glusterfsd.vol (this does not work)
volume posix1
  type storage/posix
  option directory /data
end-volume

volume locks1
  type features/locks
  subvolumes posix1
end-volume

volume brick1
  type performance/io-threads
  option thread-count 8
  subvolumes locks1
end-volume

volume server-tcp
  type protocol/server
  option transport-type tcp
  option auth.addr.brick1.allow *
  option transport.socket.listen-port 6996
  option transport.socket.nodelay on
  subvolumes brick1
end-volume
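(The server itself is started the usual way, something like glusterfsd -f /etc/glusterfs/glusterfsd.vol -- the path is just an example -- and listens on port 6996 as referenced by the client volfile.)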
glusterfsd.vol (this works)
volume posix1
  type storage/posix
  option directory /data
end-volume

#volume locks1
#  type features/locks
#  subvolumes posix1
#end-volume

volume brick1
  type performance/io-threads
  option thread-count 8
  subvolumes posix1
end-volume

volume server-tcp
  type protocol/server
  option transport-type tcp
  option auth.addr.brick1.allow *
  option transport.socket.listen-port 6996
  option transport.socket.nodelay on
  subvolumes brick1
end-volume
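The only difference between the two server volfiles is that features/locks is removed and brick1 is stacked directly on posix1; everything else is strictly identical.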
So, with the locks translator, you can execute the script once (it works), but the second time the session file is present on the file system but locked, and nobody can access it anymore. PHP freezes and the processes cannot be killed.
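To reproduce it: save the script above somewhere under the GlusterFS-mounted docroot and request it twice; the first request returns, the second one hangs. If you script the requests, note that the second one has to reuse the session cookie from the first (for example curl with a shared cookie jar: curl -c /tmp/cj -b /tmp/cj http://yourserver/session.php, run twice -- URL and file names are just examples), otherwise PHP simply creates a new sess_* file instead of touching the locked one.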
When this happens, there is nothing in the client-side logs, but I get two kinds of messages in the server-side logs:
When I execute the script:
[2010-02-04 21:11:22] W [posix.c:246:posix_lstat_with_gen] posix1: Access to /data//.. (on dev 2049) is crossing device (64768)
[2010-02-04 21:11:24] W [posix.c:246:posix_lstat_with_gen] posix1: Access to /data//.. (on dev 2049) is crossing device (64768)
When I try to umount -f (to disconnect the GlusterFS mount):
[2010-02-04 21:13:45] E [server-protocol.c:339:protocol_server_reply] protocol/server: frame 20: failed to submit. op= 26, type= 4
As I said, I will try to find the other PHP script.
I hope this will help you.
Thanks for your hard work, GlusterFS is a great project.
Regards.
On Thursday, 4 February 2010 at 11:25 -0600, Tejas N. Bhise wrote:
Hi Samuel,

This problem is important for us to fix. We have heard of this from 2-3 users. It seems to be a combination of Apache, PHP session files shared by multiple Apache servers, and locking (or the lock translator). We have tried very hard to reproduce this with test programs that read/write very small files across multiple clients (to mimic the behaviour of shared PHP session files). We even tried this with sample programs (even JMeter) using PHP sessions, but we were not able to reproduce it.

Are you able to reproduce it easily? If so, we would request you to help us reproduce this in-house, or else help us with some tracing from your system (disclaimer: the tracing could slow down the system a bit :-)). Maybe we can start by using your test script - please let us know what gluster config, machine config/type etc. you used so we can try something as close as possible.

Thank you for your help in making the product better.

Regards,
Tejas.

----- Original Message -----
From: "Samuel Hassine" <samuel.hassine@xxxxxxxxx>
To: "Gluster List" <gluster-devel@xxxxxxxxxx>
Cc: "Yann Autissier" <yann.autissier@xxxxxxxxxxxxxxxxxx>
Sent: Thursday, February 4, 2010 8:14:34 PM GMT +05:30 Chennai, Kolkata, Mumbai, New Delhi
Subject: Feedback - Problem with the locks feature

Hi all,

Just a little feedback about a particular use of GlusterFS. We are hosting about 15,000 websites on an infrastructure built around a big file server. We just changed our file-sharing system from NFS to GlusterFS, so we are using a simple distributed GlusterFS volume for the website files. We are also using a second glusterfsd instance (on an alternate port) in order to export a dedicated partition for PHP sessions.

On the first partition, it seems to work very well for reading and writing. But on the second one, with a "classic" server configuration (posix volume, locks volume, performance volume and server volume), the file system freezes immediately. After some tests, we found that it works fine without the locks volume. But after some other tests on the first GlusterFS volume, we discovered that the same problem can occur with a script that writes many little files on the FS (it is the same problem with the latest version of ezpublish: the cache generation fails).

Do you think this could be fixed in the next versions, or is simply bypassing the locks translator a good solution?

Thanks for your answers.

Sam

_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxx
http://lists.nongnu.org/mailman/listinfo/gluster-devel