Fwd: locking on mds

Forwarding to the list.


---------- Forwarded message ----------
From: M. Piscaer <debian@xxxxxxxxxxx>
Date: Thu, Nov 14, 2013 at 7:22 PM
Subject: Re:  locking on mds
To: John Spray <john.spray@xxxxxxxxxxx>


Yes, after upgrading all nodes to Ceph 0.72, the problem is fixed.

Thanks Yanzheng, Greg and John for the help.

Now I need to update the Puppet code from
https://github.com/enovance/puppet-ceph.

I just don't know which version of it I originally downloaded.

But that is a new issue.

Kind regards,

Michiel Piscaer

On Thu, 2013-11-14 at 17:10 +0000, John Spray wrote:
> This is just a complicated version of the following test:
>
> with one file
> for n in N:
>   pick one of two clients
>   on that client:
>     open the file, increment a value, close the file
>     print the value after it was incremented.
>     check that the incremented value is equal to n+1
>
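> A minimal standalone sketch of that test (not using PHP sessions, just
> a shared counter file on the CephFS mount; the path and file name are
> only illustrative) could look something like this, run from both web
> nodes:
>
> <?php
> // counter_test.php -- run from web-01 and web-02 against the shared mount.
> $path = '/var/www/storage/sessions/counter';   // illustrative path
> $prev = -1;
> for ($n = 0; $n < 10000; $n++) {
>     $fp = fopen($path, 'c+');                  // create if missing, keep contents
>     if ($fp === false || !flock($fp, LOCK_EX)) {
>         die("open/lock failed at iteration $n\n");
>     }
>     $value = (int) stream_get_contents($fp);   // read the current counter
>     $value++;
>     rewind($fp);
>     ftruncate($fp, 0);
>     fwrite($fp, (string) $value);              // write the new value back
>     fflush($fp);
>     flock($fp, LOCK_UN);
>     fclose($fp);
>     echo "count: $value\n";
>     if ($value <= $prev) {                     // must only ever increase
>         die("counter went backwards at iteration $n\n");
>     }
>     $prev = $value;
> }
>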
> It seems that when this test runs, it sees the counter going back to
> zero, which I would guess is PHP's behaviour when, for some reason, it
> fails to open/load its session file.
>
> PHP is relying on flock() for safety here[1]. A quick google for
> "cephfs flock" takes me to http://tracker.ceph.com/issues/2825.
> Almost certainly worth re-testing with latest Ceph.
>
> John
>
> 1. https://github.com/php/php-src/blob/d9bfe06194ae8f760cb43a3e7120d0503f327398/ext/session/mod_files.c
>
> On Thu, Nov 14, 2013 at 4:11 PM, Gregory Farnum <greg@xxxxxxxxxxx> wrote:
> > [ Adding list back. ]
> >
> > I don't know php; I'm not sure what it means for "count" to drop to
> > zero in your script. What erroneous behavior do you believe the
> > filesystem is displaying?
> > -Greg
> > Software Engineer #42 @ http://inktank.com | http://ceph.com
> >
> > On Thu, Nov 14, 2013 at 12:04 AM, M. Piscaer <debian@xxxxxxxxxxx> wrote:
> >> Greg,
> >>
> >> What do you mean,
> >>
> >> What info do you need?
> >>
> >> Kind regards,
> >>
> >> Michiel Piscaer
> >>
> >>
> >>> On Wed, 2013-11-13 at 16:19 -0800, Gregory Farnum wrote:
> >>> I'm not too familiar with the toolchain you're using, so can you
> >>> clarify what problem you're seeing with CephFS here?
> >>> -Greg
> >>> Software Engineer #42 @ http://inktank.com | http://ceph.com
> >>>
> >>>
> >>> On Wed, Nov 13, 2013 at 12:06 PM, M. Piscaer <debian@xxxxxxxxxxx> wrote:
> >>> >
> >>> > Hi,
> >>> >
> >>> > I have a web cluster setup where the persistence timeout on the load
> >>> > balancers is 0. To share the sessions I use Ceph version 0.56.7, as
> >>> > you can see in the diagram below.
> >>> >
> >>> > +----------------+
> >>> > | Internet       |
> >>> > +----------------+
> >>> >              |
> >>> >        +-----+-----------------------+
> >>> >        |                             |
> >>> > +-----------------+          +-----------------+
> >>> > | loadbalancer-01 |          | loadbalancer-02 |
> >>> > +-----------------+          +-----------------+
> >>> >        |                             |
> >>> >        +-----+--192.168.1.0/24-------+
> >>> >              |
> >>> > +--------+   |     +--------+
> >>> > | web-01 |---+-----| web-02 |
> >>> > +--------+   |     +--------+
> >>> >              |
> >>> > +--------+   |     +--------+
> >>> > | osd-01 |---+-----| osd-02 |
> >>> > +--------+   |     +--------+
> >>> >              |
> >>> > +--------+   |     +--------+
> >>> > | mds-01 |---+-----| mds-02 |
> >>> > +--------+   |     +--------+
> >>> >              |
> >>> >     +--------+--------+-------------------+
> >>> >     |                 |                   |
> >>> > +--------+         +--------+         +--------+
> >>> > | mon-01 |         | mon-02 |         | mon-03 |
> >>> > +--------+         +--------+         +--------+
> >>> >
> >>> > On the web nodes I mount the CephFS filesystem:
> >>> > ** /etc/fstab **
> >>> > mon-01:/  <Session_mountpoint>  ceph  defaults,name=admin,secret=<secret_key>  0  0
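> >>> >
> >>> > (For reference, the equivalent manual mount would be something along
> >>> > the lines of the command below; <Session_mountpoint> and <secret_key>
> >>> > are the same placeholders as in the fstab entry.)
> >>> >
> >>> > mount -t ceph mon-01:/ <Session_mountpoint> -o name=admin,secret=<secret_key>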
> >>> >
> >>> > My problem is that when a session gets frequent updates, I sometimes
> >>> > lose my session data.
> >>> >
> >>> > I can reproduce the problem with the following PHP script:
> >>> >
> >>> > <?php
> >>> > // page2.php
> >>> >
> >>> > session_save_path('/var/www/storage/sessions/');
> >>> > session_start();
> >>> >
> >>> > $_SESSION['count']++;
> >>> > echo 'count: ';
> >>> > echo $_SESSION['count'];
> >>> >
> >>> > ?>
> >>> >
> >>> > When I run the following commands:
> >>> >
> >>> > michielp@michielp-hp:~$ wget --no-check-certificate
> >>> > --keep-session-cookies --save-cookies /tmp/cookies.txt
> >>> > https://sub.domain.nl/page2.php -O -
> >>> > michielp@michielp-hp:~$ for foo in {1..10000}; do wget
> >>> > --no-check-certificate --load-cookies /tmp/cookies.txt
> >>> > "https://sub.domain.nl/page2.php"; -O - -o /dev/null; sleep 0.3; done
> >>> >
> >>> > At 10, 100, 1000 and beyond, the counter drops back to 0. When I use
> >>> > sleep 0.4, everything works fine.
> >>> >
> >>> > michielp@michielp-hp:~$ for foo in {1..10000}; do wget
> >>> > --no-check-certificate --load-cookies /tmp/cookies.txt
> >>> > "https://sub.domain.nl/page2.php"; -O - -o /dev/null; done
> >>> >
> >>> > count: 1
> >>> > count: 2
> >>> > count: 3
> >>> > count: 4
> >>> > count: 5
> >>> > count: 6
> >>> > count: 7
> >>> > count: 8
> >>> > count: 9
> >>> > count: 10
> >>> > count: 1
> >>> > count: 2
> >>> > count: 1
> >>> >
> >>> > Also, when I switch off one of the web servers, the problem disappears.
> >>> >
> >>> > On mds-01 I see the following message:
> >>> > root@isp-oscaccstormds-01:/var/log/ceph# tail ceph-mds.5.log
> >>> > 2013-11-13 20:49:00.428592 7f20fca22700  0 mds.0.server
> >>> > handle_client_file_setlock: start: 0, length: 0, client: 18900, pid:
> >>> > 10032, type: 4
> >>> >
> >>> > The config of the ceph cluster looks like:
> >>> >
> >>> > [global]
> >>> >   auth cluster required = cephx
> >>> >   auth service required = cephx
> >>> >   auth client required = cephx
> >>> >   keyring = /etc/ceph/keyring
> >>> >   cluster network = 192.168.1.0/24
> >>> >   public network = 192.168.1.0/24
> >>> >
> >>> >   fsid = 82ecbd50-81ff-4f6c-a009-0bd02a1b4043
> >>> >
> >>> > [mon]
> >>> >   mon data = /var/lib/ceph/mon/mon.$id
> >>> >
> >>> > [osd]
> >>> >   osd journal size = 4096
> >>> >   filestore flusher = false
> >>> >   osd data = /var/lib/ceph/osd/osd.$id
> >>> >   osd journal = /var/lib/ceph/osd/osd.$id/journal
> >>> >   osd mkfs type = xfs
> >>> >   keyring = /var/lib/ceph/osd/osd.$id/keyring
> >>> >
> >>> > [mds]
> >>> >   mds data = /var/lib/ceph/mds/mds.$id
> >>> >   keyring = /var/lib/ceph/mds/mds.$id/keyring
> >>> >
> >>> > [mon.0]
> >>> >   host = mon-01
> >>> >   mon addr = 192.168.1.56:6789
> >>> >
> >>> > [mon.1]
> >>> >   host = mon-02
> >>> >   mon addr = 192.168.1.57:6789
> >>> >
> >>> > [mon.2]
> >>> >   host = mon-03
> >>> >   mon addr = 192.168.1.58:6789
> >>> >
> >>> > [mds.5]
> >>> >   host = mds-01
> >>> >
> >>> > [mds.6]
> >>> >   host = mds-02
> >>> >
> >>> > [osd.0]
> >>> >    host = osd-02
> >>> >    devs = /dev/sdb1
> >>> >    cluster addr = 192.168.1.60
> >>> >    public addr = 192.168.1.60
> >>> >
> >>> > [osd.1]
> >>> >    host = osd-01
> >>> >    devs = /dev/sdb1
> >>> >    cluster addr = 192.168.1.59
> >>> >    public addr = 192.168.1.59
> >>> >
> >>> >
> >>> > Kind regards,
> >>> >
> >>> > Michiel Piscaer
> >>> >
> >>> >
> >>> > _______________________________________________
> >>> > ceph-users mailing list
> >>> > ceph-users@xxxxxxxxxxxxxx
> >>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >>
> >>
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users@xxxxxxxxxxxxxx
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



