Re: monitor exclusive lock when rbd client died abruptly


On Wed, Jul 22, 2020 at 7:46 AM Liu, Changcheng
<changcheng.liu@xxxxxxxxx> wrote:
>
> Hi all,
>    I've checked the document below:
>    https://docs.ceph.com/docs/master/rbd/rbd-exclusive-locks/
>
>    Its content contradicts the result of the experiment below:
>    the experiment shows that another process can still write data to an rbd
>    volume while a different process is already writing to the same rbd
>    volume continuously.

This is the expected behaviour.  Exclusive lock is a cooperative
mechanism that ensures that only a single client is able to write
to the image and update its metadata (such as the object map)
at any given moment; it is not held until the client exits.  It is
acquired automatically, and ownership is transparently transitioned
between clients.  In your example, "second" wakes up and requests the
lock from "first", "first" releases it, "second" performs its write,
and "first" reacquires the lock and goes on.

If you want to disable transparent lock transitions, you need to
acquire the lock manually with RBD_LOCK_MODE_EXCLUSIVE:

>
>    1. ceph master head commit:
>    commit 1dd932a8f565dc74cf2441ab139aa173575c0e92
>    Date:   Thu Jul 16 09:42:30 2020 +0900
>
>    2. setup cluster
>    build$ OSD=3 MON=1 MGR=1 RGW=0 MDS=0 ../src/vstart.sh -k -d -n
>
>    3. create rbd volume
>    build$ bin/ceph -c ceph.conf osd pool create rbd 128
>    build$ bin/ceph -c ceph.conf osd pool application enable rbd rbd
>    build$ bin/rbd -c ceph.conf create fio_test --size 10G --image-format=2 --rbd_default_features=13
>
>    4. set environment variable
>    build$ export LD_LIBRARY_PATH=/home/nstcc3/work/src/ceph/build/lib:$LD_LIBRARY_PATH
>
>    5. build attached file
>    build$ g++ librbdtest.cpp -DKILL_DEAD -I../src/include -L lib/ -lrados -lrbd -o first
>    build$ g++ librbdtest.cpp -I../src/include  -L lib/ -lrados -lrbd -o second
>
>    6. run program
>       1) build$ ./first
>          The "first" process writes to the rbd volume continuously.
>       2) build$ ./second
>          The "second" process can still write to the same volume and exits normally.
>
>    source code file: librbdtest.cpp
>    #include <rbd/librbd.hpp>
>    #include <rados/librados.hpp>
>
>    #include <cstdlib>  // for exit()
>    #include <cstring>
>    #include <iostream>
>    #include <string>
>    #include <unistd.h>
>
>    void err_msg(int ret, const std::string &msg = "") {
>        std::cerr << "[error] msg:" << msg << " strerror: "
>                  << strerror(-ret) << std::endl;
>    }
>
>    void err_exit(int ret, const std::string &msg = "") {
>        err_msg(ret, msg);
>        exit(EXIT_FAILURE);
>    }
>
>    int main(int argc, char* argv[]) {
>    #if !defined(KILL_DEAD)
>        sleep(30);
>    #endif
>        int ret = 0;
>        librados::Rados rados;
>
>        ret = rados.init("admin");
>        if (ret < 0)
>            err_exit(ret, "failed to initialize rados");
>        ret = rados.conf_read_file("ceph.conf");
>        if (ret < 0)
>            err_exit(ret, "failed to parse ceph.conf");
>
>        ret = rados.connect();
>        if (ret < 0)
>            err_exit(ret, "failed to connect to rados cluster");
>
>        librados::IoCtx io_ctx;
>        std::string pool_name = "rbd";
>        ret = rados.ioctx_create(pool_name.c_str(), io_ctx);
>        if (ret < 0) {
>            rados.shutdown();
>            err_exit(ret, "failed to create ioctx");
>        }
>
>        // rbd
>        librbd::RBD rbd;
>
>        librbd::Image image;
>        std::string image_name = "fio_test";
>        ret = rbd.open(io_ctx, image, image_name.c_str());
>        if (ret < 0) {
>            io_ctx.close();
>            rados.shutdown();
>            err_exit(ret, "failed to open rbd image");
>        } else {
>            std::cout << "open image succeeded" << std::endl;
>        }
>

              image.lock_acquire(RBD_LOCK_MODE_EXCLUSIVE);
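
For reference, a minimal sketch of how that call could slot into the
program above, with error handling in the style of the existing helpers
(a sketch only, not tested here; it assumes the image is opened
read/write against a running cluster, and uses the `lock_acquire` /
`lock_release` methods declared in librbd.hpp):

```cpp
// Acquire the exclusive lock manually so librbd will not transparently
// hand it over to other clients on request.
ret = image.lock_acquire(RBD_LOCK_MODE_EXCLUSIVE);
if (ret < 0)
    err_exit(ret, "failed to acquire exclusive lock");

// ... perform writes; while the lock is held this way, a competing
// client's request for the lock is refused instead of triggering the
// cooperative transition described above ...

// Release the lock explicitly when done.
ret = image.lock_release();
if (ret < 0)
    err_msg(ret, "failed to release exclusive lock");
```

With the lock held this way, "second" should see its writes refused
rather than succeeding via a transparent hand-off.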

Thanks,

                Ilya
_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx


