[testing ceph] with gnu stow and NFS

Hi Cephers!

TL;DR
Use an NFS server to share binaries and libs (ceph and anything else) among your cluster nodes, then link them with gnu stow from the mounted NFS export into the root ( / ) directory on every node. Switching between your custom ceph builds (or anything else, e.g. tcmalloc) across the whole cluster becomes very fast, easy to automate and consistent. Stow changes your ceph version, using symbolic links, with only two commands.
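
For the impatient, the two commands look like this (the paths are placeholders; the long version below explains them):

$ stow -D -d /home/ceph/<old-build> -t/ BIN; ldconfig;
$ stow -d /home/ceph/<new-build> -t/ BIN; ldconfig;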

Long version:

I want to share with you one of the tricks that makes my life testing ceph a little bit easier. Some time ago I wrote a few words about this in another thread, but you probably missed them because of the heavy discussion going on there.

The main idea was to have an easy mechanism to switch between ceph versions: binaries, libs - everything. But the truth is I'm too lazy to reinstall manually on every host and too ignorant to check whether I've installed the right version ;)

What I have at the moment:
- an NFS server that exports /home/ceph to all of my cluster nodes
- several subfolders with ceph builds, e.g. /home/ceph/ceph-0.94.1, /home/ceph/git/ceph
- and libraries, e.g. /home/ceph/tcmalloc/gperftools-2.4
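
The NFS part itself is nothing special; something along these lines (the server name and subnet are just examples, adjust to your network):

# on the NFS server, in /etc/exports:
/home/ceph  192.168.1.0/24(rw,sync,no_root_squash)

# on every node:
$ mount -t nfs nfs-server:/home/ceph /home/ceph

The export has to be mounted under the same path ( /home/ceph ) on every node, because the symlinks stow creates will point there.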

In /home/ceph/ceph-0.94.1 and /home/ceph/tcmalloc/gperftools-2.4 I have an additional directory called BIN, and everything is installed into it. So instead of the normal install (or building RPMs):

$ make
$ make install

I run something like:

$ mkdir BIN
$ make
$ make DESTDIR=$PWD/BIN install
$ rm -rf $PWD/BIN/var                           # in the case of ceph we don't want to share this directory over NFS, so we remove it


DESTDIR makes "make install" put all package-related files into BIN, laid out just as if BIN were the root ( / ) directory:
$ tree BIN
BIN
├── etc
│   ├── bash_completion.d
│   │   ├── ceph
│   │   ├── rados
│   │   ├── radosgw-admin
│   │   └── rbd
│   └── ceph
├── sbin
│   ├── mount.ceph
│   └── mount.fuse.ceph
└── usr
    ├── bin
    │   ├── ceph
    │   ├── ceph-authtool
    │   ├── ceph_bench_log
    │   ├── ceph-brag
    │   ├── ceph-client-debug
    │   ├── ceph-clsinfo
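
The libraries get the same treatment; for example gperftools (a standard autotools build, so these are the usual commands):

$ cd /home/ceph/tcmalloc/gperftools-2.4
$ ./configure
$ make
$ mkdir BIN
$ make DESTDIR=$PWD/BIN install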

And now it's time for gnu stow: https://www.gnu.org/software/stow/

On every node I run, as root:
$ stow -d /home/ceph/ceph-0.94.1  -t/ BIN; ldconfig;

Stow creates symbolic links for every file/directory from BIN into the root ( / ) directory of my Linux, and ceph works just as if I had installed it the normal way, or from rpms.
$ type ceph
ceph is hashed (/usr/bin/ceph)

$ ls -al /usr/bin/ceph
lrwxrwxrwx 1 root root 50 Dec 11 14:33 /usr/bin/ceph -> ../../home/ceph/ceph-0.94.1/BIN/usr/bin/ceph

I can do the same for other libraries as well:
$ stow -d /home/ceph/tcmalloc/gperftools-2.4   -t/ BIN; ldconfig;
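
A quick sanity check that the daemons really picked up the stowed tcmalloc (ceph-osd is just an example binary here, and the library path may differ on your distro):

$ ldd /usr/bin/ceph-osd | grep tcmalloc
$ ls -al /usr/lib64/libtcmalloc.so.4            # should point into the stowed BIN tree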

If I need to check another ceph/library version, I just stop ceph on all nodes, then "unstow":
$ stow -D -d /home/ceph/ceph-0.94.1  -t/ BIN; ldconfig;

and "stow" again to different version
$ stow -D -d /home/ceph/ceph-0.94.1_my_custom_build  -t/ BIN; ldconfig;

== Exception ==
/etc/init.d/ceph should be copied into / instead of being symlinked, because once you "unstow" ceph the symlink disappears and "service ceph start" won't work.
============
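
Putting it together, the whole switch is easy to script. A minimal sketch, assuming passwordless root ssh to the nodes; the node names and build paths are examples from my setup:

#!/bin/sh
# Switch every node from the OLD ceph build to the NEW one.
OLD=/home/ceph/ceph-0.94.1
NEW=/home/ceph/ceph-0.94.1_my_custom_build
NODES="node1 node2 node3"

for n in $NODES; do
    # /etc/init.d/ceph is a real copy (see the exception above), so
    # remove it before stowing to avoid a stow conflict, and re-copy
    # it afterwards (--remove-destination replaces the fresh symlink
    # instead of writing through it into the NFS tree).
    ssh root@$n "service ceph stop && \
        rm -f /etc/init.d/ceph && \
        stow -D -d $OLD -t/ BIN && \
        stow -d $NEW -t/ BIN && \
        ldconfig && \
        cp --remove-destination $NEW/BIN/etc/init.d/ceph /etc/init.d/ceph && \
        service ceph start"
done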

Then I just start ceph on all nodes and that's all.

Quite fast, isn't it?

The NFS+stow concept can be used not only for builds (configure, make, make install) but for RPMs too (precompiled binaries). Unpack the RPM into a BIN folder and run stow; it will work just as if you had installed the rpm the standard way, into root ( / ).
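
A sketch of the unpack step (the rpm path and version here are only an example):

$ mkdir -p /home/ceph/ceph-0.94.1-rpm/BIN
$ cd /home/ceph/ceph-0.94.1-rpm/BIN
$ rpm2cpio /path/to/ceph-0.94.1.rpm | cpio -idmv
$ cd ..
$ stow -d /home/ceph/ceph-0.94.1-rpm -t/ BIN; ldconfig;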

Placing binaries/libs on NFS does not impact ceph's performance at runtime; it can, however, add some delay during process start, when the binaries are loaded from the filesystem. Of course the NFS server is a SPOF, but for the tests I run this doesn't matter: I test only application behavior and the infrastructure stays untouched.

This idea is a time-saver during the day, and makes automation easy during night tests.

Regards,
Igor.




