On Sat, Dec 3, 2016 at 2:34 AM, Rakesh Parkiti <rakeshparkiti@xxxxxxxxxxx> wrote:
> Hi All,
>
> I. Firstly, as per my understanding, RBD image features (exclusive-lock,
> object-map, fast-diff, deep-flatten, journaling) are not yet ready for the
> Ceph Jewel version?

Incorrect -- these features are the default enabled features for all newly
created images under the Jewel release *because* they exist in Jewel.
Perhaps you are referring to krbd in a particular distro kernel (i.e. not a
Jewel-release krbd), in which case, yes, most likely these features are not
currently supported.

> II. The only working image feature is "layering".

Again, this is only correct when referring to krbd, not librbd. Support for
exclusive-lock is available in the latest kernel 4.9 RC.

> III. Trying to configure rbd-mirroring on two different clusters, which
> have the same "ceph" cluster name.
>
> --- Here I have observed two problems:
>
> a) The initial command "ceph-deploy --cluster tom new tom1" works fine on
> Ubuntu 16.04, but it is then unable to create the initial monitor.
> Error: Admin Socket Error.
>
> b) Whereas on CentOS 7, it straight away says:
>
> [user@local1 local]$ ceph-deploy --cluster tom new tom1
> [ceph_deploy.conf][DEBUG ] found configuration file at:
> /home/user/.cephdeploy.conf
> [ceph_deploy.cli][INFO  ] Invoked (1.5.36): /usr/bin/ceph-deploy --cluster
> active new local1
> :
> :
> [ceph_deploy.new][ERROR ] custom cluster names are not supported on
> sysvinit hosts
> [ceph_deploy][ERROR ] ClusterNameError: host local1 does not support
> custom cluster names
>
> Note: This is expected behavior according to the Red Hat errata forum.
>
> Questions:
>
> 1. To configure RBD mirroring for images, the required RBD image features
> are "exclusive-lock + journaling" -- are these two features mandatory?

Yes.

> 2. Are RBD image features working with older Ceph versions like Hammer?

Exclusive-lock is supported with Hammer, but journaling was added in Jewel.

> 3. Is any operating-system-specific kernel required to work with these
> RBD image features?

rbd-mirror support is only available when using librbd -- krbd doesn't
support journaling.

> 4. Is RBD mirroring production ready? If yes, can anyone share the
> working configuration steps?

The only known limitation is that the rbd-mirror daemon isn't highly
available. We are looking to address this in the future Luminous release
of Ceph.

> 5. How to change the cluster name from the default "ceph" cluster name?
> I did not see any official document with proper steps for a cluster name
> change. I only found the procedure at this link:
> http://docs.ceph.com/docs/jewel/rados/deployment/ceph-deploy-new/ . If I
> am wrong, please direct me to the proper link.

The "cluster name" is actually just an artifact of how Ceph clients locate
the proper configuration file. Therefore, even if your clusters were
created using the name "ceph", you can copy each cluster's configuration
file over to an rbd-mirror daemon host, place it in
"/etc/ceph/<cluster name>.conf", and configure mirroring using the
configuration file's cluster name.

> 6. Can we configure rbd-mirroring with the default "ceph" cluster name
> for two different clusters? If yes, how to isolate which is primary and
> which is secondary?

Yes -- see above (i.e. just have two config files available whose filenames
match the cluster names used in the rbd-mirroring pool configuration).
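To make that concrete, here is a minimal sketch of preparing an image for
mirroring; the image name "mirror-img" is illustrative, and note that
journaling requires exclusive-lock to already be enabled:

  $ rbd create --image rbd/mirror-img --size 1G \
        --image-feature exclusive-lock --image-feature journaling

  # or, for an existing image that already has exclusive-lock:
  $ rbd feature enable rbd/mirror-img journaling

  # if the image must later be mapped via krbd, first drop the features
  # the kernel doesn't understand:
  $ rbd feature disable rbd/mirror-img journaling object-map fast-diff deep-flatten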
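And a sketch of the two-config-file approach from questions 5 and 6 -- the
cluster names "primary" and "backup" below are purely illustrative, local
to the mirror host, and this assumes admin keyrings for both clusters have
been copied alongside the config files:

  # on the rbd-mirror daemon host:
  #   /etc/ceph/primary.conf + /etc/ceph/primary.client.admin.keyring
  #   /etc/ceph/backup.conf  + /etc/ceph/backup.client.admin.keyring

  # enable pool-mode mirroring on both sides
  $ rbd --cluster primary mirror pool enable rbd pool
  $ rbd --cluster backup mirror pool enable rbd pool

  # register each cluster as the other's peer
  $ rbd --cluster primary mirror pool peer add rbd client.admin@backup
  $ rbd --cluster backup mirror pool peer add rbd client.admin@primary

  # run the daemon against the cluster that should receive the replicated images
  $ rbd-mirror --cluster backup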
> --- Below is the output when trying to create an RBD image with these
> features (exclusive-lock, object-map, fast-diff, deep-flatten,
> journaling).
>
> Steps Information:
> ==================
>
> user@tom1:~$ uname -a
> Linux tom1 4.4.0-31-generic #50-Ubuntu SMP Wed Jul 13 00:07:12 UTC 2016
> x86_64 x86_64 x86_64 GNU/Linux
>
> user@tom1:~$ lsb_release -a
> No LSB modules are available.
> Distributor ID: Ubuntu
> Description:    Ubuntu 16.04.1 LTS
> Release:        16.04
> Codename:       xenial
>
> user@tom1:~$ ceph -v
> ceph version 10.2.3 (ecc23778eb545d8dd55e2e4735b53cc93f92e65b)
>
> user@tom1:~$ ceph -s
>     cluster c7c91460-3cd6-4183-9ebb-8880fb15865f
>      health HEALTH_OK
>      monmap e1: 3 mons at
> {tom1=10.1.24.93:6789/0,tom2=10.1.24.94:6789/0,tom3=10.1.24.95:6789/0}
>             election epoch 4, quorum 0,1,2 tom1,tom2,tom3
>      osdmap e51: 9 osds: 9 up, 9 in
>             flags sortbitwise
>       pgmap v255: 128 pgs, 1 pools, 114 bytes data, 5 objects
>             322 MB used, 134 GB / 134 GB avail
>                  128 active+clean
>
> user@tom1:~$ rados lspools
> rbd
>
> user@tom1:~$ rbd create --image rbd/img1 --size 1G
> user@tom1:~$ rbd --image rbd/img1 info
> rbd image 'img1':
>         size 1024 MB in 256 objects
>         order 22 (4096 kB objects)
>         block_name_prefix: rbd_data.105a2ae8944a
>         format: 2
>         features: layering, exclusive-lock, object-map, fast-diff,
> deep-flatten
>         flags:
>
> user@tom1:~$ rbd feature enable rbd/img1 journaling
> user@tom1:~$ rbd --image rbd/img1 info
> rbd image 'img1':
>         size 1024 MB in 256 objects
>         order 22 (4096 kB objects)
>         block_name_prefix: rbd_data.105a2ae8944a
>         format: 2
>         features: layering, exclusive-lock, object-map, fast-diff,
> deep-flatten, journaling
>         flags:
>         journal: 105a2ae8944a
>         mirroring state: disabled
>
> user@tom1:~$ sudo rbd map --image rbd/img1
> rbd: sysfs write failed
> RBD image feature set mismatch. You can disable features unsupported by
> the kernel with "rbd feature disable".
> In some cases useful info is found in syslog - try "dmesg | tail" or so.
> rbd: map failed: (6) No such device or address
>
> II. Working with the RBD feature "layering" only:
>
> user@tom1:~$ rbd create --image rbd/img2 --size 1G --image-feature layering
> user@tom1:~$ rbd --image rbd/img2 info
> rbd image 'img2':
>         size 1024 MB in 256 objects
>         order 22 (4096 kB objects)
>         block_name_prefix: rbd_data.105f238e1f29
>         format: 2
>         features: layering
>         flags:
>
> user@tom1:~$ sudo rbd map --image rbd/img2
> /dev/rbd0
>
> user@tom1:~$ rbd showmapped
> id pool image snap device
> 0  rbd  img2  -    /dev/rbd0
>
> Can someone help by replying to these questions? It would be a great
> help. Thanks..!
>
> Thanks,
> Rakesh Parkiti

--
Jason