rbd-nbd uses librbd directly -- it runs as a user-space daemon and handles the
kernel's NBD requests over a UNIX socket. As a result, it supports all image
features supported by librbd. You can use the rbd CLI to map/unmap RBD-based
NBD devices [1], similar to how you map/unmap images via krbd (a minimal
sketch of that workflow is included at the end of this message). I wouldn't
see this as a replacement for krbd, but rather as another tool to support
certain RBD use cases [2].

[1] http://docs.ceph.com/docs/master/man/8/rbd/#commands
[2] https://github.com/ceph/ceph/pull/6595

--
Jason Dillaman

----- Original Message -----
> From: "Bill Sanders" <billysanders@xxxxxxxxx>
> To: "Yehuda Sadeh-Weinraub" <yehuda@xxxxxxxxxx>
> Cc: "Sage Weil" <sweil@xxxxxxxxxx>, "ceph-devel" <ceph-devel@xxxxxxxxxxxxxxx>,
>     ceph-users@xxxxxxxx, ceph-maintainers@xxxxxxxx, ceph-announce@xxxxxxxx
> Sent: Thursday, January 14, 2016 2:27:17 PM
> Subject: Re: v10.0.2 released
>
> Is there some information about rbd-nbd somewhere? If it has feature
> parity with librbd and is easier to maintain, will this eventually
> deprecate krbd? We're using the RBD kernel client right now, so this
> looks like something we might want to explore at my employer.
>
> Bill
>
> On Thu, Jan 14, 2016 at 9:04 AM, Yehuda Sadeh-Weinraub
> <yehuda@xxxxxxxxxx> wrote:
> > On Thu, Jan 14, 2016 at 7:37 AM, Sage Weil <sweil@xxxxxxxxxx> wrote:
> >> This development release includes a raft of changes and improvements for
> >> Jewel. Key additions include CephFS scrub/repair improvements, an AIX and
> >> Solaris port of librados, many librbd journaling additions and fixes,
> >> extended per-pool options, an NBD driver for RBD (rbd-nbd) that allows
> >> librbd to present a kernel-level block device on Linux, multitenancy
> >> support for RGW, RGW bucket lifecycle support, RGW support for Swift
> >
> > RGW bucket lifecycle isn't there; it still has some way to go before
> > we merge it in.
> >
> > Yehuda
> >
> >> static large objects (SLO), and RGW support for Swift bulk delete.
> >>
> >> There are also lots of smaller optimizations and performance fixes going
> >> in all over the tree, particularly in the OSD and common code.
> >>
> >> Notable Changes
> >> ---------------
> >>
> >> See
> >>
> >> http://ceph.com/releases/v10-0-2-released/
> >>
> >> [I'd include the changelog here but I'm missing a oneliner that renders
> >> the rst in email-suitable form...]
> >>
> >> Getting Ceph
> >> ------------
> >>
> >> * Git at git://github.com/ceph/ceph.git
> >> * Tarball at http://download.ceph.com/tarballs/ceph-10.0.2.tar.gz
> >> * For packages, see http://ceph.com/docs/master/install/get-packages
> >> * For ceph-deploy, see
> >>   http://ceph.com/docs/master/install/install-ceph-deploy
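
As mentioned above, here is a minimal sketch of the rbd-nbd map/unmap
workflow. It is illustrative only: it assumes a client with the rbd-nbd
tool installed (see the commands documented in [1]), and "mypool/myimage"
is a placeholder pool/image name; exact options may differ between releases.

    # create a test image (1024 MB) and map it through rbd-nbd;
    # rbd-nbd prints the block device it attached the image to
    $ rbd create mypool/myimage --size 1024
    $ sudo rbd-nbd map mypool/myimage
    /dev/nbd0

    # the NBD device behaves like any other block device
    $ sudo mkfs.ext4 /dev/nbd0
    $ sudo mount /dev/nbd0 /mnt

    # tear down when finished
    $ sudo umount /mnt
    $ sudo rbd-nbd unmap /dev/nbd0

Because the I/O path goes through librbd rather than the kernel RBD client,
images using newer librbd-only features should still map this way, which is
the main point of the tool rather than raw performance.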