On Thu, Apr 7, 2011 at 2:54 AM, Yoshiaki Tamura <tamura.yoshiaki@xxxxxxxxx> wrote:
> 2011/4/7 Stefan Hajnoczi <stefanha@xxxxxxxxx>:
>> On Thu, Apr 07, 2011 at 10:14:03AM +0900, Yoshiaki Tamura wrote:
>>> 2011/3/29 Josh Durgin <josh.durgin@xxxxxxxxxxxxx>:
>>> > The new format is rbd:pool/image[@snapshot][:option1=value1[:option2=value2...]]
>>> > Each option is used to configure rados, and may be any Ceph option, or "conf".
>>> > The "conf" option specifies a Ceph configuration file to read.
>>> >
>>> > This allows rbd volumes from more than one Ceph cluster to be used by
>>> > specifying different monitor addresses, as well as having different
>>> > logging levels or locations for different volumes.
>>> >
>>> > Signed-off-by: Josh Durgin <josh.durgin@xxxxxxxxxxxxx>
>>> > ---
>>> >  block/rbd.c |  119 ++++++++++++++++++++++++++++++++++++++++++++++++++--------
>>> >  1 files changed, 102 insertions(+), 17 deletions(-)
>>> >
>>> > diff --git a/block/rbd.c b/block/rbd.c
>>> > index cb76dd3..bc3323d 100644
>>> > --- a/block/rbd.c
>>> > +++ b/block/rbd.c
>>> > @@ -22,13 +22,17 @@
>>> >  /*
>>> >   * When specifying the image filename use:
>>> >   *
>>> > - * rbd:poolname/devicename
>>> > + * rbd:poolname/devicename[@snapshotname][:option1=value1[:option2=value2...]]
>>>
>>> I'm not sure IIUC, but currently this @snapshotname seems to be
>>> meaningless; it doesn't allow you to boot from a snapshot because it's
>>> read only. Am I misunderstanding or testing incorrectly?
>>
>> Read-only block devices are supported by QEMU and can be useful.
>
> I agree. My expectation was that @snapshotname was introduced to provide
> writable snapshots.

The RADOS backend doesn't support writable snapshots. However, further down
the rbd roadmap we plan to have layering, which is in a sense writable
snapshots. The whole shift to librbd was done so that introducing such new
functionality will be transparent and will require few, if any, changes in
the qemu code.
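For readers following the thread, here is a rough sketch (in Python, purely illustrative; it is not the actual C parser in block/rbd.c, and it ignores the escaping the real parser must handle) of how the new rbd:pool/image[@snapshot][:option1=value1[:option2=value2...]] filename format decomposes:

```python
def parse_rbd_filename(filename):
    """Decompose rbd:pool/image[@snapshot][:key=value...] into parts.

    Illustrative only: the real qemu parser handles escaped separators
    and validates option names against rados/Ceph configuration keys.
    """
    if not filename.startswith("rbd:"):
        raise ValueError("not an rbd filename")
    rest = filename[len("rbd:"):]
    # Options follow the first ':' after the pool/image[@snapshot] part.
    spec, _, opts = rest.partition(":")
    pool, _, image = spec.partition("/")
    image, _, snapshot = image.partition("@")
    options = dict(kv.split("=", 1) for kv in opts.split(":")) if opts else {}
    return pool, image, snapshot, options

# e.g. pointing one volume at a specific cluster's config file:
print(parse_rbd_filename("rbd:mypool/myimage@snap1:conf=/etc/ceph/cluster2.conf"))
# → ('mypool', 'myimage', 'snap1', {'conf': '/etc/ceph/cluster2.conf'})
```

Since "conf" selects which Ceph configuration file (and hence which monitors) to use, two -drive arguments with different conf= values can attach volumes from two different clusters to one guest, which is the multi-cluster use case the patch description mentions.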
Yehuda