On Wed, 4 Feb 2015, Cristian Falcas wrote:
> Hi,
>
> We have an OpenStack installation that uses Ceph as the storage backend.
>
> We mainly use snapshot and boot-from-snapshot from an original
> instance with a 200 GB disk. Something like this:
> 1. import the original image
> 2. make a volume from the image (those 2 steps were done only once,
>    when we installed OpenStack)
> 3. boot the main instance from the volume, update the db inside
> 4. snapshot the instance
> 5. make volumes from the previous snapshot
> 6. boot test instances from those volumes (the last 3 steps take less
>    than 30s)
>
> Currently the fs is btrfs and we are in love with the solution: the
> snapshots are instant and boot from snapshot is also instant. It cut
> our test time (compared with the VMware solution + NetApp storage)
> from 12h to 2h. With VMware we were spending 10h on what is now done
> in a few seconds.

That's great to hear!

> I was wondering if the fs matters in this case, because we are a
> little worried about using btrfs after reading all the horror stories
> here and on the btrfs mailing list.
>
> Is the snapshotting performed by Ceph or by the fs? Can we switch to
> XFS and have the same capabilities: instant snapshot + instant boot
> from snapshot?

The feature set and capabilities are identical. The difference is that
on btrfs we let btrfs do efficient copy-on-write cloning when we touch
a snapshotted object, while with XFS we literally copy the object file
(usually 4MB) on the first write. You will likely see some penalty in
the boot-from-clone scenario, although I have no idea how significant
it will be. (The RBD clone flow behind those steps, and the
copy-vs-clone difference on the OSD side, are sketched at the end of
this message.) On the other hand, we've also seen that btrfs
fragmentation over time can lead to poor performance relative to XFS.
So, no clear answer, really. Sorry!

If you do stick with btrfs, please report back here and share what you
see as far as stability goes (along with the kernel version(s) you are
using). Most of the preference for XFS over btrfs is based on FUD (in
the literal sense), and I don't think we have seen many real user
reports here in a while.

Thanks!
sage
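
For anyone curious what the "make volume from snapshot" steps above map
to on the Ceph side, here is a minimal sketch using the rados/rbd
Python bindings. The pool name ('volumes'), image names, and snapshot
name are placeholders, not taken from the original setup; the Cinder
RBD driver performs essentially this snapshot/protect/clone sequence
through librbd.

    import rados
    import rbd

    # Pool, image, and snapshot names below are placeholders.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('volumes')

    # Snapshot the source volume and protect the snapshot so it can be cloned.
    with rbd.Image(ioctx, 'volume-source') as img:
        img.create_snap('test-baseline')
        img.protect_snap('test-baseline')

    # Create a copy-on-write child volume from the snapshot; this is why
    # "volume from snapshot" is effectively instant regardless of size.
    rbd.RBD().clone(ioctx, 'volume-source', 'test-baseline',
                    ioctx, 'volume-test-01',
                    features=rbd.RBD_FEATURE_LAYERING)

    ioctx.close()
    cluster.shutdown()

The clone is usable immediately because no data is copied up front: the
child only references the parent snapshot until it is written to.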
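
And a rough, hypothetical illustration of the object-level difference
described in the reply (this is not Ceph's actual FileStore code, which
is C++): on btrfs the OSD can clone a snapshotted object with a
copy-on-write ioctl, while on XFS it has to copy the whole ~4MB object
file on the first write. The sketch assumes Linux's FICLONE ioctl,
which uses the same ioctl number as the older BTRFS_IOC_CLONE.

    import fcntl
    import shutil

    # FICLONE from <linux/fs.h>; same ioctl number as BTRFS_IOC_CLONE.
    FICLONE = 0x40049409

    def cow_or_copy(src_path, dst_path):
        """Reflink the object if the filesystem supports it (btrfs),
        otherwise fall back to copying the whole object file (XFS)."""
        with open(src_path, 'rb') as src, open(dst_path, 'wb') as dst:
            try:
                # btrfs path: instant copy-on-write clone, no data moved.
                fcntl.ioctl(dst.fileno(), FICLONE, src.fileno())
            except OSError:
                # XFS path: a full byte-for-byte copy of the ~4MB object.
                shutil.copyfileobj(src, dst)

The fallback branch is where the extra first-write latency in the
boot-from-clone scenario would come from.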