On Thu, Dec 10, 2015 at 9:18 PM, Dusty Mabe <dusty@xxxxxxxxxxxxx> wrote:
>
> On 12/10/2015 08:35 PM, Chris Murphy wrote:
>> Followup question: does Docker use a thin pool directly, without
>> creating (virtual size) logical volumes? I don't see any other LVs
>> created, and no XFS filesystem appears on the host in the output of
>> the mount command, yet I do see XFS mount and umount kernel messages
>> on the host. This is a somewhat esoteric question, but I have no
>> access to container files from the host the way I can see inside each
>> btrfs subvolume when btrfs is the backing method. That suggests the
>> backup strategy may be rather different depending on the backing.
>
> I believe it chops it up using low-level device-mapper operations. I
> think you don't see the mounts on your host because they are in a
> different mount namespace (part of the magic behind containers).
>
> For more info on Docker + device mapper, see slides 37-44 of [1].
>
> [1] - http://www.slideshare.net/Docker/docker-storage-drivers

I read all the slides. They're really helpful; there's quite a bit of
detail considering they're slides. The driver is definitely more
device-mapper based than LVM based (which makes sense, given it's named
"devicemapper"). The most that appears in LVM's view is the thin pool
itself, and once Docker owns it, LVM can't create virtual (thin) LVs
from that pool.

As for the obscurity, it's partly perception: I'm quite comfortable with
the LVM tools, but not nearly as comfortable with dmsetup. On the other
hand, in a production setup the local backing store should probably be
considered disposable, without warning, anyway. So if container state
matters, some regular sweep should commit that state into images and
store them somewhere else. Seriously, if the backing store were to
faceplant, it's simply faster to start over from the most recent image
than to attempt repairs.

--
Chris Murphy
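
For anyone who wants to see those XFS mounts from the host anyway,
entering the container's mount namespace should do it. A rough sketch,
untested here, with "web1" standing in for a real container name:

    # Find the container's init PID, then run mount inside its mount
    # namespace. "web1" is a hypothetical container name; substitute
    # one from `docker ps`.
    PID=$(docker inspect --format '{{.State.Pid}}' web1)
    nsenter --target "$PID" --mount -- mount | grep xfs

That at least confirms the filesystems are there; they're simply not
visible from the host's own mount namespace.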
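
For poking at the device-mapper side without going deep into dmsetup,
something like the following shows roughly what Docker has carved out of
the pool. Device and pool names vary with the setup, so treat this as a
sketch rather than exact output:

    dmsetup ls        # the pool plus a thin device per active container/layer
    dmsetup status    # the pool line reports used/total data and metadata blocks
    dmsetup table     # the thin-pool and thin targets backing each device
    docker info       # the devicemapper section shows pool name and space usage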
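
The sweep itself doesn't need to be fancy. A minimal sketch, assuming a
hypothetical container "web1" and a backup destination of /srv/backups
(pushing to a registry would work just as well):

    # Snapshot the container's current filesystem state as an image,
    # then export that image to a tarball that lives off the thin pool.
    STAMP=$(date +%Y%m%d)
    docker commit web1 web1-backup:"$STAMP"
    docker save -o /srv/backups/web1-"$STAMP".tar web1-backup:"$STAMP"

The point is just that the copy ends up somewhere other than the backing
store that might faceplant.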