Re: parted/LVM for ET [Re: Storage manager initial requirements and thoughts

On Tue, Mar 20, 2007 at 12:06:39PM +0000, Mark McLoughlin wrote:
> On Wed, 2007-03-14 at 11:29 +0000, Daniel P. Berrange wrote:
> > On Wed, Mar 14, 2007 at 09:17:37AM +0000, Richard W.M. Jones wrote:
> 
> > > Should libvirt's C API use/expose libparted structures directly?
> > > (And how would this affect the remote case?)
> > 
> > I'd say definitely not expose libparted via libvirt APIs. I view libparted
> > as an internal implementation detail. We're not seeking to turn libvirt
> > into a general purpose partitioning tool, but rather just providing a
> > minimal set of APIs for enumerating, creating and assigning virtual disks
> > to machines. Such an API would be operating at a higher, more abstract
> > level than the libparted API, so exposing libparted would be a mistake in
> > this respect.
> 
> 	I still haven't gone off the notion of a virtual storage pool :-)
> 
>   https://www.redhat.com/archives/libvir-list/2007-February/msg00057.html

I like the idea of storage pools too, but not the impl described in that
thread :-) Creating a loopback-mounted sparse file & running LVM on it
is utterly disastrous for both performance & data integrity. It will
also not be portable to non-Linux systems, which don't have LVM. It is
really unnecessary too, since most host machines already have plenty
of other ways for us to deal with storage - and importantly these are
consistent with current manual approaches to managing storage, so we have
good compatibility with non-libvirt managed storage.

 - There are a couple of different types of storage pool
     - An LVM volume group
     - Block devices
     - A directory on a filesystem
 - Each storage pool can have zero or more storage volumes allocated
     - LVM volume group has multiple logical volumes
     - Block device has multiple partitions
     - A directory has multiple files (maybe sparse)
 - Each storage pool has some measure of free space
     - LVM volume group has unallocated physical extents
     - Block device has unpartitioned sectors
     - A directory has free space from underlying filesystem
 - Every host has at least one storage pool with free space - ie a directory
   on a filesystem. Some hosts may also have free LVM space, or unpartitioned
   block devices but we can't assume their presence in general.

This lets us manage all existing VMs, whether device based (LVM/block) or
file based (/var/lib/xen/images), with the new APIs, giving us a good
backwards compatibility story. The performance & reliability are good, since
we're avoiding extra layers of loopback.

There are only a handful of operations we need to support to get an initially
useful API:

  - Enumerate storage pools
  - Enumerate volumes within a pool
  - Extract metadata about pools (free space, UUID?)
  - Extract metadata about volumes (logical size, physical allocation, UUID)
  - Create volume
  - Delete volume

That's pretty much it. I'd be inclined to implement the regular file-based
pool in terms of /var/lib/xen/images (or /var/lib/libvirt/images?) as a first
target. It's by far the easiest, since it merely requires use of POSIX APIs,
and is also completely cross-platform portable (which LVM isn't).

Dan.
-- 
|=- Red Hat, Engineering, Emerging Technologies, Boston.  +1 978 392 2496 -=|
|=-           Perl modules: http://search.cpan.org/~danberr/              -=|
|=-               Projects: http://freshmeat.net/~danielpb/               -=|
|=-  GnuPG: 7D3B9505   F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505  -=| 

