OpenStack Havana root fs resize doesn't work


 



This is *not* a case of that bug.  That LP bug is referring to an
issue with the 'nova resize' command and *not* with an instance
resizing its own root filesystem.  I can confirm that the latter case
works perfectly fine in Havana if you have things configured properly.
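For reference, the in-guest side of this is typically handled by
cloud-init's growpart module plus the root filesystem resize. A minimal
sketch of the relevant settings, assuming cloud-init and cloud-utils are
installed in the image (the file name /etc/cloud/cloud.cfg.d/99-growpart.cfg
is only an example):

# grow the root partition on first boot, then resize the filesystem on it
growpart:
  mode: auto
  devices: ['/']
resize_rootfs: true

With that in place, cloud-init grows the root partition on first boot and
then resizes the filesystem on top of it, provided the hypervisor actually
presents the larger disk to the guest.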

A few questions:

1) What workflow are you using?  (Create a volume from an image ->
boot from that volume, ceph-backed ephemeral, or some other path?)
2) What OS/release are you running?  I've gotten it to work with
recent versions of CentOS, Debian, Fedora, and Ubuntu.
3) What are you actually seeing inside the instance?  Is the *partition*
not being resized at all (as reported by /proc/partitions), or is it just
the filesystem that isn't being resized (as reported by df)?  The commands
below show one way to check.
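A quick way to tell those two cases apart from inside the guest (just a
sketch, assuming a single-partition image with the root disk on /dev/vda;
adjust the device name for your setup):

# does the kernel see the enlarged disk/partition?
cat /proc/partitions

# has the filesystem on / been grown to match?
df -h /

# if the partition grew but the filesystem didn't, growing them by hand
# helps narrow it down (growpart comes from cloud-utils, resize2fs from
# e2fsprogs and is for ext filesystems):
growpart /dev/vda 1
resize2fs /dev/vda1

If even /proc/partitions still shows the old size, the problem is on the
nova/rbd side rather than inside the guest.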

On Tue, Aug 5, 2014 at 3:41 PM, Dinu Vlad <dinuvlad13 at gmail.com> wrote:
> There's a known issue with Havana's rbd driver in nova, and it has nothing to do with Ceph. Unfortunately, it is only fixed in Icehouse. See https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1219658 for more details.
>
> I can confirm that applying the patch manually works.
>
>
> On 05 Aug 2014, at 11:00, Hauke Bruno Wollentin <Hauke-Bruno.Wollentin at innovo-cloud.de> wrote:
>
>> Hi folks,
>>
>> we use Ceph Dumpling as the storage backend for OpenStack Havana. However, our
>> instances are not able to resize their root filesystem.
>>
>> This issue only occurs for the virtual root disk. If we start instances with
>> an attached volume, the attached volume's size is correct.
>>
>> Our infrastructure:
>> - 1 OpenStack Controller
>> - 1 OpenStack Neutron Node
>> - 1 OpenStack Cinder Node
>> - 4 KVM Hypervisors
>> - 4 Ceph-Storage Nodes including mons
>> - 1 dedicated mon
>>
>> As OS we use Ubuntu 12.04.
>>
>> Our cinder.conf on Cinder Node:
>>
>> volume_driver = cinder.volume.driver.RBDDriver
>> rbd_pool = volumes
>> rbd_secret = SECRET
>> rbd_user = cinder
>> rbd_ceph_conf = /etc/ceph/ceph.conf
>> rbd_max_clone_depth = 5
>> glance_api_version = 2
>>
>> Our nova.conf on hypervisors:
>>
>> libvirt_images_type=rbd
>> libvirt_images_rbd_pool=volumes
>> libvirt_images_rbd_ceph_conf=/etc/ceph/ceph.conf
>> rbd_user=admin
>> rbd_secret_uuid=SECRET
>> libvirt_inject_password=false
>> libvirt_inject_key=false
>> libvirt_inject_partition=-2
>>
>> In our instances we see that the virtual disk's size isn't _updated_. It
>> still has the size specified in the image.
>>
>> We use growrootfs in our images as described in the documentation and verified
>> that it works (we temporarily switched to LVM as the storage backend, and
>> resizing works there).
>>
>> Our images are created manually according to the documentation (meaning only one
>> partition, no swap, cloud-utils installed, etc.).
>>
>> Does anyone have some hints on how to solve this issue?
>>
>> Cheers,
>> Hauke



