Storage pool always becomes inactive while an rbd volume is being deleted

Hi Cephers,
I'm using Ceph 0.94 and libvirt 1.2.14. Normally, virsh pool-list shows the storage pool as active, but when I refresh the pool while some of its RBD volumes are being deleted, it becomes inactive.
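
For what it's worth, it reproduces fairly reliably like this (the pool and image names below are just examples):

    virsh pool-list                    # the pool shows as active
    rbd rm libvirt-pool/big-image &    # start deleting a large image
    virsh pool-refresh libvirt-pool    # the refresh races with the deletion
    virsh pool-list --all              # the pool now shows as inactive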

The code path when refreshing a storage pool is:

storagePoolRefresh
    |---> virStorageBackendRBDRefreshPool
              |---> volStorageBackendRBDRefreshVolInfo
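
For context, the volume loop in virStorageBackendRBDRefreshPool() aborts the whole refresh on the first volume it fails to refresh. Condensed from the 1.2.14 sources (error handling trimmed, so details may differ slightly):

for (name = names; name < names + max_size;) {
    virStorageVolDefPtr vol;

    if (STREQ(name, ""))
        break;

    if (VIR_ALLOC(vol) < 0 || VIR_STRDUP(vol->name, name) < 0)
        goto cleanup;

    name += strlen(name) + 1;

    /* any failure here aborts the loop; storagePoolRefresh() then
     * sees refreshPool() fail and marks the pool inactive */
    if (volStorageBackendRBDRefreshVolInfo(vol, pool, &ptr) < 0)
        goto cleanup;

    /* on success, vol is appended to pool->volumes */
}

And volStorageBackendRBDRefreshVolInfo() fails whenever the image can no longer be opened: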

static int volStorageBackendRBDRefreshVolInfo(virStorageVolDefPtr vol,
                                              virStoragePoolObjPtr pool,
                                              virStorageBackendRBDStatePtr ptr)
{
    int r;
    rbd_image_t image;

    r = rbd_open(ptr->ioctx, vol->name, &image, NULL);
    if (r < 0) {
        VIR_DEBUG("failed to open RBD image '%s', check whether it still exists in its pool",
                  vol->name);
        virReportSystemError(-r, _("failed to open the RBD image '%s'"),
                             vol->name);
        return -1;
    }
    /* ... */
}

If vol->name has already been deleted via rbd, libvirt cannot open it during the pool refresh, the refresh fails, and pool->active is set to 0. The storage pool then becomes inactive, and as a result no more VMs can be created from it.
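
The only mitigation I can think of is to make the refresh tolerate images that disappear between rbd_list() and rbd_open(): skip a volume whose open fails with ENOENT instead of failing the whole pool. A rough, untested sketch against the 1.2.14 code:

/* in volStorageBackendRBDRefreshVolInfo(): return the negative errno
 * from rbd_open() instead of a plain -1 */
if ((r = rbd_open(ptr->ioctx, vol->name, &image, NULL)) < 0) {
    virReportSystemError(-r, _("failed to open the RBD image '%s'"),
                         vol->name);
    return r;
}

/* in virStorageBackendRBDRefreshPool(): skip images that were deleted
 * mid-refresh instead of aborting the whole refresh */
if ((r = volStorageBackendRBDRefreshVolInfo(vol, pool, &ptr)) < 0) {
    virStorageVolDefFree(vol);
    if (r == -ENOENT)   /* the image is gone; just drop it from the list */
        continue;
    goto cleanup;       /* any other error still fails the refresh */
}

That way a volume deleted mid-refresh is simply dropped from the pool's volume list, while any other error still marks the pool inactive.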

Is this a bug? Is there a better way to avoid it?
Thanks


Ray Shi
