Re: all VMs cannot start up after all the Ceph hosts are rebooted

I would check whether the images still have an exclusive lock held by a
force-killed VM. librbd will normally clear a stale lock automatically,
but only if it has the permissions needed to blacklist the dead client
from the Ceph cluster. Verify that your OpenStack Ceph user caps are
correct [1][2].

[1] http://docs.ceph.com/docs/master/releases/luminous/#upgrade-from-jewel-or-kraken
[2] http://docs.ceph.com/docs/luminous/rbd/rbd-openstack/#setup-ceph-client-authentication
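
For reference, something like the following (the image spec and the
client.cinder user are the conventional OpenStack names from [2], and
volume-XXXX is a placeholder; substitute your own):

    # List any exclusive lock still held on an affected image:
    rbd lock ls volumes/volume-XXXX

    # A stale lock left by a dead client can also be removed by hand:
    rbd lock rm volumes/volume-XXXX <lock-id> <locker>

    # Inspect the user's caps. Since Luminous they should use the rbd
    # profiles; the mon 'profile rbd' cap includes the "osd blacklist"
    # permission librbd needs to evict a dead client:
    ceph auth get client.cinder
    ceph auth caps client.cinder mon 'profile rbd' \
        osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd-read-only pool=images'
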
On Tue, Dec 4, 2018 at 8:56 AM Simon Ironside <sironside@xxxxxxxxxxxxx> wrote:
>
> On 04/12/2018 09:37, linghucongsong wrote:
>
> But this happened after a sudden power-off of all the hosts!
>
>
> I'm surprised you're seeing I/O errors inside the VMs once they're restarted.
> Is the cluster healthy? What's the output of ceph status?
>
> Simon
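
To Simon's point, the basic health check is simply:

    # Overall cluster state; look for HEALTH_OK and any inactive or stuck PGs:
    ceph status
    # More detail on anything unhealthy:
    ceph health detail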



-- 
Jason
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


