Hi Dan,

please try to access the rbd volume via the rados tools. If it is working
(you can list images), then the problem is not Ceph. If it is not working,
then you should take care of the Ceph cluster first and make it healthy(er).

First of all, you should correct the mistake with the PG numbers
(some example commands follow at the bottom of this mail).

--
Mit freundlichen Gruessen / Best regards

Oliver Dzombic
IP-Interactive

mailto:info@xxxxxxxxxxxxxxxxx

Address:
IP Interactive UG ( haftungsbeschraenkt )
Zum Sonnenberg 1-3
63571 Gelnhausen

HRB 93402 at the district court of Hanau
Managing director: Oliver Dzombic

Tax no.: 35 236 3622 1
VAT ID: DE274086107

On 29.03.2016 at 14:18, Dan Moses wrote:
>
> The error I pasted is what we got when we ran the qm start command.
> Here is something that could be the cause. I can tell from the error
> that we need some tweaking, but is there anything I can do to just
> allow us to start VMs for now?
>
> root@pm3:~# ceph health
> HEALTH_WARN 380 pgs backfill; 32 pgs backfilling; 631 pgs degraded;
> 382 pgs down; 382 pgs peering; 631 pgs stuck degraded; 382 pgs stuck
> inactive; 1778 pgs stuck unclean; 631 pgs stuck undersized; 631 pgs
> undersized; 5 requests are blocked > 32 sec; recovery 426497/2079157
> objects degraded (20.513%); recovery 718964/2079157 objects misplaced
> (34.580%); too many PGs per OSD (528 > max 300)
>
> -----------------
> Hi Dan,
>
> good.
>
> ---
>
> Please run the command manually.
>
> For now this is a Proxmox-specific problem: something is in a state
> that Proxmox does not like.
>
> But why, we don't know.
>
> You need to provide more info.
>
> So run the command manually, or search the logs for some more info
> around this task error.
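
A quick sketch of the rados/rbd check suggested above. The pool name "rbd"
and the image name are only placeholders here; use whatever pool your
Proxmox storage actually points to:

  rados lspools
  rados -p rbd ls | head
  rbd -p rbd ls
  rbd info rbd/<image-name>

If these return promptly and list your images, RBD access itself is fine
and the problem is on the Proxmox side. If they hang or error out, the
cluster has to be fixed first.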
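
About the "too many PGs per OSD (528 > max 300)" part: pg_num of an
existing pool cannot be reduced in the Ceph releases of that time, so the
realistic options are adding more OSDs or raising the warning threshold
until the pools are recreated with sane pg_num values. A minimal sketch,
assuming you only want to silence the warning for now (600 is just an
example value):

  # /etc/ceph/ceph.conf, [global] section
  mon_pg_warn_max_per_osd = 600

  # or at runtime; depending on the version this may still need a mon restart
  ceph tell mon.* injectargs '--mon_pg_warn_max_per_osd 600'

Note that this changes nothing about the degraded/down/peering PGs; those
still have to recover before the VMs will start reliably.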
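
And for the original task error: running the start command by hand on the
node usually gives more output than the short task error in the GUI. The
VM ID 100 is only a placeholder:

  qm start 100
  tail -n 50 /var/log/syslog
  grep -i rbd /var/log/syslog | tail

Whatever qm start prints there is the information that is still missing in
this thread.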