The error I pasted is what we got when we ran the qm start command. Here is something that could be the cause. I can tell from the error that we need some tweaking, but is there anything I can do to just allow us to start VMs for now?

root@pm3:~# ceph health
HEALTH_WARN 380 pgs backfill; 32 pgs backfilling; 631 pgs degraded; 382 pgs down; 382 pgs peering; 631 pgs stuck degraded; 382 pgs stuck inactive; 1778 pgs stuck unclean; 631 pgs stuck undersized; 631 pgs undersized; 5 requests are blocked > 32 sec; recovery 426497/2079157 objects degraded (20.513%); recovery 718964/2079157 objects misplaced (34.580%); too many PGs per OSD (528 > max 300)

-----------------

Hi Dan,

good.

Please run the command manually. For now this is a Proxmox-specific problem: something is in the way that Proxmox does not like, but why, we don't know. You need to provide more info, so run the command manually or search the logs for more information around this task error.

--
Mit freundlichen Gruessen / Best regards

Oliver Dzombic
IP-Interactive

mailto:info@xxxxxxxxxxxxxxxxx

Address:

IP Interactive UG ( haftungsbeschraenkt )
Zum Sonnenberg 1-3
63571 Gelnhausen

HRB 93402, Amtsgericht Hanau (commercial register)
Managing Director: Oliver Dzombic

Tax No.: 35 236 3622 1
VAT ID: DE274086107

On 29.03.2016 at 14:11, Dan Moses wrote:
> Cleaned up the old logs. All host nodes are happy now in a quorum.
>
> However we cannot start any VMs.
>
> trying to aquire lock...TASK ERROR: can't lock file '/var/lock/qemu-server/lock-106.conf' - got timeout
>
> ------------------------------------------------------
>
> Hi Dan,
>
> the full root partition is the very first thing you have to solve.
>
> This >can< be responsible for the misbehaviour, but it is for >sure< a
> general problem you >need< to solve.
>
> So:
>
> 1. Clean /
> 2. Restart the server
> 3. Check if it's working, and if not, what are the exact error messages
>
> If it's working, great.
>
> If not, tell us what the daemons/VMs tell you on startup, incl.
> the logs.
>
> Good luck!

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
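For anyone following the thread: the advice above (clean /, check the lock, run the command by hand, read the logs) can be sketched as a short shell session. The VM ID 106 and the lock path are taken from the error message; everything else is an assumption about a standard Proxmox node, so adjust paths and IDs for your own setup.

```shell
# Troubleshooting sketch based on the suggestions in this thread.
# VM ID 106 comes from the error message; adjust for your environment.

# 1. Make sure the root filesystem is not (still) full
df -h /

# 2. If it is, find the biggest consumers (-x stays on one filesystem)
du -xh / 2>/dev/null | sort -h | tail -n 20

# 3. Check whether a stale lock file was left behind by a dead task
ls -l /var/lock/qemu-server/
fuser -v /var/lock/qemu-server/lock-106.conf 2>/dev/null \
  || echo "no process holds the lock"

# 4. Only if nothing holds the lock: remove it and retry the start by
#    hand, so the error output is visible directly instead of in the GUI
rm -f /var/lock/qemu-server/lock-106.conf
qm start 106

# 5. If it still fails, collect context from the logs around the task error
grep -i "lock-106\|qemu" /var/log/syslog | tail -n 50
```

Note that this only addresses the Proxmox-side lock timeout; with 382 PGs down/peering in the `ceph health` output above, VM disk I/O on the affected pools may still hang until recovery makes progress.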