Re: Available space shrinks to zero after upgrading cluster

On Monday, 9 January 2012, 12:23:53, Guido Winkelmann wrote:
> Hi,
> 
> I just upgraded my cluster to the current git version (from 0.39, using
> https://github.com/NewDreamNetwork/ceph.git), and now ceph -s reports the
> total available space in the cluster as 0, and all of the clients using
> the cluster are blocking:
> 
> # ceph -s
> 2012-01-09 12:12:39.433576    pg v313084: 396 pgs: 396 active+clean; 79166 MB data, 0 KB used, 0 KB / 0 KB avail
> 2012-01-09 12:12:39.434269   mds e11: 1/1/1 up {0=alpha=up:replay}
> 2012-01-09 12:12:39.434289   osd e293: 6 osds: 6 up, 6 in
> 2012-01-09 12:12:39.434338   log 2012-01-09 12:12:26.600575 mon.0 10.3.1.33:6789/0 6 : [INF] osd.1 10.3.1.33:6804/22161 boot
> 2012-01-09 12:12:39.434389   mon e5: 3 mons at {ceph1=10.3.1.33:6789/0,ceph2=10.3.1.34:6789/0,ceph3=10.3.1.35:6789/0}
> 
> I have a cluster with three machines, each with two OSDs and one mon.
> Additionally, the first one also has a single mds.
> 
> I did the upgrade by cloning the latest git sources onto each machine, doing
> the usual ./autogen.sh, ./configure, make && make install, and then
> restarting ceph on one machine after the other using /etc/init.d/ceph
> restart, starting with the last one (see the command sketch below).
> 
> So... what's happening here, and how do I get my cluster working again?
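
For reference, that layout in ceph.conf terms is roughly the following. This is a minimal sketch: the daemon names and mon addresses come from the ceph -s output above, but the host names are only assumed to match the mon names, and data paths, journals and other options are left out entirely.

; monitors, one per machine
[mon.ceph1]
	host = ceph1
	mon addr = 10.3.1.33:6789
[mon.ceph2]
	host = ceph2
	mon addr = 10.3.1.34:6789
[mon.ceph3]
	host = ceph3
	mon addr = 10.3.1.35:6789
; single mds "alpha" on the first machine
[mds.alpha]
	host = ceph1
; two OSDs per machine
[osd.0]
	host = ceph1
[osd.1]
	host = ceph1
[osd.2]
	host = ceph2
[osd.3]
	host = ceph2
[osd.4]
	host = ceph3
[osd.5]
	host = ceph3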
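
Concretely, the command sketch for the upgrade on each node, in the order described above (any configure options I used are omitted here):

# git clone https://github.com/NewDreamNetwork/ceph.git
# cd ceph
# ./autogen.sh
# ./configure
# make && make install
# /etc/init.d/ceph restart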

Oh, hey, the ceph -s output just changed to this:

# ceph -s
2012-01-09 12:28:07.745570    pg v313092: 396 pgs: 232 active+clean, 106 active+clean+degraded, 13 active+clean+replay+degraded, 45 down+degraded+peering; 79166 MB data, 33829 MB used, 825 GB / 876 GB avail; 8792/42838 degraded (20.524%)
2012-01-09 12:28:07.746310   mds e11: 1/1/1 up {0=alpha=up:replay}
2012-01-09 12:28:07.746330   osd e296: 6 osds: 1 up, 6 in
2012-01-09 12:28:07.746379   log 2012-01-09 12:27:24.965150 mon.0 10.3.1.33:6789/0 7 : [INF] osd.4 10.3.1.35:6806/3705 boot
2012-01-09 12:28:07.746422   mon e1: 1 mons at {ceph1=10.3.1.33:6789/0}
2012-01-09 12:28:07.746431   mon e2: 2 mons at {ceph1=10.3.1.33:6789/0,ceph2=10.3.1.34:6789/0}
2012-01-09 12:28:07.746440   mon e3: 1 mons at {ceph1=10.3.1.33:6789/0}
2012-01-09 12:28:07.746446   mon e4: 2 mons at {ceph1=10.3.1.33:6789/0,ceph2=10.3.1.34:6789/0}
2012-01-09 12:28:07.746453   mon e5: 3 mons at {ceph1=10.3.1.33:6789/0,ceph2=10.3.1.34:6789/0,ceph3=10.3.1.35:6789/0}

	Guido

