Hi Karan,
There's info on http://ceph.com/docs/master/rados/operations/pools/
But primarily you need to check your replication levels:
    ceph osd dump -o - | grep 'rep size'
Then alter the pools that are stuck unclean:
    ceph osd pool set <pool> size <n>
    ceph osd pool set <pool> min_size <n>
If you're new to Ceph, it's probably a good idea to double-check your PG
numbers while you're doing this.
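Roughly, the whole sequence looks like this. The pool name "rbd" is only an example; substitute whichever pools `ceph osd lspools` shows and your health output reports as stuck unclean:

```shell
# Show each pool's current replication level ("rep size" in the dump output).
ceph osd dump -o - | grep 'rep size'

# Drop the replica count on a stuck pool to match the number of OSDs
# you actually have (here: a single OSD, so one copy).
ceph osd pool set rbd size 1
ceph osd pool set rbd min_size 1

# While you're at it, sanity-check the PG count for the pool.
ceph osd pool get rbd pg_num
```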
-Michael
On 08/11/2013 11:08, Karan Singh wrote:
Hello Joseph
This sounds like a solution. BTW, how do I set the replication level to 1? Is there a direct command, or do I need to edit the configuration file?
Many Thanks
Karan Singh
----- Original Message -----
From: "Joseph R Gruher" <joseph.r.gruher@xxxxxxxxx>
To: ceph-users@xxxxxxxxxxxxxx
Sent: Thursday, 7 November, 2013 9:14:45 PM
Subject: Re: please help me.problem with my ceph
From: ceph-users-bounces@xxxxxxxxxxxxxx [mailto:ceph-users-
bounces@xxxxxxxxxxxxxx] On Behalf Of ??
Sent: Wednesday, November 06, 2013 10:04 PM
To: ceph-users
Subject: please help me.problem with my ceph
1. I have installed Ceph with one mon/mds and one OSD. When I use 'ceph -s',
there is a warning: health HEALTH_WARN 384 pgs degraded; 384 pgs stuck
unclean; recovery 21/42 degraded (50.000%)
I would think this is because Ceph defaults to a replication level of 2 and you only have one OSD (nowhere to write a second copy), so you are degraded. You could add a second OSD, or perhaps you could set the replication level to 1?
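If you do go the replication-level-1 route, it can be done at runtime with `ceph osd pool set` rather than by editing ceph.conf. A minimal sketch, assuming the default pools a fresh install creates (adjust the list to whatever `ceph osd lspools` actually reports):

```shell
# Default pools on a fresh install; substitute your own pool names.
for pool in data metadata rbd; do
    # One copy per object is enough for a single-OSD cluster.
    ceph osd pool set "$pool" size 1
    ceph osd pool set "$pool" min_size 1
done
```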
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com