Re: How to remove stale pgs?

On Thu, Jul 18, 2013 at 6:41 PM, Ta Ba Tuan <tuantb@xxxxxxxxxx> wrote:
> Hi Greg,
>
> I haven't lost any OSDs.
>
> At first, Ceph had 4 PGs (0.f4f, 2.f4d, 0.2c8, 2.2c6) in the stale state.
> Then I tried to re-create those PGs with the following commands:
>
> ceph pg force_create_pg 0.f4f
> ceph pg force_create_pg 2.f4d
> ceph pg force_create_pg 0.2c8
> ceph pg force_create_pg 2.2c6
>
> Now, after two days, Ceph still reports the above PGs in the 'creating' state:

Were these PGs ever up and healthy, or is it a fresh cluster?
I notice that all of these PGs have osd.68 as primary; have you tried
restarting it? People have been turning up a couple of peering-state
bugs recently that can be resolved by restarting the OSD.
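
For example, on the host running osd.68 (this assumes a sysvinit-style
deployment; adjust the restart command to however your daemons are
managed):

sudo service ceph restart osd.68    # restart just osd.68 on its host

Once it has rejoined, you can check whether the PGs peer with
something like:

ceph pg 0.f4f query                 # peering/recovery state of one PG
ceph pg dump_stuck stale            # list any PGs still stuck stale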
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com

>
> root@ceph-mon-01:~# ceph pg dump | grep 'stale\|creating'
> 0.f4f   0   0   0   0   0   0   0   stale+creating  2013-07-17 16:35:06.882419  0'0  0'0  []  [68,12]  0'0  0.000000  0'0  0.000000
> 2.f4d   0   0   0   0   0   0   0   stale+creating  2013-07-17 16:35:22.826552  0'0  0'0  []  [68,12]  0'0  0.000000  0'0  0.000000
> 0.2c8   0   0   0   0   0   0   0   stale+creating  2013-07-17 14:30:54.280454  0'0  0'0  []  [68,5]   0'0  0.000000  0'0  0.000000
> 2.2c6   0   0   0   0   0   0   0   stale+creating  2013-07-17 16:35:28.445878  0'0  0'0  []  [68,5]   0'0  0.000000  0'0  0.000000
>
> How can I delete the above PGs, Greg?
>
> Thank you so much, Greg.
> --tuantaba
>
>
>
>
> On 07/19/2013 05:01 AM, Gregory Farnum wrote:
>
> On Thu, Jul 18, 2013 at 3:53 AM, Ta Ba Tuan <tuantb@xxxxxxxxxx> wrote:
>
> Hi all,
>
> I have 4 stale+inactive PGs; how can I delete them?
>
> pgmap v59722: 21944 pgs: 4 stale, 12827 active+clean, 9113 active+degraded;
> 45689 MB data, 1006 GB used, 293 TB / 294 TB avail;
>
> I searched Google for a long time but still can't resolve it.
> Please help me!
>
> This depends on why they're stale+inactive. Can you pastebin the
> output of "ceph pg dump" and provide the link?
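>
> For example, capture it to a file and upload that (the file name is
> just an example):
>
> ceph pg dump > /tmp/pg_dump.txt    # full placement-group table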
>
> Have you lost any OSDs?
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>