Re: Can't recover pgs degraded/stuck unclean/undersized

Not that I know of. On 5 other clusters it works just fine, and the configuration is the same for all of them.
On this cluster I was only using radosgw; cephfs was not in use, but it had already been created following our procedures.

This happened right after mounting it.

On Tue, Nov 15, 2016 at 10:24 AM John Spray <jspray@xxxxxxxxxx> wrote:
On Tue, Nov 15, 2016 at 12:14 PM, Webert de Souza Lima
<webert.boss@xxxxxxxxx> wrote:
> Hey John.
>
> Just to be sure; by "deleting the pools" you mean the cephfs_data and
> cephfs_metadata pools, right?
> Does it have any impact over radosgw? Thanks.

Yes, I meant the cephfs pools.  It doesn't affect rgw (assuming your
pool names correspond to what you're using them for).
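(If you want to double-check that before deleting anything, something like

  ceph fs ls

will list the data and metadata pools the filesystem actually uses, so
you can confirm none of the rgw pools are attached to it.)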

By the way, it would be interesting to see what actually went wrong
with your cephfs_metadata pool.  Did you do something different with
it, like trying to use different crush rules?
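
One way to check, assuming the Jewel-era command names, would be
something like:

  ceph osd pool get cephfs_metadata crush_ruleset   # which crush rule the pool uses
  ceph osd pool get cephfs_metadata size            # its replica count
  ceph osd crush rule dump                          # the rules themselves, to compare with the healthy pools
  ceph pg dump_stuck unclean                        # the pool-id prefix shows which pool the 128 PGs belong to

That should show whether the metadata pool ended up with a rule or size
the cluster can't actually satisfy.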

John

>
> On Tue, Nov 15, 2016 at 10:10 AM John Spray <jspray@xxxxxxxxxx> wrote:
>>
>> On Tue, Nov 15, 2016 at 11:58 AM, Webert de Souza Lima
>> <webert.boss@xxxxxxxxx> wrote:
>> > Hi,
>> >
>> > after running a cephfs on my ceph cluster I got stuck with the following
>> > health status:
>> >
>> > # ceph status
>> >     cluster ac482f5b-dce7-410d-bcc9-7b8584bd58f5
>> >      health HEALTH_WARN
>> >             128 pgs degraded
>> >             128 pgs stuck unclean
>> >             128 pgs undersized
>> >             recovery 24/40282627 objects degraded (0.000%)
>> >      monmap e3: 3 mons at {dc1-master-ds01=10.2.0.1:6789/0,dc1-master-ds02=10.2.0.2:6789/0,dc1-master-ds03=10.2.0.3:6789/0}
>> >             election epoch 140, quorum 0,1,2 dc1-master-ds01,dc1-master-ds02,dc1-master-ds03
>> >       fsmap e18: 1/1/1 up {0=b=up:active}, 1 up:standby
>> >      osdmap e15851: 10 osds: 10 up, 10 in
>> >             flags sortbitwise
>> >       pgmap v11924989: 1088 pgs, 18 pools, 11496 GB data, 19669 kobjects
>> >             23325 GB used, 6349 GB / 29675 GB avail
>> >             24/40282627 objects degraded (0.000%)
>> >                  958 active+clean
>> >                  128 active+undersized+degraded
>> >                    2 active+clean+scrubbing
>> >   client io 1968 B/s rd, 1 op/s rd, 0 op/s wr
>> >
>> > # ceph health detail
>> > -> https://paste.debian.net/895825/
>> >
>> > # ceph osd lspools
>> > 2 .rgw.root,3 master.rgw.control,4 master.rgw.data.root,5 master.rgw.gc,
>> > 6 master.rgw.log,7 master.rgw.intent-log,8 master.rgw.usage,
>> > 9 master.rgw.users.keys,10 master.rgw.users.email,11 master.rgw.users.swift,
>> > 12 master.rgw.users.uid,13 master.rgw.buckets.index,14 master.rgw.buckets.data,
>> > 15 master.rgw.meta,16 master.rgw.buckets.non-ec,22 rbd,
>> > 23 cephfs_metadata,24 cephfs_data,
>> >
>> > on this cluster I run cephfs, which is empty atm, and a radosgw service.
>> > How can I clean this?
>>
>> Stop your MDS daemons
>> Run "ceph mds fail <id>" for each MDS daemon
>> Use "ceph fs rm <your fs name>"
>> Then you can delete the pools.
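>>
>> A rough sketch of that sequence, assuming a single filesystem and the
>> pool names from your lspools output (the <placeholders> and exact
>> safety flags may differ on your release):
>>
>>   systemctl stop ceph-mds@<name>     # on each MDS host
>>   ceph mds fail 0                    # repeat for each rank / standby
>>   ceph fs rm <your fs name> --yes-i-really-mean-it
>>   ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it
>>   ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it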
>>
>> John
>>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
