Re: [ceph-users] stopped backfilling process

I hope this helps.

crush: https://www.dropbox.com/s/inrmq3t40om26vf/crush.txt
ceph osd dump: https://www.dropbox.com/s/jsbt7iypyfnnbqm/ceph_osd_dump.txt
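The stuck states quoted below (active+degraded, active+remapped) can also be pulled out of a PG listing programmatically rather than by eye. A minimal Python sketch, assuming an illustrative record shape similar to what "ceph pg dump --format=json" reports; the sample records and field names here are hypothetical, not taken from this cluster:

```python
# Sketch: filter PG records by state to find the ones that stopped
# backfilling. The sample data below is illustrative only.
sample_pgs = [
    {"pgid": "11.39", "state": "active+degraded"},
    {"pgid": "16.2172", "state": "active+remapped"},
    {"pgid": "0.1", "state": "active+clean"},
]

def stuck_pgs(pgs, bad_states=("degraded", "remapped")):
    """Return pgids whose state string contains any of the given substrings."""
    return [p["pgid"] for p in pgs
            if any(s in p["state"] for s in bad_states)]

print(stuck_pgs(sample_pgs))  # ['11.39', '16.2172']
```

Feeding the real dump through a filter like this makes it easy to count how many PGs are stuck per state and per pool prefix.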

--
Regards
Dominik

2013/11/6 yy-nm <yxdyourself@xxxxxxxxx>:
> On 2013/11/5 22:02, Dominik Mostowiec wrote:
>>
>> Hi,
>> After removing OSDs (ceph osd out X) from one server (11 OSDs), Ceph
>> started the data migration process.
>> It stalled at:
>> 32424 pgs: 30635 active+clean, 191 active+remapped, 1596
>> active+degraded, 2 active+clean+scrubbing;
>> degraded (1.718%)
>>
>> All osd with reweight==1 are UP.
>>
>> ceph -v
>> ceph version 0.56.7 (14f23ab86b0058a8651895b3dc972a29459f3a33)
>>
>> health details:
>> https://www.dropbox.com/s/149zvee2ump1418/health_details.txt
>>
>> pg active+degraded query:
>> https://www.dropbox.com/s/46emswxd7s8xce1/pg_11.39_query.txt
>> pg active+remapped query:
>> https://www.dropbox.com/s/wij4uqh8qoz60fd/pg_16.2172_query.txt
>>
>> Please help - how can we fix it?
>>
> Can you show your decoded crush map, and the output of "ceph osd dump"?
>
> ---
> This email contains no viruses or malware because avast! Antivirus protection is active.
> http://www.avast.com
>



-- 
Regards
Dominik