Re: ceph -w question

Also the ceph osdmap.  (ceph osd getmap -o /tmp/map will put the
osdmap in /tmp/map).
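If it helps to sanity-check the map before posting it, osdmaptool should be
able to print it in plain text (rough sketch; exact flags may differ a bit by
release):

  osdmaptool /tmp/map --print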
-Sam

On Mon, Apr 15, 2013 at 10:09 AM, Samuel Just <sam.just@xxxxxxxxxxx> wrote:
> Can you post the output of ceph osd tree?
> -Sam
>
> On Mon, Apr 15, 2013 at 9:52 AM, Jeppesen, Nelson
> <Nelson.Jeppesen@xxxxxxxxxx> wrote:
>> Thanks for the help, but how do I track down this issue? If data is inaccessible, that's a very bad thing given this is production.
>>
>> # ceph osd dump | grep pool
>> pool 13 '.rgw.buckets' rep size 2 crush_ruleset 0 object_hash rjenkins pg_num 4800 pgp_num 4800 last_change 1198 owner 0
>> pool 14 '.rgw' rep size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 242 owner 18446744073709551615
>> pool 15 '.rgw.gc' rep size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 243 owner 18446744073709551615
>> pool 16 '.rgw.control' rep size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 244 owner 18446744073709551615
>> pool 17 '.users.uid' rep size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 246 owner 0
>> pool 18 '.users.email' rep size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 248 owner 0
>> pool 19 '.users' rep size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 250 owner 0
>> pool 20 '.usage' rep size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 256 owner 18446744073709551615
>> pool 21 '.users.swift' rep size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 1138 owner 0
>>
>> Nelson Jeppesen
>>    Disney Technology Solutions and Services
>>    Phone 206-588-5001
>>
>> -----Original Message-----
>> From: Gregory Farnum [mailto:greg@xxxxxxxxxxx]
>> Sent: Monday, April 15, 2013 9:34 AM
>> To: Jeppesen, Nelson
>> Cc: ceph-users@xxxxxxxxxxxxxx
>> Subject: Re:  ceph -w question
>>
>> "Incomplete" means that there are fewer than the minimum copies of the placement group (by default, half of the requested size, rounded up).
>> In general, rebooting one node shouldn't do that unless you've changed the minimum size on the pool, and it does mean that data in those PGs is inaccessible.
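>>
>> If you want to dig in, something along these lines should help (a sketch;
>> <pool> and <pgid> are placeholders, and exact command names may vary a bit
>> across releases):
>>
>>   ceph osd pool get <pool> min_size   # minimum copies the pool will serve with
>>   ceph pg dump_stuck inactive         # list the stuck/incomplete PGs
>>   ceph pg <pgid> query                # detailed peering state for one PG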
>> -Greg
>> Software Engineer #42 @ http://inktank.com | http://ceph.com
>>
>>
>> On Mon, Apr 15, 2013 at 9:01 AM, Jeppesen, Nelson <Nelson.Jeppesen@xxxxxxxxxx> wrote:
>>> When I reboot any node in my prod environment with no activity I see
>>> incomplete pgs. Is that a concern? Does that mean some data is unavailable?
>>> Thank you.
>>>
>>>
>>>
>>> # ceph -v
>>>
>>> ceph version 0.56.4 (63b0f854d1cef490624de5d6cf9039735c7de5ca)
>>>
>>>
>>>
>>> # ceph -w
>>>
>>> 2013-04-15 08:57:27.712065 mon.0 [INF] pgmap v585220: 4864 pgs: 4443
>>> active+clean, 1 active+degraded, 420 incomplete; 3177 GB data, 6504 GB
>>> used, 38186 GB / 44691 GB avail; 252/8168154 degraded (0.003%)
>>>
>>>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



