mkcephfs questions

Haomai, 

I attached the logs again. Sorry, I'm just starting to learn Ceph and am not experienced enough to analyze the logs and find the root cause myself. Please advise.



Wei Cao (Buddy)

-----Original Message-----
From: Haomai Wang [mailto:haomaiwang@xxxxxxxxx] 
Sent: Wednesday, April 30, 2014 4:58 PM
To: Cao, Buddy
Cc: ceph-users at lists.ceph.com
Subject: Re: mkcephfs questions

OK, actually I just learned that. It looks OK.

According to the log, many OSDs are repeatedly trying to boot. I think the problem may be on the monitor side. Could you check the monitor node? The ceph-mon.log you provided is blank.
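
Something like the following might help narrow it down (a rough sketch; adjust the mon id, log path, and grep pattern to your setup and debug levels):

    # from any node with the admin keyring
    ceph -s
    ceph quorum_status
    ceph health detail

    # on the monitor node, look at the monitor log directly
    less /var/log/ceph/ceph-mon.0.log

    # rough indicator of OSD boot churn; the exact log string depends on debug levels
    grep -c osd_boot /var/log/ceph/ceph-mon.0.log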

On Wed, Apr 30, 2014 at 3:59 PM, Cao, Buddy <buddy.cao at intel.com> wrote:
> Yes, I set "osd journal size = 0" on purpose; I'd like to use all of the space of the journal device. I think I got the idea from the Ceph website... Yes, I do run "mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/keyring.admin" to create the Ceph cluster, and it succeeded.
>
> Do you think "osd journal size=0" would cause any problems?
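>
> From what I read, that only applies when the journal points at a raw block device or partition; a minimal sketch of the kind of setup I mean (the host and device names below are just placeholders, not my real ceph.conf):
>
>   [osd]
>       ; 0 means use the entire journal device
>       osd journal size = 0
>   [osd.2]
>       host = vsm2
>       ; example dedicated journal partition (placeholder device name)
>       osd journal = /dev/sdb1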
>
>
> Wei Cao (Buddy)
>
> -----Original Message-----
> From: Haomai Wang [mailto:haomaiwang at gmail.com]
> Sent: Wednesday, April 30, 2014 3:48 PM
> To: Cao, Buddy
> Cc: ceph-users at lists.ceph.com
> Subject: Re: mkcephfs questions
>
> I found "osd journal size = 0" in your ceph.conf.
> Do you really run mkcephfs with this? I think it will fail.
>
> On Wed, Apr 30, 2014 at 2:42 PM, Cao, Buddy <buddy.cao at intel.com> wrote:
>> Here you go... I did not see any log entries related to the stuck unclean pgs...
>>
>>
>>
>> Wei Cao (Buddy)
>>
>> -----Original Message-----
>> From: Haomai Wang [mailto:haomaiwang at gmail.com]
>> Sent: Wednesday, April 30, 2014 2:12 PM
>> To: Cao, Buddy
>> Cc: ceph-users at lists.ceph.com
>> Subject: Re: mkcephfs questions
>>
>> Hmm, there should be another problem at play. Maybe more logs could explain it.
>>
>> ceph.log
>> ceph-mon.log
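>>
>> By default (assuming standard log settings) they should be under /var/log/ceph/ on the monitor host:
>>
>>   /var/log/ceph/ceph.log            # cluster log, written on the monitor host
>>   /var/log/ceph/ceph-mon.<id>.log   # per-monitor log, e.g. ceph-mon.0.log
>>   /var/log/ceph/ceph-osd.<id>.log   # per-OSD logs, if needed as well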
>>
>> On Wed, Apr 30, 2014 at 12:06 PM, Cao, Buddy <buddy.cao at intel.com> wrote:
>>> Thanks for your reply, Haomai. What I don't understand is why the number of stuck unclean pgs stays the same after 12 hours. Is that the common behavior or not?
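>>>
>>> In case it helps, this is roughly how I have been checking the stuck pgs (a minimal sketch; the pg id is just a placeholder):
>>>
>>>   ceph health detail             # lists the stuck pgs and the reason
>>>   ceph pg dump_stuck unclean     # dump only the stuck unclean pgs
>>>   ceph pg <pgid> query           # detailed state of a single pg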
>>>
>>>
>>> Wei Cao (Buddy)
>>>
>>> -----Original Message-----
>>> From: Haomai Wang [mailto:haomaiwang at gmail.com]
>>> Sent: Wednesday, April 30, 2014 11:36 AM
>>> To: Cao, Buddy
>>> Cc: ceph-users at lists.ceph.com
>>> Subject: Re: mkcephfs questions
>>>
>>> The result of "ceph -s" should tell you the reason: only 21 OSDs are up, but we need 24 OSDs.
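>>>
>>> A rough way to find and restart them (sysvinit-style commands as used with mkcephfs; the osd id is a placeholder):
>>>
>>>   ceph osd tree | grep down                # identify which osds are marked down
>>>   service ceph start osd.<id>              # run on the host that owns the osd
>>>   less /var/log/ceph/ceph-osd.<id>.log     # check its log if it does not stay up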
>>>
>>> On Wed, Apr 30, 2014 at 11:21 AM, Cao, Buddy <buddy.cao at intel.com> wrote:
>>>> Hi,
>>>>
>>>>
>>>>
>>>> I set up a Ceph cluster through the mkcephfs command. After I enter "ceph -s",
>>>> it always returns 4950 stuck unclean pgs. I tried the same "ceph -s" after
>>>> 12 hrs, and it still returns the same number of unclean pgs; nothing changed.
>>>> Does mkcephfs always have this problem, or did I do something wrong? I
>>>> attached the result of "ceph -s", "ceph osd tree", and the ceph.conf I
>>>> have; please kindly help.
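>>>>
>>>> For reference, the rough sequence I used was (a sketch from memory; paths are the defaults):
>>>>
>>>>   mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/keyring.admin
>>>>   service ceph -a start        # start mon/mds/osd daemons on all hosts
>>>>   ceph -s                      # then check the cluster status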
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> [root@ceph]# ceph -s
>>>>     cluster 99fd4ff8-0fb8-47b9-8179-fefbba1c2503
>>>>      health HEALTH_WARN 4950 pgs degraded; 4950 pgs stuck unclean; recovery 21/42 objects degraded (50.000%); 3/24 in osds are down; clock skew detected on mon.1, mon.2
>>>>      monmap e1: 3 mons at {0=192.168.0.2:6789/0,1=192.168.0.3:6789/0,2=192.168.0.4:6789/0}, election epoch 6, quorum 0,1,2 0,1,2
>>>>      mdsmap e4: 1/1/1 up {0=0=up:active}
>>>>      osdmap e6019: 24 osds: 21 up, 24 in
>>>>       pgmap v16445: 4950 pgs, 6 pools, 9470 bytes data, 21 objects
>>>>             4900 MB used, 93118 MB / 98019 MB avail
>>>>             21/42 objects degraded (50.000%)
>>>>                 4950 active+degraded
>>>>
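>>>> The clock skew warning on mon.1 and mon.2 looks like a separate issue; I plan to clear it by syncing the clocks on the monitor hosts, roughly like this (assuming ntpd is installed; the NTP server is a placeholder):
>>>>
>>>>   service ntpd stop && ntpdate <ntp-server> && service ntpd start
>>>>   ceph health detail           # the skew warning should clear after a while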
>>>>
>>>>
>>>> [root@ceph]# ceph osd tree   // partial output
>>>> # id    weight  type name       up/down reweight
>>>> -36     25      root vsm
>>>> -31     3.2             storage_group ssd
>>>> -16     3                       zone zone_a_ssd
>>>> -1      1                               host vsm2_ssd_zone_a
>>>> 2       1                                       osd.2   up      1
>>>> -6      1                               host vsm3_ssd_zone_a
>>>> 10      1                                       osd.10  up      1
>>>> -11     1                               host vsm4_ssd_zone_a
>>>> 18      1                                       osd.18  up      1
>>>> -21     0.09999                 zone zone_c_ssd
>>>> -26     0.09999                 zone zone_b_ssd
>>>> -33     3.2             storage_group sata
>>>> -18     3                       zone zone_a_sata
>>>> -3      1                               host vsm2_sata_zone_a
>>>> 1       1                                       osd.1   up      1
>>>> -8      1                               host vsm3_sata_zone_a
>>>> 9       1                                       osd.9   up      1
>>>> -13     1                               host vsm4_sata_zone_a
>>>> 17      1                                       osd.17  up      1
>>>> -23     0.09999                 zone zone_c_sata
>>>> -28     0.09999                 zone zone_b_sata
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> Wei Cao (Buddy)
>>>>
>>>>
>>>>
>>>>
>>>> _______________________________________________
>>>> ceph-users mailing list
>>>> ceph-users at lists.ceph.com
>>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>>
>>>
>>>
>>>
>>> --
>>> Best Regards,
>>>
>>> Wheat
>>
>>
>>
>> --
>> Best Regards,
>>
>> Wheat
>
>
>
> --
> Best Regards,
>
> Wheat



--
Best Regards,

Wheat

