Problems during first install

This is pretty straightforward to fix. CRUSH defaults to requiring replicas
on OSDs on separate hosts, so placement groups cannot go active on a single
node. If you are setting up a single-node cluster, change that setting:

osd crush chooseleaf type = 0

Add that to your ceph.conf file and restart your cluster.
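
For reference, a minimal sketch of a ceph.conf [global] section with that
line in place (the fsid placeholder is illustrative; the mon name and address
are the ones from your output below — use your own values):

[global]
fsid = <your cluster fsid>
mon initial members = ceph-mon1
mon host = 10.28.28.71
# 0 = osd: let CRUSH place replicas on OSDs that share a host.
# The default is 1 = host, which requires replicas on separate hosts.
osd crush chooseleaf type = 0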




On Mon, Aug 4, 2014 at 5:51 AM, Pratik Rupala <pratik.rupala at calsoftinc.com>
wrote:

>  Hi,
>
> You mentioned that you have 3 hosts which are VMs. Are you using simple
> directories as OSDs or virtual disks as OSDs?
>
> I had the same problem a few days back, where the OSDs could not provide
> enough space for the cluster.
>
> If you are using virtual disks, try increasing their size. If you are using
> directories as OSDs, check whether you have enough space on the root device
> by running df -h on each OSD node.
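>
> For example, assuming the default OSD data path (substitute your own OSD
> IDs and paths):
>
> $ df -h /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-1
>
> The Avail column shows the free space each OSD can actually use. Note that
> very small disks can also end up with a CRUSH weight of (almost) 0, which
> prevents PGs from being placed on them.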
>
> Regards,
> Pratik
>
>
> On 8/4/2014 4:11 PM, Tijn Buijs wrote:
>
> Hi Everybody,
>
> My idea was that maybe I was just impatient, so I left my Ceph
> cluster running over the weekend. So from Friday 15:00 until now (it is
> Monday morning 11:30 here) it kept on running. And it didn't help :).
> It still needs to create 192 PGs.
> I've reinstalled my entire cluster a few times now. I switched over from
> CentOS 6.5 to Ubuntu 14.04.1 LTS and back to CentOS again, and every time I
> get exactly the same results: the PGs end up in the incomplete, stuck
> inactive, stuck unclean state. What am I doing wrong? :)
>
> For the moment I'm running with 6 OSDs evenly divided over 3 hosts (so
> each host has 2 OSDs). I've only got 1 monitor configured in my current
> cluster; I hit another problem when trying to add monitors 2 and 3 again,
> and to avoid complicating things with multiple problems at once I've
> switched back to a single monitor. The cluster should work that way, right?
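>
> (To rule out the monitor itself, I assume something like this should
> confirm it has quorum:
>
> $ ceph mon stat
> $ ceph quorum_status
>
> With a single monitor it should report quorum with just ceph-mon1.)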
>
> To make things clear for everybody, here is the output of ceph health and
> ceph -s:
> $ ceph health
> HEALTH_WARN 192 pgs incomplete; 192 pgs stuck inactive; 192 pgs stuck
> unclean
> $ ceph -s
>     cluster 43d5f48b-d034-4f50-bec8-5c4f3ad8276f
>      health HEALTH_WARN 192 pgs incomplete; 192 pgs stuck inactive; 192
> pgs stuck unclean
>      monmap e1: 1 mons at {ceph-mon1=10.28.28.71:6789/0}, election epoch
> 1, quorum 0 ceph-mon1
>      osdmap e20: 6 osds: 6 up, 6 in
>       pgmap v40: 192 pgs, 3 pools, 0 bytes data, 0 objects
>             197 MB used, 30456 MB / 30653 MB avail
>                  192 incomplete
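>
> In case it helps, I assume the next debugging step would be to ask Ceph
> which PGs are stuck and to query one of them (replace <pgid> with an actual
> id from the dump):
>
> $ ceph pg dump_stuck inactive
> $ ceph pg dump_stuck unclean
> $ ceph pg <pgid> query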
>
> I hope somebody has an idea for me to try :).
>
> Met vriendelijke groet/With kind regards,
>
> Tijn Buijs
>
> tijn at cloud.nl | T. 0800-CLOUDNL / +31 (0)162 820 000 | F. +31 (0)162 820 001
> Cloud.nl B.V. | Minervum 7092D | 4817 ZK Breda | www.cloud.nl
> On 31/07/14 17:19, Alfredo Deza wrote:
>
>
>
>
> On Thu, Jul 31, 2014 at 10:36 AM, Tijn Buijs <tijn at cloud.nl> wrote:
>
>>  Hello everybody,
>>
>> At cloud.nl we are going to use Ceph, so I figure it is a good idea to get
>> some hands-on experience with it before I have to work with it :). So I'm
>> installing a test cluster in a few VirtualBox machines on my iMac, which
>> runs OS X 10.9.4 of course. I know I will get lousy performance, but
>> that's not the objective here. The objective is to get some experience with
>> Ceph, to see how it works.
>>
>> But I hit an issue during the initial setup of the cluster. When I'm done
>> installing everything, following the how-tos on ceph.com (the
>> preflight <http://ceph.com/docs/master/start/quick-start-preflight/> and the
>> Storage Cluster quick start
>> <http://ceph.com/docs/master/start/quick-ceph-deploy/>), I need to run
>> ceph health to check that everything is running perfectly. But it isn't;
>> I get the following output:
>> ceph at ceph-admin:~$ ceph health
>> HEALTH_WARN 192 pgs incomplete; 192 pgs stuck inactive; 192 pgs stuck
>> unclean
>>
>> And it stays that way; it never changes. So everything is really stuck,
>> but I don't know what exactly is stuck or how I can fix it.
>> Some more info about my cluster:
>> ceph at ceph-admin:~$ ceph -s
>>     cluster d31586a5-6dd6-454e-8835-0d6d9e204612
>>      health HEALTH_WARN 192 pgs incomplete; 192 pgs stuck inactive; 192
>> pgs stuck unclean
>>      monmap e3: 3 mons at {ceph-mon1=
>> 10.28.28.18:6789/0,ceph-mon2=10.28.28.31:6789/0,ceph-mon3=10.28.28.50:6789/0},
>> election epoch 4, quorum 0,1,2 ceph-mon1,ceph-mon2,ceph-mon3
>>      osdmap e25: 6 osds: 6 up, 6 in
>>       pgmap v56: 192 pgs, 3 pools, 0 bytes data, 0 objects
>>             197 MB used, 30455 MB / 30653 MB avail
>>                  192 creating+incomplete
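>>
>> (I assume the CRUSH layout itself can be checked with:
>>
>> $ ceph osd tree
>>
>> which should list the 6 OSDs under their 3 hosts, each up and with a
>> non-zero weight.)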
>>
>> I'm running on Ubuntu 14.04.1 LTS Server. I did try to get it running on
>> CentOS 6.5 too (CentOS 6.5 is my actual distro of choice, but Ceph has more
>> affinity with Ubuntu, so I tried that too), but I got exactly the same
>> results.
>>
>> But because this is my first install of Ceph I don't know the exact debug
>> commands yet. I'm eager to get this working, but I just don't know
>> how :). Any help is appreciated :).
>>
>
>  Did you use ceph-deploy? (the link to the quick start guide makes me
> think you did)
>
>  If that was the case, did you get any warnings/errors at all?
>
>  ceph-deploy is very verbose because some of these things are hard to
> debug. Mind sharing that output?
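>
> If the terminal output is gone: recent ceph-deploy versions also write a
> log file into the directory you ran them from; I believe it is named
> ceph-deploy-ceph.log:
>
> $ less ceph-deploy-ceph.log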
>
>>
>> Met vriendelijke groet/With kind regards,
>>
>> Tijn Buijs
>>
>> tijn at cloud.nl | T. 0800-CLOUDNL / +31 (0)162 820 000 | F. +31 (0)162 820 001
>> Cloud.nl B.V. | Minervum 7092D | 4817 ZK Breda | www.cloud.nl
>>
>
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>


-- 
John Wilkins
Senior Technical Writer
Inktank
john.wilkins at inktank.com
(415) 425-9599
http://inktank.com

