Re: PGs issue

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> Bogdan SOLGA
> Sent: 19 March 2015 20:51
> To: ceph-users@xxxxxxxxxxxxxx
> Subject:  PGs issue
> 
> Hello, everyone!
> I have created a Ceph cluster (v0.87.1-1) using the info from the 'Quick
> deploy' page, with the following setup:
> • 1 x admin / deploy node;
> • 3 x OSD and MON nodes;
>     o each OSD node has 2 x 8 GB HDDs;
> The setup was made using Virtual Box images, on Ubuntu 14.04.2.
> After performing all the steps, the 'ceph health' output lists the cluster in the
> HEALTH_WARN state, with the following details:
> HEALTH_WARN 64 pgs degraded; 64 pgs stuck degraded; 64 pgs stuck unclean; 64 pgs stuck undersized; 64 pgs undersized; too few pgs per osd (10 < min 20)
> The output of 'ceph -s':
>     cluster b483bc59-c95e-44b1-8f8d-86d3feffcfab
>      health HEALTH_WARN 64 pgs degraded; 64 pgs stuck degraded; 64 pgs stuck unclean; 64 pgs stuck undersized; 64 pgs undersized; too few pgs per osd (10 < min 20)
>      monmap e1: 3 mons at {osd-003=192.168.122.23:6789/0,osd-002=192.168.122.22:6789/0,osd-001=192.168.122.21:6789/0}, election epoch 6, quorum 0,1,2 osd-001,osd-002,osd-003
>      osdmap e20: 6 osds: 6 up, 6 in
>       pgmap v36: 64 pgs, 1 pools, 0 bytes data, 0 objects
>             199 MB used, 18166 MB / 18365 MB avail
>                   64 active+undersized+degraded
> 
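
All 64 PGs showing active+undersized+degraded means each placement group currently has fewer replicas than the pool's size asks for, and the "too few pgs per osd" part is simply 64 PGs / 6 OSDs ≈ 10, under the suggested minimum of 20. To see exactly which PGs are stuck and what their acting sets look like, something along these lines should work with the standard ceph CLI:

    ceph health detail            # lists each stuck/undersized PG
    ceph pg dump_stuck unclean    # shows the acting OSDs per stuck PG
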
> I have tried to increase the pg_num and pgp_num to 512, as advised here,
> but Ceph refused to do that, with the following error:
> Error E2BIG: specified pg_num 512 is too large (creating 384 new PGs on ~6 OSDs exceeds per-OSD max of 32)
> 
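
That E2BIG check stops you from jumping pg_num too far in one go; raising it in smaller steps, and raising pgp_num to match after each step, avoids the error. Assuming the single pool is the default 'rbd' pool (a guess, since the pool isn't named above), something like:

    ceph osd pool set rbd pg_num 128
    ceph osd pool set rbd pgp_num 128
    # wait for the new PGs to finish creating, then repeat
    ceph osd pool set rbd pg_num 256
    ceph osd pool set rbd pgp_num 256
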
> After changing the pg*_num to 256, as advised here, the warning was
> changed to:
> health HEALTH_WARN 256 pgs degraded; 256 pgs stuck unclean; 256 pgs undersized
> 
> What is the issue behind these warnings, and what do I need to do to fix it?

It's basically telling you that your currently available OSDs don't meet the requirements of the number of replicas you have requested.

What replica size have you configured for that pool?
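
If you haven't set one explicitly it will be whatever osd_pool_default_size is (typically 3 on recent releases). A quick way to check, again assuming the default 'rbd' pool:

    ceph osd dump | grep 'replicated size'    # shows the size of every pool
    ceph osd pool get rbd size

If the size turns out to be higher than what your CRUSH map can actually place across your hosts, you can either add OSD hosts or lower the replica count, e.g.:

    ceph osd pool set rbd size 2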

> 
> I'm a newcomer in the Ceph world, so please don't shoot me if this issue has
> been answered / discussed countless times before :) I have searched the
> web and the mailing list for the answers, but I couldn't find a valid solution.
> Any help is highly appreciated. Thank you!
> Regards,
> Bogdan

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com