Re: v0.90 released

>> After apt-get update and upgrade I still see the 0.87 release... any hint? 

What repository do you have in your sources.list?
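
For example, something like this will show which repo and which version apt is seeing (the URL and the trusty codename below are only placeholders, adjust them to your release and distro):

~# grep ceph /etc/apt/sources.list /etc/apt/sources.list.d/*.list
~# apt-cache policy ceph

If the line there still points at a stable series, e.g.

deb http://ceph.com/debian-giant/ trusty main

then apt will keep giving you 0.87. As far as I know the 0.90 development release is only published in the debian-testing repository, so you would need to point that line at debian-testing and run apt-get update again.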



----- Original Message -----
From: "Zeeshan Ali Shah" <zashah@xxxxxxxxxx>
To: "Florent MONTHEL" <fmonthel@xxxxxxxxxxxxx>
Cc: "Sage Weil" <sweil@xxxxxxxxxx>, "ceph-users" <ceph-users@xxxxxxxx>, "René Gallati" <rene@xxxxxxxxxxx>
Sent: Saturday, 27 December 2014 02:11:53
Subject: Re: v0.90 released

After apt-get update and upgrade I still see the 0.87 release... any hint? 
/Zee 

On Fri, Dec 26, 2014 at 9:58 AM, Florent MONTHEL <fmonthel@xxxxxxxxxxxxx> wrote: 


Hi Sage 

To be sure I understand correctly: if I have reached the max number of PGs per OSD with, for example, 4 pools, and I have to create 2 new pools without adding OSDs, then I need to migrate the old pools to pools with fewer PGs, right? 
Thanks 

Sent from my iPhone 

> On 23 Dec 2014, at 15:39, Sage Weil <sweil@xxxxxxxxxx> wrote: 
> 
>> On Tue, 23 Dec 2014, René Gallati wrote: 
>> Hello, 
>> 
>> So I upgraded my cluster from 0.89 to 0.90 and now I get: 
>> 
>> ~# ceph health 
>> HEALTH_WARN too many PGs per OSD (864 > max 300) 
>> 
>> That is a new one. I have had too few before, but never too many. Is this a 
>> problem that needs attention, or can it be ignored? Or is there now a command 
>> to shrink the PG count? 
> 
> It's a new warning. 
> 
> You can't reduce the PG count without creating new (smaller) pools 
> and migrating data. You can ignore the message, though, and make it go 
> away by adjusting the 'mon pg warn max per osd' option (defaults to 300). 
> Having too many PGs increases the memory utilization and can slow things 
> down when adapting to a failure, but certainly isn't fatal. 
> 
>> The message did not appear before. I currently have 32 OSDs over 8 hosts and 9 
>> pools, each with 1024 PGs, which was the recommended number according to the 
>> OSD * 100 / replica formula, rounded to the next power of 2. The cluster was 
>> increased by 4 OSDs (an 8th host) only days before. That is to say, it was at 
>> 28 OSDs / 7 hosts / 9 pools, and after extending it with another host, ceph 
>> 0.89 did not complain. 
>> 
>> Using the formula again I'd actually need to go to 2048 PGs per pool, but ceph 
>> is telling me to reduce the PG count now? 
> 
> The guidance in the docs is (was?) a bit confusing. You need to take the 
> *total* number of PGs and see how many of those per OSD there are, 
> not create as many equally-sized pools as you want. There have been 
> several attempts to clarify the language to avoid this misunderstanding 
> (you're definitely not the first). If it's still unclear, suggestions 
> welcome! 
> 
> sage 
> 
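
To put rough numbers on the above (assuming 3 replicas, which is what makes 1024 PGs per pool drop out of the formula): 9 pools x 1024 PGs x 3 copies = 27648 PG copies spread over 32 OSDs = 864 PGs per OSD, exactly the figure in the warning. Sizing from the total instead gives roughly 32 * 100 / 3 ~= 1067 PGs for the whole cluster, i.e. on the order of 128 PGs per pool if the 9 pools hold comparable amounts of data.

If you only want to silence the warning, the option Sage mentions can be raised in ceph.conf on the monitors (1000 is an arbitrary value here):

[mon]
    mon pg warn max per osd = 1000

And a rough sketch of the new-smaller-pool route Sage describes (pool names are placeholders and the commands are from memory, so try it on a throwaway pool first):

~# ceph osd pool create mypool-new 128 128
~# rados cppool mypool mypool-new
(then repoint clients at the new pool, or rename it over the old one, and delete the old pool)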






-- 

Regards 
Zeeshan Ali Shah 
System Administrator - PDC HPC 
PhD researcher (IT security) 
Kungliga Tekniska Hogskolan 
+46 8 790 9115 
http://www.pdc.kth.se/members/zashah 


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



