Re: Reduced data availability: 4 pgs inactive, 4 pgs incomplete

Unfortunately, I don’t see that setting documented anywhere other than the release notes.  It’s hard to find guidance in that case, but luckily you noted it in your blog post.  I wish I knew what value to set it to.  I did use the deprecated setting after moving to Hammer a while back due to the mis-calculated PGs.  I have now set the new setting, but used 0 as the value, which cleared the error in the status, yet the stuck incomplete PGs persist.  I restarted all daemons, so it should be in full effect.  Interestingly enough, it added the hdd class in the ceph osd tree output...   Anyhow, I know this is a dirty cluster due to this mis-calculation; I would like to fix the cluster if possible (both the stuck/incomplete PGs and the underlying too-many-PGs issue).
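In case it helps anyone following along: a generic way to confirm what a
daemon is actually running with, and to look closer at one of the stuck
PGs, would be something like the following (IDs are placeholders):

   ceph daemon osd.<id> config show | grep pg_per_osd   # run on that OSD's host
   ceph health detail
   ceph pg <pgid> query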

Thanks for the information!

-Brent

-----Original Message-----
From: Jens-U. Mozdzen [mailto:jmozdzen@xxxxxx] 
Sent: Sunday, January 7, 2018 1:23 PM
To: bkennedy@xxxxxxxxxx
Cc: stefan@xxxxxx
Subject: Fwd:  Reduced data availability: 4 pgs inactive, 4 pgs incomplete

Hi Brent,

sorry, the quoting style had me confused, this actually was targeted at your question, I believe.

@Stefan: Sorry for the noise

Regards,
Jens
----- Forwarded message from "Jens-U. Mozdzen" <jmozdzen@xxxxxx> -----
    Date: Sun, 07 Jan 2018 18:18:00 +0000
    From: "Jens-U. Mozdzen" <jmozdzen@xxxxxx>
 Subject: Re:  Reduced data availability: 4 pgs inactive, 4 pgs incomplete
      To: stefan@xxxxxx

Hi Stefan,

I'm in a bit of a hurry, so just a short offline note:

>>> Quoting Brent Kennedy (bkennedy@xxxxxxxxxx):
>>> Unfortunately, this cluster was setup before the calculator was in 
>>> place and when the equation was not well understood.  We have the 
>>> storage space to move the pools and recreate them, which was 
>>> apparently the only way to handle the issue (you are suggesting what 
>>> appears to be a different approach).  I was hoping to avoid doing 
>>> all of this because the migration would be very time consuming.  
>>> There is no way to fix the stuck PGs though?  If I were to expand 
>>> the replication to 3 instances, would that help with the PGs per OSD 
>>> issue any?
>> No! It will make the problem worse because you need PGs to host these 
>> copies. The more replicas, the more PGs you need.
> Guess I am confused here, wouldn't it spread out the existing data to 
> more PGs?  Or are you saying that it couldn't spread out because the 
> PGs are already in use?  Previously it was set to 3 and we reduced it 
> to 2 because of failures.

If you increase the replication size, you'll ask RADOS to store additional copies of every PG on additional OSDs, so the number of PG copies each OSD has to carry will increase...
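As a rough illustration (made-up numbers, not your cluster's): the ratio
the limit applies to is roughly

   PGs per OSD ≈ (sum over pools of pg_num × size) / number of OSDs

so e.g. 4096 PGs at size 2 on 20 OSDs is about 410 per OSD, and going to
size 3 pushes that to about 614, i.e. even further over the limit.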

>>> When you say enforce, do you mean it will block all access to the 
>>> cluster/OSDs?
>> No, [...]

My experience differs: if your cluster already has too many PGs per OSD  
(before upgrading to 12.2.2) and anything PG-per-OSD-related changes  
(e.g. re-distributing PGs when OSDs go down), any access to the  
over-sized OSDs *will block*. It cost me a number of days to figure out  
and was recently discussed by someone else on the ML. Increase the  
corresponding parameter ("mon_max_pg_per_osd") in the global section and  
restart your MONs, MGRs and OSDs (OSDs one by one, if you don't know  
your layout, to avoid data loss). It even made me write a blog entry,  
for future reference:  
http://technik.blogs.nde.ag/2017/12/26/ceph-12-2-2-minor-update-major-trouble/
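
For example (just a sketch, assuming a systemd-managed Luminous node; the
value 400 is only a placeholder, pick something above your actual
PG-per-OSD ratio):

   # /etc/ceph/ceph.conf, on every node
   [global]
   mon_max_pg_per_osd = 400

   # then restart the daemons, OSDs one at a time
   systemctl restart ceph-mon@<mon-id>
   systemctl restart ceph-mgr@<mgr-id>
   systemctl restart ceph-osd@<osd-id>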

If this needs to go into more detail, let's take it back to the  
mailing list; I'll be available again during the upcoming week.

Regards,
Jens

----- End of forwarded message -----


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



