Re: why are there "degraded" PGs when adding OSDs?

If it wouldn't be too much trouble, I'd actually like the binary osdmap as well (it contains the crushmap, but also a bunch of other stuff).  There is a command that lets you get old osdmaps from the mon by epoch as long as they haven't been trimmed.
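For reference, something like the following should pull a specific epoch from the mon and get the crushmap back out of it (the epoch number and filenames here are just placeholders):

    # fetch the osdmap for a particular epoch from the mon
    ceph osd getmap 1234 -o osdmap.1234
    # extract and decompile the crushmap embedded in it
    osdmaptool osdmap.1234 --export-crush crushmap.1234
    crushtool -d crushmap.1234 -o crushmap.1234.txt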
-Sam

----- Original Message -----
From: "Chad William Seys" <cwseys@xxxxxxxxxxxxxxxx>
To: "Samuel Just" <sjust@xxxxxxxxxx>
Cc: "ceph-users" <ceph-users@xxxxxxxx>
Sent: Tuesday, July 28, 2015 7:40:31 AM
Subject: Re:  why are there "degraded" PGs when adding OSDs?

Hi Sam,

Trying again today with crush tunables set to firefly.  Degraded PGs peaked at
around 46.8%.
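
For reference, the tunables profile was switched beforehand, presumably via the
usual command, i.e. something like:

    # switch the cluster to the firefly crush tunables profile
    # (note: this change itself triggers data movement / remapping)
    ceph osd crush tunables firefly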

I've attached the ceph pg dump and the crushmap (same as the osdmap) from before
and after the OSD additions.  Three OSDs were added on host osd03, adding 5TB to
about 17TB for a total of around 22TB.  5TB / 22TB ≈ 22.7%.  Is it expected for
46.8% of the PGs to be degraded after adding roughly 22% of the storage?
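
In case it helps with reproducing them, the attached dumps are of the kind
produced by commands along these lines (filenames are just examples):

    # text dump of all PG states
    ceph pg dump > pg_dump_before.txt
    # grab and decompile the current crushmap
    ceph osd getcrushmap -o crushmap_before.bin
    crushtool -d crushmap_before.bin -o crushmap_before.txt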

Another odd thing: the kernel RBD clients froze up after the OSDs were added,
but worked fine after a reboot (Debian kernel 3.16.7).

Thanks for checking!
C.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


