Can an OSD affect performance on pool X when slowing/blocking requests on PGs from pool Y?

Hi, we have a 7-node Ubuntu Ceph Hammer cluster (78 OSDs to be exact).
This weekend we experienced a huge outage on our customers' VMs
(located on pool CUSTOMERS, replica size 3) when lots of OSDs started
reporting slow/blocked requests on PGs from pool PRIVATE (replica size 1).
Basically, every blocked PG had just one OSD in its acting set, but all
the customers on the other pool saw their VMs almost frozen.
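
To illustrate what I mean (the pool ids below are just examples, not
our real ones), the blocked PGs could be matched to the PRIVATE pool by
their pg id prefix, which is the pool id:

    ceph osd lspools                                # pool id -> pool name
    ceph health detail | grep -i 'slow\|blocked'    # slow/blocked requests and the OSDs involved
    ceph pg dump pgs_brief | grep '^4\.'            # all PGs of pool id 4 (say, PRIVATE) and their acting sets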

While doing basic troubleshooting, like setting noout and then
bringing down the OSD that was slowing/blocking the most, another OSD
immediately started slowing/blocking IOPS on PGs from the same PRIVATE
pool, so we rolled back that change and instead started to move data
around with the same logic (reweighting down those OSDs), with exactly
the same result.
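
For context, those steps were roughly of this form (osd.12 is just a
placeholder; the stop command is the upstart one on our Ubuntu nodes):

    ceph osd set noout           # don't rebalance while an OSD is down
    stop ceph-osd id=12          # bring down the OSD blocking the most
    ceph osd reweight 12 0.8     # later attempt: shift data away from it instead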

So we made a decision: we decided to delete the pool whose PGs were
always the ones being slowed/blocked, regardless of the OSD.

Not even 10 seconds after the pool deletion, not only were there no
more degraded PGs, but ALL the slow IOPS disappeared for good, and
performance of hundreds of VMs came back to normal immediately.
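
For the record, removing the pool was just the standard deletion,
something like:

    ceph osd pool delete PRIVATE PRIVATE --yes-i-really-really-mean-it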

I must say I was kind of scared to see that happen, basically because
only ONE pool's PGs were ever slowed, yet the performance hit landed on
the other pool. Aren't the PGs that exist in one pool completely
separate from (not shared with) the other pool's?
If my assertion is true, why did OSDs blocking IOPS on one pool's PGs
slow down all the other PGs from other pools?
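
As far as I understand, PGs are strictly per-pool (the pool id is the
prefix of the pg id), even though the same physical OSDs end up in the
acting sets of both pools. Something like the following (assuming
pgs_brief prints the acting primary in its last column, and again using
pool ids 3 and 4 only as examples) makes it easy to see how much the
two OSD lists overlap:

    ceph pg dump pgs_brief | awk '$1 ~ /^3\./ {print $NF}' | sort -un   # OSDs primary for CUSTOMERS PGs
    ceph pg dump pgs_brief | awk '$1 ~ /^4\./ {print $NF}' | sort -un   # OSDs primary for PRIVATE PGs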

Again, I just deleted a pool that had almost no traffic, because its
PGs were blocked and were affecting PGs on another pool, and as soon as
that happened the whole cluster came back to normal (and of course
HEALTH_OK, with no slow requests whatsoever).

Please, can someone help me understand what I'm missing, since this,
as far as my Ceph knowledge goes, makes no sense.

PS: I found someone who, it seems, went through the same thing here:
https://forum.proxmox.com/threads/ceph-osd-failure-causing-proxmox-node-to-crash.20781/
but I still don't understand what happened.

Hoping to get some help from the community.

-- 
Alejandrito

