Re: PSA: Long Standing Debian/Ubuntu build performance issue (fixed, backports in progress)

Holy! I have no questions, just wanted to say thanks for emailing this. As
much as it sucks to know that's been an issue, I really appreciate you
sharing the information here.

We've got our fair share of Ubuntu clusters, so if there's a way to validate
I'd love to know, but it also sounds like they're pretty much guaranteed to
have the issue, so maybe there's no need for that hahahaha.
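
For what it's worth, here's roughly what we'd poke at on our end in the
meantime, purely as a sketch: dump the printable strings from the ceph-osd
binary (the check Mark mentions below) and eyeball the RocksDB-related ones.
The binary path and the "rocksdb" filter are just assumptions on my part,
and per the original mail there's no confirmed string yet that identifies a
mis-built RocksDB:

#!/usr/bin/env python3
# Rough sketch only -- per Mark's mail there is no confirmed way to detect
# the mis-built RocksDB yet, so the "rocksdb" filter is just a placeholder.
import subprocess

OSD_BINARY = "/usr/bin/ceph-osd"  # assumed default path for the Debian/Ubuntu packages

# Dump the printable strings from the OSD executable.
result = subprocess.run(["strings", OSD_BINARY],
                        capture_output=True, text=True, check=True)

# Surface RocksDB-related strings for manual inspection.
hits = [line for line in result.stdout.splitlines() if "rocksdb" in line.lower()]
print("\n".join(hits[:50]))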

If there's anything we can provide that would be of assistance, let me know
and I'll see what we can do!

Thanks to everyone involved who's doing the hard work to get this resolved!

Regards,

Bailey

> -----Original Message-----
> From: Mark Nelson <mark.nelson@xxxxxxxxx>
> Sent: February 8, 2024 2:05 PM
> To: ceph-users@xxxxxxx; dev@xxxxxxx
> Subject:  PSA: Long Standing Debian/Ubuntu build performance
> issue (fixed, backports in progress)
> 
> Hi Folks,
> 
> Recently we discovered a flaw in how the upstream Ubuntu and Debian
> builds of Ceph compile RocksDB.  It causes a variety of performance issues
> including slower than expected write performance, 3X longer compaction
> times, and significantly higher than expected CPU utilization when RocksDB is
> heavily utilized.  The issue has now been fixed in main.
> Igor Fedotov, however, observed during the performance meeting today
> that there were no backports for the fix in place.  He also rightly pointed out
> that it would be helpful to make an announcement about the issue given the
> severity for the affected users. I wanted to give a bit more background and
> make sure people are aware and understand what's going on.
> 
> 1) Who's affected?
> 
> Anyone running an upstream Ubuntu/Debian build of Ceph from the last
> several years.  External builds from Canonical and Gentoo suffered from this
> issue as well, but were fixed independently.
> 
> 2) How can you check?
> 
> There's no easy way to tell at the moment.  We are investigating if running
> "strings" on the OSD executable may provide a clue.  For now, assume that if
> you are using our Debian/Ubuntu builds in a non-container configuration you
> are affected.  Proxmox for instance was affected prior to adopting the fix.
> 
> 3) Are Cephadm deployments affected?
> 
> Not as far as we know.  Ceph container builds are compiled slightly
> differently from stand-alone Debian builds.  They do not appear to suffer
> from the bug.
> 
> 4) What versions of Ceph will get the fix?
> 
> Casey Bodley kindly offered to backport the fix to both Reef and Quincy.
> He also verified that the fix builds properly with Pacific.  We now have 3
> separate backport PRs for the releases here:
> 
> https://github.com/ceph/ceph/pull/55500
> https://github.com/ceph/ceph/pull/55501
> https://github.com/ceph/ceph/pull/55502
> 
> 
> Please feel free to reply if you have any questions!
> 
> Thanks,
> Mark
> 
> --
> Best Regards,
> Mark Nelson
> Head of Research and Development
> 
> Clyso GmbH
> p: +49 89 21552391 12 | a: Minnesota, USA
> w: https://clyso.com | e: mark.nelson@xxxxxxxxx
> 
> We are hiring: https://www.clyso.com/jobs/
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


