Hey Laura, Greg, all,
On 31/10/2022 17:15, Gregory Farnum wrote:
>>> If you don't mind me asking, Laura, have those issues regarding the
>>> testing lab been resolved yet?
>> There are currently a lot of folks working to fix the testing lab issues.
>> Essentially, disk corruption affected our ability to reach quay.ceph.io.
>> We've made progress this morning, but we are still working to understand
>> the root cause of the corruption. We expect to re-deploy affected services
>> soon so we can resume testing for v16.2.11.
> We got a note about this today, so I wanted to clarify:
>
> For Reasons, the sepia lab we run teuthology in currently uses a Red
> Hat Enterprise Virtualization stack — meaning, mostly KVM with a lot
> of fancy orchestration all packaged up, backed by Gluster. (Yes,
> really — a full Ceph integration was never built, and at one point this
> was deemed the most straightforward solution compared to running
> all-up OpenStack backed by Ceph, which would have been the available
> alternative.) The disk images stored in Gluster started reporting
> corruption last week (though Gluster was claiming to be healthy), and
> with David's departure and his backup on vacation it took a while for
> the remaining team members to figure out what was going on and
> identify strategies to resolve or work around it.
>
> The relevant people have figured out a lot more of what was going on,
> and Adam (David's backup) is back now, so we're expecting things to
> resolve more quickly at this point. And indeed the team is looking at
> other options for providing this infrastructure going forward. 😄
>
> -Greg
May I kindly ask for an update on how things are progressing? Mostly I
am interested in the (persisting) implications for testing new point
releases (e.g. 16.2.11), which are accumulating more and more bugfixes.
Thanks a bunch!
Christian
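
P.S. In case it helps anyone bitten by something similar: below is a
minimal sketch of how one might cross-check Gluster's self-reported heal
state against the actual consistency of qcow2 images stored on the
volume, using the stock gluster and qemu-img CLIs. The volume name and
mount path are made up for illustration, not the sepia lab's real layout.

#!/usr/bin/env python3
# Minimal sketch: compare what Gluster thinks needs healing with what
# qemu-img finds when it checks each image. Assumes the "gluster" and
# "qemu-img" binaries are on PATH; the volume name and mount point
# below are purely illustrative.
import subprocess
from pathlib import Path

VOLUME = "vmstore"                # hypothetical Gluster volume name
IMAGE_DIR = Path("/mnt/vmstore")  # hypothetical FUSE mount of that volume

def heal_info(volume: str) -> str:
    # "gluster volume heal <VOL> info" lists entries Gluster believes
    # still need healing; an empty list is what "healthy" should mean.
    out = subprocess.run(["gluster", "volume", "heal", volume, "info"],
                         capture_output=True, text=True, check=True)
    return out.stdout

def image_is_clean(path: Path) -> bool:
    # "qemu-img check" exits 0 only when the image has no errors and no
    # leaked clusters, so any non-zero code flags a suspect image.
    result = subprocess.run(["qemu-img", "check", str(path)],
                            capture_output=True, text=True)
    return result.returncode == 0

if __name__ == "__main__":
    print(heal_info(VOLUME))
    for img in sorted(IMAGE_DIR.glob("*.qcow2")):
        print(("OK      " if image_is_clean(img) else "SUSPECT ") + str(img))

The point is just that the heal report and per-image checks are
independent signals; when they disagree, as they apparently did here,
the storage layer deserves a closer look.
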
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx