On Thu, Jul 21, 2022 at 11:15:04AM +0200, Amir Goldstein wrote:
> Hi Luis,
>
> I have been on vacation and partly still on vacation.
> Quick question.
>
> Soon I will want to update the shared xfstests repo - it has not been
> updated since v2022.07.10

That's slightly over a month.

> I am still waiting on the SGID tests [1] to land, which may take a few more
> weeks, because I have a bunch of 5.10 backports related to SGID fixes
> that I would like to validate with those tests.

Neat, good to know.

> My question is, at the moment, who are the active kdevops users that you
> know of that will be affected by an update like that?

SUSE has its own git subtree to manage its own delta and set of changes,
so typically updates don't affect that tree, but just in case you can Cc
Anthony. I know Pankaj also uses it for his own ZNS development. And then
there are others I am not sure of, but it's all good; I think it's fair to
set the expectation that we update maybe once or twice a month. Typically
I've found there may be small snafu bugs in a release and we have to update
soon to fix those issues, or I just fix them myself and carry the fixes in
our own tree until they land upstream. That is why we have our own tree [0]
for it: to give us the flexibility to ensure we can reach stability.

If you look, fstests has no release tags. Long term I think it would be
wonderful if we strive for that, perhaps in stride with / linked to the
latest kernel release rc tag at the time, but what would a release tag
mean? Obviously some sort of stability, but how can we vet for this?

[0] https://github.com/linux-kdevops/fstests

> Updating the kdevops clone of xfstests repo is going to have an immediate
> side effect of "destabilizing" all the established baselines.

Sure.

> I will announce it on fstests lists, but wanted to give a heads up
> and collaborate the update with the active kdevops users first.

You can also just use the Discord server.

  Luis

> Thanks,
> Amir.
>
> [1] https://lore.kernel.org/fstests/bc3f8e56-d5fa-bee2-741f-d2950ca6e304@xxxxxxxxxxx/