On Tue, 14 Feb 2006 14:51:17 +0100, Ralf Corsepius wrote:

> > A packager who's not running Rawhide--or at least one late test
> > release--cannot test the binaries at all prior to FC5 being
> > published.
>
> Let me pull you this tooth - In the vast majority of all cases, you
> don't need a particular OS/version of an OS to test whether a package
> is functional, because most packages' functionality is independent of
> any OS. Therefore testing update/upgrade packages on similar OSes
> (e.g. FC4 vs. FC5) will be enough in many cases.
>
> Also you can't expect a package that is known to work on one
> architecture to work on others, and require all maintainers to test
> them on all architectures - that would be irrational nonsense.

This is all poor theory. In reality, build requirements on FC(n-1)
don't suffice, and packagers cannot do any run-time testing at all,
not even for a single arch.

If you want to discuss the quality of high-level language code which
works on one arch but breaks on another, that is off-topic here. Of
course I do hope that code which works on the packager's primary arch
also builds and works on other archs. But that is just a _hope_. I'm
aware that some "coders" do plenty of ugly things in high-level
programming languages, even things which are doomed to fail on other
archs.

> > > And we have a lot of not that important packages in the tree now,
> > > some of them are rarely updated upstream (take tiobench for
> > > example). We might never notice if that package is orphaned if we
> > > always do a script-mass-rebuild...
> >
> > Yes, a scripted mass-rebuild introduces a false sense of activity.
> > Who skims over the build logs to watch out for failed configure
> > tests which leave out components? Who makes sure the binaries still
> > work? My experience with the last mass-rebuild is bad,
>
> Nevertheless it had been much better than what is being tried now.

Maybe you've got that false impression because, this time,
a) the responsibility to trigger rebuilds is yours,
b) you have not been hit by orphans which rebuilt successfully but are
   broken nevertheless,
c) nor by packages which would not build prior to an upstream upgrade,
   so the package maintainer needed to work on the package anyway, and
d) nor by packages with open bug reports, where the packager simply
   neglected packaging work until FC(n+1) was close enough.

> > btw, since some packagers performed major version upgrades shortly
> > after the rebuilds had been done.
>
> Packagers perform major upgrades when they "happen".
>
> In some cases it's a random point in time, in some cases it's the
> mass rebuild coinciding with "last minute" updates. It has happened
> and it will happen again, you can't prevent it, and I don't see any
> reason why one would want to prevent it.

Once more, this is poor theory. This is not about prevention, but
about wasted effort and a lack of coordination. Package maintainers
need to learn about their dependencies within Extras/Core and
coordinate required rebuilds with each other.

What I see is the following: a signal has been sent, and it's time to
prepare for Fedora Extras 5. It's the packager's responsibility to
update/upgrade where necessary. And since we still do a rolling
release, packagers rebuild when they're ready and when their
BuildRequires are ready. It's they who receive build logs and build
failure reports directly and can peruse them for mistakes. It's also
they who decide when to ship another version upgrade, possibly with
GCC fixes or things like that.
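As an aside on perusing those logs: for an autoconf-based package, one
cheap check (a minimal sketch, assuming the configure output was saved
as build.log; adjust the file name to whatever your build system
produces) is to grep for configure checks that quietly answered "no":

    # flag configure checks that answered "no" and may have silently
    # dropped a component from the build
    grep -n 'checking .*\.\.\. no' build.log

A "no" you didn't expect there usually means a missing BuildRequires,
i.e. a component left out even though the build itself succeeded.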
Why care about any package that may be broken _today_ when the
packager _will_ fix it _in time_? Why perform a rebuild attempt today
when upstream may release a new version a week later which would
build? Why interfere with the packager's way of keeping a package in
shape?

--
fedora-extras-list mailing list
fedora-extras-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/fedora-extras-list