The hardcoded upper limit of 16 jobs, passed to make via "-j" when you use either %make_build or make %{?_smp_mflags} in the %build section of your specfiles, is going away in rawhide. This may result in your builds running with additional parallelism in some situations.

This change will appear in rawhide soon. It is not being made in F25 at this time; that requires more discussion and will warrant a separate announcement.

What you need to do:

Nothing. There is a small possibility that the additional parallelism can tickle some kind of build failure, but the regular Fedora x86_64 builders won't assign more than 16 CPU threads to your jobs in any case, so issues should only appear on other architectures or when building your packages outside of the Fedora buildsystem. Users with enough cores to see any difference here can rejoice that their builds will finally make full use of their hardware.

What you need to do if this actually breaks your package:

For the quickest fix, you can add this at the top of your spec:

  %global _smp_ncpus_max 16

This will put the limit back where it was. (A small illustrative snippet showing the override in context follows at the end of this message.) There's a reasonable chance that your package will occasionally still fail even at that level of parallelism, because most issues of this kind are race conditions that can be triggered by all sorts of factors. Safer still may be to use a lower value, or even to disable parallel builds entirely by not calling make with %{?_smp_mflags}. But in that case you should instead work with upstream to fix such issues at the source. After all, SMP machines certainly aren't going away.

Background:

For many years, Fedora's default RPM configuration (via /usr/lib/rpm/redhat/macros, from the redhat-rpm-config package) has set an upper limit of 16 on the job count passed to make via "-j". The limit dates back to build failures that showed up when SPARC CPUs with large numbers of threads became available. See https://bugzilla.redhat.com/show_bug.cgi?id=1384938 and https://bugzilla.redhat.com/show_bug.cgi?id=669638 for some references.

There's not really any reason to keep the limit in place. It's difficult to imagine that there is anything SPARC-specific about the original issues, and it's just not worth defaulting to such a low limit when there is so much hardware that would otherwise go unused.

 - J<
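Here is the illustrative snippet mentioned above: a minimal sketch of a spec carrying the override and a typical %build section. The package name, tags and build steps are placeholders assumed for illustration; only the %global line is the actual fix, and your own build steps may of course differ.

  # Restore the previous upper limit of 16 parallel make jobs.
  %global _smp_ncpus_max 16

  Name:           example
  Version:        1.0
  Release:        1%{?dist}
  Summary:        Placeholder package for illustration
  License:        MIT

  # (other tags, %description, %prep and so on omitted)

  %build
  %configure
  # Parallel build; the -j value is once again capped by the
  # _smp_ncpus_max setting at the top of the spec.
  %make_build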