Re: What happened to 6.1

On Fri, Oct 21, 2011 at 12:54 PM, Johnny Hughes <johnny@xxxxxxxxxx> wrote:
> On 10/21/2011 10:01 AM, Les Mikesell wrote:
>> On Fri, Oct 21, 2011 at 9:51 AM, Nicolas Thierry-Mieg
>> <Nicolas.Thierry-Mieg@xxxxxxx> wrote:
>>
>>>> Johnny, chill. I don't blame him for being confused. Up until right now,
>>>> you updated to a point release, then, over the weeks and months, there
>>>> were updates. All of a sudden, there are *no* updates for the 6.0 point
>>>> release, which is a major change in what everyone expected, based on
>>>> history.
>>>
>>> this is the way it has always been: once upstream releases x.y+1, there
>>> are no more updates to x.y (in upstream and therefore also in centos),
>>> until centos releases x.y+1.
>>
>> Yes, but that used to be transparent, because the centos x.y+1 release
>> happened quickly so it didn't matter that the update repo was held
>> back until an iso build was done.
>>
>
> Yes, and NOW the release process is MUCH harder.
>
> Red Hat used to have an AS release that contained everything ... we
> build that and we get everything.  Nice and simple.  Build all the
> packages, look at it against the AS iso set ... done.  Two weeks was
> about as long as it took.
>
> Now, for version 6, they have:
>
> Red Hat Enterprise Linux Server (v. 6)
> Red Hat Enterprise Linux Workstation (v. 6)
> Red Hat Enterprise Linux Desktop (v. 6)
> Red Hat Enterprise Linux HPC Node (v. 6)
> Red Hat Enterprise Linux Workstation FasTrack (v. 6)
> Red Hat Enterprise Linux Server FasTrack (v. 6)
> Red Hat Enterprise Linux Desktop FasTrack (v. 6)
> Red Hat Enterprise Linux Scalable File System (v. 6)
> Red Hat Enterprise Linux Resilient Storage (v. 6)
> Red Hat Enterprise Linux Load Balancer (v. 6)
> Red Hat Enterprise Linux HPC Node FasTrack (v. 6)
> Red Hat Enterprise Linux High Performance Network (v. 6)
> Red Hat Enterprise Virtualization
>
> They have the same install groups with different packages based on the
> above groupings, so we have to do some kind of custom generation of the
> comps files to make things work.
>
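
For anyone not familiar with comps files: each variant ships its own
comps.xml describing the install groups, so producing a single tree means
merging the per-group package lists across all of them.  A rough sketch of
that kind of merge is below; the file names and logic are only my guess at
the idea, not the actual CentOS tooling.

#!/usr/bin/env python
# Rough sketch (hypothetical, not the real CentOS build scripts): merge the
# <packagelist> entries of identically named groups from several per-variant
# comps.xml files whose paths are given on the command line.
import sys
import xml.etree.ElementTree as ET

def merge_comps(paths):
    merged = {}  # group id -> set of package names seen in any variant
    for path in paths:
        root = ET.parse(path).getroot()
        for group in root.findall('group'):
            gid = group.findtext('id')
            pkgs = set(req.text for req in group.findall('packagelist/packagereq'))
            merged.setdefault(gid, set()).update(pkgs)
    return merged

if __name__ == '__main__':
    # e.g. ./merge-comps.py comps-Server.xml comps-Workstation.xml ...
    for gid, pkgs in sorted(merge_comps(sys.argv[1:]).items()):
        print('%s: %d packages' % (gid, len(pkgs)))
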
> They have created an optional channel in several of those groupings that
> is only accessible via RHN and they do not put those RPMS on any ISOs
> ... and they have completely changed their "Authorized Use Policy" so
> that we can NOT log in to RHN and use anything that is not on a public
> FTP server or on an ISO set ... effectively cutting us off from the
> ability to check anything on the optional channel.
>
> Now we have to engineer a compilation of all those groupings, and we
> have to figure out what parts of the optional channels go into the point
> release and which ones do not (the ones that are upgrades).  Sometimes
> the only way to tell is when something does not build correctly and you
> have to revert an optional package to a previous version for the build, etc.
>
> We have to use anaconda to build our ISOs and upstream is using
> "something else" to build theirs ... so anaconda NEVER works out of the
> box anymore.  We get ISOs (or USB images) that do not work and have to
> basically redesign anaconda.
>
> We can't look at upstream build logs, and we can't get all the binary
> RPMs for testing while staying within the Terms of Service.
>
> And with the new release, it seems that they have purposely broken the
> rpm macros, and do not care to fix it:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=743229
>
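
If anyone wants to see what a given macro currently expands to on their own
build host, "rpm --eval" shows that; a trivial wrapper (just a generic
illustration, not tied to the specific breakage in that bug) could look like:

import subprocess

def rpm_eval(expr):
    # "rpm --eval" prints the expansion of a macro expression
    return subprocess.check_output(['rpm', '--eval', expr]).decode().strip()

print(rpm_eval('%{?dist}'))  # e.g. ".el6", or empty if the macro is not defined
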
> So, trust me, it is MUCH more complicated to build now than it was with
> previous releases.
>
> With the 5.7 release, there were several SRPMS that only made it to the
> public FTP server after much prompting from us.  And with the Authorized
> Use Policy, I cannot just go to RHN and grab such an SRPM and use it.  If
> it is not public, we can no longer release it.
>
> So, the short answer is, it now takes longer.
>
> Thanks,
> Johnny Hughes


As someone who was part of the previous "6.0" discussions, I have to
say thank you for finally laying out some details about what the
issues are.  More information like this would really go a long way
towards preventing future flame-fests.

Thanks for your hard work.


-☙ Brian Mathis ❧-
_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos


