Re: ARM as a primary architecture

On Wed, Mar 21, 2012 at 2:36 AM, Adam Williamson <awilliam@xxxxxxxxxx> wrote:
> On Tue, 2012-03-20 at 13:39 -0400, Peter Jones wrote:
>
>> >> 4) when milestones occur, arm needs to be just as testable as other
>> >>     primary architectures
>> >
>> > So we have a new hire (hi Paul) who is looking at autoqa, and we're
>> > going to pull together as much as we can here. It would help me to know
>> > (and we're reaching out to QE separately - per my other mail) what you
>> > would consider "testable" to mean, in terms of what you'd want to see.
>>
>> I'd largely defer to adamw for specific criteria regarding testing, both
>> in terms of criteria we're testing for (i.e. #3) and in terms of establishing
>> appropriate testing procedures for the platform.  I've largely listed those
>> because there's not really any indication in the proposal as it stands
>> that this is well-considered at this point in time.  There's a brief section
>> on how to test, but it appears to be largely pro-forma.
>>
>> >> 5) installation methods must be in place.  I'm not saying it has to be
>> >>    using the same model as x86, but when we get to beta, if it can't be
>> >>    installed, it can't meet similar release criteria to existing or prior
>> >>    primary arches. Where possible, we should be using anaconda for
>> >>    installation, though I'd be open to looking at using it to build
>> >>    installed images for machines with severe resource constraints.
>> >
>> > So we feel it more appropriate to use image creation tools at this
>> > point, for the 32-bit systems that we have in mind.
>
> So, my take on this is that if we're to do release validation for ARM,
> at a stage where there is no anaconda-for-ARM and our official ARM
> deployment method is 'download the image file for your hardware and
> flash it' (or however the image file gets written exactly), then we're
> going to wind up with release criteria and validation tests for ARM
> which look very different from what we have for x86.
>
> I suppose the picture that forms in my mind is that I'd expect
> generation of the images to be fully scripted and automated, and for
> validation to essentially consist of testing that the generated images
> in fact work on each of the 'supported' platforms.

Absolutely agree. I think initially it will likely be a number of
images (similar to the spins: gnome/kde/xfce/minimal) and then a tool
like livecd-creator where you could do "livecd-creator --ARM
--device=pandaboard image-name-xfce.img /dev/sdb" and it would take
the image and ensure it has all the relevant bits in place for a
particular device.
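
For concreteness, the "write the image to a device" step is basically
the usual dd workflow. A minimal sketch (the image name is made up,
and a plain file stands in for the SD card device so it's safe to run
anywhere):

```shell
#!/bin/sh
# Hypothetical sketch of the manual ARM deployment flow: write a
# prebuilt image to an SD card with dd, then verify the copy.
# A regular file stands in for the real device (e.g. /dev/sdb).

IMAGE=fedora-arm-xfce.img   # hypothetical per-device image
TARGET=sdcard.img           # in real use: the SD card device, e.g. /dev/sdb

# Make a small dummy image so the example is self-contained.
dd if=/dev/zero of="$IMAGE" bs=1M count=4 2>/dev/null

# Write the image to the "device"; conv=fsync flushes data before dd exits.
dd if="$IMAGE" of="$TARGET" bs=4M conv=fsync 2>/dev/null

# Verify the write by comparing checksums.
if [ "$(sha256sum < "$IMAGE")" = "$(sha256sum < "$TARGET")" ]; then
    echo "image written and verified"
fi
```

The per-device fixup tool described above would wrap something like
this, plus whatever bootloader/partition tweaks the board needs.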

> So what I'd expect to be happening is we'd have a list of supported ARM
> devices, and we'd want QA to have access to at least one of each of
> those devices. Then 'validation testing' for ARM would consist of just
> throwing the images at the devices and seeing what stuck.

That's my understanding too.

> Desktop validation would be broadly the same as x86, if desktop is
> something we'd actually be expecting to work on ARM. I rather would
> expect that to be the case, if we were calling it a primary
> architecture. By the same token, though, I would rather expect it to
> track very closely with x86, as it's all relatively high level code that
> ought to behave the same way on both, give or take graphics drivers.

Yes, and we're already pretty much there in terms of packages. On the
graphics side there's some movement for 2D, with a number of devices
getting KMS drivers into the kernel that should work with the basic
xorg-x11-drv-modesetting driver. 3D is a work in progress, but you
don't have to look too far back into x86 history to see that 3D only
really reached a decent state there quite recently.

> I suppose I'd expect it to be something less of a heroic undertaking
> than x86 validation testing, so long as we have this model of a
> relatively small set of images for deployment to a relatively small set
> of relatively non-customizable bits of hardware. Almost all the
> difficulty and complexity in x86 validation comes from the fact that we
> definitely don't have that.

Yes, I agree. In the initial phase we were looking at supporting the
development boards, so this is basically a small number of devices
(TrimSlice, PandaBoard, BeagleBoard, Origen, Snowball and Freescale)
that have completely open stacks, including unaccelerated graphics.
There are others, such as the Raspberry Pi, that could be used for
testing but still have binary bits in the stack, so they're not quite
there yet.

Peter
-- 
devel mailing list
devel@xxxxxxxxxxxxxxxxxxxxxxx
https://admin.fedoraproject.org/mailman/listinfo/devel
