Re: armhf dnf is not working on aarch64 kernel

Hi Gordan, Peter, all,

On 04/27/2016 03:39 PM, Gordan Bobic wrote:
> On 2016-04-27 19:12, John Dulaney wrote:
>> On Wed, Apr 27, 2016 at 05:04:38PM +0100, Gordan Bobic wrote:
>>>> Maybe that's something that CentOS have added (don't know, haven't
>>>> looked), RHELSA doesn't support it that I'm aware of and they're
>>>> definitely only 64K page size. The biggest change is in rpm and the
>>>> arch mappings there.

>>> They might not support it, but it most certainly works. There are no
>>> changes specific to this that I can find in CentOS. All I did was
>>> rebuild the host kernel with 4KB pages and ARM32 support (still an
>>> aarch64 kernel). The C7 armv7hl guest is completely unmodified apart
>>> from /etc/rpm/platform being set explicitly.

Allow me to add a few thoughts. I have been working with the ARM vendors
(as well as the ARM Architecture Group) since before the architecture
was announced, and the issue of page size and 32-bit backward
compatibility came up in the earliest days. I am speaking from a Red Hat
perspective and NOT dictating what Fedora should or must do, but I do
strongly encourage Fedora not to make a change to something like the
page size simply to support a (relatively) small number of corner cases.
It is better to focus on the longer term trajectory, which the mobile
handset market demonstrates: the transition to 64-bit computing hardware
will be much faster than people thought, and we don't need to build a
legacy (we don't have a 32-bit app store filled with things that can't
easily be rebuilt, and those that do exist have been rebuilt anyway).

That doesn't mean we shouldn't love 32-bit ARM devices, which we do. In
fact, there will be many more 32-bit ARM devices over coming years. This
is especially true for IoT clients. But there will also be a large (and
rapidly growing) number of very high performance 64-bit systems. Many of
those will not have any 32-bit backward compatibility, or will disable
it in the interest of reducing the amount of validation work. Maintaining
several entirely separate ISAs just for the fairly nonexistent field of
proprietary, non-recompilable third-party 32-bit apps doesn't really make
sense. Sure, running 32-bit via multilib is fun and all, but it's not
really something that is critical to using ARM systems.

The mandatory page sizes in the v8 architecture are 4K and 64K, with
various options around the number of bits used for address spaces, huge
pages (or ginormous pages), and contiguous hinting for smaller "huge"
pages. There is an option for 16K pages, but it is not mandatory. In the
server specifications, we don't compel Operating Systems to use 64K, but
everything is written with that explicitly in mind. By using 64K early
we ensure that it is possible to do so in a very clean way, and then if
(over the coming years) the deployment of sufficient real systems proves
that this was a premature decision, we still have 4K.
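
(An aside for anyone following along at home: portable userspace should
query the page size at runtime rather than assume 4K. A quick check on
any given kernel - the 65536 below is what a 64K build would report, and
is purely illustrative:

    $ getconf PAGESIZE
    65536

Code that hardcodes 4096 is exactly the kind of thing that breaks when
the granule changes.)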

The choices for preferred page size were between 4K and 64K. In the
interest of transparency, I pushed from the RH side in the earliest days
(before public disclosure) to introduce an intentional break with the
past and support only 64K on ARMv8. I also asked a few of the chip
vendors not to implement 32-bit execution (and some of them have indeed
omitted it after we discussed the needs early on), and am aggressively
pushing for it to go away over time in all server parts. But there's
more to it than that. In the (very) many early conversations with
various performance folks, the feedback was that larger page sizes than
4K should generally be adopted for a new arch. Ideally that would have
been 16K (which other non-x86 architectures also went with), but 16K
support was optional, and optional necessarily means "does not exist".
My advice when Red Hat began internal work on ARMv8 was to listen to the
experts.

I am well aware of Linus's views on the topic and I have seen the rants
on G+ and elsewhere. I am completely willing to be wrong about moving to
64K too soon (there is not enough data yet), and if it ultimately proves
premature, to see things like RHELSA on the Red Hat side switch back to 4K.
Fedora is its own master, but I strongly encourage retaining the use of
64K granules at this time, and letting it play out without responding to
one or two corner use cases and changing course. There are very many
design optimizations that can be done when you have a 64K page size,
from the way one can optimize cache lookups and hardware page table
walker caches to the reduction of TLB pressure (though I accept that
huge pages are an answer for this under a 4K granule regime as well). It
would be nice to blaze a trail rather than take the safe default.
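
(To make the TLB and huge page point concrete: under a 4K granule the
default arm64 huge page is 2M, while a 64K granule gives 512M block
mappings by default. You can see which one a given kernel uses with
something like the following - output shown for a 4K build, assuming
default hugetlb sizing:

    $ grep Hugepagesize /proc/meminfo
    Hugepagesize:       2048 kB

On a 64K build the same line would read 524288 kB.)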

My own opinion is that (in the longer term, beginning with server) we
should not have a 32-bit legacy of the kind that x86 has to deal with
forever. We can use virtualization (and later, if it really comes to it,
containers running 32-bit applications with 4K pages exposed to them -
an implementation would be a bit like "Clear" containers today) to run
32-bit applications on 64-bit without having to do nasty hacks (such as
multilib) and reduce any potential for confusion on the part of users
(see also RasPi 3 as an example). It is still early enough in the
evolution of general purpose aarch64 to try this, and have the pragmatic
fallback of retreating to 4K if needed. The same approach of running
under virtualization or within a container model applies equally to
ILP32 (another 32-bit ABI that some folks like), where a third party
group is welcome to do all of the lifting required.
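
(To sketch the virtualization route: on host silicon that retains 32-bit
EL1 support, KVM can already run an AArch32 guest today. A minimal
illustration, assuming a reasonably recent QEMU and a guest image at the
hypothetical path disk.img:

    $ qemu-system-aarch64 -M virt -enable-kvm \
          -cpu host,aarch64=off \
          -m 1024 -drive file=disk.img,format=raw

The aarch64=off CPU property is what requests the 32-bit execution state
for the guest; on parts that omit AArch32 it simply won't be available,
which is rather the point.)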

>>> The main point being that the original assertion that making this
>>> work would require rpm, yum, packagekit, mock and other code changes
>>> doesn't seem to be correct based on empirical evidence.

>> It may work with rpm, but, as per the original post, dnf does not
>> support it, and dnf should not support it as long as Fedora
>> does not support a 32 bit userspace on aarch64.

It's a lot of lifting to validate and support a 32-bit userspace for a
brand new architecture that doesn't need to have that legacy. Sure, it's
convenient, and you're obviously more than capable of building a kernel
with a 4K page size and doing whatever you need for yourself. That's the
beauty of open source. It lets you have a 32-bit userspace on a 64-bit
device without needing to support that for everyone else.
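
(For completeness, the recipe Gordan describes amounts to roughly the
following - a sketch of his setup, not something we validate, and the
exact platform string may vary:

    # Host kernel .config: 4K granule plus 32-bit (AArch32) userspace
    CONFIG_ARM64_4K_PAGES=y
    CONFIG_COMPAT=y

    # Inside the armv7hl guest, pin the rpm platform explicitly
    echo 'armv7hl-redhat-linux-gnu' > /etc/rpm/platform

Nothing else in rpm needs to change, which is consistent with what he
reports.)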

> 2) Nobody has yet pointed at ARM's own documentation (I did ask
> earlier) that says that 4KB memory page support is optional
> rather than mandatory.

Nobody said this was a requirement. I believe you raised this as some
kind of logical fallacy to reinforce the position that you have taken.

> And if 4KB support is in fact mandatory, then arguably the
> decision to opt for 64KB for the sake of supporting Seattle was
> based on wanting to support broken hardware that turned out to
> be too little too late anyway.

Seattle was incredibly well designed by a very talented team of
engineers at AMD, who know how to make servers. They did everything
fully in conformance with the specifications we coauthored for v8. It is
true that everyone would have liked to see low cost mass market Seattle
hardware in wide distribution. For the record, last week I received one
of the preproduction "Cello" boards ($300); a few kinks are being worked
out before it goes into mass production.

<snip>

> So either something magical happens that means that the
> missing 32-bit support doesn't have to be fully emulated in
> software, or the entire argument being made for VMs instead
> of chroots is entirely erroneous.

Nobody said there wasn't a performance hit using virtualization.
Depending upon how you measure it, it's about 3-10% overhead or somesuch
to use KVM (or Xen for that matter) on ARMv8. That doesn't make it an
erroneous argument that running a VM is an easier exercise in
distribution validation and support: you build one 64-bit distro, you
build one 32-bit distro. You don't have to support a mixture. In a few
years, we'll all be using 64-bit ARM SoCs in every $10 device, running
only native 64-bit ARMv8 code, and wondering why multilib was ever an
issue. We'll have $1-$2 IoT widgets that are 32-bit, but that's another
matter. There's no legacy today, so let's concentrate on not building
one, and on learning from history.

Jon.

-- 
Computer Architect | Sent from my Fedora powered laptop
_______________________________________________
arm mailing list
arm@xxxxxxxxxxxxxxxxxxxxxxx
http://lists.fedoraproject.org/admin/lists/arm@xxxxxxxxxxxxxxxxxxxxxxx



