Re: [PATCH] 64K page size

The short answer is that I'm actually working on a
MIPS-based supercomputer. :)

The longer answer is that the group I'm working with
is linking an extremely large number of SB1-based MIPS
nodes together to build a very large multicast-based
cluster. Back-of-the-envelope calculations suggest
that large pages - although inefficient on memory per
node - would be highly efficient for what we have in
mind.
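The tradeoff being weighed here can be sketched with a quick back-of-the-envelope calculation of our own (the mapping count and TLB size below are made-up illustrative figures, not numbers from either poster): larger pages waste more memory to internal fragmentation, roughly half a page per mapping on average, but they multiply how much memory the TLB can cover without a miss.

```python
# Illustrative only: compares 4 KiB and 64 KiB pages on two axes of the
# tradeoff. All inputs (1000 mappings, 64 TLB entries) are assumptions.

def avg_waste_bytes(num_mappings, page_size):
    """Expected internal fragmentation: roughly half a page per mapping."""
    return num_mappings * page_size // 2

def tlb_reach_bytes(tlb_entries, page_size):
    """Memory addressable without taking a TLB miss."""
    return tlb_entries * page_size

for page_size in (4 << 10, 64 << 10):
    waste = avg_waste_bytes(num_mappings=1000, page_size=page_size)
    reach = tlb_reach_bytes(tlb_entries=64, page_size=page_size)
    print(f"{page_size >> 10:>2} KiB pages: "
          f"~{waste >> 10} KiB wasted to fragmentation, "
          f"{reach >> 10} KiB TLB reach")
```

With these (invented) figures, 64K pages waste sixteen times as much memory per node but also cover sixteen times as much of the working set per TLB entry, which is exactly why the balance tips in favour of large pages on big, memory-rich cluster nodes and against them on small systems.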

(You are correct that this will not involve solving
world hunger - although I'm always happy to be proven
wrong on such matters.)

Large pages are not the only technical issue on my mind - Linux
clustering technology in general is a long way from where I'd
like it to be - but I'm happy with the idea of someone solving
one of the larger thorns that has been bugging me for a while.

Jonathan Day


--- Ralf Baechle <ralf@xxxxxxxxxxxxxx> wrote:
> 64K pages are not the universal solution to world hunger.  They're a
> tradeoff, and usually one that is considered appropriate for full-blown
> supercomputers.  On smaller systems the memory overhead is likely to be
> prohibitive.  The memory overhead problem is being worked on, but it's
> likely to be quite some time before this is finished and integrated.
> 
> Do we want to get them to work?  Of course; Linux/MIPS supports some
> extremely large systems.  But aside from those, a 64K page size is
> rarely useful.
> 
>   Ralf
> 

