Re: [00/41] Large Blocksize Support V7 (adds memmap support)

mel@xxxxxxxxx (Mel Gorman) writes:

> On (16/09/07 17:08), Andrea Arcangeli didst pronounce:
>> zooming in I see red pixels all over the squares mixed with green
>> pixels in the same square. This is exactly what happens with the
>> variable order page cache and that's why it provides zero guarantees
>> in terms of how much ram is really "free" (free as in "available").
>> 
>
> This picture is not from a kernel running grouping pages by mobility, so
> that is hardly a surprise. This is
> what the normal kernel looks like. Look at the videos in
> http://www.skynet.ie/~mel/anti-frag/2007-02-28 and see how list-based
> compares to vanilla. These are from February when there was less control
> over mixing blocks than there is today.
>
> In the current version, mixing occurs in the lower blocks as much as
> possible, not the upper ones. So there are a number of mixed blocks, but the number is
> kept to a minimum.
>
> The number of mixed blocks could have been enforced as 0, but I felt it was
> better in the general case to fragment rather than regress performance.
> That may be different for large blocks where you will want to take the
> enforcement steps.

I agree that 0 is a bad value. But so is infinity. There should be
some mixing, but not a lot. You say "kept to a minimum". Is that
actively done, or does it already happen by itself? Hopefully the
latter, which would be just splendid.
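
To make sure I understand what "by itself" would mean, here is a
compilable toy model of the stealing policy as I picture it. This is
not the real mm/page_alloc.c; the block counts, the two types and the
two-pass policy are all made up for illustration:

#include <stdio.h>

#define BLOCKS          8
#define PAGES_PER_BLOCK 16

enum mt { MOVABLE, UNMOVABLE };

static enum mt block_type[BLOCKS];
static int free_pages[BLOCKS];

/* Allocate one page of type t.  Pass 0 tries blocks of the native
 * type only; pass 1 steals a foreign block and retags it, so each
 * fallback mixes (converts) at most one whole block. */
static int alloc_page(enum mt t)
{
    for (int pass = 0; pass < 2; pass++) {
        for (int b = 0; b < BLOCKS; b++) {
            if (free_pages[b] == 0)
                continue;
            if (pass == 0 && block_type[b] != t)
                continue;
            if (block_type[b] != t)
                block_type[b] = t;   /* the block is stolen, i.e. "mixed" */
            free_pages[b]--;
            return b;
        }
    }
    return -1;                       /* toy OOM */
}

int main(void)
{
    for (int b = 0; b < BLOCKS; b++) {
        block_type[b] = (b < BLOCKS - 2) ? MOVABLE : UNMOVABLE;
        free_pages[b] = PAGES_PER_BLOCK;
    }

    /* Drain both unmovable blocks; the last allocation is forced to
     * fall back and steal one movable block. */
    for (int i = 0; i < 2 * PAGES_PER_BLOCK + 1; i++)
        alloc_page(UNMOVABLE);

    for (int b = 0; b < BLOCKS; b++)
        printf("block %d: %-9s free=%2d\n", b,
               block_type[b] == MOVABLE ? "movable" : "unmovable",
               free_pages[b]);
    return 0;
}

With that policy exactly one movable block ends up retagged, which is
the "happens by itself" behaviour I was hoping you meant.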

>> With config-page-shift mmap works on 4k chunks but it's always backed
>> by 64k or any other large size that you chose at compile time. And if

But would mapping a random 4K page out of a file then consume 64k?
That sounds like an awful lot of internal fragmentation. I hope the
unaligned bits and pieces get put into a slab or something, as you
suggested previously.
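
Just to put numbers on that worst case (assuming a whole 64k page
really is pinned per touched 4k chunk, which may not be what your
patch ends up doing):

    useful data per touch:   4k
    backing page:           64k
    waste per touch:        64k - 4k = 60k  (93.75% internal fragmentation)

    1000 random 4k touches: ~4 MB of useful data, but up to
    1000 * 64k ~= 62.5 MB of page cache, a factor of 16 blow-up.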

>> the virtual alignment of mmap matches the physical alignment of the
>> physical largepage and is >= PAGE_SIZE (software PAGE_SIZE I mean) we
>> could use the 62nd bit of the pte to use a 64k tlb (if future cpus
>> will allow that). Nick also suggested to still set all ptes equal to
>> make life easier for the tlb miss microcode.

It is too bad that existing amd64 CPUs only allow such large physical
pages (4K, 2M, 1G). But it kind of makes sense to cut away a full
level of page tables for each step up in size.
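
For the alignment condition, what I am picturing is something as
simple as the following. Pure sketch: the 62nd pte bit is hypothetical
hardware anyway, and the function name is made up:

#define LARGE_SIZE (64UL * 1024)

/* Could one hypothetical 64k TLB entry cover this mapping?  Virtual
 * and physical address must have matching alignment within a 64k
 * frame; the simplest sufficient case is both being 64k aligned. */
static int can_use_64k_tlb(unsigned long vaddr, unsigned long paddr)
{
    return (vaddr & (LARGE_SIZE - 1)) == 0 &&
           (paddr & (LARGE_SIZE - 1)) == 0;
}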

>> > big you can make it. I don't think my system with 1GB ram would work
>> > so well with 2MB order 0 pages. But I wasn't referring to that but to
>> > the picture.
>> 
>> Sure! 2M is surely way excessive for a 1G system, 64k most certainly
>> too, of course unless you're running a db or a multimedia streaming
>> service, in which case it should be ideal.

rtorrent, XEmacs/gnus, bash, xterm, zsh, make, gcc, galeon and the
occasional mplayer.

I would mostly be concerned about how rtorrent's totally random access
of mmapped files negatively impacts such a 64k page system.
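
If it does hurt, the obvious userspace band-aid would be an madvise
hint like below; whether a large-block page cache would or even could
honour MADV_RANDOM is of course exactly the question:

#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Map a file roughly the way rtorrent maps its piece files and hint
 * that access will be random, so the kernel should not read ahead
 * around each fault.  path and len are whatever the application
 * knows about the file. */
static void *map_random_file(const char *path, size_t len)
{
    void *p;
    int fd = open(path, O_RDONLY);

    if (fd < 0)
        return MAP_FAILED;
    p = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);                        /* the mapping keeps the file alive */
    if (p != MAP_FAILED)
        madvise(p, len, MADV_RANDOM); /* disable readahead for this vma */
    return p;
}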

MfG
     Goswin
