On Mon, Jan 13, 2014 at 11:01:32AM +0900, Minchan Kim wrote:
> On Fri, Jan 10, 2014 at 01:10:39PM -0500, Johannes Weiner wrote:
> > shmem mappings already contain exceptional entries where swap slot
> > information is remembered.
> > 
> > To be able to store eviction information for regular page cache,
> > prepare every site dealing with the radix trees directly to handle
> > entries other than pages.
> > 
> > The common lookup functions will filter out non-page entries and
> > return NULL for page cache holes, just as before.  But provide a raw
> > version of the API which returns non-page entries as well, and switch
> > shmem over to use it.
> > 
> > Signed-off-by: Johannes Weiner <hannes@xxxxxxxxxxx>
> Reviewed-by: Minchan Kim <minchan@xxxxxxxxxx>

Thanks, Minchan!

> > @@ -890,6 +973,73 @@ repeat:
> >  EXPORT_SYMBOL(find_or_create_page);
> >  
> >  /**
> > + * __find_get_pages - gang pagecache lookup
> > + * @mapping: The address_space to search
> > + * @start: The starting page index
> > + * @nr_pages: The maximum number of pages
> > + * @pages: Where the resulting pages are placed
> 
> where is @indices?

Fixed :)

> > @@ -894,6 +894,53 @@ EXPORT_SYMBOL(__pagevec_lru_add);
> >  
> >  /**
> >   * pagevec_lookup - gang pagecache lookup
> 
> __pagevec_lookup?
> 
> > + * @pvec: Where the resulting entries are placed
> > + * @mapping: The address_space to search
> > + * @start: The starting entry index
> > + * @nr_pages: The maximum number of entries
> 
> missing @indices?
> 
> > + *
> > + * pagevec_lookup() will search for and return a group of up to
> > + * @nr_pages pages and shadow entries in the mapping.  All entries are
> > + * placed in @pvec.  pagevec_lookup() takes a reference against actual
> > + * pages in @pvec.
> > + *
> > + * The search returns a group of mapping-contiguous entries with
> > + * ascending indexes.  There may be holes in the indices due to
> > + * not-present entries.
> > + *
> > + * pagevec_lookup() returns the number of entries which were found.
> 
> __pagevec_lookup

Yikes, all three fixed.

> > @@ -22,6 +22,22 @@
> >  #include <linux/cleancache.h>
> >  #include "internal.h"
> >  
> > +static void clear_exceptional_entry(struct address_space *mapping,
> > +				     pgoff_t index, void *entry)
> > +{
> > +	/* Handled by shmem itself */
> > +	if (shmem_mapping(mapping))
> > +		return;
> > +
> > +	spin_lock_irq(&mapping->tree_lock);
> > +	/*
> > +	 * Regular page slots are stabilized by the page lock even
> > +	 * without the tree itself locked.  These unlocked entries
> > +	 * need verification under the tree lock.
> > +	 */
> 
> Could you explain why repeated spin_lock with irq disabled isn't a problem
> in the truncation path?

To modify the cache tree, we have to take the IRQ-safe tree_lock; this
is no different than removing a page (see truncate_complete_page).
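
[Editorial note: for context, here is a minimal sketch of how
clear_exceptional_entry() might continue past the comment quoted above.
It assumes the shadow entry is re-checked and removed under the IRQ-safe
tree_lock with radix_tree_delete_item(), which only deletes the slot if
it still contains the given item; the exact body is an illustration of
the locking pattern described in the reply, not the literal patch hunk.]

/*
 * Sketch only: completes the function quoted from the mm/truncate.c
 * hunk above, assuming radix_tree_delete_item() is used to clear the
 * shadow entry.
 */
static void clear_exceptional_entry(struct address_space *mapping,
				    pgoff_t index, void *entry)
{
	/* Handled by shmem itself */
	if (shmem_mapping(mapping))
		return;

	spin_lock_irq(&mapping->tree_lock);
	/*
	 * Regular page slots are stabilized by the page lock even
	 * without the tree itself locked.  These unlocked entries
	 * need verification under the tree lock: the slot is only
	 * cleared if it still holds @entry, so a page that was
	 * faulted back in meanwhile is left alone.
	 */
	radix_tree_delete_item(&mapping->page_tree, index, entry);
	spin_unlock_irq(&mapping->tree_lock);
}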
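
[Editorial note: for the @indices question earlier in the thread, the
fixed kerneldoc header for __find_get_pages() presumably gains a line
along these lines; the exact wording is assumed, not taken from the
updated patch.]

/**
 * __find_get_pages - gang pagecache lookup
 * @mapping: The address_space to search
 * @start: The starting page index
 * @nr_pages: The maximum number of pages
 * @pages: Where the resulting pages are placed
 * @indices: The cache indices corresponding to the entries in @pages
 */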