RE: [PATCH 2/2] Drivers: hv: Add Hyper-V balloon driver

> -----Original Message-----
> From: Andrew Morton [mailto:akpm@xxxxxxxxxxxxxxxxxxxx]
> Sent: Tuesday, October 09, 2012 3:45 PM
> To: KY Srinivasan
> Cc: gregkh@xxxxxxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx;
> devel@xxxxxxxxxxxxxxxxxxxxxx; olaf@xxxxxxxxx; apw@xxxxxxxxxxxxx;
> andi@xxxxxxxxxxxxxx
> Subject: Re: [PATCH 2/2] Drivers: hv: Add Hyper-V balloon driver
> 
> On Sun,  7 Oct 2012 16:59:46 -0700
> "K. Y. Srinivasan" <kys@xxxxxxxxxxxxx> wrote:
> 
> > Add the basic balloon driver.
> 
> hm, how many balloon drivers does one kernel need?
> 
> Although I see that the great majority of this code is hypervisor-specific.
> 
> > Windows hosts dynamically manage the guest
> > memory allocation via a combination of memory hot add and ballooning.
> > Memory hot add is used to grow the guest memory up to the maximum
> > memory that can be allocated to the guest. Ballooning is used to both
> > shrink and expand up to the max memory. Supporting hot add needs
> > additional support from the host. We will support hot add when this
> > support is available. For now, by setting the VM startup memory to the
> > VM max memory, we can use ballooning alone to dynamically manage memory
> > allocation amongst competing guests on a given host.
> >
> >
> > ...
> >
> > +static int alloc_balloon_pages(struct hv_dynmem_device *dm, int num_pages,
> > +			 struct dm_balloon_response *bl_resp, int alloc_unit,
> > +			 bool *alloc_error)
> > +{
> > +	int i = 0;
> > +	struct page *pg;
> > +
> > +	if (num_pages < alloc_unit)
> > +		return 0;
> > +
> > +	for (i = 0; (i * alloc_unit) < num_pages; i++) {
> > +		if (bl_resp->hdr.size + sizeof(union dm_mem_page_range) >
> > +			PAGE_SIZE)
> > +			return i * alloc_unit;
> > +
> > +		pg = alloc_pages(GFP_HIGHUSER | __GFP_NORETRY | GFP_ATOMIC |
> > +				__GFP_NOMEMALLOC | __GFP_NOWARN,
> > +				get_order(alloc_unit << PAGE_SHIFT));
> 
> This choice of GFP flags is basically impossible to understand, so I
> suggest that a comment be added explaining it all.
> 
> I'm a bit surprised at the inclusion of GFP_ATOMIC as it will a) dip
> into page reserves, which might be undesirable and b) won't even
> reclaim clean pages, which seems desirable.  I suggest this also be
> covered in the forthcoming code comment.

I will rework these flags and add appropriate comments.
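Roughly along these lines (a sketch only; GFP_ATOMIC dropped as you
suggest, and the final flag set may still change in the repost):

	/*
	 * This allocation runs in the balloon worker thread, i.e. in
	 * process context, so GFP_ATOMIC and its dip into the emergency
	 * reserves are not needed:
	 *
	 * GFP_HIGHUSER     - any zone will do; the host only needs PFNs.
	 * __GFP_NORETRY    - fail fast instead of retrying hard; the
	 *                    response message can report partial success.
	 * __GFP_NOMEMALLOC - never allocate from the memory reserves.
	 * __GFP_NOWARN     - allocation failure here is expected and
	 *                    handled, so don't spam the kernel log.
	 */
	pg = alloc_pages(GFP_HIGHUSER | __GFP_NORETRY |
			__GFP_NOMEMALLOC | __GFP_NOWARN,
			get_order(alloc_unit << PAGE_SHIFT));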

> 
> drivers/misc/vmw_balloon.c seems to me to have used better choices here.
> 
> > +		if (!pg) {
> > +			*alloc_error = true;
> > +			return i * alloc_unit;
> > +		}
> > +
> > +		totalram_pages -= alloc_unit;
> 
> Well, I'd consider totalram_pages to be an mm-private thing which drivers
> shouldn't muck with.  Why is this done?

By adjusting totalram_pages, the MemTotal value presented in /proc/meminfo
correctly reflects the amount of memory currently assigned to the guest.
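For example, if a guest configured with 4 GB balloons out 512 MB, MemTotal
drops from about 4194304 kB to about 3670016 kB, matching what the host
has actually committed to the guest at that point.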
 
> 
> drivers/xen/balloon.c and drivers/virtio/virtio_balloon.c also alter
> totalram_pages, also without explaining why.
> drivers/misc/vmw_balloon.c does not.
> 
> > +		dm->num_pages_ballooned += alloc_unit;
> > +
> > +		bl_resp->range_count++;
> > +		bl_resp->range_array[i].finfo.start_page =
> > +			page_to_pfn(pg);
> > +		bl_resp->range_array[i].finfo.page_cnt = alloc_unit;
> > +		bl_resp->hdr.size += sizeof(union dm_mem_page_range);
> > +
> > +	}
> > +
> > +	return num_pages;
> > +}
> >
> > ...
> >
> 
> 
> 

Thanks for the prompt review. I will address your comments and repost the
patches soon. If it is OK with you, I am going to keep the code that
manipulates totalram_pages, for the reasons given above.
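On the deflate side the adjustment is undone when pages are handed back to
the guest. A rough sketch (the helper below is illustrative, and it assumes
the inflate path calls split_page() on each successful higher-order
allocation so the pages can be freed one at a time):

	static void free_balloon_pages(struct hv_dynmem_device *dm,
				union dm_mem_page_range *range_array)
	{
		int num_pages = range_array->finfo.page_cnt;
		__u64 start_frame = range_array->finfo.start_page;
		struct page *pg;
		int i;

		for (i = 0; i < num_pages; i++) {
			pg = pfn_to_page(i + start_frame);
			__free_page(pg);
			dm->num_pages_ballooned--;
		}

		/*
		 * These pages are usable by the guest again; let them
		 * count towards MemTotal once more.
		 */
		totalram_pages += num_pages;
	}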

Regards,

K. Y


