Re: [PATCH v3] staging: vboxvideo: Add vboxvideo to drivers/staging

This is obviously much better and easier to review.  I ended up
reviewing it like I would normal code instead of staging code.  My main
request is: could you please rewrite the module init error handling so
that each function does its own error handling instead of just calling
vbox_driver_unload()?  Doing it that way is always going to be buggy
and impossible to review.

On Thu, Jun 22, 2017 at 11:11:37AM +0200, Hans de Goede wrote:
> This commit adds the vboxvideo drm/kms driver for the virtual graphics
> card used in Virtual Box virtual machines to drivers/staging.
> 
> Why drivers/staging? This driver is already being patched into the kernel
> by several distros, thus it is good to get this driver upstream soon, so
> that work on the driver can be easily shared.
> 
> At the same time we want to take our time to get this driver properly
> cleaned up (mainly converted to the new atomic modesetting APIs) before
> submitting it as a normal driver under drivers/gpu/drm; putting this
> driver in staging for now allows both.
> 
> Note this driver has already been significantly cleaned up: when I started
> working on this, the files under /usr/src/vboxguest/vboxvideo as installed
> by Virtual Box 5.1.18 Guest Additions totalled 52681 lines. The version
> in this commit has 4874 lines.
> 
> Cc: vbox-dev@xxxxxxxxxxxxxx
> Cc: Michael Thayer <michael.thayer@xxxxxxxxxx>
> Signed-off-by: Hans de Goede <hdegoede@xxxxxxxxxx>
> Signed-off-by: Michael Thayer <michael.thayer@xxxxxxxxxx>
> ---
> Changes in v2:
> -Add Michael's S-o-b
> -Improve Kconfig help text
> -Remove .config changes which accidentally got added to the commit
> 
> Changes in v3:
> -Convert the files shared with the video-driver for other operating-systems
>  to kernel coding style too
> -Remove unused code from these files
> ---
>  drivers/staging/Kconfig                     |   2 +
>  drivers/staging/Makefile                    |   1 +
>  drivers/staging/vboxvideo/Kconfig           |  12 +
>  drivers/staging/vboxvideo/Makefile          |   7 +
>  drivers/staging/vboxvideo/TODO              |   9 +
>  drivers/staging/vboxvideo/hgsmi_base.c      | 244 ++++++++
>  drivers/staging/vboxvideo/hgsmi_ch_setup.h  |  66 +++
>  drivers/staging/vboxvideo/hgsmi_channels.h  |  53 ++
>  drivers/staging/vboxvideo/hgsmi_defs.h      |  92 +++
>  drivers/staging/vboxvideo/modesetting.c     | 142 +++++
>  drivers/staging/vboxvideo/vbox_drv.c        | 287 +++++++++
>  drivers/staging/vboxvideo/vbox_drv.h        | 299 ++++++++++
>  drivers/staging/vboxvideo/vbox_err.h        |  50 ++
>  drivers/staging/vboxvideo/vbox_fb.c         | 442 ++++++++++++++
>  drivers/staging/vboxvideo/vbox_hgsmi.c      | 115 ++++
>  drivers/staging/vboxvideo/vbox_irq.c        | 207 +++++++
>  drivers/staging/vboxvideo/vbox_main.c       | 528 +++++++++++++++++
>  drivers/staging/vboxvideo/vbox_mode.c       | 864 ++++++++++++++++++++++++++++
>  drivers/staging/vboxvideo/vbox_prime.c      |  74 +++
>  drivers/staging/vboxvideo/vbox_ttm.c        | 477 +++++++++++++++
>  drivers/staging/vboxvideo/vboxvideo.h       | 491 ++++++++++++++++
>  drivers/staging/vboxvideo/vboxvideo_guest.h |  95 +++
>  drivers/staging/vboxvideo/vboxvideo_vbe.h   |  84 +++
>  drivers/staging/vboxvideo/vbva_base.c       | 233 ++++++++
>  24 files changed, 4874 insertions(+)
>  create mode 100644 drivers/staging/vboxvideo/Kconfig
>  create mode 100644 drivers/staging/vboxvideo/Makefile
>  create mode 100644 drivers/staging/vboxvideo/TODO
>  create mode 100644 drivers/staging/vboxvideo/hgsmi_base.c
>  create mode 100644 drivers/staging/vboxvideo/hgsmi_ch_setup.h
>  create mode 100644 drivers/staging/vboxvideo/hgsmi_channels.h
>  create mode 100644 drivers/staging/vboxvideo/hgsmi_defs.h
>  create mode 100644 drivers/staging/vboxvideo/modesetting.c
>  create mode 100644 drivers/staging/vboxvideo/vbox_drv.c
>  create mode 100644 drivers/staging/vboxvideo/vbox_drv.h
>  create mode 100644 drivers/staging/vboxvideo/vbox_err.h
>  create mode 100644 drivers/staging/vboxvideo/vbox_fb.c
>  create mode 100644 drivers/staging/vboxvideo/vbox_hgsmi.c
>  create mode 100644 drivers/staging/vboxvideo/vbox_irq.c
>  create mode 100644 drivers/staging/vboxvideo/vbox_main.c
>  create mode 100644 drivers/staging/vboxvideo/vbox_mode.c
>  create mode 100644 drivers/staging/vboxvideo/vbox_prime.c
>  create mode 100644 drivers/staging/vboxvideo/vbox_ttm.c
>  create mode 100644 drivers/staging/vboxvideo/vboxvideo.h
>  create mode 100644 drivers/staging/vboxvideo/vboxvideo_guest.h
>  create mode 100644 drivers/staging/vboxvideo/vboxvideo_vbe.h
>  create mode 100644 drivers/staging/vboxvideo/vbva_base.c
> 
> diff --git a/drivers/staging/Kconfig b/drivers/staging/Kconfig
> index 268d4e6ef48a..ef28a1cb64ae 100644
> --- a/drivers/staging/Kconfig
> +++ b/drivers/staging/Kconfig
> @@ -110,4 +110,6 @@ source "drivers/staging/ccree/Kconfig"
>  
>  source "drivers/staging/typec/Kconfig"
>  
> +source "drivers/staging/vboxvideo/Kconfig"
> +
>  endif # STAGING
> diff --git a/drivers/staging/Makefile b/drivers/staging/Makefile
> index b93e6f5f0f6e..2918580bdb9e 100644
> --- a/drivers/staging/Makefile
> +++ b/drivers/staging/Makefile
> @@ -44,3 +44,4 @@ obj-$(CONFIG_KS7010)		+= ks7010/
>  obj-$(CONFIG_GREYBUS)		+= greybus/
>  obj-$(CONFIG_BCM2835_VCHIQ)	+= vc04_services/
>  obj-$(CONFIG_CRYPTO_DEV_CCREE)	+= ccree/
> +obj-$(CONFIG_DRM_VBOXVIDEO)	+= vboxvideo/
> diff --git a/drivers/staging/vboxvideo/Kconfig b/drivers/staging/vboxvideo/Kconfig
> new file mode 100644
> index 000000000000..a52746f9a670
> --- /dev/null
> +++ b/drivers/staging/vboxvideo/Kconfig
> @@ -0,0 +1,12 @@
> +config DRM_VBOXVIDEO
> +	tristate "Virtual Box Graphics Card"
> +	depends on DRM && X86 && PCI
> +	select DRM_KMS_HELPER
> +	help
> +	  This is a KMS driver for the virtual Graphics Card used in
> +	  Virtual Box virtual machines.
> +
> +	  Although it is possible to build this driver into the kernel,
> +	  it is advised to build it as a module, so that it can be updated
> +	  independently of the kernel. Select M to build this driver as a
> +	  module and add support for these devices via drm/kms interfaces.
> diff --git a/drivers/staging/vboxvideo/Makefile b/drivers/staging/vboxvideo/Makefile
> new file mode 100644
> index 000000000000..2d0b3bc7ad73
> --- /dev/null
> +++ b/drivers/staging/vboxvideo/Makefile
> @@ -0,0 +1,7 @@
> +ccflags-y := -Iinclude/drm
> +
> +vboxvideo-y :=  hgsmi_base.o modesetting.o vbva_base.o \
> +		vbox_drv.o vbox_fb.o vbox_hgsmi.o vbox_irq.o vbox_main.o \
> +		vbox_mode.o vbox_prime.o vbox_ttm.o
> +
> +obj-$(CONFIG_DRM_VBOXVIDEO) += vboxvideo.o
> diff --git a/drivers/staging/vboxvideo/TODO b/drivers/staging/vboxvideo/TODO
> new file mode 100644
> index 000000000000..ce764309b079
> --- /dev/null
> +++ b/drivers/staging/vboxvideo/TODO
> @@ -0,0 +1,9 @@
> +TODO:
> +-Move the driver over to the atomic API
> +-Stop using old load / unload drm_driver hooks
> +-Get a full review from the drm-maintainers on dri-devel done on this driver
> +-Extend this TODO with the results of that review
> +
> +Please send any patches to Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>,
> +Hans de Goede <hdegoede@xxxxxxxxxx> and
> +Michael Thayer <michael.thayer@xxxxxxxxxx>.
> diff --git a/drivers/staging/vboxvideo/hgsmi_base.c b/drivers/staging/vboxvideo/hgsmi_base.c
> new file mode 100644
> index 000000000000..50a0bb8bd99c
> --- /dev/null
> +++ b/drivers/staging/vboxvideo/hgsmi_base.c
> @@ -0,0 +1,244 @@
> +/*
> + * Copyright (C) 2006-2017 Oracle Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
> + * OTHER DEALINGS IN THE SOFTWARE.
> + */
> +
> +#include "vbox_drv.h"
> +#include "vbox_err.h"
> +#include "vboxvideo_guest.h"
> +#include "vboxvideo_vbe.h"
> +#include "hgsmi_channels.h"
> +#include "hgsmi_ch_setup.h"
> +
> +/**
> + * Inform the host of the location of the host flags in VRAM via an HGSMI cmd.
> + * @param    ctx          the context of the guest heap to use.
> + * @param    location     the offset chosen for the flags within guest VRAM.
> + * @returns 0 on success, -errno on failure
> + */
> +int hgsmi_report_flags_location(struct gen_pool *ctx, u32 location)
> +{
> +	struct hgsmi_buffer_location *p;
> +
> +	p = hgsmi_buffer_alloc(ctx, sizeof(*p), HGSMI_CH_HGSMI,
> +			       HGSMI_CC_HOST_FLAGS_LOCATION);
> +	if (!p)
> +		return -ENOMEM;
> +
> +	p->buf_location = location;
> +	p->buf_len = sizeof(struct hgsmi_host_flags);
> +
> +	hgsmi_buffer_submit(ctx, p);
> +	hgsmi_buffer_free(ctx, p);
> +
> +	return 0;
> +}
> +
> +/**
> + * Notify the host of HGSMI-related guest capabilities via an HGSMI command.
> + * @param    ctx                 the context of the guest heap to use.
> + * @param    caps                the capabilities to report, see vbva_caps.
> + * @returns 0 on success, -errno on failure
> + */
> +int hgsmi_send_caps_info(struct gen_pool *ctx, u32 caps)
> +{
> +	struct vbva_caps *p;
> +
> +	p = hgsmi_buffer_alloc(ctx, sizeof(*p), HGSMI_CH_VBVA, VBVA_INFO_CAPS);
> +	if (!p)
> +		return -ENOMEM;
> +
> +	p->rc = VERR_NOT_IMPLEMENTED;
> +	p->caps = caps;
> +
> +	hgsmi_buffer_submit(ctx, p);
> +
> +	WARN_ON_ONCE(RT_FAILURE(p->rc));
> +
> +	hgsmi_buffer_free(ctx, p);
> +
> +	return 0;
> +}
> +
> +int hgsmi_test_query_conf(struct gen_pool *ctx)
> +{
> +	u32 value = 0;
> +	int rc;
> +
> +	rc = hgsmi_query_conf(ctx, U32_MAX, &value);
> +
> +	return (RT_SUCCESS(rc) && value == U32_MAX) ? 0 : -EIO;

hgsmi_query_conf() returns normal error codes.  Preserve the error code.


	rc = hgsmi_query_conf(ctx, U32_MAX, &value);
	if (rc)
		return rc;
	if (value != U32_MAX)
		return -EIO;

	return 0;

> +}
> +
> +/**
> + * Query the host for an HGSMI configuration parameter via an HGSMI command.
> + * @param  ctx        the context containing the heap used
> + * @param  index      the index of the parameter to query,
> + *                    @see vbva_conf32::index
> + * @param  value_ret  where to store the value of the parameter on success
> + * @returns 0 on success, -errno on failure
> + */
> +int hgsmi_query_conf(struct gen_pool *ctx, u32 index, u32 *value_ret)
> +{
> +	struct vbva_conf32 *p;
> +
> +	p = hgsmi_buffer_alloc(ctx, sizeof(*p), HGSMI_CH_VBVA,
> +			       VBVA_QUERY_CONF32);
> +	if (!p)
> +		return -ENOMEM;
> +
> +	p->index = index;
> +	p->value = U32_MAX;
> +
> +	hgsmi_buffer_submit(ctx, p);
> +
> +	*value_ret = p->value;
> +
> +	hgsmi_buffer_free(ctx, p);
> +
> +	return 0;
> +}
> +
> +/**
> + * Pass the host a new mouse pointer shape via an HGSMI command.
> + *
> + * @param  ctx      the context containing the heap to be used
> + * @param  flags    cursor flags, @see VMMDevReqMousePointer::flags
> + * @param  hot_x    horizontal position of the hot spot
> + * @param  hot_y    vertical position of the hot spot
> + * @param  width    width in pixels of the cursor
> + * @param  height   height in pixels of the cursor
> + * @param  pixels   pixel data, @see VMMDevReqMousePointer for the format
> + * @param  len      size in bytes of the pixel data
> + * @returns 0 on success, -errno on failure
> + */
> +int hgsmi_update_pointer_shape(struct gen_pool *ctx, u32 flags,
> +			       u32 hot_x, u32 hot_y, u32 width, u32 height,
> +			       u8 *pixels, u32 len)
> +{
> +	struct vbva_mouse_pointer_shape *p;
> +	u32 pixel_len = 0;
> +	int rc;
> +
> +	if (flags & VBOX_MOUSE_POINTER_SHAPE) {
> +		/*
> +		 * Size of the pointer data:
> +		 * sizeof (AND mask) + sizeof (XOR_MASK)
> +		 */
> +		pixel_len = ((((width + 7) / 8) * height + 3) & ~3) +
> +			 width * 4 * height;
> +		if (pixel_len > len)
> +			return -EINVAL;
> +
> +		/*
> +		 * If shape is supplied, then always create the pointer visible.
> +		 * See comments in 'vboxUpdatePointerShape'
> +		 */
> +		flags |= VBOX_MOUSE_POINTER_VISIBLE;
> +	}
> +
> +	p = hgsmi_buffer_alloc(ctx, sizeof(*p) + pixel_len, HGSMI_CH_VBVA,
> +			       VBVA_MOUSE_POINTER_SHAPE);
> +	if (!p)
> +		return -ENOMEM;
> +
> +	p->result = VINF_SUCCESS;
> +	p->flags = flags;
> +	p->hot_x = hot_x;
> +	p->hot_y = hot_y;
> +	p->width = width;
> +	p->height = height;
> +	if (pixel_len)
> +		memcpy(p->data, pixels, pixel_len);
> +
> +	hgsmi_buffer_submit(ctx, p);
> +
> +	switch (p->result) {
> +	case VINF_SUCCESS:
> +		rc = 0;
> +		break;
> +	case VERR_NO_MEMORY:
> +		rc = -ENOMEM;
> +		break;
> +	case VERR_NOT_SUPPORTED:
> +		rc = -EBUSY;
> +		break;
> +	default:
> +		rc = -EINVAL;
> +	}
> +
> +	hgsmi_buffer_free(ctx, p);
> +
> +	return rc;
> +}
> +
> +/**
> + * Report the guest cursor position.  The host may wish to use this information
> + * to re-position its own cursor (though this is currently unlikely).  The
> + * current host cursor position is returned.
> + * @param  ctx              The context containing the heap used.
> + * @param  report_position  Are we reporting a position?
> + * @param  x                Guest cursor X position.
> + * @param  y                Guest cursor Y position.
> + * @param  x_host           Host cursor X position is stored here.  Optional.
> + * @param  y_host           Host cursor Y position is stored here.  Optional.
> + * @returns 0 on success, -errno on failure
> + */
> +int hgsmi_cursor_position(struct gen_pool *ctx, bool report_position,
> +			  u32 x, u32 y, u32 *x_host, u32 *y_host)
> +{
> +	struct vbva_cursor_position *p;
> +
> +	p = hgsmi_buffer_alloc(ctx, sizeof(*p), HGSMI_CH_VBVA,
> +			       VBVA_CURSOR_POSITION);
> +	if (!p)
> +		return -ENOMEM;
> +
> +	p->report_position = !!report_position;

report_position is already bool so no need for the !!.

> +	p->x = x;
> +	p->y = y;
> +
> +	hgsmi_buffer_submit(ctx, p);
> +
> +	*x_host = p->x;
> +	*y_host = p->y;
> +
> +	hgsmi_buffer_free(ctx, p);
> +
> +	return 0;
> +}
> +
> +/**
> + * @todo Mouse pointer position to be read from VMMDev memory, address of the
> + * memory region can be queried from VMMDev via an IOCTL. This VMMDev memory
> + * region will contain host information which is needed by the guest.
> + *
> + * Reading will not cause a switch to the host.
> + *
> + * Have to take into account:
> + *  * synchronization: host must write to the memory only from EMT,
> + *    large structures must be read under flag, which tells the host
> + *    that the guest is currently reading the memory (OWNER flag?).
> + *  * guest writes: maybe allocate a page for the host info and make
> + *    the page readonly for the guest.
> + *  * the information should be available only for additions drivers.
> + *  * VMMDev additions driver will inform the host which version of the info
> + *    it expects, host must support all versions.
> + */
> diff --git a/drivers/staging/vboxvideo/hgsmi_ch_setup.h b/drivers/staging/vboxvideo/hgsmi_ch_setup.h
> new file mode 100644
> index 000000000000..8e6d9e11a69c
> --- /dev/null
> +++ b/drivers/staging/vboxvideo/hgsmi_ch_setup.h
> @@ -0,0 +1,66 @@
> +/*
> + * Copyright (C) 2006-2017 Oracle Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
> + * OTHER DEALINGS IN THE SOFTWARE.
> + */
> +
> +#ifndef __HGSMI_CH_SETUP_H__
> +#define __HGSMI_CH_SETUP_H__
> +
> +/*
> + * Tell the host the location of hgsmi_host_flags structure, where the host
> + * can write information about pending buffers, etc, and which can be quickly
> + * polled by the guest without needing to use port I/O.
> + */
> +#define HGSMI_CC_HOST_FLAGS_LOCATION 0
> +
> +struct hgsmi_buffer_location {
> +	u32 buf_location;
> +	u32 buf_len;
> +} __packed;
> +
> +/* HGSMI setup and configuration data structures. */
> +/* host->guest commands pending, should be accessed under FIFO lock only */
> +#define HGSMIHOSTFLAGS_COMMANDS_PENDING    0x01u
> +/* IRQ is fired, should be accessed under VGAState::lock only  */
> +#define HGSMIHOSTFLAGS_IRQ                 0x02u
> +/* vsync interrupt flag, should be accessed under VGAState::lock only */
> +#define HGSMIHOSTFLAGS_VSYNC               0x10u
> +/** monitor hotplug flag, should be accessed under VGAState::lock only */
> +#define HGSMIHOSTFLAGS_HOTPLUG             0x20u
> +/**
> + * Cursor capability state change flag, should be accessed under
> + * VGAState::lock only. @see vbva_conf32.
> + */
> +#define HGSMIHOSTFLAGS_CURSOR_CAPABILITIES 0x40u
> +
> +struct hgsmi_host_flags {
> +	/*
> +	 * Host flags can be accessed and modified in multiple threads
> +	 * concurrently, e.g. CrOpenGL HGCM and GUI threads when completing
> +	 * HGSMI 3D and Video Accel respectively, EMT thread when dealing with
> +	 * HGSMI command processing, etc.
> +	 * Besides setting/clearing flags atomically, some flags have their
> +	 * own special sync restrictions, see comments for flags above.
> +	 */
> +	u32 host_flags;
> +	u32 reserved[3];
> +} __packed;
> +
> +#endif
> diff --git a/drivers/staging/vboxvideo/hgsmi_channels.h b/drivers/staging/vboxvideo/hgsmi_channels.h
> new file mode 100644
> index 000000000000..a2a34b2167b4
> --- /dev/null
> +++ b/drivers/staging/vboxvideo/hgsmi_channels.h
> @@ -0,0 +1,53 @@
> +/*
> + * Copyright (C) 2006-2017 Oracle Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
> + * OTHER DEALINGS IN THE SOFTWARE.
> + */
> +
> +#ifndef __HGSMI_CHANNELS_H__
> +#define __HGSMI_CHANNELS_H__
> +
> +/*
> + * Each channel has an 8 bit identifier. There are a number of predefined
> + * (hardcoded) channels.
> + *
> + * HGSMI_CH_HGSMI channel can be used to map a string channel identifier
> + * to a free 16 bit numerical value. Values are allocated in the range
> + * [HGSMI_CH_STRING_FIRST;HGSMI_CH_STRING_LAST].
> + */
> +
> +/* A reserved channel value */
> +#define HGSMI_CH_RESERVED				0x00
> +/* HGSMI: setup and configuration */
> +#define HGSMI_CH_HGSMI					0x01
> +/* Graphics: VBVA */
> +#define HGSMI_CH_VBVA					0x02
> +/* Graphics: Seamless with a single guest region */
> +#define HGSMI_CH_SEAMLESS				0x03
> +/* Graphics: Seamless with separate host windows */
> +#define HGSMI_CH_SEAMLESS2				0x04
> +/* Graphics: OpenGL HW acceleration */
> +#define HGSMI_CH_OPENGL					0x05
> +
> +/* The first channel index to be used for string mappings (inclusive) */
> +#define HGSMI_CH_STRING_FIRST				0x20
> +/* The last channel index for string mappings (inclusive) */
> +#define HGSMI_CH_STRING_LAST				0xff
> +
> +#endif
> diff --git a/drivers/staging/vboxvideo/hgsmi_defs.h b/drivers/staging/vboxvideo/hgsmi_defs.h
> new file mode 100644
> index 000000000000..5b21fb974d20
> --- /dev/null
> +++ b/drivers/staging/vboxvideo/hgsmi_defs.h
> @@ -0,0 +1,92 @@
> +/*
> + * Copyright (C) 2006-2017 Oracle Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
> + * OTHER DEALINGS IN THE SOFTWARE.
> + */
> +
> +#ifndef __HGSMI_DEFS_H__
> +#define __HGSMI_DEFS_H__
> +
> +/* Buffer sequence type mask. */
> +#define HGSMI_BUFFER_HEADER_F_SEQ_MASK     0x03
> +/* Single buffer, not a part of a sequence. */
> +#define HGSMI_BUFFER_HEADER_F_SEQ_SINGLE   0x00
> +/* The first buffer in a sequence. */
> +#define HGSMI_BUFFER_HEADER_F_SEQ_START    0x01
> +/* A middle buffer in a sequence. */
> +#define HGSMI_BUFFER_HEADER_F_SEQ_CONTINUE 0x02
> +/* The last buffer in a sequence. */
> +#define HGSMI_BUFFER_HEADER_F_SEQ_END      0x03
> +
> +/* 16 bytes buffer header. */
> +struct hgsmi_buffer_header {
> +	u32 data_size;		/* Size of data that follows the header. */
> +	u8 flags;		/* HGSMI_BUFFER_HEADER_F_* */
> +	u8 channel;		/* The channel the data must be routed to. */
> +	u16 channel_info;	/* Opaque to the HGSMI, used by the channel. */
> +
> +	union {
> +		/* Opaque placeholder to make the union 8 bytes. */
> +		u8 header_data[8];
> +
> +		/* HGSMI_BUFFER_HEADER_F_SEQ_SINGLE */
> +		struct {
> +			u32 reserved1;	/* A reserved field, initialize to 0. */
> +			u32 reserved2;	/* A reserved field, initialize to 0. */
> +		} buffer;
> +
> +		/* HGSMI_BUFFER_HEADER_F_SEQ_START */
> +		struct {
> +			/* Must be the same for all buffers in the sequence. */
> +			u32 sequence_number;
> +			/* The total size of the sequence. */
> +			u32 sequence_size;
> +		} sequence_start;
> +
> +		/*
> +		 * HGSMI_BUFFER_HEADER_F_SEQ_CONTINUE and
> +		 * HGSMI_BUFFER_HEADER_F_SEQ_END
> +		 */
> +		struct {
> +			/* Must be the same for all buffers in the sequence. */
> +			u32 sequence_number;
> +			/* Data offset in the entire sequence. */
> +			u32 sequence_offset;
> +		} sequence_continue;
> +	} u;
> +} __packed;
> +
> +/* 8 bytes buffer tail. */
> +struct hgsmi_buffer_tail {
> +	/* Reserved, must be initialized to 0. */
> +	u32 reserved;
> +	/*
> +	 * One-at-a-Time Hash: http://www.burtleburtle.net/bob/hash/doobs.html
> +	 * Over the header, offset and for first 4 bytes of the tail.
> +	 */
> +	u32 checksum;
> +} __packed;
> +
> +/*
> + * The size of the array of channels. Array indexes are u8.
> + * Note: the value must not be changed.
> + */
> +#define HGSMI_NUMBER_OF_CHANNELS 0x100
> +
> +#endif
> diff --git a/drivers/staging/vboxvideo/modesetting.c b/drivers/staging/vboxvideo/modesetting.c
> new file mode 100644
> index 000000000000..7616b8aab23a
> --- /dev/null
> +++ b/drivers/staging/vboxvideo/modesetting.c
> @@ -0,0 +1,142 @@
> +/*
> + * Copyright (C) 2006-2017 Oracle Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
> + * OTHER DEALINGS IN THE SOFTWARE.
> + */
> +
> +#include "vbox_drv.h"
> +#include "vbox_err.h"
> +#include "vboxvideo_guest.h"
> +#include "vboxvideo_vbe.h"
> +#include "hgsmi_channels.h"
> +
> +/**
> + * Set a video mode via an HGSMI request.  The views must have been
> + * initialised first using @a VBoxHGSMISendViewInfo and if the mode is being
> + * set on the first display then it must be set first using registers.
> + * @param  ctx           The context containing the heap to use
> + * @param  display       The screen number
> + * @param  origin_x      The horizontal displacement relative to the first scrn
> + * @param  origin_y      The vertical displacement relative to the first screen
> + * @param  start_offset  The offset of the visible area of the framebuffer
> + *                       relative to the framebuffer start
> + * @param  pitch         The offset in bytes between the starts of two adjacent
> + *                       scan lines in video RAM
> + * @param  width         The mode width
> + * @param  height        The mode height
> + * @param  bpp           The colour depth of the mode
> + * @param  flags         Flags
> + */
> +void hgsmi_process_display_info(struct gen_pool *ctx, u32 display,
> +				s32 origin_x, s32 origin_y, u32 start_offset,
> +				u32 pitch, u32 width, u32 height,
> +				u16 bpp, u16 flags)
> +{
> +	struct vbva_infoscreen *p;
> +
> +	p = hgsmi_buffer_alloc(ctx, sizeof(*p), HGSMI_CH_VBVA,
> +			       VBVA_INFO_SCREEN);
> +	if (!p)
> +		return;
> +
> +	p->view_index = display;
> +	p->origin_x = origin_x;
> +	p->origin_y = origin_y;
> +	p->start_offset = start_offset;
> +	p->line_size = pitch;
> +	p->width = width;
> +	p->height = height;
> +	p->bits_per_pixel = bpp;
> +	p->flags = flags;
> +
> +	hgsmi_buffer_submit(ctx, p);
> +	hgsmi_buffer_free(ctx, p);
> +}
> +
> +/**
> + * Report the rectangle relative to which absolute pointer events should be
> + * expressed.  This information remains valid until the next VBVA resize event
> + * for any screen, at which time it is reset to the bounding rectangle of all
> + * virtual screens.
> + * @param  ctx       The context containing the heap to use.
> + * @param  origin_x  Upper left X co-ordinate relative to the first screen.
> + * @param  origin_y  Upper left Y co-ordinate relative to the first screen.
> + * @param  width     Rectangle width.
> + * @param  height    Rectangle height.
> + * @returns 0 on success, -errno on failure
> + */
> +int hgsmi_update_input_mapping(struct gen_pool *ctx, s32 origin_x, s32 origin_y,
> +			       u32 width, u32 height)
> +{
> +	struct vbva_report_input_mapping *p;
> +
> +	p = hgsmi_buffer_alloc(ctx, sizeof(*p), HGSMI_CH_VBVA,
> +			       VBVA_REPORT_INPUT_MAPPING);
> +	if (!p)
> +		return -ENOMEM;
> +
> +	p->x = origin_x;
> +	p->y = origin_y;
> +	p->cx = width;
> +	p->cy = height;
> +
> +	hgsmi_buffer_submit(ctx, p);
> +	hgsmi_buffer_free(ctx, p);
> +
> +	return 0;
> +}
> +
> +/**
> + * Get most recent video mode hints.
> + * @param  ctx      The context containing the heap to use.
> + * @param  screens  The number of screens to query hints for, starting at 0.
> + * @param  hints    Array of vbva_modehint structures for receiving the hints.
> + * @returns 0 on success, -errno on failure
> + */
> +int hgsmi_get_mode_hints(struct gen_pool *ctx, unsigned int screens,
> +			 struct vbva_modehint *hints)
> +{
> +	struct vbva_query_mode_hints *p;
> +	size_t size;
> +
> +	if (WARN_ON(!hints))
> +		return -EINVAL;
> +
> +	size = screens * sizeof(struct vbva_modehint);
> +	p = hgsmi_buffer_alloc(ctx, sizeof(*p) + size, HGSMI_CH_VBVA,
> +			       VBVA_QUERY_MODE_HINTS);
> +	if (!p)
> +		return -ENOMEM;
> +
> +	p->hints_queried_count = screens;
> +	p->hint_structure_guest_size = sizeof(struct vbva_modehint);
> +	p->rc = VERR_NOT_SUPPORTED;
> +
> +	hgsmi_buffer_submit(ctx, p);
> +
> +	if (RT_FAILURE(p->rc)) {
> +		hgsmi_buffer_free(ctx, p);
> +		return -EIO;
> +	}
> +
> +	memcpy(hints, ((u8 *)p) + sizeof(struct vbva_query_mode_hints), size);
> +	hgsmi_buffer_free(ctx, p);
> +
> +	return 0;
> +}
> diff --git a/drivers/staging/vboxvideo/vbox_drv.c b/drivers/staging/vboxvideo/vbox_drv.c
> new file mode 100644
> index 000000000000..05c973db77a4
> --- /dev/null
> +++ b/drivers/staging/vboxvideo/vbox_drv.c
> @@ -0,0 +1,287 @@
> +/*
> + * Copyright (C) 2013-2017 Oracle Corporation
> + * This file is based on ast_drv.c
> + * Copyright 2012 Red Hat Inc.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the
> + * "Software"), to deal in the Software without restriction, including
> + * without limitation the rights to use, copy, modify, merge, publish,
> + * distribute, sub license, and/or sell copies of the Software, and to
> + * permit persons to whom the Software is furnished to do so, subject to
> + * the following conditions:
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
> + * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
> + * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
> + * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
> + * USE OR OTHER DEALINGS IN THE SOFTWARE.
> + *
> + * The above copyright notice and this permission notice (including the
> + * next paragraph) shall be included in all copies or substantial portions
> + * of the Software.
> + *
> + * Authors: Dave Airlie <airlied@xxxxxxxxxx>
> + *          Michael Thayer <michael.thayer@xxxxxxxxxx>,
> + *          Hans de Goede <hdegoede@xxxxxxxxxx>
> + */
> +#include <linux/module.h>
> +#include <linux/console.h>
> +#include <linux/vt_kern.h>
> +
> +#include <drm/drmP.h>
> +#include <drm/drm_crtc_helper.h>
> +
> +#include "vbox_drv.h"
> +
> +int vbox_modeset = -1;
> +
> +MODULE_PARM_DESC(modeset, "Disable/Enable modesetting");
> +module_param_named(modeset, vbox_modeset, int, 0400);
> +
> +static struct drm_driver driver;
> +
> +static const struct pci_device_id pciidlist[] = {
> +	{ 0x80ee, 0xbeef, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 },
> +	{ 0, 0, 0},
> +};
> +MODULE_DEVICE_TABLE(pci, pciidlist);
> +
> +static int vbox_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
> +{
> +	return drm_get_pci_dev(pdev, ent, &driver);
> +}
> +
> +static void vbox_pci_remove(struct pci_dev *pdev)
> +{
> +	struct drm_device *dev = pci_get_drvdata(pdev);
> +
> +	drm_put_dev(dev);
> +}
> +
> +static int vbox_drm_freeze(struct drm_device *dev)
> +{
> +	drm_kms_helper_poll_disable(dev);
> +
> +	pci_save_state(dev->pdev);
> +
> +	console_lock();
> +	vbox_fbdev_set_suspend(dev, 1);
> +	console_unlock();
> +
> +	return 0;
> +}
> +
> +static int vbox_drm_thaw(struct drm_device *dev)
> +{
> +	drm_mode_config_reset(dev);
> +	drm_helper_resume_force_mode(dev);
> +
> +	console_lock();
> +	vbox_fbdev_set_suspend(dev, 0);
> +	console_unlock();
> +
> +	return 0;
> +}
> +
> +static int vbox_drm_resume(struct drm_device *dev)
> +{
> +	int ret;
> +
> +	if (pci_enable_device(dev->pdev))
> +		return -EIO;
> +
> +	ret = vbox_drm_thaw(dev);
> +	if (ret)
> +		return ret;
> +
> +	drm_kms_helper_poll_enable(dev);
> +
> +	return 0;
> +}
> +
> +static int vbox_pm_suspend(struct device *dev)
> +{
> +	struct pci_dev *pdev = to_pci_dev(dev);
> +	struct drm_device *ddev = pci_get_drvdata(pdev);
> +	int error;
> +
> +	error = vbox_drm_freeze(ddev);
> +	if (error)
> +		return error;
> +
> +	pci_disable_device(pdev);
> +	pci_set_power_state(pdev, PCI_D3hot);
> +
> +	return 0;
> +}
> +
> +static int vbox_pm_resume(struct device *dev)
> +{
> +	struct drm_device *ddev = pci_get_drvdata(to_pci_dev(dev));
> +
> +	return vbox_drm_resume(ddev);
> +}
> +
> +static int vbox_pm_freeze(struct device *dev)
> +{
> +	struct pci_dev *pdev = to_pci_dev(dev);
> +	struct drm_device *ddev = pci_get_drvdata(pdev);
> +
> +	if (!ddev || !ddev->dev_private)
> +		return -ENODEV;
> +
> +	return vbox_drm_freeze(ddev);
> +}
> +
> +static int vbox_pm_thaw(struct device *dev)
> +{
> +	struct drm_device *ddev = pci_get_drvdata(to_pci_dev(dev));
> +
> +	return vbox_drm_thaw(ddev);
> +}
> +
> +static int vbox_pm_poweroff(struct device *dev)
> +{
> +	struct drm_device *ddev = pci_get_drvdata(to_pci_dev(dev));
> +
> +	return vbox_drm_freeze(ddev);
> +}
> +
> +static const struct dev_pm_ops vbox_pm_ops = {
> +	.suspend = vbox_pm_suspend,
> +	.resume = vbox_pm_resume,
> +	.freeze = vbox_pm_freeze,
> +	.thaw = vbox_pm_thaw,
> +	.poweroff = vbox_pm_poweroff,
> +	.restore = vbox_pm_resume,
> +};
> +
> +static struct pci_driver vbox_pci_driver = {
> +	.name = DRIVER_NAME,
> +	.id_table = pciidlist,
> +	.probe = vbox_pci_probe,
> +	.remove = vbox_pci_remove,
> +	.driver.pm = &vbox_pm_ops,
> +};
> +
> +static const struct file_operations vbox_fops = {
> +	.owner = THIS_MODULE,
> +	.open = drm_open,
> +	.release = drm_release,
> +	.unlocked_ioctl = drm_ioctl,
> +	.mmap = vbox_mmap,
> +	.poll = drm_poll,
> +#ifdef CONFIG_COMPAT
> +	.compat_ioctl = drm_compat_ioctl,
> +#endif
> +	.read = drm_read,
> +};
> +
> +static int vbox_master_set(struct drm_device *dev,
> +			   struct drm_file *file_priv, bool from_open)
> +{
> +	struct vbox_private *vbox = dev->dev_private;
> +
> +	/*
> +	 * We do not yet know whether the new owner can handle hotplug, so we
> +	 * do not advertise dynamic modes on the first query and send a
> +	 * tentative hotplug notification after that to see if they query again.
> +	 */
> +	vbox->initial_mode_queried = false;
> +
> +	mutex_lock(&vbox->hw_mutex);
> +	/*
> +	 * Disable VBVA when someone releases master in case the next person
> +	 * tries to do VESA.
> +	 */
> +	/** @todo work out if anyone is likely to and whether it will work. */
> +	/*
> +	 * Update: we also disable it because if the new master does not do
> +	 * dirty rectangle reporting (e.g. old versions of Plymouth) then at
> +	 * least the first screen will still be updated. We enable it as soon
> +	 * as we receive a dirty rectangle report.
> +	 */
> +	vbox_disable_accel(vbox);
> +	mutex_unlock(&vbox->hw_mutex);
> +
> +	return 0;
> +}
> +
> +static void vbox_master_drop(struct drm_device *dev, struct drm_file *file_priv)
> +{
> +	struct vbox_private *vbox = dev->dev_private;
> +
> +	/* See vbox_master_set() */
> +	vbox->initial_mode_queried = false;
> +
> +	mutex_lock(&vbox->hw_mutex);
> +	vbox_disable_accel(vbox);
> +	mutex_unlock(&vbox->hw_mutex);
> +}
> +
> +static struct drm_driver driver = {
> +	.driver_features =
> +	    DRIVER_MODESET | DRIVER_GEM | DRIVER_HAVE_IRQ | DRIVER_IRQ_SHARED |
> +	    DRIVER_PRIME,
> +	.dev_priv_size = 0,
> +
> +	.load = vbox_driver_load,
> +	.unload = vbox_driver_unload,
> +	.lastclose = vbox_driver_lastclose,
> +	.master_set = vbox_master_set,
> +	.master_drop = vbox_master_drop,
> +	.set_busid = drm_pci_set_busid,
> +
> +	.fops = &vbox_fops,
> +	.irq_handler = vbox_irq_handler,
> +	.name = DRIVER_NAME,
> +	.desc = DRIVER_DESC,
> +	.date = DRIVER_DATE,
> +	.major = DRIVER_MAJOR,
> +	.minor = DRIVER_MINOR,
> +	.patchlevel = DRIVER_PATCHLEVEL,
> +
> +	.gem_free_object = vbox_gem_free_object,
> +	.dumb_create = vbox_dumb_create,
> +	.dumb_map_offset = vbox_dumb_mmap_offset,
> +	.dumb_destroy = drm_gem_dumb_destroy,
> +	.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
> +	.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
> +	.gem_prime_export = drm_gem_prime_export,
> +	.gem_prime_import = drm_gem_prime_import,
> +	.gem_prime_pin = vbox_gem_prime_pin,
> +	.gem_prime_unpin = vbox_gem_prime_unpin,
> +	.gem_prime_get_sg_table = vbox_gem_prime_get_sg_table,
> +	.gem_prime_import_sg_table = vbox_gem_prime_import_sg_table,
> +	.gem_prime_vmap = vbox_gem_prime_vmap,
> +	.gem_prime_vunmap = vbox_gem_prime_vunmap,
> +	.gem_prime_mmap = vbox_gem_prime_mmap,
> +};
> +
> +static int __init vbox_init(void)
> +{
> +#ifdef CONFIG_VGA_CONSOLE
> +	if (vgacon_text_force() && vbox_modeset == -1)
> +		return -EINVAL;
> +#endif
> +
> +	if (vbox_modeset == 0)
> +		return -EINVAL;
> +
> +	return drm_pci_init(&driver, &vbox_pci_driver);
> +}
> +
> +static void __exit vbox_exit(void)
> +{
> +	drm_pci_exit(&driver, &vbox_pci_driver);
> +}
> +
> +module_init(vbox_init);
> +module_exit(vbox_exit);
> +
> +MODULE_AUTHOR("Oracle Corporation");
> +MODULE_DESCRIPTION(DRIVER_DESC);
> +MODULE_LICENSE("GPL and additional rights");
> diff --git a/drivers/staging/vboxvideo/vbox_drv.h b/drivers/staging/vboxvideo/vbox_drv.h
> new file mode 100644
> index 000000000000..d63d8a9e2c18
> --- /dev/null
> +++ b/drivers/staging/vboxvideo/vbox_drv.h
> @@ -0,0 +1,299 @@
> +/*
> + * Copyright (C) 2013-2017 Oracle Corporation
> + * This file is based on ast_drv.h
> + * Copyright 2012 Red Hat Inc.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the
> + * "Software"), to deal in the Software without restriction, including
> + * without limitation the rights to use, copy, modify, merge, publish,
> + * distribute, sub license, and/or sell copies of the Software, and to
> + * permit persons to whom the Software is furnished to do so, subject to
> + * the following conditions:
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
> + * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
> + * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
> + * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
> + * USE OR OTHER DEALINGS IN THE SOFTWARE.
> + *
> + * The above copyright notice and this permission notice (including the
> + * next paragraph) shall be included in all copies or substantial portions
> + * of the Software.
> + *
> + * Authors: Dave Airlie <airlied@xxxxxxxxxx>
> + *          Michael Thayer <michael.thayer@xxxxxxxxxx>,
> + *          Hans de Goede <hdegoede@xxxxxxxxxx>
> + */
> +#ifndef __VBOX_DRV_H__
> +#define __VBOX_DRV_H__
> +
> +#include <linux/genalloc.h>
> +#include <linux/io.h>
> +#include <linux/string.h>
> +#include <linux/version.h>
> +
> +#include <drm/drmP.h>
> +#include <drm/drm_encoder.h>
> +#include <drm/drm_fb_helper.h>
> +#include <drm/drm_gem.h>
> +
> +#include <drm/ttm/ttm_bo_api.h>
> +#include <drm/ttm/ttm_bo_driver.h>
> +#include <drm/ttm/ttm_placement.h>
> +#include <drm/ttm/ttm_memory.h>
> +#include <drm/ttm/ttm_module.h>
> +
> +#include "vboxvideo_guest.h"
> +#include "vboxvideo_vbe.h"
> +#include "hgsmi_ch_setup.h"
> +
> +#define DRIVER_NAME         "vboxvideo"
> +#define DRIVER_DESC         "Oracle VM VirtualBox Graphics Card"
> +#define DRIVER_DATE         "20130823"
> +
> +#define DRIVER_MAJOR        1
> +#define DRIVER_MINOR        0
> +#define DRIVER_PATCHLEVEL   0
> +
> +#define VBOX_MAX_CURSOR_WIDTH  64
> +#define VBOX_MAX_CURSOR_HEIGHT 64
> +#define CURSOR_PIXEL_COUNT (VBOX_MAX_CURSOR_WIDTH * VBOX_MAX_CURSOR_HEIGHT)
> +#define CURSOR_DATA_SIZE (CURSOR_PIXEL_COUNT * 4 + CURSOR_PIXEL_COUNT / 8)
> +
> +#define VBOX_MAX_SCREENS  32
> +
> +#define GUEST_HEAP_OFFSET(vbox) ((vbox)->full_vram_size - \
> +				 VBVA_ADAPTER_INFORMATION_SIZE)
> +#define GUEST_HEAP_SIZE   VBVA_ADAPTER_INFORMATION_SIZE
> +#define GUEST_HEAP_USABLE_SIZE (VBVA_ADAPTER_INFORMATION_SIZE - \
> +				sizeof(struct hgsmi_host_flags))
> +#define HOST_FLAGS_OFFSET GUEST_HEAP_USABLE_SIZE
> +
> +struct vbox_fbdev;
> +
> +struct vbox_private {
> +	struct drm_device *dev;
> +
> +	u8 __iomem *guest_heap;
> +	u8 __iomem *vbva_buffers;
> +	struct gen_pool *guest_pool;
> +	struct vbva_buf_ctx *vbva_info;
> +	bool any_pitch;
> +	u32 num_crtcs;
> +	/** Amount of available VRAM, including space used for buffers. */
> +	u32 full_vram_size;
> +	/** Amount of available VRAM, not including space used for buffers. */
> +	u32 available_vram_size;
> +	/** Array of structures for receiving mode hints. */
> +	struct vbva_modehint *last_mode_hints;
> +
> +	struct vbox_fbdev *fbdev;
> +
> +	int fb_mtrr;
> +
> +	struct {
> +		struct drm_global_reference mem_global_ref;
> +		struct ttm_bo_global_ref bo_global_ref;
> +		struct ttm_bo_device bdev;
> +		bool mm_initialised;
> +	} ttm;
> +
> +	struct mutex hw_mutex; /* protects modeset and accel/vbva accesses */
> +	bool isr_installed;
> +	/**
> +	 * We decide whether or not user-space supports display hot-plug
> +	 * depending on whether they react to a hot-plug event after the initial
> +	 * mode query.
> +	 */
> +	bool initial_mode_queried;
> +	struct work_struct hotplug_work;
> +	u32 input_mapping_width;
> +	u32 input_mapping_height;
> +	/**
> +	 * Is user-space using an X.Org-style layout of one large frame-buffer
> +	 * encompassing all screens, or is the fbdev console active?
> +	 */
> +	bool single_framebuffer;
> +	u32 cursor_width;
> +	u32 cursor_height;
> +	u32 cursor_hot_x;
> +	u32 cursor_hot_y;
> +	size_t cursor_data_size;
> +	u8 cursor_data[CURSOR_DATA_SIZE];
> +};
> +
> +#undef CURSOR_PIXEL_COUNT
> +#undef CURSOR_DATA_SIZE
> +
> +int vbox_driver_load(struct drm_device *dev, unsigned long flags);
> +void vbox_driver_unload(struct drm_device *dev);
> +void vbox_driver_lastclose(struct drm_device *dev);
> +
> +struct vbox_gem_object;
> +
> +struct vbox_connector {
> +	struct drm_connector base;
> +	char name[32];
> +	struct vbox_crtc *vbox_crtc;
> +	struct {
> +		u16 width;
> +		u16 height;
> +		bool disconnected;
> +	} mode_hint;
> +};
> +
> +struct vbox_crtc {
> +	struct drm_crtc base;
> +	bool blanked;
> +	bool disconnected;
> +	unsigned int crtc_id;
> +	u32 fb_offset;
> +	bool cursor_enabled;
> +	u16 x_hint;
> +	u16 y_hint;
> +};
> +
> +struct vbox_encoder {
> +	struct drm_encoder base;
> +};
> +
> +struct vbox_framebuffer {
> +	struct drm_framebuffer base;
> +	struct drm_gem_object *obj;
> +};
> +
> +struct vbox_fbdev {
> +	struct drm_fb_helper helper;
> +	struct vbox_framebuffer afb;
> +	int size;
> +	struct ttm_bo_kmap_obj mapping;
> +	int x1, y1, x2, y2;	/* dirty rect */
> +	spinlock_t dirty_lock;
> +};
> +
> +#define to_vbox_crtc(x) container_of(x, struct vbox_crtc, base)
> +#define to_vbox_connector(x) container_of(x, struct vbox_connector, base)
> +#define to_vbox_encoder(x) container_of(x, struct vbox_encoder, base)
> +#define to_vbox_framebuffer(x) container_of(x, struct vbox_framebuffer, base)
> +
> +int vbox_mode_init(struct drm_device *dev);
> +void vbox_mode_fini(struct drm_device *dev);
> +
> +#define DRM_MODE_FB_CMD drm_mode_fb_cmd2
> +#define CRTC_FB(crtc) ((crtc)->primary->fb)
> +
> +void vbox_enable_accel(struct vbox_private *vbox);
> +void vbox_disable_accel(struct vbox_private *vbox);
> +void vbox_report_caps(struct vbox_private *vbox);
> +
> +void vbox_framebuffer_dirty_rectangles(struct drm_framebuffer *fb,
> +				       struct drm_clip_rect *rects,
> +				       unsigned int num_rects);
> +
> +int vbox_framebuffer_init(struct drm_device *dev,
> +			  struct vbox_framebuffer *vbox_fb,
> +			  const struct DRM_MODE_FB_CMD *mode_cmd,
> +			  struct drm_gem_object *obj);
> +
> +int vbox_fbdev_init(struct drm_device *dev);
> +void vbox_fbdev_fini(struct drm_device *dev);
> +void vbox_fbdev_set_suspend(struct drm_device *dev, int state);
> +void vbox_fbdev_set_base(struct vbox_private *vbox, unsigned long gpu_addr);
> +
> +struct vbox_bo {
> +	struct ttm_buffer_object bo;
> +	struct ttm_placement placement;
> +	struct ttm_bo_kmap_obj kmap;
> +	struct drm_gem_object gem;
> +	struct ttm_place placements[3];
> +	int pin_count;
> +};
> +
> +#define gem_to_vbox_bo(gobj) container_of((gobj), struct vbox_bo, gem)
> +
> +static inline struct vbox_bo *vbox_bo(struct ttm_buffer_object *bo)
> +{
> +	return container_of(bo, struct vbox_bo, bo);
> +}
> +
> +#define to_vbox_obj(x) container_of(x, struct vbox_gem_object, base)
> +
> +int vbox_dumb_create(struct drm_file *file,
> +		     struct drm_device *dev,
> +		     struct drm_mode_create_dumb *args);
> +
> +void vbox_gem_free_object(struct drm_gem_object *obj);
> +int vbox_dumb_mmap_offset(struct drm_file *file,
> +			  struct drm_device *dev,
> +			  u32 handle, u64 *offset);
> +
> +#define DRM_FILE_PAGE_OFFSET (0x10000000ULL >> PAGE_SHIFT)
> +
> +int vbox_mm_init(struct vbox_private *vbox);
> +void vbox_mm_fini(struct vbox_private *vbox);
> +
> +int vbox_bo_create(struct drm_device *dev, int size, int align,
> +		   u32 flags, struct vbox_bo **pvboxbo);
> +
> +int vbox_gem_create(struct drm_device *dev,
> +		    u32 size, bool iskernel, struct drm_gem_object **obj);
> +
> +int vbox_bo_pin(struct vbox_bo *bo, u32 pl_flag, u64 *gpu_addr);
> +int vbox_bo_unpin(struct vbox_bo *bo);
> +
> +static inline int vbox_bo_reserve(struct vbox_bo *bo, bool no_wait)
> +{
> +	int ret;
> +
> +	ret = ttm_bo_reserve(&bo->bo, true, no_wait, NULL);
> +	if (ret) {
> +		if (ret != -ERESTARTSYS && ret != -EBUSY)
> +			DRM_ERROR("reserve failed %p\n", bo);
> +		return ret;
> +	}
> +	return 0;
> +}
> +
> +static inline void vbox_bo_unreserve(struct vbox_bo *bo)
> +{
> +	ttm_bo_unreserve(&bo->bo);
> +}
> +
> +void vbox_ttm_placement(struct vbox_bo *bo, int domain);
> +int vbox_bo_push_sysram(struct vbox_bo *bo);
> +int vbox_mmap(struct file *filp, struct vm_area_struct *vma);
> +
> +/* vbox_prime.c */
> +int vbox_gem_prime_pin(struct drm_gem_object *obj);
> +void vbox_gem_prime_unpin(struct drm_gem_object *obj);
> +struct sg_table *vbox_gem_prime_get_sg_table(struct drm_gem_object *obj);
> +struct drm_gem_object *vbox_gem_prime_import_sg_table(
> +	struct drm_device *dev, struct dma_buf_attachment *attach,
> +	struct sg_table *table);
> +void *vbox_gem_prime_vmap(struct drm_gem_object *obj);
> +void vbox_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int vbox_gem_prime_mmap(struct drm_gem_object *obj,
> +			struct vm_area_struct *area);
> +
> +/* vbox_irq.c */
> +int vbox_irq_init(struct vbox_private *vbox);
> +void vbox_irq_fini(struct vbox_private *vbox);
> +void vbox_report_hotplug(struct vbox_private *vbox);
> +irqreturn_t vbox_irq_handler(int irq, void *arg);
> +
> +/* vbox_hgsmi.c */
> +void *hgsmi_buffer_alloc(struct gen_pool *guest_pool, size_t size,
> +			 u8 channel, u16 channel_info);
> +void hgsmi_buffer_free(struct gen_pool *guest_pool, void *buf);
> +int hgsmi_buffer_submit(struct gen_pool *guest_pool, void *buf);
> +
> +static inline void vbox_write_ioport(u16 index, u16 data)
> +{
> +	outw(index, VBE_DISPI_IOPORT_INDEX);
> +	outw(data, VBE_DISPI_IOPORT_DATA);
> +}
> +
> +#endif
> diff --git a/drivers/staging/vboxvideo/vbox_err.h b/drivers/staging/vboxvideo/vbox_err.h
> new file mode 100644
> index 000000000000..562db8630eb0
> --- /dev/null
> +++ b/drivers/staging/vboxvideo/vbox_err.h
> @@ -0,0 +1,50 @@
> +/*
> + * Copyright (C) 2017 Oracle Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
> + * OTHER DEALINGS IN THE SOFTWARE.
> + */
> +
> +#ifndef __VBOX_ERR_H__
> +#define __VBOX_ERR_H__
> +
> +/**
> + * @name VirtualBox virtual-hardware error macros
> + * @{
> + */
> +
> +#define VINF_SUCCESS                        0
> +#define VERR_INVALID_PARAMETER              (-2)
> +#define VERR_INVALID_POINTER                (-6)
> +#define VERR_NO_MEMORY                      (-8)
> +#define VERR_NOT_IMPLEMENTED                (-12)
> +#define VERR_INVALID_FUNCTION               (-36)
> +#define VERR_NOT_SUPPORTED                  (-37)
> +#define VERR_TOO_MUCH_DATA                  (-42)
> +#define VERR_INVALID_STATE                  (-79)
> +#define VERR_OUT_OF_RESOURCES               (-80)
> +#define VERR_ALREADY_EXISTS                 (-105)
> +#define VERR_INTERNAL_ERROR                 (-225)
> +
> +#define RT_SUCCESS_NP(rc)   ((int)(rc) >= VINF_SUCCESS)
> +#define RT_SUCCESS(rc)      (likely(RT_SUCCESS_NP(rc)))
> +#define RT_FAILURE(rc)      (unlikely(!RT_SUCCESS_NP(rc)))
> +
> +/** @}  */
> +
> +#endif
> diff --git a/drivers/staging/vboxvideo/vbox_fb.c b/drivers/staging/vboxvideo/vbox_fb.c
> new file mode 100644
> index 000000000000..b5658aad0c21
> --- /dev/null
> +++ b/drivers/staging/vboxvideo/vbox_fb.c
> @@ -0,0 +1,442 @@
> +/*
> + * Copyright (C) 2013-2017 Oracle Corporation
> + * This file is based on ast_fb.c
> + * Copyright 2012 Red Hat Inc.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the
> + * "Software"), to deal in the Software without restriction, including
> + * without limitation the rights to use, copy, modify, merge, publish,
> + * distribute, sub license, and/or sell copies of the Software, and to
> + * permit persons to whom the Software is furnished to do so, subject to
> + * the following conditions:
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
> + * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
> + * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
> + * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
> + * USE OR OTHER DEALINGS IN THE SOFTWARE.
> + *
> + * The above copyright notice and this permission notice (including the
> + * next paragraph) shall be included in all copies or substantial portions
> + * of the Software.
> + *
> + * Authors: Dave Airlie <airlied@xxxxxxxxxx>
> + *          Michael Thayer <michael.thayer@xxxxxxxxxx>,
> + */
> +#include <linux/module.h>
> +#include <linux/kernel.h>
> +#include <linux/errno.h>
> +#include <linux/string.h>
> +#include <linux/mm.h>
> +#include <linux/tty.h>
> +#include <linux/sysrq.h>
> +#include <linux/delay.h>
> +#include <linux/fb.h>
> +#include <linux/init.h>
> +
> +#include <drm/drmP.h>
> +#include <drm/drm_crtc.h>
> +#include <drm/drm_fb_helper.h>
> +#include <drm/drm_crtc_helper.h>
> +
> +#include "vbox_drv.h"
> +#include "vboxvideo.h"
> +
> +#define VBOX_DIRTY_DELAY (HZ / 30)
> +/**
> + * Tell the host about dirty rectangles to update.
> + */
> +static void vbox_dirty_update(struct vbox_fbdev *fbdev,
> +			      int x, int y, int width, int height)
> +{
> +	struct drm_gem_object *obj;
> +	struct vbox_bo *bo;
> +	int ret = -EBUSY;
> +	bool store_for_later = false;
> +	int x2, y2;
> +	unsigned long flags;
> +	struct drm_clip_rect rect;
> +
> +	obj = fbdev->afb.obj;
> +	bo = gem_to_vbox_bo(obj);
> +
> +	/*
> +	 * try and reserve the BO, if we fail with busy
> +	 * then the BO is being moved and we should
> +	 * store up the damage until later.
> +	 */
> +	if (drm_can_sleep())
> +		ret = vbox_bo_reserve(bo, true);
> +	if (ret) {
> +		if (ret != -EBUSY)
> +			return;
> +
> +		store_for_later = true;
> +	}
> +
> +	x2 = x + width - 1;
> +	y2 = y + height - 1;
> +	spin_lock_irqsave(&fbdev->dirty_lock, flags);
> +
> +	if (fbdev->y1 < y)
> +		y = fbdev->y1;
> +	if (fbdev->y2 > y2)
> +		y2 = fbdev->y2;
> +	if (fbdev->x1 < x)
> +		x = fbdev->x1;
> +	if (fbdev->x2 > x2)
> +		x2 = fbdev->x2;
> +
> +	if (store_for_later) {
> +		fbdev->x1 = x;
> +		fbdev->x2 = x2;
> +		fbdev->y1 = y;
> +		fbdev->y2 = y2;
> +		spin_unlock_irqrestore(&fbdev->dirty_lock, flags);
> +		return;
> +	}
> +
> +	fbdev->x1 = INT_MAX;
> +	fbdev->y1 = INT_MAX;
> +	fbdev->x2 = 0;
> +	fbdev->y2 = 0;
> +
> +	spin_unlock_irqrestore(&fbdev->dirty_lock, flags);
> +
> +	/*
> +	 * Not sure why the original code subtracted 1 here, but I will keep
> +	 * it that way to avoid unnecessary differences.
> +	 */
> +	rect.x1 = x;
> +	rect.x2 = x2 + 1;
> +	rect.y1 = y;
> +	rect.y2 = y2 + 1;
> +	vbox_framebuffer_dirty_rectangles(&fbdev->afb.base, &rect, 1);
> +
> +	vbox_bo_unreserve(bo);
> +}
> +
> +#ifdef CONFIG_FB_DEFERRED_IO
> +static void vbox_deferred_io(struct fb_info *info, struct list_head *pagelist)
> +{
> +	struct vbox_fbdev *fbdev = info->par;
> +	unsigned long start, end, min, max;
> +	struct page *page;
> +	int y1, y2;
> +
> +	min = ULONG_MAX;
> +	max = 0;
> +	list_for_each_entry(page, pagelist, lru) {
> +		start = page->index << PAGE_SHIFT;
> +		end = start + PAGE_SIZE - 1;
> +		min = min(min, start);
> +		max = max(max, end);
> +	}
> +
> +	if (min < max) {
> +		y1 = min / info->fix.line_length;
> +		y2 = (max / info->fix.line_length) + 1;
> +		DRM_INFO("%s: Calling dirty update: 0, %d, %d, %d\n",
> +			 __func__, y1, info->var.xres, y2 - y1 - 1);
> +		vbox_dirty_update(fbdev, 0, y1, info->var.xres, y2 - y1 - 1);
> +	}
> +}
> +
> +static struct fb_deferred_io vbox_defio = {
> +	.delay = VBOX_DIRTY_DELAY,
> +	.deferred_io = vbox_deferred_io,
> +};
> +#endif
> +
> +static void vbox_fillrect(struct fb_info *info, const struct fb_fillrect *rect)
> +{
> +	struct vbox_fbdev *fbdev = info->par;
> +
> +	sys_fillrect(info, rect);
> +	vbox_dirty_update(fbdev, rect->dx, rect->dy, rect->width, rect->height);
> +}
> +
> +static void vbox_copyarea(struct fb_info *info, const struct fb_copyarea *area)
> +{
> +	struct vbox_fbdev *fbdev = info->par;
> +
> +	sys_copyarea(info, area);
> +	vbox_dirty_update(fbdev, area->dx, area->dy, area->width, area->height);
> +}
> +
> +static void vbox_imageblit(struct fb_info *info, const struct fb_image *image)
> +{
> +	struct vbox_fbdev *fbdev = info->par;
> +
> +	sys_imageblit(info, image);
> +	vbox_dirty_update(fbdev, image->dx, image->dy, image->width,
> +			  image->height);
> +}
> +
> +static struct fb_ops vboxfb_ops = {
> +	.owner = THIS_MODULE,
> +	.fb_check_var = drm_fb_helper_check_var,
> +	.fb_set_par = drm_fb_helper_set_par,
> +	.fb_fillrect = vbox_fillrect,
> +	.fb_copyarea = vbox_copyarea,
> +	.fb_imageblit = vbox_imageblit,
> +	.fb_pan_display = drm_fb_helper_pan_display,
> +	.fb_blank = drm_fb_helper_blank,
> +	.fb_setcmap = drm_fb_helper_setcmap,
> +	.fb_debug_enter = drm_fb_helper_debug_enter,
> +	.fb_debug_leave = drm_fb_helper_debug_leave,
> +};
> +
> +static int vboxfb_create_object(struct vbox_fbdev *fbdev,
> +				struct DRM_MODE_FB_CMD *mode_cmd,
> +				struct drm_gem_object **gobj_p)
> +{
> +	struct drm_device *dev = fbdev->helper.dev;
> +	u32 size;
> +	struct drm_gem_object *gobj;
> +	u32 pitch = mode_cmd->pitches[0];
> +	int ret = 0;

Don't initialize ret.  It just disables GCC's uninitialized-variable
warnings, which are there to help you.

> +
> +	size = pitch * mode_cmd->height;
> +	ret = vbox_gem_create(dev, size, true, &gobj);
> +	if (ret)
> +		return ret;
> +
> +	*gobj_p = gobj;
> +	return ret;

Use a literal:
	return 0;

> +}
> +
> +static int vboxfb_create(struct drm_fb_helper *helper,
> +			 struct drm_fb_helper_surface_size *sizes)
> +{
> +	struct vbox_fbdev *fbdev =
> +	    container_of(helper, struct vbox_fbdev, helper);
> +	struct drm_device *dev = fbdev->helper.dev;
> +	struct DRM_MODE_FB_CMD mode_cmd;
> +	struct drm_framebuffer *fb;
> +	struct fb_info *info;
> +	struct device *device = &dev->pdev->dev;
> +	struct drm_gem_object *gobj = NULL;
> +	struct vbox_bo *bo = NULL;

No need for these initializations to NULL.

> +	int size, ret;
> +	u32 pitch;
> +
> +	mode_cmd.width = sizes->surface_width;
> +	mode_cmd.height = sizes->surface_height;
> +	pitch = mode_cmd.width * ((sizes->surface_bpp + 7) / 8);
> +	mode_cmd.pixel_format = drm_mode_legacy_fb_format(sizes->surface_bpp,
> +							  sizes->surface_depth);
> +	mode_cmd.pitches[0] = pitch;
> +
> +	size = pitch * mode_cmd.height;
> +
> +	ret = vboxfb_create_object(fbdev, &mode_cmd, &gobj);
> +	if (ret) {
> +		DRM_ERROR("failed to create fbcon backing object %d\n", ret);
> +		return ret;
> +	}
> +
> +	ret = vbox_framebuffer_init(dev, &fbdev->afb, &mode_cmd, gobj);
> +	if (ret)
> +		return ret;
> +
> +	bo = gem_to_vbox_bo(gobj);
> +
> +	ret = vbox_bo_reserve(bo, false);
> +	if (ret)
> +		return ret;
> +
> +	ret = vbox_bo_pin(bo, TTM_PL_FLAG_VRAM, NULL);
> +	if (ret) {
> +		vbox_bo_unreserve(bo);
> +		return ret;
> +	}
> +
> +	ret = ttm_bo_kmap(&bo->bo, 0, bo->bo.num_pages, &bo->kmap);
> +	vbox_bo_unreserve(bo);
> +	if (ret) {
> +		DRM_ERROR("failed to kmap fbcon\n");
> +		return ret;
> +	}
> +
> +	info = framebuffer_alloc(0, device);
> +	if (!info)
> +		return -ENOMEM;
> +	info->par = fbdev;
> +
> +	fbdev->size = size;
> +
> +	fb = &fbdev->afb.base;
> +	fbdev->helper.fb = fb;
> +	fbdev->helper.fbdev = info;
> +
> +	strcpy(info->fix.id, "vboxdrmfb");
> +
> +	/*
> +	 * The last flag forces a mode set on VT switches even if the kernel
> +	 * does not think it is needed.
> +	 */
> +	info->flags = FBINFO_DEFAULT | FBINFO_CAN_FORCE_OUTPUT |
> +		      FBINFO_MISC_ALWAYS_SETPAR;
> +	info->fbops = &vboxfb_ops;
> +
> +	ret = fb_alloc_cmap(&info->cmap, 256, 0);
> +	if (ret)
> +		return -ENOMEM;
> +
> +	/*
> +	 * This seems to be done for safety checking that the framebuffer
> +	 * is not registered twice by different drivers.
> +	 */
> +	info->apertures = alloc_apertures(1);
> +	if (!info->apertures)
> +		return -ENOMEM;
> +	info->apertures->ranges[0].base = pci_resource_start(dev->pdev, 0);
> +	info->apertures->ranges[0].size = pci_resource_len(dev->pdev, 0);
> +
> +	drm_fb_helper_fill_fix(info, fb->pitches[0], fb->format->depth);
> +	drm_fb_helper_fill_var(info, &fbdev->helper, sizes->fb_width,
> +			       sizes->fb_height);
> +
> +	info->screen_base = bo->kmap.virtual;
> +	info->screen_size = size;
> +
> +#ifdef CONFIG_FB_DEFERRED_IO
> +	info->fbdefio = &vbox_defio;
> +	fb_deferred_io_init(info);
> +#endif
> +
> +	info->pixmap.flags = FB_PIXMAP_SYSTEM;
> +
> +	DRM_DEBUG_KMS("allocated %dx%d\n", fb->width, fb->height);
> +
> +	return 0;
> +}
> +
> +static void vbox_fb_gamma_set(struct drm_crtc *crtc, u16 red, u16 green,
> +			      u16 blue, int regno)
> +{
> +}
> +
> +static void vbox_fb_gamma_get(struct drm_crtc *crtc, u16 *red, u16 *green,
> +			      u16 *blue, int regno)
> +{
> +	*red = regno;
> +	*green = regno;
> +	*blue = regno;
> +}
> +
> +static struct drm_fb_helper_funcs vbox_fb_helper_funcs = {
> +	.gamma_set = vbox_fb_gamma_set,
> +	.gamma_get = vbox_fb_gamma_get,
> +	.fb_probe = vboxfb_create,
> +};
> +
> +static void vbox_fbdev_destroy(struct drm_device *dev, struct vbox_fbdev *fbdev)
> +{
> +	struct fb_info *info;
> +	struct vbox_framebuffer *afb = &fbdev->afb;
> +
> +	if (fbdev->helper.fbdev) {
> +		info = fbdev->helper.fbdev;
> +		unregister_framebuffer(info);
> +		if (info->cmap.len)
> +			fb_dealloc_cmap(&info->cmap);
> +		framebuffer_release(info);
> +	}
> +
> +	if (afb->obj) {
> +		struct vbox_bo *bo = gem_to_vbox_bo(afb->obj);
> +
> +		if (!vbox_bo_reserve(bo, false)) {
> +			if (bo->kmap.virtual)
> +				ttm_bo_kunmap(&bo->kmap);
> +			/*
> +			 * QXL does this, but is it really needed before
> +			 * freeing?
> +			 */
> +			if (bo->pin_count)
> +				vbox_bo_unpin(bo);
> +			vbox_bo_unreserve(bo);
> +		}
> +		drm_gem_object_unreference_unlocked(afb->obj);
> +		afb->obj = NULL;
> +	}
> +	drm_fb_helper_fini(&fbdev->helper);
> +
> +	drm_framebuffer_unregister_private(&afb->base);
> +	drm_framebuffer_cleanup(&afb->base);
> +}
> +
> +int vbox_fbdev_init(struct drm_device *dev)
> +{
> +	struct vbox_private *vbox = dev->dev_private;
> +	struct vbox_fbdev *fbdev;
> +	int ret;
> +
> +	fbdev = kzalloc(sizeof(*fbdev), GFP_KERNEL);
> +	if (!fbdev)
> +		return -ENOMEM;
> +
> +	vbox->fbdev = fbdev;
> +	spin_lock_init(&fbdev->dirty_lock);
> +
> +	drm_fb_helper_prepare(dev, &fbdev->helper, &vbox_fb_helper_funcs);
> +	ret = drm_fb_helper_init(dev, &fbdev->helper, vbox->num_crtcs);
> +	if (ret)
> +		goto free;
> +
> +	ret = drm_fb_helper_single_add_all_connectors(&fbdev->helper);
> +	if (ret)
> +		goto fini;
> +
> +	/* disable all the possible outputs/crtcs before entering KMS mode */
> +	drm_helper_disable_unused_functions(dev);
> +
> +	ret = drm_fb_helper_initial_config(&fbdev->helper, 32);
> +	if (ret)
> +		goto fini;
> +
> +	return 0;
> +
> +fini:
> +	drm_fb_helper_fini(&fbdev->helper);
> +free:
> +	kfree(fbdev);
> +	vbox->fbdev = NULL;
> +
> +	return ret;
> +}
> +
> +void vbox_fbdev_fini(struct drm_device *dev)
> +{
> +	struct vbox_private *vbox = dev->dev_private;
> +
> +	if (!vbox->fbdev)
> +		return;
> +
> +	vbox_fbdev_destroy(dev, vbox->fbdev);
> +	kfree(vbox->fbdev);
> +	vbox->fbdev = NULL;
> +}
> +
> +void vbox_fbdev_set_suspend(struct drm_device *dev, int state)
> +{
> +	struct vbox_private *vbox = dev->dev_private;
> +
> +	if (!vbox->fbdev)
> +		return;
> +
> +	fb_set_suspend(vbox->fbdev->helper.fbdev, state);
> +}
> +
> +void vbox_fbdev_set_base(struct vbox_private *vbox, unsigned long gpu_addr)
> +{
> +	vbox->fbdev->helper.fbdev->fix.smem_start =
> +	    vbox->fbdev->helper.fbdev->apertures->ranges[0].base + gpu_addr;
> +	vbox->fbdev->helper.fbdev->fix.smem_len =
> +	    vbox->available_vram_size - gpu_addr;

A local variable would make this much easier to read (the type of
->fix is struct fb_fix_screeninfo):

	struct fb_fix_screeninfo *fix = &vbox->fbdev->helper.fbdev->fix;

	fix->smem_start = vbox->fbdev->helper.fbdev->apertures->ranges[0].base +
			  gpu_addr;
	fix->smem_len = vbox->available_vram_size - gpu_addr;


> +}
> diff --git a/drivers/staging/vboxvideo/vbox_hgsmi.c b/drivers/staging/vboxvideo/vbox_hgsmi.c
> new file mode 100644
> index 000000000000..822fd31121cb
> --- /dev/null
> +++ b/drivers/staging/vboxvideo/vbox_hgsmi.c
> @@ -0,0 +1,115 @@
> +/*
> + * Copyright (C) 2017 Oracle Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the
> + * "Software"), to deal in the Software without restriction, including
> + * without limitation the rights to use, copy, modify, merge, publish,
> + * distribute, sub license, and/or sell copies of the Software, and to
> + * permit persons to whom the Software is furnished to do so, subject to
> + * the following conditions:
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
> + * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
> + * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
> + * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
> + * USE OR OTHER DEALINGS IN THE SOFTWARE.
> + *
> + * The above copyright notice and this permission notice (including the
> + * next paragraph) shall be included in all copies or substantial portions
> + * of the Software.
> + *
> + * Authors: Hans de Goede <hdegoede@xxxxxxxxxx>
> + */
> +
> +#include "vbox_drv.h"
> +#include "vboxvideo_vbe.h"
> +#include "hgsmi_defs.h"
> +
> +/* One-at-a-Time Hash from http://www.burtleburtle.net/bob/hash/doobs.html */
> +static u32 hgsmi_hash_process(u32 hash, const u8 *data, int size)
> +{
> +	while (size--) {
> +		hash += *data++;
> +		hash += (hash << 10);
> +		hash ^= (hash >> 6);
> +	}
> +
> +	return hash;
> +}
> +
> +static u32 hgsmi_hash_end(u32 hash)
> +{
> +	hash += (hash << 3);
> +	hash ^= (hash >> 11);
> +	hash += (hash << 15);
> +
> +	return hash;
> +}
> +
> +/* Not really a checksum but that is the naming used in all vbox code */
> +static u32 hgsmi_checksum(u32 offset,
> +			  const struct hgsmi_buffer_header *header,
> +			  const struct hgsmi_buffer_tail *tail)
> +{
> +	u32 checksum;
> +
> +	checksum = hgsmi_hash_process(0, (u8 *)&offset, sizeof(offset));
> +	checksum = hgsmi_hash_process(checksum, (u8 *)header, sizeof(*header));
> +	/* 4 -> Do not checksum the checksum itself */
> +	checksum = hgsmi_hash_process(checksum, (u8 *)tail, 4);
> +
> +	return hgsmi_hash_end(checksum);
> +}
> +
> +void *hgsmi_buffer_alloc(struct gen_pool *guest_pool, size_t size,
> +			 u8 channel, u16 channel_info)
> +{
> +	struct hgsmi_buffer_header *h;
> +	struct hgsmi_buffer_tail *t;
> +	size_t total_size;
> +	dma_addr_t offset;
> +
> +	total_size = size + sizeof(*h) + sizeof(*t);
> +	h = gen_pool_dma_alloc(guest_pool, total_size, &offset);
> +	if (!h)
> +		return NULL;
> +
> +	t = (struct hgsmi_buffer_tail *)((u8 *)h + sizeof(*h) + size);
> +
> +	h->flags = HGSMI_BUFFER_HEADER_F_SEQ_SINGLE;
> +	h->data_size = size;
> +	h->channel = channel;
> +	h->channel_info = channel_info;
> +	memset(&h->u.header_data, 0, sizeof(h->u.header_data));
> +
> +	t->reserved = 0;
> +	t->checksum = hgsmi_checksum(offset, h, t);
> +
> +	return (u8 *)h + sizeof(*h);
> +}
> +
> +void hgsmi_buffer_free(struct gen_pool *guest_pool, void *buf)
> +{
> +	struct hgsmi_buffer_header *h =
> +		(struct hgsmi_buffer_header *)((u8 *)buf - sizeof(*h));
> +	size_t total_size = h->data_size + sizeof(*h) +
> +					     sizeof(struct hgsmi_buffer_tail);
> +
> +	gen_pool_free(guest_pool, (unsigned long)h, total_size);
> +}
> +
> +int hgsmi_buffer_submit(struct gen_pool *guest_pool, void *buf)
> +{
> +	phys_addr_t offset;
> +
> +	offset = gen_pool_virt_to_phys(guest_pool, (unsigned long)buf -
> +				       sizeof(struct hgsmi_buffer_header));
> +	outl(offset, VGA_PORT_HGSMI_GUEST);
> +	/* Make the compiler aware that the host has changed memory. */
> +	mb();
> +
> +	return 0;
> +}
> diff --git a/drivers/staging/vboxvideo/vbox_irq.c b/drivers/staging/vboxvideo/vbox_irq.c
> new file mode 100644
> index 000000000000..951c2e315bf8
> --- /dev/null
> +++ b/drivers/staging/vboxvideo/vbox_irq.c
> @@ -0,0 +1,207 @@
> +/*
> + * Copyright (C) 2016-2017 Oracle Corporation
> + * This file is based on qxl_irq.c
> + * Copyright 2013 Red Hat Inc.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
> + * OTHER DEALINGS IN THE SOFTWARE.
> + *
> + * Authors: Dave Airlie
> + *          Alon Levy
> + *          Michael Thayer <michael.thayer@xxxxxxxxxx,
> + *          Hans de Goede <hdegoede@xxxxxxxxxx>
> + */
> +
> +#include <drm/drm_crtc_helper.h>
> +
> +#include "vbox_drv.h"
> +#include "vboxvideo.h"
> +
> +static void vbox_clear_irq(void)
> +{
> +	outl((u32)~0, VGA_PORT_HGSMI_HOST);
> +}
> +
> +static u32 vbox_get_flags(struct vbox_private *vbox)
> +{
> +	return readl(vbox->guest_heap + HOST_FLAGS_OFFSET);
> +}
> +
> +void vbox_report_hotplug(struct vbox_private *vbox)
> +{
> +	schedule_work(&vbox->hotplug_work);
> +}
> +
> +irqreturn_t vbox_irq_handler(int irq, void *arg)
> +{
> +	struct drm_device *dev = (struct drm_device *)arg;
> +	struct vbox_private *vbox = (struct vbox_private *)dev->dev_private;
> +	u32 host_flags = vbox_get_flags(vbox);
> +
> +	if (!(host_flags & HGSMIHOSTFLAGS_IRQ))
> +		return IRQ_NONE;
> +
> +	/*
> +	 * Due to a bug in the initial host implementation of hot-plug irqs,
> +	 * the hot-plug and cursor capability flags were never cleared.
> +	 * Fortunately we can tell when they would have been set by checking
> +	 * that the VSYNC flag is not set.
> +	 */
> +	if (host_flags &
> +	    (HGSMIHOSTFLAGS_HOTPLUG | HGSMIHOSTFLAGS_CURSOR_CAPABILITIES) &&
> +	    !(host_flags & HGSMIHOSTFLAGS_VSYNC))
> +		vbox_report_hotplug(vbox);
> +
> +	vbox_clear_irq();
> +
> +	return IRQ_HANDLED;
> +}
> +
> +/**
> + * Check that the position hints provided by the host are suitable for GNOME
> + * shell (i.e. all screens disjoint and hints for all enabled screens) and if
> + * not replace them with default ones.  Providing valid hints improves the
> + * chances that we will get a known screen layout for pointer mapping.
> + */
> +static void validate_or_set_position_hints(struct vbox_private *vbox)
> +{
> +	struct vbva_modehint *hintsi, *hintsj;
> +	bool valid = true;
> +	u16 currentx = 0;
> +	int i, j;
> +
> +	for (i = 0; i < vbox->num_crtcs; ++i) {
> +		for (j = 0; j < i; ++j) {
> +			hintsi = &vbox->last_mode_hints[i];
> +			hintsj = &vbox->last_mode_hints[j];
> +
> +			if (hintsi->enabled && hintsj->enabled) {
> +				if (hintsi->dx >= 0xffff ||
> +				    hintsi->dy >= 0xffff ||
> +				    hintsj->dx >= 0xffff ||
> +				    hintsj->dy >= 0xffff ||
> +				    (hintsi->dx <
> +					hintsj->dx + (hintsj->cx & 0x8fff) &&
> +				     hintsi->dx + (hintsi->cx & 0x8fff) >
> +					hintsj->dx) ||
> +				    (hintsi->dy <
> +					hintsj->dy + (hintsj->cy & 0x8fff) &&
> +				     hintsi->dy + (hintsi->cy & 0x8fff) >
> +					hintsj->dy))
> +					valid = false;
> +			}
> +		}
> +	}
> +	if (!valid)
> +		for (i = 0; i < vbox->num_crtcs; ++i) {
> +			if (vbox->last_mode_hints[i].enabled) {
> +				vbox->last_mode_hints[i].dx = currentx;
> +				vbox->last_mode_hints[i].dy = 0;
> +				currentx +=
> +				    vbox->last_mode_hints[i].cx & 0x8fff;
> +			}
> +		}
> +}
> +
> +/**
> + * Query the host for the most recent video mode hints.
> + */
> +static void vbox_update_mode_hints(struct vbox_private *vbox)
> +{
> +	struct drm_device *dev = vbox->dev;
> +	struct drm_connector *connector;
> +	struct vbox_connector *vbox_connector;
> +	struct vbva_modehint *hints;
> +	u16 flags;
> +	bool disconnected;
> +	unsigned int crtc_id;
> +	int ret;
> +
> +	ret = hgsmi_get_mode_hints(vbox->guest_pool, vbox->num_crtcs,
> +				   vbox->last_mode_hints);
> +	if (ret) {
> +		DRM_ERROR("vboxvideo: hgsmi_get_mode_hints failed: %d\n", ret);
> +		return;
> +	}
> +
> +	validate_or_set_position_hints(vbox);
> +	drm_modeset_lock_all(dev);
> +	list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
> +		vbox_connector = to_vbox_connector(connector);
> +		hints =
> +		    &vbox->last_mode_hints[vbox_connector->vbox_crtc->crtc_id];
> +		if (hints->magic == VBVAMODEHINT_MAGIC) {

Reverse this:
		if (hints->magic != VBVAMODEHINT_MAGIC)
			continue;


> +			disconnected = !(hints->enabled);
> +			crtc_id = vbox_connector->vbox_crtc->crtc_id;
> +			flags = VBVA_SCREEN_F_ACTIVE
> +			    | (disconnected ? VBVA_SCREEN_F_DISABLED :
> +			       VBVA_SCREEN_F_BLANK);
> +			vbox_connector->mode_hint.width = hints->cx & 0x8fff;
> +			vbox_connector->mode_hint.height = hints->cy & 0x8fff;
> +			vbox_connector->vbox_crtc->x_hint = hints->dx;
> +			vbox_connector->vbox_crtc->y_hint = hints->dy;
> +			vbox_connector->mode_hint.disconnected = disconnected;
> +			if (vbox_connector->vbox_crtc->disconnected !=
> +			    disconnected) {

Reverse this too:

		if (vbox_connector->vbox_crtc->disconnected == disconnected)
			continue;

> +				hgsmi_process_display_info(vbox->guest_pool,
> +							   crtc_id, 0, 0, 0,
> +							   hints->cx * 4,
> +							   hints->cx,
> +							   hints->cy, 0,
> +							   flags);
> +				vbox_connector->vbox_crtc->disconnected =
> +				    disconnected;
> +			}
> +		}
> +	}
> +	drm_modeset_unlock_all(dev);
> +}
> +
> +static void vbox_hotplug_worker(struct work_struct *work)
> +{
> +	struct vbox_private *vbox = container_of(work, struct vbox_private,
> +						 hotplug_work);
> +
> +	vbox_update_mode_hints(vbox);
> +	drm_kms_helper_hotplug_event(vbox->dev);
> +}
> +
> +int vbox_irq_init(struct vbox_private *vbox)
> +{
> +	int ret;
> +
> +	vbox_update_mode_hints(vbox);
> +	ret = drm_irq_install(vbox->dev, vbox->dev->pdev->irq);
> +	if (unlikely(ret != 0)) {


This is a slow path so don't use likely/unlikely.  Just say:

	if (ret)

> +		vbox_irq_fini(vbox);

No need.  Btw, the error handling in this driver is very frustrating for
me.  There is a massive do-everything cleanup function we call if init
fails or we need to unload the module.  But here we are doing our own
cleanup so vbox_irq_fini() is called twice.  And, of course, if the
resource isn't needed for the entire lifetime of the module then we have
to do normal kernel style error handling.  And then there is
vbox_mode_fini() which is not implemented.  The not implemented function
is not a coincidence, I see that a lot when people start down the path
of writing do-everything-style error handling.

My suggestion is to use normal kernel style error handling everywhere
because it's simpler in the end.  Every function cleans up after itself
so you never return half the allocations you wanted.  Choose the label
name to reflect what you are doing at the label.  You only need to
remember the most recent allocation instead of remembering every single
thing you allocated.

	one = alloc();
	if (!one)
		return -ENOMEM;

	two = alloc();
	if (!two) {
		rc = -ENOMEM;
		goto free_one;
	}

	three = alloc();
	if (!three) {
		rc = -ENOMEM;
		goto free_two;
	}

	return 0;

free_two:
	free(two);
free_one:
	free(one);

	return rc;

It's really simple if you plod along one thing at a time.

> +		DRM_ERROR("Failed installing irq: %d\n", ret);
> +		return 1;

Preserve the error code.  "return ret;"


> +	}
> +	INIT_WORK(&vbox->hotplug_work, vbox_hotplug_worker);
> +	vbox->isr_installed = true;

The isr_installed variable is not needed.

> +	return 0;
> +}
> +
> +void vbox_irq_fini(struct vbox_private *vbox)
> +{
> +	if (vbox->isr_installed) {
> +		drm_irq_uninstall(vbox->dev);
> +		flush_work(&vbox->hotplug_work);

These should be freed in the reverse order from how they were allocated,
so I don't have to audit for use-after-frees or similar.

void vbox_irq_fini(struct vbox_private *vbox)
{
	flush_work(&vbox->hotplug_work);
	drm_irq_uninstall(vbox->dev);
}

> +		vbox->isr_installed = false;
> +	}
> +}
> diff --git a/drivers/staging/vboxvideo/vbox_main.c b/drivers/staging/vboxvideo/vbox_main.c
> new file mode 100644
> index 000000000000..e87d804256ca
> --- /dev/null
> +++ b/drivers/staging/vboxvideo/vbox_main.c
> @@ -0,0 +1,528 @@
> +/*
> + * Copyright (C) 2013-2017 Oracle Corporation
> + * This file is based on ast_main.c
> + * Copyright 2012 Red Hat Inc.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the
> + * "Software"), to deal in the Software without restriction, including
> + * without limitation the rights to use, copy, modify, merge, publish,
> + * distribute, sub license, and/or sell copies of the Software, and to
> + * permit persons to whom the Software is furnished to do so, subject to
> + * the following conditions:
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
> + * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
> + * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
> + * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
> + * USE OR OTHER DEALINGS IN THE SOFTWARE.
> + *
> + * The above copyright notice and this permission notice (including the
> + * next paragraph) shall be included in all copies or substantial portions
> + * of the Software.
> + *
> + * Authors: Dave Airlie <airlied@xxxxxxxxxx>,
> + *          Michael Thayer <michael.thayer@xxxxxxxxxx,
> + *          Hans de Goede <hdegoede@xxxxxxxxxx>
> + */
> +#include <drm/drm_fb_helper.h>
> +#include <drm/drm_crtc_helper.h>
> +
> +#include "vbox_drv.h"
> +#include "vbox_err.h"
> +#include "vboxvideo_guest.h"
> +#include "vboxvideo_vbe.h"
> +
> +static void vbox_user_framebuffer_destroy(struct drm_framebuffer *fb)
> +{
> +	struct vbox_framebuffer *vbox_fb = to_vbox_framebuffer(fb);
> +
> +	if (vbox_fb->obj)
> +		drm_gem_object_unreference_unlocked(vbox_fb->obj);
> +
> +	drm_framebuffer_cleanup(fb);
> +	kfree(fb);
> +}
> +
> +void vbox_enable_accel(struct vbox_private *vbox)
> +{
> +	unsigned int i;
> +	struct vbva_buffer *vbva;
> +
> +	if (!vbox->vbva_info || !vbox->vbva_buffers) {
> +		/* Should never happen... */
> +		DRM_ERROR("vboxvideo: failed to set up VBVA.\n");
> +		return;
> +	}
> +
> +	for (i = 0; i < vbox->num_crtcs; ++i) {
> +		if (!vbox->vbva_info[i].vbva) {

Invert this test.

		if (vbox->vbva_info[i].vbva)
			continue;



> +			vbva = (struct vbva_buffer *)
> +				((u8 *)vbox->vbva_buffers +
> +						     i * VBVA_MIN_BUFFER_SIZE);


It's easier to do this cast like this:
		vbva = (void *)vbox->vbva_buffers + i * VBVA_MIN_BUFFER_SIZE;


> +			if (!vbva_enable(&vbox->vbva_info[i],
> +					 vbox->guest_pool, vbva, i)) {
> +				/* very old host or driver error. */
> +				DRM_ERROR("vboxvideo: vbva_enable failed\n");
> +				return;
> +			}
> +		}
> +	}
> +}
> +
> +void vbox_disable_accel(struct vbox_private *vbox)
> +{
> +	unsigned int i;
> +
> +	for (i = 0; i < vbox->num_crtcs; ++i)
> +		vbva_disable(&vbox->vbva_info[i], vbox->guest_pool, i);
> +}
> +
> +void vbox_report_caps(struct vbox_private *vbox)
> +{
> +	u32 caps = VBVACAPS_DISABLE_CURSOR_INTEGRATION
> +	    | VBVACAPS_IRQ | VBVACAPS_USE_VBVA_ONLY;

The operator goes on the first line:

	u32 caps = VBVACAPS_DISABLE_CURSOR_INTEGRATION | VBVACAPS_IRQ |
		   VBVACAPS_USE_VBVA_ONLY;

> +	if (vbox->initial_mode_queried)
> +		caps |= VBVACAPS_VIDEO_MODE_HINTS;
> +	hgsmi_send_caps_info(vbox->guest_pool, caps);
> +}
> +
> +/**
> + * Send information about dirty rectangles to VBVA.  If necessary we enable
> + * VBVA first, as this is normally disabled after a change of master in case
> + * the new master does not send dirty rectangle information (is this even
> + * allowed?)
> + */
> +void vbox_framebuffer_dirty_rectangles(struct drm_framebuffer *fb,
> +				       struct drm_clip_rect *rects,
> +				       unsigned int num_rects)
> +{
> +	struct vbox_private *vbox = fb->dev->dev_private;
> +	struct drm_crtc *crtc;
> +	unsigned int i;
> +
> +	mutex_lock(&vbox->hw_mutex);
> +	list_for_each_entry(crtc, &fb->dev->mode_config.crtc_list, head) {
> +		if (CRTC_FB(crtc) == fb) {

Invert this and use continue:

		if (CRTC_FB(crtc) != fb)
			continue;

> +			vbox_enable_accel(vbox);
> +			for (i = 0; i < num_rects; ++i) {
> +				struct vbva_cmd_hdr cmd_hdr;
> +				unsigned int crtc_id =
> +				    to_vbox_crtc(crtc)->crtc_id;
> +
> +				if ((rects[i].x1 >
> +					 crtc->x + crtc->hwmode.hdisplay) ||
> +				    (rects[i].y1 >
> +					 crtc->y + crtc->hwmode.vdisplay) ||
> +				    (rects[i].x2 < crtc->x) ||
> +				    (rects[i].y2 < crtc->y))
> +					continue;
> +
> +				cmd_hdr.x = (s16)rects[i].x1;
> +				cmd_hdr.y = (s16)rects[i].y1;
> +				cmd_hdr.w = (u16)rects[i].x2 - rects[i].x1;
> +				cmd_hdr.h = (u16)rects[i].y2 - rects[i].y1;
> +
> +				if (vbva_buffer_begin_update(
> +						&vbox->vbva_info[crtc_id],
> +						vbox->guest_pool)) {
> +					vbva_write(&vbox->vbva_info[crtc_id],
> +						   vbox->guest_pool,
> +						   &cmd_hdr, sizeof(cmd_hdr));
> +					vbva_buffer_end_update(
> +						   &vbox->vbva_info[crtc_id]);
> +				}

This code is still sort of jammed up against the right side of the
screen.  It's hard to read.  Maybe pull it into a separate function?
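
Something like this untested sketch, perhaps (vbox_submit_dirty_rect()
is just a name I made up):

```
static void vbox_submit_dirty_rect(struct vbox_private *vbox,
				   unsigned int crtc_id,
				   struct vbva_cmd_hdr *cmd_hdr)
{
	if (!vbva_buffer_begin_update(&vbox->vbva_info[crtc_id],
				      vbox->guest_pool))
		return;

	vbva_write(&vbox->vbva_info[crtc_id], vbox->guest_pool,
		   cmd_hdr, sizeof(*cmd_hdr));
	vbva_buffer_end_update(&vbox->vbva_info[crtc_id]);
}
```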

> +			}
> +		}
> +	}
> +	mutex_unlock(&vbox->hw_mutex);
> +}
> +
> +static int vbox_user_framebuffer_dirty(struct drm_framebuffer *fb,
> +				       struct drm_file *file_priv,
> +				       unsigned int flags, unsigned int color,
> +				       struct drm_clip_rect *rects,
> +				       unsigned int num_rects)
> +{
> +	vbox_framebuffer_dirty_rectangles(fb, rects, num_rects);
> +
> +	return 0;
> +}
> +
> +static const struct drm_framebuffer_funcs vbox_fb_funcs = {
> +	.destroy = vbox_user_framebuffer_destroy,
> +	.dirty = vbox_user_framebuffer_dirty,
> +};
> +
> +int vbox_framebuffer_init(struct drm_device *dev,
> +			  struct vbox_framebuffer *vbox_fb,
> +			  const struct DRM_MODE_FB_CMD *mode_cmd,
> +			  struct drm_gem_object *obj)
> +{
> +	int ret;
> +
> +	drm_helper_mode_fill_fb_struct(dev, &vbox_fb->base, mode_cmd);
> +	vbox_fb->obj = obj;
> +	ret = drm_framebuffer_init(dev, &vbox_fb->base, &vbox_fb_funcs);
> +	if (ret) {
> +		DRM_ERROR("framebuffer init failed %d\n", ret);
> +		return ret;
> +	}
> +
> +	return 0;
> +}
> +
> +static struct drm_framebuffer *vbox_user_framebuffer_create(
> +		struct drm_device *dev,
> +		struct drm_file *filp,
> +		const struct drm_mode_fb_cmd2 *mode_cmd)
> +{
> +	struct drm_gem_object *obj;
> +	struct vbox_framebuffer *vbox_fb;
> +	int ret;
> +
> +	obj = drm_gem_object_lookup(filp, mode_cmd->handles[0]);
> +	if (!obj)
> +		return ERR_PTR(-ENOENT);
> +
> +	vbox_fb = kzalloc(sizeof(*vbox_fb), GFP_KERNEL);
> +	if (!vbox_fb) {
> +		drm_gem_object_unreference_unlocked(obj);
> +		return ERR_PTR(-ENOMEM);
> +	}
> +
> +	ret = vbox_framebuffer_init(dev, vbox_fb, mode_cmd, obj);
> +	if (ret) {
> +		drm_gem_object_unreference_unlocked(obj);
> +		kfree(vbox_fb);
> +		return ERR_PTR(ret);
> +	}
> +
> +	return &vbox_fb->base;
> +}
> +
> +static const struct drm_mode_config_funcs vbox_mode_funcs = {
> +	.fb_create = vbox_user_framebuffer_create,
> +};
> +
> +static void vbox_accel_fini(struct vbox_private *vbox)
> +{
> +	if (vbox->vbva_info) {
> +		vbox_disable_accel(vbox);

We can sometimes call this without having enabled accel and without
having called vbva_setup_buffer_context().  I'm not sure what that
does, but it's probably not something we want.  This is why
do-everything-style error handling is so complicated and bug prone.
It's much simpler to only undo things which have actually been done.


> +		kfree(vbox->vbva_info);
> +		vbox->vbva_info = NULL;
> +	}
> +	if (vbox->vbva_buffers) {
> +		pci_iounmap(vbox->dev->pdev, vbox->vbva_buffers);
> +		vbox->vbva_buffers = NULL;
> +	}
> +}
> +
> +static int vbox_accel_init(struct vbox_private *vbox)
> +{
> +	unsigned int i;
> +
> +	vbox->vbva_info = kcalloc(vbox->num_crtcs, sizeof(*vbox->vbva_info),
> +				  GFP_KERNEL);
> +	if (!vbox->vbva_info)
> +		return -ENOMEM;
> +
> +	/* Take a command buffer for each screen from the end of usable VRAM. */
> +	vbox->available_vram_size -= vbox->num_crtcs * VBVA_MIN_BUFFER_SIZE;
> +
> +	vbox->vbva_buffers = pci_iomap_range(vbox->dev->pdev, 0,
> +					     vbox->available_vram_size,
> +					     vbox->num_crtcs *
> +					     VBVA_MIN_BUFFER_SIZE);
> +	if (!vbox->vbva_buffers)
> +		return -ENOMEM;
> +
> +	for (i = 0; i < vbox->num_crtcs; ++i)
> +		vbva_setup_buffer_context(&vbox->vbva_info[i],
> +					  vbox->available_vram_size +
> +					  i * VBVA_MIN_BUFFER_SIZE,
> +					  VBVA_MIN_BUFFER_SIZE);
> +
> +	return 0;
> +}
> +
> +/** Do we support the 4.3 plus mode hint reporting interface? */
> +static bool have_hgsmi_mode_hints(struct vbox_private *vbox)
> +{
> +	u32 have_hints, have_cursor;
> +	int ret;
> +
> +	ret = hgsmi_query_conf(vbox->guest_pool,
> +			       VBOX_VBVA_CONF32_MODE_HINT_REPORTING,
> +			       &have_hints);
> +	if (RT_FAILURE(ret))

hgsmi_query_conf() returns normal kernel error codes:

	if (ret)
		return false;

> +		return false;
> +
> +	ret = hgsmi_query_conf(vbox->guest_pool,
> +			       VBOX_VBVA_CONF32_GUEST_CURSOR_REPORTING,
> +			       &have_cursor);
> +	if (RT_FAILURE(ret))

Same

> +		return false;
> +
> +	return have_hints == VINF_SUCCESS && have_cursor == VINF_SUCCESS;
> +}
> +
> +static bool vbox_check_supported(u16 id)
> +{
> +	u16 dispi_id;
> +
> +	vbox_write_ioport(VBE_DISPI_INDEX_ID, id);
> +	dispi_id = inw(VBE_DISPI_IOPORT_DATA);
> +
> +	return dispi_id == id;
> +}
> +
> +/**
> + * Set up our heaps and data exchange buffers in VRAM before handing the rest
> + * to the memory manager.
> + */
> +static int vbox_hw_init(struct vbox_private *vbox)
> +{
> +	int ret;
> +
> +	vbox->full_vram_size = inl(VBE_DISPI_IOPORT_DATA);
> +	vbox->any_pitch = vbox_check_supported(VBE_DISPI_ID_ANYX);
> +
> +	DRM_INFO("VRAM %08x\n", vbox->full_vram_size);
> +
> +	/* Map guest-heap at end of vram */
> +	vbox->guest_heap =
> +	    pci_iomap_range(vbox->dev->pdev, 0, GUEST_HEAP_OFFSET(vbox),
> +			    GUEST_HEAP_SIZE);
> +	if (!vbox->guest_heap)
> +		return -ENOMEM;
> +
> +	/* Create guest-heap mem-pool use 2^4 = 16 byte chunks */
> +	vbox->guest_pool = gen_pool_create(4, -1);
> +	if (!vbox->guest_pool)
> +		return -ENOMEM;
> +
> +	ret = gen_pool_add_virt(vbox->guest_pool,
> +				(unsigned long)vbox->guest_heap,
> +				GUEST_HEAP_OFFSET(vbox),
> +				GUEST_HEAP_USABLE_SIZE, -1);
> +	if (ret)
> +		return ret;
> +
> +	ret = hgsmi_test_query_conf(vbox->guest_pool);
> +	if (ret) {
> +		DRM_ERROR("vboxvideo: hgsmi_test_query_conf failed\n");
> +		return ret;
> +	}
> +
> +	/* Reduce available VRAM size to reflect the guest heap. */
> +	vbox->available_vram_size = GUEST_HEAP_OFFSET(vbox);
> +	/* Linux drm represents monitors as a 32-bit array. */
> +	hgsmi_query_conf(vbox->guest_pool, VBOX_VBVA_CONF32_MONITOR_COUNT,
> +			 &vbox->num_crtcs);
> +	vbox->num_crtcs = clamp_t(u32, vbox->num_crtcs, 1, VBOX_MAX_SCREENS);
> +
> +	if (!have_hgsmi_mode_hints(vbox))
> +		return -ENOTSUPP;
> +
> +	vbox->last_mode_hints =
> +	    kcalloc(vbox->num_crtcs, sizeof(struct vbva_modehint), GFP_KERNEL);
> +	if (!vbox->last_mode_hints)
> +		return -ENOMEM;
> +
> +	return vbox_accel_init(vbox);
> +}
> +
> +static void vbox_hw_fini(struct vbox_private *vbox)
> +{
> +	vbox_accel_fini(vbox);
> +	kfree(vbox->last_mode_hints);
> +	vbox->last_mode_hints = NULL;
> +}
> +
> +int vbox_driver_load(struct drm_device *dev, unsigned long flags)
> +{
> +	struct vbox_private *vbox;
> +	int ret = 0;
> +
> +	if (!vbox_check_supported(VBE_DISPI_ID_HGSMI))
> +		return -ENODEV;
> +
> +	vbox = kzalloc(sizeof(*vbox), GFP_KERNEL);
> +	if (!vbox)
> +		return -ENOMEM;
> +
> +	dev->dev_private = vbox;
> +	vbox->dev = dev;
> +
> +	mutex_init(&vbox->hw_mutex);
> +
> +	ret = vbox_hw_init(vbox);
> +	if (ret)
> +		goto out_free;
> +
> +	ret = vbox_mm_init(vbox);
> +	if (ret)
> +		goto out_free;
> +
> +	drm_mode_config_init(dev);
> +
> +	dev->mode_config.funcs = (void *)&vbox_mode_funcs;
> +	dev->mode_config.min_width = 64;
> +	dev->mode_config.min_height = 64;
> +	dev->mode_config.preferred_depth = 24;
> +	dev->mode_config.max_width = VBE_DISPI_MAX_XRES;
> +	dev->mode_config.max_height = VBE_DISPI_MAX_YRES;
> +
> +	ret = vbox_mode_init(dev);
> +	if (ret)
> +		goto out_free;
> +
> +	ret = vbox_irq_init(vbox);
> +	if (ret)
> +		goto out_free;
> +
> +	ret = vbox_fbdev_init(dev);
> +	if (ret)
> +		goto out_free;
> +
> +	return 0;
> +
> +out_free:
> +	vbox_driver_unload(dev);
> +	return ret;
> +}
> +
> +void vbox_driver_unload(struct drm_device *dev)
> +{
> +	struct vbox_private *vbox = dev->dev_private;
> +
> +	vbox_fbdev_fini(dev);
> +	vbox_irq_fini(vbox);
> +	vbox_mode_fini(dev);
> +	if (dev->mode_config.funcs)
> +		drm_mode_config_cleanup(dev);
> +
> +	vbox_hw_fini(vbox);
> +	vbox_mm_fini(vbox);
> +	if (vbox->guest_pool)
> +		gen_pool_destroy(vbox->guest_pool);
> +	if (vbox->guest_heap)
> +		pci_iounmap(dev->pdev, vbox->guest_heap);

Why is releasing ->guest_pool/heap not in vbox_hw_fini()?

> +	kfree(vbox);
> +	dev->dev_private = NULL;

This is not needed.  There are a bunch of these NULL assignments which
are only needed if we call the function twice by mistake.

> +}
> +
> +/**
> + * @note this is described in the DRM framework documentation.  AST does not
> + * have it, but we get an oops on driver unload if it is not present.
> + */
> +void vbox_driver_lastclose(struct drm_device *dev)
> +{
> +	struct vbox_private *vbox = dev->dev_private;
> +
> +	if (vbox->fbdev)
> +		drm_fb_helper_restore_fbdev_mode_unlocked(&vbox->fbdev->helper);
> +}
> +
> +int vbox_gem_create(struct drm_device *dev,
> +		    u32 size, bool iskernel, struct drm_gem_object **obj)
> +{
> +	struct vbox_bo *vboxbo;
> +	int ret;
> +
> +	*obj = NULL;
> +
> +	size = roundup(size, PAGE_SIZE);
> +	if (size == 0)
> +		return -EINVAL;
> +
> +	ret = vbox_bo_create(dev, size, 0, 0, &vboxbo);
> +	if (ret) {
> +		if (ret != -ERESTARTSYS)
> +			DRM_ERROR("failed to allocate GEM object\n");
> +		return ret;
> +	}
> +
> +	*obj = &vboxbo->gem;
> +
> +	return 0;
> +}
> +
> +int vbox_dumb_create(struct drm_file *file,
> +		     struct drm_device *dev, struct drm_mode_create_dumb *args)
> +{
> +	int ret;
> +	struct drm_gem_object *gobj;
> +	u32 handle;
> +
> +	args->pitch = args->width * ((args->bpp + 7) / 8);
> +	args->size = args->pitch * args->height;
> +
> +	ret = vbox_gem_create(dev, args->size, false, &gobj);
> +	if (ret)
> +		return ret;
> +
> +	ret = drm_gem_handle_create(file, gobj, &handle);
> +	drm_gem_object_unreference_unlocked(gobj);
> +	if (ret)
> +		return ret;

This is a resource leak.

> +
> +	args->handle = handle;
> +
> +	return 0;
> +}
> +
> +static void vbox_bo_unref(struct vbox_bo **bo)
> +{
> +	struct ttm_buffer_object *tbo;
> +
> +	if ((*bo) == NULL)
> +		return;
> +
> +	tbo = &((*bo)->bo);
> +	ttm_bo_unref(&tbo);
> +	if (!tbo)
> +		*bo = NULL;
> +}
> +
> +void vbox_gem_free_object(struct drm_gem_object *obj)
> +{
> +	struct vbox_bo *vbox_bo = gem_to_vbox_bo(obj);
> +
> +	vbox_bo_unref(&vbox_bo);
> +}
> +
> +static inline u64 vbox_bo_mmap_offset(struct vbox_bo *bo)
> +{
> +	return drm_vma_node_offset_addr(&bo->bo.vma_node);
> +}
> +
> +int
> +vbox_dumb_mmap_offset(struct drm_file *file,
> +		      struct drm_device *dev,
> +		      u32 handle, u64 *offset)
> +{
> +	struct drm_gem_object *obj;
> +	int ret;
> +	struct vbox_bo *bo;
> +
> +	mutex_lock(&dev->struct_mutex);
> +	obj = drm_gem_object_lookup(file, handle);
> +	if (!obj) {
> +		ret = -ENOENT;
> +		goto out_unlock;
> +	}
> +
> +	bo = gem_to_vbox_bo(obj);
> +	*offset = vbox_bo_mmap_offset(bo);
> +
> +	drm_gem_object_unreference(obj);
> +	ret = 0;
> +
> +out_unlock:
> +	mutex_unlock(&dev->struct_mutex);
> +	return ret;
> +}
> diff --git a/drivers/staging/vboxvideo/vbox_mode.c b/drivers/staging/vboxvideo/vbox_mode.c
> new file mode 100644
> index 000000000000..fbb572c04c0f
> --- /dev/null
> +++ b/drivers/staging/vboxvideo/vbox_mode.c
> @@ -0,0 +1,864 @@
> +/*
> + * Copyright (C) 2013-2017 Oracle Corporation
> + * This file is based on ast_mode.c
> + * Copyright 2012 Red Hat Inc.
> + * Parts based on xf86-video-ast
> + * Copyright (c) 2005 ASPEED Technology Inc.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the
> + * "Software"), to deal in the Software without restriction, including
> + * without limitation the rights to use, copy, modify, merge, publish,
> + * distribute, sub license, and/or sell copies of the Software, and to
> + * permit persons to whom the Software is furnished to do so, subject to
> + * the following conditions:
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
> + * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
> + * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
> + * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
> + * USE OR OTHER DEALINGS IN THE SOFTWARE.
> + *
> + * The above copyright notice and this permission notice (including the
> + * next paragraph) shall be included in all copies or substantial portions
> + * of the Software.
> + *
> + */
> +/*
> + * Authors: Dave Airlie <airlied@xxxxxxxxxx>
> + *          Michael Thayer <michael.thayer@xxxxxxxxxx,
> + *          Hans de Goede <hdegoede@xxxxxxxxxx>
> + */
> +#include <linux/export.h>
> +#include <drm/drm_crtc_helper.h>
> +#include <drm/drm_plane_helper.h>
> +
> +#include "vbox_drv.h"
> +#include "vboxvideo.h"
> +#include "hgsmi_channels.h"
> +
> +static int vbox_cursor_set2(struct drm_crtc *crtc, struct drm_file *file_priv,
> +			    u32 handle, u32 width, u32 height,
> +			    s32 hot_x, s32 hot_y);
> +static int vbox_cursor_move(struct drm_crtc *crtc, int x, int y);
> +
> +/**
> + * Set a graphics mode.  Poke any required values into registers, do an HGSMI
> + * mode set and tell the host we support advanced graphics functions.
> + */
> +static void vbox_do_modeset(struct drm_crtc *crtc,
> +			    const struct drm_display_mode *mode)
> +{
> +	struct vbox_crtc *vbox_crtc = to_vbox_crtc(crtc);
> +	struct vbox_private *vbox;
> +	int width, height, bpp, pitch;
> +	unsigned int crtc_id;
> +	u16 flags;
> +	s32 x_offset, y_offset;
> +
> +	vbox = crtc->dev->dev_private;
> +	width = mode->hdisplay ? mode->hdisplay : 640;
> +	height = mode->vdisplay ? mode->vdisplay : 480;
> +	crtc_id = vbox_crtc->crtc_id;
> +	bpp = crtc->enabled ? CRTC_FB(crtc)->format->cpp[0] * 8 : 32;
> +	pitch = crtc->enabled ? CRTC_FB(crtc)->pitches[0] : width * bpp / 8;
> +	x_offset = vbox->single_framebuffer ? crtc->x : vbox_crtc->x_hint;
> +	y_offset = vbox->single_framebuffer ? crtc->y : vbox_crtc->y_hint;
> +
> +	/*
> +	 * This is the old way of setting graphics modes.  It assumed one screen
> +	 * and a frame-buffer at the start of video RAM.  On older versions of
> +	 * VirtualBox, certain parts of the code still assume that the first
> +	 * screen is programmed this way, so try to fake it.
> +	 */
> +	if (vbox_crtc->crtc_id == 0 && crtc->enabled &&
> +	    vbox_crtc->fb_offset / pitch < 0xffff - crtc->y &&
> +	    vbox_crtc->fb_offset % (bpp / 8) == 0) {
> +		vbox_write_ioport(VBE_DISPI_INDEX_XRES, width);
> +		vbox_write_ioport(VBE_DISPI_INDEX_YRES, height);
> +		vbox_write_ioport(VBE_DISPI_INDEX_VIRT_WIDTH, pitch * 8 / bpp);
> +		vbox_write_ioport(VBE_DISPI_INDEX_BPP,
> +				  CRTC_FB(crtc)->format->cpp[0] * 8);
> +		vbox_write_ioport(VBE_DISPI_INDEX_ENABLE, VBE_DISPI_ENABLED);
> +		vbox_write_ioport(
> +			VBE_DISPI_INDEX_X_OFFSET,
> +			vbox_crtc->fb_offset % pitch / bpp * 8 + crtc->x);
> +		vbox_write_ioport(VBE_DISPI_INDEX_Y_OFFSET,
> +				  vbox_crtc->fb_offset / pitch + crtc->y);
> +	}
> +
> +	flags = VBVA_SCREEN_F_ACTIVE;
> +	flags |= (crtc->enabled && !vbox_crtc->blanked) ?
> +		 0 : VBVA_SCREEN_F_BLANK;
> +	flags |= vbox_crtc->disconnected ? VBVA_SCREEN_F_DISABLED : 0;
> +	hgsmi_process_display_info(vbox->guest_pool, vbox_crtc->crtc_id,
> +				   x_offset, y_offset,
> +				   crtc->x * bpp / 8 + crtc->y * pitch,
> +				   pitch, width, height,
> +				   vbox_crtc->blanked ? 0 : bpp, flags);
> +}
> +
> +static int vbox_set_view(struct drm_crtc *crtc)
> +{
> +	struct vbox_crtc *vbox_crtc = to_vbox_crtc(crtc);
> +	struct vbox_private *vbox = crtc->dev->dev_private;
> +	struct vbva_infoview *p;
> +
> +	/*
> +	 * Tell the host about the view.  This design originally targeted the
> +	 * Windows XP driver architecture and assumed that each screen would
> +	 * have a dedicated frame buffer with the command buffer following it,
> +	 * the whole being a "view".  The host works out which screen a command
> +	 * buffer belongs to by checking whether it is in the first view, then
> +	 * whether it is in the second and so on.  The first match wins.  We
> +	 * cheat around this by making the first view be the managed memory
> +	 * plus the first command buffer, the second the same plus the second
> +	 * buffer and so on.
> +	 */
> +	p = hgsmi_buffer_alloc(vbox->guest_pool, sizeof(*p),
> +			       HGSMI_CH_VBVA, VBVA_INFO_VIEW);
> +	if (!p)
> +		return -ENOMEM;
> +
> +	p->view_index = vbox_crtc->crtc_id;
> +	p->view_offset = vbox_crtc->fb_offset;
> +	p->view_size = vbox->available_vram_size - vbox_crtc->fb_offset +
> +		       vbox_crtc->crtc_id * VBVA_MIN_BUFFER_SIZE;
> +	p->max_screen_size = vbox->available_vram_size - vbox_crtc->fb_offset;
> +
> +	hgsmi_buffer_submit(vbox->guest_pool, p);
> +	hgsmi_buffer_free(vbox->guest_pool, p);
> +
> +	return 0;
> +}
> +
> +static void vbox_crtc_load_lut(struct drm_crtc *crtc)
> +{
> +}
> +
> +static void vbox_crtc_dpms(struct drm_crtc *crtc, int mode)
> +{
> +	struct vbox_crtc *vbox_crtc = to_vbox_crtc(crtc);
> +	struct vbox_private *vbox = crtc->dev->dev_private;
> +
> +	switch (mode) {
> +	case DRM_MODE_DPMS_ON:
> +		vbox_crtc->blanked = false;
> +		break;
> +	case DRM_MODE_DPMS_STANDBY:
> +	case DRM_MODE_DPMS_SUSPEND:
> +	case DRM_MODE_DPMS_OFF:
> +		vbox_crtc->blanked = true;
> +		break;
> +	}
> +
> +	mutex_lock(&vbox->hw_mutex);
> +	vbox_do_modeset(crtc, &crtc->hwmode);
> +	mutex_unlock(&vbox->hw_mutex);
> +}
> +
> +static bool vbox_crtc_mode_fixup(struct drm_crtc *crtc,
> +				 const struct drm_display_mode *mode,
> +				 struct drm_display_mode *adjusted_mode)
> +{
> +	return true;
> +}
> +
> +/*
> + * Try to map the layout of virtual screens to the range of the input device.
> + * Return true if we need to re-set the crtc modes due to screen offset
> + * changes.
> + */
> +static bool vbox_set_up_input_mapping(struct vbox_private *vbox)
> +{
> +	struct drm_crtc *crtci;
> +	struct drm_connector *connectori;
> +	struct drm_framebuffer *fb1 = NULL;
> +	bool single_framebuffer = true;
> +	bool old_single_framebuffer = vbox->single_framebuffer;
> +	u16 width = 0, height = 0;
> +
> +	/*
> +	 * Are we using an X.Org-style single large frame-buffer for all crtcs?
> +	 * If so then screen layout can be deduced from the crtc offsets.
> +	 * Same fall-back if this is the fbdev frame-buffer.
> +	 */
> +	list_for_each_entry(crtci, &vbox->dev->mode_config.crtc_list, head) {
> +		if (!fb1) {
> +			fb1 = CRTC_FB(crtci);
> +			if (to_vbox_framebuffer(fb1) == &vbox->fbdev->afb)
> +				break;
> +		} else if (CRTC_FB(crtci) && fb1 != CRTC_FB(crtci)) {
> +			single_framebuffer = false;
> +		}
> +	}
> +	if (single_framebuffer) {
> +		list_for_each_entry(crtci, &vbox->dev->mode_config.crtc_list,
> +				    head) {
> +			if (to_vbox_crtc(crtci)->crtc_id == 0) {

Invert this if statement and continue, to pull the block below out one
indent level:

	if (to_vbox_crtc(crtci)->crtc_id != 0)
		continue;

> +				vbox->single_framebuffer = true;
> +				vbox->input_mapping_width =
> +				    CRTC_FB(crtci)->width;
> +				vbox->input_mapping_height =
> +				    CRTC_FB(crtci)->height;
> +				return old_single_framebuffer !=
> +				    vbox->single_framebuffer;
> +			}
> +		}
> +	}
> +	/* Otherwise calculate the total span of all screens. */
> +	list_for_each_entry(connectori, &vbox->dev->mode_config.connector_list,
> +			    head) {
> +		struct vbox_connector *vbox_connector =
> +		    to_vbox_connector(connectori);
> +		struct vbox_crtc *vbox_crtc = vbox_connector->vbox_crtc;
> +
> +		width = max_t(u16, width, vbox_crtc->x_hint +
> +					  vbox_connector->mode_hint.width);
> +		height = max_t(u16, height, vbox_crtc->y_hint +
> +					    vbox_connector->mode_hint.height);
> +	}
> +
> +	vbox->single_framebuffer = false;
> +	vbox->input_mapping_width = width;
> +	vbox->input_mapping_height = height;
> +
> +	return old_single_framebuffer != vbox->single_framebuffer;
> +}
> +
> +static int vbox_crtc_do_set_base(struct drm_crtc *crtc,
> +				 struct drm_framebuffer *old_fb, int x, int y)
> +{
> +	struct vbox_private *vbox = crtc->dev->dev_private;
> +	struct vbox_crtc *vbox_crtc = to_vbox_crtc(crtc);
> +	struct drm_gem_object *obj;
> +	struct vbox_framebuffer *vbox_fb;
> +	struct vbox_bo *bo;
> +	int ret;
> +	u64 gpu_addr;
> +
> +	/* Unpin the previous fb. */
> +	if (old_fb) {
> +		vbox_fb = to_vbox_framebuffer(old_fb);
> +		obj = vbox_fb->obj;
> +		bo = gem_to_vbox_bo(obj);
> +		ret = vbox_bo_reserve(bo, false);
> +		if (ret)
> +			return ret;
> +
> +		vbox_bo_unpin(bo);
> +		vbox_bo_unreserve(bo);
> +	}
> +
> +	vbox_fb = to_vbox_framebuffer(CRTC_FB(crtc));
> +	obj = vbox_fb->obj;
> +	bo = gem_to_vbox_bo(obj);
> +
> +	ret = vbox_bo_reserve(bo, false);
> +	if (ret)
> +		return ret;
> +
> +	ret = vbox_bo_pin(bo, TTM_PL_FLAG_VRAM, &gpu_addr);
> +	if (ret) {
> +		vbox_bo_unreserve(bo);
> +		return ret;
> +	}
> +
> +	if (&vbox->fbdev->afb == vbox_fb)
> +		vbox_fbdev_set_base(vbox, gpu_addr);
> +	vbox_bo_unreserve(bo);
> +
> +	/* vbox_set_start_address_crt1(crtc, (u32)gpu_addr); */
> +	vbox_crtc->fb_offset = gpu_addr;
> +	if (vbox_set_up_input_mapping(vbox)) {
> +		struct drm_crtc *crtci;
> +
> +		list_for_each_entry(crtci, &vbox->dev->mode_config.crtc_list,
> +				    head) {
> +			vbox_set_view(crtc);
> +			vbox_do_modeset(crtci, &crtci->mode);
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +static int vbox_crtc_mode_set_base(struct drm_crtc *crtc, int x, int y,
> +				   struct drm_framebuffer *old_fb)
> +{
> +	return vbox_crtc_do_set_base(crtc, old_fb, x, y);
> +}
> +
> +static int vbox_crtc_mode_set(struct drm_crtc *crtc,
> +			      struct drm_display_mode *mode,
> +			      struct drm_display_mode *adjusted_mode,
> +			      int x, int y, struct drm_framebuffer *old_fb)
> +{
> +	struct vbox_private *vbox = crtc->dev->dev_private;
> +	int rc = 0;

Remove the initializer.  "rc" is always assigned before it is read.

> +
> +	vbox_crtc_mode_set_base(crtc, x, y, old_fb);
> +
> +	mutex_lock(&vbox->hw_mutex);
> +	rc = vbox_set_view(crtc);
> +	if (!rc)
> +		vbox_do_modeset(crtc, mode);
> +	hgsmi_update_input_mapping(vbox->guest_pool, 0, 0,
> +				   vbox->input_mapping_width,
> +				   vbox->input_mapping_height);
> +	mutex_unlock(&vbox->hw_mutex);
> +
> +	return rc;
> +}
> +
> +static void vbox_crtc_disable(struct drm_crtc *crtc)
> +{
> +}
> +
> +static void vbox_crtc_prepare(struct drm_crtc *crtc)
> +{
> +}
> +
> +static void vbox_crtc_commit(struct drm_crtc *crtc)
> +{
> +}
> +
> +static const struct drm_crtc_helper_funcs vbox_crtc_helper_funcs = {
> +	.dpms = vbox_crtc_dpms,
> +	.mode_fixup = vbox_crtc_mode_fixup,
> +	.mode_set = vbox_crtc_mode_set,
> +	/* .mode_set_base = vbox_crtc_mode_set_base, */
> +	.disable = vbox_crtc_disable,
> +	.load_lut = vbox_crtc_load_lut,
> +	.prepare = vbox_crtc_prepare,
> +	.commit = vbox_crtc_commit,
> +};
> +
> +static void vbox_crtc_reset(struct drm_crtc *crtc)
> +{
> +}
> +
> +static void vbox_crtc_destroy(struct drm_crtc *crtc)
> +{
> +	drm_crtc_cleanup(crtc);
> +	kfree(crtc);
> +}
> +
> +static const struct drm_crtc_funcs vbox_crtc_funcs = {
> +	.cursor_move = vbox_cursor_move,
> +	.cursor_set2 = vbox_cursor_set2,
> +	.reset = vbox_crtc_reset,
> +	.set_config = drm_crtc_helper_set_config,
> +	/* .gamma_set = vbox_crtc_gamma_set, */
> +	.destroy = vbox_crtc_destroy,
> +};
> +
> +static struct vbox_crtc *vbox_crtc_init(struct drm_device *dev, unsigned int i)
> +{
> +	struct vbox_crtc *vbox_crtc;
> +
> +	vbox_crtc = kzalloc(sizeof(*vbox_crtc), GFP_KERNEL);
> +	if (!vbox_crtc)
> +		return NULL;
> +
> +	vbox_crtc->crtc_id = i;
> +
> +	drm_crtc_init(dev, &vbox_crtc->base, &vbox_crtc_funcs);
> +	drm_mode_crtc_set_gamma_size(&vbox_crtc->base, 256);
> +	drm_crtc_helper_add(&vbox_crtc->base, &vbox_crtc_helper_funcs);
> +
> +	return vbox_crtc;
> +}
> +
> +static void vbox_encoder_destroy(struct drm_encoder *encoder)
> +{
> +	drm_encoder_cleanup(encoder);
> +	kfree(encoder);
> +}
> +
> +static struct drm_encoder *vbox_best_single_encoder(struct drm_connector
> +						    *connector)
> +{
> +	int enc_id = connector->encoder_ids[0];
> +
> +	/* pick the encoder ids */
> +	if (enc_id)
> +		return drm_encoder_find(connector->dev, enc_id);
> +
> +	return NULL;
> +}
> +
> +static const struct drm_encoder_funcs vbox_enc_funcs = {
> +	.destroy = vbox_encoder_destroy,
> +};
> +
> +static void vbox_encoder_dpms(struct drm_encoder *encoder, int mode)
> +{
> +}
> +
> +static bool vbox_mode_fixup(struct drm_encoder *encoder,
> +			    const struct drm_display_mode *mode,
> +			    struct drm_display_mode *adjusted_mode)
> +{
> +	return true;
> +}
> +
> +static void vbox_encoder_mode_set(struct drm_encoder *encoder,
> +				  struct drm_display_mode *mode,
> +				  struct drm_display_mode *adjusted_mode)
> +{
> +}
> +
> +static void vbox_encoder_prepare(struct drm_encoder *encoder)
> +{
> +}
> +
> +static void vbox_encoder_commit(struct drm_encoder *encoder)
> +{
> +}
> +
> +static const struct drm_encoder_helper_funcs vbox_enc_helper_funcs = {
> +	.dpms = vbox_encoder_dpms,
> +	.mode_fixup = vbox_mode_fixup,
> +	.prepare = vbox_encoder_prepare,
> +	.commit = vbox_encoder_commit,
> +	.mode_set = vbox_encoder_mode_set,
> +};
> +
> +static struct drm_encoder *vbox_encoder_init(struct drm_device *dev,
> +					     unsigned int i)
> +{
> +	struct vbox_encoder *vbox_encoder;
> +
> +	vbox_encoder = kzalloc(sizeof(*vbox_encoder), GFP_KERNEL);
> +	if (!vbox_encoder)
> +		return NULL;
> +
> +	drm_encoder_init(dev, &vbox_encoder->base, &vbox_enc_funcs,
> +			 DRM_MODE_ENCODER_DAC, NULL);
> +	drm_encoder_helper_add(&vbox_encoder->base, &vbox_enc_helper_funcs);
> +
> +	vbox_encoder->base.possible_crtcs = 1 << i;
> +	return &vbox_encoder->base;
> +}
> +
> +/**
> + * Generate EDID data with a mode-unique serial number for the virtual
> + *  monitor to try to persuade Unity that different modes correspond to
> + *  different monitors and it should not try to force the same resolution on
> + *  them.
> + */
> +static void vbox_set_edid(struct drm_connector *connector, int width,
> +			  int height)
> +{
> +	enum { EDID_SIZE = 128 };
> +	unsigned char edid[EDID_SIZE] = {
> +		0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00,	/* header */
> +		0x58, 0x58,	/* manufacturer (VBX) */
> +		0x00, 0x00,	/* product code */
> +		0x00, 0x00, 0x00, 0x00,	/* serial number goes here */
> +		0x01,		/* week of manufacture */
> +		0x00,		/* year of manufacture */
> +		0x01, 0x03,	/* EDID version */
> +		0x80,		/* capabilities - digital */
> +		0x00,		/* horiz. res in cm, zero for projectors */
> +		0x00,		/* vert. res in cm */
> +		0x78,		/* display gamma (120 == 2.2). */
> +		0xEE,		/* features (standby, suspend, off, RGB, std */
> +				/* colour space, preferred timing mode) */
> +		0xEE, 0x91, 0xA3, 0x54, 0x4C, 0x99, 0x26, 0x0F, 0x50, 0x54,
> +		/* chromaticity for standard colour space. */
> +		0x00, 0x00, 0x00,	/* no default timings */
> +		0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01,
> +		    0x01, 0x01,
> +		0x01, 0x01, 0x01, 0x01,	/* no standard timings */
> +		0x00, 0x00, 0x00, 0x06, 0x00, 0x00, 0x06, 0x00, 0x02, 0x02,
> +		    0x02, 0x02,
> +		/* descriptor block 1 goes below */
> +		0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
> +		/* descriptor block 2, monitor ranges */
> +		0x00, 0x00, 0x00, 0xFD, 0x00,
> +		0x00, 0xC8, 0x00, 0xC8, 0x64, 0x00, 0x0A, 0x20, 0x20, 0x20,
> +		    0x20, 0x20,
> +		/* 0-200Hz vertical, 0-200KHz horizontal, 1000MHz pixel clock */
> +		0x20,
> +		/* descriptor block 3, monitor name */
> +		0x00, 0x00, 0x00, 0xFC, 0x00,
> +		'V', 'B', 'O', 'X', ' ', 'm', 'o', 'n', 'i', 't', 'o', 'r',
> +		'\n',
> +		/* descriptor block 4: dummy data */
> +		0x00, 0x00, 0x00, 0x10, 0x00,
> +		0x0A, 0x20, 0x20, 0x20, 0x20, 0x20,
> +		0x20, 0x20, 0x20, 0x20, 0x20, 0x20,
> +		0x20,
> +		0x00,		/* number of extensions */
> +		0x00		/* checksum goes here */
> +	};
> +	int clock = (width + 6) * (height + 6) * 60 / 10000;
> +	unsigned int i, sum = 0;
> +
> +	edid[12] = width & 0xff;
> +	edid[13] = width >> 8;
> +	edid[14] = height & 0xff;
> +	edid[15] = height >> 8;
> +	edid[54] = clock & 0xff;
> +	edid[55] = clock >> 8;
> +	edid[56] = width & 0xff;
> +	edid[58] = (width >> 4) & 0xf0;
> +	edid[59] = height & 0xff;
> +	edid[61] = (height >> 4) & 0xf0;
> +	for (i = 0; i < EDID_SIZE - 1; ++i)
> +		sum += edid[i];
> +	edid[EDID_SIZE - 1] = (0x100 - (sum & 0xFF)) & 0xFF;
> +	drm_mode_connector_update_edid_property(connector, (struct edid *)edid);
> +}
> +
> +static int vbox_get_modes(struct drm_connector *connector)
> +{
> +	struct vbox_connector *vbox_connector = NULL;
> +	struct drm_display_mode *mode = NULL;
> +	struct vbox_private *vbox = NULL;
> +	unsigned int num_modes = 0;
> +	int preferred_width, preferred_height;
> +
> +	vbox_connector = to_vbox_connector(connector);
> +	vbox = connector->dev->dev_private;
> +	/*
> +	 * Heuristic: we do not want to tell the host that we support dynamic
> +	 * resizing unless we feel confident that the user space client using
> +	 * the video driver can handle hot-plug events.  So the first time modes
> +	 * are queried after a "master" switch we tell the host that we do not,
> +	 * and immediately after we send the client a hot-plug notification as
> +	 * a test to see if they will respond and query again.
> +	 * That is also the reason why capabilities are reported to the host at
> +	 * this place in the code rather than elsewhere.
> +	 * We need to report the flags location before reporting the IRQ
> +	 * capability.
> +	 */
> +	hgsmi_report_flags_location(vbox->guest_pool, GUEST_HEAP_OFFSET(vbox) +
> +				    HOST_FLAGS_OFFSET);
> +	if (vbox_connector->vbox_crtc->crtc_id == 0)
> +		vbox_report_caps(vbox);
> +	if (!vbox->initial_mode_queried) {
> +		if (vbox_connector->vbox_crtc->crtc_id == 0) {
> +			vbox->initial_mode_queried = true;
> +			vbox_report_hotplug(vbox);
> +		}
> +		return drm_add_modes_noedid(connector, 800, 600);
> +	}
> +	num_modes = drm_add_modes_noedid(connector, 2560, 1600);
> +	preferred_width = vbox_connector->mode_hint.width ?
> +			  vbox_connector->mode_hint.width : 1024;
> +	preferred_height = vbox_connector->mode_hint.height ?
> +			   vbox_connector->mode_hint.height : 768;
> +	mode = drm_cvt_mode(connector->dev, preferred_width, preferred_height,
> +			    60, false, false, false);
> +	if (mode) {
> +		mode->type |= DRM_MODE_TYPE_PREFERRED;
> +		drm_mode_probed_add(connector, mode);
> +		++num_modes;
> +	}
> +	vbox_set_edid(connector, preferred_width, preferred_height);
> +	drm_object_property_set_value(
> +		&connector->base, vbox->dev->mode_config.suggested_x_property,
> +		vbox_connector->vbox_crtc->x_hint);
> +	drm_object_property_set_value(
> +		&connector->base, vbox->dev->mode_config.suggested_y_property,
> +		vbox_connector->vbox_crtc->y_hint);
> +
> +	return num_modes;
> +}
> +
> +static int vbox_mode_valid(struct drm_connector *connector,
> +			   struct drm_display_mode *mode)
> +{
> +	return MODE_OK;
> +}
> +
> +static void vbox_connector_destroy(struct drm_connector *connector)
> +{
> +	struct vbox_connector *vbox_connector = NULL;
> +
> +	vbox_connector = to_vbox_connector(connector);
> +	drm_connector_unregister(connector);
> +	drm_connector_cleanup(connector);
> +	kfree(connector);
> +}
> +
> +static enum drm_connector_status
> +vbox_connector_detect(struct drm_connector *connector, bool force)
> +{
> +	struct vbox_connector *vbox_connector = NULL;
> +
> +	(void)force;

This is to silence some horrible static analysis tool.  It's not
required for kernel code.

> +	vbox_connector = to_vbox_connector(connector);
> +
> +	return vbox_connector->mode_hint.disconnected ?
> +	    connector_status_disconnected : connector_status_connected;
> +}
> +
> +static int vbox_fill_modes(struct drm_connector *connector, u32 max_x,
> +			   u32 max_y)
> +{
> +	struct vbox_connector *vbox_connector;
> +	struct drm_device *dev;
> +	struct drm_display_mode *mode, *iterator;
> +
> +	vbox_connector = to_vbox_connector(connector);
> +	dev = vbox_connector->base.dev;
> +	list_for_each_entry_safe(mode, iterator, &connector->modes, head) {
> +		list_del(&mode->head);
> +		drm_mode_destroy(dev, mode);
> +	}
> +
> +	return drm_helper_probe_single_connector_modes(connector, max_x, max_y);
> +}
> +
> +static const struct drm_connector_helper_funcs vbox_connector_helper_funcs = {
> +	.mode_valid = vbox_mode_valid,
> +	.get_modes = vbox_get_modes,
> +	.best_encoder = vbox_best_single_encoder,
> +};
> +
> +static const struct drm_connector_funcs vbox_connector_funcs = {
> +	.dpms = drm_helper_connector_dpms,
> +	.detect = vbox_connector_detect,
> +	.fill_modes = vbox_fill_modes,
> +	.destroy = vbox_connector_destroy,
> +};
> +
> +static int vbox_connector_init(struct drm_device *dev,
> +			       struct vbox_crtc *vbox_crtc,
> +			       struct drm_encoder *encoder)
> +{
> +	struct vbox_connector *vbox_connector;
> +	struct drm_connector *connector;
> +
> +	vbox_connector = kzalloc(sizeof(*vbox_connector), GFP_KERNEL);
> +	if (!vbox_connector)
> +		return -ENOMEM;
> +
> +	connector = &vbox_connector->base;
> +	vbox_connector->vbox_crtc = vbox_crtc;
> +
> +	drm_connector_init(dev, connector, &vbox_connector_funcs,
> +			   DRM_MODE_CONNECTOR_VGA);
> +	drm_connector_helper_add(connector, &vbox_connector_helper_funcs);
> +
> +	connector->interlace_allowed = 0;
> +	connector->doublescan_allowed = 0;
> +
> +	drm_mode_create_suggested_offset_properties(dev);
> +	drm_object_attach_property(&connector->base,
> +				   dev->mode_config.suggested_x_property, -1);
> +	drm_object_attach_property(&connector->base,
> +				   dev->mode_config.suggested_y_property, -1);
> +	drm_connector_register(connector);
> +
> +	drm_mode_connector_attach_encoder(connector, encoder);
> +
> +	return 0;
> +}
> +
> +int vbox_mode_init(struct drm_device *dev)
> +{
> +	struct vbox_private *vbox = dev->dev_private;
> +	struct drm_encoder *encoder;
> +	struct vbox_crtc *vbox_crtc;
> +	unsigned int i;
> +
> +	/* vbox_cursor_init(dev); */
> +	for (i = 0; i < vbox->num_crtcs; ++i) {
> +		vbox_crtc = vbox_crtc_init(dev, i);
> +		if (!vbox_crtc)
> +			return -ENOMEM;
> +		encoder = vbox_encoder_init(dev, i);
> +		if (!encoder)
> +			return -ENOMEM;
> +		vbox_connector_init(dev, vbox_crtc, encoder);

We should probably check vbox_connector_init() for errors as well, since it
can fail with -ENOMEM.

> +	}
> +
> +	return 0;
> +}
> +
> +void vbox_mode_fini(struct drm_device *dev)
> +{
> +	/* vbox_cursor_fini(dev); */
> +}
> +
> +/**
> + * Copy the ARGB image and generate the mask, which is needed in case the host
> + * does not support ARGB cursors.  The mask is a 1BPP bitmap with the bit set
> + * if the corresponding alpha value in the ARGB image is greater than 0xF0.
> + */
> +static void copy_cursor_image(u8 *src, u8 *dst, u32 width, u32 height,
> +			      size_t mask_size)
> +{
> +	size_t line_size = (width + 7) / 8;

	size_t line_size = DIV_ROUND_UP(width, 8);

> +	u32 i, j;
> +
> +	memcpy(dst + mask_size, src, width * height * 4);
> +	for (i = 0; i < height; ++i)
> +		for (j = 0; j < width; ++j)
> +			if (((u32 *)src)[i * width + j] > 0xf0000000)
> +				dst[i * line_size + j / 8] |= (0x80 >> (j % 8));
> +}
> +
> +static int vbox_cursor_set2(struct drm_crtc *crtc, struct drm_file *file_priv,
> +			    u32 handle, u32 width, u32 height,
> +			    s32 hot_x, s32 hot_y)
> +{
> +	struct vbox_private *vbox = crtc->dev->dev_private;
> +	struct vbox_crtc *vbox_crtc = to_vbox_crtc(crtc);
> +	struct ttm_bo_kmap_obj uobj_map;
> +	size_t data_size, mask_size;
> +	struct drm_gem_object *obj;
> +	u32 flags, caps = 0;
> +	struct vbox_bo *bo;
> +	bool src_isiomem;
> +	u8 *dst = NULL;
> +	u8 *src;
> +	int ret;
> +
> +	/*
> +	 * Re-set this regularly as in 5.0.20 and earlier the information was
> +	 * lost on save and restore.
> +	 */
> +	hgsmi_update_input_mapping(vbox->guest_pool, 0, 0,
> +				   vbox->input_mapping_width,
> +				   vbox->input_mapping_height);
> +	if (!handle) {
> +		bool cursor_enabled = false;
> +		struct drm_crtc *crtci;
> +
> +		/* Hide cursor. */
> +		vbox_crtc->cursor_enabled = false;
> +		list_for_each_entry(crtci, &vbox->dev->mode_config.crtc_list,
> +				    head)
> +			if (to_vbox_crtc(crtci)->cursor_enabled)
> +				cursor_enabled = true;


Multi-line indented blocks like this get {} curly braces:

		list_for_each_entry(crtci, &vbox->dev->mode_config.crtc_list,
				    head) {
			if (to_vbox_crtc(crtci)->cursor_enabled)
				cursor_enabled = true;
		}

> +
> +		if (!cursor_enabled)
> +			hgsmi_update_pointer_shape(vbox->guest_pool, 0, 0, 0,
> +						   0, 0, NULL, 0);
> +		return 0;
> +	}
> +
> +	vbox_crtc->cursor_enabled = true;
> +
> +	if (width > VBOX_MAX_CURSOR_WIDTH || height > VBOX_MAX_CURSOR_HEIGHT ||
> +	    width == 0 || height == 0)
> +		return -EINVAL;
> +
> +	ret = hgsmi_query_conf(vbox->guest_pool,
> +			       VBOX_VBVA_CONF32_CURSOR_CAPABILITIES, &caps);
> +	if (ret)
> +		return ret;
> +
> +	if (!(caps & VBOX_VBVA_CURSOR_CAPABILITY_HARDWARE)) {
> +		/*
> +		 * -EINVAL means cursor_set2() not supported, -EAGAIN means
> +		 * retry at once.
> +		 */
> +		return -EBUSY;
> +	}
> +
> +	obj = drm_gem_object_lookup(file_priv, handle);
> +	if (!obj) {
> +		DRM_ERROR("Cannot find cursor object %x for crtc\n", handle);
> +		return -ENOENT;
> +	}
> +
> +	bo = gem_to_vbox_bo(obj);
> +	ret = vbox_bo_reserve(bo, false);
> +	if (ret)
> +		goto out_unref_obj;
> +
> +	/*
> +	 * The mask must be calculated based on the alpha
> +	 * channel, one bit per ARGB word, and must be 32-bit
> +	 * padded.
> +	 */
> +	mask_size = ((width + 7) / 8 * height + 3) & ~3;
> +	data_size = width * height * 4 + mask_size;
> +	vbox->cursor_hot_x = min_t(u32, max(hot_x, 0), width);
> +	vbox->cursor_hot_y = min_t(u32, max(hot_y, 0), height);
> +	vbox->cursor_width = width;
> +	vbox->cursor_height = height;
> +	vbox->cursor_data_size = data_size;
> +	dst = vbox->cursor_data;
> +
> +	ret = ttm_bo_kmap(&bo->bo, 0, bo->bo.num_pages, &uobj_map);
> +	if (ret) {
> +		vbox->cursor_data_size = 0;
> +		goto out_unreserve_bo;
> +	}
> +
> +	src = ttm_kmap_obj_virtual(&uobj_map, &src_isiomem);
> +	if (src_isiomem) {
> +		DRM_ERROR("src cursor bo not in main memory\n");
> +		ret = -EIO;
> +		goto out_unmap_bo;
> +	}
> +
> +	copy_cursor_image(src, dst, width, height, mask_size);
> +
> +	flags = VBOX_MOUSE_POINTER_VISIBLE | VBOX_MOUSE_POINTER_SHAPE |
> +		VBOX_MOUSE_POINTER_ALPHA;
> +	ret = hgsmi_update_pointer_shape(vbox->guest_pool, flags,
> +					 vbox->cursor_hot_x, vbox->cursor_hot_y,
> +					 width, height, dst, data_size);
> +out_unmap_bo:
> +	ttm_bo_kunmap(&uobj_map);
> +out_unreserve_bo:
> +	vbox_bo_unreserve(bo);
> +out_unref_obj:
> +	drm_gem_object_unreference_unlocked(obj);
> +
> +	return ret;
> +}
> +
> +static int vbox_cursor_move(struct drm_crtc *crtc, int x, int y)
> +{
> +	struct vbox_private *vbox = crtc->dev->dev_private;
> +	u32 flags = VBOX_MOUSE_POINTER_VISIBLE |
> +	    VBOX_MOUSE_POINTER_SHAPE | VBOX_MOUSE_POINTER_ALPHA;
> +	s32 crtc_x =
> +	    vbox->single_framebuffer ? crtc->x : to_vbox_crtc(crtc)->x_hint;
> +	s32 crtc_y =
> +	    vbox->single_framebuffer ? crtc->y : to_vbox_crtc(crtc)->y_hint;
> +	u32 host_x, host_y;
> +	u32 hot_x = 0;
> +	u32 hot_y = 0;
> +	int ret;
> +
> +	/*
> +	 * We compare these to unsigned later and don't
> +	 * need to handle negative.
> +	 */
> +	if (x + crtc_x < 0 || y + crtc_y < 0 || vbox->cursor_data_size == 0)
> +		return 0;
> +
> +	ret = hgsmi_cursor_position(vbox->guest_pool, true, x + crtc_x,
> +				    y + crtc_y, &host_x, &host_y);
> +	/* Work around a bug after save and restore in 5.0.20 and earlier. */
> +	if (ret || (host_x == 0 && host_y == 0))
> +		return ret;

Are we deliberately returning success if host_x and host_y are both zero?

> +
> +	if (x + crtc_x < host_x)
> +		hot_x = min(host_x - x - crtc_x, vbox->cursor_width);
> +	if (y + crtc_y < host_y)
> +		hot_y = min(host_y - y - crtc_y, vbox->cursor_height);
> +
> +	if (hot_x == vbox->cursor_hot_x && hot_y == vbox->cursor_hot_y)
> +		return 0;
> +
> +	vbox->cursor_hot_x = hot_x;
> +	vbox->cursor_hot_y = hot_y;
> +
> +	return hgsmi_update_pointer_shape(vbox->guest_pool, flags,
> +			hot_x, hot_y, vbox->cursor_width, vbox->cursor_height,
> +			vbox->cursor_data, vbox->cursor_data_size);
> +}
> diff --git a/drivers/staging/vboxvideo/vbox_prime.c b/drivers/staging/vboxvideo/vbox_prime.c
> new file mode 100644
> index 000000000000..b7453e427a1d
> --- /dev/null
> +++ b/drivers/staging/vboxvideo/vbox_prime.c
> @@ -0,0 +1,74 @@
> +/*
> + * Copyright (C) 2017 Oracle Corporation
> + * Copyright 2017 Canonical
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
> + * OTHER DEALINGS IN THE SOFTWARE.
> + *
> + * Authors: Andreas Pokorny
> + */
> +
> +#include "vbox_drv.h"
> +
> +/*
> + * Based on qxl_prime.c:
> + * Empty Implementations as there should not be any other driver for a virtual
> + * device that might share buffers with vboxvideo
> + */
> +
> +int vbox_gem_prime_pin(struct drm_gem_object *obj)
> +{
> +	WARN_ONCE(1, "not implemented");
> +	return -ENOSYS;
> +}
> +
> +void vbox_gem_prime_unpin(struct drm_gem_object *obj)
> +{
> +	WARN_ONCE(1, "not implemented");
> +}
> +
> +struct sg_table *vbox_gem_prime_get_sg_table(struct drm_gem_object *obj)
> +{
> +	WARN_ONCE(1, "not implemented");
> +	return ERR_PTR(-ENOSYS);
> +}
> +
> +struct drm_gem_object *vbox_gem_prime_import_sg_table(
> +	struct drm_device *dev, struct dma_buf_attachment *attach,
> +	struct sg_table *table)
> +{
> +	WARN_ONCE(1, "not implemented");
> +	return ERR_PTR(-ENOSYS);
> +}
> +
> +void *vbox_gem_prime_vmap(struct drm_gem_object *obj)
> +{
> +	WARN_ONCE(1, "not implemented");
> +	return ERR_PTR(-ENOSYS);
> +}
> +
> +void vbox_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +{
> +	WARN_ONCE(1, "not implemented");
> +}
> +
> +int vbox_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *area)
> +{
> +	WARN_ONCE(1, "not implemented");
> +	return -ENOSYS;
> +}
> diff --git a/drivers/staging/vboxvideo/vbox_ttm.c b/drivers/staging/vboxvideo/vbox_ttm.c
> new file mode 100644
> index 000000000000..faacf58a1a96
> --- /dev/null
> +++ b/drivers/staging/vboxvideo/vbox_ttm.c
> @@ -0,0 +1,477 @@
> +/*
> + * Copyright (C) 2013-2017 Oracle Corporation
> + * This file is based on ast_ttm.c
> + * Copyright 2012 Red Hat Inc.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the
> + * "Software"), to deal in the Software without restriction, including
> + * without limitation the rights to use, copy, modify, merge, publish,
> + * distribute, sub license, and/or sell copies of the Software, and to
> + * permit persons to whom the Software is furnished to do so, subject to
> + * the following conditions:
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
> + * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
> + * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
> + * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
> + * USE OR OTHER DEALINGS IN THE SOFTWARE.
> + *
> + * The above copyright notice and this permission notice (including the
> + * next paragraph) shall be included in all copies or substantial portions
> + * of the Software.
> + *
> + *
> + * Authors: Dave Airlie <airlied@xxxxxxxxxx>
> + *          Michael Thayer <michael.thayer@xxxxxxxxxx>
> + */
> +#include "vbox_drv.h"
> +#include <ttm/ttm_page_alloc.h>
> +
> +static inline struct vbox_private *vbox_bdev(struct ttm_bo_device *bd)
> +{
> +	return container_of(bd, struct vbox_private, ttm.bdev);
> +}
> +
> +static int vbox_ttm_mem_global_init(struct drm_global_reference *ref)
> +{
> +	return ttm_mem_global_init(ref->object);
> +}
> +
> +static void vbox_ttm_mem_global_release(struct drm_global_reference *ref)
> +{
> +	ttm_mem_global_release(ref->object);
> +}
> +
> +/**
> + * Adds the vbox memory manager object/structures to the global memory manager.
> + */
> +static int vbox_ttm_global_init(struct vbox_private *vbox)
> +{
> +	struct drm_global_reference *global_ref;
> +	int r;

Please use "rc" or "ret".

> +
> +	global_ref = &vbox->ttm.mem_global_ref;
> +	global_ref->global_type = DRM_GLOBAL_TTM_MEM;
> +	global_ref->size = sizeof(struct ttm_mem_global);
> +	global_ref->init = &vbox_ttm_mem_global_init;
> +	global_ref->release = &vbox_ttm_mem_global_release;
> +	r = drm_global_item_ref(global_ref);
> +	if (r != 0) {

Just say "if (rc) {".

> +		DRM_ERROR("Failed setting up TTM memory subsystem.\n");
> +		return r;
> +	}
> +
> +	vbox->ttm.bo_global_ref.mem_glob = vbox->ttm.mem_global_ref.object;
> +	global_ref = &vbox->ttm.bo_global_ref.ref;
> +	global_ref->global_type = DRM_GLOBAL_TTM_BO;
> +	global_ref->size = sizeof(struct ttm_bo_global);
> +	global_ref->init = &ttm_bo_global_init;
> +	global_ref->release = &ttm_bo_global_release;
> +
> +	r = drm_global_item_ref(global_ref);
> +	if (r != 0) {


Just "if (ret) {".

> +		DRM_ERROR("Failed setting up TTM BO subsystem.\n");
> +		drm_global_item_unref(&vbox->ttm.mem_global_ref);


With the "one big magical free-everything function" style of error handling, this is a double free: the reference gets dropped here on the error path and then dropped again when vbox_driver_unload() calls vbox_ttm_global_release().

> +		return r;
> +	}
> +
> +	return 0;
> +}
> +
> +/**
> + * Removes the vbox memory manager object from the global memory manager.
> + */
> +static void vbox_ttm_global_release(struct vbox_private *vbox)
> +{
> +	if (!vbox->ttm.mem_global_ref.release)

Nope, that's not sufficient to prevent the double free I just mentioned: drm_global_item_unref() doesn't clear ->release, so this guard still passes after the init error path has already dropped the reference itself.

> +		return;
> +
> +	drm_global_item_unref(&vbox->ttm.bo_global_ref.ref);
> +	drm_global_item_unref(&vbox->ttm.mem_global_ref);
> +	vbox->ttm.mem_global_ref.release = NULL;
> +}
> +
> +static void vbox_bo_ttm_destroy(struct ttm_buffer_object *tbo)
> +{
> +	struct vbox_bo *bo;
> +
> +	bo = container_of(tbo, struct vbox_bo, bo);
> +
> +	drm_gem_object_release(&bo->gem);
> +	kfree(bo);
> +}
> +
> +static bool vbox_ttm_bo_is_vbox_bo(struct ttm_buffer_object *bo)
> +{
> +	if (bo->destroy == &vbox_bo_ttm_destroy)
> +		return true;
> +
> +	return false;
> +}
> +
> +static int
> +vbox_bo_init_mem_type(struct ttm_bo_device *bdev, u32 type,
> +		      struct ttm_mem_type_manager *man)
> +{
> +	switch (type) {
> +	case TTM_PL_SYSTEM:
> +		man->flags = TTM_MEMTYPE_FLAG_MAPPABLE;
> +		man->available_caching = TTM_PL_MASK_CACHING;
> +		man->default_caching = TTM_PL_FLAG_CACHED;
> +		break;
> +	case TTM_PL_VRAM:
> +		man->func = &ttm_bo_manager_func;
> +		man->flags = TTM_MEMTYPE_FLAG_FIXED | TTM_MEMTYPE_FLAG_MAPPABLE;
> +		man->available_caching = TTM_PL_FLAG_UNCACHED | TTM_PL_FLAG_WC;
> +		man->default_caching = TTM_PL_FLAG_WC;
> +		break;
> +	default:
> +		DRM_ERROR("Unsupported memory type %u\n", (unsigned int)type);
> +		return -EINVAL;
> +	}
> +
> +	return 0;
> +}
> +
> +static void
> +vbox_bo_evict_flags(struct ttm_buffer_object *bo, struct ttm_placement *pl)
> +{
> +	struct vbox_bo *vboxbo = vbox_bo(bo);
> +
> +	if (!vbox_ttm_bo_is_vbox_bo(bo))
> +		return;
> +
> +	vbox_ttm_placement(vboxbo, TTM_PL_FLAG_SYSTEM);
> +	*pl = vboxbo->placement;
> +}
> +
> +static int vbox_bo_verify_access(struct ttm_buffer_object *bo,
> +				 struct file *filp)
> +{
> +	return 0;
> +}
> +
> +static int vbox_ttm_io_mem_reserve(struct ttm_bo_device *bdev,
> +				   struct ttm_mem_reg *mem)
> +{
> +	struct ttm_mem_type_manager *man = &bdev->man[mem->mem_type];
> +	struct vbox_private *vbox = vbox_bdev(bdev);
> +
> +	mem->bus.addr = NULL;
> +	mem->bus.offset = 0;
> +	mem->bus.size = mem->num_pages << PAGE_SHIFT;
> +	mem->bus.base = 0;
> +	mem->bus.is_iomem = false;
> +	if (!(man->flags & TTM_MEMTYPE_FLAG_MAPPABLE))
> +		return -EINVAL;
> +	switch (mem->mem_type) {
> +	case TTM_PL_SYSTEM:
> +		/* system memory */
> +		return 0;
> +	case TTM_PL_VRAM:
> +		mem->bus.offset = mem->start << PAGE_SHIFT;
> +		mem->bus.base = pci_resource_start(vbox->dev->pdev, 0);
> +		mem->bus.is_iomem = true;
> +		break;
> +	default:
> +		return -EINVAL;
> +	}
> +	return 0;
> +}
> +
> +static void vbox_ttm_io_mem_free(struct ttm_bo_device *bdev,
> +				 struct ttm_mem_reg *mem)
> +{
> +}
> +
> +static int vbox_bo_move(struct ttm_buffer_object *bo,
> +			bool evict, bool interruptible,
> +			bool no_wait_gpu, struct ttm_mem_reg *new_mem)
> +{
> +	return ttm_bo_move_memcpy(bo, interruptible, no_wait_gpu, new_mem);
> +}
> +
> +static void vbox_ttm_backend_destroy(struct ttm_tt *tt)
> +{
> +	ttm_tt_fini(tt);
> +	kfree(tt);
> +}
> +
> +static struct ttm_backend_func vbox_tt_backend_func = {
> +	.destroy = &vbox_ttm_backend_destroy,
> +};
> +
> +static struct ttm_tt *vbox_ttm_tt_create(struct ttm_bo_device *bdev,
> +					 unsigned long size,
> +					 u32 page_flags,
> +					 struct page *dummy_read_page)
> +{
> +	struct ttm_tt *tt;
> +
> +	tt = kzalloc(sizeof(*tt), GFP_KERNEL);
> +	if (!tt)
> +		return NULL;
> +
> +	tt->func = &vbox_tt_backend_func;
> +	if (ttm_tt_init(tt, bdev, size, page_flags, dummy_read_page)) {
> +		kfree(tt);
> +		return NULL;
> +	}
> +
> +	return tt;
> +}
> +
> +static int vbox_ttm_tt_populate(struct ttm_tt *ttm)
> +{
> +	return ttm_pool_populate(ttm);
> +}
> +
> +static void vbox_ttm_tt_unpopulate(struct ttm_tt *ttm)
> +{
> +	ttm_pool_unpopulate(ttm);
> +}
> +
> +struct ttm_bo_driver vbox_bo_driver = {
> +	.ttm_tt_create = vbox_ttm_tt_create,
> +	.ttm_tt_populate = vbox_ttm_tt_populate,
> +	.ttm_tt_unpopulate = vbox_ttm_tt_unpopulate,
> +	.init_mem_type = vbox_bo_init_mem_type,
> +	.eviction_valuable = ttm_bo_eviction_valuable,
> +	.evict_flags = vbox_bo_evict_flags,
> +	.move = vbox_bo_move,
> +	.verify_access = vbox_bo_verify_access,
> +	.io_mem_reserve = &vbox_ttm_io_mem_reserve,
> +	.io_mem_free = &vbox_ttm_io_mem_free,
> +	.io_mem_pfn = ttm_bo_default_io_mem_pfn,
> +};
> +
> +int vbox_mm_init(struct vbox_private *vbox)
> +{
> +	int ret;
> +	struct drm_device *dev = vbox->dev;
> +	struct ttm_bo_device *bdev = &vbox->ttm.bdev;
> +
> +	ret = vbox_ttm_global_init(vbox);
> +	if (ret)
> +		return ret;
> +
> +	ret = ttm_bo_device_init(&vbox->ttm.bdev,
> +				 vbox->ttm.bo_global_ref.ref.object,
> +				 &vbox_bo_driver,
> +				 dev->anon_inode->i_mapping,
> +				 DRM_FILE_PAGE_OFFSET, true);
> +	if (ret) {
> +		DRM_ERROR("Error initialising bo driver; %d\n", ret);
> +		return ret;
> +	}
> +
> +	ret = ttm_bo_init_mm(bdev, TTM_PL_VRAM,
> +			     vbox->available_vram_size >> PAGE_SHIFT);
> +	if (ret) {
> +		DRM_ERROR("Failed ttm VRAM init: %d\n", ret);
> +		return ret;
> +	}
> +#ifdef DRM_MTRR_WC
> +	vbox->fb_mtrr = drm_mtrr_add(pci_resource_start(dev->pdev, 0),
> +				     pci_resource_len(dev->pdev, 0),
> +				     DRM_MTRR_WC);
> +#else
> +	vbox->fb_mtrr = arch_phys_wc_add(pci_resource_start(dev->pdev, 0),
> +					 pci_resource_len(dev->pdev, 0));
> +#endif
> +
> +	vbox->ttm.mm_initialised = true;
> +
> +	return 0;
> +}
> +
> +void vbox_mm_fini(struct vbox_private *vbox)
> +{
> +#ifdef DRM_MTRR_WC
> +	struct drm_device *dev = vbox->dev;
> +#endif
> +	if (!vbox->ttm.mm_initialised)
> +		return;
> +	ttm_bo_device_release(&vbox->ttm.bdev);
> +
> +	vbox_ttm_global_release(vbox);
> +
> +#ifdef DRM_MTRR_WC
> +	drm_mtrr_del(vbox->fb_mtrr,
> +		     pci_resource_start(dev->pdev, 0),
> +		     pci_resource_len(dev->pdev, 0), DRM_MTRR_WC);
> +#else
> +	arch_phys_wc_del(vbox->fb_mtrr);
> +#endif
> +}
> +
> +void vbox_ttm_placement(struct vbox_bo *bo, int domain)
> +{
> +	unsigned int i;
> +	u32 c = 0;
> +
> +	bo->placement.placement = bo->placements;
> +	bo->placement.busy_placement = bo->placements;
> +
> +	if (domain & TTM_PL_FLAG_VRAM)
> +		bo->placements[c++].flags =
> +		    TTM_PL_FLAG_WC | TTM_PL_FLAG_UNCACHED | TTM_PL_FLAG_VRAM;
> +	if (domain & TTM_PL_FLAG_SYSTEM)
> +		bo->placements[c++].flags =
> +		    TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM;
> +	if (!c)
> +		bo->placements[c++].flags =
> +		    TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM;
> +
> +	bo->placement.num_placement = c;
> +	bo->placement.num_busy_placement = c;
> +
> +	for (i = 0; i < c; ++i) {
> +		bo->placements[i].fpfn = 0;
> +		bo->placements[i].lpfn = 0;
> +	}
> +}
> +
> +int vbox_bo_create(struct drm_device *dev, int size, int align,
> +		   u32 flags, struct vbox_bo **pvboxbo)
> +{
> +	struct vbox_private *vbox = dev->dev_private;
> +	struct vbox_bo *vboxbo;
> +	size_t acc_size;
> +	int ret;
> +
> +	vboxbo = kzalloc(sizeof(*vboxbo), GFP_KERNEL);
> +	if (!vboxbo)
> +		return -ENOMEM;
> +
> +	ret = drm_gem_object_init(dev, &vboxbo->gem, size);
> +	if (ret) {
> +		kfree(vboxbo);
> +		return ret;
> +	}
> +
> +	vboxbo->bo.bdev = &vbox->ttm.bdev;
> +
> +	vbox_ttm_placement(vboxbo, TTM_PL_FLAG_VRAM | TTM_PL_FLAG_SYSTEM);
> +
> +	acc_size = ttm_bo_dma_acc_size(&vbox->ttm.bdev, size,
> +				       sizeof(struct vbox_bo));
> +
> +	ret = ttm_bo_init(&vbox->ttm.bdev, &vboxbo->bo, size,
> +			  ttm_bo_type_device, &vboxbo->placement,
> +			  align >> PAGE_SHIFT, false, NULL, acc_size,
> +			  NULL, NULL, vbox_bo_ttm_destroy);
> +	if (ret)
> +		return ret;

This leaks vboxbo, I suspect.

> +
> +	*pvboxbo = vboxbo;
> +
> +	return 0;
> +}
> +


regards,
dan carpenter
_______________________________________________
devel mailing list
devel@xxxxxxxxxxxxxxxxxxxxxx
http://driverdev.linuxdriverproject.org/mailman/listinfo/driverdev-devel
