Re: [RFC PATCH v4 2/4] drm/ipvr: drm driver for VED

+ commenters of v1~v3

Thanks,
Yao

> -----Original Message-----
> From: Sean V Kelley [mailto:seanvk@xxxxxxxxx]
> Sent: Thursday, January 8, 2015 8:35
> To: Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
> Cc: dri-devel@xxxxxxxxxxxxxxxxxxxxx; Cheng, Yao; Sean V Kelley
> Subject: [RFC PATCH v4 2/4] drm/ipvr: drm driver for VED
> 
> From: Yao Cheng <yao.cheng@xxxxxxxxx>
> 
> Probes VED and creates a new drm device for hardware accelerated
> video decoding.
> Currently supports VP8 decoding on Valleyview.
> 
> v2:
> take David's comments
> 	- add mmap support and remove mmap_ioctl
> 	- remove postclose since it's deprecated
> 	- NULL set_busid
> 
> v3:
> take David, Daniel and Jesse's comments, massive refinement
> 	- use drm_dev_alloc+drm_dev_register to replace drm_platform_init
> 	- same as above on the exit side
> 	- remove fd based explicit fence
> 	- refine ipvr_drm.h, use __u32 series and refine paddings
> 	- add doc to describe ipvr/ved terminology
> 	- ioctl refine: remove unused code
> 	- use uintptr_t series for address/number conversion
> 	- runtime pm refine: guarantee get/put pairing
> 	- add PRIME feature and remove USERPTR ioctl
> 	- implement relocation fixup in EXEC ioctl
> 	- call drm_gem_get_pages to replace my own implementation
> 	- code cleanup: remove unused code
> 
> v4:
> bug fixes:
> 	- add missing unreference in ipvr_gem_fault()
> 	- add struct_mutex lock around drm_mm_insert_node_in_range_generic()
> 	- move ipvr_ctx_list from dev_priv to file_priv and add spinlock
> 	- check EAGAIN for pm_runtime_get/put and add lock
> 	- add mutex lock for do_execbuffer
> 	- put power on fence lockup
> 	- define all ioctls as DRM_AUTH|DRM_UNLOCKED
> 	- correctly set ved_busy
> 	- rename ipvr_misc_ioctl to ipvr_get_info_ioctl and remove unused code
> 
> Signed-off-by: Yao Cheng <yao.cheng@xxxxxxxxx>
> Signed-off-by: Sean V Kelley <seanvk@xxxxxxxxx>
> ---
>  Documentation/DocBook/drm.tmpl    |   39 ++
>  drivers/gpu/drm/Kconfig           |    2 +
>  drivers/gpu/drm/Makefile          |    1 +
>  drivers/gpu/drm/ipvr/Kconfig      |    9 +
>  drivers/gpu/drm/ipvr/Makefile     |   18 +
>  drivers/gpu/drm/ipvr/ipvr_bo.c    |  543 +++++++++++++++++
>  drivers/gpu/drm/ipvr/ipvr_bo.h    |   80 +++
>  drivers/gpu/drm/ipvr/ipvr_debug.c |  335 +++++++++++
>  drivers/gpu/drm/ipvr/ipvr_debug.h |   76 +++
>  drivers/gpu/drm/ipvr/ipvr_drm.h   |  259 ++++++++
>  drivers/gpu/drm/ipvr/ipvr_drv.c   |  617 +++++++++++++++++++
>  drivers/gpu/drm/ipvr/ipvr_drv.h   |  292 +++++++++
>  drivers/gpu/drm/ipvr/ipvr_exec.c  |  613 +++++++++++++++++++
>  drivers/gpu/drm/ipvr/ipvr_exec.h  |   57 ++
>  drivers/gpu/drm/ipvr/ipvr_fence.c |  487 +++++++++++++++
>  drivers/gpu/drm/ipvr/ipvr_fence.h |   72 +++
>  drivers/gpu/drm/ipvr/ipvr_gem.c   |  297 +++++++++
>  drivers/gpu/drm/ipvr/ipvr_gem.h   |   48 ++
>  drivers/gpu/drm/ipvr/ipvr_mmu.c   |  752 +++++++++++++++++++++++
>  drivers/gpu/drm/ipvr/ipvr_mmu.h   |  111 ++++
>  drivers/gpu/drm/ipvr/ipvr_trace.c |   11 +
>  drivers/gpu/drm/ipvr/ipvr_trace.h |  333 ++++++++++
>  drivers/gpu/drm/ipvr/ved_cmd.c    |  882 +++++++++++++++++++++++++++
>  drivers/gpu/drm/ipvr/ved_fw.c     | 1199 +++++++++++++++++++++++++++++++++++++
>  drivers/gpu/drm/ipvr/ved_fw.h     |   81 +++
>  drivers/gpu/drm/ipvr/ved_msg.h    |  256 ++++++++
>  drivers/gpu/drm/ipvr/ved_pm.c     |  335 +++++++++++
>  drivers/gpu/drm/ipvr/ved_pm.h     |   36 ++
>  drivers/gpu/drm/ipvr/ved_reg.h    |  561 +++++++++++++++++
>  30 files changed, 8472 insertions(+)
>  create mode 100644 drivers/gpu/drm/ipvr/Kconfig
>  create mode 100644 drivers/gpu/drm/ipvr/Makefile
>  create mode 100644 drivers/gpu/drm/ipvr/ipvr_bo.c
>  create mode 100644 drivers/gpu/drm/ipvr/ipvr_bo.h
>  create mode 100644 drivers/gpu/drm/ipvr/ipvr_debug.c
>  create mode 100644 drivers/gpu/drm/ipvr/ipvr_debug.h
>  create mode 100644 drivers/gpu/drm/ipvr/ipvr_drm.h
>  create mode 100644 drivers/gpu/drm/ipvr/ipvr_drv.c
>  create mode 100644 drivers/gpu/drm/ipvr/ipvr_drv.h
>  create mode 100644 drivers/gpu/drm/ipvr/ipvr_exec.c
>  create mode 100644 drivers/gpu/drm/ipvr/ipvr_exec.h
>  create mode 100644 drivers/gpu/drm/ipvr/ipvr_fence.c
>  create mode 100644 drivers/gpu/drm/ipvr/ipvr_fence.h
>  create mode 100644 drivers/gpu/drm/ipvr/ipvr_gem.c
>  create mode 100644 drivers/gpu/drm/ipvr/ipvr_gem.h
>  create mode 100644 drivers/gpu/drm/ipvr/ipvr_mmu.c
>  create mode 100644 drivers/gpu/drm/ipvr/ipvr_mmu.h
>  create mode 100644 drivers/gpu/drm/ipvr/ipvr_trace.c
>  create mode 100644 drivers/gpu/drm/ipvr/ipvr_trace.h
>  create mode 100644 drivers/gpu/drm/ipvr/ved_cmd.c
>  create mode 100644 drivers/gpu/drm/ipvr/ved_cmd.h
>  create mode 100644 drivers/gpu/drm/ipvr/ved_fw.c
>  create mode 100644 drivers/gpu/drm/ipvr/ved_fw.h
>  create mode 100644 drivers/gpu/drm/ipvr/ved_msg.h
>  create mode 100644 drivers/gpu/drm/ipvr/ved_pm.c
>  create mode 100644 drivers/gpu/drm/ipvr/ved_pm.h
>  create mode 100644 drivers/gpu/drm/ipvr/ved_reg.h
> 
> diff --git a/Documentation/DocBook/drm.tmpl b/Documentation/DocBook/drm.tmpl
> index 9db989c..e9cadf6 100644
> --- a/Documentation/DocBook/drm.tmpl
> +++ b/Documentation/DocBook/drm.tmpl
> @@ -4062,5 +4062,44 @@ int num_ioctls;</synopsis>
> 
>    </chapter>
>  !Cdrivers/gpu/drm/i915/i915_irq.c
> +  <chapter id="drmIpvr">
> +    <title>drm/ipvr Intel's driver for PowerVR video core</title>
> +    <para>
> +      The drm/ipvr driver intends to support the PowerVR video cores
> +      integrated on Intel's platforms. The video cores can be categorized
> +      into 3 types:
> +        <variablelist>
> +          <varlistentry>
> +            <term>VED</term>
> +            <listitem>
> +              <para>
> +                multi-format video decoder; supports various video codecs,
> +                e.g. H.264 and VP8.
> +              </para>
> +            </listitem>
> +          </varlistentry>
> +          <varlistentry>
> +            <term>VEC</term>
> +            <listitem>
> +              <para>
> +                multi-format video encoder; supports various video codecs,
> +                like VED.
> +              </para>
> +            </listitem>
> +          </varlistentry>
> +          <varlistentry>
> +            <term>VSP</term>
> +            <listitem>
> +              <para>
> +                multi-function video processor; supports various features
> +                such as color-space conversion, sharpening and deblocking.
> +              </para>
> +            </listitem>
> +          </varlistentry>
> +        </variablelist>
> +      For now ipvr only supports VED on the Valleyview platform. The names
> +      "VEC" and "VSP" are reserved for possible future support.
> +    </para>
> +  </chapter>
>  </part>
>  </book>
> diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
> index 24c2d7c..c79b813 100644
> --- a/drivers/gpu/drm/Kconfig
> +++ b/drivers/gpu/drm/Kconfig
> @@ -165,6 +165,8 @@ config DRM_SAVAGE
> 	  Choose this option if you have a Savage3D/4/SuperSavage/Pro/Twister
>  	  chipset. If M is selected the module will be called savage.
> 
> +source "drivers/gpu/drm/ipvr/Kconfig"
> +
>  source "drivers/gpu/drm/exynos/Kconfig"
> 
>  source "drivers/gpu/drm/vmwgfx/Kconfig"
> diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
> index 47d8986..d364ea2 100644
> --- a/drivers/gpu/drm/Makefile
> +++ b/drivers/gpu/drm/Makefile
> @@ -41,6 +41,7 @@ obj-$(CONFIG_DRM_RADEON)+= radeon/
>  obj-$(CONFIG_DRM_MGA)	+= mga/
>  obj-$(CONFIG_DRM_I810)	+= i810/
>  obj-$(CONFIG_DRM_I915)  += i915/
> +obj-$(CONFIG_DRM_IPVR)  += ipvr/
>  obj-$(CONFIG_DRM_MGAG200) += mgag200/
>  obj-$(CONFIG_DRM_CIRRUS_QEMU) += cirrus/
>  obj-$(CONFIG_DRM_SIS)   += sis/
> diff --git a/drivers/gpu/drm/ipvr/Kconfig b/drivers/gpu/drm/ipvr/Kconfig
> new file mode 100644
> index 0000000..869bad4
> --- /dev/null
> +++ b/drivers/gpu/drm/ipvr/Kconfig
> @@ -0,0 +1,9 @@
> +config DRM_IPVR
> +       tristate "IPVR video decode driver"
> +       depends on DRM
> +       select SHMEM
> +       select TMPFS
> +       default m
> +       help
> +         Choose this option if you want to enable accelerated video decode
> +         with VED hardware. Currently supports VP8 decoding on Valleyview.
> diff --git a/drivers/gpu/drm/ipvr/Makefile b/drivers/gpu/drm/ipvr/Makefile
> new file mode 100644
> index 0000000..280647e
> --- /dev/null
> +++ b/drivers/gpu/drm/ipvr/Makefile
> @@ -0,0 +1,18 @@
> +ccflags-y := -Iinclude/drm
> +
> +ipvr-y := \
> +        ipvr_drv.o \
> +        ipvr_bo.o \
> +        ipvr_exec.o \
> +        ipvr_fence.o \
> +        ipvr_gem.o \
> +        ipvr_mmu.o \
> +        ipvr_debug.o \
> +        ipvr_trace.o \
> +        ved_pm.o \
> +        ved_cmd.o \
> +        ved_fw.o
> +
> +obj-$(CONFIG_DRM_IPVR) += ipvr.o
> +
> +CFLAGS_ipvr_trace.o := -I$(src)
> diff --git a/drivers/gpu/drm/ipvr/ipvr_bo.c b/drivers/gpu/drm/ipvr/ipvr_bo.c
> new file mode 100644
> index 0000000..5fccf7e
> --- /dev/null
> +++ b/drivers/gpu/drm/ipvr/ipvr_bo.c
> @@ -0,0 +1,543 @@
> +/**************************************************************************
> + * ipvr_bo.c: IPVR buffer creation/destroy, import/export, map etc
> + *
> + * Copyright (c) 2014 Intel Corporation, Hillsboro, OR, USA
> + * All Rights Reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; if not, write to the Free Software Foundation, Inc.,
> + * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
> + *
> + * Authors:
> + *    Yao Cheng <yao.cheng@xxxxxxxxx>
> + *
> + **************************************************************************/
> +
> +#include "ipvr_bo.h"
> +#include "ipvr_trace.h"
> +#include "ipvr_debug.h"
> +#include <drmP.h>
> +#include <linux/dma-buf.h>
> +
> +static inline bool cpu_cache_is_coherent(enum ipvr_cache_level level)
> +{
> +	/* on valleyview no cache snooping */
> +	return (level != IPVR_CACHE_WRITEBACK);
> +}
> +
> +static inline bool clflush_object(struct drm_ipvr_gem_object *obj, bool force)
> +{
> +	if (obj->sg_table == NULL)
> +		return false;
> +
> +	/* no need to flush if cache is coherent */
> +	if (!force && cpu_cache_is_coherent(obj->cache_level))
> +		return false;
> +
> +	drm_clflush_sg(obj->sg_table);
> +
> +	return true;
> +}
> +
> +static void
> +ipvr_object_free(struct drm_ipvr_gem_object *obj)
> +{
> +	struct drm_ipvr_private *dev_priv = obj->base.dev->dev_private;
> +	kmem_cache_free(dev_priv->ipvr_bo_slab, obj);
> +}
> +
> +static struct drm_ipvr_gem_object *
> +ipvr_object_alloc(struct drm_ipvr_private *dev_priv, size_t size)
> +{
> +	struct drm_ipvr_gem_object *obj;
> +
> +	obj = kmem_cache_alloc(dev_priv->ipvr_bo_slab, GFP_KERNEL | __GFP_ZERO);
> +	if (obj == NULL)
> +		return NULL;
> +	memset(obj, 0, sizeof(*obj));
> +
> +	return obj;
> +}
> +
> +static int ipvr_gem_mmu_bind_object(struct drm_ipvr_gem_object *obj)
> +{
> +	struct drm_ipvr_private *dev_priv = obj->base.dev->dev_private;
> +	const unsigned long entry = ipvr_gem_object_mmu_offset(obj);
> +
> +	if (IPVR_IS_ERR(entry)) {
> +		return IPVR_OFFSET_ERR(entry);
> +	}
> +
> +	IPVR_DEBUG_GENERAL("entry is 0x%lx, size is %zu, nents is %d.\n",
> +			entry, obj->base.size, obj->sg_table->nents);
> +
> +	return ipvr_mmu_insert_pages(dev_priv->mmu->default_pd,
> +		obj->pages, entry, obj->base.size >> PAGE_SHIFT,
> +		0, 0, 0);
> +}
> +
> +static void ipvr_gem_mmu_unbind_object(struct drm_ipvr_gem_object *obj)
> +{
> +	struct drm_ipvr_private *dev_priv = obj->base.dev->dev_private;
> +	const unsigned long entry = ipvr_gem_object_mmu_offset(obj);
> +	IPVR_DEBUG_GENERAL("entry is 0x%lx, size is %zu.\n",
> +			entry, obj->base.size);
> +	ipvr_mmu_remove_pages(dev_priv->mmu->default_pd,
> +		entry, obj->base.size >> PAGE_SHIFT, 0, 0);
> +}
> +
> +static void ipvr_gem_object_pin_pages(struct drm_ipvr_gem_object *obj)
> +{
> +	BUG_ON(obj->sg_table == NULL);
> +	obj->pages_pin_count++;
> +}
> +
> +static void ipvr_gem_object_unpin_pages(struct drm_ipvr_gem_object *obj)
> +{
> +	BUG_ON(obj->pages_pin_count == 0);
> +	obj->pages_pin_count--;
> +}
> +
> +static int ipvr_gem_bind_to_drm_mm(struct drm_ipvr_gem_object* obj,
> +			struct ipvr_address_space *vm)
> +{
> +	int ret = 0;
> +	struct drm_mm *mm;
> +	unsigned long start, end;
> +	/* bind to VPU address space */
> +	if (obj->tiling) {
> +		mm = &vm->tiling_mm;
> +		start = vm->tiling_start;
> +		end = vm->tiling_start + vm->tiling_total;
> +	} else {
> +		mm = &vm->linear_mm;
> +		start = vm->linear_start;
> +		end = vm->linear_start + vm->linear_total;
> +	}
> +	IPVR_DEBUG_GENERAL("call drm_mm_insert_node_in_range_generic.\n");
> +	ret = mutex_lock_interruptible(&obj->base.dev->struct_mutex);
> +	if (ret)
> +		return ret;
> +	ret = drm_mm_insert_node_in_range_generic(mm, &obj->mm_node,
> +						obj->base.size,
> +						PAGE_SIZE, obj->cache_level,
> +						start, end,
> +						DRM_MM_SEARCH_DEFAULT,
> +						DRM_MM_CREATE_DEFAULT);
> +	if (ret) {
> +		/* no shrinker implemented yet so simply return error */
> +		IPVR_ERROR("failed on drm_mm_insert_node_in_range_generic: %d\n", ret);
> +		goto out;
> +	}
> +
> +	IPVR_DEBUG_GENERAL("drm_mm_insert_node_in_range_generic success: "
> +		"start=0x%lx, size=%lu, color=%lu.\n",
> +		obj->mm_node.start, obj->mm_node.size, obj->mm_node.color);
> +
> +out:
> +	mutex_unlock(&obj->base.dev->struct_mutex);
> +
> +	return ret;
> +}
> +
> +struct drm_ipvr_gem_object* ipvr_gem_create(struct drm_ipvr_private *dev_priv,
> +			size_t size, u32 tiling, u32 cache_level)
> +{
> +	struct drm_ipvr_gem_object *obj;
> +	int ret = 0;
> +	int npages;
> +	struct address_space *mapping;
> +	gfp_t mask;
> +
> +	BUG_ON(size & (PAGE_SIZE - 1));
> +	npages = size >> PAGE_SHIFT;
> +	IPVR_DEBUG_GENERAL("create bo size is %zu, tiling is %u, "
> +			"cache level is %u.\n",	size, tiling, cache_level);
> +
> +	/* Allocate the new object */
> +	obj = ipvr_object_alloc(dev_priv, size);
> +	if (!obj)
> +		return ERR_PTR(-ENOMEM);
> +
> +	/* initialization */
> +	ret = drm_gem_object_init(dev_priv->dev, &obj->base, size);
> +	if (ret) {
> +		IPVR_ERROR("failed on drm_gem_object_init: %d\n", ret);
> +		goto err_free_obj;
> +	}
> +	init_waitqueue_head(&obj->event_queue);
> +	/* todo: need set correct mask */
> +	mask = GFP_HIGHUSER | __GFP_RECLAIMABLE;
> +
> +	/* ipvr cannot relocate objects above 4GiB. */
> +	mask &= ~__GFP_HIGHMEM;
> +	mask |= __GFP_DMA32;
> +
> +	mapping = file_inode(obj->base.filp)->i_mapping;
> +	mapping_set_gfp_mask(mapping, mask);
> +
> +	obj->base.write_domain = IPVR_GEM_DOMAIN_CPU;
> +	obj->base.read_domains = IPVR_GEM_DOMAIN_CPU;
> +	obj->drv_name = "ipvr";
> +	obj->fence = NULL;
> +	obj->tiling = tiling;
> +	obj->cache_level = cache_level;
> +
> +	/* get physical pages */
> +	obj->pages = drm_gem_get_pages(&obj->base);
> +	if (IS_ERR(obj->pages)) {
> +		ret = PTR_ERR(obj->pages);
> +		IPVR_ERROR("failed on drm_gem_get_pages: %d\n", ret);
> +		goto err_free_obj;
> +	}
> +
> +	obj->sg_table = drm_prime_pages_to_sg(obj->pages, obj->base.size >> PAGE_SHIFT);
> +	if (IS_ERR(obj->sg_table)) {
> +		ret = PTR_ERR(obj->sg_table);
> +		IPVR_ERROR("failed on drm_prime_pages_to_sg: %d\n", ret);
> +		goto err_put_pages;
> +	}
> +
> +	/* set cacheability */
> +	switch (obj->cache_level) {
> +	case IPVR_CACHE_UNCACHED:
> +		ret = set_pages_array_uc(obj->pages, npages);
> +		break;
> +	case IPVR_CACHE_WRITECOMBINE:
> +		ret = set_pages_array_wc(obj->pages, npages);
> +		break;
> +	case IPVR_CACHE_WRITEBACK:
> +		ret = set_pages_array_wb(obj->pages, npages);
> +		break;
> +	default:
> +		ret = -EINVAL;
> +		break;
> +	}
> +	if (ret) {
> +		IPVR_DEBUG_WARN("failed to set page cache: %d.\n", ret);
> +		goto err_put_sg;
> +	}
> +
> +	ipvr_gem_object_pin_pages(obj);
> +
> +	/* bind to VPU address space */
> +	ret = ipvr_gem_bind_to_drm_mm(obj, &dev_priv->addr_space);
> +	if (ret) {
> +		IPVR_ERROR("failed to call ipvr_gem_bind_to_drm_mm: %d.\n", ret);
> +		goto err_put_sg;
> +	}
> +
> +	ret = ipvr_gem_mmu_bind_object(obj);
> +	if (ret) {
> +		IPVR_ERROR("failed to call ipvr_gem_mmu_bind_object: %d.\n", ret);
> +		goto err_remove_node;
> +	}
> +
> +	ipvr_stat_add_object(dev_priv, obj);
> +	trace_ipvr_create_object(obj, ipvr_gem_object_mmu_offset(obj));
> +	return obj;
> +err_remove_node:
> +	drm_mm_remove_node(&obj->mm_node);
> +err_put_sg:
> +	sg_free_table(obj->sg_table);
> +	kfree(obj->sg_table);
> +err_put_pages:
> +	drm_gem_put_pages(&obj->base, obj->pages, false, false);
> +err_free_obj:
> +	ipvr_object_free(obj);
> +	return ERR_PTR(ret);
> +}
> +
> +void *ipvr_gem_object_vmap(struct drm_ipvr_gem_object *obj)
> +{
> +	pgprot_t pg = PAGE_KERNEL;
> +	switch (obj->cache_level) {
> +	case IPVR_CACHE_WRITECOMBINE:
> +		pg = pgprot_writecombine(pg);
> +		break;
> +	case IPVR_CACHE_UNCACHED:
> +		pg = pgprot_noncached(pg);
> +		break;
> +	default:
> +		break;
> +	}
> +	return vmap(obj->pages, obj->base.size >> PAGE_SHIFT, VM_MAP, pg);
> +}
> +
> +/*
> + * When the last reference to a GEM object is released the GEM core calls the
> + * drm_driver .gem_free_object() operation. That operation is mandatory for
> + * GEM-enabled drivers and must free the GEM object and all associated
> + * resources.
> + * called with struct_mutex locked.
> + */
> +void ipvr_gem_free_object(struct drm_gem_object *gem_obj)
> +{
> +	struct drm_device *dev = gem_obj->dev;
> +	struct drm_ipvr_gem_object *obj = to_ipvr_bo(gem_obj);
> +	drm_ipvr_private_t *dev_priv = dev->dev_private;
> +	int ret;
> +	unsigned long mmu_offset;
> +	int npages = gem_obj->size >> PAGE_SHIFT;
> +
> +	/* fixme: consider unlocked case */
> +	WARN_ON(!mutex_is_locked(&dev->struct_mutex));
> +
> +	mmu_offset = ipvr_gem_object_mmu_offset(obj);
> +	ipvr_gem_mmu_unbind_object(obj);
> +
> +	if (unlikely(obj->fence)) {
> +		ret = ipvr_fence_wait(obj->fence, true, false);
> +		if (ret)
> +			IPVR_DEBUG_WARN("Failed to wait fence signaled: %d.\n", ret);
> +	}
> +
> +	drm_mm_remove_node(&obj->mm_node);
> +	ipvr_gem_object_unpin_pages(obj);
> +
> +	if (WARN_ON(obj->pages_pin_count))
> +		obj->pages_pin_count = 0;
> +
> +	BUG_ON(!obj->pages || !obj->sg_table);
> +	/* set back to page_wb */
> +	set_pages_array_wb(obj->pages, npages);
> +	if (obj->base.import_attach) {
> +		IPVR_DEBUG_GENERAL("free imported object (mmu_offset 0x%lx)\n", mmu_offset);
> +		drm_prime_gem_destroy(&obj->base, obj->sg_table);
> +		drm_free_large(obj->pages);
> +		ipvr_stat_remove_imported(dev_priv, obj);
> +	} else {
> +		IPVR_DEBUG_GENERAL("free object (mmu_offset 0x%lx)\n", mmu_offset);
> +		sg_free_table(obj->sg_table);
> +		kfree(obj->sg_table);
> +		drm_gem_put_pages(&obj->base, obj->pages, obj->dirty, true);
> +		ipvr_stat_remove_object(dev_priv, obj);
> +	}
> +
> +	/* mmap offset is freed by drm_gem_object_release */
> +	drm_gem_object_release(&obj->base);
> +
> +	trace_ipvr_free_object(obj);
> +
> +	ipvr_object_free(obj);
> +}
> +
> +static inline struct page *get_object_page(struct drm_ipvr_gem_object *obj, int n)
> +{
> +	struct sg_page_iter sg_iter;
> +
> +	for_each_sg_page(obj->sg_table->sgl, &sg_iter, obj->sg_table->nents, n)
> +		return sg_page_iter_page(&sg_iter);
> +
> +	return NULL;
> +}
> +
> +int ipvr_gem_object_apply_reloc(struct drm_ipvr_gem_object *obj,
> +				u64 offset_in_bo, u32 value)
> +{
> +	u64 page_offset = offset_in_page(offset_in_bo);
> +	char *vaddr;
> +	struct page *target_page;
> +
> +	/* set to cpu domain */
> +	target_page = get_object_page(obj, offset_in_bo >> PAGE_SHIFT);
> +	if (!target_page)
> +		return -EINVAL;
> +
> +	/**
> +	 * for efficiency we'd better record the page index,
> +	 * and avoid frequent map/unmap on the same page
> +	 */
> +	vaddr = kmap_atomic(target_page);
> +	if (!vaddr)
> +		return -ENOMEM;
> +	*(u32 *)(vaddr + page_offset) = value;
> +
> +	kunmap_atomic(vaddr);
> +
> +	return 0;
> +}
> +
> +int ipvr_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
> +{
> +	struct drm_gem_object *obj = vma->vm_private_data;
> +	struct drm_device *dev = obj->dev;
> +	unsigned long pfn;
> +	pgoff_t pgoff;
> +	int ret;
> +	struct drm_ipvr_gem_object *ipvr_obj = to_ipvr_bo(obj);
> +
> +	/* Make sure we don't parallel update on a fault, nor move or remove
> +	 * something from beneath our feet
> +	 */
> +	ret = mutex_lock_interruptible(&dev->struct_mutex);
> +	if (ret)
> +		goto out;
> +
> +	if (!ipvr_obj->sg_table) {
> +		ret = -ENODATA;
> +		goto out_unlock;
> +	}
> +
> +	/* We don't use vmf->pgoff since that has the fake offset: */
> +	pgoff = ((unsigned long)vmf->virtual_address - vma->vm_start) >> PAGE_SHIFT;
> +
> +	pfn = page_to_pfn(ipvr_obj->pages[pgoff]);
> +
> +	IPVR_DEBUG_GENERAL("Inserting %p pfn %lx, pa %lx\n", vmf->virtual_address,
> +			pfn, pfn << PAGE_SHIFT);
> +
> +	ret = vm_insert_pfn(vma, (unsigned long)vmf->virtual_address, pfn);
> +
> +out_unlock:
> +	mutex_unlock(&dev->struct_mutex);
> +out:
> +	switch (ret) {
> +	case -EAGAIN:
> +	case 0:
> +	case -ERESTARTSYS:
> +	case -EINTR:
> +	case -EBUSY:
> +		/*
> +		 * EBUSY is ok: this just means that another thread
> +		 * already did the job.
> +		 */
> +		return VM_FAULT_NOPAGE;
> +	case -ENOMEM:
> +		return VM_FAULT_OOM;
> +	default:
> +		return VM_FAULT_SIGBUS;
> +	}
> +}
> +
> +struct sg_table *ipvr_gem_prime_get_sg_table(struct drm_gem_object *obj)
> +{
> +	struct drm_ipvr_gem_object *ipvr_obj = to_ipvr_bo(obj);
> +	struct sg_table *sgt = NULL;
> +	int ret;
> +
> +	if (!ipvr_obj->sg_table) {
> +		ret = -ENOENT;
> +		goto out;
> +	}
> +
> +	sgt = drm_prime_pages_to_sg(ipvr_obj->pages, obj->size >> PAGE_SHIFT);
> +	if (IS_ERR(sgt)) {
> +		goto out;
> +	}
> +
> +	IPVR_DEBUG_GENERAL("exported sg_table for obj (mmu_offset 0x%lx)\n",
> +		ipvr_gem_object_mmu_offset(ipvr_obj));
> +out:
> +	return sgt;
> +}
> +
> +struct drm_gem_object *ipvr_gem_prime_import_sg_table(struct drm_device *dev,
> +		struct dma_buf_attachment *attach, struct sg_table *sg)
> +{
> +	struct drm_ipvr_gem_object *obj;
> +	int ret = 0;
> +	int i, npages;
> +	unsigned long pfn;
> +	struct drm_ipvr_private *dev_priv = dev->dev_private;
> +
> +	if (!sg || !attach || (attach->dmabuf->size & (PAGE_SIZE - 1)))
> +		return ERR_PTR(-EINVAL);
> +
> +	IPVR_DEBUG_ENTRY("enter, size=0x%zx\n", attach->dmabuf->size);
> +
> +	obj = ipvr_object_alloc(dev_priv, attach->dmabuf->size);
> +	if (!obj)
> +		return ERR_PTR(-ENOMEM);
> +
> +	memset(obj, 0, sizeof(*obj));
> +
> +	drm_gem_private_object_init(dev, &obj->base, attach->dmabuf->size);
> +
> +	init_waitqueue_head(&obj->event_queue);
> +
> +	obj->drv_name = "ipvr";
> +	obj->fence = NULL;
> +	obj->cache_level = IPVR_CACHE_UNCACHED;
> +	obj->tiling = 0;
> +
> +	npages = attach->dmabuf->size >> PAGE_SHIFT;
> +
> +	obj->sg_table = sg;
> +	obj->pages = drm_malloc_ab(npages, sizeof(struct page *));
> +	if (!obj->pages) {
> +		ret = -ENOMEM;
> +		goto err_free_obj;
> +	}
> +
> +	ret = drm_prime_sg_to_page_addr_arrays(sg, obj->pages, NULL, npages);
> +	if (ret)
> +		goto err_put_pages;
> +
> +	/* validate sg_table: every page must sit below 4GiB */
> +	for (i = 0; i < npages; ++i) {
> +		pfn = page_to_pfn(obj->pages[i]);
> +		if (pfn >= 0x00100000UL) {
> +			IPVR_ERROR("cannot support pfn 0x%lx.\n", pfn);
> +			ret = -EINVAL; /* what's the better err code? */
> +			goto err_put_pages;
> +		}
> +	}
> +
> +	ret = ipvr_gem_bind_to_drm_mm(obj, &dev_priv->addr_space);
> +	if (ret) {
> +		IPVR_ERROR("failed to call ipvr_gem_bind_to_drm_mm: %d.\n", ret);
> +		goto err_put_pages;
> +	}
> +
> +	/* do we really have to set the external pages uncached?
> +	 * this might cause a perf issue on the exporter side */
> +	ret = set_pages_array_uc(obj->pages, npages);
> +	if (ret)
> +		IPVR_DEBUG_WARN("failed to set imported pages as uncached: %d, ignore\n", ret);
> +
> +	ret = ipvr_gem_mmu_bind_object(obj);
> +	if (ret) {
> +		IPVR_ERROR("failed to call ipvr_gem_mmu_bind_object: %d.\n", ret);
> +		goto err_remove_node;
> +	}
> +	IPVR_DEBUG_GENERAL("imported sg_table, new bo mmu offset=0x%lx.\n",
> +		ipvr_gem_object_mmu_offset(obj));
> +	ipvr_stat_add_imported(dev_priv, obj);
> +	ipvr_gem_object_pin_pages(obj);
> +	return &obj->base;
> +err_remove_node:
> +	drm_mm_remove_node(&obj->mm_node);
> +err_put_pages:
> +	drm_free_large(obj->pages);
> +err_free_obj:
> +	ipvr_object_free(obj);
> +	return ERR_PTR(ret);
> +}
> +
> +int ipvr_gem_prime_pin(struct drm_gem_object *obj)
> +{
> +	struct drm_ipvr_private *dev_priv = obj->dev->dev_private;
> +	IPVR_DEBUG_ENTRY("mmu offset 0x%lx\n", ipvr_gem_object_mmu_offset(to_ipvr_bo(obj)));
> +	ipvr_stat_add_exported(dev_priv, to_ipvr_bo(obj));
> +	return 0;
> +}
> +
> +void ipvr_gem_prime_unpin(struct drm_gem_object *obj)
> +{
> +	struct drm_ipvr_private *dev_priv = obj->dev->dev_private;
> +	IPVR_DEBUG_ENTRY("mmu offset 0x%lx\n", ipvr_gem_object_mmu_offset(to_ipvr_bo(obj)));
> +	ipvr_stat_remove_exported(dev_priv, to_ipvr_bo(obj));
> +}
> diff --git a/drivers/gpu/drm/ipvr/ipvr_bo.h b/drivers/gpu/drm/ipvr/ipvr_bo.h
> new file mode 100644
> index 0000000..4981587
> --- /dev/null
> +++ b/drivers/gpu/drm/ipvr/ipvr_bo.h
> @@ -0,0 +1,80 @@
> +/**************************************************************************
> + * ipvr_bo.h: IPVR buffer creation/destroy, import/export, map etc
> + *
> + * Copyright (c) 2014 Intel Corporation, Hillsboro, OR, USA
> + * All Rights Reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; if not, write to the Free Software Foundation, Inc.,
> + * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
> + *
> + * Authors:
> + *    Yao Cheng <yao.cheng@xxxxxxxxx>
> + *
> + **************************************************************************/
> +
> +
> +#ifndef _IPVR_BO_H_
> +#define _IPVR_BO_H_
> +
> +#include "ipvr_drv.h"
> +#include "ipvr_drm.h"
> +#include "ipvr_fence.h"
> +#include <drmP.h>
> +#include <drm_gem.h>
> +
> +struct ipvr_fence;
> +
> +struct drm_ipvr_gem_object {
> +	struct drm_gem_object base;
> +
> +	/* used to distinguish between i915 and ipvr */
> +	char *drv_name;
> +
> +	/** MM related */
> +	struct drm_mm_node mm_node;
> +
> +	bool tiling;
> +
> +	enum ipvr_cache_level cache_level;
> +
> +	bool dirty;
> +
> +	struct sg_table *sg_table;
> +	struct page **pages;
> +	int pages_pin_count;
> +
> +	struct ipvr_fence *fence;
> +	atomic_t reserved;
> +	wait_queue_head_t event_queue;
> +};
> +
> +struct drm_ipvr_gem_object* ipvr_gem_create(struct drm_ipvr_private *dev_priv,
> +			size_t size, u32 tiling, u32 cache_level);
> +void ipvr_gem_free_object(struct drm_gem_object *obj);
> +void *ipvr_gem_object_vmap(struct drm_ipvr_gem_object *obj);
> +int ipvr_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf);
> +int ipvr_gem_object_apply_reloc(struct drm_ipvr_gem_object *obj,
> +			u64 offset_in_bo, u32 value);
> +struct sg_table *ipvr_gem_prime_get_sg_table(struct drm_gem_object *obj);
> +struct drm_gem_object *ipvr_gem_prime_import_sg_table(struct drm_device *dev,
> +			struct dma_buf_attachment *attach, struct sg_table *sg);
> +int ipvr_gem_prime_pin(struct drm_gem_object *obj);
> +void ipvr_gem_prime_unpin(struct drm_gem_object *obj);
> +
> +static inline unsigned long
> +ipvr_gem_object_mmu_offset(struct drm_ipvr_gem_object *obj)
> +{
> +	return obj->mm_node.start;
> +}
> +
> +#endif
> diff --git a/drivers/gpu/drm/ipvr/ipvr_debug.c b/drivers/gpu/drm/ipvr/ipvr_debug.c
> new file mode 100644
> index 0000000..c30a78a
> --- /dev/null
> +++ b/drivers/gpu/drm/ipvr/ipvr_debug.c
> @@ -0,0 +1,335 @@
> +/**************************************************************************
> + * ipvr_debug.c: IPVR debugfs support to assist bug triage
> + *
> + * Copyright (c) 2014 Intel Corporation, Hillsboro, OR, USA
> + * All Rights Reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; if not, write to the Free Software Foundation, Inc.,
> + * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
> + *
> + * Authors:
> + *    Fei Jiang <fei.jiang@xxxxxxxxx>
> + *    Yao Cheng <yao.cheng@xxxxxxxxx>
> + *
> + **************************************************************************/
> +
> +#if defined(CONFIG_DEBUG_FS)
> +
> +#include "ipvr_debug.h"
> +#include "ipvr_drv.h"
> +#include "ved_reg.h"
> +#include <linux/seq_file.h>
> +#include <linux/debugfs.h>
> +
> +union ipvr_debugfs_vars debugfs_vars;
> +
> +static int ipvr_debug_info(struct seq_file *m, void *data)
> +{
> +	seq_printf(m, "ipvr platform\n");
> +	return 0;
> +}
> +
> +/* some bookkeeping */
> +void
> +ipvr_stat_add_object(struct drm_ipvr_private *dev_priv, struct drm_ipvr_gem_object *obj)
> +{
> +	spin_lock(&dev_priv->ipvr_stat.object_stat_lock);
> +	dev_priv->ipvr_stat.allocated_count++;
> +	dev_priv->ipvr_stat.allocated_memory += obj->base.size;
> +	spin_unlock(&dev_priv->ipvr_stat.object_stat_lock);
> +}
> +
> +void
> +ipvr_stat_remove_object(struct drm_ipvr_private *dev_priv, struct drm_ipvr_gem_object *obj)
> +{
> +	spin_lock(&dev_priv->ipvr_stat.object_stat_lock);
> +	dev_priv->ipvr_stat.allocated_count--;
> +	dev_priv->ipvr_stat.allocated_memory -= obj->base.size;
> +	spin_unlock(&dev_priv->ipvr_stat.object_stat_lock);
> +}
> +
> +void
> +ipvr_stat_add_imported(struct drm_ipvr_private *dev_priv, struct drm_ipvr_gem_object *obj)
> +{
> +	spin_lock(&dev_priv->ipvr_stat.object_stat_lock);
> +	dev_priv->ipvr_stat.imported_count++;
> +	dev_priv->ipvr_stat.imported_memory += obj->base.size;
> +	spin_unlock(&dev_priv->ipvr_stat.object_stat_lock);
> +}
> +
> +void
> +ipvr_stat_remove_imported(struct drm_ipvr_private *dev_priv, struct drm_ipvr_gem_object *obj)
> +{
> +	spin_lock(&dev_priv->ipvr_stat.object_stat_lock);
> +	dev_priv->ipvr_stat.imported_count--;
> +	dev_priv->ipvr_stat.imported_memory -= obj->base.size;
> +	spin_unlock(&dev_priv->ipvr_stat.object_stat_lock);
> +}
> +
> +void
> +ipvr_stat_add_exported(struct drm_ipvr_private *dev_priv, struct drm_ipvr_gem_object *obj)
> +{
> +	spin_lock(&dev_priv->ipvr_stat.object_stat_lock);
> +	dev_priv->ipvr_stat.exported_count++;
> +	dev_priv->ipvr_stat.exported_memory += obj->base.size;
> +	spin_unlock(&dev_priv->ipvr_stat.object_stat_lock);
> +}
> +
> +void
> +ipvr_stat_remove_exported(struct drm_ipvr_private *dev_priv, struct drm_ipvr_gem_object *obj)
> +{
> +	spin_lock(&dev_priv->ipvr_stat.object_stat_lock);
> +	dev_priv->ipvr_stat.exported_count--;
> +	dev_priv->ipvr_stat.exported_memory -= obj->base.size;
> +	spin_unlock(&dev_priv->ipvr_stat.object_stat_lock);
> +}
> +
> +void ipvr_stat_add_mmu_bind(struct drm_ipvr_private *dev_priv, size_t size)
> +{
> +	spin_lock(&dev_priv->ipvr_stat.object_stat_lock);
> +	dev_priv->ipvr_stat.mmu_used_size += size;
> +	spin_unlock(&dev_priv->ipvr_stat.object_stat_lock);
> +}
> +
> +void ipvr_stat_remove_mmu_bind(struct drm_ipvr_private *dev_priv, size_t size)
> +{
> +	spin_lock(&dev_priv->ipvr_stat.object_stat_lock);
> +	dev_priv->ipvr_stat.mmu_used_size -= size;
> +	spin_unlock(&dev_priv->ipvr_stat.object_stat_lock);
> +}
> +
> +static int ipvr_debug_gem_object_info(struct seq_file *m, void* data)
> +{
> +	struct drm_info_node *node = (struct drm_info_node *) m->private;
> +	struct drm_device *dev = node->minor->dev;
> +	struct drm_ipvr_private *dev_priv = dev->dev_private;
> +	int ret;
> +
> +	ret = mutex_lock_interruptible(&dev->struct_mutex);
> +	if (ret)
> +		return ret;
> +
> +	seq_printf(m, "total allocated %u objects, %zu bytes\n\n",
> +		   dev_priv->ipvr_stat.allocated_count,
> +		   dev_priv->ipvr_stat.allocated_memory);
> +	seq_printf(m, "total imported %u objects, %zu bytes\n\n",
> +		   dev_priv->ipvr_stat.imported_count,
> +		   dev_priv->ipvr_stat.imported_memory);
> +	seq_printf(m, "total exported %u objects, %zu bytes\n\n",
> +		   dev_priv->ipvr_stat.exported_count,
> +		   dev_priv->ipvr_stat.exported_memory);
> +	seq_printf(m, "total used MMU size %zu bytes\n\n",
> +		   dev_priv->ipvr_stat.mmu_used_size);
> +
> +	mutex_unlock(&dev->struct_mutex);
> +
> +	return 0;
> +}
> +
> +static int ipvr_debug_gem_seqno_info(struct seq_file *m, void *data)
> +{
> +	struct drm_info_node *node = (struct drm_info_node *) m->private;
> +	struct drm_device *dev = node->minor->dev;
> +	drm_ipvr_private_t *dev_priv = dev->dev_private;
> +	int ret;
> +
> +	ret = mutex_lock_interruptible(&dev->struct_mutex);
> +	if (ret)
> +		return ret;
> +
> +	seq_printf(m, "last signaled seq is %d, last emitted seq is %d\n",
> +		atomic_read(&dev_priv->fence_drv.signaled_seq),
> +		dev_priv->fence_drv.sync_seq);
> +
> +	mutex_unlock(&dev->struct_mutex);
> +
> +	return 0;
> +}
> +
> +static ssize_t ipvr_debug_ved_reg_read(struct file *filp, char __user *ubuf,
> +					size_t max, loff_t *ppos)
> +{
> +	struct drm_device *dev = filp->private_data;
> +	drm_ipvr_private_t *dev_priv = dev->dev_private;
> +	char buf[200], offset[20], operation[10], format[20], val[20];
> +	int len = 0, ret, no_of_tokens;
> +	unsigned long reg_offset, reg_to_write;
> +
> +	if (debugfs_vars.reg.reg_input == 0)
> +		return len;
> +
> +	/* subtract 1 from each field width to leave room for the NUL */
> +	snprintf(format, sizeof(format), "%%%zus %%%zus %%%zus",
> +			sizeof(operation) - 1, sizeof(offset) - 1,
> +			sizeof(val) - 1);
> +
> +	no_of_tokens = sscanf(debugfs_vars.reg.reg_vars,
> +					format, operation, offset, val);
> +
> +	if (no_of_tokens < 3)
> +		return len;
> +
> +	len = sizeof(debugfs_vars.reg.reg_vars);
> +
> +	if (strcmp(operation, IPVR_READ_TOKEN) == 0) {
> +		ret = kstrtoul(offset, 16, &reg_offset);
> +		if (ret)
> +			return -EINVAL;
> +
> +		len = scnprintf(buf, sizeof(buf), "0x%x: 0x%x\n",
> +			(u32)reg_offset,
> +			IPVR_REG_READ32((u32)reg_offset));
> +	} else if (strcmp(operation, IPVR_WRITE_TOKEN) == 0) {
> +		ret = kstrtoul(offset, 16, &reg_offset);
> +		if (ret)
> +			return -EINVAL;
> +
> +		ret = kstrtoul(val, 16, &reg_to_write);
> +		if (ret)
> +			return -EINVAL;
> +
> +		IPVR_REG_WRITE32(reg_offset, reg_to_write);
> +		len = scnprintf(buf, sizeof(buf),
> +				"0x%x: 0x%x\n",
> +				(u32)reg_offset,
> +				(u32)IPVR_REG_READ32(reg_offset));
> +	} else {
> +		len = scnprintf(buf, sizeof(buf), "Operation Not Supported\n");
> +	}
> +
> +	debugfs_vars.reg.reg_input = 0;
> +
> +	return simple_read_from_buffer(ubuf, max, ppos, buf, len);
> +}
> +
> +static ssize_t
> +ipvr_debug_ved_reg_write(struct file *filp, const char __user *ubuf,
> +			size_t cnt, loff_t *ppos)
> +{
> +	/* reset the string */
> +	memset(debugfs_vars.reg.reg_vars, 0, IPVR_MAX_BUFFER_STR_LEN);
> +
> +	if (cnt > 0) {
> +		if (cnt > sizeof(debugfs_vars.reg.reg_vars) - 1)
> +			return -EINVAL;
> +
> +		if (copy_from_user(debugfs_vars.reg.reg_vars, ubuf, cnt))
> +			return -EFAULT;
> +
> +		debugfs_vars.reg.reg_vars[cnt] = 0;
> +
> +		/* Enable Read */
> +		debugfs_vars.reg.reg_input = 1;
> +	}
> +
> +	return cnt;
> +}
> +
> +/* As the drm_debugfs_init() routines are called before dev->dev_private is
> + * allocated, we need to hook into the minor for release. */
> +static int ipvr_add_fake_info_node(struct drm_minor *minor,
> +					struct dentry *ent, const void *key)
> +{
> +	struct drm_info_node *node;
> +
> +	node = kmalloc(sizeof(struct drm_info_node), GFP_KERNEL);
> +	if (node == NULL) {
> +		debugfs_remove(ent);
> +		return -ENOMEM;
> +	}
> +
> +	node->minor = minor;
> +	node->dent = ent;
> +	node->info_ent = (void *) key;
> +
> +	mutex_lock(&minor->debugfs_lock);
> +	list_add(&node->list, &minor->debugfs_list);
> +	mutex_unlock(&minor->debugfs_lock);
> +
> +	return 0;
> +}
> +
> +static int ipvr_debugfs_create(struct dentry *root,
> +			       struct drm_minor *minor,
> +			       const char *name,
> +			       const struct file_operations *fops)
> +{
> +	struct drm_device *dev = minor->dev;
> +	struct dentry *ent;
> +
> +	ent = debugfs_create_file(name,
> +				  S_IRUGO | S_IWUSR,
> +				  root, dev,
> +				  fops);
> +	if (IS_ERR(ent))
> +		return PTR_ERR(ent);
> +
> +	return ipvr_add_fake_info_node(minor, ent, fops);
> +}
> +
> +static const struct file_operations ipvr_ved_reg_fops = {
> +	.owner = THIS_MODULE,
> +	.open = simple_open,
> +	.read = ipvr_debug_ved_reg_read,
> +	.write = ipvr_debug_ved_reg_write,
> +	.llseek = default_llseek,
> +};
> +
> +static struct drm_info_list ipvr_debugfs_list[] = {
> +	{"ipvr_capabilities", ipvr_debug_info, 0},
> +	{"ipvr_gem_objects", ipvr_debug_gem_object_info, 0},
> +	{"ipvr_gem_seqno", ipvr_debug_gem_seqno_info, 0},
> +};
> +#define IPVR_DEBUGFS_ENTRIES ARRAY_SIZE(ipvr_debugfs_list)
> +
> +static struct ipvr_debugfs_files {
> +	const char *name;
> +	const struct file_operations *fops;
> +} ipvr_debugfs_files[] = {
> +	{"ipvr_ved_reg_api", &ipvr_ved_reg_fops},
> +};
> +
> +int ipvr_debugfs_init(struct drm_minor *minor)
> +{
> +	int ret, i;
> +
> +	for (i = 0; i < ARRAY_SIZE(ipvr_debugfs_files); i++) {
> +		ret = ipvr_debugfs_create(minor->debugfs_root, minor,
> +				   ipvr_debugfs_files[i].name,
> +				   ipvr_debugfs_files[i].fops);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	return drm_debugfs_create_files(ipvr_debugfs_list,
> +				 IPVR_DEBUGFS_ENTRIES,
> +				 minor->debugfs_root, minor);
> +}
> +
> +void ipvr_debugfs_cleanup(struct drm_minor *minor)
> +{
> +	int i;
> +
> +	drm_debugfs_remove_files(ipvr_debugfs_list,
> +			  IPVR_DEBUGFS_ENTRIES, minor);
> +
> +	for (i = 0; i < ARRAY_SIZE(ipvr_debugfs_files); i++) {
> +		struct drm_info_list *info_list =
> +			(struct drm_info_list *)ipvr_debugfs_files[i].fops;
> +
> +		drm_debugfs_remove_files(info_list, 1, minor);
> +	}
> +}
> +
> +#endif /* CONFIG_DEBUG_FS */
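For context, the `ipvr_ved_reg_api` debugfs file above accepts commands of the form `<READ|WRITE> <hex offset> <hex value>`, parsed with a scanf-style format. A minimal user-space-style sketch of that parsing (hypothetical helper and struct names, not driver code):

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Parsed form of one "<READ|WRITE> <offset> <value>" command. */
struct reg_cmd {
	int is_write;          /* 1 for WRITE, 0 for READ */
	unsigned long offset;  /* register offset, hex */
	unsigned long value;   /* value to write, hex */
};

/* Parse a command string; returns 0 on success, -1 on malformed input.
 * Like the driver, we require all three tokens even for READ. */
static int parse_reg_cmd(const char *s, struct reg_cmd *cmd)
{
	char op[10], off[20], val[20];

	/* field widths leave room for the terminating NUL */
	if (sscanf(s, "%9s %19s %19s", op, off, val) < 3)
		return -1;

	cmd->offset = strtoul(off, NULL, 16);
	cmd->value = strtoul(val, NULL, 16);

	if (strcmp(op, "WRITE") == 0)
		cmd->is_write = 1;
	else if (strcmp(op, "READ") == 0)
		cmd->is_write = 0;
	else
		return -1;	/* operation not supported */
	return 0;
}
```

A write of `"WRITE 640 1"` followed by a read would then poke register 0x640 and echo back its new contents, matching the read-after-write behavior in `ipvr_debug_ved_reg_read()`.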
> diff --git a/drivers/gpu/drm/ipvr/ipvr_debug.h b/drivers/gpu/drm/ipvr/ipvr_debug.h
> new file mode 100644
> index 0000000..a88382e
> --- /dev/null
> +++ b/drivers/gpu/drm/ipvr/ipvr_debug.h
> @@ -0,0 +1,76 @@
> +/**************************************************************************
> + * ipvr_debug.h: IPVR debugfs support header file
> + *
> + * Copyright (c) 2014 Intel Corporation, Hillsboro, OR, USA
> + * All Rights Reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
> + * for more details.
> + *
> + * You should have received a copy of the GNU General Public License along
> + * with this program; if not, write to the Free Software Foundation, Inc.,
> + * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
> + *
> + * Authors:
> + *    Fei Jiang <fei.jiang@xxxxxxxxx>
> + *    Yao Cheng <yao.cheng@xxxxxxxxx>
> + *
> + **************************************************************************/
> +
> +
> +#ifndef _IPVR_DEBUG_H_
> +#define _IPVR_DEBUG_H_
> +
> +#include "ipvr_bo.h"
> +#include "drmP.h"
> +
> +/* Operations supported */
> +#define IPVR_MAX_BUFFER_STR_LEN		200
> +
> +#define IPVR_READ_TOKEN			"READ"
> +#define IPVR_WRITE_TOKEN		"WRITE"
> +
> +/* DebugFS Variable declaration */
> +struct ipvr_debugfs_reg_vars {
> +	char reg_vars[IPVR_MAX_BUFFER_STR_LEN];
> +	u32 reg_input;
> +};
> +
> +union ipvr_debugfs_vars {
> +	struct ipvr_debugfs_reg_vars reg;
> +};
> +
> +int ipvr_debugfs_init(struct drm_minor *minor);
> +void ipvr_debugfs_cleanup(struct drm_minor *minor);
> +
> +void ipvr_stat_add_object(struct drm_ipvr_private *dev_priv,
> +			struct drm_ipvr_gem_object *obj);
> +
> +void ipvr_stat_remove_object(struct drm_ipvr_private *dev_priv,
> +			struct drm_ipvr_gem_object *obj);
> +
> +void ipvr_stat_add_imported(struct drm_ipvr_private *dev_priv,
> +			struct drm_ipvr_gem_object *obj);
> +
> +void ipvr_stat_remove_imported(struct drm_ipvr_private *dev_priv,
> +			struct drm_ipvr_gem_object *obj);
> +
> +void ipvr_stat_add_exported(struct drm_ipvr_private *dev_priv,
> +			struct drm_ipvr_gem_object *obj);
> +
> +void ipvr_stat_remove_exported(struct drm_ipvr_private *dev_priv,
> +			struct drm_ipvr_gem_object *obj);
> +
> +void ipvr_stat_add_mmu_bind(struct drm_ipvr_private *dev_priv,
> +			size_t size);
> +
> +void ipvr_stat_remove_mmu_bind(struct drm_ipvr_private *dev_priv,
> +			size_t size);
> +
> +#endif
> diff --git a/drivers/gpu/drm/ipvr/ipvr_drm.h b/drivers/gpu/drm/ipvr/ipvr_drm.h
> new file mode 100644
> index 0000000..fade9a3
> --- /dev/null
> +++ b/drivers/gpu/drm/ipvr/ipvr_drm.h
> @@ -0,0 +1,259 @@
> +/**************************************************************************
> + * ipvr_drm.h: IPVR header file exported to user space
> + *
> + * Copyright (c) 2014 Intel Corporation, Hillsboro, OR, USA
> + * All Rights Reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
> + * for more details.
> + *
> + * You should have received a copy of the GNU General Public License along
> + * with this program; if not, write to the Free Software Foundation, Inc.,
> + * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
> + *
> + * Authors:
> + *    Fei Jiang <fei.jiang@xxxxxxxxx>
> + *    Yao Cheng <yao.cheng@xxxxxxxxx>
> + *
> + **************************************************************************/
> +
> +
> +/* this file only defines structs and macros exported to user space */
> +#ifndef _IPVR_DRM_H_
> +#define _IPVR_DRM_H_
> +
> +#include <drm/drm.h>
> +struct drm_ipvr_context_create {
> +	/* passed ctx_info, including codec, profile info */
> +#define IPVR_CONTEXT_TYPE_VED   (0x1)
> +	__u32 ctx_type;
> +	/* returned back ctx_id */
> +	__u32 ctx_id;
> +	/*
> +	 * following tiling strides for VED are supported:
> +	 * stride 0: 512 for scheme 0, 1024 for scheme 1
> +	 * stride 1: 1024 for scheme 0, 2048 for scheme 1
> +	 * stride 2: 2048 for scheme 0, 4096 for scheme 1
> +	 * stride 3: 4096 for scheme 0
> +	 */
> +	__u32 tiling_stride;
> +	/*
> +	 * scheme 0: tile is 256x16, while minimal tile stride is 512
> +	 * scheme 1: tile is 512x8, while minimal tile stride is 1024
> +	 */
> +	__u32 tiling_scheme;
> +};
> +
> +struct drm_ipvr_context_destroy {
> +	__u32 ctx_id;
> +	__u32 pad64;
> +};
> +
> +/* ioctl used for querying info from driver */
> +enum drm_ipvr_misc_key {
> +	IPVR_DEVICE_INFO,
> +};
> +struct drm_ipvr_get_info {
> +	__u64 key;
> +	__u64 value;
> +};
> +
> +struct drm_ipvr_gem_relocation_entry {
> +	/**
> +	 * Handle of the buffer being pointed to by this relocation entry.
> +	 *
> +	 * It's appealing to make this be an index into the mm_validate_entry
> +	 * list to refer to the buffer, but this allows the driver to create
> +	 * a relocation list for state buffers and not re-write it per
> +	 * exec using the buffer.
> +	 */
> +	__u32 target_handle;
> +
> +	/**
> +	 * Value to be added to the offset of the target buffer to make up
> +	 * the relocation entry.
> +	 */
> +	__u32 delta;
> +
> +	/** Offset in the buffer the relocation entry will be written into */
> +	__u64 offset;
> +
> +	/**
> +	 * Offset value of the target buffer that the relocation entry was last
> +	 * written as.
> +	 *
> +	 * If the buffer has the same offset as last time, we can skip syncing
> +	 * and writing the relocation.  This value is written back out by
> +	 * the execbuffer ioctl when the relocation is written.
> +	 */
> +	__u64 presumed_offset;
> +
> +	/**
> +	 * Target memory domains read by this operation.
> +	 */
> +	__u32 read_domains;
> +
> +	/**
> +	 * Target memory domains written by this operation.
> +	 *
> +	 * Note that only one domain may be written by the whole
> +	 * execbuffer operation, so that where there are conflicts,
> +	 * the application will get -EINVAL back.
> +	 */
> +	__u32 write_domain;
> +};
> +
> +struct drm_ipvr_gem_exec_object {
> +	/**
> +	 * User's handle for a buffer to be bound into the MMU for this
> +	 * operation.
> +	 */
> +	__u32 handle;
> +
> +	/** Number of relocations to be performed on this buffer */
> +	__u32 relocation_count;
> +	/**
> +	 * Pointer to array of struct drm_ipvr_gem_relocation_entry
> +	 * containing the relocations to be performed in this buffer.
> +	 */
> +	__u64 relocs_ptr;
> +
> +	/** Required alignment in graphics aperture */
> +	__u64 alignment;
> +
> +	/**
> +	 * Returned value of the updated offset of the object, for future
> +	 * presumed_offset writes.
> +	 */
> +	__u64 offset;
> +
> +#define IPVR_EXEC_OBJECT_NEED_FENCE (1 << 0)
> +#define IPVR_EXEC_OBJECT_SUBMIT     (1 << 1)
> +	__u64 flags;
> +
> +	__u64 rsvd1;
> +	__u64 rsvd2;
> +};
> +
> +struct drm_ipvr_gem_execbuffer {
> +	/**
> +	 * List of gem_exec_object2 structs
> +	 */
> +	__u64 buffers_ptr;
> +	__u32 buffer_count;
> +
> +	/** Offset in the batchbuffer to start execution from. */
> +	__u32 exec_start_offset;
> +	/** Bytes used in the batch buffer from exec_start_offset */
> +	__u32 exec_len;
> +
> +	/**
> +	 * ID of hardware context.
> +	 */
> +	__u32 ctx_id;
> +
> +	__u64 flags;
> +	__u64 rsvd1;
> +	__u64 rsvd2;
> +};
> +
> +enum ipvr_cache_level {
> +	IPVR_CACHE_UNCACHED,
> +	IPVR_CACHE_WRITEBACK,
> +	IPVR_CACHE_WRITECOMBINE,
> +	IPVR_CACHE_MAX,
> +};
> +
> +struct drm_ipvr_gem_create {
> +	/*
> +	 * Requested size for the object.
> +	 * The (page-aligned) allocated size for the object will be returned.
> +	 */
> +	__u64 size;
> +	__u64 rounded_size;
> +	__u64 mmu_offset;
> +	/*
> +	 * Returned handle for the object.
> +	 * Object handles are nonzero.
> +	 */
> +	__u32 handle;
> +	__u32 tiling;
> +
> +	__u32 cache_level;
> +	__u32 pad64;
> +	/*
> +	 * Handle used for user to mmap BO
> +	 */
> +	__u64 map_offset;
> +};
> +
> +struct drm_ipvr_gem_busy {
> +	/* Handle of the buffer to check for busy */
> +	__u32 handle;
> +
> +	/*
> +	 * Return busy status (1 if busy, 0 if idle).
> +	 * The high word is used to indicate on which rings the object
> +	 * currently resides:
> +	 *  16:31 - busy (r or r/w) rings (16 render, 17 bsd, 18 blt, etc)
> +	 */
> +	__u32 busy;
> +};
> +
> +struct drm_ipvr_gem_mmap_offset {
> +	/** Handle for the object being mapped. */
> +	__u32 handle;
> +	__u32 pad64;
> +	/**
> +	 * Fake offset to use for subsequent mmap call
> +	 *
> +	 * This is a fixed-size type for 32/64 compatibility.
> +	 */
> +	__u64 offset;
> +};
> +
> +struct drm_ipvr_gem_wait {
> +	/* Handle of BO we shall wait on */
> +	__u32 handle;
> +	__u32 flags;
> +	/** Number of nanoseconds to wait, Returns time remaining. */
> +	__s64 timeout_ns;
> +};
> +
> +/*
> + * IPVR GEM specific ioctls
> + */
> +#define DRM_IPVR_CONTEXT_CREATE     0x00
> +#define DRM_IPVR_CONTEXT_DESTROY    0x01
> +#define DRM_IPVR_GET_INFO           0x02
> +#define DRM_IPVR_GEM_EXECBUFFER     0x03
> +#define DRM_IPVR_GEM_BUSY           0x04
> +#define DRM_IPVR_GEM_CREATE         0x05
> +#define DRM_IPVR_GEM_WAIT           0x06
> +#define DRM_IPVR_GEM_MMAP_OFFSET    0x07
> +
> +#define DRM_IOCTL_IPVR_CONTEXT_CREATE	\
> +	DRM_IOWR(DRM_COMMAND_BASE + DRM_IPVR_CONTEXT_CREATE, struct drm_ipvr_context_create)
> +#define DRM_IOCTL_IPVR_CONTEXT_DESTROY	\
> +	DRM_IOW(DRM_COMMAND_BASE + DRM_IPVR_CONTEXT_DESTROY, struct drm_ipvr_context_destroy)
> +#define DRM_IOCTL_IPVR_GET_INFO		\
> +	DRM_IOWR(DRM_COMMAND_BASE + DRM_IPVR_GET_INFO, struct drm_ipvr_get_info)
> +#define DRM_IOCTL_IPVR_GEM_EXECBUFFER	\
> +	DRM_IOWR(DRM_COMMAND_BASE + DRM_IPVR_GEM_EXECBUFFER, struct drm_ipvr_gem_execbuffer)
> +#define DRM_IOCTL_IPVR_GEM_BUSY		\
> +	DRM_IOWR(DRM_COMMAND_BASE + DRM_IPVR_GEM_BUSY, struct drm_ipvr_gem_busy)
> +#define DRM_IOCTL_IPVR_GEM_CREATE	\
> +	DRM_IOWR(DRM_COMMAND_BASE + DRM_IPVR_GEM_CREATE, struct drm_ipvr_gem_create)
> +#define DRM_IOCTL_IPVR_GEM_WAIT		\
> +	DRM_IOWR(DRM_COMMAND_BASE + DRM_IPVR_GEM_WAIT, struct drm_ipvr_gem_wait)
> +#define DRM_IOCTL_IPVR_GEM_MMAP_OFFSET	\
> +	DRM_IOWR(DRM_COMMAND_BASE + DRM_IPVR_GEM_MMAP_OFFSET, struct drm_ipvr_gem_mmap_offset)
> +
> +#endif
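The EXEC ioctl's relocation fixup (mentioned in the v3 changelog) walks each `drm_ipvr_gem_relocation_entry` and rewrites the batch contents only when a target buffer has moved away from `presumed_offset`. A simplified sketch of that logic, using hypothetical types and helper names (not the driver's actual implementation), assuming 32-bit references in the batch:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Simplified mirror of drm_ipvr_gem_relocation_entry (illustrative only). */
struct reloc_entry {
	uint64_t offset;          /* where in the batch to patch */
	uint32_t delta;           /* constant added to the target address */
	uint64_t presumed_offset; /* address userspace assumed for the target */
};

/*
 * Patch one relocation into a batch buffer.  When the target buffer still
 * sits at presumed_offset, the value userspace already wrote is correct and
 * the write can be skipped.  Returns 1 if patched, 0 if skipped, -1 if the
 * relocation falls outside the batch.
 */
static int apply_reloc(uint8_t *batch, size_t batch_len,
		       const struct reloc_entry *r, uint64_t target_offset)
{
	uint32_t val;

	if (r->offset + sizeof(val) > batch_len)
		return -1;		/* reloc outside the batch */
	if (target_offset == r->presumed_offset)
		return 0;		/* nothing to fix up */
	val = (uint32_t)(target_offset + r->delta);
	memcpy(batch + r->offset, &val, sizeof(val)); /* unaligned-safe write */
	return 1;
}
```

On success the kernel writes the new offset back into the exec object so userspace can use it as the next `presumed_offset`, which is what makes the skip path above profitable on subsequent submissions.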
> diff --git a/drivers/gpu/drm/ipvr/ipvr_drv.c b/drivers/gpu/drm/ipvr/ipvr_drv.c
> new file mode 100644
> index 0000000..29ffe39
> --- /dev/null
> +++ b/drivers/gpu/drm/ipvr/ipvr_drv.c
> @@ -0,0 +1,617 @@
> +/**************************************************************************
> + * ipvr_drv.c: IPVR driver common file for initialization/de-initialization
> + *
> + * Copyright (c) 2014 Intel Corporation, Hillsboro, OR, USA
> + * All Rights Reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
> + * for more details.
> + *
> + * You should have received a copy of the GNU General Public License along
> + * with this program; if not, write to the Free Software Foundation, Inc.,
> + * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
> + *
> + * Authors:
> + *    Fei Jiang <fei.jiang@xxxxxxxxx>
> + *    Yao Cheng <yao.cheng@xxxxxxxxx>
> + *
> + **************************************************************************/
> +
> +#include "ipvr_drv.h"
> +#include "ipvr_gem.h"
> +#include "ipvr_mmu.h"
> +#include "ipvr_exec.h"
> +#include "ipvr_bo.h"
> +#include "ipvr_debug.h"
> +#include "ipvr_trace.h"
> +#include "ved_fw.h"
> +#include "ved_pm.h"
> +#include "ved_reg.h"
> +#include "ved_cmd.h"
> +#include <linux/device.h>
> +#include <linux/version.h>
> +#include <uapi/drm/drm.h>
> +#include <linux/pm_runtime.h>
> +#include <linux/console.h>
> +#include <linux/module.h>
> +#include <asm/uaccess.h>
> +
> +int drm_ipvr_debug = 0x80;
> +int drm_ipvr_freq = 320;
> +
> +module_param_named(debug, drm_ipvr_debug, int, 0600);
> +module_param_named(freq, drm_ipvr_freq, int, 0600);
> +
> +MODULE_PARM_DESC(debug,
> +		"control debug info output (default 0x80): "
> +		"0x01:IPVR_D_GENERAL, 0x02:IPVR_D_INIT, 0x04:IPVR_D_IRQ, "
> +		"0x08:IPVR_D_ENTRY, 0x10:IPVR_D_PM, 0x20:IPVR_D_REG, "
> +		"0x40:IPVR_D_VED, 0x80:IPVR_D_WARN");
> +MODULE_PARM_DESC(freq,
> +		"preferred VED frequency in MHz (default 320)");
> +
> +static struct drm_ioctl_desc ipvr_gem_ioctls[] = {
> +	DRM_IOCTL_DEF_DRV(IPVR_CONTEXT_CREATE,
> +			ipvr_context_create_ioctl, DRM_AUTH|DRM_UNLOCKED),
> +	DRM_IOCTL_DEF_DRV(IPVR_CONTEXT_DESTROY,
> +			ipvr_context_destroy_ioctl, DRM_AUTH|DRM_UNLOCKED),
> +	DRM_IOCTL_DEF_DRV(IPVR_GET_INFO,
> +			ipvr_get_info_ioctl, DRM_AUTH|DRM_UNLOCKED),
> +	DRM_IOCTL_DEF_DRV(IPVR_GEM_EXECBUFFER,
> +			ipvr_gem_execbuffer_ioctl, DRM_AUTH|DRM_UNLOCKED),
> +	DRM_IOCTL_DEF_DRV(IPVR_GEM_BUSY,
> +			ipvr_gem_busy_ioctl, DRM_AUTH|DRM_UNLOCKED),
> +	DRM_IOCTL_DEF_DRV(IPVR_GEM_CREATE,
> +			ipvr_gem_create_ioctl, DRM_AUTH|DRM_UNLOCKED),
> +	DRM_IOCTL_DEF_DRV(IPVR_GEM_WAIT,
> +			ipvr_gem_wait_ioctl, DRM_AUTH|DRM_UNLOCKED),
> +	DRM_IOCTL_DEF_DRV(IPVR_GEM_MMAP_OFFSET,
> +			ipvr_gem_mmap_offset_ioctl, DRM_AUTH|DRM_UNLOCKED),
> +};
> +
> +static void ipvr_gem_init(struct drm_device *dev)
> +{
> +	struct drm_ipvr_private *dev_priv = dev->dev_private;
> +
> +	dev_priv->ipvr_bo_slab = kmem_cache_create("ipvr_gem_object",
> +				  sizeof(struct drm_ipvr_gem_object), 0,
> +				  SLAB_HWCACHE_ALIGN, NULL);
> +
> +	spin_lock_init(&dev_priv->ipvr_stat.object_stat_lock);
> +	dev_priv->ipvr_stat.interruptible = true;
> +}
> +
> +static void ipvr_gem_setup_mmu(struct drm_device *dev,
> +				       unsigned long linear_start,
> +				       unsigned long linear_end,
> +				       unsigned long tiling_start,
> +				       unsigned long tiling_end)
> +{
> +	/* Let GEM Manage all of the aperture.
> +	 */
> +	struct drm_ipvr_private *dev_priv = dev->dev_private;
> +	struct ipvr_address_space *addr_space = &dev_priv->addr_space;
> +
> +	addr_space->dev = dev_priv->dev;
> +
> +	/* Subtract the guard page ... */
> +	drm_mm_init(&addr_space->linear_mm, linear_start,
> +		    linear_end - linear_start - PAGE_SIZE);
> +	dev_priv->addr_space.linear_start = linear_start;
> +	dev_priv->addr_space.linear_total = linear_end - linear_start;
> +
> +	drm_mm_init(&addr_space->tiling_mm, tiling_start,
> +		    tiling_end - tiling_start - PAGE_SIZE);
> +	dev_priv->addr_space.tiling_start = tiling_start;
> +	dev_priv->addr_space.tiling_total = tiling_end - tiling_start;
> +}
> +
> +int ipvr_runtime_pm_get(struct drm_ipvr_private *dev_priv)
> +{
> +	int ret = 0;
> +	int pending;
> +	unsigned long irq_flags;
> +	struct platform_device *platdev = dev_priv->dev->platformdev;
> +	BUG_ON(!platdev);
> +	BUG_ON(atomic_read(&dev_priv->pending_events) < 0);
> +	spin_lock_irqsave(&dev_priv->power_usage_lock, irq_flags);
> +	if ((pending = atomic_inc_return(&dev_priv->pending_events)) == 1) {
> +		do {
> +			ret = pm_runtime_get_sync(&platdev->dev);
> +			if (ret == -EAGAIN) {
> +				IPVR_DEBUG_WARN("pm_runtime_get_sync returns EAGAIN\n");
> +			} else if (ret < 0) {
> +				IPVR_ERROR("pm_runtime_get_sync returns %d\n", ret);
> +				pending = atomic_dec_return(&dev_priv->pending_events);
> +			}
> +		} while (ret == -EAGAIN);
> +	}
> +	trace_ipvr_get_power(atomic_read(&platdev->dev.power.usage_count),
> +		pending);
> +	spin_unlock_irqrestore(&dev_priv->power_usage_lock, irq_flags);
> +	return ret;
> +}
> +
> +int ipvr_runtime_pm_put(struct drm_ipvr_private *dev_priv, bool async)
> +{
> +	int ret = 0;
> +	int pending;
> +	unsigned long irq_flags;
> +	struct platform_device *platdev = dev_priv->dev->platformdev;
> +	BUG_ON(!platdev);
> +	BUG_ON(atomic_read(&dev_priv->pending_events) <= 0);
> +	spin_lock_irqsave(&dev_priv->power_usage_lock, irq_flags);
> +	if ((pending = atomic_dec_return(&dev_priv->pending_events)) == 0) {
> +		do {
> +			if (async)
> +				ret = pm_runtime_put(&platdev->dev);
> +			else
> +				ret = pm_runtime_put_sync(&platdev->dev);
> +			if (ret == -EAGAIN)
> +				IPVR_DEBUG_WARN("pm_runtime_put returns EAGAIN\n");
> +			else if (ret < 0)
> +				IPVR_ERROR("pm_runtime_put returns %d\n", ret);
> +		} while (ret == -EAGAIN);
> +	}
> +	trace_ipvr_put_power(atomic_read(&platdev->dev.power.usage_count),
> +		pending);
> +	spin_unlock_irqrestore(&dev_priv->power_usage_lock, irq_flags);
> +	return ret;
> +}
> +
> +int ipvr_runtime_pm_put_all(struct drm_ipvr_private *dev_priv, bool async)
> +{
> +	int ret = 0;
> +	unsigned long irq_flags;
> +	struct platform_device *platdev = dev_priv->dev->platformdev;
> +	BUG_ON(!platdev);
> +	spin_lock_irqsave(&dev_priv->power_usage_lock, irq_flags);
> +	if (atomic_read(&dev_priv->pending_events) > 0) {
> +		atomic_set(&dev_priv->pending_events, 0);
> +		do {
> +			if (async)
> +				ret = pm_runtime_put(&platdev->dev);
> +			else
> +				ret = pm_runtime_put_sync(&platdev->dev);
> +			if (ret == -EAGAIN)
> +				IPVR_DEBUG_WARN("pm_runtime_put returns EAGAIN\n");
> +			else if (ret < 0)
> +				IPVR_ERROR("pm_runtime_put returns %d\n", ret);
> +		} while (ret == -EAGAIN);
> +	}
> +	trace_ipvr_put_power(atomic_read(&platdev->dev.power.usage_count), 0);
> +	spin_unlock_irqrestore(&dev_priv->power_usage_lock, irq_flags);
> +	return ret;
> +}
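The three helpers above keep `pm_runtime` get/put calls paired by acting only on the 0→1 and 1→0 transitions of `pending_events`; the spinlock serializes the transition with the actual runtime-PM call. Stripped of the locking and -EAGAIN retry handling, the pairing pattern can be sketched as (hypothetical names; `power_ref` stands in for the device's runtime-PM usage count):

```c
#include <assert.h>
#include <stdatomic.h>

/* Number of in-flight events that require the device to stay powered. */
static atomic_int pending_events;
/* Stand-in for the runtime-PM usage count taken on behalf of those events. */
static atomic_int power_ref;

static void pm_event_get(void)
{
	/* only the first pending event takes a power reference */
	if (atomic_fetch_add(&pending_events, 1) == 0)
		atomic_fetch_add(&power_ref, 1);	/* pm_runtime_get_sync() */
}

static void pm_event_put(void)
{
	/* only the last pending event drops the power reference */
	if (atomic_fetch_sub(&pending_events, 1) == 1)
		atomic_fetch_sub(&power_ref, 1);	/* pm_runtime_put() */
}
```

However many events are queued, `power_ref` only ever moves between 0 and 1, so gets and puts stay balanced; `ipvr_runtime_pm_put_all()` is the error-path variant that forces `pending_events` back to 0 and drops the single reference.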
> +
> +static int ipvr_drm_unload(struct drm_device *dev)
> +{
> +	struct drm_ipvr_private *dev_priv = dev->dev_private;
> +	IPVR_DEBUG_ENTRY("entered.");
> +	BUG_ON(!dev->platformdev);
> +
> +	if (dev_priv) {
> +		if (dev_priv->ipvr_bo_slab)
> +			kmem_cache_destroy(dev_priv->ipvr_bo_slab);
> +		ipvr_fence_driver_fini(dev_priv);
> +
> +		if (WARN_ON(ipvr_runtime_pm_get(dev_priv) < 0)) {
> +			IPVR_DEBUG_WARN("Error getting ipvr power\n");
> +		} else {
> +			ved_core_deinit(dev_priv);
> +			if (WARN_ON(ipvr_runtime_pm_put_all(dev_priv, false) < 0))
> +				IPVR_DEBUG_WARN("Error putting ipvr power\n");
> +		}
> +		if (dev_priv->validate_ctx.buffers)
> +			vfree(dev_priv->validate_ctx.buffers);
> +
> +		if (dev_priv->mmu) {
> +			ipvr_mmu_driver_takedown(dev_priv->mmu);
> +			dev_priv->mmu = NULL;
> +		}
> +
> +		if (dev_priv->reg_base) {
> +			iounmap(dev_priv->reg_base);
> +			dev_priv->reg_base = NULL;
> +		}
> +
> +		list_del(&dev_priv->default_ctx.head);
> +		idr_remove(&dev_priv->ipvr_ctx_idr, dev_priv->default_ctx.ctx_id);
> +		kfree(dev_priv);
> +
> +	}
> +	pm_runtime_disable(&dev->platformdev->dev);
> +
> +	return 0;
> +}
> +
> +static int ipvr_drm_load(struct drm_device *dev, unsigned long flags)
> +{
> +	struct drm_ipvr_private *dev_priv;
> +	u32 ctx_id;
> +	int ret = 0;
> +	struct resource *res_mmio;
> +	void __iomem* mmio_start;
> +
> +	if (!dev->platformdev)
> +		return -ENODEV;
> +
> +	dev_priv = kzalloc(sizeof(*dev_priv), GFP_KERNEL);
> +	if (dev_priv == NULL)
> +		return -ENOMEM;
> +
> +	dev->dev_private = dev_priv;
> +	dev_priv->dev = dev;
> +
> +	INIT_LIST_HEAD(&dev_priv->validate_ctx.validate_list);
> +
> +	dev_priv->pci_root = pci_get_bus_and_slot(0, PCI_DEVFN(0, 0));
> +	if (!dev_priv->pci_root) {
> +		kfree(dev_priv);
> +		return -ENODEV;
> +	}
> +
> +	dev->irq = platform_get_irq(dev->platformdev, 0);
> +	if (dev->irq < 0) {
> +		kfree(dev_priv);
> +		return -ENODEV;
> +	}
> +
> +	res_mmio = platform_get_resource(dev->platformdev, IORESOURCE_MEM, 0);
> +	if (!res_mmio) {
> +		kfree(dev_priv);
> +		return -ENXIO;
> +	}
> +
> +	mmio_start = ioremap_nocache(res_mmio->start,
> +					resource_size(res_mmio));
> +	if (!mmio_start) {
> +		kfree(dev_priv);
> +		return -EACCES;
> +	}
> +
> +	dev_priv->reg_base = mmio_start;
> +	IPVR_DEBUG_VED("reg_base is %p - 0x%p.\n",
> +		dev_priv->reg_base,
> +		dev_priv->reg_base + (res_mmio->end - res_mmio->start));
> +
> +	atomic_set(&dev_priv->pending_events, 0);
> +	spin_lock_init(&dev_priv->power_usage_lock);
> +	pm_runtime_enable(&dev->platformdev->dev);
> +	if (WARN_ON(ipvr_runtime_pm_get(dev_priv) < 0)) {
> +		IPVR_ERROR("Error getting ipvr power\n");
> +		ret = -EBUSY;
> +		goto out_err;
> +	}
> +
> +	IPVR_DEBUG_INIT("MSVDX_CORE_REV_OFFSET by readl is 0x%x.\n",
> +		readl(dev_priv->reg_base + 0x640));
> +	IPVR_DEBUG_INIT("MSVDX_CORE_REV_OFFSET by VED_REG_READ32 is 0x%x.\n",
> +		IPVR_REG_READ32(MSVDX_CORE_REV_OFFSET));
> +
> +	/* mmu init */
> +	dev_priv->mmu = ipvr_mmu_driver_init(NULL, 0, dev_priv);
> +	if (!dev_priv->mmu) {
> +		ret = -EBUSY;
> +		goto out_err;
> +	}
> +
> +	ipvr_mmu_set_pd_context(ipvr_mmu_get_default_pd(dev_priv->mmu), 0);
> +
> +	/*
> +	 * Initialize sequence numbers for the different command
> +	 * submission mechanisms.
> +	 */
> +	dev_priv->last_seq = 1;
> +
> +	ipvr_gem_init(dev);
> +
> +	ipvr_gem_setup_mmu(dev,
> +		IPVR_MEM_MMU_LINEAR_START,
> +		IPVR_MEM_MMU_LINEAR_END,
> +		IPVR_MEM_MMU_TILING_START,
> +		IPVR_MEM_MMU_TILING_END);
> +
> +	ved_core_init(dev_priv);
> +
> +	if (WARN_ON(ipvr_runtime_pm_put(dev_priv, false) < 0))
> +		IPVR_DEBUG_WARN("Error putting ipvr power\n");
> +
> +	dev_priv->ved_private->ved_needs_reset = 1;
> +
> +	ipvr_fence_driver_init(dev_priv);
> +
> +	dev_priv->validate_ctx.buffers =
> +		vmalloc(IPVR_NUM_VALIDATE_BUFFERS *
> +			sizeof(struct ipvr_validate_buffer));
> +	if (!dev_priv->validate_ctx.buffers) {
> +		ret = -ENOMEM;
> +		goto out_err;
> +	}
> +
> +	/* ipvr context initialization */
> +	spin_lock_init(&dev_priv->ipvr_ctx_lock);
> +	idr_init(&dev_priv->ipvr_ctx_idr);
> +	/* default ipvr context is used for scaling, rotation case */
> +	ctx_id = idr_alloc(&dev_priv->ipvr_ctx_idr, &dev_priv->default_ctx,
> +			   IPVR_MIN_CONTEXT_ID, IPVR_MAX_CONTEXT_ID,
> +			   GFP_NOWAIT);
> +	if (ctx_id < 0) {
> +		ret = -ENOMEM;
> +		goto out_err;
> +	}
> +	dev_priv->default_ctx.ctx_id = ctx_id;
> +	INIT_LIST_HEAD(&dev_priv->default_ctx.head);
> +	dev_priv->default_ctx.ctx_type = 0;
> +	dev_priv->default_ctx.ipvr_fpriv = NULL;
> +
> +	/* don't need protect with spinlock during module load stage */
> +	dev_priv->default_ctx.tiling_scheme = 0;
> +	dev_priv->default_ctx.tiling_stride = 0;
> +
> +	return 0;
> +out_err:
> +	ipvr_drm_unload(dev);
> +	return ret;
> +}
> +
> +/*
> + * The .open() method is called every time the device is opened by an
> + * application. Drivers can allocate per-file private data in this method and
> + * store them in the struct drm_file::driver_priv field. Note that the .open()
> + * method is called before .firstopen().
> + */
> +static int
> +ipvr_drm_open(struct drm_device *dev, struct drm_file *file_priv)
> +{
> +	struct drm_ipvr_file_private *ipvr_fp;
> +	IPVR_DEBUG_ENTRY("enter\n");
> +
> +	ipvr_fp = kzalloc(sizeof(*ipvr_fp), GFP_KERNEL);
> +	if (!ipvr_fp)
> +		return -ENOMEM;
> +
> +	file_priv->driver_priv = ipvr_fp;
> +	INIT_LIST_HEAD(&ipvr_fp->ctx_list);
> +	return 0;
> +}
> +
> +/*
> + * The close operation is split into .preclose() and .postclose() methods.
> + * Since .postclose() is deprecated, all resource destruction related to file
> + * handle are now done in .preclose() method.
> + */
> +static void
> +ipvr_drm_preclose(struct drm_device *dev, struct drm_file *file_priv)
> +{
> +	/* force close all contexts not explicitly closed by user */
> +	struct drm_ipvr_private *dev_priv;
> +	struct drm_ipvr_file_private *ipvr_fpriv;
> +	struct ved_private *ved_priv;
> +	struct ipvr_context *pos = NULL, *n = NULL;
> +	unsigned long irq_flags;
> +
> +	IPVR_DEBUG_ENTRY("enter\n");
> +	dev_priv = dev->dev_private;
> +	ipvr_fpriv = file_priv->driver_priv;
> +	ved_priv = dev_priv->ved_private;
> +
> +	spin_lock_irqsave(&dev_priv->ipvr_ctx_lock, irq_flags);
> +	if (ved_priv && (!list_empty(&ved_priv->ved_queue)
> +			|| (atomic_read(&dev_priv->pending_events) > 0))) {
> +		IPVR_DEBUG_WARN("Closing the FD while pending cmds exist!\n");
> +	}
> +	list_for_each_entry_safe(pos, n, &ipvr_fpriv->ctx_list, head) {
> +		IPVR_DEBUG_GENERAL("Video: remove context %d type 0x%x\n",
> +			pos->ctx_id, pos->ctx_type);
> +		list_del(&pos->head);
> +		idr_remove(&dev_priv->ipvr_ctx_idr, pos->ctx_id);
> +		kfree(pos);
> +	}
> +
> +	spin_unlock_irqrestore(&dev_priv->ipvr_ctx_lock, irq_flags);
> +	kfree(ipvr_fpriv);
> +}
> +
> +static irqreturn_t ipvr_irq_handler(int irq, void *arg)
> +{
> +	struct drm_device *dev = (struct drm_device *) arg;
> +	struct drm_ipvr_private *dev_priv = dev->dev_private;
> +	WARN_ON(ved_irq_handler(dev_priv->ved_private));
> +	return IRQ_HANDLED;
> +}
> +
> +static const struct file_operations ipvr_fops = {
> +	.owner = THIS_MODULE,
> +	.open = drm_open,
> +	.release = drm_release,
> +	.unlocked_ioctl = drm_ioctl,
> +#ifdef CONFIG_COMPAT
> +	.compat_ioctl = drm_ioctl,
> +#endif
> +	.mmap = drm_gem_mmap,
> +};
> +
> +static int ipvr_drm_freeze(struct drm_device *dev)
> +{
> +	int ret;
> +	struct drm_ipvr_private *dev_priv = dev->dev_private;
> +	IPVR_DEBUG_ENTRY("enter\n");
> +
> +	ret = ved_check_idle(dev_priv->ved_private);
> +	if (ret) {
> +		IPVR_DEBUG_PM("VED check idle fail: %d, skip freezing\n", ret);
> +		/* FIXME: better to schedule a delayed task? */
> +		return 0;
> +	}
> +
> +	if (dev->irq_enabled) {
> +		ret = drm_irq_uninstall(dev);
> +		if (ret) {
> +			IPVR_ERROR("Failed to uninstall drm irq handler: %d\n",
> +				ret);
> +		}
> +	}
> +
> +	if (is_ved_on(dev_priv)) {
> +		if (!ved_power_off(dev_priv)) {
> +			IPVR_ERROR("Failed to power off VED\n");
> +			return -EFAULT;
> +		}
> +		IPVR_DEBUG_PM("Successfully powered off\n");
> +	} else {
> +		IPVR_DEBUG_PM("Skipped power-off since already powered off\n");
> +	}
> +
> +	return 0;
> +}
> +
> +static int ipvr_drm_thaw(struct drm_device *dev)
> +{
> +	int ret;
> +	struct drm_ipvr_private *dev_priv = dev->dev_private;
> +	IPVR_DEBUG_ENTRY("enter\n");
> +	if (!is_ved_on(dev_priv)) {
> +		if (!ved_power_on(dev_priv)) {
> +			IPVR_ERROR("Failed to power on VED\n");
> +			return -EFAULT;
> +		}
> +		IPVR_DEBUG_PM("Successfully powered on\n");
> +	} else {
> +		IPVR_DEBUG_PM("Skipped power-on since already powered on\n");
> +	}
> +
> +	if (!dev->irq_enabled) {
> +		ret = drm_irq_install(dev, dev->irq);
> +		if (ret) {
> +			IPVR_ERROR("Failed to install drm irq handler: %d\n",
> +				ret);
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +static int ipvr_pm_suspend(struct device *dev)
> +{
> +	struct platform_device *platformdev = to_platform_device(dev);
> +	struct drm_device *drm_dev = platform_get_drvdata(platformdev);
> +	IPVR_DEBUG_PM("PM suspend called\n");
> +	return drm_dev ? ipvr_drm_freeze(drm_dev) : 0;
> +}
> +
> +static int ipvr_pm_resume(struct device *dev)
> +{
> +	struct platform_device *platformdev = to_platform_device(dev);
> +	struct drm_device *drm_dev = platform_get_drvdata(platformdev);
> +	IPVR_DEBUG_PM("PM resume called\n");
> +	return drm_dev ? ipvr_drm_thaw(drm_dev) : 0;
> +}
> +
> +static const struct vm_operations_struct ipvr_gem_vm_ops = {
> +	.fault = ipvr_gem_fault,
> +	.open = drm_gem_vm_open,
> +	.close = drm_gem_vm_close,
> +};
> +
> +static struct drm_driver ipvr_drm_driver = {
> +	.driver_features = DRIVER_HAVE_IRQ | DRIVER_GEM | DRIVER_PRIME,
> +	.load = ipvr_drm_load,
> +	.unload = ipvr_drm_unload,
> +	.open = ipvr_drm_open,
> +	.preclose = ipvr_drm_preclose,
> +	.irq_handler = ipvr_irq_handler,
> +	.gem_free_object = ipvr_gem_free_object,
> +	.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
> +	.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
> +	.gem_prime_export	= drm_gem_prime_export,
> +	.gem_prime_import	= drm_gem_prime_import,
> +	.gem_prime_get_sg_table = ipvr_gem_prime_get_sg_table,
> +	.gem_prime_import_sg_table = ipvr_gem_prime_import_sg_table,
> +	.gem_prime_pin		= ipvr_gem_prime_pin,
> +	.gem_prime_unpin	= ipvr_gem_prime_unpin,
> +#ifdef CONFIG_DEBUG_FS
> +	.debugfs_init = ipvr_debugfs_init,
> +	.debugfs_cleanup = ipvr_debugfs_cleanup,
> +#endif
> +	.gem_vm_ops = &ipvr_gem_vm_ops,
> +	.ioctls = ipvr_gem_ioctls,
> +	.num_ioctls = ARRAY_SIZE(ipvr_gem_ioctls),
> +	.fops = &ipvr_fops,
> +	.name = IPVR_DRIVER_NAME,
> +	.desc = IPVR_DRIVER_DESC,
> +	.date = IPVR_DRIVER_DATE,
> +	.major = IPVR_DRIVER_MAJOR,
> +	.minor = IPVR_DRIVER_MINOR,
> +	.patchlevel = IPVR_DRIVER_PATCHLEVEL,
> +};
> +
> +static int ipvr_plat_probe(struct platform_device *device)
> +{
> +	struct drm_device *drm_dev;
> +	int ret;
> +
> +	drm_dev = drm_dev_alloc(&ipvr_drm_driver, &device->dev);
> +	if (!drm_dev)
> +		return -ENOMEM;
> +
> +	drm_dev->platformdev = device;
> +	platform_set_drvdata(device, drm_dev);
> +	ret = drm_dev_register(drm_dev, 0);
> +	if (ret)
> +		goto err_free;
> +
> +	DRM_INFO("Initialized IPVR on minor %d\n", drm_dev->primary->index);
> +
> +	return 0;
> +err_free:
> +	drm_dev_unref(drm_dev);
> +	return ret;
> +}
> +
> +static int ipvr_plat_remove(struct platform_device *device)
> +{
> +	struct drm_device *drm_dev = platform_get_drvdata(device);
> +	if (drm_dev) {
> +		drm_dev_unregister(drm_dev);
> +		drm_dev_unref(drm_dev);
> +		platform_set_drvdata(device, NULL);
> +	}
> +	return 0;
> +}
> +
> +static const struct dev_pm_ops ipvr_pm_ops = {
> +	.suspend = ipvr_pm_suspend,
> +	.resume = ipvr_pm_resume,
> +	.freeze = ipvr_pm_suspend,
> +	.thaw = ipvr_pm_resume,
> +	.poweroff = ipvr_pm_suspend,
> +	.restore = ipvr_pm_resume,
> +#ifdef CONFIG_PM_RUNTIME
> +	.runtime_suspend = ipvr_pm_suspend,
> +	.runtime_resume = ipvr_pm_resume,
> +#endif
> +};
> +
> +static struct platform_driver ipvr_vlv_plat_driver = {
> +	.driver = {
> +		.name = "ipvr-ved-vlv",
> +		.owner = THIS_MODULE,
> +#ifdef CONFIG_PM
> +		.pm = &ipvr_pm_ops,
> +#endif
> +	},
> +	.probe = ipvr_plat_probe,
> +	.remove = ipvr_plat_remove,
> +};
> +
> +module_platform_driver(ipvr_vlv_plat_driver);
> +MODULE_LICENSE("GPL");
> diff --git a/drivers/gpu/drm/ipvr/ipvr_drv.h b/drivers/gpu/drm/ipvr/ipvr_drv.h
> new file mode 100644
> index 0000000..7f88380
> --- /dev/null
> +++ b/drivers/gpu/drm/ipvr/ipvr_drv.h
> @@ -0,0 +1,292 @@
> +/**************************************************************************
> + * ipvr_drv.h: IPVR driver common header file
> + *
> + * Copyright (c) 2014 Intel Corporation, Hillsboro, OR, USA
> + * All Rights Reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along
> + * with this program; if not, write to the Free Software Foundation, Inc.,
> + * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
> + *
> + * Authors:
> + *    Fei Jiang <fei.jiang@xxxxxxxxx>
> + *    Yao Cheng <yao.cheng@xxxxxxxxx>
> + *
> + **************************************************************************/
> +
> +#ifndef _IPVR_DRV_H_
> +#define _IPVR_DRV_H_
> +#include "drmP.h"
> +#include "ipvr_drm.h"
> +#include "ipvr_mmu.h"
> +#include <linux/version.h>
> +#include <linux/io-mapping.h>
> +#include <linux/i2c.h>
> +#include <linux/i2c-algo-bit.h>
> +#include <linux/backlight.h>
> +#include <linux/intel-iommu.h>
> +#include <linux/kref.h>
> +#include <linux/pm_qos.h>
> +#include <linux/mmu_notifier.h>
> +
> +#define IPVR_DRIVER_AUTHOR		"Intel, Inc."
> +#define IPVR_DRIVER_NAME		"ipvr"
> +#define IPVR_DRIVER_DESC		"PowerVR video drm driver"
> +#define IPVR_DRIVER_DATE		"20141113"
> +#define IPVR_DRIVER_MAJOR		0
> +#define IPVR_DRIVER_MINOR		1
> +#define IPVR_DRIVER_PATCHLEVEL	0
> +
> +/* read/write domains */
> +#define IPVR_GEM_DOMAIN_CPU		0x00000001
> +#define IPVR_GEM_DOMAIN_VPU		0x00000002
> +
> +/* context ID and type */
> +#define IPVR_CONTEXT_INVALID_ID 0
> +#define IPVR_MIN_CONTEXT_ID 1
> +#define IPVR_MAX_CONTEXT_ID 0xff
> +
> +/*
> + * Debug print bits setting
> + */
> +#define IPVR_D_GENERAL   (1 << 0)
> +#define IPVR_D_INIT      (1 << 1)
> +#define IPVR_D_IRQ       (1 << 2)
> +#define IPVR_D_ENTRY     (1 << 3)
> +#define IPVR_D_PM        (1 << 4)
> +#define IPVR_D_REG       (1 << 5)
> +#define IPVR_D_VED       (1 << 6)
> +#define IPVR_D_WARN      (1 << 7)
> +
> +#define IPVR_DEBUG_GENERAL(_fmt, _arg...) \
> +	IPVR_DEBUG(IPVR_D_GENERAL, _fmt, ##_arg)
> +#define IPVR_DEBUG_INIT(_fmt, _arg...) \
> +	IPVR_DEBUG(IPVR_D_INIT, _fmt, ##_arg)
> +#define IPVR_DEBUG_IRQ(_fmt, _arg...) \
> +	IPVR_DEBUG(IPVR_D_IRQ, _fmt, ##_arg)
> +#define IPVR_DEBUG_ENTRY(_fmt, _arg...) \
> +	IPVR_DEBUG(IPVR_D_ENTRY, _fmt, ##_arg)
> +#define IPVR_DEBUG_PM(_fmt, _arg...) \
> +	IPVR_DEBUG(IPVR_D_PM, _fmt, ##_arg)
> +#define IPVR_DEBUG_REG(_fmt, _arg...) \
> +	IPVR_DEBUG(IPVR_D_REG, _fmt, ##_arg)
> +#define IPVR_DEBUG_VED(_fmt, _arg...) \
> +	IPVR_DEBUG(IPVR_D_VED, _fmt, ##_arg)
> +#define IPVR_DEBUG_WARN(_fmt, _arg...) \
> +	IPVR_DEBUG(IPVR_D_WARN, _fmt, ##_arg)
> +
> +#define IPVR_DEBUG(_flag, _fmt, _arg...) \
> +	do { \
> +		if (unlikely((_flag) & drm_ipvr_debug)) \
> +			printk(KERN_INFO \
> +			       "[ipvr:0x%02x:%s] " _fmt , _flag, \
> +			       __func__ , ##_arg); \
> +	} while (0)
> +
> +#define IPVR_ERROR(_fmt, _arg...) \
> +	do { \
> +			printk(KERN_ERR \
> +			       "[ipvr:ERROR:%s] " _fmt, \
> +			       __func__ , ##_arg); \
> +	} while (0)
> +
> +#define IPVR_UDELAY(usec) \
> +	do { \
> +		cpu_relax(); \
> +	} while (0)
> +
> +#define IPVR_REG_WRITE32(_val, _offs) \
> +	iowrite32(_val, dev_priv->reg_base + (_offs))
> +#define IPVR_REG_READ32(_offs) \
> +	ioread32(dev_priv->reg_base + (_offs))
> +
> +typedef struct ipvr_validate_buffer ipvr_validate_buffer_t;
> +
> +#define to_ipvr_bo(x) container_of(x, struct drm_ipvr_gem_object, base)
> +
> +extern int drm_ipvr_debug;
> +extern int drm_ipvr_freq;
> +
> +struct ipvr_validate_context {
> +	ipvr_validate_buffer_t *buffers;
> +	int used_buffers;
> +	struct list_head validate_list;
> +};
> +
> +struct ipvr_mmu_driver;
> +struct ipvr_mmu_pd;
> +
> +struct ipvr_gem_stat {
> +	/**
> +	 * Are we in a non-interruptible section of code?
> +	 */
> +	bool interruptible;
> +
> +	/* accounting, useful for userland debugging */
> +	spinlock_t object_stat_lock;
> +	size_t allocated_memory;
> +	int allocated_count;
> +	size_t imported_memory;
> +	int imported_count;
> +	size_t exported_memory;
> +	int exported_count;
> +	size_t mmu_used_size;
> +};
> +
> +struct ipvr_address_space {
> +	struct drm_mm linear_mm;
> +	struct drm_mm tiling_mm;
> +	struct drm_device *dev;
> +	unsigned long linear_start;
> +	size_t linear_total;
> +	unsigned long tiling_start;
> +	size_t tiling_total;
> +
> +	/* need it during clear_range */
> +	struct {
> +		dma_addr_t addr;
> +		struct page *page;
> +	} scratch;
> +};
> +
> +struct ipvr_fence_driver {
> +	u16	sync_seq;
> +	atomic_t signaled_seq;
> +	unsigned long last_activity;
> +	bool initialized;
> +	spinlock_t fence_lock;
> +};
> +
> +struct ipvr_context {
> +	/* used to link into ipvr_ctx_list */
> +	struct list_head head;
> +	u32 ctx_id;
> +	/* used to double check ctx when find with idr, may be removed */
> +	struct drm_ipvr_file_private *ipvr_fpriv; /* DRM device file pointer
> */
> +	u32 ctx_type;
> +
> +	u16 cur_seq;
> +
> +	/* for IMG DDK, only use tiling for 2k and 4k buffer stride */
> +	/*
> +	 * following tiling strides for VED are supported:
> +	 * stride 0: 512 for scheme 0, 1024 for scheme 1
> +	 * stride 1: 1024 for scheme 0, 2048 for scheme 1
> +	 * stride 2: 2048 for scheme 0, 4096 for scheme 1
> +	 * stride 3: 4096 for scheme 0
> +	 */
> +	u8 tiling_stride;
> +	/*
> +	 * scheme 0: tile is 256x16, while minimal tile stride is 512
> +	 * scheme 1: tile is 512x8, while minimal tile stride is 1024
> +	 */
> +	u8 tiling_scheme;
> +};
> +
> +typedef struct drm_ipvr_private {
> +	struct drm_device *dev;
> +	struct pci_dev *pci_root;
> +
> +	/* IMG video context */
> +	spinlock_t ipvr_ctx_lock;
> +	struct idr ipvr_ctx_idr;
> +	struct ipvr_context default_ctx;
> +
> +	/* PM related */
> +	atomic_t pending_events;
> +	spinlock_t power_usage_lock;
> +
> +	/* exec related */
> +	struct ipvr_validate_context validate_ctx;
> +
> +	/* IMG MMU specific */
> +	struct ipvr_mmu_driver *mmu;
> +	atomic_t ipvr_mmu_invaldc;
> +
> +	/* GEM mm related */
> +	struct ipvr_gem_stat ipvr_stat;
> +	struct kmem_cache *ipvr_bo_slab;
> +	struct ipvr_address_space addr_space;
> +
> +	/* fence related */
> +	u32 last_seq;
> +	wait_queue_head_t fence_queue;
> +	struct ipvr_fence_driver fence_drv;
> +
> +	/* MMIO window shared from parent device */
> +	u8 __iomem *reg_base;
> +
> +	/*
> +	 * VED specific
> +	 */
> +	struct ved_private *ved_private;
> +} drm_ipvr_private_t;
> +
> +struct drm_ipvr_gem_object;
> +
> +/* VED private structure */
> +struct ved_private {
> +	struct drm_ipvr_private *dev_priv;
> +
> +	/* used to record seq got from irq fw-to-host msg */
> +	u16 ved_cur_seq;
> +
> +	/*
> +	 * VED Rendec Memory
> +	 */
> +	struct drm_ipvr_gem_object *ccb0;
> +	u32 base_addr0;
> +	struct drm_ipvr_gem_object *ccb1;
> +	u32 base_addr1;
> +	bool rendec_initialized;
> +
> +	/* VED firmware related */
> +	struct drm_ipvr_gem_object  *fw_bo;
> +	u32 fw_offset;
> +	u32 mtx_mem_size;
> +	bool fw_loaded_to_bo;
> +	bool ved_fw_loaded;
> +	void *ved_fw_ptr;
> +	size_t ved_fw_size;
> +
> +	/*
> +	 * ved command queue
> +	 */
> +	spinlock_t ved_lock;
> +	struct mutex ved_mutex;
> +	struct list_head ved_queue;
> +	/* busy means cmd submitted to fw, while irq hasn't been received */
> +	bool ved_busy;
> +	u32 ved_dash_access_ctrl;
> +
> +	/* pm related */
> +	int ved_needs_reset;
> +
> +	int default_tiling_stride;
> +	int default_tiling_scheme;
> +
> +	struct page *mmu_recover_page;
> +};
> +
> +struct drm_ipvr_file_private {
> +	/**
> +	 * protected by dev_priv->ipvr_ctx_lock
> +	 */
> +	struct list_head ctx_list;
> +};
> +
> +/* helpers for runtime pm */
> +int ipvr_runtime_pm_get(struct drm_ipvr_private *dev_priv);
> +int ipvr_runtime_pm_put(struct drm_ipvr_private *dev_priv, bool async);
> +int ipvr_runtime_pm_put_all(struct drm_ipvr_private *dev_priv, bool async);
> +
> +#endif
> diff --git a/drivers/gpu/drm/ipvr/ipvr_exec.c b/drivers/gpu/drm/ipvr/ipvr_exec.c
> new file mode 100644
> index 0000000..2e52dea
> --- /dev/null
> +++ b/drivers/gpu/drm/ipvr/ipvr_exec.c
> @@ -0,0 +1,613 @@
> +/**************************************************************************
> + * ipvr_exec.c: IPVR command buffer execution
> + *
> + * Copyright (c) 2014 Intel Corporation, Hillsboro, OR, USA
> + * All Rights Reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along
> + * with this program; if not, write to the Free Software Foundation, Inc.,
> + * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
> + *
> + * Authors:
> + *    Fei Jiang <fei.jiang@xxxxxxxxx>
> + *    Yao Cheng <yao.cheng@xxxxxxxxx>
> + *
> + **************************************************************************/
> +
> +#include "ipvr_exec.h"
> +#include "ipvr_gem.h"
> +#include "ipvr_mmu.h"
> +#include "ipvr_bo.h"
> +#include "ipvr_fence.h"
> +#include "ipvr_trace.h"
> +#include "ved_fw.h"
> +#include "ved_msg.h"
> +#include "ved_reg.h"
> +#include "ved_pm.h"
> +#include "ved_cmd.h"
> +#include <linux/io.h>
> +#include <linux/delay.h>
> +#include <linux/pm_runtime.h>
> +
> +static inline bool ipvr_bo_is_reserved(struct drm_ipvr_gem_object *obj)
> +{
> +	return atomic_read(&obj->reserved);
> +}
> +
> +static int
> +ipvr_bo_wait_unreserved(struct drm_ipvr_gem_object *obj, bool interruptible)
> +{
> +	if (interruptible) {
> +		return wait_event_interruptible(obj->event_queue,
> +					       !ipvr_bo_is_reserved(obj));
> +	} else {
> +		wait_event(obj->event_queue, !ipvr_bo_is_reserved(obj));
> +		return 0;
> +	}
> +}
> +
> +/**
> + * ipvr_bo_reserve - reserve the given bo
> + *
> + * @obj:     The buffer object to reserve.
> + * @interruptible:     whether the waiting is interruptible or not.
> + * @no_wait:    flag to indicate returning immediately
> + *
> + * Returns: 0 if successful, error code otherwise
> + */
> +int ipvr_bo_reserve(struct drm_ipvr_gem_object *obj,
> +			bool interruptible, bool no_wait)
> +{
> +	int ret;
> +
> +	while (unlikely(atomic_xchg(&obj->reserved, 1) != 0)) {
> +		if (no_wait)
> +			return -EBUSY;
> +		IPVR_DEBUG_GENERAL("wait bo unreserved, add to wait queue.\n");
> +		ret = ipvr_bo_wait_unreserved(obj, interruptible);
> +		if (unlikely(ret))
> +			return ret;
> +	}
> +
> +	return 0;
> +}
> +
> +/**
> + * ipvr_bo_unreserve - unreserve the given bo
> + *
> + * @obj:     The buffer object to unreserve.
> + *
> + * No return value.
> + */
> +void ipvr_bo_unreserve(struct drm_ipvr_gem_object *obj)
> +{
> +	atomic_set(&obj->reserved, 0);
> +	wake_up_all(&obj->event_queue);
> +}
> +
> +static void ipvr_backoff_reservation(struct list_head *list)
> +{
> +	struct ipvr_validate_buffer *entry;
> +
> +	list_for_each_entry(entry, list, head) {
> +		struct drm_ipvr_gem_object *obj = entry->ipvr_gem_bo;
> +		if (!atomic_read(&obj->reserved))
> +			continue;
> +		atomic_set(&obj->reserved, 0);
> +		wake_up_all(&obj->event_queue);
> +	}
> +}
> +
> +/*
> + * ipvr_reserve_buffers - Reserve buffers for validation.
> + *
> + * @list:     points to the list of buffer objects to reserve
> + *
> + * If a buffer in the list is marked for CPU access, we back off and
> + * wait for that buffer to become free for VPU access.
> + *
> + * If a buffer is reserved for another validation, the validator with
> + * the highest validation sequence backs off and waits for that buffer
> + * to become unreserved. This prevents deadlocks when validating multiple
> + * buffers in different orders.
> + *
> + * Returns:
> + * 0 on success, error code on failure.
> + */
> +int ipvr_reserve_buffers(struct list_head *list)
> +{
> +	struct ipvr_validate_buffer *entry;
> +	int ret;
> +
> +	if (list_empty(list))
> +		return 0;
> +
> +	list_for_each_entry(entry, list, head) {
> +		struct drm_ipvr_gem_object *bo = entry->ipvr_gem_bo;
> +
> +		ret = ipvr_bo_reserve(bo, true, true);
> +		switch (ret) {
> +		case 0:
> +			break;
> +		case -EBUSY:
> +			ret = ipvr_bo_reserve(bo, true, false);
> +			if (!ret)
> +				break;
> +			else
> +				goto err;
> +		default:
> +			goto err;
> +		}
> +	}
> +
> +	return 0;
> +err:
> +	ipvr_backoff_reservation(list);
> +	return ret;
> +}
> +
> +/**
> + * ipvr_set_tile - global setting of tiling info
> + *
> + * @dev_priv:     the ipvr drm device private structure
> + * @tiling_scheme:     see ipvr_drm.h for details
> + * @tiling_stride:     see ipvr_drm.h for details
> + *
> + * vxd392 hardware supports only one tile region so this configuration
> + * is global.
> + */
> +void ipvr_set_tile(struct drm_ipvr_private *dev_priv,
> +		u8 tiling_scheme, u8 tiling_stride)
> +{
> +	u32 cmd;
> +	u32 start = IPVR_MEM_MMU_TILING_START;
> +	u32 end = IPVR_MEM_MMU_TILING_END;
> +
> +	/* Enable memory tiling */
> +	cmd = ((start >> 20) + (((end >> 20) - 1) << 12) +
> +				((0x8 | tiling_stride) << 24));
> +	IPVR_DEBUG_GENERAL("VED: MMU Tiling register0 %08x.\n", cmd);
> +	IPVR_DEBUG_GENERAL("Region 0x%08x-0x%08x.\n", start, end);
> +	IPVR_REG_WRITE32(cmd, MSVDX_MMU_TILE_BASE0_OFFSET);
> +
> +	/* we need to set tile format as 512x8 on Baytrail, which is scheme 1 */
> +	IPVR_REG_WRITE32(tiling_scheme << 3,
> MSVDX_MMU_CONTROL2_OFFSET);
> +}
> +
> +/**
> + * ipvr_find_ctx_with_fence - lookup the context with given fence seqno
> + *
> + * @dev_priv:     the ipvr drm device
> + * @fence:     fence seqno generated by the context
> + *
> + * Returns:
> + * context pointer if found.
> + * NULL if not found.
> + */
> +struct ipvr_context *
> +ipvr_find_ctx_with_fence(struct drm_ipvr_private *dev_priv, u16 fence)
> +{
> +	struct ipvr_context *pos;
> +	int id = 0;
> +
> +	spin_lock(&dev_priv->ipvr_ctx_lock);
> +	idr_for_each_entry(&dev_priv->ipvr_ctx_idr, pos, id) {
> +		if (pos->cur_seq == fence) {
> +			spin_unlock(&dev_priv->ipvr_ctx_lock);
> +			return pos;
> +		}
> +	}
> +	spin_unlock(&dev_priv->ipvr_ctx_lock);
> +
> +	return NULL;
> +}
> +
> +static void ipvr_unreference_buffers(struct ipvr_validate_context *context)
> +{
> +	struct ipvr_validate_buffer *entry, *next;
> +	struct drm_ipvr_gem_object *obj;
> +	struct list_head *list = &context->validate_list;
> +
> +	list_for_each_entry_safe(entry, next, list, head) {
> +		obj = entry->ipvr_gem_bo;
> +		list_del(&entry->head);
> +		drm_gem_object_unreference_unlocked(&obj->base);
> +		context->used_buffers--;
> +	}
> +}
> +
> +static int ipvr_update_buffers(struct drm_file *file_priv,
> +					struct ipvr_validate_context *context,
> +					u64 buffer_list,
> +					int count)
> +{
> +	struct ipvr_validate_buffer *entry;
> +	struct drm_ipvr_gem_exec_object __user *val_arg
> +		= (struct drm_ipvr_gem_exec_object __user *)(uintptr_t)buffer_list;
> +
> +	if (list_empty(&context->validate_list))
> +		return 0;
> +
> +	list_for_each_entry(entry, &context->validate_list, head) {
> +		if (!val_arg) {
> +			IPVR_DEBUG_WARN("unexpected end of val_arg list!!!\n");
> +			return -EINVAL;
> +		}
> +		if (unlikely(copy_to_user(val_arg, &entry->val_req,
> +					    sizeof(entry->val_req)))) {
> +			IPVR_ERROR("copy_to_user fault.\n");
> +			return -EFAULT;
> +		}
> +		val_arg++;
> +	}
> +	return 0;
> +}
> +
> +static int ipvr_reference_buffers(struct drm_file *file_priv,
> +					struct ipvr_validate_context *context,
> +					u64 buffer_list,
> +					int count)
> +{
> +	struct drm_device *dev = file_priv->minor->dev;
> +	struct drm_ipvr_gem_exec_object __user *val_arg
> +		= (struct drm_ipvr_gem_exec_object __user *)(uintptr_t)buffer_list;
> +	struct ipvr_validate_buffer *item;
> +	struct drm_ipvr_gem_object *obj;
> +	int ret = 0;
> +	int i = 0;
> +
> +	for (i = 0; i < count; ++i) {
> +		if (unlikely(context->used_buffers >= IPVR_NUM_VALIDATE_BUFFERS)) {
> +			IPVR_ERROR("Too many buffers on validate list.\n");
> +			ret = -EINVAL;
> +			goto out_err;
> +		}
> +		item = &context->buffers[context->used_buffers];
> +		if (unlikely(copy_from_user(&item->val_req, val_arg,
> +					    sizeof(item->val_req)) != 0)) {
> +			IPVR_ERROR("copy_from_user fault.\n");
> +			ret = -EFAULT;
> +			goto out_err;
> +		}
> +		INIT_LIST_HEAD(&item->head);
> +		obj = to_ipvr_bo(drm_gem_object_lookup(dev, file_priv,
> +						item->val_req.handle));
> +		if (!obj) {
> +			IPVR_ERROR("cannot find obj for handle %u at position %d.\n",
> +				item->val_req.handle, i);
> +			ret = -ENOENT;
> +			goto out_err;
> +		}
> +		item->ipvr_gem_bo = obj;
> +
> +		list_add_tail(&item->head, &context->validate_list);
> +		context->used_buffers++;
> +
> +		val_arg++;
> +	}
> +
> +	return 0;
> +
> +out_err:
> +	ipvr_unreference_buffers(context);
> +	return ret;
> +}
> +
> +static int ipvr_fixup_reloc_entries(struct drm_device *dev,
> +					struct drm_file *filp,
> +					struct ipvr_validate_buffer *val_obj)
> +{
> +	int i, ret;
> +	u64 mmu_offset;
> +	struct drm_ipvr_gem_object *obj, *target_obj;
> +	struct drm_ipvr_gem_exec_object *exec_obj = &val_obj->val_req;
> +	struct drm_ipvr_gem_relocation_entry __user *reloc_entries =
> +		(struct drm_ipvr_gem_relocation_entry __user *)
> +			(uintptr_t)exec_obj->relocs_ptr;
> +	struct drm_ipvr_gem_relocation_entry local_reloc_entry;
> +
> +	obj = val_obj->ipvr_gem_bo;
> +	if (!obj)
> +		return -ENOENT;
> +
> +	/* todo: check write access */
> +
> +	/* overwrite user content and update relocation entries */
> +	mmu_offset = ipvr_gem_object_mmu_offset(obj);
> +	if (mmu_offset != exec_obj->offset) {
> +		exec_obj->offset = mmu_offset;
> +		IPVR_DEBUG_GENERAL("Fixup BO %u offset to 0x%llx\n",
> +			exec_obj->handle, exec_obj->offset);
> +	}
> +	for (i = 0; i < exec_obj->relocation_count; ++i) {
> +		if (unlikely(copy_from_user(&local_reloc_entry, &reloc_entries[i],
> +					    sizeof(local_reloc_entry)) != 0)) {
> +			IPVR_ERROR("copy_from_user fault.\n");
> +			return -EFAULT;
> +		}
> +		target_obj = to_ipvr_bo(drm_gem_object_lookup(dev, filp,
> +					local_reloc_entry.target_handle));
> +		if (!target_obj) {
> +			IPVR_ERROR("cannot find obj for handle %u at position %d.\n",
> +				local_reloc_entry.target_handle, i);
> +			return -ENOENT;
> +		}
> +		ret = ipvr_gem_object_apply_reloc(obj,
> +				local_reloc_entry.offset,
> +				local_reloc_entry.delta +
> +					ipvr_gem_object_mmu_offset(target_obj));
> +		if (ret) {
> +			IPVR_ERROR("Failed applying reloc: %d\n", ret);
> +			drm_gem_object_unreference_unlocked(&target_obj->base);
> +			return ret;
> +		}
> +		if (unlikely(copy_to_user(&reloc_entries[i], &local_reloc_entry,
> +						sizeof(local_reloc_entry)) != 0)) {
> +			IPVR_DEBUG_WARN("copy_to_user fault.\n");
> +		}
> +		IPVR_DEBUG_GENERAL("Fixup offset %llx in BO %u to 0x%lx\n",
> +			local_reloc_entry.offset, exec_obj->handle,
> +			local_reloc_entry.delta +
> +				ipvr_gem_object_mmu_offset(target_obj));
> +		drm_gem_object_unreference_unlocked(&target_obj->base);
> +	}
> +	return 0;
> +}
> +
> +static int ipvr_fixup_relocs(struct drm_device *dev,
> +					struct drm_file *filp,
> +					struct ipvr_validate_context *context)
> +{
> +	int ret;
> +	struct ipvr_validate_buffer *entry;
> +
> +	if (list_empty(&context->validate_list)) {
> +		IPVR_DEBUG_WARN("No relocs required in validate context, skip\n");
> +		return 0;
> +	}
> +
> +	list_for_each_entry(entry, &context->validate_list, head) {
> +		IPVR_DEBUG_GENERAL("Fixing up reloc for BO handle %u\n",
> +			entry->val_req.handle);
> +		ret = ipvr_fixup_reloc_entries(dev, filp, entry);
> +		if (ret) {
> +			IPVR_ERROR("Failed to fixup reloc for BO handle %u\n",
> +				entry->val_req.handle);
> +			return ret;
> +		}
> +	}
> +	return 0;
> +}
> +
> +static int ipvr_validate_buffer_list(struct drm_file *file_priv,
> +					struct ipvr_validate_context *context,
> +					bool *need_fixup_relocs,
> +					struct drm_ipvr_gem_object **cmd_buffer)
> +{
> +	struct ipvr_validate_buffer *entry;
> +	struct drm_ipvr_gem_object *obj;
> +	struct list_head *list = &context->validate_list;
> +	int ret = 0;
> +	u64 real_mmu_offset;
> +
> +	list_for_each_entry(entry, list, head) {
> +		obj = entry->ipvr_gem_bo;
> +		/*
> +		 * Validate that the bo is located in the MMU space and that
> +		 * its presumed offset is still correct; if it is stale, the
> +		 * relocations are fixed up later via ipvr_fixup_relocs.
> +		 * The current implementation doesn't support shrink/evict,
> +		 * so the mmu offset needn't be re-validated here; that must
> +		 * be implemented once shrink/evict is supported.
> +		 */
> +		real_mmu_offset = ipvr_gem_object_mmu_offset(obj);
> +		if (IPVR_IS_ERR(real_mmu_offset))
> +			return -ENOENT;
> +		if (entry->val_req.offset != real_mmu_offset) {
> +			IPVR_DEBUG_GENERAL("BO %u offset doesn't match MMU, need fixup reloc\n",
> +				entry->val_req.handle);
> +			*need_fixup_relocs = true;
> +		}
> +		if (entry->val_req.flags & IPVR_EXEC_OBJECT_SUBMIT) {
> +			if (*cmd_buffer != NULL) {
> +				IPVR_ERROR("Only one BO can be submitted in one exec ioctl\n");
> +				return -EINVAL;
> +			}
> +			*cmd_buffer = obj;
> +		}
> +	}
> +
> +	return ret;
> +}
> +
> +/**
> + * ipvr_gem_do_execbuffer - validate and execute a command buffer
> + *
> + * @dev:     the ipvr drm device
> + * @file_priv:      the ipvr drm file pointer
> + * @args:      input argument passed from userland
> + * @vm:      ipvr address space for all the bo to bind to
> + *
> + * Returns: 0 on success, error code on failure
> + */
> +static int ipvr_gem_do_execbuffer(struct drm_device *dev,
> +					struct drm_file *file_priv,
> +					struct drm_ipvr_gem_execbuffer *args,
> +					struct ipvr_address_space *vm)
> +{
> +	drm_ipvr_private_t *dev_priv = dev->dev_private;
> +	struct ipvr_validate_context *context = &dev_priv->validate_ctx;
> +	struct ved_private *ved_priv = dev_priv->ved_private;
> +	struct drm_ipvr_gem_object *cmd_buffer = NULL;
> +	struct ipvr_context *ipvr_ctx  = NULL;
> +	int ret, ctx_id;
> +	bool need_fixup_relocs = false;
> +
> +	/* if not pass 0, use default context instead */
> +	if (args->ctx_id == 0)
> +		ctx_id = dev_priv->default_ctx.ctx_id;
> +	else
> +		ctx_id = args->ctx_id;
> +
> +	IPVR_DEBUG_GENERAL("try to find ctx according to ctx_id %d.\n", ctx_id);
> +
> +	/* we're already in struct_mutex lock */
> +	ipvr_ctx = (struct ipvr_context *)
> +			idr_find(&dev_priv->ipvr_ctx_idr, ctx_id);
> +	if (!ipvr_ctx) {
> +		IPVR_DEBUG_WARN("video ctx is not found.\n");
> +		return -ENOENT;
> +	}
> +
> +	IPVR_DEBUG_GENERAL("reference all buffers passed through buffer_list.\n");
> +	ret = ipvr_reference_buffers(file_priv, context,
> +				args->buffers_ptr, args->buffer_count);
> +	if (unlikely(ret)) {
> +		IPVR_DEBUG_WARN("reference buffer failed: %d.\n", ret);
> +		return ret;
> +	}
> +
> +	IPVR_DEBUG_GENERAL("reserve all buffers to make them not accessed "
> +			"by other threads.\n");
> +	ret = ipvr_reserve_buffers(&context->validate_list);
> +	if (unlikely(ret)) {
> +		IPVR_ERROR("reserve buffers failed.\n");
> +		/* -EBUSY or -ERESTARTSYS */
> +		goto out_unref_buf;
> +	}
> +
> +	IPVR_DEBUG_GENERAL("validate buffer list, mainly check "
> +			"the bo mmu offset.\n");
> +	ret = ipvr_validate_buffer_list(file_priv, context,
> +				&need_fixup_relocs, &cmd_buffer);
> +	if (unlikely(ret)) {
> +		IPVR_ERROR("validate buffers failed: %d.\n", ret);
> +		goto out_backoff_reserv;
> +	}
> +
> +	if (unlikely(cmd_buffer == NULL)) {
> +		IPVR_ERROR("No cmd BO found.\n");
> +		ret = -EINVAL;
> +		goto out_backoff_reserv;
> +	}
> +
> +	if (unlikely(need_fixup_relocs)) {
> +		ret = ipvr_fixup_relocs(dev, file_priv, context);
> +		if (ret) {
> +			IPVR_ERROR("fixup relocs failed.\n");
> +			goto out_backoff_reserv;
> +		}
> +	}
> +
> +	/* check context id and type; only VED is supported currently */
> +	if (ipvr_ctx->ctx_type == IPVR_CONTEXT_TYPE_VED) {
> +		/* fixme: should support non-zero start_offset */
> +		if (unlikely(args->exec_start_offset != 0)) {
> +			IPVR_ERROR("Unsupported exec_start_offset %u\n",
> +				args->exec_start_offset);
> +			ret = -EINVAL;
> +			goto out_backoff_reserv;
> +		}
> +
> +		ret = mutex_lock_interruptible(&ved_priv->ved_mutex);
> +		if (unlikely(ret)) {
> +			IPVR_ERROR("Failed to get VED mutex: %d\n", ret);
> +			/* -EINTR */
> +			goto out_backoff_reserv;
> +		}
> +
> +		IPVR_DEBUG_GENERAL("parse cmd buffer and send to VED.\n");
> +		ret = ved_cmdbuf_video(ved_priv, cmd_buffer,
> +				args->exec_len, ipvr_ctx);
> +		if (unlikely(ret)) {
> +			IPVR_ERROR("ved_cmdbuf_video returns %d.\n", ret);
> +			/* -EINVAL, -ENOMEM, -EFAULT, -EBUSY */
> +			mutex_unlock(&ved_priv->ved_mutex);
> +			goto out_backoff_reserv;
> +		}
> +
> +		mutex_unlock(&ved_priv->ved_mutex);
> +	}
> +
> +	/**
> +	 * update mmu_offsets and fence fds to user
> +	 */
> +	ret = ipvr_update_buffers(file_priv, context,
> +				args->buffers_ptr, args->buffer_count);
> +	if (unlikely(ret)) {
> +		IPVR_DEBUG_WARN("ipvr_update_buffers returns error %d.\n", ret);
> +		ret = 0;
> +	}
> +
> +out_backoff_reserv:
> +	IPVR_DEBUG_GENERAL("unreserve buffer list.\n");
> +	ipvr_backoff_reservation(&context->validate_list);
> +out_unref_buf:
> +	IPVR_DEBUG_GENERAL("unref bufs which are referenced during bo lookup.\n");
> +	ipvr_unreference_buffers(context);
> +	return ret;
> +}
> +
> +/**
> + * ipvr_gem_execbuffer_ioctl - ioctl entry for DRM_IPVR_GEM_EXECBUFFER
> + *
> + * Returns: 0 on success, error code on failure
> + */
> +int ipvr_gem_execbuffer_ioctl(struct drm_device *dev, void *data,
> +				struct drm_file *file_priv)
> +{
> +	struct drm_ipvr_private *dev_priv = dev->dev_private;
> +	struct drm_ipvr_gem_execbuffer *args = data;
> +	int ret;
> +	struct ipvr_validate_context *context = &dev_priv->validate_ctx;
> +
> +	ret = mutex_lock_interruptible(&dev->struct_mutex);
> +	if (ret)
> +		return ret;
> +
> +	if (!context || !context->buffers) {
> +		ret = -EINVAL;
> +		goto out;
> +	}
> +
> +	context->used_buffers = 0;
> +
> +	if (args->buffer_count < 1 ||
> +		args->buffer_count >
> +			(UINT_MAX / sizeof(struct ipvr_validate_buffer))) {
> +		IPVR_ERROR("invalid buffer count %d.\n", args->buffer_count);
> +		ret = -EINVAL;
> +		goto out;
> +	}
> +
> +	trace_ipvr_execbuffer(args);
> +	ret = ipvr_gem_do_execbuffer(dev, file_priv, args,
> +				    &dev_priv->addr_space);
> +out:
> +	mutex_unlock(&dev->struct_mutex);
> +	return ret;
> +}
> diff --git a/drivers/gpu/drm/ipvr/ipvr_exec.h
> b/drivers/gpu/drm/ipvr/ipvr_exec.h
> new file mode 100644
> index 0000000..cd174a8
> --- /dev/null
> +++ b/drivers/gpu/drm/ipvr/ipvr_exec.h
> @@ -0,0 +1,57 @@
> +/**************************************************************************
> + * ipvr_exec.h: IPVR header file for command buffer execution
> + *
> + * Copyright (c) 2014 Intel Corporation, Hillsboro, OR, USA
> + * All Rights Reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; if not, write to the Free Software Foundation, Inc.,
> + * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
> + *
> + * Authors:
> + *    Fei Jiang <fei.jiang@xxxxxxxxx>
> + *    Yao Cheng <yao.cheng@xxxxxxxxx>
> + *
> + **************************************************************************/
> +
> +#ifndef _IPVR_EXEC_H_
> +#define _IPVR_EXEC_H_
> +
> +#include "ipvr_drv.h"
> +#include "ipvr_drm.h"
> +#include "ipvr_gem.h"
> +#include "ipvr_fence.h"
> +
> +struct drm_ipvr_private;
> +
> +#define IPVR_NUM_VALIDATE_BUFFERS 2048
> +#define IPVR_MAX_RELOC_PAGES 1024
> +
> +struct ipvr_validate_buffer {
> +	struct drm_ipvr_gem_exec_object val_req;
> +	struct list_head head;
> +	struct drm_ipvr_gem_object *ipvr_gem_bo;
> +	struct ipvr_fence *old_fence;
> +};
> +
> +int ipvr_bo_reserve(struct drm_ipvr_gem_object *obj,
> +			bool interruptible, bool no_wait);
> +
> +void ipvr_bo_unreserve(struct drm_ipvr_gem_object *obj);
> +
> +struct ipvr_context*
> +ipvr_find_ctx_with_fence(struct drm_ipvr_private *dev_priv, u16 fence);
> +
> +void ipvr_set_tile(struct drm_ipvr_private *dev_priv,
> +			u8 tiling_scheme, u8 tiling_stride);
> +
> +#endif
> diff --git a/drivers/gpu/drm/ipvr/ipvr_fence.c
> b/drivers/gpu/drm/ipvr/ipvr_fence.c
> new file mode 100644
> index 0000000..ef8212a
> --- /dev/null
> +++ b/drivers/gpu/drm/ipvr/ipvr_fence.c
> @@ -0,0 +1,487 @@
> +/**************************************************************************
> + * ipvr_fence.c: IPVR fence handling to track command execution status
> + *
> + * Copyright (c) 2014 Intel Corporation, Hillsboro, OR, USA
> + * All Rights Reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; if not, write to the Free Software Foundation, Inc.,
> + * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
> + *
> + * Authors:
> + *    Fei Jiang <fei.jiang@xxxxxxxxx>
> + *    Yao Cheng <yao.cheng@xxxxxxxxx>
> + *
> + **************************************************************************/
> +
> +#include "ipvr_fence.h"
> +#include "ipvr_exec.h"
> +#include "ipvr_bo.h"
> +#include "ipvr_trace.h"
> +#include "ved_reg.h"
> +#include "ved_fw.h"
> +#include "ved_cmd.h"
> +#include <linux/debugfs.h>
> +#include <linux/export.h>
> +#include <linux/file.h>
> +#include <linux/fs.h>
> +#include <linux/kernel.h>
> +#include <linux/poll.h>
> +#include <linux/sched.h>
> +#include <linux/seq_file.h>
> +#include <linux/slab.h>
> +#include <linux/uaccess.h>
> +#include <linux/anon_inodes.h>
> +
> +/**
> + * ipvr_fence_create - create and init a fence
> + *
> + * @dev_priv: drm_ipvr_private pointer
> + *
> + * Create a fence; the sequence number is handed to the hardware through a
> + * msg and signaled back by the VED on completion.
> + * Returns the fence pointer on success, ERR_PTR on failure.
> + */
> +struct ipvr_fence* __must_check
> +ipvr_fence_create(struct drm_ipvr_private *dev_priv)
> +{
> +	struct ipvr_fence *fence;
> +	unsigned long irq_flags;
> +	u16 old_seq;
> +	struct ved_private *ved_priv;
> +
> +	ved_priv = dev_priv->ved_private;
> +
> +	fence = kzalloc(sizeof(struct ipvr_fence), GFP_KERNEL);
> +	if (!fence) {
> +		fence = ERR_PTR(-ENOMEM);
> +		goto out;
> +	}
> +
> +	kref_init(&fence->kref);
> +	fence->dev_priv = dev_priv;
> +
> +	spin_lock_irqsave(&dev_priv->fence_drv.fence_lock, irq_flags);
> +	/* cmds in one batch use different fence value */
> +	old_seq = dev_priv->fence_drv.sync_seq;
> +	dev_priv->fence_drv.sync_seq = dev_priv->last_seq++;
> +	dev_priv->fence_drv.sync_seq <<= 4;
> +	fence->seq = dev_priv->fence_drv.sync_seq;
> +
> +	spin_unlock_irqrestore(&dev_priv->fence_drv.fence_lock, irq_flags);
> +
> +	kref_get(&fence->kref);
> +	IPVR_DEBUG_GENERAL("fence is created and its seq is %u (0x%04x).\n",
> +		fence->seq, fence->seq);
> +out:
> +	return fence;
> +}
> +
> +/**
> + * ipvr_fence_destroy - destroy a fence
> + *
> + * @kref: fence kref
> + *
> + * Frees the fence object once the last reference is dropped.
> + */
> +static void ipvr_fence_destroy(struct kref *kref)
> +{
> +	struct ipvr_fence *fence;
> +
> +	fence = container_of(kref, struct ipvr_fence, kref);
> +	kfree(fence);
> +}
> +
> +/**
> + * ipvr_fence_process - process a fence
> + *
> + * @dev_priv: drm_ipvr_private pointer
> + * @seq: indicate the fence seq has been signaled
> + * @err: indicate if err happened, for future use
> + *
> + * Checks the current fence value and wakes the fence queue
> + * if the sequence number has increased.
> + */
> +void ipvr_fence_process(struct drm_ipvr_private *dev_priv, u16 seq, u8 err)
> +{
> +	int signaled_seq_int;
> +	u16 signaled_seq;
> +	u16 last_emitted;
> +
> +	signaled_seq_int = atomic_read(&dev_priv->fence_drv.signaled_seq);
> +	signaled_seq = (u16)signaled_seq_int;
> +	last_emitted = dev_priv->fence_drv.sync_seq;
> +
> +	if (ipvr_seq_after(seq, last_emitted)) {
> +		IPVR_DEBUG_WARN("seq error, seq is %u, signaled_seq is %u, "
> +				"last_emitted is %u.\n",
> +				seq, signaled_seq, last_emitted);
> +		return;
> +	}
> +	if (ipvr_seq_after(seq, signaled_seq)) {
> +		atomic_xchg(&dev_priv->fence_drv.signaled_seq, seq);
> +		dev_priv->fence_drv.last_activity = jiffies;
> +		IPVR_DEBUG_GENERAL("signaled seq is updated to %u.\n", seq);
> +		wake_up_all(&dev_priv->fence_queue);
> +	}
> +}
> +
> +/**
> + * ipvr_fence_signaled - check if a fence sequence number has signaled
> + *
> + * @dev_priv: ipvr device pointer
> + * @seq: sequence number
> + *
> + * Check if the last signaled fence sequence number is >= the requested
> + * sequence number.
> + * Returns true if the fence has signaled (current fence value
> + * is >= requested value) or false if it has not (current fence
> + * value is < the requested value).
> + */
> +static bool ipvr_fence_signaled(struct drm_ipvr_private *dev_priv, u16 seq)
> +{
> +	u16 curr_seq, signaled_seq;
> +	unsigned long irq_flags;
> +	spin_lock_irqsave(&dev_priv->fence_drv.fence_lock, irq_flags);
> +	curr_seq = dev_priv->ved_private->ved_cur_seq;
> +	signaled_seq = atomic_read(&dev_priv->fence_drv.signaled_seq);
> +
> +	if (ipvr_seq_after(seq, signaled_seq)) {
> +		/* poll new last sequence at least once */
> +		ipvr_fence_process(dev_priv, curr_seq, IPVR_CMD_SUCCESS);
> +		signaled_seq = atomic_read(&dev_priv->fence_drv.signaled_seq);
> +		if (ipvr_seq_after(seq, signaled_seq)) {
> +			spin_unlock_irqrestore(&dev_priv->fence_drv.fence_lock,
> +						irq_flags);
> +			return false;
> +		}
> +	}
> +	spin_unlock_irqrestore(&dev_priv->fence_drv.fence_lock, irq_flags);
> +	return true;
> +}
> +
> +/**
> + * ipvr_fence_lockup - handle a detected VED lockup
> + *
> + * @dev_priv: ipvr device pointer
> + * @fence: the fence whose wait timed out, or NULL
> + *
> + * Called when ipvr_fence_wait hits the timeout, which indicates a lockup:
> + * flush the cmd queue and mark the VED for reset.
> + * When called from ipvr_fence_wait_empty_locked, fence is NULL.
> + */
> +static void
> +ipvr_fence_lockup(struct drm_ipvr_private *dev_priv, struct ipvr_fence *fence)
> +{
> +	unsigned long irq_flags;
> +	struct ved_private *ved_priv = dev_priv->ved_private;
> +
> +	IPVR_DEBUG_WARN("timeout detected, flush queued cmd, maybe lockup.\n");
> +	IPVR_DEBUG_WARN("MSVDX_COMMS_FW_STATUS reg is 0x%x.\n",
> +			IPVR_REG_READ32(MSVDX_COMMS_FW_STATUS));
> +
> +	if (fence) {
> +		spin_lock_irqsave(&dev_priv->fence_drv.fence_lock, irq_flags);
> +		ipvr_fence_process(dev_priv, fence->seq, IPVR_CMD_LOCKUP);
> +		spin_unlock_irqrestore(&dev_priv->fence_drv.fence_lock,
> +					irq_flags);
> +	}
> +
> +	/* should behave according to ctx type in the future */
> +	ved_flush_cmd_queue(dev_priv->ved_private);
> +	ipvr_runtime_pm_put_all(dev_priv, false);
> +
> +	ved_priv->ved_needs_reset = 1;
> +}
> +
> +/**
> + * ipvr_fence_wait_seq - wait for a specific sequence number
> + *
> + * @dev_priv: ipvr device pointer
> + * @target_seq: sequence number we want to wait for
> + * @intr: use interruptible sleep
> + *
> + * Wait for the requested sequence number to be written.
> + * @intr selects whether to use interruptible (true) or non-interruptible
> + * (false) sleep when waiting for the sequence number.
> + * Returns 0 if the sequence number has passed, error for all other cases.
> + * -EDEADLK is returned when a VPU lockup has been detected.
> + */
> +static int ipvr_fence_wait_seq(struct drm_ipvr_private *dev_priv,
> +					u16 target_seq, bool intr)
> +{
> +	struct ipvr_fence_driver	*fence_drv = &dev_priv->fence_drv;
> +	unsigned long timeout, last_activity;
> +	u16 signaled_seq;
> +	int ret;
> +	unsigned long irq_flags;
> +	bool signaled;
> +	spin_lock_irqsave(&dev_priv->fence_drv.fence_lock, irq_flags);
> +
> +	while (ipvr_seq_after(target_seq,
> +			(u16)atomic_read(&fence_drv->signaled_seq))) {
> +		/* seems the fence_drv->last_activity is useless? */
> +		timeout = IPVR_FENCE_JIFFIES_TIMEOUT;
> +		signaled_seq = atomic_read(&fence_drv->signaled_seq);
> +		/* save last activity value, used to check for VPU lockups */
> +		last_activity = fence_drv->last_activity;
> +
> +		spin_unlock_irqrestore(&dev_priv->fence_drv.fence_lock,
> +					irq_flags);
> +		if (intr) {
> +			ret = wait_event_interruptible_timeout(
> +				dev_priv->fence_queue,
> +				(signaled = ipvr_fence_signaled(dev_priv,
> +							target_seq)),
> +				timeout);
> +		} else {
> +			ret = wait_event_timeout(
> +				dev_priv->fence_queue,
> +				(signaled = ipvr_fence_signaled(dev_priv,
> +							target_seq)),
> +				timeout);
> +		}
> +		spin_lock_irqsave(&dev_priv->fence_drv.fence_lock, irq_flags);
> +
> +		if (unlikely(!signaled)) {
> +			/* we were interrupted for some reason and fence
> +			 * isn't signaled yet, resume waiting until timeout  */
> +			if (unlikely(ret < 0)) {
> +				/* should return -ERESTARTSYS,
> +				 * interrupted by a signal */
> +				continue;
> +			}
> +
> +			/* check if sequence value has changed since
> +			 * last_activity */
> +			if (signaled_seq !=
> +				atomic_read(&fence_drv->signaled_seq)) {
> +				continue;
> +			}
> +
> +			if (last_activity != fence_drv->last_activity) {
> +				continue;
> +			}
> +
> +			/* lockup happened; better to have a reg to check */
> +			IPVR_DEBUG_WARN("VPU lockup (waiting for 0x%x, last "
> +					"signaled fence id 0x%x).\n",
> +					target_seq, signaled_seq);
> +
> +			/* change last activity so nobody else
> +			 * thinks there is a lockup */
> +			fence_drv->last_activity = jiffies;
> +			spin_unlock_irqrestore(&dev_priv->fence_drv.fence_lock,
> +					irq_flags);
> +			return -EDEADLK;
> +		}
> +	}
> +	spin_unlock_irqrestore(&dev_priv->fence_drv.fence_lock, irq_flags);
> +	return 0;
> +}
> +
> +/**
> + * ipvr_fence_wait - wait for a fence to signal
> + *
> + * @fence: ipvr fence object
> + * @intr: use interruptible sleep
> + * @no_wait: if true, return -EBUSY instead of sleeping
> + *
> + * Wait for the requested fence to signal.
> + * @intr selects whether to use interruptible (true) or non-interruptible
> + * (false) sleep when waiting for the fence.
> + * Returns 0 if the fence has passed, error for all other cases.
> + */
> +int ipvr_fence_wait(struct ipvr_fence *fence, bool intr, bool no_wait)
> +{
> +	int ret;
> +	struct drm_ipvr_private *dev_priv;
> +
> +	if (fence == NULL || fence->seq == IPVR_FENCE_SIGNALED_SEQ) {
> +		IPVR_DEBUG_GENERAL("fence is NULL or has been signaled.\n");
> +		return 0;
> +	}
> +	dev_priv = fence->dev_priv;
> +
> +	IPVR_DEBUG_GENERAL("wait fence seq %u, last signaled seq is %d, "
> +			"last emitted seq is %u.\n", fence->seq,
> +			atomic_read(&dev_priv->fence_drv.signaled_seq),
> +			dev_priv->fence_drv.sync_seq);
> +	if (!no_wait)
> +		trace_ipvr_fence_wait_begin(fence,
> +			atomic_read(&dev_priv->fence_drv.signaled_seq),
> +			dev_priv->fence_drv.sync_seq);
> +
> +	if (ipvr_fence_signaled(dev_priv, fence->seq)) {
> +		IPVR_DEBUG_GENERAL("fence has been signaled.\n");
> +		/*
> +		 * compared with ttm_bo_wait, no need to create a tmp_obj;
> +		 * it would be better to also set bo->fence = NULL here
> +		 */
> +		if (!no_wait)
> +			trace_ipvr_fence_wait_end(fence,
> +				atomic_read(&dev_priv->fence_drv.signaled_seq),
> +				dev_priv->fence_drv.sync_seq);
> +		fence->seq = IPVR_FENCE_SIGNALED_SEQ;
> +		ipvr_fence_unref(&fence);
> +		return 0;
> +	}
> +
> +	if (no_wait)
> +		return -EBUSY;
> +
> +	ret = ipvr_fence_wait_seq(dev_priv, fence->seq, intr);
> +	if (ret) {
> +		if (ret == -EDEADLK) {
> +			trace_ipvr_fence_wait_lockup(fence,
> +				atomic_read(&dev_priv->fence_drv.signaled_seq),
> +				dev_priv->fence_drv.sync_seq);
> +			ipvr_fence_lockup(dev_priv, fence);
> +		}
> +		return ret;
> +	}
> +	trace_ipvr_fence_wait_end(fence,
> +			atomic_read(&dev_priv->fence_drv.signaled_seq),
> +			dev_priv->fence_drv.sync_seq);
> +	fence->seq = IPVR_FENCE_SIGNALED_SEQ;
> +
> +	return 0;
> +}
> +
> +/**
> + * ipvr_fence_driver_init - init the fence driver
> + *
> + * @dev_priv: ipvr device pointer
> + *
> + * Init the fence driver, will not fail
> + */
> +void ipvr_fence_driver_init(struct drm_ipvr_private *dev_priv)
> +{
> +	spin_lock_init(&dev_priv->fence_drv.fence_lock);
> +	init_waitqueue_head(&dev_priv->fence_queue);
> +	dev_priv->fence_drv.sync_seq = 0;
> +	atomic_set(&dev_priv->fence_drv.signaled_seq, 0);
> +	dev_priv->fence_drv.last_activity = jiffies;
> +	dev_priv->fence_drv.initialized = false;
> +}
> +
> +/**
> + * ipvr_fence_wait_empty_locked - wait for all fences to signal
> + *
> + * @dev_priv: ipvr device pointer
> + *
> + * Wait for all fences to be signaled.
> + */
> +void ipvr_fence_wait_empty_locked(struct drm_ipvr_private *dev_priv)
> +{
> +	u16 seq;
> +
> +	seq = dev_priv->fence_drv.sync_seq;
> +
> +	while (1) {
> +		int ret;
> +		ret = ipvr_fence_wait_seq(dev_priv, seq, false);
> +		if (ret == 0) {
> +			return;
> +		} else if (ret == -EDEADLK) {
> +			ipvr_fence_lockup(dev_priv, NULL);
> +			IPVR_DEBUG_WARN("Lockup found waiting for seq %d.\n",
> +					seq);
> +			return;
> +		} else {
> +			continue;
> +		}
> +	}
> +}
> +
> +/**
> + * ipvr_fence_driver_fini - tear down the fence driver
> + *
> + * @dev_priv: ipvr device pointer
> + *
> + * Wait for all outstanding fences, then tear down the fence driver.
> + */
> +void ipvr_fence_driver_fini(struct drm_ipvr_private *dev_priv)
> +{
> +	if (!dev_priv->fence_drv.initialized)
> +		return;
> +	ipvr_fence_wait_empty_locked(dev_priv);
> +	wake_up_all(&dev_priv->fence_queue);
> +	dev_priv->fence_drv.initialized = false;
> +}
> +
> +/**
> + * ipvr_fence_ref - take a ref on a fence
> + *
> + * @fence: fence object
> + *
> + * Take a reference on a fence.
> + * Returns the fence.
> + */
> +struct ipvr_fence *ipvr_fence_ref(struct ipvr_fence *fence)
> +{
> +	kref_get(&fence->kref);
> +	return fence;
> +}
> +
> +/**
> + * ipvr_fence_unref - remove a ref on a fence
> + *
> + * @fence: ipvr fence object
> + *
> + * Remove a reference on a fence; if the refcount drops to 0, destroy the fence.
> + */
> +void ipvr_fence_unref(struct ipvr_fence **fence)
> +{
> +	struct ipvr_fence *tmp = *fence;
> +
> +	*fence = NULL;
> +	if (tmp) {
> +		kref_put(&tmp->kref, &ipvr_fence_destroy);
> +	}
> +}
> +
> +/**
> + * ipvr_fence_buffer_objects - bind fence to buffer list
> + *
> + * @list: validation buffer list
> + * @fence: ipvr fence object
> + *
> + * bind a fence to all obj in the validation list
> + */
> +void
> +ipvr_fence_buffer_objects(struct list_head *list, struct ipvr_fence *fence)
> +{
> +	struct ipvr_validate_buffer *entry;
> +	struct drm_ipvr_gem_object *obj;
> +
> +	if (list_empty(list))
> +		return;
> +
> +	list_for_each_entry(entry, list, head) {
> +		obj = entry->ipvr_gem_bo;
> +		/* do not update fence if val_req flags specify so */
> +		if (entry->val_req.flags & IPVR_EXEC_OBJECT_NEED_FENCE) {
> +			entry->old_fence = obj->fence;
> +			obj->fence = ipvr_fence_ref(fence);
> +			if (entry->old_fence)
> +				ipvr_fence_unref(&entry->old_fence);
> +		} else {
> +			IPVR_DEBUG_GENERAL("obj 0x%lx marked as non-fence\n",
> +				ipvr_gem_object_mmu_offset(obj));
> +		}
> +		ipvr_bo_unreserve(obj);
> +	}
> +}
> diff --git a/drivers/gpu/drm/ipvr/ipvr_fence.h
> b/drivers/gpu/drm/ipvr/ipvr_fence.h
> new file mode 100644
> index 0000000..91c1bc6
> --- /dev/null
> +++ b/drivers/gpu/drm/ipvr/ipvr_fence.h
> @@ -0,0 +1,72 @@
> +/**************************************************************************
> + * ipvr_fence.h: IPVR header file for fence handling
> + *
> + * Copyright (c) 2014 Intel Corporation, Hillsboro, OR, USA
> + * All Rights Reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; if not, write to the Free Software Foundation, Inc.,
> + * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
> + *
> + * Authors:
> + *    Fei Jiang <fei.jiang@xxxxxxxxx>
> + *    Yao Cheng <yao.cheng@xxxxxxxxx>
> + *
> + **************************************************************************/
> +
> +#ifndef _IPVR_FENCE_H_
> +#define _IPVR_FENCE_H_
> +
> +#include "ipvr_drv.h"
> +
> +/* ipvr_seq_after(a, b) returns true if seq a is after seq b. */
> +#define ipvr_seq_after(a, b) \
> +	(typecheck(u16, a) && \
> +	 typecheck(u16, b) && \
> +	 ((s16)(a - b) > 0))
> +
> +enum ipvr_cmd_status {
> +	IPVR_CMD_SUCCESS,
> +	IPVR_CMD_FAILED,
> +	IPVR_CMD_LOCKUP,
> +	IPVR_CMD_SKIP
> +};
> +
> +#define IPVR_FENCE_JIFFIES_TIMEOUT		(HZ / 2)
> +/* fence seq are set to this number when signaled */
> +#define IPVR_FENCE_SIGNALED_SEQ		0
> +
> +struct ipvr_fence {
> +	struct drm_ipvr_private *dev_priv;
> +	struct kref kref;
> +	/* protected by dev_priv->fence_drv.fence_lock */
> +	u16 seq;
> +	char name[32];
> +};
> +
> +int ipvr_fence_wait(struct ipvr_fence *fence, bool intr, bool no_wait);
> +
> +void ipvr_fence_process(struct drm_ipvr_private *dev_priv, u16 seq, u8 err);
> +
> +void ipvr_fence_driver_init(struct drm_ipvr_private *dev_priv);
> +
> +void ipvr_fence_driver_fini(struct drm_ipvr_private *dev_priv);
> +
> +struct ipvr_fence* __must_check
> +ipvr_fence_create(struct drm_ipvr_private *dev_priv);
> +
> +void ipvr_fence_buffer_objects(struct list_head *list,
> +			struct ipvr_fence *fence);
> +
> +void ipvr_fence_unref(struct ipvr_fence **fence);
> +
> +void ipvr_fence_wait_empty_locked(struct drm_ipvr_private *dev_priv);
> +
> +#endif
> diff --git a/drivers/gpu/drm/ipvr/ipvr_gem.c
> b/drivers/gpu/drm/ipvr/ipvr_gem.c
> new file mode 100644
> index 0000000..94c266a
> --- /dev/null
> +++ b/drivers/gpu/drm/ipvr/ipvr_gem.c
> @@ -0,0 +1,297 @@
> +/**************************************************************************
> + * ipvr_gem.c: IPVR hook file for gem ioctls
> + *
> + * Copyright (c) 2014 Intel Corporation, Hillsboro, OR, USA
> + * All Rights Reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; if not, write to the Free Software Foundation, Inc.,
> + * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
> + *
> + * Authors:
> + *    Fei Jiang <fei.jiang@xxxxxxxxx>
> + *    Yao Cheng <yao.cheng@xxxxxxxxx>
> + *
> + **************************************************************************/
> +
> +#include "ipvr_gem.h"
> +#include "ipvr_bo.h"
> +#include "ipvr_fence.h"
> +#include "ipvr_exec.h"
> +#include "ipvr_trace.h"
> +#include <drm_gem.h>
> +#include <linux/slab.h>
> +#include <linux/swap.h>
> +#include <linux/pci.h>
> +#include <linux/dma-buf.h>
> +
> +#define VLV_IPVR_DEV_ID (0xf31)
> +
> +int
> +ipvr_context_create_ioctl(struct drm_device *dev,
> +			void *data, struct drm_file *file_priv)
> +{
> +	struct drm_ipvr_context_create *args = data;
> +	struct drm_ipvr_private *dev_priv = dev->dev_private;
> +	struct drm_ipvr_file_private *fpriv = file_priv->driver_priv;
> +	struct ipvr_context *ipvr_ctx = NULL;
> +	unsigned long irq_flags;
> +	int ctx_id, ret = 0;
> +
> +	IPVR_DEBUG_ENTRY("enter\n");
> +	/*
> +	 * todo: only one tiling region is supported now,
> +	 * maybe we need create additional tiling region for rotation case,
> +	 * which has different tiling stride
> +	 */
> +	if (!(args->tiling_scheme == 0 && args->tiling_stride <= 3) &&
> +		!(args->tiling_scheme == 1 && args->tiling_stride <= 2)) {
> +		IPVR_DEBUG_WARN("unsupported tiling scheme %d and stride %d.\n",
> +			args->tiling_scheme, args->tiling_stride);
> +		return -EINVAL;
> +	}
> +	/* add video decode context */
> +	ipvr_ctx = kzalloc(sizeof(struct ipvr_context), GFP_KERNEL);
> +	if (ipvr_ctx == NULL)
> +		return -ENOMEM;
> +
> +	spin_lock_irqsave(&dev_priv->ipvr_ctx_lock, irq_flags);
> +	ctx_id = idr_alloc(&dev_priv->ipvr_ctx_idr, ipvr_ctx,
> +			   IPVR_MIN_CONTEXT_ID, IPVR_MAX_CONTEXT_ID,
> +			   GFP_NOWAIT);
> +	if (ctx_id < 0) {
> +		IPVR_ERROR("idr_alloc got %d, return ENOMEM\n", ctx_id);
> +		spin_unlock_irqrestore(&dev_priv->ipvr_ctx_lock, irq_flags);
> +		return -ENOMEM;
> +	}
> +	ipvr_ctx->ctx_id = ctx_id;
> +
> +	INIT_LIST_HEAD(&ipvr_ctx->head);
> +	ipvr_ctx->ctx_type = args->ctx_type;
> +	ipvr_ctx->ipvr_fpriv = file_priv->driver_priv;
> +	list_add(&ipvr_ctx->head, &fpriv->ctx_list);
> +	spin_unlock_irqrestore(&dev_priv->ipvr_ctx_lock, irq_flags);
> +	args->ctx_id = ctx_id;
> +	IPVR_DEBUG_INIT("add ctx type 0x%x, ctx_id is %d.\n",
> +			ipvr_ctx->ctx_type, ctx_id);
> +
> +	ipvr_ctx->tiling_scheme = args->tiling_scheme;
> +	ipvr_ctx->tiling_stride = args->tiling_stride;
> +
> +	return ret;
> +}
> +
> +int
> +ipvr_context_destroy_ioctl(struct drm_device *dev,
> +			void *data, struct drm_file *file_priv)
> +{
> +	struct drm_ipvr_context_destroy *args = data;
> +	struct ved_private *ved_priv;
> +	struct drm_ipvr_private *dev_priv = dev->dev_private;
> +	struct ipvr_context *ipvr_ctx  = NULL;
> +	unsigned long irq_flags;
> +
> +	IPVR_DEBUG_ENTRY("enter\n");
> +	spin_lock_irqsave(&dev_priv->ipvr_ctx_lock, irq_flags);
> +	ved_priv = dev_priv->ved_private;
> +	if (ved_priv && (!list_empty(&ved_priv->ved_queue)
> +			|| atomic_read(&dev_priv->pending_events) > 0)) {
> +		IPVR_DEBUG_WARN("Destroying the context while pending cmds exist!\n");
> +	}
> +	ipvr_ctx = (struct ipvr_context *)
> +			idr_find(&dev_priv->ipvr_ctx_idr, args->ctx_id);
> +	if (!ipvr_ctx) {
> +		IPVR_ERROR("cannot find given context %u\n", args->ctx_id);
> +		spin_unlock_irqrestore(&dev_priv->ipvr_ctx_lock, irq_flags);
> +		return -EINVAL;
> +	}
> +
> +	if (ipvr_ctx->ipvr_fpriv != file_priv->driver_priv) {
> +		IPVR_ERROR("given context %u doesn't belong to the file\n",
> +			args->ctx_id);
> +		spin_unlock_irqrestore(&dev_priv->ipvr_ctx_lock, irq_flags);
> +		return -ENOENT;
> +	}
> +
> +	IPVR_DEBUG_GENERAL("Video: remove context %d type 0x%x\n",
> +		ipvr_ctx->ctx_id, ipvr_ctx->ctx_type);
> +	list_del(&ipvr_ctx->head);
> +	idr_remove(&dev_priv->ipvr_ctx_idr, ipvr_ctx->ctx_id);
> +	kfree(ipvr_ctx);
> +	spin_unlock_irqrestore(&dev_priv->ipvr_ctx_lock, irq_flags);
> +	return 0;
> +}
> +
> +int
> +ipvr_get_info_ioctl(struct drm_device *dev, void *data,
> +			struct drm_file *file_priv)
> +{
> +	struct drm_ipvr_private *dev_priv = dev->dev_private;
> +	struct drm_ipvr_get_info *args = data;
> +	int ret = 0;
> +
> +	IPVR_DEBUG_ENTRY("enter\n");
> +	if (!dev_priv) {
> +		IPVR_DEBUG_WARN("called with no initialization.\n");
> +		return -ENODEV;
> +	}
> +	switch (args->key) {
> +	case IPVR_DEVICE_INFO: {
> +		/* only vlv is supported now */
> +		args->value = VLV_IPVR_DEV_ID << 16;
> +		break;
> +	}
> +	default:
> +		ret = -EINVAL;
> +		break;
> +	}
> +	return ret;
> +}
> +
> +int ipvr_gem_create_ioctl(struct drm_device *dev, void *data,
> +				struct drm_file *file_priv)
> +{
> +	int ret;
> +	struct drm_ipvr_gem_create *args = data;
> +	struct drm_ipvr_gem_object *obj;
> +	struct drm_ipvr_private *dev_priv = dev->dev_private;
> +	if (args->cache_level >= IPVR_CACHE_MAX)
> +		return -EINVAL;
> +	if (args->size == 0)
> +		return -EINVAL;
> +	args->rounded_size = roundup(args->size, PAGE_SIZE);
> +	obj = ipvr_gem_create(dev_priv, args->rounded_size, args->tiling,
> +			      args->cache_level);
> +	if (IS_ERR(obj)) {
> +		ret = PTR_ERR(obj);
> +		goto out;
> +	}
> +	args->mmu_offset = ipvr_gem_object_mmu_offset(obj);
> +	/* create handle */
> +	ret = drm_gem_handle_create(file_priv, &obj->base, &args->handle);
> +	if (ret) {
> +		IPVR_ERROR("could not create gem handle: %d\n", ret);
> +		goto out_free;
> +	}
> +	/* drop reference from allocate - handle holds it now */
> +	drm_gem_object_unreference_unlocked(&obj->base);
> +	/* create map offset */
> +	ret = drm_gem_create_mmap_offset(&obj->base);
> +	if (ret) {
> +		IPVR_ERROR("could not allocate mmap offset: %d\n", ret);
> +		goto out_free;
> +	}
> +	args->map_offset = drm_vma_node_offset_addr(&obj->base.vma_node);
> +	IPVR_DEBUG_GENERAL("bo create done, handle: %u, vpu offset: 0x%llx.\n",
> +		args->handle, args->mmu_offset);
> +	return 0;
> +out_free:
> +	ipvr_gem_free_object(&obj->base);
> +out:
> +	return ret;
> +}
> +
> +int ipvr_gem_busy_ioctl(struct drm_device *dev, void *data,
> +				struct drm_file *file_priv)
> +{
> +	struct drm_ipvr_gem_busy *args = data;
> +	struct drm_ipvr_gem_object *obj;
> +	int ret = 0;
> +
> +	obj = to_ipvr_bo(drm_gem_object_lookup(dev, file_priv, args->handle));
> +	if (!obj)
> +		return -ENOENT;
> +	IPVR_DEBUG_GENERAL("Checking bo %p (fence %p seq %u) busy status\n",
> +		obj, obj->fence, obj->fence ? obj->fence->seq : 0);
> +
> +	ret = ipvr_bo_reserve(obj, true, false);
> +	if (unlikely(ret != 0))
> +		goto out;
> +	ret = ipvr_fence_wait(obj->fence, true, true);
> +	ipvr_bo_unreserve(obj);
> +
> +	args->busy = ret ? 1 : 0;
> +out:
> +	drm_gem_object_unreference_unlocked(&obj->base);
> +	return ret;
> +}
> +
> +/**
> + * ipvr_gem_wait_ioctl - implements DRM_IOCTL_IPVR_GEM_WAIT
> + * @DRM_IOCTL_ARGS: standard ioctl arguments
> + *
> + * Returns 0 if successful, else an error is returned with the remaining time
> + * in the timeout parameter.
> + *  -ETIME: object is still busy after timeout
> + *  -ERESTARTSYS: signal interrupted the wait
> + *  -ENOENT: object doesn't exist
> + * Also possible, but rare:
> + *  -EAGAIN: VPU wedged
> + *  -ENOMEM: allocation failure
> + *  -ENODEV: internal IRQ fail
> + *  -E?: the add request failed
> + *
> + * The wait ioctl with a timeout of 0 reimplements the busy ioctl. With any
> + * non-zero timeout parameter the wait ioctl will wait for the given number of
> + * nanoseconds on an object becoming unbusy. Since the wait itself does so
> + * without holding struct_mutex the object may become re-busied before this
> + * function completes. A similar but shorter race condition exists in the
> + * busy ioctl.
> + */
> +int ipvr_gem_wait_ioctl(struct drm_device *dev,
> +				void *data, struct drm_file *file_priv)
> +{
> +	struct drm_ipvr_gem_wait *args = data;
> +	struct drm_ipvr_gem_object *obj;
> +	int ret = 0;
> +
> +	IPVR_DEBUG_ENTRY("wait for buffer %u to finish execution.\n",
> +		args->handle);
> +	obj = to_ipvr_bo(drm_gem_object_lookup(dev, file_priv, args->handle));
> +	if (!obj)
> +		return -ENOENT;
> +
> +	ret = ipvr_bo_reserve(obj, true, false);
> +	if (unlikely(ret != 0))
> +		goto out;
> +
> +	ret = ipvr_fence_wait(obj->fence, true, false);
> +
> +	ipvr_bo_unreserve(obj);
> +
> +out:
> +	drm_gem_object_unreference_unlocked(&obj->base);
> +	return ret;
> +}
> +
> +int ipvr_gem_mmap_offset_ioctl(struct drm_device *dev,
> +				void *data, struct drm_file *file_priv)
> +{
> +	int ret = 0;
> +	struct drm_ipvr_gem_mmap_offset *args = data;
> +	struct drm_ipvr_gem_object *obj;
> +
> +	IPVR_DEBUG_ENTRY("getting mmap offset for BO %u.\n", args->handle);
> +	obj = to_ipvr_bo(drm_gem_object_lookup(dev, file_priv, args->handle));
> +	if (!obj)
> +		return -ENOENT;
> +
> +	/* create map offset */
> +	ret = drm_gem_create_mmap_offset(&obj->base);
> +	if (ret) {
> +		IPVR_ERROR("could not allocate mmap offset: %d\n", ret);
> +		goto out;
> +	}
> +	args->offset = drm_vma_node_offset_addr(&obj->base.vma_node);
> +out:
> +	drm_gem_object_unreference_unlocked(&obj->base);
> +	return ret;
> +}
> diff --git a/drivers/gpu/drm/ipvr/ipvr_gem.h
> b/drivers/gpu/drm/ipvr/ipvr_gem.h
> new file mode 100644
> index 0000000..525f630
> --- /dev/null
> +++ b/drivers/gpu/drm/ipvr/ipvr_gem.h
> @@ -0,0 +1,48 @@
> +/**************************************************************************
> + * ipvr_gem.h: IPVR header file for GEM ioctls
> + *
> + * Copyright (c) 2014 Intel Corporation, Hillsboro, OR, USA
> + * All Rights Reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of
> MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public
> License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along
> with
> + * this program; if not, write to the Free Software Foundation, Inc.,
> + * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
> + *
> + * Authors:
> + *    Fei Jiang <fei.jiang@xxxxxxxxx>
> + *    Yao Cheng <yao.cheng@xxxxxxxxx>
> + *
> +
> **********************************************************
> ****************/
> +
> +#ifndef _IPVR_GEM_H_
> +#define _IPVR_GEM_H_
> +
> +#include "ipvr_drv.h"
> +
> +int ipvr_context_create_ioctl(struct drm_device *dev,
> +			void *data, struct drm_file *file_priv);
> +int ipvr_context_destroy_ioctl(struct drm_device *dev,
> +			void *data, struct drm_file *file_priv);
> +int ipvr_get_info_ioctl(struct drm_device *dev,
> +			void *data, struct drm_file *file_priv);
> +int ipvr_gem_execbuffer_ioctl(struct drm_device *dev,
> +			void *data, struct drm_file *file_priv);
> +int ipvr_gem_busy_ioctl(struct drm_device *dev,
> +			void *data, struct drm_file *file_priv);
> +int ipvr_gem_create_ioctl(struct drm_device *dev,
> +			void *data, struct drm_file *file_priv);
> +int ipvr_gem_wait_ioctl(struct drm_device *dev,
> +			void *data, struct drm_file *file_priv);
> +int ipvr_gem_mmap_offset_ioctl(struct drm_device *dev,
> +			void *data, struct drm_file *file_priv);
> +
> +#endif
> diff --git a/drivers/gpu/drm/ipvr/ipvr_mmu.c b/drivers/gpu/drm/ipvr/ipvr_mmu.c
> new file mode 100644
> index 0000000..0f8f364
> --- /dev/null
> +++ b/drivers/gpu/drm/ipvr/ipvr_mmu.c
> @@ -0,0 +1,752 @@
> +/**************************************************************************
> + * ipvr_mmu.c: IPVR MMU handling to support VED, VEC, VSP buffer access
> + *
> + * Copyright (c) 2014 Intel Corporation, Hillsboro, OR, USA
> + * Copyright (c) Imagination Technologies Limited, UK
> + * Copyright (c) 2003 Tungsten Graphics, Inc., Cedar Park, Texas.
> + * All Rights Reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; if not, write to the Free Software Foundation, Inc.,
> + * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
> + *
> + * Authors:
> + *    Fei Jiang <fei.jiang@xxxxxxxxx>
> + *    Yao Cheng <yao.cheng@xxxxxxxxx>
> + *
> + **************************************************************************/
> +
> +#include "ipvr_mmu.h"
> +#include "ipvr_debug.h"
> +
> +/*
> + * Code for the VED MMU
> + * Assumes the system page size matches the VED page size (4 KiB);
> + * the code does not handle a page size mismatch.
> + */
> +
> +/*
> + * clflush on one processor only:
> + * clflush should apparently flush the cache line on all processors in an
> + * SMP system.
> + */
> +
> +/*
> + * kmap atomic:
> + * Usage of the slots must be completely encapsulated within a spinlock, and
> + * no other functions that may be using the locks for other purposes may be
> + * called from within the locked region.
> + * Since the slots are per processor, this will guarantee that we are the only
> + * user.
> + */
> +
> +/*
> + * PTEs and PDEs
> + */
> +#define IPVR_PDE_MASK		0x003FFFFF
> +#define IPVR_PDE_SHIFT		22
> +#define IPVR_PTE_SHIFT		12
> +#define IPVR_PTE_VALID		0x0001	/* PTE / PDE valid */
> +#define IPVR_PTE_WO			0x0002	/* Write only */
> +#define IPVR_PTE_RO			0x0004	/* Read only */
> +#define IPVR_PTE_CACHED		0x0008	/* CPU cache coherent */
> +
> +struct ipvr_mmu_pt {
> +	struct ipvr_mmu_pd *pd;
> +	u32 index;
> +	u32 count;
> +	struct page *p;
> +	u32 *v;
> +};
> +
> +struct ipvr_mmu_pd {
> +	struct ipvr_mmu_driver *driver;
> +	u32 hw_context;
> +	struct ipvr_mmu_pt **tables;
> +	struct page *p;
> +	struct page *dummy_pt;
> +	struct page *dummy_page;
> +	u32 pd_mask;
> +	u32 invalid_pde;
> +	u32 invalid_pte;
> +};
> +
> +static inline u32 ipvr_mmu_pt_index(u32 offset)
> +{
> +	return (offset >> IPVR_PTE_SHIFT) & 0x3FF;
> +}
> +
> +static inline u32 ipvr_mmu_pd_index(u32 offset)
> +{
> +	return offset >> IPVR_PDE_SHIFT;
> +}
> +
> +#if defined(CONFIG_X86)
> +static inline void ipvr_clflush(void *addr)
> +{
> +	__asm__ __volatile__("clflush (%0)\n" : : "r"(addr) : "memory");
> +}
> +
> +static inline void
> +ipvr_mmu_clflush(struct ipvr_mmu_driver *driver, void *addr)
> +{
> +	if (!driver->has_clflush)
> +		return;
> +
> +	mb();
> +	ipvr_clflush(addr);
> +	mb();
> +}
> +
> +static void
> +ipvr_mmu_page_clflush(struct ipvr_mmu_driver *driver, struct page *page)
> +{
> +	u32 clflush_add = driver->clflush_add >> PAGE_SHIFT;
> +	u32 clflush_count = PAGE_SIZE / clflush_add;
> +	int i;
> +	u8 *clf;
> +
> +	clf = kmap_atomic(page);
> +
> +	mb();
> +	for (i = 0; i < clflush_count; ++i) {
> +		ipvr_clflush(clf);
> +		clf += clflush_add;
> +	}
> +	mb();
> +
> +	kunmap_atomic(clf);
> +}
> +
> +static void ipvr_mmu_pages_clflush(struct ipvr_mmu_driver *driver,
> +				struct page *page[], int num_pages)
> +{
> +	int i;
> +
> +	if (!driver->has_clflush)
> +		return;
> +
> +	for (i = 0; i < num_pages; i++)
> +		ipvr_mmu_page_clflush(driver, *page++);
> +}
> +#else
> +
> +static inline void
> +ipvr_mmu_clflush(struct ipvr_mmu_driver *driver, void *addr)
> +{
> +}
> +
> +static void ipvr_mmu_pages_clflush(struct ipvr_mmu_driver *driver,
> +				struct page *page[], int num_pages)
> +{
> +	IPVR_DEBUG_GENERAL("Dummy ipvr_mmu_pages_clflush\n");
> +}
> +
> +#endif
> +
> +static void
> +ipvr_mmu_flush_pd_locked(struct ipvr_mmu_driver *driver, bool force)
> +{
> +	if (atomic_read(&driver->needs_tlbflush) || force) {
> +		if (!driver->dev_priv)
> +			goto out;
> +
> +		atomic_set(&driver->dev_priv->ipvr_mmu_invaldc, 1);
> +	}
> +out:
> +	atomic_set(&driver->needs_tlbflush, 0);
> +}
> +
> +static void ipvr_mmu_flush(struct ipvr_mmu_driver *driver, bool rc_prot)
> +{
> +	if (rc_prot)
> +		down_write(&driver->sem);
> +
> +	if (!driver->dev_priv)
> +		goto out;
> +
> +	atomic_set(&driver->dev_priv->ipvr_mmu_invaldc, 1);
> +
> +out:
> +	if (rc_prot)
> +		up_write(&driver->sem);
> +}
> +
> +void ipvr_mmu_set_pd_context(struct ipvr_mmu_pd *pd, u32 hw_context)
> +{
> +	ipvr_mmu_pages_clflush(pd->driver, &pd->p, 1);
> +	down_write(&pd->driver->sem);
> +	wmb();
> +	ipvr_mmu_flush_pd_locked(pd->driver, 1);
> +	pd->hw_context = hw_context;
> +	up_write(&pd->driver->sem);
> +}
> +
> +static inline unsigned long
> +ipvr_pd_addr_end(unsigned long addr, unsigned long end)
> +{
> +
> +	addr = (addr + IPVR_PDE_MASK + 1) & ~IPVR_PDE_MASK;
> +	return (addr < end) ? addr : end;
> +}
> +
> +static inline u32 ipvr_mmu_mask_pte(u32 pfn, u32 type)
> +{
> +	u32 mask = IPVR_PTE_VALID;
> +
> +	if (type & IPVR_MMU_CACHED_MEMORY)
> +		mask |= IPVR_PTE_CACHED;
> +	if (type & IPVR_MMU_RO_MEMORY)
> +		mask |= IPVR_PTE_RO;
> +	if (type & IPVR_MMU_WO_MEMORY)
> +		mask |= IPVR_PTE_WO;
> +
> +	return (pfn << PAGE_SHIFT) | mask;
> +}
> +
> +static struct ipvr_mmu_pd *__must_check
> +ipvr_mmu_alloc_pd(struct ipvr_mmu_driver *driver, u32 invalid_type)
> +{
> +	struct ipvr_mmu_pd *pd = kmalloc(sizeof(*pd), GFP_KERNEL);
> +	u32 *v;
> +	int i;
> +
> +	if (!pd)
> +		return NULL;
> +
> +	pd->p = alloc_page(GFP_DMA32);
> +	if (!pd->p)
> +		goto out_err1;
> +	pd->dummy_pt = alloc_page(GFP_DMA32);
> +	if (!pd->dummy_pt)
> +		goto out_err2;
> +	pd->dummy_page = alloc_page(GFP_DMA32);
> +	if (!pd->dummy_page)
> +		goto out_err3;
> +
> +	pd->invalid_pde =
> +		ipvr_mmu_mask_pte(page_to_pfn(pd->dummy_pt), invalid_type);
> +	pd->invalid_pte =
> +		ipvr_mmu_mask_pte(page_to_pfn(pd->dummy_page), invalid_type);
> +
> +	v = kmap(pd->dummy_pt);
> +	if (!v)
> +		goto out_err4;
> +	for (i = 0; i < (PAGE_SIZE / sizeof(u32)); ++i)
> +		v[i] = pd->invalid_pte;
> +
> +	kunmap(pd->dummy_pt);
> +
> +	v = kmap(pd->p);
> +	if (!v)
> +		goto out_err4;
> +	for (i = 0; i < (PAGE_SIZE / sizeof(u32)); ++i)
> +		v[i] = pd->invalid_pde;
> +
> +	kunmap(pd->p);
> +
> +	v = kmap(pd->dummy_page);
> +	if (!v)
> +		goto out_err4;
> +	clear_page(v);
> +	kunmap(pd->dummy_page);
> +
> +	pd->tables = vmalloc_user(sizeof(struct ipvr_mmu_pt *) * 1024);
> +	if (!pd->tables)
> +		goto out_err4;
> +
> +	pd->hw_context = -1;
> +	pd->pd_mask = IPVR_PTE_VALID;
> +	pd->driver = driver;
> +
> +	return pd;
> +
> +out_err4:
> +	__free_page(pd->dummy_page);
> +out_err3:
> +	__free_page(pd->dummy_pt);
> +out_err2:
> +	__free_page(pd->p);
> +out_err1:
> +	kfree(pd);
> +	return NULL;
> +}
> +
> +static void ipvr_mmu_free_pt(struct ipvr_mmu_pt *pt)
> +{
> +	__free_page(pt->p);
> +	kfree(pt);
> +}
> +
> +static void ipvr_mmu_free_pagedir(struct ipvr_mmu_pd *pd)
> +{
> +	struct ipvr_mmu_driver *driver = pd->driver;
> +	struct ipvr_mmu_pt *pt;
> +	int i;
> +
> +	down_write(&driver->sem);
> +	if (pd->hw_context != -1)
> +		ipvr_mmu_flush_pd_locked(driver, 1);
> +
> +	/* Should take the spinlock here, but we don't need to do that
> +	   since we have the semaphore in write mode. */
> +
> +	for (i = 0; i < 1024; ++i) {
> +		pt = pd->tables[i];
> +		if (pt)
> +			ipvr_mmu_free_pt(pt);
> +	}
> +
> +	vfree(pd->tables);
> +	__free_page(pd->dummy_page);
> +	__free_page(pd->dummy_pt);
> +	__free_page(pd->p);
> +	kfree(pd);
> +	up_write(&driver->sem);
> +}
> +
> +static struct ipvr_mmu_pt *ipvr_mmu_alloc_pt(struct ipvr_mmu_pd *pd)
> +{
> +	struct ipvr_mmu_pt *pt = kmalloc(sizeof(*pt), GFP_KERNEL);
> +	void *v;
> +	u32 clflush_add = pd->driver->clflush_add >> PAGE_SHIFT;
> +	u32 clflush_count = PAGE_SIZE / clflush_add;
> +	spinlock_t *lock = &pd->driver->lock;
> +	u8 *clf;
> +	u32 *ptes;
> +	int i;
> +
> +	if (!pt)
> +		return NULL;
> +
> +	pt->p = alloc_page(GFP_DMA32);
> +	if (!pt->p) {
> +		kfree(pt);
> +		return NULL;
> +	}
> +
> +	spin_lock(lock);
> +
> +	v = kmap_atomic(pt->p);
> +
> +	clf = (u8 *) v;
> +	ptes = (u32 *) v;
> +	for (i = 0; i < (PAGE_SIZE / sizeof(u32)); ++i)
> +		*ptes++ = pd->invalid_pte;
> +
> +
> +#if defined(CONFIG_X86)
> +	if (pd->driver->has_clflush && pd->hw_context != -1) {
> +		mb();
> +		for (i = 0; i < clflush_count; ++i) {
> +			ipvr_clflush(clf);
> +			clf += clflush_add;
> +		}
> +		mb();
> +	}
> +#endif
> +	kunmap_atomic(v);
> +
> +	spin_unlock(lock);
> +
> +	pt->count = 0;
> +	pt->pd = pd;
> +	pt->index = 0;
> +
> +	return pt;
> +}
> +
> +static struct ipvr_mmu_pt *
> +ipvr_mmu_pt_alloc_map_lock(struct ipvr_mmu_pd *pd, unsigned long addr)
> +{
> +	u32 index = ipvr_mmu_pd_index(addr);
> +	struct ipvr_mmu_pt *pt;
> +	u32 *v;
> +	spinlock_t *lock = &pd->driver->lock;
> +
> +	spin_lock(lock);
> +	pt = pd->tables[index];
> +	while (!pt) {
> +		spin_unlock(lock);
> +		pt = ipvr_mmu_alloc_pt(pd);
> +		if (!pt)
> +			return NULL;
> +		spin_lock(lock);
> +
> +		if (pd->tables[index]) {
> +			spin_unlock(lock);
> +			ipvr_mmu_free_pt(pt);
> +			spin_lock(lock);
> +			pt = pd->tables[index];
> +			continue;
> +		}
> +
> +		v = kmap_atomic(pd->p);
> +
> +		pd->tables[index] = pt;
> +		v[index] = (page_to_pfn(pt->p) << IPVR_PTE_SHIFT) | pd->pd_mask;
> +
> +		pt->index = index;
> +
> +		kunmap_atomic((void *) v);
> +
> +		if (pd->hw_context != -1) {
> +			ipvr_mmu_clflush(pd->driver, (void *) &v[index]);
> +			atomic_set(&pd->driver->needs_tlbflush, 1);
> +		}
> +	}
> +
> +	pt->v = kmap_atomic(pt->p);
> +
> +	return pt;
> +}
> +
> +static struct ipvr_mmu_pt *
> +ipvr_mmu_pt_map_lock(struct ipvr_mmu_pd *pd, unsigned long addr)
> +{
> +	u32 index = ipvr_mmu_pd_index(addr);
> +	struct ipvr_mmu_pt *pt;
> +	spinlock_t *lock = &pd->driver->lock;
> +
> +	spin_lock(lock);
> +	pt = pd->tables[index];
> +	if (!pt) {
> +		spin_unlock(lock);
> +		return NULL;
> +	}
> +
> +	pt->v = kmap_atomic(pt->p);
> +
> +	return pt;
> +}
> +
> +static void ipvr_mmu_pt_unmap_unlock(struct ipvr_mmu_pt *pt)
> +{
> +	struct ipvr_mmu_pd *pd = pt->pd;
> +	u32 *v;
> +
> +	kunmap_atomic(pt->v);
> +
> +	if (pt->count == 0) {
> +		v = kmap_atomic(pd->p);
> +
> +		v[pt->index] = pd->invalid_pde;
> +		pd->tables[pt->index] = NULL;
> +
> +		if (pd->hw_context != -1) {
> +			ipvr_mmu_clflush(pd->driver,
> +					(void *) &v[pt->index]);
> +			atomic_set(&pd->driver->needs_tlbflush, 1);
> +		}
> +
> +		kunmap_atomic(v);
> +
> +		spin_unlock(&pd->driver->lock);
> +		ipvr_mmu_free_pt(pt);
> +		return;
> +	}
> +	spin_unlock(&pd->driver->lock);
> +}
> +
> +static inline void
> +ipvr_mmu_set_pte(struct ipvr_mmu_pt *pt, unsigned long addr, u32 pte)
> +{
> +	pt->v[ipvr_mmu_pt_index(addr)] = pte;
> +}
> +
> +static inline void
> +ipvr_mmu_invalidate_pte(struct ipvr_mmu_pt *pt, unsigned long addr)
> +{
> +	pt->v[ipvr_mmu_pt_index(addr)] = pt->pd->invalid_pte;
> +}
> +
> +struct ipvr_mmu_pd *ipvr_mmu_get_default_pd(struct ipvr_mmu_driver *driver)
> +{
> +	struct ipvr_mmu_pd *pd;
> +
> +	/* down_read(&driver->sem); */
> +	pd = driver->default_pd;
> +	/* up_read(&driver->sem); */
> +
> +	return pd;
> +}
> +
> +/* Returns the physical address of the default PD, or 0 if it lies at or above 4 GiB */
> +u32 __must_check ipvr_get_default_pd_addr32(struct ipvr_mmu_driver *driver)
> +{
> +	struct ipvr_mmu_pd *pd;
> +	unsigned long pfn;
> +
> +	pd = ipvr_mmu_get_default_pd(driver);
> +	pfn = page_to_pfn(pd->p);
> +	if (pfn >= 0x00100000UL)
> +		return 0;
> +	return pfn << PAGE_SHIFT;
> +}
> +
> +void ipvr_mmu_driver_takedown(struct ipvr_mmu_driver *driver)
> +{
> +	ipvr_mmu_free_pagedir(driver->default_pd);
> +	kfree(driver);
> +}
> +
> +struct ipvr_mmu_driver * __must_check
> +ipvr_mmu_driver_init(u8 __iomem *registers, u32 invalid_type,
> +			struct drm_ipvr_private *dev_priv)
> +{
> +	struct ipvr_mmu_driver *driver;
> +
> +	driver = kmalloc(sizeof(*driver), GFP_KERNEL);
> +	if (!driver)
> +		return NULL;
> +
> +	driver->dev_priv = dev_priv;
> +
> +	driver->default_pd = ipvr_mmu_alloc_pd(driver, invalid_type);
> +	if (!driver->default_pd)
> +		goto out_err1;
> +
> +	spin_lock_init(&driver->lock);
> +	init_rwsem(&driver->sem);
> +	down_write(&driver->sem);
> +	driver->register_map = registers;
> +	atomic_set(&driver->needs_tlbflush, 1);
> +
> +	driver->has_clflush = false;
> +
> +#if defined(CONFIG_X86)
> +	if (cpu_has_clflush) {
> +		u32 tfms, misc, cap0, cap4, clflush_size;
> +
> +		/*
> +		 * clflush size is determined at kernel setup for x86_64
> +		 *  but not for i386. We have to do it here.
> +		 */
> +
> +		cpuid(0x00000001, &tfms, &misc, &cap0, &cap4);
> +		clflush_size = ((misc >> 8) & 0xff) * 8;
> +		driver->has_clflush = true;
> +		driver->clflush_add =
> +			PAGE_SIZE * clflush_size / sizeof(u32);
> +		driver->clflush_mask = driver->clflush_add - 1;
> +		driver->clflush_mask = ~driver->clflush_mask;
> +	}
> +#endif
> +
> +	up_write(&driver->sem);
> +	return driver;
> +
> +out_err1:
> +	kfree(driver);
> +	return NULL;
> +}
> +
> +#if defined(CONFIG_X86)
> +static void ipvr_mmu_flush_ptes(struct ipvr_mmu_pd *pd,
> +			unsigned long address,
> +			int num_pages,
> +			u32 desired_tile_stride,
> +			u32 hw_tile_stride)
> +{
> +	struct ipvr_mmu_pt *pt;
> +	int rows = 1;
> +	int i;
> +	unsigned long addr;
> +	unsigned long end;
> +	unsigned long next;
> +	unsigned long add;
> +	unsigned long row_add;
> +	unsigned long clflush_add = pd->driver->clflush_add;
> +	unsigned long clflush_mask = pd->driver->clflush_mask;
> +	IPVR_DEBUG_GENERAL("call x86 ipvr_mmu_flush_ptes, address is 0x%lx, "
> +			"num pages is %d.\n", address, num_pages);
> +	if (!pd->driver->has_clflush) {
> +		IPVR_DEBUG_GENERAL("call ipvr_mmu_pages_clflush.\n");
> +		ipvr_mmu_pages_clflush(pd->driver, &pd->p, num_pages);
> +		return;
> +	}
> +
> +	if (hw_tile_stride)
> +		rows = num_pages / desired_tile_stride;
> +	else
> +		desired_tile_stride = num_pages;
> +
> +	add = desired_tile_stride << PAGE_SHIFT;
> +	row_add = hw_tile_stride << PAGE_SHIFT;
> +	mb();
> +	for (i = 0; i < rows; ++i) {
> +		addr = address;
> +		end = addr + add;
> +
> +		do {
> +			next = ipvr_pd_addr_end(addr, end);
> +			pt = ipvr_mmu_pt_map_lock(pd, addr);
> +			if (!pt)
> +				continue;
> +			do {
> +				ipvr_clflush(&pt->v[ipvr_mmu_pt_index(addr)]);
> +			} while (addr += clflush_add,
> +				 (addr & clflush_mask) < next);
> +
> +			ipvr_mmu_pt_unmap_unlock(pt);
> +		} while (addr = next, next != end);
> +		address += row_add;
> +	}
> +	mb();
> +}
> +#else
> +
> +static void ipvr_mmu_flush_ptes(struct ipvr_mmu_pd *pd,
> +					unsigned long address,
> +					int num_pages,
> +					u32 desired_tile_stride,
> +					u32 hw_tile_stride)
> +{
> +	IPVR_DEBUG_GENERAL("call non-x86 ipvr_mmu_flush_ptes.\n");
> +}
> +#endif
> +
> +void ipvr_mmu_remove_pages(struct ipvr_mmu_pd *pd, unsigned long address,
> +			int num_pages, u32 desired_tile_stride,
> +			u32 hw_tile_stride)
> +{
> +	struct ipvr_mmu_pt *pt;
> +	int rows = 1;
> +	int i;
> +	unsigned long addr;
> +	unsigned long end;
> +	unsigned long next;
> +	unsigned long add;
> +	unsigned long row_add;
> +	unsigned long f_address = address;
> +
> +	if (hw_tile_stride)
> +		rows = num_pages / desired_tile_stride;
> +	else
> +		desired_tile_stride = num_pages;
> +
> +	add = desired_tile_stride << PAGE_SHIFT;
> +	row_add = hw_tile_stride << PAGE_SHIFT;
> +
> +	/* down_read(&pd->driver->sem); */
> +
> +	/* Make sure we only need to flush this processor's cache */
> +
> +	for (i = 0; i < rows; ++i) {
> +
> +		addr = address;
> +		end = addr + add;
> +
> +		do {
> +			next = ipvr_pd_addr_end(addr, end);
> +			pt = ipvr_mmu_pt_map_lock(pd, addr);
> +			if (!pt)
> +				continue;
> +			do {
> +				ipvr_mmu_invalidate_pte(pt, addr);
> +				--pt->count;
> +
> +			} while (addr += PAGE_SIZE, addr < next);
> +			ipvr_mmu_pt_unmap_unlock(pt);
> +
> +		} while (addr = next, next != end);
> +		address += row_add;
> +	}
> +	if (pd->hw_context != -1)
> +		ipvr_mmu_flush_ptes(pd, f_address, num_pages,
> +				   desired_tile_stride, hw_tile_stride);
> +
> +	/* up_read(&pd->driver->sem); */
> +
> +	if (pd->hw_context != -1)
> +		ipvr_mmu_flush(pd->driver, 0);
> +	ipvr_stat_remove_mmu_bind(pd->driver->dev_priv, num_pages << PAGE_SHIFT);
> +}
> +
> +int ipvr_mmu_insert_pages(struct ipvr_mmu_pd *pd, struct page **pages,
> +			unsigned long address, int num_pages,
> +			u32 desired_tile_stride,
> +			u32 hw_tile_stride, u32 type)
> +{
> +	struct ipvr_mmu_pt *pt;
> +	int rows = 1;
> +	int i;
> +	u32 pte;
> +	unsigned long addr;
> +	unsigned long end;
> +	unsigned long next;
> +	unsigned long add;
> +	unsigned long row_add;
> +	unsigned long f_address = address;
> +	unsigned long pfn;
> +	int ret = 0;
> +
> +	if (hw_tile_stride) {
> +		if (num_pages % desired_tile_stride != 0)
> +			return -EINVAL;
> +		rows = num_pages / desired_tile_stride;
> +	} else {
> +		desired_tile_stride = num_pages;
> +	}
> +
> +	add = desired_tile_stride << PAGE_SHIFT;
> +	row_add = hw_tile_stride << PAGE_SHIFT;
> +
> +	down_read(&pd->driver->sem);
> +
> +	for (i = 0; i < rows; ++i) {
> +
> +		addr = address;
> +		end = addr + add;
> +
> +		do {
> +			next = ipvr_pd_addr_end(addr, end);
> +			pt = ipvr_mmu_pt_alloc_map_lock(pd, addr);
> +			if (!pt) {
> +				ret = -ENOMEM;
> +				goto out;
> +			}
> +			do {
> +				pfn = page_to_pfn(*pages++);
> +				/* should be under 4GiB */
> +				if (pfn >= 0x00100000UL) {
> +					IPVR_ERROR("cannot support pfn 0x%lx\n", pfn);
> +					ret = -EINVAL;
> +					goto out;
> +				}
> +				pte = ipvr_mmu_mask_pte(pfn, type);
> +				ipvr_mmu_set_pte(pt, addr, pte);
> +				pt->count++;
> +			} while (addr += PAGE_SIZE, addr < next);
> +			ipvr_mmu_pt_unmap_unlock(pt);
> +
> +		} while (addr = next, next != end);
> +
> +		address += row_add;
> +	}
> +out:
> +	if (pd->hw_context != -1)
> +		ipvr_mmu_flush_ptes(pd, f_address, num_pages,
> +				   desired_tile_stride, hw_tile_stride);
> +
> +	up_read(&pd->driver->sem);
> +
> +	if (pd->hw_context != -1)
> +		ipvr_mmu_flush(pd->driver, 1);
> +
> +	ipvr_stat_add_mmu_bind(pd->driver->dev_priv, num_pages << PAGE_SHIFT);
> +	return ret;
> +}
> diff --git a/drivers/gpu/drm/ipvr/ipvr_mmu.h b/drivers/gpu/drm/ipvr/ipvr_mmu.h
> new file mode 100644
> index 0000000..1f524d4
> --- /dev/null
> +++ b/drivers/gpu/drm/ipvr/ipvr_mmu.h
> @@ -0,0 +1,111 @@
> +/**************************************************************************
> + * ipvr_mmu.h: IPVR header file for VED/VEC/VSP MMU handling
> + *
> + * Copyright (c) 2014 Intel Corporation, Hillsboro, OR, USA
> + * Copyright (c) Imagination Technologies Limited, UK
> + * Copyright (c) 2003 Tungsten Graphics, Inc., Cedar Park, Texas.
> + * All Rights Reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; if not, write to the Free Software Foundation, Inc.,
> + * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
> + *
> + * Authors:
> + *    Eric Anholt <eric@xxxxxxxxxx>
> + *    Fei Jiang <fei.jiang@xxxxxxxxx>
> + *    Yao Cheng <yao.cheng@xxxxxxxxx>
> + *
> + **************************************************************************/
> +
> +#ifndef _IPVR_MMU_H_
> +#define _IPVR_MMU_H_
> +
> +#include "ipvr_drv.h"
> +
> +static inline bool __must_check IPVR_IS_ERR(__force const unsigned long offset)
> +{
> +	return unlikely((offset) >= (unsigned long)-MAX_ERRNO);
> +}
> +
> +static inline long __must_check IPVR_OFFSET_ERR(__force const unsigned long offset)
> +{
> +	return (long)offset;
> +}
> +
> +static inline unsigned long __must_check IPVR_ERR_OFFSET(__force const long err)
> +{
> +	return (unsigned long)err;
> +}
> +
> +/*
> + * Memory access control flags for VPU mappings
> + */
> +#define IPVR_MMU_CACHED_MEMORY	(1 << 0)	/* CPU cache coherent */
> +#define IPVR_MMU_RO_MEMORY	(1 << 1)	/* MMU read-only memory */
> +#define IPVR_MMU_WO_MEMORY	(1 << 2)	/* MMU write-only memory */
> +
> +/*
> + * linear MMU size is 512M : 0 - 512M
> + * tiling MMU size is 512M : 512M - 1024M
> + */
> +#define IPVR_MEM_MMU_LINEAR_START	0x00000000
> +#define IPVR_MEM_MMU_LINEAR_END		0x20000000
> +#define IPVR_MEM_MMU_TILING_START	0x20000000
> +#define IPVR_MEM_MMU_TILING_END		0x40000000
> +
> +struct ipvr_mmu_pd;
> +struct ipvr_mmu_pt;
> +
> +struct ipvr_mmu_driver {
> +	/* protects driver and pd structures. Always take in read mode
> +	 * before taking the page table spinlock.
> +	 */
> +	struct rw_semaphore sem;
> +
> +	/* protects page directory and page table structures. */
> +	spinlock_t lock;
> +
> +	atomic_t needs_tlbflush;
> +
> +	u8 __iomem *register_map;
> +	struct ipvr_mmu_pd *default_pd;
> +
> +	bool has_clflush;
> +	u32 clflush_add;
> +	unsigned long clflush_mask;
> +
> +	struct drm_ipvr_private *dev_priv;
> +};
> +
> +struct ipvr_mmu_driver *__must_check ipvr_mmu_driver_init(u8 __iomem *registers,
> +			u32 invalid_type, struct drm_ipvr_private *dev_priv);
> +
> +void ipvr_mmu_driver_takedown(struct ipvr_mmu_driver *driver);
> +
> +struct ipvr_mmu_pd *
> +ipvr_mmu_get_default_pd(struct ipvr_mmu_driver *driver);
> +
> +void ipvr_mmu_set_pd_context(struct ipvr_mmu_pd *pd, u32 hw_context);
> +
> +u32 __must_check ipvr_get_default_pd_addr32(struct ipvr_mmu_driver *driver);
> +
> +int ipvr_mmu_insert_pages(struct ipvr_mmu_pd *pd, struct page **pages,
> +			unsigned long address, int num_pages,
> +			u32 desired_tile_stride, u32 hw_tile_stride, u32 type);
> +
> +void ipvr_mmu_remove_pages(struct ipvr_mmu_pd *pd,
> +			unsigned long address, int num_pages,
> +			u32 desired_tile_stride, u32 hw_tile_stride);
> +
> +#endif
> diff --git a/drivers/gpu/drm/ipvr/ipvr_trace.c b/drivers/gpu/drm/ipvr/ipvr_trace.c
> new file mode 100644
> index 0000000..91c0bda
> --- /dev/null
> +++ b/drivers/gpu/drm/ipvr/ipvr_trace.c
> @@ -0,0 +1,11 @@
> +/*
> + * Copyright © 2014 Intel Corporation
> + *
> + * Authors:
> + *    Yao Cheng <yao.cheng@xxxxxxxxx>
> + */
> +
> +#ifndef __CHECKER__
> +#define CREATE_TRACE_POINTS
> +#include "ipvr_trace.h"
> +#endif
> diff --git a/drivers/gpu/drm/ipvr/ipvr_trace.h b/drivers/gpu/drm/ipvr/ipvr_trace.h
> new file mode 100644
> index 0000000..6ea8b9a
> --- /dev/null
> +++ b/drivers/gpu/drm/ipvr/ipvr_trace.h
> @@ -0,0 +1,333 @@
> +/**************************************************************************
> + * ipvr_trace.h: IPVR header file for trace support
> + *
> + * Copyright (c) 2014 Intel Corporation, Hillsboro, OR, USA
> + * All Rights Reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; if not, write to the Free Software Foundation, Inc.,
> + * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
> + *
> + * Authors:
> + *    Fei Jiang <fei.jiang@xxxxxxxxx>
> + *    Yao Cheng <yao.cheng@xxxxxxxxx>
> + *
> + **************************************************************************/
> +
> +#if !defined(_IPVR_TRACE_H_) || defined(TRACE_HEADER_MULTI_READ)
> +#define _IPVR_TRACE_H_
> +
> +#include "ipvr_bo.h"
> +#include "ipvr_fence.h"
> +#include "ved_msg.h"
> +#include <drm/drmP.h>
> +#include <linux/stringify.h>
> +#include <linux/types.h>
> +#include <linux/tracepoint.h>
> +
> +#undef TRACE_SYSTEM
> +#define TRACE_SYSTEM ipvr
> +#define TRACE_SYSTEM_STRING __stringify(TRACE_SYSTEM)
> +#define TRACE_INCLUDE_FILE ipvr_trace
> +
> +/* object tracking */
> +
> +TRACE_EVENT(ipvr_create_object,
> +	TP_PROTO(struct drm_ipvr_gem_object *obj, u64 mmu_offset),
> +	TP_ARGS(obj, mmu_offset),
> +	TP_STRUCT__entry(
> +		__field(struct drm_ipvr_gem_object *, obj)
> +		__field(u32, size)
> +		__field(bool, tiling)
> +		__field(u32, cache_level)
> +		__field(u64, mmu_offset)
> +	),
> +	TP_fast_assign(
> +		__entry->obj = obj;
> +		__entry->size = obj->base.size;
> +		__entry->tiling = obj->tiling;
> +		__entry->cache_level = obj->cache_level;
> +		__entry->mmu_offset = mmu_offset;
> +	),
> +	TP_printk("obj=0x%p, size=%u, tiling=%u, cache=%u, mmu_offset=0x%llx",
> +		__entry->obj, __entry->size, __entry->tiling,
> +		__entry->cache_level, __entry->mmu_offset)
> +);
> +
> +TRACE_EVENT(ipvr_free_object,
> +	TP_PROTO(struct drm_ipvr_gem_object *obj),
> +	TP_ARGS(obj),
> +	TP_STRUCT__entry(
> +		__field(struct drm_ipvr_gem_object *, obj)
> +	),
> +	TP_fast_assign(
> +		__entry->obj = obj;
> +	),
> +	TP_printk("obj=0x%p", __entry->obj)
> +);
> +
> +TRACE_EVENT(ipvr_fence_wait_begin,
> +	TP_PROTO(struct ipvr_fence *fence,
> +		u32 signaled_seq,
> +		u16 sync_seq),
> +	TP_ARGS(fence, signaled_seq, sync_seq),
> +	TP_STRUCT__entry(
> +		__field(struct ipvr_fence *, fence)
> +		__field(u16, fence_seq)
> +		__field(u32, signaled_seq)
> +		__field(u16, sync_seq)
> +	),
> +	TP_fast_assign(
> +		__entry->fence = fence;
> +		__entry->fence_seq = fence->seq;
> +		__entry->signaled_seq = signaled_seq;
> +		__entry->sync_seq = sync_seq;
> +	),
> +	TP_printk("fence=%p, fence_seq=%d, signaled_seq=%d, sync_seq=%d",
> +		__entry->fence, __entry->fence_seq,
> +		__entry->signaled_seq, __entry->sync_seq)
> +);
> +
> +TRACE_EVENT(ipvr_fence_wait_end,
> +	TP_PROTO(struct ipvr_fence *fence,
> +		u32 signaled_seq,
> +		u16 sync_seq),
> +	TP_ARGS(fence, signaled_seq, sync_seq),
> +	TP_STRUCT__entry(
> +		__field(struct ipvr_fence *, fence)
> +		__field(u16, fence_seq)
> +		__field(u32, signaled_seq)
> +		__field(u16, sync_seq)
> +	),
> +	TP_fast_assign(
> +		__entry->fence = fence;
> +		__entry->fence_seq = fence->seq;
> +		__entry->signaled_seq = signaled_seq;
> +		__entry->sync_seq = sync_seq;
> +	),
> +	TP_printk("fence=%p, fence_seq=%d, signaled_seq=%d, sync_seq=%d",
> +		__entry->fence, __entry->fence_seq,
> +		__entry->signaled_seq, __entry->sync_seq)
> +);
> +
> +
> +TRACE_EVENT(ipvr_fence_wait_lockup,
> +	TP_PROTO(struct ipvr_fence *fence,
> +		u32 signaled_seq,
> +		u16 sync_seq),
> +	TP_ARGS(fence, signaled_seq, sync_seq),
> +	TP_STRUCT__entry(
> +		__field(struct ipvr_fence *, fence)
> +		__field(u16, fence_seq)
> +		__field(u32, signaled_seq)
> +		__field(u16, sync_seq)
> +	),
> +	TP_fast_assign(
> +		__entry->fence = fence;
> +		__entry->fence_seq = fence->seq;
> +		__entry->signaled_seq = signaled_seq;
> +		__entry->sync_seq = sync_seq;
> +	),
> +	TP_printk("fence=%p, fence_seq=%d, signaled_seq=%d, sync_seq=%d",
> +		__entry->fence, __entry->fence_seq,
> +		__entry->signaled_seq, __entry->sync_seq)
> +);
> +
> +TRACE_EVENT(ipvr_execbuffer,
> +	TP_PROTO(struct drm_ipvr_gem_execbuffer *exec),
> +	TP_ARGS(exec),
> +	TP_STRUCT__entry(
> +		__field(u64, buffers_ptr)
> +		__field(u32, buffer_count)
> +		__field(u32, exec_start_offset)
> +		__field(u32, exec_len)
> +		__field(u32, ctx_id)
> +	),
> +	TP_fast_assign(
> +		__entry->buffers_ptr = exec->buffers_ptr;
> +		__entry->buffer_count = exec->buffer_count;
> +		__entry->exec_start_offset = exec->exec_start_offset;
> +		__entry->exec_len = exec->exec_len;
> +		__entry->ctx_id = exec->ctx_id;
> +	),
> +	TP_printk("buffers_ptr=0x%llx, buffer_count=%u, "
> +		"exec_start_offset=0x%x, exec_len=%u, ctx_id=%d",
> +		__entry->buffers_ptr, __entry->buffer_count,
> +		__entry->exec_start_offset, __entry->exec_len,
> +		__entry->ctx_id)
> +);
> +
> +TRACE_EVENT(ved_cmd_send,
> +	TP_PROTO(u32 ctx_id, u32 cmd_id, u32 seq),
> +	TP_ARGS(ctx_id, cmd_id, seq),
> +	TP_STRUCT__entry(
> +		__field(u32, ctx_id)
> +		__field(u32, cmd_id)
> +		__field(u32, seq)
> +	),
> +	TP_fast_assign(
> +		__entry->ctx_id = ctx_id;
> +		__entry->cmd_id = cmd_id;
> +		__entry->seq = seq;
> +	),
> +	TP_printk("ctx_id=0x%08x, cmd_id=0x%08x, seq=0x%08x",
> +		__entry->ctx_id, __entry->cmd_id, __entry->seq)
> +);
> +
> +TRACE_EVENT(ved_cmd_copy,
> +	TP_PROTO(u32 ctx_id, u32 cmd_id, u32 seq),
> +	TP_ARGS(ctx_id, cmd_id, seq),
> +	TP_STRUCT__entry(
> +		__field(u32, ctx_id)
> +		__field(u32, cmd_id)
> +		__field(u32, seq)
> +	),
> +	TP_fast_assign(
> +		__entry->ctx_id = ctx_id;
> +		__entry->cmd_id = cmd_id;
> +		__entry->seq = seq;
> +	),
> +	TP_printk("ctx_id=0x%08x, cmd_id=0x%08x, seq=0x%08x",
> +		__entry->ctx_id, __entry->cmd_id, __entry->seq)
> +);
> +
> +TRACE_EVENT(ipvr_get_power,
> +	TP_PROTO(int usage, int pending),
> +	TP_ARGS(usage, pending),
> +	TP_STRUCT__entry(
> +		__field(int, usage)
> +		__field(int, pending)
> +	),
> +	TP_fast_assign(
> +		__entry->usage = usage;
> +		__entry->pending = pending;
> +	),
> +	TP_printk("power usage %d, pending events %d",
> +		__entry->usage,
> +		__entry->pending)
> +);
> +
> +TRACE_EVENT(ipvr_put_power,
> +	TP_PROTO(int usage, int pending),
> +	TP_ARGS(usage, pending),
> +	TP_STRUCT__entry(
> +		__field(int, usage)
> +		__field(int, pending)
> +	),
> +	TP_fast_assign(
> +		__entry->usage = usage;
> +		__entry->pending = pending;
> +	),
> +	TP_printk("power usage %d, pending events %d",
> +		__entry->usage,
> +		__entry->pending)
> +);
> +
> +TRACE_EVENT(ved_power_on,
> +	TP_PROTO(int freq),
> +	TP_ARGS(freq),
> +	TP_STRUCT__entry(
> +		__field(int, freq)
> +	),
> +	TP_fast_assign(
> +		__entry->freq = freq;
> +	),
> +	TP_printk("frequency %d MHz", __entry->freq)
> +);
> +
> +TRACE_EVENT(ved_power_off,
> +	TP_PROTO(int freq),
> +	TP_ARGS(freq),
> +	TP_STRUCT__entry(
> +		__field(int, freq)
> +	),
> +	TP_fast_assign(
> +		__entry->freq = freq;
> +	),
> +	TP_printk("frequency %d MHz", __entry->freq)
> +);
> +
> +TRACE_EVENT(ved_irq_completed,
> +	TP_PROTO(struct ipvr_context *ctx,
> +		struct fw_completed_msg *completed_msg),
> +	TP_ARGS(ctx, completed_msg),
> +	TP_STRUCT__entry(
> +		__field(s64, ctx_id)
> +		__field(u16, seqno)
> +		__field(u32, flags)
> +		__field(u32, vdebcr)
> +		__field(u16, start_mb)
> +		__field(u16, last_mb)
> +	),
> +	TP_fast_assign(
> +		__entry->ctx_id = ctx ? ctx->ctx_id : -1;
> +		__entry->seqno = completed_msg->header.bits.msg_fence;
> +		__entry->flags = completed_msg->flags;
> +		__entry->vdebcr = completed_msg->vdebcr;
> +		__entry->start_mb = completed_msg->mb.bits.start_mb;
> +		__entry->last_mb = completed_msg->mb.bits.last_mb;
> +	),
> +	TP_printk("ctx=%lld, seq=0x%04x, flags=0x%08x, vdebcr=0x%08x, "
> +		"mb=[%u, %u]",
> +		__entry->ctx_id,
> +		__entry->seqno,
> +		__entry->flags,
> +		__entry->vdebcr,
> +		__entry->start_mb,
> +		__entry->last_mb)
> +);
> +
> +TRACE_EVENT(ved_irq_panic,
> +	TP_PROTO(struct fw_panic_msg *panic_msg, u32 err_trig,
> +		u32 irq_status, u32 mmu_status, u32 dmac_status),
> +	TP_ARGS(panic_msg, err_trig, irq_status, mmu_status, dmac_status),
> +	TP_STRUCT__entry(
> +		__field(u16, seqno)
> +		__field(u32, fe_status)
> +		__field(u32, be_status)
> +		__field(u16, rsvd)
> +		__field(u16, last_mb)
> +		__field(u32, err_trig)
> +		__field(u32, irq_status)
> +		__field(u32, mmu_status)
> +		__field(u32, dmac_status)
> +	),
> +	TP_fast_assign(
> +		__entry->seqno = panic_msg->header.bits.msg_fence;
> +		__entry->fe_status = panic_msg->fe_status;
> +		__entry->be_status = panic_msg->be_status;
> +		__entry->rsvd = panic_msg->mb.bits.reserved2;
> +		__entry->last_mb = panic_msg->mb.bits.last_mb;
> +		__entry->err_trig = err_trig;
> +		__entry->irq_status = irq_status;
> +		__entry->mmu_status = mmu_status;
> +		__entry->dmac_status = dmac_status;
> +	),
> +	TP_printk("seq=0x%04x, status=[fe 0x%08x be 0x%08x], rsvd=0x%04x, "
> +		"last_mb=%u, err_trig=0x%08x, irq_status=0x%08x, "
> +		"mmu_status=0x%08x, dmac_status=0x%08x",
> +		__entry->seqno,
> +		__entry->fe_status,
> +		__entry->be_status,
> +		__entry->rsvd,
> +		__entry->last_mb,
> +		__entry->err_trig,
> +		__entry->irq_status,
> +		__entry->mmu_status,
> +		__entry->dmac_status)
> +);
> +
> +#endif /* _IPVR_TRACE_H_ */
> +
> + /* This part must be outside protection */
> +#undef TRACE_INCLUDE_PATH
> +#define TRACE_INCLUDE_PATH .
> +#include <trace/define_trace.h>
> diff --git a/drivers/gpu/drm/ipvr/ved_cmd.c b/drivers/gpu/drm/ipvr/ved_cmd.c
> new file mode 100644
> index 0000000..d9de33e
> --- /dev/null
> +++ b/drivers/gpu/drm/ipvr/ved_cmd.c
> @@ -0,0 +1,882 @@
> +/**************************************************************************
> + * ved_cmd.c: VED command handling between host driver and VED firmware
> + *
> + * Copyright (c) 2014 Intel Corporation, Hillsboro, OR, USA
> + * Copyright (c) Imagination Technologies Limited, UK
> + * Copyright (c) 2003 Tungsten Graphics, Inc., Cedar Park, Texas.
> + * All Rights Reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along
> + * with this program; if not, write to the Free Software Foundation, Inc.,
> + * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
> + *
> + * Authors:
> + *    Fei Jiang <fei.jiang@xxxxxxxxx>
> + *    Yao Cheng <yao.cheng@xxxxxxxxx>
> + *
> + **************************************************************************/
> +
> +#include "ipvr_gem.h"
> +#include "ipvr_mmu.h"
> +#include "ipvr_bo.h"
> +#include "ipvr_trace.h"
> +#include "ipvr_fence.h"
> +#include "ved_cmd.h"
> +#include "ved_fw.h"
> +#include "ved_msg.h"
> +#include "ved_reg.h"
> +#include "ved_pm.h"
> +#include <linux/pm_runtime.h>
> +#include <linux/io.h>
> +#include <linux/delay.h>
> +
> +#ifndef list_first_entry
> +#define list_first_entry(ptr, type, member) \
> +	list_entry((ptr)->next, type, member)
> +#endif
> +
> +int ved_mtx_send(struct ved_private *ved_priv, const void *msg)
> +{
> +	struct fw_padding_msg pad_msg;
> +	const u32 *p_msg = (u32 *)msg;
> +	struct drm_ipvr_private *dev_priv = ved_priv->dev_priv;
> +	u32 msg_num, words_free, ridx, widx, buf_size, buf_offset;
> +	int ret = 0;
> +	int i;
> +	union msg_header *header;
> +	header = (union msg_header *)msg;
> +
> +	IPVR_DEBUG_ENTRY("enter.\n");
> +
> +	/* we need clocks enabled before we touch VEC local ram,
> +	 * but fw will take care of the clock after fw is loaded
> +	 */
> +
> +	msg_num = (header->bits.msg_size + 3) / 4;
> +
> +	/* debug code for msg dump */
> +	IPVR_DEBUG_VED("MSVDX: ved_mtx_send is %dDW\n", msg_num);
> +
> +	for (i = 0; i < msg_num; i++)
> +		IPVR_DEBUG_VED("   0x%08x\n", p_msg[i]);
> +
> +	buf_size = IPVR_REG_READ32(MSVDX_COMMS_TO_MTX_BUF_SIZE) &
> +		   ((1 << 16) - 1);
> +
> +	if (msg_num > buf_size) {
> +		ret = -EINVAL;
> +		IPVR_ERROR("VED: message exceeds maximum, ret:%d\n", ret);
> +		goto out;
> +	}
> +
> +	ridx = IPVR_REG_READ32(MSVDX_COMMS_TO_MTX_RD_INDEX);
> +	widx = IPVR_REG_READ32(MSVDX_COMMS_TO_MTX_WRT_INDEX);
> +
> +	/* 0x2000 is the VEC local RAM offset */
> +	buf_offset =
> +		(IPVR_REG_READ32(MSVDX_COMMS_TO_MTX_BUF_SIZE) >> 16) + 0x2000;
> +
> +	/* message would wrap, need to send a pad message */
> +	if (widx + msg_num > buf_size) {
> +		/* Shouldn't happen for a PAD message itself */
> +		if (header->bits.msg_type == MTX_MSGID_PADDING)
> +			IPVR_DEBUG_WARN("VED: should not wrap pad msg, "
> +				"buf_size is %d, widx is %d, msg_num is %d.\n",
> +				buf_size, widx, msg_num);
> +
> +		/* If the read pointer is at zero we must wait for it to
> +		 * change, otherwise the write pointer would equal the read
> +		 * pointer, which should only happen when the buffer is empty.
> +		 *
> +		 * This will only happen if we try to overfill the queue;
> +		 * queue management should make sure it never happens in
> +		 * the first place.
> +		 */
> +		if (0 == ridx) {
> +			ret = -EINVAL;
> +			IPVR_ERROR("MSVDX: RIndex=0, ret:%d\n", ret);
> +			goto out;
> +		}
> +
> +		/* Send a pad message */
> +		pad_msg.header.bits.msg_size = (buf_size - widx) << 2;
> +		pad_msg.header.bits.msg_type = MTX_MSGID_PADDING;
> +		ved_mtx_send(ved_priv, (void *)&pad_msg);
> +		widx = IPVR_REG_READ32(MSVDX_COMMS_TO_MTX_WRT_INDEX);
> +	}
> +
> +	if (widx >= ridx)
> +		words_free = buf_size - (widx - ridx) - 1;
> +	else
> +		words_free = ridx - widx - 1;
> +
> +	if (msg_num > words_free) {
> +		ret = -EINVAL;
> +		IPVR_ERROR("MSVDX: msg_num > words_free, ret:%d\n", ret);
> +		goto out;
> +	}
> +	while (msg_num > 0) {
> +		IPVR_REG_WRITE32(*p_msg++, buf_offset + (widx << 2));
> +		msg_num--;
> +		widx++;
> +		if (buf_size == widx)
> +			widx = 0;
> +	}
> +
> +	IPVR_REG_WRITE32(widx, MSVDX_COMMS_TO_MTX_WRT_INDEX);
> +
> +	/* Make sure clocks are enabled before we kick
> +	 * but fw will take care of the clock after fw is loaded
> +	 */
> +
> +	/* signal an interrupt to let the mtx know there is a new message */
> +	IPVR_REG_WRITE32(1, MTX_KICK_INPUT_OFFSET);
> +
> +	/* Read the MSVDX register several times in case the Idle signal asserts */
> +	IPVR_REG_READ32(MSVDX_INTERRUPT_STATUS_OFFSET);
> +	IPVR_REG_READ32(MSVDX_INTERRUPT_STATUS_OFFSET);
> +	IPVR_REG_READ32(MSVDX_INTERRUPT_STATUS_OFFSET);
> +	IPVR_REG_READ32(MSVDX_INTERRUPT_STATUS_OFFSET);
> +
> +out:
> +	return ret;
> +}
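The ring accounting above (one slot kept empty so that widx == ridx unambiguously means "empty", plus a pad message when a write would wrap) is easy to get wrong; here is a standalone sketch of just that arithmetic. The helper names are illustrative, not driver symbols:

```c
#include <assert.h>
#include <stdint.h>

/* Free words in a circular ring of buf_size words; one slot is kept
 * empty so that widx == ridx unambiguously means "empty", mirroring
 * the words_free computation in ved_mtx_send(). */
static uint32_t ring_words_free(uint32_t buf_size, uint32_t ridx,
				uint32_t widx)
{
	if (widx >= ridx)
		return buf_size - (widx - ridx) - 1;
	return ridx - widx - 1;
}

/* A message of msg_num words must be preceded by a pad message if it
 * would run past the end of the ring. */
static int ring_needs_pad(uint32_t buf_size, uint32_t widx, uint32_t msg_num)
{
	return widx + msg_num > buf_size;
}
```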
> +
> +static int ved_cmd_send(struct ved_private *ved_priv, void *cmd,
> +			u32 cmd_size, struct ipvr_context *ipvr_ctx)
> +{
> +	int ret = 0;
> +	union msg_header *header;
> +	u32 cur_seq = 0xffffffff;
> +
> +	while (cmd_size > 0) {
> +		u32 cur_cmd_size, cur_cmd_id;
> +		header = (union msg_header *)cmd;
> +		cur_cmd_size = header->bits.msg_size;
> +		cur_cmd_id = header->bits.msg_type;
> +
> +		cur_seq = ((struct fw_msg_header *)cmd)->header.bits.msg_fence;
> +
> +		if (cur_seq != 0xffffffff)
> +			ipvr_ctx->cur_seq = cur_seq;
> +
> +		if (cur_cmd_size > cmd_size) {
> +			ret = -EINVAL;
> +			IPVR_ERROR("VED: cmd_size %u cur_cmd_size %u.\n",
> +				  cmd_size, cur_cmd_size);
> +			goto out;
> +		}
> +
> +		/* Send the message to h/w */
> +		trace_ved_cmd_send(ipvr_ctx->ctx_id, cur_cmd_id, cur_seq);
> +		ret = ved_mtx_send(ved_priv, cmd);
> +		if (ret) {
> +			IPVR_DEBUG_WARN("VED: ret:%d\n", ret);
> +			goto out;
> +		}
> +		cmd += cur_cmd_size;
> +		cmd_size -= cur_cmd_size;
> +		if (cur_cmd_id == MTX_MSGID_HOST_BE_OPP ||
> +			cur_cmd_id == MTX_MSGID_DEBLOCK ||
> +			cur_cmd_id == MTX_MSGID_INTRA_OOLD) {
> +			cmd += (sizeof(struct fw_deblock_msg) - cur_cmd_size);
> +			cmd_size -=
> +				(sizeof(struct fw_deblock_msg) - cur_cmd_size);
> +		}
> +	}
> +out:
> +	IPVR_DEBUG_VED("VED: ret:%d\n", ret);
> +	return ret;
> +}
> +
> +int ved_cmd_dequeue_send(struct ved_private *ved_priv)
> +{
> +	struct ved_cmd_queue *ved_cmd = NULL;
> +	int ret = 0;
> +	unsigned long irq_flags;
> +
> +	spin_lock_irqsave(&ved_priv->ved_lock, irq_flags);
> +	if (list_empty(&ved_priv->ved_queue)) {
> +		IPVR_DEBUG_VED("VED: ved cmd queue empty.\n");
> +		ved_priv->ved_busy = false;
> +		spin_unlock_irqrestore(&ved_priv->ved_lock, irq_flags);
> +		return -ENODATA;
> +	}
> +
> +	ved_cmd = list_first_entry(&ved_priv->ved_queue,
> +				     struct ved_cmd_queue, head);
> +	list_del(&ved_cmd->head);
> +	spin_unlock_irqrestore(&ved_priv->ved_lock, irq_flags);
> +
> +	IPVR_DEBUG_VED("VED: cmd queue seq is %08x.\n", ved_cmd->cmd_seq);
> +
> +	ipvr_set_tile(ved_priv->dev_priv, ved_cmd->tiling_scheme,
> +		      ved_cmd->tiling_stride);
> +
> +	ret = ved_cmd_send(ved_priv, ved_cmd->cmd,
> +			   ved_cmd->cmd_size, ved_cmd->ipvr_ctx);
> +	if (ret) {
> +		IPVR_ERROR("VED: ved_cmd_send failed.\n");
> +		ret = -EFAULT;
> +	}
> +
> +	kfree(ved_cmd->cmd);
> +	kfree(ved_cmd);
> +
> +	return ret;
> +}
> +
> +void ved_flush_cmd_queue(struct ved_private *ved_priv)
> +{
> +	struct ved_cmd_queue *ved_cmd;
> +	struct list_head *list, *next;
> +	unsigned long irq_flags;
> +	spin_lock_irqsave(&ved_priv->ved_lock, irq_flags);
> +	/* Flush the VED cmd queue and signal all fences in the queue */
> +	list_for_each_safe(list, next, &ved_priv->ved_queue) {
> +		ved_cmd = list_entry(list, struct ved_cmd_queue, head);
> +		list_del(list);
> +		IPVR_DEBUG_VED("VED: flushing sequence:0x%08x.\n",
> +				  ved_cmd->cmd_seq);
> +		ved_priv->ved_cur_seq = ved_cmd->cmd_seq;
> +
> +		ipvr_fence_process(ved_priv->dev_priv, ved_cmd->cmd_seq,
> +				   IPVR_CMD_SKIP);
> +		kfree(ved_cmd->cmd);
> +		kfree(ved_cmd);
> +	}
> +	ved_priv->ved_busy = false;
> +	spin_unlock_irqrestore(&ved_priv->ved_lock, irq_flags);
> +}
> +
> +static int
> +ved_map_command(struct ved_private *ved_priv,
> +				struct drm_ipvr_gem_object *cmd_buffer,
> +				u32 cmd_size, void **ved_cmd,
> +				u16 sequence, s32 copy_cmd,
> +				struct ipvr_context *ipvr_ctx)
> +{
> +	int ret = 0;
> +	u32 cmd_size_remain;
> +	void *cmd, *cmd_copy, *cmd_start;
> +	union msg_header *header;
> +	struct ipvr_fence *fence = NULL;
> +
> +	/* command buffers may not exceed page boundary */
> +	if (cmd_size > PAGE_SIZE)
> +		return -EINVAL;
> +
> +	cmd_start = kmap(sg_page(cmd_buffer->sg_table->sgl));
> +	if (!cmd_start) {
> +		IPVR_ERROR("VED: kmap failed.\n");
> +		return -EFAULT;
> +	}
> +
> +	cmd = cmd_start;
> +	cmd_size_remain = cmd_size;
> +
> +	while (cmd_size_remain > 0) {
> +		u32 cur_cmd_size, cur_cmd_id, mmu_ptd, msvdx_mmu_invalid;
> +		if (cmd_size_remain < MTX_GENMSG_HEADER_SIZE) {
> +			ret = -EINVAL;
> +			goto out;
> +		}
> +		header = (union msg_header *)cmd;
> +		cur_cmd_size = header->bits.msg_size;
> +		cur_cmd_id = header->bits.msg_type;
> +		mmu_ptd = 0;
> +		msvdx_mmu_invalid = 0;
> +
> +		IPVR_DEBUG_VED("cmd start at %p cur_cmd_size = %d"
> +			       " cur_cmd_id = %02x fence = %08x\n",
> +			       cmd, cur_cmd_size,
> +			       cur_cmd_id, sequence);
> +		if ((cur_cmd_size % sizeof(u32))
> +		    || (cur_cmd_size > cmd_size_remain)) {
> +			ret = -EINVAL;
> +			IPVR_ERROR("VED: cmd size err, ret:%d.\n", ret);
> +			goto out;
> +		}
> +
> +		switch (cur_cmd_id) {
> +		case MTX_MSGID_DECODE_FE: {
> +			struct fw_decode_msg *decode_msg;
> +			if (sizeof(struct fw_decode_msg) > cmd_size_remain) {
> +				/* Msg size is not correct */
> +				ret = -EINVAL;
> +				IPVR_DEBUG_WARN("MSVDX: wrong msg size.\n");
> +				goto out;
> +			}
> +			decode_msg = (struct fw_decode_msg *)cmd;
> +			decode_msg->header.bits.msg_fence = sequence;
> +
> +			mmu_ptd = ipvr_get_default_pd_addr32(
> +					ved_priv->dev_priv->mmu);
> +			if (mmu_ptd == 0) {
> +				ret = -EINVAL;
> +				IPVR_DEBUG_WARN("MSVDX: invalid PD addr32.\n");
> +				goto out;
> +			}
> +			msvdx_mmu_invalid = atomic_cmpxchg(
> +				&ved_priv->dev_priv->ipvr_mmu_invaldc, 1, 0);
> +			if (msvdx_mmu_invalid == 1) {
> +				decode_msg->flag_size.bits.flags |=
> +					FW_INVALIDATE_MMU;
> +				IPVR_DEBUG_VED("VED: set MMU invalidate\n");
> +			}
> +			/* if ctx_id is not passed, use default id */
> +			if (decode_msg->mmu_context.bits.context == 0)
> +				decode_msg->mmu_context.bits.context =
> +					ved_priv->dev_priv-
> >default_ctx.ctx_id;
> +
> +			decode_msg->mmu_context.bits.mmu_ptd = mmu_ptd >> 8;
> +			IPVR_DEBUG_VED("VED: MSGID_DECODE_FE:"
> +					" - fence: %08x"
> +					" - flags: %08x - buffer_size: %08x"
> +					" - crtl_alloc_addr: %08x"
> +					" - context: %08x - mmu_ptd: %08x"
> +					" - operating_mode: %08x.\n",
> +					decode_msg->header.bits.msg_fence,
> +					decode_msg->flag_size.bits.flags,
> +					decode_msg->flag_size.bits.buffer_size,
> +					decode_msg->crtl_alloc_addr,
> +					decode_msg->mmu_context.bits.context,
> +					decode_msg->mmu_context.bits.mmu_ptd,
> +					decode_msg->operating_mode);
> +			break;
> +		}
> +		default:
> +			/* Msg not supported */
> +			ret = -EINVAL;
> +			IPVR_DEBUG_WARN("VED: msg not supported.\n");
> +			goto out;
> +		}
> +
> +		cmd += cur_cmd_size;
> +		cmd_size_remain -= cur_cmd_size;
> +	}
> +
> +	fence = ipvr_fence_create(ved_priv->dev_priv);
> +	if (IS_ERR(fence)) {
> +		ret = PTR_ERR(fence);
> +		IPVR_ERROR("Failed calling ipvr_fence_create: %d\n", ret);
> +		goto out;
> +	}
> +
> +	ipvr_fence_buffer_objects(
> +		&ved_priv->dev_priv->validate_ctx.validate_list, fence);
> +
> +	if (copy_cmd) {
> +		IPVR_DEBUG_VED("VED: copying command.\n");
> +
> +		cmd_copy = kzalloc(cmd_size, GFP_KERNEL);
> +		if (cmd_copy == NULL) {
> +			ret = -ENOMEM;
> +			IPVR_ERROR("VED: failed to alloc, ret:%d\n", ret);
> +			goto out;
> +		}
> +		memcpy(cmd_copy, cmd_start, cmd_size);
> +		*ved_cmd = cmd_copy;
> +	} else {
> +		IPVR_DEBUG_VED("VED: did NOT copy command.\n");
> +		ipvr_set_tile(ved_priv->dev_priv,
> +			      ved_priv->default_tiling_scheme,
> +			      ved_priv->default_tiling_stride);
> +
> +		ret = ved_cmd_send(ved_priv, cmd_start, cmd_size, ipvr_ctx);
> +		if (ret) {
> +			IPVR_ERROR("VED: ved_cmd_send failed\n");
> +			ret = -EINVAL;
> +		}
> +	}
> +
> +out:
> +	kunmap(sg_page(cmd_buffer->sg_table->sgl));
> +
> +	return ret;
> +}
> +
> +static int
> +ved_submit_cmdbuf_copy(struct ved_private *ved_priv,
> +				struct drm_ipvr_gem_object *cmd_buffer,
> +				u32 cmd_size,
> +				struct ipvr_context *ipvr_ctx,
> +				u32 fence_flag)
> +{
> +	struct ved_cmd_queue *ved_cmd;
> +	u16 sequence = (ved_priv->dev_priv->last_seq << 4) & 0xffff;
> +	unsigned long irq_flags;
> +	void *cmd = NULL;
> +	int ret;
> +	union msg_header *header;
> +
> +	/* queue the command to be sent when the h/w is ready */
> +	IPVR_DEBUG_VED("VED: queueing sequence:%08x.\n", sequence);
> +	ved_cmd = kzalloc(sizeof(struct ved_cmd_queue), GFP_KERNEL);
> +	if (ved_cmd == NULL) {
> +		IPVR_ERROR("MSVDXQUE: Out of memory...\n");
> +		return -ENOMEM;
> +	}
> +
> +	ret = ved_map_command(ved_priv, cmd_buffer, cmd_size,
> +				&cmd, sequence, 1, ipvr_ctx);
> +	if (ret) {
> +		IPVR_ERROR("VED: Failed to extract cmd\n");
> +		kfree(ved_cmd);
> +		/* -EINVAL or -EFAULT or -ENOMEM */
> +		return ret;
> +	}
> +	header = (union msg_header *)cmd;
> +	ved_cmd->cmd = cmd;
> +	ved_cmd->cmd_size = cmd_size;
> +	ved_cmd->cmd_seq = sequence;
> +
> +	ved_cmd->tiling_scheme = ved_priv->default_tiling_scheme;
> +	ved_cmd->tiling_stride = ved_priv->default_tiling_stride;
> +	ved_cmd->ipvr_ctx = ipvr_ctx;
> +	spin_lock_irqsave(&ved_priv->ved_lock, irq_flags);
> +	list_add_tail(&ved_cmd->head, &ved_priv->ved_queue);
> +	spin_unlock_irqrestore(&ved_priv->ved_lock, irq_flags);
> +	if (!ved_priv->ved_busy) {
> +		ved_priv->ved_busy = true;
> +		IPVR_DEBUG_VED("VED: Need immediate dequeue.\n");
> +		ved_cmd_dequeue_send(ved_priv);
> +	}
> +	trace_ved_cmd_copy(ipvr_ctx->ctx_id, header->bits.msg_type, sequence);
> +
> +	return ret;
> +}
> +
> +int
> +ved_submit_video_cmdbuf(struct ved_private *ved_priv,
> +				struct drm_ipvr_gem_object *cmd_buffer,
> +				u32 cmd_size,
> +				struct ipvr_context *ipvr_ctx,
> +				u32 fence_flag)
> +{
> +	unsigned long irq_flags;
> +	struct drm_ipvr_private *dev_priv = ved_priv->dev_priv;
> +	u16 sequence = (dev_priv->last_seq << 4) & 0xffff;
> +	int ret = 0;
> +
> +	if (sequence == IPVR_FENCE_SIGNALED_SEQ)
> +		sequence = (++ved_priv->dev_priv->last_seq << 4) & 0xffff;
> +
> +	if (!ipvr_ctx) {
> +		IPVR_ERROR("VED: null ctx\n");
> +		return -ENOENT;
> +	}
> +
> +	spin_lock_irqsave(&ved_priv->ved_lock, irq_flags);
> +
> +	IPVR_DEBUG_VED("sequence is 0x%x, needs_reset is 0x%x.\n",
> +			sequence, ved_priv->ved_needs_reset);
> +
> +	if (WARN_ON(ipvr_runtime_pm_get(ved_priv->dev_priv) < 0)) {
> +		IPVR_ERROR("Failed to get ipvr power\n");
> +		spin_unlock_irqrestore(&ved_priv->ved_lock, irq_flags);
> +		return -EBUSY;
> +	}
> +
> +	if (ved_priv->ved_busy) {
> +		spin_unlock_irqrestore(&ved_priv->ved_lock, irq_flags);
> +		ret = ved_submit_cmdbuf_copy(ved_priv, cmd_buffer,
> +			    cmd_size, ipvr_ctx, fence_flag);
> +
> +		return ret;
> +	}
> +
> +	if (ved_priv->ved_needs_reset) {
> +		spin_unlock_irqrestore(&ved_priv->ved_lock, irq_flags);
> +		IPVR_DEBUG_VED("VED: will reset msvdx.\n");
> +
> +		if (ved_core_reset(ved_priv)) {
> +			ret = -EBUSY;
> +			IPVR_ERROR("VED: Reset failed.\n");
> +			goto out_power_put;
> +		}
> +
> +		ved_priv->ved_needs_reset = 0;
> +		ved_priv->ved_busy = false;
> +
> +		if (ved_core_init(ved_priv->dev_priv)) {
> +			ret = -EBUSY;
> +			IPVR_DEBUG_WARN("VED: ved_core_init fail.\n");
> +			goto out_power_put;
> +		}
> +
> +		spin_lock_irqsave(&ved_priv->ved_lock, irq_flags);
> +	}
> +
> +	if (!ved_priv->ved_fw_loaded) {
> +		spin_unlock_irqrestore(&ved_priv->ved_lock, irq_flags);
> +		IPVR_DEBUG_VED("VED: reload FW to MTX\n");
> +		ret = ved_setup_fw(ved_priv);
> +		if (ret) {
> +			IPVR_ERROR("VED: fail to load FW\n");
> +			/* FIXME: find a proper return value */
> +			ret = -EFAULT;
> +			goto out_power_put;
> +		}
> +		ved_priv->ved_fw_loaded = true;
> +
> +		IPVR_DEBUG_VED("VED: load firmware successfully\n");
> +		spin_lock_irqsave(&ved_priv->ved_lock, irq_flags);
> +	}
> +
> +	ved_priv->ved_busy = true;
> +	spin_unlock_irqrestore(&ved_priv->ved_lock, irq_flags);
> +	IPVR_DEBUG_VED("VED: commit command to HW,seq=0x%08x\n",
> +			  sequence);
> +	ret = ved_map_command(ved_priv, cmd_buffer, cmd_size,
> +				NULL, sequence, 0, ipvr_ctx);
> +	if (ret) {
> +		IPVR_ERROR("VED: Failed to extract cmd.\n");
> +		goto out_power_put;
> +	}
> +
> +	return 0;
> +out_power_put:
> +	if (WARN_ON(ipvr_runtime_pm_put(ved_priv->dev_priv, false) < 0))
> +		IPVR_ERROR("Failed to put ipvr power\n");
> +	return ret;
> +}
> +
> +int ved_cmdbuf_video(struct ved_private *ved_priv,
> +		     struct drm_ipvr_gem_object *cmd_buffer,
> +		     u32 cmdbuf_size, struct ipvr_context *ipvr_ctx)
> +{
> +	return ved_submit_video_cmdbuf(ved_priv, cmd_buffer, cmdbuf_size,
> +				       ipvr_ctx, 0);
> +}
> +
> +static int ved_handle_panic_msg(struct ved_private *ved_priv,
> +					struct fw_panic_msg *panic_msg)
> +{
> +	/* For VXD385 firmware, the fence value is not valid here */
> +	u32 diff = 0;
> +	u16 fence;
> +	u32 err_trig, irq_sts, mmu_sts, dmac_sts;
> +	struct drm_ipvr_private *dev_priv = ved_priv->dev_priv;
> +	IPVR_DEBUG_WARN("MSVDX: MSGID_CMD_HW_PANIC:"
> +		  " Fault detected"
> +		  " - Fence: %08x"
> +		  " - fe_status mb: %08x"
> +		  " - be_status mb: %08x"
> +		  " - reserved2: %08x"
> +		  " - last mb: %08x"
> +		  " - resetting and ignoring error\n",
> +		  panic_msg->header.bits.msg_fence,
> +		  panic_msg->fe_status,
> +		  panic_msg->be_status,
> +		  panic_msg->mb.bits.reserved2,
> +		  panic_msg->mb.bits.last_mb);
> +	/*
> +	 * If bit 8 of MSVDX_INTERRUPT_STATUS is set the fault
> +	 * was caused in the DMAC. In this case you should
> +	 * check bits 20:22 of MSVDX_INTERRUPT_STATUS.
> +	 * If bit 20 is set there was a problem DMAing the buffer
> +	 * back to host. If bit 22 is set you'll need to get the
> +	 * value of MSVDX_DMAC_STREAM_STATUS (0x648).
> +	 * If bit 1 is set then there was an issue DMAing
> +	 * the bitstream or termination code for parsing.
> +	 */
> +	err_trig = IPVR_REG_READ32(MSVDX_COMMS_ERROR_TRIG);
> +	irq_sts = IPVR_REG_READ32(MSVDX_INTERRUPT_STATUS_OFFSET);
> +	mmu_sts = IPVR_REG_READ32(MSVDX_MMU_STATUS_OFFSET);
> +	dmac_sts = IPVR_REG_READ32(MSVDX_DMAC_STREAM_STATUS_OFFSET);
> +	IPVR_DEBUG_WARN("MSVDX: MSVDX_COMMS_ERROR_TRIG is 0x%x, "
> +		"MSVDX_INTERRUPT_STATUS is 0x%x, "
> +		"MSVDX_MMU_STATUS is 0x%x, "
> +		"MSVDX_DMAC_STREAM_STATUS is 0x%x.\n",
> +		err_trig, irq_sts, mmu_sts, dmac_sts);
> +
> +	trace_ved_irq_panic(panic_msg, err_trig, irq_sts, mmu_sts, dmac_sts);
> +
> +	fence = panic_msg->header.bits.msg_fence;
> +
> +	ved_priv->ved_needs_reset = 1;
> +
> +	diff = ved_priv->ved_cur_seq - dev_priv->last_seq;
> +	if (diff > 0x0FFFFFFF)
> +		ved_priv->ved_cur_seq++;
> +
> +	IPVR_DEBUG_WARN("VED: Fence ID missing, assuming %08x\n",
> +			ved_priv->ved_cur_seq);
> +
> +	ipvr_fence_process(dev_priv, ved_priv->ved_cur_seq, IPVR_CMD_FAILED);
> +
> +	/* Flush the command queue */
> +	ved_flush_cmd_queue(ved_priv);
> +	ved_priv->ved_busy = false;
> +	return 0;
> +}
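The status-bit decoding described in the long comment inside ved_handle_panic_msg() can be sketched as below. The bit positions follow that comment (bit 8 for a DMAC fault, bits 20/22 of MSVDX_INTERRUPT_STATUS for the follow-up checks) and are assumptions here, not symbols taken from ved_reg.h:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical bit masks following the comment in ved_handle_panic_msg();
 * the real driver uses register definitions from ved_reg.h instead. */
#define DMAC_FAULT_BIT        (1u << 8)  /* fault originated in the DMAC */
#define DMA_TO_HOST_ERR_BIT   (1u << 20) /* problem DMAing buffer to host */
#define STREAM_STATUS_REQ_BIT (1u << 22) /* need MSVDX_DMAC_STREAM_STATUS */

static int fault_in_dmac(uint32_t irq_status)
{
	return !!(irq_status & DMAC_FAULT_BIT);
}

/* Only when the fault is in the DMAC are bits 20:22 meaningful; bit 22
 * means the handler should go read MSVDX_DMAC_STREAM_STATUS. */
static int need_stream_status(uint32_t irq_status)
{
	return fault_in_dmac(irq_status) &&
	       !!(irq_status & STREAM_STATUS_REQ_BIT);
}
```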
> +
> +static int
> +ved_handle_completed_msg(struct ved_private *ved_priv,
> +				struct fw_completed_msg *completed_msg)
> +{
> +	struct drm_ipvr_private *dev_priv = ved_priv->dev_priv;
> +	u16 fence, flags;
> +	int ret = 0;
> +	struct ipvr_context *ipvr_ctx;
> +
> +	IPVR_DEBUG_VED("VED: MSGID_CMD_COMPLETED:"
> +		" - Fence: %08x - flags: %08x - vdebcr: %08x"
> +		" - first_mb : %d - last_mb: %d\n",
> +		completed_msg->header.bits.msg_fence,
> +		completed_msg->flags, completed_msg->vdebcr,
> +		completed_msg->mb.bits.start_mb,
> +		completed_msg->mb.bits.last_mb);
> +
> +	flags = completed_msg->flags;
> +	fence = completed_msg->header.bits.msg_fence;
> +
> +	ved_priv->ved_cur_seq = fence;
> +
> +	ipvr_fence_process(dev_priv, fence, IPVR_CMD_SUCCESS);
> +
> +	ipvr_ctx = ipvr_find_ctx_with_fence(dev_priv, fence);
> +	trace_ved_irq_completed(ipvr_ctx, completed_msg);
> +	if (unlikely(ipvr_ctx == NULL)) {
> +		IPVR_DEBUG_WARN("abnormal complete msg: seq=0x%04x.\n",
> +				fence);
> +		ret = -EINVAL;
> +		goto out_clear_busy;
> +	}
> +
> +	if (flags & FW_VA_RENDER_HOST_INT) {
> +		/* Now send the next command from the msvdx cmd queue */
> +		if (ved_cmd_dequeue_send(ved_priv) == 0)
> +			goto out;
> +	}
> +
> +out_clear_busy:
> +	ved_priv->ved_busy = false;
> +out:
> +	return ret;
> +}
> +
> +/*
> + * MSVDX MTX interrupt
> + */
> +static void ved_mtx_interrupt(struct ved_private *ved_priv)
> +{
> +	static u32 buf[128]; /* message buffer */
> +	u32 ridx, widx, buf_size, buf_offset;
> +	u32 num, ofs; /* message num and offset */
> +	union msg_header *header;
> +	int cmd_complete = 0;
> +	struct drm_ipvr_private *dev_priv = ved_priv->dev_priv;
> +	IPVR_DEBUG_VED("VED: Got a VED MTX interrupt.\n");
> +
> +	/* we need clocks enabled before we touch VEC local ram,
> +	 * but fw will take care of the clock after fw is loaded
> +	 */
> +
> +loop: /* process messages until the to-host ring is drained */
> +	ridx = IPVR_REG_READ32(MSVDX_COMMS_TO_HOST_RD_INDEX);
> +	widx = IPVR_REG_READ32(MSVDX_COMMS_TO_HOST_WRT_INDEX);
> +
> +	/* Get out of here if nothing */
> +	if (ridx == widx)
> +		goto done;
> +
> +	buf_size = IPVR_REG_READ32(MSVDX_COMMS_TO_HOST_BUF_SIZE) &
> +		((1 << 16) - 1);
> +	/* 0x2000 is the VEC local RAM offset */
> +	buf_offset = (IPVR_REG_READ32(MSVDX_COMMS_TO_HOST_BUF_SIZE) >> 16)
> +		+ 0x2000;
> +
> +	ofs = 0;
> +	buf[ofs] = IPVR_REG_READ32(buf_offset + (ridx << 2));
> +	header = (union msg_header *)buf;
> +
> +	/* round to nearest word */
> +	num = (header->bits.msg_size + 3) / 4;
> +
> +	/* ASSERT(num <= sizeof(buf) / sizeof(u32)); */
> +
> +	if (++ridx >= buf_size)
> +		ridx = 0;
> +
> +	for (ofs++; ofs < num; ofs++) {
> +		buf[ofs] = IPVR_REG_READ32(buf_offset + (ridx << 2));
> +
> +		if (++ridx >= buf_size)
> +			ridx = 0;
> +	}
> +
> +	/* Update the Read index */
> +	IPVR_REG_WRITE32(ridx, MSVDX_COMMS_TO_HOST_RD_INDEX);
> +
> +	if (ved_priv->ved_needs_reset)
> +		goto loop;
> +
> +	switch (header->bits.msg_type) {
> +	case MTX_MSGID_HW_PANIC: {
> +		struct fw_panic_msg *panic_msg = (struct fw_panic_msg *)buf;
> +		cmd_complete = 1;
> +		ved_handle_panic_msg(ved_priv, panic_msg);
> +		/*
> +		 * panic msg clears all pending cmds and breaks the
> +		 * cmd<->irq pairing
> +		 */
> +		if (WARN_ON(ipvr_runtime_pm_put_all(ved_priv->dev_priv,
> +						    true) < 0)) {
> +			IPVR_ERROR("Error clearing pending events and putting power\n");
> +		}
> +		goto done;
> +	}
> +
> +	case MTX_MSGID_COMPLETED: {
> +		struct fw_completed_msg *completed_msg =
> +					(struct fw_completed_msg *)buf;
> +		cmd_complete = 1;
> +		if (ved_handle_completed_msg(ved_priv, completed_msg))
> +			cmd_complete = 0;
> +		/*
> +		 * for VP8, cmd and COMPLETED msg are paired, so we can
> +		 * safely call get in execbuf_ioctl and put here
> +		 */
> +		if (WARN_ON(ipvr_runtime_pm_put(ved_priv->dev_priv,
> +						true) < 0)) {
> +			IPVR_ERROR("Error putting power\n");
> +		}
> +		break;
> +	}
> +
> +	default:
> +		IPVR_ERROR("VED: unknown message from MTX, ID:0x%08x.\n",
> +			header->bits.msg_type);
> +		goto done;
> +	}
> +
> +done:
> +	IPVR_DEBUG_VED("VED Interrupt: finished processing a message.\n");
> +	if (ridx != widx) {
> +		IPVR_DEBUG_VED("VED: there are more messages to be read.\n");
> +		goto loop;
> +	}
> +
> +	mb();	/* TBD check this... */
> +}
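Two small pieces of arithmetic in the loop above deserve pinning down: the (msg_size + 3) / 4 byte-to-word rounding and the read-index wraparound. Illustrative helpers, not driver code:

```c
#include <assert.h>
#include <stdint.h>

/* Round a message size in bytes up to whole 32-bit words, matching the
 * (msg_size + 3) / 4 computation in ved_mtx_send()/ved_mtx_interrupt(). */
static uint32_t msg_words(uint32_t msg_size_bytes)
{
	return (msg_size_bytes + 3) / 4;
}

/* Advance a ring index by one word, wrapping at buf_size, as the
 * "if (++ridx >= buf_size) ridx = 0;" steps above do. */
static uint32_t ring_advance(uint32_t idx, uint32_t buf_size)
{
	return (idx + 1 >= buf_size) ? 0 : idx + 1;
}
```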
> +
> +/*
> + * MSVDX interrupt.
> + */
> +int ved_irq_handler(struct ved_private *ved_priv)
> +{
> +	u32 msvdx_stat;
> +	struct drm_ipvr_private *dev_priv = ved_priv->dev_priv;
> +	msvdx_stat = IPVR_REG_READ32(MSVDX_INTERRUPT_STATUS_OFFSET);
> +
> +	/* The driver only needs to handle the MTX irq.
> +	 * For an MMU fault irq there is always a HW PANIC generated;
> +	 * if HW/FW hangs completely, the lockup function handles the
> +	 * resetting.
> +	 */
> +	if (msvdx_stat & MSVDX_INTERRUPT_STATUS_MMU_FAULT_IRQ_MASK) {
> +		/* Ideally we should never get here */
> +		IPVR_DEBUG_IRQ("VED: MMU Fault:0x%x\n", msvdx_stat);
> +
> +		/* Pause MMU */
> +		IPVR_REG_WRITE32(MSVDX_MMU_CONTROL0_MMU_PAUSE_MASK,
> +				 MSVDX_MMU_CONTROL0_OFFSET);
> +		wmb();
> +
> +		/* Clear this interrupt bit only */
> +		IPVR_REG_WRITE32(MSVDX_INTERRUPT_STATUS_MMU_FAULT_IRQ_MASK,
> +				 MSVDX_INTERRUPT_CLEAR_OFFSET);
> +		IPVR_REG_READ32(MSVDX_INTERRUPT_CLEAR_OFFSET);
> +		rmb();
> +
> +		ved_priv->ved_needs_reset = 1;
> +	} else if (msvdx_stat & MSVDX_INTERRUPT_STATUS_MTX_IRQ_MASK) {
> +		IPVR_DEBUG_IRQ("VED: msvdx_stat: 0x%x(MTX)\n", msvdx_stat);
> +
> +		/* Clear all interrupt bits */
> +		IPVR_REG_WRITE32(0xffff, MSVDX_INTERRUPT_CLEAR_OFFSET);
> +
> +		IPVR_REG_READ32(MSVDX_INTERRUPT_CLEAR_OFFSET);
> +		rmb();
> +
> +		ved_mtx_interrupt(ved_priv);
> +	}
> +
> +	return 0;
> +}
> +
> +int ved_check_idle(struct ved_private *ved_priv)
> +{
> +	int loop, ret;
> +	struct drm_ipvr_private *dev_priv;
> +	if (!ved_priv)
> +		return 0;
> +
> +	dev_priv = ved_priv->dev_priv;
> +	if (!ved_priv->ved_fw_loaded)
> +		return 0;
> +
> +	if (ved_priv->ved_busy) {
> +		IPVR_DEBUG_PM("VED: ved_busy was set, return busy.\n");
> +		return -EBUSY;
> +	}
> +
> +	/* On some cores below revision 50502 there is one case in which
> +	 * read requests may not go to zero: a page fault. Check the core
> +	 * revision via reg MSVDX_CORE_REV (the 385 core is 0x20001), check
> +	 * whether an MMU page fault happened via reg MSVDX_INTERRUPT_STATUS,
> +	 * and check whether it was a page-table rather than a protection
> +	 * fault via reg MSVDX_MMU_STATUS; in that case, call ved_core_reset
> +	 * as the workaround. */
> +	if ((IPVR_REG_READ32(MSVDX_CORE_REV_OFFSET) < 0x00050502) &&
> +	    (IPVR_REG_READ32(MSVDX_INTERRUPT_STATUS_OFFSET)
> +			& MSVDX_INTERRUPT_STATUS_MMU_FAULT_IRQ_MASK) &&
> +	    (IPVR_REG_READ32(MSVDX_MMU_STATUS_OFFSET) & 1)) {
> +		IPVR_DEBUG_WARN("mmu page fault, recover by core_reset.\n");
> +		return 0;
> +	}
> +
> +	/* check MSVDX_MMU_MEM_REQ to confirm there are no memory requests */
> +	for (loop = 0; loop < 10; loop++) {
> +		ret = ved_wait_for_register(ved_priv,
> +					    MSVDX_MMU_MEM_REQ_OFFSET,
> +					    0, 0xff, 100, 1);
> +		if (!ret)
> +			break;
> +	}
> +	if (ret) {
> +		IPVR_DEBUG_WARN("MSVDX: MSVDX_MMU_MEM_REQ reg is 0x%x, "
> +				"indicating mem busy, prevent powering off ved, "
> +				"MSVDX_COMMS_FW_STATUS reg is 0x%x, "
> +				"MSVDX_COMMS_ERROR_TRIG reg is 0x%x.\n",
> +				IPVR_REG_READ32(MSVDX_MMU_MEM_REQ_OFFSET),
> +				IPVR_REG_READ32(MSVDX_COMMS_FW_STATUS),
> +				IPVR_REG_READ32(MSVDX_COMMS_ERROR_TRIG));
> +		return -EBUSY;
> +	}
> +
> +	return 0;
> +}
> +
> +void ved_check_reset_fw(struct ved_private *ved_priv)
> +{
> +	unsigned long irq_flags;
> +
> +	spin_lock_irqsave(&ved_priv->ved_lock, irq_flags);
> +
> +	/* handle fw upload here if required: power off first, then
> +	 * hw_begin will power up/upload FW correctly
> +	 */
> +	if (ved_priv->ved_needs_reset & MSVDX_RESET_NEEDS_REUPLOAD_FW) {
> +		ved_priv->ved_needs_reset &= ~MSVDX_RESET_NEEDS_REUPLOAD_FW;
> +		spin_unlock_irqrestore(&ved_priv->ved_lock, irq_flags);
> +		IPVR_DEBUG_VED("VED: force power off VED due to decode err\n");
> +		spin_lock_irqsave(&ved_priv->ved_lock, irq_flags);
> +	}
> +	spin_unlock_irqrestore(&ved_priv->ved_lock, irq_flags);
> +}
> diff --git a/drivers/gpu/drm/ipvr/ved_cmd.h b/drivers/gpu/drm/ipvr/ved_cmd.h
> new file mode 100644
> index 0000000..f604b02
> --- /dev/null
> +++ b/drivers/gpu/drm/ipvr/ved_cmd.h
> @@ -0,0 +1,70 @@
> +/**************************************************************************
> + * ved_cmd.h: VED header file to support command buffer handling
> + *
> + * Copyright (c) 2014 Intel Corporation, Hillsboro, OR, USA
> + * Copyright (c) Imagination Technologies Limited, UK
> + * Copyright (c) 2003 Tungsten Graphics, Inc., Cedar Park, Texas.
> + * All Rights Reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; if not, write to the Free Software Foundation, Inc.,
> + * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
> + *
> + * Authors:
> + *    Fei Jiang <fei.jiang@xxxxxxxxx>
> + *    Yao Cheng <yao.cheng@xxxxxxxxx>
> + *
> + ****************************************************************************/
> +
> +#ifndef _VED_CMD_H_
> +#define _VED_CMD_H_
> +
> +#include "ipvr_drv.h"
> +#include "ipvr_drm.h"
> +#include "ipvr_gem.h"
> +#include "ipvr_fence.h"
> +#include "ipvr_exec.h"
> +#include "ved_reg.h"
> +#include "ved_pm.h"
> +
> +struct ved_cmd_queue {
> +	struct list_head head;
> +	void *cmd;
> +	u32 cmd_size;
> +	u16 cmd_seq;
> +	u32 fence_flag;
> +	u8 tiling_scheme;
> +	u8 tiling_stride;
> +	struct ipvr_context *ipvr_ctx;
> +};
> +
> +int ved_irq_handler(struct ved_private *ved_priv);
> +
> +int ved_mtx_send(struct ved_private *ved_priv, const void *msg);
> +
> +int ved_check_idle(struct ved_private *ved_priv);
> +
> +void ved_check_reset_fw(struct ved_private *ved_priv);
> +
> +void ved_flush_cmd_queue(struct ved_private *ved_priv);
> +
> +int ved_cmdbuf_video(struct ved_private *ved_priv,
> +			struct drm_ipvr_gem_object *cmd_buffer,
> +			u32 cmdbuf_size, struct ipvr_context *ipvr_ctx);
> +
> +int ved_submit_video_cmdbuf(struct ved_private *ved_priv,
> +			struct drm_ipvr_gem_object *cmd_buffer, u32 cmd_size,
> +			struct ipvr_context *ipvr_ctx, u32 fence_flag);
> +
> +int ved_cmd_dequeue_send(struct ved_private *ved_priv);
> +
> +#endif
> diff --git a/drivers/gpu/drm/ipvr/ved_fw.c b/drivers/gpu/drm/ipvr/ved_fw.c
> new file mode 100644
> index 0000000..43682da
> --- /dev/null
> +++ b/drivers/gpu/drm/ipvr/ved_fw.c
> @@ -0,0 +1,1199 @@
> +/****************************************************************************
> + * ved_fw.c: VED initialization and mtx-firmware upload
> + *
> + * Copyright (c) 2014 Intel Corporation, Hillsboro, OR, USA
> + * Copyright (c) Imagination Technologies Limited, UK
> + * Copyright (c) 2003 Tungsten Graphics, Inc., Cedar Park, Texas.
> + * All Rights Reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; if not, write to the Free Software Foundation, Inc.,
> + * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
> + *
> + * Authors:
> + *    Fei Jiang <fei.jiang@xxxxxxxxx>
> + *    Yao Cheng <yao.cheng@xxxxxxxxx>
> + *
> + ****************************************************************************/
> +
> +#include "ipvr_bo.h"
> +#include "ipvr_mmu.h"
> +#include "ipvr_gem.h"
> +#include "ved_fw.h"
> +#include "ved_cmd.h"
> +#include "ved_msg.h"
> +#include "ved_reg.h"
> +#include <linux/firmware.h>
> +#include <linux/module.h>
> +#include <asm/cacheflush.h>
> +
> +#define STACKGUARDWORD			0x10101010
> +#define MSVDX_MTX_DATA_LOCATION		0x82880000
> +#define UNINITILISE_MEM			0xcdcdcdcd
> +#define FIRMWARE_NAME "msvdx_fw_mfld_DE2.0.bin"
> +
> +/* VED FW header */
> +struct ved_fw {
> +	u32 ver;
> +	u32 text_size;
> +	u32 data_size;
> +	u32 data_location;
> +};
> +
> +
> +void ved_clear_irq(struct ved_private *ved_priv)
> +{
> +	u32 mtx_int = 0;
> +	struct drm_ipvr_private *dev_priv = ved_priv->dev_priv;
> +	/* Clear MTX interrupt */
> +	REGIO_WRITE_FIELD_LITE(mtx_int, MSVDX_INTERRUPT_STATUS, MTX_IRQ, 1);
> +	IPVR_REG_WRITE32(mtx_int, MSVDX_INTERRUPT_CLEAR_OFFSET);
> +}
> +
> +/* the following two functions also work for CLV and MFLD */
> +/* IPVR_INT_ENABLE_R is set in ipvr_irq_(un)install_islands */
> +void ved_disable_irq(struct ved_private *ved_priv)
> +{
> +	u32 enables = 0;
> +	struct drm_ipvr_private *dev_priv = ved_priv->dev_priv;
> +	REGIO_WRITE_FIELD_LITE(enables, MSVDX_INTERRUPT_STATUS, MTX_IRQ, 0);
> +	IPVR_REG_WRITE32(enables, MSVDX_HOST_INTERRUPT_ENABLE_OFFSET);
> +}
> +
> +void ved_enable_irq(struct ved_private *ved_priv)
> +{
> +	u32 enables = 0;
> +	struct drm_ipvr_private *dev_priv = ved_priv->dev_priv;
> +	/* Only enable the master core IRQ */
> +	REGIO_WRITE_FIELD_LITE(enables, MSVDX_INTERRUPT_STATUS, MTX_IRQ, 1);
> +	IPVR_REG_WRITE32(enables, MSVDX_HOST_INTERRUPT_ENABLE_OFFSET);
> +}
> +
> +/*
> + * The original udelay value of 1000 is derived from the reference driver.
> + * Per Liu, Haiyang, reducing it from 1000 to 5 saves 3% C0 residency.
> + */
> +int
> +ved_wait_for_register(struct ved_private *ved_priv,
> +			    u32 offset, u32 value, u32 enable,
> +			    u32 poll_cnt, u32 timeout)
> +{
> +	u32 reg_value = 0;
> +	struct drm_ipvr_private *dev_priv = ved_priv->dev_priv;
> +	while (poll_cnt) {
> +		reg_value = IPVR_REG_READ32(offset);
> +		if (value == (reg_value & enable))
> +			return 0;
> +
> +		/* Wait a bit */
> +		IPVR_UDELAY(timeout);
> +		poll_cnt--;
> +	}
> +	IPVR_DEBUG_REG("MSVDX: Timeout while waiting for register %08x:"
> +		       " expecting %08x (mask %08x), got %08x\n",
> +		       offset, value, enable, reg_value);
> +
> +	return -EFAULT;
> +}
> +
> +void
> +ved_set_clocks(struct ved_private *ved_priv, u32 clock_state)
> +{
> +	u32 old_clock_state = 0;
> +	struct drm_ipvr_private *dev_priv = ved_priv->dev_priv;
> +	/* IPVR_DEBUG_VED("SetClocks to %x.\n", clock_state); */
> +	old_clock_state = IPVR_REG_READ32(MSVDX_MAN_CLK_ENABLE_OFFSET);
> +	if (old_clock_state == clock_state)
> +		return;
> +
> +	if (clock_state == 0) {
> +		/* Turn off clocks procedure */
> +		if (old_clock_state) {
> +			/* Turn off all the clocks except core */
> +			IPVR_REG_WRITE32(
> +				MSVDX_MAN_CLK_ENABLE_CORE_MAN_CLK_ENABLE_MASK,
> +				MSVDX_MAN_CLK_ENABLE_OFFSET);
> +
> +			/* Make sure all the clocks are off except core */
> +			ved_wait_for_register(ved_priv,
> +				MSVDX_MAN_CLK_ENABLE_OFFSET,
> +				MSVDX_MAN_CLK_ENABLE_CORE_MAN_CLK_ENABLE_MASK,
> +				0xffffffff, 2000000, 5);
> +
> +			/* Turn off core clock */
> +			IPVR_REG_WRITE32(0, MSVDX_MAN_CLK_ENABLE_OFFSET);
> +		}
> +	} else {
> +		u32 clocks_en = clock_state;
> +
> +		/* Make sure that core clock is not accidentally turned off */
> +		clocks_en |= MSVDX_MAN_CLK_ENABLE_CORE_MAN_CLK_ENABLE_MASK;
> +
> +		/* If all clocks were disabled, do the bring-up procedure */
> +		if (old_clock_state == 0) {
> +			/* turn on core clock */
> +			IPVR_REG_WRITE32(
> +				MSVDX_MAN_CLK_ENABLE_CORE_MAN_CLK_ENABLE_MASK,
> +				MSVDX_MAN_CLK_ENABLE_OFFSET);
> +
> +			/* Make sure core clock is on */
> +			ved_wait_for_register(ved_priv,
> +				MSVDX_MAN_CLK_ENABLE_OFFSET,
> +				MSVDX_MAN_CLK_ENABLE_CORE_MAN_CLK_ENABLE_MASK,
> +				0xffffffff, 2000000, 5);
> +
> +			/* turn on the other clocks as well */
> +			IPVR_REG_WRITE32(clocks_en, MSVDX_MAN_CLK_ENABLE_OFFSET);
> +
> +			/* Make sure that they are all on */
> +			ved_wait_for_register(ved_priv,
> +					MSVDX_MAN_CLK_ENABLE_OFFSET,
> +					clocks_en, 0xffffffff, 2000000, 5);
> +		} else {
> +			IPVR_REG_WRITE32(clocks_en, MSVDX_MAN_CLK_ENABLE_OFFSET);
> +
> +			/* Make sure that they are on */
> +			ved_wait_for_register(ved_priv,
> +					MSVDX_MAN_CLK_ENABLE_OFFSET,
> +					clocks_en, 0xffffffff, 2000000, 5);
> +		}
> +	}
> +}
> +
> +int ved_core_reset(struct ved_private *ved_priv)
> +{
> +	int ret = 0;
> +	int loop;
> +	u32 cmd;
> +	struct drm_ipvr_private *dev_priv = ved_priv->dev_priv;
> +	/* Enable Clocks */
> +	IPVR_DEBUG_GENERAL("Enabling clocks.\n");
> +	ved_set_clocks(ved_priv, clk_enable_all);
> +
> +	/* Always pause the MMU as the core may still be active
> +	 * when resetting.  Memory activity concurrent with a reset
> +	 * must be avoided.
> +	 */
> +	IPVR_REG_WRITE32(2, MSVDX_MMU_CONTROL0_OFFSET);
> +
> +	/* BRN26106, BRN23944, BRN33671 */
> +	/* This is necessary for all cores up to Tourmaline */
> +	if ((IPVR_REG_READ32(MSVDX_CORE_REV_OFFSET) < 0x00050502) &&
> +		(IPVR_REG_READ32(MSVDX_INTERRUPT_STATUS_OFFSET) &
> +			MSVDX_INTERRUPT_STATUS_MMU_FAULT_IRQ_MASK) &&
> +		(IPVR_REG_READ32(MSVDX_MMU_STATUS_OFFSET) & 1)) {
> +		u32 *pptd;
> +		int loop;
> +		unsigned long ptd_addr;
> +
> +		/* do work around */
> +		ptd_addr = page_to_pfn(ved_priv->mmu_recover_page) << PAGE_SHIFT;
> +		/* fixme: check ptd_addr bit length */
> +		pptd = kmap(ved_priv->mmu_recover_page);
> +		if (!pptd) {
> +			IPVR_ERROR("failed to kmap mmu recover page.\n");
> +			return -ENOMEM;
> +		}
> +		for (loop = 0; loop < 1024; loop++)
> +			pptd[loop] = ptd_addr | 0x00000003;
> +		IPVR_REG_WRITE32(ptd_addr, MSVDX_MMU_DIR_LIST_BASE_OFFSET +  0);
> +		IPVR_REG_WRITE32(ptd_addr, MSVDX_MMU_DIR_LIST_BASE_OFFSET +  4);
> +		IPVR_REG_WRITE32(ptd_addr, MSVDX_MMU_DIR_LIST_BASE_OFFSET +  8);
> +		IPVR_REG_WRITE32(ptd_addr, MSVDX_MMU_DIR_LIST_BASE_OFFSET + 12);
> +
> +		IPVR_REG_WRITE32(6, MSVDX_MMU_CONTROL0_OFFSET);
> +		IPVR_REG_WRITE32(MSVDX_INTERRUPT_STATUS_MMU_FAULT_IRQ_MASK,
> +				 MSVDX_INTERRUPT_STATUS_OFFSET);
> +		kunmap(ved_priv->mmu_recover_page);
> +	}
> +
> +	/* make sure *ALL* outstanding reads have gone away */
> +	for (loop = 0; loop < 10; loop++)
> +		ret = ved_wait_for_register(ved_priv, MSVDX_MMU_MEM_REQ_OFFSET,
> +					    0, 0xff, 100, 1);
> +	if (ret) {
> +		IPVR_DEBUG_WARN("MSVDX_MMU_MEM_REQ is 0x%x,\n"
> +			"indicating outstanding read requests.\n",
> +			IPVR_REG_READ32(MSVDX_MMU_MEM_REQ_OFFSET));
> +		return -EBUSY;
> +	}
> +	/* disconnect RENDEC decoders from memory */
> +	cmd = IPVR_REG_READ32(MSVDX_RENDEC_CONTROL1_OFFSET);
> +	REGIO_WRITE_FIELD(cmd, MSVDX_RENDEC_CONTROL1, RENDEC_DEC_DISABLE, 1);
> +	IPVR_REG_WRITE32(cmd, MSVDX_RENDEC_CONTROL1_OFFSET);
> +
> +	/* Issue software reset for all but core */
> +	IPVR_REG_WRITE32((unsigned int)~MSVDX_CONTROL_MSVDX_SOFT_RESET_MASK,
> +			MSVDX_CONTROL_OFFSET);
> +	IPVR_REG_READ32(MSVDX_CONTROL_OFFSET);
> +	/* bit format is set as little endian */
> +	IPVR_REG_WRITE32(0, MSVDX_CONTROL_OFFSET);
> +	/* make sure read requests are zero */
> +	ret = ved_wait_for_register(ved_priv, MSVDX_MMU_MEM_REQ_OFFSET,
> +				    0, 0xff, 100, 100);
> +	if (!ret) {
> +		/* Issue software reset */
> +		IPVR_REG_WRITE32(MSVDX_CONTROL_MSVDX_SOFT_RESET_MASK,
> +				MSVDX_CONTROL_OFFSET);
> +
> +		ret = ved_wait_for_register(ved_priv, MSVDX_CONTROL_OFFSET, 0,
> +					MSVDX_CONTROL_MSVDX_SOFT_RESET_MASK,
> +					2000000, 5);
> +		if (!ret) {
> +			/* Clear interrupt enabled flag */
> +			IPVR_REG_WRITE32(0, MSVDX_HOST_INTERRUPT_ENABLE_OFFSET);
> +
> +			/* Clear any pending interrupt flags */
> +			IPVR_REG_WRITE32(0xFFFFFFFF, MSVDX_INTERRUPT_CLEAR_OFFSET);
> +		} else {
> +			IPVR_DEBUG_WARN("MSVDX_CONTROL_OFFSET is 0x%x,\n"
> +				"indicating software reset failed.\n",
> +				IPVR_REG_READ32(MSVDX_CONTROL_OFFSET));
> +		}
> +	} else {
> +		IPVR_DEBUG_WARN("MSVDX_MMU_MEM_REQ is 0x%x,\n"
> +			"indicating outstanding read requests.\n",
> +			IPVR_REG_READ32(MSVDX_MMU_MEM_REQ_OFFSET));
> +	}
> +	return ret;
> +}
> +
> +/*
> + * Reset chip and disable interrupts.
> + * Returns 0 on success, negative error code on failure.
> + * Prefer ved_core_reset() over ved_reset().
> + */
> +int ved_reset(struct ved_private *ved_priv)
> +{
> +	int ret = 0;
> +	struct drm_ipvr_private *dev_priv = ved_priv->dev_priv;
> +	/* Issue software reset */
> +	/* IPVR_REG_WRITE32(msvdx_sw_reset_all, MSVDX_CONTROL); */
> +	IPVR_REG_WRITE32(MSVDX_CONTROL_MSVDX_SOFT_RESET_MASK,
> +			MSVDX_CONTROL_OFFSET);
> +
> +	ret = ved_wait_for_register(ved_priv, MSVDX_CONTROL_OFFSET, 0,
> +			MSVDX_CONTROL_MSVDX_SOFT_RESET_MASK, 2000000, 5);
> +	if (!ret) {
> +		/* Clear interrupt enabled flag */
> +		IPVR_REG_WRITE32(0, MSVDX_HOST_INTERRUPT_ENABLE_OFFSET);
> +
> +		/* Clear any pending interrupt flags */
> +		IPVR_REG_WRITE32(0xFFFFFFFF, MSVDX_INTERRUPT_CLEAR_OFFSET);
> +	} else {
> +		IPVR_DEBUG_WARN("MSVDX_CONTROL_OFFSET is 0x%x,\n"
> +			"indicating software reset failed.\n",
> +			IPVR_REG_READ32(MSVDX_CONTROL_OFFSET));
> +	}
> +
> +	return ret;
> +}
> +
> +static int ved_alloc_ccb_for_rendec(struct ved_private *ved_priv,
> +					int32_t ccb0_size,
> +					int32_t ccb1_size)
> +{
> +	int ret;
> +	size_t size;
> +	u8 *ccb0_addr = NULL;
> +	u8 *ccb1_addr = NULL;
> +
> +	IPVR_DEBUG_INIT("VED: setting up RENDEC, allocate CCB 0/1\n");
> +
> +	/*handling for ccb0*/
> +	if (ved_priv->ccb0 == NULL) {
> +		size = roundup(ccb0_size, PAGE_SIZE);
> +		if (size == 0)
> +			return -EINVAL;
> +
> +		/* Allocate the new object */
> +		ved_priv->ccb0 = ipvr_gem_create(ved_priv->dev_priv, size, 0, 0);
> +		if (IS_ERR(ved_priv->ccb0)) {
> +			ret = PTR_ERR(ved_priv->ccb0);
> +			IPVR_ERROR("VED: failed to allocate ccb0 buffer: %d.\n", ret);
> +			ved_priv->ccb0 = NULL;
> +			return ret;
> +		}
> +
> +		ved_priv->base_addr0 = ipvr_gem_object_mmu_offset(ved_priv->ccb0);
> +
> +		ccb0_addr = ipvr_gem_object_vmap(ved_priv->ccb0);
> +		if (!ccb0_addr) {
> +			ret = -ENOMEM;
> +			IPVR_ERROR("VED: vmap failed for ccb0 buffer: %d.\n", ret);
> +			goto err_free_ccb0;
> +		}
> +
> +		memset(ccb0_addr, 0, size);
> +		vunmap(ccb0_addr);
> +	}
> +
> +	/* handling for ccb1 */
> +	if (ved_priv->ccb1 == NULL) {
> +		size = roundup(ccb1_size, PAGE_SIZE);
> +		if (size == 0) {
> +			ret = -EINVAL;
> +			goto err_free_ccb0;
> +		}
> +
> +		/* Allocate the new object */
> +		ved_priv->ccb1 = ipvr_gem_create(ved_priv->dev_priv, size, 0, 0);
> +		if (IS_ERR(ved_priv->ccb1)) {
> +			ret = PTR_ERR(ved_priv->ccb1);
> +			IPVR_ERROR("VED: failed to allocate ccb1 buffer: %d.\n", ret);
> +			ved_priv->ccb1 = NULL;
> +			goto err_free_ccb0;
> +		}
> +
> +		ved_priv->base_addr1 = ipvr_gem_object_mmu_offset(ved_priv->ccb1);
> +
> +		ccb1_addr = ipvr_gem_object_vmap(ved_priv->ccb1);
> +		if (!ccb1_addr) {
> +			ret = -ENOMEM;
> +			IPVR_ERROR("VED: vmap failed for ccb1 buffer: %d.\n", ret);
> +			goto err_free_ccb1;
> +		}
> +
> +		memset(ccb1_addr, 0, size);
> +		vunmap(ccb1_addr);
> +	}
> +
> +	IPVR_DEBUG_INIT("VED: RENDEC A: %08x RENDEC B: %08x\n",
> +			ved_priv->base_addr0, ved_priv->base_addr1);
> +
> +	return 0;
> +err_free_ccb1:
> +	drm_gem_object_unreference_unlocked(&ved_priv->ccb1->base);
> +	ved_priv->ccb1 = NULL;
> +err_free_ccb0:
> +	drm_gem_object_unreference_unlocked(&ved_priv->ccb0->base);
> +	ved_priv->ccb0 = NULL;
> +	return ret;
> +}
> +
> +static void ved_free_ccb(struct ved_private *ved_priv)
> +{
> +	if (ved_priv->ccb0) {
> +		drm_gem_object_unreference_unlocked(&ved_priv->ccb0->base);
> +		ved_priv->ccb0 = NULL;
> +	}
> +	if (ved_priv->ccb1) {
> +		drm_gem_object_unreference_unlocked(&ved_priv->ccb1->base);
> +		ved_priv->ccb1 = NULL;
> +	}
> +}
> +
> +static void ved_rendec_init_by_reg(struct ved_private *ved_priv)
> +{
> +	u32 cmd;
> +	struct drm_ipvr_private *dev_priv = ved_priv->dev_priv;
> +
> +	IPVR_REG_WRITE32(ved_priv->base_addr0, MSVDX_RENDEC_BASE_ADDR0_OFFSET);
> +	IPVR_REG_WRITE32(ved_priv->base_addr1, MSVDX_RENDEC_BASE_ADDR1_OFFSET);
> +
> +	cmd = 0;
> +	REGIO_WRITE_FIELD(cmd, MSVDX_RENDEC_BUFFER_SIZE,
> +			RENDEC_BUFFER_SIZE0, RENDEC_A_SIZE / 4096);
> +	REGIO_WRITE_FIELD(cmd, MSVDX_RENDEC_BUFFER_SIZE,
> +			RENDEC_BUFFER_SIZE1, RENDEC_B_SIZE / 4096);
> +	IPVR_REG_WRITE32(cmd, MSVDX_RENDEC_BUFFER_SIZE_OFFSET);
> +
> +	cmd = 0;
> +	REGIO_WRITE_FIELD(cmd, MSVDX_RENDEC_CONTROL1,
> +			RENDEC_DECODE_START_SIZE, 0);
> +	REGIO_WRITE_FIELD(cmd, MSVDX_RENDEC_CONTROL1,
> +			RENDEC_BURST_SIZE_W, 1);
> +	REGIO_WRITE_FIELD(cmd, MSVDX_RENDEC_CONTROL1,
> +			RENDEC_BURST_SIZE_R, 1);
> +	REGIO_WRITE_FIELD(cmd, MSVDX_RENDEC_CONTROL1,
> +			RENDEC_EXTERNAL_MEMORY, 1);
> +	IPVR_REG_WRITE32(cmd, MSVDX_RENDEC_CONTROL1_OFFSET);
> +
> +	cmd = 0x00101010;
> +	IPVR_REG_WRITE32(cmd, MSVDX_RENDEC_CONTEXT0_OFFSET);
> +	IPVR_REG_WRITE32(cmd, MSVDX_RENDEC_CONTEXT1_OFFSET);
> +	IPVR_REG_WRITE32(cmd, MSVDX_RENDEC_CONTEXT2_OFFSET);
> +	IPVR_REG_WRITE32(cmd, MSVDX_RENDEC_CONTEXT3_OFFSET);
> +	IPVR_REG_WRITE32(cmd, MSVDX_RENDEC_CONTEXT4_OFFSET);
> +	IPVR_REG_WRITE32(cmd, MSVDX_RENDEC_CONTEXT5_OFFSET);
> +
> +	cmd = 0;
> +	REGIO_WRITE_FIELD(cmd, MSVDX_RENDEC_CONTROL0, RENDEC_INITIALISE, 1);
> +	IPVR_REG_WRITE32(cmd, MSVDX_RENDEC_CONTROL0_OFFSET);
> +}
> +
> +int ved_rendec_init_by_msg(struct ved_private *ved_priv)
> +{
> +	/* at this stage, FW has been uploaded successfully,
> +	 * so we can send the RENDEC init message */
> +	struct fw_init_msg init_msg;
> +	init_msg.header.bits.msg_size = sizeof(struct fw_init_msg);
> +	init_msg.header.bits.msg_type = MTX_MSGID_INIT;
> +	init_msg.rendec_addr0 = ved_priv->base_addr0;
> +	init_msg.rendec_addr1 = ved_priv->base_addr1;
> +	init_msg.rendec_size.bits.rendec_size0 = RENDEC_A_SIZE / (4 * 1024);
> +	init_msg.rendec_size.bits.rendec_size1 = RENDEC_B_SIZE / (4 * 1024);
> +	return ved_mtx_send(ved_priv, (void *)&init_msg);
> +}
> +
> +static void ved_get_mtx_control_from_dash(struct ved_private *ved_priv)
> +{
> +	int count = 0;
> +	u32 reg_val = 0;
> +	struct drm_ipvr_private *dev_priv = ved_priv->dev_priv;
> +
> +	REGIO_WRITE_FIELD(reg_val, MSVDX_MTX_DEBUG, MTX_DBG_IS_SLAVE, 1);
> +	REGIO_WRITE_FIELD(reg_val, MSVDX_MTX_DEBUG, MTX_DBG_GPIO_IN, 0x02);
> +	IPVR_REG_WRITE32(reg_val, MSVDX_MTX_DEBUG_OFFSET);
> +
> +	do {
> +		reg_val = IPVR_REG_READ32(MSVDX_MTX_DEBUG_OFFSET);
> +		count++;
> +	} while (((reg_val & 0x18) != 0) && count < 50000);
> +
> +	if (count >= 50000)
> +		IPVR_DEBUG_VED("VED: timeout in get_mtx_control_from_dash.\n");
> +
> +	/* Save the access control register...*/
> +	ved_priv->ved_dash_access_ctrl =
> +		IPVR_REG_READ32(MTX_RAM_ACCESS_CONTROL_OFFSET);
> +}
> +
> +static void
> +ved_release_mtx_control_from_dash(struct ved_private *ved_priv)
> +{
> +	struct drm_ipvr_private *dev_priv = ved_priv->dev_priv;
> +	/* restore access control */
> +	IPVR_REG_WRITE32(ved_priv->ved_dash_access_ctrl,
> +			 MTX_RAM_ACCESS_CONTROL_OFFSET);
> +	/* release bus */
> +	IPVR_REG_WRITE32(0x4, MSVDX_MTX_DEBUG_OFFSET);
> +}
> +
> +/* for future debug info of msvdx related registers */
> +static void
> +ved_setup_fw_dump(struct ved_private *ved_priv, u32 dma_channel)
> +{
> +	struct drm_ipvr_private *dev_priv = ved_priv->dev_priv;
> +	IPVR_DEBUG_REG("dump registers during fw upload for debug:\n");
> +	/* for DMAC REGISTER */
> +	IPVR_DEBUG_REG("MTX_SYSC_CDMAA is 0x%x\n",
> +			IPVR_REG_READ32(MTX_SYSC_CDMAA_OFFSET));
> +	IPVR_DEBUG_REG("MTX_SYSC_CDMAC value is 0x%x\n",
> +			IPVR_REG_READ32(MTX_SYSC_CDMAC_OFFSET));
> +	IPVR_DEBUG_REG("DMAC_SETUP value is 0x%x\n",
> +			IPVR_REG_READ32(DMAC_DMAC_SETUP_OFFSET + dma_channel));
> +	IPVR_DEBUG_REG("DMAC_DMAC_COUNT value is 0x%x\n",
> +			IPVR_REG_READ32(DMAC_DMAC_COUNT_OFFSET + dma_channel));
> +	IPVR_DEBUG_REG("DMAC_DMAC_PERIPH_OFFSET value is 0x%x\n",
> +			IPVR_REG_READ32(DMAC_DMAC_PERIPH_OFFSET + dma_channel));
> +	IPVR_DEBUG_REG("DMAC_DMAC_PERIPHERAL_ADDR value is 0x%x\n",
> +			IPVR_REG_READ32(DMAC_DMAC_PERIPHERAL_ADDR_OFFSET +
> +					dma_channel));
> +	IPVR_DEBUG_REG("MSVDX_CONTROL value is 0x%x\n",
> +			IPVR_REG_READ32(MSVDX_CONTROL_OFFSET));
> +	IPVR_DEBUG_REG("DMAC_DMAC_IRQ_STAT value is 0x%x\n",
> +			IPVR_REG_READ32(DMAC_DMAC_IRQ_STAT_OFFSET));
> +	IPVR_DEBUG_REG("MSVDX_MMU_CONTROL0 value is 0x%x\n",
> +			IPVR_REG_READ32(MSVDX_MMU_CONTROL0_OFFSET));
> +	IPVR_DEBUG_REG("DMAC_DMAC_COUNT 2222 value is 0x%x\n",
> +			IPVR_REG_READ32(DMAC_DMAC_COUNT_OFFSET + dma_channel));
> +
> +	/* for MTX REGISTER */
> +	IPVR_DEBUG_REG("MTX_ENABLE_OFFSET is 0x%x\n",
> +			IPVR_REG_READ32(MTX_ENABLE_OFFSET));
> +	IPVR_DEBUG_REG("MTX_KICK_INPUT_OFFSET value is 0x%x\n",
> +			IPVR_REG_READ32(MTX_KICK_INPUT_OFFSET));
> +	IPVR_DEBUG_REG("MTX_REGISTER_READ_WRITE_REQUEST_OFFSET value is 0x%x\n",
> +			IPVR_REG_READ32(MTX_REGISTER_READ_WRITE_REQUEST_OFFSET));
> +	IPVR_DEBUG_REG("MTX_RAM_ACCESS_CONTROL_OFFSET value is 0x%x\n",
> +			IPVR_REG_READ32(MTX_RAM_ACCESS_CONTROL_OFFSET));
> +	IPVR_DEBUG_REG("MTX_RAM_ACCESS_STATUS_OFFSET value is 0x%x\n",
> +			IPVR_REG_READ32(MTX_RAM_ACCESS_STATUS_OFFSET));
> +	IPVR_DEBUG_REG("MTX_SYSC_TIMERDIV_OFFSET value is 0x%x\n",
> +			IPVR_REG_READ32(MTX_SYSC_TIMERDIV_OFFSET));
> +	IPVR_DEBUG_REG("MTX_SYSC_CDMAC_OFFSET value is 0x%x\n",
> +			IPVR_REG_READ32(MTX_SYSC_CDMAC_OFFSET));
> +	IPVR_DEBUG_REG("MTX_SYSC_CDMAA_OFFSET value is 0x%x\n",
> +			IPVR_REG_READ32(MTX_SYSC_CDMAA_OFFSET));
> +	IPVR_DEBUG_REG("MTX_SYSC_CDMAS0_OFFSET value is 0x%x\n",
> +			IPVR_REG_READ32(MTX_SYSC_CDMAS0_OFFSET));
> +	IPVR_DEBUG_REG("MTX_SYSC_CDMAT_OFFSET value is 0x%x\n",
> +			IPVR_REG_READ32(MTX_SYSC_CDMAT_OFFSET));
> +
> +	/* for MSVDX CORE REGISTER */
> +	IPVR_DEBUG_REG("MSVDX_CONTROL_OFFSET is 0x%x\n",
> +			IPVR_REG_READ32(MSVDX_CONTROL_OFFSET));
> +	IPVR_DEBUG_REG("MSVDX_INTERRUPT_CLEAR_OFFSET value is 0x%x\n",
> +			IPVR_REG_READ32(MSVDX_INTERRUPT_CLEAR_OFFSET));
> +	IPVR_DEBUG_REG("MSVDX_INTERRUPT_STATUS_OFFSET value is 0x%x\n",
> +			IPVR_REG_READ32(MSVDX_INTERRUPT_STATUS_OFFSET));
> +	IPVR_DEBUG_REG("MSVDX_HOST_INTERRUPT_ENABLE_OFFSET value is 0x%x\n",
> +			IPVR_REG_READ32(MSVDX_HOST_INTERRUPT_ENABLE_OFFSET));
> +	IPVR_DEBUG_REG("MSVDX_MAN_CLK_ENABLE_OFFSET value is 0x%x\n",
> +			IPVR_REG_READ32(MSVDX_MAN_CLK_ENABLE_OFFSET));
> +	IPVR_DEBUG_REG("MSVDX_CORE_ID_OFFSET value is 0x%x\n",
> +			IPVR_REG_READ32(MSVDX_CORE_ID_OFFSET));
> +	IPVR_DEBUG_REG("MSVDX_MMU_STATUS_OFFSET value is 0x%x\n",
> +			IPVR_REG_READ32(MSVDX_MMU_STATUS_OFFSET));
> +	IPVR_DEBUG_REG("FE_MSVDX_WDT_CONTROL_OFFSET value is 0x%x\n",
> +			IPVR_REG_READ32(FE_MSVDX_WDT_CONTROL_OFFSET));
> +	IPVR_DEBUG_REG("FE_MSVDX_WDTIMER_OFFSET value is 0x%x\n",
> +			IPVR_REG_READ32(FE_MSVDX_WDTIMER_OFFSET));
> +	IPVR_DEBUG_REG("BE_MSVDX_WDT_CONTROL_OFFSET value is 0x%x\n",
> +			IPVR_REG_READ32(BE_MSVDX_WDT_CONTROL_OFFSET));
> +	IPVR_DEBUG_REG("BE_MSVDX_WDTIMER_OFFSET value is 0x%x\n",
> +			IPVR_REG_READ32(BE_MSVDX_WDTIMER_OFFSET));
> +
> +	/* for MSVDX RENDEC REGISTER */
> +	IPVR_DEBUG_REG("VEC_SHIFTREG_CONTROL_OFFSET is 0x%x\n",
> +			IPVR_REG_READ32(VEC_SHIFTREG_CONTROL_OFFSET));
> +	IPVR_DEBUG_REG("MSVDX_RENDEC_CONTROL0_OFFSET value is 0x%x\n",
> +			IPVR_REG_READ32(MSVDX_RENDEC_CONTROL0_OFFSET));
> +	IPVR_DEBUG_REG("MSVDX_RENDEC_BUFFER_SIZE_OFFSET value is 0x%x\n",
> +			IPVR_REG_READ32(MSVDX_RENDEC_BUFFER_SIZE_OFFSET));
> +	IPVR_DEBUG_REG("MSVDX_RENDEC_BASE_ADDR0_OFFSET value is 0x%x\n",
> +			IPVR_REG_READ32(MSVDX_RENDEC_BASE_ADDR0_OFFSET));
> +	IPVR_DEBUG_REG("MSVDX_RENDEC_BASE_ADDR1_OFFSET value is 0x%x\n",
> +			IPVR_REG_READ32(MSVDX_RENDEC_BASE_ADDR1_OFFSET));
> +	IPVR_DEBUG_REG("MSVDX_RENDEC_READ_DATA_OFFSET value is 0x%x\n",
> +			IPVR_REG_READ32(MSVDX_RENDEC_READ_DATA_OFFSET));
> +	IPVR_DEBUG_REG("MSVDX_RENDEC_CONTEXT0_OFFSET value is 0x%x\n",
> +			IPVR_REG_READ32(MSVDX_RENDEC_CONTEXT0_OFFSET));
> +	IPVR_DEBUG_REG("MSVDX_RENDEC_CONTEXT1_OFFSET value is 0x%x\n",
> +			IPVR_REG_READ32(MSVDX_RENDEC_CONTEXT1_OFFSET));
> +	IPVR_DEBUG_REG("MSVDX_CMDS_END_SLICE_PICTURE_OFFSET value is 0x%x\n",
> +			IPVR_REG_READ32(MSVDX_CMDS_END_SLICE_PICTURE_OFFSET));
> +
> +	IPVR_DEBUG_REG("MSVDX_MMU_MEM_REQ value is 0x%x\n",
> +			IPVR_REG_READ32(MSVDX_MMU_MEM_REQ_OFFSET));
> +	IPVR_DEBUG_REG("MSVDX_SYS_MEMORY_DEBUG2 value is 0x%x\n",
> +			IPVR_REG_READ32(0x6fc));
> +}
> +
> +static void ved_upload_fw(struct ved_private *ved_priv,
> +				u32 address, const u32 words)
> +{
> +	u32 reg_val = 0;
> +	u32 cmd;
> +	u32 uCountReg, offset, mmu_ptd;
> +	u32 size = words * 4; /* byte count */
> +	u32 dma_channel = 0; /* Setup a Simple DMA for Ch0 */
> +	struct drm_ipvr_private *dev_priv = ved_priv->dev_priv;
> +
> +	IPVR_DEBUG_VED("VED: Upload firmware by DMA.\n");
> +	ved_get_mtx_control_from_dash(ved_priv);
> +
> +	/*
> +	 * dma transfers to/from the mtx have to be 32-bit aligned and
> +	 * in multiples of 32 bits
> +	 */
> +	IPVR_REG_WRITE32(address, MTX_SYSC_CDMAA_OFFSET);
> +
> +	/* burst size in multiples of 64 bits (allowed values are 2 or 4) */
> +	REGIO_WRITE_FIELD_LITE(reg_val, MTX_SYSC_CDMAC, BURSTSIZE, 4);
> +	/* false means write to mtx mem, true means read from mtx mem */
> +	REGIO_WRITE_FIELD_LITE(reg_val, MTX_SYSC_CDMAC, RNW, 0);
> +	/* begin transfer */
> +	REGIO_WRITE_FIELD_LITE(reg_val, MTX_SYSC_CDMAC, ENABLE, 1);
> +	/* specifies transfer size of the DMA operation in 32-bit words */
> +	REGIO_WRITE_FIELD_LITE(reg_val, MTX_SYSC_CDMAC, LENGTH, words);
> +	IPVR_REG_WRITE32(reg_val, MTX_SYSC_CDMAC_OFFSET);
> +
> +	/* toggle channel 0 usage between mtx and other msvdx peripherals */
> +	{
> +		reg_val = IPVR_REG_READ32(MSVDX_CONTROL_OFFSET);
> +		REGIO_WRITE_FIELD(reg_val, MSVDX_CONTROL, DMAC_CH0_SELECT, 0);
> +		IPVR_REG_WRITE32(reg_val, MSVDX_CONTROL_OFFSET);
> +	}
> +
> +	/* Clear the DMAC Stats */
> +	IPVR_REG_WRITE32(0, DMAC_DMAC_IRQ_STAT_OFFSET + dma_channel);
> +
> +	offset = ved_priv->fw_offset;
> +	IPVR_DEBUG_VED("fw mmu offset is 0x%x.\n", offset);
> +
> +	/* use bank 0 */
> +	cmd = 0;
> +	IPVR_REG_WRITE32(cmd, MSVDX_MMU_BANK_INDEX_OFFSET);
> +
> +	/* Write PTD to mmu base 0 */
> +	mmu_ptd = ipvr_get_default_pd_addr32(ved_priv->dev_priv->mmu);
> +	BUG_ON(mmu_ptd == 0);
> +	IPVR_REG_WRITE32(mmu_ptd, MSVDX_MMU_DIR_LIST_BASE_OFFSET + 0);
> +	IPVR_DEBUG_VED("mmu_ptd is 0x%x.\n", mmu_ptd);
> +
> +	/* Invalidate */
> +	reg_val = IPVR_REG_READ32(MSVDX_MMU_CONTROL0_OFFSET);
> +	reg_val &= ~0xf;
> +	REGIO_WRITE_FIELD(reg_val, MSVDX_MMU_CONTROL0, MMU_INVALDC, 1);
> +	IPVR_REG_WRITE32(reg_val, MSVDX_MMU_CONTROL0_OFFSET);
> +
> +	IPVR_REG_WRITE32(offset, DMAC_DMAC_SETUP_OFFSET + dma_channel);
> +
> +	/* Only use a single dma - assert that this is valid */
> +	if ((size >> 2) >= (1 << 15)) {
> +		IPVR_ERROR("DMA size beyond limit, abort firmware upload.\n");
> +		return;
> +	}
> +
> +	uCountReg = IPVR_DMAC_VALUE_COUNT(IPVR_DMAC_BSWAP_NO_SWAP, 0,
> +					  IPVR_DMAC_DIR_MEM_TO_PERIPH,
> +					  0, (size >> 2));
> +	/* Set the number of words to dma */
> +	IPVR_REG_WRITE32(uCountReg, DMAC_DMAC_COUNT_OFFSET + dma_channel);
> +
> +	cmd = IPVR_DMAC_VALUE_PERIPH_PARAM(IPVR_DMAC_ACC_DEL_0,
> +					   IPVR_DMAC_INCR_OFF,
> +					   IPVR_DMAC_BURST_2);
> +	IPVR_REG_WRITE32(cmd, DMAC_DMAC_PERIPH_OFFSET + dma_channel);
> +
> +	/* Set destination port for dma */
> +	cmd = 0;
> +	REGIO_WRITE_FIELD(cmd, DMAC_DMAC_PERIPHERAL_ADDR, ADDR,
> +			  MTX_SYSC_CDMAT_OFFSET);
> +	IPVR_REG_WRITE32(cmd, DMAC_DMAC_PERIPHERAL_ADDR_OFFSET + dma_channel);
> +
> +
> +	/* Finally, rewrite the count register with the enable bit set */
> +	IPVR_REG_WRITE32(uCountReg | DMAC_DMAC_COUNT_EN_MASK,
> +			DMAC_DMAC_COUNT_OFFSET + dma_channel);
> +
> +	/* Wait for all to be done */
> +	if (ved_wait_for_register(ved_priv,
> +				  DMAC_DMAC_IRQ_STAT_OFFSET + dma_channel,
> +				  DMAC_DMAC_IRQ_STAT_TRANSFER_FIN_MASK,
> +				  DMAC_DMAC_IRQ_STAT_TRANSFER_FIN_MASK,
> +				  2000000, 5)) {
> +		ved_setup_fw_dump(ved_priv, dma_channel);
> +		ved_release_mtx_control_from_dash(ved_priv);
> +		return;
> +	}
> +
> +	/* Assert that the MTX DMA port is all done as well */
> +	if (ved_wait_for_register(ved_priv,
> +			MTX_SYSC_CDMAS0_OFFSET,
> +			1, 1, 2000000, 5)) {
> +		ved_release_mtx_control_from_dash(ved_priv);
> +		return;
> +	}
> +
> +	ved_release_mtx_control_from_dash(ved_priv);
> +
> +	IPVR_DEBUG_VED("VED: Upload done\n");
> +}
> +
> +static int ved_get_fw_bo(struct ved_private *ved_priv,
> +				   const struct firmware **raw, char *name)
> +{
> +	int rc = 0;
> +	size_t fw_size;
> +	void *ptr = NULL;
> +	void *fw_bo_addr = NULL;
> +	u32 *last_word;
> +	struct ved_fw *fw;
> +
> +	rc = request_firmware(raw, name,
> +			      &ved_priv->dev_priv->dev->platformdev->dev);
> +	if (rc)
> +		return rc;
> +
> +	if (!*raw) {
> +		rc = -ENOMEM;
> +		IPVR_ERROR("VED: %s request_firmware failed: reason %d.\n",
> +			   name, rc);
> +		goto out;
> +	}
> +
> +	if ((*raw)->size < sizeof(struct ved_fw)) {
> +		rc = -ENOMEM;
> +		IPVR_ERROR("VED: %s is not the correct size (%zd).\n",
> +			   name, (*raw)->size);
> +		goto out;
> +	}
> +
> +	ptr = (void *)((*raw))->data;
> +	if (!ptr) {
> +		rc = -ENOMEM;
> +		IPVR_ERROR("VED: Failed to load %s.\n", name);
> +		goto out;
> +	}
> +
> +	/* another sanity check... */
> +	fw_size = sizeof(struct ved_fw) +
> +		  sizeof(u32) * ((struct ved_fw *) ptr)->text_size +
> +		  sizeof(u32) * ((struct ved_fw *) ptr)->data_size;
> +	if ((*raw)->size < fw_size) {
> +		rc = -ENOMEM;
> +		IPVR_ERROR("VED: %s is not the correct size (%zd).\n",
> +			  name, (*raw)->size);
> +		goto out;
> +	}
> +
> +	fw_bo_addr = ipvr_gem_object_vmap(ved_priv->fw_bo);
> +	if (!fw_bo_addr) {
> +		rc = -ENOMEM;
> +		IPVR_ERROR("VED: vmap failed for fw buffer.\n");
> +		goto out;
> +	}
> +
> +	fw = (struct ved_fw *)ptr;
> +	memset(fw_bo_addr, UNINITILISE_MEM, ved_priv->mtx_mem_size);
> +	memcpy(fw_bo_addr, ptr + sizeof(struct ved_fw),
> +	       sizeof(u32) * fw->text_size);
> +	memcpy(fw_bo_addr + (fw->data_location - MSVDX_MTX_DATA_LOCATION),
> +	       (void *)ptr + sizeof(struct ved_fw) + sizeof(u32) * fw->text_size,
> +	       sizeof(u32) * fw->data_size);
> +	last_word = (u32 *)(fw_bo_addr + ved_priv->mtx_mem_size - 4);
> +	/*
> +	 * Write a known value to the last word in mtx memory.
> +	 * Useful for detection of stack overrun.
> +	 */
> +	*last_word = STACKGUARDWORD;
> +
> +	vunmap(fw_bo_addr);
> +	IPVR_DEBUG_VED("VED: releasing firmware resources.\n");
> +	IPVR_DEBUG_VED("VED: Load firmware into BO successfully.\n");
> +out:
> +	release_firmware(*raw);
> +	return rc;
> +}
> +
> +static u32 *
> +ved_get_fw(struct ved_private *ved_priv, const struct firmware **raw,
> char *name)
> +{
> +	int rc = 0;
> +	size_t fw_size;
> +	void *ptr = NULL;
> +	struct ved_fw *fw;
> +	ved_priv->ved_fw_ptr = NULL;
> +
> +	rc = request_firmware(raw, name,
> +			      &ved_priv->dev_priv->dev->platformdev->dev);
> +	if (rc)
> +		return NULL;
> +
> +	if (!*raw) {
> +		IPVR_ERROR("VED: %s request_firmware failed: reason %d\n",
> +			   name, rc);
> +		goto out;
> +	}
> +
> +	if ((*raw)->size < sizeof(struct ved_fw)) {
> +		IPVR_ERROR("VED: %s is not the correct size (%zd)\n",
> +			  name, (*raw)->size);
> +		goto out;
> +	}
> +
> +	ptr = (void *)(*raw)->data;
> +	if (!ptr) {
> +		IPVR_ERROR("VED: Failed to load %s.\n", name);
> +		goto out;
> +	}
> +	fw = (struct ved_fw *)ptr;
> +
> +	/* another sanity check... */
> +	fw_size = sizeof(struct ved_fw) +
> +		  sizeof(u32) * fw->text_size +
> +		  sizeof(u32) * fw->data_size;
> +	if ((*raw)->size < fw_size) {
> +		IPVR_ERROR("VED: %s is not the correct size (%zd).\n",
> +			   name, (*raw)->size);
> +		goto out;
> +	}
> +
> +	ved_priv->ved_fw_ptr = kzalloc(fw_size, GFP_KERNEL);
> +	if (!ved_priv->ved_fw_ptr) {
> +		IPVR_ERROR("VED: allocate FW buffer failed.\n");
> +	} else {
> +		memcpy(ved_priv->ved_fw_ptr, ptr, fw_size);
> +		ved_priv->ved_fw_size = fw_size;
> +	}
> +
> +out:
> +	IPVR_DEBUG_VED("VED: releasing firmware resources.\n");
> +	release_firmware(*raw);
> +	return ved_priv->ved_fw_ptr;
> +}
> +
> +static void
> +ved_write_mtx_core_reg(struct ved_private *ved_priv,
> +			       const u32 core_reg, const u32 val)
> +{
> +	u32 reg = 0;
> +	struct drm_ipvr_private *dev_priv = ved_priv->dev_priv;
> +
> +	/* Put data in MTX_RW_DATA */
> +	IPVR_REG_WRITE32(val, MTX_REGISTER_READ_WRITE_DATA_OFFSET);
> +
> +	/* DREADY is set to 0 and request a write */
> +	reg = core_reg;
> +	REGIO_WRITE_FIELD_LITE(reg, MTX_REGISTER_READ_WRITE_REQUEST,
> +			       MTX_RNW, 0);
> +	REGIO_WRITE_FIELD_LITE(reg, MTX_REGISTER_READ_WRITE_REQUEST,
> +			       MTX_DREADY, 0);
> +	IPVR_REG_WRITE32(reg, MTX_REGISTER_READ_WRITE_REQUEST_OFFSET);
> +
> +	ved_wait_for_register(ved_priv,
> +			      MTX_REGISTER_READ_WRITE_REQUEST_OFFSET,
> +			      MTX_REGISTER_READ_WRITE_REQUEST_MTX_DREADY_MASK,
> +			      MTX_REGISTER_READ_WRITE_REQUEST_MTX_DREADY_MASK,
> +			      2000000, 5);
> +}
> +
> +int ved_alloc_fw_bo(struct ved_private *ved_priv)
> +{
> +	u32 core_rev;
> +	int ret;
> +	struct drm_ipvr_private *dev_priv = ved_priv->dev_priv;
> +
> +	core_rev = IPVR_REG_READ32(MSVDX_CORE_REV_OFFSET);
> +
> +	if ((core_rev & 0xffffff) < 0x020000)
> +		ved_priv->mtx_mem_size = 16 * 1024;
> +	else
> +		ved_priv->mtx_mem_size = 56 * 1024;
> +
> +	IPVR_DEBUG_INIT("VED: MTX mem size is 0x%08x bytes, "
> +			"allocating firmware BO of size 0x%08x.\n",
> +			ved_priv->mtx_mem_size,
> +			ved_priv->mtx_mem_size + 4096);
> +
> +	/* Allocate the new object */
> +	ved_priv->fw_bo = ipvr_gem_create(ved_priv->dev_priv,
> +					  ved_priv->mtx_mem_size + 4096, 0, 0);
> +	if (IS_ERR(ved_priv->fw_bo)) {
> +		IPVR_ERROR("VED: failed to allocate fw buffer: %ld.\n",
> +			PTR_ERR(ved_priv->fw_bo));
> +		ved_priv->fw_bo = NULL;
> +		return -ENOMEM;
> +	}
> +	ved_priv->fw_offset = ipvr_gem_object_mmu_offset(ved_priv->fw_bo);
> +	if (IPVR_IS_ERR(ved_priv->fw_offset)) {
> +		ret = -EINVAL;
> +		goto err_free_fw_bo;
> +	}
> +	return 0;
> +err_free_fw_bo:
> +	drm_gem_object_unreference_unlocked(&ved_priv->fw_bo->base);
> +	ved_priv->fw_bo = NULL;
> +	return ret;
> +}
> +
> +int ved_setup_fw(struct ved_private *ved_priv)
> +{
> +	u32 ram_bank_size;
> +	struct drm_ipvr_private *dev_priv = ved_priv->dev_priv;
> +	int ret = 0;
> +	struct ved_fw *fw;
> +	u32 *fw_ptr = NULL;
> +	u32 *text_ptr = NULL;
> +	u32 *data_ptr = NULL;
> +	const struct firmware *raw = NULL;
> +
> +	/* TODO: assert the clock is on; if not, turn it on to upload code */
> +	IPVR_DEBUG_VED("VED: ved_setup_fw.\n");
> +
> +	ved_set_clocks(ved_priv, clk_enable_all);
> +
> +	/* Reset MTX */
> +	IPVR_REG_WRITE32(MTX_SOFT_RESET_MTX_RESET_MASK,
> +			MTX_SOFT_RESET_OFFSET);
> +
> +	IPVR_REG_WRITE32(FIRMWAREID, MSVDX_COMMS_FIRMWARE_ID);
> +
> +	IPVR_REG_WRITE32(0, MSVDX_COMMS_ERROR_TRIG);
> +	IPVR_REG_WRITE32(199, MTX_SYSC_TIMERDIV_OFFSET); /* MTX_SYSC_TIMERDIV */
> +	IPVR_REG_WRITE32(0, MSVDX_EXT_FW_ERROR_STATE); /* EXT_FW_ERROR_STATE */
> +	IPVR_REG_WRITE32(0, MSVDX_COMMS_MSG_COUNTER);
> +	IPVR_REG_WRITE32(0, MSVDX_COMMS_SIGNATURE);
> +	IPVR_REG_WRITE32(0, MSVDX_COMMS_TO_HOST_RD_INDEX);
> +	IPVR_REG_WRITE32(0, MSVDX_COMMS_TO_HOST_WRT_INDEX);
> +	IPVR_REG_WRITE32(0, MSVDX_COMMS_TO_MTX_RD_INDEX);
> +	IPVR_REG_WRITE32(0, MSVDX_COMMS_TO_MTX_WRT_INDEX);
> +	IPVR_REG_WRITE32(0, MSVDX_COMMS_FW_STATUS);
> +	IPVR_REG_WRITE32(DSIABLE_IDLE_GPIO_SIG |
> +			DSIABLE_Auto_CLOCK_GATING |
> +			RETURN_VDEB_DATA_IN_COMPLETION |
> +			NOT_ENABLE_ON_HOST_CONCEALMENT,
> +			MSVDX_COMMS_OFFSET_FLAGS);
> +	IPVR_REG_WRITE32(0, MSVDX_COMMS_SIGNATURE);
> +
> +	/* read register bank size */
> +	{
> +		u32 bank_size, reg;
> +		reg = IPVR_REG_READ32(MSVDX_MTX_RAM_BANK_OFFSET);
> +		bank_size = REGIO_READ_FIELD(reg, MSVDX_MTX_RAM_BANK,
> +					     MTX_RAM_BANK_SIZE);
> +		ram_bank_size = (u32)(1 << (bank_size + 2));
> +	}
> +
> +	IPVR_DEBUG_VED("VED: RAM bank size = %d bytes\n", ram_bank_size);
> +
> +	/* if FW already loaded from storage */
> +	if (ved_priv->ved_fw_ptr) {
> +		fw_ptr = ved_priv->ved_fw_ptr;
> +	} else {
> +		fw_ptr = ved_get_fw(ved_priv, &raw, FIRMWARE_NAME);
> +		IPVR_DEBUG_VED("VED: load msvdx_fw_mfld_DE2.0.bin by udevd\n");
> +	}
> +	if (!fw_ptr) {
> +		IPVR_ERROR("VED: load ved_fw.bin failed, is udevd running?\n");
> +		ret = -ENOENT;
> +		goto out;
> +	}
> +
> +	if (!ved_priv->fw_loaded_to_bo) { /* Load firmware into BO */
> +		IPVR_DEBUG_VED("MSVDX: load ved_fw.bin by udevd into BO\n");
> +		ret = ved_get_fw_bo(ved_priv, &raw, FIRMWARE_NAME);
> +		if (ret) {
> +			IPVR_ERROR("VED: failed to call ved_get_fw_bo: %d.\n", ret);
> +			goto out;
> +		}
> +		ved_priv->fw_loaded_to_bo = true;
> +	}
> +
> +	fw = (struct ved_fw *) fw_ptr;
> +
> +	/* need check fw->ver? */
> +	text_ptr = (u32 *)((u8 *) fw_ptr + sizeof(struct ved_fw));
> +	data_ptr = text_ptr + fw->text_size;
> +
> +	/* maybe we can judge fw version according to fw text size */
> +
> +	IPVR_DEBUG_VED("VED: Retrieved pointers for firmware\n");
> +	IPVR_DEBUG_VED("VED: text_size: %d\n", fw->text_size);
> +	IPVR_DEBUG_VED("VED: data_size: %d\n", fw->data_size);
> +	IPVR_DEBUG_VED("VED: data_location: 0x%x\n", fw->data_location);
> +	IPVR_DEBUG_VED("VED: First 4 bytes of text: 0x%x\n", *text_ptr);
> +	IPVR_DEBUG_VED("VED: First 4 bytes of data: 0x%x\n", *data_ptr);
> +	IPVR_DEBUG_VED("VED: Uploading firmware\n");
> +
> +	ved_upload_fw(ved_priv, 0, ved_priv->mtx_mem_size / 4);
> +
> +	/* Set starting PC address */
> +	ved_write_mtx_core_reg(ved_priv, MTX_PC, PC_START_ADDRESS);
> +
> +	/* Turn on the thread */
> +	IPVR_REG_WRITE32(MTX_ENABLE_MTX_ENABLE_MASK, MTX_ENABLE_OFFSET);
> +
> +	/* Wait for the signature value to be written back */
> +	ret = ved_wait_for_register(ved_priv, MSVDX_COMMS_SIGNATURE,
> +				    MSVDX_COMMS_SIGNATURE_VALUE,
> +				    0xffffffff, /* Enabled bits */
> +				    2000000, 5);
> +	if (ret) {
> +		IPVR_ERROR("VED: firmware failed to initialize.\n");
> +		goto out;
> +	}
> +
> +	IPVR_DEBUG_VED("VED: MTX Initial indications OK.\n");
> +	IPVR_DEBUG_VED("VED: MSVDX_COMMS_AREA_ADDR = %08x.\n",
> +		       MSVDX_COMMS_AREA_ADDR);
> +out:
> +	/* no need to put fw bo, we will do it at driver unload */
> +	return ret;
> +}
> +
> +
> +/* This value is hardcoded in FW */
> +#define WDT_CLOCK_DIVIDER 128
> +int ved_post_boot_init(struct ved_private *ved_priv)
> +{
> +	u32 device_node_flags =
> +			DSIABLE_IDLE_GPIO_SIG | DSIABLE_Auto_CLOCK_GATING |
> +			RETURN_VDEB_DATA_IN_COMPLETION |
> +			NOT_ENABLE_ON_HOST_CONCEALMENT;
> +	int reg_val = 0;
> +
> +	/* DDK set fe_wdt_clks as 0x820 and be_wdt_clks as 0x8200 */
> +	u32 fe_wdt_clks = 0x334 * WDT_CLOCK_DIVIDER;
> +	u32 be_wdt_clks = 0x2008 * WDT_CLOCK_DIVIDER;
> +	struct drm_ipvr_private *dev_priv = ved_priv->dev_priv;
> +
> +	IPVR_REG_WRITE32(FIRMWAREID, MSVDX_COMMS_FIRMWARE_ID);
> +	IPVR_REG_WRITE32(device_node_flags, MSVDX_COMMS_OFFSET_FLAGS);
> +
> +	/* read register bank size */
> +	{
> +		u32 ram_bank_size;
> +		u32 bank_size, reg;
> +		reg = IPVR_REG_READ32(MSVDX_MTX_RAM_BANK_OFFSET);
> +		bank_size = REGIO_READ_FIELD(reg, MSVDX_MTX_RAM_BANK,
> +					     MTX_RAM_BANK_SIZE);
> +		ram_bank_size = (u32)(1 << (bank_size + 2));
> +		IPVR_DEBUG_INIT("VED: RAM bank size = %d bytes\n",
> +				ram_bank_size);
> +	}
> +	/* host end */
> +
> +	/* DDK setup tiling region here */
> +	/* DDK set MMU_CONTROL2 register */
> +
> +	/* set watchdog timer here */
> +	REGIO_WRITE_FIELD(reg_val, FE_MSVDX_WDT_CONTROL,
> +			  FE_WDT_CNT_CTRL, 0x3);
> +	REGIO_WRITE_FIELD(reg_val, FE_MSVDX_WDT_CONTROL,
> +			  FE_WDT_ENABLE, 0);
> +	REGIO_WRITE_FIELD(reg_val, FE_MSVDX_WDT_CONTROL,
> +			  FE_WDT_ACTION0, 1);
> +	REGIO_WRITE_FIELD(reg_val, FE_MSVDX_WDT_CONTROL,
> +			  FE_WDT_CLEAR_SELECT, 1);
> +	REGIO_WRITE_FIELD(reg_val, FE_MSVDX_WDT_CONTROL,
> +			  FE_WDT_CLKDIV_SELECT, 7);
> +	IPVR_REG_WRITE32(fe_wdt_clks / WDT_CLOCK_DIVIDER,
> +			FE_MSVDX_WDT_COMPAREMATCH_OFFSET);
> +	IPVR_REG_WRITE32(reg_val, FE_MSVDX_WDT_CONTROL_OFFSET);
> +
> +	reg_val = 0;
> +	/* DDK set BE_WDT_CNT_CTRL as 0x5 and BE_WDT_CLEAR_SELECT as 0x1 */
> +	REGIO_WRITE_FIELD(reg_val, BE_MSVDX_WDT_CONTROL,
> +			  BE_WDT_CNT_CTRL, 0x7);
> +	REGIO_WRITE_FIELD(reg_val, BE_MSVDX_WDT_CONTROL,
> +			  BE_WDT_ENABLE, 0);
> +	REGIO_WRITE_FIELD(reg_val, BE_MSVDX_WDT_CONTROL,
> +			  BE_WDT_ACTION0, 1);
> +	REGIO_WRITE_FIELD(reg_val, BE_MSVDX_WDT_CONTROL,
> +			  BE_WDT_CLEAR_SELECT, 0xd);
> +	REGIO_WRITE_FIELD(reg_val, BE_MSVDX_WDT_CONTROL,
> +			  BE_WDT_CLKDIV_SELECT, 7);
> +
> +	IPVR_REG_WRITE32(be_wdt_clks / WDT_CLOCK_DIVIDER,
> +			BE_MSVDX_WDT_COMPAREMATCH_OFFSET);
> +	IPVR_REG_WRITE32(reg_val, BE_MSVDX_WDT_CONTROL_OFFSET);
> +
> +	return ved_rendec_init_by_msg(ved_priv);
> +}
> +
> +int ved_post_init(struct ved_private *ved_priv)
> +{
> +	u32 cmd;
> +	int ret;
> +	struct drm_ipvr_private *dev_priv;
> +
> +	if (!ved_priv)
> +		return -EINVAL;
> +
> +	ved_priv->ved_busy = false;
> +	dev_priv = ved_priv->dev_priv;
> +
> +	/* Enable MMU by removing all bypass bits */
> +	IPVR_REG_WRITE32(0, MSVDX_MMU_CONTROL0_OFFSET);
> +
> +	ved_rendec_init_by_reg(ved_priv);
> +	if (!ved_priv->fw_bo) {
> +		ret = ved_alloc_fw_bo(ved_priv);
> +		if (ret) {
> +			IPVR_ERROR("VED: ved_alloc_fw_bo failed: %d.\n", ret);
> +			return ret;
> +		}
> +	}
> +	/* move fw loading to the place receiving first cmd buffer */
> +	ved_priv->ved_fw_loaded = false; /* need to load firmware */
> +	/* it should be set at punit post boot init phase */
> +	IPVR_REG_WRITE32(820, FE_MSVDX_WDT_COMPAREMATCH_OFFSET);
> +	IPVR_REG_WRITE32(8200, BE_MSVDX_WDT_COMPAREMATCH_OFFSET);
> +
> +	ved_clear_irq(ved_priv);
> +	ved_enable_irq(ved_priv);
> +
> +	cmd = IPVR_REG_READ32(VEC_SHIFTREG_CONTROL_OFFSET);
> +	REGIO_WRITE_FIELD(cmd, VEC_SHIFTREG_CONTROL,
> +			  SR_MASTER_SELECT, 1); /* Host */
> +	IPVR_REG_WRITE32(cmd, VEC_SHIFTREG_CONTROL_OFFSET);
> +
> +	return 0;
> +}
> +
> +int __must_check ved_core_init(struct drm_ipvr_private *dev_priv)
> +{
> +	int ret;
> +	struct ved_private *ved_priv;
> +	if (!dev_priv->ved_private) {
> +		ved_priv = kzalloc(sizeof(struct ved_private), GFP_KERNEL);
> +		if (!ved_priv) {
> +			IPVR_ERROR("VED: alloc ved_private failed.\n");
> +			return -ENOMEM;
> +		}
> +
> +		dev_priv->ved_private = ved_priv;
> +		ved_priv->dev_priv = dev_priv;
> +
> +		/* Initialize VED command queueing */
> +		INIT_LIST_HEAD(&ved_priv->ved_queue);
> +		mutex_init(&ved_priv->ved_mutex);
> +		spin_lock_init(&ved_priv->ved_lock);
> +		ved_priv->mmu_recover_page = alloc_page(GFP_DMA32);
> +		if (!ved_priv->mmu_recover_page) {
> +			ret = -ENOMEM;
> +			IPVR_ERROR("VED: alloc mmu_recover_page failed: %d.\n", ret);
> +			goto err_free_ved_priv;
> +		}
> +		IPVR_DEBUG_INIT("VED: successfully initialized ved_private.\n");
> +	}
> +	ved_priv = dev_priv->ved_private;
> +
> +	ret = ved_alloc_ccb_for_rendec(dev_priv->ved_private,
> +			RENDEC_A_SIZE, RENDEC_B_SIZE);
> +	if (unlikely(ret)) {
> +		IPVR_ERROR("VED: ved_alloc_ccb_for_rendec failed: %d.\n", ret);
> +		goto err_free_mmu_recover_page;
> +	}
> +
> +	ret = ved_post_init(ved_priv);
> +	if (unlikely(ret)) {
> +		IPVR_ERROR("VED: ved_post_init failed: %d.\n", ret);
> +		goto err_free_ccb;
> +	}
> +
> +	return 0;
> +err_free_ccb:
> +	ved_free_ccb(ved_priv);
> +err_free_mmu_recover_page:
> +	__free_page(ved_priv->mmu_recover_page);
> +err_free_ved_priv:
> +	kfree(ved_priv);
> +	dev_priv->ved_private = NULL;
> +	return ret;
> +}
> +
> +int ved_core_deinit(struct drm_ipvr_private *dev_priv)
> +{
> +	struct ved_private *ved_priv = dev_priv->ved_private;
> +	if (!ved_priv) {
> +		IPVR_ERROR("VED: ved_priv is NULL!\n");
> +		return -EINVAL;
> +	}
> +
> +	IPVR_DEBUG_INIT("VED: set the VED clock to 0.\n");
> +	ved_set_clocks(ved_priv, 0);
> +
> +	if (ved_priv->ccb0 || ved_priv->ccb1)
> +		ved_free_ccb(ved_priv);
> +
> +	if (ved_priv->fw_bo) {
> +		drm_gem_object_unreference_unlocked(&ved_priv->fw_bo->base);
> +		ved_priv->fw_bo = NULL;
> +	}
> +
> +	kfree(ved_priv->ved_fw_ptr);
> +
> +	if (ved_priv->mmu_recover_page)
> +		__free_page(ved_priv->mmu_recover_page);
> +
> +	kfree(ved_priv);
> +	dev_priv->ved_private = NULL;
> +
> +	return 0;
> +}
> diff --git a/drivers/gpu/drm/ipvr/ved_fw.h
> b/drivers/gpu/drm/ipvr/ved_fw.h
> new file mode 100644
> index 0000000..3ced466
> --- /dev/null
> +++ b/drivers/gpu/drm/ipvr/ved_fw.h
> @@ -0,0 +1,81 @@
> +/*********************************************************
> *****************
> + * ved_fw.h: VED firmware support header file
> + *
> + * Copyright (c) 2014 Intel Corporation, Hillsboro, OR, USA
> + * Copyright (c) Imagination Technologies Limited, UK
> + * Copyright (c) 2003 Tungsten Graphics, Inc., Cedar Park, Texas.
> + * All Rights Reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of
> MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public
> License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along
> with
> + * this program; if not, write to the Free Software Foundation, Inc.,
> + * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
> + *
> + * Authors:
> + *    Fei Jiang <fei.jiang@xxxxxxxxx>
> + *    Yao Cheng <yao.cheng@xxxxxxxxx>
> + *
> +
> **********************************************************
> ****************/
> +
> +
> +#ifndef _VED_FW_H_
> +#define _VED_FW_H_
> +
> +#include "ipvr_drv.h"
> +
> +#define FIRMWAREID		0x014d42ab
> +
> +/*  Non-Optimal Invalidation is not default */
> +#define MSVDX_DEVICE_NODE_FLAGS_MMU_NONOPT_INV	2
> +
> +#define FW_VA_RENDER_HOST_INT		0x00004000
> +#define MSVDX_DEVICE_NODE_FLAGS_MMU_HW_INVALIDATION	0x00000020
> +#define FW_DEVA_ERROR_DETECTED		0x08000000
> +
> +/* There is no work currently underway on the hardware */
> +#define MSVDX_FW_STATUS_HW_IDLE	0x00000001
> +#define MSVDX_DEVICE_NODE_FLAG_BRN23154_BLOCK_ON_FE	0x00000200
> +#define MSVDX_DEVICE_NODE_FLAGS_DEFAULT_D0				\
> +	(MSVDX_DEVICE_NODE_FLAGS_MMU_NONOPT_INV |			\
> +		MSVDX_DEVICE_NODE_FLAGS_MMU_HW_INVALIDATION |		\
> +		MSVDX_DEVICE_NODE_FLAG_BRN23154_BLOCK_ON_FE)
> +
> +#define MSVDX_DEVICE_NODE_FLAGS_DEFAULT_D1				\
> +	(MSVDX_DEVICE_NODE_FLAGS_MMU_HW_INVALIDATION |			\
> +		MSVDX_DEVICE_NODE_FLAG_BRN23154_BLOCK_ON_FE)
> +
> +#define MTX_CODE_BASE		(0x80900000)
> +#define MTX_DATA_BASE		(0x82880000)
> +#define PC_START_ADDRESS	(0x80900000)
> +
> +#define MTX_CORE_CODE_MEM	(0x10)
> +#define MTX_CORE_DATA_MEM	(0x18)
> +
> +#define RENDEC_A_SIZE	(4 * 1024 * 1024)
> +#define RENDEC_B_SIZE	(1024 * 1024)
> +
> +#define TERMINATION_SIZE	48
> +
> +#define MSVDX_RESET_NEEDS_REUPLOAD_FW		(0x2)
> +#define MSVDX_RESET_NEEDS_INIT_FW		(0x1)
> +
> +/* init/deinit all ved_private related */
> +int __must_check ved_core_init(struct drm_ipvr_private *dev_priv);
> +int ved_core_deinit(struct drm_ipvr_private *dev_priv);
> +
> +/* used for resetting VED after power saving */
> +int ved_setup_fw(struct ved_private *ved_priv);
> +int ved_core_reset(struct ved_private *ved_priv);
> +int ved_wait_for_register(struct ved_private *ved_priv,
> +			u32 offset, u32 value, u32 enable,
> +			u32 poll_cnt, u32 timeout);
> +
> +#endif
> diff --git a/drivers/gpu/drm/ipvr/ved_msg.h
> b/drivers/gpu/drm/ipvr/ved_msg.h
> new file mode 100644
> index 0000000..1a1e89d
> --- /dev/null
> +++ b/drivers/gpu/drm/ipvr/ved_msg.h
> @@ -0,0 +1,256 @@
> +/*********************************************************
> *****************
> + * ved_msg.h: VED message definition
> + *
> + * Copyright (c) 2014 Intel Corporation, Hillsboro, OR, USA
> + * Copyright (c) 2003 Imagination Technologies Limited, UK
> + * All Rights Reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of
> MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public
> License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along
> with
> + * this program; if not, write to the Free Software Foundation, Inc.,
> + * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
> + *
> + * Authors:
> + *    Fei Jiang <fei.jiang@xxxxxxxxx>
> + *    Li Zeng <li.zeng@xxxxxxxxx>
> + *    Yao Cheng <yao.cheng@xxxxxxxxx>
> + *
> +
> **********************************************************
> ****************/
> +
> +#ifndef _VED_MSG_H_
> +#define _VED_MSG_H_
> +
> +/* Start of parser specific Host->MTX messages. */
> +#define	FWRK_MSGID_START_PSR_HOSTMTX_MSG	(0x80)
> +
> +/* Start of parser specific MTX->Host messages. */
> +#define	FWRK_MSGID_START_PSR_MTXHOST_MSG	(0xC0)
> +
> +/* Host defined msg, for host use only; not recognized by the MTX */
> +#define	FWRK_MSGID_HOST_EMULATED		(0x40)
> +
> +/* This type defines the framework specified message ids */
> +enum {
> +	/* ! Sent by the VA driver on the host to the mtx firmware.
> +	 */
> +	MTX_MSGID_PADDING = 0,
> +	MTX_MSGID_INIT = FWRK_MSGID_START_PSR_HOSTMTX_MSG,
> +	MTX_MSGID_DECODE_FE,
> +	MTX_MSGID_DEBLOCK,
> +	MTX_MSGID_INTRA_OOLD,
> +	MTX_MSGID_DECODE_BE,
> +	MTX_MSGID_HOST_BE_OPP,
> +
> +	/*! Sent by the mtx firmware to itself.
> +	 */
> +	MTX_MSGID_RENDER_MC_INTERRUPT,
> +
> +	/* used to distinguish mrst and mfld */
> +	MTX_MSGID_DEBLOCK_MFLD = FWRK_MSGID_HOST_EMULATED,
> +	MTX_MSGID_INTRA_OOLD_MFLD,
> +	MTX_MSGID_DECODE_BE_MFLD,
> +	MTX_MSGID_HOST_BE_OPP_MFLD,
> +
> +	/*! Sent by the DXVA firmware on the MTX to the host.
> +	 */
> +	MTX_MSGID_COMPLETED = FWRK_MSGID_START_PSR_MTXHOST_MSG,
> +	MTX_MSGID_COMPLETED_BATCH,
> +	MTX_MSGID_DEBLOCK_REQUIRED,
> +	MTX_MSGID_TEST_RESPONCE,
> +	MTX_MSGID_ACK,
> +	MTX_MSGID_FAILED,
> +	MTX_MSGID_CONTIGUITY_WARNING,
> +	MTX_MSGID_HW_PANIC,
> +};
> +
> +#define MTX_GENMSG_SIZE_TYPE		u8
> +#define MTX_GENMSG_SIZE_MASK		(0xFF)
> +#define MTX_GENMSG_SIZE_SHIFT		(0)
> +#define MTX_GENMSG_SIZE_OFFSET		(0x0000)
> +
> +#define MTX_GENMSG_ID_TYPE		u8
> +#define MTX_GENMSG_ID_MASK		(0xFF)
> +#define MTX_GENMSG_ID_SHIFT		(0)
> +#define MTX_GENMSG_ID_OFFSET		(0x0001)
> +
> +#define MTX_GENMSG_HEADER_SIZE		2
> +
> +#define MTX_GENMSG_FENCE_TYPE		u16
> +#define MTX_GENMSG_FENCE_MASK		(0xFFFF)
> +#define MTX_GENMSG_FENCE_OFFSET		(0x0002)
> +#define MTX_GENMSG_FENCE_SHIFT		(0)
> +
> +#define FW_INVALIDATE_MMU		(0x0010)
> +
> +union msg_header {
> +	struct {
> +		u32 msg_size:8;
> +		u32 msg_type:8;
> +		u32 msg_fence:16;
> +	} bits;
> +	u32 value;
> +};
> +
> +struct fw_init_msg {
> +	union {
> +		struct {
> +			u32 msg_size:8;
> +			u32 msg_type:8;
> +			u32 reserved:16;
> +		} bits;
> +		u32 value;
> +	} header;
> +	u32 rendec_addr0;
> +	u32 rendec_addr1;
> +	union {
> +		struct {
> +			u32 rendec_size0:16;
> +			u32 rendec_size1:16;
> +		} bits;
> +		u32 value;
> +	} rendec_size;
> +};
> +
> +struct fw_decode_msg {
> +	union {
> +		struct {
> +			u32 msg_size:8;
> +			u32 msg_type:8;
> +			u32 msg_fence:16;
> +		} bits;
> +		u32 value;
> +	} header;
> +	union {
> +		struct {
> +			u32 flags:16;
> +			u32 buffer_size:16;
> +		} bits;
> +		u32 value;
> +	} flag_size;
> +	u32 crtl_alloc_addr;
> +	union {
> +		struct {
> +			u32 context:8;
> +			u32 mmu_ptd:24;
> +		} bits;
> +		u32 value;
> +	} mmu_context;
> +	u32 operating_mode;
> +};
> +
> +struct fw_deblock_msg {
> +	union {
> +		struct {
> +			u32 msg_size:8;
> +			u32 msg_type:8;
> +			u32 msg_fence:16;
> +		} bits;
> +		u32 value;
> +	} header;
> +	union {
> +		struct {
> +			u32 flags:16;
> +			u32 slice_field_type:2;
> +			u32 reserved:14;
> +		} bits;
> +		u32 value;
> +	} flag_type;
> +	u32 operating_mode;
> +	union {
> +		struct {
> +			u32 context:8;
> +			u32 mmu_ptd:24;
> +		} bits;
> +		u32 value;
> +	} mmu_context;
> +	union {
> +		struct {
> +			u32 frame_height_mb:16;
> +			u32 pic_width_mb:16;
> +		} bits;
> +		u32 value;
> +	} pic_size;
> +	u32 address_a0;
> +	u32 address_a1;
> +	u32 mb_param_address;
> +	u32 ext_stride_a;
> +	u32 address_b0;
> +	u32 address_b1;
> +	u32 alt_output_flags_b;
> +	/* additional msg outside of IMG msg */
> +	u32 address_c0;
> +	u32 address_c1;
> +};
> +
> +#define MTX_PADMSG_SIZE 2
> +struct fw_padding_msg {
> +	union {
> +		struct {
> +			u32 msg_size:8;
> +			u32 msg_type:8;
> +		} bits;
> +		u16 value;
> +	} header;
> +};
> +
> +struct fw_msg_header {
> +	union {
> +		struct {
> +			u32 msg_size:8;
> +			u32 msg_type:8;
> +			u32 msg_fence:16;
> +		} bits;
> +		u32 value;
> +	} header;
> +};
> +
> +struct fw_completed_msg {
> +	union {
> +		struct {
> +			u32 msg_size:8;
> +			u32 msg_type:8;
> +			u32 msg_fence:16;
> +		} bits;
> +		u32 value;
> +	} header;
> +	union {
> +		struct {
> +			u32 start_mb:16;
> +			u32 last_mb:16;
> +		} bits;
> +		u32 value;
> +	} mb;
> +	u32 flags;
> +	u32 vdebcr;
> +};
> +
> +struct fw_panic_msg {
> +	union {
> +		struct {
> +			u32 msg_size:8;
> +			u32 msg_type:8;
> +			u32 msg_fence:16;
> +		} bits;
> +		u32 value;
> +	} header;
> +	u32 fe_status;
> +	u32 be_status;
> +	union {
> +		struct {
> +			u32 last_mb:16;
> +			u32 reserved2:16;
> +		} bits;
> +		u32 value;
> +	} mb;
> +};
> +
> +
> +#endif
> diff --git a/drivers/gpu/drm/ipvr/ved_pm.c
> b/drivers/gpu/drm/ipvr/ved_pm.c
> new file mode 100644
> index 0000000..ee11c39
> --- /dev/null
> +++ b/drivers/gpu/drm/ipvr/ved_pm.c
> @@ -0,0 +1,335 @@
> +/*********************************************************
> *****************
> + * ved_pm.c: VED power management support
> + *
> + * Copyright (c) 2014 Intel Corporation, Hillsboro, OR, USA
> + * All Rights Reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of
> MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public
> License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along
> with
> + * this program; if not, write to the Free Software Foundation, Inc.,
> + * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
> + *
> + * Authors:
> + *    Yao Cheng <yao.cheng@xxxxxxxxx>
> + *
> +
> **********************************************************
> ****************/
> +
> +
> +#include "ipvr_trace.h"
> +#include "ved_pm.h"
> +#include "ved_reg.h"
> +#include "ved_cmd.h"
> +#include "ved_fw.h"
> +#ifdef CONFIG_INTEL_SOC_PMC
> +#include <linux/intel_mid_pm.h>
> +#endif
> +#include <linux/module.h>
> +#include <linux/pm_runtime.h>
> +
> +extern int drm_ipvr_freq;
> +
> +#define PCI_ROOT_MSGBUS_CTRL_REG      0xD0
> +#define PCI_ROOT_MSGBUS_DATA_REG      0xD4
> +#define PCI_ROOT_MSGBUS_CTRL_EXT_REG  0xD8
> +#define PCI_ROOT_MSGBUS_READ          0x10
> +#define PCI_ROOT_MSGBUS_WRITE         0x11
> +#define PCI_ROOT_MSGBUS_DWORD_ENABLE  0xf0
> +
> +/* VED power state set/get */
> +#define PUNIT_PORT			0x04
> +#define VEDSSPM0 			0x32
> +#define VEDSSPM1 			0x33
> +#define VEDSSC				0x1
> +
> +/* VED frequency set/get */
> +#define IP_FREQ_VALID     0x80     /* Freq is valid bit */
> +
> +#define IP_FREQ_SIZE         5     /* number of bits in freq fields */
> +#define IP_FREQ_MASK      0x1f     /* Bit mask for freq field */
> +
> +/*  Positions of various frequency fields */
> +#define IP_FREQ_POS          0     /* Freq control [4:0] */
> +#define IP_FREQ_GUAR_POS     8     /* Freq guar   [12:8] */
> +#define IP_FREQ_STAT_POS    24     /* Freq status [28:24] */
> +
> +enum APM_VED_STATUS {
> +	VED_APM_STS_D0 = 0,
> +	VED_APM_STS_D1,
> +	VED_APM_STS_D2,
> +	VED_APM_STS_D3
> +};
> +
> +#define GET_FREQ_NUMBER(freq_code)	((1600 * 2) / ((freq_code) + 1))
> +/* valid freq code: 0x9, 0xb, 0xd, 0xf, 0x11, 0x13 */
> +#define FREQ_CODE_CLAMP(code)	(((code) < 0x9) ? 0x9 : (((code) > 0x13) ? 0x13 : (code)))
> +#define GET_FREQ_CODE(freq_num)	FREQ_CODE_CLAMP((((1600 * 2 / (freq_num) + 1) >> 1) << 1) - 1)
> +
> +#ifdef CONFIG_INTEL_SOC_PMC
> +extern int pmc_nc_set_power_state(int islands, int state_type, int reg);
> +extern int pmc_nc_get_power_state(int islands, int reg);
> +#endif
> +
> +static int ved_save_context(struct ved_private *ved_priv)
> +{
> +	int offset;
> +	int ret;
> +	struct drm_ipvr_private *dev_priv = ved_priv->dev_priv;
> +
> +	ved_priv->ved_needs_reset = 1;
> +	/* Reset MTX */
> +	IPVR_REG_WRITE32(MTX_SOFT_RESET_MTXRESET, MTX_SOFT_RESET_OFFSET);
> +
> +	/* why does msvdx need a reset before powering it off? check with IMG */
> +	ret = ved_core_reset(ved_priv);
> +	if (unlikely(ret))
> +		IPVR_DEBUG_WARN("failed to call ved_core_reset: %d\n", ret);
> +
> +	/* Initialize VEC Local RAM */
> +	for (offset = 0; offset < VEC_LOCAL_MEM_BYTE_SIZE / 4; ++offset)
> +		IPVR_REG_WRITE32(0, VEC_LOCAL_MEM_OFFSET + offset * 4);
> +
> +	return 0;
> +}
> +
> +static u32 ipvr_msgbus_read32(struct pci_dev *pci_root, u8 port, u32 addr)
> +{
> +	u32 data;
> +	u32 cmd;
> +	u32 cmdext;
> +
> +	cmd = (PCI_ROOT_MSGBUS_READ << 24) | (port << 16) |
> +		((addr & 0xff) << 8) | PCI_ROOT_MSGBUS_DWORD_ENABLE;
> +	cmdext = addr & 0xffffff00;
> +
> +	if (cmdext) {
> +		/* This resets to 0 automatically, no need to write 0 */
> +		pci_write_config_dword(pci_root, PCI_ROOT_MSGBUS_CTRL_EXT_REG,
> +				       cmdext);
> +	}
> +
> +	pci_write_config_dword(pci_root, PCI_ROOT_MSGBUS_CTRL_REG, cmd);
> +	pci_read_config_dword(pci_root, PCI_ROOT_MSGBUS_DATA_REG, &data);
> +
> +	return data;
> +}
> +
> +static void ipvr_msgbus_write32(struct pci_dev *pci_root, u8 port, u32 addr,
> +				u32 data)
> +{
> +	u32 cmd;
> +	u32 cmdext;
> +
> +	cmd = (PCI_ROOT_MSGBUS_WRITE << 24) | (port << 16) |
> +		((addr & 0xff) << 8) | PCI_ROOT_MSGBUS_DWORD_ENABLE;
> +	cmdext = addr & 0xffffff00;
> +
> +	pci_write_config_dword(pci_root, PCI_ROOT_MSGBUS_DATA_REG, data);
> +
> +	if (cmdext) {
> +		/* This resets to 0 automatically, no need to write 0 */
> +		pci_write_config_dword(pci_root, PCI_ROOT_MSGBUS_CTRL_EXT_REG,
> +				       cmdext);
> +	}
> +
> +	pci_write_config_dword(pci_root, PCI_ROOT_MSGBUS_CTRL_REG, cmd);
> +}
> +
> +static int ipvr_pm_cmd_freq_wait(struct pci_dev *pci_root, u32 reg_freq,
> +				 u32 *freq_code_rlzd)
> +{
> +	int tcount;
> +	u32 freq_val;
> +
> +	for (tcount = 0; ; tcount++) {
> +		freq_val = ipvr_msgbus_read32(pci_root, PUNIT_PORT, reg_freq);
> +		if ((freq_val & IP_FREQ_VALID) == 0)
> +			break;
> +		if (tcount > 500) {
> +			IPVR_ERROR("P-Unit freq request wait timeout %x\n",
> +				freq_val);
> +			return -EBUSY;
> +		}
> +		udelay(1);
> +	}
> +
> +	if (freq_code_rlzd) {
> +		*freq_code_rlzd = ((freq_val >> IP_FREQ_STAT_POS) &
> +			IP_FREQ_MASK);
> +	}
> +
> +	return 0;
> +}
> +
> +static int ipvr_pm_cmd_freq_get(struct pci_dev *pci_root, u32 reg_freq)
> +{
> +	u32 freq_val;
> +	int freq_code = 0;
> +
> +	ipvr_pm_cmd_freq_wait(pci_root, reg_freq, NULL);
> +
> +	freq_val = ipvr_msgbus_read32(pci_root, PUNIT_PORT, reg_freq);
> +	freq_code = (int)((freq_val >> IP_FREQ_STAT_POS) & ~IP_FREQ_VALID);
> +	return freq_code;
> +}
> +
> +static int ipvr_pm_cmd_freq_set(struct pci_dev *pci_root, u32 reg_freq,
> +				u32 freq_code, u32 *p_freq_code_rlzd)
> +{
> +	u32 freq_val;
> +	u32 freq_code_realized;
> +	int rva;
> +
> +	rva = ipvr_pm_cmd_freq_wait(pci_root, reg_freq, NULL);
> +	if (rva < 0) {
> +		IPVR_ERROR("pm_cmd_freq_wait 1 failed: %d\n", rva);
> +		return rva;
> +	}
> +
> +	freq_val = IP_FREQ_VALID | freq_code;
> +	ipvr_msgbus_write32(pci_root, PUNIT_PORT, reg_freq, freq_val);
> +
> +	rva = ipvr_pm_cmd_freq_wait(pci_root, reg_freq, &freq_code_realized);
> +	if (rva < 0) {
> +		IPVR_ERROR("pm_cmd_freq_wait 2 failed: %d\n", rva);
> +		return rva;
> +	}
> +
> +	if (p_freq_code_rlzd)
> +		*p_freq_code_rlzd = freq_code_realized;
> +
> +	return rva;
> +}
> +
> +static int ved_set_freq(struct drm_ipvr_private *dev_priv, u32 freq_code)
> +{
> +	u32 freq_code_rlzd = 0;
> +	int ret;
> +
> +	ret = ipvr_pm_cmd_freq_set(dev_priv->pci_root,
> +		VEDSSPM1, freq_code, &freq_code_rlzd);
> +	if (ret < 0) {
> +		IPVR_ERROR("failed to set frequency, current is %x\n",
> +			freq_code_rlzd);
> +	}
> +
> +	return ret;
> +}
> +
> +static int ved_get_freq(struct drm_ipvr_private *dev_priv)
> +{
> +	return ipvr_pm_cmd_freq_get(dev_priv->pci_root, VEDSSPM1);
> +}
> +
> +#ifdef CONFIG_INTEL_SOC_PMC
> +static inline bool do_power_on(struct drm_ipvr_private *dev_priv)
> +{
> +	if (pmc_nc_set_power_state(VEDSSC, 0, VEDSSPM0)) {
> +		IPVR_ERROR("VED: pmu_nc_set_power_state ON fail!\n");
> +		return false;
> +	}
> +	return true;
> +}
> +static inline bool do_power_off(struct drm_ipvr_private *dev_priv)
> +{
> +	if (pmc_nc_set_power_state(VEDSSC, 1, VEDSSPM0)) {
> +		IPVR_ERROR("VED: pmu_nc_set_power_state OFF fail!\n");
> +		return false;
> +	}
> +	return true;
> +}
> +#else
> +static inline bool do_power_on(struct drm_ipvr_private *dev_priv)
> +{
> +	u32 pwr_sts;
> +	do {
> +		ipvr_msgbus_write32(dev_priv->pci_root, PUNIT_PORT,
> +				    VEDSSPM0, VED_APM_STS_D0);
> +		udelay(10);
> +		pwr_sts = ipvr_msgbus_read32(dev_priv->pci_root,
> +					     PUNIT_PORT, VEDSSPM0);
> +	} while (pwr_sts != 0x0);
> +	do {
> +		ipvr_msgbus_write32(dev_priv->pci_root, PUNIT_PORT,
> +				    VEDSSPM0, VED_APM_STS_D3);
> +		udelay(10);
> +		pwr_sts = ipvr_msgbus_read32(dev_priv->pci_root,
> +					     PUNIT_PORT, VEDSSPM0);
> +	} while (pwr_sts != 0x03000003);
> +	do {
> +		ipvr_msgbus_write32(dev_priv->pci_root, PUNIT_PORT,
> +				    VEDSSPM0, VED_APM_STS_D0);
> +		udelay(10);
> +		pwr_sts = ipvr_msgbus_read32(dev_priv->pci_root,
> +					     PUNIT_PORT, VEDSSPM0);
> +	} while (pwr_sts != 0x0);
> +	return true;
> +}
> +static inline bool do_power_off(struct drm_ipvr_private *dev_priv)
> +{
> +	u32 pwr_sts;
> +	do {
> +		ipvr_msgbus_write32(dev_priv->pci_root, PUNIT_PORT,
> +				    VEDSSPM0, VED_APM_STS_D3);
> +		udelay(10);
> +		pwr_sts = ipvr_msgbus_read32(dev_priv->pci_root,
> +					     PUNIT_PORT, VEDSSPM0);
> +	} while (pwr_sts != 0x03000003);
> +	return true;
> +}
> +#endif
> +
> +bool ved_power_on(struct drm_ipvr_private *dev_priv)
> +{
> +	int ved_freq_code_before, ved_freq_code_requested, ved_freq_code_after;
> +	IPVR_DEBUG_PM("VED: power on msvdx.\n");
> +
> +	if (dev_priv->ved_private)
> +		dev_priv->ved_private->ved_busy = false;
> +	if (!do_power_on(dev_priv))
> +		return false;
> +
> +	ved_freq_code_before = ved_get_freq(dev_priv);
> +	ved_freq_code_requested = GET_FREQ_CODE(drm_ipvr_freq);
> +	if (ved_set_freq(dev_priv, ved_freq_code_requested))
> +		IPVR_ERROR("Failed to set VED frequency\n");
> +
> +	ved_freq_code_after = ved_get_freq(dev_priv);
> +	IPVR_DEBUG_PM("VED frequency requested %dMHz: actual %dMHz => %dMHz\n",
> +		drm_ipvr_freq, GET_FREQ_NUMBER(ved_freq_code_before),
> +		GET_FREQ_NUMBER(ved_freq_code_after));
> +	drm_ipvr_freq = GET_FREQ_NUMBER(ved_freq_code_after);
> +
> +	trace_ved_power_on(drm_ipvr_freq);
> +	return true;
> +}
> +
> +bool ved_power_off(struct drm_ipvr_private *dev_priv)
> +{
> +	int ved_freq_code;
> +	int ret;
> +	IPVR_DEBUG_PM("VED: power off msvdx.\n");
> +
> +	if (dev_priv->ved_private) {
> +		ret = ved_save_context(dev_priv->ved_private);
> +		if (unlikely(ret)) {
> +			IPVR_ERROR("Failed to save VED context: %d, stop powering off\n", ret);
> +			return false;
> +		}
> +		dev_priv->ved_private->ved_busy = false;
> +	}
> +
> +	ved_freq_code = ved_get_freq(dev_priv);
> +	drm_ipvr_freq = GET_FREQ_NUMBER(ved_freq_code);
> +	IPVR_DEBUG_PM("VED frequency: code %d (%dMHz)\n",
> +		      ved_freq_code, drm_ipvr_freq);
> +
> +	if (!do_power_off(dev_priv))
> +		return false;
> +
> +	trace_ved_power_off(drm_ipvr_freq);
> +	return true;
> +}
> +
> +bool is_ved_on(struct drm_ipvr_private *dev_priv)
> +{
> +	u32 pwr_sts;
> +	pwr_sts = ipvr_msgbus_read32(dev_priv->pci_root, PUNIT_PORT, VEDSSPM0);
> +	return (pwr_sts == VED_APM_STS_D0);
> +}
> diff --git a/drivers/gpu/drm/ipvr/ved_pm.h b/drivers/gpu/drm/ipvr/ved_pm.h
> new file mode 100644
> index 0000000..4ed1485
> --- /dev/null
> +++ b/drivers/gpu/drm/ipvr/ved_pm.h
> @@ -0,0 +1,36 @@
> +/****************************************************************************
> + * ved_pm.h: VED power management header file
> + *
> + * Copyright (c) 2014 Intel Corporation, Hillsboro, OR, USA
> + * All Rights Reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; if not, write to the Free Software Foundation, Inc.,
> + * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
> + *
> + * Authors:
> + *    Yao Cheng <yao.cheng@xxxxxxxxx>
> + *
> + ****************************************************************************/
> +
> +#ifndef _VED_PM_H_
> +#define _VED_PM_H_
> +
> +#include "ipvr_drv.h"
> +
> +bool is_ved_on(struct drm_ipvr_private *dev_priv);
> +
> +bool __must_check ved_power_on(struct drm_ipvr_private *dev_priv);
> +
> +bool ved_power_off(struct drm_ipvr_private *dev_priv);
> +
> +#endif
> diff --git a/drivers/gpu/drm/ipvr/ved_reg.h b/drivers/gpu/drm/ipvr/ved_reg.h
> new file mode 100644
> index 0000000..b7c69cf
> --- /dev/null
> +++ b/drivers/gpu/drm/ipvr/ved_reg.h
> @@ -0,0 +1,561 @@
> +/****************************************************************************
> + * ved_reg.h: VED register definition
> + *
> + * Copyright (c) 2014 Intel Corporation, Hillsboro, OR, USA
> + * Copyright (c) Imagination Technologies Limited, UK
> + * All Rights Reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; if not, write to the Free Software Foundation, Inc.,
> + * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
> + *
> + * Authors:
> + *    Fei Jiang <fei.jiang@xxxxxxxxx>
> + *    Yao Cheng <yao.cheng@xxxxxxxxx>
> + *
> + ****************************************************************************/
> +
> +#ifndef _VED_REG_H_
> +#define _VED_REG_H_
> +
> +#include "ipvr_drv.h"
> +
> +#define REGISTER(__group__, __reg__)	(__group__##_##__reg__##_OFFSET)
> +
> +#define MTX_INTERNAL_REG(R_SPECIFIER, U_SPECIFIER)	\
> +	(((R_SPECIFIER)<<4) | (U_SPECIFIER))
> +#define MTX_PC		MTX_INTERNAL_REG(0, 5)
> +
> +#define MEMIO_READ_FIELD(vpMem, field)					\
> +	((u32)(((*((field##_TYPE*)(((u32)vpMem) + field##_OFFSET)))	\
> +			& field##_MASK) >> field##_SHIFT))
> +
> +#define MEMIO_WRITE_FIELD(vpMem, field, value)				\
> +do {									\
> +	((*((field##_TYPE*)(((u32)vpMem) + field##_OFFSET))) =		\
> +		((*((field##_TYPE*)(((u32)vpMem) + field##_OFFSET)))	\
> +			& (field##_TYPE)~field##_MASK) |		\
> +	(field##_TYPE)(((u32)(value) << field##_SHIFT) & field##_MASK)); \
> +} while (0)
> +
> +#define MEMIO_WRITE_FIELD_LITE(vpMem, field, value)			\
> +do {									\
> +	 (*((field##_TYPE*)(((u32)vpMem) + field##_OFFSET))) =		\
> +	((*((field##_TYPE*)(((u32)vpMem) + field##_OFFSET))) |		\
> +		(field##_TYPE)(((u32)(value) << field##_SHIFT)));	\
> +} while (0)
> +
> +#define REGIO_READ_FIELD(reg_val, reg, field)				\
> +	((reg_val & reg##_##field##_MASK) >> reg##_##field##_SHIFT)
> +
> +#define REGIO_WRITE_FIELD(reg_val, reg, field, value)			\
> +do {									\
> +	(reg_val) =							\
> +	((reg_val) & ~(reg##_##field##_MASK)) |				\
> +	(((value) << (reg##_##field##_SHIFT)) & (reg##_##field##_MASK)); \
> +} while (0)
> +
> +#define REGIO_WRITE_FIELD_LITE(reg_val, reg, field, value)		\
> +do {									\
> +	(reg_val) = ((reg_val) | ((value) << (reg##_##field##_SHIFT)));	\
> +} while (0)
> +
> +/****** MSVDX.Technical Reference Manual.2.0.2.4.External VXD38x *************
> +Offset address			Name			Identifier
> +0x0000 - 0x03FF (1024B)		MTX Register		REG_MSVDX_MTX
> +0x0400 - 0x047F (128B)		VDMC Register		REG_MSVDX_VDMC
> +0x0480 - 0x04FF (128B)		VDEB Register		REG_MSVDX_VDEB
> +0x0500 - 0x05FF (256B)		DMAC Register		REG_MSVDX_DMAC
> +0x0600 - 0x06FF (256B)		MSVDX Core Register	REG_MSVDX_SYS
> +0x0700 - 0x07FF (256B)		VEC iQ Matrix RAM	REG_MSVDX_VEC_IQRAM
> +0x0800 - 0x0FFF (2048B)		VEC Registers		REG_MSVDX_VEC
> +0x1000 - 0x1FFF (4kB)		Command Register	REG_MSVDX_CMD
> +0x2000 - 0x2FFF (4kB)		VEC Local RAM		REG_MSVDX_VEC_RAM
> +0x3000 - 0x4FFF (8kB)		VEC VLC Table RAM	REG_MSVDX_VEC_VLC
> +0x5000 - 0x5FFF (4kB)		AXI Register		REG_MSVDX_AXI
> +******************************************************************************/
> +
> +/*************** MTX registers start: 0x0000 - 0x03FF (1024B) ****************/
> +#define MTX_ENABLE_OFFSET				(0x0000)
> +#define MTX_ENABLE_MTX_ENABLE_MASK			(0x00000001)
> +#define MTX_ENABLE_MTX_ENABLE_SHIFT			(0)
> +
> +#define MTX_KICK_INPUT_OFFSET				(0x0080)
> +
> +#define MTX_REGISTER_READ_WRITE_REQUEST_OFFSET		(0x00FC)
> +#define MTX_REGISTER_READ_WRITE_REQUEST_MTX_DREADY_MASK	(0x80000000)
> +#define MTX_REGISTER_READ_WRITE_REQUEST_MTX_DREADY_SHIFT	(31)
> +#define MTX_REGISTER_READ_WRITE_REQUEST_MTX_RNW_MASK	(0x00010000)
> +#define MTX_REGISTER_READ_WRITE_REQUEST_MTX_RNW_SHIFT	(16)
> +
> +#define MTX_REGISTER_READ_WRITE_DATA_OFFSET		(0x00F8)
> +
> +#define MTX_RAM_ACCESS_DATA_TRANSFER_OFFSET		(0x0104)
> +
> +#define MTX_RAM_ACCESS_CONTROL_OFFSET			(0x0108)
> +#define MTX_RAM_ACCESS_CONTROL_MTX_MCMID_MASK		(0x0FF00000)
> +#define MTX_RAM_ACCESS_CONTROL_MTX_MCMID_SHIFT		(20)
> +#define MTX_RAM_ACCESS_CONTROL_MTX_MCM_ADDR_MASK	(0x000FFFFC)
> +#define MTX_RAM_ACCESS_CONTROL_MTX_MCM_ADDR_SHIFT	(2)
> +#define MTX_RAM_ACCESS_CONTROL_MTX_MCMAI_MASK		(0x00000002)
> +#define MTX_RAM_ACCESS_CONTROL_MTX_MCMAI_SHIFT		(1)
> +#define MTX_RAM_ACCESS_CONTROL_MTX_MCMR_MASK		(0x00000001)
> +#define MTX_RAM_ACCESS_CONTROL_MTX_MCMR_SHIFT		(0)
> +
> +#define MTX_RAM_ACCESS_STATUS_OFFSET			(0x010C)
> +
> +#define MTX_SOFT_RESET_OFFSET				(0x0200)
> +#define MTX_SOFT_RESET_MTX_RESET_MASK			(0x00000001)
> +#define MTX_SOFT_RESET_MTX_RESET_SHIFT			(0)
> +#define MTX_SOFT_RESET_MTXRESET				(0x00000001)
> +
> +#define MTX_SYSC_TIMERDIV_OFFSET			(0x0208)
> +
> +#define MTX_SYSC_CDMAC_OFFSET				(0x0340)
> +#define MTX_SYSC_CDMAC_BURSTSIZE_MASK			(0x07000000)
> +#define MTX_SYSC_CDMAC_BURSTSIZE_SHIFT			(24)
> +#define MTX_SYSC_CDMAC_RNW_MASK				(0x00020000)
> +#define MTX_SYSC_CDMAC_RNW_SHIFT			(17)
> +#define MTX_SYSC_CDMAC_ENABLE_MASK			(0x00010000)
> +#define MTX_SYSC_CDMAC_ENABLE_SHIFT			(16)
> +#define MTX_SYSC_CDMAC_LENGTH_MASK			(0x0000FFFF)
> +#define MTX_SYSC_CDMAC_LENGTH_SHIFT			(0)
> +
> +#define MTX_SYSC_CDMAA_OFFSET				(0x0344)
> +
> +#define MTX_SYSC_CDMAS0_OFFSET				(0x0348)
> +
> +#define MTX_SYSC_CDMAT_OFFSET				(0x0350)
> +/************************** MTX registers end **************************/
> +
> +/**************** DMAC Registers: 0x0500 - 0x05FF (256B) ***************/
> +#define DMAC_DMAC_COUNT_EN_MASK				(0x00010000)
> +#define DMAC_DMAC_IRQ_STAT_TRANSFER_FIN_MASK		(0x00020000)
> +
> +#define DMAC_DMAC_SETUP_OFFSET				(0x0500)
> +
> +#define DMAC_DMAC_COUNT_OFFSET				(0x0504)
> +#define DMAC_DMAC_COUNT_BSWAP_LSBMASK			(0x00000001)
> +#define DMAC_DMAC_COUNT_BSWAP_SHIFT			(30)
> +#define DMAC_DMAC_COUNT_PW_LSBMASK			(0x00000003)
> +#define DMAC_DMAC_COUNT_PW_SHIFT			(27)
> +#define DMAC_DMAC_COUNT_DIR_LSBMASK			(0x00000001)
> +#define DMAC_DMAC_COUNT_DIR_SHIFT			(26)
> +#define DMAC_DMAC_COUNT_PI_LSBMASK			(0x00000003)
> +#define DMAC_DMAC_COUNT_PI_SHIFT			(24)
> +#define DMAC_DMAC_COUNT_CNT_LSBMASK			(0x0000FFFF)
> +#define DMAC_DMAC_COUNT_CNT_SHIFT			(0)
> +#define DMAC_DMAC_COUNT_EN_SHIFT			(16)
> +
> +#define DMAC_DMAC_PERIPH_OFFSET				(0x0508)
> +#define DMAC_DMAC_PERIPH_ACC_DEL_LSBMASK		(0x00000007)
> +#define DMAC_DMAC_PERIPH_ACC_DEL_SHIFT			(29)
> +#define DMAC_DMAC_PERIPH_INCR_LSBMASK			(0x00000001)
> +#define DMAC_DMAC_PERIPH_INCR_SHIFT			(27)
> +#define DMAC_DMAC_PERIPH_BURST_LSBMASK			(0x00000007)
> +#define DMAC_DMAC_PERIPH_BURST_SHIFT			(24)
> +
> +#define DMAC_DMAC_IRQ_STAT_OFFSET			(0x050C)
> +
> +#define DMAC_DMAC_PERIPHERAL_ADDR_OFFSET		(0x0514)
> +#define DMAC_DMAC_PERIPHERAL_ADDR_ADDR_MASK		(0x007FFFFF)
> +#define DMAC_DMAC_PERIPHERAL_ADDR_ADDR_LSBMASK		(0x007FFFFF)
> +#define DMAC_DMAC_PERIPHERAL_ADDR_ADDR_SHIFT		(0)
> +
> +/* DMAC control */
> +#define IPVR_DMAC_VALUE_COUNT(BSWAP, PW, DIR, PERIPH_INCR, COUNT)	\
> +		((((BSWAP) & DMAC_DMAC_COUNT_BSWAP_LSBMASK) <<		\
> +			DMAC_DMAC_COUNT_BSWAP_SHIFT) |			\
> +		(((PW) & DMAC_DMAC_COUNT_PW_LSBMASK) <<			\
> +			DMAC_DMAC_COUNT_PW_SHIFT) |			\
> +		(((DIR) & DMAC_DMAC_COUNT_DIR_LSBMASK) <<		\
> +			DMAC_DMAC_COUNT_DIR_SHIFT) |			\
> +		(((PERIPH_INCR) & DMAC_DMAC_COUNT_PI_LSBMASK) <<	\
> +			DMAC_DMAC_COUNT_PI_SHIFT) |			\
> +		(((COUNT) & DMAC_DMAC_COUNT_CNT_LSBMASK) <<		\
> +			DMAC_DMAC_COUNT_CNT_SHIFT))
> +
> +#define IPVR_DMAC_VALUE_PERIPH_PARAM(ACC_DEL, INCR, BURST)		\
> +		((((ACC_DEL) & DMAC_DMAC_PERIPH_ACC_DEL_LSBMASK) <<	\
> +			DMAC_DMAC_PERIPH_ACC_DEL_SHIFT) |		\
> +		(((INCR) & DMAC_DMAC_PERIPH_INCR_LSBMASK) <<		\
> +			DMAC_DMAC_PERIPH_INCR_SHIFT) |			\
> +		(((BURST) & DMAC_DMAC_PERIPH_BURST_LSBMASK) <<		\
> +			DMAC_DMAC_PERIPH_BURST_SHIFT))
> +
> +typedef enum {
> +	/* !< No byte swapping will be performed. */
> +	IPVR_DMAC_BSWAP_NO_SWAP = 0x0,
> +	/* !< Byte order will be reversed. */
> +	IPVR_DMAC_BSWAP_REVERSE = 0x1,
> +} DMAC_eBSwap;
> +
> +typedef enum {
> +	/* !< Data from memory to peripheral. */
> +	IPVR_DMAC_DIR_MEM_TO_PERIPH = 0x0,
> +	/* !< Data from peripheral to memory. */
> +	IPVR_DMAC_DIR_PERIPH_TO_MEM = 0x1,
> +} DMAC_eDir;
> +
> +typedef enum {
> +	IPVR_DMAC_ACC_DEL_0	= 0x0,	/* !< Access delay zero clock cycles */
> +	IPVR_DMAC_ACC_DEL_256	= 0x1,	/* !< Access delay 256 clock cycles */
> +	IPVR_DMAC_ACC_DEL_512	= 0x2,	/* !< Access delay 512 clock cycles */
> +	IPVR_DMAC_ACC_DEL_768	= 0x3,	/* !< Access delay 768 clock cycles */
> +	IPVR_DMAC_ACC_DEL_1024	= 0x4,	/* !< Access delay 1024 clock cycles */
> +	IPVR_DMAC_ACC_DEL_1280	= 0x5,	/* !< Access delay 1280 clock cycles */
> +	IPVR_DMAC_ACC_DEL_1536	= 0x6,	/* !< Access delay 1536 clock cycles */
> +	IPVR_DMAC_ACC_DEL_1792	= 0x7,	/* !< Access delay 1792 clock cycles */
> +} DMAC_eAccDel;
> +
> +typedef enum {
> +	IPVR_DMAC_INCR_OFF	= 0,	/* !< Static peripheral address. */
> +	IPVR_DMAC_INCR_ON	= 1,	/* !< Incrementing peripheral address. */
> +} DMAC_eIncr;
> +
> +typedef enum {
> +	IPVR_DMAC_BURST_0	= 0x0,	/* !< burst size of 0 */
> +	IPVR_DMAC_BURST_1	= 0x1,	/* !< burst size of 1 */
> +	IPVR_DMAC_BURST_2	= 0x2,	/* !< burst size of 2 */
> +	IPVR_DMAC_BURST_3	= 0x3,	/* !< burst size of 3 */
> +	IPVR_DMAC_BURST_4	= 0x4,	/* !< burst size of 4 */
> +	IPVR_DMAC_BURST_5	= 0x5,	/* !< burst size of 5 */
> +	IPVR_DMAC_BURST_6	= 0x6,	/* !< burst size of 6 */
> +	IPVR_DMAC_BURST_7	= 0x7,	/* !< burst size of 7 */
> +} DMAC_eBurst;
> +/************************** DMAC Registers end **************************/
> +
> +/**************** MSVDX Core Registers: 0x0600 - 0x06FF (256B) ***************/
> +#define MSVDX_CONTROL_OFFSET				(0x0600)
> +#define MSVDX_CONTROL_MSVDX_SOFT_RESET_MASK		(0x00000100)
> +#define MSVDX_CONTROL_MSVDX_SOFT_RESET_SHIFT		(8)
> +#define MSVDX_CONTROL_DMAC_CH0_SELECT_MASK		(0x00001000)
> +#define MSVDX_CONTROL_DMAC_CH0_SELECT_SHIFT		(12)
> +#define MSVDX_CONTROL_MSVDX_FE_SOFT_RESET_MASK		(0x00010000)
> +#define MSVDX_CONTROL_MSVDX_BE_SOFT_RESET_MASK		(0x00100000)
> +#define MSVDX_CONTROL_MSVDX_VEC_MEMIF_SOFT_RESET_MASK	(0x01000000)
> +#define MSVDX_CONTROL_MSVDX_VEC_RENDEC_DEC_SOFT_RESET_MASK	(0x10000000)
> +#define msvdx_sw_reset_all \
> +	(MSVDX_CONTROL_MSVDX_SOFT_RESET_MASK |			\
> +	MSVDX_CONTROL_MSVDX_FE_SOFT_RESET_MASK |		\
> +	MSVDX_CONTROL_MSVDX_BE_SOFT_RESET_MASK |		\
> +	MSVDX_CONTROL_MSVDX_VEC_MEMIF_SOFT_RESET_MASK |		\
> +	MSVDX_CONTROL_MSVDX_VEC_RENDEC_DEC_SOFT_RESET_MASK)
> +
> +#define MSVDX_INTERRUPT_CLEAR_OFFSET			(0x060C)
> +
> +#define MSVDX_INTERRUPT_STATUS_OFFSET			(0x0608)
> +#define MSVDX_INTERRUPT_STATUS_MMU_FAULT_IRQ_MASK	(0x00000F00)
> +#define MSVDX_INTERRUPT_STATUS_MMU_FAULT_IRQ_SHIFT	(8)
> +#define MSVDX_INTERRUPT_STATUS_MTX_IRQ_MASK		(0x00004000)
> +#define MSVDX_INTERRUPT_STATUS_MTX_IRQ_SHIFT		(14)
> +
> +#define MSVDX_HOST_INTERRUPT_ENABLE_OFFSET		(0x0610)
> +
> +#define MSVDX_MAN_CLK_ENABLE_OFFSET			(0x0620)
> +#define MSVDX_MAN_CLK_ENABLE_CORE_MAN_CLK_ENABLE_MASK		(0x00000001)
> +#define MSVDX_MAN_CLK_ENABLE_VDEB_PROCESS_MAN_CLK_ENABLE_MASK	(0x00000002)
> +#define MSVDX_MAN_CLK_ENABLE_VDEB_ACCESS_MAN_CLK_ENABLE_MASK	(0x00000004)
> +#define MSVDX_MAN_CLK_ENABLE_VDMC_MAN_CLK_ENABLE_MASK		(0x00000008)
> +#define MSVDX_MAN_CLK_ENABLE_VEC_ENTDEC_MAN_CLK_ENABLE_MASK	(0x00000010)
> +#define MSVDX_MAN_CLK_ENABLE_VEC_ITRANS_MAN_CLK_ENABLE_MASK	(0x00000020)
> +#define MSVDX_MAN_CLK_ENABLE_MTX_MAN_CLK_ENABLE_MASK		(0x00000040)
> +#define MSVDX_MAN_CLK_ENABLE_VDEB_PROCESS_AUTO_CLK_ENABLE_MASK	(0x00020000)
> +#define MSVDX_MAN_CLK_ENABLE_VDEB_ACCESS_AUTO_CLK_ENABLE_MASK	(0x00040000)
> +#define MSVDX_MAN_CLK_ENABLE_VDMC_AUTO_CLK_ENABLE_MASK		(0x00080000)
> +#define MSVDX_MAN_CLK_ENABLE_VEC_ENTDEC_AUTO_CLK_ENABLE_MASK	(0x00100000)
> +#define MSVDX_MAN_CLK_ENABLE_VEC_ITRANS_AUTO_CLK_ENABLE_MASK	(0x00200000)
> +
> +#define clk_enable_all	\
> +	(MSVDX_MAN_CLK_ENABLE_CORE_MAN_CLK_ENABLE_MASK |		\
> +	MSVDX_MAN_CLK_ENABLE_VDEB_PROCESS_MAN_CLK_ENABLE_MASK |		\
> +	MSVDX_MAN_CLK_ENABLE_VDEB_ACCESS_MAN_CLK_ENABLE_MASK |		\
> +	MSVDX_MAN_CLK_ENABLE_VDMC_MAN_CLK_ENABLE_MASK |			\
> +	MSVDX_MAN_CLK_ENABLE_VEC_ENTDEC_MAN_CLK_ENABLE_MASK |		\
> +	MSVDX_MAN_CLK_ENABLE_VEC_ITRANS_MAN_CLK_ENABLE_MASK |		\
> +	MSVDX_MAN_CLK_ENABLE_MTX_MAN_CLK_ENABLE_MASK)
> +
> +#define clk_enable_minimal \
> +	(MSVDX_MAN_CLK_ENABLE_CORE_MAN_CLK_ENABLE_MASK | \
> +	MSVDX_MAN_CLK_ENABLE_MTX_MAN_CLK_ENABLE_MASK)
> +
> +#define clk_enable_auto	\
> +	(MSVDX_MAN_CLK_ENABLE_VDEB_PROCESS_AUTO_CLK_ENABLE_MASK |	\
> +	MSVDX_MAN_CLK_ENABLE_VDEB_ACCESS_AUTO_CLK_ENABLE_MASK |	\
> +	MSVDX_MAN_CLK_ENABLE_VDMC_AUTO_CLK_ENABLE_MASK |		\
> +	MSVDX_MAN_CLK_ENABLE_VEC_ENTDEC_AUTO_CLK_ENABLE_MASK |		\
> +	MSVDX_MAN_CLK_ENABLE_VEC_ITRANS_AUTO_CLK_ENABLE_MASK |		\
> +	MSVDX_MAN_CLK_ENABLE_CORE_MAN_CLK_ENABLE_MASK |			\
> +	MSVDX_MAN_CLK_ENABLE_MTX_MAN_CLK_ENABLE_MASK)
> +
> +#define MSVDX_CORE_ID_OFFSET				(0x0630)
> +#define MSVDX_CORE_REV_OFFSET				(0x0640)
> +
> +#define MSVDX_DMAC_STREAM_STATUS_OFFSET			(0x0648)
> +
> +#define MSVDX_MMU_CONTROL0_OFFSET			(0x0680)
> +#define MSVDX_MMU_CONTROL0_MMU_PAUSE_MASK		(0x00000002)
> +#define MSVDX_MMU_CONTROL0_MMU_PAUSE_SHIFT		(1)
> +#define MSVDX_MMU_CONTROL0_MMU_INVALDC_MASK		(0x00000008)
> +#define MSVDX_MMU_CONTROL0_MMU_INVALDC_SHIFT		(3)
> +
> +#define MSVDX_MMU_BANK_INDEX_OFFSET			(0x0688)
> +
> +#define MSVDX_MMU_STATUS_OFFSET				(0x068C)
> +
> +#define MSVDX_MMU_CONTROL2_OFFSET			(0x0690)
> +
> +#define MSVDX_MMU_DIR_LIST_BASE_OFFSET			(0x0694)
> +
> +#define MSVDX_MMU_MEM_REQ_OFFSET			(0x06D0)
> +
> +#define MSVDX_MMU_TILE_BASE0_OFFSET			(0x06D4)
> +
> +#define MSVDX_MMU_TILE_BASE1_OFFSET			(0x06D8)
> +
> +#define MSVDX_MTX_RAM_BANK_OFFSET			(0x06F0)
> +#define MSVDX_MTX_RAM_BANK_MTX_RAM_BANK_SIZE_MASK	(0x000F0000)
> +#define MSVDX_MTX_RAM_BANK_MTX_RAM_BANK_SIZE_SHIFT	(16)
> +
> +#define MSVDX_MTX_DEBUG_OFFSET			MSVDX_MTX_RAM_BANK_OFFSET
> +#define MSVDX_MTX_DEBUG_MTX_DBG_IS_SLAVE_MASK		(0x00000004)
> +#define MSVDX_MTX_DEBUG_MTX_DBG_IS_SLAVE_LSBMASK	(0x00000001)
> +#define MSVDX_MTX_DEBUG_MTX_DBG_IS_SLAVE_SHIFT		(2)
> +#define MSVDX_MTX_DEBUG_MTX_DBG_GPIO_IN_MASK		(0x00000003)
> +#define MSVDX_MTX_DEBUG_MTX_DBG_GPIO_IN_LSBMASK		(0x00000003)
> +#define MSVDX_MTX_DEBUG_MTX_DBG_GPIO_IN_SHIFT		(0)
> +
> +/* watchdog for FE and BE */
> +#define FE_MSVDX_WDT_CONTROL_OFFSET			(0x0664)
> +/* MSVDX_CORE, CR_FE_MSVDX_WDT_CONTROL, FE_WDT_CNT_CTRL */
> +#define FE_MSVDX_WDT_CONTROL_FE_WDT_CNT_CTRL_MASK	(0x00060000)
> +#define FE_MSVDX_WDT_CONTROL_FE_WDT_CNT_CTRL_LSBMASK	(0x00000003)
> +#define FE_MSVDX_WDT_CONTROL_FE_WDT_CNT_CTRL_SHIFT	(17)
> +/* MSVDX_CORE, CR_FE_MSVDX_WDT_CONTROL, FE_WDT_ENABLE */
> +#define FE_MSVDX_WDT_CONTROL_FE_WDT_ENABLE_MASK		(0x00010000)
> +#define FE_MSVDX_WDT_CONTROL_FE_WDT_ENABLE_LSBMASK	(0x00000001)
> +#define FE_MSVDX_WDT_CONTROL_FE_WDT_ENABLE_SHIFT	(16)
> +/* MSVDX_CORE, CR_FE_MSVDX_WDT_CONTROL, FE_WDT_ACTION1 */
> +#define FE_MSVDX_WDT_CONTROL_FE_WDT_ACTION1_MASK	(0x00003000)
> +#define FE_MSVDX_WDT_CONTROL_FE_WDT_ACTION1_LSBMASK	(0x00000003)
> +#define FE_MSVDX_WDT_CONTROL_FE_WDT_ACTION1_SHIFT	(12)
> +/* MSVDX_CORE, CR_FE_MSVDX_WDT_CONTROL, FE_WDT_ACTION0 */
> +#define FE_MSVDX_WDT_CONTROL_FE_WDT_ACTION0_MASK	(0x00000100)
> +#define FE_MSVDX_WDT_CONTROL_FE_WDT_ACTION0_LSBMASK	(0x00000001)
> +#define FE_MSVDX_WDT_CONTROL_FE_WDT_ACTION0_SHIFT	(8)
> +/* MSVDX_CORE, CR_FE_MSVDX_WDT_CONTROL, FE_WDT_CLEAR_SELECT */
> +#define FE_MSVDX_WDT_CONTROL_FE_WDT_CLEAR_SELECT_MASK	(0x00000030)
> +#define FE_MSVDX_WDT_CONTROL_FE_WDT_CLEAR_SELECT_LSBMASK	(0x00000003)
> +#define FE_MSVDX_WDT_CONTROL_FE_WDT_CLEAR_SELECT_SHIFT	(4)
> +/* MSVDX_CORE, CR_FE_MSVDX_WDT_CONTROL, FE_WDT_CLKDIV_SELECT */
> +#define FE_MSVDX_WDT_CONTROL_FE_WDT_CLKDIV_SELECT_MASK	(0x00000007)
> +#define FE_MSVDX_WDT_CONTROL_FE_WDT_CLKDIV_SELECT_LSBMASK	(0x00000007)
> +#define FE_MSVDX_WDT_CONTROL_FE_WDT_CLKDIV_SELECT_SHIFT	(0)
> +
> +#define FE_MSVDX_WDTIMER_OFFSET				(0x0668)
> +/* MSVDX_CORE, CR_FE_MSVDX_WDTIMER, FE_WDT_COUNTER */
> +#define FE_MSVDX_WDTIMER_FE_WDT_COUNTER_MASK		(0x0000FFFF)
> +#define FE_MSVDX_WDTIMER_FE_WDT_COUNTER_LSBMASK		(0x0000FFFF)
> +#define FE_MSVDX_WDTIMER_FE_WDT_COUNTER_SHIFT		(0)
> +
> +#define FE_MSVDX_WDT_COMPAREMATCH_OFFSET		(0x066c)
> +/* MSVDX_CORE, CR_FE_MSVDX_WDT_COMPAREMATCH, FE_WDT_CM1 */
> +#define FE_MSVDX_WDT_COMPAREMATCH_FE_WDT_CM1_MASK	(0xFFFF0000)
> +#define FE_MSVDX_WDT_COMPAREMATCH_FE_WDT_CM1_LSBMASK	(0x0000FFFF)
> +#define FE_MSVDX_WDT_COMPAREMATCH_FE_WDT_CM1_SHIFT	(16)
> +/* MSVDX_CORE, CR_FE_MSVDX_WDT_COMPAREMATCH, FE_WDT_CM0 */
> +#define FE_MSVDX_WDT_COMPAREMATCH_FE_WDT_CM0_MASK	(0x0000FFFF)
> +#define FE_MSVDX_WDT_COMPAREMATCH_FE_WDT_CM0_LSBMASK	(0x0000FFFF)
> +#define FE_MSVDX_WDT_COMPAREMATCH_FE_WDT_CM0_SHIFT	(0)
> +
> +#define BE_MSVDX_WDT_CONTROL_OFFSET			(0x0670)
> +/* MSVDX_CORE, CR_BE_MSVDX_WDT_CONTROL, BE_WDT_CNT_CTRL */
> +#define BE_MSVDX_WDT_CONTROL_BE_WDT_CNT_CTRL_MASK	(0x001E0000)
> +#define BE_MSVDX_WDT_CONTROL_BE_WDT_CNT_CTRL_LSBMASK	(0x0000000F)
> +#define BE_MSVDX_WDT_CONTROL_BE_WDT_CNT_CTRL_SHIFT	(17)
> +/* MSVDX_CORE, CR_BE_MSVDX_WDT_CONTROL, BE_WDT_ENABLE */
> +#define BE_MSVDX_WDT_CONTROL_BE_WDT_ENABLE_MASK		(0x00010000)
> +#define BE_MSVDX_WDT_CONTROL_BE_WDT_ENABLE_LSBMASK	(0x00000001)
> +#define BE_MSVDX_WDT_CONTROL_BE_WDT_ENABLE_SHIFT	(16)
> +/* MSVDX_CORE, CR_BE_MSVDX_WDT_CONTROL, BE_WDT_ACTION0 */
> +#define BE_MSVDX_WDT_CONTROL_BE_WDT_ACTION0_MASK	(0x00000100)
> +#define BE_MSVDX_WDT_CONTROL_BE_WDT_ACTION0_LSBMASK	(0x00000001)
> +#define BE_MSVDX_WDT_CONTROL_BE_WDT_ACTION0_SHIFT	(8)
> +/* MSVDX_CORE, CR_BE_MSVDX_WDT_CONTROL, BE_WDT_CLEAR_SELECT */
> +#define BE_MSVDX_WDT_CONTROL_BE_WDT_CLEAR_SELECT_MASK	(0x000000F0)
> +#define BE_MSVDX_WDT_CONTROL_BE_WDT_CLEAR_SELECT_LSBMASK	(0x0000000F)
> +#define BE_MSVDX_WDT_CONTROL_BE_WDT_CLEAR_SELECT_SHIFT	(4)
> +/* MSVDX_CORE, CR_BE_MSVDX_WDT_CONTROL, BE_WDT_CLKDIV_SELECT */
> +#define BE_MSVDX_WDT_CONTROL_BE_WDT_CLKDIV_SELECT_MASK	(0x00000007)
> +#define BE_MSVDX_WDT_CONTROL_BE_WDT_CLKDIV_SELECT_LSBMASK	(0x00000007)
> +#define BE_MSVDX_WDT_CONTROL_BE_WDT_CLKDIV_SELECT_SHIFT	(0)
> +
> +#define BE_MSVDX_WDTIMER_OFFSET				(0x0674)
> +/* MSVDX_CORE, CR_BE_MSVDX_WDTIMER, BE_WDT_COUNTER */
> +#define BE_MSVDX_WDTIMER_BE_WDT_COUNTER_MASK		(0x0000FFFF)
> +#define BE_MSVDX_WDTIMER_BE_WDT_COUNTER_LSBMASK		(0x0000FFFF)
> +#define BE_MSVDX_WDTIMER_BE_WDT_COUNTER_SHIFT		(0)
> +
> +#define BE_MSVDX_WDT_COMPAREMATCH_OFFSET		(0x678)
> +/* MSVDX_CORE, CR_BE_MSVDX_WDT_COMPAREMATCH, BE_WDT_CM0 */
> +#define BE_MSVDX_WDT_COMPAREMATCH_BE_WDT_CM0_MASK	(0x0000FFFF)
> +#define BE_MSVDX_WDT_COMPAREMATCH_BE_WDT_CM0_LSBMASK	(0x0000FFFF)
> +#define BE_MSVDX_WDT_COMPAREMATCH_BE_WDT_CM0_SHIFT	(0)
> +
> +/* watchdog end */
> +/************************** MSVDX Core Registers end *************************/
> +
> +/******************* VEC Registers: 0x0800 - 0x0FFF (2048B) ******************/
> +#define VEC_SHIFTREG_CONTROL_OFFSET			(0x0818)
> +#define VEC_SHIFTREG_CONTROL_SR_MASTER_SELECT_MASK	(0x00000300)
> +#define VEC_SHIFTREG_CONTROL_SR_MASTER_SELECT_SHIFT	(8)
> +/************************** VEC Registers end **************************/
> +
> +/************************** RENDEC Registers **************************/
> +#define MSVDX_RENDEC_CONTROL0_OFFSET			(0x0868)
> +#define MSVDX_RENDEC_CONTROL0_RENDEC_INITIALISE_MASK	(0x00000001)
> +#define MSVDX_RENDEC_CONTROL0_RENDEC_INITIALISE_SHIFT	(0)
> +
> +#define MSVDX_RENDEC_CONTROL1_OFFSET			(0x086C)
> +#define MSVDX_RENDEC_CONTROL1_RENDEC_DECODE_START_SIZE_MASK	(0x000000FF)
> +#define MSVDX_RENDEC_CONTROL1_RENDEC_DECODE_START_SIZE_SHIFT	(0)
> +#define MSVDX_RENDEC_CONTROL1_RENDEC_BURST_SIZE_W_MASK		(0x000C0000)
> +#define MSVDX_RENDEC_CONTROL1_RENDEC_BURST_SIZE_W_SHIFT		(18)
> +#define MSVDX_RENDEC_CONTROL1_RENDEC_BURST_SIZE_R_MASK		(0x00030000)
> +#define MSVDX_RENDEC_CONTROL1_RENDEC_BURST_SIZE_R_SHIFT		(16)
> +#define MSVDX_RENDEC_CONTROL1_RENDEC_EXTERNAL_MEMORY_MASK	(0x01000000)
> +#define MSVDX_RENDEC_CONTROL1_RENDEC_EXTERNAL_MEMORY_SHIFT	(24)
> +#define MSVDX_RENDEC_CONTROL1_RENDEC_DEC_DISABLE_MASK	(0x08000000)
> +#define MSVDX_RENDEC_CONTROL1_RENDEC_DEC_DISABLE_SHIFT	(27)
> +
> +#define MSVDX_RENDEC_BUFFER_SIZE_OFFSET			(0x0870)
> +#define MSVDX_RENDEC_BUFFER_SIZE_RENDEC_BUFFER_SIZE0_MASK	(0x0000FFFF)
> +#define MSVDX_RENDEC_BUFFER_SIZE_RENDEC_BUFFER_SIZE0_SHIFT	(0)
> +#define MSVDX_RENDEC_BUFFER_SIZE_RENDEC_BUFFER_SIZE1_MASK	(0xFFFF0000)
> +#define MSVDX_RENDEC_BUFFER_SIZE_RENDEC_BUFFER_SIZE1_SHIFT	(16)
> +
> +#define MSVDX_RENDEC_BASE_ADDR0_OFFSET			(0x0874)
> +
> +#define MSVDX_RENDEC_BASE_ADDR1_OFFSET			(0x0878)
> +
> +#define MSVDX_RENDEC_READ_DATA_OFFSET			(0x0898)
> +
> +#define MSVDX_RENDEC_CONTEXT0_OFFSET			(0x0950)
> +
> +#define MSVDX_RENDEC_CONTEXT1_OFFSET			(0x0954)
> +
> +#define MSVDX_RENDEC_CONTEXT2_OFFSET			(0x0958)
> +
> +#define MSVDX_RENDEC_CONTEXT3_OFFSET			(0x095C)
> +
> +#define MSVDX_RENDEC_CONTEXT4_OFFSET			(0x0960)
> +
> +#define MSVDX_RENDEC_CONTEXT5_OFFSET			(0x0964)
> +/*************************** RENDEC registers end ****************************/
> +
> +/******************** CMD Register: 0x1000 - 0x1FFF (4kB) ********************/
> +#define MSVDX_CMDS_END_SLICE_PICTURE_OFFSET		(0x1404)
> +/****************************** CMD Register end *****************************/
> +
> +/******************** VEC Local RAM: 0x2000 - 0x2FFF (4kB) *******************/
> +/* VEC local MEM save/restore */
> +#define VEC_LOCAL_MEM_BYTE_SIZE (4 * 1024)
> +#define VEC_LOCAL_MEM_OFFSET 0x2000
> +
> +#define MSVDX_EXT_FW_ERROR_STATE		(0x2CC4)
> +/* Decode operations in progress or not complete */
> +#define MSVDX_FW_STATUS_IN_PROGRESS		0x00000000
> +/* No work underway on the hardware: idle, can be powered down */
> +#define MSVDX_FW_STATUS_HW_IDLE			0x00000001
> +/* Panic, waiting to be reloaded */
> +#define MSVDX_FW_STATUS_HW_PANIC		0x00000003
> +
> +/*
> + * This defines the MSVDX communication buffer
> + */
> +#define MSVDX_COMMS_SIGNATURE_VALUE	(0xA5A5A5A5)	/*!< Signature value */
> +/*!< Host buffer size (in 32-bit words) */
> +#define NUM_WORDS_HOST_BUF		(100)
> +/*!< MTX buffer size (in 32-bit words) */
> +#define NUM_WORDS_MTX_BUF		(100)
> +
> +#define MSVDX_COMMS_AREA_ADDR		(0x02fe0)
> +#define MSVDX_COMMS_CORE_WTD		(MSVDX_COMMS_AREA_ADDR - 0x08)
> +#define MSVDX_COMMS_ERROR_TRIG		(MSVDX_COMMS_AREA_ADDR - 0x08)
> +#define MSVDX_COMMS_FIRMWARE_ID		(MSVDX_COMMS_AREA_ADDR - 0x0C)
> +#define MSVDX_COMMS_OFFSET_FLAGS	(MSVDX_COMMS_AREA_ADDR + 0x18)
> +#define MSVDX_COMMS_MSG_COUNTER		(MSVDX_COMMS_AREA_ADDR - 0x04)
> +#define MSVDX_COMMS_FW_STATUS		(MSVDX_COMMS_AREA_ADDR - 0x10)
> +#define MSVDX_COMMS_SIGNATURE		(MSVDX_COMMS_AREA_ADDR + 0x00)
> +#define MSVDX_COMMS_TO_HOST_BUF_SIZE	(MSVDX_COMMS_AREA_ADDR + 0x04)
> +#define MSVDX_COMMS_TO_HOST_RD_INDEX	(MSVDX_COMMS_AREA_ADDR + 0x08)
> +#define MSVDX_COMMS_TO_HOST_WRT_INDEX	(MSVDX_COMMS_AREA_ADDR + 0x0C)
> +#define MSVDX_COMMS_TO_MTX_BUF_SIZE	(MSVDX_COMMS_AREA_ADDR + 0x10)
> +#define MSVDX_COMMS_TO_MTX_RD_INDEX	(MSVDX_COMMS_AREA_ADDR + 0x14)
> +#define MSVDX_COMMS_TO_MTX_CB_RD_INDEX	(MSVDX_COMMS_AREA_ADDR + 0x18)
> +#define MSVDX_COMMS_TO_MTX_WRT_INDEX	(MSVDX_COMMS_AREA_ADDR + 0x1C)
> +#define MSVDX_COMMS_TO_HOST_BUF		(MSVDX_COMMS_AREA_ADDR + 0x20)
> +#define MSVDX_COMMS_TO_MTX_BUF	\
> +			(MSVDX_COMMS_TO_HOST_BUF + (NUM_WORDS_HOST_BUF << 2))
> +
> +/*
> + * FW FLAGS: shall be written by the host prior to starting the firmware.
> + */
> +/* Disable firmware-based watchdog timers. */
> +#define DSIABLE_FW_WDT				0x0008
> +/* Abort immediately on errors. */
> +#define ABORT_ON_ERRORS_IMMEDIATE		0x0010
> +/* Abort faulted slices as soon as possible. Allows non-faulted slices
> + * to reach the backend, but a faulted slice will not be allowed to start. */
> +#define ABORT_FAULTED_SLICE_IMMEDIATE		0x0020
> +/* Flush faulted slices - debug option. */
> +#define FLUSH_FAULTED_SLICES			0x0080
> +/* Don't interrupt the host when the to-host buffer becomes full.
> + * Stall until space is freed up by the host on its own. */
> +#define NOT_INTERRUPT_WHEN_HOST_IS_FULL		0x0200
> +/* A contiguity warning msg will be sent to the host for streams with the
> + * FW_ERROR_DETECTION_AND_RECOVERY flag set if non-contiguous
> + * macroblocks are detected. */
> +#define NOT_ENABLE_ON_HOST_CONCEALMENT		0x0400
> +/* Return the VDEB signature value in the completion message.
> + * This requires a VDEB data flush every slice for constant results. */
> +#define RETURN_VDEB_DATA_IN_COMPLETION		0x0800
> +/* Disable auto clock gating. */
> +#define DSIABLE_Auto_CLOCK_GATING		0x1000
> +/* Disable the idle GPIO signal. */
> +#define DSIABLE_IDLE_GPIO_SIG			0x2000
> +/* Enable setup, FE and BE timestamps in the completion message.
> + * Used by IMG only for firmware profiling. */
> +#define ENABLE_TIMESTAMPS_IN_COMPLETE_MSG	0x4000
> +/* Disable off-host second-pass deblocking in the firmware. */
> +#define DSIABLE_OFFHOST_SECOND_DEBLOCK		0x20000
> +/* Sum the address signature into the data signature
> + * when returning VDEB signature values. */
> +#define SUM_ADD_SIG_TO_DATA_SIGNATURE		0x80000
> +
> +/*
> +#define MSVDX_COMMS_AREA_END	\
> +  (MSVDX_COMMS_TO_MTX_BUF + (NUM_WORDS_HOST_BUF << 2))
> +*/
> +#define MSVDX_COMMS_AREA_END 0x03000
> +
> +#if (MSVDX_COMMS_AREA_END != 0x03000)
> +#error
> +#endif
> +/***************************** VEC Local RAM end
> *****************************/
> +
> +#endif
> --
> 2.1.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/intel-gfx



