Re: [RFCv1 2/2] drm/msm: basic KMS driver for snapdragon

On Mon, Jul 8, 2013 at 7:17 PM, Jordan Crouse <jcrouse@xxxxxxxxxxxxxx> wrote:
> On 07/05/2013 01:53 PM, Rob Clark wrote:
>>
>> The snapdragon chips have multiple different display controllers,
>> depending on which chip variant/version.  (As far as I can tell, current
>> devices have either MDP3 or MDP4, and upcoming devices have MDSS.)  And
>> then external to the display controller are HDMI, DSI, etc. blocks which
>> may be shared across devices which have different display controller
>> blocks.
>>
>> To more easily add support for different display controller blocks, the
>> display controller specific bits are split out into a "kms" object,
>> which provides the kms plane/crtc/encoder objects.
>>
>> The external HDMI, DSI, etc. blocks are currently part encoder and
>> part connector.  But I think I will pull in the drm_bridge patches from
>> chromeos tree, and split them into a bridge+connector, with the
>> registers that need to be set in modeset handled by the bridge.  This
>> would remove the 'msm_connector' base class.  But some things need to be
>> double checked to make sure I could get the correct ON/OFF sequencing..
>>
>> Signed-off-by: Rob Clark <robdclark@xxxxxxxxx>
>
>
>> diff --git a/drivers/gpu/drm/msm/NOTES b/drivers/gpu/drm/msm/NOTES
>> new file mode 100644
>> index 0000000..b9e9d03
>> --- /dev/null
>> +++ b/drivers/gpu/drm/msm/NOTES
>> @@ -0,0 +1,43 @@
>> +Rough thoughts/notes..
>> +
>> +We have (at least) 3 different display controller blocks at play:
>> + + MDP3 - ?? seems to be what is on geeksphone peak device
>> + + MDP4 - S3 (APQ8060, touchpad), S4-pro (APQ8064, nexus4 & ifc6410)
>> + + MDSS - snapdragon 800
>> +
>> +(I don't have a completely clear picture on which display controller
>> +is in which devices)
>> +
>> +But, HDMI/DSI/etc blocks seem like they can be shared.  And I for sure
>> +don't want to have to deal with N different kms devices from
>> +xf86-video-freedreno.  Plus, it seems like we can do some clever tricks
>> +like have kms/crtc code build up gpu cmdstream to update scanout after
>> +rendering without involving the cpu.
>> +
>> +And on gpu side of things:
>> + + zero, one, or two 2d cores (z180)
>
>
> Life would be easier if we just forgot that z180 existed.
>

I would like to support it eventually, although it's not the highest
priority.  I'm not quite sure yet how to do a sane kernel interface
for it.. I might just take the easy way out and memcpy.  Regarding
the extra level of indirection, well, it doesn't absolutely *have* to
be the same ioctl..  I do need to give it some thought though.

>
>> + + and either a2xx or a3xx 3d core.
>
>
> A2XX will probably be less interesting to everybody except folks trying to
> get their ancient phones working.  That said it might be smart to keep the
> GPU sub device split because future.
>

I would like to support a2xx as well, if for no other reason than
that I have a handful of a2xx devices.  (Although sometimes there is
a shortage of hours in the day.)

>
>> +
>> +So, one drm driver, with some modularity.  Different 'struct msm_kms'
>> +implementations, depending on display controller.  And one or more
>> +'struct msm_gpu' for the various different gpu sub-modules.
>
>
> If Z180 goes poof then we could conceivably use 'adreno' for a name which
> is a nice way to compartmentalize the GPU code.  On the other hand msm_gpu
> has consistency going for it.
>

I suppose depending on what marketing literature you read, "adreno"
could refer collectively to 2d and 3d cores.  But meh.  I could go
either way on the name.

>
>> +The kms module provides the plane, crtc, and encoder objects, and
>> +loads whatever connectors are appropriate.
>> +
>> +For MDP4, the mapping is (I think):
>> +
>> +  plane   -> PIPE{RGBn,VGn}              \
>> +  crtc    -> OVLP{n} + DMA{P,S,E} (??)   |-> MDP "device"
>> +  encoder -> DTV/LCDC/DSI (within MDP4)  /
>> +  connector -> HDMI/DSI/etc              --> other device(s)
>> +
>> +Since the irqs that drm core mostly cares about are vblank/framedone,
>> +we'll let msm_mdp4_kms provide the irq install/uninstall/etc functions
>> +and treat the MDP4 block's irq as "the" irq.  Even though the connectors
>> +may have their own irqs which they install themselves.  For this reason
>> +the display controller is the "master" device.
>> +
>> +Each connector probably ends up being a separate device, just for the
>> +logistics of finding/mapping io region, irq, etc.
>> +
>
>
>> diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
>> new file mode 100644
>> index 0000000..e6ccef9
>> --- /dev/null
>> +++ b/drivers/gpu/drm/msm/msm_drv.c
>> @@ -0,0 +1,491 @@
>> +/*
>> + * Copyright (C) 2013 Red Hat
>> + * Author: Rob Clark <robdclark@xxxxxxxxx>
>> + *
>> + * This program is free software; you can redistribute it and/or modify it
>> + * under the terms of the GNU General Public License version 2 as published by
>> + * the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful, but WITHOUT
>> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
>> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
>> + * more details.
>> + *
>> + * You should have received a copy of the GNU General Public License along with
>> + * this program.  If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +
>> +#include "msm_drv.h"
>> +
>> +static void msm_fb_output_poll_changed(struct drm_device *dev)
>> +{
>> +       struct msm_drm_private *priv = dev->dev_private;
>> +       if (priv->fbdev)
>> +               drm_fb_helper_hotplug_event(priv->fbdev);
>> +}
>> +
>> +static const struct drm_mode_config_funcs mode_config_funcs = {
>> +       .fb_create = msm_framebuffer_create,
>> +       .output_poll_changed = msm_fb_output_poll_changed,
>> +};
>> +
>> +static int msm_fault_handler(struct iommu_domain *iommu, struct device *dev,
>> +               unsigned long iova, int flags)
>> +{
>> +       DBG("*** fault: iova=%08lx, flags=%d", iova, flags);
>> +       return 0;
>> +}
>> +
>> +int msm_register_iommu(struct drm_device *dev, struct iommu_domain *iommu)
>> +{
>> +       struct msm_drm_private *priv = dev->dev_private;
>> +       int idx = priv->num_iommus++;
>> +
>> +       if (WARN_ON(idx >= ARRAY_SIZE(priv->iommus)))
>> +               return -EINVAL;
>> +
>> +       priv->iommus[idx] = iommu;
>> +
>> +       iommu_set_fault_handler(iommu, msm_fault_handler);
>> +
>> +       /* need to iommu_attach_device() somewhere??  on resume?? */
>
>
> We are going to end up with 2 IOMMUs to deal with.

Oh, yeah, I did figure out the attach stuff eventually, but forgot to
remove that note to myself.

Rough plan is that different initiators (display, gpu, etc) request
iova in a particular device-space (msm_gem_get_iova()), and the gem
object keeps track of the device address in each domain that it is
mapped.

Well, I'm still thinking about the best way to deal with per-context
address space for the GPU.  One easy way is to just use the same
address space in each context (although with only the buffers shared
to that context being mapped).  That should work ok-ish, at least for
newer GPUs with a large address space.  But I'm not super concerned
about getting that part right up-front, because it won't be visible
in the user<->kernel ABI, so it is something that can be changed
later.
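
Roughly the bookkeeping I have in mind, as a standalone userspace
sketch (names hypothetical; the real code would call iommu_map_range()
from msm_gem_get_iova() instead of just recording an address):

```c
#include <assert.h>
#include <stdint.h>

#define NUM_DOMAINS 2   /* e.g. one display iommu domain, one gpu domain */

/* Each gem object tracks the device address (iova) it has been given
 * in every iommu domain it is mapped into; 0 means "not mapped yet". */
struct msm_gem_bo {
	uint32_t iova[NUM_DOMAINS];
};

/* Lazily resolve the object's iova for one initiator's domain.  The
 * iommu mapping step is stood in for by recording the caller-supplied
 * address the first time around. */
static uint32_t bo_get_iova(struct msm_gem_bo *bo, int domain, uint32_t addr)
{
	if (!bo->iova[domain])
		bo->iova[domain] = addr;   /* real code: iommu_map_range(...) */
	return bo->iova[domain];
}
```

The point being that display and gpu can each ask for the same buffer
and get back an address valid in their own domain.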

>
>> +       return idx;
>> +}
>> +
>> +#ifdef CONFIG_DRM_MSM_REGISTER_LOGGING
>> +static bool reglog = false;
>> +MODULE_PARM_DESC(reglog, "Enable register read/write logging");
>> +module_param(reglog, bool, 0600);
>> +#else
>> +#define reglog 0
>> +#endif
>> +
>> +void __iomem *msm_ioremap(struct device *dev, resource_size_t offset,
>> +               unsigned long size, const char *name)
>> +{
>> +       void __iomem *ptr = devm_ioremap_nocache(dev, offset, size);
>> +       if (reglog)
>> +               printk(KERN_DEBUG "IO:region %s %08x %08lx\n", name, (u32)ptr, size);
>> +       return ptr;
>> +}
>> +
>> +void msm_writel(u32 data, void __iomem *addr)
>> +{
>> +       if (reglog)
>> +               printk(KERN_DEBUG "IO:W %08x %08x\n", (u32)addr, data);
>> +       writel(data, addr);
>> +}
>> +
>> +u32 msm_readl(const void __iomem *addr)
>> +{
>> +       u32 val = readl(addr);
>> +       if (reglog)
>> +               printk(KERN_ERR "IO:R %08x %08x\n", (u32)addr, val);
>> +       return val;
>> +}
>> +
>> +/*
>> + * DRM operations:
>> + */
>> +
>> +static int msm_unload(struct drm_device *dev)
>> +{
>> +       struct msm_drm_private *priv = dev->dev_private;
>> +       struct msm_kms *kms = priv->kms;
>> +
>> +       drm_kms_helper_poll_fini(dev);
>> +       drm_mode_config_cleanup(dev);
>> +       drm_vblank_cleanup(dev);
>> +
>> +       pm_runtime_get_sync(dev->dev);
>> +       drm_irq_uninstall(dev);
>> +       pm_runtime_put_sync(dev->dev);
>> +
>> +       flush_workqueue(priv->wq);
>> +       destroy_workqueue(priv->wq);
>> +
>> +       if (kms) {
>> +               pm_runtime_disable(dev->dev);
>> +               kms->funcs->destroy(kms);
>> +       }
>> +
>> +       dev->dev_private = NULL;
>> +
>> +       kfree(priv);
>> +
>> +       return 0;
>> +}
>> +
>> +static int msm_load(struct drm_device *dev, unsigned long flags)
>> +{
>> +       struct platform_device *pdev = dev->platformdev;
>> +       struct msm_drm_private *priv;
>> +       struct msm_kms *kms;
>> +       int ret;
>> +
>> +       priv = kzalloc(sizeof(*priv), GFP_KERNEL);
>> +       if (!priv) {
>> +               dev_err(dev->dev, "failed to allocate private data\n");
>> +               return -ENOMEM;
>> +       }
>> +
>> +       dev->dev_private = priv;
>> +
>> +       priv->wq = alloc_ordered_workqueue("msm", 0);
>> +
>> +       INIT_LIST_HEAD(&priv->obj_list);
>> +
>> +       drm_mode_config_init(dev);
>> +
>> +       kms = mdp4_kms_init(dev);
>> +       if (IS_ERR(kms)) {
>> +               /*
>> +                * NOTE: once we have GPU support, having no kms should not
>> +                * be considered fatal.. ideally we would still support gpu
>> +                * and (for example) use dmabuf/prime to share buffers with
>> +                * imx drm driver on iMX5
>> +                */
>> +               dev_err(dev->dev, "failed to load kms\n");
>> +               ret = PTR_ERR(kms);
>> +               goto fail;
>> +       }
>> +
>> +       priv->kms = kms;
>> +
>> +       if (kms) {
>> +               pm_runtime_enable(dev->dev);
>> +               ret = kms->funcs->hw_init(kms);
>> +               if (ret) {
>> +                       dev_err(dev->dev, "kms hw init failed: %d\n", ret);
>> +                       goto fail;
>> +               }
>> +       }
>> +
>> +       dev->mode_config.min_width = 0;
>> +       dev->mode_config.min_height = 0;
>> +       dev->mode_config.max_width = 2048;
>> +       dev->mode_config.max_height = 2048;
>> +       dev->mode_config.funcs = &mode_config_funcs;
>> +
>> +       ret = drm_vblank_init(dev, 1);
>> +       if (ret < 0) {
>> +               dev_err(dev->dev, "failed to initialize vblank\n");
>> +               goto fail;
>> +       }
>> +
>> +       pm_runtime_get_sync(dev->dev);
>> +       ret = drm_irq_install(dev);
>> +       pm_runtime_put_sync(dev->dev);
>> +       if (ret < 0) {
>> +               dev_err(dev->dev, "failed to install IRQ handler\n");
>> +               goto fail;
>> +       }
>> +
>> +       platform_set_drvdata(pdev, dev);
>> +
>> +#ifdef CONFIG_DRM_MSM_FBDEV
>> +       priv->fbdev = msm_fbdev_init(dev);
>> +#endif
>> +
>> +       drm_kms_helper_poll_init(dev);
>> +
>> +       return 0;
>> +
>> +fail:
>> +       msm_unload(dev);
>> +       return ret;
>> +}
>> +
>> +static void msm_preclose(struct drm_device *dev, struct drm_file *file)
>> +{
>> +       struct msm_drm_private *priv = dev->dev_private;
>> +       struct msm_kms *kms = priv->kms;
>> +       if (kms)
>> +               kms->funcs->preclose(kms, file);
>> +}
>> +
>> +static void msm_lastclose(struct drm_device *dev)
>> +{
>> +       struct msm_drm_private *priv = dev->dev_private;
>> +       if (priv->fbdev) {
>> +               drm_modeset_lock_all(dev);
>> +               drm_fb_helper_restore_fbdev_mode(priv->fbdev);
>> +               drm_modeset_unlock_all(dev);
>> +       }
>> +}
>> +
>> +static irqreturn_t msm_irq(DRM_IRQ_ARGS)
>> +{
>> +       struct drm_device *dev = arg;
>> +       struct msm_drm_private *priv = dev->dev_private;
>> +       struct msm_kms *kms = priv->kms;
>> +       BUG_ON(!kms);
>> +       return kms->funcs->irq(kms);
>
>
> And we will have separate interrupts too - has anybody else had to
> deal with that (too lazy to check).

IIRC, exynos, and perhaps some others do..

This is actually already the case in the msm kms code, since the HDMI
block has its own irq (for HPD and DDC).  My thinking here is that
every "module" with interrupts that are not vblank related can
separately register its own handler.  It is partly an arbitrary
decision, but it seemed to make sense to me, because the DRM core
doesn't really care much about interrupts beyond vblank.
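
To illustrate the split with a userspace mock-up (hypothetical names;
in the kernel each module would simply call request_irq() for its own
line, independent of the drm core's vblank irq):

```c
#include <assert.h>
#include <stddef.h>

typedef int (*irq_handler_t)(void *ctx);

struct irq_slot {
	int irq;               /* irq line number */
	irq_handler_t handler;
	void *ctx;
};

#define MAX_IRQS 4
static struct irq_slot slots[MAX_IRQS];

/* Stand-in for request_irq(): each block (hdmi, dsi, ...) hooks up
 * its own line without involving the drm core. */
static int register_module_irq(int irq, irq_handler_t handler, void *ctx)
{
	for (int i = 0; i < MAX_IRQS; i++) {
		if (!slots[i].handler) {
			slots[i].irq = irq;
			slots[i].handler = handler;
			slots[i].ctx = ctx;
			return 0;
		}
	}
	return -1;
}

/* Dispatch one interrupt to whichever module claimed that line. */
static int dispatch_irq(int irq)
{
	for (int i = 0; i < MAX_IRQS; i++)
		if (slots[i].handler && slots[i].irq == irq)
			return slots[i].handler(slots[i].ctx);
	return 0;   /* nobody registered: not ours */
}

/* Example module handler, e.g. HDMI hot-plug detect: */
static int hpd_count;
static int hdmi_hpd_handler(void *ctx)
{
	(void)ctx;
	return ++hpd_count;
}
```

Only the display controller's handler is wired up as "the" drm irq;
everything else stays private to its module.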

BR,
-R

>> +}
>> +
>> +static void msm_irq_preinstall(struct drm_device *dev)
>> +{
>> +       struct msm_drm_private *priv = dev->dev_private;
>> +       struct msm_kms *kms = priv->kms;
>> +       BUG_ON(!kms);
>> +       kms->funcs->irq_preinstall(kms);
>> +}
>> +
>> +static int msm_irq_postinstall(struct drm_device *dev)
>> +{
>> +       struct msm_drm_private *priv = dev->dev_private;
>> +       struct msm_kms *kms = priv->kms;
>> +       BUG_ON(!kms);
>> +       return kms->funcs->irq_postinstall(kms);
>> +}
>> +
>> +static void msm_irq_uninstall(struct drm_device *dev)
>> +{
>> +       struct msm_drm_private *priv = dev->dev_private;
>> +       struct msm_kms *kms = priv->kms;
>> +       BUG_ON(!kms);
>> +       kms->funcs->irq_uninstall(kms);
>> +}
>> +
>> +static int msm_enable_vblank(struct drm_device *dev, int crtc_id)
>> +{
>> +       struct msm_drm_private *priv = dev->dev_private;
>> +       struct msm_kms *kms = priv->kms;
>> +       if (!kms)
>> +               return -ENXIO;
>> +       DBG("dev=%p, crtc=%d", dev, crtc_id);
>> +       return kms->funcs->enable_vblank(kms, priv->crtcs[crtc_id]);
>> +}
>> +
>> +static void msm_disable_vblank(struct drm_device *dev, int crtc_id)
>> +{
>> +       struct msm_drm_private *priv = dev->dev_private;
>> +       struct msm_kms *kms = priv->kms;
>> +       if (!kms)
>> +               return;
>> +       DBG("dev=%p, crtc=%d", dev, crtc_id);
>> +       kms->funcs->disable_vblank(kms, priv->crtcs[crtc_id]);
>> +}
>> +
>> +#ifdef CONFIG_DEBUG_FS
>> +static int msm_gem_show(struct seq_file *m, void *arg)
>> +{
>> +       struct drm_info_node *node = (struct drm_info_node *) m->private;
>> +       struct drm_device *dev = node->minor->dev;
>> +       struct msm_drm_private *priv = dev->dev_private;
>> +       int ret;
>> +
>> +       ret = mutex_lock_interruptible(&dev->struct_mutex);
>> +       if (ret)
>> +               return ret;
>> +
>> +       seq_printf(m, "All Objects:\n");
>> +       msm_gem_describe_objects(&priv->obj_list, m);
>> +
>> +       mutex_unlock(&dev->struct_mutex);
>> +
>> +       return 0;
>> +}
>> +
>> +static int msm_mm_show(struct seq_file *m, void *arg)
>> +{
>> +       struct drm_info_node *node = (struct drm_info_node *) m->private;
>> +       struct drm_device *dev = node->minor->dev;
>> +       return drm_mm_dump_table(m, dev->mm_private);
>> +}
>> +
>> +static int msm_fb_show(struct seq_file *m, void *arg)
>> +{
>> +       struct drm_info_node *node = (struct drm_info_node *) m->private;
>> +       struct drm_device *dev = node->minor->dev;
>> +       struct msm_drm_private *priv = dev->dev_private;
>> +       struct drm_framebuffer *fb, *fbdev_fb = NULL;
>> +
>> +       if (priv->fbdev) {
>> +               seq_printf(m, "fbcon ");
>> +               fbdev_fb = priv->fbdev->fb;
>> +               msm_framebuffer_describe(fbdev_fb, m);
>> +       }
>> +
>> +       mutex_lock(&dev->mode_config.fb_lock);
>> +       list_for_each_entry(fb, &dev->mode_config.fb_list, head) {
>> +               if (fb == fbdev_fb)
>> +                       continue;
>> +
>> +               seq_printf(m, "user ");
>> +               msm_framebuffer_describe(fb, m);
>> +       }
>> +       mutex_unlock(&dev->mode_config.fb_lock);
>> +
>> +       return 0;
>> +}
>> +
>> +static struct drm_info_list msm_debugfs_list[] = {
>> +               { "gem", msm_gem_show, 0 },
>> +               { "mm",  msm_mm_show,  0 },
>> +               { "fb",  msm_fb_show,  0 },
>> +};
>> +
>> +static int msm_debugfs_init(struct drm_minor *minor)
>> +{
>> +       struct drm_device *dev = minor->dev;
>> +       int ret;
>> +
>> +       ret = drm_debugfs_create_files(msm_debugfs_list,
>> +                       ARRAY_SIZE(msm_debugfs_list),
>> +                       minor->debugfs_root, minor);
>> +
>> +       if (ret) {
>> +               dev_err(dev->dev, "could not install msm_debugfs_list\n");
>> +               return ret;
>> +       }
>> +
>> +       return ret;
>> +}
>> +
>> +static void msm_debugfs_cleanup(struct drm_minor *minor)
>> +{
>> +       drm_debugfs_remove_files(msm_debugfs_list,
>> +                       ARRAY_SIZE(msm_debugfs_list), minor);
>> +}
>> +#endif
>> +
>> +static const struct vm_operations_struct vm_ops = {
>> +       .fault = msm_gem_fault,
>> +       .open = drm_gem_vm_open,
>> +       .close = drm_gem_vm_close,
>> +};
>> +
>> +static const struct file_operations fops = {
>> +       .owner              = THIS_MODULE,
>> +       .open               = drm_open,
>> +       .release            = drm_release,
>> +       .unlocked_ioctl     = drm_ioctl,
>> +#ifdef CONFIG_COMPAT
>> +       .compat_ioctl       = drm_compat_ioctl,
>> +#endif
>> +       .poll               = drm_poll,
>> +       .read               = drm_read,
>> +       .fasync             = drm_fasync,
>> +       .llseek             = no_llseek,
>> +       .mmap               = msm_gem_mmap,
>> +};
>> +
>> +static struct drm_driver msm_driver = {
>> +       .driver_features    = DRIVER_HAVE_IRQ | DRIVER_GEM | DRIVER_MODESET,
>> +       .load               = msm_load,
>> +       .unload             = msm_unload,
>> +       .preclose           = msm_preclose,
>> +       .lastclose          = msm_lastclose,
>> +       .irq_handler        = msm_irq,
>> +       .irq_preinstall     = msm_irq_preinstall,
>> +       .irq_postinstall    = msm_irq_postinstall,
>> +       .irq_uninstall      = msm_irq_uninstall,
>> +       .get_vblank_counter = drm_vblank_count,
>> +       .enable_vblank      = msm_enable_vblank,
>> +       .disable_vblank     = msm_disable_vblank,
>> +       .gem_free_object    = msm_gem_free_object,
>> +       .gem_vm_ops         = &vm_ops,
>> +       .dumb_create        = msm_gem_dumb_create,
>> +       .dumb_map_offset    = msm_gem_dumb_map_offset,
>> +       .dumb_destroy       = msm_gem_dumb_destroy,
>> +#ifdef CONFIG_DEBUG_FS
>> +       .debugfs_init       = msm_debugfs_init,
>> +       .debugfs_cleanup    = msm_debugfs_cleanup,
>> +#endif
>> +       .fops               = &fops,
>> +       .name               = "msm",
>> +       .desc               = "MSM Snapdragon DRM",
>> +       .date               = "20130625",
>> +       .major              = 1,
>> +       .minor              = 0,
>> +};
>> +
>> +#ifdef CONFIG_PM_SLEEP
>> +static int msm_pm_suspend(struct device *dev)
>> +{
>> +       struct drm_device *ddev = dev_get_drvdata(dev);
>> +       struct msm_drm_private *priv = ddev->dev_private;
>> +       struct msm_kms *kms = priv->kms;
>> +
>> +       drm_kms_helper_poll_disable(ddev);
>> +
>> +       return kms->funcs->pm_suspend(kms);
>> +}
>> +
>> +static int msm_pm_resume(struct device *dev)
>> +{
>> +       struct drm_device *ddev = dev_get_drvdata(dev);
>> +       struct msm_drm_private *priv = ddev->dev_private;
>> +       struct msm_kms *kms = priv->kms;
>> +       int ret = 0;
>> +
>> +       ret = kms->funcs->pm_resume(kms);
>> +       if (ret)
>> +               return ret;
>> +
>> +       drm_kms_helper_poll_enable(ddev);
>> +
>> +       return 0;
>> +}
>> +#endif
>> +
>> +static const struct dev_pm_ops msm_pm_ops = {
>> +       SET_SYSTEM_SLEEP_PM_OPS(msm_pm_suspend, msm_pm_resume)
>> +};
>> +
>> +/*
>> + * Platform driver:
>> + */
>> +
>> +static int msm_pdev_probe(struct platform_device *pdev)
>> +{
>> +       return drm_platform_init(&msm_driver, pdev);
>> +}
>> +
>> +static int msm_pdev_remove(struct platform_device *pdev)
>> +{
>> +       drm_platform_exit(&msm_driver, pdev);
>> +
>> +       return 0;
>> +}
>> +
>> +static const struct platform_device_id msm_id[] = {
>> +       { "mdp", 0 },
>> +       { }
>> +};
>> +
>> +static struct platform_driver msm_platform_driver = {
>> +       .probe      = msm_pdev_probe,
>> +       .remove     = msm_pdev_remove,
>> +       .driver     = {
>> +               .owner  = THIS_MODULE,
>> +               .name   = "msm",
>> +               .pm     = &msm_pm_ops,
>> +       },
>> +       .id_table   = msm_id,
>> +};
>> +
>> +static int __init msm_drm_init(void)
>> +{
>> +       DBG("init");
>> +       hdmi_init();
>> +       return platform_driver_register(&msm_platform_driver);
>> +}
>> +
>> +static void __exit msm_drm_fini(void)
>> +{
>> +       DBG("fini");
>> +       platform_driver_unregister(&msm_platform_driver);
>> +       hdmi_fini();
>> +}
>> +
>> +module_init(msm_drm_init);
>> +module_exit(msm_drm_fini);
>> +
>> +MODULE_AUTHOR("Rob Clark <robdclark@xxxxxxxxx>");
>> +MODULE_DESCRIPTION("MSM DRM Driver");
>> +MODULE_LICENSE("GPL");
>
>
>> diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
>> new file mode 100644
>> index 0000000..a996490
>> --- /dev/null
>> +++ b/drivers/gpu/drm/msm/msm_gem.c
>> @@ -0,0 +1,441 @@
>> +/*
>> + * Copyright (C) 2013 Red Hat
>> + * Author: Rob Clark <robdclark@xxxxxxxxx>
>> + *
>> + * This program is free software; you can redistribute it and/or modify it
>> + * under the terms of the GNU General Public License version 2 as published by
>> + * the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful, but WITHOUT
>> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
>> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
>> + * more details.
>> + *
>> + * You should have received a copy of the GNU General Public License along with
>> + * this program.  If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +
>> +#include <linux/spinlock.h>
>> +#include <linux/shmem_fs.h>
>> +
>> +#include "msm_drv.h"
>> +
>> +struct msm_gem_object {
>> +       struct drm_gem_object base;
>> +
>> +       struct list_head mm_list;
>> +
>> +       uint32_t flags;
>> +       struct page **pages;
>> +       struct sg_table *sgt;
>> +       void *vaddr;
>> +
>> +       struct {
>> +               // XXX
>> +               uint32_t iova;
>> +       } domain[NUM_DOMAINS];
>> +};
>> +#define to_msm_bo(x) container_of(x, struct msm_gem_object, base)
>> +
>> +/* called with dev->struct_mutex held */
>> +/* TODO move this into drm_gem.c */
>> +static struct page **attach_pages(struct drm_gem_object *obj)
>> +{
>> +       struct inode *inode;
>> +       struct address_space *mapping;
>> +       struct page *p, **pages;
>> +       int i, npages;
>> +
>> +       /* This is the shared memory object that backs the GEM resource */
>> +       inode = file_inode(obj->filp);
>> +       mapping = inode->i_mapping;
>> +
>> +       npages = obj->size >> PAGE_SHIFT;
>> +
>> +       pages = drm_malloc_ab(npages, sizeof(struct page *));
>> +       if (pages == NULL)
>> +               return ERR_PTR(-ENOMEM);
>> +
>> +       for (i = 0; i < npages; i++) {
>> +               p = shmem_read_mapping_page(mapping, i);
>> +               if (IS_ERR(p))
>> +                       goto fail;
>> +               pages[i] = p;
>> +       }
>> +
>> +       return pages;
>> +
>> +fail:
>> +       while (i--)
>> +               page_cache_release(pages[i]);
>> +
>> +       drm_free_large(pages);
>> +       return ERR_CAST(p);
>> +}
>> +
>> +static void detach_pages(struct drm_gem_object *obj, struct page **pages)
>> +{
>> +       int i, npages;
>> +
>> +       npages = obj->size >> PAGE_SHIFT;
>> +
>> +       for (i = 0; i < npages; i++) {
>> +               set_page_dirty(pages[i]);
>> +
>> +               /* Undo the reference we took when populating the table */
>> +               page_cache_release(pages[i]);
>> +       }
>> +
>> +       drm_free_large(pages);
>> +}
>> +
>> +
>> +/* called with dev->struct_mutex held */
>> +static struct page **get_pages(struct drm_gem_object *obj)
>> +{
>> +       struct msm_gem_object *msm_obj = to_msm_bo(obj);
>> +
>> +       if (!msm_obj->pages) {
>> +               struct page **p = attach_pages(obj);
>> +               int npages = obj->size >> PAGE_SHIFT;
>> +
>> +               if (IS_ERR(p)) {
>> +                       dev_err(obj->dev->dev, "could not get pages: %ld\n",
>> +                                       PTR_ERR(p));
>> +                       return p;
>> +               }
>> +               msm_obj->pages = p;
>> +               msm_obj->sgt = drm_prime_pages_to_sg(p, npages);
>> +       }
>> +
>> +       return msm_obj->pages;
>> +}
>> +
>> +static void put_pages(struct drm_gem_object *obj)
>> +{
>> +       struct msm_gem_object *msm_obj = to_msm_bo(obj);
>> +
>> +       if (msm_obj->pages) {
>> +               if (msm_obj->sgt) {
>> +                       sg_free_table(msm_obj->sgt);
>> +                       kfree(msm_obj->sgt);
>> +               }
>> +               detach_pages(obj, msm_obj->pages);
>> +               msm_obj->pages = NULL;
>> +       }
>> +}
>> +
>> +int msm_gem_mmap_obj(struct drm_gem_object *obj,
>> +               struct vm_area_struct *vma)
>> +{
>> +       struct msm_gem_object *msm_obj = to_msm_bo(obj);
>> +
>> +       vma->vm_flags &= ~VM_PFNMAP;
>> +       vma->vm_flags |= VM_MIXEDMAP;
>> +
>> +       if (msm_obj->flags & MSM_BO_WC) {
>> +               vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
>> +       } else if (msm_obj->flags & MSM_BO_UNCACHED) {
>> +               vma->vm_page_prot = pgprot_noncached(vm_get_page_prot(vma->vm_flags));
>> +       } else {
>> +               /*
>> +                * Shunt off cached objs to shmem file so they have their own
>> +                * address_space (so unmap_mapping_range does what we want,
>> +                * in particular in the case of mmap'd dmabufs)
>> +                */
>> +               fput(vma->vm_file);
>> +               get_file(obj->filp);
>> +               vma->vm_pgoff = 0;
>> +               vma->vm_file  = obj->filp;
>> +
>> +               vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
>> +       }
>> +
>> +       return 0;
>> +}
>> +
>> +int msm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
>> +{
>> +       int ret;
>> +
>> +       ret = drm_gem_mmap(filp, vma);
>> +       if (ret) {
>> +               DBG("mmap failed: %d", ret);
>> +               return ret;
>> +       }
>> +
>> +       return msm_gem_mmap_obj(vma->vm_private_data, vma);
>> +}
>> +
>> +int msm_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
>> +{
>> +       struct drm_gem_object *obj = vma->vm_private_data;
>> +       struct msm_gem_object *msm_obj = to_msm_bo(obj);
>> +       struct drm_device *dev = obj->dev;
>> +       struct page **pages;
>> +       unsigned long pfn;
>> +       pgoff_t pgoff;
>> +       int ret;
>> +
>> +       /* Make sure we don't parallel update on a fault, nor move or remove
>> +        * something from beneath our feet
>> +        */
>> +       mutex_lock(&dev->struct_mutex);
>> +
>> +       /* make sure we have pages attached now */
>> +       pages = get_pages(obj);
>> +       if (IS_ERR(pages)) {
>> +               ret = PTR_ERR(pages);
>> +               goto out;
>> +       }
>> +
>> +       /* We don't use vmf->pgoff since that has the fake offset: */
>> +       pgoff = ((unsigned long)vmf->virtual_address -
>> +                       vma->vm_start) >> PAGE_SHIFT;
>> +
>> +       pfn = page_to_pfn(msm_obj->pages[pgoff]);
>> +
>> +       VERB("Inserting %p pfn %lx, pa %lx", vmf->virtual_address,
>> +                       pfn, pfn << PAGE_SHIFT);
>> +
>> +       ret = vm_insert_mixed(vma, (unsigned long)vmf->virtual_address, pfn);
>> +
>> +out:
>> +       mutex_unlock(&dev->struct_mutex);
>> +       switch (ret) {
>> +       case 0:
>> +       case -ERESTARTSYS:
>> +       case -EINTR:
>> +               return VM_FAULT_NOPAGE;
>> +       case -ENOMEM:
>> +               return VM_FAULT_OOM;
>> +       default:
>> +               return VM_FAULT_SIGBUS;
>> +       }
>> +}
>> +
>> +/** get mmap offset */
>> +static uint64_t mmap_offset(struct drm_gem_object *obj)
>> +{
>> +       struct drm_device *dev = obj->dev;
>> +
>> +       WARN_ON(!mutex_is_locked(&dev->struct_mutex));
>> +
>> +       if (!obj->map_list.map) {
>> +               /* Make it mmapable */
>> +               int ret = drm_gem_create_mmap_offset(obj);
>> +
>> +               if (ret) {
>> +                       dev_err(dev->dev, "could not allocate mmap offset\n");
>> +                       return 0;
>> +               }
>> +       }
>> +
>> +       return (uint64_t)obj->map_list.hash.key << PAGE_SHIFT;
>> +}
>> +
>> +uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj)
>> +{
>> +       uint64_t offset;
>> +       mutex_lock(&obj->dev->struct_mutex);
>> +       offset = mmap_offset(obj);
>> +       mutex_unlock(&obj->dev->struct_mutex);
>> +       return offset;
>> +}
>> +
>> +int msm_gem_get_iova(struct drm_gem_object *obj, int id, uint32_t *iova)
>> +{
>> +       struct msm_gem_object *msm_obj = to_msm_bo(obj);
>> +       int ret = 0;
>> +
>> +       mutex_lock(&obj->dev->struct_mutex);
>> +       if (!msm_obj->domain[id].iova) {
>> +               struct msm_drm_private *priv = obj->dev->dev_private;
>> +               uint32_t offset = (uint32_t)mmap_offset(obj);
>> +               get_pages(obj);
>> +               ret = iommu_map_range(priv->iommus[id], offset,
>> +                               msm_obj->sgt->sgl, obj->size, IOMMU_READ);
>> +               msm_obj->domain[id].iova = offset;
>> +       }
>> +       mutex_unlock(&obj->dev->struct_mutex);
>> +
>> +       if (!ret)
>> +               *iova = msm_obj->domain[id].iova;
>> +
>> +       return ret;
>> +}
>> +
>> +void msm_gem_put_iova(struct drm_gem_object *obj, int id)
>> +{
>> +}
>> +
>> +int msm_gem_dumb_create(struct drm_file *file, struct drm_device *dev,
>> +               struct drm_mode_create_dumb *args)
>> +{
>> +       args->pitch = align_pitch(args->width, args->bpp);
>> +       args->size  = PAGE_ALIGN(args->pitch * args->height);
>> +       return msm_gem_new_handle(dev, file, args->size,
>> +                       MSM_BO_SCANOUT | MSM_BO_WC, &args->handle);
>> +}
>> +
>> +int msm_gem_dumb_destroy(struct drm_file *file, struct drm_device *dev,
>> +               uint32_t handle)
>> +{
>> +       /* No special work needed, drop the reference and see what falls out */
>> +       return drm_gem_handle_delete(file, handle);
>> +}
>> +
>> +int msm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev,
>> +               uint32_t handle, uint64_t *offset)
>> +{
>> +       struct drm_gem_object *obj;
>> +       int ret = 0;
>> +
>> +       /* GEM does all our handle to object mapping */
>> +       obj = drm_gem_object_lookup(dev, file, handle);
>> +       if (obj == NULL) {
>> +               ret = -ENOENT;
>> +               goto fail;
>> +       }
>> +
>> +       *offset = msm_gem_mmap_offset(obj);
>> +
>> +       drm_gem_object_unreference_unlocked(obj);
>> +
>> +fail:
>> +       return ret;
>> +}
>> +
>> +void *msm_gem_vaddr(struct drm_gem_object *obj)
>> +{
>> +       struct msm_gem_object *msm_obj = to_msm_bo(obj);
>> +       WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex));
>> +       if (!msm_obj->vaddr) {
>> +               struct page **pages = get_pages(obj);
>> +               if (IS_ERR(pages))
>> +                       return ERR_CAST(pages);
>> +               msm_obj->vaddr = vmap(pages, obj->size >> PAGE_SHIFT,
>> +                               VM_MAP, pgprot_writecombine(PAGE_KERNEL));
>> +       }
>> +       return msm_obj->vaddr;
>> +}
>> +
>> +#ifdef CONFIG_DEBUG_FS
>> +void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m)
>> +{
>> +       struct drm_device *dev = obj->dev;
>> +       struct msm_gem_object *msm_obj = to_msm_bo(obj);
>> +       uint64_t off = 0;
>> +
>> +       WARN_ON(!mutex_is_locked(&dev->struct_mutex));
>> +
>> +       if (obj->map_list.map)
>> +               off = (uint64_t)obj->map_list.hash.key;
>> +
>> +       seq_printf(m, "%08x: %2d (%2d) %08llx %p %d\n",
>> +                       msm_obj->flags, obj->name, obj->refcount.refcount.counter,
>> +                       off, msm_obj->vaddr, obj->size);
>> +}
>> +
>> +void msm_gem_describe_objects(struct list_head *list, struct seq_file *m)
>> +{
>> +       struct msm_gem_object *msm_obj;
>> +       int count = 0;
>> +       size_t size = 0;
>> +
>> +       list_for_each_entry(msm_obj, list, mm_list) {
>> +               struct drm_gem_object *obj = &msm_obj->base;
>> +               seq_printf(m, "   ");
>> +               msm_gem_describe(obj, m);
>> +               count++;
>> +               size += obj->size;
>> +       }
>> +
>> +       seq_printf(m, "Total %d objects, %zu bytes\n", count, size);
>> +}
>> +#endif
>> +
>> +void msm_gem_free_object(struct drm_gem_object *obj)
>> +{
>> +       struct drm_device *dev = obj->dev;
>> +       struct msm_gem_object *msm_obj = to_msm_bo(obj);
>> +       int id;
>> +
>> +       WARN_ON(!mutex_is_locked(&dev->struct_mutex));
>> +
>> +       list_del(&msm_obj->mm_list);
>> +
>> +       if (obj->map_list.map)
>> +               drm_gem_free_mmap_offset(obj);
>> +
>> +       if (msm_obj->vaddr)
>> +               vunmap(msm_obj->vaddr);
>> +
>> +       for (id = 0; id < ARRAY_SIZE(msm_obj->domain); id++) {
>> +               if (msm_obj->domain[id].iova) {
>> +                       struct msm_drm_private *priv = obj->dev->dev_private;
>> +                       uint32_t offset = (uint32_t)mmap_offset(obj);
>> +                       iommu_unmap_range(priv->iommus[id], offset, obj->size);
>> +               }
>> +       }
>> +
>> +       put_pages(obj);
>> +
>> +       drm_gem_object_release(obj);
>> +
>> +       kfree(obj);
>> +}
>> +
>> +/* convenience method to construct a GEM buffer object, and userspace handle */
>> +int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file,
>> +               uint32_t size, uint32_t flags, uint32_t *handle)
>> +{
>> +       struct drm_gem_object *obj;
>> +       int ret;
>> +
>> +       obj = msm_gem_new(dev, size, flags);
>> +       if (!obj)
>> +               return -ENOMEM;
>> +
>> +       ret = drm_gem_handle_create(file, obj, handle);
>> +
>> +       /* drop reference from allocate - handle holds it now */
>> +       drm_gem_object_unreference_unlocked(obj);
>> +
>> +       return ret;
>> +}
>> +
>> +struct drm_gem_object *msm_gem_new(struct drm_device *dev,
>> +               uint32_t size, uint32_t flags)
>> +{
>> +       struct msm_drm_private *priv = dev->dev_private;
>> +       struct msm_gem_object *msm_obj;
>> +       struct drm_gem_object *obj = NULL;
>> +       int ret;
>> +
>> +       size = PAGE_ALIGN(size);
>> +
>> +       msm_obj = kzalloc(sizeof(*msm_obj), GFP_KERNEL);
>> +       if (!msm_obj)
>> +               goto fail;
>> +
>> +       obj = &msm_obj->base;
>> +
>> +       ret = drm_gem_object_init(dev, obj, size);
>> +       if (ret)
>> +               goto fail;
>> +
>> +       msm_obj->flags = flags;
>> +
>> +       mutex_lock(&obj->dev->struct_mutex);
>> +       list_add(&msm_obj->mm_list, &priv->obj_list);
>> +       mutex_unlock(&obj->dev->struct_mutex);
>> +
>> +       return obj;
>> +
>> +fail:
>> +       if (obj)
>> +               drm_gem_object_unreference_unlocked(obj);
>> +
>> +       return NULL;
>> +}
>>
>
> Yay GEM.  No complaints here.
>
> Jordan
>



