Hello All,

Is it possible to pass the decoder output buffer directly to the color conversion routine? A snapshot of the render function follows.

    static GstFlowReturn
    render (GstBaseSink * bsink, GstBuffer * buf)
    {
      int i, w, h;
      GstOmapFbSink *omapfbsink = GST_OMAPFB_SINK(bsink);
      __uint8_t *fb = omapfbsink->framebuffer, *data = GST_BUFFER_DATA(buf);
      long iTime = 0;
      struct timeval tempo1, tempo2;

      if (omapfbsink->plane_info.enabled == 2) {
        omapfbsink->plane_info.enabled = 1;
        g_mutex_lock (omapfbsink->x_lock);
        gst_omapfbsink_update_plane (omapfbsink);
        g_mutex_unlock (omapfbsink->x_lock);
      }

      g_print ("\n Address in render function = %x\n",
          (unsigned int) GST_BUFFER_DATA(buf));

      /* If a buffer which wasn't supplied by us is given to us to render
         with, we need to copy to our buffer first so that memory alignment
         constraints are met. */
      if (data != omapfbsink->buffer
          && GST_BUFFER_SIZE(buf) <= omapfbsink->buffer_size) {
        memcpy (omapfbsink->buffer, data, GST_BUFFER_SIZE(buf));
        data = omapfbsink->buffer;
      }

      /* buffer_alloc gave a direct buffer, so we have nothing to do here... */
      if (omapfbsink->row_skip)
        return GST_FLOW_OK;

      switch (omapfbsink->image_format) {
        case GST_MAKE_FOURCC ('I', '4', '2', '0'):
          /* Convert to YUV422 and send to FB */
          h = GST_VIDEO_SINK_HEIGHT (omapfbsink);
          w = GST_VIDEO_SINK_WIDTH (omapfbsink);
          __uint8_t *y, *u, *v;
          y = data;
          u = y + w * h;
          v = u + w / 2 * h / 2;
          yuv420_to_yuv422 (fb, y, u, v, w & ~15, h, w, w / 2,
              omapfbsink->fixinfo.line_length);
          break;
        case GST_MAKE_FOURCC ('U', 'Y', 'V', 'Y'):
          /* Send to FB, taking into account line_length */
          w = 2 * GST_VIDEO_SINK_WIDTH (omapfbsink);
          for (i = 0; i < GST_VIDEO_SINK_HEIGHT (omapfbsink); i++) {
            memcpy (fb, data, w);
            fb += omapfbsink->fixinfo.line_length;
            data += w;
          }
          break;
      }

      return GST_FLOW_OK;
    }

Here the buf that is passed in is my decoder output buffer, with size = (width * height * 1.5). I would like to get rid of the memcpy in the function above (the one that copies the incoming data into omapfbsink->buffer). Is there any possibility to assign the buf pointer directly to the y pointer that later goes into the color conversion? I have put a sketch of what I mean at the end of this mail. I am lacking some understanding of buffer_alloc in the base sink (my rough reading of it is also sketched at the end). If I am wrong anywhere, please correct me and guide me in the right direction.

The other approach I was thinking of is a DMA transfer, but I am completely new to DMA and it will take me a long time to understand it and write the transfer function. If I can get some other solution, it will be very helpful for my project.

-Tejas.

From: Tejas [mailto:tejas at picus.in]
Sent: Friday, March 26, 2010 3:33 PM
To: 'gstreamer-embedded at lists.sourceforge.net'
Subject: Need Help On gst-omapfb built from openembedded

Hello All,

I am using the gst-omapfb plugin built from OpenEmbedded, which carries the X-overlay patch on top of the normal omapfb plugin, as my video sink. My normal pipeline is as follows:

    $ gst-launch-0.10 filesrc location=1.mp4 ! myparser ! mydecoder ! omapfbsink

The src pad of my decoder is set with the following caps:

    ("video/x-raw-yuv",
     "width", G_TYPE_INT, mpeg4dec->info.width,
     "height", G_TYPE_INT, mpeg4dec->info.height,
     "framerate", GST_TYPE_FRACTION, mpeg4dec->fps_nu, mpeg4dec->fps_de,
     "format", GST_TYPE_FOURCC, GST_MAKE_FOURCC ('I', '4', '2', '0'),
     "pixel-aspect-ratio", GST_TYPE_FRACTION, 1, 1,
     NULL)

omapfbsink accepts the I420 format, converts it to UYVY and copies it into the framebuffer. In this whole process it uses memcpy to copy data from the buffer pushed by my decoder into a buffer allocated locally. This memcpy in omapfbsink is time consuming. Instead of using memcpy I would like to use some other method, but I do not see how I can replace it.
If anyone can guide me on replacing the memcpy, I would be really grateful.

-Tejas.
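
P.S. This is roughly what I had in mind for the I420 case -- a minimal sketch only, not tested, assuming the decoder output is one contiguous I420 buffer of exactly width * height * 1.5 bytes and that yuv420_to_yuv422 has no special alignment requirement on its source pointers (which is exactly the constraint the comment in render() warns about, so I may be wrong here):

    /* Sketch: feed the decoder buffer straight into the color conversion,
     * skipping the copy into omapfbsink->buffer. Only valid if the
     * converter can read the decoder's memory directly. */
    case GST_MAKE_FOURCC ('I', '4', '2', '0'):
    {
      __uint8_t *y, *u, *v;

      h = GST_VIDEO_SINK_HEIGHT (omapfbsink);
      w = GST_VIDEO_SINK_WIDTH (omapfbsink);

      /* point into the incoming buffer instead of the local copy */
      y = GST_BUFFER_DATA (buf);
      u = y + w * h;
      v = u + w / 2 * h / 2;

      yuv420_to_yuv422 (fb, y, u, v, w & ~15, h, w, w / 2,
          omapfbsink->fixinfo.line_length);
      break;
    }

So the only change would be dropping the memcpy / data reassignment for this case and reading y, u and v out of buf itself. Whether that is actually legal on this hardware is what I am not sure about.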
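
P.P.S. And this is how I currently understand buffer_alloc is meant to remove the copy -- again only a rough sketch with a made-up function name, and it only helps if my decoder actually requests its output buffers with gst_pad_alloc_buffer() on its src pad instead of allocating them itself (which I still need to check):

    /* Sketch of a GstBaseSink buffer_alloc vfunc (GStreamer 0.10): hand the
     * decoder a buffer that wraps the sink's own aligned memory, so that
     * render() sees data == omapfbsink->buffer and skips the memcpy.
     * NOTE: hands out the same memory every time, so it assumes only one
     * buffer is in flight at a time. */
    static GstFlowReturn
    gst_omapfbsink_buffer_alloc (GstBaseSink * bsink, guint64 offset,
        guint size, GstCaps * caps, GstBuffer ** buf)
    {
      GstOmapFbSink *omapfbsink = GST_OMAPFB_SINK (bsink);
      GstBuffer *buffer;

      if (size > omapfbsink->buffer_size)
        return GST_FLOW_ERROR;    /* does not fit in the pre-allocated memory */

      buffer = gst_buffer_new ();
      GST_BUFFER_DATA (buffer) = omapfbsink->buffer;  /* aligned sink memory */
      GST_BUFFER_SIZE (buffer) = size;
      gst_buffer_set_caps (buffer, caps);

      *buf = buffer;
      return GST_FLOW_OK;
    }

with klass->buffer_alloc = gst_omapfbsink_buffer_alloc in the class_init, if I am reading the base sink documentation correctly.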