On Tue, 22 Sep 2015, Christophe Fergeau wrote:
[...]
> What I'm getting at is that VideoBuffer seems to only be used from
> red_marshall_stream_data(), and that all the non-VideoEncoder code
> needs is
> - some data buffer
> - the size of that data buffer
> - some opaque pointer which manages this data
> - a function to free the opaque pointer when it's no longer needed.
>
> It seems to me that VideoBuffer could be stack-allocated (or even
> removed), and that the gstreamer backend could just return a GstBuffer
> directly with gst_buffer_unref() as its free function, rather than
> what is done currently where you create a GstVideoBuffer wrapping a
> GstBuffer, and then you have some more code to try and reuse existing
> GstVideoBuffer instances rather than allocating a new one.
>
> The GStreamer 1.0 situation is a bit more complicated as you need a
> GstBuffer and a GstMapInfo in the free function; I think I would use
> gst_mini_object_set_qdata() to attach the GstMapInfo data to the
> GstBuffer.

I don't think getting rid of the VideoBuffer structure is a good idea.
Instead of encode_frame() returning one easily extended structure, it
would have to return four values: a pointer to the data, the data size,
an opaque pointer and a callback. And the video encoder may still have
to allocate a structure to store all the information it needs to
properly release the frame buffer, as is the case for GStreamer 1.0.

> The needed data, data size, opaque pointer, free func could
> alternatively be returned by a new encoder vfunc,

This would only work if we assume that encode_frame() cannot be called
until the previous compressed frame has been released, or if we add a
way to track which compressed buffer we want the size of, which is
precisely what VideoBuffer* does for us.

> or this could also be all hidden in a video_encoder_marshall(...)
> vfunc

I'd rather have the video encoder only care about video encoding and
leave the decision of what to do with the compressed buffer to the
caller, i.e. leave the marshalling to the rest of the Spice code.

> pull_compressed_buffer() could do (something like):
>
>     buffer->opaque = gst_app_sink_pull_buffer(encoder->appsink);
>     buffer->free_func = gst_buffer_unref;
>     if (buffer->opaque) {
>         buffer->data = GST_BUFFER_DATA(video_buffer->opaque);
>         buffer->size = GST_BUFFER_SIZE(video_buffer->opaque);
>         buffer->free_func = gst_buffer_unref;
>         return VIDEO_ENCODER_FRAME_ENCODE_DONE;
>     }
>
> and red_marshall_stream_data() would do:
>
>     VideoBuffer buffer;
>     agent->video_encoder->encode_frame(..., &buffer);
>     spice_marshaller_add_ref_full(base_marshaller, buffer->data,
>                                   buffer->size, buffer->free_func,
>                                   buffer->opaque);

I'm not sure I follow what you're getting at here. gst_buffer_unref()
and spice_marshaller_item_free_func() don't take the same number of
parameters. The above spice_marshaller_add_ref_full() call would result
in calling gst_buffer_unref(buffer->data, buffer->opaque). That's why
red_release_video_encoder_buffer() is needed.

[...]

> > +    /* The default video buffer */
> > +    GstVideoBuffer *default_buffer;
>
> Do we really need a default video buffer rather than always allocating
> a new one? How much performance do we gain from that? Imo splitting
> this in a separate commit (or removing it) would make the whole patch
> much simpler (no need for refcounting in particular).

You're right that avoiding reallocations is probably overkill (for both
the mjpeg and gstreamer encoders). After all, we do allocate GstBuffer
and GstMemory structures with every run (although my understanding is
that GStreamer tries to cache those and that Spice does the same with
its display_stream structures). So I'll separate that part.
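For what it's worth, the signature mismatch can be illustrated with a small
self-contained sketch: the marshaller's item free callback takes two
arguments (data, opaque) while a backend release function like
gst_buffer_unref() takes only one, so a trampoline in the role of
red_release_video_encoder_buffer() bridges the two. All names, fields and
signatures below are assumptions for illustration, not the actual Spice or
GStreamer declarations.

```c
#include <stdint.h>
#include <stddef.h>

/* Assumed shape of spice_marshaller_item_free_func: two arguments. */
typedef void (*marshaller_free_func_t)(uint8_t *data, void *opaque);

/* Assumed shape of a backend release function such as gst_buffer_unref:
 * it only takes the opaque buffer handle. */
typedef void (*buffer_free_func_t)(void *buffer_opaque);

/* The single extensible structure that encode_frame() fills in. */
typedef struct {
    uint8_t *data;                /* the compressed frame data */
    size_t size;                  /* size of that data */
    void *opaque;                 /* e.g. a GstBuffer */
    buffer_free_func_t free_func; /* e.g. gst_buffer_unref */
} VideoBuffer;

/* Stand-in for gst_buffer_unref() so the sketch is runnable. */
static int unref_count = 0;
static void fake_buffer_unref(void *buffer_opaque)
{
    (void)buffer_opaque;
    unref_count++;
}

/* Trampoline with the marshaller's two-argument signature: it drops the
 * data pointer and forwards only the opaque handle to the backend's
 * one-argument release function. */
static void release_video_encoder_buffer(uint8_t *data, void *opaque)
{
    (void)data;
    VideoBuffer *buffer = opaque;
    buffer->free_func(buffer->opaque);
}
```

The marshaller would then be handed release_video_encoder_buffer() as the
free callback and the VideoBuffer pointer as the opaque argument, instead
of passing the backend's unref function directly.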
--
Francois Gouget <fgouget@xxxxxxxxxxxxxxx>
_______________________________________________
Spice-devel mailing list
Spice-devel@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/spice-devel