> > On 07/24/2018 01:25 PM, Frediano Ziglio wrote:
> > >>
> > >> In the past dev->msg was only allocated for cursor images, and
> > >> after use dev->msg size was reduced.
> > >>
> > >> Now, since commit dcc3f995d9f55, dev->msg is used for every frame.
> > >> There is little point in calling reallocate with a large dev->msg size
> > >> and then reallocate with a small size, for every frame.
> > >>
> > >> This patch keeps only the "enlarge buffer when needed" part.
> > >>
> > >> The drawback of this approach is that if a big buffer was used,
> > >> it is kept big (at least in the near future), even if
> > >> the current frames are smaller.
> > >>
> > >> Signed-off-by: Uri Lublin <uril@xxxxxxxxxx>
> > >
> > > Why RFC?
> >
> > So people can comment :)
>
> Honestly, on this ML an RFC is usually close to /dev/null...
>
> > I think the benefit (not re-allocating for each frame) is more valuable
> > than the drawback (keeping a large buffer).
>
> Optimizations without numbers are bets. Do you have numbers?

I got this systemtap script:

    probe process("/usr/lib64/libspice-server.so.1.12.4").statement("*@red-stream-device.c:136") {
        printf("size %d\n", $dev->hdr->size);
    }

(line 136 is the "dev->msg_pos = 0;" in

    if (dev->hdr_pos >= sizeof(dev->hdr)) {
        dev->hdr.type = GUINT16_FROM_LE(dev->hdr.type);
        dev->hdr.size = GUINT32_FROM_LE(dev->hdr.size);
        dev->msg_pos = 0;
    }

)

The numbers were usually quite small (mostly less than 2 KB).
But it was not doing much, honestly.

> > > Looking at the current code, it seems that for similar usage we use a
> > > static "small" (some KB) buffer, since most messages are small, and do
> > > a dynamic alloc/free for every other message.
> >
> > The static buffer is only used upon an error.
>
> I'm talking about message allocation in the channels, like
> common_alloc_recv_buf (server/common-graphics-channel.c); it is a quite
> common pattern in the SPICE server.
>
> > All messages are read into dev->msg->buf.
> >
> > Also, once streaming starts there are a lot of consecutive
> > "DATA" messages (frames).
>
> I know. One way I was thinking of is adding a timer (let's say 1 minute) on
> the first reallocation/increase, and shrinking the buffer once the timer
> triggers; at most it would resize once a minute.
>
> Still... this is all thinking without numbers!
> A simple set of message sizes and a reproducer just doing g_malloc/whatever
> would be enough.
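For reference, the channel pattern mentioned above is roughly this (a
simplified illustration, not the actual common_alloc_recv_buf code; the
buffer size and the names here are placeholders):

    #include <glib.h>
    #include <stdint.h>

    #define RECV_BUF_SIZE (16 * 1024)

    typedef struct {
        uint8_t recv_buf[RECV_BUF_SIZE]; /* static buffer for the common case */
    } Channel;

    /* Small messages reuse the per-channel static buffer; only the rare
     * large message pays for a dynamic allocation. */
    static uint8_t *
    alloc_recv_buf(Channel *channel, uint32_t size)
    {
        if (size <= sizeof(channel->recv_buf)) {
            return channel->recv_buf;
        }
        return g_malloc(size);
    }

    static void
    release_recv_buf(Channel *channel, uint8_t *buf)
    {
        if (buf != channel->recv_buf) {
            g_free(buf);
        }
    }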
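And to make the timer idea above concrete, a rough sketch of what it could
look like (plain GLib timeouts rather than the SPICE timer wrappers; the
struct and field names are invented):

    #include <glib.h>

    typedef struct {
        guint8 *msg;
        gsize msg_len;      /* current allocation size */
        gsize base_len;     /* the "normal" message size */
        guint shrink_timer; /* 0 when no shrink is pending */
    } Dev;

    /* Fires (at most) one minute after the first enlargement and gives the
     * memory back; a later large message simply re-arms the timer. Real
     * code would also have to make sure no partially read message still
     * needs the larger buffer. */
    static gboolean
    shrink_msg_buf(gpointer opaque)
    {
        Dev *dev = opaque;

        dev->msg = g_realloc(dev->msg, dev->base_len);
        dev->msg_len = dev->base_len;
        dev->shrink_timer = 0;
        return G_SOURCE_REMOVE;
    }

    /* Grow-only on the fast path; shrinking happens at most once a minute. */
    static void
    ensure_msg_buf(Dev *dev, gsize needed)
    {
        if (needed <= dev->msg_len) {
            return;
        }
        dev->msg = g_realloc(dev->msg, needed);
        dev->msg_len = needed;
        if (dev->shrink_timer == 0) {
            dev->shrink_timer = g_timeout_add_seconds(60, shrink_msg_buf, dev);
        }
    }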
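As for a reproducer, here is a stab at a stand-alone toy in that spirit: it
feeds the same synthetic size sequence (mostly small messages, with an
occasional big cursor-shape-sized one; the sizes are invented, not measured)
to both strategies and times them. Build with:
gcc repro.c $(pkg-config --cflags --libs glib-2.0)

    #include <glib.h>
    #include <stdio.h>

    #define N_MSGS   1000000
    #define BASE_LEN 64 /* stand-in for sizeof(*dev->msg) */

    static gsize
    next_size(guint i)
    {
        /* mostly < 2 KB, with a big message every 1000 */
        return (i % 1000 == 0) ? 256 * 1024 : 1 + i % 2048;
    }

    static gint64
    run(gboolean shrink_after_each)
    {
        guint8 *msg = g_malloc(BASE_LEN);
        gsize msg_len = BASE_LEN;
        gint64 start = g_get_monotonic_time();

        for (guint i = 0; i < N_MSGS; i++) {
            gsize size = next_size(i);
            if (size > msg_len) { /* "enlarge when needed" */
                msg = g_realloc(msg, size);
                msg_len = size;
            }
            msg[size - 1] = 0; /* touch the buffer */
            if (shrink_after_each && msg_len > BASE_LEN) {
                /* current behaviour: give the memory back immediately */
                msg = g_realloc(msg, BASE_LEN);
                msg_len = BASE_LEN;
            }
        }
        g_free(msg);
        return g_get_monotonic_time() - start;
    }

    int
    main(void)
    {
        printf("shrink after each message: %" G_GINT64_FORMAT " us\n", run(TRUE));
        printf("grow only:                 %" G_GINT64_FORMAT " us\n", run(FALSE));
        return 0;
    }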
> > Thanks,
> >     Uri.
> >
> > >> ---
> > >>  server/red-stream-device.c | 14 --------------
> > >>  1 file changed, 14 deletions(-)
> > >>
> > >> diff --git a/server/red-stream-device.c b/server/red-stream-device.c
> > >> index d293dc1cc..eda6e6690 100644
> > >> --- a/server/red-stream-device.c
> > >> +++ b/server/red-stream-device.c
> > >> @@ -175,14 +175,6 @@ stream_device_partial_read(StreamDevice *dev, SpiceCharDeviceInstance *sin)
> > >>       * the next message */
> > >>      if (handled) {
> > >>          dev->hdr_pos = 0;
> > >> -
> > >> -        // Reallocate message buffer to the minimum.
> > >> -        // Currently the only message that requires resizing is the cursor shape,
> > >> -        // which is not expected to be sent so often.
> > >> -        if (dev->msg_len > sizeof(*dev->msg)) {
> > >> -            dev->msg = g_realloc(dev->msg, sizeof(*dev->msg));
> > >> -            dev->msg_len = sizeof(*dev->msg);
> > >> -        }
> > >>      }
> > >>
> > >>      if (handled || dev->has_error) {
> > >> @@ -314,12 +306,6 @@ handle_msg_data(StreamDevice *dev, SpiceCharDeviceInstance *sin)
> > >>      }
> > >>
> > >>      /* make sure we have a large enough buffer for the whole frame */
> > >> -    /* ---
> > >> -     * TODO: Now that we copy partial data into the buffer, for each frame
> > >> -     * the buffer is allocated and freed (look for g_realloc in
> > >> -     * stream_device_partial_read).
> > >> -     * Probably better to just keep the larger buffer.
> > >> -     */
> > >>      if (dev->msg_pos == 0) {
> > >>          dev->frame_mmtime = reds_get_mm_time();
> > >>          if (dev->msg_len < dev->hdr.size) {

_______________________________________________
Spice-devel mailing list
Spice-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/spice-devel