On Tue, Dec 09, 2014 at 09:28:32PM +0200, ville.syrjala@xxxxxxxxxxxxxxx wrote:
> From: Ville Syrjälä <ville.syrjala@xxxxxxxxxxxxxxx>
>
> Currently i915_pipe_crc_read() will drop pipe_crc->lock for the entire
> duration of the copy_to_user() loop, which means it'll access
> pipe_crc->entries without any protection. If another thread sneaks in
> and frees pipe_crc->entries the code will oops.
>
> Reorganize the code to hold the lock around everything except
> copy_to_user(). After the copy the lock is reacquired and the number
> of available entries is rechecked.
>
> Since this is a debug feature, simplify the error handling a bit by
> consuming the crc entry even if copy_to_user() would fail.
>
> Signed-off-by: Ville Syrjälä <ville.syrjala@xxxxxxxxxxxxxxx>

While you go to all this trouble to make the debugfs interface safe
against stupid userspace: set_source has a potential leak when
->entries is already allocated at the time we set a new source. An
unconditional kfree(pipe_crc->entries); before the assignment should
be all we need, since kfree() is a no-op on a NULL pointer. I'll
smash a patch on top.
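Roughly what I have in mind (just a sketch, untested; the kcalloc
context around the assignment is assumed from memory rather than
copied from the tree):

	entries = kcalloc(INTEL_PIPE_CRC_ENTRIES_NR, sizeof(*entries),
			  GFP_KERNEL);
	if (!entries)
		return -ENOMEM;

	spin_lock_irq(&pipe_crc->lock);
	kfree(pipe_crc->entries);	/* kfree(NULL) is a no-op */
	pipe_crc->entries = entries;
	pipe_crc->head = 0;
	pipe_crc->tail = 0;
	spin_unlock_irq(&pipe_crc->lock);

Freeing under the spinlock is fine here since kfree() never sleeps.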
Entire series merged, thanks.
-Daniel

> ---
>  drivers/gpu/drm/i915/i915_debugfs.c | 39 +++++++++++++++++++++----------------
>  1 file changed, 22 insertions(+), 17 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
> index d9d5f1f..1c61564 100644
> --- a/drivers/gpu/drm/i915/i915_debugfs.c
> +++ b/drivers/gpu/drm/i915/i915_debugfs.c
> @@ -2858,7 +2858,7 @@ i915_pipe_crc_read(struct file *filep, char __user *user_buf, size_t count,
>  	struct drm_i915_private *dev_priv = dev->dev_private;
>  	struct intel_pipe_crc *pipe_crc = &dev_priv->pipe_crc[info->pipe];
>  	char buf[PIPE_CRC_BUFFER_LEN];
> -	int head, tail, n_entries, n;
> +	int n_entries;
>  	ssize_t bytes_read;
>
>  	/*
> @@ -2890,36 +2890,39 @@ i915_pipe_crc_read(struct file *filep, char __user *user_buf, size_t count,
>  	}
>
>  	/* We now have one or more entries to read */
> -	head = pipe_crc->head;
> -	tail = pipe_crc->tail;
> -	n_entries = min((size_t)CIRC_CNT(head, tail, INTEL_PIPE_CRC_ENTRIES_NR),
> -			count / PIPE_CRC_LINE_LEN);
> -	spin_unlock_irq(&pipe_crc->lock);
> +	n_entries = count / PIPE_CRC_LINE_LEN;
>
>  	bytes_read = 0;
> -	n = 0;
> -	do {
> -		struct intel_pipe_crc_entry *entry = &pipe_crc->entries[tail];
> +	while (n_entries > 0) {
> +		struct intel_pipe_crc_entry *entry =
> +			&pipe_crc->entries[pipe_crc->tail];
>  		int ret;
>
> +		if (CIRC_CNT(pipe_crc->head, pipe_crc->tail,
> +			     INTEL_PIPE_CRC_ENTRIES_NR) < 1)
> +			break;
> +
> +		BUILD_BUG_ON_NOT_POWER_OF_2(INTEL_PIPE_CRC_ENTRIES_NR);
> +		pipe_crc->tail = (pipe_crc->tail + 1) & (INTEL_PIPE_CRC_ENTRIES_NR - 1);
> +
>  		bytes_read += snprintf(buf, PIPE_CRC_BUFFER_LEN,
>  				       "%8u %8x %8x %8x %8x %8x\n",
>  				       entry->frame, entry->crc[0],
>  				       entry->crc[1], entry->crc[2],
>  				       entry->crc[3], entry->crc[4]);
>
> -		ret = copy_to_user(user_buf + n * PIPE_CRC_LINE_LEN,
> -				   buf, PIPE_CRC_LINE_LEN);
> +		spin_unlock_irq(&pipe_crc->lock);
> +
> +		ret = copy_to_user(user_buf, buf, PIPE_CRC_LINE_LEN);
>  		if (ret == PIPE_CRC_LINE_LEN)
>  			return -EFAULT;
>
> -		BUILD_BUG_ON_NOT_POWER_OF_2(INTEL_PIPE_CRC_ENTRIES_NR);
> -		tail = (tail + 1) & (INTEL_PIPE_CRC_ENTRIES_NR - 1);
> -		n++;
> -	} while (--n_entries);
> +		user_buf += PIPE_CRC_LINE_LEN;
> +		n_entries--;
> +
> +		spin_lock_irq(&pipe_crc->lock);
> +	}
>
> -	spin_lock_irq(&pipe_crc->lock);
> -	pipe_crc->tail = tail;
>  	spin_unlock_irq(&pipe_crc->lock);
>
>  	return bytes_read;
> @@ -3456,6 +3459,8 @@ static int pipe_crc_set_source(struct drm_device *dev, enum pipe pipe,
>  	spin_lock_irq(&pipe_crc->lock);
>  	entries = pipe_crc->entries;
>  	pipe_crc->entries = NULL;
> +	pipe_crc->head = 0;
> +	pipe_crc->tail = 0;
>  	spin_unlock_irq(&pipe_crc->lock);
>
>  	kfree(entries);
> --
> 2.0.4

--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch