On Mon 2020-09-14 14:39:53, John Ogness wrote:
> Add support for extending the newest data block. For this, introduce
> a new finalization state (desc_finalized) denoting a committed
> descriptor that cannot be extended.
>
> Signed-off-by: John Ogness <john.ogness@xxxxxxxxxxxxx>

Looks good to me:

Reviewed-by: Petr Mladek <pmladek@xxxxxxxx>

A small clean-up seems possible, see below. But I would do it in a
follow-up patch to avoid yet another respin.

> diff --git a/kernel/printk/printk_ringbuffer.c b/kernel/printk/printk_ringbuffer.c
> index 911fbe150e9a..4e526c79f89c 100644
> --- a/kernel/printk/printk_ringbuffer.c
> +++ b/kernel/printk/printk_ringbuffer.c
> +/*
> + * Try to resize an existing data block associated with the descriptor
> + * specified by @id. If the resized data block should become wrapped, it
> + * copies the old data to the new data block. If @size yields a data block
> + * with the same or smaller size, the data block is left as is.
> + *
> + * Fail if this is not the last allocated data block, or if there is not
> + * enough space and it is not possible to make enough space.
> + *
> + * Return a pointer to the beginning of the entire data buffer or NULL on
> + * failure.
> + */
> +static char *data_realloc(struct printk_ringbuffer *rb,
> +			  struct prb_data_ring *data_ring, unsigned int size,
> +			  struct prb_data_blk_lpos *blk_lpos, unsigned long id)
> +{
> +	struct prb_data_block *blk;
> +	unsigned long head_lpos;
> +	unsigned long next_lpos;
> +	bool wrapped;
> +
> +	/* Reallocation only works if @blk_lpos is the newest data block. */
> +	head_lpos = atomic_long_read(&data_ring->head_lpos);
> +	if (head_lpos != blk_lpos->next)
> +		return NULL;
> +
> +	/* Keep track if @blk_lpos was a wrapping data block.
> +	 */
> +	wrapped = (DATA_WRAPS(data_ring, blk_lpos->begin) !=
> +		   DATA_WRAPS(data_ring, blk_lpos->next));
> +
> +	size = to_blk_size(size);
> +
> +	next_lpos = get_next_lpos(data_ring, blk_lpos->begin, size);
> +
> +	/* If the data block does not increase, there is nothing to do. */
> +	if (head_lpos - next_lpos < DATA_SIZE(data_ring)) {
> +		blk = to_block(data_ring, blk_lpos->begin);
> +		return &blk->data[0];
> +	}
> +
> +	if (!data_push_tail(rb, data_ring, next_lpos - DATA_SIZE(data_ring)))
> +		return NULL;
> +
> +	/* The memory barrier involvement is the same as data_alloc:A. */
> +	if (!atomic_long_try_cmpxchg(&data_ring->head_lpos, &head_lpos,
> +				     next_lpos)) { /* LMM(data_realloc:A) */
> +		return NULL;
> +	}
> +
> +	blk = to_block(data_ring, blk_lpos->begin);
> +
> +	if (DATA_WRAPS(data_ring, blk_lpos->begin) != DATA_WRAPS(data_ring, next_lpos)) {
> +		struct prb_data_block *old_blk = blk;
> +
> +		/* Wrapping data blocks store their data at the beginning. */
> +		blk = to_block(data_ring, 0);
> +
> +		/*
> +		 * Store the ID on the wrapped block for consistency.
> +		 * The printk_ringbuffer does not actually use it.
> +		 */
> +		blk->id = id;

Small clean-up: the "id" should already be there when the block has
already been wrapped before. In other words, even the assignment above
needs to be done only when (!wrapped).

> +
> +		if (!wrapped) {
> +			/*
> +			 * Since the allocated space is now in the newly
> +			 * created wrapping data block, copy the content
> +			 * from the old data block.
> +			 */
> +			memcpy(&blk->data[0], &old_blk->data[0],
> +			       (blk_lpos->next - blk_lpos->begin) - sizeof(blk->id));
> +		}
> +	}
> +
> +	blk_lpos->next = next_lpos;
> +
> +	return &blk->data[0];
> +}
> +
> /* Return the number of bytes used by a data block.
>  */
> static unsigned int space_used(struct prb_data_ring *data_ring,
> 			       struct prb_data_blk_lpos *blk_lpos)

Best Regards,
Petr

_______________________________________________
kexec mailing list
kexec@xxxxxxxxxxxxxxxxxxx
http://lists.infradead.org/mailman/listinfo/kexec