Re: [PATCH 2/2] drm/msm/adreno: Add A6XX device support

On Thu, Feb 01, 2018 at 11:32:25AM +0100, Lucas Stach wrote:
> Hi Jordan,
> 
> just some drive-by comments:

Drive-by comments are the best.

> On Wednesday, 31.01.2018, 11:25 -0700, Jordan Crouse wrote:
> > Add support for the A6XX family of Adreno GPUs. The biggest addition
> > is the GMU (Graphics Management Unit) which takes over most of the
> > power management of the GPU itself but in an ironic twist of fate
> > needs a goodly amount of management itself. Add support for the
> > A6XX core code, the GMU and the HFI (hardware firmware interface)
> > queue that the CPU uses to communicate with the GMU.
> > 
> > Signed-off-by: Jordan Crouse <jcrouse@xxxxxxxxxxxxxx>
> > ---
> [...]
> > +static int a6xx_hfi_queue_read(struct a6xx_hfi_queue *queue, u32 *data,
> > +		u32 dwords)
> > +{
> > +	struct a6xx_hfi_queue_header *header = queue->header;
> > +	u32 i, hdr, index = header->read_index;
> > +
> > +	if (header->read_index == header->write_index) {
> > +		header->rx_request = 1;
> > +		return 0;
> > +	}
> > +
> > +	hdr = queue->data[index];
> > +
> > +	/*
> > +	 * If we are to assume that the GMU firmware is in fact a rational actor
> > +	 * and is programmed to not send us a larger response than we expect
> > +	 * then we can also assume that if the header size is unexpectedly large
> > +	 * that it is due to memory corruption and/or hardware failure. In this
> > +	 * case the only reasonable course of action is to BUG() to help harden
> > +	 * the failure.
> > +	 */
> > +
> > +	BUG_ON(HFI_HEADER_SIZE(hdr) > dwords);
> 
> Don't ever BUG the kernel due to a peripheral acting up, until you are
> really certain that it has corrupted memory it doesn't own, which would
> be written back to persistent storage. That's the only valid
> justification for a BUG.

Acknowledged. In the final version we'll try to zap the hardware and recover
cleanly, but I don't have that coded up yet. Don't worry, I wouldn't let it
merge with a BUG().
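
Roughly what I have in mind (just a sketch, the names and the recovery
path aren't final) is to reject the oversized message and let the caller
reset the GMU/GPU:

	/*
	 * Sketch only: refuse an oversized response instead of BUG()ing
	 * and let the caller kick off recovery.
	 */
	if (HFI_HEADER_SIZE(hdr) > dwords) {
		pr_err("HFI: response too large (%u > %u dwords)\n",
			HFI_HEADER_SIZE(hdr), dwords);
		return -EINVAL;
	}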

> > +
> > +	for (i = 0; i < HFI_HEADER_SIZE(hdr); i++) {
> > +		data[i] = queue->data[index];
> > +		index = (index + 1) % header->size;
> > +	}
> > +
> > +	header->read_index = index;
> > +	return HFI_HEADER_SIZE(hdr);
> > +}
> > +
> > +static int a6xx_hfi_queue_write(struct a6xx_gmu *gmu,
> > +	struct a6xx_hfi_queue *queue, u32 *data, u32 dwords)
> > +{
> > +	struct a6xx_hfi_queue_header *header = queue->header;
> > +	u32 i, space, index = header->write_index;
> > +
> > +	spin_lock(&queue->lock);
> > +
> > +	space = CIRC_SPACE(header->write_index, header->read_index,
> > +		header->size);
> > +	if (space < dwords) {
> > +		header->dropped++;
> > +		spin_unlock(&queue->lock);
> > +		return -ENOSPC;
> > +	}
> > +
> > +	for (i = 0; i < dwords; i++) {
> > +		queue->data[index] = data[i];
> > +		index = (index + 1) % header->size;
> > +	}
> > +
> > +	header->write_index = index;
> > +	spin_unlock(&queue->lock);
> > +
> > +	/* Make sure the command is written to the queue */
> > +	wmb();
> 
> The above memory barrier is unnecessary. The gmu_write below is a
> wrapper around writel, which already includes the write barrier.

Thanks.  I got this one and a few others I found too.
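
For anyone following along, gmu_write() is just a thin writel() wrapper,
roughly like this (the struct field and offset handling here are
approximate), so the MMIO write already carries the ordering:

	static inline void gmu_write(struct a6xx_gmu *gmu, u32 offset, u32 value)
	{
		/*
		 * writel() orders prior normal memory writes before the
		 * MMIO store, which is why the explicit wmb() is redundant.
		 */
		writel(value, gmu->mmio + (offset << 2));
	}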

> > +	gmu_write(gmu, REG_A6XX_GMU_HOST2GMU_INTR_SET, 0x01);
> > +	return 0;
> > +}
> 
> [...]
> > +static int a6xx_hfi_send_msg(struct a6xx_gmu *gmu, int id,
> > +		void *data, u32 size, u32 *payload, u32 payload_size)
> > +{
> > +	struct a6xx_hfi_queue *queue = &gmu->queues[HFI_COMMAND_QUEUE];
> > +	struct a6xx_hfi_response resp = { 0 };
> > +	int ret, dwords = size >> 2;
> > +	u32 seqnum;
> > +
> > +	seqnum = atomic_inc_return(&queue->seqnum) % 0xfff;
> > +
> > +	/* First dword of the message is the message header - fill it in */
> > +	*((u32 *) data) = (seqnum << 20) | (HFI_MSG_CMD << 16) |
> > +		(dwords << 8) | id;
> > +
> > +	init_completion(&resp.complete);
> > +	resp.id = id;
> > +	resp.seqnum = seqnum;
> > +
> > +	spin_lock_bh(&hfi_ack_lock);
> > +	list_add_tail(&resp.node, &hfi_ack_list);
> > +	spin_unlock_bh(&hfi_ack_lock);
> 
> What are you protecting against here by using the _bh spinlock
> variants?

We use a tasklet from the interrupt handler to process responses. The
commands are entirely serialized right now, so this is mostly wasted
cycles, but we are planning for a brave future where the commands
may be asynchronous.
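
Roughly, the ack side looks like this (a sketch, the handler name and
details aren't exact), which is why the submit path takes the lock with
the _bh variants:

	/* Runs as a tasklet (softirq context) off the GMU interrupt */
	static void a6xx_hfi_task(unsigned long data)
	{
		struct a6xx_hfi_response *resp;

		spin_lock(&hfi_ack_lock);
		list_for_each_entry(resp, &hfi_ack_list, node) {
			/*
			 * Match the incoming ack against resp->id and
			 * resp->seqnum, then complete(&resp->complete)
			 * to wake the waiter in a6xx_hfi_send_msg().
			 */
		}
		spin_unlock(&hfi_ack_lock);
	}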

> Regards,
> Lucas

Thanks,
Jordan
-- 
The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project