Re: [PATCH 3/4] drm/i915/gsc: add initial support for GSC proxy

I have a number of comments, but most are personal preferences, so I've labelled them nits.
I did catch a few minor coding-style issues and am assuming those need to be enforced as per i915/kernel rules?
That said, since they are so minor (or maybe not strictly enforced), I'm providing a conditional R-b, contingent on fixing those 4 issues
(i.e. the header-inclusion alphabetical ordering and the struct '{' brace position).

Reviewed-by: Alan Previn <alan.previn.teres.alexis@xxxxxxxxx>

On Wed, 2023-03-29 at 09:56 -0700, Ceraolo Spurio, Daniele wrote:
> The GSC uC needs to communicate with the CSME to perform certain
> operations. Since the GSC can't perform this communication directly
> on platforms where it is integrated in GT, i915 needs to transfer the
> messages from GSC to CSME and back.
> The proxy flow is as follow:
> 1 - i915 submits a request to GSC asking for the message to CSME
> 2 - GSC replies with the proxy header + payload for CSME
> 3 - i915 sends the reply from GSC as-is to CSME via the mei proxy
>     component
> 4 - CSME replies with the proxy header + payload for GSC
> 5 - i915 submits a request to GSC with the reply from CSME
> 6 - GSC replies either with a new header + payload (same as step 2,
>     so we restart from there) or with an end message.
> 
> After GSC load, i915 is expected to start the first proxy message chain,
> while all subsequent ones will be triggered by the GSC via interrupt.
> 
> To communicate with the CSME, we use a dedicated mei component, which
> means that we need to wait for it to bind before we can initialize the
> proxies. This usually happens quite fast, but given that there is a
> chance that we'll have to wait a few seconds the GSC work has been moved
> to a dedicated WQ to not stall other processes.
> 
> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@xxxxxxxxx>
> Cc: Alan Previn <alan.previn.teres.alexis@xxxxxxxxx>
> ---
>  drivers/gpu/drm/i915/Makefile                 |   1 +
>  drivers/gpu/drm/i915/gt/uc/intel_gsc_proxy.c  | 384 ++++++++++++++++++
>  drivers/gpu/drm/i915/gt/uc/intel_gsc_proxy.h  |  17 +
>  drivers/gpu/drm/i915/gt/uc/intel_gsc_uc.c     |  40 +-
>  drivers/gpu/drm/i915/gt/uc/intel_gsc_uc.h     |  14 +-
>  .../i915/gt/uc/intel_gsc_uc_heci_cmd_submit.h |   1 +
>  6 files changed, 452 insertions(+), 5 deletions(-)
>  create mode 100644 drivers/gpu/drm/i915/gt/uc/intel_gsc_proxy.c
>  create mode 100644 drivers/gpu/drm/i915/gt/uc/intel_gsc_proxy.h
> 
alan:snip

> new file mode 100644
> index 000000000000..ed8f68e78c26
> --- /dev/null
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_gsc_proxy.c
> @@ -0,0 +1,384 @@
> +#include "intel_gsc_proxy.h"
> +
> +// SPDX-License-Identifier: MIT
> +/*
> + * Copyright © 2022 Intel Corporation
alan: nit: 2022 - 2023?

> + */
> +
> +#include <linux/component.h>
> +#include "drm/i915_gsc_proxy_mei_interface.h"
alan: alphabetical
> +#include "drm/i915_component.h"
alan: snip

> +/*
> + * GSC proxy:
> + * The GSC uC needs to communicate with the CSME to perform certain operations.
> + * Since the GSC can't perform this communication directly on platforms where it
> + * is integrated in GT, i915 needs to transfer the messages from GSC to CSME
> + * and back. i915 must manually start the proxy flow after the GSC is loaded to
> + * signal to GSC that we're ready to handle its messages and allow it to query
> + * its init data from CSME; GSC will then trigger an HECI2 interrupt if it needs
> + * to send messages to CSME again.
> + * The proxy flow is as follow:
> + * 1 - i915 submits a request to GSC asking for the message to CSME
> + * 2 - GSC replies with the proxy header + payload for CSME
> + * 3 - i915 sends the reply from GSC as-is to CSME via the mei proxy component
> + * 4 - CSME replies with the proxy header + payload for GSC
> + * 5 - i915 submits a request to GSC with the reply from CSME
> + * 6 - GSC replies either with a new header + payload (same as step 2, so we
> + *     restart from there) or with an end message.
> + */
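alan: side note, no action needed: just to confirm my reading of the flow above, the driver-side loop is
effectively the below (pseudocode only, all helper names are made up and don't match the functions in this file):

	/* illustrative pseudocode of steps 1-6 above, not actual code */
	ret = send_initial_request_to_gsc(gsc);              /* step 1 */
	while (!ret && !gsc_reply_is_end_message(gsc)) {     /* steps 2 / 6 */
		ret = forward_gsc_reply_to_csme(gsc);        /* steps 3 + 4 */
		if (!ret)
			ret = send_csme_reply_to_gsc(gsc);   /* step 5 */
	}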
> +
> +/*
> + * The component should load quite quickly in most cases, but it could take
> + * a bit. Using a very big timeout just to cover the worst case scenario
> + */
> +#define GSC_PROXY_INIT_TIMEOUT_MS 20000
> +
> +/* the protocol supports up to 32K in each direction */
> +#define GSC_PROXY_BUFFER_SIZE SZ_32K
> +#define GSC_PROXY_CHANNEL_SIZE (GSC_PROXY_BUFFER_SIZE * 2)
> +#define GSC_PROXY_MAX_MSG_SIZE (GSC_PROXY_BUFFER_SIZE - sizeof(struct intel_gsc_mtl_header))
> +
> +/* FW-defined proxy header */
> +struct intel_gsc_proxy_header
> +{
alan: I thought we typically put the '{' on the same line as the struct name.
> 
alan:snip

> +struct gsc_proxy_msg
> +{
alan: shouldn't the '{' be up on the line above here too? (corrected form below)
> +	struct intel_gsc_mtl_header header;
> +	struct intel_gsc_proxy_header proxy_header;
> +} __packed;
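alan: i.e. so this one reads:

	struct gsc_proxy_msg {
		struct intel_gsc_mtl_header header;
		struct intel_gsc_proxy_header proxy_header;
	} __packed;

(and the same brace move for intel_gsc_proxy_header above)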
> +
> +static int proxy_send_to_csme(struct intel_gsc_uc *gsc)
> +{
> +	struct intel_gt *gt = gsc_uc_to_gt(gsc);
> +	struct i915_gsc_proxy_component *comp = gsc->proxy.component;
> +	struct intel_gsc_mtl_header *hdr;
> +	void *in = gsc->proxy.to_csme;
> +	void *out = gsc->proxy.to_gsc;
> +	u32 in_size;
> +	int ret;
> +
> +	/* CSME msg only includes the proxy */
> +	hdr = in;
> +	in += sizeof(struct intel_gsc_mtl_header);
> +	out += sizeof(struct intel_gsc_mtl_header);
> +
> +	in_size = hdr->message_size - sizeof(struct intel_gsc_mtl_header);
> +
> +	/* the message must contain at least the proxy header */
> +	if (in_size < sizeof(struct intel_gsc_proxy_header) ||
> +	    in_size > GSC_PROXY_MAX_MSG_SIZE) {
> +		gt_err(gt, "Invalid CSME message size: %u\n", in_size);
> +		return -EINVAL;
> +	}
> +
> +	ret = comp->ops->send(comp->mei_dev, in, in_size);
alan: probably not something to do as part of this series, but a future improvement would be a version of
this ops->send that takes a timeout period, as we've seen how these interfaces can hang in rare
corner cases (if we hit a bug elsewhere in the system). Since such a change would be more intrusive
and take more time, it can be left for a future follow-up. A rough sketch of what I mean is below.
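alan: purely illustrative, i.e. not the mei interface as it stands today; the extra timeout_ms
parameter and its semantics are entirely hypothetical:

	/*
	 * hypothetical variant of the proxy component ops where send/recv take a
	 * caller-supplied timeout instead of potentially blocking indefinitely
	 */
	struct i915_gsc_proxy_component_ops {
		struct module *owner;
		int (*send)(struct device *dev, const void *buf, size_t size,
			    unsigned long timeout_ms);
		int (*recv)(struct device *dev, void *buf, size_t size,
			    unsigned long timeout_ms);
	};

alan: the call above would then become comp->ops->send(comp->mei_dev, in, in_size, <timeout>), with the
mei side bailing out once the timeout expires instead of waiting forever.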

> +	if (ret < 0) {
> +		gt_err(gt, "Failed to send CSME message\n");
> +		return ret;
> +	}
> +
> +	ret = comp->ops->recv(comp->mei_dev, out, GSC_PROXY_MAX_MSG_SIZE);
alan: same as above: I believe a follow-up series should add a version of this interface
that takes a timeout as well.
> +	if (ret < 0) {
> +		gt_err(gt, "Failed to receive CSME message\n");
> +		return ret;
> +	}
> +
> +	return ret;
> +}
> +
> +static int submit_gsc_proxy_request(struct intel_gsc_uc *gsc, u32 size)
alan: nit: the earlier half of the proxy operation was named "proxy_send_to_csme", so I wonder if,
for no other reason than inferred duality, this function should be called "proxy_send_to_gsc"?
(Or perhaps that one should be "proxy_comm_to_csme" and this one "proxy_comm_to_gsc".)
Either way, since this is all internal to this one source file, treat it as a nit.
> +{
> +	struct intel_gt *gt = gsc_uc_to_gt(gsc);
> +	u32 *marker = gsc->proxy.to_csme; /* first dw of the reply header */
> +	u64 addr_in = i915_ggtt_offset(gsc->proxy.vma);
> +	u64 addr_out = addr_in + GSC_PROXY_BUFFER_SIZE;
> +	int err;
alan:snip






> +static int i915_gsc_proxy_component_bind(struct device *i915_kdev,
> +					 struct device *tee_kdev, void *data)
alan: nit: do we still call this "tee_kdev" for the case of gsc_proxy? maybe should be "mei_gsc_proxy_kdev"?

alan:snip

> +static void i915_gsc_proxy_component_unbind(struct device *i915_kdev,
> +					    struct device *tee_kdev, void *data)
alan: nit: same as last one on "tee_kdev".

alan:snip


> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_gsc_uc.c b/drivers/gpu/drm/i915/gt/uc/intel_gsc_uc.c
> index 2d5b70b3384c..b505b208f04b 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_gsc_uc.c
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_gsc_uc.c
> @@ -10,15 +10,30 @@
>  #include "intel_gsc_uc.h"
>  #include "intel_gsc_fw.h"
>  #include "i915_drv.h"
> +#include "intel_gsc_proxy.h"
alan: alphabetical ordering (same nit as the includes in intel_gsc_proxy.c)
alan:snip



