Vincent Mailhol <mailhol.vincent@xxxxxxxxxx> wrote on Wed, Jan 15, 2025 at 11:36 AM:
>
> >>>>> +static int nct6694_can_get_berr_counter(const struct net_device *ndev,
> >>>>> +					     struct can_berr_counter *bec)
> >>>>> +{
> >>>>> +	struct nct6694_can_priv *priv = netdev_priv(ndev);
> >>>>> +	struct nct6694_can_event *evt = priv->rx->event;
> >>>>> +	struct nct6694_cmd_header cmd_hd;
> >>>>> +	u8 mask = NCT6694_CAN_EVENT_REC | NCT6694_CAN_EVENT_TEC;
> >>>>> +	int ret;
> >>>>> +
> >>>>> +	guard(mutex)(&priv->lock);
> >>>>> +
> >>>>> +	cmd_hd = (struct nct6694_cmd_header) {
> >>>>> +		.mod = NCT6694_CAN_MOD,
> >>>>> +		.cmd = NCT6694_CAN_EVENT,
> >>>>> +		.sel = NCT6694_CAN_EVENT_SEL(priv->can_idx, mask),
> >>>>> +		.len = cpu_to_le16(sizeof(priv->rx->event))
> >>>>> +	};
> >>>>> +
> >>>>> +	ret = nct6694_read_msg(priv->nct6694, &cmd_hd, evt);
> >>>>> +	if (ret < 0)
> >>>>> +		return ret;
> >>>>
> >>>> You are holding the priv->lock mutex before calling
> >>>> nct6694_read_msg(). But nct6694_read_msg() then holds the
> >>>> nct6694->access_lock mutex. Why do you need a double mutex here? What
> >>>> kind of race scenario are you trying to prevent here?
> >>>>
> >>>
> >>> I think priv->lock needs to be held here to prevent priv->rx from
> >>> being assigned by other functions, and nct6694->access_lock ensures
> >>> that the nct6694_read_msg() transaction is completed.
> >>> But in this case, cmd_hd does not need to be in priv->lock's scope.
> >>
> >> So, the only reason for holding priv->lock is because priv->rx is shared
> >> between functions.
> >>
> >> struct nct6694_can_event is only 8 bytes. And you only need it for the
> >> lifetime of the function, so it can simply be declared on the stack:
> >>
> >>	struct nct6694_can_event evt;
> >>
> >> and with this, no more need to hold the lock. And the same thing also
> >> applies to the other functions.
> >>
> >> Here, by trying to optimize the memory for only a few bytes, you are
> >> getting a huge penalty on the performance by putting locks on all the
> >> functions. This is not a good tradeoff.
> >>
> >
> > Since nct6694_read_msg()/nct6694_write_msg() process URBs via
> > usb_bulk_msg(), the transferred data must not be located on the stack.
> > For more details about allocating buffers for transmitting data,
> > please refer to the link:
> > https://lore.kernel.org/linux-can/20241028-observant-gentle-doberman-0a2baa-mkl@xxxxxxxxxxxxxx/
>
> Ack, I forgot that you can not use stack memory in usb_bulk_msg().
>
> Then, instead, you can either:
>
>   - do a dynamic memory allocation directly in the function (good for
>     when you are outside of the hot path, for example struct
>     nct6694_can_setting)
>
>   - and for the other structures which are part of the hot path
>     (typically struct nct6694_can_frame) continue to use a dynamically
>     allocated buffer stored in your priv but change the type of
>     nct6694_can_tx and nct6694_can_rx from union to structures.
>
> And no more overlaps, thus no more need for the mutex.
>

Understood. In the next patch I will remove the unions and add the buffers
as members of the private structure, e.g.:

	struct nct6694_can_priv {
		struct can_priv can;
		...
		struct nct6694_can_frame tx;
		struct nct6694_can_frame rx;
	};

and do a dynamic memory allocation for struct nct6694_can_setting and
struct nct6694_can_information.

In addition, I would like to know your thoughts on how
struct nct6694_can_event[2] should be handled. It is used in both
nct6694_can_get_berr_counter() and nct6694_can_irq(), the latter being
called more frequently at runtime.

Thanks,
Ming
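
P.S. To make sure I understood the first option correctly, below is a rough
sketch of what nct6694_can_get_berr_counter() could look like with a per-call
allocation. This is only a sketch: it assumes the nct6694_read_msg() signature
from this patch, assumes the device still expects the two-entry event array
here, and elides the REC/TEC copy into bec since that part is unchanged.

static int nct6694_can_get_berr_counter(const struct net_device *ndev,
					struct can_berr_counter *bec)
{
	struct nct6694_can_priv *priv = netdev_priv(ndev);
	struct nct6694_can_event *evt;
	struct nct6694_cmd_header cmd_hd;
	u8 mask = NCT6694_CAN_EVENT_REC | NCT6694_CAN_EVENT_TEC;
	int ret;

	/* Per-call buffer: heap memory is fine for usb_bulk_msg() and is
	 * private to this call, so priv->lock is no longer needed here.
	 */
	evt = kcalloc(2, sizeof(*evt), GFP_KERNEL);
	if (!evt)
		return -ENOMEM;

	cmd_hd = (struct nct6694_cmd_header) {
		.mod = NCT6694_CAN_MOD,
		.cmd = NCT6694_CAN_EVENT,
		.sel = NCT6694_CAN_EVENT_SEL(priv->can_idx, mask),
		.len = cpu_to_le16(2 * sizeof(*evt))
	};

	ret = nct6694_read_msg(priv->nct6694, &cmd_hd, evt);
	if (ret < 0)
		goto out;

	/* copy REC/TEC from evt[] into bec->rxerr/bec->txerr as before */
	ret = 0;

out:
	kfree(evt);
	return ret;
}

If something like this looks reasonable for the cold path, then the only
remaining question is the hot-path nct6694_can_irq() usage mentioned above.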