Re: [PATCH] Fix spelling errors in Server Message Block

merged into cifs-2.6.git for-next


On Wed, Aug 7, 2024 at 11:53 AM Xiaxi Shen <shenxiaxi26@xxxxxxxxx> wrote:
>
> Fixed typos in various files under fs/smb/client/
>
> Signed-off-by: Xiaxi Shen <shenxiaxi26@xxxxxxxxx>
> ---
>  fs/smb/client/cifsglob.h  | 4 ++--
>  fs/smb/client/misc.c      | 2 +-
>  fs/smb/client/smbdirect.c | 8 ++++----
>  fs/smb/client/transport.c | 2 +-
>  4 files changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
> index f6d1f075987f..66677b8fc9be 100644
> --- a/fs/smb/client/cifsglob.h
> +++ b/fs/smb/client/cifsglob.h
> @@ -345,7 +345,7 @@ struct smb_version_operations {
>         /* connect to a server share */
>         int (*tree_connect)(const unsigned int, struct cifs_ses *, const char *,
>                             struct cifs_tcon *, const struct nls_table *);
> -       /* close tree connecion */
> +       /* close tree connection */
>         int (*tree_disconnect)(const unsigned int, struct cifs_tcon *);
>         /* get DFS referrals */
>         int (*get_dfs_refer)(const unsigned int, struct cifs_ses *,
> @@ -816,7 +816,7 @@ struct TCP_Server_Info {
>          * Protected by @refpath_lock and @srv_lock.  The @refpath_lock is
>          * mostly used for not requiring a copy of @leaf_fullpath when getting
>          * cached or new DFS referrals (which might also sleep during I/O).
> -        * While @srv_lock is held for making string and NULL comparions against
> +        * While @srv_lock is held for making string and NULL comparisons against
>          * both fields as in mount(2) and cache refresh.
>          *
>          * format: \\HOST\SHARE[\OPTIONAL PATH]
> diff --git a/fs/smb/client/misc.c b/fs/smb/client/misc.c
> index b28ff62f1f15..3fe5bfc389d0 100644
> --- a/fs/smb/client/misc.c
> +++ b/fs/smb/client/misc.c
> @@ -352,7 +352,7 @@ checkSMB(char *buf, unsigned int total_read, struct TCP_Server_Info *server)
>                                  * on simple responses (wct, bcc both zero)
>                                  * in particular have seen this on
>                                  * ulogoffX and FindClose. This leaves
> -                                * one byte of bcc potentially unitialized
> +                                * one byte of bcc potentially uninitialized
>                                  */
>                                 /* zero rest of bcc */
>                                 tmp[sizeof(struct smb_hdr)+1] = 0;
> diff --git a/fs/smb/client/smbdirect.c b/fs/smb/client/smbdirect.c
> index d74e829de51c..7bcc379014ca 100644
> --- a/fs/smb/client/smbdirect.c
> +++ b/fs/smb/client/smbdirect.c
> @@ -406,7 +406,7 @@ static void smbd_post_send_credits(struct work_struct *work)
>                         else
>                                 response = get_empty_queue_buffer(info);
>                         if (!response) {
> -                               /* now switch to emtpy packet queue */
> +                               /* now switch to empty packet queue */
>                                 if (use_receive_queue) {
>                                         use_receive_queue = 0;
>                                         continue;
> @@ -618,7 +618,7 @@ static struct rdma_cm_id *smbd_create_id(
>
>  /*
>   * Test if FRWR (Fast Registration Work Requests) is supported on the device
> - * This implementation requries FRWR on RDMA read/write
> + * This implementation requires FRWR on RDMA read/write
>   * return value: true if it is supported
>   */
>  static bool frwr_is_supported(struct ib_device_attr *attrs)
> @@ -2177,7 +2177,7 @@ static int allocate_mr_list(struct smbd_connection *info)
>   * MR available in the list. It may access the list while the
>   * smbd_mr_recovery_work is recovering the MR list. This doesn't need a lock
>   * as they never modify the same places. However, there may be several CPUs
> - * issueing I/O trying to get MR at the same time, mr_list_lock is used to
> + * issuing I/O trying to get MR at the same time, mr_list_lock is used to
>   * protect this situation.
>   */
>  static struct smbd_mr *get_mr(struct smbd_connection *info)
> @@ -2311,7 +2311,7 @@ struct smbd_mr *smbd_register_mr(struct smbd_connection *info,
>         /*
>          * There is no need for waiting for complemtion on ib_post_send
>          * on IB_WR_REG_MR. Hardware enforces a barrier and order of execution
> -        * on the next ib_post_send when we actaully send I/O to remote peer
> +        * on the next ib_post_send when we actually send I/O to remote peer
>          */
>         rc = ib_post_send(info->id->qp, &reg_wr->wr, NULL);
>         if (!rc)
> diff --git a/fs/smb/client/transport.c b/fs/smb/client/transport.c
> index adfe0d058701..6e68aaf5bd20 100644
> --- a/fs/smb/client/transport.c
> +++ b/fs/smb/client/transport.c
> @@ -1289,7 +1289,7 @@ compound_send_recv(const unsigned int xid, struct cifs_ses *ses,
>  out:
>         /*
>          * This will dequeue all mids. After this it is important that the
> -        * demultiplex_thread will not process any of these mids any futher.
> +        * demultiplex_thread will not process any of these mids any further.
>          * This is prevented above by using a noop callback that will not
>          * wake this thread except for the very last PDU.
>          */
> --
> 2.34.1
>
>


-- 
Thanks,

Steve
