Re: [RFC/PATCH] crypto: ccree - fix a race of enqueue_seq() in send_request_init()

Hi,

On Fri, Mar 11, 2022 at 10:19 AM Yoshihiro Shimoda
<yoshihiro.shimoda.uh@xxxxxxxxxxx> wrote:
>
> From: Dung Nguyen <dung.nguyen.zy@xxxxxxxxxxx>
>
> When loading ccree.ko, after the cipher algorithms are registered in
> cc_cipher_alloc(), the call chain cc_hash_alloc() ->
> send_request_init() -> enqueue_seq() pushes descriptors into the HW.
>
> On the other hand, if another thread calls into cipher
> encryption/decryption (cc_cipher_encrypt/decrypt) -> cc_send_request() ->
> cc_do_send_request() -> enqueue_seq() while send_request_init()
> is running, that thread also pushes descriptors into the HW.
> The two sequences can then overwrite each other's data.
>
> cc_do_send_request() takes mgr->hw_lock before calling enqueue_seq(),
> but send_request_init() does not. So, send_request_init() should also
> take mgr->hw_lock before calling enqueue_seq().
>
> In rare cases, this issue can be reproduced in the following way:
> - CONFIG_CRYPTO_MANAGER_DISABLE_TESTS=n
> - insmod ccree.ko & insmod tcrypt.ko

Thank you very much, Dung and Yoshihiro!

This is a very good catch and an issue we have missed indeed.

>
> Signed-off-by: Dung Nguyen <dung.nguyen.zy@xxxxxxxxxxx>
> [shimoda: revise the subject/description]
> Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@xxxxxxxxxxx>
> ---
>  I believe we should fix this race. But, as I wrote in the description
>  above, it only happens in rare cases. So, I marked this patch as RFC.
>
>  drivers/crypto/ccree/cc_request_mgr.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/drivers/crypto/ccree/cc_request_mgr.c b/drivers/crypto/ccree/cc_request_mgr.c
> index 887162df50f9..eef40022258b 100644
> --- a/drivers/crypto/ccree/cc_request_mgr.c
> +++ b/drivers/crypto/ccree/cc_request_mgr.c
> @@ -513,6 +513,7 @@ int send_request_init(struct cc_drvdata *drvdata, struct cc_hw_desc *desc,
>
>         set_queue_last_ind(drvdata, &desc[(len - 1)]);
>
> +       spin_lock_bh(&req_mgr_h->hw_lock);
>         /*
>          * We are about to push command to the HW via the command registers
>          * that may reference host memory. We need to issue a memory barrier
> @@ -520,6 +521,7 @@ int send_request_init(struct cc_drvdata *drvdata, struct cc_hw_desc *desc,
>          */
>         wmb();
>         enqueue_seq(drvdata, desc, len);
> +       spin_unlock_bh(&req_mgr_h->hw_lock);
>
>         /* Update the free slots in HW queue */
>         req_mgr_h->q_free_slots =
> --
> 2.25.1
>

Having said that, adding a lock here is not the best solution. To be
effective, the lock would have to be taken before the call to
cc_queues_status() and released only after the free slots have been
updated.
However, while doing so would ensure there is no race condition with
regard to writing to the hardware control registers, it does not deal
at all with the case where the hardware command FIFO is full due to
encryption/decryption requests.
This is by design: the whole purpose of a separate, init-time-only
send_request variant is to avoid these complexities, under the
assumption that all access to the hardware is serialised at init time.
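
Just to illustrate what I mean, here is a rough sketch of
send_request_init() with the critical section widened as described.
Helper and field names follow the existing cc_request_mgr.c code (the
register read at the end is recalled from cc_do_send_request() and may
not be verbatim), and even this version still does not handle a full
FIFO, so it is not what I am proposing:

/*
 * Illustrative sketch only, not a proposed patch: hw_lock is widened
 * to cover the queue-status check, the descriptor push and the
 * free-slot update.
 */
int send_request_init(struct cc_drvdata *drvdata, struct cc_hw_desc *desc,
                      unsigned int len)
{
        struct cc_req_mgr_handle *req_mgr_h = drvdata->request_mgr_handle;
        int rc;

        spin_lock_bh(&req_mgr_h->hw_lock);

        /* The free-slot check must happen under the same lock as the
         * enqueue, or another thread may consume the slots in between.
         */
        rc = cc_queues_status(drvdata, req_mgr_h, len);
        if (rc) {
                spin_unlock_bh(&req_mgr_h->hw_lock);
                return rc;
        }

        set_queue_last_ind(drvdata, &desc[(len - 1)]);

        /*
         * We are about to push command to the HW via the command registers
         * that may reference host memory. We need to issue a memory barrier
         * to make sure there are no outstanding memory writes.
         */
        wmb();
        enqueue_seq(drvdata, desc, len);

        /* Update the free slots in HW queue under the same lock */
        req_mgr_h->q_free_slots =
                cc_ioread(drvdata, CC_REG(DSCRPTR_QUEUE_CONTENT));

        spin_unlock_bh(&req_mgr_h->hw_lock);

        return 0;
}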

Therefore, a better solution is to switch the order of the algorithm
allocations so that hash is allocated first, before any other alg
that might use the hardware FIFO, thus guaranteeing synchronized
access and available FIFO space.

Can you please verify that the following patch indeed resolves the
issue for you?

diff --git a/drivers/crypto/ccree/cc_driver.c b/drivers/crypto/ccree/cc_driver.c
index 790fa9058a36..7d1bee86d581 100644
--- a/drivers/crypto/ccree/cc_driver.c
+++ b/drivers/crypto/ccree/cc_driver.c
@@ -529,24 +529,26 @@ static int init_cc_resources(struct platform_device *plat_dev)
                goto post_req_mgr_err;
        }

-       /* Allocate crypto algs */
-       rc = cc_cipher_alloc(new_drvdata);
+       /* hash must be allocated first due to use of send_request_init()
+        * and dependency of AEAD on it
+        */
+       rc = cc_hash_alloc(new_drvdata);
        if (rc) {
-               dev_err(dev, "cc_cipher_alloc failed\n");
+               dev_err(dev, "cc_hash_alloc failed\n");
                goto post_buf_mgr_err;
        }

-       /* hash must be allocated before aead since hash exports APIs */
-       rc = cc_hash_alloc(new_drvdata);
+       /* Allocate crypto algs */
+       rc = cc_cipher_alloc(new_drvdata);
        if (rc) {
-               dev_err(dev, "cc_hash_alloc failed\n");
-               goto post_cipher_err;
+               dev_err(dev, "cc_cipher_alloc failed\n");
+               goto post_hash_err;
        }

        rc = cc_aead_alloc(new_drvdata);
        if (rc) {
                dev_err(dev, "cc_aead_alloc failed\n");
-               goto post_hash_err;
+               goto post_cipher_err;
        }

        /* If we got here and FIPS mode is enabled
@@ -558,10 +560,10 @@ static int init_cc_resources(struct platform_device *plat_dev)
        pm_runtime_put(dev);
        return 0;

-post_hash_err:
-       cc_hash_free(new_drvdata);
 post_cipher_err:
        cc_cipher_free(new_drvdata);
+post_hash_err:
+       cc_hash_free(new_drvdata);
 post_buf_mgr_err:
        cc_buffer_mgr_fini(new_drvdata);
 post_req_mgr_err:
@@ -593,8 +595,8 @@ static void cleanup_cc_resources(struct platform_device *plat_dev)
                (struct cc_drvdata *)platform_get_drvdata(plat_dev);


Thanks again!
Gilad

-- 
Gilad Ben-Yossef
Chief Coffee Drinker

values of β will give rise to dom!



