Re: [PATCH 08/13] lightnvm: pblk: Set proper read status in bio

On 3/4/19 2:45 PM, Javier González wrote:

On 4 Mar 2019, at 14.08, Matias Bjørling <mb@xxxxxxxxxxx> wrote:

On 3/4/19 1:51 PM, Igor Konopko wrote:
On 04.03.2019 13:14, Hans Holmberg wrote:
On Mon, Mar 4, 2019 at 10:48 AM Javier González <javier@xxxxxxxxxxx> wrote:
On 4 Mar 2019, at 10.35, Hans Holmberg <hans.ml.holmberg@xxxxxxxxxxxxx> wrote:

On Mon, Mar 4, 2019 at 9:03 AM Javier González <javier@xxxxxxxxxxx> wrote:
On 27 Feb 2019, at 18.14, Igor Konopko <igor.j.konopko@xxxxxxxxx> wrote:

Currently, in case of read errors, bi_status is not
set properly, which leads to returning improper data
to the higher layer. This patch fixes that by setting
the proper status in case of read errors.

The patch also removes an unnecessary WARN_ONCE(), which
does not make sense in that place, since the user bio is
not used for interaction with the drive and thus bi_status
will not be set there.

Signed-off-by: Igor Konopko <igor.j.konopko@xxxxxxxxx>
---
drivers/lightnvm/pblk-read.c | 11 +++++------
1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/drivers/lightnvm/pblk-read.c b/drivers/lightnvm/pblk-read.c
index 3789185144da..39c1d6ccaedb 100644
--- a/drivers/lightnvm/pblk-read.c
+++ b/drivers/lightnvm/pblk-read.c
@@ -175,11 +175,10 @@ static void pblk_read_check_rand(struct pblk *pblk, struct nvm_rq *rqd,
       WARN_ONCE(j != rqd->nr_ppas, "pblk: corrupted random request\n");
}

-static void pblk_end_user_read(struct bio *bio)
+static void pblk_end_user_read(struct bio *bio, int error)
{
-#ifdef CONFIG_NVM_PBLK_DEBUG
-     WARN_ONCE(bio->bi_status, "pblk: corrupted read bio\n");
-#endif
+     if (error && error != NVM_RSP_WARN_HIGHECC)
+             bio_io_error(bio);
       bio_endio(bio);
}

@@ -219,7 +218,7 @@ static void pblk_end_io_read(struct nvm_rq *rqd)
       struct pblk_g_ctx *r_ctx = nvm_rq_to_pdu(rqd);
       struct bio *bio = (struct bio *)r_ctx->private;

-     pblk_end_user_read(bio);
+     pblk_end_user_read(bio, rqd->error);
       __pblk_end_io_read(pblk, rqd, true);
}

@@ -292,7 +291,7 @@ static void pblk_end_partial_read(struct nvm_rq *rqd)
       rqd->bio = NULL;
       rqd->nr_ppas = nr_secs;

-     bio_endio(bio);
+     pblk_end_user_read(bio, rqd->error);
       __pblk_end_io_read(pblk, rqd, false);
}

--
2.17.1

This is by design. We do not report the read errors as in any other
block device - this is why we clone the read bio.

Could you elaborate on why not reporting read errors is a good thing in pblk?

Normal block devices do not report read errors on the completion path
unless it is a fatal error. This is actually not well understood by the
upper layers, which tend to assume that the device is completely broken.

So returning bogus data without even a warning is a preferred
solution? You want to force "the upper layers" to do checksumming?

It's fine to mask out NVM_RSP_WARN_HIGHECC, since that is just a
warning that OCSSD 2.0 adds. The data should still be good.
All other errors (see 4.6.1.2.1 in the NVMe 1.3 spec) indicate that
the command did not complete (as far as I can tell).
My approach was exactly that. In all cases other than WARN_HIGHECC we don't have valid data. Without calling bio_io_error() we give the other layers the impression that we read the data correctly, which is not the case.
I also see that this patch would not be the only user of the bio_io_error() API; other drivers such as md use it commonly.
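
(For reference, a minimal sketch of what such a completion helper boils down to; this is an illustration, not the pblk code itself, and example_end_user_read is a hypothetical name. bio_io_error() is the standard block-layer helper that sets bi_status to BLK_STS_IOERR and then ends the bio, so only one completion call is needed per path:)

        #include <linux/bio.h>
        #include <linux/lightnvm.h>     /* NVM_RSP_WARN_HIGHECC */

        static void example_end_user_read(struct bio *bio, int error)
        {
                if (error && error != NVM_RSP_WARN_HIGHECC)
                        bio_io_error(bio);      /* sets BLK_STS_IOERR and ends the bio */
                else
                        bio_endio(bio);         /* complete with bi_status untouched */
        }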

Yes agree. This is an actual error in pblk that lets it return bogus data.

I am not against returning an error, I am just saying that this is not
normal behavior on the read path.

The problem is that the upper layers might interpret this as the device
being completely broken, which is not true for a single spa failing. Think
for example of a host reading under mw_cunits - in reality this is not even
a device problem but a host bug, yet it might be treated as a fatal error.

Agreed, and the host should manage it. The drive shall return a fatal error when it is asked to return invalid data, e.g., on a read under mw_cunits.
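
(To illustrate the mw_cunits constraint being discussed, here is a hedged sketch, not pblk code: in OCSSD 2.0 the last mw_cunits sectors written to an open chunk may still sit in device cache and are not guaranteed to be readable, so a host could guard reads roughly like this. The helper name and the chunk write-pointer bookkeeping are hypothetical; only struct nvm_geo and its mw_cunits field come from <linux/lightnvm.h>:)

        #include <linux/lightnvm.h>     /* struct nvm_geo, mw_cunits */

        /* Hypothetical helper: is this sector of an open chunk safe to read? */
        static bool example_read_is_safe(struct nvm_geo *geo,
                                         u64 chunk_wp, u64 sector_in_chunk)
        {
                /* Sectors at or beyond the write pointer were never written. */
                if (sector_in_chunk >= chunk_wp)
                        return false;

                /* The last mw_cunits written sectors may still be cached in the device. */
                if (chunk_wp - sector_in_chunk <= geo->mw_cunits)
                        return false;

                return true;
        }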


Matias: I am surprised to see you answer this way - when I tried to
define a sane read error path with meaningful errors starting in the
spec and all the way up the stack you were the first one to argue for
reads to always succeed no matter what. In fact, using ZBC/ZAC as an
example...

What I objected to was having an error message for each type of error that could happen, instead of a single or a few error types that trigger the same set of recovery procedures on the host side.


