> On 10 Aug 2018, at 10.04, Hans Holmberg <hans.ml.holmberg@xxxxxxxxxxxxx> wrote:
>
> On Fri, Aug 3, 2018 at 2:05 PM, Javier González <javier@xxxxxxxxxxx> wrote:
>> pblk guarantees write ordering at a chunk level through a per open chunk
>> semaphore. At this point, since we only have an open I/O stream for both
>> user and GC data, the semaphore is per parallel unit.
>>
>> Since metadata I/O is synchronous, the semaphore is not needed as
>> ordering is guaranteed. However, if the metadata scheme changes or
>> multiple streams are open, this guarantee might not be preserved.
>>
>> This patch makes sure that all writes go through the semaphore, even for
>> synchronous I/O. This is consistent with pblk's write I/O model. It also
>> simplifies maintenance since changes in the metadata scheme could cause
>> ordering issues.
>>
>> Signed-off-by: Javier González <javier@xxxxxxxxxxxx>
>> ---
>>  drivers/lightnvm/pblk-core.c | 14 ++++++++++++--
>>  drivers/lightnvm/pblk.h      |  1 +
>>  2 files changed, 13 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/lightnvm/pblk-core.c b/drivers/lightnvm/pblk-core.c
>> index 00984b486fea..160b54d26bfa 100644
>> --- a/drivers/lightnvm/pblk-core.c
>> +++ b/drivers/lightnvm/pblk-core.c
>> @@ -493,6 +493,16 @@ int pblk_submit_io_sync(struct pblk *pblk, struct nvm_rq *rqd)
>>         return nvm_submit_io_sync(dev, rqd);
>>  }
>>
>> +int pblk_submit_io_sync_sem(struct pblk *pblk, struct nvm_rq *rqd)
>> +{
>> +       if (rqd->opcode != NVM_OP_PWRITE)
>> +               pblk_submit_io_sync(pblk, rqd);
>> +
>> +       pblk_down_page(pblk, rqd->ppa_list, rqd->nr_ppas);
>
> This will only work if rqd->nr_ppas > 1, better check if rqd->nr_ppas
> is 1 and pass &ppa->ppa_addr on to pblk_down_page when needed.

For this particular case, we will always get > 1 ppas, but you're right,
it is more robust to do the check for future cases. I'll add that to V3.

Thanks!
Javier
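For readers following the review comment: in the nvm_rq layout, a multi-sector request carries its addresses through ppa_list, while a single-sector request stores the address inline in ppa_addr, so dereferencing ppa_list when nr_ppas == 1 is invalid. A minimal, self-contained sketch of the pointer-selection check Hans is asking for is below; the stub struct definitions and the helper name nvm_rq_to_ppa_list are illustrative stand-ins, not the kernel's actual definitions.

```c
#include <assert.h>
#include <stddef.h>

/* Stand-ins for the relevant nvm_rq fields; the real definitions
 * live in include/linux/lightnvm.h. */
struct ppa_addr {
    unsigned long long ppa;
};

struct nvm_rq {
    int nr_ppas;
    struct ppa_addr *ppa_list;  /* only valid when nr_ppas > 1 */
    struct ppa_addr ppa_addr;   /* inline address when nr_ppas == 1 */
};

/* The check the reviewer suggests: fall back to &rqd->ppa_addr for
 * single-sector requests instead of dereferencing ppa_list. */
static struct ppa_addr *nvm_rq_to_ppa_list(struct nvm_rq *rqd)
{
    return (rqd->nr_ppas > 1) ? rqd->ppa_list : &rqd->ppa_addr;
}
```

With this helper, the semaphore call can take the selected pointer regardless of request size, which is the robustness Javier agrees to fold into V3.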