Re: [PATCH v5 00/14] dm-raid/md/raid: fix v6.7 regressions

Hi Benjamin,

On Mon, Feb 5, 2024 at 7:58 PM Benjamin Marzinski <bmarzins@xxxxxxxxxx> wrote:
>
> On Tue, Feb 06, 2024 at 09:36:18AM +0800, Yu Kuai wrote:
> > Hi!
> >
> > 在 2024/02/06 3:35, Benjamin Marzinski 写道:
> > > Could you run the test with something like
> > >
> > > # make check_local T=lvconvert-repair-raid.sh VERBOSE=1 > out 2>&1
> > >
> > > and post the output.
> >
> > Attached is the output from my VM.
>
> Instead of running the tests from the lvm2 git repo, if you run
>
> # make -C test install
>
> to install the tests, and then create a results directory and run the
> test from there, do you still see the error in the 6.6 kernel?
>
> # mkdir ~/results
> # cd ~/results
> # lvm2-testsuite --only lvconvert-repair-raid.sh
>
> Running the tests this way will test the installed lvm2 binaries on your
> system, instead of the ones in the lvm2 git repo. They may be compiled
> differently.

I am not able to get reliable results from shell/lvconvert-repair-raid.sh
either. On the 6.6.0 kernel, the test fails consistently; on the 6.8-rc1
kernel, it fails intermittently.

Could you please share more information about your test setup?
Specifically:
1. Which tree/branch/tag are you testing?
2. What's the .config used in the tests?
3. How do you run the test suite? One test at a time, or all of them
together?
4. How do you handle "test passes sometimes" cases?
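On point 4, one option is to run the flaky test repeatedly and record the
pass/fail ratio. A small helper like the sketch below (count_runs is just a
name I made up; the lvm2-testsuite invocation is the one from earlier in the
thread) could make intermittent failures easier to compare across kernels:

```shell
# Run a command n times and report how often it passed/failed.
# Usage: count_runs <n> <command...>
count_runs() {
    n=$1; shift
    pass=0; fail=0
    i=0
    while [ "$i" -lt "$n" ]; do
        if "$@" >/dev/null 2>&1; then
            pass=$((pass + 1))
        else
            fail=$((fail + 1))
        fi
        i=$((i + 1))
    done
    echo "pass=$pass fail=$fail"
}

# For example (assuming lvm2-testsuite is installed as described above):
# count_runs 20 lvm2-testsuite --only lvconvert-repair-raid.sh
```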

Thanks,
Song




