On Thu, Jun 30, 2016 at 07:04:06PM +0800, Anand Jain wrote:
>
> Thanks for review comments.
> more below..
>
> On 06/27/2016 05:29 PM, Eryu Guan wrote:
> > On Wed, Jun 22, 2016 at 07:01:54PM +0800, Anand Jain wrote:
> > >
> > >
> > > On 06/21/2016 09:31 PM, Eryu Guan wrote:
> > > > On Wed, Jun 15, 2016 at 04:48:47PM +0800, Anand Jain wrote:
> > > > > From: Anand Jain <Anand.Jain@xxxxxxxxxx>
> > > > >
> > > > > The test does the following:
> > > > >   Initialize a RAID1 with some data
> > > > >
> > > > >   Re-mount RAID1 degraded with _dev1_ and write up to
> > > > >   half of the FS capacity
> > > >
> > > > If test devices are big enough, this test consumes much longer test
> > > > time. I tested with a 15G scratch dev pool and this test ran ~200s on
> > > > my 4vcpu 8G memory test vm.
> > >
> > >   Right. Isn't that a good design, so that it gets tested differently
> > >   on different HW configs?
> >
> > Not in fstests. We should limit the run time of tests to an acceptable
> > amount; for the auto group it's within 5 minutes.
> >
> > However, the test time can be reduced by using a smaller vdisk.
> >
> > I think either limit the write size or _notrun if the $max_fs_size is
> > too big (say 30G).
>
> Fixed in v3 to have a fixed amount of scratch data.

Thanks! I've queued this patchset up, will let them go through some
testing.

Thanks,
Eryu
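
[Editor's note: the "limit the write size" approach discussed above can be sketched as below. This is not the actual committed test; the device size, the 2G cap, and the variable names are illustrative assumptions. In a real fstests test the device size would come from the scratch device (e.g. `blockdev --getsize64 $SCRATCH_DEV`), and the alternative is `_notrun` when the device exceeds a threshold.]

```shell
#!/bin/bash
# Sketch: bound the test's runtime by writing the smaller of
# "half the FS capacity" and a fixed cap, instead of always
# writing half of an arbitrarily large device.

# Hypothetical scratch device size (15G, matching the reviewer's setup).
dev_size=$((15 * 1024 * 1024 * 1024))

# Assumed fixed cap on the amount of data written (2G here).
cap=$((2 * 1024 * 1024 * 1024))

half=$((dev_size / 2))

# Write size is min(half the capacity, fixed cap), so runtime stays
# bounded no matter how big the scratch pool is.
write_size=$(( half < cap ? half : cap ))

echo "write_size=$write_size"
```

With a 15G device, half the capacity (7.5G) exceeds the 2G cap, so only 2G is written; on a small device the half-capacity figure wins and the original behavior is preserved.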