On Tue, 2012-12-18 at 05:26 -0500, Kamil Paral wrote:
> > The point here is that 'does the install succeed?' and 'does the
> > installed system boot?' are really separate test cases. Say you
> > install an encrypted system, right? Say anaconda does everything
> > just right, but there's a bug in dracut and you can't enter the
> > passphrase on boot of the installed system.
> >
> > That's really nothing to do with installation. It's not a bug in
> > the installer. It shouldn't be reported against anaconda. So it
> > really shouldn't be part of the 'installation' test case. This was
> > the reason for splitting this 'startup' test case out on its own in
> > the first place - to make the distinction between 'failure during
> > install' and 'failure on boot' clearer. I'm not a fan of 'install'
> > test cases that include 'check the installed system boots'.
> >
> > I don't mind splitting it out further, at all, but I'm opposed to
> > just saying 'oh, the install test case covers it'; that's going
> > backwards.
>
> Actually I see it exactly the other way. I believe the expected
> result "the installed system initiates boot properly" should be part
> of _every_ installation test case. The reason is that you can't know
> whether the installation succeeded if you haven't tried to boot it.
> Anaconda could have reported success but failed to install grub, or
> something similar (that happened just a few days ago). Or it could
> have messed up some files on /boot. If you don't check that the
> system can enter grub, then load the kernel and initrd, and mount
> the root filesystem, you can't really say the installation was
> successful.
>
> If we don't provide this instruction in our installation test cases,
> but rather provide a single "system boots" test case, then we might
> miss some important bugs where the system fails to boot only in
> certain scenarios (certain installation types).
> People might perform 10 installation test cases and report success
> immediately after seeing the anaconda "finish" screen, without
> letting the system boot, and then test the system boot just _once_.
> That is far from ideal. Moreover, checking that system boot works is
> not a waste of time; it's very fast.
>
> I think what you want to see is a green color in an installation
> test case cell, and a red color in a boot test case cell, if there
> is a dracut bug, right? That would be nice, but it is not
> achievable unless we replace the expected result "installation
> succeeded" with "anaconda reported success" (those are different
> things). Yes, in that case we can separate installation and boot.
> But I don't believe what anaconda says, and I don't think we should;
> we should check it instead.

More or less. I don't think the problem is impossible. I recognize
your scenario, but still, there is a difference between a system that
doesn't boot because of a failure in the installer and a system that
doesn't boot for some other reason.

> Also, I think the test case separation makes sense only when
> different release criteria milestones are mixed. If test case A is
> Alpha and test case B is Beta, and they are quite separate things,
> then yes, two test cases. But if both are Alpha and they are very
> tightly related (testing one also tests the outcome of the other),
> why would we separate them? It violates Alpha and it can be a single
> red cell, no problem.
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | identi.ca: adamwfedora
http://www.happyassassin.net
-- 
test mailing list
test@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe: https://admin.fedoraproject.org/mailman/listinfo/test