Test case discovery / extraction (MozTrap)

Hey, folks. So as part of the possible move to MozTrap (or, really, it'd
be useful for any other TCMS), I've come up with a truly hideous bash
script to discover and extract test cases from the wiki. Here it is, in
all its glory:

#!/bin/bash
# Requires https://github.com/dominictarr/JSON.sh available as json.sh
# pageid of Category:Test Cases
main_cat_id=15824

# pages directly in the top-level category
testcase_ids+=(`curl -s "https://fedoraproject.org/w/api.php?action=query&list=categorymembers&cmpageid=$main_cat_id&cmtype=page&cmlimit=200&cmprop=ids&redirects&format=json" | json.sh -b | cut -f2`)

# first-level subcategories
sub1_cat_ids=(`curl -s "https://fedoraproject.org/w/api.php?action=query&list=categorymembers&cmpageid=$main_cat_id&cmtype=subcat&cmlimit=200&cmprop=ids&redirects&format=json" | json.sh -b | cut -f2`)

echo "sub1_cat_ids are:"
printf -- '%s\n' "${sub1_cat_ids[@]}"

# for each first-level subcategory, collect its pages and its subcategories
for i in "${sub1_cat_ids[@]}"
do
    sub2_cat_ids+=(`curl -s "https://fedoraproject.org/w/api.php?action=query&list=categorymembers&cmpageid=$i&cmtype=subcat&cmlimit=100&cmprop=ids&redirects&format=json" | json.sh -b | cut -f2`)
    testcase_ids+=(`curl -s "https://fedoraproject.org/w/api.php?action=query&list=categorymembers&cmpageid=$i&cmtype=page&cmlimit=100&cmprop=ids&redirects&format=json" | json.sh -b | cut -f2`)
done

echo "sub2_cat_ids are:"
printf -- '%s\n' "${sub2_cat_ids[@]}"

# same again for the second-level subcategories
for i in "${sub2_cat_ids[@]}"
do
    sub3_cat_ids+=(`curl -s "https://fedoraproject.org/w/api.php?action=query&list=categorymembers&cmpageid=$i&cmtype=subcat&cmlimit=100&cmprop=ids&redirects&format=json" | json.sh -b | cut -f2`)
    testcase_ids+=(`curl -s "https://fedoraproject.org/w/api.php?action=query&list=categorymembers&cmpageid=$i&cmtype=page&cmlimit=100&cmprop=ids&redirects&format=json" | json.sh -b | cut -f2`)
done

echo "sub3_cat_ids are:"
printf -- '%s\n' "${sub3_cat_ids[@]}"

# and once more for the third level
for i in "${sub3_cat_ids[@]}"
do
    sub4_cat_ids+=(`curl -s "https://fedoraproject.org/w/api.php?action=query&list=categorymembers&cmpageid=$i&cmtype=subcat&cmlimit=100&cmprop=ids&redirects&format=json" | json.sh -b | cut -f2`)
    testcase_ids+=(`curl -s "https://fedoraproject.org/w/api.php?action=query&list=categorymembers&cmpageid=$i&cmtype=page&cmlimit=100&cmprop=ids&redirects&format=json" | json.sh -b | cut -f2`)
done

echo "sub4_cat_ids are:"
printf -- '%s\n' "${sub4_cat_ids[@]}"

echo "testcase_ids are:"
printf -- '%s\n' "${testcase_ids[@]}"

# dump each test case page out as a file named for its pageid
mkdir -p /home/adamw/local/test_cases
for i in "${testcase_ids[@]}"
do
    echo "Page title: `curl -s "https://fedoraproject.org/w/api.php?action=query&pageids=$i&prop=info&format=json" | json.sh -b | grep title | cut -f2`" > /home/adamw/local/test_cases/$i.mw
    echo "Test case contents:" >> /home/adamw/local/test_cases/$i.mw
    curl -s "https://fedoraproject.org/w/index.php?curid=$i&action=raw" >> /home/adamw/local/test_cases/$i.mw
done
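
(In case the `| json.sh -b | cut -f2` bit looks like voodoo: json.sh -b flattens the API's JSON into one tab-separated "path value" line per leaf, so the output looks roughly like this - illustrative, from memory, not captured output - and cut -f2 just grabs the values, which with cmprop=ids are the pageids:)

["query","categorymembers",0,"pageid"]	15830
["query","categorymembers",1,"pageid"]	15832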

...I'm not proud. Yes, this is asking a PHP webapp to produce JSON and
then parsing it in bash, which is one of the dumbest things you'll see
all week, but I suck at writing PHP from scratch. (Yes, yes, I also suck
at writing shell scripts, thanks.)

You need https://github.com/dominictarr/JSON.sh , and you'll probably
also want to change the hardcoded directory in the download loop at the
end. This whacks the MediaWiki API pretty hard, so there's probably no
point running it unless you want to improve it or something. I have it
running ATM, dumping all ~900 test cases in the wiki out to my system;
once it's done I'll tar them up and stick them somewhere for others to
access. The next step would be to write something to convert them to
MozTrap's mass import format.

It stops at four levels of nesting because that seems to be all we have;
sub4_cat_ids comes up empty. MediaWiki's API doesn't have any way to say
'give me all the pages in this category and all its subcategories' -
there's no way to ask it to solve the nesting problem for you - so you
pretty much have to do something like this. The 'progress reports'
aren't really necessary; they were just there to reassure me the thing
was working properly as I went along.

There are all kinds of potential weaknesses and bugs in the approach,
but it seems to work for the actual set of categories and test cases we
have in the Fedora wiki, and that's really all I cared about. It also
produces some dupes - there are ~150 test cases that appear in multiple
categories - but I figured it's just as easy to let the dupes get
downloaded twice as it would be to spend time fixing the script to
filter them out before the download step.
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net
