[PATCH] New TKO: Test results comparison

The general idea of test case comparison is to compare a given test case
execution with other executions of the same test according to a
predefined list of attributes.
The comparison result is a list of test executions ordered by attribute
mismatch distance (the same idea as Hamming distance, just applied to
test attributes). A distance of 0 refers to test executions that have
identical attribute values, a distance of 1 to test executions that have
exactly one different attribute value, and so on.
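
To illustrate the idea, here is a minimal sketch of the distance
computation (hypothetical helper and attribute names, not code from the
patch itself):

    def attribute_distance(current_attrs, other_attrs):
        """Count how many attribute values differ between two executions."""
        distance = 0
        for key, value in current_attrs.items():
            if other_attrs.get(key) != value:
                distance += 1
        return distance

    # Distance 0: identical attribute values; distance 1: exactly one
    # attribute value differs, and so on.
    current = {'kvm_version': '88', 'guest': 'Fedora.11.64', 'arch': 'x86_64'}
    other = {'kvm_version': '87', 'guest': 'Fedora.11.64', 'arch': 'x86_64'}
    assert attribute_distance(current, other) == 1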

As for the patch code, I have added a new 'Test case comparison'
element under the Test Attributes element in the Test Details tab.
Opening this element queries the DB and outputs a list of compared
tests. I used the new_tko and Django database model infrastructure and
added a table to the tko database - test_comparison_attributes - that
stores a list of comparison attributes per test name, per user.
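
For reference, a row in that table might look like this (hypothetical
values; 'attributes' holds a comma-delimited list of attribute names and
'testset' holds a regular expression matched against test names):

    record = {
        'testname': 'kvm.migrate',          # test the preference applies to
        'owner': 'drusso',                  # preferences are kept per user
        'attributes': 'kvm_version,guest',  # attributes used for comparison
        'testset': 'kvm\..*',               # which past executions to compare
    }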

The general layout of the comparison analysis result is (a concrete
example follows below):
   number of mismatching attributes (nSuccess/nTotal)
     differing attributes (nSuccess/nTotal)
       attribute value (nSuccess/nTotal)
         Status
           testID   reason
           testID   reason
           ...
         Status (different)
           testID   reason
           ...
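
With hypothetical values, a rendered analysis could look like:

   1 mismatching attributes (3/4)
     kvm_version (3/4)
       87 (3/4)
         GOOD
           1021   completed successfully
           1033   completed successfully
           1047   completed successfully
         FAIL
           1052   guest failed to respond after migration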

A basic usage example can be viewed here:
http://picasaweb.google.com/qalogic.pub

The patches were re-worked against the latest SVN trunk, and all the TKO
database queries are now made through the Django database model.
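
For instance, the candidate executions for the comparison are gathered
with plain Django ORM calls along these lines (condensed from
_get_test_lists() in new_tko/tko/rpc_interface.py below):

    current = models.TestView.objects.filter(test_idx=long(test_id))[0]
    candidates = models.TestView.objects.exclude(
            test_idx=long(test_id)).distinct()
    pattern = re.compile('^%s$' % testset.strip())
    previous = [v for v in candidates if pattern.match(v.test_name)]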

Signed-off-by: Dror Russo <drusso@xxxxxxxxxx>
Signed-off-by: Lucas Meneghel Rodrigues <lmr@xxxxxxxxxx>
---
 frontend/client/src/autotest/public/TkoClient.html |    6 +
 frontend/client/src/autotest/public/tkoclient.css  |    9 +
 .../client/src/autotest/tko/TestDetailView.java    |  184 +++++++++-
 global_config.ini                                  |    1 +
 new_tko/tko/models.py                              |   11 +
 new_tko/tko/rpc_interface.py                       |  368 +++++++++++++++++++-
 .../030_add_test_comparison_attributes.py          |   20 +
 7 files changed, 579 insertions(+), 20 deletions(-)
 create mode 100644 tko/migrations/030_add_test_comparison_attributes.py

diff --git a/frontend/client/src/autotest/public/TkoClient.html b/frontend/client/src/autotest/public/TkoClient.html
index ce8a643..745c544 100644
--- a/frontend/client/src/autotest/public/TkoClient.html
+++ b/frontend/client/src/autotest/public/TkoClient.html
@@ -84,6 +84,11 @@
       <div id="test_detail_view" title="Test details">
         <span id="td_fetch" class="box-full">Fetch test by ID:
           <span id="td_fetch_controls"></span>
+        </span><br>
+        <span id="td_comp_attr" class="box-full">
+          Compare this execution using attributes <span id="td_comp_attr_control"></span>
+          with previous executions of <span id="td_comp_name_control"></span>
+          <span id="td_comp_save_control"></span>
         </span><br><br>
         <div id="td_title" class="title"></div><br>
         <div id="td_data">
@@ -110,6 +115,7 @@
           <span class="field-name">Test labels:</span>
           <span id="td_test_labels"></span><br>
           <span id="td_attributes"></span><br>
+          <span id="td_analysis"></span><br>
           <br>
 
           <span class="field-name">Key log files:</span>
diff --git a/frontend/client/src/autotest/public/tkoclient.css b/frontend/client/src/autotest/public/tkoclient.css
index 66ea371..868a31c 100644
--- a/frontend/client/src/autotest/public/tkoclient.css
+++ b/frontend/client/src/autotest/public/tkoclient.css
@@ -62,3 +62,12 @@ div.spreadsheet-cell-nonclickable {
   font-weight: bold;
 }
 
+.test-analysis .content {
+   white-space: pre;
+   font-family: monospace;
+}
+
+.test-analysis .header table {
+   font-weight: bold;
+}
+
diff --git a/frontend/client/src/autotest/tko/TestDetailView.java b/frontend/client/src/autotest/tko/TestDetailView.java
index 274f75a..cd70d08 100644
--- a/frontend/client/src/autotest/tko/TestDetailView.java
+++ b/frontend/client/src/autotest/tko/TestDetailView.java
@@ -7,11 +7,13 @@ import autotest.common.Utils;
 import autotest.common.ui.DetailView;
 import autotest.common.ui.NotifyManager;
 import autotest.common.ui.RealHyperlink;
+import autotest.common.StaticDataRepository;
 
 import com.google.gwt.json.client.JSONNumber;
 import com.google.gwt.json.client.JSONObject;
 import com.google.gwt.json.client.JSONString;
 import com.google.gwt.json.client.JSONValue;
+import com.google.gwt.json.client.JSONArray;
 import com.google.gwt.user.client.Window;
 import com.google.gwt.user.client.WindowResizeListener;
 import com.google.gwt.user.client.ui.Composite;
@@ -23,6 +25,10 @@ import com.google.gwt.user.client.ui.FlowPanel;
 import com.google.gwt.user.client.ui.HTML;
 import com.google.gwt.user.client.ui.Panel;
 import com.google.gwt.user.client.ui.ScrollPanel;
+import com.google.gwt.user.client.ui.Button;
+import com.google.gwt.user.client.ui.TextBox;
+import com.google.gwt.user.client.ui.ClickListener;
+import com.google.gwt.user.client.ui.Widget;
 import com.google.gwt.user.client.ui.SimplePanel;
 
 import java.util.ArrayList;
@@ -33,18 +39,23 @@ import java.util.List;
 class TestDetailView extends DetailView {
     private static final int NO_TEST_ID = -1;
 
-    private static final JsonRpcProxy logLoadingProxy = 
+    private static final JsonRpcProxy logLoadingProxy =
         new PaddedJsonRpcProxy(Utils.RETRIEVE_LOGS_URL);
 
+    private static JsonRpcProxy rpcProxy = JsonRpcProxy.getProxy();
     private int testId = NO_TEST_ID;
+    private String testName = null;
     private String jobTag;
     private List<LogFileViewer> logFileViewers = new ArrayList<LogFileViewer>();
     private RealHyperlink logLink = new RealHyperlink("(view all logs)");
     private RealHyperlink testLogLink = new RealHyperlink("(view test logs)");
+    protected static TextBox attrInput = new TextBox();
+    protected static TextBox testSetInput = new TextBox();
+    protected Button attrUpdateButton = new Button("Save");
     private Panel logPanel;
     private Panel attributePanel = new SimplePanel();
     
-    private class LogFileViewer extends Composite 
+    private class LogFileViewer extends Composite
                                 implements DisclosureHandler, WindowResizeListener {
         private DisclosurePanel panel;
         private ScrollPanel scroller; // ScrollPanel wrapping log contents
@@ -66,17 +77,17 @@ class TestDetailView extends DetailView {
                 handle(result);
             }
         };
-        
+
         public LogFileViewer(String logFilePath, String logFileName) {
             this.logFilePath = logFilePath;
             panel = new DisclosurePanel(logFileName);
             panel.addEventHandler(this);
             panel.addStyleName("log-file-panel");
             initWidget(panel);
-            
+
             Window.addWindowResizeListener(this);
         }
-        
+
         public void onOpen(DisclosureEvent event) {
             JSONObject params = new JSONObject();
             params.put("path", new JSONString(getLogUrl()));
@@ -88,7 +99,7 @@ class TestDetailView extends DetailView {
         private String getLogUrl() {
             return Utils.getLogsUrl(jobTag + "/" + logFilePath);
         }
-        
+
         public void handle(JSONValue value) {
             String logContents = value.isString().stringValue();
             if (logContents.equals("")) {
@@ -107,8 +118,8 @@ class TestDetailView extends DetailView {
         }
 
         /**
-         * Firefox fails to set relative widths correctly for elements with overflow: scroll (or 
-         * auto, or hidden).  Instead, it just expands the element to fit the contents.  So we use 
+         * Firefox fails to set relative widths correctly for elements with overflow: scroll (or
+         * auto, or hidden).  Instead, it just expands the element to fit the contents.  So we use
          * this trick to dynamically implement width: 100%.
          */
         private void setScrollerWidth() {
@@ -132,11 +143,119 @@ class TestDetailView extends DetailView {
 
         public void onClose(DisclosureEvent event) {}
     }
-    
+
+
+    /** This class handles the test case comparison analysis. */
+    private static class AnalysisTable extends Composite implements DisclosureHandler {
+        private ScrollPanel scroller;
+        private int testID;
+        private DisclosurePanel panel = new DisclosurePanel("Test case comparison");
+        private JsonRpcCallback rpcCallback = new JsonRpcCallback() {
+            @Override
+            public void onError(JSONObject errorObject) {
+                super.onError(errorObject);
+                String errorString = getErrorString(errorObject);
+                if (errorString.equals("")) {
+                    errorString = "No comparison analysis data available for this configuration.";
+                }
+                setStatusText(errorString);
+            }
+
+            @Override
+            public void onSuccess(JSONValue result) {
+                handle(result);
+            }
+
+        };
+
+        public AnalysisTable(int tid) {
+            this.testID = tid;
+            panel.addEventHandler(this);
+            panel.addStyleName("test-analysis");
+            initWidget(panel);
+        }
+
+        public void onOpen(DisclosureEvent event) {
+            JSONObject params = new JSONObject();
+            params.put("test", new JSONString(Integer.toString(testID)));
+            params.put("attr", new JSONString(attrInput.getText()));
+            params.put("testset", new JSONString(testSetInput.getText()));
+            rpcProxy.rpcCall("get_testcase_comparison_data", params, rpcCallback);
+            setStatusText("Loading (may take few seconds)...");
+        }
+
+        public void handle(JSONValue value) {
+            String contents = value.isString().stringValue();
+            if (contents.equals("")) {
+                setText("No analysis data for this test case.");
+            } else {
+                setText(contents);
+            }
+        }
+
+        private void setText(String text) {
+            panel.clear();
+            scroller = new ScrollPanel();
+            scroller.getElement().setInnerText(text);
+            panel.add(scroller);
+        }
+
+        private void setStatusText(String status) {
+            panel.clear();
+            panel.add(new HTML("<i>" + status + "</i>"));
+        }
+
+        public void onClose(DisclosureEvent event) {}
+
+    }
+
+
+    private void retrieveComparisonAttributes(String testName) {
+        attrInput.setText("");
+        testSetInput.setText("");
+        StaticDataRepository staticData = StaticDataRepository.getRepository();
+        JSONObject args = new JSONObject();
+        args.put("owner", new JSONString(staticData.getCurrentUserLogin()));
+        args.put("testname", new JSONString(testName));
+
+        rpcProxy.rpcCall("get_test_comparison_attr", args, new JsonRpcCallback() {
+            @Override
+            public void onSuccess(JSONValue result) {
+                JSONArray queries = result.isArray();
+                if (queries.size() == 0) {
+                    return;
+                }
+                assert queries.size() == 1;
+                JSONObject query = queries.get(0).isObject();
+                attrInput.setText(query.get("attributes").isString().stringValue());
+                testSetInput.setText(query.get("testset").isString().stringValue());
+                return;
+            }
+        });
+
+    }
+
+    public void saveComparisonAttribute() {
+        StaticDataRepository staticData = StaticDataRepository.getRepository();
+        JSONObject args = new JSONObject();
+        args.put("name", new JSONString(testName));
+        args.put("owner", new JSONString(staticData.getCurrentUserLogin()));
+        args.put("attr_token", new JSONString(attrInput.getText()));
+        args.put("testset_token", new JSONString(testSetInput.getText()));
+        rpcProxy.rpcCall("add_test_comparison_attr", args, new JsonRpcCallback() {
+            @Override
+            public void onSuccess(JSONValue result) {
+                NotifyManager.getInstance().showMessage("Test comparison attributes saved");
+            }
+        });
+    }
+
+
+
     private static class AttributeTable extends Composite {
         private DisclosurePanel container = new DisclosurePanel("Test attributes");
         private FlexTable table = new FlexTable();
-        
+
         public AttributeTable(JSONObject attributes) {
             processAttributes(attributes);
             setupTableStyle();
@@ -149,7 +268,7 @@ class TestDetailView extends DetailView {
                 table.setText(0, 0, "No test attributes");
                 return;
             }
-            
+
             List<String> sortedKeys = new ArrayList<String>(attributes.keySet());
             Collections.sort(sortedKeys);
             for (String key : sortedKeys) {
@@ -159,7 +278,7 @@ class TestDetailView extends DetailView {
                 table.setText(row, 1, value);
             }
         }
-        
+
         private void setupTableStyle() {
             container.addStyleName("test-attributes");
         }
@@ -170,12 +289,24 @@ class TestDetailView extends DetailView {
         super.initialize();
 
         addWidget(attributePanel, "td_attributes");
+
+        addWidget(attrInput, getTdCompAttrControlId());
+        addWidget(testSetInput, getTdCompSetNameControlId());
+        addWidget(attrUpdateButton, getTdCompSaveControlId());
+
         logPanel = new FlowPanel();
         addWidget(logPanel, "td_log_files");
         testLogLink.setOpensNewWindow(true);
         addWidget(testLogLink, "td_view_logs_link");
         logLink.setOpensNewWindow(true);
         addWidget(logLink, "td_view_logs_link");
+
+        attrUpdateButton.addClickListener(new ClickListener() {
+            public void onClick(Widget sender) {
+                saveComparisonAttribute();
+            }
+        });
+
     }
 
     private void addLogViewers(String testName) {
@@ -209,10 +340,10 @@ class TestDetailView extends DetailView {
                     resetPage();
                     return;
                 }
-                
+                testName = test.get("test_name").isString().stringValue();
                 showTest(test);
             }
-            
+
             @Override
             public void onError(JSONObject errorObject) {
                 super.onError(errorObject);
@@ -220,7 +351,7 @@ class TestDetailView extends DetailView {
             }
         });
     }
-    
+
     @Override
     protected void setObjectId(String id) {
         try {
@@ -230,7 +361,7 @@ class TestDetailView extends DetailView {
             throw new IllegalArgumentException();
         }
     }
-    
+
     @Override
     protected String getObjectId() {
         if (testId == NO_TEST_ID) {
@@ -263,7 +394,19 @@ class TestDetailView extends DetailView {
     public String getElementId() {
         return "test_detail_view";
     }
-    
+
+    protected String getTdCompAttrControlId() {
+        return "td_comp_attr_control";
+    }
+
+    protected String getTdCompSetNameControlId() {
+        return "td_comp_name_control";
+    }
+
+    protected String getTdCompSaveControlId() {
+        return "td_comp_save_control";
+    }
+
     @Override
     public void display() {
         super.display();
@@ -273,7 +416,8 @@ class TestDetailView extends DetailView {
     protected void showTest(JSONObject test) {
         String testName = test.get("test_name").isString().stringValue();
         jobTag = test.get("job_tag").isString().stringValue();
-        
+
+        retrieveComparisonAttributes(testName);
         showText(testName, "td_test");
         showText(jobTag, "td_job_tag");
         showField(test, "job_name", "td_job_name");
@@ -295,7 +439,9 @@ class TestDetailView extends DetailView {
         JSONObject attributes = test.get("attributes").isObject();
         attributePanel.clear();
         attributePanel.add(new AttributeTable(attributes));
-        
+        RootPanel analysisPanel = RootPanel.get("td_analysis");
+        analysisPanel.clear();
+        analysisPanel.add(new AnalysisTable(testId));
         logLink.setHref(Utils.getRetrieveLogsUrl(jobTag));
         testLogLink.setHref(Utils.getRetrieveLogsUrl(jobTag) + "/" + testName);
         addLogViewers(testName);
diff --git a/global_config.ini b/global_config.ini
index 9de7865..8763a7f 100644
--- a/global_config.ini
+++ b/global_config.ini
@@ -11,6 +11,7 @@ query_timeout: 3600
 min_retry_delay: 20
 max_retry_delay: 60
 graph_cache_creation_timeout_minutes: 10
+test_comparison_maximum_attribute_mismatches: 4
 
 [AUTOTEST_WEB]
 host: localhost
diff --git a/new_tko/tko/models.py b/new_tko/tko/models.py
index 30644dd..778fd9b 100644
--- a/new_tko/tko/models.py
+++ b/new_tko/tko/models.py
@@ -228,6 +228,17 @@ class IterationAttribute(dbmodels.Model, model_logic.ModelExtensions):
         db_table = 'iteration_attributes'
 
 
+class TestComparisonAttribute(dbmodels.Model, model_logic.ModelExtensions):
+    testname = dbmodels.CharField(max_length=255)
+    # TODO: change this to foreign key once DBs are merged
+    owner = dbmodels.CharField(max_length=80)
+    attributes = dbmodels.CharField(max_length=1024)
+    testset = dbmodels.CharField(max_length=1024)
+
+    class Meta:
+        db_table = 'test_comparison_attributes'
+
+
 class IterationResult(dbmodels.Model, model_logic.ModelExtensions):
     # see comment on IterationAttribute regarding primary_key=True
     test = dbmodels.ForeignKey(Test, db_column='test_idx', primary_key=True)
diff --git a/new_tko/tko/rpc_interface.py b/new_tko/tko/rpc_interface.py
index 3bc565d..05e5eeb 100644
--- a/new_tko/tko/rpc_interface.py
+++ b/new_tko/tko/rpc_interface.py
@@ -1,10 +1,11 @@
-import os, pickle, datetime, itertools, operator
+import os, pickle, datetime, itertools, operator, urllib2, re
 from django.db import models as dbmodels
 from autotest_lib.frontend import thread_local
 from autotest_lib.frontend.afe import rpc_utils, model_logic
 from autotest_lib.frontend.afe import readonly_connection
 from autotest_lib.new_tko.tko import models, tko_rpc_utils, graphing_utils
 from autotest_lib.new_tko.tko import preconfigs
+from autotest_lib.client.common_lib import global_config
 
 # table/spreadsheet view support
 
@@ -397,6 +398,52 @@ def delete_saved_queries(id_list):
     query.delete()
 
 
+def get_test_comparison_attr(**filter_data):
+    """
+    The attribute list for test comparison is stored in the
+    test_comparison_attributes db table, per user and per test name. This
+    function returns the database record (if one exists) that matches the
+    user name and test name provided in filter_data.
+
+    @param **filter_data: Dictionary of parameters that will be delegated to
+            TestComparisonAttribute.list_objects.
+    """
+    return rpc_utils.prepare_for_serialization(
+        models.TestComparisonAttribute.list_objects(filter_data))
+
+
+def add_test_comparison_attr(name, owner, attr_token=None, testset_token=""):
+    """
+    Adds or updates a record in the test comparison attributes db table.
+
+    @param name: Test name.
+    @param owner: Test owner.
+    """
+    testname = name.strip()
+    if attr_token:
+        # If testset is not defined, default to the test name (i.e. compare
+        # against previous executions of the given testname)
+        if not testset_token:
+            testset = testname
+        else:
+            testset = testset_token.strip()
+        existing_list = models.TestComparisonAttribute.objects.filter(
+                                              owner=owner, testname=testname)
+        if existing_list:
+            query_object = existing_list[0]
+            query_object.attributes = attr_token.strip()
+            query_object.testset = testset
+            query_object.save()
+            return query_object.id
+        return models.TestComparisonAttribute.add_object(
+                owner=owner, testname=testname,
+                attributes=attr_token.strip(),
+                testset=testset).id
+    else:
+        raise model_logic.ValidationError('No attributes defined for '
+                                          'comparison operation')
+
+
 # other
 def get_motd():
     return rpc_utils.get_motd()
@@ -465,3 +512,322 @@ def get_static_data():
     result['motd'] = rpc_utils.get_motd()
 
     return result
+
+
+def _get_test_lists(test_id, attr_keys, testset):
+    """
+    A test execution is a list containing header (ID, Status, Reason and
+    Attributes) values. This function searches the results database (tko)
+    and returns a tuple describing the comparison input:
+
+    @param test_id: Test ID.
+    @param attr_keys: List of test attribute keys (regular expressions).
+    @param testset: Regular expression whose matches select the set of tests.
+    @return: Tuple with the following elements:
+            * All attributes that matched the regexps in the attribute keys
+              provided.
+            * Test headers list.
+            * Current test execution to compare with past executions.
+            * List of previous test executions whose names match the
+              testset expression.
+    """
+    current_set = models.TestView.objects.filter(test_idx=long(test_id))[0]
+    (test_name, status, reason) = (current_set.test_name, current_set.status,
+                                   current_set.reason)
+
+    # From all previous executions, filter those whose name matches the regular
+    # expression provided by the user
+    previous_valid_executions = []
+    previous_valid_attributes = []
+    previous_set = (
+             models.TestView.objects.exclude(test_idx=long(test_id)).distinct())
+    p = re.compile('^%s$' % testset.strip())
+    for view in previous_set:
+        execution = [view.test_idx, view.test_name, view.status, view.reason]
+        attributes = [view.attributes, view.test_attributes]
+        if p.match(view.test_name):
+            previous_valid_executions.append(execution)
+            previous_valid_attributes.append(attributes)
+
+    if len(previous_valid_executions) == 0:
+        raise model_logic.ValidationError('No comparison data available for '
+                                          'this configuration.')
+
+    current_execution = [test_id, str(test_name), str(status), str(reason)]
+    current_execution_attr = [str(current_set.attribute),
+                              str(current_set.test_attributes)]
+
+    previous_executions = []
+    test_headers = ['ID', 'NAME', 'STATUS', 'REASON']
+
+    # Find all attributes for the comparison (including all matches if
+    # regexp specified)
+    attributes = []
+    for key in attr_keys:
+        valid_key = False
+        p = re.compile('^%s$' % key.strip())
+        for attr in current_execution_attr:
+            if p.match(attr):
+                test_headers.append(attr)
+                attributes.append(attr)
+                valid_key = True
+        if not valid_key:
+            raise model_logic.ValidationError("Attribute '%s' does "
+                                              "not exist in this test "
+                                              "execution." % key)
+
+    # Check that previous test contains all required attributes for comparison
+    gap = [a for a in attributes if a not in previous_valid_attributes]
+
+    if len(gap) == 0:
+        for row in previous_valid_executions:
+            # Just to keep the record, each row for previous_valid_executions
+            # has the format [set.test_idx, set.test_name, set.status,
+            # set.reason]
+            if row[0] != long(test_id):
+                test = [int(row[0]), str(row[1]), str(row[2]), str(row[3])]
+            for key in attributes:
+                new_set = previous_set.filter(test_idx=row[0])[0]
+                attr_key = new_set.attribute_value
+                attr_val = new_set.test_attributes
+                if str(attr_key) == key:
+                    if row[0] == long(test_id):
+                        current_execution.append(str(attr_val))
+                    else:
+                        test.append(str(attr_val))
+                    break
+            if row[0] != long(test_id):
+                previous_executions.append(test)
+
+    if len(previous_executions) == 0:
+        raise model_logic.ValidationError('No comparison data available '
+                                          'for this configuration.')
+
+    return (attributes, test_headers, current_execution, previous_executions)
+
+
+def _find_mismatches(headers, orig, new, valid_attr):
+    """
+    Finds the attribute mismatches between the two tests provided: orig and new.
+
+    @param headers: Test headers.
+    @param orig: First test to be compared.
+    @param new: Second test to be compared.
+    @param valid_attr: Test attributes to be considered valid.
+    @returns: Tuple with the number of mismatching attributes found, a
+            comma-joined string of the mismatching attribute names and a
+            comma-joined string of the mismatching attribute values.
+    """
+    count = 0
+    mismatched_headers = []
+    mismatched_values = []
+
+    for index, item in enumerate(orig):
+        if item != new[index] and headers[index] in valid_attr:
+            count += 1
+            mismatched_headers.append(headers[index])
+            mismatched_values.append(new[index])
+
+    return (count, ','.join(mismatched_headers), ','.join(mismatched_values))
+
+
+def _prepare_test_analysis_dict(test_headers, previous_test_executions,
+                                current_test_execution, passed, attributes,
+                                max_mismatches_allowed=2):
+    """
+    Prepares and returns a dictionary of comparison analysis data.
+
+    @param test_headers: List of test headers.
+    @param previous_test_executions: List of test executions.
+    @param current_test_execution: Test execution being visualized on TKO.
+    @param passed: List of passed tests.
+    @param attributes: List of test attributes.
+    @param max_mismatches_allowed: Maximum amount of mismatches to be
+            considered for analysis.
+    """
+    data = {}
+    counters = ['total', 'passed', 'pass_rate']
+    if len(previous_test_executions) > 0: # At least one test defined
+        for item in previous_test_executions:
+            (mismatches, attr_headers,
+             attr_values) = _find_mismatches(test_headers,
+                                             current_test_execution, item,
+                                             attributes)
+            if mismatches <= max_mismatches_allowed:
+                # Create dictionary keys if they do not exist and
+                # initialize all counters
+                if mismatches not in data:
+                    data[mismatches] = {'total': 0, 'passed': 0, 'pass_rate': 0}
+                if attr_headers not in data[mismatches]:
+                    data[mismatches][attr_headers] = {'total': 0, 'passed': 0,
+                                                      'pass_rate': 0}
+                if attr_values not in data[mismatches][attr_headers]:
+                    data[mismatches][attr_headers][attr_values] = {'total': 0,
+                                                                   'passed': 0,
+                                                                 'pass_rate': 0}
+                if (item[test_headers.index('STATUS')] not in
+                    data[mismatches][attr_headers][attr_values]):
+                    s = item[test_headers.index('STATUS')]
+                    data[mismatches][attr_headers][attr_values][s] = []
+
+                # Update all counters
+                testcase = [item[test_headers.index('ID')],
+                            item[test_headers.index('NAME')],
+                            item[test_headers.index('REASON')]]
+
+                s = item[test_headers.index('STATUS')]
+                data[mismatches][attr_headers][attr_values][s].append(testcase)
+                data[mismatches]['total'] += 1
+                data[mismatches][attr_headers]['total'] += 1
+                data[mismatches][attr_headers][attr_values]['total'] += 1
+
+                if item[test_headers.index('STATUS')] in passed:
+                    data[mismatches]['passed'] += 1
+                    data[mismatches][attr_headers]['passed'] += 1
+                    data[mismatches][attr_headers][attr_values]['passed'] += 1
+
+                p = float(data[mismatches]['passed'])
+                t = float(data[mismatches]['total'])
+                data[mismatches]['pass_rate'] = int(p/t * 100)
+
+                p = float(data[mismatches][attr_headers]['passed'])
+                t = float(data[mismatches][attr_headers]['total'])
+                data[mismatches][attr_headers]['pass_rate'] = int(p/t * 100)
+
+                p = float(data[mismatches][attr_headers][attr_values]['passed'])
+                t = float(data[mismatches][attr_headers][attr_values]['total'])
+                data[mismatches][attr_headers][attr_values]['pass_rate'] = (
+                                                                int(p/t * 100))
+
+    return (data, counters)
+
+
+def _make_content(test_id, current_test_execution, headers, counters, t_dict,
+                  attributes, max_mismatches_allowed):
+    """
+    Prepares the comparison analysis text to be presented in the frontend
+    GUI.
+
+    @param test_id: Test ID.
+    @param current_test_execution: Current test execution.
+    @param headers: Test headers.
+    @param counters: Auxiliary counters.
+    @param t_dict: Dictionary with the comparison analysis data.
+    @param attributes: Test attributes.
+    @param max_mismatches_allowed: Maximum number of mismatches allowed.
+    """
+    mismatches = t_dict.keys()
+    content = ('Test case comparison with the following attributes: %s\n' %
+               str(attributes))
+    content += ('(maximum allowed mismatching attributes = %d)\n' %
+                int(max_mismatches_allowed))
+
+    for mismatch in mismatches:
+        if mismatch not in counters:
+            content += ('\n%d mismatching attributes (%d/%d)\n' %
+                (mismatch, int(t_dict[mismatch]['passed']),
+                int(t_dict[mismatch]['total'])))
+        attribute_headers = t_dict[mismatch].keys()
+        for attr_header in attribute_headers:
+            if attr_header not in counters:
+                if int(mismatch) > 0:
+                    p = int(t_dict[mismatch][attr_header]['passed'])
+                    t = int(t_dict[mismatch][attr_header]['total'])
+                    content += '  %s (%d/%d)\n' % (attr_header, p, t)
+            attribute_values = t_dict[mismatch][attr_header].keys()
+            for attr_value in attribute_values:
+                if attr_value not in counters:
+                    if int(mismatch) > 0:
+                        p = int(
+                            t_dict[mismatch][attr_header][attr_value]['passed'])
+                        t = int(
+                            t_dict[mismatch][attr_header][attr_value]['total'])
+                        content += ('    %s (%d/%d)\n' % (attr_value, p, t))
+                    test_sets = (
+                            t_dict[mismatch][attr_header][attr_value].keys())
+                    for test_set in test_sets:
+                        if test_set not in counters:
+                            tc = (
+                            t_dict[mismatch][attr_header][attr_value][test_set])
+                            if len(tc) > 0:
+                                content += '      %s\n' % (test_set)
+                                links = []
+                                for t in tc:
+                                    link = '%d : %s : %s' % (t[0], t[1], t[2])
+                                    content += '       %s\n' % link
+
+    return content
+
+
+def get_testcase_comparison_data(test, attr, testset):
+    """
+    Test case comparison compares a given test execution with other executions
+    of the same test according to a predefined list of attributes. The result
+    is a report of test cases classified by status (GOOD, FAIL, etc.) and
+    attribute mismatch distance (the same idea as Hamming distance, just
+    applied to test attributes).
+
+    A distance of 0 refers to tests that have identical attribute values, a
+    distance of 1 to tests that have exactly one different attribute value,
+    and so forth.
+
+    The general layout of test case comparison result is:
+
+    number of mismatching attributes (nSuccess/nTotal)
+       differing attributes (nSuccess/nTotal)
+         attribute value (nSuccess/nTotal)
+           Status
+             test_id   reason
+             ...
+           Status (different)
+             test_id   reason
+             ...
+           ...
+
+    @param test: Test ID that we'll get comparison data for.
+    @param attr: Comma-delimited string of comparison attributes.
+    @param testset: Regular expression selecting the test names to compare.
+    @return: Test comparison analysis text that will be serialized.
+    """
+    result = {}
+    tid = None
+    attr_keys = None
+    attributes = None
+    if test:
+        tid = int(test)
+    if attr:
+        attr_keys = attr.replace(' ', '').split(',')
+
+    # process test comparison analysis
+    try:
+        c = global_config.global_config
+        max_mismatches_allowed = int(c.get_config_value('TKO',
+                                'test_comparison_maximum_attribute_mismatches'))
+        passed = ['GOOD']
+        if not tid:
+            raise model_logic.ValidationError('Test was not specified.')
+        if not attr_keys:
+            raise model_logic.ValidationError('At least one attribute must be '
+                                              'specified.')
+        if not testset:
+            raise model_logic.ValidationError('Previous test executions scope '
+                                              'must be specified.')
+
+        (attributes, test_headers, current_test_execution,
+         previous_test_executions) = _get_test_lists(tid, attr_keys, testset)
+
+        (test_comparison_dict,
+         counters) = _prepare_test_analysis_dict(test_headers,
+                                                 previous_test_executions,
+                                                 current_test_execution,
+                                                 passed, attributes,
+                                                 max_mismatches_allowed)
+
+        result = _make_content(tid, current_test_execution, test_headers,
+                               counters, test_comparison_dict, attributes,
+                               max_mismatches_allowed)
+
+    except urllib2.HTTPError:
+        result = 'Test comparison error!'
+
+    return rpc_utils.prepare_for_serialization(result)
diff --git a/tko/migrations/030_add_test_comparison_attributes.py b/tko/migrations/030_add_test_comparison_attributes.py
new file mode 100644
index 0000000..675ade1
--- /dev/null
+++ b/tko/migrations/030_add_test_comparison_attributes.py
@@ -0,0 +1,20 @@
+UP_SQL = """
+CREATE TABLE `test_comparison_attributes` (
+    `id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY,
+    `testname` varchar(255) NOT NULL,
+    `owner` varchar(80) NOT NULL,
+    `attributes` varchar(1024) NOT NULL,
+    `testset` varchar(1024) NOT NULL
+);
+"""
+
+DOWN_SQL = """
+DROP TABLE IF EXISTS `test_comparison_attributes`;
+"""
+
+def migrate_up(manager):
+    manager.execute(UP_SQL)
+
+
+def migrate_down(manager):
+    manager.execute(DOWN_SQL)
-- 
1.6.2.5
