Author
Listed:
- Changjiang Jia
(City University of Hong Kong, Tat Chee Avenue, Hong Kong & National University of Defense Technology, Changsha, China)
- Lijun Mei
(IBM Research—China, Beijing, China)
- W.K. Chan
(City University of Hong Kong, Tat Chee Avenue, Hong Kong)
- Yuen Tak Yu
(City University of Hong Kong, Tat Chee Avenue, Hong Kong)
- T.H. Tse
(The University of Hong Kong, Pokfulam, Hong Kong)
Abstract
Many existing studies measure the effectiveness of test case prioritization techniques by their average performance over a set of test suites. However, in each regression test session, a real-world developer can typically afford to apply only one prioritization technique to one test suite to test a service once, even if this application results in an adverse scenario in which the actual performance of that session falls far below the average result achievable by the same technique over the same test suite for the same application. This indicates that assessing only the average performance of such a technique cannot give developers adequate confidence to apply it. The authors ask two questions: To what extent does the effectiveness of prioritization techniques in average scenarios correlate with that in adverse scenarios? Moreover, to what extent may a design factor of this class of techniques affect the effectiveness of prioritization in different types of scenarios? To the best of their knowledge, the authors report in this paper the first controlled experiment to study these two new research questions, using more than 300 million APFD and HMFD data points produced from 19 techniques, eight WS-BPEL benchmarks and 1000 test suites prioritized by each technique 1000 times. One main result reveals a strong and linear correlation between effectiveness in the average scenarios and effectiveness in the adverse scenarios. Another interesting result is that, for many pairs of levels of the same design factor, the relative strength within the pair changes significantly across the wide spectrum of prioritized test suites produced by the same techniques over the same test suite on the same benchmarks, and the results obtained in the average scenarios are closer to those at the more effective end of the spectrum than to the other end. This work provides the first piece of strong evidence for the research community to re-assess how they develop and validate their techniques in the average scenarios and beyond.
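The abstract's contrast between "average" and "adverse" scenarios rests on computing a rate-of-fault-detection metric over many prioritized orderings of the same test suite. As a minimal sketch, assuming the standard APFD (Average Percentage of Faults Detected) formula of Rothermel and Elbaum, the Python snippet below contrasts the mean APFD over many orderings (an average scenario) with a low percentile of the same distribution (an adverse scenario). The fault matrix, the use of random orderings as a stand-in for a prioritization technique, and the 1st-percentile cut-off are illustrative assumptions, not the paper's actual techniques, benchmarks or HMFD metric.

```python
# Hedged sketch: standard APFD over many orderings of one hypothetical test suite.
import random
import statistics

def apfd(order, fault_matrix):
    """APFD for one prioritized order.

    order        -- list of test-case indices in execution order
    fault_matrix -- fault_matrix[t][f] is True if test t detects fault f
    """
    n = len(order)                 # number of test cases
    m = len(fault_matrix[0])       # number of faults
    first_detect = []
    for f in range(m):
        # 1-indexed position of the first test in `order` that detects fault f
        pos = next(i + 1 for i, t in enumerate(order) if fault_matrix[t][f])
        first_detect.append(pos)
    return 1.0 - sum(first_detect) / (n * m) + 1.0 / (2 * n)

# Hypothetical 5-test, 3-fault detection matrix (every fault is detectable).
faults = [
    [True,  False, False],
    [False, True,  False],
    [False, False, True ],
    [True,  True,  False],
    [False, False, True ],
]

# Stand-in for applying a prioritization technique to the same suite many times.
scores = []
for _ in range(1000):
    order = random.sample(range(len(faults)), len(faults))
    scores.append(apfd(order, faults))

scores.sort()
print("average scenario:", statistics.mean(scores))      # mean APFD
print("adverse scenario:", scores[len(scores) // 100])    # ~1st-percentile APFD
```

The gap between the two printed values illustrates why a technique that looks good on average may still leave a developer exposed in an unlucky single session, which is the concern the study quantifies.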
Suggested Citation
Changjiang Jia & Lijun Mei & W.K. Chan & Yuen Tak Yu & T.H. Tse, 2015.
"Connecting the Average and the Non-Average: A Study of the Rates of Fault Detection in Testing WS-BPEL Services,"
International Journal of Web Services Research (IJWSR), IGI Global, vol. 12(3), pages 1-24, July.
Handle:
RePEc:igg:jwsr00:v:12:y:2015:i:3:p:1-24