ASTERIx - Automatic System TEsting of inteRactive software applIcations
People
Blasi A. (Responsible)
Heydarov R. (Collaborator)
Mohebbi A. (Collaborator)
Terragni V. (Collaborator)
Abstract
In this project, we will investigate the problem of testing interactive software applications. We will define and develop a holistic approach to automatically test such applications, and to reveal and remove both the bugs that inevitably escape pre-deployment testing and the bugs that emerge in the field, thus reducing the impact of software failures.
Interactive applications are software systems that provide services to people who interact with them through different kinds of interfaces. They are deployed as concurrent desktop applications and as distributed web and mobile applications, which interact with people through graphical user interfaces (GUIs) and wearable devices. They are popular in many domains, including retail, education, management, finance, and entertainment. Interactive applications are commonly designed as concurrent systems that combine the shared-memory, message-passing and event-driven paradigms. Failures of interactive applications are unavoidable due to the combination of many factors: the complexity of the testing process, the limitations of current testing practices, the presence of concurrency failures that manifest non-deterministically, the heterogeneity of interactions with people and the environment, and the presence of execution conditions that emerge in the field and are impossible to reproduce and test before deployment. Failures may severely impact the business value of the applications, and may lead to substantial economic losses.
State-of-the-art testing practices do not address all the challenges of preventing failures of interactive applications:
(i) approaches for testing interactive applications sample the execution space mostly by referring to the structure of the interface, largely ignoring the application semantics, and the problem of testing the concurrent messages and events exchanged with wearable devices has so far been only marginally studied,
(ii) most approaches for testing concurrent systems target shared-memory systems, and only a few approaches consider the concurrency issues that derive from the message-passing and event-driven paradigms commonly used in interactive systems,
(iii) cost-effective test oracles mostly catch system crashes and regression failures, missing many relevant semantics-related problems,
(iv) testing approaches work before deployment, and can hardly deal with problems that emerge at runtime in the field.
In this project we will define and develop an effective and coherent approach for testing interactive applications, by addressing four main open issues that hinder the automatic testing of such applications:
- System testing: Generating system test cases that exercise interactive systems by interacting with the applications through GUIs, mobile and wearable devices. We will consider both the semantic aspects of user interactions and the concurrency scenarios that emerge from event-driven interactions. We will investigate dynamic model inference techniques to generate system test cases from implicitly available knowledge, and probabilistic analysis and machine learning techniques to learn from system executions and to test the effects of imprecise and noisy data from wearable devices (a minimal sketch of model-based test generation appears at the end of this section).
- Concurrency testing: Generating test cases and event interleavings for message-passing and event-driven concurrent systems. We will define concurrency testing techniques that address concurrency failures in message-passing and event-driven systems, and that generate test cases and event interleavings to exercise the interplay among different concurrency paradigms (see the interleaving-exploration sketch at the end of this section).
- Test oracles: Generating semantically relevant oracles. We will define techniques to generate test oracles from information provided as natural-language and semi-structured comments and annotations, by exploiting natural language processing, and from knowledge that becomes incrementally available while executing the application, by exploiting dynamic model inference and analysis (see the oracle-generation sketch at the end of this section).
- Field testing: Generating system test cases for field testing. We will define approaches that dynamically analyse information from field executions to identify emerging execution conditions and unpredictable environment interactions, and that generate test cases to be executed online to verify the new execution conditions.
We will study and evaluate the approaches in an integrated and coherent framework for automatic testing of interactive applications.
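To make the model-inference idea in the system-testing task concrete, here is a minimal sketch, assuming execution traces are available as (screen, event, next screen) steps; the toy traces and the random-walk generation strategy are illustrative assumptions, not the project's actual technique.

```python
# A minimal sketch of model-based GUI test generation. All names and the
# random-walk strategy are illustrative assumptions.
import random
from collections import defaultdict

def infer_model(traces):
    """Infer a finite-state GUI model from observed execution traces."""
    model = defaultdict(dict)          # screen -> {event: next_screen}
    for trace in traces:
        for screen, event, next_screen in trace:
            model[screen][event] = next_screen
    return model

def generate_test(model, start, length=10, seed=0):
    """Random walk over the inferred model; each walk is a candidate test."""
    rng = random.Random(seed)
    screen, test = start, []
    for _ in range(length):
        events = list(model.get(screen, {}))
        if not events:                 # dead end: end the test here
            break
        event = rng.choice(events)
        test.append(event)
        screen = model[screen][event]
    return test

# Two observed traces over a toy login GUI.
traces = [
    [("login", "type_credentials", "login"), ("login", "tap_submit", "home")],
    [("home", "open_settings", "settings"), ("settings", "tap_back", "home")],
]
print(generate_test(infer_model(traces), start="login"))
```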
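The concurrency-testing task hinges on exploring event interleavings. The sketch below, under the simplifying assumption that pending events can be replayed as sequential handler calls against a fresh state, enumerates every ordering and flags those that violate an application invariant; the event names and the invariant are hypothetical.

```python
# A minimal sketch of interleaving exploration for an event-driven system:
# replay every ordering of a set of pending events and report the orderings
# under which an application invariant fails. All names are illustrative.
from itertools import permutations

def explore_interleavings(events, make_state, invariant):
    """Run each permutation of `events` on a fresh state; collect failures."""
    failures = []
    for ordering in permutations(events):
        state = make_state()
        for _, handler in ordering:
            handler(state)
        if not invariant(state):
            failures.append([name for name, _ in ordering])
    return failures

# Toy example: a race between a 'save' event and a 'logout' event.
def save(state):   state["saved"] = state["logged_in"]
def logout(state): state["logged_in"] = False

failures = explore_interleavings(
    events=[("save", save), ("logout", logout)],
    make_state=lambda: {"logged_in": True, "saved": False},
    invariant=lambda s: s["saved"],    # the edit must end up saved
)
print(failures)   # [['logout', 'save']] -- saving after logout loses data
```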
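Finally, a minimal sketch of the oracle-generation task, assuming Javadoc-style `@throws` tags whose conditions follow a simple "if &lt;argument&gt; is &lt;value&gt;" pattern; real comments require full natural language processing, and the regex-based translation here is a deliberately simplistic stand-in.

```python
# A minimal sketch of oracle generation from semi-structured comments.
# The '@throws ... if <arg> is <value>' pattern is an illustrative assumption.
import re

THROWS = re.compile(r"@throws\s+(\w+)\s+if\s+(\w+)\s+is\s+(\w+)")

def comment_to_oracle(comment):
    """Translate an `@throws` tag into a (condition, expected_exception) oracle."""
    m = THROWS.search(comment)
    if m is None:
        return None
    exception, arg, value = m.groups()
    condition = f"{arg} is None" if value == "null" else f"{arg} == {value}"
    return condition, exception

print(comment_to_oracle("@throws IllegalArgumentException if size is null"))
# ('size is None', 'IllegalArgumentException')
```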