
20 April 2023

AIST 2023

3rd International Workshop on Artificial Intelligence in Software Testing

Dublin, Ireland
Co-located with ICST 2023

Important Dates

  • Submission deadline: 27 January 2023 AoE
  • Notification of Acceptance: 15 February 2023
  • Camera-ready: 3 March 2023
  • Workshop: 20 April 2023

Paper Submission

Papers should be submitted via EasyChair. See the Call for Papers for more details.

Theme and Goals

The application of AI techniques in software testing is still in its early stages. In recent years, software developers have increasingly adopted novel techniques to ease the development cycle, particularly the testing phase, through autonomous testing and the optimization of repetitive and tedious activities.

AI can transform software testing by supporting efficient testing activities and increasing automation, which ultimately leads to a reduction in testing costs and improvements in software quality. The AIST workshop aims to gather researchers and practitioners to present, discuss, and foster collaboration on novel and up-to-date R&D focused on the application of AI in software testing, with an inclusive view of the many perspectives and topics under the AI umbrella.


Program

9:00 - 10:30: Opening and Research Papers 1

  • 9:00 - 9:15: Opening
  • 9:15 - 9:45: Nour Chetouane and Franz Wotawa, Generating concrete test cases from vehicle data using models obtained from clustering
  • 9:45 - 10:05: Aurora Ramírez, Mario Berrios, José Raúl Romero and Robert Feldt, Towards Explainable Test Case Prioritisation with Learning-to-Rank Models
  • 10:05 - 10:25: Daniel Zimmermann and Anne Koziolek, Automating GUI-based Software Testing with GPT-3

11:00 - 12:30: Keynote

Sebastiano Panichella, Testing and Development Challenges for Complex Cyber-Physical Systems: Insights from the COSMOS H2020 Project

14:00 - 15:30: Research Papers 2

  • 14:00 - 14:20: Felix Dobslaw and Robert Feldt, Similarities of Testing Programmed and Learnt Software
  • 14:20 - 14:50: Viraj Rohit Gala and Martin A. Schneider, Evaluating the Effectiveness of Attacks and Defenses on Machine Learning Through Adversarial Samples
  • 14:50 - 15:20: Frédéric Tamagnan, Fabrice Bouquet, Alexandre Vernotte and Bruno Legeard, Regression Test Generation by Usage Coverage Driven Clustering on User Traces

16:00 - 17:30: Discussion and Mind-Mapping

Topic: “Software Testing in 2030”


Keynote

Testing and Development Challenges for Complex Cyber-Physical Systems: Insights from the COSMOS H2020 Project

Sebastiano Panichella

Abstract: Over the past decade, the development of Cyber-Physical Systems (CPSs) has enabled significant advancements in healthcare, avionics, automotive, railway, and robotics. Notably, Unmanned Aerial Vehicles (UAVs) and Self-driving Cars (SDCs) have emerged as the frontrunners in the avionics and automotive sectors, showcasing autonomous capabilities through onboard cameras and sensors. These systems have opened doors to a range of applications, including crop monitoring, medical and food delivery, and 3D reconstruction of archaeological and space exploration sites. However, state-of-the-art technology still lacks solutions that can operate in real-life missions; limited testing support remains the biggest challenge.

This keynote will discuss the testing and development challenges faced by the COSMOS H2020 Project in the context of complex CPSs. COSMOS brings together a consortium of four academic and eight industrial partners, with organizations from the healthcare, avionics, automotive, utility, and railway sectors. The talk will focus on the studies conducted by COSMOS to identify the types of bugs affecting CPSs and the safety-critical issues of UAVs, along with the selection and prioritization strategies proposed for cost-effective regression testing of SDCs. Additionally, the talk will cover automated testing approaches for UAVs, addressing the issue of the “reality gap”.

The keynote will also provide success stories and lessons learned from applying these testing approaches in industrial settings, and outline future directions for generic CPSs and specific use cases such as UAVs and SDCs. Join us for an insightful discussion on the challenges and solutions of testing and developing CPSs in real-life scenarios.

Biography: Sebastiano Panichella is a Computer Science Researcher at the Zurich University of Applied Sciences (ZHAW). His main research goal is to conduct industrial research, involving both industrial and academic collaborations, to sustain the Internet of Things (IoT) vision, where future “smart cities” will be characterized by millions of smart systems (e.g., cyber-physical systems such as drones and other autonomous vehicles) connected over the internet, composed of AI components, and/or controlled by complex embedded software implemented for the cloud. His research interests are in the domains of Software Engineering (SE), Cloud Computing (CC), and Data Science (DS). Currently, he is the technical coordinator of H2020 and Innosuisse projects concerning DevOps for complex cyber-physical systems. He has authored or co-authored around ninety papers that appeared in international conferences and journals.
This research involved studies with industrial and open projects and received best paper awards or best paper nominations. He serves and has served as a program committee member for various international conferences and as a reviewer for various international journals in the field of software engineering.

Call for Papers

We invite novel papers from both academia and industry on AI applied to software testing that cover, but are not limited to, the following aspects:

  • AI for test case design, test generation, test prioritization, and test reduction.
  • AI for load testing and performance testing.
  • AI for monitoring running systems or optimizing those systems.
  • Explainable AI for software testing.
  • Case studies, experience reports, benchmarking, and best practices.
  • New ideas, emerging results, and position papers.
  • Industrial case studies with lessons learned or practical guidelines.

Papers can be of one of the following types:

  • Full Papers (max. 8 pages): Papers presenting mature research results or industrial practices.
  • Short Papers (max. 4 pages): Papers presenting new ideas or preliminary results.
  • Tool Papers (max. 4 pages): Papers presenting an AI-enabled testing tool. Tool papers should communicate the purpose and use cases for the tool. The tool should be made available (either free to download or for purchase).
  • Position Papers (max. 2 pages): Position statements and open challenges, intended to spark discussion or debate.

The reviewing process is single blind; therefore, papers do not need to be anonymized. Papers must conform to the two-column IEEE conference publication format and should be submitted via EasyChair.

All submissions must be original, unpublished, and not submitted for publication elsewhere. Submissions will be evaluated according to the relevance and originality of the work and on their ability to generate discussions between the participants of the workshop. Each submission will be reviewed by three reviewers, and all accepted papers will be published as part of the ICST proceedings. For all accepted papers, at least one author must register in the workshop and present the paper.

Important Dates

  • Submission deadline: 27 January 2023 AoE
  • Notification of Acceptance: 15 February 2023
  • Camera-ready: 3 March 2023
  • Workshop: 20 April 2023


Organizing Committee

Program Committee

Steering Committee

Previous Editions