Abstract
We present a new technique for automatically detecting logical errors in functional programming assignments. Unlike syntax or type errors, logical errors are still detected largely by hand, using manually written test cases. Designing proper test cases, however, is nontrivial and requires substantial human effort. Moreover, manual test cases are unlikely to catch diverse errors, because instructors cannot anticipate every corner case across the wide variety of student submissions. We aim to reduce this burden by automatically generating test cases for functional programs. Given a reference program and a student's submission, our technique generates, without any manual effort, a counter-example that captures the semantic difference between the two programs. The key novelty of our approach is a counter-example generation algorithm that combines enumerative search and symbolic verification in a synergistic way. Experimental results show that our technique detects 88 errors not found by mature test cases that have been refined over the past few years, and that it outperforms existing property-based testing techniques. We also demonstrate the usefulness of our technique for automated program repair, where it effectively helps eliminate patches that overfit the test suite.
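To illustrate the core idea, the sketch below shows how an enumerative search can find a counter-example that distinguishes a reference program from a student submission. This is a minimal illustration, not the authors' implementation: the example functions (`sum_ref`, `sum_sub`), the integer input domain, and the search bound are assumptions made for exposition, and the sketch omits the symbolic verification component that the paper combines with enumeration.

```ocaml
(* Minimal sketch (not the paper's implementation): enumerate small inputs
   and report the first one on which the reference program and the student
   submission disagree. The functions and input range are illustrative. *)

(* Reference solution: sum of integers from 1 to n. *)
let rec sum_ref n = if n <= 0 then 0 else n + sum_ref (n - 1)

(* Hypothetical student submission with a logical (off-by-one) error. *)
let rec sum_sub n = if n <= 1 then 0 else n + sum_sub (n - 1)

(* Enumerative search: try inputs 0, 1, ..., bound and return the first
   counter-example, i.e., an input on which the two programs differ. *)
let find_counter_example f g bound =
  let rec go i =
    if i > bound then None
    else if f i <> g i then Some i
    else go (i + 1)
  in
  go 0

let () =
  match find_counter_example sum_ref sum_sub 100 with
  | Some x ->
      Printf.printf "Counter-example: input %d (reference = %d, submission = %d)\n"
        x (sum_ref x) (sum_sub x)
  | None ->
      print_endline "No difference found within the search bound"
```

On this toy pair, the search reports input `1`, where the reference returns `1` but the submission returns `0`; in the paper's setting such counter-examples are found over richer functional datatypes and guided by symbolic verification rather than a fixed enumeration bound.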
Original language | English |
---|---|
Article number | A188 |
Journal | Proceedings of the ACM on Programming Languages |
Volume | 3 |
Issue number | OOPSLA |
DOIs | |
Publication status | Published - 2019 Oct |
Bibliographical note
Publisher Copyright: © 2019 Association for Computing Machinery. All rights reserved.
Keywords
- Automated Test Case Generation
- Program Synthesis
- Symbolic Execution
ASJC Scopus subject areas
- Software
- Safety, Risk, Reliability and Quality