Part II: Lexical Fuzzing

This part introduces test generation at the lexical level, that is, composing sequences of characters.

  • Fuzzing: Breaking Things with Random Inputs starts with one of the simplest test generation techniques: Fuzzing feeds a string of random characters into a program in the hope of uncovering failures (a minimal sketch follows this list).

  • In Getting Coverage, we measure the effectiveness of these tests by assessing their code coverage – that is, measuring which parts of a program are actually executed during a test run. Measuring such coverage is also crucial for test generators that attempt to cover as much code as possible (see the coverage sketch below).

  • Mutation-Based Fuzzing shows how to mutate existing inputs to exercise new behavior. We show how to create such mutations, and how to guide them towards code that has not yet been covered, applying central concepts from the popular AFL fuzzer (see the mutation sketch below).

  • Greybox Fuzzing extends the concept of input mutation further, using statistical estimators to guide test generation towards likely bugs (a rough sketch appears below).

  • Search-Based Fuzzing takes the concept of guidance further, introducing search-based algorithms to systematically generate test data for a program (illustrated by a sketch below).

  • Mutation Analysis seeds synthetic defects (mutations) into program code to check whether the tests find them. If the tests do not find such mutations, they will likely not find real bugs either (see the final sketch below).

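To make the simplest of these techniques concrete, here is a minimal random fuzzer in Python. It is only a sketch in the spirit of this part; `program_under_test` is a hypothetical target invented for illustration, not a function from the book.

```python
import random

def fuzzer(max_length=100, char_start=32, char_range=94):
    """Return a string of up to max_length random printable characters."""
    length = random.randrange(0, max_length + 1)
    return "".join(chr(random.randrange(char_start, char_start + char_range))
                   for _ in range(length))

def program_under_test(s):
    """Hypothetical target that fails on a digit immediately followed by '!'."""
    for i in range(len(s) - 1):
        if s[i].isdigit() and s[i + 1] == "!":
            raise ValueError("unexpected input: " + repr(s))

# Feed random strings into the target until a failure shows up
for _ in range(10000):
    data = fuzzer()
    try:
        program_under_test(data)
    except ValueError as e:
        print("Failure found:", e)
        break
```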
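Coverage can be measured in plain Python by installing a trace function. The sketch below records which lines a function executes; `cgi_decode` here is a simplified stand-in chosen for this example, so treat both functions as illustrative only.

```python
import sys

def trace_coverage(func, *args):
    """Run func(*args) and return the set of (function, line) pairs executed."""
    coverage = set()
    def tracer(frame, event, arg):
        if event == "line":
            coverage.add((frame.f_code.co_name, frame.f_lineno))
        return tracer
    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return coverage

def cgi_decode(s):
    """Toy example: replace '+' by spaces."""
    out = ""
    for c in s:
        if c == "+":
            out += " "
        else:
            out += c
    return out

print(len(trace_coverage(cgi_decode, "a+b")))   # covers both branches
print(len(trace_coverage(cgi_decode, "abc")))   # fewer lines: '+' branch untouched
```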
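Mutation-based fuzzing starts from valid seed inputs and applies small random changes. The helpers below (delete, insert, bit flip) are one common minimal set; the seed URL is just an example.

```python
import random

def delete_random_character(s):
    if not s:
        return s
    pos = random.randrange(len(s))
    return s[:pos] + s[pos + 1:]

def insert_random_character(s):
    pos = random.randrange(len(s) + 1)
    return s[:pos] + chr(random.randrange(32, 127)) + s[pos:]

def flip_random_bit(s):
    if not s:
        return s
    pos = random.randrange(len(s))
    c = chr(ord(s[pos]) ^ (1 << random.randrange(7)))
    return s[:pos] + c + s[pos + 1:]

def mutate(s):
    """Apply one randomly chosen mutation to s."""
    mutator = random.choice([delete_random_character,
                             insert_random_character,
                             flip_random_bit])
    return mutator(s)

seed = "http://www.example.com/"
candidate = seed
for _ in range(5):            # stack a few mutations on top of the seed
    candidate = mutate(candidate)
print(repr(candidate))
```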
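Greybox fuzzing combines the two previous ideas: it measures coverage for each mutated input, keeps inputs that reach new coverage, and prefers seeds whose coverage path has rarely been seen (a simple power schedule). The sketch below reuses the hypothetical `mutate()` and `trace_coverage()` helpers defined above and is far simpler than real greybox fuzzers such as AFL.

```python
import random
from collections import Counter

def greybox_fuzz(seeds, program, trials=1000):
    """Coverage-guided mutation with an inverse-frequency power schedule.
    Reuses mutate() and trace_coverage() from the sketches above."""
    population = [(s, frozenset(trace_coverage(program, s))) for s in seeds]
    all_coverage = set().union(*(cov for _, cov in population))
    path_count = Counter(cov for _, cov in population)

    for _ in range(trials):
        # Energy: seeds whose coverage path is rare are chosen more often
        weights = [1 / path_count[cov] for _, cov in population]
        seed, _ = random.choices(population, weights=weights)[0]
        candidate = mutate(seed)
        cov = frozenset(trace_coverage(program, candidate))
        path_count[cov] += 1
        if not cov <= all_coverage:   # new coverage: promote candidate to a seed
            population.append((candidate, cov))
            all_coverage |= cov
    return [s for s, _ in population]

# Starting from a seed without '+', mutants that reach the '+' branch are kept
print(greybox_fuzz(["abc"], cgi_decode))
```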
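Search-based fuzzing turns "reach this branch" into an optimization problem. As a toy illustration, the sketch below hill-climbs on a branch distance to find integers x, y that satisfy a target condition; `test_me` and the distance function are invented for this example.

```python
import random

def test_me(x, y):
    """Target branch we would like to cover: x == 2 * (y + 1)."""
    return x == 2 * (y + 1)

def distance(x, y):
    """Branch distance: 0 exactly when the target branch is taken."""
    return abs(x - 2 * (y + 1))

def hillclimb(x, y):
    """Step to the neighboring input with the smallest branch distance."""
    while distance(x, y) > 0:
        neighbors = [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
        x, y = min(neighbors, key=lambda p: distance(*p))
    return x, y

x, y = hillclimb(random.randint(-100, 100), random.randint(-100, 100))
print(x, y, test_me(x, y))   # prints inputs that take the target branch: True
```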
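Finally, mutation analysis asks how good the tests themselves are. The sketch below injects a single synthetic defect (flipping a `<` comparison) into a toy function and reruns a small test suite; both the function and its tests are made-up examples. A mutant that still passes all tests ("survives") points to a weakness in the test suite.

```python
import ast

SOURCE = """
def absolute(x):
    if x < 0:
        return -x
    return x
"""

def passes_tests(namespace):
    """Run a small test suite against the absolute() in the given namespace."""
    absolute = namespace["absolute"]
    try:
        assert absolute(5) == 5
        assert absolute(-3) == 3
        return True
    except AssertionError:
        return False

class FlipComparison(ast.NodeTransformer):
    """Synthetic defect: turn the first '<' of a comparison into '>'."""
    def visit_Compare(self, node):
        if isinstance(node.ops[0], ast.Lt):
            node.ops[0] = ast.Gt()
        return node

for label, tree in [("original", ast.parse(SOURCE)),
                    ("mutant  ", FlipComparison().visit(ast.parse(SOURCE)))]:
    namespace = {}
    exec(compile(ast.fix_missing_locations(tree), "<string>", "exec"), namespace)
    print(label, "passes tests:", passes_tests(namespace))
# Here the mutant fails the tests, i.e. it is "killed"; a surviving mutant
# would indicate that the test suite misses this kind of defect.
```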