What Is Fuzzing

Posted in fuzzing on 2008-11-17 by crashtesting

Fuzz testing, or fuzzing, originally meant a simple testing technique: feeding random input to applications (see the Fuzz study by the University of Wisconsin, 1990). Today it is much more refined; model-based fuzzing tools have been available since 1999, from research teams such as PROTOS. Fuzzing techniques can basically be divided into four categories (a minimal sketch of the first one follows the list):

  1. Random fuzzing: has close to zero awareness of the tested interface.
  2. Capture-replay fuzzing: learns the protocol from templates such as traffic captures or files.
  3. Block-based fuzzing: breaks the syntax of the tested interface into blocks of data, which it semi-randomly mutates.
  4. Model-based fuzzing: builds an executable model of the protocol based on protocol specification, which it then uses for generating systematic non-random test cases.

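To make the categories concrete, below is a minimal sketch of the first one, plain random fuzzing, in Python. Everything in it is an assumption for illustration: the target binary is taken to accept a file path as its only argument, and a negative return code (the POSIX convention for death by signal) is used as the crash indicator.

    import random
    import subprocess
    import sys

    def mutate(data, ratio=0.01):
        """Randomly overwrite a small fraction of the bytes in a template."""
        out = bytearray(data)
        for _ in range(max(1, int(len(out) * ratio))):
            out[random.randrange(len(out))] = random.randrange(256)
        return bytes(out)

    def fuzz(target, template_path, iterations=1000):
        with open(template_path, "rb") as f:
            template = f.read()  # assumed non-empty
        for i in range(iterations):
            case = mutate(template)
            with open("case.bin", "wb") as f:
                f.write(case)
            # On POSIX a negative return code means the process died on a
            # signal, e.g. -11 for SIGSEGV -- the classic fuzzing finding.
            result = subprocess.run([target, "case.bin"], capture_output=True)
            if result.returncode < 0:
                with open(f"crash-{i}.bin", "wb") as f:
                    f.write(case)
                print(f"iteration {i}: killed by signal {-result.returncode}")

    if __name__ == "__main__":
        fuzz(sys.argv[1], sys.argv[2])
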
In short, fuzzing is negative testing: generating non-conformant messages in order to crash software. The resulting failures (crashes, hangs, busy-loops, …) are studied from a risk-analysis perspective to decide whether they need to be fixed. Most discoveries can also be identified as software vulnerabilities.
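
For contrast, a block-based fuzzer keeps the message syntactically recognizable and injects anomalies one block at a time. Here is a hypothetical sketch for a toy NAME:VALUE line protocol; both the protocol and the boundary values are invented for the example, not taken from any real tool.

    # Toy "NAME:VALUE\n" protocol split into two blocks; each block is
    # replaced in turn with a known-bad boundary value while the rest
    # of the message stays well-formed.
    BOUNDARY_VALUES = [b"", b"A" * 255, b"A" * 65536, b"%n%n%n%n", b"\x00\x00"]

    def generate_cases(name=b"user", value=b"alice"):
        for bad in BOUNDARY_VALUES:
            yield bad + b":" + value + b"\n"   # anomaly in the name block
            yield name + b":" + bad + b"\n"    # anomaly in the value block

    for case in generate_cases():
        print(case[:40])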

A Good Analysis of Test Coverage

Posted in fuzzing on 2010-02-01 by crashtesting

Test coverage, especially for fuzzing, is always a challenge. The three most common dimensions for analyzing test coverage in software fault injection (robustness testing or fuzzing) are interface coverage, protocol coverage, and input coverage. But those are often difficult to measure, so simplification helps. A recent white paper on this topic breaks the analysis down into simpler factors:

  • Attack surface
  • Specifications
  • Statefulness

Attack-surface analysis is a straightforward process of enumerating the interfaces and protocols, whereas the specifications and statefulness metrics look at how thoroughly those protocols are tested. Understanding these simple metrics helps you see how well your own fuzzing tool actually does its job. Good luck with your fuzzing!
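
As a small, hypothetical illustration of the specifications metric: you can measure what fraction of the message types defined in a protocol specification your test corpus actually exercises. The protocol and its message types below are invented for the example.

    # Invented message types for an imaginary protocol specification.
    SPEC_MESSAGE_TYPES = {"BIND", "CONNECT", "DATA", "KEEPALIVE", "TEARDOWN"}

    def spec_coverage(corpus):
        """Fraction of specified message types that the corpus touches."""
        seen = {case.split(b" ", 1)[0].decode("ascii", "replace") for case in corpus}
        return len(seen & SPEC_MESSAGE_TYPES) / len(SPEC_MESSAGE_TYPES)

    corpus = [b"BIND alice", b"DATA hello", b"DATA \xff\xff"]
    print(f"specification coverage: {spec_coverage(corpus):.0%}")  # 40%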

Fuzzing Definitions for QA

Posted in fuzzing on 2009-07-20 by crashtesting

A fuzz test is difficult to place in a regular test specification, so I decided to write a few notes to help you integrate fuzzing.

A fuzz test comes in two basic modes:

  • Deterministic and systematic model-based fuzzing, where each test is carefully built from the grammar and syntax of a protocol.
  • Semi-random and indefinite mutation-based fuzzing, where a template of some kind is used as the basis for mutations.

A fuzz process typically starts with identification of the SUT (system under test), and the actual test-planning process can be seen to start from that analysis. Some people see this as the phase where the attack surface is mapped; some just call it system scanning. Basically, the SUT can consist of numerous devices, each with numerous communication interfaces, each with numerous protocols, both layered on top of one another and independent of each other.
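
As a hypothetical sketch of the system-scanning step, here is a probe for open TCP ports on a DUT, each open port being a candidate interface to fuzz. The address is a placeholder from the documentation range; a real mapping would also cover UDP, file formats, USB, and so on.

    import socket

    def scan_tcp(host, ports=range(1, 1025), timeout=0.3):
        """Return the TCP ports on which the host accepts a connection."""
        open_ports = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                    open_ports.append(port)
        return open_ports

    print(scan_tcp("192.0.2.10"))  # placeholder DUT address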

Definitions that are needed to map the above into a test plan include the following (a sketch of how they fit together comes after the list):

  • SUT: System Under Test, the actual target system.
  • DUT: Device Under Test, a subset of the SUT, or sometimes the same as the SUT (in a stand-alone device test environment).
  • Interface: Physical or logical interface between two or more DUTs.
  • Protocol: A specific layer of communications in an interface.
  • Fuzz test plan: A specific test setup, goals, and configurations for testing a protocol, on one interface, in one device.
  • Test group: A set of individual inputs that aims at finding one type of flaw. It can consist of either generated or mutated tests.
  • Test case: One specific sequence of messages potentially triggering a flaw.

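One hypothetical way to encode these definitions, sketched as plain Python dataclasses; a real test-management system would obviously carry far more detail.

    from dataclasses import dataclass, field

    @dataclass
    class TestCase:
        messages: list                 # one specific sequence of messages

    @dataclass
    class TestGroup:
        flaw_type: str                 # the one type of flaw this group aims at
        cases: list = field(default_factory=list)

    @dataclass
    class FuzzTestPlan:
        sut: str                       # System Under Test
        dut: str                       # Device Under Test (may equal the SUT)
        interface: str                 # e.g. "eth0"
        protocol: str                  # e.g. "HTTP"
        groups: list = field(default_factory=list)
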
In most cases, the starting point for a test engineer is a test requirement where the SUT, DUT, interface, and protocol are clearly defined. The goal in that case is to build a test plan and to define its structure, test purpose, metrics, and test grouping (if any). Finally comes the easy part: building the tests and reporting the results.

From that on, it is someone else’s problem. 😉

Fuzzing Usage

Posted in Uncategorized on 2009-03-17 by crashtesting

An upcoming webcast will finally reveal some interesting aspects of who is really using fuzzing. I will definitely comment on it here after the show. If you are interested, you can sign up for the show at:
Fuzzing 101

Michael Howard on Fuzzing

Posted in fuzzing on 2009-02-13 by crashtesting

Check out this article from Michael Howard of Microsoft: One Tool Does not Rule them All

You would of course want me to comment on (or rather, counter-argue) this, and I will not fail you. 😉

Michael starts with a comment:

If you read much about security, you’ll see that fuzzing is a very effective security and reliability testing technique, but it is far from perfect.

Well, fuzzing is not one technique but at least three. Depending on which technique you use, you will either catch only simple flaws (or no flaws at all) or catch most remote flaws. But nobody has ever claimed fuzzing is perfect. The study by Charlie Miller (check out chapter 8 of the Fuzzing book by Takanen, DeMott and Miller) indicated that model-based fuzzing finds about 80-90% of flaws, whereas random (dumb) fuzzing finds only 0-30% of them. Evolutionary fuzzers are difficult to place in this comparison: sometimes they find almost as many flaws as model-based fuzzers (especially with simple text-based protocols like the one Michael used), but sometimes they fail to find any (with more complex protocols).

Not catching flaws such as the one shown in the example just means you are using the wrong tool, or that the people building the tools need to be educated in how systematic fuzzing is done. Just “doing fuzzing” means nothing. The important question is: what type of fuzzing are you using?

Fuzzing is never the only tool you should depend on when building a product security practice. But it is the most effective one, and the fastest to deploy. And did I remember to say that there are no false positives? A crash is a crash (unless you try to hide it with exception handling, of course). That is why most people start out with fuzzing and then move on to techniques that cost much more to use, such as code auditing.
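
To show how unambiguous the verdicts are, here is a hypothetical outcome classifier: a signal death or a hang is a finding, full stop, with nothing to argue about. The five-second timeout is an arbitrary choice for the example.

    import subprocess

    def classify(target, case_path, timeout=5.0):
        """Run one test case against the target and name the outcome."""
        try:
            result = subprocess.run([target, case_path],
                                    capture_output=True, timeout=timeout)
        except subprocess.TimeoutExpired:
            return "hang"      # busy-loop or deadlock
        if result.returncode < 0:
            return "crash"     # killed by a signal, e.g. SIGSEGV
        return "pass"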

Two Uses Of Fuzzing, Or Six, Or Something In-Between

Posted in fuzzing on 2009-01-16 by crashtesting

The people I know who talk actively about fuzzing seem to mix two different views of where fuzzing is used. The first view is the “Role” of the user, and the second is the “Phase” of the test in the software life-cycle (note: NOT the _development_ life-cycle, but the entire life-cycle).

The role of the tester is sometimes divided between Quality Assurance (QA) and Vulnerability Assurance (VA), but another view is to look at the role from the test-purpose perspective. Three roles can easily be identified:

  1. “Dev-fuzzer” is testing something that he or his team has built.
  2. “Test-fuzzer” is testing something that someone else has built.
  3. “Admin-fuzzer” is testing something that he or his team is planning to use or buy.

The roles given above place some limitations on which tools or techniques are most useful to the person conducting the tests. A developer will have access to the source code and can also easily instrument the code for better tests. A system administrator (or any enterprise security or QA specialist), on the other hand, has almost zero visibility into the internals of the software.

An even simpler categorization is the binary view created by one major event in the software life-cycle: the release, or launch date, of the software. Pre-release fuzzing aims at finding as many problems as possible before the software is released. Post-release fuzzing, on the other hand, aims at finding (and often disclosing) vulnerabilities in software that is often already widely used.

A third categorization can be based on who actually does the tests: the software development organization (or vendor), a researcher (hacker), or the end-user organization (enterprise).

All these categories can be mixed. For example, you can have an enterprise user fuzzing a beta release of software in a test-lab environment, such as an interoperability lab at a financial organization. Or a hacker who also takes part in the development of an open-source project to fix the findings in the code for a future release of the same software.

Nobody said it was a simple categorization, but understanding that everyone has a different view of fuzzing can help you understand the problems your colleague is explaining to you. And we are here to learn, aren’t we?

The Dummy Did Not Crash Yet

Posted in Uncategorized on 2009-01-16 by crashtesting

After an abnormally long Xmas holiday, I am finally back. Although there is nobody reading this, I will still continue to collect key points about fuzzing here. Maybe someone will find them useful… I am also happy to answer any questions regarding fuzzing or security testing.