In this article I'm going to introduce DFAs and NFAs and explain the differences between them. After showing an example, I will also present the various methods of regex recognition that employ DFAs and NFAs.

DFA + NFA = FSM

While talking about FSMs in the past two articles, I was hiding part of the picture from you for the sake of simplicity. Now I intend to fix that.

FSM, as you already know, stands for Finite State Machine. A more scientific name for it is FA - Finite Automaton (plural: automata). The theory of finite automata can be divided along several lines, but the distinction we need for regex recognition is determinism. Something is deterministic when it involves no chance - everything is known and can be prescribed and simulated beforehand. Nondeterminism, on the other hand, is about chance and probabilities. It is commonly defined as "a property of a computation which may have more than one result".

Thus, the world of FSMs can be divided in two: a deterministic FSM is called a DFA (Deterministic Finite Automaton), and a nondeterministic FSM is called an NFA (Nondeterministic Finite Automaton).

NFA

A nondeterministic finite automaton is a mathematical model that consists of:

  1. A set of states S
  2. A set of input symbols A (the input symbol alphabet)
  3. A transition function move that maps state-symbol pairs to sets of states
  4. A state s0 that is the start state
  5. A set of states F that are the final states

Most of this should be familiar to you from our FSM discussion in the past articles. I will now elaborate on a few fine points (trying to keep things simple and avoid mathematical formalities).

An NFA accepts an input string X if and only if there is some path in the transition graph from the start state to some accepting (final) state, such that the edge labels along this path spell out X.

The definition of an NFA doesn't restrict how many states a given input may lead to from a given state. So, if we're in some state N, it is completely legal (in an NFA) to have transitions to several different states on the input a.

Furthermore, epsilon (eps) transitions are allowed in an NFA. That is, there may be a transition from state to state on "no input".

I know this must sound very confusing if this is the first time you've met NFAs, but an example I'll show a little later should make things more understandable.
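In the meantime, it may help to see how the pieces of the definition could map to code. Below is a minimal sketch in Python (not necessarily the language we'll use later in this series) of one way to represent an NFA's transition function: a dictionary mapping (state, symbol) pairs to sets of states, with a made-up EPS marker standing for "no input". The states and symbols here are invented purely for illustration.

    # A tiny, made-up NFA used only to illustrate the representation.
    # States are integers, symbols are one-character strings, and EPS
    # marks an epsilon transition ("no input").

    EPS = None   # sentinel for eps-transitions

    nfa = {
        'start':  0,
        'finals': {2},
        # The transition function "move": (state, symbol) -> set of states.
        # (0, 'a') maps to *two* states - perfectly legal in an NFA - and
        # state 1 has an eps-transition to state 2.
        'move': {
            (0, 'a'): {0, 1},
            (0, 'b'): {0},
            (1, EPS): {2},
        },
    }

    def move(nfa, state, symbol):
        """All states reachable from 'state' on 'symbol' (empty set if none)."""
        return nfa['move'].get((state, symbol), set())

    print(move(nfa, 0, 'a'))   # {0, 1} - more than one result: nondeterminism
    print(move(nfa, 1, EPS))   # {2}    - an eps-transition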

DFA

By definition, a deterministic finite automaton is a special case of an NFA, in which:

  1. No state has an eps-transition
  2. For each state S and input a, there is at most one edge labeled a leaving S.

You can immediately see that a DFA is a more "normal" FSM. In fact, the FSMs we were discussing in the previous articles are DFAs.
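To recall the simulation algorithm from the previous article: because a DFA has at most one move per state and input symbol, running it over a string is a simple loop that follows one transition per character and accepts if the run ends in a final state. Here is a minimal Python sketch of that idea, using the textbook transition table for (a|b)*abb; the state numbering is my own assumption and may not match the diagrams below exactly.

    def simulate_dfa(table, start, finals, text):
        """Follow the single possible transition for each character;
        accept iff the run ends in a final state."""
        state = start
        for ch in text:
            if (state, ch) not in table:   # no move defined: reject
                return False
            state = table[(state, ch)]
        return state in finals

    # The usual DFA for (a|b)*abb; state 3 is reached only after ...abb.
    # (State numbering assumed here for illustration.)
    dfa_table = {
        (0, 'a'): 1, (0, 'b'): 0,
        (1, 'a'): 1, (1, 'b'): 2,
        (2, 'a'): 1, (2, 'b'): 3,
        (3, 'a'): 1, (3, 'b'): 0,
    }

    print(simulate_dfa(dfa_table, 0, {3}, 'abababb'))   # True
    print(simulate_dfa(dfa_table, 0, {3}, 'baabab'))    # False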

Recognizing regexes with DFAs and with NFAs

To make this more tolerable, consider an example comparing the DFA and the NFA for the regex (a|b)*abb we saw in the previous article. Here is the DFA (exactly the one you saw last time):

[Figure: the DFA for (a|b)*abb]

And this is the NFA:

[Figure: the NFA for (a|b)*abb]

Can you see a feature unique to NFAs in this diagram? Look at state 0. When the input is a, where can we move? To state 0 and to state 1 - a multiple transition, something that is illegal in a DFA. Take a minute to convince yourself that this NFA indeed accepts (a|b)*abb. For instance, consider the input string abababb. Recall how an NFA's acceptance of a string is defined: is there a path in the NFA graph above that "spells out" abababb? There indeed is. The path stays in state 0 for the first four characters, and then moves through states 1 -> 2 -> 3. Now consider the input string baabab. Is there a path that spells out this string? No, there isn't: to reach the final state we must go through abb at the end, and this input string doesn't end with abb.
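If you'd like to check this mechanically rather than by staring at the diagram, the trick is to track the set of all states the NFA could possibly be in after each character; the string is accepted if that set ends up containing a final state. Here is a small Python sketch of that idea, with the diagram's transitions written out as I read them (the exact state numbering is assumed):

    def nfa_accepts(move, start, finals, text):
        """Is there a path from 'start' to a final state spelling out 'text'?"""
        current = {start}                        # every state we could be in
        for ch in text:
            nxt = set()
            for s in current:
                nxt |= move.get((s, ch), set())  # follow every possible edge
            current = nxt
        return bool(current & finals)

    # The NFA from the diagram: state 0 loops on a and b, and the chain
    # 0 -a-> 1 -b-> 2 -b-> 3 spells out the trailing abb.
    nfa_move = {
        (0, 'a'): {0, 1},    # the multiple transition discussed above
        (0, 'b'): {0},
        (1, 'b'): {2},
        (2, 'b'): {3},
    }

    print(nfa_accepts(nfa_move, 0, {3}, 'abababb'))   # True  - a path exists
    print(nfa_accepts(nfa_move, 0, {3}, 'baabab'))    # False - no path ends in abb

This set-tracking trick is, in essence, the "on the fly" simulation technique mentioned below (it ignores eps-transitions, which this particular NFA doesn't have).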

Both NFAs and DFAs are important in computer science theory and especially in regular expressions. Here are a few points of difference between these constructs:

  • It is simpler to build an NFA directly from a regex than to build a DFA.
  • NFAs are more compact than DFAs. You must agree that the NFA diagram is much simpler for the (a|b)*abb regex. Due to its definition, an NFA doesn't explicitly need many of the transitions a DFA needs. Note how elegantly state 0 in the NFA example above handles the (a|b)* alternation of the regex. In the DFA, the character a can't both keep the automaton in state 0 and move it to state 1, so explicit transitions on a to state 1 are required from every state.
  • The compactness also shows in storage requirements. For complex regexes, NFAs are often smaller than DFAs and hence consume less space. There are even cases where the DFA for some regex is exponential in size (while an NFA is always linear) - the classic example is a regex that requires the n-th character from the end to be an a, whose DFA needs on the order of 2^n states - though such cases are quite rare.
  • NFAs can't be directly simulated in the sense DFAs can (recall the simulation algorithm from the previous article). This is because they are nondeterministic "beings", while our computers are deterministic. They must be simulated using a special technique that generates a DFA from their states "on the fly". More on this later.
  • The previous point leads to NFAs being less time-efficient than DFAs. In fact, when large strings are searched for regex matches, DFAs will almost always be preferable.

There are several techniques involving DFAs and NFAs to build recognizers from regexes:

  • Build an NFA from the regex. "Simulate" the NFA to recognize input.
  • Build an NFA from the regex. Convert the NFA to a DFA. Simulate the DFA to recognize input.
  • Build a DFA directly from the regex. Simulate the DFA to recognize input.
  • A few hybrid methods that are too complicated for our discussion.

In the next article, I will pick one technique and explain it in depth. After getting to know all the algorithms involved, we will finally be ready for the real implementation of a recognizer.


© Copyright by Eli Bendersky, 2003. All rights reserved.




