# Constructing e-Machines

There are two primary ways to construct epsilon machines: by defining the transition structure directly, or by inferring one from finite data.

## Defining the Transition Structure

### The Hard Way

It is very easy to define your own process. For example, let’s build the Even process by hand:

```
>>> from cmpy.machines import RecurrentEpsilonMachine
>>> even = RecurrentEpsilonMachine(name='Even')
>>> even.add_edge('A', 'A', o='0', p=0.5)
0
>>> even.add_edge('A', 'B', o='1', p=0.5)
0
>>> even.add_edge('B', 'A', o='1', p=1)
0
```

And there you have it! First we defined an empty machine and gave it a name. Then we added edges to it by calling add_edge, where each call takes arguments of the form (from_state, to_state, o=output_symbol, p=probability).

You’ll notice that we used the RecurrentEpsilonMachine class to build this process. We did this because we know beforehand that the presentation we’re building is the recurrent part of the epsilon machine – it is minimal and unifilar. Had we not known this, we would have used the MealyHMM class instead.
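Conceptually, the presentation we just built is a labeled, probabilistic directed graph. As a rough illustration of what the machine encodes (a plain-Python sketch, not CMPy's internal data structure), here is the Even machine as a dictionary, with a small function that walks it:

```python
import random

# Illustrative stand-in for a machine object: each state maps to a list of
# (next_state, output_symbol, probability) transitions.
EVEN = {
    'A': [('A', '0', 0.5), ('B', '1', 0.5)],
    'B': [('A', '1', 1.0)],
}

def simulate(machine, start, n):
    """Walk the machine for n steps, emitting one symbol per transition."""
    state, out = start, []
    for _ in range(n):
        r, acc = random.random(), 0.0
        for nxt, sym, p in machine[state]:
            acc += p
            if r < acc:
                state = nxt
                out.append(sym)
                break
    return out

symbols = simulate(EVEN, 'A', 20)
```

Starting from state A, every excursion to B must immediately return, so the emitted 1s arrive in pairs – exactly the Even process constraint.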

### The Easy Way

Let’s build the Even process in a slightly simpler way:

```
>>> from cmpy.machines import from_string
>>> s = """
... A A 0 0.5;
... A B 1 0.5;
... B A 1 1;
... """
>>> even = from_string(s, name='Even', style=1)
```

We follow the same general procedure, but instead use a specially formatted string to define the transitions. Passing style=1 constructs a RecurrentEpsilonMachine from that transition structure; if we wanted a MealyHMM, we would pass style=0.
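The string format is simple enough to parse by hand. As a sketch of roughly what such a parser has to do (illustrative only – not CMPy's actual from_string implementation), each semicolon-separated clause becomes one transition:

```python
def parse_transitions(s):
    """Parse 'from_state to_state symbol probability;' clauses
    into a transition dictionary."""
    machine = {}
    for clause in s.split(';'):
        clause = clause.strip()
        if not clause:
            continue  # skip the empty piece after the final semicolon
        frm, to, sym, prob = clause.split()
        machine.setdefault(frm, []).append((to, sym, float(prob)))
    return machine

even = parse_transitions("""
A A 0 0.5;
A B 1 0.5;
B A 1 1;
""")
```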

## Inferring From Finite Data

Given some finite data, we would like to construct the best e-Machine possible from that data. CMPy has two primary methods for doing this – tree merging and state splitting.

For this short introduction, we will be attempting to infer the Even process. Our first task is to generate some data:

```
>>> from random import randint
>>> from cmpy.util import take
>>> def even():
...     while True:
...         if randint(0, 1):
...             yield '0'
...         else:
...             yield '1'
...             yield '1'
>>> data = take(1000, even())
>>> data[:15]
['1', '1', '0', '1', '1', '1', '1', '0', '1', '1', '0', '0', '1', '1', '1']
```
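As a quick sanity check on such a sample (plain Python, independent of CMPy), every completed block of 1s should have even length; only the final run can be cut short by truncating at 1000 symbols:

```python
from itertools import groupby, islice
from random import randint

def even_source():
    # Mirrors the generator above: emit a single '0' or a pair of '1's.
    while True:
        if randint(0, 1):
            yield '0'
        else:
            yield '1'
            yield '1'

data = list(islice(even_source(), 1000))

# Lengths of the maximal runs of '1's in the sample.
runs = [len(list(g)) for sym, g in groupby(data) if sym == '1']

# Every run except possibly the last (truncated) one is even.
assert all(r % 2 == 0 for r in runs[:-1])
```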

### Tree Merging

Tree merging is a reconstruction technique that looks at all the available data and equates matching future morphs on a parse tree. When performing inference we need to tell the algorithm what morph lengths to look at and what the tree depth should be. The inferred machine will likely be sensitive to these values: too small a morph length and distinct causal states are not differentiated; too large a tree depth and the statistics drawn from the data become poor.

```
>>> from cmpy.inference import TreeMerger
>>> tm = TreeMerger()
>>> m = tm.infer(data, morphLength=3, treeDepth=7)
>>> m.draw()
```
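The idea behind the algorithm can be sketched without CMPy (an illustration, not TreeMerger's implementation): build a tree of substring counts from the data, then read off each node's conditional future distribution – its morph (here futures of length 1, for simplicity). Tree merging equates nodes whose morphs match:

```python
from collections import Counter

def parse_tree(data, depth):
    """Counts of every substring of length <= depth: the parse-tree nodes."""
    counts = Counter()
    for i in range(len(data)):
        for d in range(1, depth + 1):
            if i + d <= len(data):
                counts[tuple(data[i:i + d])] += 1
    return counts

def morph(counts, history, alphabet):
    """Empirical next-symbol distribution at a parse-tree node."""
    total = sum(counts[history + (s,)] for s in alphabet)
    return {s: counts[history + (s,)] / total for s in alphabet}

counts = parse_tree(['0', '1'] * 50, depth=3)
```

On the period-2 toy sequence 0101…, the node ('0',) puts all of its probability on '1' while ('1',) puts all of it on '0', so those two nodes would not be merged and remain distinct causal states.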

### State Splitting

State splitting begins with the assumption that the process is IID and splits states into distinct future morphs only when the data forces it to. Here the only inference parameter is the history length; it, too, should be varied when inferring from real data, as the inferred machine will be sensitive to it.

```
>>> from cmpy.inference import StateSplitter
>>> ss = StateSplitter()
>>> m = ss.infer(data, historyLength=3)
>>> m.draw()
```
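The splitting criterion itself is easy to illustrate in plain Python (a simplified sketch, not StateSplitter's implementation): lump every history into one state, then split off any history whose empirical next-symbol prediction disagrees with the states found so far:

```python
from collections import Counter, defaultdict

def next_symbol_dists(data, hist_len):
    """Empirical P(next symbol | history) for every length-hist_len history."""
    counts = defaultdict(Counter)
    for i in range(len(data) - hist_len):
        counts[tuple(data[i:i + hist_len])][data[i + hist_len]] += 1
    return {h: {s: k / sum(c.values()) for s, k in c.items()}
            for h, c in counts.items()}

def split_states(dists, tol=0.1):
    """Greedily group histories; a history splits off into a new state when
    its prediction differs from every existing state's by more than tol (L1)."""
    states = []  # list of (representative distribution, member histories)
    for hist, d in dists.items():
        for rep, members in states:
            keys = set(d) | set(rep)
            if sum(abs(d.get(k, 0) - rep.get(k, 0)) for k in keys) <= tol:
                members.append(hist)
                break
        else:
            states.append((d, [hist]))
    return [members for _, members in states]
```

On a period-3 toy sequence 110110…, the history ('1', '1') always predicts a 0 while ('1', '0') and ('0', '1') always predict a 1, so the single initial state splits into two.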