Generating with RiMarkov

Markov Chains

Markov chains (also called “n-gram” models) are systems of states and transitions. An analogy might be a set of cities connected by highways, where each city is a “state”, and each highway is a “transition”, a way of getting from one city to another.

Since we are concerned here with text, we can think of each “state” as a bit of text: a letter, a word, or a phrase (usually drawn inside a circle). A transition is a way of moving from one state to another (usually drawn as an arrow). Here’s a simple example:

In the image above we see a simple Markov chain for the sentence: “The girl created a game after school.” We can start at the first word, “The”, and follow the arrows until we reach the final token, “.”. In this very simple chain, each word leads to exactly one subsequent word, so there are no choices to make.

But consider the following example:

Again, we start from the left, at “The”, but after we reach “girl”, we have two choices about where to go next. Depending on which transition we take, we end up with one of two different sentences. Here the probability of producing either sentence is 50%, or 0.5. The same idea extends to sequences of syllables, letters, words, or sentences.

Now let’s change the sentences to make it more interesting:

The girl went to a game after dinner.
The teacher went to dinner with a girl.


The word “went” can occur after either “girl” or “teacher”. The word (or token) that follows “girl” can be either “.” or “went”. Such a contiguous sequence of n elements is called an n-gram.

The minimum value of n for a Markov chain is 2 (with anything less, there would be no transitions, and thus no chain). If we build a Markov model for the above two sentences with n=2, the outcome would be something like this (with | representing OR; a short code sketch of building such a map follows the table):

      The     —>  girl | teacher
      girl    —>  went | .
      went    —>  to
      to      —>  a | dinner
      a       —>  game | girl
      game    —>  after
      after   —>  dinner
      dinner  —>  . | with
      .       —>  The
      teacher —>  went
      with    —>  a
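
To make this concrete, here is a minimal sketch in plain JavaScript (independent of RiTa) of how such a transition map might be built. The function name buildChain and the naive tokenizer are illustrative assumptions, not part of RiTa:

  // Build a word -> [possible next words] map from raw text
  function buildChain(text) {
    // naive tokenizer: treat '.' as its own token, then split on whitespace
    var tokens = text.replace(/\./g, " . ").trim().split(/\s+/);
    var chain = {};
    for (var i = 0; i < tokens.length - 1; i++) {
      var cur = tokens[i], next = tokens[i + 1];
      if (!chain[cur]) chain[cur] = [];                 // first sighting of cur
      if (chain[cur].indexOf(next) < 0) chain[cur].push(next);
    }
    return chain;
  }

  var chain = buildChain("The girl went to a game after dinner. " +
    "The teacher went to dinner with a girl.");
  // chain["The"] -> ["girl", "teacher"], chain["girl"] -> ["went", "."], ...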

Imagine this to be an action guide from which the computer will attempt to generate sentences. The computer begins with the word “The”. Then it checks the action guide for words that can follow “The”, and finds two options: “girl” or “teacher”. It flips a coin and picks “girl”. Next it checks for words that follow “girl”, and finds two options: “went” or “.”. And so on...
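
Continuing the sketch above (and reusing the chain map we just built), the generation step is simply repeated random choice. Again, generate is a hypothetical helper for illustration, not a RiTa function:

  // Walk the chain: start at "The", "flip a coin" among the options at
  // each step, and stop when we reach a sentence-ending period
  function generate(chain, start) {
    var word = start, out = [word];
    while (word !== ".") {
      var options = chain[word];   // possible next words
      word = options[Math.floor(Math.random() * options.length)];
      out.push(word);
    }
    return out.join(" ").replace(" .", ".");
  }

  console.log(generate(chain, "The"));
  // e.g. "The girl went to dinner with a girl."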

With a longer text sample we could experiment with different n-values. The higher the n-value, the more similar the output will be to the input text. Often the easiest way to find the best value for n is simply to experiment...

In the next section, we'll walk through a very simple example of text generation with RiTa.


RiMarkov

Generating text with a Markov chain in RiTa requires three simple steps:

First, construct a Markov chain and set its n-factor (the number of elements to consider). Let’s start with n=4...

  var rm = new RiMarkov(4);

Second, provide some text for RiTa to analyse. There are three functions to achieve this: loadFrom(), loadText() and loadTokens(). Let's start with loadText(), which simply loads a string of text into the model.


  rm.loadText("The girl went to a game after dinner. The teacher went \
    to dinner with a girl.");
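
If you already have an array of tokens, loadTokens() works much the same way. As a brief sketch (assuming the RiTa v1 API, where RiTa.tokenize() splits a string into tokens):

  // Equivalent approach: tokenize the text first, then load the tokens
  var tokens = RiTa.tokenize("The girl went to a game after dinner.");
  rm.loadTokens(tokens);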

Third, generate an outcome according to the Markov model. You can use generateSentences(), generateTokens(), or generateUntil() to achieve different results. Here are all three steps together:

  var rm = new RiMarkov(4);
  rm.loadText("The girl went to a game after dinner. The teacher went \
    to dinner with a girl.");
  var sentences = rm.generateSentences(2);

We can now run this code. One possible output would be:

The teacher went to dinner.
The girl went to dinner with a game after dinner.
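
The other two generators work similarly. Here is a rough sketch of both; the signatures below follow the RiTa v1 documentation, so verify them against the version you are using:

  // Generate a fixed number of tokens, then join them back into a string
  var tokens = rm.generateTokens(5);
  console.log(RiTa.untokenize(tokens));

  // Generate tokens until one matches the regex (sentence-ending
  // punctuation here), with between 4 and 20 tokens in the result
  var more = rm.generateUntil(/[.!?]/, 4, 20);
  console.log(RiTa.untokenize(more));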


To get a better sense of how it all works, check out this example sketch using RiTa Markov chains...