For those who don’t know it yet, Akinator is a computer game and mobile app created by the French company Elokence.

Akinator’s goal is to guess a real or fictional character. To guess the character the player is thinking of, Akinator asks a series of questions, and the player can answer with ‘Yes’, ‘No’, ‘Don’t know’, ‘Probably’ or ‘Probably not’; after each answer, the program determines the best next question.

For each answer, Akinator computes the best question to ask the player and eventually offers a guess as to who the player is thinking of. If the first guess is not correct, Akinator continues to ask questions, and so on, up to three guesses, the first one generally coming after 15–20 questions. If the third guess is still not correct, the player is asked to add the character to the database.

The algorithm used for the selection of questions was developed entirely by Elokence and has been kept secret. However, it is relatively easy to find articles that describe how such an algorithm could be built and how it is used in Akinator. In this article, I will show you a simple and fun way to understand this algorithm.

How Akinator Works

Some articles state that Akinator uses Decision Trees, as well as Probabilistic Methods or Reinforcement Learning. This article will focus on two important Decision Tree algorithms: Incremental Induction of Decision Trees 3 (ID3) and ID4.

For more information about Decision Trees, see the article “Tree Models Fundamental Concepts”.

Incremental Induction of Decision Trees 3 (ID3)

The basic idea of the ID3 algorithm is to build a Decision Tree using a top-down, greedy search through the given sets, testing each attribute at each node of the tree.

If you want to understand ID3 better, you can see the article “Example: Compute the Impurity using Entropy and Gini Index”.

To find an optimal way to classify a learning set, we need to minimize the number of questions asked (i.e., minimize the depth of the tree). Thus, we need a function that can measure which questions provide the most balanced splitting. The Information Gain metric is such a function: Information Gain is the difference between the Impurity Measure of the initial set (i.e., before it has been split) and the weighted average of the Impurity Measure after the split (in the previous article “Tree Models Fundamental Concepts” we studied that Gini and Entropy are measures of impurity):

Gain(S, X) = Entropy(S) - Entropy(S, X)

where Entropy(S) is the impurity value before splitting the data and Entropy(S, X) is the impurity after the split, i.e., the weighted average Entropy(S, X) = Σ (|S_v| / |S|) · Entropy(S_v) taken over each value v of attribute X, with S_v the subset of S for which X takes the value v.
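As a concrete illustration, here is a minimal sketch in Python of how Entropy and Information Gain can be computed for such a split. The function names, the dict-per-instance data layout, and the toy “is_fictional” question are my own choices for this article, not Elokence’s code:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels: Entropy(S)."""
    total = len(labels)
    return -sum((count / total) * math.log2(count / total)
                for count in Counter(labels).values())

def information_gain(rows, labels, attribute):
    """Gain(S, X) = Entropy(S) - Entropy(S, X), where Entropy(S, X) is the
    weighted average impurity of the subsets produced by splitting on X."""
    total = len(labels)
    # Group the class labels by the value that `attribute` takes in each row.
    partitions = {}
    for row, label in zip(rows, labels):
        partitions.setdefault(row[attribute], []).append(label)
    # Weighted average impurity after the split: Entropy(S, X).
    split_entropy = sum(len(subset) / total * entropy(subset)
                        for subset in partitions.values())
    return entropy(labels) - split_entropy

# Toy example: a question that perfectly separates the two classes
# yields an Information Gain of 1 bit.
rows = [{"is_fictional": "yes"}, {"is_fictional": "yes"},
        {"is_fictional": "no"}, {"is_fictional": "no"}]
labels = ["+", "+", "-", "-"]
print(information_gain(rows, labels, "is_fictional"))  # 1.0
```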

When building the tree with Information Gain, there are two main operations:

  • evaluation of the splits for each attribute and selection of the best split, and
  • creation of the partitions using the best split.

One very important thing to understand is that the complexity lies in determining the best split for each attribute; as said before, Information Gain can be computed from Entropy or Gini.

Hence, using Information Gain, the algorithm used to build an ID3 tree is the following (a Python sketch is given after the list):

  1. If all the instances are from exactly one class, then the decision tree is an answer node containing that class name.
  2. Otherwise,

(a) Define a(best) to be the attribute (or feature) whose split yields the highest Information Gain (equivalently, the lowest weighted impurity after the split).

(b) For each value V(best,i) of a(best), grow a branch from a(best) to a decision tree constructed recursively from all those instances with value V(best,i) of attribute a(best).
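This procedure translates almost directly into a short recursive function. The sketch below reuses the information_gain helper from the previous snippet; the tuple-based (attribute, branches) tree representation is just one convenient choice made for this article:

```python
from collections import Counter

def id3(rows, labels, attributes):
    """Recursive ID3 sketch. Returns either an answer node (a class name)
    or a decision node of the form (best_attribute, {value: subtree})."""
    # 1. If all instances belong to exactly one class, return an answer node.
    if len(set(labels)) == 1:
        return labels[0]
    # No attributes left to test: fall back to the majority class.
    if not attributes:
        return Counter(labels).most_common(1)[0][0]
    # 2(a). Pick the attribute whose split yields the highest Information Gain.
    best = max(attributes, key=lambda a: information_gain(rows, labels, a))
    # 2(b). Grow one branch per observed value of the best attribute and
    # build each subtree recursively from the matching instances.
    branches = {}
    for value in {row[best] for row in rows}:
        sub_rows = [r for r in rows if r[best] == value]
        sub_labels = [l for r, l in zip(rows, labels) if r[best] == value]
        remaining = [a for a in attributes if a != best]
        branches[value] = id3(sub_rows, sub_labels, remaining)
    return (best, branches)
```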

ID4

Another important algorithm is ID4. Its authors argue that this algorithm accepts a new training instance and then updates the decision tree incrementally, which avoids rebuilding the tree from scratch, since the statistics needed to choose the test attributes are kept as a global data structure within the original tree.

The basic ID4 tree-update procedure is given below, followed by a Python sketch.

inputs: a decision tree, one training instance

output: a decision tree

  1. For each possible test attribute at the current node, update the count of positive or negative instances for the value of that attribute in the training instance.
  2. If all the instances observed at the current node are positive (negative), then Decision Tree at the current node is an answer node containing a “+” (“-”) to indicate a positive (negative) instance.
  3. Otherwise,

(a) If the current node is an answer node, then change it to a decision node containing the attribute test with the highest Information Gain.

(b) Otherwise, if the current decision node contains an attribute test that no longer has the highest Information Gain, then

  • Change the attribute test to one with the highest Information Gain.
  • Discard all existing sub-trees below the decision node.

(c) Recursively update the Decision Tree below the current decision node along the branch for the value of the current test attribute that occurs in the instance description. Grow the branch if necessary.
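A minimal Python sketch of this update procedure follows. The ID4Node class, its count layout, and the toy attributes are illustrative choices of mine, not the original authors’ implementation; the test attribute is chosen by the highest Information Gain, which is equivalent to the lowest expected impurity, and the counts are computed from the statistics stored at each node rather than from the raw data:

```python
import math
from collections import defaultdict

class ID4Node:
    """One node of an ID4 tree. For every candidate test attribute it keeps
    positive/negative counts per attribute value, so the best test can be
    re-evaluated every time a new training instance arrives."""

    def __init__(self, attributes):
        self.attributes = attributes
        # counts[attribute][value] -> [positives, negatives]
        self.counts = {a: defaultdict(lambda: [0, 0]) for a in attributes}
        self.pos = self.neg = 0
        self.test = None      # current test attribute (None = answer node)
        self.branches = {}    # attribute value -> child ID4Node

    def update(self, instance, positive):
        # 1. Update the counts kept at this node for the new instance.
        if positive:
            self.pos += 1
        else:
            self.neg += 1
        for a in self.attributes:
            self.counts[a][instance[a]][0 if positive else 1] += 1
        # 2. If every instance seen here has the same class, this node
        # stays (or becomes again) an answer node: "+" or "-".
        if self.pos == 0 or self.neg == 0:
            self.test, self.branches = None, {}
            return
        if not self.attributes:
            return  # no tests left to install at this depth
        # 3(a)/(b). (Re)select the test attribute with the best score; if it
        # changed, install the new test and discard the stale sub-trees.
        best = max(self.attributes, key=self.gain)
        if self.test != best:
            self.test, self.branches = best, {}
        # 3(c). Recursively update along the branch matching this instance,
        # growing the branch if necessary.
        value = instance[self.test]
        if value not in self.branches:
            rest = [a for a in self.attributes if a != self.test]
            self.branches[value] = ID4Node(rest)
        self.branches[value].update(instance, positive)

    def gain(self, attribute):
        """Information Gain of splitting the instances seen so far on
        `attribute`, computed from the stored counts alone."""
        total = self.pos + self.neg
        after = sum((p + n) / total * self._entropy(p, n)
                    for p, n in self.counts[attribute].values())
        return self._entropy(self.pos, self.neg) - after

    @staticmethod
    def _entropy(pos, neg):
        total = pos + neg
        return -sum((c / total) * math.log2(c / total) for c in (pos, neg) if c)

# Instances are fed one at a time instead of rebuilding the tree:
root = ID4Node(["is_fictional", "wears_cape"])
root.update({"is_fictional": "yes", "wears_cape": "yes"}, positive=True)
root.update({"is_fictional": "no", "wears_cape": "no"}, positive=False)
print(root.test)  # one of the two attributes; both split perfectly here
```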

For more information about Decision Trees, see “Tree Models Fundamental Concepts” and “Example: Compute the Impurity using Entropy and Gini Index”.
