## Project Summary

## About:

Tux, of Math Command is an open-source educational game for learning mathematics. My project was to improve a student's learning outcome by personalizing the game based on inferences about the student's knowledge state. Bayesian networks are used to model the cognitive state of the learner. The code is hosted on GitHub at https://github.com/sids-aquarius/tuxmath, under the bbn branch.

## Installation instructions:

t4kcommon is a prerequisite for installing tuxmath. It is a library of code shared between tuxmath and tuxtype.

a.) Installation instructions for t4kcommon

1. `git clone git://git.debian.org/git/tux4kids/t4kcommon.git` # makes a read-only clone of the t4kcommon repository

2. `cd t4kcommon`

3. Create an empty git branch (how?)

4. `git pull origin bbn_compatible` # this branch is compatible with the 'bbn' project

5. Read the INSTALL file and follow the installation instructions

b.) Installation instructions for tuxmath

1. `git clone git://git.debian.org/git/tux4kids/tuxmath.git` # makes a read-only clone of the tuxmath repository

or

`git clone git://github.com/sids-aquarius/tuxmath.git` # the GitHub URL of the repository; both are kept up to date

2. `git branch bbn`

3. `git checkout bbn`

4. `git pull origin bbn`

5. Read doc/INSTALL for OS specific instructions on installing.

## Code overview:

Most of the code for this project resides in the src/bayesian/ directory. The main files are bayesian_structure.c and inference_poly.c; the other files are helper modules that support these two.

bayesian_structure.c -> This module acts as an interface between the game and the other modules in the src/bayesian/ directory. It is responsible for specifying the topology of the Bayesian network structures. There are five independent BBN structures, for number_typing, addition, multiplication, subtraction and division. The design of each is based on the idea of a back-bone structure, along with local nodes and global nodes, and is inspired by [1].

inference_poly.c -> This module performs inference on a BBN given an evidence node. The implementation is based on Pearl's belief propagation algorithm for DAGs with a poly-tree topology (singly-connected networks).

An overview of all the modules can be found in the src/bayesian/USAGE file.

Additionally, the `--debug-bayesian` flag can be used while running tuxmath.

## Game flow:

The student model applies only to the Math command training academy mode. Once a user starts the training academy, the BBNs are initialized. If this is not a new session, they are loaded from a file named "lesson_proficiency" in the tuxmath directory (whose location depends on the OS). The user then selects one of the lessons for practice and the game starts. Once in the game, the BBN takes input from three user actions (correct answer, incorrect answer, and no answer). The node probabilities are updated based on the evidence node's value (true or false), and the evidence is later retracted and the beliefs revised. When the user exits the game, a new lesson is suggested, provided the user has reached a certain threshold proficiency in the current topic; this is indicated by moving the suggested lesson to the front of the list.

## Things to do:

- Model game difficulty based on the user's tolerance of difficulty,

- Enhance the user interface to show suggested lessons clearly.

## Global nodes in a topic cluster

A learner's overall characteristics are represented by global nodes – beliefs not specific to any topic. We are particularly interested in the following two characteristics:

- Acquisition rate – indicates how fast a particular student learns a new topic. When a new topic is introduced, the learner's performance is measured on an initial set of questions to determine this characteristic. The purpose of measuring the acquisition rate is twofold: **one**, to aid in planning the pace at which new topics should be introduced, and **two**, to help in question generation where more than one sub-skill can be involved.
- Student's challenge level – learners differ considerably in how much negative outcome they can tolerate. The purpose of measuring this characteristic is to model the level of failure/challenge acceptable to the learner, which is then used to keep the game at a playable level.

Let's consider the structure of a topic (sub-topic) cluster once the two global nodes are added, and observe the relationships between the local and global nodes.

In Figure 1, G1 and G2 refer to the newly introduced global nodes. L1, L2, L3 and L4 are the local nodes of the topic-cluster.

The G1-L1 edge does not represent an exact causal relationship, and it would be inappropriate to update the belief for G1 from L1's posterior belief via a conditional probability. Rather, the acquisition rate (G1) would be measured as the rate of change in L1.

The L4 node corresponds to the learner's choice of topic from the presented options, where the interface would be similar to Image 3 in the Tuxmath post. The current idea is to present the learner with recommended topic (lesson) choices to practice (say, one hard, one medium and one easy). The L4 node would therefore be quaternary: a learner can choose any of the three recommended options (H, M, E) or a completely different (D) topic. The learner's choice indicates how much challenge he is willing to take on (G2). The G2-L4 relationship is causal in the sense that a higher challenge level corresponds to an increased chance of the L4 node being hard (H).

Even though one may argue that global nodes complicate the model, they allow better inference about the learner's knowledge of a particular topic/sub-topic by taking into account both topic-related characteristics and the learner's overall characteristics.

## Code

I will maintain all the code that I produce during SoC under the bbn branch on GitHub.

It is also on Alioth, where the official Tux4Kids repositories are hosted. You can find out more about the Tux4Kids repositories here.

I will also maintain a list of the code changes that I make:

- Shifted question generation from the beginning of a game to every level change [this means that question generation for a level can take into account inferences from the player's answers in the levels he has already cleared] commit
- A Bayesian network library that supports creating, instantiating, and making inferences on Bayesian networks [currently limited to BBNs whose DAG topology is a tree] commit
- A tester file for stand-alone testing of the library commit

## Tux, of Math command

Tux, of Math Command is an open-source, arcade-style video game starring Tux, the Linux mascot. It aims to provide tutoring in elementary mathematics.

**Image 1:** Menu options

**Image 2:** Menu options for the Play alone mode

Important menu options for playing the game, as shown in Image 1, are:

- Play alone – Play and learn on your own [Image 2]
- Math command training academy – Earn gold stars as you build your math skills with each mission
- Math command fleet missions – Complete missions for the math command fleet
- Play arcade game – Make the hall of fame by beating the highest score
- Network game – Play a game with your friends on a local network
- Play with friends – Take turns in a game with your friends on a single computer

Important topics taught by Tuxmath are number typing, addition, subtraction, multiplication, division, factors and fractions.

**Image 3:** Topic options for Math command training academy (There are more than 50 in all!)

**Image 4:** A game screenshot

A video tutorial for the game can be found here. More project related information can be found on http://tux4kids.alioth.debian.org.

## Student modelling using Bayesian Belief Networks (BBN)

#### Why is BBN an appropriate way to model the knowledge of the student?

Systems that can weigh each new item of evidence in conjunction with previous evidence about the student's state of knowledge have a firmer foundation for making pedagogical decisions than systems that ignore this information. For human tutors, the **prerequisite relationship** is important both for *instructional planning purposes* and for *gathering information about the current state of a student's knowledge*. BBNs allow us to formally model prerequisite relationships, and they provide a graphical way of designing probabilistic models based on the concept of conditional probability.

Building a BBN is a three-step process:

1. Defining the structure

In the general theory of BBNs, there are no restrictions on the structure of the network, apart from the prohibition of directed cycles. However, an actual structure needs to be specified for any system that uses a BBN. An appropriate BBN structure for student modelling is based on three connected ideas^{[1]}:

- *a belief-net backbone,* which links all the "student-knows(topic)" nodes together in a partial ordering, according to their prerequisite relationships;
- *a set of topic clusters,* each of which comprises a single "student-knows(topic)" node together with a set of additional nodes;
- *a set of global nodes,* which represent the system's belief in overall student characteristics, i.e. beliefs not focused on a specific topic, e.g. "student's acquisition rate", "student's aptitude", etc.

**Figure 1**: Structure of the BBN based on the idea of a belief-net backbone

In the above figure, the backbone is represented by the vertical arrows, and each topic cluster is represented by the four nodes in each horizontal plane. The backbone and the topic clusters intersect in a restricted way: each "student-knows" node occurs once in the backbone and once in its topic cluster. The main advantage of the backbone approach is computational efficiency, since updates to the beliefs in any one topic cluster affect the other topic clusters only via the backbone, rather than through direct connections.

Broadly, we can divide the domain in the following topics and sub-topics:

- Number typing -> (+ve numbers, -ve numbers);
- Addition -> (0-3, 1-5, sums to 10, sums to 15, sums to 20, two digit additions, missing numbers, +ve and -ve, -ve and +ve, -ve and -ve);
- Subtraction -> (0-10, 0-20, two digit numbers, -ve answers, -ve from +ve, +ve from -ve, -ve from -ve);
- Multiplication -> (0-3, 0-5, 0-10, 0-15, multiples from 2-15, missing numbers, -ve and -ve, +ve and -ve);
- Division -> (1-5, 1-10, 1-15, division by 2-15, -ve and -ve, +ve and -ve);

As such, there are no strict dependencies among the topics; the above order is chosen because it represents the sequence in which the topics are usually taught in classrooms. The list of sub-topics for each topic is based on the lesson options present in the Math command training academy (refer to this post). Breaking each topic into sub-topics allows for a more detailed assessment of a learner's strengths and weaknesses in each sub-topic. To model a learner's knowledge state for each topic, we use a vector whose length equals the number of sub-topics. For example, the belief in the learner's knowledge of addition is stored as a vector of the form (x_{0} x_{1} x_{2} x_{3} x_{4} x_{5} x_{6} x_{7} x_{8} x_{9}), where each x_{i} lies in the range [0,1] and represents the belief in the learner's proficiency for the corresponding sub-topic. The belief in the learner's proficiency for the topic is then the average of the beliefs in the sub-topics.

**Figure 2**: Structure of a topic cluster

In the above figure, the "evidence" node represents the learner's answers to the questions, which appear in the form of comets. We need this additional node to make inferences about the learner's knowledge state, as the "student-knows" node is not directly observable. The "explicitly chooses the topic when suggested" node corresponds to the learner's response when the system presents him with its recommended choices. Again, this node is used to make inferences about the "learner's interest in a particular topic", as interest itself is not directly observable. We will use the same structural arrangement as shown in Figure 2 for all the topic clusters. This uniformity is not required by the theory of BBNs; it is intended as a relatively low design-cost approach. The "student-knows(topic/sub-topic)" node has direct dependencies on other sub-topics; this dependency graph of the "student-knows(sub-topic)" nodes varies for each topic.

**Figure 3**: Showing the dependency relationship between the student-knows(sub-topic) nodes for the topic “addition”

Global nodes are covered in this post – Global nodes.

2. Initializing BBN

The second task in building the BBN is to initialize the network’s estimate of student knowledge.

3. Updating the network

After defining the structure and initializing the network, the next task is to update the network's estimate of the learner's proficiency based on his behavior.


## Question generation

The player (read learner) has selected the topic and the game has started!

In general, we will need the following things at the back-end:

- Learner's proficiency – the topic ability of the learner, i.e. how well the student has been performing on the present topic.
- Acquisition rate – reflects how well the learner grasps a newly introduced topic.

Both measures take one of the following values: Excellent (1), Good (0.75), Okay (0.50), Fair (0.25), or Poor (0). Measuring and updating these parameters is covered in a separate post.

**How do we decide on the sequence of questions?**

One of the main goals of personalized game-play is to tailor instruction to the needs of each student. The general notion of difficulty is that the more sub-skills a question requires, the more difficult it is. The challenge, then, is to generate questions at the right level of difficulty. Question generation should therefore depend on two factors:

- Learner’s proficiency in the topic
- Acquisition rate of the learner

**Algorithm for question generation**

In essence, we need to determine how many, and which, sub-skills to use. This algorithm is based on the one[1] used in AnimalWatch, a tutor which teaches pre-algebra mathematics.

A line is dynamically drawn in the X-Y plane, with the X-axis representing the learner's ability in the individual sub-skills and the Y-axis representing the probability that a sub-skill will be required for problem generation. Each sub-skill's x-coordinate is the learner's proficiency in that sub-skill. The slope is a function of the student's acquisition rate and is always negative; the Y-intercept is a function of the learner's topic proficiency. The general equation of the line is: y = f(acquisition_rate)*x + g(topic_proficiency).

The y-coordinate on the line is calculated for each sub-skill; if necessary, the obtained y-values are then normalized. Based on these probabilities, we determine the priorities of the sub-skills. Once the sub-skills are selected based on their compatibilities, we also need to decide on an appropriate level of difficulty for the question within the selected sub-skills. There can be two or three levels of difficulty, and we can decide which one to choose based on the learner's topic proficiency. One quantifiable way to differentiate between difficulty levels is the number of digits in the operands. At this point, based on the compatibility of the sub-skills, the questions can be generated.

**An example**

Let's take the example of the Ranger mode in the arcade style of Tuxmath. This particular mode asks questions about addition, subtraction, multiplication, and division. Consider a learner with the following data:

The sub-skills and their proficiencies are: Addition (1), Subtraction (0.75), Multiplication (0.50), and Division (0.25). The acquisition rate is 0.75. Since the topic is a combination of individual topics (addition, subtraction, multiplication and division), we will take an average of the individual skills to obtain the topic proficiency – 0.50.

For the sake of simplicity, we will use f(acquisition) = -0.5/acquisition = -0.5/0.75 = -0.67; and g(topic_proficiency) = topic_proficiency = 0.50.

Now we compute the probability of using each sub-skill, along with its priority. The priority of a sub-skill is defined as (1 - proficiency).

| Sub-skill | Proficiency | P(sub-skill) | Priority |
| --- | --- | --- | --- |
| Addition | 1 | -0.17 | 0 |
| Subtraction | 0.75 | 0 | 0.25 |
| Multiplication | 0.50 | 0.1667 | 0.50 |
| Division | 0.25 | 0.334 | 0.75 |

Given the priorities, and the fact that none of the sub-skills are compatible, division would be selected. And since the learner's topic proficiency is 0.50, we will go with an easy problem of the chosen sub-skill.


## Personalized, adaptive game-play to improve learning in Tuxmath

**Synopsis**:

Tux, of Math Command is an open-source educational game for learning mathematics. Since the game is quite popular and used by many schools[1], my idea focuses on improving a player/student's learning by providing personalized, adaptive game-play. This is done by modelling the learner's cognitive state using Bayesian networks[2]. It also provides a teacher/supervisor with a detailed assessment of a particular student's strengths and weaknesses in particular topics.

**Benefits**:

The user base of Tuxmath includes a considerable number of school students[1]. A focus on learning is desirable for this user base.

**Approach:**

Two important ways to make the game-play personalized and adaptive in order to improve the learning outcome are:

1. Topic suggestion – This feature suggests a topic to the learner based on the modelling of his present cognitive state. The topic is selected so as to maximize the likely learning outcome.

Let's say there are a total of **T** topics in our domain **D** (which comprises elementary mathematics). Each topic in the set **T** carries information about the learner's proficiency in that topic. Given this information, how do we select **t** (the topic to be suggested to the learner) from the set **T**? First, we build a dependency graph among the topics in **T**, which will be directed and acyclic. A topic t from the set **T** is then selected based on two criteria:

- We first try to find topics that the learner has previously practiced but needs remediation in;
- otherwise, we create a topological ordering from the dependency graph and then select a new topic for the learner. This ensures the selection of a topic which the learner is ready to attempt but has not yet mastered.

"How do we measure a learner's proficiency in a particular topic?" – I propose the use of Bayesian belief networks (BBNs) for this. Bayesian networks provide a way to represent and reason about uncertainty – a significant factor in learner modelling[2]. Building a Bayesian network is a three-step process:

- Defining the structure or graph of the network,
- Initializing node values in the network,
- Updating the node values based on evidences (learner inputs).

I will cover these steps in detail in a separate post.

2. Modifying the game variables – Changing the value of game variables mostly means adjusting the speed of question generation, the maximum number of questions present on the screen, the duration of a question's visibility, and so on. The main purpose is to retain the immersiveness of the game while providing sufficient challenge to the learner (read: player).

**Deliverables**:

- Before May 23: Get familiar with the code-base (community bonding period).
- May 24 – June 18: Define the structure of the Bayesian network and, if necessary, choose a library that implements the BBN updating algorithms.
- June 19 – July 7: Finish the implementation of "topic suggestion", i.e. step 1.
- July 8 – July 15: (Week before the mid-term evaluations) Dedicate this week to code clean-up and writing **HACKING** document(s) describing the high-level architecture of the learner module. This will allow contributors to jump in easily.
- July 16 – July 31: Implement "Changing the value of game variables", i.e. step 2.
- August 1 – August 16: The last two weeks will be dedicated to code clean-up, documentation and testing.

**References**:

[1] Free educational software page at North Canton City Schools.

[2] Student modelling based on belief networks.

P.S. – The contents of this post are mutable. :P