
Authoring quick start 1

Computer aided assessment of mathematics works in the following phases.

  1. Authoring
  2. Testing
  3. Deploying
  4. Reporting

Each of these phases is documented in detail elsewhere. The purpose of this page is to work through a simple example.

Introduction

The STACK question type for Moodle is designed as a vehicle to manage mathematical questions. Implicit in this is a data structure which represents them. This page explains the process of authoring a question, by working through an example.

Questions are edited through the Moodle quiz. In Moodle, go to the question bank and ask to create a new STACK question. Do not be put off by the fact the editing form looks complicated.

There are lots of fields, but only a few are compulsory. These are the question name and question text. The question text is the string actually displayed to the student, i.e. this is "the question". If you have an input (the default is to have one), the teacher's model answer must be non-empty. Nodes in potential response trees have compulsory fields (the default is to provide a tree with one node).

An example question

We are now ready to edit an example question. The question name is compulsory in Moodle, so choose one now, e.g. Question 1.

Ensure the question text contains the following information. It should be possible to cut and paste, but make sure you do not copy the HTML pre-formatted tags!

Differentiate \((x-1)^3\) with respect to \(x\).
[[input:ans1]] [[validation:ans1]]

There are a number of things to notice about this text.

  • The text contains LaTeX mathematics environments. Do NOT use the mathematics environments $..$ and $$..$$. Instead you must use \(..\) and \[..\] for inline and displayed mathematics respectively. (There is an automatic bulk converter if you have a lot of legacy materials!)
  • The tag [[input:ans1]] will be replaced by an input labelled ans1, i.e. this denotes the position of the box into which the student puts their answer.
  • The tag [[validation:ans1]] will be replaced by any feedback related to the validity of the input ans1.

By default, a new question automatically has one input, and one algorithm to assess the answer.

Scroll down: there will be an inputs section of the editing form. In the Model answer field, type the answer as a syntactically valid CAS expression, in this case the derivative of \((x-1)^3\):

3*(x-1)^2

We now have a question and a model answer. Next we have to decide whether the student's answer is correct.

Establishing properties of the student's answer via the potential response tree

To establish properties of students' answers we need an algorithm known as a potential response tree.

This tree will allow us to establish the mathematical properties of the student's answer and on the basis of these properties provide outcomes, such as feedback and a score.

In due course, we shall provide feedback which checks

  1. For the correct answer.
  2. To see if the student integrated by mistake.
  3. To see if it is likely that the student expanded out and differentiated.

By default, a new question contains one potential response tree called prt1. This is the name of the potential response tree, and it can be anything sensible (letters, optionally followed by numbers, no more than 18 characters). There can be any number of potential response trees (including zero). Feedback generated by these trees replaces the tag [[feedback:prt1]]. By default this tag is placed in the Specific feedback field, but it could also be placed in the question text.

A potential response tree is a non-empty acyclic directed graph of potential response nodes. By default we have one potential response node, and this node is quite simple.

  1. SAns is compared to TAns with the answer test, possibly with an option.
  2. If true then we execute the true branch.
  3. If false then we execute the false branch.

The answer test itself sometimes produces feedback for the student (which the teacher might choose to suppress with the quiet option). The answer test also produces an internal answer note for the teacher which is essential for Reporting students' attempts later.

Each branch can then

  • Assign/update the score.
  • Assign formative feedback to the student.
  • Leave an answer note for Reporting purposes.
  • Nominate the next potential response node, or end the process [stop].
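
For the single default node, the branch outcomes look something like the following sketch, using STACK's default answer notes (the exact defaults on your installation may differ):

 true branch:  score = 1, answer note = prt1-1-T, next = [stop]
 false branch: score = 0, answer note = prt1-1-F, next = [stop]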

We refer to the student's answer in computer algebra calculations by using the name ans1 since we gave this name to the input in the question text. The model answer was 3*(x-1)^2. Update the form fields so that

 SAns = ans1
 TAns = 3*(x-1)^2
 Answer test = AlgEquiv

Then press the [Save changes] button. If the question fails to save, check carefully for any errors, correct them and save again.

This has created and saved a minimal question. To recap we have

  1. Typed in the question.
  2. Typed in the model answer.
  3. Indicated that we wish to establish whether the student's answer is algebraically equivalent to the model answer 3*(x-1)^2.

Next we should try out our question by pressing the preview button in the question bank.

Previewing the question

Assuming there are no errors, you may now choose the link "preview the question" from the Moodle question bank. This takes us to a new form where the teacher can experiment with the question.

The Moodle quiz is very flexible. Under Attempt options, make sure you have "How questions behave" set to "Adaptive mode". If necessary, press "Start again with these options".

Try typing in

3*(x-1)^2

into the answer box.

The default is for STACK to use "instant validation". That is, when the student finishes typing, the system automatically validates their answer and provides feedback. If this does not happen automatically, press the [Check] button.

The system first establishes the syntactical validity of this answer.
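
For example, an answer with unmatched brackets, such as

3*(x-1^2

will be flagged as invalid, and the student must correct it before the potential response tree assesses the answer.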

Press the [Check] button again.

The system executes the potential response tree and establishes whether your answer is algebraically equivalent to the model answer 3*(x-1)^2. Next, try getting the question wrong. If your server does not have "instant validation" switched on (an administrator/installation option), you will need to submit each answer twice. Notice that all your responses are stored in an attempts table.

We would really like to add better feedback, so it is time to edit the question again. Return to the question bank page and click on the link to edit the question.

Better feedback

What if the outcome of applying the first answer test was false? We would like to check that the student has not integrated by mistake, and we achieve this by adding another potential response node.

Close the preview window and edit the question again. Scroll down to the Potential Response Tree and click the [Add another node] button at the bottom of the list of nodes.

From the false branch of Node 1, change the "Next" field so it is set to [Node 2]. If the first test is false, we will then perform the test in Node 2.

If the student has integrated, they may or may not have added a constant of integration. If they have added such a constant we don't know what letter they have used! So, the best way to solve this problem is to differentiate their answer and compare it to the question.

Update the form so that Node 2 has

SAns = diff(ans1,x)
TAns = (x-1)^3
Answer test = AlgEquiv
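
To see why this works, suppose the student integrated by mistake and typed (x-1)^4/4+c. Then

diff((x-1)^4/4+c, x) = (x-1)^3

which is algebraically equivalent to the expression in the question, so the test is true whichever letter the student chose for their constant.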

This gives us the test, but what about the outcomes?

  1. On the true branch set the score=0
  2. On the true branch set the feedback to You appear to have integrated by mistake!

Notice here that STACK also adds an "intelligent note to self" in the answer note field. This is useful for statistically grouping similar outcomes when the feedback depends on randomly generated question versions and on different responses: you have something definite to group over. This is discussed in reporting.

Press the [Save changes] button and preview the question.

Better feedback still: the form of the answer

It is common for students to give the correct answer but use a quite inappropriate method. For example, they may have expanded out the polynomial and hence give the answer in unfactored form. In this situation, we might like to provide some encouraging feedback to explain to the student what they have done wrong.

Go back and [Add another node] in the same way as before: we need to apply another answer test to spot this.

To use this potential response, edit Node 1 and change the true branch so that the Next node points to the new Node 3. If we enter Node 3, we know the student has the correct answer. We only need to establish whether it is factored or not. To establish this we need to use a different answer test.

Update the form so that Node 3 has

SAns = ans1
TAns = 3*(x-1)^2
Answer test = FacForm
Test options = x
Quiet = Yes

The FacForm answer test automatically provides feedback, which would be inappropriate here: we just need to look at whether the answer is factored. Hence we choose the quiet option. We needed to add \(x\) to the "Test options" to indicate which variable we are using.

We need to assign outcomes.

  1. On the true branch set the score=1
  2. On the false branch set the score=1 (well, you may disagree here, but that is up to you!)
  3. On the false branch set the feedback to something like

Your answer is correct, but it is not in factored form.

This new feedback can be tested by typing in an expanded answer, i.e. 3*x^2-6*x+3.

You can continue to add more potential response nodes as the need arises. These can test for more subtle errors based upon the common mistakes students make. In each case an answer test can be used to make a different kind of distinction between answers.

Random questions

At this point you might consider saving as a new question.

It is common to want to use random numbers in questions. This is straightforward to do, and we make use of the optional question variables field.

STACK 3 uses Maxima's syntax for assignment, which is unusual. In particular the colon : is used to assign a value to a variable. So to assign the value of 5 to n we use the syntax n:5.
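
For example, the following lines (with illustrative variable names) each assign a value to a variable in the question variables field:

n : 5;        /* assign the integer 5 to n */
p : (x-1)^n;  /* define p in terms of n    */

Note that = is not used for assignment in Maxima; it builds an equation instead.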

Modify the question variables from the previous example so that

p:(x-1)^3;

Then change the question text to

Differentiate {@p@} with respect to \(x\).
[[input:ans1]] [[validation:ans1]]

and in the inputs change the model answer to

diff(p,x)

Notice that now we have defined a local variable p, and used the value of this in the Question text. Note the difference between mathematics enclosed in \(..\) symbols and in {@..@} symbols. All the text-based fields in the question, including feedback, are CAS text. This is HTML into which mathematics can be inserted. LaTeX is placed between \(..\) symbols, and CAS expressions (including your variables) between matching {@..@} symbols. There is more information in the specific documentation. The CAS expressions are evaluated in the context of the random variables and displayed.

Since we have used {@p@} here, the user will not see a \(p\) on the screen when the question is instantiated, but the displayed value of p.

Notice also that the model answer is a CAS command to differentiate the value of p with respect to x: in a random question the CAS must work out the answer itself. You now need to go through the potential response tree and use the variable p or diff(p,x) (or perhaps some other CAS expression) as appropriate.
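
For example, Node 1 becomes (a sketch following the pattern used earlier)

 SAns = ans1
 TAns = diff(p,x)
 Answer test = AlgEquiv

and in Node 2, which traps integration, the TAns field becomes p.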

We are now in a position to generate a random question. To do this modify the question variables to be

n : 2+rand(3);
p : (x-1)^n;

In this new example, we have an extra variable n which is defined to be a random number.

This is then used to define the variable p which is in turn used in the question itself.

When generating random questions in CAA we talk about random numbers when we really mean pseudo-random numbers. To keep track of which random numbers are generated for each user, there is a special command in STACK, which you should use instead of Maxima's random command.

This is the rand command which is a general "random thing" generator, see the page on random generation for full details. It can be used to generate random numbers and also to make selections from a list.
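
For example (a sketch; see the random generation page for the full range of options):

rand(5);        /* a random integer from 0 to 4 */
2+rand(3);      /* a random integer: 2, 3 or 4  */
rand([x,s,t]);  /* a random element of the list */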

The question note

The question note enables the teacher to track which version of the question is given to each student. Two versions are the same if and only if the question note is the same. Hence a random question must not have an empty question note.

Fill this in as

\[ \frac{d}{d{@x@}}{@p@} = {@diff(p,x)@} \]
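
For example, the random version with n=3 has p=(x-1)^3, so this question note renders as \[ \frac{d}{dx}(x-1)^3 = 3(x-1)^2. \]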

It is crucial to do this now since questions with rand() in the question variables must not have an empty question note. By enforcing this now we prevent frustration later, when it would otherwise be impossible to distinguish between random versions of a question.

Edit your trial question, save and preview it to get new random versions of the question.

Further randomisation

At this point you might consider saving as a new question.

As a specific example of some of these features, try the question illustrated below. This contains random numbers, and also examples of variables and expressions selected from a list.

n : rand(5)+3;
v : rand([x,s,t]);
p : rand([sin(n*v),cos(n*v)]);
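
One version these variables might generate (for illustration) is n=4, v=s and p=sin(4*s), so that version asks the student to differentiate \(\sin(4s)\) with respect to \(s\), with model answer 4*cos(4*s).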

Then change the Question text to

Differentiate {@p@} with respect to {@v@}.
[[input:ans1]] [[validation:ans1]]

Again, we need to use expressions such as diff(p,v) throughout the potential response tree, and even in one place diff(ans1,v).
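
For example, Node 1 now compares (a sketch)

 SAns = ans1
 TAns = diff(p,v)
 Answer test = AlgEquiv

and in Node 2, which traps integration, SAns becomes diff(ans1,v) and TAns becomes p.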

Delete Node 3. Factored form tests no longer make sense in the context of this question.

It is often a good idea to use variables in the question at the outset, even if there is no intention to randomly generate a question initially. Also, as questions become increasingly complex, it is a good habit to comment complicated lines in the Maxima code in the Question variables and Feedback variables, in order to make the code easier to read for anyone wishing to edit the question. Comments are entered as follows:

v : rand([x,s,t]); /* Set v randomly to x, s, or t */

You will also need to update the question note to be

\[ \frac{d}{d{@v@}}{@p@} = {@diff(p,v)@} \]

Question tests

Testing questions is time consuming and tedious, but important to ensure questions work. To help with this process STACK enables teachers to define "question tests". These are the same principle as "unit tests" in software engineering.

From the question preview window, click on the Question tests & deployed versions link in the top right of the page.

Please read the page on testing.

Please ensure you have deleted the third node from the potential response tree! Click Add a test case to add a test to your question. Fill in the following information:

ans1 = diff(p,v)
score = 1
penalty = 0
answernote = prt1-1-T

The system will automatically evaluate diff(p,v) to create ans1 and then mark the question using this information. It will match up the actual outcomes with those you specified. This automates the testing process.

You can add as many tests as you think are needed, and it is usually sensible to add one for each case you anticipate. Here it would be sensible to test whether the student has integrated by mistake, as sketched below.
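
A sketch of such a test case, assuming Node 2 carries the integration check as above (the expected answer note is easiest to read off from the note STACK itself records when you try this response):

ans1 = integrate(p,v)
score = 0
answernote = prt1-2-T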

If your question uses randomisation, then you need to deploy instances of it before you can present it to students. This is done via the deployment interface at the top of the testing page.

Next steps

You might like to look at Moodle's quiz settings and create a simple quiz. This is, strictly speaking, purely a Moodle issue, and there is every reason to combine STACK questions with other Moodle question types. Some very brief notes are included in the quiz quickstart guide.

STACK's question type is very flexible.

The next part of the authoring quick start guide looks at multi-part mathematical questions.

