grunt-hammer-athena v1.0.6
Tasks for generating build files to run Hammer in Athena browser.
Getting Started
If you haven't used Grunt before, be sure to check out the Getting Started guide, as it explains how to create a Gruntfile as well as install and use Grunt plugins. Once you're familiar with that process, you may install this plugin with this command:
npm install grunt-hammer-athena --save-dev
Once the plugin has been installed, it may be enabled inside your Gruntfile with this line of JavaScript:
grunt.loadNpmTasks('grunt-hammer-athena');
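Putting the two steps together, a minimal Gruntfile sketch might look like the following (the `inputDir` and `outputDir` values here are placeholders; see the Options section below for what each option does):

```javascript
// Minimal Gruntfile.js sketch. The option values are placeholders,
// not required defaults.
module.exports = function (grunt) {
  grunt.initConfig({
    athena: {
      build: {
        inputDir: 'build',
        outputDir: 'dist'
      }
    }
  });

  grunt.loadNpmTasks('grunt-hammer-athena');
};
```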
Athena scoring in a Hammer question
Only questions with Athena scoring will be included in an Athena build.
In a question, you have probably set up your scoring with h:score like so:
<h:score>
  <condition>
    <all>
      <!-- selection scoring -->
      <assert>'{{models/sr::selectedItems}}' == 'sharepoint,word' && Math.round({{locaproper}})</assert>
    </all>
  </condition>
</h:score>
Athena scoring is similar in nature, but looks like this:
<athena:score>
  <athena:response ident="r1" property="models/sr::selectedItems" value="${answers}">
    <assert>'{property}' == '{value}'</assert>
  </athena:response>
</athena:score>
The Athena API requires an ident, a property and a value. The ident needs to be unique within the question. You can have one or more athena:response items.
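At evaluation time, the {property} and {value} tokens inside an assert are substituted before the expression is checked. As a rough illustration only (the Hammer/Athena runtime handles this internally; the function name and parameters below are hypothetical), the substitution could be sketched like this:

```javascript
// Illustrative sketch only: resolving an assert template such as
// "'{property}' == '{value}'". This is NOT the actual Athena API.
function evaluateAssert(template, propertyValue, expectedValue) {
  const expr = template
    .replace('{property}', propertyValue)  // current model value
    .replace('{value}', expectedValue);    // expected value from the response
  // e.g. "'sharepoint,word' == 'sharepoint,word'"
  return eval(expr); // resolves to true or false
}
```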
Scoring best practices
Your scoring should not produce a single boolean indicating whether the user got the answer correct. Instead, it should test the different actions (or responses) the user takes in the question. For example, in a multiple choice question, you may test which items the user selected:
Multiple choice example
<athena:score>
  <athena:response ident="r1" property="models/sr::selectedItems" value="sharepoint,word">
    <assert>'{property}' == '{value}'</assert>
  </athena:response>
</athena:score>
A simulation will most likely have multiple responses. Let's take the following example: say a Microsoft Word question has you click a tab on the ribbon bar, open a dialog window, and then select an option in a list box to get the question correct. Your responses may look like the following:
Simulation example
<athena:score>
  <athena:response ident="r1" property="models/word::ribbonTab" value="design">
    <assert>'{property}' == '{value}'</assert>
  </athena:response>
  <athena:response ident="r2" property="models/word::dialog" value="styles">
    <assert>'{property}' == '{value}'</assert>
  </athena:response>
  <athena:response ident="r3" property="models/word::color" value="red">
    <assert>'{property}' == '{value}'</assert>
  </athena:response>
</athena:score>
With this information the psychometrician can quickly review the data, or load it into a spreadsheet to run whatever numbers they want. Our goal is not only to score the question but also to provide information to run analytics against.
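For instance, a crude aggregation over exported response records (the record shape below is an assumption for illustration, not the actual Athena export format) could show how often each response in a simulation is answered correctly:

```javascript
// Hypothetical sketch: per-response success rates across candidates,
// useful for spotting which step of a simulation trips people up.
// The { ident, correct } record shape is assumed, not an Athena format.
function responseSuccessRates(records) {
  const totals = {};
  for (const { ident, correct } of records) {
    totals[ident] = totals[ident] || { correct: 0, total: 0 };
    totals[ident].total += 1;
    if (correct) totals[ident].correct += 1;
  }
  const rates = {};
  for (const ident of Object.keys(totals)) {
    rates[ident] = totals[ident].correct / totals[ident].total;
  }
  return rates;
}
```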
Athena task
Run this task with the grunt athena command.
Task targets, files and options may be specified according to the grunt Configuring tasks guide.
Options
These are the options that can be set.
inputDir
Type: String
Default: undefined
The input directory to inspect and compile when producing an Athena build.
outputDir
Type: String
Default: undefined
The directory path to output the Athena build.
strings
Type: String or Array
Default: undefined
The path or paths where the string XML files are located. These are used to replace strings during compile.
manifest
Type: Object
Default: undefined
Produces an Athena manifest listing all the files that can be loaded in the exam. The Athena server uses this to cache and validate resources. If a file isn't in the list, it won't be loaded.
minify
Type: Boolean
Default: false
Minifies some of the files that are generated during this build.
excludePackages
Type: Array
Default: undefined
Excludes the listed Hammer packages from the Athena build.
Usage examples
This is how you will typically set up the task. Change CustomItem-hammer to match your own item directory.
athena: {
  build: {
    inputDir: "build",
    outputDir: "CustomItem-hammer",
    strings: ["build/strings/**/*.xml"],
    manifest: {
      cwd: 'CustomItem-hammer/',
      src: ['**', '!athena/**']
    },
    minify: true,
    excludePackages: ['hammer-debug', 'hammer-score', 'hammer-config']
  }
}
© Copyright Pearson 2016