Automate tests
Many behavioural tests are straightforward, with ANY-maze simply tracking the animal's movements. However, tests such as Operant or Fear conditioning can include complex rules governing what should happen as time passes or when the animal performs particular actions.
ANY-maze uses procedures to define these rules. Procedures are built by dragging and dropping a small set of easy-to-understand statements. For example, the procedure on the right specifies that, 30 seconds after the test starts, a tone should be played, then 5 seconds after that a shock should be administered.
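Expressed outside the drag-and-drop editor, that logic amounts to a short timed sequence. The sketch below is illustrative Python rather than ANY-maze's own statement syntax, and `play_tone()` and `deliver_shock()` are placeholder names for whatever output devices the procedure would control.

```python
import time

def play_tone():
    # Stand-in for switching on a tone generator via an output device
    print("Tone on")

def deliver_shock():
    # Stand-in for triggering the shock grid via an output device
    print("Shock delivered")

def run_procedure():
    # 30 seconds after the test starts, play a tone...
    time.sleep(30)
    play_tone()
    # ...then 5 seconds after that, administer a shock
    time.sleep(5)
    deliver_shock()

run_procedure()
```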
Making decisions
In tests such as Operant conditioning, the animal may be rewarded for making a correct choice: for example, it might be required to press a lever only when a light on the left side of the cage is lit; if it does so, it receives a reward, otherwise a mild foot shock is administered.
It’s easy to set up rules such as this using procedures – see the example on the right.
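Written as ordinary conditional logic, a rule like this might look like the sketch below. It is illustrative Python only; `on_lever_press`, `dispense_pellet` and `deliver_shock` are placeholder names, not ANY-maze identifiers.

```python
def dispense_pellet():
    # Stand-in for operating a pellet dispenser (illustrative only)
    print("Pellet dispensed")

def deliver_shock():
    # Stand-in for administering a mild foot shock (illustrative only)
    print("Shock delivered")

def on_lever_press(left_light_on: bool) -> str:
    # Decide what happens when the animal presses the lever
    if left_light_on:
        dispense_pellet()   # correct choice: reward
        return "reward"
    deliver_shock()         # incorrect choice: mild foot shock
    return "shock"

# Example: the lever is pressed while the left light is lit
print(on_lever_press(left_light_on=True))
```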
Use multiple procedures
Complex Operant conditioning tests can include many devices, for example, doors separating multiple compartments, each containing lights, levers and pellet dispensers. The rules governing how such tests are performed are usually correspondingly complicated.
Writing a single procedure to automate these tests can be difficult, but breaking the rules into different procedures can make things dramatically simpler. ANY-maze allows you to create any number of procedures, all of which are processed simultaneously.
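Conceptually, this is like running several small, independent routines at once, one per compartment or device, rather than one large routine that handles everything. The Python sketch below uses threads purely as an analogy for that simultaneous processing; the procedure names and timings are invented for illustration.

```python
import threading
import time

def compartment_procedure(name: str, delay: float):
    # Each small procedure handles one part of the test, for example
    # the light and lever in a single compartment
    time.sleep(delay)
    print(f"{name}: light switched on")

def door_procedure():
    # A separate procedure opens the connecting door partway through the test
    time.sleep(2)
    print("Door opened")

# All procedures run at the same time, just as ANY-maze processes
# every procedure in a test simultaneously
threads = [
    threading.Thread(target=compartment_procedure, args=("Compartment A", 1)),
    threading.Thread(target=compartment_procedure, args=("Compartment B", 1.5)),
    threading.Thread(target=door_procedure),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```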
Procedure features
- Simple drag and drop interface
- Just 8 principal statements
- Procedures can respond to over 50 different events
- Procedures can perform more than 70 different actions
- Full control of all the I/O devices that ANY-maze supports
- Wide range of maths functions available
- Functions to generate random numbers
- Support for variables, including arrays (see the sketch after this list)
- Variables can be stored between tests
- Numeric variables can be saved as part of a test's results
- Procedures are checked when you write them and errors are highlighted and explained
- Errors that occur while a test is running are reported at the time and recorded with the test
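To give a flavour of how a few of these features might combine in practice (variables, arrays, random numbers and per-test results), here is a rough sketch in plain Python; none of the names correspond to ANY-maze's built-in functions, and the trial logic is invented for illustration.

```python
import random

# Variables persist for the length of a test; ANY-maze can also keep them
# between tests and save numeric values as part of a test's results
correct_presses = 0
trial_intervals = []              # an array-style variable

for trial in range(5):
    # A random number varies the interval before each trial
    interval = random.uniform(10, 30)
    trial_intervals.append(interval)
    # ...run the trial, then record whether the response was correct;
    # here a coin flip stands in for the animal's behaviour
    if random.random() < 0.5:
        correct_presses += 1

print("Correct presses this test:", correct_presses)
print("Intervals used:", [round(i, 1) for i in trial_intervals])
```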