Optotrak Motion Tracking System
Connecting the Optotrak
To turn on the Optotrak, flick the switch on the cameras (located between the two plugs) and press the on button on the front of the Optotrak box.
Using NDI Software
You can use the NDI First Principles software to check that the markers are working and to get a sense of the trackable volume of the cameras.
To check that everything is connected properly, go to Utilities > Query System. You should see "camera c3 - 3 sensors" if you have 3 pairs of sensors attached to the marker hub and everything is connected correctly.
To get a feel for the range of trackable space:
New Experiment > Next > Next > select which port you are using (on the front of the box) and how many markers you have (here, 1 hub with 6 markers)
Set the collection frequency to 200 Hz for rapid reaching; leave the power frequency alone.
Finish.
Now you can move the markers around to test where your capture volume is - green light indicates a marker is visible, red light means it is obstructed.
Optotrak Matlab Code
The Optotrak Matlab code is a set of functions for interfacing with the C API (not on the CD; it was provided separately). There are more functions in optomfiles, but those are not used as often.
runshapempt.m
optotrak_init (written by Craig) does the setup; it is run for each subject.
Create the local coordinate frame: dynamic and static.
This looks at only one marker (the frontmost one).
Next it asks for 3 points: the origin (0,0), then a point along the x-axis (the distance doesn't matter, just positive x), then a point along the positive y-axis.
DataGetNext3D gets the next available position of the marker (pdtadest must be initialized as single). Note: the Optotrak deals in single precision, whereas Matlab prefers to work in doubles.
Note: the Optotrak reports any missing value as a large negative number; convert these to NaN in Matlab.
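A minimal sketch of grabbing one sample and cleaning it up (the exact Matlab wrapper signature for DataGetNext3D is an assumption here; check the version in your optomfiles):

    nMarkers = 6;
    pdtadest = zeros(nMarkers, 3, 'single');   % the API expects single precision

    pdtadest = DataGetNext3D(pdtadest);        % hypothetical wrapper call; the real one may also return frame info

    pos = double(pdtadest);                    % work in doubles inside Matlab
    pos(pos < -1e20) = NaN;                    % missing markers come back as a huge negative number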
Setting up the Optotrak activates the markers, so deactivate them afterwards.
T_glob_loc transforms from the global Optotrak coordinate frame to the local table coordinate frame.
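As an illustration (not necessarily Craig's exact implementation), a transform like T_glob_loc can be built from the three calibration points, assuming they are the origin, a point on the positive x-axis, and a point on the positive y-axis of the table:

    % example values; in practice these are the three recorded marker positions (global frame)
    p0 = [100 200 -1500];                      % table origin
    px = [200 200 -1500];                      % a point in the positive x direction
    py = [100 300 -1500];                      % a point in the positive y direction

    xAxis = (px - p0) / norm(px - p0);
    yTemp = (py - p0) / norm(py - p0);
    zAxis = cross(xAxis, yTemp);  zAxis = zAxis / norm(zAxis);
    yAxis = cross(zAxis, xAxis);               % re-orthogonalized y

    R = [xAxis; yAxis; zAxis];                 % rotation: global -> local
    T_glob_loc = [R, -R * p0'; 0 0 0 1];       % 4x4 homogeneous transform

    % applying it to a global point pGlob (1x3):
    pGlob = [150 250 -1500];
    pLoc = T_glob_loc * [pGlob 1]';  pLoc = pLoc(1:3)';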
Make a folder for handling the raw Optotrak data (this is idiosyncratic to Craig's subject folder structure).
Real-time use in the actual experiment:
lines 50 - 88
line 375: make sure you are in the right folder
activate the markers
start-position code (lines 379 - 399): get the location, convert from single to double, transform it, then use it; indices (1:3) are the xyz of one marker, (4:6) the next
DataBufferStart buffers the data on the separate Optotrak box.
If you want to use the markers in real time in Matlab, you can use velocity to check whether the hand has moved.
The detectvelonset.m function stores 4 samples at a time and calculates whether the velocity exceeds a threshold.
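The gist is something like the sketch below (the real function's interface and thresholds will differ):

    function moved = velOnsetCheck(buffer, frameRate, velThresh)
    % buffer: 4x3 matrix holding the last 4 xyz samples (oldest first)
    % frameRate: samples per second (200 here); velThresh: speed threshold per second
    vel = diff(buffer) * frameRate;        % velocity between consecutive samples
    speed = sqrt(sum(vel.^2, 2));          % speed for each of the 3 intervals
    moved = all(speed > velThresh);        % onset if every interval exceeds the threshold
    end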
End of trial: make sure the buffer has finished writing (has spooling finished?), then deactivate the markers.
analyzereach: for the analysis, filter the data and fill in missing samples.
SUMMARY
beginning of experiment:
- initialize
- setup frame
- setup opto
trial loop:
- activate
- do stuff
- deactivate
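Roughly, in code (these function names just follow the notes above; treat them as placeholders for whatever the actual Matlab wrappers are called):

    % --- beginning of experiment ---
    optotrak_init;                     % initialize the system (run per subject)
    % ... set up the local coordinate frame (T_glob_loc) and the opto settings ...
    OptotrakDeActivateMarkers;         % setup leaves the markers on, so turn them off

    % --- trial loop ---
    nTrials = 10;                      % however many trials you have
    for trial = 1:nTrials
        OptotrakActivateMarkers;       % markers on for this trial
        % ... run the trial: start the data buffer, do the task ...
        OptotrakDeActivateMarkers;     % markers off at the end of the trial
    end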
Opto Analysis
Each subject has their own folder with a .mat output file (parameters, trial conditions) and an opto folder with the raw data (one file per trial). The output of the opto reach analysis gets placed back into the .mat file (data_struct.fdaMat).
analyzereach loads in the data and calls the other functions. analyzereach, lines 5 - 28:
fopen(filename) opens the file, which is then read a byte at a time: first the header, then the data.
A rawdata matrix is created, 6 x 3 x 400 (6 markers, 3 coordinates (xyz), 400 data points/samples).
Note: bad values are not 0 but around -3e28, so first convert these to NaN.
Transform the coordinate frame (using the .mat data file, which has the transform info).
Make each marker a cell: 6 cells, each with 3 xyz coordinates x 400 samples (200 per second over the 2 seconds).
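A simplified sketch of that read/clean-up step (the header size and byte layout below are placeholders; the real code reads the file's actual header format):

    filename = 'trial001.dat';               % one raw opto file per trial (name is an example)
    headerBytes = 256;                       % placeholder - use the real header size
    fid = fopen(filename, 'r');
    header = fread(fid, headerBytes, 'uint8');
    raw = fread(fid, 6*3*400, 'single');     % 6 markers x 3 coords x 400 samples
    fclose(fid);

    rawdata = reshape(raw, 6, 3, 400);       % assumed ordering: markers x xyz x samples
    rawdata(rawdata < -1e20) = NaN;          % bad values come in around -3e28, not 0

    markerData = cell(1, 6);                 % one cell per marker, each 3 x 400
    for m = 1:6
        markerData{m} = squeeze(rawdata(m, :, :));
    end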
analyzereach, lines 28 onward:
The raw data has to be edited so that one marker can fill in gaps from another marker; then missing samples are interpolated within bounds. Bounds are set because the interpolation is not very good at the apex (inflection) of the reach, i.e. the transition from reaching forward to returning back.
fillmissinginpaint does the interpolation while preserving curvature and velocity information (a technique borrowed from image processing).
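If you don't have fillmissinginpaint to hand, Matlab's built-in fillmissing gives a rough idea of the gap-filling step (it won't preserve curvature and velocity the way the inpainting approach does):

    x = cumsum(randn(400, 3));               % placeholder for one marker's xyz positions
    x(150:170, :) = NaN;                     % fake gap where the marker was lost
    xFilled = fillmissing(x, 'spline');      % spline interpolation across the gap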
cutoff = 10: 10 Hz is roughly the fastest a person can move, so this is the low-pass filter cutoff. The position data needs to be filtered because when you look at its derivative (velocity), noisy position segments get magnified.
Note: don't use "framerate" as a variable name, since it's a PTB function. frameRate = 200, since we sampled at 200 Hz.
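The filtering itself is a standard zero-phase low-pass; a generic sketch (not necessarily the exact filter order the analysis code uses):

    frameRate = 200;                             % sampling rate in Hz
    cutoff = 10;                                 % Hz
    [b, a] = butter(2, cutoff / (frameRate/2));  % 2nd-order Butterworth low-pass, normalized cutoff

    x = cumsum(randn(400, 3)) * 0.1;             % placeholder for gap-filled xyz positions
    xFilt = filtfilt(b, a, x);                   % zero-phase (forward-backward) filtering

    vel = diff(xFilt) * frameRate;               % velocity; unfiltered position noise would blow up here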
Then make every reach start from the exact same origin point.
Next, adjust the rotation of the trajectory so that it is perpendicular to the screen (so figure out the screen coordinates for this).
Next, extract only the outward movement (figure out where the outward reach stops).
Onset: the hand had to be moving at a certain velocity and with a certain acceleration for a consecutive number of frames.
Stop: first cut off at the maximum reach distance, then, prior to that, find the point where the movement got really slow (this info can be found in Craig's papers).
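In outline, the onset/stop logic looks something like this; the threshold values are placeholders and the real criteria are the ones in Craig's papers:

    % xFilt: nSamples x 3 filtered positions (see the filtering sketch above); frameRate = 200
    speed = [0; sqrt(sum(diff(xFilt).^2, 2))] * frameRate;
    acc = [0; diff(speed)] * frameRate;
    dist = sqrt(sum((xFilt - xFilt(1,:)).^2, 2));        % distance from the start position

    velThresh = 50;  accThresh = 0;  nConsec = 5;  slowThresh = 20;   % placeholder values
    fast = (speed > velThresh) & (acc > accThresh);

    onset = [];
    for i = 1:numel(fast) - nConsec + 1                  % first run of nConsec fast frames
        if all(fast(i:i + nConsec - 1)), onset = i; break, end
    end

    [~, iMax] = max(dist);                               % first cutoff: maximum reach distance
    stop = find(speed(1:iMax) < slowThresh, 1, 'last');  % then back up to where it got really slow

    outward = xFilt(onset:stop, :);                      % keep only the outward movement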
definedReach now has just one marker; the redundant markers are not processed past this point.
Note: plot(definedReach{1}(:,1), definedReach{1}(:,2), 'r')
This is now done for a single trial, but when you average across trials, some reaches take many frames and others take far fewer. So normalize in space, so that the start and end points in space stay the same. Craig uses functional data analysis to do this: fit a mathematical function to the data, which can then be evaluated at however many points you need, and then sample it densely (e.g., every 4 mm) along one direction in space.
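Craig's code does this with functional data analysis; as a simpler illustration of the idea, you can resample each reach to a fixed number of points with interpolation:

    nPoints = 100;                                % resample every reach to the same length
    reach = cumsum(randn(237, 3));                % placeholder for one outward trajectory
    oldIdx = linspace(0, 1, size(reach, 1));
    newIdx = linspace(0, 1, nPoints);
    reachNorm = interp1(oldIdx, reach, newIdx, 'spline');

    % for spatial normalization, parameterize by distance along the movement
    % direction instead of by sample index before interpolating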
normalizedreach contains xyz, as well as the velocity of xyz.
For every trial we now have a normalized reach trajectory!
fdaMat (functional data analysis) -> this now gets fed back into the .mat file.
- Be careful using these functions: look at individuals' reaches and make sure there isn't systematically noisy data across lots of individual trials (at least for a couple of subjects).
normalizeType - reaches can be normalized in time or in space, in x, y, or z.