# Eyetracking in Labvanced - Initial Study Settings

This is the complete text documentation for creating an experiment that uses eyetracking in Labvanced. For visuals, please see the 4-part eyetracking video series under “Videos.”

# Calibration Settings

In the Study Settings tab, you can activate Eyetracking in the Experiment Features column (on the right side of the screen). Under Eye Tracking via Webcam, choose V2 (Labvanced Eyetracking).

# Length of Calibration

  • Choose between four calibration lengths:
    • Long but most accurate (8 minutes)
    • Intermediate length and good accuracy (5 minutes): the default option
    • Short length and ok accuracy (3 minutes)
    • Very short and less accurate (<1 minute)
  • The difference between these options is how many head poses and movements (calibration points) participants must perform before the calibration is complete. The more calibration points there are, the more accurate the eyetracking will be.

# Calibration Image Type and Options

  • Choose between dots or animal icons as the images that are presented during calibration.
  • There are several options to make calibration more engaging for participants:
    • Infant-friendly mode: Makes the experience more enjoyable for a young child. This option includes engaging images and music, and automatically selects the shortest calibration time.
    • Play sounds: This option will play a sound for each target in the calibration to attract attention.
    • Show grid: Enables a grid on the screen during calibration. Can help subjects anticipate where the next calibration point will be.
    • Show Initial Test Video Stream: Shows participants a video feed of themselves with a face mesh overlay (as seen during calibration) to illustrate what will happen during eyetracking. Note: On computers with a low-end graphics card or no dedicated graphics card, this may be very slow or not function at all. If it does not work for a subject, their eyetracking data will probably not be recorded either, and they should not participate in the study.
    • Share calibration data with Labvanced to improve eyetracking: This is optional, but it does help us improve our algorithm for eyetracking.
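The options above can be thought of as a small configuration bundle. As a purely illustrative sketch (the key names below are hypothetical, not Labvanced's actual API), the settings and the rule that infant-friendly mode forces the shortest calibration might look like this:

```python
# Hypothetical representation of the calibration options described above.
# Key names are illustrative assumptions, not Labvanced identifiers.
calibration_settings = {
    "length": "intermediate",         # long | intermediate | short | very_short
    "image_type": "animal_icons",     # or "dots"
    "infant_friendly_mode": False,    # engaging images/music, shortest calibration
    "play_sounds": True,              # play a sound for each calibration target
    "show_grid": True,                # helps anticipate the next calibration point
    "show_initial_test_video": True,  # face-mesh preview; needs a capable GPU
    "share_calibration_data": False,  # optional; helps improve the algorithm
}

def effective_length(settings):
    """Infant-friendly mode overrides the chosen length with the shortest
    calibration, as described in the documentation above."""
    if settings["infant_friendly_mode"]:
        return "very_short"
    return settings["length"]
```

This makes explicit that choosing infant-friendly mode and a long calibration is contradictory: the mode wins.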

# Virtual Chinrest

During the calibration, the subject defines a center pose for their head that is used as a reference for estimating head movement. If the subject moves their head away from the center pose, the experiment pauses and the participant is asked to return to their "chinrest," or center position. This option can be fully enabled, enabled with a button to ignore the warning, or disabled. The ignore-button variant is useful for infant studies: although a steady center position is preferable, it may not be achievable for every infant, so parents can click "ignore" and continue with data collection.

It is very important that the center pose set during this calibration is comfortable, because the other poses required for calibration will be based on this center one. If the center pose is relaxed and easy to maintain, the calibration will be an easier process and the study will go more smoothly. If the center pose is uncomfortable or not directly facing the monitor, the subject could have difficulty matching the other head poses during calibration, or holding the center pose steady for the entire study. Remember that deviating from the center pose too much could theoretically reduce the spatial accuracy of the measurements.
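Conceptually, the chinrest check compares the current head pose against the calibrated center pose and reacts when the deviation grows too large. The sketch below is a hypothetical illustration of that logic (Labvanced's actual implementation, pose representation, and threshold are not public; the `(x, y, z)` pose format and `threshold` value are assumptions):

```python
import math

def head_deviation(center_pose, current_pose):
    """Euclidean distance between the calibrated center pose and the
    current head-pose estimate, both given as (x, y, z) tuples."""
    return math.sqrt(sum((c - p) ** 2 for c, p in zip(center_pose, current_pose)))

def chinrest_check(center_pose, current_pose, threshold=0.15, allow_ignore=False):
    """Return 'ok' while within the threshold; otherwise 'warn' (ignorable
    warning) or 'pause' (experiment pauses), matching the three documented
    behaviors: fully enabled -> pause, enabled with ignore button -> warn."""
    if head_deviation(center_pose, current_pose) <= threshold:
        return "ok"
    return "warn" if allow_ignore else "pause"
```

With `allow_ignore=True` the participant (or a parent, in infant studies) can dismiss the warning and continue, mirroring the ignore-button option described above.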

# NEW Calculation Mode

In our latest eyetracking update, we have added two new calculation modes.

  • Legacy Mode: The original (default) calculation mode. Has a low sampling rate and no effect on spatial precision, but can slow down experimental events.
  • Spatial Accuracy: Has a low sampling rate, but unlike Legacy mode it does not slow down experimental events. It also has no effect on spatial precision.
  • High Sampling: Has a high sampling rate and no effect on experimental events, but spatial accuracy is slightly reduced during fast head movements. We recommend High Sampling mode because it calculates gaze at a stable 30 Hz with smooth experiment execution.
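The practical consequence of the sampling rate is how many gaze samples each trial yields. A back-of-the-envelope sketch, using the 30 Hz rate named above for High Sampling mode (the exact rates of the low-rate modes are not specified here, so any other figure would be an assumption):

```python
def expected_samples(rate_hz, trial_seconds):
    """Approximate number of gaze samples recorded during one trial
    at a given sampling rate."""
    return int(rate_hz * trial_seconds)

# High Sampling mode at a stable 30 Hz over a 5-second trial:
print(expected_samples(30, 5))  # 150 samples
```

A lower-rate mode over the same trial would yield proportionally fewer samples, which matters for analyses such as fixation detection that depend on temporal resolution.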

This chart outlines the differences between the three modes of calculation:

[Chart: comparison of the three eyetracking calculation modes]