Eyetracking in Labvanced

This is the complete text documentation for creating an experiment that uses eyetracking in Labvanced. For visuals, please see the 4-part eyetracking video series under “Videos” in the “Advanced Topics” section.

This page is dedicated to describing the Initial Study Settings for eye tracking-enabled studies.

WARNING

  • Participants must have an HD camera or webcam with at least 1280×720 resolution to participate in eyetracking studies. Otherwise, the study will not run and/or no data will be collected from that participant. A sketch of how this requirement could be checked in the browser appears after this warning.
  • Please notify your participants of this requirement at the onset of the study to avoid incomplete datasets.
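
If you want to pre-screen participants yourself (for example, on a recruitment page outside Labvanced), a browser-side resolution check along the following lines is possible. This is only a minimal sketch using the standard getUserMedia API; Labvanced performs its own device check when the study starts, and the function name here is illustrative.

```typescript
// Minimal sketch: verify that the participant's webcam can deliver at least
// 1280x720 before starting an eyetracking session. Illustrative only;
// Labvanced runs its own device check when the study is launched.
async function webcamMeetsHdRequirement(): Promise<boolean> {
  try {
    // Ask for an HD video stream; "ideal" lets the browser fall back to the
    // closest resolution the camera actually supports.
    const stream = await navigator.mediaDevices.getUserMedia({
      video: { width: { ideal: 1280 }, height: { ideal: 720 } },
    });
    const settings = stream.getVideoTracks()[0].getSettings();
    // Release the camera again once the delivered resolution is known.
    stream.getTracks().forEach((track) => track.stop());
    return (settings.width ?? 0) >= 1280 && (settings.height ?? 0) >= 720;
  } catch {
    // Permission denied or no camera available.
    return false;
  }
}
```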

Calibration Settings

In the Study Settings tab, you can activate Eyetracking in the Experiment Features column (on the right side of the screen). Under Eye Tracking via Webcam, choose V2 (Labvanced Eyetracking).

Length of Calibration

  • Choose from four calibration lengths:
    • Long but most accurate (8 minutes)
    • Intermediate length and good accuracy (5 minutes): the default option
    • Short length and ok accuracy (3 minutes)
    • Very short and less accurate (<1 minute)
  • The difference between these options is how many head poses and movements (calibration points) participants must perform before the calibration is complete. The more calibration points there are, the more accurate the Eyetracking will be.

Calibration Image Type and Options

  • Choose between dots or animal icons as the images that are presented during calibration.
  • There are several options to make calibration more engaging for participants:
    • Infant friendly mode: Makes the experience more enjoyable for a young child. This option will include exciting images and music, and will automatically choose the shortest calibration time and ignore the virtual chinrest (see below).
    • Play sounds: This option will play a sound for each target in the calibration to attract attention.
    • Show grid: Enables a grid on the screen during calibration. Can help subjects anticipate where the next calibration point will be.
    • Allow to use previous calibration data: If checked, participants who have already completed a calibration on the same device within the past few hours can skip the calibration process. Participants will have to agree to a statement that their position, lighting, etc. have not changed since the last calibration.
    • Redo calibration if error is too high: This option allows you to enter a value that serves as the ceiling for calibration error, expressed as a percentage of screen size. If the measured calibration error exceeds this limit, the calibration will be redone (see the sketch after this list).
    • Show Initial Test Video Stream: Shows the participant the video feed of themselves with a face mesh overlay (also seen during the calibration) to illustrate what will happen during eyetracking. Note: For computers with a low-end graphics card or no graphics card, this may be very slow or not function at all. If this does not work for a subject, their eyetracking data will probably not be recorded and they should not participate in the study.
    • Share calibration data with Labvanced to improve eyetracking: This is optional, but it does help us improve our algorithm for eyetracking.
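
The exact formula used to compute the calibration error is not spelled out on this page, but one plausible reading of a “percentage of screen size” ceiling is sketched below. The function and the choice of the screen diagonal as the reference size are assumptions for illustration only, not the actual Labvanced implementation.

```typescript
// Hedged sketch of how a "percent of screen size" error limit could be applied;
// the exact formula used internally is not documented on this page.
interface Point { x: number; y: number }

function calibrationErrorPercent(
  predictions: Point[],  // gaze positions estimated during validation
  targets: Point[],      // true positions of the calibration targets
  screenWidth: number,
  screenHeight: number,
): number {
  // Mean Euclidean distance between predicted gaze and target, in pixels.
  const meanErrorPx =
    predictions.reduce((sum, p, i) => {
      return sum + Math.hypot(p.x - targets[i].x, p.y - targets[i].y);
    }, 0) / predictions.length;

  // Express the error relative to the screen diagonal (one possible reading
  // of "screen size").
  const diagonal = Math.hypot(screenWidth, screenHeight);
  return (meanErrorPx / diagonal) * 100;
}

// Example: with a 5% ceiling on a 1920x1080 screen, a mean error above
// roughly 110 px would trigger a redo of the calibration.
```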

Head Pose Alignment (Virtual Chinrest)

During the calibration, the subject defines a center pose for their head that is used as a reference point for estimating head movement. If the subject moves their head away from the center pose, the experiment is paused and the participant is asked to return to their “chinrest,” or center position. This option can be:

  • Enabled and checked at all times: Movement outside of the chinrest at any time will result in the study pausing until the position is regained.
  • Enabled and checked only in between trials: The study will only pause to prompt the subject to return to the chinrest in between trials.
  • Enabled, checked between trials, with an ignore button: The study will pause in between trials if the subject is not positioned at the virtual chinrest, but there is an option to ignore this and continue the study.
  • Disabled: The study will not pause if the participant moves from the virtual chinrest.

When the head pose is being checked, the experiment will pause if the participant moves away from the set virtual chinrest. The option with an ignore button is useful for infant studies because, although a center position is preferable, it may not be achievable for all infants; an adult can click “ignore” and continue with data collection.

It is very important that the center pose set during this calibration is comfortable, because the other poses required for calibration will be based on this center one. If the center pose is relaxed and easy to maintain, the calibration will be an easier process and the study will go more smoothly. If the center pose is uncomfortable or not directly facing the monitor, the subject could have difficulty matching the other head poses during calibration, or holding the center pose steady for the entire study. Remember that deviating from the center pose too much could theoretically reduce the spatial accuracy of the measurements.

Chinrest Constraint

The Chinrest Constraint specifies how much the participant is allowed to move from the set Virtual Chinrest during the task. The stricter the constraint, the less the participant can move their head. The constraint ranges from very loose to very strict. Very loose is recommended for infant eyetracking studies and very strict is recommended for adult studies where the best prediction and accuracy are desired.
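
Conceptually, the head pose alignment modes and the Chinrest Constraint combine into a simple rule: check how far the head has drifted from the calibrated center pose at the configured times, and pause if the deviation exceeds a strictness-dependent threshold. The sketch below illustrates this idea only; the mode names, units, and threshold values are hypothetical, not the actual Labvanced implementation.

```typescript
// Illustrative sketch of the virtual chinrest logic; the mode names, units,
// and threshold values below are hypothetical.
type ChinrestMode = "always" | "betweenTrials" | "betweenTrialsWithIgnore" | "disabled";

// Offset of the current head pose from the calibrated center pose
// (hypothetical normalized units).
interface HeadPoseOffset { x: number; y: number; z: number }

// Hypothetical mapping from the Chinrest Constraint setting to the maximum
// allowed deviation from the center pose.
const maxDeviation = {
  veryLoose: 0.4,   // recommended for infant studies
  medium: 0.2,
  veryStrict: 0.05, // recommended when the best prediction and accuracy are desired
} as const;

function shouldPauseStudy(
  offset: HeadPoseOffset,
  mode: ChinrestMode,
  constraint: keyof typeof maxDeviation,
  betweenTrials: boolean, // true while the study is in an inter-trial interval
): boolean {
  if (mode === "disabled") return false;
  // The "between trials" modes only check the pose between trials.
  if (mode !== "always" && !betweenTrials) return false;
  const deviation = Math.hypot(offset.x, offset.y, offset.z);
  return deviation > maxDeviation[constraint];
}

// In "betweenTrialsWithIgnore" mode, the resulting pause screen would also
// offer an "ignore" button so an adult can resume data collection manually.
```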

Minimum Performance Requirements

This specification sets the minimum number of face snapshots per second that the participant's device camera needs to be able to take in order to participate in the study. We recommend the medium-low or medium-high setting, which require 5 to 7.5 snapshots per second (Hz). Lower requirements will reduce data quality, while higher requirements will restrict the number of participants whose devices can handle the specification.

The requirements range from very low (0.5 Hz) to very high (15 Hz).
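
Labvanced measures the achievable snapshot rate itself, but the check amounts to something like the following sketch; the function name and sampling window are assumptions for illustration.

```typescript
// Rough sketch of what a "face snapshots per second" requirement amounts to;
// Labvanced measures this itself, so this function is purely illustrative.
function meetsSnapshotRate(
  snapshotTimestampsMs: number[], // times (ms) at which face snapshots were captured
  requiredHz: number,             // e.g. 5 for medium-low, 7.5 for medium-high
): boolean {
  if (snapshotTimestampsMs.length < 2) return false;
  const first = snapshotTimestampsMs[0];
  const last = snapshotTimestampsMs[snapshotTimestampsMs.length - 1];
  const achievedHz = (snapshotTimestampsMs.length - 1) / ((last - first) / 1000);
  return achievedHz >= requiredHz;
}

// Example: 31 snapshots captured over 5 seconds is 6 Hz, which passes the
// 5 Hz (medium-low) requirement but not the 7.5 Hz (medium-high) one.
```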

Note: Maximum Resolution Requirement

There is a maximum resolution limit for webcams in eye tracking studies because it standardizes the amount of data being uploaded to our servers. If your webcam records at a very high resolution, a large amount of data will be collected and the study will slow down.

Most of the processing load happens during the initial calibration, when the system is attempting to detect a face from the webcam. Face detection and eye gaze detection operate in two different pipelines. Once a face is established, an “eye mask” snapshot is taken and eye gaze can be tracked.

Even if your device has a very fast CPU and plenty of RAM, the eye tracking feature may run slowly or fail to process if other processes are running on your device at the same time. It is very important to close all other tabs and any unnecessary background processes before beginning an eye tracking study.