Eyetracking in Labvanced

This is the complete text documentation for creating an experiment that uses eyetracking in Labvanced. For visuals, please see the 4-part eyetracking video series under “Videos” in the “Advanced Topics” section.

This page is dedicated to describing the Initial Study Settings for eye tracking-enabled studies.

For additional information on related topics, you can read the following:

  • Eye Tracking Technology Overview: General explanation of the technology behind our innovative webcam-based eye tracking.
  • Creating a Task: Explains how to enable and create an eye tracking task in Labvanced.
    • Eye Tracking in a Task: Additional information about creating an eye tracking-related task, such as task options and submenus.
  • Running the Study: Details about what eye tracking looks like from the participant’s point of view during a study, such as the calibration steps that they will need to follow.
  • Data Output: Information about how to access and view the relevant data captured from your eye tracking experiment.

WARNING

  • Participants must have an HD camera or webcam with a resolution of at least 1280×720 to take part in eyetracking studies. Otherwise, the study will not run and/or no data will be collected from the participant.
  • Please notify your participants of this requirement at the start of the study to avoid incomplete datasets. One way such a resolution check could be performed in the browser is sketched below.
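
For reference, resolution can be verified in any modern browser with the standard MediaDevices API. The sketch below is not part of Labvanced itself; it only illustrates one way the 1280×720 requirement could be checked before sending participants into a study.

```typescript
// Sketch (not Labvanced's own pre-check): verify that the participant's
// webcam can deliver at least 1280x720 using the standard MediaDevices API.
async function hasHdWebcam(): Promise<boolean> {
  try {
    const stream = await navigator.mediaDevices.getUserMedia({
      video: { width: { ideal: 1280 }, height: { ideal: 720 } },
    });
    // getSettings() reports the resolution the camera actually delivers.
    const { width = 0, height = 0 } = stream.getVideoTracks()[0].getSettings();
    stream.getTracks().forEach((track) => track.stop()); // release the camera
    return width >= 1280 && height >= 720;
  } catch {
    return false; // no camera available, or permission was denied
  }
}
```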

Activate Eyetracking via Webcam

In the Study Settings tab, you can activate Eyetracking in the Experiment Features column (right-most column).

Upon activating it, you can specify which eye tracking version to use, whether to enable infant-friendly mode, and your calibration settings:

  • Eyetracking Version:
    • v0.2 (legacy): Remains selected for ongoing studies
    • v1.0: Includes additional feature customization
  • Infant friendly mode: Makes the experience more enjoyable for a young child. This option includes exciting images and music, and automatically updates the calibration settings below to suggested values, which you can then edit.

Calibration Settings

The Calibration Settings allow you to customize and specify the necessary requirements for your particular study.

Length of Calibration

  • Choose between four calibration lengths:
    • 175 points, 12 poses, ~7 min: Long but most accurate
    • 130 points, 9 poses, ~5.5 min: Intermediate length and good, also the default option
    • 55 points, 4 poses, ~2 min: Short length and ok accuracy
    • 15 points, 1 pose, ~40 seconds: Very short and least accurate
  • The difference between these options is how many head poses and calibration points participants must complete before the calibration is finished. The more calibration points there are, the more accurate the eyetracking will be.
  • NOTE: The value below for the Max allowed calibration error in % of screen size updates automatically based on the chosen calibration length.

Calibration Image Type and Options

  • Choose between dots or animal icons as the images that are presented during calibration.
  • There are several options to make calibration more engaging for participants:
    • Play sounds: This option will play a sound for each target in the calibration to attract attention.
    • Animal sounds volume: Allows you to adjust the volume level of the animal sounds featured in infant-friendly calibration. Note: Available only with the v1.0 eye tracking version.
    • Show grid: Enables a grid on the screen during calibration. Can help subjects anticipate where the next calibration point will be.
    • Allow to use previous calibration data: If checked, this allows participants to skip the calibration process on a device they have already calibrated on in the past few hours. Participants will have to agree to a statement that their position, lighting, etc. have not changed since the last calibration.
    • Redo calibration if error is too high: This option allows you to enter a value that serves as the ceiling for calibration error, expressed as a percentage of screen size. If the measured calibration error is above this limit, the calibration will be redone (see the sketch after this list for how such a percentage could be computed).
    • Share calibration data with Labvanced to improve eyetracking: This is optional, but it does help us improve our algorithm for eyetracking.
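
Labvanced reports calibration error as a percentage of screen size. The exact formula is internal to the platform; the sketch below assumes, for illustration only, the mean Euclidean distance between gaze estimates and target positions, normalized by the screen diagonal.

```typescript
// Illustrative only: one plausible way a calibration error could be
// expressed as a percentage of screen size. This is an assumption,
// not Labvanced's documented formula.
interface Point { x: number; y: number; }

function calibrationErrorPercent(
  estimates: Point[], // gaze predictions at the validation points
  targets: Point[],   // true on-screen positions of those points
  screenWidth: number,
  screenHeight: number,
): number {
  const diagonal = Math.hypot(screenWidth, screenHeight);
  const meanError =
    estimates.reduce(
      (sum, e, i) => sum + Math.hypot(e.x - targets[i].x, e.y - targets[i].y),
      0,
    ) / estimates.length;
  return (meanError / diagonal) * 100;
}

// Example: a mean error of ~110 px on a 1920x1080 screen
// (diagonal ≈ 2203 px) comes out to roughly 5% of screen size.
```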

Head Pose Alignment (Virtual Chinrest)

During the calibration, the subject defines a center pose for their head that is used as an estimate of head movement. If the subject moves their head away from the center pose, the experiment is paused and the participant is asked to return to their “chinrest,” or center position. This option can be:

  • Enable (check during trials): The head pose is checked at all times. Movement outside of the chinrest at any point will pause the study until the position is regained.
  • Enable (check between trials): The head pose is checked only between trials. The study will only pause between trials to prompt the subject to return to the chinrest.
  • Enable (check between trials) and show ignore button: As above, but with an ignore button: the study pauses between trials if the subject is not positioned at the virtual chinrest, with an option to ignore this and continue the study.
  • Disabled: The study will not pause if the participant moves from the virtual chinrest.

When the head pose is being checked, the experiment will pause if the participant moves away from the set virtual chinrest. The option with the ignore button is useful for infant studies: although a centered position is preferable, it may not be achievable for all infants, so an adult can click “ignore” and continue with data collection.

It is very important that the center pose set during this calibration is comfortable, because the other poses required for calibration will be based on this center one. If the center pose is relaxed and easy to maintain, the calibration will be an easier process and the study will go more smoothly. If the center pose is uncomfortable or not directly facing the monitor, the subject could have difficulty matching the other head poses during calibration, or holding the center pose steady for the entire study. Remember that deviating from the center pose too much could theoretically reduce the spatial accuracy of the measurements.

Chinrest Constraint

The Chinrest Constraint is a specification of how much the participant is allowed to move from the set Virtual Chinrest during the task. The higher the constraint, the less the participant can move their head. The constraint ranges from very loose to very strict. Very loose is recommended for infant eyetracking studies and very strict is recommended for adult studies where the best prediction and accuracy are desired.
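
Conceptually, the virtual chinrest is a tolerance check: the current head pose is compared against the center pose recorded during calibration, and the constraint level determines how much deviation is allowed. The sketch below is purely illustrative; the pose representation and tolerance values are invented, and Labvanced's internal check is not documented here.

```typescript
// Illustrative sketch of a virtual-chinrest check. This is NOT
// Labvanced's implementation; all values below are invented.
interface HeadPose { x: number; y: number; z: number; } // estimated head position

// Hypothetical mapping from constraint level to the allowed deviation
// from the center pose (arbitrary normalized units).
const CONSTRAINT_TOLERANCE = {
  veryLoose: 0.30, // recommended for infant studies
  medium: 0.15,
  veryStrict: 0.05, // recommended when maximum accuracy is desired
} as const;

function isWithinChinrest(
  current: HeadPose,
  center: HeadPose,
  constraint: keyof typeof CONSTRAINT_TOLERANCE,
): boolean {
  const deviation = Math.hypot(
    current.x - center.x,
    current.y - center.y,
    current.z - center.z,
  );
  return deviation <= CONSTRAINT_TOLERANCE[constraint];
}

// With "check during trials", a check like this runs continuously and
// pauses the study whenever it returns false; with "check between
// trials", it runs only at trial boundaries.
```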

Minimum Performance Requirements

The Minimum Performance Requirement sets the minimum number of face snapshots per second that the participant's device camera must be able to capture in order to participate in the study.

The requirements range from very low (0.5 Hz) to very high (15 Hz).

We recommend the medium-low or medium-high settings, which require 5 to 7.5 Hz (snapshots per second). Lower requirements will affect data quality, while higher requirements will restrict the number of participants whose devices can handle that specification.
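
A webcam's nominal frame rate can be read in the browser with the standard MediaDevices API; the sketch below (again, not Labvanced's internal check) shows how. Keep in mind that the actual snapshot rate also depends on how quickly the participant's machine can process each frame, so a 30 fps camera does not by itself guarantee 30 Hz of face snapshots.

```typescript
// Sketch (not Labvanced's internal check): read the frame rate that
// the participant's webcam reports via the standard MediaDevices API.
async function reportedCameraFps(): Promise<number | undefined> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const track = stream.getVideoTracks()[0];
  const fps = track.getSettings().frameRate; // nominal frames per second
  track.stop(); // release the camera again
  return fps;
}

reportedCameraFps().then((fps) => {
  // Typical webcams report around 30 fps, comfortably above a 5-7.5 Hz
  // snapshot requirement; processing headroom is what usually matters.
  console.log(`Camera reports ${fps ?? "unknown"} fps`);
});
```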
