Running the Study

Initial Calibration of Video Feed

First, subjects must grant Labvanced permission to access their camera/webcam and record data. Once permission is granted, the calibration screen appears (please allow up to 15 seconds for loading). Participants should see a video of themselves with a blue mesh over their face that moves smoothly as they turn their head and move their mouth. If the mesh seems out of sync with their movements, they should click “No, this does not work well” at the bottom of the screen; otherwise, “Yes, this works well.” The study begins after a further consent screen is accepted.

Participants must have a camera or webcam with at least 1280×720 resolution in order to capture a complete eyetracking dataset. Please inform participants of this requirement before the study begins.
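Labvanced handles the permission prompt and resolution check internally. For intuition only, this is roughly what a browser-side check looks like using the standard MediaDevices API; the function name and control flow below are illustrative, not Labvanced's implementation:

```typescript
// Sketch: ask for webcam access and verify that the stream meets the
// 1280x720 minimum needed for a complete eyetracking dataset.
async function checkCamera(): Promise<boolean> {
  try {
    const stream = await navigator.mediaDevices.getUserMedia({
      video: { width: { ideal: 1280 }, height: { ideal: 720 } },
    });
    const { width = 0, height = 0 } = stream.getVideoTracks()[0].getSettings();
    stream.getTracks().forEach((t) => t.stop()); // release the camera again
    return width >= 1280 && height >= 720;
  } catch {
    return false; // permission denied, or no camera available
  }
}

// Usage: warn the participant before the study starts.
checkCamera().then((ok) => {
  if (!ok) console.warn("Camera missing, blocked, or below 1280x720");
});
```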

Calibration of Eyetracker

A screen with instructions will appear. This text can be translated for multilingual studies, but by default it looks like this:

[Image: eyetracker calibration instructions]

For participants who wear glasses, the eyetracker may not calibrate properly, especially if the lenses are highly reflective.

The next screen asks participants about lighting. The subject’s face should be well lit from the front, without a bright light or window behind them. However, too much light from the front will “wash out” a participant’s face, so the lighting should be balanced.
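Labvanced asks participants to judge the lighting themselves, but if you want a feel for what “washed out” means quantitatively, a purely illustrative sketch is to average the luminance of a single webcam frame; the thresholds below are assumptions, not Labvanced values:

```typescript
// Sketch: estimate how well the face is lit by averaging the luminance
// of one webcam frame drawn onto an offscreen canvas.
function meanLuminance(video: HTMLVideoElement): number {
  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  const ctx = canvas.getContext("2d")!;
  ctx.drawImage(video, 0, 0);
  const { data } = ctx.getImageData(0, 0, canvas.width, canvas.height);
  let sum = 0;
  for (let i = 0; i < data.length; i += 4) {
    // Rec. 601 luma from the R, G, B channels
    sum += 0.299 * data[i] + 0.587 * data[i + 1] + 0.114 * data[i + 2];
  }
  return sum / (data.length / 4); // 0 = black frame, 255 = pure white
}

// Illustrative thresholds only: below ~60 the face is likely too dark,
// above ~200 it is likely washed out by frontal light.
```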

Next, participants must set their center pose (the virtual chinrest mentioned earlier). There will be two head poses, with one round of calibration for each pose. Participants can also adjust their screen and/or camera position at this point to be more comfortable, as they cannot change these after this step.

[Image: face position calibration screen]

After clicking continue, this window will appear:

[Image: center pose confirmation window]

The position the participant was in when they clicked continue on the previous screen is saved as a green mesh, which marks the center pose/virtual chinrest. If this pose is incorrect, participants can go back and try again. If it is correct and comfortable, clicking continue starts the dynamic calibration.

Now, participants will be asked to align the blue mesh (their active face) with the green mesh (the static virtual chinrest) and hold the position for a few seconds. If they stray from this position, the calibration pauses and a notice appears at the bottom of the screen asking them to realign.
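Conceptually, this check compares the live head pose against the saved center pose and pauses whenever the deviation exceeds a tolerance. Below is a minimal sketch of that logic, assuming a simple pose representation and made-up tolerances; Labvanced's internals are not documented here:

```typescript
// Sketch: pause calibration while the live head pose strays too far
// from the stored center pose (the "virtual chinrest").
interface HeadPose {
  x: number;   // horizontal offset of the head in the frame
  y: number;   // vertical offset
  yaw: number; // left/right rotation in degrees
}

// Illustrative tolerances, not Labvanced's actual limits.
const TOLERANCE: HeadPose = { x: 20, y: 20, yaw: 8 };

function isAligned(live: HeadPose, center: HeadPose): boolean {
  return (
    Math.abs(live.x - center.x) <= TOLERANCE.x &&
    Math.abs(live.y - center.y) <= TOLERANCE.y &&
    Math.abs(live.yaw - center.yaw) <= TOLERANCE.yaw
  );
}

// A calibration loop would call isAligned(getLivePose(), savedCenter)
// every frame (getLivePose is hypothetical) and show the realignment
// notice while it returns false.
```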

The video feed then disappears and a screen with the researcher’s chosen background color and fixation points appears. Participants are directed to look at one of the points, highlighted by a large circle that gradually shrinks as the participant gazes at it. The large circle then moves to highlight a different point on the screen, and so on until all points have been calibrated.

[Image: calibration points on screen]
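For intuition, the shrinking highlight can be modeled as a circle whose radius decays while the participant fixates the point, advancing to the next point once it has fully shrunk. The point layout, timing, and canvas drawing below are all hypothetical:

```typescript
// Sketch: highlight each calibration point with a circle that shrinks
// while the participant fixates it, then move on to the next point.
const points = [
  { x: 0.1, y: 0.1 }, { x: 0.9, y: 0.1 }, { x: 0.5, y: 0.5 },
  { x: 0.1, y: 0.9 }, { x: 0.9, y: 0.9 },
]; // fractions of canvas width/height; layout is illustrative

function calibratePoint(
  ctx: CanvasRenderingContext2D,
  p: { x: number; y: number },
  done: () => void
): void {
  let radius = 60; // starting highlight radius in pixels
  const timer = setInterval(() => {
    ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
    ctx.beginPath();
    ctx.arc(p.x * ctx.canvas.width, p.y * ctx.canvas.height, radius, 0, 2 * Math.PI);
    ctx.stroke();
    if (--radius <= 5) { // fully shrunk: this point is done
      clearInterval(timer);
      done();            // advance to the next point
    }
  }, 30);
}
```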

Next, participants are asked to move their head left and right.

[Image: head tilt instruction screen]

While participants hold each pose, the dot screen appears again and calibration is repeated for the left- and right-shifted positions. Finally, participants return to the center position for a last round of calibration with the dots. Before each trial there is also a short recalibration: if your first task involves eyetracking, a short dot calibration will run even though the participant has just finished the overall calibration.

Tips

  • If you change the background color of your frames, be sure to go to the Study Settings tab and change the overall background color of the study as well. This ensures that the calibration screen color matches the background color participants see throughout the study.
  • Test the study on yourself or a lab partner to ensure you are collecting the necessary eyetracking data.
  • If the same device logs into an eyetracking experiment within a few hours of a previous session, Labvanced may have saved the calibration data.

[Image: stored calibration data prompt]

You can then choose either to skip the calibration or to rerun it.
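This behavior suggests calibration data is cached on the participant's device with a validity window. As a purely hypothetical sketch of such a cache (the storage key, storage mechanism, and three-hour window are all assumptions; Labvanced does not document its mechanism here):

```typescript
// Sketch: cache calibration data locally and offer to reuse it while
// it is only a few hours old. Key name and window are assumptions.
const CALIBRATION_KEY = "eyetracking-calibration";
const MAX_AGE_MS = 3 * 60 * 60 * 1000; // assumed three-hour window

function saveCalibration(data: object): void {
  localStorage.setItem(
    CALIBRATION_KEY,
    JSON.stringify({ savedAt: Date.now(), data })
  );
}

function loadCalibration(): object | null {
  const raw = localStorage.getItem(CALIBRATION_KEY);
  if (raw === null) return null;
  const { savedAt, data } = JSON.parse(raw);
  // A fresh entry lets the participant skip calibration; a stale one
  // falls through to a full recalibration.
  return Date.now() - savedAt <= MAX_AGE_MS ? data : null;
}
```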