
Labvanced Eye Tracking

Researchers from universities all over the world have employed our online eye tracking technology. At Labvanced, our goal is to close the gap between eye tracking technology and online experiments, giving researchers powerful tools with just a few mouse clicks.

Peer-Reviewed Publication: Webcam-based Eye Tracking from Labvanced

Published in the journal Behavior Research Methods, check it out: Webcam eye tracking close to laboratory standards: Comparing a new webcam-based system and the EyeLink 1000.

Key findings from the publication:

  • Labvanced’s webcam-based eye tracking has an overall accuracy of 1.4° and a precision of 1.1°, an error roughly 0.5° larger than that of the EyeLink system (for a sense of what an error of this size means on screen, see the conversion sketch below this list).
  • Interestingly, both accuracy and precision improve (to 1.3° and 0.9°, respectively) when visual targets are presented at the center of the screen, which is important for researchers to keep in mind given that many psychology experiments present stimuli at the center of the screen.
  • For free viewing and smooth pursuit tasks, the correlation between Labvanced and EyeLink gaze data was around 80%. For a visual demonstration of what this correlation looks like, see the image below showing the overlap between Labvanced (blue dots) and EyeLink (red dots) data points.
  • Accuracy also remained consistent over time.
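
To get a feel for what these angular errors mean on screen, visual angle can be converted to an on-screen distance with the standard geometric relation s = 2·d·tan(θ/2), where d is the viewing distance. The snippet below only illustrates that formula; the 60 cm viewing distance is an assumed, typical value and does not come from the paper.

```ts
// Convert an angular error (degrees of visual angle) into an on-screen offset,
// given the viewing distance: s = 2 * d * tan(theta / 2).
function visualAngleToScreenOffsetCm(angleDeg: number, viewingDistanceCm: number): number {
  const angleRad = (angleDeg * Math.PI) / 180;
  return 2 * viewingDistanceCm * Math.tan(angleRad / 2);
}

// Example: a 1.4° error at an assumed ~60 cm viewing distance
// corresponds to roughly 1.5 cm on the screen.
console.log(visualAngleToScreenOffsetCm(1.4, 60).toFixed(2)); // ≈ "1.47"
```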


Fig. 1: Graphs from the research paper (taken from the publication, Fig. 7), showing the overlap between Labvanced (blue dots) and EyeLink (red dots) data points in a smooth pursuit task. For free viewing and smooth pursuit tasks, the correlation between Labvanced and EyeLink gaze data was around 80%.


Background

Online eye tracking was pioneered a little over five years ago by WebGazer, which is still in use today. While this solution can distinguish gaze across a four-quadrant plane (left/right and up/down), it has been criticized as unrefined because of its high error rate, especially when participants move their heads, which is bound to happen.

Although WebGazer has been a pioneer in bringing eye tracking to the web, it cannot be reliably used for scientific purposes and rigorous experiments. Some other alternatives do exist, for example tools for UX, but currently there is no other web eye tracking software that can be used online in a scientific context for psychological experiments to study complex processes like visual attention and cognition.

Our journey began over 3 years ago, when we implemented a deep neural network that takes visual data from a webcam and analyzes it to provide information about the participant’s eye and head movement.

This video describes Labvanced's webcam-based eye tracking, as well as its accuracy:

Our Process: Labvanced’s Eye Tracking Pipeline

With our web-based eye tracking software driven by a deep neural network, a webcam image can be translated into data points through the following process:

  1. Capture webcam images: From an enabled webcam, multiple frames are taken in real time, creating multiple images on which face detection is performed. At this step, the user’s graphics card is an important component, because running the neural network relies heavily on it (more on this in the next section). A minimal browser sketch of this capture step appears below the list.

  2. Derive two main data points: While performing face detection using a deep neural network, two main data points are derived.

    • Relevant image data from the area around the eyes: the pixels that show the eyes and the area around the eyes.
    • Head position and orientation: the pixels that show where the head is positioned and its orientation in space.
  3. Perform calibration: Finally, all eye tracking requires calibration, which is subject-specific because of individual differences in physical characteristics. Our calibration is a strong webcam feature that arises from the integrated neural network.

The steps behind Labvanced's eye tracking software that make the webcam solution a viable tool for research and psychology studies.
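
As a rough picture of what the capture step (step 1 above) looks like in a browser, here is a minimal sketch using only standard Web APIs (getUserMedia and a canvas). It is illustrative only and is not Labvanced’s actual implementation; the onFrame callback stands in for whatever model consumes the frames.

```ts
// Minimal sketch: grab webcam frames in the browser so a face-detection
// model can analyze them. Illustrative only, not Labvanced's implementation.
async function captureFrames(onFrame: (frame: ImageData) => void): Promise<void> {
  // Ask the user for webcam access (requires a secure context and user consent).
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const video = document.createElement("video");
  video.srcObject = stream;
  await video.play();

  const canvas = document.createElement("canvas");
  const ctx = canvas.getContext("2d")!;

  const grab = () => {
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    ctx.drawImage(video, 0, 0);
    // Hand the raw pixels to the (client-side) face-detection / gaze model.
    onFrame(ctx.getImageData(0, 0, canvas.width, canvas.height));
    requestAnimationFrame(grab); // keep sampling frames in real time
  };
  requestAnimationFrame(grab);
}
```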

System Architecture and Eye Tracking Data Flow

Let’s take a closer look at the process and how eye tracking is handled.

  • First, as established previously, we take a webcam image and run a face detection algorithm on it (to create a mesh around the facial features). This all happens quickly, in real time, on the user’s device rather than on remote servers.

  • The tradeoff of doing everything in real time on the user’s device is that we are working with the finite resources of that computer or device. However, post-hoc processing offers a way around this limitation.

Advantages of Labvanced’s Real Time Eye Tracking Processing

While real-time processing has to work with limited resources, it brings some strong advantages, such as:

  • Real privacy: Performing the calculations on the user’s device (instead of on the company’s remote servers) means there is real privacy. We never see any face data because it never leaves your device.
  • Sustainability: Video analysis data can be handled more easily locally. By contrast, sending data to a remote company server increases costs, taking more time and money to process. It is therefore more sustainable and economical to run the analysis locally with finite resources than remotely on servers.

Key Features of Labvanced’s Online Eye Tracking


  • Privacy / client-side calculation: As explained previously, we don’t send any data or recordings to our servers. All processing happens locally on the client’s device, allowing for real privacy.

  • Virtual Chin Rest: During eye tracking experiments, a face mesh is created over the user’s face. Before an experiment begins, the participant must align the mesh that appears over their face with a target mesh. This creates a virtual chin rest that ensures the participant stays within an acceptable zone. When the participant moves away from the chin rest, the task is interrupted and the participant is asked to realign with the virtual chin rest. Try it out yourself here.

  • Recalibration: Our eye tracking technology checks and recalibrates according to your ever-changing environment. This happens automatically, but it is customizable. By default, recalibration occurs in the background after 7 trials. The system recalibrates itself after reassessing influencing factors like luminance changes.

  • Performance check: We also offer a performance check, letting you rest assured that the data from your participants meets your standards. With web-based eye tracking, the speed and performance of the client’s computer can be quantified, and the researcher can choose which participants’ data to include in the study. This way, when a participant has a very slow computer (which would compromise the integrity of their results), you will know and can omit their data from your data set.

  • Infant-friendly eye tracking: When creating an experiment with eye tracking in Labvanced, you can use the infant-friendly eye tracking preset settings if you are working with toddlers. We have successfully deployed studies from many universities focusing on this special population and have adapted the technology according to input and needs coming directly from researchers. Read more about infant-friendly eye tracking.

Learn more about the virtual chin rest in this video:

Sample Data and Metrics from Eye Tracking

Eye tracking produces many different metrics which are then used for data analysis and drawing conclusions about your experimental question.

  1. Gaze location
  2. Areas of Interest (AOI)
  3. Ratio
  4. Revisits
  5. Dwell time / time spent
  6. Time to first fixation
  7. First fixation duration
  8. Average fixation duration
  9. Fixation sequences

The majority of these metrics are generated during the data analysis stage, after data collection. The basis of all of them is the gaze measurement: if you know the gaze position at any given time, you can calculate the remaining metrics.
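
As an illustration of how downstream metrics fall out of the raw gaze stream, the sketch below computes dwell time and an approximate time to first look for a rectangular area of interest. The sample format and the AOI rectangle are illustrative assumptions, not Labvanced’s export schema.

```ts
// Sketch: deriving AOI metrics from raw gaze samples (x, y, timestamp).
// The sample format and AOI rectangle are illustrative assumptions.
interface GazeSample { x: number; y: number; t: number } // t in ms (e.g., UNIX time)
interface AOI { left: number; top: number; right: number; bottom: number }

const inAOI = (s: GazeSample, a: AOI) =>
  s.x >= a.left && s.x <= a.right && s.y >= a.top && s.y <= a.bottom;

// Dwell time: total time the gaze stayed inside the AOI.
// Time to first look is approximated as the first sample inside the AOI.
function aoiMetrics(samples: GazeSample[], aoi: AOI) {
  let dwellMs = 0;
  let firstEntryMs: number | null = null;
  for (let i = 1; i < samples.length; i++) {
    const prev = samples[i - 1];
    if (inAOI(prev, aoi)) {
      dwellMs += samples[i].t - prev.t; // accumulate time spent inside
      if (firstEntryMs === null) firstEntryMs = prev.t - samples[0].t;
    }
  }
  return { dwellMs, timeToFirstLookMs: firstEntryMs };
}
```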

This video discusses the data from our eye tracking technology:

Collecting Eye Tracking Data from Labvanced

With a few clicks, you can set up your experiment to record the data you need by selecting from the available options and creating variables to store it.


Fig. 2: Setting variables in Labvanced’s eye tracking app to record experimental data about gaze.

Here, in Fig. 2, a variable is created to record the gaze X- and Y-coordinates, time stamps, and confidence levels.

A sample data set from Labvanced’s eye tracking app is shown below in Fig. 3.


Fig. 3: Time-series data view, with the last four columns showing the x-coordinate, y-coordinate, UNIX time, and confidence score.

The confidence level (column C) reflects how confident we are in each measurement and is affected by blinking. Measurements from wide-open eyes receive higher confidence than measurements from an eye that is half-open or in the process of blinking.
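
If you export such data for offline analysis, a common first step is to drop low-confidence samples (for example, those recorded around blinks). The row format and the 0.5 cutoff below are illustrative assumptions, not a Labvanced recommendation.

```ts
// Filter exported gaze rows by their confidence score before analysis.
// The 0.5 cutoff is an arbitrary example threshold, not a recommendation.
interface GazeRow { x: number; y: number; unixTime: number; confidence: number }

function dropLowConfidence(rows: GazeRow[], minConfidence = 0.5): GazeRow[] {
  return rows.filter((row) => row.confidence >= minConfidence);
}
```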

Practice setting up your own eye tracking study in Labvanced by following our step-by-step walkthrough.

Things You Can Do with Labvanced’s Eye Tracking Technology

When setting up your study in Labvanced, you can configure variables and events to do many things. For example, you can use:

  • Gaze as the input / response: During the experiment, you can use gaze to select one of two objects by looking at it for a predetermined amount of time (a conceptual dwell-selection sketch follows this list).
  • Gaze to control something: Instead of having the participant use keyboard inputs to control elements in the experiment, you can create a feedback response so that users can control or select objects in real time with their gaze.
  • Gaze to affect a property change: This lets you build gaze-contingent displays; a classic example is change blindness. In an experiment, you can track when a participant looks at the left side of the screen and, when this happens, change a property (like the color) of an object on the right-hand side.
  • Gaze to dynamically control the flow of an experiment: Eye movements can be used to determine how an experiment trial sequence progresses. For example, if a participant looks at red stimuli, then another trial will follow with red stimuli as opposed to one with blue stimuli.
  • Gaze broadcasting in social experiments: In multi-user experiments, gaze can be ‘broadcast’ from one person to another, so that a participant can see where their partner or opponent is looking while the two perform an experiment together.
  • Gaze to track reading tasks: Linguistic research uses this aspect of eye tracking to study reading and quantify how much time participants need to finish reading a passage.
  • Gaze for quality control in crowdsourced experiments: By activating eye tracking in crowdsourced experiments, a researcher can ensure that participants actually spent some time reading the task instructions.
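
To make the first of these ideas concrete, here is a conceptual sketch of dwell-time selection, i.e. treating gaze as the response once it has rested on an object long enough. In Labvanced this logic is configured through events in the study editor rather than hand-written code; the function, rectangle type, and 1000 ms threshold below are illustrative assumptions only.

```ts
// Conceptual sketch of dwell-time selection: an object counts as "chosen"
// once the gaze has rested on it for a fixed duration (here 1000 ms).
interface Rect { left: number; top: number; right: number; bottom: number }

const DWELL_MS = 1000; // illustrative threshold
let dwellStart: number | null = null;

// Call this for every gaze sample; returns true once the target is selected.
function updateDwellSelection(x: number, y: number, t: number, target: Rect): boolean {
  const onTarget =
    x >= target.left && x <= target.right && y >= target.top && y <= target.bottom;
  if (!onTarget) {
    dwellStart = null; // gaze left the object, reset the timer
    return false;
  }
  if (dwellStart === null) dwellStart = t; // gaze just entered the object
  return t - dwellStart >= DWELL_MS;       // selected after sustained looking
}
```

The same pattern generalizes to gaze-contingent property changes: instead of returning a selection, the event would trigger the property change once the dwell condition is met.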

Check out this video where we walk through creating an eye tracking study with special settings and features such as chat boxes, variable distribution, and feedback:

Labvanced Library Studies

The Labvanced library is full of study templates that can be imported and adjusted to include eye tracking. Below we provide a few example studies that demonstrate our eye tracking technology:

  • Target Distractor Task: A classic eye-tracking paradigm where participants must immediately look at the target (‘T’) among distractors (‘P’) to find it.
  • Infant-Friendly Eye Tracking: A task based on preferential looking which begins with infant-friendly calibration.
  • Free Viewing: Shows a variety of images and explores where the participant’s gaze falls on each image. Useful for experiments that aim to predict where a participant will look based on existing image features.
  • Gaze Feedback Test: Here you can see your own gaze; a red circle appears in real time at the spot where the system predicts you are looking.
  • Spatial Accuracy Test: A standardized calibration test where participants fixate on different locations on a grid; the fixation for each point is recorded and the distance between the grid point and the predicted gaze point is compared (see the sketch after this list).
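
As a rough illustration of what such a test computes, the snippet below averages the Euclidean distance between each grid target and the gaze position predicted while the participant fixated it. The data structures are illustrative assumptions, not the template’s actual variables; converting the pixel error into degrees of visual angle additionally requires the screen size and viewing distance (see the conversion sketch near the top of this page).

```ts
// Sketch: spatial accuracy as the mean distance (in pixels) between each
// calibration target and the fixation predicted while it was looked at.
interface Point { x: number; y: number }

function meanAccuracyPx(targets: Point[], predictions: Point[]): number {
  const errors = targets.map((t, i) =>
    Math.hypot(t.x - predictions[i].x, t.y - predictions[i].y)
  );
  return errors.reduce((sum, e) => sum + e, 0) / errors.length;
}
```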

Again, try this step-by-step walkthrough and practice building a sample eye tracking study in our app!

Research Papers Using Labvanced's Webcam-based Eye Tracking

Linguistic identity as a modulator of gaze cueing of attention
Lorenzoni, A., Calignano, G., Dalmaso, M., & Navarrete, E. (2023).
Scientific Reports - Nature

Comparing Online Webcam- and Laboratory-Based Eye-Tracking for the Assessment of Infants’ Audio-Visual Synchrony Perception
Bánki, A., de Eccher, M., Falschlehner, L., Hoehl, S., & Markova, G. (2023).
Frontiers in Psychology

Finding Goldilocks Influencers: How Follower Count Drives Social Media Engagement
Wies, S., Bleier, A., & Edeling, A. (2022).
Journal of Marketing

Unconscious Frustration: Dynamically Assessing User Experience using Eye and Mouse Tracking
Stone, S. A., & Chapman, C. S. (2023).
Proceedings of the ACM on Human-Computer Interaction

Can Eye Movement Synchronicity Predict Test Performance With Unreliably-Sampled Data in an Online Learning Context?
Sauter, M., Hirzle, T., Wagner, T., Hummel, S., Rukzio, E., & Huckauf, A. (2022).
2022 Symposium on Eye Tracking Research and Applications

Dynamics of eye-hand coordination are flexibly preserved in eye-cursor coordination during an online, digital, object interaction task
Bertrand, J. K., & Chapman, C. S. (2023)
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems

Need More Info?

If you need more support or are curious about a particular feature, please reach out to us or look through our FAQ materials.

Sign up today and enable webcam-based eye tracking for your online psychology experiment.