Temporal & Spatial Precision

How precise is Labvanced? How does Labvanced ensure precise presentation of all stimuli?

Labvanced was built with a focus on ensuring the exact temporal presentation of all stimuli. We pursue a double pre-loading approach: before the experiment starts, we load all external content (images, videos, audio, etc.) into the browser cache. Then, for each trial we pre-load and pre-render the next trial, so that once the trial changes, all stimuli are presented instantaneously. As a result, on devices/computers with acceptable internet bandwidth and CPU/RAM, the temporal precision should be comparable to lab settings.
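As a rough illustration of the general idea (this is not Labvanced's actual code), a double pre-loading scheme in the browser could look like the following sketch; the asset URLs and trial markup are hypothetical placeholders.

```typescript
// Step 1: fetch all external assets up front so they land in the browser cache.
async function preloadAssets(urls: string[]): Promise<void> {
  await Promise.all(
    urls.map(async (url) => {
      const response = await fetch(url);
      // Reading the body forces the browser to download and cache the resource.
      await response.blob();
    })
  );
}

// Step 2: while the current trial runs, build the next trial's DOM off-screen
// so switching trials is only a visibility toggle, not a fresh render.
function prerenderNextTrial(container: HTMLElement, trialHtml: string): HTMLElement {
  const next = document.createElement("div");
  next.innerHTML = trialHtml;
  next.style.visibility = "hidden"; // already rendered, just not shown yet
  container.appendChild(next);
  return next;
}

function showTrial(current: HTMLElement, next: HTMLElement): void {
  current.remove();
  next.style.visibility = "visible"; // stimuli appear without a loading delay
}
```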

How can I be sure that timing measures/stimulus presentation are correct? What does Labvanced do to measure timing precision?

Labvanced runs separate controls of timing precision for each recording. More precisely, we run a test time measurement every 5 seconds during each recording. At the end, we calculate the mean and standard deviation of these values and provide them for each recording. The mean is always positive, because a device never fires before the desired time but occasionally fires a little after it (due to CPU usage, other lags, etc.). This is equivalent to the delay in presenting a stimulus on the screen. However, to our understanding, such a constant mean offset can be counteracted analytically post hoc.

On the other hand, a high standard deviation in the control time measurements suggests that the time measurements vary in quality and are therefore not really comparable. A high standard deviation is thus a good exclusion criterion for a subject. The median offset of all time measurements is about 20 ms and the median standard deviation is about 12 ms (across all subjects and experiments). Values much higher than that (2-3 times) indicate imprecise timing measurements. Furthermore, it is possible to determine timing precision on a custom basis (e.g., per trial, frame, or stimulus) using UNIX timestamps in the event system.
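As an illustration of how such offsets could be summarized and screened after export, here is a minimal TypeScript sketch. The offset values, the 2-3x thresholds relative to the medians quoted above, and the function names are assumptions made for the example, not part of Labvanced's export format.

```typescript
// Summarize the periodic control measurements (in ms) of one recording.
function timingSummary(offsetsMs: number[]): { mean: number; sd: number } {
  const mean = offsetsMs.reduce((a, b) => a + b, 0) / offsetsMs.length;
  const variance =
    offsetsMs.reduce((a, b) => a + (b - mean) ** 2, 0) / offsetsMs.length;
  return { mean, sd: Math.sqrt(variance) };
}

// Example exclusion rule: flag recordings whose mean offset or SD is roughly
// 3x the quoted medians (~20 ms offset, ~12 ms SD) as imprecise.
function isImprecise(summary: { mean: number; sd: number }): boolean {
  return summary.mean > 3 * 20 || summary.sd > 3 * 12;
}

// Custom per-stimulus check: store UNIX timestamps (ms) for the intended and
// actual onset of a stimulus, then compare them post hoc.
const intendedOnset = Date.now();
// ... stimulus is drawn ...
const actualOnset = Date.now();
const presentationDelayMs = actualOnset - intendedOnset;
```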

What are the options of spatial presentation modes? How precise can stimuli be presented spatially?

There are various modes in which stimuli can be presented spatially. The default is the zoom mode, which scales all content elements until either the vertical or horizontal limits of the screen are reached. Stimuli can also be fixed to pixels or to visual degrees. Fixing a stimulus/frame to visual degrees requires a pre-calibration before the start of the experiment, in which the physical size of the participant's computer/device screen is measured. Depending on the experimenter's choice of display mode, stimuli can be presented with very high (lab-comparable) spatial accuracy.
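To show why the visual-degrees mode needs a physical-size calibration, here is a sketch of the standard visual-angle-to-pixel conversion; the viewing distance, screen dimensions, and function name are hypothetical example values, not values Labvanced prescribes.

```typescript
// Convert a desired stimulus size in degrees of visual angle to pixels.
// Needs the screen's physical width (cm), its resolution (px), and the
// viewing distance (cm) - exactly the quantities a pre-calibration provides.
function degreesToPixels(
  degrees: number,
  viewingDistanceCm: number,
  screenWidthCm: number,
  screenWidthPx: number
): number {
  // Physical size on screen: s = 2 * d * tan(theta / 2)
  const sizeCm = 2 * viewingDistanceCm * Math.tan((degrees * Math.PI) / 360);
  const pixelsPerCm = screenWidthPx / screenWidthCm;
  return sizeCm * pixelsPerCm;
}

// Example: a 2-degree stimulus viewed at 60 cm on a 34.5 cm wide, 1920 px
// display is about 2.1 cm, i.e. roughly 117 px.
console.log(Math.round(degreesToPixels(2, 60, 34.5, 1920)));
```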
