Representation of the settings tab in Labvanced.

Settings Tab

This documentation provides a comprehensive overview of the study-level settings and features available in Labvanced. It explains how to configure core study information, participant controls, device and software requirements, subject management, physiological measurements, and optional advanced features, helping researchers set up, customize, and manage experiments effectively from start to finish.

Table of Contents

  • Main
  • Controls
  • Subjects
  • Physiology
  • Special Features

Main

The Main section in the Settings tab is the central hub for configuring a study’s core aspects, including its identity, presentation, technical setup, and accessibility. It combines Settings for parameters like ID, names, languages, device and offline preferences, multi-user or longitudinal design, eye-tracking, and JavaScript execution, with Defaults for consistent fonts, buttons, and frame sizes. It also manages Shared Editing Access for secure collaboration and the Open Experiment Library for publishing, discoverability, and controlled sharing of study designs. Overall, it ensures a study is properly configured, visually consistent, collaboratively managed, and optionally publicly accessible.

Main

  • Settings
  • Defaults
  • Shared Editing Access
  • Open Experiment Library

Settings

Path: Settings tab → top Main drop-down → Settings

The Settings section serves as the primary configuration hub for a study, allowing researchers to define its overall identity, functionality, and accessibility. It establishes how the study is identified and organized internally, as well as the visual and linguistic environment in which it runs. It also specifies the basic technical and methodological parameters, including support for eye-tracking, multi-session or multi-user designs, device optimization, offline compatibility, and secure JavaScript execution.


Study Id

A unique identifier for a study. It is also displayed on the 'My Studies' page. Always provide this ID when you reach out to our support.

Path: Settings tab → top Main drop-down → Settings section → Study Id


Study Name (Internal)

The internal study name is the name you see on the 'My Studies' page to organize your studies.

Path: Settings tab → top Main drop-down → Settings section → Study Name (Internal)


Study Name (Public)

The public study name is the name that will be displayed to participants while the study is loading.

Path: Settings tab → top Main drop-down → Settings section → Study Name (Public)


Loading Image

The loading image is an optional image that will be displayed to participants while the study is loading.

Path: Settings tab → top Main drop-down → Settings section → Loading Image


Background Color

The background color is the color of all the unoccupied parts of your screen.

Note: For Eye-Tracking studies it is important to make sure that the study background color matches the background color of your eye-tracking task!

Path: Settings tab → top Main drop-down → Settings section → Background Color


Time Zone (UTC)

This determines the timezone for your study. It is defined as an offset from UTC.

Path: Settings tab → top Main drop-down → Settings section → Time Zone (UTC)


Main Language

This setting indicates the main language of your study shown to participants. If multiple languages are enabled, participants will see a language selection option at the start, or you can predefine the displayed language via a URL parameter.

Path: Settings tab → top Main drop-down → Settings section → Main Language


Static / System Text Language

This setting indicates the language for system messages / static strings shown to the participant. You can customize these strings in the Texts & Translate tab.

Path: Settings tab → top Main drop-down → Settings section → Static / System Text Language


Multiple Languages

Indicates whether the study is defined for multiple languages or not.

Path: Settings tab → top Main drop-down → Settings section → Multiple Languages


Eye-Tracking Study

Indicates whether the study has activated eye-tracking or not.

Path: Settings tab → top Main drop-down → Settings section → Eye-Tracking Study


Longitudinal Design

Indicates whether the study is defined as longitudinal (multiple sessions per participant) or not.

Path: Settings tab → top Main drop-down → Settings section → Longitudinal Design


Multi User Study

Indicates whether the study is defined as a multi-user / real time multi-player experiment or not.

Path: Settings tab → top Main drop-down → Settings section → Multi User Study


Device Preference

By changing this setting, other study settings will be adjusted accordingly. For example, if you select "Mobile Optimized", newly created frames will default to mobile size (portrait orientation).

Path: Settings tab → top Main drop-down → Settings section → Device Preference


Is Offline Compatible

Indicates whether the study is compatible with the offline mode offered in the Labvanced Desktop and Phone Apps. Offline compatible studies can be downloaded and run without an internet connection.

Path: Settings tab → top Main drop-down → Settings section → Is Offline Compatible


Locked Offline Mode

Locking the study to offline compatible mode ensures that only features supported in offline mode are used. This is recommended for studies intended to be run without an internet connection.

Path: Settings tab → top Main drop-down → Settings section → Locked Offline Mode


JavaScript Code Execution

You can allow JavaScript code execution in a native format or within a sandboxed environment for improved security. Learn More.

Path: Settings tab → top Main drop-down → Settings section → JavaScript Code Execution
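The difference between native and sandboxed execution can be illustrated with a toy sketch. This is not Labvanced's actual mechanism (a real sandbox, e.g. a sandboxed iframe or worker, isolates far more); it only shows the idea that restricted execution shadows capabilities a snippet should not reach:

```javascript
// Toy illustration only: "native" execution can reach everything in
// scope, while a restricted wrapper shadows selected globals.
// A production sandbox isolates far more than this sketch does.
function runNative(code) {
  return eval(code); // full access to the surrounding environment
}

function runRestricted(code) {
  // Shadow a few capabilities by passing them in as undefined.
  const fn = new Function(
    "window", "document", "fetch",
    `"use strict"; return (${code});`
  );
  return fn(undefined, undefined, undefined);
}
```

Both wrappers evaluate `1 + 2` to `3`, but inside `runRestricted` a reference such as `fetch` resolves to `undefined` rather than the real network API.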


Defaults

Path: Settings tab → top Main drop-down → Defaults

The Defaults section provides a way to standardize the visual and structural aspects of a study by setting baseline preferences for newly created elements. It allows researchers to define a default font size, button color, and button hover color for consistency across text and interactive elements, and to specify the default frame size and orientation to control how content is displayed. These defaults streamline study design, ensure a uniform look and feel, and reduce the need for repetitive manual adjustments when creating new elements.


Default Font Size (Px)

Set a preferred font size in pixels that will be applied to any new text elements you create in your study.

Path: Settings tab → top Main drop-down → Defaults section → Default Font Size (Px)


Default Button Color

Set a preferred button color that will be applied to any new button elements you create in your study.

Path: Settings tab → top Main drop-down → Defaults section → Default Button Color


Default Button Hover Color

Set a preferred button hover color that will be applied to any new button elements you create in your study.

Path: Settings tab → top Main drop-down → Defaults section → Default Button Hover Color


Default Frame Size

This setting controls the default dimensions for your study's frames (landscape or portrait). If you choose custom, you can also change the display mode to control how rendering and displaying the frames is handled in general. For more information see the documentation.

Path: Settings tab → top Main drop-down → Defaults section → Default Frame Size


Shared Editing Access

Path: Settings tab → top Main drop-down → Shared Editing Access section

The Shared Editing Access section manages collaborative permissions for a study. It allows the study admin to invite trusted users to edit the study by entering their email or username, with the invitee required to accept access. It also provides a list of current users with edit rights, giving the admin control to revoke access at any time. This section ensures that only authorized collaborators can modify the study while maintaining oversight and security.


Invite User to Edit Your Study

Grant editing access to this study to other users by entering their email address or user name. They will be notified and have to accept the editing access request. Note, you should only allow editing access to users you trust. Only the admin of the study can invite new users to edit.

Path: Settings tab → top Main drop-down → Shared Editing Access section → Invite User to Edit Your Study


List of Users With Edit Rights

This section displays a list of users who have been granted edit access to your study. You can revoke access for any user by clicking the "Revoke Access" button. This list is only visible to the admin of the study.

Path: Settings tab → top Main drop-down → Shared Editing Access section → List of Users With Edit Rights


Open Experiment Library

Path: Settings tab → top Main drop-down → Open Experiment Library section

The Open Experiment Library section facilitates sharing, transparency, and collaboration by allowing researchers to make their studies publicly accessible. It provides a link to the study in the Labvanced Public Experiment Library, controls import rules for how others can use the study design, and optionally allows view-only access to recorded data and study setup. Required settings like publication status, study duration, affiliation, and keywords ensure the study is discoverable and properly categorized, while optional fields like description and related publications help communicate context and provide academic references. Overall, this section supports open science, enabling others to find, inspect, and build upon your research.


Study in Experiment Library

The Labvanced Public Experiment Library is a free and open science / open materials resource for everyone. It contains a collection of studies from both the Labvanced team and other researchers who have chosen to share their work with the research community.

Path: Settings tab → top Main drop-down → Open Experiment Library section → Study in Experiment Library


Study Link In Experiment Library

The URL to your study in the Labvanced Public Experiment Library. You can use this link to share your study with reviewers, colleagues, and everyone else in the community.

Path: Settings tab → top Main drop-down → Open Experiment Library section → Study Link in Experiment Library


Import Rule For Your Study

This setting allows you to control if and how other users can import your study design from the Labvanced Public Experiment Library. Sharing your work (before or after publishing) will increase transparency and allow others to build on your work.

Path: Settings tab → top Main drop-down → Open Experiment Library section → Import Rule For Your Study


Allow Everyone to Inspect Your Recorded Data and Study Setup (View Access Only)

This setting allows everyone to view your recorded data and study setup without making any changes. This is most useful for open science and transparency.

Path: Settings tab → top Main drop-down → Open Experiment Library section → Allow Everyone to Inspect Your Recorded Data and Study Setup (View Access Only)


Settings for Experiment Library:

If a study is listed in the Public Experiment Library, the following settings and study description fields also become relevant:


Study Is Published (Required)

A flag indicating whether the study is published and available for participant recruitment.

Path: Settings tab → top Main drop-down → Open Experiment Library section → Study Is Published (Required)


Study Duration (Required)

An estimate of how long it will take a participant to complete the entire study. This information will be shown in the Labvanced Experiment Library only.

Path: Settings tab → top Main drop-down → Open Experiment Library section → Study Duration (Required)


Your Affiliation (Required)

Refers to the academic or research institutions that you are formally connected with, such as a university, research center, or company. This information will be shown in the Labvanced Experiment Library only.

Path: Settings tab → top Main drop-down → Open Experiment Library section → Your Affiliation (Required)


Study Keywords (Required)

These are descriptive words or phrases that researchers can associate with their studies to make them easier to find and categorize within the Labvanced Experiment Library. You can add any custom keywords you like.

Path: Settings tab → top Main drop-down → Open Experiment Library section → Study Keywords (Required)


Description (Optional)

This is a text field where you can introduce and provide details about your study. This description is visible both in the Labvanced Experiment Library and during experiment loading.

Path: Settings tab → top Main drop-down → Open Experiment Library section → Description (Optional)


Related Publications (Optional)

This allows researchers to add academic citations to their studies when sharing them in the Public Experiment Library.

Path: Settings tab → top Main drop-down → Open Experiment Library section → Related Publications (Optional)


Controls

The Controls tab provides control over how a study starts, displays, and is accessed. Experiment Startup handles pre-loading files, consent forms, initial surveys, permissions (webcam, microphone, screen), and language selection. Screen settings control fullscreen mode, pausing on exit, refresh rates, orientations, dark mode, and screen calibration. Software & Devices sets which apps, browsers, and devices participants can use and can include a headphone check for audio verification.

Controls

  • Experiment Startup
  • Screen
  • Software & Devices

Experiment Startup

Path: Settings tab → top Controls drop-down → Experiment Startup

The Experiment Startup section in the Controls tab manages all settings that occur when a study begins, ensuring smooth participant experience and proper data collection. Overall, this section ensures the experiment starts correctly, participants provide necessary permissions, and preparatory data is collected efficiently.

It includes options for pre-loading media files for timely stimulus presentation, handling file loading errors, and presenting consent forms (Labvanced or custom) before the experiment starts. It also allows showing an initial survey for subject balancing, requesting webcam, microphone, or screen access, and offering language selection if multiple languages are enabled.


Pre-load All Experiment Files

This is the process of loading all necessary media files, such as images, audio, and videos, into the participant's browser before the study begins. It is best practice to ensure the smooth and timely presentation of stimuli. If your study has a very large number of media files, it is recommended to disable this option.

Path: Settings tab → top Controls drop-down → Experiment Startup section → Pre-load All Experiment Files


On File Loading Error

This setting determines what should happen if a resource file, such as an image or audio file, fails to load before the study begins. If you select "abort experiment", the experiment will be terminated and the participant will not be able to participate.

Path: Settings tab → top Controls drop-down → Experiment Startup section → On File Loading Error


Show Labvanced Consent Form

This option enables showing the Labvanced consent form before the experiment starts.

Path: Settings tab → top Controls drop-down → Experiment Startup section → Show Labvanced Consent Form


Show Custom Consent Form

This option enables showing a custom consent form before the experiment starts. Modify this custom consent via our Texts & Translate tab. Go to the "Start" section then edit the field "customConsent".

Path: Settings tab → top Controls drop-down → Experiment Startup section → Show Custom Consent Form


Shows Initial Survey

The initial survey is a special predefined survey that can optionally be shown before the experiment starts. If enabled, the data from the initial survey is used for subject balancing (e.g., to assign subjects to groups based on gender, age, etc.). If a more customized group assignment based on pre-survey answers is required, first assign all subjects to a single group and then change their group using the "change group action". For more information, please read our documentation.

Path: Settings tab → top Controls drop-down → Experiment Startup section → Shows Initial Survey


Asks for Webcam Access

This flag indicates whether the experiment will request access to the participant's webcam on experiment startup. The participant will have to consent to share their webcam.

Path: Settings tab → top Controls drop-down → Experiment Startup section → Asks for Webcam Access


Asks for Microphone Access

This flag indicates whether the experiment will request access to the participant's microphone on experiment startup. The participant will have to consent to share their microphone.

Path: Settings tab → top Controls drop-down → Experiment Startup section → Asks for Microphone Access


Asks for Screen Access

This flag indicates whether the experiment will request to record the participant's screen on experiment startup. The participant will have to consent to share their screen.

Path: Settings tab → top Controls drop-down → Experiment Startup section → Asks for Screen Access


Shows Experiment Language Selection

If the study is available in multiple languages, this flag determines whether participants are prompted to select their preferred language at the start of the experiment.

Path: Settings tab → top Controls drop-down → Experiment Startup section → Shows Experiment Language Selection


Screen

Path: Settings tab → top Controls drop-down → Screen section

The Screen section in the Controls tab manages display settings to ensure participants view the study as intended and maintain focus throughout. It includes options to start the study in fullscreen, pause if fullscreen or browser tab is exited, and enforce a minimum monitor refresh rate for precise visual timing. It also allows specifying allowed screen orientations, preventing dark mode, and calibrating screen size and distance to guarantee accurate stimulus presentation. Additionally, researchers can define a minimum screen size to ensure proper visibility and functionality across devices. Overall, this section ensures visual fidelity, participant focus, and accurate stimulus delivery.


Start Study in Fullscreen

This option ensures that the study begins in fullscreen mode, providing an immersive experience for participants by utilizing the entire screen space.

Path: Settings tab → top Controls drop-down → Screen section → Start Study in Fullscreen


Pause on Exit Fullscreen

This option pauses the study if the participant exits fullscreen mode, ensuring that participants stay focused and do not miss any important content or instructions.

Path: Settings tab → top Controls drop-down → Screen section → Pause on Exit Fullscreen


Pause on Exit Browser Tab

This option pauses the study if the participant navigates away from the browser tab, helping to maintain their focus and engagement with the study, and preventing attempts to complete the study while the participant is not present.

Path: Settings tab → top Controls drop-down → Screen section → Pause on Exit Browser Tab


Set Required Refresh Rate (Hz)

This is a setting that allows researchers to specify a minimum monitor refresh rate that a participant's device must have in order to run the study. This feature is important for experiments where precise timing of visual stimuli is essential (e.g. stimuli shown shorter than 100 ms).

Path: Settings tab → top Controls drop-down → Screen section → Set Required Refresh Rate (Hz)
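The reason a minimum refresh rate matters is that a display can only change on frame boundaries, so any requested stimulus duration is quantized to a whole number of frames. A minimal sketch of this arithmetic (the function names are ours, not part of Labvanced):

```javascript
// A display updates once per frame, so stimulus durations are
// quantized to multiples of the frame duration.
function frameDurationMs(refreshRateHz) {
  return 1000 / refreshRateHz;
}

// Nearest achievable presentation time for a requested duration,
// never shorter than one frame.
function achievableDurationMs(requestedMs, refreshRateHz) {
  const frame = frameDurationMs(refreshRateHz);
  const frames = Math.max(1, Math.round(requestedMs / frame));
  return frames * frame;
}
```

At 60 Hz a frame lasts about 16.67 ms, so a requested 40 ms stimulus actually lands on 2 frames ≈ 33.3 ms; on a 120 Hz monitor the same request is achievable to within about 4 ms. This is why studies with brief stimuli benefit from enforcing a minimum refresh rate.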


Allowed Screen Orientation

This setting allows you to specify the screen orientations that are permitted for participants to use during your study. This is particularly important for studies that are designed to be run on mobile devices like tablets and smartphones.

Path: Settings tab → top Controls drop-down → Screen section → Allowed Screen Orientation


Prevent Dark Mode Theme

When enabled, this stops participants from using a dark mode theme on their device while taking part in the study. This is particularly useful for preventing display issues, especially on mobile devices.

Path: Settings tab → top Controls drop-down → Screen section → Prevent Dark Mode Theme


Screen Size:

This section provides options for specifying screen calibration by measuring screen size and distance, as well as setting the minimum screen size to participate in the study, as described below:


Measure Screen Size (card calibration)

When enabled, the participant will be asked to calibrate their screen size using a standardized card (85.60 by 53.98 mm). This calibration can be used to ensure that visual stimuli are displayed at the correct size in mm or visual degree, which is crucial for experiments requiring precise visual measurements.

Path: Settings tab → top Controls drop-down → Screen section → Measure Screen Size (card calibration)


Default Screen Distance (cm)

This is a setting used to estimate the visual angle of stimuli presented on the screen. It represents the assumed distance, in centimeters, between the participant's eyes and their monitor.

Path: Settings tab → top Controls drop-down → Screen section → Default Screen Distance (cm)
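The geometry behind card calibration and screen distance can be sketched as follows. The participant scales an on-screen rectangle until it matches a standard card (85.60 mm wide), which yields pixels-per-mm; combined with the assumed viewing distance, any pixel size converts to visual degrees. The helper names below are ours, for illustration only:

```javascript
// Standard card width used for screen-size calibration.
const CARD_WIDTH_MM = 85.60;

// Pixels-per-mm from the on-screen width the participant matched
// to the physical card.
function pixelsPerMm(matchedCardWidthPx) {
  return matchedCardWidthPx / CARD_WIDTH_MM;
}

// Visual angle (degrees) subtended by a stimulus of a given physical
// size at the assumed viewing distance (both in cm).
function visualAngleDeg(sizeCm, distanceCm) {
  return 2 * Math.atan(sizeCm / (2 * distanceCm)) * (180 / Math.PI);
}

// Convert a stimulus size in pixels to visual degrees using both
// calibration values.
function pxToVisualDeg(sizePx, pxPerMm, distanceCm) {
  const sizeCm = sizePx / pxPerMm / 10; // px → mm → cm
  return visualAngleDeg(sizeCm, distanceCm);
}
```

At the conventional 57 cm viewing distance, 1 cm on screen subtends almost exactly 1 degree of visual angle, which is why that distance is a common default in vision research.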


Set Minimum Screen Size

This setting allows researchers to define a minimum screen size requirement for devices in various units to ensure optimal display and functionality of the study.

  • In Pixels: Specify the minimum screen size in pixels.
  • In Millimeters: Specify the minimum screen size in millimeters (requires screen size measurement).
  • In Visual Degrees: Specify the minimum screen size in visual degrees (requires screen size measurement and screen distance estimation).

Path: Settings tab → top Controls drop-down → Screen section → Set Minimum Screen Size


Software & Devices

Path: Settings tab → top Controls drop-down → Software & Devices section

The Software & Devices section in the Controls tab manages the platforms and applications through which participants can access the study. It allows researchers to enable participation via the Labvanced Phone App for mobile data collection, the Labvanced Desktop App for local online or offline data collection and physiological integrations, and web browsers, specifying which browsers are permitted, such as Chrome, Firefox, Edge, Opera, Safari, or any other. This section ensures that the study is accessible on appropriate devices and software while maintaining control over the testing environment.


Allowed Software:

Labvanced Phone App

The Labvanced Mobile App is an application for Android devices (iOS support is planned) that enables data collection within a native phone application. It also includes a mode to collect data offline. Learn more.

Path: Settings tab → top Controls drop-down → Software & Devices section → Allowed Software


Labvanced Desktop App

The Labvanced Desktop App is a downloadable application (for Windows, Mac, and Linux) that brings the functionality of the Labvanced platform to a researcher's local computer, allowing for both online and offline data collection. Furthermore, it integrates with Lab Streaming Layer (LSL) to enable EEG and other physiological data collection. Learn more.

Path: Settings tab → top Controls drop-down → Software & Devices section → Allowed Software


Browsers

Allowing participation via web browsers enables online / remote participation. Individual browsers can be enabled or disabled.

Google Chrome

Allows participants to use Google Chrome.

Mozilla Firefox

Allows participants to use Mozilla Firefox.

Microsoft Edge

Allows participants to use Microsoft Edge.

Opera

Allows participants to use Opera.

Safari

Allows participants to use Safari.

Any Other

Allows participants to use any other browser.

Path: Settings tab → top Controls drop-down → Software & Devices section → Browsers


Allowed Participation Devices:

Android Mobile

Allows participants to use an Android mobile phone.

Android Tablet

Allows participants to use an Android tablet.

iPhone

Allows participants to use an iPhone.

iPad

Allows participants to use an iPad.

Windows PC

Allows participants to use a Windows PC.

Mac

Allows participants to use a Mac.

Linux PC

Allows participants to use a Linux PC.

Any Other Device

Allows participants to use any other devices.

Path: Settings tab → top Controls drop-down → Software & Devices section → Allowed Participation Devices:


Headphone Check

To test whether participants wear headphones, you can include this or this block in your study.

Path: Settings tab → top Controls drop-down → Software & Devices section → Headphone Check


Subjects

The Subjects section of the Settings tab in Labvanced centralizes control over participant management and experimental group balance. It enables configuring repeated participation, including longitudinal study designs and preventing re-entry based on completion status, using unique identifiers. It also supports structured assignment of participants to multiple groups via automatic or manual balancing rules, while options like timeout-based subject discarding and reassignment of participant numbers help maintain evenly distributed and valid datasets across experimental conditions. Overall, it ensures precise control over who participates, when, and how data integrity and group balance are maintained.

Note: For participation settings relating to crowdsourcing and custom link creation, refer to the Launch & Participate tab.

Subjects

  • Repeated Participation
  • Subject Balancing

Repeated Participation

The Repeated Participation section in the Subjects tab controls whether participants can take part in the same study multiple times.

Path: Settings tab → top Subjects drop-down → Repeated Participation section



Longitudinal Design

Longitudinal design means that participants can take part in the same study repeatedly at different times. For this, at least one group has to include more than one session.

Path: Settings tab → top Subjects drop-down → Repeated Participation section → Longitudinal Design


Prevent Subject Re-Participation

This controls whether participants can take part in the same study multiple times. To prevent subjects from redoing the study, Labvanced needs a unique subject identifier in the URL. You can use either the "subject_code", "token", or "Prolific_PID" parameter. Read More.

Path: Settings tab → top Subjects drop-down → Repeated Participation section → Prevent Subject Re-Participation
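Since re-participation detection relies on a unique subject identifier in the URL, recruitment links need that identifier appended as a query parameter. A minimal sketch of building such a link (the base URL below is hypothetical; the three parameter names come from this setting's description):

```javascript
// Append a unique subject identifier to a study link so repeated
// participation can be detected. Base URL here is a placeholder.
function buildParticipationUrl(baseUrl, paramName, subjectId) {
  const allowed = ["subject_code", "token", "Prolific_PID"];
  if (!allowed.includes(paramName)) {
    throw new Error(`Unsupported identifier parameter: ${paramName}`);
  }
  const url = new URL(baseUrl);
  url.searchParams.set(paramName, subjectId);
  return url.toString();
}
```

For example, `buildParticipationUrl("https://example.com/study?id=123", "subject_code", "S001")` yields a link whose `subject_code=S001` parameter lets the server recognize a returning subject.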

‣ Prevent When Completed

This option prevents a subject from participating in the same study again once the study is completed.

Path: Settings tab → top Subjects drop-down → Repeated Participation section → Prevent When Completed

‣ Prevent When Not Completed

This option prevents a subject from participating again if they started the study but did not complete it, i.e., they have only one chance to complete the study. You should enable this option if it is important that subjects have not seen the stimuli or content of your study before. Note that this can cause complaints on crowdsourcing platforms when subjects are unable to finish the study due to technical reasons or errors in the study design; hence, you should also fairly compensate subjects who did not complete the study.

Path: Settings tab → top Subjects drop-down → Repeated Participation section → Prevent When Not Completed


Subject Balancing

The Subject Balancing section provides settings to ensure that experimental groups have an even distribution of participants. Multi Group Design indicates that participants will be assigned to different groups, while Timeout Based Subject Discarding flags participants who take too long, helping to maintain balanced datasets. Reassigning the subject numbers of timed-out or discarded participants to new ones further improves balance across groups and experimental conditions.

Path: Settings tab → top Subjects drop-down → Subject Balancing section


Multi Group Design

Multi Group Design indicates whether your study contains different groups (e.g., control and experimental groups) and allows you to assign participants to these groups. Learn More.

Path: Settings tab → top Subjects drop-down → Multi Group Design section

Note: Actions can be used to move participants across Groups.

Under the Study Design tab, if the Groups section contains more than one group, the Multi Group Design setting here automatically updates to YES. As a result, the following two settings appear, as described in greater detail below:

Subject Group Selection is Done

This setting specifies how subjects should be assigned to groups.

Note: For studies that run in offline mode (i.e. via the Labvanced Desktop App), the only option available is By Participant (Manual), as there is no online communication with the server to perform server-based balancing.

‣ Automatic (Server-based)

When using automatic (server-based) group selection, the server will try to balance subjects equally across groups. Learn more about between-subject balancing.

‣ By Participant (Manual)

This option allows the participant to select which group they will start the study in.

Path: Settings tab → top Subjects drop-down → Multi Group Design is equal to YES → Subject Group Selection is Done

Subject Balancing Rule (Which Subjects are Counted for Balancing)

By default, the server always assigns a new subject to the group with the lowest number of counted subjects. This setting determines which subjects are counted for each group, and hence influences which group a new subject will be assigned to. The first option counts all subjects; the second counts completed and recently started (likely still active) subjects; the last counts only completed subjects. In most scenarios with concurrent participation from multiple subjects, the second option is the best choice and leads to the most balanced dataset. Learn More about between-subject balancing.

Path: Settings tab → top Subjects drop-down → Multi Group Design is equal to YES → Subject Balancing Rule (Which Subjects are Counted for Balancing)

‣ All subjects who started (includes incomplete datasets)

Indicates that all subjects should be counted and their values used for balancing. This includes incomplete datasets.

Path: Settings tab → top Subjects drop-down → Multi Group Design is equal to YES → Subject Balancing Rule (Which Subjects are Counted for Balancing) → All subjects who started (includes incomplete datasets)

‣ Subjects who completed the study or started recently (not timed out)

This option counts completed subjects as well as recently started subjects (likely still active and not timed out). In most scenarios with concurrent participation from multiple subjects, this option is the best choice and leads to the most balanced dataset.

Path: Settings tab → top Subjects drop-down → Multi Group Design is equal to YES → Subject Balancing Rule (Which Subjects are Counted for Balancing) → Subjects who completed the study or started recently (not timed out)

‣ Only subjects who completed the study

This option counts only subjects who successfully completed the study as part of the balancing count.

Path: Settings tab → top Subjects drop-down → Multi Group Design is equal to YES → Subject Balancing Rule (Which Subjects are Counted for Balancing) → Only subjects who completed the study
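Conceptually, server-based assignment with any of the rules above reduces to picking the group with the fewest counted subjects. A minimal illustrative sketch (not Labvanced's actual server code):

```python
def assign_group(counted):
    """Pick the group for a new subject: the group with the fewest
    subjects counted under the chosen balancing rule."""
    # Ties are broken alphabetically here; the real tie-breaking
    # behavior is an assumption.
    return min(counted, key=lambda g: (counted[g], g))

assign_group({"control": 12, "experimental": 9})  # -> "experimental"
```

The choice of balancing rule only changes the numbers fed into `counted`, not the assignment logic itself.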


Timeout Based Subject Discarding

When enabled this will flag a subject as timed out if they take too long to complete the experiment. This is important to balance completed datasets / subjects across groups. The data of timed out subjects will still be available in the data export.

Path: Settings tab → top Subjects drop-down → Timeout Based Subject Discarding section

When activated, the following options are available:

‣ Timeout (minutes)

Specifies the time until a subject is considered timed out. This should be 30-50% above the average participation time (i.e. you should be relatively certain that the subject will not complete the study).

Path: Settings tab → top Subjects drop-down → Timeout Based Subject Discarding section → Timeout (minutes)
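The 30-50% rule of thumb above is easy to compute; a small sketch (the 40% default margin is an assumption that splits the recommended range):

```python
def suggested_timeout(avg_minutes, margin=0.4):
    """Suggest a timeout 30-50% above the average participation
    time, per the guideline above. margin=0.4 is the midpoint."""
    return round(avg_minutes * (1 + margin))

suggested_timeout(30)  # -> 42 (minutes)
```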

‣ Replace Subject Numbers of Timed Out and Discarded Subjects

If enabled, the subject numbers of timed out and manually discarded subjects will be assigned to new participants. Enabling this option will further improve subject balancing if you use the "Subject_Nr" or "Subject_Nr_Per_Group" variables, or between subject factors inside tasks. Learn More.

Path: Settings tab → top Subjects drop-down → Timeout Based Subject Discarding section → Replace Subject Numbers of Timed Out and Discarded Subjects


Physiology

The Physiology settings control how physiological data is collected: device selection, toolbox versions, and initialization and calibration options for eye tracking, as well as activating head tracking, heart rate detection, and emotion detection.

Physiology

  • Common Settings
  • Eye-Tracking
  • Head-Tracking
  • Heart Rate Detection
  • Emotion Detection

Common Settings

The options under Common Settings of the Physiology section in the Settings tab contain basic settings which pertain to all physiological signal detection, including which toolbox version should be used for the study and whether participants are allowed to select their preferred camera device if there are multiple.

Path: Settings tab → top Physiology drop-down → Common Settings section


Physiology Toolbox Version

Indicates the version of the Labvanced Physiology Toolbox that is used in the study. (Formerly called eye-tracking version).

Path: Settings tab → top Physiology drop-down → Common Settings section → Physiology Toolbox Version


Allow Choosing Camera If Multiple

Allow participants to select their preferred camera device.

Path: Settings tab → top Physiology drop-down → Common Settings section → Allow Choosing Camera If Multiple


Eye-Tracking

This feature allows researchers to enable webcam-based eye tracking in their experiment. Labvanced's webcam-based eye-tracking technology is leading in accuracy and ease of use and has been validated in multiple studies and publications. Learn More.

Path: Settings tab → top Physiology drop-down → Eye-Tracking section

Virtual Chinrest

This setting controls when the virtual chinrest is active/checked.

‣ Enable (check during trials)

‣ Enable (check between trials)

‣ Enable (check between trials) and show ignore button

‣ Disable

Path: Settings tab → top Physiology drop-down → Eye-Tracking section → Virtual Chinrest

Minimum Facemesh Speed

This setting sets the minimum facemesh processing speed. It acts as a threshold to ensure that devices that are too slow are excluded from the study.

  • Very Low (0.5Hz)
  • Low (2.5Hz)
  • Medium Low (5Hz)
  • Medium (7.5Hz)
  • Medium High (10Hz)
  • High (12.5Hz)
  • Very High (15Hz)

Path: Settings tab → top Physiology drop-down → Eye-Tracking section → Minimum Facemesh Speed

Confirm Face Mask Manually

When enabled, subjects need to manually confirm that the facemesh is correctly placed on their face. This can be useful to get participant-based confirmation that the facemesh is working correctly.

Path: Settings tab → top Physiology drop-down → Eye-Tracking section → Confirm Face Mask Manually

Share Data with Labvanced

When enabled, eye-tracking data will be shared with Labvanced to improve our eye-tracking.

Path: Settings tab → top Physiology drop-down → Eye-Tracking section → Share Data with Labvanced

Calibration Settings:

The settings below are dedicated to how calibration should behave and run during the study:

Infant Friendly Mode

If toggled on, the eye-tracking calibration settings below will automatically be adjusted for infants. Learn More.

Path: Settings tab → top Physiology drop-down → Eye-Tracking section → Infant Friendly Mode

Calibration Length

This setting controls the number of points in the calibration and hence the total length of the calibration. Longer calibration leads to higher accuracy / less error. When calibration is done well, you can on average expect the following accuracy values:

  • 130 points: accuracy of ~1.5 visual degrees
  • 55 points: accuracy of ~2.0 visual degrees
  • 15 points: accuracy of ~2.7 visual degrees

Read our documentation to learn more.

The following calibration length options are available:

  • 175 points, 12 poses (~7 minutes)
  • 130 points, 9 poses (~5 minutes)
  • 55 points, 4 poses (~2 minutes)
  • 15 points, 1 pose (~30 seconds)
  • Very long experimental / debug (15 minutes)
  • Debug

Path: Settings tab → top Physiology drop-down → Eye-Tracking section → Calibration Length

Set Maximum Calibration Error

This enables a maximum calibration error and hence can be used as a quality control measure, ensuring accurate eye tracking data.

When the checkbox is ticked, the following options appear:

‣ Max Calibration Error (%)

This sets the maximum calibration error allowed (in percentage of screen diagonal). Lowering this value can further improve the quality of the eye tracking data, but more people will fail the calibration.

Path: Settings tab → top Physiology drop-down → Eye-Tracking section → Set Maximum Calibration Error is checked → Max Calibration Error (%)
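Because the threshold is expressed as a percentage of the screen diagonal, the same setting allows different pixel errors on different screens. A small helper for intuition (not a Labvanced API):

```python
import math

def max_error_px(error_pct, width_px, height_px):
    """Translate a Max Calibration Error given as % of screen
    diagonal into pixels for a given screen resolution."""
    return error_pct / 100 * math.hypot(width_px, height_px)

max_error_px(5, 1920, 1080)  # ~110 px on a Full HD screen
```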

‣ Max Nr Calibration Attempts

This controls how often the calibration is allowed to be retried before the study fails completely.

Path: Settings tab → top Physiology drop-down → Eye-Tracking section → Set Maximum Calibration Error is checked → Max Nr Calibration Attempts

Calibration Image Type

You can choose between dots and animal icons for the calibration image.

Path: Settings tab → top Physiology drop-down → Eye-Tracking section → Calibration Image Type

Play Calibration Sounds

Choose whether to play calibration sounds. When the checkbox is ticked, a volume / sound slider will appear to adjust the volume.

Path: Settings tab → top Physiology drop-down → Eye-Tracking section → Play Calibration Sounds

Show Grid

Control if during the calibration the grid is shown.

Path: Settings tab → top Physiology drop-down → Eye-Tracking section → Show Grid

Allow Calibration Reuse

This allows the participant to reuse the calibration from a previous session. Note: this can be very useful to enable while developing the experiment.

Path: Settings tab → top Physiology drop-down → Eye-Tracking section → Allow Calibration Reuse


Head-Tracking

This feature enables head and face tracking using the participant's webcam, allowing researchers to monitor and analyze head movements and get detailed facemesh data during the study. Learn More.

Path: Settings tab → top Physiology drop-down → Head-Tracking section


Heart Rate Detection

This feature enables heart rate detection from webcam video using remote photoplethysmography (rPPG), allowing researchers to monitor participants' heart rate without physical sensors. The implementation is based on the temporal-spatial state space duality principle suggested by Wang et al. (2025). For more information: Read the whitepaper.

Path: Settings tab → top Physiology drop-down → Heart Rate Detection section

‣ Initialize RPPG on Study Start

rPPG works by calculating weighted averages over time and can therefore take 10 or more seconds to initialize. Activating this setting helps ensure the rPPG measures are ready before your main task starts (not needed if you have already enabled rPPG in your first task).

Path: Settings tab → top Physiology drop-down → Heart Rate Detection section → Initialize RPPG on Study Start

‣ Wait for heartrate confidence

When enabled, the study will wait until the heart rate detection reaches a specified confidence level before proceeding. The study will also be paused if the heart rate confidence falls below the specified threshold during the experiment. This is useful for ensuring that the heart rate data is reliable.

Set the Heartrate Confidence Threshold (as a number between 0 and 1) that the heart rate detection must reach before the study proceeds.

Path: Settings tab → top Physiology drop-down → Heart Rate Detection section → Wait for heartrate confidence
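The pause/resume behavior described above can be pictured as a gate on a stream of confidence values. An illustrative sketch of that logic (the 0.8 default threshold is an assumption; set your own value in the UI):

```python
def gate_on_confidence(samples, threshold=0.8):
    """Illustrative version of the setting above: the study runs
    only while the rPPG confidence (a value between 0 and 1) is
    at or above the configured threshold."""
    return ["running" if c >= threshold else "paused" for c in samples]

gate_on_confidence([0.9, 0.6, 0.85])  # -> ['running', 'paused', 'running']
```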


Emotion Detection

This feature enables real-time emotion detection from webcam video using AI-based facial expression analysis, allowing researchers to monitor participants' emotional states during the study. The implementation is based on this publication.

Path: Settings tab → top Physiology drop-down → Emotion Detection section


Special Features

The Special Features settings give researchers control over advanced study capabilities, enabling audio and video recordings, real-time transcription, secure data handling via end-to-end encryption, and integration with AI, gamepads, or streaming protocols. They also support multi-user interactions, video conferencing, and external server connections via Webhooks or WebSockets, providing flexibility, reliability, and enhanced experimental interactivity.

Special Features

  • Audio Recordings
  • Audio Transcription (AI-based)
  • End-to-End Encryption
  • Gamepad Integration
  • Generative AI Integration
  • Lab Streaming Layer (LSL)
  • Multi User Study
  • Screen Recordings
  • Video Conferencing
  • Video Recordings
  • Webhook API
  • Websocket Connection

Audio Recordings

Enable audio recordings to capture participants' verbal responses or ambient sounds during the study. Learn More.

Path: Settings tab → top Special Features drop-down → Audio Recordings section

Allow Choosing Mic If Multiple

Check this box to allow participants to select their preferred microphone device for audio recordings.

Path: Settings tab → top Special Features drop-down → Audio Recordings section → Allow Choosing Mic If Multiple


Audio Transcription (AI-based)

This feature enables automatic transcription of audio recordings collected during the study using AI algorithms (Whisper AI). Learn More.

Path: Settings tab → top Special Features drop-down → Audio Transcription (AI-based) section

Transcription Model

Choose the model used for transcription. The tiny model is faster but less accurate, while the large model is more accurate but slower. The larger model will lead to longer experiment startup time as it has to be downloaded on the participant's device. Options available:

  • Speed Optimized (tiny model)
  • Accuracy Optimized (large model)

Path: Settings tab → top Special Features drop-down → Audio Transcription (AI-based) section → Transcription Model


End-to-End Encryption

This feature ensures that all binary files (e.g., video, audio) uploaded by participants are encrypted on their device before transmission, enhancing data security and privacy.

Path: Settings tab → top Special Features drop-down → End-to-End Encryption section

Encryption Key

When this option is enabled, clicking Set PGP Key will prompt you to paste your PGP public key.

Path: Settings tab → top Special Features drop-down → End-to-End Encryption section → Encryption Key


Gamepad Integration

This feature allows the use of gamepads or joysticks as input devices during the study. Learn More.

Path: Settings tab → top Special Features drop-down → Gamepad Integration section


Generative AI Integration

This feature allows the use of large language models (e.g., ChatGPT) and other generative AI models in your study for advanced AI capabilities. Learn More.

Path: Settings tab → top Special Features drop-down → Generative AI Integration section

API Key (for OpenAI)

Insert your API key for OpenAI. It will be stored securely on our servers.

Path: Settings tab → top Special Features drop-down → Generative AI Integration section → API Key (for OpenAI)

Key is Valid

If the API key provided above is valid upon pasting it, this status indicator will change from NO to YES.

Path: Settings tab → top Special Features drop-down → Generative AI Integration section → Key is Valid


Lab Streaming Layer (LSL)

This feature allows integration with the Lab Streaming Layer (LSL) protocol for real-time data streaming. Only supported on the desktop app. Learn More.

Path: Settings tab → top Special Features drop-down → Lab Streaming Layer (LSL) section

Output Stream

  • Name: Assign a name to the specific stream.
  • Type: Indicate the type of data being transferred, such as ‘Markers’ or ‘Gaze’.
  • Channel Count: The number of channels or distinct data types within the stream.
  • Nominal Sample Rate (Hz): The nominal sampling rate in hertz [Hz]; select 0 if the rate is irregular.
  • Channel Format: Indicate the channel format or data type of the device. Available options include:
    • String
    • Float32
    • Double64
    • Int32
    • Int16
    • Int8

Path: Settings tab → top Special Features drop-down → Lab Streaming Layer (LSL) section → Output Stream
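The Output Stream fields above can be collected and sanity-checked as a simple record. A hedged sketch: if you consume the stream in Python, these values correspond to the arguments of `pylsl.StreamInfo` (an assumption; pylsl itself is not used here):

```python
VALID_FORMATS = {"string", "float32", "double64", "int32", "int16", "int8"}

def output_stream_config(name, stream_type, channel_count,
                         nominal_srate, channel_format):
    """Bundle and validate the Output Stream fields described above."""
    if channel_format.lower() not in VALID_FORMATS:
        raise ValueError(f"unsupported channel format: {channel_format}")
    if nominal_srate < 0:
        raise ValueError("nominal sample rate must be >= 0 (0 = irregular)")
    return {"name": name, "type": stream_type,
            "channel_count": channel_count,
            "nominal_srate": nominal_srate,
            "channel_format": channel_format.lower()}

output_stream_config("LabvancedMarkers", "Markers", 1, 0, "String")
```

Here a marker stream uses a nominal rate of 0, since markers arrive irregularly.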

Input Stream

  • Prop: Property name
  • Value: Value

Path: Settings tab → top Special Features drop-down → Lab Streaming Layer (LSL) section → Input Stream


Multi User Study

A multi-user study is a type of online experiment that allows for real-time interaction between multiple participants. Learn More.

Path: Settings tab → top Special Features drop-down → Multi User Study section

If enabled, the following options appear:

Number of Participants

In a multi-user study, there is no fixed limit on the number of participants; this setting defines how many are required per session. When participants join, they are placed in a waiting room until the required number of participants for a session is met.

Path: Settings tab → top Special Features drop-down → Multi User Study section → Number of Participants

Maximum Parallel Sessions

This refers to the maximum number of simultaneous experimental sessions that can run at the same time. This setting acts as a safeguard to prevent the servers from becoming overloaded.

Path: Settings tab → top Special Features drop-down → Multi User Study section → Maximum Parallel Sessions

Action on User Leaving Study

This determines what happens to the remaining participants if one person leaves the study, for example, by closing their browser window. The following options are available:

  • Finish Study with Error
  • Finish Study Correctly
  • Custom / Redirect

Path: Settings tab → top Special Features drop-down → Multi User Study section → Action on User Leaving Study

Allow WebSocket Reconnection

This determines if participants can rejoin a study after a temporary disconnection or loss of internet connection.

Path: Settings tab → top Special Features drop-down → Multi User Study section → Allow WebSocket Reconnection

Check Internet Connection

If enabled, Labvanced will perform a connection and ping test before the experiment begins. Only participants who pass this connection test will be allowed to take part in the study.

Path: Settings tab → top Special Features drop-down → Multi User Study section → Check Internet Connection

If enabled, the following two settings appear:

‣ Maximum Allowed Ping

This defines the highest acceptable latency (ping) for a participant's internet connection, measured as the maximum across 10 measurements taken over 30 seconds.

Path: Settings tab → top Special Features drop-down → Multi User Study section → Check Internet Connection is enabled → Maximum Allowed Ping

‣ Average Ping Allowed

This sets the maximum average ping, in milliseconds, that a participant's internet connection can have (averaged over 10 measurements taken over 30 seconds).

Path: Settings tab → top Special Features drop-down → Multi User Study section → Check Internet Connection is enabled → Average Ping Allowed
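The two ping thresholds above combine into a single pass/fail decision over the 10 measurements. An illustrative version of that check:

```python
def passes_ping_check(pings_ms, max_allowed, avg_allowed):
    """Apply both thresholds described above to the 10 ping
    measurements (in ms) taken over 30 seconds."""
    worst = max(pings_ms)
    average = sum(pings_ms) / len(pings_ms)
    return worst <= max_allowed and average <= avg_allowed

passes_ping_check([40, 55, 48, 60, 45, 50, 52, 47, 49, 51],
                  max_allowed=100, avg_allowed=60)  # -> True
```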

Allow Inviting Friends in Lobby

This gives participants the ability to send an invitation link (via email) while they are waiting in the study's lobby.

Path: Settings tab → top Special Features drop-down → Multi User Study section → Allow Inviting Friends in Lobby


Screen Recordings

This feature enables screen recordings to capture participants' on-screen activities during the study. Learn More.

Path: Settings tab → top Special Features drop-down → Screen Recordings section


Video Conferencing

This feature enables video conferencing (video/call) capabilities within your study using a "Stream.IO" integration. Learn More.

Path: Settings tab → top Special Features drop-down → Video Conferencing section

Choose API Key

Specify the API key that should be used in order to enable Video Conferencing:

  • Use Labvanced API Key
  • Use Own API Key from StreamIO (see link above for instructions on how to set this up)

Path: Settings tab → top Special Features drop-down → Video Conferencing section → Choose API Key


Video Recordings

This feature enables webcam/video recordings to capture the participant's face and/or the visual surroundings. Learn More.

Path: Settings tab → top Special Features drop-down → Video Recordings section

Allow Choosing Camera If Multiple

Allow participants to select their preferred camera device for video recordings.

Path: Settings tab → top Special Features drop-down → Video Recordings section → Allow Choosing Camera If Multiple

Select Video Resolution

Select the desired resolution for video recordings. Higher resolutions result in larger file sizes. Also note that if eye tracking is enabled, choosing a high resolution will negatively impact the performance of the eye-tracking algorithm. Resolutions available:

  • 960 x 720 pixels
  • 1280 x 720 pixels
  • 1920 x 1080 pixels (Full HD)

Path: Settings tab → top Special Features drop-down → Video Recordings section → Select Video Resolution

Select Video Bitrate

Select the desired bitrate for video recordings. Higher bitrate will result in better quality but larger file sizes. Options available:

  • Default
  • High
  • Very High

Path: Settings tab → top Special Features drop-down → Video Recordings section → Select Video Bitrate

Use Mp4 When Possible

If the browser supports it, the video will be recorded in mp4 format. Otherwise, it will be recorded in webm format.

Path: Settings tab → top Special Features drop-down → Video Recordings section → Use Mp4 When Possible


Webhook API

Enable this option to send recorded study data to an external server of your choice via Webhook API. See API documentation.

Path: Settings tab → top Special Features drop-down → Webhook API section

When enabled, the following options appear:

IP Address

The IP address or URL of your backend server to receive the recorded data in real time.

  • e.g.: https://www.my-university-data-center.com

Path: Settings tab → top Special Features drop-down → Webhook API section → IP Address

Port

The port of your backend server to receive the recorded data in real time.

  • e.g.: 8082

Path: Settings tab → top Special Features drop-down → Webhook API section → Port

URL Path

The namespace / URL path of your backend server to receive the recorded data in real time.

  • e.g.: /labvanced/my-username

Path: Settings tab → top Special Features drop-down → Webhook API section → URL Path
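The three Webhook API fields above together describe one endpoint. A small helper showing how they combine into the full URL your server would listen on (illustrative, using the example values from the docs):

```python
def webhook_endpoint(address, port, url_path):
    """Combine the IP Address/URL, Port, and URL Path fields
    into the full webhook endpoint URL."""
    return f"{address.rstrip('/')}:{port}{url_path}"

webhook_endpoint("https://www.my-university-data-center.com",
                 8082, "/labvanced/my-username")
# -> "https://www.my-university-data-center.com:8082/labvanced/my-username"
```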


Websocket Connection

Enable connection to an external server or other recording devices connected to the same computer. Learn More.

Path: Settings tab → top Special Features drop-down → Websocket Connection section

When enabled, the following options appear:

WebSocket Address

Enter your server's IP address to connect to an external server, or 'ws://localhost' to connect to recording devices on the same computer.

Path: Settings tab → top Special Features drop-down → Websocket Connection section → WebSocket Address

Port

Enter your server's port (default: 8081).

Path: Settings tab → top Special Features drop-down → Websocket Connection section → Port
