Send to OpenAI Action

Overview

The Send to OpenAI action allows you to send information, such as a string input value, to OpenAI. You can specify a Model Category so that the prompt is handled in the context of text, image, or audio generation.

NOTE: For this option to be available, you must first enter your OpenAI API key in the Settings tab.

The Send to OpenAI Action in the Labvanced action menu.

The following options will appear upon selecting this action:

Options for the Send to OpenAI action.

Based on the Model Category chosen, different fields become available.

Model Category

  • ChatGPT: Send text input to OpenAI to generate a text-based response.
  • Image Generation: Send text input to OpenAI to generate an image.
  • Generate Audio: Send text input to OpenAI to generate audio.

ChatGPT - Model Category

Here is a functional example of what this action looks like when all the necessary information is provided:

Example of the Send to OpenAI action being utilized for generating text

Here is a deeper explanation of the fields included under the Send to OpenAI action with ChatGPT selected as the Model Category:

'Send to OpenAI' Action Options - ChatGPT

  • Model Category: Specifies the AI model category for this action. Here, ChatGPT is selected for text-to-text scenarios. For image or audio generation, refer to the sections below.
  • Model Version: Specifies the ChatGPT version that should be called during the experiment. Available options include:
    • GPT-3.5-turbo
    • GPT-4
    • GPT-4o
    • GPT-4.1
    • GPT-5-nano
    • GPT-5-mini
    • GPT-5
  • Max Tokens: Limits the maximum length of the chat output. Since you pay per token, this is an effective way to control costs.
  • Temperature: Controls the randomness/creativity of the answers, from 0 (largely deterministic) to 2 (highly random and creative).
  • Chat History Dataframe: Links to a data frame variable with two columns: the first column denotes the 'role' and the second the 'chat message.' Values from the action are automatically appended to the data frame linked here (see the sketch after this list). The data frame can also be manipulated with data frame actions. For further reference, please check the docs from OpenAI.
  • Insert Message '+': Clicking this opens the variable dialog box, where you indicate which 'Variable' value is sent to OpenAI as well as the 'role' of the associated message:
    • system: high-level instructions that steer the model's behavior
    • user: messages coming from the participant
    • assistant: the model's own responses in the conversation
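For orientation, here is a minimal sketch, in Python with OpenAI's official openai package, of roughly the request this action issues. Labvanced makes this call for you, so you never write this code yourself; all variable names and values below are illustrative assumptions, not Labvanced internals.

```python
# A minimal sketch, assuming the openai Python package; Labvanced issues
# this request on your behalf, so the code is purely illustrative.
from openai import OpenAI

client = OpenAI(api_key="sk-...")  # the API key you entered in the Settings tab

# The two-column Chat History Dataframe (role, chat message) maps
# directly onto OpenAI's `messages` list.
chat_history = [
    ["system", "You are a friendly assistant in a psychology experiment."],
    ["user", "Hello! What should I do in this task?"],
]
messages = [{"role": role, "content": text} for role, text in chat_history]

response = client.chat.completions.create(
    model="gpt-4o",      # Model Version
    messages=messages,
    max_tokens=150,      # Max Tokens: caps output length (and cost)
    temperature=0.7,     # Temperature: 0 = deterministic, 2 = highly random
)
reply = response.choices[0].message.content

# The reply is appended to the data frame as a new 'assistant' row.
chat_history.append(["assistant", reply])
print(reply)
```

Note how the two data frame columns map directly onto the role/content fields of OpenAI's messages list, and how the assistant's reply is appended as a new row.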

NOTE: Also refer to this walkthrough, where we build a study step-by-step, integrating ChatGPT and utilizing this action.


Image Generation - Model Category

In the example below, the variable that stores the text written in an Input Object is set as the Image Prompt that is sent to OpenAI when this action fires:

Example of the Send to OpenAI action being utilized in the context of image generation.

Useful Demo: Check out this demo that makes use of image generation via the OpenAI Trigger and Action. The participant is asked to input a prompt, which is then used to generate an image.

Here is a deeper explanation of the fields included under the Send to OpenAI action with Image Generation selected as the Model Category:

'Send to OpenAI' Action Options - Image Generation

  • Model Category: Specifies the AI model category for this action. Here, Image Generation is selected for text-to-image scenarios. For text or audio generation, see the ChatGPT section above and the Generate Audio section below.
  • Model Version: Specifies the image model that should be called during the experiment. Available options include:
    • GPT-image-1
    • DALL-E-3
  • Image Prompt: Sets the prompt for the image to be generated. Popular approaches include setting a Constant Value, such as a string of written text, or linking a variable here, such as one holding an Input Object's text.
  • Image Quality: Indicates the quality of the image that will be generated from the text prompt linked above.
  • Image Ratio: Indicates the aspect ratio of the image that will be generated.
  • Image Style: When DALL-E-3 is selected as the Model Version, this option appears to specify the image style. Options include: natural and vivid.
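These fields correspond closely to the parameters of OpenAI's image generation endpoint. Here is a minimal sketch, assuming the openai Python package; the prompt, size, quality, and style values are illustrative, and Labvanced performs the call for you.

```python
# A minimal sketch, assuming the openai Python package; all values
# are illustrative, not Labvanced internals.
from openai import OpenAI

client = OpenAI(api_key="sk-...")

result = client.images.generate(
    model="dall-e-3",       # Model Version (alternatively "gpt-image-1")
    prompt="A watercolor lighthouse at dusk",  # Image Prompt
    size="1024x1024",       # corresponds to the Image Ratio setting
    quality="standard",     # Image Quality ("standard" or "hd" for DALL-E-3)
    style="vivid",          # Image Style: "natural" or "vivid" (DALL-E-3 only)
)
print(result.data[0].url)   # URL of the generated image
```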

Generate Audio - Model Category

In the example below, the variable that stores the text written in an Input Object is set as the Prompt that is sent to OpenAI when this action fires:

Example of the Send to OpenAI action being utilized in the context of audio generation.

Here is a deeper explanation of the fields included under the Send to OpenAI action with Generate Audio selected as the Model Category:

'Send to OpenAI' Action Options - Generate Audio

  • Model Category: Specifies the AI model category for this action. Here, Generate Audio is selected for text-to-audio scenarios. For text or image generation, see the sections above.
  • Model Version: Specifies the audio model that should be called during the experiment. Available options include:
    • GPT-4o-mini
  • Prompt: Sets the text from which the audio should be generated. Popular approaches include setting a Constant Value, such as a string of written text, or linking a variable here, such as one holding an Input Object's text.
  • Voice: Sets the voice that should be used for the generated audio. Available options include:
    • Alloy
    • Ash
    • Ballad
    • Coral
    • Echo
    • Onyx
    • Nova
    • Sage
    • Shimmer
    • Verse
  • Instructions: Type further instructions shaping the generated audio, such as 'Speaking in a neutral and calm voice...'
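These fields map onto OpenAI's text-to-speech endpoint. A minimal sketch follows, assuming the openai Python package and assuming the "GPT-4o-mini" menu entry corresponds to OpenAI's gpt-4o-mini-tts speech model; Labvanced issues the request for you, and all values are illustrative.

```python
# A minimal sketch, assuming the openai Python package and that the
# "GPT-4o-mini" menu entry maps to OpenAI's gpt-4o-mini-tts model
# (an assumption, not confirmed by the Labvanced docs).
from openai import OpenAI

client = OpenAI(api_key="sk-...")

speech = client.audio.speech.create(
    model="gpt-4o-mini-tts",
    voice="coral",                                         # Voice
    input="Welcome to the study. Press space to begin.",   # Prompt
    instructions="Speaking in a neutral and calm voice.",  # Instructions
)
speech.write_to_file("welcome.mp3")  # save the generated audio
```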

Useful Demo: Check out this demo that makes use of audio generation via the OpenAI Trigger and Action. The spoken text is generated via OpenAI and used to read a paragraph aloud to participants, who must then answer multiple-choice questions about it.

Important Notes

  • As OpenAI's offerings evolve rapidly, please check the docs from OpenAI for further clarification regarding chat models, and consider browsing the documentation for other model categories, such as text-to-audio.