10 Popular Linguistic Experiment Examples in Labvanced
Language and speech researchers use online experiment platforms like Labvanced because they make it possible to recruit participants and gather data quickly.
By running experiments in a virtual language lab, publishing studies online, and sharing them through the web, linguists and cognitive psychologists not only complete their research faster but also build their experiments quickly and without code.
Below we highlight 10 popular linguistic experiments that can be run in Labvanced for studying speech perception and language comprehension, each of which demonstrates a different capability or feature of the platform.
The Multimodal Stroop Effect Task is a classic task that challenges participants’ cognitive associations.
In the study, words like ‘blue’ or ‘green’ are shown one by one with a varying text color, only sometimes corresponding to what the written word indicates. This incongruence challenges the participant.
The study prompts the participant to focus on the text color and ignore the word's meaning. During the experiment, there are also auditory distractors: a spoken voice says one of the four featured colors.
In the training session, the participant practices focusing on the text color and clicking the corresponding button. The other two dimensions (spoken word and written text) are congruent and reflect the target color.
In the training example below, the correct response is 'F' because the text color is blue. The participant is also reinforced because the written word is 'blue' and the automatically played audio also says 'blue.'
In the experiment, things become more challenging as the three dimensions are incongruent.
In the example below, the correct response is ‘D’ because the color is red, but the written word says ‘yellow’ and the audio voice prompts ‘blue.’
Thus, the participant must focus and suppress competing cognitive associations in order to pick the correct response and override the written- and spoken-language cues.
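The trial logic described above can be sketched in a few lines. The key mapping and four-color set below are illustrative assumptions, not the study's actual configuration:

```python
# A minimal sketch of the three-dimensional Stroop trial logic.
# KEY_MAP is a hypothetical mapping, not taken from the actual study.
KEY_MAP = {"red": "D", "blue": "F", "green": "J", "yellow": "K"}

def make_trial(text_color, written_word, spoken_word):
    """Build one trial; the correct key depends only on the text color."""
    return {
        "text_color": text_color,
        "written_word": written_word,
        "spoken_word": spoken_word,
        "congruent": text_color == written_word == spoken_word,
        "correct_key": KEY_MAP[text_color],
    }

def check_response(trial, pressed_key):
    """True if the participant ignored the written and spoken cues correctly."""
    return pressed_key == trial["correct_key"]

# Training trial: all three dimensions agree.
training_trial = make_trial("blue", "blue", "blue")
# Experimental trial: all three dimensions conflict.
experimental_trial = make_trial("red", "yellow", "blue")
```

In the incongruent trial, only the text color ("red") determines the correct key, no matter what the written word or spoken voice suggests.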
Fun Fact: Did you know bilingualism is linked to the Stroop effect? A study with Spanish-English bilinguals found that second-language fluency predicts native-language Stroop effects (Suarez et al., 2014)!
This study, published by the UCLA Linguistics Department, tests how adult native speakers of American English finish sentences.
Participants listen to sentence fragments and then complete each fragment into a full sentence, with their spoken response recorded through the computer microphone.
The participants are prompted to answer with the first thing that comes to mind, without hesitation.
The general study progress is illustrated below:
- The participant tests the recording feature of Labvanced to ensure their recording works.
- The participant moves to the next screen and clicks ‘Play’ to hear the sentence fragment.
- Then, the participant is prompted to think of a way to complete the sentence, starting with the fragment they just heard.
- The participant clicks the record button and says the whole sentence out loud.
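The four steps above form a fixed trial sequence, which can be sketched as a minimal state machine. The step names are illustrative, not Labvanced's internal terminology:

```python
# A sketch of the sentence-completion trial flow as a fixed step sequence.
# Step names are illustrative labels for the stages described in the article.
STEPS = ["mic_check", "play_fragment", "think_of_completion", "record_sentence"]

def next_step(current):
    """Advance through the fixed trial sequence; the last step ends the trial."""
    i = STEPS.index(current)
    return STEPS[i + 1] if i + 1 < len(STEPS) else None
```

In a real study builder, each step would correspond to one screen or frame, with the transition triggered by the participant's click.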
The study aims to increase scientific knowledge about speech and human language. The researchers state that the gathered insights will have positive implications for several areas, including computer technology, language teaching, and speech pathology treatment.
In this speech and language experiment by the Max Planck Institute for Empirical Aesthetics in Frankfurt, the researchers set out to investigate how vocalizations are perceived.
The participants begin by filling out a simple questionnaire about themselves. Then, they are instructed to listen to sounds and vocalizations. After hearing the audio stimuli, the participants are asked to rate each sound on 2 scales.
This experiment demonstrates how to incorporate a questionnaire at the beginning of the study and then use audio to study human sound perception of vocalizations.
The Spanish Pronunciation Study is one of the many experiments by the University of Toronto published in Labvanced. The experiment is in Spanish, but can also be administered in Portuguese, and tests the participant’s comprehension and language capabilities through speaking and listening tasks.
In this study, the participant goes through information about the experimental procedure. Then, there are 2 short tasks to be completed, about 10 minutes each. The first task is about speaking and reading and the second task is about listening.
At the end, there is a questionnaire so the participant can provide basic information about themselves, as well as any relevant information about their language learning background.
Labvanced is used for many language learning and bilingual studies. Researchers can design their experiment in any language and choose to limit a study to specific speakers. They can share the study internationally so that speakers of different languages can participate from around the globe, or keep it local to examine language learning in a specific group, such as university students learning a second language.
In this study, the relationship between sound perception and feelings is assessed. The participants are prompted to listen to 21 human sounds from all over the world. After hearing each clip, the participant rates how the sound made them feel using 5-point Likert scales.
The experimental screen opens with instructions for the experiment. Towards the end of the explanation, there is a sound volume adjuster where the participant can calibrate the upcoming audio to a comfortable level:
After calibrating and adjusting the sound, the experiment begins.
The participant hears a sound that lasts for about 30 seconds:
Then, after the sound has played, the participant is prompted to indicate on a 5-point Likert scale to what extent certain emotions and feelings (like confidence, sadness, or alertness) were evoked by the audio:
This experiment is a great example of how to present audio recordings and then a questionnaire so the participant can provide a response to the sound, language, or vocalization they perceived.
This study is interested in auditory perception and how participants classify sounds based on 2 scales.
For each sound, the participant must rate how they perceived it: whether it sounds like a song or like speech, and whether it sounds natural or artificially produced.
The response is recorded on a continuous range using slider scales, also known as visual analogue scales (VASs). These scales are sometimes preferred over Likert scales because they record a continuous value as opposed to discrete values (Chyung et al., 2018).
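The difference between the two scale types can be shown with a short sketch: a slider yields a continuous value, while a Likert response bins the same judgment into a handful of categories. The 0-100 range and 5-point binning below are assumptions for illustration:

```python
# Illustration of why a visual analogue scale (VAS) preserves more information
# than a Likert scale: the slider returns a continuous value, while the Likert
# response is forced into one of a few discrete bins.
def vas_response(slider_position, left=0.0, right=100.0):
    """Map a slider position in [0, 1] onto a continuous rating."""
    return left + slider_position * (right - left)

def likert_response(slider_position, points=5):
    """Discretize the same position into a 1..points Likert category."""
    return min(int(slider_position * points) + 1, points)

# Two participants who feel slightly differently get distinct VAS ratings
# but land in the same Likert category.
a, b = vas_response(0.52), vas_response(0.58)
la, lb = likert_response(0.52), likert_response(0.58)
```

Here `a` and `b` differ, while `la` and `lb` are both 3: the discrete scale has discarded the difference between the two participants.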
Before the experiment begins, there is a sound calibration process that checks whether the participant is using speakers or headphones to play the audio. During this process, three tones are presented and the participant must pick which one was the quietest.
If you go through this calibration process using speakers, you will not pass: the answers will be wrong, indicating that headphones were not used, and a prompt will appear:
The Song or Speech study is a great example of not only how to calibrate sound and objectively ensure your participants are following instructions (like using headphones), but also to use continuous slider scales for recording responses.
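The screening logic described above boils down to counting correct "quietest tone" judgments across calibration trials. Here is a hedged sketch; the pass threshold is an assumption, not the study's actual criterion:

```python
# A sketch of the headphone-screening check: on each calibration trial, three
# tones are played and the participant picks the quietest. Over speakers these
# judgments become unreliable, so too few correct answers trigger the
# "please use headphones" prompt. min_correct is an assumed threshold.
def passes_screening(responses, quietest_positions, min_correct=5):
    """Compare per-trial chosen vs. actual quietest tone (positions 1-3)."""
    correct = sum(r == q for r, q in zip(responses, quietest_positions))
    return correct >= min_correct

# A participant on headphones answers most trials correctly and passes;
# near-chance answers from a speaker setup would fail the check.
headphones_ok = passes_screening([1, 3, 2, 2, 1, 3], [1, 3, 2, 2, 1, 3])
speakers_fail = passes_screening([1, 1, 1, 2, 3, 1], [2, 3, 2, 3, 1, 3])
```

The same pattern generalizes to any objective attention or equipment check: compare responses against known correct answers before the main experiment starts.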
The participants see a word, then a letter sequence. If the letter sequence forms a real English word, the participant presses 'Y' on the keyboard; if it does not mean anything, they press 'N'.
The design is simple and straightforward, but it demonstrates how to collect participant responses using button presses after presenting words visually in a particular sequence.
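The response rule can be sketched as follows; the word list here is a tiny stand-in for the study's actual stimuli:

```python
# A minimal sketch of the lexical decision response check: 'Y' for a real
# English word, 'N' for a nonword. REAL_WORDS is a placeholder word list.
REAL_WORDS = {"table", "house", "garden"}

def expected_key(letter_sequence):
    """Return the key the participant should press for this sequence."""
    return "Y" if letter_sequence.lower() in REAL_WORDS else "N"

def score_trial(letter_sequence, pressed_key):
    """1 for a correct button press, 0 otherwise."""
    return int(pressed_key == expected_key(letter_sequence))
```

In the actual study, accuracy scores like these would typically be analyzed alongside response times per font condition.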
The results of this study aim to suggest best practices for educational practitioners and businesses using online fonts, since reading text online has become commonplace. By establishing which fonts are associated with the highest language comprehension and user performance, the researchers are helping make online communication and learning more efficient.
Before starting the training session, the study also asks participants to provide their email address so that their responses can be linked to a previous section in Labvanced.
In the training session, the participant must record themselves reading the prompted passage out loud:
After the voice recording has been completed, a series of questions about the passage follow:
The Adult Reading Test captures several different types of measurements, from voice recordings to answers from questionnaires. It’s a great way to measure language comprehension and mastery and can be adapted to other languages and population groups.
The Semantic Learning for Toddlers study looks at how different speakers can influence semantic connections in children between the ages of 22 and 36 months who are monolingual (English-only) or bilingual (English + another language).
The study combines several different features that can be used in virtual language labs:
- Video presentation of speakers using target words
- Video recording of the participant
Through these features, the researchers can determine where on the screen a toddler is looking during each trial and where their attention is directed while learning new words in different conversational settings.
The child sees a video of two speakers, each teaching two new words. Then, in one type of trial, the child hears two words repeated (for about 20 seconds) by the same speaker. In the second type of trial, the child again hears two words, but one word per speaker.
With this set-up, the experiment investigates how semantic connections are formed between newly acquired words and whether the speakers who taught those words influence those connections in any way.
Together, these 10 linguistic experiments are great examples not only of what you can do in Labvanced, but also of how researchers from various universities are using online experiments to study speech, language, and perception and to record data and responses.
Chyung, S. Y., Swanson, I., Roberts, K., & Hankinson, A. (2018). Evidence‐based survey design: The use of continuous rating scales in surveys. Performance Improvement, 57(5), 38-48.
Suarez, P. A., Gollan, T. H., Heaton, R., Grant, I., Cherner, M., & HNRC Group. (2014). Second-language fluency predicts native language Stroop effects: Evidence from Spanish–English bilinguals. Journal of the International Neuropsychological Society, 20(3), 342-348.