Visual Attention and Eye Tracking
Eye tracking is one of the main tools for studying cognitive processes like visual attention. By quantifying eye movements, researchers can determine where a participant's attention is directed within the visual field while they perform a task. Eye tracking software like Labvanced's quantifies these movements for academic research across many fields, such as linguistics and developmental psychology.
Before considering eye tracking itself, let's quickly review visual attention. Visual attention is a complex cognitive phenomenon with many overlapping facets that researchers continue to study in hopes of shedding light on the topic.
Types of Visual Attention
Visual attention comprises three main subtypes:
- Spatial attention
- Feature-based attention
- Object-based attention
When using eye tracking technology and designing a psychology experiment, it’s crucial to define which type of visual attention you are looking into (Carrasco, 2011).
Purpose of Visual Attention
Attention has many different purposes, such as:
- Feature binding
- Stimulus selection / data reduction
- Stimulus enhancement
Together, these abilities help the visual system function, ultimately allowing us to perceive and understand our direct environment and surroundings (Evans et al., 2011).
With eye tracking software, researchers can get a closer look at these particular attention-related functions under various psychological contexts and across different populations.
Eye Tracking Technology Metrics Quantify Attention
One of the best ways to quantify visual attention is through eye movement tracking. By measuring where the eyes are looking, researchers acquire concrete measurements of where in the visual field participants are attending.
Eye tracking software like Labvanced's provides numerical data about where the gaze is located (x/y screen coordinates plus confidence values), from which further metrics, such as fixations and revisits, can be computed.
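To make the step from raw gaze coordinates to fixations concrete, the sketch below shows a minimal dispersion-threshold (I-DT) fixation detector. The sample format, field names, and threshold values are illustrative assumptions, not Labvanced's actual data format or algorithm:

```python
def detect_fixations(samples, max_dispersion=25.0, min_duration=0.1):
    """Group time-ordered gaze samples into fixations (I-DT sketch).

    samples: list of (t, x, y) tuples -- seconds and screen pixels (assumed format).
    A fixation is a run of samples whose dispersion (max - min on each
    axis, summed) stays below max_dispersion pixels for at least
    min_duration seconds.
    """
    fixations = []
    start = 0
    while start < len(samples):
        end = start
        # Grow the window while dispersion stays under the threshold.
        while end + 1 < len(samples):
            candidate = samples[start:end + 2]
            xs = [s[1] for s in candidate]
            ys = [s[2] for s in candidate]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break
            end += 1
        window = samples[start:end + 1]
        duration = window[-1][0] - window[0][0]
        if duration >= min_duration:
            xs = [s[1] for s in window]
            ys = [s[2] for s in window]
            fixations.append({
                "start": window[0][0],
                "duration": duration,
                "x": sum(xs) / len(xs),  # fixation centroid
                "y": sum(ys) / len(ys),
            })
            start = end + 1
        else:
            start += 1
    return fixations
```

Running this over a recording yields a list of fixation centroids with onsets and durations, which is the typical input for downstream analyses such as revisit counts or area-of-interest statistics.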
Contexts for Studying Visual Attention
Many everyday situations and processes are tightly intertwined with visual attention.
Consider the following activities that depend on visual attention, along with the cognitive outcomes associated with each:
- Playing video games: One study used eye tracking to examine the relationship between attention, task-driven viewing, and object judgements during video viewing (Wang et al., 2019).
- Reading: Reading and reading acquisition are complex processes directly related to fluency. One study shows that visual attention can modulate reading acquisition (Valdois, Roulin, & Bosse, 2019).
- Buying behavior: One study shows that visual attention to buying information differs significantly between impulsive and non-impulsive buyers (Khachatryan et al., 2018).
Psychology and cognitive science experiments can be designed to capture these contexts, and countless other situations that rely on visual attention, by applying eye tracking technology as a research method and analyzing the relationships between attention, performance, and eye movements.
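One common way to relate gaze to task performance in such experiments is to count how often the gaze returns to predefined areas of interest (AOIs), e.g. a product image versus its price tag in a buying study. The sketch below assumes fixation centroids as (x, y) pairs and AOIs as named rectangles; the names and data format are illustrative, not any specific eye tracker's API:

```python
def count_revisits(fixations, aois):
    """Count revisits to each area of interest (AOI).

    fixations: time-ordered list of (x, y) fixation centroids.
    aois: dict mapping AOI name -> (left, top, right, bottom) rectangle.
    Returns a dict of AOI name -> number of revisits, i.e. re-entries
    after the first visit.
    """
    entries = {name: 0 for name in aois}
    current = None  # AOI the gaze is currently in, if any
    for x, y in fixations:
        hit = None
        for name, (left, top, right, bottom) in aois.items():
            if left <= x <= right and top <= y <= bottom:
                hit = name
                break
        # Count an entry only when the gaze moves into a new AOI.
        if hit is not None and hit != current:
            entries[hit] += 1
        current = hit
    return {name: max(0, n - 1) for name, n in entries.items()}
```

Revisit counts like these can then be compared across conditions or participant groups, for example between impulsive and non-impulsive buyers, alongside reaction times and accuracy.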
Eye tracking software like Labvanced’s helps quantify visual attention in psychology experiments. Since eye movement is so intimately related to attention, any cognitive science experiment that is interested in this domain would benefit from an additional layer of data recording from eye tracking technology.
References
- Carrasco, M. (2011). Visual attention: The past 25 years. Vision Research, 51(13), 1484-1525.
- Evans, K. K., Horowitz, T. S., Howe, P., Pedersini, R., Reijnen, E., Pinto, Y., ... & Wolfe, J. M. (2011). Visual attention. Wiley Interdisciplinary Reviews: Cognitive Science, 2(5), 503-514.
- Khachatryan, H., Rihn, A., Behe, B., Hall, C., Campbell, B., Dennis, J., & Yue, C. (2018). Visual attention, buying impulsiveness, and consumer behavior. Marketing Letters, 29(1), 23-35.
- Valdois, S., Roulin, J. L., & Bosse, M. L. (2019). Visual attention modulates reading acquisition. Vision Research, 165, 152-161.
- Wang, W., Song, H., Zhao, S., Shen, J., Zhao, S., Hoi, S. C., & Ling, H. (2019). Learning unsupervised video object segmentation through visual attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 3064-3074).