Labvanced can be used to study visual attention with its online webcam eye tracking software.

Visual Attention and Eye Tracking

Eye tracking is one of the main tools for studying cognitive processes like visual attention. By quantifying eye movements, researchers can determine where a participant's attention is directed within the visual field while they perform certain tasks. Eye tracking software like Labvanced's helps quantify these movements for academic purposes across many fields, such as linguistics research and developmental psychology.

Visual Attention

Before considering eye tracking, let's have a quick review of visual attention. Visual attention is a complex cognitive phenomenon with many overlapping areas that researchers study in hopes of shedding light on the topic.

Types of Visual Attention

Visual attention comprises three main subtypes:

  1. Spatial attention
  2. Feature-based attention
  3. Object-based attention

When using eye tracking technology and designing a psychology experiment, it's crucial to define which type of visual attention you are investigating (Carrasco, 2011).

Purpose of Visual Attention

Attention has many different purposes, such as:

  1. Feature binding
  2. Recognition
  3. Stimulus selection / data reduction
  4. Stimulus enhancement

Together, these abilities help the visual system function, ultimately allowing us to perceive and understand our immediate surroundings (Evans et al., 2011).

With eye tracking software, researchers can get a closer look at these particular attention-related functions under various psychological contexts and across different populations.

Eye Tracking Technology Metrics Quantify Attention

One of the best ways to quantify visual attention is through eye movement tracking. By measuring where the eyes are looking, researchers acquire concrete measurements of the region of the visual plane that participants are attending to.

Eye tracking software like Labvanced's provides numerical data about where the gaze is located (coordinates in the x/y plane along with confidence levels), from which other metrics, such as fixations and revisits, can subsequently be computed.
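To make this concrete, here is a minimal Python sketch of how fixations might be derived from raw gaze points using the classic dispersion-based (I-DT) approach. The sample format and the threshold values are illustrative assumptions, not Labvanced's actual export format or API:

```python
# A minimal sketch of dispersion-based fixation detection (I-DT), run on
# exported gaze samples. The (x, y, t) sample format and the thresholds
# below are illustrative assumptions, not any tracker's actual output.

def detect_fixations(samples, max_dispersion=35, min_duration=100):
    """Group consecutive gaze samples into fixations.

    samples: list of (x, y, t) tuples, with t in milliseconds.
    max_dispersion: max (x-range + y-range) in pixels for one fixation.
    min_duration: minimum fixation length in milliseconds.
    Returns a list of (centroid_x, centroid_y, start_t, end_t) fixations.
    """
    fixations = []
    start = 0
    while start < len(samples):
        end = start + 1
        # Grow the window while the samples stay within the dispersion limit.
        while end < len(samples):
            xs = [s[0] for s in samples[start:end + 1]]
            ys = [s[1] for s in samples[start:end + 1]]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break
            end += 1
        duration = samples[end - 1][2] - samples[start][2]
        if duration >= min_duration:
            # Long enough: record the centroid as one fixation.
            xs = [s[0] for s in samples[start:end]]
            ys = [s[1] for s in samples[start:end]]
            fixations.append((sum(xs) / len(xs), sum(ys) / len(ys),
                              samples[start][2], samples[end - 1][2]))
            start = end
        else:
            start += 1
    return fixations
```

The dispersion and duration thresholds here are typical starting points from the eye tracking literature; in practice they are tuned to the sampling rate and noise level of the recording setup.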

Contexts for Studying Visual Attention

So many everyday situations and processes are tightly intertwined with visual attention.

Consider the following activities that require visual attention and the cognitive outcomes related to them:

  1. Reading, where visual attention modulates how reading skills are acquired (Valdois et al., 2019)
  2. Shopping, where visual attention relates to buying impulsiveness and consumer behavior (Khachatryan et al., 2018)
  3. Watching dynamic scenes, where attention determines which objects are tracked (Wang et al., 2019)

Psychology and cognitive science experiments can be designed to capture these contexts (and countless other situations that rely on visual attention), apply eye tracking technology as a research method, and analyze the relationships between attention, performance, and eye movement.
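One common way to link eye movements to such contexts is to aggregate dwell time per area of interest (AOI) on the screen. The following sketch builds on the hypothetical fixation format above; the AOI names and coordinates are invented for illustration:

```python
# A sketch of relating attention to on-screen regions via areas of interest
# (AOIs). The fixation format matches the detect_fixations sketch above;
# AOI names and coordinates are hypothetical examples.

def dwell_times(fixations, aois):
    """Sum fixation durations that fall inside each rectangular AOI.

    fixations: list of (x, y, start_t, end_t) tuples, times in ms.
    aois: dict mapping an AOI name to a (left, top, right, bottom) box.
    Returns a dict of total dwell time (ms) per AOI.
    """
    totals = {name: 0 for name in aois}
    for x, y, start_t, end_t in fixations:
        for name, (left, top, right, bottom) in aois.items():
            if left <= x <= right and top <= y <= bottom:
                totals[name] += end_t - start_t
    return totals

# Example: compare dwell time on a headline versus a product image.
aois = {"headline": (0, 0, 800, 120), "product_image": (200, 150, 600, 500)}
# totals = dwell_times(detect_fixations(samples), aois)
```

Dwell-time comparisons like this let researchers test, for example, whether participants attend longer to one stimulus region than another across conditions.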

Conclusion

Eye tracking software like Labvanced's helps quantify visual attention in psychology experiments. Since eye movement is so intimately related to attention, any cognitive science experiment interested in this domain would benefit from the additional layer of data that eye tracking technology provides.

Book a demo to discuss implementing eye tracking in your research today.

References

Carrasco, M. (2011). Visual attention: The past 25 years. Vision Research, 51(13), 1484-1525.

Evans, K. K., Horowitz, T. S., Howe, P., Pedersini, R., Reijnen, E., Pinto, Y., ... & Wolfe, J. M. (2011). Visual attention. Wiley Interdisciplinary Reviews: Cognitive Science, 2(5), 503-514.

Khachatryan, H., Rihn, A., Behe, B., Hall, C., Campbell, B., Dennis, J., & Yue, C. (2018). Visual attention, buying impulsiveness, and consumer behavior. Marketing Letters, 29(1), 23-35.

Valdois, S., Roulin, J. L., & Bosse, M. L. (2019). Visual attention modulates reading acquisition. Vision Research, 165, 152-161.

Wang, W., Song, H., Zhao, S., Shen, J., Zhao, S., Hoi, S. C., & Ling, H. (2019). Learning unsupervised video object segmentation through visual attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 3064-3074).