How do we perceive our real-world environments?
What happens in our brains when we see our natural environment? What are the neural mechanisms? What are the important visual features that we use? How are the features organized to support naturalistic vision?
In our lab we approach these questions by combining detailed computational analysis of real-world scenes with neuroimaging techniques (fMRI, EEG) and behavioral experiments. We pioneered the decoding of scene categories from fMRI activity, characterized the neural code of scene categories, and pinpointed particular contour features as important for scene perception. Currently, we are working on the perceptual organization afforded by mid-level visual cues such as symmetry, parallelism, and contour integration. We are constantly exploring new avenues to better understand the perception of the complex world that we inhabit.
What do we like and why?
As we perceive the world around us, the constant evaluation of its aesthetic appeal appears to be an integral part of the perceptual experience. Why?
We are working on uncovering the mechanisms and motivations of perceptual aesthetics with a combination of computational analysis, behavioral testing, and neuroimaging. We have identified neural codes of architectural styles, pinpointed image features responsible for perceived threat and safety in scenes, and related computational measures of perceived complexity to neural activity. Recent investigations of the image features that support aesthetic appreciation have allowed us to manipulate the aesthetic experience by selectively altering those features.
What can computer vision learn from human vision?
Our lab is constantly trying to find ways to apply the lessons that we learn from research in human vision to computer vision problems. Recent work includes:
Different types of symmetry measures improve scene categorization similarly for people and for deep neural networks (Rezanejad et al., CVPR 2019).
Stochastic completion fields, inspired by the good-continuation grouping rule, allow for filling in missing regions and structurally guided in-painting (Rezanejad et al., BMVC 2021).
We enrich the representation of 3D objects with width and medial symmetry based on spectral coordinates, allowing us to match poses, segment object parts, and categorize 3D objects (Rezanejad et al., CVPR 2022).
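To give a flavor of what a "symmetry measure" for an image can look like: the CVPR 2019 work scores local symmetry along extracted scene contours, which is considerably more sophisticated than the toy sketch below. This minimal, hypothetical example just scores global left-right mirror symmetry as the correlation between an image and its horizontal reflection:

```python
import numpy as np

def mirror_symmetry_score(image: np.ndarray) -> float:
    """Toy mirror-symmetry measure: Pearson correlation between an
    image and its left-right reflection. A perfectly mirror-symmetric
    image scores 1.0; an asymmetric one scores lower."""
    flipped = image[:, ::-1]                 # reflect about the vertical axis
    a = image.astype(float).ravel()
    b = flipped.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

# A mirror-symmetric pattern scores exactly 1.0
symmetric = np.array([[1, 2, 1],
                      [3, 4, 3]])
print(mirror_symmetry_score(symmetric))  # → 1.0
```

Contour-based measures like those in the papers above instead compute symmetry locally between pairs of contour fragments, which makes them robust to the global asymmetries present in natural scenes.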
How do we perceive the world across sensory modalities?
Imagine taking a walk on the beach. Your sensory experience would include the sparkle of the sun’s reflection on the water, the sound of the crashing waves, and the smell of ocean air. Even though the brain has clearly delineated processing channels for all of these sensory modalities, we still have the integral concept of the beach, which is not tied to particular sensory modalities.
What are the neural systems underlying this convergence, which allows our brain to represent the world beyond sensory modalities?
We address this question by investigating the neural codes of scenes that transcend sensory modalities: vision, hearing, and touch/temperature.
Where do we attend, and what do we remember?
There is too much going on in the world for us to perceive and remember it all. How do we select what information we attend to and what information we store in memory? Tracking eye movements of participants watching and memorizing scenes has given us some insights into the stimulus properties as well as internal processes that drive the allocation of attention and eye movements. Eye movements are also instrumental in gating visual information for storage in memory. We have found, for instance, that eye movements play different roles during the encoding and the retrieval of memories for scenes.
What else we're up to ...
We love to work with and learn from our colleagues in other fields. Such collaborations are fun and intellectually stimulating. In recent years, we have collaborated with colleagues in Developmental Psychology, Industrial Engineering, Computer Science, Urban Planning, Orthodontics, and Oceanography. Do you have a proposal for working with us? Please contact us!