1. Visuomotor learning: dual-task paradigms
Reaching and rapid serial visual presentation (RSVP) tasks

  • Encoding attentional states during visuomotor adaptation (Im et al., Journal of Vision, 2015)
  • Long-lasting attentional-context dependent visuomotor memory (Im et al., Journal of Experimental Psychology: Human Perception and Performance, 2016)
  • Paradoxical benefits of attentional distraction for visuomotor adaptation without awareness (Im & Song, under review)

2. Crowd emotion task

  • Ensemble coding of crowd emotion: Differential hemispheric and visual stream contributions (Im et al., Nature Human Behaviour, 2017)
  • Cross-cultural effects on ensemble coding of emotion in facial crowds (Im et al., Culture and Brain, 2017)

3a. Creating Magnocellular-biased images

3b. Creating Parvocellular-biased images

  • Sex-related differences in behavioral and amygdalar responses to compound facial threat cues (Im et al., Human Brain Mapping, 2018)
  • Observer’s anxiety facilitates magnocellular processing of clear facial threat cues, but impairs parvocellular processing of ambiguous facial threat cues (Im et al., Scientific Reports, 2017)
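The exact stimulus parameters are described in the papers above. As a rough illustration of the general technique, magnocellular-biased images are often approximated by low-spatial-frequency, low-luminance-contrast versions of a photograph, and parvocellular-biased images by high-spatial-frequency versions; note that published parvocellular-biased stimuli also rely on isoluminant chromatic contrast, which this grayscale sketch omits. Filter widths and contrast values below are illustrative, not the published settings:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def m_biased(img, sigma=6.0, contrast=0.3):
    """Low-spatial-frequency, low-luminance-contrast version of a
    grayscale image (a common approximation of an M-biased stimulus).
    img: 2-D array with values in [0, 1]."""
    low = gaussian_filter(img, sigma)           # keep only coarse structure
    return 0.5 + contrast * (low - low.mean())  # compress contrast around mid-gray

def p_biased(img, sigma=6.0):
    """High-spatial-frequency version of a grayscale image (a common
    approximation of an M-suppressed, P-biased stimulus): fine detail
    only, mid-gray elsewhere."""
    high = img - gaussian_filter(img, sigma)    # remove coarse structure
    return np.clip(0.5 + high, 0.0, 1.0)
```

The same Gaussian width is used in both functions so that the two versions partition the image's spatial-frequency content into complementary coarse and fine bands.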

4. Average size task

  • Ensemble statistics as a unit of selection (Im et al., Journal of Cognitive Psychology, 2015)
  • Mean size as a unit of visual working memory (Im & Chong, Perception, 2014)
  • The effects of sampling and internal noise on the representation of ensemble average size (Im & Halberda, Attention, Perception & Psychophysics, 2013)
  • Computation of mean size is based on perceived size (Im & Chong, Attention, Perception & Psychophysics, 2009)
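Sampling-plus-noise accounts of average-size judgments, like the one examined in the Im & Halberda (2013) paper above, model the observer as averaging a limited random subset of items, each perturbed by internal noise. A minimal toy simulation of that idea (the sample size and noise level are illustrative parameters, not fitted values from the paper):

```python
import numpy as np

def estimate_mean_size(sizes, n_sampled=4, noise_sd=0.1, rng=None):
    """Toy observer model for an average-size judgment: average a random
    subset of the item sizes, each corrupted by Gaussian internal noise."""
    rng = np.random.default_rng() if rng is None else rng
    n = min(n_sampled, len(sizes))
    sample = rng.choice(sizes, size=n, replace=False)   # limited sampling
    noisy = sample + rng.normal(0.0, noise_sd, size=n)  # internal noise
    return noisy.mean()
```

Running many simulated trials shows the signature behavior of such models: estimates are unbiased on average, and trial-to-trial variability shrinks as more items are sampled.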

5. Approximate number estimation task

  • Perceptual groups as a unit for rapid extraction of approximate number of elements in random dot arrays (Im et al., Vision Research, 2016)