Unification of sentence processing via ear and eye: An fMRI study

abstract

  • We present new fMRI evidence for the existence and neural architecture of an abstract, supramodal language system that can integrate linguistic inputs arising from different modalities, such that speech and print each activate a common code. Working with sentence materials, our aim was to find out where the putative supramodal system is located and how it responds to comprehension challenges. To probe these questions, we examined BOLD activity in experienced readers while they performed a semantic categorization task with matched written or spoken sentences that were either well-formed or contained anomalies of syntactic form or pragmatic content. On whole-brain scans, both anomalies increased net activity over non-anomalous baseline sentences, chiefly in left frontal and temporal regions of heteromodal cortex. The anomaly-sensitive sites correspond approximately to those that previous studies (Michael et al., 2001; Constable et al., 2004) found to be sensitive to other differences in sentence complexity (object-relative minus subject-relative). Regions of interest (ROIs) were defined by peak response to anomaly, averaging over modality conditions. Each anomaly-sensitive ROI showed the same pattern of response across sentence types in each modality. Voxel-by-voxel exploration over the whole brain, based on a cosine similarity measure of common function, confirmed the specificity of the supramodal zones.
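
  As an illustration of the cosine similarity measure mentioned in the abstract, the minimal sketch below compares, for each voxel, the response profile across sentence conditions under spoken presentation with the profile under written presentation; voxels whose profiles closely agree across ear and eye are candidate supramodal sites. The three-condition design, the similarity threshold, and all variable names are illustrative assumptions, not the authors' actual analysis pipeline.

      # Sketch: per-voxel cosine similarity between modality response profiles.
      # All data here are randomly generated placeholders for illustration.
      import numpy as np

      def cosine_similarity(a, b, eps=1e-12):
          """Cosine of the angle between two condition-response vectors."""
          return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

      # Hypothetical data: per-voxel mean BOLD responses to three sentence types
      # (well-formed, syntactic anomaly, pragmatic anomaly) in each modality.
      n_voxels, n_conditions = 10000, 3
      spoken = np.random.randn(n_voxels, n_conditions)   # auditory presentation
      written = np.random.randn(n_voxels, n_conditions)  # visual presentation

      # Voxels whose condition profiles agree across modalities (similarity
      # near 1) would be flagged as candidate supramodal sites.
      similarity = np.array([cosine_similarity(s, w)
                             for s, w in zip(spoken, written)])
      supramodal_candidates = np.where(similarity > 0.9)[0]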

authors

  • Braze, David
  • Mencl, W Einar
  • Tabor, Whitney
  • Pugh, Kenneth R
  • Constable, R Todd
  • Fulbright, Robert K
  • Magnuson, James S
  • Van Dyke, Julie
  • Shankweiler, Donald P

publication date

  • April 2011
