
Meeting Schedule

Friday, May 7
9:00 am – 8:30 pm: Registration Open (Royal Foyer)
1:00 – 3:00 pm: Symposia Session 1 (Royal Ballrooms 1-3, 4-5 & 6-8)
3:00 – 3:30 pm: Coffee Break (Royal Foyer)
3:30 – 5:30 pm: Symposia Session 2 (Royal Ballrooms 1-3, 4-5 & 6-8)
5:30 – 7:30 pm: Opening Night Reception (Royal Foyer, Orchid Foyer, Sunset Deck, Vista Deck)
5:30 – 9:30 pm: Exhibits Open (Orchid Foyer)
6:30 – 9:30 pm: Evening Poster Session (Vista Ballroom, Orchid Ballroom)

Saturday, May 8
7:30 am – 6:45 pm: Registration Open (Royal Foyer)
7:45 – 8:15 am: Coffee (Royal Foyer, Orchid Foyer)
8:15 – 10:00 am: Talk Sessions (Royal Ballrooms 1-3 & 4-5)
8:30 am – 12:30 pm: Poster Sessions (Royal Ballroom 6-8, Orchid Ballroom, Vista Ballroom)
8:30 am – 6:45 pm: Exhibits Open (Orchid Foyer)
10:00 – 11:30 am: VSS Public Lecture (Renaissance Academy of Florida Gulf Coast University)
10:00 – 11:30 am: Family & Friends Get-Together (Mangrove Pool)
10:15 – 10:45 am: Coffee Break (Royal Foyer, Orchid Foyer)
11:00 am – 12:45 pm: Talk Sessions (Royal Ballrooms 1-3 & 4-5)
12:45 – 2:45 pm: Lunch Break (purchase a lunch at VSS Marketplace and head to the beach!*)
2:45 – 4:15 pm: Talk Sessions (Royal Ballrooms 1-3 & 4-5)
2:45 – 6:45 pm: Poster Sessions (Royal Ballroom 6-8, Orchid Ballroom, Vista Ballroom)
4:30 – 5:00 pm: Coffee Break (Royal Foyer, Orchid Foyer)
5:15 – 6:45 pm: Talk Sessions (Royal Ballrooms 1-3 & 4-5)
6:45 – 7:45 pm: Keynote Reception (Royal Foyer)
7:45 – 9:15 pm: Keynote Address and Awards Ceremony (Royal Ballroom 4-5)

Sunday, May 9
7:30 am – 6:45 pm: Registration Open (Royal Foyer)
7:45 – 8:15 am: Coffee (Royal Foyer, Orchid Foyer)
8:15 – 10:00 am: Talk Sessions (Royal Ballrooms 1-3 & 4-5)
8:30 am – 12:30 pm: Poster Sessions (Royal Ballroom 6-8, Orchid Ballroom, Vista Ballroom)
8:30 am – 6:45 pm: Exhibits Open (Orchid Foyer)
10:15 – 10:45 am: Coffee Break (Royal Foyer, Orchid Foyer)
11:00 am – 12:45 pm: Talk Sessions (Royal Ballrooms 1-3 & 4-5)
12:45 – 2:45 pm: Lunch Break (purchase a lunch at VSS Marketplace and head to the beach!*)
2:45 – 4:15 pm: Talk Sessions (Royal Ballrooms 1-3 & 4-5)
2:45 – 6:45 pm: Poster Sessions (Royal Ballroom 6-8, Orchid Ballroom, Vista Ballroom)
4:30 – 5:00 pm: Coffee Break (Royal Foyer, Orchid Foyer)
5:15 – 7:00 pm: Talk Sessions (Royal Ballrooms 1-3 & 4-5)
10:00 pm – 1:00 am: CVS-VVRC Social (Vista Ballroom & Sunset Deck)


VSS 2010 Abstracts

Schedule-at-a-Glance: Monday, May 10 – Wednesday, May 12
(The original grid does not survive text extraction; its recoverable contents are summarized here.) Each day runs from 7:00 am coffee through evening events, with the registration desk and exhibits open, morning and afternoon poster sessions, talk sessions, coffee breaks, and a lunch break. Talk session topics include: Visual Search; Binocular Vision: Models and Mechanisms; Perception and Action: Pointing, Reaching, and Grasping; Attention: Time; Object Recognition: Categories; Perceptual Organization: Grouping and Segmentation; Neural Mechanisms: Cortex; Memory: Encoding and Retrieval; Spatial Vision: Crowding and Mechanisms; Motion: Mechanisms; Attention: Object Attention and Object Tracking; Attention: Models and Mechanisms of Search; Perceptual Learning: Plasticity and Adaptation; Eye Movements: Updating; 3D Perception: Depth Cues and Spatial Layout; Perception and Action: Navigation and Mechanisms; Face Perception: Social Cognition. Special events include the Business Meeting, the 6th Annual Best Illusion of the Year Contest, Demo Night (dinner and demos), the Open House for Graduate Students and Postdocs, and the Club Vision Dance Party.


Member-Initiated Symposia

Symposium summaries are presented below. See the Abstracts book for the full text of each presentation. Preregistration is not necessary to attend a symposium, but rooms will fill up quickly, so plan to arrive early.

Schedule Overview

Friday, May 7, 1:00 – 3:00 pm
S1 Integrative mechanisms for 3D vision: combining psychophysics, computation and neuroscience, Royal Palm Ballroom 1-3
S2 New Methods for Delineating the Brain and Cognitive Mechanisms of Attention, Royal Palm Ballroom 4-5
S3 Nature vs. Nurture in Vision: Evidence from Typical and Atypical Development, Royal Palm Ballroom 6-8

Friday, May 7, 3:30 – 5:30 pm
S4 Representation in the Visual System by Summary Statistics, Royal Palm Ballroom 1-3
S5 Understanding the interplay between reward and attention, and its effects on visual perception and action, Royal Palm Ballroom 4-5
S6 Dissociations between top-down attention and visual awareness, Royal Palm Ballroom 6-8

S1 Integrative mechanisms for 3D vision: combining psychophysics, computation and neuroscience

Friday, May 7, 1:00 – 3:00 pm, Royal Palm Ballroom 1-3
Organizer: Andrew Glennerster (University of Reading)
Presenters: Roland W. Fleming (Max Planck Institute for Biological Cybernetics), James T. Todd (Department of Psychology, Ohio State University), Andrew Glennerster (University of Reading), Andrew E. Welchman (University of Birmingham), Guy A. Orban (K.U. Leuven), Peter Janssen (K.U. Leuven)

Symposium Summary

Estimating the three-dimensional (3D) structure of the world around us is a central component of our everyday behavior, supporting our decisions, actions and interactions. The problem faced by the brain is classically described in terms of the difficulty of inferring a 3D world from ("ambiguous") 2D retinal images. The computational challenge of inferring 3D depth from retinal samples requires sophisticated neural machinery that learns to exploit multiple sources of visual information that are diagnostic of depth structure. This sophistication at the input level is demonstrated by our flexibility in perceiving shape under radically different viewing situations. For instance, we can gain a vivid impression of depth from a sparse collection of seemingly random dots, as well as from flat paintings. Adding to the complexity, humans exploit depth signals for a range of different behaviors, meaning that the input complexity is compounded by multiple functional outputs. Together, this poses a significant challenge when seeking to investigate empirically the sequence of computations that enable 3D vision.

This symposium brings together speakers from different perspectives to outline progress in understanding 3D vision. Fleming will start, addressing the question "What is the information?", using computational analysis of 3D shape to highlight basic principles that produce depth signatures from a range of cues. Todd and Glennerster will both consider the question "How is this information represented?", discussing different types of representational schemes and data structures. Welchman, Orban and Janssen will focus on the question "How is it implemented in cortex?". Welchman will discuss human fMRI studies that integrate psychophysics with concurrent measures of brain activity. Orban will review fMRI evidence for spatial correspondence in the processing of different depth cues in the human and monkey brain.
Janssen will summarize results from single-cell electrophysiology, highlighting the similarities and differences between the processing of 3D shape at the extreme ends of the dorsal and ventral pathways. Finally, Glennerster, Orban and Janssen will all address the question of how depth processing is affected by task.

The symposium should attract a wide range of VSS participants, as the topic is a core area of vision science and is enjoying a wave of public enthusiasm with the revival of stereoscopic entertainment formats. Further, the session's goal of linking computational approaches to behavior and to neural implementation is scientifically attractive.

Presentations

From local image measurements to 3D shape
Roland W. Fleming, Max Planck Institute for Biological Cybernetics
There is an explanatory gap between the simple local image measurements of early vision and the complex perceptual inferences involved in estimating object properties such as surface reflectance and 3D shape. The main purpose of my presentation will be to discuss how populations of filters tuned to different orientations and spatial frequencies can be 'put to good use' in the estimation of 3D shape. I'll show how shading, highlights and texture patterns on 3D surfaces lead to highly distinctive signatures in the local image statistics, which the visual system could use in 3D shape estimation. I will discuss how the spatial organization of these measurements provides additional information, and argue that a common front end can explain both similarities and differences between various monocular cues. I'll also present a number of 3D shape illusions and show how these can be predicted by image statistics, suggesting that human vision does indeed make use of these measurements.

The perceptual representation of 3D shape
James T. Todd, Department of Psychology, Ohio State University
One of the fundamental issues in the study of 3D surface perception is to identify the specific aspects of an object's structure that form the primitive components of an observer's perceptual knowledge. After all, in order to understand shape perception, it is first necessary to define what "shape" is. In this presentation, I will assess several types of data structures that have been proposed for representing 3D surfaces. One of the most common data structures employed for this purpose involves a map of the geometric properties in each local neighborhood, such as depth, orientation or curvature. Numerous experiments have been performed in which observers were required to make judgments of local surface properties, but the results reveal that these judgments are most often systematically distorted relative to the ground truth, and surprisingly imprecise, thus suggesting that local property maps may not be the foundation of our perceptual knowledge about 3D shape. An alternative type of data structure for representing 3D shape involves a graph of the configural relationships among qualitatively distinct surface features, such as edges and vertices. The psychological validity of this type of representation has been supported by numerous psychophysical experiments, and by electrophysiological studies of macaque IT. A third type of data structure will also be considered in which surfaces are represented as a tiling of qualitatively distinct regions based on their patterns of curvature, and there is some neurophysiological evidence to suggest that this type of representation occurs in several areas of the primate cortex.

View-based representations and their relevance to human 3D vision
Andrew Glennerster, School of Psychology and CLS, University of Reading
In computer vision, applications that previously involved the generation of 3D models can now be achieved using view-based representations. In the movie industry this makes sense, since both the inputs and outputs of the algorithms are images, but the same could also be argued of human 3D vision. We explore the implications of view-based models in our experiments. In an immersive virtual environment, observers fail to notice the expansion of a room around them and consequently make gross errors when comparing the size of objects. This result is difficult to explain if the visual system continuously generates a 3D model of the scene using known baseline information from interocular separation or proprioception. If, on the other hand, observers use a view-based representation to guide their actions, they may have an expectation of the images they will receive but be insensitive to the rate at which images arrive as they walk. In the same context, I will discuss psychophysical evidence on sensitivity to depth relief with respect to surfaces. The data are compatible with a hierarchical encoding of position and disparity similar to the affine model of Koenderink and van Doorn (1991). Finally, I will discuss two experiments that show how changing the observer's task changes their performance in a way that is incompatible with the visual system storing a 3D model of the shape or location of objects. Such task-dependency indicates that the visual system maintains information in a more 'raw' form than a 3D model.

The functional roles of visual cortex in representing 3D shape
Andrew E. Welchman, School of Psychology, University of Birmingham
Estimating the depth structure of the environment is a principal function of the visual system, enabling many key computations, such as segmentation, object recognition, material perception and the guidance of movements. The brain exploits a range of depth cues to estimate depth, combining information ranging from shading and shadows to linear perspective, motion and binocular disparity. Despite the importance of this process, we still know relatively little about the functional roles of different cortical areas in processing depth signals in the human brain. Here I will review recent human fMRI work that combines established psychophysical methods, high-resolution imaging and advanced analysis methods to address this question. In particular, I will describe fMRI paradigms that integrate psychophysical tasks in order to look for a correspondence between changes in behavioural performance and fMRI activity. Further, I will review information-based fMRI analysis methods that seek to investigate different types of depth representation in parts of visual cortex. This work suggests a key role for a confined ensemble of dorsal visual areas in processing information relevant to judgments of 3D shape.

Extracting depth structure from multiple cues
Guy A. Orban, K.U. Leuven
Multiple cues provide information about the depth structure of objects: disparity, motion, shading and texture. Functional imaging studies in humans have been performed to localize the regions involved in extracting depth structure from these four cues.
In all these studies, extensive controls were used to obtain activation sites specific for depth structure. Depth structure from motion, stereo and texture activates regions in both parietal and ventral cortex, but shading activates only a ventral region. For stereo and motion, the balance between dorsal and ventral activation depends on the type of stimulus: boundaries versus surfaces. In monkeys, results are similar to those obtained in humans, except that motion is a weaker cue in monkey parietal cortex. At the single-cell level, neurons are selective for gradients of speed, disparity and texture. Neurons selective for first- and second-order gradients of disparity will be discussed by P. Janssen. I will concentrate on neurons selective for speed gradients and review recent data indicating that a majority of FST neurons are selective for second-order speed gradients.

Neurons selective to disparity-defined shape in the temporal and parietal cortex
Peter Janssen, K.U. Leuven; Bram-Ernst Verhoef, K.U. Leuven
A large proportion of the neurons in the rostral lower bank of the Superior Temporal Sulcus, which is part of IT, respond selectively to disparity-defined 3D shape (Janssen et al., 1999; Janssen et al., 2000). These IT neurons preserve their selectivity across different positions-in-depth, which proves that they respond to the spatial variation of disparity along the vertical axis of the shape (higher-order disparity selectivity). We have studied the responses of neurons in parietal area AIP, the end stage of the dorsal visual stream and crucial for object grasping, to the same disparity-defined 3D shapes (Srivastava et al., 2009). In this presentation I will review the differences between IT and AIP in the neural representation of 3D shape. More recent studies have investigated the role of AIP and IT in the perceptual discrimination of 3D shape using simultaneous recordings of spikes and local field potentials in the two areas, psychophysics, and reversible inactivation. AIP and IT show strong synchronized activity during 3D-shape discrimination, but only IT activity correlates with perceptual choice. Reversible inactivation of AIP produces a deficit in grasping but does not affect the perceptual discrimination of 3D shape. Hence the end stages of the dorsal and the ventral visual streams process disparity-defined 3D shape in clearly distinct ways. In line with the proposed behavioral roles of the two processing streams, the 3D-shape representation in AIP is action-oriented but not crucial for 3D-shape perception.

S2 New Methods for Delineating the Brain and Cognitive Mechanisms of Attention

Friday, May 7, 1:00 – 3:00 pm, Royal Palm Ballroom 4-5
Organizer: George Sperling (University of California, Irvine)
Presenters: Edgar DeYoe (Medical College of Wisconsin), Jack L. Gallant (University of California, Berkeley), Albert J. Ahumada (NASA Ames Research Center, Moffett Field, CA 94035), Wilson S. Geisler (The University of Texas at Austin), Barbara Anne Dosher (University of California, Irvine), George Sperling (University of California, Irvine)

Symposium Summary

This symposium brings together the world's leading specialists in six different subareas of visual attention. These distinguished scientists will expose the audience to an enormous range of methods, phenomena, and theories. It's not a workshop; listeners won't learn how to use the methods described, but they will become aware of the existence of diverse methods and what can be learned from them.
The participants will aim their talks at VSS attendees who are not necessarily familiar with the phenomena and theories of visual attention but who can be assumed to have some rudimentary understanding of visual information processing. The talks should be of interest to, and understandable by, all VSS attendees with an interest in visual information processing: students, postdocs, academic faculty, research scientists, clinicians, and the symposium participants themselves. Attendees will see examples of the remarkable insights achieved by carefully controlled experiments combined with computational modeling. DeYoe reviews his extraordinary fMRI methods for localizing spatial visual attention in the visual cortex of alert human subjects to measure their "attention maps". He shows in exquisite detail how top-down attention to local areas in visual space changes the BOLD response (an indicator of neural activity) in corresponding local areas of V1 and in adjacent spatiotopic visual processing areas. This work is of fundamental significance in defining the topography of attention, and it has important clinical applications. Gallant is the premier exploiter of natural images in the study of visual cortical processing. His work uses computational models to define the neural processes of attention in V4 and throughout the attention hierarchy. Gallant's methods complement DeYoe's in that they reveal functions and purposes of attentional processing that are often overlooked with the simple stimuli traditionally used. Ahumada, who introduced the reverse correlation paradigm in vision science, here presents a model for the eye movements in perhaps the simplest search task (which happens also to have practical importance): the search for a small target near the horizon between ocean and sky. This is an introduction to the talk by Geisler. Geisler continues the theme of attention as optimizing performance in complex tasks in studies of visual search. He presents a computational model for how attention and stimulus factors jointly control eye movements and search success in arbitrarily complex and difficult search tasks. Eye movements in visual search approach those of an ideal observer in making optimal choices given the available information, and observers adapt (learn) rapidly when the nature of the information changes. Dosher has developed analytic descriptions of attentional processes that enable dissection of attention into three components: filter sharpening, stimulus enhancement, and altered gain control. She applies these analyses to show how subjects learn to adjust the components of attention to easy and to difficult tasks. Sperling reviews the methods used to quantitatively describe spatial and temporal attention windows, and to measure the amplification of attended features. He shows that different forms of attention act independently.

Presentations

I Know Where You Are Secretly Attending! The topography of human visual attention revealed with fMRI
Edgar DeYoe, Medical College of Wisconsin; Ritobrato Datta, Medical College of Wisconsin
Previous studies have described the topography of attention-related activation in retinotopic visual cortex for an attended target at one or a few locations within the subject's field of view. However, a complete description for all locations in the visual field is lacking. In this human fMRI study, we describe the complete topography of attention-related cortical activation throughout the central 28° of the visual field and compare it with previous models. We cataloged separate fMRI-based maps of attentional topography in medial occipital visual cortex when subjects covertly attended to each target location in an array of 3 concentric rings of 6 targets each. Attentional activation was universally highest at the attended target but spread to other segments in a manner depending on eccentricity and/or target size. We propose an "Attentional Landscape" model that is more complex than a 'spotlight' or simple 'gradient' model but includes aspects of both. Finally, we asked subjects to secretly attend to one of the 18 targets without informing the investigator. We then show that it is possible to determine the target of attentional scrutiny from the pattern of brain activation alone with 100% accuracy. Together, these results provide a comprehensive, quantitative and behaviorally relevant account of the macroscopic cortical topography of visuospatial attention. We also show the pattern of attentional enhancement as it would appear distributed within the observer's field of view, thereby permitting direct observation of a neurophysiological correlate of a purely mental phenomenon, the "window of attention."

Attentional modulation in intermediate visual areas during natural vision
Jack L. Gallant, University of California, Berkeley
Area V4 has been the focus of much research on neural mechanisms of attention. However, most of this work has focused on reduced paradigms involving simple stimuli such as bars and gratings, and simple behaviors such as fixation.
The picture that has emerged from such studies suggests that the main effect of attention is to change response rate, response gain or contrast gain. In this talk I will review the current evidence regarding how neurons are modulated by attention under more natural viewing conditions involving complex stimuli and behaviors. The view that emerges from these studies suggests that attention operates through a variety of mechanisms that modify the way information is represented throughout the visual hierarchy. These mechanisms act in concert to optimize task performance under the demanding conditions prevailing during natural vision.

A model for search and detection of small targets
Albert J. Ahumada, NASA Ames Research Center, Moffett Field, CA 94035
Computational models predicting the distribution of the time to detection of small targets on a display are being developed to improve workstation designs. Search models usually contain bottom-up processes, like a saliency map, and top-down processes, like a priori distributions over the possible locations to be searched. A case that needs neither of these features is the search for a very small target near the horizon when the sky and the ocean are clear. Our models for this situation have incorporated a saccade-distance penalty and inhibition-of-return with a temporal decay. For very small but high-contrast targets, the simple detection rule that the target is detected if it is foveated is sufficient. For low-contrast signals, a standard-observer detection model with masking by the horizon edge is required. Accurate models of the search and detection process without significant expectations or stimulus attractors should make it easier to estimate the way in which expectations and attractors are combined when they are included.

Ideal Observer Analysis of Overt Attention
Wilson S. Geisler, The University of Texas at Austin
In most natural tasks humans use information detected in the periphery, together with context and other task-dependent constraints, to select their fixation locations (i.e., the locations where they apply the specialized processing associated with the fovea). A useful strategy for investigating the overt-attention mechanisms that drive fixation selection is to begin by deriving appropriate normative (ideal observer) models. Such ideal observer models can provide a deep understanding of the computational requirements of the task, a benchmark against which to compare human performance, and a rigorous basis for proposing and testing plausible hypotheses about the biological mechanisms. In recent years, we have been investigating the mechanisms of overt attention in tasks in which the observer searches for a known target randomly located in a complex background texture (nominally a background of filtered noise having the average power spectrum of natural images).
This talk will summarize some of our earlier and more recent findings (for our specific search tasks): (1) practiced humans approach ideal search speed and accuracy, ruling out many sub-ideal models; (2) human eye movement statistics are qualitatively similar to those of the ideal searcher; (3) humans select fixation locations that make near-optimal use of context (the prior over possible target locations); (4) humans show relatively rapid adaptation of their fixation strategies to simulated changes in their visual fields (e.g., central scotomas); (5) there are biologically plausible heuristics that approach ideal performance.

Attention in High Precision Tasks and Perceptual Learning
Barbara Anne Dosher, University of California, Irvine; Zhong-Lin Lu, University of Southern California
At any moment, the world presents far more information than the brain can process. Visual attention allows the effective selection of information relevant for high-priority processing, and is often more easily focused on one object than two. Both spatial selection and object attention have important consequences for the accuracy of task performance. Such effects have historically been assessed primarily for relatively "easy" lower-precision tasks, yet the role of attention can depend critically on the demand for fine, high-precision judgments. High-precision task performance generally depends more upon attention, and attention affects performance across all contrasts, with or without noisy stimuli. Low-precision tasks with similar processing loads generally show effects of attention only at intermediate contrasts, and these effects may be restricted to noisy display conditions. Perceptual learning can reduce the costs of inattention. The different roles of attention and task precision are accounted for within the context of an elaborated perceptual template model of the observer, which distinguishes the functions of attention and provides an integrated account of performance as a function of attention, task precision, external noise and stimulus contrast. Taken together, these provide a taxonomy of the functions and mechanisms of visual attention.

Modeling the Temporal, Spatial, and Featural Processes of Visual Attention
George Sperling, University of California, Irvine
A whirlwind review of the methods used to quantitatively define the temporal, spatial, and featural properties of attention, and some of their interactions. The temporal window of attention is measured by moving attention from one location to another location at which a rapid sequence of different items (e.g., letters or numbers) is being presented. The probability of items from that sequence entering short-term memory defines the time course of attention: typically 100 msec to window opening, maximum at 300–400 msec, and 800 msec to closing. Spatial attention is defined like acuity, by the ability to alternately attend to and ignore strips of increasingly finer grids. The spatial frequency characteristic so measured then predicts achievable attention distributions for arbitrarily defined regions. Featural attention is defined by the increased salience of items that contain to-be-attended features. This can be measured in various ways; quickest is an ambiguous motion task, which shows that attended features have 30% greater salience than neutral features. Spatio-temporal interaction is measured when attention moves as quickly as possible to a designated area. Attention moves in parallel to all the to-be-attended areas, i.e., temporal-spatial independence. Independence of attentional modes is widely observed; it allows the most efficient neural processing.

S3 Nature vs. Nurture in Vision: Evidence from Typical and Atypical Development

Friday, May 7, 1:00 – 3:00 pm, Royal Palm Ballroom 6-8
Organizer: Faraz Farzin (University of California, Davis)
Presenters: Karen Dobkins (Department of Psychology, University of California, San Diego), Rain G. Bosworth (Department of Psychology, University of California, San Diego), Melanie Palomares (University of South Carolina), Anthony M. Norcia (The Smith-Kettlewell Eye Research Institute), Janette Atkinson (Visual Development Unit, Department of Developmental Science, University College London), Faraz Farzin (University of California, Davis)

Symposium Summary

The interplay between genetics and the environment is a rapidly advancing area in vision, yet it is a classic question in developmental research. In this symposium, each speaker will present empirical evidence supporting the contribution of genetic and/or environmental factors to specific visual processes, and the speakers will collectively discuss how these factors affect human visual development. The symposium has three aims: (1) to provide the opportunity for developmental researchers to come together and engage in collaborative dialogue in a single session at VSS, which has been neglected in recent years; (2) to synthesize a working knowledge of the biological and environmental influences on the functional and anatomical organization of the typically and atypically developing visual system; and (3) to advance the role of development in understanding visual mechanisms. Bringing together prominent scientists as well as young investigators, we anticipate that this symposium will appeal to those who share a common interest in understanding the nature of early vision and the factors which shape its development.

Presentations

Infant Contrast Sensitivity: Contributions of Factors Related to Visual Experience vs. Preprogrammed Mechanisms
Karen Dobkins, University of California, San Diego; Rain G. Bosworth, University of California, San Diego
In order to investigate potential effects of visual experience vs. preprogrammed mechanisms on visual development, we have investigated how well variation in contrast sensitivity (CS) across a large group of typical infants (n = 182) can be accounted for by a variety of factors that differ in the extent to which they are tied to visual experience. Using multiple regression analyses, we find that gestational length and gender, which are unlikely to be tied to visual experience, predict Luminance CS (thought to be mediated by the Magnocellular pathway).
Other factors that might be tied to either preprogrammed mechanisms or visual experience (specifically, birth order and small variations in postnatal age) predict Chromatic CS (thought to be mediated by the Parvocellular pathway) as well as Luminance CS. In addition, we have investigated effects of visual experience vs. preprogrammed mechanisms by studying CS in infant twins (n = 64). Our results show that the CS of both monozygotic (Mz) and dizygotic (Dz) twin pairs is significantly correlated within pairs (accounting for ~35% of the variance in CS), which could be due to either shared environment or genetic preprogramming. More data will allow us to determine whether correlations are significantly stronger in Mz than Dz twins, which would provide direct evidence of effects of genetic preprogramming. Based on our multiple regression studies (above), as well as our studies of premature infants (presented in this symposium), we predict that genetic preprogramming will be more influential for Luminance (Magnocellular pathway) CS than for Chromatic (Parvocellular pathway) CS.

Chromatic and Luminance Contrast Sensitivity in Fullterm and Preterm Infants: Effects of Early Visual Experience on Magnocellular and Parvocellular Pathway Processing
Rain G. Bosworth, University of California, San Diego; Karen Dobkins, University of California, San Diego
Study of healthy preterm infants affords an opportunity to investigate the contributions of visual experience vs. preprogrammed mechanisms to visual development. By comparing the developmental trajectories of contrast sensitivity (CS) in preterm vs. fullterm infants, we can determine whether development is primarily tied to postterm age, in which case visual maturation is governed by preprogrammed mechanisms timed from conception. If, by contrast, development is tied to postnatal age, then visual maturation may be affected by visual experience. Using forced-choice preferential looking methods, data from 57 preterm (born 5–9 weeks early) and 97 fullterm infants were collected between 1–6 months postterm age (2–7 months postnatal age). Our visual measures were luminance (light/dark) and chromatic (red/green) CS, which are thought to be mediated by the Magnocellular and Parvocellular subcortical pathways, respectively. In the first few months, luminance CS was found to be predicted by postterm age, suggesting that preprogrammed development is sufficient to account for luminance CS. By contrast, chromatic CS significantly exceeded that predicted by postterm age, which suggests that time since birth (and, by extension, visual experience) confers a benefit on chromatic CS. In sum, early Parvocellular pathway development appears to be more influenced by early postnatal visual experience than Magnocellular pathway development. We will present results comparing preterm infants born at different gestational ages, to determine whether the slope of postnatal development changes with gestational length at birth. Finally, data will be compared to very preterm infants with retinopathy of prematurity.

Visual Evoked Potentials in Texture Segmentation: Are Boys and Girls Different?
Melanie Palomares, University of South Carolina; Anthony M. Norcia, The Smith-Kettlewell Eye Research Institute
Texture-defined objects are shapes defined by boundaries based on discontinuities along a feature dimension (Nothdurft, 1993). Psychophysical studies with pediatric observers have shown that the ability to detect texture discontinuities based on orientation appears at 4–6 months of age and matures in adolescence (Rieth & Sireteanu, 1992).
We evaluated the neural correlates of texture segmentation across development by measuring high-density visual evoked potentials in typically developing children and adults. We found that the formation of texture-defined form elicited VEP responses earlier in adults than in children. While there were no sex differences in VEP responses in adults, we found that response amplitudes in girls were much smaller than in boys of the same age. These results suggest that the neural responses in girls were more adult-like than in boys. This presentation will discuss the possible cortical substrate of this sex difference (e.g., Sowell et al., 2007).

Experience-Dependent Plasticity of Human Form and Motion Mechanisms in Anisometropic Amblyopia
Anthony M. Norcia, The Smith-Kettlewell Eye Research Institute; Sean I. Chen, The Galway Clinic, Galway, Ireland; Arvind Chandna, Royal Liverpool Children's Hospital, Liverpool, UK
Deprivation of visual input during developmental critical periods can have profound effects on the structure of visual cortex and on functional vision (Hubel and Wiesel, 1963). Converging evidence from studies in human and animal models of amblyopia suggests that visual deprivation can have differential effects on different cortical pathways, consistent with the presence of multiple critical periods within the visual system as a whole (Harwerth, Smith et al., 1986). Anisometropia (unequal refractive error between the two eyes) …


S5 Understanding the interplay between reward and attention, and its effects on visual perception and action

Friday, May 7, 3:30 – 5:30 pm, Royal Palm Ballroom 4-5

Presentations

Understanding how reward and saliency affect overt attention and decisions
Vidhya Navalpakkam, Division of Biology, Caltech; Christof Koch, Division of Engineering, Applied Science and Biology, Caltech; Antonio Rangel, Division of Humanities and Social Sciences, Caltech; Pietro Perona, Division of Engineering and Applied Science, Caltech
The ability to rapidly choose among multiple valuable targets embedded in a complex perceptual environment is key to survival in many animal species. Targets may differ both in their reward value and in their low-level perceptual properties (e.g., visual saliency). Previous studies investigated separately the impact of value on decisions and of saliency on attention, so it is not known how the brain combines these two variables to influence attention and decision-making. In this talk, I will describe how we addressed this question with three experiments in which human subjects attempted to maximize their monetary earnings by rapidly choosing items from a brief display. Each display contained several worthless items (distractors) as well as two targets, whose value and saliency were varied systematically. The resulting behavioral data were compared to the predictions of three computational models, which assume that: (1) subjects seek the most valuable item in the display; (2) subjects seek the most easily detectable item (e.g., highest saliency); (3) subjects behave as an ideal Bayesian observer who combines both factors to maximize expected reward within each trial. We find that, regardless of the motor response used to express the choices, decisions are influenced by both value and feature contrast in a way that is consistent with the ideal Bayesian observer (a toy version of this combination rule is sketched after this symposium's abstracts). Thus, individuals are able to engage in optimal reward harvesting while seeking multiple relevant targets amidst clutter. I will describe ongoing studies on whether attention, like decisions, may also be influenced by value and saliency to optimize reward harvesting.

Optimizing eye movements in search for rewards
Miguel Eckstein, Department of Psychology, University of California, Santa Barbara; Wade Schoonveld, Department of Psychology, University of California, Santa Barbara; Sheng Zhang, Department of Psychology, University of California, Santa Barbara
There is a growing literature investigating how rewards influence the planning of saccadic eye movements and the activity of underlying neural mechanisms (for a review see Trommershauser et al., 2009). Most of these studies reward correct eye movements towards a target at a given location (e.g., Liston and Stone, 2008). Yet, in everyday life, rewards are not directly linked to eye movements but rather to a correct perceptual decision and follow-up action. The role of eye movements is to explore the visual scene and maximize the gathering of information for a subsequent perceptual decision. In this context, we investigate how varying the rewards assigned to correct perceptual decisions across locations in a search task influences the planning of human eye movements. We extend the ideal Bayesian searcher (Najemnik & Geisler, 2005) by explicitly including reward structure to: (1) determine the (optimal) fixation sequences that maximize total reward gains; (2) predict the theoretical increase in gains from taking reward structure into account when planning eye movements during search. We show that humans strategize their eye movements to collect more reward.
The pattern of human fixations shares many properties with the fixations of the ideal reward searcher. Human increases in total gains from using information about the reward structure are also comparable to the gains of the ideal searcher. Finally, we use theoretical simulations to show that the observed discrepancies between the fixations of humans and those of the ideal reward searcher do not have a major impact on the total collected rewards. Together, the results increase our understanding of how rewards influence optimal and human saccade planning in ecologically valid tasks such as visual search.

Incentive salience in human visual attention
Clayton Hickey, Department of Cognitive Psychology, Vrije Universiteit Amsterdam; Leonardo Chelazzi, Department of Neurological and Visual Sciences, University of Verona Medical School; Jan Theeuwes, Department of Cognitive Psychology, Vrije Universiteit Amsterdam
Reward-related midbrain dopamine guides animal behavior, creating automatic approach towards objects associated with reward and avoidance of objects unlikely to be beneficial. Using measures of behavior and brain electricity, we show that the dopamine system implements a similar principle in the deployment of covert attention in humans. Participants attend to an object associated with monetary reward and ignore an object associated with sub-optimal outcome, and do so even when they know this will result in bad task performance. The strength of reward's impact on attention is predicted by the neural response to reward feedback in anterior cingulate cortex, a brain area known to be part of the dopamine reinforcement circuit. These results demonstrate a direct, non-volitional role for reinforcement learning in human attentional control.

Reward expectancy biases selective attention in the primary visual cortex
Pieter R. Roelfsema, Dept. Vision & Cognition, Netherlands Institute for Neuroscience, Amsterdam; Chris van der Togt, Dept. Vision & Cognition, Netherlands Institute for Neuroscience, Amsterdam; Cyriel Pennartz, Dept. Vision & Cognition, Netherlands Institute for Neuroscience, Amsterdam; Liviu Stanisor, Dept. Vision & Cognition, Netherlands Institute for Neuroscience, Amsterdam
Rewards and reward expectations influence neuronal activity in many brain regions, as stimuli associated with a higher reward tend to give rise to stronger neuronal responses than stimuli associated with lower rewards. It is difficult to dissociate these reward effects from the effects of attention, as attention also modulates neuronal activity in many of the same structures (Maunsell, 2004). Here we investigated the relation between rewards and attention by recording neuronal activity in the primary visual cortex (area V1), an area usually not believed to play a crucial role in reward processing, in a curve-tracing task with varying rewards. We report a new effect of reward magnitude in area V1: highly rewarding stimuli cause more neuronal activity than unrewarding stimuli, but only if there are multiple stimuli in the display. Our results demonstrate a remarkable correspondence between reward and attention effects. First, rewards bias the competition between simultaneously presented stimuli, as is also true for selective attention. Second, the latency of the reward effect is similar to the latency of attentional modulation (Roelfsema, 2006). Third, neurons modulated by rewards are also modulated by attention.
These results inspire a unification of theories about reward expectation and selective attention.

How reward shapes attention and the search for information
Jacqueline Gottlieb, Dept. of Neuroscience and Psychiatry, Columbia University; Christopher Peck, Dept. of Neuroscience and Psychiatry, Columbia University; Dave Jangraw, Dept. of Neuroscience and Psychiatry, Columbia University
In the neurophysiological literature with non-human primates, much effort has been devoted to understanding how reward expectation shapes decision making, that is, the selection of a specific course of action. On the other hand, we know nearly nothing about how reward shapes attention, the selection of a source of information. And yet, understanding how organisms value information is critical for predicting how they will allocate attention in a particular task. In addition, it is critical for understanding active learning and exploration, behaviors that are fundamentally driven by the need to discover new information that may prove valuable for future tasks. To begin addressing this question, we examined how neurons located in the parietal cortex, which encode the momentary locus of attention, are influenced by the reward valence of visual stimuli. We found that reward predictors bias attention in a valence-specific manner. Cues predicting reward produced a sustained excitatory bias and attracted attention toward their location. Cues predicting no reward produced a sustained inhibitory bias and repulsed attention from their location. These biases persisted and even grew with training, even though they came into conflict with the operant requirement of the task, thus lowering the animals' task performance. This pattern diverges markedly from the assumptions of reinforcement learning (that training improves performance and overcomes maladaptive biases) and suggests that the effects of reward on attention may differ markedly from the effects on decision making. I will discuss these findings and their implications for reward and reward-based learning in cortical systems of attention.
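A common thread in the abstracts above is an ideal-observer benchmark in which reward value and sensory reliability jointly determine the best choice. As a minimal sketch of the combination rule referenced in Navalpakkam's abstract (the notation below is ours, not the authors'): let V_i be the reward for correctly choosing item i, let x be the noisy sensory evidence, and let the posterior term carry the effect of saliency, since more salient items yield more reliable evidence. The ideal Bayesian observer picks

\[
i^{*} \;=\; \arg\max_{i}\, \mathbb{E}[R \mid \text{choose } i],
\qquad
\mathbb{E}[R \mid \text{choose } i] \;=\; V_{i}\, P(\text{item } i \text{ is a target} \mid x).
\]

The two rival models in that abstract are the special cases that keep only one factor: a value-only observer maximizes V_i alone, and a saliency-only observer maximizes the posterior alone. The reported choice data are consistent with the product of the two.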


S6 Dissociations between top-down attention and visual awareness

Friday, May 7, 3:30 – 5:30 pm, Royal Palm Ballroom 6-8
Organizers: Jeroen van Boxtel (California Institute of Technology, USA) and Nao Tsuchiya (California Institute of Technology, USA, and Tamagawa University, Japan)
Presenters: Nao Tsuchiya (California Institute of Technology, USA, and Tamagawa University, Japan), Jeroen J.A. van Boxtel (California Institute of Technology, USA), Takeo Watanabe (Boston University), Joel Voss (Beckman Institute, University of Illinois Urbana-Champaign, USA), Alex Maier (National Institute of Mental Health, NIH)

Symposium Summary

Historically, the pervading assumption among sensory psychologists has been that attention and awareness are intimately linked, if not identical, processes. However, a number of recent authors have argued that these are two distinct processes, with different functions and underlying neuronal mechanisms. If this position is correct, we should be able to dissociate the effects of attention and awareness with some experimental manipulation. Furthermore, we might expect extreme cases of dissociation, in which attention and awareness have opposing effects on some task performance and its underlying neuronal activity. In the last decade, a number of findings have been taken as support for the notion that attention and awareness are distinct cognitive processes. In our symposium, we will review some of these results and introduce psychophysical methods to manipulate top-down attention and awareness independently. Throughout the symposium, we showcase the successful application of these methods to human psychophysics, fMRI and EEG, as well as monkey electrophysiology.

First, Nao Tsuchiya will set the stage for the symposium by offering a brief review of recent psychophysical studies that support the idea of awareness without attention as well as attention without awareness. After discussing some of the methodological limitations of these approaches, Jeroen van Boxtel will show direct evidence that attention and awareness can have opposite effects on the formation of afterimages. Takeo Watanabe's behavioral paradigm will demonstrate that subthreshold motion can be more distracting than suprathreshold motion. He will go on to show the neuronal substrate of this counter-intuitive finding with fMRI. Joel Voss will describe how perceptual recognition memory can occur without awareness following manipulations of attention, and how these effects result from changes in the fluency of neural processing in visual cortex measured by EEG. Finally, Alexander Maier will link the results of these human studies to neuronal recordings in monkeys, in which the attentional state and the visibility of a stimulus are manipulated independently in order to study the neuronal basis of each.

A major theme of our symposium is that emerging evidence supports the notion that attention and awareness are two distinct neuronal processes. Throughout the symposium, we will discuss how dissociative paradigms can lead to new progress in the quest for the neuronal processes underlying attention and awareness. We emphasize that it is important to separate out the effects of attention from the effects of awareness.
Our symposium should benefit most vision scientists interested in visual attention or visual awareness, because the methodologies we discuss will inform them of paradigms that can dissociate attention from awareness. Given the novelty of these findings, our symposium will cover terrain that remains largely untouched by the main program.

Presentations

The relationship between top-down attention and conscious awareness
Nao Tsuchiya, California Institute of Technology, USA, and Tamagawa University, Japan
Although the claim that attention and awareness are different has been made before, it has been difficult to show clear dissociations due to their tight coupling in normal situations; top-down attention and stimulus visibility both improve performance in most visual tasks. As proposed in this workshop, however, the putative difference in their functional and computational roles implies the possibility that attention and awareness affect visual processing in different ways. After a brief discussion of the functional and computational roles of attention and awareness, we will introduce psychophysical methods that independently manipulate visual awareness and spatial, focal top-down attention, and review recent studies showing consciousness without attention and attention without consciousness.

Opposing effects of attention and awareness on afterimages
Jeroen J.A. van Boxtel, California Institute of Technology, USA
The brain's ability to handle sensory information is influenced by both selective attention and awareness. There is still no consensus on the exact relationship between these two processes and whether or not they are distinct. So far, no experiment has manipulated both simultaneously, which severely hampers discussion of this issue. We here describe a full factorial study of the influences of attention and awareness (as assayed by visibility) on afterimages. We investigated the duration of afterimages for all four combinations of high versus low attention and visible versus invisible gratings. We demonstrate that selective attention and visual awareness have opposite effects: paying attention to the grating decreases the duration of its afterimage, while consciously seeing the grating increases afterimage duration. We moreover control for various possible confounds, including stimulus and task changes. These data provide clear evidence for distinct influences of selective attention and awareness on visual perception.

Role of subthreshold stimuli in task performance and its underlying mechanism
Takeo Watanabe, Boston University
Considerable evidence exists that a stimulus which is subthreshold, and thus consciously invisible, influences brain activity and behavioral performance. However, it is not clear how subthreshold stimuli are processed in the brain. We found that a task-irrelevant subthreshold coherent motion leads to stronger disturbance of task performance than suprathreshold motion. With the subthreshold motion, fMRI activity in the visual cortex was higher, but activity in the dorsolateral prefrontal cortex (DLPFC) was lower, than with suprathreshold motion. The results of the present study demonstrate two important points. First, a weak task-irrelevant stimulus feature which is below but near the perceptual threshold more strongly activates a visual area (MT+) that is highly related to the stimulus feature, and more greatly disrupts task performance.
This contradicts the general view that irrelevant signals with stronger stimulus properties influence the brain and performance more, and that the influence of a subthreshold stimulus is smaller than that of a suprathreshold stimulus. Second, the results may reveal important bidirectional interactions between a cognitive controlling system and the visual system. LPFC, which has been suggested to provide inhibitory control over task-irrelevant signals, may have a higher detection threshold for incoming signals than the visual cortex. Task-irrelevant signals around the threshold level may be sufficiently strong to be processed in the visual system but not strong enough for LPFC to "notice" and, therefore, to suppress through effective inhibitory control. In this case, such signals may remain uninhibited, consume more resources as a task-irrelevant distractor, leave fewer resources for the given task, and disrupt task performance more than a suprathreshold signal would. Suprathreshold coherent motion, on the other hand, may be "noticed" and successfully inhibited by LPFC, leaving more resources for the task. This mechanism may underlie the present paradoxical finding that subthreshold task-irrelevant stimuli activate the visual area strongly and disrupt task performance more, and could also be one of the reasons why subthreshold stimuli tend to produce relatively robust effects.
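Watanabe's two-threshold account is concrete enough to simulate. The sketch below is our illustration, not the authors' model: the threshold values and the interference rule are invented for demonstration. It shows why interference from a task-irrelevant signal can peak just below the prefrontal "notice" threshold.

# Toy model (Python) of the two-threshold account described above.
# The visual system registers any signal above a low sensory threshold,
# but prefrontal (LPFC) inhibitory control engages only above a higher
# "notice" threshold. All parameter values are invented for illustration.

def interference(signal,
                 visual_threshold=0.2,  # hypothetical sensory threshold
                 lpfc_threshold=0.6,    # hypothetical LPFC "notice" threshold
                 inhibition=0.8):       # fraction suppressed once noticed
    """Interference produced by a task-irrelevant signal of a given strength."""
    if signal < visual_threshold:
        return 0.0                          # too weak to be processed at all
    if signal < lpfc_threshold:
        return signal                       # processed, but uninhibited
    return signal * (1.0 - inhibition)      # noticed and largely suppressed

for s in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"signal {s:.1f} -> interference {interference(s):.2f}")

# Interference peaks at signal 0.5, just below the LPFC threshold, mirroring
# the finding that near-threshold motion disrupts performance more than
# clearly visible motion.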


Friday Evening Posters

Perception and action: Locomotion
Orchid Ballroom, Boards 401–412
Friday, May 7, 6:30 – 9:30 pm

16.401 Does Visual Texture Enhance the Recognition of Ramps and Steps?
Tiana M. Bochsler¹ (bochs001@umn.edu), Christopher S. Kallie¹, Gordon E. Legge¹, Rachel Gage¹; ¹Psychology Department, University of Minnesota, Twin Cities
Visual texture on floors may facilitate safe mobility by providing information to pedestrians about surface slant and discontinuities. Often, ground-plane texture is composed of fine detail and is beyond the acuity limit or below the contrast threshold of people with low vision. Consequently, we investigated whether a surface with large, high-contrast texture elements would enhance the detectability of steps and ramps under low-resolution viewing. Since the angular size of texture elements depends on viewing distance, we expected any benefits from texture to depend on both acuity and viewing distance. Subjects viewed a sidewalk interrupted by one of five possible targets: a single step up or down (7 inch height), a ramp up or down (7 inch change of height over 8 feet), or flat. Subjects reported which of the five targets was shown, and percent correct was computed from a block of trials. Viewing distance was 5, 10 or 20 feet from the target. Normally sighted subjects viewed the targets monocularly through goggles with two levels of blur, having effective acuities of ~20/135 (moderate blur) and ~20/900 (severe blur). For the Texture group, the sidewalk was covered with a black-and-white, high-contrast (0.87) checkerboard pattern with squares 12 inches on a side, surrounded by uniform mid-gray walls and flooring. Performance was compared with a group of subjects tested previously with a textureless gray sidewalk, walls, and floor (No-Texture group). With moderate blur, texture elements were visible and the Texture group outperformed the No-Texture group at all three distances. However, with severe blur, the groups performed comparably, with best performance at 5 feet. The results encourage us to consider the potential value of flooring with large texture elements for enhancing visual accessibility in public spaces.
Acknowledgement: NIH Grant EY017835

16.402 Stepping over obstacles: Are older adults' perceptual judgments consistent with their actions?
Kaylena Ehgoetz Martens¹ (ehgo7110@wlu.ca), Michael Cinelli¹; ¹Department of Kinesiology, Wilfrid Laurier University
Obstacle avoidance includes stepping over, around, or through obstacles. To effectively avoid collisions, individuals must use both the ventral (perception) and dorsal (action) visual streams. Visual perception guides action, and over a lifespan, action capabilities change. The objective was to determine whether these changes result from perceptual changes. To test this, we had 15 participants over 60 years of age perform an obstacle avoidance task. At the start of a trial, participants stood approximately 5 m away from an obstacle and were asked to elevate their foot to the perceived height of the obstacle. Following this judgment, participants were instructed to walk one meter while looking at the object and make a second perceptual judgment of the same obstacle. After this second judgment, they were asked to step over the obstacle with the same foot used during the two initial perceptual judgments. Three obstacle heights were used (1.5, 10, 20 cm), representative of real-world obstacles such as a curb height, a stair height, and a transition from carpet to hardwood, so that results can be directly related to behaviours in real settings.
The participants performed two blocks (free to look at the foot during the perceptual judgement, and not allowed to look at the foot) of 18 randomized trials (3 obstacle heights x 2 obstacle locations x 3 trials). Preliminary results showed that only three trials were unsuccessful. On the successful trials, participants appeared to use a similar toe elevation height for both the 10 and 20 cm obstacles. This inability to properly scale toe elevation to obstacle height was reflected in their variable perceptual judgments. This finding suggests that older adults' perceptions and actions differ from those of young adults (Patla & Goodale, 1996), and that older adults do not correctly couple perception with action.
Acknowledgement: WLU Science Technology and Endowment Program

16.403 Static and Dynamic Information about the Size and Passability of Apertures
Aaron Fath 1 (yarblockers@gmail.com), Brett Fajen 1; 1 Cognitive Science Department, Rensselaer Polytechnic Institute

Narrow openings between obstacles are among the most commonly encountered potential impediments to forward locomotion. The size of such apertures can be perceived in intrinsic units based on static, eye-height scaled information (Warren & Whang, 1987), allowing one to decide whether to attempt to pass through or select an alternative route. The aim of this study is to investigate the contribution of two sources of dynamic information about aperture size that are also available during approach to an aperture: one specifies aperture size in units of stride length (Lee, 1980), and the other specifies the future passing distance of the inside edges of the aperture (Peper et al., 1994). Experiments were conducted in a 7 m x 9 m virtual environment that was viewed through a head-mounted display (FOV: 44° H x 35° V). In Experiment 1, subjects judged whether they could fit through an aperture between a pair of vertical posts resting on a ground plane, an aperture between a pair of vertical posts without a ground plane, and an aperture in an untextured frontal wall without a ground plane. Because the ground plane was absent and the posts and wall spanned the entire vertical FOV in the last two conditions, static, eye-height scaled information was not available and subjects had to rely on dynamic information. Analyses focused on the accuracy of passability judgments made while stationary versus immediately after walking 3 m toward the aperture. In Experiment 2, subjects were instructed to walk toward the aperture and rotate their shoulders if necessary to safely pass through without colliding. Analyses focused on the timing and magnitude of shoulder rotation under the same three viewing conditions used in Experiment 1.
Acknowledgement: NSF 0545141

16.404 The effects of aging on action and visual strategies when walking through apertures
Amy Hackney 1 (hack7780@wlu.ca), Michael Cinelli 1; 1 Department of Kinesiology, Wilfrid Laurier University

Avoiding collisions with objects is a requirement of everyday locomotion. The actions individuals take to move through a cluttered environment are governed by how passable one perceives the open space to be. Naturally, people want to avoid colliding with objects to reduce the risk of injury. In the current study the participants (N=13) walked along an 8 m path at their self-selected speed towards a static door aperture. The aperture varied in width from 40-90 cm. The participants were instructed to safely pass through the aperture using a suitable method.
The objectives of the study were to determine: (1) whether the actions of older adults (i.e., critical point, velocity change onset, and shoulder rotation onset) differ from those previously reported for young adults (Warren & Whang, 1987); (2) whether gaze behaviours (i.e., fixation locations and durations) differ from those reported for younger adults (Higuchi et al., 2009); and (3) whether fixation patterns are reflective of action differences. Preliminary results indicate that older adults use different action strategies than young adults when approaching and passing through apertures. Older adults appear to have a larger critical point (i.e., aperture width/shoulder width) than previously reported for younger adults (i.e., 1.5 vs. 1.3). Further analysis will determine whether older adults show the same sequential action changes (i.e., a velocity change followed by shoulder rotation initiation) as younger adults (Cinelli & Patla, 2007). Preliminary data analysis has also shown that older adults' fixation patterns differed from younger adults' when approaching the aperture. Older adults appear to direct fixations towards the floor and more towards the door edges than younger adults do during a similar task. These results suggest that the nature of the older adults' fixation patterns directly influences their "cautious" actions when passing through door apertures.
Acknowledgement: WLU Science Technology and Endowment Program
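For readers unfamiliar with the critical-point measure used in 16.404, the sketch below shows one common way such a threshold can be estimated. It is a minimal Python illustration, not the authors' analysis: the logistic form, the tested ratios, and the rotation proportions are all invented for the example.

    # Minimal sketch (not the authors' analysis): estimating a "critical point",
    # i.e., the aperture-width/shoulder-width ratio at which walkers stop
    # rotating their shoulders, from hypothetical binary choice data.
    import numpy as np
    from scipy.optimize import curve_fit

    def p_rotation(ratio, critical_point, slope):
        # Probability of shoulder rotation: high for tight apertures and
        # exactly 0.5 at the critical point.
        return 1.0 / (1.0 + np.exp(slope * (ratio - critical_point)))

    # Hypothetical data: tested aperture/shoulder ratios and the observed
    # proportion of trials with shoulder rotation at each ratio.
    ratios = np.array([0.9, 1.1, 1.3, 1.5, 1.7, 1.9])
    observed = np.array([1.00, 0.95, 0.70, 0.45, 0.10, 0.00])

    (critical_point, slope), _ = curve_fit(p_rotation, ratios, observed, p0=[1.4, 5.0])
    print(f"estimated critical point: {critical_point:.2f}")  # near 1.5 here

On this reading, the 1.5 vs. 1.3 difference reported above means that older adults begin rotating their shoulders at proportionally wider apertures than young adults do.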


16.405 When walls are no longer barriers: Perception of obstacle height in Parkour
J. Eric T. Taylor 1 (j.eric.t.taylor@gmail.com), Jessica K. Witt 1; 1 Purdue University

Parkour is an activity characterized by the athletic, acrobatic, and efficient interaction of the athlete with the urban environment. Through training, skilled Parkour athletes (Traceurs) overcome everyday obstacles that are typically thought of as insurmountable, including the most common element of the modern carpentered landscape - walls. According to theories of Action-Modulated Perception (AMP), an increased ability to jump and climb walls should correspond with perceiving walls as shorter. Traceurs and novices (age, height, and sex matched) performed visual matching tasks on three walls (194 cm, 229 cm, and 345 cm), and also reported their subjective ability to climb each wall. Results show that Traceurs see walls as significantly shorter than novices do, but only for the higher two walls. This pattern corresponds to the subjective difficulty ratings given by all participants, as novices reported the higher two walls as being significantly harder to climb than the Traceurs did. The role of ability for action in perception is considered.

16.406 Gait characteristics and gaze behaviours during a modified timed "Up & Go" (TUG) test: a comparison of older adults and Parkinson's disease patients
Michael Cinelli 1 (mcinelli@wlu.ca), Rachel vanOostveen 1, Quincy Almeida 1; 1 Sun Life Movement Disorders Research & Rehabilitation Centre, Department of Kinesiology, Wilfrid Laurier University

The TUG test is a reliable and valid test for quantifying functional mobility (Podsiadlo & Richardson, 1991). Parkinson's disease (PD) patients face many mobility challenges as well as attention deficits. In order to test both mobility and attention, the current study had both PD patients (N=10, 62.8 ± 10.5 yrs) and healthy age-matched adults (N=12, 65.7 ± 6.3 yrs) perform the TUG test with a dual task. Dual-task paradigms determine the amount of interference a secondary task has on the performance of a primary task. The participants were instructed to rise from a chair and walk towards a counter three metres away. Hidden behind a curtain on the counter was either: 1) nothing, 2) an empty tray, or 3) a tray with glasses (TWG). After reaching and grabbing the item located behind the curtain, the participants were asked to turn around and walk back. Kinematic data were collected using an Optotrak system, and gaze behaviours (fixations) were collected using an ASL Mobile Eye Tracker. Results showed that during the approach and return, both velocity and step length were significantly lower (p


constant over some range, although there may be an upper limit due to biomechanical and proprioceptive constraints. Results for human heading direction are compared with model predictions based on a control law for steering in which the turning rate is a weighted linear sum of egocentric direction and optic flow.
Acknowledgement: NIH R01 EY10923, Brown Center for Vision Research

16.410 Visual information about locomotor capabilities and the perception of possibilities for action
Jonathan Matthis 1 (matthj5@rpi.edu), Brett Fajen 1; 1 Cognitive Science Department, Rensselaer Polytechnic Institute

Navigation through complex, dynamic environments requires people to choose actions that are appropriately calibrated to their locomotor capabilities. For example, when a pedestrian crosses the street, the decision whether to go ahead of an approaching vehicle or wait until it passes must take into account how fast the person can move. When actions are selected before movement is initiated, people can rely on what they know about their locomotor capabilities to select appropriate actions. The aim of this study is to investigate the contribution of visual information picked up on the fly when actions are selected while moving. The experiment was conducted in an immersive virtual environment viewed through a head-mounted display. On each trial, subjects walked 3 m along a tree-lined path, at which point two cylindrical obstacles began to converge toward a location along the subject's future path. Within 1.2 s, subjects had to judge whether they could have safely passed between the obstacles. On a small percentage of trials, the visual gain was increased such that subjects moved through the virtual environment 50% faster than normal. Subjects were more likely to perceive the gap as passable on catch trials with increased visual gain. The increase in "passable" responses is consistent with the use of on-the-fly information, and could be due to global optic flow picked up during the 3 m approach phase or to local motion of the converging cylinders. The relative contributions of global flow and local motion were tested in Experiment 2 by increasing the visual gain of the stationary background independently of the moving obstacles. The effect of visual gain was significant, but weaker than in Experiment 1. The findings suggest that when people select actions while moving, they rely on both local and global sources of information that are picked up on the fly.
Acknowledgement: NSF 0545141, NIH R01 EY019317

16.411 The role of continuous vs. terminal visual cues in the acquisition of a whole-body perceptuo-motor coordination task
Saritha Miriyala Radhakrishn 1 (saritharadhakrishnan@gmail.com), Vassilia Hatzitaki 1; 1 Laboratory of Motor Control and Learning, Aristotle University of Thessaloniki

The incessant adaptation of posture to external visual cues is a complex task that requires the coordination of multiple degrees of freedom in order to maintain balance during the performance of everyday actions. In the present study, we investigated the adaptation and learning of a rhythmical Weight Shifting (WS) task while providing either terminal or continuous visual cues during practice. Forty young healthy volunteers were randomly assigned to one of four visual feedback groups (Continuous Target-Continuous Feedback, Continuous Target-Point Feedback, Point Target-Continuous Feedback, Point Target-Point Feedback).
Participants were asked to perform periodic WS in the sagittal plane at a standard oscillation frequency (0.23 Hz) by matching the force exerted on a dual force platform to a target sine wave stimulus. Baseline, post-test, transfer (ankle tendon vibration at 80 Hz) and retention (24 hrs later) tests required performance of the same task guided by an auditory signal. Ground reaction forces were sampled through an A/D board (50 Hz) and analyzed using spectral and cross-correlation analysis. During practice, participants receiving terminal feedback, either as target or as performance, had significantly lower aiming error and cycle variability compared to participants receiving continuous visual cues. On the other hand, the continuous feedback groups displayed significantly lower (closer to 1) performance-target power spectral signal ratios, confirming higher accuracy throughout the course of WS. Learning was reflected in a reduction of cycle variability and a decrease of the median oscillation frequency towards the target frequency that was similar across all feedback groups. Nevertheless, participants practicing with terminal feedback showed better transfer of learning, as confirmed by the reduced impact of the vibration stimulation on performance variables. It is suggested that terminal feedback reinforces the acquisition of a more flexible internal model of the perceptuo-motor transformation required for the performance of externally guided rhythmical whole-body movements.
Acknowledgement: The research leading to these results has received funding from the European Community's Seventh Framework Programme FP7/2007-2013 under grant agreement number 214728

16.412 The Naïve Physics Curvilinear Impetus Bias does not Occur for Locomotion
Michael K. McBeath 1 (m.m@asu.edu), Sara E. Brimhall 1, Tyler S. Miller 1, Steven R. Holloway 1; 1 Department of Psychology, Arizona State University

Past research examining naïve physics tendencies has confirmed that a notable proportion of the population exhibits a bias to believe that objects constrained to move along curved paths will continue to curve in the same direction after they emerge free from the constraint. The typical example is predicting the path of a rolling ball that emerges from a spiral maze. The present study tests two competing models that could explain this robust cognitive bias. The first is the idea that people possess anthropomorphic tendencies and assume that if they or another person were running through such a maze, they would continue to lean and turn upon emerging. The second is that the curvilinear impetus bias only occurs for cognitive tasks and disappears when people actually physically navigate out of a curved maze, due to their having access to error-resistant perception-action guidance mechanisms. To test these competing models, we had 50 individuals race out of a spiral maze as fast as they could, and we recorded the extent of path curvature exhibited between the end-point of the maze and a semicircular end-line 10 feet away. The end-line was designed to be equidistant from the end-point of the maze, with soccer cones placed every two feet to provide discrete alternatives of path curvature. The results revealed a unimodal distribution of path curvature with a mean essentially dead straight ahead (mean = 3.1 inches to the side, t=0.44, p=n.s.). We also found no significant correlation between curvature and running speed (r=-0.27, p=n.s.).
The findings support the model that people do not exhibit a naïve physics curvilinear impetus bias for perception-action tasks that allow access to error-resistant guidance mechanisms. Thus, in actual real-world cases of directional locomotive priming, individuals appear to accurately minimize path curvature and running distances in order to minimize navigational time.
Acknowledgement: NSF BCS-0318313 and 0403428

Eye movements: Mechanisms and methods
Orchid Ballroom, Boards 414–425
Friday, May 7, 6:30 - 9:30 pm

16.414 Saccadic target selection and temporal properties of visual encoding
Jelmer P. de Vries 1,2 (j.p.devries@uu.nl), Ignace T.C. Hooge 1,2, Marco A. Wiering 3, Frans A.J. Verstraten 1,2; 1 Utrecht Neuroscience & Cognition, Utrecht University, 2 Helmholtz Institute, 3 Department of Artificial Intelligence, Faculty of Mathematics and Natural Sciences, University of Groningen

The literature shows many tasks in which saccades following short latencies tend to land on salient elements, while those following longer latencies more frequently land on elements similar to the target. This has given rise to the idea that independent bottom-up and top-down processes govern saccadic targeting. Recent findings, however, show that even when the target of a search is the most salient element, the tendency to saccade towards this target decreases as latencies prolong. It is difficult to explain these findings in terms of bottom-up and top-down processes. We investigated whether temporal differences in the encoding of peripheral visual information can explain this finding. The visual system processes low spatial frequency (LSF) information faster than high spatial frequency (HSF) information. Similarly, high-contrast information is processed faster than low-contrast information. The stimuli in our experiments contained two deviating targets on a homogeneous grid of non-targets. In the first experiment, one target deviated in its low spatial frequencies, the other in its high spatial frequencies. The task of the subject was to make a speeded saccade towards either target. For short saccade latencies, a bias towards the low-frequency target was found. Interestingly, with increasing latency this bias disappeared and both targets were selected equally often. These results suggest a link between temporal aspects of encoding and saccadic targeting.
In the second experiment we further tested this theory by varying the contrast of the LSF target. In one condition we raised the contrast of the LSF target, resulting in an increased bias towards this target. Lowering the contrast of the LSF target in a second condition resulted in a shift in bias towards the HSF target. These experiments provide converging evidence that temporal aspects of encoding underlie saccadic target selection.

16.415 The integration of visual and auditory cues for express saccade generation
Peter Schiller 1 (phschill@mit.edu), Michelle Kwak 1; 1 Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology

The integration of information gained through various sensory modalities enables living organisms to execute motor acts rapidly. The purpose of the study was to examine how effectively visual and auditory cues can be integrated for the rapid generation of saccadic eye movements. Previous work has established that when saccadic eye movements are made to singly appearing visual targets, a bimodal distribution of saccadic latencies is often obtained, the first mode of which has been termed "express saccades." In this study we examined the rapidity with which saccadic eye movements can be generated to auditory and visual cues presented singly and in combination. The visual display, presented on a monitor, consisted of a fixation spot followed by a single visual target that appeared either to the left or to the right of the fixation spot. The auditory cue was provided through one of two speakers positioned to the left and right of the monitor. Eye movements were recorded, enabling us to obtain a distribution of saccadic latencies. Data obtained from one of the monkeys we have studied showed 5% express saccades to singly appearing visual targets, 1% express saccades to singly presented auditory cues, but 36% express saccades when single visual and auditory cues were presented together. Statistical analyses showed these effects to be highly significant (p


16.432 Is Myopia Affected By Near Work, Outdoor Activities And/Or Level Of Education?
Adeline Yang 1 (yhuixian@dso.org.sg), Frederick Tey 1, Sheng Tong Lin 1, Gerard Nah 2; 1 DSO National Laboratories, 2 Republic of Singapore Air Force

Introduction: Near work has long been associated with myopia (S-M. Saw et al., 2002; B. Kinge et al., 2002; I.F. Hepsen et al., 2001) and has been shown to affect its progression. In this study, we aimed to determine whether there is a relationship between outdoor activities and myopia, as the prevalence of myopia appears to be lower in children who are more active outdoors (K.A. Rose et al., 2008). In addition, we also aimed to establish whether the level of education affects myopia prevalence. Method: A cohort study was carried out on 16,484 male volunteers, aged between 16 and 21 years, who were pre-enlistees to the Singapore Armed Forces. A demographic survey was conducted to determine their level of education, the type of housing they live in, and the daily amount of near and distance visual work. Refractive status and corneal curvature were measured using the Huvitz NRK-3100 auto-refractor. Results: Pearson's correlation test showed no significant correlation between the amount of near work and myopia. However, individuals who are more active outdoors tend to be less myopic (pr=0.147, p


Background: Research with adults has established that higher-order visual information is processed along two functionally specialized pathways: a ventral stream and a dorsal stream. Surprisingly, however, less is known about how this dissociation emerges and matures during development. The present study (1) investigated the typical development of higher-order visual processes and (2) determined whether the developmental trajectories of these visual processes differ in atypically developing individuals. Methods: In Experiment 1, 30 typically developing adolescents (age 10-16 years) completed four computerized experimental paradigms designed to differentially draw on dorsal or ventral processing resources. These tasks required participants to (a) either decide if two abstract shapes match (ventral task) or fit together (dorsal task), or (b) either pay attention to the identity (ventral) or the location (dorsal) of drawings of buildings, upright faces, and inverted faces. Results show no association between age and accuracy, but reaction times were negatively correlated with age. These relationships held for both dorsal and ventral tasks. Data are compared with accuracy and reaction time results from young adults (age 18-25 years). In Experiment 2, the same computer tasks were completed by a group of atypically developing children with congenital hypothyroidism (CH), a paediatric endocrine disorder caused by a lack of thyroid hormone (TH) that is present at birth. TH is a critical endocrine modulator of normal brain development known to affect development of the visual system. Children with CH had significantly poorer accuracy scores on both ventral and dorsal tasks, and were also significantly slower to judge identity than controls. Conclusions: First, our findings suggest that there is very little development in the dorsal and ventral pathways after ten years of age in typically developing individuals, although processing speed does increase with age. Secondly, TH insufficiency during gestation is associated with impairments in higher-order visual processing in adolescence.
Acknowledgement: March of Dimes, Vision Sciences Research Program

16.437 Magnocellular Deficits in Dyslexia Provide Evidence Against Noise Exclusion Hypothesis
Teri Lawton 1 (tlawton@pathtoreading.com), Garrison Cottrell 1; 1 Department of Computer Science and Engineering, University of California, San Diego, La Jolla, CA 92093

There is significant controversy about the mechanism underlying dyslexia. Sperling et al. (2005; 2006), for example, hypothesize that the underlying mechanism is an inability to ignore noise in visual stimuli (the noise exclusion hypothesis), and that this inability is not due to a magnocellular deficit, as it shows up in both parvocellular-oriented (static, high-frequency Gabor filters) and magnocellular-oriented (counterphase-flickering, low-frequency Gabors) stimuli in their experiments. Dyslexics are differentially impaired in discriminating these stimuli in noise. However, the noise used in their experiments is a flashed white-noise stimulus, which can activate the magnocellular system, a system that has been implicated in figure-ground discrimination. If there is a magnocellular deficit, this would impact both the parvo- and magno-oriented decisions because of poor figure-ground discrimination.
Several studies by the first author, using sinusoidal test and background patterns that optimally activate magnocellular neurons, have shown that dyslexics have reduced contrast sensitivity for direction discrimination. These studies showed that with equal test and background spatial frequencies, dyslexics were initially least sensitive to the direction of movement, but that following training on left-right movement discrimination twice weekly for 12-15 weeks, dyslexics were most sensitive to the direction of movement with equal test and background frequencies. Equal test and background spatial frequencies provide the greatest amount of noise, since test and background patterns are analyzed by neural channels tuned to the same spatial frequencies. Since training rapidly removes this deficit, these data suggest that the deficit in noise exclusion is due to the relatively sluggish magnocellular pathway in dyslexics. Furthermore, this training dramatically improves reading speed in the subjects. Our results are consistent with the view that dyslexia is due to sluggish magnocellular neurons, but not with the view that a noise exclusion deficit, without a concomitant magnocellular deficit, underlies dyslexia.

16.438 The Effects of Acute Alcohol Consumption on the Visual Perception of Velocity and Direction
Sherene Fernando 1 (sferna26@uwo.ca), Fahrin Rawji 1, Alexandra Major 1, Brian Timney 1; 1 Department of Psychology, The University of Western Ontario, London, Canada

The effects of alcohol on velocity and direction discrimination were examined. Participants completed both tasks under control and alcohol (.08% BAC) conditions, conducted at both a "slow" (3°/s) and a "fast" (12°/s) velocity. Stimuli were dark dots on a light background that could vary in speed or direction. They were presented within a 5° circular field on a computer display spanning 12° × 16° of visual angle. Thresholds were measured using a Method of Constant Stimuli and a 2-interval AFC. In the velocity condition, one stimulus always moved at the standard speed and the other, comparison, stimulus varied over a range of 85-115% of the standard velocity. Participants judged which moved faster. Using the same procedure, participants in the direction task judged which of two drifting patterns was moving vertically. The standard was always vertical, while the comparison stimuli ranged from 0.5 to 3.5° to the right of the vertical plane. As expected, results of the velocity task demonstrated a small but significant effect of alcohol, indicating impairment in the general ability to accurately discriminate stimulus velocity. In the direction discrimination condition, performance was impaired at both velocities, but for the slower speed the initial range of directions used resulted in a floor effect, with performance at chance for both the alcohol and no-alcohol conditions. There was a significant effect of alcohol for the higher velocity pattern.
We conclude that, overall, alcohol has a modest effect on the ability to discriminate both the velocity and the direction of moving targets.

Color and light: Adaptation and constancy
Orchid Ballroom, Boards 439–450
Friday, May 7, 6:30 - 9:30 pm

16.439 Color rendering and the spectral structure of the illuminant
Sérgio Nascimento 1 (smcn@fisica.uminho.pt), Paulo Felgueiras 1, João Linhares 1; 1 Department of Physics, Minho University

The increasing availability of light sources with almost arbitrary spectral distributions, like LED- and DLP-based sources, poses the problem of selecting a specific spectral profile. To this effect, the relationships between spectral structure and the visual effects on rendered scenes need to be taken into consideration, a matter that has not been quantified systematically. In this work we addressed this issue by studying, computationally, the chromatic effects of a large set of illuminants with almost arbitrary spectral structure. The illuminants were metamers of a Planckian radiator with a color temperature of 6500 K and metamers of non-Planckian radiators with chromaticity coordinates uniformly distributed over the same isotemperature line. The metamers were generated by Schmitt's elements approach and were parameterized by the spectral distance to the equi-energy illuminant E and by the number of non-zero spectral bands, both quantities measuring the spectral structure. The chromatic effects of each illuminant were quantitatively assessed by the CIE color rendering index (CRI), by a chromatic diversity index (CDI), and by the number of discernible colors estimated for a set of indoor scenes digitized by hyperspectral imaging. It was found that the CRI decreases as the illuminant spectrum becomes more structured, whereas larger values of CDI could only be obtained with illuminants with a small number of non-zero spectral bands, that is, with highly structured spectra. For indoor scenes, the maximum number of discernible colors was also obtained for highly structured spectra. Thus, structured spectra with a low number of non-zero spectral bands seem to maximize the chromatic diversity of rendered scenarios but produce only modest CRI. These results suggest that highly structured illuminants may be best for applications where maximization of chromatic diversity is important.
Acknowledgement: PTDC/EEA-EEL/098572/2008

16.440 A low-cost, color-calibrated reflective high dynamic range display
Dan Zhang 1 (dxz8148@rit.edu), James Ferwerda 1; 1 Munsell Color Science Laboratory, Chester F. Carlson Center for Imaging Science, Rochester Institute of Technology

High dynamic range (HDR) displays are enabling new advances in visual psychophysics, but commercial HDR displays are both expensive and difficult to calibrate colorimetrically. Homebrew HDR displays incorporating LCD panels and digital projectors are relatively inexpensive and can be calibrated, but building such displays requires sophisticated technical skills. We have developed a low-cost, color-calibrated HDR display for vision research that can be constructed and used by researchers without the need
for specialized equipment or advanced engineering abilities. Inspired by the work of Bimber et al., this reflective HDR display incorporates an inkjet printer, a digital video projector, and a digital camera. To display an HDR image, the image is first processed through the iCAM06 image appearance model to produce a standard dynamic range (SDR) image that is sent to the printer. The digital video projector is then roughly positioned so that its image field covers the print. Custom camera-based structured-light image registration software then automatically aligns the projected and printed images. A color calibration module then measures the print colors and determines the values to send to the projector to achieve the best possible reproduction of the original HDR image. This iCAM-based approach to HDR color reproduction goes substantially beyond prior work in terms of its colorimetric accuracy. With respect to intensity and dynamic range, because the print area is substantially smaller than a projector's typical field size, the maximum intensity in the combined image can be quite high; the current display has a peak luminance around 2000 cd/m2 and a dynamic range greater than 20,000:1. While the print-based nature of this display does limit its usefulness for interactive studies, its low cost, do-it-yourself design, and ability to be calibrated should make it a valuable addition to the vision researcher's laboratory.

16.441 The Combined Effect of Chromatic Contrast and Chromatic Assimilation Produced by a Purple Surround on an Achromatic Target
Gennady Livitz 1 (glivitz@gmail.com), Ennio Mingolla 1; 1 Department of Cognitive and Neural Systems, Boston University

Chromatic assimilation and chromatic contrast are two different types of spatio-chromatic interactions that are rarely observed simultaneously. These phenomena are normally considered mutually exclusive, as they shift the chromaticity of an "induced" region in opposite chromatic directions: away from and toward the chromaticity of the surround, respectively. In our displays we observed a shift in the chromaticity of a target achromatic field, induced by a uniform purple surround, in a direction in color space that can be interpreted as the combined effect of chromatic contrast and chromatic assimilation of the surround color. We measured this combined effect by varying stimulus size, stimulus eccentricity, and the binocular disparity of our stimuli. Our results show that chromatic assimilation and chromatic induction do not always cancel each other and may lead to perceptual shifts in chromaticity in a direction in color space that does not coincide with the line formed by the color of the surround and its chromatic complement. For example, due to the impact of a purple surround, a region that would look gray without the chromatic surround does not look green or purplish, but is perceived as blue if viewed from a certain distance.
We explain the observed effects by the structure of the receptive fields of the neurons that encode spatio-chromatic interactions, and by the combination of individual induction effects produced by inputs representing primary chromatic signals at the output of double-opponent neurons.
Acknowledgement: GL and EM were supported in part by CELEST, an NSF Science of Learning Center (NSF SBE-0354378), and HP (DARPA prime HR001109-03-0001). EM was supported in part by HRL Labs LLC (DARPA prime HR001-09-C-0011).

16.442 Variations in achromatic settings across the visual field
Kimberley Halen 1 (halenk2@unr.nevada.edu), Igor Juricevic 1, Kyle McDermott 1, Michael A. Webster 1; 1 Department of Psychology, University of Nevada, Reno

The stimulus spectrum that appears white shows little change between the fovea and near periphery, despite large changes in spectral sensitivity from differences in macular pigment screening (Beer et al., JOV 2005; Webster and Leonard, JOSA A 2008). This perceptual constancy could occur if color coding at different regions of the retina is normalized to the local average spectrum. However, local adaptation could instead lead to changes in the achromatic point across the visual field if the spectral characteristics of the world itself vary across space. Natural scenes in fact include significant spatial variations in chromaticity because of factors such as the spectral differences between earth and sky. We asked whether there might be corresponding differences in achromatic loci in the upper and lower visual fields. Observers dark adapted and then viewed a 25 cd/m2, 2-deg spot flashed repeatedly, 0.5 sec on and 3.5 sec off, on a black background. The chromaticity of the spot was adjusted to appear achromatic using a pair of buttons that varied the chromaticity in terms of the CIE u'v' coordinates. Settings were repeated while observers fixated dim markers so that the spot fell at a range of eccentricities spanning ±60 deg along the vertical meridian. Achromatic settings did not change systematically with location, and in particular did not show a blue to yellow-green trend consistent with outdoor scenes. This could indicate that observers are primarily adapted to environments with more stationary color statistics (e.g., indoor settings) or that achromatic loci are also calibrated by retinally non-local processes.
Acknowledgement: Supported by EY-10834

16.443 Are Gaussian spectra a viable perceptual assumption in color appearance?
Yoko Mizokami 1 (mizokami@faculty.chiba-u.jp), Michael Webster 2; 1 Graduate School of Advanced Integration Science, Chiba University, 2 Department of Psychology, University of Nevada, Reno

Natural illuminant and reflectance spectra can generally be well approximated by a linear model with as few as three basis functions. Some models of color appearance further assume that the visual system constructs a linear representation of spectra by estimating the weights of these inferred functions. However, such models do not accommodate nonlinearities in color appearance such as the Abney effect. Previously, we showed that the hue of lights with Gaussian spectra remains constant over much of the spectrum as bandwidth changes, suggesting that the visual system might adopt an assumption like a Gaussian model of spectra, so that hue is tied to a fixed inferred property of the stimulus such as the spectral centroid (Mizokami et al., 2006).
This model is qualitatively consistent with measures of the Abney effect, and is also consistent with suggestions that natural spectra may in some cases be better described by Gaussian than by linear models (MacLeod and Golz, 2003). Here, we examined to what extent this Gaussian inference provides a sufficient approximation of natural color signals. Spectra from available databases, hyperspectral images, and our own measurements were analyzed to test how well the curves could be fit by either a simple Gaussian with 3 parameters (amplitude, peak wavelength, and standard deviation) or the first three PCA components of standard linear models. The spectra were coded from 400-700 nm in 10 nm steps and were fit using the Matlab Optimization toolbox. Results show that the Gaussian fits were essentially comparable to a linear model with the same degrees of freedom for both reflectance and illumination spectra, suggesting that the Gaussian model could provide a plausible perceptual assumption about stimulus spectra for a trichromatic visual system.
Acknowledgement: EY-10834

16.444 Colour constancy as measured by least dissimilar matching
Alexander D. Logvinenko 1 (a.logvinenko@gcal.ac.uk), Rumi Tokunaga 1; 1 Department of Vision Sciences, Glasgow Caledonian University

Colour constancy is usually measured with the asymmetric colour matching technique. As an exact colour match between objects lit by different chromatic lights is impossible, we instructed our observers to establish the least dissimilar pair when studying colour constancy. Using such a technique, Logvinenko & Maloney (2006) found nearly perfect lightness constancy. The stimulus display consisted of two identical sets of 22 Munsell papers illuminated independently by neutral, yellow, blue, green, and red lights. The lights produced approximately the same illuminance (50 lux). Their CIE 1931 chromaticity coordinates were (0.303, 0.351), (0.392, 0.410), (0.131, 0.150), (0.224, 0.667), and (0.635, 0.321). Four trichromatic observers participated in the experiment. Pointing at a randomly chosen paper under one illumination, the experimenter asked the observer to indicate which paper under the other illumination appeared least dissimilar in colour. All measurements were repeated three times for each observer. When the least dissimilar match was the physically same paper, we call it an exact match. The proportion of exact matches was evaluated as a colour constancy index (CCI). When both sets of papers were lit by the same light, the CCI was 0.92, 0.93, 0.84, 0.78, and 0.76 for the neutral, yellow, blue, green, and red lights respectively. When one illumination was neutral and the other chromatic, the CCI was 0.80, 0.40, 0.56, and 0.32 for the yellow, blue, green, and red lights respectively. Simultaneous colour constancy was therefore found to be much poorer. Yet it was better than expected if one takes into account the illuminant-induced colour stimulus shift as defined by Logvinenko (2009). The visual system therefore somehow overcomes the limitations on colour constancy imposed by the illuminant-induced colour stimulus shift.
References: Logvinenko, A. D., & Maloney, L. T. (2006). Perception & Psychophysics, 68, 76-83. Logvinenko, A. D. (2009). Journal of Vision, 9(11):5.
Acknowledgement: EPSRC
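The Gaussian-versus-linear comparison in abstract 16.443 above can be made concrete with a small worked example. The sketch below is an assumption-laden Python illustration (the authors report using the Matlab Optimization toolbox, not this code), and the reflectance spectrum it fits is invented for the demonstration.

    # Minimal sketch of a 3-parameter Gaussian spectral fit of the kind
    # described in 16.443 (illustration only; the spectrum is made up).
    import numpy as np
    from scipy.optimize import curve_fit

    wavelengths = np.arange(400, 701, 10)  # 400-700 nm in 10 nm steps

    def gaussian(wl, amplitude, peak, sd):
        return amplitude * np.exp(-0.5 * ((wl - peak) / sd) ** 2)

    # Hypothetical reflectance of a greenish surface plus measurement noise.
    rng = np.random.default_rng(0)
    reflectance = gaussian(wavelengths, 0.6, 540.0, 60.0) \
        + rng.normal(0.0, 0.01, wavelengths.size)

    params, _ = curve_fit(gaussian, wavelengths, reflectance, p0=[0.5, 550.0, 50.0])
    rms = np.sqrt(np.mean((reflectance - gaussian(wavelengths, *params)) ** 2))
    print("amplitude, peak (nm), sd (nm):", np.round(params, 3))
    print("RMS fit error:", float(rms))

Comparing this RMS error with that of a reconstruction from the first three principal components of a spectral database would reproduce the Gaussian-vs-linear comparison the abstract reports.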


16.445 Why Does von Kries Law Hold?
Minjie Xu 1 (chokkyvista06@gmail.com), Jinhui Yuan 1, Bo Zhang 1; 1 Department of Computer Science and Technology, Tsinghua University

von Kries law (1878) states that color adaptation might be described by multiplicative gain controls within each class of cone receptor independently. This assumption has been widely used in various theories explaining color constancy phenomena under illuminant changes ever since Ives (1912). Several recent experimental studies provide evidence supporting the idea that von Kries law indeed holds. One prominent example is the invariance of cone-excitation ratios observed by Foster and his colleagues (1994). Though the law is widely accepted now, little work has been done to understand why it holds. Gerhard West and Michael H. Brill (1982) studied the necessary and sufficient conditions for von Kries chromatic adaptation. Their conclusion characterized the properties of illuminant and surface reflectance spectral power distributions under a fixed set of human cone sensitivity curves. However, it is more likely that the cone sensitivity curves evolved with environmental statistics such as the illuminant and surface reflectance spectral distributions. James Dannemiller (1993) attributed von Kries law to the fact that approximately 95% of the variance in these reflectance spectra is captured by the first principal component. However, we find that this might not be the case for surface reflectance spectral data sets other than Krinov (1947). Combining experimental simulation and theoretical analysis, we find that the shape of the cone sensitivity curves might be the major cause of von Kries law. In addition, our findings might provide a novel view for explaining why the cone sensitivity curves are as they are.
Acknowledgement: National Natural Science Foundation of China (No. 60905064), Tsinghua National Laboratory for Information Science and Technology (TNList) Cross-discipline Foundation

16.446 Individual differences in chromatic contrast adaptation
Sarah Elliott 1 (slelliott@ucdavis.edu), Eric Roth 2, Jennifer Highsmith 2, John Werner 1, Michael Webster 2; 1 Department of Ophthalmology & Vision Science, University of California, Davis, 2 Department of Psychology, University of Nevada, Reno

Pre-cortical color channels are tuned primarily to the SvsLM or LvsM cone-opponent (cardinal) axes, but appear to be elaborated in the cortex to form higher-order mechanisms tuned to both cardinal and intermediate directions. Psychophysical evidence for these mechanisms includes adaptation to temporal chromatic contrast. Adapting to any axis of color space selectively reduces perceived contrast along the adapting axis, implying channels that can be selectively tuned to this axis. Previous studies have found that the degree of selectivity for non-cardinal axes varies even among the small number of observers tested (Krauskopf et al., 1982; Webster & Mollon, 1994). Here we tested a larger sample of color-normal observers to explore individual differences in color contrast adaptation, and to examine whether differences are larger for cardinal vs. non-cardinal axes (e.g., because they reflect channels that arise at different visual levels). Observers adapted to a 2 Hz temporal modulation along the LvsM or SvsLM axis, or along 2 intermediate axes chosen to be midway between the cardinal axes.
Test stimuli included 8 fixed-contrast chromaticities falling on either side of the 4 adapting axes. After an initial adaptation (2 min), 1-sec test pulses were interleaved with 4-sec top-ups in a 2° field above a central fixation cross, with the test colors matched by adjusting the color of a concurrent reference stimulus presented in a field below fixation. Changes in the perceived test chromaticities were fit with ellipses to estimate the selectivity of the adaptation for each of the 4 axes. The strength of adaptation varied widely across observers; all observers showed significant though reduced selectivity along non-cardinal axes. Inter-observer differences in these adaptation effects could reflect normal variation in the distribution of cortical color mechanisms and/or the adaptability of these mechanisms.
Acknowledgement: EY10834 and AG04058

16.447 The duration of contingent color aftereffects for different directions in color space
Sean F. O'Neil 1 (seano@unr.edu), Megan Tillman 1, Michael A. Webster 1; 1 Department of Psychology, University of Nevada, Reno

The McCollough effect (ME) is a color aftereffect contingent on orientation. Though studied extensively, the basis for the effect and whether it reflects specialized processes remain poorly understood. MEs are conventionally induced by adapting to gratings that covary in brightness and color (e.g., both bright and red) and then testing on gratings that are achromatic (e.g., bright only). The hue shifts functionally resemble a form of tilt aftereffect within the color-luminance plane (e.g., so that bright bars appear rotated away from bright-red toward bright-green), and are known to have remarkably long persistence (e.g., Vul et al., JOV 2008). We compared the duration of these hue shifts to the shifts in both hue and lightness induced by comparable stimuli in other directions in the color-luminance plane (e.g., to the relative brightness changes induced in isoluminant gratings). Observers adapted to vertical and horizontal gratings with luminance and chromatic (LvsM) contrast paired in or out of phase, and then tracked the aftereffects in achromatic or isoluminant gratings with a matching task. Both types of test gratings show "tilts" away from the color-luminance direction of the adapting grating, which are selective for orientation and which may therefore partly reflect common processes like contrast adaptation. However, the marked persistence of the aftereffects in achromatic stimuli suggests that additional processes - which may be specific to luminance edges - contribute to the hue shifts in the conventional ME, and could support a special role for processes like color spreading in the aftereffect (Broerse et al., Vision Research, 1999). Differences in aftereffect duration for luminance and chromatic tests also argue against suggestions that the long persistence of the ME arises only because the stimuli required to de-adapt are rarely encountered, and suggest instead that the persistence may reflect a special characteristic of the adaptation.
Acknowledgement: Supported by EY-10834

16.448 Cortical aftereffects of time-varying chromatic stimuli
Robert Ennis 1 (rennis250@gmail.com), Qasim Zaidi 1; 1 Graduate Program in Vision Science, SUNY College of Optometry

Colored afterimages of steady fields are predominantly photoreceptor driven (Williams & MacLeod, 1979), but afterimages in other domains have implicated cortical loci.
We demonstrate a new method to measure aftereffects of time-varying chromatic stimuli that can be used to probe properties of later color processes. If the colors of the two halves of a disk start at the same point on a color circle and follow opposite paths for a half-cycle along the circumference, so that they end at the same point, the two halves appear significantly different. This would be compatible with successive contrast from different adapting colors. If equal numbers of frames are subtracted progressively from the ends of the two animations, a point is reached where the two halves look identical to an observer, despite being physically distinct. Adaptation magnitude was estimated from the number of frames that had to be rewound for equalization. For excursions beginning and ending on the ∆(L-M) and ∆(S) cardinal axes, adaptation magnitude decreased from modulation frequencies of 0.5 to 2.0 Hz, both in phase and in time. For half-cycle modulations along the color circles, the colors of the two halves go from the neutral point to opposite extreme points and back for one cardinal axis, and from one extreme to the opposite extreme for the other axis. The adaptation effect of modulating solely along the cardinal axis in opposite directions was significantly less than the effect of the joint modulation along the color circle, especially at low frequencies, implicating neural interactions beyond the LGN. Adding the third harmonic at one-third power to the 0.5 Hz modulation gave a lower adaptation magnitude than subtracting it, by an amount larger than predicted from the sum of independent adaptations, indicating that excursion magnitude is more important than sharp transients.
Acknowledgement: EY07556, EY13312

16.449 Very-long-term chromatic adaptation and short-term chromatic adaptation: Are their influences cumulative?
Suzanne Belmore 1,2 (sbelmore@midway.uchicago.edu), Steven Shevell 1,2,3; 1 Visual Science Laboratories, Institute for Mind and Biology, University of Chicago, 2 Department of Psychology, University of Chicago, 3 Visual Science, University of Chicago

Do very-long-term (VLT) and short-term chromatic adaptation have a cumulative influence on color vision? VLT adaptation results from exposure to an altered chromatic environment experienced over days or weeks. Color shifts from VLT adaptation are measured hours or days after leaving the altered environment. Short-term adaptation results from exposure for a few minutes or less, with color shifts measured within a few seconds or minutes after the adapting light is extinguished. Here, both types of adaptation were combined. Shifts in unique yellow caused by short-term chromatic adaptation can be ~10 times greater than those for VLT adaptation. The specific question considered here is whether the color shift from VLT adaptation is cumulative with the far larger shift from short-term adaptation or whether, instead, the much stronger short-term adaptation eliminates the modest color shifts


16.454 Binocular shape vs. depth perception
Yun Shi 1 (shixiaofish@yahoo.com), Taekyu Kwon 1, Tadamasa Sawada 1, Yunfeng Li 1, Zygmunt Pizlo 1; 1 Department of Psychological Sciences, Purdue University

It has been shown that binocular perception of depth intervals is both inaccurate and unreliable. On the other hand, binocular discrimination of depth order (called stereoacuity) is extremely reliable. Our recent psychophysical experiments showed that human binocular 3D shape recovery of symmetric polyhedra is also extremely reliable and accurate. These results suggest that the binocular shape mechanism relies on binocular judgment of depth order, rather than of 3D distances. Our computational model provided a possible explanation of the underlying perceptual mechanisms by showing how a 3D symmetry constraint interacts with depth order information to produce a 3D metric shape. The question arises as to whether stereoacuity thresholds can actually account for the 3D shape recovery results. The study of Norman & Todd (1998) showed that stereoacuity thresholds are substantially elevated when the points whose depth order is judged are superimposed on the image of a smoothly curved surface. If these results generalize to the case of the vertices of a symmetric polyhedron, will the elevated stereoacuity thresholds account for veridical 3D shape recovery? In order to answer this question we measured thresholds for depth order discrimination between two vertices of a polyhedron in the presence and in the absence of the line drawing of the polyhedron. The threshold was almost twice as large when the polyhedron was present, compared to when the two points were shown in isolation. These results were used to revise our model of binocular 3D shape recovery. We conclude by discussing the role of depth vs. shape information in 3D shape recovery.
Acknowledgement: National Science Foundation, US Department of Energy, Air Force Office of Scientific Research

16.455 Percept of shape distortion induced by binocular disparity and motion parallax
Masahiro Ishii 1 (ishii@eng.u-toyama.ac.jp), Masayuki Sato 2; 1 University of Toyama, 2 The University of Kitakyushu

A flat surface lying in a frontal plane appears slanted in depth about a vertical axis when the image in one eye is horizontally magnified relative to the image in the other eye. The surface appears to slant away from the eye seeing the smaller image. Horizontal magnification disparity also produces shape distortion. Since the vertical angular size of the surface remains the same both with and without horizontal magnification of the image, the side that appears farther away appears larger. A rectangular figure with horizontal magnification disparity is therefore perceived as a horizontally tapered isosceles trapezoid slanted about a vertical axis. The apparent shape distortion induced by disparity does not appear to have been measured systematically, although it is well established that the apparent slant approximates the geometrical prediction. The aim here is to examine the apparent shape distortion induced by disparity. The test stimulus was a random-dot stereogram presented in a mirror stereoscope in a darkroom. The dots were depicted in a rectangular area. The stereoscopic image was a 100-mm square located 500 mm in front of the subject. Ten magnitudes of slant were tested: ±50, 40, 30, 20, and 10°.
Subjects indicated the perceived slant of the test stimulus with an unseen paddle and then used buttons to adjust the taper of a trapezoid on a computer monitor until it coincided with the apparent shape. The apparent slant and shape distortion from motion parallax were also investigated. Subjects monocularly viewed a single random-dot pattern displayed on a computer monitor while making side-to-side head movements. Stimulus translation and head movement were synchronized. For both disparity and motion parallax, the perceived taper angle was smaller than predicted even though the perceived slant was almost veridical. While the predicted taper increases as slant increases, the perceived taper was invariably about 1°.
Acknowledgement: Japan Science and Technology Agency (JST)

16.456 Integration time for the mechanisms serving the perception of depth from motion parallax
Mark Nawrot 1 (mark.nawrot@ndsu.edu), Keith Stroyan 2; 1 Center for Visual Neuroscience, Department of Psychology, North Dakota State University, 2 Math Department, University of Iowa

Our recent quantitative model for the perception of depth from motion parallax (MP), based on the dynamic geometry of MP, proposes that relative object depth (d) can be determined from fixation distance (f), retinal image motion (dθ/dt), and pursuit eye movement (dα/dt) with the formula d/f = dθ/dα (Nawrot & Stroyan, 2009). Given the model's dynamics, it is important to know the integration time required by the visual system to recover dα and dθ, and then estimate d. If the perception of depth from motion is sluggish and needs to "build up" over a period of observation, then the potential accuracy of the depth estimate suffers as the observer moves during the viewing period. A depth-phase discrimination task was used to determine the time necessary to perceive depth from MP. Observers remained stationary and viewed a briefly translating (4 deg/s) random-dot MP stimulus on a CRT (120 Hz) at 57 cm. The stimulus was 6.6 deg², containing 4000 2-min² dots. Fixation on the translating stimulus was monitored with an ASL eye tracker. Stimulus duration was varied within an interleaved staircase procedure for leftward and rightward eye movements. Depth discrimination can be performed with presentations as brief as 16.6 msec, with only two stimulus frames providing both the retinal image motion and the stimulus window motion for pursuit (mean range = 16.6-33.2 msec). This was found both for conditions in which, prior to stimulus presentation, the eye was engaged in ongoing pursuit and for conditions in which the eye was stationary. A large (13 deg²), high-contrast masking stimulus (83 msec) disrupted depth discrimination for stimulus presentations shorter than 60-80 msec in both the pursuit and stationary conditions. We conclude that neural mechanisms serving depth from MP generate a depth estimate quickly,
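The motion/pursuit ratio at the heart of abstract 16.456 lends itself to a one-line computation. The following minimal Python sketch applies the published formula d/f = dθ/dα; the example values are invented for illustration.

    # Minimal sketch of the motion/pursuit law from Nawrot & Stroyan (2009):
    # d/f = (dtheta/dt) / (dalpha/dt), i.e., depth relative to fixation equals
    # the ratio of retinal image motion to pursuit eye movement.
    def relative_depth(fixation_distance_m, retinal_motion_deg_s, pursuit_deg_s):
        """Depth of a point relative to fixation, from the motion/pursuit ratio."""
        return fixation_distance_m * (retinal_motion_deg_s / pursuit_deg_s)

    # Hypothetical example at the experiment's 57 cm viewing distance: a point
    # whose image slips at 0.4 deg/s during a 4 deg/s pursuit lies about 10%
    # of the fixation distance away in depth.
    print(relative_depth(0.57, 0.4, 4.0))  # -> 0.057 (meters)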


luminance-based) motion processing mechanisms, and the relative contributions of the two known types of DFM cues, the accretion-deletion (AD) cue and the common motion (CM) cue. We performed a whole-brain fMRI scan using a mixed (i.e., events-within-blocks) design, which allowed us to compare the responses across blocks as well as across individual trials. Depending on the stimulus block, subjects were shown either stimuli that elicited depth-order percepts or stimuli that did not. Stimuli that elicited the depth-order percept contained both types of motion and both types of DFM cues. During each trial of each stimulus block, subjects reported the perceived depth order using a button press. We found significantly greater responses to depth-order stimuli relative to non-depth-order stimuli in several early retinotopic regions, including V1, V2, V3, V3A, and V4v. The response in V3A reliably reflected, on a trial-to-trial basis, whether the subjects perceived depth order (logistic regression; group data, N = 5; p 0.05), although the responses in both regions showed a slight response suppression by the depth-order stimuli in some subjects. Together, these results identify specific brain regions that may play an important role in DFM cue processing and mediate DFM perception.
Acknowledgement: Supported by Medical College of Georgia

16.459 Transcranial magnetic stimulation improves rotation sensitivity for actively viewed structure from motion
Lorella Battelli 1,2 (lbattell@bidmc.harvard.edu), Giovanni Mancuso 1, Carlo Fantoni 1, Fulvio Domini 1,3; 1 Center for Neuroscience and Cognitive Systems, Italian Institute of Technology, 2 Department of Neurology, Beth Israel Hospital, Harvard Medical School, 3 Department of Cognitive and Linguistic Sciences, Brown University

In previous experiments we measured observers' performance on a rotation-detection task during active viewing of structure-from-motion (SfM) displays. Observers performed a lateral head shift while viewing, either monocularly or binocularly, the same optic flows, which were consistent with either static or rotating random-dot planar surfaces. An Optotrak Certus system was used to update the optic flows in real time as a function of the observer's head position and orientation. Results showed that the addition of a null disparity field increased the likelihood of perceiving surface rotation, reducing rotation sensitivity in the binocular relative to the monocular viewing condition. A possible hypothesis for this phenomenon is that the introduction of a null disparity field creates an inconsistency among the depth cues, forcing the visual system to interpret the optic flow in a way consistent with disparity (a rotating surface far from the point of view) rather than with vergence information (a static surface located at the level of the screen). In order to test this hypothesis we used low-frequency rTMS over the early visual cortex. Neurophysiological inactivation studies (Ponce et al., 2008) have found that visual areas V2/V3 are selective for the recovery of depth from binocular disparity information. Two groups of subjects performed the same rotation-detection task before and after rTMS or sham TMS delivered offline (10 min, 1 Hz) over V2/V3, targeting binocular disparity-sensitive neurons. Consistent with our hypothesis, rTMS induced an improvement in rotation sensitivity that was selective for the binocular condition, while monocular performance remained intact.
16.459 Transcranial magnetic stimulation improves rotation sensitivity for actively viewed structure from motion

Lorella Battelli 1,2 (lbattell@bidmc.harvard.edu), Giovanni Mancuso 1 , Carlo Fantoni 1 , Fulvio Domini 1,3 ; 1 Center for Neuroscience and Cognitive Systems, Italian Institute of Technology, 2 Department of Neurology, Beth Israel Hospital, Harvard Medical School, 3 Department of Cognitive and Linguistic Sciences, Brown University

In previous experiments we measured observers' performance in a rotation-detection task during active vision of structure-from-motion (SfM) displays. Observers performed a lateral head shift while viewing, either monocularly or binocularly, the same optic flows consistent with either static or rotating random-dot planar surfaces. An Optotrak Certus system was used to update the optic flows in real time as a function of the observer's head position and orientation. Results showed that the addition of a null disparity field increased the likelihood of perceiving surface rotation, causing reduced rotation sensitivity for the binocular relative to the monocular viewing condition. A possible hypothesis for this phenomenon is that the introduction of a null disparity field creates an inconsistency among the depth cues, forcing the visual system to interpret the optic flow in a way consistent with disparity (a rotating surface far from the point of view) rather than vergence information (a static surface located at the level of the screen). In order to test this hypothesis we used low-frequency rTMS over the early visual cortex. Neurophysiological inactivation studies (Ponce et al., 2008) have found that visual areas V2/V3 are selective for the recovery of depth from binocular disparity information. Two groups of subjects performed the same rotation-detection task before and after rTMS or sham TMS delivered offline (10 min, 1 Hz) over V2/V3, targeting binocular disparity-sensitive neurons. Consistent with our hypothesis, rTMS induced an improvement in rotation sensitivity that was selective for the binocular condition, while monocular performance remained intact. We conclude that low-frequency rTMS over V2/V3 inhibits binocular disparity-sensitive neurons, allowing the visual system to interpret a binocularly viewed optic flow as consistent with retinal motion information and vergence regardless of disparity information.

16.460 Surface Layout and Embodied Memory: Optic Flow and Image Structure as Interacting Components in Vision

Jing Samantha Pan 1 (jingpan@indiana.edu), Geoffrey P. Bingham 1 ; 1 Department of Psychological and Brain Sciences, Indiana University Bloomington

Introduction: Optic flow and image-based vision are treated by the Two Visual Systems hypothesis (Milner & Goodale, 1996) as anatomically separate systems. We advocate conversely that optic flow and image structure are functional components of a unitary perceptual system. Optic flow provides powerful but temporary depth information; image structure is persistent but weak in specifying depth. When combined, optic flow informs image structure, which provides embodied memory. Method: Two random-textured planes—a large rear plane containing targets seen through holes in a smaller front plane (holes without targets were distracters)—rotated rigidly to reveal depth structure; then the rear plane translated in one of 8 diagonal directions and stopped with the targets occluded. Participants marked locations of hidden targets after some delay, during which they saw either the static image or a blank screen. In Experiment 1, delays were 5 s, 10 s or 15 s with 2 to 15 targets and distracters, respectively, in three conditions: image structure only—holes were outlined but translation of the plane was discontinuous; optic flow only—holes were not outlined but translation was continuous; and optic flow plus image structure. In Experiment 2, delays were 5 s or 25 s and numbers of targets and distracters were 9, 12, 15 or 18, respectively. Results: In Experiment 1, participants could not locate targets with only image structure and no optic flow. With only optic flow, participants correctly located up to 3 targets. With both, participants correctly located more than 60% of the 15 targets with the 15 s delay. In Experiment 2, mean numbers of targets correctly located were 8.0 without blank regardless of delay length; 7.7 with blank and 5 s delay; and 7.0 with blank and 25 s delay. Conclusion: Optic flow and image structure contribute functionally distinct properties to a single visual system. Optic flow yields layout and image structure preserves it.

Object recognition: Development and learning

Vista Ballroom, Boards 501–513

Friday, May 7, 6:30 - 9:30 pm

16.501 Infant learning ability for recognizing artificially-produced 3D objects

Wakayo Yamashita 1 (k3544891@kadai.jp), So Kanazawa 2 , Masami K. Yamaguchi 1,3 ; 1 Chuo University, 2 Japan Women's University, 3 PRESTO, JST

Regardless of changes in viewpoint, observers can recognize objects from almost any direction. Experiencing objects from various viewpoints may enhance the development of this ability. A previous study has shown that 6- to 8-month-old infants who were presented with sequentially rotated face images from profile to frontal view could identify the learned face (Nakato et al., 2005). Since faces are special objects for infants, it may be possible that such ability is limited to facial recognition. Here, we investigate the differences in infant learning ability for faces and objects. To investigate such 3D object recognition, we designed images that were well controlled in both their texture and color.
Objects were created using three-dimensional graphics software (Shade 9 Professional, e-frontier, Inc., Japan; Poser 7, Smith Micro Software, Inc., California). One hundred and twelve sequential images of each object were created by rotating the object, about an axis perpendicular to the visual axis connecting the viewer's eyes and the object, from frontal view to plus or minus 60 deg. 3- to 6-month-old infants participated in the present study, and a familiarization/novelty-preference procedure was used to investigate infants' 3D object recognition. Infants were first familiarized with a face image (face image condition) or a shoe image (shoe image condition). During the familiarization phase, infants were repeatedly shown sequentially rotating images of a face or a shoe for 15 sec × 6 trials. After familiarization, we checked infants' novelty preference between these two conditions. In the test phase, infants were shown the familiarized face (or shoe) and a novel face (or shoe) side by side for 10 sec × 2 trials. Our preliminary results showed that the ability for face learning matures earlier than that for object learning. This result suggests that the face is a special object for infants even in artificially-produced 3D object recognition.

Acknowledgement: This research was supported by PRESTO (Japan Science and Technology Agency) and a Grant-in-Aid for Scientific Research (20119002, 21243041) from the Japan Society for the Promotion of Science.

16.502 The development of part-based and analytical object recognition in adolescence

Elley Wakui 1 (e.wakui@gold.ac.uk), Dean Petters 2 , Jules Davidoff 1 , Martin Juttner 2 ; 1 Goldsmiths, University of London, 2 Aston University

Three experiments (familiar animals, familiar artefacts, newly learned but previously novel objects) investigated different developmental trajectories for part-based and analytical-based object processing between 7 and 16 years. The 3-AFC task required selecting the correct appearance from individual part or part-relational manipulated versions. In all experiments, even the youngest children showed adult-like performance on part changes. However, for animals and artefacts similar levels were only reached by 11-12 years for relational changes. Interestingly, for novel objects, relational- and part-change performance was equivalent throughout the age range. These results suggest an unexpectedly complex trajectory of analytical-based object recognition into adolescence.

16.503 Adult Shape Preferences are Evident in Infancy

Ori Amir 1 (oamir@usc.edu), Rachel Wu 3 , Irving Biederman 1,2 ; 1 Psychology, University of Southern California, 2 Neuroscience, University of Southern California, 3 Psychology, Birkbeck, University of London

People and macaque IT cells are more sensitive to nonaccidental than metric differences (e.g., Biederman et al., 2009; Kayaert et al., 2003). For example, straight vs. curved contours (a nonaccidental difference) are more readily discriminated (and produce greater IT cell modulation) than two curved contours that differ in their degree of curvature (a metric difference). Similarly, parallel vs. nonparallel contours are more readily distinguished than two nonparallel contours that differ in their angle of convergence. Straight and parallel are singular values, zero curvature or convergence, respectively, as opposed to curved or nonparallel, which can assume an infinite number of values. Are there spontaneous preferences for one or the other kind of value? And, if so, are these preferences manifested early in life? 5-month-old human infants and adults viewed a pair of geons arranged left and right on the screen. The geons differed in at least one nonaccidental, generalized-cylinder property. For example, one geon could be a cylinder with a straight axis and the other a cylinder with a curved axis. Or one could have parallel sides (a cylinder or a brick) and the other nonparallel sides (a cone or a wedge). Both infants and adults showed a strong, significant preference for initially fixating the geon with a nonsingular value, i.e., curved or nonparallel. Both groups of subjects also fixated longer on that initial value, although this effect was only reliable for the adults. This initial preference for nonsingular values, as well as search asymmetries which show pop-out for nonsingular but not for singular values (Treisman & Gormican, 1988), may be a consequence of greater neural activation to such stimuli (and, possibly, greater opioid release), as reflected in greater fMRI activation to the nonsingular values of these stimuli in the ventral pathway.

Acknowledgement: NSF BCS 04-20794, 05-31177, 06-17699

16.504 Visual recognition of filtered objects in normal aging: A parvocellular impairment?

Pierre Bordaberry 1 (pierre.bordaberry@wanadoo.fr), Sandrine Delord 1 ; 1 Laboratoire de psychologie EA 4139, Université Victor Segalen Bordeaux 2

Normal aging of visual processing was investigated using localization and categorization of filtered pictures of real objects. Image filtering aimed at biasing processing toward magnocellular (low-pass), parvocellular (band-pass) or both (no filtering) pathways, whereas the tasks served to dissociate between the dorsal (localization) and ventral (categorization) pathways. Thirty young adults (mean age 22.6, SD 1.2) and 23 older adults (mean age 60.1, SD 6.8) were asked to semantically categorize (animals vs. tools) or to localize (up vs. down) 120 stimuli that were presented onscreen for 200 ms in three different versions: low-pass filtered (centered on 0 cpd, with spatial frequencies up to 3.8 cpd), band-pass filtered (centered on 3.8 cpd, with spatial frequencies from 1.9 cpd up to 7.7 cpd), and control (non-filtered). The main results were the interactions between task, group and filter that were found on error rate and on RT (p = .07). Contrast analysis showed that, in the semantic categorization task, a decreased correct response rate and increased RT were observed for older adults relative to young adults, especially for the band-pass filtered objects. In the localization task, the age-related deficit was larger for band-pass filtered than for the other objects on RT, but was equivalent for band-pass filtered and non-filtered objects on error rate. Compared to young adults, older adults showed deteriorated performance specifically in the conditions that isolated band-pass information, whatever the pathway involved, either dorsal or ventral. Moreover, magnocellular and parvocellular interactions were found when the task involved the dorsal pathway. Our results are consistent with those of Viggiano et al. (2005, Archives of Gerontology and Geriatrics), giving additional evidence for a parvocellular loss in early normal aging.
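A sketch of how such spatial-frequency-filtered versions of an image can be constructed, assuming a known pixels-per-degree conversion; the function and values are illustrative, not taken from the study:

```python
import numpy as np

def sf_filter(img, ppd, low_cpd, high_cpd):
    """Keep only spatial frequencies between low_cpd and high_cpd (cycles/deg).

    img: 2-D grayscale array; ppd: pixels per degree of visual angle.
    """
    f = np.fft.fftshift(np.fft.fft2(img))
    fy = np.fft.fftshift(np.fft.fftfreq(img.shape[0]))  # cycles/pixel
    fx = np.fft.fftshift(np.fft.fftfreq(img.shape[1]))
    radius_cpd = np.hypot(*np.meshgrid(fy, fx, indexing="ij")) * ppd
    mask = (radius_cpd >= low_cpd) & (radius_cpd <= high_cpd)
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

img = np.random.rand(256, 256)              # stand-in for an object photograph
ppd = 32                                    # assumed display resolution
low_pass = sf_filter(img, ppd, 0.0, 3.8)    # magnocellular-biased version
band_pass = sf_filter(img, ppd, 1.9, 7.7)   # parvocellular-biased version
```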
16.505 Visual span as a sensory bottleneck in learning to read

Matthieu Dubois 1,2 (matthdub@gmail.com), Sylviane Valdois 1 ; 1 Psychology and NeuroCognition Lab, CNRS & Université Pierre Mendès-France, 2 Psychology and Neural Science, New York University

The visual span is the number of letters, arranged horizontally as in text, that can be recognized without moving the eyes. It represents a sensory, bottom-up bottleneck that limits reading speed (Legge et al., 2007). In adult fluent readers, the visual span equals the uncrowded span, the number of characters that are not crowded. Reading rate is proportional to the uncrowded span (Pelli & Tillman, 2008). But what about learning to read? Developmental growth of the visual span accounts for 35-52% of the reading speed variability in English-speaking children (Kwon et al., 2007). Here we investigate whether this relationship applies to French-speaking children and to dyslexics. In two age-matched groups of 10 dyslexic and 38 learning-to-read children (from 3rd to 7th grade), we estimated the visual span and reading rate. As predicted by the hypothesis, we found that visual span size and reading speed both increase linearly with chronological age in normally reading children. Consistent with Kwon et al.'s (2007) results, a significant part of the control participants' reading speed was accounted for by their visual span size. Dyslexics had small visual spans and slow reading rates. In nearly half (4 of 10) of the dyslexic sample, reading slowness was accounted for by visual span shrinkage. For the remaining dyslexic participants, additional factors are required to explain their slow reading speed.

Kwon, M., et al. (2007). Developmental changes in the visual span for reading. Vision Research, 47(22), 2889-2900.

Legge, G. E., et al. (2007). The case for the visual span as a sensory bottleneck in reading. Journal of Vision, 7(2), 1-15.

Pelli, D. G., & Tillman, K. (2008). The uncrowded window of object recognition. Nature Neuroscience, 11(10), 1129-1135.

16.506 Is there a functional overlap between the expert processing of characters from alphabetic and non-alphabetic writing systems?

Zhiyi Qu 1 (zyqu@psy.cuhk.edu.hk), Alan C.-N. Wong 1 , Rankin Williams McGugin 2 , Isabel Gauthier 2 ; 1 Department of Psychology, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong, 2 Department of Psychology, Vanderbilt University, Nashville, Tennessee, USA

Previous ERP and fMRI studies have shown that concurrent processing of units from alphabetic and non-alphabetic writing systems, such as Roman letters and Chinese characters, activates overlapping brain regions.
It is unknown, however, whether different types of characters simply recruit separate yet nearby neural networks, or rather there are shared mechanisms for expert processing of characters independent of writing system. Here we study the functional overlap of expert character processing for different writing systems by examining the interference in a visual search task involving processing of multiple types of characters. Chinese-English bilinguals and English readers were asked to search for target Roman letters among images presented sequentially in a rapid serial visual presentation (RSVP) stream. The search for Roman letters occurred either in a sequence of Roman and Chinese distractors, or in a sequence of Roman and pseudoletter distractors. Bilinguals performed worse than English readers during Roman letter search among Roman and Chinese characters, whereas there was no group difference in performance during Roman letter search among Roman and pseudoletter distractors. In other words, the addition of Chinese distractors affected Roman letter search only for bilinguals. The existence of familiar distractors (Chinese characters for bilinguals) alone was insufficient to explain the finding. This can be shown in English readers, who performed similarly when searching for pseudoletter targets among pseudoletter and Roman (familiar) distractors compared with searching among pseudoletter and Chinese (unfamiliar) distractors. Overall, we showed common expert processing mechanisms shared by characters in both alphabetic and non-alphabetic writing systems.

Acknowledgement: This research was supported by the Direct Grant (2020939) from the Chinese University of Hong Kong and the General Research Fund (452209) from the Research Grants Council of Hong Kong to A.W. and through the Temporal Dynamics of Learning Center (NSF Science of Learning Center SBE-0542013).

16.507 Not all spaces stretch alike: How the structure of morphspaces constrains the effect of category learning on shape perception

Jonathan Folstein 1 (jonathan.r.folstein@gmail.com), Isabel Gauthier 1 , Thomas Palmeri 1 ; 1 Department of Psychology, College of Arts and Science, Vanderbilt University

How does the way we experience and categorize the world affect the way we visually perceive the world? By some perspectives, visual representations provide input for categorization but are not significantly altered by categorization. Others argue that perception is required for categorization but that categorization also alters visual perception. The latter view is supported by studies showing that visual features of categorized objects become more discriminable following category learning, but only if the features are useful or "diagnostic" for categorization. Evidence for this phenomenon is mixed, however. We investigate an explanation that has remained unexplored up to now: the structure of the morphspaces categorized by participants. Studies that do not find increases in discriminability often use "polar" morphspaces, with morph-parents lying at corners of the space, while studies with positive results use "dimensional" spaces, defined by orthogonal morphlines, each a dimension created by morphing two parents. Using the same four morph-parents, we created dimensional and polar morphspaces matched in mean pair-discriminability. Categorization caused a selective increase in discriminability along the diagnostic dimension of the dimensional space, but not the polar space. This suggests that polar morphspaces should be used if one wishes to avoid selective increases in perceptual discriminability caused by categorization, but dimensional morphspaces should be used if one is interested in the effect of selective attention to object properties. In addition, our results suggest that previous fMRI and electrophysiological studies finding little effect of category learning in the visual system (as well as modest behavioral effects on perception) may have been limited by the use of polar spaces.

Acknowledgement: Temporal Dynamics of Learning Center (SBE-0542013)
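One way to picture the difference between the two space types is in terms of the blend weights given to the four morph-parents; the construction below is a schematic guess at that geometry, not the authors' stimulus-generation code:

```python
import numpy as np

parents = np.random.rand(4, 16)  # four parent shapes as feature vectors

def dimensional_morph(a, b):
    """Dimensional space: two orthogonal morphlines (P0->P1 and P2->P3);
    coordinates a, b in [0, 1] each blend one parent pair."""
    dim1 = (1 - a) * parents[0] + a * parents[1]
    dim2 = (1 - b) * parents[2] + b * parents[3]
    return (dim1 + dim2) / 2

def polar_morph(w):
    """Polar space: parents sit at the corners; w holds four blend
    weights that sum to 1."""
    w = np.asarray(w, float) / np.sum(w)
    return w @ parents

stim_d = dimensional_morph(0.3, 0.7)
stim_p = polar_morph([0.4, 0.2, 0.2, 0.2])
```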
16.508 Eye movement patterns during object recognition are modulated by perceptual expertise and level of stimulus classification

Lina Conlan 1 (l.i.conlan@bangor.ac.uk), Alan Wong 2 , Charles Leek 1 ; 1 Wales Institute for Cognitive Neuroscience, School of Psychology, Bangor University, UK, 2 Department of Psychology, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong

In a previous study, Leek & Johnston (2008, Platform talk, Vision Sciences Society) showed that fixation patterns during three-dimensional object recognition show a preference for image regions containing local concave curvature minima at surface intersections. In this study we examined the extent to which fixation-based local shape analysis patterns are influenced by the perceptual expertise of the observer and the level of stimulus classification required by the task. The study was based on the paradigm developed by Wong, Palmeri & Gauthier (2009, Psychological Science, 20, 1108-1117), in which observers are extensively trained to categorize sets of novel objects (Ziggerins) at either a basic or subordinate level of classification. The effects of training were measured by comparing performance between a pre- and post-test sequential shape matching task that required either basic- or subordinate-level judgements. In addition, we also recorded fixation patterns during the pre- and post-tests. Fixation data were analysed using the FROA methodology (Johnston & Leek, 2009, Journal of Eye Movement Research, 1(3):5, 1-12). The results showed significant effects of training on shape matching RTs in the post-tests. In particular, participants showed evidence of perceptual expertise at making basic and subordinate-level shape classification judgements. We also found that the acquisition of perceptual expertise resulted in changes in the local spatial distributions of fixational eye movement patterns observed in the pre- and post-tests. This finding provides a clear link between fixation-based shape analysis patterns, perceptual expertise, and the level of shape classification being undertaken by the observer.

Acknowledgement: This work was supported by an ESRC/EPSRC grant (RES-062-23-2075) awarded to CL.

16.509 Knowledge influences perception: Evidence from the Ebbinghaus illusion

Matthew Hughes 1 (matthew.hughes@villanova.edu), Diego Fernandez-Duque 1 ; 1 Psychology Department, Villanova University

A fundamental question in cognitive science is the relation between knowledge and perception: does our knowledge of the world influence the way we see it? To help answer this question, we used the Ebbinghaus illusion, in which a circle looks larger when surrounded by smaller circles than when surrounded by larger ones. Unlike circles, coins – such as quarters or dimes – have a fixed size, and we predicted that such knowledge of object constancy would weaken the perceptual illusion. A hundred observers reported the apparent size of a quarter when surrounded by dimes, and when surrounded by one-dollar coins. The apparent size of the quarter was compared to the apparent size of a circle when surrounded by small circles, and when surrounded by big circles. Consistent with our hypothesis, the illusion was weakened for coins. We interpret this result to suggest that visual perception is influenced by semantic knowledge, such as the knowledge of coins as objects of invariant size.

16.510 Benefits of a Hybrid Spatial/non-Spatial Neighborhood Function in SOM-based Visual Feature Learning

Rishabh Jain 1 (rishabh@usc.edu), Bartlett Mel 1,2 ; 1 Neuroscience Graduate Program, University of Southern California, 2 Biomedical Engineering Department, University of Southern California

Neurally-inspired self-organizing maps typically use a symmetric spatial function such as a Gaussian to scale synaptic changes within the neighborhood surrounding a maximally stimulated node (Kohonen, 1984). This type of unsupervised learning scheme can work well to capture the structure of data sets lying in low-dimensional spaces, but is poorly suited to operate in a neural system, such as the neocortex, in which the neurons representing multiple distinct feature maps must be physically intermingled in the same block of tissue. This type of "multi-map" is crucial in the visual system because it allows multiple feature types to simultaneously analyze every point in the visual field. The physical interdigitation of different feature types leads to the problem, however, that neurons can't "learn together" within neighborhoods defined by a purely spatial criterion, since neighboring neurons often represent very different image features. Co-training must therefore also depend on feature similarity; that is, it should occur in neurons that are not just close, but also like-activated. To explore these effects, we have studied SOM learning outcomes using (1) pure spatial, (2) pure featural, and (3) hybrid spatial-featural learning criteria. Preliminary results for a 2-dimensional data set (of L-junctions) embedded in a high-dimensional space of local oriented edge features show that the hybrid approach produces significantly better organized maps than do either pure spatial or non-spatial learning functions, where map quality is quantified in terms of smoothness and coverage of the original data set.

Acknowledgement: This work is supported by NEI grant EY016093
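As a rough sketch of the three learning criteria being compared (a standard Kohonen update with the neighborhood term swapped out; the weighting scheme and parameters are illustrative guesses, not the authors' model):

```python
import numpy as np

rng = np.random.default_rng(1)
grid = 10                         # 10 x 10 map
W = rng.random((grid * grid, 8))  # one weight vector per node
xy = np.array([(i, j) for i in range(grid) for j in range(grid)], float)

def som_step(x, lr=0.1, sigma_s=2.0, sigma_f=0.5, mode="hybrid"):
    bmu = np.argmin(((W - x) ** 2).sum(axis=1))   # best-matching unit
    d_space = np.linalg.norm(xy - xy[bmu], axis=1)
    d_feat = np.linalg.norm(W - W[bmu], axis=1)
    h_s = np.exp(-d_space**2 / (2 * sigma_s**2))  # spatial neighborhood
    h_f = np.exp(-d_feat**2 / (2 * sigma_f**2))   # featural neighborhood
    h = {"spatial": h_s, "featural": h_f, "hybrid": h_s * h_f}[mode]
    W[:] += lr * h[:, None] * (x - W)             # co-train the neighborhood

for _ in range(1000):
    som_step(rng.random(8))
```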
16.511 How do we recognize our own stuff? Expert vs. generic recognition of household items

Lauren Kogelschatz 1 (lkogelsc@fau.edu), Elan Barenholtz 1 ; 1 Dept. of Psychology, Florida Atlantic University

Previous research on object recognition—as opposed to face recognition—has primarily focused on 'generic' objects (e.g. identifying an object as a car), in which different individuals are assumed to share the same basic knowledge about the target objects. However, we are all 'experts' with regard to a particular class of stimuli: the objects we see and use every day in our home or work environment. The current study aims to address how such 'expert' recognition compares with generic recognition of household objects. We compared performance for expert observers—in which the target objects came from the subject's own home—vs. generic observers, who were unfamiliar with the particular environment from which the objects were drawn. Recognition performance was measured using two paradigms: 'pixelation', in which subjects progressively increased the resolution of the image of the object until they could recognize it, and 'modified bubbles', in which subjects had to progressively reveal the image of the object by removing square checks from an occluder obscuring it. In addition, we assessed the role of specific features (color, size, object type) across expert and generic observers. We found a large advantage for the expert observers overall, as well as differences between expert and generic observers in the role of specific features.

16.512 How do Task-dependent Attentional Demands Alter How Objects are Learned?

Jeffrey Markowitz 1,2,3,4 (jmarkow@cns.bu.edu), Yongqiang Cao 1,2,3,4 , Stephen Grossberg 1,2,3,4 ; 1 Department of Cognitive and Neural Systems, 2 Center for Adaptive Systems, 3 Center of Excellence for Learning in Education, Science, and Technology, 4 Boston University

We learn to recognize objects in the world in environments whose attentional demands vary greatly. How does such learning depend upon task-dependent attentional demands? Object recognition needs to be tolerant, or invariant, with respect to position, size, and object view changes. In monkeys and humans, a key area for recognition is the anterior inferotemporal cortex (ITa). Recent neurophysiological data show that ITa cells with high object selectivity often have low position tolerance. We propose a neural model whose cells learn to simulate this tradeoff, as well as ITa responses to image morphs, while explaining how invariant recognition properties may arise gradually due to processes across multiple cortical areas, including the cortical magnification factor, multiple receptive field sizes, and top-down attentive matching and learning properties that may be tuned by task requirements to attend to either concrete or abstract visual features. The model predicts that data from the tradeoff and image morph tasks emerge from different task-dependent levels of attentive vigilance in the animals performing them. Computer simulations predict how receptive field properties would change under different task-sensitive vigilance levels. The model also predicts how vigilance may be controlled by mismatches between top-down learned expectations and bottom-up perceptual inputs, leading to acetylcholine release in neocortical circuits and an increase in vigilance. These results emphasize the importance of top-down attentional mechanisms in object learning and recognition, and of the need to carefully monitor task demands in studies of perceptual and cognitive processing.

Acknowledgement: Supported in part by CELEST, an NSF Science of Learning Center, and by the SyNAPSE program of the Defense Advanced Research Projects Agency.

16.513 Discrimination training builds position tolerant object representations

David Remus 1 (remus@stanford.edu), Kalanit Grill-Spector 1,2 ; 1 Department of Psychology, Stanford University, 2 Neuroscience Institute, Stanford University

Studies of perceptual learning have demonstrated that when observers are trained to discriminate low-level image features, such as orientation or contrast, in a single retinal position, performance improvements are specific to the trained stimuli and position. However, it is unknown whether perceptual learning of objects is similarly specific to both the trained stimuli and position. If perceptual learning of objects occurs at lower-level stages of visual processing, it may display position sensitivity. However, if learning of objects occurs in higher-level visual regions, which show decreased retinotopic sensitivity, learning effects may generalize across retinal positions. We investigated whether learning to discriminate among novel objects in a single retinal position improves performance in the trained position, in untrained positions, or in cases where the objects to be discriminated appear in two separate positions (swap). 14 observers were trained with feedback to discriminate among 24 exemplars from a single category of novel objects, each of which was shown in one of two possible retinal positions over the course of 5 days (8640 total exposures per observer). After training, observers' discrimination performance significantly increased (mean d' increase = 1.1 ± 0.12 SEM) for the trained but not untrained objects. Training improvements were not significantly different across the trained positions, untrained positions, or swap conditions. Generalization across positions occurred despite the fact that a given object was only observed in one retinal position during training. 17 additional observers participated in an identical experiment but were not given feedback during training. Learning improvements were smaller without feedback (mean d' increase = 0.70 ± 0.13 SEM), but resulted in the same category-specific, position-general profile. Our results suggest that discrimination training on objects is mediated by high-level visual regions with large receptive fields, and that building position-invariant representations of objects does not necessitate experience with these objects in many retinal positions.

Acknowledgement: NEI R01 EY019279-01A1
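The performance measure quoted above is the standard signal-detection sensitivity index; a minimal reminder of how a d' change score can be computed (the trial counts are made up):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return z(hit_rate) - z(fa_rate)

# Hypothetical pre- and post-training counts for one observer:
pre = d_prime(60, 40, 35, 65)
post = d_prime(85, 15, 20, 80)
print(f"d' increase = {post - pre:.2f}")
```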
Face perception: Development

Vista Ballroom, Boards 514–527

Friday, May 7, 6:30 - 9:30 pm

16.514 Revisiting upright and inverted face recognition in 6- to 12-year-old children and adults

Adelaide de Heering 1,2 (adeheer@mcmaster.ca), Bruno Rossion 2 , Daphne Maurer 1 ; 1 McMaster University, Hamilton, Ontario, Canada, 2 Université Catholique de Louvain, Belgium

Adults are experts at recognizing faces. However, there is still controversy about how this ability develops with age, with some arguing for adult-like processing by 4-6 years of age (Crookes & McKone, 2009) while others maintain that this ability undergoes protracted development (Mondloch et al., 2002). Here we tested 108 6- to 12-year-old children and 36 young adults with a digitized version of the Benton Face Recognition Test (Benton et al., 1983), which is known to be a sensitive tool for assessing face recognition abilities (Busigny & Rossion, in press). Participants had to identify the 3 faces among 6 alternatives that matched the target face despite changes in viewpoint and lighting. The faces were projected upright and upside-down in separate blocks, with order counterbalanced across participants. Children's correct response times did not improve with age, for either upright or inverted faces, but were significantly slower than those of adults for both conditions. This pattern is consistent with known increases with age in attention and information processing. Accuracy improved between 6 and 12, and significantly more for upright than inverted faces, leading to a larger face inversion effect in older children. Inverted face recognition improved slowly until late childhood, whereas the improvement for upright faces was largest before versus after 8 years of age, with a further enhancement by young adulthood. Together, the results indicate that during childhood face processing becomes increasingly tuned to upright faces, likely as a result of increasing experience.

16.515 Eyes on the target: A comparison of fine-grained sensitivity to triadic gaze between 8-year-olds and adults

Mark Vida 1 (vidamd@mcmaster.ca), Daphne Maurer 1 ; 1 Department of Psychology, Neuroscience & Behaviour, McMaster University

Adults are able to determine which object in the environment someone is looking at with high precision (triadic gaze). By age 6, children can detect large (10°) differences in triadic gaze (Doherty et al., 2009). Here, we developed a child-friendly procedure to compare sensitivity to small horizontal differences in triadic gaze between 8-year-olds and adults (n = 18/group). Participants sat in front of a computer monitor on which they saw faces fixating a series of points (separated by 1.6°) that were physically marked on a board halfway between them and the monitor. The task was to indicate whether each face appeared to be looking to the left or right of one of three target points (center, 6.4° left or 6.4° right). All participants were at least 75% correct on a practice block completed before each experimental block. Adults were highly sensitive to deviations from the central target, with a mean error of 0.83° (calculated from the .25 and .75 points on the fitted psychometric curves). 8-year-olds were not as sensitive (M error = 2.05°, p
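The error measure described for this task (derived from the .25 and .75 points of a fitted psychometric function) can be sketched as follows; the cumulative-Gaussian model and example data are assumptions, not the authors' fitting code:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Gaze deviation from the target (deg) vs. proportion of "right" responses.
x = np.array([-4.8, -3.2, -1.6, 0.0, 1.6, 3.2, 4.8])
p_right = np.array([0.05, 0.10, 0.30, 0.50, 0.70, 0.90, 0.95])

def cum_gauss(x, mu, sigma):
    return norm.cdf(x, mu, sigma)

(mu, sigma), _ = curve_fit(cum_gauss, x, p_right, p0=[0.0, 2.0])

# .25 and .75 points of the fitted curve; error = half their separation.
x25, x75 = norm.ppf([0.25, 0.75], mu, sigma)
print(f"error = {(x75 - x25) / 2:.2f} deg")
```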


the similarity of faces share a common perceptual process, while judgments of attractiveness require attention to properties that are not required for the identification of an individual face.

Acknowledgement: NIH Grant # EY013602, KN, AY

16.517 Children's Face Coding is Norm-Based rather than Exemplar-Based: Evidence From Face Identity Aftereffects

Linda Jeffery 1 (linda@psy.uwa.edu.au), Gillian Rhodes 1 , Elinor McKone 2 , Elizabeth Pellicano 1,3 , Kate Crookes 2 , Libby Taylor 1 ; 1 The University of Western Australia, 2 The Australian National University, 3 Centre for Research in Autism and Education, Institute of Education, London

Children perform more poorly than adults on tests of face identification, yet the source of their difficulty is controversial, with recent evidence pointing to general cognitive immaturity rather than differences in the use of specialized face-coding mechanisms such as holistic coding. However, not all aspects of children's face coding are well studied, and relatively little is known about children's face-space. Immaturity in face-space is therefore a potential source of children's face identification difficulties. We used face identity aftereffects to investigate children's face-space. Previous studies have shown that 8-year-olds experience face identity aftereffects and their aftereffects do not differ quantitatively from adults'. In the present study we tested younger children and found that face identity aftereffects were present as early as 4-5 years of age and did not change quantitatively between 5 and 8 years of age. However, children's aftereffects, including those of 8-year-olds, were larger than adults', suggesting that children's face-space may not be mature by 8 years of age. We then conducted additional tests to determine whether a major qualitative change in how faces are represented in face-space could occur between 8 years of age and adulthood. Specifically, we investigated whether children's face identity aftereffects, like those of adults, reflect norm-based coding or instead result from exemplar-based coding. These tests showed that children's face-space coding is norm-based because (1) children's face identity aftereffects were larger for adaptors far from the norm than for adaptors closer to the norm, (2) children's aftereffects were larger for opposite adapt-test pairs than non-opposite pairs equated for perceptual similarity, and (3) children perceive faces close to the average as "neutral" for identity. We conclude that there is no evidence of a qualitative change from exemplar to norm-based coding between 8 years and adulthood.
Children's larger aftereffects may reflect other immaturities in children's face-space, such as more flexible norms.

Acknowledgement: Australian Research Council Discovery Grants DP0770923, DP0877379, DP0984558

16.518 Adaptation effect for facial identity in infants

Megumi Kobayashi 1 (oc084001@grad.tamacc.chuo-u.ac.jp), Yumiko Otsuka 2,3 , Emi Nakato 4 , So Kanazawa 2 , Masami K. Yamaguchi 1,5 , Ryusuke Kakigi 4 ; 1 Department of Psychology, Chuo University, 2 Department of Psychology, Japan Women's University, 3 Japan Society for the Promotion of Science, 4 Department of Integrative Physiology, National Institute for Physiological Sciences, 5 PRESTO, Japan Science & Technology Agency

By using the fMRI-adaptation technique, recent studies have demonstrated that the face-specific regions of the fusiform face area (FFA) and the superior temporal sulcus (STS) show an adaptation effect for facial identity: a reduced activation to repeated presentation of an identical face compared to presentation of different facial images (e.g., Andrews & Ewbank, 2004). In the present study, we used near-infrared spectroscopy (NIRS) to examine whether a similar facial identity adaptation effect is shown in infants. Using NIRS, we compared the hemodynamic responses of infants during the presentation of an identical face and the presentation of different faces. Based on our previous studies investigating face-related neural activation to faces by using NIRS (Otsuka et al., 2007; Nakato et al., 2009; Honda et al., 2009), we focused on the bilateral temporal regions. We hypothesized that infants would show decreased brain activity during the repeated presentation of the same face compared to the presentation of different faces. The responses were compared to the activation in the baseline period, in which we presented various images of vegetables. The results were as follows: (1) the infants' brain activities in the channels surrounding the T5 and T6 regions increased during the observation of different faces compared to the baseline, suggesting that brain activity in infants' STS can be measured; (2) the repeated presentation of an identical face led to a significant reduction in the oxy-Hb concentrations compared to the presentation of different faces. These results suggested that the infants' STS showed an adaptation effect for facial identity. Our findings are consistent with previous fMRI studies showing the adaptation effect in face recognition in adults' STS.

Acknowledgement: This research was supported by PRESTO (Japan Science and Technology Agency), a Grant-in-Aid for Scientific Research (18300090) from the Japan Society for the Promotion of Science, a Grant-in-Aid for Scientific Research on Innovative Areas, "Face perception and recognition" (20119002), and "Development of biomarker candidates for social behavior," carried out under the Strategic Research Program for Brain Sciences, by the Ministry of Education, Culture, Sports, Science and Technology. We are grateful to Prof. Norihiro Sadato, National Institute for Physiological Sciences, for his technical assistance.
We also thank Yuko Hibi, Aki Tsuruhara, Midori Takashima, Jaile Yang, and Yuka Yamazaki for their help in data collection.

16.519 Infants' neural responses to facial expressions using Near-Infrared Spectroscopy

Emi Nakato 1 (nakato@nips.ac.jp), Yumiko Otsuka 2,3 , So Kanazawa 2 , Masami K. Yamaguchi 4,5 , Ryusuke Kakigi 1 ; 1 National Institute for Physiological Sciences, 2 Japan Women's University, 3 Japan Society for the Promotion of Science, 4 Chuo University, 5 PRESTO, Japan Science and Technology Agency

Facial expressions play an important role in social communication during infancy. 3-month-olds can discriminate between happy and angry faces (Barrera & Maurer, 1981), and 7-month-olds have the ability to categorize happy facial expressions, but not fearful ones (Ludemann & Nelson, 1988). Neuroimaging studies in adults revealed that the superior temporal sulcus (STS) is implicated in the processing of facial expressions (Haxby et al., 2000). Our previous near-infrared spectroscopy (NIRS) study demonstrated that the right STS was mainly activated in the perception of faces in infants (Nakato et al., 2009). However, the infant brain regions involved in perceiving facial expressions have not been investigated. To examine whether STS is responsible for the perception of facial expressions in infants, we used NIRS to measure the neural activation in STS while infants looked at happy and angry faces. Twelve 6- and 7-month-old infants viewed five happy and five angry female faces passively. The measurement area was located in the bilateral temporal area, centered at T5 and T6 according to the International 10-20 system of EEG. Our findings indicated that the time course of the average changes in oxy-Hb concentrations showed a distinct pattern of hemodynamic response between happy and angry faces. The hemodynamic response increased gradually when infants looked at happy faces. In contrast, the hemodynamic response peaked quickly when infants looked at angry faces. Following this peak, the hemodynamic response decreased until the stimuli disappeared. Moreover, we found that the right temporal area of the infants' brain was significantly activated against the baseline when infants looked at angry faces, while the left temporal area was activated for happy faces. These findings suggest hemispheric differences in STS when processing positive and negative facial expressions in infants.

Acknowledgement: This research was supported by a grant to MKY from PRESTO (Japan Science and Technology Agency) and a Grant-in-Aid for Scientific Research (20119007) from the Japan Society for the Promotion of Science.

16.520 Infants' brain activity in perceiving facial movement of point-light displays

Hiroko Ichikawa 1 (ichihiro@tamacc.chuo-u.ac.jp), So Kanazawa 2 , Masami K. Yamaguchi 3,4 , Ryusuke Kakigi 5 ; 1 Research and Development Initiative, Chuo University, 2 Faculty of Integrated Arts and Social Sciences, Japan Women's University, 3 Department of Psychology, Chuo University, 4 PRESTO, Japan Science & Technology Agency, 5 National Institute for Physiological Sciences

Adult observers quickly identify the specific actions performed by an invisible actor from points of light attached to the actor's head and major joints. Even infants are already sensitive to biological motion and prefer it when depicted by a dynamic point-light display (Arterberry & Bornstein, 2001).
Regarding the detection of biological motion such as whole-body movements and facial movements, neuroimaging studies have demonstrated involvement of occipitotemporal cortex, including the superior temporal sulcus (STS) (Lloyd-Fox et al., 2009). In the present study, we applied the point-light display technique and examined infants' brain activity while they watched facial biological motion in a point-light display, using near-infrared spectroscopy (NIRS). Dynamic facial point-light displays (FPD) were made from video recordings. As in Doi et al. (2008), about 80 luminous markers were scattered pseudo-randomly over the surface of the actors' faces. Three actors performed the surprised expression in a dark room and were videotaped. In the experiment, we measured hemodynamic responses by using NIRS. We hypothesized that infants would show differential neural activity for upright and inverted FPD. The responses were compared to the baseline activation during the presentation of individual still images, which were frames extracted from the dynamic FPD. We found that the concentrations of oxy-Hb and total-Hb increased in the right lateral area during the presentation of the upright FPD compared to the baseline period. The results suggested that (1) the brain activity elicited while watching facial movement in a point-light display has developed by 6-8 months of age, and (2) processing of facial biological motion is related to the right lateral area.

Acknowledgement: This research was supported by PRESTO (Japan Science and Technology Agency) and a Grant-in-Aid for Scientific Research (20119002, 21243041) from the Japan Society for the Promotion of Science.

16.521 Age-contingent face aftereffects depend on age of the observer

Janice Murray 1 (jmur@psy.otago.ac.nz), Beatrix Gardiner 1 ; 1 Department of Psychology, University of Otago

Following repeated exposure to faces with contracted (or expanded) internal features, faces previously perceived as normal appear distorted in the opposite direction. These face aftereffects suggest that face-coding mechanisms adapt rapidly to changes in the configuration of the face. Past work with young adults has suggested that distinct coding mechanisms respond to faces that differ in orientation, gender, race, and eye gaze direction. The first aim of the present work was to determine whether coding of faces from different age categories (young and older adults) shows similar selectivity. Given evidence for age-related changes in face recognition, emotional expression recognition and configural information processing, we also tested aftereffects in older adults. Before and after an adaptation phase, participants rated the normality of morphed distorted faces ranging from 50% contracted through normal to 50% expanded. These test faces depicted young (18-32 years) and older (64+ years) individuals matched for distinctiveness and presented in equal numbers. In the adaptation phase, participants viewed either young or older faces with 60% contracted features. The size of the adapt and test faces was varied. For young participants (18-23 years), aftereffects occurred in all conditions but were significantly reduced when the age of the adapting face and test faces differed. These findings suggest that dissociable neural populations code young and older faces. For older adults (60-83 years), a different pattern of aftereffects was observed. When older adults were adapted to older faces, a significant aftereffect occurred with older but not young test faces, consistent with the age-contingent aftereffects observed with young adults. However, after adapting to young faces, older adults showed significant aftereffects of equal magnitude for young and older test faces. These findings suggest that changes in the perceptual or neural mechanisms that code faces take place as a function of the aging process.

16.522 Exploring the perceptual spaces of faces, cars, and birds in children and adults
Tamara L. Meixner 1 (tmeixner@uvic.ca), Justin Kantner 1 , James W. Tanaka 1 ; 1 Department of Psychology, University of Victoria

To date, much of the developmental research concerning age-related changes in face processing has focused on the type of information and the specific strategies utilized by children during face recognition. Other aspects of facial recognition, such as the principles governing the organization of individual face exemplars and other objects in perceptual memory, have been less extensively investigated. The present study explores the organization of face, bird, and car objects in perceptual memory using a morphing paradigm. Children ages five-six, seven-eight, nine-ten, and eleven-twelve, and adults were shown a series of morphs created with equal contributions from typical and atypical face, bird, and car parent images. Participants were asked to judge whether each 50/50 morph more strongly resembled the typical or the atypical parent image from which it was created. Children in all age groups and adults demonstrated a systematic atypicality bias for faces and birds: the 50/50 face (bird) morph was judged as appearing more similar to the atypical parent face (bird) than the typical parent face (bird). Interestingly, the magnitude of the atypicality bias remained robust and stable across all age groups, indicating an absence of age-related differences. No reliable atypicality bias emerged for the car category. Collectively, these findings establish that by the age of five, children are sensitive to the structure and density of face and bird probes, and are capable of encoding and organizing face, bird, and car exemplars into a perceptual space that is strikingly similar to that of an adult's. These results suggest that category organization, for both children and adults, follows a distance-density principle (Krumhansl, 1978), whereby the perceived similarity between any two category exemplars is attributed to both their relative distance and the density of neighboring exemplars in the perceptual space.

16.523 Sad or Afraid? Body Posture Influences Children's and Adults' Perception of Emotional Facial Displays

Catherine Mondloch 1 (cmondloch@brocku.ca), Danielle Longfield 1 ; 1 Psychology Department, Brock University

Adults' perception of facial displays of emotion is influenced by context (body posture; background scene), especially when the facial expressions are ambiguous (Van den Stock et al., 2007) and the emotion displayed in the context is similar to that displayed in the face (Aviezer et al., 2008). We investigated how context influences children's perception of rapidly presented emotional expressions. Adults and 8-year-old children (n = 16 per group) made two-alternative forced-choice judgments about sad and fearful facial expressions. Each facial expression was posed by 4 models (2 males) and presented with both congruent and incongruent body postures that were either aligned or misaligned with the face. Participants were instructed to ignore the body. In the aligned condition, accuracy was higher when face and body were congruent versus incongruent, p


to fearful faces in adolescents is likely due to immature top-down processing that fails to adequately override more bottom-up, affectively-driven processes.

16.525 The effects of aging and stimulus duration on face identification accuracy with differing viewpoints

Ayan K. Dey 1 (deyak@muss.cis.mcmaster.ca), Matthew V. Pachai 1 , Patrick J. Bennett 1,2 , Allison B. Sekuler 1,2 ; 1 Department of Psychology, Neuroscience, and Behaviour, McMaster University, 2 Centre for Vision Research, York University

Habak, Wilkinson and Wilson (2008, Vision Res, 48(1), 9-15) reported that face identification accuracy was lower in older subjects than younger subjects, especially for faces presented at different viewpoints. In addition, they found that identification accuracy for faces presented in different viewpoints improved in younger subjects, but not older subjects, as stimulus duration increased from 500 to 1000 ms. This result led Habak et al. to propose that the accumulation of information used to refine neural representations saturates earlier in the older visual system than in the younger visual system. However, Habak et al. used artificially constructed stimuli that differed in outer contour and hair in addition to the geometry of internal features, and these may not have been discriminated on the basis of facial features per se. We therefore investigated the effect of stimulus duration in older (n=8) and younger (n=8) observers using pictures of faces that differed in viewpoint but had identical outer contours and no hair, forcing subjects to base identification on internal facial features. Observers viewed a high-contrast target face for 250 ms, 500 ms, 1000 ms, or 2000 ms, followed by a 10-face choice response. Faces in the response window were presented in fronto-parallel view; target faces were presented at one of several oblique viewpoints. A repeated measures ANOVA revealed significant main effects of age (p


16.529 Attention ignores rewards when feature-reward mappings are uncertain

Alejandro Lleras 1 (Alejandro.Lleras@gmail.com), Brian Levinthal 2 ; 1 University of Illinois at Urbana-Champaign, 2 Northwestern University

Recent investigations have shown that externally-adjudicated rewards can modulate selection processes both within and between trials. In particular, when participants recently received a reward or a penalty (on the previous trial) or can expect to receive a reward or a penalty in the current trial (based on learned reward-feature contingencies), rewards can strongly guide attention, biasing selection mechanisms towards highly-rewarded information and away from penalty-inducing information. These effects are observed after participants have had extensive experience with the task and with the reward-feature contingencies (how much each feature is typically worth). Here, we investigated whether reward-based effects on attention can be induced on a trial-by-trial basis (i.e., without a consistent association between a level of reward and a specific visual feature). We used the Distractor Previewing Effect (DPE), an inter-trial bias of selective attention that is observed in oddball-search tasks: participants are slower to select an oddball target when its defining feature was shared by all distractors on a preceding target-absent trial, and are faster when distractors share a feature with the distractors on a preceding target-absent trial. Previously, we have shown that learned rewards strongly modulate the DPE. When penalties are associated with the color of distractors on a target-absent trial, the ensuing DPE is exaggerated, whereas when high levels of reward are associated with the color of distractors on the target-absent trial, the ensuing DPE is reversed, showing an attentional preference to select normally inhibited information. Furthermore, we observed strong within-trial biases such that items defined by rewarded features were preferentially selected, and items defined by penalized features were efficiently rejected. Our current results show that these reward-induced modulations of attention are totally absent when reward levels are randomly assigned to features on a trial-by-trial basis. Under conditions of reward uncertainty, attention ignores rewards, presumably because previous rewarding experiences fail to predict future rewards.

Acknowledgement: National Science Foundation grant to AL, award number BCS 07-46586 CAR

16.530 The role of motivational value in competition for attentional resources

Jennifer O'Brien 1 (obrien.jenk@gmail.com), Jane Raymond 2 , Thomas Sanocki 1 ; 1 Psychology, University of South Florida, 2 School of Psychology, Bangor University

Value associations are acquired for visual stimuli through interaction with them, and can subsequently predict both the resulting value of interaction (i.e., in terms of reward or punishment) and the likelihood of obtaining that outcome should the stimuli be encountered again. We have previously shown evidence that visual stimuli are processed in a value-specific manner, where expected value is determined by both valence and motivational salience. Under conditions of constrained attention (e.g., presentation during an attentional blink, AB), recognition of value-laden stimuli is determined by their associated valence. More specifically, a reward-associated stimulus presented for recognition as a second target (T2) in a rapid serial visual presentation (RSVP) of non-valued stimuli escapes the AB; a loss-associated stimulus does not.
Thus, when attention is limited, visual recognition appears to be biased in favor of reward-associated stimuli. Here we asked whether this reward bias in visual recognition persists when value-laden stimuli are in direct competition for attentional resources. To test this, we first had participants engage in a simple choice task where they gained or lost money with high or low probability in response to choosing specific visual stimuli. We then measured recognition of these learned stimuli in a dual-RSVP-stream AB task, where the T2 response required the recognition of two value-laden stimuli presented simultaneously under conditions of limited attention. Preliminary evidence suggests that reward-associated stimuli are preferentially processed over other valenced stimuli; however, performance is also modulated by motivational salience.

16.531 Reward speeds up response inhibition, but only when it is unpredictable

Y. Jeremy Shen 1 (yankun.shen@yale.edu), Daeyeol Lee 2 , Marvin Chun 1 ; 1 Department of Psychology, Yale University, 2 Department of Neurobiology, Yale University

We often must inhibit the response to one visual stimulus upon seeing another. We asked whether people can inhibit responses in less time when it is more important to do so, by offering different levels of reward—points that were later converted into monetary bonuses—for successful inhibition. In our experiments, participants made rapid manual responses to dots appearing on either side of the computer screen. We tested their inhibitory abilities by presenting a square "stop signal" shortly after the dot onset in some trials, indicating that participants must cancel their response to receive the reward. We measured the efficiency of response inhibition by estimating the time required for participants to react to the stop signal and cancel their response, namely their stop-signal reaction time (SSRT). In Experiment 1, we separated trials with high and low rewards for stopping into different blocks. We found that participants' SSRTs did not vary with the reward for stopping, although participants were significantly slower when responding to the dots in high-reward blocks, suggesting that they waited longer in anticipation of potential stop signals in those blocks. We then looked to reduce this anticipation by associating high and low stop rewards with different dot locations and presenting trials with different stop rewards in random order. The unpredictable ordering of reward eliminated the difference between response times to dots in high- and low-reward locations, and now participants were significantly faster to inhibit their response when reward was high than when reward was low. Our findings suggest that, at least in the domain of inhibiting responses to visual stimuli, anticipation of higher reward interferes with more automatic mechanisms we have for improving performance in response to reward.

Acknowledgement: Kavli Foundation
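SSRT cannot be observed directly; one common way to estimate it is the integration method of the horse-race model, sketched below with simulated data (the race-model assumption and the numbers are ours; the abstract does not specify the estimation method used):

```python
import numpy as np

rng = np.random.default_rng(2)

def ssrt_integration(go_rts, ssd, p_respond):
    """Integration method: SSRT = nth quantile of go RTs - mean SSD,
    where n is the probability of (failing to stop) responding on stop trials."""
    nth_rt = np.quantile(np.sort(go_rts), p_respond)
    return nth_rt - np.mean(ssd)

go_rts = rng.normal(450, 60, 500)        # go-trial RTs in ms
ssd = rng.choice([150, 200, 250], 150)   # stop-signal delays in ms
p_respond = 0.5                          # proportion of failed stops
print(f"SSRT ~ {ssrt_integration(go_rts, ssd, p_respond):.0f} ms")
```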
In the current study, latencies to initiate a saccade to a red target dot presented to the left or right of fixation were measured. Reward manipulations consisted of varying the magnitude of the reward, as well as the probability of receiving the reward, following a correct eye movement to the left or right target. Results show that higher expected reward leads to lower saccadic reaction times (SRTs) to the target, taken to imply better saccadic preparation, supporting previous findings by Milstein & Dorris (2007).

16.533 Effects of hunger and body mass index on attentional capture by high and low calorie food images: An eye-tracking study
Alison Hoover1 (AlisonMH@txstate.edu), Natalie Ceballos1, Oleg Komogortsev2, Reiko Graham1; 1Department of Psychology, Texas State University, 2Department of Computer Science, Texas State University
Reaction time indices of attentional biases toward food and food-related stimuli have been shown to vary with changes in motivational state (i.e., hunger) and variations in body mass index (BMI). The current study used eye-tracking methodology to examine how attentional biases towards different food images are moderated by hunger and BMI. Twenty-six women (15 normal BMI, 11 overweight or obese; 13 sated, 13 hungry) viewed pairs of images of high-calorie sweet, high-calorie salty, and low-calorie foods while eye movements were monitored. Proportions of initial fixations to the different food types were used as an index of attentional capture, and pupil diameter as an index of emotional arousal. Results revealed a significant interaction between food type and BMI: the overweight group had a greater proportion of first fixations on low-calorie food images relative to the normal weight group (who had a tendency to fixate first on high-calorie salty images). These results are consistent with reaction time data showing more positive implicit attitudes to high-calorie salty foods (e.g., pizza, burger; Czyzewska & Graham, 2007). In addition, there was a significant food type by hunger interaction: the hungry group made more initial fixations to high-calorie salty foods (relative to low-calorie foods), suggesting that hunger temporarily enhances attentional capture by high-calorie salty foods. Furthermore, the effects of BMI and hunger on attentional capture to these foods are statistically separable. In contrast, pupil diameters did not change as a result of hunger: mean pupil diameter was larger overall for the overweight group, but this main effect was mitigated by an interaction between BMI and food type wherein pupil diameters were largest to high-calorie salty foods. Overall, these results suggest that hunger and BMI have separate effects on attentional capture to food images that increase the salience of high-calorie salty foods.

16.534 Exploring the relationship between anxiety and processing capacity for threat detection
Helen Richards1 (hjr105@soton.ac.uk), Valerie Benson1, Julie Hadwin1, Michael Wenger2, Nick Donnelly1; 1School of Psychology, University of Southampton, U.K., 2Department of Psychology, Pennsylvania State University
Cognitive models suggest that anxiety is associated with the presence of a highly sensitised threat detection mechanism which, once activated, leads to the automatic allocation and focusing of attention on the source of threat (review by Bar-Haim, Lamy, Pergamin, Bakermans-Kranenburg & Ijzendoorn, 2007). Previous studies have only ever considered the detection of singleton threat targets in anxiety. The threat detection system should also be configured to rapidly detect signs of impending danger in situations where there is a possibility of multiple threats. Given multiple threats, it is unclear whether a more advantageous strategy for threat detection in anxious individuals is to localise and focus attention on one threat stimulus or to distribute attention widely (see Eysenck, Derakshan, Santos & Calvo, 2007). To address this theoretical question, we conducted a reaction-time redundant signals study in which participants were asked to indicate the presence or absence of an angry or happy target face in displays containing no targets, one target, or two targets. In all conditions, the task was to detect the presence of at least one target. We used measures of processing capacity (e.g., capacity coefficient, Miller and Grice inequalities; see Wenger & Townsend, 2000) to assess whether, at all time points, the fastest RTs in the redundant target condition (i.e., the two-target condition) could be predicted from the fastest RTs in the single target conditions. Eye movements were also measured during the study. Significant correlations showed that anxiety was associated with increased processing capacity for threatening faces, but only at early time points in target detection. The results also demonstrated that significantly fewer eye movements were made to targets when anxiety was high. The data are consistent with anxiety influencing threat detection via a broadly tuned attentional mechanism.
Acknowledgement: Economic and Social Research Council, U.K.

16.535 Value associations make irrelevant stimuli especially distracting
Julia Gómez-Cuerva1 (j.gomez@bangor.ac.uk), Jane E. Raymond1; 1Bangor University
Learning and experience lead us to associate reward or punishment value with specific visual objects. Value associations, especially reward associations, are thought to activate dopaminergic systems that may in turn support enhanced attention to such objects. Here we asked whether value associations (reward or punishment) learned in one context could make stimuli especially distracting when irrelevant in other contexts. To explore this possibility, we used a two-phase experimental procedure. First, participants learned to associate different face stimuli with winning, losing, or having no outcome in a simple choice game.
Later, a typical 'flanker' attention task (with no possibility of winning or losing points) was conducted using the pre-conditioned stimuli or novel stimuli as distractors. Five faces were presented and the task was to categorize the gender of the middle (target) face as quickly as possible. In this task, attentional distraction by flanking stimuli is indexed as a slowing in mean response time (RT) to judge the central target relative to a baseline condition. Here, the target was always a novel face; the baseline was measured using novel faces as flankers (gender congruent or incongruent), and the experimental conditions used the preconditioned faces as flankers. We found that RTs were significantly slower than baseline (regardless of gender congruity) when flankers had been preconditioned with rewards or punishers, but not when preconditioned with no outcome. This effect was especially robust for stimuli that had been optimal choices in the choice game, regardless of their association with reward versus punishment. These findings indicate that preconditioned value associations play an important role in visual selection processes.

16.536 Interaction effects of emotion and attention on contrast sensitivity correlate with measures of anxiety
Emma Ferneyhough1 (emmafern@nyu.edu), Damian Stanley1, Elizabeth Phelps1,2, Marisa Carrasco1,2; 1New York University Psychology Department, 2New York University Center for Neural Science
Background: Last year at VSS we showed that faces effectively cue attention, improving contrast sensitivity at a cued location and impairing sensitivity at an uncued location, compared to distributed cues; however, facial expression had no impact. Some have suggested that anxiety modulates the effects of emotion and attention on performance (e.g., Bar-Haim et al., 2007), which led us to look at individual differences in trait anxiety. Here we investigate whether anxiety influences the interaction of emotion and attention on contrast sensitivity.
Method: Non-predictive precues directed exogenous (involuntary) attention to a visual task stimulus. Precues were faces with either neutral or fearful expressions and were presented to the left, right, or both sides (8° eccentricity) of central fixation. On each trial, a target (tilted Gabor) was displayed on one (random) side and a distracter (vertical Gabor) on the other (1.5 cpd, 3° Gabors; 4° eccentricity). Attention was thus randomly cued toward the target (valid cue), distracter (invalid cue), or distributed over both locations. Observers discriminated target orientation with contrast-varying stimuli, and completed self-reported measures of anxiety (PANAS: Watson, Clark & Tellegen, 1988; STAI: Spielberger et al., 1983).
Results: We found that emotion significantly interacted with attention in a manner that reflected trait anxiety. Consistent with previous research, distributed-fear cues significantly improved performance compared to distributed-neutral cues. Although valid- and invalid-fear cues did not consistently modulate sensitivity across observers, individual differences in anxiety significantly correlated with this interaction of emotion and attention. The emotion effect (fear minus neutral sensitivity) was negatively correlated with anxiety for valid cues but positively correlated for invalid cues. These results suggest that for observers with increased anxiety the fear cue impairs processing of the nearby stimulus.
These findings will be discussed in the context of an ongoing debate regarding the relation of anxiety and attention.
Acknowledgement: NIH R01 MH062104 to EP, and NIH R01 EY016200 to MC

16.537 Evaluation of attentional biases towards thin bodies
Christina Joseph1 (christij@psychology.rutgers.edu), Maggie Shiffrar1, Sarah Savoy1; 1Rutgers University, Newark
Background: The aim of this research is to understand how people distribute their visual attention across scenes containing people. Previous studies have indicated that women with eating disorder symptoms (high body dissatisfaction) selectively attend to images of thin female bodies. With increasing numbers of men experiencing body dissatisfaction (BD), we examined whether men also selectively attend to thin bodies, whether this attentional bias depends upon the gender of the observed body, and whether this bias is a function of observer BD, weight, and/or gender. Method: Male and female participants completed the Body Shape Questionnaire-34 to measure their level of BD. They then completed a dot-probe task in which they first saw a fixation followed by two bodies of the same gender (one thin, one overweight) presented simultaneously one above the other. After 500 ms, the bodies disappeared from the display and an arrow appeared in the previous location of one of the bodies. With a key press, participants reported whether the arrow pointed to the left or right. Reaction times were recorded to determine whether observers had directed their attentional resources towards the thin or heavy body type. Trials were blocked by figure gender. Results: There was a significant interaction between subject gender, figure gender, and figure body type, F(1,35)=5.52, p=.025. No main effects were significant. Conclusions: Female observers spontaneously direct their attentional resources to thin female bodies and to overweight male bodies. Male observers selectively attend to thin male bodies and distribute their attention equally across all female bodies. It has been proposed that selective attention to thin bodies maintains high levels of body dissatisfaction. If so, the current results suggest that both men and women exhibit similar relationships between attentional biases and body dissatisfaction.

16.538 Suppressing sex and money: Response inhibition leads to devaluation of motivationally salient visual stimuli
Anne E. Ferrey1 (aferrey@uoguelph.ca), Angele Larocque1, Mark J. Fenske1; 1Department of Psychology, University of Guelph

…instructed to correct gaze to the target disk. On key trials, the target and distractor swapped colors during the initial saccade. This change in the color of the target caused significant interference with gaze correction, despite the fact that color was incidental to the task. Similar results were obtained even when participants were given strong incentive to avoid encoding the target color. Thus, participants appear to have minimal control over the object features encoded into transsaccadic VWM. All features of the saccade target, including task-irrelevant features, are encoded, maintained across the eye movement, and consulted when the visual system locates the target after the saccade.
Acknowledgement: NIH Grant R01EY017356

16.543 Proximity Grouping in Visual Working Memory
Andrew McCollough1 (awm@darkwing.uoregon.edu), Brittany Dungan1, Edward Vogel1; 1University of Oregon
The ability to group information into "chunks" is a well-known phenomenon in verbal working memory paradigms. However, the effects of chunking in the visual memory domain are not as well understood. Here, we investigate the effects of visual chunking on working memory capacity by utilizing gestalt principles to bias subjects to group individual items into larger, virtual objects. Previously, we have examined the effects of grouping in Kanizsa figures and demonstrated a reduction in working memory load for elements comprising illusory triangles compared to individual "pac-men". Here, we investigate the effect of proximity in generating virtual objects. In Experiment 1, subjects were presented with randomly spaced groups of 6 dots: 1 group of 6 dots, 2 groups of 3 dots, or 3 groups of 2 dots. Subjects performed a location change detection task on a single-item probe after a brief delay, indicating whether the probe was in the same or different location as the sample. ERPs were also recorded during the experiment. In particular, we examined the contralateral delay activity, which is an ERP component sensitive to the number of items held in memory during the delay period of a visual working memory task. By examining the amplitude of this activity, we were able to further determine whether these grouping principles facilitated efficient allocation of memory capacity towards the "chunked" objects or whether the number of maintained representations in memory was set by the number of elements within the figure. Change detection performance was greater in grouped conditions compared to individual presentation conditions. In addition, ERP activity indexing online working memory load was greater for higher numbers of elements compared to grouped figures containing the same number of elements. The implications of working memory grouping mechanisms for category learning will also be discussed.

16.544 Visual short term memory serves as a gateway to long term memory
Keisuke Fukuda1 (keisukef@uoregon.edu), Edward K. Vogel1; 1University of Oregon
The classic "modal model" of memory argues that short term memory (STM) serves as the primary gateway for the formation of long term memory (LTM) representations (Atkinson & Shiffrin, 1968). Over the years, though, this model has been disregarded by many because of various incompatible results.
For example, one common interpretation of this model is that STM serves as an "incubator" that strengthens representations through repeated rehearsal so that they can be successfully transferred to LTM. However, several researchers have found that longer periods of retention and rehearsal in STM do not lead to better LTM representations (e.g., Craik & Watkins, 1973). In this study, we took a different approach to test this model. Rather than conceptualizing STM as an incubator, we instead tested whether it serves as the "gate" that filters what information from the environment will ultimately be encoded into LTM. It is well known that individuals substantially and reliably vary in their STM capacity. Here we tested whether individuals with a high STM capacity, and thus a "larger gate", were better able to successfully store and retrieve information from LTM than their low-capacity counterparts. To do this, we tested LTM recognition performance for novel and repeated arrays of simple objects that were originally presented as part of a STM change detection task. Across several experiments, we found that an individual's STM capacity strongly predicted his or her success on both incidental and intentional LTM recognition tasks (r's = .47~.78). These results support the proposal that the effective size of the individual's STM gate determines how much information from a display will be successfully stored in LTM.

16.545 Can Observers Trade Resolution for Capacity in Visual Working Memory?
Weiwei Zhang1 (wwzhang@ucdavis.edu), Steve Luck1,2; 1Center for Mind & Brain, UC Davis, 2Department of Psychology, UC Davis
The storage capacity of visual working memory (VWM) is strongly correlated with broad measures of cognitive abilities, but the nature of capacity limits has been the subject of considerable controversy. Some researchers have proposed that VWM stores a limited set of discrete, fixed-resolution representations, whereas others have proposed that VWM consists of a pool of resources that can be allocated flexibly to provide either a small number of high-resolution representations or a large number of low-resolution representations. To distinguish between these possibilities, we asked whether the resolution and capacity of stored representations in VWM are under top-down strategic control. That is, can VWM store more coarse-grained representations or fewer fine-grained representations depending on task demands? To address this question, we used a short-term color recall task in which observers attempted to retain several colors in VWM over a 1-second retention interval and then reported one of them by clicking on a color wheel (Zhang & Luck, 2008, Nature). In one condition, the colors varied continuously across the color wheel, encouraging the retention of precise color information. In a second condition, the color wheel was divided into 9 or 15 homogeneous color wedges, reducing the degree of precision necessary to perform the task. The flexible resource hypothesis predicts that VWM should store fewer colors with higher resolution in the continuous color wheel condition but more colors with lower resolution in the discrete color wheel condition. In contrast, the fixed resolution hypothesis predicts that VWM resolution and capacity must remain constant across conditions. We found that VWM resolution and capacity remained constant across conditions, supporting the fixed-resolution slot hypothesis.
Follow-up experiments using other methods of manipulating the need to maintain precise representations also found no evidence that subjects could increase the number of items in VWM by storing them with less precision.
Acknowledgement: This research was made possible by grant R01 MH076226 from the National Institute of Mental Health.

16.546 The Optimal Allocation of Visual Working Memory: Quantifying the Relationship Between Memory Capacity and Encoding Precision
Chris R. Sims1 (csims@cvs.rochester.edu), David C. Knill1, Robert A. Jacobs1; 1Center for Visual Science and Department of Brain and Cognitive Sciences, University of Rochester
Visual working memory is central to nearly all human activities. Given its importance, it is perhaps surprising that the capacity of visual working memory is severely limited. A long history of research has sought to identify the nature of this limit, with the primary theoretical division concerning whether this capacity is a continuous resource or a number of discrete "slots". In an effort to resolve this debate, Bays and Husain (2008) have examined how the precision of information encoded in visual working memory changes as a function of the number of features that are stored, with the finding that storing even two features can degrade memory precision compared with the case of storing just a single feature. While this relationship has been characterized using a power-law model (Bays & Husain, 2008) or a modified discrete slot model (Cowan & Rouder, 2009), we have instead applied a principled theoretical framework to explain the relationship between allocated memory resources and the resulting fidelity of the encoded information. In particular, results from a branch of information theory known as rate–distortion theory (Shannon & Weaver, 1949) dictate the optimal precision with which any information can be transmitted as a function of the capacity of the system, measured in bits. Importantly, this upper limit on performance must hold regardless of whether human visual working memory is biologically instantiated as discrete slots, a continuous pool of resources, or any other encoding scheme. It is shown that results from rate–distortion theory not only provide a principled theoretical basis for describing capacity limits in visual working memory, but are also able to provide a remarkable quantitative fit to empirical results of previous experiments (Bays & Husain, 2008). These findings form the basis for the development of a computational model of the optimal allocation of visual working memory.
Acknowledgement: NIH #T32EY007125-19
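The rate–distortion bound invoked in 16.546 can be stated compactly. The following is a minimal worked example, assuming a Gaussian stimulus distribution, squared-error distortion, and an even split of capacity across items; these are illustrative assumptions of ours, not details given in the abstract:

```latex
% Gaussian source with variance \sigma^2 under squared-error distortion:
% the minimum achievable distortion at a rate of R bits is
D(R) = \sigma^2 \, 2^{-2R}
% If a fixed capacity of C bits is split evenly over N items (R = C/N),
% the best attainable error standard deviation grows smoothly with N:
\sigma_{\mathrm{err}}(N) = \sqrt{D(C/N)} = \sigma \, 2^{-C/N}
```

On this reading, precision must degrade continuously with set size for any encoder operating at capacity C, regardless of its physical implementation; that is the sense in which the bound holds for slots, resource pools, or any other scheme.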


16.547 Interactions between motion perception and visual working memory
Min-Suk Kang1 (min-suk.kang@vanderbilt.edu), Geoffrey Woodman1; 1Department of Psychology, Vanderbilt University
Question: Are working memory representations biased by new visual inputs?
Method: Observers were instructed first to view a random dot motion display (100% coherence) and then to remember its direction of motion during a two-second retention interval. While holding that direction of motion in visual working memory, observers viewed a second motion display (10%~15% coherence) and indicated whether it was counter-clockwise or clockwise with respect to a reference line presented at the edge of the display (2AFC task). The difference in motion direction between these two motion displays was independently varied from -20° to +20°. After performing the direction judgment task, observers reported the remembered direction of motion by adjusting a clock needle to the memorized direction. This judgment provided a measure of memory precision, indexed as the difference between the actual direction of the first motion display and the memorized direction.
Result: The remembered direction of motion was systematically biased toward the direction of motion shown during the intervening perceptual discrimination task.
Discussion: These findings demonstrate that new perceptual inputs affect visual working memory representations. This general experimental paradigm opens avenues to investigate how perception influences working memory representations and, for that matter, how working memory representations influence ongoing perception.

16.548 Crossmodal Working Memory Load: Perceptual and Conceptual Contributions of Image Characteristics
Anne Gilman1 (anne.gilman@gmail.com), Colin Ware2, John Limber3; 1Psychology Department, Iona College, 2Center for Coastal and Ocean Mapping, University of New Hampshire, 3Psychology Department, University of New Hampshire
What can crossmodal associations reveal about working memory capacity for complex objects? Forming these associations starts before baby first shakes a rattle and continues past Grandma's first cellphone purchase. To examine the contributions of perceptual and conceptual characteristics of images to WM for novel visual-auditory associations, we assessed crossmodal change-detection accuracy (Gilman & Ware, 2009) for associations between animal sounds and four image types: grayscale shapes, color shapes, grayscale drawings (Rossion & Pourtois, 2004), and color photographs. Following Alvarez & Cavanagh (2004), image information load was measured by comparing time costs for adding more images of the same type to a search array. Participants' average search times (N=22) were all under 20 ms/item, comparable only to the two fastest image classes (colored squares and letters) tested by Alvarez and Cavanagh (2004). Meaningfulness of the image types was assessed using word association counts, with each image prompting on average four associations; representational images garnered .7 more associations than shapes, and full-color images received approximately .2 more associations than grayscale ones. Similar results were obtained for associational variety. Crossmodal change-detection accuracy was higher for associations between color images (representational or abstract) and unrelated animal sounds than for those using grayscale images.
However, response bias was lower for crossmodal associations with representational images (color photos or grayscale drawings) than for those with color or grayscale shapes. This difference parallels recognition memory findings of lower bias for more-distinct visual (Sekuler & Kahana, 2007) and auditory (Visscher, Kaplan, Kahana, & Sekuler, 2007) stimuli, which were designed to avoid conceptual associations. Visual search cost for the present image types varied linearly with participants' measured working memory capacity for crossmodal associations (r²=.971, F(1,2)=66.99, p=.015); the proportion of variance explained is comparable to that found in the visual precedent. These crossmodal WM dynamics are compatible with information-theoretic WM models, showing sensitivity to unimodal perceptual and conceptual characteristics affecting the discriminability of stimuli to be associated.
Acknowledgement: UNH Graduate School Dissertation Fellowship

16.549 Visual Working Memory Capacity in Retinotopic Cortex: Number, Resolution, and Population Receptive Fields
Brian Barton1 (bbarton@uci.edu), Alyssa Brewer1; 1University of California, Irvine
Introduction: Visual working memory (VWM) capacity has been shown to be limited by the number of items one can hold in memory and the resolution at which those items are represented. The number limit appears to be subserved by cortex in inferior intraparietal sulcus (IPS), while the resolution limit seems to be subserved by superior IPS and part of the lateral occipital complex (LOC). Visual field maps have recently been discovered in IPS and LOC, and some or all of the regions involved in VWM capacity limits may lie in these visual field maps, which may have functional consequences for VWM capacity limits. Methods: We measured angular and eccentric retinotopic organization and population receptive fields across visual cortex using fMRI. Retinotopic stimuli consisted of black and white, drifting checkerboards 11° in radius comprising wedges, rings, and/or bars. A change detection task with set sizes 1, 2, 4 or 8 was performed using fMRI, with locations of stimuli controlled such that they were in the shape of a ring or wedge, to directly measure angular and eccentric VWM organization. The stimuli consist of colored squares (6 possible colors) or shaded cubes (controlled for spatial frequency, in 6 colors with 6 shading patterns, with low-similarity changes between colors and high-similarity changes between shading patterns) that subtend roughly 1° of visual angle. Results/Discussion: We present the location of functionally defined regions underlying VWM capacity limits and whether some or all fall into visual field maps. The change detection tasks replicate previous studies showing that we maintain just as many complex objects as simple objects, but at limited resolution. We present the first measurements of population receptive fields in IPS visual field maps, and analyze the number limit and mnemonic resolution for simple and complex objects as a function of population receptive fields.

16.550 An investigation of the precision and capacity of human visual working memory
Rosanne L. Rademaker1 (rosanne.l.rademaker@vanderbilt.edu), Frank Tong1; 1Department of Psychology, Vanderbilt University
How does the visual system maintain an active representation of a visual scene after it can no longer access that information directly?
Two key theories dominate the current understanding of how the brain deals with the challenges of maintaining information from its rich visual surroundings. Slot models predict that up to 3-4 discrete items can be simultaneously maintained in working memory. By contrast, resource models assume a limited capacity that can be flexibly distributed to remember a few items very well, or to store many items with less precision. Our study evaluates these models by measuring the precision and capacity of visual working memory for orientations. We used orientation because this continuously varying feature can be precisely represented by the human visual system. Subjects were briefly presented with 1-6 randomly oriented gratings; after a 3-second delay subjects were cued to report the orientation of one of the gratings by method-of-adjustment. This design allows the dissociation of two components of visual short-term memory performance: precision of a remembered item and likelihood of forgetting (Zhang & Luck, 2008). Rigorous psychophysical testing indicated that each of our subjects was able to accurately maintain a representation of orientation across set sizes (5-20° SD). Precision of this memory declined steadily when subjects had to remember more items, which is consistent with the resource model. However, not all our results were inconsistent with the slot model. The likelihood of forgetting sharply increased when more items had to be remembered, indicating the difficulty of maintaining over 4 items in memory. Finally, we found only a weak effect of distracter orientation on target response, contrary to some recent reports (Bays et al. 2009). Our results suggest that people can maintain more than 4 visual items simultaneously, albeit with some loss of precision and a greater likelihood of forgetting.
Acknowledgement: NSF BCS-0642633, NIH R01-EY017082, P30-EY008126

16.551 The capacity limit of the visual working memory of the macaque monkey
Evelien Heyselaar1 (evelien@biomed.queensu.ca), Kevin Johnston1, Martin Paré1; 1Center for Neuroscience Studies, Queen's University
Behavior can be guided by visual working memory as well as vision. For example, visual exploratory behavior is most efficient if subjects can accurately retain items they have previously fixated. This visual working memory capacity is limited; human studies have estimated the visual working memory capacity as 3 items on average, with values as low as 1.5 in some individuals. To date, no study has determined the capacity limit in animals and, as such, no animal model has been established to investigate the neural basis of the capacity of visual working memory. We employed an adaptation of the sequential color-change detection task used in human studies to determine the visual working memory capacity in the macaque monkey (see supplementary figure 1). Each trial began with the presentation of a fixation spot on a blank screen. The monkey was required to fixate on this central fixation spot before a memory array was presented. The memory array consisted of a set of two to five highly discriminable colored stimuli, presented for 500 ms. The memory array, except for the fixation spot, was removed for a retention interval of 1000 ms, during which the monkey was required to maintain fixation. The test array was then presented with one of the stimuli having changed color. The monkey was required to indicate this change by making a single saccadic eye movement to its location. Consistent with the use of mnemonic processes, performance decreased with increasing set size (see supplementary figure 2). Using the relationship between performance and set size, monkey visual working memory capacity was at least 2 memoranda, a value within the range of human capacity estimates. This similarity between monkey and human visual working memory capacity suggests a shared common neural process, which can now be investigated with invasive techniques.

16.552 Variations in mnemonic resolution across set sizes support discrete resource models of capacity in working memory
David E. Anderson1 (dendersn@gmail.com), Edward Awh1; 1Department of Psychology, University of Oregon
Discrete resource models propose that WM capacity is determined by a small number of discrete "slots" that share a limited pool of resources. By contrast, flexible resource models posit a single resource pool that can be allocated across an unlimited number of items. To test these models, we measured mnemonic resolution for orientation as a function of set size (1-8). Using a mixture model consistent with discrete resource models (Zhang and Luck, 2008), we estimated number (Pmem) and resolution (SD) as a function of set size. To test the flexible resource model, we fitted a single Gaussian distribution to the distribution of recall errors to operationalize WM capacity. Although both models predict worse mnemonic resolution for larger set sizes, the discrete resource model predicts that resolution should reach an asymptote when capacity has been achieved, because items that are not stored should not affect the precision of the stored representations. In line with this hypothesis, the group data revealed a clear asymptote in resolution at set size 4. Critically, we also found that observers with fewer "slots" reached asymptote at smaller set sizes, leading to a strong correlation between individual slot estimates and the set size at which mnemonic resolution reached asymptote. By contrast, capacity estimates based on the assumptions of the flexible resource model were significantly worse at predicting resolution as a function of set size.
Thus, discrete resource models provide superior predictive validity for understanding the relationship between resolution and set size in visual WM.
Acknowledgement: NIMH R01 MH087214 to E.A.

16.553 Developmental evidence for a capacity-resolution tradeoff in working memory
Jennifer Zosh1,2 (jzosh@psu.edu), Lisa Feigenson2; 1Human Development & Family Studies, Pennsylvania State University - Brandywine, 2Psychological & Brain Sciences, The Johns Hopkins University
A recent debate in the study of visual short-term memory (VSTM) asks whether capacity is better characterized as limited by the number of items stored (Luck & Vogel, 1997), the total information load of the items (Alvarez & Cavanagh, 2004; Xu & Chun 2006), or by a hybrid of these (Gao, Li, Liang, Chen, Yin & Shen, 2009; Zhang & Luck, 2008). Here we extend the scope of this capacity-resolution debate by studying infants and by using a working memory rather than a VSTM task. We asked whether the resolution of infants' object representations decreases as infants remember larger numbers of objects. We presented 18-month-olds with arrays of 1-3 toys that were then hidden in a box. Infants were allowed to search for and retrieve all of the hidden objects, and we measured their subsequent searching. On some trials infants retrieved exactly those objects they saw hidden (e.g., brush and car hidden; brush and car retrieved), and as a result then searched very little (because the box was expected to be empty). On other trials, one or more of the retrieved objects secretly switched identity (e.g., brush and car hidden; duck and car retrieved). On these trials the number of retrieved objects was always correct, but if infants remembered the identity of the initially hidden objects they should detect a mis-match and continue searching for the missing objects. In three experiments we found that infants' ability to detect this kind of identity switch decreased as the number of hidden objects increased. These studies expand the capacity-resolution debate in two ways: First, they provide evidence of a tradeoff in infants, suggesting that capacity and resolution interact from early in the lifespan. Second, they extend existing results to a longer timescale, suggesting that capacity and resolution trade off beyond VSTM, in memory more broadly.

16.554 The Effect of Minimizing Visual Memory and Attention Load in Basic Mathematical Tasks
Robert Speiser1 (rspeiser@cfa.harvard.edu), Matthew Schneps1, Amanda Heffner-Wong1; 1Laboratory for Visual Learning, Harvard-Smithsonian Center for Astrophysics
How might minimizing visual working memory and attention load affect students' ability to perform basic mathematical tasks? Recent work (Schneps et al, 2007) suggests a potential trade-off between central and peripheral visual abilities: "While the central field appears well suited for tasks such as visual search, the periphery is optimized for rapid processing over broad regions. People vary in their abilities to make use of information in the center versus the periphery, and we propose that this bias leads to a trade-off between abilities for sequential search versus contemporaneous comparisons." We focus here on two pencil-and-paper algorithms for finding multi-digit products: the familiar standard algorithm (S), which places large demands on visual working memory and visual attention; and an older algorithm (Treviso, 1478). In the latter (T), an elegant spatial layout guides visual attention and at the same time minimizes demands on visual memory.
While algorithm S makes strong use of central (hence sequential) processing, the alternative algorithm T makes effective use of its spatial (therefore more pre-attentive, less sequential) layout. We report results from two experiments in progress, to compare the performance of post-secondary students on these algorithms in two learner populations: typical STEM undergraduates, and STEM undergraduates whose executive functions are believed to be impaired (specifically, those with dyslexia). In the first experiment, fifteen students from each population are tested on accuracy of performance on multi-digit products, comparing methods S and T. In the second experiment, students from each population again perform multi-digit products as above, but in this experiment their eye and hand motions are simultaneously tracked, to assess task dynamics. Results are analyzed statistically, for comparisons within and across both learner populations.
Acknowledgement: NSF HRD-0930962

16.555 How low can you go? An investigation of working memory span and change detection
Bonnie L. Angelone1 (angelone@rowan.edu), Nikkole Wilson1, Victoria Osborne1, Zachary Leonardo1; 1Department of Psychology, College of Liberal Arts and Sciences, Rowan University
Change detection performance is often impaired due to limits in visual memory and attention. Therefore, individual differences in visual memory and attentional abilities may impact change detection performance. Extensive research has examined the impact of factors related to the external stimulus on change detection performance. For example, changes to objects that are more important to scene context are detected faster than objects of lesser importance. Also, several studies have shown that type of task, type of change, scene complexity, meaningfulness, salience, and change probability play a role in change detection performance. Although not as extensively examined, research has also investigated the role of internal personal factors. For example, individuals with increased attentional breadth (tested using Functional Field of View) demonstrate better change detection performance. In addition, individuals immersed in cultures with more holistic world views show a benefit for certain types of changes compared to their counterparts from more individualistic cultures. Finally, previously at VSS we showed that visual memory for locations accounted for a significant amount of the variance in change detection performance, while field independence/dependence and perceptual speed did not. The current project investigated the effect of working memory span on naturalistic scene change detection. Participants completed the Automated Operation Span Task (AOSPAN) and a change detection task for both type and token changes. Research suggests that individuals with high working memory span do not always excel at other tasks; they may not show a benefit until the task becomes more attentionally demanding. As such, high working memory span individuals may only outperform low span individuals for type changes and not token changes.

16.556 Beyond magical numbers: towards a noise-based account of visual short-term memory limitations
Wei Ji Ma1 (wjma@bcm.edu), Wen-Chuang Chou1; 1Department of Neuroscience, Baylor College of Medicine
Visual short-term memory (VSTM) performance decreases with set size, but the origins of this effect are disputed. Some attribute it to a limit on the number of items that can be memorized (the "magical number 4", e.g. Cowan, 2001), others to internal noise that increases with set size (e.g. Wilken and Ma, 2004). We present new experiments and a neural model to distinguish these theories. Observers viewed widely spaced colored discs at fixed eccentricity for 100 ms. After a 1-second delay, one location was marked and the observer reported the color of the disc that had been at that location (the target) by either clicking on a color wheel or scrolling through all colors using arrow keys. A limited-capacity model predicts: 1) an observer's capacity, K, is independent of response modality; 2) when set size N satisfies N≤K, the target color is always reported; 3) any instance of not reporting the target color is due to random guessing; 4) when reporting the target color, response variance is independent of N. Instead, we find that: 1) observers' capacity is 36% higher in the scrolling than in the color wheel paradigm; 2) when N≤K, subjects do not always report the target color; 3) when subjects do not report the target color, they often report the color of another item, consistent with Bays and Husain (2009); 4) response variance increases continuously with N. We confirmed these findings in a two-alternative forced-choice experiment in which subjects indicated, for a given test color, which of two marked locations contained that color. Our findings can be explained by a simple neural network characterized by spatial averaging and divisive normalization, without an item limit. We argue that VSTM must be reconceptualized in terms of noise and uncertainty, and that its limitations are likely tied to attentional ones.
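To illustrate how divisive normalization alone can produce a continuous set-size cost of the kind 16.556 describes, here is a minimal simulation sketch. It is a toy construction of ours, not the authors' model: the tuning curves, gain, noise, and decoder are all assumptions, and the spatial-averaging component is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
N_NEURONS = 64
PREFS = np.linspace(0, 2 * np.pi, N_NEURONS, endpoint=False)

def encode(colors, total_gain=200.0, kappa=2.0):
    # One tuned population per display item; divisive normalization fixes
    # the summed expected firing rate across the whole display, so the
    # per-item gain shrinks as set size grows; no hard item limit anywhere.
    bumps = np.exp(kappa * np.cos(PREFS[None, :] - colors[:, None]))
    bumps *= total_gain / bumps.sum()   # divisive normalization
    return rng.poisson(bumps)           # independent Poisson spiking noise

def decode(spikes):
    # Population-vector readout of the color at one probed location.
    return np.angle(spikes @ np.exp(1j * PREFS))

for n in (1, 2, 4, 8):
    errs = []
    for _ in range(3000):
        colors = rng.uniform(0, 2 * np.pi, size=n)
        est = decode(encode(colors)[0])                  # probe location 0
        errs.append(np.angle(np.exp(1j * (est - colors[0]))))
    print(f"set size {n}: recall error SD = {np.degrees(np.std(errs)):.1f} deg")
```

Because normalization holds the total expected spike count constant, the per-item signal-to-noise ratio falls smoothly as N grows, so report variability rises continuously with set size, which is the noise-based alternative to a discrete item limit.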


Saturday Morning Talks

Attention: Interactions with eye and hand movement
Saturday, May 8, 8:15 - 10:00 am
Talk Session, Royal Ballroom 1-3
Moderator: Amelia Hunt

21.11, 8:15 am
Remapping of an unseen stimulus
Amelia Hunt1 (a.hunt@abdn.ac.uk), Kay Ritchie1, Lawrence Weiskrantz2, Arash Sahraie1; 1School of Psychology, University of Aberdeen, 2Department of Experimental Psychology, Oxford University
When the eyes move, the visual world shifts across the retina. In visual cortex, this means information represented by one population of neurons will suddenly be represented by another population, sometimes transferring from one hemisphere to another. Saccadic remapping is the predictive response of neurons that has been shown to precede the retinotopic shift of stimuli caused by an eye movement. We examined whether conscious perception of a visual stimulus and an intact V1 are prerequisite to remapping stimuli into expected coordinates. Stimuli below the threshold for detection were presented in the blind field of patient DB, who has left homonymous hemianopia after surgical removal of the right striate cortex. Using a two-alternative forced-response procedure, DB performed at chance level (~50%) when asked to detect a target presented within his blind field while fixating on a fixation cross. However, when he executed a saccade that would bring the visual target into his intact field, his accuracy improved to 89%, even though the stimulus was removed at saccade onset and never entered his sighted field. Despite the increase in sensitivity, DB reported no conscious awareness of the stimulus. Saccades of equal size and eccentricity that would not bring the stimulus into his sighted field did not elevate detection. The results suggest that the intact visual hemifield may have predictively responded to a stimulus in the blind field, even though that stimulus was neither detected nor consciously perceived. This predictive response improved detection, but did not lead to explicit awareness.

21.12, 8:30 am
Non-retinotopic cueing of visual spatial attention
Marco Boi1 (marco.boi@epfl.ch), Haluk Ogmen2, Michael Herzog1; 1Laboratory of Psychophysics, Brain and Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland, 2Department of Electrical and Computer Engineering, Center for Neuro-Engineering and Cognitive Science, University of Houston, Houston, TX, USA
Attentional capture by an exogenous spatial cue is generally believed to be mediated by retinotopic mechanisms. Using a Ternus-Pikler display, we show that attentional capture occurs in non-retinotopic coordinates. In a first frame, three squares were presented for 200 ms, followed by a 70 ms ISI and a second frame containing the same three squares shifted laterally by one position. Observers perceived three squares moving laterally as a group. Within the central square of the first frame, a cue was flashed either at the center (neutral cue) or at a peripheral position (retinotopically neutral cue, non-retinotopically 100% valid cue). In the central square of the second frame, a conjunction search display was presented. Subjects had to search for a red tilted bar (target) and indicate its tilt direction. Relative to the surrounding squares, the peripheral position of the cue in the first frame corresponded to the peripheral position of the target in the second frame. On the other hand, because the central square moved, retinotopic positions of the cues were different from those of the search items.
A retinotopic account of attention predicts no difference in performance between central and peripheral cues, as both cues recruit attention at a location where the target is not presented. In contrast to this prediction, we found faster reaction times for peripherally versus centrally cued trials, indicating that the peripheral cue acts in non-retinotopic coordinates in summoning attention to the target location. These results provide strong evidence for a non-retinotopic component in spatial visual attention.
Acknowledgement: Swiss National Fund

21.13, 8:45 am
Predictive updating of attention to saccade targets
Martin Rolfs1 (martin.rolfs@parisdescartes.fr), Donatas Jonikaitis2, Heiner Deubel2, Patrick Cavanagh1; 1Laboratoire Psychologie de la Perception, Université Paris Descartes, 2Department Psychologie, Ludwig-Maximilians-Universität München
We examined the allocation of attention in a two-step-saccade task, probing several locations in space at different times following the onset of the central cue indicating the locations of the two targets. We found the expected advantage in detection performance at both the first and second saccade target locations increasing from around 150 msec before the first saccade, with a slight delay for the appearance of the advantage at the second saccade target. More interestingly, we also found an advantage at the remapped locations for the second target (and of the first, as evaluated in a separate experiment), emerging just 75 msec before the saccade. These locations correspond to the positions the two saccade targets will have on the retina following the saccade but, prior to the saccade when these benefits are seen, they do not correspond to the saccade target locations in either retinotopic or spatiotopic coordinates. These results suggest that location pointers to saccade targets are updated by a predictive remapping process working in a retinotopic frame of reference, allowing attention to be allocated to the upcoming target location in advance of the saccade landing, lending behavioral support to the now classic physiological finding that many cells in retinotopically organized brain areas pre-activate in anticipation of stimuli that will be landing in their receptive fields.
Acknowledgement: This research was supported by a Chaire d'Excellence grant to Patrick Cavanagh.

21.14, 9:00 am
Toward an interactive race model of double-step saccades
Claudia Wilimzig1 (claudia.b.wilimzig@vanderbilt.edu), Thomas Palmeri1,2, Gordon Logan1,2, Jeffrey Schall1,2,3; 1Vanderbilt University, Department of Psychology, 2Vanderbilt University, Center for Integrative and Cognitive Neuroscience, 3Vanderbilt Vision Research Center
Performance of saccade double-step and search-step tasks can be understood as the outcome of a race between GO processes initiating the alternative saccades and a STOP process (Camalier et al. 2007 Vision Res), paralleling the race model of stop-signal task performance (Logan & Cowan 1984 Psych Rev). The models require stochastic independence of the finishing times of the racing processes. However, the control of movement initiation is accomplished by networks of neurons that interact through mutual inhibition. An interactive race model demonstrated how late, potent inhibition of a STOP unit on a GO unit can reproduce stop-signal task performance with patterns of activation that resemble actual neural discharges (Boucher et al. 2007 Psych Rev).
We are extending this interactive race architecture to account for double-step performance, in which a second saccade is produced to the final target location after canceling the first saccade to the original target location. Alternative architectures have been explored. In a GO-STOP-GO architecture, a separate STOP unit inhibits the GO unit producing the first saccade and allows the GO unit producing the second saccade to complete. In a GO-GO architecture, the second GO unit both inhibits the first GO unit and initiates the saccade to the final target location. The models were fit to the probability of compensating for the target step and the response times of compensated saccades, as well as noncompensated saccades followed by corrective saccades. The quality of fits of the GO-STOP-GO architecture was contrasted with that of the GO-GO architecture. Also, the form of the activation of the GO and STOP units was compared to the patterns of neural activity in FEF of monkeys performing the search-step and double-step tasks (Murthy et al. 2009 J Neurophysiol). The results illustrate how stochastic cognitive models can be related to neural processes to understand how choice responses are initiated, interrupted and corrected.
Acknowledgement: Supported by AFOSR, R01-EY08890, P30-EY08126, and P30-HD015052 and the E. Bronson Ingram Chair in Neuroscience.
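For readers unfamiliar with the framework being extended in 21.14, the following is a minimal simulation of the independent race baseline (Logan & Cowan, 1984; Camalier et al., 2007), not the interactive model under development: a trial is "compensated" when the STOP process, launched by the target step, finishes before the GO process driving the first saccade. All distributions and parameters below are illustrative assumptions of ours.

```python
import numpy as np

rng = np.random.default_rng(2)

def finish_times(mu, sigma, n):
    # Finishing times drawn as Gaussians truncated at 1 ms; a common
    # simplifying choice, not the distributions fitted by the authors.
    return np.maximum(rng.normal(mu, sigma, n), 1.0)

def double_step_race(step_delay_ms, n_trials=100_000):
    # Independent race: the first saccade is canceled ("compensated")
    # whenever STOP, started at the target-step delay, beats GO1.
    go1 = finish_times(250.0, 50.0, n_trials)                 # assumed params
    stop = step_delay_ms + finish_times(100.0, 20.0, n_trials)
    compensated = stop < go1
    return compensated.mean(), go1[~compensated].mean()

for delay in (50, 100, 150, 200):
    p, rt = double_step_race(delay)
    print(f"step delay {delay:3d} ms: P(compensate) = {p:.2f}, "
          f"mean noncompensated RT = {rt:.0f} ms")
```

The independent race predicts that noncompensated saccades come from the fast tail of the GO1 finishing-time distribution; the interactive GO-STOP-GO and GO-GO architectures described above replace the independence assumption with mutual inhibition between units.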


21.15, 9:15 am
Information at hand is detected better in change detection
Philip Tseng1 (tsengphilip@gmail.com), Bruce Bridgeman1; 1Department of Psychology, University of California, Santa Cruz
Recent studies have suggested an altered visual processing for objects that are near the hands; visual search rates were slower when an observer's hands were near the display, which was interpreted as a result of a detailed evaluation of objects. Slower reaction times, however, can also arise from a number of inhibitory processes and therefore do not warrant the claim of a detailed visual analysis. Here we present two experiments that use a change detection paradigm to test this claim. While performing a change detection task, observers placed their hands either vertically or horizontally on the frame of the display, or away from the display. When their hands were on the display, change detection performance was more accurate and observers held more items in visual short-term memory. Both vertical and horizontal hand positions were facilitative, but vertically-placed hands elicited a robust enhancement that was resistant to task difficulty. Gains in hit rate were equal in magnitude across all regions, regardless of distances from the hands, suggesting that the extensive analysis is non-specific in nature. Together, our accuracy data provide concrete evidence for an enhanced visual analysis of objects near the hands.

21.16, 9:30 am
The role of feedback to foveal cortex in peripheral perception: A TMS study
Mark Williams1 (mark.williams@maccs.mq.edu.au), Christopher Allen2, Christopher Chambers2; 1Macquarie Centre for Cognitive Science, Macquarie University, Australia, 2School of Psychology, Cardiff University, United Kingdom
Recent neuroimaging evidence suggests that visual inputs arising beyond the fovea can be 'fed back' to foveal retinotopic cortex, building a new representation of extra-foveal events. Williams et al. (2008) presented novel objects in diagonally opposing parts of the peripheral visual field, while participants fixated centrally. The task was to determine if the objects were identical or different, and the objects on each trial could be from one of three different categories. Using fMRI and multi-voxel pattern analysis, they found significant discrimination performance in the foveal confluence – the region of cortex representing central vision, where no objects were presented. Critically, this pattern was not dependent on the location of the objects, suggesting it is due to feedback from higher areas that are position-invariant. Thus, contrary to the traditional view of early visual cortex being strictly retinotopic in the mapping of visual space, here is evidence of object-specific information about objects in the periphery being fed back to foveal cortex. However, the designation of such encoding as feedback depends entirely on its neural timecourse and behavioural significance, both of which are unknown. Here, we used a similar task and applied transcranial magnetic stimulation (TMS) to the posterior termination of the calcarine sulcus (the foveal site), and an occipital control region in line with a more peripheral calcarine representation (the non-foveal site). On each trial, a double-pulse of TMS was applied at one of seven possible times relative to target onset (–150 to +500 ms), at either a low or high intensity (40% or 120% of motor threshold). Late (350-400 ms) disruption of foveal visual cortex impaired the ability to perceptually discriminate objects in the periphery.
This shows that delayed foveal processing is crucial for extra-foveal perception, and highlights the pivotal role of 'constructive' feedback in human vision.
Acknowledgement: This research was supported by the Biotechnology and Biological Sciences Research Council (CDC, David Phillips Fellowship, UK), the Australian Research Council (MAW, Queen Elizabeth II Fellowship), the Wales Institute of Cognitive Neuroscience (CDC) and a Cardiff University International Collaborative Award (MAW/CDC).

21.17, 9:45 am
Guidance of gaze based on color saliency in monkeys with unilateral lesion of primary visual cortex
Masatoshi Yoshida1,2 (myoshi@nips.ac.jp), Laurent Itti3,4, David Berg3,4, Takuro Ikeda1,5, Rikako Kato1,5, Kana Takaura1,2, Tadashi Isa1,2,5; 1Department of Developmental Physiology, National Institute for Physiological Sciences (Okazaki, JAPAN), 2School of Life Science, The Graduate University for Advanced Studies (Hayama, JAPAN), 3Computer Science Department, University of Southern California (Los Angeles, California), 4Neuroscience Graduate Program, University of Southern California (Los Angeles, California), 5CREST, JST (Kawaguchi, Japan)
In the accompanying paper (Itti et al.), we investigated residual visually-guided behavior in monkeys after unilateral ablation of primary visual cortex (V1), to unravel the contributions of V1 to salience computation. We analyzed eye movements of monkeys watching video stimuli, and a computational model of saliency-based, bottom-up attention quantified the monkeys' propensity to attend to salient targets. All monkeys were attracted towards salient stimuli, significantly above chance, for saccades directed both into normal and affected hemifields. We also quantified the contribution of visual attributes (intensity, color, motion and so on) to the saliency-based eye movements and obtained evidence that the monkeys' guidance of gaze was influenced by color saliency. Here we directly examined residual visuomotor processing based on color saliency with color discrimination tasks. In two monkeys after unilateral ablation of V1, isoluminant chromatic stimuli were presented in one of two positions in their affected hemifield. The monkeys were rewarded for making a saccade to the target. A CRT monitor (Mitsubishi DZ21) was used for stimulus presentation and was calibrated with a colorimeter (PR650). The stimuli were defined by the DKL color space, that is, the luminance axis, the L-M axis and the S-(L+M) axis. In both monkeys, the correct ratio was significantly above chance for stimuli with the L-M component and the S-(L+M) component. Control experiments were done to exclude the possibility that a small luminance difference from the background may contribute to the above-chance performance. When a small positive or negative luminance difference (


21.22, 8:30 am
Working Memory for Spatial Relations Among Object Parts
Pamela E. Glosson 1 (glosson2@uiuc.edu), John E. Hummel 1 ; 1 University of Illinois, Urbana-Champaign

It is broadly agreed that the capacity of working memory (WM) is 4±1 items. In the vision literature these items are objects (i.e., bound collections of object features), whereas in higher cognition they are role bindings. The distinction between these accounts becomes clearer in the context of the spatial relations among an object's parts: if parts are items, then the WM capacity required to store the spatial relations among an object's parts should scale with the number, n, of parts (i.e., load = n); but if part-relation bindings are items, then WM load should scale as r·n², where r is the number of relations to be remembered. An intermediate account, according to which relational roles can be "stacked" on object parts, predicts that load should scale simply as n². We ran an experiment investigating WM for spatial relations among object parts, orthogonally varying both the number of parts composing an object and the number of relations the subject was required to remember. The results clearly support the intermediate model, which accounts for 85% of the variance in subjects' accuracy. The higher-cognition model accounts for only 71%, and the "parts as objects" model accounts for 0%. Visual WM load thus appears to scale with n², with no additional cost imposed for additional relations.
Acknowledgement: AFOSR Grant #FA9550-07-1-0147
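The three accounts make distinct quantitative predictions about load. As a quick illustration, the following Python sketch tabulates the predicted loads for a few part/relation counts; the particular values of n and r are illustrative, not the conditions of the experiment.

# Predicted WM load under the three accounts described above, for an
# object with n parts and r to-be-remembered relations.
def load_parts_as_items(n, r):
    return n          # parts are items: load scales with the number of parts

def load_role_bindings(n, r):
    return r * n**2   # part-relation bindings are items

def load_intermediate(n, r):
    return n**2       # relational roles "stacked" on parts

for n, r in [(2, 1), (3, 1), (3, 2), (4, 2)]:   # illustrative values only
    print(n, r, load_parts_as_items(n, r), load_role_bindings(n, r), load_intermediate(n, r))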
21.23, 8:45 am
The limitations of spatial visual short-term memory
Patrick Wilken 1 (pwilken@gmail.com), Ronald van den Berg 2 , Jochen Braun 3 , Wei Ji Ma 2 ; 1 Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Germany, 2 Baylor College of Medicine, One Baylor Plaza, Houston TX 77030, USA, 3 Cognitive Biology Lab, University of Magdeburg, Germany

We present a novel experimental paradigm examining the effects of set size on the encoding of spatial location in visual short-term memory (VSTM). The nature of the structure of VSTM has recently been the subject of intense debate. One group of researchers (e.g., Zhang and Luck, 2008) has argued that changes in performance as a function of set size reflect limits in the number of items that can be encoded in VSTM. In contrast, we along with others (Wilken and Ma, 2004; Bays and Husain, 2008) argue that VSTM performance is limited by internal noise, which itself grows with set size. In our experiment, observers viewed randomly positioned Gaussian "blobs", presented for 100 ms. After a 1000 ms ISI, a second display was shown that was identical to the first except that one blob was missing. The observer's task was to report the memorized location of this "missing" (target) blob. Logically, reports could fall into two categories: (1) those based on a noisy internal representation of the true location of the target; and (2) those in which no information about the target location was used. Accordingly, we fitted a mixture model consisting of a target-centered bivariate Gaussian and a second, approximately uniform distribution centered on the fixation point. The proportion of responses assigned to the first distribution decreased as a function of set size, though significantly more slowly than would be predicted by a "4-item" slot model (e.g., Cowan, 2001). Importantly, we found that the precision of spatial localization decreased monotonically as a function of set size, independent of eccentricity. These results are consistent with a model of spatial VSTM in which responses are limited by a continuous resource that is distributed across all items.
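The two response categories map naturally onto a two-component mixture likelihood. A minimal Python sketch of such a model follows; the mixture weight, Gaussian width, and uniform-response radius used in the demonstration are illustrative assumptions, not fitted values from the study.

import numpy as np

def mixture_loglik(responses, target, w, sigma, arena_radius):
    """Log-likelihood of 2D localization responses under a mixture of a
    target-centered isotropic bivariate Gaussian (probability w) and a
    uniform 'no-information' distribution over a disc around fixation."""
    d2 = np.sum((np.asarray(responses) - np.asarray(target)) ** 2, axis=1)
    p_gauss = np.exp(-d2 / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    p_unif = 1.0 / (np.pi * arena_radius**2)
    return np.sum(np.log(w * p_gauss + (1 - w) * p_unif))

# Illustrative use on fabricated responses scattered around a target;
# fitting would maximize this quantity over w and sigma at each set size.
rng = np.random.default_rng(0)
target = np.array([2.0, -1.0])
responses = target + rng.normal(scale=0.5, size=(50, 2))
print(mixture_loglik(responses, target, w=0.8, sigma=0.5, arena_radius=10.0))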
21.24, 9:00 am
The Primary Visual Cortex as a Modality-Independent 'Screen' for Working Memory
Lora Likova 1 (lora@ski.org); 1 Smith-Kettlewell Eye Research Institute

INTRODUCTION: Increasing evidence suggests that the function of the early visual areas is not restricted to sensory processing. Recently, Harrison & Tong (2009) showed that areas V1-4 play a role in the maintenance of visual information in visual working memory. This result, however, leaves open the question of whether these areas can also support non-visual working memory and, if so, what the implications are for the underlying functional architecture. METHODS: We addressed these questions in an fMRI drawing paradigm with blindfolded subjects in a Siemens 3T scanner. Four experimental conditions, separated by 20 s intervals, were run: Tactile Exploration – line-images (faces and objects) were explored with one hand and remembered to guide a future drawing task; Drawing – subjects had to draw the image remembered from a tactile template presented 20 seconds earlier to the non-drawing hand; Scribbling – a control task of pure hand movements with no image structure or memory component; Copying – drawing the image, but with concurrent access to the tactile template, minimizing demands on memory. The drawing trajectory was recorded with a custom MRI-compatible system incorporating a fiber-optic stylus. RESULTS AND CONCLUSIONS: Of the occipital areas, only V1 was significantly activated in any condition, implying direct top-down feedback from high-level cortex beyond the occipital lobe. Tactile Exploration activated foveal V1 only, as if it were representing the local focus of attention during the exploration. In contrast, tactile-memory-guided Drawing while blindfolded activated full peripheral V1, as if responding to a large-scale visual representation. Frontal, parietal and inferotemporal activations specific to the memory-guided condition provide a basis for extensive top-down involvement in the V1 memory maintenance. The overall pattern of results suggests that V1 can operate as the conscious 'visualization screen' for working memory, even when evoked by non-visual sensory and complex memory-guided tasks.
Acknowledgement: NSF #0846230

21.25, 9:15 am
Decoding individual natural scene representations during perception and imagery
Matthew Johnson 1 (matthew.r.johnson@yale.edu), Marcia Johnson 1,2 ; 1 Interdepartmental Neuroscience Program, Yale University, 2 Department of Psychology, Yale University

Previous neuroimaging results indicate that reflective processes such as working memory and mental imagery can increase activity in category-selective extrastriate (CSE) cortical areas whose preferred category is that of the item held in mind. Recent classification studies also show that some CSE areas demonstrate exemplar-specific activity during perception for items of the preferred category, and that early visual cortex contains information about simple stimuli (e.g., oriented gratings) held in working memory. The aim of the present study was to determine to what extent item-specific information about complex natural scenes is represented in different human cortical areas (both in early visual cortex and in several scene-selective extrastriate areas) during both perception and visual mental imagery. We used a multi-voxel classification analysis of moderately high-resolution fMRI data and found item-specific scene information represented in multiple areas including middle occipital gyrus (MOG), parahippocampal place area (PPA), retrosplenial cortex (RSC), and precuneus/intraparietal sulcus (PCu/IPS). Furthermore, item-specific information from perceiving scenes was partially re-instantiated during mental imagery of the same scenes. In addition, we examined voxels in the fusiform face area (FFA) and found that, despite the area's preference for face stimuli, item-specific scene information was represented there as well. These results suggest that (1) item-specific natural scene information is carried in both scene- and face-selective extrastriate areas during perception, and (2) activity induced in CSE areas by reflective tasks such as mental imagery does carry information relevant to maintaining the specific representation in question.
Acknowledgement: National Institute on Aging, National Science Foundation

21.26, 9:30 am
Visual working memory information in foveal retinotopic cortex during the delay
Won Mok Shim 1,2,3 (wshim@mit.edu), Nancy Kanwisher 2,3 ; 1 Psychological and Brain Sciences, Dartmouth College, 2 Brain and Cognitive Sciences, MIT, 3 McGovern Institute for Brain Research, MIT

Several studies have implicated early retinotopic cortex in the storage of low-level visual information in working memory over a delay. An intuitive interpretation of these findings is that visual working memory for visual features reflects a continuation of activity (whether synaptic or spiking) in the neural populations that originally responded to the stimulus perceptually. Here we propose and test a more radical hypothesis, based on our recent discovery of a novel form of feedback in the visual system (Williams et al., 2008): that visual working memory information is represented in foveal retinotopic cortex, no matter where in the visual field the stimulus was first presented. To test this hypothesis, we used fMRI and pattern classification methods to examine whether the foveal area contains information about objects presented in the periphery during a working memory delay. During each trial, a single memory sample (drawn from one of two categories of novel 3D objects) was briefly presented in a peripheral location, followed by a 14 s delay period. A probe appeared subsequently, followed by another 14 s interval, and subjects reported whether it was identical to or different from the memory sample. The results show that the pattern of fMRI responses in foveal cortex, where no stimulus was presented, contains position-invariant information about the category of objects presented in a peripheral location during the delay period, but not during the inter-trial interval when no stimulus is held in memory. Furthermore, over the course of the delay period, object information decreases at the location in retinotopic cortex corresponding to the stimulus location, whereas it increases in the foveal region, suggesting a transfer of visual information from the stimulus location to foveal cortex. These findings indicate that working memory information arises in foveal retinotopic cortex regardless of where the stimulus is presented.
Acknowledgement: EY13455

21.27, 9:45 am
Cortical anatomy relates to individual differences in dissociable aspects of attention and visual working memory capacity
Maro Machizawa 1,2 (m.machizawa@ucl.ac.uk), Ryota Kanai 2 , Geraint Rees 2,3 , Jon Driver 2,4 ; 1 UCL Institute of Neurology, 2 UCL Institute of Cognitive Neuroscience, 3 Wellcome Trust Centre for Neuroimaging, 4 UCL Department of Psychology

Attention and working memory are important aspects of visual cognition, for which brain networks and individual differences have been studied extensively with functional neuroimaging. Here we instead related behavioural measures of these functions to brain anatomy. We studied 39 healthy adult participants who performed five tasks: the Attentional Network Test (ANT), two separate measures of visual working memory precision, a measure of visual working memory capacity, and a test of filtering efficiency. A principal component analysis on the behavioural measures yielded three main components: precision of visual working memory; executive function, which loaded with filtering inefficiency; and working memory capacity, which loaded with attentional measures. Each participant also underwent structural MRI scanning. Voxel-based morphometry analyses revealed that gray matter density in the basal ganglia, anterior intraparietal sulcus, middle frontal gyrus and visual cortex was positively correlated with precision of visual working memory; executive function correlated with gray matter density in the precentral gyrus; and visual working memory capacity correlated with gray matter density in the middle frontal gyrus and frontal eye field. A negative correlation was found between gray matter density in the mid-cingulate gyrus and visual working memory precision.
These findings identify separable contributory components of visual working memory and attention, both behaviourally and in the structural anatomy of the human brain.

Multisensory processing
Saturday, May 8, 11:00 - 12:30 pm
Talk Session, Royal Ballroom 1-3
Moderator: Paola Binda

22.11, 11:00 am
Touch disambiguates rivalrous perception at early stages of visual analysis
Paola Binda 1,2 (p.binda1@studenti.hsr.it), Claudia Lunghi 3,4 , Concetta Morrone 5,6 ; 1 Department of Psychology, Università Vita-Salute San Raffaele (Milano, Italy), 2 Research Unit of Molecular Neuroscience, IIT Network, Italian Institute of Technology (Genova, Italy), 3 Scientific Institute Stella Maris (Pisa, Italy), 4 Institute of Neuroscience, CNR (Pisa, Italy), 5 Department of Physiological Sciences, Università di Pisa (Pisa, Italy), 6 RBCS unit, Italian Institute of Technology (Genova, Italy)

Signals arising from different sensory modalities are integrated into a coherent percept of the external world. Here we tested whether the integration of haptic and visual signals can occur at an early level of analysis by investigating the effect of touch on binocular rivalry. Visual stimuli were orthogonal (vertical or horizontal) Gabor patches (spatial frequency: 3.5 cpd or 5 cycles/cm; patch size: 1.5 deg; contrast: 45%), presented foveally against a grey background (7.8 cd/m²) and displayed alternately to the two eyes through ferro-magnetic shutter goggles (driven at the monitor frame rate, 120 Hz). At random intervals subjects briefly (~3 s) explored a haptic stimulus (sinusoidally milled Plexiglas, 5 cycles/cm) oriented vertically or horizontally. The task was to report the perceived orientation of the visual stimulus. We measured the probability of a perceptual switch during haptic stimulation and during periods of visual-only stimulation of comparable duration. When the orientation of the haptic stimulus was orthogonal to the dominant visual percept, perception switched towards the haptic orientation; the probability of a switch was significantly higher than during visual-only stimulation. Similarly, when the haptic orientation was parallel to the dominant visual percept, maintenance of that percept was significantly more probable. We repeated the experiment varying the spatial frequency of the visual (1.3 and 3 cycles/cm) and haptic stimuli (1.3, 2, 3 and 4 cycles/cm) and showed that the effect is spatial-frequency tuned, occurring only when the visuo-haptic spatial frequencies coincided. Our results indicate that a visual stimulus rendered invisible by binocular rivalry suppression can nonetheless revert to consciousness when boosted by a concomitant haptic signal of congruent orientation and spatial frequency. Given that suppression is thought to occur early in visual analysis, our results suggest that haptic signals modulate early visual processing.
Acknowledgement: Italian Ministry of Universities & EC projects MEMORY and STANIB

22.12, 11:15 am
The common 2-3 Hz limit of binding synchronous signals across different sensory attributes reveals a slow universal temporal binding process
Shin'ya Nishida 1 (nishida@brl.ntt.co.jp), Waka Fujisaki 2 ; 1 NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, 2 National Institute of Advanced Industrial Science and Technology (AIST)

The human brain processes different aspects of the surrounding environment through multiple sensory modalities (e.g., vision, audition, touch), and each modality can be subdivided into multiple attribute-specific channels (e.g., color, motion, form).
When the brain re-binds sensory signals across different channels, temporal coincidence, along with spatial coincidence, provides a critical binding cue. It remains unknown, however, whether the neural mechanisms for binding synchronous attributes are specific to each attribute combination, or universal and central. In a series of human psychophysical experiments, we examined how combinations of visual, auditory, and tactile attributes affect the temporal binding of attribute-value combinations. Observers discriminated the phase relationship of two repetitive sequences. Each sequence was an alternation of two attribute values (e.g., red/green, high/low pitches, left/right finger vibrations). The alternation was always synchronized between the two sequences, but the pairing of attribute values changed between the two phase conditions (in-phase or 180-deg out-of-phase). We measured the upper temporal-frequency limit for performing this binding discrimination task. The results indicated that the temporal limits of cross-attribute binding were relatively low in comparison with those of within-attribute binding. Furthermore, they were surprisingly similar (2-3 Hz) for any combination of visual, auditory and tactile attributes. The cross-attribute binding limits remained low and invariant even when we increased the stimulus intensity or adjusted the relative timing of the two sequences. They are unlikely to reflect limits on judging synchrony, since the temporal limits of a comparable cross-attribute synchrony task (Fujisaki & Nishida, 2005; 2009) were higher and varied more with the modality combination (4-9 Hz). These findings suggest that cross-attribute temporal binding is mediated by a slow central process universal to all attribute combinations. We conjecture that the 'what' and 'when' properties of a single event are first processed separately, and then combined in this slow, central, universal process.

22.13, 11:30 am
Dynamic Grapheme-Color Synesthesia
Bruce Bridgeman 1 (bruceb@ucsc.edu), Philip Tseng 1 , Dorina Winter 2 ; 1 Department of Psychology, University of California, Santa Cruz, 2 Institut für Psychologie, Universität Göttingen

In grapheme-color synesthesia, observers perceive colors that are associated with letters and numbers. We tested the dynamic properties of this phenomenon by exposing two synesthetes to characters that rotate smoothly, that morph into other characters, that disappear abruptly, or that have colors either consistent or inconsistent with the corresponding synesthetic color. First we tested our observers for color identifications on all letters of the alphabet, numbers up to 12, and Roman numerals. Two tests more than 48 hours apart were in 100% agreement, showing that our subjects were true synesthetes. Peripheral crowding eliminated synesthetic color,


so our synesthetes were both associators, not projectors. Rotating letters at 36 deg/sec changed their synesthetic colors abruptly as letter identification changed or failed. Rotated characters, in a constant-width sans-serif font, were N, M, T, A, H, S, K and X, and the number 9. These characters were chosen to have varying symmetries and varying letter transformations under rotation; rotating M, for example, produced a W at 180 deg and an E at 270 deg, with the corresponding changes in synesthetic color. Morphing letters also changed color together with a change in letter identification, for example P to R with growth of the diagonal line element. The transformations were E-F, P-R, and I-J. Abrupt disappearance of a colored character on a white background yielded a negative color afterimage, but maintenance of the same synesthetic color. Our synesthetes could maintain both physical and synesthetic color in the same character, without conflict. Neon color spreading in one observer occurred for physical but not synesthetic color, in the enclosed regions of the number 8. These results show close linking of synesthetic color with character identity rather than image properties, in contrast to physical color.

22.14, 11:45 am
Influence of asynchrony on the perception of visual-haptic compliance
Massimiliano Di Luca 1 (max@tuebingen.mpg.de), Benjamin Knörlein 2 , Matthias Harders 2 , Marc Ernst 1 ; 1 Max Planck Institute for Biological Cybernetics, 2 Computer Vision Laboratory, ETH Zurich

Compliance of deformable materials is perceived through signals about resistive force and displacement. During visual-haptic interactions, visual and proprioceptive signals about material displacement are combined over time with the force signal. Here we asked whether multisensory compliance perception is affected by the timing of these signals, by introducing an asynchrony between the participant's movement (sensed proprioceptively) and force information or visual information. Visual-proprioceptive asynchronies were obtained by having participants see a delayed video of their haptic interaction with an object rather than the real interaction. Force-proprioceptive asynchronies were instead obtained by having participants compress a virtual object with their hand and sense the resistive force generated by a force-feedback device. Results indicate that force-proprioceptive asynchronies can significantly alter the perception of object stiffness. Moreover, we find that perceived compliance also changes as a function of the delay of visual information. These effects of asynchrony on perceived compliance would not be present if all force-displacement information were utilized equally over time, as both delays generate a bias in compliance that is opposite in the compression and release phases of the interaction. To explain these findings we hypothesized instead that information obtained during object compression is weighted more than information obtained during object release, and that visual and proprioceptive information about hand position are used for compliance perception depending on the relative reliability of the resulting estimates. We confirm these hypotheses by showing that sensitivity to compliance is much higher during object compression and that degradation of visual and proprioceptive information can modify the weights assigned to the two sources.
Moreover, by analyzing participants' movements and feedback forces we show that the two hypothesized factors (compression-release and visual-proprioceptive reliability) can account for the change in perceived compliance due to force-proprioceptive and force-displacement asynchronies.
Acknowledgement: This work has been supported by the EU project Immersence IST-2006-27141 and the SNSF, and it was performed within the frame of NCCR Co-Me.

22.15, 12:00 pm
Visual perception of motion produced solely by kinesthesia
Kevin Dieter 1 (kdieter@bcs.rochester.edu), Randolph Blake 2,3 , Duje Tadin 1 ; 1 Center for Visual Science, University of Rochester, 2 Vanderbilt Vision Research Center, Vanderbilt University, 3 Brain and Cognitive Sciences, Seoul National University

We all experience repeated, reliable pairings of self-generated movements that result in visual sensations. Simply wave your hand in front of your open eyes and you will invariably perceive visual motion. Given this consistent pairing of kinesthetic and visual sensations, is it possible that experiencing only one of the paired sensations could give rise to the other? We tested this possibility in three groups of blindfolded volunteers. People in Group A slowly waved their own hand in front of their face, those in Group B waved a cutout of a hand in front of their face, and in Group C the experimenter waved his hand in front of the participant's blindfold. Participants rated the resulting visual experience on a six-point scale ranging from "no visual experience" to "I perceive an outline of a moving hand." Importantly, participants were given two successive test trials, and deceptive instructions explicitly led them to expect no visual sensation on one of the two trials. This created a bias against our proposed hypothesis. Nevertheless, people in Group A reported substantially higher visual ratings than those in Group C (main effect of group: F(2, 85)=5.72, p=0.005; planned t-test: t(55)=3.37, p=0.001). Ratings from Group B fell between those from Groups A and C. Importantly, Group A participants who reported perceiving visual motion did so on both trials (t(27)=0.76, p=0.46), despite expecting no visual sensation on one of the trials. Preliminary data suggest that this illusory visual motion perception may be stronger when the dominant hand is used. In conclusion, we show that self-generated movements are sufficient to yield visual sensations when executed in a way that typically results in the reliable pairing of vision and kinesthesia. We are currently exploring whether illusory visual motion is stronger in expert musicians, who have had years of training on intricate hand motions.

22.16, 12:15 pm
Efficient visual search from synchronized auditory signals requires transient audiovisual events
Erik Van der Burg 1 (e.van.der.burg@psy.vu.nl), John Cass 2 , Christian Olivers 1 , Jan Theeuwes 1 , David Alais 2 ; 1 Cognitive Psychology, Vrije Universiteit Amsterdam, Netherlands, 2 School of Psychology, University of Sydney, Australia

A prevailing view is that audiovisual integration requires temporally coincident signals. Here we demonstrate that audiovisual temporal coincidence alone (i.e., synchrony) does not necessarily lead to audiovisual binding. In visual search experiments, subjects found a modulating visual target vastly more efficiently when it was paired with a synchronous auditory signal. By manipulating the shape of the temporal modulation (sine-wave vs. square-wave vs.
difference-wave; harmonic sine-wave synthesis; gradient of onset/offset ramps), we show that abrupt audiovisual events are required for this search efficiency to occur, and that sinusoidal audiovisual modulations do not support efficient search. Thus, temporal alignment will only lead to audiovisual integration if the changes in the component signals are both synchronized and transient. We propose that transient signals are necessary in synchrony-driven binding to avoid spurious integration when unrelated signals occur close together in time.

Motion: Perception
Saturday, May 8, 11:00 - 12:45 pm
Talk Session, Royal Ballroom 4-5
Moderator: Scott Stevenson

22.21, 11:00 am
The vestibular frame for visual perception of head rotation
Albert V. van den Berg 1,2 (a.v.vandenberg@uu.nl), David Arnoldussen 2 , Jeroen Goossens 2 ; 1 Functional Neurobiology, Helmholtz Institute, Utrecht University, The Netherlands, 2 Dept. Biophysics, Donders Institute for Brain, Cognition and Behaviour, Centre for Neuroscience, Radboud University Nijmegen Medical Centre

Visual flow provides the brain with important information about the changing orientation of eye, head, and body, and about the direction of movement. The visual flow related to translation and rotation of the eye is processed in extrastriate areas in combination with an extra-retinal signal such as eye-in-head movement. Previously we showed that the putative human homologue of monkey area MST includes a subregion with BOLD signals that represent the (simulated) rotation of the subject's head. Here we investigate the 3D organisation of this capacity. We simulated forward motion through a 3D cloud of dots along a sinusoidal trajectory. Thus, the gaze line rotated relative to the environment about an axis perpendicular to the plane of the trajectory. As in our previous study, we decoupled the retinal rotation (as determined by the gaze rotation) from the simulated head rotation about the same axis by combining identical gaze rotation with different eye pursuit conditions. We varied the axis of rotation between vertical and various axes in the horizontal plane of the head. Using wide-field (120 deg diameter) presentation of such stimuli to 6 subjects, and recording BOLD signals at 1.0 mm resolution (Siemens 3T), we observed distinct locations within human pMST for the vertical axis and for two horizontal axes aligned with the posterior and anterior canals of the vestibular system. These subregions were characterised by BOLD signals that varied in proportion to the simulated speed of head rotation, with no modulation by the gaze rotation. The same two areas related to the horizontal vestibular axes were activated by simulated head pitch. These data indicate that the processing of visual flow and eye-in-head movement signals to represent head rotation is arranged in a vestibular frame of reference. We present perceptual evidence to probe this notion further.
Acknowledgement: NWO-ALW Grant 818.02.006 (AvB)

22.22, 11:15 am
Suppression of retinal image motion due to fixation jitter is directionally biased
Scott Stevenson 1 (SBStevenson@uh.edu), David Arathorn 2 , Qiang Yang 2 , Pavan Tiruveedhula 3 , Nicole Putnam 3 , Austin Roorda 3 ; 1 College of Optometry, University of Houston, 2 Center for Computational Biology, Montana State University, 3 School of Optometry, University of California, Berkeley

Background: Although relative motion thresholds are just a few seconds of arc for adjacent targets (McKee et al., 1990), the overall image motion due to fixation jitter is typically not perceived. An internal copy of the efferent eye movement commands influences motion perception for larger eye movements, but may not have the precision required to correct for fixation jitter. Alternatively, overall retinal image motion may be sensed by early visual mechanisms and then suppressed in perception (Murakami & Cavanagh, 1998). Here we ask whether fixation jitter motion suppression is sensitive to the relative direction of eye and retinal image motion. Methods: We decoupled retinal image motion from eye motion using a modified stabilized-image technique. An AOSLO with target stabilization was used for both eye tracking and target presentation. Eye motion was recorded and fed back into the target position in real time (~4 ms delay) so that the target moved with the eye (conventional stabilization, gain = 1), with the eye but faster (gain = 2 to 4), opposite the eye (gain = -1 to -4), or in a different direction at various gains. Subjects used a matching procedure to set a conventionally viewed jittering target (flat velocity spectrum) to have the same apparent average excursion as the modified stabilized target. Each appeared in unstabilized square 2-degree frames. Results: Gain and direction had a strong effect on the perceived motion of the target. Conventionally stabilized targets faded rapidly, as expected. Higher positive-gain motion resulted in greater perceived motion, again as expected. Surprisingly, all motions that were in the opposite direction of eye motion (negative gains) appeared stationary or only very slightly moving. Conclusions: Suppression of retinal image motion due to fixation eye movements incorporates information about the current direction of eye motion, as well as the direction of stimulus motion.
Acknowledgement: Supported by NSF AST-9876783 through the UC Santa Cruz Center for Adaptive Optics
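In this gain-feedback paradigm the retinal consequence of each gain follows directly from the geometry: if the target is drawn at gain times the eye position, its motion on the retina is (gain - 1) times the eye motion. A minimal Python sketch (the eye trace is fabricated for illustration):

import numpy as np

def retinal_slip(eye_trace, gain):
    """Retinal motion of a gain-stabilized target: the target is drawn at
    gain * eye position, so its retinal position is (gain - 1) * eye.
    gain = 1 stabilizes (and fades) the image; negative gains move the
    target opposite to the eye."""
    return (gain - 1.0) * np.asarray(eye_trace, dtype=float)

eye = np.cumsum(np.random.default_rng(1).normal(size=200))  # fabricated jitter
for g in (1, 2, -1, -2):
    print(g, np.std(retinal_slip(eye, g)))  # slip amplitude scales as |g - 1|

Note that the negative-gain conditions actually produce the largest retinal slip, which makes the reported percept of near-stationarity at negative gains all the more striking.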
22.23, 11:30 am
Comparing the static and flicker MAEs with a cancellation technique in adaptation stimuli
Satoshi Shioiri 1 (shioiri@riec.tohoku.ac.jp), Kazumichi Matsumiya 1 ; 1 Research Institute of Electrical Communication, Tohoku University

[Purpose] After exposure to superimposed sinusoidal gratings of different spatial frequencies moving in opposite directions, the motion aftereffect (MAE) of the high-spatial-frequency grating is seen with a static test, while that of the low-spatial-frequency grating is seen with a flicker test. We interpreted these MAEs by assuming slow and fast motion systems that differ in the temporal frequency selectivity of MAE durations (Shioiri and Matsumiya, 2009). The purpose of this study is to confirm that assumption using a technique that estimates the contrast sensitivity of the two systems from the MAE. The technique varies the contrast of one of the two superimposed gratings during adaptation to find the condition in which no MAE is perceived. By varying the temporal frequency of either grating, the temporal tuning of each motion system can be estimated. [Experiment] The spatial frequencies of the gratings were 0.53 c/deg and 2.1 c/deg. After 5 s of adaptation, the observer judged the MAE direction in a stationary or flickering (4 Hz) test stimulus. Depending on the response, the contrast of one of the gratings was changed so that the MAE would be weaker. The contrast yielding no MAE was obtained with a staircase procedure, providing the contrast of that grating equivalent to the fixed contrast of the other. The temporal frequency of the 2.1 c/deg (or 0.53 c/deg) grating was varied between 0.63 and 20 Hz during adaptation when the static (or flicker) test was used, to investigate the MAE strength of the slow (or fast) motion system. [Results] The static and flicker MAEs showed different dependencies on temporal frequency: the static MAE peaked at a lower temporal frequency than the flicker MAE, as had been shown previously with MAE duration measurements. This indicates that the dependency of contrast sensitivity on temporal frequency differs between the two motion systems.
Acknowledgement: KAKENHI (B) 18330153

22.24, 11:45 am
The neural correlates of motion streaks: an fMRI study
Deborah Apthorp 1 (deboraha@psych.usyd.edu.au), Bahador Bahrami 2,3 , Christian Kaul 2,3 , D. Samuel Schwarzkopf 2,3 , David Alais 1 , Geraint Rees 2,3 ; 1 School of Psychology, University of Sydney, 2 Institute of Cognitive Neuroscience, University College London, 3 Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London

Aim: Due to temporal integration in the visual system, a fast-moving object can generate a static, oriented trace (a 'motion streak'). These are generally not seen, but might be used to judge direction of motion more accurately (Geisler, 1999). Psychophysics and single-unit studies support this hypothesis, but no physiological evidence from the human brain has yet been provided. Here we use functional magnetic resonance imaging combined with standard univariate as well as multivariate pattern classification techniques to investigate the neural correlates of motion streaks. Method: Observers viewed fast ('streaky') or slow-moving dot fields, moving at either 45 or 135 degrees, or static, oriented patterns (filtered noise) at the same orientations, while performing a fixation task in the scanner (3T, high-resolution sequence, 1.5 x 1.5 x 1.5 mm, 32 slices, TR = 3.2 s). Ten sessions, each with 6 blocks (randomized block design), gave 10 blocks for each stimulus type. Results: Initial univariate group analysis in SPM5 showed greater activation in early cortical areas (V1, V2 and V3) when comparing fast to slow motion, but no increased activation in V5/MT+; the pattern of activity was similar to that seen when comparing static, oriented conditions to fixation rest. A multivariate pattern classifier trained on brain activity evoked by static, oriented patterns could successfully generalize to decoding brain activity evoked by fast but not slow motion sessions. These results suggest that static, oriented 'streak' information is indeed present in human early visual cortex when viewing fast motion.
Acknowledgement: Australian Federation of University Women, University of Sydney, Wellcome Trust
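The cross-generalization logic of the classifier analysis (train on the orientation of static patterns, test on motion-evoked activity) can be sketched with generic MVPA tooling. A minimal Python illustration using scikit-learn follows; the arrays are fabricated stand-ins for voxel patterns, and the linear SVM is a common default rather than a detail reported in the abstract.

import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_blocks, n_voxels = 20, 100

# Fabricated voxel patterns; labels code stimulus orientation (45 vs 135 deg).
static_X = rng.normal(size=(n_blocks, n_voxels))
static_y = rng.integers(0, 2, size=n_blocks)
motion_X = rng.normal(size=(n_blocks, n_voxels))   # fast-motion blocks
motion_y = rng.integers(0, 2, size=n_blocks)

# Train on static-orientation patterns, test on motion-evoked patterns.
clf = LinearSVC().fit(static_X, static_y)
print("cross-decoding accuracy:", clf.score(motion_X, motion_y))
# Above-chance accuracy would indicate orientation ("streak") information
# shared across the two stimulus classes; with random data it hovers near 0.5.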
22.25, 12:00 pm
Perception of motion from the combination of temporal luminance ramping and spatial luminance gradients
Peter Scarfe 1 (p.scarfe@ucl.ac.uk), Alan Johnston 1 ; 1 Cognitive, Perceptual and Brain Sciences, University College London, London, UK

It has been shown previously that illusory motion is seen when local temporal ramp aftereffects are viewed slightly out of register with a static display of light or dark regions (Anstis, 1990, Perception, 19, 301-306). In the research presented here we investigated the apparent motion produced by aftereffects such as these. In a first experiment, observers adapted to a radial pattern of regions whose luminance ramped lighter or darker, which were then replaced by static luminance gradients. The combination of temporally ramping luminance aftereffects and physically present luminance gradients induced clear rotational motion. The speed of this rotation was measured in a binary choice task. The speed of rotation was very regularly related to the magnitude of the luminance gradient: shallower gradients resulted in faster rates of rotation, but the ramping rate during adaptation had no effect on the speed of perceived rotation. In a second experiment we adapted observers to a radially interleaved, spatially separated pattern of static spatial luminance gradients and temporal luminance ramps. After adaptation we presented observers with a static, uniformly mid-grey circle. Although this test pattern contained no physical luminance change, either spatially or temporally, observers perceived radial expanding or contracting motion, which was dependent on the direction of the temporal luminance ramping during adaptation. This suggests that temporal ramp aftereffects and spatial gradient aftereffects were spatially integrated to produce illusory motion. Overall, our results point to a precise integration of temporal luminance ramping and spatial luminance gradients in the computation of image motion, whether these are physically present or in the form of perceptual aftereffects.
Acknowledgement: BBSRC

22.26, 12:15 pm
Position-variant perception of a novel ambiguous motion field
Andrew Rider 1,2 (a.rider@ucl.ac.uk), Alan Johnston 1,2 , Shin'ya Nishida 3 ; 1 Cognitive, Perceptual and Brain Sciences, University College London, 2 CoMPLEX, University College London, 3 NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation

Observers can extract the translational motion of Gabor arrays (static Gaussian-windowed drifting sine gratings) when the velocities of the individual Gabors are consistent with a global solution (Amano et al., 2009; doi:10.1167/9.3.4). The ambiguity in the motion of the Gabors (the aperture problem) is overcome by pooling over space and orientation. We have shown that observers can perform a similar disambiguation for rotating and expanding stimuli, where a large-field pooling algorithm for computing global translation would be uninformative (Rider and Johnston, 2008, ECVP). Models of global complex motion encoding typically involve three stages: local motion extraction, pooling to provide unambiguous 2D estimates of local motion, and a third stage that uses these estimates to calculate the global complex motion percept. We developed a novel stimulus that is theoretically ambiguous at all three stages. The orientations of an array of Gabors are chosen to be orthogonal to their position vectors relative to the centre of the array, and hence form concentric ring patterns. The drift speeds are then set to be consistent with a rigid translation, but this means the arrays are also consistent with an infinite number of rotations. Subjects were shown these arrays at a number of positions in the visual field and adjusted the motion of a surrounding array of plaid patches to match the perceived motion of the Gabor array. We found that the stimuli were perceived as translating, rotating clockwise or rotating anticlockwise depending on their position in the visual field, although conventional models predict translation only. We propose an explanation in which local 1D motion estimates are used directly in computing the global rotation without being locally disambiguated. This implies a novel mechanism for solving the aperture problem that uses global rotation templates.
Acknowledgement: Supported by the EPSRC and BBSRC.

22.27, 12:30 pm
Perceptual grouping of ambiguous motion
Stuart Anstis 1 (sanstis@ucsd.edu); 1 Dept of Psychology, UC San Diego

Introduction. What are the rules of common fate? How are spots that move in different directions grouped perceptually? Method. A pair of spots, separated by 2°, rotates about their common centre at 1 rps. Four such pairs spin in synchrony at the corners of an imaginary square of side 8°. Results. On first viewing, observers report four spinning pairs (local motion), but after 5-20 s the percept suddenly changes to two overlapping 8° squares circling around (global motion). Thereafter, global motion tends to predominate. Factors that increase local motion include: gazing straight at a spinner; proximity – putting the two spots in a spinner closer together;
orientation – replacing the spots within a spinner by two radial or tangential dashes (as if painted on an invisible disk); luminance – making each spot-pair a different grey; and increasing the number of spots in each spinner from 2 up to 3 or 4. Factors that increase global motion include: viewing spinners in peripheral vision; moving the two spots in a spinner further apart; orientation – replacing the spots with two floating lines that remain horizontal (or vertical) as they spin; luminance polarity – on a grey surround, making the four spots defining an 8° square (one spot from each pair) black and the remaining spots white; and increasing the number of spinners from 4 to 8. Conclusions. It is a more parsimonious perceptual hypothesis to group the data from the motion array into only two objects (squares) moving globally, rather than into four objects (spinners) moving locally.
Acknowledgement: UCSD Senate


Saturday Morning Posters

Spatial vision: Image statistics and texture
Royal Ballroom 6-8, Boards 301–312
Saturday, May 8, 8:30 - 12:30 pm

23.301 The Role of Higher-Order Statistics in Naturalistic Texture Segmentation: Modelling Psychophysical Data
Elizabeth Arsenault 1 (elizabeth.arsenault@mail.mcgill.ca), Curtis Baker 1 ; 1 McGill Vision Research

Some texture boundaries are easier to segment than others, and characterizing these discrepancies is an important step in understanding the neural mechanisms of texture segmentation. Previously we demonstrated (Baker et al., VSS 2008) that contrast-boundary segmentation thresholds in natural textures decrease when the higher-order statistics are removed by phase scrambling. We also demonstrated (Arsenault et al., VSS 2009) that naturalistic synthetic textures are subject to this phase-scrambling effect, and we were able to determine that some higher-order statistics are more important than others. Here we sought to examine the extent to which a standard two-stage (filter-rectify-filter) model can account for the observed psychophysical data. Stimuli were naturalistic textures extracted from high-resolution monochrome photographs of natural scenes. Mean luminance and RMS contrast were fixed. A half-disc contrast modulation was applied to each texture to create a left- or right-oblique boundary. The first stage of the model consisted of a bank of Gabor spatial filters at a range of orientations and spatial frequencies. Each texture was convolved with this filter bank, subjected to a power-law nonlinearity, pooled, and passed through left- and right-oblique second-stage filters. The model simulated trial-by-trial results by making a 2AFC decision on the boundary orientation based on the second-stage filter response magnitudes with additive noise. As in the psychophysical experiment, modulation-depth thresholds were obtained for two conditions: phase-scrambled and intact. The model is capable of producing results qualitatively similar to those measured in human observers: phase scrambling improves segmentation thresholds. This improvement can occur over a range of noise levels, power-law exponents (both compressive and expansive) and model architectures (both early and late pooling). These results suggest that a two-stage model, like human observers, can be sensitive to higher-order image statistics.
Acknowledgement: Supported by Canadian NSERC grant #OPG0001978 to C.B.
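The filter-rectify-filter pipeline described above is compact enough to sketch directly. The following Python code (NumPy/SciPy) implements a stripped-down version with one first-stage scale, a power-law rectifier, and two oblique second-stage filters; all filter parameters are illustrative choices, not those of the study, and the input is assumed to be a square array.

import numpy as np
from scipy.signal import fftconvolve

def gabor(size, sf, theta, sigma):
    """Odd-symmetric Gabor patch; sf is in cycles per pixel."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2.0 * sigma**2)) * np.sin(2 * np.pi * sf * xr)

def frf_decision(texture, exponent=2.0, noise_frac=0.05, rng=None):
    """Two-stage (filter-rectify-filter) 2AFC decision on boundary orientation.

    Stage 1: a bank of small oriented Gabors, power-law rectification, and
    pooling across orientation. Stage 2: large left-/right-oblique filters
    applied to the pooled response map. Returns +1 for a right-oblique
    boundary, -1 for left-oblique; decision noise is additive.
    """
    rng = rng or np.random.default_rng()
    stage1 = sum(
        np.abs(fftconvolve(texture, gabor(15, 0.25, th, 3.0), mode="same")) ** exponent
        for th in np.linspace(0.0, np.pi, 4, endpoint=False))
    stage1 = stage1 - stage1.mean()            # remove DC before stage 2
    n = texture.shape[0]
    right = np.sum(stage1 * gabor(n, 1.0 / n, np.pi / 4, n / 4.0))
    left = np.sum(stage1 * gabor(n, 1.0 / n, 3 * np.pi / 4, n / 4.0))
    noise = rng.normal(scale=noise_frac * (abs(right) + abs(left)) + 1e-12)
    return np.sign(right - left + noise)

Running such a model at a range of modulation depths, exactly as one would run an observer, yields the simulated thresholds that the abstract compares against human data.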
We observed cleardifferences in the form of the contour probability function when the contributoryedge elements were closely spaced within the classical receptivefield, which led to symmetric AND-like interactions between edge cues,compared to the more asymmetric “contextual” interactions seen betweenflanker and central cues at greater spatial separations.Acknowledgement: This work is supported by NEI grant EY016093Saturday Morning Posters23.303 Natural scenes statistics and visual saliencyJinhua Xu 1 (jhxu1008@yahoo.com), Joe Tsien 1,2 , Zhiyong Yang 1,3 ; 1 Brain andBehavior Discovery Institute, Medical College of Georgia, Augusta, GA 30912,2 Department of Neurology, Medical College of Georgia, Augusta, GA 30912,3 Department of Ophthalmology, Medical College of Georgia, Augusta, GA30912Visual saliency is the perceptual quality that makes some items in visualscenes stand out from their immediate contexts. Visual saliency playsimportant roles in natural vision in that saliency can direct eye movement,deploy attention, and facilitate object detection and scene understanding.Natural visual scenes consist of objects of various physical properties thatare arranged in three dimensional space in a variety of ways. When projectedonto the retina, visual scenes entail highly structured statistics, occurringover the full range natural variation in the world. Thus, a given visualfeature could appear in many different ways and in a variety of contexts innatural scenes. Dealing effectively with these enormous variations in visualfeature and their contexts is a paramount requirement for routinely successfulbehaviors. Thus, for visual saliency to have any biological utility fornatural vision, it has to tie to the statistics of natural variations of visual featuresand the statistics of co-occurrences of natural contexts. Therefore, wepropose to explore and test a novel, broad hypothesis that visual saliencyis based on efficient neural representations of the probability distributions(PDs) of visual variables in specific contexts in natural scenes, referred to ascontext-mediated PDs in natural scenes.We first develop efficient representations of context-mediated PDs of arange of basic visual variables in natural scenes. We derive these PDs fromthe Netherland database of natural scenes and the McGill dataset of naturalcolor images using independent component analysis. We then derive ameasure of visual saliency based on context-mediated PDs in natural scenes.Experimental results show that visual saliency derived in this way predictsa wide range of perceptual observations related to texture perception, popout,saliency-based attention, and visual search in natural scenes.23.304 Classification Images for Search in Natural ImagesSheng Zhang 1 (s_zhang@psych.ucsb.edu), Craig Abbey 1 , Miguel Eckstein 1 ;1 Psychology Department, UC Santa Barbara, Santa Barbara CA, USAPurpose: Previous studies have proposed estimation procedures for classificationimages in white and correlated noise (Abbey & Eckstein 2006).Here, we investigate methods to estimate classification images (linear template)for search of targets embedded in natural images. Methods: Observerssearched for an additive Gaussian luminance target in one of four locations(4 alternative forced choice) within 3000 calibrated natural scenes (vanHateren & van der Schaaf, 1998). 
We compute classification images usingvarious methods including genetic algorithms (GA; Castella et al., 2007),support vector machines (SVM; Jakel et al., 2009) and weighted averagingof prewhitened noise fields (Abbey & Eckstein, 2002). All methods reliedon a limited set of Gabor basis functions. We compare human classificationimages to optimal linear templates also estimated using GA and SVM.Results: GA and SVM methods result in similar estimates of the optimallinear templates. Average observer performance was higher than the estimatedhuman classification images and optimal linear templates. For allthree observers, the estimated linear templates were similar to the targetbut contain inhibitory surroundings. Conclusions: We extend previousclassification image methods to search for a target embedded in naturalscenes and explore computational procedures for reliable estimation. Thepresence of inhibitory surrounds in human classification images reflectsa strategy to optimize detection of targets in natural scenes. The superiorhuman performance relative to the optimal linear template suggests thathumans are able to use additional higher-order information to detect targetsin natural scenes.Acknowledgement: NSF(0819582)See page 3 for Abstract Numbering System<strong>Vision</strong> <strong>Sciences</strong> <strong>Society</strong>55
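One of the estimators mentioned above, weighted averaging of (pre-whitened) noise fields, has a compact generic form: average the stimulus fluctuations separately by response category and take the difference. A minimal Python sketch with fabricated one-dimensional data standing in for the natural-scene backgrounds (which, unlike white noise, would first require pre-whitening):

import numpy as np

def classification_image(noise_fields, chosen):
    """Reverse-correlation estimate of a linear template.

    noise_fields: (n_trials, n_pixels) stimulus fluctuations; chosen:
    boolean array, True where the observer selected that stimulus. The
    template is the difference between the mean fields for the two
    response categories."""
    noise_fields = np.asarray(noise_fields, dtype=float)
    chosen = np.asarray(chosen, dtype=bool)
    return noise_fields[chosen].mean(axis=0) - noise_fields[~chosen].mean(axis=0)

# Fabricated demo: a noisy observer applying a known 1D template.
rng = np.random.default_rng(2)
template = np.exp(-(np.arange(64) - 32.0) ** 2 / 50.0)
noise = rng.normal(size=(5000, 64))
chosen = noise @ template + rng.normal(scale=2.0, size=5000) > 0
print(np.corrcoef(classification_image(noise, chosen), template)[0, 1])

The recovered template correlates strongly with the one the simulated observer used, which is the logic behind comparing human templates to the optimal one.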


23.305 Implementing a maximum-entropy parameterization of texture space
Jonathan Victor 1 (jdvicto@med.cornell.edu), Jason Mintz 1,2 , Mary Conte 1 ; 1 Neurology and Neuroscience, Weill Cornell Medical College, 2 Dartmouth College

Visual textures are an important tool for studying many aspects of perception, including local feature extraction, segmentation, and the perception of surface properties. Visual textures are defined by their statistics. Image statistics include the luminance histogram, the power spectrum, and higher-order analogs, and thus constitute a very large number of parameters. This richness enables the construction of visual stimuli that can be used to discriminate among candidate models. However, it also presents a challenge, because image statistics are not only high-dimensional but also have complex algebraic interrelationships. To approach this problem, we use maximum-entropy extension. The basic idea is that a texture can be defined by specifying only a small number of image statistics explicitly. The unspecified statistics are then determined implicitly, by creating textures that are as random as possible while still satisfying the constraints of the explicitly specified statistics (Zhu et al., 1998). We implement this idea for binary homogeneous textures, focusing on local image statistics. Ten parameters are required to determine the probabilities of the 16 possible 2x2 blocks. Via maximum-entropy extension, these parameters comprehensively describe all homogeneous binary textures with purely local structure, along with the long-range structure that the local organization necessarily implies. We develop algorithms to generate texture examples in all 45 coordinate planes of the space. In most planes an iterative "glider" rule suffices, but in some a novel Metropolis (1953) algorithm is required. We show how the Metropolis algorithm can be used to project naturalistic textures into the texture space, thus extracting their local structure. Perceptually, each plane of this space is characterized by salient and distinctive visual structure. We present isodiscrimination contours in several of these planes. While ideal-observer contours are circular, human isodiscrimination contours are strongly elliptical and may be tilted with respect to the coordinate axes.
Acknowledgement: EY7977
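To give a flavor of the Metropolis step mentioned above: single-pixel flips are proposed and accepted or rejected according to how they change an energy defined on the constrained statistics, producing textures that are otherwise as random as possible. The Python sketch below is a deliberately reduced toy; it pins a single 2x2 parity statistic with a quadratic penalty, whereas the actual parameterization controls all ten 2x2-block parameters, so everything beyond the accept/reject logic is a simplifying assumption.

import numpy as np

def block_stat(img):
    """Fraction of 2x2 blocks containing an even number of white pixels."""
    odd = img[:-1, :-1] ^ img[:-1, 1:] ^ img[1:, :-1] ^ img[1:, 1:]
    return 1.0 - odd.mean()

def metropolis_texture(size=32, target=0.8, beta=500.0, steps=40_000, seed=0):
    """Binary texture kept as random as possible while one local statistic
    is pushed toward `target` (a toy stand-in for maximum-entropy sampling)."""
    rng = np.random.default_rng(seed)
    img = rng.integers(0, 2, size=(size, size))
    energy = (block_stat(img) - target) ** 2
    for _ in range(steps):
        i, j = rng.integers(0, size, 2)
        img[i, j] ^= 1                          # propose a single-pixel flip
        new_energy = (block_stat(img) - target) ** 2
        if rng.random() < np.exp(-beta * (new_energy - energy)):
            energy = new_energy                 # accept the flip
        else:
            img[i, j] ^= 1                      # reject: undo the flip
    return img

print(block_stat(metropolis_texture()))  # drifts from ~0.5 toward the target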
23.306 Frequency content of the retinal stimulus during active fixation
Martina Poletti 1 (martinap@cns.bu.edu), Jonathan Lansey 2 , Michele Rucci 1,3,4 ; 1 Department of Psychology, Boston University, 2 Department of Cognitive and Neural Systems, Boston University, 3 Department of Biomedical Engineering, Boston University, 4 Program in Neuroscience, Boston University

Under natural viewing conditions, the retinal input depends not only on the external scene but also on the observer's behavior. During fixation, the luminance modulations caused by small, involuntary eye movements profoundly influence the spatiotemporal stimulus on the retina. In this study, we examined the frequency content of the retinal stimulus during the normal instability of visual fixation. Eye movements were recorded while subjects freely viewed grayscale natural images. On the basis of the recorded traces we reconstructed the spatiotemporal retinal input experienced by the observers, i.e., the movie resulting from scanning the image according to the subject's eye movements. We then selected periods of fixation and estimated the power spectrum of this input. The results of this analysis show that, outside of the zero-temporal-frequency plane, the spatiotemporal spectrum of the retinal stimulus during fixational instability is space-time separable. That is, the spatial distribution of power was similarly organized on every nonzero temporal-frequency plane, while the total amount of power at any given temporal frequency was entirely determined by the power spectrum of the eye movements. The luminance modulations caused by fixational instability had the effect of flattening the 1/f² power spectrum of natural images. That is, at every nonzero temporal frequency, the amount of power present at different spatial frequencies was approximately constant. This effect occurred only when viewing images with a scale-invariant power spectrum, like natural images, and was lost when the trajectories of eye movements were artificially enlarged. It is often argued that the shapes of neuronal receptive fields in the early visual system act to reduce the redundancy of input signals. Our results show that, during normal fixation on natural stimuli, the visual input that drives neurons sensitive to temporal modulations is already decorrelated in space.
Acknowledgement: NIH R01 EY18363, NSF BCS-0719849, and NSF IOS-0843304

23.307 Sampling Efficiencies for Spatial Regularity
Michael Morgan 1 (m.morgan@city.ac.uk), Isabelle Mareschal 1 , Joshua Solomon 1 ; 1 Applied Vision Research Centre, City University London

Observers performed a 2AFC (temporal) discrimination in which they had to decide which of two arrays of dots had the greater spatial regularity in its spacing. The 11 dots in each array were arranged on a notional circle of radius R, centred on the fixation point. The actual eccentricity of each dot in the array was sampled independently from a uniform distribution over the interval [R-P-C(x), R+P+C(x)], where P was a pedestal common to both arrays, C(ref) was zero for the "reference" array, and C(test) was determined on each trial by a QUEST staircase designed to converge on the 84%-correct discrimination point. Two-dimensional perturbations of dot positions on an 11 x 11 rectangular grid were also investigated. As in a previously reported study of orientation variance (Morgan et al., 2008), the data formed a 'dipper' function, having a minimum (best discrimination) at a non-zero pedestal value, and were well fit by a two-parameter model in which one parameter represents the intrinsic (in this case, positional) noise and the other represents sampling efficiency. The latter varied between 4/11 and 6/11 across observers and conditions. Sampling efficiencies of less than 4/11 could be ruled out with high confidence. Sampling efficiencies were lower for the rectangular arrays (~8/121), suggesting a limit on the absolute number of samples. Adding a second source of variance, by randomizing the contrast polarity of the dots, which the observer was instructed to ignore, made performance worse by increasing intrinsic noise, with little if any effect on sampling efficiency. The same was true of adding irrelevant tangential perturbations in dot position. We conclude that there is some degree of obligatory confusion between different sources of variance, as in previous studies of colour camouflage (Morgan et al., 1992).
Acknowledgement: Wellcome Trust
23.308 Noise reveals what gets averaged in "size averaging"
Steven Dakin 1 (s.dakin@ucl.ac.uk), John Greenwood 1 , Peter Bex 2 ; 1 UCL Institute of Ophthalmology, University College London, 2 Schepens Eye Research Institute, Harvard Medical School

Observers are adept at estimating texture statistics such as mean element orientation, a process that can be modeled using population coding of responses from orientation-selective neurons in V1. Here we consider how observers average the size of objects, given that (a) the neural substrate for object size is less clear, and (b) limitations of the previous paradigms used to explore size averaging have sparked debate as to whether observers can average size at all. We used a noise paradigm: observers reported which of two sets of 16 Gabor elements had the greater mean element size in the presence of different levels of element-size variability. We randomized the spacing of the Gabors (thus minimizing any cue from element "coverage") and both the contrast and orientation of the elements (minimizing any cue from global statistics). In the first condition (scale averaging) the envelope size and carrier spatial frequency (SF) of the elements co-varied, so that all elements were scaled/rotated versions of one another. Under these conditions observers averaged ~50% of the elements, effectively estimating the scale of each with a precision (σ) of ~25%. This unequivocally indicates that observers can average element scale. Fixing the carrier spatial frequency forces observers to use envelopes (size averaging) and produced near-identical performance. Fixing the envelope size forces subjects to use carrier SF (SF averaging) and produced moderately poorer performance. Thus observers must, at least in part, be using envelope size when scale averaging. Critically, adding independent noise to the SF and the envelopes of the elements substantially increased the number of elements that were averaged, indicating that observers can exploit independent statistical properties of both the envelope size and the SF of elements to make perceptual discriminations. We consider it likely that cues from feature (e.g., edge) density drive both these and a range of related tasks (e.g., judgments of number and density).
Acknowledgement: Wellcome Trust
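Quantities like "averages ~50% of the elements with ~25% precision" come from the standard equivalent-noise decomposition, in which discrimination thresholds reflect internal noise on each sample and the effective number of samples averaged. A minimal Python sketch of that relation follows; the parameter values are illustrative, and this is the generic model rather than the authors' exact fitting code.

import numpy as np

def predicted_threshold(sigma_ext, sigma_int, n_samples):
    """Equivalent-noise prediction for an averaging task: the observed
    threshold when averaging n_samples elements, each carrying external
    variability sigma_ext and corrupted by internal noise sigma_int."""
    return np.sqrt((sigma_int**2 + sigma_ext**2) / n_samples)

# Illustrative thresholds versus external noise for two hypothetical
# observers who differ only in how many elements they average.
for sigma_ext in (0.0, 0.1, 0.2, 0.4):
    print(sigma_ext,
          predicted_threshold(sigma_ext, sigma_int=0.25, n_samples=8),
          predicted_threshold(sigma_ext, sigma_int=0.25, n_samples=16))

Fitting this function to thresholds measured at several levels of element-size variability yields the internal-noise and effective-sample-number estimates of the kind quoted in the abstract.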


VSS 2010 AbstractsSaturday Morning PostersThis study used texture discrimination tasks to investigate preattentivevisual sensitivity to equiluminant chromatic variations. Specifically, welooked for evidence of “half-cardinal-axis” mechanisms in DKL space - i.e.,mechanisms sensitive exclusively to variations between neutral gray andeach of the red and green poles of the L-M axis and the blue and yellowpoles of the S-(L+M) axis. Observers strove to discriminate spatially randommixtures of colors called scrambles. A given scramble is characterizedby its color histogram. The preattentive sensitivity space of a set C ofcolors is the space of histogram differences people can discriminate givena brief display. The dimensionality of this space gives the number of preattentivemechanisms sensitive to C variations. In these experiments, brieflypresented stimuli comprised alternating bars of scramble differing in histogram,and observers had to judge bar pattern orientation. We used a newmethod called “iterated seed-expansion” to obtain a basis of the sensitivityspace for each of 6 different sets of colors: a set drawn from each of thehalf-DKL-cardinal-axes in the equiluminant plane and also from each of thefull L-M and S-(L+M) cardinal axes. For each of these sets C, the sensitivityspace proved to be two-dimensional, with one basis element showing linearand the other parabolic sensitivity for the colors in C. This suggests thatassociated with each full-cardinal-axis is one linear mechanism L and onesecond-order mechanism S derived from the L output. Our results supportthe idea that for any element x in the scramble, S(x) is proportional to thesquared difference between L(x) and the mean L-output in the neighborhoodof x. These same two mechanisms suffice to account for discriminationof half-axis scrambles; thus, we find no evidence for separate half-axismechanisms.Acknowledgement: The National Science Council of Taiwan, NSC-97-2918-I-007-004 &NSC-98-2628-B-007-001-MY3 NSF BCS-084389723.310 Lateral Occipital cortex responsive to correlation structureof natural imagesH.Steven Scholte 1 (h.s.scholte@uva.nl), Sennay Ghebreab 2 , Arnold Smeulders 2 ,Victor Lamme 1 ; 1 Department of psychology, University of Amsterdam, 2 InformaticsInstitute, University of AmsterdamThe distribution of features around any location in natural images adheresto the Weibull distribution (Geusebroek & Smeulders, 2005), which is afamily of distribution deforming from normal to a power-law distributionwith 2 free parameters, beta and gamma. The gamma parameter from theWeibull distribution indicates whether the data has a more power-low ormore normal distribution. We recently showed that the brain is capable ofestimating the beta and gamma value of a scene by summarizing the X andY cell populations of the LGN (Scholte et al., 2009) and that this explains85% of the variance in the early ERP. Here we investigate to what degreethe brain is sensitive to differences in the global correlation (gamma) of ascene by presenting subjects with a wide range of natural images whilemeasuring BOLD-MRI.Covariance analysis of the single-trial BOLD-MRI data with the gammaparameter showed that only the lateral occipital cortex (LO), and no otherareas, responds stronger to low gamma values (corresponding to imageswith a power-law distribution) than high gamma values (corresponding toimages with a normal distribution). 
The analysis of the covariance matrix of the voxel-pattern cross-correlated single-trial data further revealed that responses to images containing clear objects are more similar in their spatial structure than responses to images that do not contain objects. These data are consistent with a wide range of literature on object perception and area LO (Grill-Spector et al., 2001) and extend our understanding of object recognition by showing that the global correlation structure of a scene is (part of) the diagnostics that the brain uses to detect objects.

23.311 Adaptation effects that gain strength over 8-hour induction periods
Min Bao 1 (baoxx031@umn.edu), Peng Zhang 1, Stephen Engel 1; 1 Department of Psychology, University of Minnesota
Depriving adult subjects of visual stimulation at a narrow range of orientations increases sensitivity to the deprived orientation. Here we measured the growth of this effect as a function of adaptation duration. Subjects were deprived of vertical energy for 1, 4, or 8 hours, viewing the world through an “altered reality” system. The system comprised a head-mounted video camera that fed into an image-processing laptop computer, which in turn drove a head-mounted display (HMD). Vertical energy was removed from the video in real time using a simple mask in the Fourier domain. Viewing the filtered video, subjects were able to interact with the world while being deprived of vertical visual input. Prior to and following deprivation, we measured the perceived orientation of sinusoidal gratings, using a version of the tilt aftereffect. Subjects viewed a plaid made from two 45-deg gratings, which perceptually resembled a blurred square checkerboard. When the grating components were symmetrically tilted away from 45 degrees, the checks appeared rectangular. Subjects adjusted the tilt of the components from a random initial angle until the checks appeared square, which revealed the physical orientations that appeared to be 45 deg. Subjects adapted to deprivation: following deprivation, they set the components tilted away from 45 degrees towards horizontal, indicating that they perceived 45-degree gratings as tilted towards vertical. Eight hours of adaptation produced reliably larger and longer-lasting effects than four hours or one hour. The shift in apparent orientation towards vertical suggests that deprivation increased the gain of neurons selective to the deprived orientation.
The fact that this effect continues to strengthen over eight hours suggests that relatively low-level mechanisms of adaptation can operate over long time scales, which may allow them to contribute to long-term plasticity in the visual system, such as adaptation to retinal disease.

23.312 The limited availability of brain resources determines the structure of early visual processing
Maria Michela Del Viva 1,2,3 (michela@in.cnr.it), Rachele Agostini 1, Daniele Benedetti 1, Giovanni Punzi 4; 1 Dipartimento di Psicologia, Università degli Studi di Firenze, 2 Psychology, University of Chicago, 3 Visual Science Laboratories, Institute for Mind and Biology, University of Chicago, 4 Dipartimento di Fisica, Università degli Studi di Pisa
The visual system summarizes complex scenes to extract meaningful features (Barlow, 1959; Marr, 1976) by using image primitives (edges, bars), encoded physiologically by specific configurations of receptive fields (Hubel & Wiesel, 1962). This work follows a pattern-filtering approach, based on the principle of most efficient information coding under real-world physical limitations (Punzi & Del Viva, VSS 2006; Del Viva & Punzi, VSS 2008). The model, applied to black-and-white images, predicts from very general principles the structure of visual filters that closely resemble well-known receptive fields, and identifies salient features, such as edges and lines, providing highly compressed “primal sketches” of visual scenes. Here we perform a psychophysical study of the effectiveness of the sketches provided by this pattern-filtering model in allowing human observers to discriminate between pairs of similar natural images. As a control, we compare results with alternative sketches with the same information content, derived from a similar procedure, but not taking into account the need for optimal usage of computing resources. The performance was measured by the task of identifying natural images corresponding to briefly presented sketches (
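The Weibull image statistic used in 23.310 above can be made concrete with a short sketch. Below is a minimal example of fitting the two-parameter Weibull (β scale, γ shape) to an image's gradient-magnitude histogram; this is our illustration, not the authors' pipeline, and uses `scipy.stats.weibull_min` with the location parameter fixed at zero to obtain the two-parameter form.

```python
import numpy as np
from scipy import ndimage, stats

def weibull_image_stats(img):
    """Fit a two-parameter Weibull to an image's gradient-magnitude
    distribution and return (beta, gamma), in the spirit of
    Geusebroek & Smeulders (2005). img: 2-D array of luminance values."""
    # Local contrast: magnitude of the luminance gradient at each pixel.
    gx = ndimage.sobel(img.astype(float), axis=1)
    gy = ndimage.sobel(img.astype(float), axis=0)
    mag = np.hypot(gx, gy).ravel()
    mag = mag[mag > 0]                     # Weibull support is x > 0
    # weibull_min.fit returns (shape, loc, scale); fixing loc=0 gives
    # p(x) proportional to (x/beta)^(gamma-1) * exp(-(x/beta)^gamma).
    gamma, _, beta = stats.weibull_min.fit(mag, floc=0)
    return beta, gamma

# A low gamma indicates a heavy-tailed (power-law-like) contrast
# distribution; higher gamma approaches a more normal-like shape.
rng = np.random.default_rng(0)
beta, gamma = weibull_image_stats(rng.random((128, 128)))
print(f"beta={beta:.3f}, gamma={gamma:.3f}")
```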


Various models have been proposed for the interplay between subcortical and cortical mechanisms, and between bottom-up and top-down mechanisms, in driving our saccades rapidly to targets in the environment. One such model is the “linear approach to threshold at ergodic rate” (LATER) model (Carpenter & Williams, 1995). In this work we show evidence, based on experimental data, for this mechanism being involved in our eye movements. We used eye-tracking data from subjects viewing natural scenes in free and task-dependent viewing (Cerf, Frady, & Koch, 2009) to test bottom-up and top-down based attention allocation to high-level objects. Separating the distributions of saccades according to their latencies† provides a means of identifying different populations of saccades. We identified 3 populations: very early saccades (60 ms), which can be regarded as correction saccades to an over/undershoot of a target; early saccades (~80-100 ms); and regular saccades. Using attractive stimuli such as faces and text we were able to test the latency with which these saccades are initiated towards a target, and identify the attention mechanisms which drive us to look at attractive targets at each stage of our viewing. We used the saccadic latencies to estimate the brain regions which are involved in driving our eyes under each condition†. We quantified the interplay of subcortical and cortical structures in generating rapid, accurate saccades to images. We show a separate, cortical source of bottom-up saliency to objects within a visual scene which disappears within a few fixations, and modification of the decision signal by top-down influences. We propose that these observations reflect a common cortical pathway which represents a utility signal that modulates the process of saccadic decision. In addition we propose a parallel subcortical pathway capable of generating rapid, accurate saccades to salient targets under the control of cortical structures.
† See attached PDF for illustration
Acknowledgement: Mathers foundation, DARPA

23.314 Modulation of saccade latencies by hand action coding
Simona Buetti 1 (simona.buetti@unige.ch), Bernhard Hommel 2, Dirk Kerzel 1, Takatsune Kumada 3; 1 Université de Genève, 2 Leiden University, 3 National Institute of Advanced Industrial Science and Technology
Previous studies indicated that saccade latencies are affected by the spatial compatibility between the target position on the screen and the position of a static hand. Further, the modulation of saccade latencies depended on the delay between fixation-point offset and target onset. With a 0-ms delay, saccades were slower toward a target close to than opposite to the hand location (eye-hand proximity interference), while the opposite pattern was found for a 1000-ms delay (eye-hand proximity facilitation). Time-consuming competition for attentional processes between eye and hand was evoked to account for these results. In the present study, we opposed a code occupation hypothesis (COH) to this attentional explanation. According to COH, once a code is bound to a current action, all other access to this code will be temporarily delayed. We varied the target location with respect to the hand. The hand was laid at a fixed location on the left or right, and the target was presented at different eccentricities to the left or right of the screen center. The results indicated that the saccadic modulation did not depend on the spatial proximity between the hand and the target.
Rather, similar eye-hand interference (0-ms delay) and facilitation (1000-ms delay) were found for all targets sharing the same hemi-space as the hand. In agreement with the code occupation hypothesis, the saccadic modulation in the presence of a static hand depended on whether or not the saccade shared the hand-related action code.

23.315 Attention during pauses between successive saccades: Task interference vs. modulation of contrast gain
Min Zhao 1 (minzhao@eden.rutgers.edu), Brian S. Schnitzer 1, Barbara A. Dosher 2, Eileen Kowler 1; 1 Department of Psychology, Rutgers University, 2 Department of Cognitive Sciences, University of California, Irvine
Perceptual performance is better at the target of a saccade than at other locations (e.g., Gersch et al., 2009). To better understand pre-saccadic attention shifts, we studied perceptual discrimination across different stimulus contrasts during pauses between successive saccades. Displays contained 4 outline squares (1.4° on a side) located at the corners of an imaginary square. Sequences of saccades were made in a V-shaped path from one corner square, to center, to another corner square. When the eye reached the center, perceptual targets (oriented letter T's) appeared inside each eccentric square. The orientation of a randomly selected T was reported after scanning was completed. Orientation discrimination was poor (
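The LATER model invoked in the first (untitled) abstract above has a simple generative core worth making explicit: a decision signal rises linearly from baseline toward a threshold at a rate drawn afresh on each trial from a normal distribution, so latency equals threshold over rate, and reciprocal latency is itself normally distributed ("recinormal"). A minimal simulation sketch follows; the parameter values are illustrative, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def later_latencies(n, theta=1.0, mu_r=5.0, sigma_r=1.0):
    """Simulate saccadic latencies (seconds) under the LATER model:
    a decision signal rises linearly to threshold theta at a rate r
    drawn per trial from N(mu_r, sigma_r); latency = theta / r."""
    r = rng.normal(mu_r, sigma_r, n)
    r = r[r > 0]                 # discard non-rising trials
    return theta / r

lat = later_latencies(10_000)
# Hallmark of LATER: 1/latency is Gaussian, so latencies plotted on a
# reciprocal (probit) axis fall on a straight line; separate saccade
# populations would show up as distinct linear components.
recip = 1.0 / lat
print(f"median latency = {np.median(lat) * 1000:.0f} ms, "
      f"1/T mean = {recip.mean():.2f}, sd = {recip.std():.2f}")
```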


trace compared to the first part. Then we used this amount to null the effect and find the moment at which this error of remapping occurred. The magnitude of the trace offset around the time of the saccade was about 15-20% of the saccade amplitude, and this offset appeared in a temporal window about 30 msec to 70 msec after the saccade onset. The displacement of the pre-saccadic relative to the post-saccadic motion trace suggests that the remapping overcompensates for the saccade vector. Moreover, the visibility of the pre-saccadic trace after the saccade is a novel demonstration of spatiotopic visual persistence; visual because monitor persistence would not show a break between the pre- and post-saccadic portions of the trace.
Acknowledgement: This research was funded by Ministère de l'enseignement supérieur et de la recherche to MS and by a Chaire d'Excellence grant to PC.

23.318 Spatial localization during fixation does not depend on an extraretinal drift signal
Chiara Listorti 1 (chiarali@bu.edu), Martina Poletti 1, Michele Rucci 1,2,3; 1 Department of Psychology, Boston University, 2 Department of Biomedical Engineering, Boston University, 3 Program in Neuroscience, Boston University
We are normally unaware of the eye movements which, during visual fixation, keep the retinal image continually in motion. How does the visual system disregard the retinal motion caused by these movements to achieve visual stability? According to extraretinal theories, stability is attained by means of motor/proprioceptive signals; according to retinal theories, eye motion is instead inferred directly from the spatiotemporal stimulus on the retina. In this study, we focus on the retinal motion caused by ocular drift. We have previously shown that motion detection does not rely on possible extraretinal drift signals. Here, we investigated whether this is true also for spatial localization. In a 2AFC experiment, subjects reported which one of two dots briefly displayed at distinct times during ocular drift was at the same spatial location as a reference presented at the beginning of each trial. One of the two dots was displayed at the same spatial (monitor) position as the reference, whereas the other was at the same retinal position. Stimuli were displayed in complete darkness. If an extraretinal signal is used, subjects should be able to correctly identify the dot at the same monitor position as the reference. Moreover, discrimination performance should increase with the extent of drift, as the extraretinal signal would also increase. Subjects systematically reported that the dot at the same retinal position as the reference was the one at the same spatial location. Furthermore, the probability of this error increased, rather than decreased, with the size of ocular drift. These results strongly support the predictions of retinal theories. Like motion detection, spatial localization also does not depend on an extraretinal drift signal, but relies instead on the spatiotemporal image on the retina to discard the retinal motion caused by ocular drift.
Acknowledgement: NIH EY18363, NSF BCS-0719849, and NSF CCF-0726901

23.319 Differential involvement of the oculomotor system in covert visual search and covert endogenous cueing
Artem Belopolsky 1 (A.Belopolsky@psy.vu.nl), Jan Theeuwes 1; 1 Vrije Universiteit Amsterdam
The relationship of spatial attention to eye movements has been controversial. Some theories propose a close relationship, while others view these systems as completely independent.
In a recent study using a cueing task we proposed that this controversy can be resolved by distinguishing between the maintenance and shifting components of attention (Belopolsky & Theeuwes, 2009, Psychological Science). Specifically, we proposed that shifting covert attention is always associated with preparation of a saccade, while maintaining attention at a location can be dissociated from saccade preparation. The current study tests the boundary conditions of this proposal. Experiment 1 used a visual search task in which repeated serial shifts of attention were required in order to find the target. The identity of the target indicated whether an eye movement towards the target or towards a non-target location had to be made. The results indicated that saccades were initiated faster towards the location to which covert attention was shifted. Experiment 2 used endogenous cueing, manipulating the SOA between the cue and the appearance of the target. The results showed suppression of saccades in the direction of the covert attention shift even at the shortest SOA. The findings suggest that shifts of attention during covert visual search are associated with activation of an oculomotor program, while shifts of attention during covert endogenous cueing are associated with suppression of an oculomotor program. This suggests that the distinction between endogenous and exogenous covert shifts of attention is important when the relationship between attention and eye movements is investigated. We propose that only during pure endogenous covert shifts of attention can the oculomotor system be suppressed. In addition, and consistent with previous findings (Wolfe, Alvarez & Horowitz, 2000), our results imply that shifts of attention during covert visual search are not purely endogenous.
Acknowledgement: Netherlands Organization for Scientific Research (NWO)

23.320 Gaze Patterns and Visual Salience in Change Detection of Natural Scenes
Ty W. Boyer 1 (tywboyer@indiana.edu), Chen Yu 1, Thomas Smith 1, Bennett I. Bertenthal 1; 1 Department of Psychological & Brain Sciences, Indiana University
Most change blindness studies suggest that attention is necessary to detect a change in a scene. Recent research also suggests that visual attention is guided in part by the bottom-up visual salience of the regions in a scene. In this study, we used an image-processing algorithm for measuring the visual salience of different regions in a visual scene, and measured participants' ability to detect changes in high- and low-salience regions of the scenes with a flicker paradigm. The stimuli were 28 digital photographs of natural outdoor scenes. Itti's saliency map algorithm was used to select one high-saliency and one low-saliency region in each image; color or presence/absence changes were applied to both regions. Participants completed 56 trials: one low- and one high-salience trial with each image. We also used a Tobii 2150 eye-tracking system for measuring eye movements. Preliminary results indicate: 1) Participants detected changes made to high-salience regions (M = 6,855 ms) faster than those made to low-salience regions (M = 10,397 ms); 2) Participants fixated high visual salience changed regions (first fixation onset M = 2,812 ms) sooner than low visual salience changed regions (M = 4,339 ms); 3) The total time fixating changed regions was similar in the two conditions (Mhigh = 915 ms and Mlow = 1,073 ms); and 4) Participants were more likely to require more than one fixation within the region of change to detect the change in the low-saliency condition.
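Itti's saliency map algorithm, which 23.320 uses to select the high- and low-salience change regions (the abstract's closing sentence continues after the block), is built on center-surround differences computed across a Gaussian pyramid and combined over feature channels. Here is a deliberately stripped-down, intensity-only sketch of that center-surround idea; the real model adds color-opponency and orientation channels plus a normalization stage.

```python
import numpy as np
from scipy import ndimage

def toy_saliency(img, center=(2, 3), surround=(5, 6)):
    """Crude intensity-channel saliency in the spirit of Itti, Koch &
    Niebur (1998): center-surround differences between fine and coarse
    Gaussian blurs, summed over scale pairs. img: 2-D array in [0, 1]."""
    sal = np.zeros_like(img, dtype=float)
    for c in center:
        fine = ndimage.gaussian_filter(img, sigma=2 ** c)
        for s in surround:
            coarse = ndimage.gaussian_filter(img, sigma=2 ** s)
            sal += np.abs(fine - coarse)   # center-surround contrast
    return sal / sal.max()

rng = np.random.default_rng(0)
scene = rng.random((240, 320))
sal_map = toy_saliency(scene)
# The most and least salient regions can then seed change locations.
print("peak saliency at", np.unravel_index(sal_map.argmax(), sal_map.shape))
```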
An analysis of the eye movement data will allow us to further investigate individual differences in scene perception and change detection.

23.321 Does eye vergence dissociate between covert and overt attention?
Maria Sole Puig 1,2 (mariasolepuig@ub.edu), Laura Perez Zapata 1,2, Sancho Moro 1,2, Antonio Aznar-Casanova 1,2, Hans Supèr 1,2; 1 Institute for Brain, Cognition and Behavior (IR3C), 2 Dept. Basic Psychology, Faculty of Psychology, University of Barcelona (UB)
The neural mechanisms of attention are closely related to the oculomotor control of saccadic eye movements and vergence eye movements. Visual covert attention is a mechanism for mentally scanning the visual field to enhance the sensory signal. This shift in covert attention is linked to eye movement circuitry that prepares a saccadic eye movement to a particular location. Overt attention is believed to direct the saccade towards that location. Currently, it is unclear whether and how covert and overt attention influence vergence responses. To test this idea, we used a visual task in which subjects focused on a central fixation spot surrounded by an array of 8 letters. After one second of fixation, one of the letters flashed. After an additional fixation period, an identical or a different letter briefly appeared at the fixation spot. The observer responded by making a saccade towards the flashed letter if it was the same as the central letter; otherwise the observer maintained gaze at the fixation spot. In addition, a button press was requested. Our findings show that eye vergence changed during the task. During the initial period, when covert attention was required, the eyes converged on a plane farther away than the physical depth (screen) plane. During the period in which a saccade was planned (overt attention), the eyes converged back to the depth plane of the screen. From our observations we conclude that eye vergence may serve not only depth perception but may also have a role in covert and overt attention. The findings are interpreted in terms of relaxation during covert attention, perceived depth, and the idea that during covert attention the visual system benefits from a wider field of view.

23.322 Biasing attentional priority by microstimulation of LIP
Koorosh Mirpour 1 (kmirpour@mednet.ucla.edu), Wei Song Ong 1,4, James Bisley 1,2,3,4; 1 Department of Neurobiology, David Geffen School of Medicine at UCLA, 2 Jules Stein Eye Institute, David Geffen School of Medicine at UCLA, 3 Department of Psychology and the Brain Research Institute, UCLA, 4 Interdepartmental PhD Program for Neuroscience, UCLA
People can find objects hidden in a cluttered scene quickly and efficiently. This cannot be done unless there is a prioritizing algorithm that optimizes the choice of the goal of the next eye movement. It has been suggested that the lateral intraparietal area (LIP) acts as a priority map, which incorporates both bottom-up sensory and top-down cognitive inputs in order to find stimuli similar to the target of the visual search. An eye movement is then made toward the most behaviorally important location in the scene, represented by the highest activity in LIP. In this study, we investigated whether increasing the activity of the LIP priority map can bias saccade-goal selection during a visual foraging task. Two animals were trained to perform a free-viewing visual foraging task in which they searched through 5 potential targets (T) and 5 distractors (+) to find the target that was loaded with reward. To get the reward they had to fixate the target for 500 ms within 8 s. After training, both animals performed the task with a high degree of efficiency, avoiding Ts that had been previously fixated and avoiding distractors. On microstimulation trials, a 350-ms burst of 20 μA peak-to-peak biphasic pulses at 200 Hz was injected into LIP 150 ms after the third saccade. We found that on stimulation trials the animals were more likely to make their next saccade to stimuli that were in the stimulated receptive field than on non-stimulation trials. The strength of this bias was consistent for all visual stimuli, regardless of behavioral relevance. These results demonstrate that the activity of LIP neurons is causally related to a strategy that guides efficient visual search.
Acknowledgement: the National Eye Institute, the Kirchgessner Foundation, the Gerald Oppenheimer Family Foundation, the Klingenstein Fund, the McKnight Foundation and the Alfred P. Sloan Foundation

23.323 Attention is predominantly guided by the eye during concurrent eye-hand movements
Aarlenne Khan 1,2 (aarlenne@biomed.queensu.ca), Joo-Hyun Song 1, Robert McPeek 1; 1 The Smith-Kettlewell Eye Research Institute, San Francisco, CA, US, 2 Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada
Attention is directed to the upcoming goal location of both saccades and reaches. It remains unknown, however, how attention is allocated during simultaneous eye and hand movements. We investigated attentional allocation through a 4-alternative forced-choice shape discrimination task (Deubel & Schneider, 1996) while subjects made either a saccade or a reach (or both) when cued by an arrow to one of five peripheral locations. The discrimination shape appeared during the latency period either at the goal (50% of the time) or at one of the other 4 locations. We found that target discrimination was better when the discrimination stimulus appeared at the movement goal than when it appeared elsewhere. Discrimination performance at the movement goal was not better in the combined condition compared to either effector alone, suggesting limited shared attentional resources rather than separate attentional resources specific to each effector. To test which effector dominated in guiding attentional resources, we then separated the goals for the hand and the eye.
This was done using two paradigms: 1) cued reach/constant saccade, in which subjects made a saccade to the same peripheral location throughout the block while the reach goal was cued by the arrow; and 2) cued saccade/constant reach, in which subjects made a reach to the same location while the saccade goal was cued. In both eye-hand goal dissociation paradigms, discrimination performance was consistently better at the eye goal than at the hand goal. This indicates that limited attentional resources are guided predominantly by the eye during eye and hand movements.

23.324 Sudden hand movements enhance gaze cueing
Robert Volcic 1 (volcic@uni-muenster.de), Markus Lappe 1; 1 Psychologisches Institut II, Westf. Wilhelms-Universität Münster, Germany
Ample evidence supports the idea that social signals, such as eye gaze, influence our voluntary eye movements. However, people move their eyes constantly and most of these eye movements are irrelevant in a social context. It is thus to be expected that even stronger shifts in overt attention should be induced by eye movements conveying a potentially relevant action. We tested this hypothesis with a variation of the gaze cueing paradigm. Participants were required to perform a saccadic eye movement toward a target either to the left or to the right. A colored instruction cue signaled the direction of the saccade. Cueing with varying SOAs was induced either by an averted eye gaze and/or by a small hand gesture corresponding to the initial phase of a pointing movement towards one of the targets. These stimuli were provided either in isolation or in combination with each other. In the latter case, the cued direction could be either matched or unmatched. Participants were informed that the stimuli were spatially uninformative cues. As previously reported, gaze and hand cueing were effective at triggering saccades opposite to the intended direction. A stronger gaze cueing effect was, however, observed when the gaze and hand cue were presented simultaneously. Interestingly, the proportion of saccades following the gaze cue increased irrespective of the hand cue direction. Relevant actions are usually the product of combined eye and hand movements, where the eyes select the target of interest. The mere presence of a sudden hand movement might have been interpreted as a sufficient indication of a forthcoming relevant action, which consequently enhanced the saliency of the directional cue provided by the gaze. These findings thus suggest a process that prioritizes potentially relevant actions, to which the visual system automatically responds.
Acknowledgement: Research supported by EU grant (FP7-ICT-217077-Eyeshots)

23.325 Attentional bias to brief threat-related stimuli revealed by saccadic eye movements
Rachel Bannerman 1 (r.bannerman@abdn.ac.uk), Maarten Milders 1, Arash Sahraie 1; 1 Vision Research Laboratories, School of Psychology, University of Aberdeen
According to theories of emotion and attention, we are predisposed to orient rapidly towards threat. However, previous examinations of attentional cueing by threat signals showed no enhanced capture at brief durations. We propose that the manual response measure employed in previous examinations is not sensitive enough to reveal threat biases at brief stimulus durations. Here, we investigated the time course of orienting attention towards threat-related stimuli in the exogenous cueing task.
The type of threat-related stimulus (fearful face or body posture), cue duration (20 ms or 100 ms) and response mode (saccadic or manual) were systematically varied. In the saccade mode, both enhanced attentional capture and difficulty in disengaging attention from fearful faces and body postures were evident and limited to the 20-ms cue duration, suggesting that saccadic cueing effects emerge rapidly and appear to be a short-lived phenomenon. Conversely, in the manual response mode, fearful faces and bodies impacted only upon the disengagement component of attention at the 100-ms cue duration, suggesting that manual responses reveal cueing effects which emerge over more extended periods of time. Taken together, the results show that saccades are able to reveal threat biases at brief cue durations, consistent with current theories of emotion and attention.

23.326 Evolving illusory motion using eye movements
Tim Holmes 1 (t.holmes@rhul.ac.uk), Kati Voigt 2, Johannes Zanker 1; 1 Department of Psychology, Royal Holloway, University of London, 2 University of Hildesheim, Germany
Op artists, such as Bridget Riley, frequently use monochromatic abstract compositions to create works which produce a strong percept of illusory motion in the observer. Previous work has looked at the effects of eye movements (Zanker, Hermens & Walker, 2008, Perception, 37, ECVP Abstract Supplement: 150) and the image statistics (Zanker, Hermens & Walker, 2008, Perception, 37, ECVP Abstract Supplement: 70) in an attempt to explain and optimise such illusory motion. The preferential looking literature suggests that the eye movements needed to see this percept are also subject to top-down influences, which result in increased fixation time on preferred images. Here, we use a combination of cumulative fixation time and fixation sequence, which has been shown to correlate with aesthetic preference (Holmes & Zanker, 2009, Journal of Vision [abstract], 9(8): 26), to provide the selection pressure for an evolutionary algorithm operating on a chromosome encoding the parameters of stimuli known to produce this percept. By varying the presentation time of the stimuli and tracking the eye movements of 20 participants in a free-looking paradigm, we show that with increased time to view, participants' attention is attracted to those stimuli with a stronger motion percept, and that these stimuli are robustly preferred by the participants when retested using a 2AFC experiment 1 week later. The results demonstrate that general aesthetic preferences can be detected using evolutionary algorithms that use oculomotor statistics as fitness information, thus providing a reliable and robust paradigm for use in future studies of subjective decision making and experimental aesthetics.
Acknowledgement: Supported by EPSRC Grant 05002329.
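The gaze-driven evolutionary loop in 23.326 can be sketched compactly. In this illustration (ours, with invented parameter names; the actual chromosome encoded op-art stimulus parameters, and the fitness combined cumulative fixation time with fixation sequence), a small population of parameter vectors is selected and mutated according to a fixation-derived fitness score:

```python
import numpy as np

rng = np.random.default_rng(2)

def fixation_fitness(params):
    """Placeholder for the oculomotor fitness used in the study: in the
    real experiment this would be derived from observers' cumulative
    fixation time and fixation order on the rendered stimulus. Here an
    arbitrary smooth function stands in for demonstration."""
    return -np.sum((params - 0.7) ** 2)

def evolve(pop_size=12, n_params=4, n_gen=20, mut_sd=0.05):
    pop = rng.random((pop_size, n_params))          # chromosomes in [0, 1]
    for _ in range(n_gen):
        fitness = np.array([fixation_fitness(p) for p in pop])
        parents = pop[np.argsort(fitness)[-pop_size // 2:]]   # truncation selection
        children = parents + rng.normal(0, mut_sd, parents.shape)  # mutation
        pop = np.clip(np.vstack([parents, children]), 0, 1)
    return pop[np.argmax([fixation_fitness(p) for p in pop])]

best = evolve()
print("best chromosome:", np.round(best, 3))
```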


23.327 Gender Differences in Visual Attention During Listening as Measured By Neuromorphic Saliency: What Women (and Men) Watch
John Shen 1 (shenjohn@usc.edu), Laurent Itti 1,2; 1 Neuroscience Graduate Program, University of Southern California, 2 Department of Computer Science, University of Southern California
Predictive models of eye movements often do not address population differences. Different tasks may play an important role in differentiating eye movements among discrete groups. For example, eye movement behavior is known to vary by gender for an emotion-perception task (Vassallo, 2009). We explore observed differences in eye movements between genders by eye-tracking subjects during an audio-visual listening task, as compared to a free-viewing task. Thirty-four subjects, balanced by gender, are eye-tracked while watching eighty-five videos of different people who give answers to conversational questions. Videos are filmed outdoors with a natural background of distractors, such as pedestrians and vehicles. After viewing each clip, subjects answer questions about the video to measure any attentional differences. To control for task effects, a separate group of ten control subjects are asked to free-view the clips. Interestingly, the main sequence of collected saccades significantly differs across gender (n=33806, peak velocity: p
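The gender comparison in 23.327 turns on the saccadic "main sequence", the stereotyped growth of peak velocity with amplitude (the exact statistics are cut off in the source). One conventional way to quantify it is an exponential-saturation fit; the sketch below uses synthetic data, and the functional form and parameter names are our illustrative assumptions, not necessarily the authors' method:

```python
import numpy as np
from scipy.optimize import curve_fit

def main_sequence(amplitude, v_max, c):
    """Peak velocity saturates with amplitude: V = v_max * (1 - exp(-A/c))."""
    return v_max * (1.0 - np.exp(-amplitude / c))

# Synthetic saccades for illustration (amplitude in deg, velocity in deg/s).
rng = np.random.default_rng(3)
amp = rng.uniform(0.5, 20, 500)
vel = main_sequence(amp, 500, 5) * rng.normal(1, 0.08, amp.size)

(v_max, c), _ = curve_fit(main_sequence, amp, vel, p0=(400, 4))
print(f"fitted v_max = {v_max:.0f} deg/s, c = {c:.2f} deg")
# Group differences (e.g., across gender) can then be tested on the
# fitted parameters or on residuals from a pooled fit.
```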


the higher classification accuracy derived from the use of block designs is a result of the averaging of fMRI data points during analysis rather than the different temporal characteristics of the two designs. Observers judged the orientation of centrally presented Gabor patches (10° diameter, 1.2 cpd) oriented at +/-45° relative to the vertical. Trial durations were 2 seconds, with an initial 200-ms presentation of the Gabor stimulus. Experimental runs consisted of 25 presentations of each condition, including 25 fixation trials of equal duration. Observers participated in three scanning sessions which differed only in the ordering of experimental trials: either blocked or fast event-related (m-sequence or genetic-algorithm-optimized designs). Each observer completed 10 runs per session. We used a linear SVM to assess the orientation discrimination accuracy of retinotopically defined visual cortex in two ways: training and predicting on single trials (fMRI data points shifted by 4 seconds) or on trials averaged across blocks. Trials from event-related designs were grouped into three blocks matching the block design. Averaging trials produced a significantly higher classification accuracy than single-trial analysis for all experimental designs. Further, the single-trial analysis accuracy of the block design was close to chance across visual areas, with no significant accuracy difference between designs. Our results suggest that much of the benefit that block designs provide in MVPA fMRI studies is due to the averaging of fMRI data during analysis, and that this technique can equally well be applied to fast event-related designs.
Acknowledgement: Army grant W911NF-09-D-0001

23.405 How much tuning information is lost when we average across subjects in fMRI experiments?
Natalia Y. Bilenko 1 (nbilenko@berkeley.edu), An T. Vu 2, Thomas Naselaris 1, Alexander G. Huth 1, Jack L. Gallant 1,3; 1 Helen Wills Neuroscience Institute, University of California, Berkeley, 2 Department of Bioengineering, University of California, Berkeley, 3 Department of Psychology, University of California, Berkeley
Most fMRI studies average results across subjects, but there is substantial individual variability in anatomical structure and BOLD responses. Therefore, averaging is usually performed by transforming each subject's anatomical volume to a standard template, and then averaging functional data from all subjects within this common anatomical space. However, anatomical normalization is under-determined, so this process is likely to introduce some error into the averaged data set. How much tuning information is lost when we average across subjects in fMRI experiments? To investigate this issue we compared averaged and individual results, using a computational modeling approach used previously in our laboratory (Kay et al., Nature 2008, v. 452, 352-355). The data consisted of fMRI BOLD activity recorded from the visual cortex of three subjects who viewed a large set of monochromatic natural images. We first estimated voxel-based receptive fields for each subject and calculated the correlation between observed and predicted BOLD responses. We then averaged the fMRI data across subjects (using a leave-one-out procedure to avoid over-fitting), estimated voxel-based receptive field models on the averaged data, and calculated the correlation between observed and predicted BOLD responses.
We found that the predictions of models based on individual data were more highly correlated with the observed data than were the predictions of models based on averaged data. In summary, our data suggest that averaging across subjects reduces the information that can be recovered from fMRI data.
Acknowledgement: NEI

23.406 No Grey Matter Reduction following Macular Degeneration
Joshua B. Julian 1 (joshua.b.julian@gmail.com), Daniel D. Dilks 2, Chris I. Baker 3, Eli Peli 4, Nancy Kanwisher 2; 1 Dept. of Philosophy, Tufts University, 2 McGovern Institute for Brain Research, MIT, 3 Laboratory of Brain and Cognition, NIMH, NIH, 4 Schepens Eye Research Institute, Harvard Medical School
A recent study reported that individuals with central retinal lesions due to macular degeneration (MD) showed grey matter reduction in “foveal” cortex, apparently due to the loss of bottom-up input. Here we ask whether similar structural changes are found in individuals with loss of bottom-up input due to MD, but who show functional reorganization, in which foveal cortex responds to peripherally presented stimuli. We predicted that if grey matter reduction is driven by cortical deprivation, then such structural changes should not be found in MD individuals who show functional reorganization. As predicted, we found no evidence for grey matter reduction in foveal cortex in these individuals. These findings suggest that reorganization of visual processing (i.e., the activation of foveal cortex by peripheral stimuli) may be sufficient to maintain “normal” cortical structure.
Acknowledgement: NIH grant EY016559 (NK), and a Kirschstein-NRSA EY017507 (DDD).

23.407 The human MT/V5 cluster
Hauke Kolster 1 (hauke.kolster@med.kuleuven.be), Ronald Peeters 2, Guy Orban 1; 1 Laboratorium voor Neuro- en Psychofysiologie, KU Leuven Medical School, Leuven, Belgium, 2 Division of Radiology, UZ Gasthuisberg, Leuven, Belgium
Introduction. Recent observations of retinotopically organized areas within the human MT/V5 complex suggest two conflicting models for the relationship of these areas to the neighboring areas LO1/2: a discontinuous model (Georgieva et al., 2009), with separated central representations and distinct eccentricity distributions in hMT/V5+ and the lateral occipital complex (LOC), and a continuous model (Amano et al., 2009), in which LO1/2, MT/V5, and MST share a common eccentricity distribution. Methods. We used functional magnetic resonance imaging (fMRI) at 3T to identify areas within hMT/V5+ and the LOC and recorded responses to motion and shape localizers and to hand action for their characterization. We correlated the functional responses across subjects through the retinotopic data in each subject instead of using an anatomical registration, which resulted in a specificity of the group analysis near the resolution of the functional volumes of (2 mm)³. Results. We consistently located areas LO1 and LO2 within the LOC, and four retinotopic areas, the likely homologues V4t, MT/V5, MSTv, and FST, within hMT/V5+. Responses in the hand action vs. static hand condition were strong in all areas of the hMT/V5 complex but weak and not significant in the LOC. We found significant shape sensitivity in all areas of both complexes. MT/V5 and MSTv, however, showed half the sensitivity compared to all other areas. Conclusion.
The four areas of the hMT/V5 complex share a common central representation distinct from the LOC, and their topological organization closely resembles the organization recently observed in the MT/V5 field map cluster of the macaque monkey (Kolster et al., 2009). Areas V4t and FST, located between MT/V5 and LO1/2, show equally strong shape sensitivity as the areas within the LOC. They are, in terms of functional properties as well as topological location, consistent with the previously reported LO-ML overlap (Kourtzi et al., 2002).
Acknowledgement: IUAP 6/29, EF/05/014, and FWO G.0730.09

23.408 Representations of physical and perceived colour-motion conjunction in early visual cortex
Ryota Kanai 1 (r.kanai@ucl.ac.uk), Martin Sereno 1,2, Vincent Walsh 1; 1 Department of Psychology, University College London, 2 Department of Psychology, Birkbeck College, University of London
In order to reveal how combinations of visual features are represented in early visual areas (V1, V2, V3, V3A and V4), we examined whether they show adaptation to colour-motion conjunctions using functional MRI. Further, we investigated which visual areas show adaptation to perceived conjunctions rather than physical conjunctions using steady-state misbinding (Wu, Kanai & Shimojo, 2004), which allows separation of perceived conjunction from physical conjunction. Adaptation to physical and perceived conjunctions was evaluated within regions of interest (ROIs) corresponding to the patches of the visual field where misbinding was induced. In one condition, peripheral conjunctions alternated physically every 2 seconds, but the central part remained constant, resulting in a constant percept (physical alternation condition). In a second condition, the central part alternated physically every 2 seconds, but the peripheral target patches remained physically constant (perceptual alternation condition). Two additional conditions were included as baseline conditions, in which both the central and peripheral patches stayed constant or both alternated. We found that most of the early visual areas adapted to physical stimulus combinations, suggesting that these areas encode physical colour-motion combinations even when the percept alternated. The only exception was V3A, which showed stronger adaptation to perceived than to physical combinations. These results indicate that colour and motion may not be segregated as previously believed. Furthermore, the adaptation to perceived combinations in V3A suggests that conscious perception of colour-motion conjunction may be directly represented at an intermediate stage of visual processing.
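Looking back at the untitled MVPA fragment that opens this page (just before 23.405): its central manipulation is averaging single-trial fMRI patterns into pseudo-blocks before training a linear SVM. A toy sketch of that comparison with scikit-learn follows; the shapes, trial counts and signal model are invented for illustration, not the study's data.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(4)

# Synthetic stand-in for single-trial voxel patterns: 150 trials x 200
# voxels, two orientation conditions with a weak class-dependent signal.
n_trials, n_vox = 150, 200
y = np.repeat([0, 1], n_trials // 2)
signal = rng.normal(0, 1, n_vox)
X = rng.normal(0, 1, (n_trials, n_vox)) + 0.15 * y[:, None] * signal

def block_average(X, y, trials_per_block=25):
    """Average consecutive same-condition trials into pseudo-blocks,
    mimicking the averaging step that block designs get 'for free'."""
    Xb, yb = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        for i in range(0, len(Xc), trials_per_block):
            Xb.append(Xc[i:i + trials_per_block].mean(axis=0))
            yb.append(c)
    return np.array(Xb), np.array(yb)

clf = SVC(kernel="linear")
print("single-trial accuracy:", cross_val_score(clf, X, y, cv=3).mean())
Xb, yb = block_average(X, y)
print("block-averaged accuracy:", cross_val_score(clf, Xb, yb, cv=3).mean())
```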


23.409 Multiple areas in human cerebral cortex contain visual representations of head rotation
D.M. Arnoldussen 1 (d.arnoldussen@donders.ru.nl), J. Goossens 1, A.V. van den Berg 1,2; 1 Dept. Biophysics, Donders Institute for Brain, Cognition and Behaviour, Centre for Neuroscience, Radboud University Nijmegen Medical Centre, 2 Functional Neurobiology, Helmholtz Institute, Utrecht University, The Netherlands
Our brain uses visual flow patterns to derive important information about the rotation of the eye and head through space and the direction of self-motion. This information is processed in various regions along the visual hierarchy, some of which also receive non-visual signals. Several regions along the dorsal stream are selective for elementary motion, like areas V3A, V6 and the middle temporal area (MT). Other areas in this path, like the medial superior temporal area (MST) and the ventral intraparietal area (VIP), are particularly modulated by optic-flow patterns. They are closely involved in heading perception and are both modulated by vestibular and eye movement signals. Although most of these higher visual areas have been retinotopically mapped, their functional role is still poorly understood. Previously, we identified a sub-region of pMST that is modulated by visual flow signals corresponding to a rotation of the head. For this, we used stimuli that allowed dissociation between simulated head and gaze rotation (see abstract: van den Berg et al.). Here, we show, using psychophysical techniques, high-resolution functional magnetic resonance imaging and wide-field visual stimuli, that: (1) perceived ego-rotation corresponds to the simulated head rotation rather than the gaze rotation; (2) as in pMST, regions within areas V3A, V6 and pVIP show a specific modulation of the BOLD response to simulated head rotation; (3) these areas have a retinotopic organization. Our observations do not permit us to conclude in which reference frame the receptive fields collect the visual flow: retino-centric or head-centric? Possibly the multiple visual representations of head rotation differ in this respect.
Acknowledgement: This work was funded by NWO-ALW grant 818.02.006 (AvB)

Color and light: Mechanisms
Orchid Ballroom, Boards 410–424
Saturday, May 8, 8:30 - 12:30 pm

23.410 Changes in the space-average S-cone stimulation of inducing patterns suggest an interaction among the different cone types
Patrick Monnier 1 (patrick.monnier@colostate.edu), Vicki Volbrecht 1; 1 Dept. of Psychology, Colorado State University
BACKGROUND: Induction with S-cone-stimulating patterns can cause striking color shifts (e.g., Monnier & Shevell, 2003). In the present study, we explored whether changing the space-average S-cone stimulation of the inducing pattern, holding the differences in S-cone stimulation between inducing and test chromaticities constant, affected the color shifts. METHODS: Chromatic induction was measured with patterns composed of circles that varied in S-cone stimulation, using asymmetric matching. The color appearance of a test ring presented with high and low S-cone-stimulating circles was matched by adjusting the hue, saturation, and brightness of a comparison ring presented within a uniform gray (EEW) field. The space-average S-cone stimulation of the test pattern was varied, holding the differences in S-cone stimulation between the inducing and test chromaticities constant. For each inducing pattern, measurements for three test chromaticities that varied in L/(L+M) were obtained.
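The axes used in 23.410's methods above, and repeatedly throughout this section, are the MacLeod-Boynton chromaticity coordinates, S/(L+M) and L/(L+M) (the cardinal axes of DKL space at equiluminance). A small helper for computing them from LMS cone excitations is sketched below; normalization conventions vary across labs, so treat the scaling as illustrative. (23.410's results follow the block.)

```python
import numpy as np

def macleod_boynton(lms):
    """Map LMS cone excitations to MacLeod-Boynton chromaticity:
    l = L/(L+M) (reddish-greenish axis), s = S/(L+M) (bluish-yellowish
    axis). lms: array-like of shape (..., 3), with L and M scaled so
    that L+M equals luminance (one common convention)."""
    lms = np.asarray(lms, dtype=float)
    L, M, S = lms[..., 0], lms[..., 1], lms[..., 2]
    lum = L + M                    # luminance, by convention L+M
    return np.stack([L / lum, S / lum], axis=-1)

# Example: two lights with identical L/(L+M) but different S/(L+M)
# differ only along the S-cone axis manipulated in 23.410.
print(macleod_boynton([[0.66, 0.34, 0.02],
                       [0.66, 0.34, 0.05]]))
```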
RESULTS: As previously reported, S-cone inducing patterns can cause relatively large shifts in color appearance. The arrangement of the inducing circles (high-S-cone adjacent/low-S-cone non-adjacent, or vice versa) determined the direction of the color shifts, toward higher or lower S-cone stimulation respectively, independent of the space-average S-cone stimulation of the inducing pattern. The space-average manipulation did affect the three test L/(L+M) chromaticities differently, with a general shift toward lower S-cone matches for the high L/(L+M) test. CONCLUSION: Variations in the space-average S-cone stimulation of the inducing patterns did not alter the overall direction of the color shifts but did affect the magnitude of the shifts for the three test-ring chromaticities that varied in L/(L+M). These measurements suggest an interaction among the different cone types.
Acknowledgement: NA

23.411 Testing models of color deficiencies using normal observers with Ishihara plates simulated for color-deficient observers
Joao Linhares 1 (jlinhares@fisica.uminho.pt), Sergio Nascimento 1; 1 Department of Physics, University of Minho, Gualtar, Braga, Portugal
The chromatic diversity of complex scenes can be simulated for normal and color-deficient observers. Current models of color vision deficiencies allow one to simulate, for a normal observer, the chromatic sensations experienced by a color-deficient observer. How realistic such simulations are is an open question. The goal of this work was to assess the effectiveness of the simulations with normal observers viewing Ishihara plates simulated for color-deficient observers. The plates were digitized with a hyperspectral imaging system, and the spectral reflectance of each pixel of the plates was estimated from a gray reference surface present near the plate at the time of digitizing. Data were acquired from 400 to 720 nm in 10-nm steps. Images were assumed rendered under D65. Simulations for normal observers of the perception of dichromatic and of anomalous color vision were carried out using deMarco's anomalous color matching functions (JOSA A, 9(9): 1465-1476, 1992) and Brettel's simulation of color appearance for dichromats (JOSA A, 14(10): 2647-2655, 1997). The resulting images were displayed on a calibrated 17-inch RGB color monitor with a flat screen, controlled by a computer raster-graphics card providing 24 bits per pixel in true-color mode (VSG 2/5; Cambridge Research Systems, Rochester, UK). Normal observers were asked to read the numbers on the plates displayed on the screen, simulated for normal, protanomalous, deuteranomalous, protanope and deuteranope observers. Ishihara plates were displayed randomly within the same observer category to avoid plate memorization. Comparing the expected results as described in the Ishihara test instructions with those obtained here, values of about 70% to 90% were found for all observers. These results suggest that the models used describe the vision of color-deficient observers well enough to reproduce answers to the Ishihara plates.
Acknowledgement: This work was supported by the Centro de Física of Minho University, Braga, Portugal and by the Fundação para a Ciência e a Tecnologia (grants POSC/EEA-SRI/57554/2004 and POCTI/EAT/55416/2004). João M.M. Linhares was fully supported by grant SFRH/BD/35874/2007.

23.412 Equiluminance Settings Interact Strongly With Spatial Frequency
Alissa Winkler 1 (awinkler@uci.edu), Charles Chubb 1, Charles E. Wright 1; 1 Dept. of Cognitive Sciences, University of California, Irvine
The minimum-motion method is a standard tool used by psychophysicists to obtain display settings at which a light of hue A is perceptually equiluminant to another, fixed light F. This method uses a 4-frame periodic stimulus, whose 1st and 3rd frames comprise counterphase achromatic gratings and whose 2nd and 4th frames comprise counterphase square-wave gratings alternating between lights A and F, in quadrature with the square wave of frames 1 and 3. When the luminance of hue A is adjusted to make the motion of this stimulus ambiguous, the resulting light is taken as equiluminant to F. We document dramatic effects of the spatial frequency (SF) of the square wave used in the motion stimulus on the equiluminance settings obtained using this method. Some observers show the following pattern: when the square wave is low SF (3 cycles/deg), in order to be made equiluminant to a fixed gray, a saturated green needs to be made much lower in luminance than it does when the square wave is high SF (6 cycles/deg). For other observers, the reverse pattern holds: their equiluminant green settings are higher for the low than for the high SF square wave. Moreover, whichever pattern an observer shows in her equiluminant settings for green, she is likely to show the reverse pattern in her settings for red lights: i.e., if an observer produces higher equiluminance settings for green with the high than with the low SF square wave, then she tends to produce lower equiluminance settings for red with the high than with the low SF square wave. These findings underscore the importance of matching the SF of the minimum-motion stimulus to the SF of the context in which the equiluminant lights are to be used experimentally.
Support: National Science Foundation BCS-0843897

23.413 The role of color in the early stages of visual analysis
Giovanni Punzi 1 (giovanni.punzi@pi.infn.it), Maria Michela Del Viva 2,3,4, Steve Shevell 3,4,5; 1 Dipartimento di Fisica, Università degli Studi di Pisa, 2 Dipartimento di Psicologia, Università degli Studi di Firenze, 3 Psychology, University of Chicago, 4 Visual Science Laboratories, Institute for Mind and Biology, University of Chicago, 5 Ophthalmology & Visual Science, University of Chicago
The visual system is capable of quickly extracting relevant information from a large amount of visual data. In order to do so, the early stages of analysis must provide a compact image representation that extracts meaningful features (Barlow, 1959; Marr, 1976). Color in natural scenes is a rich source of information, but a worthwhile question is whether color is sufficiently important to justify its implicit computational load during the early stages of visual processing, when strong compression is needed. A pattern-filtering approach (Punzi & Del Viva, VSS 2006), based on the principle of most efficient information coding under real-world physical limitations, was applied to color images of natural scenes in order to investigate the possible role of color in the initial stages of image representation and edge detection. That study, performed on photographic RGB images, confirmed the effectiveness of the pattern-filtering approach in predicting from first principles the structure of visual representations, and additionally suggested that color information is used in a very different way than luminance information (Del Viva, Punzi & Shevell, VSS 2009). The present study is significantly more detailed and uses the photoreceptor color space of MacLeod and Boynton, in which luminance and chromatic information can be expressed separately. The results show that, when strict computational limitations are imposed, the use of color information does not provide a significant improvement in either the perceived quality of the compressed image or its information content, over the use of luminance alone. These results suggest that the early visual representations may not use color. Instead, color may be more suitable for a separate level of processing, following a rapid, initial luminance-based analysis.
Acknowledgement: Supported by an Italian Ministry of University and Research Grant (PRIN 2007)

23.414 Filling-in of color spreads to well-localized illusory contours
Claudia Feitosa-Santana 1,2 (claudia@feitosa-santana.com), Anthony D'Antona 1,2, Steven K. Shevell 1,2,3; 1 Psychology, University of Chicago, USA, 2 Visual Science Laboratories & Institute for Mind and Biology, University of Chicago, USA, 3 Ophthalmology & Visual Science, University of Chicago, USA
PURPOSE: Observers report that a filled-in color from a chromatic light into an equiluminant achromatic surround is bounded by illusory contours (Feitosa-Santana et al., VSS 2009), but a possible explanation is that observers report filling-in because they cannot accurately localize illusory contours. This was tested by measuring (1) observers' ability to localize illusory contours and (2) the frequency of perceived filling-in when the chromatic light that normally fills in had a higher luminance than the surround. If contour localization is poor, then the frequency of filling-in should not vary with luminance, because the added luminance-contrast edge still reaches a poorly localized illusory contour. METHODS: Three kinds of illusory contours were tested: a Kanizsa square from solid “pacmen”, a Kanizsa square from “bull's eye” pacmen, and horizontally phase-shifted vertical lines. In experiment (1), two thin dark horizontal lines on an achromatic background were presented on either side of a horizontal illusory contour. In different trials, the lines were positioned at various positions above or below the illusory contour; observers indicated whether the lines appeared above or below the contour.
In experiment (2), a yellow square with a luminance higher than its achromatic surround was presented some distance from the illusory contour. Without luminance contrast, the yellow square fills in up to the contour, which was either 4 or 6 min away. Three levels of luminance contrast were tested (5%, 7%, 11%). Observers indicated whether the yellow square appeared to be touching the illusory contour (and thus a filled-in color). RESULTS & CONCLUSION: (1) Observers perceived the illusory contour's position with an accuracy of ±1 min. (2) The frequency of filling-in was attenuated with 5% or 7% luminance contrast, and abolished at 11%. Both results are inconsistent with poorly localized illusory contours, and thus confirm that the spread of filled-in color is bounded by illusory contours.
Acknowledgement: Supported by NIH grant EY-04802.

23.415 Why do coloured filters improve vision?
Annette Walter 1 (A.E.Walter3@Bradford.ac.uk), Michael Schuerer 2, Marina Bloj 1; 1 Bradford Optometry Colour and Lighting (BOCAL) Lab, School of Optometry and Vision Sciences, University of Bradford, 2 OncoRay - OnCOOPtics, Medical Department Carl Gustav Carus, University of Dresden
Coloured filter media are said to improve colour contrast, especially for sport-related activities. The improvement is not well defined and apparently includes contrast vision (discrimination based on luminance differences), colour discrimination (the ability to distinguish colours in direct comparison, depending on their colour distance) and some other effects, like simultaneous contrast and adaptation. We developed an objective measurement method for detecting the effects of coloured filter media on colour perception while excluding the effects of luminance differences. The apparatus employed in this investigation fulfils the requirements of “colorimetry by visual matching” and does not have the limitations of CRT or TFT displays. It is based on additive mixing of the emission spectra of seven different light-emitting diode (LED) types. Based on this, a freely adjustable spectrum is generated. The selected LEDs covered a continuous spectrum in the range of 420 nm to 750 nm. In our initial measurements, the overall luminance level was fixed at 377 cd/m². We evaluated the just-distinguishable colour difference on a vertically divided, two-degree test field around a yellow-green reference colour (CIE x=0.466, CIE y=0.453) along five colour directions. Measurements with volume filters (laser goggles (12 participants, 3 repeats), contact lenses (n=12, 3 repeats) and sport filters (n=3, 3 repeats)) were done in a similar fashion; the filters absorbed parts of the reference spectra and induced colour shifts in different parts of the tristimulus space. Any induced luminance differences were eliminated by adjusting the LEDs' intensity. For all filters and participants, the smallest colour discrimination ellipses (thresholds) were found in the yellow region, while size and geometry varied widely for each subject. We believe that this major improvement was based on increased colour discrimination in the yellow region and cannot be accounted for by variation in luminance or the use of a non-uniform colour space.
Acknowledgement: Prof. Dr.
23.416 Filling-in with afterimages: Modeling and predictions
Gregory Francis 1 (gfrancis@purdue.edu), Jihyun Kim 1; 1 Purdue University
Van Lier, Vergeer, and Anstis (2009) reported that color information in a visual afterimage could spread across regions that were not colored in the inducing stimulus. The perceived color and shape of the afterimage could be manipulated by drawn contours that apparently trapped the spread of afterimage color signals. New simulations of the BCS/FCS model of visual perception (Grossberg & Mingolla, 1985a,b) demonstrate that the model easily accounts for many of the properties of these afterimages. A core idea of the model is that representations of colors spread in all directions at a filling-in stage until blocked by boundary signals. Boundary signals that form closed connected contours can trap the spreading colors to create a surface of relatively uniform color. A side effect of this process is that color contrasts that are too weak to form boundaries may spread beyond their physical location. The weak color contrasts that are often present with an afterimage are one example of this phenomenon. The model simulations further predict that a small closed contour should block the spread of afterimage color into the interior of the contour. Empirical data demonstrate the validity of this prediction.
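The filling-in stage invoked here is, at its core, a diffusion process gated by boundaries. As a rough illustration (a toy sketch, not the multi-stage BCS/FCS model itself), the code below spreads a color signal iteratively across a grid except where a boundary mask blocks it; a closed contour traps the color inside, as the model predicts for afterimage color.

```python
# Toy filling-in sketch: iterative diffusion of a color signal that is
# blocked wherever a boundary mask is set. Illustrates the trapping idea
# only; it is not an implementation of the full BCS/FCS model.
import numpy as np

def fill_in(color, boundary, n_iter=200):
    """Diffuse `color` (2-D array) while cells in `boundary` (bool) block flow."""
    c = color.astype(float).copy()
    for _ in range(n_iter):
        acc = np.zeros_like(c)
        for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            neighbor = np.roll(c, shift, axis=(0, 1))
            open_path = ~np.roll(boundary, shift, axis=(0, 1)) & ~boundary
            acc += np.where(open_path, neighbor, c)  # blocked: keep own value
        c = acc / 4.0
    return c

grid = np.zeros((21, 21))
grid[10, 10] = 1.0                              # weak color signal at the center
boundary = np.zeros_like(grid, dtype=bool)
boundary[5, 5:16] = boundary[15, 5:16] = True   # closed square contour
boundary[5:16, 5] = boundary[5:16, 15] = True
filled = fill_in(grid, boundary)
print(filled[10, 10] > filled[2, 2])            # True: the contour traps the color
```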


23.417 The role of color in perceptual organization
Baingio Pinna 1 (baingio@uniss.it), John S. Werner 2; 1 Dept. of Architecture and Planning, Univ. of Sassari, Italy, 2 Dept. of Ophthalmology & Vision Science, UC Davis, CA, USA
Color is a visual attribute that appears to belong to an object and to its shape. Phenomenally, the perception of an object is often considered identical to the perception of its shape but not to its color, which appears as a secondary attribute. As such, it is believed that color has relatively little influence on the perception of shape, even if it enhances the capacity of an organism to distinguish objects. If color can scarcely influence shape perception, it can be more effective with grouping, which is a simpler kind of perceptual organization. Grouping defines what belongs with what, and color is one among many possible attributes defining the similarity principle studied by Wertheimer. In other words, color can determine in terms of similarity how elements in the visual field 'go together' to form an integrated, holistic percept. Among the many possible kinds of similarities, grouping by color is believed to be less effective compared with other attributes like shape and luminance. The main purposes of this work are to study the role played by color in determining visual grouping, not only in relation to other similarity attributes but also in relation to other principles such as proximity, good continuation and past experience, and the perceptual shape of objects. Psychophysical experiments revealed several new effects and demonstrated that, in spite of previous results, color can strongly influence both the form of grouping and the form of shape. These results were extended and strengthened by using a reading task that implies a process of segmentation of words and then phenomenal grouping and shape formation.
Acknowledgement: Fondo d'Ateneo ex 60% (to BP)

23.418 Illusory backward motion occurs only with a luminance component
Caterina Ripamonti 1 (c.ripamonti@ucl.ac.uk); 1 Institute of Ophthalmology, University College London
For stimulus durations shorter than 35 msec, the perceived direction of motion of a stimulus composed of a moving 3-c/deg grating and a static 1-c/deg grating can appear reversed (Derrington and Henning, 1987), even though the direction of motion of the high-frequency grating when presented alone is perceived correctly. We tested whether this illusory motion also occurs for stimuli composed of coloured gratings with and without a luminance component. Stimuli consisted of a moving 3-c/deg and a static 1-c/deg horizontal sinewave grating. The high-frequency grating moved at 6-c/deg or 12-c/deg. The gratings were seen through a circular aperture of 5 deg diameter surrounded by a uniform grey background. Stimulus duration was controlled by varying the standard deviation of a temporal Gaussian envelope. A 2-AFC paradigm was used to determine the perceived motion direction of the stimulus. When both gratings contained a luminance component, we found that for stimulus durations between 35 and 125 msec, the gratings appeared to slide on top of each other. The apparent motion of the steady low-frequency grating was in the opposite direction to the high-frequency grating. At durations shorter than 35 msec, the two gratings appeared as a single pattern moving in the opposite direction to the high-frequency grating (illusory motion). Interestingly, when either or both gratings were isoluminant, only the high-frequency grating was seen moving. Its perceived direction of motion was correct only for durations above 35 msec; below 35 msec, performance was at chance. In summary, illusory backward motion is found only with stimuli that contain a luminance component. We suggest that illusory backward motion is due to a higher-order feature-tracking system that requires two luminance inputs.
Acknowledgement: Fight for Sight

23.419 The Role of S-Cone Signals in the Color-Motion Asynchrony
Eriko Miyahara-Self 1 (eself@fullerton.edu), Catherine Tran 2, Naul Paz 2, Ashley Watson 1; 1 Department of Psychology, California State University, Fullerton, 2 Department of Biological Science, California State University, Fullerton
Background. In order to perceive simultaneous changes in color (e.g., from red to green) and motion direction (e.g., from upward to downward), the change in the motion direction needs to precede the color change by approximately 80 ms. This indicates color-motion asynchrony. This phenomenon has been investigated only with red and green stimuli that represent the L- and M-cone activity. The purpose of this study was to examine the asynchrony with stimuli that vary along the S/(L+M) axis as well as those that vary along the L/(L+M) axis. Because S-cone signals are processed more slowly than L- and M-cone signals, decreased asynchrony was expected with stimuli that vary along the S/(L+M) axis. Methods. The stimulus was 200 random equiluminant dots in a circular field of 8° in diameter. The direction of the motion of the dots was initially upward (or downward) and changed to downward (or upward) after 300 ms of the stimulus onset.
The color of the dots changed once, either along the L/(L+M) or the S/(L+M) axis, and the second color lasted for 300 ms. The relative timing of the motion-direction change and the color change was varied from -100 to 250 ms in increments of 50 ms. The observer's task was to judge the predominant direction of motion of the second-color dots. The magnitude of color-motion asynchrony was assessed by the method of constant stimuli from four observers. Results. Surprisingly, the results showed that both stimuli that varied along the L/(L+M) axis and those that varied along the S/(L+M) axis produced a perceptual asynchrony of about 90 ms. Conclusion. The equal magnitude of the color-motion asynchrony along the L/(L+M) and the S/(L+M) axes indicates that the color-motion asynchrony takes place in higher cortical areas beyond the integration of cone signals. This further supports the differential processing time model.

23.421 The effect of luminance intrusion on the chromatic VEP response
Chad Duncan 1 (cduncan@unr.edu), Michael Crognale 1; 1 University of Nevada, Reno
The use of large-field stimuli to elicit chromatic visual evoked potentials from the S-(L+M) pathway is useful for evaluation of compromised retinas. However, the use of large fields has been criticized as containing luminance intrusion, due in part to the distribution of macular pigment across the retina. We tested the effects of luminance intrusion on the chromatic component (CII) of the onset VEP waveforms. Over a range of luminance mismatches, the latencies of the chromatic waveform components were unaffected by luminance intrusion. Responses to low spatial frequency luminance onsets are known to be highly variable. Consequently, the effects of luminance mismatches were also highly variable. The degree to which intentional luminance mismatches affected the component latencies depended on the shape of individual achromatic components in the waveforms. However, over a range of luminance mismatches that should encompass that encountered through normal variations in macular pigment, the latencies were unaffected. These results suggest that luminance mismatches due to macular pigment differences across the retina have little effect on the latencies of the chromatic components of the VEP response to large-field S-cone stimuli.

23.422 Quantifying the perception of colour in visual saltation
David Lewis 1 (dave37@gmail.com), Sieu Khuu 1; 1 Optometry and Vision Sciences, University of New South Wales
In the visual saltation illusion, stimuli are presented first at one location and then another in rapid succession, which produces the illusion of the intermediate stimuli jumping in equidistant steps between the two locations (Geldard, 1975). Geldard also noted that if the stimuli at the two sites of stimulation were of different colours, the apparent colour of the mislocalised stimuli appeared to be a mixture of the two colours. For example, if the stimuli at one site were red and at the other site green, then the mislocalised stimuli would appear yellow. In the present study, we systematically quantified this illusory colour change with different colour combinations. In Experiment 1, observers were presented with 3 coloured bars (0.5 x 2 deg, inter-stimulus interval of 0.25 seconds); two bars of the same colour were flashed at one location (10 degrees to the right of fixation), and one bar of a different colour was flashed at another location (15 degrees).
Saltation was noted, with the second bar appearing mislocalised between the first and third bars, and six observers were required to adjust the colour and position of a probe to match the perceived colour and position of the mislocalised bar. We observed that the perceived colour of the mislocalised element does not correspond to its physical colour for a range of colour combinations, but appears to be an equal mixture of the two physical colours. Additionally, in Experiment 2 we showed that the perceived colour of the mislocalised element can be altered by briefly changing the colour of the background coinciding with its perceived position, and the resultant colour is equal to a mixture of the perceived colour of the bar and the background colour. This finding indicates that phenomenological colour perception in visual saltation relies on the perceived colour and not the physical colour of stimuli.

23.423 Experimental study of the pre-nonlinearity, nonlinearity and post-nonlinearity stages at medium wavelengths
Daniela Petrova 1 (d.petrova@ucl.ac.uk); 1 University College London
A 560 nm amplitude-modulated flickering stimulus undergoes a brightness change at low to medium intensities and desaturates at high intensities. This change in appearance is consistent with a distortion product produced by a nonlinear stage, which can be used to dissect the visual system into pre-nonlinearity and post-nonlinearity stages whose frequency responses can be measured separately. Despite previous investigations of distortion in temporally varying visual stimuli, the location of the nonlinearity and the frequency and intensity responses of the individual pre- and post-nonlinearity stages are still not understood for a 560 nm stimulus, which is the subject of this study. A Maxwellian-view system was used to generate the visual stimulus. The input-output function of the nonlinearity was measured at different frequencies by matching the distortion product with the change in appearance of a sinusoidal stimulus of equal wavelength and intensity. Changes in the frequency responses were measured at four intensities between 8.56 and 10.41 log10 quanta s⁻¹ deg⁻². The results of the experiments show that the peak frequency response for the pre-nonlinearity stage is at 7.5-25 Hz and the upper frequency limit is at 35-45 Hz, depending on intensity. At large amplitude modulations, there is greater inter-subject variability and a plateau is reached in the matching data. The post-nonlinearity stage is low-pass and most sensitive at medium intensity levels. The study provides new data for the early and late frequency responses of the visual system, including how they differ with systematic variation in the intensity and frequency of amplitude-modulated stimuli. The results are consistent with retinal cell physiology data, which indicate an early nonlinearity. Different models are considered as possible explanations of the underlying nonlinearity, including an expansive nonlinearity and an asymmetric slew rate. The findings of this study are consistent with the Bezold-Brücke, Brücke-Bartley and Broca-Sulzer effects, and have wide possible applications.
Acknowledgement: BBSRC
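The logic of dissecting the system with a distortion product can be made concrete with a toy computation: passing an amplitude-modulated sinusoid through a static nonlinearity creates energy at the envelope frequency that is absent from the input. In the sketch below, a square-root compressive nonlinearity stands in for whatever nonlinearity the visual system actually applies; all parameter values are illustrative.

```python
# Toy demonstration: a static nonlinearity applied to an amplitude-modulated
# flicker introduces a distortion product at the (low) envelope frequency.
import numpy as np

fs = 1000.0                      # sample rate, Hz
t = np.arange(0, 2.0, 1 / fs)    # 2 s of signal
f_carrier, f_env = 30.0, 2.0     # carrier flicker and AM envelope, Hz

stimulus = (1 + 0.8 * np.sin(2 * np.pi * f_env * t)) * np.sin(2 * np.pi * f_carrier * t)
response = np.sqrt(np.abs(1 + 0.5 * stimulus))   # stand-in compressive nonlinearity

def amp_at(signal, freq):
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    return spectrum[np.argmin(np.abs(freqs - freq))]

# The input has essentially no energy at the envelope frequency;
# the nonlinear response does -- that is the distortion product.
print(amp_at(stimulus, f_env), amp_at(response, f_env))
```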


23.424 Measuring perceived flicker in field-sequential displays
Wei-Chung Cheng 1 (waynewccheng@gmail.com); 1 US FDA
Purpose: To reduce the color breakup phenomenon in field-sequential displays, different color sequences have been proposed in the literature, such as WRGB, RGBKKK, RGBCMY, etc. Although some of them alleviate the perceived chromatic artifacts, most of them, however, introduce flicker in the luminance domain. The goal of this study is to measure and quantify the perceived flicker by using electroencephalography. Method: A conventional blocked ERP experiment was conducted. The subjects were stimulated by a custom-made field-sequential liquid crystal display, in which the RGB LED backlight can generate arbitrary color sequences between 90 and 180 Hz. The on-set was 4 seconds of a flickering RGB image followed by the off-set, which was 4 seconds of a dark image. A trial consisted of 10 consecutive cycles of on-sets and off-sets. A 64-channel EEG recorder (EGI 250) and its software were used to analyze the waveform difference between on-set and off-set as an index of perceived flicker. Results: Altering the color sequence mutually affects chromatic artifacts and luminous flicker. The best solution to color breakup has the worst perceivable flicker. The outcomes also show higher consistency compared with psychophysical methods. This method can be used to determine the critical flicker fusion frequency for judging the image quality of field-sequential displays.

Perception and action: Reaching and grasping
Orchid Ballroom, Boards 425–439
Saturday, May 8, 8:30 - 12:30 pm

23.425 Effects of object shape on the visual guidance of action
Owino Eloka 1 (owino.eloka@googlemail.com), Volker H. Franz 1; 1 Department of Psychology, Justus-Liebig-University, Giessen, Germany
Many studies suggest that the perception of object shape is encoded holistically rather than analytically. However, little is known about how object shape is processed to control grasping movements. It has been proposed that visual control of action utilizes only the most relevant dimensions of an object (Ganel & Goodale, 2003). We tested whether visual control of action also takes into account information about object shape. 26 participants grasped a disk or a bar of identical length (bar: 4.1 cm long, disk: 4.1 cm diameter). In 20% of the trials, the object changed its shape from bar to disk or from disk to bar during the movement. The change occurred early during the movement (after the index finger or thumb moved 2 cm away from the starting position) or late (after 2/3 of the movement distance was covered). In the remaining 80% of the trials no object change took place. We found that maximum grip aperture depended on object shape. Participants grasped bars with a significantly larger maximum grip aperture than disks. Furthermore, they adjusted maximum grip aperture when object shape changed from bar to disk. Specifically, these adjustments occurred only in the early phase of the movement.
Our results reveal that vision for action is sensitive to object shape information. They also indicate that object information encoded holistically is used for corrective adjustments during the grasping movement. Taken together, these results show that holistic processing might play a notable role in vision for action.
Acknowledgement: This work was supported by grant DFG/FR 2100/1-3 to Volker Franz and the research unit DFG/FOR 560 'Perception and Action' by the Deutsche Forschungsgemeinschaft (DFG).

23.426 Older adults use a distinctive form of visual control to guide bimanual reaches
Rachel Coats 1 (rcoats@indiana.edu), John Wann 2; 1 Psychological & Brain Sciences, Indiana University, 2 Royal Holloway University, London
Background: Previous research has shown that young adults are skilled at coordinating the left and right hands when reaching to grasp two separate objects at the same time, or when carrying two objects to the same location. Less is known about the behaviour of older adults with regard to such tasks. We examined the performance differences between young adults (mean age 20) and older adults (mean age 74) in terms of how they coordinate the two hands during a bimanual movement. Methods: Identical objects were located to the left and right of 3 trays laid out in front of the participants along the fronto-parallel plane. Participants picked up the objects (one in each hand) and placed them in the specified tray simultaneously. Movements of the objects were recorded using a VICON 3D motion capture system. Results: Although no group differences were found in overall movement time, the details of the reach movements were not the same. The older adults moved as quickly as possible to the tray vicinity, producing reaches with greater peak velocities than the young. They then spent longer than the young in the 'near-zero velocity' final phase of the reach and made more adjustments during this phase. In contrast, the young spent longer in the preceding low-velocity phase than the older adults, and made more adjustments during that phase. Conclusions: We propose that, in contrast to the younger group, older adults have more problems using online sensory feedback to correct trajectory errors during the flight phase. As a result they wait until both hands are together so they can visually monitor both objects before making trajectory corrections.
Acknowledgement: Economic and Social Research Council, UK

23.427 Time-course of allocentric-to-egocentric conversion in memory-guided reach
Ying Chen 1,2 (liuc@yorku.ca), Patrick Byrne 1, J. Douglas Crawford 1,2,3,4; 1 Centre for Vision Research, York University, 2 School of Kinesiology & Health Science, York University, 3 Departments of Psychology & Biology, York University, 4 Neuroscience Graduate Diploma Program, York University
It has been suggested that both egocentric and allocentric cues can be used for memory-guided movements, and that allocentric memory dominates during longer memory intervals (Obhi & Goodale, 2005; Hay & Redon, 2006). In the present study we examined (1) at what point in the reach plan allocentric representations are converted to egocentric representations and (2) the rates of decay of egocentric and allocentric memory. Nine subjects reached for a remembered target in complete darkness after a variable memory delay (2.5, 5.5, or 8.5 seconds in total). In the Ego Task the target was presented alone in the periphery on a CRT screen. In the Allo Task the target was presented along with four nearby blue disks (visual landmarks).
After the variable delay, the landmarks reappeared at a shifted location, and subjects were instructed to reach to the target relative to the landmarks. In the Allo-Ego Conversion Task the shifted landmarks re-appeared twice: once before the variable delay and once immediately after (just before the reach cue). We analyzed the variance of reaching errors and the reaction time (RT) for each memory delay in the three tasks. In the Ego Task, variance increased significantly at the medium and long delays compared to the short delay; RT was longer at the short delay than at the medium and long delays, and the latter difference was significant. In the Allo Task there was no significant difference in variance or RT across the delays. In the Allo-Ego Conversion Task, there were significant increases in variance and decreases in RT for the medium and long delays compared to the short delay, similar to the Ego Task. These results confirm that egocentric memory for reaching degrades more rapidly than allocentric memory; despite this, in our Allo-Ego Conversion Task subjects preferred to convert allocentric into egocentric representations at the first possible opportunity.
Acknowledgement: Canada Research Chairs Program

23.428 Impact of hand position during reaching on the manual following response induced by visual motion
Hiroaki Gomi 1,2 (gomi@idea.brl.ntt.co.jp), Naotoshi Abekawa 1; 1 NTT Communication Science Labs., Nippon Telegraph and Telephone Corporation, 2 Shimojo Implicit Brain Function Project, JST-ERATO
It has recently been found that the manual following response (MFR), which is induced at short latency by applying a surrounding visual motion during reaching, is modulated by the spatial relationship between gaze and reaching-target locations (Abekawa & Gomi 2006, Society for Neuroscience). On the other hand, a change in the spatial relationship between the reaching target and the visual motion location appeared not to affect the MFR. However, it has not yet been examined whether or not the hand position relative to the motion stimulus affects the MFR. To investigate this aspect, we conducted an experiment in which the location of the visual motion stimulus (longitudinal grating-pattern motion, 120 x 13 cm) was varied along the reaching path from proximal to distal positions (proximal, middle, and distal). Each stimulus started to move transversally at different hand positions (proximal, middle, and distal) during arm-extension reaching movements. The distal stimulus with the middle hand position induced the greatest MFR among all nine conditions, and that stimulus with the proximal and distal hand positions also induced a large MFR. The middle stimulus with the proximal and middle hand positions also induced a clear MFR, but with the distal hand position it did not induce a significant MFR. The proximal stimulus did not induce the MFR at any hand position, although the same stimulus induced the MFR during arm-flexion reaching movements. These MFR variations, therefore, cannot be explained only by changes in the stimulus location on the retina or by the relationship between the gaze and reaching-target locations. The MFR variations observed in the experiment suggest that the MFR is affected not only by the hand position but also by the hand movement direction relative to the stimulus location.

23.429 Learning reward functions in grasping objects with position uncertainty via inverse reinforcement learning
Vassilios Christopoulos 1 (vchristo@cs.umn.edu), Paul Schrater 1,2; 1 Department of Computer Science and Engineering, University of Minnesota, 2 Department of Psychology, University of Minnesota
Many aspects of visuomotor behavior have been explained by optimal sensorimotor control, which models actions as decisions that maximize the desirableness of outcomes, where the desirableness is captured by an expected cost or utility assigned to each action sequence. Because costs and utilities quantify the goals of behavior, they are crucial for understanding action selection. However, for complex natural tasks like grasping, which involve the application of forces to change the relative position of objects, modeling the expected cost poses significant challenges. We use inverse optimal control to estimate the natural costs for grasping an object with position uncertainty. In a previous study, we tested the hypothesis that people compensate for object position uncertainty in a grasping task by adopting strategies that produce a stable grasp at first contact – in essence using time efficiency as a natural cost function. Subjects reached to an object made uncertain by moving it with a robot arm while out of view. In accord with optimal predictions, subjects compensated by approaching the object along the direction of maximal position uncertainty, thereby maximizing the chance of successful object-finger contact. Although subjects' grasps were near optimal, the exact cost function used is not clear. We estimated the unknown cost functions that subjects used to perform the grasping task from their movement trajectories. Our method involves computing the frequency with which trajectories passed through a grid of spatial locations in the 2D space and using the results to estimate the transition probability matrix. Formulating the grasping task as a Markov Decision Process (MDP) and assuming a finite state space as well as a finite set of actions, we can solve for the cost function under which the observed behavior is an optimal solution of the MDP. The estimated costs are consistent with a trade-off between efficient grasp placement and a low probability of object-finger collision.
Acknowledgement: This project was funded by NIH grant NEI R01 EY015261
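The estimation pipeline sketched in this abstract (discretize positions, count transitions, find a cost function under which the observed trajectories look optimal) can be illustrated generically. The code below is a deliberately crude stand-in for inverse reinforcement learning: it scores a candidate cost map by how much cheaper the observed paths are than random alternatives. The grid size, trajectories, and candidate cost maps are all invented for illustration; this is not the authors' algorithm.

```python
# Crude IRL-flavored sketch: prefer the candidate cost function under which
# observed trajectories are cheap relative to random alternatives.
import numpy as np

rng = np.random.default_rng(1)
N = 5                                    # N x N grid of discretized positions

def path_cost(cost, traj):
    return sum(cost[s] for s in traj)    # total per-state cost along a path

def random_walk(start, length):
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    traj, cell = [start], start
    for _ in range(length - 1):
        dr, dc = moves[rng.integers(4)]
        cell = (min(max(cell[0] + dr, 0), N - 1), min(max(cell[1] + dc, 0), N - 1))
        traj.append(cell)
    return traj

def irl_score(cost, observed, n_random=500):
    """How much cheaper observed paths are than random ones (higher = better fit)."""
    obs = np.mean([path_cost(cost, t) for t in observed])
    rand = np.mean([path_cost(cost, random_walk(t[0], len(t)))
                    for t in observed for _ in range(n_random // len(observed))])
    return rand - obs

# Toy observed trajectories hugging the top rows, as if avoiding a hazard below.
observed = [[(0, 0), (0, 1), (0, 2), (0, 3), (0, 4)],
            [(1, 0), (0, 0), (0, 1), (0, 2), (0, 3)]]
flat = np.ones((N, N))                           # candidate 1: uniform cost
hazard = np.ones((N, N)); hazard[2:, :] = 5.0    # candidate 2: lower rows costly
print(irl_score(flat, observed), irl_score(hazard, observed))  # hazard wins
```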
23.430 View-based neural encoding of goal-directed actions: a physiologically inspired neural theory
Martin A Giese 1 (martin.giese@uni-tuebingen.de), Vittorio Caggiagno 1, Falk Fleischer 1; 1 Dept. of Cognitive Neurology, HIH / CIN, Univ. Clinic Tübingen, Germany
The visual recognition of goal-directed movements is crucial for action understanding. Neurons with visual selectivity for goal-directed hand actions have been found in multiple cortical regions. Such neurons are characterized by a remarkable combination of selectivity and invariance: their responses vary with subtle differences between hand shapes (defining different grip types) and with the exact spatial relationship between effector and goal object (as required for a successful grip). At the same time, many of these neurons are largely invariant with respect to the spatial position of the stimulus and the visual perspective. This raises the question of how this combination of spatial accuracy and invariance is accomplished in visual action recognition. Numerous theories in neuroscience and robotics have postulated that the visual system reconstructs the three-dimensional structure of effector and object and then verifies their correct spatial relationship, potentially by internal simulation of the observed action in a motor frame of reference. However, novel electrophysiological data showing view-dependent responses of mirror neurons suggest alternative explanations. METHODS: We propose a novel theory for the recognition of goal-directed hand movements that is based on physiologically plausible mechanisms and makes predictions that are compatible with electrophysiological data. It is based on the following key components: (1) a neural shape-recognition hierarchy with incomplete position invariance; (2) a dynamic neural mechanism that associates shape information over time; (3) a gain-field-like mechanism that computes affordance and spatial matching between effector and goal object; (4) pooling of the output signals of a small number of view-specific, action-selective modules. RESULTS: We show that this model is computationally powerful enough to accomplish robust position- and view-invariant recognition on real videos. It reproduces and correctly predicts data from single-cell recordings, e.g. on the view- and temporal-order selectivity of mirror neurons in area F5.
Acknowledgement: Supported by DFG (SFB 550), the EC FP7 project SEARISE, and the Hermann und Lilly Schilling Foundation.

23.431 No pain no gain: Assessment of the grasp penalty function
Urs Kleinholdermann 1 (urs@kleinholdermann.de), Volker H. Franz 1, Laurence T. Maloney 2; 1 Department of Experimental Psychology, Justus-Liebig-University Giessen, 2 Psychology & Neural Science, New York University
Purpose: In experiments where the outcomes of movements result in explicit monetary rewards and penalties, subjects typically plan movements that come close to maximizing their expected gain. But what if an economically optimal movement proves to be intrinsically stressful to the organism? Would subjects trade gain to avoid pain? And if they did so, how would they price biomechanical discomfort in monetary terms? We tested how degree of discomfort affected movement planning in a simple grasping task. Methods: Subjects attempted to rapidly grasp circular disks (50 mm diameter, 10 mm high).
The edge of each disk was marked with two reward regions symmetrically placed on the circumference. If the thumb and forefinger contact points both fell within the reward regions, the subject received a monetary reward, and otherwise a penalty. A grasp aimed at the centers of the reward regions would maximize expected reward, but such a grasp varied in comfort with rotation angle. From trial to trial we rotated the reward regions, forcing the subject to trade off comfort and expected gain. In one condition ("narrow") the reward regions spanned 40 degrees, in a second 60 degrees ("wide"). Deviating from the center was potentially more costly to the subject in the narrow than in the wide condition. Results: Participants systematically traded a portion of their potential gain to achieve a more comfortable grasp position. The relationship can be described by a monotonic function of wrist rotation angle. This interrelation implies that biomechanical constraints may have a direct influence on the estimated usefulness of a movement. Our findings demonstrate that the motor system includes biomechanical comfort as one component of planning movements that maximize expected gain.
Acknowledgement: Graduate school NeuroAct [UK], Research unit DFG/FOR 560 'Perception and Action' [UK], grant DFG/FR 2100/1-3 [VHF], Humboldt Stiftung [LTM]
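The expected-gain framework referenced in this abstract has a simple computational core: given motor noise, each aim point implies a hit probability on the reward region and hence an expected payoff, from which a comfort cost can be subtracted. The sketch below uses invented payoff, noise, and comfort-cost values; it illustrates the computation, not the authors' analysis.

```python
# Toy expected-gain computation for an aim angle on the disk rim, assuming
# Gaussian motor noise on the realized contact angle. Payoffs, noise level,
# and the comfort cost are invented for illustration only.
import numpy as np
from scipy.stats import norm

REWARD, PENALTY = 10.0, -20.0     # hypothetical payoffs
SIGMA = 8.0                       # assumed motor noise (deg) on contact angle
HALF_WIDTH = 20.0                 # "narrow" condition: 40-deg reward region

def expected_gain(aim_deg, region_center_deg):
    """Expected payoff when aiming at aim_deg with the region at region_center_deg."""
    err = aim_deg - region_center_deg
    p_hit = norm.cdf(HALF_WIDTH, err, SIGMA) - norm.cdf(-HALF_WIDTH, err, SIGMA)
    return p_hit * REWARD + (1 - p_hit) * PENALTY

def comfort_cost(aim_deg):
    """Hypothetical discomfort, growing with wrist rotation away from 0 deg."""
    return 0.005 * aim_deg ** 2

aims = np.linspace(-60, 60, 241)
center = 30.0                     # reward region rotated 30 deg from the comfortable grip
net = [expected_gain(a, center) - comfort_cost(a) for a in aims]
print(aims[int(np.argmax(net))])  # optimum shifts from 30 deg toward the 0-deg grip
```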


23.432 Visual feedback modulates BOLD activity in the posterior parietal cortex more so for visually-guided grasping than for visually-guided reaching
Robert L. Whitwell 1,2, Philippe A. Chouinard 1, Melvyn A. Goodale 1; 1 Department of Psychology, The University of Western Ontario, 2 Graduate Program in Neuroscience, The University of Western Ontario
When we reach out to grasp an object, the visuomotor system uses vision to direct our hand to the object's location and scale our grip aperture to the object's size. Several lines of evidence from human and non-human primate studies have implicated a network of structures in the posterior parietal cortex (PPC) in the programming and updating of visually guided grasping. The present study was designed to examine whether the availability of visual feedback during movement execution would modulate patterns of brain activation in the PPC for visually-guided reach-to-grasp movements. Participants were asked to either reach out and touch or reach out and grasp novel 3-D objects, with or without visual feedback throughout the movement. A voxel-wise analysis was carried out using a 2 (task) x 2 (feedback) ANOVA. Not surprisingly, the availability of visual feedback was found to increase activation in many visual areas in both the dorsal and ventral streams. In addition, grasping (as compared to reaching) invoked activity in motor areas (premotor and primary motor cortex), early visual areas (striate and extra-striate cortex), and areas in both the dorsal [e.g., anterior intraparietal sulcus (aIPS) and superior parietal lobule (SPL)] and ventral (lateral occipital complex, fusiform gyrus, and inferior temporal cortex) streams of visual processing. Importantly, however, task-by-feedback interactions were observed in several dorsal-stream regions. In the right SPL, the left aIPS, and the precuneus in both hemispheres, visual feedback increased the level of activation associated with reach-to-grasp movements relative to those made without visual feedback, a difference that was not apparent in the levels of activation associated with reach-to-touch movements. Taken together, these results add to a growing body of evidence that implicates the PPC in the programming, online monitoring, and updating of visually guided grasping.
Acknowledgement: Natural Sciences and Engineering Research Council of Canada (NSERC) and the Canadian Institutes for Health Research (CIHR)

23.433 Plans for action in posterior parietal cortex: An rTMS investigation
Christopher L. Striemer 1 (cstrieme@uwo.ca), Philippe A. Chouinard 1, Melvyn A. Goodale 1; 1 Department of Psychology, Centre for Brain and Mind, University of Western Ontario
Many theories of visuomotor control distinguish between the planning of a movement (i.e., programming the initial kinematic parameters) and the execution of the movement itself (so-called 'online control'). Evidence from neurological patients and functional brain-imaging studies strongly supports the notion that the posterior parietal cortex (PPC; especially in the left hemisphere) plays a critical role in the planning and execution of goal-directed movements. Importantly, however, there is no clear consensus on how different sub-regions within the PPC contribute to movement planning and execution. Some theories suggest that both planning and execution are carried out primarily within the superior parietal lobe (SPL), whereas others suggest that planning is carried out by the inferior parietal lobe (IPL) and execution by the SPL. In the current study we investigated this question using MRI-image-guided repetitive transcranial magnetic stimulation (rTMS; 3 pulses at 10 Hz). Specifically, we applied rTMS to different sites within the left IPL (angular and supramarginal gyri) and the left SPL (anterior and posterior SPL) either at target onset (planning) or at movement onset (execution), while participants (n=12) made open-loop pointing movements to targets in peripheral vision. Thus, participants had vision of their hand and the target during the planning phase; however, vision of the hand and the target was removed at movement onset. The results revealed a significant interaction between the site of rTMS stimulation and the time of rTMS delivery.
This interaction was driven by a significant increase in movement endpoint error when rTMS was applied to the SPL during movement planning rather than during execution, relative to both the IPL and sham stimulation. In short, these data are consistent with the idea that the SPL plays a crucial role in the planning (i.e., programming) of goal-directed movements.
Acknowledgement: This work was supported through a Heart and Stroke Foundation of Canada Postdoctoral Award to C.L.S., a Canadian Institutes of Health Research (CIHR) Postdoctoral Award to P.A.C., and a CIHR operating grant awarded to M.A.G.

23.434 Parietal regions specialized for saccades and reach in the human: an rTMS study
Michael Vesia 1,2,5 (mvesia@yorku.ca), Steve Prime 1,2,3, Xiaogang Yan 1,2, Lauren Sergio 1,2,3,5, J.D. Crawford 1,2,3,4,5; 1 Centre for Vision Research, 2 Canadian Action and Perception Network, 3 Department of Psychology, York University, Toronto, Canada, 4 Department of Biology, York University, Toronto, Canada, 5 Kinesiology and Health Science, York University, Toronto, Canada
Primate neurophysiology and human brain-imaging studies have identified effector-related regions in the posterior parietal cortex (PPC). However, this specialization is less clear in human fMRI studies. Here we used fMRI-derived regions of interest to navigate transcranial magnetic stimulation (TMS) and causally determine saccade and reach specificity in three PPC regions. In experiment 1, six subjects performed memory-guided saccades and reaches with their dominant right hand to remembered peripheral targets in complete darkness. During the interval between viewing the target and the saccadic eye or reach movement, we applied trains of repetitive TMS to anatomically defined regions of interest from individual subjects: (1) superior parieto-occipital cortex (SPOC); (2) the more anterior-lateral medial intraparietal sulcus (mIPS); and (3) a yet more anterior dorsal-lateral PPC region near the angular gyrus (cIPS) - in both hemispheres. Stimulation of the left mIPS and cIPS regions increased reach endpoint variability for rightward (contralateral) targets, whereas stimulation of SPOC deviated reach endpoints towards visual fixation. Only rTMS to the right mIPS and cIPS disrupted contraversive saccades. We then repeated experiment 1 with the nondominant left hand to investigate whether rTMS-induced errors remain spatially fixed or reverse (experiment 2). Here we found that stimulation of the right mIPS and cIPS caused a significant increase in endpoint variability for leftward (contralateral) targets. In our final, third experiment, we investigated reaching with or without visual feedback from the moving hand to determine whether rTMS disrupted the reach goal or the internal estimate of initial hand position needed to calculate the reach vector. In both mIPS and cIPS, visual feedback negated rTMS-induced reach errors, whereas rTMS-induced directional reach biases in SPOC remained.
Collectively, these results show that a more medial region centered on the parieto-occipital junction is involved only in the planning of reach (encoding the goal representation), whereas more lateral regions in human PPC have multiple overlapping maps for saccade and reach planning (encoding reach vector information).
Acknowledgement: Canadian Institutes of Health Research, Canada Research Chair Program

23.435 Coding of curved hand paths in the Parietal Reach Region
Elizabeth Torres 1 (ebtorres@rci.rutgers.edu), Christopher Buneo 2, Richard Andersen 3; 1 Rutgers University, 2 Arizona State University, 3 CALTECH
The posterior parietal cortex is an interface between perception and intentional actions, as demonstrated with the memory-guided reach paradigm. The typical PRR neuron often responds to visual targets briefly flashed in the dark, sustains the activity during the planning period, and has another firing burst with the initiation of the reach. These responses are gain-modulated by arm position, yet the tuning remains throughout the planning period. Such planning-activity patterns suggest a representation of the reach goal indicating the direction of a straight reach from the initial position of the hand to the distal target. It is at present unknown (1) whether this planning code would change when the path to the distal target is not straight, and (2) whether this code exclusively represents the goal of the reach. We addressed the first question by interposing physical obstacles on the way to some targets to evoke curved hand paths and necessarily elicit an initial rotation of the hand-target movement vector. To address the second question we placed the obstacle outside of the cells' memory response field and compared the memory activity to the first case, in which the obstacle was placed near the cell's response field. In 95 cells from two monkeys we systematically found a transient remapping and re-scaling of the cells' response fields during the planning of obstacle avoidance. Furthermore, these dramatic changes occurred whether or not the obstacle fell near the center of the memory-response field, and despite identical retinal input in the dark (i.e., the fixation light and the target in the periphery). These PRR cells transiently maintained a curved-hand-path code, and returned to their original response fields when planning straight reaches again.

23.436 Posterior Cortical Atrophy: An investigation of grasping
Benjamin Meek 1 (ummeek@cc.umanitoba.ca), Loni Desanghere 1, Jonathan Marotta 1; 1 Perception and Action Lab, Dept. of Psychology, University of Manitoba
At last year's VSS meeting (journalofvision.org/9/8/1095/) we presented a patient with posterior cortical atrophy (PCA) who demonstrated a severe deficit in face recognition, problems interpreting and reproducing line drawings of common objects, simultanagnosia, and colour hallucinations. Despite these perceptual difficulties, she was able to accurately guide her hand to stable grasp sites on irregularly shaped objects, and she showed appropriate grip scaling during reach-to-grasp movements. It has been suggested that such symptomatology represents a 'ventral' form of PCA, in which damage is predominantly restricted to occipitotemporal areas (Ross et al., 1996). The current study explored the visuomotor abilities of three patients with PCA in relation to their perceptual deficits.
In one experiment, subjects were presented with two asymmetrical, irregularly shaped objects and asked whether they were the same or different. They were then prompted to reach out and pick up one of these objects. We found that the PCA patients unanimously performed worse on the object discrimination task than age- and gender-matched controls, yet they executed accurate grasps to these same objects. In another experiment, subjects had to reach out and pick up simple rectangular blocks under free-viewing (closed-loop), no-vision (immediate open-loop), and delay conditions. Control subjects appropriately scaled their grasps in accordance with block size under all task conditions. In contrast, the PCA patients scaled effectively during the closed-loop and immediate open-loop conditions, but lost this ability following a three-second delay. Previous work has demonstrated that introducing a delay in a grasping task forces visuomotor systems to rely on a stored 'percept' of the target object retrieved from the ventral stream (Goodale et al., 1994), which our patients clearly lack. This study supports the idea that PCA may include two unique variants - 'dorsal' and 'ventral' PCA - and reinforces the findings that there are separate neural pathways that mediate vision-for-perception and vision-for-action.
Acknowledgement: This work was supported by grants from the Natural Sciences and Engineering Research Council of Canada to JM and the Manitoba Health Research Council to BM.

23.437 Investigating action understanding: Activation of the middle temporal gyrus by irrational actions
Jan Jastorff 1 (jan.jastorff@med.kuleuven.be), Simon Clavagnier 1, Gyorgy Gergely 2, Guy A Orban 1; 1 Laboratorium voor Neuro- en Psychofysiologie, K.U. Leuven, Medical School, Leuven, Belgium, 2 Cognitive Development Center, Central European University, Budapest, Hungary
Performing goal-directed actions towards an object according to contextual constraints has been widely used as a paradigm to assess the capacity of infants to evaluate the rationality of others' actions. Here we used fMRI to visualize the cortical regions involved in the assessment of action rationality. To this end, we scanned 15 participants, showing videos of human actors reaching over a barrier to grasp a fruit. The conditions were arranged according to a 2x2 factorial design by changing either the height of the barrier or the height of the arm trajectory. The conditions were: (1) low barrier with high arm trajectory, (2) high barrier with high arm trajectory, (3) low barrier with low arm trajectory and (4) high barrier with low arm trajectory. Thus, in the first three conditions the arm trajectory was not adapted to the height of the barrier, rendering the action non-rational. Directly after scanning, participants rated the rationality of the videos, and the ratings of a given subject were used directly to model the contrasts for that subject. A random-effects analysis combining these contrast images from all subjects showed bilateral activation of the posterior middle temporal gyrus (pMTG). In contrast to rationality, the height of the barrier was indicated, among other regions, by the activity of the EBA. An additional control experiment, showing random-dot texture patterns animated with exactly the same local motion present in the original videos, confirmed that the pMTG activation was related to rationality and not to low-level differences in the videos. Our pMTG activations were embedded in the STS regions processing the kinematics of observed actions [Jastorff & Orban, 2009].
These results, together with those of Saxe et al. [2004], suggest that rationality is assessed initially by purely visual computations, combining the kinematics of the action with visual elements of the context.
Acknowledgement: Neurocom, EF, FWO

23.438 Role of visual guidance in reaching after right intraparietal sulcus resection
Jared Medina 1 (jared.medina@uphs.upenn.edu), Steven A. Jax 2, Sashank Prasad 1, H. Branch Coslett 1,2; 1 Department of Neurology, University of Pennsylvania, 2 Moss Rehabilitation Research Institute
We report data from a 50-year-old woman (KH) who exhibited gross visually-guided reaching errors shortly after surgical resection of a benign brain tumor restricted to the right posterior intraparietal sulcus. Testing was performed two to three months post-resection. In Experiment 1, we assessed KH's ability to reach to non-foveated targets presented on a touch screen with either the right or left hand. Trials were randomly presented within two conditions: with vision during the entire reach, or without vision (using PLATO occlusion glasses) after reach initiation. KH was significantly less accurate with both hands when reaching for targets presented left but not right of fixation (non-foveal optic ataxia), and only on trials with vision of the limb. These results suggest that the right posterior intraparietal sulcus is involved in online correction of visually-guided reaching. Second, previous studies reported that optic ataxics became more accurate when reaching to remembered target locations compared with visible target locations (e.g. Milner, Paulignan, Dijkerman, Michel, & Jeannerod, 1999; Milner et al., 2001; Himmelbach & Karnath, 2005). In Experiment 2, we repeated the task of Experiment 1 with a five-second delay between stimulus offset and reach initiation. In contrast to these previous reports, KH was significantly less accurate when reaching to targets after the five-second delay (relative to visible targets). We discuss our findings with respect to the neural representations used to guide reaching to visible and remembered target locations.
Acknowledgement: NIH Grant R01: NS048130

23.439 Visual Field Effects of Bimanual Grasping
Ada Le 1 (ada.le@utoronto.ca), Matthias Niemeier 1; 1 University of Toronto
Grasping objects is a fundamental skill required for successful interaction with the environment. Most research on grasping has focused on grasping with one hand, and it has shown that grasping involves a network of fronto-parietal brain regions that controls grasps in a relatively segregated, contralateral fashion. However, one phylogenetically older form of grasping is grasping with two hands. The mechanisms underlying bimanual grasping (BMG) are not well understood, specifically how the brain's two hemispheres integrate their control of the two hands via the corpus callosum. BMG could either involve both hemispheres equally, requiring callosal connections at the level of motor control, or BMG could be predominantly controlled by one hemisphere, requiring callosal connections only at earlier, sensory stages. To test this, we asked participants to grasp objects with both hands while fixating either to the left or right of the objects. The dependent measure was the tilt of the maximum grip aperture (MGA) in space. We predicted the tilt to be forward on the side of the dominant hand. However, tilt should not be influenced by visual field if BMG were controlled by the dominant left hemisphere only.
In contrast, tilt should vary across visual fields if both hemispheres coordinated their BMG control. We found the latter to be true. MGA was less tilted when participants fixated to the left side of the objects than when fixating to the right side. Our results suggest that BMG is not exclusively controlled by the left hemisphere. Further research is required to confirm whether direct input from the right visual field into the left hemisphere, rather than input from the left visual field, results in more coordinated bimanual grasps.

Attention: Spatial selection and modulation
Orchid Ballroom, Boards 440–456
Saturday, May 8, 8:30 - 12:30 pm

23.440 Attention does alter apparent contrast: Evaluating comparative and equality judgments
Katharina Anton-Erxleben 1 (katharina.antonerxleben@nyu.edu), Jared Abrams 1, Marisa Carrasco 1; 1 Psychology and Neural Science, New York University
Introduction: Covert attention not only improves performance in many visual tasks but also modulates the appearance of several low-level visual features (e.g. Carrasco, Ling & Read, 2004). Studies on attention and appearance have assessed subjective appearance using a task contingent upon a comparative judgment between two stimuli. Recently, Schneider and Komlos (2008) questioned the validity of those results because they did not find a significant effect of attention on contrast appearance using a same-different task. They claim that such equality judgments are bias-free whereas comparative judgments are bias-prone, and they propose an alternative interpretation of the previous findings based on a decision bias. However, there is no empirical support for the superiority of the equality procedure. Here, we compare the sensitivity of both paradigms to shifts in perceived contrast. Methods: In four experiments, we measured contrast appearance using either a comparative or an equality judgment. With both paradigms, the same observers judged the contrasts of two simultaneously presented stimuli, while either the contrast of one stimulus was physically incremented (Experiments 1 & 2) or exogenous attention was drawn to it (Experiments 3 & 4). Observers' points of subjective equality (PSEs) were derived from cumulative- or scaled-Gaussian model fits for the comparative and equality judgments, respectively. Results & Conclusions: We demonstrate several methodological limitations of the equality paradigm. For instance, changes in the frequency of 'same' responses make an additional scaling parameter necessary to explain the data. PSE estimates are less accurate: unlike in the comparative judgment, asymmetric criteria for low and high contrasts lead to a consistent underestimation of the PSE relative to veridical contrast. Furthermore, variability across observers is higher in the equality judgment. Nevertheless, both paradigms capture shifts in PSE due to physical (Experiments 1 & 2) and perceived (Experiments 3 & 4) changes in contrast. Regardless of the paradigm used, attention significantly increases apparent contrast.
Acknowledgement: Supported by a Feodor-Lynen Research Fellowship, Alexander-von-Humboldt Foundation, Germany, to KAE, and NIH EY016200 to MC.
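A comparative-judgment PSE of the kind used here is commonly obtained by fitting a cumulative Gaussian to the proportion of "test seen as higher contrast" responses and reading off its mean. The sketch below shows that step on simulated data via a maximum-likelihood fit; it is a generic illustration, not the authors' analysis code.

```python
# Generic PSE estimation for a comparative judgment: fit a cumulative
# Gaussian to "test seen as higher contrast" proportions and take its mean
# as the point of subjective equality. Data here are simulated for illustration.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

test_contrast = np.array([0.10, 0.14, 0.18, 0.22, 0.26, 0.30])
n_trials = 40
true_pse, true_slope = 0.22, 0.04      # simulated observer (a cue would shift the PSE)
rng = np.random.default_rng(7)
n_chose_test = rng.binomial(n_trials, norm.cdf(test_contrast, true_pse, true_slope))

def neg_log_likelihood(params):
    pse, sigma = params
    p = norm.cdf(test_contrast, pse, abs(sigma)).clip(1e-6, 1 - 1e-6)
    return -np.sum(n_chose_test * np.log(p) +
                   (n_trials - n_chose_test) * np.log(1 - p))

fit = minimize(neg_log_likelihood, x0=[0.2, 0.05], method="Nelder-Mead")
print("estimated PSE:", fit.x[0])      # ~0.22 for this simulated observer
```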
23.441 Covert attention affects second-order contrast sensitivity
Antoine Barbot 1 (antoine.barbot@nyu.edu), Michael S. Landy 1,2, Marisa Carrasco 1,2; 1 Department of Psychology, New York University, 2 Center for Neural Science, New York University
Covert spatial attention affects contrast sensitivity for first-order, luminance-defined patterns, increasing sensitivity at the attended location while reducing sensitivity at unattended locations, relative to a neutral-attention condition. Humans are also sensitive to "second-order" patterns, e.g., spatial variations of texture. Second-order sensitivity is typically modeled using a cascade of a linear filter tuned to one of the constituent textures, a nonlinearity (rectification) yielding stronger positive responses to regions containing that texture, and a second spatial filter to enhance texture modulations. Here, we assessed whether covert attention affects sensitivity to second-order, texture-defined contrast. Methods: Stimuli were orientation-defined, second-order, sine-wave gratings. A vertical or horizontal grating was used to modulate between two carrier textures (gratings with higher spatial frequency, oriented at ±45°). Second-order modulator and first-order carrier phases were randomized. Observers judged the orientation (vertical or horizontal) of the modulator. Orientation-discrimination performance was measured as a function of modulator contrast. Stimuli appeared in four isoeccentric locations (5° eccentricity). Exogenous (involuntary) attention was manipulated with a brief peripheral precue adjacent to one of the stimulus locations. Target location was indicated by a response cue after stimulus presentation, yielding three cue conditions: valid (precue matched response cue), invalid (mismatched) and neutral (all stimulus locations precued). Results: Covert attention increased second-order contrast sensitivity at the attended location, while decreasing it at unattended locations, relative to the neutral condition. These effects were more pronounced at high second-order contrasts. The magnitude of improvement was a function of second-order modulator spatial frequency and independent of first-order carrier spatial frequency, and thus could not be explained by increased sensitivity to the carriers. The results indicate that attention improves second-order contrast sensitivity.
Acknowledgement: Support: NIH R01-EY016200 to MC and R01-EY16165 to MSL

23.442 Comparison of effects of spatial attention on stereo and motion discrimination thresholds
Masayuki Sato 1 (msato@env.kitakyu-u.ac.jp), Keiji Uchikawa 2; 1 Department of Information and Media Engineering, University of Kitakyushu, 2 Department of Information Processing, Tokyo Institute of Technology
To examine whether the effects of spatial attention depend on the dimension of visual function, we compared the magnitudes of attentional influences on stereo and motion discrimination thresholds by using a center-periphery dual-task paradigm. The visual threshold was measured in the following four conditions. In the center-only condition (condition 1), a square target of 1° size was presented on a CRT monitor at 2° eccentricity in 8 possible directions (up, down, right, left and the four oblique directions) in the visual field. In the periphery-only condition (condition 2), the target of 5° size (for stereo) or 2° size (for motion) was presented at 10° eccentricity. In the center-priority condition (condition 3), the central and peripheral targets were presented simultaneously while the observer paid more attention to the central target. In the periphery-priority condition (condition 4), more attention was paid to the peripheral target. Conditions 1 and 2 provided the baseline performance. In Experiment 1 we measured stereo thresholds with a staircase method. A random-dot stereogram subtending 29° by 29° was presented for 0.2 s while the observer fixated the central fixation point. Crossed or uncrossed disparity was given to the target, and the position of the target was indicated by a white-line square. The observer's task was to indicate the polarity of depth. In Experiment 2, motion thresholds were measured. Luminance modulation and rightward or leftward motion were given to the target. The observer's task was to indicate the direction of motion. The results showed that stereo thresholds were elevated almost exclusively for the less attended target. The magnitudes of stereo threshold elevation were up to 0.5 log unit, while the effects of spatial attention on motion thresholds were less clear. It appears that stereo discrimination processing needs more spatial attention than motion discrimination processing.
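For readers unfamiliar with the staircase method mentioned in Experiment 1: a common variant is the 2-down/1-up rule, which makes the stimulus harder after two consecutive correct responses and easier after each error, converging near the 70.7%-correct level. The simulation below is illustrative only; the observer model and step size are assumptions, not details from the abstract.

```python
# Minimal 2-down/1-up staircase simulation against a toy observer.
import numpy as np

rng = np.random.default_rng(3)

def p_correct(disparity, threshold=2.0, slope=1.0):
    """Toy psychometric function for a 2-AFC depth-polarity judgment."""
    return 0.5 + 0.5 / (1.0 + np.exp(-(disparity - threshold) / slope))

disparity, step = 8.0, 0.5              # arbitrary units (e.g., arcmin)
n_correct_in_row, reversals, last_dir = 0, [], 0
while len(reversals) < 12:
    correct = rng.random() < p_correct(disparity)
    if correct:
        n_correct_in_row += 1
        if n_correct_in_row == 2:       # two correct -> harder (smaller disparity)
            n_correct_in_row, direction = 0, -1
        else:
            continue
    else:
        n_correct_in_row, direction = 0, +1   # one error -> easier
    if last_dir and direction != last_dir:
        reversals.append(disparity)     # record staircase reversals
    last_dir = direction
    disparity = max(0.1, disparity + direction * step)

print("threshold estimate:", np.mean(reversals[-8:]))  # mean of late reversals
```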
23.442 Comparison of effects of spatial attention on stereo and motion discrimination thresholds
Masayuki Sato 1 (msato@env.kitakyu-u.ac.jp), Keiji Uchikawa 2; 1 Department of Information and Media Engineering, University of Kitakyushu, 2 Department of Information Processing, Tokyo Institute of Technology
In order to examine whether the effects of spatial attention depend on the dimension of visual function, we compared the magnitudes of attentional influences on stereo and motion discrimination thresholds using the center-periphery dual visual task paradigm. The visual threshold was measured in four conditions. In the center-only condition (condition 1), a square target of 1° size was presented on a CRT monitor at 2° eccentricity in 8 possible directions (up, down, right, left, and four oblique directions) in the visual field. In the periphery-only condition (condition 2), a target of 5° size (for stereo) or 2° size (for motion) was presented at 10° eccentricity. In the center-priority condition (condition 3), the central and peripheral targets were presented simultaneously while the observer paid more attention to the central target. In the periphery-priority condition (condition 4), more attention was paid to the peripheral target. Conditions 1 and 2 provided the baseline performance. In Experiment 1 we measured stereo thresholds with a staircase method. A random-dot stereogram subtending 29° by 29° was presented for 0.2 s while the observer fixated the central fixation point. Crossed or uncrossed disparity was given to the target, and the position of the target was indicated by a white-line square. The observer's task was to indicate the polarity of depth. In Experiment 2, motion thresholds were measured. Luminance modulation and rightward or leftward motion were given to the target. The observer's task was to indicate the direction of motion. The results showed that stereo thresholds were elevated almost exclusively for the less attended target. The magnitudes of stereo threshold elevation were up to 0.5 log unit, while the effects of spatial attention on motion thresholds were less clear. It appears that stereo discrimination processing needs more spatial attention than motion discrimination processing.

23.443 Attention modulates S-cone and luminance signals differently in human V1
Jun Wang 1 (jwang@ski.org), Alex Wade 1,2; 1 Smith-Kettlewell Eye Research Institute, 2 Department of Neurology, University of California, San Francisco
INTRODUCTION: Attention modulates the steady-state EEG response to luminance and (L-M)-cone gratings by altering the response amplitude and phase (DiRusso et al., 2001). However, little is known about attentional modulation in short-wave (S-cone) pathways. We used a combination of steady-state, source-imaged EEG and fMRI to extract luminance and S-cone contrast response functions (CRFs) in retinotopic human V1. We then evaluated how attention affected these CRFs as well as the temporal phase of each component. METHODS: 12 subjects viewed a series of on/off flickering Gabor patches (2 cpd, 6 Hz, duration 10 s) defined by either luminance or S-cone contrast. The contrast and color of each patch were chosen at random. S-cone stimuli were presented at 10, 20, 40 or 80% contrast and luminance stimuli at 5, 10, 20 or 40%. Subjects performed one of two attentional tasks: 1) detecting near-threshold contrast decrements; 2) detecting target letters among distracters. Performance was around 75% correct on both tasks. Each combination of contrast, attentional condition and chromaticity was presented 10 times while we collected high-density EEG data. We computed the steady-state, visual evoked current density timecourses in retinotopically-defined V1 using a minimum norm inverse and boundary element models of individual heads. Finally, we extracted the phase and amplitude of the stimulus-driven responses. RESULTS: Luminance and S-cone stimuli generated qualitatively different responses. The amplitude, but not the phase, of luminance responses changed with increasing stimulus contrast, while S-cone-driven responses showed systematic phase changes in addition to amplitude changes. Attending to the contrast of the Gabor changed the amplitude but not the phase of the luminance responses, consistent with a contrast gain change. Attention changed the phase, rather than the amplitude, of S-cone-driven responses. CONCLUSION: Attention affects S-cone and luminance pathways differentially. This may reflect their segregation in the early visual system.
Acknowledgement: NIH Grant EY018157-02 and NSF BCS-0719973
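The final analysis step, extracting the amplitude and phase of the stimulus-driven response, reduces to reading out one Fourier coefficient. The sketch below is a simplified stand-in for the source-imaging pipeline (a synthetic single timecourse, an assumed 500 Hz sampling rate, and the 6 Hz tag frequency from the Methods):

import numpy as np

fs = 500.0                       # sampling rate in Hz (assumed)
f_stim = 6.0                     # stimulus frequency from the abstract
t = np.arange(0, 10, 1 / fs)     # one 10-s trial

# Placeholder for the V1 current-density timecourse: a 6 Hz response in noise
signal = 2.0 * np.sin(2 * np.pi * f_stim * t + 0.8) + np.random.randn(t.size)

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
k = np.argmin(np.abs(freqs - f_stim))      # Fourier bin at the tag frequency

amplitude = 2 * np.abs(spectrum[k]) / t.size
phase = np.angle(spectrum[k])
print(f"SSVEP at {freqs[k]:.1f} Hz: amplitude {amplitude:.2f}, phase {phase:.2f} rad")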


23.444 Attractiveness is leaky (1): Center and Surround
Eiko Shimojo 1,2 (eiko@caltech.edu), Chihiro Saegusa 1,3, Junghyun Park 1, Alexandra Souverneva 1, Shinsuke Shimojo 1,2; 1 CNS / Division of Biology, California Institute of Technology, 2 JST.ERATO Shimojo Implicit Brain Function Project, 3 Institute of Beauty Creation, Kao Corporation
Repeated experience with a stimulus forms a memory that affects preference decisions in the future. We have demonstrated that the N(ovelty)/F(amiliarity) of a surrounding natural scene (NS) affects the attractiveness of a central face (FC), even when the subjects neglected the surround NS (Shimojo et al., VSS '09). To examine further how N and F interact between a center (task-relevant) stimulus and surrounding (task-irrelevant) stimuli, we prepared three new stimulus sets in which a central stimulus (FC, NS, or GF) is surrounded by four others (always in the same object category), and the N/F of the center and the surround were manipulated independently. According to pre-ratings, the baseline attractiveness was matched between the center and the surround. The subjects performed two tasks in separate sessions: (1) to rate the attractiveness of the central stimulus only, or (2) to rate the attractiveness of the whole image. Eye movements were recorded (by EyeLink 2). The eye-tracking results ensured the effectiveness of the task instructions. Even when the subject focused on the attractiveness of the center only (task (1) above), it was implicitly affected by that of the surround, modulated via memory. For example, the attractiveness of a central new GF changed more positively across trials when the surround GF was new as opposed to old, which was, however, not true for the central old GF. More generally, there are significant interactions between the central (new/old) and the peripheral (new/old) conditions. Different factors, including (a) segregation of N/F across object categories (Shimojo et al., VSS '07), (b) modulation of N/F due to task-dependent attention, and (c) implicit contagion of attractiveness from outside of attention, will be considered.
Acknowledgement: Institute of Beauty Creation, Kao Corporation, JST.ERATO Shimojo Implicit Brain Project, Tamagawa-Caltech gCOE

23.445 Enhance or inhibit? Behavioral and ERP effects of distractor memory on attentional competition
Stephen M. Emrich 1 (steve.emrich@utoronto.ca), Yongjin F. Lee 1, Stefan R. Bostan 1, Susanne Ferber 1; 1 Department of Psychology, University of Toronto
According to competition models of attention, two items compete for a cell's response when they both fall within the receptive field of that cell. This competition can be measured behaviorally as an increase in the response time to one target as its proximity to a distractor increases. The competition can also be observed in two event-related potentials (ERPs): the canonical parietal N2pc increases as a target's distance from a distractor increases, whereas the more temporal Ptc is largest when targets and distractors are very close. Thus, these components are thought to reflect different processes of resolving competition, with the N2pc involved in target selection and the Ptc thought to reflect processes of target enhancement. Here, we examine the effect of prior exposure to distractors on behavioral and ERP measures of attentional competition. Participants had to report the orientation of a target, and the distance between the target and a distractor was manipulated. In addition, on half the trials, participants were first given a 200 ms "preview" of the distractor. Thus, participants could use memory to remember and inhibit the distractor in the subsequent search display. The results indicate that although RTs were faster in the preview condition, they were still modulated by the spatial separation between the target and distractor, such that RTs were slowest when the distractor was next to the target. We also observed a reduction in the amplitude of the Ptc in the preview condition relative to the "no preview" condition. This suggests that prior information about the distractor makes target processing easier. These findings indicate that inhibition of the distractor may aid in subsequent target enhancement, although competition between targets and distractors is not entirely resolved.
Acknowledgement: NSERC, CIHR

23.446 What's up with What versus Where?
Bart Farell 1 (bfarell@syr.edu), Julian Fernandez 1; 1 Institute for Sensory Research, Syracuse University
Two early theories of preattentive versus attentive processing—Treisman's feature-integration theory and Julesz's texton theory—are often grouped together, but in an important way they're opposites. They differ on the issue of 'What' versus 'Where'. Preattentive vision in feature-integration theory knows the identity of basic features in a scene but not where they are relative to each other; they're 'free-floating'. Preattentive vision in texton theory knows where texton gradients are, but does not know the identity of the textons whose difference creates the gradient. In both theories, attending to location provides the missing information.
Our purpose is to re-examine empirical support for the assumptions of the texton theory. Our reason for doing this comes from data of Farell and Pelli (Vision Research, 1993). They measured performance for identifying and localizing targets (digits, flickering checks) in single- and multi-scale displays. The data imply that target identification does not depend on prior knowledge of target location.
We presented arrays of 36 Gabor patches. Orientation distinguished target (horizontal or vertical) from background (oblique) patches. We measured performance as a function of target number in two tasks: pairwise discrimination of target number, and discrimination of homogeneous and heterogeneous target orientations. Brief presentation of the test array was followed by a variable interval preceding the masking array.
Results did not conform to texton theory expectations. Homogeneous vs. heterogeneous discrimination, which depends on target identity, was independent of target number. Number discrimination, which requires only target locations, fell as target number increased. And of the two, performance was lower for number discrimination, which also improved less as ISI increased. These data are the reverse of the theory's predictions and of prior results. Thus, in tasks directly testing texton theory assumptions, target identification does not depend on prior localization.

23.447 Change of object structure as a result of shifts of spatial attention
Yangqing Xu 1 (xuy@u.northwestern.edu), Steven Franconeri 1; 1 Northwestern University
The perception of ambiguous figures (e.g., the duck-rabbit) can be influenced by cueing spatial attention to a part of the image associated more closely with one interpretation (e.g., the mouth of the duck or the rabbit) (Tsal & Kolbert, 1985). The distribution of spatial attention can also affect the perception of ambiguous figures that change only in structure but not in meaning (Slotnick & Yantis, 2005). We asked participants to report their perception of this type of figure (similar to a Necker cube) after briefly cueing one side. Participants were more likely to perceive the cued side of the figure as the closer side. In a second study, participants viewed the same ambiguous figure for a series of 8-second trials, and reported each perceptual switch with key-press responses. Using an ERP correlate of the distribution of spatial attention (N2pc), we found that more attention was directed toward the perceived closer side 500 ms before the report of a switch, and toward the alternative side 500 ms after the report. The distribution of spatial attention appears to affect the perceived structure of a constant visual stimulus.

23.448 Do the eyes really have it? Ocular and visuomanual judgments of spatial extent
Marc Hurwitz 1 (marc.hurwitz@gmail.com), Derick Valadao 1, James Danckert 1; 1 Psychology, University of Waterloo
Models of line bisection implicitly consider distance to be the metric by which spatial extent is processed. For example, if a 20 cm line is presented visually, the brain infers or computes its length from the visual angle subtended. An alternative hypothesis would suggest that length (D) is determined from the product of velocity (V) over time (T). We refer to this as the DVT model, which reflects an 'indirect' computation of spatial extent because it does not rely on a direct measurement of distance (D). To investigate the DVT model in a healthy population, we conducted a series of experiments which measured pointing and ocular judgments of spatial extent using the line bisection task. We manipulated line length, position, and the direction of ocular scanning prior to bisection. Scanning led to different biases in bisection than did free viewing, suggesting that the mechanism involved in scanning introduced additional perceptual biases of spatial extent. Pointing behavior showed a robust influence of scan direction (i.e., left-to-right scanning created a leftward bias relative to right-to-left scanning), whereas the speed of scanning was inversely related to ocular fixation biases (i.e., slower speeds induced exaggerated biases). We were unable to show a strong effect of timing on bisection behavior, perhaps because of the probe(s) used. Rather, to our surprise, we found that ocular behavior, presumably operating in a gaze-centered reference frame, and pointing behavior, operating in a hand-centered reference frame, produced distinct patterns of bisection. In general, pointing behavior generated systematic errors that were impervious to manipulations such as length, line position, or speed of scanning, whereas ocular behavior was far more variable and more susceptible to these manipulations. This suggests that judgments of spatial extent can be made independently for the hand and eye.
Acknowledgement: NSERC
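The DVT idea can be stated in one line: rather than reading distance out directly, the system integrates scan velocity over scan time, D = ∫V dt ≈ V·T. A small numeric sketch with invented numbers (our illustration, not the authors' model):

import numpy as np

dt = 0.001                                        # 1 kHz sampling (assumed)
velocity = np.full(400, 25.0)                     # 25 deg/s scan lasting 400 ms
velocity += 0.5 * np.random.randn(velocity.size)  # measurement noise

extent = np.sum(velocity) * dt                    # indirect (DVT) estimate of extent
print(f"Estimated extent: {extent:.1f} deg (veridical: 10 deg)")

# A misestimate of scan speed or elapsed time biases the judged extent,
# which is the kind of bias the scanning manipulations are designed to probe
print(f"With a 10% timing underestimate: {extent * 0.9:.1f} deg")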
23.449 Mind wandering preferentially attenuates sensory processing in the left visual field
Julia Kam 1 (kamjulia@gmail.com), Camila Fujiwara 1, Todd Handy 1; 1 Department of Psychology, University of British Columbia
An emerging theory in visual attention is that it operates in parallel at two distinct timescales – a shorter one associated with moment-to-moment orienting of selective visual spatial attention, and a longer one (>10 s) associated with more global aspects of attention-to-task. Our question is whether this slower fluctuation in task-related attention biases the same mechanism of early attentional selection as selective attention. Given that past studies have consistently revealed visual field asymmetries in selective visual attention, the objective of the present study was to determine whether sensory processing in the two visual fields is differentially modulated by whether or not one is paying attention to the current task. Participants performed a simple target detection task at fixation while event-related potentials (ERPs) to task-irrelevant visual probes presented in the left and right visual fields were recorded. At random intervals, participants were asked to report whether they were "on-task" or "mind wandering". Our results demonstrated that sensory attenuation during periods of "mind wandering" relative to "on-task", as measured by the visual P1 ERP component, was observed only in the left visual field. In contrast, the magnitude of sensory responses in the right visual field was insensitive to the two different attentional states. Taken together, our results point to a visual field difference in task-related attention, one that mirrors the asymmetry found in selective visual attention.

23.450 On the relationship between spatial and non-spatial attention
Nicola Corradi 1 (nicola.corradi@unipd.it), Milena Ruffino 2, Simone Gori 1, Andrea Facoetti 1,2; 1 General Psychology Department, University of Padua, Italy, 2 Unità di Neuropsicologia dello Sviluppo, Istituto Scientifico "E. Medea" di Bosisio Parini, Lecco, Italy
Spatial attention orienting is known to enhance the signal at the attended location as well as to exclude flanking noise. Moreover, spatial attention orienting is able to modulate temporal processing, as suggested by the line motion illusion. Non-spatial attention is defined as the engagement of processing resources on the currently relevant object (measured by attentional masking) and the disengagement of processing resources from the previously relevant object (measured by the attentional blink). In the present study we investigated the modulation of attentional masking by spatial attention orienting. Spatial attention was manipulated by an exogenous (i.e., peripheral and non-informative) cue, while attentional masking was measured as the impaired identification of the first of two rapidly sequential objects. Results showed that, at the attended location, non-spatial attention engagement on an object seems to occur faster (i.e., reduced attentional masking) than at the unattended location. We suggest that spatial attention orienting is able to enhance the ability to rapidly engage non-spatial attention over time.

23.451 Linguistic control of visual attention: Semantics constrain the spatial distribution of attention
Gregory Davis 1 (gdavis2@nd.edu), Bradley Gibson 1; 1 Department of Psychology, University of Notre Dame
Previous research suggests that spatial reference frames mediate linguistically-driven shifts of visual attention (Gibson & Kingstone, 2006). One consequence of reference frame usage is a selection cost when attention is directed along the left/right axis but not the above/below axis (Gibson, Scheutz, & Davis, 2009). This cost is reflected in an "Opposite Compatibility Effect" (OCE) in which slower RTs are observed when distractors located along the left/right axis opposite the cued target are response-incompatible relative to when they are response-compatible. There are two possible explanations of the OCE. According to the "differential validity hypothesis," the OCE arises because the spatial referents of "left" and "right" are less consistent than the spatial referents of "above" and "below" across discourse contexts. In this view, RTs are slower in the incompatible condition than in the compatible condition because attention is distributed more broadly in response to "left" and "right." In contrast, according to the "differential processing hypothesis," the OCE arises because observers are less likely to differentiate between the left and right locations than the above and below locations.
In this view, RTs are slower in the incompatible condition than in the compatible condition because this is the only condition in which it is necessary to differentiate between the left and right locations (which takes additional time). The present experiments tested these two accounts by creating three different context conditions varying the necessity of differentiating between the two endpoints: the high differentiation condition (20% compatible/80% incompatible); the medium differentiation condition (50% compatible/50% incompatible); and the low differentiation condition (80% compatible/20% incompatible). Consistent with the differential validity hypothesis, the results showed that the magnitude of the OCE remained stable regardless of context.

23.452 The effect of attention on multistable motion perception: Does it involve perceived depth?
Hua-Chun Sun 1 (96752002@nccu.edu.tw), Shwu-Lih Huang 1; 1 Department of Psychology, National Chengchi University
The diamond stimulus, introduced by Lorenceau and Shiffrar (1992), contains four occluders and four moving lines that can be perceived as coherent or separate motion. Here we used it to investigate whether attention alone (excluding the effect of fixation) can bias multistable perception, and whether this effect of attention arises because attended areas look nearer. Our previous research found that coherent motion perception increased when the occluders were in front of the moving lines, and decreased when the occluders were behind. Therefore, we predicted that coherent motion should be perceived more when attending to the occluders rather than to the moving lines if attended areas look nearer. Observers' intention was controlled in this study in order to better reveal the effect of attention. In Experiment 1, we manipulated attention (attending to occluders or moving lines) as an independent variable. Results showed that the percentage of time coherent motion was perceived during one-minute trials was significantly higher in the attending-to-occluders condition than in the attending-to-moving-lines condition, consistent with our prediction. In Experiment 2, one more variable was added: the binocular disparity of the moving lines, which was manipulated at four levels behind the occluders. We predicted that the effect of attention should decrease with increasing depth, because an attentional effect on depth would be minor under large binocular disparity. The results were consistent with our prediction. In Experiment 3, we added a cast shadow to the occluders as a monocular depth cue to enhance the perceived depth of occluders in front of the moving lines, and the effect of attention was eliminated compared with the normal condition. These results are all consistent with the idea that attention alone can bias multistable perception by making attended areas look nearer.

23.453 Distractor filtering in media multitaskers
Matthew S. Cain 1 (matthew.s.cain@duke.edu), Stephen R. Mitroff 1; 1 Center for Cognitive Neuroscience, Department of Psychology & Neuroscience, Duke University
Despite the near-ubiquity of visual search, performance can differ wildly from person to person, especially under distracting conditions. Recent research suggests that extensive exposure to certain everyday activities (e.g., playing action video games, speaking a second language, media multitasking) may be able to enhance search performance.
Here we explored individual differences in the frequency of media multitasking (e.g., watching TV while reading, or playing video games while talking on the phone) to investigate whether this common behavior can impact the ability to filter out distractions during visual search. Participants searched simple arrays of objects for a shape singleton (i.e., a circle among squares). Half the arrays also contained a color singleton (i.e., a red shape among green shapes). Each participant completed two conditions: in the 'Never' blocks, participants were instructed that the color singleton distractor would never be the target shape singleton, and in the 'Sometimes' blocks they were instructed that it could sometimes be the target. Previous work has shown that participants can successfully use this instructional information to improve performance in Never blocks by exercising top-down control to filter out irrelevant singletons. Here we found that overall (collapsed across blocks), media multitaskers responded more quickly than non-multitaskers. z-Transformed results revealed specific ways participants differed: in the Never blocks, multitaskers performed relatively worse than non-multitaskers when distractors were present, but both groups showed comparable distractor-related slowdowns in the Sometimes blocks, when top-down distractor filtering was not necessary. These results suggest that media multitaskers did not use the information about the distractor's irrelevance in the Never blocks to filter it out to the same degree as non-multitaskers. This is consistent with the idea that those who routinely consume multiple media in daily life demonstrate poorer filtering of irrelevant information in a laboratory setting.
Acknowledgement: Army Research Office, Institute for Homeland Security Solutions
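The z-transformation mentioned in the results is a standard way to compare effects across participants who differ in baseline speed: each participant's RTs are standardized against their own mean and SD before group comparison. A minimal sketch with made-up numbers (not the authors' analysis code):

import numpy as np

# Hypothetical RTs (ms) for one participant in the Never blocks
rts = {
    "distractor_present": np.array([612.0, 654.0, 603.0, 640.0]),
    "distractor_absent": np.array([571.0, 590.0, 566.0, 584.0]),
}

# Standardize against the participant's overall RT distribution
all_rts = np.concatenate(list(rts.values()))
mu, sd = all_rts.mean(), all_rts.std(ddof=1)
z = {cond: (vals - mu) / sd for cond, vals in rts.items()}

# Distractor-related slowdown expressed in within-subject SD units
cost = z["distractor_present"].mean() - z["distractor_absent"].mean()
print(f"Distractor cost: {cost:.2f} z-units")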


23.454 The sensory component of inhibition of return
David Souto 1,2 (d.souto@ucl.ac.uk), Sabine Born 2, Dirk Kerzel 2; 1 Cognitive, Perceptual and Brain Sciences, University College London, 2 Faculté de psychologie et des sciences de l'éducation, University of Geneva
Inhibition of return (IOR), the slowing of reaction times to a target presented at a location that was cued more than some 300 ms earlier, is usually attributed to attentional or oculomotor mechanisms. The sensory influence of the cue (i.e., masking) is ignored, as such sensory effects are believed to occur within a much shorter time range. The attentional and oculomotor accounts of IOR would predict independence of IOR from similarity between the cue and the target, as long as the cue is equally effective in drawing attention and the target is equally detectable. We asked whether saccadic reaction times (SRTs) to a pre-cued location are sensitive to a difference in orientation between the cue and the saccade target, in particular for stimulus timings that are common in IOR paradigms. We first tested the influence of the brief bilateral presentation of oriented stimuli (Gaussian-windowed sine-wave gratings) on SRTs to a unilateral grating presented at a stimulus-onset asynchrony (SOA) of 100, 250, 450 or 650 ms. The target was similar to the cue(s), but of a lower contrast and rotated by 0°, 45° or 90°. SRTs showed a strong dependence on rotation at the shortest SOAs, as can be expected from masking of the target by the cue. More interestingly, some dependence on rotation was still found with the 450- and 650-ms SOAs. In a subsequent experiment we tested the effect of rotation on IOR (uncued location – cued location SRT) by presenting a single cue (the proper IOR paradigm), with an SOA of 650 ms. We observed that IOR was larger when the orientation of the cue was the same as the orientation of the target. Our results indicate that inhibited visual processing of a repeated stimulus can contribute to IOR independently of a spatial attention bias.
Acknowledgement: Swiss National Science Foundation Project PBGEP1-125961
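The IOR measure in the second experiment is a simple difference between saccadic reaction times at cued and uncued locations, computed per cue-target rotation. A sketch with invented numbers (sign conventions vary across papers; in this illustration positive values mean saccades to the previously cued location are slower):

# Hypothetical median SRTs (ms) at the 650-ms SOA
srt = {
    ("cued", 0): 232, ("uncued", 0): 205,    # cue and target share orientation
    ("cued", 90): 219, ("uncued", 90): 207,  # target rotated 90 deg from cue
}

for rotation in (0, 90):
    # Slowing at the previously cued location relative to the uncued location
    ior = srt[("cued", rotation)] - srt[("uncued", rotation)]
    print(f"IOR at {rotation} deg rotation: {ior} ms")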
23.455 Spatial properties of the Attentional Repulsion Effect
Anna A. Kosovicheva 1,2 (anna.kosov@gmail.com), Francesca C. Fortenbaugh 3, Lynn C. Robertson 3,4; 1 School of Optometry, UC Berkeley, 2 Helen Wills Neuroscience Institute, UC Berkeley, 3 Department of Psychology, UC Berkeley, 4 Veterans Administration Medical Center, Martinez
Reliable effects of attention on reaction time and accuracy measures have been well documented, yet little is known about how attention affects one's perception of space. Utilizing the attentional repulsion paradigm developed by Suzuki and Cavanagh (1997), the present study examined the effects of transient involuntary spatial attention on the perception of target position. The attentional repulsion effect (ARE) refers to the illusory displacement of two vernier lines away from the focus of attention. In the first experiment, brief peripheral cues captured observers' attention prior to the presentation of a vernier. Responses indicated that the perceived vernier offset was away from the cues, replicating the ARE. Moreover, the magnitude of the ARE depended on cue-target distance, indicating that the effects of attention on perceived target location are not uniform and vary systematically as a function of the proximity of the target to the focus of attention. Experiment 2 was designed to determine whether repulsion occurs away from the cue's center of mass or from the cue contour. Perceived repulsion always occurred away from the cue's center of mass, regardless of the arrangement of the contours relative to the vernier lines. However, presenting the vernier within the contours of the cue reduced the magnitude of the repulsion effect, suggesting that the contextual relationship between the cue and the target also plays a role in modulating the effect. Finally, Experiment 3 demonstrated that increasing the size of the cue increases the magnitude of the repulsion effect when cue-target distance is held constant. Together, these experiments suggest that the magnitude of the ARE depends jointly on the center of the cue's mass and on whether the target is bounded by the cue contour, though attention to the cue's center of mass is largely responsible for producing the repulsion effect.
Acknowledgement: This research was supported by a grant from NIH (EY016975) to L.C.R. and by the Undergraduate Swan Research Award (Psychology Dept, UC Berkeley) to A.A.K.

23.456 Individual differences in attentional orienting predict performance outcomes during learning of a new athletic skill
Ryan Kasper 1 (kasper@psych.ucsb.edu), James Elliott 1, Barry Giesbrecht 1; 1 Department of Psychology, University of California Santa Barbara
Dynamic performance of many athletic motor skills requires rapid orienting of visual attention. For example, previous studies have shown that young, high skill-level hockey players have smaller response time differences between validly and invalidly cued locations in standard spatial cueing tasks compared to low skill-level hockey players (Enns & Richards, 1997). One interpretation of these results is that high-skill individuals are more efficient at deploying spatial attention over multiple locations. However, it is unclear whether this effect was brought about by experience in performing the skill, or whether individual differences in attentional orienting facilitated development of the skill itself. Here, we tested whether individual differences in attentional orienting predicted success during learning of a new athletic skill. Eighteen novices with no previous golf experience learned to putt a golf ball to targets placed at variable distances. Individual differences in attentional orienting were measured using a variant of the attention network task (Fan et al., 2002) in which a predictive number cue presented at fixation indicated the likely location of a target. The RT difference between invalid and valid trials was used to index volitional orienting. A median split of the orienting scores divided the group into those with small orienting scores and those with large orienting scores. The results indicated that individuals with lower orienting scores had significantly higher accuracy during the early stages of learning in the putting task (p


preceding stimulus should be larger when the combination of color and orientation in the preceding stimulus was the same as that in one of the rivalrous stimuli, even when the stimulus color was not relevant to binocular rivalry. In the experiment, 2 c/deg square-wave gratings were used as stimuli; their mean luminance was 4 cd/m2 and their Michelson contrast was 0.60. The duration of the preceding stimulus was 1 sec and that of the rivalrous stimuli was 200 msec. In addition to the orientation-rivalry condition described above, a color-rivalry condition was also designed using a similar principle. Results under the orientation-rivalry condition showed no effect of the color-orientation combination; a strong modulative effect of the preceding stimulus was found, but its magnitude did not change depending upon whether the preceding stimulus had the same color as the rivalrous stimuli. Under the color-rivalry condition, however, a small difference due to the color-orientation combination was found. These results suggest that the dominance/suppression of rivalrous chromatic gratings is mainly determined by visual processes responding separately to color and orientation. A small contribution of color-orientation-selective processes could be found only under limited conditions.
Acknowledgement: Supported by JSPS grant

23.503 Dominance of Sharp over Blurred Image Features in Interocular Grouping during "Patchwork" Binocular Rivalry
Yu-Chin Chai 1 (sunnia.chai@gmail.com), Thomas Papathomas 1,2,3, Xiaohua Zhuang 1,3; 1 Laboratory of Vision Research, Center for Cognitive Science, Rutgers University, 2 Department of Biomedical Engineering, Rutgers University, 3 Department of Psychology, Rutgers University
Purpose. Previous studies have reported the dominance of a sharp image when it competes during binocular rivalry (BR) with a blurred (low-pass filtered) version of itself [e.g., Chai, Papathomas, Zhuang, Alais, VSS 2009]. In the current study we continued our efforts to mimic "monovision" correction (two drastically different focal distances for near and far in the two eyes). We investigated interocular grouping of sharp and blurred components of an image under "patchwork" BR [e.g., Kovacs, Papathomas, Feher, Yang, PNAS 1996], as is expected under monovision correction. Methods. Each eye had a 2x3 checkerboard pattern of alternating sharp and blurred patches, with the complementary pattern in the other eye. Four types of conditions were compared: (1) grayscale images with steady fixation; (2) grayscale images with the fixation mark changing position every 2 s; (3) color condition: sharp/blurred patches were red/green or vice versa; (4) control condition for (3): color patchworks of either all-sharp or all-blurred images. The task was to press different buttons to report dominance of sharp/blurred features in (1) and (2), and of red/green in (3) and (4). Results. Strong interocular grouping was observed in all four conditions: 66%, 67%, 82%, 61% in conditions 1, 2, 3, 4. Predominance of sharp patches was always larger than that of blurred patches (45%, 41%, 53% versus 21%, 26%, 30% in conditions 1, 2, 3, respectively). Color enhanced the interocular grouping for both sharp and blurred features, as compared with the grayscale conditions. Results were not significantly different between conditions 1 and 2. Conclusions. These results corroborate and extend earlier findings on the dominance of sharp features. They help explain how monovision can function even when the gaze changes, as indicated by condition 2. The well-documented ability of color to support interocular grouping (condition 4) explains the enhanced interocular grouping of condition 3 over 1.
Acknowledgement: Laboratory of Vision Research, Center for Cognitive Science IGERT on Perceptual Science

23.504 MIB and target saliency: how many salient features are necessary for the target to disappear?
Dina Devyatko 1 (tsukit86@gmail.com); 1 Department of Psychology, Lomonosov Moscow State University
Attentional competition between object representations can account for the disappearances of salient targets superimposed upon a moving mask – a phenomenon known as motion-induced blindness (MIB; Bonneh et al., 2001). It has been shown that differences in luminance contrast and in shape between a target and a mask lead to an increase in disappearance duration in MIB (Bonneh et al., 2001; Hsu et al., 2004). But how many feature differences between the target and the mask components would be sufficient to trigger such competition? We used a target which differed from the mask components in either one or two features (color and/or motion). Participants reported disappearances in all three experimental conditions, but the number of disappearances was significantly higher in the condition with two distinguishing features than in the conditions with just one distinguishing feature (either color, t(24)=4.282, p≤.001, or motion, t(24)=7.352, p≤.001). The number of disappearances was also higher in the condition where the moving target and the mask differed in color as compared to the condition with a static blue target and a moving blue mask (t(24)=4.481, p≤.001). However, differences in the duration of disappearances reached significance only for the condition with two distinguishing target features and for the "color" condition (t(24)=4.709, p≤.001). Thus, just one salient feature is enough to trigger competition between object representations. But the more the target and mask representations differ in terms of visual features, the more the target disappears.
Acknowledgement: Supported by Russian Foundation for Basic Research Grant #08-06-00171a

23.505 Crowding occurs before or at the site of binocular rivalry
Sangrae Kim 1 (psyche@psycheview.com), Sang Chul Chong 1,2; 1 Graduate Program in Cognitive Science, Yonsei University, 2 Department of Psychology, Yonsei University
As ways of studying the nature of consciousness, both crowding and binocular rivalry are commonly used to render visual stimuli invisible. Crowding is the interference with object identification in peripheral vision when an object is flanked by other objects, and binocular rivalry is the perceptual alternation created by presenting two different objects separately to each eye. We investigated whether these two phenomena interact with each other in orientation discrimination. The task in our experiments was to discriminate the orientation of the target. In Experiment 1, we measured orientation discrimination thresholds when the target underwent both binocular rivalry and crowding. Either flankers (surrounding the target to produce crowding) or a competing grating (presented at the same location as the target in the opposite eye to evoke rivalry) increased the thresholds. When both the flankers and the competing grating were present at the same time, the thresholds increased more than the sum of the two effects alone. In Experiment 2, we used flankers undergoing rivalry to examine the effect of rivalry on crowding. The crowding effect with flankers undergoing rivalry was closer to the effect of collinear flankers than to that of orthogonal flankers. These results suggest that rivalry does not influence the effectiveness of flankers in crowding, and that crowding may occur before or at the site of rivalry. In Experiment 3, we measured mean phase durations to examine the effect of crowding on binocular rivalry. When a grating collinear with the flankers was visible, suppression from the flankers changed the current percept. However, when a grating orthogonal to the flankers was visible, suppression from the flankers helped to maintain the current percept. In conclusion, our results suggest that crowding interacts with binocular rivalry and that crowding occurs before or at the site of rivalry.
Acknowledgement: This research was supported by the Converging Research Center Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2009-0093901)

23.506 Perceptual memory increases amplitude of neural response in sensory brain regions
Maartje Cathelijne de Jong 1 (m.c.dejong@uu.nl), Zoe Kourtzi 2, Raymond van Ee 1; 1 Dept. Physics of Man, Helmholtz Institute, Utrecht University, the Netherlands, 2 Cognitive Neuroimaging Lab, University of Birmingham, United Kingdom
The way the brain interprets visual input is strongly dependent on visual experience. This becomes strikingly evident when the same ambiguous visual input is seen repeatedly: whereas continuous viewing of an ambiguous stimulus leads to ongoing abrupt changes in visual awareness, intermittent viewing results in the same perceptual interpretation over and over again. Memory for a certain perceptual interpretation thus boosts this interpretation during later encounters with the stimulus. This is contrary to the effect of adaptation, which inhibits previously seen percepts. As yet there is very limited evidence regarding the neural mechanisms underlying perceptual memory. To investigate these mechanisms we measured fMRI while subjects viewed an ambiguously rotating globe with intervening blank periods. As expected, the intermittent presentation paradigm resulted in long sequences of reoccurrence of the same percept (either clockwise or counter-clockwise rotation). The build-up of perceptual memory during a sequence of stabilized perception was accompanied by an increase in the amplitude of the BOLD response in motion-sensitive brain regions. This increase in amplitude cannot be explained as adaptation to the stimulus, because it is well established that adaptation results in a decreased amplitude of the BOLD response. We thus conclude that perceptual memory is represented by an increase in the amplitude of response in those brain regions that represent the memorized percept.

23.507 Differentiating the contributions of surface feature and boundary contour strengths in binocular rivalry
Xuan Li 1 (alima_lsy@yahoo.com.cn), Yong G Su 1,2, Teng Leng Ooi 2, Zijiang J He 1; 1 Department of Psychological and Brain Sciences, University of Louisville, 2 Department of Basic Sciences, Pennsylvania College of Optometry at Salus University
Binocular rivalry (BR) is typically stimulated with a pair of grating half-images with orthogonal orientations. These half-images can be characterized by their surface boundary contour (outline) and surface feature (interior texture/grating) properties. We have shown that when the interior surface feature (grating contrast) is constant, BR predominance of seeing a grating half-image increases with its boundary contour strength (Ooi & He, Perception 2006; Xu et al., Vision Research 2009). Here, we revealed the sole contribution of surface feature to BR. We used 1-deg grating discs (5 cpd, 35 cd/m2). One disc (e.g., horizontal) was fixed at 30% contrast while the other (vertical) assumed one of three contrast levels (30, 50, 85%). Two conditions were tested. (1) Disc-only condition: the grating discs were displayed against a gray background, so changing the grating disc contrast varied both its boundary contour and surface feature strengths. (2) Disc-plus-background condition: the grating discs were displayed against a 135-deg grating background. The grating background contrast in one half-image was the same as the grating disc contrast in the fellow eye. This ensured that changing the contrast of one grating disc varied only the surface feature strength of that grating disc, while equalizing the boundary contour strength of the right and left half-images' grating discs. Observers tracked their BR percepts of the grating discs for 30 sec in each trial. We found that increasing the contrast of one grating disc increased its predominance and dominance duration in the disc-plus-background condition, indicating the sole impact of surface feature strength. The increases were, however, smaller than in the disc-only condition, where both boundary contour and surface feature strengths contributed to BR. Thus, the present findings, along with our previous results, reveal that surface boundary contour and surface feature strengths are separate factors influencing BR.
Acknowledgement: NIH (R01 EY015804)

23.508 Bistable percepts in the brain: fMRI contrasts monocular pattern rivalry and binocular rivalry
Athena Buckthought 1 (athena.buckthought@mail.mcgill.ca), Samuel Jessula 1, Janine D. Mendola 1; 1 McGill Vision Research Unit, Department of Ophthalmology, McGill University, Montreal, Quebec, Canada
INTRODUCTION: In monocular rivalry, the observer experiences alternations between different perceptual representations of the same image, in which different components alternate in visibility, but without complete suppression as in binocular rivalry. Surprisingly, no previous fMRI studies have directly compared binocular and monocular rivalry.
METHODS: Here we used fMRI at 3T to image activity in visual cortex while subjects perceived either monocular or binocular rivalry. The stimulus patterns were colored gratings (left & right oblique orientations) or face/house composites.
The stimulus components (red or green) were either presented dichoptically for binocular rivalry or as a (monoptic) composite image with both components shown to each eye. Linear polarizers were used for dichoptic presentation. Six subjects performed either a binocular or monocular rivalry report task with the face/house or grating stimuli, indicating alternating percepts with button presses to measure alternation rates. The luminance contrasts were 9%, 18% or 36%.
RESULTS: The cortical activation for monocular rivalry included the occipital pole, ventral temporal cortex, and superior parietal cortex, while the areas for binocular rivalry also prominently included lateral occipital regions, including MT+, as well as inferior parietal cortex near the TPJ. Both binocular rivalry and monocular rivalry showed a U-shaped function of activation as a function of contrast, i.e., higher activity in most areas at 9% and 36%. The increase in activation at higher contrast can be explained by an increase in neuronal response gain reflected in faster alternation rates, while that at low contrast can be explained by disinhibition (Wilson, 2007).
CONCLUSIONS: Overall, our results call into question models that distinguish binocular from monocular rivalry solely on the basis of V1 interocular competition. Rather, our results indicate that binocular rivalry invokes binocular competition and suppression at higher-tier levels, whereas competition in monocular rivalry is relatively focused in early visual areas, with less inhibition.
Acknowledgement: Supported by NSERC and NIH.

23.509 Zero correlation is not a hallmark of perceptual bistability: Variation in percept duration is driven by noisy neuronal adaptation
Raymond van Ee 1 (r.vanee@phys.uu.nl); 1 Helmholtz Inst, Utrecht
When the sensory system is subjected to ambiguous input, perception involuntarily alternates between alternative interpretations in a seemingly random fashion. Although it is clear that neuronal noise (on a microsecond time scale) must play a role in the dynamics of perceptual alternations, the neural mechanism for the generation of randomness at the slow time scale of percept durations (multiple seconds) is unresolved. Here, significant non-zero serial correlations are reported in series of visual percept durations (for the first time accounting for duration impurities caused by reaction time, drift, and incomplete percepts). This refutes a general belief that a Poisson process governs perceptual alternations and that zero serial correlation is a hallmark of binocular rivalry. Comparing different stimuli, we found that serial correlations for perceptual rivalry using structure-from-motion ambiguity were smaller than for binocular rivalry using orthogonal gratings. After considering a spectrum of computational models, it is concluded that noise in the adaptation of percept-related neurons causes the serial correlations. This work bridges, in a physiologically plausible way, widely appreciated deterministic modelling and the randomness in experimental observations of visual rivalry.
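The serial-correlation statistic at the center of this abstract takes only a few lines to compute. The sketch below (our illustration, not the author's analysis code) returns the lag-1 serial correlation of a series of percept durations; a Poisson-like renewal process would give values near zero.

import numpy as np

def lag1_serial_correlation(durations):
    # Correlate each percept duration with the one that follows it
    d = np.asarray(durations, dtype=float)
    return np.corrcoef(d[:-1], d[1:])[0, 1]

# Hypothetical percept durations (s) from one observer
durations = [2.1, 2.4, 3.8, 3.5, 1.9, 2.2, 4.1, 3.9, 2.6, 2.8]
print(f"Lag-1 serial correlation: {lag1_serial_correlation(durations):.2f}")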
23.510 The Brain changing its Mind: bistable perception and voluntary control investigated with frontoparietal TMS
T. A. de Graaf 1,2 (tom.degraaf@maastrichtuniversity.nl), M. C. de Jong 3, R. van Ee 3, A. T. Sack 1,2; 1 Dept. of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands, 2 Maastricht Brain Imaging Center, Maastricht, the Netherlands, 3 Dept. Physics of Man, Helmholtz Inst, Utrecht University, Utrecht, the Netherlands
In bistable vision, the brain is unsure which of two percepts accurately represents the outside world. As a result, the conscious percept switches back and forth continuously. Neuroimaging studies have shown that frontoparietal regions are activated during this process. The role of these regions constitutes one of the main outstanding questions in visual awareness research. Some have suggested that frontoparietal regions causally induce the perceptual switches. Alternatively, perceptual switches might induce frontoparietal activations, which would speak against a top-down causal influence from frontoparietal regions. During voluntary control, when people 'will' the percept to switch more often, top-down modulation should in any case occur. Subjects are able to voluntarily increase the switch rate when watching, e.g., a bistable sphere-from-motion (SFM) stimulus. Are the same regions and/or mechanisms involved in voluntary control and passive bistable vision? Are frontoparietal regions causally relevant? In the current study we addressed these issues by directly interfering with frontal and parietal cortex activity, during passive bistable vision and during voluntary control, in ten participants. Offline rTMS lasting 5 minutes (1 Hz stimulation at 110% of individual motor threshold) was found to significantly reduce the amount of voluntary control over ambiguous SFM vision for 2 minutes after stimulation, as measured by perceptual switch rate. This was true for both parietal (p


One way to study the neural basis of perception is to measure the effects of the different perceptual states induced by bi-stable visual stimuli on brain responses. Previously, we reported that the steady-state visual evoked potential (SSVEP) to a bi-stable, counter-phase flickering 8-arm radial pattern was modulated according to perceptual state, yielding higher power during perception of rotational apparent motion than during flicker perception. The current study investigated whether this dependence of the SSVEP on the perceptual interpretation of an ambiguous display can be generalized to other bi-stable phenomena. We used a plaid pattern undergoing apparent motion in steps of 1/8 cycle every 50 msec, generating 20 Hz local flicker. Similarly to continuously moving plaids, the stimulus appeared either to move as a whole in one direction (coherency) or as two gratings sliding over each other (transparency). The angle between the superimposed gratings was adjusted for each observer so that the proportion of the coherency percept was approximately 50%. During each 1-minute trial, observers indicated their perception continuously (coherency or transparency) by holding one of two buttons while their EEGs were recorded with 64 scalp electrodes. The coherency percept enhanced the SSVEP response at 20 Hz in the posterior (occipital) scalp region compared to the transparency percept. The results of the current study and our previous findings suggest that perceiving coherent global motion (global plaid motion or global rotation) in a dynamic bi-stable display could cause an increased phase coherence of stimulus-driven neural activity.
Acknowledgement: R01-EY014030

23.512 Percept-related changes found in the pupillary constrictions to physically-identical, dichoptic luminance changes
Eiji Kimura 1 (kimura@L.chiba-u.ac.jp), Satoru Abe 2,3, Ken Goryo 4; 1 Department of Psychology, Faculty of Letters, Chiba University, 2 Graduate School of Advanced Integration Science, Chiba University, 3 Research Fellow of the Japan Society for the Promotion of Science, 4 Faculty of Human Development and Education, Kyoto Women's University
[Purpose] By taking advantage of binocular rivalry, different perceptual changes can be produced with a physically identical stimulus sequence. When different brightness changes were produced using this technique, the pupillary response exhibited percept-related changes (Kimura et al., ECVP 2009). This study investigated the generality of this finding using various stimulus sequences of white and black disks. [Methods] At the start of each trial, the observer dichoptically viewed white (8 cd/m2) and black (2 cd/m2) disks presented on a gray background (4 cd/m2) and pressed a key when one of the disks became exclusively dominant. The key press initiated a stimulus change, after a short break, to one of the following: the same dichoptic white and black disks (WB), binocular white disks (WW), binocular black disks (BB), or dichoptic black and white disks (eye switching, BW). For example, when the initial dominant percept was black, the WW condition produced a black-to-white perceptual change. However, the same WW condition produced a white-to-white change when the initial percept was white. [Results and Discussion] The percept-related change in the pupillary response was consistently found with different stimulus sequences; larger pupillary constrictions were evoked when apparent brightness increased more with a stimulus change. In the BW condition, the large contrast increment produced by the black-to-white stimulus change in one eye seemed to have made the white disk perceptually dominant regardless of the initial dominant percept. In the WB condition, an individual difference in the perceptual change was found when the initial percept was black. However, even with these nonsystematic variations in percept, the pupillary constriction amplitude changed consistently with the perceptual change. These findings suggest that, although the pupillary light reflex is believed to be a primitive reflex mediated mainly by subcortical pathways, it also reflects neural activities correlated with perceptual changes.
Acknowledgement: Supported by JSPS grant

23.513 Neural correlates of binocular rivalry in human superior colliculus
Peng Zhang 1 (zhang870@umn.edu), Sheng He 1; 1 University of Minnesota, Department of Psychology
The functional role of subcortical structures in binocular rivalry remains poorly understood. The superior colliculus (SC) is one of the key structures in subcortical visual pathways. To address the question of whether the SC participates in binocular rivalry, we used high-resolution functional magnetic resonance imaging to measure the activity of the SC during binocular rivalry. Two orthogonal gratings with different colors (red/green) and contrasts (100%/15%) were dichoptically presented. While in the scanner, subjects viewed the stimulus through anaglyphic glasses and tracked the relative dominance of the two gratings by pressing one of two buttons. The BOLD signal level of the SC correlated well with subjects' perception, increasing when the high-contrast grating became dominant and decreasing when the low-contrast one was perceived. This BOLD signal modulation was similar to that observed in the replay condition, during which the two monocular stimuli were physically alternated between the two eyes. BOLD signals consistent with perceptual rivalry alternations were found in the LGN and V1. The subcortical pathway through the SC is considered an alternative to the geniculate-striate pathway, and is often considered to be a pathway supporting unconscious visual information processing. However, our results suggest that binocular rivalry also occurs in the SC, which has significant implications for our understanding of the neural mechanisms supporting unconscious visual information processing.
Acknowledgement: Research supported by grants EY015261 and NSF/BCS-0818588

23.514 Dominance times in binocular rivalry reflect lateralized cortical processing for faces and words
Sheng He 1 (sheng@umn.edu), Tingting Liu 1; 1 Department of Psychology, University of Minnesota
During binocular rivalry, the relative dominance times of the two images are strongly influenced by lower-level image factors such as image contrast (i.e., the image with higher contrast has a longer relative dominance time). In the current study, we investigated whether lateralized cortical processing for different categories of objects could also influence rivalry alternation dynamics. Since it is known that face processing is biased toward the right hemisphere (e.g., the right fusiform face area responds more strongly than the left) and processing of visual word forms is biased toward the left hemisphere (e.g., the visual word form area is usually localized in the left fusiform cortex), we hypothesized that the dynamics of rivalry between faces and words would show a face advantage for left visual field presentation and a word advantage for right visual field presentation. Specifically, a face and a Chinese character of the same size were dichoptically presented either to the left or right visual field, 2.5 degrees from fixation. Subjects viewed the stimuli and recorded the perceptual alternations with key presses. As predicted, the results show an interaction between the visual field of presentation and the relative dominance of the type of stimuli: the relative dominance time for faces over Chinese characters was longer when they were presented in the left visual field, while the relative dominance time for Chinese characters over faces was longer for right visual field presentation. We conclude that object-category-selective cortical areas participate in binocular rivalry competition processes and are part of the mechanisms that determine the dynamics of rivalry competition.
Acknowledgement: Research supported by grants EY015261 and NSF/BCS-0818588

23.515 Expectation from temporal sequences influences binocular rivalry
Adrien Chopin 1 (adrien.chopin@gmail.com), Madison Capps 2, Pascal Mamassian 1; 1 Laboratoire Psychologie de la Perception, Université Paris Descartes & CNRS, 2 Massachusetts Institute of Technology
We investigate here the implicit encoding of a temporal sequence of visual events and the expectation to complete the sequence. For this purpose, we tested the extent to which a series of non-rivalrous patterns can influence the dominant perception in binocular rivalry. We rely on and extend the pattern suppression phenomenon: when rivalrous oriented Gabors follow non-rivalrous Gabors, observers usually perceive the repeated orientation less often (Brascamp, Knapen, Kanai, van Ee & van den Berg, 2007). Our observers viewed sequences of non-rivalrous Gabors that could be oriented either to the left (A) or to the right (B). Sequences varied in length up to four items, for instance AABA. A pair of rivalrous Gabors then followed the sequence, with A presented to one eye and B to the other. The spatial frequency of the images during this rivalry presentation was slightly different from that of the images during the sequence, and observers reported their dominant percept by referring to its spatial frequency (higher or lower than the sequence). We found that the dominant percept during the rivalrous stage was largely predictable from the preceding sequence of non-rivalrous patterns. The primary factor was related to adaptation: the more often A was presented during the sequence, the more likely B would be perceived during rivalry. Another factor was related to alternation: after the sequence BABA, B was more likely to be perceived than after the sequence AABA. These results are consistent with the phenomenon of pattern completion found with the ambiguous motion quartet (Maloney, Dal Martello, Sahm & Spillmann, 2005). In conclusion, binocular rivalry is influenced not only by adaptation to a pattern seen in the past, but also by more complex temporal structures such as those found in the alternation of two patterns within a sequence.

23.516 The effect of stimulus interruptions on "fast switchers" and "slow switchers": a neural model for bistable perception
Caitlin Mouri 1 (mouri.caitlin@gmail.com), Avi Chaudhuri 1; 1 Department of Psychology, McGill University
Bistable perception is triggered by a physical stimulation that causes fluctuations between two perceptual interpretations. To date, no physiological mechanism has been causally linked to switching events (Einhauser et al., 2008; Hupé et al., 2008), leaving the neural basis of bistability unclear. External interruptions in the stimulus are known to affect perceptual switching rates: with long offsets, stimulus interruptions stabilize the percept, while short offsets trigger destabilization (Noest et al., 2007). The current study explores the latter phenomenon in a Necker cube presented at 600:1200 ms, 900:900 ms, and 1200:600 ms onset:offset durations. Figure-ground contrast varied between 100%, 50%, 25%, and 12.5%. In the flashing conditions, a 100% contrast cube was presented during the "onset phase", followed by a lower-contrast cube during the "offset phase". Overall results indicate that destabilization occurs in the flashing conditions, though individual results varied. In addition, subjects were evenly split between fast and slow switchers. Slow switchers showed strong biases for one percept, and sensitivity to contrast manipulations. These results suggest a dichotomy between low-level rivalry, of orthogonal orientations for example (Yu et al., 2002), and whole-form perception. Similar patterns have been described in binocular rivalry (Kovacs et al., 1996; Lee & Blake, 1999). Previous research implicates experience (Sakai et al., 1995) and genetic differences (Shannon et al., 2009) to explain why certain individuals experience fast or slow perceptual switching. We discuss our results in the context of noisy neural competition (Marr, 1982; Moreno-Bote et al., 2007). Our neural model makes use of a dynamical system developed by Wilson and Cowan (Wilson, 1999), in which two mutually inhibitory neurons interact. Manipulation of input signal strengths yields broadly similar results to those observed in this psychophysical study, suggesting that input strengths at different levels of processing may explain the divergence between fast and slow switchers.
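The two-unit dynamical system referred to here can be sketched generically: two units suppress each other through reciprocal inhibition while a slow adaptation variable fatigues the dominant unit, producing alternating dominance. The sketch below is in the spirit of that formulation, with parameter values we chose for illustration, not the authors' actual model:

import numpy as np

def simulate_rivalry(i1=1.0, i2=1.0, t_max=20.0, dt=0.001, noise=0.05):
    """Two mutually inhibitory units with slow adaptation (illustrative)."""
    n = int(t_max / dt)
    r = np.zeros((n, 2))                  # firing rates of the two units
    a = np.zeros(2)                       # slow adaptation states
    w_inh, w_adapt, tau_r, tau_a = 3.0, 1.5, 0.02, 1.0
    inputs = np.array([i1, i2])
    for t in range(1, n):
        # Each unit receives its input minus inhibition from its rival
        # and minus its own accumulated adaptation, plus fast noise
        drive = inputs - w_inh * r[t - 1, ::-1] - w_adapt * a
        drive += noise * np.random.randn(2)
        r[t] = r[t - 1] + dt / tau_r * (-r[t - 1] + np.maximum(drive, 0.0))
        a += dt / tau_a * (-a + r[t])
    return r

rates = simulate_rivalry()
switches = np.count_nonzero(np.diff(np.argmax(rates, axis=1)))
print(f"Dominance switches in 20 s: {switches}")

Raising or lowering the inputs i1 and i2, the manipulation highlighted in the abstract, changes the oscillation rate, which is one way such a model can accommodate fast and slow switchers.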
23.517 Visual Working Memory Content Modulates Competition in Binocular Rivalry
Henry Chen1 (henry-chen@uiowa.edu), David E. Anderson2, Andrew Hollingworth1, Shaun Vecera1, Cathleen M. Moore1; 1Department of Psychology, University of Iowa, 2Department of Psychology, University of Oregon
We explored whether the content of visual working memory (VWM) can modulate competition during binocular rivalry. Two arbitrary shapes, one green and one red, were presented above and below fixation for one second at the beginning of each trial. Immediately following the offset of this display, an auditory post-cue (a high or low tone) indicated whether the upper or lower shape, respectively, should be committed to VWM for later recognition. Note that color was irrelevant to the task; only the shape had to be remembered. While observers held the shape in VWM, a horizontal and a vertical grating, randomly assigned the opposing colors of green and red, were presented dichoptically for 750 ms. Observers reported the orientation of the perceived grating, which served as our measure of rivalry dominance. Color was also irrelevant to the rivalry task. Approximately 4 seconds following the offset of the original shape display, two white shapes were presented to the left and right of fixation, and observers reported which matched the original to-be-remembered shape. Results showed that rivalry dominance was biased by the color of the shape held in VWM. Specifically, observers reported seeing the grating that was the same color as the to-be-remembered shape more often than the other grating. Because both colors had been viewed immediately before the rivalry task, but dominance reflected the color of the to-be-remembered object, we conclude that binocular rivalry can be resolved by the top-down influence of the properties of the object being maintained in VWM. These data demonstrate a novel interaction between the content of VWM and the content of perceptual experience.

23.518 Where does the mask matter? Testing a local interaction account of Motion-induced Blindness
Erika T. Wells1 (erika.wells@unh.edu), Andrew B. Leber1; 1Department of Psychology, University of New Hampshire
Introduction: Motion-induced blindness (MIB) is the perceptual phenomenon whereby stationary peripheral targets disappear when presented with a moving mask. It has been proposed that MIB is mediated by competitive local interactions between the target and mask. To evaluate this account, we introduced displays in which motion properties of the mask were systematically altered in spatially confined regions of the display (i.e., surrounding the target location or elsewhere). Specifically, these regions contained incoherent motion while the remainder of the display contained coherent motion (we recently found that incoherent motion enhances target disappearance, and we thus exploited this finding for present purposes; Wells, Leber, & Sparrow, 2009, OPAM). We predicted that if local target-mask interactions underlie MIB, disappearance should be greatest when the incoherent motion is closest to the target. Method: The mask was composed of three distinct, evenly spaced columns of moving dots, with fixation centered in the middle column. Observers were instructed to report the perceived disappearance of a peripheral target, which was presented in one of the outer columns. On 75% of the trials, local coherence was manipulated such that one of the three columns contained incoherent motion while the other two columns were coherent. For the remaining trials, all columns were coherent. Results/Conclusions: Observers reported greater disappearance on trials containing incoherent motion, replicating our previous results. Interestingly, this enhanced disappearance occurred regardless of which column contained the incoherence. Specifically, incoherence in the center column generated the greatest disappearance, followed by incoherence in the target and opposite columns; these latter two produced similar enhanced rates of disappearance compared to the all-coherent condition. These findings do not support a mechanism in which local target-mask competition mediates MIB.
Rather, the properties of the mask responsible for the phenomenon seem insensitive to the target location and also appear to scale with eccentricity.

23.519 Why is Continuous Flash Suppression So Potent?
Eunice Yang1 (eunice.yang@vanderbilt.edu), Randolph Blake1,2; 1Department of Psychology / Vanderbilt Vision Research Center, Vanderbilt University, 2Brain and Cognitive Sciences, Seoul National University
Continuous flash suppression (CFS), a potent form of binocular rivalry introduced by Tsuchiya and Koch (2005), has become a popular tool for rendering stimuli perceptually invisible. But why is CFS so effective at producing interocular suppression? We sought to identify visual properties that empower CFS and, thereby, to infer something about the neural representation of the stimulus being suppressed. In Experiment 1 we measured contrast thresholds for detecting a Gabor patch (embedded in 1D, broadband noise) that was dichoptically paired either with a CFS display (dynamic 10 Hz noise patterns) or with a gray screen. Compared to contrast thresholds measured without CFS, thresholds under CFS were strongly elevated when the Gabor was low spatial frequency (0.5-4 cpd) but less so when it was high spatial frequency (8-16 cpd). In Experiment 2, we manipulated the spatial frequency content of the CFS and found that a low-pass filtered CFS (0.5-4 cpd) produced elevated thresholds similar to those measured when the CFS was unfiltered. High-pass filtered CFS (8-16 cpd), however, produced no elevation in thresholds. In Experiment 3 we varied the temporal frequency of unfiltered CFS displays. 5 Hz CFS elevated thresholds only for a low spatial frequency Gabor (1 cpd), whereas 20 Hz CFS produced the same pattern of threshold elevations as did 10 Hz CFS. We conclude that transients produced by rapid, abrupt flicker, together with random changes in pattern shape and contrast over time, create a suppressor that is itself immune to adaptation and that selectively impairs low-spatial-frequency components of stimuli presented to the opposing eye. This selectivity of suppression underscores the importance of considering the spatial frequency content of stimuli suppressed by CFS. Indeed, the present results may shed new light on the selective effects of CFS on different object categories (e.g. tools vs faces) reported in behavioral and neuroimaging studies. We are currently examining this possibility.
Acknowledgement: NIH EY13358 & 5T32 EY007135
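The filtered-CFS conditions in Experiment 2 amount to band-limiting each noise frame in the Fourier domain. A minimal sketch of one way to do this follows; the display-specific pixels-per-degree value and all other settings are illustrative assumptions, not the authors' code.

```python
# Illustrative construction of a band-limited noise frame for CFS: keep only
# Fourier components inside an annular passband (in cycles per degree).
import numpy as np

def bandpass_noise(size=256, pixels_per_degree=32.0,
                   low_cpd=0.5, high_cpd=4.0, seed=0):
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((size, size))
    f = np.fft.fftfreq(size, d=1.0 / pixels_per_degree)  # cycles per degree
    fx, fy = np.meshgrid(f, f)
    radius = np.hypot(fx, fy)
    passband = (radius >= low_cpd) & (radius <= high_cpd)
    frame = np.real(np.fft.ifft2(np.fft.fft2(noise) * passband))
    frame -= frame.mean()
    return frame / np.abs(frame).max()    # normalize to a +/-1 contrast range

frame = bandpass_noise()                  # regenerate at ~10 Hz for dynamic CFS
```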


23.520 Changes in Bistable Perception Induced by Fear Conditioning
Ji-Eun Kim1 (blessedpond@gmail.com), Tae-Ho Lee2, Hanmo Kang1, Chai-Youn Kim1; 1Department of Psychology, Korea University, 2Department of Psychology, Korea Military Academy
Background: When observers view ambiguous figures for a prolonged period of time, they experience perceptual alternations between two possible visual interpretations (Leopold & Logothetis, 2003). Dubbed bistable perception, this phenomenon has been considered a useful means to study visual awareness, since it induces spontaneous fluctuation in awareness despite constant physical stimulation (Kim & Blake, 2005). To investigate whether visual awareness during bistable perception is affected by emotional valence associated with one of the two interpretations, we exploited Pavlovian fear conditioning (Pavlov, 1927). Methods: Among a variety of ambiguous figures, we selected the man-rat and duck-rabbit figures, which induced balanced perceptual alternations in a pilot test. Prior to and following conditioning, observers tracked their perceptual experiences during 12 100-sec trials (6 for each ambiguous figure) by depressing one of two keyboard buttons. During conditioning, a pair of unambiguous variants of the man-rat figure was used as conditioned stimuli (CS). For half of the observers tested, the man image (CS+) was paired partially with an electrical finger shock (US) while the rat image (CS-) was unpaired with the electrical shock. For the other half, the rat image was CS+ while the man image was CS-. Reaction time was measured following observers' 2-AFC discrimination task (man or rat) to assess the conditioning effect independently. An anxiety test was also given to all observers. Results: For observers who showed faster responses to the CS+ paired with the shock than to the CS- during conditioning, perceptual awareness of the CS+ during bistable perception increased following conditioning. In addition, observers who marked high anxiety scores tended to perceive the CS- longer following conditioning. Conclusion: Perceptual awareness during bistable perception is affected by fear conditioning. Individual differences in susceptibility to conditioning and the level of anxiety are influential factors.
Acknowledgement: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2009-0089090)

23.521 Spatial aspects of binocular rivalry in emotional faces
Kay Ritchie1 (kay.ritchie@abdn.ac.uk), Rachel Bannerman1, Arash Sahraie1; 1Vision Research Laboratories, School of Psychology, University of Aberdeen, Aberdeen, United Kingdom
Previous research has shown that emotional content can influence dominance duration in binocular rivalry, with the period of dominance for an emotional image (e.g. a fearful face) being significantly longer than for a neutral image (e.g. a neutral face or a house). Furthermore, it has been found that the greater the foveal eccentricity of a rival pair of simple images, the slower the rate of rivalry. The current study combined these two findings to investigate the dominance of faces and the rate of rivalry in the periphery. Rival face (fearful or neutral) and house pairs subtending 5.2° x 6.7° were either viewed foveally, or the near edge of the stimuli was at 1° or at 4° eccentricity.
While neutral faces dominated over houses in only the foveal condition, fearful faces dominated over houses in all three conditions. There was no effect of eccentricity on the rate of rivalry. These results provide support for the dominance of face stimuli over house stimuli, particularly for faces displaying fearful expressions.
Acknowledgement: This project was funded through a vacation scholarship awarded to K. Ritchie by the Wellcome Trust.

Face perception: Experience
Vista Ballroom, Boards 522–538
Saturday, May 8, 8:30 - 12:30 pm

23.522 The speed of familiar face recognition
Bruno Rossion1,2 (bruno.rossion@psp.ucl.ac.be), Stéphanie Caharel1,2, Corentin Jacques3, Meike Ramon1,2; 1Institute of Psychological Science, University of Louvain, 2Institute of Neuroscience, University of Louvain, 3Stanford University, Computer Science Department
Recognizing a familiar person from his/her face is a fundamental brain function. Surprisingly, to date the actual speed of categorizing a face as familiar remains largely unknown. Here we seek to clarify this question by using a Go/No-go familiarity judgment task, requiring speeded responses to individually presented face stimuli, with photographs of personally familiar faces (from the same classroom as the participant) and well-matched pictures of unfamiliar faces. During the recording of high-density event-related potentials (ERP, 128 channels), two groups of young adult participants were instructed either to respond when a photograph of a personally familiar face was presented (n = 11, 6 females), or when the face was unfamiliar (n = 12, 7 females). Face stimuli contained external features (hair), but external indicators of identity (clothes, …) were carefully removed. Each face stimulus appeared for 100 ms, followed by a blank screen (1500-1700 ms). Behaviorally, faces could be classified as familiar as early as 310-320 ms (average RT, 450 ms), about 80 ms faster than when unfamiliar face categorization was required. ERP differential waveforms between Go and No-go responses when detecting familiarity showed the earliest difference at occipito-temporal cortex shortly after 200 ms, starting in the right hemisphere, and 10 ms later in the left hemisphere. Differences appeared about 50 ms later for the Go-unfamiliar decision task, with no differences in lateralization of onset times. There were no clear effects of face familiarity on earlier visual event-related potentials (P1, N170). These earliest effects observed in electrophysiological recordings are compatible with the behavioral output taking place about 100 ms later. They indicate that the human brain needs no more than 200 ms following stimulus onset to recognize a familiar person based on his/her face only, a time frame that puts strong constraints on the time-course of face processing operations in the human brain.

23.523 Tracking qualitative and quantitative information use during face recognition with a dynamic Spotlight
Luca Vizioli1 (lucav@psy.gla.ac.uk), Sebastien Miellet1, Roberto Caldara1; 1Department of Psychology and Centre for Cognitive Neuroimaging (CCNi), University of Glasgow, United Kingdom
Social experience and cultural factors shape the strategies used to extract information from faces. These external forces, however, do not modulate information use. Using a gaze-contingent technique that restricts information outside the fovea - the Spotlight - we recently showed that humans rely on identical face information (i.e., the eye and mouth regions) to achieve face recognition (Caldara, Zhou and Miellet, 2010).
Although the Spotlight allows precise identification of the diagnostic information required for face processing (i.e., qualitative information), the amount of information (i.e., quantitative information) necessary to effectively code facial features is still unknown. To address this issue, we monitored the eye movements of observers during a face recognition task with a novel technique that parametrically and dynamically restricts information outside central vision. We used Spotlights with Gaussian apertures centered on the observers' fixations that dynamically and progressively expanded (at a rate of 1° every 25 ms) as a function of fixation time. Thus, the longer the fixation duration, the larger the Spotlight aperture size. The Spotlight aperture was contracted to 2° (foveal region) at each new fixation. To facilitate the programming of saccades and natural fixation sequences, we replaced information outside central vision with an average face template. This novel technique allowed us to simultaneously identify the active use of information and provide an estimate of the quantity of information necessary at each fixation location to achieve this process. The dynamic Spotlight technique revealed modulations in the quantity of information extracted from diagnostic features, even for the same facial features (i.e., the eyes). This sensitivity varied across observers. Our data suggest that the face system is not uniformly tuned for facial features, but rather that the calibration modulating the intake of visual information is observer-specific.
Acknowledgement: The Economic and Social Research Council and Medical Research Council (ESRC/RES-060-25-0010)

23.524 What's behind a face: semantic person identity coding in FFA, as revealed by multi-voxel pattern analysis
Job van den Hurk1,2 (Job.vandenhurk@maastrichtuniversity.nl), Bernadette M. Jansma1,2; 1Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands, 2Maastricht Brain Imaging Center, Maastricht, the Netherlands
Identifying a familiar face involves the access to and activation of semantic knowledge about an individual. Several studies have shown that the Fusiform Face Area (FFA) is involved in face detection and identification. Conventional studies targeting face identification processes are generally limited to visual features, thereby ignoring semantic knowledge about individuals. To what extent FFA has access to person-specific knowledge remains unknown.
In the present study we addressed this issue by designing an 8 x 8 word matrix consisting of 8 categories: professions, European capital cities, car brands, music styles, pets, hobbies, sports and housing types. Each column represents a category, whereas each row can be interpreted as information about an individual. In the fMRI scanner, participants were repeatedly presented with blocks of 8 words, presented either in a category-related context (column-wise, category condition) or in a person-related context (row-wise, person condition). Subjects were instructed to memorize all 8 items belonging to each category (e.g., "sports", category condition) and to each person (e.g. "John", person condition). Using this approach, we were able to control for visual and semantic stimulation across conditions.
Univariate statistical contrasts did not show any significant differences between the two conditions in FFA. However, a multivariate method based on a machine learning classification algorithm was able to successfully classify the functional relationship between the two conditional contexts and their underlying response patterns in FFA. This suggests that activation patterns in FFA can code for different semantic contexts, thus going beyond facial feature processing. These results will encourage the debate about the specific role of FFA in face identification.
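The abstract leaves the classifier unspecified; one common MVPA recipe is a linear classifier on ROI voxel patterns with leave-one-run-out cross-validation. The sketch below is purely illustrative: `patterns`, `labels`, and `runs` are hypothetical stand-ins for data extracted from the FFA.

```python
# Illustrative MVPA sketch (the abstract does not name the algorithm):
# classify FFA voxel patterns by block context (category vs. person).
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
patterns = rng.standard_normal((48, 120))   # 48 blocks x 120 FFA voxels
labels = np.tile([0, 1], 24)                # 0 = category block, 1 = person block
runs = np.repeat(np.arange(8), 6)           # 8 scanner runs of 6 blocks each

scores = cross_val_score(LinearSVC(dual=False), patterns, labels,
                         groups=runs, cv=LeaveOneGroupOut())
print(f"mean cross-validated accuracy: {scores.mean():.2f}")  # chance = 0.50
```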


23.525 Can we dissociate face perception and expertise?
Marijke Brants1,2 (Marijke.Brants@psy.kuleuven.be), Johan Wagemans2, Hans Op de Beeck1; 1Laboratory of Biological Psychology, University of Leuven (K.U.Leuven), Belgium, 2Laboratory of Experimental Psychology, University of Leuven (K.U.Leuven), Belgium
Some of the brain areas in the ventral temporal lobe, such as the fusiform face area (FFA), are critical for face perception, but what determines this specialization is a matter of debate. The face specificity hypothesis claims that faces are processed domain-specifically. However, the alternative expertise hypothesis states that the FFA is specialized in processing objects of expertise. To disentangle these views, some experiments used an artificial class of novel objects called Greebles. These experiments combined a learning and fMRI paradigm. However, there are some problems with these studies: the limited number of brain regions examined (only face-selective regions), the high similarity between faces and Greebles, and other methodological issues. Given the high impact of this paradigm, we investigated these issues further. In our experiment eight participants were trained for ten 1-hour sessions at identifying a set of novel objects, Greebles. We scanned participants before and after training and examined responses in the FFA as well as in the lateral occipital complex (LOC). To isolate expert processing, we compared responses to upright and inverted images for faces and for Greebles. In contrast to previous reports, we found an inversion effect for Greebles before training. This result suggests that people interpret the 'novel' Greebles as faces, even before training. This prediction was confirmed in a post-experimental debriefing. In addition, we did not find an increase of the inversion effect for Greebles in the FFA after training. This indicates that the activity in the FFA does not depend on the degree of expertise acquired with the objects.
In the LOC we find some indications of an increase in activation for Greebles after training. These findings are in favor of the face specificity hypothesis, with the understanding that the notion 'face' refers to "every stimulus that is interpreted as containing a face".
Acknowledgement: This work was supported by the Research Council of K.U.Leuven (CREA/07/004), the Fund for Scientific Research – Flanders (1.5.022.08), the Human Frontier Science Program (CDA 0040/2008) and by the Methusalem program by the Flemish Government (METH/08/02).

23.526 Adaptation aftereffects to facial expressions viewed without visual awareness
Sang Wook Hong1 (sang.w.hong@vanderbilt.edu), Eunice Yang1, Randolph Blake1,2; 1Department of Psychology, Vanderbilt University, 2Brain and Cognitive Sciences, Seoul National University
Human faces are compelling visual objects whose salience is further boosted when they portray strong emotional expressions such as anger. Can aftereffects associated with adaptation to facial expressions (FEAs) be induced when observers are unaware of those expressions under continuous flash suppression (CFS)? During repeated 5-second adaptation periods, observers monocularly viewed faces that were either visible continuously or were erased from visual awareness by a CFS stimulus presented to the other eye. During brief test trials interspersed between successive adaptation periods, observers were presented with a "morph" face whose emotional expression was the weighted average between the two extremes of expression used to create the morph (e.g., angry vs fearful); following each test presentation, observers selected one of two response categories to indicate perception of the test face's emotional expression. In two experiments we found that robust FEAs were generated when adapting faces were visible but were abolished when those faces were perceptually suppressed by CFS; these findings replicate earlier results measuring face identity and gender aftereffects. A third experiment using the same stimuli and procedures produced significant contrast adaptation aftereffects to suppressed faces, confirming that the adapting stimuli were not rendered completely ineffective by CFS. In a fourth experiment, observers performed a luminance discrimination task that required attending to the spatial location of an adapting face, although the face itself could not be seen. In the presence of endogenous attention, significant FEAs were induced by suppressed adapting faces. These findings, together with other evidence, suggest that attentional resources must be available and further allocated to the location of the face stimulus for adaptation aftereffects to occur, even when the face is outside observers' awareness.
Acknowledgement: Supported by EY13358

23.527 Crossing the "Uncanny Valley": adaptation to cartoon faces can influence perception of human faces
Haiwen Chen1,4 (haiwen95@gmail.com), Richard Russell1,3, Ken Nakayama1, Margaret Livingstone2; 1Psychology, Harvard College, Harvard University, 2Neurobiology, Harvard Medical School, Harvard University, 3Psychology, Gettysburg College, 4Functional Neuroimaging Laboratory, Brigham and Women's Hospital / Harvard Medical School
Adaptation to distorted faces can shift what individuals identify to be a prototypical or attractive face.
This effect occurs both across and within sub-categories of human faces, such as gender and race/ethnicity, suggesting that there is a common coding mechanism for human faces (a single face space) and dissociable coding mechanisms for subgroups of human faces. But does this face space extend to non-human faces? The construct of the "uncanny valley" suggests that as human-like features increase, people respond more positively, but at a distinct point there is an "uncanny valley," a region where the deviations from humanness are stronger than the reminders of humanness, which creates feelings of uncanniness and repulsion. This points to a significant divide between human faces and other faces, such that they may not share a common face space. It is also important to note that low-level shape adaptation can affect high-level face processing but is position-dependent and hence not face-specific. Thus, it is unclear whether there is a common coding mechanism for all faces, including non-human faces, that is nevertheless specific to faces. This study assessed whether there is a single face space common to both human and cartoon faces by testing whether adaptation to cartoon faces can affect perception of human faces. Participants were shown Japanese animation cartoon videos containing faces with abnormally large eyes. Using animated videos eliminated the possibility of position-dependent adaptation (because the faces appear at many different locations) and more closely simulated naturalistic exposure. Adaptation to cartoon faces with large eyes significantly shifted preferences for human faces toward larger eyes, consistent with a position-independent, common representation for both cartoon and human faces. This supports the possibility that there are representations that are specific to faces yet common to all kinds of faces.

23.528 Adaptation to Up/Down Head Rotation in Face Selective Cortical Areas
Ming Mei1 (mmei@yorku.ca), Lisa Betts2, Frances Wilkinson1, Hugh Wilson1; 1Centre for Vision Research, York University, 2Department of Psychology, McMaster University
Although faces are naturally seen in both left/right and up/down rotated views, virtually all fMRI work on the representation of face views has examined only left/right rotation around frontal views. Accordingly, we designed an fMRI adaptation study to test multiple cortical areas for up/down viewpoint selectivity. Face-selective regions of interest were determined in a block-designed scan comparing responses to faces versus houses. This identified five face-selective regions of interest: fusiform face area (FFA), occipital face area (OFA), lateral occipital complex (LOC), superior temporal sulcus (STS), and inferior frontal sulcus (IFS). Event-related scans with a cross-adaptation paradigm were used to examine BOLD signals in each face region. Subjects adapted to frontal, up 20°, or down 20° views followed by one of these as a test view, thus producing nine different adapt/test combinations. Twelve subjects with normal vision were scanned. An initial two-way ANOVA examined effects of hemisphere and self-adaptation (i.e. identical test and adapt stimuli). This analysis showed an effect of hemisphere (right magnitudes larger) only in FFA, and significant adaptation effects in FFA (p


The 10-second study condition shows a diminution in performance, nearly reaching a level of chance performance. The same experiment was run with a new face. These findings were replicated. Based on the SAME response data, we estimated that to obtain a high level of memory performance for an unfamiliar face (>.75), the minimal exposure time appears to be between 20 and 30 seconds.
Acknowledgement: NIH

23.533 Plastic representation of face view in human visual system
Taiyong Bi1 (bitaiyong@pku.edu.cn), Juan Chen1, Fang Fang1; 1Department of Psychology, Peking University
Previous brain imaging studies have demonstrated that perceptual learning could enhance the representation of visual features in human early visual cortex. In this study, we used functional magnetic resonance imaging (fMRI) to investigate how perceptual learning could change the representation of face view in the human visual system. We trained subjects to discriminate face orientations around a face side view (e.g. 30 deg) over eight days, which resulted in a dramatic improvement in sensitivity to face view orientation. This improved sensitivity was highly specific to the trained face side view. Before and after training, subjects were scanned to measure their brain responses (BOLD signal) to both the trained face view and untrained face views. We analyzed BOLD signals from cortical areas throughout the visual hierarchy, including early and middle-level visual areas (V1, V2, V3 and V4), occipital face area (OFA), superior temporal sulcus (STS) and fusiform face area (FFA). We found that, relative to untrained face views, BOLD signals in FFA and STS (but not other areas) to the trained face view significantly increased after the training on face view orientation discrimination, which was parallel to the psychophysical result. Our data suggest that the enhanced representation of a face view in higher visual areas could subserve our perceptual ability to discriminate face orientations around the face view.
Acknowledgement: National Natural Science Foundation of China (Project 30870762, 90920012 and 30925014)

23.534 The Clark Kent Effect: What is the Role of Familiarity and Eyeglasses in Recognizing Disguised Faces?
Erin Moniz1 (emoniz02@yahoo.com), Giulia Righi2, Jessie J. Peissig1, Michael J. Tarr3; 1Department of Psychology, California State University, Fullerton, 2Laboratories of Cognitive Neuroscience, Division of Developmental Medicine, Children's Hospital Boston, 3Center for Neural Basis of Cognition, Carnegie Mellon University
People have the ability to purposely transform the appearance of the facial region with the application of make-up, the growing or shaving of facial hair, the addition or removal of glasses, or the alteration of a hair style or color. All of these different types of transformations have an impact on the ability to recognize a person, though it's unclear how much of an impact, and the degree to which different transformations disrupt recognition. The purpose of this study was to add to existing knowledge about the ability of human subjects to recognize naturalistic faces in disguise. We investigated the effects of different types of attribute changes that altered the appearance of faces from presentation to test, for example the addition or subtraction of eyeglasses. Additionally, the effect of varying levels of familiarity on recognition was examined. Participants were first familiarized by viewing faces three, six, or nine times while performing judgment tasks (e.g., attractive vs.
unattractive) with individuals either in disguise (wig and/or glasses) or shown with no disguise. During the testing phase, participants were shown both previously learned and novel individuals, and the faces were shown with and without disguise. Results indicated that any attribute change made from presentation to test lowered identification accuracy, and as the number of attribute changes increased, performance decreased. Eyeglasses hindered recognition, but results indicated little difference between tinted and clear-lens glasses in their effect on performance. The d' scores for addition vs. subtraction of eyeglasses replicated prior work showing that encoding a face with eyeglasses and removing them before the recognition task (subtraction) was more damaging than an addition. Although no significant main effect was found for familiarity, post hoc tests did indicate a significant difference between familiarizing someone three times versus nine times.
Acknowledgement: This research was funded by NSF Award #0339122 (Enhancing Human Performance), the Perceptual Expertise Network (#15573-S6), a collaborative award from the James S. McDonnell Foundation, and by the Temporal Dynamics of Learning Center at UCSD (NSF Science of Learning Center SBE-0542013).

23.535 Race-specific perceptual discrimination improvement following short individuation training with faces
Rankin Williams McGugin1 (rankin.williams@vanderbilt.edu), James Tanaka2, Sophie Lebrecht3, Michael Tarr4, Isabel Gauthier1; 1Department of Psychology, Vanderbilt University, 2Department of Psychology, University of Victoria, 3Department of Cognitive & Linguistic Sciences, Brown University, 4Center for the Neural Basis of Cognition, Department of Psychology
We explore the effect of individuation training on the acquisition of race-specific expertise with faces. The own-race advantage ("ORA") – superior performance for own-race faces relative to those of less familiar races – has been explained by the tendency to individuate own-race faces but to categorize faces of other races. Here we ask whether practice individuating other-race faces yields improvement in perceptual discrimination for novel faces of the trained race. We predicted that this improvement would not generalize to novel faces of another race to which participants were equally exposed in an orthogonal task that did not require individuation, yet was at least as difficult. Caucasian participants were trained to individuate faces of one race through subordinate-level naming (African American or Hispanic) and to make difficult eye-luminance judgments on faces of the other race. In the latter task, participants judged which eye was of a brighter luminance, while identity and brightest eye were always orthogonal. Given these tasks we are able to rule out differences in exposure, attention and reward in producing race-specific improvements. Our results indicate that the skills acquired during individuation training generalize to novel exemplars of a category but, at least in the case of faces from two different races, they do not generalize to faces of another race experienced with equal frequency in a task that required at least as much attention. Our work demonstrates training effects that generalize to novel stimuli using a much shorter procedure (90 minutes of training, half of which was devoted to individuation) than in prior studies.
The results suggest that differential effects in recognition performance could depend on differences in perceptual encoding due to differential practice with individuation. This could magnify any own-race face advantage arising from cognitive, perceptual, or social processes that promote individuation of own-race faces relative to other-race faces.
Acknowledgement: This work was supported by the Temporal Dynamics of Learning Center (NSF Science of Learning Center SBE) and by a grant from the James S. McDonnell Foundation to the Perceptual Expertise Network.

23.536 The Effects of Familiarity on Genuine Emotion Recognition
Carol M Huynh1 (ch1286@csu.fullerton.edu), Gabriela I Vicente2, Jessie J Peissig3; 1California State University Fullerton, 2California State University Fullerton, 3California State University Fullerton
Numerous studies within the field of emotion recognition have explored the role of familiarity. However, few have looked at the effect of familiarity using multiple genuine expressions of emotion. It seems plausible to propose that the more familiar someone is, for example a friend or family member, the more likely it is that the person's expression will be identified accurately. By focusing on only genuine expressions of emotion, we remove any additional information that may accompany the use of posed emotions (LaRusso, 1978). Also, it is critical to incorporate multiple expressions, rather than only two expressions as in many studies, to test for any differences between emotions, as well as to create a more realistic task. In our study, we used laboratory familiarity training to compare the recognition of emotion in familiar and unfamiliar faces. Half of the faces were familiarized by having people perform judgment tasks. One group had a single judgment task, a second group had six judgment tasks, and a third group was not familiarized with any of the faces. This training used photos of individuals expressing either a happy expression, for half of the familiarized participants, or a neutral expression, for the other half. For the testing phase, both familiarized and unfamiliarized face stimuli were used, each shown with a variety of different emotions (e.g., happy, fear, disgust, confusion, and neutral). Participants then had to accurately categorize the facial expression. The results indicated that there is an effect of familiarity on the accuracy of emotion recognition. The more familiar one is with a stimulus, in this case a person's face, the more likely one is to identify the emotion accurately. This experiment is important because it contributes to our understanding of the effects of familiarity and its interaction with emotion recognition.


23.537 The role of learning in the perceptual organization of a face
Jennifer Bittner1 (jlb503@psu.edu), Michael Wenger1, Rebecca Von Der Heide1, Daniel Fitousi1; 1Department of Psychology, The Pennsylvania State University
Tanaka & Farah (1993) have documented a behavioral regularity that has been used to argue for holistic representation of faces. The basic regularity is that identification of an anatomical feature of a face (e.g., a nose) is aided when that feature is presented in the context of a face, and is best when that feature is presented in the context of the original source face (Tanaka & Sengco, 1997). Given that physical characteristics (e.g., similarity between the facial form and that of the feature) are most likely inadequate for producing both of these regularities, the role of learning must, by hypothesis, be critical for understanding the mechanisms for such regularities. The present effort investigates the potential role of learning using stochastic linear systems models for the processing of multidimensional inputs. The models allow for dynamic representations of the presence or absence of dimensional dependencies between features, in the form of channel interactions. The models are capable of making predictions at the level of behavioral latencies and accuracies for a range of tasks, including those used in the original demonstrations of the face superiority effect. Here we highlight the potential role of learning in producing changes in both perceptual sensitivity and bias, as a function of both experience and stimulus context, and use these as predictions for a set of experiments involving multidimensional judgments, in order to show how learning can lead to behaviors that have been taken as indicators of perceptual holism.

23.538 Visual Short Term Memory for One Item
Michael Mangini1 (mangini@cord.edu), Michael Villano2, Charles Crowell2; 1Psychology, Concordia College, 2Psychology, University of Notre Dame
Visual short term memory (VSTM) is a limited capacity system that abstracts visual information from sensory stimulation. Many studies have investigated the storage capacity of this system expressed in numbers of objects, features, or complexity. Here we investigate the accuracy of visual short term memory for single items. On half of the trials (memory trials), a single face or spatially filtered noise pattern is initially presented. After a one-second memory delay, a two-alternative forced-choice (2AFC) test is presented. On the other half (perceptual trials), participants are presented with a simultaneous 2AFC match-to-sample task. Both noise and face stimuli are synthesized to contain equivalent low-level visual structure. They have equal spatial frequency profiles, and both stimulus classes are generated from the summation of twenty randomly amplified orthogonal linear templates. Results showed participants were more sensitive to face stimuli. The memory delay caused a significant decrease in performance. Interestingly, a significant interaction between the magnitude of the memory decrement and the stimulus type was observed. Specifically, memory trials for noise stimuli showed a larger performance decrement than that observed for face stimuli. These findings suggest that VSTM does not have the capacity to store even a single complex item at the level of detail that is available when an image is present. Significant degradation occurs within a second. Also, the accuracy of VSTM is stimulus specific. Faces seem to be represented by VSTM more efficiently than filtered noise. Because the noise and face stimuli have computationally similar degrees of variation, the differences in performance must be due to internal representation. This suggests that for complex stimuli VSTM likely utilizes previously learned statistical regularities, and is therefore not a general purpose mechanism. Models of stochastic visual memory decay in low-level image space cannot account for our findings.
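The stimulus construction described above, matched spectra and a sum of twenty randomly amplified orthogonal templates, could be realized along these lines. This is a hedged sketch only: the authors' actual templates and normalization are unspecified, and random orthonormal templates stand in for them here.

```python
# Hedged sketch: each stimulus is a weighted sum of twenty orthonormal
# templates, so all stimuli share the same low-level building blocks.
import numpy as np

size = 64                                   # image side length in pixels
rng = np.random.default_rng(1)

q, _ = np.linalg.qr(rng.standard_normal((size * size, 20)))
templates = q.T.reshape(20, size, size)     # 20 orthonormal templates

def synthesize(rng):
    weights = rng.standard_normal(20)       # random amplification per template
    return np.tensordot(weights, templates, axes=1)

stimulus = synthesize(rng)                  # one 64 x 64 stimulus
# With i.i.d. weights the expected power is matched across stimuli; an extra
# Fourier-amplitude equalization step would match spectra exactly.
print(stimulus.shape)
```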
Scene perception: Objects and scenes
Vista Ballroom, Boards 539–554
Saturday, May 8, 8:30 - 12:30 pm

23.539 When Do Objects Become Scenes?
Jiye Kim1 (jiyekim@usc.edu), Irving Biederman1,2; 1Department of Psychology, University of Southern California, 2Neuroscience Program, University of Southern California
Scene-like interactions of pairs of objects (a bird perched on a birdhouse) elicit greater BOLD activity in LOC than the same objects depicted side-by-side (a bird next to a birdhouse) (Kim & Biederman, 2009). Novelty of the interactions (a bird perched on an ear) magnified this gain, an effect that was absent in the side-by-side depictions. LOC is the first cortical stage where shape is distinguished from texture (Cant & Goodale, 2009). Other cortical areas, such as IPS, DLPFC, and PPA, did not consistently reveal the pattern of BOLD effects seen in LOC, although it is possible that the effects witnessed in LOC reflected feedback from these areas. Due to the low temporal resolution of the BOLD signal, the time course of these possible effects could not be assessed with fMRI. We used EEG source estimation to determine whether interacting and novel depictions produced effects in parietal and prefrontal areas prior to when these effects occurred in occipito-temporal cortex. (The time course of PPA could not be observed with EEG due to its medial location.) While maintaining fixation, subjects performed a one-back task while they viewed a series of two-object displays, presented either as interacting or side-by-side, and in novel or familiar combinations. Occipito-temporal cortex showed earlier divergence of interacting versus side-by-side conditions than parietal cortex. Although novel interactions did not produce a larger BOLD response in parietal cortex, there was a divergence of novelty and familiarity in the EEG signal at about the same time as in occipito-temporal cortex. No consistent pattern was observed in prefrontal cortex. The picture that emerges is one in which scene-like relations are not inferred at some stage following object identification, but are likely achieved simultaneously with the perception of object shape.
Acknowledgement: NSF BCS 04-20794, 05-31177, 06-17699 to I.B.

23.540 The Scene Superiority Effect
Richard Yao1 (ryao2@uiuc.edu), Daniel J. Simons1, John E. Hummel1; 1Department of Psychology, University of Illinois Urbana-Champaign
In the word superiority effect, two letters are easier to discriminate when presented in the context of a real word, even when the rest of the word is non-predictive of the target letter. For instance, people can better discriminate "word" from "work" than they can discriminate "d" from "k." The effect disappears when the letters appearing with the target form a non-word letter string (e.g., "orwk" and "orwd"). We explored whether this context effect for letters and words would generalize to objects in scenes. Subjects identified rapidly presented objects that were drawn from a single semantic category (i.e., "offices").
We used an adaptive staircase algorithm (QUEST) to set object detectability at 40% accuracy when viewed against a phase-scrambled scene background. Subjects then performed the detection task with objects superimposed on scene backgrounds that varied in semantic consistency (offices or beaches) and orientation (upright or inverted). As in the word superiority effect paradigm, the background was irrelevant to the object detection task and was unpredictive of which object appeared on any given trial. Consistent with the word superiority effect, subjects were better able to identify target objects when they were displayed on semantically consistent backgrounds. Consistent with subject reports that they were able to ignore the scene entirely as the experiment progressed, the effect disappeared after approximately 100 trials. Together, these results suggest that scene context can facilitate object identification, but only when the scene semantics are processed.

23.541 What's behind the box? Measuring scene context effects with Shannon's guessing game on indoor scenes
Michelle Greene1,2 (m.greene@search.bwh.harvard.edu), Aude Oliva3, Jeremy Wolfe1,2, Antonio Torralba3; 1Brigham and Women's Hospital, 2Harvard Medical School, 3Massachusetts Institute of Technology
Natural scenes are lawful, predictable entities: objects do not float unsupported, spoons are more often found with forks than printers, and it makes little sense to search for toilets in dining rooms. Although visual context has often been manipulated in object and scene recognition studies, it has not yet been formally measured. Information theory specifies how much information is required to encode objects in a scene, assuming no contextual knowledge. We can then measure, in bits per object, the information benefit provided by human observers' contextual knowledge. We used a database of 100 indoor scenes, containing 352 unique objects labeled using the LabelMe tool. If all objects were equally probable in a scene, 8.46 bits per object would be required (log2(352)). Taking object frequency into account (i.e., chairs are more common than basketballs) would only reduce this number to 7.22 bits per object. To measure the information required by humans to represent objects in scenes, we adapted the guessing game proposed by Shannon (1951). Between 5-80% of the objects in each scene were occluded by opaque bounding boxes. Observers guessed the identity of each occluded object until the object was correctly named. More than 60% of objects were correctly guessed on the first try, because context massively constrains the identity of a hidden object (What might you guess was hidden next to a plate on a dinner table?). Fully 93% of objects were correctly guessed within 10 tries. Overall, we found that observers could represent the database with just 1.86 bits per object when 5-10% of objects were masked. Just 2.00 bits per object were needed even when the majority of objects were masked. This technique can be used to measure the redundancy provided by aspects of context such as scene category, object density and object consistency.
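The two baselines quoted above follow directly from Shannon's formula: a uniform distribution over the 352 labels costs log2(352) ≈ 8.46 bits, and weighting by label frequency lowers the entropy. A quick check, where a hypothetical Zipf-like frequency profile stands in for the real LabelMe counts:

```python
# Quick check of the two baselines quoted above. The uniform case is log2 of
# the number of distinct labels; the frequency-weighted case is the Shannon
# entropy of the label distribution. `zipf_counts` is a hypothetical stand-in
# for the real LabelMe frequencies, which are not given in the abstract.
import numpy as np

def entropy_bits(counts):
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

n_labels = 352
print(np.log2(n_labels))                  # 8.46 bits/object: uniform baseline

zipf_counts = 1.0 / np.arange(1, n_labels + 1)
print(entropy_bits(zipf_counts))          # < 8.46, cf. the reported 7.22
```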
23.542 When the animal destroys the beach, the beach destroys the animal. Mutually assured destruction in gist processing
Karla Evans1 (kevans@search.bwh.harvard.edu), Jeremy Wolfe1; 1Brigham and Women's Hospital, Harvard Medical School
Observers can report some semantic content of scenes when those scenes are presented for 20 msec, flanked in time by masks. It is likely that only a single object could be selected for attentional processing in this time, so this gist processing would seem to involve non-selective processing of the entire image. Similarly, we find that expert observers (radiologists and cytopathologists) can detect subtle signs of cancer at above-chance levels in 250 msec exposures of mammograms and Pap smears. These exposures are unmasked but still preclude normal extended attentional scrutiny. Can multiple gists be computed concurrently? Last year, we demonstrated limits on this ability. We cued observers with one of nine target categories (e.g. beach, animal, bridge) before presenting a masked scene for 20 msec. Targets were present on 50% of trials. Critically, on half of target-present trials, an un-cued target category was also present. That is, "beach" would be cued but the scene might include both beach and animal – a "trial-irrelevant", but "task-relevant" target category. Observers were 76% correct when trials contained only cued targets but only 52% correct when trial-irrelevant targets were also present. Critically, animal would not interfere with beach if it were not a target on other trials in the same block. This year, we show that interference is mutual. On each trial, observers reported on the presence of the target and were also asked if any other categories were present. If observers missed the task-relevant "animal", they were actually LESS likely to be able to report a task-irrelevant beach. Of course, in real life, if you were looking for your fork and now for your glass, you are not blinded by the presence of both items in the visual field. We find that "mutually assured destruction" occurs for exposures shorter than 200 msec.
Acknowledgement: This research was funded by a NRSA grant to KKE and NIH-NEI to JMW.

23.543 The objects behind the scenes: TMS to area LO disrupts object but not scene categorization
Caitlin Mullin1 (crmullin@yorku.ca), Jennifer Steeves1; 1Centre for Vision Research and Department of Psychology, Faculty of Health, York University, Toronto
Many influential theories of scene perception are object-centered (Biederman, 1981), suggesting that scenes are processed by extension of object processing in a bottom-up fashion. However, an alternative approach to scene processing is that the global gist of a scene can be processed in a top-down manner without the need for first identifying its component objects (Oliva & Torralba, 2001). This suggests that global aspects of a scene may be processed prior to the identification of individual objects.
Evidence from a patient with object agnosia and bilateral damage to lateral occipital (LO) cortex, an area associated with object processing (Grill-Spector et al., 2001), also suggests that scene categorization can operate independently of object perception (Steeves et al., 2004). We asked whether or not temporary interruption to area LO in neurologically intact controls with repetitive transcranial magnetic stimulation (rTMS) impairs object and scene processing. Participants categorized greyscale images of objects and scenes as 'natural' or 'man-made'. Subsequently, we targeted area LO, which had been functionally defined with fMRI, and participants underwent five minutes of rTMS. Immediately following, they completed another version of the object and scene categorization task. Preliminary results show that rTMS to area LO impairs categorization of objects but not scenes. This suggests that the global gist used to rapidly categorize scenes remains intact despite an interruption to object processing brain regions.
Acknowledgement: Canada Foundation for Innovation, NSERC to JKES and OGSST to CRM

23.544 Visual cortex represents the statistical distributions of objects in natural scenes
Dustin Stansbury1 (stan_s_bury@berkeley.edu), Thomas Naselaris2, An Vu3, Jack Gallant1,2,3,4; 1Vision Science, University of California, Berkeley, 2Helen Wills Neuroscience Institute, University of California, Berkeley, 3Bioengineering, University of California, Berkeley, 4Psychology, University of California, Berkeley
Natural scenes are composed of collections of objects, with specific types of objects tending to occur in certain classes of scenes. We hypothesize that the visual system might exploit these co-occurrence statistics in order to classify scenes more efficiently. If this is true, then a model that captures the distribution of objects in natural scenes should provide good predictions of visual cortical activity during natural vision. To construct such a model we adapted a recent probabilistic algorithm known as Latent Dirichlet Allocation (LDA). Given thousands of object-labeled natural images, LDA analyzes the label co-occurrences and 'learns' the distribution of objects in various scene classes. (The number of scene classes is a free parameter determined by the modeler; the specific scene classes are latent states learned by LDA.) We then presented thousands of natural scenes to subjects while recording fMRI-BOLD activity in retinotopic and object-selective visual cortex. Afterwards, we estimated the scene-specific selectivity of all recorded voxels by regressing the BOLD responses evoked by each image onto the scene classes provided by LDA. We find that specific regions within lateral and ventral occipital-temporal areas are selective for various specific classes of natural scenes. (This is consistent with previous results; Naselaris et al. 2009.) Selectivities, as determined by our method, are generally consistent with selectivities defined by standard functional localizers. However, we also find scene selectivity in many areas that are not identified by standard functional localizers. Finally, in order to determine how many distinct scene classes might be represented in anterior visual cortex, we varied the number of scene classes learned by LDA. The best model descriptions were obtained when the model learned 8-18 scene classes. In summary, our results suggest that regions within occipital-temporal visual cortex represent the distribution of objects in certain specific categories of natural scenes.
Acknowledgement: NEI
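The LDA step itself can be reproduced with an off-the-shelf implementation. A hedged sketch follows (not the authors' code): synthetic Poisson counts stand in for the real object-labeled image database, and the preprocessing details are assumptions.

```python
# Hedged sketch of the modeling step described above: fit LDA to per-image
# object-label counts; each latent class is a distribution over object labels.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
label_counts = rng.poisson(0.3, size=(1000, 352))   # images x object labels

lda = LatentDirichletAllocation(n_components=12, random_state=0)
scene_mixture = lda.fit_transform(label_counts)     # images x scene classes

# Each image's scene-class mixture could then be regressed against its
# evoked BOLD response; lda.components_ holds each class's object profile.
print(scene_mixture.shape, lda.components_.shape)   # (1000, 12) (12, 352)
```

Sweeping `n_components` and comparing model fit is the analogue of the 8-18 scene-class search the abstract reports.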
23.545 Faces and Places in the Brain: An MEG Investigation
Davide Rivolta1 (drivolta@maccs.mq.edu.au), Laura Schmalzl1, Romina Palermo2, Mark Williams1; 1Macquarie Centre for Cognitive Science (MACCS), Macquarie University, Sydney, Australia, 2Department of Psychology, The Australian National University, Canberra, Australia
Faces and places are ubiquitous in our environment, and recognition of both is crucial for everyday life functioning. Magnetoencephalography (MEG) studies have shown that the perception of faces generates specific MEG components, including ones at 100 ms and 170 ms post stimulus onset, whereas the perception of places generates an MEG component between 200 and 300 ms post stimulus onset. Given that humans grasp the "gist" of a scene within a small fraction of a second, we aimed to investigate the existence of a potential MEG component associated with place perception occurring earlier than that previously described in the literature (between 200 and 300 ms post stimulus onset). MEG activity was recorded while 11 participants were presented with pairs of face and place stimuli (S1 followed by S2) that were either the same (repeated condition) or different (unrepeated condition). Fifty percent of the stimuli were famous and fifty percent unfamiliar. MEG activity associated with S1 was used to define regions of interest (ROIs) for faces and places in both hemispheres. Subsequently, the time course of MEG activity associated with S2 was examined within the ROIs. Our results showed that the MEG activity associated with face and place perception differed across hemispheres. In right occipito-temporal ROIs we found that the amplitude of MEG activity at 100 ms post stimulus onset was significantly higher for faces compared to places, in line with the previously described M100 for faces. In contrast, in left occipito-temporal ROIs we found the opposite pattern, namely a significantly higher MEG activity at 100 ms post stimulus onset for places compared to faces, a novel component we referred to as M100p. Neither the M100 nor the M100p was affected by familiarity or by repetition, and they are therefore likely to be associated with face/place categorization.


23.546 Gamma oscillations decompose the visual scene into object-based perceptual cycles: a computational model
Miconi Thomas1 (thomas.miconi@cerco.ups-tlse.fr), VanRullen Rufin1; 1CerCo Lab, CNRS
Several hypotheses exist regarding the functional role of oscillations in the brain. Among these is the well-known binding-by-synchrony hypothesis, suggesting that neurons coding for a specific object or concept tend to fire in synchrony (i.e. within the same cycle), creating a neural code for binding together features of the same object (Gray et al., 1989). Recently gamma oscillations have also been proposed as a competitive "gating" process, implementing a winner-take-all mechanism, allowing only the most excited neurons to fire at each cycle (Almeida, Idiart & Lisman 2009). Here we suggest that both mechanisms can be seen as two aspects of a single process, namely the decomposition of a visual scene into "perceptual cycles", with neurons for each object firing at successive cycles of the oscillation. We describe a simple model of V1 in which oscillations, binding by synchrony and scene decomposition all emerge automatically and spontaneously from the interactions between three simple, well-known mechanisms, namely feedback inhibition, refractory periods, and lateral (or feedback) connections implementing gestalt principles. Despite its extreme simplicity, the system gives rise to spontaneous decomposition of the scene into perceptual cycles, such that neurons encoding different objects tend to fire on different cycles. The system is applied both to artificial and natural images. We then show that these results persist when oscillations are exogenously imposed from a separate source (as opposed to endogenously generated, "gamma-like" oscillations). This suggests that the basic principle described here may extend to other oscillatory regimes besides gamma.
Acknowledgement: EURYI, ANR 06JCJC-0154

23.547 In search of neural signatures of visual binding: a MEG/SSVEF study
Charles Aissani1 (charles.aissani@upmc.fr), Benoit Cottereau1, Anne-Lise Paradis1, Jean Lorenceau1; 1Université Pierre et Marie Curie-Paris 06, Unité Mixte de Recherche 7225, S-975, Centre de Recherche de l'Institut Cerveau-Moelle, Hôpital de la Pitié-Salpêtrière, Paris F-75013, France
Visual processes are distributed across numerous specialized cortical areas. How neural responses are bound into assemblies to elicit a unified perception remains a central issue in cognitive neuroscience. To address this issue, we conducted an MEG experiment to characterize the neural mechanisms of coherent visual motion integration. Stimuli consisting of 2 vertical and 2 horizontal disconnected bars oscillating at 2.3 Hz and 3 Hz, respectively, were arranged in a "square"-like shape. Such periodic stimulation elicits stereotypical oscillatory MEG activity phase-locked to the visual stimulation at the first and second harmonics in cortical areas processing the stimulus elements (SSVEP). Subjects' perception, either a rigid square or a non-rigid square, was modulated by subtle changes in luminance distribution along the bars, resulting in 4 experimental conditions. This design allows separating SSVEPs relative to element computation (f, 2f, 4f, etc.) from SSVEPs linked to motion integration at intermodulation frequencies (nf1 + nf2). The behavioural results confirmed that bars with low-luminance line-ends enhance rigid square perception. MEG analyses reveal focal and percept-independent activities at first harmonics on occipital sensors, with source reconstruction showing retinotopic segregation between 2.3 Hz and 3 Hz activities in V1. We also found more widespread percept-independent activities at second harmonics over occipital and parietal sensors. Finally, the contrast between rigid and non-rigid percepts showed fourth-order intermodulation frequency enhancement localized in the right frontal cortex, likely the frontal eye fields (FEF). Overall, tagged stimulation and SSVEP analysis provide a precise localisation of cortical activities related to the computation of focal visual elements, consistent with retinotopy. This methodology further highlights an electrophysiological signature of shape processing from ambiguous motion components at a fourth-order intermodulation frequency in FEF.
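Because the bars are frequency-tagged, element-driven and integration-driven responses separate cleanly in the spectrum. The sketch below enumerates the harmonic and intermodulation frequencies implied by the two tags (f1 = 2.3 Hz, f2 = 3 Hz, from the abstract); defining order as |n| + |m| is our assumption about the "fourth-order" terminology.

```python
# Enumerate the frequencies implied by the two tags above. Element responses
# fall on harmonics of f1 and f2; integration responses fall on
# intermodulation terms n*f1 + m*f2. With |n| + |m| as the "order" (assumed),
# 2*f1 + 2*f2 = 10.6 Hz is one fourth-order component.
f1, f2 = 2.3, 3.0

harmonics = sorted({k * f for f in (f1, f2) for k in range(1, 5)})
intermods = sorted({round(n * f1 + m * f2, 4)
                    for n in range(-4, 5) for m in range(-4, 5)
                    if n and m and 2 <= abs(n) + abs(m) <= 4
                    and n * f1 + m * f2 > 0})

print("tag harmonics:", harmonics)          # 2.3, 3.0, 4.6, 6.0, ...
print("intermodulation terms:", intermods)  # includes 10.6 = 2*f1 + 2*f2
```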
23.547 In search of neural signatures of visual binding: a MEG/SSVEF study
Charles Aissani 1 (charles.aissani@upmc.fr), Benoit Cottereau 1 , Anne-Lise Paradis 1 , Jean Lorenceau 1 ; 1 Université Pierre et Marie Curie-Paris 06, Unité Mixte de Recherche 7225, S-975, Centre de Recherche de l’Institut Cerveau-Moelle, Hôpital de la Pitié-Salpêtrière, Paris F-75013, France
Visual processes are distributed across numerous specialized cortical areas. How neural responses are bound into assemblies to elicit a unified perception remains a central issue in cognitive neuroscience. To address this issue, we conducted an MEG experiment to characterize the neural mechanisms of coherent visual motion integration. Stimuli consisting of two vertical and two horizontal disconnected bars, oscillating at 2.3 Hz and 3 Hz respectively, were arranged in a square-like shape. Such periodic stimulation elicits stereotypical oscillatory MEG activity, phase-locked to the visual stimulation at the first and second harmonics, in cortical areas processing the stimulus elements (SSVEP). Subjects’ perception, either a rigid square or a non-rigid square, was modulated by subtle changes in the luminance distribution along the bars, resulting in four experimental conditions. This design allows separating SSVEP responses related to element computation (f, 2f, 4f, etc.) from SSVEP responses linked to motion integration at intermodulation frequencies (n·f1 + m·f2). The behavioural results confirmed that bars with low-luminance line-ends enhance rigid square perception. MEG analyses reveal focal, percept-independent activity at the first harmonics on occipital sensors, with source reconstruction showing retinotopic segregation between the 2.3 Hz and 3 Hz activity in V1. We also found more widespread percept-independent activity at the second harmonics over occipital and parietal sensors. Finally, the contrast between rigid and non-rigid percepts showed a fourth-order intermodulation frequency enhancement localized in the right frontal cortex, likely the frontal eye fields (FEF). Overall, tagged stimulation and SSVEP analysis provide a precise localisation of cortical activity related to the computation of focal visual elements, consistent with retinotopy. This methodology further highlights an electrophysiological signature of shape processing from ambiguous motion components at a fourth-order intermodulation frequency in FEF.
23.548 Incongruent visual scenes: Where are they processed in the brain?
Florence Rémy 1,2 (florence.remy@cerco.ups-tlse.fr), Nathalie Vayssière 1,2 , Delphine Pins 3 , Muriel Boucart 3 , Michèle Fabre-Thorpe 1,2 ; 1 Université de Toulouse, UPS, Centre de Recherche Cerveau et Cognition, France, 2 CNRS CerCo, Toulouse, France, 3 Laboratoire Neurosciences Fonctionnelles et Pathologies, UMR 8160, Université Lille Nord de France, CHRU Lille, Lille, France
Object and context processing interfere in rapid visual categorization of objects in briefly flashed natural scenes, suggesting that objects and context in a visual scene are processed in parallel with strong early interactions (Joubert et al., 2008). The present fMRI study aimed to investigate cerebral activations elicited by the processing of scenes with either congruent or incongruent object/context relationships. Fifteen subjects were instructed to categorize objects (man-made objects or animals) in briefly presented stimuli (exposure duration = 100 ms), using a forced-choice two-button response. Half the objects were pasted into expected (congruent) contexts, whereas the other half were shown in incongruent contexts. Our behavioural results support previously reported data, showing that object categorization is more accurate (+14%) and faster (-32 ms) in congruent vs. incongruent scenes. Moreover, we found that the two types of scenes elicited differential neural processing. The processing of incongruent scenes induced increased activations in the right parahippocampal and retrosplenial cortices, as well as in the right middle frontal gyrus (P
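As background to the frequency-tagging logic used in 23.547 above, the short script below (assumed sampling rate and a made-up nonlinearity) shows why power at an intermodulation frequency such as 2·f1 + 2·f2 indexes integration of the two tagged elements, whereas harmonics of f1 or f2 alone can arise from each element separately.

```python
import numpy as np

# Illustration of intermodulation frequencies (a sketch, not the authors'
# pipeline): f1 and f2 are the bar frequencies from 23.547; fs and the
# nonlinearity are assumptions for this demo.
f1, f2 = 2.3, 3.0          # element frequencies (Hz)
fs, dur = 600.0, 90.0      # assumed sampling rate and sequence length (s)
t = np.arange(0, dur, 1 / fs)

e1 = np.sin(2 * np.pi * f1 * t)
e2 = np.sin(2 * np.pi * f2 * t)
linear = e1 + e2                            # element responses only
nonlinear = linear + 0.2 * (e1 * e2) ** 2   # toy integrative nonlinearity

def power_at(signal, freq):
    """Power at one frequency from the FFT of the full recording."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    return spectrum[np.argmin(np.abs(freqs - freq))]

# A fourth-order intermodulation component: 2*f1 + 2*f2 = 10.6 Hz.
im = 2 * f1 + 2 * f2
for name, sig in [("linear", linear), ("nonlinear", nonlinear)]:
    print(f"{name}: P({f1} Hz)={power_at(sig, f1):.1f}, "
          f"P({im:.1f} Hz)={power_at(sig, im):.1f}")
```

Only the nonlinear (integrative) signal carries power at 10.6 Hz, which is the logic behind reading the FEF intermodulation enhancement as a signature of binding.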


to the spatially consistent object pairs were significantly shorter than to the spatially inconsistent pairs. These contextual effects disappeared, however, when visual attention was drawn to one of the two objects while its counterpart object was unattended and irrelevant to task requirements. Follow-up experiments further explored the specific attentional conditions under which contextual associations facilitate object recognition. Our results reveal that contextual facilitation is a robust phenomenon that occurs under a variety of visual and attentional conditions, even when objects are merely glanced at for a brief duration. Contextual facilitation disappears, however, if the associated information is strictly unattended, despite evidence for coarse processing of this information. ‘Contextual binding’ of multiple associated objects thus requires attentional resources. Our findings have important implications for the effects of attention on visual object recognition within relatively ecological, real-world environments.
Acknowledgement: This research was funded by the Open University Research Fund.
23.551 Effects of set size and heterogeneity in set representation by statistical properties
Alexander Marchant 1 (a.marchant@gold.ac.uk), Jan de Fockert 1 ; 1 Goldsmiths, University of London
Recent evidence suggests that observers show accurate knowledge of the mean size of a group of similar objects, a finding that has been interpreted to suggest that sets of multiple objects are represented in terms of their statistical properties, such as mean size (Ariely, 2001; Chong & Treisman, 2003; Marchant & de Fockert, 2009). A surprising finding is that this effect can be shown across different set sizes (from 4 to 16 members) with little or no detriment to judgements of mean set size (Ariely, 2001; Chong & Treisman, 2005b, Exp. 1). However, these studies have always held heterogeneity constant whilst manipulating set size. Here, we present data that replicate past findings when heterogeneity is held constant, but show that the accuracy of mean size estimations decreases when both set size and heterogeneity increase. Our findings suggest that summary representations are not always obtained by averaging together the whole set of items, a feat that often requires a greater capacity than known focused attention processes and has therefore required the proposal of a new perceptual mechanism (Chong & Treisman, 2003, 2005a, 2005b; Treisman, 2006). Instead, summary representations may be based on a sub-sample from the set, within the capacity of focused attention (Myczek & Simons, 2008). Increased variation in the set leads to more variation in the possible sub-samples and a less accurate approximation of the set summary statistic. These results have implications for current theory and the proposed role this mechanism plays in scene perception (Treisman, 2006; Oliva & Torralba, 2007).
Acknowledgement: ESRC award PTA-030-2006-00212
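The sub-sampling account in 23.551 is easy to simulate. The sketch below uses toy parameters of our choosing (not the study's stimuli) to show that a small attended sub-sample yields mean-size estimates whose error grows with heterogeneity while remaining largely flat across set size.

```python
import numpy as np

# Sub-sampling simulation for the account discussed in 23.551 (all values
# are assumed for illustration). Items are sampled with replacement, which
# is adequate for this qualitative demonstration.
rng = np.random.default_rng(1)
n_trials, subsample = 10000, 3       # e.g. ~3 items within focal attention

for set_size in (4, 16):
    for heterogeneity in (0.05, 0.30):   # sd of item sizes (arbitrary units)
        sizes = 1.0 + heterogeneity * rng.standard_normal((n_trials, set_size))
        idx = rng.integers(0, set_size, (n_trials, subsample))
        est = np.take_along_axis(sizes, idx, axis=1).mean(axis=1)
        err = np.abs(est - sizes.mean(axis=1)).mean()
        print(f"set size {set_size:2d}, heterogeneity {heterogeneity:.2f}: "
              f"mean |error| = {err:.3f}")
```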
23.552 Object identification in spatially filtered scene background
Ching-Fan Chu 1 (hululuu@gmail.com), Chien-Chung Chen 1 , Yei-Yu Yeh 1 ; 1 Department of Psychology, National Taiwan University
We investigated the role of low-passed scene information in object identification in a previous study. We showed that, with short presentation, identification of a low-passed object embedded in a low-passed scene was not better than that of the object alone (Chu et al., 2009, VSS). However, this might be due to lateral masking from the scene, which has the same spatial frequency spectrum as the object. If so, identification of an object should be improved when the power spectrum of the spatially filtered object is different from that of the spatially filtered scene. In the present experiments, the objects and the scenes were processed by different filters. In Experiment 1, photos of 20 target objects were presented on 20 natural scenes. The objects were either low-passed or high-passed with a 2 cpd cut-off frequency. The scene backgrounds were either low-passed or high-passed with six possible cut-off frequencies (0.4, 0.7, 1.4, 2.8, 4, and 5.6 cpd), resulting in a 2 x 2 design. The viewing duration was 36 ms. The task of the observers was to name the target object. We found that identification of high-passed objects on low-passed scenes was better than in the other three target-scene combinations. To investigate the critical role of low-passed scene information, spatial filtering was applied to natural scenes or to phase-scrambled scenes while objects were not filtered. The low-passed scrambled background produced a greater masking effect than the low-passed scene background. Our results suggest that the low spatial frequency information in the scene background benefits the processing of high spatial frequency components of objects through the reduction of lateral masking in the frequency domain.
Acknowledgement: Supported by NSC 96-2413-H-002-006-MY3 to CCC and NSC 96-2413-H-002-007-My3 to YYY
23.553 In search of a magnocellular deficit in Optic Neuritis patients
Celine Perez 1,2,3 (Celine.Perez@upmf-grenoble.fr), Celine Cavezian 1,2,3 , Pamela Laliette 1,2 , Anne-Claire Viret 1,2,3 , Isabelle Gaudry 1,2,3 , Noa Raz 4 , Netta Levin 4 , Tamir Ben-Hur 4 , Olivier Gout 3 , Sylvie Chokron 1,2,3 ; 1 Laboratoire de Psychologie et NeuroCognition, CNRS, UMR 5105, UPMF, Grenoble, France, 2 ERT TREAT VISION, Fondation Ophtalmologique Rothschild, Paris, France, 3 Service de Neurologie, Fondation Ophtalmologique Rothschild, Paris, France, 4 Department of Neurology, Hadassah Hebrew-University Hospital, Jerusalem, Israel
Optic neuritis (ON) is an acute inflammatory disease of the optic nerve. Following visual acuity recovery, several patients report visual discomfort although ophthalmologic assessments show a complete recovery. To evaluate what could induce these complaints, the present study investigated visual processing in healthy individuals and in patients with recovered ON. Specifically, the magnocellular pathway was assessed. Two types of visual tasks were administered in monocular vision to two different groups of controls and patients. First, 18 controls and 7 patients (4 left ON, 3 right ON) had to detect and categorize low (LSF) or high (HSF) spatial-frequency scenes to assess the magno- or parvocellular pathways. Then, the performance of 16 controls and 5 patients (4 left ON, 1 right ON) was recorded during a denomination task of forms and moving objects (Objects From Motion, OFM). These objects could only be perceived through a contradictory movement of black and white dots and mainly require the implication of the magnocellular pathway. Patients showed normal visual analysis of low spatial frequency scenes compared to controls, but had difficulties in naming OFM with their affected eye (AE) (F(1,19)=17.47;p
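The low-pass / high-pass manipulation described in 23.552 above can be made concrete with a generic frequency-domain filter. The Gaussian roll-off and all parameter values below are illustrative assumptions, not the authors' filter; the only study-specific idea borrowed is that cut-offs are stated in cycles per degree (cpd), so the image's angular size is needed to convert them to cycles per image.

```python
import numpy as np

def filter_image(img, cutoff_cpd, image_deg, kind="low"):
    """Low- or high-pass filter `img` at `cutoff_cpd`, for an image
    spanning `image_deg` degrees of visual angle (Gaussian roll-off)."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None] * h / image_deg   # cycles per degree
    fx = np.fft.fftfreq(w)[None, :] * w / image_deg
    radius = np.sqrt(fx ** 2 + fy ** 2)
    lowpass = np.exp(-(radius / cutoff_cpd) ** 2)
    gain = lowpass if kind == "low" else 1.0 - lowpass
    return np.real(np.fft.ifft2(np.fft.fft2(img) * gain))

# Toy usage: random arrays stand in for a scene spanning 10 deg, low-passed
# at one of the background cut-offs (1.4 cpd), and an object image
# high-passed at the 2 cpd object cut-off mentioned in 23.552.
img = np.random.default_rng(2).random((256, 256))
scene_lp = filter_image(img, cutoff_cpd=1.4, image_deg=10.0, kind="low")
object_hp = filter_image(img, cutoff_cpd=2.0, image_deg=10.0, kind="high")
print(scene_lp.std(), object_hp.std())
```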


Saturday Afternoon Talks
Perceptual organization: Contours and 2D form
Saturday, May 8, 2:45 - 4:15 pm
Talk Session, Royal Ballroom 1-3
Moderator: Elisabeth Hein
24.11, 2:45 pm
Distortions of illusory shape and motion by unseen motions
Barton L. Anderson 1 (barta@psych.usyd.edu.au), Michael Whitbread 1 ; 1 University of Sydney
Most models of illusory contours (ICs) have focused on geometric factors as the primary determinant of IC shape. Here, we report a new class of IC displays in which distortions of both perceived shape and motion are induced by unseen, locally ambiguous motion signals that arise from the ambiguity of the aperture problem. When an outline diamond is translated behind a counter-translating camouflaged triangular occluder, the perceived motion and shape of the triangular occluder are distorted: the translating occluder appears to contain a strong illusory rotational component of motion, and the angular subtense of the triangle is substantially reduced. We performed a series of experiments that parametrically varied the velocity of the occluding and occluded figures and the aspect ratio of the occluded diamond and the (illusory) occluding triangle. Observers adjusted a triangular figure in which the angles of the triangle, as well as the translational and rotational components of motion, matched the perceived motion and shape of the IC. Our results show that the distortions in perceived shape and rotational motion were primarily a function of the angular subtense of both the occluded and occluding figures; other factors, such as the translational velocity or the velocity of contour terminators, played a negligible role in these distortions. Our results are consistent with a model in which the motions and shapes of the ICs are distorted by an induced motion imparted by the unseen orthogonal component of motion of the occluded diamond. If correct, these data suggest that the observed distortions in shape and perceived motion arise at a very early stage of cortical motion processing, prior to the resolution of the aperture problem.
Acknowledgement: Australian Research Council
24.12, 3:00 pm
The role of mid-level representations in resolving object correspondence
Elisabeth Hein 1 (elisabeth-hein@uiowa.edu), Cathleen Moore 1 ; 1 Department of Psychology, University of Iowa
To maintain stable object representations over time despite discontinuities in the visual input, the visual system must determine how newly sampled information relates to existing object representations. Despite the long tradition of research investigating this “correspondence problem”, it is still unclear what factors influence its resolution. We examined the relative roles of spatio-temporal and feature information (e.g., color, size, orientation and contrast polarity) in resolving object correspondence in ambiguous apparent motion displays (Ternus displays). We found that feature information plays an important role in resolving object correspondence and can even overwhelm spatio-temporal information under some conditions. Moreover, it is not just featural identity that can determine object correspondence; featural similarity can as well. Finally, we found that correspondence was determined by the perceived values (i.e., perceived size and lightness) of the stimuli rather than by their physical values (i.e., size and luminance). This suggests that object correspondence is established at higher levels of visual processing than previously thought.
In summary, we argue that the visual system is remarkably flexible with regard to what information it uses to organize retinal information into functionally meaningful units and to update these representations over time.
Acknowledgement: Supported by NSF grant BCS-0818536 to CMM
24.13, 3:15 pm
The role of symmetry and volume in figure-ground organization
Tadamasa Sawada 1 (sawada@psych.purdue.edu), Mary A. Peterson 2 ; 1 Department of Psychological Sciences, Purdue University, 2 Department of Psychology, University of Arizona
Many objects in our environment are symmetrical and volumetric. These two constraints are extremely effective in 3D shape recovery (Sawada, 2009). In contrast, the spaces between objects, representing the background, are almost never symmetrical and volumetric. Prior studies of figure-ground organization showed that symmetrical regions are perceived to be figures only slightly (albeit significantly) more often than expected by chance (Salvagio, Mojica & Peterson, 2008). In this study we examine the hypothesis that the addition of volume enhances the likelihood of seeing symmetrical regions as figures. Experiment: If the observer perceives a given region in a stimulus as a figure (object), then she should be able to recognize the shape of the region; at the same time, we do not expect the observer to be able to recognize the shape of the background (Rubin, 1915). We tested human performance on a shape-matching task using signal detection analysis methods. On each trial, the observer was shown two stimuli, one after the other. Each stimulus was composed of several horizontally aligned regions. The viewing duration was 500 ms for each stimulus. A mask was shown for 500 ms between the stimuli. The observer’s task was to memorize the shapes of the specified regions in the first stimulus and to judge whether or not the shapes of the regions in the second stimulus were identical to the memorized shapes. The shapes of the regions were controlled to be either symmetrical or asymmetrical. The volumes of the regions were controlled using the depth cue of surface contours. The results suggest that it is easier to recognize the shape of regions that are both symmetrical and volumetric. We conclude that the perceptual assignment of which regions are figures depends on the presence of 3D symmetry.
Acknowledgement: National Science Foundation, US Department of Energy, Air Force Office of Scientific Research
24.14, 3:30 pm
CSI Berkeley Episode II: Perceptual organization and selective attention
Karen B. Schloss 1 (kschloss@berkeley.edu), Francesca C. Fortenbaugh 1 , Eli D. Strauss 2 , Stephen E. Palmer 1,2 ; 1 Department of Psychology, University of California, Berkeley, 2 Program in Cognitive Science, University of California, Berkeley
Last year we described the Configural Shape Illusion (CSI), in which the shape of a rectangular target is distorted by an attached region, or “inducer” (Palmer, Schloss, & Fortenbaugh, VSS-2009): the target’s perceived aspect ratio changes toward the aspect ratio of the whole configuration. We also showed that the illusion increases as grouping increases, due to connectedness, proximity, lightness similarity, hue similarity, and shape similarity. We now show that CSI magnitude is an inverted U-shaped function of inducer height that scales with overall target size, increasing rapidly as the height of the inducer increases from zero and then diminishing slowly, but never reversing in sign, as its height increases further.
Because grouping strength was previously shown to affect CSI magnitude, we also measured perceived grouping between targets and inducers of different sizes. The grouping function was qualitatively similar to the CSI function, and when it was scaled by target size, the correlation between grouping strength and CSI magnitude was 0.91. We suggest that the CSI is caused by the inability to selectively attend to the target to the extent that it is grouped with the inducer, such that the size and shape of the global configuration influence the perceived size and shape of the target. We tested this hypothesis using a Stroop-like interference task in which participants were asked to categorize the target as taller-than-wide or wider-than-tall, when the aspect ratio of the whole configuration (target plus inducer) was either consistent or inconsistent with the target’s aspect ratio. The pattern of reaction times was consistent with Stroop interference: response times slowed when the global and target aspect ratios were inconsistent (taller/wider or wider/taller), but there was no facilitation when they were consistent (taller/taller or wider/wider). The results support a role for selective attention in causing the CSI.
Acknowledgement: National Science Foundation Grant BCS-0745820


24.15, 3:45 pm
Contour Grouping and Natural Shapes: Beyond Local Cues
James H. Elder 1 (jelder@yorku.ca), Timothy D. Oleskiw 1 , Erich W. Graf 2 , Wendy J. Adams 2 ; 1 Centre for Vision Research, York University, Canada, 2 School of Psychology, University of Southampton, UK
The perception of boundary shape depends upon the organization of local orientation signals into global contours. Models generally assume that grouping is based upon local Gestalt relationships such as proximity and good continuation. While there have been reports that the global property of contour closure is involved in this process (e.g., Kovacs & Julesz, 1993), a more recent study suggests otherwise (Tversky, Geisler & Perry, 2004). This raises the question: is contour grouping completely insensitive to global properties of the stimulus, depending only upon local Gestalt cues? To address this question, we conducted a psychophysical experiment in which observers were asked to detect briefly presented target contours in noise. Contours were represented as sequences of short line segments, and the noise was composed of randomly positioned and oriented segments of the same length. We used QUEST to estimate the threshold number of noise elements at 75% correct performance in a present/absent task. Three conditions were tested. In Condition 1, targets were the closed bounding contours of 391 animal shapes derived from the Hemera object database. These contours afford local Gestalt properties but also a host of global properties, including closure. In Condition 2, we created first-order metamers of these contours by randomly shuffling the order of the angles between neighbouring segments. This preserves all local Gestalt properties exactly, but destroys all higher-order properties. In Condition 3, we also randomized the signs of the angles, thus removing a convexity bias. While noise thresholds were similar for Conditions 2 and 3, they were significantly higher for Condition 1, suggesting a global influence on grouping. Further analysis suggests that this difference cannot be explained by differences in stimulus eccentricity, element density, or the contour intersections produced in shuffled stimuli. Instead, the results point to a process of perceptual organization that goes beyond local, first-order cues.
Acknowledgement: This work was supported by grants from NSERC and GEOIDE
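The QUEST procedure used in 24.15 is a Bayesian adaptive staircase (Watson & Pelli, 1983). The sketch below implements the core idea with assumed psychometric parameters; note that it tracks the Weibull threshold parameter (about 82% correct under these settings) rather than reproducing the study's exact 75%-correct criterion.

```python
import numpy as np

# QUEST-style adaptive threshold estimation (assumed parameters throughout).
rng = np.random.default_rng(3)

log_levels = np.linspace(0, 3, 301)          # log10(number of noise elements)
prior = np.exp(-0.5 * ((log_levels - 1.5) / 1.0) ** 2)
posterior = prior / prior.sum()

def p_correct(log_noise, log_thresh, slope=3.5, guess=0.5, lapse=0.02):
    """Weibull psychometric function: performance falls as noise increases."""
    p = 1 - (1 - guess) * np.exp(-10 ** (slope * (log_thresh - log_noise)))
    return np.clip(p, guess, 1 - lapse)

true_log_thresh = 2.0                        # simulated observer: 100 elements

for trial in range(60):
    test = log_levels[np.argmax(posterior)]  # test at the posterior mode
    correct = rng.random() < p_correct(test, true_log_thresh)
    like = p_correct(test, log_levels) if correct else 1 - p_correct(test, log_levels)
    posterior *= like                        # Bayesian update of the threshold
    posterior /= posterior.sum()

estimate = 10 ** log_levels[np.argmax(posterior)]
print(f"threshold estimate: {estimate:.0f} noise elements (true: 100)")
```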
Face perception: Brain mechanisms
Saturday, May 8, 2:45 - 4:15 pm
Talk Session, Royal Ballroom 4-5
Moderator: Bradley Duchaine
24.21, 2:45 pm
Functional lateralization of face processing
Ming Meng 1 (ming.meng@dartmouth.edu), Tharian Cherian 2 , Gaurav Singal 3 , Pawan Sinha 4 ; 1 Dartmouth College, 2 Duke University, 3 Harvard Medical School, 4 MIT
Several fMRI researchers have noted that face-induced brain activity is more reliably localized in the right fusiform gyrus than in the left. However, we lack a precise characterization of the hemispheric differences in facial selectivity. Identifying the nature of these functional asymmetries is crucial for understanding the organization of face processing in the human brain. To address this need, we undertook a three-pronged approach: 1. We compared brain activation in the left and right fusiform gyri induced by a set of natural images that span a range of facial similarity from random non-faces to genuine faces. 2. We measured the modulatory influence of contextual information on brain activation patterns. 3. We evaluated the temporal dynamics of face processing in the left and right fusiform gyri using a rapid event-related design. Results on all three fronts have revealed interesting hemispheric differences. Specifically, we found that: 1. Activation patterns in the left fusiform gyrus correlate with image-level face-semblance, while those in the right correlate with categorical face/non-face judgments. 2. Contextual information transforms graded responses in the left fusiform into categorical ones, but does not qualitatively change the responses in the right. 3. Graded pattern analyses in the left occur earlier than categorical analyses in the right fusiform. Contextual modulation, too, is evident earlier in the left than in the right. Furthermore, face-selectivity persists in the right even after activity in the left has returned to baseline. These results provide important clues regarding the functional architecture of face processing. They are consistent with the notion that the left hemisphere is involved in rapid processing of ‘low-level’ face semblance, and is perhaps a precursor to categorical analyses in the right (cf. Rossion et al., 2000; de Gelder & Rouw, 2001; Miller, Kingstone, & Gazzaniga, 2002).
24.22, 3:00 pm
Robust visual adaptation to face identity over the right occipito-temporal cortex: a steady-state visual potential approach
Adriano Boremanse 1,2 (bruno.rossion@psp.ucl.ac.be), Ernesto Palmero-Soler 1,2 , Benvenuto Jacob 2 , Bruno Rossion 1,2 ; 1 Institute of Psychological Science, University of Louvain, 2 Institute of Neuroscience, University of Louvain
Over recent years, visual adaptation has been used as a tool to probe the response properties of face-sensitive areas of the visual cortex in neuroimaging studies (e.g., Grill-Spector et al., 2006), as well as in scalp event-related potential studies aiming to clarify the time-course of sensitivity to facial features and their integration in the human brain (e.g., Jacques et al., 2007). However, this approach is often limited by a low signal-to-noise ratio (SNR), as well as by ambiguity in the measurement and quantification of adaptation effects. Here we tested the sensitivity of the visual system to face identity adaptation using steady-state visual evoked potentials (SSVEP; Regan, 1966). Twelve subjects were presented with a 90 s sequence of faces shown at a constant rate (3.5 Hz, i.e., 3.5 faces per second) while high-density electroencephalogram (EEG) was recorded (128 channels). A Fast Fourier Transform (FFT) of the EEG data showed a clear and specific response at the fundamental frequency (3.5 Hz) and its harmonics (7 Hz, 10.5 Hz, …) over posterior electrode sites. EEG power at 3.5 Hz over a few contiguous occipito-temporal channels of the right hemisphere was much larger when face identity changed at that rate than when the same face was repeated throughout the sequence. Significant effects of face identity adaptation were found in every single participant after only a few minutes of EEG recording. This effect was not due to low-level feature adaptation, since it was observed despite large changes in face size, but it disappeared for faces presented upside-down. This first demonstration of SSVEP adaptation to face identity in the human brain confirms previous observations using a much simpler, faster and higher-SNR approach.
It offers a promising tool to study, unambiguously and more comfortably, the sensitivity to the processing of visual features in individual faces in various populations presenting a much lower SNR of their electrical brain responses (e.g., infants, brain-damaged patients).
24.23, 3:15 pm
Early visually evoked electrophysiological responses over the human brain (P1, N170) show stable patterns of face-sensitivity from 4 years to adulthood
Dana Kuefner 1 (dana.kuefner@uclouvain.be), Adélaïde de Heering 2 , Corentin Jacques 3 , Ernesto Palmero-Soler 1 , Bruno Rossion 1 ; 1 Université Catholique de Louvain, 2 McMaster University, 3 Stanford
Whether the development of face recognition abilities truly reflects changes in how faces, specifically, are perceived, or rather can be attributed to more general perceptual or cognitive development, is debated. Event-related potential (ERP) recordings on the scalp offer promise for this issue because they allow brain responses to complex visual stimuli to be relatively well isolated from other sensory, cognitive and motor processes. ERP recordings in response to faces from 5-16-year-old children report large age-related changes in amplitude, latency (decreases) and topographical distribution of the early visual component P1 and the occipito-temporal N170 (Taylor, Batty & Itier, 2004). To test the face specificity of these effects, we recorded high-density ERPs to pictures of faces, cars, and their phase-scrambled versions from 72 children between 4 and 17 years, and from adults. We found that none of the age-related changes in amplitude, latency or topography of the P1 or N170 were specific to faces. Most importantly, when we controlled for age-related variations of the P1, the N170 appeared remarkably similar in amplitude and topography across development, with much smaller age-related decreases in latencies than previously reported. At all ages the N170 showed equivalent face-sensitivity; it was absent for scrambled stimuli, larger and earlier for faces than cars, and had the same scalp topography across ages. These data also illustrate the large amount of inter-individual and inter-trial variance in young children’s data. This variability appears to cause the N170 to merge with a later component, the N250, in grand-averaged data, explaining the previously reported “bi-fid” N170 of young children. Overall, we conclude that the classic electrophysiological markers of


face-sensitive perceptual processes are present as early as 4 years, an observation which does not support the view that face-specific perceptual processes undergo a long developmental course from infancy to adulthood.
24.24, 3:30 pm
A genetic basis for face memory: evidence from twins
Jeremy Wilmer 1 (jwilmer@wellesley.edu), Laura Germine 2 , Christopher Chabris 3 , Garga Chatterjee 2 , Mark Williams 4 , Ken Nakayama 2 , Bradley Duchaine 5 ; 1 Department of Psychology, Wellesley College, 2 Department of Psychology, Harvard University, 3 Department of Psychology, Union College, 4 Macquarie Centre for Cognitive Science, Macquarie University, 5 Institute of Cognitive Neuroscience, University College London
Compared to notable successes in the genetics of basic sensory transduction, progress on the genetics of higher-level perception and cognition has been limited. We propose that investigating specific cognitive abilities with well-defined neural substrates, such as face recognition, may yield additional insights. We used a classic twin design to determine the relative contributions of genes and environment to face recognition ability. Our measure of face recognition ability was the widely used Cambridge Face Memory Test (CFMT), a reliable, normed, well-validated test requiring study and then recognition of faces in novel views and novel lighting. We found that the correlation of scores between monozygotic twins (0.70) was both statistically indistinguishable from our measure’s test-retest reliability (0.70) and more than double the dizygotic twin correlation (0.30), evidence that genetic influence accounts for all of the CFMT’s familial resemblance and for a very large proportion of its total stable variation. We also used an individual-differences study to dissociate face recognition ability from other abilities. A low correlation between the CFMT and verbal recognition (0.17) demonstrated that the heritability we observed for the CFMT was not the result of motivation, attention, computer literacy, or general memory. A modest correlation between the CFMT and abstract art recognition (0.26) indicated that general visual processes make only limited contributions to CFMT performance. Our results therefore identify a rare phenomenon in behavioral genetics: a highly specific cognitive ability that is highly heritable. These results establish a clear genetic basis for one of the most intensively studied and socially advantageous of cognitive traits, opening a new domain to genetic investigation.
Acknowledgement: Funding for this project was provided by the Economic and Social Research Council to BD, an NIH fellowship to JW, an NSF grant to J. Richard Hackman and Stephen M. Kosslyn, and a DCI Postdoctoral Fellowship to CFC. This research was facilitated through the Australian Twin Registry, which is supported by an Enabling Grant from the National Health & Medical Research Council administered by The University of Melbourne.
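The twin correlations reported in 24.24 map directly onto the classic Falconer decomposition. The arithmetic below is illustrative only (the abstract reports the correlations, not this calculation):

```latex
% Falconer's approximation from twin correlations, with r_MZ = 0.70 and
% r_DZ = 0.30 as reported in 24.24 (back-of-envelope, not the study's model):
\[
h^2 = 2\,(r_{MZ} - r_{DZ}) = 2\,(0.70 - 0.30) = 0.80, \qquad
c^2 = 2\,r_{DZ} - r_{MZ} = 2(0.30) - 0.70 = -0.10 \approx 0
\]
```

The near-zero shared-environment term is the same conclusion the abstract draws: essentially all of the CFMT's familial resemblance is genetic.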
24.25, 3:45 pm
Resting-state functional connectivity within the face processing network of normal and congenitally prosopagnosic individuals
Marlene Behrmann 1 (behrmann@cnbc.cmu.edu), Leslie Ungerleider 2 , Fadila Hadj-Bouziane 2 , Ning Liu 2 , Galia Avidan 3 ; 1 Department of Psychology, Carnegie Mellon University, 2 Neurocircuitry Section, Laboratory of Brain & Cognition, National Institutes of Mental Health, 3 Department of Psychology and the Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev
A recent development in human neuroscience is the discovery of resting-state networks (RSN) whose coordinated activity can be uncovered using the spontaneous, relatively slow fluctuations (


a second number represents the strength of the dominant image, and its negative represents the suppressed image; (2) ocular- and image-component strengths add linearly; (3) the summed strengths are perturbed by additive Gaussian internal noise; (4) the highest-strength eye+image combination is perceived after the interruption. The transformation of the raw probability-of-perception data into strengths removes all interactions. For these images, ocular strength was greater than image strength. For all subjects individually, for all conditions investigated, the strengths of the ocular and image components decline over time at the same rate, completely in parallel.
Acknowledgement: Supported in part by NIH Award R01-MH068004-02 and NSF grant BCS-08434897
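The four model assumptions listed in 25.11 translate directly into a few lines of simulation. The component strengths and noise level below are assumed for illustration, not taken from the study:

```python
import numpy as np

# Sketch of the model in 25.11 (assumed values): each eye+image combination
# has a strength equal to the linear sum of an ocular and an image
# component; additive Gaussian noise perturbs the sums; the strongest
# combination is perceived after the interruption.
rng = np.random.default_rng(4)
n_trials = 100_000

ocular = {"dominant_eye": 0.8, "suppressed_eye": -0.8}   # assumed strengths
image = {"dominant_img": 0.3, "suppressed_img": -0.3}
noise_sd = 1.0

combos = [(e, i) for e in ocular for i in image]
strengths = np.array([[ocular[e] + image[i] for e, i in combos]])
noisy = strengths + noise_sd * rng.standard_normal((n_trials, len(combos)))
winners = noisy.argmax(axis=1)                # highest strength is perceived

for k, (e, i) in enumerate(combos):
    print(f"P(perceive {e} + {i}) = {(winners == k).mean():.3f}")
```

With ocular strength larger than image strength, as the abstract reports, the dominant-eye percepts win most often, whichever image they carry.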
25.12, 5:30 pm
Separate contributions of magno- and parvocellular streams to perceptual selection during binocular rivalry
Rachel Denison 1 (rdenison@berkeley.edu), Sarah Hillenbrand 1 , Michael Silver 2,3 ; 1 Neuroscience Graduate Program, University of California, Berkeley, 2 School of Optometry, University of California, Berkeley, 3 Helen Wills Neuroscience Institute, University of California, Berkeley
In binocular rivalry, conflicting images presented to the two eyes result in a visual percept that alternates between the two images, even though the visual stimuli remain constant. By dissociating the visual stimulus from the conscious percept, the study of binocular rivalry can shed light on the neural selection processes that lead to awareness. These selection processes have been shown to occur at multiple levels of the visual processing hierarchy, but the factors that determine the level at which perceptual selection is resolved are not well understood. Interocular switch (IOS) rivalry is a special form of binocular rivalry in which two conflicting images are swapped between the two eyes about three times per second. IOS rivalry elicits two types of percepts: eye rivalry, in which perceptual selection operates on monocular representations, and stimulus rivalry, which requires integration of information from both eyes over time and is thought to occur at a higher level in the visual processing hierarchy. We varied the spatial and temporal frequency of orthogonal gratings in IOS rivalry and measured the proportions of eye and stimulus rivalry. High spatial frequencies were preferentially associated with stimulus rivalry, and for low spatial frequency gratings, higher temporal frequencies promoted eye rivalry. This pattern correlates with the temporal and spatial frequency selectivities of the magno- and parvocellular visual streams. Specifically, it suggests that the magno stream is important for eye rivalry, while the parvo stream is associated with stimulus rivalry. We tested this hypothesis directly by using red/green isoluminant stimuli to reduce the activity of the magno stream and found that isoluminance increased the amount of stimulus rivalry, as predicted by the magno/parvo framework. This framework accounts for a number of stimulus dependencies reported in the IOS rivalry literature and suggests that the magno- and parvocellular pathways have distinct roles in perceptual selection.
Acknowledgement: National Science Foundation Graduate Research Fellowship
25.13, 5:45 pm
Plasticity of interocular inhibition with prolonged binocular rivalry
Chris Klink 1 (p.c.klink@uu.nl), Jan Brascamp 2 , Randolph Blake 2,3 , Richard van Wezel 4,5 ; 1 Functional Neurobiology, Helmholtz Institute, Utrecht University, 2 Vanderbilt Vision Research Center, Vanderbilt University, 3 Brain and Cognitive Sciences, Seoul National University, 4 Psychopharmacology, UIPS, Utrecht University, 5 Biomedical Signals and Systems, MIRA, Twente University
Anti-Hebbian learning rules have been suggested as a mechanism of synaptic plasticity in inhibitory synapses (Barlow & Foldiak, 1989). The basic idea is that the presence of coincidental pre- and postsynaptic activity increases the efficacy of the inhibitory synapse, while its absence decreases inhibitory strength. We investigated the role of anti-Hebbian learning mechanisms in interocular inhibition during binocular rivalry. In binocular rivalry, different images are presented to the two individual eyes. Rather than perceiving a mixed or averaged version of these images, observers typically perceive fluctuations between the two monocularly defined percepts. Computational models aiming to explain these perceptual fluctuations usually implement a form of mutual inhibition between percept-coding neuronal populations. Here we demonstrate that binocular rivalry is at least partially based on low-level cross-inhibition between monocular neurons with a different eye-of-origin, and that there is plasticity in the strength of these interocular inhibitions that is consistent with anti-Hebbian learning principles. We presented observers with prolonged binocular rivalry stimuli and found that the strength of interocular inhibition decreased over time, resulting in a higher incidence of mixed or superimposed percepts. With various stimuli and interleaved changes in eye-stimulus configuration, we demonstrate that this plasticity of inhibitory strength is stimulus- and eye-specific and exists for both simple gratings and more complex house/face stimuli. Further experiments revealed that recovery from ‘lowered inhibition’ only occurs if both eyes receive consistent visual information with features similar to those of the preceding rivalry stimuli. Neither monocular stimulation nor binocular stimuli with a different orientation and spatial frequency changed the strength of interocular inhibition back to baseline values. We conclude that, consistent with previously proposed anti-Hebbian learning rules, plasticity in interocular inhibition during prolonged binocular rivalry depends on simultaneity of activity in pre- and postsynaptic monocular neurons.
Acknowledgement: This work was supported by a High Potential grant from Utrecht University (CK & RvW), a Rubicon grant from the Netherlands Organisation for Scientific Research (JB), and NIH grant EY13358 (RB).
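A minimal form of the anti-Hebbian rule invoked in 25.13 can be written as a single weight update. The learning rate, decay, and activity levels below are assumptions made for this sketch:

```python
# Anti-Hebbian plasticity of an inhibitory weight (illustrative parameters):
# coincident pre- and postsynaptic activity strengthens inhibition; in its
# absence the weight decays. During rivalry the two monocular populations
# are rarely coactive, so interocular inhibition weakens over time, as
# reported in 25.13.

w = 1.0            # inhibitory weight between the two monocular populations
eta = 0.01         # learning rate (assumed)
history = []

for t in range(2000):
    dominant = (t // 200) % 2          # slow alternation between the eyes
    pre = 1.0 if dominant == 0 else 0.05
    post = 1.0 if dominant == 1 else 0.05
    w += eta * (pre * post - 0.1 * w)  # coactivity term minus passive decay
    w = max(w, 0.0)
    history.append(w)

print(f"inhibitory weight: start {history[0]:.2f}, end {history[-1]:.2f}")
```

Because pre * post stays near zero under alternating dominance, the decay term wins and the weight drifts down, matching the rising incidence of mixed percepts.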
25.14, 6:00 pm
Attentional facilitation of perceptual learning without awareness
David Carmel 1,2 (davecarmel@nyu.edu), Anna Khesin 1 , Marisa Carrasco 1,2 ; 1 Department of Psychology, NYU, 2 Center for Neural Science, NYU
Background: Perceptual learning (PL) – practice-induced improvement in perceptual task performance – is a manifestation of adult neural plasticity. Endogenous (voluntary) attention facilitates PL, but some have argued that attention plays no essential role in PL because PL can occur even when observers are unaware of the “trained” stimulus. We therefore asked whether manipulating attention to stimuli that observers remain unaware of would affect PL of those stimuli.
Method: We manipulated endogenous (voluntary) spatial attention to assess whether PL would occur at attended and unattended locations, suppressing trained stimuli from awareness using continuous flash suppression (CFS), a strong form of binocular rivalry in which monocular stimuli are rendered invisible by dynamic displays presented to the other eye. During 10 training sessions, observers viewed a CFS display and performed an attentional task on stimuli presented to the dominant eye. This task required attention to stimuli presented in two diagonally located corners of the display, while ignoring stimuli in the other two corners. Concurrently, the suppressed eye was shown Gabors (the trained stimuli) at retinal locations corresponding to both attended and unattended locations. To equate the amount of practice in directing attention to all locations, on half of each session’s blocks the dominant eye’s attended and unattended locations were switched and no Gabors were presented to the suppressed eye. Before and after the training sessions, we measured contrast thresholds (without CFS) for trained stimuli at attended and unattended locations. To assess learning specificity, we also measured thresholds for “untrained” Gabors with orthogonal orientations.
Results and conclusion: Performance for trained stimuli at attended locations improved dramatically. As these stimuli were suppressed by CFS during training, this finding indicates that attention can facilitate PL without awareness. Smaller improvements were found for the other stimulus/location combinations, indicating that practice in directing spatial attention can also improve performance.
Acknowledgement: This research was supported by an International Brain Research Foundation Postdoctoral Fellowship to DC and NIH Research Grant RO1 EY016200 to MC.
25.15, 6:15 pm
Baseline fMRI pattern activity in early visual cortex predicts the initial dominant percept in subsequent binocular rivalry
Po-Jang Hsieh 1 (pjh@mit.edu), Jaron Colas 1 , Nancy Kanwisher 1 ; 1 Department of Brain and Cognitive Sciences, McGovern Institute, MIT
Binocular rivalry occurs when the two eyes receive conflicting images that rival for perceptual dominance, such that only one monocular image may be consciously perceived at a time. There is still no consensus regarding the potential neural sites of these competitive interactions.
Here we test whether neural activity occurring before the stimulus can predict the initial percept in binocular rivalry, and if so, whether it is activity in early retinotopic areas or in higher extrastriate areas that is predictive of the initial percept. Subjects were scanned while viewing an image of a face in one eye and an image of a house in the other eye with anaglyph glasses. The rivalrous stimulus was presented briefly on each trial, and subjects indicated which image he or


she preferentially perceived. Our results show that pre-trial fMRI pattern activity in the foveal confluence is correlated with the subsequent percept, whereas pre-trial activity in the FFA and PPA was not predictive of the initial percept, suggesting a greater causal role for the foveal confluence than for higher extrastriate areas in determining the initial percept.
25.16, 6:30 pm
Left global visual hemineglect in high Autism-spectrum Quotient (AQ) individuals
David Crewther 1 (dcrewther@swin.edu.au), Daniel Crewther 1 , Melanie Ashton 1 , Ada Kuang 1 ; 1 Brain Sciences Institute, Swinburne University of Technology, Melbourne, Australia
This study explores the visual perceptual differences between individuals from a normal population (mean age 25 yr) showing high versus low Autism-spectrum Quotient (AQ). A perceptual rivalry stimulus – the diamond illusion, containing both global and local percepts – was used to explore the effects of occluder contrast (the occluders hide the vertices of the diamond) and peripheral viewing in populations of high (n=23) and low (n=15) AQ. Additionally, multifocal nonlinear visual evoked potentials (VEP), achromatic (24% and 96% contrast), were used to test for the presence of underlying physiological differences in magno- and parvocellular function. Both groups showed an increase in the percentage of global perception with increasing contrast of the occluding stripes; however, no difference was found between AQ groups. A relative increase in global perception with increasing eccentricity of the stimulus from fixation was also seen in both groups. Remarkably, the high AQ group showed a significant reduction in global perception when the stimulus was presented in the left hemifield, but not for presentation in the right hemifield. This global perceptual hemineglect suggests the possibility of abnormal parietal function in individuals with high AQ. While VEPs were similar at low contrast between the high- and low-AQ groups, at high contrast there appeared to be interference with normal processing, particularly of the magnocellular second-order kernel slice. Seven VEP parameters used in a discriminant analysis correctly classified high or low group membership in 95% of the participants.
Scene perception
Saturday, May 8, 5:15 - 6:45 pm
Talk Session, Royal Ballroom 4-5
Moderator: Frans Cornelissen
25.21, 5:15 pm
Scene categorization and detection: the power of global features
James Hays 1 (hays@csail.mit.edu), Jianxiong Xiao 1 , Krista Ehinger 2 , Aude Oliva 2 , Antonio Torralba 1 ; 1 Computer Science and Artificial Intelligence Lab, Massachusetts Institute of Technology, 2 Brain and Cognitive Science, Massachusetts Institute of Technology
Scene recognition involves a different set of challenges from those posed by object recognition. Like objects, scenes are composed of parts, but whereas objects have strong constraints on the distribution of their parts, scene elements are governed by much weaker spatial constraints. Recently, a number of approaches have focused on the problem of scene classification using global features instead of encoding the objects within the scene. An important question is: what performance can be achieved using global image features? In this work we select and design several state-of-the-art algorithms in computer vision and evaluate them using two datasets. First, we use the 15 scene categories dataset, a standard benchmark in computer vision for scene recognition tasks.
Using a mixture of features and 100 training examples per category, we achieve 88.1% classification accuracy (human performance is 95%; the best prior computational performance is 81.2%). For a more challenging and informative test we use a new dataset containing 398 scene categories. This dataset is dramatically larger and more diverse than previous datasets and allows us to establish a new performance benchmark. Using a variety of global image features and 50 training examples per category, we achieve 34.4% classification accuracy (chance is only 0.2%). With such a large number of classes, we can examine the common confusions between scene categories, evaluate how similar they are to the target scenes, and reveal which classes are most indistinguishable using global features. In addition, we introduce the concept of scene detection (detecting scenes embedded within larger scenes) in order to evaluate computational performance under a finer-grained local scene representation. Finding new global scene representations that significantly improve performance is important, as it validates the usefulness of a parallel and complementary path for scene understanding that can be used to provide context for object recognition.
Acknowledgement: Funded by NSF CAREER award 0546262 to A.O. and NSF CAREER Award 0747120 to A.T.
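As background to the global-features approach evaluated in 25.21, the sketch below builds a generic, gist-like global descriptor and a nearest-centroid classifier. It is a simplified stand-in under our own assumptions, not the authors' feature set or pipeline:

```python
import numpy as np

def global_features(img, grid=4):
    """Coarse gist-like descriptor: mean and std of gradient energy
    pooled over a grid x grid spatial layout (no object encoding)."""
    gy, gx = np.gradient(img.astype(float))
    energy = np.hypot(gx, gy)
    h, w = energy.shape
    cells = [energy[i * h // grid:(i + 1) * h // grid,
                    j * w // grid:(j + 1) * w // grid]
             for i in range(grid) for j in range(grid)]
    return np.array([s for c in cells for s in (c.mean(), c.std())])

# Toy usage with random arrays standing in for two scene "categories".
rng = np.random.default_rng(6)
smooth = [rng.random((64, 64)).cumsum(axis=1) / 64 for _ in range(20)]
noisy = [rng.random((64, 64)) for _ in range(20)]
X = np.array([global_features(im) for im in smooth + noisy])
y = np.array([0] * 20 + [1] * 20)
centroids = np.array([X[y == k].mean(axis=0) for k in (0, 1)])
pred = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
print(f"training accuracy: {(pred == y).mean():.2f}")
```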
25.22, 5:30 pm
The Good, the Bad, and the Scrambled: A Perceptual Advantage for Good Examples of Natural Scene Categories
Eamon Caddigan 1,2 (ecaddiga@illinois.edu), Dirk B. Walther 1 , Li Fei-Fei 3 , Diane M. Beck 1,2 ; 1 Beckman Institute, University of Illinois at Urbana-Champaign, 2 Department of Psychology, University of Illinois at Urbana-Champaign, 3 Department of Computer Science, Stanford University
Recent research has shown that participants are better able to categorize briefly presented natural scene images that have been rated as “good” exemplars of their category, and that this is reflected in the distributed patterns of neural activation obtained through fMRI (Torralbo et al., 2009). The effect of typicality on categorization/decision processes is well documented (see Rosch, 1978), but it is possible that such effects also reflect differences in perception. Here we asked whether subjects might actually ‘see’ good exemplars of a category better than bad exemplars. We asked subjects to simply report whether a very briefly presented (19 ms – 60 ms) image was intact or scrambled. Images drawn from six natural scene categories (beaches, city streets, forests, highways, mountains and offices) were rated as either “good” or “bad” exemplars of their categories. These images were presented either in their original intact state or 100% phase-scrambled (Sadr & Sinha, 2004), and were then followed by a perceptual mask. Note that the subjects were never instructed to categorize the scenes, nor were they explicitly notified that the image set contained good and bad category exemplars. We measured participants’ d’ separately for good and bad images, and found that participants were better able to discriminate intact from scrambled images when the images were good category exemplars than when they were bad category exemplars. These results suggest that knowledge about scene category actually allows observers to ‘see’ natural scene images better, regardless of whether scene category is relevant to the task.
Acknowledgement: This work is funded by the NIH (LFF, DB, DW)
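The sensitivity measure in 25.22 is the standard signal-detection d’; taking intact images as the “signal”, it is computed from the hit rate and false-alarm rate. The numbers below are made up for illustration and are not from the study:

```latex
% d' from hit rate H (intact reported intact) and false-alarm rate F
% (scrambled reported intact); \Phi^{-1} is the inverse normal CDF.
% Example with hypothetical rates H = 0.85, F = 0.20:
\[
d' = \Phi^{-1}(H) - \Phi^{-1}(F) = \Phi^{-1}(0.85) - \Phi^{-1}(0.20)
   \approx 1.04 - (-0.84) = 1.88
\]
```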
25.23, 5:45 pm
fMRI Decoding of Natural Scene Categories from Line Drawings
Dirk Walther 1 (walther@illinois.edu), Barry Chai 2 , Eamon Caddigan 1,3 , Diane Beck 1,3 , Li Fei-Fei 2 ; 1 Beckman Institute, University of Illinois at Urbana-Champaign, 2 Computer Science Department, Stanford University, 3 Psychology Department, University of Illinois at Urbana-Champaign
Using full color photographs of natural scenes, we have previously shown that information about scene category is contained in patterns of fMRI activity in the parahippocampal place area (PPA), the retrosplenial cortex (RSC), the lateral occipital complex (LOC), and primary visual cortex (V1) (Walther et al., J. Neurosci., 2009). If these regions are involved in representing category, then it should be the case that we could decode scene category for any natural scene image that participants can categorize, including simple line drawings. In keeping with this prediction, we found that we can decode scene category from fMRI activity patterns for novel line drawing pictures just as well as from activity for color photographs, in V1 through PPA. Even more remarkably, a decoder trained on fMRI activity elicited by color photographs was able to predict the correct scene categories when tested on activity patterns for line drawings just as often as for color photographs, indicating that the activation pattern elicited by color photographs generalized well to line drawings. Conversely, a decoder trained on activity for line drawings was able to decode activity patterns for photographs just as well as for line drawings, but only in the PPA and V2/VP, suggesting that, in these regions, category information is strongly determined by the edge and line information in a photograph. We conclude that line drawings contain sufficient information about natural scene categories to produce scene-specific fMRI activity patterns all along the visual processing hierarchy. Moreover, the specific encoding of this information appears to be similar to that elicited by photographs, as shown by successful decoding of scene categories across the two image types. Our findings suggest that scene structure, which is preserved in line drawings, plays an integral part in representing scene categories.
Acknowledgement: NIH 1 R01 EY019429
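The cross-decoding logic of 25.23 (train on one image type, test on the other) can be illustrated with synthetic patterns. Everything below is simulated data under our own assumptions, not the study's fMRI:

```python
import numpy as np

# Cross-decoding sketch: category prototypes are shared across image types,
# each image type adds its own shift, and trials add noise. Above-chance
# transfer from "photographs" to "line drawings" implies a representation
# shared across the two image types.
rng = np.random.default_rng(7)
n_cat, n_per, n_vox = 6, 40, 50
prototypes = rng.standard_normal((n_cat, n_vox))     # shared category code

def patterns(shift):
    """Simulated trials: prototype + image-type shift + trial noise."""
    X = np.vstack([p + shift + 0.8 * rng.standard_normal((n_per, n_vox))
                   for p in prototypes])
    y = np.repeat(np.arange(n_cat), n_per)
    return X, y

photo_shift = 0.3 * rng.standard_normal(n_vox)
drawing_shift = 0.3 * rng.standard_normal(n_vox)
X_train, y_train = patterns(photo_shift)             # train on photographs
X_test, y_test = patterns(drawing_shift)             # test on line drawings

centroids = np.array([X_train[y_train == k].mean(0) for k in range(n_cat)])
d = ((X_test[:, None, :] - centroids[None]) ** 2).sum(-1)
acc = (d.argmin(1) == y_test).mean()
print(f"cross-decoding accuracy: {acc:.2f} (chance = {1/n_cat:.2f})")
```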

25.24, 6:00 pm
fMRI evidence for two distinct ventral cortical vision systems
Frans W. Cornelissen 1 (f.w.cornelissen@rug.nl), Jan-Bernard Marsman 1 , Remco Renken 2 , Koen V. Haak 1 ; 1 Laboratory for Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, Netherlands, 2 BCN Neuroimaging Center, University Medical Center Groningen, University of Groningen, Netherlands
The repertoire of human visual recognition skills is amazingly broad and ranges from rapid “gist”-based scene categorization to the fine scrutiny of minute object details. While ventral occipital cortex is implicated in all these abilities, the computational organisation that enables this is still poorly understood. Recent eye-tracking studies have shown that eye-movement characteristics can be used to distinguish between two different modes of perception, one associated more with global processing and the other with more detailed visual analysis. We reasoned that if these perceptual modes reflect genuinely different cortical processing, we should be able to use eye movements to tease apart the underlying neural correlates. In our functional MRI experiment, participants freely viewed images of indoor scenes while their brains were scanned and their eye movements tracked. We defined two classes of eye-movement events to approximate the different viewing modes: brief fixations followed by large saccades were defined as “scanning” events, whereas long fixations followed by short saccades represent “inspection” events. These events were subsequently used in the analysis of the fMRI data. Independent component analysis indicated the existence of two clusters in ventral occipital cortex. The cluster of activity in ventro-medial occipital cortex was preferentially associated with scanning events, while inspection events were preferentially associated with activity in the ventro-lateral cluster. Hence, this shows that fMRI signals recorded from ventral cortex can be segregated based on eye movements. Information processing during scanning events is suggested to be of a statistical nature, given their brevity and the peripheral location of the saccade target. The longer inspection events presumably enabled additional scrutiny of features as well as computation of spatial relationships. Hence, our work suggests that the human ventral stream subdivides into two vision systems that enable perception based on distinct visual information and neural computations.
Acknowledgement: This study was supported by European Commission grant 043261 (Percept) to Frans W. Cornelissen
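The event definitions in 25.24 amount to a simple rule over fixation durations and saccade amplitudes. The numeric cut-offs in the sketch below are assumed for illustration, since the abstract gives no criteria:

```python
def classify_events(fix_dur_ms, sacc_amp_deg,
                    short_fix=150.0, large_sacc=4.0):
    """Label each fixation-saccade pair: 'scanning' for brief fixations
    followed by large saccades, 'inspection' for long fixations followed
    by short saccades, 'other' otherwise. Cut-offs are assumptions."""
    labels = []
    for dur, amp in zip(fix_dur_ms, sacc_amp_deg):
        if dur < short_fix and amp > large_sacc:
            labels.append("scanning")
        elif dur >= short_fix and amp <= large_sacc:
            labels.append("inspection")
        else:
            labels.append("other")
    return labels

# Toy usage: four fixations with their outgoing saccades.
durations = [120, 340, 90, 410]     # fixation durations (ms)
amplitudes = [6.2, 1.1, 8.0, 0.7]   # saccade amplitudes (deg)
print(classify_events(durations, amplitudes))
# -> ['scanning', 'inspection', 'scanning', 'inspection']
```

Labeled events like these can then serve as regressors or component selectors in the fMRI analysis, which is how the abstract uses them.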
25.25, 6:15 pm
One cortical network for the visual perception of scenes and textures
Koen V. Haak 1,2,3,4,5 (k.v.haak@med.umcg.nl), Remco Renken 3,4,5 , Frans W. Cornelissen 1,3,4,5 ; 1 Laboratory for Experimental Ophthalmology, 2 School of Behavioural and Cognitive Neurosciences, 3 BCN Neuroimaging Center, 4 University Medical Center Groningen, 5 University of Groningen
Visual scene perception is a core cognitive ability that allows us to recognize where we are and how to act upon our environment. Visual scene perception is therefore crucial to our functioning. Despite this, the neural implementation of visual scene perception remains largely unexplored. Although previous neuroimaging studies have identified several scene-selective brain regions – most notably the parahippocampal place area (PPA), retrosplenial cortex (RSC), and a region along the transverse occipital sulcus (TOS) – the data thus far do not indicate the type of neural computations underlying visual scene perception. When a visual system computes a statistic based upon multiple visual features, it is said to perform textural analysis. Clearly, texture analysis is useful for characterizing the texture of surface materials. But from a computational perspective, it can also be used to characterize visual scenes. We reasoned that if the brain applies textural analysis to scenes, one would expect it to encode textures and scenes in the same cortical regions. To test this hypothesis, we used long-interval fMRI repetition priming to identify regions in which neuronal activity attenuates on repetition of visually presented textures. This approach allowed us to probe regions that encode visual texture independent of spatial image transformations. Such independence is important because the result of textural analysis (i.e., extracted statistical image information) should be stable across varying retinal projections. This was verified by the observation that rotated and scaled repetitions of the stimuli did not cancel the priming-induced reduction of activity. In addition, we used a classic fMRI ‘localizer’ sequence to independently identify the PPA, RSC, and TOS. Our results reveal that the human brain encodes texture in regions that are also scene-selective. This, we argue, indicates that there is one cortical network for visual scene and texture perception that uses statistical image information in its computations.
Acknowledgement: This research was supported by European Union grants #043157 and #043261
25.26, 6:30 pm
The structure of scene representations across the ventral visual pathway
Dwight Kravitz 1 (kravitzd@mail.nih.gov), Cynthia Peng 2 , Chris Baker 1 ; 1 Laboratory of Brain and Cognition, NIMH, NIH, 2 Department of Psychology, Carnegie Mellon University
As we navigate the world we encounter complex visual scenes that we can both categorize and discriminate. Prior studies have reported scene category information in both early visual cortex (EVC) and the scene-selective parahippocampal place area (PPA). However, these studies used only a small number of preselected categories, providing little insight into the discrimination of individual scenes and no unbiased test of the categorical structure of the representations. Here we use a multivariate, ungrouped approach to establish the differential discrimination and categorical structure of scene representations in EVC and PPA. We presented 96 unique, diverse, and highly detailed scenes in an event-related fMRI paradigm, with each scene being a unique condition. The scenes were chosen to sample equally from all combinations of three broad classes based on apparent depth (near/far), content (manmade/natural), and the gross geometry of the scene (open/closed). We then used multi-voxel pattern analysis to characterize the responses of PPA and EVC. Importantly, neither our stimuli nor our analyses had any bias towards a particular organization or categorization of the scene stimuli. We found that the responses of both PPA and EVC could be used to discriminate individual scenes from one another. However, the scene representations in these two regions differed in their categorical structure.
The response of PPA grouped scenes by their geometry (open/closed) despite differences in their perceptual content, consistent with a role in navigation. In contrast, early visual cortex grouped scenes based on the distance to the nearest foreground object (near/far). In neither region did we find evidence for strong grouping by the scene categories often assumed in the prior literature (e.g. beaches, cityscapes). These results suggest that while both regions can discriminate scenes, each encodes different aspects of complex scenes, providing insight into the transformation of visual information along the ventral visual pathway.
Acknowledgement: NIMH Intramural Program


Saturday Afternoon Posters

Attention: Temporal selection and modulation
Royal Ballroom 6-8, Boards 301–309
Saturday, May 8, 2:45 - 6:45 pm

26.301 The Effect of Extensive Repeated Viewing on Visual Recognition
John O’Connor 2 (john.d.oconnor@us.army.mil), Matthew S. Peterson 1, Raja Parasuraman 1; 1 Department of Psychology, George Mason University, 2 US Army RDECOM CERDEC Night Vision & Electronic Sensors Directorate
The US Army RDECOM CERDEC Night Vision & Electronic Sensors Directorate (NVESD) has conducted numerous experiments involving combat recognition and identification in support of sensor model development. Participants sometimes report feeling fatigued after completing experiments, especially those in excess of 400 trials. If such fatigue exists, it would introduce error into, and reduce the sensitivity of, NVESD perception modeling experiments. The authors conducted an experiment with 25 participants investigating fatigue associated with extensive repetitions (1008 trials) of visual vehicle recognition and identification tasks. Participants were allowed up to 8 seconds to recognize (Tank, APC, SPA) and identify (e.g. T-55 vs. T-72) each of 144 vehicle thermal images (12 tracked vehicles at 12 aspects) presented 7 times. Results indicate no significant reduction in recognition accuracy and no significant effect on response times. This contrasts with vigilance tasks, in which the goal is to detect the presence of a rare target and performance declines within the first half hour or so (Parasuraman, 1986). Likewise, we did not observe the typical sensitivity decrement (Parasuraman, 1979), which occurs for rapidly presented events (e.g. 30-60 Hz). Sensitivity decrements increase as memory load increases and image quality decreases, yet performance did not decline in our task, which used a large number of possible targets and degraded thermal images. We propose that our training protocol (2 days of combat ID training with a 96% pass criterion) was able to mitigate the effects of fatigue on attention, despite subjects never reaching ceiling-level performance. Alternatively, time-on-task effects might be restricted to detection tasks, as in most vigilance experiments, unlike the tasks used in the present study. Results indicate that fatigue associated with extensive repetitions of recognition or identification tasks will not introduce error into perception experiments such as those used to support NVESD sensor modeling.

26.302 Is it better to burn out or fade away? The effect of sudden offsets on target recovery
Philip C. Ko 1 (philip.c.ko@vanderbilt.edu), Adriane E. Seiffert 1; 1 Department of Psychology, College of Arts and Sciences, Vanderbilt University
Does the ability to track moving objects through a temporary disappearance depend on how they disappeared? In the target recovery paradigm, participants track multiple targets moving in a display amongst identical distractors and across a momentary blank of the display. Participants must recover the targets after the blank to successfully discriminate them from distractors at the end of a trial. Keane and Pylyshyn (2006) showed superior performance when objects paused during the blank compared to when they continued to move during the blank. They suggested this pause-advantage indicated that people do not extrapolate positions of moving objects during tracking.
An alternative account of the pause-advantage is that the objects’ disappearance causes the positions of the targets to be memorized and then matched to the objects’ positions when they reappear. We investigated whether the pause-advantage depended on the strength of the transient produced by the offset of the objects. To increase the strength of the transient, the objects “burned out” by increasing in size just before their offset. To decrease the transient, the objects “faded away” by gradually decreasing in luminance before their offset. The results showed a significant interaction between transient strength (burn out, fade away) and condition (pause, move), F(1,9) = 58.61, p < .001.


…that inter-observer differences in the effects of attention and awareness on threshold elevation are correlated. Inter-observer differences in the effects on afterimage formation, however, are not. Our results indicate that attention and awareness are qualitatively similar in their effects on afterimage formation and threshold elevation, and that this similarity is particularly pronounced at the level of threshold elevation.
Acknowledgement: Rubicon Grants from the Netherlands Organisation for Scientific Research (JB and JvB), NIH EY 13358 (RB)

26.305 Temporal extension of figures: Evidence from the attentional blink
Lauren Hecht 1,2 (hechtl@grinnell.edu), Shaun Vecera 1; 1 University of Iowa, 2 Grinnell College
Recent research on figure-ground organization has focused on the behavioral consequences of figure-ground assignment, including faster responses and higher accuracy for figures; however, other outcomes exist. For instance, figure-ground assignment has consequences for temporal processing: perceptual processing begins earlier for figures than for background regions, demonstrating a ‘prior entry’ effect (Lester et al., 2009). Additionally, figures are afforded extended processing relative to backgrounds (Hecht & Vecera, in preparation). One implication of this ‘temporal extension’ effect is that figures have poorer temporal resolution compared to background regions, and detection of a stimulus presented in close temporal proximity to a figure should be impaired relative to detection of a stimulus appearing after a background region. Consequently, performance in tasks requiring rapid temporal processing of items presented in close succession should be impaired for figure regions relative to ground regions. To assess this conjecture, observers monitored an RSVP stream of letters for two targets. The first was a target letter, embossed on the surface of the figure or the background region of a bipartite figure-ground display presented in the RSVP stream. At varying delays in the RSVP sequence, the second target (i.e., a number) appeared in the stream. In accordance with the proposal that figures have degraded temporal resolution relative to background regions, the attentional blink was moderated by figure-ground assignment. Specifically, reports for the second target were less accurate following figure trials. In other words, when the first target appeared on a figure, the attentional blink was larger than when the first target appeared on a background region. These results support the ‘temporal extension’ effect of figures.
Acknowledgement: Grinnell College CSFS and travel grants awarded to Lauren N. Hecht

26.306 Temporal resolution of attention in foveal and peripheral vision
Cristy Ho 1 (cristyho@hku.hk), Sing-Hang Cheung 1; 1 Department of Psychology, The University of Hong Kong
Purpose: Attentional blink (AB) refers to people’s inability to detect and identify the second of two targets presented in close temporal succession, typically in a rapid serial visual presentation (RSVP) stream. Is the attentional mechanism governing the temporal selection of sensory information the same or different between foveal and peripheral vision? Here we assess the temporal dynamics of attention in foveal versus peripheral vision using the AB paradigm. Method: Six normally sighted young adults participated in each of the two experiments (E1 and E2).
RSVP streams of 24 distractor letters, with two target letters (T1 and T2) of a different color periodically embedded within them, were presented at a rate of 9.44 Hz in foveal or in peripheral vision. Eccentricities of stimuli in the peripheral condition were 4.0° (E1) and 8.0° (E2) in the lower right visual field. The stimuli subtended visual angles of about 1.3° and 2.6° (E1), and 0.5° and 3.0° (E2), in the foveal and peripheral conditions, respectively. Foveal and peripheral trials were presented in separate blocks (4 blocks per condition) of 80 trials. Results: Analysis of the proportion of correct T2 reports given correct T1 identification, P(T2|T1), showed the typical AB findings in both the foveal and peripheral conditions. Inverted Gaussian functions were fitted to the P(T2|T1) data to estimate the magnitude of the AB effects. Average magnitudes of AB in the foveal and peripheral conditions were 0.73±0.06 and 0.46±0.07 (E1), and 0.57±0.08 and 0.34±0.08 (E2), respectively. Magnitude of AB was significantly smaller in peripheral than in foveal vision in both experiments (ps < .05).
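The inverted-Gaussian fit used above to quantify the blink can be reproduced in a few lines. This is a sketch under assumed (hypothetical) lag values and P(T2|T1) data, not the authors' actual numbers; the fitted dip amplitude plays the role of the AB magnitude reported in the abstract.

    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical P(T2|T1) values at T1-T2 lags of 1-8 items (~9.44 Hz stream).
    lags = np.arange(1, 9)
    p_t2 = np.array([0.70, 0.35, 0.45, 0.60, 0.72, 0.80, 0.84, 0.85])

    def inverted_gaussian(lag, baseline, amplitude, center, width):
        # Baseline accuracy minus a Gaussian-shaped dip around the blink.
        return baseline - amplitude * np.exp(-((lag - center) ** 2) / (2.0 * width ** 2))

    params, _ = curve_fit(inverted_gaussian, lags, p_t2, p0=[0.85, 0.5, 2.0, 1.5])
    print("AB magnitude (dip amplitude): %.2f" % params[1])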


…at fixation. We manipulated whether attention was directed to the peripheral stimuli by asking participants to monitor either the five peripheral locations for a color/shape/texture conjunction or the fixation stream for an “a”. We manipulated the degree of competition among items by employing sequential and simultaneous presentation conditions. Competition could take place only among simultaneously presented stimuli (Kastner et al., 1998). We find robust competition among items (greater activation under sequential presentation than under simultaneous presentation) across attentional conditions. Although directing attention to five items increased the overall activity they evoked in V4, this effect did not interact with presentation condition; that is, there was no evidence that directing attention to five items reduced their competitive interactions relative to when they were unattended. Our data indicate that when simultaneously directed to multiple competing items, attention is an ineffective remedy for competition for representation.
Acknowledgement: NIMH grant R03 MH082012

26.314 Judging peripheral change: Attentional and stimulus-driven effects
Jenna Kelly 1 (kelly_j@denison.edu), Nestor Matthews 1; 1 Psychology, Denison University
Introduction: Previous research has revealed performance advantages for stimuli presented across (bilateral) rather than within (unilateral) the left and right hemifields on a variety of spatial attention tasks (e.g., Awh & Pashler, 2000; Chakravarthi & Cavanagh, 2009; Reardon, Kelly, & Matthews, 2009). Here we investigated whether a bilateral advantage would also be observed for tasks limited by the temporal resolution of attention. Method: Twenty-three Denison University undergraduates completed a 3x3 within-subject experiment. The independent variables were attentional condition (bilateral, unilateral, diagonal) and distracter condition (absent, static, dynamic). Stimuli were Gabor patches in the 4 corners of the screen (14.55 deg diagonally from fixation); 2 were pre-cued as targets on each trial. In two-thirds of trials, 2 Gabor distracters were presented between each pair of corner target positions. In half of these trials, the distracters did not change orientation throughout the duration of stimulus presentation. In the remaining distracter trials, distracter orientations changed orthogonally at random time intervals. After correctly identifying a foveally flashed letter, participants judged whether or not the cued targets had changed orientation simultaneously. Results: When distracters were absent, proficiency (d’/RT) was significantly lower in the diagonal condition than in either the bilateral or unilateral conditions, which were statistically indistinguishable. This diagonal disadvantage was eliminated in the presence of either distracter type. Discussion: The significantly lower proficiency in the diagonal condition—in which targets were in opposite hemifields—argues against an effect of laterality on this task. This lack of laterality effects in the temporal resolution of attention contrasts with the significant laterality effects previously reported on spatial attention tasks (Awh & Pashler; Chakravarthi & Cavanagh; Reardon, Kelly, & Matthews). This suggests different constraints on the spatial versus temporal resolution of attention, which is consistent with the conclusion of Aghdaee and Cavanagh (2007).
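The proficiency measure (d'/RT) used in the abstract above combines signal-detection sensitivity with response speed. A minimal sketch, with hypothetical hit rates, false-alarm rates, and response times, is:

    from statistics import NormalDist

    def d_prime(hit_rate, false_alarm_rate):
        # Signal-detection sensitivity: z(hit rate) - z(false-alarm rate).
        # Rates must lie strictly between 0 and 1 (apply a correction otherwise).
        z = NormalDist().inv_cdf
        return z(hit_rate) - z(false_alarm_rate)

    def proficiency(hit_rate, false_alarm_rate, mean_rt_seconds):
        # Speed-accuracy composite: sensitivity divided by response time.
        return d_prime(hit_rate, false_alarm_rate) / mean_rt_seconds

    # Hypothetical condition means, e.g. bilateral vs. diagonal.
    print(proficiency(0.90, 0.15, 0.85))
    print(proficiency(0.80, 0.20, 0.95))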
26.315 Concurrent Task Performance and the Role of Attention in Change Detection
Gabriela Durán 1,2 (gduran@uacj.mx), Wendy S. Francis 2, Marlene Martínez 1; 1 Universidad Autónoma de Ciudad Juárez, 2 University of Texas at El Paso
Detecting a change in a scene is easy when motion cues are available to draw attention to the location of the change. However, when visual contact with the scene is even briefly disrupted, as in the flickering task of Rensink, O’Regan, and Clark (1997), change detection becomes difficult. A brief mask intervening between alternate versions of a photographed scene slowed change detection markedly relative to immediately successive presentations. The difficulty in the flickering task was attributed to the requirement of focused attention to maintain image components in memory for comparison, and to the need to direct attention serially to different components until the area of change became the focus of attention. However, attention was not manipulated directly. The present series of experiments tested whether limiting the available attentional resources would disrupt change detection by having participants perform concurrent n-back or number-repetition control tasks. Experiments 1 and 2 examined the effects on change detection when alternate forms of the images switched every 320 ms. Experiment 1 allowed the use of motion cues, in that alternate forms of the images were presented in immediate succession. Change-detection performance was equivalent for the two concurrent tasks. Experiment 2 removed motion cues by using the flickering task with a mask between alternate forms. Change-detection performance was adversely affected by concurrent performance of the n-back task relative to the control. Experiment 3 replicated two presentation conditions from the Rensink study, using a 640 ms image alternation rate. In addition to the mask between alternate forms, in one condition the presentation time of each form was divided by a mask. In both versions of the task, concurrent performance of the n-back task slowed change detection. Overall, the results support the conclusion that attention is important for change detection when motion cues are not available.

26.316 Conspicuity of peripheral visual alerts
Jeffrey B. Mulligan 1 (jeffrey.b.mulligan@nasa.gov), Kelly S. Steelman-Allen 2; 1 NASA Ames Research Center, 2 University of Illinois at Urbana-Champaign
Measurement of the peripheral visibility of a target typically involves a fixating subject intently waiting for the appearance of the stimulus. In real-world environments such as aircraft cockpits and automobiles, however, the operator is usually engaged in a variety of non-monitoring tasks when visual alerting signals appear. We use the term conspicuity to distinguish the attention-getting power of a visual stimulus from simple visibility. We have developed an experimental paradigm to study visual conspicuity: subjects perform a demanding central task, in which they use a computer mouse to keep a wandering target spot in the central portion of the screen, while simultaneously monitoring a set of peripheral numeric displays for color change events. When such an event occurs, the subject must make a judgment concerning the displayed numeric value, indicating the response with a mouse click on one of two buttons located above and below the item. The strengths of the various alerts are varied within a run using a staircase procedure, allowing us to estimate noticeability thresholds. Visibility of the alerting signals is measured separately in a control experiment in which the subject fixates a location within the central task area while monitoring the peripheral alert locations.
Thresholds in the dual-task experiments are lower than would be expected based on the results of the control experiment, due to the fact that the subjects actively sample the alert locations with fixations while performing the central task. Not surprisingly, more sampling fixations are made to high-frequency alert locations. The results are modeled using N-SEEV (Steelman-Allen et al., HFES 2009), a computational model of attention and noticing that predicts visual sampling based on static and dynamic visual salience, the bandwidth and value of information in each channel, and the subject’s attentional set.
Acknowledgement: The Integrated Intelligent Flight Deck Technologies project of NASA’s Aviation Safety Program

26.317 An overview of the attentional boost effect
Yuhong V. Jiang 1 (jiang166@umn.edu), Khena M. Swallow 1; 1 Department of Psychology & Center for Cognitive Sciences, University of Minnesota
We report a series of studies on a new attentional phenomenon, the attentional boost effect, and relate it to perceptual learning, dual-task interference, and event perception. The attentional boost effect is the surprising finding that when two continuous tasks are performed concurrently, a transient increase in attention to one task enhances, rather than impairs, performance in another task. In the standard paradigm our participants encoded a series of scenes presented at 500 ms/item into memory while simultaneously monitoring an unrelated stream of letters (also presented at 500 ms/item) for an occasional target (e.g., a red X among other colored letters). Previous studies have shown that attention to the letter task transiently increases when a target is detected relative to when a distractor is rejected. Instead of producing dual-task interference, however, performance on the scene encoding task was enhanced by target detection: memory for scenes that were encoded when the target letter appeared was significantly better than memory for scenes presented before or after the target letter. The attentional boost effect contrasts with the majority of the dual-task performance and attention literature, showing that increasing attention to one task can trigger an attentional process that supplements, rather than impairs, performance on a second task. We report experiments that illustrate the generality of the attentional boost effect across different sensory modalities, different primary tasks, and different time scales (e.g., perceptual processing, long-term memory). We also relate and differentiate…


…biologically realistic computational modeling can suggest plausible neural mechanisms of compensation in hemianopia, which can be tested empirically, and which may have some use in guiding rehabilitation strategies.
Acknowledgement: LL was supported by a Michael Smith Foundation for Health Research Post-doctoral Fellowship. JB was supported by a Michael Smith Foundation for Health Research Senior Scholar Award and a Canada Research Chair

26.322 Processing visual scene statistical properties in patients with unilateral spatial neglect
Marina Pavlovskaya 1,2 (marinap@netvision.net.il), Yoram Bonneh 3, Nachum Soroker 1,2, Shaul Hochstein 4; 1 Loewenstein Rehabilitation Hospital, Raanana, Israel, 2 Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel, 3 Department of Neurobiology, Brain Research, Weizmann Institute of Science, Rehovot, Israel, 4 Life Sciences Institute, Hebrew University, Jerusalem, Israel
Chong and Treisman (2003, 2005, 2008) found that people judge the mean size of a set of circles as quickly and accurately as that of a single item, suggesting that statistical properties may be processed without focused attention. The lack of awareness of left-side input in cases of Unilateral Spatial Neglect (USN) has been attributed to an inability to focus attention on the left side, suggesting that the processing of statistical properties may be spared. Five USN patients and five controls compared the size of a reference circle to a single circle, or to the average size of a briefly presented cloud of circles, in either the right or left visual field or spanning both sides. When spanning both sides, the separate averages on the two sides were either identical or different (difference from reference in ratio 1:4), with the ‘different’ condition used to assess the relative impact of each side in judging the mean. USN patients were able to make comparisons and average size in either hemifield, though their left-side performance was somewhat degraded. In the spanning condition, while the controls indeed averaged across sides, lowering their threshold, patients showed a higher threshold when needing to depend on the left side of the cloud (when the right-side cloud was closer to the reference). However, they did use both sides of the cloud, so that their spanning-condition thresholds were intermediate between those of controls and those expected if they attended only to the right side. We conclude that USN patients perform a weighted average across sides, giving double weight to the right side, perhaps due to “extinction”. The ability of USN patients to extract the statistical properties of the visual scene on the neglected side points to a relatively spared spread-attention mechanism serving this operation.

26.323 Non-spatial attention engagement in Neglect patients
Simone Gori 1 (simone.gori@unipd.it), Milena Ruffino 2, Milena Peverelli 3, Massimo Molteni 3, Konstantinos Priftis 1,4, Andrea Facoetti 1,2; 1 General Psychology Department, University of Padua, 2 Istituto Scientifico “E. Medea” di Bosisio Parini, Lecco, 3 Centro Riabilitativo “Villa Beretta” (Ospedale Valduce), Costa Masnaga, Lecco, 4 IRCCS San Camillo, Lido-Venezia
According to the most recent studies, non-spatial, temporal attentional disengagement (measured by the attentional blink) is impaired in neglect patients. However, it is far from clear whether the mechanism of temporal attentional engagement is also impaired in neglect patients.
In order to investigate temporal attentional engagement, two experiments were conducted in a sample of 19 patients with right cerebrovascular lesions (9 with neglect: N+; 10 without neglect: N-) and 9 healthy controls (C). We measured backward masking as well as para-contrast masking for centrally presented stimuli. The results showed a specific impairment of non-spatial attentional engagement in N+. Precisely, N+ showed both deeper backward and para-contrast masking and a more sluggish backward and para-contrast masking recovery in comparison with the two control groups (N- and C). These results suggest that the non-spatial disengagement deficits typically associated with neglect could be explained by postulating a primary attentional engagement deficit of the “When” system, controlled by the right inferior parietal cortex.

26.324 Differentiating Patients from Controls by Gazing Patterns
Po-He Tseng 1 (pohetsn@gmail.com), Ian Cameron 2, Doug Munoz 2, Laurent Itti 1,3; 1 Department of Computer Science, University of Southern California, 2 Centre for Neuroscience Studies and Department of Physiology, Queen’s University, 3 Neuroscience Program, University of Southern California
Dysfunction in inhibitory control of attention has been shown in children with Attention Deficit Hyperactivity Disorder (ADHD), children with Fetal Alcohol Spectrum Disorder (FASD), and elderly adults with Parkinson’s Disease (PD). Previous studies explored the deficits in top-down (goal-oriented) and bottom-up (stimulus-driven) attention with a series of visual tasks. This study investigates differences in attentional selection mechanisms while patients freely viewed natural scene videos without performing specific tasks, and these differences are used to develop classifiers that distinguish patients from controls. The specially designed videos are composed of short (2-4 second), unrelated clips to reduce top-down expectation and emphasize differences in gaze allocation at every scene change. Gaze of six groups of observers (control children, ADHD children, FASD children, control young adults, control elderly, and PD elderly) was tracked while they watched the videos. A computational saliency model computed bottom-up saliency maps for each video frame. The correlation between salience and gaze was computed for each population and served as features for the classifiers. Leave-one-out cross-validation was used to train and test the classifiers. With eye traces from less than 4 minutes of video, the classifier differentiates ADHD, FASD, and control children with 84% accuracy; another classifier differentiates PD and control elderly with 97% accuracy. A feature selection method was also used to identify the features that differentiate the populations the most. Moreover, videos with higher inter-observer variability in gaze were more useful in differentiating populations. This study demonstrates that attentional selection mechanisms are influenced by PD, ADHD, and FASD, and that the behavioral difference is captured by the correlation between salience and gaze. Furthermore, this task-free method shows promise toward future screening tools.
Acknowledgement: National Science Foundation, Human Frontier Science Program
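The abstract specifies leave-one-out training but not the classifier type; the sketch below therefore uses a simple nearest-class-mean rule over salience-gaze correlation features, with all numbers invented for illustration.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical features: per-observer correlations between model salience
    # and gaze, one value per clip (30 clips); labels 0 = control, 1 = patient.
    X = np.vstack([rng.normal(0.45, 0.08, (20, 30)),   # controls: gaze tracks salience more
                   rng.normal(0.30, 0.08, (20, 30))])  # patients
    y = np.array([0] * 20 + [1] * 20)

    def leave_one_out_nearest_mean(X, y):
        # Hold out each observer; classify by the closer class-mean feature vector.
        correct = 0
        for i in range(len(y)):
            mask = np.arange(len(y)) != i
            mu0 = X[mask & (y == 0)].mean(axis=0)
            mu1 = X[mask & (y == 1)].mean(axis=0)
            pred = int(np.linalg.norm(X[i] - mu1) < np.linalg.norm(X[i] - mu0))
            correct += pred == y[i]
        return correct / len(y)

    print(leave_one_out_nearest_mean(X, y))  # accuracy; 0.5 = chance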
26.325 Cross-modal integration in a patient with partial damage to the Inferior and Superior Colliculus
Martijn van Koningsbruggen 1 (m.koningsbruggen@bangor.ac.uk), Robert Rafal 1; 1 Wolfson Centre for Clinical and Cognitive Neuroscience, Bangor University, Bangor, UK
Simple reaction times for visual targets are reduced for bilateral visual stimuli compared to just one stimulus presented in one hemifield – the redundant target effect. The same reaction time pattern can be obtained for auditory targets: reaction times for targets presented to both ears are faster than for targets presented to only one ear. More interestingly, if two stimuli from different modalities are presented simultaneously, reaction times decrease even further relative to uni-modal bilateral stimuli. This reduction is larger than predicted based on statistical facilitation alone, and has been attributed to sensory cross-modal integration. The Superior Colliculus (SC) is thought to be crucial for integrating auditory and visual information. Here we tested a rare patient who suffered a traumatic haemorrhagic avulsion of the dorsal midbrain, resulting in a near-complete lesion of the left inferior colliculus and damage to the caudal part of the left SC. The patient’s reaction times were prolonged for both auditory and visual targets in the contralesional visual field relative to her ipsilesional visual field. In addition, there was no RT benefit of presenting bilateral visual targets. Unlike healthy controls, the patient’s reaction times to auditory targets were slower than to visual targets. However, the patient detected bilateral auditory targets faster than unilateral targets, suggesting an intact auditory redundant target effect. More interestingly, similar to healthy controls, the patient demonstrated a benefit in reaction times to cross-modal targets in both visual fields: when a visual stimulus was presented simultaneously with an auditory target, RTs were faster than when only a visual or only an auditory target was presented.

26.326 Impaired selection- and response-related mechanisms in adult ADHD
Lilach Shalev 1 (mlilach@mscc.huji.ac.il), Yarden Dody 1, Carmel Mevorach 2; 1 School of Education, Hebrew University, Jerusalem, Israel, 2 Behavioural Brain Sciences Centre, School of Psychology, University of Birmingham, UK
A substantial amount of research has been directed at identifying the neurocognitive processes responsible for the inattentive, hyperactive, and impulsive behaviors observed in children and adults with attention deficit/hyperactivity disorder (ADHD). Of the many potential cognitive processes that have been suggested, a large body of evidence points to impairments in executive functions. The present study focuses on response suppression and on executive control in adults with and without ADHD using a global-local task with a manipulation of saliency (Mevorach, Humphreys & Shalev, 2006b). This task enables us to separate out three types of effects: effects reflecting the selection of local and global elements in displays, effects reflecting the ability to attend to the more or less salient aspects of a display, and effects reflecting the ability to filter out irrelevant incongruent information. Participants with ADHD demonstrated an exaggerated effect of relative saliency. Typically, interference from the distractor is greater when the target is low in saliency and the distractor high in saliency, compared with when the target has high saliency and the distractor low. In individuals with ADHD this bias towards high-saliency stimuli, and the difficulty in attending to low-saliency stimuli, was greater than in the control participants. In addition, the ADHD group yielded an increased congruity effect compared to the control group. That is, participants with ADHD showed more difficulty in filtering out irrelevant incongruent information. The former effect represents a difficulty in selection mechanisms, whereas the latter effect represents a difficulty in response-related mechanisms in adult ADHD.

26.327 Global and local attentional processing following optic neuritis
Celine Cavezian 1,2,3 (ccavezian@fo-rothschild.fr), Celine Perez 1,2,3, Mickael Obadia 3, Olivier Gout 3, Monte Buchsbaum 4, Sylvie Chokron 1,2,3; 1 Laboratoire de Psychologie et NeuroCognition, CNRS, UMR5105, UPMF, Grenoble, France, 2 ERT TREAT VISION, Fondation Ophtalmologique Rothschild, Paris, France, 3 Service de Neurologie, Fondation Ophtalmologique Rothschild, Paris, France, 4 Department of Psychiatry, Radiology and Neuroscience, Mount Sinai School of Medicine, Mount Sinai Medical Center, New York, NY
After optic neuritis (i.e., acute inflammation of the optic nerve), several patients report visual discomfort although ophthalmologic assessments show a complete recovery. To evaluate whether attentional impairment could contribute to these complaints, the present study investigated global and local processing in healthy individuals and patients with recovered optic neuritis. Ten healthy controls (38.39±6.15 years) and eleven patients (33.81±5.57 years) recovered from right or left optic neuritis episode(s) completed a letter-detection task. The to-be-detected target (the letter “O”) was presented either in the right or the left hemifield, and either as a small letter surrounded by flankers or as a single large letter. Response time and response accuracy were recorded. Although no significant group effect was observed on the number of erroneous responses, a trend toward a significant group x stimulus size x hemifield interaction was observed (F(1,19)=42.63; p=.059). With large stimuli, participants, whether controls or patients, made fewer errors when stimuli were presented in the left than in the right hemifield. When small stimuli were presented, healthy controls showed a similar number of errors in both hemifields, whereas optic neuritis patients made more errors when these stimuli were presented in the right than in the left hemifield. Regarding response time, no significant group main effect or interaction was observed. However, a significant stimulus size x hemifield interaction revealed that small stimuli were processed faster when presented in the right hemifield, whereas large stimuli were processed faster when presented in the left hemifield. Altogether, our results did not suggest any global or local attentional anomaly in optic neuritis patients.
Both healthy controls and patients showed the classic hemispheric specialization for local and global attention, in that the right hemisphere is dominant for global processing while the left hemisphere is dominant for local processing.
Acknowledgement: This work was supported by the Edmond and Benjamin de Rothschild Foundations (Geneva, Switzerland and New York, USA).

Neural mechanisms: Adaptation, awareness, action
Orchid Ballroom, Boards 401–411
Saturday, May 8, 2:45 - 6:45 pm

26.401 BOLD activation in the visual cortex for spontaneous blinks during visual tasks
Cécile Bordier 1 (cecile.bordier@ujf-grenoble.fr), Michel Dojat 1, Jean-Michel Hupé 2; 1 Grenoble Institut des Neurosciences (GIN) - INSERM U836 & Université Joseph Fourier, 2 CerCo, Toulouse University & CNRS
We are usually unaware of the temporary disappearance of the visual scene during blinks, even though blinks cause a large illumination change on the retina, as well as related BOLD responses in the visual cortex (Bristow et al., NeuroImage, 2005). We wondered whether spontaneously occurring blinks (in contrast to blocks of voluntary blinks, op. cit.) trigger significant (possibly contaminating) BOLD responses in standard retinotopic and visual experiments. Methods: We monitored in a 3T scanner the monocular eye signals of 14 subjects who observed 4 different types of visual stimuli, including rotating wedges and contracting/expanding rings, Mondrians and graphemes, while fixating. All stimuli were presented centrally and did not exceed 3 degrees of eccentricity. We performed event-related single-subject analyses on blinks. Results: We observed the pattern of activations documented for voluntary blinking, with the strongest activation in the anterior calcarine and PO sulcus. This activation was present for every subject and for every visual stimulus. It peaked outside of the regions coding our visual stimuli, but often abutting them. ROI-based analysis of the 3-deg central V1 region indeed revealed significant BOLD modulation following blinks. Discussion: We replicated the intriguing finding that blinks activate mostly the periphery of retinotopic areas. Since a blink should trigger both a decrease of BOLD signal (due to the luminance decrease) and a BOLD response to the dark transient, the anisotropy of the net BOLD response may be caused by different sensitivities to luminance and transients in the center and the periphery. In that case, we may expect that stimulus response modulations by attention also modulate the net BOLD response to blinks. In any case, our results strongly advocate for systematic monitoring of blinks during fMRI recording, since any correlation, even weak, between the distribution of blinks and a tested protocol could trigger artefactual activities in retinotopic areas.

26.402 Cortical adaptation to reversing prisms in normal adults measured by fMRI
Ling Lin 1 (llin3@uci.edu), Brian Barton 1, Alyssa Brewer 1; 1 Department of Cognitive Sciences, University of California, Irvine
A number of studies have investigated visuomotor adaptation to altered visual input induced by wearing inverting or reversing prism spectacles (e.g., Stratton, 1897; Miyauchi et al., 2004). Behavioral adaptation usually develops within a few days. However, cortical adaptation results in the literature have been controversial. Last year, we presented part of a study in which we investigated visuomotor adaptation to reversed visual input (Lin et al., VSS 2009). In this study, subjects wore left-right reversing prismatic goggles continuously for 14 days. Every few days we measured the BOLD responses to retinotopic stimuli comprised of wedges and rings. For each subject, we defined the baseline organization of the occipital and parietal visual field maps using phase-encoded traveling wave analysis. Then we measured receptive field alterations within these maps across time points, using population receptive field (pRF) analysis (Dumoulin and Wandell, 2008). We have previously shown a systematic shift of visual field coverage in the intraparietal sulcus (IPS) region, from the normal coverage of contralateral space before prism adaptation to an expansion into ipsilateral space throughout the adaptation period (Lin et al., VSS 2009). Here we present results showing a systematic shift of visual field coverage back to baseline between the day after the adaptation period and a measurement two months later. These results confirm cortical adaptation and differentiate between early stages (early visual field maps) and later stages (IPS) of the dorsal pathways for visual and visuomotor processing. A mechanism of re-weighting the inputs to these neurons is proposed to interpret the neuronal re-tuning results. Furthermore, we investigated whether a new ipsilateral receptive field emerged during prism adaptation, alongside the original contralateral receptive field, for voxels within the IPS maps, by modeling each pRF with two Gaussians, one in each hemifield (Dumoulin et al., SfN 2009).
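The two-Gaussian pRF variant mentioned at the end of the abstract can be written down directly. The sketch below gives only the receptive-field model itself; the full method also convolves the model prediction with the stimulus aperture and hemodynamic response before fitting, and all parameter values here are illustrative.

    import numpy as np

    def prf_gaussian(x, y, x0, y0, sigma):
        # Standard 2-D Gaussian population receptive field
        # (after Dumoulin and Wandell, 2008).
        return np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2.0 * sigma ** 2))

    def two_hemifield_prf(x, y, contra, ipsi):
        # Sum of one pRF per hemifield, as in the two-Gaussian variant above.
        # contra/ipsi are (x0, y0, sigma, gain) tuples; values are hypothetical.
        cx, cy, cs, cg = contra
        ix, iy, isg, ig = ipsi
        return cg * prf_gaussian(x, y, cx, cy, cs) + ig * prf_gaussian(x, y, ix, iy, isg)

    # Evaluate the predicted sensitivity profile on a visual-field grid.
    xs, ys = np.meshgrid(np.linspace(-10, 10, 101), np.linspace(-10, 10, 101))
    field = two_hemifield_prf(xs, ys, contra=(-4, 2, 2.0, 1.0), ipsi=(4, 2, 2.0, 0.5))
    print(field.max(), field.shape)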
26.403 Putting the Prisms Back On: Both Maps of Visual Space Persist, as Revealed by Rapid Cortical Re-adaptation to Left-Right Visual Field Reversal
Alyssa A. Brewer 1 (alyssa.brewer@gmail.com), Brian Barton 1, Ling Lin 1; 1 University of California, Irvine
Introduction: In this study, we exploit the dynamic nature of posterior parietal cortex to examine cortical functional plasticity induced by a complete reversal of visual input in normal adult humans. Using retinotopic fMRI measurements, we have previously demonstrated changes within the spatial representations of multiple parietal visual field maps following extreme alterations of visual input from left-right reversing prisms (Lin et al., VSS 2009). Data from adult barn owls suggest that after long-term adaptation to a large shift in visual input from prisms, the altered representations of visual space persist (Linkenhoker and Knudsen, 2002). Here we investigate whether there is a difference in the timing or degree of a second adaptation to the left-right visual field reversal in adult humans after long-term recovery from the initial adaptation period. Methods: Three subjects previously participated in a 14-day continuous adaptation to left-right reversing prisms. These same subjects returned for a 4-day re-adaptation to the reversed visual field 1-9 months later. Subjects again performed a daily battery of visuomotor testing and training. We used traveling wave stimuli to measure the occipital and parietal visual field maps in each subject before, on the 4th day of, and one day after the 4-day re-adaptation period. The receptive field alterations within these maps across time points were analyzed using the population receptive field method (Dumoulin and Wandell, 2008). Results/Conclusion: The data demonstrate a faster time course for both behavioral and cortical re-adaptation. By the end of this much shorter re-adaptation period, the measurements again show a shift of visual field representation from contralateral towards ipsilateral visual space in parietal cortex. These measurements of cortical visual field maps in subjects with severely altered visual input demonstrate that the changes in the maps produced by the initial long prism adaptation period persist over an extended time.

26.404 Novel insular cortex and claustrum activation observed during a visuomotor adaptation task using a viewing window paradigm
Lee Baugh 1 (umbaughl@cc.umanitoba.ca), Jane Lawrence 1, Jonathan Marotta 1; 1 Perception and Action Lab, Department of Psychology, University of Manitoba
Previous literature has reported a wide range of anatomical correlates when participants are required to perform a visuomotor adaptation task. However, traditional adaptation tasks suffer a number of inherent limitations that may, in part, give rise to this variability. The overt nature of the required visuomotor transformation and the sparse visual environment do not map well onto conditions in which a visuomotor transformation would normally be required in everyday life. For instance, when one uses the relationship between a vehicle’s steering wheel and the resultant movement to drive down the street, the nature of the required transformation is most likely encompassed in the higher-order goal of driving down the road. To further clarify these neural underpinnings, functional magnetic resonance imaging (fMRI) was performed on twelve (5M; age range 20-45 years; mean age = 27) naive participants performing a viewing window task in which a visuomotor transformation was created by varying the relationship between the participant’s movement and the resultant movement of the viewing window. The viewing window task moves the focus of the experiment away from the required visuomotor transformation and more naturally replicates scenarios in which haptic and visual information would be combined to achieve a higher-level goal. Activity related to visuomotor adaptation was found within previously reported regions of the parietal, frontal, and occipital lobes. In addition, previously unreported activation was observed within the claustrum and insular cortex, regions well-established as multi-modal convergence zones.
These results confirm the diverse nature of the systems recruited to perform a required visuomotor adaptation, and provide the first evidence of participation of the claustrum and insular cortex in overcoming a visuomotor transformation.

26.405 Multiple scales of organization for object selectivity in ventral visual cortex
Hans Op de Beeck 1 (hans.opdebeeck@psy.kuleuven.be), Marijke Brants 1,2, Annelies Baeck 1,2, Johan Wagemans 2; 1 Laboratory of Biological Psychology, University of Leuven (K.U.Leuven), Belgium, 2 Laboratory of Experimental Psychology, University of Leuven (K.U.Leuven), Belgium
Object knowledge is hierarchical. For example, a Labrador belongs to the category of dogs, all dogs are mammals, and all mammals are animals. Several hypotheses have been proposed about how this hierarchical property of object representations might be reflected in the spatial organization of ventral visual cortex. For example, all exemplars of a basic-level category might activate the same feature columns or cortical patches (e.g., Tanaka, 2003, Cerebral Cortex), so that a differentiation between specific exemplars is only possible by comparing the responses of neurons within these columns or patches. According to this view, category selectivity would be organized at a larger spatial scale than exemplar selectivity. Little empirical evidence is available for such proposals from monkey studies, and no direct evidence from experiments with human subjects. Here we describe a new method in which we use fMRI data to infer differences between stimulus properties in the scale at which they are organized. The method is based on the reasoning that spatial smoothing of fMRI data will have a larger beneficial effect for a larger-scale functional organization. We applied this method to several datasets, including an experiment in which basic-level category selectivity (e.g., face versus building) was compared with subordinate-level selectivity (e.g., rural building versus skyscraper). The results reveal a significantly larger beneficial effect of smoothing for basic-level selectivity than for subordinate-level selectivity. This is in line with the proposal that selectivity for stimulus properties that underlie finer distinctions between objects is organized at a finer scale than selectivity for stimulus properties that differentiate basic-level categories. This finding confirms the existence of multiple scales of organization in ventral visual cortex.
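The smoothing logic of this method can be illustrated with a toy one-dimensional simulation (hypothetical signal scales and noise level, not the authors' data): smoothing that exceeds the spatial scale of a selectivity map destroys the map, so a larger benefit of smoothing implies a coarser organization.

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    rng = np.random.default_rng(2)
    n = 1000  # voxels along a simulated cortical strip

    def zscore(v):
        return (v - v.mean()) / v.std()

    # Selectivity maps organized at a coarse vs. a fine spatial scale.
    coarse = zscore(gaussian_filter1d(rng.standard_normal(n), 50))
    fine = zscore(gaussian_filter1d(rng.standard_normal(n), 5))
    noise = rng.standard_normal(n)

    def recovered_selectivity(true_map, width):
        # Correlate the smoothed noisy measurement with the true map.
        measured = true_map + 0.5 * noise
        if width:
            measured = gaussian_filter1d(measured, width)
        return np.corrcoef(measured, true_map)[0, 1]

    # Smoothing keeps helping the coarse map but starts hurting the fine one.
    for width in [0, 2, 8, 32]:
        print(width, round(recovered_selectivity(coarse, width), 2),
              round(recovered_selectivity(fine, width), 2))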
26.406 Theta-burst transcranial magnetic stimulation to V1 impairs subjective confidence ratings and metacognition
Dobromir Rahnev 1,2 (dar2131@columbia.edu), Linda Bahdo 1, Moniek Munneke 2, Floris de Lange 2, Hakwan Lau 1,2; 1 Department of Psychology, Columbia University, 2 F.C. Donders Centre for Cognitive Neuroimaging, Radboud University Nijmegen, The Netherlands
Lesions to V1 may lead to blindsight – the ability to perform visual tasks despite the absence of visual awareness. Blindsight is a controversial phenomenon, partly because it is a rare condition and researchers rely on verbal reports. Previous work has shown that brain stimulation applied during visual presentation can induce blindsight-like behavior in normal populations. Here we adopt a different approach, capitalizing on a relatively new protocol of stimulation known as theta burst stimulation (TBS). TBS has been shown to suppress visual activity for up to ~30 minutes. During this period, we can perform intensive psychophysical testing without the tactile and auditory interference caused by brain stimulation. We used post-decisional wagering to objectively assess the effect of TBS on subjective awareness and metacognition. Subjects received TBS to the visual cortex, a control site (Pz), and sham TBS in counterbalanced sessions. Subjects performed a grating orientation task and indicated their confidence. High-confidence responses resulted in a higher score when the discrimination was correct but a negative score when incorrect. Subjects were told to maximize their overall score. TBS to the visual cortex resulted in a decrease in both performance and confidence. Further, we found that TBS lowered the correlation between confidence ratings and accuracy, and this misplacement of confidence ratings led to subjects’ failure to maximize their overall score. This impairment of metacognitive ability has previously been shown to be one of the hallmarks of blindsight. The current study shows that TBS to the visual cortex lowers visual performance, subjective confidence ratings, and metacognitive capacity. The fact that TBS is applied before, but not during, the psychophysical testing means that there is an opportunity to perform brain imaging during the task in order to further characterize the neural mechanisms that underlie these effects.
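Post-decision wagering scores metacognition by paying out on high wagers only when the answer is correct. A minimal scoring sketch, with invented payoff values and simulated trials (the abstract does not report the actual payoffs), is below; the confidence-accuracy correlation is one simple index of how well confidence tracks performance.

    import numpy as np

    rng = np.random.default_rng(3)

    # Simulated session: trial accuracy (0/1) and binary high/low wagers.
    accuracy = rng.integers(0, 2, 200)
    high_wager = (accuracy + rng.integers(0, 2, 200)) > 1  # partly tracks accuracy

    def wagering_score(accuracy, high_wager, win=2, loss=-2, low=1):
        # High wagers gain `win` when correct and `loss` when wrong;
        # low wagers earn a small fixed amount (payoffs hypothetical).
        return int(np.where(high_wager, np.where(accuracy == 1, win, loss), low).sum())

    def confidence_accuracy_correlation(accuracy, high_wager):
        # How well wagers track accuracy; lower values = worse metacognition.
        return np.corrcoef(accuracy, high_wager.astype(float))[0, 1]

    print(wagering_score(accuracy, high_wager))
    print(confidence_accuracy_correlation(accuracy, high_wager))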


26.407 Awareness-related activity in prefrontal and parietal cortices reflects more than superior performance capacity: A blindsight case study
Matthew Davidson 1 (matthew@psych.columbia.edu), Navindra Persaud 2, Brian Maniscalco 1, Dean Mobbs 3, Richard Passingham 4, Alan Cowey 4, Hakwan Lau 1,5; 1 Department of Psychology, Columbia University, 2 Faculty of Medicine, University of Toronto, 3 MRC Cognition and Brain Sciences Unit, Cambridge University, 4 Department of Experimental Psychology, Oxford University, 5 Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen
Background: Brain imaging studies on visual awareness have often reported activity in prefrontal and parietal cortices. One interpretation could be that such activity reflects the capacity to perform visual tasks, which is usually high when one is aware of the stimulus and at near-chance level when one is unaware. The opportunity to study a blindsight patient allowed us to test this interpretation by creating performance-matched conditions with dramatic differences in subjectively reported levels of awareness. Patient GY has a damaged primary visual cortex in the left hemisphere (“blind”), with the right side being relatively intact (“normal”). It is well documented that he can perform forced-choice tasks better than chance in his “blind” visual field. Methods: We presented gratings of strong contrast to his “blind” field, and of weak contrast to his “normal” field, such that performance in a spatial 2AFC task was matched between the two. We assessed the level of awareness by standard subjective report (“Seeing” vs “Guessing”), confidence rating, and post-decision wagering. All of these measures supported the conclusion that the level of awareness differed dramatically, even though performance capacity was matched between the “blind” field and “normal” field stimulations. Results: 1. Comparing “normal” field vs. “blind” field stimulations, we found robust activations in the prefrontal and parietal cortices. 2. Whereas accuracy in the “normal” field (correct vs. incorrect trials) was driven by activity in the occipital and temporal cortices (sometimes bilaterally), accuracy in the “blind” field was driven mainly by subcortical activity, with a clear lack of activation in the occipital cortex. Conclusions: 1. Activity in prefrontal and parietal cortices is likely to reflect the ability to monitor and report perceptual certainty appropriately, rather than just superior visual performance capacity. 2. Blindsight is supported by subcortical mechanisms, as previously suggested.

26.408 Blindsight and enumeration: A case study
James Reed Jones 1 (jjones04@uoguelph.ca), Don Dedrick 2, Lana Trick 1; 1 Psychology, College of Social and Applied Human Sciences, University of Guelph, 2 Philosophy, College of Arts, University of Guelph
“Blindsight” is a term first coined by Weiskrantz et al. in 1974 to describe residual visual performance in the cortically blind. It has been postulated that blindsight could be due to the retinotectal pathway projecting information past V1 to later cortical structures. Our interest was in whether the pathways responsible for blindsight could also support enumeration. We tested a 53-year-old male, C.H., who had suffered a medial right occipital lobe stroke seven months earlier. The stroke resulted in an upper left homonymous quadrantanopia. In order to determine whether there were any residual abilities in the blind field, we first tested basic detection and discrimination skills. C.H. was able to determine whether or not a 7° x 7° visual angle object was presented in his blind field with near-perfect accuracy, though his accuracy was only 90% for smaller (4.6° x 2.2°) figures. C.H. also discriminated between large X’s and O’s and between horizontal and vertical bars with good accuracy when the figures were large (78% for X vs. O and 73% for horizontal vs. vertical bars). Throughout these tests C.H. staunchly maintained that he could not see the stimuli and that he was only guessing. Because it was clear that C.H. had some residual ability in his blind field, we tested his enumeration. The enumeration task involved 1-3 items. These items were 1.5° x 1.5° black diamonds, 0-2 in his blind field and 1-3 in his non-blind field. Across all conditions, C.H. was able to use the information in his blind field and identify the total number of items presented at levels of performance that were well above chance. Because C.H. appears to use blind-field information to enumerate, we postulate that enumeration ability may be mediated by the same structures that support blindsight.

26.409 Unconscious Activation of the Prefrontal No-Go Network
Simon van Gaal 1,2 (s.vangaal@uva.nl), Richard Ridderinkhof 2, Steven Scholte 1, Victor Lamme 1; 1 Cognitive Neuroscience Group, Department of Psychology, University of Amsterdam, 2 Amsterdam center for the study of adaptive control in brain and behavior (Acacia), Department of Psychology, University of Amsterdam
How “intelligent” is the unconscious fast feedforward sweep?
To test this issue, we used functional magnetic resonance imaging to investigate the potential depth of processing of unconscious information in the human brain. We devised a new version of the Go/No-Go task that included conscious (weakly masked) No-Go trials, unconscious (strongly masked) No-Go trials, as well as Go trials. Replicating typical neuroimaging findings, we observed that response inhibition on conscious No-Go trials was associated with a (mostly right-lateralized) “inhibition network”, including the pre-supplementary motor area (pre-SMA), the anterior cingulate cortex, the middle, superior and inferior frontal cortices, as well as parietal cortex. Here we demonstrate, however, that an unconscious No-Go stimulus can also travel all the way up to the prefrontal cortex, most prominently the inferior frontal cortex and the pre-SMA. Interestingly, if it does so, it brings about a substantial slow-down in the speed of responding, as if participants tried to cancel their response but just failed to do so completely. The strength of activation in the “unconscious inhibition network” correlated with the extent of unconsciously triggered RT slowing, which suggests that the observed prefrontal activations are truly “functional”. These results expand our understanding of the limits and depths of the unconscious fast feedforward sweep of information processing in the human brain.

26.410 Functional specialisation in Supplementary Motor Area (SMA): A functional imaging test of the spatial vector transformation hypothesis
Stephen Johnston 1 (s.johnston@bangor.ac.uk), Charles Leek 1; 1 School of Psychology, Bangor University, UK
A recent debate in the literature concerns the extent to which the supplementary motor area is dedicated to the planning of motor responses. It has been suggested that this region may play a more general role through the calculation of spatial vector transformations. Evidence for this is provided by a number of non-motor studies, such as mental rotation tasks, where the activation in this region is more closely tied to the calculation of spatial relations than to motor demands (see Leek & Johnston, Nature Reviews Neuroscience, 10, 78, 2009). Here we present a series of functional imaging experiments that attempt to further elucidate the functional properties of the supplementary motor area by contrasting the demands placed on this region by a variety of motor and non-motor tasks that make use of spatial vector transformations in different ways. The results indicate that the anterior subdivision of SMA responds more strongly to demands associated with non-motor transformation tasks, whereas the more posterior regions of SMA respond more strongly to motor-based tasks. The experiments are discussed in terms of the spatial vector transformation hypothesis.

26.411 Functional specialization in Supplementary Motor Area (SMA): Evidence from visuo-spatial transformation deficits in Parkinson’s disease
Charles Leek 1 (e.c.leek@bangor.ac.uk), R. Martyn Bracewell 1,2, John Hindle 1,2, Stephen Johnston 1; 1 School of Psychology, Bangor University, Bangor, UK, 2 School of Medical Sciences, Bangor University, Bangor, UK
Leek & Johnston (2009, Nature Reviews Neuroscience, 10, 78-79) have suggested that one function of the anterior (pre-) SMA in humans is the computation of abstract visuo-spatial vector transformations.
According to this hypothesis, pre-SMA should be involved in any visual task that requires the transformation or remapping of a spatial location (vector), regardless of whether there is a motor component to the task. We have previously examined this using functional brain imaging of pre-SMA during the performance of visuo-spatial transformation tasks (e.g., mental rotation, mental grid navigation) and non-transformational tasks (VSTM for static spatial locations, non-spatial numerical operations) – see Johnston & Leek, 2010, Vision Sciences Society. Here we report evidence from studies that tested this hypothesis using data from Parkinson’s disease (PD) patients. One known aspect of the underlying pathology of PD is the consequent effect of dopamine depletion in the basal ganglia upon the functioning of medial frontal cortex. Thus, PD provides a good model for studying SMA dysfunction and its effects on visuo-spatial processing. The spatial vector transformation hypothesis predicts that PD patients, with impaired SMA function, are likely to exhibit deficits on tasks that require spatial vector transformation. The results showed that, as predicted, PD leads to impairments on transformational but not on non-transformational tasks. These findings support the vector transformation hypothesis and suggest that regions of the SMA are involved in highly abstract visuo-spatial computations that go beyond the preparation and planning of movement. Indeed, these findings suggest that the SMA supports abstract visuo-spatial processes that are potentially recruited in a wide range of visual and spatial tasks.

Perceptual learning: Specificity and transfer
Orchid Ballroom, Boards 412–426
Saturday, May 8, 2:45 - 6:45 pm

26.412 Interference and feature specificity in visual perceptual learning
Li-Hung Chang 1,2 (clhhouse@bu.edu), Yuko Yotsumoto 1,2,3, Jose Nanez 4, Takeo Watanabe 1, Yuka Sasaki 2,3; 1 Department of Psychology, Boston University, 2 Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 3 Department of Radiology, Harvard Medical School, 4 Department of Social and Behavioral Sciences, Arizona State University


Perceptual learning (PL), defined as experience-dependent performance improvement on a visual feature, often shows specificity to the trained feature. Recently it was reported that when one type of task is trained, followed by a similar but different type of task, PL on the first task is disrupted, with specificity to some trained features. Systematic investigation of the relationship between PL specificity and interference may lead to a better understanding of the PL mechanism. In the present study, we examined whether feature specificity is related to interference in the texture discrimination task (TDT), which shows learning specificity to the orientation of background elements but not to that of target elements. We conducted a series of experiments in which the orientation of target elements or background elements was manipulated under two types of training paradigms, blocked or roving, with 36 participants. First, we found that TDT learning was interfered with when orientations of background elements were changed in the blocked paradigm but not in the roving paradigm. Second, changes in the orientation of target elements resulted in the reverse effect: TDT learning occurred in the blocked paradigm but not with roving. Given that TDT learning is specific to background element orientation but not to target element orientation, these results indicate that interference in TDT learning (blocked) is feature specific, while that is not the case for roving. These results provide important implications regarding the mechanisms of TDT learning and interference. First, learning of background element orientation in TDT, and its disruption, may mainly involve a low-level stage of visual processing. Second, it may be either that learning of target element orientation is not requisite for TDT learning, or that learning of target element orientation mainly involves a higher stage of visual information processing, given that roving is suggested to impede more central stages.
Acknowledgement: Supported by NIH-NEI R21 EY018925, NIH-NEI R01 EY015980-04A2 and NIH-NEI R01 EY019466 to TW. YY is supported by the Japan Society for the Promotion of Science. YS was supported by the ERATO Shimojo Implicit Brain Function Project, Japan Science and Technology Agency.

26.413 Task transfer effects of contrast training and perceptual learning
Denton J. DeLoss 1 (ddelo001@student.ucr.edu), Jeffrey Bower 1, George J. Andersen 1; 1 Department of Psychology, University of California, Riverside
Previous research has shown transfer of perceptual learning (PL) training with contrast and orientation across different locations in the visual field (Xiao et al., 2008). The current study examined the cross-transfer of PL training with orientation and contrast within a specific location in the visual field. Subjects were given 4 days of testing and PL training. A two-interval forced-choice procedure was used with two sequentially presented displays. In each display subjects were presented with a centrally located letter and a Gabor patch located in one quadrant (both embedded in noise). Subjects were required to indicate a change in the letter, to control for eye fixation. The Gabor patch was oriented horizontally or vertically (the standard) or was tilted off horizontal or vertical. Subjects were required to indicate whether the first or second display contained the tilted orientation. All stimuli were presented with 40% of the pixels replaced with 2-dimensional Gaussian noise.
26.414 Task-specific perceptual learning of texture identification
Zahra Hussain1 (zahra.hussain@nottingham.ac.uk), Allison B. Sekuler2,3, Patrick J. Bennett2,3; 1School of Psychology, University of Nottingham, 2Dept. of Psychology, Neuroscience & Behaviour, McMaster University, 3Centre for Vision Research, York University
Previous studies have shown that practice lowers identification thresholds for textures embedded in noise, and that such perceptual learning is stimulus-specific and long lasting. Here we ask whether better detection of the relevant signal is sufficient to improve identification, or whether experience in distinguishing the textures is necessary. In other words, is perceptual learning of texture identification task specific, or does it simply reflect improved stimulus detectability? Separate groups of subjects practiced texture detection or identification on Day 1; on Day 2 they performed either the same task as on Day 1, or transferred to the untrained task. Stimuli were 10 briefly (200 ms) presented band-limited noise textures embedded in static noise. Detection performance was measured with a Yes/No task, and identification performance was measured with a 10-AFC task. Texture contrast was varied across 7 levels using the method of constant stimuli; feedback was provided on each trial. We calculated d' at each contrast level to compare psychometric functions across tasks and days. On Day 1, psychometric functions spanned the full performance range for both tasks. On Day 2, after practice with the same task, texture identification improved substantially at all contrasts, whereas texture detection improved only slightly. There was no transfer of learning from detection to identification, and some evidence for transfer in the opposite direction. Therefore, better detection of the stimulus is not sufficient to improve identification of noisy patterns. Learning requires telling the patterns apart: perceptual learning of texture identification is task-specific as well as stimulus-specific.
Acknowledgement: Natural Sciences and Engineering Research Council of Canada (NSERC)
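The d' computation referred to above is standard for a Yes/No task: d' = z(hit rate) - z(false-alarm rate). A minimal sketch, using made-up trial counts rather than data from the study:

    from statistics import NormalDist

    def d_prime(hits, misses, false_alarms, correct_rejections):
        """d' = z(hit rate) - z(false-alarm rate); a log-linear correction
        keeps both rates strictly between 0 and 1."""
        z = NormalDist().inv_cdf
        hit_rate = (hits + 0.5) / (hits + misses + 1)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
        return z(hit_rate) - z(fa_rate)

    # Made-up counts for one contrast level of a Yes/No detection block.
    print(round(d_prime(hits=42, misses=8, false_alarms=12,
                        correct_rejections=38), 2))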
26.415 Learning like an expert: a training study on the effects of visual noise in fingerprint identification
Bethany Jurs1 (jursb@uwstout.edu); 1University of Wisconsin - Stout
Studies in visual expertise focus on identifying mechanisms that allow experts to be more efficient and effective at identifying their items of expertise. Many visual expert groups, such as fingerprint examiners, are required to identify and characterize their items of expertise under visually demanding conditions. Therefore, one potential mechanism that develops could be a learned relative immunity to the effects of visual noise. To investigate this, the present study used a combination of EEG and behavioral measures to determine the effects of visual noise across the course of an extensive visual training paradigm. Participants were trained on a fingerprint identification tracing task that simulates the real-world training fingerprint examiners undergo. Throughout the course of training, participants performed a separate XAB task in which they were instructed to identify which of two clear print images matched a test image presented in bandpass-filtered noise. To distinguish any training effects from those caused by repeated exposure to the XAB task, a separate control group that did not undergo any additional training was also measured. Results show significant overall behavioral improvements across both groups for the low visual demand condition, and improvements only in the training group for the high visual demand condition. However, EEG data show a qualitative and quantitative shift in the P250 component in opposite directions between the control and training groups across both conditions. This interaction provides evidence that training causes a change in neural processing that is separate from that which results from repeated exposure. These results have potential implications for both the perceptual learning and visual expertise literatures and point to the possibility of different types of learning styles that develop for visually complex stimuli in degraded conditions.

26.416 Transfer of object learning across distinct visual learning paradigms
Annelies Baeck1,2 (annelies.baeck@psy.kuleuven.be), Hans Op de Beeck1; 1Laboratory of Biological Psychology, University of Leuven (K.U.Leuven), 2Laboratory of Experimental Psychology, University of Leuven (K.U.Leuven)
Perception and identification of visual stimuli improve with experience. This applies to both simple stimuli and complex objects, as shown in perceptual learning paradigms in which visual perception is challenged by degrading the stimuli, e.g., by backward masking or adding simultaneous noise. In each of these paradigms, perceptual learning is specific for the stimuli used during training. However, there can also be differences between paradigms, because they challenge the visual processing system in different ways. It is thus possible that paradigm-specific processes are needed to optimize performance. This would result in a failure of perceptual learning effects to generalize across paradigms. Here we present the first study designed to investigate whether visual object learning is specific to the type of stimulus degradation used during training. Sixteen participants were trained to recognize and name pictures of common objects. The stimulus set included 40 object images, but each participant was trained on only half of them. Half of the participants were trained in a backward masking paradigm, and the other half in a simultaneous noise addition paradigm. After five days, performance thresholds were measured in four tests: (1) the trained paradigm with the 20 trained objects, (2) the trained paradigm with the 20 new, untrained objects, (3) the untrained paradigm with the trained objects, and (4) the untrained paradigm with new objects. Both groups showed a learning effect that increased gradually across days. These training effects were specific for the trained objects. In addition, an object-specific transfer to the untrained paradigm was found. The group trained in the simultaneous noise addition paradigm showed a complete transfer of performance to the backward masking task. The transfer was only partial when reversed. These findings indicate that both general learning processes and processes specific to the type of stimulus degradation are involved in perceptual learning.
Acknowledgement: This work was supported by the Research Council of K.U.Leuven (CREA/07/004), the Fund for Scientific Research - Flanders (1.5.022.08), and by the Human Frontier Science Program (CDA 0040/2008).


26.417 Perceptual learning with bisection stimuli can only be disrupted on a short timescale
Kristoffer C. Aberg1 (kristoffer.aberg@epfl.ch), Michael H. Herzog1; 1Laboratory of Psychophysics, Brain Mind Institute, Ecole Polytechnique Federale de Lausanne (EPFL)
We previously showed that perceptual learning was disrupted when bisection stimuli with short outer-line distances were presented randomly interleaved trial-by-trial with bisection stimuli with long outer-line distances, so-called roving. Here, we went to the other extreme and presented short and long bisection stimuli in separate, directly following sessions. It has been previously reported that such a presentation regime disrupts consolidation of perceptual learning. However, we found no disruption of learning, neither with our bisection stimuli nor with the same experimental setup used in a previous study. We propose that consolidation in perceptual learning with simple stimuli is either rapid or cannot be disrupted.
Acknowledgement: Pro*Doc [Processes of Perception of the Swiss National Fund (SNF)]

26.418 Explicit and implicit learning in motion discrimination tasks
Mariagrazia Benassi1 (mariagrazia.benassi@unibo.it), Sara Giovagnoli1, Roberto Bolzani1; 1Department of Psychology, University of Bologna
Perceptual learning has been studied as a mechanism by which people learn automatically and implicitly. Alternatively, learning can occur explicitly, mediated by conscious feedback which controls and guides the subjects' performance. The aim of this study was to examine whether explicit and implicit learning produce different patterns of results in a visual motion discrimination task. We exposed 12 participants to four different sessions. A preliminary training session consisted of 120 trials of a motion discrimination task repeated three times on different days to measure explicit learning. The subject had to discriminate among 4 directions in a motion test presented at 10% coherence (considered para-threshold).
A pre-test established the baseline as the average of correct responses in the motion discrimination task for each direction. Implicit learning was then tested using the classic paradigm of task-irrelevant perceptual learning (TIPL) (Seitz and Watanabe, 2003), in which learning is mediated by subliminally pairing one selected direction with specific targets of an unrelated training task. This phase consisted of 120 trials repeated 7 times over three days. After that, a post-test similar to the pre-test indexed the implicit learning effect. In explicit learning, the improvement was relatively poor and not significant. In implicit learning, the effect was clearly significant for the trained direction, while the subjects did not improve their performance in the other directions. The stronger learning effect obtained with TIPL suggests that visual motion learning can benefit more from direct lower-level processing than from mediated attention mechanisms related to explicit learning.

26.419 Learning to discriminate face view
Nihong Chen1 (cnh@pku.edu.cn), Taiyong Bi1, Qiujie Weng1, Dongjun He1, Fang Fang1; 1Department of Psychology, Peking University
Although perceptual learning of simple visual features has been studied extensively and intensively for many years, we still know little about the mechanisms of perceptual learning of complex object recognition (e.g., faces). In a series of seven experiments, human perceptual learning in discrimination of the orientation of face view was studied using psychophysical methods. We trained subjects to discriminate face orientations around a side view of a face (e.g., 30 deg) over eight days, which resulted in a dramatic improvement in sensitivity to face view orientation. This improved sensitivity was highly specific to the trained face side view and persisted for six months. Unlike perceptual learning of simple visual features, this view-specific learning effect could transfer strongly across changes in retinal location, face size and face identity. A strong transfer also occurred between two partial face images that were mutually exclusive but together constituted a complete face. However, the transfer of the learning effect between upright and inverted faces, and between faces and paperclip objects, was very weak. These results shed light on the mechanisms of perceptual learning of face view discrimination. They suggest a large amount of plastic change at a level of higher visual processing where size-, location- and identity-invariant face views are represented, but not at a level of early visual processing or cognitive decision.
Acknowledgement: the National Natural Science Foundation of China (Projects 30870762, 90920012 and 30925014)

26.420 Promoting generalization by hindering policy learning
Jacqueline M. Fulvio1,2 (fulvi002@umn.edu), C. Shawn Green1,2, Paul R. Schrater1,2,3; 1Department of Psychology, University of Minnesota, 2Center for Cognitive Sciences, University of Minnesota, 3Department of Computer Science, University of Minnesota
A pervasive question in perceptual and motor learning concerns the conditions under which learning transfers. In reinforcement learning, an agent can learn either a policy (i.e., a mapping between states and actions) or a predictive model of future outcomes from which the policy can be computed online. The former is computationally less expensive, but is highly specific to the given task/goal, while the latter is computationally more expensive, but allows the agent to know the proper actions to take even for novel goals. Policy learning is appropriate when forward look-ahead is not required and the number of policies to be learned is small, while model learning is appropriate under the opposite conditions. Therefore, by manipulating these factors in a given task, the degree of transfer should be predictably altered as well. The current study tests this hypothesis with a navigation task requiring subjects to steer an object through a novel flow field to reach visible targets as quickly as possible. We vary the predictive component of the task by manipulating the amount of control subjects have over the object. Half steer the object for the entire duration of the experiment, which favors policy learning, while the rest lose control intermittently, which adds a look-ahead component to the task and favors model learning. We vary the number of policies to be learned by manipulating the number of target locations the subject reaches, where large numbers are expected to favor model learning. Half have only two target locations to reach while the rest have twelve. Performance on transfer tasks, where the environment is held constant but the goal is altered, is better for those subjects trained under conditions that favor model learning. These results suggest that developing training tasks that discourage simple policy learning is critical if generalization is desired.
Acknowledgement: ONR N 00014-07-1-0937
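To make the policy-versus-model contrast above concrete, a toy sketch follows; the one-dimensional world and its parameters are invented for illustration and are unrelated to the study's steering task. A cached policy maps states to actions and is cheap but goal-specific, while an agent holding a transition model can re-plan for a novel goal.

    from collections import deque

    # Toy 1-D world: states 0..6, actions step left (-1) or right (+1).
    STATES = range(7)
    def model(state, action):             # known transition model
        return max(0, min(6, state + action))

    def plan_with_model(start, goal):
        """Breadth-first search over the transition model: works for any goal."""
        frontier, seen = deque([(start, [])]), {start}
        while frontier:
            s, path = frontier.popleft()
            if s == goal:
                return path
            for a in (-1, +1):
                nxt = model(s, a)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [a]))

    # A cached policy learned for goal=6: always "go right".
    policy = {s: +1 for s in STATES}

    print(plan_with_model(2, 6))   # model-based agent reaches the trained goal
    print(plan_with_model(2, 0))   # ...and can re-plan for a novel goal
    print(policy[2])               # the policy outputs +1 regardless of the goal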
26.421 The Role of Sleep in Implicit Statistical and Rule Learning
Kimberly MacKenzie1 (kjmack@brandeis.edu), Jozsef Fiser2; 1Neuroscience Program, Brandeis University, Waltham MA 02454, 2Volen Center for Complex Systems and Department of Psychology, Brandeis University, Waltham MA 02454
Statistical learning is an established method of measuring implicit knowledge gained through observation. Rule learning employs a similar paradigm, but the knowledge gained is assumed to be more abstract and explicit. These two forms of learning have been considered separate mechanisms, and little is known about how their representations are stored in long-term memory or whether sleep provides a benefit for consolidation of such representations, as has been found in many implicit procedural learning tasks. The current study examines whether sleep benefits both statistical and implicit rule learning in a similar manner, and whether a short practice before test offers greater explicit insight into the underlying rules, as had been reported previously for abstract numerical rules. In our experiments, subjects first observed scenes of arbitrary shapes arranged as triplets repeated in random order for two minutes. The triplets contain a simple statistical structure, as particular triplets of shapes always appear together in fixed order, and two embedded rules: a size rule following an AAB pattern (small-small-large), and a color rule following ABA (dark-light-dark). After a twelve-hour delay, either overnight or over the day, subjects were tested on their knowledge of both the statistical structure and the size rule. A subset of subjects also completed a short "reminder" practice session before test. We found that the simple statistical structure was retained after twelve hours during the day, but performance was improved by sleep (Day, M=64.6; Night, M=79.3). Rule knowledge was not retained, but emerged after sleep (Day, M=53.6; Night, M=61.8). However, a short practice session before test did not provide greater access to the implicitly learned rules (Day, M=54.7). These results indicate that sleep benefits the implicit knowledge gained during statistical and rule learning in a similar manner, but does not necessarily lead to improvement in the discovery of explicit rules.


26.422 Laterality-Specific Perceptual Learning on Gabor Detection
Nestor Matthews1 (matthewsn@denison.edu), Jenna Kelly1; 1Department of Psychology, Denison University
Introduction: Several studies have demonstrated visual performance advantages for stimuli distributed across the left and right hemi-fields (bilateral stimulation) versus stimuli restricted entirely within one lateral hemi-field (unilateral stimulation) (Awh & Pashler, 2000; Alvarez & Cavanagh, 2005; Chakravarthi & Cavanagh, 2009; Reardon, Kelly, & Matthews, 2009). In the present perceptual learning study, we investigated the extent to which practice-based improvements in peripheral Gabor detection are specific to, versus generalize across, bilateral and unilateral training regimens. Method: Twenty Denison University undergraduates completed the study. The independent variables were training group (bilateral training versus unilateral training), session (pre versus post), Gabor target laterality (bilateral versus unilateral), and Gabor distracter (present versus absent). Each trial began with a pair of bilateral or unilateral cues indicating the peripheral positions (14.55 deg diagonally from fixation) at which a Gabor target would appear, if present. Half the trials contained Gabor distracters positioned between cued target positions. After correctly identifying a foveally flashed letter, participants judged whether a Gabor target had been present or absent at either cued peripheral position. For each participant, bilateral and unilateral performance was measured before and after five laterality-specific (i.e., bilateral only or unilateral only) training sessions. Results: Signal detection analyses revealed laterality-specific improvements in the proportion of hits (i.e., "present" responses on target-present trials) when distracters were present (F(1,18) = 8.833, p = 0.008, partial eta-squared = 0.329), but not when distracters were absent (F(1,18) = 0.002, p = 0.963, partial eta-squared


ity was also improved in some children in both groups. Currently a number of children from the two groups are doing E-acuity training, and the results will be reported as part of the abstract. Our preliminary results indicate that training-induced grating acuity improvement is mainly shown in the non-patching group, and that this improvement is not directly related to letter acuity improvement, probably as a result of different mechanisms underlying grating acuity and letter identification, which is inconsistent with previous reports. Moreover, perceptual training alone may not be a sufficient treatment for older amblyopic children. Rather, it works best for children with a previous patching history.

26.426 Transfer of perceptual learning to completely untrained locations after double training
Rui Wang1 (heartygrass@gmail.com), Jun-Yun Zhang1, Stan Klein2, Dennis Levi2, Cong Yu1; 1State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, 2School of Optometry, UC Berkeley
Visual perceptual learning can transfer completely to a new location if the new location is trained with an irrelevant task (Xiao et al., Current Biology, 2008). This feature-plus-location double training result suggests that perceptual learning may occur in non-retinotopic brain areas, and that learning transfers as spatial attention to the new location is improved through location training. What is unknown is how double training affects performance in other completely untrained locations. We found that (1) Vernier learning was normally location and orientation specific. However, if Vernier was trained at the V and H orientations, one orientation at the upper left visual field (ori1_loc1) and the other at the lower left visual field (ori2_loc2), each serving as location training for the other orientation, learning transferred not only completely to the trained locations (ori1_loc2 & ori2_loc1), but also equally to the completely untrained visual quadrants in the right visual field (ori1_loc3/ori2_loc3). (2) Similar results were found when Vernier was trained at one quadrant and motion direction was trained at a diagonal quadrant as location training, in which case Vernier learning transferred to the diagonal quadrant, as well as to other completely untrained quadrants at the same (5 deg) and different (10 deg) eccentricities. (3) However, in a texture discrimination task (Karni & Sagi, 1991), although the usually location-specific learning could transfer to a diagonal quadrant when the transfer location was trained with detecting an array of ovals among circles, learning transferred less significantly to a third completely untrained quadrant. The first two experiments suggest that the observers may have learned the strategy to attend to a peripheral target in a clear field after double training. However, the third experiment indicates that training for precise spatial attention is still required for learning to transfer to a completely untrained location in a cluttered area.
Acknowledgement: Natural Science Foundation of China grant 30725018

Motion: Mechanisms and Illusions
Orchid Ballroom, Boards 427-438
Saturday, May 8, 2:45 - 6:45 pm
26.427 Detection of radial frequency motion trajectories
Charles C.-F. Or1 (cfor@yorku.ca), Michel Thabet1, Hugh R. Wilson1, Frances Wilkinson1; 1Centre for Vision Research, York University, Toronto, Ontario, Canada
Humans are extremely sensitive to radial deformations of static circular contours (Wilkinson, Wilson, & Habak, 1998, Vision Research). Here we investigate the detection of motion trajectories defined by these radial frequency (RF) patterns over a range of radial frequencies, using the method of constant stimuli combined with a two-interval forced-choice (2IFC) paradigm. The stimulus was a radially symmetric difference-of-Gaussians blob (peak spatial frequency: 2.74 cpd, bandwidth: 1.79 octaves at half amplitude) moving around the trajectory defined by an invisible RF pattern (motion RF), or by a circle of equivalent mean radius, for one complete revolution. The observer's task was to identify the interval containing the motion RF as a function of deformation amplitude; threshold was defined as 75% correct performance. Radial frequencies of 2-5 cycles were tested at a mean radius of 1.0 arc deg and a mean rotation speed of 3.14 arc deg/s (2.0 s for a complete revolution). Detection thresholds ranged from 0.8-4.6 arc min and followed a power function of radial frequency with an average exponent of -1.48. This decreasing trend was consistent with that found for static RFs, although detection thresholds for motion RFs were significantly higher (0.2-0.5 arc min for static RFs). Whether the sensitivity to motion RFs depends on local cues or global shape is currently under investigation. Importantly, we showed that these novel stimuli should be a useful tool for investigating trajectory learning and discrimination.
Acknowledgement: This work was supported by CIHR Grant #172103 and NSERC Grant #OP227224 to F.W. and H.R.W., and the CIHR Training Grant in Vision Health Research to M.T.
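Radial frequency trajectories of the kind described above are commonly written as a sinusoidal modulation of radius with polar angle, r(theta) = r0 * (1 + A * sin(f * theta + phase)). A minimal sketch of sampling such a trajectory, with assumed example parameters rather than the study's exact values:

    import math

    def rf_trajectory(r0=1.0, amplitude=0.05, freq=3, phase=0.0, n=360):
        """Sample one revolution of a radial-frequency (RF) trajectory:
        the radius is sinusoidally modulated as the point sweeps polar angle."""
        points = []
        for i in range(n):
            theta = 2 * math.pi * i / n
            r = r0 * (1 + amplitude * math.sin(freq * theta + phase))
            points.append((r * math.cos(theta), r * math.sin(theta)))
        return points

    # amplitude=0 gives the circular (undeformed) comparison trajectory.
    circle = rf_trajectory(amplitude=0.0)
    rf3 = rf_trajectory(amplitude=0.05, freq=3)
    print(circle[0], rf3[0])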
26.428 No impact of luminance noise on chromatic motion perception
David Nguyen-Tri1 (david.nguyen-tri@umontreal.ca), Rémy Allard2, Jocelyn Faubert1; 1École d'optométrie, Université de Montréal, 2Laboratoire Psychologie de la Perception, Université Paris Descartes
The purpose of the present experiments was to investigate the mechanism underlying the perception of chromatic motion. In Experiment 1, we measured contrast thresholds in a direction discrimination task at temporal frequencies (TFs) ranging from 1 to 16 Hz. Results show a bandpass sensitivity function for luminance motion and a lowpass function for chromatic motion, with greater sensitivity to chromatic motion at TFs below 4 Hz, roughly equal sensitivities at 4 Hz, and greater sensitivity to luminance motion at TFs above 4 Hz. In Experiment 2, a direction discrimination task was used to measure contrast thresholds for luminance and chromatic motion as a function of noise contrast in two conditions: an intra-attribute condition (luminance signal and noise, chromatic signal and noise) and an inter-attribute condition (luminance signal with chromatic noise, chromatic signal with luminance noise). Analysis of threshold-versus-noise-contrast curves in the intra-attribute condition shows different calculation efficiencies and levels of internal equivalent noise for luminance and chromatic motion direction discrimination. Inter-attribute noise failed to produce an increase in contrast thresholds at any TF. This shows a double dissociation between colour and luminance motion processing. Taken together, the results of Experiments 1 and 2 indicate that chromatic motion and luminance motion are processed by distinct mechanisms, and are consistent with the notion that chromatic motion is processed by a tracking mechanism. Further experiments will investigate the mechanism underlying chromatic motion processing at higher TFs.
Acknowledgement: NSERC Essilor chair
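Equivalent-noise analyses of threshold-versus-noise-contrast curves are often based on a linear-amplifier-style model in which squared threshold grows linearly with external noise power. The sketch below fits that generic form to invented data points; it is one common formulation, not necessarily the authors' exact model.

    # Linear-amplifier-style equivalent-noise model:
    #   threshold^2 = k * (N_ext^2 + N_eq^2)
    # where N_eq is the internal equivalent noise and k is inversely
    # related to calculation efficiency. Data below are invented.
    ext_noise = [0.0, 0.05, 0.10, 0.20]        # external noise contrasts
    thresholds = [0.021, 0.028, 0.045, 0.083]  # measured contrast thresholds

    def fit(ext, thr):
        """Brute-force least-squares fit of (k, N_eq) on a coarse grid."""
        best = None
        for k_i in range(1, 400):
            k = k_i / 1000.0
            for n_i in range(1, 200):
                n_eq = n_i / 1000.0
                err = sum((t**2 - k * (e**2 + n_eq**2))**2
                          for e, t in zip(ext, thr))
                if best is None or err < best[0]:
                    best = (err, k, n_eq)
        return best[1], best[2]

    k, n_eq = fit(ext_noise, thresholds)
    print("k = %.3f, equivalent noise = %.3f" % (k, n_eq))

A higher fitted N_eq for one stimulus class than another indicates more internal noise, while a higher k indicates lower calculation efficiency; the double dissociation above concerns which noise source raises which thresholds.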


26.429 Motion adaptation affects perceived shape
Paul Hibbard1 (pbh2@st-andrews.ac.uk), Peter Scarfe2, Michelle Robertson1, Stacey Windeatt1; 1School of Psychology, University of St Andrews, 2Cognitive, Perceptual and Brain Sciences Research Department, University College London
Adaptation to a moving image causes subsequently presented stationary stimuli to appear to move in the opposite direction. Motion adaptation also affects the perceived location and size of stimuli. After adaptation to motion in one direction, the positions of subsequently presented static stimuli appear shifted in a direction opposite to that of the adapting stimulus (Snowden (1998), Current Biology, 8, 1343-1345; Nishida & Johnston (1999), Nature, 397, 610-612). Similarly, adaptation to expanding and contracting motion has been shown to alter the perceived size of objects consistent with the direction of the motion after-effect (Whitaker, McGraw & Pearson (1999), Vision Research, 39, 2999-3009). Here we demonstrate that adaptation to motion also affects the perceived shape of stimuli. Observers were presented with rectangular stimuli and asked to determine whether, in comparison with a square, they appeared stretched or squashed in the vertical direction. Judgements were made in three conditions: (i) a baseline, with no adaptation; (ii) after adaptation to a vertically expanding pattern of motion, during which dots above the horizontal midline moved upwards and dots below the horizontal midline moved downwards; and (iii) after adaptation to a vertically compressing pattern of motion, during which dots above the horizontal midline moved downwards and dots below the horizontal midline moved upwards. In each case, the point of subjective equality (the apparently square rectangle) was calculated to determine whether perceived shape was affected by motion adaptation. Adaptation to vertically compressing motion caused subsequently presented rectangles to appear stretched in the vertical direction; adaptation to expanding motion had no effect on perceived shape. We conclude that, in addition to its effects on apparent motion, position and size, motion adaptation can also affect perceived shape.

26.430 Phantom motion aftereffect using multiple-aperture stimuli: A dynamic Bayesian model
Alan L. F. Lee1 (alanlee@ucla.edu), Hongjing Lu1,2; 1Department of Psychology, UCLA, 2Department of Statistics, UCLA
Using random-dot kinematograms, previous studies found that the motion aftereffect (MAE) exists not only in the area of adaptation, but also in the non-adapted visual field, a phenomenon termed the "phantom MAE". The present study examined whether the phantom MAE also exists for a stimulus comprised of multiple drifting gratings, and to what extent the MAE can affect local motion processing. In addition, we developed a computational account of the phantom MAE within the framework of Bayesian sequential learning. In Experiment 1, an adapting stimulus exhibited global motion via randomly-oriented drifting gratings in two non-adjacent quadrants. In a subsequent testing stimulus, drifting gratings were shown either in the adapted areas (to measure the concrete MAE) or in non-adapted areas (to measure the phantom MAE). For translational, circular and radial adapting motion, the phantom MAE was found to be significant, although weaker than the concrete MAE. The existence of the phantom MAE demonstrates that the motion aftereffect is processed in a global manner. In Experiment 2, random motion flow was assigned to the testing stimuli. Grating orientations in the testing stimuli were sampled from uniform distributions (range = ±30 degrees), centered on directions either orthogonal or parallel to the illusory motion direction. Observers were asked to discriminate motion direction after adapting to coherent or random motion stimuli. Responses to testing stimuli with orthogonal orientations differed from responses to parallel orientations only after adapting to coherent motion, not after adapting to random motion, indicating a top-down influence of the MAE on local motion processing. A dynamic Bayesian model was developed to quantify adaptation-induced changes in multiple motion channels, each selective for a specific velocity. The simulation showed that prolonged exposure to a moving stimulus changed the width of the tuning function differently for different motion channels. The model predicted the existence of the phantom MAE, as well as the qualitative difference between phantom and concrete MAEs.
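As a toy illustration of Bayesian sequential learning of the kind invoked above, a posterior over motion direction can be updated recursively, with each observation's posterior becoming the next prior. The discretization and likelihood below are assumptions for the example, not the authors' model:

    import math

    DIRECTIONS = list(range(0, 360, 10))        # discretized motion directions

    def von_mises_like(obs_deg, mean_deg, kappa):
        """Unnormalized circular likelihood of an observed direction."""
        return math.exp(kappa * math.cos(math.radians(obs_deg - mean_deg)))

    def sequential_update(observations, kappa=2.0):
        """Recursive Bayes: each posterior becomes the next prior."""
        prior = {d: 1.0 / len(DIRECTIONS) for d in DIRECTIONS}
        for obs in observations:
            post = {d: prior[d] * von_mises_like(obs, d, kappa)
                    for d in DIRECTIONS}
            z = sum(post.values())
            prior = {d: p / z for d, p in post.items()}   # normalize, reuse
        return prior

    posterior = sequential_update([90, 85, 95, 90, 100])  # noisy upward drift
    print(max(posterior, key=posterior.get))              # MAP direction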
26.431 The Accordion Grating illusion measured by a nulling paradigm
Enrico Giora1 (enrico.giora@gmail.com), Simone Gori2, Arash Yazdanbakhsh3,4, Ennio Mingolla3; 1Department of Psychology, University of Milano-Bicocca, Italy, 2Department of General Psychology, University of Padua, Italy, 3Cognitive and Neural Systems Department, Boston University, MA, USA, 4Neurobiology Department, Harvard Medical School, Boston, MA, USA
Dynamic viewing of an elementary square-wave grating by moving toward it creates the Accordion Grating illusion. Observers report a non-rigid perceptual distortion of the grating including two illusory effects: (i) an expansion only perpendicular to the stripes when moving the head towards the pattern, and (ii) distortion of the physically straight stripes of the grating. While the illusory expansion perpendicular to the stripes can be explained by the interactions between ambiguous and unambiguous motion signals generated at line interiors and line ends, a differential geometry model with a 3-D representation of the classical aperture problem is here proposed to account for the illusory curvature. Four subjects were tested in a nulling psychophysical experiment. The expected perceptual curvature was balanced by a physical counter-distortion calculated by the model. The amount of physical curvature necessary to nullify the illusion led to a precise quantification of the illusion and verified the proposed model.

26.432 Spatial scaling for the Rotating Snakes illusion
Rumi Hisakata1 (hisakata@fechner.c.u-tokyo.ac.jp), Ikuya Murakami1; 1Dept. of Life Sciences, University of Tokyo
In the Rotating Snakes illusion, vigorous motion is perceived in a static figure comprised of repetitive luminance micropatterns: black, dark-gray, white, and light-gray. Our previous study (Hisakata & Murakami, 2008) showed that the illusion strength increased with eccentricity, implying that motion processing units related to this illusion have preferred stimulus sizes that vary systematically with eccentricity. To investigate the quantitative details of the effect of eccentricity on the illusion, we measured the illusion strength while manipulating stimulus size and eccentricity. An array of micropatterns arranged as a ring with a strip width of 2 deg rotated about the fixation point. The background was filled with static random noise. After the stimulus ring was presented for 500 ms, subjects answered whether the ring appeared to rotate clockwise or counter-clockwise. The size of each micropattern was manipulated by changing the number of micropatterns per ring, and the eccentricity was manipulated by changing the radius of the stimulus ring. Results indicated that the illusion strength, i.e., the physical velocity that just nulled the illusory motion, decreased with decreasing size and reached a minimum at a particular size at each eccentricity. We applied the spatial scaling technique to the illusion strength and found that all data converged onto a single function when both stimulus size and nulling velocity were scaled according to scaling factors that were a linear function of eccentricity. Compared with the scaling factors in previous studies, the estimated scaling factors for the Rotating Snakes illusion were analogous to those estimated for contrast detection thresholds and motion detection thresholds, both believed to reflect cortical architecture at early stages of the visual system. These results suggest that processing units at early stages internally produce motion signals related to illusory motion from the retinal image of the stimulus for the Rotating Snakes illusion.
Acknowledgement: Supported by MEXT #20020006
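The spatial scaling technique mentioned above is typically implemented by dividing the measured quantities by a factor that grows linearly with eccentricity, F(E) = 1 + E/E2, and checking that data from all eccentricities collapse onto a single function. A minimal sketch with an assumed E2 value and invented readings:

    def scale_factor(eccentricity_deg, e2=2.0):
        """Linear eccentricity scaling, F(E) = 1 + E/E2.
        E2 (the eccentricity where the factor doubles) is an assumed value."""
        return 1.0 + eccentricity_deg / e2

    # Hypothetical (size, nulling velocity) readings at two eccentricities.
    data = {4.0: [(1.0, 0.6), (2.0, 0.4)],
            8.0: [(1.67, 1.0), (3.33, 0.67)]}

    for ecc, points in data.items():
        f = scale_factor(ecc)
        scaled = [(s / f, v / f) for s, v in points]
        print(ecc, [(round(s, 2), round(v, 2)) for s, v in scaled])
    # With the right E2, the scaled (size, velocity) pairs line up across
    # eccentricities, i.e., the data collapse onto one function.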
26.433 fMRI adaptation to anomalous motion in the "Rotating Snakes" patterns
Hiroshi Ashida1 (ashida@bun.kyoto-u.ac.jp), Ichiro Kuriki2, Ikuya Murakami3, Akiyoshi Kitaoka4; 1Graduate School of Letters, Kyoto University, 2Research Institute of Electrical Communication, Tohoku University, 3Department of Life Sciences, University of Tokyo, 4Department of Psychology, Ritsumeikan University
Stationary patterns with a specially designed repetitive structure, such as the "Rotating Snakes" (Kitaoka & Ashida, 2003), can elicit illusory perception of motion. Using a conventional fMRI contrast, we have shown that the "Rotating Snakes" figure activates human MT+ (Kuriki et al., 2008). Activity in V1 was not evident, either because motion signals arise within MT+ (Thiele et al., 2004), or because our motionless control stimulus (for comparison of BOLD signals), which consisted of the same local patterns, might have elicited local motion signals (while globally cancelled out). In this study, we used an fMRI adaptation paradigm that does not require an explicit control stimulus, in order to assess direction-selective responses to the "Rotating Snakes" pattern in the visual areas. Four disks comprising repetitive patterns of white-yellow-black-blue were used. They appeared to rotate in this direction when viewed naturally. After an adapting stimulus (S1) followed by a blank interval, a probe stimulus (S2) of either the same or the reversed color order (hence eliciting illusory motion in the same or the opposite direction) was presented. The spatial phase was altered between S1 and S2 to avoid local coincidence. The fixation mark was blurred to relax fixation to some extent, because strict fixation can abolish the illusory motion. Attention was controlled by a fixation task. Regions of interest were defined for each participant by separate localizer runs. A 3-T scanner (Siemens Trio Tim) was used. Event-related averages of time courses revealed larger BOLD responses for reversed S2 than for same S2 in MT+, indicating direction-specific adaptation. The difference was smaller but evident in V1-V4 and V3A. The overall results suggest that local motion sensors in V1 are indeed activated by the illusion figure, which is in line with most currently proposed models.
Acknowledgement: JSPS Grants-in-Aid for Scientific Research B20330149

26.434 Is the Rotating Snakes an Optical Illusion?
Christopher R. L. Cantor1,2 (dalek@berkeley.edu), Humza J. Tahir1,2, Clifton M. Schor1,2; 1Program in Vision Science, University of California at Berkeley, 2School of Optometry, University of California at Berkeley
The "rotating snakes" (Kitaoka, 2003) is a well-known illusion in which a static image of a repetitive pattern moves as it is examined during free viewing (the apparent movement ceases after several seconds of stable fixation). We have discovered that it is possible to eliminate this illusion by viewing the image either through a pinhole or defocused by a +2D plus lens. We posit a largely optical, rather than neural, explanation of the effect. The optics of the eye are neither uniform over visual space nor stationary over time. Under natural viewing conditions (>3 mm pupil) the MTF varies significantly when measured at different visual eccentricities. Fluctuations in accommodation create temporal variations in the magnitude of defocus of the retinal image. Viewing the image through pinholes or a plus lens produces uniformity in the MTF regardless of eccentricity, and also reduces or eliminates the impact of temporal fluctuations of accommodation on retinal image quality. The "rotating snakes" illusion can be viewed online at http://www.ritsumei.ac.jp/~akitaoka/rotsnake.gif


Saturday Afternoon PostersVSS 2010 AbstractsSaturday PMThe “rotating snakes” illusion can be viewed online at http://www.ritsumei.ac.jp/~akitaoka/rotsnake.gif26.435 Minimum motion threshold correlates with the fixationinstability of the more wobbling eyeIkuya Murakami 1 (ikuya@fechner.c.u-tokyo.ac.jp); 1 Department of Life <strong>Sciences</strong>,University of TokyoEven if we look at a stationary object with maintained fixation, the eyesare actually making tiny random oscillations. Previously a correlation wasfound between the minimum detection threshold for unreferenced motionand fixation instability, such that observers with poorer fixation performanceshad higher thresholds (Murakami, 2004). Therefore fixation instabilityis a part of internal noise that limits motion perception. Which of thetwo eyes is more influential? To answer this question, the minimum motionthreshold was measured in binocular viewing and was compared with thefixation instabilities of the two eyes. Within a blurred window at 0 and8.5 deg eccentricities, a random-dot pattern moved in one of eight possibledirections differing by 45 deg. The threshold was determined as the speedcorresponding to the correct response rate of 53.3% in direction identification.Fixational eye movements of each observer were recorded and theSD of microsaccade-free instantaneous velocities was taken as the index offixation instability. Inter-observer correlations were based on these data for56 normal adults. The thresholds at both eccentricities positively correlatedwith the fixation instability of both eyes, duplicating the previous finding.Interestingly, the positive inter-observer correlation became more evident(r = 0.5, p


Eye movements: Smooth pursuit
Orchid Ballroom, Boards 439-446
Saturday, May 8, 2:45 - 6:45 pm

26.439 Integration of motion information for smooth pursuit during multiple object tracking (MOT)
Zhenlan Jin1 (jin@ski.org), Scott Watamaniuk2, Aarlenne Khan3, Stephen Heinen1; 1The Smith-Kettlewell Eye Research Institute, 2Wright State University, 3Queen's University
Previously, we showed that observers could simultaneously perform multiple object tracking (MOT) and pursue the array of MOT targets without loss of performance on either task (Watamaniuk et al., 2009). We proposed that local motion information is maintained for MOT, while pursuit uses a velocity signal obtained by integrating the local motion of the MOT elements. However, in that work, the MOT array always moved at the same constant velocity, and therefore a predictive pursuit movement could maintain eye velocity without requiring integration. Here, we test this by changing the speed of the array at a random time. The MOT stimulus was composed of nine dots (0.2 deg diameter) that moved randomly within a virtual region measuring 8 by 8 deg for 3 sec. Observers were initially cued as to which four of the dots to remember by a brief color change. All dots returned to the same color until the end of the trial, when one dot again changed color and had to be identified as a member of the cued set or not. The array began moving from left to right at 7 deg/sec. In 60% of trials, the array speed was randomly increased to 11.5 deg/sec or decreased to 2.3 deg/sec for 500 msec, beginning 700-2000 msec after motion onset. Observers pursued the array either with or without performing the MOT task. We found that eye velocity changed in response to the array speed change after a normal latency (112 msec mean) regardless of whether the MOT task was performed, and MOT performance was unimpaired. The results suggest that local motion information is continuously integrated for pursuit even when individual, non-consistent motion signals are attended, supporting simultaneous access to global and local motion signals for pursuit and MOT tasks.

26.440 Anticipatory smooth pursuit eye movements in response to global motion
Elio M. Santos1 (santos86@eden.rutgers.edu), Martin Gizzi2, Eileen Kowler1; 1Department of Psychology, Rutgers University, 2NJ Neuroscience Institute at JFK Medical Center, Seton Hall University
Anticipatory smooth pursuit eye movements in the direction of expected target motion can be elicited by symbolic visual or auditory cues (Kowler, 1989). Stimuli in these studies, small targets moving against dark or structured backgrounds, convey the perceptual impression of an object moving across space. Are such interpretations necessary for anticipatory pursuit? We studied random-dot kinematograms (RDKs), composed of dots with very brief lifetimes. RDKs can be pursued (Schütz et al., VSS 2009), but they generate global motion signals rather than the percept of discrete moving objects. RDKs were composed of dots (4') moving coherently (2.5 deg/s) either up, down, right or left. Dot lifetimes were 52, 104, or 208 ms. The direction of motion was cued by either: (1) the spatial offset of the initial stationary fixation stimulus away from screen center, opposite to the direction of upcoming target motion; or (2) a tone whose frequency indicated whether dots moving downward would change direction to either down-right or down-left.
Uncued motion directions, and unlimited-lifetime dots, were also tested. Anticipatory smooth eye movements (ASEM) were prominent. Neither their onset time nor their velocity depended on dot lifetime. By contrast, both peak eye velocity and steady-state pursuit gain varied with lifetime. Steady-state gains with the longest lifetime approached 1. Gains with the shortest lifetime ranged from 0.1 to 0.7, depending on subject and direction of motion. These results show that anticipatory smooth pursuit eye movements can be elicited by global motion and do not require representations of an object moving across space. Dot lifetime did not affect anticipatory eye movements, but did affect steady-state pursuit. These differential effects of dot lifetime suggest that the study of pursuit of random-dot kinematograms may be a useful way to dissociate the response to expected and immediate target motion.
Acknowledgement: NSF 0549115

26.441 Compensation for equiluminant chromatic motion during smooth pursuit
Masahiko Terao1 (masahiko_terao@mac.com), Ikuya Murakami1; 1Department of Life Sciences, University of Tokyo
When we move our eyes, the world appears to remain stable. The visual system reconstructs a stable world in spite of the motion of the retinal image resulting from eye movements. During smooth pursuit eye movements, the retinal image moves in the opposite direction. To compensate for such retinal image slip, the visual system is arguably comparing retinal image velocity and estimated eye velocity. Our interest was whether equiluminant motion is compensated in a similar way by velocity comparison. According to the conventional view that color and motion are processed through separate neural pathways, compensation for color motion could have different properties. Alternatively, as we argued previously (VSS, 2007), early processes for luminance and color motion mediated by the magnocellular and parvocellular pathways might feed into a common velocity comparator. It is also unclear whether S-cone chromatic modulation, for which the koniocellular pathway is suggested to be responsible, is also compensated by similar velocity comparison. We measured the retinal image velocity required to reach subjective stationarity for a sinusoidal grating, using chromatic modulations defined in a cone-contrast color space in which two axes correspond to the chromatic tunings of LGN neurons (L-M axis and S axis). The grating drifted at various velocities. Results indicated that the retinal velocities at the point of subjective stationarity for both L-M-axis and S-axis chromatic modulations were faster than that for luminance modulation. Equiluminant chromatic motion is known to appear to move more slowly than luminance stimuli (e.g., Cavanagh et al., 1984).
Our results suggest that the speed reduction for equiluminant motion mediated by both the parvocellular and koniocellular pathways takes place at an early processing level with retinocentric coordinates, followed by a velocity comparison in which the reduced retinal image velocity is compared with estimated eye velocity to compensate for equiluminant motion during smooth pursuit.
Acknowledgement: Supported by MEXT #20020006

26.442 Pursuit eye movements on visual illusions
Vincent Sun1 (sun@alumni.uchicago.edu), Ming-Chuan Fu2; 1Department of Information Communications, Chinese Culture University, 2Department of Visual Communication Design, Jinwen University of Science and Technology
Dissociations between visual perception and visually guided action have long been suggested (Goodale & Haffenden, 1998). Studies have shown that saccadic eye movements are guided by the real physical stimuli rather than by the perceived visual illusions. In the present research, we explored which guides pursuit eye movements: physical stimuli or visual illusions. In the case of the Hering illusion, where radiating lines induce a perceived curvature in a physically straight line, a ViewPoint PC60 video eye tracker was used to record eye movements while observers pursued a red dot target moving along a straight line, a Hering illusory curve (physically a straight line), or a curve whose curvature matched the Hering illusion. The results showed that the eye scanning paths when pursuing a target moving along an illusory curve are more similar to those from viewing a real curve than to those from viewing a straight line, which is what the illusory curve physically is. This suggests that the visual illusion guides pursuit eye movements in this case. We then applied a similar paradigm to test the Wundt, Müller-Lyer, and Ebbinghaus illusory patterns, which exhibit illusory curvatures, line-segment lengths, and sizes, respectively. By analyzing the gaze paths of pursuing targets moving along those illusory rails, we found that the scanning paths followed the illusory rather than the real physical patterns. The results suggest that pursuit eye movements may be guided by the "what" rather than by the "how" visual information processing stream.
Acknowledgement: NSC 98-2410-H-034-036, National Science Council, Taiwan

26.443 Temporal Integration of Focus Position Signal during Compensation for Pursuit in Optic Flow
Jacob Duijnhouwer1,2 (jacob@vision.rutgers.edu), Bart Krekelberg1, Albert van den Berg2, Richard van Wezel3,4; 1Center for Molecular and Behavioral Neuroscience, Rutgers University Newark, NJ, USA, 2Functional Neurobiology, Utrecht University, The Netherlands, 3Biomedical Signals and Systems, Twente University, The Netherlands, 4Psychopharmacology, Utrecht University, The Netherlands


The pattern of motion on the retina, or optic flow, during smooth pursuit eye movements is the difference between the instantaneous motion in the scene, caused for example by self-motion, and that of the eyes. Previous studies have shown that the optic flow component caused by pursuit is partially removed from the percept. In those studies, the distorting effect of pursuit on the focus location has been attributed to the vector addition of the instantaneous pursuit velocity. For example, the expanding flow resulting from forward self-motion plus the laminar flow resulting from pursuit leads to a shift of the focus of expansion in the pursuit direction. However, during pursuit the focus also gradually moves over the retina in the direction opposite to the pursuit, and the potential effect of this on the perceived focus location is disregarded in the instantaneous vector-sum account. In a different field of study, it has been shown that the momentary position of a moving target is misestimated in the direction opposite to the direction of target motion, probably because observers report the target's average position over a time interval prior to locating the target. Here we present evidence that this temporal integration effect also plays an important role in locating the focus. We presented expanding, contracting and rotating flow fields during pursuit and asked observers to report the position of the focus. We found that the mislocalization pattern bore signatures of both the vector-sum account, which predicts shifts in different directions for each flow type, and the temporal integration account, which predicts shifts in the pursuit direction for all flow types. Additional experiments, in which the presentation duration, flow speed, and uncertainty of the focus location were manipulated, consolidated the idea that this novel component of focus shift indeed reflects temporal integration.
Acknowledgement: Funded by The Pew Charitable Trusts (BK), the National Institutes of Health R01 EY17605 (BK), and The UU High Potential Grant (RW).
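The instantaneous vector-sum account described above can be written directly: retinal flow is scene flow plus a uniform component opposite to the eye movement, so the zero-velocity point (the focus) of an expanding field shifts in the pursuit direction. A sketch with assumed toy parameters:

    # Expanding flow about the origin: v(x, y) = rate * (x, y).
    # Pursuit at velocity (px, py) adds a uniform retinal component
    # (-px, -py), so the retinal focus (zero-velocity point) shifts
    # to (px/rate, py/rate), i.e., in the pursuit direction.
    def retinal_flow(x, y, rate, pursuit):
        px, py = pursuit
        return (rate * x - px, rate * y - py)

    rate = 2.0               # expansion rate (1/s), assumed toy value
    pursuit = (4.0, 0.0)     # rightward pursuit, deg/s
    focus = (pursuit[0] / rate, pursuit[1] / rate)
    print("shifted focus:", focus)                        # (2.0, 0.0)
    print("flow at shifted focus:",
          retinal_flow(*focus, rate, pursuit))            # (0.0, 0.0)

The temporal integration component discussed in the abstract adds a second shift on top of this one, opposite to the gradual retinal drift of the focus; the two accounts therefore make different predictions for contracting and rotating fields.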
26.444 Bayesian analysis of perceived motion during smooth pursuit eye movement
Tom C.A. Freeman1 (freemant@cardiff.ac.uk), Rebecca A. Champion1, Paul A. Warren2; 1School of Psychology, Cardiff University, 2School of Psychological Sciences, University of Manchester
We have known for over a century that estimates of pursuit speed are typically lower than estimates of retinal speed. The benchmark is the Aubert-Fleischl phenomenon, the name given to the perceived slowing of moving objects when they are pursued. Many other pursuit-related phenomena can be accounted for in a similar way: stationary objects appear to move (the Filehne illusion), motion trajectories are misperceived, perceived heading oscillates (the slalom illusion), and slant increases. When compared quantitatively, the slowing required to explain these phenomena is remarkably consistent. Estimates of pursuit speed are evidently lower than estimates of retinal speed, but why? Recent Bayesian accounts of retinal motion processing may provide an answer. These rely on a zero-motion prior that reduces estimates of speed for less reliable motion signals. This would explain the phenomena above if the signals underlying eye-velocity estimates were less precise, a prediction we tested by comparing speed discrimination during pursuit (P) and fixation (F). Using a standard 2AFC task, we found that trials containing P-P intervals were harder to discriminate than F-F intervals. We also found that speed matches for F-P intervals revealed a strong perceived slowing of pursued stimuli (the Aubert-Fleischl phenomenon). A control experiment showed that poorer P-P discrimination was not due to the absence of relative motion. We used a Bayesian observer to fit psychometric functions to the entire data set. The model consisted of a measurement stage (a single nonlinear speed transducer plus two sources of noise), followed by a Bayes estimator (with the SD of the prior free to vary). The model fit the data well. In order to explain the other phenomena listed above, however, the model demonstrates that estimates of retinal motion and pursuit must be added after the Bayes estimation stage. Adding signals beforehand (at the measurement stage) cannot predict changes in velocity, just speed.
Acknowledgement: The Wellcome Trust
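The zero-motion prior invoked above has a simple closed form for Gaussian noise: the posterior-mean speed estimate is the measurement shrunk toward zero by a reliability weight, so a noisier signal is perceived as slower. A sketch with illustrative values, not the authors' fitted parameters:

    def bayes_speed_estimate(measured, sigma_meas, sigma_prior):
        """Posterior mean for a Gaussian likelihood N(measured, sigma_meas^2)
        combined with a zero-mean Gaussian prior N(0, sigma_prior^2):
        the estimate shrinks toward zero as measurement noise grows."""
        w = sigma_prior**2 / (sigma_prior**2 + sigma_meas**2)
        return w * measured

    true_speed = 10.0   # deg/s, toy value
    print(bayes_speed_estimate(true_speed, sigma_meas=1.0, sigma_prior=4.0))
    # ~9.4: a precise (retinal-like) signal is barely slowed
    print(bayes_speed_estimate(true_speed, sigma_meas=3.0, sigma_prior=4.0))
    # ~6.4: a noisier (eye-velocity-like) signal is perceived as slower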
26.445 A recurrent Bayesian model of dynamic motion integration for smooth pursuit
Amarender Bogadhi1 (amar.bogadhi@incm.cnrs-mrs.fr), Anna Montagnini1, Pascal Mamassian2, Laurent Perrinet1, Guillaume Masson1; 1Team DyVA, INCM, CNRS & Université de la Méditerranée, Marseille, France, 2LPP, CNRS & Paris Descartes, Paris, France
The quality of the estimate of an object's global motion over time is affected not only by the noise in the motion information but also by the spatial limitation of the local motion analyzers (the aperture problem). Perceptual and oculomotor data demonstrate that during the initial stages of motion information processing, 1D motion cues related to the object's edges have a dominating influence over the estimate of the object's global motion. During the later stages, however, 2D motion cues related to terminators (edge endings) progressively take over, leading to a final correct estimate of the object's global motion. Here, we propose a recursive extension to the Bayesian framework to describe the dynamic integration of 1D and 2D motion information. In the recurrent Bayesian framework, a prior defined in velocity space is combined with two independent measurement likelihood functions (representing edge-related and terminator-related information) to obtain the posterior. The prior is updated with the posterior at the end of each iteration step. The recurrent Bayesian network is cascaded with a first-order filter to mimic oculomotor dynamics in the final output of the model. The oculomotor dynamics were tuned with single blobs moving in 8 different directions. The model parameters were fitted to human smooth pursuit recordings for different stimulus parameters (speed, contrast) across three subjects. The model results indicate that, for a given velocity, latency decreases with increasing contrast, and that, for a given contrast, acceleration increases with increasing velocity, similar to what is observed in smooth pursuit recordings. Also, the latency for a tilted line is shorter than the latency for the blob.
Acknowledgement: CODDE project (EU Marie Curie ITN), CNRS

26.446 Oculoceptive fields for smooth pursuit eye movements
Kurt Debono1 (kurt.debono@psychol.uni-giessen.de), Alexander C. Schütz1, Karl R. Gegenfurtner1; 1Department of Psychology, Justus-Liebig-University, Giessen, Germany
When confronted with several moving objects, the smooth pursuit system has to integrate information over a region of the visual field to determine the direction of the eye movements. We spatially mapped the influence of different motion vectors with the ultimate goal of finding an 'oculoceptive field' of the pursuit system. We asked subjects to pursue a random-dot pattern consisting of 20% correlated signal dots moving rightward or leftward at 10 deg/s. The pattern was presented inside a circular window with a radius of 20 degrees of visual angle. A perturbation was then added to the pattern, consisting of additional correlated dots moving at an angle offset obliquely upwards or downwards from the pattern direction. The perturbation angle was varied between 5 and 90 deg, and the perturbation was present throughout the duration of the stimulus in one of 5 regions forming a gaze-contingent circular window with a 10 deg outer radius and a 2 deg inner radius. The effect of the perturbation was to deflect the pursuit from the horizontal pattern motion direction into the perturbation direction. The perturbation had the largest effect when the angular difference from the pattern motion direction was small (up to 10 deg), even though the vertical component of the perturbation was much larger for larger angular differences. Perturbations with an angular offset larger than 10 deg tended to be discarded, indicating that this integration is not a simple vector summation process. Rather, motion signals close to the pursuit direction seem to be weighted much more heavily than others. Perturbations presented behind the pursuit target had effects similar to those ahead of the pursuit target. Our results indicate that the analysis of visual motion during smooth pursuit is focused on the direction of ongoing pursuit. The oculoceptive field for pursuit is centered on the pursuit target.
Acknowledgement: This work was supported by the CODDE EU training network and the DFG Forschergruppe FOR 560 "Perception and Action"


Memory: Encoding and retrieval
Orchid Ballroom, Boards 447–460
Saturday, May 8, 2:45 - 6:45 pm

26.447 Measuring the accuracy and precision of visual representations in validly and invalidly spatially pre-cued visual working memory
Wilson Chu1 (wchu3@uci.edu), Barbara Anne Dosher1, Zhong-Lin Lu2; 1Memory, Attention, Perception Lab (MAP-Lab), Department of Cognitive Sciences, University of California, Irvine, CA 92697 USA, 2Laboratory of Brain Processes (LOBES), Department of Psychology, University of Southern California, Los Angeles, CA 90089, USA
Valid pre-cuing of spatial attention improves visual identification, especially in the presence of visual noise in the stimulus (Dosher & Lu, 2000). The current study used orientation-report methods to measure the accuracy and precision of stimulus representations in visual working memory (VWM) (Zhang & Luck, 2008) following either a valid or an invalid location pre-cue. Gabor patches of varying contrast appeared at each of four corners at 5 deg eccentricity about fixation. The Gabors were selected from 20 orientations spaced 9° apart. External noise was added on half the trials, with the contrast of the Gabors adjusted accordingly. One of the four locations was pre-cued 150 ms before the oriented Gabors, and the delayed report cue indicating the to-be-reported location then appeared 800 ms after the Gabors. Observers clicked on a Gabor from a 20-orientation palette to report the orientation. The precision and accuracy of the visual representation were measured through the spread of responses about the orientation of the to-be-reported stimulus. The pre-cue was valid on 5/8 of trials, while another location was cued for report on 3/8 of trials. Reports on validly cued trials, which could have supported encoding earlier in the delay interval, showed higher accuracy and good precision about the correct orientation. Performance in reporting the correct orientation was dramatically reduced on invalidly cued trials, where the report-cued location was unpredictable until 800 ms after the stimulus, and observers' reports were more broadly tuned, incorporating more guessing errors. As in spatially cued attention, invalid cuing was especially damaging in high external noise for a number of observers. Several observers showed very poor performance for invalidly cued trials in high external noise. These data can be considered a mixture of good-precision encoding and guessing, with the availability of the processes dependent on cuing and on external noise.
Acknowledgement: NIMH
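The mixture description in the final sentence follows the general form of the Zhang & Luck (2008) model cited above. A minimal sketch of such a fit, using simulated errors rather than the study's data, is:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import vonmises

# Response errors are modeled as a mixture of a von Mises centred on the
# target orientation (in-memory reports, probability p_mem, concentration
# kappa) and a uniform guessing component.
def neg_log_likelihood(params, errors):
    p_mem, kappa = params
    pdf = p_mem * vonmises.pdf(errors, kappa) + (1 - p_mem) / (2 * np.pi)
    return -np.sum(np.log(pdf))

rng = np.random.default_rng(1)
errors = rng.vonmises(0.0, 5.0, 200)          # simulated report errors (radians)
fit = minimize(neg_log_likelihood, x0=[0.8, 2.0], args=(errors,),
               bounds=[(0.01, 0.99), (0.1, 50.0)])
p_mem, kappa = fit.x
print(f"P(in memory) = {p_mem:.2f}, precision (kappa) = {kappa:.1f}")
```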
26.448 Figure-ground perception is impaired in medial temporal lobe amnesia
Morgan D. Barense1 (barense@psych.utoronto.ca), K.W. Joan Ngo1, Mary A. Peterson2; 1Department of Psychology, University of Toronto, 2Department of Psychology, University of Arizona
Amnesia resulting from medial temporal lobe lesions is traditionally considered to be a selective deficit in long-term declarative memory. In contrast to this view, recent studies suggest that high-level perceptual processing may also be compromised in the disorder (e.g., Lee et al., 2005; Barense et al., 2007). Here, we tested figure-ground segmentation in two densely amnesic patients with focal lesions to the medial temporal lobes resulting from herpes simplex viral encephalitis. For each display, two adjacent regions shared a contour, and participants reported whether they perceived the left or the right region as the figure (e.g., Peterson et al., 2000). In experimental stimuli, the central contour portrayed a familiar object on one, high-denotative, side. In control stimuli, no known objects were portrayed on either side of the central contour, but one side was a part-scrambled version of one of the high-denotative regions. Relative to age- and education-matched controls, the patients failed to show effects of familiarity on figure assignment, with neither patient reporting seeing the figure on the high-denotative side of the edge any more often than on the matched scrambled side. The lack of a difference arose because the patients were highly likely to see both the part-scrambled and the high-denotative regions as figure. Moreover, both patients identified less than half of the familiar objects they saw as figures. The pattern of performance suggests that the patients may have been responding on the basis of the familiarity of the individual features of the objects, rather than on the basis of the overall familiar configuration of the object as a whole. These results suggest that fast access to familiar configurations and conscious object recognition of portions of figures may be impaired in medial temporal lobe amnesia.
Acknowledgement: Natural Sciences and Engineering Research Council of Canada

26.449 Frequency of exposure modulates cortical activity in the contextual associations network
Elissa Aminoff1 (aminoff@psych.ucsb.edu), Moshe Bar2; 1University of California, Santa Barbara, 2Martinos Center for Biomedical Imaging at MGH
Objects are typically encountered embedded in a context with other objects, rather than appearing in isolation. The parahippocampal cortex (PHC) and the retrosplenial complex (RSC) are major components in a network that is more active for stimuli with such typical contextual associations compared with stimuli with weak contextual associations. Contextual processing provides a bridge between previous research that ascribed spatial functioning to the PHC and RSC and other research demonstrating that these areas mediate episodic memory. Here we aimed to enrich this bridge by asking what the effect of frequency of occurrence is on activation in this network. The idea was that highly contextual objects that are encountered more often would elicit more associations than objects with weak contextual associations, and even more than other highly contextual objects that are not encountered as frequently. Participants viewed four types of objects in an fMRI scanning session: strongly contextual objects that are encountered frequently, strongly contextual objects that are encountered rarely, weakly contextual objects that are encountered frequently, and weakly contextual objects that are encountered rarely. Both strength of contextual associations and frequency of occurrence were determined using surveys. First, as has been shown before, activation in the context network demonstrated a strong main effect of context, whereby activation increased significantly for strong context compared with weak context. More importantly, both the RSC and the PHC were more active for frequent objects than for rare objects, supporting our context-related hypothesis.
We will discuss the difference between the exposure effect on RSC and on PHC, and will also tie exposure to the potential role of the prefrontal cortex in the interaction between context, number of associations, and frequency of occurrence.
Acknowledgement: Supported by NIH NS050615 and NSF 0842947

26.450 How does occlusion affect search and memory processes for targets and distractors?
Carrick Williams1 (cwilliams@psychology.msstate.edu); 1Department of Psychology, Mississippi State University
Occlusion of visual details is a ubiquitous problem in real-world visual search. The lack of visual details requires that a searcher either process incomplete objects or amodally complete objects in order to identify the target of the search. The current study had participants search for conjunction targets among pictures of real-world objects that were not occluded or were occluded 25%, 50%, or 75%. In addition to the level of occlusion, the type of occluder was manipulated: the occluder was either a visible multi-colored mask or matched the background color (an invisible occluder). Search was more difficult as the level of occlusion increased, but the effect was limited to the higher occlusion levels. Participants were also more accurate, but no faster, in searches with a visible occluder compared to an invisible occluder, indicating that the ability to attribute the disrupted visual information to a visible occluding element was advantageous. Following the search, participants' memories for the search objects encountered were tested. Interestingly, memory for target objects was more affected by increasing levels of occlusion than memory for distractor objects, especially at higher levels of occlusion. However, whether the object had been occluded by a visible occluder or not had no effect on memory. In a separate experiment, the ability of participants to remember which portion of the object had been occluded was tested by presenting partially occluded objects and testing the occluder's location. Participants were near, but reliably above, chance performance at remembering the location of the occluder on distractor objects. However, participants were significantly better at locating the occluder on target objects. The memory results may indicate differences in how amodal completion affects target and distractor object visual memory representations, in that targets may have more precise representations compared to more abstract representations of distractors.


26.451 Wait a few seconds: Newly learned spatial statistics enhance visual short-term memory
D. Alexander Varakin1 (avarakin@knox.edu), Melissa R. Beck2; 1Department of Psychology, Knox College, 2Department of Psychology, Louisiana State University
The current experiments investigated how learned spatial statistics affect visual short-term memory (VSTM). In all experiments, a VSTM task was used in which a sample array was presented, followed by a 1000 ms delay, followed by a test probe. Participants indicated whether the probe was present in the sample array. Sample arrays consisted of six novel shapes, similar to those used in previous studies of visual statistical learning. Structured arrays consisted of base pairs, i.e., pairs of shapes that always appeared in the same relative spatial positions (e.g., shape-A always appears above shape-B). In unstructured arrays, shapes were presented in random locations, with the restriction that the global configuration was one that could appear in the structured arrays. In Experiment 1, sample arrays were presented for 2000 ms. Performance was better on structured arrays, suggesting that spatial statistics can be used to enhance VSTM. Participants could also recognize the base pairs at the end of the experiment. In Experiment 2, sample array inspection time was reduced to 500 ms. All effects of spatial structure were eliminated, consistent with the idea that learning spatial statistics depends on inspection time. In Experiment 3, participants passively viewed the structured and unstructured arrays prior to performing the VSTM task (with 500 ms inspection time). As in Experiment 2, performance on the VSTM task was equivalent for structured and unstructured arrays. However, unlike Experiment 2, participants could recognize the base pairs at the end of the experiment. These experiments suggest that visual statistical learning does not affect the basic units of VSTM. If visual statistical learning affected VSTM's units, then performance should have been better on structured arrays in Experiment 3. Thus, these findings suggest that information in visual long-term memory (VLTM) can supplement limited-capacity VSTM, as long as enough time is available to access VLTM.

26.452 Unexpected events, predictive eye movements, and imitation learning
Abigail Noyce1,2 (anoyce@brandeis.edu), Jessica Maryott1, Robert Sekuler2; 1Department of Psychology, Brandeis University, 2Volen Center for Complex Systems, Brandeis University
W. Schulz proposed that prediction errors can facilitate the learning of sequential behaviors. We assessed this proposal, asking how prediction errors alter the fidelity with which a remembered motion sequence is reproduced. Subjects viewed a sequence five times, each time reproducing what they had just seen. Each sequence comprised six quasi-randomly directed motions. Because eye movements are influenced by cognitive factors such as learning and expectation, we supplemented measures of reproduction fidelity with measures of eye movements made while subjects viewed each sequence of motions. Beginning with the second viewing of a sequence, tracking eye movements showed clear anticipation of the upcoming motions. Thus, eye movements provide a sensitive indicator of subjects' knowledge of and expectations for complex, quasi-random sequences of motions. To determine the influence of prediction errors, a novel motion direction was occasionally injected into a well-learned sequence.
This unexpected, deviant motion transiently disrupted eye movements, requiring large, corrective catch-up saccades. However, reproduction of this motion sequence was equivalent to that of well-learned, non-deviant sequences, and was greatly improved over reproduction of entirely novel sequences. These results undermine claims that eye movements provide a substrate crucial to visuomotor learning. Immediately after a perturbed sequence, a final presentation either reinstated the original, well-learned sequence or preserved the deviant motion. On this final presentation, eye movements anticipated the reappearance of the deviant motion component; when this anticipation was correct, catch-up saccades were smaller and the velocity of smooth pursuit was higher. Motion sequence reproduction was more accurate when subjects' expectation was violated and the original sequence appeared than when this expectation was confirmed. So, one presentation of an unexpected, deviant motion produces strong learning for that component, and violating expectations improves subjects' ability to reproduce a motion sequence.
Acknowledgement: Supported in part by CELEST, an NSF Science of Learning Center (SBE-0354378), and by NIH Training Grant T32GM084907.

26.453 Neural basis for monitoring of multiple features-location binding: an event-related functional magnetic resonance imaging study
Sachiko Takahama1,2,3 (takahama@fbs.osaka-u.ac.jp), Izumi Ohzawa1,3, Yoshichika Yoshioka4,1,3, Jun Saiki5; 1Graduate School of Frontier Biosciences, Osaka University, 2Kobe Advanced ICT Research Center, National Institute of Information and Communications Technology, 3CREST, Japan Science and Technology Agency, 4Immunology Frontier Research Center, Osaka University, 5Graduate School of Human and Environmental Studies, Kyoto University
Functional magnetic resonance imaging (fMRI) studies using a multiple object permanence tracking task (MOPT; Saiki, 2003) or a multiple object tracking task have reported the involvement of the frontoparietal network and the inferior precentral sulcus (infPreCS) in the monitoring of location or feature-location binding. In general, many objects have multiple features; therefore, coherent object representation requires monitoring multiple feature-location bindings. To investigate whether enhanced activity in the previously reported neural network, or activity in additional regions, contributes to the monitoring of multiple feature-location binding, we used event-related fMRI with an MOPT paradigm and compared brain activity among different tasks using the same visual information. Visual objects were defined by 4 sets of a tilted black bar embedded in a colored circle. We prepared 3 change types: color (2 colored circles were swapped), orientation (2 tilted bars were swapped), and conjunction (2 colored circles and tilted bars were swapped). Depending on the change type to be monitored, we prepared 2 types of tasks: single feature-location binding tasks (monitoring only single feature-location binding) and a triple conjunction task (monitoring the binding of 2 features and location). The former group consisted of the color task (detect either a color or a conjunction change) and the orientation task (detect either an orientation or a conjunction change), whereas the latter was the conjunction task (detect only a conjunction change). Behavioral data showed no significant difference between tasks.
In the search for regions showing selective activation in monitoring of the triple conjunction, we identified a network comprising the superior parietal lobule, superior frontal gyrus, middle frontal gyrus, and infPreCS. In the monitoring of the triple conjunction, infPreCS cooperated with subregions of the frontoparietal network, suggesting that enhanced activity in the neural network reported in previous studies contributes to the monitoring of object representation.
Acknowledgement: This work was supported by KAKENHI (19500226, 19730464, and 21300103).

26.454 Neural response dynamics in parietal cortex for an algebraic processing task
Christopher Tyler1 (cwt@ski.org); 1Smith-Kettlewell Eye Research Institute
Introduction: Neural signals exhibit a wide variety of temporal characteristics, but in human brain studies it is difficult to derive the neural response time courses for local cortical regions of interest. A biophysically based forward optimization procedure for the BOLD fMRI waveforms was constrained by a plausible parametrized model of local neural population responses. This paradigm allowed us to determine the temporal dynamics of the local neural populations during the various phases of the mental calculation process within the sequence of processing regions in the intraparietal sulcus (IPS), which is well known to be involved in this visuo-cognitive activity. Methods: BOLD responses were measured throughout the human brain using a 3T scanner with a 1 s sampling rate and a jittered event-related design, for stimuli consisting of temporally sequenced numeric equations together with a trial solution and error feedback about response correctness. The responses to the visual number presentations in each component were fit by a model with three waveform parameters for the neural response and three for the BOLD response, plus an overall scaling parameter. Results: The pattern of response dynamics differentiated bilateral areas corresponding to the angular gyrus and regions IPS1-5 along the intraparietal sulcus. While the responses to the initial number and operator presentations were typically brief throughout retinotopic cortex, the angular gyrus showed prolonged responses that could support the number memory. Typically, IPS1-4 showed strong involvement in the calculation phase, while IPS5 was predominantly active during evaluation and response selection for the trial solution. Conclusion: This novel optimization technique allows estimation of the neural signal dynamics underlying the BOLD waveforms in an algebraic processing task, allowing the differentiation of distinct functional components and their causal roles in the information flow among cortical response areas lying along the IPS.
Acknowledgement: Supported by NSF grant #0846229
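The flavor of this forward optimization can be conveyed with a simplified sketch. The functional forms below (a boxcar neural response and a toy gamma-like hemodynamic response) are our assumptions; the study's actual model has three neural and three BOLD waveform parameters plus an overall scaling.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(0.0, 30.0, 1.0)                 # 1 s sampling, as in the study

def hrf(t, tau, delay):
    """Toy gamma-like hemodynamic response function."""
    h = np.where(t > delay,
                 ((t - delay) / tau) ** 2 * np.exp(-(t - delay) / tau), 0.0)
    return h / h.max()

def bold_model(t, onset, dur, amp, tau, delay):
    neural = amp * ((t >= onset) & (t < onset + dur))    # boxcar neural response
    return np.convolve(neural, hrf(t, tau, delay))[:len(t)]

# Fit the forward model to a (simulated) measured BOLD time course:
rng = np.random.default_rng(2)
measured = bold_model(t, 2, 4, 1.0, 1.2, 1.0) + 0.05 * rng.normal(size=len(t))
popt, _ = curve_fit(bold_model, t, measured, p0=[1, 3, 1, 1, 1],
                    bounds=([0, 0.5, 0, 0.2, 0], [10, 10, 5, 5, 5]))
print("estimated neural onset, duration:", popt[0], popt[1])
```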


26.455 A computational model for change detection in familiar environments
Dmitry Kit1 (dkit@cs.utexas.edu), Brian Sullivan1, Kat Snyder1; 1University of Texas at Austin
Detecting visual changes in environments is an important computation with applications in the study of both human and computer vision. Additionally, task-oriented descriptions of visual cognition [1] could use such a mechanism for switching between ongoing tasks and updating internal visuospatial memory representations. We conjecture that the number of environments in which people spend most of their time is limited (out of the set of possible visual stimuli), that these environments do not frequently undergo major changes between observations, and that over time subjects learn distributions of spatial and visual features. These assumptions can be exploited to reduce the computational complexity of processing visual information by utilizing memory to store previous computations. A change detection technique is then required to detect when predictions deviate from reality. Baldi and Itti [2] provided a Bayesian technique; however, they did not incorporate the spatial location of features and required an a priori commitment to a distribution of features. We propose a mechanism that instead uses a low-dimensional representation of visual features to generate predictions for ongoing visual stimuli. Deviations from these predictions can be rapidly detected and could serve as a reorienting attentional signal. The model we present is computationally fast and uses a compact description of complex visual stimuli. Specifically, the model encodes color histograms of naturalistic visual scenes captured while exploring an environment. It learns a spatial layout of visual features using a self-organizing map on location and color data, compressed using the matching pursuit algorithm. We present tests of the model on detecting changes in a virtual environment, and preliminary data on human subjects' change detection in the same environment.
[1] Sprague and Ballard, Proceedings of the 18th IJCAI, August 2003
[2] Baldi and Itti, ICNN&B 2005
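A drastically simplified sketch of the predict-and-compare idea follows; here a per-location running-average color histogram stands in for the model's matching-pursuit-compressed features and self-organizing map, and the change threshold is arbitrary.

```python
import numpy as np

memory = {}                                   # location id -> predicted histogram

def color_histogram(img, bins=8):
    """Joint RGB histogram, normalized to sum to 1."""
    h, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3,
                          range=((0, 256),) * 3)
    h = h.ravel()
    return h / h.sum()

def observe(location, img, alpha=0.1, threshold=0.3):
    """Return True (a reorienting signal) if the view deviates strongly
    from the stored prediction for this location."""
    h = color_histogram(img)
    if location not in memory:
        memory[location] = h
        return False
    change = 0.5 * np.abs(memory[location] - h).sum()   # total variation distance
    memory[location] = (1 - alpha) * memory[location] + alpha * h
    return change > threshold

frame = np.random.randint(0, 256, (64, 64, 3))
print(observe("hallway", frame))   # False: first visit just stores a prediction
print(observe("hallway", frame))   # False: scene unchanged
```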
26.456 Using objects as symbols: Associative learning improves when confusable items serve as cues rather than as associates
Adam November1 (adn@stanford.edu), Nicolas Davidenko1, Michael Ramscar1; 1Department of Psychology, Stanford University
Recent research in adult word learning has shown that associative learning performance depends on the temporal order of the learned pairs: associations between novel objects and labels are learned more successfully when objects precede labels than when labels precede objects (Ramscar, Yarlett, Dye, Denny, & Thorpe, in press). Here we investigated whether this effect may be driven by labels being less confusable than objects, and whether the effect can be replicated in a non-linguistic domain if this asymmetry in confusability between cues and associates is preserved. To test this hypothesis, we constructed two-tone shapes to serve as either labels or objects depending on their level of confusability. Specifically, two pairs of visually similar "F" shapes served as objects with confusable Features, and two pairs of visually dissimilar "L" shapes served as easily discriminable Labels. Each F shape was arbitrarily assigned an L shape. Thirty-two subjects passively observed sequential presentation of these cue-associate pairs in an unsupervised learning paradigm. We manipulated the temporal order across pairs so that each subject learned two pairs in the F-L order and two in the L-F order. After 8 minutes of unsupervised learning, we tested whether subjects had learned the associations in a 4-AFC task: on each trial, subjects were prompted with a target shape and asked to pick the appropriate associate from among the four possible complementary shapes. Subjects demonstrated better associative learning on pairs learned in the F-L order compared to the L-F order, regardless of the order in which they were tested, analogous to the result observed in the linguistic domain. This provides preliminary evidence that the ordered learning effect found in word learning may be the result of a domain-general mechanism. We frame our finding in terms of error-driven learning and explore the theoretical implications for recent word-learning research.

26.457 Another look at mindsight
Helene Gauchou1 (helene.gauchou@gmail.com), Ronald Rensink1; 1Visual Cognition Lab, Department of Psychology, University of British Columbia
Whenever a change occurs in a visual display, some observers occasionally sense it (i.e., feel that something is happening) for several seconds before they are able to see it (i.e., form a visual picture of the event). Given the difference in phenomenological experience, and various behavioral dissociations between these two forms of experience, Rensink (2004) suggested that sensing and seeing involve different modes of perception; the mode enabling sensing was termed "mindsight". Alternatively, Simons et al. (2005) suggested that both experiences are due to a single mode (i.e., regular sight), with sensing simply a verification stage engaged when the perception of change is weak. During the past few years, new studies have brought to light new evidence in favor of the mindsight hypothesis. However, the controversy about the existence of this mode of perception remains. It is therefore time to collect and confront the various experimental results in order to draw a clear picture of the state of experimental evidence for and against a distinct mode of visual perception. We review the different arguments on both sides of this debate, clearly define the notions at stake, and present some new results and several considerations (both conceptual and experimental) to help settle this issue.
Acknowledgement: Fyssen Fondation

26.458 Speed-accuracy tradeoffs in cognitive tasks in action game players
A.F. Anderson1 (aanderson@bcs.rochester.edu), D. Bavelier1, C.S. Green2; 1Department of Brain and Cognitive Sciences, University of Rochester, 2Department of Psychology, University of Minnesota
Three speeded-choice reaction time tasks were used to assess different cognitive functions in action gamers: (i) proactive interference, which measures the ability to suppress familiar but irrelevant information in memory, (ii) the Posner Ab task, which measures the speed with which information is retrieved from long-term memory, and (iii) an N-back task, which measures working memory efficiency. Action video game players (VGPs) displayed faster reaction times (RTs) than individuals who do not play fast-paced video games (NVGPs). Such a difference in baseline RTs is known to complicate the evaluation of cognitive effects across populations.
Indeed, cognitive effects are typically defined in terms of differences between conditions, and it is unlikely that a 100 ms difference between two conditions has a similar meaning given a baseline RT of 800 ms versus a baseline RT of 400 ms (see Madden, Pierce, and Allen, 1996 for an extensive discussion of this problem). In addition, faster RTs in VGPs were accompanied, at times, by a small but significant decrease in accuracy, preventing any straightforward interpretation of the cognitive effects. We present two different ways of analyzing the data that have been proposed in the field to address this issue. When such corrections were applied, VGPs and NVGPs displayed comparable cognitive effects. The outcome of such a between-groups study is weakened, however, by the use of paradigms in which accuracy is fixed and typically near ceiling. We present a new cognitive decision-making task that allows sampling of the full chronometric and psychometric curves. VGPs presented a very clear speed-accuracy tradeoff, but overall were found to make more correct decisions per unit of time.
Acknowledgement: This research was supported by grants to D. Bavelier from the National Institutes of Health (EY016880) and the Office of Naval Research (N00014-07-1-0937).
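The abstract does not name the two corrections the authors applied. One standard option from the literature is the inverse efficiency score (Townsend & Ashby, 1983), shown below with hypothetical group numbers chosen only to mirror the pattern described above.

```python
# Inverse efficiency score: mean RT divided by proportion correct, so that
# fast-but-sloppy responding is penalized. The numbers are hypothetical.
def inverse_efficiency(mean_rt_ms, prop_correct):
    return mean_rt_ms / prop_correct

print(f"VGP:  {inverse_efficiency(400, 0.94):.0f} ms")   # ~426 ms
print(f"NVGP: {inverse_efficiency(500, 0.97):.0f} ms")   # ~515 ms
```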


26.459 The efficiency of encoding - how to get most images into visual memory
Gesche M. Huebner1 (gesche.huebner@psychol.uni-giessen.de), Karl R. Gegenfurtner1; 1Justus-Liebig-University Giessen
The ability of humans to extract meaningful visual information from briefly presented images and to remember them is astonishing. But what is the most efficient way of presenting visual information to achieve maximum performance in terms of remembered items? To address these questions, we tested participants in a memory task for natural images in which we varied the number of items presented simultaneously, the viewing time, and the interstimulus interval (ISI). The viewing phase was followed by a test phase consisting of a 2-AFC recognition task on the images. Performance, in terms of percentage of correct answers for the various conditions, was then converted into capacity estimates, taking into account the guessing probability and the number of items presented. This capacity estimate was then scaled to a fixed time unit to allow comparison of performance across the different conditions. It proved more efficient to show only one object per trial very briefly than to show more objects simultaneously for a longer time period. In the final version of our experiment, we combined four presentation times (50, 100, 200, 300 ms) and four interstimulus intervals (0, 50, 100, 200 ms), resulting in 16 conditions. Performance increased significantly with longer trial durations, from about 55% to 75% correct. Increases in presentation time had a larger impact than increases in the ISI. Performance in all conditions was above chance level. When considering memory capacity for a given time unit, about 1.3 objects were remembered per second in all conditions. Thus, in terms of efficiency, variations in presentation time and ISI did not matter very much.
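The exact conversion formula is not given in the abstract; a minimal reconstruction of a guessing-corrected, time-normalized capacity estimate for a 2-AFC test might look as follows. The 75% accuracy and the 300 ms + 200 ms timing are taken from conditions mentioned above, but the formula itself is our assumption.

```python
# Correct 2-AFC accuracy for the 50% guessing baseline, convert to items
# remembered per trial, then normalize by trial duration.
def capacity_per_second(prop_correct, n_items, trial_duration_s):
    p_remembered = max(0.0, 2.0 * prop_correct - 1.0)   # guessing correction
    return p_remembered * n_items / trial_duration_s

# e.g. 75% correct, one image per trial, 300 ms presentation + 200 ms ISI;
# the reported average was ~1.3 objects/s, so the study's exact time base
# and correction likely differ from this sketch.
print(f"{capacity_per_second(0.75, 1, 0.5):.1f} objects/s")
```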
26.460 Similar Scenes Seen: What are the limits of visual long-term memory fidelity?
Olivier R. Joubert1 (joubert@mit.edu), Aude Oliva1; 1Department of Brain & Cognitive Sciences, MIT
The capacity of long-term memory (LTM) for pictures is outstanding: observers distinguish thousands of distinct pictures from foil exemplars after seeing each item only once (Standing, 1973; Brady et al., 2008). In contrast, change blindness shows that, even in short-term memory, two versions of the same picture are difficult to distinguish when they differ by only a few objects. Clearly, there are limits to the resolution of visual LTM. Here, we investigated the fidelity of LTM by using foil images representing similar versions of a scene. During a learning phase, 312 color photographs of different categories were displayed for 2 seconds each. Observers performed an N-back task to encourage sustained attention. Importantly, observers were explicitly informed prior to learning about the testing conditions. At test, they performed a 2-AFC task with one old image and a foil whose resemblance to the target was manipulated: the foil could be a mirror image of the same scene, the same scene zoomed in or out by 25%, or a nearby scene cropped from a larger panoramic image. The control condition, a foil from a novel category, led to 93% recognition accuracy, as in related previous studies. The fidelity of memory was poorest (54%, chance level) when the foil depicted a "zoom-out" version of the old image. Participants performed well (84%) with foils depicting a translated non-overlapping version, and were moderately accurate with foils overlapping the old image by 50% (79%), with a zoom-in (69%), and with a left-right mirror of the old image (72%). In a broader context, these results contribute to understanding the nature of stored visual representations. LTM representations have been shown to be sensitive to changes in scene viewpoint. Nevertheless, our results suggest that visual long-term memory is "open-minded" about certain kinds of viewpoint transformations: it does not mind a step backward.
Acknowledgement: Fondation Fyssen

Object recognition: Features and categories
Vista Ballroom, Boards 501–515
Saturday, May 8, 2:45 - 6:45 pm

26.501 Similarity-based multi-voxel pattern analysis reveals an emergent taxonomy of animal species along the object vision pathway
Andrew Connolly1 (andrew.c.connolly@dartmouth.edu), James Haxby1; 1Department of Psychological and Brain Sciences, Dartmouth College
We present an account of how the structure of the representation of living things emerges in the object vision pathway, investigating three regions: medial occipital (MO), inferior occipital (IO), and ventral temporal cortex (VT). We investigated the similarity structure of patterns defined by responses to a variety of animate categories using functional magnetic resonance imaging (fMRI). Participants (N=12) viewed photographs of six animal species (two species each of insects, birds, and primates). Pair-wise dissimilarities between condition patterns were used to construct similarity spaces for each region within each subject. The similarity structures revealed how categorical representations emerge along the visual pathway. Patterns in early visual cortex (MO), as compared to those in IO and VT, are less differentiated and do not have a clear category structure. IO reveals differentiation between vertebrates and insects, while in VT each category becomes clearly defined. Individual differences multidimensional scaling (INDSCAL) showed how similarity structures transform from one region to the next. Thirty-six similarity structures from the three brain regions (MO, IO, and VT) in each of 12 subjects were used to find a common multidimensional scaling solution in which weights on dimensions varied between similarity structures. Differences in dimension weights reveal a reliable transformation from MO through VT, from similarity spaces organized according to low-level visual features in MO to semantic categories in VT. Similarity structures were highly stable and replicable both within and between subjects, especially in VT, with an average between-subject correlation of r=.91. The consistency of similarity structures in IO and MO was also high, albeit not as strong as in VT (IO, r=.75; MO, r=.65). Similarity-based pattern analysis reveals a categorical structure in VT that mirrors our knowledge about animal species, providing a window into the structure of the neural representations that form the basis of our categorical knowledge of the living world.
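The core computation, deriving a dissimilarity structure from condition patterns and comparing that structure across regions or subjects, can be sketched as follows. This is our illustration with random data, not the authors' pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(3)
patterns = rng.normal(size=(6, 500))          # 6 species x 500 voxels (fake data)
rdm = squareform(pdist(patterns, metric="correlation"))   # 6 x 6 dissimilarities

# Compare the similarity structures of two regions by correlating the upper
# triangles of their dissimilarity matrices:
rdm2 = squareform(pdist(rng.normal(size=(6, 500)), metric="correlation"))
iu = np.triu_indices(6, k=1)
r = np.corrcoef(rdm[iu], rdm2[iu])[0, 1]
print(f"between-structure correlation: r = {r:.2f}")
```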
26.502 In search of functional brain atlases: Deriving common categorical representational patterns across individuals in ventral visual pathway
J. Swaroop Guntupalli1 (swaroopgj@gmail.com), Andrew C. Connolly1, James V. Haxby1; 1Department of Psychological & Brain Sciences, Dartmouth College, Hanover, NH, USA
Category information represented in activation patterns can be decoded using multivariate pattern analysis of functional MRI (fMRI) measures. Cross-subject registration of brain anatomy aligns only coarse structure, leaving fine-scale variability in representational patterns. This is a major challenge in building a functional brain atlas that can store common representational activation patterns. We have developed a new functional alignment method, 'hyperalignment', that aligns each individual's multi-voxel representational space to a common space that generalizes across experiments. We studied 14 subjects in two different imaging centers using different MRI scanners. We derived our alignment parameters using fMRI data obtained from ventral temporal cortex while subjects watched a movie. We then used these parameters to align into the same common space fMRI data from two different experiments: ten subjects were shown images of seven categories of objects and faces in a block design, and the other four were shown images from the same seven categories in a slow event-related design in a different scanner. A classifier trained on the hyperaligned face/object block-design data to classify these seven categories predicted the categories in the hyperaligned slow event-related data from the other four subjects with a mean accuracy of 52.7%. This between-subject classification (BSC) performance was equivalent to the mean within-subject classification (WSC) accuracy for those four subjects (55.4%) and was significantly higher than BSC after anatomical alignment (BSCA) (35.8%). Mean BSC accuracy of hyperaligned data for the subjects scanned in the block-design study was 61.3% (WSC=60.3%, BSCA=47.8%). These results demonstrate that hyperalignment provides a better way of deriving common representational patterns than does anatomical registration. Moreover, these common representational patterns can be mapped back into the brain of any reference subject, opening the door to a new type of functional brain atlas that can store the high-dimensional patterns specific to an unlimited variety of neural representations.
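The core operation in hyperalignment can be viewed as a Procrustes problem: find an orthogonal transform mapping one subject's time-by-voxel responses onto a reference space, using data collected while both watched the same movie. The sketch below shows a single pairwise alignment with synthetic data; the published method iterates such alignments across subjects to build the common space.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(4)
T, V = 1000, 300                              # time points x voxels (fake sizes)
common = rng.normal(size=(T, V))              # reference (common-space) responses
rotation = np.linalg.qr(rng.normal(size=(V, V)))[0]
subject = common @ rotation + 0.1 * rng.normal(size=(T, V))   # same content, rotated

R, _ = orthogonal_procrustes(subject, common) # solves min ||subject @ R - common||
aligned = subject @ R                         # subject data in the common space
print(np.corrcoef(aligned.ravel(), common.ravel())[0, 1])     # near 1 after alignment
```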


26.503 The Relationship between Multivariate Pattern Classification Accuracy and Hemodynamic Response Accuracy in Visual Cortical Areas
Peter J. Kohler1 (peter.kohler@dartmouth.edu), Sergey V. Fogelson1, Eric A. Reavis1, Jyothi S. Guntupalli1, Peter U. Tse1; 1Department of Psychological and Brain Sciences, Dartmouth College
Traditional univariate analysis of fMRI data identifies differences in the average activity of specific brain regions under different conditions. In contrast, multi-variate pattern analysis (MVPA) classifies patterns of fMRI activity under different conditions. Both methods infer neural activity based on a hemodynamic response following the onset of a stimulus. It is an open question whether peak classification accuracy using MVPA occurs at, before, or after the peak in the BOLD signal level. Because neuronal activity is fast, it is possible that pattern classification accuracy is high within hundreds of milliseconds, even when the BOLD signal level is low. In other words, even very low average levels of hemodynamic activity, such as that which occurs during the initial negative dip in BOLD signal level, might produce highly informative activation patterns classifiable using MVPA. Alternatively, it is possible that the peak in MVPA classification accuracy occurs at the same temporal lag as the peak in BOLD signal level. To assess these possibilities, we performed an fMRI experiment with a slow event-related design, using faces and houses as stimuli, and explored the activity within functionally defined regions of interest from striate cortex to object-selective temporal cortex. We compared the average hemodynamic response to the classification accuracy over time. Our results suggest that there is a correlation between BOLD signal level and classification accuracy, such that the peak in classification accuracy occurs at approximately the same temporal lag from stimulus onset as the peak in the BOLD signal level following stimulus onset.

26.504 Detrimental effect of head motion covariates on GLM and multivoxel classification analysis of fMRI data
Kai Schreiber1 (genista@gmail.com), Bart Krekelberg1; 1Center for Molecular and Behavioral Neuroscience, Rutgers University
Head movements, or other global nuisance signals, can be a severe problem in fMRI analyses, yielding artefactual activations. To reduce this problem, the first steps of data preprocessing are the spatial alignment of the collected brain volumes over time and the voxel-wise removal of BOLD signal components correlated with the nuisance signals. We investigated the influence of the removal of nuisance signals on GLM and support vector machine analyses by creating simulated data sets and removing nuisance regressors of varying correlation with the stimulus time course. We report that for both types of analyses, false positive and false negative rates increased with increasing similarity between regressor and stimulus. Additionally, cross-validated classification performance became ever more strongly biased downward as the correlation between nuisance regressor and stimulus increased, down to a performance level of 0%, where every instance was misclassified in cross-validation. On the other hand, when the nuisance regressor was uncorrelated with the stimulus, classification performance was artefactually biased upward when a small number of time points was used. Overall, these results highlight the problematic nature of any signal that correlates with the stimulus pattern in fMRI experiments. Head motion is a particularly relevant example, but other signals could include respiration, heartbeats, and eye movements. These problems are particularly serious in the context of multivoxel analyses, which - due to their high sensitivity - are also especially sensitive to global nuisance signals, as well as to biases introduced by their attempted removal.
Acknowledgement: Funded by The Pew Charitable Trusts.
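The downward bias the authors report is easy to reproduce in a toy simulation (ours, not theirs): when the nuisance regressor is correlated with the task, regressing it out of each voxel removes task signal along with it.

```python
import numpy as np

rng = np.random.default_rng(5)
n_t, n_vox = 200, 50
stimulus = np.tile([1.0, 1.0, 0.0, 0.0], n_t // 4)            # block design
motion = 0.6 * stimulus + rng.normal(size=n_t)                # task-correlated nuisance
loadings = rng.normal(1.0, 0.2, n_vox)                        # voxel response amplitudes
data = np.outer(stimulus, loadings) + rng.normal(size=(n_t, n_vox))

X = np.column_stack([motion, np.ones(n_t)])                   # nuisance + intercept
beta, *_ = np.linalg.lstsq(X, data, rcond=None)
cleaned = data - X @ beta                                     # voxel-wise "correction"

# Task correlation of an example voxel drops after nuisance removal:
print(abs(np.corrcoef(stimulus, data[:, 0])[0, 1]),
      abs(np.corrcoef(stimulus, cleaned[:, 0])[0, 1]))
```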
26.505 The neural representation of spatial relationships by anatomical binding
Kenneth Hayworth1 (khaywort@usc.edu), Mark Lescroart2, Irving Biederman2,3; 1Center for Brain Sciences, Harvard University, 2Neuroscience Program, University of Southern California, 3Department of Psychology, University of Southern California
Visual spatial relations can be signaled implicitly with cells sensitive to conjunctions of features in particular arrangements. However, such hardwired circuits are insufficient to explain our ability to visually understand spatial relations, e.g., top-of. A neural binding mechanism (Malsburg, 1999) is required that can represent two (or more) objects simultaneously while dynamically binding relational roles to each. Time has been suggested as the binding medium, either through serial attentional fixations (Treisman, 1996) or synchronous firing (Hummel & Biederman, 1992). However, using time is problematic for several reasons, not the least of which is that such representations are no longer simple vectors of neural firing but would require circuitry for decoding and storage beyond traditional associative memory models. An alternative is that the visual system uses "anatomical binding", in which one set of neurons encodes the features of object #1 while a separate set encodes object #2. A series of fMRI experiments designed to test predictions of these various models provides evidence for anatomical binding in a manner consistent with Object Files/FINST theory (Kahneman et al., 1992; Pylyshyn, 1989). Based on these results, we propose a Multiple Slots Multiple Spotlights model: connections within the ventral stream hierarchy are segregated among several semi-independent sets of neurons, creating, in essence, multiple parallel feature hierarchies, each having its own focus of attention and tracking circuitry (FINST) and each having its own feature-list output (Object File). When viewing a brief presentation of a single object, all ventral stream cells would respond to its features (agreeing with existing single-unit and speed-of-recognition results). However, when viewing multi-object scenes (or multi-part objects) under extended processing times (>100 ms), different spotlights could be allocated to different objects (or parts), producing a final neural representation that explicitly binds feature information with relational roles.
Acknowledgement: NSF 04-20794, 05-31177, 06-17699, NIH BRP EY016093

26.506 Voxels in LO - but not V1 - distinguish the axis structures of highly similar objects
Mark D. Lescroart1 (lescroar@usc.edu), Irving Biederman1; 1University of Southern California
Many theories of object recognition assume that the representation of an object specifies its axis structure (e.g., Marr, 1982). Can LO (an area critical for shape recognition) distinguish between highly similar objects, all with the same shaped parts, that differ only in the relative positions of their parts, i.e., in their axis structures? We tested this issue using fMRI multi-voxel pattern analysis. Our stimuli consisted of nine images, generated from three views (rotations in depth and in the plane) of each of three different novel objects, all composed of the same three geons but differing in the arrangement of those parts. Unlike several prior studies, which used diverse sets of colored photos of familiar objects that differed greatly in many attributes, the images were all highly similar line drawings with no shading or familiar interpretation, and thus represent a theoretically clean test of shape selectivity per se. While viewing single presentations of the nine images, subjects identified each object by button press (1, 2, or 3), ignoring the object's orientation. A support vector machine classifier was trained and tested on independent splits of the data in different regions of interest. In V1, the classifier performed more accurately at separating groups of images of similar global orientation, and more poorly at separating groups of images based on the identity of the objects.
In LO, this effect was reversed: greater accuracy was achieved separating objects (that is, different axis structures) than different global orientations. We interpret this double dissociation between V1 and LO as a fundamental shift in the shape similarity space, and conclude that LO is more sensitive to the relative positions of an object's component parts, i.e., its axis structure, than to the global orientation of the object.
Acknowledgement: NSF BCS 04-20794, 05-31177, 06-17699 to IB.

26.507 Categorical representation of visually suppressed objects in visual cortex
Gideon Caplovitz1,2 (gcaplovi@Princeton.edu), Michael Arcaro1,2, Sabine Kastner1,2; 1Department of Psychology, Princeton University, 2Princeton Neuroscience Institute, Princeton University
Functional imaging has been used to understand how visual objects of different categories are represented in the brain. However, the relationship between the fMRI-derived object representation and consciously experiencing a particular object remains poorly understood. Recent studies investigating BOLD responses to visually suppressed objects suggest that even in the absence of awareness, specific cortical regions differentially represent different object categories. However, drawing strong conclusions about categorical representation within specific brain regions is difficult, since these studies have focused only on pairs of object categories and constrained analyses to very restricted and/or loosely defined regions of cortex. Here, we extend this past work using fMRI combined with continuous binocular switch suppression to simultaneously investigate representations of visually suppressed objects across occipital, parietal, and temporal cortex, including faces, houses, tools, and scrambled objects in our analyses. Univariate and multivariate (MVPA) analyses were conducted (N=8) within functionally defined ROIs, including retinotopic areas V1, V2, V3, V4, V3A/B, V7, IPS1-5, and SPL1, and object-category areas OFA, FFA, PPA, LOC, and EBA. In the invisible conditions, univariate analyses found no differences in BOLD signal across object category in any ROI. In contrast, MVPA yielded above-chance performance in classifying the four image categories within the FFA and PPA, as well as IPS2, IPS3, IPS4, and IPS5. However, secondary pair-wise MVPA revealed that this performance was largely mediated by differentiating between intact and scrambled images in all areas but the FFA, in which faces could be dissociated from houses. Within the FFA, however, the MVPA could not accurately classify faces versus tools or faces versus scrambled pictures. Thus, although MVPA could classify at above-chance levels visually suppressed faces, tools, houses, and scrambled objects within several areas of ventral and dorsal visual cortex, we find no evidence that the underlying representations are specifically categorical in nature.


26.508 The basis of global and local visual perception revealed by psychophysical 'lesions'
Cibu Thomas1 (cibut@nmr.mgh.harvard.edu), Kestutis Kveraga1, Moshe Bar1; 1Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital & Harvard Medical School
We can extract the gist of scenes and objects very rapidly. This ability to "see the forest before the trees" is known as the Global Precedence Effect (GPE). The GPE is affected primarily by three factors: the spatial frequency content of the stimulus, the global and local grouping properties of the visual cortex, and the tuning curves of the magnocellular (M) and parvocellular (P) neurons in the lateral geniculate nucleus of the thalamus. Global information is conveyed primarily by low spatial frequencies (LSF), and local information by high spatial frequencies (HSF). Moreover, the M-pathway is thought to be tuned primarily to LSF, while the P-pathway is thought to convey HSF. Therefore, the M-pathway is assumed to mediate global processing (and the GPE) and the P-pathway local processing. To examine whether this mapping holds, we employed psychophysical techniques to selectively 'lesion' the M and P pathways and examined the relationship between spatial frequencies, the M and P pathways, and global/local processing. In Experiment 1 (N=10), we used hierarchical stimuli that were either M-biased (achromatic, low luminance contrast), P-biased (chromatic, isoluminant), or unbiased (black-on-white, resolved well by both M and P cells). In Experiment 2 (N=10), we used hierarchical stimuli that were either 'scrambled' (the phase of low and mid-range spatial frequencies randomly redistributed in the image) or unbiased. In both experiments, subjects were tested for global/local processing using a focused attention paradigm. Contrary to the prevailing view, we found that both M and P pathways contribute significantly to global and local processing. Interestingly, we found that P-biased stimuli show a stronger GPE than M-biased stimuli. Our data also suggest that LSF is necessary for the GPE and that HSF is sufficient for local processing. These findings describe for the first time the relationship between spatial frequencies, visual pathways, and global/local visual processing.
Acknowledgement: Supported by National Institute of Neurological Disorders and Stroke Grant NS050615

26.509 Perceiving the Center of Human Figures
Jay Friedenberg1 (jay.friedenberg@manhattan.edu), Tedd Keating2; 1Department of Psychology, Manhattan College, 2Department of Physical Education, Manhattan College
Perceptual estimation of a center of mass has been studied extensively using dot patterns and simple geometric shapes (Friedenberg & Liby, 2008). However, less work has examined center estimation for ecologically relevant shapes such as human figures. We have extensive perceptual and motoric experience with human forms. It is of theoretical interest to see whether observers are as accurate with these biological forms and whether their estimates are biased by the same factors found in the literature. In this experiment, sixty undergraduates judged the perceived centers of male and female human figures with limbs extended to the left or right in a variety of configurations.
In this way we could manipulate the location, orientation, and length of elongation axes relative to the body's main vertical symmetry axis. The outer contours of these figures were presented in black against a white background. Participants indicated their responses by drawing in a dot. Errors were measured as the distance between the true and perceived center. The orientation of the responses, in terms of angular deviation from the vertical, was also recorded. There was a significant main effect of limb extension both for the error data, F(17, 998) = 4.0, p


preceded and followed by visible shapes (102 ms). Critically, the first and last shapes combined with the masked target to form (a) a linear motion path, (b) a curved motion path, or (c) incoherent motion. Participants discriminated between three possible masked target shapes under three different levels of mask intensity. Visual sensitivity was strongly influenced by the motion sequence, with much greater visibility when the target shape was consistent with a linear motion path than with a curved or incoherent path. Increased mask intensity also reduced target visibility more strongly for curved and incoherent paths than for linear motion. More detailed analyses will quantify the unique influence of the preceding and subsequent context shapes on target visibility. This methodology is offered as a new way to study the influence of spatial-temporal context on shape perception. Experiments are underway to extend it to speeded action tasks involving either indirect responses (i.e., key presses) or direct manual actions to the objects in motion (i.e., finger pointing).
Acknowledgement: Natural Sciences and Engineering Research Council of Canada

26.513 Chinese character recognition is limited by overall complexity, not by number of strokes or stroke patterns
On-Ting Lo1 (garkobe@gmail.com), Sing-Hang Cheung1; 1Department of Psychology, The University of Hong Kong
Purpose. Strokes in Chinese characters can sometimes be grouped into identifiable stroke patterns. Are Chinese characters recognized holistically, by strokes, or by stroke patterns? Here we address this question by studying recognition efficiency for Chinese characters of different overall complexities, numbers of strokes, or numbers of stroke patterns. Methods. Three normally sighted young adults participated in each of the three experiments. Stimuli were Chinese characters categorized into three groups, of four characters each, according to (1) perimetric complexity, (2) number of strokes, and (3) number of stroke patterns in Experiments 1, 2, and 3 respectively. Perimetric complexity was defined as perimeter² / 'ink' area. Average complexities for the three groups in Experiment 1 were 198.9, 272.1, and 335.5. In Experiments 2 and 3, perimetric complexity was controlled across the three groups. Observers performed a 4-AFC task with the character presented on a uniform grey background or in Gaussian noise (sigma = 75% of background luminance) for 200 ms. Contrast thresholds for 62.5% accuracy were measured by the method of constant stimuli with five RMS contrast levels. Human performance was compared to an ideal observer model to calculate efficiency. Results. Average recognition efficiencies were 39.99%, 8.09%, and 2.95% for low, medium, and high complexities; 1.64%, 1.71%, and 1.35% for 11-12 strokes, 14-15 strokes, and 17-19 strokes; and 1.14%, 1.24%, and 1.01% for the 2-pattern, 3-pattern, and 4-pattern groups, respectively. Recognition efficiency for Chinese characters decreased as a function of the characters' complexity, not of their number of strokes or stroke patterns. Conclusion. Chinese character recognition efficiency was limited only by the characters' overall complexity, not by the number of strokes or stroke patterns. The results suggest that strokes and stroke patterns are not the component features of Chinese characters. Previous findings supporting strokes or stroke patterns as featural components of Chinese characters may be confounded by the characters' overall complexity.
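Perimetric complexity as defined above can be computed directly from a binary character image. The sketch below is our own, using a crude 4-neighbour perimeter count; published implementations differ in how they measure the perimeter.

```python
import numpy as np

def perimetric_complexity(binary_img):
    """Perimeter squared divided by 'ink' area for a binary glyph image."""
    ink = binary_img.astype(bool)
    area = ink.sum()
    padded = np.pad(ink, 1)
    # interior pixels have all four 4-neighbours set; the remaining ink
    # pixels form the perimeter
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = (ink & ~interior).sum()
    return perimeter ** 2 / area

glyph = np.zeros((32, 32), dtype=int)
glyph[8:24, 14:18] = 1                        # a single crude vertical stroke
print(f"complexity: {perimetric_complexity(glyph):.1f}")
```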
26.514 Spatial Frequencies Mediating Music Reading
Zakia Hammal1 (zakia_hammal@yahoo.fr), Frédéric Gosselin2, Isabelle Peretz1, Sylvie Hebert1; 1Brain Music and Sound Laboratory (BRAMS), Université de Montréal, Canada, 2Département de Psychologie, Université de Montréal, Canada
The purpose of this study was to examine the spatial frequencies (SFs) mediating music reading compared to text reading. The SF Bubbles technique (Willenbockel et al., 2009), which consists in randomly sampling multiple SFs simultaneously on each trial, was used. A set of 70 piano excerpts selected from the unfamiliar piano repertoire was used for music reading, and 50 sentences from MNRead Acuity Charts were used for text reading. The visual size of each letter and note was about 0.34°. Five pianists and five naïve observers took part in the experiments. The percentage of correctly produced pitches and ASCII codes was used as the performance measure for music and text reading, respectively. To find out which SFs drove the participants' correct responses for music and text reading, a multiple linear regression was performed. A statistical test (Chauvin et al., 2005) was then used to determine thresholds that selected the diagnostic SFs for accurate performance. The music reading results showed a significant SF band (from 1 to 1.7 cycles per note (cpn)) peaking at 1.19 cpn, compared to two SF bands for text reading: the first (from 1.08 to 1.3 cycles per letter (cpl)) peaking at 1.2 cpl, and the second (from 1.6 to 2.6 cpl) peaking at 1.8 cpl. In a control experiment, five new pianists were instructed to play the set of 70 excerpts, first sampled with the obtained diagnostic filter for music reading, and then without sampling. Pianists' performance with the diagnostic filter (94%) was comparable to performance without filtering (96%) (p > 0.05). The present findings show that music reading is mediated only partly by the SF bands mediating text reading, which may explain why, in some cases, difficulties in music reading are not necessarily accompanied by difficulties in text reading.

26.515 The Visual Perception of Correlation in Scatterplots
Ronald Rensink1 (rensink@psych.ubc.ca), Gideon Baldridge1; 1Departments of Psychology and Computer Science, University of British Columbia
A set of experiments investigated the precision and accuracy of the visual perception of correlation in scatterplots, using classical psychophysical methods applied directly to these relatively complex stimuli. Scatterplots (of extent 5.0 deg) each contained 100 normally distributed values. Means were set to 0.5 of the range of the scatterplot, and standard deviations to 0.2 of this range. 20 observers were tested. Precision was determined via an adaptive algorithm that found the just noticeable differences (jnds) in correlation, i.e., the difference between two side-by-side scatterplots that could be discriminated 75% of the time. Accuracy was determined by direct estimation: reference scatterplots were created with fixed upper and lower values, and a test scatterplot was adjusted so that its correlation appeared to be midway between these two. This process was then recursively applied to yield several further estimates. Results show that jnd(r) = k (1/b - r), where r is the Pearson correlation, and k and b are parameters such that 0


Introduction: There is growing interest in understanding the neural mechanisms mediating perception of natural scenes (Thorpe et al., 1996; Codispoti et al., 2006). Studies have demonstrated the presence of high-level task-related decision processes in natural scene categorization (VanRullen & Thorpe, 2001). Unlike previous studies restricted to limited categories including cars, people and animals, we can reliably detect the presence of arbitrary searched objects from neural activity (electroencephalography, EEG). Here we analyzed EEG signals using multivariate pattern classifiers (MVPC) to predict on a single-trial basis the presence or absence of cued arbitrary objects during search in natural scenes. Method: Ten naive observers performed a visual search task in which the target object was specified by a word (500 ms duration) presented prior to a natural scene (100 ms). Four hundred target-present and four hundred target-absent images were presented. Observers used a 10-point confidence rating scale to report whether the target was present or absent. Results: The results revealed a positive deflection in the event-related potential (ERP) over parietal electrodes during the 300-700 ms post-stimulus time window that was larger for target-present trials than absent trials (p < …).
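A single-trial analysis of the kind described can be sketched as follows: average each trial's parietal ERP over the 300-700 ms window and feed the result to a cross-validated linear classifier. This is a generic sketch of the MVPC idea, not the authors' pipeline; the array layout, channel picks and classifier choice are all assumptions.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def decode_target_presence(epochs, labels, times, parietal_picks):
    # epochs: (n_trials, n_channels, n_samples); labels: 1 = target present.
    window = (times >= 0.3) & (times <= 0.7)                 # 300-700 ms window
    feats = epochs[:, parietal_picks][:, :, window].mean(axis=2)
    clf = LinearDiscriminantAnalysis()
    return cross_val_score(clf, feats, labels, cv=10).mean()  # decoding accuracy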


The model can explain several search phenomena, including how accuracy and RT are affected by set size, target-distractor discriminability, distractor heterogeneity, target frequency and reward. It can capture the shape of the RT distribution in target-present and target-absent trials for different tasks like feature, conjunction and spatial configuration search. It explains that rare targets are missed (Wolfe et al., 2005) because decreasing target frequency (e.g., from 50% to 2%) shifts the starting point of the decision process closer to the ‘no’ than the ‘yes’ criterion, leading to high miss error rates and fast abandoning of search. Our model predicts that increasing penalties on miss errors will decrease these errors and increase RTs. We validated these predictions through psychophysics experiments with 4 human subjects. To summarize, we have proposed a generative model of visual search that, with only 2 free parameters, can explain a wide range of search phenomena.

Acknowledgement: NSF, NGA
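The starting-point account of rare-target misses can be illustrated with a toy random walk between a 'yes' and a 'no' criterion. This is only a schematic stand-in for the authors' two-parameter generative model, with all values chosen for illustration.

import numpy as np

def search_trial(drift, start, yes=1.0, no=-1.0, noise=0.3, rng=None):
    # Accumulate noisy evidence until a criterion is crossed; returns
    # (responded 'yes', number of steps taken).
    rng = rng or np.random.default_rng()
    x, t = start, 0
    while no < x < yes:
        x += drift + noise * rng.standard_normal()
        t += 1
    return x >= yes, t

# Target-present trials with the starting point shifted toward 'no'
# (as with rare targets) end at 'no' more often: more, and faster, misses.
rng = np.random.default_rng(0)
trials = [search_trial(0.05, start=-0.5, rng=rng) for _ in range(1000)]
print(1 - np.mean([hit for hit, _ in trials]))  # miss rate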
26.522 Covert and overt selection on visual search
Joaquim Carlos Rossini1,2, Michael von Grünau2; 1Instituto de Psicologia, Universidade Federal de Uberlândia, MG, Brazil, 2Department of Psychology, Concordia University, Montreal, Québec, Canada

Purpose: Two experiments were conducted to investigate the effect of covert selection (attentional selection) and overt selection (oculomotor selection) in visual search tasks when the relevant stimuli (target and distractors) were presented among irrelevant stimuli (background). Methods: In the first experiment, participants searched for a target accompanied by a variable number of distractors (5, 10, 15) presented on a uniform background of different luminance. The target was a “+” sign with the vertical segment displaced to the right or left of the center of the horizontal segment. In half of the trials an element with the same characteristics as the target, but with the same luminance as the background and with the vertical segment displaced in the opposite direction, was presented (intruder element). Targets, distractors and the intruder element were presented in random positions within the matrix. In the second experiment the same procedure was adopted, but the relevant stimuli (target and distractors) and irrelevant stimuli (intruder and background) differed along an equiluminant colour dimension. Reaction time and percentage of eye fixations were measured. Results: The presence of the intruder played a role during attentional selection (covert selection) and caused a reaction-time cost, but did not show a significant effect on oculomotor selection (overt selection), as evidenced by a non-significant percentage of fixations on the intruder element. Conclusions: The results support the independent selection model and are discussed in terms of stimulus-driven activity and goal-driven control of visual search.

Acknowledgement: JCR - CAPES, Brazil and MvG - NSERC, FQRSC

26.523 Gaze capture by task-irrelevant, eye of origin, singletons even without awareness during visual search
Li Zhaoping1 (z.li@ucl.ac.uk); 1University College London, UK

The eye of origin of inputs is barely encoded in cortical areas beyond primary visual cortex. Thus human observers typically fail to perceive ocular distinctiveness - as when an item (an ocular or eye-of-origin singleton) is presented to one eye among a background of all other items presented to the other eye. Nevertheless, I recently showed (Zhaoping, 2008) that such singletons behave as exogenous cues for attention. Visual search for an orientation-singleton target bar among uniformly tilted distractor bars was easier (harder) if the target (respectively, a distractor) bar was also an ocular singleton. Using eye tracking (via electro-oculography or video tracking), I now confirm that this ocular singleton indeed automatically attracts gaze. Observers searched for an orientation singleton among hundreds of uniformly tilted distractor bars to quickly report whether the target was in the left or right half of a display spanning about 40x30 degrees. All bars were presented monocularly, and gaze started at the center of the display at stimulus onset. If the ocular singleton was present, the first saccade after stimulus onset was typically directed to the lateral side of the display containing it, whether or not the ocular singleton was associated with the true (orientation-defined) target or with a distractor bar on the opposite lateral side from the target. In a second experiment, using the greater accuracy of video eye tracking (albeit with a smaller display), observers had to quickly find and gaze at the orientation-singleton target; an ocular singleton was present as a distractor bar in half the trials. The search display was masked once observers’ gaze arrived at the target. Observers were often unable to report after mask onset whether they had seen the ocular singleton during search, even if they had directed their gaze to it.

Acknowledgement: Gatsby Charitable Foundation, and a (British) Cognitive Science Foresight grant BBSRC #GR/E002536/01

26.524 Non-parametric test to describe response time and eye movement distributions in visual search
Bruno Richard1 (brichard21@gmail.com), Dave Ellemberg2, Aaron Johnson3; 1Department of Psychology, Concordia University, Center for Learning and Performance Studies (CLPS), 2Department of Kinesiology, Université de Montréal, Centre de Recherche en Neuropsychologie et Cognition (CERNEC), 3Department of Psychology, Concordia University, Center for Learning and Performance Studies (CLPS)

Visual search is one of the most common paradigms used to study attention, and the three main tasks that have emerged to study visual search are feature, conjunction and spatial-configuration search. It is well documented that for spatial-configuration tasks, response times are twice as long when there is no target compared to the target-present condition. Further, for both target-absent and target-present conditions, response time increases as the number of distracters increases. The objective of the present study was to investigate two gaps in this literature. First, little to nothing is known about the role of eye movements in this relationship. Second, the current statistical analyses used to study these response times rest on the assumption of a normal distribution; that is not the case for the distribution of response times in this task, which are known to be skewed. The present study measured response times, the number of fixations and fixation duration in a group of 20 adults by means of a spatial-configuration visual search task consisting of Gabors. The results indicate that eye movements were dispersed in conditions in which the target was absent, and consistent, almost pattern-like, when the target was present. Response time results varied according to the fixation maps, but fixation duration measures did not. In agreement with previous reports, we found that response time was greater in the target-absent condition and increased systematically as the number of distracters increased. The Kolmogorov-Smirnov test showed a similar pattern of results, where the differences between the two slopes increased as the number of distracters increased, but plateaued after 8 distracters. The typical slope difference, at the 50% threshold, was found to be smaller than 2:1, suggesting that the difference between target-present and target-absent search tasks might not be as large as previously expected.

Acknowledgement: NSERC to AJ
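Comparing skewed RT distributions without assuming normality, as in the abstract above, can be done with a two-sample Kolmogorov-Smirnov test. The sketch below is a generic illustration with synthetic lognormal RTs, not the study's data or analysis code.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rt_present = rng.lognormal(mean=6.5, sigma=0.4, size=200)  # skewed, RT-like (ms)
rt_absent = rng.lognormal(mean=6.9, sigma=0.4, size=200)
# D is the maximum distance between the two empirical CDFs; no normality assumed.
d, p = stats.ks_2samp(rt_present, rt_absent)
print(d, p)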
26.525 Visual Similarity Predicts Categorical Search Guidance
Robert Alexander1 (robert.alexander@notes.cc.sunysb.edu), Gregory Zelinsky1; 1Department of Psychology, Stony Brook University

How a target category is represented and used to guide search is largely unknown. Of particular interest is how categorical guidance is possible given the likely overlap in visual features between the target category representation and different-category real-world objects. In Experiment 1 we explored how the visual similarity relationships between a target category and random-category distractors affect search guidance. A web-based task was used to quantify the visual similarity between two target classes (teddy bears or butterflies) and random-object distractors. We created displays consisting of high-similarity distractors, low-similarity distractors, and “mixed” displays with high-, intermediate-, and low-similarity items. Subjects made faster manual responses and fixated fewer distractors on low-similarity displays than on high-similarity displays. In mixed trials, first fixations were more frequently on high-similarity distractors (bear = 49%; butterfly = 58%) than low-similarity distractors (9%-12%). Experiment 2 used the same high/low/mixed similarity conditions, but now these conditions were created using similarity estimates from a computational model (Zhang, Samaras, & Zelinsky, 2008) that ranked objects in terms of color, texture, and shape similarity. The same data patterns were found, suggesting that categorical search is affected by visual similarity and not conceptual similarity (which might have played some role in the web-based estimates). In Experiment 3 we pitted the human and model estimates against each other by populating displays with distractors rated as similar by: subjects (but not the model), the model (but not subjects), or both subjects and the model. Distractors ranked as highly similar by both the model and subjects attracted the most initial fixations (31%-41%). However, when the human and model estimates conflicted, more first fixations were on distractors ranked as highly similar by subjects (28%-30%) than on the highly similar distractors from the model (14%-25%). This suggests that the two different types of visual similarity rankings may capture different sources of variability in search guidance.

Acknowledgement: NIMH grant 2 RO1 MH063748

26.526 Graphical comparison of means in within-subject designs
John Hayes1 (JRHayes@Pacificu.edu), Adam Preston1, James Sheedy1; 1College of Optometry, Pacific University, Forest Grove, OR

In vision sciences it is common to have study designs with many within-subject conditions. The data are often presented in bar graphs with standard error bars. Non-overlapping standard error bars do not necessarily mean a statistically significant difference. Similarly, overlapping 95% confidence intervals do not necessarily mean the lack of a significant difference. We reviewed the literature suggesting that a confidence interval can be derived that allows comparison between all means on a single chart. We then provide a simple graphical method in Excel that uses stacked bar graphs to create 84% confidence intervals in which non-overlapping bars are significant at an unadjusted p < .05.
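The 84% figure follows from the fact that, for two means with similar standard errors, a p = .05 two-sample comparison corresponds to non-overlap of error bars of about ±1.39 SE (1.96 times the SE of the difference, sqrt(2)·SE, split over two bars), which is close to the ±1.41 SE half-width of an 84% interval. A minimal matplotlib version of the interval logic, with made-up means and SEMs (the Excel stacked-bar mechanics are the authors'; this sketch only reproduces the intervals):

import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

means = np.array([412.0, 455.0, 430.0])  # condition means (illustrative)
sems = np.array([10.0, 11.0, 9.0])       # standard errors (illustrative)
z84 = stats.norm.ppf(1 - 0.16 / 2)       # ~1.41: half-width of an 84% CI
plt.bar(range(len(means)), means, yerr=z84 * sems, capsize=4)
plt.ylabel('Mean (units of the measure)')
plt.title('Non-overlapping 84% CIs ~ unadjusted p < .05')
plt.show()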


squares with a notch in one of the other corners. This is a very difficult search task. Although search slopes for static (0.0 deg/s) and for moving (7.2 deg/s) item displays were again very similar, there was now a clear difference in error rates: there were more errors for the moving items. Critically, the maximum display size in this experiment was 18 items, only half the size used in the first experiment. Taken together, these two experiments suggest that there is a fundamental difference between the processes involved in very difficult visual search tasks on the one hand and easier visual search tasks on the other. Whereas the former operate at an item-by-item level, with only limited robustness against motion, the latter operate above the level of individual items and offer extensive robustness against motion. A theoretical framework that encompasses these findings will be presented.

Acknowledgement: This research was supported by a small grant from the Experimental Psychology Society

26.531 Independent and additive effects of repetition of target and distractor sets in active visual search
Árni Ásgeirsson1 (arnigunnar@hi.is), Maike Aurich1, Árni Kristjánsson1; 1Department of Psychology, School of Health Sciences, University of Iceland

Priming of pop-out is a well-known phenomenon in the literature on visual attention, where the repetition of features from a previous trial facilitates a response to the same feature on the next trial. While such effects have mostly been studied by measuring key-press responses, priming from target repetition has also been found to facilitate saccadic eye movements to targets containing repeated features. Much less studied is priming from repeated context, or distractor sets. Priming of context is presumably driven by inhibition mechanisms speeding rejection of non-targets. In order to investigate any facilitatory effects of target and context repetition upon latencies of saccadic eye movements, and any interactions between the two, we measured saccades in a color-singleton task where the target color and the color of the distractors varied independently of one another. The task was an “active” visual search in which observers had to make a speeded saccade to the center of the singleton target. Repetition of target color and distractor-set color both resulted in decreased saccadic latencies, but the effect was larger for repetition of context than of target. In contrast, target repetition had a larger effect upon search accuracy. Because of these discrepancies we calculated inverse efficiency (saccadic latency/percentage correct) to control for possible trade-offs between latency and accuracy. The inverse efficiency analyses showed highly significant effects of repetition of both target and context, with no hint of an interaction between the two. The tight link between attention shifts and eye movement preparation is well documented, and our analyses of priming show that the repetition of both target and distractor sets has a strong influence upon attention shifts and eye movements, and that these effects are independent of one another.

Acknowledgement: University of Iceland Research Fund
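Inverse efficiency, as used above, is simply mean latency divided by proportion correct, so conditions that trade speed for accuracy can be compared on one scale. A two-line illustration (the values are made up):

def inverse_efficiency(mean_latency_ms, prop_correct):
    # Higher scores mean worse combined speed/accuracy performance.
    return mean_latency_ms / prop_correct

print(inverse_efficiency(210.0, 0.92), inverse_efficiency(195.0, 0.81))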
26.532 Prediction prevents rapid resumption from being disrupted after the target’s location has changed
Stefania Mereu1 (smereu@illinois.edu), Jeffrey Zacks2, Christopher Kurby2, Alejandro Lleras1; 1Psychology Department, University of Illinois, 2Psychology Department, Washington University

Recent studies of rapid resumption (RR)—an observer’s ability to quickly resume a visual search after an interruption (Lleras, Rensink and Enns, 2005)—suggest that implicit predictions underlie visual perception (see Enns and Lleras, 2008), because observers seem to construct implicit predictions about what information they expect to see after each interruption. The nature and content of a prediction (or perceptual hypothesis) can be explored by subtly changing the information presented after an interruption. Changes to the target’s relevant features, such as location and identity, disrupt RR (Jungé, Brady and Chun, 2009; Lleras et al., 2005, 2007). These findings suggest that if the perceptual hypothesis about the target cannot be confirmed, processing of the display (and the target) must start anew when the display reappears, leading to slower response times. Here, we manipulated the location of the target between looks at the display to investigate whether predictable changes in location could be learned by observers and thereby incorporated into the test and confirmation of the perceptual hypothesis. Specifically, in a subset of trials (location-change trials), on each presentation of the search display targets cycled through a set of 5 predetermined locations, either in clockwork (Experiment 1) or jumbled (Experiment 2) fashion, although the starting location in the sequence changed from trial to trial. On control trials, the target did not change location between presentations. Both experiments showed significant RR in the control condition. Interestingly, we obtained significant RR on location-change trials, and this effect increased throughout the experiment, suggesting that sequence learning occurred and was slowly incorporated into the testing of perceptual hypotheses. These findings confirm that an interrupted visual search can be rapidly resumed even if the content of the hypothesis has changed, when the observer is given the possibility to predict the forthcoming change.

26.533 Perceptual load corresponds to known factors influencing visual search
Zachary J.J. Roper1 (zachary-roper@uiowa.edu), Joshua D. Cosman1, Jonathan T. Mordkoff1, Shaun P. Vecera1; 1Department of Psychology, University of Iowa

One recent account of the early versus late selection debate in attention proposes that perceptual load determines the locus of selection. Attention selects stimuli at a late processing level under low-load conditions but selects stimuli at an early level under high-load conditions. Despite the successes of so-called ‘load theory’, the notion of perceptual load remains poorly defined. We investigated the factors that influence perceptual load by using manipulations that have been studied extensively in visual search, namely target-distractor similarity and display heterogeneity.
First, using visual search, we examined search slopes as participants discriminated between two target letters. Consistent with previous work, search was most efficient when targets and distractors were dissimilar and the displays contained homogeneous distractors; search became less efficient when target-distractor similarity increased and when the displays contained heterogeneous distractors. Importantly, we next used these same stimuli in a typical perceptual load task that measured attentional ‘spill over’ to a task-irrelevant flanker. We found a correspondence between search efficiency and perceptual load: stimuli that generated efficient searches produced flanker interference effects, suggesting that such displays involved low perceptual load. Flanker interference effects were reduced in displays that produced less efficient searches, and both high target-distractor similarity and heterogeneous displays were required to abolish flanker effects. These results suggest that ‘perceptual load’ might be defined in part by well-characterized factors that influence visual search.

26.534 Modulating the attentional saliency of object onsets in natural scenes
Peter De Graef1 (Peter.DeGraef@psy.kuleuven.be), Geoffrey Hamon1, Filip Germeys2,1, Karl Verfaillie1; 1Laboratory of Experimental Psychology, University of Leuven, Belgium, 2European University College Brussels, Belgium

One of the most powerful events for capturing a viewer’s attention and gaze is the appearance of a new object in the visual scene: attention and gaze shifts to the new object’s location occur within 100-200 ms after object appearance. Previous research has identified determinants of this capture effect at various levels of the visual processing hierarchy: the transient associated with the onset, the appearance of new contours, the appearance of a new spatio-temporal entity, semantic category membership of the object, and semantic object-scene consistency. That immediate attentional capture by object onsets can be influenced by semantics is a controversial issue, and the purpose of our present set of studies was to determine whether this controversy might be resolved by looking at attentional capture effects from a fixation-contingent perspective. Specifically, is it possible that the attentional effect of object onsets is determined in the interplay of feed-forward feature analysis and re-entrant object recognition processes, and that the relative contribution of these two processing streams is modulated by the object’s peripheral position, its visibility, and its time of appearance relative to the position and duration of the current fixation? We report a series of eye-tracking experiments in which the latency of oculomotor reactions to object onsets is examined as a function of the object’s categorical and contextual semantics, its eccentricity, contrast, orientation and time of appearance relative to the ongoing fixation. In addition, to determine whether object onsets have a special attentional status, onset effects were compared against effects of unobtrusive attentional cues presented inside critical objects that were present throughout the scene’s exposure. Attentional capture by object onsets was shown to be shaped by an interaction of featural, semantic, positional and temporal properties of the object onset, thus documenting the existence and the boundary conditions of semantically modulated attentional capture.


26.535 Saliency enhances perceived contrast but degrades detection
Dirk Kerzel1 (dirk.kerzel@unige.ch), Sabine Born1; 1Faculté de Psychologie et des Sciences de l’Education, Université de Genève

Numerous studies have shown that saliency has a large influence on visual search. In contrast, very little is known about how salient objects are perceived in typical search displays. We measured the perceived contrast of a Gabor stimulus that either had the same orientation as the surrounding distractors or a different orientation. Observers were shown a circular array of eight Gabors, and two Gabors were marked as relevant. The task was to judge which of the Gabors in the marked locations had the higher contrast. We observed that the perceived contrast of Gabors with an orientation different from the context increased slightly. In another experiment, we investigated whether contrast enhancement, which we observed for above-threshold Gabors, would help observers to determine the location of a Gabor at contrast threshold. Observers were asked to indicate the position of the Gabor in one of two marked locations while the same six task-irrelevant Gabors as in Experiment 1 were shown. We found that it was more difficult to localize a Gabor that had an orientation different from the surrounding Gabors. Saliency may harm detection of stimuli at threshold while boosting the contrast of above-threshold stimuli.

Acknowledgement: Swiss National Foundation PDFM1-114417

26.536 Real-world Statistical Regularities Guide the Deployment of Visual Attention, Even in the Absence of Semantic Scene Recognition
Ashley Sherman1 (ashley.sherman2@gmail.com), George Alvarez1; 1Department of Psychology, Harvard University

Previous research has shown that the contextual information in real-world scenes helps guide visual attention when searching for a target within the scene (Torralba et al., 2006). However, it is unknown whether such contextual guidance can occur in the absence of semantic scene recognition. To address this question, we generated texture patterns that were unrecognizable as real-world scenes, yet preserved the statistical regularities of real-world scenes (i.e., the global pattern of orientation and spatial frequency information). In each texture, we embedded the image of a pedestrian at a location where a pedestrian was likely to appear with either a low probability or a high probability (based on an independent set of rankings using the original, real-world images). On each trial, observers were instructed to locate the pedestrian and indicate, as quickly and accurately as possible, the direction in which the pedestrian was facing. Response times for the high-probability trials (M = 1989 ms) were reliably faster than the low-probability trials (M = 2593 ms) (t(8) = 7.09, p < .05). Thus, the advantage for the high-probability locations arises from the global context of a particular image. Combined, these results suggest that the statistical regularities of real-world scenes can guide the deployment of visual attention, even in the absence of semantic scene recognition.

26.537 Knowing what not to look for: Difficulty ignoring irrelevant features in visual search
Jeff Moher1 (jmoher1@jhu.edu), Howard Egeth1; 1Department of Psychological and Brain Sciences, Johns Hopkins University

Foreknowledge of target-relevant information can be used to guide attention in visual search.
However, the role of ignoring in visual search - that is, having foreknowledge of information related to nontargets - remains relatively unexplored. In a recent paper, Munneke et al. (2008) demonstrated that participants could ignore the location of an upcoming distractor if that location was cued prior to the display. In a series of experiments using a similar design, we explored whether participants could ignore a specified feature. Participants were asked to identify which of two possible uppercase letters was present in a display consisting of four differently colored letters. On “Distractor-Cued” trials, participants were also told that the target would not be a specific color (e.g., “ignore red”). Participants were unable to successfully use these cues to speed search - in fact, they were slower to find the target on Distractor-Cued trials even though the cue contained relevant information and was 100% valid. We also measured compatibility effects of the cued distractor (a lowercase letter either compatible or incompatible with the target). There were stronger compatibility effects on Distractor-Cued trials later in the experiment, suggesting that participants were not learning to suppress the irrelevant feature. Taken together, these data suggest that while knowing where not to look facilitates visual search (Munneke et al., 2008), knowing what not to look for hinders visual search. In subsequent studies we show that while establishing an attentional set to ignore a feature prior to a given trial results in less efficient visual search, if a set is established, search can be more efficient when the to-be-ignored feature appears than when it does not. This is consistent with Woodman and Luck’s (2007) “template for rejection.” Ongoing experiments are investigating whether there are cases for which knowing what feature to ignore facilitates visual search.

Acknowledgement: T32 EY07143

26.538 Probabilistic information influences attentional process
Takashi Kabata1,2 (kabata@stu.kobe-u.ac.jp), Eriko Matsumoto1; 1Graduate School of Intercultural Studies, Kobe University, 2JSPS Research Fellow

Purpose: Recent studies of visual attention have reported that attention is guided by probabilistic information embedded in experimental tasks. In addition, some of these studies have suggested that the probability of target appearance is available as an attentional cue without explicit knowledge of the probabilistic information. It is, however, unclear what kind of information participants can exploit as an attentional cue. In the present study, we investigated whether probabilistic information implicitly defined by spatial location or by a symbolic cue was available to participants. Methods: Participants performed a visual search task. They were instructed to discriminate the orientation of a target presented in the left or right placeholder as quickly and accurately as possible. In Experiment 1, the spatial probability of target appearance was manipulated: in 60% of trials, target stimuli were presented in one placeholder (high-probability condition); in 20% of trials, they were in the other placeholder (low-probability condition); and in the remaining 20% of trials, no target stimuli were presented. In Experiment 2, cue validity was manipulated. The cues were the colors of the central fixation.
In 60% of trials, the color cues were valid (valid condition); in 20% of trials, the cues were invalid (invalid condition); and in the remaining 20% of trials, no target stimuli were presented. Results & Conclusion: In Experiment 1, target discrimination in the high-probability condition was faster than in the low-probability condition. On the other hand, in Experiment 2, there was no difference in reaction times between the valid and invalid conditions. These results suggest that when probabilistic information is defined by spatial locations, it guides attention even though participants do not notice the information. In contrast, when probabilistic information is defined by symbolic cues, it does not guide attention.

26.539 Bound to guide: A surprising, preattentive role for conjunctions in visual search
Jeremy Wolfe1,2 (wolfe@search.bwh.harvard.edu); 1Visual Attention Lab, Brigham & Women’s Hospital, 2Dept. of Ophth., Harvard Medical School

According to Guided Search (and similar models), features are only conjoined once an object is attended. This assertion is supported by many experiments: e.g., conjunctions of features do not pop out in visual search, and observers are poor at judging proportions of different types of conjunctions in displays. Thus, observers appear to be insensitive to preattentive conjunctions of features. Now, consider two versions of a triple-conjunction search for red, vertical, rectangular targets among distractors that could be red, green, or blue; vertical, horizontal, or oblique; and rectangular, oval, or jagged. In one condition, all 26 possible distractor types are present on each trial (set sizes: 27 and 54). In the other condition, only three distractor types are present (e.g., red oblique ovals, jagged green verticals, and blue horizontal rectangles). Critically, in each condition, each feature is evenly distributed in the display: i.e., 1/3 of items are red, 1/3 green, 1/3 blue, and similarly for orientation and shape. Since the preattentive feature maps are identical in both conditions, search performance should not differ. However, RTs are faster for the condition with only three distractor types (grand means: 625 msec vs. 835 msec). How can we explain this? Perhaps the easier search was done by selecting one feature (e.g., red items) and looking for an oddball in that subset. However, in a control experiment, when the target was defined as the oddball in the otherwise homogeneous red subset, search was ~200 msec slower than in the three-distractor condition. Alternatively, it may be possible to reject groups of identical items even if the group is defined conjunctively. Regardless of the explanation, these data show that the preattentive conjunction of basic features speeds search even though explicit appreciation of conjunctions requires attention.

26.540 Spatio-temporal mapping of exogenous and endogenous attention
Roger Koenig-Robert1,2 (roger.koenig@cerco.ups-tlse.fr), Rufin VanRullen1,2; 1Université de Toulouse, UPS, Centre de Recherche Cerveau & Cognition, France, 2CNRS, CerCo, Toulouse, France

The spatial distribution and the temporal dynamics of attention have been studied countless times. Although these two factors are well understood in isolation, their interaction remains much less clear. How does the shape of the attentional focus evolve across time? To answer this question we measured a quantitative space-time map of both endogenous and exogenous attention in humans. To sample attention effects in the space-time domain we tested the visibility of a low-contrast target presented at different distances and delays from a cue in a noisy background. For exogenous attention we used a non-informative high-contrast peripheral cue at a random location 5° from fixation. In the endogenous condition we used a central informative arrow cue pointing left or right. We sampled the spatial domain as the Euclidean cue-target distance, locating the target randomly on the screen in the exogenous condition, and randomly along the horizontal midline in the endogenous condition. As an indirect measure of attention, we determined, for each distance and delay from the cue, the background contrast compensation required to keep performance at 75% (adjusted with a staircase procedure). After more than 94,000 trials in 13 subjects, the space-time mapping of exogenous attention revealed a progressive enhancement from 50 to 275 ms, extending up to 8° from the cue. Endogenous attention maps (over 40,000 trials in 8 subjects) showed an early (100 ms) enhancing effect centered on the cue, with a later deployment at the cued side peaking between 8 and 10° at 400 ms after cue onset. Finally, we measured the interdependency between the spatial pattern of visual attention and its temporal dynamics: most of the data could be explained by a constant spotlight shape, independent of time. Our results represent the first detailed space-time maps of both endogenous and exogenous visual attention.

Acknowledgement: CONICYT, EURYI and ANR 06JCJC-0154

26.541 The Dynamics of Top-Down and Bottom-Up Control of Visual Attention during Search in Complex Scenes
Marc Pomplun1 (mpomplun@gmail.com), Alex Hwang1; 1Department of Computer Science, University of Massachusetts Boston

The interaction of top-down and bottom-up control of visual attention is of central importance for our understanding of vision, and most of its extensive study has employed the paradigm of visual search. However, little is known about the dynamics of top-down and bottom-up mechanisms during demanding search tasks in the complex scenes that guide our attention so efficiently in everyday situations. Here, we present and apply a novel method to estimate the time course of visual span, that is, the area around gaze fixations from which visual features can exert top-down or bottom-up control of attention.
The method assumes that a larger visual span allows larger areas of relevant information to attract eye movements for inspection within a single, central fixation (center-of-gravity effect; Findlay, 1982). Indeed, the distribution of gaze fixations can be predicted by convolving the distribution of relevant information with a Gaussian kernel whose size matches the visual span (Area Activation Model; Pomplun et al., 2003). In this study, we computed separate top-down (Hwang et al., 2009) and bottom-up saliency maps (Itti & Koch, 2001) for 160 real-world search displays and convolved each of them with Gaussian kernels of different sizes. Those sizes that resulted in the best predictors of the positions of 15 subjects’ search fixations were taken as estimates of visual span. We used this method to estimate the strength and visual span of top-down and bottom-up control of attention by display features during different phases of the search process. Top-down control was found to be weak initially but to quickly dominate search while narrowing its focus, whereas bottom-up control revealed slowly diminishing influence and a constantly large visual span. These results suggest that, throughout the search process, accumulating scene knowledge determines the dynamics of attentional control, which is not reflected in current models.

Acknowledgement: Grant Number R15EY017988 from the National Eye Institute to Marc Pomplun
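The convolve-and-compare logic described above can be sketched in a few lines: blur a map of task-relevant information with Gaussian kernels of different widths and pick the width that best predicts the observed fixation positions. This is a generic illustration of the idea, not the authors' implementation; the function names, inputs and likelihood score are assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_predictor(relevance_map, span_sigma):
    # Area Activation-style prediction: blur the relevance map with a
    # Gaussian whose width stands in for the visual span, then normalize
    # to a probability map over display locations.
    act = gaussian_filter(relevance_map, sigma=span_sigma)
    return act / act.sum()

def estimate_span(relevance_map, fix_rows, fix_cols, sigmas):
    # Pick the kernel size whose prediction gives the observed fixation
    # coordinates the highest summed log-probability.
    scores = [np.log(fixation_predictor(relevance_map, s)[fix_rows, fix_cols]).sum()
              for s in sigmas]
    return sigmas[int(np.argmax(scores))]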
26.542 The salience of absence: when a hole is more than the sum of its parts
Li Zhou1 (lizhou@itp.ac.cn), Li Zhaoping2; 1Institute of Theoretical Physics, Chinese Academy of Sciences, 2University College London, UK

An item can be conspicuous against a uniform background either by possessing a feature other items lack or, typically to a lesser degree, by lacking a feature the others share. It has always been assumed that the conspicuity of feature absence arises from the saliency at its location. However, if the salience of a location is determined by the largest neural response from primary visual cortex (V1) that it inspires, as suggested by a recent theory, then the only way an absence or a hole could become conspicuous is if the saliency of its surrounding stimuli attracts attention to its vicinity (Li, 2002); it would lead to no V1 activity by itself. Specifically, the absence of input at the hole reduces suppression of the V1 responses to the stimuli surrounding it, in a way that depends on spatial- and feature-specific suppression between nearby V1 neurons, making the surrounding stimuli more salient. If this enhanced saliency of the surround determines the conspicuity of the hole, then altering the visual input strength to those surrounding stimuli should alter the reaction time (RT) for finding the hole in a visual search task, in a way that is predictable from V1 interactions. We test this prediction by measuring observers’ RTs for finding a target among distractor crosses when the target consists of just one of the two bars of a cross. When the target bar has sufficiently low contrast, the RT does not increase when its contrast is reduced further, indicating that the V1 response it evokes is immaterial to its conspicuity. Meanwhile, changing the contrast of various bars in the surrounding crosses alters the RTs according to the feature and spatial specificities of V1 interactions. The saliency of a hole may only be a subsequent impression inferred from our perceptual experience.

Acknowledgement: (a) National Basic Research Program of China (973 Program) No. 2007CB935903, (b) Tsinghua University’s support, (c) Gatsby Charitable Foundation, (d) a Cognitive Science Foresight grant BBSRC #GR/E002536/01

26.543 Performance Costs and Benefits for Simultaneous Dynamic Events in Visual Search
Meera Mary Sunny1 (m.m.sunny@warwick.ac.uk), Adrian von Muhlenen1; 1Department of Psychology, University of Warwick

Attention capture is not only measured in terms of reaction time (RT) benefits in finding a certain target, but also in terms of RT costs produced by certain distractors. Thus, one would expect that if the number of such distractors is systematically increased, RT costs would be even higher. The present study looked at cumulative interference effects from multiple dynamic events on target detection. In Experiment 1 the search display consisted of a combination of static, abrupt-onset (onset) and moving items, which could all be targets or distractors with equal probability. In line with previous studies, participants were fastest when the target was an onset item, slower when it was a static item, and slowest when it was a moving item. Surprisingly, the type of distractor(s) did not have any effect on search performance, nor did it depend on the type of target. In Experiment 2, motion was replaced by an onset of motion (motion-onset). Based on previous studies (Abrams and Christ, 2003), which have shown that motion onset captures attention, we expected that the competition between onset and motion-onset items would lead to a distractor-type effect. Again, motion-onset targets did not capture attention, and there was no effect of distractor type. In Experiment 3, the display size was increased from three to eight items and the number of onsets was systematically varied between zero and eight. Results showed the typical advantage for onset targets in comparison to static targets. Furthermore, RTs to an onset target increased as a (power) function of the number of onsets, while RTs to a static target were unaffected by the number of onsets. Thus, the cost of having multiple onset distractors did occur with eight-item, but not with three-item, displays, suggesting that a limited-capacity bottleneck might be involved in the attentional prioritization process.

26.544 Neural mechanisms underlying active ignoring in the ageing brain
Helen Payne1 (h.e.payne@bham.ac.uk), Harriet Allen1; 1Brain and Behavioural Sciences, School of Psychology, Birmingham, UK


There is evidence to suggest that we actively ignore information that is irrelevant to our current goals. This is demonstrated using the preview search paradigm. Here, half of the distracters in a visual search task are presented briefly before the addition of the remaining distracters (and target) to the display. Results for young adults show that the time taken to find the target in these “Preview” trials is reduced in comparison to a “Full” condition where all distracters are presented simultaneously. This preview benefit suggests that observers exclude the previewed distracter items from search. fMRI studies reveal enhanced neural activation in posterior parietal cortex in response to preview trials, reflecting a distinct active ignoring process. Ageing is associated with various cognitive costs, including a reduced ability to inhibit processing. Thus, older adults may show less preview benefit because they are unable to ignore the previewed items. There is some evidence that older adults do not benefit from the preview display in the same manner as young adults. A key aim of our study was to compare neural activation during active ignoring between old and young participant groups to investigate how ageing affects ignoring. We found that old (M = 71.8 years) and young adults (M = 21.8 years) who demonstrated a clear behavioural preview benefit showed similar areas of neural activation to each other. Contrasting preview trials against full trials revealed activation in the precuneus and superior parietal lobule (SPL), areas consistently activated in previous fMRI studies with young adults. Furthermore, activity in the SPL was significantly greater for older adults. These results show that 1) older adults are able to ignore previewed distracter items and 2) the function of the posterior parietal cortex, an area implicated in distracter suppression, can be retained in older adults.

26.545 The effects of feature preview history and response strategy on inter-trial suppression of selective attention
Eunsam Shin1 (shine@missouri.edu), Alejandro Lleras2; 1Department of Psychological Sciences, University of Missouri, 2Department of Psychology, University of Illinois at Urbana-Champaign

In a color-oddball search task, when a target’s color in the current search display has been passively viewed in a preceding target-absent display (TAD), the response time (RT) to the target is slower than when the distractor’s color in the current search display was passively viewed. The RT difference between the target-color preview and the distractor-color preview is known as the distractor previewing effect (DPE). Four experiments were conducted to investigate the effects of target appearance predictability on the DPE by distributing trials in a blocked and a random fashion, in which the number of TAD presentations was fixed and varied within each block, respectively. Simultaneously, we examined history effects of multiple previews of target and distractor features (ranging from 0 to 2 in Exps. 1A and 2A; from 0 to 5 in Exps. 1B and 2B) on target response in the blocked (Exps. 1A and 1B) and random (Exps. 2A and 2B) designs. For the consecutive 2-TAD presentations, a single (target or distractor) color was repeated twice, or target and distractor colors were alternated, prior to the search display in Exps. 1A and 2A. In Exps. 1B and 2B, either a target or a distractor (not both) color was repeated in consecutively presented TADs.
We found: (a) the size of the DPE increased as the number of TADs increased, with that increase more consistent in the random than in the blocked design; (b) the DPE occurred in both the 2- and 1-TAD conditions in the blocked design, but only in the 2-TAD condition in the random design; (c) the color previewed in the immediately preceding TAD (i.e., one-back) influenced the RT more than the color in the two-back. These results demonstrate cumulative history effects (with more weight on recent events) and top-down response-strategy effects on target selection.

Spatial vision: Mechanisms and models
Vista Ballroom, Boards 546–557
Saturday, May 8, 2:45 - 6:45 pm

26.546 Locating the functional vertical midline with a motion probe
Pascal Mamassian1 (pascal.mamassian@univ-paris5.fr); 1CNRS & Université Paris Descartes, France

The vertical midline splits the visual fields into two halves that are represented in contralateral hemispheres. While space is retinotopically encoded across most visual areas within each hemisphere, the vertical division between hemifields necessarily disrupts this topological organization. We are interested here in measuring the functional consequences of the vertical split. In particular, we investigate how crossing the vertical midline impairs motion sensitivity.

Observers were engaged in a motion speed-change detection task in which the change occurred midway along the trajectory of a rotating dot. Two dots diametrically opposed on a virtual circle each travelled a quarter of the circle. Only one of the dots changed speed for a brief duration, and observers had to report which dot presented the speed change (the one above or below fixation). The speed change could occur just before or just after the dot crossed the vertical midline. Viewing was monocular.

Observers were significantly worse at detecting the speed change when it occurred after crossing the midline than before. In addition, the range over which motion sensitivity was impaired increased with the speed of the stimulus.

On the theoretical side, the loss of motion sensitivity after the vertical midline possibly reflects an impairment in predicting the future location of the moving dot, or in communicating this prediction across hemispheres. On the practical side, this phenomenon is useful for estimating the location of the functional midline and determining the extent to which the area around the vertical midline is represented in both hemispheres.

Acknowledgement: CODDE project (EU Marie Curie ITN), CNRS

26.547 Modeling the representation of location within two-dimensional visual space using a neural population code
Sidney Lehky1 (sidney@salk.edu), Anne Sereno2; 1Computational Neuroscience Laboratory, The Salk Institute, 2Department of Neurobiology and Anatomy, University of Texas Health Science Center-Houston

Although the representation of space is as fundamental to visual processing as the representation of shape, it has received relatively little attention. Here we develop a neural model of two-dimensional space and examine how the representation is affected by the characteristics of the encoding neural population (RF size, distribution of RF centers, degree of overlap, etc.). Spatial responses of the model neurons in the population were defined by overlapping Gaussian receptive fields. Activating the population with a stimulus at a particular location produced a vector of neural responses characteristic of that location. Moving the stimulus to n locations along the frontoparallel plane produced n response vectors. To recover the geometry of the visual space encoded by the neural population, the set of response vectors was analyzed by multidimensional scaling, followed by a Procrustes transform. The veridicality of the recovered neural spatial representation was quantified by calculating the stress, or normalized square error, between physical space and this recovered neural representation. The modeling found that large receptive fields provide more accurate spatial representations, thus undermining the longstanding idea that large receptive fields in higher levels of the ventral visual pathway are needed to establish position-invariant responses. Smaller receptive field diameters degrade and distort the spatial representation. In fact, populations with the smallest receptive field sizes, which are present in early visual areas and, at the single-cell level, contain the most precise spatial information, are unable to reconstruct even a topologically consistent rendition of space. Development of this neural model provides a general theoretical framework not only for understanding neurophysiological spatial data, but also for testing how various neuronal parameters affect spatial representation.

Acknowledgement: Funded by NSF
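The reconstruction pipeline described above (Gaussian receptive fields, population response vectors, multidimensional scaling, Procrustes alignment, stress) can be sketched end to end in a few lines. This is a generic re-implementation under assumed parameter values, not the authors' code; the effect of receptive-field size can be probed by varying sigma.

import numpy as np
from sklearn.manifold import MDS
from scipy.spatial import procrustes

rng = np.random.default_rng(1)
locs = rng.uniform(-10, 10, size=(40, 2))      # probed stimulus locations (deg)
centers = rng.uniform(-10, 10, size=(80, 2))   # receptive-field centers
sigma = 5.0                                    # RF size; smaller values should raise stress

# Gaussian RF responses: one response vector (row) per stimulus location.
d2 = ((locs[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
responses = np.exp(-d2 / (2 * sigma ** 2))

embedding = MDS(n_components=2).fit_transform(responses)
_, _, stress = procrustes(locs, embedding)     # normalized squared error vs. physical space
print(stress)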
26.548 Faster periphery and slower fovea for coherent perception
Oren Yehezkel1 (yehez@post.tau.ac.il), Anna Sterkin1, Yoran Bonneh2, Uri Polat1; 1Faculty of Medicine, Goldschleger Eye Research Institute, Sheba Medical Center, Tel Hashomer, Tel Aviv University, Israel, 2Department of Human Biology, University of Haifa, Haifa, Israel

Central vision, the fovea, is thought to be processed differently from the peripheral parts of the visual field, relying on different physiological streams. However, because the fovea and periphery are usually stimulated simultaneously, one would expect mutual modulations between the two representations in order to achieve a unified percept. We measured ERP responses to different sizes of Gabor patches, occupying from a strictly foveal region (0.4 degrees) to a combined foveal and peripheral part of the visual field (up to 14 degrees). Annuli (rings produced from a Gabor with the foveal opening filled with mean-luminance background) were used to stimulate the surround. The results show 3 main components representing foveal and peripheral processing. 1) P1 amplitude increased with increasing absolute stimulus area, similarly to our findings for increasing contrasts. Moreover, it reflected a linear summation of the sensory representations of complementary center and surround stimuli. However, surprisingly, the latency showed faster processing in the periphery than in the fovea. 2) P2 amplitude showed no linear summation between the two parts. However, latency showed significant additional gains in the speed of processing for the combination of center and surround, compared to the parts in isolation, suggesting that the periphery accelerates the processing of the fovea. 3) N2 amplitude showed no linear summation, but a step change from strictly foveal to peripheral stimulation, despite the linear shortening of latencies with increasing stimulation area. Moreover, the difference in amplitude for the peripheral stimulus vs. the one combining both fovea and periphery supports our earlier suggestion that N2 reflects lateral interactions from the fovea. Surprisingly, stimulation of the periphery increases the speed of foveal processing. Our results suggest interactions between the representations of the fovea and the periphery, rather than independent representations. Thus, faster peripheral processing compensates for spatial distance, resulting in a coherent percept.

Acknowledgement: Supported by grants from the National Institute for Psychobiology in Israel, funded by the Charles E. Smith Family, and the Israel Science Foundation

26.549 Blur clarified
Andrew Watson1 (andrew.b.watson@nasa.gov), Albert Ahumada1; 1NASA Ames Research Center

A review of the literature on blur detection and discrimination reveals a large collection of data, a few theoretical musings, but no predictive model. Among the key empirical findings are a “dipper”-shaped function relating blur increment threshold to pedestal blur, as well as a nonlinear effect of luminance contrast. We have found that these phenomena and others are accounted for by a simple model in which discrimination is based on the energy of differences in visible contrast. Visible contrast is computed from the luminance waveform, as modified by local light adaptation and local contrast masking. The energy of the difference between two visible contrast waveforms, within a pooling aperture, determines threshold. This model can also predict detection thresholds for one-dimensional waveforms such as Gabor signals. When fit to the ModelFest Gabors, it gives reasonable predictions for classic blur detection and discrimination data as well.

Acknowledgement: Supported by NASA’s Space Human Factors Engineering Project, WBS 466199
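The difference-energy scheme described above can be caricatured in a few lines: map luminance to visible contrast, subtract the two waveforms, and pool the squared difference. The adaptation and masking stages below are crude stand-ins for the model's fitted components, and all constants are illustrative.

import numpy as np
from scipy.ndimage import gaussian_filter1d, uniform_filter1d

def visible_contrast(lum):
    # Local light adaptation (contrast relative to a local mean) followed
    # by a compressive masking nonlinearity; both are simplifications.
    local_mean = uniform_filter1d(lum, size=31)
    c = (lum - local_mean) / (local_mean + 1e-6)
    return np.sign(c) * np.abs(c) ** 0.7

def difference_energy(lum_a, lum_b):
    d = visible_contrast(lum_a) - visible_contrast(lum_b)
    return (d ** 2).sum()  # pooled energy; threshold when it reaches a criterion

# Pedestal blur vs. pedestal-plus-increment on a luminance edge.
x = np.linspace(0, 1, 512)
edge = 0.5 + 0.4 * (x > 0.5)
print(difference_energy(gaussian_filter1d(edge, 8), gaussian_filter1d(edge, 10)))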
26.550 Extended depth of focus spectacles for full visual field presbyopia correction via brain adaptation
Alex Zlotnik1 (alex.zlotnik@gmail.com), Shai Ben Yaish1, Oren Yehezkel2, Michael Belkin2, Zeev Zalevsky3; 1Xceed Imaging, Petach Tikva, Israel, 2Goldshleger Eye Research Institute, Tel Aviv University, Tel Hashomer, Israel, 3Faculty of Medicine, Goldschleger Eye Research Institute, Sheba Medical Center, Tel Hashomer, Tel Aviv University, Israel

Extended depth of focus (EDOF) techniques were previously adapted for ophthalmic use as a solution for presbyopia and astigmatism. The aim of this research was to use the brain’s capacity for adaptation to produce a homogeneous EDOF over the full visual field (VF), with EDOF engravings positioned at discrete locations in the optical system. A set of EDOF profiles was engraved every 3 mm over the whole external surface of a spectacle lens. We studied 14 presbyopic patients aged 48-68 (average reading addition of 2.2 D, astigmatism of 0.50-1.00 D). The VF was tested by examining visual acuity at tens of random points within a VF of about 30 degrees by displaying various Snellen letters. Subjects had to identify the letters, with responses scored from one to four: (1) identifying the letter displayed, (2) identifying a letter similar to the one displayed, (3) naming a letter not similar to the one displayed, or (4) not recognizing the letter. Results in LogMAR units: without the EDOF profile, best corrected visual acuity (BCVA) was -0.01 and distance-corrected near visual acuity (DCNVA) was 0.465. With the EDOF spectacle lens, BCVA = 0.05 and DCNVA = 0.079. Additionally, the EDOF lens overcame up to 1.00 D of astigmatism. Stereo perception, color vision, and contrast sensitivity remained unaffected. In 96% of the VF a correct answer was recorded (category 1). In 2%, small errors were measured (category 2). In the remaining 2%, either large errors or no recognition were recorded. The high EDOF performance demonstrates a solution for presbyopia over the full VF, allowing good reading ability. This was achieved via a brain adaptation process forcing the reader to gaze only through predefined directions that coincided with the discrete engraving locations.

26.551 Orientation and shape tuning of the van Lier aftereffect
Takao Sato1 (Lsato@mail.ecc.u-tokyo.ac.jp), Yutaka Nakajima2; 1Department of Psychology, Graduate School of Humanities and Sociology, University of Tokyo, 2Intelligent Modeling Laboratory, University of Tokyo

van Lier et al. (2009, Current Biology) reported an intriguing color aftereffect. They adapted observers to two differently colored, overlapping four-point stars sharing a center but with a 45-deg relative rotation, and subsequently presented an achromatic test outline of one of the stars. The perceived afterimage was stronger inside the test, and the color of the aftereffect inside the test pattern extended to the central area that had been colored gray in the adaptation phase. In the present study, we evaluated the orientation and shape selectivity of the phenomenon using almost the same stimuli. For orientation tuning, the test pattern was rotated relative to the upright adaptor. The original phenomenon was reproduced when the adaptor and test overlapped exactly, but most observers saw the afterimage only within the test contour pattern. The afterimage switched on and off altogether depending on test orientation. A similar afterimage was observed for rotations of up to 15 deg, with observers perceiving a color afterimage, including the central area, corresponding to the adaptor with the nearer orientation. In addition, the afterimages in the rotated conditions did not exactly fill in the test contour, but retained the original upright orientation, in discrepancy with the rotated test contours. For shape tuning, the base width of the stars was manipulated, and similar aftereffects were observed for variations of the test contour width of up to 20% on the fatter side and more than 50% on the thinner side. Here again, discrepancies between afterimage and test contour similar to those found for rotation were observed.
26.553 Comparing properties of the spatial integration of local signals into perceived global structure
Andrew Meso 1 (andrew.meso@mcgill.ca), Robert Hess 1; 1 McGill Vision Research, Dept. of Ophthalmology, McGill University
Sensitivity to global structure was investigated using a stimulus containing perceived vertical or horizontal bands, generated by superimposing a pair of narrowband noise images modulated by two out-of-phase periodic functions (Watson & Eckert, 1994, JOSA A, 11, 496-505). We probed a moving version of the stimulus, in which the components making up the pair of noise images have opposite directions of motion, and then a static analogue, in which the pair of components have orthogonal carrier orientations. The stimuli contain local signals, characterised by the carrier frequency, which have to be integrated over a larger spatial extent determined by the modulation frequency, which we therefore considered a global parameter. We obtained threshold luminance and modulator contrast sensitivities using a two-interval 2AFC psychophysical detection task. We found that the motion stimulus showed band-pass tuning of the ratio of carrier to modulation frequency, with a peak corresponding to an optimum sensitivity where the modulator is of a scale ten times the carrier. This optimal sensitivity was found to be scale invariant over a range of retinal image sizes varied up to a factor of 10 with a fixed number of modulator cycles. This result suggests a coupling between the spatial frequency of local motion detection stages and the integration process, which happens at a larger scale. In the case of the static orientation stimulus, a much broader tuning was found, which showed an optimum at a higher ratio (
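The banded stimulus described in 26.553 (after Watson & Eckert, 1994) can be approximated by modulating two independent narrowband noise carriers with complementary, out-of-phase envelopes. This sketch is a rough reconstruction under assumed parameters (carrier and modulator frequencies, bandwidth, display resolution) and uses isotropic carriers rather than the opposite-motion or orthogonal-orientation components of the actual experiment.

```python
import numpy as np

def bandpass_noise(shape, center_cpd, bandwidth, pix_per_deg, rng):
    """White noise filtered to a radial frequency band; an assumed
    stand-in for the narrowband carriers used in 26.553."""
    fy = np.fft.fftfreq(shape[0], d=1.0 / pix_per_deg)
    fx = np.fft.fftfreq(shape[1], d=1.0 / pix_per_deg)
    radius = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    band = np.abs(radius - center_cpd) < bandwidth / 2.0
    spectrum = np.fft.fft2(rng.standard_normal(shape)) * band
    noise = np.real(np.fft.ifft2(spectrum))
    return noise / np.abs(noise).max()

rng = np.random.default_rng(1)
size, ppd = 256, 32                        # pixels, pixels per degree
carrier_a = bandpass_noise((size, size), 8.0, 2.0, ppd, rng)
carrier_b = bandpass_noise((size, size), 8.0, 2.0, ppd, rng)

# Out-of-phase periodic modulators at a much lower ("global") frequency:
# where one carrier is at full strength the other vanishes, producing
# perceived horizontal bands.
y = np.arange(size) / ppd
mod = 0.5 * (1 + np.sin(2 * np.pi * 0.8 * y))[:, None]   # 0.8 c/deg bands
stimulus = carrier_a * mod + carrier_b * (1 - mod)
```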


Sunday Morning Talks

Color and light
Sunday, May 9, 8:15 - 10:00 am
Talk Session, Royal Ballroom 1-3
Moderator: Qasim Zaidi

31.11, 8:15 am
Chromatic variations suppress suprathreshold brightness variations
Frederick Kingdom 1 (fred.kingdom@mcgill.ca), Jason Bell 1, Gokhan Malkoc 2, Elena Gheorghiu 3; 1 McGill Vision Research, Department of Ophthalmology, McGill University, Montreal, Canada, 2 Laboratory of Experimental Psychology, University of Leuven, Belgium, 3 Dogus University, Faculty of Arts and Sciences, Department of Psychology, Acıbadem, Kadıköy, 34722 Istanbul, Turkey

Aim. To determine the relative perceptual saliencies of suprathreshold color (chromatic) and luminance variations when the two are combined. Method. The stimulus was similar to that used by Regan & Mollon in their study of the relative saliencies of the cardinal color directions (in Cavonius, ed., Colour Vision Deficiencies XIII, 1997). It consisted of left- and/or right-oblique modulations of color or luminance defined within a lattice of circles, with each circle ringed by a black line to minimize any impression of transparency when the different modulations were combined. There were two conditions. In the 'separate' condition, the color and luminance contrasts were presented separately in a 2IFC procedure and the subject indicated on each trial the interval containing the more salient modulation. In the 'combined' condition, the two modulations, which were orthogonal in orientation, were added together and the subject indicated on each trial whether the dominant perceptual organization was left- or right-oblique. For each color direction and for each condition, the relative color-to-luminance contrast at the PSE was calculated. Results. For all color directions, PSEs for the 'separate' and 'combined' conditions were significantly different: more luminance contrast relative to color contrast was required to achieve a PSE in the 'combined' compared to the 'separate' condition, suggesting that in the combined condition the luminance variations were being masked by the color variations. Conclusion. Suprathreshold color variations mask suprathreshold brightness variations.

Acknowledgement: Canadian Institute of Health Research grant #11554 given to F.K.

31.12, 8:30 am
Uncovering multiple higher order chromatic mechanisms in cone contrast space
Thorsten Hansen 1 (Thorsten.Hansen@psychol.uni-giessen.de), Karl Gegenfurtner 1; 1 General and Experimental Psychology, Justus Liebig University Giessen

Despite good psychophysical and physiological evidence, the number and nature of multiple higher-order chromatic mechanisms is still under debate. This is mainly due to several studies that defined their stimuli in cone contrast space (CCS) and failed to find support for higher-order mechanisms. We measured detection thresholds for chromatic directions in cone contrast space using a noise masking paradigm (Hansen & Gegenfurtner (2006), Journal of Vision, 6(3):5, 239-259). Our choice of masking directions (38 and 47 deg) was guided by an analysis of the nonlinear mapping of angles between cone contrast space and a post-receptoral color space (DKL). When the noise contrast was sufficiently high (40% rms cone contrast), we found clear evidence for selective masking, indicating multiple mechanisms tuned to these directions. Why did earlier studies in CCS fail to find evidence for higher-order chromatic mechanisms?
First, the noise directions in CCS tested in previous studies (90 deg ΔM/M, 135 deg isolum) map to almost identical angles in DKL space (7.1 and 1.6 deg), implying that effectively only one higher-order mechanism (L−M) was stimulated. Second, the masking contrast in these studies was generally very low (
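The angle compression that 31.12 attributes to the CCS-to-DKL mapping can be demonstrated with a toy linear transform. The matrix below, with its arbitrary 1:10 luminance-to-chromatic scaling, illustrates the principle rather than the calibrated transform the authors used, so it will not reproduce their 7.1 and 1.6 deg values exactly.

```python
import numpy as np

# Toy linear map from (dL/L, dM/M) cone contrasts to a DKL-like space
# whose axes are a luminance (L+M) and a chromatic (L-M) mechanism.
# The relative axis scaling is an assumption for illustration only.
CCS_TO_DKL = np.array([[1.0,   1.0],    # luminance-like axis
                       [10.0, -10.0]])  # chromatic axis, amplified

def dkl_angle(ccs_angle_deg):
    """Map a direction in cone-contrast space to its azimuth in the
    DKL-like opponent space defined above."""
    theta = np.deg2rad(ccs_angle_deg)
    v = CCS_TO_DKL @ np.array([np.cos(theta), np.sin(theta)])
    return np.rad2deg(np.arctan2(v[1], v[0]))

# Directions that look far apart in cone-contrast space can land close
# together in the opponent space (and vice versa), as argued in 31.12.
for a in (38, 47, 90, 135):
    print(a, "deg in CCS ->", round(dkl_angle(a), 1), "deg in DKL-like space")
```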


Results: The univariate analysis showed little difference in central response amplitudes but distinct differences in the surround for all three color directions: luminance stimuli created a large negative BOLD response, while S-cone and (L−M)-cone isolating isoluminant stimuli did not. Although isoluminant stimuli did not generate a mean activity change in the surround, it was possible they caused changes in small-scale patterns of voxel responses in this region. After removing periphery mean responses, we used the classification routine to ask whether we could predict the color of foveal targets based on responses of peripheral cortical neurons. We found that we could discriminate all three color directions using a multivariate pattern classifier operating on peripheral voxels as well as central ROIs.
Conclusions: The spatially-extended negative BOLD effect is largely generated by luminance stimuli. This is consistent with single-unit studies showing the largest suppression from extraclassical receptive fields in magnocellular neurons of the early visual system. However, our ability to classify isoluminant stimuli based on peripheral population responses confirms that some V1 neurons must have large, chromatically-tuned suppressive surrounds.

Acknowledgement: NIH EY018157-02, NSF BCS-0719973

31.15, 9:15 am
Effects of image dynamic range on perceived surface gloss
James Ferwerda 1 (jaf@cis.rit.edu), Jonathan Phillips 1; 1 Munsell Color Science Laboratory, Carlson Center for Imaging Science, Rochester Institute of Technology

One of the defining characteristics of glossy surfaces is that they reflect images of their surroundings. High-gloss surfaces produce sharp reflections that show all the features of the surround, while low-gloss surfaces produce blurry reflections that show only bright "highlight" features. Due to the presence of light sources and shadows, the illumination field incident on a glossy surface can have high dynamic range. This means that the reflections can also have high dynamic range. However, in a conventional image of a glossy object, the high dynamic range reflections are compressed through tone mapping to make the image fit within the output range of the display. While the utility of conventional images demonstrates that the general characteristics of glossy objects are conveyed by tone-mapped images, an open question is whether the tone mapping process distorts the apparent gloss of the imaged object. We have conducted a series of experiments to investigate the effects of image dynamic range on perceived surface gloss. Using a custom-built high dynamic range display, we presented high dynamic range (HDR) and standard dynamic range (tone-mapped, SDR) images of glossy objects in pairs and asked subjects to choose the glossier object. We tested objects with both simple and complex geometries and illuminated the objects with both artificial and natural illumination fields. We analyzed the results of the experiments using Thurstonian scaling, and derived common scales of perceived gloss for both the HDR and SDR object renderings. Our findings are that 1) limiting image dynamic range does change the apparent gloss of depicted objects: objects shown in SDR images were perceived to have lower gloss than identical objects shown in HDR images; 2) gloss differences are less discriminable in SDR images than in HDR images; and 3) surface geometry and environmental illumination modulate these effects.
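Thurstonian scaling of the paired-comparison judgments in 31.15 can be computed directly from a choice-count matrix. This is a generic Case V sketch (equal-variance, unidimensional assumptions) with invented data, not necessarily the exact variant the authors used.

```python
import numpy as np
from scipy.stats import norm

def thurstone_case_v(choice_counts):
    """Thurstone Case V scale values from a paired-comparison matrix.

    choice_counts[i, j] = number of trials on which stimulus i was
    chosen as glossier than stimulus j.  Returns one scale value per
    stimulus (mean-centred, in z-score units).
    """
    counts = np.asarray(choice_counts, dtype=float)
    n = counts + counts.T                        # trials per pair
    p = np.where(n > 0, counts / np.maximum(n, 1), 0.5)
    p = np.clip(p, 0.01, 0.99)                   # avoid infinite z-scores
    z = norm.ppf(p)                              # preference -> z difference
    np.fill_diagonal(z, 0.0)
    return z.mean(axis=1)                        # Case V least-squares scale

# Made-up data: 3 renderings, 20 comparisons per pair.
counts = np.array([[ 0, 15, 18],
                   [ 5,  0, 13],
                   [ 2,  7,  0]])
print(thurstone_case_v(counts))   # higher value = perceived glossier
```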
31.16, 9:30 am
Interaction of diffuse and specular reflectance in the perception of object lightness and glossiness
Maria Olkkonen 1 (mariaol@sas.upenn.edu), David Brainard 1; 1 Department of Psychology, University of Pennsylvania

Purpose. To judge object surface properties, the visual system must estimate reflectance from the light signal arriving at the eyes. We ask to what extent observers are able to do this under geometrically varying light fields, and focus on the interaction between two distinct reflectance properties: diffuse and specular. These are the physical correlates of the percepts of lightness and glossiness. If the two reflectance attributes are processed independently, future experiments can be simplified by studying each in isolation, while interactions require continued joint measurement. Methods. Observers adjusted the diffuse and specular components of one grayscale sphere to match the appearance of a second, reference, sphere. Spheres were rendered using the Debevec (SIGGRAPH 98) light fields and presented on a high-dynamic-range display. For symmetric matches, both spheres were rendered using the same light field. For asymmetric matches, a different light field was chosen for each sphere. Matches were collected for different combinations of reference sphere diffuse and specular reflectance. Surface roughness was held constant across the two spheres; the measurements were repeated for two levels of roughness. Performance was quantified by the slope of regression lines of matched versus reference reflectance. Results. Symmetric matches were close to veridical (average slopes 0.99 for the diffuse component; 1.00 specular). Asymmetric matches deviated systematically from veridical (average magnitude of slope deviation 0.06 diffuse; 0.31 specular), showing an effect of the light field on perceived lightness and glossiness. The matched diffuse component decreased with increasing reference sphere specular component. In contrast, the matched specular component was roughly independent of the reference sphere diffuse component. Matches were similar for the two levels of roughness (r = 0.94). Conclusions. The spatial structure of the illumination affects the perceived lightness and glossiness of 3D objects. Specular component matches were independent of the diffuse component, but not vice versa. Changing surface roughness had little effect.

Acknowledgement: This research was funded by NIH RO1 EY10016, P30 EY001583 and the Emil Aaltonen Foundation
31.17, 9:45 am
Roles of color and 3-D information in recognizing material changes
Ali Yoonessi 1 (ayoonessi@sunyopt.edu), Qasim Zaidi 1; 1 Graduate Program in Vision Science, State University of New York College of Optometry

Chemical and physical properties of objects provide them with specific surface patterns of colors and 3-D textures. Endogenous and exogenous forces alter these colors and patterns over time. The ability to identify these changes can have great utility in judging the state and history of objects. To evaluate the role of color and 3-D texture cues, we used calibrated images acquired from 15 different viewpoints of 26 real materials undergoing changes (courtesy of Shree Nayar and Jinwei Gu). Materials included fruits, foods, woods, minerals, metals, fabrics and papers, and changes included drying, burning, decaying, rusting, oxidizing and heating. Observers were asked to identify materials and types of changes for color and gray-scale images. Observers obtained 3-D information by varying the viewing angle of the image (deformation of the frame provided estimates of the slant and tilt of the material with respect to the observer). The images were shown in three sets of trials: one image of the surface, two images of the same surface at the beginning and end of a natural change, and image sequences of the time-varying appearance (the number of time samples varied from 10 to 36). The presence of color cues improved performance in all conditions, but most dramatically in the organic category. This may be because certain color patterns occur only in organic fruits and vegetables. Identification of materials improved if observers saw two states of the material, but the complete image sequence did not improve performance if images were restricted to fronto-parallel viewpoints. The ability to examine the material surface from several viewpoints improved performance, thus showing the importance of the 3-D micro-structure of the surface texture. The role of color in object recognition has been controversial, but this controversy may be resolved as color's role in material perception becomes clearer.

Acknowledgement: Grants EY07556 & EY13312 to QZ.

Perceptual learning: Mechanisms and models
Sunday, May 9, 8:15 - 10:00 am
Talk Session, Royal Ballroom 4-5
Moderator: Paul Schrater

31.21, 8:15 am
Learning shapes the spatiotemporal dynamics of visual processing
Zoe Kourtzi 1 (z.kourtzi@bham.ac.uk), Sheng Li 1,2, Stephen Mayhew 1; 1 School of Psychology, University of Birmingham, UK, 2 Department of Psychology, Peking University, China

Perceptual decision making has been suggested to engage a large network of sensory and frontoparietal areas in the human brain. However, relatively little is known about the role of learning in shaping processing in these regions at different stages of decision making, from sensory analysis to perceptual judgments. Here, we combine psychophysical and simultaneous EEG-fMRI measurements to investigate the spatiotemporal dynamics of learning to discriminate visual patterns. Observers were instructed to
discriminate between radial and concentric Glass pattern stimuli that were either embedded in different noise levels (coarse discrimination) or varied in the spiral angle between radial and concentric patterns (fine discrimination). Our behavioral results showed that training enhanced the observers' sensitivity in the coarse task, while changing the internal decision criterion (i.e. categorical boundary) in the fine task. Information theory-based analyses of EEG single trials revealed two temporal components that contained discriminative information between radial and concentric patterns: an early component (120 ms post-stimulus) associated with the analysis of visual stimuli, and a later component (240 ms) related to the global pattern discrimination. Further, using multivariate pattern classification analysis we tested whether we could predict learning-dependent changes in the observers' choices from fMRI signals related to these EEG components. We observed learning-dependent changes in prefrontal circuits at the later EEG component for both tasks. In contrast, learning-dependent modulation in higher occipitotemporal areas (LO, KO/LOS) differed between tasks: for the coarse discrimination, learning-dependent changes were associated with the first EEG component, and for the fine discrimination, with the later component. These findings demonstrate that learning shapes the dynamics of neural processing in visual areas in a task-dependent manner. In particular, learning shapes sensitivity in early detection and integration processes for coarse discrimination tasks, and later decision-criterion processes for fine categorical judgments.

Acknowledgement: BBSRC: D52199X, E027436

31.22, 8:30 am
Adaptive Sequencing in Perceptual Learning
Everett Mettler 1 (mettler@ucla.edu), Philip Kellman 1; 1 University of California, Los Angeles

Question: In real-world perceptual learning (PL) tasks, learners come to extract the distinguishing features of categories, enabling transfer to novel instances. This kind of learning can be accelerated by structured interventions involving a series of classification trials (e.g., Kellman, Massey & Son, 2009, TopiCS in Cognitive Science). Little is known about practice schedules that optimize PL, nor about their relation to laws of learning for factual items. Method: We tested an adaptive sequencing algorithm for PL that arranged spacing for categories as a function of the individual learner's trial-by-trial accuracy and reaction time. Participants learned to classify images from 12 butterfly genera. Each genus contained 9 exemplars from 3 species (Experiment 1) or 9 exemplars from 1 species (Experiment 2: low-variability categories). One of the 9 exemplars was not presented in training and was used as a test of novel transfer. Training trials were 2AFC, in which participants matched one of two images to a genus label. During training, participants received either: 1) random presentation, 2) adaptive sequencing, or 3) adaptive sequencing with sets of 3 sequential category exemplars (mini-blocks). Participants completed pre- and post-tests immediately before and after training, and an additional post-test after a 1-week delay. Results: Learning efficiency (accuracy per learning trials invested) was reliably greater for adaptive sequencing. Effects persisted over a 1-week delay and were larger for novel items. In Experiment 2, where the variability of category exemplars was lower, adaptive sequencing resulted in even greater learning efficiency gains.
Mini-blocks hurt efficiency in both experiments, especially for novel items. Conclusion: Results suggest that, across a range of category distributions, adaptive sequencing (but not blocking) increases the rate of learning and benefits novel transfer: key components of PL and fundamental aspects of learning in many domains.

Acknowledgement: Supported by US Dept. of Education, Institute for Education Sciences (IES) Grant R305H060070 to PK.

31.23, 8:45 am
Augmented Hebbian Learning Accounts for the Complex Pattern of Effects of Feedback in Perceptual Learning
Jiajuan Liu 1 (jiajuanl@usc.edu), Zhonglin Lu 1, Barbara Dosher 2; 1 Laboratory of Brain Processes (LOBES), University of Southern California, 2 Memory, Attention, and Perception Laboratory (MAPL), University of California, Irvine

A complex pattern of empirical results on the role of feedback in perceptual learning has emerged: whereas most perceptual learning studies employed trial-by-trial feedback, several studies documented significant perceptual learning with block, partial, or even no feedback, and no perceptual learning with false, random, manipulated block, and reversed feedback (Herzog & Fahle, 1997). Shibata et al. (2009) showed that arbitrary block feedback facilitated perceptual learning if it was more positive than the observer's actual performance. At high training accuracies, feedback is not necessary (Liu, Lu & Dosher, 2008), and significant learning was found in low training accuracy trials when they were mixed with high accuracy trials (Petrov, Dosher, & Lu, 2006; Liu, Lu & Dosher, 2009). We conducted a computational analysis of this complex pattern of empirical results on the role of feedback with the Augmented Hebbian Reweighting Model (AHRM; Petrov, Dosher & Lu, 2005), in which learning occurs exclusively through incremental Hebbian modification of the weights between representation units and the decision unit, by simulating existing feedback studies in the literature. The Hebbian learning algorithm incorporates external feedback, when present, simply as another input to the decision unit. Without feedback, the algorithm uses the observer's internal response to update the weights. Block feedback was used to modify the weights of the bias unit in the model. The simulation results are both qualitatively and quantitatively consistent with the data reported in the literature. Augmented Hebbian reweighting accounts for the complex pattern of results on the role of feedback in perceptual learning.
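The augmented Hebbian update at the core of the AHRM can be reduced to a few lines. The sketch below is a schematic simplification of Petrov, Dosher & Lu (2005): the representation-unit activations, learning rates, and bias-update rule are stand-ins chosen here for illustration, but the key property from 31.23 is preserved, namely that external feedback, when present, simply replaces the observer's own response as the teaching signal.

```python
import numpy as np

def ahrm_step(w, bias, activations, response, feedback=None,
              lr=0.002, lr_bias=0.02):
    """One augmented-Hebbian update of representation-to-decision weights.

    activations : responses of the representation units on this trial
    response    : observer's decision in {-1, +1}
    feedback    : external feedback in {-1, +1}, or None when absent;
                  in that case the internal response drives learning.
    """
    teacher = response if feedback is None else feedback
    w = w + lr * activations * teacher      # incremental Hebbian reweighting
    bias = bias - lr_bias * response        # schematic bias-unit correction
    return w, bias

# Schematic trial loop with made-up 'representation unit' activations.
rng = np.random.default_rng(2)
w, bias = rng.normal(0, 0.1, size=50), 0.0
for _ in range(1000):
    stim = rng.choice([-1.0, +1.0])                  # true category
    acts = stim * rng.normal(0.5, 1.0, size=50)      # noisy evidence
    response = float(np.sign(acts @ w + bias)) or 1.0
    w, bias = ahrm_step(w, bias, acts, response, feedback=stim)
```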
31.24, 9:00 am
Changes induced by attentional training: capacity increase vs. allocation changes
Hoon Choi 1 (hoonchoi@bu.edu), Takeo Watanabe 1; 1 Department of Psychology, Boston University

Attentional blink (AB) is a phenomenon in which identification of the second visual target (T2) is impaired in rapid serial visual presentation (RSVP) when it is presented within half a second after the appearance of the first target (T1). Even though AB has been thought to reflect the limited capacity of visual systems, we found that this robust phenomenon was removed after a single day of attentional training with a modified RSVP task in which T2 was spotlighted red while both T1 and all the distractors were white. Thereafter, AB was continually not observed, at least for a few months (Choi & Watanabe, 2009 VSS). How was attentional training able to overcome this kind of capacity limitation? Training could have increased the overall attentional capacity of our visual system, or it could have simply changed the allocation of attentional resources. To address this question, in the current study we measured AB before and after the training at various SOAs (stimulus onset asynchronies) between T1 and T2, while a 200 ms fixed SOA was employed during the training. If the training simply changed the allocation of attentional resources, a tradeoff (AB occurring at another SOA) should be observed. After 2 days of training with a spotlighted T2 at the fixed SOA, AB effects were eliminated at multiple SOAs that had shown AB effects prior to training. Training also increased performance in identifying T1. When T2 was presented immediately after T1 without any distractors, AB did not occur (lag-1 sparing) but performance in detecting T1 was poor. However, after training, performance in identifying T1 was significantly improved, with no change in performance in identifying T2. These results thus indicate that attentional training increases attentional capacity rather than changing the allocation of attentional resources.

Acknowledgement: This study was supported by NIH-NEI (R21 EY018925, R01 EY015980-04A2, & R01 EY019466)

31.25, 9:15 am
Accounting for speed-accuracy tradeoff in visual perceptual learning
Charles Liu 1 (ccyliu@bu.edu), Takeo Watanabe 1; 1 Department of Psychology, Boston University

In the perceptual learning literature, researchers typically focus on improvements in accuracy, such as proportion correct or d-prime. In contrast, researchers who investigate the learning, or practice, of cognitive skills focus on improvements in response times (RT). Here, we argue for the importance of accounting for both accuracy and RT in perceptual learning experiments, due to the phenomenon of speed-accuracy tradeoff: at a given level of discriminability, faster responses tend to produce more errors. A formal model of the decision process, such as the diffusion model (Ratcliff & McKoon, 2008), can explain the speed-accuracy tradeoff. In this model, a parameter known as the drift rate represents the perceptual strength of the stimulus: higher drift rates lead to more accurate and faster responses. We applied the diffusion model to analyze responses from a yes-no coherent motion detection task. Participants were trained for 5 days and completed 500 trials per day.
On each trial, participants were shown a field of moving dots for 200 ms within a 14-degree aperture. On "signal" trials, 15% of the dots moved coherently in a specific direction at a constant speed, while the remaining dots were replotted at random locations. On "noise" trials, all dots were replotted randomly. The results showed a significant range of individual differences in speed-accuracy tradeoff. When accuracy and RT measures were analyzed separately, inconsistent patterns of learning were observed across sessions. However, the diffusion model analysis indicated that drift rates improved consistently across sessions. These results suggest that part of the variability typically observed in perceptual learning experiments may be attributed to speed-accuracy tradeoff, and that drift rates offer a promising new index of perceptual learning. We discuss further advantages of diffusion modeling in perceptual learning, including the ability to dissociate decision time from non-decision time, and perceptual bias from response bias.

Acknowledgement: NIH-NEI R21 EY018925, NIH-NEI R01 EY015980-04A2, NIH-NEI R01 EY019466

31.26, 9:30 am
Learning internal models for motion extrapolation
Paul Schrater 1,2 (schrater@umn.edu), Nate Powell 3; 1 Department of Psychology, University of Minnesota, 2 Department of Computer Science and Engineering, University of Minnesota, 3 Department of Neuroscience, University of Minnesota

Prediction and extrapolation are key problems in many perceptual tasks, as exemplified by tracking object motion through occlusion: an object moves along a variable path before disappearing, and a prediction is made of where the object will reemerge at a specified distance beyond the point of occlusion. In general, predicting the trajectory of an object during occlusion requires an internal model of the object's motion to extrapolate future positions given the observed trajectory. In recent work (Fulvio, Maloney & Schrater, VSS 2009), we showed that people naturally adopt one of two kinds of generic motion extrapolation models in the absence of feedback (i.e. no learning): a constant acceleration model (producing quadratic extrapolation) or a constant velocity model (producing linear extrapolation). How such predictive models are learned is an open question. To address this question, we had subjects extrapolate the motion of a swarm of sample points generated by random walks from two different families of dynamics, one periodic and one quadratic. For both motion models, the ideal observer is a Kalman filter, and we computed normative learning predictions via a Bayesian ideal learner. Simulation results from the ideal learner predict that learning motion models will depend on several factors, including differential predictions of the motion models, consistency of the motion type across trials, and limited noise. To test these predictions, subjects performed a motion extrapolation task that involved positioning a "bucket" with a mouse to capture the object as it emerged from occlusion; feedback was given at the end of each trial. While subject performance was less than ideal, we provide clear evidence that they adapt their internal motion models toward the generative process in a manner consistent with statistical learning.

Acknowledgement: ONR N00014-07-1-0937
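The ideal observer named in 31.26 is a Kalman filter; extrapolation behind the occluder amounts to running its prediction step with no measurement updates. A minimal constant-velocity sketch, with all noise parameters assumed:

```python
import numpy as np

DT = 0.02                                     # frame duration (s), assumed
F = np.array([[1.0, DT], [0.0, 1.0]])         # constant-velocity dynamics
H = np.array([[1.0, 0.0]])                    # position is observed directly
Q = 1e-4 * np.eye(2)                          # process noise (assumed)
R = np.array([[1e-2]])                        # measurement noise (assumed)

def kalman_update(x, P, z):
    """One predict + measurement-update step on the visible trajectory."""
    x, P = F @ x, F @ P @ F.T + Q             # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

def extrapolate(x, P, n_frames):
    """Prediction-only steps, i.e. tracking behind the occluder."""
    for _ in range(n_frames):
        x, P = F @ x, F @ P @ F.T + Q
    return x, P

# Track a visible trajectory, then predict where it re-emerges.
rng = np.random.default_rng(3)
x, P = np.zeros(2), np.eye(2)
for t in range(100):                          # visible portion
    true_pos = 0.5 * t * DT                   # 0.5 deg/s drift
    x, P = kalman_update(x, P, np.array([true_pos + rng.normal(0, 0.1)]))
x, P = extrapolate(x, P, 50)                  # occluded portion
print("predicted re-emergence position:", x[0])
```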
31.27, 9:45 am
Attention mediates learned perceptual bias for bistable stimuli
Benjamin T. Backus 1 (bbackus@sunyopt.edu), Stuart Fuller 1; 1 Graduate Program in Vision Science, SUNY College of Optometry

Long-lasting biases in the appearance of ambiguously rotating stimuli can be induced by stereo-disambiguated training stimuli (Haijiang et al., 2006). Does this learning depend on visual attention? Methods: Observers (N=13) participated on three consecutive days. Each session consisted of 480 trials. Each trial contained a 2-sec movie of a rotating Necker cube. Observers fixated a central square, verified by a gaze-tracking camera. An arrow (750 ms) at fixation indicated one of four possible task locations. Two locations were assigned an "attended rotation direction" (ARD) of clockwise and two locations were assigned an ARD of counter-clockwise. On Test trials (128/session) an ambiguous cube appeared at one location. On Training trials (352/session), stereo-disambiguated cubes appeared at all four locations. Training stimuli always rotated according to the ARD when observers attended to them (25% of Training trials). In Experiment 1, equal numbers of ARD and anti-ARD stimuli were shown at each location. Thus, 75% of Training trials at a given location were unattended, of which 1/3 had ARD and 2/3 had anti-ARD. In Experiment 2, the unattended cubes always rotated anti-ARD. On Day 3, the ARDs were reversed at all locations (both experiments) to assess long-term learning. Results: In Experiment 1, 81±6% (mean ± SE across observers) of Test trials agreed with the ARD on Day 1, increasing by 6±3% (to 87±6%) on Day 2. These learned biases were robust, dropping by only 9±2% (to 77±6%) with reverse training on Day 3. In Experiment 2, however, only 41±3% of Test trials agreed with the ARD on Day 1, increasing to 44±2% on Day 2, and dropping 12±5% (to 32±5%) on Day 3. Conclusions: Long-term bias for 3D rotation can be learned with or without attention, but 2-3 unattended trials are needed during training to counteract a single attended trial.

Acknowledgement: NIH R01-EY-013988, HFSP RPG 3/2006, NSF BCS-0810944

Development: Mechanisms
Sunday, May 9, 11:00 - 12:45 pm
Talk Session, Royal Ballroom 1-3
Moderator: Daniel Dilks

32.11, 11:00 am
Components of attention in normal and atypical development
Janette Atkinson 1 (j.atkinson@ucl.ac.uk), Oliver Braddick 2, Kate Breckenridge 1; 1 Visual Development Unit, Dept of Developmental Science, University College London, UK, 2 Dept of Experimental Psychology, Oxford University, UK

Neuropsychological and neuroimaging studies of attention indicate that the human brain contains distinct networks for selective attention, sustained attention, and attentional control (executive function = EF). However, attention test batteries designed to analyse these separate attention functions have not hitherto been available for developmental ages less than about 6 years. We have designed, pilot-tested, and validated an Early Childhood Attention Battery (ECAB) whose subtests can be understood by children aged between 3-6 years. Normative data on 156 children in this age range showed that a three-factor model based on the hypothesised distinct attention networks fitted the data well for children over 4.5 years, but younger children's data were equally well fit by a two-factor model with substantial cross-loading.
These results suggest that the differentiation of attention networks emerges over the tested age range, perhaps because more general constraints limit performance in the younger children.
We have used the ECAB to analyse and compare groups of 32 children each with two developmental disorders showing distinct cognitive profiles, Williams Syndrome (WS) and Down Syndrome (DS), with developmental ages too low for other attention tests (e.g. TEA-Ch). In relation to test norms for their mental age, the results provide evidence for syndrome-specific patterns of impairment. Both syndrome groups performed relatively well on tests of sustained attention and poorly on aspects of selective attention and EF. The DS group showed a specific strength in auditory sustained attention, whilst the WS group showed a particular deficit in visuo-spatial EF tasks.
We discuss these results in relation to the interaction of attention mechanisms with the dorsal cortical stream, neuroimaging [Meyer-Lindenberg et al, Neuron, 2004] and behavioural [Atkinson et al, NeuroReport 1997; Dev Neuropsychol, 2003] evidence for dorsal stream deficits in WS, and consider how they relate to the broader concept of "dorsal stream vulnerability" in developmental disorders.

Acknowledgement: Research Grants G0601007 from the Medical Research Council & RES-000-22-2659 from the Economic & Social Research Council

32.12, 11:15 am
The Convexity Assumption: Infants Use Knowledge of Objects to Interpret Static Monocular Information by 5 Months
Sherryse Corrow 1 (sherryse.leanna@gmail.com), Al Yonas 1, Carl Granrud 2; 1 Child Psychology, University of Minnesota, 2 Psychological Sciences, University of Northern Colorado

The adult visual system uses top-down information to interpret ambiguous images. When the 2D contours of a cube are presented to the retina, for example, adults generally perceive a 3D cube. In the absence of information to the contrary, the adult visual system assumes that objects are convex. Our question is: when do infants begin to form and use such assumptions to interpret visual input? We presented a wire half-cube, with its vertex pointed away, to 5- and 7-month-old infants (n = 17 and 20, respectively), and observed the infants' reaching behavior under monocular and binocular viewing conditions. For adults, the cube's vertex appears closer than the outer edges when the display is viewed monocularly, but the cube's actual layout is perceived when viewed binocularly.
In the monocular condition, the infants in both age groups reached significantly more often to the central region of the display than to the outer edges (5 months, p=0.009; 7 months, p=0.016). Furthermore, infants reached more often to the center of the display in the monocular condition than in the binocular condition (5 months, p


deprivation had different effects on the two types of motion perception. Z-scores based on age-appropriate norms indicated a large deficit in processing both speeds of global motion (mean Z-score = -4.21 and -5.85 for fast and slow speeds, ps0.70). The adverse effect of visual deprivation was equivalent at the two speeds of global motion (p>0.20) and greater than the (non-significant) effect on biological motion (ps

red > blue > green > pink > grey. All colors were approximately isoluminant and randomly assigned to the stimuli. The animals were required to select the target, sustain attention to it, and detect a transient change in its motion direction. We recorded the activity of 222 neurons, of which 147 (66%) showed an increase in activity during task trials relative to baseline. Out of the 147, 68% reliably encoded the position of the target. This latter group was subdivided into three distinct populations: one group encoded target position transiently, starting ~150 ms after color cue onset (21%, 'selection neurons'); a second group signaled target position in a sustained manner, starting ~350 ms after cue onset (42%, sustained attention neurons); and a third group combined features of the aforementioned groups (37%). Using ROC (receiver operating characteristic) analysis, we found that these neurons effectively discriminated target and distractor through their firing rate as early as ~150 ms after cue onset. Moreover, immediately following color cue onset, discrimination occurred earlier for greater distances between target and distractor position in the color-rank scale. This finding follows the animals' behavioral performance: the proportion of correct discriminations was higher the greater the distance between target and distractor in the color scale. Overall, our results indicate that different populations of dlPFC neurons may be involved in target selection and sustained attention, and that the neurometric performance of these units closely follows that of the animals.

Acknowledgement: CIHR
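The ROC analysis in the preceding abstract treats each neuron's firing rates as a classifier of target versus distractor position. The area under the ROC curve can be computed from the two rate distributions with the standard pairwise-comparison shortcut; the simulated spike counts below are invented for illustration.

```python
import numpy as np

def roc_auc(rates_target, rates_distractor):
    """Area under the ROC curve for discriminating two conditions from
    single-neuron firing rates: the probability that a random target-
    trial rate exceeds a random distractor-trial rate (ties count half)."""
    t = np.asarray(rates_target)[:, None]
    d = np.asarray(rates_distractor)[None, :]
    return (t > d).mean() + 0.5 * (t == d).mean()

# Simulated spike counts: the neuron fires more when the target falls
# in its response field.
rng = np.random.default_rng(4)
target = rng.poisson(12, size=200)       # spikes per epoch, target trials
distractor = rng.poisson(8, size=200)    # distractor trials
print(roc_auc(target, distractor))       # ~0.8: reliable discrimination
```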
32.23, 11:30 am
A neural pooling rule for attentional selection in human visual cortex
Franco Pestilli 1,2,3 (fp2190@columbia.edu), Marisa Carrasco 2, David Heeger 2, Justin Gardner 1,2; 1 Gardner Research Unit, RIKEN Brain Science Institute, 2-1 Hirosawa, Wako, Saitama 315-0198, Japan, 2 Department of Psychology and Center for Neural Science, New York University, New York, NY 10003, USA, 3 Department of Neuroscience, Columbia University, 1051 Riverside Drive, Unit 87, New York, NY 10032, USA

To characterize the sensory and decisional processes enabling attention to enhance behavioral performance, human observers performed contrast discrimination judgments following two types of attentional cues, while cortical activity was measured with fMRI. Four sinusoidal gratings were presented, one in each visual quadrant, at 8 different "pedestal" contrasts. Stimuli were shown in two 600-ms intervals separated by a 200-ms blank interval, one of which (randomized across trials) had a near-threshold contrast increase across the 2 intervals. After stimulus offset, an arrow at fixation indicated the target location. Observers maintained central fixation and pressed 1 of 2 buttons to indicate the interval with the higher contrast. The three non-target locations had different contrasts that remained unchanged across intervals. Half the trials were preceded by a focal attention cue (an arrow at fixation indicating the target location), and half were preceded by a distributed cue (4 arrows indicating the four possible target locations). fMRI response amplitudes were measured in each of several visual cortical areas, separately for each visual quadrant, pedestal contrast, and attentional condition, and then combined across quadrants.
Robust increases in fMRI responses and behavioral performance improvements were observed with focal versus distributed attention cues. The changes in fMRI responses could account for the improved behavioral performance only by assuming that focal cues caused a 4-fold noise reduction. Whereas sensory noise reduction could account for part of this effect, neither our data nor previous studies support the full 4x reduction we found.
Rather, the data were well fit by a model in which most of the ostensible noise reduction was attributed to the selection and pooling of sensory signals into a decision, utilizing a max-pooling decision rule. We conclude that increases in neural activity with attention in early visual cortex enhance performance by selecting relevant sensory signals for the decision.

Acknowledgement: 5T32-MH05174 & Fellowship by The Italian Academy for Advanced Studies in America to FP; R01-MH69880 to DH; R01-EY016200 to MC; Burroughs Wellcome Fund Career Award in Biomedical Sciences to JG
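The max-pooling decision rule fitted in 32.23 compares the largest response across the monitored locations in each interval; focal and distributed cues then differ only in how many locations enter the max. A schematic simulation under assumed unit-variance Gaussian noise shows why pooling over fewer locations improves accuracy:

```python
import numpy as np

def choose_interval(responses_1, responses_2, attended):
    """Max-pooling rule: pick the interval whose largest response
    across the attended locations is bigger."""
    return 1 if responses_1[attended].max() >= responses_2[attended].max() else 2

rng = np.random.default_rng(5)
n_trials, d_signal = 10_000, 1.0
correct = {"focal": 0, "distributed": 0}
for _ in range(n_trials):
    base = rng.normal(0, 1, size=4)          # interval without the increment
    inc = rng.normal(0, 1, size=4)
    inc[0] += d_signal                       # increment at location 0, interval 1
    correct["focal"] += choose_interval(inc, base, [0]) == 1
    correct["distributed"] += choose_interval(inc, base, [0, 1, 2, 3]) == 1
for cue, n in correct.items():
    print(cue, n / n_trials)   # pooling fewer locations -> higher accuracy
```

Under this rule, much of the apparent "noise reduction" with focal cueing falls out of the decision stage itself, without requiring a change in sensory noise.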
32.24, 11:45 am
Gain in the most informative sensory neurons predicts task performance
Miranda Scolari 1 (mscolari@ucsd.edu), John Serences 1; 1 University of California, San Diego

Traditional accounts hold that selective attention facilitates perception by increasing the gain of sensory neurons that are maximally responsive to task-relevant stimulus features. In contrast, recent theoretical and empirical work suggests that attention strives to maximize performance on the current perceptual task, even if this means applying sensory gain to neurons tuned away from the relevant features (Navalpakkam and Itti, 2007; Scolari and Serences, 2009). For example, when discriminating a 90°-oriented line from a set of distractors oriented at 92°, sensory gain should be applied to neurons tuned to flanking orientations, because they undergo a larger change in firing rate in response to the target and distractors. Given the high density of orientation-selective neurons in primary visual cortex, we hypothesized that such "off-channel" gain in V1 should predict performance on a difficult orientation discrimination task. We used fMRI and voxel tuning functions to determine whether correct trials were associated with more off-channel sensory gain than incorrect trials. Subjects completed a 2AFC task in which a grating (the sample stimulus) was presented at one of 10 possible orientations for 2 s, followed by a 400-ms delay, and then a second grating that was rotated clockwise or counterclockwise from the sample (the size of the offset was determined on a subject-by-subject basis and ranged from 1-4.75°). Subjects exhibited a larger BOLD response in V1 in the most informative voxels (e.g., ±36° offset from the sample) on correct trials than on incorrect trials; this pattern was not observed in other visual areas (V2v, V3v, and V4v). These results indicate that performance on demanding perceptual tasks is not always predicted by the gain of the maximally responsive sensory neurons. Instead, the magnitude of gain in the most informative neurons predicts perceptual acuity, even if these neurons are tuned away from the relevant feature.

Acknowledgement: National Institutes of Health Grant R21-MH083902 (J.T.S.)

32.25, 12:00 pm
Feature-based attention in the human thalamus and superior colliculus
Keith A. Schneider 1 (schneiderkei@missouri.edu); 1 Department of Psychological Sciences, University of Missouri–Columbia

Feature-based attention is known to enhance the responses of cortical neurons tuned to the attended feature, but its control mechanisms remain poorly understood relative to those of spatial attention, which involve a network of cortical and subcortical structures. Subcortical structures are thought to serve as important control points in the flow of information, but while feature-based attention has been observed in the cortex, it is not known whether its effects can also be observed in the subcortex. We therefore functionally imaged the human subcortical visual nuclei while subjects detected changes in separate fields of moving or colored dots. We found that when the fields were disjoint, spatially attending to one field enhanced hemodynamic responses in the superior colliculus (SC), lateral geniculate nucleus (LGN) and two retinotopic pulvinar nuclei. When the two dot fields were spatially coincident, feature-based attention to the moving versus colored dots enhanced the responses of the pulvinar and of voxels located along the ventromedial surface of the LGN, corresponding to the location of the magnocellular layers, while voxels along the dorsolateral surface of the LGN, corresponding to the location of the parvocellular layers, showed the opposite effect; the SC was inconsistently modulated among subjects. These feature-based attentional modulations could not be explained by differential allocations of spatial attention. All of the subcortical nuclei demonstrated enhancement in hemodynamic activity preceding the attentional switches between the features; however, suppression was observed in voxels along the lateral edge of the LGN, perhaps corresponding to the thalamic reticular nucleus, and in voxels in what appeared to be the superficial layers of the SC. We conclude that feature-based attention operates throughout the visual system via modulation of activity in neurons that encode the attended feature.

32.26, 12:15 pm
The flipside of object individuation: Neural representation for object ensembles
Jonathan S. Cant 1 (jcant@wjh.harvard.edu), Yaoda Xu 1; 1 Vision Science Laboratory, Psychology Department, Harvard University

Imagine you are in a supermarket looking for apples. You need first to find the right fruit pile by representing collections of objects without encoding each object in the collection in great detail. After you locate the apple pile, you can proceed to pick out the best-looking apples by encoding individual objects with their detailed features. While a huge amount of research effort has been dedicated to understanding how we represent specific objects, our knowledge of the cognitive and neural mechanisms underlying object ensemble representation is still incomplete. Using the fMRI-adaptation paradigm, we showed participants a sequence of three images that were either all identical, all different, or shared object ensemble statistics (e.g. three different non-overlapping snapshots of the same apple pile). Using an independent localizer approach, we found that the lateral occipital complex (LOC) showed a significant release from adaptation (i.e. a rise in activation compared to the 'identical' condition) in both the 'shared' and 'different' conditions (which did not differ from each other). In contrast, ventral medial visual cortex, including areas in the collateral sulcus and the parahippocampal place area (PPA), showed statistically equivalent levels of repetition attenuation (i.e. a reduction in activation compared to the 'different' condition) in both the 'identical' and 'shared' conditions (which did not differ). These results indicate that while the LOC is involved in encoding specific object features (consistent with previous findings), the ventral medial visual cortex may be involved in representing ensemble statistics from object collections. Notably, although our stimuli contained a minimal amount of 3D scene information, the PPA exhibited adaptation when ensemble statistics were repeated.
This suggests that the PPA may contribute to scene representation by extracting ensemble statistics rather than the 3D layout of a scene.

Acknowledgement: This research was supported by an NSERC post-doctoral fellowship to J.S.C., and NSF grant 0855112 to Y.X.

32.27, 12:30 pm
Neural signatures of shape discrimination decisions at threshold
Justin Ales 1 (justin.ales@gmail.com), Lawrence Appelbaum 2, Anthony Norcia 1; 1 Smith-Kettlewell Eye Research Institute, 2 Center for Cognitive Neuroscience, Duke University

The lateral occipital cortex (LOC) is known to activate selectively to intact objects versus scrambled controls, to be selective for figure-ground relationships, and to exhibit at least some degree of invariance for size and position. Because of these features, it is considered to be a crucial part of the object recognition pathway. Here we determined whether the LOC is involved in shape discriminations. High-density EEG was recorded while subjects performed a threshold-level shape discrimination task on figures segmented by either phase or orientation cues. Our paradigm allowed us to separate responses due to the figural cue from the responses corresponding to the discrimination of shape. The appearance or disappearance of a figure region in the stimuli generated robust visual evoked potentials localized throughout retinotopic cortex. Contrasting responses from trials containing a shape change (hits) with trials in which no change occurred (correct rejects) revealed activity preceding the subject's response in the LOC that was selective for the presence of the target shape change. Task-dependent activity that was time-locked to the subjects' response was found in frontal cortex. Activity in the LOC was determined to be related to shape discrimination for several reasons: shape-selective responses were silenced when subjects viewed identical stimuli but their attention was directed away from the shapes to a demanding letter discrimination task; shape selectivity was present across all cues used to define the figure; and shape-selective responses were present under conditions where stimulus-locked activity was absent. These results indicate that decision-related activity is present in the LOC when subjects are engaged in threshold-level shape discriminations.

Acknowledgement: RPB Disney award, NEI R01EY06579, R01EY018875-01S109, P30EY006883-24, C.V. Starr Fellowship


Sunday Morning Posters

Spatial vision: Crowding and eccentricity
Royal Ballroom 6-8, Boards 301–317
Sunday, May 9, 8:30 - 12:30 pm

33.301 The mechanism of word crowding
Deyue Yu 1 (dion@berkeley.edu), Melanie Akau 1, Susana Chung 1; 1 School of Optometry, University of California, Berkeley

Word reading speed in peripheral vision is slower when words are in close proximity to other words (Chung, 2004). This word crowding effect could arise as a consequence of interactions between low-level letter features of words, or of interactions between high-level holistic representations of words. We evaluated these two competing hypotheses by examining how word crowding changes for five configurations of flanking words: the control condition (flanking words oriented upright); vertical-flip (each flanking word was the up-down mirror image of the original); horizontal-flip (each flanking word was the left-right mirror image of the original); letter-flip (each letter of the flanking word was the left-right mirror image of the original); and scrambled (the letters in each flanking word were scrambled in order). The low-level feature interaction hypothesis predicts a similar word crowding effect for all the different flanker configurations, while the high-level representation hypothesis predicts less word crowding for all the alternative flanker conditions, compared with the control condition. Six young adults read sequences of six random four-letter words presented one at a time, using the rapid serial visual presentation paradigm. Words (2° print size) were presented at 10° in the nasal field of the left eye of each observer. For each flanker configuration, reading speed was determined when the target words were presented alone, or flanked above or below by other words at one of four vertical word spacings (0.7×, 1×, 1.5×, and 2× the standard line spacing). Across observers, the reading speed vs. spacing functions were very similar for all flanker configurations. Reading speed was unaffected by flankers until the word spacing was reduced to 0.7× (p


VSS 2010 AbstractsSunday Morning Postersnance-modulated C (as for standard luminance letters; Simmers et al, 1999).For these blur levels, contrast-modulated Cs are more subject to contourinteraction effects than luminance-modulated Cs. This could be a result oflarger integration areas for contrast-modulated stimuli and a differentialeffect of blur on contrast and luminance modulation sensitivity functions.Acknowledgement: NA and MIH hold PhD Scholarships funded by the Government ofMalaysia.33.305 Effects of contrast on foveal acuity and contour interactionusing luminance and contrast modulated CsMonika A Formankiewicz 1 (monika.formankiewicz@anglia.ac.uk), M Izzuddin Hairol1, Sarah J Waugh 1 ; 1 Anglia <strong>Vision</strong> Research, Anglia Ruskin University, CambridgeCB1 1PT, UKThe effects of flanker contrast on contour interaction have been investigatedfor large letters (e.g. Chung et al, 2001). However, except from clinicalstudies of crowding using fixed letter separation (eg. Kothe and Regan,1990; Simmers at al, 1999), it is not known how foveal acuity thresholds andcontour interaction are affected by decreased target and flanker contrast.We measured gap resolution thresholds for a luminance-modulated andcontrast-modulated square C in isolation and in the presence of flankingbars. Stimuli were created by adding or multiplying binary noise to asquare-wave signal. The modulation depths of the target and the flankerswere either (1) equal and changed in unison or (2) different, in a ratio of~0.5 to ~1.5.Gap resolution thresholds increase with a decrease in contrast (or modulationdepth), at a slightly faster rate for luminance-modulated than contrast-modulatedCs. For both types, the peak magnitude and extent of theinteraction decreases as the contrast of the C and the bars is reduced. Therelative peak magnitude is greatest when the flankers abut the target, saturatesat high contrasts, and at saturation is higher for contrast-modulated(~logMAR 0.2) than for luminance-modulated (~logMAR 0.1) stimuli.When the contrast of the flankers is higher than that of the target, the magnitudeof the interaction is greater than when the flankers are of similar orlower contrast.The reduction of contour interaction with lowered contrast is in agreementwith the findings of clinical studies that used noiseless letters. Thelack of scaling of the extent of interaction with resolution threshold (lettersize) suggests that contrast masking cannot fully explain foveal contourinteractions. The effects of relative contrasts show that, like the interactionobserved with large letters in peripheral vision (Chung et al, 2001), fovealcontour interaction is not grouping by contrast.Acknowledgement: MIH holds a PhD Scholarship funded by the Government of Malaysia33.306 Crowding and Multiple Magnification TheoryRick Gurnsey 1 (Rick.Gurnsey@concordia.ca), Gabrielle Roddy 1 , Wael Chanab 1 ;1 Department of Psychology, Concordia University, Montreal, QC, CanadaBackground: Although uniform stimulus magnification often compensatesfor eccentricity dependent sensitivity loss, crowding is frequently cited asa refutation of magnification theory. However, if one assumes multiplesources of eccentricity-dependent sensitivity loss then changes in crowdingwith eccentricity may be characterized simply in terms of non-uniformmagnifications with eccentricity (Latham & Whitaker, 1996, OPO). 
Method: In three experiments we measured size thresholds for relative target/crowder separations of 1.25, 1.70, 2.32, 3.16, 4.31, 5.87, 8.00 and ∞ times target size. The sizes of target and crowders were varied uniformly to find the size eliciting threshold-level performance. Thresholds were measured at 0, 1, 2, 4, 8 and 16° in the lower visual field. The three tasks were grating orientation discrimination (Latham & Whitaker, 1996), T orientation discrimination (Tripathy & Cavanagh, 2002, VR) and letter identification (Pelli et al., 2007, JoV). We plotted target size at threshold as a function of separation at threshold. Results: In all cases, size thresholds at fixation were independent of target/crowder separation. In other words, there was no effect of crowding at fixation. At all other eccentricities, log(threshold size) decreased roughly linearly as log(separation) increased, until an asymptote was reached, at which point size thresholds were independent of separation. However, the rate at which threshold size decreased with separation increased with eccentricity, and in some cases reached a point at which a critical separation was achieved; i.e., separation at threshold was independent of target size (Pelli, 2008, COIN). Conclusions: Although there are systematic changes in the size/separation curves from fovea to periphery, there seems to be a qualitative change between fixation and periphery: at fixation, size thresholds are separation independent, and at the furthest eccentricities, separation thresholds approach size independence. Contrary to our expectations, multiple linear magnifications seem inadequate to characterize the data.

Acknowledgement: NSERC

33.307 Position and orientation are bound in crowding
John Greenwood 1 (john.greenwood@ucl.ac.uk), Peter Bex 2, Steven Dakin 1; 1 Institute of Ophthalmology, University College London, 2 Schepens Eye Research Institute, Harvard Medical School

The recognition of complex objects, like letters, requires the encoding of multiple visual attributes, such as orientation and position, and their binding into features. While the encoding of individual attributes is impaired by clutter, a process known as crowding, it is unclear whether crowding is driven by interactions between these attributes or between features (i.e. conjunctions of attributes). To test this idea, we investigated the interaction between crowding effects on two feature attributes: position and orientation. Stimuli were crosses composed of two near-orthogonal lines, presented 15 deg in the upper visual field. When crowded, the target cross was flanked to the left and right by flanker crosses (with 2.25 deg separation). Observers were required to judge either the orientation (clockwise/counterclockwise) or the position (up/down relative to the stimulus centre) of the near-horizontal target feature. We first confirmed the effects of crowding on the discrimination of orientation and position separately. For each feature attribute, crowding induces both threshold elevation (i.e. impaired discrimination) and a systematic bias in target identification towards the identity of the flanker features (i.e. responses are biased towards either the flanker feature positions or orientations). We next examined the interaction between these crowding effects by requiring conjoint judgements of both the orientation and position of the near-horizontal target feature.
33.308 Visual acuity and contour interaction for luminance-modulated and contrast-modulated Cs in normal foveal vision
M Izzuddin Hairol 1 (i.hairol@anglia.ac.uk), Monika A Formankiewicz 1, Sarah J Waugh 1; 1 Anglia Ruskin University
Interactions between suprathreshold luminance-modulated and contrast-modulated stimuli have been found for letter contrast detection (Chung et al., 2007) and Gabor contrast matching (Ellemberg et al., 2004); however, it is not known how they interact in the classical visual acuity task (Flom et al., 1963). Luminance-modulated and contrast-modulated square Cs and bars were constructed by adding or multiplying square-wave modulating signals to dynamic binary noise. C gap acuity thresholds were obtained using a 4AFC paradigm with the method of constant stimuli. Bar separations varied from abutting to two letter widths. Cs were either equated in their visibilities (~3.5x above contrast thresholds) or presented at maximum producible modulations. Threshold versus separation data were fit with a Gaussian to objectively determine the magnitude and extent of contour interaction. When the C and bars are of the same type, typical patterns of contour interaction are observed. Acuity threshold elevation for abutting bars is significantly larger for the contrast-modulated C (0.2 logMAR) than for the luminance-modulated C (0.1 logMAR) (p < .05), although the extent of the interaction is similar for the two types (p > 0.1). When the luminance-modulated C is flanked by contrast-modulated bars (212), acuity is adversely affected in a similar fashion to same-type contour interaction, but with a peak magnitude of 0.15 logMAR. Reduced bar visibility significantly reduces the effect (p < .05).
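The Gaussian fit that 33.308 uses to summarize contour interaction can be sketched as follows. The threshold values and the Gaussian-plus-baseline parameterization here are illustrative assumptions, not the authors' data; the fitted amplitude plays the role of peak magnitude and the space constant the role of extent:

```python
import numpy as np
from scipy.optimize import curve_fit

# Flanking-bar separation (letter widths) and acuity thresholds (logMAR);
# numbers are invented for illustration.
separation = np.array([0.0, 0.25, 0.5, 1.0, 1.5, 2.0])
threshold = np.array([0.22, 0.15, 0.08, 0.03, 0.01, 0.00])

def contour_interaction(s, magnitude, extent, baseline):
    """Gaussian elevation peaking at abutting bars (s = 0) over a baseline."""
    return baseline + magnitude * np.exp(-(s / extent) ** 2)

(magnitude, extent, baseline), _ = curve_fit(
    contour_interaction, separation, threshold, p0=[0.2, 0.5, 0.0])
print(f"magnitude = {magnitude:.2f} logMAR, extent = {extent:.2f} letter widths")
```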


33.309 Size illusion and crowding
Jungang Qin 1 (jungang.qin@usc.edu), Bosco S. Tjan 1,2; 1 Department of Psychology, 2 Neuroscience Graduate Program, University of Southern California
Crowding represents an essential bottleneck for form vision in the peripheral field (Levi, 2008). The severity of crowding depends on the center-to-center spacing between target and flankers, and the critical spacing corresponds to a 6 mm separation on V1 cortex (Pelli, 2008). The perspective cue of a scene can induce changes in both the perceived object size and spacing. An increase in perceived size leads to a corresponding increase in the spatial extent of fMRI activation in V1 (Murray et al., 2006). Do perceived object size and spacing affect crowding? We examined this question by presenting letters against a computer-rendered hallway scene with a strong perspective cue. Letters of constant size appeared larger when presented at 3/8 the screen height measured from the top of the display (“far” condition) than at 6/8 the screen height (“near” condition). Contrast thresholds were measured for identifying a target letter when it was presented either alone or flanked by other letters. The letters were presented either at the “near” or “far” location, each with its own fixation to ensure a target eccentricity of 10°. In the control conditions, the hallway scene was replaced with a local patch of the scene covering the “near” or “far” letter area with a 1.43 x-height margin. Contrast thresholds were normalized with respect to the corresponding local-patch conditions. We found that the normalized threshold elevation due to crowding was reduced by an average of 0.24 log units at the “near” position (where letters appeared smaller) compared to the “far” position (where letters appeared larger). This counterintuitive finding suggests that the smaller perceived size might induce a narrower distribution of spatial attention or reduced positional uncertainty, which in turn reduces crowding. Acknowledgement: NIH/NEI R01-EY016093, R01-EY017707

33.310 Effects of Kanizsa’s illusory contours on crowding strength
Siu-Fung Lau 1 (jonathan01hk@gmail.com), Sing-Hang Cheung 1; 1 Department of Psychology, the University of Hong Kong
Purpose: Crowding is the detrimental effect of surrounding objects on the identification of a target object. The cortical locus of crowding remains controversial. Processing of Kanizsa’s illusory contours has been shown to start at V3. Here we attempt to localize crowding relative to V3 by asking whether illusory contours influence crowding. Methods: Five normally sighted young adults participated. The target stimulus was a Kanizsa-square inducer (0.5º diameter) presented at 4.5º eccentricity in the lower right visual field for 200 ms. Subjects identified the target orientation in a 4AFC task. In the 4-flanker condition, 3 inducers were positioned to form a Kanizsa square with targets of 1 orientation. Center-to-center distance between the target and the lower right inducer was 1º. The fourth flanker was another inducer placed 1º radially from the target on the fixation side. In the 2-flanker condition, the upper right and lower left inducers were removed. The discrimination index (d′) for each target orientation was calculated from 60 trials for both conditions. Results: Illusory contours (IC) could be perceived in only 1 of the 4 target orientations in the 4-flanker condition. The 4-flanker condition had lower d′ than the 2-flanker condition for all target orientations, indicating stronger crowding. Average differences in d′ between the 4-flanker and 2-flanker conditions were 0.50 ± 0.16 for IC-present and 0.72 ± 0.16 for IC-absent trials. IC-present trials yielded a significantly smaller d′ difference than IC-absent trials (t(4) = 2.60, p = .03, one-tailed), indicating higher resistance to crowding from the 2 additional flankers. Conclusion: The effect of additional flankers was reduced by the formation of illusory contours. The results suggest that illusory contour processing occurs before crowding, and thus that the cortical locus of crowding is after V3. Preliminary results from a follow-up experiment with classification images support the use of illusory contours in our task.
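The discrimination index in 33.310 comes from a 4AFC task. Under the standard signal-detection model for m-alternative forced choice, proportion correct is the probability that the signal sample exceeds all m − 1 noise samples, and d′ can be obtained by inverting that relation numerically. A generic sketch (not necessarily the authors' exact estimator):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.stats import norm

def pc_mafc(dprime, m=4):
    """P(correct) in m-AFC: the signal sample beats all m-1 noise samples."""
    integrand = lambda x: norm.pdf(x - dprime) * norm.cdf(x) ** (m - 1)
    return quad(integrand, -10, 10)[0]

def dprime_from_pc(pc, m=4):
    """Invert the m-AFC psychometric relation for d' (pc must exceed 1/m)."""
    return brentq(lambda d: pc_mafc(d, m) - pc, 0.0, 6.0)

print(f"d' for 75% correct in 4AFC: {dprime_from_pc(0.75):.2f}")
```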
33.311 Temporal crowding with normal observers and its interplay with spatial crowding
Einat Rashal 1 (einatrashal@gmail.com), Yaffa Yeshurun 1; 1 University of Haifa
Spatial crowding refers to cases in which a target is flanked by other stimuli presented simultaneously with the target, and temporal crowding refers to cases in which the target is surrounded in time by other stimuli (i.e., stimuli that appear before and after the target). Recently, Bonneh, Sagi and Polat (2007) demonstrated that temporal and spatial crowding in amblyopic observers are interrelated. However, only weak crowding was found for their normal group, possibly due to foveal presentation. This study examined whether similar relations between temporal and spatial crowding can be found for normal observers with peripheral presentation. To measure both temporal and spatial crowding simultaneously, a rapid sequence of 3 displays was presented on a given trial at 9º of eccentricity. Each display included 3 letters. In one of these displays, the central letter was an oriented T, and the observers had to indicate the T’s orientation. The spatial distance between the T and its flankers and the temporal spacing (ISI) between the displays were systematically manipulated to determine the extent of spatial and temporal crowding when measured concurrently. As expected, we found spatial crowding: accuracy improved as the target-flankers spacing increased. This spatial crowding emerged regardless of the target’s temporal position, but it significantly interacted with ISI: the effect was more pronounced at shorter ISIs. We also found temporal crowding: accuracy increased as the ISI between the displays increased, though this effect was considerably smaller than the spatial effect. Interestingly, the extent of this temporal crowding was larger for smaller target-flankers spacings and was more pronounced when the target appeared at the first temporal position. Hence, when the stimuli are presented at peripheral locations, both spatial and temporal crowding can be demonstrated with normal observers. Moreover, as with amblyopic observers, these two types of crowding interact.

33.312 Symmetry and Crowding Across the Visual Field
Gabrielle Roddy 1 (gabsterod@yahoo.com), Wael Chanab 1, Rick Gurnsey 1; 1 Department of Psychology, Concordia University, Montreal, QC, Canada
Background: There is a consensus that crowding is a property of peripheral (but not foveal) vision and that crowding zones are elliptical and oriented towards the fovea. However, Latham and Whitaker (1996) found evidence of crowding at fixation and suggested that multiple linear magnifications were required to explain the changes in crowding from fixation to the periphery. Past studies of crowding have involved gratings (e.g., Latham & Whitaker, 1996, OPO) or alphanumeric characters (e.g., Cavanagh, 2002, VR; Pelli et al., 2007, JoV).
Here we ask whether the basic characteristics of crowding apply to biologically relevant stimuli; specifically, symmetry. Furthermore, we ask whether multiple linear magnifications explain the changes in crowding from fixation to the periphery. Method: We measured size thresholds for target/crowder separations of 1.25 to 8.00 times target size, as well as a no-crowder condition, in a symmetry discrimination task. Thresholds were measured with the target at fixation and 8° below or to the right of fixation. In one condition the crowders flanked the target vertically, and in another they flanked it horizontally. In all cases we plotted target size at threshold as a function of separation at threshold. Results: At fixation, size thresholds were independent of target/crowder separation. At all other eccentricities, threshold size decreased as separation increased until asymptote was reached, at which point size thresholds were independent of separation. As well, crowding was stronger when flankers were presented parallel to the fixation-to-target axis, consistent with the suggested structure of crowding zones. Conclusions: Consistent with previous literature, it appears that there is a qualitative difference in crowding across the visual field; symmetry appears to behave like previously studied stimuli. Therefore, contrary to our expectations and previous data (Latham & Whitaker, 1996), multiple linear magnifications seem inadequate to characterize the data. Acknowledgement: NSERC and CIHR grants awarded to Rick Gurnsey

33.313 Size Pooling
Yvette Granata 1 (ygranata@gmail.com), Ramakrishna Chakravarthi 3, Sarah Rosen 1, Denis Pelli 1,2; 1 Psychology, New York University, 2 Center for Neural Science, New York University, 3 CNRS, Faculte de Medecine de Rangueil, Universite Paul Sabatier, Toulouse, France
Does crowding affect apparent size? Korte (1923) and Liu & Arditi (1999) reported that a crowded string of letters appears shorter in length, possibly abbreviated, with letters missing. We present three rectangles, a target between flankers, all 1 deg high, arranged horizontally, 10 deg to the right of fixation. The flankers are 1 deg wide. We tested several target widths: 0.75, 1.0 and 1.25 deg. Target-flanker spacing is 1.25 deg, center to center. We also tested without flankers. A reference rectangle is always present 5 deg above the target object, beyond the range of crowding. While maintaining fixation, the observer adjusts the width of this reference to match the apparent width of the target object. Relative to the unflanked condition, flankers decreased the apparent width of the target by 19% when the flankers were narrower, and increased it by 9% when the flankers were wider than the target. Thus, the apparent width of the target is a weighted average that includes the flankers. This is size pooling. Acknowledgement: NIH EY04432
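The weighted-average conclusion of 33.313 can be written as a one-parameter model and fit to matching data. A sketch assuming apparent width = (1 − w)·target + w·flanker, with hypothetical matched widths in place of the reported ones:

```python
import numpy as np
from scipy.optimize import minimize_scalar

target = np.array([0.75, 1.00, 1.25])    # target widths, deg
flanker = 1.0                            # flanker width, deg (all conditions)
matched = np.array([0.82, 1.00, 1.01])   # hypothetical matched widths, deg

def sse(w):
    """Squared error of the weighted-average prediction for pooling weight w."""
    predicted = (1 - w) * target + w * flanker
    return np.sum((matched - predicted) ** 2)

w = minimize_scalar(sse, bounds=(0.0, 1.0), method="bounded").x
print(f"estimated flanker weight w = {w:.2f}")
```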


33.314 Pool party: objects rule
Sarah Rosen 1 (sarahbrosen@gmail.com), Ramakrishna Chakravarthi 1,2, Denis Pelli 1; 1 Psychology and Neural Science, New York University, 2 Centre de Recherche Cerveau et Cognition, CNRS & Université Paul Sabatier
To recognize an object we first detect features and then combine or “pool” them. Crowding, the inability to identify a peripheral object surrounded by flankers, is thought to be a breakdown of feature pooling. Thus, it has been widely used to study object recognition. Features within objects are usually spatially distributed. How much do the flanker features furthest from the target affect crowding? Do features need to be part of an unbroken object to be combined with target features? We created a set of six target objects consisting of jagged vertical sides attached by flat tops and bottoms. These objects differ only in the features on their jagged sides. All flankers are derived from this set. We use whole objects as flankers in one condition, and broken objects as flankers in others. We break an object by introducing a gap between its jagged sides, violating object closure. At various eccentricities, we present a target object between two flankers. We vary the width of the flankers by keeping the near jagged side at a constant distance from the target while moving the far jagged side (for unbroken objects, the top and bottom stretch; for broken objects, the size of the gap increases). We find that, in the whole-object condition, as the flanker’s far side moves closer to the target, crowding increases. However, for broken objects, moving the far side closer to the target does not affect crowding. Further, there was little crowding with broken objects. We ruled out target-flanker similarity as an explanation. Our finding that the far side of a flanker combines with the target object only when it is attached to the near side suggests that objects, not features, pool. We conclude that whether a flanking feature is combined with the target depends on what flanking object the feature belongs to.

33.315 Object Crowding
Julian M. Wallace 1 (julian.wallace@usc.edu), Bosco S. Tjan 1,2; 1 Department of Psychology, University of Southern California, 2 Neuroscience Graduate Program, University of Southern California
Crowding occurs when stimuli in the peripheral field become harder to identify when flanked by other items within a spatial extent that depends on eccentricity (Korte, 1923; Bouma, 1970). Crowding has been demonstrated extensively with simple stimuli such as Gabors and letters. Here we characterize crowding for everyday objects. We presented three-item arrays of objects and of letters, arranged radially and tangentially in the lower visual field. Observers identified the center object, and we measured contrast energy thresholds as a function of target-to-flanker spacing (center-to-center). We found that object crowding is similar to letter crowding in spatial extent, but is much weaker (~2.5x vs. ~11x in threshold elevation relative to an unflanked target). We also examined whether the exterior and interior features of an object, operationally defined, are differentially affected by crowding. We used a circular aperture to present either just the interior portion of an object or everything else but the interior (a ‘donut’ object).
For both apertures and donuts, critical spacing and threshold elevation were similar to those of intact objects. To sum up, crowding between objects does not significantly differ from that between letters in terms of spatial extent and the anisotropy along the radial and tangential directions. However, crowding-induced threshold elevations for objects (intact, aperture, donut) are much lower than those for letters. Taken together, these findings suggest that crowding between letters and between objects is essentially due to the same mechanism, which affects the interior and exterior features of an object equally. However, for objects, it is easier to compensate for the loss in performance by increasing contrast. Acknowledgement: NIH R01-EY017707, R01-EY016093

33.316 Objects crowded by noise flankers
Kilho Shin 1 (kilhoshi@usc.edu), Julian M. Wallace 1, Bosco S. Tjan 1,2; 1 Department of Psychology, University of Southern California, 2 Neuroscience Graduate Program, University of Southern California
Crowding is a key limiting factor of form vision in the peripheral field. A prominent theory of crowding is that of inappropriate feature integration (Levi, 2008; Pelli & Tillman, 2008). However, it is not known what features are inappropriately integrated. Tjan and Dang (2005, VSS) addressed this question by flanking a letter target with different types of flankers. They found that a noise flanker obtained by phase-scrambling a letter crowded a letter target as effectively as a letter flanker. Here we repeated their experiment with gray-scale images of everyday objects in place of letters. We measured contrast threshold elevation for identifying a target object presented 10° below fixation as a function of target-flanker spacing (center-to-center: 1°, 1.5°, 2.25°, 3.375°). At small spacings, where the target and flanker stimuli overlapped, the target was made to occlude the flankers. We used four types of flankers: intact objects, anisotropic pink noise (phase-scrambled objects), isotropic pink noise (phase-scrambled and orientation-scrambled objects), and white noise. As with letters, we found that anisotropic pink noise flankers led to essentially the same threshold vs. spacing function as intact object flankers, with a maximum threshold elevation of about 0.7 log units. White noise flankers led to the least amount of crowding, with threshold elevation less than 0.2 log units. Isotropic pink noise flankers produced an intermediate amount of crowding, with a maximum threshold elevation of about 0.6 log units. Taken together, these results suggest that the features that are inappropriately integrated are those that are common to the intact and the phase-scrambled objects, strongly implicating narrowband Gabor-like features. Comparisons between the isotropic and anisotropic pink noise conditions further suggest that the orientations of these Gabor features are relevant for crowding. Acknowledgement: NIH/NEI R01-EY017707, R01-EY016093
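The noise flankers in 33.316 rest on standard Fourier manipulations: anisotropic pink noise keeps an object's amplitude spectrum and randomizes only phase, while the isotropic version additionally scrambles orientation. A minimal numpy sketch of the phase-scrambling step (with a stand-in image; a fully Hermitian-symmetric phase spectrum would be the more careful construction):

```python
import numpy as np

def phase_scramble(image, rng=None):
    """Preserve the amplitude spectrum of `image`; replace phases at random."""
    rng = np.random.default_rng() if rng is None else rng
    spectrum = np.fft.fft2(image)
    random_phase = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, image.shape))
    # Taking the real part enforces a real-valued image at the cost of
    # slightly distorting the amplitude spectrum.
    return np.real(np.fft.ifft2(np.abs(spectrum) * random_phase))

object_image = np.random.rand(128, 128)   # stand-in for a gray-scale object
anisotropic_pink_noise = phase_scramble(object_image)
```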
33.317 Unconscious processing of emotion in crowded displays
Nathan Faivre 1 (nathan.faivre@ens.fr), Vincent Berthet 1, Sid Kouider 1; 1 Laboratory of Cognitive Sciences and Psycholinguistics, Ecole Normale Supérieure, Paris, France
We present a new “gaze-contingent substitution” paradigm aimed at characterizing unconscious processes in crowded displays. Crowding occurs when nearby flankers impede the identification, but not the detection, of a peripheral stimulus. The origins of crowding effects in the visual system are still only poorly understood. According to bottom-up proposals, hard-wired limitations in the primary visual cortex cause the information about the crowded stimulus to be lost very early, any information being pooled with that of the flankers. According to top-down proposals, however, the smallest region of the visual field that can be isolated by attention is much coarser than the smallest details resolvable by vision. Crowding would then reflect a partial conscious read-out of perceived information due to a lack of attentional resolution. In this work, we show that not only static but also dynamic emotional primes (i.e., videos of faces) rendered unconscious by crowding can bias subsequent preference judgments. Importantly, control experiments show that this unconscious transfer of valence does not occur with inverted faces. Other methods such as continuous flash suppression, during which stimuli are suppressed from awareness through binocular rivalry, are also studied. Comparison of the two methods reveals stronger unconscious effects in crowding. These results are discussed in light of current theories of crowding and favor a top-down explanation in which crowded stimuli are not lost in V1 but rather can bias decisions at an abstract level. These results motivate continued research on the unconscious processing of dynamic stimuli.

Perception and action: Navigation and mechanisms
Royal Ballroom 6-8, Boards 318–331
Sunday, May 9, 8:30 - 12:30 pm


33.318 Perceiving pursuit and evasion by a virtual avatar
William Warren 1 (Bill_Warren@brown.edu), Jonathan Cohen 1; 1 Dept. of Cognitive & Linguistic Sciences, Brown University, Providence, RI
How do we perceive the behavioral intentions of another pedestrian? In this study, a virtual avatar is programmed to pursue or evade the participant in an ambulatory virtual environment. The avatar is driven by our steering dynamics model (Fajen & Warren, 2007; Cohen, Cinelli, & Warren, VSS 2008, 2009). We investigate whether the perception of pursuit and evasion is based on the avatar’s trajectory, which is contingent on the participant’s movements, and on the direction of the avatar’s head fixation. Participants wore a head-mounted display (63° H x 53° V) and walked toward an approaching avatar, while head position was recorded using an inertial/ultrasonic tracking system (70 ms latency). Avatars could (a) pursue or evade the participant, and (b) fixate or look straight ahead; pursuers fixated the participant, and evaders fixated and walked to a point behind the participant. In Exp. 1, a single avatar appeared at 6, 7, 8, or 9 m, and participants reported whether it was pursuing or evading them. Mean d′ was 2.5, and head fixation only contributed at 6 m. Mean RT was 2-3 s; head fixation provided a half-second advantage for pursuit at 7 and 8 m, but a half-second disadvantage for evasion at all distances. In Exp. 2, two, three, or four avatars appeared; one was a pursuer, and the others were evaders. The participant reported which avatar was pursuing them. RT increased with the number of distractors, and there were small improvements with fixation. The results indicate that pursuit/evasion is reliably perceived from the avatar’s contingent trajectory alone, and that avatars are judged sequentially. Head fixation provides a modest additional advantage at close ranges. Acknowledgement: NIH R01 EY10923

33.319 Follow the leader: Behavioral dynamics of following
Kevin Rio 1 (kevin_rio@brown.edu), Christopher Rhea 1, William Warren 1; 1 Department of Cognitive & Linguistic Sciences, Brown University
Can human crowd behavior be explained as an emergent property of local rules, as in flocking (Reynolds, 1987) and fish schooling (Huth & Wissel, 1992)? Here we derive one such possible ‘rule’: a dynamical model of following another pedestrian. We collected position data from pairs of pedestrians walking in a 12 m x 12 m room, using an inertial/ultrasonic tracking system (IS-900, 60 Hz). The ‘leader’ (a confederate) walked in a straight path. After 3 steps at a constant speed, the leader would (a) speed up, (b) slow down, or (c) remain at the same speed for a variable number of steps (3, 4, or 5), and then return to his original speed. The ‘follower’ (a subject) was instructed to maintain a constant distance from the leader (1 m or 3 m). We evaluate several candidate following models, in which the follower’s acceleration is controlled by (a) nulling change in distance, (b) nulling change in relative speed, or (c) more complex functions of these variables, drawing inspiration from studies of vehicle-following in traffic (Brackstone, 1999). For each model, we cross-correlate the predicted acceleration of the follower with the observed acceleration in the tracking data. Future work will investigate the visual information that serves as input to the model. Once a control law for following is characterized, it can be integrated with other components for steering toward goals, avoiding obstacles, and intercepting moving targets (Fajen & Warren, 2003, 2007). This will allow us to empirically determine whether human crowd behavior does indeed emerge from such local interactions. Acknowledgement: NIH R01 EY10923
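The model evaluation in 33.319 amounts to generating predicted follower accelerations from a candidate control law and correlating them with the observed accelerations. A sketch with synthetic one-dimensional trajectories; the gain, goal distance, and zero-lag correlation are simplifying assumptions:

```python
import numpy as np

dt = 1 / 60.0                      # IS-900 sample period (60 Hz)
t = np.arange(0.0, 20.0, dt)

# Synthetic leader/follower positions along the walking axis (illustrative).
leader = 1.2 * t + 0.3 * np.sin(0.5 * t)
follower = leader - 1.0 + 0.1 * np.sin(0.5 * t - 0.4)

def acceleration(x):
    """Second derivative by finite differences."""
    return np.gradient(np.gradient(x, dt), dt)

k, goal_distance = 2.0, 1.0        # gain and instructed distance (m)
predictions = {
    "null distance change": k * ((leader - follower) - goal_distance),
    "null relative speed": k * np.gradient(leader - follower, dt),
}

observed = acceleration(follower)
for name, predicted in predictions.items():
    r = np.corrcoef(predicted, observed)[0, 1]
    print(f"{name}: r = {r:.2f}")
```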
33.320 Why does the rabbit escape the fox on a zig-zag path? Predator-prey dynamics and the constant bearing strategy
Charles Z Firestone 1 (charles_firestone@brown.edu), William H Warren 1; 1 Department of Cognitive and Linguistic Sciences, Brown University
It is frequently observed that prey often evade predators by darting back and forth on a zig-zag path, rather than simply outrunning them on a straight path. What might account for this behavior? Previous work has shown that humans, dragonflies, and bats intercept moving targets by nulling change in the bearing angle of the target. The present research investigated whether a zig-zag escape path may be an effective countermeasure to this constant bearing strategy. Computer simulations randomly generated hundreds of thousands of ‘prey’ escape paths, each of which was tested against Fajen & Warren’s (2007) dynamical model of the ‘predator’s’ constant bearing strategy. Parameters included the angle and frequency of turns in the escape path, the initial distance between predator and prey, the relative speed of predator and prey, and the predator’s visual-motor delay. Performance was measured as ground gained by ‘prey’ over ‘predator.’ Zig-zag paths emerged as the most effective escape route, and succeeded even when the prey was slower than the predator and a straight path would have failed. Analysis revealed a strong positive correlation between the variability in the bearing angle and the ground gained by the prey, suggesting that zig-zag paths succeed by disrupting the predator’s efforts to hold the bearing angle constant. A rule of thumb for prey also emerged from the data: when the predator ‘zigs,’ you should ‘zag.’ We are currently collecting data on human ‘predators’ pursuing virtual ‘prey’ in an ambulatory virtual environment, to test the simulation predictions and to determine whether humans maintain the constant bearing strategy. Future work will test an interactive escape strategy in which the prey’s ‘zags’ are contingent upon the predator’s ‘zigs.’ The results suggest that zig-zag escape paths are common because they are effective countermeasures to the constant bearing strategy. Acknowledgement: NIH R01 EY10923
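The pursuit rule in 33.320, turning so as to null change in the prey's bearing, can be simulated compactly as a proportional-navigation controller. This sketch is not the full Fajen & Warren (2007) model; the gain, speeds, and zig-zag schedule are arbitrary choices:

```python
import numpy as np

dt, gain = 0.05, 3.0
predator, heading = np.array([0.0, 0.0]), 0.0
prey = np.array([0.0, 5.0])
v_predator, v_prey = 1.2, 1.0
previous_bearing = None

for step in range(400):                      # 20 s of simulated pursuit
    # Prey zig-zags: heading alternates +/-45 deg about north every 2 s.
    zig = np.pi / 4 * (1 if (step * dt) % 4 < 2 else -1)
    prey = prey + v_prey * dt * np.array([np.sin(zig), np.cos(zig)])

    # Constant bearing strategy: turn in proportion to the change in the
    # prey's bearing, which drives that change toward zero.
    offset = prey - predator
    bearing = np.arctan2(offset[0], offset[1])
    if previous_bearing is not None:
        bearing_rate = np.angle(np.exp(1j * (bearing - previous_bearing))) / dt
        heading += gain * bearing_rate * dt
    previous_bearing = bearing
    predator = predator + v_predator * dt * np.array(
        [np.sin(heading), np.cos(heading)])

print(f"distance after 20 s: {np.linalg.norm(prey - predator):.2f} m")
```

Adding a visual-motor delay to the bearing signal, as in the study, is what lets sharp zig-zags open a gap that the bearing-nulling controller cannot close.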
33.321 The influence of external landmarks, the sun, and cast shadows on learning a wormhole environment
Jonathan Ericson 1 (Jonathan_Ericson@brown.edu), William Warren 1; 1 Cognitive & Linguistic Sciences, Brown University
In previous research, we created a non-Euclidean hedge maze by introducing two “wormholes” that seamlessly transported participants between locations, by means of rotating the virtual environment. Participants relied on the topological graph structure (route knowledge) to navigate, and were unaware of the radical violations of global Euclidean structure (rips and folds in space). Here we provide more information about maze orientation by adding external landmarks, a sun, and cast shadows. Participants actively walk in a 40 x 40 ft virtual environment while wearing a head-mounted display (63˚ H x 53˚ V), and head position is recorded with a sonic/inertial tracker (70 ms latency). Participants learn the locations of nine objects (places) by freely exploring the maze in one of three conditions: (1) a control condition with uniform lighting and no external landmarks, (2) a test condition in which the light source and external landmarks rotate with the maze, and (3) a test condition in which the light source and external landmarks remain fixed with respect to the laboratory. We then probe their spatial knowledge in each condition using a shortcut task, in which participants walk from Home to object A, the maze disappears, and they are instructed to walk directly to the remembered location of object B. If participants use the cast shadow and landmark information to detect the maze rotation, they may report violations and walk to the metric location of object B more frequently than in the control condition. If participants rely on the graph structure of the maze despite the additional orientation information, they should walk through a wormhole to the alternative target location, B1, as they do in the control condition. Acknowledgement: NSF BCS-0214383, BCS-0843940

33.322 Learning a new city: Active and passive components of spatial learning
Elizabeth Chrastil 1 (elizabeth_chrastil@brown.edu), William Warren 1; 1 Cognitive and Linguistic Sciences, Brown University
When arriving in a new city, how do you learn its layout? It seems that actively walking around would lead to better spatial knowledge than passively riding in a taxi, yet the literature is decidedly mixed. However, “active” learning has several components that are often confounded. We test the contributions of four components to spatial learning: visual information, vestibular information, motor/proprioceptive information, and cognitive decisions. Participants learn the locations of 10 objects in an ambulatory virtual maze environment, and are then tested on their graph and survey knowledge of object locations. Six learning conditions are crossed with two test conditions, for a total of 12 groups of participants: (a) Free Walking: participants freely explore the environment for 10 minutes, providing all components of active exploration. (b) Guided Walking: participants are guided along the same paths, removing the decision-making component. (c) Free Wheelchair: participants steer through the maze in a wheelchair by pressing buttons to indicate left and right turns, minimizing motor/proprioceptive information. (d) Guided Wheelchair: participants are wheeled through the maze along paths that match the Free Walking condition, removing motor/proprioception and decision-making. (e) Free Video: participants steer through a desktop VR maze by pressing buttons, removing motor/proprioceptive and vestibular information. (f) Guided Video: participants watch a participant’s-eye video of the Free Walking condition, providing passive learning. In the test phase, participants are wheeled to object A and instructed to walk to the remembered location of object B: (i) Survey knowledge task: the maze disappears and participants take a direct shortcut from A to B. (ii) Graph knowledge task: participants walk from A to B within the maze corridors, with detours. We expect that active decisions will be sufficient for graph knowledge, whereas active motor/proprioceptive and/or vestibular information will be necessary for metric knowledge, and both will surpass passive learning. Acknowledgement: NASA/RI Space Grant


33.323 Putting New Zealand on the map: Investigating cognitive maps in human navigation using virtual environments
Diane M. Thomson 1 (dmt9@waikato.ac.nz), John A. Perrone 1; 1 The University of Waikato, Hamilton, New Zealand
The mechanisms underlying navigation in complex environments are currently not very clear, particularly the role of visual rotation information. We therefore examined the accuracy of human path integration abilities based on purely visual information (e.g., depth cues, optic flow information and landmarks), focusing mainly on the effects of self-rotation by the navigator. Participants navigated either actively or passively through realistic large-scale virtual environments in a driving simulator, along routes consisting of roads linked by a traffic circle. The environments were modelled on real New Zealand locations. Angular estimates of the starting point location were recorded for a number of different conditions. In the active mode, participants used a steering wheel, accelerator and brake pedals to control their simulated motion, whilst passive participants observed a pre-recorded route. The environments were varied between highly structured (urban) and less structured (rural) settings, in order to increase or reduce optic flow information and depth cues, and between those with landmarks present at the intersections and those with no landmarks. Route layouts were manipulated to include different combinations of road lengths and intersection exit-road angles. Participants were able to perform path integration using visual information, but with systematic errors. There was a clear effect of navigation mode: in all environments, participants in the active condition tended to be more accurate than those in the passive condition, except when the route consisted of a short approach road to an intersection followed by a long exit road. Clear effects of route layout were also observed: the pattern of errors (overestimation vs. underestimation of the direction of the starting location) depended on the angle-distance configuration. However, the presence of more structure and landmarks did not increase accuracy: the pattern of errors was similar between the urban and rural environments, and between environments with and without landmarks. Acknowledgement: The New Zealand Road Safety Trust
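The correct answer in the homing task of 33.323 is a vector sum over the traveled legs, with the pointing response expressed relative to the final heading. A reference computation for an idealized two-leg route (the lengths and exit angle below are hypothetical):

```python
import numpy as np

def homing_direction(legs):
    """legs: (length_m, heading_deg) pairs; returns the start point's
    direction in degrees relative to the final heading (0 = straight ahead)."""
    position = np.zeros(2)
    for length, heading in legs:
        rad = np.radians(heading)
        position += length * np.array([np.sin(rad), np.cos(rad)])
    bearing_to_start = np.degrees(np.arctan2(-position[0], -position[1]))
    angle = bearing_to_start - legs[-1][1]
    return (angle + 180.0) % 360.0 - 180.0    # wrap to [-180, 180]

# Short approach road, then a 120 deg exit from the traffic circle.
route = [(50.0, 0.0), (200.0, 120.0)]
print(f"correct pointing angle: {homing_direction(route):.1f} deg")
```

Systematic over- or underestimation relative to this geometric answer, as a function of the angle-distance configuration, is the error pattern the study reports.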
33.324 Learning relative locations in single- and multiple-destination route planning in a real labyrinth
Kayoko Ohtsu 1 (id-ant@moegi.waseda.jp), Yoshihiro Oouchi 2; 1 Graduate School of Education, Waseda University, 2 Teikyo-Gakuen Junior College
A few studies have examined the relation between route planning during wayfinding and spatial learning. Route planning often involves manipulation of a spatial representation of the current space when one cannot see a destination directly. At an early stage of learning an environment, one has to retrieve some representation of a destination and estimate its direction relative to the current place in order to plan and choose a route. In other words, this is a process of learning the relative location of the two places. We tested the hypothesis that route planning enhances spatial learning, using a wayfinding task in a real environment. Incidental learning outcomes were compared between two types of route planning: (I) single destination (estimating one relative location at a time) and (II) multiple destinations (estimating a number of relative locations at a time). In the experiment, 50 participants explored a simple symmetric labyrinth (7 by 7 meters) freely and visited 4 targets (exploring phase), and were then asked to revisit these targets in a predefined order (visiting phase). Half the participants were given (i) the next goal target each time after reaching the current one, while the other half were given (ii) the next 3 goal targets and an order in which to make a round of visits. After the task, the participants judged 12 relative directions to and from the 4 target positions. Results suggest that performance in the multiple-destination condition was better than that in the single-destination condition. Since there was no difference in total task execution time, and both conditions brought about similar physical experiences (amount of walking and movement paths), we conclude that manipulating spatial representations of multiple destinations in the exploring phase formed more elaborate spatial knowledge.

33.325 Investigating the potential impact of presence on the accuracy of participants’ distance judgments in photorealistic and non-photorealistic immersive virtual environments
Victoria Interrante 1 (interran@cs.umn.edu), Lane Phillips 1, Brian Ries 1, Michael Kaeding 1; 1 Department of Computer Science, University of Minnesota
The reported experiment seeks to provide insight into the impact of rendering style on participants’ sense of presence in an immersive virtual environment (IVE). This work is motivated by recent findings that a) people tend not to severely underestimate distances in an immersive virtual environment when that IVE is a high-fidelity replica of the same physical space that they know they are concurrently occupying, but b) people will underestimate distances when the virtual replica environment is rendered in a minimalist, line-drawing (NPR) style. We ultimately seek to disambiguate between two alternative hypotheses: a) is the decline in distance judgment accuracy due to participants’ decreased sense of presence in the NPR IVE, which interferes with their ability to act on what they see as if it were real, or b) is it better explained by the lack of sufficient low-level cues to 3D spatial location in the NPR IVE, which were formerly provided by the statistics of the photographic texture? We conducted a between-subjects experiment in which users were fully tracked and immersed in an IVE that was either a photorealistically or a non-photorealistically rendered replica of our lab. We quantitatively assessed their depth of presence using physiological measures of heart rate and galvanic skin response, along with characteristic gait metrics derived from full-body tracking data. Participants in each group were asked to perform a series of tasks that involved traversing the room along a marked path. They did the exercises first in the regular replica IVE and then in a stress-enhanced version, in which the floor surrounding the marked path was cut away to reveal a two-story drop.
We measured the differences in each participant’s physiological measures and tracked gait metrics between the stressful and non-stressful versions of each environment, compared the results between the rendering conditions, and found significant differences between the groups. Acknowledgement: IIS-0713587

33.326 Effects of augmented reality cues on driver performance
Michelle Rusch 1,2 (michelle-rusch@uiowa.edu), Elizabeth Dastrup 3, Ian Flynn 1, John Lee 4, Shaun Vecera 5, Matt Rizzo 2,1; 1 University of Iowa, Department of Mechanical and Industrial Engineering, 2 University of Iowa, Department of Neurology, 3 University of Iowa, Department of Biostatistics, 4 University of Wisconsin-Madison, Department of Industrial and Systems Engineering, 5 University of Iowa, Department of Psychology
Introduction: Intersections are among the most hazardous roadway locations, particularly for left turns. This study evaluated the effects of augmented reality (AR) cues on decisions to turn left across gaps in oncoming traffic. Method: Ten middle-aged drivers (mean = 40.6 years, SD = 7.5; 4 males) were tested on six simulated rural intersection scenarios. Drivers activated the high-beam lever the moment they judged it safe to turn and released the lever the moment it was unsafe. A transparent ‘no turn left’ AR cue assisted the driver. It was positioned where oncoming traffic crossed the intersection, subtended 10°, signaled a 4 s time-to-contact (TTC) (cf. Nowakowski et al., 2008), and persisted until oncoming traffic passed. Uncued blocks (N=3) always preceded cued blocks (N=3). The three different cued blocks contained either: 1) 0% false alarms (FAs) and 0% misses, 2) 15% FAs and 0% misses, or 3) 15% misses (no cue despite qualifying oncoming traffic). Results: The cued conditions did not differ reliably from one another (p > .05, all cases). Conclusions: AR cues may have influenced driver behavior. The safety cushion in uncued conditions increased after AR cue exposure. This more conservative behavior may reflect cue-related learning or general learning; however, if this finding were due to general learning we would expect smaller cushions. The small proportion of FAs and misses did not appear to affect responses to the AR cues, based on the finding of no differences between the cued conditions. Acknowledgement: Supported by NIH grant R01AG026027 & the University of Iowa’s Injury Prevention Research Center Pilot Grant Program


33.327 Looking where you are going does not help path perception
Li Li 1 (lili@hku.hk), Joseph Cheng 1; 1 Department of Psychology, The University of Hong Kong, Pokfulam, Hong Kong, China SAR
It has been mathematically shown that when travelling on a circular path and fixating a target on the future path, flow lines for environmental points on the path would be vertical. Thus, by integrating all the vertical lines in the flow field, observers could recover the path trajectory directly from retinal flow without recovering heading (e.g., see Wann & Swapp, 2000). Here we test whether fixating a target on the future path helps path perception. Observers viewed displays (110° H x 94° V) simulating travel on a circular path over a textured ground (T = 3 m/s, R = ±3°/s or ±6°/s) for 1 s. Three display conditions were tested. In the path-fixation condition, the simulated gaze direction in the display pointed to a target along the path 20° away from the starting position; in the non-path-fixation condition, the simulated gaze direction was on a target 10° inside or outside the path at the same distance; and in the heading-fixation condition, the simulated gaze pointed to the instantaneous heading (i.e., the tangent to the path). At the end of the trial, a probe appeared at 10 m. Observers used a mouse to place the probe on their perceived future path. For five observers (3 naïve), path errors (defined as the deviation angle between the perceived and the actual path at 10 m) were small only in the heading-fixation condition (mean error: 2.72° & 0.52° for R = 3°/s & 6°/s, respectively). For the path- and non-path-fixation conditions, path errors displayed a positive slope (0.6 & 0.98, respectively), consistent with observers estimating path curvature from the total amount of rotation in the flow field. The findings suggest that fixating a target on the future path does not necessarily help the perception of the path trajectory. Path perception largely depends on solving the translation and rotation problem in retinal flow. Acknowledgement: Supported by: Hong Kong Research Grant Council, HKU 7471//06H
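The display geometry in 33.327 follows from the ratio of translation to rotation: traveling at T m/s while the path rotates at R traces a circle of radius T/R (with R in rad/s). A sketch computing the true path point at the 10 m probe distance and the deviation-angle error for a hypothetical probe placement:

```python
import numpy as np

T = 3.0                    # translation speed, m/s
R = np.radians(6.0)        # path rotation rate, rad/s
radius = T / R             # ~28.6 m circular path

# True path point after s = 10 m of arc, in observer coordinates
# (x lateral, z in depth).
s = 10.0
theta = s / radius
x_true, z_true = radius * (1 - np.cos(theta)), radius * np.sin(theta)

# Deviation angle between a hypothetical perceived point and the true one.
x_perceived, z_perceived = 2.5, 9.7
error = np.degrees(np.arctan2(x_perceived, z_perceived)
                   - np.arctan2(x_true, z_true))
print(f"radius = {radius:.1f} m, path error = {error:.2f} deg")
```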
33.328 Simulation of the retina in a sensory substitution device
Barthélémy Durette 1,2 (barthelemy.durette@psychol.uni-giessen.de), Nicolas Louveton 2, David Alleysson 3, Jeanny Hérault 2; 1 General and Experimental Psychology, Justus Liebig University, 2 DIS, Gipsa-lab, Institut National Polytechnique de Grenoble, 3 Laboratoire de Psychologie et Neurocognition, Université Pierre Mendès France
Visual sensory substitution devices transmit the information from a video camera via a substitute sense, usually the tactile or auditory sense (Bach-y-Rita, 1969). They have been shown to give blind subjects perceptual abilities (see e.g. Auvray and Myin, 2009) and to induce activity in cerebral regions usually dedicated to vision (Poirier et al., 2005; Merabet et al., 2009). In this experiment, we manipulated the image of the video camera to mimic the spatio-temporal response of the magnocellular ON pathway (Hérault, 2009) and tested its effect on the mobility of blindfolded subjects. Blindfolded subjects equipped with a visuo-auditory substitution system named TheVIBE (Auvray et al., 2005) were asked to complete a maze in the alley of a park. After four learning sessions lasting around 30 minutes, the image from the video camera was reversed without the subject being aware of it. We measured a significant difference in performance, showing that the device has an effect on the mobility of the subjects. Moreover, in the group including the retina-like signal processing, we found a significant correlation between the effect of the image reversal and the improvement of the participants during the learning sessions, indicating an actual benefit of the device for mobility. We also report qualitative observations on the subjects’ behavior. In particular, most of the participants made continuous low-amplitude oscillatory movements resembling miniature eye movements. References: Auvray, M. et al. (2005). Journal of Integrative Neuroscience, 4, 505. Auvray, M. & Myin, E. (2009). Cognitive Science, 33(7). Bach-y-Rita, P. et al. (1969). Nature, 221. Poirier, C. et al. (2005). Neurobiol Learn Mem, 85(1). Hérault, J. & Durette, B. (2007). IWANN 2007, Springer-Verlag. Merabet, L. B. et al. (2009). NeuroReport, 20(2).

33.329 How Path Integration Signals Create the Spatial Representations upon which Visual Navigation Builds
Himanshu Mhatre 1 (hmhatre@gmail.com), Anatoli Gorchetchnikov 1, Stephen Grossberg 1; 1 Department of Cognitive and Neural Systems, Center for Adaptive Systems, and Center of Excellence for Learning in Education, Science and Technology, Boston University, 677 Beacon Street, Boston, MA 02215
Navigation in the world uses a combination of visual, path integration, and planning mechanisms. Although visual cues can modify and stabilize navigational estimates, path integration signals provide the “ground truth” upon which vision builds, and they enable navigation and dead reckoning in the dark. A complete understanding of visually-based navigation thus requires an understanding of how path integration creates the spatial representations upon which vision can build. Grid cells in the medial entorhinal cortex use vestibular path integration inputs to generate remarkable hexagonal activity patterns during spatial navigation (Hafting et al., 2005). Furthermore, there exists a gradient of grid cell spatial scales along the dorsomedial-ventrolateral axis of entorhinal cortex. It has been shown how a self-organizing map can convert the firing patterns across multiple scales of grid cells into hippocampal place cell firing fields that are capable of spatial representation on a much larger scale (Gorchetchnikov and Grossberg, 2007). Can grid cell firing fields themselves arise through a self-organizing map process, thereby providing a unity of mechanism underlying the emergence of entorhinal-hippocampal spatial representations? A self-organizing map model has been developed that shows how path integration signals may be converted through learning into the observed hexagonal grid cell activity patterns across multiple spatial scales. Such a model overcomes key problems of the useful oscillatory interference model of grid cell firing (Burgess et al., 2007). The proposed new model hereby clarifies how path integration signals generate hippocampal place cells through a hierarchy of self-organizing maps.
Top-down attentional matching mechanisms are needed to stabilize learning in self-organizing maps (Grossberg, 1976). Such hippocampal-to-entorhinal feedback mechanisms illustrate how visual cues can build upon and modify entorhinal and hippocampal spatial representations during navigation in the light. Acknowledgement: Supported in part by CELEST, an NSF Science of Learning Center (SBE-0354378), and the SyNAPSE program of DARPA (HR0011-09-3-0001, HR0011-09-C-0011)
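Any account of the grid cells discussed in 33.329 must reproduce the hexagonal firing lattice, which is conveniently described as a rectified sum of three plane waves 60° apart. A sketch of that reference pattern (descriptive only; it is neither the oscillatory-interference model nor the proposed self-organizing map):

```python
import numpy as np

def grid_rate_map(x, y, spacing=0.5, phase=(0.0, 0.0)):
    """Idealized hexagonal grid-cell rate map over positions (x, y) in meters."""
    k = 4 * np.pi / (np.sqrt(3) * spacing)   # wave number for this grid spacing
    rate = np.zeros_like(x, dtype=float)
    for angle in np.radians([0.0, 60.0, 120.0]):
        kx, ky = k * np.cos(angle), k * np.sin(angle)
        rate += np.cos(kx * (x - phase[0]) + ky * (y - phase[1]))
    return np.maximum(rate, 0.0)             # rectify into a firing rate

xs, ys = np.meshgrid(np.linspace(0, 2, 200), np.linspace(0, 2, 200))
rate = grid_rate_map(xs, ys, spacing=0.5)    # larger spacing mimics ventral scales
```

Varying the `spacing` parameter reproduces the dorsomedial-to-ventrolateral gradient of grid scales mentioned in the abstract.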
33.330 Hardware and software computing architecture for robotics applications of neuroscience-inspired vision and navigation algorithms
Chin-Kai Chang 1 (chinkaic@usc.edu), Christian Siagian 1, Laurent Itti 1; 1 iLab, Computer Science Department, University of Southern California
Biologically-inspired vision algorithms have thus far not been widely applied to real-time robotics because of their intensive computation requirements. We present a biologically-inspired visual navigation and localization system which is implemented in real time using a cloud computing framework. We create a visual computation architecture on a compact wheelchair-based mobile platform. Our work involves both a new design of cluster computer hardware and software for real-time vision. The vision hardware consists of two custom-built carrier boards that host eight computer modules (16 processor cores total) connected to a camera. For all the nodes to communicate with each other, we use the ICE (Internet Communication Engine) protocol, which allows us to share images and other intermediate information such as saliency maps (Itti & Koch, 2001) and scene “gist” features (Siagian & Itti, 2007). The gist features, which coarsely encode the layout of the scene, are used to quickly identify the general whereabouts of the robot in a map, while the more accurate but time-consuming salient landmark recognition is used to pinpoint its location at the coordinate level. Here we extend the system to also navigate in its environment (indoors and outdoors) using these same features. That is, the robot has to identify the direction of the road, use it to compute movement commands, and perform visual feedback control to ensure safe driving over time. We utilize four of the eight computers for localization (the salient landmark recognition system) while the remainder are used to compute the navigation strategy. As a result, the overall system performs all these computing tasks simultaneously in real time at 10 frames per second. In short, with the new design and implementation of this highly capable vision platform, we are able to apply computationally complex biologically-inspired vision algorithms on a mobile robot. Acknowledgement: NSF, ARO, General Motors, and DARPA

33.331 The Relationship Between Blink Rate and Navigation Task Performance
Kevin Barton 1 (krbarton@uwaterloo.ca), Daniel Smilek 1, Colin Ellard 1; 1 Department of Psychology, University of Waterloo
Emerging research has suggested a correspondence between eye-blink rate and task performance. Recently, Tsai, Viirre, Strychacz, Chase, & Jung (Aviation, Space, and Environmental Medicine, 2007) demonstrated an increase in blink rate as a function of split attention during a basic driving task. However, the relationship between blink rate and navigation performance in a more complex navigation task requires further investigation. The present study investigated the relationship between blink rate and navigation performance as a function of the complexity of an environment, using a two-step navigation task in virtual reality.
VSS 2010 AbstractsSunday Morning Posterstwo-step navigation task in virtual reality. Participants were asked to navigatethrough two novel virtual environments to a central landmark, andthen were asked to navigate back to the starting position. The two environmentsconsisted of identical buildings, but differed in their arrangementwithin the environment, resulting in a high and low intelligibility environment.Additionally, the influence of textural information was manipulatedbetween subjects by providing either unique or uniform textures for eachbuilding within the environments. An analysis of variance on the overallmovement paths, duration of navigation, and blink rate for both the exploratoryand wayfinding tasks revealed a significant main effect of configurationon the distance, duration, idiosyncracy of each path, and blink rate ofeach participant. Critically, this effect was observed during the wayfindingtask, but not the exploration task, with a higher blink rate and longermovement paths being observed in the low intelligibility environmentrelative to the high intelligibility environment. Only limited evidence wasfound for the influence of textural information on these results. Taken as awhole, these results provide early evidence for the differential allocation ofattention during navigation through complex environments, resulting inreduced navigation performance.Acknowledgement: Funding for this work was provided by the Natural <strong>Sciences</strong> andEngineering Research Council of Canada (NSERC) and Social <strong>Sciences</strong> and HumanitiesResearch Council of Canada (SSHRC)Perceptual organization: TemporalprocessingOrchid Ballroom, Boards 401–409Sunday, May 9, 8:30 - 12:30 pm33.401 Integration of visual information across timeMordechai Z. Juni 1 (mjuni@nyu.edu), Todd M. Gureckis 1 , Laurence T. Maloney 1,2 ;1 Department of Psychology, New York University, 2 Center for Neural Science,New York UniversityIntroduction. We examine how observers combine spatial cues to estimatethe center of a Gaussian distribution when the cues are presented one at atime in rapid succession. The rule of combination minimizing spatial variancein the estimate is to give equal weight to each cue independent of theorder of presentation.Methods. On each trial, we sampled nine values from a spatial univariateGaussian (SD = 3.32 cm). The mean of the Gaussian varied from trial totrial. We drew small vertical ticks whose x-coordinate was the sample valueand whose y-coordinate was the center of the display marked by a horizontalreference line. Each tick was visible for 150 msec followed by a 150msec delay between successive ticks. Observers estimated the center of theGaussian by clicking on the horizontal reference line. Observers completed200 trials without feedback, followed by 300 trials with corrective feedbackindicating the true center of the Gaussian.Results. For each observer, we estimated the weights assigned to the first,second, etc. tick in the sequence separately for the first 200 trials (no feedback)and the last 200 trials (with feedback). Before feedback, observersassigned unequal weights as a function of temporal order (F(8,32) = 4.21,p .05). Crucially, there was a significant interaction betweensequence position and the presence of feedback, F(8,68) = 3.14, p


…decreased for other spatial configurations. Coupling was weakest when stimuli with opposite motion directions were presented within the same (left or right) hemifield. Overall, mirror symmetry and common fate reliably influence the dynamic coupling of bistable stimuli. The distribution of the relative coupling strengths across experimental conditions indicates that it does not result from a decision bias or from ‘high-level’ Bayesian inferences. Our findings rather suggest that the bistable neural attractors underlying the processing of each stimulus are coupled. The effect of common fate could reflect activity of neurons with large receptive fields encompassing the two stimuli, e.g. in area MT/MST. On the other hand, the strong effect of mirror symmetry could reflect the contribution of long-range connections through the corpus callosum. We tested this hypothesis by changing the relative locations of the inductive and target stimuli in each hemifield, thus altering mirror symmetry while keeping stimulus separation identical. Results showed decreased coupling whenever vertical symmetry was broken. Overall, we suggest that the coupled dynamics of bistable stimuli reflect long-range connectivity, thus allowing a behavioural mapping of its functional properties.

33.405 Dynamics of ménage à trois in moving plaid ambiguous perception
Jean-Michel Hupé 1 (Jean-Michel.Hupe@cerco.ups-tlse.fr); 1 CerCo, Toulouse University & CNRS
The perception of ambiguous moving rectangular plaids with transparent intersections is tristable rather than bistable. Not only does it alternate between coherent and transparent motion, but also, for transparent motion, which grating is perceived in front is ambiguous and alternates (Hupé & Juillard, SFN 2009). The dynamics of perceptual tristability can inform us about how the visual system deals conjointly with two computational challenges among the most important in perceptual organization: motion integration vs. segmentation, and depth ordering. Twenty-six subjects continuously reported the three possible percepts of red/green plaids displayed for 1 minute (for transparent motion they had to indicate whether the red or the green grating was in front). The sequence between the three possible transitions was neither random nor hierarchical (as observed in multistable binocular rivalry by Suzuki & Grabowecky, 2002). Rather, switches between two transparency states were typically interleaved with a coherent percept. Moreover, the duration of that coherent percept determined whether switching to the opposite depth ordering occurred (for short coherent percept durations) or not. The preferential status of the coherent interpretation in this threesome may explain why the first percept is systematically coherent and lasts longer than subsequent coherent percepts, even for parameters that strongly favor the transparent interpretation (Hupé & Rubin, 2003; such behavior is not typical in bistable perception, but it was also observed for auditory streaming: Pressnitzer & Hupé, 2006). I tested this hypothesis by making the plaid perception bistable, introducing either occlusion or stereo cues to remove the ambiguity of depth ordering. Both manipulations resulted in the first percept (when coherent) having the same duration as subsequent coherent percepts.
Interestingly, the preference for coherency (first percept bias) was affected by stereo but not occlusion cues, meaning that the first-percept bias for coherency and its longer duration are two independent phenomena. Acknowledgement: Agence Nationale de Recherche ANR-08-BLAN-0167-01

33.406 Temporal Dynamics in Convexity Context Effects
Elizabeth Salvagio 1 (bsalvag@email.arizona.edu), Mary A. Peterson 1; 1 The University of Arizona
Convex regions are more likely to appear as objects (figures) than abutting concave regions, but context modulates this likelihood: in 100-ms displays, convex regions are increasingly likely to be seen as figures as the number of alternating convex and concave regions increases from 2 to 8 (57%-89%; Peterson & Salvagio, 2008). These convexity-context effects occur only when the concave regions are uniform in color. We hypothesized that convexity-context effects arise when the interpretation of a single large surface pre-empts that of multiple same-color concave figures, which then allows convex figures to dominate. We investigated whether it takes time for surface pre-emption to occur by presenting a mask at different inter-stimulus intervals (ISIs) after the figure-ground display, following a tradition in which masks are used to test the dynamics of visual processing. When the mask immediately followed the 100-ms display (0-ms ISI), simple effects of convexity were observed in that convex regions were seen as figure significantly more often than chance (56%, p < .05), but convexity-context effects were absent (p > .20). When the onset of the mask was delayed by 100 ms, convexity-context effects were evident, p < .05.
33.406 Temporal Dynamics in Convexity Context Effects
Elizabeth Salvagio 1 (bsalvag@email.arizona.edu), Mary A. Peterson 1; 1 The University of Arizona
Convex regions are more likely to appear as objects (figures) than abutting concave regions, but context modulates this likelihood: in 100-ms displays, convex regions are increasingly likely to be seen as figures as the number of alternating convex and concave regions increases from 2 to 8 (57%-89%; Peterson & Salvagio, 2008). These convexity-context effects occur only when the concave regions are uniform in color. We hypothesized that convexity-context effects arise when the interpretation of a single large surface pre-empts that of multiple same-color concave figures, which then allows convex figures to dominate. We investigated whether it takes time for surface pre-emption to occur by presenting a mask at different inter-stimulus intervals (ISIs) after the figure-ground display, following a tradition in which masks are used to test the dynamics of visual processing. When the mask immediately followed the 100-ms display (0-ms ISI), simple effects of convexity were observed in that convex regions were seen as figure significantly more often than chance (56%), p . 20. When the onset of the mask was delayed by 100 ms, convexity-context effects were evident, p

face amodal completion results from early suppression in early visual areas and late enhancement in higher visual areas, and attention plays a critical role in these neural events.
Acknowledgement: This work is supported by the National Natural Science Foundation of China (Projects 30870762, 90920012 and 30925014)

33.409 Figure-ground signals in early and object-specific visual areas: A combined fMRI, EEG and rTMS study
Martijn E. Wokke 1 (martijnwokke@gmail.com), H. Steven Scholte 1, Victor A.F. Lamme 1,2; 1 University of Amsterdam, 2 Netherlands Ophthalmic Research Institute
Two processes can be discriminated when distinguishing a figure from its background: boundary detection and surface segregation. The neural origin and temporal dynamics of these two processes are still much disputed. In this study we used motion- and texture-defined stimuli that differentiate between edge detection and surface segregation. In order to investigate the function of networks involved in figure-ground segregation, we combined online rTMS and EEG and disrupted processing in nodes of distinct visual networks (dorsal vs. ventral). For motion-defined stimuli, rTMS/EEG results indicate that rTMS alters figure-ground related processes differentially depending on whether the dorsal (V5/MT) or ventral (lateral occipital [LO]) network was stimulated. The data suggest that disrupting V5/MT impairs surface segregation but not edge detection. This behavioral effect was reflected in interrupted feedback signals to occipital areas as measured by EEG. Disrupting LO, however, has the opposite effect: it enhances surface segregation and boosts feedback signals to occipital areas. To interpret the rTMS/EEG results, BOLD-MRI was measured in both areas. fMRI data showed that V5/MT differentiates between edge detection and surface segregation for motion-defined stimuli, while LO does not. For texture-defined stimuli, no differentiation was found between edge detection and surface segregation in either area. In general, the rTMS/EEG and fMRI data suggest a battle for resources between the dorsal and ventral streams in the process of figure-ground segregation. When the stream that is less involved in this process is disrupted, the stream that is more involved can become more dominant, resulting in better performance and enhanced feedback signaling to occipital areas.

Perceptual organization: Objects
Orchid Ballroom, Boards 410–418
Sunday, May 9, 8:30 - 12:30 pm

33.410 Binary Division Constrains Human but not Baboon Categorical Judgements within Perceptual (colour) Continua
Jules Davidoff 1 (j.davidoff@gold.ac.uk), Julie Goldstein 1, Ian Tharp 1, Elley Wakui 1, Joel Fagot 2; 1 Goldsmiths University of London, Department of Psychology, Lewisham Way, London SE14 6NW, 2 CNRS-Université de Provence, Laboratory of Cognitive Psychology, Marseille, France
In Experiment 1, two human populations (Westerners and Himba) and Old World monkeys (baboons: Papio papio) were given matching-to-sample colour tasks. We report a similar strong tendency to divide the range of coloured stimuli into two equal groups in Westerners and in the remote population (Himba), but not in baboons. When matching the range of colours to the two samples, both human groups produced a boundary at the midpoint of the range, and it was at this point that there was most uncertainty about the best match. The boundary depended on the range of stimuli and hence overrode established colour categories.
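The midpoint boundary reported in 33.410 is, in effect, the 50% point of a psychometric function fitted over the colour range. A minimal sketch of how such a boundary might be estimated from matching proportions (hypothetical data, not the authors' analysis):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical matching data: stimulus position along the colour range
# (0 = one sample colour, 1 = the other) and the proportion of trials
# on which it was matched to the second sample.
x = np.array([0.0, 0.125, 0.25, 0.375, 0.5, 0.625, 0.75, 0.875, 1.0])
p = np.array([0.02, 0.05, 0.15, 0.35, 0.55, 0.70, 0.85, 0.95, 0.98])

def logistic(x, mu, s):
    """Cumulative logistic: mu is the category boundary, s the slope."""
    return 1.0 / (1.0 + np.exp(-(x - mu) / s))

(mu, s), _ = curve_fit(logistic, x, p, p0=[0.5, 0.1])
print(f"estimated boundary: {mu:.3f}")  # near 0.5 = midpoint of the range
```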
However, range differences did not affect the names given to the colours by either Western or Himba observers. In Experiment 2, we showed that a distinctive stimulus (focal colour) in the range affected the equal division, though observers again made a boundary. Experiment 3 employed an implicit task (visual search) to assess colour categorization (Categorical Perception), and it was only in this task that categorization was immune to range effects and was observed only at the established colour boundary. Nevertheless, prior exposure to the range of colours did affect naming, producing binary division for a restricted range of colours. Thus, irrespective of whether colour categories are taken to be universal (Berlin & Kay, 1969) or language induced (Davidoff, Davies & Roberson, 1999), they are overridden in colour decision tasks by this stronger human tendency to divide continua into two. It is argued that binary division is the basic human mechanism whereby labels are used to establish colour categories.

33.411 The effect of temporal frequency on the local and global structure of Glass patterns
Melanie Palomares 1 (mcp@ski.org), Anthony Norcia 2; 1 The University of South Carolina, 2 The Smith-Kettlewell Eye Research Institute
Glass patterns are moirés created from a sparse random-dot field paired with its spatially shifted copy. Because discrimination of these patterns cannot be based on local features, they have been used extensively to study global integration processes. Using a multi-frequency tagging technique to record visual evoked potentials (VEPs), we can simultaneously measure neural sensitivity to the local and global structure of Glass patterns. We have previously found that sensitivity to the local and global structure of Glass patterns has different specificities: global responses were largely independent of luminance contrast while local responses were not (Palomares et al., 2009, Journal of Cognitive Neuroscience); global responses were unaffected by directed attention while local responses were not; and scalp topographies of global responses were localized more laterally than those of local responses (Palomares et al., VSS 2009). Here, we evaluated the specificity of local and global responses to the local temporal frequency of Glass patterns. If sensitivity to global structure is independent of local structure, one strong expectation is that global responses to Glass patterns would remain unaffected by the local update of the dots. Random dot patterns were spatially offset to create concentric Glass patterns and alternated with randomized versions every 600 ms; thus the global structure changed at 0.83 Hz. Different exemplars of concentric Glass patterns or randomly-oriented dipoles were sequentially presented at faster rates (every 66, 50 or 33 ms); the local structure thus changed at 15, 20 or 30 Hz. Our results show that local responses were strongest at the lower frequencies, while global responses were best at the higher frequencies. VEP source imaging on fMRI-based regions of interest suggests that this pattern is strongest in V4. Our data further demonstrate that sensitivity to local and global structure in dynamic Glass patterns is mediated by different, complementary mechanisms.
Acknowledgement: National Institutes of Health (#EY014536, EY06579, EY19223) and the Pacific Vision Foundation.

33.412 Interpolation of Expanding/Contracting Objects behind an Occluding Surface
Hideyuki Unuma 1 (hide.unuma@kgwu.ac.jp), Hisa Hasegawa 2, Philip J. Kellman 3; 1 Kawamura Gakuen Women's University, 2 Aoyama Gakuin University, 3 University of California, Los Angeles
Visual systems of humans and animals seem to extract critical information for object perception from the changing visual stimulation produced by object and observer motion. Although objects in ordinary scenes are often partially occluded, observers routinely perceive the shape of objects despite the occlusion and motion. This ability depends on interpolation processes that connect fragments across gaps in space and time to represent dynamically occluded objects. Palmer, Kellman, and Shipley (2006, JEP:G) proposed that spatiotemporal interpolation depends on a Dynamic Visual Icon (DVI), which represents and positionally updates previously visible fragments. Little research, however, has explored the range of motions that support interpolation or the ecological validity of transformations that may be important to object interpolation in ordinary environments.
In the present study, we examined the effect of velocity gradients on the interpolation of objects expanding or contracting behind an occluding surface. Six participants observed the shapes of interpolated objects through multiple apertures and made two-alternative forced choices about the objects. Three conditions of velocity gradients were compared, using correct response rate as a measure of object interpolation: (1) an acceleration condition, in which local speeds of visible edges increased linearly towards the periphery; (2) a negative-acceleration condition, in which speeds of edges decreased linearly towards the periphery; and (3) a constant-speed condition, in which local speeds were held constant. The results showed that the effects of velocity gradients were significant, and that observers perceived interpolated objects with higher probability in the acceleration condition than in the negative-acceleration or constant-speed conditions. These results suggest that the direction and velocity gradients of moving edges, which may represent approaching and receding objects, have critical effects on the visual interpolation of moving objects.

33.413 Object-based attention benefits demonstrate surface perception in two-dimensional figure-ground displays
Andrew Mojica 1 (ajmojica@email.arizona.edu), Brian Roller 1, Elizabeth Salvagio 1, Mary Peterson 1; 1 University of Arizona
Objects tend to be convex rather than concave, but convexity is not a strong figural cue in two-dimensional displays unless (1) multiple convex regions alternate with multiple concave regions, and (2) the concave regions are the same color (Peterson & Salvagio, 2008). To explain these effects, we hypothesized that the interpretation of a single large surface pre-empts that of multiple same-color concave shapes. Consequently, the competition from concave shapes at individual borders is reduced, and convex shapes dominate. On this surface pre-emption hypothesis, separated same-color concave regions in multi-region figure-ground displays would be perceived as portions of a single surface, whereas separated same-color convex regions in the same displays would not. To test this hypothesis, we adapted a cued target discrimination paradigm that Albrecht et al. (2008) had used with three-dimensional displays for use with our 2-D figure-ground displays. We examined whether object-based attention benefits (shorter reaction times to a target appearing within the same object as a pre-cue rather than in a different object) are obtained for two same-color concave regions separated by a convex region but not for two same-color convex regions separated by a concave region. Consistent with the surface pre-emption hypothesis, object-based attention benefits were obtained for targets shown on same-color concave regions flanking a convex region but not for targets shown on same-color convex regions flanking a concave region (p


these methods is to address issues of model identifiability that can arise when there is a one-to-many mapping from empirical data to the GRT framework. Mean-shift integrality is such a situation, under which inferential errors can occur because there are multiple solutions in GRT. A mean-shift integrality arises when changing one dimension of a multidimensional stimulus shifts the perceptual representation of all other dimensions. We have developed two techniques that can facilitate identification of a mean shift. The first is a collection of probit models that can be estimated simultaneously across two dimensions (DeCarlo, 2003), allowing bivariate correlations within perceptual distributions to be directly estimated. When a mean shift in distributions is accompanied by a continuous decision bound, the probit models identify bivariate correlations of the same sign and similar magnitude across all distributions. They also identify any shift in the decision bound relative to the distributions. The second approach is an application of polychoric and tetrachoric correlations both within and across all distributions. Tetrachoric correlations applied to data sampled from mean-shift distributions accompanied by a continuous decision-bound shift revealed significant non-zero correlations in the response space. These estimates are sensitive to the magnitude of the mean shift. Results from the two approaches are contrasted with more traditional multidimensional signal detection theory approaches (Kadlec, 1995; 1999).
Acknowledgement: This project was facilitated by funding from the World Universities Network Research Mobility Programme.

33.418 Hemifield modulation of approximate number judgments
Heeyoung Choo 1 (h-choo@northwestern.edu), Steve Franconeri 1; 1 Department of Psychology, Northwestern University
The visual system offers several types of summary information about visual features, including approximate number (Miller & Baker, 1968). Approximate number perception can be affected by adaptation in a hemifield-specific manner (Burr & Ross, 2008; but see also Durgin, 2008). After adapting to a large and a small number of dots in different hemifields, a later presentation of identical sets of dots appears to have the opposite relationship (smaller and larger, respectively). This result raises the possibility that numerosity information is extracted independently within each hemifield. We directly tested this possibility by asking participants to judge the larger of two collections of dots, one presented across the hemifield boundary (between-hemifield presentation) and the other presented within one hemifield (within-hemifield presentation). The between- or within-hemifield manipulation was made by keeping the dot collections in the same locations but changing fixation location. Participants systematically judged the within-hemifield collections as having more dots. However, this effect disappeared when (1) the fixation location was manipulated so that neither collection fell on the hemifield boundary, and (2) the two collections were presented sequentially. In other words, any hemifield modulation in numerosity judgments occurred only when a subset of dots crossed the hemifield boundary.
The results together suggest that, when creating an approximate number representation, dots falling in the same hemifield are mandatorily pooled.

Motion: Biological motion
Orchid Ballroom, Boards 419–435
Sunday, May 9, 8:30 - 12:30 pm

33.419 The Effects of TMS over STS and Premotor Cortex on the Perception of Biological Motion
Bianca van Kemenade 1 (biancavankemenade@gmail.com), Neil Muggleton 2, Vincent Walsh 2, Ayse Pinar Saygin 3; 1 Institute of Neurology, University College London, 2 Institute of Cognitive Neuroscience, University College London, 3 Department of Cognitive Science, University of California San Diego
Multiple brain areas have been identified as important for biological motion processing in neuroimaging and neuropsychological studies. Here, we investigated the role of two areas implicated in biological motion, the posterior superior temporal sulcus (STS) and the premotor cortex, using offline transcranial magnetic stimulation (TMS). Stimuli were noise-masked point-light displays (PLDs) of human figures performing various actions, and scrambled versions of the same stimuli. Subjects had to determine whether a moving person was present on each trial. Noise levels were determined individually, based on each subject's 75% accuracy threshold estimated adaptively prior to the session. After three baseline runs of 40 trials each, theta-burst TMS was delivered over left premotor cortex (near the inferior frontal sulcus, IFS), left STS, or vertex in different sessions. The coordinates of stimulation were based on our previous lesion-mapping study (Saygin, 2007, Brain). Subjects then completed three post-TMS runs. A non-biological motion task (detecting PLDs of translating polygons) served as a further control. Accuracy decreased significantly after TMS of the IFS, while reaction times shortened significantly after TMS of the STS. Using signal detection analysis, we observed that d' and criterion values were significantly decreased after TMS of the IFS (but not STS), which was due to subjects making significantly more false alarms post-TMS. None of these TMS effects were found for the non-biological control task, indicating some specificity to biological motion. Our findings constitute important steps towards understanding the neural systems subserving biological motion perception, but future work is needed to clarify the precise functional roles of these two areas. We hypothesize that during biological motion perception, premotor cortex provides a modulatory influence that helps refine the computations of posterior areas. Alternatively, premotor cortex might be important for decision making regarding biological motion.
Acknowledgement: This work was supported by a European Commission Marie Curie Award to APS. We thank Jon Driver and Chris Chambers.
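The d' and criterion values reported in 33.419 follow from hit and false alarm rates under the standard equal-variance signal detection model. A minimal sketch with hypothetical trial counts (the log-linear correction is an assumed choice; the abstract does not specify one):

```python
from scipy.stats import norm

def dprime_criterion(hits, misses, false_alarms, correct_rejections):
    """Equal-variance signal detection measures d' and criterion c."""
    # Log-linear correction guards against proportions of 0 or 1.
    h = (hits + 0.5) / (hits + misses + 1.0)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d = norm.ppf(h) - norm.ppf(f)            # d' = z(hit) - z(false alarm)
    c = -0.5 * (norm.ppf(h) + norm.ppf(f))   # criterion
    return d, c

# Hypothetical counts: more false alarms after TMS lower both d' and c,
# the pattern described in 33.419.
print(dprime_criterion(30, 10, 8, 32))   # baseline-like performance
print(dprime_criterion(30, 10, 18, 22))  # more false alarms post-TMS
```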
33.420 Contribution of body shape and motion cues to biological motion selectivity in hMT+ and EBA depends on cue reliability
James Thompson 1 (jthompsz@gmu.edu), Wendy Baccus 1, Olga Mozgova 1; 1 Department of Psychology, George Mason University
The perception of biological motion involves the integration of motion cues with form cues such as body shape. Recently it was shown that voxel-wise selectivity for biological motion within motion area hMT+ and the extrastriate body area (EBA) correlated with selectivity for static bodies but not with motion (Peelen, Wiggett, & Downing, 2006). This suggested that the response to biological motion in these regions was driven entirely by body selectivity and not by motion. Here we examined whether the contribution of motion and body shape selectivity to biological motion selectivity in hMT+ and EBA, as well as the fusiform body area (FBA), is fixed, or whether it depends in part on the reliability of the form and motion cues. We hypothesized that while form cues might be most reliable with foveal presentation of stimuli, their reliability should decrease if stimuli are presented at more peripheral locations. In contrast, we hypothesized that eccentricity would have little effect on motion cue reliability, leading to an increased contribution of motion selectivity to biological motion selectivity at more peripheral locations. Using fMRI, we identified hMT+, EBA, and FBA using standard localizers. Participants then performed a one-back task on point-light biological motion or tool motion stimuli presented centrally or more than 5° to the right or left of fixation. Using correlation-based multivoxel pattern analysis (MVPA), we replicated the finding that biological motion selectivity was associated with body selectivity but not motion selectivity in hMT+, EBA, and FBA, but only with foveal presentation. Presenting stimuli at more peripheral locations led to a significant correlation between motion selectivity and biological motion selectivity in hMT+ and EBA. These findings suggest that cue reliability is taken into account as form and motion cues are integrated during the neural processing of biological motion.

33.421 Multi-voxel pattern analysis (MVPA) of the STS during biological motion perception
Samhita Dasgupta 1 (samhita@uci.edu), John Pyles 2, Emily Grossman 1; 1 Department of Cognitive Sciences, Center for Cognitive Neuroscience, University of California Irvine, 2 Center for the Neural Basis of Cognition, Carnegie Mellon University
Neuroimaging studies have identified the human posterior superior temporal sulcus (STSp) as having brain responses correlated with the perception of biological motion (e.g. Grossman et al., 2000; Allison, Puce & McCarthy, 2000). The human STS is believed to be the homologue of the monkey superior temporal polysensory area (STPa), in which single-unit physiology studies have shown neurons responsive to biological motion (e.g. Perrett et al., 1996). Many neurons in monkey STPa are reported to be sharply tuned to particular actions, and are proposed to form the basis of action recognition. The aim of this study is to determine whether the human STS, like monkey STPa, has distinct neural populations supporting the recognition of different biological actions. If such neuronal populations exist, they are likely organized at a sub-voxel spatial scale. To overcome the spatial resolution limitations of functional magnetic resonance imaging, we measured the
information content in the fMRI BOLD responses using support vector machines in conjunction with multi-voxel pattern analysis. We specifically measured whether the STS response discriminates between different biological actions, as well as between those actions and motion-matched non-biological control stimuli ('scrambled' motion). Human subjects viewed blocks of three different action conditions (jumping jacks, a profile view of a walker, and a hand waving in the air) and a scrambled-animation condition. We measured classification performance in STSp and in motion-sensitive hMT+, both independently localized in separate scans. We found above-chance classification performance in these regions, which is evidence of sufficient information in the BOLD pattern to discriminate different unique actions. This study provides insight into neural activity at the sub-voxel level in human brain areas involved in biological motion perception, and our findings suggest that action recognition is supported by highly tuned neuronal ensembles in visual cortex.
Acknowledgement: NSF BCS0748314 to EG
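A minimal sketch of the decoding logic described in 33.421: a cross-validated linear support vector machine classifying condition labels from multi-voxel response patterns. The data here are synthetic stand-ins for fMRI responses, and scikit-learn is assumed:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical data: 60 blocks x 200 voxels, 4 conditions (3 actions +
# scrambled), with a weak condition-specific pattern added to noise.
n_blocks, n_voxels, n_conds = 60, 200, 4
labels = np.repeat(np.arange(n_conds), n_blocks // n_conds)
signal = rng.normal(0, 0.5, (n_conds, n_voxels))   # per-condition pattern
X = rng.normal(0, 1.0, (n_blocks, n_voxels)) + signal[labels]

# Cross-validated linear SVM; chance here is 1/4, so mean accuracy
# reliably above 0.25 indicates decodable pattern information.
scores = cross_val_score(SVC(kernel="linear"), X, labels, cv=5)
print(scores.mean())
```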
33.422 Attention-based motion analysis of biological motion perception
Sarah Tyler 1 (sctyler@uci.edu), Javier O. Garcia 1, Emily D. Grossman 1; 1 Department of Cognitive Sciences, Center for Cognitive Neuroscience, University of California Irvine
Human observers can recognize actions in point-light biological sequences with relative ease. This skill is believed to be the consequence of motion-based visual analyses, although the exact nature of these computations remains unclear. Typically, point-light animation sequences are depicted as luminance-defined tokens that are easily detected as biological by our first-order motion system. Contrast-defined (second-order) point-light sequences are also readily recognized as biological (Ahlström, Blake, & Ahlström, 1997). More recently, some findings have implicated attention-based (third-order) motion as the critical motion analysis in biological motion perception (Thornton et al., 2000; Garcia & Grossman, 2008). To determine whether third-order motion analyses are sufficient for biological motion perception, we constructed biological motion displays that are defined by alternating features (e.g. Blaser et al., 2000), and are thus encoded by attention-based motion systems. Target tokens within a larger array of gabors depict human actions by coherently varying on key dimensions (contrast, spatial frequency, gabor orientation, phase or drifting speed). We measured the magnitude of the feature differences (e.g. contrast increments) required for threshold discrimination and detection performance in second-order displays using an adaptive staircase. The alternating-feature displays were created by varying each gabor dimension, frame by frame, of target tokens relative to the background. In these alternating-feature displays, the global motion signal is constructed by tracking these salient differences across feature space. We found that for second-order motion, subjects require larger feature differences (e.g. higher contrast, larger orientation tilt) for biological motion discrimination compared to detection, as expected. However, we also found that observers can readily detect and discriminate the third-order alternating-feature displays. These findings are evidence that attention-based third-order motion analyses may promote biological motion perception through feature tracking.
Acknowledgement: NSF BCS0748314 to EG

33.423 Perceptual biases in biological motion perception and other depth-ambiguous stimuli
Nikolaus Troje 1 (troje@queensu.ca); 1 Queen's University, Kingston, Ontario
Biological motion stick-figures rendered orthographically and without self-occlusions do not contain any information about the order of their elements in depth and are therefore consistent with at least two different in-depth interpretations. Interestingly, however, the visual system often prefers one interpretation over the other. In this study, we investigate two different sources of such biases: the looking-from-above bias and the facing-the-viewer bias (Vanrie et al., 2004). We measure perceived depth as a function of the azimuthal orientation of the walker, the camera elevation, and the walker's gender, all of which have previously been reported to affect the facing bias (Brooks et al., 2008). We also compare dynamic walkers with static stick-figure displays. Observers are required to determine whether 0.5 s presentations of stick-figures are rotating clockwise or counter-clockwise, which in effect tells us which of the two possible in-depth interpretations they are perceiving. In contrast to previous work, this measure is entirely bias-free in itself. Data collected with this method show that the facing-the-viewer bias is even stronger than previously reported and that it entirely dominates the viewing-from-above bias. Effects of walker gender could not be confirmed. Static figures which imply motion result in facing biases that are almost as strong as those obtained for dynamic walkers. The viewing-from-above bias becomes prominent for profile views of walkers, for which the facing-the-viewer bias does not apply, and for other depth-ambiguous stimuli (such as the Necker cube). In all these cases, we find a very strong bias to interpret the 2D image in terms of a 3D scene seen from above rather than from below. We discuss our results in the context of other work on depth-ambiguous figures and consider differences between the initial percept, as measured in our experiments, and the bistability observed during longer stimulus presentations.
Acknowledgement: NSERC, CIFAR

33.424 Local motion versus global shape in biological motion: A reflexive orientation task
Masahiro Hirai 1,3 (masahiro@queensu.ca), Daniel R. Saunders 1, Nikolaus F. Troje 1,2; 1 Department of Psychology, Queen's University, 2 School of Computing, Queen's University, 3 Japan Society for the Promotion of Science
Our visual system can extract directional information even from spatially scrambled point-light displays (Troje & Westhoff, 2006). In three experiments, we measured saccade latencies to investigate how local features in biological motion affect attentional processes. Participants made voluntary saccades to targets appearing on the left or the right of a central fixation point; the targets were congruent, neutral or incongruent with respect to the facing direction of a centrally presented point-light display. In Experiment 1, we presented two kinds of human point-light walker stimuli (coherent and spatially scrambled) at three different viewpoints (left-facing, frontal view, right-facing) and two different stimulus durations (200 and 500 ms) to sixteen observers.
Saccade latency in the incongruent condition was significantly longer than in the congruent condition for the 200-ms coherent point-light walker stimuli, but not for the spatially scrambled stimuli. In Experiment 2, a new group of observers (N = 12) was presented with two point-light walker displays. The only difference with respect to Exp. 1 was that, in the scrambled version of the stimulus, the locations of the dots representing the feet were kept constant. In contrast to the results of Experiment 1, saccade latency in the incongruent condition was significantly longer than in the congruent condition irrespective of stimulus type. In Experiment 3, we put into conflict the facing direction indicated by the local motion of the feet and the facing direction indicated by the global structure of the walker, by presenting newly recruited observers (N = 12) with backwards-walking point-light walkers. In agreement with the results of Experiment 2, the modulation of saccade latency depended on the direction of foot motion, irrespective of the postural structure of the walker. These results suggest that the local motion of the feet determines reflexive orientation responses.
Acknowledgement: JSPS Postdoctoral Fellowships for Research Abroad

33.425 Searching for a "super foot" with evolutionary-guided adaptive psychophysics
Dorita H. F. Chang 1 (dchang@eml.cc), Nikolaus F. Troje 1,2; 1 Centre for Neuroscience Studies, Queen's University, Kingston, Canada, 2 Department of Psychology, Queen's University, Kingston, Canada
The walking direction of a biological entity is conveyed by both global structure-from-motion information and local motion signals. Global and local cues also carry distinct inversion effects. In particular, the local motion-based inversion effect is carried by the feet of the walker. Here, we searched for a "super foot", defined as the motion of a single dot that conveys maximal directional information and carries a large inversion effect, using a psychophysical procedure driven by a multi-objective evolutionary algorithm (MOEA). We report on two rounds of searches, each involving the evolution of 25-27 generations (1000 trials/generation), conducted via a web-based interface. The search involved an eight-dimensional space spanned by the amplitudes and phases of a 2nd-order Fourier representation of the dot's motion in the image plane. On each trial, observers were presented with multiple copies of a "foot" chosen from the population of feet stimuli for the current generation and were required to indicate whether the perceived stimulus was right- or left-facing. The stimuli were shown at upright and inverted orientations. Upon completion of a generation, each stimulus was evaluated for its "fitness", based on its ability to convey direction and to carry an inversion effect, as derived from observer accuracy rates. The fittest stimuli were then selected to form a subsequent generation for testing, via methods of crossover and mutation. We show that the MOEA was effective at driving increases in accuracy rates for the upright stimuli, and increases in the inversion effect (quantified as the difference between upright and inverted stimuli), across generations. We show further that the two rounds of searches, beginning at different points in the space, converge towards the same region. We characterize the "super foot" in relation to current theories about the importance of gravity-constrained dynamics for biological motion perception.
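The evolutionary loop described in 33.425 can be sketched in miniature. This toy, single-objective version replaces the observer-derived, two-objective fitness with a synthetic function, and the parameter ranges are assumptions; it illustrates only the select/crossover/mutate cycle over eight-dimensional stimulus vectors:

```python
import numpy as np

rng = np.random.default_rng(1)

# Each "foot" is an 8-D vector: amplitudes and phases of a 2nd-order
# Fourier description of the dot's motion (parameterisation assumed).
POP, DIM, N_GEN = 20, 8, 25

def fitness(pop):
    # Placeholder: in the experiment, fitness came from observers'
    # accuracy for upright stimuli and the upright-inverted difference.
    # Here we score negative distance to an arbitrary target instead.
    target = np.linspace(0, 1, DIM)
    return -np.linalg.norm(pop - target, axis=1)

pop = rng.uniform(0, 1, (POP, DIM))
for gen in range(N_GEN):
    fit = fitness(pop)
    parents = pop[np.argsort(fit)][-POP // 2:]   # keep the fittest half
    # One-point crossover between randomly paired parents.
    pairs = rng.integers(0, len(parents), (POP, 2))
    cut = rng.integers(1, DIM, POP)
    children = np.array([np.concatenate([parents[i][:c], parents[j][c:]])
                         for (i, j), c in zip(pairs, cut)])
    pop = children + rng.normal(0, 0.02, children.shape)  # mutation
print(fitness(pop).max())  # fitness improves across generations
```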
33.426 Distributions of fixations on biological motion displays depend on the task: Direction discrimination vs. gender classification
Daniel R. Saunders 1 (daniel.saunders@queensu.ca), David K. Williamson 1, Nikolaus F. Troje 1; 1 Queen's University
Even when a display of a person walking is presented only as dots following the motion of the major joints, human observers can readily determine both the facing direction of locomotion and higher-level properties, including the gender of the individual. We investigated the spatial concentration of direction and gender cues by tracking the eye movements of 16 participants while they judged either property of a point-light display. The walkers had different levels of ambiguity in both their direction and their gender, which affected the difficulty of the tasks. Fixation locations were recorded throughout the 2 s presentation times. We analyzed the fixation data in two ways: first by creating fixation maps for the different conditions, and second by finding the average number of fixations that fell into three ROIs representing the shoulders, pelvis and feet. In accordance with past literature emphasizing the role of lateral shoulder sway in gender identification, participants on average fixated more on the shoulders in the gender task than in the direction task. Analysis of individual differences showed that more fixations in the shoulder region predicted slightly better performance in the gender task. On the other hand, the number of fixations on the pelvis, an area also known to contain gender information, did not differ significantly between tasks. In accordance with studies showing that the motion of the feet contains cues to direction, participants fixated significantly more often on the feet in the direction task; the feet were rarely fixated in the gender task. In general, task difficulty did not affect fixation patterns, except in the case of walkers viewed from the side, which produced on average slightly fewer foot fixations in the direction task.

33.427 The Perceived Sex of Biological Motion Displays is Influenced by Adaptation to Biological Motion but Not Adaptation to Static Faces
Eric Hiris 1 (ejhiris@smcm.edu), Katie Ewing 1; 1 Department of Psychology, St. Mary's College of Maryland
Previous research has shown that adapting to biological motion creates an aftereffect in the perceived sex of subsequently viewed biological motion displays. Likewise, adapting to a face creates an aftereffect in the perceived sex of subsequently viewed faces. We sought to determine whether a sex aftereffect in biological motion can be created by adapting to a face. Participants first classified the sex of thirteen biological motion displays and thirteen faces that varied in appearance from male to female. A subset of these stimuli was used in an adaptation experiment.
Participants adapted to either biological motion or static faces that were male, neutral, or female. After 10 seconds of adaptation, participants viewed a biological motion test display that, in the unadapted state, ranged from male to female. The data showed that adapting to biological motion biased perception of the test stimulus towards the opposite sex. However, there was no effect of adapting to static faces. The results suggest that adapting to faces does not create sex aftereffects in biological motion perception, and hence that there are independent neural sites for sex adaptation for faces and for biological motion.

33.428 Effects of social context on walking and the perceptions of walkers
Robin Kramer 1 (psp837@bangor.ac.uk), Robert Ward 1; 1 School of Psychology, Bangor University
Research using point-light walker stimuli shows that biological motion alone can signal various types of information, such as age, sex, and identity. However, all these experiments involve creating videos of walkers in a context in which the actors are aware that they are being filmed and observed. Given the effects of social context on other behaviours, we investigated whether walking while aware versus unaware of being observed would affect perceptions of those actors. Walkers were filmed in two conditions: first through the use of a hidden camera, and second with a visible camera operated by the experimenter. Point-light stimuli were then created from the videos. These stimuli were viewed by a second set of participants, who rated them on various traits including health and personality. Results demonstrated that perceptions of people differed depending on the context in which they were filmed. For instance, actors were rated as more extraverted and more feminine when they were unaware of being filmed and walked while alone. In addition, we were able to investigate how accurate raters' perceptions of these actors were. These findings have implications for both past and future studies of perception from biological motion, highlighting the need to consider social context when exploring the nature of information signalling.

33.429 Visual Sensitivity to Point-Light Actors Varies With the Action Observed
Adam Doerrfeld 1 (adoerrfeld@psychology.rutgers.edu), Kent Harber 1, Maggie Shiffrar 1; 1 Rutgers, The State University of New Jersey, at Newark
Background & Research Question: Traditional models of the visual system describe it as a general-purpose processor that analyzes all classes of stimuli with the same menu of visual processes (e.g. Marr, 1982). In contrast, other theories emphasize the uniqueness of human motion perception (see Blake & Shiffrar, 2007, for a review). Many of the theories that emphasize the uniqueness of human motion perception implicitly assume that visual sensitivity to human movement is action independent, as long as the observer can perform or has performed the observed action. Is visual sensitivity to human movement action independent? We examined whether the ability to detect a masked point-light person varies as a function of the action observed. Methods: Exp. 1 examined whether the detection of a moving point-light person varies depending on the action observed (lifting, running, throwing or walking). Exps. 2 and 3 examined whether person detection varies as a function of observers' expectancies about upcoming actions. Exp. 2 examined whether knowledge of the upcoming action would influence the ability to detect a point-light person across observed actions.
Exp. 3 examined whether being misinformed about the upcoming action would influence the ability to detect a point-light person across observed actions. Results & Discussion: Similar patterns emerged from all three experiments: visual sensitivity was action dependent, being greatest for walkers and worst for lifters. Interestingly, the differences in person detection cannot be attributed to expectancies. Previous researchers may have overestimated visual sensitivity to human movement by relying heavily on the perception of point-light walkers. Furthermore, easily performable human actions do not constitute a single perceptual category, as the ability to detect a person varies with the action observed. Later experiments will examine the role of dynamic symmetry (or the lack thereof) as well as motor and visual familiarity.
Acknowledgement: National Science Foundation grant #EXP-SA 0730985

33.430 Multimodal integration of the auditory and visual signals in dyadic point-light interactions
Lukasz Piwek 1 (lukaszp@psy.gla.ac.uk), Karin Petrini 1, Frank Pollick 1; 1 University of Glasgow, Department of Psychology, Glasgow, UK
Multimodal aspects of non-verbal communication have thus far been examined using displays of a solitary character (e.g. the face-voice and/or body-sound of one actor). We extend this investigation to more socially complex dyadic displays, using point-light displays combined with speech sounds that preserve only prosody information. Two actors were recorded approaching each other with three different intentions: negative, positive and neutral. The actors' movement was recorded using a Vicon motion capture system. The speech was simultaneously recorded and subsequently processed with low-pass filtering to obtain an audio signal that contained prosody information but not intelligible speech. In Experiment 1, displays were presented bimodally (audiovisual) and unimodally (audio-only and visual-only) to examine whether bimodal audiovisual conditions would facilitate perception of the original social intention compared to the unimodal conditions. In Experiment 2, congruent displays (visual and audio signals from the same actor and intent) and incongruent displays (visual and audio signals from different actors and intents) were used to explore changes in social perception when the sensory signals gave discordant information. Results supported previous findings obtained with solitary characters:
the visual signal dominates over the auditory signal (however, auditory information can influence the visual signal when the intentions from the two modalities are discordant). Results also showed that this dominance of visual over auditory information is significant only when the interaction between the characters is perceived as socially meaningful, i.e. when positive or negative intentions are present.
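The prosody-preserving speech processing in 33.430 amounts to low-pass filtering. A minimal sketch (the 400 Hz cutoff and filter order are assumptions; the abstract does not give the filter parameters):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def lowpass_prosody(audio, fs, cutoff_hz=400.0, order=4):
    """Low-pass a speech waveform so that prosody (pitch, rhythm,
    intensity) survives but the spectral detail carrying
    intelligibility is removed. Cutoff and order are assumed values."""
    sos = butter(order, cutoff_hz, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, audio)  # zero-phase filtering

# Usage with one second of hypothetical audio sampled at 44.1 kHz:
fs = 44100
audio = np.random.default_rng(2).normal(size=fs)
filtered = lowpass_prosody(audio, fs)
```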
33.431 Recognition of self-produced and friends' facial motion
Richard Cook 1 (r.cook@ucl.ac.uk), Cecilia Heyes 2,3, Alan Johnston 1,4; 1 Division of Psychology and Language Sciences, University College London, UK, 2 All Souls College, University of Oxford, UK, 3 Dept of Experimental Psychology, University of Oxford, UK, 4 Centre for Mathematics and Physics in the Life Sciences and EXperimental Biology (CoMPLEX), University College London
Previous studies of walking gait have reported counter-intuitive self-recognition effects whereby actors are better able to identify allocentric displays of their own walking gait than those of friends. Insofar as actors typically have little visual experience of their own gaits from third-person perspectives, such effects may indicate a contribution to perception from the motor system. Here we sought to determine whether participants show a similar self-recognition effect when asked to identify facial motion derived from themselves, friends or strangers. Motion was isolated from form cues using a markerless image-processing technique and used to animate an average head avatar. A single avatar stimulus was presented on each trial, and participants were required to respond 'self', 'friend' or 'other'. Participants first completed a block of upright trials and then completed a second block with stimulus orientation inverted. Some evidence of superior self-recognition was found, in that stimuli were correctly identified more often when viewed by the actor from whom they were derived than when viewed by friends or strangers. However, the self-recognition effect observed was primarily attributable to performance in the inverted condition. Whereas orientation inversion drastically impaired friends' ability to recognise an actor, inversion had no effect on actors' ability to recognise themselves. These findings suggest that recognition of self-produced and friends' motion may be mediated by different cues. Since recognition of friends' motion is sensitive to orientation, friend recognition may require the perception of configural, correlated motion cues derived from across the whole face. In contrast, self-recognition may rely on local motion cues extracted from particular features, and thus be insensitive to inversion. An observer's motor system may also serve to enhance recognition through the representation of the rhythmic or temporal characteristics of local motion cues.

33.432 Dissociation between biological motion and shape integration
Ayse Pinar Saygin 1 (apsaygin@gmail.com), Shlomo Bentin 2, Michal Harel 3, Geraint Rees 4,5, Sharon Gilaie-Dotan 4,5; 1 Department of Cognitive Science, University of California San Diego, 2 Department of Psychology, and Interdisciplinary Center for Neural Computation, Hebrew University of Jerusalem, 3 Department of Neurobiology, Weizmann Institute of Science, 4 Institute of Cognitive Neuroscience, University College London, 5 Wellcome Trust Centre for Neuroimaging, University College London
While studies have pointed to a relationship between form processing and biological motion perception, the extent to which the latter depends on ventral stream integration is unknown. Here, we took advantage of patient LG's neuropsychological profile to address this question. LG has developmental visual agnosia, with severe difficulty in object recognition but apparently normal motion perception. LG reports recognizing people from the way they move, suggesting he may use biological motion to support perception. In a recent neuroimaging and behavioural study we described LG's abnormal visual cortical organization (Gilaie-Dotan et al., 2009, Cerebral Cortex). LG exhibited deficits in form processing, normal motion processing, and significant abnormalities in his visual hierarchy. In particular, LG's lateral occipital (LO) region did not show typical object selectivity, while motion-sensitive MT+ showed typical activation patterns. Here, LG and age-matched controls performed motion-direction judgments on point-light displays depicting either biological or non-biological motion. Using point-lights allowed us to investigate structure-from-motion perception without relying on the shape connectivity that assists integration processes. The biological motion stimuli depicted a person walking to the right or to the left (but without translation, as if on a treadmill), whereas the non-biological motion consisted of a rectangle moving in either direction. The stimuli were embedded in noise dots in order to obtain sensitivity thresholds, which were calculated adaptively. The noise dots were created by spatially scrambling the target motion. While LG showed a significant deficit in the non-biological motion task, his biological motion performance was clearly within the normal range. His intact biological motion perception was further confirmed in a second experiment using a different stimulus and task (Saygin, 2007, Brain). These results suggest that successful biological motion perception can be achieved without strict reliance on the integrity of hierarchical ventral stream integration.
Acknowledgement: This work was supported by the European Union.

33.433 Asymmetry in visual search for local biological motion signals
Li Wang 1,2 (wangli@psych.ac.cn), Kan Zhang 1, Sheng He 3, Yi Jiang 1; 1 Institute of Psychology, Chinese Academy of Sciences, 2 Graduate School, Chinese Academy of Sciences, 3 Department of Psychology, University of Minnesota
The visual search paradigm has been widely used to study the mechanisms underlying visual attention, and search asymmetry provides a source of insight into preattentive visual features.
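Search efficiency in paradigms like those of 33.433 and 33.434 is conventionally quantified as the slope of response time against set size, and an asymmetry as the difference in slopes between the two target-distractor assignments. A minimal sketch with hypothetical numbers:

```python
import numpy as np

# Hypothetical mean RTs (ms) at set sizes 2, 4, 6 for the two
# target-distractor assignments of a search-asymmetry design.
set_sizes = np.array([2, 4, 6])
rt_assignment_a = np.array([620, 660, 700])   # efficient search
rt_assignment_b = np.array([650, 780, 910])   # inefficient search

# Slope of RT against set size, in ms per item.
slope_a = np.polyfit(set_sizes, rt_assignment_a, 1)[0]  # ~20 ms/item
slope_b = np.polyfit(set_sizes, rt_assignment_b, 1)[0]  # ~65 ms/item
print(slope_a, slope_b)  # the slope difference indexes the asymmetry
```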
In the current study, we show that observers were more efficient in searching for a spatially scrambled (or feet-only) upright biological motion target among spatially scrambled (or feet-only) inverted distractors than vice versa, suggesting that local biological motion signals can act as a basic preattentive feature for the human visual system. Interestingly, this search asymmetry disappeared when the global configuration of the biological motion was kept intact, indicating that the attentional effects arising from biological features (e.g., local motion signals) and from global novelty (e.g., an inverted human figure) can interact and modulate visual search. Our findings provide strong evidence for local biological motion processing independent of global configuration, and shed new light on the mechanisms of visual search asymmetry.
Acknowledgement: This research was supported by the Knowledge Innovation Program of the Chinese Academy of Sciences (KSCX2-YW-R-248 and 09CX202020), the Chinese Ministry of Science and Technology (No. 2007CB512300), and the US National Science Foundation.

33.434 Search asymmetry in perceiving walkers: an approaching walker is easier to find than a deviating walker
Kazuya Ono 1 (ono@real.tutkie.tut.ac.jp), Michiteru Kitazaki 1,2; 1 Department of Knowledge-based Information Engineering, Toyohashi University of Technology, 2 Research Center for Future Vehicle, Toyohashi University of Technology
In social situations, an observer needs to perceive the directions of other walkers; this is a dynamic social interaction of everyday life. We aimed to investigate human perception of walkers, in particular the perceptual ability to identify an approaching or a deviating walker among distracters. In Experiment 1, we presented 2, 4, or 6 human walkers (front view, smooth-shaded 3-dimensional computer graphics), one of whom approached the observer while the other walkers deviated 6, 12, 24, or 48 deg from the observer. Eight observers were asked to identify the approaching walker as accurately and quickly as possible. Reaction time increased with the number of walkers and with larger deviations of the distracters. In Experiment 2, we used inverted walkers and found that search efficiency was worse than for upright walkers. In Experiment 3, we presented 3, 4, or 6 walkers, one of whom approached (or deviated from) the observer while the other walkers deviated from (or approached) the observer, to investigate search asymmetry (the deviation angle was 6 or 12 deg). Identification of an approaching walker among deviating walkers was quicker than the opposite identification, particularly at the small deviation. In Experiment 4, we presented 6 walkers with 6, 30, or 60 deg deviations; the other methods were identical to Experiment 3. We found that the search asymmetry reversed at 30 and 60 deg deviations: at large deviations, identification of a deviating walker was quicker than of an approaching walker. These results suggest that perception of approaching/deviating walkers at small deviations differs from that at large deviations. The former may be related to social perception, in which an approaching walker is more important to the observer; the latter may be related to ordinary object perception, in which deviation properties are more important.
Acknowledgement: Supported by the Nissan Science Foundation and the Global COE program 'Frontiers of Intelligent Sensing'
33.435 Can you see me in the snow? Action simulation aids the detection of visually degraded human motions
Jim Parkinson 1 (parkinson@cbs.mpg.de), Anne Springer 1, Wolfgang Prinz 1; 1 Max Planck Institute for Human Cognitive and Brain Sciences
When viewing the actions of others, we often see them imperfectly: they briefly disappear from view or are otherwise obscured. Previous research shows that individuals generate real-time action simulations that aid prediction of an action's future course, for instance during brief occlusion from view (Graf et al., 2007). The current study investigates whether action simulation directly aids the perception of visually degraded actions. Dynamic human actions, such as a basketball shot, were presented using point-light (PL) actors embedded in a dynamic visual black-and-white noise background resembling "TV snow". The PL actor was clearly visible at first (1-1.5 s), then briefly disappeared (400 ms 'occlusion'), during which the participant generates a real-time action simulation, and then reappeared (360 ms test motion). Prior to occlusion, the PL actor's joints were easily visible squares of white pixels, but in the test motion the PL joints were composed of dynamic random white and black pixels. By changing the percentage of white versus black pixels in the joints, and thus varying their contrast against the noise background, the test motion was visually degraded. The test contrast was adjusted using an adaptive staircase method to measure contrast thresholds for detecting the appearance of the test motion. In the crucial manipulation, the test motion was either a natural progression of the motion as it would have continued during occlusion, thus temporally matching the simulation, or temporally shifted earlier or later (±300 ms). Contrast thresholds for detection were lower for natural than for shifted test motions, suggesting that when the visually degraded test motion temporally matches the simulation it is more easily detected. Overall, these results suggest that real-time simulation of human actions during occlusion aids the detection of visually degraded actions, indicating a strong perceptual role for action simulation in human motion processing.
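Several of the studies above (33.422, 33.432, 33.435) estimate thresholds with adaptive staircases. A minimal sketch of one common variant, a 2-down/1-up rule that converges near 70.7% correct; the specific rule and the simulated observer are assumptions, since the abstracts do not specify their procedures:

```python
import random

def two_down_one_up(start, step, n_trials, respond):
    """Minimal 2-down/1-up staircase. `respond(level)` should return
    True for a correct response at the given stimulus level."""
    level, correct_streak, history = start, 0, []
    for _ in range(n_trials):
        history.append(level)
        if respond(level):
            correct_streak += 1
            if correct_streak == 2:           # two correct -> harder
                level = max(level - step, 0.0)
                correct_streak = 0
        else:                                  # one error -> easier
            level = level + step
            correct_streak = 0
    return history

# Hypothetical observer whose accuracy rises with stimulus contrast:
random.seed(0)
def observer(contrast, threshold=0.3):
    p = 0.5 + 0.5 * min(contrast / (2 * threshold), 1.0)
    return random.random() < p

levels = two_down_one_up(start=0.8, step=0.05, n_trials=60, respond=observer)
print(sum(levels[-20:]) / 20)  # threshold estimate from the late trials
```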
Attention: Numbers and things
Orchid Ballroom, Boards 436–444
Sunday, May 9, 8:30 - 12:30 pm

33.436 Rapidly learned expectations alter perception of motion
Matthew Chalk 1 (m.j.chalk@sms.ed.ac.uk), Aaron Seitz 1, Peggy Series 1; 1 Edinburgh University
Expectations broadly influence our experience of the world. However, the process by which they are acquired, and by which they then shape our sensory experiences, is not well understood. Here, we set out to understand whether expectations of simple stimulus features can be developed implicitly through fast statistical learning and, if so, how they are combined with visual signals to modulate perception. On each trial, human participants were presented with either a low-contrast random-dot kinematogram moving coherently in a single direction, or a blank screen. They were tested on their ability to report the direction (estimation) and the presence (detection) of the motion stimuli. Participants were exposed to a bimodal distribution of motion directions in which two directions, 64° apart, were presented on a larger number of trials than the other directions. After a few minutes of task performance, participants perceived stimuli as moving in directions more similar to the most frequently presented directions than they actually were. Further, on trials where no stimulus was presented but participants reported seeing one, they were strongly biased to report motion in these two directions. No such effect was observed on trials where they did not report seeing a stimulus. Modelling of participants' behaviour showed that their estimation biases could not be well explained by a simple response bias, nor by more complex response strategies. On the other hand, the results were well accounted for by a model which assumed that participants solved the task using a Bayesian strategy, combining a learned prior over the stimulus statistics (the expectation) with their sensory evidence (the actual stimulus) in a probabilistically optimal way. Our results demonstrate that stimulus expectations are rapidly learned and can powerfully influence perception of simple visual features, both in the form of perceptual biases and in the form of hallucinations.
Acknowledgement: EPSRC, MRC

33.437 Seeing without Knowing: Three examples of the impact of unconscious perceptual processes
Shaul Hochstein 1 (shaul@vms.huji.ac.il), Anna Barlasov Ioffe 1, Michal Jacob 1, Einat Shneor 1; 1 Interdisciplinary Center for Neural Computation and Life Sciences Institute, Hebrew University, Jerusalem, 91904, Israel
While it is quite evident that we are not aware of all cortical activity, evidence is still sparse concerning what unconscious information is usable for task performance. The famous case of blindsight underscores the importance of this issue in brains with specific damage. We present three cases of the use of information of which (healthy) participants are not consciously aware.
1. Following brief presentation of four pacman-like forms, which describe a rectangle, a triangle (by three pacmen), or no form at all (e.g. with jumbled or outward-facing pacmen), subjects often report that they have not detected the illusory-contour form (when one was present), but they are nevertheless well above chance at guessing its shape.
2. When searching for a pair of identical patterns in an array of otherwise heterogeneous patterns (with the target present in 50% of trials), eye movement patterns reflect early concentration on the target (when present), well before participants are aware of its presence.
3. In a novel search task, performance is enhanced by the use of utrocular (eye-of-origin) information, of which participants are wholly unaware.
These examples demonstrate not only that information of which we are unaware is usable for task performance; they also point to the high-level nature of such unconscious information. As in Reverse Hierarchy Theory, these phenomena point to a site-independent neural correlate of conscious perception.
Acknowledgement: Israel Science Foundation (ISF)

33.438 Tracking of food quantities by coyotes (Canis latrans)
Kerry Jordan 1 (kerry.jordan@usu.edu), Joseph Baker 1, Kati Rodzon 1, John Shivik 2; 1 Department of Psychology, Utah State University, 2 Predator Ecology Research Center, Utah State University
What types of visual quantitative competencies do nonhuman animals possess in the absence of linguistic labels for quantity? A wealth of previous studies has identified approximate systems of number representation in various species, suggesting that we may share with other species a rough nonverbal numerical competence. Previous studies have demonstrated that the numerical discrimination abilities of these various species (including the nonverbal representations of humans) are mediated by the ratio between the numerical options; such approximate systems of quantification have been dubbed 'analog magnitude' representations of number (see Brannon and Roitman, 2003, for one review).
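The ratio dependence described in 33.438 is commonly formalized with a Gaussian analog-magnitude model, in which each quantity is represented with noise proportional to its magnitude. A minimal sketch (the Weber fraction w is an arbitrary stand-in, not an estimate from the coyote data):

```python
import math

def p_correct(n1, n2, w=0.25):
    """Analog-magnitude model: each quantity is a Gaussian with SD
    proportional to its mean (Weber fraction w, an assumed value).
    Discrimination accuracy then depends only on the ratio n1:n2."""
    num = abs(n1 - n2)
    den = w * math.sqrt(n1**2 + n2**2)
    # Probability of correctly choosing the larger set (normal CDF).
    return 0.5 * (1 + math.erf(num / (den * math.sqrt(2))))

print(p_correct(2, 4), p_correct(4, 8))  # same 1:2 ratio -> same accuracy
print(p_correct(6, 8))                   # harder 3:4 ratio -> lower accuracy
```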
The current experiment is the first to specifically test coyotes' quantitative discrimination abilities. In particular, we tested semi-free-ranging coyotes' ability to discriminate between large and small quantities of food items and investigated whether this ability conforms to the predictions of Weber's Law. We demonstrate herein that coyotes can reliably discriminate between large and small quantities of food. As predicted by Weber's Law, coyotes' numerical discrimination abilities are mediated by the numerical ratio between the large and small quantities of food. This trend is indicative of an analog magnitude system of number representation. Furthermore, in this task, coyotes were not discriminating large versus small quantities based on olfactory cues alone; instead, they were visually tracking quantity. Our results also indicate that coyotes do not show evidence of learning effects within this task; in other words, they do not perform better on trials completed first compared to trials completed last. In the future, we plan to conduct this same study with domestic dogs, in order to compare visual quantitative sensitivity between these two closely related species.

33.439 The Impact of Distracting Web Advertisements on Brand Awareness and Reading Comprehension
Evan Palmer 1 (evan.palmer@wichita.edu), Carolina Bates 1, Anjana Rajan 1, Andrew Miranda 1; 1 Human Factors Program, Department of Psychology, Wichita State University
It has been known for many years that moving objects and salient colors capture attention. Many websites have colorful, animated advertisements that are intended to attract users' interest. Previous work has evaluated the impact of distracting ads on brand awareness, but there is little research on the impact of distracting ads on reading comprehension, even though distracting ads are commonly placed near passages of text on websites. We investigated the impact of various forms of advertisements on both brand awareness and reading comprehension. In the first phase of the experiment, participants attempted to verbally name 24 (of 100 possible) brand logos, and voice onset response time was measured. Logos were modified to contain no text or other identifying characteristics


33.439 The Impact of Distracting Web Advertisements on Brand Awareness and Reading Comprehension
Evan Palmer1 (evan.palmer@wichita.edu), Carolina Bates1, Anjana Rajan1, Andrew Miranda1; 1Human Factors Program, Department of Psychology, Wichita State University
It has been known for many years that moving objects and salient colors capture attention. Many websites have colorful, animated advertisements that are intended to attract users' interest. Previous work has evaluated the impact of distracting ads on brand awareness, but there is little research about the impact of distracting ads on reading comprehension, even though distracting ads are commonly placed near passages of text on websites. We investigated the impact of various forms of advertisements on both brand awareness and reading comprehension. In the first phase of the experiment, participants attempted to verbally name 24 (of 100 possible) brand logos and voice onset response time was measured. Logos were modified to contain no text or other identifying characteristics (e.g., the Red Bull logo was not used because it contains two red bulls). In the second phase of the experiment, participants read 24 modified encyclopedia passages while one of four ad types was displayed next to the text. The four possible ad types were: no ad, static ad, color salient ad, or animated ad. Following each passage, participants answered four multiple-choice questions and their response time and accuracy were recorded. In the final phase of the experiment, participants named the 24 brand logos again and priming was calculated. Results indicate that animated ads received the most name priming, followed by color salient ads. Static ads received no more priming than the no-ad baseline condition. Despite the fact that animated ads were more attended to (as indexed by name priming), reading comprehension while viewing animated ads was slightly better than in the other three ad conditions. Informal reports from participants indicated that animated ads were the most distracting, yet they produced the highest brand recognition and did not impact reading comprehension.

33.440 Clarifying the role of gaze cueing using biologically natural and unnatural gazes
Steven L. Prime1 (prime@cc.umanitoba.ca), Jonathan J. Marotta1; 1Department of Psychology, University of Manitoba
Previous studies have reported reflexive attention shifts using a symbolic cue of a schematic face with eyes moving to the left or right. It is thought that these gaze cues elicit reflexive orienting because the eyes play an important role in social cognition. However, there remains conflicting evidence regarding the type of orienting produced by gaze cues. Here, we sought to clarify the role of gaze cues in attentional orienting by testing the extent to which the gaze cue effect depends on biologically natural gazes. Subjects were presented with a line drawing of a natural or unnatural face looking left, right, or straight ahead. In the 2-eye condition both eyes looked in the same direction. In the 1-eye condition only one eye looked left or right and the other eye looked straight ahead. In the Cyclops condition the face had only one looking eye. Then a target (an F or T) appeared on either side of the face. The cue-target onset asynchrony (CTOA) was randomized (105 ms, 300 ms, 600 ms, or 1005 ms). All cues were uninformative and subjects were told the direction of gaze did not predict target location. Subjects made speeded button-press responses to identify the letter. Results show that reaction times (RTs) in the 2-eye condition were faster for valid cues at the 300 ms and 600 ms CTOAs, indicating that biologically natural gaze cues can elicit reflexive attentional orienting. RTs in the 1-eye condition and the Cyclops condition were only faster for valid cues at the 1005 ms CTOA, suggesting that biologically unnatural gaze cues involve goal-driven attentional orienting. Overall RTs were slowest in the Cyclops condition and fastest in the 2-eye condition. Our findings further clarify the role gaze cues play in attention and suggest a specialized brain mechanism for attentional orienting in response to biologically natural gaze shifts.

33.441 Attraction without distraction: Effects of augmented reality cues on driver hazard perception
Mark Schall Jr.1 (mark-schall@uiowa.edu), Michelle Rusch1,2, John Lee3, Shaun Vecera4, Matt Rizzo2,1; 1University of Iowa, Department of Mechanical and Industrial Engineering, 2University of Iowa, Department of Neurology, 3University of Wisconsin-Madison, Department of Industrial and Systems Engineering, 4University of Iowa, Department of Psychology
Introduction: Collision warning systems use alerting cues to enhance awareness of and response to hazards (Ho & Spence, 2005; Scott & Gray, 2008). These cues are meant to attract attention, yet may be distracting due to masking. This study evaluated effects of: 1) static visual cues (solid shape) and 2) graded dynamic visual cues that converged around approaching targets. We hypothesized that cues would reduce the RT required to recognize potential hazards (e.g., pedestrians).
Methods: Six young drivers (Mean = 25 years, SD = 5; males = 3, females = 3) drove five simulated straight rural roadways under three conditions (static cued; dynamic cued; uncued). We examined RT from when a potentially hazardous target event (90 trials) first appeared to when the driver detected it. Subjects were also tested on detection of non-target (peripheral) objects (60 trials) that appeared on the roadside opposite the targets (forced-choice questions).
Results: There was a main effect of condition on the RT (seconds) to perceive potential hazards (F(2,22) = 6.02) and no effect on periphery accuracy (F(2,22) = 0.23). The RT for the uncued condition (Mean = 3.18, SE = 0.41) was faster than for the static condition (Mean = 4.79, SE = 0.52, p = 0.002), but was not different from the dynamic condition (Mean = 3.44, SE = 0.52, p = 0.59). The RT was lower for the dynamic condition versus the static condition (p = 0.03).
Conclusions: Results did not show direct RT benefits for the tested AR cues. In fact, static AR cues increased RT for detecting hazards. This was likely due to local (lateral) masking or obstruction. AR cues did not impair perception of non-target objects in the periphery. The study was limited due to task simplicity and excessive cue salience. A follow-up study is addressing these limitations using a more difficult (dual) task and more ecologically congruent AR cues.
Acknowledgement: Supported by NIH grant R01AG026027

33.442 Attentional shifts due to irrelevant numerical cues: Behavioral investigation of a lateralized target discrimination paradigm
Christine Schiltz1 (christine.schiltz@uni.lu), Giulia Dormal1, Romain Martin1, Valerie Goffaux1,2; 1EMACS Unit, FLSHASE, University of Luxemburg, Luxemburg, 2Department of Neurocognition, University of Maastricht, The Netherlands
Behavioural evidence indicates the existence of a link between numerical representations and visuo-spatial processes. A striking demonstration of this link was provided by Fischer and colleagues (2003), who reported that participants detect a target more rapidly in the left hemifield if it is preceded by a small number (e.g. 2 or 3) and more rapidly in the right hemifield if preceded by a large number (e.g. 8 or 9). This is strong evidence that numbers orient visuo-spatial attention to different visual hemifields (e.g., left and right) depending on their magnitude (e.g., small and large, respectively). Here, we sought to replicate number-related attentional shifts using a discrimination task. The participants (n = 16) were presented 1 digit (1, 2 vs. 8, 9) at the centre of the screen for 400 ms. After 500 ms, 1000 ms or 2000 ms, a target was briefly flashed in either the right or left hemifield and participants had to report its colour (red or green). They were told that the central digit was irrelevant to the task. We hypothesized that the attentional shift induced by the centrally presented numbers should produce congruency effects in the target discrimination task, such that small (or large) numbers would facilitate the processing of left (or right) targets. Our results confirmed this prediction, but only for the shortest digit-target interval (500 ms). This is supported by a significant interaction between number magnitude (small/large) and target hemifield (left/right). The link between numerical and spatial representations further predicts a positive relation between number magnitude and the difference in RT between left and right targets. Regression slopes were computed individually and a positive slope was obtained for the short number-target interval. These findings indicate that the attentional shifts induced by irrelevant numerical material are independent of the exact nature of target processing (discrimination vs. detection).

33.443 Looking ahead: Attending to anticipatory locations increases perception of control
Laura Thomas1 (laura.e.thomas@vanderbilt.edu), Adriane Seiffert1; 1Vanderbilt University
According to the theory of apparent mental causation (Wegner & Wheatley, 1999), people are more likely to perceive themselves as in control of a particular action when thoughts about this action occur before the action itself. This priority hypothesis suggests a potential relationship between visual attention and the perception of control. In two experiments, we tested the hypothesis that observers would feel more control over an object if we directed them to pay attention to a location where the object was headed. Participants attempted to keep a moving object inside a narrow vertical path as it moved upwards for five seconds. The object took random steps to the left and right that participants could counter with key presses. We varied the participants' objective level of control over the object across trials and asked participants to rate their subjective feeling of control over the object at the end of each trial. We directed participants' visual attention to particular locations along the object's path by having them discriminate the color of a flash that was briefly presented during the task. In Experiment 1, participants reported greater subjective feelings of control when the flash appeared where the object was headed than when it appeared where the object had already been. The results of Experiment 2 showed that participants reported the highest levels of control when a brief autopilot function steered the object directly over the flash location. Taken together, these results suggest that we perceive more control over objects when they move to where we are attending: If an object goes where we are looking, we feel like we made it go there. Although some researchers have primarily employed the theory of apparent mental causation to study high-level metacognitive issues, these experiments demonstrate the theory's relevance to vision science.


33.444 Thinking of God Moves Attention
Alison L. Chasteen1 (chasteen@psych.utoronto.ca), Donna C. Burdzy1, Jay Pratt1; 1Department of Psychology, University of Toronto
How strongly do we associate "God" and "Devil" with our physical world? Humans have long used spatial metaphors for abstract concepts of the divine, ranging from Mt. Olympus and the underground Hades in ancient Greece to the current conceptions of Heaven and Hell. Such metaphors are useful as they provide a common metric, physical space, to which abstract information can be bound and communicated to other people. Indeed, such spatial metaphors are so pervasive in divine concepts that many religious and cultural traditions have representations in either or both vertical and horizontal space. Given the reliance on spatial metaphors in concepts of the divine, it is possible that merely thinking of concepts of God or Devil might invoke brain activity associated with the processing of spatial information and orient people's attention to associated locations. To examine if exposure to divine concepts shifts visual attention, participants completed a target detection task in which they were first presented with God- and Devil-related words. We found faster RTs when targets appeared at locations compatible with the concepts of God (up/right locations) or Devil (down/left locations), and also found that these results do not vary by participants' religiosity. These results demonstrate that even a highly abstract concept such as God can lead individuals to orient their attention to spatially compatible locations. These findings provide further evidence that the traditional view of exogenous and endogenous attentional processes may not be adequate, as divine concepts generated involuntary shifts of attention without any corresponding peripheral events. Moreover, these results add further support to the notion that abstract concepts like the divine rely on metaphors that contain strong spatial components.

Search: Learning, memory and context
Orchid Ballroom, Boards 445–457
Sunday, May 9, 8:30 - 12:30 pm

33.445 Training, Transfer, and Strategy in Structured and Unstructured Camouflage Search Environments
Daniel Blakely1 (blakely@psy.fsu.edu), Walter Boot1, Mark Neider2; 1Department of Psychology, Florida State University, 2Beckman Institute, University of Illinois at Urbana-Champaign
The visual scenes we search every day are far more complex than typical search paradigms. Recent research has addressed this by examining the role target-background similarity plays in search. Previous studies of target-background similarity (camouflage) have utilized a paradigm that includes a complex background created from tiled square segments of the target object (Boot, Neider, & Kramer, 2009). These studies have found large improvements with training and transfer to novel camouflage stimuli. Interestingly, participants were biased to look at salient non-target objects rather than the target-similar background. Is this a true object bias? An alternative explanation is that the regular, crystalline structure of the background encouraged participants to fixate breaks in this pattern (i.e., salient objects). It is possible the high degree of transfer observed was a result of this strategy. We developed a modified paradigm to address these issues. Backgrounds were created through the random placement of geometric cutouts of the target object to remove target location cues provided by breaks in a patterned background. Error rates and reaction times were increased compared to search performance with structured backgrounds, suggesting structure was important in previous studies. Fixations on the randomized backgrounds were significantly greater, suggesting that previous evidence of an object bias in camouflage search may have been attributable to search strategies developed specifically for structured backgrounds. An additional training study examined search improvement and transfer in this more difficult task. Participants were trained to find camouflaged targets embedded within structured or unstructured randomized backgrounds. After four sessions of training, all participants searched for novel targets embedded within unstructured backgrounds. Preliminary results suggest that transfer of training to novel stimuli is much more limited when participants have to search unstructured camouflage environments. These results have theoretical implications for object-based conceptions of attention and may have important applied implications as well.

33.446 The influence of expertise on comparative visual search performance
Vera Bauhoff1 (v.bauhoff@iwm-kmrc.de), Markus Huff1, Stephan Schwan1; 1Knowledge Media Research Center Tübingen
From studies using the comparative visual search paradigm it is known that there is a trade-off between inter-hemifield gaze shifts and visual short-term memory (VSTM) when searching for differences between two simultaneously presented displays. These gaze shifts were calculated from eye and head movements between two images. Hardiess, Gillner and Mallot (2008) built their visual search task from two shelves filled with objects that differed in shape and color, presented at a distance between 30° and 120°. The results showed the trade-off, based on a smaller number of shifts in greater-distance conditions, suggesting higher working memory load. We extended their findings toward more complex materials, namely stills of pendulum clocks. The participants were asked to find differences between two images. The presentation distance varied between 30° and 120°. Furthermore, the factor expertise was varied to examine possible effects of prior knowledge on search effectiveness. As the dependent variable we measured inter-hemifield gaze shifts. The experiment comprised two blocks: after the first block, half of the participants were given relevant information about the mechanical principles of a pendulum clock. The other half received irrelevant information. We hypothesized that expert knowledge increases search effectiveness, as participants are able to encode larger information chunks. We were able to replicate former findings with more complex material. An increased distance leads to a reduced number of gaze shifts, suggesting both more effort for gaze shifts and more use of VSTM in large-distance conditions. Additionally, there was no influence of expertise on search behavior. In both the relevant and irrelevant information conditions participants showed higher performance in the second block, suggesting a general change of strategy that is independent of prior knowledge concerning the function of a pendulum clock. Consequently, we infer a powerful robustness of the trade-off effect in comparative visual search tasks.
Acknowledgement: Supported by LANDESSTIFTUNG Baden-Württemberg grant to MH

33.447 History repeats itself: A role for observer-dependent scene context in visual search
Barbara Hidalgo-Sotelo1 (bhs@mit.edu), Aude Oliva1; 1Department of Brain and Cognitive Sciences, MIT
Eye guidance during visual search and naturalistic scene exploration is based on combining information from image-based cues and top-down knowledge (e.g. target features, Zelinsky, 2008; scene context region, Torralba et al., 2006). It is not known whether previous searches of a scene contribute to search guidance. How much information is gained by knowing an observer's history of searching familiar scenes? To probe this question, we recorded eye movements while observers searched for a camouflaged book in indoor scenes (100% target prevalence). In the repeated condition, scenes were searched 8 times by the same observer, while in the novel condition each scene was searched once. This large dataset of search fixations was used to evaluate the similarity between scene locations fixated during an observer's repeated search relative to novel searches of the same scene. An ROC analysis was used to evaluate how accurately fixated locations were predicted by distributions representing several types of top-down knowledge: (1) Categorical scene context: fixations drawn from a different observer's search of a novel scene; (2) Learned scene context: fixations drawn from a different observer's repeated searches of the scene; and (3) Observer-dependent scene context: fixations from one observer's repeated searches of the scene. The results reported below used the first three search fixations of each trial, but similar results were obtained using the first fixation exclusively. Categorical scene context predicted fixated locations of different, novel searchers with a high degree of accuracy (84%). Learned scene context, based on a different searcher's repeated fixations, was similarly accurate (85%). Observer-dependent scene context, interestingly, provided a significant improvement in prediction accuracy relative to baseline controls and other forms of context (90%). In summary, having an observer's history of search fixations in a specific scene provides, on average, more accurate and less variable predictions of where that observer is likely to look.
Acknowledgement: NSF CAREER award (0546262) to A.O. and NEI Training Grant to B.H.S.
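The ROC analysis described above reduces to scoring fixated locations against control locations under a prediction map built from other fixations. A minimal Python sketch (illustrative only: the function names, Gaussian smoothing, and uniform random controls are assumptions, not the authors' implementation):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def fixation_map(fixations, shape, sigma=30):
        """Smoothed prediction map built from a set of (row, col) fixations."""
        m = np.zeros(shape)
        for r, c in fixations:
            m[r, c] += 1
        return gaussian_filter(m, sigma)

    def fixation_auc(pred_map, test_fixations, n_controls=10000, seed=0):
        """ROC area: probability that the map value at a fixated location
        exceeds the value at a random control location (0.5 = chance)."""
        rng = np.random.default_rng(seed)
        pos = np.array([pred_map[r, c] for r, c in test_fixations])
        neg = pred_map[rng.integers(0, pred_map.shape[0], n_controls),
                       rng.integers(0, pred_map.shape[1], n_controls)]
        # Mann-Whitney formulation of the ROC area, counting ties as half
        return ((pos[:, None] > neg[None, :]).mean()
                + 0.5 * (pos[:, None] == neg[None, :]).mean())

Each of the three context types then corresponds to a different choice of fixations passed to fixation_map: another observer's novel search (categorical), other observers' repeated searches (learned), or the same observer's own earlier searches of that scene (observer-dependent).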


33.448 Observers are inconsistent and inaccurate in judging their own visual detection ability at different retinal locations
Camille Morvan1,2,3 (camille.morvan@gmail.com), Hang Zhang1,2, Laurence Maloney1,2; 1Department of Psychology, New York University, 2Center for Neural Science, New York University, 3Department of Psychology, Harvard University
Background: Recent computational models of human visual search presuppose that the visual system has access to accurate estimates of visual detection ability for stimuli at different retinal locations (e.g. Najemnik & Geisler, Nature, 2005). To test this assumption, we designed a decision task that revealed subjects' estimates of their performance for different contrast levels at different retinal eccentricities.
Methods: In a calibration session, we mapped the subject's probability of correct response as a Weibull function of retinal eccentricity for targets at each of three contrast levels (low, medium, high). The subjects also learned to associate a color symbol with each contrast level. In the subsequent decision part, we asked subjects to choose between two possible combinations of contrast and eccentricity, e.g. low contrast at 2 degrees or high at 10 degrees. They knew they would actually attempt some of their preferred choices at the end of the experiment and earn monetary rewards for correct responses. We used a staircase procedure to measure the point of subjective indifference between targets that differed in contrast, one fixed in eccentricity and the other varied in eccentricity. For each subject we used 12 such staircases (3 contrasts x 4 probabilities) and estimated the eccentricity that the subject considered to be equally detectable for the variable contrast. Eight naïve subjects participated.
Results: Despite their calibration experience, all eight observers matched probabilities incorrectly, with 0.14 mean error over all observers and conditions (theoretical upper limit 0.35). The matching failures showed a common pattern of underestimating the difference between high and low contrasts.
Conclusion: Observers exhibited little knowledge of their visual detection ability as a function of contrast and retinal eccentricity. We find no evidence that the visual system has access to accurate estimates of detection ability for different types of targets at different eccentricities.
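The calibration stage fits accuracy as a Weibull function of eccentricity. One common parameterization, written so that performance falls toward chance as eccentricity grows (an illustrative form; the abstract does not specify the exact parameterization or fitted values):

    P(\text{correct} \mid e) = \gamma + (1 - \gamma)\, e^{-(e/\tau)^{\beta}},

where \gamma is the guessing rate, \tau sets the eccentricity at which accuracy has dropped most of the way toward chance, and \beta controls steepness. One such curve is fit per contrast level; the decision task then asks which eccentricity on one curve the observer treats as equivalent to a fixed contrast-eccentricity pair on another.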
33.449 Altering the rate of visual search through experience: The case of action video game players
Bjorn Hubert-Wallander1 (bwallander@bcs.rochester.edu), C. Shawn Green2, Michael Sugarman1, Daphne Bavelier1; 1Department of Brain & Cognitive Sciences, University of Rochester, 2Department of Psychology, University of Minnesota
Many aspects of endogenous visual attention are enhanced following habitual action video game play. For example, those who play fast-paced action video games (such as Halo or Call of Duty) have demonstrated superior performance on tasks requiring sustained attention to several objects, as well as enhanced selective attention in time and in space (Hubert-Wallander, Green, and Bavelier, under review). However, using one of the diagnostic tasks of the efficiency of visual attention, a visual search task, Castel and collaborators (2005) reported no difference in visual search rate, proposing that action gaming may change response time execution rather than visual selective attention itself. Here we used two hard visual search tasks, one measuring reaction time and the other accuracy, to test whether visual search rate may be changed by action video game play. In each case, we found faster search rates in the gamer group as compared to the non-gamer controls. We then contrasted these findings with a study of exogenously driven attentional processes. No differences were noted across groups, suggesting that the neural mechanisms subserving the willful and flexible allocation of attentional resources may be more susceptible to training than the processes by which attention is exogenously summoned.
Acknowledgement: This research was supported by grants to D. Bavelier from the National Institutes of Health (EYO16880) and the Office of Naval Research (N00014-07-1-0937).

33.450 Effects of high-level ensemble representations on visual search
Amrita Puri1 (ampuri@ucdavis.edu), Shelley Morris1, Jason Haberman1,2, Jason Fischer1,2, David Whitney1,2; 1Center for Mind and Brain, UC Davis, 2Department of Psychology, UC Davis
The visual system's ability to extract ensemble representations from cluttered scenes has been demonstrated for low-level features as well as high-level object properties such as facial expression (Ariely, 2001; Chong & Treisman, 2003; Haberman & Whitney, 2007; Parkes et al., 2001). Our previous work suggests that ensemble information influences visual search efficiency: participants were faster to detect a target face when its expression deviated substantially from the mean of the set, but only within sets containing low rather than high variance in expression (Puri et al., 2009). Thus, the relatively precise summary representations known to arise under low-variance conditions (Dziuk et al., 2009) may provide a basis for deviance detection. Here we tested whether an individual's ability to extract summary information from low- compared to high-variance sets predicts the degree of benefit when searching for deviant targets under the two variance conditions. Participants estimated the mean expression of face sets with either low or high variance. In a separate task, the same participants searched for a particular identity within low- and high-variance sets; the expression of the target face could be either near or far from the mean expression of the set. Across participants, the difference in mean estimation performance for low- versus high-variance sets was correlated with the relative benefit for detection of deviant targets within low- versus high-variance sets. In addition, within individuals, search times were more positively correlated across two separate presentations of the same display when the variance of the set was low. These results suggest that readily extracted ensemble representations enhance deviance detection. Furthermore, the availability of ensemble information may contribute to consistencies in search behavior within an individual.

33.451 Reducing Satisfaction of Search Errors in Visual Search
Kait Clark1 (kait.clark@duke.edu), Mathias S. Fleck1, Stephen R. Mitroff1; 1Center for Cognitive Neuroscience, Department of Psychology & Neuroscience, Duke University
Several occupations rely upon the ability to accurately and efficiently perform visual search. For example, radiologists must successfully identify abnormalities, and airport luggage screeners must recognize threatening items. By investigating various aspects of such searches, psychological research can reveal ways to improve search performance. We (Fleck, Samei, & Mitroff, in press) have recently focused on one specific influence that had previously been explored almost exclusively within the study of radiology: Satisfaction of Search (SOS), wherein the successful detection of one target can reduce detection of a second target in the same search array. To eliminate SOS errors, we must delineate the sources of the errors. Combined with our prior work, here we examine the specific roles of target heterogeneity and the decision-making component of target-distractor discrimination (i.e., how easy it is to determine whether a stimulus is a target or a distractor). In the current experiments, subjects searched arrays of line-drawn objects for targets of two different categories (tools and bottles) amongst several categories of distractor objects. On any given trial, there could be no targets, one target (either a tool or a bottle), or two targets (both a tool and a bottle). The relative occurrence of these trial types varied across experiments. Compared to previous experiments that employed homogeneous targets and that required effortful evaluation to discriminate targets from distractors, the SOS effect was reduced here. That is, search accuracy for easily discriminable, heterogeneous objects was no worse for dual-target trials than for single-target trials. These results suggest that target heterogeneity and target-distractor discriminability may both play key roles in multiple-target search accuracy.
Acknowledgement: Army Research Office, Institute for Homeland Security Solutions

33.452 Memory and attentional guidance in contextual cueing
Steven Fiske1 (sfiske@mail.usf.edu), Thomas Sanocki1; 1Department of Psychology, University of South Florida
What is the mechanism that underlies contextual cueing? The effect was initially thought to be the product of memory for the repeated context guiding attention to the target location. Recent work has disputed this explanation, pointing to the absence of decreased search slopes (derived from Response Time x Set Size functions) in contextual cueing, a criterion used for establishing the presence of attentional guidance in standard search. However, we argue that the candidate source of guidance in contextual cueing, memory for the repeated displays, is fundamentally different than that of standard search tasks. It is this difference, rather than a lack of attentional guidance, that explains the failure to observe decreased search slopes in contextual cueing. While the quality of guidance derived from feature dimensions of the display in standard search is constant across set sizes, as set size increases in a contextual cueing task, so does the burden on the memory system. I.e., the smaller set size displays are easier to


Previous research has given support to the so-called "stare-in-the-crowd effect", the notion that a direct gaze "pops out" in a crowd and can be more easily detected than averted gaze. This processing advantage is thought to be due to the importance of gaze contact for social interactions. However, these studies bore little ecological validity as they used search paradigms in which arrays of two-dimensional pairs of eyes were presented on a computer screen. The purpose of the present research was to investigate whether this processing advantage for direct gaze could be seen in more realistic settings such as in a virtual environment. Participants were required to locate the person with a direct (or averted) gaze presented amongst three other persons with averted (or direct) gaze. This was done either in 2D (on a flat computer screen), in 3D-no context (i.e. a blank virtual world) or in 3D with context (a virtual elevator). For the 3D conditions, participants wore a head-mounted display which immersed them in a virtual world. Results indicated slower reaction times when the task was done in three rather than two dimensions, and even slower RTs with the addition of meaningful context. No overall effect of target gaze was found, but an interaction with target position was observed due to faster and more accurate detection of direct over averted gaze when targets were presented in the right visual field. When targets were in the far left visual field, however, the effect was reversed and averted gaze was more quickly detected than direct gaze. These findings suggest that detecting gaze direction in the real world mostly depends on spatial position. In other words, direct gaze does not always "pop out".

Face perception: Emotional processing
Vista Ballroom, Boards 501–516
Sunday, May 9, 8:30 - 12:30 pm

33.501 Cortical and Subcortical Correlates of Nonconscious Face Processing
Vanessa Troiani1 (troiani@mail.med.upenn.edu), Elinora Hunyadi1, Meghan Riley1, John Herrington1, Robert Schultz1; 1Center for Autism Research, Children's Hospital of Philadelphia
Paradigms that provide independent input to each eye (e.g. binocular rivalry) have been used to test the role of subcortical visual processing streams and establish the boundaries of visual awareness. These methods have advantages over backward masking, which is insufficient for complete disruption of the ventral visual pathway. The current fMRI study presented images of faces and houses that were rendered subliminal via binocular rivalry combined with flash suppression and an orthogonal task, with the ultimate objective of examining subcortical pathways involved in the perception of social stimuli.
During fMRI data collection, 12 young adult participants wore anaglyph glasses and viewed centrally presented supraliminal words on a sharply moving checkerboard. Participants identified the first letter of each word as a consonant or vowel. Fearful faces and houses were presented to the non-dominant eye and suppressed from conscious awareness. Catch trials determined if and when participants perceived the subliminal stimuli; only data acquired prior to onset of awareness were analyzed.
Whole-brain, mixed-model GLM analyses found significantly greater activation for subliminal faces versus subliminal houses in precuneus and left inferior parietal cortices. An a priori ROI analysis of bilateral amygdalae revealed a significantly greater left amygdala response for subliminal faces. Psychophysiological interaction (PPI) analyses of individually defined left amygdala showed task-dependent correlations with bilateral pulvinar and early visual cortices. Previous findings have implicated the amygdala and pulvinar in subliminal threat and saliency detection, respectively. While spatial resources are typically recruited in supraliminal vision, these data suggest that precuneus and parietal cortices are activated prior to social stimulus awareness. We suggest this response to detection of environmentally relevant stimuli also serves a preparatory role in spatial resource allocation for subsequent behavior. Ultimately, present data cast some doubt on the distinction typically made between subcortical and cortical pathways in subliminal perception of social stimuli.
Acknowledgement: 5R01MH073084-06 (PI: Schultz), NSF Graduate Fellowship
33.502 Separate neural loci are sensitive to facial expression and facial individuation
Xiaokun Xu1 (xiaokunx@usc.edu), Irving Biederman1,2; 1Department of Psychology, University of Southern California, 2Program in Neuroscience, University of Southern California
A network of several face areas, defined by their greater activation to faces than to non-face objects, has been reported in the cortices of both macaques and humans, but their functionality is somewhat uncertain. We used fMRI adaptation (fMRIa) to investigate the representation of viewpoint, expression and identity of faces in the fusiform face area (FFA), the occipital face area (OFA), and the superior temporal sulcus (STS). In each trial, subjects viewed a sequence of two computer-generated faces and judged whether they depicted the same person (1/5 of the trials were in fact different, but highly similar individuals). On all trials, the second face was translated in an uncertain direction even when the faces were identical. Among the images of the same person, the two images could vary in viewpoint (~15° rotation in depth) and/or in expression (e.g., from happy to angry). Critically, the physical similarity of a view change and an expression change for each face was equated by the Gabor-jet metric, a measure that predicts almost perfectly similarity effects on discrimination performance. We found that a change of expression, but not a change of viewpoint, produced a significant release from adaptation (compared to the identical, translated face) in FFA. In addition, a change of identity produced an even stronger release. In contrast, OFA was not sensitive to either expression or viewpoint change, but did show a release from adaptation to an identity change. These results are consistent with Pitcher et al.'s (2009) finding that TMS applied to OFA disturbs face identification, but are not consistent with a model (Haxby et al., 2000) that assumes that FFA is insensitive to emotional expression.
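The Gabor-jet metric used here to equate physical similarity is built from magnitudes of multiscale, multi-orientation Gabor filters sampled over the image and compared across images. A rough Python sketch of that idea (the filter counts, sizes, and grid spacing below are generic placeholders, not the published model's parameters):

    import numpy as np
    from scipy.signal import fftconvolve

    def gabor_kernel(size, wavelength, theta, sigma):
        """One complex Gabor: a grating at orientation theta under a Gaussian."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        return env * np.exp(1j * 2 * np.pi * xr / wavelength)

    def jet(img, scales=4, orients=8, grid_step=10):
        """Filter-magnitude responses sampled on a coarse grid, as one vector."""
        parts = []
        for s in range(scales):
            for o in range(orients):
                k = gabor_kernel(31, 4.0 * 2**s, o * np.pi / orients, 2.0 * 2**s)
                mag = np.abs(fftconvolve(img, k, mode="same"))
                parts.append(mag[::grid_step, ::grid_step].ravel())
        return np.concatenate(parts)

    def gabor_jet_similarity(img_a, img_b):
        """Similarity of two grayscale, same-size images as jet correlation."""
        return np.corrcoef(jet(img_a), jet(img_b))[0, 1]

Image pairs matched on such a similarity score are, to a first approximation, equally discriminable at the level of V1-like filtering, which is what permits comparing a view change against an expression change on equal footing.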
33.503 Affective Information Affects Visual Consciousness
Erika Siegel1 (siegelea@bc.edu), Eric Anderson1, Lisa Feldman Barrett1; 1Boston College
Gossip can be thought of as a form of affective information about who is friend or foe. Recent evidence indicates that, as a way of learning about the "value" of a person, gossip influences how human beings evaluate each other. In the current research, we show that gossip does not just impact how a face is evaluated; it impacts whether or not a face is seen in the first place. Structurally neutral faces were paired with negative, positive, or neutral gossip. When viewed later, faces previously paired with negative (but not positive or neutral) gossip were prioritized in consciousness in a binocular rivalry procedure. These findings demonstrate that gossip, as a form of affective information, can influence vision in a completely top-down manner, independent of the basic visual features of a face.
Acknowledgement: Army Research Institute

33.504 Contrast-Negation Impairs Gender but Not Emotion Discrimination
Pamela Pallett1 (ppallett@ucsd.edu), Ming Meng1; 1Dartmouth College
Bruce and Young's (1986) model of face recognition proposed that facial identity and emotional expression are processed independently. Yet, it has been argued that certain processes, such as the perception of configural information, are an important part of both face recognition and expression perception that can be marred by contrast negation and inversion (Calder & Jansen, 2005; Hole, George, & Dunsmore, 1999). To further investigate the proposed dichotomy between expression perception and the encoding of other facial attributes, we systematically measured threshold sensitivity to differences in gender and emotion in positive-contrast vs. contrast-negated faces. Previous studies have shown that gender perception is mediated primarily by the fusiform gyrus, inferior occipital cortex, and cingulate gyrus (Ng, Ciaramitaro, Anstis, Boynton, & Fine, 2006), which largely overlap with the neural pathways underlying face recognition. In contrast, processing of facial expression involves both cortical and subcortical pathways (e.g. amygdala). We predicted that, like recognition, contrast-negation may impair gender discrimination. However, contrast-negation may not necessarily impair emotion discrimination. Accordingly, our participants displayed substantially decreased sensitivity to variation in gender with contrast-negation, but no change in sensitivity when discriminating levels of anger or fear. Although a t-test indicated a significant decrement in sensitivity to levels of happiness with contrast negation, the decrement was significantly less than that observed with gender and not significantly different from the non-significant decrements observed with anger and fear. Moreover, response times decreased with contrast-negation only for gender discriminations. Contrast negation disrupts the otherwise highly stable ordinal luminance relations between a few face regions (Gilad, Meng, & Sinha, 2009). Our results suggest that these luminance relations may be important for gender discrimination but not necessarily for emotion discrimination, highlighting separate visual processing of facial expression.
33.505 Dynamic Shifts in the Criteria for Facial Expression Recognition
Jun Moriya1 (morimori@cbs.c.u-tokyo.ac.jp), Yoshihiko Tanno1; 1The University of Tokyo
An individual's ability to recognize facial expressions is influenced by exposure to a certain emotional expression over a long period or by prolonged exposure to a prototypical facial expression. This study revealed that the recognition of facial expressions varied according to exposure to non-prototypical facial expressions over a relatively short period. After being exposed to the faces of anger-prone individuals, whose morphed faces frequently expressed anger, the participants more frequently perceived the expression on the face as happy. On the other hand, after being exposed to the faces of happiness-prone individuals, whose morphed faces frequently expressed happiness, the participants more frequently perceived the face as angry. In addition, we found a relative increase in the social desirability of happiness-prone individuals after the exposure. These results show that people dynamically become sensitive to changes in facial expressions by adapting to the exposed facial expressions over a short period.

33.506 How fast can we recognize facial expressions of emotion?
Aleix Martinez1 (aleix@ece.osu.edu), Shichuan Du1; 1The Ohio State University
We use a set of 161 images, corresponding to a total of 23 individuals, each displaying one of six emotions (anger, sadness, fear, surprise, happiness and disgust) in addition to neutral. All images were of 80x120 pixels. We extended this set by reducing all the images to 40x50 pixels, yielding a total of 322 images. We then designed a staircase procedure as follows. A fixation cross appeared for 500 ms, followed by a randomly selected image from our set. The image was first displayed for a total of t = 50 ms. After a 500 ms mask, subjects were instructed to respond with the perceived emotion. If the subject's response is correct, the exposure time t for that particular emotion is decreased. Otherwise, it is increased. To determine these increments/decrements, we assume that the value of t will converge to its right value (i.e., after several trials, the value of t will oscillate about the time threshold required to achieve recognition). The results show that happiness is the fastest to be recognized (23-28 ms) and that this value does not change as the image size decreases. Neutral, disgust and surprise form a second group requiring additional time (3 to 4 times longer than happiness) and with a minimal increase of processing time as the size of the percept is reduced. Fear requires the same time as this second group, but its processing time increases dramatically as the percept decreases in size. Finally, sadness and anger constitute the group requiring the longest time for recognition, about 10 times slower than happiness. These results show that the recognition of emotions has evolved differently for distinct emotions, suggesting an adaptation to some evolutionary needs.
Acknowledgement: National Science Foundation
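The adaptive rule described above, decrease the exposure after a correct response and increase it after an error, is a classic one-up/one-down staircase run separately for each emotion. A minimal Python sketch (the multiplicative step size, starting value, floor, and threshold estimator are illustrative assumptions; the abstract specifies only the up/down rule):

    def run_staircase(respond, t_start=50.0, step=0.9, t_floor=8.0, n_trials=80):
        """One-up/one-down staircase on exposure time t (ms) for one emotion.

        `respond(t)` returns True if the observer named the emotion correctly
        at exposure t. Multiplicative steps shrink t after a correct response
        and grow it after an error, so t oscillates around the observer's
        recognition threshold for that emotion.
        """
        t, history = t_start, []
        for _ in range(n_trials):
            correct = respond(t)
            history.append(t)
            t = max(t_floor, t * step if correct else t / step)
        # crude threshold estimate: average exposure over the second half
        tail = history[n_trials // 2:]
        return sum(tail) / len(tail)

One such staircase would run per emotion, interleaved across the randomly selected images; note that a strict one-up/one-down rule converges on the duration yielding roughly 50% correct, so the exact convergence point depends on the rule chosen.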
33.507 Image size reveals perception biases of similarity among facial expressions of emotion
Shichuan Du1 (dus@ece.osu.edu), Aleix Martinez1; 1The Ohio State University
Recognizing facial expressions of emotion is important in social communication. Humans have the ability to recognize emotions from faces represented by a small number of pixels or at large distances. Where is the limit? Is this limit the same for all expressions of emotion? Or are we more tuned to reading some specific emotions? We investigate these questions using six basic facial expressions of emotion (happiness, sadness, anger, fear, surprise, and disgust) in addition to neutral. Face images scaled to five different sizes, encompassing 10x15 to 160x240 pixels (at increases by factors of 2), are presented in two emotion-labeling tasks. Three important aspects of emotion recognition emerge from our study: 1) Recognition accuracy increases nonlinearly with image size in all expressions. 2) Happiness, surprise, disgust and neutral can be recognized at very small sizes (10x15 pixels), whereas fear, sadness and anger cannot (requiring images of at least 40x60 pixels). 3) At low resolutions, there is an asymmetric ambiguity in recognizing expressions of emotion; e.g., sadness is perceived as more similar to neutral than to anger, while anger is most often confused with sadness; fear is more often misclassified as surprise than as disgust, while disgust is typically misinterpreted as fear. This asymmetry is eliminated as the size of the image increases.
Acknowledgement: National Science Foundation

33.508 Individual differences in empathy and indices of face processing
Reiko Graham1 (rg30@txstate.edu), Janine Harlow1, Heidi Blocker1, Chris Kelland Friesen2, Roque Mendez1; 1Department of Psychology, Texas State University, 2Department of Psychology, North Dakota State University
Empathy is vital for social functioning, yet its relationship to lower-level processes like face processing remains unknown. We examined whether individual differences in empathy (as indexed by the Interpersonal Reactivity Index; IRI, Davis, 1980) were related to facial expression processing, attentional disengagement from facial expression, and reflexive orienting to gaze direction in three separate but related experiments. In Experiment 1, sensitivity and decision biases in perceiving expression (fear, anger) were examined with a 2-alternative forced-choice task using morphed facial expressions. While there was no relationship between empathy and the ability to detect the intensity of fear or anger alone, particular empathy subscales (perspective taking, personal distress and empathic concern) were significant predictors of how individuals interpreted blends of fear and anger. In Experiment 2, we examined whether individual differences in empathy were predictive of attentional disengagement from irrelevant emotional face distractors (happy, angry, fearful, and neutral faces) during a target detection task and found no relationship between empathy and attentional disengagement from emotional faces. In Experiment 3, we examined whether empathy was related to reflexive orienting to non-predictive gaze cues in emotional faces (fearful, happy). Results indicated that cuing effects at short SOAs were not related to personality differences. In contrast, cuing effects at long SOAs were predicted by individual differences in empathy (fantasy and empathic concern). We conclude that empathy (as indexed by the IRI) does not modulate rapid, sensory-driven perceptual or attentional processes. Rather, individual differences in empathy appear to play a role in decision processes associated with perceiving ambiguous facial expressions, and only mediate reflexive orienting to gaze direction when there is sufficient time to process the face cue. Together these results suggest that empathy may influence later stages of processing associated with interpreting facial information.
Acknowledgement: NIH 1R03MH079295-01A1 to R.G., NIH/NCRR Centers of Biomedical Research Excellence (COBRE) to C.K.F.

33.509 Laying the foundations for an in-depth investigation of the whole space of facial expressions
Kathrin Kaulard1 (kathrin.kaulard@tuebingen.mpg.de), Christian Wallraven1, Douglas W. Cunningham1, Heinrich H. Bülthoff1; 1Max Planck Institute for Biological Cybernetics
Facial expressions form one of the most important and powerful communication systems of human social interaction. They express a large range of emotions but also convey more general, communicative signals. To date, research has mostly focused on the static, emotional aspect of facial expression processing, using only a limited set of "generic" or "universal" expression photographs, such as a happy or sad face. That facial expressions carry communicative aspects beyond emotion and that they transport meaning in the temporal domain has, however, so far been largely neglected. In order to enable a deeper understanding of facial expression processing with a focus on both emotional and communicative aspects of facial expressions in a dynamic context, it is essential to first construct a database that contains such material using a well-controlled setup. We here present the novel MPI facial expression database, which contains 20 native German participants performing 58 expressions based on pre-defined context scenarios, making it the most extensive database of its kind to date. Three experiments were performed to investigate the validity of the scenarios and the recognizability of the expressions. In Experiment 1, 10 participants were asked to freely name the facial expressions that would be elicited given the scenarios. The scenarios were effective: 82% of the answers matched the intended expressions. In Experiment 2, 10 participants had to identify 55 expression videos of 10 actors. We found that 34 expressions could be identified reliably without any context. Finally, in Experiment 3, 20 participants had to group the 55 expression videos of 10 actors based on similarity. Out of the 55 expressions, 45 formed consistent groups, which highlights the impressive variety of conversational expression categories we use. Interestingly, none of the experiments found any advantage for the universal expressions, demonstrating the robustness with which we interpret conversational facial expressions.


33.510 Out of sight, but not out of mind: Affect as a source of information about visual images
Eric Anderson1 (andersix@bc.edu), Dominique White1, Erika Siegel1, Lisa Barrett1,2; 1Boston College, 2Massachusetts General Hospital / Harvard Medical School
Recent evidence suggests that affect influences visual processing. To further explore this, we used Continuous Flash Suppression (CFS) as a technique to suppress stimuli from conscious visual awareness. Previous research has demonstrated that while suppressed images are experienced as unseen, they are still processed by the brain. In this study, we explored to what degree suppressed images are processed and whether suppressed images influence behavior. Consciously seen neutral faces were paired with suppressed angry, happy, or neutral faces rendered invisible with CFS. Participants rated the neutral faces as more unpleasant when paired with an unseen angry face and more pleasant when paired with an unseen happy face. These findings demonstrate that affective information is extracted by the brain from faces rendered invisible by CFS, and that this affective information is readily misattributed to a different, consciously seen face.

33.511 Preferential processing of fear faces: emotional content vs. low-level visual properties
Katie Gray1 (klhg103@soton.ac.uk), Wendy Adams1, Matthew Garner1,2; 1School of Psychology, University of Southampton, 2Division of Clinical Neuroscience, School of Medicine, University of Southampton
Behavioural and neurological research suggests that emotional (relative to neutral) faces are more visually salient, with preferential access to awareness, for example in overcoming binocular rivalry suppression. However, it is difficult to determine to what extent such effects result simply from low-level characteristics as opposed to the emotional content of the face per se. Although spatial inversion has been used to control for low-level image characteristics, the extent to which inversion disrupts emotion processing is unclear. We applied both spatial inversion and luminance reversal to fear, happy, angry and neutral faces. These manipulations retained the contrast, mean luminance and spatial frequency profiles of the images, but combining them made the emotion impossible to categorise. Observers viewed the normal and the manipulated images under continuous flash suppression: a single face was presented to one eye and high-contrast dynamic noise to the other. Fear faces emerged from suppression (i.e. became visible) faster than the other three expressions. However, this pattern was equally apparent for the original and the manipulated faces. The properties that lead to the unconscious prioritisation of fearful faces are thus fully contained in unrecognisable images that share the same low-level visual characteristics. Our findings suggest that some emotion-specific effects may be driven entirely by low-level stimulus characteristics.
Acknowledgement: KG was funded by an ESRC studentship
33.512 Properties of a good poker face
Erik Schlicht1 (schlicht@wjh.harvard.edu), Shin Shimojo2, Colin Camerer3, Peter Battaglia4, Ken Nakayama1; 1Psychology, Harvard University, 2Biology, California Institute of Technology, 3Economics, California Institute of Technology, 4Brain and Cognitive Sciences, Massachusetts Institute of Technology
Research in competitive games has exclusively focused on how opponent models are developed through previous outcomes, and how people's decisions relate to normative predictions. Little is known about how rapid impressions of opponents operate and influence behavior in competitive economic situations, although such rapid impressions have been shown to influence cooperative decision-making. This study investigates whether an opponent's face influences players' wagering decisions in a zero-sum game with hidden information. Participants made risky choices in a simplified poker task while being presented with opponents whose faces differentially correlated with subjective impressions of trust. If people use information about an opponent's face, it predicts they should systematically adjust their wagering decisions, despite the fact that they receive no feedback about outcomes and the value associated with the gambles is identical between conditions. Conversely, if people only use outcome-based information in competitive games, or use face information inconsistently, then there should be no reliable differences in wagering decisions between the groups. Surprisingly, we find that threatening face information has little influence on wagering behavior, but faces relaying positive emotional characteristics impact people's decisions. Thus, playing against opponents whose faces rank high on subjective impressions of trustworthiness leads to increased loss aversion and suboptimal wagering behavior. According to these results, the best 'poker face' for bluffing may not be a neutral face, but rather a face that contains emotional correlates of trustworthiness. Moreover, it suggests that rapid impressions of an opponent play an important role in competitive games, especially when people have little or no experience with an opponent.

33.513 Testing emotional expression recognition with an adaptation of the "Bubbles" masking approach
Peter Gerhardstein1 (gerhard@binghamton.edu), Daniel Hipp1, Rory Corbet1, Xing Zhang1, Lijun Yin1; 1Binghamton University-SUNY
Classification of facial expressions is thought to require differential allotment of attention to some features and feature regions while limiting the allocation of attention to other features and feature regions. There is some evidence to suggest that adult observers classify expressions using both configural and featural information; however, research documenting what information in the human face leads to successful recognition is incomplete. We applied the Bubbles masking approach (Gosselin & Schyns, 2000) to a dichotomous forced-choice facial expression classification task. Stimuli consisted of six individual faces, each posing all of six different facial expressions plus neutral. While viewing various faces, participants chose one of two prompts presented below the stimulus to classify the facial image as exhibiting an emotion or the negation of that emotion (i.e. "happy" or "not happy"). In order to determine the regions used for classification, the images were masked using Gaussian windows (bubbles). Obstruction was adjusted adaptively in order to maintain 75% classification accuracy. The classic Gosselin and Schyns task was adapted for future application to testing preschool children, by reducing the number of trials in a test and increasing the N of observers tested. Classification images were calculated for each facial expression, both across image identity and within it. Results will be presented in terms of a comparison of the diagnostically useful regions for humans to the diagnostic regions used by an ideal observer (following Susskind, 2007). Regions showing an increase, relative to the ideal observer, will indicate regions of increased influence in the interpretation of the expression by human observers, whereas decreased regions will reflect areas of reduced influence in the human observers' decision-making process. Results will direct future explorations as we begin to manipulate expression intensity and test 5-7-year-old children as well as adults.
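In the Bubbles approach, each trial's stimulus is the face seen through a random set of Gaussian apertures, and the mask itself becomes the predictor in later analysis. A short Python sketch of the masking step (a generic rendering of the Gosselin & Schyns technique; bubble counts and widths are placeholders, and the adaptive mechanism that holds accuracy at 75% is omitted):

    import numpy as np

    def bubbles_mask(shape, n_bubbles, sigma, rng=None):
        """Sum of randomly placed Gaussian apertures, clipped to [0, 1]."""
        rng = rng or np.random.default_rng()
        h, w = shape
        ys, xs = np.mgrid[0:h, 0:w]
        mask = np.zeros(shape)
        for _ in range(n_bubbles):
            cy, cx = rng.integers(0, h), rng.integers(0, w)
            mask += np.exp(-((ys - cy)**2 + (xs - cx)**2) / (2 * sigma**2))
        return np.clip(mask, 0.0, 1.0)

    def apply_bubbles(face, mask, background=0.5):
        """Reveal a [0, 1] grayscale face through the apertures; hidden
        regions fall to mid-grey."""
        return mask * face + (1 - mask) * background

A basic classification image can then be computed per expression as the mean mask over correct trials minus the mean mask over incorrect trials; regions with large positive weights are those observers relied on, ready to be compared against the ideal observer's diagnostic regions.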
33.514 The Facial Width-to-Height Ratio as a Basis for Estimating Aggression from Emotionally Neutral Faces
Cheryl M. McCormick 1,2 (cmccormick@brocku.ca), Catherine J. Mondloch 1,2, Justin M. Carré 1, Lindsey Short 1; 1 Department of Psychology, Brock University, 2 Centre for Neuroscience, Brock University
The facial width-to-height ratio (FWHR), a size-independent sexually dimorphic property of the human face, is correlated with aggressive behaviour in men. Furthermore, observers' estimates of aggression from emotionally neutral faces are accurate and are highly correlated with the FWHR. In a series of experiments we tested whether the FWHR is the basis of observers' accuracy in estimating aggressive propensity from emotionally neutral faces. In Experiments 1a-c, estimates of aggression remained accurate when faces were blurred or cropped, manipulations that reduce featural cues but maintain the FWHR. Accuracy decreased when faces were scrambled, a manipulation that retains featural information but disrupts the FWHR. The estimates of aggression were highly consistent across observers for all conditions except the scrambled condition. Overall, estimates of aggression were most accurate when all facial features (even if blurred) were presented in their canonical arrangement, allowing for perception of the FWHR, with at most a small contribution from the appearance of individual features. There was no explicit use of the FWHR; 84% of participants indicated that "the eyes" were the basis for their judgement. No participant reported using any kind of configural information, including the FWHR. Nonetheless, in Experiment 1d, participants given instruction about the FWHR were able to accurately estimate the FWHR of faces presented for 39 msec. In Experiment 2, computer-modeling software (FACEGEN) identified eight facial metrics that correlated with estimates of aggression; regression analyses revealed that the FWHR was the only metric that uniquely predicted these estimates. In Experiment 3, faces were manipulated to create pairs that differed only in FWHR. Participants' judgement of which individual of the pair was more aggressive was biased towards faces with the higher FWHR. Together, these experiments support the hypothesis that the FWHR is an honest signal of propensity for aggressive behaviour.
Acknowledgement: SSHRCC, NSERC
33.515 Visual redundancy enhances face identity perception but impairs face emotion perception
Bo-Yeong Won 1 (boyeong.won@gmail.com), Yuhong V. Jiang 1; 1 Department of Psychology, University of Minnesota
Nature, artworks, and man-made environments are full of redundant visual information, where an object appears in the context of other identical or similar objects. How is visual perception affected by whether the surrounding items are identical to an object, different from it, or absent? This study addresses the role of visual redundancy in the perception of human faces and reveals opposite effects on identity perception and emotion perception. Participants in Experiment 1 identified the gender of a single face presented at fixation. This "target" face was preceded by three types of masked prime displays: a single face in a randomly selected visual quadrant, four identical faces, one in each quadrant, or four different faces, one in each quadrant. All faces had neutral expressions. Priming was indexed by faster gender discrimination of the "target" face when it was identical to one of the prime faces than when it was a different gender. Experiment 1 found that gender priming was greater when the prime display contained four identical faces than when it contained a single face or four different faces, suggesting that face identification was enhanced by redundant visual input. In Experiment 2, participants viewed prime displays containing a single face, four identical faces, or four different faces, but these faces were either neutral or fearful in facial expression. Participants identified the facial expression of a "target" face, whose expression was either consistent or inconsistent with that of the prime display. Facial expression priming was significantly greater when the prime display contained a single face than when it contained four identical faces or four different faces. In fact, priming in the emotion task was eliminated when the prime display contained four identical faces. These results show that visual redundancy facilitates the perception of face identities but impairs the perception of facial emotions.
Acknowledgement: University of Minnesota Grant-in-aid
33.516 What does the emotional face space look like?
Frédéric J.A.M. Poirier 1 (frederic.poirier@umontreal.ca), Jocelyn Faubert 1; 1 École d'optométrie, Université de Montréal
Humans communicate their emotions in large part through facial expressions. We developed a novel technique to study the static and dynamic aspects of facial expressions. The stimulus consisted of 4 parts: (1) a dynamic face, (2) two smaller versions of the starting and end states, (3) a label indicating the target dynamic expression, and (4) sliders that could be adjusted to change the facial characteristics. Participants were instructed to adjust the sliders such that the face would most closely match the target expression. Participants had access to 53 sliders, allowing them to manipulate static and dynamic characteristics such as face shape, eyebrow shape, mouth shape, and gaze. Preliminary data from 4 participants and 7 conditions revealed interesting effects.
Some expressions are marked by unique facial features (e.g., anger given by a frown; surprise and fright given by open mouth and open eyes; pain given by partially closed eyes; and happiness given by upwards curvature of the mouth). Some expressions seem to develop non-linearly in time, that is, they include an intermediate state that deviates from a linear transformation between starting and ending states (e.g., anger, surprise, pain). This demonstrates the method's validity for measuring the optimal representation of given facial expressions. Because the method does not rely on presenting facial expressions taken from, or derived from, actors, we believe that it is a more direct measure of internal representations of emotional expressions. Current work is focused on building a vocabulary of emotions and emotional transitions, towards an understanding of the facial expression space.
Acknowledgement: NSERC-Essilor industrial research chair & NSERC discovery fund
Face perception: Social cognition
Vista Ballroom, Boards 517–530
Sunday, May 9, 8:30 - 12:30 pm
33.517 The time course of face-gender discrimination: Disentangling the use of color and luminance cues
Nicolas Dupuis-Roy 1 (nicolas@dupuis.ca), Daniel Fiset 1, Mélanie Bourdon 1, Frédéric Gosselin 1; 1 Département de psychologie, Université de Montréal
In a recent study using spatial Bubbles (Dupuis-Roy et al., 2009), we identified the eyes, the eyebrows and the mouth as the most potent features for face-gender discrimination (see also Brown & Perrett, 1993; Russell, 2003, 2005; Yamaguchi, Hirukawa, & Kanazawa, 1995). Intriguingly, we found that the mouth was correlated only with rapid correct answers. Given the highly discriminative color information in this region, we hypothesized that the extraction of color and luminance cues may have different time courses. Here, we tested this possibility by sampling the chromatic and achromatic face cues independently with spatial and temporal Bubbles (see Gosselin & Schyns, 2001; Blais et al., 2009). One hundred participants (35 men) completed 600 trials of a face-gender discrimination task with briefly presented sampled faces (200 ms). To create a stimulus, we first isolated the S and V channels of the HSV color space for 300 color pictures of frontal-view faces (average interpupil distance of 1.03 deg of visual angle) and adjusted the S channel so that every color was isoluminant (±5 cd/m2); then, we sampled the S and V channels independently through space and time with 3D Gaussian windows (spatial std = 0.15 deg of visual angle and temporal std = 23.53 ms). The group classification image computed on the response accuracy shows that in the first 100 ms, participants used the color in the mouth region along with the luminance in the left eye-eyebrow region; and that in the last 100 ms, they relied on the luminance information located in the mouth and the right eye-eyebrows. Male and female observers differ slightly in their extraction of the mouth information. Altogether, these results help to disentangle the relative roles of color and luminance in face-gender discrimination.
Acknowledgement: FQRNT
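A sketch of how a group classification image of the kind reported above can be computed, assuming per-trial sampling masks and response accuracy are available; this weighted-sums formulation follows the general Bubbles logic rather than the authors' exact pipeline.

```python
import numpy as np

def classification_image(masks, correct):
    """Contrast the sampling masks from correct vs. incorrect trials.

    masks: (n_trials, h, w) array of per-trial sampling masks;
    correct: (n_trials,) boolean vector of response accuracy.
    Positive regions were revealed more often on correct trials,
    i.e., they carried diagnostic information for the task.
    """
    masks = np.asarray(masks, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    diff = masks[correct].mean(axis=0) - masks[~correct].mean(axis=0)
    return (diff - diff.mean()) / diff.std()  # z-scored for thresholding
```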
33.518 Reconfigurable face space for the perception of intergender facial resemblance
Harry Griffin 1 (harry.griffin@ucl.ac.uk), Alan Johnston 1; 1 Cognitive, Perceptual and Brain Sciences, University College London
Psychophysical and neurophysiological evidence suggests that faces are represented in a mean-centered multi-dimensional face space. However, the organization of face space is poorly understood. We used a novel markerless morph-vectorization technique, based on the multi-channel gradient model, to investigate the organization and rapid reconfiguration of face space. Face spaces for male and female faces were created via principal component analysis (PCA). Faces' shape and texture were described as vector deviations from the populations' mean faces. Novel faces were synthesized by translating these vectors within and between male and female face spaces and then reconstructing to image form. The mathematical basis of perceptual similarity between male and female face spaces was investigated by showing subjects cross-gender pairs of faces which had either similar, unrelated or opposite vector deviations from their population means. Subjects perceived faces with similar vector deviations from their respective means (sibling-faces) as most similar and faces with opposite vector deviations as least similar. Facial identity aftereffects also transferred between male and female face spaces. Adaptation to a male face yielded a shift in the perceived identity of female faces toward the mathematically opposite female face. The perceptual similarity of synthesized sibling-faces indicates that face space can be dynamically partitioned into mean-centered subspaces, e.g., male and female. This ability may underpin the perception of "family resemblances" in disparate groups of faces with widely varying underlying image statistics. Cross-sibling adaptation indicates the existence of relational as well as absolute coding in face space.
Acknowledgement: EPSRC
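A compact illustration of the vector operations described above, assuming each face is a 1D vector of concatenated shape/texture parameters and each population is a matrix of such vectors; the function names are ours, not the authors'.

```python
import numpy as np

def anti_face(face, population):
    """Reflect a face about its population mean: the 'anti-face'.

    face: (n_dims,) vector; population: (n_faces, n_dims) matrix.
    """
    mean_face = population.mean(axis=0)
    return mean_face - (face - mean_face)

def sibling_face(face, own_population, other_population):
    """Apply one face's deviation from its own population mean to
    another population's mean, e.g., synthesize the male 'sibling'
    of a female face by reusing the same vector deviation."""
    deviation = face - own_population.mean(axis=0)
    return other_population.mean(axis=0) + deviation
```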
33.520 Perception of gender is a distributed attribute in the human face processing network
Christian Kaul 1,2 (c.kaul@ucl.ac.uk), Geraint Rees 1, Alumit Ishai 3; 1 Institute of Cognitive Neuroscience, UCL, London, 2 Department of Psychology and Center for Neural Science, NYU, 3 Inst. Neuroradiol, Univ. Zurich, Zurich, Switzerland
Face perception is mediated by a distributed neural system in the human brain, but conventional univariate fMRI data analysis has not clearly localized differential responses to male as compared with female faces within this network. We used fMRI and multivariate pattern decoding to test whether we could detect gender-specific neural responses in forty subjects (hetero- and homosexual men and women), who viewed male and female faces and rated their attractiveness. Face stimuli evoked activation in the inferior occipital gyrus (IOG), fusiform gyrus (FG), superior temporal sulcus (STS), amygdala, inferior frontal gyrus (IFG), insula, and orbitofrontal cortex (OFC). Pattern classification with a sparse logistic regression algorithm revealed successful decoding of gender information with above-chance accuracies within the IOG, FG, STS, IFG, INS and OFC, but not in the amygdala. We did not find any differences in decoding the gender of face stimuli (male vs. female) as a function of the subject's gender (men vs. women) or their sexual orientation (hetero- vs. homosexual). Our findings suggest that gender information is widely distributed across the face network and is represented in the "core" regions that process invariant facial features, as well as the "extended" regions that process changeable aspects of faces. The lack of gender-specific information in the amygdala is likely due to its role in threat detection and emotional processing.
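A generic sketch of this style of pattern decoding, using L1-regularized ("sparse") logistic regression with cross-validation; scikit-learn here stands in for whatever implementation the authors used, and chance is 0.5 for the two-class gender problem.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def decode_gender(patterns, labels, n_folds=5):
    """Cross-validated decoding of face gender from ROI voxel patterns.

    patterns: (n_trials, n_voxels) response estimates for one ROI;
    labels: (n_trials,) 0 = female face, 1 = male face.
    Mean accuracy reliably above 0.5 indicates the ROI carries
    gender information in its spatial response pattern.
    """
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
    scores = cross_val_score(clf, patterns, labels, cv=n_folds)
    return scores.mean()
```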
33.521 Perception of race and sex differently depends on the low and high spatial frequency channels
Shinichi Koyama 1 (skoyama@faculty.chiba-u.jp), Jia Gu 1, Haruo Hibino 1; 1 Department of Design Science, Graduate School of Engineering, Chiba University
At VSS 2006, we reported a brain-damaged patient whose perception of race was selectively impaired (Koyama et al., 2006). The patient also demonstrated that her perception of race largely depends on the surface properties of the face (e.g., convexes and concaves). On the other hand, a study showed that the perception of sex depends on the outline of the face and face parts (e.g., Takahashi et al., 1996). Based on the above studies we hypothesized that the perception of race depends more strongly on the low spatial frequency channels whereas the perception of sex depends more strongly on the high spatial frequency channels. In order to test the hypothesis, we tested normal subjects' performance in race and sex classification tasks with high-pass and low-pass filtered pictures. Nineteen subjects participated in the experiment. We used 56 pictures from JACFEE (Matsumoto & Ekman, 1988) for the stimuli. There were 3 types of pictures, all made from the same 56 pictures: (1) original grayscale pictures, (2) high-pass filtered pictures, and (3) low-pass filtered pictures. The subjects participated in the race and sex classification tasks. In the race classification task, a picture was presented on a 21-inch LCD display and the subject judged whether the person in the picture was Asian or Caucasian. The same pictures were used in the sex classification task, and the subject judged whether the person in the picture was male or female. As predicted, the subjects performed better with the low-pass filtered pictures in the race classification task whereas they performed better with the high-pass filtered pictures in the sex classification task. The results supported the hypothesis that the perception of race depends more strongly on the low spatial frequency channels whereas the perception of sex depends more strongly on the high spatial frequency channels.
Acknowledgement: KAKENHI "Face perception and recognition" 21119507
33.522 Differential spatial and temporal neural response patterns for own- and other-race faces
Vaidehi Natu 1 (vsnatu@utdallas.edu), David Raboy 2, Alice O'Toole 1; 1 The University of Texas at Dallas, 2 University of Pittsburgh
Humans recognize own-race faces more accurately than other-race faces (Malpass & Kravitz, 1969), suggesting differences in the nature of neural representations for these faces. We examined the spatial and temporal patterns of neural responses to own- and other-race faces. Functional magnetic resonance imaging data were obtained while Asian and Caucasian participants viewed blocks of Caucasian and Asian faces. Voxels from high-level visual areas, including fusiform gyrus and lateral-occipital areas, were localized in a separate scan using faces (of the same race as the participant), objects and scrambled images. We first applied a pattern-based classifier to discriminate neural activation maps elicited in response to Asian and Caucasian faces. A low-dimensional representation of the brain scans based on their principal components was used as input to the classifier. We measured the ability of the classifier to predict the race of the face being viewed. We found above-chance discrimination of the neural responses to own- versus other-race faces for both Asian and Caucasian participants. Reliable discrimination scores were obtained only when the voxel selection process used a localizer that presented "own-race" faces. Next, we examined differences in the time-course of neural responses to own- and other-race faces and found evidence for a temporal "other-race effect". The neural response to own-race faces was larger than to other-race faces, but only across the first few time points in the block. The magnitude of the neural response to other-race faces was lower at first, but increased across the block to ultimately overtake the magnitude of the own-race response. This temporal activation pattern held across the broader range of ventral temporal areas, and for the FFA alone. Spatial discrimination, however, was reliable only across the broader range of ventral temporal areas. The results highlight the importance of examining the spatio-temporal components of face representations.
Acknowledgement: UT-Southwestern Advanced Imaging Research Center Seed Grant
33.523 Poor memory for other-race faces is not associated with deficiencies in holistic processing
Sacha Stokes 1 (sacha.stokes@hotmail.co.uk), Elinor McKone 1, Hayley Darke 1, Anne Aimola Davies 2; 1 Australian National University, 2 University of Oxford
Introduction. Recent studies using standard tests of holistic face processing (part-whole and composite effects) have suggested the other-race effect on memory is associated with poor holistic processing for other-race faces (Tanaka et al., 2004; Michel et al., 2006). Yet this was found only in Caucasian subjects, with Asians showing equally strong holistic processing for own-race and other-race faces; also, the studies did not test an inverted control condition, meaning effects for upright faces might not have been fully face-specific in origin. Here, we re-examined the issue. Methods. Asian subjects (recently arrived overseas students) and Caucasians were tested on Asian and Caucasian faces.
Memory was measured using the Cambridge Face Memory Test and a new Chinese-face version (CFMT-Chinese). Contact with same- and other-race members was measured via questionnaire. Holistic processing was measured with (a) the 'overlaid faces task' (Martini et al., 2006) – an upright and an inverted face are overlaid in transparency and less contrast is needed to perceive the upright face – in a version which appeared to tap face detection (i.e., faces dissociated from scrambled faces and objects, but face scores did not correlate with memory); (b) the overlaid faces task in a version shown to tap identity-level processing (scores did correlate with memory); and (c) a standard composite task that showed composite effects only for upright faces and not inverted faces. Results. Despite a large other-race effect on memory, and large other-race differences in contact, there was no suggestion of reduced holistic processing for other-race faces, for either race of subject, on any of the three holistic processing tasks. Conclusion. There was no support for the holistic processing deficit explanation of the other-race effect, implying that other factors are involved in its etiology.
Acknowledgement: Supported by the Australian Research Council DP0984558
33.524 Race-modulated N170 Response to the Thatcher Illusion: Evidence for the expertise theory of the other-race effect
Lawrence Symons 1 (Larry.Symons@wwu.edu), Kelly Jantzen 1, Amanda Hahn 2; 1 Psychology, Western Washington University, 2 Psychology, University of St. Andrews
It has been suggested that differential use of configural processing strategies may be the underlying cause of racially-based recognition deficits. By employing a well-known configural manipulation (thatcherization), we aimed to demonstrate, electrophysiologically, that configural processing is used to a greater extent when viewing same-race faces than when viewing other-race faces. N170 ERP responses were measured for participants viewing normal and thatcherized faces of the same race (Caucasian) and of another race (African-American). The N170 response was modulated to a greater extent by thatcherization for same-race faces, suggesting that the processing of these faces is, in fact, more reliant on configural information than that of other-race faces. These findings are considered to be the result of greater experience, and thus greater expertise, with faces of one's own race as compared to faces of another race.
33.525 Effect of spatial frequency on other-race effect
Tae-Woong Yoon 1 (monolognov@gmail.com), Sang Chul Chong 1,2; 1 Graduate Program in Cognitive Science, Yonsei University, 2 Department of Psychology, Yonsei University
People are better at recognizing faces of their own race than those of other races. Tanaka and his colleagues (2004) suggested that people process faces of their own race more holistically than those of other races. In addition, studies have shown that LSF (Low Spatial Frequency) information is more important in the holistic processing of faces than HSF (High Spatial Frequency; Goffaux & Rossion, 2006). To parametrically measure the effect of LSF information on the other-race effect, we used a binocular rivalry paradigm. In Experiment 1, we made two kinds of filtered faces by applying different cut-off frequencies (above 16 cycles/image for HSF-filtered faces; below 8 cycles/image for LSF-filtered faces) for each race. Two different faces from each race were presented for 90 seconds to separate eyes. Each face was either HSF or LSF filtered. The perceived duration of the own-race face was significantly longer than that of the other-race face. This trend was more pronounced for HSF-filtered faces, producing a significant interaction between race and spatial frequency. Moreover, this significant advantage of the own-race face was observed over the effect of eye dominance. Experiment 2 tested whether the other-race effect generalized to full-spectrum faces undergoing rivalry. Again, the perceived duration of the own-race face was significantly longer than that of the other-race face. Finally, we tested the effect of spatial frequency on binocular rivalry in Experiment 3. Only same-race faces were used in this experiment, and we found that HSF-filtered faces were always perceived longer than LSF-filtered faces. The results of the three experiments suggest that LSF information plays an important role in the other-race effect by influencing holistic processing of faces. Furthermore, for the first time we introduce a more parametric method to measure this effect.
Acknowledgement: This work was supported by the Korea Science and Engineering Foundation (KOSEF) grant funded by the Korea government (MOST) (No. R01-2008-000-10820-0(2008)).
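A sketch of low- and high-pass filtering at a cutoff specified in cycles/image, matching the 16 and 8 cycles/image cutoffs above; the hard frequency cutoff is a simplification (smooth, e.g., Butterworth, transitions are more typical in practice).

```python
import numpy as np

def sf_filter(image, cutoff_cpi, keep="low"):
    """Keep frequencies below (keep='low') or above (keep='high')
    a cutoff expressed in cycles per image."""
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h)) * h  # cycles/image, vertical
    fx = np.fft.fftshift(np.fft.fftfreq(w)) * w  # cycles/image, horizontal
    radius = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    mask = radius <= cutoff_cpi if keep == "low" else radius > cutoff_cpi
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

# Cutoffs as in Experiment 1 above:
# lsf_face = sf_filter(face, 8, keep="low")
# hsf_face = sf_filter(face, 16, keep="high")
```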
33.526 Race-specific norms for coding face identity and a functional role for norms
Regine Armann 1 (regine.armann@tuebingen.mpg.de), Linda Jeffery 2, Andrew J. Calder 3, Isabelle Bülthoff 1, Gillian Rhodes 2; 1 Max Planck Institute for Biological Cybernetics, Tuebingen, Germany, 2 School of Psychology, The University of Western Australia, Australia, 3 MRC Cognition and Brain Sciences Unit, Cambridge, UK
High-level perceptual aftereffects have revealed that faces are coded relative to norms that are dynamically updated by experience. The nature of these norms and the advantage of such a norm-based representation, however, are not yet fully understood. Here, we used adaptation techniques to gain insight into the perception of faces of different race categories. We measured identity aftereffects for adapt-test pairs that were opposite a race-specific average and pairs that were opposite a 'generic' average, made by morphing together Asian and Caucasian faces. Aftereffects were larger following exposure to anti-faces that were created relative to the race-specific (Asian and Caucasian) averages than to anti-faces created using the mixed-race average. Since adapt-test pairs that lie opposite each other in face space generate larger identity aftereffects than non-opposite test pairs, these results suggest that Asian and Caucasian faces are coded using race-specific norms. We also found that identification thresholds were lower when targets were distributed around the race-specific norms than around the mixed-race norm, which is also consistent with a functional role for race-specific norms.
Acknowledgement: German Academic Exchange Service
33.527 Own-race Effect: an Attentional Blink Perspective
Yurong He 1,2 (heeyes@gmail.com), Yuming Xuan 1, Xiaolan Fu 1; 1 State Key Laboratory of Brain & Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, 2 Graduate University, Chinese Academy of Sciences
The own-race effect is the tendency for people to better identify members of their own race than of other races. In the present study, the own-race effect was studied in an attentional blink (AB) paradigm. AB studies have found that the second of two targets is often poorly discriminated when presented within about 500 ms of the first target. In 2 experiments, Chinese participants were asked to identify Caucasian and Asian faces in a simplified two-target RSVP paradigm (Duncan et al., 1994), and the stimulus onset asynchrony between the two faces was manipulated at four levels: 0 ms, 235 ms, 706 ms, and 1176 ms. In Experiment 1, only cross-race orders were adopted: C-A (Caucasian as T1, Asian as T2) and A-C (Asian as T1, Caucasian as T2). We hypothesized that the AB effect might decrease or even disappear in the C-A condition because of the own-race effect. Contrary to our prediction, the same amplitude of AB effects was observed in the C-A and A-C conditions, although the overall accuracy for identifying own-race faces was better than for other-race faces. In Experiment 2, besides cross-race orders, same-race orders (A-A and C-C) were added. AB effects were found in all four race orders. Again, AB effects in the C-A and A-C conditions showed a similar pattern and similar effect size, but the advantage of identifying own-race faces over other-race faces was absent. Analysis of the first half of the data revealed results similar to those of the whole data set, indicating that the absence of the own-race advantage could not be due to a practice effect. In sum, perception of own-race and other-race faces was studied in a simplified AB paradigm, and our results suggest that own-race faces and other-race faces do not differ in competing for attentional resources, given the same pattern of AB effects observed in both experiments.
Acknowledgement: 973 Program (2006CB303101), National Natural Science Foundation of China (90820305, 30600182)
33.528 When East meets West: gaze-contingent Blindspots abolish cultural diversity in eye movements for faces
Sébastien Miellet 1 (miellet@psy.gla.ac.uk), Roberto Caldara 1; 1 Department of Psychology and Centre for Cognitive Neuroimaging (CCNi), University of Glasgow, United Kingdom
Eye movement strategies deployed by humans to identify conspecifics are not universal. Although recognition accuracy is comparable, Westerners predominantly fixate the eyes during face recognition, whereas Easterners fixate the nose region. We recently showed with a novel gaze-contingent technique - the Spotlight - that when information outside central vision (2° and 5°) is restricted, observers of both cultures actively fixated the same face information during face recognition: the eyes and mouth. Only when both eye and mouth information were simultaneously available by fixating the nose region (8°) did East Asian observers shift their fixations towards this location - a strategy similar to natural viewing conditions.
Therefore, the central fixation pattern deployed by Easterners during face processing suggests better use of extrafoveal information while looking at faces, an issue that remains to be clarified. Here, we addressed this question by monitoring eye movements of Western Caucasian and East Asian observers during face recognition with a novel technique that parametrically restricts central vision information: the Blindspot. We used both natural vision and Blindspot conditions with Gaussian apertures of 2°, 5° or 8° dynamically centered on observers' fixations. Face recognition performance deteriorated with increasing Blindspot apertures in both observer groups. Interestingly, Westerners deployed a strategy that shifted progressively towards the typical East Asian central fixation pattern with increasing Blindspot apertures (see supplementary figure). In contrast, East Asian observers maintained their culturally preferred central fixation pattern, showing better performance under unnatural viewing conditions relative to Westerners. Collectively, these findings show that restricting foveal information induces an Eastern-style strategy amongst Westerners, while restricting extrafoveal information induces a Western-style strategy amongst Easterners. Overall, these observations show that the central fixation pattern used by Easterners relies on a better use of extrafoveal information. Culture shapes how people look at faces and sculpts visual information intake.
Acknowledgement: The Economic and Social Research Council and Medical Research Council (ESRC/RES-060-25-0010)
33.529 Social judgments from faces are universal
Junpeng Lao 1 (j.lao@psy.gla.ac.uk), Kay Foreman 1, Xinyue Zhou 2, Martin Lages 1, Jamie Hillis 1, Roberto Caldara 1; 1 Department of Psychology and Centre for Cognitive Neuroimaging (CCNi), University of Glasgow, United Kingdom, 2 Department of Psychology, Sun Yat-Sen University, Guangzhou, China
There is a growing body of evidence showing that humans make automatic and reliable personality inferences from facial appearance. Interestingly, it has been robustly shown that the recognition of other-race faces is impaired compared to same-race faces (the so-called other-race effect), with categorization of gender and age also achieved inefficiently. However, the extent to which the ability to make reliable inferences from faces generalizes across cultures and faces from different races is poorly understood. This issue is even timelier considering recent studies that have challenged the universality of face processing. For instance, we recently showed that Westerners predominantly fixate the eyes during face recognition, whereas Easterners fixate the central region of the face (i.e., the nose) (Blais et al., 2008). Culture also modulates the strategy observers use to gather visual
information during facial expression categorization (Jack et al., 2009). Therefore, asking whether social judgments from faces generalize across cultures is a natural question to address. To this aim, we tested Western Caucasian and East Asian observers performing visual and social judgments on all combinations of face pairs sampled from 40 Western Caucasian and 40 East Asian unfamiliar faces. Observers from both cultures first evaluated the physical and the social similarity of each face pair. Subsequently, observers performed binary social evaluations on the same face pairs for attractiveness, competence, trustworthiness and warmth. All binary decisions were paired with a measure of confidence. Finally, we represented the face space for each of the judgments in matrices of dissimilarity weighted by confidence levels for each observer and culture. Mantel correlations performed on the matrices of dissimilarity indicated a fairly robust agreement across cultures for all judgments, both visual and social. Our data show that humans rely on universal rules to perform trait inferences from facial appearance.
Acknowledgement: The Economic and Social Research Council and Medical Research Council (ESRC/RES-060-25-0010)
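A sketch of a Mantel correlation between two square dissimilarity matrices of the kind described above, with a permutation p-value; the permutation count and seed are arbitrary choices for illustration.

```python
import numpy as np

def mantel_r(d1, d2, n_perm=1000, seed=0):
    """Correlate the upper triangles of two dissimilarity matrices,
    assessing significance by jointly permuting rows and columns."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(d1, k=1)
    r_obs = np.corrcoef(d1[iu], d2[iu])[0, 1]
    n_as_extreme = 0
    for _ in range(n_perm):
        p = rng.permutation(len(d1))
        r = np.corrcoef(d1[p][:, p][iu], d2[iu])[0, 1]
        n_as_extreme += r >= r_obs
    return r_obs, (n_as_extreme + 1) / (n_perm + 1)
```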
33.530 Is Social Categorization Alone Sufficient to Induce Opposing Face Aftereffects?
Lindsey Short 1 (ls08ts@brocku.ca), Catherine Mondloch 1; 1 Department of Psychology, Brock University
Adults encode individual faces in reference to a distinct face prototype that represents the average of all faces ever encountered. The prototype is not a static abstracted norm but rather a malleable face average that is continuously updated by experience (Valentine, 1991); for example, after prolonged viewing of faces with compressed features, adults rate similarly distorted faces as more normal and more attractive (simple attractiveness aftereffects). Recent studies have shown that adults possess category-specific face prototypes (e.g., based on race or sex). After viewing faces from two categories (e.g., Caucasian/Chinese) that are distorted in opposite directions, adults' attractiveness ratings shift in opposite directions (opposing aftereffects). Recent research has suggested that physical differences between face categories are not sufficient to elicit opposing aftereffects and that distinct social categories are necessary (Bestelmeyer et al., 2008). For example, opposing aftereffects emerge when participants adapt to faces from two distinct sex categories (female and male) but not when participants adapt to faces from within the same sex category (female and hyper-female). The present set of experiments was designed to investigate whether social categorical distinctions in the absence of salient physical differences are sufficient to induce opposing aftereffects. In each experiment, physical appearance was held constant (all Caucasian female faces) while social categorical information differed (university affiliation in Experiments 1 and 2 and personality type in Experiment 3), such that half the faces purportedly belonged to participants' in-group while half belonged to their out-group. Across all three experiments, there was no evidence for opposing aftereffects, despite the fact that participants showed better recognition memory for in-group faces than for out-group faces (Experiment 3). These results suggest that both physical differences and a social categorical distinction are necessary in order to elicit category-contingent opposing face aftereffects.
Acknowledgement: NSERC
Scene perception: Categorization and memory
Vista Ballroom, Boards 531–541
Sunday, May 9, 8:30 - 12:30 pm
33.531 Tiny Memory: How many pixels are required for good recognition memory?
Yoana Kuzmova 1 (yoana@search.bwh.harvard.edu), Jeremy Wolfe 1,2; 1 Brigham & Women's Hospital, 2 Harvard Medical School
We can remember hundreds of pictures given only a few seconds of exposure (Standing, 1973). How much stimulus resolution is necessary for successful picture memory? Torralba (2009) reported that 32x32 pixel photographs can be categorized with 80% accuracy, but can these thumbnails be effectively coded into memory? In Experiment 1, observers saw a sequence of natural scene images and gave a new/old keypress response after each image. We varied picture resolution across blocks (16x16, 32x32, 64x64, or 256x256 pixels). Old and new pictures within a block had the same resolution. The second (old) presentation of an image could lag 2, 4, 8, 16, 32, 64 or 128 trials after the first. Higher resolution produced better performance; longer lags, worse performance. However, performance was well above chance even at 16x16: 89% correct at lag 2, 52% at lag 128, d' of 2.52 and 1.18, respectively (using the 16% overall false alarm rate). Similar performance was obtained whether lower-resolution images were presented as smaller, thumbnail versions of 256x256 images or as highly blurred 256x256 images. Is resolution more important at encoding or at recall? In Experiment 2, the first presentation of a picture could be 32x32 or 256x256 pixels. The second (old) presentation of an image was always at a different resolution from the first. Results were strikingly asymmetric. Encoding at 256x256 produced good memory at 32x32 (d'=1.80). Encoding at 32x32 produced very poor memory at 256x256 (d'=0.15), far worse than encoding and testing at 32x32 (Exp 1: d'=2.04). We conclude that the representations of highly degraded images can support robust recognition memory. However, when observers see full-resolution images they are unable to match them to degraded representations of the same picture in memory.
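For reference, d' values of this kind come from standard signal detection theory, d' = z(hit rate) − z(false-alarm rate); a one-line version follows, with illustrative numbers rather than the study's exact hit rates.

```python
from scipy.stats import norm

def dprime(hit_rate, fa_rate):
    """Signal-detection sensitivity: d' = z(H) - z(FA)."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Illustrative values (not the study's exact hit rates):
# dprime(0.90, 0.16) -> ~2.28
```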
33.532 Do expert searchers remember what they have seen?
Erica Kreindel 1 (ekreindel@search.bwh.harvard.edu), Karla K. Evans 1,2, Jeremy M. Wolfe 1,2; 1 Brigham and Women's Hospital, 2 Harvard Medical School
Previous research has shown that humans have a massive and robust ability to recognize objects and scenes that they have seen before (Brady, Konkle, Alvarez, and Oliva, 2008). Do experts have similarly impressive memory for the unusual stimuli with which they are expert? We tested cytologists, who search "scenes" filled with cells for signs of cervical cancer, on memory for those scenes. We tested the same observers on memory for images of objects and real scenes, and we compared their results to non-cytologist control subjects. In all conditions, participants viewed 72 images and were told that they should remember them. During the testing phase, they were shown 36 old and 36 new images and were asked to label each image as new or old. Expert cytologists were no better than controls for object memory (d' 1.99 and 1.97, respectively) or scenes (d': 3.44 vs. 3.20). They were significantly better than naives at remembering images of cells (d' .62 vs. .12). Note, however, that their memory for cell scenes was quite poor, significantly worse than their memory for objects and for scenes. We conclude that expertise with stimuli does not convey massive memory for those stimuli, nor does expertise with one set of stimuli notably increase memory for stimuli in general. On the practical side, these results mean that, with some caution, one can reuse stimuli in studies of cytology in ways that would not be wise in studies of memory for natural scenes.
Acknowledgement: National Eye Institute (NEI) Grant Number: 5R01EY17001-3
33.533 A taxonomy of visual scenes: Typicality ratings and hierarchical classification
Krista A. Ehinger 1 (kehinger@mit.edu), Antonio Torralba 2, Aude Oliva 1; 1 Brain and Cognitive Sciences, Massachusetts Institute of Technology, 2 Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology
Research in visual scene understanding has been limited by a lack of large databases of real-world scenes. Databases used to study object recognition frequently contain hundreds of different object classes, but the largest available dataset of scene categories contains only 15 scene types. In this work, we present a semi-exhaustive database of 130,000 images organized into 900 scene categories, produced by cataloguing all of the place type or environment terms found in WordNet. We obtained human typicality ratings for all of the images in each category through an online rating task on Amazon's Mechanical Turk service, and used the ratings to identify prototypical exemplars of each scene type. We then used these prototype scenes as the basis for a naming task, from which we established the basic-level categorization of our 900 scene types. We also used the prototypes in a scene sorting task, and created the first semantic taxonomy of real-world scenes from a hierarchical clustering model of the sorting results. This taxonomy combines environments that have similar functions and separates environments that are semantically different. We find that man-made outdoor and indoor scene taxonomies are similar, both based on the social function of the scenes. Natural scenes, on the other hand, are primarily sorted according to surface features (snow vs. grass, water vs. rock). Because recognizing types of scenes or places poses different challenges from object classification -- scenes are continuous with each other, whereas objects are discrete -- large databases of real-world scenes and taxonomies of the semantic organization of scenes are critical for further research in scene understanding.
Acknowledgement: Funded by NSF CAREER award to A.O. (0546262) and NSF CAREER Award to A.T. (0747120). K.A.E. is supported by a NSF Graduate Research Fellowship.
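A sketch of the hierarchical-clustering step described above, assuming a square dissimilarity matrix derived from the sorting task (e.g., one minus the proportion of participants who grouped two scene prototypes together); SciPy average-linkage clustering is an assumption here, not necessarily the authors' model.

```python
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def scene_taxonomy(dissimilarity, n_groups=4):
    """Agglomerative clustering of scene prototypes.

    dissimilarity: (n_scenes, n_scenes) square matrix.
    Returns a flat grouping at n_groups clusters; the full
    dendrogram (the linkage output) is the taxonomy itself.
    """
    condensed = squareform(dissimilarity, checks=False)
    tree = linkage(condensed, method="average")
    return fcluster(tree, t=n_groups, criterion="maxclust")
```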
33.534 Predicting object and scene descriptions with an information-theoretic model of pragmatics
Michael Frank 1 (mcfrank@mit.edu), Avril Kenney 1, Noah Goodman 1, Joshua Tenenbaum 1,2, Antonio Torralba 2, Aude Oliva 1; 1 Department of Brain and Cognitive Sciences, MIT, 2 Computer Science and Artificial Intelligence Lab, MIT
A picture may be worth a thousand words, but its description will likely use far fewer. How do speakers choose which aspects of a complex image to describe? Grice's pragmatic maxims (e.g., "be relevant", "be informative") have served as an informal guide for understanding how speakers select which pieces of information to include in descriptions. We present a formalization of Grice's maxim of informativeness ("choose descriptions proportional to the number of bits they convey about the referent with respect to context") and test its ability to capture human performance. Experiment 1: Participants saw sets of four simple objects that varied on two dimensions (e.g., texture and shape) and were asked to provide the relative probabilities of using two different adjectives (e.g., polka-dot vs. square) to describe a target object relative to the distractor objects. Participants' mean probabilities were highly correlated with the information-theoretic model's predictions for the relative informativeness of the two adjectives (r=.92, p < …)


33.538 Broadening the Horizons of Scene Gist Recognition: Aerial and Ground-based Views
Lester Loschky 1 (loschky@ksu.edu), Katrina Ellis 1, Tannis Sears 1, Ryan Ringer 1, Joshua Davis 1; 1 Psychology Department, Kansas State University
Numerous studies in the last decade have used ground-based views of scenes to investigate the process of scene gist recognition. Conversely, few if any studies have investigated scene gist recognition of aerial (i.e., satellite) views. This study asks the question: how much of what we know about scene gist recognition from ground-based views directly translates to aerial views? Fifty-two participants were randomly assigned to Aerial and Ground-based conditions, with processing times (SOA) and scene categories varied within-subjects. Stimuli were monochrome photographs from 10 categories: 5 Natural: coast, desert, forest, mountain, river; 5 Man-made: airport, city, golf-course, residential, stadium. Aerial images were from Google Earth©. Both target and mask images were presented for 24 ms, with SOAs of 24-94 ms plus a no-mask condition. Participants then chose between all 10 categories. As predicted, ground-based views were recognized more accurately than aerial views. However, contrary to predictions, aerial view recognition did not benefit more from additional processing time than ground-based view recognition. Aerial view performance with no mask was worse than ground-based view performance at 24 ms SOA. Thus, gist perception of aerial views is more data (information) limited than resource (time) limited, perhaps because they are "accidental views" (Biederman, 1987). An additional analysis collapsed all 10 basic-level categories into 2 superordinate-level "Natural" and "Man-made" categories. For ground-based views, Natural categories were consistently high, whereas Man-made categories benefited from additional processing time. However, for aerial views, both Natural and Man-made categories benefited equally from additional processing time. Nevertheless, confusion matrices for the 10 basic-level categories and responses showed a correlation of .80 across the Aerial and Ground-based views, suggesting that discriminability between categories is similar across aerial and ground-based views. Further research will investigate what information both aerial and ground-based views contain, and what information aerial views lack.
Acknowledgement: Kansas NASA Space Grant Consortium
33.539 Adaptation for landmark identity and landmark location on a familiar college campus
Lindsay Morgan 1 (lmo@mail.med.upenn.edu), Sean MacEvoy 1,2, Geoffrey Aguirre 1, Russell Epstein 1; 1 Center for Cognitive Neuroscience, University of Pennsylvania, 2 Department of Psychology, Boston College
Familiar landmarks have both an identity (e.g., White House) and a location in space (e.g., 1600 Pennsylvania Ave.). How are these two kinds of information represented in the brain? We addressed this issue by scanning University of Pennsylvania students with fMRI while they viewed images of 10 landmarks from the Penn campus. Images (22 views of each landmark; 220 total) were presented at 0.33 Hz in a continuous carry-over design (Aguirre, 2007). We observed two different kinds of adaptation effects relating to repetition of (i) landmark identity and (ii) spatial location.
First, the scene-responsive parahippocampal place area (PPA) and retrosplenial complex (RSC), as well as medial retrosplenial cortex, showed a reduced response when two different images of the same landmark were shown on successive trials, suggesting that these regions represent individual places with some generalization across views. Second, the left anterior hippocampus exhibited adaptation corresponding to real-world distances between landmarks; specifically, the response was more strongly reduced when the landmarks shown on successive trials were closer together on campus. Importantly, there was a dissociation between these two effects: PPA, RSC, and medial retrosplenial cortex did not show distance-related adaptation, and left anterior hippocampus did not show adaptation for landmark identity. The landmark adaptation effect in PPA, RSC, and medial retrosplenial cortex is consistent with previous work implicating these areas in the coding of real-world places. The unexpected distance-related response in the left anterior hippocampus may reflect the retrieval of episodic memories about these locations in a way that is shaped by their positions in a larger spatial map.
Acknowledgement: This research was funded by NIH grant EY-016464 to R.A.E.
33.540 How Accurate is Memory for Familiar Slope?
Anthony Stigliani 1 (astigli1@swarthmore.edu), Frank Durgin 1, Zhi Li 1; 1 Department of Psychology, Swarthmore College
Geographical slant is generally overestimated. It has been reported that these overestimations are even greater in memory than in perception (Creem & Proffitt, 1998). However, these prior studies used imagery instructions, which may encourage biased responding. We asked two groups of undergraduates to provide verbal, and pictorial or proprioceptive, slope estimates of 5 familiar campus paths ranging in actual slope from 0.5 to 8.6 deg. One set of 30 participants was led to the base of each path and made their estimates while looking at it (Perception Condition). The other set of 30 participants made estimates from memory (Memory Condition). Maps, satellite photos and verbal names for the paths were used in the memory condition to ensure that participants understood the location of the path to be judged. Half the participants in each condition were asked to hold out their unseen hand to represent the slope of the path. Hand orientation was measured precisely with a micro-inclinometer. Following this they made verbal estimates. The other half of the participants adjusted a 2D line on a computer screen to represent the slope of the path prior to making verbal estimates. All three measures showed the same patterns. For one of the shallower paths (1.2 deg), proprioceptive estimates from memory were slightly lower (2.3 deg) than the proprioceptive estimates of those viewing the path (4.0 deg), t(28) = 2.08, p = .046. For all other paths and measures, there was no evident or consistent difference between memory and perception on any of the measures. Non-verbal estimates were lower than verbal estimates, but all estimates overestimated all hills, both in perception and in memory. We conclude that memory for familiar paths includes unbiased (normal) perceptual information about path inclination. Creem, S.H., & Proffitt, D.R. (1998).
Psychonomic Bulletin and Review, 5(1), 22-36.
33.541 Does experience with a scene facilitate spatial layout judgments?
Noah Sulman 1 (sulman@mail.usf.edu), Thomas Sanocki 1; 1 Department of Psychology, University of South Florida
In many perceptual domains, enhanced sensitivity to discrimination-relevant dimensions, or perceptual learning, develops with repeated exposures. On one hand, layout perception may be relatively direct, requiring no familiarity with a scene for efficient processing. In contrast, familiarity with a scene may allow for the rapid extraction of scene properties that support layout judgments. A series of experiments investigated whether scene-specific perceptual learning develops as a function of experience with a given photographic or synthetic scene. Subjects were instructed to indicate the closer of two cued locations within a given scene. Neither of these cued locations within a scene repeated. Savings accrued across the experiment such that responses were faster overall. Further, the savings were greatest with repeated scenes. However, this learning was greater with photographic stimuli.
Object recognition: Selectivity and invariance
Vista Ballroom, Boards 542–556
Sunday, May 9, 8:30 - 12:30 pm
33.542 Object selective responses without figure-ground segregation and visual awareness
Johannes J. Fahrenfort 1 (j.j.fahrenfort@uva.nl), Klaartje Heinen 2, Simon van Gaal 1, H. Steven Scholte 1, Victor A. F. Lamme 1,3; 1 University of Amsterdam, Department of Psychology, Amsterdam, the Netherlands, 2 Institute of Cognitive Neuroscience, London, UK, 3 Netherlands Institute for Neuroscience, Amsterdam, the Netherlands, part of the Royal Academy of Arts and Sciences (KNAW)
It is well known that neurons in the temporal lobe classify objects, such as faces, and it is generally assumed that the activity of such neurons is necessary for conscious awareness of these objects. However, object categorization may also occur unconsciously, as has been shown by the selective activation of object-selective neurons by masked objects. So what distinguishes conscious from unconscious object recognition? We constructed schematic images containing objects such as faces and houses while keeping local retinal stimulation between conditions identical. Using a dichoptic fusion paradigm, we manipulated stimulus visibility such that objects were
either visible or not visible. Confirming earlier results, we found that both consciously perceived and non-perceived objects result in category-specific BOLD activation. Critically, however, we show that only consciously seen objects show a distinct neural signature of figure-ground segregation in early and mid-level visual areas, which is completely absent when objects are not seen. Although counterintuitive, this implies that consciousness is more intimately related to processes of figure-ground segregation and perceptual organization than to object categorization. We propose that figure-ground segregation is a prerequisite for visual awareness, and that both phenomena share part of their neural correlate, which is recurrent processing in visual cortex.
33.543 Decoding of object position using magnetoencephalography (MEG)
Thomas Carlson 1 (tcarlson@psyc.umd.edu), Ryota Kanai 2, Hinze Hogendoorn 3, Juraj Mesik 1, Jeremy Turret 1; 1 Department of Psychology, University of Maryland, 2 Helmholtz Institute, Experimental Psychology, University of Utrecht, 3 Institute of Cognitive Neuroscience & Department of Psychology, University College London
Contemporary theories of object recognition posit that an object's position in the visual field is quickly discarded at an early stage in visual processing, in favor of a high-level, position-invariant representation. The present study investigated this supposition by examining how the location of an object is encoded in the brain as a function of time. In three experiments, participants viewed images of objects while brain activity was recorded using MEG. In each trial, subjects fixated a central point and images of objects were presented at variable locations in the visual field. The nature of the representation of an object's position was investigated by training a linear classifier to decode the position of the object based on recorded physiological responses. Performance of the classifier was evaluated as a function of time by training the classifier with data from a sliding 10-ms time window. The classifier's performance for decoding the position of the object rose to above-chance levels at roughly 75 ms, peaked at approximately 115 ms, and decayed slowly as a function of time up to 1000 ms post-stimulus onset. Within the interval of 75 to 1000 ms, classification performance correlated with the angular distance between targets, indicating a metric representation of visual space. Notably, prior to the time that classification performance returned to chance, object category information could be decoded from physiological responses, and participants were able to accurately make high-level judgments about the objects (i.e., category and gender for faces). These findings suggest that position may be a fundamental feature encoded in the representation of an object, in contrast to the notion that position information is discarded at an early stage of visual processing.
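A sketch of sliding-window decoding of this kind, assuming epoched sensor data; the classifier choice and fold count are illustrative, while the 10-ms window follows the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def sliding_window_decoding(meg, labels, sfreq, win_ms=10):
    """Decode stimulus position from MEG data as a function of time.

    meg: (n_trials, n_sensors, n_times) epoched sensor data;
    labels: (n_trials,) stimulus position on each trial;
    sfreq: sampling rate in Hz. Returns one accuracy per window.
    """
    win = max(1, int(round(win_ms / 1000 * sfreq)))
    scores = []
    for t0 in range(0, meg.shape[2] - win + 1, win):
        # Flatten each trial's sensors x time window into one feature vector
        X = meg[:, :, t0:t0 + win].reshape(len(meg), -1)
        clf = LogisticRegression(max_iter=1000)
        scores.append(cross_val_score(clf, X, labels, cv=5).mean())
    return np.array(scores)
```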
33.544 Perceiving and representing the orientation of objects: Evidence from a developmental deficit in visual orientation perception
Emma Gregory 1 (gregory@cogsci.jhu.edu), Michael McCloskey 1; 1 Department of Cognitive Science, Johns Hopkins University
Perceiving and representing the orientations of objects is important for interacting with the world. For example, accurate orientation information allows us to interpret visual scenes, comprehend symbols and pick up objects. Despite its importance, little is known about how the visual system represents object orientation. One potential clue, however, comes from the tendency to confuse mirror images. In previous work we explored mirror-image confusion in normal adults' memory for orientation. The present study probes perceptual representations of object orientation by investigating the mirror-image confusions made by AH, a woman with a remarkable developmental deficit in perceiving the locations and orientations of objects. We describe a framework that conceives of orientation as a relationship between reference frames. The COR (coordinate-system orientation representation) framework assumes that object orientation representations map an object-centered reference frame onto a reference frame extrinsic to the object, which may in turn be related to additional extrinsic frames. Moreover, for each of these mappings, the representations are compositional, involving several parameters. According to COR, mirror-image confusions result from failures in encoding, retaining or processing specific parameters. We present new data from AH in support of COR assumptions. AH's systematic pattern of mirror-reflection errors in copying pictures of objects provides evidence for the compositionality of orientation representations, and for the role of object-centered frames and mappings between extrinsic frames in the representation of orientation. AH makes similar mirror-reflection errors when reaching for objects, suggesting the hypothesized orientation representations also play a role in visually guided action. Finally, the fact that AH's errors arise at a perceptual level, in contrast with errors made by adults in memory, supports the idea that orientation representations have similar structure at multiple levels of processing.
33.545 The "Inversion Effect" and Cortical Visual Processing
Viktoria Elkis 1 (elkisv@mail.nih.gov), Dwight Kravitz 1, Chris Baker 1; 1 Unit on Learning and Plasticity, Laboratory of Brain and Cognition, National Institute of Mental Health
The investigation of stimulus inversion can reveal important insights into the nature of visual object processing. Many studies have examined inversion with respect to faces, and have consistently demonstrated a large cost in the recognition of inverted compared to upright faces. Imaging studies have also demonstrated higher activation within the fusiform face area (FFA) for upright than for inverted faces. However, the inversion effect is not unique to faces, and costs of inversion on recognition are also observed for objects and scenes, although generally to a lesser extent than for faces. Here, we used functional magnetic resonance imaging (fMRI) to investigate the effect of inversion on visual processing for multiple categories of visual stimuli across multiple regions of visual cortex. Specifically, upright and inverted faces, objects, and scenes were presented to subjects, and the patterns of response within the FFA, object-selective cortex (Lateral Occipital, LO), and scene-selective cortex (Parahippocampal Place Area, PPA) to each stimulus type and orientation were compared. In all high-level visual areas the patterns of response could be used to discriminate between the different categories of upright stimuli. Within the FFA, while the effect of inversion was strongest for faces, there was also an effect for non-preferred stimuli, demonstrating that face-selective cortex shows a general effect of inversion. The PPA showed a strong inversion effect only for scenes, suggesting a specialization for its preferred category. In contrast, the generally object-selective LO showed inversion effects for all stimulus classes.
Collectively, these results demonstrate robust effects of inversion for multiple object categories in cortical visual processing and suggest that category is not the only factor contributing to the inversion effect in specialized cortical regions. Additionally, these results complement behavioral studies of inversion by demonstrating inversion effects for faces, objects, and scenes in their preferred and non-preferred regions of processing.
33.546 The viewpoint debate revisited: What drives the interaction between viewpoint and shape similarity in object recognition?
Pamela J. Arnold 1 (p.j.arnold@bangor.ac.uk), Charles Leek 1; 1 School of Psychology, Bangor University
Previous reports have shown (a) viewpoint-dependent time costs in object recognition and (b) that shape similarity affects these viewpoint costs. However, it remains unclear which aspects of shape similarity interact with viewpoint effects (e.g., to what extent similarity is computed from solely 2D image-based shape properties and/or 3D geometric structure). In this study we examined the relative contributions of these factors to viewpoint-related time costs in order to elucidate the nature of the shape representations mediating recognition. Using a series of sequential matching experiments (same shape/different shape) we measured recognition performance for 3D novel objects or their silhouettes at the same or different viewpoints. The results showed a significant interaction between viewpoint and similarity. When participants matched objects at the same viewpoint, responses for different objects sharing the same 3D configuration (but not local features) were slower than those for different objects sharing local shape features alone. The same pattern of results was also found with silhouettes, suggesting that this time cost may be attributed to the similarity of the 2D global outline rather than to internal 3D configuration per se. Further, when participants compared different rotations of objects, a viewpoint-dependent time cost was found. This cost was greater for different objects sharing only local shape attributes than for different objects sharing only 3D configuration. These findings are consistent with previous proposals that recognition is mediated by 2D image-based representations. However, in addition, they further suggest that access to these representations involves the parallel operation of two perceptual mechanisms: a rapid analysis of global shape outline at a coarse spatial scale, and a relatively slower analysis of internal local shape features at a fine spatial scale. Thus, shape similarity interacts with viewpoint over different time courses depending on the spatial scale at which similarity is computed.


33.547 Differential viewpoint preference for objects and scenes reflects encoding and retrieval efficiency
J. Stephen Higgins1 (higgins3@uiuc.edu), Ranxiao Frances Wang1; 1Psychology Department, University of Illinois
Canonical views are different between objects and scenes. Previous research, using a stimulus set that equated all visual features between objects and scenes except for connectedness between the parts, has demonstrated that participants prefer oblique views for objects while they prefer straight-on views for scenes. These preferences are apparent by measuring either participants' dwell time while studying different views of the stimuli or the view that participants chose as their favorite. Viewpoint preferences stayed constant across different tasks requiring the use of identical visual information between objects and scenes. The present studies explored whether this preference reflects encoding or retrieval efficiency. Participants either explored oblique or straight-on views (45º or 0º, respectively) of objects or scenes and were tested on an intermediate view between the two types of study views (22.5º or 202.5º), or studied objects or scenes from the intermediate views and were tested on the oblique or straight-on views. In all tasks participants were faster at recognizing oblique views than straight-on views of objects, and were faster at recognizing straight-on than oblique views of scenes. These results suggest that people's canonical viewpoint preferences reflect the ability to determine which view is most beneficial for both encoding and recognition even though the stimulus types provide the same visual information, except for connectedness between parts.

33.548 Large Perspective Changes (>45°) Allow Metric Shape Perception Used to Recognize Quantitatively Different Objects
Young Lim Lee1 (yl5@umail.iu.edu), Geoffrey Bingham2; 1Psychology, University of Hong Kong, 2Psychological and Brain Sciences, Indiana University
Background. Previous object recognition studies have shown that observers could not recognize objects using quantitative (metric) properties. The finding of a relative inability to detect metric differences is consistent with numerous previous perception studies in which observers are unable to perceive metric 3D shape accurately. However, Bingham & Lind (2008) found that large perspective changes (≥45°) allowed accurate metric shape perception. We now investigated whether such information can yield the ability to detect quantitative properties and use them to recognize objects.
Methods. 10 Ss participated in a small rotation (20°) condition and 10 Ss participated in a large rotation (70°) condition. 24 octagonal objects were used. There were 4 qualitatively different objects and each object had 5 quantitatively different variations. Every observer performed three sessions in the same order, namely 2D quantitative difference, 3D quantitative difference and 3D qualitative difference tasks. Observers viewed computer-generated displays of objects with stereo and structure-from-motion information and performed a same-different task. Judgments were to be as accurate and quick as possible.
Results. When information from large perspective changes was available, the ability to recognize quantitatively different objects was comparable to that for qualitatively different objects both in respect to accuracy of judgments and reaction times.
Conclusions.
The two visual systems theory suggests that the ventral system, which is responsible for object recognition, deals with and requires only qualitative properties. In contrast, our results showed that metric properties also can be used to recognize objects if information from large perspective changes was available. Thus, we challenged the idea that the ventral system only uses qualitative properties to perform object recognition. We suggested that use of metric shape perception is not determined by anatomically distinct visual systems, but instead is a function of information.

33.549 The role of visual orientation representation in the mental rotation of objects
David Rothlein1 (david.rothlein@jhu.edu), Michael McCloskey1; 1Department of Cognitive Science, Johns Hopkins University
Mental rotation tasks classically involve participants deciding whether two pictures of objects, presented at different orientations, are the same or mirror images. Reaction time in these tasks increases more or less linearly with the angular difference in orientation between the two objects. This finding has led many researchers to conclude that mental rotation is performed in a manner that is (somehow) analogous to the physical rotation of the object in question. This interpretation implies, for example, that the processes underlying 90° and 180° mental rotations are qualitatively the same, differing only quantitatively. In the present study participants reported (by drawing or selection from a forced-choice array) the orientation that would result from rotating a stimulus picture 0°, 90° clockwise, 90° counterclockwise, or 180°. Analyses of participants' errors revealed qualitative differences in the distribution of error types across the different rotation conditions, suggesting that mental rotation processes may vary qualitatively and not just quantitatively as a function of rotation angle. We interpret the results by reference to specific assumptions about the form of mental orientation representations and the processes that transform these representations in mental rotation tasks.

33.550 State-dependent TMS reveals rotation-invariant shape representations in Lateral Occipital Cortex and Occipital Face Area
Juha Silvanto1,2 (juha_silvanto@yahoo.com), D. Samuel Schwarzkopf2, Sharon Gilaie-Dotan2, Geraint Rees2; 1Brain Research Unit, Low Temperature Laboratory, Helsinki University of Technology, 2Institute of Cognitive Neuroscience and Wellcome Trust Centre for Neuroimaging, University College London
Human extrastriate visual cortex contains functionally distinct regions where neuronal populations exhibit signals selective for visually presented objects. How such regions might play a causal role in underpinning our ability to recognize objects across different viewpoints remains uncertain. Here, we tested whether two extrastriate areas, the lateral occipital (LO) region and the occipital face area (OFA), contained neuronal populations that play a causal role in recognizing two-dimensional shapes across different rotations. We used visual priming to modulate the activity of neuronal populations in these areas, and then applied TMS before presentation of a second rotated shape to which participants had to respond. Surprisingly, we found that TMS applied to both LO and OFA modulated rotationally invariant shape priming, but in a fashion that differed depending on the degree of rotation.
Our results thus demonstrate that both the LO and OFA contain neuronal representations which play a causal role in rotation-invariant shape processing.

33.551 View-point dependent representation of objects in peripheral visual fields
Naoki Yamamoto1 (mailto.naoki@gmail.com), Kiyoshi Fujimoto2, Akihiro Yagi1; 1Department of Integrated Psychological Science, Kwansei Gakuin University, 2SUBARU Engineering Division, Fuji Heavy Industries, Ltd
Peripheral vision shows poorer performance than central vision for various visual tasks. Although there are many studies that have used artificial stimuli or human faces, little is known about the recognition of daily objects in peripheral visual fields. In the present study, we investigated recognition of the facing direction of objects in peripheral vision. The task was to judge the facing direction (left or right) of static objects briefly presented at a location in either the left or right peripheral visual field along the horizontal meridian. Stimuli were daily objects (humans, animals, cars, motorcycles, and arrows), presented at either 5 or 20 deg of eccentricity, with sizes of 3 or 6 deg visual angle. The results showed that participants judged the facing direction of objects correctly when their size was relatively large. However, decreasing stimulus size made recognition performance worse when the objects faced toward participants' point of gaze than when the objects faced away. In an additional experiment, we investigated whether the phenomenon occurred only in peripheral visual fields along the horizontal meridian or not. Stimuli were presented at a total of eight peripheral locations: left, right, upper, lower, and upper/lower left and right of the participants' point of gaze. An arrow and a bar figure were used in this experiment. We adopted the bar figure to examine judgements for objects with no directional information. The results showed that recognition performance was lower for the arrow facing toward the point of gaze than for that facing away at all eight peripheral locations, and that the bar appeared as the arrow facing away. These results indicate a new and interesting characteristic of object recognition in peripheral vision; in peripheral visual fields, the representation of an object is view-dependent, or more precisely, dependent on the viewer's point of gaze.

33.552 Conspicuity of Object Features Determines Local versus Global Mental Rotation Strategies
Farahnaz Ahmed1 (farahnaz@gmail.com), Alex Hwang2, Erin Walsh2, Marc Pomplun2; 1Mount Holyoke College, 2University of Massachusetts Boston
Most studies describe mental rotation as a top-down process where the time to discriminate between identical or mirrored objects increases linearly with the angular deviation between them. Although mental rotation is regarded as a distributed processing task, its dependence on object features is still not well understood. Therefore, we investigated the effect of structured color cues on an object's surface with a mental rotation task. Observers viewed two side-by-side images of Shepard-Metzler type objects rendered for the following conditions: (1) objects had distinctively colored surfaces (these surfaces had the same color from different viewpoints), (2) objects had distinctive yet differently colored surfaces (colors of the surfaces changed with every viewpoint), (3) objects had distinctive dark gray surfaces (similar to 1 with gray instead of colored surfaces), (4) objects without any color information (uniformly gray). The viewing angle differed between the two objects from an egocentric frame of reference and the task was to determine as quickly as possible if the objects were identical or mirrored. Reaction times and eye movement data were recorded. Rotation effects were seen across all four conditions, but they were largest for condition 2 and smallest for condition 3. The color cues in condition 2 seemed to make the task less efficient, as evidenced by the increased reaction times, number of fixations and number of comparisons made between the images compared to the other conditions. For larger angular deviations in condition 1, the highly distinctive cues seemed to bias the observers' strategy toward comparing the local object structure near the cues, as indicated by more comparisons in further spread-out locations. The results suggest that, particularly for highly demanding rotation tasks, distinctive features induce multiple local comparisons of the object structure whereas the absence of such features tends to induce mental rotation of larger parts of the object or the entire object.

33.553 Invariant behavioural templates for object recognition in humans and rats
Ben Vermaercke1 (ben.vermaercke@psy.kuleuven.be), Hans Op De Beeck1; 1Laboratorium of Biological Psychology, University of Leuven, Belgium
The human visual system is expert in object recognition. We can identify objects under different viewing conditions; that is, object recognition shows a great deal of invariance. How the brain accomplishes this complex task is puzzling neuroscientists. Many studies have applied methods that visualize linear relationships between image properties and performance, such as classification images, 'bubbles', and reverse correlation. However, the existence of invariance means that simple relationships between image properties and performance do not exist, except in artificial experimental situations. The validity of results obtained in such situations is not clear. This problem is all the more relevant in studies of other animals that might be prone to rely on simple strategies. Here we extended the bubbles technique to explicitly study invariant object recognition in humans as well as rats. We trained humans and five Brown Norway rats to discriminate two simple shapes (square vs triangle) that were partially occluded with bubbles. At first, the shapes had a fixed position. Behavioural templates from these data were complex, also for rats, and consisted of a few spatially separated spots.
Then these shapes were shown at random screen positions to prevent any simple relationship between the content of specific pixels and the stimulus. As expected, we no longer obtained clear behavioural templates with traditional classification image analyses. However, if we adapt the analyses by taking the position shift of the stimulus into account and normalizing the position of the bubbles for the position of the stimulus, then again a behavioural template is found - for both species. We conclude that methods to visualize behavioural templates can be adapted to include the full complexity and inherent nonlinearity of object recognition, and allow the investigation of these nonlinearities in humans and even in species that are not typically considered as being 'visual'.
Acknowledgement: CREA/07/004
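The position-normalized analysis the authors describe can be sketched as follows: instead of averaging bubble masks in screen coordinates, re-center each trial's mask on that trial's stimulus location before accumulating. The function below is our reconstruction, not the authors' code; the names and the correct-minus-incorrect weighting are assumptions.

    import numpy as np

    def stimulus_centered_template(masks, stim_xy, responses, patch=64):
        """masks: list of (H, W) bubble masks; stim_xy: per-trial stimulus
        top-left corners (col, row); responses: 1 = correct, 0 = incorrect."""
        acc = np.zeros((patch, patch))
        for mask, (x, y) in zip(masks, stim_xy):
            pass  # placeholder to keep loop variables explicit below
        for mask, (x, y), resp in zip(masks, stim_xy, responses):
            crop = mask[y:y + patch, x:x + patch]   # stimulus-centred coordinates
            acc += crop if resp else -crop          # correct minus incorrect trials
        return acc / len(masks)

    # toy usage: 200 trials, stimulus placed at random positions on a 128x128 screen
    rng = np.random.default_rng(0)
    masks = [rng.random((128, 128)) for _ in range(200)]
    stim_xy = [tuple(rng.integers(0, 64, size=2)) for _ in range(200)]
    responses = rng.integers(0, 2, size=200)
    template = stimulus_centered_template(masks, stim_xy, responses)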
33.554 A Theory of Size-Invariance in Human Object Recognition
Li Zhao1 (bcshaust@163.com); 1Brain and Cognitive Sciences Institute, Henan University of Science and Technology, Luoyang, Henan 471003, China, P.R.
Human observers can recognize an object as small as a thumbnail-sized photo or as large as a several-meter portrait photo. How do human observers realize this remarkable feat? What is the underlying neural mechanism? Some researchers have suggested this is achieved by routing neural circuits processing different size objects to a size-invariant neuron. However this seems logically circular, since even if the researchers knew the routing circuits processing different size objects, it is unclear how the brain knows this in the first place. Here we give a theory for size invariance. We argue that size invariance is logically equivalent to a unique representation for an object, and that neural structures count more than learning, at least for size invariance. Human observers can recognize any different sized object after observing a specific size object. This is equivalent to a unique representation for an object. This can be proved mathematically easily. Then the problem becomes: what is the unique representation? If we want to get a large object representation from a small object representation, some detail information (edges) is not available. On the other hand, we can get a small object representation from a large object representation. Therefore we suggest that the unique representation is the smallest representation for an object. To achieve this, any object projected on the retina is first processed by the neural system to the smallest object the neural system can represent. Here the smallest object may be the smallest detectable by the human vision system. Under this theory, all different size objects converge to this smallest object representation. This final convergent connection can be hard-wired by the brain due to neural structures representing the equivalence. This equivalent structure, however, may be partially built from evolution through observing objects moving from different distances.
Acknowledgement: China Natural Sciences Foundation

33.555 Robust object and face recognition using a biologically plausible model
Garrison Cottrell1 (gary@ucsd.edu), Christopher Kanan1; 1Computer Science and Engineering, UCSD
The human ability to accurately recognize objects and faces after only a single observation is unparalleled, even by state-of-the-art computer vision systems. While the performance of computer vision systems has increased dramatically in recent years, the systems tend to be optimized for particular domains (e.g., faces), typically require many training examples, and use features specially engineered for the task. We have developed a Bayesian computational model based on several characteristics of the human visual system: we move our eyes to regions of high salience, sample the image via fixations, and use oriented edge filters as our primary representation. Our model combines a bottom-up salience map, a memory for fixations, and features learned from natural scenes by a sparse coding algorithm (Independent Components Analysis, ICA). The features developed by ICA share many similarities with simple cells in V1, including the same color-opponent channels. The statistical frequency of each ICA filter is estimated by fitting its responses to generalized Gaussian distributions. Using these statistics, we compute a saliency map to find rare visual features. The saliency map is treated as a probability distribution and sampled to generate simulated fixations. Fixation memory is represented as a probability density using a kernel density model. During recognition, sequentially acquired fixations are used to update a posterior probability distribution using Bayes' rule, representing the model's current beliefs. Our model has a small number of free parameters that are learned on two datasets of Butterflies and Birds. The complete model, then, uses features learned from natural images and parameter settings developed on datasets completely disjoint from the sets it is applied to, which are CalTech 101, CalTech 256, the Flower 102 dataset, and the AR face recognition dataset. Our method produces state-of-the-art results on these sets, and exceeds all other approaches when trained on one example.
Acknowledgement: This work was supported in part by NSF grant #SBE 0542013 to the Temporal Dynamics of Learning Center, an NSF Science of Learning Center.
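The rarity computation in this pipeline can be sketched directly from the description: fit a generalized Gaussian to each ICA filter's responses, score each location by how improbable its responses are, and sample fixations from the normalized map. A minimal sketch, assuming scipy's gennorm for the fit; the softmax-style normalization is our assumption (the abstract only says the map is treated as a probability distribution).

    import numpy as np
    from scipy.stats import gennorm

    def saliency_map(responses):
        """responses: (n_filters, H, W) ICA filter outputs for one image."""
        log_p = np.zeros(responses.shape[1:])
        for r in responses:
            beta, loc, scale = gennorm.fit(r.ravel())   # generalized Gaussian fit
            log_p += gennorm.logpdf(r, beta, loc, scale)
        s = -log_p                                      # rare features -> salient
        p = np.exp(s - s.max())                         # softmax-style normalization (assumed)
        return p / p.sum()

    def sample_fixations(p, n=5, seed=0):
        rng = np.random.default_rng(seed)
        idx = rng.choice(p.size, size=n, p=p.ravel())
        return np.column_stack(np.unravel_index(idx, p.shape))  # (row, col) fixations

    p = saliency_map(np.random.default_rng(1).normal(size=(4, 16, 16)))
    print(sample_fixations(p))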


33.556 Reading words and seeing style: The neuropsychology of word, font and handwriting perception
Jason Barton1,2,3 (jasonbarton@shaw.ca), Alla Sekunova1,2, Claire Sheldon1, Giuseppe Iaria4, Michael Scheel1,2; 1Department of Ophthalmology and Visual Sciences, University of British Columbia, 2Department of Medicine (Neurology), University of British Columbia, 3Department of Psychology, University of British Columbia, 4Department of Psychology, University of Calgary
Reading is considered to be primarily a function of the left hemisphere. However, it is also possible to process text for attributes other than the identity of words and letters, such as the style of font or handwriting. Older anecdotal observations have suggested that processing of handwriting style may involve the right hemisphere. There is also some fMRI evidence of sensitivity to style in either right or left visual word form areas in the fusiform gyri. We created a test that, using the same set of text stimuli, required subjects first to sort text on the basis of word identity and second to sort text on the basis of script style. We presented two versions, one using various computer fonts and the other using the handwriting of different individuals, and measured accuracy and completion times. For testing we selected four subjects with unilateral fusiform lesions and problems with object processing who had been well characterized by neuropsychological testing and structural and/or functional MRI. We found that one alexic subject with left fusiform damage performed well when sorting by script style but had markedly prolonged reading times when sorting by word identity. In contrast, two prosopagnosic subjects with right lateral fusiform damage that eliminated the fusiform face area and likely the right visual word form area were impaired in sorting for script style, but performed better when sorting for word identity. Another prosopagnosic subject with right medial occipitotemporal damage sparing areas in the lateral fusiform gyrus performed well on both tasks. The contrast in the performance of patients with right versus left fusiform damage suggests an important distinction in hemispheric processing that reflects not the type of stimulus but the nature of the processing operations required.
Acknowledgement: CIHR grant MOP-77615, Alzheimer Society of Canada, Michael Smith Foundation for Health Research, Canada Research Chair program.


Sunday Afternoon Talks

Eye movements: Top-down effects
Sunday, May 9, 2:45 - 4:15 pm
Talk Session, Royal Ballroom 1-3
Moderator: Anna Montagnini

34.11, 2:45 pm
Anticipatory eye-movements under uncertainty: a window onto the internal representation of a visuomotor prior
Anna Montagnini1 (Anna.Montagnini@incm.cnrs-mrs.fr), David Souto2,3, Guillaume Masson1; 1Institut de Neurosciences Cognitives de la Méditerranée, CNRS and Aix-Marseille University, Marseille, France, 2Faculté de psychologie et des sciences de l'éducation, University of Geneva, 3Cognitive, Perceptual and Brain Sciences, University College London, UK
Predictive information plays a major role in the control of eye movements. When a visual event can be predicted with some confidence, the delay to initiate an oculomotor response is reduced and anticipatory movements oriented toward the predicted event can be observed. These effects of predictability unveil the expectancy state (or prior) of the visuomotor system. Here we try to infer some general properties of the internal representation of a visuomotor prior and its trial-by-trial buildup, by parametrically manipulating uncertainty (thus predictability) in a visual tracking task. We analyze anticipatory smooth pursuit eye movements (aSPEM) in human subjects, when the relative probability p of occurrence of one target motion type (Right vs Left or Fast vs Slow target motion, in two experiments) was varied across experimental blocks. We observed that aSPEM velocity varies consistently both as a function of the recent trial history (local effect) and as a function of the block probability bias p (global effect). A single model based on a finite-memory, Bayesian integrator of evidence allows us to mimic both local and global effects. The comparison of model predictions (through numerical simulations) and data suggests that: aSPEM are based on an internal continuous estimate of the probability bias p (as reflected by the unimodal distribution of aSPEM); the estimate of p is updated according to an (almost) optimal model of integration of probabilistic knowledge, accommodating experience-related and newly incoming information (the current trial); this integration leads in particular to an asymptotic linear dependence of mean aSPEM upon p and an aSPEM variance proportional to p(1-p); and an additional Gaussian motor noise with variance proportional to the squared anticipatory velocity affects aSPEM. We conclude that the analysis of anticipatory eye movements may open a window on the dynamic representation of the Bayesian prior for simple visuomotor decisions.
Acknowledgement: FACETS IST/FET 6th Framework
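The abstract does not give the integrator's equations; one simple finite-memory Bayesian integrator consistent with the description is a Beta-Bernoulli estimate of p with exponential forgetting, with mean anticipatory velocity linear in the estimate. Everything below (the forgetting factor, the velocity scaling) is an illustrative assumption, not the authors' model.

    import numpy as np

    def estimate_bias(outcomes, memory=0.9):
        """Leaky Beta-Bernoulli estimate of p(rightward) after each trial."""
        a = b = 1.0                          # uniform Beta prior
        p_hat = []
        for o in outcomes:                   # o = 1 rightward, 0 leftward
            a = memory * a + o               # forgetting implements finite memory
            b = memory * b + (1 - o)
            p_hat.append(a / (a + b))
        return np.array(p_hat)

    rng = np.random.default_rng(1)
    outcomes = (rng.random(200) < 0.75).astype(int)   # block with bias p = 0.75
    p_hat = estimate_bias(outcomes)
    v_anticipatory = 2.0 * p_hat             # mean aSPEM linear in estimated p (deg/s, assumed)
    print(p_hat[-5:])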
In the saliency baselinecondition, we instructed subjects to make saccades to the target configuration.In the reward condition, subjects won points for landing on one targetand lost points for landing on the other target. To manipulate the reward,we varied the relative amount of bonus and penalty. The subjects wereinstructed to make as many points as possible, which were converted intoa monetary reward at the end of the experiment. Both saliency and rewardinfluenced the saccade landing positions. Subjects’ saccades landed closerto the patches that were more salient and rewarded. A model was able toaccount for these data by linearly weighting and combining saliency andreward. Saliency was modeled as the average of the two target positions,weighted by their relative contrast. For the reward, we predicted the optimalsaccade endpoint that maximizes gain, based on the individual saccadevariability and the bonus-penalty ratio. Interestingly, the relative weightswere modulated by the latency of the saccades. While fast saccades nearlyexclusively used salience to determine the landing point, slower saccadesgave a higher weight to reward information. Our results show that rewardsdo not only affect saccadic target selection, but also the exact landing positionwithin the target. However, integration of this top-down factor is timeconsumingand can be overridden in saccades with short latencies.Acknowledgement: This work was supported by the DFG Forschergruppe FOR 56034.13, 3:15 pmVisual Working Memory Influences the Speed and Accuracy ofSimple Saccadic Eye MovementsAndrew Hollingworth 1 (andrew-hollingworth@uiowa.edu), Michi Matsukura 1 , StevenJ. Luck 2 ; 1 Department of Psychology, The University of Iowa, 2 Center for Mind &Brain and Department of Psychology, University of California, DavisVisual working memory exerts top-down control over the allocation attention,typically by biasing attention toward visual objects that share featureswith those currently maintained in memory. In the present study, we examinedthe top-down influence of VWM on the dynamics of simple saccadiceye movements, which are traditionally thought to be generated automaticallyon the basis of low-level stimulus events. Participants held a color inVWM as they executed a saccade to an abruptly appearing colored disk,which either appeared alone or in the presence of a distractor disk. Evenwhen the target appeared alone, the execution of a saccade to the targetwas faster and more accurate when the target’s color matched the colorcurrently held in VWM. This implies that the color of the target disk wascompared with VWM prior to saccade execution, and that a match influencedthe efficiency of the saccade. This effect of VWM is striking giventhat the target was an abrupt luminance onset in an otherwise empty field.Even stronger effects of VWM were observed when a single distractor waspresented simultaneously with the target. In particular, the presence of adistractor led to large impairments in saccade timing and accuracy whenthe distractor matched memory but the target did not, and much smallerimpairments when the target matched memory but the distractor did not.This interaction held in a global effect paradigm, with saccade landing positionbiased toward the object that matched memory. It was also observed ina remote distractor paradigm, with a matching distractor capturing gaze ona significant proportion of trials and, in the absence of overt capture, slowingexecution of the primary saccade. 
34.13, 3:15 pm
Visual Working Memory Influences the Speed and Accuracy of Simple Saccadic Eye Movements
Andrew Hollingworth1 (andrew-hollingworth@uiowa.edu), Michi Matsukura1, Steven J. Luck2; 1Department of Psychology, The University of Iowa, 2Center for Mind & Brain and Department of Psychology, University of California, Davis
Visual working memory exerts top-down control over the allocation of attention, typically by biasing attention toward visual objects that share features with those currently maintained in memory. In the present study, we examined the top-down influence of VWM on the dynamics of simple saccadic eye movements, which are traditionally thought to be generated automatically on the basis of low-level stimulus events. Participants held a color in VWM as they executed a saccade to an abruptly appearing colored disk, which either appeared alone or in the presence of a distractor disk. Even when the target appeared alone, the execution of a saccade to the target was faster and more accurate when the target's color matched the color currently held in VWM. This implies that the color of the target disk was compared with VWM prior to saccade execution, and that a match influenced the efficiency of the saccade. This effect of VWM is striking given that the target was an abrupt luminance onset in an otherwise empty field. Even stronger effects of VWM were observed when a single distractor was presented simultaneously with the target. In particular, the presence of a distractor led to large impairments in saccade timing and accuracy when the distractor matched memory but the target did not, and much smaller impairments when the target matched memory but the distractor did not. This interaction held in a global effect paradigm, with saccade landing position biased toward the object that matched memory. It was also observed in a remote distractor paradigm, with a matching distractor capturing gaze on a significant proportion of trials and, in the absence of overt capture, slowing execution of the primary saccade. These findings indicate that VWM influences gaze control under conditions in which eye movements are typically thought to be stimulus driven.
Acknowledgement: NIH R01EY017356

34.14, 3:30 pm
The effect of previous implicit knowledge on eye movements in free viewing
Maolong Cui1,2 (mlcui@brandeis.edu), Gergo Orban3, Mate Lengyel3, Jozsef Fiser2,4; 1Graduate Program in Psychology, Brandeis University, 2Volen Center for Complex Systems, Brandeis University, 3Department of Engineering, University of Cambridge, 4Department of Psychology, Brandeis University
We investigated whether previous knowledge about the underlying structure of scenes influences eye movements during free exploration. Subjects (N=6) were presented with a sequence of scenes, each for 3 seconds, consisting of a 5x5 grid and 6 shapes in various cells of the grid. The scenes were composed of two triplet chunks (three elements in fixed spatial relation) selected from an inventory of 4 triplets and configured randomly. Two hundred scenes were presented and eye movements of the subjects were recorded, while they performed a two-back comparison: they had to notice any change between the current display and the one before the previous display. Next, a 2AFC task was used to assess how well subjects learned the underlying statistical structure of the scenes by choosing the more familiar of the two presented pattern fragments in each trial. We trained an online probabilistic non-parametric ideal observer model to learn the underlying structure of the scene including the number and identity of chunks composing each scene. For each trial, we used the model to predict subjects' eye movements based on the estimated reduction in uncertainty about identities of shapes, given previous fixations in the scene and the knowledge of previous scenes. For each subject, we found a significant relationship between saccade length and the reduction of uncertainties produced by the saccade (p<0.05), for longer saccades, the average uncertainties reduction was 0.355 (p~200 ms), the visual system is able to correct this 'depth illusion' by a fixational saccade and vergence eye movements. When observers maintain fixation but the target swaps between near and far location, no differences in eye gaze and vergence were observed. Thus the saccadic adjustment and vergence are saccade dependent. This adjustment does not represent a correction due to under- or overshoot of the saccade. Neither does it reflect an adjustment to vergence related to disturbance to the saccade. Thus eye movements (saccades and vergence) are guided by the perceived stimulus, and during fixation eye movement signals are guided by the physical stimulus.

Object recognition: Object and scene processing
Sunday, May 9, 2:45 - 4:15 pm
Talk Session, Royal Ballroom 4-5
Moderator: Gabriel Kreiman

34.21, 2:45 pm
Contextual associations in the brain: past, present and future
Moshe Bar1 (bar@nmr.mgh.harvard.edu); 1Massachusetts General Hospital and Harvard Medical School
Objects in our environment tend to appear in typical contexts and configurations. The question of how the brain forms, represents and activates such contextual associations to facilitate our perception, cognition and action is fundamental. In recent years, we have characterized many aspects of the cortical mechanisms that mediate contextual associations, and this topic has received a much needed surge of attention that has resulted in numerous findings by our community. The purpose of this talk is to overview what has been achieved in this research program so far; bridge findings that on the face of things may seem contradictory; discuss far-reaching implications that go from vision all the way to the brain's "default network" and to the relationship between associative thinking and mood regulation; and, finally, list critical milestones that should be met in coming years so that vision and memory are better connected, feedforward and feedback processes are better integrated, and more about the contextual cortical network is illuminated.
Acknowledgement: NIH NS050615, NIH EY019477 and NSF #215082

34.22, 3:00 pm
Mechanisms of perceptual organization provide auto-zoom and auto-localization for attention to objects
Stefan Mihalas1,2 (mihalas@jhu.edu), Yi Dong1,2, Rudiger von der Heydt1,2, Ernst Niebur1,2; 1Mind/Brain Institute, Johns Hopkins University, 2Department of Neuroscience, Johns Hopkins University
Visual attention is often understood as a modulatory field at early stages of processing. In primates, attentive selection is influenced by figure-ground segregation, which occurs at early stages in the visual cortex. The mechanism that directs and fits the field to the object to be attended is not known. We propose here that the same neural structures that serve figure-ground organization automatically focus attention onto a perceptual object.
Specifically, we show that an additive attentional input which is spatially broad and not tuned for object scale produces a quasi-multiplicative attentional modulation which is repositioned and sharpened to match the object contours (auto-localization) and tuned for the scale of the object (auto-zoom). The model quantitatively reproduces the changes in attentional modulation caused by the presence of objects observed at the level of V2. The proposed mechanism works with generic, zero-threshold linear neurons and additive inputs, and the connection patterns are plausibly related to the statistics of natural visual scenes. We performed a global sensitivity analysis to determine the dependence of the attentional modulation, border ownership modulation and their interaction on several parameters in the model. The pattern and strength of the lateral inhibition are key to obtaining a sharpening of the attention field and a quasi-multiplicative attention modulation with an additive attention input. The strength of reciprocal connections from neurons representing local features and neurons integrating them, and inhibition between inconsistent proto-object representations, are important to repositioning and tuning for scale of the attention field.
Acknowledgement: 5R01EY016281-02 and R01-NS40596
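This is not the authors' network, but the model class they describe (zero-threshold linear units, additive attention, lateral inhibition) can be explored with a toy one-dimensional simulation like the one below; comparing attended and unattended steady states lets one inspect whether the broad additive input ends up concentrated on the object, as the abstract reports for the full model. Kernel shapes and gains here are arbitrary choices.

    import numpy as np

    n = 60
    x = np.arange(n)
    drive = np.exp(-0.5 * ((x - 30) / 4.0) ** 2)      # feedforward drive from an "object"
    attention = 0.2                                    # broad, additive attention input

    # toy lateral-inhibition kernel (each unit inhibits its neighbours)
    W = -0.05 * np.exp(-np.abs(x[:, None] - x[None, :]) / 5.0)
    np.fill_diagonal(W, 0.0)

    def steady_state(total_drive, W, iters=500):
        r = np.zeros_like(total_drive)
        for _ in range(iters):
            r = np.maximum(0.0, total_drive + W @ r)   # zero-threshold linear units
        return r

    r_att = steady_state(drive + attention, W)
    r_un = steady_state(drive, W)
    modulation = r_att / np.maximum(r_un, 1e-9)        # flat = additive; object-shaped = quasi-multiplicative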


34.23, 3:15 pm
Robustness to image clutter in human visual cortex
Gabriel Kreiman1,2,3 (gabriel.kreiman@childrens.harvard.edu), Yigal Agam1, Hesheng Liu1, Calin Buia1, Alexander Papanastassiou4, Alexandra Golby5, Joseph Madsen4; 1Department of Ophthalmology, Children's Hospital, Harvard Medical School, 2Center for Brain Science, Harvard University, 3Swartz Center for Theoretical Neuroscience, Harvard University, 4Department of Neurosurgery, Children's Hospital, Harvard Medical School, 5Department of Neurosurgery, Brigham and Women's Hospital
Visual recognition in natural scenes operates in the presence of multiple objects, background and occlusion. How the neural representation of images containing isolated objects extrapolates to cluttered images remains unclear. The responses of neurons along the monkey ventral visual cortex to cluttered images show varying degrees of suppressive effects. Attention could alleviate suppression by enhancing responses to specific features or locations. Yet, it seems difficult to account for the accurate and fast recognition capacity of primates exclusively by serial attentional shifts. Here we recorded intracranial field potentials from 672 electrodes in human visual cortex while subjects were presented with 100 ms flashes of images containing either one or two objects. We could rapidly and accurately read out information about objects in single trials in cluttered images from the physiological responses. These observations could account for human fast recognition performance and are compatible with simple hierarchical architectures proposed for immediate recognition.
Acknowledgement: NIH, NSF, Whitehall Foundation, Lions Foundation, Klingenstein Fund
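The abstract reports single-trial readout without naming the decoder; a standard choice for this kind of analysis is a cross-validated linear classifier on per-electrode response features. Below is such a readout on synthetic data; the planted signal and feature choice are purely illustrative, not the authors' analysis.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    n_trials, n_electrodes = 200, 64
    labels = rng.integers(0, 2, n_trials)              # two object identities
    X = rng.normal(size=(n_trials, n_electrodes))      # stand-in for field-potential amplitudes
    X[labels == 1, :8] += 0.8                          # planted object signal on 8 electrodes

    decoder = make_pipeline(StandardScaler(), LinearSVC())
    accuracy = cross_val_score(decoder, X, labels, cv=5).mean()
    print(f"single-trial decoding accuracy: {accuracy:.2f}")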
34.24, 3:30 pm
Task dependence and level of processing in category-specific regions of the ventral stream
Pinglei Bao1 (pbao@usc.edu), Bosco S. Tjan1,2; 1Neuroscience Graduate Program, University of Southern California, 2Department of Psychology, University of Southern California
Several modules along the ventral visual pathway selectively respond to specific image categories: fusiform face area (FFA) to faces, parahippocampal place area (PPA) to scenes, and extrastriate body area (EBA) to bodies. The existence of these category-specific regions suggests that there are distinct branches in the visual-processing hierarchy. The relative levels of processing or abstraction among these category-specific regions are not known. Here we sought to determine the relative levels of processing for FFA and PPA using fMRI. We used noise-masked stimuli consisting of both a face and a scene, transparently superimposed. Subjects performed a face task and a scene task in separate scans using the same set of stimuli. For the scene task, subjects decided whether the scenes in two successively presented images were from the same scenery; for the face task, they determined if the two faces were of the same individual. The transparency of the faces and scenes was adjusted such that the accuracies for both tasks were similar. For each ROI (V1-hV4, LO, pFs, FFA, PPA) and task, we measured BOLD amplitude as a function of image SNR. We found that, for both tasks, the log-log slope of the BOLD response function increased monotonically from low- to high-level visual areas. The log-log slopes of the BOLD response functions of FFA during the face task and PPA during the scene task placed both areas at a similar level of processing as pFs (Tjan, Lestou and Kourtzi, 2006). However, FFA and PPA differed in that FFA was modulated equally by image SNR during both tasks, while PPA was not modulated at all during the face task but was strongly modulated during the scene task. This suggests that face processing in FFA is involuntary while scene processing in PPA is task-dependent.
Acknowledgement: NIH R03-EY016391, R01-EY017707
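The "log-log slope of the BOLD response function" is the exponent of a power-law fit B = k·SNR^γ, which can be estimated by linear regression in log-log coordinates; the numbers below are made up for illustration.

    import numpy as np

    snr = np.array([0.125, 0.25, 0.5, 1.0])        # hypothetical image SNR levels
    bold = np.array([0.20, 0.35, 0.60, 1.05])      # hypothetical BOLD amplitudes
    gamma, log_k = np.polyfit(np.log(snr), np.log(bold), 1)
    print(f"log-log slope (power-law exponent): {gamma:.2f}")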
34.25, 3:45 pm
Examining how the real-world size of objects is represented in ventral visual cortex
Talia Konkle1 (tkonkle@mit.edu), Aude Oliva1; 1Brain & Cognitive Sciences, Massachusetts Institute of Technology
The size of objects in the world influences how we interact with them, but little is known about how known physical size is involved in object processing and representation. Here we examined if the dimension of real-world size is systematically represented in ventral visual cortex. In Experiment 1, observers were presented with blocks of small objects (e.g. strawberry, calculator) and blocks of big objects (e.g. car, piano) displayed at the same visual size (8 degrees) while undergoing whole-brain imaging in a 3T fMRI scanner. Contrasts of big and small objects revealed that a region in the parahippocampal gyrus was preferentially active to big objects versus small objects, while a subregion along the lateral occipital cortex was preferentially active to small objects versus big objects. In Experiment 2, objects with big and small real-world sizes were displayed at two visual sizes on the screen (10 degrees and 4 degrees). The same regions were selective for big or small objects, independent of the visual size presented on the screen, indicating that these regions are tolerant to changes in visual size. In Experiment 3, observers were shown blocks of objects grouped by category, with 16 different object categories spanning the range of real-world sizes. We observed parametric modulation of the big and small regions of interest: in the big ROI, activity increased as object size increased (r=.74, p.1). These results suggest that activity in the ventral visual cortex depends systematically on real-world size. Whether this modulation is based on accessing existing knowledge, or on a combination of low-level properties that are correlated with real-world size, these results highlight a new dimension of information processing in ventral visual cortex.
Acknowledgement: NSF Graduate Fellowship to T.K.

34.26, 4:00 pm
Depth Structure from Shading Enhances Face Discrimination
Chien-Chung Chen1,2 (c3chen@ntu.edu.tw), Chin-Mei Chen1, Christopher Tyler3; 1Department of Psychology, National Taiwan University, 2Neurobiology and Cognitive Science Center, National Taiwan University, 3The Smith-Kettlewell Eye Research Institute
To study how the visual system computes the 3D shape of faces from shading information, we manipulated the illumination conditions on 3D-scanned face models and measured how face discrimination changes with lighting direction. To dissociate the surface albedo and illumination components of face images, we used a symmetry algorithm to separate the symmetric and asymmetric components of face images in both low and high spatial frequency bands. Stimuli were hybrid male/female faces with different combinations of symmetric and asymmetric spatial content. We verified that the perceived depth of the face was proportional to the degree of asymmetric low spatial frequency (shading) information in the faces. The symmetric component was morphed from a male face to a female one. The asymmetric shading component was manipulated through the change of lighting direction from 0 degrees (front) to 60 degrees (side). In each trial, the task of the observer was to determine whether the test image was male or female. The proportion of "female" responses increased with the proportion of the female component in a morph. Faces with asymmetric "male" shading were more easily judged as male than those with "female" shading, and vice versa. This shading effect increased with lighting direction. Conversely, the low spatial frequency symmetric information had little, if any, effect. The perceived depth of a face increased with shading information but not symmetric information. Together, these results suggest that (1) the shading information from asymmetric low spatial frequencies dramatically affects both perceived face identity and perceived depth of the facial structure; and (2) this effect increases as the lighting direction shifts to the side. Thus, our results provide evidence that face processing has a strong 3D component.
Acknowledgement: NSC 96-2413-H-002-006-MY3
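The paper's "symmetry algorithm" is not spelled out here; one standard way to get such a decomposition for a midline-aligned face is to mirror the image about its vertical axis: the average of image and mirror is the symmetric component, half the difference is the asymmetric (shading) component, and a Gaussian blur separates low from high spatial frequencies. A sketch under those assumptions:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def symmetry_split(img):
        """img: (H, W) face image, midline at the center column. img == sym + asym."""
        mirrored = img[:, ::-1]
        sym = 0.5 * (img + mirrored)     # structure shared by both sides
        asym = 0.5 * (img - mirrored)    # lateral shading from side illumination
        return sym, asym

    def band_split(img, sigma=4.0):
        low = gaussian_filter(img, sigma)   # low-SF band (carries shading)
        return low, img - low               # high-SF band

    face = np.random.default_rng(0).random((64, 64))   # stand-in image
    sym, asym = symmetry_split(face)
    low, high = band_split(asym)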


Spatial vision: Mechanisms and models
Sunday, May 9, 5:15 - 7:00 pm
Talk Session, Royal Ballroom 1-3
Moderator: Susana Chung

35.11, 5:15 pm
"Buffy contrast adaptation" with a single Gabor patch
Norma Graham1 (nvg1@columbia.edu), S. Sabina Wolfson1, Ian Kwok1, Boris Grinshpun1; 1Dept of Psychology, Columbia University
A few years ago we discovered a rather surprising effect of short-term adaptation to visual contrast, an effect we now call the Straddle Effect (although originally nicknamed "Buffy adaptation"). After adapting for less than a second to a grid of evenly-spaced Gabor patches all at one contrast, a test pattern composed of two different test contrasts can be easy or difficult to perceive correctly. When the two test contrasts are both a bit less (or both a bit greater) than the adapt contrast, observers perform very well. However, when the two test contrasts straddle the adapt contrast (i.e. one of the test contrasts is greater than and the other test contrast is less than the adapt contrast), performance drops dramatically. To explain the Straddle Effect, we proposed a shifting, rectifying contrast-comparison process. In this process a comparison level is continually updated at each spatial position to equal the recent (less than a second) weighted average of contrast at that spatial position. The comparison level is subtracted from the current input contrast, and the magnitude of the difference is sent upstream, but information about the sign of that difference is lost or at least degraded. In this previous work the test pattern and the observer's task were of the type known as second-order. We began to wonder: is that necessary? As it turns out, the answer is "no". Here we will show a temporal Straddle Effect with a single Gabor patch having contrast that varies over time (in a two-temporal-interval same/different task). Thus the shifting, rectifying contrast-comparison process may occur in both spatially first-order and second-order vision. The important quantity in human contrast processing may not be something monotonic with physical contrast but something more like the unsigned difference between current contrast and recent average contrast.
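The proposed process is concrete enough to simulate: track a comparison level as a recent weighted average of contrast (here an exponential/leaky average, our assumption about the weighting), and output the unsigned difference from it. The demo shows why straddling test contrasts become confusable while same-side contrasts do not.

    import numpy as np

    def contrast_comparison(contrast, dt=0.01, tau=0.3):
        """Shifting, rectifying contrast-comparison process (sketch)."""
        level, out = contrast[0], []
        for c in contrast:
            level += (dt / tau) * (c - level)   # comparison level tracks recent average
            out.append(abs(c - level))          # magnitude passed on, sign lost
        return np.array(out)

    adapt = np.full(80, 0.50)                   # adapt at 50% contrast
    for c1, c2 in [(0.45, 0.55), (0.55, 0.60)]: # straddle pair vs both-above pair
        r1 = contrast_comparison(np.append(adapt, np.full(20, c1)))[-1]
        r2 = contrast_comparison(np.append(adapt, np.full(20, c2)))[-1]
        print(f"tests {c1:.2f}/{c2:.2f}: outputs {r1:.3f} vs {r2:.3f}")

Under these assumptions the straddle pair (0.45/0.55) yields identical output magnitudes, so the two test contrasts cannot be told apart once the sign is discarded.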
35.12, 5:30 pm
The Role of Temporal Transients in Forward and Backward Masking
John Foley1 (foley@psych.ucsb.edu); 1Department of Psychology, University of California, Santa Barbara
Contrast discrimination is poor when the test contrasts straddle the contrast of a context pattern presented just before and after the test pattern (Wolfson & Graham, 2007). With error feedback, discrimination is much better when both test contrasts are above or below the contrast of the context pattern. My hypothesis is that this phenomenon is caused by transients produced by the rapid stimulus change that are discriminable in magnitude, but not in sign; these transients determine performance when the stimuli are immediately adjacent in time, but not otherwise. I tested this hypothesis in four contrast discrimination experiments using a two-alternative spatial forced-choice task with Gabor test patterns presented between Gabor forward and backward masks. The test interval was constant at 100 msec. For each mask contrast, there was a small fixed test contrast difference. The task was to indicate which contrast was higher. The two test contrasts were either below, above, or symmetrically straddling the mask contrast. Trials were blocked by mask contrast and test contrast pair. The experiments show: 1) In the absence of feedback, when both contrasts are below the mask contrast, responses are usually wrong; with feedback they are usually correct. 2) The phenomenon is produced with either 1 sec or 50 msec masks. 3) Performance in the straddle condition improves as a function of temporal gaps introduced between masks and test. 4) With gaps of 50 msec, performance is good in the straddle condition and gets worse for test contrast pairs below and above the mask contrast, the opposite of the phenomenon. The psychometric function for contrast detection was measured in the same paradigm. Proportion correct increases at very low contrasts, then decreases to a minimum at two times the mask contrast (straddle condition), then increases at higher contrasts. These results are consistent with the hypothesis.
Acknowledgement: NIH EY 12734

35.13, 5:45 pm
Classification Images in Free-Localization Tasks with Gaussian Noise
Craig Abbey1,2 (abbey@psych.ucsb.edu), Miguel Eckstein1; 1Dept. of Psychology, University of California Santa Barbara, 2Dept. of Biomedical Engineering, University of California Davis
Classification images have become an important tool for understanding visual processing in tasks limited by noise. However, with the exception of a few studies, the technique is currently limited by requiring that targets and distracters be well cued for location in yes-no or forced-choice psychophysical experiments. Here, we investigate the method in free-localization tasks, where subjects search a single contiguous image for a target and respond by indicating the location where the target is believed to be positioned. Subject responses can be acquired by a mouse or other pointing device, or indirectly by an eye-tracker. Free-localization tasks have a number of attractive qualities, including controllable incorporation of free search into detection tasks, higher target contrast, and a more informative subject response that results in fewer images needed to estimate a classification image. The approach we propose involves averaging incorrect localizations after alignment and correcting for the correlation structure of the noise. We have evaluated the proposed methods using linear filter models for a Gaussian luminance target embedded in Gaussian noise having power-law amplitude spectra with exponents from 0 (white noise) to -3/2. We find the result of overlapping (i.e. dependent) locations is a consistent underestimation in the lowest spatial frequencies in the classification images, which becomes more pronounced as the exponent decreases. Small motor errors in the localization response relative to the size of the target have little effect on the resulting classification images. However, as the motor errors exceed the target size, the classification images show consistent underestimation at high spatial frequencies that is also dependent on the exponent of the noise process. We find approximately a factor of two or more reduction in the number of trials needed to obtain a classification image with comparable signal-to-noise ratio to a yes-no task.
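The proposed estimator can be sketched as: align noise patches around the incorrect localizations, average them, then undo the noise correlation. For the correction we divide by the noise power spectrum in the Fourier domain (a whitening step; one plausible reading of "correcting for the correlation structure of the noise" — the authors' exact correction may differ).

    import numpy as np

    def free_localization_ci(noise_fields, clicks, patch=32, noise_psd=None):
        """noise_fields: per-trial (H, W) noise images; clicks: (col, row)
        incorrect-localization responses; noise_psd: (patch, patch) power spectrum."""
        half = patch // 2
        acc = np.zeros((patch, patch))
        for field, (x, y) in zip(noise_fields, clicks):
            padded = np.pad(field, half)               # tolerate clicks near edges
            acc += padded[y:y + patch, x:x + patch]    # patch centred on the click
        ci = acc / max(len(noise_fields), 1)
        if noise_psd is not None:                      # decorrelate (e.g., power-law noise)
            ci = np.real(np.fft.ifft2(np.fft.fft2(ci) / (noise_psd + 1e-12)))
        return ci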
35.14, 6:00 pm
Optimal detection and estimation of defocus in natural images
Johannes Burge1 (jburge@mail.cps.utexas.edu), Wilson Geisler1; 1UT Austin, Center for Perceptual Systems
Defocus signals are important in many aspects of vision including accommodation, the estimation of scale, distance, and depth, and the control of eye growth. However, little is known about the computations visual systems use to detect and estimate the magnitude of defocus under natural conditions. We investigated how to optimally estimate defocus blur in images of natural scenes, given the optical systems of primates. First, we selected a large set of well-focused natural image patches. Next, we filtered each image patch with point-spread functions derived from a wave-optics model of the primate (human) eye at different levels of defocus. Finally, we used a statistical learning method, based on Bayesian ideal observer theory, to determine the spatial-frequency filters that are optimal for estimating retinal image defocus in natural scenes. We found that near the center of the visual field, the optimal spatial-frequency filters form a systematic set that is concentrated in the range of 5-15 cyc/deg, the range that drives human accommodation. Furthermore, we found that the optimal filters can be closely approximated by a linear combination of a small number of difference-of-Gaussian filters. Cells with such center-surround receptive field structure are commonplace in the early visual system. Thus, retinal neurons sensitive to this frequency range should contribute strongly to the retinal and/or post-retinal mechanisms that detect and estimate defocus. The optimal filters were also used to detect, discriminate, and identify defocus levels for 1 deg natural image patches. Consistent with human psychophysical data, detection thresholds were higher than discrimination thresholds. Also, once defocus exceeds 0.25 diopters, we found that 0.25 diopter changes in defocus can be identified with better than 86% accuracy. The estimated optimal filters are biologically plausible and provide a rigorous starting point for developing principled hypotheses for the neural mechanisms that encode and exploit optical defocus signals.
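A small sketch of the "linear combination of difference-of-Gaussians" idea: build a few DoG transfer functions over spatial frequency and least-squares fit their weights to a target band-pass filter. The target here is a made-up stand-in, not the paper's estimated filter, and the DoG parameters are arbitrary.

    import numpy as np

    def dog(freq, s_center, s_surround, w_center=1.0, w_surround=0.9):
        """Difference-of-Gaussians transfer function over frequency (cyc/deg)."""
        return (w_center * np.exp(-(freq * s_center) ** 2)
                - w_surround * np.exp(-(freq * s_surround) ** 2))

    freq = np.linspace(0.5, 30.0, 200)                    # cyc/deg
    target = np.exp(-0.5 * ((freq - 10.0) / 3.0) ** 2)    # stand-in filter peaking ~10 cyc/deg
    basis = np.column_stack([dog(freq, s, 2 * s) for s in (0.02, 0.05, 0.10, 0.20)])
    weights, *_ = np.linalg.lstsq(basis, target, rcond=None)
    approx = basis @ weights                              # few-DoG approximation of the filter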


35.15, 6:15 pm
Spatial and Temporal Proximity of Objects for Maximal Crowding
Susana Chung1 (s.chung@berkeley.edu), Saumil Patel2; 1UC Berkeley, 2University of Texas Medical School at Houston
Crowding refers to the deleterious interaction among objects that are close together. A logical expectation is that crowding is maximal when the target and flankers are closest to one another. But is this so? Here, we examined how crowding depends on the retinal and perceptual spatial/temporal proximity between the target and flankers. We compared the crowding effect with the flash-lag effect, where flashed and moving targets are perceived to be spatially proximal when they are not retinally proximal and vice versa. Stimuli were high-contrast letter Ts (1.1°) presented randomly in one of four orientations. The target-T was presented at 10° right of fixation at the 3-o'clock position. A pair of flanking Ts, one on each side of the target-T, rotated around the fixation target at a velocity of 5 rpm. The target-T, flashed for 22 ms, appeared at different target-flanker delays (TFD) with respect to the instant at which the flankers reached the 3-o'clock position. In separate blocks of trials, observers judged the orientation of the target-T (the crowding task), or its position relative to the rotating flankers (the flash-lag task). Averaged across four observers, maximal crowding (reduction in accuracy of identifying the target's orientation) occurred for a TFD of –66±33 (SE) ms (target before flanker). This temporal delay for maximal crowding did not correspond to the flash-lag effect, which averaged –34±17 (SE) ms. A control experiment showed that when flankers were flashed briefly at the 3-o'clock positions at different TFDs, maximal crowding occurred for a TFD of –52±3 (SE) ms. Further, when flankers were presented at different angular positions but simultaneously with the target, maximal crowding occurred when flankers were close to the 3-o'clock position and the "flash-lag effect" was virtually zero. Our results suggest that highest retinal or perceptual spatial/temporal proximity between target and flankers is not a necessary requirement for maximal crowding.
Acknowledgement: NIH grant R01-EY012810 (SC) and NSF grant BCS 0924636 (AS & SP)

35.16, 6:30 pm
Targets uncrowd when they pop out
Bilge Sayim1 (bilge.sayim@epfl.ch), Gerald Westheimer2, Michael H. Herzog1; 1Brain Mind Institute, EPF Lausanne, Switzerland, 2Department of Molecular and Cell Biology, University of California, Berkeley
The perception of a target usually deteriorates when flanked by neighboring elements, so-called crowding. Explanations of crowding in the periphery and the fovea are often based on local neural interactions, such as spatial pooling, excessive feature integration, or lateral inhibition. In contrast, we proposed that the grouping of the target with the flankers determines crowding. In a visual search task, a target that differed from distractors by its unique color (pop-out) yielded faster reaction times than a target that differed by a combination of color and size (no pop-out; serial search). Identical stimulus configurations, but now presented for only 150 msec, included a vernier target that required the observer's judgment of the direction of the offset. Even though the location of the target within the array was marked in both, the proportion of correct vernier responses was far higher (83.2%) for the pop-out configurations than for the ones requiring serial search (59.6%, p


tation can be extracted from textures without analyzing the orientation of individual contours in the texture. Are the statistical regularities or other properties that define materials used to guide attention? Can observers search efficiently for cloth among stone or glass among paper? To assess this, we used Sharan's stimuli: moderate close-up views of objects made from eight material categories: fabric, glass, leather, metal, paper, stone, water, & wood. Observers searched for targets of one category. On each trial, distractors were drawn from one other category. Search was inefficient (Hits: 35.9 msec/item, Absent: 78.4 msec/item). Perhaps Sharan's stimuli were too heterogeneous. We tried again with simpler surfaces: square, frontal patches of water, wood, skin, stone, fur, and feather. This was still inefficient. We ran three conditions with target and distractor held constant. Feather among wood and fur among water were run in grayscale. Stone among fur was run in color. Of these, only feather among wood was close to efficient (Hit: 8.2 msec/item), but this may have been an orientation artifact. Most of the wood grain was vertically oriented while feathers were horizontally oriented. Thus, while it may be possible to extract material information very rapidly and, perhaps, even to appreciate material properties without attention, material information cannot be used to efficiently guide attention to targets of one type of material among distractors of another.

35.23, 5:45 pm
An Ideal Saccadic Targeting Model Acting on Pooled Summary Statistics Predicts Visual Search Performance
Ruth Rosenholtz1 (rruth@mit.edu), Livia Ilie1, Benjamin J. Balas1; 1Brain & Cognitive Sciences, MIT
One of the puzzles of visual search is that discriminability of a single target from a single distractor poorly predicts search performance. Last year (Rosenholtz, Chan, & Balas, VSS 2009) we suggested that in crowded visual search displays, the key determinant of search performance is instead peripheral discriminability between a patch containing both target and distractors and a patch containing multiple distractors. Using a model of peripheral vision in which the visual system represents the visual input by summary statistics over each local pooling region (Balas, Nakano, & Rosenholtz, 2009), we predicted peripheral discriminability (d') of crowded target-present and distractor-only patches, and showed that this in turn predicted the relative difficulty of a number of standard search tasks. Here, our goal is to make quantitative predictions of visual search performance using this framework. Specifically, we model both reaction time vs. set size (RT/setsize) slopes and number of fixations to find the target. To this end, we have derived the ideal saccadic targeter for the case in which the input consists of independent noisy "targetness" measurements from multiple, overlapping pooling regions. The radius of each pooling region is roughly half its eccentricity, in accordance with Bouma's Law. For crowded pooling regions, our predicted d' allows us to compute the likelihood of observing a given amount of "targetness," conditioned on whether or not the given pooling region contains a target. For uncrowded pooling regions, e.g. near fixation, discriminability is maximal. An additional parameter controls the amount of memory from previous fixations. The model performs well at predicting RT/setsize slopes, and reasonably well at predicting mean number of fixations to find the target.
35.24, 6:00 pm
Active search for multiple targets is inefficient
Preeti Verghese1 (preeti@ski.org); 1Smith-Kettlewell Eye Research Institute, San Francisco, CA 94115
Rationale: When the task is to find multiple targets in noise in a limited time, saccades need to be efficient to maximize the information gained (Verghese, VSS, 2008). The strategy that is most informative depends on the prior probability of the target at a location: when the target prior is low and multiple-target trials are rare, making a saccade to the most likely target location is informative, but when the target prior is high and multiple-target trials are frequent, selecting uncertain locations is more informative. Do observers adjust their saccade strategy depending on the prior to maximize the information gained?
Methods: Observers actively searched a noisy display with 6 potential target locations equally spaced on a 3° eccentric circle. Each location had an independent probability of containing a target, so the number of targets in a trial ranged from 0 to 6. The target was a vertical string of 5 dots among randomly positioned noise dots. Observers searched the display for 350, 700, or 1150 ms and subsequently selected all potential target locations with a cursor. We varied the prior probability of the target from 0.17 to 0.67 to determine whether observers adjusted their saccade strategies to maximize information. We performed a trial-by-trial analysis of observers' saccades to determine saccade strategy.
Results & Conclusion: Observers (n=5, 3 naïve) made saccades to the most likely target location more often than to the most uncertain location, for all target priors ranging from low to high. Fixating likely locations is efficient only when multiple targets are rare, as in the case of a low target prior or of the more standard single-target search task. Yet it was the preferred saccade strategy in all our conditions, even when multiple targets were frequent. These findings indicate that humans are far from ideal searchers in multiple-target search.
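The logic of an information-maximizing saccade strategy can be made concrete with a small simulation. This sketch is illustrative only: it assumes a Gaussian observation model and an independent Bernoulli prior per location, not the study's actual stimuli or analysis.

```python
import numpy as np

def entropy_bits(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def expected_info_gain(prior, d=2.0, n=20000, seed=0):
    # Expected reduction in uncertainty about one location if it is fixated,
    # assuming a fixation returns x ~ N(d, 1) with a target and N(0, 1) without.
    rng = np.random.default_rng(seed)
    present = rng.random(n) < prior
    x = rng.normal(0.0, 1.0, n) + d * present
    lr = np.exp(d * x - 0.5 * d**2)                  # likelihood ratio
    post = prior * lr / (prior * lr + (1 - prior))   # Bayes update
    return entropy_bits(prior) - entropy_bits(post).mean()

# Gain peaks for uncertain locations (prior near 0.5), which is why fixating
# the most likely location becomes suboptimal when targets are frequent.
for p in (0.17, 0.5, 0.67):
    print(f"prior {p}: {expected_info_gain(p):.3f} bits")
```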
35.25, 6:15 pm
Neural basis of object memory during visual search
Kelly Shen1 (kelly@biomed.queensu.ca), Martin Paré1; 1Centre for Neuroscience Studies, Queen's University, Canada
Current models of selective attention and visual search incorporate two processes believed to be crucial in searching for an item in a visual scene: the selection of locations to be attended, and the temporary prevention of re-selecting previously attended locations. In natural situations, the deployment of visual attention is accomplished by sequences of gaze fixations, and the active suppression of recently visited locations can be examined by analyzing the distribution of gaze fixations as a function of time and location. We trained four monkeys to perform a visual search task in which they could freely search for a target stimulus with a unique conjunction of features. Monkeys made multiple fixations on distracters before foveating the target (mean: 3.1; range: 1-14), and their probability of foveating the target with a single fixation was only 0.25. Performance in this difficult task, however, was generally efficient, as monkeys rarely re-fixated previously inspected stimuli. The probability of a re-fixation increased with time and approximated chance levels after 5-6 fixations, suggesting that foveated information is retained across fixations but completely degraded within about 1000 ms of fixation. To investigate the neural mechanisms underlying this behavior, we recorded the activity of superior colliculus (SC) neurons while two animals performed the task. SC sensory-motor activity was sufficient to guide this behavior: activity associated with previously fixated stimuli was significantly lower than that associated with stimuli not yet fixated. More than two-thirds of neurons retained these differences up to 100 ms following fixation. These results suggest a neural mechanism for suppressing the re-fixation of stimuli temporarily maintained in memory. These findings demonstrate how neural representations on the visual salience map are dynamically updated from fixation to fixation, thus facilitating visual search.
Acknowledgement: CIHR, NSERC

35.26, 6:30 pm
Selective conjunctive suppression in visual search for motion–form conjunctions
Kevin Dent1 (k.dent@bham.ac.uk), Jason Braithwaite1, Harriet Allen1, Glyn Humphreys1; 1Behavioural Brain Sciences Centre, School of Psychology, University of Birmingham
Three experiments investigated the mechanism underlying efficient visual search for conjunctions of motion and form (e.g., McLeod, Driver, & Crisp, 1988) using a probe-dot procedure (e.g., Klein, 1988). In Experiment 1 participants completed 3 conditions: 1) conjunction (moving X target amongst moving Os and static Xs), 2) moving feature (moving X target amongst moving Os), and 3) static feature (static X amongst static Os). Following the search response (target present or absent), all items stopped moving and (after 60 ms) a probe-dot appeared on a distractor or on the blank background. Probe-dots on static distractors in the conjunction condition were located most slowly: slower than probes on moving distractors, and slower than probes on static distractors in the static feature condition. In contrast, a second group who viewed the stimuli passively before locating the dot showed a different pattern: a cost for probe-dots on moving items. In Experiment 2 we investigated whether suppression applied to all static items regardless of form, by adding static Os to the conjunction displays. The results showed that the suppression was specific to distractors sharing the target form (e.g., static Xs) and did not apply to static Os. Experiment 3 investigated the time-course of suppression when searching for a moving X (the conjunction condition of Experiment 2). Stimuli moved for between 100 and 925 ms before the probe-dot appeared (60 ms later). The results revealed that the selective suppression of static X distractors was fast-acting, being fully in place after 100 ms of stimulus motion. The results are difficult to account for in terms of feature-based guidance of attention, and suggest instead a mechanism of selective tuning, or biasing of competition, in the form system by signals from the motion system.
Acknowledgement: BBSRC, Wellcome Trust

35.27, 6:45 pm
Identifying social and non-social change in natural scenes: children vs. adults, and children with and without autism
Bhavin Sheth1,2 (brsheth@uh.edu), James Liu3, Olayemi Olagbaju3, Larry Varghese1, Rosleen Mansour4, Stacy Reddoch4, Deborah Pearson4, Katherine Loveland4; 1Department of Electrical and Computer Engineering, University of Houston, 2Center for NeuroEngineering and Cognitive Systems, University of Houston, 3University of Houston, 4Department of Psychiatry & Behavioral Sciences, The University of Texas Health Science Center at Houston
Typically developing (TD) children use social cues (e.g., gestural joint attention, observations of facial expression, gaze, etc.) to learn about the world. In contrast, children with autism spectrum disorders (ASD) have deficits in joint attention and impaired social skills. Therefore, attentional processes that are under the guidance of social referencing cues should be better developed in TD versus ASD children. We employed the "change blindness" paradigm to compare how the presence, absence, or specific context of different types of social cues in a scene affects TD children, children with ASD, and typical adults in visually identifying change. Forty adults and forty children (22 high-functioning ASD, 18 TD) participated. Depending on the presence/absence and nature of the social cues in the scene, change was categorized into one of six conditions: an actor's facial expression or gaze, an object that an actor overtly pointed to or gazed at, an object connected with an actor in the scene, an object unconnected with any actors, an object while an actor pointed to a different, unchanging object, or an object in a scene containing no actors. Percent correct, response time, and inverse efficiency were measured. No significant differences were observed between children with and without autism: children with autism use relevant social cues while searching a scene just as typical children do. Children (with and/or without autism) were significantly worse than adults at identifying change when an actor pointed to an unchanging object, or when an object changed, whether or not it was connected with an actor. Children were not worse than adults when no actors were present in the scene, or when an actor in the scene pointed to the change. Our findings suggest that, compared with adults, children are over-reliant on social cues relative to other cues. Social cues "capture" the child's attention.
Acknowledgement: The research on which this paper is based was supported in part by a grant to Bhavin R. Sheth from Autism Speaks/National Alliance for Autism Research, by a grant to Katherine A. Loveland from the National Institute of Child Health and Human Development (P01 HD035471), and by a grant to Deborah A. Pearson from the National Institute of Mental Health (R01 MH072263).
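Inverse efficiency, one of the measures reported above, is a standard way to combine speed and accuracy: mean correct response time divided by proportion correct, so that higher values mean worse combined performance. A minimal computation, with made-up numbers:

```python
import numpy as np

def inverse_efficiency(rt_correct_ms, accuracy):
    # Inverse efficiency score: mean correct RT / proportion correct.
    return np.mean(rt_correct_ms) / accuracy

# Hypothetical cell of the design: one observer, one change condition.
rts = np.array([612, 587, 701, 655])   # correct-trial RTs in ms (illustrative)
print(inverse_efficiency(rts, accuracy=0.8))
```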


Sunday Afternoon Posters

Neural mechanisms: Neurophysiology and theory
Royal Ballroom 6-8, Boards 301–314
Sunday, May 9, 2:45 - 6:45 pm

36.301 The role of inhibition in formatting visual information in the retina and LGN
Daniel Butts1 (dab@umd.edu), Alexander Casti2,3; 1Dept of Biology and Program in Neuroscience and Cognitive Science, University of Maryland, 2Dept of Mathematics, Cooper Union School of Engineering, 3Dept of Neuroscience, Mount Sinai School of Medicine
The processing capabilities of the visual system certainly depend on "nonlinear" computation at multiple levels in the visual pathway. While successful computational vision models generally recapitulate this, validating and constraining such models using physiological data is confounded by two problems: functional characterizations of visual neurons generally rely on linear "receptive fields" that cannot capture nonlinear effects, and most recordings are from single neurons, implicitly entangling characterizations of their own computations with those taking place in preceding areas. We apply a new nonlinear modeling framework to simultaneously recorded pairs consisting of an LGN neuron and the retinal ganglion cell that provides its main input. Because it is nonlinear, this framework can identify multiple processing elements and their associated nonlinear computations for each neuron. Furthermore, by recording from successive stages of the visual pathway simultaneously, we can distinguish the processing that occurs in the retina from processing that occurs in the LGN, and observe how visual information is successively formatted for the visual cortex. We detect nonlinear processing involving the interplay of excitation and inhibition at both levels. Inhibition in the retina is similarly tuned but delayed relative to excitation, resulting in highly precise responses in time. Oppositely tuned inhibition is added at the level of the LGN; its purpose is less clear, but when combined with inhibition inherited from the retina, it likely plays a role in contrast adaptation. Thus, we demonstrate a new method to detect nonlinear processing using easily obtained data at multiple levels of the visual pathway. In doing so, we reveal new functional elements of visual neurons that are generally thought of as mostly linear. This has implications for our understanding of how information is successively formatted for the visual cortex by its inputs, and suggests more general roles for nonlinear computation in visual processing.

36.302 Predicting Orientation Selectivity in Primary Visual Cortex
Anushka Anand1 (aanand2@lac.uic.edu), Jennifer Anderson2, Tanya Berger-Wolf1; 1Dept. of Computer Science, University of Illinois at Chicago, 2Dept. of Psychology, University of Illinois at Chicago
Orientation-specific cells in V1 organize themselves either in swaths of similar-orientation-preferring clusters (iso-orientation domains) or in distinctive singularities where cells representing 180 degrees of orientation specificity center themselves about a blob (pinwheels). The gradient (0 to 180 degrees) of the orientation-specific cells is organized in either a clockwise or counterclockwise direction. However, pinwheels and iso-orientation domains develop with some level of stochasticity, which many computational models have attempted to explain. While many good models exist, our goal was to develop a biologically plausible model incorporating a three-layer approach (representing retina, LGN, and cortex), presynaptic competition for resources, diffusive cooperation of near-neighbor cells and corresponding lateral connections, maintenance of retinotopic mapping, and a capped synaptic load per neuron.
We use a Self-Organizing Map as the basis for each of the layers, and a Hebbian-style approach for reinforcing link weights between layer sub-networks. We train our network with an iterative presentation of randomly sized and oriented Gabors. Our data set contained 10 maps at each iteration level: 3500, 5000, and 10000. We compare our maps against both real maps and synthetic maps produced via other methods.
We also aim to develop meaningful metrics for comparing maps. In addition to counting clockwise and counterclockwise pinwheels, we use graph-theoretic approaches to compute distances between pinwheels and estimate the coefficient of variance for those distances. Pooled distance variance across maps indicates the 10000-iteration map to be closest to the real map. Our method also maintains a counterclockwise/clockwise pinwheel ratio most similar to that in the real map with increasing numbers of iterations.
Our approach, we think, offers both a biologically relevant model of pinwheel organization and more statistically relevant methods for making comparisons across maps.
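A bare-bones Kohonen-style SOM trained on oriented Gabors captures the flavor of this approach, though it omits the model's three-layer retina/LGN/cortex structure, presynaptic competition, and synaptic-load cap. Everything below is an illustrative sketch, not the authors' code; all parameters are arbitrary.

```python
import numpy as np

def gabor(size, theta, sigma=2.5, freq=0.2, phase=0.0):
    # Unit-norm Gabor patch at orientation theta (radians), flattened.
    ax = np.arange(size) - size // 2
    X, Y = np.meshgrid(ax, ax)
    xr = X * np.cos(theta) + Y * np.sin(theta)
    g = np.exp(-(X**2 + Y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr + phase)
    g = g.ravel()
    return g / np.linalg.norm(g)

def train_som(grid=16, size=11, iters=5000, seed=1):
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 0.1, (grid * grid, size * size))
    coords = np.array([(i, j) for i in range(grid) for j in range(grid)], float)
    for t in range(iters):
        x = gabor(size, rng.uniform(0, np.pi), phase=rng.uniform(0, 2 * np.pi))
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))     # best-matching unit
        radius = 4.0 * (1 - t / iters) + 0.5            # shrinking neighborhood
        h = np.exp(-((coords - coords[bmu]) ** 2).sum(axis=1) / (2 * radius**2))
        lr = 0.1 * (1 - t / iters) + 0.01
        W += lr * h[:, None] * (x - W)                  # Hebbian-style pull toward input
    return W

def preference_map(W, grid=16, size=11):
    # Probe each unit with Gabors of many orientations; argmax = preference.
    thetas = np.linspace(0, np.pi, 36, endpoint=False)
    probes = np.stack([gabor(size, th) for th in thetas])
    return thetas[(W @ probes.T).argmax(axis=1)].reshape(grid, grid)

pref = preference_map(train_som())   # pinwheel-like singularities and iso-domains emerge
```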
36.303 'Black' dominance measured with different stimulus ensembles in macaque primary visual cortex V1
Chun-I Yeh1 (ciy@cns.nyu.edu), Dajun Xing1, Robert M. Shapley1; 1Center for Neural Science, New York University
Most neurons in layer 2/3 (not layer 4c) of V1 have stronger responses to 'black' (negative contrast) than to 'white' (positive contrast) when measured by reverse correlation with sparse noise (Jones and Palmer 1987). Furthermore, the degree of black dominance in V1 depends on the stimulus ensemble: black dominance is much stronger when neuronal responses are measured with sparse noise than with Hartley stimuli (Ringach et al 1997). Sparse and Hartley stimuli differ in many ways. First, the individual stimulus size of sparse noise is much smaller than that of Hartley stimuli. Second, the dark and bright pixels of sparse noise are presented separately in time, while those of Hartley stimuli are shown simultaneously. Third, there are spatial correlations along the long axis of Hartley stimuli that are not present in sparse noise. Which of these differences might contribute to the disparity in black dominance? Here we introduced a third stimulus ensemble, a binary checkerboard white noise (m-sequence, Reid et al 1997), to measure black-dominant responses in sufentanil-anesthetized monkey V1. Both white noise and Hartley stimuli activate a larger population of neurons than sparse noise, and dark and bright pixels appear simultaneously under both conditions. Unlike Hartley stimuli, neighboring pixels of white noise are uncorrelated. Among the V1 neurons with significant responses (signal-to-noise ratio > 1.5) to all three ensembles, black-dominant neurons (BDNs) largely outnumbered white-dominant neurons in output layer 2/3 with all three stimulus ensembles (% of BDNs: 76~82%), while the numbers of black- and white-dominant neurons were nearly equal in input layer 4c (% of BDNs: 40~60%). The degree of the black-dominant response was significantly stronger for white noise than for Hartley stimuli (p


…We found a clear laminar pattern of MUA preference for black stimuli: while black-dominant responses were observed in layer 2/3, the black preference was seen only in layer 4Cb, not in layer 4Ca. Compared to the strong and sustained black-dominant responses in layer 2/3, black-dominant responses were much more transient and weaker in layer 4Cb. The dynamic difference in black-dominant MUA and LFP between layers 2/3 and 4Cb implies that black dominance in layer 2/3 was not generated by a feedforward-plus-threshold mechanism applied to layer 4Cb signals.
We conclude: 1) black dominance originates in the parvocellular pathway under high-contrast conditions; 2) black dominance is significantly amplified by a recurrent excitatory-inhibitory network in layer 2/3, producing the strong black preference of layer 2/3 neurons and of downstream visual perception.
Acknowledgement: NIH-EY001472, NSF-0745253, the Swartz Foundation and the Robert Leet and Clara Guthrie Patterson Trust Postdoctoral Fellowship

36.305 Relative Disparity in V2 Due to Inhibitory Peak Shifts of Absolute Disparity in V1
Karthik Srinivasan1,2,3 (skarthik@bu.edu), Stephen Grossberg1,2,3, Arash Yazdanbakhsh1,2,3; 1Department of Cognitive and Neural Systems, 2Center for Adaptive Systems, 3Center of Excellence for Learning in Education, Science and Technology (CELEST), Boston University
In humans and primates, stereoscopic depth perception often uses binocular disparity information. The primary visual cortical area V1 computes absolute disparity, which is the horizontal difference in the retinal location of an image in the left and the right fovea. However, cortical area V2 computes relative disparity (Thomas et al., 2002), which is the difference in absolute disparity of two visible features in the visual field (Cumming and DeAngelis, 2001; Cumming and Parker, 1999). Psychophysical experiments have shown that it is possible to have an absolute disparity change across a visual scene while not affecting relative disparity. Relative disparities, unlike absolute disparities, can be unaffected by vergence eye movements or the distance of the visual stimuli from the observer. The neural computations that are carried out from V1 to V2 to compute relative disparity are still unknown. A neural model is proposed which illustrates how primates compute relative disparity from absolute disparity. The model describes how specific circuits within the laminar connectivity of V1 and V2 naturally compute relative disparity as a special case of a general laminar cortical design. These circuits have elsewhere been shown to play multiple roles in visual perception, including contrast gain control, selection of perceptual groupings, and attentional focusing (Grossberg, 1999).
This explanation links relative disparity to other visual functions and thereby suggests new ways to test its mechanistic basis psychophysically and neurobiologically.
Acknowledgement: Supported in part by CELEST, an NSF Science of Learning Center (NSF SBE-0354378) and by the SyNAPSE program of DARPA (HR0011-09-3-0001 and HR0011-09-C-0011)

36.306 Roles of Early Vision for the Dynamics of Border-Ownership Selective Cells
Nobuhiko Wagatsuma1,2 (nwagatsuma@brain.riken.jp), Takaaki Mishima3, Tomoki Fukai2, Ko Sakai3; 1Research Fellow of the Japan Society for the Promotion of Science, 2RIKEN Brain Science Institute, 3University of Tsukuba
The determination of figure-ground is essential for visual perception. Computational and psychophysical studies have reported that spatial attention in early vision facilitates the perception of border ownership (BO), which indicates the direction of figure (DOF) with respect to the border, so that the attended location appears as figure (Wagatsuma et al., 2008). A recent physiological study has shown that the time course of BO-selective cells in V2 is affected by the ambiguity of the DOF on the previous display (P. O'Herron and R. von der Heydt, 2009). We investigated the mechanism behind these dynamics of BO-selective cells through a computational model in which the early visual areas play critical roles in determining the activities of model BO-selective cells. Our model consists of V1, V2, and Posterior Parietal (PP) modules. The PP module is designed to represent spatial attention, which can be considered a saliency map based on luminance contrast. In the model, spatial attention alters contrast gain in the V1 module so that it enhances local contrast. The change in the contrast signal then modifies the activity of model BO-selective cells in V2, because BO is determined solely from surrounding contrast (Sakai and Nishimura, 2006). The model was tested with stimuli corresponding to O'Herron's physiological experiment. When new information regarding the DOF is presented, the activities of model BO-selective cells are rapidly modified. In contrast, if a new stimulus is instead presented with an ambiguous DOF, the responses of model BO-selective cells decay slowly. These model dynamics appear to depend on the ambiguity of the BO signal on the previous display and reproduce the same tendency found in the physiological study. These results suggest that the network among PP and early visual areas could play a crucial role in the time course of BO-selective cells.
Acknowledgement: This work was supported by a Grant-in-Aid for JSPS Fellows (KAKENHI 09J02583).

36.307 Chromatic Detection in Non-Human Primates: Neurophysiology and Comparison with Human Chromatic Sensitivity
Charles Hass1 (cahass@uw.edu), Gregory Horwitz1; 1The Graduate Program in Neurobiology and Behavior, Department of Physiology and Biophysics, Washington National Primate Research Center, The University of Washington, Seattle WA
Chromatic detection experiments in humans have been instrumental in elucidating the post-receptoral mechanisms that mediate color vision. To investigate the neural substrates of these mechanisms, we trained Rhesus monkeys to perform a spatial 2AFC chromatic detection task. Our goals were two-fold: first, to assess the utility of Rhesus monkeys as a model for human chromatic detection, and second, to measure the quality of color signals available in cortical area V1 that might subserve detection of isoluminant stimuli.
Monkeys proved to be exquisitely sensitive psychophysical observers, matching or exceeding the sensitivity of human subjects performing the detection task under identical stimulus conditions. The sensitivity of humans and monkeys depended similarly on the spatial frequency and chromaticity of the stimulus: sensitivity was low-pass for isoluminant stimuli and bandpass for achromatic stimuli. When stimuli were equated for cone contrast, sensitivity was greater for L-M modulations than for S-cone modulations.
In a subset of these experiments, we recorded from individual V1 neurons in monkeys performing the detection task. Stimuli in these experiments were tailored to each neuron's preferred orientation and spatial frequency. Neuronal and psychophysical sensitivities were compared directly via an ideal observer analysis of firing rates during the stimulus presentation period. Although the sensitivity of individual neurons varied considerably, the most sensitive V1 neurons were roughly as sensitive as the monkey. Accordingly, detection thresholds of V1 neurons varied with color direction in qualitative agreement with the monkey's psychophysical behavior. These data demonstrate the existence of individual V1 neurons that are exquisitely sensitive to chromatic stimuli and attest to the value of Rhesus monkeys as a model for human chromatic detection.
Acknowledgement: This work was supported by an NIH (NIGMS) Training Grant (CH), the ARCS Foundation (CH), the McKnight Foundation (GH), and NIH grant RR000166 (GH).
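An ideal observer analysis of this general form amounts to an ROC comparison of stimulus-present and stimulus-absent spike counts. The sketch below is illustrative only (the abstract does not give the exact procedure); it uses simulated Poisson counts and converts ROC area to d′ under an equal-variance Gaussian assumption.

```python
import numpy as np
from scipy.stats import norm

def neurometric_dprime(signal_counts, blank_counts):
    s = np.asarray(signal_counts, float)
    b = np.asarray(blank_counts, float)
    diff = s[:, None] - b[None, :]
    auc = (diff > 0).mean() + 0.5 * (diff == 0).mean()   # ROC area over all trial pairs
    return np.sqrt(2) * norm.ppf(auc)                    # equal-variance Gaussian d'

rng = np.random.default_rng(2)
stim = rng.poisson(12, 200)     # spike counts on stimulus trials (made up)
blank = rng.poisson(9, 200)     # spike counts on blank trials (made up)
print(neurometric_dprime(stim, blank))
```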
36.308 Response to motion contrast in macaque V2
Jie Lu1 (hdlu@sibs.ac.cn), Anna Roe2, Haidong Lu1; 1Institute of Neuroscience, Shanghai Institute of Biological Sciences, CAS, 2Department of Psychology, Vanderbilt University
Motion processing in monkey visual cortex occurs in the dorsal pathway from V1 to MT. However, direction-selective neurons are also found in many ventral areas, including V2 and V4. Previously we reported that direction-selective neurons are clustered in V2 and form patches of direction maps. These maps were observed mainly in V2 thick stripes and never co-localized with color-activated thin stripes. The functional significance of these direction-selective neurons and their maps remains unclear. One possibility is that these neurons contribute to motion-defined features, and that their clustering facilitates interactions among different direction-selective neurons. Using intrinsic optical imaging methods, we imaged V2 responses to motion contrast stimuli in both anesthetized and awake macaque monkeys. These stimuli contained drifting random dots (RD) moving on different backgrounds: 1. homogeneous gray, 2. stationary RD, 3. oppositely-moving RD. We found that V2 direction domains were more strongly activated by the third stimulus (which contains the strongest motion contrast) than by the other two conditions. Random dots moving on a stationary RD background elicited a weaker response, but one still stronger than that to pure drifting RD patterns. Combined with the presence of motion contour responses in V2, these observations support the idea that V2 plays important roles in analyzing figure-ground segregation based on motion contrast.

36.309 Encoding of brief time interval judgments in single neurons
J. Patrick Mayo1,2,3 (jpm49@pitt.edu), Marc A. Sommer1,2,3; 1Center for Neuroscience at the University of Pittsburgh, 2Department of Neuroscience, University of Pittsburgh, 3Center for the Neural Basis of Cognition
Our knowledge of the psychophysics of brief-interval time perception currently outweighs our knowledge of its neuronal basis. Of particular interest are the neuronal mechanisms for temporal judgments in frontal cortex, the region of the brain thought to underlie conscious perception. One ubiquitous neuronal phenomenon, adaptation, is intimately tied to temporal processing and could play a role in perceiving time intervals. Neuronal adaptation results when two visual stimuli are presented in close succession, yielding a normal first neuronal response but a diminished second response. The amount of time between the first and second stimuli governs the magnitude of the second response, with longer interstimulus intervals resulting in less adaptation and therefore larger second responses. In previous work (Mayo and Sommer, 2008), we quantified the dynamics of neuronal adaptation during passive fixation at two stages of visual processing in the brain: the frontal eye fields (FEF) in prefrontal cortex, and the superficial superior colliculus (SC) in the midbrain. We found robust neuronal adaptation in both areas with a similar time to recovery, even in the superficial SC, located one synapse away from the retina. Here, we ask whether the relative magnitude of successive neuronal responses contains useful information about the amount of time between successively presented stimuli at naturalistic time intervals (


…a saccade to or away from an LIP neuron's response field. Each day the monkeys learned the meaning (to or from RF) of four novel shapes, and were retrained on four previously learned old shapes. In a second task, the monkeys simply viewed the shapes. This passive task established that LIP neurons indeed responded selectively to certain shapes, even meaningless novel shapes. The shape selectivity, however, did not seem to be invariant to changes in location. The shape-action association task revealed an interaction between novelty (old vs. novel shapes) and meaning (to vs. from RF). Somewhat surprisingly, early in the trial, novel 'to RF' shapes showed a lower response than 'from RF' shapes. The old shapes showed the opposite pattern, with initial 'to RF' responses higher than 'from RF' responses. The effect for old stimuli quickly reversed again, so that 'to RF' responses became lower than 'from RF' responses. Finally, when the monkeys were allowed to saccade, both old and novel stimuli had higher responses in the 'to RF' condition, but the difference was more pronounced for old shapes. These results suggest that LIP shape selectivity is unlikely to be part of a representation of object structure, because such a representation should be little affected by meaning. That the meaning of the shapes can affect even the earliest LIP responses raises the possibility that they reflect a best guess of how to react to an object. Even novel shapes have points of interest that may bias looking patterns. With extensive training, as in the case of old shapes, these initial responses may be overridden to represent an arbitrarily associated action.
Acknowledgement: International Fulbright Science and Technology Award (HMS), NIH R01EY014681 (DLS)
36.314 Virtual Multi-Unit Electrophysiology: Inferring neural response profiles from fMRI data
Rosemary Cowell1 (rcowell@ucsd.edu), David Huber1, Garrison Cottrell2, John Serences1; 1Department of Psychology, University of California, San Diego, 2Computer Science and Engineering, University of California, San Diego
We present a method for determining the underlying neural code from population response profiles measured using fMRI. This technique uses orientation tuning functions for single voxels in human V1 (Serences et al., 2009; Kay and Gallant, 2008), which superficially resemble the electrophysiological tuning functions of V1 neurons. However, a voxel tuning function (VTF) is a summed population response and does not specify the underlying neural responses. For example, the same bell-shaped VTF may arise from a population of neurons that are (1) tuned to a range of preferred stimulus orientations, with each preferred orientation present in varying proportions, or (2) uniformly distributed across preferred orientations, but with neurons of a particular preferred orientation tuned more sharply. The reported technique gains traction on this "inverse problem" by modeling the underlying neural responses across a range of tested orientations, and can be used to model task-induced changes in VTFs. We assume a set of underlying neural tuning curves, centered on orientations spaced evenly between 0° and 180°, and sharing a common standard deviation (SD). For a given SD, we use least-squares linear regression to solve for the 'coefficients' of the neural tuning curves (i.e., the relative weighting of each neural tuning curve present in the voxel) underlying the BOLD responses of an experimentally observed voxel. We find coefficients at a range of SD values, then determine the best-fitting SD to give the best estimate of the SD and coefficients. In future work, we will scan human subjects performing visual attention tasks, then use the present method to generate and test models of how the population response in visual cortex changes (e.g., Scolari and Serences, 2009). The technique effectively extracts "virtual" simultaneous multi-unit recordings from fMRI data – albeit with the usual fMRI limitations – and may help to address fundamental questions of neural plasticity.
Acknowledgement: This research was supported by NSF Grant BCS-0843773.
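The regression step of this method can be illustrated compactly. The sketch below assumes Gaussian basis tuning curves with 180° wrap-around and an unconstrained least-squares fit; the grid search over SD mirrors the procedure described, but every specific (spacing, widths, the toy VTF) is illustrative rather than taken from the study.

```python
import numpy as np

def fit_vtf(bold, test_oris, centers, sd):
    # Design matrix: response of each hypothesized tuning curve (one per
    # preferred orientation in `centers`, common width `sd`) to each tested
    # orientation; orientation differences wrap with period 180 deg.
    d = np.abs(test_oris[:, None] - centers[None, :])
    d = np.minimum(d, 180.0 - d)
    X = np.exp(-d**2 / (2 * sd**2))
    coef, *_ = np.linalg.lstsq(X, bold, rcond=None)   # least-squares coefficients
    sse = np.sum((X @ coef - bold) ** 2)
    return coef, sse

test_oris = np.arange(0.0, 180.0, 22.5)               # tested orientations (deg)
centers = np.arange(0.0, 180.0, 20.0)                 # basis preferred orientations
bold = np.exp(-((test_oris - 90.0) / 30.0) ** 2)      # toy bell-shaped VTF
fits = {sd: fit_vtf(bold, test_oris, centers, sd) for sd in (10, 20, 30, 40)}
best_sd = min(fits, key=lambda sd: fits[sd][1])       # grid search over tuning widths
coef = fits[best_sd][0]                               # inferred neural weighting
```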
Perception and action: Pointing and hitting
Royal Ballroom 6-8, Boards 315–332
Sunday, May 9, 2:45 - 6:45 pm

36.315 Sequence effects during manual aiming: A departure from Fitts's Law?
Darian Cheng1 (chengda2@interchange.ubc.ca), John DeGrosbois1, Jonathan Smirl1, Gordon Binsted1; 1University of British Columbia
In 1954, Paul Fitts forwarded a formal account of the relationship between the difficulty of an aiming task and the movement time associated with its completion. While most models used to explain this speed-accuracy trade-off have been based upon visual feedback utilization and target-derived uncertainty, the idea that speed-accuracy constraints can also be dictated by previous aiming history has been largely ignored. In order to examine whether sequential movements are interdependent, we utilized a sequential-discrete aiming paradigm where the target changed difficulty mid-sequence, but between reaches. Individuals performed an adapted Fitts's task by making discrete manual aiming movements between two targets equidistant from the midline. Responses were produced in sequences of 20 manual aiming movements separated by a fixed inter-trial interval of 1 s. Four trial sequences were used in the experiment: in two of the sequences the target widths remained constant throughout the trial (wide or narrow); in two sequences the target width changed (wide to narrow, narrow to wide) between the 7th and 12th movements of the sequence. Our main interest was in the trials immediately following the change in target width. Namely, we wanted to see if there were any carry-over effects from the preceding target width on the subsequent movements to a different target width. The extant sequential aiming literature suggests individuals plan several movements in advance during sequential movements, as compared to a single movement in isolation. In accord with this view, we demonstrated a gradual change in movement times and movement endpoint distributions following a switch in target width during a reciprocal aiming task. Importantly, this necessitates a transient departure from Fitts's law and highlights the role of visuomotor memory in the planning and execution of movements, even in the presence of vision.
Acknowledgement: NSERC to G Binsted

36.316 The effect of target visibility on updating rapid pointing
Anna Ma-Wyatt1 (anna.mawyatt@adelaide.edu.au), Emma Stewart1; 1School of Psychology, University of Adelaide
During rapid, goal-directed hand movements, eye and hand position are usually yoked, with the saccade typically leading the hand. Visual and proprioceptive feedback can also be used to update the movement online. However, it is not yet clear what effect the new, high-resolution image of the target location gathered by this first saccade has on online control of the hand and on eye-hand coordination. If this lately acquired visual information significantly modulates performance, it would have significant implications for theories of eye-hand coordination. We investigated the impact of the visibility of the goal at different times during the reach on endpoint precision and accuracy. If visual information about the target gathered by the first saccade is used to update a movement online, endpoint precision and accuracy should decrease if target visibility decreases late in the movement. Target contrast can significantly affect visual localization thresholds. In our experiment, we varied target contrast and duration within the reach to manipulate the quality of the visual information gathered by the first saccade. The target could appear at one of 8 different locations, each 8 degrees eccentric to initial fixation. In Experiment 1, participants pointed to targets of varying contrast and varying duration. We measured pointing precision, accuracy, and movement time. Contrast significantly affected pointing precision. Pointing accuracy for low-contrast targets was significantly better at longer target durations. In Experiment 2, participants pointed to a target that either decreased or increased in contrast, either early or late in the reach. Low-contrast targets resulted in longer movement times. The results demonstrate that target contrast significantly impacts pointing performance, and suggest that the aggregation of information can affect the rate of movement, perhaps as a corollary of Fitts's law. We will discuss the implications of these findings for theories of eye-hand coordination.

36.317 Comparing chromatic and luminance information in online correction of rapid reaching
Adam Kane1 (adam.kane@adelaide.edu.au), Anna Ma-Wyatt1; 1School of Psychology, University of Adelaide
Humans update goal-directed reaches online. There are additional delays in integrating location changes for chromatic (parvocellular or koniocellular) targets compared to luminance (magnocellular) targets. This may reflect the chromatic pathways' slower conduction velocities, but the chromatic information may also take a longer route to the parietal cortex. Integration times increase with stimulus intensity, so comparing different stimulus types directly is problematic. Circumventing this problem in different ways has produced different results: Veerman et al. (2008) found additional chromatic delays of ~50 ms, while White et al. (2007) found a negligible difference. We compared integration times for pointing to chromatic and luminance targets of equal intensity. We used identical stimuli in setting subjective equiluminance and detection thresholds, and in the final experiment. The stimuli were Gaussian blobs (SD ~.5°) on a grey background. Participants first adjusted red, green, yellow, and blue blobs (defined in DKL color space) to be equiluminant to the background. Next, we measured detection thresholds for these chromatic stimuli and for two luminance-contrast blobs for each participant. This produced stimuli of equal salience that primarily stimulated the parvocellular, magnocellular, or koniocellular geniculate pathways. Finally, participants made fast pointing responses towards a fixation cross on a touchscreen. In one in three trials, the blob appeared 6° left of the cross, 12-106 ms after movement onset. Participants were instructed to touch the blob in under 410 ms or the trial was repeated. The '50% integration time' (IT50) is the 'threshold' time the blob must be present during the reach for participants to correct more than halfway towards it. Generally, IT50 was shortest for magnocellular blobs and longest for koniocellular blobs; IT50 for parvocellular targets varied between participants. The small additional integration delays for chromatic stimuli are best explained by slower conduction velocities.
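IT50 is, in effect, a psychometric threshold, so one natural way to estimate it is a logistic fit to the proportion of corrections as a function of exposure time. The abstract does not specify the fitting procedure; the sketch below, with hypothetical data, is one common approach.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, t50, slope):
    return 1.0 / (1.0 + np.exp(-(t - t50) / slope))

# Hypothetical data: proportion of reaches correcting more than halfway
# toward the displaced blob, vs. how long the blob was visible (ms).
t = np.array([12, 25, 40, 55, 70, 90, 106], float)
p_correct = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.95])
(t50, slope), _ = curve_fit(logistic, t, p_correct, p0=(50, 10))
print(f"IT50 ~ {t50:.0f} ms")   # 'threshold' exposure for majority corrections
```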
36.318 Effect of speed overestimation on manual hitting at low luminance
Maryam Vaziri Pashkam1 (mvaziri@fas.harvard.edu), Patrick Cavanagh1,2; 1Vision Sciences Laboratory, Department of Psychology, Harvard University, 2Laboratoire Psychologie de la Perception, Paris Descartes University and CNRS
Previous studies have reported an overestimate in the perceived speed of moving objects at low luminance (Hammett et al 2007, Vaziri-Pashkam & Cavanagh 2008), and we have shown that this overestimate is a result of the longer blurred trajectory left by the moving stimulus at low luminance. Here we investigate whether this cue of an extended motion trace affects action as well as perception, by testing the accuracy of hand movements to a translating target at low luminance. We first verified the low-luminance effect on perception with two stimuli presented successively, one at high (mean luminance 75 cd/m2) and one at low luminance (0.15 cd/m2). Subjects had to decrease the speed of the low-luminance stimulus by approximately 30% to match the apparent speed at high luminance. In the second experiment, subjects were asked to make rapid hand movements towards the moving targets so that their fingertip would land on the center of the moving random dot pattern. Vision of the hand was blocked to prevent visual feedback, so that the accuracy of the landing depended on the speed estimate for the moving target. Results showed that the landing position of the finger was significantly farther ahead of the target at low luminance, suggesting that the programming of the hand motion was based on an overestimated target speed. Based on the timing of the hand movement and its landing position, we derived the target speed used to plan the hand movement and found it to be about 10-15% too fast at low luminance compared to high luminance. We suggest that overestimation of perceived speed based on the extended blur cue affects our motor performance under low-luminance conditions.
Acknowledgement: NIH

36.319 Extrapolation of target movement is influenced by the preceding velocities rather than by the mean velocity
Oh-Sang Kwon1 (oskwon@cvs.rochester.edu), David Knill1; 1Center for Visual Science, University of Rochester
Purpose: Previous studies have suggested that the extrapolation of an occluded target's movement is influenced by the target velocity of the preceding trial (Lyon and Waag, 1995; de Lussanet et al., 2001; Makin et al., 2008) and by the overall mean velocity of the target (Brouwer et al., 2002; Makin et al., 2009). However, those studies may have failed to isolate the effect of the preceding trial's velocity from the effect of the overall mean velocity, and vice versa. We examined the significance of the two effects. Method: In a virtual environment, a moving target disappeared behind an occluder, and subjects hit the target at the impact zone at the moment the target would have been in the zone had it moved with a constant velocity. Seven velocities (6 deg/s to 18 deg/s) and four occluded distances (6 deg to 18 deg) were used, and the exposure duration of the target was fixed at 800 ms across all conditions. Results: A model was developed to predict the hitting time, which is the duration from the moment of the target's disappearance to the moment of the subject's hit on the impact zone. The velocities of the first preceding trial and the second preceding trial, the overall mean velocity, and the mean hitting time were considered as possible factors influencing performance on the current trial. A cross-validation technique was used to select a model. The best-fitting model includes the first- and second-preceding-velocity terms but does not include the overall-mean-velocity and overall-mean-hitting-time terms. Conclusion: The results suggest that extrapolation of target movement is influenced by the preceding velocities rather than by the overall mean velocity.
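The model-selection logic can be sketched as a lagged regression: if weights on the one- and two-back velocity terms survive model comparison, the preceding trials carry over. The toy version below uses a single in-sample fit on synthetic data and is illustrative only; the study itself used cross-validation for selection.

```python
import numpy as np

rng = np.random.default_rng(3)
v = rng.choice(np.linspace(6, 18, 7), size=300)      # target speeds, deg/s
occluded = 12.0                                      # one occlusion distance, deg
# Synthetic hitting times: a veridical distance/speed term, a small pull
# toward the previous trial's speed (the effect under test), motor noise.
hit = occluded / v - 0.03 * (np.roll(v, 1) - v.mean()) / 12 \
      + rng.normal(0, 0.02, 300)

# Regress current hitting time on the current speed and the speeds of the
# one- and two-back trials; a reliable lagged weight indicates carry-over.
X = np.column_stack([1 / v[2:], v[1:-1], v[:-2], np.ones(len(v) - 2)])
beta, *_ = np.linalg.lstsq(X, hit[2:], rcond=None)
print(beta)   # beta[1] (one-back term) is nonzero here by construction
```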
36.320 Perceiving and controlling actions: Visually perceived distances map onto different forms of throwing as a function of the ball's weight and constraints on throwing actions
John Rieser1 (j.rieser@vanderbilt.edu), Aysu Erdemir1, Gayathri Narasimham1, Joseph Lappin1, Herbert Pick2; 1Vanderbilt University, 2University of Minnesota
People control their own actions and judge the results of others' actions from the actions' kinematics. We study the psychophysics of how children and adults vary the forms of their throwing to accommodate varying target distances, ball weights, and constraints on whether they can rotate the elbow, shoulder, or waist, or step forward. People control the forms of their throwing to fit the different ranges of visually perceived distance. For a 1 m target they swing from the elbow alone, for a 5 m target from the elbow and shoulder, and so forth out to 30 m targets. They know how many of the available degrees of freedom in the throwing action are needed to generate enough force to reach the target's vicinity. In Study 1, 4-6 year olds and adults were videotaped while throwing to visual targets ranging from 1-30 m. Throw dynamics/kinematics were constrained by varying the ball's weight, adding wrist weights, and constraining movements across some swing points. How would people adapt, we asked, when tossing to nearby targets with their elbow constrained? How would they adapt when tossing to faraway targets without stepping or waist rotation? Study 2 investigated the accuracy with which 4-6 year olds and adults can judge thrown distance by observing the kinematics of others' throws. People viewed videotapes of throws up to the instant of release; they judged the throw's distance and trajectory from the hand/ball's velocity. Weber fractions were used to describe the judged thrown distances across target distances. Children and adults alike coped with the constraints in sensible ways, always varying the form of the throw in ways that let them control the hand/ball's velocity. Finally, children and adults were not accurate at judging thrown metric distance from the videotapes, but were remarkably accurate at rank-ordering the thrown distances.

36.321 Noise Modulation in the Dorsal and Ventral Visual Pathways
Jennifer Anderson1 (jander22@uic.edu), Michael Levine1,2; 1Department of Psychology, University of Illinois at Chicago, 2Laboratory of Integrative Neuroscience, University of Illinois at Chicago
The human visual system responds differently to the same stimulus depending on the type of task. These differences may be due to how the stimulus is encoded: action tasks utilizing an observer-based encoding, and perceptual tasks utilizing an object-based encoding. We are interested in how the systems modulating these different outputs process extrinsic noise. Previously, we demonstrated a method allowing subjects to respond to the same visual display via hand-eye coordination or via perceptual awareness. In the current study, we examined response variance in the two tasks given increasing levels of noise. Noise was defined as a random displacement applied to each frame of a moving target as it traversed the visual display; the magnitude of the displacement corresponded to the standard deviation of the sampled normal distribution. We hypothesized that response variance would be the sum of the intrinsic noise of the system and the applied extrinsic noise. We tested video-game experts and non-experts. All data follow the expected trend in the action task. Data from non-experts also follow this trend in the perceptual task; however, data from video-game experts show less variability in responses than the predicted model, especially at high levels of extrinsic noise. This may suggest that experts are better able to "ignore" noise than non-experts. Further analysis using a LAMSTAR neural network, trained on subject data, was able to determine the threshold at which noise overwhelmed the mechanism for "ignoring" noise in the perceptual task. We found that the two systems would begin to treat the sum of the intrinsic and extrinsic noise similarly at much higher levels of noise in video-game experts. In sum, these findings suggest that noise may be processed differently according to the type of visual task.
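The hypothesis that response variance is the sum of intrinsic and extrinsic variance implies that, if the sources are independent, the intrinsic component can be recovered by subtraction in the variance domain. A minimal sketch with hypothetical endpoint data:

```python
import numpy as np

def intrinsic_sd(response_sd, extrinsic_sd):
    # If total variance = intrinsic variance + extrinsic variance
    # (independent sources), subtract in the variance domain.
    return np.sqrt(np.maximum(np.asarray(response_sd)**2
                              - np.asarray(extrinsic_sd)**2, 0.0))

# Hypothetical: endpoint SD (pixels) at increasing displacement-noise SDs.
extrinsic = np.array([0, 4, 8, 16], float)
measured = np.array([5.1, 6.4, 9.3, 16.7])
print(intrinsic_sd(measured, extrinsic))   # roughly constant if the model holds
```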


36.322 Event-related potential (ERP) reflections of perceptual requirements during the planning of delayed action
Leanna Cruikshank1 (leannac@ualberta.ca), Jeremy Caplan1,2, Anthony Singhal1,2; 1Centre for Neuroscience, University of Alberta, 2Department of Psychology, University of Alberta
Kinematic studies have robustly shown that delayed hand actions involve slower and less accurate movements compared to immediate, visually guided actions. Furthermore, converging evidence from neuropsychological and neuroimaging studies suggests that visual perceptual brain mechanisms in the lateral occipital cortex (LOC) are critically recruited during delayed hand actions. In this study we sought to further investigate these issues by directly comparing the amount of perception-based neural activity during the planning phase of visually guided and delayed actions. To this end, twelve paid volunteers were auditorily cued to perform a reaching task to circular targets at varying locations on a nineteen-inch touch-sensitive monitor. In the visually guided condition, the targets remained visible for 300 milliseconds after the onset of the auditory movement cue. In the delayed condition, the targets disappeared from view at the same time as the auditory movement cue. We collected scalp-recorded event-related potential (ERP) data from 256 electrodes during both conditions of the task, and focused our analysis on the neural activity during the action-planning phase of each trial. The behavioral data showed that, as expected, movement time (MT) was slower in the delayed condition than in the visually guided condition. Moreover, the ERP data showed that the sensory P1 response over occipital electrodes was equivalent in both conditions; most importantly, the object-recognition N170 response was larger during the planning phase of the delayed action condition than during the visually guided action condition. This effect was robustly observed at 22 electrodes over temporal-occipital sites in both hemispheres. These data suggest that the planning of delayed actions relies more heavily on perception-based information than the planning of visually guided actions does. Furthermore, this difference is not reflected in early visual processing, but involves higher-order perception, likely associated with regions in the inferior-temporal cortex.
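The core of such an ERP analysis is epoch extraction, baseline correction, and averaging, followed by a mean amplitude in a component window. The sketch below is generic, not the authors' pipeline; the 256-channel specifics are omitted and all numbers are placeholders.

```python
import numpy as np

def erp_average(eeg, events, fs, tmin=-0.1, tmax=0.4):
    # eeg: (n_channels, n_samples); events: cue onsets in samples.
    pre, post = int(round(-tmin * fs)), int(round(tmax * fs))
    epochs = np.stack([eeg[:, e - pre:e + post] for e in events])
    baseline = epochs[:, :, :pre].mean(axis=2, keepdims=True)  # pre-cue baseline
    return (epochs - baseline).mean(axis=0), pre               # ERP, index of t = 0

fs = 250
rng = np.random.default_rng(4)
eeg = rng.normal(0, 1, (1, 60 * fs))        # one fake channel, 60 s of noise
events = np.arange(2 * fs, 58 * fs, fs)     # fake cue onsets
erp, t0 = erp_average(eeg, events, fs)
# Mean amplitude in an N170-style window, 150-190 ms after the cue:
n170 = erp[0, t0 + int(0.15 * fs): t0 + int(0.19 * fs)].mean()
print(n170)
```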
36.323 Testing the spatial reference frames used for manual interception
Joost C. Dessing1,2,3,4 (joost@yorku.ca), J. Douglas Crawford3,4,5,6, W. Pieter Medendorp1; 1Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, The Netherlands, 2Research Institute MOVE, Faculty of Human Movement Sciences, VU University Amsterdam, The Netherlands, 3Center for Vision Research, York University, Toronto, Canada, 4Canadian Action and Perception Network (CAPnet), 5Canada Research Chair in Visuomotor Neuroscience, 6Departments of Psychology, Biology and Kinesiology & Health Science, York University, Toronto, Canada
While early cortical reach areas are known to represent earth-fixed movement goals in a dynamic, gaze-centered map, it is unknown whether the same spatial reference frame plays a role in the coding of moving targets for manual interception. We tested the role of gaze-dependent and gaze-independent reference frames in the coding of memorized moving targets, rendered invisible prior to a saccade that intervened before the reach. Gaze-centered coding would require the internal representation of the interception point (IP) to be actively updated across the saccade, whereas gaze-independent coding would remain stable. Head-fixed subjects (n = 9) sat in complete darkness and fixated a visual fixation point (FP) presented on a screen in front of them. A target moved for 2.1 s downward at 7 deg/s along various approach directions (-18, 0, +18 deg), after which it disappeared. Occluded targets passed fixation height in a range from -5 to +5 deg (relative to straight ahead); FP locations ranged from -10 to +10 deg. After a saccade (in saccade trials, as opposed to fixation trials), subjects reached out to intercept the occluded target at fixation height with their index finger. We analyzed the pointing errors using regression analyses. Both initial and final fixation direction, as well as the IP relative to these gaze directions, affected the pointing errors (fixation trials: R2 = 0.15-0.50; saccade trials: R2 = 0.15-0.62). Importantly, errors in the saccade trials reflected combined effects of fixation direction during target presentation and during the memory period. This suggests that a gaze-dependent representation of the IP is transformed into gaze-independent coordinates before the saccade, but that this transformation is not entirely finished at saccade onset. This explains why the pointing errors reflect a mixture of gaze-dependent and gaze-independent reference frames.
Acknowledgement: NWO, HSFP, CIHR, NSERC-CREATE

36.324 Spider-phobia influences conscious, but not unconscious, control of visually guided action
Kim-Long Ngan Ta1 (kimlongta88@gmail.com), Geniva Liu1, Allison A. Brennan1, James T. Enns1; 1University of British Columbia
Fear of a stimulus can distort perception of its appearance (e.g., spiders: Rachman & Cuk, 1992; heights: Stefanucci & Proffitt, 2008). These studies have not distinguished between conscious perception and the control of visually guided action (Milner & Goodale, 1995). In this study we tested spider-phobic (n=15) and non-phobic (n=20) participants in a visually guided pointing task that measured both conscious and unconscious aspects of visual-motor control. Participants made speeded pointing actions on a touch-screen to images depicting either negative or positive emotional content (spiders vs. pets). The pointing task was performed with visual attention either focused on the image (single task) or divided, because participants were also identifying letters (dual task). This dual task disrupts the conscious planning of actions (as measured by action initiation time) but not their online control (as measured by movement time, pointing accuracy, and the response to target displacement during the action) (Liu, Chua, & Enns, 2008). Pointing was controlled differently by spider-phobic than by non-phobic participants. In the dual task, they showed greater interference for letter identification and slower pointing movements. In the single task, they showed less accuracy and greater sensitivity to image content, specifically avoiding the negative images when pointing. Yet when attention was divided between images and letters in the dual task, measures of unconscious motor control showed no differences related to phobia: pointing speed, accuracy, and sensitivity to target displacement were unaffected by phobia or image content. These findings support the hypothesis that spider-phobia exerts its influence on the conscious, but not unconscious, control of visually guided action.
They imply that the automatic pilot of thedorsal stream (Pisella et al., 2000), which is guided by the location of theimages, is not influenced by their emotional content.Acknowledgement: UBC AURA Award, NSERC Discovery Grant36.325 Motor output effect of objects presented in the blindspotDamon Uniat 1 (damon.uniat@hotmail.com), Frank Colino 1 , John De Grosbois 1 ,Darian Cheng 1 , Gordon Binsted 1 ; 1 University of British Columbia-OkanaganThe physiological blindspot is defined by the junction where the opticnerve exits the eye chamber and the accompanying absence of photoreceptors(Enns, 2004). Despite this absence of retinal input however, perceptualfilling of the blindspot has been consistently shown; suggesting visual perceptioncan exist in the absence of retinal drive. Recent examinations byBinsted et al (2007) suggest the converse is also true, whereby consciousvisual percept is not a necessary emergent of retinal input – while stillsupporting motor output. In the current investigation we examined howobjects presented in the blindspot could modulate motor output (i.e. pointing)in the absence of conscious awareness. The blindspot of the right eyewas mapped using a modification of the protocol developed by Araragiand Nakamizo (2008). Subsequently, participants were asked to point toobjects presented either within the blindspot (+/- 40% scotomic diameter)or outside of the blindspot. Specifically, while fixating a stationary point,participants pointed to the target circles briefly flashed (33 ms) either insideor outside the blindspot; on some trials no target was presented to serveas a control. Responding to an auditory tone, the subject was to point tothe presented target (whether present/perceived or not) as quickly andaccurately as possible. Although participants were ubiquitously unable todetect the presence of targets within the blindspot (and able outside) bothendpoint position and variability was sensitive to the occurrence and positionof a target. Subjects pointed more to the right/left respectively of thescreen corresponding to the target circle despite presentation within theblindspot. Further, they were less variable when pointing to non-conscioustargets than when responding in the absence of a target. Thus, despite theabsence of conscious percept due to subthreshold retinal input, visuomotorpathways – presumably within the dorsal stream – are able to use targetlocation information to plan and execute actions.Acknowledgement: NSERC to G. Binsted36.326 Extrinsic manipulations of the mental number line do notimpact SNARC-related influences on the planning and control ofaction.Jeffrey Weiler 1 (jweiler2@uwo.ca), Ali Mulla 1 , Taryn Bingley 1 , Matthew Heath 1 ;1 School of Kinesiology, The University of Western OntarioSunday PMSee page 3 for Abstract Numbering System<strong>Vision</strong> <strong>Sciences</strong> <strong>Society</strong>177


36.326 Extrinsic manipulations of the mental number line do not impact SNARC-related influences on the planning and control of action
Jeffrey Weiler1 (jweiler2@uwo.ca), Ali Mulla1, Taryn Bingley1, Matthew Heath1; 1School of Kinesiology, The University of Western Ontario

The spatial numerical association of response codes (the so-called SNARC effect) manifests as faster reaction times (RT) to judgments of numerical magnitude in left and right space when cued by low and high numbers, respectively. In addition, Fischer (2003: Vis Cogn) reported that movement times (MT) associated with goal-directed reaching movements are influenced in a direction consistent with the SNARC effect. These findings have been explained by the presence of a mental number line with smaller and larger digit magnitudes preferentially represented in left and right space, respectively. In the present study, we sought to determine whether the magnitude of the SNARC effect for goal-directed reaching is influenced by the premovement presentation of a real number line. Prior to response cuing, participants were briefly (50 ms) presented with an ascending (i.e., digit magnitude increasing from left to right) or descending (i.e., digit magnitude decreasing from left to right) number line. In addition, we included a control condition wherein a number line was not presented in advance of response cuing. Following premovement cuing, low (1, 2) or high (8, 9) digits were presented and used to visually cue the onset of a left or right space reaching response. Results for RT and MT did not elicit a SNARC effect, a finding consistent across the different premovement visual cuing conditions (i.e., ascending, descending, no number line). Interestingly, however, when total response time (RT+MT) was analyzed, a SNARC effect was observed. Based on these findings, we propose that the SNARC effect for reaching responses is represented as an aggregation of the temporal properties of both movement planning and control. Further, the results suggest that the SNARC effect is refractory to extrinsic manipulations of the mental number line.

Acknowledgement: NSERC
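A standard way to quantify a SNARC effect in data of this kind is to regress the right-minus-left response-time difference for each digit on digit magnitude, where a negative slope indicates the effect. The sketch below is a generic illustration with simulated trials, not the authors' analysis; all names and values are hypothetical.

```python
import numpy as np

# Hypothetical trial data: digit cue, response side, and response time (ms).
rng = np.random.default_rng(1)
digits = rng.choice([1, 2, 8, 9], size=400)
side = rng.choice(["left", "right"], size=400)
# Simulate a SNARC pattern: right responses speed up, and left responses
# slow down, as digit magnitude grows.
rt = 450 - 4 * digits * (side == "right") + 4 * digits * (side == "left") \
     + rng.normal(0, 25, 400)

# Classic analysis: regress the right-minus-left mean RT difference per digit
# onto digit magnitude; a negative slope indicates a SNARC effect.
mags = np.unique(digits)
drt = [rt[(digits == d) & (side == "right")].mean()
       - rt[(digits == d) & (side == "left")].mean() for d in mags]
slope, intercept = np.polyfit(mags, drt, 1)
print(f"SNARC slope: {slope:.2f} ms per digit step")
```

Running the same regression separately on RT, MT, and RT+MT would reproduce the comparison the abstract describes.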
36.327 Visuomotor mental rotation: Reaction time is determined by the complexity of sensorimotor transformations supporting the response
Kristina Neely1 (kneely4@uwo.ca), Matthew Heath1; 1School of Kinesiology, The University of Western Ontario

In the visuomotor mental rotation (VMR) task, participants execute a center-out reaching movement to a location that deviates from a visual cue by a predetermined angle. Seminal work from Georgopoulos and Massey (1987) revealed a linear increase in reaction time (RT) as a function of increasing instruction angle, for angles of 5, 10, 15, 35, 70, 105 and 140˚. This finding led to the mental rotation model, which asserts that response preparation is mediated by the imagined rotation of the movement vector (Georgopoulos and Massey, 1987). We recently demonstrated that the mental rotation model does not account for RT in all VMR tasks. Specifically, we revealed an RT advantage for the 180˚ instruction angle relative to 90˚ (Neely and Heath, 2009, in press). We interpreted this as evidence that 180˚ is mediated by a vector inversion strategy; however, we were unable to determine whether 90˚ invoked a mental rotation strategy. The goal of the present work was to examine 90˚ and 180˚ in concert with a set of intermediary angles to determine whether 180˚ is a special case of VMR. To that end, we evaluated two independent sets of instruction angles: 0, 5, 10, 15, 35, 70, 105 and 140˚ (Experiment One) and 0, 30, 60, 90, 120, 150, 180, and 210˚ (Experiment Two). The results revealed a linear increase in RT as a function of instruction angle for Experiment One. In contrast, the results for Experiment Two revealed a non-linear relationship between RT and instruction angle; specifically, we observed an RT advantage for 180˚, followed by 30˚ and 90˚. Such results provide convergent evidence that response planning in the VMR task is not universally mediated by a mental rotation strategy. Rather, we contend that RT is determined by the complexity of the visuomotor transformations supporting the voluntary response.

Acknowledgement: Natural Sciences and Engineering Research Council of Canada (NSERC)

36.328 EEG microstates during visually guided reaching
John de Grosbois1 (john.degrosbois@gmail.com), Frank Colino1, Olav Krigolson2, Matthew Heath3, Gordon Binsted1; 1Department of Human Kinetics, University of British Columbia Okanagan, 2Department of Psychology, University of British Columbia, 3School of Kinesiology, University of Western Ontario

The importance of vision in controlling goal-directed reaching movements has been experimentally validated since Woodworth (1899). Functional MRI and rTMS work has subsequently confirmed that the posterior parietal cortex (PPC) is important for the control of visually guided movements (Culham and Kanwisher, 2001; Desmurget et al., 1999). The temporal resolution of these methods, however, is inappropriate for studying the cortical dynamics of reaching movements. Therefore, this investigation examined the activation dynamics of movement planning and control as measured by electroencephalography (EEG). Participants completed reaching movements under full-vision (FV), no-vision-delayed (NV), or open-loop (OL) conditions. Event-related potential (ERP) analysis segmented with respect to peak velocity (PV) yielded differences in visual and motor areas following PV. To generate an overall evaluation of the activation across time, ERP waveforms were submitted to a space-oriented field clustering approach (Tunik et al., 2008) to determine epochs of semi-stable field configurations (i.e., microstates) throughout the planning and control of reaches. The results of this microstate analysis showed that, regardless of visual condition, the planning and initiation of movement is characterized by two state transitions, ending in an activation pattern dominated by increasing primary-visual and motor activation (FCz, Oz). NV remained in this early movement state and did not enter any control-based state. During FV, activation shifted following PV to a pattern consistent with dorsal (contralateral PPC, frontal, and primary visual areas) guidance of movement. OL transitioned into a bilateral temporal (presumably memory-guided) mode of control that did not exhibit primary visual activation; this had been expected of NV. Thus, even though previous fMRI studies have correctly identified structures important for the control of movement across different visual conditions, they have lacked the temporal resolution to elucidate the pattern of functioning across visually guided reaching movements.

Acknowledgement: NSERC (Binsted, Heath) CFI (Binsted)
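Microstate segmentation of the kind described above can be illustrated with a generic clustering sketch. This is a k-means stand-in for the space-oriented field clustering cited above, not the Tunik et al. procedure or the authors' pipeline; the data are simulated, and the above-median GFP selection is a simplification of the usual GFP-peak criterion.

```python
import numpy as np
from sklearn.cluster import KMeans

# Simulated ERP data: 64 channels x 500 time samples (random walks as stand-ins).
rng = np.random.default_rng(2)
erp = rng.normal(0, 1, (64, 500)).cumsum(axis=1)

# Global field power: spatial SD at each time point. Classical analyses
# cluster the maps at GFP peaks; above-median GFP is used here for brevity.
gfp = erp.std(axis=0)
maps = (erp / gfp).T                       # time x channels, strength-normalized

km = KMeans(n_clusters=4, n_init=10, random_state=0)
km.fit(maps[gfp > np.median(gfp)])
labels = km.predict(maps)                  # one map class per time point

# Microstates: runs of consecutive samples sharing a map class.
changes = np.flatnonzero(np.diff(labels)) + 1
segments = np.split(np.arange(labels.size), changes)
durations = [len(s) for s in segments]
print(len(segments), "segments; mean duration",
      round(float(np.mean(durations)), 1), "samples")
```

Comparing the resulting label sequences across FV, NV, and OL epochs is the kind of state-transition comparison the abstract reports.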
36.329 Rapid Visuomotor Integration of flanking valenced objects
Francisco Colino1 (colinofr@interchange.ubc.ca), John De Grosbois1, Gavin Buckingham2, Matthew Heath2, Gordon Binsted1; 1University of British Columbia, 2University of Western Ontario

Significant neurobehavioral evidence suggests a discrete segregation between the pathways associated with visual perception (i.e., ventral projections) and those ascribed visuo-motor functions (i.e., dorsal projections; in humans, see Milner & Goodale 2008; in non-human primates, see Ungerleider & Mishkin 1982). In general, the dorsal stream appears to be specialized for processing veridical and egocentrically coded cues in a fashion that is independent of conscious awareness (e.g., Binsted et al. 2007). Conversely, the ventral stream considers the relational characteristics of visual objects and scenes to arrive at a richly detailed percept. However, demonstrations of dorsal insensitivity to perceptually driven object features have failed to address valence as an action moderator, despite its apparent evolutionary relevance. Moreover, valenced cues have been observed to modify motor behavior in non-human primates (fear conditioning; Mineka et al. 1984). Thus, it follows that the human visuo-motor system should rapidly integrate abstract scene cues (e.g., valence) to reach a goal while avoiding potential dangers (e.g., predation). To examine this we asked participants to point to visual targets that were randomly flanked by valenced images chosen from the International Affective Picture System (IAPS: e.g., bear cub, gun). All pointing movements had 50 cm amplitude; the target was withdrawn upon movement initiation while the valenced flanker remained. Participant endpoint position was driven towards negatively valenced objects and away from positively valenced objects. Thus, it appears the visuomotor system does not restrict its visual set. Rather, it appears to rapidly integrate perceptual interpretations of abstract and contextual cues for movement adaptation.

Acknowledgement: NSERC, CFI

36.330 Digit magnitude does not influence the spatial parameters of goal-directed reaching movements
Taryn Bingley1 (tbingley@uwo.ca), Matthew Heath1; 1Department of Kinesiology, The University of Western Ontario

Movement times are advantaged when numerical magnitude is used to prompt the initiation of a goal-directed reaching response. In particular, movement times in left and right visual space are reported to be faster when respectively paired with smaller (i.e., 1, 2) and larger (i.e., 8, 9) digits (Fischer 2003: Vis Cogn). In other words, the well-documented spatial numerical association of response codes (the so-called SNARC effect) can be extended to the movement domain. The present study sought to determine whether the SNARC effect differentially influences not only the temporal properties of a reaching response, but also the spatial properties of the unfolding trajectory. To accomplish this objective, participants completed left and right space reaches following movement cuing via numerical stimuli (i.e., 1, 2, 8, or 9). Importantly, placeholders were used to denote the amplitude of the reaching response and were either continuously visible to participants (Experiment 1) or occluded prior to movement onset (Experiment 2). Results for Experiments 1 and 2 elicited a SNARC effect for reaction time; that is, smaller and larger digits produced faster response latencies when used to cue left and right space reaches, respectively. In terms of movement time, Experiment 1 yielded a reversed SNARC effect: reaches were completed faster to larger and smaller digits in left and right space, respectively. For Experiment 2, movement times were not influenced by digit magnitude and the direction of the reaching response. Further, spatial analysis of movement trajectories (Experiments 1 and 2) did not yield reliable interactions between digit magnitude and reaching direction. In general, our results support the assertion that numerical magnitude influences the planning of a response, but does not reliably influence the temporal or spatial parameters of the unfolding reaching trajectory.

Acknowledgement: NSERC

36.331 Bimanual Interaction in Pointing to a Common Visual Target with Unseen Hands
Wenxun Li1 (wl18@columbia.edu), Leonard Matin1; 1Department of Psychology, Columbia University in the City of New York

Although bimanual performance generally involves high correlation between the responses of the two hands, bimanual independence has been achieved by haptic tracking of two different targets, indicating that people can maintain two separate movement plans simultaneously. Recently, we found that observers maintained a high degree of bimanual independence when manually heightmatching to a common visual target. In the present experiments, we employed bimanual pointing to a common visual target. Observers in darkness monocularly viewed a visual target either 12° above, 12° below, or at eye level, with a 50°-long inducing line pitched either -30° (top-backward) or 20° (top-forward) at 25° horizontal eccentricity. Manual pointing to the target was measured by a Polhemus 3-Space search coil with the unseen hand either in the midfrontal plane or with a fully-extended arm. The perceived elevation of a fixed-height target was raised in a pitched-top-backward visual field and lowered in a pitched-top-forward visual field. However, manual pointing to the mislocalized target was accurate with the fully-extended arm whereas, with the hand in the midfrontal plane, pointing errors were equal and opposite to the perceptual mislocalization. With the hands at different distances simultaneously, the pointing direction to the target by the second hand was influenced by the prior pointing direction of the first hand. Average bimanual transfer approximated 43%, whether the first pointing was with the left or right hand. Less transfer (more bimanual independence) was found with the first hand pointing from the midfrontal plane and the second hand pointing with a fully-extended arm than for the reverse order of manual distances. Similar results were obtained for different target heights.
Hand dominance also played an important role: 65% transfer was measured from the dominant to the nondominant hand, whereas 24% transfer was measured from the nondominant to the dominant hand.

Acknowledgement: Supported by NSF grant BCS-06-16654

36.332 Quantitative Treatment of Bilateral Transfer
Leonard Matin1 (matin@columbia.edu), Wenxun Li1; 1Department of Psychology, Columbia University in the City of New York

The usual means of describing bilateral transfer quantitatively is to measure a response for the modality of interest on one side of the midline (e.g., left arm, left eye, etc.) as a baseline condition, and in a separate condition measure the same response following (or simultaneous with) activity of the same modality on the other side of the midline as a bilateral condition. The deviation of the response in the bilateral condition from the baseline condition, divided by the difference between the unilateral responses on the two sides, provides the usual measure of % transfer. We show that the % transfer measure is mathematically identical to a linear weighted average of the unilateral responses from the two sides of the midline, and that this leads to a decrease in the slope of the function relating the original unilateral response to a critical stimulus parameter. We have measured manual heightmatching to a visual target in darkness whose perceived elevation is systematically influenced by the pitch of a single eccentrically located line, and also systematically influenced by the distance of the hand from the body. The connection noted above between the % transfer and the slope of the manual heightmatch-setting-vs-hand-to-body-distance function was found to hold. This supports a linear weighted average model for bimanual heightmatching. The ecological significance of such bilateral averaging will be described. This significance is general for many bilateral functions beyond the manual heightmatching for which we found it to hold, and suggests that experimental tests of other bilateral functions would provide similar agreement with the linear weighted average model.

Acknowledgement: Supported by NSF grant BCS-06-16654
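The identity between % transfer and a weighted average can be written out explicitly. The notation below is a reconstruction from the abstract's verbal description, not the authors' own equations.

```latex
% Percent transfer, from baseline (unilateral) and bilateral responses:
\[
T \;=\; \frac{R_{\mathrm{bi}} - R_{\mathrm{base}}}{R_{\mathrm{other}} - R_{\mathrm{base}}}.
\]
% If the bilateral response is a linear weighted average of the two
% unilateral responses, with weight w on the opposite side, then
\[
R_{\mathrm{bi}} = (1-w)\,R_{\mathrm{base}} + w\,R_{\mathrm{other}}
\quad\Longrightarrow\quad
T = \frac{w\,(R_{\mathrm{other}} - R_{\mathrm{base}})}{R_{\mathrm{other}} - R_{\mathrm{base}}} = w.
\]
% With a unilateral response R(x) = a + bx in some stimulus parameter x
% (e.g., hand-to-body distance), averaging with a fixed contralateral value
% shrinks the measured slope from b to (1-w)b, which is the slope decrease
% described in the abstract.
```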
Perceptual learning: Sensory plasticity and adaptation
Orchid Ballroom, Boards 401–414
Sunday, May 9, 2:45 - 6:45 pm

36.401 Neural correlates of perceptual learning in the human visual cortex
Janneke Jehee1,2,3 (janneke.jehee@vanderbilt.edu), Sam Ling1,2,3, Jascha Swisher1,2, Frank Tong1,2; 1Department of Psychology, Vanderbilt University, 2Vanderbilt Vision Research Center, Vanderbilt University, 3These authors contributed equally to this work

Although practice is known to improve perceptual discrimination of basic visual features, such as small differences in the orientation of line patterns, the neural basis of this improvement is less well understood. Here, we used functional MRI in combination with pattern-based analyses to probe the neural concomitants of perceptual learning. Subjects extensively practiced discriminating small differences in the orientation of a peripherally presented grating. Training occurred in daily 1-hour sessions across 20 days, during which subjects performed the task based on a single orientation at a single location in the visual field. BOLD activity was measured before and after training, while subjects performed the orientation discrimination task on the trained orientation and location, as well as three other orientations and a second isoeccentric location. Behavioral thresholds showed large improvements in performance after training, with a 40% mean reduction in thresholds for the trained orientation at the trained location, and no significant improvement for any of the other conditions. However, analysis of the amplitude of the BOLD response did not reveal a location- or orientation-specific change in gross activity in early visual areas. To test whether learning nonetheless improved the representation of the trained orientation at the trained location, we used a pattern-based analysis to decode the presented stimulus orientation from cortical activity in these regions. Preliminary analyses indicated better decoding performance in areas V1 and V2 for the trained orientation and location, as compared to the untrained conditions. These results suggest that, when analyzed at the population level, perceptual learning results in an improved early-level representation at the trained location for the trained visual feature.

Acknowledgement: This work was supported by a Rubicon grant from the Netherlands Organization for Scientific Research (NWO) to J.J., NRSA grant F32 EY019802 to S.L., NRSA grant F32 EY019448 to J.S., NEI grant R01 EY017082 to F.T., and NEI center grant P30 EY008126.
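The pattern-based decoding step can be illustrated generically: train a linear classifier on voxel patterns and compare cross-validated accuracy to chance. The sketch below uses simulated data and a plain logistic-regression decoder; it is not the authors' analysis pipeline, and every name and value in it is hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical data: trials x voxels BOLD patterns from a visual ROI, with
# one of four grating orientations shown on each trial.
rng = np.random.default_rng(3)
n_trials, n_voxels = 160, 120
orientation = np.repeat([0, 45, 90, 135], n_trials // 4)
patterns = rng.normal(0, 1, (n_trials, n_voxels))
# Inject a weak orientation-dependent signal into a subset of voxels.
for i, ori in enumerate([0, 45, 90, 135]):
    patterns[orientation == ori, i * 10:(i + 1) * 10] += 0.4

# Cross-validated decoding accuracy; above-chance accuracy indicates that
# the ROI carries orientation information at the population level.
clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, patterns, orientation, cv=5)
print(f"decoding accuracy: {acc.mean():.2f} (chance = 0.25)")
```

In the study's logic, higher accuracy for the trained orientation and location than for untrained conditions, at matched overall BOLD amplitude, is what indicates a sharpened population representation.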


36.402 Perceptual learning recruits both dorsal and ventral extrastriate areas
Yetta K. Wong1 (yetta.wong@vanderbilt.edu), Jonathan R. Folstein1, Isabel Gauthier1; 1Psychology Department, Vanderbilt University

In perceptual learning (PL), behavioral improvement is specific to trained stimuli and trained orientation. Some studies suggest that PL recruits V1 (Schiltz et al., 1999; Yotsumoto et al., 2008) and leads to a large-scale decrease in the recruitment of higher visual areas and of the dorsal attentional network (Mukai et al., 2007; Sigman et al., 2005). However, those designs do not address whether these effects are task-dependent, whether they may result from mere exposure, and whether they generalize to training stimuli with variability in shape. Twelve participants were trained for 8 hours to search for objects in a target orientation among an array of 8 distracter objects. Within each display, all objects were identical in shape and varied only in orientation, but across displays a number of similar objects were used. With fMRI, we compared neural activity in response to these objects before and after training. As in prior work (Sigman et al., 2005), behavioral improvement was specific to the trained orientation but generalized to similar objects, and neural activity in early visual areas was higher for objects at the trained orientation after training. Importantly, the neural inversion effect was observed in visual areas well beyond retinotopic cortex, including extrastriate face- and object-selective areas. These inversion effects were not obtained during shape discrimination with the same objects, or in a separate group of participants undergoing eight hours of naming training with the same object set in the same peripheral visual positions, suggesting that the inversion effects were task-dependent and not a result of mere exposure to the objects. Our results extend prior work to suggest that PL can sometimes recruit higher visual areas, possibly depending on the training objects and the variability within the object set.

Acknowledgement: This research was supported by grants to the Temporal Dynamics of Learning Center (NSF Science of Learning Center SBE …)

36.403 Top-down attention is facilitative, but not obligatory, in perceptual learning to reduce sensory eye dominance
Jingping P. Xu1 (j0xu0007@louisville.edu), Zijiang J. He1, Teng Leng Ooi2; 1Department of Psychological and Brain Sciences, University of Louisville, 2Department of Basic Sciences, Pennsylvania College of Optometry at Salus University

Mutual binocular inhibition is unequal in people with sensory eye dominance (SED). We found that SED can be reduced using a Push-Pull perceptual training paradigm in which the weak eye is cued before the two eyes are stimulated with a pair of orthogonal gratings (Xu et al., Neurosci. Abst., 2009). The pre-cueing ensured that the grating in the weak eye was perceived, so that the observer could discriminate its orientation while the strong eye was interocularly suppressed. The impact of the training was limited to the trained retinal location and grating orientation. However, it is unknown whether top-down attention, which directs orientation discrimination in the weak eye, is required for the learning. We investigated this by implementing a 10-day Push-Pull training protocol. During the training, two pairs of orthogonal grating discs (vertical/horizontal, 1.25 deg, 3 cpd, 35 cd/m2) simultaneously stimulated two different retinal locations (2 deg from the fovea). While both retinal locations in the weak eye were pre-cued, observers were instructed to attend to and discriminate the grating orientation at one location (attended) and to ignore the other (unattended). We found that SED gradually reduced with training, at both the attended and unattended locations. This indicates that top-down attention is not required for the learning. Top-down attention is facilitative, however, as the reduction in SED was larger at the attended location. The consequences of reduced SED are also evident in two unrelated binocular visual tasks. One, using a binocular rivalry tracking task, we found that the predominance of seeing the dominant image in the weak eye was significantly enhanced; the enhanced predominance effect was slightly larger at the attended location than at the unattended location. Two, we found that the training caused a similar improvement in stereoacuity at both locations. These findings show that with the Push-Pull training paradigm, top-down attention is not obligatory, but can facilitate perceptual learning.

Acknowledgement: NIH (R01 EY015804)

36.404 Short Term Adaptation of Visual Search Strategies in Simulated Hemianopia
Sara Simpson1 (Sara_ann_simp@hotmail.com), Mathias Abegg1, Jason JS Barton1; 1University of British Columbia

Objective: In this study we isolated the effects of strategic adaptation of healthy individuals to a simulated homonymous hemianopia (sHH) and studied the time course of early changes in search task performance.
Background: Patients with homonymous visual field defects from occipital stroke are known to have impaired visual search performance, both clinically and experimentally. Rehabilitation training is often used to improve such deficits. If improvement occurs, it is not clear whether it is due to strategic adaptation or to recovery through neural plasticity. Moreover, it is not clear how rapidly adaptation occurs. Design/Methods: We used a video eyetracker with a gaze-contingent display to simulate hemianopia. Ten healthy subjects performed a letter search task under conditions of normal viewing, right sHH, and left sHH, with 25 trials per condition. We measured search performance in terms of both speed and accuracy, assessed the effect of viewing condition and, to reveal adaptation effects, assessed the time course within a given viewing condition. Results: Visual search was slower and less accurate in the sHH conditions than in normal viewing. Search performance was comparable in left and right sHH. In the normal viewing condition, subjects showed task-learning improvements in search speed over the first 6 trials, and then maintained a steady asymptotic performance. After the onset of sHH, subjects showed early improvements in search speed that continued over all 25 trials. This hemianopic adaptation was larger in magnitude than the task-learning displayed in the normal viewing condition. Conclusions/Relevance: Our results indicate that an early and rapid strategic adaptation of visual search to hemianopic limitations on vision occurs in the first few minutes after the onset of visual field deficits. Such strategic shifts may account for the alterations in search behaviour seen in pathologic hemianopia, and may need to be taken into account when evaluating the effects of rehabilitation.

Acknowledgement: Dr Barton's lab receives funding from CIHR

36.405 Effects of adaptation on orientation discrimination
Erika Scilipoti1 (erika_scilipoti@brown.edu), Leslie Welch2; 1Cognitive and Linguistic Sciences, Brown University, 2Psychology, Brown University

Adaptation can have an immediate effect on subsequently viewed stimuli; discrimination thresholds decrease at the adapted stimulus orientation and increase for orientations away from the adapted stimulus (Regan & Beverley, 1985; Clifford et al., 2001). Here we investigated the effects that perceptual adaptation could have on orientation discrimination for trained and untrained orientations. Participants were initially trained in an orientation discrimination task at the adapted orientation. On each trial a Gabor pattern was presented at fixation, and the adaptor was followed by the test stimulus. Participants compared the test stimulus orientation to a standard that had the same orientation as the adapting stimulus. Participants completed a total of 10 sessions administered on separate days. Thresholds across sessions for the adaptation condition were lower compared to a control condition at a different orientation with no adapting stimulus. In the second part of the study, we examined participants' orientation discrimination at an orientation 10 degrees away from the adapted orientation. Two conditions were compared: orientation discrimination at the previously trained orientation and at an untrained orientation. Participants completed 4 sessions for the two conditions.
In both cases, adapting to an orientation 10 degrees away from the test orientation increased thresholds. However, the threshold increase was larger for the previously trained orientation than for the untrained orientation. Our results are consistent with the idea that training orientation discrimination increases the weights of the neighboring orientation mechanisms relative to the mechanism most sensitive to the test orientation (Blaser et al., 2004).

36.406 Short-term components of visuomotor adaptation to prism-induced distortion of distance
Anne-Emmanuelle Priot1,2 (aepriot@imassa.fr), Rafael Laboissière2, Claude Prablanc2, Olivier Sillan3, Corinne Roumes1; 1Institut de recherche biomédicale des armées (IRBA), 2Espace et Action, INSERM, UMR-S 864, 3Plateforme Mouvement et Handicap, IFNL-HCL

While the adaptive mechanisms for prism-induced lateral deviation have been widely investigated, little is known about adaptation to prism-induced alteration of distance. The purpose of the present experiment was to study whether a similar pattern of visuomotor plasticity applies to a prism-induced distortion of distance. The experimental paradigm involved, successively, pre-test measures, an exposure phase, and post-test measures. The adaptation process was evidenced by a compensatory aftereffect between the pre- and post-tests. During the exposure, subjects had to point quickly to a visual target with their left hand, seen through a pair of 5∆ base-out prism spectacles. Visuomotor adaptation was assessed by open-loop pointing (i.e., without seeing the hand) to visual targets with the left (exposed) hand. Visual adaptation (an adaptive process common to all effectors) was assessed by open-loop pointing to visual targets with the right (unexposed) hand. Proprioceptive adaptation of the left hand was measured by pointing to the left hand with the right hand while blindfolded. Motor adaptation of the left hand was indirectly inferred by calculating the difference between the visuomotor aftereffect and the algebraic sum of the visual and proprioceptive aftereffects. A significant aftereffect was obtained for both the visuomotor and visual components. No aftereffect was found for the proprioceptive component. The fact that the visuomotor aftereffect was significantly greater than the sum of the visual and (null) proprioceptive aftereffects indicates that a motor adaptation had developed during exposure in addition to the visual adaptation. These findings highlight short-term adaptive components of the response to prism-induced distortion of distance. The adaptive components differed from those found with prism-induced lateral deviation in their respective contributions to the aftereffect, the latter involving little visual adaptation. Such differences in visuomotor adaptation may be attributed to the accuracy of the available error signals, and could rely on different levels of plasticity.

Acknowledgement: Grant N° 07CO802 from Délégation Générale pour l'Armement
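The inference of the motor component follows from a simple additive decomposition; the notation below is reconstructed from the abstract's description rather than taken from the authors.

```latex
% Additive decomposition of the visuomotor aftereffect:
\[
A_{\mathrm{visuomotor}} \;=\; A_{\mathrm{visual}} + A_{\mathrm{proprioceptive}} + A_{\mathrm{motor}}
\quad\Longrightarrow\quad
A_{\mathrm{motor}} \;=\; A_{\mathrm{visuomotor}} - \bigl(A_{\mathrm{visual}} + A_{\mathrm{proprioceptive}}\bigr).
\]
% With the proprioceptive aftereffect measured as null, a visuomotor
% aftereffect exceeding the visual aftereffect alone is the signature of a
% genuine motor component, which is the argument made above.
```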


36.407 Does sleep influence how we see the world around us?
Huy Nguyen1 (htnguy37@mail.uh.edu), Greg Whittaker1, Scott Stevenson2, Bhavin Sheth3,4; 1University of Houston, 2School of Optometry, University of Houston, 3Department of Electrical and Computer Engineering, University of Houston, 4Center for NeuroEngineering and Cognitive Science, University of Houston

Sleep improves learning and consolidates memory. While this view is widely accepted, the notion that sleep can affect our perceptions, the way we view the world around us, has not yet been investigated. Here, we examine whether sleep has an effect on visual perception, specifically on the classification of stimulus color. On a given trial, a full-field homogeneous stimulus of either slightly reddish or greenish hue was displayed. The observer had to judge whether the stimulus was greener or redder than their internal percept of neutral gray. Across trials, the hue was varied using the method of constant stimuli. One pair of monocular tests was run just before the observer went to sleep overnight, and a second pair immediately after the person woke up. Sleep duration was monitored with sleep diaries and actigraphy (7.7 hours on average). A comparison of pre- and post-sleep data (n=5 observers) yielded a small but significant change: after sleep as compared to before, the stimulus was significantly less likely to perceptually take on a greenish tint (p


… and the task (2AFC vs yes-no). We present criteria to identify psychometric functions that are influenced by nonstationarity. Furthermore, we develop strategies that can be applied in different statistical paradigms (frequentist and Bayesian) to correct for errors introduced by nonstationary behavior. Software that automates the proposed procedures will be made available.
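A toy simulation makes the problem concrete: when a threshold drifts within a session, a single stationary psychometric fit misdescribes the data, and even a crude split-half comparison can flag it. This sketch is generic and illustrative; it is not the software announced above, and all functions and numbers in it are hypothetical.

```python
import numpy as np
from scipy.stats import norm

# Simulate a 2AFC observer whose threshold drifts (learning) across a session.
rng = np.random.default_rng(4)
n_trials = 600
levels = rng.uniform(0.1, 2.0, n_trials)      # stimulus intensities
threshold = np.linspace(1.2, 0.6, n_trials)   # nonstationary threshold

def p_correct(x, thr, slope=3.0, guess=0.5):
    """2AFC Weibull-style psychometric function with guess rate 0.5."""
    return guess + (1 - guess) * (1 - np.exp(-(x / thr) ** slope))

correct = rng.random(n_trials) < p_correct(levels, threshold)

# A crude stationarity check: compare accuracy in the first and second half
# at comparable stimulus levels; a reliable difference flags nonstationarity.
half = n_trials // 2
mask = (levels > 0.6) & (levels < 1.2)        # mid-range trials only
p1 = correct[:half][mask[:half]].mean()
p2 = correct[half:][mask[half:]].mean()
n1, n2 = mask[:half].sum(), mask[half:].sum()
se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
z = (p2 - p1) / se
print(f"first half: {p1:.2f}, second half: {p2:.2f}, z = {z:.1f}")
print("nonstationary?", abs(z) > norm.ppf(0.975))
```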
36.412 Pre-exposure interferes with perceptual learning for ambiguous stimuli
Loes van Dam1 (Loes.van.Dam@tuebingen.mpg.de), Marc Ernst1, Benjamin Backus2; 1Max Planck Institute for Biological Cybernetics, 2Dept. of Vision Sciences, SUNY College of Optometry

The perception of a bistable stimulus is influenced by prior presentations of that stimulus. Such effects can be long lasting: e.g., position-dependent learned biases can persist for days, and reversing them requires extensive retraining (Haijiang et al., 2006). The effectiveness of training may therefore be influenced by pre-exposure to the ambiguous stimulus. Here we investigate the role of pre-exposure in learning a position-dependent perceptual bias. We used rotating Necker cubes as the bistable stimuli, which could be presented either above or below fixation. On training trials, additional cues (binocular disparity and occlusion) disambiguated the rotation direction of the cube. On test trials the rotating cube was presented without disambiguation cues. Subjects reported whether the front face of the cube and a moving dot moved in the same or opposite directions, and received feedback about the correctness of their response. Using 350 training trials, subjects were exposed to different rotation directions for the above- and below-fixation locations of the cube. Following a 5-minute break, a post-test (80 test trials) was performed. Separate groups of subjects either started directly with the training or were first pre-exposed to the ambiguous stimulus in a pre-test (80 test trials). Subjects who started the training immediately perceived the cube, on average, to be rotating in the trained direction at both locations on 83% of the post-test trials, replicating previous results. However, for the pre-exposed subjects, consistency with the trained percept-location contingency was only 58% in the post-test. In control conditions we simulated the pre-test using disambiguated trials, initially presenting subjects with the contingency reversed from the one to which they would subsequently be exposed during training. Post-test consistency with the trained contingency was 78%. This shows that the pre-exposure interference does not necessarily depend on the initial perceptual history, suggesting a fundamental difference between test and training trials.

Acknowledgement: Human Frontier Science Program

36.413 The Role of Gist in Dyslexia
Matthew H. Schneps1 (mschneps@cfa.harvard.edu), James Brockmole2, Amanda Heffner-Wong1, Marc Pomplun3, Alex D. Hwang3, Gerhard Sonnert1; 1Laboratory for Visual Learning, Harvard-Smithsonian Center for Astrophysics, 2Department of Psychology, University of Notre Dame, 3Visual Attention Lab, Department of Computer Science, University of Massachusetts Boston

Dyslexia, a neurological condition that impairs reading, has been associated with advantages for rapid processing in the peripheral visual field (Geiger and Lettvin, 1987; Facoetti et al., 2000; von Karolyi et al., 2003; Schneps et al., 2010, in preparation), suggesting enhanced sensitivity to visual gist (Oliva, 2005) in this group. Sensitivities to peripheral gist might be expected to contribute to spatial learning, and Howard et al. (2006) and Schneps et al. (2010) used the contextual cueing (CC) paradigm of Chun & Jiang (1998) to measure this in those with dyslexia. They found that while people with dyslexia are initially slower than controls at visual search, they are able to effectively improve the efficiency of their search through spatial learning, so that the search times of those with dyslexia become comparable to the controls'. Spatial learning in the traditional CC task is dictated by the configuration of cues nearest the target (Brady & Chun, 2005), making scant use of peripheral gist. However, if the task is modified to provide stronger peripheral cues in the gist, we might expect that those with dyslexia can outperform controls on searches involving learned configurations. To test this hypothesis, we compared a group of college students with dyslexia against controls using three variants of the CC task: (1) a traditional CC paradigm using L shapes for cues; (2) a variant using realistic scenes for cues (Brockmole & Henderson, 2006); and (3) a new task that uses a context defined by low spatial frequency gist. Our hypothesis is that spatial learning will improve in those with dyslexia compared to controls as the role of gist is successively increased across the tasks. Here, we report preliminary findings from this study.

Acknowledgement: NSF supported this work under award HRD-0930962. Schneps received support from a George E. Burch Fellowship to the Smithsonian Institution.

36.414 Repeated contextual search cues lead to reduced BOLD onset times in early visual and left inferior frontal cortex
Stefan Pollmann1,2 (stefan.pollmann@ovgu.de), Angela Manginelli1; 1Department of Experimental Psychology, University of Magdeburg, 2Center for Behavioral Brain Sciences, Magdeburg, Germany

Repetition of context can facilitate search for targets in distractor-filled displays. This contextual cueing goes along with enhanced event-related brain potentials in visual cortex, as previously demonstrated with depth electrodes in the human brain. However, modulation of the BOLD response in striate and peristriate cortices has, to our knowledge, not yet been reported as a consequence of contextual cueing. In an event-related fMRI experiment with 16 participants, we observed a selective reduction of BOLD onset latency for repeated distractor configurations in these areas. In addition, the same onset latency reduction was observed in posterior inferior frontal cortex, a potential source area for feedback signals to early visual areas. These latency changes occurred in the absence of differential BOLD time-to-peak and BOLD amplitude for repeated versus new displays. The posterior part of left inferior frontal cortex has previously been linked to repetition priming, however in the form of repetition suppression. Those studies differ from ours in many respects, such as awareness of stimulus repetition and semantic processing.
The overlap of activation found in previous priming studies and in the current experiment does not allow the reverse inference that the same mechanisms are involved in contextual cueing and priming. However, future experiments may investigate the mechanisms that lead to repetition suppression versus BOLD onset reduction in left posterior inferior frontal cortex and visual cortex, thereby elucidating the commonalities or differences between repetition priming and contextual cueing.

Acknowledgement: DFG, Grant PO 548/6-2

Color and light: Lightness and brightness
Orchid Ballroom, Boards 415–431
Sunday, May 9, 2:45 - 6:45 pm

36.415 The staircase Kardos effect: An anchoring role for lowest luminance?
Stephen Ivory1 (southorange21@yahoo.com), Alan Gilchrist1; 1Rutgers University-Newark

In the staircase Gelb effect, a black surface in a spotlight appears white and becomes darker as four lighter shades of gray are added within the spotlight. Each new square is seen as white until the next square is added, confirming that lightness values are anchored by the highest luminance. To explore whether the lowest luminance plays any anchoring role, we tested an inverted version of the staircase Gelb effect. We started with a white target square in a hidden shadow that appeared black (Kardos illusion). Then, successively, 4 darker squares were added in a row within the shadow: light gray, middle gray, dark gray, and black, with each new configuration viewed by a separate group of 15 observers, who matched each square using a 16-step Munsell chart. The target square appeared lighter as each darker square was added, suggesting that the lowest luminance may play some anchoring role. However, as darker squares were added, not only did the lowest luminance decrease but the number of squares (articulation) increased as well. In subsequent experiments we varied lowest luminance while holding articulation constant, and varied articulation while holding lowest luminance constant. Separate groups of 15 observers each viewed 6 different displays: 2, 5, and 30 squares with a reflectance range of 30:1 (white to black), and 2, 5, and 28 squares with a reflectance range of 2.25:1 (white to light gray). Articulation level had a major effect on target lightness, while lowest luminance affected target lightness in the 5-square configuration but not in the 2- or 28/30-square configurations. Our results suggest that the staircase Kardos effect is due to the increasing articulation, not the decreasing lowest luminance, consistent with other evidence of the asymmetry between highest and lowest luminance values in anchoring lightness.

Acknowledgement: NSF (BCS-0643827) NIH (BM 60826-02)


36.416 Bayesian and neural computations in lightness perception
Michael E. Rudd1,2 (mrudd@u.washington.edu); 1Howard Hughes Medical Institute, 2Department of Physiology and Biophysics, University of Washington

The task of computing lightness (i.e., perceived surface reflectance) from the spatial distribution of luminances in the retinal image is an underdetermined problem, because the causal effects of reflectance and illumination are confounded in the image. Some recent approaches to lightness computation combine Bayesian priors with empirical estimates of the illuminant to compute reflectance from retinal luminance. Here, I argue for a different sort of Bayesian computation that takes local signed contrast (roughly, "edges") as its input. Sensory edge information is combined with Bayesian priors that instantiate assumptions about the illumination and other rules such as grouping by proximity. The model incorporates a number of mechanisms from the lightness literature, including edge integration, anchoring, illumination frameworks, and contrast gain control. None of these mechanisms is gratuitous; all are required to account for the data. I demonstrate how the model works by applying it to the results of lightness matching studies involving simple stimuli. Failures of lightness constancy are quantitatively accounted for by misapplying priors that probably favor lightness constancy in natural environments. Assimilation and contrast occur as byproducts. The rules that adjust the priors must necessarily be applied in a particular order, suggesting an underlying neural computation that first weighs the importance of local edge data according to the observer's assumptions about illumination, then updates these weights on the basis of the spatial organization of the stimulus, then spatially integrates weighted contrasts prior to a final anchoring stage. The order of operations is consistent with the idea that top-down attentional feedback sets the gains of early cortical contrast detectors in visual areas V1 or V2, and that higher-level visual circuits having larger receptive fields further adjust these gains in light of the wider spatial image context. The spatial extent of perceptual edge integration suggests that lightness is represented in or beyond area V4.
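The edge-integration stage can be sketched as a weighted sum of log luminance ratios. The formula below is a generic reconstruction of that idea from the abstract's verbal description, not the author's own equations.

```latex
% Generic edge-integration sketch: lightness of a target region as a weighted
% sum of signed log edge contrasts along a path from an anchor (e.g., the
% highest luminance) to the target, with weights w_i set by gain control:
\[
\log L_{\mathrm{target}} \;=\; \log L_{\mathrm{anchor}}
\;+\; \sum_{i} w_i \,\log\!\frac{I_{i+1}}{I_{i}},
\]
% where I_i and I_{i+1} are the luminances on the two sides of the i-th edge
% crossed by the path. In the processing order described above, priors and
% spatial organization adjust the weights w_i before the sum is taken.
```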
36.417 Illusory lightness perception due to signal compression and reconstruction
Cornelia Fermuller1 (fer@cfar.umd.edu), Yi Li2; 1Institute for Advanced Computer Studies, University of Maryland, 2Department of Electrical and Computer Engineering, University of Maryland

We propose a computational model that can account for a large number of lightness illusions, including the seemingly opposing effects of brightness contrast and assimilation. The underlying mathematics is based on the new theory of compressive sensing, which provides an efficient method for sampling and reconstructing a signal that is sparse or compressible. The model states that at the retina the intensity signal is compressed; this process amounts to a random sampling of locally averaged values. In the cortex the intensity values are reconstructed using the compressed signal as input, and combined with the edges. Reconstruction amounts to solving an underdetermined linear equation system using L1-norm minimization. Assuming that the intensity signal is sparse in the Fourier domain, the reconstructed signal, which is a linear combination of a small number of Fourier components, deviates from the original signal. The reconstruction error is consistent with the perception of many well-known lightness illusions, including the contrast and assimilation effects, the articulation-enhanced brightness contrast, the checker-shadow illusion, and grating induction. Considering, in addition, the space-variant resolution of the human eye, the model also explains illusory patterns with changes in perceived lightness over large ranges, such as the Cornsweet and related illusions. We conducted experiments with new variations of the White and Dungeon illusions, whose perception changes with the resolution at which the different parts of the patterns appear on the eye, and found that the model predicted well the perception of these stimuli.

Acknowledgement: NSF
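The reconstruction step the model relies on, basis pursuit (minimize the L1 norm subject to the measurements), can be posed as a linear program. The toy below is not the authors' model: it recovers an identity-domain sparse signal rather than a Fourier-sparse image, and all sizes and names are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Toy compressive-sensing demo: recover a sparse signal from a small number
# of random linear measurements (stand-ins for locally averaged samples).
rng = np.random.default_rng(5)
n, m, k = 64, 24, 3                        # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
A = rng.normal(0, 1, (m, n)) / np.sqrt(m)  # random sampling operator
y = A @ x_true                             # compressed measurements

# Basis pursuit: minimize ||x||_1 subject to A x = y, as a linear program
# over (x, t) with the constraints -t <= x <= t and t >= 0.
c = np.concatenate([np.zeros(n), np.ones(n)])
A_eq = np.hstack([A, np.zeros((m, n))])
A_ub = np.block([[np.eye(n), -np.eye(n)], [-np.eye(n), -np.eye(n)]])
b_ub = np.zeros(2 * n)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * n + [(0, None)] * n)
x_hat = res.x[:n]
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

In the model's account, it is precisely when such a reconstruction deviates from the original signal (e.g., when the sparsity assumption is only approximately met) that illusory lightness shifts are predicted.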
36.418 Local computation of brightness on articulated surrounds
Masataka Sawayama1 (m.sawayama@graduate.chiba-u.jp), Eiji Kimura2; 1Graduate School of Humanities and Social Sciences, Chiba University, 2Department of Psychology, Faculty of Letters, Chiba University

[Purpose] A brightness difference between two identical gray stimuli on uniform light and dark surrounds becomes larger when the surrounds are replaced by ones composed of many small patches having different luminances ("articulated" surrounds) while keeping the space-averaged luminance constant. To explore the visual mechanisms underlying this articulation effect in terms of global vs. local processing, the present study introduced the perception of transparency over the dark surround by manipulating global stimulus configuration alone, and investigated its effects on brightness perception on that surround.
[Methods] Light and dark surrounds were placed side by side and were either spatially uniform or articulated. By adding a contiguous region of lower luminance to the dark surround, the perception of transparency (i.e., the impression of being covered by a larger dark filter or shadow) was produced under the transparency condition. Under the no-transparency conditions, perceived transparency was eliminated by separating the dark from the light surround and by introducing a gap at the border of the dark surround. Local stimulus configuration within the surround was kept constant across conditions. The space-averaged luminances of the light and dark surrounds were 1.16 and 0.38 log cd/m2, respectively. Observers matched the brightness of the test stimulus (1.06 log cd/m2) on the dark surround by adjusting the luminance of the matching stimulus on the light surround.
[Results and Discussion] With the uniform surrounds, the test stimulus appeared brighter under the transparency condition than under the no-transparency conditions. In contrast, this brightness difference was not found with the articulated surrounds, although the manipulation of global configuration substantially changed the appearance of the stimulus on the dark articulated surround. The articulation effect was consistently found under all conditions. These findings suggest that brightness perception on the present articulated surround was determined almost exclusively by local computation of brightness.

36.419 Can luminance contrast be estimated with real light?
James Schirillo1 (schirija@wfu.edu), Matthew Riddle1, Rumi Tokunaga2, Alexander Logvinenko3; 1Department of Psychology, Wake Forest University, 2Department of Information Systems Engineering, Kochi University of Technology, 3Department of Vision Sciences, Glasgow Caledonian University

Given that numerous studies have shown that humans can match the luminance contrast between edges generated on a CRT monitor, it should be possible to match a crisp luminance edge produced by a spotlight to a luminance edge produced by a reflectance edge in a natural scene. In one experiment we had 40 naïve observers match the luminance contrast of a luminance edge produced by a spotlight to one of 20 reflectance edges. In a second experiment we had the same observers match the lightness of the region lit by the spotlight to one of the same 20 reflectance edges. The luminance ratio produced by the spotlight was 15:1, its luminance was 22.4 cd/m2, and its area was 9.0° x 4.9° of visual angle. The size of each of the 20 reflectance papers was 0.72° x 0.72°. We found, first, large inter-individual variations, with the luminance matches covering a ~15:1 range, suggesting that observers cannot make an accurate luminance match, unlike performance with a CRT screen. Second, observers' histograms of lightness and luminance matches were very close to each other, suggesting that when asked to make a luminance match they actually performed a lightness match. Lastly, the luminance contrast averaged ~6.33:1 for both luminance contrast matches and lightness matches. This underestimates the actual luminance contrast produced by the spotlight by 42%. These findings suggest that observers cannot estimate the luminance contrast produced by real objects lit by real light sources. Whether these findings conflict with what has been reported for luminance contrast matches with a CRT screen will be discussed.


36.420 On the relationship between luminance increment thresholds and apparent brightness
Marianne Maertens1 (marianne.maertens@tu-berlin.de), Felix A. Wichmann1; 1Modelling of Cognitive Processes, Berlin Institute of Technology and Bernstein Center for Computational Neuroscience, Berlin, Germany

It has long been known that the just noticeable difference (JND) between two stimulus intensities increases in proportion to the background intensity: Weber's law. It is less clear, however, whether the JND is a function of the physical or the apparent stimulus intensity. In many situations, especially in the laboratory using simple stimuli such as uniform patches or sinusoidal gratings, physical and perceived intensity coincide. Reports that tried to disentangle the two factors yielded inconsistent results (e.g., Heinemann, 1961 Journal of Experimental Psychology 61 389-399; Cornsweet and Teller, 1965 Journal of the Optical Society of America 55(10) 1303-1308; Henning, Millar and Hill, 2000 Journal of the Optical Society of America 17(7) 1147-1159; Hillis and Brainard, 2007 Current Biology 17 1714-1719). A necessary condition for estimating the potential effect of appearance on JNDs is to quantify the difference between physical and apparent intensity in units of physical intensity, because only that allows one to predict the expected JNDs. In the present experiments we utilized a version of the Craik-O'Brien-Cornsweet stimulus (Purves, Shimpi and Lotto, 1999 Journal of Neuroscience 19 8542-8551) to study the relationship between JNDs and apparent brightness. We quantitatively assessed apparent brightness using a paired-comparison procedure related to maximum-likelihood difference scaling (Maloney and Yang, 2003 Journal of Vision 3(8) 573-585), in which observers compared the perceptual difference between two pairs of surface intensities. Using the exact same stimulus arrangement, that is, two pairs of surfaces, we asked observers to detect a luminance increment in a standard spatial 4-alternative forced-choice (4-AFC) task.

36.421 Feedback does not cleanse brightness judgments of contrast and assimilation effects
Steven Kies1 (skies@uci.edu), Charles Chubb1; 1Department of Cognitive Sciences, UC Irvine

Judgments of the brightness of a test patch are strongly influenced by contrast and assimilation. However, the experiments that document these effects typically do not use feedback. We wondered whether observers might have access to strategies that were cleansed of these effects if they were given trial-by-trial feedback. In this study, observers viewed a 3.33° diameter Test-disk surrounded by an annular ring of what appeared to be homogeneous visual noise; their task was to judge whether the luminance of the Test-disk was higher or lower than that of the fixed gray background outside the annulus. Although the annulus looked like a ring of visual noise on each trial, it was actually composed of a random, weighted sum of 11 orthogonal basis images: 5 noise images (constrained to contribute 94.8% of the energy in the noisy annulus) and 6 concentric annuli (which collectively covered the same region as the noise basis images and contributed the remaining 5.2% of the energy). Data were collected for four display durations: 13, 27, 53, and 107 ms. In each case, logistic regression was used to determine the influence exerted on the participant's judgments by the 11 basis components. Performance was similar for the four display durations. The innermost annulus exerted a contrast effect: Test-disk contrast judgments were negatively correlated with inner-annulus contrast. However, for four of the five participants, judgments tended to be positively correlated with the contrasts of the outermost two annuli. Plausibly, this latter effect reflects assimilation by the Test-disk of annulus brightness induced by contrast of the outer annular strip with the background. Thus the feedback supplied did not enable observers to escape contrast and assimilation effects.

Acknowledgement: National Science Foundation BCS-0843897
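The influence-estimation step lends itself to a compact illustration: regress the binary judgments on the per-trial basis weights and read each coefficient as that component's influence. The sketch below is generic and uses simulated data; it is not the authors' analysis, and the signs and sizes of the simulated influences are hypothetical stand-ins for the pattern described above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical reverse-correlation data: on each trial the annulus is a random
# weighted sum of 11 orthogonal basis images; we record the weights and the
# observer's "higher"/"lower" judgment.
rng = np.random.default_rng(6)
n_trials, n_bases = 2000, 11
weights = rng.normal(0, 1, (n_trials, n_bases))

# Simulate an observer influenced negatively by an inner annulus (index 5)
# and positively by the outermost annuli (indices 9, 10).
drive = -0.8 * weights[:, 5] + 0.4 * weights[:, 9] + 0.4 * weights[:, 10]
judged_higher = rng.random(n_trials) < 1 / (1 + np.exp(-drive))

# Logistic regression recovers each basis component's influence on judgments.
model = LogisticRegression().fit(weights, judged_higher)
for i, b in enumerate(model.coef_[0]):
    print(f"basis {i:2d}: influence {b:+.2f}")
```

Negative coefficients for inner annuli (contrast) alongside positive coefficients for outer annuli (assimilation) would reproduce the qualitative pattern reported in the abstract.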
36.422 Optic flow strongly affects brightness
Yury Petrov1 (y.petrov@neu.edu), Jiehui Qian1; 1Psychology Department, Northeastern University

It is well known that brightness/lightness is determined by the pattern of luminance within the target's context. Here we report a new phenomenon demonstrating that brightness is also strongly affected by the motion pattern within the context. We found that optic flow consistent with dots moving in depth modulates the dots' brightness. The brightness of light dots increases, while the brightness of dark dots decreases, by 30% when the dots appear to move away to twice the original distance from the viewer. The effect reverses when the dots appear to move nearer. The effect persists for a wide range of dot contrasts, velocities, sizes, densities, and background luminances. We also found that the density of dots modulates their brightness in a similar fashion, but the density effect alone is about 3 times weaker than that produced by the optic flow. To explain the phenomenon, we suggest that the brain calculates brightness based on the estimated distance to the dots. When the distance appears to increase while the luminance of the dots remains constant, the brain interprets this as an increase in the dots' luminosity and (partially) displays this increased luminosity as increased brightness. This interpretation is corroborated by the fact that the size of the dots appears to be modulated in the same fashion as their brightness: the receding dots seem to grow, while the approaching dots seem to shrink.
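The distance-based account can be made explicit with the inverse-square relation for a point-like source; the formulation below is a reconstruction of the reasoning sketched above, not the authors' stated model.

```latex
% Retinal illuminance E from a point-like source of luminous intensity J at
% distance d falls off as the inverse square of distance:
\[
E \;\propto\; \frac{J}{d^{2}}
\qquad\Longrightarrow\qquad
J \;\propto\; E\,d^{2}.
\]
% If E is held constant while the apparent distance doubles, the inferred
% intensity J must quadruple; a partial readout of that inference as
% brightness matches the direction of the reported effect (light dots that
% appear to recede look brighter).
```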
36.423 The Neural Locus Underlying Perception of the Craik-O'Brien-Cornsweet Effect
Anthony D'Antona1,2 (adantona@uchicago.edu), Ari Rosenberg3, Steven Shevell1,2,4; 1Department of Psychology, University of Chicago, 2Visual Science Laboratories, Institute for Mind and Biology, University of Chicago, 3Committee on Computational Neuroscience, University of Chicago, 4Visual Science, University of Chicago

Introduction: The Craik-O'Brien-Cornsweet (COC) effect occurs when two adjacent equiluminant regions differ in brightness because of a light-dark border between them. This effect, described more than half a century ago, still has an unknown neural basis. This study localizes the origin of the COC effect to a binocular neural locus. Methods: Experiment 1) Two luminance profiles, with equal baseline luminances, had a central region shaped like an isosceles triangle. One luminance profile had an incremental central region (luminance profile ---^---); the other profile was similar except that the isosceles triangle was a decrement. The two luminance profiles were combined, and the separation between the two triangles' centers was varied. At certain separations, superposition of the two profiles produced a COC luminance edge. The increment and decrement profiles were either (1) physically summed and presented to one eye (monocular COC border) or (2) presented to separate eyes, so that the COC border existed only after binocular combination (dichoptic COC border). Observers indicated which side of the stimulus appeared brighter. Experiment 2) Monocular and dichoptic COC borders were presented at different contrasts, and observers matched the brightnesses on each side of the COC border. Experiment 3) A monocular COC border was presented to one eye and a grating or moving dots were presented to the other eye, so that the COC border was suppressed due to binocular rivalry. Observers indicated which side of the stimulus appeared brighter. Results & Conclusion: The COC effect occurred for both monocular and dichoptic borders, with brightness matches virtually identical in both cases. The COC effect was absent when the border was suppressed by binocular rivalry. Therefore, a monocular COC border is neither necessary nor sufficient for the COC effect. This implies a binocular neural locus after binocular rivalry is resolved.

Acknowledgement: Supported by NIH grant EY-04802

36.424 Filling-in versus multiscale filtering: Measuring the speed and magnitude of brightness induction as a function of distance from an inducing edge
Barbara Blakeslee1 (barbara.blakeslee@ndsu.edu), Mark McCourt1; 1Department of Psychology, Center for Visual Neuroscience, North Dakota State University

Early investigations of the temporal properties of brightness induction using brightness matching found that induction was a sluggish process with temporal frequency cutoffs of 2-5 Hz (DeValois et al., 1986; Rossi & Paradiso, 1996). This led Rossi and Paradiso (1996) to propose that a relatively slow "filling-in" process was responsible for induced brightness. In contrast, Blakeslee and McCourt (2008), using a quadrature-phase motion technique, found that real and induced gratings showed similar temporal characteristics across wide variations in test field height, and demonstrated that induction was observable at frequencies up to 25 Hz. Here we compare predictions of filling-in versus multiscale filtering mechanisms with data disclosing the phase (time) lag and magnitude of brightness induction as a function of distance from the test/inducing field edge. Narrow-probe versions of the original quadrature-phase motion technique (Blakeslee & McCourt, 2008) and a quadrature-phase motion cancellation technique are used to measure the phase (time) lag and the magnitude of induction, respectively. Both experiments employ a 0.0625 c/d sinusoidal inducing grating counterphasing at a temporal frequency of 4 Hz and a test field height of 3°. A 0.25° quadrature probe grating is added to the test field at seven locations relative to the test/inducing field edge. The psychophysical task in both experiments is a forced-choice "left" versus "right" motion judgment of the induced-plus-quadrature-probe compound in the test field. The results show that the phase (time) lag of induction does not vary with distance from the test/inducing field edge; however, the magnitude of induction decreases with increasing distance. These results are inconsistent with an edge-dependent filling-in process of the type proposed by Rossi and Paradiso (1996), but are consistent with multiscale filtering by a finite set of filters such as that proposed by Blakeslee and McCourt (2008).

Acknowledgement: NIH NCRR P20 RR020151 and EY014015


36.425 Perception Begets Reality: A "Contrast-Contrast" Koffka Effect
Abigail Huang 1 (huangae@umdnj.edu), Megha Shah 2, Alice Hon 1, Eric Altschuler 1,3; 1 School of Medicine, New Jersey Medical School, UMDNJ, 2 Department of Biology, The College of New Jersey, 3 Departments of Physical Medicine and Rehabilitation and Microbiology & Molecular Medicine, New Jersey Medical School, UMDNJ
Eighty years ago Koffka described a fascinating effect: when a contiguous gray ring is placed on a background half of one shade of gray (different from the ring) and half of another shade of gray, the ring appears to be homogeneous. However, if the ring is slightly divided, the two halves of the ring appear different shades of gray, with the half of the ring on the darker background appearing lighter than the half of the ring on the lighter background. The Gestalt principle of continuity is invoked to explain this effect, with the geometric continuity when the half-rings are joined leading to the perception of a homogeneous shade/color of the ring. In studying this effect we have found a "contrast-contrast" Koffka effect: single, identical small gray square checks are placed on each of two identical gray half-rings. The half-rings are then placed on a white/light background and a dark/black background, respectively. Both the check on the half-ring on the white background, and that half-ring itself, appear darker than the check and half-ring, respectively, on the black background: a standard contrast effect. We then join the half-rings. The ring now appears homogeneous (Koffka's effect). What about the two checks? They still appear somewhat different, with the check on the side of the white background appearing darker. But the difference in the appearance of the checks is less pronounced than when the half-rings were separated! The change in the perception of the half-rings by the Koffka effect has begotten a change in the appearance of the checks. We find this a particularly clear demonstration of how perception can influence perception, and indeed "reality", with a different perception of the ring begetting a new reality in the perception of the checks.

36.426 Response priming driven by local contrast, not subjective brightness
Thomas Schmidt 1 (thomas.schmidt@sowi.uni-kl.de), Sandra Miksch 2, Lisa Bulganin 2, Florian Jäger 2, Felix Lossin 2, Joline Jochum 1, Peter Kohl 1; 1 Psychology I, University of Kaiserslautern, Germany, 2 General and Experimental Psychology, University of Giessen, Germany
We demonstrate qualitative dissociations of brightness processing in visuomotor priming and conscious vision. Speeded keypress responses to the brighter of two luminance targets were performed in the presence of preceding dark and bright primes (clearly visible and flanking the targets) whose apparent brightness was enhanced or attenuated by a visual illusion. Response times to the targets were greatly affected by consistent vs. inconsistent arrangements of the primes relative to the targets (response priming). Priming effects could systematically contradict subjective brightness matches, such that one prime could appear brighter than the other but prime as if it were darker. Systematic variation of the illusion showed that response priming effects depended only on local flanker-background contrast, not on the subjective brightness of the flankers. Our findings suggest that speeded motor responses, as opposed to conscious perceptual judgments, access an early phase of lightness processing prior to full lightness constancy.
Acknowledgement: German Research Foundation (DFG)
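A note on the arithmetic behind the dissociation: the response priming effect is just the mean RT difference between inconsistent and consistent prime-target arrangements, computed separately for each illusion condition. The toy Python below uses simulated placeholder RTs (not the authors' data) to show the computation whose sign can contradict the brightness matches.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 400
    consistent = rng.integers(0, 2, n).astype(bool)   # prime-target arrangement
    # Simulated RTs for illustration only: inconsistent trials are slower.
    rt = 420 + 35 * (~consistent) + rng.normal(0, 40, n)

    priming_ms = rt[~consistent].mean() - rt[consistent].mean()
    print(f"response priming effect: {priming_ms:.1f} ms")
    # The dissociation: compute this per illusion condition and compare its
    # sign with the brightness-match difference for the same two primes.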
36.427 The effect of contrast intensity and polarity in the achromatic watercolor effect
Bo Cao 1 (ffcloud.tsao@gmail.com), Arash Yazdanbakhsh 1, Ennio Mingolla 1; 1 Department of Cognitive and Neural Systems, Boston University
The watercolor effect (WCE) is a filling-in phenomenon in which a surface surrounded by two thin abutting lines takes on the chromaticity of the interior line. We developed a series of achromatic WCE stimuli and a method to quantitatively compare the lightness of the filling-in region surrounded by lines of various luminances. We define the interior line, whose luminance is fixed, as "the inducer", and the exterior line, whose luminance varies across stimuli, as "the suppressor". The results of a psychophysical experiment with seven subjects (five naive) show that the achromatic WCE exists. Moreover, we found that suppressors of both high and low luminance can induce the WCE with an inducer of moderate luminance, as long as the contrast difference between the inducer and the suppressor passes a certain threshold. All subjects show a single peak of effect strength, which is never at the extreme contrast difference, though there are individual differences in the location of the peak. That is, the effect is never strongest when the suppressor is black or white. Most subjects show an inverted-U curve for suppressors with both higher and lower contrast than the inducer. For most subjects, a suppressor with contrast polarity opposite to that of the inducer generates a stronger effect than a suppressor with the same contrast polarity as the inducer. These results suggest that the contrast difference affects the existence and strength of the WCE, but not in a linear way. Moreover, as in the Craik-O'Brien-Cornsweet effect, besides the contrast intensity, the contrast polarity also plays a role in the WCE.
Acknowledgement: EM was supported in part by CELEST, an NSF Science of Learning Center (NSF SBE-0354378), HP (DARPA prime HR001109-03-0001), and HRL Labs LLC (DARPA prime HR001-09-C-0011). BC and AY were supported by CELEST.

36.428 Response classification analysis of the maintenance of contrast for an object
Steven Shimozaki 1 (ss373@le.ac.uk); 1 School of Psychology, University of Leicester
Previously, Shimozaki, Thomas, and Eckstein (1999, JEP:HPP) found that an object's contrast is affected by its previous contrast. In that study observers perceived two moving squares across two intervals through apparent motion. Observers were told to judge the contrast of one square only in the second interval; despite this instruction, contrast changes in the target square from the first to the second interval led to worse performance. This study assessed the spatio-temporal dynamics of this effect through response classification. Three observers performed a yes/no contrast discrimination of 1° uniform central squares (30% signal contrast) presented for 90.9 ms (2 frames, 45.4 ms/frame) on pedestals varied for each observer for near-threshold performance (15-25%). A non-judged interval of 318.2 ms (7 frames, 45.4 ms/frame) preceded the judged interval, with two 1° squares (1° apart) in the upper left and right with corners abutting the target square.
Another square (1°) in the second interval (1° to the right or left of the target) led to apparent motion of two squares, either left or right. The contrasts of all squares were randomized (either the pedestal or signal contrast), as was the direction of motion (left or right). For response classification, stimuli were presented in Gaussian-distributed image noise, independently sampled for each frame. The behavioral results replicated the previous study; despite instructions to judge only the second interval, performance was significantly worse when the target contrast changed from the first to second interval (change in d': 0.788 to 0.991). The classification movies during the second (judged) interval found that the target contrast in the second interval affected the judgments, as expected, with the second frame having a larger effect. The classification movies during the first (non-judged) interval found that the effects of the target square were distributed throughout the 7 frames (318.2 ms), and began with the second frame at 45 ms.
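The response-classification computation itself is compact. A minimal sketch, with assumed array shapes and simulated responses standing in for the study's data: the classification movie is the mean noise field on "yes" trials minus the mean on "no" trials, frame by frame.

    import numpy as np

    def classification_movie(noise, resp):
        """Mean noise on 'yes' trials minus mean noise on 'no' trials,
        frame by frame, revealing when and where pixels drove the decision."""
        resp = np.asarray(resp, dtype=bool)
        return noise[resp].mean(axis=0) - noise[~resp].mean(axis=0)

    rng = np.random.default_rng(1)
    # Assumed layout: (n_trials, n_frames, height, width); 7 non-judged
    # frames plus 2 judged frames, as in the abstract's timing.
    noise = rng.normal(0, 1, size=(500, 9, 16, 16))
    resp = rng.integers(0, 2, 500).astype(bool)   # placeholder yes/no responses
    movie = classification_movie(noise, resp)     # shape (9, 16, 16)

In practice the trials would also be split by signal presence before differencing, but the frame-by-frame subtraction above is the core of the method.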


36.429 Snake illusion, edge classification, and edge curvature
Dejan Todorovic 1 (dtodorov@f.bg.ac.rs), Suncica Zdravkovic 2; 1 Department of Psychology, Laboratory of Experimental Psychology, University of Belgrade, Serbia, 2 Department of Psychology, University of Novi Sad, Serbia, and Laboratory of Experimental Psychology, University of Belgrade, Serbia
The snake illusion is a lightness effect in which identical gray targets embedded in complex displays may look either strongly different ('snake') or very similar ('anti-snake'), depending on the luminance of some non-adjacent patches. It has been suggested (Logvinenko et al., Perception & Psychophysics, 2005, 67, 120-128) that this effect is based on the classification of luminance edges as either reflectance or illumination edges (the latter also involving a sense of transparency or shadow), and on the tendency of the visual system to interpret edges as the former rather than the latter if they are curved rather than straight. To examine these notions, we used five pairs of snake/anti-snake displays (created by switching the luminance levels of certain portions of the displays), each containing six targets (a high-luminance, a medium-luminance, and a low-luminance pair). The displays were presented on a calibrated monitor placed in a dark void. Each of our nine naive observers participated in four individual sessions. They performed lightness matches by adjusting the luminance of comparison patches on the screen. We replicated the basic effect with a snake/anti-snake stimulus pair slightly modified from the original, which contained straight luminance edges and a sense of shadow or transparency. However, we obtained an effect of the same strength with a variant display which retained much the same structure as the first, including straight edges, but involved inverted transparency conditions. For displays involving curved or jagged edges we found that the strength of the effect was clearly diminished. Our results confirm previous findings that the shape of luminance edges may affect the strength of this class of illusions, but argue against theories based on edge classification and transparency/shadow perception. We also found that the illusion was strongest for medium-luminance targets, weaker for high-luminance targets, and weakest for low-luminance targets.
Acknowledgement: This research was supported by Grant #149039D from the Serbian Ministry of Science.

36.430 Impairment of Magnocellular and Parvocellular Visual Processing in Normal Aging: Rehabilitation by Yellow Filters or Placebo Effect?
Quentin Lenoble 1,2 (quentin.lenoble@hotmail.fr), Hélène Amieva 2, Sandrine Delord 1; 1 Laboratory of Psychology EA 4139, University Victor Segalen Bordeaux 2, France, 2 ISPED, Centre Mémoire de Recherches et de Ressources, Inserm U897, University Victor Segalen Bordeaux 2, France
The study aimed at evaluating the psychophysical correlates of the magnocellular and parvocellular visual pathways, and their evolution and rehabilitation with normal aging. Thirteen young (24.2 years) and 36 older (71.4 years) participants were tested with a short version of the psychophysical paradigm of Pokorny and Smith (1997, JOSA) designed to bias processing toward magnocellular or parvocellular processing. Observers had to discriminate the location of the higher-luminance square within a 33-msec four-square array. In the steady-pedestal condition (magnocellular bias), the array was preceded and followed by a pedestal of four identical squares, whereas in the pulsed-pedestal condition (parvocellular bias), the array was presented alone on a gray background. There were three filter conditions: a control (no filter), a placebo (neutral filters), and an experimental condition (yellow filters, CPF 450). Three target luminance discrimination thresholds were collected for each of the 18 experimental conditions (order counterbalanced): pedestal contrast (63%; 70%; 75%) x pedestal condition (pulsed and steady) x filter (no filter, neutral, yellow), using an adaptive staircase procedure. The results showed a greater increase of threshold with pedestal contrast in the pulsed-pedestal relative to the steady-pedestal condition. An interaction between group, pedestal contrast, and pedestal condition was observed: the increased discrimination threshold found for older relative to young participants was stronger in the pulsed-pedestal than in the steady-pedestal condition, especially for high pedestal contrasts. Moreover, there was no significant main effect of filter and no interaction between filter and the other variables. However, specifically in the steady-pedestal condition and for the older group, planned comparisons showed a significant decrease in threshold for neutral or yellow filters relative to the no-filter condition, whatever the pedestal contrast. These results replicate the dissociation between the two low-level visual systems in the young group and demonstrate a magnocellular impairment, and a substantially larger parvocellular impairment, with normal aging. The yellow filters were ineffective, but a placebo effect was found.
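The abstract reports an adaptive staircase without specifying the rule; a minimal sketch of one common choice, a 2-down/1-up staircase converging near 70.7% correct, is given below with a simulated observer standing in for the psychophysics. The starting level, step size, and psychometric function are assumptions for illustration.

    import numpy as np

    def two_down_one_up(observer, start, step, n_trials=80):
        """Generic 2-down/1-up staircase. observer(level) -> True if correct.
        Threshold is estimated from the levels at the late reversals."""
        level, streak, reversals, direction = start, 0, [], 0
        for _ in range(n_trials):
            if observer(level):
                streak += 1
                if streak == 2:                 # two correct -> make it harder
                    streak = 0
                    if direction == +1:
                        reversals.append(level)
                    level -= step
                    direction = -1
            else:                               # one error -> make it easier
                streak = 0
                if direction == -1:
                    reversals.append(level)
                level += step
                direction = +1
        return float(np.mean(reversals[-6:]))

    rng = np.random.default_rng(2)
    pf = lambda c: rng.random() < 1.0 / (1.0 + np.exp(-(c - 10.0) / 2.0))
    print(two_down_one_up(pf, start=20.0, step=1.0))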


36.431 Macular Pigment Reduces Visual Discomfort
Max Snodderly 1 (max.snodderly@mail.utexas.edu), James Stringham 2; 1 Nutritional Sciences, Inst for Neurosci, Ctr for Perceptual Systems, Univ of Texas at Austin, 2 Northrop Grumman Information Technology, San Antonio, Texas
Purpose: To determine the effect of macular pigment (MP) on the threshold of visual discomfort in young subjects. Methods: A Maxwellian-view optical system was used. Six young (

changes in speed (Exp2b). Observers again detected targets more quickly in objects that exhibited biological motion, and response times decreased with greater changes in direction or speed. Thus, biological motion, in and of itself, appears to be capable of capturing attention.
Acknowledgement: Natural Science and Engineering Council of Canada

36.434 The effect of motion onset and motion quality on attentional capture in visual search
Adrian von Muhlenen 1 (a.vonmuhlenen@warwick.ac.uk), Meera Mary Sunny 1; 1 Department of Psychology, University of Warwick
Abrams and Christ (2003, Psychological Science; Christ & Abrams, 2008, Journal of Vision) reported that new motion in a scene (motion onset) captures attention in a bottom-up fashion. This contradicts findings by von Muhlenen et al. (2005, Psychological Science), who reported no capture for motion onset unless the onset represents a temporally unique event. Methodological differences between the two studies make a direct comparison difficult. For example, von Muhlenen et al. looked at slope reductions in the search function to measure attentional capture, whereas Abrams and Christ looked at simple reductions in reaction time (RT). The aim of Experiment 1 was to further explore these differences by employing the same method and design used by Abrams and Christ. However, the results from twelve participants show no RT reduction for motion-onset targets in comparison to static targets, which supports von Muhlenen et al.'s findings. Another difference between the two studies concerns the quality of motion: von Muhlenen et al. used relatively smooth motion (updating the moving stimulus at 60 Hertz), whereas Abrams and Christ used relatively jerky motion (updating the moving stimulus at 15 Hertz). Experiment 2 addressed this difference by systematically varying motion quality across trials from 100, 33, 17, to 8 Hertz. The results show that motion quality plays a crucial role: motion-onset stimuli only capture attention when motion is jerky (8 and 17 Hertz), not when it is smooth (33 and 100 Hertz), thus replicating the findings of both studies. Finally, Experiment 3 shows that simple flicker without motion (100, 33, 17, or 8 Hertz) does not have the same effect on attention. We conclude that it is the motion onset, in conjunction with the continuous stream of transient signals produced by jerky motion, that captures attention.
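The motion-quality manipulation is easy to make concrete: on a fast display, a "jerky" stimulus is one whose position is refreshed at a lower rate, turning the trajectory into a staircase of abrupt jumps. The display rate, speed, and duration below are assumed parameters, not values from the study.

    import numpy as np

    def sampled_positions(duration_s, update_hz, display_hz=100, speed=5.0):
        """Positions drawn on each display frame when the moving stimulus
        is only repositioned at update_hz."""
        n_frames = int(duration_s * display_hz)
        t_frame = np.arange(n_frames) / display_hz
        # Quantize time to the last update: smooth at 100 Hz, jerky at 8-17 Hz.
        t_update = np.floor(t_frame * update_hz) / update_hz
        return speed * t_update      # deg of visual angle along the motion path

    smooth = sampled_positions(1.0, update_hz=100)
    jerky = sampled_positions(1.0, update_hz=8)
    # The jerky trajectory advances in large discrete jumps, i.e., a
    # continuous stream of transients, which is what captured attention.
    print(np.max(np.diff(jerky)), np.max(np.diff(smooth)))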
36.435 Interaction between stimulus-driven orienting and top-down modulation in attentional capture
Hsin-I Liao 1 (f91227005@ntu.edu.tw), Su-Ling Yeh 1; 1 Department of Psychology, National Taiwan University
The issue of whether attentional capture is determined by top-down factors or can be purely stimulus-driven remains unresolved. Proponents of the contingent capture hypothesis argue that only a distractor that matches the target characteristics can capture attention, whereas proponents of the stimulus-driven capture hypothesis argue that attentional capture occurs regardless of distractor-target contingency. We aimed at resolving this discrepancy by finding boundary conditions for the two contrasting hypotheses, and we further propose an interactive model to explain the results. We used a spatial cueing paradigm in which a color target was preceded by an uninformative cue that either matched (a color cue) or did not match (an onset cue) the target, to test whether and how the cue captures attention. We added a no-cue condition beforehand to make the very first with-cue trial unexpected to the participants, and analyzed the response to the first with-cue trial to contrast it with the average data. Results showed that the onset cue captured attention to its location when it appeared unexpectedly, but this effect disappeared over repeated trials. In contrast, the color cue did not capture attention when it appeared unexpectedly but did so over repeated trials. We thus demonstrate that, on the one hand, the contingent capture hypothesis is supported under the condition that the same cue is presented repeatedly, where top-down modulation determines the capture effect. On the other hand, the stimulus-driven hypothesis holds under the condition that the cue is presented for the first time, where the onset cue captures attention even when it does not match the target-defining feature. The proposed interactive model, in which stimulus-driven orienting operates early in the time course but is later modulated by top-down control, can adequately explain the results by distinguishing attentional capture through stimulus-driven orienting from that through top-down modulation.
Acknowledgement: This study is supported by 98-2410-H-002-023-MY3 and 96-2413-H-002-009-MY3.

36.436 Overt and covert capture of attention by magnocellular and parvocellular singletons
Carly J. Leonard 1 (cjleonard@ucdavis.edu), Steven J. Luck 1,2; 1 Center for Mind and Brain, University of California, Davis, 2 Department of Psychology, University of California, Davis
The generation of rapid saccades has been tied to magnocellular processing, due to direct M-pathway inputs to the superior colliculus as well as the predominance of magnocellular information in the dorsal stream. However, attention can also be guided by information encoded in the slower, but detail-rich, parvocellular processing stream. Although it is clear that input from both pathways can influence behavior, less is known about how salient task-irrelevant stimuli encoded by these two systems may influence covert and overt attentional processing. We used manual RT and oculomotor activity to reveal differences in interference caused by a singleton that predominantly activates the magnocellular system and one that isolates the parvocellular system. Participants performed the irrelevant singleton task (Theeuwes, 1991), searching for a unique shape while attempting to ignore an irrelevant but highly salient singleton distractor. For a third of the trials, the singleton distractor was an isoluminant object of a different color (parvo-singleton); for another third, the singleton distractor differed in luminance from the other objects (magno-singleton); for the remaining third, no singleton distractor was present. We matched the salience of the two singleton distractor types such that there was an equivalent attentional capture effect on manual RT. Despite the equivalent manual RT effects, the magno-singleton distractor was more likely to attract an eye movement than the parvo-singleton distractor. When the first eye movement did go directly to the target, its latency was slowed in the presence of both magno- and parvo-singletons, indicating covert attentional competition.
These results provide a more precise understanding of how underlying competitive attentional processes and intermediary saccadic behavior result in the explicit distraction effect found in manual RT.
Acknowledgement: This research was made possible by grants R01MH076226 and R01MH065034 from the National Institute of Mental Health.

36.437 Attentional capture by objecthood is unaffected by salience in other dimensions
Benjamin Tamber-Rosenau 1 (brosenau@jhu.edu), Jeff Moher 1; 1 Department of Psychological & Brain Sciences, Johns Hopkins University
Recently, Kimchi and colleagues (Kimchi, Yeshurun, & Cohen-Savransky, PB&R, 2007; Kimchi, Yeshurun, & Sha'shoua, Psychonomics, 2009) presented data demonstrating automatic visuospatial attentional capture by perceptually organized "objects" when a display of nine uniform-sized rotated Ls contained four items arranged to form the corners of a diamond. Specifically, subjects were faster to respond to a cue presented inside the object than to a cue presented elsewhere in the display; response times were intermediate when the display elements did not form an object. However, when the display contained an object, the object enclosed a quarter of the display, making it a size singleton. Previous experiments have shown that a size singleton can involuntarily capture attention. By varying the size of the non-object-defining elements in the display, we demonstrate that the status of the object as a size singleton cannot account for the attentional capture found by Kimchi and colleagues. When we made three of the display elements larger than the remaining elements, response times were slowed equally across all object conditions (cue in object, cue outside object, no object), yielding a main effect of size variation. Additionally, we replicated the effect found by Kimchi and colleagues, in which the perceptually organized "object" captured attention. Critically, these two effects did not interact: the presence of additional large elements had no effect on attentional capture by the perceptual object. In further experiments, we explore the role of shape in determining attentional capture. Our results suggest that elements arranged to form an object do capture attention, even when there is no incentive to allocate attention to the object compared to other parts of the display.
Acknowledgement: NIH grants R01 DA13165 and T32 EY07143
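The logic of the size-singleton control amounts to checking for an interaction in a 2 x 3 design: if the object's size-singleton status mattered, the capture effect should shrink or grow when large elements are added. A sketch with placeholder cell means (not the reported data):

    import numpy as np

    # Rows: size variation (absent / three larger elements present).
    # Columns: cue in object / cue outside object / no object.
    rt = np.array([[512., 548., 530.],
                   [530., 566., 548.]])

    size_main_effect = rt[1].mean() - rt[0].mean()        # additive slowing
    capture_effect = rt[:, 1] - rt[:, 0]                  # outside minus inside
    interaction = capture_effect[1] - capture_effect[0]   # ~0 if independent
    print(size_main_effect, capture_effect, interaction)

With these placeholder values the capture effect is identical in both rows, which is the pattern of independent effects the abstract reports.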


36.438 Contingent attentional capture influences performance not only by depleting limited target processing resources, but also by changing attentional control settings
Katherine S. Moore 1 (mooreks@umich.edu), Elise F. Darling 1, Jillian B. Steinberg 1, Erika A. Pinsker 1, Daniel H. Weissman 1; 1 Department of Psychology, University of Michigan
In some theories of visual search, detecting a potential target leads to a brief attentional enhancement of that item's representation in working memory, which likely includes information about the top-down attentional set that defines the item as a target. Given that the contents of working memory often guide attentional systems, we investigated whether such enhancements temporarily facilitate the selection of subsequent stimuli whose features match the same attentional set. Data from six experiments, in which multiple attentional sets for color guided target selection in a central RSVP stream, supported this hypothesis. In Experiment 1, a target-colored peripheral distractor produced 68% less capture when its color matched the same attentional set as that of an immediately upcoming target than when its color matched a different attentional set. In Experiment 2, we ruled out bottom-up perceptual priming of the target's color as an alternative account of this enhancement effect. In Experiment 3, the enhancement effect was reversed when a target-colored distractor was presented after (versus before) a target, thereby revealing that a distractor is most disruptive when its color matches a currently enhanced attentional set. In Experiments 4 and 5, a target-colored central distractor not only facilitated the selection of an upcoming target whose color matched the same attentional set, but also prevented an intervening target-colored peripheral distractor from capturing attention, consistent with models in which only a single working memory representation can be enhanced at any given time. In Experiment 6, the enhancement effect was shown to depend critically on conscious perception of the leading target-colored item, which likely indexed whether that item's representation had been enhanced in working memory. Together, these findings indicate that contingent attentional capture influences performance not only by depleting limited target processing resources, but also by changing attentional control settings.
Acknowledgement: National Science Foundation Graduate Fellowship, Rackham Graduate Research Award, and Pillsbury Award to K.S.M.

36.439 Attention capture by an invisible flicker not in the middle of the gamma range
Ming Zhang 1 (zhangy228@nenu.edu.cn), Yang Zhang 1,2, Sheng He 2; 1 Department of Psychology, Northeast Normal University, 5268 Renming Street, Changchun, Jilin 130024, China, 2 Department of Psychology, University of Minnesota, 75 E. River Rd., Minneapolis, Minnesota 55455, USA
It was recently reported that a subliminal flicker could trigger attentional selection at the target location (Bauer et al., PNAS 2009). The authors specifically attributed this effect to the middle of the gamma range (50 Hz), since they failed to find such an effect with flickers slower than 35 Hz. However, it is possible that rendering a flicker signal subliminal by lowering its contrast resulted in a more severe loss of effective neural contrast for a 30 Hz flicker than for a 50 Hz flicker. To test this possibility, we used flicker contrast reduction combined with spatial crowding to render a flicker signal subliminal. This approach allowed a subliminal flicker to maintain substantial contrast. Specifically, a 30 Hz flickering Gabor patch and a non-flickering control were presented one to the left and one to the right of fixation, each surrounded by four static Gabor patches. Subjects performed at chance level in a 2AFC task detecting which side had the flickering signal.
However, in a modified Posner cueing paradigm, subjects responded faster to a probe target presented at the 30 Hz subliminal flicker location than at the control location. In a follow-up experiment, when the probe target appeared at the non-flickering control location in 80% of the trials and subjects were instructed to use this information to direct their attention, the side with the subliminal flicker still showed a benefit. Together these results show that a subliminal flicker can capture spatial attention and that the flicker frequency does not need to be in the middle of the gamma range.
Acknowledgement: This work was supported by a China National Science Foundation [grant number 30770717] research grant awarded to Ming Zhang and a China National Science Foundation [30700229] research grant awarded to Sui Jie

36.440 Relative Contributions of SPL and TPJ to Object-based Attentional Capture
Sarah Shomstein 1 (shom@gwu.edu), Sarah Mayer-Brown 1, Erik Wing 2, Silas Larsen 1; 1 Department of Psychology, George Washington University, 2 Department of Psychology, Duke University
The contribution of object-based representations to attentional guidance has been studied almost exclusively within the framework of top-down attentional control, and little is known regarding their contribution to bottom-up, or stimulus-driven, attentional control. In a recent behavioral investigation, we demonstrated that attentional capture is in fact object-based, and that the extent to which objects guide attentional capture is modulated by the involvement of top-down attentional orienting. In the present set of two fMRI experiments, we investigated the neural mechanism of object-based attentional capture, namely the involvement of SPL and TPJ in contingent capture (with top-down involvement) and pure capture (bottom-up involvement). Participants viewed a central rapid serial visual presentation (RSVP) stream in which a target letter was either defined by a specific color (Experiment 1, contingent capture) or could be one of four random colors (Experiment 2, singleton capture). The RSVP stream was superimposed onto a set of three objects (a cross-like configuration). On critical trials, a task-irrelevant color singleton and three neutral distractors appeared in the periphery. On half of the trials the colored singleton appeared on the same object as the central target, and on a different object on the other half. We observed capture-related activations in the SPL/precuneus and TPJ regions for contingent capture, and TPJ activation exclusively for singleton capture. Additionally, these capture-related activations were modulated by whether the singleton appeared on the same or a different object (i.e., an object-based effect). Furthermore, with the use of retinotopic mapping, the effects of such object-related attentional capture were examined in early visual cortex. These results suggest that object-based representations guide bottom-up as well as top-down attentional orienting, and they provide further constraints on the mechanisms of attentional guidance and of object-based selection.

36.441 Covert attention can be captured by an illusory Focus of Expansion
Michael von Grünau 1 (vgrunau@alcor.concordia.ca), Tomas Matthews 1, Mikael Cavallet 1; 1 Department of Psychology, Concordia University
Purpose: Covert attention can be captured by sudden stimulus onsets and other salient events. It has recently been shown that the focus of expansion (FOE) of a radial flowfield can also capture covert attention (Fukuchi et al., 2009).
We wondered whether an illusory FOE displaced by a linear flowfield (optic flow illusion; Duffy & Wurtz, 1993) could also capture attention. Methods: We measured the illusory FOE displacement with a 2AFC method, and then presented targets at the actual and illusory FOE and at corresponding locations in the other hemifield, with and without the presence of the linear flowfield. This was done for each observer according to the individual illusion strength. We measured detection response times for targets appearing at different SOAs between flowfield and target, including 20% catch trials. Results: A majority of participants showed a pattern of responses suggesting that the illusory FOE had captured attention. Some observers showed a different pattern, indicating that the actual FOE had continued to capture attention, even in conditions where they had experienced the illusion. The effectiveness of the illusory FOE for capturing attention was not related to the size of the perceived illusion. Conclusion: Covert attention can be captured by both actual and illusory FOEs. This implies that smooth pursuit eye movements or whole-world motions can take part in capturing attention. Thus covert attention can exert its effects at different levels. Fukuchi M. et al. (2009). Journal of Vision, 9(8), 137a; Duffy C. & Wurtz R. (1993). Vision Research, 33(11), 1481.
Acknowledgement: NSERC, FQRNT

36.442 The interaction between memorized objects and abrupt onsets in oculomotor capture: New insights into the architecture of oculomotor programming
Matthew S. Peterson 1 (mpeters2@gmu.edu), Jason Wong 1,2; 1 Department of Psychology, George Mason University, 2 Naval Undersea Warfare Center
Recent evidence has been found for a top-down source of task-irrelevant oculomotor capture, in which an event draws the eyes away from a primary task. In these cases, an object memorized for a non-search task can capture the eyes when it appears during search (Sato, Heinke, Humphreys & Blanco, 2005; Olivers, Meijer & Theeuwes, 2006). Here, an experiment was conducted to investigate the interaction between memory-driven capture, goal-driven search, and capture by abrupt onsets. The use of eye tracking allowed us to determine the rate of capture by the different types of stimuli and to explore the temporal dynamics of the various signals driving oculomotor guidance. This is important because we were able to distinguish between potential sources of capture and build a theoretical model of how visual working memory, top-down goals, and abrupt onsets can drive oculomotor orienting.


The results of our experiments show that memorized objects capture the eyes at a higher rate than abrupt onsets when both are present in the search display and in competition for the initial saccade. Additionally, when the abrupt onset and memorized color are the same object, this combination leads to even greater oculomotor capture away from the target. However, the degree of capture is less than additive, suggesting that these are two independent sources of guidance signals. More importantly, saccade latencies differed between the three potential saccade targets, with saccades to the search target yielding the longest latencies, and saccades to memorized color singletons yielding latencies that were shorter than saccades to abrupt onsets. Results will be discussed in terms of a neural-computational model.

36.443 Attention to faces: Effects of face inversion
Bettina Olk 1 (b.olk@jacobs-university.de), Andrea M. Garay-Vado 2; 1 School of Humanities and Social Sciences, Jacobs University Bremen, Germany, 2 Ludwig-Maximilians-University Munich, Germany
Goal-directed behavior requires focusing on important target stimuli and the prevention of attention to irrelevant distracters. According to the load theory of attention (Lavie, 1995, 2000), a factor that modulates whether distracters are attended is the perceptual load of the relevant task. Following the theory, perception of distracters can be prevented when perceptual load is high. Lavie, Ro, and Russell (2003) showed that face distracters are an exception, as they attract attention and are hard to ignore even under high load. Further research suggests that a face advantage may be linked to the upright presentation of faces; however, there is conflicting evidence regarding the role of the orientation of a face in a potential face advantage. We thus investigated the link between face orientation, perceptual load, and attention in three experiments using a sex classification task. Experiment 1 tested whether upright and inverted distracter faces attract attention reflexively under low and high perceptual load conditions to a comparable degree in a flanker paradigm, and showed that upright but not inverted faces attracted attention, suggesting that inverted faces were easier to ignore. Experiment 2 showed that inverted distracter faces can nevertheless lead to congruency effects, provided that attention is directed volitionally to the peripheral distracters. Experiment 3 showed, using a cuing paradigm, that although participants are slower to discriminate inverted faces, the allocation of attention facilitates face processing and sex discrimination for upright and inverted faces to a similar extent. Our findings suggest a link between mechanisms of face processing and their attention-capturing power.

36.444 Hitting the brakes: Is attention capture reduced with slower responding?
Andrew B. Leber 1 (andrew.leber@unh.edu), Jennifer R. Lechak 1, Sarah M. Tower-Richardi 1; 1 Department of Psychology, University of New Hampshire
How do people resist distraction by salient, irrelevant stimuli? By one account, resistance to distraction carries a concomitant slowing in overall RT, suggesting that observers delay visual processing to avoid distraction. By a competing account, distraction is best avoided during periods of high arousal, when overall RT is fastest. These accounts have been tested via analysis of cumulative RT distributions, in which distraction is measured as a function of overall RT.
Unfortunately, the results of such analyses have lacked consensus. Here, we offer a resolution of the conflicting results by first highlighting a critical weakness of the cumulative RT analysis and then correcting for it. Specifically, while RT on a given trial should reflect the observer's control state, incidental stimulus aspects can also influence RT. For instance, if the distractor appears in the same location on consecutive trials, RT will be faster and interference smaller (Kumada & Humphreys, 2001). Effects like this distort the RT distributions in a way that is unrelated to the observer's internal control state. To address such confounds, we performed multiple regression to partial out RT variance attributable to an exhaustive array of incidental stimulus aspects, thus generating "corrected" RT distributions. We then carried out the cumulative RT analysis on both the uncorrected and corrected RT distributions. For the uncorrected data, distraction was smallest at the fastest RTs and gradually increased as RT slowed. However, the corrected data revealed a dramatic reversal, in which distraction was greatest at the fastest RTs. These results offer a parsimonious resolution to the debate on how distraction is avoided. In particular, the results support the "slowing" account and argue against the "high arousal" account.
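The correction step described here is ordinary least squares followed by quantile binning of the residual-adjusted RTs. A sketch with simulated data and assumed predictor coding, none of it from the study:

    import numpy as np

    rng = np.random.default_rng(3)
    n = 4000
    distractor = rng.integers(0, 2, n).astype(bool)     # distractor present?
    X = rng.integers(0, 2, (n, 3)).astype(float)        # incidental aspects,
                                                        # e.g., location repetition
    rt = 600 + 40 * distractor - 25 * X[:, 0] + rng.normal(0, 80, n)

    # Partial the incidental aspects out of RT, keeping everything else.
    design = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(design, rt, rcond=None)
    corrected = rt - X @ beta[1:]

    # Cumulative-RT analysis on the corrected distribution:
    # distraction effect per RT quartile.
    quartile = np.digitize(corrected, np.quantile(corrected, [.25, .5, .75]))
    for q in range(4):
        sel = quartile == q
        effect = corrected[sel & distractor].mean() - corrected[sel & ~distractor].mean()
        print(f"quartile {q}: distraction {effect:.1f} ms")

A fuller analysis would bin within condition and subject, but this captures the two-step logic: regress out nuisance variance, then examine the distraction effect across the corrected RT distribution.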
36.445 Commonality between attentional capture and attentional blink
Jun Kawahara 1 (jun.kawahara@aist.go.jp), Ken Kihara 1; 1 National Institute of Advanced Industrial Science and Technology
Visual search for a unique target is impaired when a task-irrelevant salient distractor is simultaneously present. This phenomenon, known as attentional capture, is said to occur because attention is diverted to the distractor in a stimulus-driven way before it reaches the target (Theeuwes, 1992). However, another view holds that attention can be directed selectively to a task-relevant feature under an appropriate attentional set (Folk et al., 2002). Recently, Ghorashi et al. (2003) suggested that temporal attentional capture (Folk et al., 2002) represents virtually the same impairment as that observed in the attentional blink. The question is whether these phenomena emerge from a common underlying attentional mechanism. The present study examined this question using correlation studies. If these phenomena share a common foundation, the magnitudes of these deficits should show within-subject correlations. In Experiment 1, 135 participants performed three tasks in a counterbalanced order. The tasks for spatial capture, temporal capture, and the attentional blink were identical to those used by Theeuwes (1992), Folk et al. (2002), and Chun and Potter (1995), respectively. A significant attentional deficit was observed in each task. However, no significant correlation was found across these tasks, suggesting that the deficits reflect different aspects of selective attention. In Experiment 2 (N=95), identical results were obtained using the same procedure as in Experiment 1, except that another attentional blink task, requiring spatial switching between the two targets, was included. Strong correlations emerged only between the two attentional blink tasks (with/without spatial switch). The present results suggest that the attentional capture revealed by the two types of procedures (Theeuwes' and Folk's) reflects different aspects of attention. The results also indicate that the similarity between attentional capture and attentional blink is superficial.

36.446 Advance Knowledge of Potential Distractors Influences Competition between Color Salience and Perceptual Load
Adam Biggs 1 (abiggs2@nd.edu), Brad Gibson 1; 1 University of Notre Dame
Visual salience and perceptual load may both influence the efficiency of visual selection. Previous evidence reported by Gibson and Bryant (2008) suggested that high perceptual load can dominate color salience in a distractor interference paradigm where observers attempted to ignore a salient color singleton under different levels of perceptual load. More recently, Biggs and Gibson (in press) extended this research by investigating whether full vs. no knowledge of the color singleton and/or full vs. no knowledge of perceptual load would modulate the relative operation of these two mechanisms. Consistent with previous findings, Biggs and Gibson found that high perceptual load dominated color salience. However, this result only occurred when advance knowledge of load was not available and high-load displays were preceded by other high-load displays. More importantly, Biggs and Gibson also found that color salience dominated high perceptual load in other contexts where participants were provided full knowledge of color conditions and display load. This latter finding was unexpected because distractor interference increased as the amount of knowledge provided to the observer increased. The present experiments were designed to further investigate how different forms of knowledge may influence this paradigm; namely, full vs. no knowledge of distractor presence. In the full knowledge condition, the presence or absence of the distractor was fixed within blocks, whereas in the no knowledge condition, the presence or absence of the distractor was mixed. The results of three experiments suggested that color salience dominated high perceptual load when the observer was able to incorporate this knowledge into a search strategy. Altogether, these findings suggest that the competition between color salience and perceptual load can vary based upon the knowledge provided to the observer and how they choose to integrate that knowledge into search. Implications for theories of top-down control will be discussed.

36.447 Non-contingent attention capture by an onset
Fook Chua 1 (fkchua@nus.edu.sg); 1 Department of Psychology, National University of Singapore
This set of experiments revisits the issue of whether all involuntary orienting is contingent on top-down goals. Specifically, the question was whether an abrupt onset captures attention in a non-contingent fashion. A variation of the Folk, Remington, and Johnston (1992) spatial pre-cueing paradigm was used.


Observers searched a letter array for one of two target letters. Attention capture was assessed by the difference in reaction times between valid and invalid trials (target location correctly and incorrectly cued, respectively). To rule out the contingency explanation, one needs to ensure that (a) all features associated with the target, and (b) all visual features accompanying the search array's appearance, are excluded from the putative capture stimulus. We first established that when the target location was color-defined, a color cue captured attention. But crucially, an onset cue also captured attention. In separate experiments, we ruled out explanations that (a) were based on the contingency between the onset cue and the transients accompanying the search array; (b) claimed that observers adopted a singleton-search strategy rather than monitoring specifically for the target-defining feature; and (c) assumed that observers were monitoring motion transients rather than specifically onset transients. We showed that an onset captured attention even when the search array was not presented as an abrupt onset. We also showed that a color singleton failed to capture attention when the target's appearance could be construed as a singleton, suggesting that singleton detection could not have been the observers' strategy. Finally, we showed that offset transients failed to capture attention when the target could be localized as a color singleton, implying that observers were not detecting transients per se. The evidence across these experiments supports the view that an onset captures attention automatically, even when top-down control settings are not tuned specifically to monitoring onset transients.
Acknowledgement: NUS Grant R-581-000-078-750

36.448 Attentional capture by masked colour stimuli
Ulrich Ansorge 1,2 (ulrich.ansorge@univie.ac.at); 1 University of Vienna, Austria, 2 University of Osnabrück, Germany
Computational theories of stimulus-driven capture of attention calculate a colour contrast's potential to attract attention as an objective local colour difference within the image. Is that warranted? We tested whether subjective or phenomenal salience of colour contrasts is a crucial prerequisite for stimulus-driven attentional capture by colour contrasts: colour singletons were metacontrast-masked and, thus, invisible. Under these conditions, colour singletons failed to capture attention in a stimulus-driven way. This was reflected in behavioural responses as well as in attention-related event-related potentials (ERPs). In addition, diverse control conditions corroborated the attention-grabbing power of colour singletons, ruling out the possibility that the method was insensitive to the detection of attentional capture by masked colour singletons.
Acknowledgement: German Research Council

36.449 Invisible causal capture in the tunnel effect
Gi Yeul Bae 1 (freebird71@gmail.com), Jonathan Flombaum 1; 1 Johns Hopkins University, Department of Psychological and Brain Sciences
Beyond identifying individual objects in the world, the visual system must also characterize the relationships between objects, for instance when objects occlude one another, or when they cause one another to move. Here we explored the relationship between perceived causality and occlusion. Can causality be perceived behind an occluder?
Participants watched a series of events and simply had to judge whether a centrally presented event involved a single object passing behind an occluder, or one object causally launching another. With no additional context, the centrally presented event was always judged as a pass, even when the occluded and disoccluding objects were different colors, an illusion known as the 'tunnel effect' that results from spatiotemporal continuity. However, when a nearby context event involved an unambiguous causal launch synchronized with the occlusion event, participants perceived a causal launch behind the occluder. In other words, participants experienced invisible causal capture, perceiving a causal relationship in an occluded location. Crucially, when the context event involved two distinct objects, but no causal relationship between them, no causal launch was perceived behind the occluder. Thus invisible causal capture did not depend merely on the suggestion that two objects might exist behind the occluder, but instead on the causal nature of the context. Perhaps most surprisingly, invisible causal capture was perceived even when the two disks in the central occlusion event shared the same color. Thus spatiotemporal synchrony trumped featural similarity in the interpretation of the hidden event. Related context events illustrate that invisible causal capture depends upon grouping by common motion. Taken together, these results emphasize the inherent ambiguity that the visual system faces while inferring the relationships between objects.

36.450 The anatomy of superior parietal cortex links everyday distractibility with attentional capture
Mia Dong 1 (mia.y.dong@gmail.com), Ryota Kanai 2, Bahador Bahrami 2,3, Geraint Rees 2,3; 1 Department of Psychology, University College London, 2 Institute of Cognitive Neuroscience, University College London, 3 Wellcome Trust Centre for Neuroimaging, University College London
Attention can be voluntarily directed by top-down signals to stimuli of current interest and automatically captured by bottom-up signals from salient stimuli. The interactions between these two types of attentional orienting have been studied using attentional capture (AC) paradigms, where the presence of a salient task-irrelevant stimulus interferes with top-down attentional selection. In our first experiment, we investigated whether individual differences in self-reported distractibility in daily life reflect individual differences in brain structure, using a voxel-based morphometry (VBM) analysis. Participants rated themselves for distractibility using the Cognitive Failures Questionnaire. We found that in highly distractible individuals, grey matter density was higher in the left superior parietal cortex (SPL), a region that is involved in AC. The overlap suggested a neural mechanism common to AC in the laboratory and distractibility in everyday life. At least two alternative roles could be attributed to SPL: higher SPL density may exert greater control in more distractible individuals, maintaining or re-engaging attention on task-relevant stimuli and suppressing saliency-driven distraction (compensation hypothesis). Alternatively, higher SPL density may be responsible for greater distractibility itself, by increasing sensitivity to automatically orienting salient stimuli (orienting hypothesis). To distinguish these possibilities, we applied repetitive transcranial magnetic stimulation (rTMS) over the left SPL and measured the amount of AC before and after TMS.
The compensation hypothesis predicted that TMS should increase AC, whereas the orienting hypothesis predicted the opposite. The results showed that, relative to the control stimulation site, AC increased following TMS over the left SPL, supporting the compensation hypothesis. We conclude that grey matter density in left SPL plays a crucial role in maintaining attention on relevant stimuli and avoiding distraction. In highly distractible individuals, left SPL seems to have undergone structural changes to arm them with the necessary top-down control to function in daily life.
Acknowledgement: Wellcome Trust

Attention: Brain and behavior I
Orchid Ballroom, Boards 451–459
Sunday, May 9, 2:45 - 6:45 pm

36.451 MEG activity in visual areas of the human brain during target selection and sustained attention
Julio Martinez-Trujillo 1 (julio.martinez@mcgill.ca), Therese Lennert 1, Roberto Cipriani 2, Pierre Jolicoeur 3, Douglas Cheyne 4; 1 Department of Physiology, Faculty of Medicine, McGill University, Montreal, Canada, 2 Montreal Neurological Institute, McGill University, Montreal, Canada, 3 Department of Psychology, University of Montreal, Montreal, Canada, 4 Hospital for Sick Children Research Institute, University of Toronto, Toronto, Canada
We combined MEG and magnetic resonance imaging (MRI) to examine evoked activity in visual areas during a task that involves both target selection and sustained attention. During task trials, 9 human subjects were presented with two white moving random dot patterns (RDPs, the target and the distractor), left and right of a central fixation spot on a dark background. After a brief delay, each RDP changed color to red, blue, or green. Subjects were required to select the target using a color rank selection rule (red > blue > green), sustain attention to it, and identify a transient change in either its direction (clockwise/counterclockwise) or color (pink/grey). All possible stimulus configurations were presented randomly. We found that following color cue onset, early visual areas along the cuneus and lingual gyrus (V1 and V2) were activated bilaterally starting as early as 120 ms after cue onset. Activation in other areas such as V3, V3A, and V4 was significantly stronger contralateral to the target stimulus, peaking at ~170 ms from color cue onset. These data demonstrate that target selection becomes evident in early visual cortex with a latency of about 170 ms following cue onset. During the sustained attention period, changes in the direction of the RDPs evoked peak activation in contralateral area MT (Talairach: -40/-64/16; 39/-64/17), while changes in color evoked activity in contralateral areas V2/V3 (-21/-74/11; 27/-67/5).


These activations were stronger (~60%) for targets than for distracters, becoming most pronounced at ~180 ms from change onset. Our results reveal that MEG activity in early visual areas of the human brain reflects target selection, as well as the effects of sustaining attention on that stimulus. This may be the result of interactions between feed-forward and feedback signals originating in different areas of the visual processing hierarchy.
Acknowledgement: EJLB, CERNEC, CIHR, NSERC

36.452 Bilateral Visual Orienting with Adults Using a Modified Posner Paradigm and a Candidate Gene Study
Rebecca Lundwall 1 (beckylundwall@rice.edu), James Dannemiller 1; 1 Department of Psychology, School of Social Sciences, Rice University
Visual orienting represents "the aligning of attention" with a stimulus (Posner, 1980). We examined the associations between multiple genetic markers (DBH, DRD4, DAT1, APOE e4, and COMT) and measures from a cued-orienting task. In previous research using this paradigm, costs have been combined with benefits into an overall validity score, and almost no genetic associations with visual orienting have been found (Fan, Wu, Fosella & Posner, 2001). It could be premature, however, to claim that genes play no role in explaining individual differences in orienting. If costs and benefits are determined by (even partially) distinct neural mechanisms, then they should be analyzed separately. This is consistent with Posner's formulation of orienting as a three-step process of disengaging, moving, and then re-engaging attention at a new location. Disengaging attention (which is necessary for invalid but not for valid cues) could have separate genetic influences.
We used a modified cued-orienting paradigm (Posner, 1980) that added bilateral cues with unequal luminances (Kean & Lambert, 2003). Subjects responded with a left or right key press to the location of a small white square (the target), which appeared 150 msec after the brief presentation of a cue. The cue was either bright or dim. Subjects were told that the target had a 50% probability of appearing near the cue (for single cues) or near the brighter cue (for bilateral, asymmetric cues). Each individual's average response time (RT) to neutral cues served as a baseline for determining the costs and benefits of invalid and valid cues, respectively.
In our sample of 161 individuals, the correlation between costs and benefits was low, r = .25. Each of the genetic markers showed significant association with at least one attentional measure, especially with invalid dim cues. The majority of the genes showing associations with orienting code for dopamine.
Acknowledgement: Rice Graduate Student Research Fellowship to RAL and Lynette S. Autrey Research grant to JLD
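Because the argument hinges on computing costs and benefits separately rather than collapsing them into one validity score, here is the arithmetic in Python; the per-subject data layout is an assumption for illustration.

    import numpy as np

    def costs_and_benefits(rt_valid, rt_neutral, rt_invalid):
        """Separate orienting measures instead of a single validity score."""
        benefit = np.mean(rt_neutral) - np.mean(rt_valid)    # gain from a correct cue
        cost = np.mean(rt_invalid) - np.mean(rt_neutral)     # price of disengaging
        return benefit, cost

    # With per-subject arrays of the two measures, the key check is their
    # correlation across subjects, reported here as r = .25:
    # r = np.corrcoef(benefits_by_subject, costs_by_subject)[0, 1]

A low correlation is what licenses analyzing the two measures (and their genetic associations) separately.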
36.453 Neural signatures of local and global biases induced by automatic versus controlled attention
Alexandra List 1 (a-list@northwestern.edu), Aleksandra Sherman 1, Anastasia V. Flevaris 2,3, Marcia Grabowecky 1, Satoru Suzuki 1; 1 Department of Psychology, Northwestern University, 2 Department of Psychology, University of California, Berkeley, 3 Medical Research Service, Veterans Affairs, Martinez
Two mechanisms have been identified for orienting to local versus global visual information. One mechanism directs attention to a hierarchical level via the persistence of attention to the most recently attended level, i.e., an automatic priming effect. A second mechanism is via voluntary effort, i.e., a controlled shift of attention. Both mechanisms, whether automatic or controlled, bias individuals to attend to either local or global information. In the current experiment, we tested whether these behavioral biases rely on similar neural dynamics. Participants viewed hierarchical stimuli and identified one of two target letters while EEG was recorded from 64 scalp electrodes. In one session, participants identified targets presented equiprobably at either the local or global level. We expected priming to the most recently attended level and confirmed this behaviorally. In a second session, 100%-predictive local or global cues preceded each hierarchical stimulus. We expected participants to shift attention to the cued hierarchical level and also confirmed this behaviorally. Distinct EEG activity patterns emerged depending on whether a bias was generated automatically or from a cue. Specifically, we compared induced oscillatory EEG activity in the ~2-second blank interval following identification of a local or global target (i.e., during a primed state) or following a local or global cue (i.e., during a controlled state). Globally-primed states showed enhanced gamma (high frequencies: 30-50 Hz) oscillations over the right hemisphere compared to locally-primed states, and locally-primed states showed enhanced bilateral posterior alpha and beta (low frequencies: 6-30 Hz) oscillations compared to globally-primed states. In contrast, globally-cued states showed enhanced posterior alpha-frequency oscillations compared to locally-cued states. These results reveal that behavioral biases for directing attention to local or global hierarchical levels rely on distinct neural oscillatory states, depending on whether the bias is driven by automatic or controlled attention.
Acknowledgement: NSF BCS 0643191, NIH R01 EY018197 & -02S1

36.454 Event-related potential evidence for a dual-locus model of global/local processing
Kirsten Dalrymple 1 (kdalrymple@psych.ubc.ca), Alan Kingstone 1, Todd Handy 1; 1 Department of Psychology, University of British Columbia
We investigated the perceptual time course of global/local processing using event-related potentials (ERPs). Subjects discriminated the global or local level of hierarchical letters of different sizes and densities. Subjects were faster to discriminate the local level of large/sparse letters, and the global level of small/dense letters. This was mirrored in early ERP components: the N1/N2 had smaller peak amplitudes when subjects made discriminations at the level that took precedence. Only global discriminations for large/sparse letters led to amplitude enhancement of the later P3 component, suggesting that additional attention-demanding processes are involved in discriminating the global level of these stimuli.
Our findings suggest a dual-locus time course for global/local processing: 1) level precedence occurs early in visual processing; 2) extra processing is required at a later stage, but only for global discriminations of large, sparse stimuli, which may require additional attentional resources for active grouping.
Acknowledgement: NSERC, SSHRC, CIHR, MSFHR

36.455 Finding a salient stimulus: Contributions of monkey prefrontal and posterior parietal cortex in a bottom-up visual attention task
Fumi Katsuki1 (fkatsuki@wfubmc.edu), Christos Constantinidis1; 1Department of Neurobiology and Anatomy, Wake Forest University School of Medicine
The dorsolateral prefrontal (PFC) and posterior parietal cortex (PPC) are known to represent visuospatial information and to be activated by tasks involving the subjects' attention processes. Recent reports have suggested that salient stimuli are encoded first by PPC during bottom-up attention; however, previous experiments have not used tasks driven entirely by bottom-up signals. We developed a behavioral task that orients attention based purely on bottom-up factors and tested the hypothesis that responses to the salient stimuli emerge earlier in PPC than in PFC. Electrophysiological recordings were made in area 46 of PFC and area 7a of PPC, which are known to be strongly interconnected. A stimulus array consisting of one target stimulus differing in color from 8 distractor stimuli was presented to monkeys, followed by a sequence of single stimuli separated by delay periods. We trained animals to identify the salient stimulus on the screen (color and location varied randomly from trial to trial) and to release a lever when another stimulus appeared at the same location. Analysis was conducted on 134 PFC neurons and 71 PPC neurons with significant responses to visual stimuli. We found that the average visual response latency to stimulus arrays was later for PFC neurons (70 ms after stimulus onset) than PPC neurons (50 ms) in our experiment. The average time of target discrimination, however, was earlier for PFC neurons (120 ms) than PPC neurons (160 ms). The results indicate that salient stimuli are represented first in the activity of prefrontal rather than parietal neurons, although initial latency to the stimulus presentation is shorter for parietal neurons. These findings suggest that prefrontal cortex has a previously unappreciated involvement in the processing of bottom-up factors and plays a role in the guidance of attention to salient stimuli.
Acknowledgement: National Institutes of Health grant EY16773

36.456 The Effect of Spatial Attention on Pupil Dynamics
Howard Hock1 (hockhs@fau.edu), Lori Daniels1, David Nichols2; 1Department of Psychology, Florida Atlantic University, 2Department of Psychology, Roanoke College
Although it is well known that the pupil responds dynamically to changes in ambient light levels, we show for the first time that the pupil also responds dynamically to changes in spatially distributed attention. Using a variety of exogenous and endogenous orienting tasks, subjects alternated between focusing their attention on a central stimulus and spreading their attention over a larger area. Fourier analysis of the fluctuating pupil diameter indicated that: 1) pupil diameter changed at the rate of attention variation, dilating with broadly spread attention and contracting with narrowly focused attention, and 2) pupillary differences required changes in attentional spread; there were no differences in pupil diameter between sustained broad and sustained focused attention. Given that broadly spread attention increases the relative activation of large receptive fields and narrowly focused attention increases the relative activation of small receptive fields (Balz & Hock, 1997), the current results indicate that changes in attentional spread can be mediated by changes in pupil diameter. Attention is narrowed in order to extract detailed, high spatial frequency information from a stimulus. This information remains available in the retinal image when attention is narrowly focused because the pupil is constricted, minimizing spherical aberration (blur). Attention is broadened in order to attend simultaneously to stimulus information spread over a large region. The pupil is dilated when attention is broadly spread, so spherical aberration (blur) decreases the activation of small receptive fields by reducing the high spatial frequency content of the retinal image. In effect, the large receptive fields that mediate broad spatial attention are "selected" by the dilated pupil.

36.457 The Effects of Voluntary Attention on the Event-Related Potentials and Gamma-Band Response of EEG
Allison E. Connell Pensky1 (allison.connell@berkeley.edu), Ayelet Landau1,2, William Prinzmetal1; 1Psychology, University of California, Berkeley, 2Department of Veterans Affairs, Martinez, CA
Previous research has shown that there are two types of spatial attention, sometimes referred to as voluntary (goal-directed) and involuntary (stimulus-driven) attention. These studies used the spatial-cueing paradigm in which a spatial cue could be predictive of the location of an upcoming target, or not. The target could be in either the cued or uncued location. Within this paradigm, predictive spatial cues engage voluntary attention, while nonpredictive cues capture involuntary attention. Event-related potential (ERP) studies have found no clear difference between the P1 and N1 components for predictive and nonpredictive cues, while studies using time-frequency analysis have found differences in the gamma-band response (30 to 80 Hz). We addressed this disconnect with a direct comparison of these analytical approaches within a single study. Furthermore, in all previous studies target-related activity was confounded with the lingering cue-related response. This is particularly a problem for target-cued trials, which differ in their physical composition from target-uncued trials and, as such, may influence the EEG response to the target. We addressed this overlapping activity in two ways. First, we used a cueing paradigm in which we differentially, but simultaneously, cued both spatial locations. The participants were told that these cues were either random with respect to the target (involuntary attention condition) or that one of the cues would predict the location of the target (voluntary attention condition). This design ensured that every trial was physically identical. Second, we employed a sufficiently long period between the cue and target (600 ms) to allow the ERP signal to return to baseline prior to the appearance of the target.
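A brief aside on the time-frequency measure at issue in 36.457: gamma-band power is commonly estimated via complex Morlet wavelet convolution. A minimal sketch on synthetic single-channel data (all parameters are illustrative):

import numpy as np

fs = 500.0                                   # sampling rate (Hz)
t = np.arange(0.0, 1.0, 1.0 / fs)            # 1 s of signal
rng = np.random.default_rng(1)
signal = rng.normal(0.0, 1.0, t.size)        # stand-in for an EEG epoch

def morlet_power(signal, fs, freq, n_cycles=7):
    """Power at one frequency via complex Morlet wavelet convolution."""
    sigma_t = n_cycles / (2.0 * np.pi * freq)
    wt = np.arange(-4.0 * sigma_t, 4.0 * sigma_t, 1.0 / fs)
    wavelet = np.exp(2j * np.pi * freq * wt) * np.exp(-wt**2 / (2.0 * sigma_t**2))
    wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))   # unit energy
    analytic = np.convolve(signal, wavelet, mode="same")
    return np.abs(analytic) ** 2

# Average power over the 30-80 Hz gamma band reported in the abstract.
gamma_power = np.mean([morlet_power(signal, fs, f) for f in range(30, 81, 5)], axis=0)
print(gamma_power.shape, gamma_power.mean())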
With this design, we were able to separate cue-related and target-related activity both in ERP and time-frequency analyses, and to delineate what aspects of the EEG signal are due to voluntary attention.

36.459 Interactivity between the left intraparietal sulcus and occipital cortex in ignoring salient distractors: Evidence from neuropsychological fMRI
Carmel Mevorach1 (c.mevorach@bham.ac.uk), Harriet Allen1, John Hodsoll1, Lilach Shalev2, Glyn Humphreys1; 1Behavioural Brain Sciences Centre, The School of Psychology, The University of Birmingham, 2School of Education, The Hebrew University
Visual attention mechanisms in the human brain act both to enhance the processing of relevant targets and to suppress the processing of irrelevant distractors. Attentional control mechanisms are typically linked to activity in a fronto-parietal network, but their effect can be measured in extrastriate visual cortex. Previous work indicates that the left intraparietal sulcus (IPS) is particularly critical for the selection of low-saliency targets in the presence of higher-saliency distractors. Here we use neuropsychological fMRI to examine how interactions between the left IPS and extrastriate cortex generate selection by saliency. We compared activation patterns in the left IPS and in extrastriate visual cortex for responses to the local and global properties of compound letters. We tested two patients exhibiting distinct patterns of damage to extrastriate visual cortex which differentially affected their ability to select targets at the local and global level. In healthy controls there was increased activity in the left IPS, but reduced activity in extrastriate visual cortex, when the target had low saliency and the distractor high saliency. Similar effects were found with the patients, but only when distractors at their spared level of processing had high saliency. In contrast, there was increased activation in their intact extrastriate region when the target at their spared level of processing had low saliency. We conclude that the left IPS acts to bias the competition for selection against salient distracting information (rather than in favour of the low-salient target). In addition, in the absence of competition from salient distractors, extrastriate activity reflects target selection. We discuss the implications for understanding the network of regions controlling visual attention.

3D perception: Pictorial cues
Vista Ballroom, Boards 501–512
Sunday, May 9, 2:45 - 6:45 pm

36.501 Shape from Smear
Roland Fleming1 (roland.fleming@tuebingen.mpg.de), Daniel Holtmann-Rice1,2; 1Max Planck Institute for Biological Cybernetics, 2Dept. of Computer Science, Yale University
Over the last few years, I have shown that images of 3D objects contain highly organized patterns of orientation and spatial frequency information ('orientation fields'), which are systematically related to 3D shape. Here we present a novel illusion and an adaptation experiment that provide the first direct evidence that orientation fields are sufficient to drive 3D shape perception.
The logic of the illusion is as follows. If orientation fields play an important role in 3D shape estimation, then it should be possible to synthesize 2D patterns of orientation that elicit 3D shape percepts. We did this by 'smearing' random noise along specific directions in the image (derived from a 3D model). The result is a 2D texture pattern—generated entirely through filtering operations—that appears vividly like a 3D object. We call this illusion "shape from smear".
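A minimal sketch of the smearing operation just described: blur noise along a specified orientation with an elongated Gaussian kernel, compositing per pixel by a local orientation field. The field here is a simple synthetic (radial) one rather than one derived from a 3D model:

import numpy as np
from scipy.ndimage import convolve

def oriented_kernel(theta, length=21, sigma_long=6.0, sigma_short=1.0):
    """Elongated Gaussian that smears along direction theta (radians)."""
    ax = np.arange(length) - length // 2
    xx, yy = np.meshgrid(ax, ax)
    # Rotate coordinates so u runs along the smear direction.
    u = xx * np.cos(theta) + yy * np.sin(theta)
    v = -xx * np.sin(theta) + yy * np.cos(theta)
    k = np.exp(-(u**2 / (2 * sigma_long**2) + v**2 / (2 * sigma_short**2)))
    return k / k.sum()

rng = np.random.default_rng(2)
noise = rng.normal(size=(256, 256))

# Smear the same noise at a bank of orientations, then composite each pixel
# from the band whose orientation best matches the local field direction.
angles = np.linspace(0.0, np.pi, 8, endpoint=False)
smeared = np.stack([convolve(noise, oriented_kernel(a)) for a in angles])

yy, xx = np.mgrid[0:256, 0:256] - 128.0
field = np.arctan2(yy, xx) % np.pi                  # stand-in orientation field
dist = np.abs(((field[None] - angles[:, None, None]) + np.pi / 2) % np.pi - np.pi / 2)
idx = np.argmin(dist, axis=0)
image = np.take_along_axis(smeared, idx[None], axis=0)[0]
print(image.shape)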
A depth discrimination task showed that naïve subjects reliably perceive specific 3D shapes from such stimuli.
Because 'shape from smear' is an entirely 2D process, we can modify the orientations and scales in the image and measure the effects on perceived 3D shape. The more we 'smear' the noise pattern, the more 3D the image appears, suggesting that we are manipulating the image information that the visual system uses for estimating shape. We also find that orientation variations are more important than spatial frequency variations.
Most importantly, we used shape from smear to induce 3D shape percepts by adaptation. We created 'anti-shape' textures by smearing noise along directions orthogonal to the correct directions for a specific 3D shape. Prolonged viewing induces local orientation adaptation, which makes a subsequently presented neutral isotropic noise pattern appear like a specific 3D shape. Thus, for the first time we can show that local orientation detectors are directly involved in the perception of 3D shape.
Acknowledgement: RF supported by DFG FL 624/1-1

36.502 The perception of 3D shape from contour textures
Eric Egan1 (egan.51@osu.edu), James Todd1; 1Department of Psychology, The Ohio State University
A new computational analysis is described for estimating the 3D shapes of curved surfaces with contour textures. This model assumes that contours on a surface are stacked in depth so that the depth interval between any two points is optically specified by the number of contours by which they are separated. Whenever this assumption is violated, the model makes specific predictions about how the apparent shape of a surface should be distorted. Two psychophysical experiments were performed in an effort to compare the model predictions with the perceptual judgments of human observers. Stimuli consisted of sinusoidally corrugated surfaces with contours that were oriented in different directions. In Experiment 1, images of textured surfaces were presented together with a set of red and yellow dots that could be moved along a single horizontal scan line with a handheld mouse. Observers were instructed to mark each local depth minimum on the scan line with a red dot and each local depth maximum with a yellow dot. In Experiment 2, horizontal scan lines on images were marked by a row of five to eight equally spaced red dots. An identical row of dots was presented against a blank background on a separate monitor, each of which could be moved perpendicularly with a handheld mouse. Observers were instructed to adjust the dots on the second monitor in order to match the apparent surface profile in depth along the designated scan line. The results of both experiments revealed that observers' shape judgments are close to veridical when surface contours are stacked in depth, but that contour patterns that violate this constraint produce systematic distortions in the apparent shapes of surfaces that are quite consistent with our proposed model.
Acknowledgement: This research was supported by a grant from NSF (BCS-0546107).

36.503 Haptic learning disambiguates but does not override texture cues to 3-D shape
Xin Meng1 (xmeng@sunyopt.edu), Qasim Zaidi1; 1SUNY, State College of Optometry
Li and Zaidi (2003) showed that when 3-D developable surfaces are covered by isotropic random-dot textures, fronto-parallel concave surfaces are seen as convex, because orientation flows are absent, and the spatial-frequency gradient is consistent with a convex percept, if image spatial frequency is assumed to vary solely as a function of distance. The percept suggests that deformations of texture elements were not used by the visual system. Haptic feedback can influence priors and the weighting of 3-D visual cues (Ernst et al., 2003; Adams et al., 2004). We tested whether haptic learning can correct the perception of 3-D shape based on spatial frequency cues. Observers were shown four half-cycles of sinusoidal corrugations (Convex, Concave, Right-slant, Left-slant) and a flat surface, all covered with a random-dot pattern. Observers perceived concavities and convexities as convex, both slants as concave, and the flat surface as flat. Using a Phantom force-feedback device, observers were then allowed to "feel" the actual 3-D shapes. After repeated exploration, observers started perceiving the concave and slanted surfaces "correctly". The effect of haptic learning spread over the complete image, but disappeared when the haptic feedback ended. Since the texture in each image is physically compatible with the veridical shape, this may be due to recruiting correct cues or to overriding texture cues. Since the frequency cues are similar for the two curvatures and the two slants, haptic feedback indicating the opposite curvature or slant predictably evoked the percept compatible with the haptic information. As a critical test we presented flat frontoparallel haptic feedback for the textured images of the curved and slanted surfaces. This feedback failed to modify the pre-training percept. In addition, curved or slanted haptic feedback did not alter the percept of the flat stimulus. Consequently, prolonged haptic training can disambiguate texture cues to 3-D shape, but cannot override them.
Acknowledgement: EY07556 and EY13312

36.504 Contributions of orientation and spatial frequency modulations in the perception of slanted surfaces
Danny Tam1,2 (danny.tam@qc.cuny.edu), Jane Shin2, Andrea Li1,2; 1Neuropsychology Doctoral Program, Graduate Center, CUNY, 2Department of Psychology, Queens College, CUNY
In images of textured 3D surfaces, pattern changes can be neurally characterized as changes in orientation and spatial frequency. Previously, we have shown that correct 3D shape perception is contingent on the visibility of orientation flows running parallel to the surface curvature.
However, little is known about the relative contributions of orientation and frequency information in 3D shape perception. We sought to determine the relative contributions of orientation and frequency in the perception of surface slant. Horizontal and vertical gratings were mapped onto planar surfaces that were rotated around a horizontal or vertical axis and then viewed in perspective. We measured the minimum amount of surface slant required to detect the direction of orientation modulation (OM) or frequency modulation (FM) change (pattern detection thresholds) and compared them to the minimum amount of slant required to detect the direction of surface slant (slant detection thresholds) for the same surfaces patterned with horizontal-vertical plaids containing both OM and FM changes. For both horizontally and vertically rotated surfaces, results indicate that 1) for surfaces close to the fronto-parallel plane, steeper slants were consistently needed to detect FM (than OM) changes and to detect surface slant when both OM and FM changes were present. Slant detection thresholds were consistently close to pattern detection thresholds for OM changes. 2) For surfaces at steeper slants, preliminary results show that pattern detection thresholds are consistently low for both OM and FM conditions, and are comparable to slant detection thresholds when both types of information are present. Pattern frequencies will be varied to examine the contribution of effective contrast to FM detection. Our results suggest that 3D slant perception is dictated by OM information at both shallow and steep slants, while FM information is efficiently used only at steep slants.
Acknowledgement: This work was supported by a grant from The City University of New York PSC-CUNY Research Award Program (PSC-69450-00 38 to A. Li) and a grant from NIH (EY13312 to Q. Zaidi).

36.505 A spherical harmonic model for 3D shape discrimination
Flip Phillips1 (flip@skidmore.edu), Eric Egan2, Josh Lesperance3, Kübra Kömek1; 1Psychology & Neuroscience, Skidmore College, 2Psychology, The Ohio State University, 3Mathematics & Computer Science, Skidmore College
At VSS 2008, we presented a series of experiments that sought out common mental representation strategies for three-dimensional shape across the modalities of vision and touch. One of these experiments required subjects to physically sculpt replicas of visually and haptically presented objects. While investigating strategies for comparing the depicted shapes to their respective ground truth, we developed a metric based on spherical harmonic decomposition. An unexpected and surprising artifact of this procedure is that it is also highly predictive of performance in our various discrimination tasks. Here, we present the details of this model as well as a reanalysis of results from other haptic and visual discrimination experiments (Norman et al. 2004, 2006) that also show close agreement with our model. Finally, we present a series of experiments intended to test the limits of our model. We use a spherical harmonic decomposition that shares characteristics with traditional Fourier methods. Subjects performed a paired-comparison discrimination task using objects that varied in frequency (complexity) and phase (relative location of features). It is well known that, in the case of two-dimensional visual images, the phase component contains an overwhelming amount of the information needed for identification and discrimination. Is this true for the visual discrimination of three-dimensional objects as well?
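A brief aside on the kind of decomposition the model rests on: projecting a star-shaped object's radial function onto spherical harmonics and summarizing amplitude by degree. A minimal sketch (grid resolution and the toy shape are illustrative, and this is not the authors' implementation):

import numpy as np
from scipy.special import sph_harm

# Sample a star-shaped object as a radius function r(theta, phi):
# theta = azimuth in [0, 2*pi), phi = polar angle in [0, pi] (SciPy's convention).
n_phi, n_theta = 90, 180
phi = np.linspace(0.0, np.pi, n_phi)
theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
TH, PH = np.meshgrid(theta, phi)

# Toy shape: a sphere with a degree-4 corrugation.
r = 1.0 + 0.2 * np.real(sph_harm(2, 4, TH, PH))

dA = np.sin(PH) * (np.pi / (n_phi - 1)) * (2.0 * np.pi / n_theta)  # area element

def coefficient(m, n):
    """Project r onto Y_n^m by numerical integration over the sphere."""
    return np.sum(r * np.conj(sph_harm(m, n, TH, PH)) * dA)

# Amplitude spectrum by degree n (analogous to spatial frequency bands).
for n in range(6):
    amp = np.sqrt(sum(abs(coefficient(m, n)) ** 2 for m in range(-n, n + 1)))
    print(n, round(float(amp), 3))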
Our results show that, for particular ranges of 3D spatial frequency, the phase components dominate, while at other frequencies the amplitude carries the information used for discrimination.

36.506 Depth cue combination in spontaneous eye movements
Dagmar Wismeijer1 (d.a.wismeijer@gmail.com), Casper Erkelens2, Raymond van Ee2, Mark Wexler3; 1Justus-Liebig-University, Giessen, Germany, 2Helmholtz Institute, Utrecht University, The Netherlands, 3Laboratoire Psychologie de la Perception, CNRS/Université Paris Descartes, France
Where we look when we scan visual scenes to obtain an understanding of the 3D world around us is a question of interest for both fundamental and applied research. Recently, it has been shown that depth is an important variable in driving eye movements: the directions of saccades tend to follow depth gradients (Wexler, 2008; Janssen, 2009). Whether saccades are aligned with a single depth cue or a combination of depth cues is still unknown. And, in the latter case, it is interesting to ask whether saccades are based on combination rules similar to those that apply to depth perception. Moreover, these scanning eye movements across different depth planes are composed of two distinct components: conjugate shifts of gaze (saccades) and disjunctive movements changing the depth of fixation (vergence). The same questions about the effect of depth cues also apply to vergence: various studies have reported that vergence is guided by the consciously perceived depth percept, whereas others report that vergence is based on depth cue(s). Here we studied what depth information is used to plan both saccades and vergence. We showed observers surfaces inclined in depth, in which perspective and disparity defined different plane orientations (both small (0°-45°) and large (90°, 180°) conflicts). Observers' eye movements were recorded binocularly while they scanned the surface. After the stimulus presentation, observers reported the perceived surface orientation using a joystick. We found that saccade directions and perceived surface orientation follow the same pattern of depth cue combination: a weighted linear cue combination for small conflicts and cue dominance for large cue conflicts. The weights assigned to each cue varied across subjects, but were strongly correlated for perception and saccades within subjects. This correlation was maintained while manipulating cue reliability. Vergence, on the other hand, was dominated by the disparity cue.
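A minimal sketch of the weighted linear cue combination referred to above, using reliability-based weights of the standard inverse-variance form; the cue values and noise levels are invented for illustration:

import numpy as np

# Surface orientation (deg) signalled by each cue, plus each cue's noise SD.
perspective, sigma_p = 30.0, 4.0
disparity, sigma_d = 50.0, 8.0

# Inverse-variance weighting (the usual linear-combination rule).
w_p = (1.0 / sigma_p**2) / (1.0 / sigma_p**2 + 1.0 / sigma_d**2)
combined = w_p * perspective + (1.0 - w_p) * disparity
sigma_c = np.sqrt(1.0 / (1.0 / sigma_p**2 + 1.0 / sigma_d**2))

print(f"w_perspective = {w_p:.2f}, combined = {combined:.1f} deg, sd = {sigma_c:.2f}")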
36.507 Relative contribution of outline (perspective) and shading cues to monocular depth perception
Glen Harding1 (g.harding1@bradford.ac.uk), Marina Bloj1, Julie Harris2; 1Bradford Optometry Colour and Lighting Lab (BOCAL), University of Bradford, 2School of Psychology, University of St. Andrews
Two important monocular cues to depth in static scenes are perspective/object outline and shading. These cues are widely used among artists to indicate depth in flat images, but knowledge of their relative contributions to depth and shape perception, and their interactions, is limited. In order to explore this issue we rendered (using RADIANCE) physically accurate colour images of folded card stimuli and displayed them in 42-bit colour. One side of the card was a saturated red colour, the other white. The sides were separated by a vertical fold to form a concave 'corner' or a convex 'roof', such that the angle between the two sides of the card could be varied. A wide range of card angles was produced. Observers viewed stimuli monocularly, through a small circular aperture to exclude other cues to depth, and were asked to match the angle of the folded card stimulus by adjusting the angle between two lines in a 'view-from-above' configuration, displayed on another monitor. Observers also performed matches to wire frame stimuli ('outline-cue-only' condition), large card stimuli that extended beyond the field of view ('gradient-cue-only' condition) and stimuli where the angle indicated by the outline differed from that indicated by the gradient ('cue-conflict' condition). Results for 3 observers (678 trials each) indicate that the information in the shading is a poor depth cue in isolation, and also ambiguous. Angle estimation seems to be dominated by the perspective cue and a prior for flatness, which can be modelled using Bayesian cue combination. The addition of shading, of any type, to the 'outline-cue-only' condition improved the accuracy of card angle estimates (linear regression slopes significantly different, p
Observers were close to the physical prediction for tall/narrow shapes, but with decreasing aspect ratio (shorter/wider shapes), there was a tendency to underestimate the critical angle. With this bias factored out, we found that errors were mostly positive when the part faced toward the table's edge, and mostly negative when facing the opposite direction. These results are consistent with observers underestimating the physical contribution of the attached part. Thus, in making judgments of physical stability, observers tend to down-weight the influence of an attached part—consistent with a robust-statistics approach to determining the influence of a part on global visual estimates (Cohen & Singh, 2006; Cohen et al., 2008).
Acknowledgement: SAC & MS: NSF CCF-0541185 and IGERT DGE-0549115, RF: DFG FL 624/1-1

36.512 Visualizing the relations between slices and wholes is facilitated by co-location
Bing Wu1 (bingwu@andrew.cmu.edu), Roberta L. Klatzky1,2, George Stetten3,4; 1Department of Psychology, Carnegie Mellon University, 2Human-Computer Interaction Institute, Carnegie Mellon University, 3Robotics Institute, Carnegie Mellon University, 4Department of Biomedical Engineering, University of Pittsburgh
Cross-sectional 2D images are widely used in medicine to represent 3D anatomy, but even experienced physicians have difficulty visualizing the relationship between the slices and the whole. Three experiments examined whether mental visualization is facilitated by displaying the cross sections in the physical space of the whole object. Subjects used a hand-held tool to scan and expose a hidden 3D object as a sequence of axial cross sections. A non-axial test angle was then indicated within the scanned space, and the subjects were instructed to visualize the corresponding cross section. A 2D test image then appeared, and the subjects indicated whether or not it matched the visualized cross section. The target's cross sections and the test image were either displayed directly at the source locations, by means of an augmented-reality display (in situ viewing), or displaced to a remote screen (ex situ viewing). In Experiment 1, both the target cross sections and the test image were presented in the same display mode, in situ or ex situ. Consistent with the hypothesis, we found that subjects achieved higher accuracy with the in situ than the ex situ display. In particular, displacing the images from the source induced failures to detect geometrical differences between the visualized cross section and the test image. In Experiment 2, the test image was always displayed in situ. The disadvantage for ex situ exploration remained, showing that it is the visualization process, not the test, that is undermined by displacing the cross-sectional displays from the source location. A third experiment confirmed this result by showing that ex situ viewing at test alone had no negative effect.
These findings extend the advantages we have shown for in situ visualization in facilitating perceptually guided action to the mental construction of complex object representations.
Acknowledgement: Supported by grants from NIH (R01-EB000860 & R21-EB007721) and NSF (0308096).

Face perception: Features
Vista Ballroom, Boards 513–528
Sunday, May 9, 2:45 - 6:45 pm

36.513 Integration of facial features is sub-optimal
Jason Gold1 (jgold@indiana.edu), Bosco Tjan2, Megan Shotts1, Patrick Mundy1; 1Department of Psychological and Brain Sciences, Indiana University, Bloomington, 2Department of Psychology, University of Southern California
How efficiently do we combine information across facial features when recognizing a face? Some previous studies have suggested that the perception of a face is not simply the result of an independent analysis of individual facial features, but instead involves a coding of the relationships amongst features that enhances our ability to recognize a face. We tested whether an observer's ability to recognize a face is better than what one would expect from their ability to recognize the individual facial features in isolation by using a psychophysical summation-at-threshold technique. Specifically, we measured contrast sensitivity for identifying left eyes, right eyes, noses and mouths of human faces in isolation as well as in combination. Following Nandy & Tjan (1), we computed an integration index Φ from these sensitivities, defined as

Φ = S²(left eye + right eye + nose + mouth) / (S²(left eye) + S²(right eye) + S²(nose) + S²(mouth)),

where S is contrast sensitivity. An index of 1 indicates optimal integration of information across features (i.e., observers use the same amount of information from each feature when the features are shown in isolation as when they are shown in combination with each other). An index below 1 indicates sub-optimal integration, whereas an index above 1 indicates super-optimal integration (i.e., the combination of features allows observers to use more of the available information than they were able to use when the features were shown in isolation). Surprisingly, we find that most observers integrate facial information sub-optimally, in a fashion that is more consistent with a model that bases its decisions on the single 'best feature'. (1) Nandy AS & Tjan BS, JOV 2008, 8(13):3, 1-20.
Acknowledgement: This research was funded by National Institute of Health Grants EY019265 to J.M.G., and EY016093, EY017707 to B.S.T.

36.514 There can be only one: Change detection is better for singleton faces, but not for faces in general
Whitney N. Street1 (street1@illinois.edu), Sean Butler2, Melinda S. Jensen1, Richard Yao1, James W. Tanaka2, Daniel J. Simons1; 1Department of Psychology, University of Illinois, 2Cognition and Brain Sciences Program, Department of Psychology, University of Victoria
Change detection is a powerful tool to study visual attention to objects and scenes because successful change detection requires attention. For example, people are better able to detect a change to the only face in an array than they are changes to other objects (Ro et al., 2001), suggesting that faces draw attention. To the extent that such attention advantages depend on experience, they might vary with age. Our study had two primary goals: (a) to explore the nature and limitations of the change detection advantage for faces, and (b) to determine whether that advantage changes with age and experience.
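A brief aside on 36.513 above: the integration index Φ is straightforward to compute once the four single-feature sensitivities and the combined sensitivity are in hand. A minimal sketch with invented contrast sensitivities:

def integration_index(s_combined, s_parts):
    """Phi = S^2(all parts together) / sum of S^2 for each part alone."""
    return s_combined**2 / sum(s**2 for s in s_parts)

# Hypothetical contrast sensitivities: four features alone, then combined.
s_parts = {"left eye": 22.0, "right eye": 21.0, "nose": 12.0, "mouth": 15.0}
s_combined = 25.0

phi = integration_index(s_combined, s_parts.values())
print(f"Phi = {phi:.2f}  (<1 sub-optimal, 1 optimal, >1 super-optimal)")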
Children, ages 7 to 12 years, viewed an original and a changed array of objects that alternated repeatedly, separated by a blank screen, until they detected the one changing object. The arrays consisted of varying numbers of faces or houses, any one of which could change to another exemplar from the same category. Consistent with earlier work, in the presence of a singleton face, changes to that face were detected more quickly and changes to houses in the array were detected more slowly, suggesting that the singleton face drew attention. This advantage was specific to faces—singleton houses showed no benefit. However, the advantage for faces occurred only for singleton faces—when multiple faces were present in the display, change detection was no better for faces than for houses. This singleton advantage for faces was present for all age groups even though older subjects showed better overall change detection performance. Apparently, people prioritize single faces over other objects, but they do not generally prioritize faces over other objects when multiple faces appear in a display.

36.515 The SHINE toolbox for controlling low-level image properties
Verena Willenbockel1 (verena.vw@gmail.com), Javid Sadr2, Daniel Fiset1, Greg Horne3, Frédéric Gosselin1, James Tanaka3; 1Département de Psychologie, Université de Montréal, 2Department of Psychology, University of Massachusetts Boston, 3Department of Psychology, University of Victoria
Visual perception can be influenced by top-down processes related to the observer's goals and expectations, as well as by bottom-up processes related to low-level stimulus attributes, such as luminance, contrast, and spatial frequency. When using different physical stimuli across psychological conditions, one faces the problem of disentangling the contribution of low- and high-level factors. Here we make available the SHINE (Spectrum, Histogram, and Intensity Normalization and Equalization) toolbox written with Matlab, which we have found useful for controlling a number of image properties separately or simultaneously. SHINE features functions for scaling the rotational average of the Fourier amplitude spectra (i.e., the energy at each spatial frequency averaged across orientations), as well as for the precise matching of the spectra. It also includes functions for normalizing and scaling mean luminance and contrast, as well as a program for exact histogram specification. SHINE offers ways to apply the luminance adjustments to the whole image or to selective regions only (e.g., separately to the foreground and the background). The toolbox has been successfully employed for parametrically modifying a number of image properties or for equating them across a stimulus set in order to minimize potential low-level confounds in studies on higher-level processes (e.g., Fiset, Blais, Gosselin, Bub, & Tanaka, 2008; Williams, Willenbockel, & Gauthier, 2009). The toolbox can be downloaded here: www.mapageweb.umontreal.ca/gosselif/shine.

36.516 The role of contour information in the spatial frequency tuning of upright and inverted faces
Daniel Fiset1 (daniel.fiset@umontreal.ca), Verena Willenbockel1, Mélanie Bourdon1, Martin Arguin1, Frédéric Gosselin1; 1Department of Psychology, University of Montreal
Using the spatial frequency (SF) Bubbles technique, we recently revealed that the same SFs are used for the identification of upright and inverted faces (Willenbockel et al., in press; see also Gaspar, Sekuler, & Bennett, 2008). In these articles, the faces were presented through an elliptical aperture hiding contours. Given that contours do contain information useful for face identification, real-world differences between upright and inverted face SF processing might have been missed. Here, we examined the role of contour information in the SF tuning of upright and inverted face identification using SF Bubbles. We created a bank of 20 faces, and each face was randomly assigned either to set A or to set B. Six participants saw the faces from set A with contours and the faces from set B without contours (shown through an elliptical aperture), whereas six other participants saw the faces from set A without contours and the faces from set B with contours. On each trial, a face was selected and its SFs were sampled randomly (for details, see Willenbockel et al., in press). Participants completed one thousand trials in each condition. Multiple linear regressions were performed on the random SF filters and response accuracy. Without contours, we closely replicated Willenbockel et al.: the same SFs correlated with accurate identification of upright and inverted faces (a single band beginning at ~6 cycles per face (cpf) and ending at ~15 cpf). The presence of contour information led to a similar increase in the diagnosticity of low spatial frequencies, irrespective of face orientation, and to a decrease in the diagnosticity of higher spatial frequencies for inverted faces (upright faces with contour: a single band beginning at ~2.3 cpf and ending at ~16.5 cpf; upright faces without contour: a single band beginning at ~4 cpf and ending at ~20 cpf).

36.517 Different spatial frequency tuning for face identification and facial expression recognition in adults
Xiaoqing Gao1 (gaox5@mcmaster.ca), Daphne Maurer1; 1Department of Psychology, Neuroscience, and Behaviour, McMaster University
Facial identity and facial expression represent invariant and changeable aspects of faces, respectively. The current study investigated how human observers (n=5) use spatial frequency information to recognize identity versus expression. We measured contrast thresholds for the identification of faces with varying expression and for the recognition of facial expressions across varying identity as a function of the center spatial frequency of narrow-band additive spatial noise. At a viewing distance of 60 cm, the peak threshold representing maximum sensitivity was at 11 cycles/face width for identifying the faces of two males or two females with varying expression. The peak threshold was significantly higher for recognizing facial expressions across varying identity: it was at 16 cycles/face width for discriminating between happiness and sadness, and between fear and anger, whether the expression was high or low in intensity. In a second phase we investigated the effect of viewing distance.
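A brief aside on the SHINE toolbox (36.515 above): for readers who do not use Matlab, its core luminance/contrast equation step reduces to a few lines. The following Python sketch mimics the idea only; the function name is ours, and it does not reproduce SHINE's actual Matlab API:

import numpy as np

def match_luminance_contrast(images, target_mean=128.0, target_rms=32.0):
    """Rescale each grayscale image to a common mean and RMS contrast.
    (Clipping to [0, 255] can perturb the match slightly, as in practice.)"""
    out = []
    for img in images:
        img = img.astype(float)
        z = (img - img.mean()) / img.std()        # zero mean, unit RMS
        out.append(np.clip(z * target_rms + target_mean, 0, 255))
    return out

rng = np.random.default_rng(3)
stims = [rng.uniform(0, 255, (128, 128)) for _ in range(4)]
equated = match_luminance_contrast(stims)
print([(round(im.mean(), 1), round(im.std(), 1)) for im in equated])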
As viewing distance increased from 60 to 120 and 180 cm, the peak threshold for identifying faces shifted gradually from 11 to 8 cycles/face width, while the peak threshold for recognizing facial expressions shifted gradually from 16 to 11 cycles/face width. The patterns from human observers were different from those of an ideal observer using all available information, which behaved similarly in recognizing identity and expression. In conclusion, we found that, regardless of viewing distance, the optimal spatial frequency band for the recognition of facial expressions is higher than that for the identification of faces. The patterns suggest that finer details are necessary for recognizing facial expressions than for identifying faces and that the system is only partially scale invariant.
Acknowledgement: Canadian Natural Sciences and Engineering Research Council (NSERC)

36.518 Using Spatial Frequency to Distinguish the Perceptual Representations of Identity and Emotional Expressions
Danelle A. Wilbraham1 (wilbraham.1@osu.edu), James T. Todd1; 1Ohio State University Department of Psychology
Recently, there has been more of a focus in the face recognition literature on the perceptual representation of faces, and specifically on attempting to identify the constituent dimensions of the face space (e.g., Valentine, 1991). Of additional interest is the idea that independent face spaces may exist depending on the task at hand. For instance, is the information required to judge identity independent of the information required to judge facial expression? One approach to investigating this problem is to manipulate what range of spatial frequency information is available to the observer. In the current study, we limited spatial frequency information to one of five frequency bands, from coarse to fine, using band-pass filtering. Observers completed a match-to-sample task where they saw a sample face followed by two alternatives, which were both limited to the same one of the five spatial frequency bands. Using the exact same stimuli, observers engaged in two tasks: in one task, they matched identity; in the other, they matched facial expression. This technique allows us to isolate differences between the two types of judgments and thus draw conclusions regarding the underlying representation. Various image measures were investigated to attempt to account for the results, including those based on the Fourier phase spectrum, which we believe carries the alignment information that is critical for these tasks.

36.519 Facial contrast polarity affects FFA uniquely in humans and monkeys
Xiaomin Yue1 (xiaomin@nmr.mgh.harvard.edu), Kathryn Devaney1, Daphne Holt1, Roger Tootell1; 1Martinos Center for Biomedical Imaging, MGH, Harvard Medical School
When otherwise-familiar faces are presented in reversed contrast polarity (e.g., as photographic negatives), they are very difficult to recognize. Here we tested fMRI activity in FFA in response to quantitatively controlled facial variations in contrast polarity, contrast level, illumination, mean luminance, and rotation in plane. Among these, only reversal of contrast polarity affected FFA activity uniquely. Compared to all other cortical areas, reversal of facial contrast polarity produced the highest fMRI signal change in FFA, across a wide range of contrast levels (5.3-100% RMS contrast).
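A brief aside on the band-pass filtering used in 36.518 above: isolating one spatial frequency band of a grayscale image is a one-mask FFT-domain operation. A minimal sketch (the cutoff values in cycles/image are illustrative, not the study's bands):

import numpy as np

def bandpass(img, low_cpi, high_cpi):
    """Keep only Fourier energy between low_cpi and high_cpi cycles/image."""
    h, w = img.shape
    fy = np.fft.fftfreq(h) * h                 # cycles/image along each axis
    fx = np.fft.fftfreq(w) * w
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    mask = (radius >= low_cpi) & (radius <= high_cpi)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * mask))

rng = np.random.default_rng(4)
face = rng.uniform(0, 1, (256, 256))           # stand-in for a face image
coarse = bandpass(face, 2, 8)                  # e.g., a coarse band
fine = bandpass(face, 32, 128)                 # e.g., a fine band
print(coarse.std(), fine.std())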
By comparison, FFA responses were equivalent (invariant) in response to systematic variations in illumination location, mean luminance, and rotation in plane – even though those parameters also affect facial recognition. In greater detail, reversal of facial contrast polarity changes three image properties in parallel: surface absorbance, shading, and specular reflection. In FFA, we found that the polarity bias was produced only by a combination of all three properties; one or two of these properties in isolation did not produce a significant contrast polarity bias. This suggests that the polarity bias arises from subthreshold (non-linear) summation of multiple face image properties. Using fMRI, we found a homologous effect in visual cortex of awake behaving macaque monkeys. Reversal of facial contrast polarity produced decreased activity, confined to the posterior face patch (homologous to FFA), across contrast levels. Apparently, the polarity bias reflects fundamental mechanisms of visual processing, conserved for at least 25 million years.
Acknowledgement: This work was supported by the National Institutes of Health (EY017081 to RBHT, MH076054 to DJH), and the National Alliance for Research on Schizophrenia and Depression (NARSAD) (RBHT, DJH).

36.520 The Recognition of Faces, Airplanes, and Novel Objects is Impaired by Contrast Reversal
Amanda Killian1 (Amandakillian@csu.fullerton.edu), Quoc Vuong2, Jean Vettel3, Jessie Peissig4; 1California State University Fullerton, 2Institute of Neuroscience, Newcastle University, UK, 3Army Research Laboratory, 4California State University Fullerton
Viewing faces in negative contrast (i.e., a photographic negative) has been shown to produce a significant decrement in recognition performance (Bruce & Langton, 1994; Galper, 1970; Goldstein & Chance, 1981). This finding has been suggested to support the existence of a face-specific module in the brain. Alternatively, the pigmentation, lighting, and shading patterns present in faces, and in other object categories, may contribute to this phenomenon (Bruce & Langton, 1994). If this latter explanation is true, we should expect to see contrast reversal effects in categories other than faces, even categories that are novel to the participant. In this experiment, we compared the recognition of faces and other categories of objects, including a novel, nonbiological category ("pengs"), across contrast. Our previous test using a novel category of objects used objects that were perceived as biological (e.g., Greebles; Vuong et al., 2005). Consequently, the current experiment tests the robustness of the contrast effect. This is particularly important because recent data have been reported showing no contrast effect for a set of novel objects (blobs; Nederhauser et al., 2007).

36.521 Hemispheric specialization for the processing of horizontal and vertical manipulations of the eye region in faces
Michael D. Anes1 (manes@wittenberg.edu), Daniel E. Kochli1; 1Department of Psychology, Wittenberg University
At VSS09, we presented experiments in which participants saw an initial face for 3500 ms, a brief probe face (120 ms), and made same/different identity judgments. Inversion of the left eye in probes (initially projected to the right hemisphere) resulted in lengthened same-judgment RTs compared to when probes were unaltered, while inversion of the right eye (initially projected to the left hemisphere) did not result in lengthened RTs relative to unaltered conditions. We took these results to show RH sensitivity to manipulations of face configuration. The present experiments used this same technique. First we created a set of highly standardized faces (identical face shape and eyebrow "frames," with internal features swapped to create identities). We made 5-pixel movements of each eye alone and each eye plus eyebrow to the outside of the face (horizontal) and downward (vertical) to uncover potential differential hemispheric sensitivity to horizontal and vertical displacements. We reduced the initial face duration to 400 ms. Movements of the eye plus eyebrow were more disruptive to same-judgment RTs than movements of the eye alone, and horizontal movements were more disruptive than vertical movements. Unlike our previous work, effects of the side of manipulation were weak. We found some evidence that the LH was more negatively affected by eye-plus-eyebrow movements than eye-alone movements but the RH was not. Horizontal movements of the eye plus brow were more disruptive than eye-alone horizontal movements, but there was no difference in same-judgment RTs for vertical eye-plus-brow and eye-alone vertical movements. In another experiment we lengthened the initial display to investigate "memorial" vs. "perceptual" contributions to hemispheric effects. We relate our results to inversion effect studies revealing a configurational anisotropy of horizontal and vertical displacements of facial features (Goffaux and Rossion, 2007).

36.522 How first-order information contributes to face discrimination in nonhuman primates
Jessica Taubert1 (jtauber@emory.edu), Lisa Parr1, David Murphy-Aagten2; 1Yerkes National Primate Research Center, Emory University, GA, USA, 2School of Psychology, The University of Sydney, NSW, Australia
Faces are complex visual objects that can be distinguished from other objects that occur in our visual environment using first-order information. The term "first-order information" refers to the basic spatial layout of features that is repeated in all faces (two eyes, above a nose, above a mouth). An outstanding question is how the detection of the first-order, or canonical, configuration interacts with the processes that underlie exemplar discrimination. Here, we begin to address this question by examining how the first-order configuration of a face contributes to an exemplar-based discrimination task in two nonhuman primate species. Twelve subjects (six chimpanzees, Pan troglodytes, and six rhesus monkeys, Macaca mulatta) were trained to discriminate scrambled faces. Subjects were then able to generalize from the learned configuration to both the canonical configuration and a novel configuration. In an alternative condition, the subjects were trained with whole faces (in the canonical configuration) and tested with scrambled faces. A comparison between these two conditions demonstrates that the presence of the canonical configuration changed the perception of local features. These results are, thus, consistent with the concept of holistic processing for whole faces. We also present new data showing that both species tended to match the configuration of the face over and above second-order information, the unique variation among faces assumed to be the basis for exemplar discrimination. These data make a valuable contribution by clarifying the definition of terms that have become a source of confusion in face recognition research. We propose that the first-order configuration of a face serves a crucial social function by enabling the involuntary integration of features during the early stages of face perception.

36.523 Recognizing people from dynamic video: Dissecting identity information with a fusion approach
Alice O'Toole1 (otoole@utdallas.edu), Samuel Weimer1, Joseph Dunlop1, Robert Barwick1, Julianne Ayyad1, Jonathan Phillips2; 1The University of Texas at Dallas, 2National Institute of Standards and Technology
The goal of this study was to measure the quality of identity-specific information in faces and bodies presented in natural video or as static images. Participants matched identity in stimulus pairs (same person or different people?) created from videos of people walking (gait videos) and/or conversing (conversation videos). We varied the type of information presented in six experiments and two control studies. In all experiments, there were three conditions, with participants matching identity in two gait videos (gait-to-gait), two conversation videos (conversation-conversation), or across a conversation and gait video (conversation-gait). In the first set of experiments, participants saw video presentations of the face and body (Exp. 1), the face with the body obscured (Exp. 2), and the body with the face obscured (Exp. 3). In the second set, they saw the "best" extracted image of the face and body (Exp. 4), face-only (Exp. 5), and body-only (Exp. 6). Identification performance was always best with both the face and body, although recognition from the face alone was close in some conditions. A video advantage was found for face-and-body and body-alone presentations, but not the face-alone presentations. In two control studies, multiple static images were presented. These studies showed that the video advantages could be explained by the extra image-based information available in the videos, in all but the gait-gait comparisons. To assess the differences in the identity information across the experiments, we used a statistical learning algorithm to fuse the participants' judgments for individual stimulus items across experiments. The fusion produced perfect identification when tested with a cross-validation procedure. When the stimulus presentations were static, the fusion indicated that there was partially independent perceptual information available in the face-and-body and face-only conditions. With video presentations, partially independent perceptual information was available from the face-and-body condition and the body-only condition.
Acknowledgement: TSWG/DOD to A. O'Toole.

36.524 Face viewpoint aftereffect in peripheral vision
Marwan Daar1 (mdaar@yorku.ca), Hugh R. Wilson1; 1Centre for Vision Research, York University
Previous research has shown the existence of the face viewpoint aftereffect (Fang & He, 2005), where adapting to a left- or right-oriented face causes a perceptual shift in the orientation of a subsequently presented frontal face. Thus far, this aftereffect has only been explored in the central region of the visual field. In the current study we used a novel adaptation technique which differs from previous studies in that in each trial there was one adapting stimulus followed by two simultaneously presented test stimuli. Here, the adapting stimulus was displayed in either half of the visual field, and the two test stimuli were displayed in both halves of the visual field, separated by ±3.3 degrees of visual angle. Instead of judging whether a single test stimulus was oriented to the left or to the right (relative to straight ahead), subjects judged whether one test stimulus was oriented to the left or to the right of the other stimulus. Since only one of the test stimuli is presented in the adapted region, this allows us to assess the strength of the aftereffect by measuring the perceived differences between the two stimuli. This technique has the advantage of allowing aftereffects to be probed relative to arbitrary orientations, rather than to those that are exclusively centered around 0 degrees. Using this technique, we discovered that a viewpoint aftereffect occurs in the periphery. An additional finding was a bias in the left visual field to perceive faces in the periphery as facing slightly more towards the observers (p
test whether the similarity-based structure of face-space could also mediate identity invariance – the fundamental ability to maintain constant identity representations under varying face transformations (e.g., changes in lighting or viewpoint). This invariance can be achieved if similarity relations remain unchanged across different transformations. We therefore examined the extent to which similarity relations among faces were indeed constant under view or lighting transformations. In Experiment 1, subjects rated perceived similarity within a set of facial stimuli, viewing either its frontally-lit variant or its top-lit variant. Two group-averaged face-space configurations were constructed from these ratings, and their degree of concordance was estimated using Procrustean analysis. In Experiment 2, subjects rated perceived similarity both for a frontal-view variant and a 60°-view variant of the same stimulus set, in two separate sessions three weeks apart. Concordance was estimated both for inter-subject and intra-subject spaces. Consistent with our hypothesis, the fit between spaces constructed for different views or lighting transformations was significantly high, indicating that similarity relations were kept constant under these transformations. Furthermore, multidimensional spaces created for relatively similar transformations (e.g., frontal-lighting space and frontal-view space) showed higher concordance than those created for more distant transformations (e.g., top-lighting and 60°-view). Finally, intra-subject spaces were found to be more in accordance with each other than inter-subject spaces, suggesting that similarity across group-averaged spaces was not due to averaging. Overall, our findings suggest that invariant identity processing can be achieved by keeping the distances between face exemplars in face-space similar under different transformations.

36.526 The role of features and spatial relations in adaptation of facial identity
Paul Pichler1 (pichler.paul@gmail.com), Ipek Oruç2,3, Jason Barton2,3,4; 1Departments of Molecular Biology and Philosophy, University of Vienna, 2Department of Ophthalmology and Visual Sciences, University of British Columbia, 3Department of Medicine (Neurology), University of British Columbia, 4Department of Psychology, University of British Columbia
Face recognition may involve qualitatively different mechanisms from other object recognition. One of the markers for that assertion is the face inversion effect, showing that face recognition is more sensitive to orientation than that for other objects. Inversion may particularly disrupt the processing of configural information, such as the second-order spatial relations of facial features. The aim of our study was to further investigate the contribution of features and configuration in facial representations by studying face identity aftereffects.
We used three types of stimuli: whole faces, 'exploded' faces (disrupted second-order relations but preserved first-order relations) and 'scrambled' faces (disrupted first-order relations). Whole and altered faces served as adapting or test stimuli, viewed either both upright or both inverted. We measured perceptual-bias aftereffects in identity judgments of ambiguous morphed test face stimuli. Our primary goal was to determine the degree of adaptation that altered faces could induce in whole faces and whether this varied with orientation.
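A brief aside on the Procrustean analysis in the face-space study above: SciPy ships such a routine. A minimal sketch with random stand-in configurations (nothing here is the study's data):

import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(5)
# Two face-space configurations: n faces embedded in 2 MDS dimensions.
space_frontal = rng.normal(size=(20, 2))
# A rotated, scaled, noisy version stands in for the transformed-view space.
angle = 0.6
rot = np.array([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]])
space_rotated_view = 1.5 * space_frontal @ rot.T + rng.normal(0.0, 0.05, (20, 2))

# procrustes removes translation, scale, and rotation, then reports the
# residual disparity (0 = identical configurations, i.e., high concordance).
_, _, disparity = procrustes(space_frontal, space_rotated_view)
print(f"Procrustes disparity: {disparity:.4f}")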
Fourteen healthy subjects participated. Compared to whole-face adaptors, exploded faces induced partial aftereffects in whole test faces, and these showed an inversion effect similar to those seen with whole-face adaptors. In contrast, scrambled faces were ineffective at adapting whole-face test stimuli in either orientation, although they could induce aftereffects in scrambled-face test stimuli.
We conclude that disruption of second-order spatial relations does not prevent facial features from engaging facial representations of identity, but that a proper first-order relationship of features is an essential prerequisite. Second-order spatial relations do form an integral part of face representations, as disrupting these reduces the magnitude of the face aftereffect.
Acknowledgement: NSERC Discovery Grant RGPIN 355879-08, CIHR MOP-77615

36.527 Visual attractiveness is leaky (2): hair and face
Chihiro Saegusa1,2 (csaegusa@caltech.edu), Eiko Shimojo2,3, Junghyun Park2,3, Shinsuke Shimojo2,3; 1Institute of Beauty Creation, Kao Corporation, 2Division of Biology / Computation and Neural Systems, California Institute of Technology, 3JST.ERATO Shimojo Implicit Brain Function Project
Memory-based attractiveness integration is implicit and nonlinear, as we demonstrated with images featuring a central face (FC) and a surrounding natural scene (NS) (Shimojo et al., VSS'09). Here, we aimed to see how a task-irrelevant surround affects the attractiveness of the central stimulus and vice versa, using hair (HR) and face (FC). There is evidence that HR is indeed a surrounding, accessory part of holistic FC perception (Ellis et al., 1980), and both are processed in the face-specific temporal area (Kanwisher et al., 1997).
Eight FC images (4 attractive and 4 less attractive FCs) and 16 HR pictures (4 colors, 2 lengths and 2 shapes) were selected from a pre-rated set. Each FC and HR were combined in the natural spatial alignment, and subjects were asked to rate the attractiveness of 1) FC only or 2) HR only on a 7-point scale in separate sessions.
Results of 1) show that, when an FC is shown with an attractive HR, the attractiveness of the FC was rated higher than with a less attractive HR, even though the subject was asked to focus only on the FC. Results of 2) were symmetrical to those of 1) in that the task-irrelevant FC affects the attractiveness of HR. The overall pattern of results cannot be simply interpreted as the subjects neglecting the "ONLY" instruction, because the "FC only" attractiveness with HR is markedly lower than the range predicted from weighted averaging of the pre-rated attractiveness of HR and FC.
These results seem difficult to interpret unless we accept two possibilities: (a) the attractiveness of the task-irrelevant surround is implicitly "imported" into that of the central stimulus, and (b) something more nonlinear than simple averaging occurs, particularly in the FC-only evaluation with HR.
Acknowledgement: Kao Corporation, JST.ERATO Shimojo Implicit Brain Project, Tamagawa-Caltech gCOE

36.528 An attractiveness function for human faces
Christopher Said1 (csaid@princeton.edu), Alexander Todorov1; 1Psychology Department, Princeton University
Previous research on facial attractiveness has shown that mathematically average faces are perceived as highly attractive. In this study, we obtained attractiveness ratings for 2000 male and 2000 female faces sampled from a 50-dimensional face space. This face space approximates the shape and reflectance variance in human faces.
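As the abstract goes on to describe, the attractiveness function is fit by second-order polynomial regression over the face-space coordinates. A minimal sketch of that kind of fit, with random stand-in data and scikit-learn (which the authors are not stated to have used):

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(6)
X = rng.normal(size=(2000, 50))          # face-space coordinates (stand-in)
# Toy ground truth: attractiveness peaks near the average on some dimensions.
y = 5.0 - 0.3 * (X[:, :10] ** 2).sum(axis=1) + 0.5 * X[:, 10] + rng.normal(0, 0.5, 2000)

# Degree-2 polynomial regression; ridge regularization keeps the
# ~1300 polynomial terms from overfitting 2000 ratings.
model = make_pipeline(PolynomialFeatures(degree=2), Ridge(alpha=10.0))
model.fit(X, y)

# The fitted function can then predict the attractiveness of any face vector.
print(model.predict(rng.normal(size=(1, 50))))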
Scene perception: Mechanisms
Vista Ballroom, Boards 529–539
Sunday, May 9, 2:45 - 6:45 pm

36.529 Neural Coding of Scene Volume: the Size of Space Represented across the PPA and LOC
Soojin Park 1 (sjpark31@mit.edu), Talia Konkle 1, Aude Oliva 1; 1 Department of Brain & Cognitive Sciences, MIT

Estimating the size of a space is intuitively central to our daily interactions, for example when deciding whether or not to take a crowded elevator. Here, we examined how neural areas respond to scenes that parametrically vary in the volume of depicted space. Observers were shown blocks of indoor scene categories and performed a one-back repetition task while undergoing whole-brain imaging in a 3T fMRI scanner. The 18 scene categories varied in the size of depicted space on a 6-point log scale, from small and confined spaces such as closets and showers, to expansive areas such as concert halls and sports arenas. Using a regions-of-interest approach, we found that activity in the lateral occipital complex (LOC) systematically decreased as the size of space increased, showing a preference for smaller spaces (r=-.64, p.1). We further examined the multivoxel pattern activity in the PPA using a linear support vector machine. Voxel patterns in the PPA classified the six different volumes of space well above chance (39% performance with leave-one-block-out cross-validation, chance level being 17%). Importantly, most classification errors were found across scenes that were close in size (within 1-2 scales), and not across scenes that were further apart in size (within 4-5 scales). Similar results were found in LOC (36% classification performance). These data suggest that scene volume information is coded in a distributed manner over a range of areas in the ventral visual pathway, consistent with the general idea that understanding the size of a space can influence a wide range of our interactions and daily navigation through the world.
Acknowledgement: Funded by NSF CAREER award to A.O. (IIS 0546262). We thank the Athinoula A. Martinos Imaging Center at McGovern Institute for Brain Research, MIT for help with fMRI data acquisition. SP and TK contributed equally to this work.
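The leave-one-block-out decoding scheme maps onto a standard cross-validation pattern. The sketch below shows the logic with placeholder data; the voxel counts, block structure, and labels are invented, not the study's.

```python
# Sketch of the decoding logic (placeholder data): classifying six scene
# volumes from voxel patterns with leave-one-block-out cross-validation.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 300))      # 120 blocks x 300 voxels (hypothetical)
y = np.tile(np.arange(6), 20)        # six size-of-space conditions
runs = np.repeat(np.arange(20), 6)   # block labels, each held out in turn

clf = SVC(kernel="linear")
scores = cross_val_score(clf, X, y, groups=runs, cv=LeaveOneGroupOut())
print(f"mean accuracy: {scores.mean():.2f} (chance = {1/6:.2f})")
```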


36.530 Using V1-Based Models to Predict Blur Detection and Perception in Natural Scenes
Pei Ying Chua 1 (cpeiying@dso.org.sg), Michelle P.S. To 2, David J. Tolhurst 2; 1 DSO National Laboratories, 27 Medical Drive #11-10, Singapore 117510, Singapore, 2 Department of Physiology, Development and Neuroscience, University of Cambridge, Downing Street, Cambridge CB2 3EG, UK

We studied the performance of V1-based models (incorporating both linear and non-linear characteristics) in predicting how observers perceive changes in natural scenes. Pilot studies found that the simplest model was able to predict subjective perception of many types of suprathreshold changes, but consistently underpredicted the actual perceived magnitude of changes in blur. Blur might be perceived differently from other types of changes because it serves as an important cue for accommodation, depth, and motion. This study investigated whether the poor predictions for blur changes arise from differences in higher-level processing. To investigate the role of high-level processing in blur perception, we compared our low-level model's performance for blur in eight normal (N) natural scenes and their distorted (D) counterparts (in which the higher-level cues were removed by blurring only selected portions of the natural scenes). If blur-change perception were independent of high-level processing, human and model performance should be similar in both conditions. Blur detection thresholds were collected from 3 observers using a 2AFC protocol. Suprathreshold discrimination measurements were obtained using a matching procedure: the blurriness of a test pair was adjusted to match the degree of change in a "comparison pair". Three-way ANOVAs showed that blur detection and suprathreshold perception were independent of the stimulus type (N versus D): F(2,10)=1.42, P=0.287 and F(1,135)=1.42, P=0.235, respectively. This suggests that higher-level cues did not significantly influence the perception of blur differences. A successful model should give identical outputs for all threshold-level differences. Models of low-level processes in V1 failed to explain the observers' high sensitivities to blur changes (three-way ANOVA: F(2,10)=6.21, P=0.0177). However, additional modelling of attention and bias towards high spatial frequencies produced some significant improvements (three-way ANOVA: F(2,10)=1.41, P=0.288). These results suggest that purely low-level models cannot readily describe blur perception, and must incorporate more complex mechanisms.

36.531 Spatiotemporal chromatic statistics of the natural world
Filipe Cristino 1 (f.cristino@bristol.ac.uk), P. George Lovell 2, Iain D. Gilchrist 1, David J. Tolhurst 3, Tomasz Troscianko 1, Chris P. Benton 1; 1 Department of Experimental Psychology, University of Bristol, UK, 2 School of Psychology, University of St. Andrews, UK, 3 Department of Physiology, Development and Neuroscience, University of Cambridge, UK

We measured the spatiotemporal chromatic properties of the natural world using a high-speed calibrated digital video camera. Our video clips, each lasting 10 seconds and gathered at 200 Hz with a stationary camera, featured a wide variety of scenes, ranging from temporal texture (such as grass blowing in the wind and waves breaking on the sea) to meaningful spatiotemporal structure (such as people communicating using British Sign Language). The raw video output was calibrated and combined to closely approximate the human luminance, red-green and blue-yellow channels (Lovell et al., 2004). By analysing the videos using the power spectrum of the 3D FFT transform, we characterised the natural world as conveyed to the visual cortex. Examination of spatial characteristics showed that the amplitudes of the various spatial frequencies are, as expected, well characterised by a 1/f^n relationship with n close to 1 for the luminance channel. In the temporal domain, the overall statistics follow a 1/ω^n pattern (where ω denotes temporal frequency) with values of n substantially less than 1 for all three channels. However, when examined on a video-by-video basis, a markedly different temporal structure can be observed (e.g. peaks in the temporal spectrum for waves in a river at 6 Hz). We note that such peaks are invariant to viewing distance and we propose that vision may use this invariant structure to extract temporal gist from a scene. The spatiotemporal sensitivities of visual organisms may well be driven by a need to capture such information optimally.
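For the 1/f^n characterisation, the exponent n can be estimated by a log-log regression on the amplitude spectrum. Below is a minimal single-frame sketch with synthetic data; the study applied the analogous 3-D FFT to calibrated video.

```python
# Illustrative estimate of the spectral exponent n in a 1/f**n amplitude
# fall-off for one (synthetic) frame.
import numpy as np

rng = np.random.default_rng(3)
img = rng.normal(size=(256, 256))                  # stand-in luminance image

amp = np.abs(np.fft.fftshift(np.fft.fft2(img)))    # 2-D amplitude spectrum
fy, fx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(256)),
                     np.fft.fftshift(np.fft.fftfreq(256)), indexing="ij")
f = np.hypot(fx, fy).ravel()
keep = (f > 0) & (f < 0.5)                         # drop DC and corner frequencies

# Slope of log amplitude vs. log frequency; n is the negated slope.
slope, _ = np.polyfit(np.log(f[keep]), np.log(amp.ravel()[keep]), 1)
print(f"estimated n ~ {-slope:.2f}")               # ~0 for white noise, ~1 for natural scenes
```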
36.532 Anisotropic Gain Control Pools Are Tuned In Temporal Frequency As Well As Spatial Frequency And Orientation
Yeon Jin Kim 1 (y0kim009@louisville.edu), Edward A. Essock 1,2; 1 Department of Psychological and Brain Sciences, University of Louisville, 2 Department of Ophthalmology and Visual Science, University of Louisville

Masking of a grating by broadband content is greatest for the horizontal orientation and least for oblique orientations (Essock, Haun and Kim, JOV, 2009; Kim, Haun and Essock, VSS, 2007). Thus when viewing oriented content in a natural scene (or other broadband images), oblique content is seen best and horizontal is seen least well (the "horizontal effect", e.g., Essock et al., Vision Research, 2003). We have suggested that this horizontal effect is due to anisotropic suppression that is observed when enough contextual spatial content is present to create a significant response in a gain control pool, thus revealing the anisotropy. Previously we have shown that the anisotropic gain control pools are local (i.e., "tuned") in spatial frequency and orientation. Here we compare these pools (suppression magnitude and anisotropy magnitude) for narrowband tests across the spatio-temporal surface with either spatially or temporally broadband masks. Results show: (1) a horizontal-effect anisotropy at all spatio-temporal conditions tested; (2) the magnitude of suppression and of the horizontal-effect anisotropy are greatest at middle values (2 cpd/10 Hz for spatially broadband masks, and 4 cpd/0.5 Hz for temporally broadband masks); and (3) the anisotropic gain control pools are local not only in spatial frequency but also in temporal frequency. This tuning in temporal frequency is in contrast to prior temporal masking studies that show only 2 (or 3) tuned channels when narrowband masks are used. This suggests that a lower-level suppression exists that is anisotropic and becomes significant when driven by many spatial components (i.e., content broadband in spatial frequency, temporal frequency and/or orientation), but is apparently not revealed by a single (narrowband) mask as in prior studies.
36.533 The Nature of Perceptual Averaging: Automaticity, Selectivity, and Simultaneity
Alice R. Albrecht 1 (alice.albrecht@yale.edu), Brian J. Scholl 1; 1 Perception & Cognition Lab, Dept. of Psychology, Yale University

Perception represents not only discrete features and objects, but also information distributed in time and space. One intriguing example is perceptual averaging: we are surprisingly efficient at perceiving and reporting the average size of objects in spatial arrays or temporal sequences. Extracting such statistical summary representations (SSRs) is fast and accurate, but several fundamental questions remain about their underlying nature. We explored three such questions, investigating SSRs of size for static object arrays, and for a single continuously growing/shrinking object (as introduced in Albrecht & Scholl, in press, Psychological Science). Question 1: Are SSRs computed automatically, or only intentionally? When viewing a set of discs, observers completed three trials of a 'decoy' task, pressing a key when they detected a sudden luminance change. Observers also reported the discs' average size on the final trial, but could receive these instructions either before the final display onset, or after its offset. Performance for the second ('incidental averaging') group was no worse than for the first ('intentional averaging') group -- suggesting that some SSRs can be computed automatically. Question 2: Can SSRs be computed selectively from temporal subsets? Observers viewed a continuously growing/shrinking disc that changed color briefly during each trial. Observers were asked to average either the entire sequence, or only the differently-colored subset -- via instructions presented either before the display onset, or after its offset. Performance was as accurate with subsets as with the whole -- suggesting that SSRs can be temporally selective. Question 3: Can we simultaneously extract multiple SSRs from temporally overlapping sequences? In the same experiments, there was a small but reliable cost to receiving the instructions after the display offset -- suggesting that the visual system cannot automatically compute multiple temporally-overlapping averages. Collectively, these and other results clarify both the flexibility and the intrinsic limitations of perceptual averaging.

36.534 Effective Acuity for Low-Pass Filtering of Real World Images
Amy A. Kalia 1 (kali0080@umn.edu), Gordon E. Legge 1, Christopher S. Kallie 1; 1 Department of Psychology, University of Minnesota Twin-Cities

Understanding of low-vision mobility problems would benefit from methods for predicting the visibility of environmental hazards such as steps. A useful tool for rehabilitation specialists and architectural designers would blur an image of a space according to a particular acuity level. Although many graphics programs have blurring functions, the relationship between blur filters, clinical measures of acuity, and the information transmitted in the resulting image is unclear. To examine this relationship, we tested the effective letter acuity associated with the bandwidth of two low-pass filters applied to a photograph of an eye chart. A high-resolution camera image was obtained of the Lighthouse Distance Visual Acuity chart at a standard viewing distance of 4 m. The image (with peak resolution of 80 pixels per degree) was filtered with Gaussian and 4th-order Butterworth filters with bandwidths (defined as the frequency at 50% of maximum) ranging from 1.33 to 35 cycles per degree. Five normally sighted subjects viewed the blurred images on a display screen at a distance that allowed easy identification of the 20/20 letters in the unblurred image. For the highest bandwidths, subjects were able to read letters smaller than the 20/20 line. For a lower range of bandwidths (1.33 to 8.78 cycles per degree) there was a linear relationship between filter bandwidth (expressed in degrees per cycle) and the smallest resolvable letter size (in degrees). Effective acuity was worse with the Butterworth filter compared to the Gaussian filter for equal bandwidths; the smallest resolvable letter size for a 1 degree-per-cycle bandwidth was approximately 1.1 degrees for the Butterworth filter and 0.63 degrees for the Gaussian filter. These results yield functions (effective acuity vs. filter bandwidth) for simulating the effects of reduced acuity on the information available in real-world scenes.
Acknowledgement: NIH 1 R01 EY017835-01 (Designing Visually Accessible Spaces)
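Both filter families can be written directly in the frequency domain so that the half-maximum point falls at the nominal bandwidth f0, matching the definition used above. A hedged sketch with arbitrary parameters (f0 here is in cycles per pixel, not cycles per degree):

```python
# Sketch of Gaussian and 4th-order Butterworth low-pass filters whose
# frequency responses reach 50% of maximum at the stated bandwidth f0.
import numpy as np

def lowpass(img, f0, kind="gaussian"):
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    f = np.hypot(fx, fy)                         # radial frequency, cycles/pixel
    if kind == "gaussian":
        H = np.exp(-np.log(2) * (f / f0) ** 2)   # H(f0) = 0.5
    else:                                        # 4th-order Butterworth
        H = 1.0 / (1.0 + (f / f0) ** 8)          # H(f0) = 0.5
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

img = np.random.default_rng(4).normal(size=(128, 128))
blurred = lowpass(img, f0=0.05, kind="butterworth")
```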
36.535 Factors influencing the detectability of pedestrians in urban environments
David Engel 1 (david.engel@tuebingen.mpg.de), Cristóbal Curio 1; 1 Max Planck Institute for Biological Cybernetics

Driver assistance systems based on computer vision modules aim to provide useful information for the driving task to their user. One critical task in such scenarios is avoiding dangerous encounters between cars and pedestrians. Classical computer vision systems aim only at finding all pedestrians. We propose that in order to provide the maximally useful information to the driver, it is also necessary to know the probability that the driver will see the pedestrian. This way the system is able to direct and modulate the attention of the driver towards pedestrians that he might not have noticed. Methods: We performed an experiment with 10 subjects. We showed images of urban environments for 120 ms followed by a noise mask. Afterwards, subjects had to indicate positions where they saw a pedestrian. We used the MIT StreetScenes database [1], which contains 3547 photos with hand-labeled pedestrian positions. Each participant was shown a total of 557 images in a random order: 142 images contained no pedestrians, 245 contained one single pedestrian, and the rest contained two or more pedestrians. Results: We counted mouse clicks within a 100-pixel radius of the center of a pedestrian as hits. The average hit rate was 69%. We evaluated how well a classifier can predict the detectability of a pedestrian based on several features, such as: compositional features (position and size of the pedestrian), image features (color histograms, contrast and histogram-of-oriented-gradients descriptors of the pedestrian, as well as the decision value of a support vector machine trained on a pedestrian classification task) and context features (difference in mean, standard deviation and color histograms between pedestrian and background, and distance to other pedestrians in the image). References: [1] S. M. Bileschi. StreetScenes: towards scene understanding in still images. PhD thesis, Massachusetts Institute of Technology, 2006.
Acknowledgement: This work was supported by the EU-Project BACS FP6-IST-027140
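Two of the listed image features, the HOG descriptor and an SVM decision value, can be computed with standard libraries. The sketch below is illustrative only: crop sizes, labels, and classifier settings are invented, not the authors' setup.

```python
# Hedged sketch of two candidate detectability predictors: HOG descriptors
# and the signed decision value of a linear SVM (placeholder data).
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

rng = np.random.default_rng(5)
patches = rng.random((50, 128, 64))     # hypothetical pedestrian crops
labels = rng.integers(0, 2, size=50)    # seen (1) vs. missed (0), invented

# Histogram-of-oriented-gradients descriptor for each crop.
X = np.stack([hog(p, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2)) for p in patches])

clf = LinearSVC().fit(X, labels)
# Signed distance to the decision boundary, usable as a graded feature.
print(clf.decision_function(X[:3]))
```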
36.536 Framework and implementation for perception
Lior Elazary 1 (elazary@usc.edu), Laurent Itti 1,2; 1 Computer Science, University of Southern California, 2 Neuroscience, University of Southern California

A biologically-inspired framework for perception is proposed and implemented, which helps guide the systematic development of machine vision algorithms and methods. The core is a hierarchical Bayesian inference system. Hypotheses about objects in a visual scene are generated "bottom-up" from sensor data. These hypotheses are refined and validated "top-down" when complex objects, hypothesized at higher levels, impose new feature and location priors on the component parts of these objects at lower levels. To efficiently implement the framework, an important new contribution is to systematically utilize the concept of bottom-up saliency maps to narrow down the space of hypotheses. In addition, we let the system hallucinate top-down (manufacture its own data) at low levels given high-level hypotheses, to overcome missing data, ambiguities and noise. The implemented system is tested against images of real scenes containing simple 2D objects against various backgrounds. The system correctly recognizes the objects in 98.71% of 621 video frames, as compared to SIFT, which achieves 38.00%.
Acknowledgement: ARO
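As a toy illustration of using a bottom-up saliency map to prune hypotheses, the sketch below computes center-surround differences on intensity alone; the published system is considerably richer, and the scale pairs and threshold here are assumptions.

```python
# Very reduced sketch of a bottom-up saliency map (intensity-only
# center-surround differences) used to shortlist candidate locations.
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency(img):
    sal = np.zeros_like(img, dtype=float)
    for center, surround in [(1, 4), (2, 8), (4, 16)]:   # assumed scale pairs
        sal += np.abs(gaussian_filter(img, center) - gaussian_filter(img, surround))
    return sal / sal.max()

img = np.random.default_rng(6).random((96, 96))
candidates = np.argwhere(saliency(img) > 0.8)   # locations worth hypothesizing about
```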


36.537 Black, White, or Neutral Gray Blank Screens Have Differential Effects on Scene Gist Masking
Tyler Freeman 1 (tylerf@ksu.edu), Lester Loschky 1, Ryan Ringer 1, Caroline Kridner 1; 1 Department of Psychology, Kansas State University

Scene perception research often uses visual masks to vary the time that information is available on a viewer's retina. However, little is known about the effects of spatial and temporal masking parameters when masking real-world scenes. Such studies using visual masking often include blank screens at the start of the trial, during the ISI between target and mask, and following the mask presentation. These blank screens are typically black, white, or a neutral gray matched to the mean luminance of the target and mask, with neutral blank screens presumably intended to minimize luminance contrast with the target and mask. Earlier research (Freeman & Loschky, VSS 2009) showed differences between black and gray blank screens at SOAs

ms. Eyetracking ensured central fixation during scene presentation. Attention was manipulated between-subjects by presenting 80% of the trials as either window or scotoma images. Early use of central versus peripheral information differed significantly as a function of attention. Specifically, at 36 ms SOA, when attention was centrally focused, performance was significantly better in the window condition than the scotoma condition, whereas when attention was peripherally focused, there was no difference between the two conditions. Thereafter, with increasing processing time, gist performance equalized, as predicted by the use of the critical radius to create the stimuli. Thus, at early processing times, attention moderates gist recognition between central and peripheral vision. However, with additional processing time, performance converges to produce equal gist performance between central and peripheral information, consistent with the hypothesis that attention expands out from the center of a scene over a single fixation.

36.539 Multi-Event Scene Perception at an Ecologically Representative Scale
Thomas Sanocki 1 (sanocki@usf.edu), Noah Sulman 1; 1 Psychology, University of South Florida

Research on scene perception is still in its infancy and, in general, has focussed on convergent processes in which scene information is integrated to arrive at a single label denoting category name or animal decision (e.g.,). However, also important in scene perception is the divergent perceptual ability of perceiving multiple events. Little is known about this ability in the context of continuous scene perception. Further, theories make different predictions: attentional set theory predicts that switching out of a single task set is costly, whereas approaches emphasizing efficient bottom-up processing are consistent with efficient time sharing between multiple events. We developed a continuous event paradigm involving a 60-sec event stream with an average of 12 simultaneously active events. The events were asynchronous and took time (4 sec on average), like events in a typical real-world scene. Observers could time-share between events, as in real-world perception. Experiment 1 examined the cost of switching between multiple event types, relative to single-event conditions. The hit rate was 78.4% for single-tasking, and fell to 64.3% for switching between multiple events. The 14.1% cost for switching tasks was reliable (and consistent with attentional set theory) but fairly modest in size. In fact, one could say that multiple event perception (MEP) was fairly efficient. Is there a basis for reasonably efficient MEP? A promising hypothesis comes from a principle that pervades designed spaces — that similar functions be grouped together. Is MEP more efficient when event types are organized by location? Experiments 2 and 3 provided strong positive evidence, showing that the cost of MEP (relative to single-tasking) is much higher (34.1% and 32.7%, respectively) when event types are distributed throughout space rather than organized by location. MEP is a significant and theoretically interesting aspect of scene perception.
Binocular vision: Stereo mechanisms
Vista Ballroom, Boards 540–547
Sunday, May 9, 2:45 - 6:45 pm

36.540 The limit of spatial resolution for joint stereo disparity / motion perception
Fredrik Allenmark 1 (fredrik.allenmark@ncl.ac.uk), Jenny Read 1; 1 Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK

Human spatial resolution for luminance gratings – the ability to distinguish black-and-white stripes from gray – can reach 50 cycles per degree (cpd; Campbell & Green 1965, J Physiol 181:576). The equivalent resolution for stereo disparity is an order of magnitude lower: depth corrugations defined by binocular disparity cannot be perceived beyond about 4 cpd (Tyler 1974, Nature 251:140; Banks et al. 2004, J Neurosci 24:2077; Bradshaw & Rogers 1999, Vision Res 39:3049). Both these limits are believed to be set by the properties of cells in primary visual cortex (V1): stereo resolution by the area of their receptive fields, and luminance resolution by the arrangement of ON/OFF subregions within receptive fields (Nienborg & Cumming 2003, J Neurosci 24:2065). Here, we examine the spatial resolution for perceiving, not motion or disparity alone, but the correlations between both. The stimuli were random-dot stereograms depicting two transparent depth planes made up of dots streaming at constant speed, either left or right. Both directions of motion were always present everywhere in the visual field, but for the target stimulus they were locally segregated into depth planes (e.g. front plane moving to the left, back moving right), while for the control stimulus, both front and back planes everywhere consisted of two transparent directions of motion. This task requires observers to extract disparity contingent upon motion direction. To find the resolution limit, we alternated the motion direction within each depth plane for the target stimulus, i.e. the target consisted of horizontal strips, alternately front-leftwards/back-rightwards and front-rightwards/back-leftwards. We examined how performance on this task varied as we reduced the height of the strips. We compared this with a task with the same motion energy but which could be performed based solely on the disparity. We find that the high-frequency cut-off is lower for the joint motion/disparity task.
Acknowledgement: Royal Society, Institute of Neuroscience
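The target-stimulus geometry, disparity sign contingent on motion direction and alternating across horizontal strips, can be sketched as follows. All values are invented; strip_height stands for the parameter the study reduced to find the resolution limit.

```python
# Sketch of dot geometry for the joint motion/disparity target stimulus.
import numpy as np

rng = np.random.default_rng(7)
n = 2000
x, y = rng.uniform(0, 10, n), rng.uniform(0, 10, n)   # dot positions, deg
direction = rng.choice([-1, 1], n)                    # leftward / rightward dots

strip_height = 1.0                                    # deg; reduced to probe resolution
strip = np.floor(y / strip_height).astype(int)
# In even strips leftward dots are near (crossed disparity); in odd strips, far.
near = (direction == -1) ^ (strip % 2 == 1)
disparity = np.where(near, -0.1, 0.1)                 # deg, assumed magnitude

x_left = x - disparity / 2                            # left-eye dot positions
x_right = x + disparity / 2                           # right-eye dot positions
```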
36.541 Effects of image statistics on stereo coding in human vision
Keith May 1 (keith@keithmay.org), Li Zhaoping 1, Paul Hibbard 2; 1 Department of Computer Science, UCL, 2 School of Psychology, University of St Andrews

Biological visual systems continuously optimize themselves to the prevailing image statistics, which gives rise to the phenomenon of adaptation. For example, post-adaptation color appearance can be explained by efficient coding which appropriately combines the input cone channels into various chromatic and achromatic channels with suitable gains that depend on the input statistics [Atick, J.J., Li, Z. & Redlich, A.N. (1993). Vision Research, 33, 123-129]. In this study we focus on the ocular channels corresponding to the two eyes. We investigated how image statistics influence the way human vision combines information from the two eyes. Efficient coding in ocular space [Li, Z. & Atick, J.J. (1994). Network, 5, 157-174] predicts that the binocularity of neurons should depend on the interocular correlations in the visual environment: as the interocular correlations increase in magnitude, the neurons should become more binocular. In natural viewing conditions, interocular correlations are higher for horizontal than vertical image components, because vertical binocular disparities are generally smaller than horizontal disparities. Thus, adaptation to natural stereo image pairs should lead to a greater level of binocularity for horizontally-tuned neurons than vertically-tuned neurons, whereas adaptation to pairs of identical natural images should not. We used interocular transfer of the tilt illusion as an index of binocularity of neurons with different characteristics. Subjects adapted either to natural stereo pairs or to pairs of identical natural images. As predicted, interocular transfer was higher for near-horizontal than near-vertical stimuli after adaptation to natural stereo pairs, but not after adaptation to pairs of identical natural images.
Acknowledgement: This work was supported by a grant from the Gatsby Charitable Foundation and a Cognitive Science Foresight grant BBSRC #GR/E002536/01
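A rough proxy, our construction rather than the paper's method, for the claim that interocular correlations are higher for horizontal than vertical image components: correlate the two eyes' oriented gradient maps under a purely horizontal disparity.

```python
# Crude illustration (assumed construction): a horizontal shift between the
# eyes decorrelates vertical-edge structure more than horizontal-edge structure.
import numpy as np
from scipy.ndimage import sobel

rng = np.random.default_rng(8)
left = rng.normal(size=(128, 128))
right = np.roll(left, 2, axis=1) + 0.3 * rng.normal(size=(128, 128))  # shift + noise

# axis=0 differentiates vertically (horizontal structure); axis=1 the reverse.
r_horizontal = np.corrcoef(sobel(left, axis=0).ravel(),
                           sobel(right, axis=0).ravel())[0, 1]
r_vertical = np.corrcoef(sobel(left, axis=1).ravel(),
                         sobel(right, axis=1).ravel())[0, 1]
print(r_horizontal, r_vertical)   # expect r_horizontal > r_vertical
```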


36.542 Using numerosity to explore monocular regions in binocular scenes
Katharina M. Zeiner 1 (kmz@st-andrews.ac.uk), Manuel Spitschan 1, Julie M. Harris 1; 1 School of Psychology, University of St Andrews

How does the visual system combine the two, slightly different, retinal images to arrive at a single, meaningful percept? Traditional models of stereo matching suggest that we match corresponding points in the two retinal images. However, virtually every scene around us contains regions that only one eye can access. These regions are, in these models, treated as noise and thus ignored. However, there is some evidence that they form part of our cyclopean percept of a scene (Ono et al., 2003, J. of Exp. Psych.: Gen. 132(2), 253-265), rather than appearing as rivalrous. Here, we sought to explore how items in monocular regions contribute to the representation of pattern and density. Observers viewed a stimulus comprising a random dot pattern viewed in one of 3 conditions: binocularly (all dots visible), behind a set of fence-like vertical occluders (each dot could be seen by only one eye), or behind a set of horizontal occluders (each dot was binocularly visible but only 50% of the dots were visible in total). In a 2AFC relative numerosity task, participants were asked to indicate which of two stimuli was more numerous. We measured thresholds and biases. There was no significant difference between thresholds in the vertical and horizontal occluder conditions, suggesting that monocular regions are not seen as rivalrous. We found no significant bias for any of the conditions. Our results are consistent with the hypothesis that monocular regions contribute fully to the representation of pattern and density.

36.543 Neural activity in higher dorsal visual areas relates to the discrimination of disparity-defined depth position
Matthew Patten 1 (m.l.patten@bham.ac.uk), Andrew Welchman 1; 1 School of Psychology, University of Birmingham, UK

Neural responses to binocular disparity have been observed throughout the visual cortex. Although it is thought that the ventral and dorsal pathways perform distinct roles in the perception of depth, the nature of this processing is still far from being understood. To investigate the relationship between cortical activity and the perception of depth, we used neuroimaging techniques to test regions for cortical activity that varied in a perceptually-relevant manner, and compared this to the behavioural performance on a near-far depth discrimination task which was measured concurrently. Participants viewed random dot stereograms depicting planes with crossed (near) or uncrossed (far) disparity and were asked to judge the depth position (near or far). Performance was manipulated parametrically by changing the correlation of dots presented to the two eyes. When 100% of the dots were correlated (e.g. white dots in one eye match white dots in the other) the task was trivial; however, when 100% of the dots were anticorrelated (white dots in one eye match black dots in the other), discrimination performance was reduced to chance. We measured concurrent event-related fMRI responses and used multivariate analysis methods (SVM: support vector machine) to determine cortical regions that contained information about the disparity-defined depth (cf. Preston et al., 2008, J Neurosci, 28, 11315-27). In particular, we trained an SVM to discriminate near/far depth for 100% correlated stereograms and then tested the SVM with fMRI responses evoked at lower coherence levels, thereby obtaining 'fMR-metric' functions. Comparing fMR-metric and psychometric functions indicated a close association between psychophysical judgments of depth and activity in higher dorsal areas V7 and VIPS. Consistent with recent findings, our results demonstrate an important role for higher dorsal areas in the perception of disparity-defined depth.
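The 'fMR-metric' logic, train on the easy 100%-correlated condition and test on progressively weaker ones, is schematised below with placeholder patterns; a real analysis would use the measured fMRI responses at each coherence level.

```python
# Schematic of the fMR-metric analysis (invented data): train an SVM on
# near/far responses at 100% correlation, then test at lower coherences.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(9)
n_vox = 200
X_train = rng.normal(size=(80, n_vox))   # trial patterns at 100% correlation
y_train = np.tile([0, 1], 40)            # 0 = near, 1 = far

clf = SVC(kernel="linear").fit(X_train, y_train)

for coherence in [100, 75, 50, 25, 0]:   # percent binocular correlation
    X_test = rng.normal(size=(40, n_vox))  # stand-in patterns at this level
    y_test = np.tile([0, 1], 20)
    acc = clf.score(X_test, y_test)      # one point on the fMR-metric function
    print(coherence, acc)
```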
36.544 Visual Fusion and Binocular Rivalry in Cortical Visual Areas
Stefan Kallenberger 1 (Stefan.Kallenberger@gmx.de), Constanze Schmidt 2, Torsten Wüstenberg 3, Hans Strasburger 2,4; 1 Inst. of Physiology, University of Erlangen-Nürnberg, 2 Dept. of Med. Psychology, University of Göttingen, 3 Clinic of Psychiatry, Charité, University Medical Center Berlin, 4 Inst. of Med. Psychology, University of Munich

Correlates of visual fusion were studied independently of binocular rivalry by fMRI at various eccentricities in visual cortical areas V1 to V4 and MT+. Stimuli to elicit visual fusion (BF), binocular rivalry (BR), and simultaneous fusion and rivalry (BFR) were designed by superimposing fusable and non-fusable lattices. Responses to these were acquired in a group of ten subjects together with meridian, eccentricity and motion mapping in the same session. Retinotopic maximum probability maps on an average flat map for the group were calculated for dorsal and ventral visual areas V1 to V4 at five eccentricity intervals and motion area MT+, resulting in 41 ROIs for each hemisphere. To isolate either fusion- or rivalry-related activity within each ROI, 2×2 ANOVAs with factors eye and condition were performed where, for fusion, the condition levels were BFR and BR, and for rivalry BFR and BF, respectively. Resulting F values are reported as a measure of activity. Visual fusion showed the highest activity within V3 and V4 at ROIs with increasing eccentricity, and further within area MT+. Binocular rivalry, in contrast, mainly showed the highest activities within V1 and V2, preferring lower eccentricities. In conclusion, fusion seems to be predominantly processed in the peripheral visual field representations in areas V3 and V4 as well as in MT+, playing a lesser role within earlier visual areas.

36.545 Binocular coordination: Reading stereoscopic sentences in depth
Elizabeth Schotter 1 (eschotter@ucsd.edu), Hazel Blythe 2, Julie Kirkby 2, Keith Rayner 1, Simon Liversedge 2; 1 Psychology, University of California, San Diego, USA, 2 Psychology, University of Southampton, UK

When we fixate objects that are close to us our eyes make disconjugate convergent movements (e.g., they move nasally), and when we fixate objects that are distant, our eyes make disconjugate divergent movements (Kirkby et al., 2008). We investigated how readers controlled their eyes binocularly as they read sentences presented stereoscopically such that they appeared to loom out from the screen towards the reader. To address this question, we had subjects read sentences as we monitored the movements of each eye simultaneously. Sentences were presented in three conditions: 1) a size-constant 2D condition in which sentence depth and character size were constant throughout the sentence; 2) an increasing-size 2D condition in which sentence depth was constant, but character size increased from left to right (a monocular cue used to infer depth); 3) a 3D looming condition where character size increased from left to right AND the text was presented stereoscopically such that the perceived sentence started at the screen and loomed toward the subject at an angle of 55° from the plane of the screen. To create the looming stimuli, the stereoscopic sentences were such that a letter from the left-eye stimulus was displaced to the left of the corresponding letter in the right-eye stimulus. We predicted that binocular disparity would remain constant in the 2D conditions (1 & 2), but that the eyes would become more converged as they progressed through the sentence in the looming 3D condition (3) if readers processed the text as they would in a real depth condition. Our results showed increased divergence as readers read further into the sentence, indicating that binocular eye coordination is driven by each eye's unique retinal signal rather than by depth cues associated with sentences that appear to loom towards the reader.
Acknowledgement: EPS Study Visit Grant, NIH, Leverhulme Trust
36.546 Suppression in Intermittent Exotropia during fixation
Ignacio Serrano-Pedraza 1 (i.s.pedraza@ncl.ac.uk), Vina Manjunath 2, Olaoluwakitan Osunkunle 3, Michael P. Clarke 1,2, Jenny C. A. Read 1; 1 Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, NE2 4HH, UK, 2 Eye Department, Royal Victoria Infirmary, Newcastle upon Tyne, NE1 4LP, UK, 3 Gonville & Caius College, University of Cambridge, Cambridge, CB2 1TA, UK

Intermittent exotropia (X(T)) is a common oculomotor anomaly where one eye intermittently deviates outwards. Patients with this type of strabismus are often not aware of the exodeviation and do not experience diplopia (Jampolsky 1954, American Orthoptic Journal, 4). The absence of diplopia during the divergent phase has been explained by suppression of the deviated eye. Since X(T) patients have stereopsis, it is widely believed that suppression occurs only during deviation. Here, we show that dichoptic images trigger suppression even during correct fixation. We studied 12 X(T) patients aged between 5 and 22 years. All had functional stereo vision, with stereoacuity similar to that of 20 age-matched controls (0.2-3.7 arcmin). We measured suppression during fixation at 120 cm. Each eye viewed an identical cartoon face (6×6 deg) dissociated by polarizing filters and presented for 400 msec. In one eye, the face was presented at the fovea; in the other, at different retinal positions along the horizontal axis. We also included catch trials where two faces were presented in both eyes. The task was to indicate whether one or two faces were present. To ensure correct fixation, in between stimuli, subjects viewed a nonius image composed of a dissociated butterfly and a net on a binocularly-viewed forest background. All X(T) patients showed normal diplopia when the non-foveal face was presented in the nasal area of the retina. However, 83% of X(T)s reported perceiving only one face when the non-foveal face was presented to the temporal retina, indicating suppression during fixation. In a follow-up experiment, we examined which eye was suppressed. Some subjects suppressed the temporal stimulus regardless of which eye viewed it, so they always perceived the central stimulus; others always suppressed the same eye even when it viewed the central stimulus, so they then perceived the peripheral stimulus; others showed a mixture of both strategies.
Acknowledgement: Supported by Medical Research Council New Investigator Award 80154 and Royal Society University Research Fellowship UF041260 to JCAR

36.547 "What" constrains "where": Perceptual interactions between object shape and object location
Valentinos Zachariou 1 (vzachari@andrew.cmu.edu), Marlene Behrmann 1, Roberta Klatzky 1; 1 Psychology, Humanities & Social Sciences, Carnegie Mellon University

Object identification and object localization are processes that are thought to be mediated by two relatively segregated brain regions that are independent from each other (Mishkin, Ungerleider & Macko, 1983; Goodale & Milner, 1992). Much literature, however, argues that the two processes might not be as independent as previously assumed, given the evidence that, when both processes are engaged in a single task, the performance of one process interferes with the performance of the other (Creem & Proffitt, 2001). Most of the experiments that report this interference, however, rely on complex motor movements in reach-to-grasp tasks, and it remains possible that the interference arises from the fact that the two visual mechanisms, albeit independent in nature, both influence the motor component of the complex task (Milner & Goodale, 2008). In this study, in a series of perceptual tasks with minimal motor demands, we explore the extent to which object identification and object localization are truly independent. Participants are required to compare two pairs of objects, presented simultaneously on a computer screen, and to determine how many differences the pairs have between them using a numeric keypad. The pairs can differ by zero, one or two changes either in object shape or location. The results indicate that the two visual processes are not independent but, rather, that the shape processing mechanism recruits the location mechanism in, at least, some integral part of its function. This finding was confirmed in a second experiment in which, in separate blocks, participants made only a location change detection in which the objects either did or did not have a concurrent shape change, or made a shape change detection with location changes present or not. Importantly, any change in the orthogonal dimension was irrelevant to the participant's task. Whereas shape changes interfered with location judgments, the converse was not true.
Acknowledgement: NIH/NIDA R90 DA023420 Multimodal Neuroimaging Training Program
Temporal processing: Mechanisms and models
Vista Ballroom, Boards 548–556
Sunday, May 9, 2:45 - 6:45 pm

36.548 Targets lurking behind a mask: Suppression of onset transient causes mislocalization of targets
Arielle Veenemans 1 (aveenemans@gmail.com), Patrick Cavanagh 1; 1 Laboratoire Psychologie de la Perception, Université Paris Descartes, Paris, France

We embedded masking in an apparent motion display that allowed us to inspect the targets independently of the masks. A set of masks and targets is presented alternating in adjacent locations around a circular path (e.g., MTMTMTM). The entire set steps forward repetitively so that masks fall where targets had been and vice versa. At each location there is an alternation of targets and masks, but across locations the display is seen as a moving train of masks and targets. This display allows observers to attentively track one target as it steps from location to location and avoid the masks that precede and follow it at each location. At high contrasts the target can be seen at its actual location in the moving train, whereas at low contrasts it vanishes and the space between the masks appears to be empty. Surprisingly, we find that in the middle range of contrasts the target is visible again but mislocalized, appearing to lurk behind the subsequent mask as if the two were presented together rather than sequentially. We propose that the suppression of the target's onset transient delays its visible appearance until the next onset transient, the one triggered by the subsequent mask at the target's location. These results support the proposal (Motoyoshi, VSS 2007) that a suprathreshold onset transient is required for a target to reach awareness, even for high levels of target contrast.
Acknowledgement: This research was supported by a Chaire d'Excellence grant to PC

36.549 Apparent contrast peaks, rather than plateaus, as a function of stimulus duration
Hector Rieiro 1,2,3 (hrieiro@neuralcorrelate.com), Susana Martinez-Conde 2, Jose Luis Pardo-Vazquez 4, Nishit Srivastava 1, Stephen L. Macknik 1,2; 1 Neurosurgery, Barrow Neurological Institute, 2 Neurobiology, Barrow Neurological Institute, 3 Signal Theory and Communications, University of Vigo, 4 Physiology, University of Santiago de Compostela

The apparent contrast of a visual stimulus varies as a function of duration, a phenomenon known as temporal integration. There are two accepted principles to explain the role of stimulus duration in perceived contrast. Bloch's law states that below a critical duration, apparent contrast is a function of both stimulus intensity and duration; above this critical duration, apparent contrast plateaus. Contrary to Bloch's law's predictions, Broca and Sulzer proposed that apparent contrast is maximized for specific stimulus durations, and that smaller or greater durations result in lesser apparent contrast. Contradictory results have been published; some support Bloch's law and some support the Broca-Sulzer effect. We hypothesize that the source of this discrepancy may be that previous studies were conducted on experienced subjects who knew the proposed hypotheses (i.e. previous studies used the authors as subjects), and that no previous study properly controlled for subject criterion. To address these concerns, we designed a 2-AFC task that counterbalanced stimulus dynamics and controlled for subject criterion. Nine human subjects were presented with Gabor patches of different contrasts and durations over a 50% grey background and were asked to report which of them had the higher contrast. Our results show that when the stimulus duration had a value between 67-100 ms, subjects experienced significantly higher apparent contrast, peaking at approximately 7% greater perceived contrast than at very long durations of the same stimulus. This result more-or-less matches the Broca-Sulzer finding, but provides the appropriate controls for the first time. The existence of this peak has important implications for the design of power-efficient lighting and visual display equipment.
Acknowledgement: This work was supported by Science Foundation Arizona (award CAA 0091-07), National Science Foundation (award 0726113), a CHW Intellectual Property SEED award, and the Barrow Neurological Foundation.
36.550 The temporal profile of visual information sampling and integration
Caroline Blais 1 (caroline.blais@umontreal.ca), Martin Arguin 1, Frédéric Gosselin 1; 1 Department of Psychology, University of Montreal

While intuition suggests that visual information sampling through time is continuous, some have argued instead that sampling occurs in temporally discrete moments (VanRullen & Koch, 2003). Of related interest is the question of how visual information is integrated through time. For example, is the information simply summed? We attempted to clarify the nature of visual information sampling and integration through time using a temporal response classification approach. Five subjects were asked to decide which of two movies, presented successively at the center of the screen, was the brightest. Each movie consisted of a sequence of 30 Gaussian blobs (200 ms) of different contrasts, subtending one degree of visual angle. A patch of spatial bit noise displayed through a Gaussian aperture was presented for 200 ms at the movies' location immediately before and after each movie. The contrast of the Gaussian blobs varied randomly through time. Specifically, on each trial, both movies had the same average and maximum contrast values across their temporal extent, but they differed in the temporal distribution of the contrasts. Thus, the brightness decision could only be influenced by the interaction between the participant's sampling/integration profile and the temporal sequence of contrasts in the stimuli. The sequence of contrasts that "optimally" led to a bright percept was computed for each participant by performing multiple regressions on the contrast temporal sequences and the participant's decisions. Three participants out of five showed a clear oscillation in their information sampling function (ranging between 5 and 15 Hz), and a linear decrease of information intake through time; the other participants reported being incapable of performing the task. Our results support the hypothesis that the visual system samples information in a discrete manner. They also indicate that the weight given to the information sampled decreases as information accumulates.
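The temporal response-classification step amounts to regressing choices on the frame-by-frame contrast sequences. A self-contained sketch with simulated data; the Hanning-weighted 'observer' is purely illustrative.

```python
# Sketch of a temporal classification image via multiple regression
# (simulated observer, not the study's data).
import numpy as np

rng = np.random.default_rng(10)
n_trials, n_frames = 5000, 30
dc = rng.normal(size=(n_trials, n_frames))   # contrast(movie 1) - contrast(movie 2)

# Simulated observer: weights the frames with a Hanning window plus noise.
choice = (dc @ np.hanning(n_frames) + rng.normal(size=n_trials) > 0).astype(float)

# Least-squares weights: the temporal profile that best predicts the decisions.
X = np.column_stack([np.ones(n_trials), dc])
weights = np.linalg.lstsq(X, choice, rcond=None)[0][1:]
# 'weights' plays the role of the sampling/integration function; oscillations
# in it are the signature of discrete sampling reported in the abstract.
```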
36.551 Temporal extinction in hemi-neglect patients
Marie de Montalembert 1 (mariedemontalembert@gmail.com), Pascal Mamassian 1; 1 Laboratoire Psychologie de la Perception, CNRS & Université Paris Descartes

Recent neuroimaging and neuropsychological studies have suggested that the right temporo-parietal junction has a dominant role in visual time estimation, suggesting that it forms a core structure of a "when" pathway. Most of the time, neurological patients with right brain damage present extinction (i.e. when two brief near-simultaneous stimuli are presented they only report the ipsilesional item). In this experiment, we were interested in how hemi-neglect patients with visual extinction deal with the duration estimation of two simultaneous events. For this purpose, we asked participants to compare the duration of two stimuli, a standard and a test (a blue and a red circle), presented in their central visual field at different durations (test/standard duration: 0.3 to 3.0 sec). We compared the performance of normal observers and left hemi-neglect patients who had a right temporo-parietal stroke or a hematoma and who presented visual extinction. Stimuli were shown diametrically opposed on a virtual circle (radius = 2.6 deg of visual angle). Simultaneous events were obtained by setting the half duration of the second stimulus at the end of the first stimulus. We found that control participants were almost unimpaired when estimating the duration of these two simultaneous events in comparison to sequentially presented stimuli (a drop of 3.0% in their duration threshold). In contrast, hemi-neglect patients were significantly more impaired in the simultaneous versus sequential presentation (a drop of at least 7.0% in their duration threshold). This result is in accordance with previous studies showing the crucial role of the parietal lobe in time estimation. Furthermore, our results show that hemi-neglect is probably not simply a bias in orienting attention to one side of space but a more profound deficit in processing two objects simultaneously.
36.552 Reaction time and event-related potentials to visual, auditory and vestibular stimuli
Michael Barnett-Cowan 1 (mbarnettcowan@gmail.com), Hugh Nolan 2, John S. Butler 3, John J. Foxe 3, Richard B. Reilly 2, Heinrich H. Bülthoff 1,4; 1 Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, 2 Department of Electronic & Electrical Engineering, Trinity College Dublin, Neural Engineering Group, Trinity Centre for Bioengineering, 3 Departments of Psychology and Biology, City College of New York, The Children's Research Unit (CRU) Program in Cognitive Neuroscience, 4 Department of Brain and Cognitive Engineering, Korea University

Involuntary physical responses to vestibular stimulation are very fast. The vestibulo-ocular reflex, for example, occurs approximately 20 ms after the onset of vestibular stimulation (Lorente de No, 1933, Nature). Despite these fast responses, reaction time (RT) to the perceived onset of vestibular stimulation occurs as late as 438 ms after galvanic vestibular stimulation, which is approximately 220 ms later than RTs to visual, somatosensory and auditory stimuli (Barnett-Cowan & Harris, 2009, Exp Brain Res). To determine whether RTs to natural vestibular stimulation are also slow, participants in the present study were passively moved forwards by 0.1178 m (single-cycle sinusoidal acceleration; 0.75 m/s² peak acceleration) using a Stewart motion platform and were asked to press a button relative to the onset of physical motion. RTs to auditory and visual stimuli were also collected. RTs to physical motion occurred significantly later (>100 ms) than RTs to auditory and visual stimuli. Event-related potentials (ERPs) were simultaneously recorded; the onset of the vestibular ERP in both RT and non-RT trials occurred about 200 ms or more after stimulus onset, while the onset of the auditory and visual ERPs occurred less than 100 ms after stimulus onset. For all stimuli, ERPs occurred approximately 135 ms prior to RTs. These results provide further evidence that vestibular perception is slow compared to the other senses and that this perceptual latency may be related to latent cortical responses to physical motion.
Acknowledgement: This research was supported by a postdoc stipend to MBC from the Max Planck Society and by the WCU (World Class University) program through the National Research Foundation of Korea funded by the Ministry of Education, Science and Technology (R31-2008-000-10008-0) to HHB; an Irish Research Council for Science, Engineering and Technology Embark Initiative postgraduate award to HN; and a Science Foundation Ireland Research Frontiers award to RBR. Special thanks to Karl Beykirch for technical assistance.

36.553 The effects of aging on surround modulation of backward contrast masking
Lindsay E. Farber 1 (farberle@mcmaster.ca), Allison B. Sekuler 1, Patrick J. Bennett 1; 1 Department of Psychology, Neuroscience & Behaviour, McMaster University

Saarela and Herzog (J Vis, 2008, 8(3):23, 1-10) measured backward masking for a centrally-viewed Gabor target that was produced by a small central mask that overlapped the target, a surround annulus mask, and a large combination mask (i.e., centre plus surround). Interestingly, they found that significantly less masking was produced by the combined mask than by the central mask, even though the surround mask produced little masking on its own. One interpretation of this result is that the surround reduced the effectiveness of the central mask. The current study examined whether this non-linear interaction between centre and surround masks is affected by aging. Detection thresholds were measured for a Gabor target (duration = 80 ms) in five younger (~25 years) and older (~69 years) subjects. The target was preceded or followed by surround, central, or combination masks. Thresholds were measured using surround, central, and combination masks that were displayed for 200 ms at five SOAs relative to target onset. The target and masks were 4 cpd and horizontally oriented. Mask contrast was 0.4; a baseline, no-mask condition also was included. Significant masking was obtained in both age groups, and the combined mask produced less masking than the central mask. However, the temporal pattern of masking across target-mask SOA differed noticeably between groups. Our results suggest that strong centre-surround interactions exist in older subjects, but that the temporal properties of these interactions change with age.
Acknowledgement: CIHR, Canada Research Chair program
36.554 Temporal and spatial grouping: questions derived from studies in patients with schizophrenia
Laurence Lalanne 1 (laurence.lalanne@neuf.fr), Anne Giersch 2; 1 inserm666-clinique psychiatrique, 2 inserm666-clinique psychiatrique

Patients with schizophrenia are known to have an impaired sense of continuity, which, according to Husserl, involves the integration of past, present and future moments. The experience of present time is thus not a point but a period in time. It can be evaluated by means of simple psychophysics experiments, in which two bars appear simultaneously or asynchronously, and then stay on the screen until subjects decide whether the bars appeared synchronously or not. Healthy volunteers typically judge bars as synchronous for SOAs up to 30 to 50 ms; in contrast, patients require a longer SOA to detect that bars are asynchronous. However, in these studies bars were systematically presented in different hemifields, and a qualitative and quantitative impairment of inter-hemispheric transfer has been suggested to exist in patients with schizophrenia. This impairment could explain why time windows are larger in patients when bars appear in two different hemifields. We checked this hypothesis by manipulating the location of the squares, displayed either within the same hemifield or across hemifields. Continuous eye tracking ensured that subjects looked at a central fixation point. SOAs varied between 0 and 96 ms, and subjects decided whether the squares appeared synchronously or not. Results showed an enlarged time window in patients but no location effect (intra- versus inter-hemispheric presentation). Furthermore, analysis of the Simon effect showed that patients are sensitive to stimuli of very short duration (8.3 ms). This suggests that the enlargement of the time window is not associated with a fusion of events in time, but rather with a difficulty in comparing stimulus onsets. As the two stimuli are clearly separated in space, this leads to the question of the relationship between spatial and temporal event coding. We will especially discuss the possibility that comparing time onsets requires mental grouping of the compared stimuli.
36.555 Oppel-Kundt illusion weakens with shortening of presentation time
Tadas Surkys 1 (tsurkys@vision.kmu.lt), Algis Bertulis 1, Arunas Bielevicius 1, Aleksandr Bulatov 1; 1 Biology Institute, Kaunas University of Medicine

The magnitude of the Oppel-Kundt illusion was measured at various presentation durations, each followed by a masking stimulus. The referential part of the Oppel-Kundt figure was 70 arc min long and comprised 7 stripes, 28 arc min in height and 1 arc min in width. The empty test part of the stimulus was terminated by a single stripe. The figure luminance was 52 cd/m² and the background luminance was 0 cd/m². The masking stimulus, 130×200 arc min in size, consisted of randomly distributed stripes equivalent to the illusory figure elements but twice as bright (100 cd/m²). The stimulus display duration varied from 60 ms to 1.3 s. Each presentation consisted of three parts: a blank fixation point exposed for 700 ms on the screen, the Oppel-Kundt figure itself, and the masking stimulus appearing immediately after the figure offset and lasting 2 s. A two-alternative forced-choice constant-stimulus procedure was used to measure illusion strength. Psychometric functions were obtained for all display durations of the stimulus. Six subjects participated in the experiments. The Oppel-Kundt illusion weakened gradually from its maximum strength (about 20% overestimation) within the 700-1300 ms interval to 2-3 times less strength at about 100 ms, and showed a tendency to decrease further at shorter times. The results suggest that extra time is required to establish a spatial misperception of the Oppel-Kundt type compared to the time used in the length estimation procedure. The results obtained also indicate that the Oppel-Kundt and Müller-Lyer illusions may be of different origin, as previous experiments with the Müller-Lyer illusory figure didn't show substantial strength variations with presentation duration.
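Reading illusion strength off a constant-stimulus 2AFC experiment typically means fitting a psychometric function and taking its point of subjective equality (PSE). A generic sketch with invented numbers, assuming 70 arc min as the filled reference length:

```python
# Cumulative-Gaussian fit for constant-stimulus 2AFC data (illustrative data):
# the fitted mean (PSE) is the test length perceived as equal to the reference.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

test_len = np.array([58, 62, 66, 70, 74, 78, 82])   # empty-part lengths, arc min
p_longer = np.array([0.02, 0.05, 0.15, 0.35, 0.60, 0.85, 0.95])  # P("test longer")

def pf(x, mu, sigma):
    return norm.cdf(x, mu, sigma)   # mu = PSE, sigma = slope parameter

(mu, sigma), _ = curve_fit(pf, test_len, p_longer, p0=[70, 5])
print(f"PSE = {mu:.1f} arc min; implied overestimation = {100 * (mu - 70) / 70:.1f}%")
```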
36.556 Controlling the timing of oscillations in neural activity and consciousness with rhythmic visual stimulation
Kyle Mathewson1 (kmathew3@uiuc.edu), Christopher Prudhomme1, Monica Fabiani1, Diane Beck1, Gabriele Gratton1, Alejandro Lleras1; 1Beckman Institute & Department of Psychology, University of Illinois at Urbana-Champaign
What is the underlying nature of conscious awareness? William James observed introspectively that consciousness “… does not appear to itself chopped into bits… A ‘river’ or a ‘stream’ are the metaphors by which it is most naturally described” (James, 1890). Since that time, however, evidence has accumulated supporting an alternative, discrete nature of perception (Efron, 1970; VanRullen & Koch, 2003). Recently, we have found evidence that perception fluctuates on a fine temporal scale, as a function of the phase of ongoing neural oscillations. Visual targets presented in the peak of ongoing 10 Hz neural oscillations (alpha rhythm) are visible, while identical stimuli presented in the trough are less likely to reach consciousness (Mathewson et al., 2009). Furthermore, we have shown that rhythmic visual stimulation at similar frequencies can control the timing of oscillations in consciousness (Mathewson et al., in press). Here we show that it is possible to control ongoing neural oscillations with this rhythmic visual stimulation, thus eliciting predictable concomitant oscillations in brain activity and consciousness. After the offset of periodic visual stimulation, masked visual targets were presented at multiple lags, sampling various phases with respect to the induced oscillations. Targets presented in phase with the preceding rhythmic stimulation were more likely to be detected than those out of phase. This induced oscillation in visual sensitivity was strongly correlated with an induced oscillation in the EEG. These effects were markedly smaller for randomly spaced preceding stimulation. These data provide the first evidence of a causal link between ongoing neural oscillations and fine-grained temporal variations in consciousness, and reveal a method to experimentally control these discrete perceptual snapshots.
Acknowledgement: This research was supported by a Natural Sciences and Engineering Research Council of Canada Fellowship to K. E. Mathewson and National Institute of Mental Health grant MH080182 to G. Gratton
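Analyses of this kind are commonly implemented by binning single-trial detection outcomes by the oscillatory phase at target onset and measuring the phasic modulation of hit rate. A minimal Python sketch under that assumption (the simulated data, bin count, and first-harmonic measure are illustrative, not the authors’ pipeline):

    import numpy as np

    # Illustrative single-trial data: oscillatory phase (radians) at target
    # onset and whether the masked target was detected (1 = hit, 0 = miss).
    rng = np.random.default_rng(0)
    phase = rng.uniform(-np.pi, np.pi, size=1000)
    hit = (rng.random(1000) < 0.5 + 0.15 * np.cos(phase)).astype(int)

    # Hit rate as a function of phase, in 8 equal bins.
    bins = np.linspace(-np.pi, np.pi, 9)
    idx = np.digitize(phase, bins) - 1
    hit_rate = np.array([hit[idx == k].mean() for k in range(8)])

    # Strength of the phasic modulation: amplitude of the best-fitting cosine
    # across bins (a simple first-harmonic measure).
    centers = (bins[:-1] + bins[1:]) / 2.0
    amp = 2.0 * np.abs(np.mean(hit_rate * np.exp(1j * centers)))
    print(hit_rate.round(2), f"modulation amplitude = {amp:.3f}")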
Monday Morning Talks

Binocular vision: Models and mechanisms
Monday, May 10, 8:15 - 10:00 am
Talk Session, Royal Ballroom 1-3
Moderator: Zhong-Lin Lu

41.11, 8:15 am
Evidence that disparities defined by luminance and contrast are sensed by independent mechanisms
B.M. Sheliga1 (bms@lsr.nei.nih.gov), E.J. FitzGibbon1, F.A. Miles1; 1Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD 20892
We recorded the initial vergence eye movements that were elicited by 1-D sinusoidal gratings differing in phase at the two eyes by ¼ wavelength (binocular disparity) and created by luminance modulation (LM) or contrast modulation (CM) of dynamic binary noise that was uncorrelated at the two eyes. Whether horizontal or vertical, gratings defined by either LM or CM elicited vergence responses that were always compensatory, working to reduce the ¼-wavelength disparity. When LM was added to the CM, vergence responses showed a U-shaped dependence on the magnitude of the LM, reaching a minimum with in-phase LM of 3.0-5.5%, consistent with the nulling of 1st-order distortion products due to compressive nonlinearities early in the visual pathway. The minimum vergence responses here were robust, had longer latencies than the responses evoked by the LM component of the stimulus (differences ranging from 15.5 to 31.2 ms), and were attributed to cortical mechanisms that can sense disparities defined solely by contrast. In a second experiment, we found that disparities defined by LM in one eye and CM in the other eye (“LM+CM stimulus”) generated only weak vergence responses, and these were always in the “wrong” direction, i.e., opposite to the imposed ¼-wavelength disparity, consistent with mediation entirely by 1st-order distortion products associated with the CM stimulus. Thus, these (reversed) vergence responses could be eliminated entirely by adding a small amount of LM to the CM stimulus (in phase), and the greater the depth of the CM, the greater the added LM required for nulling. Controls indicated that the failure of the LM+CM stimulus to elicit vergence responses (after nulling the distortion products) was not due to differences in the amplitude or timing of the inputs from the two eyes. These data suggest that disparities defined by LM and CM are sensed by independent mechanisms.
Acknowledgement: NEI Intramural Program
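The two stimulus classes here differ only in whether a sinusoid is added to a binary-noise carrier (LM) or multiplies its contrast (CM). A minimal Python sketch of that construction (sizes, modulation depths, and noise contrast are illustrative placeholders, not the calibrated stimuli used in the study):

    import numpy as np

    size, cycles = 256, 4                    # image size (pixels), grating cycles
    x = np.arange(size) / size
    grating = np.tile(np.sin(2 * np.pi * cycles * x), (size, 1))

    rng = np.random.default_rng(1)
    noise = rng.choice([-1.0, 1.0], size=(size, size))   # binary noise carrier

    mean_lum, m, c_noise = 0.5, 0.2, 0.3
    # LM: the sinusoid modulates luminance, added on top of the noise.
    lm = mean_lum * (1.0 + m * grating + c_noise * noise)
    # CM: the sinusoid modulates the *contrast* of the zero-mean noise,
    # leaving mean luminance unchanged.
    cm = mean_lum * (1.0 + c_noise * (1.0 + m * grating) * noise)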
41.12, 8:30 am
Local and non-local effects on surface-mediated stereoscopic depth
Barbara Gillam1 (b.gillam@unsw.edu.au), Harold Sedgwick2, Phillip Marlow1; 1School of Psychology, University of New South Wales, Australia, 2SUNY College of Optometry
The magnitude and precision of stereoscopic depth between two probes can be mediated by the disparity of each relative to a common background surface (e.g., Glennerster & McKee, VR 1999). For example, the common underestimation of background surface slant produces a bias in relative probe depth. Gillam & Sedgwick (ARVO 2000) have shown that this bias is reduced when flanking surfaces in the frontal plane reduce underestimation of background surface slant. Here we manipulate the relations between flanking surfaces, background surface, and probes to explore the propagation of slant across surfaces and from surfaces to isolated objects. In our first experiment observers set two disc probes to apparent equidistance when viewed naturally against a horizontally slanted rectangular random-dot surface whose height was varied. Frontal-plane random-dot rectangles abutted this surface above and below. The bias in the probes increased as surface height/flanker distance increased, but even with flankers 4.4 deg from the probes, bias was less when flankers were present. In a second, similar experiment the flankers were slanted and a central background surface, when present, was in the frontal plane. For flankers alone, probe bias did not diminish up to a 4.4 deg separation of flankers and probes. The addition of a central frontal-plane background surface strongly reduced this bias as the separation of the flankers increased, regardless of whether the central surface filled the gap between flankers or was of constant height in the centre. These results may be related to changes in contrast effects from flankers to background. Stereoscopic depth between probes is thus influenced by a common background surface, by neighboring surfaces acting (contiguously or non-contiguously) on the background surface, and by distant surfaces acting directly on the probes. These local and non-local effects are determined by the overall configuration of probes and surfaces.
Acknowledgement: ARC DP0774417 to BG and NSF BCS-0001809 to HS & BG

41.13, 8:45 am
Biases and thresholds for depth perception from monocular regions of binocular scenes
Julie M. Harris1 (julie.harris@st-andrews.ac.uk), Danielle Smith1; 1Vision Lab, School of Psychology, University of St. Andrews, St. Andrews, Scotland, UK
Monocular regions in binocularly viewed scenes are usually found near a step-change in depth, between a foreground object and the background scene. They occur because one eye can see a portion of the background that is occluded in the other eye’s view by the foreground object itself. That such regions have a role in the perception of depth is clear, but what is less well understood is the nature of the visual mechanisms that deliver the perceived depth. For example, for most configurations, viewing geometry predicts that monocular regions do not specify a unique depth. Instead, the possible depth interpretations can be expressed in terms of a ‘depth constraint region’. This specifies the minimum possible depth, and sometimes a maximum. Previous research has shown that perceived depth is often close to the minimum possible depth. Here we used a depth discrimination experiment to directly compare thresholds and perceived depth, for both conventional binocular disparity and depth from monocular occlusions. Forced-choice psychophysical methods were used, where observers were shown a target and comparison stimulus, and asked which contained the greater depth step. Targets contained either conventional binocular disparity, or depth from monocular regions. Comparison stimuli contained conventional binocular disparity. Depth discrimination thresholds were considerably elevated for depth from monocular regions compared with conventional binocular disparity. Depth biases were also found. There were large individual differences, but some biases were consistent with observers perceiving less depth from monocular occlusions than the depth constraint region would predict. Our experiments suggest that a different, less precise mechanism is at work in the perception of depth from monocular occlusions than that available for the perception of depth from conventional binocular disparity.
41.14, 9:00 am
Depth magnitude and binocular disparity: a closer look at patent vs. qualitative stereopsis
Debi Stransky1 (debis@yorku.ca), Laurie Wilcox1; 1Centre for Vision Research, York University
Ogle (1952; 1953) used measurements of perceived depth as a function of disparity to divide human stereopsis into patent (quantitative) and qualitative categories. Patent depth percepts result from a range of disparities within and outside Panum’s fusional zone, while qualitative percepts result only from very large disparities well beyond the fusional limit. While this dichotomy is widely recognized, it is not clear if it is merely descriptive, or if it reflects an underlying neural dichotomy. If the latter is true, then patent and qualitative depth percepts should be associated with other distinguishing properties. In this series of experiments we evaluate the possibility that the 1st/2nd-order dichotomy proposed by Hess & Wilcox (1994) maps onto Ogle’s patent/qualitative distinction. We used a magnitude estimation technique to evaluate the amount of depth perceived from test disparities within and beyond the fusable range. In separate blocks of trials we used stimuli designed to activate either the luminance-based 1st-order or the contrast-based 2nd-order system. The stimuli were windowed, 1D luminance noise patches that were presented either as correlated or uncorrelated stereopairs, which activated 1st- and 2nd-order stereopsis respectively. As anticipated, we find that at small disparities our 1st-order stimuli
provide patent depth percepts that follow geometric predictions. However, our data also reveal that quantitative depth percepts are provided by 2nd-order stereopsis at small disparities, but the amount of depth is less than predicted by viewing geometry. Further, depth percepts become qualitative as the stimuli become diplopic and are mediated solely by 2nd-order mechanisms. Our results show that Ogle’s qualitative stereopsis reflects the operation of a distinct neural mechanism designed to provide crude depth estimates for diplopic stimuli. The situation for stimuli within Panum’s area is not as straightforward, as both 1st- and 2nd-order mechanisms provide quantitative depth information in this range.
Acknowledgement: NSERC to LMW

41.15, 9:15 am
Shape aftereffects require awareness
Timothy Sweeny1 (timsweeny@gmail.com), Marcia Grabowecky1,2, Satoru Suzuki1,2; 1Department of Psychology, Northwestern University, 2Interdepartmental Neuroscience Program, Northwestern University
High-level face identity aftereffects require awareness (e.g., Moradi et al., 2005), whereas low-level tilt aftereffects occur without awareness (e.g., He et al., 2001). Here we demonstrate that intermediate-level aspect-ratio aftereffects require awareness. During adaptation, we presented an ellipse with a tall or flat aspect ratio to one eye. In the unaware condition, a dynamic-masking pattern was dichoptically presented to prevent awareness of the adaptor ellipse. In the aware (control) condition, the dynamic-masking pattern was monoptically superimposed over the adaptor ellipse so that both were visible. This control condition allowed us to determine the degree to which preventing awareness reduced aspect-ratio aftereffects over and above local masking effects. During adaptation (2000 ms), participants reported the aspect ratio of the adaptor ellipse (if it was visible) so that we could verify the awareness manipulation. After adaptation, participants reported the aspect ratio of a briefly flashed (73 ms and backward-masked) test ellipse using the method of adjustment. In the aware condition, the aspect ratio of the flashed ellipse appeared distorted away from that of the adaptor (e.g., adaptation to a flat ellipse made a circle appear tall). No aftereffect occurred in the unaware condition. Lack of awareness, rather than low-level local masking, is likely to be the crucial factor because local inhibitory interactions in V1 would have been stronger in the monoptic-masking than in the dichoptic-masking condition (at least for neural spike rates; Macknik & Martinez-Conde, 2004). Furthermore, these aftereffects arise from adaptation in global aspect-ratio coding and not local curvature coding, because they showed substantial binocular transfer, and no aftereffects occurred for the component curved segments using the same paradigm.
These results suggest that adaptation of aspect-ratio coding requires high-level and/or recurrent processes that generate conscious awareness, similar to face identity coding and different from local orientation coding.
Acknowledgement: NSF BCS0643191, NIH R01EY018197-02S1

41.16, 9:30 am
Phase-Independent Contrast Combination in Binocular Vision
Jiawei Zhou1 (zhoujw@mail.ustc.edu.cn), Chang-Bing Huang2, Zhong-Lin Lu2, Yifeng Zhou1; 1Vision Research Lab, School of Life Science, USTC, Hefei, P.R. China, 2Laboratory of Brain Processes (LOBES), Departments of Psychology, University of Southern California, Los Angeles, CA 90089, USA
How the visual system combines information from the two eyes to form a unitary binocular representation of the external world is a fundamental question in vision science that has been the focus of many psychophysical and physiological investigations. Ding and Sperling (2006) measured the perceived phase of the cyclopean image as a function of the contrast ratio between two monocular sinewave gratings of the same spatial frequency but different phases, and developed a binocular combination model in which each eye exerts gain control on the other eye’s signal and over the other eye’s gain control. Critically, the relative phase of the two sinewaves plays a central role. We used the Ding-Sperling paradigm but measured both the perceived contrast and phase of cyclopean images in seventy-two combinations of base contrast, interocular contrast ratio, eye of origin of the probe, and relative phase. We found that the perceived contrast of cyclopean images was independent of the relative phase of the monocular sinewave gratings, although the perceived phase of cyclopean images depended on the relative phase and contrast ratio of the monocular images. We modified the Ding-Sperling binocular combination model in two ways: (1) phase and contrast of the cyclopean images are computed in separate pathways, although with shared cross-eye contrast-gain control; and (2) phase-independent local energy from the two monocular images is used in contrast combination, after additional within-eye contrast gain control. With five free parameters, the model yielded an excellent account of data from all the experimental conditions.
Acknowledgement: Supported by NEI, NSF of China.
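Cross-eye contrast gain control of the general kind invoked here can be illustrated schematically: each eye’s image is weighted by a gain that shrinks as the other eye’s contrast energy grows, and the perceived phase is read off the combined waveform. The Python sketch below conveys the general idea only; it is not the authors’ fitted five-parameter model, and the gain equation and constants are assumptions:

    import numpy as np

    def cyclopean(i_left, i_right, gamma=1.0):
        # Schematic combination: each eye's image is weighted by a gain that
        # shrinks as the other eye's contrast energy grows (an assumption,
        # standing in for the full gain-control circuit of the model).
        e_left, e_right = np.mean(i_left ** 2), np.mean(i_right ** 2)
        g_left = 1.0 / (1.0 + gamma * e_right)
        g_right = 1.0 / (1.0 + gamma * e_left)
        return (g_left * i_left + g_right * i_right) / (g_left + g_right)

    x = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
    left = 0.4 * np.sin(x)                      # higher-contrast eye
    right = 0.1 * np.sin(x + np.pi / 4.0)       # lower-contrast eye, phase-shifted
    combo = cyclopean(left, right)

    # Perceived phase of the cyclopean sinewave, recovered by projecting the
    # combined waveform onto sine and cosine components.
    phase = np.arctan2(np.sum(combo * np.cos(x)), np.sum(combo * np.sin(x)))
    print(f"cyclopean phase = {np.degrees(phase):.1f} deg")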
41.17, 9:45 am
Perisaccadic Stereopsis from Zero Retinal Disparity
Zhi-Lei Zhang1 (zhilei_z@berkeley.edu), Christopher Cantor1, Clifton Schor1; 1School of Optometry, University of California at Berkeley
A stimulus flashed immediately before a saccade is perceived as mislocalized in the direction of the eye movement. This perisaccadic positional shift varies with the time from the flash to the saccade onset (TSO). We have shown that this shift is also strongly affected by the stimulus luminance for a single flash: the shift is larger with low than with high luminance. We also found an interaction between flashes presented asynchronously to the same eye, in which a flash with a longer TSO is shifted more than a second flash with a shorter TSO. The results suggest a low-level mechanism in which the visual system combines eye position information with a persistent neural representation of the retinal image (temporal impulse response) to estimate visual direction during saccadic eye movements. These results also provided the foundation for studies of a head-centric disparity mechanism in which asynchronous dichoptic foveal flashes presented before a saccade produced different amounts of perisaccadic shift in each eye and resulted in a depth percept from the head-centric disparity of the zero-retinal-disparity stimulus. This head-centric disparity also cancelled a retinal disparity of opposite sign, illustrating an interaction between the retinal and head-centric disparity estimates. This is the first experimental evidence that demonstrates a head-centric disparity mechanism for stereopsis in humans.
Acknowledgement: NSF-BCS-0715076

Attention: Time
Monday, May 10, 8:15 - 10:00 am
Talk Session, Royal Ballroom 4-5
Moderator: Khena Swallow

41.21, 8:15 am
Do We Experience Events in Terms of Time or Time in Terms of Events?
Brandon M. Liverence1 (brandon.liverence@yale.edu), Brian J. Scholl1; 1Perception & Cognition Lab, Department of Psychology, Yale University
In visual images, we perceive both space (as a continuous visual medium) and objects (that inhabit space). Similarly, in dynamic visual experience, we perceive both continuous time and discrete events. What is the relationship between these units of experience? The most intuitive answer is similar to the spatial case: time is perceived as an underlying medium, which is later segmented into discrete event representations. Here we explore the opposite possibility -- that events are perceptually primitive, and that our subjective experience of temporal durations is constructed out of events. In particular, we explore one direct implication of this possibility: if we perceive time in terms of events, then temporal judgments should be influenced by how an object’s motion is segmented into discrete perceptual events, independent of other factors. We observed such effects with several types of event segmentation. For example, the subjective duration of an object’s motion along a visible path is longer with a smooth trajectory than when the same trajectory is split into shorter independent pieces, played back in a shuffled order (a path-shuffling manipulation). Path shuffling apparently disrupts object continuity -- resulting in new event representations, and flushing detailed memories of the previous segments. In contrast, segmentation cues that preserve event continuity (e.g., a continuous path but with segments separated by sharp turns) shorten subjective durations relative to the same stimuli without any segmentation (e.g., when the segments are bound into a single smoothly curving path, in trajectory inflection manipulations). In all cases, event segmentation was manipulated independently of psychophysical factors previously implicated in time perception, including overall stimulus energy, attention and predictability. These and other results suggest a new way to think about the fundamental relationship between time and events, and imply that time may be less primitive in the mind than it seems to be.
41.22, 8:30 am
The Attentional Boost Effect and Temporal Synchrony
Khena Swallow1,2 (swall011@umn.edu), Yuhong Jiang1,2; 1Department of Psychology, University of Minnesota, 2Center for Cognitive Science, University of Minnesota
Increasing attention to one task typically impairs performance in a second task. However, the opposite can also occur: encoding is facilitated for images that are presented at the same time that attention to an unrelated target-detection task increases (“the attentional boost effect”; ABE). One potential explanation for the ABE is that the appearance of a target orients attention to the moment in time that the target appeared, facilitating perceptual processing of concurrently presented information (temporal orienting hypothesis). Accordingly, an image whose presentation overlaps in time with the presentation of the target will receive additional attention and processing resources. Alternatively, the ABE may result from temporal grouping (temporal grouping hypothesis). In previous experiments the images and targets always onset at the same time. Because common onset is a strong cue for temporal grouping, participants may have grouped the image and target into a single temporal entity. If this is the case, then increasing attention to the target should lead to enhanced processing of the entire temporal group, resulting in the ABE. To address these two hypotheses, common onset and temporal overlap were manipulated. Experiment 1 demonstrated that the ABE occurs even when the target appears 100 ms later than the image. Experiments 2 and 3 showed that even though common onset is not necessary for the ABE, temporal overlap is. In these experiments the target could overlap with the image in time, or it could appear over a mask 100 ms before or 100 ms after the image. Consistent with the temporal orienting hypothesis, the ABE was eliminated when the target did not overlap with the image in time. Based on these data, we suggest that perceptual processing of images presented with targets is enhanced because attention is oriented to the moment in time that the target appeared.
Acknowledgement: NIH and the University of Minnesota Institute for Marketing Research

41.23, 8:45 am
Attentional modulation of temporal contrast sensitivity
Isamu Motoyoshi1 (motoyosi@apollo3.brl.ntt.co.jp); 1NTT Communication Science Labs, NTT
Recent psychophysical studies show that attention not only raises sensitivity for visual targets, but also enhances spatial resolution. On the other hand, little is known about the effect of attention on temporal properties. A few studies suggest that attention actually reduces temporal resolution for suprathreshold stimuli (e.g., Yeshurun & Levy, 2003), but it is unclear whether this reflects changes in the general properties of the visual system. In the present study we examined the effect of attention on contrast sensitivity over a range of temporal frequencies. Eight observers were asked to detect a drifting grating (2.2 c/deg, 0 to 40 Hz) presented gradually at one of eight possible locations (4.6 deg eccentricity) on a uniform background while performing a letter recognition task in a central RSVP display (dual task). The results showed that the removal of attention by the central task greatly reduced contrast sensitivity, particularly at low temporal frequencies, resulting in a band-pass-shaped CSF.
The sensitivity ratio (90% correct response) between the single- and dual-task modes was 7.2 for the static grating, far greater than those obtained with flashed gratings (~1.5; e.g., Carrasco, Talgar & Eckstein, 2000), but was 1.2 for the drifting grating of 40 Hz. A systems analysis revealed that the removal of attention reduced the overall gain and increased the transient factor of the CSF, but had little effect on the cut-off temporal frequency. These results support the notion that attention extensively modulates sensitivity to sustained, but not transient, visual inputs.

41.24, 9:00 am
Silent updating: cross-dimensional change suppression
Jordan Suchow1 (suchow@fas.harvard.edu), George Alvarez1; 1Harvard University
A vivid, color-changing display was created by continuously cycling 300 randomly-colored dots through the color wheel. Surprisingly, when this display was rotated about its center, the color change appeared to halt. In one experiment, the display alternated between rotating and remaining still, and observers were asked to adjust the rate of color change when the annulus was still to match the apparent rate of color change when the annulus moved. We found that, as the angular velocity of the annulus increased, the matched rate of color change decreased. At high angular velocities (180 deg/s), observers reported a nearly complete halt in color change. We suggest that the transients produced by the dots’ motion cause transients produced by color change to go unnoticed, updating silently. Next, we examined how transients produced by continuous changes in two dimensions, position (motion) and luminance (twinkle), interacted when one was dominant. Twelve dots appeared in a ring, centered about fixation (dot radius = 0.5 deg, ring radius = 10 deg). In a blocked design, observers were asked to report which one of the 12 dots moved or twinkled while, simultaneously, all of the dots changed along the other dimension. We measured thresholds for detecting the specified change, and found that they rose by as much as a factor of four when the amplitude of change along the irrelevant dimension was increased. The reported interference suggests that transient signals produced by one dimension can suppress transient signals produced by other dimensions; this may play an important role in controlling which changes in the visual field capture attention, and which fail to capture attention, updating silently.

41.25, 9:15 am
Competing for consciousness: Reduced object substitution masking with prolonged mask exposure
Stephanie Goodhew1 (s.goodhew@psy.uq.edu.au), Troy Visser1, Ottmar Lipp1, Paul Dux1; 1School of Psychology, University of Queensland
In object substitution masking (OSM) a sparse, temporally trailing four-dot mask impairs target identification, even though it has different contours from, and does not spatially overlap with, the target (Di Lollo, Enns, & Rensink, 2000; Enns & Di Lollo, 1997). OSM is thought to reflect “perceptual hypothesis testing” whereby iterative re-entrant processing loops are initiated from higher cortical areas to lower ones in an effort to confirm the identity of coarsely coded visual stimulation. Because the target is presented only briefly while the mask remains on the display, this hypothesis testing results in the mask being confirmed as the identity of the stimulus, thus excluding the target from consciousness.
Here, we demonstrate a previously unknown characteristic of OSM: at prolonged (e.g., ~600 ms) mask durations, observers show reduced masking relative to intermediate mask durations (e.g., ~250 ms). In our experiments, observers identified the location of the gap (left versus right) in a Landolt C target, which was trailed by a four-dot mask for various durations (Supplementary Figure 1A). Target identification accuracy decreased up to mask durations of 240 ms, but then improved at longer durations (Supplementary Figure 1B). This recovery was obtained across a range of stimulus presentation conditions using both trained and naïve observers. Our findings demonstrate that although initially only one of two spatiotemporally adjacent stimuli presented to the visual system may gain access to consciousness, the “losing” stimulus is not irreversibly lost to awareness.

41.26, 9:30 am
Delayed reentrant processing impairs visual awareness: An object substitution masking study
Paul E. Dux1 (paul.e.dux@gmail.com), Troy A. W. Visser1, Stephanie C. Goodhew1, Ottmar V. Lipp1; 1School of Psychology, University of Queensland
In object substitution masking (OSM) a sparse, common-onsetting mask impairs conscious target perception if it temporally trails the target and spatial attention is dispersed. Di Lollo et al.’s (2000) Reentrant Processing Model explains OSM as reflecting the interaction of feedforward and feedback processes in the brain. Specifically, upon presentation of a target and mask, a coarsely coded representation of both stimuli progresses from V1 to anterior brain regions (the feedforward sweep). Due to the low resolution of this information, feedback/reentrant processing is employed to confirm the identity of the visual stimulation. According to this model, dispersing spatial attention delays feedforward processing, increasing the likelihood that only the mask remains visible once reentrant processing is initiated. Therefore, the mask will substitute for the target in consciousness. Notably, the Reentrant Processing framework predicts that OSM will be elicited when either feedforward or feedback processing is delayed/impaired, as both will increase the probability that only the mask remains visible once reentrant analysis begins. Thus, it should be possible to observe OSM for spatially attended stimuli if feedback processing from anterior regions is delayed. We presented subjects with a standard OSM paradigm (Landolt C target, four-dot mask) while they performed a difficult arithmetic task known to engage brain areas involved in reentrant processing (prefrontal and parietal cortex). All stimuli appeared in the same spatial location and, employing
a standard dual-task protocol, the arithmetic and OSM tasks had either a short (100 ms) or long (800 ms) stimulus onset asynchrony (SOA). Increased OSM was observed at the short relative to the long SOA, and this was more pronounced when subjects performed, rather than ignored, the arithmetic task. The results support a key prediction of Di Lollo et al.’s Reentrant Processing Model: if feedback processing is delayed, then OSM can be observed for spatially attended objects.

41.27, 9:45 am
Explicit Auditory Discrimination Improves During the Visual Attentional Blink
Keren Haroush1 (kharoush@gmail.com), Shaul Hochstein1,2; 1Department of Neurobiology, Silberman Institute of Life Sciences, Hebrew University, Jerusalem, Israel, 2Interdisciplinary Center for Neural Computation, Hebrew University, Jerusalem, Israel
Gating of sensory information is an elementary function, necessary for survival. In a series of studies, we investigated how sensory modalities interact in this process. We previously probed implicit allocation of multisensory attention during the Attentional Blink (AB), a failure to report a second target closely following first-target detection (Haroush et al., ECVP 2007). In that study, we examined AB effects on the event-related Mismatch Negativity (MMN), which is ‘automatically’ elicited upon the appearance of a deviant within a sequence of sounds. We found that MMN amplitude surprisingly increased during the AB, presumably reflecting enhanced implicit auditory change detection processes. Here, we examined whether and how this effect translates into behavior, testing explicit auditory discrimination during a visual AB. Subjects were asked to identify two visual targets (T1 & T2) embedded within a rapid distractor stimulus stream, and simultaneously perform an auditory discrimination task in a two-alternative forced-choice, 2-down-1-up staircase paradigm. The first sound appeared at the beginning of the trial (before visual T1), and the second simultaneously with visual T2, at variable SOAs. Three auditory protocol blocks were used, with or without fixed reference tones, to distinguish sensory classification from working-memory (WM) dependent discrimination (Nahum et al., 2009). When auditory discrimination was WM-dependent, auditory performance during visual attentional blink trials significantly improved compared to trials where both visual targets were correctly reported. In contrast, sensory classification alone did not benefit from the visual AB, presumably because of its independence from WM. We conclude that attention-controlled WM resources that could not be used by the visual system during the AB are freed to be employed by the auditory system. Notably, this attention allocation is evident despite the additional resources taxed by task switching. These results have implications for current theories of multiple information-processing bottlenecks.
Acknowledgement: Supported by a grant from the Israel Science Foundation
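The 2-down-1-up staircase named above is a standard adaptive procedure: the stimulus is made harder after two consecutive correct responses and easier after any error, so it converges near the 70.7%-correct point. A minimal Python sketch (starting level, step size, trial count, and the simulated observer are illustrative assumptions):

    import random

    def staircase_2down1up(respond, start=10.0, step=1.0, n_trials=60):
        # `respond(level)` should return True when the response is correct.
        level, streak, levels = start, 0, []
        for _ in range(n_trials):
            levels.append(level)
            if respond(level):
                streak += 1
                if streak == 2:                  # two correct in a row: harder
                    level = max(level - step, 0.0)
                    streak = 0
            else:                                # any error: easier
                level += step
                streak = 0
        return levels                            # threshold ~ mean of reversal levels

    # Example run with a simulated observer whose accuracy grows with level.
    track = staircase_2down1up(lambda lvl: random.random() < min(0.95, 0.5 + lvl / 25.0))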
Perception and action: Pointing, reaching, and grasping
Monday, May 10, 11:00 - 12:30 pm
Talk Session, Royal Ballroom 1-3
Moderator: Eli Brenner

42.11, 11:00 am
Why we need continuous visual control to intercept a moving target
Eli Brenner1 (e.brenner@fbw.vu.nl), Jeroen BJ Smeets1; 1Human Movement Sciences, VU University Amsterdam
It is obviously advantageous to continuously adjust one’s movements if a target that one wants to intercept moves in an unpredictable manner. We examined whether continuous visual control is also useful for intercepting targets that move predictably. We have previously argued that the accuracy with which people hit moving targets is close to what one would expect from the known limits of human vision. This argument obviously rests on our having chosen the correct measures for representing human vision. We therefore designed the present study to directly examine to what extent continuously controlling one’s movements on the basis of updated visual information is beneficial when intercepting targets that move in a completely predictable manner. Subjects hit virtual targets as they passed a goal. Just before reaching the goal, the target could briefly disappear from view. This was achieved by giving a section of the surface across which the target moved (just before it reached the gap) the same colour as the target. The advantage of making the target disappear in this manner is that subjects can anticipate from the beginning of each trial when the target will disappear, so they can plan their movements in accordance with the time that the information will be available. Both the accuracy and the precision with which the subjects hit the target were lower if the target briefly disappeared from view just before being hit. The extent to which the precision depended on the time for which the target was invisible is consistent with predictions based on continuous control and the limits of human vision. Thus we can conclude that it is advantageous to have accurate visual information throughout an interception movement, even if the target moves completely predictably, because the resolution of vision is a limiting factor when intercepting moving objects.

42.12, 11:15 am
The ‘automatic pilot’ for the hand in patients with hemispatial neglect
Stephanie Rossit1 (srossit@uwo.ca), Robert McIntosh2, Paresh Malhotra3, Stephen Butler4, Monika Harvey5; 1Department of Psychology, University of Western Ontario, London, Canada, 2Department of Psychology, University of Edinburgh, Edinburgh, UK, 3Division of Neurosciences and Mental Health, Imperial College London, London, UK, 4Department of Psychology, University of Strathclyde, Glasgow, UK, 5Department of Psychology, University of Glasgow, UK
Left hemispatial neglect manifests itself in a rightward bias in perceptual tasks, yet the presence of this neglect-specific bias in visuomotor control remains a matter of debate. Here we investigated the ability of neglect patients (compared to patients without neglect and healthy controls) to rapidly adjust or interrupt (stop) their ongoing reach in response to a rightward or leftward target jump. Although neglect patients successfully corrected their reaches towards the left and right target shifts, these corrections were significantly slowed for leftward jumps. Interestingly though, in the stop condition neglect patients performed involuntary corrections towards the leftward target, similar to those seen in the control groups. Furthermore, and unexpectedly, we found that neglect patients were impaired at stopping their movements in response to target jumps towards both sides of space. We argue that, in contrast to optic ataxic patients, who suffered from lesions in their dorsal visual stream, neglect patients show an ‘automatic pilot’ for reaching, yet this ‘pilot’ is markedly slowed when the target jumps in a leftward direction.
We also suggest that the inability to stop an ongoing reach might be related to non-lateralized deficits in response inhibition.
Acknowledgement: This work was supported by a grant (SFRH/BD/23230/2005) from the Foundation for Science and Technology (FCT, Portugal) to S. Rossit.

42.13, 11:30 am
Neural substrates of target selection for reaching movements in superior colliculus
Joo-Hyun Song1 (jhsong@ski.org), Robert Rafal2, Robert McPeek1; 1The Smith-Kettlewell Eye Research Institute, San Francisco, CA, 2Bangor University, UK
The primate superior colliculus (SC) is important for the execution of saccadic eye movements, but recent evidence suggests that it also plays a role in the higher-level process of target selection for saccadic and pursuit eye movements, as well as in covert attention shifts. Thus, we speculated that SC activity may participate in a generalized salience map used for target selection for a variety of purposes. To test this hypothesis, we recorded the activity of isolated intermediate-layer SC neurons in monkeys trained to perform a reach target selection task. The monkeys were rewarded for maintaining fixation and reaching to touch an odd-colored target presented in an array of distractors. Even though no eye movements were made in this task, many neurons discriminated the target before the onset of the reach, and this activity typically persisted throughout the trial, consistent with SC involvement in target selection for reaching movements. To further determine whether this SC activity plays a causal role in reach target selection, we tested the effects of temporary focal SC inactivation on monkeys’ performance in two reach target selection tasks. In one task, a target was followed after a variable SOA by a distractor, and monkeys were rewarded for reaching to the target. In the second task, two potential targets were shown and a cue at the fovea indicated which was the target. Monkeys were required to maintain eye fixation throughout each trial. In both tasks, after SC inactivation, when the target appeared in the inactivated part of the visual field,
monkeys made more reaching errors to the distractor. In contrast, monkeys were unimpaired when the target was presented without distractors. These results establish that, in addition to its role in saccades, the SC plays a causal role in target selection for reaching movements.
Acknowledgement: National Eye Institute grant R01-EY014885, Core grant P30-EY006883, and R.C. Atkinson Fellowship Award

42.14, 11:45 am
Developmental studies of visual-motor integration: A comparative approach
Lynne Kiorpes1 (lynne@cns.nyu.edu), Gardiner von Trapp1, Amelie Pham1, Jesse Lingeman2, Kasey Soska2, Karen Adolph2, Claes von Hofsten3, Kerstin Rosander3; 1Center for Neural Science, FAS, New York University, 2Department of Psychology, FAS, New York University, 3Institutionen for Psykologi, Uppsala Universitet
Effortless, fluid integration of perception and action is ubiquitous during successful navigation of our ever-changing environment. How this integration plays out in real time is understudied, largely because visual perception and motor actions are often studied piecemeal by different investigators. We have taken a comparative, developmental approach to this problem. We used a dynamic reaching paradigm to track developmental changes in visually guided motor control, investigating how infants calibrate and refine motor actions over development. We conducted parallel studies in human and macaque infants and found striking similarities across primate species. We tested visually guided reaching in 50 human infants cross-sectionally (6–15 mos) and 2 macaque monkeys longitudinally (5–6 mos). We measured handedness and latency as a function of target location, and localization and grasp orientation errors. Infants were seated in a swivel chair that was rotated to face a vertical reaching board at the beginning of each trial. Target position was varied in a pseudo-random order trial-to-trial. All infants reached reliably for the targets. Human infants showed a slight bias for right-hand reaches compared to left; they occasionally reached with both hands. Monkeys performed similarly, except that they had a slight left bias. Importantly, however, both species’ hand choices changed systematically with object position across the reaching space. The shortest reach latencies were to the right and left of midline, with longer latencies for midline and extreme lateral locations. Localization and grasp orientation errors were quickly corrected in human infants and declined with age; monkeys rarely mis-reached, but grasp orientation errors declined over sessions. Human and monkey infants seamlessly used perceptual information to plan motor actions across visual space, to guide actions adaptively, and to correct slight errors in execution. Even the youngest infants were adept at perceptual-motor integration, and visually guided actions only became more fluid over development.
Acknowledgement: NIH R01EY05864, R37HD33486

42.15, 12:00 pm
Mapping Shape to Visuomotor Mapping: Generalization to Novel Shapes
Marc Ernst1 (marc.ernst@tuebingen.mpg.de), Loes van Dam1; 1Max Planck Institute for Biological Cybernetics, Tübingen, Germany
The accuracy of visually guided motor movements largely depends on the stability of the sensory environment that defines the required response mapping. Thus, as the environment keeps changing, we constantly have to adapt our motor responses to stay accurate. The more sensory information we receive about the current state of the environment, the more accurate we may be.
Recruitment of additional cues that correlate with the environment can therefore aid in this adaptation process. It has previously been shown that subjects recruit previously irrelevant cues to help them switch between two specific visuomotor mappings (e.g., Martin et al., 1996; van Dam et al., 2008). However, in rapidly changing environments additional cues will only be of real benefit if it is possible to learn a more continuous correlation between the cue and the required visuomotor response. Here we investigate transfer of explicitly trained cue-element/response-mapping combinations to other cue elements from the same continuous scale (a shape morph). In our experiment subjects performed a rapid pointing task to targets for which we manipulated the visuomotor mapping. During training, subjects simultaneously learned two mappings to two different target shapes. The target shapes were taken from a set of shape morphs (we morphed between spiky and circular shapes). After five sessions of 180 training trials, using catch trials, we tested subjects’ performance on different target shape morphs that could come from either an interpolation or an extrapolation along the shape-morph axis. Results show that for 7 out of the 12 subjects, learning is not restricted to the trained shapes but interpolates, and partially also extrapolates, to other shapes along the morph axis. We conclude that participants implicitly learned the newly defined shape axis when trained with two distinct visuomotor mappings, and that they generalize their visuomotor mappings to this new dimension.
Acknowledgement: HFSP Grant on Mechanisms of Associative Learning in Human Perception

42.16, 12:15 pm
Divergent representations of manipulable and non-manipulable objects revealed with repetition blindness
Irina Harris1 (irina@psych.usyd.edu.au), Alexandra Murray1, William Hayward2, Claire O’Callaghan1, Sally Andrews1; 1School of Psychology, University of Sydney, 2Department of Psychology, University of Hong Kong
Neuroimaging and neuropsychological studies suggest that manipulable objects (i.e., objects associated with particular actions) have distributed representations that reflect not only their visual features but also the actions they afford. This study used rapid serial visual presentation (RSVP) to investigate the nature of the representations underlying identification of manipulable objects. When stimuli are presented at RSVP rates, items repeated within 500 ms of each other are frequently missed, a phenomenon known as repetition blindness (RB). RB is thought to occur because repeated stimuli activate the same abstract memory representation (type) but are not individuated into distinct visual episodes (tokens) due to the spatio-temporal constraints of RSVP. In two experiments that employed different stimulus sets (photographs vs line drawings), observers viewed RSVP streams containing three objects and six masks and attempted to identify the objects. The first and third objects in the stream were either the same object repeated, or distinct objects, and were either Action (i.e., manipulable) or Non-Action (non-manipulable) objects. There were two main findings. First, joint accuracy for reporting two distinct Action objects was considerably lower than for Non-Action objects, even when the two object classes were equated in terms of ease of identification. Second, whereas Non-Action objects induced RB independent of the objects’ orientation, in keeping with previous findings (Harris & Dux, 2005; Hayward et al., in press), there was no RB at all for Action objects.
Instead, significant priming was obtained when an Action object was repeated in the same orientation. Taken together, these findings implicate independent sources of visual and motor information, which require integration for successful identification. Under RSVP conditions, this renders Action objects vulnerable to interference from other objects associated with conflicting motor programs, but facilitates individuation of repeated objects associated with the same action.
Acknowledgement: Supported by Australian Research Council grant DP0879206.

Object recognition: Categories
Monday, May 10, 11:00 - 12:30 pm
Talk Session, Royal Ballroom 4-5
Moderator: Sharon Gilaie-Dotan

42.21, 11:00 am
Location information in category-selective areas: retinotopic or spatiotopic?
Julie Golomb1 (jgolomb@mit.edu), Nancy Kanwisher1; 1McGovern Institute for Brain Research, MIT
Challenging the classic view that the ventral and dorsal visual streams correspond to “what” and “where” pathways, recent studies have reported the existence of location information, independent of object category, in traditionally object-selective regions of ventral visual cortex (e.g., Schwarzlose et al., 2008, PNAS). Does the location information in these higher-order visual areas reflect pure retinotopic position, or absolute location independent of eye position? To find out, we functionally localized several regions in the ventral visual stream, including the lateral occipital complex (LOC), fusiform face area (FFA), parahippocampal place area (PPA), and extrastriate body area (EBA). We then used multivariate pattern analysis to measure category and location information within these areas during the main task, in which subjects viewed blocks of three different kinds of stimuli (faces,
scenes, bodies) in four different locations. The four locations varied in both eye position and stimulus position, generating pairs of conditions in which the stimuli occupied different retinotopic (eye-relative) positions but the same spatiotopic (absolute screen) position, the same retinotopic position but different spatiotopic positions, the same in both retinotopic and spatiotopic position, or different in both. In each of the object-selective regions, we found both location-invariant category information and category-invariant location information, replicating Schwarzlose et al. Moreover, the location information was specific to retinotopic coordinates. That is, the multi-voxel pattern of fMRI response was more similar (i.e., more highly correlated) across conditions that shared the same retinotopic position than across conditions that shared the same spatiotopic position. Furthermore, there was no evidence of any spatiotopic location information in any of the regions examined. In early visual cortex (identified using retinotopic mapping), no category information was apparent, and location information was again exclusively retinotopic. These results suggest that even higher-order category-selective visual areas code stimuli according to a retinotopic coordinate frame.
Acknowledgement: R01-EY13455 (NK), F32-EY020157 (JG)
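Pattern-similarity results like these are commonly computed by correlating multi-voxel response patterns from independent halves of the data across pairs of conditions, then comparing same-retinotopic with same-spatiotopic correlations. A minimal Python sketch of that comparison (the synthetic patterns and effect size are illustrative, not the study’s data):

    import numpy as np

    rng = np.random.default_rng(2)
    n_vox = 200
    # Illustrative region-of-interest patterns from two independent data
    # halves: condition B shares retinotopic position with A; condition C
    # shares only spatiotopic (screen) position with A.
    pattern_a = rng.normal(size=n_vox)
    pattern_b = pattern_a + rng.normal(scale=0.5, size=n_vox)
    pattern_c = rng.normal(size=n_vox)

    def pattern_r(p, q):
        # Pearson correlation between two multi-voxel patterns.
        return np.corrcoef(p, q)[0, 1]

    # Retinotopic coding predicts the first correlation exceeds the second.
    print(f"same retinotopic r = {pattern_r(pattern_a, pattern_b):.2f}, "
          f"same spatiotopic r = {pattern_r(pattern_a, pattern_c):.2f}")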
42.22, 11:15 am
The functional neuroanatomy of object agnosia: A case study
Christina Konen1 (ckonen@princeton.edu), Mayu Nishimura2, Marlene Behrmann2, Sabine Kastner1; 1Department of Psychology, Princeton University, Princeton, NJ, 2Department of Psychology, Carnegie Mellon University, Pittsburgh, PA
Object agnosia is defined as an object recognition deficit and typically results from lesions of occipito-temporal cortex. However, little is known about the cortical (re-)organization of visual representations and, specifically, object representations in agnosia. We used fMRI to examine cortical organization with respect to retinotopy and object-related activations in an agnosic patient and control subjects. Patient SM has a severe deficit in object and face recognition following damage to the right hemisphere sustained in a motor vehicle accident. Standard retinotopic mapping was performed to probe the organization of visual cortex in the lesioned and the non-lesioned hemisphere and to determine the lesion site relative to retinotopic cortex. Furthermore, we investigated object selectivity in ventral visual cortex using fMRI-adaptation paradigms. Retinotopic mapping showed regular patterns of phase reversals in both hemispheres. Surface analysis revealed that the lesion is located in the posterior part of the medial fusiform gyrus, anterior to V4 and dorsolateral to VO1/VO2. The contrast between object and blank presentations showed no significant difference in activated volume in SM compared to healthy subjects. fMRI adaptation induced by different types of objects, however, revealed differences in activation patterns. In healthy subjects, object-selective responses were found bilaterally in the anatomical location of the lesion site as well as posterior, dorsal, and ventral to the site. In SM’s right hemisphere, voxels immediately surrounding the lesion lacked object selectivity. Object-selective voxels were exclusively found approximately 5 mm posterior to the lesion. In SM’s left hemisphere, no object-selective responses were found in mirror-symmetric locations. Our data suggest that the right medial fusiform gyrus is critically involved in causing object agnosia and, furthermore, in adversely affecting object processing in structurally intact areas of the ventral pathway in the non-lesioned hemisphere. Future studies will show the impact of this isolated lesion on object processing in the dorsal pathway.

42.23, 11:30 am
Fast decoding of natural object categories from intracranial field potentials in monkey visual cortex
Maxime Cauchoix1,2 (cauchoix@cerco.ups-tlse.fr), Thomas Serre3, Gabriel Kreiman4,5,6, Denis Fize1,2; 1Université de Toulouse, UPS, Centre de Recherche Cerveau & Cognition, France, 2CNRS, CerCo, Toulouse, France, 3Brown University, Cognitive & Linguistic Sciences Department, 4Children’s Hospital Boston, Harvard Medical School, 5Swartz Center for Theoretical Neuroscience, Harvard University, 6Center for Brain Science, Harvard University
Object categorization involves very fast cognitive processes. Previous studies have demonstrated that both human and non-human primates can categorize natural scenes as containing animals very rapidly and accurately (Thorpe et al., 1996; Fabre-Thorpe et al., 1998). How such abstract categories can be accessed by visual processes remains an open question. Here, two macaque monkeys were trained to perform such animal categorization using natural scenes. During task performance, we recorded intracranial EEG from intermediate areas of the ventral stream of the visual cortex. Unlike standard brain imaging techniques, the electrocorticogram provides a good balance between time resolution and spatial coverage. Using multivariate pattern analyses, we quantified at millisecond resolution the amount of visual information conveyed by intracranial field potentials from 12 electrodes in one monkey and 16 in the other. As previously demonstrated in human epileptic patients (Liu et al., 2009), our analyses suggest that category information can be decoded as early as 100 ms post-stimulus. More importantly, we found that the readout performance of a linear classifier was significantly correlated with reaction times using single-trial signals from V2 and V4. These results suggest that categorical decisions could be supported by the early information conveyed by relatively low-level visual areas.
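Millisecond-resolution readout of the sort described is typically obtained by training a linear classifier on field-potential samples in a sliding window around each time point and tracing cross-validated accuracy over time. A minimal scikit-learn sketch (the array shapes, window length, and injected category signal are illustrative assumptions):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    n_trials, n_chan, n_time = 200, 16, 120      # trials x electrodes x samples
    lfp = rng.normal(size=(n_trials, n_chan, n_time))
    labels = rng.integers(0, 2, size=n_trials)   # e.g., animal vs non-animal
    lfp[labels == 1, :, 50:] += 0.3              # fake category signal after sample 50

    win = 10                                     # sliding-window length (samples)
    acc = []
    for t in range(n_time - win):
        # Features: all channels within the window, flattened per trial.
        X = lfp[:, :, t:t + win].reshape(n_trials, -1)
        acc.append(cross_val_score(LogisticRegression(max_iter=1000),
                                   X, labels, cv=5).mean())
    # acc[t] traces how much category information is decodable over time.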
42.24, 11:45 am
Giving the brain a hand: Evidence for a hand-selective visual area in the human left lateral occipito-temporal cortex
Stefania Bracci1 (stefania.bracci@unn.ac.uk), Magdalena Ietswaart1, Cristiana Cavina-Pratesi2; 1School of Psychology and Sciences, Northumbria University, Newcastle upon Tyne, UK, 2Department of Psychology, Durham University, UK
There is accumulating evidence for a “map” of brain areas specialized to represent and process specific categories of stimuli. Evolution may have played a significant role in shaping such specializations for particular categories. For example, stimuli which have played a critical role in social adaptive behaviour, such as bodies and faces, have dedicated cortical representations in visual cortex: the extrastriate body area (EBA) and the fusiform face area (FFA), respectively. The human hand, with its unique structure (e.g., finger-thumb opposition), has played a major role in human evolution, and is thus a prime candidate to be represented by a specialised brain area in the visual cortex. Using functional magnetic resonance imaging (fMRI), we provide the first evidence for a brain area selective for the human hand. In our study, 14 right-handed participants looked at 8 different categories of stimuli (whole bodies, body parts, hands, fingers, feet, robotic hands, tools and chairs) and performed a one-back task. The hand-selective area was found in all participants within the lateral occipital sulcus, and so we name this area the Lateral Occipital Hand Area (LOHA). LOHA responds more to hands compared to body stimuli and is anatomically separated from body-selective areas (e.g., EBA). In addition, LOHA responds more to hands compared to (i) hand parts (fingers), (ii) other single body parts (feet) and (iii) stimuli sharing functional traits with hands (robotic hands). These findings suggest that LOHA is specialized for representing and processing the shape of the human hand as a whole. Remarkably, in contrast with other body-selective sites, which are primarily lateralized in the right hemisphere (EBA and FFA), LOHA is localized in the left hemisphere. Overall, our study sheds further light on the functional organization of the human visual system and brings new evidence in support of the domain-specificity theory of visual object recognition.

42.25, 12:00 pm
Top-down engagement modulates the neural expressions of visual expertise
Assaf Harel1,2 (assaf.harel@nih.gov), Sharon Gilaie-Dotan3,4,5, Rafael Malach5, Shlomo Bentin2,6; 1Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, 2Department of Psychology, Hebrew University of Jerusalem, Jerusalem, Israel, 3Institute of Cognitive Neuroscience, University College London, UK, 4Wellcome Trust Centre for Neuroimaging, University College London, UK, 5Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel, 6Center of Neural Computation, Hebrew University of Jerusalem, Jerusalem, Israel
Perceptual expertise is traditionally associated with enhanced brain activity in response to objects of expertise in category-selective visual cortex, primarily face-selective regions. We reassessed this view, asking (1) what are the neural manifestations of visual expertise, and are they confined to category-selective cortex, and (2) is expertise-related activity an automatic process, or does it depend on the top-down engagement of the experts with their objects of expertise? We conducted two fMRI studies comparing neural manifestations of car expertise in the absence of task constraints (Experiment 1) and when the task relevance of cars was explicitly manipulated (Experiment 2). We unveiled extensive expertise-related activity throughout the visual cortex, starting as early as V1, which extended into non-visual areas. However, when cars were task-irrelevant, the expertise-related activity drastically diminished, indeed becoming similar to the activity elicited


by cars in novices. We suggest that expertise entails voluntary top-down engagement of multiple neural networks, in addition to the stimulus-driven activation associated with perceptual mechanisms.
Acknowledgement: This work was supported by the National Institute of Mental Health (R01 MH 64458 to SB) and by the Israel Foundations Trustees Program for the Advancement of Research in the Social Sciences (Research Grant for Doctoral Students in the Social Sciences to AH).

42.26, 12:15 pm
Trade-off between spatial resolution and gray-scale coding for letter recognition
MiYoung Kwon 1 (kwon0064@umn.edu), Gordon Legge 1; 1 Department of Psychology, University of Minnesota

Letter recognition is usually thought to rely on the shape and arrangement of distinctive pattern features such as line segments and curves. We have found, however, that high levels of letter-recognition accuracy are possible when low-pass filtering reduces the spatial bandwidths of letters to levels not expected to support adequate recognition of letter shape. We addressed this apparent discrepancy by testing the hypothesis that the human visual system relies increasingly on grayscale coding (contrast coding) for letter recognition when spatial resolution is severely limited. The hypothesis predicts that as the spatial resolution for rendering letters decreases, subjects will rely more on grayscale variations, therefore requiring a larger gap between contrast thresholds for letter detection and letter recognition. We measured contrast thresholds for detecting and recognizing single letters (Courier, 1°) drawn at random from the 26 letters of the English alphabet. The letters were low-pass filtered (blurred) with a third-order Butterworth filter with bandwidths (defined as the frequency at half amplitude) of 0.9, 1.2, 2, and 3.5 cycles per letter. Thresholds were also measured for unfiltered letters. Data from seven normally sighted subjects showed that the difference in contrast thresholds between detection and recognition increased substantially with decreasing bandwidth. The ratio of recognition to detection thresholds increased from 1.5 for the unfiltered letters to 8.8 for the most blurred letters (0.9 cycles/letter). These findings support the hypothesized increased reliance on grayscale information for letter recognition when spatial resolution is reduced.
Acknowledgement: This work was supported by NIH grant R01 EY002934.
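The letter-blurring manipulation lends itself to a compact illustration. The sketch below applies an isotropic third-order Butterworth low-pass filter in the frequency domain; the radial profile is an assumption, and the cutoff conversion simply restates the half-amplitude bandwidth definition used in the abstract.

```python
import numpy as np

def butterworth_lowpass(img, bandwidth, order=3):
    """Blur an image with a radial Butterworth low-pass filter.

    `bandwidth` is the half-amplitude frequency in cycles per image
    (cycles per letter if the letter fills the image), following the
    bandwidth definition in the abstract.
    """
    h, w = img.shape
    fy = np.fft.fftfreq(h) * h                      # cycles per image, vertical
    fx = np.fft.fftfreq(w) * w                      # cycles per image, horizontal
    radius = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)
    # |H(f)| = (1 + (f/fc)^(2n))^(-1/2) equals 0.5 when f/fc = 3^(1/(2n)),
    # so the half-amplitude bandwidth maps onto the filter cutoff fc as:
    fc = bandwidth / 3.0 ** (1.0 / (2 * order))
    gain = 1.0 / np.sqrt(1.0 + (radius / fc) ** (2 * order))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * gain))
```

With order=3, the cutoff sits at roughly bandwidth/1.2, so a 0.9 cycles/letter bandwidth corresponds to a cutoff near 0.75 cycles/letter.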


Monday Morning Posters

Eye movements: Selection and cognition
Royal Ballroom 6-8, Boards 301–315
Monday, May 10, 8:30 - 12:30 pm

43.301 Orientation statistics at fixation
Deep Ganguli 1 (dganguli@cns.nyu.edu), Jeremy Freeman 1, Umesh Rajashekar 1,2, Eero Simoncelli 1,2; 1 Center for Neural Science, New York University, 2 Howard Hughes Medical Institute, New York University

Eye movements are not random. When viewing images, human observers tend to fixate on regions that, on average, have higher contrast than randomly selected regions (Reinagel & Zador, 1999). We extend this analysis to the study of local orientation statistics via the "orientation tensor" (Granlund & Knutsson, 1994), computed as the 2x2 covariance matrix of local horizontal and vertical derivatives (i.e., the gradient vector) within an image patch. This matrix can be converted into three natural parameters: energy, orientedness, and orientation. Energy is the total variance in the gradients and is related to contrast; orientedness indicates the strength of the dominant orientation; orientation indicates the predominant orientation. We use an eye-movement database (van der Linde et al., 2009) to measure the orientation tensor within local 1-deg image patches that are either fixated by human observers (n=29) or selected at random (by using fixations for a different, randomly chosen image). We then obtain image-specific log distributions of the three parameters of the orientation tensor. Averaged across all images and subjects, energy is higher in fixated patches, consistent with similar reports on contrast, but we do not observe such differences for orientation or orientedness. However, when we compare fixated and random distributions of these parameters on an image-by-image basis, we observe systematic differences. In particular, for the majority of images, the distribution of fixated patches, when compared to that of random patches from that image, is closer to the generic distribution averaged over all images. We use multivariate techniques to characterize this effect across the database. We find that fixated distributions shift towards the generic distribution by about 10 to 20%, and the trend is significant for all three parameters. Our results suggest that when viewing a particular image, observers' fixations are biased towards locations that reflect the typical orientation statistics of natural scenes.
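The orientation-tensor computation can be sketched in a few lines. This version uses an eigenvalue parameterization, which is one standard way to obtain energy, orientedness, and orientation from the 2x2 gradient covariance; the study's exact normalization may differ.

```python
import numpy as np

def orientation_tensor_params(patch):
    """Energy, orientedness, and orientation of a grayscale image patch.

    The tensor is the 2x2 covariance of the image gradient over the
    patch (Granlund & Knutsson, 1994), as described in the abstract.
    """
    gy, gx = np.gradient(patch.astype(float))
    T = np.array([[np.mean(gx * gx), np.mean(gx * gy)],
                  [np.mean(gx * gy), np.mean(gy * gy)]])
    evals, evecs = np.linalg.eigh(T)               # eigenvalues in ascending order
    l2, l1 = evals                                 # l1 >= l2
    energy = l1 + l2                               # total gradient variance
    orientedness = (l1 - l2) / (l1 + l2 + 1e-12)   # 0 = isotropic, 1 = single orientation
    # dominant gradient direction; the edge orientation is orthogonal to it
    orientation = np.arctan2(evecs[1, 1], evecs[0, 1])
    return energy, orientedness, orientation
```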
43.302 Second-order saliency predicts observer eye movements when viewing natural images
Aaron Johnson 1 (aaron.johnson@concordia.ca), Azarakhsh Zarei 1; 1 Department of Psychology & CSLP, Concordia University, Montreal, Canada

Humans move their eyes approximately three times per second while viewing natural images, and between eye movements they fixate on features within the image. What humans choose to fixate can be driven by features within the early stages of visual processing (salient features, e.g., colour or luminance), by top-down control (e.g., task, scene schemas), or by a combination of both. Recent models based on bottom-up saliency have shown that it is possible to predict some of the locations that humans choose to fixate. However, none have considered the information contained within the second-order features (e.g., texture) that are present within natural scenes. Here we tested the hypothesis that a salience map incorporating second-order features can predict human fixation locations when viewing natural images. We collected eye movements of 20 human observers while they viewed 80 high-resolution calibrated photographs of natural textures and scenes. To maintain natural viewing behaviour while keeping observers engaged, observers were asked to study each scene in order to recognize sections from it in a follow-up forced-choice test. Interestingly, human observers' eye-movement patterns when viewing natural textures do not show the same central bias as with natural scenes. Salience maps were constructed for each image using a Gabor-based filter-rectify-filter model that detects the second-order features. We find that the fixation locations predicted by a model that incorporates second-order information do not differ from those of human observers viewing natural textures. However, when the model is applied to natural scenes, we find that its ability to predict human observers' eye movements decreases, owing to its failure to capture the central bias. A further improvement to the model would be to incorporate a mixture of bottom-up salience and top-down input in the form of a central bias, which may increase the performance of the model in predicting human eye movements.
Acknowledgement: NSERC & CIHR to AJ
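A toy version of the Gabor-based filter-rectify-filter (FRF) stage mentioned above might look as follows; the filter parameters, rectification, and second-stage smoothing are illustrative assumptions rather than the authors' settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import fftconvolve

def gabor_kernel(sf, theta, sigma):
    """Odd-phase Gabor patch; sf in cycles/pixel, theta in radians."""
    half = int(3 * sigma)
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.sin(2 * np.pi * sf * xr)

def second_order_salience(img, sf=0.1, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Filter-rectify-filter energy map: first-stage Gabor filtering,
    full-wave rectification, then a coarser second-stage filter that
    exposes texture-defined (second-order) structure."""
    salience = np.zeros_like(img, dtype=float)
    for theta in thetas:
        first = fftconvolve(img, gabor_kernel(sf, theta, sigma=1.0 / sf), mode="same")
        rectified = np.abs(first)                       # rectification stage
        salience += gaussian_filter(rectified, sigma=4.0 / sf)
    return salience
```

The rectification is the step that matters: without it, the second-stage filter would see only the first-order (luminance-defined) content.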
43.303 What is the shape of the visual information that drives saccades in natural images? Evidence from a gaze-contingent display
Tom Foulsham 1 (tfoulsham@psych.ubc.ca), Robert Teszka 1, Alan Kingstone 1; 1 University of British Columbia

The decision of where to move the eyes in natural scenes is influenced by both image features and the task at hand. Here, we consider how the information at fixation affects some of the biases typically found in human saccades. In an encoding task, people tend to show a predominance of horizontal saccades. Fixations are often biased towards the centre of the image, and saccade amplitudes show a characteristic distribution. How do these patterns change when peripheral regions are masked or blurred in a gaze-contingent moving-window paradigm? In two experiments we recorded eye movements while observers inspected natural scenes in preparation for a recognition test. We manipulated the shape of a window of preserved vision at fixation: features inside the window were intact; the peripheral background was either completely masked (Experiment 1) or blurred (Experiment 2). The foveal window was square, rectangular, or elliptical, with more preserved information either horizontally or vertically. If saccades function to increase the new information gained on each fixation, a horizontal window should lead to more vertical saccades and vice versa. In fact, we found the opposite pattern: vertical windows led to more vertical saccades, and horizontal windows produced behaviour more similar to normal, unconstrained viewing. The shape of the window also affected fixation and amplitude distributions. These results suggest that saccades are influenced by the features currently being processed, rather than by a desire to reveal new information, and that in normal vision these features are sampled from a horizontally elongated region. The eyes would rather continue to explore a partially seen region than launch into the unknown.
Acknowledgement: Commonwealth Postdoctoral Fellowship

43.304 Temporal scramble disrupts eye movements to naturalistic videos
Helena Wang 1 (helena.wang@nyu.edu), Jeremy Freeman 1, Elisha P. Merriam 1, Uri Hasson 3, David J. Heeger 1,2; 1 Center for Neural Science, New York University, 2 Department of Psychology, New York University, 3 Department of Psychology, Princeton University

When viewing a scene, humans rapidly move their eyes to foveate visual features and objects of interest. In natural conditions, this process is temporally complex, yet little is known about how the temporal structure of naturalistic stimuli affects the dynamics of eye movements under free viewing. We tracked eye position while observers watched a 6-minute scene from a feature film that was shot as a continuous sequence (with no cuts). Consistent with previous reports (Hasson et al., J Neurosci, 2008), eye movements were highly reliable, both across repeated presentations and across observers. We then divided the scene into clips of various durations (ranging from 500 ms to 30 s) and scrambled the temporal order of the clips, thereby introducing cuts. Eye-movement reliability, quantified as the covariance between eye positions to the scrambled clips and those during the corresponding portions of the full-length scene, was found to increase as a power-law function of clip duration, from ~0 for the 500-ms clips to an asymptote for clips >30 s in duration. We developed a model that assumed that observers searched randomly following each cut, fixating at arbitrary locations until finding a target of interest and then tracking it faithfully. We fit the model to the data by analytically deriving the model's prediction for the relationship between clip duration and eye-movement reliability (covariance). While simple, this model fit the data well with only two free parameters (the number of possible target locations and the asymptotic covariance). However, the model fits exhibited a systematic bias at the shortest scramble
durations. We conclude that exploratory fixations depend critically on the temporal continuity of stimuli, and that human observers might utilize a random search strategy when viewing naturalistic, time-varying stimuli.
Acknowledgement: NIH grant R21-DA024423

43.305 Suboptimal Choice of Saccade Endpoint in Search with Unequal Payoffs
John F. Ackermann 1 (jfa239@nyu.edu), Michael S. Landy 1,2; 1 Department of Psychology, New York University, 2 Center for Neural Science, New York University

Purpose. Observers' choice of saccade endpoint when searching for a target embedded in noise is well modeled by an ideal observer (Najemnik & Geisler, Nature, 2005). In this study we ask whether observers' choices of saccade endpoint optimally take rewards into account.
Method. Observers searched for a target in noise. On each trial, the observer fixated the center of a display. Eight Gaussian-white-noise disks (1.5 deg diameter, 10 deg eccentricity) appeared, equally spaced around fixation, to one of which was added a low-contrast Gabor-patch target. Correct detections at most locations resulted in a 100-point reward, and either the top (90 deg) or bottom (270 deg) position had a reward of 500 points (indicated in advance). There was no reward for incorrect responses. The contrast of the Gabor patch was adjusted so that d′ = 1 at 10 deg eccentricity. The observer had 250 ms to initiate a saccade. The target remained visible for 200 ms following the end of the saccade, thus affording a second look at the stimulus. Subjects judged which patch contained the target. A full visibility map was obtained and used to model the saccadic choices of an ideal observer that maximizes expected gain.
Results and Conclusions. There were significant differences between the actual and ideal distributions of saccade location in all conditions (8 target positions x 2 reward positions). The ideal observer tends to make short saccades halfway between the initial fixation position and the target or high-reward position. Human observers make longer saccades, landing on or near the 8 patches. Most saccades land on or near the target or the high-reward position. For each condition, efficiency was calculated as the ratio of the human observer's actual gain to that expected of the ideal observer. Efficiencies were near optimal for targets adjacent to the high-reward location.
Acknowledgement: NIH EY08266
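The ideal observer's expected-gain rule reduces to a weighted sum over possible target locations. Below is a schematic, assuming the visibility map is expressed as probabilities of a correct identification for each endpoint-target pair; the array names are illustrative, not the authors' notation.

```python
import numpy as np

def expected_gain(visibility, rewards, prior):
    """Expected gain for each candidate saccade endpoint.

    visibility[k, j]: P(correct identification of a target at location j
                      after saccading to candidate endpoint k)
    rewards[j]:       points paid for a correct response at location j
    prior[j]:         probability that location j contains the target
    """
    return visibility @ (rewards * prior)

# The ideal observer saccades to np.argmax(expected_gain(V, r, p));
# efficiency is a human observer's actual gain divided by the ideal gain.
```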
43.306 Eye movements during picture exploration and natural action
Céline Delerue 1 (celine.b.delerue@wanadoo.fr), Muriel Boucart 1, Mary Hayhoe 2; 1 Lab. Neurosciences Fonctionnelles et Pathologies, CNRS, Université Lille-Nord de France, 2 Center for Perceptual Systems, University of Texas, Austin

Much of the work on fixation patterns in complex scenes has been performed with 2D images. However, in natural behaviour fixation patterns are highly task dependent. 2D images differ from the natural world in several ways, including the nature of the task demands and the dimensionality of the display. To investigate the role of these factors in gaze patterns, we monitored eye movements in both normal and schizophrenic participants. Schizophrenic patients have previously been shown to exhibit prolonged fixations and reduced spatial exploration during free viewing of 2D images. Participants started with two free-viewing tasks: (1) looking at a scene on a computer screen (2D passive exploration) and (2) looking at a real scene on a table (3D passive exploration). Then, participants performed two "active viewing" tasks: (1) picturing themselves making a sandwich in front of a computer screen (2D active exploration) and (2) actually making a sandwich (3D active exploration). The scenes contained both task-relevant and irrelevant objects. Temporal and spatial characteristics of gaze were compared for each task. The primary factor determining gaze location and duration was the task demands. Fixation durations were longer for the active than for the passive task for both 2D and 3D scenes. Normal participants did not show any difference between 2D and 3D scenes in the passive viewing condition, although the 2D and 3D active viewing conditions differed. Moreover, the allocation of gaze between relevant and irrelevant objects differed in active viewing but not in passive viewing: participants looked more at relevant objects during the real task. For patients with schizophrenia, the introduction of a task essentially eliminated the differences from normal controls that are observed in passive viewing. Thus real versus 2D scenes had little effect on viewing patterns, but the task constraints were critical.

43.307 Eye movement preparation affects target selection for manual reaching
Michael Hegenloh 1 (Michael.Hegenloh@gmail.com), Donatas Jonikaitis 1; 1 General and Experimental Psychology, Ludwig Maximilian University Munich

During many daily tasks we typically look at an object first and then reach towards it. It has been shown in monkeys that areas related to hand-movement planning integrate information about the current eye and hand position. Similar interactions have been observed in psychophysical studies in humans. In a series of experiments we measured how reaching preferences are updated when eye position is changing. In the first experiment, participants were asked to reach to one of two locations in a free-choice task while fixating at different locations on the screen. In accordance with previous studies, we demonstrated that selection of the reaching target is influenced by the current eye position: participants were more likely to choose targets closer to the current gaze direction. In the second experiment, we asked participants to make a saccade to a cued location, and during saccade preparation we briefly flashed two possible reaching targets. The results showed that reaching-goal preferences were influenced by the future eye position for targets flashed 100 ms before eye-movement onset, suggesting that reaching-target selection takes into account the future eye position before saccade onset. This extends physiological and behavioural findings on eye-hand position interactions by demonstrating updating of reaching preferences before the eye movement.

43.308 Eye-hand coordination in finding and touching a target among distractors
Hang Zhang 1,2 (hang.zhang@nyu.edu), Camille Morvan 1,2,3, Louis-Alexandre Etezad-Heydari 1, Laurence Maloney 1,2; 1 Department of Psychology, New York University, 2 Center for Neural Science, New York University, 3 Department of Psychology, Harvard University

We asked observers to find and touch a target among distractors. Observers earned money by touching the target, with earnings decreasing linearly with movement time. If observers did not initiate the hand movement until the target was found, they would earn much less than if they attempted to integrate visual search and reach. Two clusters of objects were located to the left and right of the midline of the display, one cluster containing four objects, the other two. Each object was equally likely to be the target.
Initially the observer did not know which object was the target but could gain information by searching. The observer could potentially update his movement trajectory based on information from visual search. The optimal initial movement strategy was to move toward the larger cluster, while the optimal visual-search strategy was to first search the smaller cluster, thereby quickly learning which cluster contained the target. We compared observers' initial search/reach to the performance leading to maximum expected gain (MEG). Methods: Objects for the search/reach task were presented on a 32" ELO touchscreen located on a virtual arc around a starting position. Eye movements were tracked with an EyeLink II tracker. Before the search/reach task, observers were trained in moving on the touchscreen and in visual search with keypress response. Five naïve observers participated. Results: For each trial we recorded the direction of initial movement and the cluster initially searched. Two observers correctly searched the cluster with fewer objects first (p


for 10 seconds at the study phase, during which participants' eye movements were recorded. This was followed by a recognition test in which the test stimulus was presented for 500 milliseconds, either from the same viewpoint as learned (non-rotation condition) or from a different viewpoint (rotation condition). The task was to respond as to whether or not it was the same object that had been presented earlier, regardless of rotation, and the same learning objects were presented repeatedly. At the beginning of the experiment, participants fixated on the center of the components of objects more frequently in the rotation condition, suggesting that objects were encoded more categorically. The proportions of large saccades were the same at first but, after a few trials, changed depending on the test condition: at the beginning of trials, the proportion of large saccades was significantly higher in the rotation condition than in the non-rotation condition, and, immediately after the large saccade, longer durations arose more often in the rotation condition. These results suggest that participants may encode 3-D objects more categorically in the rotation condition, and that after a few trials they obtain the global shape of the object first, based on their categorically stored information.

43.310 Saccade target selection in subjects cued to remember single or multiple visual features
David C. Cappadocia 1,2,3,5 (capslock@yorku.ca), Michael Vesia 1,2,5, Patrick A. Byrne 1,5, Xiaogang Yan 1,5, J. Douglas Crawford 1,2,3,4,5; 1 Centre for Vision Research, York University, 2 School of Kinesiology & Health Science, York University, 3 Neuroscience Graduate Diploma Program, York University, 4 Departments of Psychology & Biology, York University, 5 Canadian Action & Perception Network (CAPnet)

Most visuomotor experiments use dots or simple shapes as targets, but in the real world we act on complex objects with many visual features. Here we tested the limits of memory in a feature-detection paradigm for saccades. Six head-fixed subjects were shown a 'probe' template with a conjunction of two features (shape and texture) at central fixation for 500 ms and were instructed to remember either the shape, the texture, or both features before each trial. After a delay, subjects were presented with a mask followed by stimuli at four radial locations at an eccentricity of 5° for 1 second. After the stimuli were extinguished, subjects were required to saccade to the stimulus that matched the probe. In trials where only one feature was to be remembered, the three other stimuli differed from the probe only in the given feature. In trials where both features were to be remembered, the other stimuli differed in one or both features. We analysed the data with a 2 (number of features) x 4 (locations) x 6 (subjects) mixed-model ANOVA, with subjects as a random factor. Results indicate that subjects performed significantly better if they had to remember only one feature. Interestingly, a main effect of probe location was also found. Post hoc tests revealed that subjects responded significantly better if the probe was shown left of center rather than to the top right, and if shown at the top left rather than the bottom right. There were also significant interactions between probe location and both the number of features and subject. We are currently analyzing probability distributions of correct and incorrect saccades to one or both features within each task, and comparing them across both tasks.
Future experiments will repeat this paradigm with transcranial magnetic stimulation over parietal cortex to examine its causal role in the integration of visual features into the motor plan.
Acknowledgement: Ontario Graduate Scholarship Program, CIHR, Canada Research Chairs Program

43.311 Dynamic interactions between visual working memory and saccade planning
John Spencer 1,2 (john-spencer@uiowa.edu), Sebastian Schneegans 3, Andrew Hollingworth 1; 1 Department of Psychology, University of Iowa, 2 Delta Center, University of Iowa, 3 Institute for Neurocomputing, Ruhr University, Bochum, Germany

In a recent line of psychophysical experiments, we found that working memory for a surface feature (color) interacts dynamically with saccadic motor planning, even when subjects are instructed to make saccades based only on spatial cues. A match between the remembered color and the color of either the designated target or a distractor influences saccade target selection, the metrics of averaging saccades, and saccade latency in a systematic fashion. We give a theoretical account of these effects using the framework of dynamic neural fields, in which neural processes are modeled through the evolution of continuous activity distributions over metric feature spaces. In an architecture that is consistent with visual processing pathways in the primate cortex, we use separate multi-layer representations for spatial and surface-feature information, which are both coupled bidirectionally to a combined perceptual representation of the visual input. Peaks of activity in the top layer of the spatial representation indicate the metrics of saccadic motor plans. In the feature representation, the contents of working memory are represented by activity peaks that are self-sustained by means of lateral interactions. Although these memory peaks do not evoke any overt activity in the earlier perceptual representations by themselves, they influence the evolution of activity in response to a visual stimulus. They can thereby exert a biasing effect on the formation of a motor plan. With this model, we simulated the complete experimental time course, including the formation of working memory from a visual cue, the planning and execution of saccades under different stimulus conditions, and the subsequent test of memory performance. We were able to replicate the key experimental observations regarding saccade target selection, metrics, and latency. Our work shows how neural processes supporting perception, memory, and motor planning can interact even when they are localized in distinct representational structures.
Acknowledgement: NIH R01MH62480
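The field dynamics behind such models follow the standard Amari-style equation. Below is a minimal one-layer sketch in that spirit; the kernel widths, gains, and sigmoid are arbitrary illustrative choices, not the authors' multi-layer architecture.

```python
import numpy as np

def simulate_dnf(stimulus, steps=500, dt=1.0, tau=20.0, h=-5.0):
    """One-layer dynamic neural field over a 1-D feature space:
    tau * du/dt = -u + h + stimulus + conv(kernel, f(u)),
    with local excitation, broader inhibition, and sigmoidal output f."""
    n = stimulus.size
    x = np.arange(n) - n // 2
    # difference-of-Gaussians interaction kernel
    kernel = 6.0 * np.exp(-x**2 / (2 * 4.0**2)) - 3.5 * np.exp(-x**2 / (2 * 12.0**2))
    u = np.full(n, h, dtype=float)
    for _ in range(steps):
        f = 1.0 / (1.0 + np.exp(-u))                   # sigmoid output
        interaction = np.convolve(f, kernel, mode="same")
        u += dt * (-u + h + stimulus + interaction) / tau
    return u

# A transient cue that pushes u above threshold can leave a localized peak
# that persists after the input is removed: the model's working-memory trace.
```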
Weused the stimuli from the STOIC database (Roy et al., submitted; the databaseincludes static and dynamic facial expression of emotion of the six basiccategories—fear, happiness, sadness, disgust, anger, and surprise—pluspain and neutral). Twenty participants each completed 320 trials (4 blocksof 80 stimuli, containing either static of dynamic stimuli). After each 500-msstimulus, participants had to recognize the displayed emotion. Participantswore the EyeLink II head-mounted eye-tracking device while looking atthe photos or videos showing different emotions. Participants were moreaccurate with dynamic than with static stimuli (83% vs. 78%). Average fixationmaps were computed for each emotion and stimulus condition usingthe correct answers only. Eye movements clearly differed for the static anddynamic stimuli: For the dynamic faces, the gaze of participants remainedclose to the center of the face, whereas, for the static faces, their gaze rapidlyspread outward. This was true for all the facial expressions tested. We willargue that the ampler eye movements observed with static faces result froma ventral-stream compensation strategy due to the relative lack of informationuseful to the dorsal-stream.Acknowledgement: NSERC43.313 Differences in Own- and Other-race Face Scanning inInfantsAndrea Wheeler 1 (andrea.wheeler@utoronto.ca), Gizelle Anzures 1 , Paul Quinn 2 ,Olivier Pascalis 3 , Alan Slater 4 , Kang Lee 1 ; 1 University of Toronto, 2 University ofDelaware, 3 Université Pierre Mendès Grenoble, 4 University of ExeterThe other-race effect has been found to exist in both adults (Meissner &Brigham, 2001) and infants (Kelly et al., 2007). It is most often described interms of discrimination abilities and manifests itself as an own-race recognitionadvantage. While recognition advantages for own-race faces havebeen found as early as 3 months (Sangrigoli & de Schonen, 2004), whatremains unclear is whether different attentional patterns can be detectedduring the scanning of own- versus other-race faces in infancy. The presentstudy investigated whether infants viewing own- and other-race facesdisplayed differential scanning and fixation patterns that may contribute tothe previously reported own-race recognition advantage.Participants were Caucasian infants (n =22) aged 6 to 10 months (M = 8.5months). Infants were presented with two videos on a Tobii Eye-Trackingscreen while their fixations and scanning patterns were recorded. EachMonday AMSee page 3 for Abstract Numbering System<strong>Vision</strong> <strong>Sciences</strong> <strong>Society</strong>215


43.313 Differences in Own- and Other-race Face Scanning in Infants
Andrea Wheeler 1 (andrea.wheeler@utoronto.ca), Gizelle Anzures 1, Paul Quinn 2, Olivier Pascalis 3, Alan Slater 4, Kang Lee 1; 1 University of Toronto, 2 University of Delaware, 3 Université Pierre Mendès Grenoble, 4 University of Exeter

The other-race effect has been found to exist in both adults (Meissner & Brigham, 2001) and infants (Kelly et al., 2007). It is most often described in terms of discrimination abilities and manifests itself as an own-race recognition advantage. While recognition advantages for own-race faces have been found as early as 3 months (Sangrigoli & de Schonen, 2004), what remains unclear is whether different attentional patterns can be detected during the scanning of own- versus other-race faces in infancy. The present study investigated whether infants viewing own- and other-race faces displayed differential scanning and fixation patterns that may contribute to the previously reported own-race recognition advantage. Participants were Caucasian infants (n = 22) aged 6 to 10 months (M = 8.5 months). Infants were presented with two videos on a Tobii eye-tracking screen while their fixations and scanning patterns were recorded. Each video contained the face of an adult female talking directly into the camera against a neutral background for a duration of 30 seconds. The identity of the face and the order of the presentations were counterbalanced and randomized across participants. Data were analyzed by comparing the proportion of infants' fixations to the different facial features across conditions (races). An analysis of variance (with results to date) revealed a significant interaction between race and feature, in that infants looked significantly longer at the eyes of the own-race faces as compared to the other-race faces, p


43.318 Working memory, feature-based attention, and their interaction modulate the perception of motion direction in human observers
Diego Mendoza 1 (diego.mendoza@mail.mcgill.ca), Megan Schneiderman 1, Julio Martinez-Trujillo 1; 1 Department of Physiology, McGill University

Attending to a visual stimulus feature modulates the perception of that feature. Here, we used moving stimuli to investigate whether maintaining the representation of a visual feature in working memory (WM) produces a similar effect, and whether such an effect interacts with the effect of feature-based attention (FBA). Seven subjects identified the direction of a brief pulse of coherent motion occurring in a 0%-coherence random-dot pattern (RDP). Concurrently, they performed a second task consisting of either (a) attending to four sequentially presented moving RDPs and detecting whether they changed direction (Experiment 1), or (b) remembering the direction of a moving RDP (sample) and, after a delay, determining how many of four sequentially presented RDPs (tests) matched the sample's direction (Experiments 2 and 3). In Experiment 1, the pulse co-occurred with one test while subjects attended to it. In Experiment 2, the pulse occurred during an inter-test interval while subjects remembered the sample. In Experiment 3, the pulse co-occurred with one of the tests while subjects remembered the sample and attended to the test. Pulse-identification performance was significantly higher when the pulse direction was the same as the concurrently attended RDP (FBA, Experiment 1) or the remembered sample (WM, Experiment 2) than when it was opposite. In Experiment 3, performance was highest when both the remembered sample and the attended test directions (WM+FBA) were the same as the pulse direction, intermediate when one was the same and the other opposite, and lowest when both were opposite. When subjects performed the pulse-identification task but ignored the sample and tests, performance was unaffected by the sample and test directions, demonstrating that mere exposure to these stimuli did not influence pulse identification. Our results show that WM and FBA can individually and simultaneously modulate the perception of motion direction in human subjects.
43.319 The Role of Selective Attention in Visual Working Memory Capacity: An ERP study
Johanna Kreither 1,2 (jkreither@ucdavis.edu), Javier Lopez-Calderon 1,3, Francisco Aboitiz 3, Steven Luck 1; 1 Center for Mind and Brain and Department of Psychology, University of California, Davis, 2 Department of Psychology, Universidad de Chile, 3 Department of Psychiatry, Pontificia Universidad Catolica de Chile

Visual working memory (VWM) capacity has been consistently measured both behaviorally and electrophysiologically using change-detection tasks (see, e.g., Luck & Vogel, 1997; Vogel & Machizawa, 2004). Many studies have estimated that the average healthy adult can hold 3-4 items in VWM. Recent research suggests that selective attention influences storage capacity. The present study therefore sought to determine whether sensory processing is enhanced at the locations of objects being encoded into VWM, just as in studies that directly manipulate spatial attention. Observers performed a lateralized change-detection task, in which a cue indicated that subjects should encode the colors of the items on one side of fixation, ignoring the items on the other side. While subjects were encoding the items on one side, a high-contrast dartboard probe stimulus was sometimes presented on either the attended or the unattended side. We examined neural activity during the maintenance interval when no probe was presented, along with the sensory response elicited by the probe stimulus. As in previous studies, the amplitude of the delay activity increased monotonically as memory load increased, up to the storage capacity limit. We also found that the P1 sensory response elicited by the probe stimulus was enhanced when the probe appeared on the attended side of the display, indicating that encoding information into VWM involves a modulation of sensory transmission in the spatial region of the items being encoded. Moreover, the magnitude of this P1 sensory modulation was largest when the number of items being encoded was near the capacity limit (3-4 items) and was much smaller when the number of items being encoded was either well below capacity (1-2 items) or well above capacity (5-6 items). These results suggest that visuospatial attention plays a role in VWM encoding, especially when capacity is being challenged.
Acknowledgement: This research was made possible by grant R01MH076226 from the National Institute of Mental Health.

43.320 What are the differences between comparing visual working memory representations with perceptual inputs and comparing two perceptual representations?
Joo-Seok Hyun 1 (jshyun@cau.ac.kr), Steven Luck 2; 1 Department of Psychology, Chung-Ang University, Seoul, Korea, 2 Center for Mind & Brain, University of California, Davis, USA

We previously reported that the speed of judging the presence or absence of any difference between memory and test items (any-difference task) stays relatively constant as the number of differences increases. In contrast, we also found that the speed of judging any sameness between memory and test items (any-sameness task) accelerates as the number of samenesses increases. To assess the nature of this contrast, the present study had subjects directly compare two sets of four colored boxes presented simultaneously across fixation, under any-difference and any-sameness instructions. The RT results showed that increasing the number of differences or samenesses between sample and test moderately accelerated the speed of subjects' responses in both conditions. These RT patterns were also observed when position matching between the boxes was made easier. There was, however, a robust pattern of faster RTs on any-difference trials in which every item was the same across the two sets than on any-sameness trials in which every item was different. This pattern virtually replicates the 'fast-same' effect reported in our previous study. The present results indicate that comparing two perceptual representations may involve a matching process that is in part similar to the process for comparing VWM representations with perceptual inputs.
Acknowledgement: NIMH grant R01MH63001, KRF-2009-332-H00027
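Capacity estimates such as the 3-4 item limit discussed in these abstracts are commonly derived from change-detection hit and false-alarm rates using Cowan's K. The abstracts do not state their estimator, so the following is only the textbook formula:

```python
def cowan_k(hit_rate, false_alarm_rate, set_size):
    """Cowan's K for single-probe change detection:
    K = N * (hit rate - false-alarm rate), capped at the set size."""
    return min(set_size, set_size * (hit_rate - false_alarm_rate))

# e.g. cowan_k(0.85, 0.15, 6) -> 4.2 items held in memory
```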
43.321 Role of LIP persistent activity in visual working memory
Kevin Johnston 1 (kevinj@biomed.queensu.ca), Emiliano Brunamonti 1, Neil Thomas 1, Martin Pare 1; 1 Centre for Neuroscience Studies, Queen's University

Parietal cortical areas have been implicated as a critical neural substrate for visual working memory. Human fMRI and ERP studies have revealed persistent parietal activation during the delay periods of visual working memory tasks and have shown that such activation scales with the capacity limit of visual working memory. A second line of evidence has been provided by neural recordings in primates performing memory-guided saccades. Neurons in parietal cortical areas, such as the lateral intraparietal area (LIP), have been shown to exhibit persistent activity during the memory delay of this task, but the contribution of LIP persistent activity to mnemonic processes remains poorly understood. Specifically, it is unclear whether persistent activity carries a retrospective visual or a prospective saccade-related representation, or how it could be related to the capacity limit of visual working memory. To address this, we recorded the activity of single LIP neurons in three monkeys while they performed memory-guided saccades, and we carried out two sets of analyses. We first compared the visual and motor responses of each neuron with its persistent delay-period activity. LIP neurons exhibited a pattern of activity consistent with a noisy retrospective code: most neurons had greater visual than saccade-related activity, and persistent activity was most strongly linked with the preferred direction of each neuron, but highly variable. We then investigated how persistent activity could relate to visual working memory capacity. We derived estimates of baseline and persistent activity from our sample of LIP neurons and used these values to compute the theoretical number of discriminable representations that could be carried over a range of realistic simulated activity distributions, using an ROC analysis. This number ranged from one to approximately four. These data show that LIP persistent activity is visually biased and suggest a neural basis for the capacity limit of visual working memory.
Acknowledgement: CIHR and NSERC
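One way to picture the ROC-based count of discriminable representations is to ask how many activity levels fit between baseline and peak persistent firing before adjacent levels become confusable. Below is a toy calculation, assuming equal-variance Gaussian rate distributions and an arbitrary AUC criterion; the authors' actual criterion and simulated distributions are not given in the abstract.

```python
import numpy as np
from scipy.stats import norm

def n_discriminable_levels(baseline_rate, persistent_rate, rate_sd, auc_criterion=0.75):
    """Count activity levels spaced so that adjacent Gaussian rate
    distributions are separated by just enough d' to reach the AUC
    criterion; AUC = Phi(d'/sqrt(2)) for equal-variance Gaussians."""
    dprime_per_level = np.sqrt(2.0) * norm.ppf(auc_criterion)
    total_dprime = (persistent_rate - baseline_rate) / rate_sd
    return 1 + int(total_dprime // dprime_per_level)

# e.g. n_discriminable_levels(5.0, 25.0, 6.0) -> 4 representations
```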


43.322 A biased-competition account of VSTM capacity limits as revealed by fMRI
Niklas Ihssen 1 (n.ihssen@bangor.ac.uk), David Linden 1, Kimron Shapiro 1; 1 School of Psychology, Bangor University

The capacity of visual short-term memory can be increased significantly when the to-be-remembered objects are presented sequentially across two displays rather than simultaneously in one display (Ihssen, Linden, & Shapiro, VSS, 2009). Interestingly, a similar performance increase is observed when the (simultaneous) display is repeated. The present study sought to elucidate the brain mechanisms underlying the sequential and repetition benefits. We used functional magnetic resonance imaging during a change-detection task in which participants had to maintain 8 different objects (colours and white shapes). Objects were presented either simultaneously (baseline condition), split into two temporally separated 4-object arrays (sequential condition), or presented twice (repetition condition). Importantly, conditions were matched for perceptual load and visual onsets by filling the sequential 4-object arrays with placeholders and presenting a second "empty" placeholder array in the baseline condition. Whole-brain BOLD analyses revealed two main results. Relative to the baseline condition, both the sequential and repetition conditions evoked stronger brain responses in extrastriate visual areas. Mirroring memory performance, BOLD amplification in these areas may relate to the higher number of object representations that are activated along the ventral pathway. In contrast to the occipital effects, two key regions of the frontoparietal attention/working-memory network dissociated the sequential and repetition conditions. In the inferior parietal lobe and the frontal eye fields, sequential displays elicited reduced brain activation relative to BOLD responses in the repetition and baseline conditions, which showed no difference. Implications of these findings are discussed within the framework of biased competition.
Acknowledgement: Wales Institute of Cognitive Neuroscience

43.323 α-oscillations and the fidelity of visual memory
Jie Huang 1 (jiehuang@brandeis.edu), Robert Sekuler 1; 1 Volen Center for Complex Systems and Department of Psychology, Brandeis University

Low amplitude of α-oscillations in EEG is associated with selective attention and has been taken as a sign of enhanced neuronal processing in task-relevant cortical areas (W. Klimesch). As fidelity of encoding is a prerequisite for visual memory, we hypothesized that α-amplitude would be inversely related to the fidelity with which a visual stimulus is encoded and later recalled. We tested this hypothesis by recording scalp EEG while subjects attended to lateralized Gabors whose spatial frequency they would have to reproduce from memory. On each trial, two Gabors were presented simultaneously for 300 ms, one to the left of fixation and the other an equal distance to the right. Either before stimulus presentation or after, subjects were cued as to which Gabor's frequency they would have to reproduce from memory; subjects were to ignore the non-cued stimulus. Each trial's reproduced spatial frequency showed a strong influence from information that was not relevant to the task: reproductions were attracted toward the average spatial frequency of the Gabors seen on preceding trials. This effect was greatly reduced when the cue preceded the stimulus pair rather than followed it. This improved suppression of task-irrelevant information was accompanied by a consistent change in contralateral α amplitude. At various, widely distributed electrode sites, we saw a lateralized, attention-driven reduction in α amplitude, implying enhanced processing of the cued Gabor. However, occipito-parietal electrode sites showed lateralized α amplitude only during early visual encoding (0-200 ms after stimulus onset). Moreover, this reduced α amplitude at occipito-parietal locations contralateral to the cued Gabor was associated with a reduction in the influence of task-irrelevant information from preceding trials.
These results suggest that neural enhancement of relevant information is a determinant of optimal memory performance, and that it protects the remembered stimulus from the intrusion of irrelevant information.
Acknowledgement: Supported by NIH grant MH-068404

43.324 An electrophysiological measure of visual short-term memory capacity within and across hemifields
Jean-Francois Delvenne 1 (j.f.delvenne@leeds.ac.uk), Laura Kaddour 1, Julie Castronovo 1,2; 1 University of Leeds, UK, 2 University of Louvain, Belgium

Recent ERP studies have identified a specific electrophysiological correlate of the contents of visual short-term memory (VSTM) (McCollough et al., 2007; Vogel & Machizawa, 2004). A sustained posterior negative wave is observed throughout the memory retention period, which is larger over the side of the brain contralateral to the position of the memory items in the visual field than over the ipsilateral side. Importantly, the amplitude of this contralateral delay activity (CDA) increases progressively with the number of items to be remembered, reaching an asymptotic limit at around 3-4 objects. This contralateral organization of visual memories raises the possibility that each hemisphere has its own storage capacity: more items could be held in memory when they are split between the left and right hemifields than when they are all presented within a single hemifield. In the present study, we measured CDA amplitude in 15 participants while they remembered colored squares from either one hemifield or both hemifields. We found that the amplitude of the CDA was modulated by the total number of items held in memory, independently of their spatial distribution in the visual field. When individuals had to remember one side of the memory array, the CDA increased for arrays of one, two, and three items, but ceased to get larger for arrays of four items. However, when individuals had to memorize the items from both sides of the memory array, this contralateral activity reached its asymptotic limit for arrays of two items per side. These results suggest that, despite being contralaterally organized, VSTM is limited by the number of objects from both hemifields. VSTM may consist of a pool of resources that can be allocated flexibly to one or both hemifields, allowing a maximum of 3-4 objects to be maintained simultaneously.
Acknowledgement: This work was supported by the Experimental Psychology Society, UK
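The CDA measure used in this and neighboring abstracts reduces to a contralateral-minus-ipsilateral voltage difference over posterior electrodes during retention. Below is a schematic computation; the electrode groupings, retention window, and array layout are illustrative assumptions.

```python
import numpy as np

def cda_amplitude(epochs, left_chans, right_chans, cue_side, times, window=(0.3, 1.0)):
    """Contralateral delay activity: mean contralateral minus ipsilateral
    voltage over posterior electrode pairs during the retention window.

    epochs:   (n_trials, n_channels, n_times) voltages
    cue_side: (n_trials,) array of 'L'/'R', the memorized hemifield
    """
    mask = (times >= window[0]) & (times <= window[1])
    left = epochs[:, left_chans, :][..., mask].mean(axis=(1, 2))
    right = epochs[:, right_chans, :][..., mask].mean(axis=(1, 2))
    contra = np.where(cue_side == "L", right, left)   # hemisphere opposite the cue
    ipsi = np.where(cue_side == "L", left, right)
    return (contra - ipsi).mean()                     # typically negative (microvolts)
```

Plotting this quantity against memory load is what yields the asymptote at 3-4 items described above.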
43.325 A contribution of persistent FEF activity to object-based working memory?
Kelsey Clark 1, Behrad Noudoost 1, Tirin Moore 1; 1 Neurobiology, School of Medicine, Stanford University

We examine delay-period activity in the FEF during the performance of an object-based working memory task. In the task, the monkey is briefly presented a single image (sample) in the periphery. Following the sample presentation, the monkey must remember the sample throughout a 1-second blank delay period. Following the delay, one target (a repeat of the sample image) and one distractor object appear in the periphery, and the monkey must saccade to the matching target to receive a reward. We compare FEF neuronal activity during blocks in which the target and distractor images always appear at locations that include the sample location (overlapping condition) with blocks in which the target and distractor always appear at positions rotated 90 degrees from the sample position (orthogonal condition). Thus we can examine the degree to which spatially selective delay-period activity of FEF neurons contributes to object-based working memory. We also use a memory-guided saccade task to identify the functional classes of FEF neurons that might contribute to the persistence of purely object-based information. We observe persistent, spatially selective delay-period activity in the FEF, consistent with the use of spatial signals in maintaining object information.
Acknowledgement: NIH grant EY014924, NSF grant IOB-0546891, and the McKnight Foundation

43.326 Using Multi-Voxel Pattern Analysis to explore the role of retinotopic visual cortex in visual short-term memory: mapped memories or plain prospective attention?
Alejandro Vicente-Grabovetsky 1 (a.vicente.grab@gmail.com), Rhodri Cusack 1; 1 MRC Cognition and Brain Sciences Unit, University of Cambridge

Introduction: There are two long-standing debates regarding the nature of visual short-term memory (VSTM) representations and their neural underpinnings. One question is whether VSTM depends on the same neural circuitry as vision or whether it functions separately. The second question is whether VSTM and attention use overlapping mechanisms. Methods: To evaluate these questions, two experiments measured retinotopic activation during attention and VSTM maintenance. Participants attended to and remembered the contents of two (out of four) visually presented sectors. They were then tested for change detection on the sectors: in the first experiment, change detection was performed at the same location (encouraging prospective attention), while in the second, change detection was performed centrally regardless of the sectors' location (discouraging prospective attention). Results: During the VSTM maintenance period, both univariate methods and Multi-Voxel Pattern Analysis showed evidence of spatial encoding in visual cortex only where prospective attention to the sector locations was encouraged, and no evidence when it was not required. However, spatial encoding was evident during attention in both experiments, ruling out an explanation based on statistical power. This spatial selectivity was equivalent to that obtained from purely sensory stimulation. Conclusion: We conclude that VSTM does not use the same low-level visual circuitry as typical visual processing or attention, suggesting that it is underpinned by different mechanisms than either of these. Visual processing and attention, on the other hand, appear to have similar neural correlates. We suggest that previous findings of VSTM activating visual cortex are due to residual attentional activation.
Acknowledgement: Medical Research Council


43.327 Dissociating feature complexity from number of objects in VSTM storage using the contralateral delay activity
Maha Adamo 1 (maha@psych.utoronto.ca), Kristin Wilson 1, Morgan D. Barense 1, Susanne Ferber 1; 1 Department of Psychology, University of Toronto

Many recent studies have examined the neural correlates of visual short-term memory (VSTM) maintenance using an ERP component known as the contralateral delay activity (CDA), whose amplitude corresponds to memory load within individuals and to memory capacity across individuals. The parietal distribution of the CDA makes it a particularly compelling locus of capacity-limited VSTM storage, given that it overlaps with fMRI findings of feature- and location-based VSTM systems located in the superior and inferior intraparietal sulcus. An under-explored question, however, is the extent to which the CDA indexes the feature complexity of the items to be remembered, the number of objects/locations to be remembered, or both. We employed a lateralized change-detection task in which the feature complexity and the number of items to be remembered were independently manipulated. Items to be remembered were either simple features (shape, color, or orientation) or conjunctions of these features, and they were presented either at one location or at three locations. Behavioural results demonstrated that individuals performed comparably for simple features and conjunctions presented one at a time, while performance for simple features declined when three were presented at different locations relative to when they were conjoined in one object. We found that ERP amplitudes at the lateral, posterior sites that are typically measured for the CDA reflected the number of objects to be remembered, while more central, anterior sites indexed the complexity of the objects to be remembered. Thus, feature- and location-based systems in the parietal cortex can be dissociated even at the coarse spatial resolution of ERP.

43.328 Accessing a working memory representation delays updating that representation
Judith Fan 1 (jefan@fas.harvard.edu), George Alvarez 1; 1 Department of Psychology, Harvard University

Recent research into probabilistic models of mental representation has profited from requiring participants to give multiple responses on a given trial (Vul et al., 2009). This method assumes that accessing a mental representation leaves it intact for subsequent sampling. Here we tested the consequences of sampling (memory access) on the mental representation of an object's velocity. On each trial, observers saw an object move at a constant velocity and were instructed to continue mentally tracking the object's position as it moved behind a virtual occluder. Either one or three visual markers were posted at a range of distances (early, middle, late) along the length of the occluded path. Observers responded by pressing a key when the object was imagined to have reached each marker. The results demonstrate that the accuracy of the velocity representation depends on whether previous responses were given. When three responses were given, the object's velocity was reliably underestimated (even on the first response), and the degree of underestimation increased with position. Velocity was also underestimated in the single-click condition, but the degree of underestimation did not increase as a function of position [the interaction between number of responses and position was highly reliable, F(2,30) = 10.779, p


oriented edges (V1-like Gabor filters); and motion. All monkeys are significantly attracted towards salient stimuli, with salience computed as the sum of the five features, for saccades both into normal and lesioned hemifields (t-tests, p


43.406 Can't Take My Eyes Off of You: Delayed Attentional Disengagement Based on Attention Set
Walter Boot 1 (boot@psy.fsu.edu), James Brockmole 2; 1 Department of Psychology, Florida State University, 2 Department of Psychology, University of Notre Dame

The attentional consequences of task-irrelevant properties of objects outside the focus of attention have been studied extensively (attention capture). However, the extent to which task-irrelevant properties influence attentional deployment when they are already within the focus of attention has generally been ignored. This is an important oversight because attention-capture effects should be thought of as composed of both the pull of attention to a location and the holding of attention once it gets there. We present the results of a series of experiments examining the ability of task-relevant and task-irrelevant properties of the currently fixated item to hold attention (as measured by dwell times). In the attentional-disengagement paradigm, participants start each search trial fixating an object that can never be the search target. Surprisingly, completely irrelevant properties of this object determined how long attention dwelled there and where attention went next. Task-relevant properties at this irrelevant location also increased dwell times. We present evidence that contingent disengagement effects are not restricted to simple target features. In one experiment, participants viewed displays containing many circles, and each trial began with participants fixating a circle that was never the target circle (the sole red circle). The task was to indicate the presence or absence of a target letter (e.g., p) within the red target circle as quickly as possible, and participants were told to ignore the initially fixated circle. However, when this initially fixated, always-irrelevant item contained the target letter, disengagement was significantly delayed. Additionally, when this item contained a letter similar to the target (e.g., q), disengagement was delayed compared to a dissimilar letter (e.g., i). In this series of experiments we found both stimulus-driven and contingent capture effects on disengagement, and we present the disengagement paradigm as a promising means to study complex attention sets.

43.407 Fatal attraction or reluctance to part: Is oculomotor disengagement independent of the initial capture of the eyes?
Sabine Born 1 (sabine.born@unige.ch), Dirk Kerzel 1, Jan Theeuwes 2; 1 Faculté de Psychologie et des Sciences de l'Education, Université de Genève, Switzerland, 2 Department of Cognitive Psychology, Vrije Universiteit Amsterdam, the Netherlands

Highly salient distractor stimuli may prolong reaction times to a target stimulus. This distraction effect is largely due to the fact that our eyes are sometimes captured by the distractor: to respond to the target, the eyes need to be redirected from the distractor to the target, which is time-consuming. Distractors that are similar to the target cause a stronger distraction effect than dissimilar distractors. On the one hand, this can be explained by the finding that the eyes go more often to a distractor that looks like the target than to a distractor that does not look like it. On the other hand, the larger interference caused by similar distractors is due to greater difficulty disengaging the eyes from an object that looks like the target than from an object that looks quite different.
The goal of the present study was to test whether these two processes (oculomotor capture and disengagement) are independent. We used a variant of the oculomotor capture paradigm. Participants were asked to make an eye movement to a gray target. Simultaneously with the target, we presented a green onset distractor. After a short delay (30-40 ms), the distractor changed either to gray (target-similar color) or to red (dissimilar color). Results show a clear dissociation between oculomotor capture and disengagement. Whereas there were only small differences in the percentage of capture, dwell times on the distractor were substantially longer when the distractor changed to the target-similar color. If, however, the color change occurred later in time (60-80 ms), this similarity effect on dwell times was gone as well. The latter finding is discussed in terms of rapid disengagement of covert attention from the distractor site and a critical time window for the influence of distractor characteristics on gaze dwell times.

43.408 Fixations on Low Resolution Images
Tilke Judd 1 (tjudd@mit.edu), Frédo Durand 1, Antonio Torralba 1; 1 Massachusetts Institute of Technology

When an observer looks at an image, his eyes fixate on a few select points that correspond to interesting image locations. However, how this process is affected by image resolution is not well understood. Here we investigate how image resolution affects human fixations through an eye-tracking experiment. We showed 100 images at different resolutions to 30 observers. Each image was shown at one of seven resolutions (widths of 4, 8, 16, 32, 64, 256, and 1024 pixels) and upsampled to the original size of 1024x768 pixels for display. We found that: (1) As image resolution decreases, observers fixate on fewer locations, and fixations become more concentrated near the center. (2) Fixations on lower-resolution images can predict fixations on higher-resolution images. We measured how well one observer's fixations predict another observer's fixations on the same image at different resolutions, using the area under the ROC curve as a metric. Fixations on an image at full resolution are predicted better by fixations on the same image as its resolution increases, but the rate of improvement declines beyond a resolution of 64 pixels. (3) Fixations are most consistent across observers for images at resolutions of 32 and 64 pixels. More specifically, fixations on images at these resolutions predict fixations on the same image better than fixations from any other resolution predict fixations at that resolution. Fixations on the lowest- and highest-resolution images are harder to predict. These findings suggest that working with fixations at an image resolution of 32-64 pixels could be both perceptually adequate and computationally attractive.
Acknowledgement: NSF CAREER awards 0447561 and IIS 0747120. Frédo Durand acknowledges a Microsoft Research New Faculty Fellowship, a Sloan Fellowship, Royal Dutch Shell, the Quanta T-Party, and the MIT-Singapore GAMBIT lab.
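The ROC metric used in this abstract can be implemented in several ways. A common one, sketched below, scores one set of fixations against a Gaussian-smoothed map built from another set, with uniformly sampled locations as negatives; the smoothing width and sample counts are arbitrary choices, not the authors' parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import rankdata

def fixation_prediction_auc(pred_fix, test_fix, shape, sigma=30, n_neg=10000, seed=0):
    """AUC for how well one set of fixations predicts another.

    pred_fix, test_fix: arrays of (x, y) fixation coordinates
    shape: (height, width) of the image
    """
    fmap = np.zeros(shape)
    for x, y in pred_fix:                       # smoothed fixation map as predictor
        fmap[int(y), int(x)] += 1
    fmap = gaussian_filter(fmap, sigma)
    pos = fmap[test_fix[:, 1].astype(int), test_fix[:, 0].astype(int)]
    rng = np.random.default_rng(seed)           # uniform random locations as negatives
    neg = fmap[rng.integers(0, shape[0], n_neg), rng.integers(0, shape[1], n_neg)]
    ranks = rankdata(np.concatenate([pos, neg]))
    # Mann-Whitney relation: AUC = P(a positive outscores a random negative)
    return (ranks[:len(pos)].mean() - (len(pos) + 1) / 2) / len(neg)
```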
43.409 Eye movements while viewing captioned and narrated videos
Nicholas M. Ross 1 (nickross@rci.rutgers.edu), Eileen Kowler 1; 1 Department of Psychology, Rutgers University
Research on visual attention has been mainly limited to static images, but in everyday life we often rely on a narrative to guide us through dynamically changing scenes. Sometimes the narrative is presented via audio and in special cases as a caption. Narratives can help guide attention, but may require additional processing that increases task demands, or may distract attention from relevant locations when presented as a caption. This study examined how narrative and video interact to drive attention while viewing documentary clips. Video clips (~120 s each) were cut from 4 documentaries such that no talking heads were present in any frame. Videos were accompanied by narration in the form of either audio or captions. Videos with both audio and captions, or neither, were also tested. In order to motivate viewing, multiple choice tests on content were given after each clip. Captions were strong attractors of gaze: 56% of saccades were devoted to reading captions when no audio was present, and 41% when audio was present (surprisingly large, given that audio made reading of captions unnecessary). Durations of fixations were shorter for reading captions (~260 ms) than for inspecting the video (~420 ms), regardless of the presence of audio. Fixations made to inspect the video clustered near the scene center when narration was present. In the absence of narration, eye movements were more exploratory; the 2D scatter of saccadic endpoint locations increased by up to 70%. These changes to the spatial distribution of saccades, as well as the adoption of the time-consuming strategy of reading captions even when redundant with audio narration, show that eye movements while inspecting videos are motivated mainly by a cognitive strategy of searching for clues that facilitate the interpretation of viewed events.
Acknowledgement: NSF 0549115

43.410 Modeling gaze priorities in driving
Brian Sullivan 1 (brians@mail.utexas.edu), Constantin Rothkopf 2, Mary Hayhoe 1, Dana Ballard 1; 1 Center for Perceptual Systems, University of Texas at Austin, 2 Frankfurt Institute for Advanced Studies, Goethe University
Gaze behavior in complex tasks (e.g. navigation) has been studied [1,2], but it is still unknown in detail how humans allocate gaze within complex scenes, especially in temporally demanding contexts. We have previously studied gaze allocation in a virtual walking environment and modeled human behavior using a reinforcement-learning model [2]. We adapted this approach to the study of visuomotor behavior in virtual driving, allowing for controlled visual stimuli (e.g. other car paths) and monitoring of human motor control (e.g. steering). The model chooses amongst a set of visuomotor behaviors for following and avoiding other cars and successfully directs the car through an urban environment. Performance of the model was compared to that of human subjects. Eye movements were tracked while driving in a virtual environment presented in a head mounted display. Subjects were seated in a car interior mounted on a 6 degree-of-freedom hydraulic platform that allows the simulation of vehicle movements. Subjects were instructed to follow a 'leader' car and to avoid other traffic present. Our analysis assumes that subjects' gaze strategies are a direct measure of task priorities. These priorities can be derived from subject behavior using inverse reinforcement learning to extract individual reward values. The majority of fixations were devoted to keeping gaze on the leader car and a smaller proportion on objects of avoidance, with relatively few fixations on other objects unless no traffic was present. We discuss the detailed measures underlying gaze and motor behavior in these experiments and their relationship to the reinforcement-learning model. Additionally, we will discuss future refinements of the model and techniques using inverse reinforcement learning that allow for better fitting of human data. [1] Jovancevic J, Sullivan B, Hayhoe M.; J Vis. 2006 Dec 15. [2] Rothkopf CA. Modular models of task based visually guided behavior; Ph.D. Thesis
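As a rough illustration of the modular reinforcement-learning account referred to above (not the authors' implementation), gaze can be treated as a resource allocated across task modules such as "follow leader" and "avoid traffic." In the sketch below, the reward weights stand in for values that inverse reinforcement learning would recover from behavior; the priority rule, softmax temperature, and all numbers are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def choose_fixation(values, uncertainties, beta=3.0):
        # Modules whose reward is high and whose state estimate has
        # grown uncertain are prioritized for the next fixation; a
        # softmax makes the choice stochastic, like human gaze.
        priority = np.asarray(values) * np.asarray(uncertainties)
        p = np.exp(beta * priority)
        p /= p.sum()
        return rng.choice(len(p), p=p), p

    # Hypothetical reward weights (as inverse RL might recover) and
    # uncertainties that grow for modules not recently fixated.
    modules = ["follow leader", "avoid traffic", "other objects"]
    idx, p = choose_fixation(values=[1.0, 0.6, 0.1],
                             uncertainties=[0.8, 0.9, 0.5])
    print(modules[idx], np.round(p, 2))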
43.411 The Dynamics of Gaze When Viewing Dynamic Faces
Melissa Vo 1 (mlvo@search.bwh.harvard.edu), Tim Smith 2, John Henderson 2; 1 Harvard Medical School, Brigham & Women's Hospital, 2 University of Edinburgh
How do we attend to faces in realistic encounters? Is it, for example, true that we tend to look at somebody's eyes? Most of the work on face perception has come from static face presentations, raising the question whether previous findings actually scale to reality. An intermediate step towards real-world face perception is to use dynamic displays of faces. Here we monitored participants' eye movements while they watched videos featuring close-ups of pedestrians engaged in interviews. Dynamic interest areas were used to measure fixation distributions on moving face regions including eyes, nose, and mouth. Additionally, fixation distributions were analyzed as a function of events such as speech, head movement, or gaze direction. Contrary to previous findings using static displays, we observed no general preference to fixate the eyes. Rather, gaze was dynamically adjusted to the dynamics of faces: when a depicted face was speaking, participants showed increased gaze towards the mouth, while the eyes were preferentially fixated when the face engaged with the viewers by looking straight into the camera. Further, when two faces were present and one face looked at the other, viewers followed the observed gaze from one face to the other. Thus, especially in dynamic displays, observed gaze direction seems to promote gaze following. Interestingly, when a face moved quickly, participants tended to look more at the nose than at any other face region. We interpret this "nose tracking" as a strategy to use a centered viewing position to optimally track and monitor moving faces. All in all, these findings provide evidence for the wealth of moment-to-moment adjustments of gaze control that become necessary when viewing dynamic faces. Since human interaction heavily relies on the understanding of information conveyed by facial movements, it is of key interest to learn more about gaze dynamics while viewing dynamic faces.
43.412 An Eye for Art: Effects of Art Expertise on the Visual Exploration of Drawings
Johan Wagemans 1 (johan.wagemans@psy.kuleuven.be), Karen De Ryck 1, Peter De Graef 1; 1 Laboratory of Experimental Psychology, University of Leuven, Belgium
When presented with a complex new visual stimulus, viewers unfold an information sampling strategy which, through a series of fixations and saccades, provides them with the information they require given the task at hand. When that task is only loosely structured, as is the case when one is asked for an aesthetic appreciation of an unknown work of art, there is no single "best" location to send the eye to next. Under these conditions, scan paths may be shaped by featural saliency, by systematic oculomotor biases, by image composition, or by an active search for elements that allow aesthetic judgment. In the present study, we have attempted to assess the relative impact of these various determinants by asking art novices and art experts to evaluate a series of drawings ("Kalligrafie" by Anne-Mie Van Kerckhoven). During their inspection, eye movements were registered, and afterwards measures of appreciation and evaluation were collected. Repeated stimulus exposure and presence vs. absence of an explanation of the artist's modus operandi were used to assess the effects of episodic and semantic experience with the viewed drawings. Scan paths were analyzed by means of fixation dispersion, fixation duration, and fixation sequence measures. The obtained profiles were related to aesthetic judgments in order to determine whether different ways of looking at a work of art explain differences in judging it. In addition, the observed scan paths were compared to predictions made on the basis of featural saliency models, oculomotor bias models, and artist-defined region-of-interest models. Results indicated that art expertise mediates the predictive validity of these models of eye guidance when viewing art, and that parameters of scan paths allow predictive inferences about the aesthetic judgments that follow them.
Acknowledgement: METH/08/02

Attention: Mechanisms and models
Orchid Ballroom, Boards 413-424
Monday, May 10, 8:30 - 12:30 pm

43.413 Comparing signal detection models of perceptual decision confidence
Brian Maniscalco 1 (brian@psych.columbia.edu), Hakwan Lau 1; 1 Department of Psychology, Columbia University
Introduction: We investigated the mechanisms underlying reports of perceptual confidence. In particular, is all the information used for perceptual decisions available to confidence reporting? Some models of perception suggest that there are multiple channels of information, and subjective reports such as confidence ratings can only tap into one of the channels (e.g. cortical, but not subcortical channels). Is this intuitive view correct? We capitalize on an original psychophysical finding (Lau & Passingham 2006 PNAS) that subjective reports of perceptual confidence and perceptual performance (d') can dissociate, and apply formal model comparison techniques to identify the mechanism underlying confidence reporting.
Methods and Results: We considered several signal detection theory models, including: a simple SDT model where confidence is determined by setting criteria on the primary decision axis; a late noise model where the noisy perceptual signal becomes even noisier when making confidence judgments; and a two-channel model where only one channel contributes to confidence judgments. We compared models by evaluating the likelihood of each model, given the metacontrast masking data, using the Akaike information criterion. All models could account for perceptual performance, but the late noise model provided the best fit to the observed performance-confidence dissociation.
Discussion: Our results suggest that simple SDT models may not adequately characterize the relationship between perceptual decisions and confidence, because they do not provide a process that allows for the kind of performance-confidence dissociation observed in the metacontrast masking paradigm. However, this extra process need not be an extra information processing channel. Our best-fitting model was a hierarchical, single-channel model where noisy perceptual signals accrue further noise when used for rating confidence. This suggests that confidence decisions may be made by mechanisms downstream from perceptual decision mechanisms.
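The Akaike information criterion used for this model comparison has a simple form: AIC = 2k - 2 ln L, where k is the number of free parameters and ln L the maximized log-likelihood; lower values indicate a better fit-complexity trade-off. A minimal sketch follows; the log-likelihoods and parameter counts are placeholders, not the study's values (though the made-up numbers mirror the reported outcome, with the late noise model winning).

    import numpy as np

    def aic(log_likelihood, n_params):
        # Akaike information criterion: penalizes extra parameters.
        return 2 * n_params - 2 * log_likelihood

    # Placeholder fits for the three candidate models.
    models = {
        "simple SDT (criteria on decision axis)": (-1210.4, 5),
        "late noise at confidence stage": (-1172.9, 6),
        "two-channel (one channel rates confidence)": (-1190.2, 7),
    }
    for name, (ll, k) in models.items():
        print(f"{name}: AIC = {aic(ll, k):.1f}")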
43.414 The Attentional Attraction Field: Modeling spatial and temporal effects of spatial attention
Orit Baruch 1 (oritb@research.haifa.ac.il), Yaffa Yeshurun 1; 1 Psychology Department, University of Haifa
Attentional effects have been found for both neuronal and behavioral responses. Most of the studies considered spatial aspects of perception, but some revealed attentional effects in the temporal domain. Here we propose a model that is based on the conception of attention as an attraction field: the allocation of attention to a location attracts (shifts) the centers of receptive fields towards this location. We show that this attentional attraction of receptive fields can serve as a simple unifying framework to explain a diverse range of attentional effects, including gain enhancement, enhanced contrast sensitivity, enhanced spatial resolution, prolonged temporal integration, prolonged perceived duration, prior onset, and degraded temporal resolution. Additionally, the model successfully simulates multiplicative and non-multiplicative modulations of neuronal response and suppressed response surrounding the focus of attention. Thus, this model offers a novel way of looking at attentional effects. Instead of assuming that the fundamental impact of attention is enhancing neuronal response, we suggest that enhanced response and other seemingly unrelated attentional effects may all be a consequence of this attentional attraction field. Notably, this model links physiological measurements at the unit level with psychophysical observations of both the spatial and temporal domains of perception.
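The abstract gives no equations, but the attraction-field idea can be sketched in a few lines: each receptive-field center is displaced toward the attended location by an amount that falls off with distance. The Gaussian falloff and the gain used below are illustrative assumptions, not the authors' formulation.

    import numpy as np

    def attracted_centers(rf_centers, attn_locus, gain=0.5, sigma=2.0):
        # Shift each receptive-field center toward the attended locus;
        # nearby fields move most, locally magnifying that region
        # (higher sampling density = finer spatial resolution there).
        rf_centers = np.asarray(rf_centers, dtype=float)
        vec = np.asarray(attn_locus, dtype=float) - rf_centers
        dist = np.linalg.norm(vec, axis=1, keepdims=True)
        pull = gain * np.exp(-dist**2 / (2 * sigma**2))
        return rf_centers + pull * vec

    # A uniform grid of centers with attention at the origin: centers
    # crowd toward the focus of attention.
    grid = np.stack(np.meshgrid(np.arange(-5, 6), np.arange(-5, 6)),
                    axis=-1).reshape(-1, 2)
    print(attracted_centers(grid, (0, 0))[:3])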
43.415 Pre-Stimulus EEG Oscillations Reveal Periodic Sampling Of Visual Attention
Niko Busch 1,2 (niko.busch@googlemail.com), Rufin VanRullen 3,4; 1 Berlin School of Mind and Brain, Humboldt Universität, Berlin, Germany, 2 Institute of Medical Psychology, Charité - Universitätsmedizin Berlin, Germany, 3 Université de Toulouse, UPS, Centre de Recherche Cerveau & Cognition, France, 4 CNRS, CerCo, Toulouse, France
Our senses are constantly confronted with an excess of information. One mechanism that limits this input to a manageable amount is selective attention. An important effect of sustained attention is the facilitation of perception through enhanced contrast sensitivity. While the term 'sustained' suggests that this facilitative effect endures continuously as long as something is attended, we present electrophysiological evidence that perception at attended locations is actually modulated periodically.
Subjects had to detect brief flashes of light that were presented peripherally at the individual contrast threshold, such that subjects detected approximately half of the flashes (hits) and entirely missed the other half (misses). Additionally, a central cue instructed subjects where to focus their attention, so that stimuli could be presented either at the attended or the unattended location. EEG was recorded concurrently.
As expected, the contrast threshold was lower for attended than for unattended stimuli. Analysis of the EEG data revealed that event-related potentials (ERPs) were of much larger amplitude for hits than for misses. Moreover, the single-trial amplitude of the ERP was correlated with the single-trial phase of spontaneous EEG oscillations in the theta (~7 Hz) frequency band just before stimulus onset. In fact, the single-trial phase in this time-frequency range was significantly predictive of detection performance for attended stimuli - but not for unattended ones.
Spontaneous EEG oscillations correspond to ongoing periodic fluctuations of the local electrical field and the excitability of neuronal populations. The present results extend our recent finding that visual detection performance fluctuates over time along with the phase of such oscillations in the theta (4-8 Hz) and alpha (8-13 Hz) range. By demonstrating that this effect exists only for attended stimuli, the data suggest that sustained attention in fact operates in a periodic fashion.
Acknowledgement: EURYI and ANR 06JCJC-0154
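One common way to obtain the single-trial pre-stimulus phase estimates on which such an analysis rests (the abstract does not specify the authors' exact time-frequency method) is to band-pass filter the EEG in the theta band and read out the instantaneous Hilbert phase just before stimulus onset; the resultant vector length of those phases then quantifies how strongly a trial category (e.g., hits) clusters at a preferred phase. A sketch under those assumptions:

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def prestim_theta_phase(eeg, fs, stim_idx, band=(4.0, 8.0)):
        # Band-pass filter, Hilbert transform, then take the
        # instantaneous phase at the sample before stimulus onset.
        b, a = butter(3, np.array(band) / (fs / 2), btype="bandpass")
        phase = np.angle(hilbert(filtfilt(b, a, eeg)))
        return phase[stim_idx - 1]

    def phase_locking(phases):
        # Resultant vector length: near 1 if trials (e.g., hits)
        # cluster at one phase, near 0 if phases are uniform.
        return np.abs(np.mean(np.exp(1j * np.asarray(phases))))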
43.416 The role of salience-driven control in visual selection
Mieke Donk 1 (w.donk@psy.vu.nl); 1 Department of Cognitive Psychology, Vrije Universiteit Amsterdam
Salient objects in the visual field tend to attract attention and the eyes. However, recent evidence has shown that salience affects visual selection only during the short time interval immediately following the onset of a visual scene (Donk & van Zoest, 2008). The aim of the present study was to further examine the short-lived nature of salience effects. In a series of experiments, we investigated how the salience of different orientation singletons affected probe reaction time as a function of the Stimulus Onset Asynchrony (SOA) between the presentation of a singleton display and a probe display. It was tested whether the transient nature of salience effects could be explained by (1) people using a specific attentional set acting against the maintenance of salience, (2) response priming, or (3) eye movements. The results demonstrated that these factors could not explain the short-lived nature of salience effects. The results are discussed in terms of current models of visual selection.
Donk, M. & van Zoest, W. (2008). Effects of salience are short-lived. Psychological Science, 19(7), 733-739.

43.417 Unifying two theories of local versus global perception: Attention to relative spatial frequency is the medium for shape-level integration
Anastasia V. Flevaris 1,2 (ani@berkeley.edu), Shlomo Bentin 3, Lynn C. Robertson 1,2; 1 Department of Psychology, University of California, Berkeley, 2 Veterans Administration Medical Center, Martinez, 3 Department of Psychology and the Center of Neural Computation, Hebrew University, Jerusalem
Hemispheric asymmetries in the perception of hierarchically arranged visual stimuli (i.e., "hierarchical perception") have long been established, and a myriad of studies have demonstrated that the left hemisphere (LH) is biased towards local processing and that the right hemisphere (RH) is biased towards global processing. However, the mechanisms that produce these asymmetric biases are still debated. Hubner and Volberg (2005) recently proposed that the identities of shapes in hierarchical displays are initially represented separately from their hierarchical level (local/global), and that the LH is more involved in binding shapes to the local level while the RH is more involved in binding shapes to the global level ("integration theory"). This is in contrast to previous models implicating the importance of attentional selection of spatial scale in hierarchical perception (e.g., Double Filtering by Frequency (DFF) theory, Ivry & Robertson, 1998), which proposes that asymmetric biases towards relatively high (by the LH) and relatively low (by the RH) SFs underlie the hemispheric asymmetry in local versus global processing, respectively. Rather than considering these two theories as mutually exclusive, we unify them into a single framework and provide evidence that selective attention to SF is the medium for hierarchical integration. Attention to the higher or lower SFs in a previously presented compound grating modulated shape-level binding errors in a subsequently presented hierarchical display. Specifically, attentional selection of higher SFs facilitated binding by the LH of shapes to the local level, and attentional selection of lower SFs facilitated binding by the RH of shapes to the global level.

43.418 How objects and spatial attention interact: Prefrontal-parietal interactions determine attention switching costs and their individual differences
Nicholas C Foley 1,2,3 (nfoley@bu.edu), Stephen Grossberg 1,2,3, Ennio Mingolla 1,2,3; 1 Department of Cognitive and Neural Systems, Boston University, 2 Center for Adaptive Systems, Boston University, 3 Center of Excellence for Learning in Education, Science and Technology, Boston University
How are spatial and object attention coordinated to achieve rapid object learning and recognition during eye movement search? How do prefrontal priming and parietal spatial mechanisms interact to determine the reaction time costs of intra-object attention shifts, inter-object attention shifts, and shifts between visible objects and covertly cued locations, and their effects on individual differences (Brown and Denny, 2007; Roggeveen et al., 2009)? The current work builds on the ARTSCAN model (Fazl, Grossberg and Mingolla, 2009) of how spatial attention in the Where cortical stream coordinates stable, view-invariant object category learning in the What cortical stream under free viewing conditions.
Our model explains psychological data about covert attention switching and multifocal attention without eye movements. The model predicts that 'attentional shrouds' (Tyler and Kontsevich, 1995) are formed when surface representations in cortical area V4 resonate with spatial attention in posterior parietal cortex (PPC) and prefrontal cortex (PFC), while shrouds compete among themselves for dominance. Winning shrouds support view-invariant object category learning, and active surface-shroud resonances support conscious surface perception. In the present model, visual inputs are transformed by simulated cortical magnification and then separated into left and right hemifield representations, consistent with both anatomical and behavioral evidence of independent attention resources in the left and right visual hemifields (Alvarez and Cavanagh, 2005). Activity levels of filled-in surface representations are modulated by attention from shroud representations in PPC and PFC, consistent with V4 neuronal data (Reynolds and Desimone, 2004). Attentive competition between multiple objects is simulated in variations of the two-object cueing paradigm of Egly, Driver, and Rafal (1994).
Acknowledgement: Supported in part by CELEST, an NSF Science of Learning Center (SBE-0354378), the SyNAPSE program of DARPA (HR001109-03-0001, HR001-09-C-0011), the National Science Foundation (BCS-0235398), and the Office of Naval Research (N00014-01-1-0624).

43.419 Re-thinking the active-passive distinction in attention from a philosophical viewpoint
Carolyn Suchy-Dicey 1 (carolynsd@gmail.com), Takeo Watanabe 2; 1 Philosophy Department, Boston University, 2 Psychology Department, Boston University
Whether active and passive, top-down and bottom-up, or endogenous and exogenous, attention is typically divided into two types. To show the relationship between attention and other functions (sleep, memory, learning), one needs to show whether the type of attention in question is of the active or passive variety. However, the division between active and passive is not sharp in any area of consciousness research. In phenomenology, the experience of voluntariness is taken to indicate activity, but this experience is often confused with others. In psychology, task-dependent behavior is taken to indicate activity, but is often conflated with complex automatic behavior.
In neuroscience, top-down processes are taken to exclusively indicate activity, despite the fact that both top-down and bottom-up activations are always present in the brain. Moreover, work in attention has shown that the results of so-called passive and active processes are sometimes inseparable. Carrasco et al. (2004), for example, show that active attention results in the same change in perceptual contrast that is enacted by bottom-up mechanisms. Likewise, Reynolds and Desimone (2003) show that top-down and bottom-up attention affect neural contrast in the same way. Thus, the passive-active distinction does not seem to neatly separate two types of attention. Perhaps a more convincing model of attention combines active and passive processing into a single mechanism of control. One such potential model is what we call the Unitary Saliency Map Model, first suggested by Koch and Ullman (1985) and developed by Treue (2003). In such a model, top-down and bottom-up processes each feed into the same saliency map, from which attention is controlled. We argue that this makes sense of the phenomenological, psychological, and neuroscientific data. Finally, the acceptance of such a model will force us to review some of our previous findings on attention and its relation to consciousness.
Acknowledgement: NIH-NEI R21 EY018925, NIH-NEI R01 EY015980-04A2, NIH-NEI R01 EY019466

43.420 Application of a Bottom-Up Visual Surprise Model for Event Detection in Dynamic Natural Scenes
Randolph Voorhies 1 (voorhies@usc.edu), Lior Elazary 1, Laurent Itti 1,2; 1 Department of Computer Science, University of Southern California, 2 Neuroscience Graduate Program, University of Southern California
We present an application of a neuromorphic visual attention model to the field of large-scale video surveillance and show that it outperforms a state-of-the-art method at the task of event detection. Our work extends Itti and Baldi's Surprise framework as described in "A Principled Approach to Detecting Surprising Events in Video" (CVPR 2005). The Surprise framework is a biologically plausible and validated model of primate visual attention which uses a new Bayesian model of information to detect unexpected changes in feature detectors modeled after those in the mammalian primary visual cortex. We extend this model to cover extremely large fields of view, and present methods for processing and aggregating such large amounts of visual data. Our system is tested on real-world data in which events containing both pedestrians and vehicles are staged in an outdoor environment and are shot on a 16 mega-pixel camera at 3 frames per second. In these tests, we show that our system is able to provide a greater than 12.5% gain in an ROC AUC analysis over a reference (OpenCV) algorithm ("Foreground Object Detection from Videos Containing Complex Background," Li et al., 2003). Furthermore, our system is rigorously tested and compared against the same algorithm on artificially generated target events in which image noise and target size are independently controlled. In these tests, we show an approximately 27% improvement in noise invariance, and an approximately 10% improvement in scale invariance over the comparison algorithm. The results from these tests suggest the importance of strong collaboration between the neuroscience and computer science communities in developing the next generation of vision algorithms.
Acknowledgement: DARPA CT2WS Project
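The core computation of the Surprise framework extended here can be sketched compactly: each local feature detector keeps a Gamma prior over its expected (Poisson) response rate, updates it with each new frame under a forgetting factor, and registers surprise as the KL divergence between posterior and prior. The update rule below follows Itti and Baldi's published formulation in outline, but the constants are illustrative only.

    import numpy as np
    from scipy.special import digamma, gammaln

    def gamma_kl(a1, b1, a2, b2):
        # KL( Gamma(a1, b1) || Gamma(a2, b2) ), shape/rate form.
        return ((a1 - a2) * digamma(a1) - gammaln(a1) + gammaln(a2)
                + a2 * (np.log(b1) - np.log(b2)) + a1 * (b2 - b1) / b1)

    def surprise(prior_a, prior_b, data, decay=0.7):
        # Update the Gamma prior over a detector's Poisson rate with
        # one new observation (old evidence decays), then measure how
        # far the belief moved, in bits.
        post_a = decay * prior_a + data
        post_b = decay * prior_b + 1.0
        return gamma_kl(post_a, post_b, prior_a, prior_b) / np.log(2)

    print(surprise(10.0, 1.0, data=10.0))  # expected input: low surprise
    print(surprise(10.0, 1.0, data=60.0))  # unexpected jump: high surprise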
43.421 Perception of simultaneity is impaired in correspondence to the amount of allocated attention: Evidence from a visual prior entry study
Katharina Weiß 1 (katharina.weiss@upb.de), Ingrid Scharlau 1; 1 University of Paderborn, Department of Cultural Sciences
Studies on visual prior entry show that two stimuli presented simultaneously, or with a short temporal delay, are only rarely perceived as simultaneous if one of these stimuli is attended to. If the same two stimuli are equally unattended, simultaneity is frequently perceived. The temporal profile model (Stelmach & Herdman, 1991, Journal of Experimental Psychology: Human Perception and Performance, Vol. 17(2), pp. 539-550) predicts such an impairment of simultaneity perception by attention and allows one to quantify it: the higher the amount of attention selectively allocated towards one of two stimuli, the less often these stimuli should be perceived as simultaneous. We tested this hypothesis in a visual prior-entry paradigm, using masked and non-masked peripheral cues for orienting attention. The amount of attentional allocation was manipulated by varying the temporal delay between cue and cued target (34 ms, 68 ms, and 102 ms). Since with longer cue-target delays the cue has more time to shift attention towards its location, a higher amount of attention should be allocated towards the respective cued target. Observers judged the simultaneity of two visual stimuli presented with varying temporal delays, using either a temporal-order or a simultaneity-judgment task. Results supported the hypothesis that perception of simultaneity depends on the amount of attentional allocation: the larger the cue-target delay, the less frequent were the simultaneous judgments. Visibility of the cue and the judgment task had no influence on this effect. These results provide a challenge for theories of temporal (order) perception because they contradict an (implicit) assumption of most models, viz. that simultaneity should be perceived if temporal order cannot be detected and vice versa.

43.422 fMRI evidence for top-down influences on perceptual distraction
Jocelyn Sy 1 (sy@psych.ucsb.edu), Barry Giesbrecht 1; 1 Department of Psychology, University of California, Santa Barbara
Studies of visuospatial attention have demonstrated that the extent to which task-irrelevant information is processed depends on the perceptual demands of processing task-relevant information (e.g., Lavie, 1995). This result has been explained by the load theory of attention, which assumes that perceptual resources are allocated automatically and exhaustively (e.g., Lavie et al., 2004). Here, we tested the extent to which the automatic allocation of perceptual resources can be influenced by top-down attentional control systems, using behavioral and neural measures. Fourteen observers searched for a target letter (X or N) amongst a homogeneous (low-load) or heterogeneous (high-load) array of distractors. Prior to the search array, a color change at fixation cued the display load (84% valid; 16% invalid). Processing of task-irrelevant information was assessed behaviorally by measuring the interference caused by a task-irrelevant flanker letter. fMRI methods were used to record BOLD responses during the task.
The cue+target trials were randomly intermixed with trials in which there was a cue, but no target (cue-only), and trials in which there were no stimuli. The results indicated that behavioral interference was modulated by cue validity, such that on valid trials there was little interference, but on invalid trials there was greater interference under low than high load. fMRI analyses of the cue-only trials revealed regions of the dorsal frontoparietal network. BOLD responses in subregions of this network (bilateral IPL; left MFG) on cue+target trials were correlated with individual differences in behavioral interference. In visual cortex, areas that represented the task-irrelevant locations (identified by a separate localizer scan) showed larger cue-only responses on low-load trials than on high-load trials. In contrast, areas that represented the task-relevant locations showed larger cue-only responses on high-load trials. These results suggest that top-down expectations influence the allocation of perceptual resources to compensate for anticipated levels of perceptual load.
Acknowledgement: UCSB Academic Senate Grant

43.423 Modulation of attention decision thresholds is responsible for inter-trial biases of attention in the distractor previewing effect
Yuan-Chi Tseng 1,2 (yctseng@illinois.edu), Joshua Glaser 3,4, Alejandro Lleras 2,5; 1 Human Factors Division, University of Illinois at Urbana-Champaign, 2 Beckman Institute, University of Illinois at Urbana-Champaign, 3 Physics, University of Illinois at Urbana-Champaign, 4 Mathematics, University of Illinois at Urbana-Champaign, 5 Psychology, University of Illinois at Urbana-Champaign
The distractor previewing effect (DPE) is observed in oddball search tasks and refers to delayed responses to targets that have been associated with distractor status on an immediately preceding target-absent trial. A recent eye-movement study of the DPE (Caddigan & Lleras, 2008) showed that when participants were asked to make a saccade to a color-oddball target, saccade latency was slower and saccades were less accurate when the target color in the current trial was the same as the color of the distractors in the preceding target-absent trial. These changes in eye-movement behavior can be due to changes in bottom-up signals or to top-down modulations of what to do with those signals (or both). In terms of bottom-up changes, attention may be less likely to be attracted to the target because its salience has decreased, thereby slowing the rate of evidence accumulation towards an attention movement in its direction. Alternatively, top-down modulations may also be at play: heightening attentional decision thresholds would require longer accumulation periods before an attention movement is executed. Here we modeled the eye-movement data using a computational model based on a leaky, competing accumulator in which both target and distractors have their own parameters of signal strength and decision threshold. Goodness-of-fit tests showed that changes to signal strengths alone (evidence accumulation) cannot account for the observed data. Only when changes to the decision thresholds were modeled (heightening thresholds to recently seen distractor colors and simultaneously lowering thresholds to other features) was the model able to accurately predict the saccade latency and landing accuracy data. Our results clearly support a top-down interpretation of the DPE (see Lleras et al., 2008) and further specify how attentional biases are instantiated between trials: as modulations of the decision thresholds responsible for triggering attention (and eye movements) toward specific features in the display.
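A minimal version of a leaky, competing accumulator can be written down directly (this is a generic LCA sketch, not the authors' fitted model; all parameters are placeholders). The point it illustrates is the top-down account: raising only the decision threshold of the unit coding the previously-distractor color slows and biases the race without touching signal strength.

    import numpy as np

    rng = np.random.default_rng(0)

    def lca_trial(inputs, thresholds, leak=0.2, inhibition=0.3,
                  noise=0.1, dt=0.01, max_steps=500):
        # Each unit integrates its input, leaks, and is inhibited by
        # the other units; the first to cross its own threshold wins
        # the race and sets the saccade target and latency.
        x = np.zeros(len(inputs))
        for step in range(1, max_steps + 1):
            others = x.sum() - x
            dx = (inputs - leak * x - inhibition * others) * dt \
                 + noise * np.sqrt(dt) * rng.standard_normal(len(x))
            x = np.maximum(x + dx, 0.0)
            crossed = np.flatnonzero(x >= thresholds)
            if crossed.size:
                return crossed[0], step * dt  # winner index, latency (s)
        return None, max_steps * dt

    # Unit 0 = target whose color served as distractor color on the
    # previous trial (heightened threshold); unit 1 = a distractor.
    winner, latency = lca_trial(inputs=np.array([1.0, 0.8]),
                                thresholds=np.array([1.6, 1.2]))
    print(winner, latency)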
43.424 Visual attention related to difficulty in n-back tasks
Sheila Crewther 1 (s.crewther@latrobe.edu.au), Gemma Lamp 2, Andrea Sanchez-Rockliffe 1, David Crewther 2; 1 La Trobe University, Melbourne, Australia, 2 Brain Sciences Institute, Swinburne University, Melbourne, Australia
Behavioral and fMRI techniques have been utilized to investigate the neuroanatomical correlates of goal-directed visual attention. It was hypothesized that comparing target to non-target activation for each participant (single-subject event-related functional magnetic resonance imaging [ER-fMRI]) on two visual 1-back working memory tasks, with three levels of difficulty, would reveal a network of frontal and parietal sites, similar to Corbetta's visual attention networks, with a significant positive correlation between the accuracy level for each task and the strength of the signal contrast between target and non-target activation. One task used highly familiar cartoon faces of varying colour and emotional expression, expected to primarily activate the left hemisphere, while the second task was expected to activate the right hemisphere, using 3-D cubes with 0, 45, or 90 deg rotations. The 1-back design required manipulation, continuous updating, and selective attention, with each task type requiring different button presses to differentiate repeat and non-repeat responses. The block-design and ER-fMRI results both demonstrated fronto-parietal networks of activation, predominantly in the left hemisphere for both tasks, differing with respect to stimulus class and across individuals. No correlation was observed between the strength of activation and task accuracy. Increasing difficulty of the mental rotation 1-back task appeared to activate a bilateral network of areas with greater bilateral parietal than frontal activation, while the facial attributes task activated largely LH-dominant frontal areas.

Attention: Inattention and attention blindness
Orchid Ballroom, Boards 425-438
Monday, May 10, 8:30 - 12:30 pm

43.425 Attentional blink magnitude is predicted by the ability to keep irrelevant material out of working memory
Karen Arnell 1 (karnell@brocku.ca), Shawn Stubitz 1; 1 Department of Psychology, Brock University, Ontario, Canada
Participants have difficulty reporting the second of two masked targets if this second target is presented within 500 ms of the first target -- an Attentional Blink (AB). Even unselected, healthy, young participants differ in the magnitude of their AB.
Previous studies (Arnell, Stokes, MacLean & Gicante, 2010; Colzato, Spape, Pannebakker, & Hommel, 2007) have shown that individual differences in working memory performance using the OSPAN task can predict individual differences in AB magnitude, wherein individuals with higher OSPAN scores show smaller ABs. Working memory performance also predicts AB magnitude over and above more capacity-based memory measures, which are unrelated to the AB (Arnell et al., 2010). Why might working memory performance predict the AB? One possibility is that individuals showing smaller ABs are better able to keep irrelevant information out of working memory. The present study employed an individual differences design, an AB task, and two visual working memory tasks to examine whether the ability to exclude irrelevant information from visual working memory (working memory filtering efficiency) could predict individual differences in the AB. Visual working memory capacity was positively related to filtering efficiency, but did not predict AB magnitude. However, the degree to which irrelevant stimuli were admitted into visual working memory (i.e., poor filtering efficiency) was positively correlated with AB magnitude over and above visual working memory capacity, such that good filtering efficiency was associated with smaller ABs. Good filtering efficiency may benefit AB performance by not allowing irrelevant RSVP distractors to gain access to working memory.
Acknowledgement: Natural Sciences & Engineering Research Council (NSERC), Canadian Foundation for Innovation (CFI) & Ontario Innovation Trust (OIT)

43.426 Attentional Blink without Masking
Vincent Berthet 1 (vksberthet@gmail.com), Sid Kouider 1; 1 Laboratoire de Sciences Cognitives et Psycholinguistique, CNRS/EHESS/DEC-ENS, Paris, France
The Attentional Blink (AB) is a well-known RSVP paradigm in which two visual targets (T1 and T2) are embedded in a stream of distractors. In this paradigm, performance on T2 is largely impaired when it appears briefly after T1 (i.e. within 200-500 ms). This paradigm is thought to reveal the time course of attention. An important feature of the AB paradigm concerns the necessity of a light masking of the two targets. Thus, although the classical interpretation of the AB effect refers to the limited capacity of attentional resources, this interpretation is not straightforward, since masking T2 also contributes to its impaired visibility. Here, by contrast, we demonstrate that while the masking of T2 is necessary in a standard AB task, such a condition is no longer necessary when T2 is a Gabor patch at threshold. Indeed, we reasoned that having T2 at threshold would maximize the probability of an AB effect without masking. These results support a clear capacity-limited account of the AB effect without relying on masking and call for more consideration of the role of temporal attention in theories of the AB.

43.427 Word Superiority within the Attentional Blink
Elena Gorbunova 1 (gorbunovaes@gmail.com), Maria Falikman 1; 1 Lomonosov Moscow State University
There is ample evidence of word superiority effects (WSE) on letter processing under various masked presentation conditions, including forward, backward, metacontrast, and lateral masking. However, there are indications that without focused attention there might be no word superiority (e.g. Pantyushkov, Horowitz, & Falikman, 2008). The question remains whether the WSE would improve performance for stimuli lacking attention, e.g. due to the attentional blink (AB).
Previously, we observed a sort of WSE on the AB (Falikman, 2002), but for words presented letter-by-letter, with at least some letters presumably safe from the AB. Here, we studied the influence of a simultaneously presented word context on target letter processing within the AB. In a rapid serial visual presentation procedure, observers were presented with strings consisting of five identical digits, among which two strings of letters were embedded. After each trial, participants reported the central letter of each letter string using the 2AFC procedure, for the 1st and for the 2nd letter in order. The 1st target was always flanked by identical letters, whereas the 2nd target (the probe) was embedded in a string of letters forming a 5-letter nonpronounceable nonword, a pronounceable pseudoword, or a Russian word, in which the middle letter could be replaced with another letter, as in the Reicher-Wheeler paradigm (e.g. river-rider). For probes embedded in nonwords, a standard AB was obtained. For probe letters embedded in words, there was no AB, a result that might be considered a word superiority effect. For pseudowords, probe performance within the AB was better than with nonwords, but in general still poorer than with words. Thus, the word superiority shown might be partly, but not entirely, explained by the closest familiar context set by the letters flanking the probe. Supported by RFBR, grant #08-06-00171-а.

43.428 T1 difficulty modulates the attentional blink only when T1 is unmasked
Simon Nielsen 1 (sini@imm.dtu.dk), Tobias Andersen 1; 1 Cognitive Systems, DTU Informatics, Technical University of Denmark
The attentional blink (AB) is consistently observed when people are required to identify or detect two consecutive targets (T1 and T2). T2 suffers in performance when it is presented less than 500 ms after T1. The two-stage theory (Chun & Potter, 1995) proposes that the AB is caused by limited processing resources being occupied by T1 when T2 is presented. If so, it is expected that varying T1 difficulty should modulate the AB magnitude. Previous findings, however, are inconsistent: Christmann & Leuthold (2004) manipulated T1 difficulty by contrast and found that an easy T1 (high contrast) decreased the AB, but in a similar experiment Chua (2005) found the opposite. McLaughlin and colleagues (2001) varied T1 difficulty by target exposure and found no effect on the AB. In previous experiments (Nielsen, Petersen & Andersen, VSS 2009) we found no evidence of AB interference from varying T1 difficulty with contrast and exposure.
We suggested that the use of pattern masks might have compromised ours and similar studies. In a new set of experiments we test this hypothesis and vary T1 difficulty with contrast, only this time we omit T1's mask. We find significant AB interference from manipulating T1: in the easy condition (high contrast) we observe an increase in AB magnitude for SOAs of 200 ms. These findings support the hypothesis that visual masking has an antagonistic influence on the AB effects of T1 difficulty. The result, however, is the opposite of what we should expect from the two-stage theory. We hypothesize that the rapid onset of T1 induces an attentional capture effect, which increases with contrast. This challenges the use of contrast to manipulate T1 in studies examining how an easy T1 affects the AB -- any positive effects may be compromised by the increased capture effect.

43.429 Boosting back to the future: Explaining order reversals in the attentional blink
Christian Olivers 1 (cnl.olivers@psy.vu.nl), Frederic Hilkenmeier 2, Martijn Meeter 1, Ingrid Scharlau 2; 1 Department of Cognitive Psychology, VU University Amsterdam, Netherlands, 2 Department of Cognitive Psychology, University of Paderborn, Germany
The second of two targets (T2) is often missed when it follows the first (T1) within 500 ms in a rapid stream of distractors -- a finding referred to as the attentional blink. No attentional blink occurs when T2 immediately follows T1, at lag 1. Intriguingly, T2 is then often reported before T1, even though it occurs 100 ms later. These order reversals have been attributed to limited-capacity episodic representations within which order is completely lost. We provide evidence that order reversals are instead due to prior entry: T1 causes an attentional enhancement that is beneficial to T2 and speeds up its processing. This predicts that order reversals should be reduced when T1 itself is enhanced, e.g. by a cue. Conversely, order reversals should increase when T2 is cued instead. These predictions are borne out by the results. Moreover, the observers that exhibited the greatest shift in performance between T1 and T2 also showed the greatest change in the number of order reversals. These results support the theory that an attentional boost rather than a deficit underlies order reversals, lag-1 sparing, and the attentional blink.

43.430 Specific Task Strategies Affect Repetition Blindness
Winnie Chan 1 (winyc@graduate.hku.hk), William Hayward 1; 1 University of Hong Kong
Repetition Blindness (RB) refers to a cognitive phenomenon in which participants fail to report repeated items in a rapid serial visual presentation (RSVP) stream. Report and detection are two tasks commonly used to measure RB: participants are required to report targets in report tasks, while they are required to detect repetition in the detection task. However, it is unclear whether strategic differences between the two tasks affect RB. In Experiment 1, we measured RB with the two tasks by using two common types of stimuli, letters or words, as the targets, and with symbols as the distractors. A significant RB was found in the detection task, but not in the report task. This surprising result may be due to the order effect of the two tasks. Therefore, we manipulated the order of the two tasks sequentially in Experiment 2 and studied the lag interval between the two targets as well. The result was consistent with Experiment 1 in that RB was found in the detection task across 4 lag intervals, but priming was found in the report task.
Thus, across the two experiments, RB was found more easily in our detection task than in our report task. Therefore, strategic processing in RB may be differentially involved across tasks, and may have stronger effects on report tasks than detection tasks.
Acknowledgement: This research was supported by a grant from the Hong Kong Research Grants Council (HKU744008H) to William G. Hayward.

43.431 High perceptual load does not induce inattentional blindness or early selection
Joshua Cosman 1,2 (joshua-cosman@uiowa.edu), Shaun Vecera 1,2; 1 University of Iowa Department of Psychology, 2 University of Iowa Department of Neuroscience
Perceptual load theory has been one of the most influential theories of attentional selection during the past fifteen years, providing a resolution to the early versus late selection debate by arguing for a flexible, load-dependent mechanism of selection. A number of recent behavioral and neurophysiological studies have demonstrated that high perceptual load displays produce inattentional blindness, in which participants are "blind" to task-irrelevant flanking stimuli that appear in the display. Presumably, when the perceptual load of the primary task is high, early selection occurs and participants completely fail to process task-irrelevant information. Such an interpretation argues for a strong, load-dependent early-selection mechanism occurring very early during perceptual processing. However, because inattentional blindness phenomena might be attributed to memory failures, not perceptual failures, we hypothesized that more sensitive measures of flanker identification might provide evidence that participants processed the task-irrelevant flankers to a relatively late level. In the current experiments we measured distractor-related Simon effects and sequential effects to assay interference from flanking stimuli, instead of measuring traditional flanker effects. When distractor processing was measured using traditional flanker effects we replicated the basic load effect: participants showed no flanker interference under high perceptual load. However, when examining the Simon effect and sequential effects, large interference effects were observed in high-load displays, indicating that the distractors' identities were processed and affected responses. These findings suggest that high perceptual load neither induces 'blindness' for task-irrelevant information nor involves early selection.

43.432 Blind, Blinder, Blindest: Individual differences in change blindness and inattentional blindness
Melinda S. Jensen 1 (jensenm@uiuc.edu), Daniel J. Simons 1; 1 University of Illinois at Urbana-Champaign
Research on change blindness and inattentional blindness has explored why and when failures of visual awareness occur, but few studies have examined who is most susceptible to failures of awareness. Although a few studies have shown group differences in the detection of changes and unexpected events, most have focused solely on how scene content influences detection for groups with special interests rather than on more global individual differences. Here we explored whether individual differences in perception, attention, cognitive style, and personality predict change blindness and inattentional blindness. Participants completed a battery of change blindness, inattentional blindness, perceptual, and personality measures, including both incidental and intentional tests of visual awareness.
A variety of personality measures, including factors related to effort, amiability, intelligence, and speed, covaried with performance on basic measures of attention and perception as well as with change detection performance. Most of these individual differences in personality appear to influence the strategies people are likely to use when performing the tasks. Interestingly, the pattern of predictors for intentional change detection tasks differed from that for unexpected changes in incidental change detection and for unexpected objects in an inattentional blindness task. For example, a better functional field of view is associated with more efficient intentional search for change, but not with the detection of unexpected visual events. Performance on a flicker change detection task was unrelated to the likelihood of noticing unexpected objects or changes. These findings are consistent with the idea that individual differences in perceptual and attentional abilities do not predict detection of unexpected events. We consider how individual differences in personality interact with the task demands for intentional and incidental tasks to predict who will notice expected and unexpected visual events.

43.433 Change Blindness: A Comparison of Selective Attention of Novice and Experienced Drivers
Andrew F. Osborn 1 (andrew.osborn@email.fandm.edu), D. Alfred Owens 1; 1 Psychology, Franklin & Marshall College
This study used a change blindness paradigm to investigate differences in selective attention between non-drivers (no driving experience), new drivers (one year or less driving experience), and experienced drivers (three years or more driving experience). A modified flicker method used typical road scenes to test change blindness for stimuli of varying relevance to driving. Twelve photographs of common road scenes, which ranged in complexity from open rural roads to congested urban streets, were used to create 36 pairs of modified and unmodified road scenes. The changing elements were selected to include three levels of conspicuity and relevance to the driving task: relevant/conspicuous, relevant/inconspicuous, and not relevant. The participants' task was to identify the manipulated element in each pair of road scenes within a 30-second time constraint. Analysis of variance showed a main effect of response time between experience groups, with non-drivers exhibiting significantly slower response times in detecting the element of modification compared to drivers with three years or more driving experience.
43.442 Grouping of orientation but not position cues in the absence of awareness
D. Samuel Schwarzkopf 1,2 (s.schwarzkopf@fil.ion.ucl.ac.uk), Geraint Rees 1,2; 1 WT Centre for Neuroimaging at UCL, 12 Queen Square, London WC1N 3BG, United Kingdom, 2 UCL Institute of Cognitive Neuroscience, 17 Queen Square, London WC1N 3AR, United Kingdom
How the brain constructs a coherent representation of the environment from noisy and discrete visual input remains poorly understood. Here we explored whether awareness of the stimulus plays a role in the integration of local features into a representation of global shape. Participants were primed with a shape defined either by position or orientation cues, and performed a shape discrimination task on a subsequently presented probe shape. Crucially, the probe could be defined either by the same or by different cues as the prime, which allowed us to distinguish the effect of priming by local features from priming by global shape. We found a robust priming benefit for visible primes, with response times being faster when the probe and prime were the same shape, regardless of the defining cue. However, rendering the prime invisible uncovered a dissociation: while there was only local priming for position-defined primes, we found only global priming for orientation-defined primes. This suggests that the brain extrapolates global shape from orientation cues in the absence of awareness of a stimulus, but it does not integrate invisible position information. In further control experiments we tested the effects of invisible priming on the processing of local elements without a global interpretation.
Acknowledgement: Wellcome Trust

43.443 Finding the egg in the snow: The effects of spatial proximity and collinearity on contour integration in adults and children
Bat-Sheva Hadad 1,2 (hadadb@mcmaster.ca), Daphne Maurer 1, Terri L. Lewis 1; 1 Department of Psychology, Neuroscience & Behaviour, McMaster University, 2 Department of Psychology, Emek-Israel College
We tested adults and children aged 7 and 14 on the ability to integrate contour elements across variations in the collinearity of the target elements and in their spatial proximity. Participants were asked to find the 14 Gabors forming an egg-shaped contour among randomly positioned background Gabors. Across trials, the density of background noise Gabors was varied according to a staircase procedure to determine a threshold for each combination of collinearity and spatial proximity. Thresholds were expressed as the relative density of the background noise elements to target elements (Δ). When the collinearity of the target Gabors was high, the thresholds of adults (n = 24 in each of Experiments 1 and 2) were largely independent of spatial proximity and varied only with Δ. It was only when collinearity was less reliable, because the orientation of the elements was randomly jittered, that spatial proximity began to influence adults' thresholds. These patterns correspond well to the probability that real-world contours compose a single object: collinear elements are more likely to reflect parts of a real object, and adults integrate them easily regardless of the proximity among those collinear elements. The results from 7- and 14-year-olds (n = 24 per age group) demonstrate a gradual improvement of contour integration throughout childhood and the slow development of sensitivity to the statistics of natural scenes. Unlike adults, integration in children at both ages was limited by spatial proximity regardless of collinearity, and one strong cue did not compensate for the other. Only after age 14 did collinearity, the most reliable cue, come to compensate efficiently for spatial proximity.
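The staircase used to titrate background density can be illustrated generically (the abstract does not state which staircase rule was used; below is a common 2-down/1-up variant that converges near 70.7% correct, with made-up step sizes). Here `respond` stands for one trial of the search task at a given background-to-target density ratio.

    import numpy as np

    def staircase(respond, start=2.0, step=0.2, n_reversals=8, max_trials=400):
        # 2-down/1-up: two consecutive correct responses make the task
        # harder (denser background); one error makes it easier. The
        # threshold is the mean density ratio at the reversal points.
        level, run, direction = start, 0, 0
        reversals = []
        for _ in range(max_trials):
            if len(reversals) >= n_reversals:
                break
            if respond(level):            # observer finds the contour
                run += 1
                if run < 2:
                    continue
                run, new_dir = 0, +1
            else:
                run, new_dir = 0, -1
            if direction and new_dir != direction:
                reversals.append(level)   # direction changed: a reversal
            direction = new_dir
            level = max(level + new_dir * step, 0.0)
        return float(np.mean(reversals))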
43.444 The role of grouping in shape formation: New effects due to directional symmetry
Jurgis Skilters 1 (jurgis.skilters@lu.lv), Maria Tanca 2, Baingio Pinna 2; 1 Univ. of Latvia, Center for the Cognitive Sciences and Semantics / Dept. of Theoretical Philosophy and Logic, Latvia, 2 University of Sassari, Dept. of Architecture and Planning, Italy
The problem of perceptual organization was studied by Wertheimer in terms of grouping, by showing how elements in the visual field 'go together' to form an integrated, holistic percept according to some general, well-known principles. Grouping per se does not make any prediction about shape. The role of the gestalt principles is to define the rules of "what is or stays with what," i.e., the grouping, not the shape. The notion of 'whole' due to grouping is phenomenally different from the one due to shape. The form of grouping represents the groups of elements that assume the role of "parts" within a holistic percept. The form of shape is instead the result of a global perceptual process, emerging parallel to or after the form of grouping and giving the whole a unitary form, mostly along the boundary contours. This suggests that grouping and shape formation can be considered as two complementary, integrated processes of perceptual organization. The main purposes of this work are (i) to study the relationship between grouping and shape perception, (ii) to demonstrate that the form of grouping can influence the form of shape, and (iii) to demonstrate that directional symmetry is a second-order organization that polarizes the perception of the shape and represents the basic principle of shape formation. Psychophysical experiments under motion conditions revealed several new shape illusions due to grouping and depending on directional symmetry.
Acknowledgement: Supported by Fondo d'Ateneo ex 60% (to BP).

43.445 Visual grouping in Gabor lattices: a psychophysical and computational study
Nathalie Van Humbeeck 1,2 (nathalie.vh@gmail.com), Johan Wagemans 2, Roger Watt 1; 1 University of Stirling, Scotland, United Kingdom, 2 University of Leuven, Belgium
In this study we examined the relative contribution of two perceptual grouping principles, namely proximity and collinearity, to the perception of a global orientation. For this purpose, we used Gabor lattices: two-dimensional patterns of regularly placed Gabor patches aligned in a sheared grid with two different principal directions (its axes). The distance between Gabor elements along each axis of the grid and the local orientation of the Gabor elements with respect to the grid were manipulated, in order to examine the effects of proximity and collinearity, respectively. We also examined whether the presentation time of the Gabor lattice had an influence on which grouping principle dominated the participants' percept. We found that proximity and collinearity interacted with each other to determine which axis was seen as the global orientation. There was a relative preference for grouping based on collinearity for Gabor lattices in the short-duration condition, whereas there was a relative preference for grouping based on proximity in the long-duration condition. We will explain the pattern of results in terms of first- and second-order filters tuned to different orientations and scales.
Acknowledgement: Erasmus
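A Gabor lattice of the kind described above is easy to generate, which makes the two manipulations concrete: the lengths of the two lattice axes set proximity along each principal direction, and the elements' local orientation sets their collinearity with an axis. The patch size, spacing, and spatial-frequency values below are placeholders, not the stimulus parameters of the study.

    import numpy as np

    def gabor(size, wavelength, theta, sigma):
        # A single Gabor patch: an oriented sinusoid under a Gaussian.
        r = np.arange(size) - size // 2
        x, y = np.meshgrid(r, r)
        xr = x * np.cos(theta) + y * np.sin(theta)
        env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        return env * np.cos(2 * np.pi * xr / wavelength)

    def gabor_lattice(n=7, patch=32, ax=(40, 0), bx=(18, 52), local_theta=0.0):
        # Identical Gabors at integer combinations of two lattice axes
        # (a sheared grid). Axis lengths control proximity; local_theta
        # controls how collinear the elements are with either axis.
        pts = np.array([i * np.array(ax) + j * np.array(bx)
                        for i in range(n) for j in range(n)])
        pts -= pts.min(axis=0)
        img = np.zeros((pts[:, 1].max() + patch, pts[:, 0].max() + patch))
        g = gabor(patch, wavelength=8, theta=local_theta, sigma=5)
        for x, y in pts:
            img[y:y + patch, x:x + patch] += g
        return img

    lattice = gabor_lattice()  # e.g., display with matplotlib's imshow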
We will explain the pattern of results in terms of first- and second-order filters tuned to different orientations and scales.
Acknowledgement: Erasmus

43.446 Creating links in empty space: an fMRI study of perceptual organization
Mitsouko van Assche1 (mitsouko.van-assche@etu.unistra.fr), Anne Giersch1; 1Inserm U666, Clinique Psychiatrique, Centre Hospitalier Régional Universitaire de Strasbourg
When two objects must be compared at the same time, information selection can be modulated by the presence of grouping factors. Automatically processed grouping cues enable effortless selection of groups of objects. On the other hand, selecting non-automatically grouped objects implies the creation of mental links between them through top-down processes. To explore the neural basis of bottom-up versus top-down grouping, 16 participants were tested in an fMRI experiment with a variant of the Repetition Discrimination Task (Beck & Palmer, JEP:HPP 2002). Circles and squares were presented in spatial alternation around a central fixation point, except for two figures, the target pair (i.e., two contiguous figures that were identical, either two squares or two circles). Contiguous figures could be linked by a connector (within-group pair) or not (between-group pair), and located within the same or in separate hemifields. Participants had to determine the identity of the target pair (i.e., circles or squares). Two blocks incited subjects to prioritize either target type, by manipulating the proportion of within-group and between-group trials. Each block was followed by a series of trials with equivalent proportions of within- and between-group trials, in an event-related design. Continuous eye-movement recording in the scanner ensured correct central fixation. The behavioural data show an RT cost for between-group compared to within-group pairs, reproducing earlier results (Van Assche, Gos, Giersch, Vis Cogn 2008). Frontal and parieto-occipital areas were more strongly recruited to identify between-group compared to within-group targets, especially when the targets were in separate hemifields. For between-group pairs, additional internal temporal activations were observed after subjects were incited to prioritize between-group over within-group pairs. The results are discussed in terms of the building of a hierarchical representation superimposing between-group on within-group pairs.


43.447 Perceptual Organization based on Gestalts: Emergent Features in Two-Line Space
Anna Stupina1 (ais@rice.edu), James Pomerantz1; 1Rice University
What exactly are the "parts" that make up a whole object, and how and when do parts group? The answer we propose hinges on Emergent Features (EFs), defined as features that (1) are possessed by no individual part but materialize only from the configuration, and (2) make the object more salient than its parts. The Configural Superiority Effect (CSE) was used to diagnose EFs in an odd-quadrant visual discrimination task. The CSE is obtained when discrimination between two parts (e.g., a vertical line segment in one quadrant vs. a horizontal in each of the other three) is made faster by adding the same context element to each quadrant (e.g., another vertical). Such a result suggests that adding a second line segment creates EFs that are processed more quickly than are isolated segments. Previous work looking for CSEs with dot and three-line patterns has demonstrated several EFs, including orientation, proximity, and linearity. This experiment focuses on two-line configurations. A portion of the infinite, 8-dimensional space of all possible configurations of two line segments was systematically sampled by varying the x and y coordinates of the second segment, thus sweeping out a 2-dimensional plane through that space. The displays were coded to determine what EF differences arose between the odd quadrant and the other three. RTs were then mapped across this plane and compared with RTs predicted from the number of EF differences between the odd quadrant and the other three and from the direction of those differences (present vs. absent in the odd quadrant). The results show large differences in performance depending on the location of the context segment and demonstrate salient EFs including Parallelism, Connectivity, Intersections, and others.

43.448 Classification of seismic images depends on perceptual skill more than geological expertise
Walter Gerbino1 (gerbino@units.it), Chiara Micelli1; 1Department of Psychology "Gaetano Kanizsa", University of Trieste, Italy
Expert interpreters inspect seismic images to identify relevant features and diagnose the possible presence of interesting subsoil structures. Typically, a 2D seismic image is a set of adjoining seismic traces referring to variations of acoustic impedance that, taken together, compose a non-mimetic representation of the subsoil. Seismic interpreters must rely both on domain-specific knowledge in the field of structural geology and on general-purpose visual abilities involved in texture segregation and feature matching. We studied three groups of observers with different degrees of expertise with seismic images (researchers of the National Institute of Oceanography and Experimental Geophysics; geology students; psychology students) and compared their performance in a task in which they classified a target fragment as belonging or not belonging to a large seismic image. As expected, more experienced observers performed better than less experienced observers; furthermore, observers of all groups classified meaningful targets (those with clear, geologically relevant features) more efficiently than non-meaningful targets (those with uncertain features).
Against our expectations, however, the superiority of meaningful over non-meaningful targets did not increase with increasing expertise; rather, it appeared to depend on the level of individual perceptual skill, which was broadly distributed over the three groups. We argue that performance in the classification of seismic image fragments (a possible component of a seismic interpreter's work) reflects general-purpose visual abilities more than geological expertise.
Acknowledgement: MS01_00006 grant (Industry 2015)

43.449 Representing grating-texture surface begins with spreading of grating-texture from the surface boundary contour
Yong R. Su1,2 (ysu@salus.edu), Teng Leng Ooi1, Zijiang J. He2; 1Department of Basic Sciences, Pennsylvania College of Optometry at Salus University, USA, 2Department of Psychological and Brain Sciences, University of Louisville, USA
Research on color filling-in suggests the visual system represents a homogeneous surface by first coding the surface boundary contours and then filling in the interior with the surface feature (color). Here, we investigated whether texture surfaces are similarly represented. A trial began with the presentation of a vertical grating display (4 cpd, 4.14 x 4.14 deg). After 200 msec, a rectangular region (length = 2.67 deg) with horizontal grating (4 cpd) was added onto the central area of the vertical grating display. The upper and lower boundary contours of the rectangular region were blurred with a Gaussian kernel (FWHM = 0.6 deg), leaving only the right and left boundary contours with sharp edges. This rectangular region was presented for 30, 50, 100, 150, 200, 250, or 500 msec. Observers were instructed to judge the proportion of the central area of the rectangular region not filled with the horizontal grating, and to report the proportion on a rating scale from 0 to 6. A rating of "0" indicates that the entire length of the rectangular region was filled with horizontal grating, whereas "6" indicates that no horizontal grating was seen in the entire rectangular region. It is predicted that, if the representation of the rectangular region begins with the horizontal grating spreading from the boundary contours, the rated number (proportion of the central area not filled with horizontal grating) will decrease with increasing stimulus duration. Confirming the prediction, we found that the area of the rectangular region represented by horizontal grating texture increased with stimulus duration. We further fitted our data according to the cortical magnification factors in areas V1 and V2, and found that the average grating spreading speeds were relatively constant at 49.5 and 78.3 cm/s, respectively. Thus, our study underscores the important role of boundary contours in representing texture surfaces.
Acknowledgement: NIH (R01 EY015804)
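The cortical-speed estimates quoted above rest on converting retinal spreading distance into cortical distance through a magnification factor. The sketch below illustrates that conversion with a conventional inverse-linear magnification function M(E) = A/(E + e2); the constants, the 150 ms spreading time, and the travel from the 1.33 deg half-length of the region to its centre are all illustrative assumptions, not the study's fitted values.

import numpy as np

A_MM = 17.3     # scaling constant (mm), assumed V1-like value
E2_DEG = 0.75   # eccentricity at which M halves (deg), assumed

def cortical_distance_mm(ecc_from_deg, ecc_to_deg, n=1000):
    """Integrate M(E) dE between two eccentricities (mm of cortex)."""
    ecc = np.linspace(ecc_from_deg, ecc_to_deg, n)
    return np.trapz(A_MM / (ecc + E2_DEG), ecc)

# Suppose the filled-in grating front travels from the aperture edge
# (half of the 2.67 deg region, i.e. 1.33 deg) to the centre in 150 ms:
d_mm = cortical_distance_mm(0.0, 1.33)
speed_cm_per_s = (d_mm / 0.150) / 10.0
print(f"Cortical spread: {d_mm:.1f} mm -> {speed_cm_per_s:.1f} cm/s")
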
43.450 Contribution of motion parallax to depth ordering, depth magnitude and segmentation
Ahmad Yoonessi1 (ahmad.yoonessi@mail.mcgill.ca), Curtis Baker1; 1McGill Vision Research, McGill University, Montreal, Canada
Motion parallax, i.e., differential retinal image motion resulting from movement of the observer, provides an important visual cue to segmentation and depth perception. Previously we examined its role in segmentation (VSS 2009); here we additionally explore its contribution to depth perception. Subjects performed lateral head translations while an electromagnetic tracker recorded head position. Stimuli consisted of random dots on a black background, whose horizontal displacements were synchronized proportionately to head motion by a scale factor (gain) and were modulated using square- or sine-wave envelopes to generate shearing motion. Subjects performed three tasks: depth ordering, depth magnitude, and segmentation. In depth ordering they performed a 2AFC task, reporting whether the half-cycle above or below the centre of the screen appeared nearer. Depth magnitude estimates were obtained by matching the perceived depth to that of a texture-mapped 3D surface of similar shape rendered in perspective view. Segmentation performance was assessed by measuring discrimination thresholds for envelope orientation. This task included two conditions: one in which stimuli were synched to the head motion, and another in which previously recorded motions of the stimuli were "played back". For square-wave modulation, good depth-ordering performance was obtained only at low gain values; however, sine-wave modulation yielded unambiguous depth across a broader range of gains. In the depth magnitude task, subjects matched proportionately greater depths for larger gain values. In the segmentation task, orientation discrimination showed surprisingly similar thresholds for head motion and playback. These results suggest that the ecological range of depths in which motion parallax gives good segmentation is very wide, whereas for good depth perception it is quite limited. The dependence of depth ordering on modulation waveform suggests that motion parallax is more useful for depth differences within one object than between occluding objects.
Acknowledgement: Supported by NSERC grant OGP0001978 to C.B.
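The gain coupling described above amounts to a simple update rule: each frame, a dot's horizontal offset is the measured head displacement scaled by the gain, weighted by a square- or sine-wave envelope over the dot's vertical position to produce shear. The sketch below is a schematic illustration; the envelope frequency, screen size, and function names are assumptions, not the authors' implementation.

import numpy as np

def dot_displacement(head_pos_cm, y_deg, gain, envelope="sine",
                     cycles_per_screen=1.0, screen_h_deg=20.0):
    """Horizontal shift of a dot at vertical position y_deg."""
    phase = 2 * np.pi * cycles_per_screen * y_deg / screen_h_deg
    env = np.sin(phase)
    if envelope == "square":
        env = np.sign(env)        # square-wave envelope: abrupt shear boundary
    return gain * head_pos_cm * env

# Dots above vs. below the midline shift in opposite directions (shear):
y = np.array([-5.0, 5.0])
print(dot_displacement(head_pos_cm=2.0, y_deg=y, gain=0.5))
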


43.451 Local propagation of border-ownership
Vicky Froyen1,2 (vicky.froyen@gmail.com), Jacob Feldman1,3, Manish Singh1,3; 1Center for Cognitive Science, Rutgers University, New Brunswick, NJ, USA, 2University of Leuven (K.U.Leuven), Leuven, Belgium, 3Department of Psychology, Rutgers University, New Brunswick, NJ, USA
Most studies of figure/ground have used methods that presume a single global figural assignment, such as asking subjects which entire region appears in front. In our study, we used the motion-probe method introduced in Kim and Feldman (2009), designed to assess figure/ground locally at arbitrary points along a boundary, seeking evidence of local propagation of border-ownership (figure/ground assignment) along the boundary. In the motion-probe method, a small, spatially circumscribed motion signal is created at a point on the boundary between two coloured regions, and the subject is asked which colour appeared to move; because the figural region "owns" the boundary, the response reflects border-ownership. In our study, subjects were shown semicircular shapes to which a bar was added in such a way that in some configurations the T-junctions induced a clear local change in figure/ground assignment (example display at http://ruccs.rutgers.edu/~jacob/Demos/figure_ground.html). We then assessed figure/ground at various other points along the border, ranging from relatively near the inducing bar to relatively far, giving us the opportunity to capture the propagation of the figural status induced by the junction cue. We found a systematic effect of probe position, with probes closer to the inducer showing an increasingly strong tendency to receive a figure/ground assignment consistent with the inducer; that is, as if the figural status propagated spatially from the point of the inducer. A computational model of the propagation mechanism based on Bayesian belief networks suggests intriguing parallels to known properties of the neural coding of border ownership in visual cortex.
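The propagation account can be illustrated with a toy chain model in the spirit of the Bayesian belief-network analysis mentioned above (this is not the authors' model; the junction evidence and link coupling are assumed values). For a symmetric binary chain, passing a belief through one link shrinks its deviation from 0.5 by a factor of (2c - 1), where c is the probability that neighbouring border nodes share the same owner, so junction-induced ownership decays smoothly with distance along the border, mirroring the probe-position effect.

import numpy as np

def ownership_profile(n_nodes=10, p_junction=0.95, coupling=0.9):
    """P(junction-consistent ownership) at successive nodes along the border."""
    shrink = 2 * coupling - 1
    return np.array([0.5 + (p_junction - 0.5) * shrink**i
                     for i in range(n_nodes)])

print(np.round(ownership_profile(), 3))
# Belief starts near p_junction at the inducer and decays toward 0.5 with distance.
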
43.452 Neural adaptation reveals cultural tuning in local/global processing
David J. Kelly1 (davidk@psy.gla.ac.uk), Luca Vizioli1, Ania Dzieciol1, Roberto Caldara1; 1Department of Psychology and Centre for Cognitive Neuroimaging, University of Glasgow, UK
Cultural differences in the way adults from East Asian and Western Caucasian societies perceive and attend to visual stimuli have been consistently demonstrated in recent years. Westerners display an analytical processing style, attending to focal objects and their features. By contrast, Easterners show interest in context and relationships between objects, which is indicative of holistic processing. Although much behavioural evidence supports the existence of these cultural processing styles, the neural mechanisms underlying such perceptual biases are poorly understood. The combination of Navon figures, which contain both global and local elements, and the measurement of neural adaptation constitutes an ideal way to probe this issue. Here we exploited a novel single-trial EEG neural adaptation method and recorded electrophysiological signals from British and Chinese observers viewing two sequentially presented Navon shapes. To control for potential confounds related to Westerners' familiarity with letters from the Roman alphabet, we constructed Navon figures made from geometric shapes. Additionally, to control for potential attentional biases and eye movements, observers performed a colour-change detection task at central fixation. In each trial, participants viewed a Navon shape followed by a further shape from one of four categories: the same; local changes; global changes; local and global changes. Both groups displayed the most adaptation at P1 and N170 when neither element was changed and the least when both were altered. However, the critical results come from the local-change and global-change conditions. In contrast to Westerners, Easterners showed no sensitivity to local changes, with as much adaptation occurring as when no elements were changed. This suggests that default neural coding of local and global properties occurs very early in visual processing and differs markedly between cultures, with inefficient coding of local elements in Easterners. Such visual tuning could underlie more complex behavioural differences observed across human populations.
Acknowledgement: ESRC

43.453 Detection of Closure Reverses Unilateral Field Advantage for Repetition Detection
Serena Butcher1 (serena.butcher@gmail.com), Marlene Behrmann2; 1Hamilton College, 2Carnegie Mellon University
Previous research suggests that subjects are faster and more accurate at detecting repeated elements presented unilaterally (both items in the same visual field) than bilaterally (one item in each visual field). This finding has been explained in terms of an efficient within-field organization process for groups defined by similarity and proximity (Butcher & Cavanagh, 2008). But what about other grouping cues? Here, we examine the cue of closure. On each trial, subjects were presented with four items, each occupying one of four positions defined by left/right x up/down around central fixation. The participants' task was to report whether any two of the four items were the same or whether all four were unique. The repeated target stimuli were square brackets. The distractors were composed by rearranging the line segments of the targets. The repeated brackets occurred either in the same orientation ("[ [", no closure) or in mirror-reversed orientation ("[ ]", closure). We found a significant unilateral field advantage in the no-closure condition (12 ms, t(18) = 2.0, p = 0.05), replicating previous work on detecting repetitions presented in the same orientation. However, in the closure condition, bilateral repetitions were detected significantly faster than unilateral repetitions (28 ms, t(19) = 3.83, p = .001). These results suggest that closure is more efficiently detected across visual fields than within a hemifield.

43.454 The visual attractor illusion
Tal Makovski1 (tal.makovski@gmail.com), Khena M. Swallow1, Yuhong V. Jiang1; 1Department of Psychology and Center for Cognitive Sciences, University of Minnesota, Twin Cities, MN, USA
The perception of an object's features can often be biased by the object's immediate surroundings, leading to many perceptual illusions. In contrast, the presence of nearby static objects often enhances the perceived spatial location of another object. Here we present a new type of visual illusion in which the presence of a static object (the attractor) alters another object's perceived location. Participants localized the edge of a briefly presented and masked target object (e.g., an outline square or a centrally presented line). Localization was accurate when the masked target was presented in isolation. However, when another nearby object (e.g., a face) was presented at the same time as the target, localization deviated toward the nearby object. This "visual attractor illusion" (VAI) was found across different attractor types and across different colors of targets and masks. The VAI is a relatively unique phenomenon that can be distinguished from other mislocalization effects such as foveal bias, the flash-lag effect, or the landmark effect. Furthermore, the VAI is a relatively high-level effect that appears to be modulated by attention: it was stronger when the attractor object was task-relevant rather than task-irrelevant, and diminished as the experiment progressed. Visual transients also play an important role in the illusion, which depends on the sudden onset of the attractor object and backward masking of the target.
We discuss two possible mechanisms: 1) the brief appearance of an object distorts perceptual space, drawing in the perceived location of a neighboring object; 2) localization of a masked target may be weighted towards the position of a concurrently presented visual transient. The VAI may provide a unique example of a grouping-and-assimilation effect in the spatial domain.
Acknowledgement: Grant-in-Aid, University of Minnesota

43.455 Collinear Facilitation Is Recovered Across Disparities by Embedding in a Slanted Surface
Pi-Chun Huang1 (pi_chun2001@yahoo.com.tw), Chien-Chung Chen1,2, Christopher Tyler3; 1Department of Psychology, National Taiwan University, 2Neurobiology and Cognitive Science Center, National Taiwan University, 3Smith-Kettlewell Eye Research Institute
The detection threshold of a Gabor target can be reduced by the presence of collinear flanking Gabors. Such collinear facilitation is disrupted when the target and the flankers have different disparities (Huang et al., 2006, Vision Research). Here, we further investigated whether it is the depth difference or the surface difference between the target and the flankers that causes the abolition of collinear facilitation. The target and the flankers were 2 cy/deg vertical Gabor patches. The distance between the target and the flankers was three wavelengths. There were three viewing conditions: target and flankers were set (1) in the same frontoparallel plane; (2) at different disparities in different frontoparallel planes; or (3) at different disparities but embedded in the same slanted plane, as defined by the orientation difference between the left- and right-eye images. Zero disparity was maintained by reference squares presented at the edge of the display. We measured the target contrast detection threshold with and without the flankers present, using a temporal 2AFC paradigm and the Psi staircase method. Strong collinear facilitation was observed when the target and the flankers were either in the same frontoparallel plane or embedded in the same slanted surface, even though in the latter case the target and the flankers were at different disparities. The facilitation disappeared when the stimuli at this disparity difference were in different frontoparallel planes. Note that, for all viewing conditions, the target and the flankers were always collinear when viewed monocularly and thus produced collinear facilitation. Even if the collinear facilitation was operating at the monocular level, once the target and flankers occupied different disparities, the collinear facilitatory effect was disrupted.
Our results suggest that it is the difference in surface assignment, not the difference in disparity per se, that causes the disruption of collinear facilitation.
Acknowledgement: Supported by 96-2413-H-002-006-MY3 to CCC and AFOSR FA09-1-0678 to CWT

43.456 The effect of background grouping on a central task in patients with parietal lobe lesions
Setu Havanur1 (setu.gh@gmail.com), Glyn Humphreys1, Harriet Allen1; 1School of Psychology, University of Birmingham
We investigated the effects of grouping between irrelevant background stimuli on a central task, comparing performance between non-lesioned control participants and patients with visual neglect and extinction. Our patient group included patients with right and with left visual neglect. On each trial participants reported the presence or absence of a central target (a single digit) presented on a random noise patch (0.5 contrast; duration = 100 ms). The visibility of the central target was matched between participants such that performance on a pretest was always 80% correct. In addition to the central stimulus, a pattern of black and white dots was presented on either side of fixation on a gray background. The dots were either arranged in alternating rows of black and white dots or randomly placed in the same matrix (intermixed in a single block). Patients were faster (p = 0.03) on the central task when the background dots were in rows than when they were in random positions. However, the main effects of field and patient group (left vs. right hemisphere lesion) and their interactions were not significant. We did not find this effect of grouping in the non-lesioned control participants (Fs < 1).


signals are pooled differently, via intersection-of-constraints (IOC) and vector-average processes, respectively. Previous research (e.g., Vis Res, 2003, 2290-2301) has also indicated that form cues can influence how motion signals are perceived. We investigated whether form cues can affect the pooling of motion signals and whether they differentially affect the pooling of 1D and 2D signals. Global-Gabor (GG) and global-plaid (GP) stimuli were used. These stimuli consist of multiple apertures that contain either Gabors or plaids, respectively. In the GG stimulus the global solution is defined by having the Gabor carriers move (1D signals) such that they are consistent with a single IOC-defined solution. In the GP stimuli the plaid motions (2D signals) are consistent with a single VA solution. Form cues can be introduced by adding orientation information to the apertures that is either consistent with (aligned with) or inconsistent with (orthogonal to) the global solution. With the 1D stimuli, inconsistent form cues resulted in a loss of the IOC solution; observers instead perceived motion along the axis defined by the orientation cue. With the 2D signals, form cues had minimal effect. These results indicate that form cues can affect the pooling of 1D but not 2D signals. It is possible that the form cues do this by disambiguating the family of possible 2D solutions by providing the direction of motion, hence turning the 1D signals into 2D signals.
Acknowledgement: Australian Research Council through the ARC Centre of Excellence for Visual Science #CE0561903
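The two pooling rules named here are easy to contrast numerically. Each local 1D measurement constrains the global velocity v only along its grating's normal (n_i · v = s_i); the intersection of constraints solves those equations jointly, whereas a vector average simply averages the component vectors s_i·n_i. The sketch below (an illustration, not the authors' code) shows that for a rigid translation the IOC recovers the true velocity while the vector average underestimates its speed.

import numpy as np

def pool(normals_deg, normal_speeds):
    n = np.deg2rad(np.asarray(normals_deg))
    N = np.stack([np.cos(n), np.sin(n)], axis=1)   # unit normals, one per row
    s = np.asarray(normal_speeds, dtype=float)
    v_ioc, *_ = np.linalg.lstsq(N, s, rcond=None)  # solve n_i . v = s_i jointly (IOC)
    v_va = (s[:, None] * N).mean(axis=0)           # vector average of components
    return v_ioc, v_va

# True global motion: 1 deg/s rightward, v = (1, 0); components with normals
# at +/-60 deg each measure a normal speed of cos(60 deg) = 0.5.
v_ioc, v_va = pool([60, -60], [0.5, 0.5])
print("IOC:", np.round(v_ioc, 3), " VA:", np.round(v_va, 3))
# IOC recovers (1, 0); the vector average gives (0.25, 0), underestimating speed.
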
43.503 Different pooling of motion information for perceptual speed discrimination and behavioral speed estimation
Claudio Simoncini1 (claudio.simoncini@incm.cnrs-mrs.fr), Laurent U. Perrinet1, Anna Montagnini1, Pascal Mamassian2, Guillaume S. Masson1; 1Team DyVA, INCM, CNRS & Université de la Méditerranée, Marseille, France, 2LPP, CNRS & Paris Descartes, Paris, France
To measure the speed and direction of moving objects, the cortical motion system pools information across different spatiotemporal channels. Here, we investigate this integration process for two different tasks. Primate ocular following responses are driven by speed information extracted by populations of speed-tuned neurons, and they provide an excellent probe for speed estimation. We contrasted these responses with a psychophysical speed discrimination task run in the same subjects and with the same stimuli. We used short presentations (250 ms) of "motion clouds" (Schrater et al., 2000) in which the width of the spatial frequency distribution (σsf) was varied for different mean speeds (10-50°/s). Eye movements were recorded with an EyeLink 1000, using a classical ocular-following paradigm. Stimuli were displayed on a CRT monitor (1280x1024 @ 100 Hz) and covered 47° of visual angle. All experiments were run on 2 naive subjects. We found that larger σsf elicited stronger initial eye velocity during the open-loop part of tracking responses. This facilitating effect was larger at higher speeds. By contrast, larger σsf had a detrimental effect upon speed discrimination performance: speed discrimination thresholds were significantly higher (52%) with large spatial frequency distributions, irrespective of the mean stimulus speed. These results provide a framework to investigate how motion information is adaptively pooled for solving different motion tasks.
Paul R. Schrater, David C. Knill and Eero P. Simoncelli (2000) "Mechanisms of visual motion detection", Nature Neuroscience 3, 64-68.
Acknowledgement: CODDE project (EU Marie Curie ITN), CNRS

43.504 A dynamical neural model of motion integration
Émilien Tlapale1 (Emilien.Tlapale@sophia.inria.fr), Guillaume S. Masson2, Pierre Kornprobst1; 1NeuroMathComp, INRIA Sophia Antipolis, France, 2DyVA, INCM, UMR 6193, CNRS & Université de la Méditerranée, France
We propose a dynamical model of 2D motion integration in which the diffusion of motion information is modulated by luminance information. The model incorporates feedforward, feedback, and inhibitory lateral connections and is inspired by the neural architecture and dynamics of motion-processing cortical areas in the primate (V1, V2, and MT). The first aspect of our contribution is a new anisotropic integration model in which motion diffusion, carried by recurrent connectivity between cortical areas working at different spatial scales, is gated by the luminance distribution in the image. This simple model offers a competitive alternative to less parsimonious models based on large sets of cortical layers implementing specific form or motion feature detectors. A second aspect, often ignored by 2D motion integration models, is that biological computation of global motion is highly dynamical: when presented with simple lines, plaids, or barberpole stimuli, the perceived direction reported by human observers, as well as the response of motion-sensitive neurons, shifts over time. We demonstrate that the proposed approach produces results compatible with several psychophysical experiments, concerning not only the resulting global motion percept but also the oculomotor dynamics. Our model can also explain several properties of MT neurons regarding the dynamics of selective motion integration, a fundamental property of object motion disambiguation and segmentation. As a whole, we present an improved motion integration model that is numerically tractable and reproduces key aspects of cortical motion integration in the primate.
Acknowledgement: This research work has received funding from the European Community's Seventh Framework Programme under grant agreement N°215866, project SEARISE, and the Région Provence-Alpes-Côte d'Azur. GSM was supported by the CNRS, the European Community (FACETS, IST-FET, Sixth Framework, N°025213) and the Agence Nationale de la Recherche (ANR, NATSTATS).

43.505 A model of figure-ground segregation from texture accretion and deletion in random dot motion displays
Timothy Barnes1 (barnes@cns.bu.edu), Ennio Mingolla1; 1Department of Cognitive and Neural Systems, Boston University
Accretion or deletion of texture unambiguously specifies occlusion and can produce a strong perception of depth segregation between two surfaces, even in the absence of other cues. Given two abutting regions of uniform random texture with different motion velocities, one region will appear to be situated farther away and behind the other (i.e., the ground) if its texture is accreted or deleted at the boundary between the regions, irrespective of region and boundary velocities (Kaplan 1969, P&P 6(4):193-198). Consequently, a region with moving texture appears farther away than a stationary region if the boundary is stationary, but it appears closer (i.e., the figure) if the boundary moves coherently with the moving texture. Computational studies demonstrate how V1, V2, MT, and MST can interact first to create a motion-defined boundary and then to signal texture accretion or deletion at that boundary.
The model's motion system detects discontinuities in the optic flow field and modulates the strength of existing boundaries at those retinal locations. A weak speed-depth bias brings faster-moving texture regions forward in depth, which is consistent with percepts of displays containing shearing motion alone (i.e., where motion is parallel to the resulting emergent boundary between regions), in which the faster region appears closer (Royden et al. 1988, Perception 17:289-296). The model's form system completes this modulated boundary and tracks the motion of any boundaries defined by texture. The model includes a simple predictive circuit that signals occlusion when texture-defined boundaries unexpectedly appear or disappear.
Acknowledgement: TB and EM were supported in part by CELEST, an NSF Science of Learning Center (NSF SBE-0354378) and HRL Labs LLC (DARPA prime HR001-09-C-0011). EM was also supported in part by HP (DARPA prime HR001109-03-0001).

43.506 Humans assume isotropic orientation structure when solving the 'aperture problem' for motion
David Kane1 (d.kane@ucl.ac.uk), Peter Bex2, Steven Dakin1; 1Institute of Ophthalmology, UCL, 2Schepens Eye Research Institute, Harvard Medical School
We examined how global direction judgements with stimuli prone to the "aperture problem" depend on the local orientation structure of the stimulus. Observers adjusted the orientation of a line to match the overall direction of four randomly positioned Gabors whose carrier velocities were consistent with rigid translation in a single random direction. The four Gabor orientations were either randomly distributed or evenly spaced at 45° intervals. Response variability was ~20° in the evenly spaced condition and ~30° in the random orientation condition. The correlation between observers' errors when retested with identical stimuli was greater in the random orientation condition, demonstrating that the increase in variability is almost entirely determined by trial-by-trial differences in the orientation structure of the stimuli. In contrast, the majority of the errors (~80%) in the evenly spaced condition are random. Because two or more different velocities uniquely specify a particular global direction, an ideal observer that fits a cosine to the local velocity distribution will not produce errors, while adding random noise will produce unpredictable errors (unlike human observers). However, when the motion energy model is incorporated as a local motion stage, the representation of each local velocity is no longer discrete and the energy from differently oriented elements may overlap. Predictable errors may then arise from a mismatch between
the local motion energy distribution (on a trial-by-trial basis) and a global motion stage that assumes an isotropic orientation structure (i.e., a cosine). The model now generates errors in the random orientation condition that correlate with the observers' errors (R² ~0.48). This compares to an R² of ~0.56 for the correlation between observers' test-retest errors, demonstrating that the model captures around 85% of the stimulus-driven variability.

43.507 Feature-invariant spatial pooling of first- and second-order motion signals for solution of the aperture problem
Kazushi Maruya1, Shin'ya Nishida1; 1NTT Communication Science Laboratories
The visual system solves the aperture problem by integrating local one-dimensional motion signals into a global two-dimensional motion. Although it is well known that motion pooling occurs within first-order (luminance-based) motion or within second-order (non-luminance-based) motion of the same type, whether it generally occurs across different motion types in a feature-invariant manner has remained a matter of controversy in the plaid literature. Furthermore, this issue has been tested neither with objective performance measures nor under conditions where local one-dimensional motion signals are cleanly separated in space, with no possibility of local artifacts. Here we addressed this problem by measuring direction-discrimination performance for a four-aperture motion stimulus. The stimulus consisted of four oscillating bars, simulating a situation in which the contour of a 12.8 x 12.8 deg diamond translated along a circular path and was seen through four Gaussian apertures (SD: 1.07 deg), each located at the center of an edge. The attribute defining each oscillating bar was either luminance, the temporal frequency of dynamic random dots, or the binocular disparity of a dynamic random-dot stereogram. The results indicate that observers could judge the direction of global circular translation (clockwise or anti-clockwise) not only when all the edges were defined by the same attribute, but also when adjacent bars were defined by different attributes, although the attribute had some influence on direction-discrimination performance and on the quality of the perceived global motion. In addition, motion pooling between first-order and second-order motion was possible even for first-order motion that did not contain an obvious positional shift of features that might be detected by a second-order mechanism. These results indicate that second-order motion signals do contribute to the solution of the aperture problem, either alone or in cooperation with first-order signals.

43.508 Neural responses involved in 1D motion spatial pooling
Kaoru Amano1,2 (amano@brain.k.u-tokyo.ac.jp), Kazushi Maruya2, Shin'ya Nishida2; 1The University of Tokyo, 2NTT Communication Science Laboratories
To compute the two-dimensional (2D) trajectory of visual motion, the visual system has to integrate one-dimensional (1D) local motion signals not only across orientation, but also across space. While previous studies found evidence suggesting that monkey MT/MST or the human MT complex (hMT+) is involved in the integration of spatially overlapping 1D motion signals, it remains unclear where and when in the brain 1D signals are spatially pooled. Here we non-invasively recorded human neural responses related to the pooling of 1D motion signals using a whole-head MEG system (PQ2440R; Yokogawa).
The first experiment recorded MEG responses evoked by the change from incoherent (0% coherence) to coherent (30, 50, 71 or 100%) Global-Gabor motion (Amano et al., 2009, Journal of Vision). Patches with 1.7 deg stationary Gaussian envelopes were presented within an annulus whose inner and outer diameters were 5 and 27 deg, respectively. The second experiment tested the direction selectivity of the responses to Global-Gabor stimuli using an adaptation paradigm. The motion coherence of both test and adaptation stimuli was 100%. The global motion direction of the adaptation stimulus was fixed, while that of the test stimulus was randomly chosen from the two opposing (adapted and non-adapted) directions. We made the orientations of the test Gabors orthogonal to those of the adaptation Gabors to exclude local adaptation effects. In both experiments, beamformer analysis found evoked activity in hMT+ peaking at around 150-200 ms. The responses monotonically increased with increasing motion coherence (Exp. 1), and were significantly smaller for the adapted global direction than for the opposite direction (Exp. 2). Our finding that hMT+ responses show both coherence dependency and direction selectivity to global motion supports the idea that hMT+ is the locus of 1D motion spatial pooling.

43.509 Low-level mechanisms do not explain paradoxical motion percepts
Davis M. Glasser1 (dglasser@cvs.rochester.edu), Duje Tadin1; 1Center for Visual Science, University of Rochester
Classic psychophysical studies have shown that increasing the size of low-contrast moving stimuli increases their discriminability, indicating spatial summation mechanisms. More recently, a number of studies have reported that at moderate and high contrasts, size increases yield substantial deteriorations of motion perception, a result described as psychophysical spatial suppression. While this result resembles known characteristics of suppressive center-surround neural mechanisms, a recent study (Aaen-Stockdale et al., 2009) argued that the observed size-dependent changes in motion perception can be explained by differences in contrast sensitivity for stimuli of different sizes. Here, we tested this hypothesis using duration threshold measurements, an experimental approach used in several spatial suppression studies. The results replicated previous reports by demonstrating spatial suppression at a fixed, high contrast. Importantly, we observed strong spatial suppression even when stimuli were normalized relative to their contrast thresholds. While the exact mechanisms underlying spatial suppression still need to be adequately characterized, this study demonstrates that the low-level explanation proposed by Aaen-Stockdale et al. (2009) cannot account for spatial suppression results.

43.510 A Bio-Inspired Evaluation Methodology for Motion Estimation
Pierre Kornprobst1 (pierre.kornprobst@inria.fr), Emilien Tlapale1, Jan Bouecke2, Heiko Neumann2, Guillaume S. Masson3; 1INRIA, EPI Neuromathcomp, France, 2Institute of Neural Information Processing, Ulm University, Germany, 3INCM, UMR 6193 CNRS-Université de la Méditerranée, France
Evaluation of neural computational models of motion perception currently lacks a proper benchmarking methodology. Here, we propose an evaluation methodology for motion estimation that is based on human visual performance, as measured in psychophysics and neurobiology.
Offering a proper evaluation methodology is essential for continued progress in modeling. This general idea has been well understood and applied in computer vision, where challenging benchmarks are now available, allowing models to be compared and further improved. The proposed standardized tools make it possible to compare different approaches and to challenge current models of motion processing, in order to pinpoint current failures in our understanding of visual cortical function. We built a database of image sequences depicting input test cases that correspond to displays used in psychophysical settings or in physiological experiments. The data sets are fully annotated in terms of image and stimulus size, and include ground-truth data concerning dynamics, direction, speed, etc. Since different kinds of models differ in representation and granularity, we defined generic outputs for each considered experiment, as well as correctness evaluation tools. We propose to use output data generated by the considered model as a read-out that relates to the observer's task or functional behavior; pursuit amplitude and direction likelihoods are two examples. We probed several models of motion perception using the proposed benchmark. The models employed show very different properties and internal mechanisms, such as feedforward normalizing models of V1 and MT processing and recurrent feedback models. Our results demonstrate the usefulness of the approach by highlighting current properties and failures in processing. We thus provide a valuable tool for unraveling the fundamental mechanisms of the visual cortex in motion perception. The complete database, as well as detailed scoring instructions and results derived by investigating several models, is available at http://www-sop.inria.fr/neuromathcomp/software/motionpsychobench
Acknowledgement: This research work has received funding from the European Community's Seventh Framework Programme under grant agreement N°215866, project SEARISE, and the Région Provence-Alpes-Côte d'Azur. GSM was supported by the CNRS, the European Community (FACETS, IST-FET, Sixth Framework, N°025213) and the Agence Nationale de la Recherche (ANR, NATSTATS).

43.511 Monkeys and humans exhibit similar direction suppression effects
Catherine Lynn1 (clynn05@qub.ac.uk), William Curran1; 1School of Psychology, Queen's University Belfast
Single-cell recording studies of motion-processing neurons in nonhuman primates provide important data with which to develop models of motion processing in the human visual system. Bearing this in mind, it is important to establish whether the activity of motion-processing neurons in nonhuman primates mirrors that of human motion-processing neurons. Previous research (Snowden et al., 1991) has mapped out the direction tuning of suppressive effects in macaque MT neurons. Specifically, a neuron's spiking
rate in the presence of its preferred motion is suppressed when an additional direction is added to the stimulus, despite the fact that the additional motion direction causes the neuron to fire when presented in isolation. We used a motion adaptation phenomenon, the direction aftereffect (DAE), to test whether this pattern of suppression applies to human motion-sensitive neurons. Motion adapters that evoke a stronger response in neurons usually result in greater changes in the neurons' direction tuning functions, which are thought to affect DAE magnitude. We measured DAE magnitude following adaptation to random dot kinematograms in which either all dots moved in the same direction (45 deg clockwise from vertical up), or half had a direction of 45 deg and the other half moved in one of several other directions clockwise from 45 deg. We then measured DAE magnitude following adaptation to each of the individual directions used in the first experiment. If macaque MT is an accurate model of human motion processing, it predicts that 1) DAE magnitude will drop off with increasing direction difference in experiment 1, and 2) additional directions causing DAE suppression will induce a measurable DAE when presented in isolation. This is precisely the pattern of results we obtained, supporting the view that the response properties of nonhuman motion-processing neurons mirror those of human motion-processing neurons.

43.512 Human MT+ response saturates rapidly as a function of sampling density in natural dynamic scenes
Szonya Durant1 (szonya.durant@rhul.ac.uk), Johannes M. Zanker1; 1Department of Psychology, Royal Holloway University of London
It is known from macaque single-cell electrophysiology that the response to random-dot optic flow movies saturates rapidly in MST as a function of dot density. In this experiment we used fMRI to investigate whether human MT+ similarly saturates as a function of sampling density with more naturalistic optic flow scenes. We recorded grayscale movies using a camera moving forward. We covered the movies with a uniform grey area on which transparent, hard-edged circular apertures of a fixed size (0.2 times the height of the clip in diameter) were placed in random locations. We presented movies visible through 10, 40, or 160 circular apertures. In a fourth condition we "cut out" the motion visible through the 160 apertures and randomly rearranged the apertures, so that local motion was preserved but no global motion associated with forward movement remained. Participants viewed these movies in the scanner whilst performing a central foveal attention task. We localised regions of interest in separate sessions for V1, V2, V3, V4 and MT+. We found that although V1 and other early visual areas increased their response with the number of apertures, area MT+, although responding significantly above baseline in all conditions, did not respond differentially to the different numbers of apertures. This result holds if we split area MT+ into MT and MST, based on ipsilateral responses. As the amount of visible motion across these conditions does not affect the MT+ response, we suggest this is due to early saturation of the response with the amount of motion present.
However, we found no difference in any of the visual areas between the scrambled and normal 160-aperture conditions, suggesting these results are not necessarily dependent on the presence of coherent global motion.
Acknowledgement: The Leverhulme Trust

Face perception: Neural processing
Vista Ballroom, Boards 513-529
Monday, May 10, 8:30 - 12:30 pm

43.513 Characterizing the face processing network in the human brain: a large-scale fMRI localizer study
Laurence Dricot1,2 (laurence.dricot@nefy.ucl.ac.be), Bernard Hanseeuw2, Christine Schiltz3, Bruno Rossion1,2; 1Institute of Neuroscience and Psychological Science, University of Louvain, 2Institute of Neuroscience, University of Louvain, 3School of Education, University of Luxemburg
A whole network of brain areas showing a larger response to faces than to other visual stimuli has been identified in the human brain using fMRI (Sergent, 1992; Haxby, 2000). Most studies identify only a subset of this network, by comparing the presentation of face pictures to all kinds of object categories mixed together (e.g., Kanwisher, 1997) or to scrambled faces (e.g., Ishaï, 2005), using different statistical thresholds. Given these differences in approach, the (sub)cortical face network can be artificially overextended (Downing & Wiggett, 2008) or minimized across studies, both at the local (size of regions) and global (number of regions) levels. Here we conducted an analysis of a large set of right-handed subjects (40), tested with a new whole-brain localizer that controls for both high-level and low-level differences between faces and objects. Pictures of faces, cars, and their phase-scrambled counterparts were used in a 2x2 block design. Group-level (random effects) and single-subject (ROI) analyses were performed. A conjunction of two contrasts (F-SF and F-C) identified 6 regions: FFA, OFA, amygdala, pSTS, AIT, and thalamus. All these regions but the amygdala showed clear right lateralization. Interestingly, the FFA showed the least face-selective response in the cortical face network: it presented a significantly larger response to pictures of cars than to scrambled cars [t = 9.3, much more than the amygdala (t = 2.6), AIT (t = 2.1) and other regions (NS)], and was also sensitive to low-level properties of faces [SF - SO; t = 5.1; NS in other areas]. These observations suggest that, contrary to other areas of the network, including the OFA, the FFA may contain populations of neurons that are specific to faces intermixed with populations responding more generally to object categories. More generally, this study helps to clarify the extent and specificity of the network of (sub)cortical areas particularly involved in face processing.

43.514 The contribution of Fourier amplitude spectrum differences to the early electrophysiological (i.e., P1) amplitude difference between face and nonface object categories
Corentin Jacques1,2 (corentin.g.jacques@uclouvain.be), Bruno Rossion2; 1Department of Computer Science, Department of Psychology, Stanford University, 2Department of Psychology, Universite Catholique de Louvain
Event-related potential (ERP) studies in humans indicate that the early activation of visual face representations in the human brain takes place during the time window of the occipito-temporal N170 component. Like the N170, the P1 visual component preceding the N170 has also been reported to be larger in response to face than to nonface stimuli. This observation has been taken by some authors as evidence for an early sensitivity to faces in the visual cortex at around 100 ms.
However, because the P1 component is highly sensitive to manipulations of the spatial frequency content of an image, part of the P1 amplitude difference between faces and nonfaces may be related to differences between the Fourier amplitude spectra (FAS, a parameter that conveys the global low-level statistics of an image) of these categories. To identify the contribution of the FAS to the P1 amplitude difference between face and nonface stimuli, we recorded ERPs while subjects viewed faces and cars either in their original, unaltered version or in a version in which the Fourier phase information of one category was combined with the FAS of the other category. When presented in their original version, faces elicited a larger P1 than cars, in line with previous observations. This effect was most consistent over right-hemisphere occipito-temporal electrodes. In contrast, switching the FAS between faces and cars resulted in a larger P1 for cars, again mainly over the right occipito-temporal electrodes. These findings suggest that the P1 amplitude difference between face and nonface stimuli is, at least partly, related to differences in the FAS between these categories. Moreover, even though these P1 differences do not reflect face categorization per se, they may nevertheless reflect the use of low-level visual statistics frequently associated with a human face to allow fast basic-level face categorization.
Acknowledgement: Fonds de la Recherche Scientifique - FNRS
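The amplitude/phase recombination described above is a standard Fourier manipulation; a minimal sketch follows (the general method with assumed toy images, not the authors' exact pipeline). The hybrid image keeps one category's phase structure, which carries its identity, while imposing the other category's Fourier amplitude spectrum.

import numpy as np

def swap_amplitude(phase_source, amplitude_source):
    """Image with the phase of one input and the amplitude spectrum of the other."""
    f_phase = np.fft.fft2(phase_source)
    f_amp = np.fft.fft2(amplitude_source)
    hybrid = np.abs(f_amp) * np.exp(1j * np.angle(f_phase))
    return np.real(np.fft.ifft2(hybrid))

# Toy usage with random stand-ins for a face and a car image:
rng = np.random.default_rng(0)
face, car = rng.random((128, 128)), rng.random((128, 128))
face_with_car_fas = swap_amplitude(face, car)   # face phase, car amplitude
car_with_face_fas = swap_amplitude(car, face)   # car phase, face amplitude
print(face_with_car_fas.shape, car_with_face_fas.shape)
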


43.515 Dynamics of face detection revealed by fMRI: the right FFA gets it first
Fang Jiang1 (fang.jiang@uclouvain.be), Laurence Dricot1, Jochen Weber2, Giulia Righi3, Michael Tarr4, Rainer Goebel5, Bruno Rossion1; 1University of Louvain, 2Columbia University, 3Children's Hospital Boston, 4Carnegie Mellon University, 5University of Maastricht
Our goal was to use fMRI to uncover the dynamics of visual scene face detection in the human brain, by means of a paradigm that slowly and gradually reveals faces. Such paradigms have been used previously to examine top-down facilitation (e.g., Eger et al., 2007; James et al., 2006) and to dissociate multiple stages in visual recognition (e.g., Carlson et al., 2006). Here, we used the RISE method (Sadr & Sinha, 2004) to create image sequences of visual scenes in which faces or cars are revealed progressively as they emerge from noise. Participants were asked to respond as soon as they detected a face or car during the sequence presentation. Among the face-sensitive regions identified based on localizer data, the right fusiform face area ("FFA") showed the earliest difference between face and car activation. Specifically, the right FFA showed higher activation to faces than to cars before the more posteriorly located face-sensitive area of the lateral occipital cortex ("Occipital Face Area", "OFA"). Whole-brain analysis confirmed these findings, with a face-sensitive cluster in the right fusiform gyrus showing face selectivity shortly before successful behavioral detection. Overall, these observations suggest that following low-level visual analysis, a face stimulus is detected initially by responses of populations of neurons in the right middle fusiform gyrus, spreading to a whole network of (sub)cortical face-sensitive areas for further processing. Our results provide interesting evidence for the non-hierarchical emergence of face selectivity among known face-sensitive cortical regions; that is, "OFA" face-specific responses do not necessarily precede face-specific "FFA" activation (Rossion, 2008).
Acknowledgement: Belgian National Fund for Scientific Research, Research Grant ARC07/12-007; Human Frontier Science Program Postdoctoral Fellowship LT00103/2008-L

43.516 Cerebral lateralization of the face-cortical network in left-handers: only the FFA does not get it right
Henryk Bukowski1,2 (hbbukowski@gmail.com), Bruno Rossion1,2, Christine Schiltz3, Bernard Hanseeuw2, Laurence Dricot1,2; 1Institute of Psychological Science, University of Louvain, 2Institute of Neuroscience, University of Louvain, 3School of Education, University of Luxemburg
Face processing is a function that is highly lateralized in humans, as supported by original evidence from brain lesion studies (Hecaen & Anguerlergues, 1962), followed by studies using divided visual field presentations (Heller & Levy, 1981), neuroimaging (Sergent et al., 1992), and event-related potentials (Bentin et al., 1996). Studies in non-human primates (Perrett et al., 1988; Zangenehpour & Chaudhuri, 2005) and other mammals (Peirce & Kendrick, 2001) support the right lateralization of the function, which may be related to a dominance of the right hemisphere in global visual processing. However, in humans there is evidence that manual preference may shift or qualify the pattern of lateralization for faces in the visual cortex: face recognition impairments following unilateral left-hemisphere brain damage have been found only in a few left-handers (e.g., Mattson et al., 1992; Barton, 2009). Here we measured the pattern of lateralization in the entire cortical face network in right- and left-handers (12 subjects in each group) using a well-balanced face-localizer block paradigm in fMRI (faces, cars, and their phase-scrambled versions). While the FFA was strongly right lateralized in right-handers, as described previously, it was equally strong in both hemispheres in left-handers. In contrast, the other areas of the face-sensitive network (posterior superior temporal sulcus, pSTS; occipital face area, OFA; anterior infero-temporal cortex, AIT; amygdala) remained identically right lateralized in both left- and right-handers. Accordingly, our results strongly suggest that the face-sensitive network is equally lateralized in left- and right-handers, and thus that face processing is not influenced by handedness. However, the FFA is an important exception, since it is right lateralized in right-handers but recruited in a more balanced way across hemispheres in left-handers.
These observations carry important theoretical and clinical implications for the aetiology of brain lateralization as a function of left- or right-handedness, and for the neuropsychological assessment of prosopagnosic patients.

43.517 Dissociable temporal components of neural similarity in face perception: An ERP study
David Kahn1 (dakahn@mail.med.upenn.edu), Alison Harris1, David Wolk1, Geoffrey Aguirre1; 1Department of Neurology, University of Pennsylvania
Psychological models suggest that perceptual similarity can be subdivided into geometric effects, such as metric distance in stimulus space, and non-geometric effects, such as stimulus-specific biases. However, the time course of neural similarity processing remains unclear. We investigated this question using a neural adaptation paradigm to study event-related potentials (ERPs) related to facial similarity. We find an ERP component between the "face-selective" N170 and N250 responses (the "P200") that is modulated by transitions of face appearance, consistent with neural adaptation to the geometric similarity of face transitions. In contrast, the N170 and N250 reflect non-geometric stimulus bias, with different degrees of adaptation depending upon the direction of the face transition within the stimulus space. Thus, the behavioral distinction between geometric and non-geometric similarity effects is consistent with dissociable neural responses across the time course of face perception. In line with prior results implicating the N170 and N250 in perception and memory, respectively, these data support an intermediate role of the P200 in consolidation of the perceptual representation. Together, these results demonstrate that the neural coding of perceptual similarity, in terms of both geometric and non-geometric representation, occurs rapidly and from relatively early in the perceptual processing stream.
Acknowledgement: This work was supported by K08 MH 72926-01 and a Burroughs Wellcome Career Development Award

43.518 Delineating the temporal sequence and mechanisms for perceiving individual faces
Xin Zheng1 (xz02kz@brocku.ca), Catherine J. Mondloch1, Sidney J. Segalowitz1; 1Psychology Department, Brock University
In two event-related potential (ERP) studies, we examined neural correlates of individual face perception. In Study 1, 36 individual female and 9 male faces were randomly presented, and participants were instructed to press a button for male faces. Based on similarity ratings from a previous behavioral study, the female faces could be located in a multidimensional "face space". The facial characteristics representing the "face space", and therefore important for judging face similarities, include eye color, face width, eye size, and top-of-face height. The face-sensitive N170 component was affected by all these factors. In addition, there was a hemisphere difference: the right N170 amplitude was related to eye color and face width, while the left N170 amplitude was related to eye size and to top-of-face height when bottom-of-face height was small. In Study 2, we created a set of faces that varied in identity strength by morphing each of the 36 female faces with an average face formed from the entire set; the relative weighting of an original face ranged from 100% to 0% in 10% decrements. Participants were instructed to press a button whenever they detected a target identity. Accuracy data indicated an ambiguous region between 30% and 60% identity strength. Neither the P1 nor the N170 to non-target faces was influenced by identity strength.
43.519 DIY ERPs
Nicholas A. Del Grosso 1 (s10.ndelgrosso@wittenberg.edu), Darcy Dubuc 1, Michael D. Anes 1; 1 Department of Psychology, Wittenberg University
We detail the use of several off-the-shelf hardware and software components to create an inexpensive "homemade" ERP device. We used this machine to try to find the N170 to faces relative to other object classes, and to upright faces relative to inverted faces. We took the hardware bandpass-filtered (1-100 Hz) analog outputs of a common polygraph (Grass Instruments model 79D; used models were available at the time of abstract submission for well under $700), which normally provides output to pens, and instead sent the signal through a National Instruments USB data acquisition card. MATLAB was used to present stimuli and to analyze the signal output. We have thus far used single electrodes placed via the 10-10 system and with regard to published coordinates. Electrodes were placed over the left and right FFA, with the ear as reference. Despite the paucity of electrodes, initial results are promising and show strong negative deflections to faces in the range of 100-200 ms post-stimulus. The goal of our poster presentation is to present our hardware and software methods in detail to the vision community and to gather feedback that might be helpful to us and to other small colleges with minimal cognitive neuroscience equipment budgets.
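The analysis stage described here (epoching the digitized channel around stimulus onsets and averaging to expose components such as the N170) ran in MATLAB; the sketch below shows the same core step in Python. The sampling rate, signal, and event times are illustrative assumptions, not details taken from the poster.

```python
import numpy as np

def erp_average(signal, onsets, fs, tmin=-0.1, tmax=0.4):
    """Cut epochs around each stimulus onset (in samples), subtract the
    pre-stimulus baseline, and average the epochs into an ERP waveform."""
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = []
    for s in onsets:
        if s - pre < 0 or s + post > len(signal):
            continue                               # skip clipped epochs
        epoch = signal[s - pre : s + post].astype(float)
        epoch -= epoch[:pre].mean()                # baseline correction
        epochs.append(epoch)
    return np.mean(epochs, axis=0)

fs = 1000                                          # assumed 1 kHz sampling rate
signal = np.random.randn(60 * fs)                  # stand-in for the DAQ output
onsets = np.arange(fs, 59 * fs, fs)                # one stimulus per second
erp = erp_average(signal, onsets, fs)              # an N170 would appear as a
                                                   # negativity near 170 ms here
```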
43.520 Dynamic and static faces: Electrophysiological responses to emotion onsets, offsets, and non-moving stimuli
Laura Dixon 1 (lkdixon@uvic.ca), James Tanaka 1; 1 Cognition and Brain Sciences Program, Department of Psychology, University of Victoria
In real life, facial expressions are fleeting occurrences that appear suddenly, like a burst of happiness or a flash of anger, and then, just as quickly, the expression vanishes from the face. What are the brain mechanisms that allow us to discern the rapid onset and offset of facial expressions quickly and effortlessly? In this study, we examined the neural correlates of dynamic facial expressions using event-related potentials (ERPs). Participants were presented with happy, angry, or neutral faces while EEG was recorded from 36 scalp electrodes. In the expression onset condition, a neutral face was presented for 500 ms, immediately followed by either a happy or an angry face for 500 ms. In the expression offset condition, the happy or angry face was shown for 500 ms, immediately followed by a face with a neutral expression for 500 ms. The onset and offset conditions were compared to a static condition in which a single happy, angry, or neutral face was shown for 500 ms. The EEG data showed that at right posterior scalp sites, the onset of the happy or angry expression elicited a larger potential than its static version, suggesting that dynamic faces are more salient than static images. Moreover, the direction of the dynamics appears to be critical: onset expressions produced larger brain potentials than offset expressions. These findings indicate that observers are more sensitive to dynamic expressions than to static expressions, and that the sudden appearance of a facial expression elicits more brain activity than its abrupt disappearance.

43.521 TMS evidence for feedforward and feedback mechanisms of face and body perception
David Pitcher 1,2 (dpitcher@mit.edu), Brad Duchaine 2, Vincent Walsh 2, Nancy Kanwisher 1; 1 McGovern Institute for Brain Research, Massachusetts Institute of Technology, U.S.A., 2 Institute of Cognitive Neuroscience, University College London, U.K.
Neuroscientists seeking to understand the cognitive mechanisms that underlie visual object perception have used functional magnetic resonance imaging (fMRI) to identify spatially distinct cortical regions in the human brain selective for different object categories. One such region, the occipital face area (OFA), shows a stronger response to faces than to other object categories and has been proposed to be the first stage in a cortical network specialized for face perception. We sought to establish more precisely when the OFA is engaged in face perception using transcranial magnetic stimulation (TMS). Ten subjects performed a delayed match-to-sample face discrimination task while double-pulse TMS (pulses separated by 10 ms) was delivered over each subject's functionally localised OFA. Results showed that TMS disrupted task performance at two distinct latencies, 40-50 ms and 100-110 ms after stimulus onset. In a second experiment we investigated whether TMS delivered over an adjacent body-selective region, the extrastriate body area (EBA), would produce a similar temporal pattern of impairment. Ten subjects performed a delayed match-to-sample body discrimination task while double-pulse TMS was delivered over each subject's functionally localised EBA. Results again showed two impairment windows, the first at 40-50 ms and the second at 100-110 ms after stimulus onset. The first impairment window, at 40-50 ms, appears to reflect an early feedforward stage of face and body processing. The later impairment window, at 100-110 ms, could reflect a second wave of feedforward information or task-specific feedback mechanisms originating from higher cortical areas.
Acknowledgement: BBSRC
43.522 Turn that frown upside-down! Inferring facial actions from pairs of images in a neurally plausible computational model
Joshua Susskind 1 (josh@aclab.ca), Adam Anderson 1, Geoffrey Hinton 2; 1 Psychology, University of Toronto, 2 Computer Science, University of Toronto
Most approaches to image recognition focus on the problem of inferring a categorical label or action code from a static image, ignoring dynamic aspects of appearance that may be critical to perception. Even methods that examine behavior over time, such as in a video sequence, tend to label each image frame independently, ignoring frame-to-frame dynamics. This viewpoint suggests that it is time-independent categorical information that is important, and not the patterns of actions that relate stimulus configurations together across time. The current work focuses on face perception and demonstrates that there is important information that can be extracted from pairs of images by examining how the face transforms in appearance from one image to another. Using a biologically plausible neural network model called a conditional Restricted Boltzmann Machine, which performs unsupervised Hebbian learning, we show that the network can infer various facial actions from a sequence of images (e.g., transforming a frown into a smile, or moving the face from one location of the image frame to another). Critically, after inferring the actions relating two face images from one individual, the network can apply the transformation to a test face from an unknown individual, without any knowledge of facial identity, expressions, or muscle movements. By visualizing the factors that encode and break down facial actions into a distributed representation, we demonstrate a kind of factorial action code that the network learns in an unsupervised manner to separate identity characteristics from rigid (affine) and non-rigid expression transformations. Models of this sort suggest that neural representations of action can factor out information about a face or object that remains constant, such as its identity, from its dynamic behavior; both are important aspects of perceptual inference.
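The published model is a conditional Restricted Boltzmann Machine with factored three-way ("gated") interactions between image pairs and hidden units. As a rough sketch of the conditioning idea only, the simpler variant below gives the hidden and visible units biases that depend on the first frame, so the hidden units come to encode how the second frame differs from the first; all sizes, learning rates, and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

D, H = 64, 32                                 # pixels and hidden units (assumed)
W = 0.01 * rng.standard_normal((H, D))        # hidden <-> second frame
A = 0.01 * rng.standard_normal((H, D))        # first frame -> hidden biases
B = 0.01 * rng.standard_normal((D, D))        # first frame -> visible biases
bh, bv = np.zeros(H), np.zeros(D)

def cd1_step(v1, v2, lr=0.05):
    """One contrastive-divergence (CD-1) update on an image pair (v1, v2)."""
    global W, A, B, bh, bv
    h0 = sigmoid(W @ v2 + A @ v1 + bh)            # infer the "action" code
    h_samp = (rng.random(H) < h0).astype(float)
    v2_rec = sigmoid(W.T @ h_samp + B @ v1 + bv)  # reconstruct frame 2 given frame 1
    h1 = sigmoid(W @ v2_rec + A @ v1 + bh)
    W += lr * (np.outer(h0, v2) - np.outer(h1, v2_rec))
    A += lr * np.outer(h0 - h1, v1)
    B += lr * np.outer(v2 - v2_rec, v1)
    bh += lr * (h0 - h1)
    bv += lr * (v2 - v2_rec)
    return h0                                     # the inferred action code
```

Transferring an action then amounts to inferring the hidden code from one individual's image pair and reconstructing a second frame from another individual's first frame using that same code.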
43.523 Preference bias is induced by task-irrelevant motion only if it is weak
Kazuhisa Shibata 1 (kazuhisa@bu.edu), Takeo Watanabe 1; 1 Department of Psychology, Boston University
It has been found that our preference decisions about a visual stimulus are influenced by memory, decision history, and gaze bias. It is generally thought that the stronger the signals of these factors are, and the more highly correlated they are with a task, the more influential they are on preference decisions. Here, however, we report that preference decisions are modulated by task-irrelevant motion only when the motion signal is weak. Twenty subjects were asked to choose one of two faces, presented to the left and right of a central fixation point, by moving a joystick to the left or right without eye movements. On each trial, during face presentation a task-irrelevant random-dot pattern (moving either leftward or rightward) was presented at one of several coherence levels (0, 5, 10, 20, 50, and 100%). Subjects' choices were significantly biased toward the direction of the task-irrelevant motion when the motion signal was weak (5% coherence), but not at higher coherence levels. Following each choice, the subjects were asked to rate their relative preference for the chosen face ("How much do you like the chosen face compared with the other?") using the joystick on a five-point scale. Subjects' relative preferences were significantly elevated when the position of the chosen face corresponded to the direction of the (5% coherent) task-irrelevant motion. These effects were not observed when the task-irrelevant motion moved upward or downward. Another control experiment showed that these effects did not occur when the preference decision was made without the lever movement, suggesting that the effects were not simply due to eye movements or attention shifts induced by the task-irrelevant motion. Contrary to the general view, these results indicate that preference decisions about a visual stimulus can be strongly influenced by an apparently "trivial" signal: one that is not only irrelevant to the decision but also extremely weak.
Acknowledgement: NIH-NEI (R21 EY018925, R01 EY015980-04A2, R01 EY019466) and Uehara Memorial Foundation

43.524 Neural representation of face perception in the fusiform face area
Manabu Shikauchi 1 (mshikauchi@brain.med.kyoto-u.ac.jp), Tomohiro Shibata 2, Shigeyuki Oba 3, Shin Ishii 3; 1 Graduate School of Medicine, Kyoto University, 2 Graduate School of Information Science, Nara Institute of Science and Technology, 3 Graduate School of Informatics, Kyoto University
Human functional magnetic resonance imaging (fMRI) studies have shown that the fusiform face area (FFA) resides at one of the highest levels of the visual pathways and is specialized for face perception (Kanwisher et al., 1997). Although previous fMRI studies employing fMRI adaptation paradigms suggested that norm-based encoding is adopted in this area (Loffler et al., 2005; Jiang et al., 2006), it is unclear whether the perceived face can be reconstructed from fMRI signals without using the fMRI rapid adaptation paradigm; we investigated this question in the present study. We employed a database of photo-realistic human face images. In the fMRI experiment, participants were required to gaze at and memorize an unfamiliar face image, a morph of two face images from the database, for the target period. The morphing was norm-based, using principal component analysis (PCA) (Blanz and Vetter, 1999). After a blanking period, the two face images used for the morph were presented, and the participants were asked to report which face was more similar to the morphed image (discrimination period). The FFA was identified by contrasting brain activity between the target period and the blanking period, so that our analysis focused on the identified FFA regions (fROI). We found an area within the FFA whose activity correlated with the face variations. Face discrimination behavior was well explained by signal detection theory based on a face-space model. We then examined whether the target face could be reconstructed from the fROI signals by using canonical correlation analysis (CCA), which finds the maximally correlated low-dimensional space between the fROI data and the target image. Good reconstruction performance, in terms of the similarity between true and reconstructed face images in the CCA space, was obtained for around 30% of the trials, supporting norm-based encoding in the FFA.
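A hedged sketch of the reconstruction analysis described above: scikit-learn's CCA links fROI voxel patterns to the PCA coefficients of the viewed faces, and reconstruction is scored by the similarity of held-out trials in the shared space. All dimensions and variable names are illustrative assumptions, not details from the abstract.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_trials, n_voxels, n_pca = 120, 200, 20         # assumed sizes

X = rng.standard_normal((n_trials, n_voxels))    # fROI pattern per trial
Y = rng.standard_normal((n_trials, n_pca))       # PCA coefficients of target face

train, test = slice(0, 100), slice(100, None)
cca = CCA(n_components=5)                        # maximally correlated subspace
cca.fit(X[train], Y[train])

# Project held-out brain data and face codes into the shared CCA space and
# score each trial by the similarity of the two projections there.
Xc, Yc = cca.transform(X[test], Y[test])
similarity = [np.corrcoef(a, b)[0, 1] for a, b in zip(Xc, Yc)]
```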


43.525 Orientation-encoding in the FFA is selective to faces: Evidence from multivoxel pattern analysis
Fernando Ramírez 1,2,3 (fernando.ramirez@bccn-berlin.de), Radoslaw Martin Cichy 1,3, John-Dylan Haynes 1,2,3,4; 1 Bernstein Center for Computational Neuroscience, 2 Charité - Universitätsmedizin Berlin, 3 Berlin School of Mind and Brain, 4 Max Planck Institute for Human Cognitive and Brain Sciences
The fusiform face area (FFA) is a region of the human ventral visual pathway that exhibits a stronger response to faces than to objects. The role of this region within the face perception network is not well understood, and its face selectivity has been debated. Furthermore, it is unclear which specific properties of visual stimuli are systematically reflected in this region's patterns of activation. There is evidence from various sources that the FFA might encode orientation, including the psychophysics of face-selective viewpoint aftereffects, fMRI adaptation results, and electrophysiological experiments that have revealed neurons highly tuned to face orientation in the macaque homologue of the FFA. Here we directly explored the encoding of orientation using a combination of functional magnetic resonance imaging (fMRI) and multivoxel pattern analysis (MVPA). We presented subjects with synthetic images of faces and cars that were rotated in depth and presented either above or below fixation. We explored the orientation-related information available in fine-grained activity patterns in the FFA and early visual cortex. Distributed signals from the FFA allowed above-chance classification of within-category orientation information only for faces, and this also generalized to faces and objects presented in different retinotopic positions. In contrast, classification in early visual cortex yielded equal, above-chance classification of face and car orientation information, but only when trained and tested on corresponding retinotopic positions; classification across positions was substantially decreased for both categories. We conclude that category-selective effects of stimulus orientation are reflected in the fine-grained patterns of activation in the FFA, and that the structure of these patterns is partially translation invariant.
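A hedged sketch of the cross-position generalization logic used above: a linear classifier is trained to decode face orientation from voxel patterns at one retinotopic position and tested at the other. The data here are random placeholders with assumed shapes, so the scores sit at chance; with real patterns, above-chance transfer would indicate partial translation invariance.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, v = 40, 150                                  # trials per class, voxels (assumed)

def cell():
    return rng.standard_normal((n, v))          # stand-in voxel patterns

# Two face orientations at two retinotopic positions (above/below fixation)
X_above = np.vstack([cell(), cell()])
X_below = np.vstack([cell(), cell()])
y = np.array([0] * n + [1] * n)                 # orientation labels

clf = LinearSVC(C=1.0)
within = cross_val_score(clf, X_above, y, cv=5).mean()  # same-position decoding
across = clf.fit(X_above, y).score(X_below, y)          # cross-position transfer
```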
43.526 Complex Contextual Processing in V1 during Face Categorizations
Fraser Smith 1,2 (fraser@psy.gla.ac.uk), Lucy Petro 1,2, Philippe Schyns 1,2, Lars Muckli 1,2; 1 Department of Psychology, University of Glasgow, UK, 2 Centre for Cognitive Neuroimaging, University of Glasgow, UK
Primary visual cortex (area V1) and higher visual areas are reciprocally connected. To understand the nature of this reciprocal processing in more detail, we investigated the importance of area V1 (and its subregions) during complex face categorization tasks. It is generally assumed that gender or expression classification of faces is a complex cognitive task that relies on processing in higher visual areas. Here we tested the hypothesis that primary visual cortex (V1) is involved in the processing of facial expressions. In an fMRI experiment we delineated the borders of area V1 and subsequently mapped the cortical representation of the eye and mouth regions during a face categorization task. We then trained a multivariate pattern classifier (a linear SVM) to classify happy versus fearful faces on the basis of V1 data from these "eye" and "mouth" regions, and from the remaining V1 area. We found that all three regions supported successful classification, depending on task. In a second step we investigated the spatial distribution of the most informative vertices throughout V1 in more detail. Again we saw the importance of the cortical representation of the eyes and mouth, but also a strong contribution from outside these regions, i.e., from "non-diagnostic" V1. Our findings are compatible with the idea that contextual information modulates area V1 not only in the restricted regions representing the most diagnostic information but also in a more distributed way.
Acknowledgement: This research was supported by a Biotechnology and Biological Sciences Research Council grant to LM (BB/005044/1), an Economic and Social Research Council grant to PGS (R000237901), and an Economic and Social Research Council grant to LSP (PTA-031-2006-00386).

43.527 Does he look scared to you? Effects of trait anxiety upon neural dissimilarity measures for ambiguous and pure emotional expressions
Anwar Nunez-Elizalde 1 (anwarnunez@berkeley.edu), Alex Hawthorne Foss 1, Geoffrey Aguirre 2, Sonia Bishop 1; 1 Psychology & HWNI, UC Berkeley, 2 Neurology, University of Pennsylvania
Previous work has shown that trait anxiety is associated with interpretative biases in the perception of facial expressions, specifically an increased tendency to judge ambiguous facial expressions as fearful. Using functional magnetic resonance imaging and both univariate and multivariate analysis techniques, we investigated the neural correlates of these biases in perception. We focused specifically on neural regions previously implicated in face processing: the superior temporal sulcus (STS), amygdala, and fusiform face area (FFA). Subjects were presented with pictures of faces that showed one of three pure emotional expressions (fear, sad, surprise) or intermediate morphs between these same expressions. These expressions were selected based on previous research indicating that these dimensions are the ones where anxiety-related perceptual biases are most likely to be observed. BOLD response parameter estimates were calculated using univariate regression and multivariate pattern analysis. No anxiety-related differences were observed in the univariate analysis. For the multivariate analysis, a linear classifier was used to quantify (dis)similarities between neural representations of intermediate morphs and those of end-point pure expressions. It was predicted that for the two continua containing fear, individual differences in trait anxiety would modulate the extent to which neural representations of intermediate morphs showed greater similarity to pure fear than to the other constituent expression. Bilateral regions of interest were investigated, focusing on the STS, amygdala, and FFA. Results from this preliminary study indicated that anxiety-related biases in the neural representations of ambiguous expressions containing some percentage of fear were predominantly observed in the STS, with high trait-anxious individuals showing reduced distances between the neural representations of these morphs and those of pure fear. Additional data from a follow-up experiment will be presented. Results are discussed in the context of content-based models of anxiety.
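One common way to implement the classifier-based (dis)similarity measure described above is to train a linear classifier on the two pure end-point expressions and read out each morph's signed distance to the decision boundary as its relative proximity to pure fear. This is a sketch of that general approach, not necessarily the authors' exact procedure, with placeholder data and assumed sizes.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n, v = 50, 120                                   # trials per class, voxels (assumed)

X_fear = rng.standard_normal((n, v)) + 0.3       # pure-fear patterns
X_surp = rng.standard_normal((n, v)) - 0.3       # pure-surprise patterns
X_morph = rng.standard_normal((20, v))           # intermediate morph patterns

clf = LinearSVC(C=1.0).fit(
    np.vstack([X_fear, X_surp]),
    np.array([1] * n + [0] * n),                 # 1 = fear side of the boundary
)

# Signed distance to the fear/surprise boundary; larger values mean a morph's
# neural representation lies closer to the pure-fear side.
dist_to_fear = clf.decision_function(X_morph) / np.linalg.norm(clf.coef_)
```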
43.528 Right middle fusiform gyrus processes facial features interactively: evidence from a balanced congruency design
Valerie Goffaux 1,2 (valerie.goffaux@maastrichtuniversity.nl), Christine Schiltz 2, Rainer Goebel 1; 1 Dept. of Neurocognition, University of Maastricht, The Netherlands, 2 EMACS Unit, Dept. of Psychology and Educational Sciences, University of Luxemburg, Luxemburg
The features composing a face are not processed in isolation by the visual system. Rather, they are encoded interactively, as commonly illustrated by the composite illusion. Interactions between face features are observer-dependent, as they are sensitive to face orientation in the picture plane. Interactive face processing (IFP) as indexed by the composite illusion presumably arises in face-preferring cortical regions located in the right middle fusiform gyrus, termed the rFFA. Yet the composite illusion is a limited marker of IFP, because it restricts the study of IFP to a single response modality ("same" responses), and the sharp edges introduced in composite stimuli are known to impair the processing of face information. The present experiment re-addresses IFP in the human brain using the congruency paradigm, which bypasses these limitations: (1) IFP is measured in all response modalities, and (2) face stimuli are not distorted by artificial edges. In a slow event-related design, subjects were presented with face pairs and decided whether a target face region (i.e., eyes+eyebrows) was the same or different while ignoring the other, distracter features (e.g., nose and mouth). In congruent conditions, target and distracter features call for an identical decision; in incongruent conditions, they call for opposite decisions. Faces were presented at upright and inverted orientations. Our results reveal that performance was better when the target region was embedded in a congruent than in an incongruent face context, indicating that distracter and target features were processed interactively. In the rFFA, the neural response was as strong in incongruent conditions as when all features differed in a pair, suggesting that feature incongruency was treated as a full identity change in this region. Inversion eliminated these differences in rFFA activity. This pattern was not found in other face-selective regions. Our results thus strengthen the view that the rFFA is the main site of interactive face processing.


43.529 The Role of Isolated Face Features and Feature Combinations in the Fusiform Face Area
Lindsay Dachille 1 (Ldachill@indiana.edu), Thomas James 1; 1 Psychological and Brain Sciences, Indiana University
A critical issue in object recognition research is how features of objects are integrated into a perceptual whole. Much of the previous research on perceptual integration has focused on the role of configural or holistic processing with faces. There has also been a considerable amount of fMRI research investigating the response properties of face-selective areas of cortex, such as the fusiform face area (FFA). Here, we investigated the neural mechanisms of facial feature integration in humans using fMRI. Gaussian windows were applied to whole faces to create facial features representing the left eye, right eye, nose, and mouth. Individual subject thresholds were found for four-feature combinations using a staircase procedure while subjects performed a one-back matching task. During imaging, stimulus conditions included features in isolation and in combinations of two (both eyes) or four. Two regions of interest (ROIs) were localized, the right FFA (rFFA) and the lateral occipital complex (LOC). The activation pattern of the rFFA was significantly different from that of the LOC. The LOC showed similar levels of activation to all stimulus conditions. The rFFA showed low levels of activation for mouth and nose features, greater activation for eye features, and the greatest activation for the four-feature combination. The two-feature eyes combination did not produce more activation than the eye features in isolation. The results converge with previous behavioral and eye-tracking results to suggest a greater contribution of eye features than of other feature types to face recognition. The results also suggest that activation in the rFFA reflects a heterogeneous population of neurons that represent isolated features in addition to specific combinations.

Multisensory processing: Visual-auditory interactions
Vista Ballroom, Boards 530–547
Monday, May 10, 8:30 - 12:30 pm
43.530 The Auditory Capture of Visual Timing Extends to Short-Range Apparent Motion
Hulusi Kafaligonul 1 (hulusi@salk.edu), Gene Stoner 1; 1 Vision Center Laboratory, The Salk Institute for Biological Studies
Freeman and Driver (2008) reported that brief sounds can bias the perceived direction of visual apparent-motion stimuli (see also Getzmann, 2007), an effect attributed to "temporal capture" of the visual stimuli by the sounds (Morein-Zamir et al., 2003). Cortical area MT is a key substrate in visual motion perception (e.g., Britten et al., 1992; Salzman et al., 1990), but the spatial and temporal intervals of Freeman and Driver's stimuli (14 deg and 300 ms, respectively) are much too large to engage area MT (Mikami et al., 1986a, b; Newsome et al., 1986). Since such long-range motion stimuli are reportedly more sensitive to higher-order influences than is short-range motion (Horowitz & Treisman, 1989; Shiffrar & Freyd, 1993), we asked whether sound also impacts the perception of motion stimuli known to engage area MT. In Experiment 1, subjects (N=7) judged the dominant motion direction of vertically oriented bars that alternated between right and left of fixation. Spatial intervals ranged from 0.2 deg to 3.0 deg, and temporal intervals varied from 60 ms to 240 ms. Without sound, perceived direction favors the smaller temporal interval (e.g., rightward motion is favored if the left-right interval is smaller than the right-left interval). We found that sounds systematically biased perceived direction in a manner consistent with temporal capture. In Experiment 2, subjects (N=7) judged the relative speeds of silent two-frame motion stimuli against stimuli accompanied by two brief sounds (which either lagged or led the presentation of the individual bars). In further support of the temporal-capture hypothesis, perceived speed was determined by the timing of the sounds. Taken together, our findings suggest that brief stationary sounds may be able to shift the temporal tuning of area MT neurons for visual motion stimuli.
Acknowledgement: Supported by 2R01EY012872

43.531 Crossmodal interaction in metacontrast masking
Su-Ling Yeh 1 (suling@ntu.edu.tw), Yi-Lin Chen 1; 1 Department of Psychology, National Taiwan University
Metacontrast masking (MM) refers to the phenomenon of reduced target visibility due to a temporally lagging and spatially non-overlapping mask; it has been attributed to inhibition between low-level visual channels, such that transient activity triggered by the onset of the delayed mask inhibits sustained activity encoding the contours of the preceding target. Theories of MM have considered it to occur exclusively in the visual domain, without considering signals from other modalities, such as audition. The current study explores the possible effects of sound on MM using a contour discrimination task, measuring the change in perceptual sensitivity (d') to the visual target with or without a sound. In Experiment 1, the sound was presented at different points in time with respect to the target. The results showed that the visibility of the masked target was elevated when the sound was presented before the target. Accordingly, in Experiment 2 we adopted a spatial cueing paradigm in which the spatial congruency of the sound and target was manipulated. In Experiment 3, the target-sound SOA was further manipulated to probe the temporal window of the effect of sound on MM. An equivalent visual cue was also used for comparison in Experiments 2 and 3, to examine whether within- or cross-modal spatial cues would shift attention to the cued location in the standard MM task. The results showed that sound affected MM at SOAs in the period of recovery from maximal masking, indicating that sound enhanced target visibility in MM by orienting attention to its location, probably through feedback modulation sustaining the object representation of the visual target. This study sets a new example of audio-visual interaction for a phenomenon classically considered to be purely visual.
Acknowledgement: This research was supported by the National Science Council in Taiwan (NSC 96-2413-H-002-009-MY3 and 98-2410-H-002-023-MY3)
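The sensitivity measure d' used above is standard signal detection theory: the difference between the z-transformed hit and false-alarm rates. A minimal sketch follows; the log-linear correction for extreme rates is one common convention, assumed here rather than taken from the abstract.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a log-linear
    correction that keeps both rates away from 0 and 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# e.g., 38 hits / 12 misses and 9 false alarms / 41 correct rejections
print(round(d_prime(38, 12, 9, 41), 2))   # ~1.58
```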
43.532 Task-irrelevant sound facilitates visual motion detection
Robyn Kim 1 (robynkim@ucla.edu), Ladan Shams 1; 1 Department of Psychology, UCLA
Different sense modalities interact in a variety of perceptual tasks. For example, auditory motion can influence visual motion direction discrimination (Meyer & Wuerger, 2001; Soto-Faraco et al., 2002). Many multisensory integration phenomena have been explained well by interactions at an inference level (e.g., Bayesian or maximum likelihood inference). However, signals of different modalities may also interact at a sensory level. This study tests whether sound can affect perceptual sensitivity to visual motion, aside from any inference-level or decisional influence. Subjects performed a 2IFC visual coherent-motion detection task, in which one interval contained coherent motion in a fixed direction (e.g., 0°) and the other contained random motion. Subjects were asked to detect which interval contained the coherent-motion stimulus. In addition to visual trials, our task-irrelevant congruent group experienced audiovisual trials in which sound moved in BOTH intervals, in the same direction as the coherent visual stimulus. Since the sound moved identically in both intervals, it provided no indication as to which interval contained the visual coherent motion, and thus was task-irrelevant. Another group received audiovisual trials with task-relevant sound (i.e., moving only in the interval with coherent visual motion), and a third group experienced task-irrelevant incongruent sound (i.e., moving in both intervals but in the direction opposite the visual motion). As expected, subjects performed best when the visual stimulus was accompanied by congruent, task-relevant sound. Surprisingly, though, visual motion detection was also significantly enhanced by the task-irrelevant congruent sound. Task-irrelevant sound moving in the direction opposite the visual motion did not yield any benefit, ruling out the possibility that the effect results from general alerting or attentional modulation by sound. These results suggest that auditory motion can facilitate perception of visual motion at a sensory level, independent of its contribution to inference and decision processes.


43.533 Audiovisual relative timings determine sound-induced flash fission versus flash fusion effects
Trevor Hine 1,2 (t.hine@griffith.edu.au), Amanda White 1,2; 1 Applied Cognitive Neuroscience Unit, Griffith Institute of Health and Medical Research, 2 School of Psychology, Griffith University, Australia
When observers are required to make judgments of the number of rapidly presented flashes of light, there is a tendency to either overestimate the count ("flash fission") or underestimate the count ("flash fusion"), depending on the duration of the inter-flash interval (Bowen, 1989). Similarly, pairing the flashes with more or less loud, rapid beeps also results in fission and fusion effects (the sound-induced flash illusion; Andersen, Tiippana, & Sams, 2004; Shams, Kamitani, & Shimojo, 2000, 2002). Our aims were to determine how much these sound-induced effects depend upon the timings between clicks and flashes, and how these critical timings relate to the audiovisual "window of integration" of around 100 ms. A high-contrast 2° disc was flashed (11.7 ms, 7° periphery) in the presence of 0, 1, 2, or 3 beeps (7 ms, ~75 dBA, 3.5 kHz) at various audiovisual relative timings between 12 and 300 ms. Results from naïve observers demonstrate flash fusion when more than 100 ms separated all stimuli, whereas flash fission was reported at shorter separations.


were randomly interspersed with single or multiple standard flashes in the presence of 0, 1, 2, or 3 beeps. Observers were instructed to count either the number of transients or the number of flashes seen, depending on the condition. With our single-transient stimulus, greater fission effects were observed than with standard flashes, but only when beeps occurred during the ramp. For standard flashes, fission and fusion were observed depending on the relative timing of the audiovisual stimuli. Despite the fact that our single-transient stimulus eliminated the bias towards reporting two events, a greater number of transients was counted than with a two-transient flash. It is suggested that ambiguity in the ramp stimulus makes an associated transient more vulnerable to auditory capture. Thus, consistent with the law of inverse effectiveness, the more accurately perceived auditory stimuli dominate the visual percept.
Acknowledgement: ACNRU postgraduate scholarship (AW)

43.538 Reciprocal interference from sound and form information in stimulus identification
Genevieve Desmarais 1 (gdesmarais@mta.ca), Megan Fisher 2, Jeffrey Nicol 3; 1 Department of Psychology, Mount Allison University, 2 Autism Research Center, IWK Health Center, 3 Department of Psychology, Nipissing University
Past studies have shown that incongruent visual information can bias the perceived location of a sound, as well as the identification of sounds. We used an audiovisual Stroop-like task to investigate whether incongruent sound and shape information interferes with stimulus identification at a conceptual level. Healthy undergraduates learned to identify novel shapes that were associated with distinct sounds and non-words (e.g., a curved shape has a high-pitched sound and is called "baiv"). After an initial training phase, participants completed a speeded naming task in which they were simultaneously presented with a shape and a sound. They were instructed to identify the shape, the sound, or the stimulus presented, and the order of conditions was counterbalanced across participants. Crucially, 25% of test trials consisted of incongruent information: the sound and shape presented were not one of the learned associations. An analysis of the reaction times revealed a main effect of instructions: participants responded fastest when identifying shapes and slowest when identifying sounds. We also observed a main effect of congruency: participants responded faster on congruent trials than on incongruent trials. These main effects were qualified by a two-way interaction between instructions and congruency: the interference was largest when participants were asked to identify the sound and ignore the visual information. This study demonstrates that incongruent information can impact the identification of stimuli at a conceptual level. Importantly, though vision is considered the dominant sense in humans, irrelevant sound information still interfered with visual shape identification.
Acknowledgement: Mount Allison University

43.539 The effects of characteristic and spatially congruent sounds on visual search in natural visual scenes
Daniel K. Rogers 1 (rogersda@tcd.ie), Jason S. Chan 1, Fiona N. Newell 1; 1 School of Psychology/Institute of Neuroscience, Trinity College Dublin
Crossmodal facilitation has been an emerging area in visual spatial perception in recent years.
While there has been much research into visual effects on sound localization (e.g., the ventriloquist effect), relatively little is known about how sound can influence visual spatial perception. Moreover, very little is known about how sounds can influence attentional deployment in a visual search task. Here we investigated whether characteristic and/or spatially congruent sounds can affect visual search performance in a complex visual scene. In Experiment 1a, participants were asked to indicate the presence or absence of a visual target in a complex visual scene. In this experiment, the sound could be spatially congruent or incongruent, but it was always semantically relevant to the visual target. Results showed a significant benefit of spatially congruent sound when targets were relatively small and appeared in peripheral vision. In Experiment 1b, we varied the number of visual targets (6) and manipulated both the spatial congruency and the characteristic relevance of the sound. In both experiments, we found that sound significantly affected visual search performance (even though participants were instructed to ignore the sound), but characteristic sound had a greater effect on visual search performance than spatial congruency. However, when the target was more difficult to locate, spatially congruent sounds benefitted performance. Our findings suggest that characteristic and spatially congruent sounds can affect visual search performance, with important implications for our understanding of multisensory influences on target detection in realistic visual scenes.
Acknowledgement: Science Foundation Ireland

43.540 Viewing condition shifts the perceived auditory soundscape
Adria E. N. Hoover 1 (adriah@yorku.ca), Laurence R. Harris 1, Jennifer K. E. Steeves 1; 1 Centre for Vision Research, York University, Toronto, Canada
Early-blind individuals have superior sound-processing abilities compared to sighted individuals (Lessard et al., 1998, Nature 395: 278). Here we ask whether sound-processing ability is affected in normally sighted individuals by closing one or both eyes. Sound localization: Participants judged the location of ramped-onset double bursts of white noise (30 ms each, separated by 30 ms) played through 16 speakers equally spaced along the azimuth (from -90° to 90°) in a semicircular array hidden behind a curtain. Participants listened under four viewing conditions: (1) eyes closed, (2) eyes open, (3) left eye open, and (4) right eye open. Perceived sound location was reported relative to a visual scale. Participants were more accurate in the central visual field and less accurate in the periphery with eyes closed compared to when both eyes were open. When viewing monocularly, the perceived location of all sounds shifted toward the centre of the visible visual field (left for left-eye viewing, right for right-eye viewing), and error increased in the non-visible field. These findings suggest that the perceived positions of centrally located sound sources (even when no useful visual information is available) are shifted toward the centre of the visible visual field. Sound discrimination: Participants were asked to discriminate the relative locations of two sound bursts. Two arrays of 8 speakers were equally spaced between 40° and 60° in the left and right periphery. Participants listened under the same viewing conditions as in the localization task. Participants had lower thresholds with both eyes open than with both eyes closed or when viewing monocularly.
Thus, in normally sighted individuals, sound discrimination ability is not improved when the eyes are closed. Viewing condition differentially affects spatial sound processing depending upon the nature of the task.
Acknowledgement: LRH and JKES are sponsored by NSERC. AH has an NSERC graduate fellowship.

43.541 Aurally aided visual search in depth using 'virtual' crowds of people
Jason S. Chan 1 (jchan@tcd.ie), Corrina Maguinness 1, Simon Dobbyn 2, Paul McDonald 3, Henry J. Rice 3, Carol O'Sullivan 2, Fiona N. Newell 1; 1 Trinity College Dublin, School of Psychology and Institute of Neuroscience, 2 Trinity College Dublin, Department of Computer Science, 3 Trinity College Dublin, Department of Mechanical Engineering
It is well known that a sound can improve visual target detection when both stimuli are presented from the same location along the horizontal plane (Perrott, Cisneros, McKinley, & D'Angelo, 1996; Spence & Driver, 1996). However, in those studies the auditory and visual stimuli were always congruent along the depth plane. In previous experiments, we demonstrated that it is not enough for an auditory stimulus to be congruent along the horizontal plane; it must be congruent in depth as well. However, congruency along the depth plane may not be crucial in virtual reality (VR). It is well known that visual distance perception in VR suffers from a compression of space, whereby objects appear closer to the observer than they are intended to be. In the following experiment we presented virtual scenes of people, and the participant's task was to locate a target individual in the visual scene. Congruent and incongruent virtual voice information, containing distance and direction cues, was paired with the target. We found that response times were facilitated by a congruent sound. Participants were significantly worse when the sound was incongruent with the visual target in either the horizontal or the depth plane. Ongoing experiments are also investigating the effects of moving audio-visual stimuli on target detection in virtual scenes. Our findings suggest that a sound can have a significant influence on locating visual targets presented in depth in virtual displays, with implications for understanding crossmodal influences on spatial attention and for the design of realistic virtual environments.
Acknowledgement: Science Foundation Ireland


43.542 Classification of Natural Sounds from Visual Cortex Activity
Petra Vetter 1 (p.vetter@psy.gla.ac.uk), Fraser Smith 1, Lucy Petro 1, Lars Muckli 1; 1 Centre for Cognitive Neuroimaging, Dept. of Psychology, University of Glasgow
We investigated whether contextual auditory information is contained in the neural activity pattern of visual cortex in a category-specific manner. While blindfolded, subjects were presented with three types of natural sounds: a forest scene (animate/non-human), a talking crowd (animate/human), and traffic noise (inanimate). We used multivariate pattern analysis (linear support vector machines) to classify the three different sounds from BOLD activity patterns in early visual cortex, as identified with retinotopic mapping. Preliminary results show above-chance classification in visual areas V2 and V3. This suggests that contextual information from the auditory modality shapes the neural activity pattern in early visual cortex, in a category-specific manner and in the absence of visual stimulation.
Acknowledgement: BBSRC

43.543 Learning to bind faces and voices: a gender-congruency advantage
Elan Barenholtz 1 (elan.barenholtz@fau.edu), Meredith Davidson 1, David Lewkowicz 1, Lauren Kogelschatz 1; 1 Dept. of Psychology, Florida Atlantic University
The faces and voices of familiar people are mutually informative: hearing a familiar person's voice allows the observer to infer the speaker's face, and vice versa. Development of this cross-modal knowledge may be due to simple associative pairing, or it may represent a specialized process in which faces and voices are bound into an 'identity'. Here, we present two experiments suggesting that binding into an identity is essential to efficiently learning face-voice pairs. In both experiments we compared how well people learned to match faces and voices across three types of face-voice pairs: when the faces and voices were recorded from the same individual ('True Voice'), when they belonged to different individuals of the same gender ('Gender Matched'), and when they belonged to individuals of different genders ('Gender Mismatched'). In Experiment 1, where the faces and voices were presented statically, subjects showed much better performance in the Gender Matched than in the Gender Mismatched condition, as well as a smaller advantage for the True Voice over the Gender Matched condition. These results suggest that when faces and voices are congruent, and thus likely to be bound into an identity, learning is improved relative to when they are incongruent. In Experiment 2, we introduced a dynamic condition, in which the audio of the false voices (both Gender Matched and Gender Mismatched) was dubbed onto the video of the paired face. Performance for the Gender Mismatched pairs showed strong improvement in the dynamic condition relative to the static condition. No such difference between static and dynamic conditions was found for the other, congruent face-voice pair conditions. These results suggest that dubbing the incongruent face-voice pairs 'forced' them to be bound into an identity, improving learning. We conclude that binding into an identity is a critical factor in developing cross-modal knowledge of faces and voices.
43.544 Audiovisual Phonological Fusion and Asynchrony
Melissa Troyer 1,2 (mltroyer@mit.edu), Jeremy Loebach 1,3, David Pisoni 1; 1 Department of Psychological and Brain Sciences, Indiana University, Bloomington, 2 Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, 3 Department of Psychology, St. Olaf College
Our perception of the world involves the integration of many stimuli from different sensory modalities over time. When auditory and visual stimuli are presented asynchronously, subjects identify them as occurring at the same time despite up to three hundred milliseconds of offset (Conrey & Pisoni, 2006; Van Wassenhove et al., 2007). Moreover, when auditory and visual information differ slightly in content, subjects often perceive them as being part of the same event. In Audiovisual Phonological Fusion (AVPF), the auditory information indicates one speech sound (e.g., /l/) but the visual information indicates another (e.g., /b/), and the conflicting information is integrated to form a percept that contains both of these sounds (/bl/) (Radicke, 2007). The current study investigated the relationship between AVPF and temporal asynchrony. Subjects were presented with stimuli that differed in the amount of temporal offset, ranging from 300 ms of auditory lead to 500 ms of visual lead, and were asked to perform two tasks. In the fusion task, subjects were asked to report what they thought the speaker said. In the asynchrony judgment task, subjects were asked to determine whether the auditory and visual portions occurred at the same time ("in sync") or at different times ("out of sync"). The stimuli presented in both tasks were the same, but the order of the tasks was manipulated. We found that (1) AVPF was moderately robust to temporal asynchrony; (2) synchrony judgments were robust for AVPF stimuli; and (3) the order of the tasks can modulate performance, i.e., subjects who completed the perceptual fusion task first were more likely to judge items as occurring at the same time in the asynchrony judgment task.
Acknowledgement: NIH-NIDC R01-00111
43.546 Integration and the perceptual unity of audio-visual utterances
Shoko Kanaya 1 (skanaya@l.u-tokyo.ac.jp), Kazuhiko Yokosawa 1; 1 The University of Tokyo
When multisensory stimuli are unified as an event from one common source, the integration of these stimuli is considered to be facilitated. Some studies have estimated the degree of perceptual unification from participants' subjective reports of whether they felt the stimuli belonged together, although such unification conversely reflects the occurrence of integration. We investigated whether integrated speech perception affects the unification of audio-visual utterances. The visual stimulus was a movie of two bilateral faces uttering /pa/ and /ka/, respectively, and the auditory stimulus was one channel of a voiced utterance of /pa/ or /ka/ from one of two speakers. When the auditory utterance is /pa/, audio-visual integration can elicit an altered speech percept such as /ta/, based on the McGurk illusion in Japanese participants. In addition, we manipulated whether the auditory and visual stimuli at corresponding spatial locations were consistent, to make the unification more or less distinct. In our task, participants reported both the face perceived as the talker and the syllable perceived to be uttered by the talker. In this situation, participants would report the unified information source as the talker. If the outcome of integrated speech perception affects unification, the judgment about the talker should be related to the auditory utterance, as the marker of the presence or absence of the illusory percept. The results showed that the perceived talker was not affected by the kind of auditory utterance, although participants reported illusory hearing only for the auditory /pa/. When the audio-visual stimuli were presented repeatedly, to create more distinct unification, the judgment about the talker was influenced only by the frequency of presentation, and not by the auditory utterance. Moreover, these results were independent of the spatially manipulated ambiguity. These findings indicate that the perceived unity of audio-visual information is the cause, and not the result, of integration.

43.547 Crossmodal constraints on human visual awareness: Auditory semantic context modulates binocular rivalry
Yi-Chuan Chen 1 (yi-chuan.chen@psy.ox.ac.uk), Su-Ling Yeh 2, Charles Spence 1; 1 Department of Experimental Psychology, University of Oxford, 2 Department of Psychology, National Taiwan University
The world provides contextual information to multiple sensory modalities that we humans can utilize to construct coherent representations of the environment. To investigate whether crossmodal semantic context modulates visual awareness, we measured participants' dominant percept under conditions of binocular rivalry while they listened to an ongoing background soundtrack. Binocular rivalry refers to the phenomenon whereby, when different figures are presented to corresponding locations in the two eyes, observers perceive each figure as dominant in alternation over time. In Experiment 1, participants viewed a dichoptic figure consisting of a bird and a car either in silence (no-sound condition) or while listening to a bird, a car, or a restaurant soundtrack. The target of participants' attentional control over the dichoptic figure and the relative luminance contrast between the figures presented to each eye were varied in Experiments 2 and 3, and the meaning of the sound (i.e., bird or car) that participants listened to was independent of these visual manipulations. In all three experiments, a robust modulation of binocular rivalry by auditory semantic context was observed. We therefore suggest that this crossmodal semantic congruency effect cannot simply be attributed to the meaning of the soundtrack automatically guiding participants' attention or biasing their responses; instead, auditory semantic contextual cues likely operate by enhancing the representation of semantically congruent visual stimuli. These results indicate that crossmodal semantic congruency can serve as a constraint helping to resolve perceptual conflict in the visual system. We further suggest that when considering how the dominant percept in binocular rivalry (and so, human visual awareness) emerges, information from other sensory modalities also needs to be considered; and, in turn, that multisensory stimulation provides a novel means of probing the mechanisms underlying human visual awareness.
Acknowledgement: This research was supported by a joint project funded by the British Academy (CQ ROKK0) and the National Science Council in Taiwan (NSC 97-2911-I-002-038). Yi-Chuan Chen is supported by the Ministry of Education in Taiwan (SAS-96109-1-US-37).


3D perception: Spatial layout
Vista Ballroom, Boards 548–556
Monday, May 10, 8:30 - 12:30 pm

43.548 Comparing different measures of space perception across real and virtual environments
Sarah H. Creem-Regehr 1 (sarah.creem@psych.utah.edu), Michael N. Geuss 1, Tina R. Ziemek 2, Garrett C. Allen 2, Jeanine K. Stefanucci 1, William B. Thompson 2; 1 Department of Psychology, University of Utah, 2 School of Computing, University of Utah
A direct comparison between perceptual judgments in real and virtual environments (VEs) provides a way to evaluate the "perceptual fidelity" of a VE, testing whether the VE allows observers to behave as they do in the real world. We have previously demonstrated that participants underestimate distances to targets on the ground in VEs compared to their performance in the real world (e.g., Thompson et al., 2004, Presence), and that judgments of the aperture width affording passage are overestimated relative to shoulder width (Creem-Regehr et al., 2008, VSS). However, it is unknown how judgments of perceived affordances directly compare to those made in the real world. The present study allowed a comparison of distance judgments and perceived affordances across closely matched procedures and settings in a real and a virtual environment. Participants viewed two poles in a real classroom or in a virtual model of the same classroom. Consistent with previous results, participants significantly underestimated the distance to the target in the VE compared to the real classroom. Affordance judgments of passability between the two poles were analyzed as a percentage of participants' shoulder widths. In the VE, participants required significantly wider spaces to indicate passage than in the real classroom. This greater ratio of passability width to shoulder width in the VE is consistent with an underestimation of the perceived size of the aperture, which may be associated with the underestimation of distance, or with other factors such as uncertainty of scale or judgments made with respect to the observer's unseen body. Future work will assess the generalizability of these results by testing other distance, size, and affordance judgments in matched real and virtual spaces. The effectiveness of VEs in accurately portraying real-world environments and the utility of VEs as perceptual tools will be discussed.
Acknowledgement: This work was supported by NSF grant 0914488
43.549 Perceived slant from optic flow in active and passive viewing of natural and virtual surfaces
Carlo Fantoni 1 (carlo.fantoni@iit.it), Corrado Caudek 2, Fulvio Domini 1,3; 1 Center for Neuroscience and Cognitive Systems, Italian Institute of Technology, 2 Department of Psychology, University of Florence, 3 Department of Cognitive and Linguistic Sciences, Brown University
Motivation. Recent evidence suggests that extra-retinal signals play an important role in the perception of 3D structure from motion (SfM). According to the stationarity assumption (SA; Wexler, Lamouret, & Droulez, 2001), a correct solution to the SfM problem can be found for a moving observer viewing a stationary object by assuming a veridical estimate of the observer's translation. According to the SA, perception of surface slant should be (1) more accurate for active than for passive vision, and (2) more accurate for natural than for virtual objects (because of the cue conflict inherent in virtual stimuli). Method. We performed three experiments involving both active and passive observers. The task was to estimate the slant of a static random-dot planar surface. We manipulated the surface slant and the translation speed of the observer's head. The translational displacements and orientation of the participant's head were recorded in real time by an Optotrak Certus system, and the virtual stimuli were generated in real time on a high-definition CRT (passive observers received a replay of the same optic flow). Natural stimuli were dotted planar surfaces. Results. Perceived surface orientation increased with both increasing slant and increasing translation velocity. These systematic biases were found for both virtual and natural stimuli, and for both active and passive observers. Conclusion. Extra-retinal information available to active vision is not sufficient for a veridical solution to the SfM problem. Even for active vision, the first-order properties of the optic flow are the main determinant of perceived surface slant. If the first-order properties of the optic flow are kept constant, the surface is perceived as having a constant orientation, regardless of its actual orientation; if the first-order properties of the optic flow are varied (e.g., by manipulating the translation speed of the observer's head), surface slant is perceived as varying, regardless of whether distal slant is constant.
43.551 Slant perception differs for planar and uneven surfaces
Zhi Li1 (zhi.li.sh@gmail.com), Frank Durgin1; 1Psychology Department, Swarthmore College
Simulated environments often seem too small. Attempts to improve the perception of scale often involve applying realistic but planar textures to environmental surfaces. We have adopted a different approach in which three-dimensional objects ("rocks") that provide binocular, motion, and surface-occlusion cues for surface unevenness are embedded in simulated planar surfaces. Here we provide direct evidence that perceived surface orientation is more veridical when such uneven textures are used, especially in near space. The simulated surfaces used in this study were presented in an immersive virtual environment through a head-mounted display using pincushion-corrected and calibrated optics with full head-motion compensation. Participants made verbal estimates of the geographical (relative to horizontal) orientations of large surfaces presented at different simulated distances, but scaled in 3 dimensions so that the projected textures were equivalent across distances. At the nearest viewing distance (1 m), estimates of surface orientation for slopes below 30 deg were fairly accurate when the textures were rendered with rocks. (They protruded about 5 cm at this distance.) When the rocks were instead flattened against the surface, so that surface unevenness was compressed to less than 0.01 cm ("flagstones"), slanted surfaces in the same range appeared much more frontal than they were. At a much farther viewing distance (16 m), perceived surface orientation was steeper for both types of displays, but the surfaces embedded with rocks (now boulders) still appeared less frontal than the surfaces embedded with (equally large) flagstones. The near- and far-space effects of uneven surfaces may be moderated somewhat differently. The far-space effect, in particular, may reflect the presence of gradients of surface self-occlusion for uneven surfaces. Factors affecting perceived surface orientation may also play an important role in scaling perceived distance.
Acknowledgement: Swarthmore Faculty Research Grant


43.552 Looking for skies without gravity – differentiating viewing directions without vestibular information change
Oliver Toskovic1 (otoskovi@f.bg.ac.rs); 1Faculty of Philosophy, Kosovska Mitrovica, and Laboratory for Experimental Psychology, University of Belgrade
In previous research we showed that physically shorter distances towards the zenith are seen as equal to physically longer distances towards the horizon. This anisotropic tendency is a consequence of an interaction of non-visual (vestibular and proprioceptive) information with visual information. The aim of the present research was to investigate whether the same regularity can be found even when there is no change in non-visual information. Two experiments were done, in which 28 participants had the task of equalizing the perceived distances of three stimuli in three directions (separation between directions was 45 degrees). In the first experiment participants performed estimates while sitting on a chair, and in the second while lying on the left side of their body on a rotating chair. In both experiments, the experimenter moved participants towards different directions. In the first experiment, looking in different directions changes vestibular information, while in the second, vestibular information is constant. We used customized equipment to present stimuli and special glasses to prevent subjects' eye movements. In both experiments, results showed that at the 1 m distance perceived distance was the same in all directions. Also, in both experiments, at the 3 m and 5 m distances there was a significant difference between the two most extreme directions (with a 90-degree separation angle), but no significant difference between nearby directions (with a 45-degree separation angle). In the first experiment, the horizontal direction was perceived as shortest and the vertical as longest. In the second, the last direction while the subject was rotated upwards was perceived as the shortest, and the last direction while the subject was rotated backwards was perceived as the longest. These results suggest that anisotropy of perceived distance exists in both cases: when there is a change in vestibular information, and when vestibular information is constant.

43.553 Memory for others' height is scaled to eye height
Elyssa Twedt1 (twedt@virginia.edu), L. Elizabeth Crawford2, Dennis Proffitt1; 1Department of Psychology, University of Virginia, 2Department of Psychology, University of Richmond
Sedgwick (1973) noted that the perceived size of objects can be scaled relative to an observer's own eye height (EH). EH scaling has been shown to affect judgments of relative size, such that accuracy is best for objects at eye height (Bertamini, Yang, & Proffitt, 1998). The present study extended these findings to determine whether EH scaling is preserved in memory. In three experiments, we assessed how an observer's own height influences memories of others' heights. If EH scaling is preserved in memory, then judgments should be most accurate for targets that match the observer's height and should decline with deviation from that height. In Experiments 1 and 2, participants viewed target faces on sticks of varying heights. After each, they turned to face a comparison face and judged whether the comparison was taller or shorter than the target. Target and comparison heights were adjusted so that each participant viewed targets that were shorter, taller, and the same as their own height. In both experiments, we found that participants were most accurate judging targets that were near their own height.
Whereas in Experiment 1 participants were always standing, we manipulated current height in Experiment 2 by having a seated and a standing condition. Because accuracy was best for congruent trials (e.g., judging targets near seated height while seated), we concluded that people use their current height to aid these judgments. To test the real-world implications of these findings, in Experiment 3 participants judged the heights of other people, rather than artificial targets. Again, we found that eye height influenced judgments of others' heights. These experiments provide evidence that the EH scaling used in perception is preserved in memory.

43.554 Perception of the height of a barrier is scaled to the body
Jeanine Stefanucci1 (jeanine.stefanucci@psych.utah.edu), Michael Geuss1; 1Department of Psychology, College of Social and Behavioral Sciences, University of Utah
When walking through or judging passage through an aperture, people allow for a margin of safety (Warren & Whang, 1987). Furthermore, this margin is preserved when the body is widened by holding a large object (Wagman & Taylor, 2005). In a series of five experiments, we asked whether actions and judgments toward a different dimension, height, are also scaled to the body. In Experiment 1, participants allowed for a 3% margin of safety when walking under a horizontal barrier. In Experiment 2, participants were made taller by wearing a helmet and strapping blocks under their shoes. For both manipulations, participants were conservative in their walking behavior (e.g., allowing for a larger margin of safety). In the final three experiments, participants judged whether they could walk under barriers viewed from a static position and visually matched the height of the barrier to a horizontally projected line. In Experiments 3 and 4, participants who viewed the barriers while wearing the blocks or the helmet required a similar margin of safety for passage as participants whose height was not altered. Visually matched estimates were also no different when wearing the blocks as compared to not. However, participants wearing a helmet visually matched the height to be shorter than participants who did not wear a helmet. In the final experiment, experienced helmet wearers (ROTC members) showed no difference in visually matched estimates of height compared to those not wearing a helmet. Overall, the results suggest that people allow for a margin of safety when walking under a barrier, which is preserved when judging from a static position and when height is altered. However, estimates of the height may differ based on the type of judgment being made and experience with alterations to height.
Acknowledgement: This research was supported in part by NIH RO1MH075781-01A2 grant and NSF Grant IIS-0914488

43.555 When does cortical arousal enhance performance in visual perception tasks?
Adam J Woods1 (ajwoods@gwmail.gwu.edu), John Philbeck1; 1Department of Psychology, George Washington University
Intro: Cold pressor stimulation (CPS, immersing the foot in ice-water for 50 seconds) decreases contrast thresholds without changing verbal distance estimates (Woods et al., 2009). Apparently, enhancing cortical arousal influences some visual tasks but not others. The factors that elicit this influence remain unclear.
To begin investigating this issue, we conducted two experiments representing intermediate steps between the contrast threshold and verbal distance estimation methodologies: a Depth Threshold task (modeled on the 2AFC contrast threshold methodology, but in the depth domain) and a Distance Difference Threshold task (similar to the Depth task but with targets presented successively rather than in pairs). Methods: Depth Threshold: Two groups (N's = 18) underwent either CPS or "Sham" stimulation (immersing the foot in room-temperature water for 50 seconds). On each trial participants binocularly viewed two white rods against a black background and judged which rod was closer, with depth separation being adjusted adaptively across trials. Threshold (82% correct criterion) was estimated before and after stimulation. Distance-Difference Threshold: Seventeen participants sequentially viewed 2 cones in identical checkerboard-covered alleyways (interstimulus interval ≈ 1.5 s), judging which cone was closer. Thresholds were determined before and after CPS. Distance difference was adjusted adaptively across trials. Results: Depth thresholds decreased following CPS (t=3.4, p=0.003), but remained unchanged following Sham stimulation (t=0.19, p=0.84). Distance-difference thresholds did not change from Baseline to Post-Stimulation (t=0.21, p=0.83). Discussion: When present, arousal-related effects could stem from either enhanced attention or bona fide changes in visual appearance. These factors are difficult to tease apart. In a separate experiment, we found that CPS did not affect contrast thresholds in a 2AFC task when the interstimulus interval was increased to 1.5 s. Thus, our results suggest that some aspect of the simultaneous or near-simultaneous comparison between stimuli may be crucial for eliciting arousal-related effects in visual tasks.
Acknowledgement: National Science Foundation Graduate Research Fellowship (Woods)

43.556 Depth perception and the horizontal vertical illusion depend on figural proportions
H. A. Sedgwick1 (hsedgwick@sunyopt.edu), Ann M. Nolan1; 1SUNY College of Optometry
We investigated the effect of varying figural proportions on perceived depth and on the horizontal vertical illusion in the frontal plane. Stimuli were rectangles 10 cm in width and varying in length from 8 cm to 20 cm. The rectangles were presented, one at a time, either in depth (lying on the surface of a table) or in the frontal plane (standing vertically on the table). Viewing was either monocular or binocular at a distance of 108 cm. Observers verbally estimated the length of each rectangle assuming a width of 100 arbitrary units. Results were normalized by dividing estimated length by true length. For frontal plane rectangles, binocular and monocular results were very similar.


When true length was 10 (equal to width), the normalized estimated length was 1.05. This was a significant horizontal vertical illusion (without a bisection component) of 5%. As the true length increased, the normalized estimated length increased significantly. A linear regression line fitted to the normalized results had a slope of 0.0165, indicating that for every 10% increase in length, relative to width, another 1.65% was added to the horizontal vertical illusion. For rectangles receding in depth, monocular and binocular results were quantitatively different although qualitatively similar. When the true length-to-width ratio was 1.0, the monocular normalized estimated length was 0.76 and the corresponding binocular length was 0.88, showing depth compression in both cases. As the true length increased, the normalized estimated ratios increased significantly, with regression line slopes of 0.005 monocularly and 0.010 binocularly. Thus, for all four conditions, the estimated length-to-width ratio, when normalized so that the correct value was always 1.0, increased significantly as length increased. This produced an increasing horizontal vertical illusion for stimuli in the frontal plane and a decreasing compression of perceived depth for receding stimuli.
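Read as a regression in length-to-width units, the frontal-plane result amounts to the line below. The notation is ours; the intercept and slope are the reported 1.05 and 1.65% per 10% of width:

\[
\hat{r} \;=\; 1.05 \;+\; 0.165\left(\frac{L}{W} - 1\right),
\]

where \(\hat{r}\) is the normalized estimated length and \(L/W\) the true length-to-width ratio. On this reading, the longest rectangles (L = 20 cm, W = 10 cm) would be predicted to show \(\hat{r} \approx 1.22\), i.e., a horizontal vertical illusion of roughly 22% (an illustrative extrapolation, not a value reported above).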


Tuesday Morning Talks

Perceptual organization: Grouping and segmentation
Tuesday, May 11, 8:15 - 10:00 am
Talk Session, Royal Ballroom 1-3
Moderator: Steven Franconeri

51.11, 8:15 am
Two Processes in Feature Misbinding: (1) Enabling Misbinding and (2) Contributing Features
Yang Sun1,2 (berber.sun@gmail.com), Steven Shevell1,2,3; 1Psychology, University of Chicago, 2Visual Science Laboratories, Institute for Mind and Biology, University of Chicago, 3Ophthalmology & Visual Science, University of Chicago
Peripheral visual objects may be mistakenly perceived to have the features of objects in the central visual field (illusory conjunctions). An ambiguity-resolving mechanism is posited to use information from the center to resolve peripheral ambiguity (Wu, Kanai & Shimojo, Nature 2004). RATIONALE: (a) If objects with no motion do not cause motion ambiguity and ambiguity is necessary for misbinding, then misbinding of motion should not occur for objects without motion. (b) If the central stimulus initiates resolution of motion ambiguity, then misbinding should not occur when the center is blank. (c) If the center contributes essential information to resolve ambiguity, then the misbound feature values within the periphery should be in the central stimulus. METHODS: The stimulus had random dots, each one red or green. The central stimulus was either (1) blank or (2) had red dots moving upward and green dots moving downward. The peripheral stimulus always had red and green dots: peripheral red dots either (1) had no motion (constant locations presented either steadily or pulsed on and off), or (2) had ambiguous motion (a new random location, independently for each dot in each frame, or a new random direction of movement, independently for each dot in each frame). Peripheral green dots moved (1) upward (same direction as central red dots) or (2) downward (opposite from central red dots). Observers reported the directions of motion of the majority of peripheral (1) red dots and (2) green dots. RESULTS: No misbinding of motion was found when peripheral red dots did not move or when the center was blank. When central red dots and peripheral green dots moved in opposite directions, the misbound motion of randomly moving peripheral red dots could be in either direction. CONCLUSION: The center initiates misbinding but is not the sole source for misbound feature values.
Acknowledgement: NIH grant EY-04802

51.12, 8:30 am
Warped spatial perception within and near objects
Timothy Vickery1 (tim.vickery@gmail.com), Marvin Chun1; 1Department of Psychology, Yale University
We report that spatial perception is systematically distorted in the space in and around objects. Two dots placed inside a rectangular object's boundaries appeared farther apart than two equivalently spaced dots placed outside of the object. We measured this expansion effect by placing reference dots in one corner of a monitor (either inside an object or without an object present), and asking participants to match the spacing of dots in the opposite corner. In four different experiments, we found significant distortions of spatial distance judgment for reference dots inside objects compared to outside, ranging up to 15% greater in a rectangular object in one experiment (N=20).
To test whether this effect is modulated by the strength of perceived organization, we compared the magnitude of illusory expansion across: 1) separated portions of a partially occluded rectangle compared with separated objects; 2) within an illusory Kanizsa square compared to when the inducers were rotated 180 degrees; and 3) within a single rectangle compared with two separate rectangles. In all three cases, the strong-structure conditions (the occluded rectangle, illusory square, and single object) showed a significantly greater expansion than the weaker-structure conditions. This illusion could not be explained by a simple depth-based account, which would predict perceived contraction of space in the region occluding an object. However, the illusion did reverse to a contraction effect for dot spacings that were near the edges of the objects and larger than the object. In conclusion, space is systematically distorted by perceived structure. We propose that the allocation of attention to the surface of a selected object may result in distortions in spatial perception.
Acknowledgement: This research was supported by research grants NIH R01-EY014193 and P30-EY000785 to M.M.C

51.13, 8:45 am
Evidence For A Modular Filling-in Process During Contour Interpolation
Brian Keane1,2 (brian.keane@gmail.com), Philip Kellman1; 1Department of Psychology, University of California, Los Angeles, 2University Behavioral HealthCare, UMDNJ
Purpose. Information near filled-in regions alters the perception of interpolated shape, but it is unknown whether this process depends on top-down influences. Here, we consider whether observer strategy can reduce filling-in effects when interpolation normally occurs, or elicit such effects when interpolation normally does not occur. Method. Subjects discriminated briefly-presented, partially-visible fat and thin shapes, the edges of which either induced or did not induce illusory contours (relatable and unrelatable conditions, respectively). Half the trials of each condition incorporated task-irrelevant distractor lines, known to disrupt the filling-in process. Half of the observers were asked to treat the visible parts of the target as belonging to a single thing (group strategy); the other half were asked to treat the parts as disconnected (ungroup strategy). A strategy was encouraged by giving subjects pictures of the fat and thin response templates in the instruction phase of the experiment, and at the end of each trial. These pictures depicted either unitary shapes or fragmented shapes, depending on the strategy. Results. There were three main results. First, distractor lines impaired performance in the relatable condition but not in the unrelatable condition (p > 0.7). Second, for both relatable and unrelatable stimuli, strategy did not alter the effects of the distractor lines (p > 0.7). Finally, the attempt to group relatable fragments improved performance, whereas grouping did not reliably improve performance for unrelatable stimuli (p > 0.3). Conclusions.
These results suggest that a) filling-in during illusory contour formation cannot be easily removed via top-down strategy; b) filling-in cannot be easily manufactured from stimuli that fail to elicit interpolation; and c) actively grouping fragments can readily improve discrimination performance, but only when those fragments form interpolated contours. These findings indicate that while discriminating filled-in shapes depends on strategy, filling-in itself is relatively encapsulated from top-down influences.
Acknowledgement: This research was supported by a grant to the first author from the American Psychological Association.

51.14, 9:00 am
Grouping by common fate occurs for only one group at a time
Brian Levinthal1 (brian.levinthal@gmail.com), Steven Franconeri1; 1Department of Psychology, Northwestern University
Grouping allows the visual system to link separate regions of the visual field into a single unit for some types of processing. Although much past work examines when grouping will occur, and the relative strength of different types of grouping, less is known about the mechanism that causes the separate regions to become linked. We present a series of experiments demonstrating that a type of motion-based grouping, often called "common fate," may be driven by selection of a common motion vector among the grouped objects. Selecting in a spatially global manner for the motion vector currently exhibited by one object should also activate other objects that exhibit the same pattern of motion, affording the shape created by the group, as well as a feeling that objects 'go together'. This account makes a counter-intuitive prediction: that only one motion vector, and hence one common-fate group, can be created at a time. Participants performed a search for a common-fate group among non-common-fate groups. Displays contained four pairs of moving dots, where one dot per pair was constrained to a small region near fixation, and the other was located in the periphery. Among the four dot-pairs, one to four were locked in common motion, and we measured response times for participants to find at least one synchronized dot pair.


Search slopes were highly serial (~750 ms/pair), but were flat when one motion vector was precued. We propose that selection of an object's motion vector is a prerequisite for grouping by common fate, but this vector selection can group an unlimited number of objects sharing a pattern of motion.

51.15, 9:15 am
Early activation of contextual associations during object recognition
Kestas Kveraga1 (kestas@nmr.mgh.harvard.edu), Avniel Ghuman2, Karim Kassam3, Elissa Aminoff4, Matti Hamalainen1, Maximilien Chaumon1,5, Moshe Bar1; 1Radiology, Harvard Medical School, Massachusetts General Hospital, 2National Institutes of Mental Health, 3Psychology, Harvard University, 4Psychology, University of California-Santa Barbara, 5Psychology, Boston College
Our visual system relies on stored memory associations to achieve recognition. Objects in natural scenes tend to be grouped by a particular semantic context, and these contextual associations are employed during object recognition. Behavioral research has demonstrated that stimuli congruent with the scene context are recognized more easily than incongruent stimuli (e.g., Palmer, 1975; Biederman et al., 1982; Davenport and Potter, 2004). Investigations of context-related activity using fMRI (e.g., Bar and Aminoff, 2003; Aminoff et al., 2008; Peters et al., 2009) revealed a network of regions that are consistently engaged by contextual associations of objects and scenes. This network comprises the parahippocampal cortex (PHC), retrosplenial complex (RSC), and medial prefrontal cortex (MPFC). To understand how this context network is recruited and activated to facilitate recognition, one needs first to reveal its temporal and connectivity properties. Therefore, we investigated the spatiotemporal dynamics of contextual association processing here with a combination of fMRI and magnetoencephalography (MEG). We contrasted the neural response to objects with strong contextual associations (SCA) with the response elicited by weak contextual associations (WCA). Both fMRI and MEG responses revealed stronger activations in the context network for the SCA vs. WCA comparison. To explore the spatiotemporal dynamics of this process, we analyzed the phase synchrony, a measure of neural coupling, in the MEG data. The results show stronger overall phase synchrony for SCA objects than for WCA objects within the context network. Furthermore, we found an early, enhanced phase synchrony between the visual cortex and PHC, followed by PHC-RSC, and then by somewhat later RSC-MPFC coupling, occurring mainly in the beta band between 150-450 ms. Our findings reveal for the first time the spatiotemporal and connectivity properties of context processing. Implications of these findings for our understanding of how contextual information is used during recognition will be discussed.
Acknowledgement: NIH NS056615, NSF 0842947, NIMH K01-MH083011-01

51.16, 9:30 am
Color Contrast Polarity of Boundary Edge Affects Amodal and Modal Surface Completion
Teng Leng Ooi1 (tlooi@salus.edu), Yong R. Su1,2, Zijiang J. He2; 1Department of Basic Sciences, Pennsylvania College of Optometry, USA, 2Department of Psychological and Brain Sciences, University of Louisville, USA
Two loci on a natural surface are more likely to have the same than different colors.
Does the visual system capitalize on this ecological regularity to integrate partially occluded fragments into a larger common surface? In particular, when the geometrical relationship between the boundary edges of two image fragments is appropriate for amodal surface completion between them, do these edges need to have the same color contrast polarity (CP)? To answer this, we investigated whether the color CP of equiluminous, spatially aligned rectangles affects the amodal surface completion between them, and the consequent formation of the modal surface that occludes the amodal surface. We found, using three divergent psychophysical tasks, that separated rectangles with the same color CP (red/red or green/green), rather than with opposite color CP (red/green or green/red), tend to integrate into a partially occluded surface. First, observers subjectively reported the perceived illusory contour (modal surface) as stronger when the separated rectangles had the same color CP. Second, observers were more efficient in discriminating the orientation of modally completed ellipses formed from aligned rectangles with the same color CP. Orientation discrimination was worse when the aligned rectangles had opposite color CP, which negated the formation of the modally completed ellipse. Third, when motion signals were added to the edges of the separated rectangles with the same color CP, the rectangles were more likely to be perceptually integrated and seen to move in synchrony (global motion). In all experiments, we also varied the luminance contrast of the yellow background relative to the equiluminous rectangles. We found that the contribution of color CP to surface completion remains substantial with either the brighter or darker background. This indicates that even when the aligned rectangles carry the same luminance CP information, color CP information can exert an effect on the surface completion process.
Acknowledgement: NIH (R01 EY015804)

51.17, 9:45 am
Border ownership signals reflect visual object continuity
Philip O'Herron1 (poherro1@jhmi.edu), Rudiger von der Heydt1; 1Krieger Mind/Brain Institute, Johns Hopkins University
Theories of visual cognition have postulated a processing stage where elementary features are linked together to form more complex representations termed "object files" or "proto-objects". The neural basis of the linking is not known. How is the representation of a square different from the representation of four lines? One hint comes from the observation of border ownership (BOS) selectivity in monkey visual cortex. About half of the neurons in area V2 are selective for which side of a border in the receptive field is the figure and which side is the ground. The left-hand side of a square, for example, produces high firing rates in neurons of figure-right preference and low firing rates in neurons of figure-left preference. These neurons combine information from various figure-ground cues, including stereoscopic depth, occlusion features and global shape. Do these neurons just integrate figure-ground cues, or do they reflect the formation of proto-object representations? One important characteristic of visual objects is continuity. The system can identify given objects across a sequence of changing images. If BOS signals reflect object-related coding, they should show this continuity. But if they merely represent the figure-ground cues, they should change whenever the cues change. To answer this question we devised stimuli in which the figure-ground cues reverse while the objects remain the same.
During an initial motion phase, the figure-ground cues indicate one side of ownership. At the end of the motion, the static configuration gives cues indicating the opposite side of ownership. We find that the initial BOS assignment persists for seconds despite the change in the figure-ground cues of the stimulus, indicating that the continuity of the objects dominates the neural response. Thus BOS signals reflect the emergence of proto-object representations.

Motion: Mechanisms
Tuesday, May 11, 8:15 - 10:00 am
Talk Session, Royal Ballroom 4-5
Moderator: Mehrdad Jazayeri

51.21, 8:15 am
Monkeys and humans exhibit similar motion-processing mechanisms
William Curran1 (w.curran@qub.ac.uk), Catherine Lynn1; 1School of Psychology, Queen's University Belfast
Single-cell recording studies have provided detailed understanding of motion-sensitive neurons in nonhuman primates. Previous research has revealed linear and non-linear increases in spike discharge rate in response to increasing motion coherence and density, respectively, and a division-like inhibition between neurons tuned to opposite directions. It is not known to what extent these response properties mirror those of motion-sensitive neurons in the human brain. We used an adaptation phenomenon, the direction aftereffect (DAE), to investigate whether motion-sensitive neurons in the human brain respond to varying motion density and coherence in a similar manner to macaque. Motion adapters that evoke a stronger response in neurons usually result in greater changes in the neurons' direction tuning functions, which are thought to impact on DAE magnitude. If motion-sensitive neurons in the human brain respond in a similar manner to macaque, increasing motion density and coherence will result in changes in neural spike discharge similar to those reported for macaque. Given the relationship between neural spiking and aftereffect magnitude, changes in the levels of neural activity will be revealed through DAE measurements, with increasing neural activity leading to increasing DAE magnitude.


We measured DAE magnitude as a function of: 1) varying adapter dot density; 2) repeating the experiment while adding dots moving in the opposite direction; and 3) varying the motion coherence of the adapter. The resultant DAE tuning functions show that changes in activity of human motion-sensitive neurons with changes in motion density and coherence bear a strong resemblance to macaque data. We also show a division-like inhibition between neural populations tuned to opposite directions, which also mirrors neural inhibitory behaviour in macaque. These findings strongly suggest that motion-sensitive neurons in human and nonhuman primates share common response and inhibitory characteristics.

51.22, 8:30 am
Responses of macaque MT neurons to multi-stable moving patterns
Mehrdad Jazayeri1,2 (mjaz@u.washington.edu), Pascal Wallisch1, J. Anthony Movshon1; 1Center for Neural Science, New York University, 4 Washington Place, New York, NY 10003, USA, 2HHWF, Physiol. & Biophys., University of Washington, Seattle, Washington 98195, USA
Neurons in area MT are sensitive to the direction of motion of gratings and of plaids made by summing two gratings moving in different directions. MT component-direction-selective (CDS) neurons respond independently to the gratings of a plaid, while pattern-direction-selective (PDS) neurons combine component information to respond selectively to plaids that move in the direction preferred by single gratings. Adding a third moving grating creates a multistable "triplaid", which alternates perceptually among different groupings of gratings and plaids. To examine how this multistable motion percept might relate to the activity of CDS and PDS neurons, we measured the activity of 77 MT neurons in anaesthetized macaques to triplaid stimuli in which three identical moving gratings whose directions were separated by 120 deg were introduced successively, going from a grating (320 ms) to a plaid (320 ms) to a triplaid (1280 ms). CDS and PDS neurons – selected based on their responses to gratings and plaids – responded strikingly differently to triplaids. CDS neurons maintained their tuning properties for more than 1 s, but PDS neurons were slowly and progressively suppressed and lost their direction tuning properties altogether after 0.3–0.6 s. PDS but not CDS responses to triplaids also depended on the order in which the three components were introduced. We wondered whether these effects might be due to anesthesia and therefore repeated the experiment in area MT of an awake macaque performing a fixation task. Responses to the onset of individual gratings were more transient in the awake macaque than under anesthesia, but the sustained suppression of PDS responses persisted in both conditions. We attribute the differences between CDS and PDS response properties to an opponent suppression that is more potent in PDS cells, and discuss how area MT might contribute to the multistable perception of direction in moving triplaids.
Acknowledgement: NIH EY02017, EY04440

51.23, 8:45 am
Distinct binocular mechanisms for 3D motion perception
Thaddeus B. Czuba1,4,5 (thad@mail.utexas.edu), Bas Rokers1,2,3,4,5, Alexander C. Huk1,2,3,4,5, Lawrence K. Cormack1,3,4,5; 1Psychology, 2Neurobiology, 3Institute for Neuroscience, 4Center for Perceptual Systems, 5The University of Texas at Austin
The perception of 3D motion relies on two binocular cues: one based on changing disparities over time (the CD cue), and one based on interocular velocity differences (the IOVD cue).
While both cues are typically present when a real object moves through depth, the CD cue is easy to isolate and has therefore received the most attention. More recently, however, the IOVD cue has been (behaviorally) isolated and shown to play a strong role in the perception of 3D motion.
We probed the mechanisms responsible for 3D motion using a standard motion adaptation paradigm. Observers adapted to random-dot motion directly towards or away from them. The strength of the resulting motion aftereffect was determined from the shift in the psychometric function relating dot motion coherence to perceived direction. The shifts in 3D motion thresholds were extremely large—around 45% coherence—double that of frontoparallel aftereffects measured using otherwise identical 2D motion stimuli. These results (and those from a variety of control conditions) are inconsistent with a simple inheritance of 2D aftereffects and reveal adaptation of a unique 3D motion mechanism.
We next adapted observers to 3D motion stimuli that contained the isolated CD or IOVD cue, or combined both cues (like most real-world motion). Each aftereffect was measured using an identical combined-cue variable motion coherence stimulus. Adaptation to either the combined-cue or IOVD-isolating stimuli resulted in the same large aftereffects seen in the first experiment, while adaptation to the CD-isolating stimulus produced aftereffects less than half as large.
These motion aftereffects reveal distinct representations of 3D directions of motion, indicate that separate mechanisms exist for processing the disparity- and velocity-based cues, and support recent work showing that, under many conditions, the velocity-based cue plays a surprisingly fundamental role in 3D motion perception.

51.24, 9:00 am
Brain areas involved in perception of motion in depth: a human fMRI study
Hwan Sean Lee1, Sylvia van Stijn1, Miriam Schwalm1, Wolf Singer1, Axel Kohler2; 1Neurophysiology, Max Planck Institute for Brain Research, Germany, 2Psychiatric Neurophysiology, University Hospital of Psychiatry, Switzerland
Recently, Likova and Tyler (2007) reported a brain region anterior to the human MT complex (hMT+) that is specialized for motion in depth, while Rokers, Cormack and Huk (2009) reported strong involvement of hMT+ itself. To resolve these conflicting results, we developed dynamic random-dot stereograms (RDS) in which we could trace the processing phases of the depth and motion components with functional magnetic resonance imaging. In our RDS, a number of layers composed of black random dots on frontoparallel planes were stacked in the in-depth direction against a gray background, predefining the motion path. In each frame, dots in one of the layers switched from black to white and then returned to black in the successive frame, during which the contrast switching took place in another layer. When switching occurred in neighboring layers toward one direction, observers perceived a plane smoothly traversing in depth (condition 1); when the switching occurred in arbitrary layers in succession, observers perceived no coherent motion (condition 2). Both conditions require a prior process of representing a plane (white random-dot layer) in depth, which is possible only after binocular combination. In condition 3, the contrast-switching dots were selected across arbitrary layers, which appeared as twinkling dots in depth. By contrasting these conditions in block designs, we found that both hMT+ and a region anterior to hMT+ are involved in the process.
First, alternation of conditions 2 and 3, in which surface representation is the only feature in comparison, evoked positive blood-oxygen-level-dependent (BOLD) changes that were mostly contained in hMT+ and in another visual area, putative V3A. On the other hand, alternation of conditions 1 and 2, in which perception of coherent in-depth motion is the feature of interest, evoked BOLD changes in a region anterior to hMT+ (including the anterior hMT+).

51.25, 9:15 am
Visual Illusion Contributes to the Break of the Curveball
Zhong-Lin Lu1 (zhonglin@usc.edu), Arthur Shapiro2, Chang-Bing Huang1; 1LOBES, Department of Psychology, University of Southern California, 2Department of Psychology, American University
In the game of baseball, the curveball follows a (physical) parabolic trajectory from the pitcher's hand to home plate, but batters often report that the path of the ball appears discontinuous. The perceived discontinuity is referred to as the "break". The discrepancy between the perceptual and physical trajectories suggests that the break of the curveball is a perceptual illusion. A curveball contains two orthogonal motion signals: a global motion toward the batter (second-order motion), and a local spinning (first-order motion). We have created a simplified visual display to simulate the two orthogonal motion signals in the curveball. In our display, a spinning disk descends vertically on a screen; when viewed foveally, the disk appears to descend vertically, but when viewed with peripheral vision, the disk appears to descend obliquely. We found that the perceived motion direction of the disk deviated from vertical by about 0.67 × eccentricity. We computed the moment-by-moment perceived velocity of the curveball from an actual trajectory (Bahill and Baldwin, 2004) by assuming that the batter's gaze shifts from the ball to the expected point of bat/ball contact when the ball is 0.2 sec from home plate, and by adding a 0.67 × eccentricity (degrees) deviation to the physical velocity. The results predict an observer's perception of a discrete shift from the physical parabolic path traveled by a curveball and suggest that the misperception of the curveball's path may be attributable to a transition from foveal to peripheral visual processing of the image of the ball.
Acknowledgement: National Eye Institute
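The eccentricity-dependent bias reported above has a simple form. Written out in our notation, with E the retinal eccentricity of the ball in degrees and theta the deviation of the perceived descent direction from vertical:

\[
\theta_{\mathrm{dev}} \;\approx\; 0.67\,E, \qquad\text{e.g., } E = 15^\circ \;\Rightarrow\; \theta_{\mathrm{dev}} \approx 10^\circ .
\]

Under the abstract's assumption that gaze leaves the ball 0.2 s before contact, E jumps from roughly zero to tens of degrees at that moment, so the perceived direction rotates abruptly at that instant, which is the perceived "break" (illustrative numbers only).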


51.26, 9:30 am
Recovering the functional form of the slow-and-smooth prior in global motion perception
Hongjing Lu1,2 (hongjing@ucla.edu), Tungyou Lin3, Alan Lee1, Luminita Vese3, Alan Yuille1,2,4; 1Department of Psychology, UCLA, 2Department of Statistics, UCLA, 3Department of Mathematics, UCLA, 4Department of Computer Science, UCLA
Human motion perception has been proposed as a rational system for combining noisy measurements with prior expectations. An essential goal is to find means to experimentally discover the prior distributions used by the human visual system. We aim to infer the functional form of the motion prior from human performance. Stimuli consisted of 144 gratings with random orientations. Drifting velocities for signal gratings were determined by global motion, whereas those for noise gratings were randomly assigned. Observers were asked to discriminate global motion directions between a reference and a test stimulus. In session 1, human performance was measured at ten different coherence levels, with a fixed angular difference between the reference and test directions. Session 2 measured performance for ten angular differences at a 0.7 coherence ratio. The priors included slowness, first-order and second-order smoothness. We focused on two functional forms for the prior distributions: L2-norm regularization (corresponding to a Gaussian distribution) and L1-norm regularization (approximating a Student's t distribution, whose shape has heavier tails than the Gaussian). The weights of the three prior terms were estimated for each functional form to maximize the fit to human performance in the first experimental session. We found that the motion prior in the form of the Student's t distribution provided better agreement with human performance than did Gaussian priors. The recovered functional form of the motion prior is consistent with objective statistics measured in the natural environment. In addition, large weight values were found for the second-order smoothness terms, indicating the importance of high-order smoothness preferences in motion perception. Further validation used the fitted model to predict observer performance in the second experimental session. The average accuracy difference between humans and the model across ten experimental levels ranged within 3%–8% for five subjects. This excellent predictive power demonstrates the fruitfulness of this approach.
Acknowledgement: NSF BCS-0843880 to HL and NSF 0736015 to AL
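The two candidate prior families can be stated compactly. The energy below is a sketch in our own notation, not the authors' equations; the ingredients (a slowness term plus first- and second-order smoothness terms, each under an L2 or L1 norm, with fitted weights) are those listed in the abstract:

\[
E(\mathbf{v}) \;=\; \lambda_0\,\rho(\mathbf{v}) \;+\; \lambda_1\,\rho(\nabla\mathbf{v}) \;+\; \lambda_2\,\rho(\nabla^2\mathbf{v}),
\qquad
\rho(\mathbf{x}) \;=\;
\begin{cases}
\lVert\mathbf{x}\rVert_2^{2} & \text{(Gaussian form)}\\[2pt]
\lVert\mathbf{x}\rVert_1 & \text{(heavy-tailed, Student-like form)}
\end{cases}
\]

with the prior probability proportional to \(\exp(-E(\mathbf{v}))\) and the weights \(\lambda_i\) fitted to the session-1 data.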
51.27, 9:45 am
Investigating the relationship between actual speed and perceived visual speed in humans
John A. Perrone1 (jpnz@waikato.ac.nz), Peter Thompson2, Richard J. Krauzlis3; 1The University of Waikato, New Zealand, 2The University of York, UK, 3The Salk Institute for Biological Sciences, U.S.A.
A number of models have been proposed over the years that are able to estimate the speed of a moving image feature such as an edge, but it is not obvious how these models should be assessed in terms of their performance. Over what range of speeds should a model's estimates of image velocity be veridical in order for it to be classed as effective? There is currently a lack of data that can directly inform us as to what the function looks like that links human estimates of speed (v') to actual speed (v), i.e., v' = f(v), f = ? On a plot of v' versus v, it is difficult to establish the absolute location of the function, but we will show that there already exists a range of psychophysical data which constrain the form it can take. For example, the U-shaped speed discrimination (Weber fraction) curves obtained by a number of researchers (e.g., McKee, Vis Res., 1981; De Bruyn & Orban, Vis Res., 1988) suggest that the v' = f(v) function for moving edges is s-shaped, with the maximum slope occurring at intermediate speeds (approx 4–16 deg/s). We have discovered that this s-shape is also predicted by models of speed estimation that feature speed-tuned Middle Temporal (MT) neurons and which incorporate a weighted vector average (centroid) stage (e.g., Perrone & Krauzlis, VSS, 2009). Because the range of speed tunings in MT is naturally constrained at both the high and low speed ends, the centroid estimate of the MT activity distribution is biased as a result of 'truncation effects' caused by these lower and upper bounds; speed estimates in the model are overestimated at slow input speeds and underestimated at high input speeds, producing an s-shaped v' = f(v) function similar to that predicted by the speed discrimination data.
Acknowledgement: JP & RK supported by a Royal Society of New Zealand Marsden Fund grant.
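The truncation argument can be made concrete with a minimal simulation. The sketch below is our own construction, not the authors' model: a bank of log-spaced speed-tuned channels, bounded above and below as MT preferred speeds are, read out with a weighted (centroid) average. The bounds and tuning width are arbitrary illustrative choices.

```python
import numpy as np

# Hypothetical bank of speed-tuned, MT-like channels. Preferred speeds are
# log-spaced between hard lower and upper bounds (deg/s), and each channel
# is Gaussian-tuned in log-speed.
PREFS = np.geomspace(0.25, 64.0, 25)   # bounded range of preferred speeds
SIGMA = 0.5                            # tuning width in log-speed units

def perceived_speed(v):
    """Centroid (weighted vector average) read-out of the population."""
    resp = np.exp(-(np.log(v) - np.log(PREFS)) ** 2 / (2 * SIGMA ** 2))
    return np.exp(np.sum(resp * np.log(PREFS)) / np.sum(resp))

for v in (0.125, 0.5, 2.0, 8.0, 32.0, 128.0):
    print(f"actual {v:7.3f} deg/s -> perceived {perceived_speed(v):6.2f} deg/s")
```

Because responses to very slow inputs pile up on the lowest-preference channels (and very fast inputs on the highest), the centroid cannot follow v beyond the bounds: the printout shows near-veridical estimates at mid-range speeds, overestimation at the slow end, and underestimation at the fast end, i.e., an s-shaped v' = f(v) of the kind described above.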
Neural mechanisms: Cortex
Tuesday, May 11, 11:00 - 12:45 pm
Talk Session, Royal Ballroom 1-3
Moderator: Melissa Saenz

52.11, 11:00 am
Perceptual learning and changes in white matter in the aged brain revealed by diffusion-tensor imaging (DTI)
Yuko Yotsumoto1,2,3 (yuko@nmr.mgh.harvard.edu), Li-Hung Chang3, Rui Ni4, David Salat1,2, George Andersen5, Takeo Watanabe3, Yuka Sasaki1,2; 1Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 2Harvard Medical School, 3Department of Psychology, Boston University, 4Psychology Department, Wichita State University, 5Department of Psychology, University of California, Riverside
An extensive body of research has shown that vision declines with increased age. Recent research (Yotsumoto et al., 2008; Ni et al., 2007) has shown that perceptual learning can be used to improve visual performance among older subjects (age 65 or older) and that the improved performance is associated with an increase in BOLD signal localized to a trained part of the visual field. An important question is whether the changes in BOLD signal are associated with anatomical changes. To address this question, we used diffusion tensor fractional anisotropy (FA) as a method to index white matter density changes in older subjects trained using perceptual learning. Nine older subjects aged 65-80 underwent three behavioral training sessions of a texture discrimination task (TDT) (Karni and Sagi, 1991). Each session lasted about 45 minutes and was conducted on three separate days. They also participated in two MRI/fMRI sessions before and after the three training sessions. In the MRI/fMRI sessions, FA values were obtained using DTI, as well as BOLD activity during the task. Results indicated that FA values decreased below the regions of visual cortex retinotopically corresponding to the location of the trained stimulus (trained regions) when compared to those before the training. These findings were associated with improved TDT performance after training and significantly larger BOLD signal in the trained region than in untrained regions. These results raise the possibility that, at least in older people, long-term performance and physiological changes are supported by anatomical changes.
Acknowledgement: NIH-NEI R21 EY018925, NIH-NEI R01 EY015980-04A2, NIH-NEI R01EY019466, NIH-NIA R01 AG031941, JSPS

52.12, 11:15 am
Retinotopic Organization of Visual-Callosal Fibers in Humans
Melissa Saenz1 (saenz@caltech.edu), Christof Koch1, Ione Fine2; 1Division of Biology, California Institute of Technology, 2Department of Psychology, University of Washington
Introduction: The visual cortex in each hemisphere is linked to the opposite hemisphere by axonal projections that pass through the splenium of the corpus callosum. Earlier work suggests there may be retinotopic organization within the splenium: Dougherty and Wandell (2005) traced callosal fibers from the splenium to a broad region of visual cortex that included multiple extrastriate regions and found a ventral-dorsal mapping (upper vs. lower visual field) running from the anterior-inferior corner to the posterior-superior end of the splenium. However, it is not clear whether these results are due to dorsal and ventral visual areas projecting to different regions of the splenium, or whether individual visual areas also show retinotopic organization within the splenium. Here, we demonstrate consistent retinotopic organization of V1 fibers within the human splenium. Methods: High-angular-resolution diffusion-weighted MR imaging (HARDI, 72 diffusion directions) and probabilistic diffusion tractography were used to track fibers between seed points in the splenium and retinotopically defined sub-regions of V1 in 6 human subjects with normal vision. V1 was divided into sub-regions (three eccentricity bands, upper vs. lower visual field representations) based on functional retinotopic mapping in each subject. Each tractography seed point within the splenium was then labeled according to its profile of connection probabilities to these V1 retinotopic sub-regions.


Results: For all 12 hemispheres, we found retinotopic organization of V1 fibers within the splenium. The eccentricity mapping (of fovea to periphery) runs from the anterior-superior corner to the posterior-inferior end of the splenium. This runs orthogonal to a ventral-dorsal mapping (upper vs. lower visual field) running from the anterior-inferior corner to the posterior-superior end of the splenium. These results give a more detailed view of the structural organization of the human splenium than previously reported and offer new opportunities to study structural plasticity in the visual system.
Acknowledgement: RO1 EY-014645

52.13, 11:30 am
Flexibility of temporal receptive windows (TRWs) in the human brain
Miki M. Fukui1 (miki.fukui@nyu.edu), Nava Rubin1,2; 1Department of Psychology, New York University, 2Center for Neural Science, New York University
Real-life events vary in their temporal scales; accordingly, different brain areas must process information at scales appropriate for their functional properties. Denoting the period during which an incoming stimulus can affect the neural response as an area's "temporal receptive window" (TRW), we previously used fMRI to identify areas with TRWs ranging from very short in early visual areas to long (>40 sec) in anterior cortex, e.g., the Frontal Eye Fields (FEF) (Hasson, Yang, Vallines, Heeger & Rubin, 2008). Because a given real-life event can also vary in pace, we hypothesized that TRWs must be flexible enough to allow processing at a range of paces. Here, we investigated the flexibility of TRWs by measuring fMRI activity while observers viewed clips of a feature film that were time-stretched or time-compressed (x0.3, x0.6, x1.5 and x3.0), and later, the original versions of those clips. Observers completed post-scan questionnaires to assess their comprehension of the plot of each clip. Localizer scans were obtained to define early visual cortical areas (standard methods) and anterior, long-TRW ROIs (based on Hasson et al., 2008). Response reliability of each ROI was assessed by computing the correlations between the time-courses of the responses to two repetitions of the same clip within and across observers. Flexibility of an area's TRW was assessed by computing the correlation between the time-courses of the response to an original clip and the response to a pace-modified clip, after un-stretching or un-compressing the latter. Correlations between the responses to the pace-modified and original clips were comparable to those between two repetitions of the original, more so in long-TRW areas (e.g., FEF) and when comprehension was good. These data suggest that durations of TRWs may be determined not by time per se, but by the number of accumulating sub-events.
Acknowledgement: Supported by NIH F31-EY19835 to MMF, RO1-EY014030 to NR
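The un-stretching step lends itself to a short sketch. The snippet below illustrates the analysis idea only; the function names and the linear-interpolation resampler are our assumptions, not the authors' pipeline.

```python
import numpy as np

def unstretch(ts_mod, factor):
    """Resample a response time-course recorded while a clip played at
    `factor` times its original duration back onto the original time base."""
    n = int(round(len(ts_mod) / factor))        # samples at original pace
    grid = np.linspace(0, len(ts_mod) - 1, n)
    return np.interp(grid, np.arange(len(ts_mod)), ts_mod)

def trw_flexibility(resp_orig, resp_mod, factor):
    """Correlate the response to the original clip with the un-stretched
    response to a pace-modified clip (e.g., factor=3.0 for a x3.0 clip)."""
    un = unstretch(np.asarray(resp_mod, dtype=float), factor)
    n = min(len(resp_orig), len(un))
    return np.corrcoef(np.asarray(resp_orig[:n], dtype=float), un[:n])[0, 1]
```

Comparable correlations for pace-modified and repeated-original clips, as reported above for long-TRW areas, would then indicate a window that rescales with the pace of the material.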
52.14, 11:45 am
Gamma-aminobutyric acid concentration is reduced in visual cortex in schizophrenia and correlates with orientation-specific surround suppression
Michael Silver1,2 (masilver@berkeley.edu), Richard Maddock3, Ariel Rokem1, Jong Yoon3; 1Helen Wills Neuroscience Institute, University of California, Berkeley, 2School of Optometry, University of California, Berkeley, 3Department of Psychiatry and Imaging Research Center, University of California, Davis
The neural mechanisms that underlie perceptual and cognitive deficits in schizophrenia remain largely unknown. The gamma-aminobutyric acid (GABA) hypothesis proposes that reduced GABA concentration and neurotransmission in the brain result in cognitive impairments in schizophrenia. However, few in vivo studies have directly examined this hypothesis in individuals with schizophrenia. We employed magnetic resonance spectroscopy (MRS) to measure visual cortical GABA levels in subjects with schizophrenia and demographically matched healthy control subjects and found that the schizophrenia group had an approximately 10% reduction in visual cortical GABA concentration relative to the control group. We further tested the GABA hypothesis by correlating visual cortical GABA levels with orientation-specific surround suppression, a behavioral measure of visual inhibition thought to be dependent on GABAergic synaptic transmission. Subjects performed a contrast decrement detection task within a vertically-oriented annulus grating. For some trials, the grating was surrounded by either a parallel vertical grating or an orthogonal horizontal grating. Thresholds for contrast decrement detection were largest for the parallel surround condition, and the ratio of thresholds in the parallel and orthogonal surround conditions indexes the component of surround suppression that is selective for stimulus orientation. Previous work from our group has shown that subjects with schizophrenia exhibit reduced orientation-specific surround suppression of contrast decrement detection (Yoon et al., 2009). For subjects with both MRS and behavioral data, we found a highly significant positive correlation between visual cortical GABA levels and the magnitude of orientation-specific surround suppression. Concentrations of GABA in visual cortex were not correlated with contrast decrement detection thresholds for stimuli that did not contain a surround. These findings suggest that a deficit in neocortical GABA in the brains of subjects with schizophrenia results in impaired cortical inhibition and that GABAergic synaptic transmission in visual cortex plays a critical role in orientation-specific surround suppression.
Acknowledgement: NARSAD and NIMH

52.15, 12:00 pm
Dynamic synthesis of curvature in area V4
Jeffrey Yau1 (yau@jhu.edu), Anitha Pasupathy2, Scott Brincat3, Charles Connor4; 1Department of Neurology, Division of Cognitive Neuroscience, Johns Hopkins University School of Medicine, 2Department of Biological Structure, University of Washington, 3Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, 4Solomon H. Snyder Department of Neuroscience, Johns Hopkins University, Zanvyl Krieger Mind/Brain Institute
Object perception depends on integration of small, simple image fragments (represented in early visual cortex) into larger, more complex shape constructs (represented in intermediate and higher-level ventral-pathway visual cortex). We have previously described a dynamic process for shape integration in macaque monkey posterior inferotemporal cortex (PIT) (Brincat & Connor, 2006, Neuron; VSS 2008; VSS 2009). In PIT, linear tuning for individual curved contour fragments evolves into nonlinear selectivity for more complex multi-fragment configurations over a time course of approximately 60 ms. Here, we describe the antecedent stage of shape integration in area V4, which provides feedforward inputs to PIT. In V4, early responses reflect linear tuning for individual contour orientation values, comparable to orientation tuning in early visual cortex (V1, V2).
These signals evolve into nonlinear tuning for curvature (change in orientation along contours) over a time course of approximately 50 ms. The emergence of V4 curvature responses matches the time course of V4-like curvature signals in PIT, implying that this dynamic process in V4 provides critical input signals to PIT. These results suggest a comprehensive model of sequential shape synthesis in the ventral pathway. Orientation signals emerge first, and are dynamically synthesized into curvature signals in V4. V4-like curvature signals appear with nearly the same time course in PIT, and are subsequently synthesized into larger, more complex shape constructs. The time course of this transformation complements an extensive body of human psychophysical and neurophysiological research showing that object perception develops over a span of several hundred milliseconds from very crude distinctions to finer categorization and identification.

52.16, 12:15 pm
Encoding a salient stimulus in the lateral intraparietal area (LIP) during a passive fixation task
Fabrice Arcizet1 (farcizet@mednet.ucla.edu), Koorosh Mirpour2, Weisong Ong3, James Bisley4; 1UCLA, Department of Neurobiology, David Geffen School of Medicine, 2UCLA, Department of Neurobiology, David Geffen School of Medicine, 3UCLA, Department of Neurobiology, David Geffen School of Medicine, Interdepartmental PhD Program for Neuroscience, 4UCLA, Department of Neurobiology, David Geffen School of Medicine, Jules Stein Eye Institute, Department of Psychology and the Brain Research Institute
When exploring a visual scene, some objects preferentially grab our attention because of their intrinsic properties. In this study, we examined the responses of neurons in LIP to salient stimuli while naive animals performed a passive fixation task. We defined the salient stimulus as a color popout among stimuli of another color, either red or green. The animals started a trial by fixating a central spot, after which a circular array of 6 stimuli was flashed for 750 ms. The array was arranged so that only one of the stimuli was in the receptive field (RF). The animals had to keep fixation to be rewarded. We used 4 different conditions: the field condition, in which all the stimuli had the same color; the distractor condition, in which the salient stimulus was presented outside the receptive field, so a distractor was inside the RF; the popout condition, in which the salient stimulus was inside the RF; and the singleton condition, in which only a single stimulus was presented inside the RF.


VSS 2010 AbstractsTuesday Morning Talksstimulus was presented inside the RF. We recorded from 42 LIP neuronsand found that the mean response to a salient stimulus was significantlyhigher than the mean response to a distractor, but significantly lower thanthe mean response to a singleton. The time at which the popout activityrose above the distractor activity was relatively early suggesting that bottom-upinformation from early visual areas converges at LIP. Interestingly,there was a tight correlation in response to the popouts and distractors ofsimilar colors, suggesting gain control. We also found that some LIP neuronsprefer a particular color, but these neurons still had elevated responsesto a popout compared to a distractor, consistent with the presence of gaincontrol. Together these results indicate that LIP highlights salient stimulieven when they are task irrelevant.Acknowledgement: Klingenstein Fund, Alfred P. Sloan Foundation, McKnight Foundationand the National Eye Institute (R01EY019273-01).52.17, 12:30 pmVisual responses of the dorsomedial area V6A to the presentationof objects to be graspedPatrizia Fattori 1 (patrizia.fattori@unibo.it), Annalisa Bosco 1 , Rossella Breveglieri 1 ,Claudio Galletti 1 ; 1 Department of Human and General Physiology, University ofBolognaThe medial posterior parietal area V6A has been recently shown to encodethe different types of grips used to grasp objects of different shapes (Fattoriet al., Journal Neurosci, in press). As V6A contains many neuronsactivated by visual stimulations (Galletti et al., 1996; 1999), and receives adirect visual input from the extrastriate visual area V6 (Galletti et al., 2001),the aim of the present study was to ascertain whether cells in V6A encodethe visual features of the objects to be grasped. 153 neurons were recordedfrom 2 monkeys trained to perform reach-to-grasp movements to objectswith different shapes: ball, handle, ring, plate, stick-in-groove. The monkeysfixated a LED, one object was illuminated for 500 ms, then, after avariable delay (0.5- 2s) the animal reached and grasped the same objectin the dark. About 70% of V6A cells (109/153) showed visual responsesto object presentation; 80% of these visual neurons (88/109) showed alsoreach-to-grasp-related discharges. About 30% of visual neurons displayedselectivity for an object or a set of objects (31/109), and half of these cells(17/31) showed reach-to-grasp responses modulated by the type of gripused to grasp them. At population level, the strenght of neural modulationsto the visual features of objects to be grasped is similar to that for coding thegrip postures suitable for grasping these objects. From these data it turnsout that most of V6A neurons are visually driven by the objects presentedin peripersonal space, with neurons discriminating the object type, andneurons able to code both object types and grip types. We conclude thatarea V6A is a visuomotor area of the dorsomedial visual stream involved incoding both the execution of reach-to-grasp actions and the visual featuresof objects to be grasped.Acknowledgement: MIUR, FP6-IST-027574-MATHESIS, Fondazione del Monte di Bolognae RavennaAttention: Object attention and objecttrackingTuesday, May 11, 11:00 - 12:45 pmTalk Session, Royal Ballroom 4-5Moderator: Todd Horowitz52.21, 11:00 amBeam me up, Scotty! 
52.21, 11:00 am
Beam me up, Scotty! Exogenous attention teleports but endogenous attention takes the shuttle
Ramakrishna Chakravarthi1,2 (chakravarthi@cerco.ups-tlse.fr), Rufin VanRullen1,2; 1Universite de Toulouse, UPS, Centre de Recherche Cerveau & Cognition, France, 2CNRS, CerCo, Toulouse, France
Analyzing a scene requires shifting attention from object to object. Several studies have attempted to determine the speed of these attentional shifts, coming up with various estimates ranging from two to thirty shifts per second. The discrepancy among these estimates is likely a result of several factors, including the type of attention, cue and stimulus processing times, eccentricity, and distance between objects. Here, we adapt a method pioneered by Carlson et al. (2006) that directly measures attentional shift times. We present 10 ‘clocks’, with single revolving hands, in a ring around fixation. The observers are asked to report the hand position on one of the clocks at the onset of a transient cue. We use different combinations of exogenous and endogenous cuing to determine shift times for both types of attention. In experiment 1, we first endogenously cue a clock with a central arrow. While the observer attends that clock, on some trials we cue the same clock exogenously to evaluate ‘baseline’ processing time, and on other trials we exogenously cue another clock at a variable distance (1, 2 or 5 clocks away) to determine the shift time for exogenous attention. Similarly, in experiment 2, we exogenously cue one clock and ask observers to either report the observed time (baseline) or (in other blocks) endogenously shift their attention to another clock at a variable distance from the cued clock to determine the shift time for endogenous attention. In agreement with previous studies, our results reveal that endogenous attention is much slower than exogenous attention (endogenous: 250-350 ms; exogenous: 100-150 ms). Surprisingly, the dependence of shift time on distance was minimal for exogenous attention, whereas it was several times higher for endogenous attention. This qualitative difference suggests distinct neural mechanisms for the two modes of attention.
Acknowledgement: EURYI, ANR 06JCJC-0154
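To illustrate how a shift time can be derived in this clock paradigm, here is a minimal sketch under assumed display parameters (a one-revolution-per-second hand; the reported positions are hypothetical, and this is not the authors' analysis code):

import numpy as np

REV_PER_SEC = 1.0  # assumed hand speed: one revolution per second
MS_PER_DEG = 1000.0 / (360.0 * REV_PER_SEC)  # elapsed time per degree of rotation

def lag_ms(reported_deg, actual_deg_at_cue):
    # Lag between the hand position a subject reports and its true position at cue onset.
    delta = (np.asarray(reported_deg, dtype=float) - actual_deg_at_cue) % 360.0
    return delta * MS_PER_DEG

# Shift time = mean lag when attention must move to the cued clock,
# minus the 'baseline' lag when the cued clock is already attended.
baseline = lag_ms([40, 35, 45], 0).mean()
shifted = lag_ms([130, 120, 125], 0).mean()
print(f"estimated shift time: {shifted - baseline:.0f} ms")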
52.22, 11:15 am
Reward Driven Prioritization Modulates Object-based Attention in Human Visual Cortex
Jeongmi Lee1 (jeongmi0123@gmail.com), Sarah Shomstein1; 1Department of Psychology, George Washington University
Most of the recent evidence suggests that stimuli that are rewarded strongly attract visual attention and consequently modulate neural activity in the visual cortex. This raises the possibility that reward and attentional systems in the brain are greatly interconnected. However, to date, control mechanisms of attentional and reward systems have been investigated independently, and the nature of this relationship remains poorly understood. To investigate the neural mechanisms of reward and attention, using event-related fMRI, we employed a variant of the Egly, Driver and Rafal paradigm complemented with three different monetary reward schedules: (i) reward delivered randomly to either the same- or different-object target; (ii) higher reward delivered to the same-object target; and (iii) higher reward delivered to the different-object target. Since the exact same visual stimuli are presented in all three experiments, any differences in neural activity can only be attributed to the reward manipulation. It was observed that reward schedule exclusively modulated activation in the early visual areas. The BOLD response was enhanced for the same-object location as compared to the different-object location when the reward schedule was biased toward the same-object location (same as the traditional object-based effect). On the contrary, a reward schedule biased toward the different-object location reversed the traditional object-based effect, exhibiting enhanced BOLD activation for the different-object location as compared to the same-object location. Behavioral results also supported the reward-based modulation effect, as evidenced by faster RTs for object locations with higher reward, independently of whether such a location was in the same or different object. Importantly, the magnitude of the object-based effect was not modulated differentially by reward schedule (neither behaviorally nor in BOLD response). These results indicate that reward priority exclusively guides attention, and suggest the possibility that the control mechanisms of reward and attentional systems in the brain are interdependent.

52.23, 11:30 am
Probing the distribution of attention to targets and distractors in multiple object tracking
Edward Vogel1 (vogel@darkwing.uoregon.edu), Andrew McCollough1, Trafton Drew1, Todd Horowitz2; 1Department of Psychology, University of Oregon, 2Harvard Medical School
How is attention allocated during multiple object tracking (MOT)? In previous research, we demonstrated a significant enhancement of the anterior N1 component (150 ms post-stimulus) for task-irrelevant probes on targets relative to distractors (Drew et al., 2009). We argued that this reflects attentional enhancement for targets during MOT. Here we use this ERP component to study how attentional allocation responds to various tracking challenges. In Experiment 1, observers tracked 2 targets among 2, 4, or 6 distractors. The anterior N1 amplitude to targets increased relative to distractors as distractor load increased. In Experiment 2, observers tracked 2, 3, or 4 targets among 6 distractors. Here the target-distractor amplitude difference decreased as target load increased. Note that both of these manipulations vary the density of the display, yet the attentional system responds differently depending on whether additional items are targets or distractors. Increased distractor load leads to increased attentional focus on targets, while increased target load tends to reduce this focus. We suggest that the first effect reflects a strategic decision to increase the attentional allocation to targets faced with greater threat from distractors. In contrast, the second effect presumably reflects failures of attention as attention is spread more thinly across targets. These results suggest that during multiple object tracking attention is focused more tightly on targets as spacing decreases, but this strategy is inhibited by increased target load. Thus the attentional system responds flexibly and intelligently to protect targets from distractor interference.
Acknowledgement: NIMH

52.24, 11:45 am
Object attention sharpens the tuning of the perceptual template and interacts with task precision
Barbara Dosher1 (bdosher@uci.edu), Songmei Han1,2, Zhonglin Lu3; 1Department of Cognitive Sciences, University of California, Irvine, CA 92697, USA, 2The Apollo Group, Scottsdale, AZ 85251, USA, 3Department of Psychology, University of Southern California, Los Angeles, CA 85251, USA
The identification of two attributes of a single object exceeds the identification of the same attributes, one in each of two objects. If focusing attention on one object narrows the tuning of the perceptual template, the effect should be magnified when the similarity of the alternatives falls on the rapidly changing portion of the template, where performance is most sensitive to changes in tuning. Recent results suggest that attention effects depend on discrimination precision. The goal of the current project was to extend the taxonomy of attention by quantitatively examining the interaction between focusing attention and judgment precision. Observers made moderately precise judgments of the orientation (±10°) and phase (center light/dark) of one Gabor object, or the orientation of one and the phase of another, in six levels of external noise. The objects appeared at 7 deg eccentricity left and right of fixation. The family of contrast psychometric functions in different external noises showed object attention effects at all contrasts, with a magnitude that varied considerably across observers. An elaborated perceptual template model, the ePTM (Jeon, Lu, & Dosher, 2009), that deals with non-orthogonal stimuli accounts for the full family of contrast psychometric functions in both single-object and dual-object conditions for these moderately precise discriminations, providing a direct test of template sharpening. The ePTM framework provides a systematic account of object attention and the joint effects of external noise, contrast, and orientation difference, with object attention resulting in narrower tuning and therefore higher asymptotic performance across external noise levels and a reduced effect of external noise, as suggested by Liu, Dosher, & Lu (2009). Object attention affects the tuning of the template and excludes external noise, with its impact dependent upon judgment precision. The attention-precision framework provides an explanation of the variation in the magnitude of attention effects in different tasks.
Acknowledgement: Funded by 5R01MH81018 and by the AFOSR
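As a simplified illustration of the template-model family referred to above, the sketch below implements the original perceptual template model of Lu and Dosher, of which the ePTM is an elaboration for non-orthogonal stimuli; all parameter values are illustrative only, and this is not the authors' model code:

import numpy as np

def ptm_dprime(c, n_ext, beta, gamma, n_mul, n_add):
    # d' for signal contrast c embedded in external noise of contrast n_ext.
    # Template sharpening appears as a larger gain beta; external-noise
    # exclusion appears as a reduced effective n_ext.
    s = (beta * np.asarray(c, dtype=float)) ** gamma
    return s / np.sqrt(n_ext ** (2 * gamma) + (n_mul * s) ** 2 + n_add ** 2)

# Dual-object attention costs could be modeled as a reduced beta (broader template):
contrasts = np.linspace(0.01, 1.0, 5)
print(ptm_dprime(contrasts, n_ext=0.1, beta=2.0, gamma=2.0, n_mul=0.2, n_add=0.01))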
52.25, 12:00 pm
Predictability matters for multiple object tracking
Todd Horowitz1,2 (toddh@search.bwh.harvard.edu), Yoana Kuzmova1; 1Visual Attention Laboratory, Department of Surgery, Brigham and Women’s Hospital, 2Department of Ophthalmology, Harvard Medical School
Most accounts of multiple object tracking (MOT) suggest that only the spatial arrangement of objects at any one time is important for explaining performance. In contrast, we argue that observers predict future target positions. Previously this proposition was tested by studying the recovery of targets after a period of invisibility (Fencsik, Klieger, & Horowitz, 2007; Keane & Pylyshyn, 2006). Here, we test the predictive hypothesis in a continuous tracking paradigm. In two experiments, we asked observers to track three out of twelve moving disks for three to six seconds, and varied the average turn angle. We held speed constant at 8°/s, but direction for each disk changed with probability .025 on each 13.33 ms frame. Observers marked all targets at the end of the trial. Experiment 1 used turn angles of 0°, 30°, and 90°, while Experiment 2 used 0°, 15°, 30°, 45°, 60°, 75°, and 90°. Turn angle was fixed for all objects within a trial but varied across trials. In both experiments, accuracy was maximal at 0° and declined as turn angle increased (Exp 1: p = .001; Exp 2: p = .001). In Experiment 2, the steepest decline in accuracy was from 0° to 30°, while accuracy was roughly constant from 45° to 90°. These data demonstrate that it is easier to track predictably moving targets. Since velocity, density, and other factors known to affect MOT performance were constant, this suggests that observers predict target motion online to improve tracking. Furthermore, the pattern of data in Experiment 2 is compatible with a model in which the visual system assumes that target trajectories will vary only within a narrow 30° band.
Acknowledgement: NIH MH65576
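The motion statistics described above are straightforward to simulate; this sketch (display geometry and defaults are assumptions, not the authors' stimulus code) generates a single disk trajectory with the stated speed, frame duration, and per-frame turn probability:

import numpy as np

def trajectory(n_frames, turn_deg, speed=8.0, p_turn=0.025, frame_ms=13.33):
    # Each 13.33 ms frame the disk turns by +/- turn_deg with probability p_turn;
    # speed (deg/s) is constant, so turn_deg = 0 yields a fully predictable path.
    rng = np.random.default_rng()
    heading = rng.uniform(0.0, 2.0 * np.pi)
    step = speed * frame_ms / 1000.0          # displacement per frame (deg)
    pos = np.zeros((n_frames, 2))
    for t in range(1, n_frames):
        if rng.random() < p_turn:
            heading += np.deg2rad(turn_deg) * rng.choice([-1, 1])
        pos[t] = pos[t - 1] + step * np.array([np.cos(heading), np.sin(heading)])
    return pos

# e.g. a 4-second trial with 30-degree turns:
path = trajectory(n_frames=300, turn_deg=30)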
52.26, 12:15 pm
Splitting attention over multiple objects
Steven Franconeri1 (franconeri@northwestern.edu), Sarah Helseth1, Priscilla Mok2; 1Dept of Psychology, Northwestern University, 2Dept of Psychology, Brown University
We often need to deal with multiple objects at once, when we monitor for changes in a set of objects, compare the features or locations of multiple objects, or store the appearances of objects in memory. In some tasks, juggling multiple objects might require sequential processing, while in others it may be possible to draw information from multiple objects simultaneously. In a series of experiments using both static and moving objects, we explore the underlying mechanism and limits of selecting multiple objects. First, we show that simultaneous selection can occur. Participants were asked to mentally mark a set of locations in an array that would contain a search target, and task accuracy suggested that they could search exclusively through those locations in a later display. But when the search task was made more difficult by making targets more featurally similar to distractors, fewer locations could be marked. This result suggests that marking locations entails encoding information from those locations (Awh & Jonides, 2001), and that tougher searches require selecting fewer locations, which cannot be recovered. Second, we show that multiple locations are not encoded as a constellation that relies on shape memory. Participants searched through several marked locations, and performance was not impaired by adding a shape memory dual task. A control task showed that adding an identical dual task to a single shape memory task did impair performance, suggesting that marking does not rely on shape memory. A third set of experiments using multiple object tracking tasks suggests that once objects are selected, they can move and selection can be maintained with no additional cost. Finally, we argue that in both location marking and multiple object tracking tasks the key performance- and capacity-limiting factor is the spacing among objects.

52.27, 12:30 pm
Chasing vs. Stalking: Interrupting the Perception of Animacy
Tao Gao1 (tao.gao@yale.edu), Brian J. Scholl1; 1Perception & Cognition Lab, Department of Psychology, Yale University
Visual experience involves not only physical features such as color and shape, but also higher-level properties such as animacy and goal-directed behavior. Perceiving animacy is an inherently dynamic experience, in part because agents’ goals and mental states may be constantly in flux, unlike many of their physical properties. How does the visual system maintain and update representations of agents’ goal-directed behavior over time and motion? The present study explored this question in the context of a particularly salient form of perceived animacy: chasing, in which one shape (the ‘wolf’) pursues another shape (the ‘sheep’). The participants themselves controlled the movements of the sheep, and the perception of chasing was assessed in terms of their ability to avoid being caught by the wolf, which looked identical to many moving distractors, and so could be identified only by its motion. In these experiments the wolf’s pursuit was periodically interrupted by short intervals in which it did not chase the sheep. When the wolf moved randomly during these interruptions, the detection of chasing was greatly impaired. This could be for two reasons: decreased evidence in favor of chasing, or increased evidence against chasing. These interpretations were tested by having the wolf simply remain static (or jiggle in place) during the interruptions (among distractors that behaved similarly). In these cases chasing detection was unimpaired, supporting the ‘evidence against chasing’ model. Moreover, random-motion interruptions only impaired chasing detection when they were grouped into fewer temporally extended chunks rather than being dispersed into a greater number of shorter intervals. These results reveal (1) how perceived animacy is determined by the character and temporal grouping (rather than just the brute amount) of ‘pursuit’ over time; and (2) how these temporal dynamics can lead the visual system to either construct or actively reject interpretations of chasing.


Tuesday Morning Posters

Memory: Objects and features in working and short-term memory
Royal Ballroom 6-8, Boards 301–316
Tuesday, May 11, 8:30 - 12:30 pm

53.301 Dual Memory Systems Store Direction of Motion Information for Multiple Moving Objects
Haluk Ogmen1,2 (ogmen@uh.edu), Christopher Shooner1, Srimant Tripathy3, Harold Bedell2,4; 1Department of Electrical & Computer Engineering, University of Houston, 2Center for Neuro-Engineering & Cognitive Science, University of Houston, 3Department of Optometry, University of Bradford, 4College of Optometry, University of Houston
Purpose: The ability to establish and maintain the identities of moving objects is essential to behavioral success, yet very little is known about the underlying mechanisms. The multiple-object tracking experimental paradigm (MOT-EP) has been used extensively for studying how attention, position and motion cues contribute to this task. Among the unresolved issues are the relative importance of motion information and the role of various memory mechanisms. We sought to quantify the capacity and the temporal dynamics of the memory systems involved in storing direction-of-motion information when viewing a multiple-object motion stimulus. Methods: Observers viewed three to nine objects in random linear motion and reported the motion direction of a cued object after motion ended. In three experiments, we (1) measured performance as a function of set size, (2) characterized the temporal dynamics of memory using seven cue delays ranging from 0 ms to 3 s, and (3) examined interactions between the dynamics of memory and the read-out processes by comparing performance with partial and full report. Results: Direction reports show a graded deterioration in performance with increased set size. This lends support to a flexible-capacity theory of MOT-EP. The temporal dynamics of memory follows an exponential function that decays within 1 s to a steady-state plateau above chance performance. This outcome indicates the existence of two complementary memory systems, one transient with high capacity and a second sustained with low capacity. For the transient high-capacity memory, retention capacity was equally high whether object motion lasted 5 s or 200 ms. We found a significant partial-report advantage, which provides further support for a rapidly decaying high-capacity memory. Conclusions: Our results show that dual memory systems store direction of motion information for multiple moving objects. This finding provides a possible reconciliation of seemingly contradictory results previously published in the literature.
Acknowledgement: NIH R01 EY018165
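The reported temporal dynamics amount to an exponential decay toward a steady-state plateau; as a sketch, such a function can be fit to accuracy data as follows (numpy/scipy assumed; the delay and accuracy values below are invented for illustration):

import numpy as np
from scipy.optimize import curve_fit

def decay(t, start, plateau, tau):
    # Proportion correct as the transient store decays to the sustained plateau.
    return plateau + (start - plateau) * np.exp(-t / tau)

# Hypothetical accuracy at seven cue delays spanning 0 ms to 3 s:
delays = np.array([0.0, 0.15, 0.3, 0.6, 1.0, 2.0, 3.0])   # seconds
acc = np.array([0.92, 0.85, 0.78, 0.70, 0.66, 0.65, 0.64])
(start, plateau, tau), _ = curve_fit(decay, delays, acc, p0=[0.9, 0.6, 0.5])
print(f"decays with tau = {tau * 1000:.0f} ms to a plateau of {plateau:.2f}")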
53.302 Feature coactivation in object file reviewing: Response time distribution analyses
Jun Saiki1 (saiki@cv.jinkan.kyoto-u.ac.jp); 1Graduate School of Human and Environmental Studies, Kyoto University
Object file studies using the object reviewing paradigm revealed that object features can be accessed through addressing their spatiotemporal locations. In the divided attention literature, evidence for coactivation of different features has been reported using the redundant signals paradigm. These two lines of research lead to the question of whether object files play significant roles in integrating features, which remains unsolved because evaluation of feature coactivation requires response time (RT) distribution analysis, never done with the object reviewing paradigm. The current study conducted RT distribution analyses with a new task combining the object reviewing and redundant signals paradigms. Observers saw a preview display composed of two colored boxes containing a letter, above and below fixation, followed by a linking display. Then they saw a target display with a single object, and judged as quickly as possible whether the target contained the color or shape of the preview objects. For match trials, features were either at the same object (SO) or a different object (DO), as in the reviewing paradigm. The type of match was color, shape, or color-and-shape (object), as in the redundant signals paradigm. In the object condition, a mixed condition combining one SO and one DO feature was also included. Mean RT revealed a significant redundancy gain in both SO and DO conditions, and an object-specific preview benefit in all match type conditions except for the color condition. A race model inequality test using RT distributions showed evidence for feature coactivation in SO and DO conditions with similar magnitudes, not supporting the idea that feature coactivation is modulated by access to an object file. Further analysis with ex-Gaussian distributions for the color-shape conditions revealed that faster and slower RT components were modulated by the match of a single feature and of the feature combination, respectively, suggesting that object file reviewing is feature-based, but response selection is sensitive to feature combinations.
Acknowledgement: Supported by MEXT Grant-in-Aid #21300103, and Global COE D07 to Kyoto University
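The race model inequality test mentioned above compares the redundant-condition RT distribution against the bound given by the summed single-feature distributions (commonly attributed to Miller, 1982); a minimal sketch with hypothetical RT samples, not the study's data or analysis code:

import numpy as np

def ecdf(rts, t):
    # Empirical CDF of a sample of response times, evaluated at times t.
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t, side="right") / rts.size

def race_violation(rt_both, rt_color, rt_shape, t_grid):
    # Race model inequality: F_both(t) <= F_color(t) + F_shape(t).
    # Positive values indicate coactivation of the two features.
    bound = np.minimum(1.0, ecdf(rt_color, t_grid) + ecdf(rt_shape, t_grid))
    return ecdf(rt_both, t_grid) - bound

# Hypothetical RTs (ms) for redundant (color-and-shape) vs. single-feature matches:
t = np.arange(200, 1000, 10)
viol = race_violation(rt_both=[380, 410, 450], rt_color=[480, 520, 560],
                      rt_shape=[470, 515, 580], t_grid=t)
print("violation at any t:", bool((viol > 0).any()))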
53.303 Dual processes in the recognition of objects in visual working memory
Yu-Chen Tseng1 (yuchtzeng@gmail.com), Cheng-Ta Yang2, Yei-Yu Yeh1; 1Department of Psychology, National Taiwan University, 2Department of Psychology and Institute of Cognitive Science, National Cheng Kung University
Empirical evidence has suggested that two independent retrieval processes, intentional recollection and an automatic process based on familiarity, operate in retrieving long-term memory. Yonelinas (1994) and Wixted (2007) proposed different models describing the relationship between recollection and familiarity: the dual process signal detection (DPSD) model and the unequal variance signal detection (UVSD) model. The major difference between these two models is the purity of processes. In the DPSD model, recollection and familiarity do not jointly contribute to a single retrieval performance. In contrast, the UVSD model suggests that the two processes are used simultaneously while retrieving. The phenomenon of change blindness has revealed that the visual information around us is not always encoded or functionally available. Thus, the change detection paradigm has been widely used as an indirect method to measure whether visual information is functionally available in visual working memory. Previous studies have shown that recognition of pre-change objects is worse than recognition of a post-change object (Beck & Levin, 2003; Mitroff, Simons, & Levin, 2004). However, the memory retrieval process underlying change detection, or visual working memory, has not yet been exhaustively discussed. In this study, we investigated the different retrieval processes involved in recognizing a pre-change object after change detection and the purity of processes in retrieval. We adopted a Bayesian analysis to estimate the proportion of each process involved in recognition under successful and failed change detection. Our results showed that the UVSD model provides a better fit than the DPSD, suggesting the simultaneity of two processes in retrieving visual working memory. Moreover, the mean parameter value of recollection was higher under successful detection than under detection failure. In contrast, there was no difference in familiarity between successful and failed change detection. Recollection is involved in successful change detection.
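The two models can be summarized by their predicted hit rates; the sketch below states the standard DPSD and UVSD equations (parameter values are illustrative, and this is not the authors' fitting code):

from scipy.stats import norm

def dpsd_hit_rate(d_prime, criterion, recollection):
    # Dual-process model: an 'old' response arises from all-or-none recollection
    # (probability R) or, failing that, from equal-variance familiarity.
    return recollection + (1 - recollection) * norm.sf(criterion - d_prime)

def uvsd_hit_rate(d_prime, criterion, sigma_old):
    # Unequal-variance model: a single familiarity process whose 'old'
    # distribution has standard deviation sigma_old (> 1).
    return norm.sf((criterion - d_prime) / sigma_old)

# Both predict the same false-alarm rate, norm.sf(criterion), so the models
# differ only in how the hit rate grows with memory strength:
print(dpsd_hit_rate(1.0, 0.5, recollection=0.3),
      uvsd_hit_rate(1.0, 0.5, sigma_old=1.25))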


53.304 Spatio-Temporal Working Memory is Impaired by Multiple Object Tracking
Yuming Xuan1 (xuanym@psych.ac.cn), Hang Zhang2, Xiaolan Fu1; 1State Key Laboratory of Brain & Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, 2Department of Psychology, New York University, and Center for Neural Science, New York University
Our previous study (Zhang, Xuan, Fu, & Pylyshyn, in press) showed that object-location working memory (WM) was impaired by a secondary multiple object tracking (MOT) task but non-spatial visual WM could survive the MOT task. Thus there might be no competition between perceptual objects selected and objects maintained in visual WM. Considering that spatio-temporal WM and short-term object-location tasks might use different memory mechanisms (Zimmer, Speiser, & Seidler, 2003), competition between the perception and attention systems was examined by looking at the dual-task interference between a spatio-temporal WM task (Corsi Block Task, CBT) and a secondary MOT task in the present study. In Experiment 1, CBT performance was shown to be impaired by the secondary MOT task, while passively viewing the MOT scene but tracking none of the objects did not result in any damage to CBT performance. In Experiment 2, we found that tracking per se could be harmful to spatio-temporal WM. In Experiment 3, results showed that tracking more targets or tracking at higher speed caused more impairment to spatio-temporal WM. In Experiment 4, a low spatio-temporal WM load (2 or 3 locations) was also shown to be impaired by the MOT task. In sum, spatio-temporal WM seemed to be vulnerable to a secondary MOT task. In a CBT task, remembering a temporal location sequence requires shifting spatial attention between the locations in rehearsal, and impairment will be present when this is interrupted by tracking. Furthermore, in all experiments, MOT performance was shown to be unimpaired by the spatio-temporal WM load, indicating that the perceptual system has priority over the memory system in accessing limited resources.
Acknowledgement: 973 Program (2006CB303101), National Natural Science Foundation of China (90820305, 30600182)

53.305 Feature binding across visual and manual domains: Evidence from a VSTM study
Raju Sapkota1,2 (Raju.Sapkota@anglia.ac.uk), Shahina Pardhan1, Ian van der Linde1,3; 1Vision & Eye Research Unit, Postgraduate Medical Institute, Anglia Ruskin University, Cambridge CB1 1PT, UK, 2Department of Optometry & Ophthalmic Dispensing, Anglia Ruskin University, Cambridge, CB1 1PT, UK, 3Department of Computing & Technology, Anglia Ruskin University, Chelmsford CM1 1SQ, UK
In this study, binding in visual short-term memory (VSTM) across visual and manual domains was investigated. Six human observers performed a yes-no recognition task for object appearance in three experimental conditions (fully counterbalanced) in which unfamiliar, nonverbal 1/f noise discs served as stimuli. In the memory display, four stimuli (each subtending 2 deg) were presented sequentially (each for 850 ms) at random spatial positions. Following a 1000 ms blank interval, a test stimulus was presented. In condition 1, observers executed hand movements (spatial tapping) during the memory display by touching a pointer on a graphics tablet at a position corresponding to the screen coordinate of each stimulus as it appeared. The test stimulus was presented at one of the coordinates used in the preceding memory display. Condition 2 was identical to condition 1, except that spatial tapping was not performed. In condition 3, both memory and test stimuli were presented at (different) random coordinates; observers performed spatial tapping during the memory display (as in condition 1), except that the positions of test stimuli did not correspond to preceding hand/screen positions. In all three experimental conditions, the cursor was invisible. Observers completed a training session in which the cursor was visible to associate graphics tablet coordinates with screen coordinates. Performance, measured in d’, was significantly greater in condition 1 than in conditions 2 [F(1,5)=20.35, p


In the coloured conditions, the identity and colour of one item changed. Synaesthetes showed superior performance to controls in the white letter condition at the smaller set size, where their accuracy was equivalent to the coloured letter condition. In the non-letter condition, where no synaesthetic colours were elicited, performance did not differ between the groups, ruling out a baseline difference in change-detection ability. Thus, synaesthetic colours can act as an additional cue to the presence of a change, akin to a real colour change.
Acknowledgement: National Health & Medical Research Council, Menzies Foundation, Australian Research Council

53.310 Strategic Control of Visual Working Memory for Global and Local Features
Michael Patterson1 (mdpatterson@ntu.edu.sg), Wan Ting Low1; 1Division of Psychology, School of Humanities and Social Sciences, Nanyang Technological University
Previous studies have demonstrated a bias to focus on global over local features in both visual attention and visual working memory. In two new studies, we created novel stimuli too complex to be remembered in every detail. We examined the effect of varying presentation delays of post-stimulus instructions that directed participants to focus on global or local features. We predicted that instructions to focus on specific features would reduce working memory load, but at the cost of diminishing memory for other features of the stimulus. In study 1, participants viewed polygons made up of twenty lines which were grouped into four colors. The polygons were displayed for 1 sec, followed by a 0-4 sec delay. Next, instructions were given to focus on either a part (1/4 of the polygon), an object (1/2 of the polygon), or the whole polygon. After another 0-4 sec delay, participants selected between four images, only one of which matched the initial stimulus. Consistent with our previous research, instructions increased accuracy. However, instructions also influenced error types. Participants who had been instructed to focus on global features erroneously selected lures with a large change to only one part. Participants who had been instructed to focus on object-level properties erroneously selected lures with small changes to every part, yet kept the original global shape. In the second study, participants viewed Navon figures made up of polygons instead of letters. Lures contained either global changes, local changes, or both global and local changes. Instructions guided participants to focus on each of these levels. The presence of instructions led to a decrease in performance if the instructions were shown immediately after the stimulus, indicating that visual information must be consolidated within working memory before strategic control of focus can occur.

53.311 The time course of consolidation of ensemble feature in visual working memory
Hee Yeon Im1 (heeyeon.im@jhu.edu), Justin Halberda1; 1Johns Hopkins University
Collections of visual objects can be grouped and statistical properties of the group encoded as ensemble features. It is known that ensemble features can be represented from a group of multiple items in a very brief display. In the current study, we measured the time course of consolidation of average orientation into visual working memory and compared it to that of individual orientation. There were two separate blocks for individual orientation and ensemble orientation.
For both blocks, participants performed a change-detection task for the orientations of colored gratings, and shortly after the presentation of the memory array, pattern masks were presented to disrupt further consolidation (a method similar to Vogel et al., 2006). The stimulus onset asynchrony (SOA) was varied randomly from trial to trial. Half of the trials in a block included a change: either the orientation of one of the individual gratings (individual block) or the average orientation of one set of gratings (ensemble block). Participants indicated whether the two arrays were the same or different. The pattern of performance as a function of SOA for the individual block was consistent with a previous study reporting the consolidation of color for individual items (Vogel et al., 2006). Important and new, the pattern of performance as a function of SOA was identical across the individual and ensemble blocks. The rate of consolidation for the ensemble feature was comparable to that for the individual feature. This result suggests that ensemble features are extracted from an ensemble group just as an individual feature is extracted from a single object, with the same consolidation time required.

53.312 Ensemble statistics influence the representation of items in visual working memory
George Alvarez1 (alvarez@wjh.harvard.edu), Timothy Brady2; 1Department of Psychology, Harvard University, 2Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology
Influential models of visual working memory treat each item as an independent unit and assume there are no interactions between items. However, even in displays with simple colored circles there are higher-order ensemble statistics that observers can compute quickly and accurately (e.g., Ariely, 2001). An optimal encoding strategy would take these higher-order regularities into account. We examined how a specific ensemble statistic, the mean size of a set of items, influences visual working memory. Observers were presented with 400 individual displays consisting of three red, three blue, and three green circles of varying size. The task was to remember the size of all of the red and blue circles, but to ignore the green circles (we assume that ignoring the green circles requires the target items to be selected by color; Huang, Treisman, & Pashler, 2007; Halberda, Sires, & Feigenson, 2006). Each display was briefly presented, then disappeared, and then a single circle reappeared in black at the location that a red or blue circle had occupied. Observers used the mouse to resize this new black circle to the size of the red or blue circle they had previously seen. We find evidence that the remembered size of each individual item is biased toward the mean size of the circles of the same color. In Experiment 2, the irrelevant green circles were removed, making it possible to select the red and blue items as a single group, and no bias toward the mean of the color set was observed. Combined, these results suggest that items in visual working memory are not represented in isolation.
Instead, observers use constraints from the higher-order ensemble statistics of the set to reduce uncertainty about the size of individual items and thereby encode the items more efficiently.
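The reported bias is naturally described as a weighted average of an item's true size and its set mean; a minimal sketch with an assumed weight and hypothetical sizes (one possible model, not the authors' analysis):

import numpy as np

def remembered_size(item_sizes, w=0.7):
    # Each item's report is pulled toward the mean of its color set: a weighted
    # average of the true size and the set mean (w = 1 would mean no bias).
    sizes = np.asarray(item_sizes, dtype=float)
    return w * sizes + (1 - w) * sizes.mean()

# Hypothetical radii (deg) of the three red circles on one trial:
print(remembered_size([0.4, 0.8, 1.5]))  # reports compress toward the set mean 0.9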


53.313 Complexity and similarity in visual memory
Benoit Brisson1 (benoit.brisson.1@ulaval.ca), Michel-Pierre Coll1, Sébastien Tremblay1; 1École de Psychologie, Université Laval
Retaining information in an active and accessible state over the short term is critical for any cognitive activity. It has been estimated that immediate visual memory (also known as short-term memory or working memory) can maintain only about four objects simultaneously. However, the basic determinants of this capacity limit remain a matter of debate. For example, whether capacity is reduced as object complexity increases is yet unresolved. On the other hand, many researchers agree that in change detection tasks – which are widely used to investigate capacity limits of immediate memory – similarity between the memory and the test items (memory-test similarity) negatively affects change detection performance. In contrast, similarity between memory items (memory-array similarity) has recently been shown to benefit performance, at least for simple objects. In the present study, similarity continua were used to manipulate memory-test and memory-array similarity for both simple and complex objects, in order to thoroughly examine the impact of complexity and memory-array similarity on the retention of information in memory. Results show that the number of memory representations is fixed across object complexity, but that their resolution (or precision) decreases as complexity increases. In contrast, memory-array similarity increases mnemonic resolution, an increase that even compensates for the deleterious effect of complexity.
Acknowledgement: Natural Sciences and Engineering Research Council of Canada (NSERC)

53.314 The effect of grouping on visual working memory
Seongmin Hwang1 (bewithsm@gmail.com), Sang Chul Chong1,2; 1Graduate Program in Cognitive Science, Yonsei University, 2Department of Psychology, Yonsei University
The purpose of our study was to investigate the effect of grouping on visual working memory using a change-detection task. In Experiment 1, we presented a sample display with either 2, 4 or 6 colored circles for 100 ms, followed by a blank period of 900 ms, and a test display until response. Two circles were connected by a line in the grouped condition, while a line was merely presented between two circles without connection in the non-grouped condition. Participants’ task was to detect a color change between the sample and the test display. The color was changed only for one circle and on 50% of trials. To report changes, participants had to press the left mouse button and indicate the location of the change. They reported no change by pressing the right mouse button. When we calculated the correctness in detection of color changes regardless of the correctness of locations, performance in the grouped condition did not significantly differ from the non-grouped condition. However, when we computed the correctness based on both color changes and locations, performance in the non-grouped condition was significantly better than in the grouped condition. If the visual system treated a grouped item as an object, changes in the grouped condition would have been less salient because only part of the object changed its color in this condition. We tested this hypothesis in Experiment 2. The potential location of changes was designated by presenting only one pair (grouped or non-grouped) at the test display. When participants knew the potential location of changes, their performance in the grouped condition did not significantly differ from that in the non-grouped condition. Our findings suggest that grouped items are treated as objects in visual working memory and that this grouping effect paradoxically causes a reduction of working memory capacity.
Acknowledgement: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education, Science and Technology (2009-0089090)

53.315 Don’t stop remembering: Motivational effects on visual short-term memory maintenance
Motoyuki Sanada1 (sanada@darwin.c.u-tokyo.ac.jp), Koki Ikeda1, Kenta Kimura1, Toshikazu Hasegawa1; 1The University of Tokyo
Although it has been shown that motivation (e.g., monetary incentive) can enhance short-term memory capacity, little is known about how and where this effect occurs. Recent progress in visual short-term memory (VSTM) research suggests at least two major possibilities (Awh et al., 2006). That is, (1) motivation facilitates attentional gating at VSTM encoding, and/or (2) motivation supports VSTM maintenance by keeping sustained attention active. Previous studies, however, have failed to distinguish these two, since they manipulated motivational factors before encoding, and therefore possibly modulated the two processes simultaneously. Thus, the goal of the current study was to unravel this confound and examine the plausibility of the second account in particular. A VSTM task (Vogel & Machizawa, 2004) was combined with a retro-cueing paradigm (e.g., Lepsien & Nobre, 2006). In each trial, monetary incentive cues appeared during the 1,000 ms retention period (500 ms after the memory array) as two pure tones that differed in frequency, which indicated the high and low rewards that participants could obtain in that trial if they answered correctly. In order to prevent random responses, especially in the low motivation condition, we assigned a negative reward (punishment) for incorrect answers in both conditions. Results showed that VSTM performance (percentage of correct answers) was significantly facilitated in the high reward condition relative to the other, providing the first evidence that motivation can affect VSTM maintenance directly. Possible neural bases for this effect will be discussed with the data from a follow-up ERP study.

53.316 The role of attention in working memory for emotional faces
Paul MJ Thomas1 (pspa60@bangor.ac.uk), Margaret C Jackson1, David EJ Linden1, Jane E Raymond1; 1School of Psychology, University of Bangor
Face identities are better remembered in working memory (WM) when faces are angry versus happy or neutral (Jackson et al., 2009: JEPHPP, vol 35).
This ‘angry benefit’ correlates specifically with activity in the globus pallidus (Jackson et al., 2008: PLoS One, vol 3), part of the basal ganglia that has been suggested to act as an attentional filter allowing only relevant information into WM. This finding is broadly consistent with evidence that threatening faces capture attention more efficiently than non-threatening faces. Could the angry face benefit in WM be due to greater attentional capture by angry versus happy/neutral faces? To investigate this, we presented a single emotional face (angry or happy) among one or three neutral faces in a WM encoding display. In other (non-singleton) conditions, all faces shared the same expression (as in the original studies). WM for face identity was tested 1000 ms later by asking whether a single probe face was ‘present’ or ‘absent’ in the encoding display. WM was probed for emotional singletons and neutral others. If angry faces capture attention better than happy faces, enhanced WM for angry versus happy singletons, and poorer WM for neutral others when accompanied by an angry versus happy singleton, is expected. However, we found non-significant WM differences in these condition comparisons, suggesting that attentional capture does not underpin the original angry face benefit. Interestingly, at high load, WM was better when an angry singleton (1 angry among 3 neutral) versus an angry non-singleton (1 angry among 3 other angry) was probed, an effect not significant for happy faces. This suggests that angry but not happy singletons may be preferentially prioritised for WM selection via emotional grouping. The ability to isolate in WM an angry face from several other non-angry faces might reflect enhanced preparation to prioritize a threat response if required.

Perceptual learning: Mechanisms and models
Royal Ballroom 6-8, Boards 317–331
Tuesday, May 11, 8:30 - 12:30 pm

53.317 Greater focused attention to a task target leads to stronger task-irrelevant learning
Tsung-Ren Huang1 (tren@bu.edu), Takeo Watanabe1; 1Department of Psychology, Boston University
Mere exposure to task-irrelevant coherent motion leads to performance improvement on that motion (Watanabe et al., 2001). The underlying mechanism for such task-irrelevant perceptual learning (TIPL) has yet to be clarified. TIPL could arise as a result of distraction from a central task or attentional leakage from the location of a task target. We tested whether either of these possibilities is true. In Experiment 1, hierarchical letters (Navon, 1977) were presented at the center of a display. Participants (n=4) were asked to recognize either large compound or small component letters in a block design. A task-irrelevant 5% coherent motion display was presented in the periphery. Given a fixed high contrast, small (harder) letters induced stronger TIPL than large (easier) letters. In Experiment 2 (n=8), only small letters were used as targets for recognition, with two letter contrasts alternating across blocks. Given a fixed scale of task targets, low-contrast (harder) letters induced stronger TIPL than high-contrast (easier) letters. In Experiment 3 (n=9), the task was to recognize regular letters at the center, in large or small size, again using a block design. Given a fixed task difficulty, controlled by the staircase method throughout training, small letters did not induce weaker TIPL than large letters. In all the experiments training lasted five days.
Given that with a harder task the degree of involvement of focused attention is greater and the attentional window around the task targets is smaller (e.g., Ikeda & Takeuchi, 1975; Rees et al., 1997; Yi et al., 2004), our results cannot be explained by the mere involvement of the traditional concept of focused attention in task-irrelevant processing. The results are rather in accordance with a model in which a harder task at a central field more greatly boosts signals outside the window of focused attention and leads to greater TIPL.
Acknowledgement: This work is supported by NIH-NEI R21 EY018925, R01 EY015980-04A2, and R01 EY019466.

53.318 Different properties between reward-driven exposure-based and reward-driven task-involved perceptual learning
Dongho Kim1 (kimdh@bu.edu), Takeo Watanabe1; 1Department of Psychology, Boston University
It has been found that sensitivity to a visual feature is enhanced when the feature is repeatedly paired with reward (Seitz, Kim & Watanabe, 2009, Neuron). We call this type of learning reward-driven exposure-based perceptual learning (REPL). In a previous study (Kim, Seitz, Watanabe, 2008, VSS), we presented three different orientations (separated from each other by 60 deg) which were followed by reward at probabilities of 80% (positive contingency), 50% (zero contingency) and 20% (negative contingency), respectively. We found significant performance improvement for both the positive-contingency orientation and the zero-contingency orientation, but no significant improvement for the negative-contingency orientation. Given that PL occurs not only as a result of exposure (Watanabe, Sasaki and Nanez, 2001) but also of task involvement (Fahle & Poggio, 2002), a question arises as to whether reward-driven task-involvement PL (RTPL) occurs in the same way as REPL. To address this question, in the present study we trained a new group of four subjects with an operant conditioning procedure in which subjects performed an orientation discrimination task, and the reward was given only when the subject answered correctly. To compare these results with REPL, we conducted sensitivity tests before and after operant training. After training, we found significant performance improvement only for the positive-contingency orientation. These results suggest that the mechanisms underlying REPL and RTPL are different. One possible model is that when subjects were trained with the RTPL procedure, the learning of the 50% reward probability was inhibited by an attentional signal, whereas this inhibition did not occur when they were trained with the REPL procedure.
Acknowledgement: This research was supported by NIH-NEI (R21 EY018925, R01 EY015980-04A2, R01 EY019466) and NSF-CELEST (BCS-PR04-137).

53.319 Visual Learning with Reliable and Unreliable Features
Robert Jacobs1 (robbie@bcs.rochester.edu), A. Emin Orhan1, Melchi Michel2; 1Center for Visual Science, University of Rochester, 2Center for Perceptual Systems, University of Texas at Austin
Previous studies on sensory integration showed that people weight information based on a sensory cue or feature proportional to that feature’s reliability. However, these studies tell us little about the implications of feature reliability for perceptual learning. Here, we address this issue in the context of perceptual learning of binary classification tasks. We develop a Bayesian model that, unlike previous models, allows us to compute not just point estimates, but complete distributions over the weights associated with different features (via Markov chain Monte Carlo sampling of the weights of a logistic regressor). Using the model, we develop ideal observers for a simple two-dimensional binary classification task and for a binary pattern discrimination task that was used in Experiment 2 of Michel and Jacobs (2008). We find that the statistical information provided by the stimuli and corresponding class labels on a finite number of training trials strongly constrains the possible weight values associated with unreliable features, but only weakly constrains the weight values associated with reliable features. To test whether human observers are sensitive to these properties of the task environment, we apply the model to a human subject’s experimental data (the stimuli that the subject saw and the subject’s responses). We find that the subject was indeed sensitive to this statistical information. Additional analyses indicate that subjects showed sub-optimal learning performance because they tended to underestimate the magnitude of weights associated with reliable features. A possible explanation for this result is that people performing this task might be regularized learners with a strong bias toward small weight values. Alternatively, it may be that people are engaging in exploration of the weight space, rather than exploiting their potentially near-optimal knowledge regarding the weight values associated with visual features.
Acknowledgement: NSF research grant DRL-0817250
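One generic way to obtain full posterior distributions over logistic-regression weights, as the model above does, is random-walk Metropolis sampling; the sketch below is a plausible reconstruction under simple assumptions (Gaussian prior, synthetic data), not the authors' implementation:

import numpy as np

def sample_weights(X, y, n_samples=5000, step=0.1, prior_sd=10.0, seed=0):
    # Random-walk Metropolis sampling of the posterior over logistic-regression
    # weights. X: trials x features; y: binary class labels in {0, 1}.
    rng = np.random.default_rng(seed)

    def log_post(w):
        z = X @ w
        loglik = np.sum(y * z - np.logaddexp(0.0, z))   # Bernoulli log-likelihood
        return loglik - 0.5 * np.sum(w ** 2) / prior_sd ** 2  # Gaussian prior

    w = np.zeros(X.shape[1])
    lp = log_post(w)
    samples = np.empty((n_samples, w.size))
    for i in range(n_samples):
        proposal = w + step * rng.standard_normal(w.size)
        lp_prop = log_post(proposal)
        if np.log(rng.random()) < lp_prop - lp:         # accept/reject step
            w, lp = proposal, lp_prop
        samples[i] = w
    return samples

# Tiny synthetic example with one informative and one uninformative feature;
# the per-weight posterior spread shows how tightly the data constrain each one.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2))
y = (X[:, 0] + 0.2 * rng.standard_normal(200) > 0).astype(float)
post = sample_weights(X, y)
print("posterior SD per weight:", post[1000:].std(axis=0))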
53.320 Brain plasticity associated with supervised and unsupervised learning in a coherent-motion detection task
Mark W. Greenlee1 (mark.greenlee@psychologie.uni-regensburg.de), Katharina Rosengarth1, Tina Plank1; 1Experimental Psychology, University of Regensburg, Germany
We investigated the role of trial-by-trial feedback during training on the neural correlates of perceptual learning in a coherent-motion detection paradigm. Stimuli were four patches of randomly moving dots, presented simultaneously, one in each visual quadrant. Over six training sessions (with a total of 5340 trials per observer) subjects learned to detect coherent motion in a predefined quadrant. During training, half of our subjects received feedback after each response, indicating whether they were correct or incorrect on that trial, whereas the other subjects did not get feedback. We investigated whether the presence of feedback during training had an effect on learning success (performance, reaction times) and on the resultant BOLD response to motion stimuli presented within the trained quadrant (measured in three separate sessions). Behavioral data of 4 subjects showed improved performance with increasing practice. Feedback led to a significant benefit in performance and to lower reaction times. After training with feedback, subjects exhibited bilateral BOLD responses in hMT+ that first increased (from session 1 to 2) and then decreased (from session 2 to 3). Without feedback during training, the BOLD signal in hMT+ was reduced and showed a shallower, monotonic learning curve. These results point to a learning-specific alteration in the activity of MT neurons that selectively respond to coherent-motion stimuli. Trial-by-trial feedback enhanced performance and led to a different time course of the BOLD response over training.
Acknowledgement: BMBF Project 01GW0761: Brain plasticity and perceptual learning

53.321 Category Learning Produces the Atypicality Bias in Object Perception
Justin Kantner1 (jkantner@uvic.ca), James Tanaka1; 1Cognition and Brain Sciences Program, Department of Psychology, University of Victoria
When a morph face is produced with equal physical contributions from a typical parent face and an atypical parent face, the morph is judged to be more similar to the atypical parent. This discontinuity between physical and perceptual distance relationships, called the “atypicality bias” (Tanaka, Giles, Kremen, & Simon, 1998), has also been demonstrated with the object classes of birds and cars (Tanaka & Corneille, 2007). The present work tested the hypothesis that the atypicality bias is not a product of static physical properties of typical or atypical exemplars, but emerges only after the category structure of a given stimulus domain (and thus the nature of its typical members) has been learned. Participants were trained to discriminate between two categories of novel shape stimuli (“blobs”) with which they had no pre-experimental familiarity. Although typical and atypical blob exemplars appeared with equal frequency during category training, the typical blobs within a given family were structurally similar to one another, whereas the atypical blobs were dissimilar to each other and to the typical exemplars. The magnitude of the atypicality bias was assessed in a preference task administered pre- and post-training. The blobs elicited no bias prior to category training, but, as predicted, elicited a significant atypicality bias after training. This change in object perception with category learning is considered from the standpoint of theories that represent item similarities in terms of the relative locations of items in a multi-dimensional space. We propose that category learning alters the dimensions of the space, effectively increasing the perceptual distance between the morph and its typical parent, with the result that the morph appears more similar to its atypical parent than to its typical parent.

53.322 Cholinergic enhancement augments the magnitude and specificity of perceptual learning in the human visual system: a pharmacological fMRI study
Ariel Rokem1 (arokem@berkeley.edu), Michael Silver1,2; 1Helen Wills Neuroscience Institute, University of California, Berkeley, 2School of Optometry, University of California, Berkeley
The neurotransmitter acetylcholine (ACh) has previously been shown to play a critical role in cognitive processes such as attention and learning.
Inthis study, we examined the role of ACh in perceptual learning (PL) of amotion direction discrimination task in human subjects. We conducted adouble-blind, placebo-controlled, crossover study, in which each participanttrained twice on the task, once while cholinergic neurotransmissionwas pharmacologically enhanced by the cholinesterase inhibitor donepeziland once while ingesting a placebo. Relative to placebo, donepezilincreased the improvement in direction discrimination performance dueto PL. Furthermore, PL under the influence of donepezil was more specificfor the direction of motion that was discriminated during training and forthe visual field locations in which training occurred. In order to study theneural mechanisms underlying these effects, we measured fMRI responsesto either trained or untrained directions of motion before and after training,in both placebo and drug conditions. Spatial specificity was assessed bycomparing pre- and post-training fMRI responses in portions of retinotopiccortex representing the spatial locations of trained and untrained stimuli.Direction specificity was assessed with fMRI adaptation (fMRI-A), a procedurebased on the fact that when presented with a pair of stimuli in succession,neurons will typically respond more weakly to the second stimulus ifthey also responded to the first stimulus. Consequently, two consecutivelypresented stimuli will generate a smaller response if they excite overlappingpopulations of neurons. In each block, an adapting direction (trainedor untrained) was presented, and in each trial an additional probe stimulus,which differed from the adapting direction by some angular offset, wasshown. The dependence of the response amplitude on this angular offsetallowed the generation of adaptation ‘direction tuning curves’ for motionsensitiveareas in visual cortex.Acknowledgement: This work was supported by NIH grant R21-EY17926 (MAS), theHellman Family Faculty Fund (MAS), and National Research Service Award F31-AG032209(AR).53.323 Learn to be fast: gain accuracy with speedAnna Sterkin 1 (anna.sterkin@gmail.com), Oren Yehezkel 1 , Uri Polat 1 ; 1 Faculty ofMedicine, Goldschleger Eye Research Institute, Sheba Medical Center, TelHashomer, Tel Aviv University, Israel.Our recent neurophysiological findings provided evidence for collinearfacilitation in detecting low-contrast Gabor patches (GPs) and for the abolishmentof these collinear interactions by backward masking (BM). It wassuggested that the suppression induced by the BM eliminates the collinear258 <strong>Vision</strong> <strong>Sciences</strong> <strong>Society</strong>See page 3 for Abstract Numbering System


53.323 Learn to be fast: gain accuracy with speed
Anna Sterkin1 (anna.sterkin@gmail.com), Oren Yehezkel1, Uri Polat1; 1Faculty of Medicine, Goldschleger Eye Research Institute, Sheba Medical Center, Tel Hashomer, Tel Aviv University, Israel.
Our recent neurophysiological findings provided evidence for collinear facilitation in detecting low-contrast Gabor patches (GPs) and for the abolishment of these collinear interactions by backward masking (BM). It was suggested that the suppression induced by the BM eliminates the collinear facilitation. Moreover, our recent behavioral study showed that training on a BM task improves processing speed. Here we applied perceptual learning on BM in a detection task that strengthens the facilitatory lateral interactions, in ten overnight sessions, in order to study whether reinforced facilitatory interactions can overcome the suppressive effects induced by BM. Event-Related Potentials (ERPs) were recorded before and after the training. A low-contrast, foveal target GP was simultaneously flanked by two collinear high-contrast GPs. In the BM task, another identical mask was presented at different time-intervals (ISIs). Before training, BM induced suppression of target detection at the ISI of 50 ms, in agreement with earlier behavioral findings. This ISI coincides with the active time-window of lateral interactions. After training, our results show a remarkable improvement in all behavioral measurements, including percent correct, sensitivity (d'), reaction time and the decision criterion for this ISI. The ERP results show that before training, BM canceled the physiological markers of facilitation at the same ISI of 50 ms, measured as the amplitude of the negative N1 ERP peak (latency of 260 ms). After the training, the sensory representation, reflected by the P1 peak, had not changed, consistent with the unchanged physical parameters of the stimulus. Instead, the shorter latency (by 20 ms, to a latency of 240 ms) and the increased amplitude of N1 represent the development of facilitatory lateral interactions between the target and the collinear flankers. Thus, previously effective backward masking became ineffective in disrupting the collinear facilitation. We suggest that perceptual learning that strengthens collinear facilitation results in a faster processing speed.
Acknowledgement: Supported by grants from the National Institute for Psychobiology in Israel, funded by the Charles E. Smith Family and the Israel Science Foundation

53.324 Changes in Fixation Strategy May Account for a Portion of Perceptual Learning Observed in Visual Tasks
Patrick J. Hibbeler1 (hibbelpj@muohio.edu), Dave Ellemberg2, Aaron Johnson3, Lynn A. Olzak1; 1Department of Psychology, Miami University, 2Department of Kinesiology, University of Montreal, 3Department of Psychology, Concordia University
Perceptual learning in visual discrimination can be observed by monitoring an increase in an observer's ability to perform a certain task with practice. Perceptual learning has been previously linked to several different mechanisms that can account for the increase in an observer's ability: learning to perform the task itself (Anderson, Psychological Review, 94, 192, 1987), learning an optimal response strategy/adjusting criteria (Doane, Alderton, Sohn & Pellegrino, Journal of Experimental Psychology, 22, 1218, 1996), as well as changes in how the physical stimuli are perceived and processed by the observer (Gibson, 1969; Goldstone, Annual Review of Psychology, 49, 585, 1998). Observers can learn to visually fixate on areas of an image/stimulus that provide information necessary to complete their task, while avoiding areas that are not informative. This form of perceptual learning suggests a learned change in the observer's visual fixation strategy, an area of perceptual learning that has not been studied with visual hyperacuity paradigms.
During training for visual hyperacuity discriminations based on small differences in the spatial frequency or orientation of suprathreshold sinusoidal gratings, observers had their eye fixations recorded. Results showed a change in fixation strategy for all observers as their experience increased and the difficulty of the discriminations increased. Observers varied in their fixation changes, as well as their final fixation points. There was a negative correlation between fixation variance and number of trials completed, but this value did not reach significance for most observers. These results suggest that observers modify their fixation strategy over time to optimize their performance on the discrimination task. This is somewhat contradicted by the observation that incorrect responses belong to the same distribution of eye fixations as correct responses.
Acknowledgement: This study was funded in part by an NIH grant to LAO, NSERC and CFI grants to DE, and a CIHR grant to AJ.

53.325 ERP evidence for the involvement of high-level brain mechanisms in perceptual learning
Gong-Liang Zhang1 (zgl571@yahoo.com.cn), Lin-Juan Cong1, Yan Song1, Cong Yu1; 1State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University
Location specificity in perceptual learning can be eliminated through proper training procedures (Xiao et al., CurBio_08), suggesting that learning may result from training improved decision making in non-retinotopic high brain areas. This conclusion gains support from ERP recordings in this study. We trained observers with a Vernier task in the lower right visual field for six days. Pre- and post-training thresholds were compared at the trained and untrained (lower left visual field) locations. 64-channel EEG was recorded pre-/post-training at the trained and untrained locations for Vernier offsets either near the pre-training threshold (5') or sub-threshold (2.8').
Our results show that (1) Vernier learning was specific to the trained location in most observers but transferred significantly in the remaining observers. (2) The frontal P2 (210-270 ms), which may be related to decision making, had shorter latency and smaller amplitude after training for all observers, whether or not they showed location specificity, at both locations. (3) The posterior N1 (160-200 ms), which may be related to spatial attention, increased significantly after training at the trained location but decreased at the untrained location in observers showing location specificity. However, posterior N1 increased significantly at both trained and untrained locations for observers who showed significant learning transfer. (4) The EEG differences were similar at Vernier offsets either near the pre-training threshold, which became supra-threshold post-training, or sub-threshold, which became near-threshold post-training.
The ERP evidence is consistent with our rule-learning based perceptual learning model, in which a decision unit in the high-level brain learns the rules of reweighting the V1 inputs (better decision making). Such reweighting rules are unspecific to stimulus locations. However, learned rules can only apply to a new location if the brain can attend to the V1 inputs at the new location properly.
The latter can be accomplished through location training.
Acknowledgement: Natural Science Foundation of China grants 30725018 & 30600180

53.326 Increases in perceptual capacity as a function of perceptual learning: behavioral regularities and possible neural mechanisms
Michael Wenger1 (mjw19@psu.edu), Rebecca Von Der Heide1, Jennifer Bittner1, Daniel Fitousi1; 1Department of Psychology, The Pennsylvania State University
Standard indicators of the acquisition of visual perceptual expertise include systematic reductions in detection and identification thresholds, along with decreases in mean response times (RTs). One additional regularity documented in recent work has to do with changes in the ability to adapt to variations in perceptual workload, characterized as perceptual capacity, and measured at the level of the hazard function of the RT distribution. The present effort tests the potential of a computational modeling approach capable of accounting for these behavioral results, while simultaneously predicting patterns of scalp-level EEG. The approach is intended to allow for the representation of multiple competing hypotheses for the neural mechanisms responsible for these observable variables (i.e., placing the alternative hypotheses on a "level playing field"), and for the ability to systematically relate these hypotheses to formal models for perceptual behavior. The neural modeling approach uses populations of discrete-time integrate-and-fire neurons, connected as networks. The architecture is based on the known circuitry of early visual areas as well as known connectivity into and out of early visual areas. The architecture is shown to be capable of instantiating a set of prominent competing hypotheses for neural mechanisms (Gilbert, Sigman, & Crist, 2001): changes in cortical recruitment, sharpening of feature-specific tuning curves, changes in synaptic weightings, changes in within-region synchrony, and changes in across-region coherence, in both feed-forward and feed-back relations. In addition, it is shown that under reasonable simplifying assumptions, the models are also capable of making predictions for both observable response behavior and scalp-level EEG. We present data from an initial empirical test of these predictions, suggesting that changes in measures of synchrony across and within sensor regions best account for the prominent increases in perceptual capacity that accrue with the acquisition of perceptual expertise.
Acknowledgement: NIMH
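A discrete-time leaky integrate-and-fire unit of the kind used as the building block in such networks can be written in a few lines; the time constant, threshold, and input statistics below are invented for illustration.

    import numpy as np

    def lif_spikes(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
        """Discrete-time leaky integrate-and-fire neuron:
        v[t+1] = v[t] + dt/tau * (-v[t] + I[t]); spike and reset at threshold."""
        v = 0.0
        spikes = np.zeros(len(input_current), dtype=bool)
        for t, i_t in enumerate(input_current):
            v += (dt / tau) * (-v + i_t)
            if v >= v_thresh:
                spikes[t] = True
                v = v_reset
        return spikes

    rng = np.random.default_rng(1)
    drive = 1.2 + 0.3 * rng.standard_normal(1000)    # noisy suprathreshold drive
    print("firing rate:", lif_spikes(drive).mean())  # spikes per time step

Networks of such units, with the connectivity hypotheses listed above expressed as different weight and coupling schemes, yield spike trains from which both RT-like first-passage statistics and summed scalp-level signals can be read out.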


53.327 Local Perceptual Learning for Motion Pattern Discrimination: a Neural Model
Stefan Ringbauer1 (stefan.ringbauer@uni-ulm.de), Florian Raudies1, Heiko Neumann1; 1Institute of Neural Information Processing, University of Ulm
Problem. Perceptual learning increases the performance of motion pattern discrimination (Nishina et al., J. of Vision 2009). The results suggest that local, not global, learning mechanisms produced the improvement. The question remains which mechanisms of cortical motion processing are involved and how neural mechanisms of learning can account for this achievement. Method. We build upon a neural model for motion and motion pattern detection that incorporates major stages of the dorsal pathway, namely areas V1, MT, and MSTd, which has been extended by a stage for decision making in area LIP. MSTd cells are sensitive to patterns of motion by integrating motion direction sensitive MT activities along the convergent feedforward signal pathway. Feedback from MSTd to MT neurons modulates their activity. The strength of connection weights between MT and MSTd neurons can be adapted by repetitive presentation of motion patterns. MSTd to MT feedback also modulates the weight adaptation process by employing a variant of Hebbian learning using Oja's rule (J. Math. Biology 1982). As a consequence, MT cell tuning changes and in turn improves the discrimination performance of perceived motion patterns. Results and Conclusion. Model simulations quantitatively replicate the findings of Nishina and co-workers. Specifically, discrimination learning between target and neutral pattern improves from d'=0.075 to d'=0.113. The model predicts that the presentation of rotation patterns leads to the same performance as for the radial motion patterns. In addition, our computational simulations suggest that decision performance as well as the threshold differences for motion discrimination drop if noise is added to the visual stimulus. Our model predicts that feedback from area MSTd to MT stabilizes the learning under conditions when noise significantly impairs the coherence of the input motion. This suggests that while the perceptual learning in this case might indeed be local, more global information is involved in stabilizing the learning process.
Acknowledgement: Federal Ministry of Education and Research 01GW0763 (BPPL), Graduate School at the University of Ulm (MAEIC)
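Oja's (1982) rule, the learning step the model applies to the MT-to-MSTd weights, is compact enough to state directly. The sketch below (input dimensionality, learning rate, and iteration count are invented) demonstrates its key property: the weight vector converges, with unit norm, to the leading principal axis of its inputs.

    import numpy as np

    def oja_update(w, x, eta=0.01):
        """One Hebbian step with Oja's (1982) normalization:
        dw = eta * y * (x - y * w), where y = w . x."""
        y = w @ x
        return w + eta * y * (x - y * w)

    rng = np.random.default_rng(2)
    # Invented stand-in for MT activity patterns evoked by a repeatedly
    # presented motion pattern: samples drawn around one fixed axis.
    axis = np.array([1.0, 0.5, 0.0, -0.5])
    axis /= np.linalg.norm(axis)
    w = 0.1 * rng.standard_normal(4)
    for _ in range(5000):
        x = axis * rng.standard_normal() + 0.1 * rng.standard_normal(4)
        w = oja_update(w, x)
    # w converges (up to sign) to the dominant input axis, i.e., the
    # weights come to emphasize the trained motion pattern.
    print(np.abs(w @ axis))   # close to 1

The built-in normalization is what distinguishes Oja's rule from plain Hebbian learning: it keeps the weights bounded while still aligning them with the most repeated input structure.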
53.328 Does perceptual learning require consciousness or attention?
Julia D. I. Meuwese1 (j.d.i.meuwese@uva.nl), H. Steven Scholte1, Victor A. F. Lamme1,2; 1Cognitive Neuroscience Group, Department of Psychology, University of Amsterdam, 2The Netherlands Institute for Neuroscience
It has been proposed that visual attention and consciousness are separate (Koch and Tsuchiya, 2007) and possibly even orthogonal processes (Lamme, 2003). The two converge when conscious visual percepts are attended, and hence become available for conscious report. A lack of reportability can however have two causes: the absence of attention or the absence of a conscious percept. This raises an important question in the field of perceptual learning. It is known that learning can occur in the absence of conscious reportability, but given the recent theoretical developments it is now suddenly unclear which of the two ingredients – consciousness or attention – is not necessary for learning. We present textured figure-ground stimuli, and manipulate reportability either by masking (which interferes with consciousness) or with an inattention paradigm (which only interferes with attention). During the second session (24 hours later) learning is assessed via differences in figure-ground ERPs and via a detection task. Preliminary findings suggest that early learning effects are found for stimuli presented in the inattention paradigm, and not for masked stimuli. These results suggest that learning requires consciousness, and not attention, and further strengthen the idea that consciousness is separate from attention.

53.329 Role of attention in visual perceptual learning: evidence from event-related potentials
Yulong Ding1 (dingyulong007@gmail.com), Zhe Qu1, You Wang1, Xiaoli Chen1; 1Department of Psychology, Sun Yat-Sen University, Guangzhou, China
The role of attention in perceptual learning has been a central question in recent years. However, the brain mechanism of attentional modulation of visual perceptual learning is still unclear. By recording event-related potentials from human adults, the present study investigated how top-down attention modulates visual perceptual learning. 30 subjects were randomly divided into two groups: an active and a passive learning group. Each subject received 1.5 h of training while ERPs were recorded. Subjects in the active learning group were trained to discriminate line orientation, while those in the passive learning group just passively viewed the stimuli used in the active learning group. All subjects received tests on the line orientation discrimination task just before and after the training, as well as on the next day. Behavioral results showed that subjects in the active training group obtained a larger improvement in performance than those in the passive learning group. While the learning effect of the passive group could transfer to different stimulus orientations and occurred mainly after the training, that of the active group was orientation-specific and occurred mainly during the training. ERP results showed that, for the passive learning group, both posterior P1 (90-110 ms) and N1 (120-160 ms) decreased in amplitude over 1.5 h of training, while posterior P2 (210-250 ms) did not change. For the active group, however, P1 did not change; N1 decreased, but the decrement was smaller than that of the passive group; while P2 increased in amplitude with training. The present study implies that top-down attention does modulate short-term perceptual learning, leading to the stimulus-specific learning effect in behavioral performance as well as to increments of neural activity which are opposite to the sensory adaptation effects caused by stimulus repetition and which originate from quite an early stage of visual processing, within 100 ms after stimulus onset.
Acknowledgement: This work was supported by the National Nature Science Foundation of China grants (30570605) and the Open Project Grant of the State Key Laboratory of Brain and Cognitive Science, China.

53.330 Implicit Learning of Background Texture while Learning to Break Camouflage
Xin Chen1 (chenxincx@gmail.com), Jay Hegdé1,2; 1Brain and Behavior Discovery Institute and Vision Discovery Institute, Medical College of Georgia, Augusta, GA, 2Department of Ophthalmology, Medical College of Georgia, Augusta, GA
It can be difficult to recognize a visual object camouflaged against its background, even when the object is familiar and is 'in plain sight'. However, the ability of the visual system to break camouflage can be improved with training. What the visual system learns during such training remains unclear. We hypothesized that learning to break camouflage involves learning, however implicitly, the statistical properties of the background, because this information is computationally helpful in breaking camouflage. To test this hypothesis, we synthesized a large number of novel instances of familiar natural textures (e.g., pebbles) using the texture synthesis algorithm of Portilla and Simoncelli (2000).
We created novel camouflaged visual scenes by camouflaging a familiar object (a face) against each instance of synthesized texture. We used some of these images to train normal adult human subjects to break camouflage using a two-alternative forced-choice detection paradigm (i.e., target present or absent), until subjects reached a criterion performance of d' ≥ 1.5. We tested detection performance before and after the training using previously unseen instances of the same texture. We found that the detection performance of the subjects was significantly better after the training relative to the performance before the training (e.g., d' of 0.5 before training vs. 1.5 after training for a typical subject), indicating that exposure to a given texture improved camouflage breaking in novel instances of the texture. Importantly, detection performance also improved for unfamiliar objects (e.g., 'digital embryos') that the subjects did not encounter during training, suggesting that the transfer of learning was not dependent on learning of the target per se. Moreover, the transfer of background learning was not specific to a given texture. Together, our results indicate that subjects can implicitly learn the background textures of camouflaged scenes even when not explicitly required to learn them.
Acknowledgement: Supported by Medical College of Georgia
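For reference, d' values like those quoted above follow from hit and false-alarm rates in the standard signal-detection way. A minimal sketch, with example rates invented so as to reproduce d' of roughly 0.5 and 1.5:

    from scipy.stats import norm

    def d_prime(hit_rate, fa_rate):
        """d' for yes/no detection: z(hits) - z(false alarms)."""
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    print(d_prime(0.60, 0.40))   # ~0.5, like the pre-training example
    print(d_prime(0.77, 0.23))   # ~1.5, like the post-training criterion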


53.331 Understanding how people learn the features of objects as Bayesian inference
Joseph L. Austerweil1 (joseph.austerweil@gmail.com), Thomas L. Griffiths1; 1Department of Psychology, UC Berkeley
Research in perceptual learning has demonstrated that human feature representations can change with experience (Goldstone, 1998). However, previous computational models for learning feature representations have presupposed that the number of features (Goldstone, 2003) or complex basic units (Orban et al., 2008) are known a priori. We propose a nonparametric Bayesian framework that infers feature representations for observed stimuli from raw sensory information, without specifying the number of features a priori (Austerweil & Griffiths, 2008). This approach captures two main phenomena from the perceptual learning literature: differentiation (Pevtzow & Goldstone, 1994) and unitization (Shiffrin & Lightfoot, 1997). Additionally, our approach makes a novel prediction about how people learn features. It predicts that people should infer whole objects as features if the parts which compose objects strongly co-vary across objects, and the parts as features if the parts are largely independent. In our first experiment, we demonstrated that one group of participants who observed objects whose parts co-varied did not generalize to unseen combinations of those parts (Austerweil & Griffiths, 2009). The other group of participants, who observed parts occurring independently, did generalize to unseen combinations of parts. We demonstrate that the following pre-existing psychological frameworks or models cannot explain these results: exemplar models (Nosofsky, 1986), prototype models (Reed, 1972), changes of concavity (Hoffman & Richards, 1985), and recognition-by-components (Biederman, 1987). This suggests participants were using distributional information to infer the features on which to base their generalization judgments, as our model suggests. In a second experiment, we replicate this effect with a set of rendered 3-D objects, showing that the effect holds in two very different types of objects. As our computational framework suggests, part correlation is an important cue that people use to infer feature representations.
Acknowledgement: Grant FA9550-07-1-0351 from the Air Force Office of Scientific Research.
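A standard nonparametric prior of the kind this framework requires – one that leaves the number of features unbounded – is the Indian Buffet Process. The sketch below samples a binary object-by-feature matrix from it; the concentration parameter and object count are arbitrary, and since the abstract does not spell out the authors' exact construction, this should be read as a generic illustration of the idea.

    import numpy as np

    def sample_ibp(n_objects, alpha=2.0, seed=0):
        """Draw a binary object-by-feature matrix from the Indian Buffet
        Process; the number of features is not fixed in advance."""
        rng = np.random.default_rng(seed)
        counts = []                  # how many objects own each feature
        rows = []
        for i in range(1, n_objects + 1):
            row = [rng.random() < c / i for c in counts]  # share old features
            n_new = rng.poisson(alpha / i)                # invent new ones
            counts = [c + r for c, r in zip(counts, row)] + [1] * n_new
            rows.append(row + [True] * n_new)
        Z = np.zeros((n_objects, len(counts)), dtype=bool)
        for i, row in enumerate(rows):
            Z[i, :len(row)] = row
        return Z

    Z = sample_ibp(10)
    print(Z.shape)   # the feature count is inferred, not pre-specified

Conditioning such a prior on observed stimuli then lets the posterior decide, from part co-variation, whether whole objects or individual parts make the better features.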
Color and light: Surfaces and materials
Orchid Ballroom, Boards 401–410
Tuesday, May 11, 8:30 - 12:30 pm

53.401 Lightness estimation errors in a 3D context
Yoana Dimitrova1,2 (y.dimitrova@ucl.ac.uk), Peter McOwan1,3, Alan Johnston1,2; 1Centre for Mathematics and Physics in the Life Sciences and Experimental Biology, University College London, 2Division of Psychology and Language Sciences, Department of Cognitive, Perceptual and Brain Sciences, University College London, 3School of Electronic Engineering and Computer Science, Department of Computer Science, Queen Mary, University of London
The problem of recovering reflectance from a single image is inherently under-constrained. Therefore, the visual system must use heuristics or biases to recover reflectance, whilst discounting geometry and illumination. In a previous study, we investigated lightness perception in a 3D context (Dimitrova YD, McOwan P, Johnston A, 2009, Perception 38: ECVP Abstract supplement, p. 31). Participants systematically overestimated reflectance for vertically eccentric illuminant angles and underestimated for illuminant angles close to the horizontal plane. These errors were robust to additional cues to light source direction and depth as well as to the removal of local features. Errors rose significantly with the increase in the ratio of directional to ambient light. Modelling of the data indicated three possible causes of the reflectance misestimation: a light direction bias, a bias in the proportion of ambient to directional light, or a simple brightness averaging over the image. To investigate perceived illuminant direction, participants were asked to adjust the illumination direction for a sphere until it matched the lighting direction for a dodecahedron rendered using a range of illuminant elevations. The pattern of light direction adjustments was consistent with a bias of perceived illuminant elevation that is shifted both away from vertically eccentric angles and away from the horizontal plane. These adjustments were similar to the results of modelling reflectance errors with a light direction bias. This finding supports the view that the systematic errors in reflectance settings are at least in part caused by a bias in the assumed direction of illumination.

53.402 The effects of color categorization on shadow perception
James Christensen1 (christensen.68@osu.edu), William Miller1,2; 1Human Effectiveness Directorate, Air Force Research Lab, 2Psychology Department, University of Dayton
As Cavanagh and Leclerc (1989) discussed, a significant change in hue across a cast shadow boundary is unlikely under real-world conditions. Such a change would require both multiple light sources of different hues and minimal interreflections that could otherwise mute differences in hue. Despite this constraint, they found that a change in color across a shadow boundary does not prevent shape-from-shadow perception, as long as there are luminance differences. The present study sought to demonstrate that rather than being largely ignored by observers, the color constraint on shadows is a cue that can inhibit a shadow percept, even when another shadow cue such as a penumbra is present. Perceptually matched pairs of color patches were generated that consisted of a base color and a possible shadow color, presented as two halves of a circular stimulus image. The base color was categorized as blue but near the blue-green boundary, while the possible shadow colors were adjusted to be each equally different from the base color, either crossing into the green color category or closer to prototypical blue. Possible shadows were always less luminant than the base color. This resulted in blue/blue stimulus pairs and blue/green pairs. Observers then completed a rating task that included these pairs as well as variously achromatic pairs or equiluminant pairs. The blue/blue stimulus resulted in higher shadow ratings than the blue/green pair. We conclude that it is not merely the presence of a color difference that inhibits shadow perception, but instead a categorical difference in color that may then combine with other edge-type cues, similar to cue combination theories of shape perception.

53.403 Perception of surface glossiness in infants
Jiale Yang1 (oc074006@grad.tamacc.chuo-u.ac.jp), Yumiko Otsuka2, So Kanazawa2, Masami K. Yamaguchi1,3, Isamu Motoyoshi4; 1Chuo University, 2Japan Women's University, 3PRESTO, JST, 4NTT Communication Science Laboratories, NTT
Human adults can easily judge the glossiness of natural surfaces. The present study examined glossiness perception in infants. Using computer graphics, we created gray-scale images of three objects that had identical 3D structure with different surface qualities. The first object was uniformly matte, the second one was glossy, and the third one was matte but covered with white paint splashes. The glossy and paint surfaces had similar, positively skewed luminance histograms, while the matte surface had a negatively skewed histogram. Twenty-four infants, aged 5-6 and 7-8 months, were presented with two of the objects side by side. In one condition they were glossy vs. matte, and in the other glossy vs. paint. The results showed that the 7-8-month-old infants, but not the 5-6-month-old infants, significantly preferred the glossy object to both the matte and paint objects. The preference for the glossy surface over the paint surface cannot be accounted for by the difference in the histogram statistics, indicating that infants could discriminate between highlight and white paint.
These findings suggest that 7-8-month-old infants are sensitive to surface quality and have a preference for glossy objects on the basis of neural representations beyond simple image statistics. The developmental period of sensitivity to highlights found in the present study is consistent with the previous finding that the perception of shape from shading emerges around 7 months of age (Granrud, Yonas, and Opland, 1985).

53.404 Hue torus
Rumi Tokunaga1 (tokunaga.rumi@kochi-tech.ac.jp), Alexander Logvinenko2; 1Department of Information Systems Engineering, Kochi University of Technology, Japan, 2Department of Vision Sciences, Glasgow Caledonian University, UK
One can alter the colour appearance of an object either by painting it or by changing its illumination. Both material and lighting changes can result in a change of hue. We report on an experiment which shows that "material" hues are different from "lighting" hues. Two identical sets of Munsell papers (5R4/14, 5YR7/12, 5Y8/12, 5G6/10, 10BG5/8, 5PB5/12 and 10P5/12) were presented in two displays. In separate sessions of the experiment, the displays were illuminated independently by one of five lights: red, yellow, green, blue and purple, giving a total of 15 possible illumination conditions (red-red, red-yellow, etc.). The lights were approximately equiluminant, with CIE 1976 u'v'-coordinates (0.382, 0.488), (0.199, 0.530), (0.127, 0.532), (0.183, 0.210), and (0.259, 0.365). Dissimilarity judgments were made between papers in the two displays (as in asymmetric colour matching). Each pair was evaluated 6 times by ranking. As a standard pair, the paper 5Y8/12 lit by the yellow light and the paper 5PB5/12 lit by the blue light were presented at all times during the experiment to indicate the maximal rank. Two trichromatic observers participated in the experiment. The dissimilarity judgements were analyzed using a non-metric multidimensional scaling technique. The output configuration was of a slightly distorted torus-like pattern ("doughnut"). When one changes the material (reflectance) property, moving from paper to paper under the same light, one travels the circumference of the doughnut (referred to as material hue). When one changes the lighting property, moving from light to light for the same paper, one travels the cross-sectional circle of the doughnut (referred to as lighting hue). Thus, the material and lighting hues are found to be dissociated in the dissimilarity space. We conclude, contrary to general belief, that the manifold of object-colour hues is two-dimensional, being topologically equivalent to a torus.


53.405 Both the complexity of illumination and the presence of surrounding objects influence the perception of gloss
Susan F. te Pas1 (s.tepas@uu.nl), Sylvia C. Pont2, Katinka van der Kooij1; 1Experimental Psychology, Helmholtz Institute, Utrecht University, 2Industrial Design, Delft University of Technology
Introduction: Human observers seem to robustly and effortlessly classify material properties, even when the optical input changes completely due to illumination changes. Previous research by Fleming et al. (2003) shows that the complexity of the illumination affects our judgments of glossiness. Here, we investigate the effects of both the nature of the illumination and the presence of context objects on the perceived glossiness of a reference object. Method: We compare perceived glossiness for complicated illumination (containing high-frequency variations in the spatial luminance distribution, a bit like sunlight filtered through foliage), collimated illumination and diffuse illumination. As context objects, we use an arrangement of fruits, vegetables and vases that either all retained their original color and glossiness, were all spray-painted matte gray, or were all spray-painted specular gray. Participants viewed a gray reference object that was either photographed in isolation or placed in a number of complex scenes, under three different illuminations. They matched the glossiness of a test object that was photographed in isolation on a matte background with collimated illumination to the glossiness of this reference object. Results: We found a huge underestimation of the glossiness of the object when the object was illuminated with a diffuse light source, compared to when the object was illuminated with a collimated light source, whereas glossiness was overestimated when illuminated with a highly complicated light source. In some participants, these biases were slightly reduced when specular or colored context objects were present. Conclusions: Results indicate that a richer environment, with complicated, more natural illumination and a variety of different context materials, helps us judge glossiness more accurately.
Acknowledgments: This work was supported by the Netherlands Organization for Scientific Research (NWO).

53.406 Real-world illumination measurements with a multidirectional photometer
Yaniv Morgenstern1 (yaniv@yorku.ca), Richard F. Murray1, Wilson S. Geisler2; 1Department of Psychology and Centre for Vision Research, York University, 2Center for Perceptual Systems, University of Texas at Austin
The visual system resolves ambiguity by relying on assumptions that reflect environmental regularities. One well-known assumption, used to interpret ambiguous 2D images, is the light-from-above prior. However, recent work has shown that the visual system represents more complex assumptions about natural illumination than a single overhead light source (Fleming et al., 2003; Doerschner et al., 2007). To investigate these hidden assumptions, previous researchers have used multi-directional photographic methods to measure and statistically characterize natural illumination. These methods provide high-resolution, high-dynamic-range images of the complete surrounding scene. For some purposes, such as understanding illumination of Lambertian objects, a coarser lighting measurement that represents the first three orders of spherical harmonics would suffice (Basri and Jacobs, 2001; Ramamoorthi and Hanrahan, 2001). We will describe a multidirectional photometer we have developed that makes fast and accurate measurements of low-degree spherical harmonic components of real-world lighting. The multidirectional photometer is a 20 cm diameter aluminum sphere, mounted with 64 approximately evenly spaced photodiodes.
Each photodiode is filtered to match the photopic spectral sensitivity of the human visual system, and fitted with an aperture that reduces its directional selectivity so as to provide the sharpest image possible with 64 sensors. The device measures light ranging from low-lit indoor scenes to direct sunlight, and makes several complete measurements per second. We discuss design decisions such as how many photodiodes to use, how to distribute the photodiodes over the sphere, and what directional selectivity to give the individual photodiodes. We also discuss a linear-systems approach to using the photodiode measurements to reconstruct ambient lighting as a sum of basis functions. We will present preliminary findings on the statistical characterization of low-degree spherical harmonics of light in natural scenes, and discuss how such measurements can be used to advance our understanding of shape from shading and lightness constancy.
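The linear-systems reconstruction step can be sketched as ordinary least squares on a low-order real spherical-harmonic basis. In the sketch below the sensor directions and lighting coefficients are simulated, and each photodiode is idealized as a point sample along its aperture direction, which ignores the directional-selectivity lobe discussed above; it is an illustration of the approach, not the device's actual calibration pipeline.

    import numpy as np

    def sh_basis(dirs):
        """Real spherical harmonics through order 2 (9 functions),
        evaluated at unit direction vectors (N x 3)."""
        x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
        return np.column_stack([
            0.282095 * np.ones_like(x),
            0.488603 * y, 0.488603 * z, 0.488603 * x,
            1.092548 * x * y, 1.092548 * y * z,
            0.315392 * (3 * z**2 - 1),
            1.092548 * x * z, 0.546274 * (x**2 - y**2),
        ])

    rng = np.random.default_rng(3)
    dirs = rng.standard_normal((64, 3))            # random stand-ins for
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # 64 sensor directions

    true_coeffs = rng.standard_normal(9)           # simulated lighting
    readings = sh_basis(dirs) @ true_coeffs + 0.01 * rng.standard_normal(64)

    est, *_ = np.linalg.lstsq(sh_basis(dirs), readings, rcond=None)
    print(np.max(np.abs(est - true_coeffs)))       # small: lighting recovered

With 64 well-spread sensors and only 9 unknowns the system is heavily overdetermined, which is what makes a fast, noise-tolerant low-order reconstruction feasible.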
53.407 A Model of Illumination Direction Recovery Applied to Dynamic Three-Dimensional Scenes
Holly E. Gerhard1 (holly.gerhard@nyu.edu), Laurence T. Maloney1,2; 1Department of Psychology, New York University, 2Center for Neural Science, New York University
Background: Gerhard & Maloney (ECVP 2009) measured the accuracy of human observers in judging the direction of movement of a collimated light source illuminating a simple rendered scene. The scene contained a smooth randomly-generated Gaussian bump landscape. The light source was not directly visible, and observers had to judge the direction of movement given only the effect of the illumination change on the landscape. All viewing was binocular. This task is of particular interest because motion direction is ambiguous without this depth information. Eight observers completed the experiment. Observers could accurately judge the direction of movements spanning as little as 10 degrees over 750 msec, but judgments were consistently less reliable for some scenes than others. Observers also varied in accuracy. Goal: We present a computational model of the task intended to predict ideal performance, individual differences across observers, and differences in accuracy across scenes. The model recovers change in illumination direction using 1) the luminance map of the stereo images across time, and 2) the depth map of the scene, which we assume is available to the observer. Model: The ideal model computes luminance gradients at Canny-defined luminance edges in the image and combines this information with a measure of local surface shape to recover illumination direction. To compute motion direction, final position and initial position are compared. Results: We simulated the performance of the model on the stimuli previously viewed by observers in Gerhard & Maloney. The model's variation in direction estimation was 7.2 degrees. Maximum likelihood estimates of human variability based on performance were much larger, 27 to 46 degrees. We use the model to mimic human performance for each observer and also to investigate the scene factors that enhance or diminish performance.
Acknowledgement: NIH EY0826

53.408 Dissimilarity Scaling of Lightness Across Changes of Illuminant and Surface Slant
Sean C. Madigan1 (smadigan@psych.upenn.edu), David H. Brainard1; 1Department of Psychology, University of Pennsylvania
Purpose. Some studies of lightness and color constancy have described asymmetric matching conditions under which observers are unable to find a perfect match; that is, the best match was associated with a residual perceptual difference (e.g., Brainard, Brunt, and Speigle, 1997). Logvinenko and Maloney (2006) quantified this phenomenon by having observers rate differences in perceived surface lightness across changes in surface reflectance and illumination, and modeling the dissimilarity data using multidimensional scaling (MDS). Their data required a two-dimensional representation for lightness, with one dimension corresponding (roughly) to surface reflectance and the other (roughly) to illumination intensity. The finding of a two-dimensional representation explains why observers cannot make a perfect asymmetric lightness match across a change of illumination by adjusting the match stimulus intensity alone. We sought to replicate Logvinenko and Maloney's result, and to extend it to the case where viewing geometry was also varied. Methods. Observers viewed pairs of grayscale matte flat test stimuli, one presented in each of two adjacent illuminated viewing chambers. They rated the dissimilarity of each pair. Observers scaled all possible pairs from a stimulus set containing 6 surface reflectances seen in 3 scene contexts. Across one context change illumination varied, while across the other surface slant varied. Non-metric MDS was used to model the data. Results. The data from each of four observers were well accounted for by a one-dimensional representation. This representation was similar in structure for all observers; for each, the position p of a surface with reflectance r was approximated by p = log(r) + c, where the constant c depended on scene context. Accounting for our dissimilarity data did not require two or more dimensions of perceived lightness. Conclusion. What variations of viewing conditions produce a lightness representation with multiple dimensions remains to be determined.
Acknowledgement: Supported by NIH RO1 EY10016 and P30 EY001583.
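The reported one-dimensional solution makes predicted dissimilarities easy to write down. A minimal sketch, with reflectances and context constants invented for illustration (the paper reports only the form p = log(r) + c):

    import numpy as np

    reflectances = np.array([0.05, 0.10, 0.20, 0.40, 0.80])   # invented
    c = {"baseline": 0.0, "illum_change": 0.9, "slant_change": 0.4}

    def position(r, context):
        # the MDS solution reported above: p = log(r) + c per context
        return np.log(r) + c[context]

    # Predicted dissimilarity of two surfaces is the distance between
    # their positions on the single dimension:
    p_a = position(reflectances[1], "baseline")
    p_b = position(reflectances[3], "illum_change")
    print(abs(p_a - p_b))

Note the contrast with a two-dimensional representation: because this solution is one-dimensional, a perfect asymmetric match is available in principle by choosing a match reflectance r' with log(r') = log(r) + c1 - c2, which nulls the predicted dissimilarity.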


53.409 Effects of microscale and mesoscale structure on surface appearance
Suparna Kalghatgi1 (skk4068@rit.edu), James Ferwerda1; 1Munsell Color Science Laboratory, Carlson Center for Imaging Science, Rochester Institute of Technology
Real-world surfaces typically have geometric features at a range of spatial scales. At the microscale, opaque surfaces are often characterized by bidirectional reflectance distribution functions (BRDFs), which describe how a surface scatters incident light. At the mesoscale, surfaces often exhibit visible texture – stochastic or patterned arrangements of geometric features that provide visual information about tactile surface properties such as roughness, smoothness, softness, etc. These textures also affect how light is scattered by the surface, but the effects are at a different spatial scale than those captured by the BRDF. Normally both microscale and mesoscale surface properties contribute to overall surface appearance; however, under particular illumination and viewing conditions, one or the other may dominate. In this project we investigated how microscale and mesoscale surface properties interact to determine perceived surface lightness. We measured the BRDFs and textures of flat surfaces covered with matte latex wall paints applied by spray or roller, then created computer graphics models of these surfaces and rendered center/surround targets with identical BRDFs but different textures. Observation of the images under directional lighting shows that as the viewing angle changes from normal to grazing, the lightness contrast of the center and surround regions changes non-monotonically, with the rougher textured surface first appearing lighter than the smoother one, then darker as the specular angle is approached, then potentially lighter again near grazing. This complex behavior is due to both the surface physics and simultaneous contrast effects, and is the cause of the well-known "touch-up problem" in the paint industry. We have conducted psychophysical studies that characterize how the perceived lightness differences of surfaces vary with BRDF and texture properties, and are developing models that can predict lightness differences for various lighting and viewing conditions, and provide prescriptions for minimizing the effect.

53.410 Effects of material on the color appearance of real objects
Martin Giesel1 (Martin.Giesel@psychol.uni-giessen.de), Karl R. Gegenfurtner1; 1Justus-Liebig-University Giessen, Germany
The objects in our environment are made from a wide range of materials. The color appearance of the objects is influenced by the geometry of the illumination, the three-dimensional structure of the objects, and the surface reflectance properties of their materials. Only a few studies have investigated the effect of material properties on color perception. In most of these studies the stimuli were three-dimensional objects rendered on a computer screen. Here we set out to investigate color perception for real objects made from different materials. The surface properties of the materials ranged from smooth and glossy (porcelain) to matte and corrugated (crumpled paper). We tested objects with similar colors made from different materials and objects made from the same material that differed only in color. The objects were placed on black cloth in a chamber under controlled lighting conditions. In an asymmetric matching task, observers matched the color and lightness of the objects by adjusting the chromaticity and the luminance of a homogeneous, uniformly colored disk presented against a black background on a CRT screen. The screen was located close to the objects so that it was not directly illuminated by the lamps that illuminated the objects. To determine the chromatic and luminance distributions of the objects, their surfaces were measured with a spectroradiometer at numerous points from the viewpoint of the observers. We also measured the chromatic and luminance distributions of the materials when they could be presented as approximately flat surfaces (paper and wool). The observers' matches were measured with the spectroradiometer.
In general, observers' matches were close to the true chromatic and luminance distributions of the objects. However, observers systematically tended to discount the variations in reflected light induced by the geometry of the objects, and rather matched the light reflected from the materials themselves.
Acknowledgement: Supported by DFG grant Ge 879/9

Spatial vision: Cognitive factors
Orchid Ballroom, Boards 411–415
Tuesday, May 11, 8:30 - 12:30 pm

53.411 Implicit verbal categories modulate spatial perception
Alexander Kranjec1 (akranjec@mail.med.upenn.edu), Gary Lupyan1; 1Center for Cognitive Neuroscience, University of Pennsylvania
We report evidence for a differential modulatory effect of spatial verbal categories on categorical and coordinate spatial processing. Kosslyn (1987) proposed a hemispheric bias for processing two types of spatial information. Categorical information refers to spatial relations that are discrete and abstract, as lexicalized by locative prepositions. More fine-grained coordinate information pertains to metric distance important for visual search. Participants made same/different judgments on pairs of dot-cross configurations presented simultaneously for 200 ms to the left and right of central fixation. Pairs could differ with respect to either their categorical or coordinate relations. For categorical trials, dots were located in different quadrants the same distance from the center of each cross; for coordinate trials, dots were located in the same quadrant at different distances from the center of each cross. The orientation of the crosses was also manipulated. In AMBIGUOUS trials, crosses were composed of intersecting vertical and horizontal lines forming a +. In UNAMBIGUOUS trials, both crosses were rotated 45° to form an × such that the quadrants were now unambiguously associated with verbal spatial categories (right/left/up/down). Effects of orientation were predicated on the view that UNAMBIGUOUS trials automatically generate unique spatial labels associated with each quadrant whereas AMBIGUOUS trials do not. We predicted that relative to AMBIGUOUS trials, performance on UNAMBIGUOUS trials would be (1) better for categorical stimuli, because in UNAMBIGUOUS trials the activation of unique verbal labels should facilitate discrimination between different spatial categories, and (2) worse for coordinate stimuli, because the stronger spatial category attractor makes discriminating two locations within the same spatial category more difficult. Both predictions were confirmed by the data, with all predicted main effects highly reliable. The robust orientation × spatial information-type interactions (Accuracy: F(1,9)=12.96, p


we performed a multiple linear regression between the random filters and response accuracy. Regression coefficients were smoothed (FWHM = 2.35), z-scored, and a pixel test was applied (Chauvin et al., 2005). For now, five of the 12 participants yield significant results in both hemifields. Nonetheless, a t-test on the SF peaks of these five subjects already confirms the use of higher SFs for words presented to the right hemifield (M = 1.64 cycles/letter) than for words presented to the left hemifield (M = 1.37 cycles/letter; mean difference = 0.27 cycles/letter, t(4) = 3.64, p


cue is presented during stimulus maintenance. These results suggest that the brain stores many objects independently of attention, and that attention is only necessary to report or otherwise cognitively manipulate these items.

53.418 Shared VSTM resources for enumerating sets and for encoding their colors
Sonia Poltoratski1 (soniapol@wjh.harvard.edu), Yaoda Xu1; 1Harvard University Department of Psychology
Several species, including humans, have been shown to possess the ability to nonverbally represent the approximate number of items in a set. Recently, Halberda et al. (2006, Psychol. Sci.) showed that with displays containing multiple spatially overlapping sets, observers can successfully enumerate three such sets (two subsets plus the superset of all items). This three-set limit on enumeration has been argued to converge with previously observed three-item limits of object-based attention and visual short-term memory (VSTM), with each set functioning as an individual entry to attention and VSTM. This proposal implies that the same VSTM resources may be used both for storing sets for enumeration and for storing single object features such as colors and shapes. In the present study, we tested this proposal using a paradigm similar to that of Halberda et al.: participants briefly viewed displays of dots of different colors and were asked to enumerate the approximate number of dots of a specific color (the probe color). The probe color was given either before or after the display was shown. Accuracy on paired 'probe before' and 'probe after' trials was compared to assess the number of sets that participants could successfully encode. Occasionally, we probed a color that was not present, allowing us to measure the number of colors that participants could encode successfully from the displays. Replicating Halberda et al., we found that participants could successfully enumerate two subsets of the colored dots. Interestingly, participants could only encode about two colors from the same displays. In other words, when participants were able to encode the color of a set, they could also enumerate the number of items in that set successfully. These results indicate that VSTM resources for enumerating sets and for encoding object colors are shared.
Acknowledgement: This research was supported by NSF grant 0855112 to Y.X.

53.419 Filtering Efficiency in Visual Working Memory
Roy Luria1 (rluria@uoregon.edu), Edward Vogel1; 1University of Oregon
What determines when filtering irrelevant items is efficient? In 3 experiments we investigated perceptual load and individual differences in working memory (WM) capacity as determinants of filtering efficiency, using both behavioral and electrophysiological markers. Participants performed a visual search task that contained a target, neutral distractors and a flanker distractor. We used the contralateral delay activity (CDA) to monitor the amount of information being stored in visual WM. The assumption is that when filtering is efficient only the target should be processed in visual WM and irrelevant items can be filtered out early in processing, but as filtering becomes inefficient, more and more irrelevant items will be stored in visual WM. The results indicated that individual differences in WM capacity and perceptual load both independently influenced the filtering of irrelevant information from visual WM.
Namely, filtering the flanker was efficient only under high perceptual load (as indicated by behavioral measures), but in both low and high perceptual load, individual differences in WM capacity correlated with filtering efficiency (as indicated by the CDA amplitude). Furthermore, the results identified the target search process as responsible for the inefficient filtering. Interestingly, facilitating the search process by presenting a spatial cue that signaled the target location made filtering more efficient in general, but high WM capacity individuals still benefited to a larger extent relative to low WM individuals.

53.420 Enumeration by location: Exploring the role of spatial information in numerosity judgments
Harry Haladjian1,2 (haladjian@ruccs.rutgers.edu), Zenon Pylyshyn1, Charles Gallistel1,2; 1Rutgers Center for Cognitive Science, 2Rutgers University Department of Psychology
Enumerating a set of visual objects requires the individuation of these items, which inherently relies on location information. To examine the role of location memory in small-number judgments (subitizing), we devised a task that presented observers with a brief display of small discs and then required them to mark the location of each disc on a blank screen. In doing so, observers provided an indirect measure of their representation of the numerosity of the display. Observers were tested on three stimulus durations (50, 200, 350 ms) and eight numerosities (2-9 discs); the black discs were approximately 1 degree in visual angle and placed randomly on a gray screen. Following a full-screen mask, observers marked the disc locations on a blank screen by using a mouse pointer to place markers ("X") for each disc. This provided a measure of recall for object locations and display numerosity. ANOVAs on enumeration performance revealed significant main effects of numerosity and display duration (with interactions). High enumeration accuracy was observed for displays containing up to six discs (>90% of trials with perfect recall); error rates increased rapidly for larger numerosities. When observers made counting errors, they were generally underestimates. In the location analysis, error was measured as the distance between a stimulus disc and a paired response disc (discs were paired using nearest-neighbor methods). Location errors were significantly worse at the 50-ms presentation duration and for larger numerosities. We speculate that the process of adding markers for each object provided a way to keep track of which objects had already been counted and thus improved enumeration accuracy. The methodology for this new subitizing task and the implications of the current findings will be discussed.
Acknowledgement: NSF 0549115
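Nearest-neighbor pairing of stimulus and response discs can be done greedily on the pairwise distance matrix. A sketch follows; the disc coordinates and placement noise are invented, and the authors' exact pairing procedure may differ (e.g., an optimal assignment rather than a greedy one).

    import numpy as np
    from scipy.spatial.distance import cdist

    def pair_nearest(stimuli, responses):
        """Greedily pair each response disc with its nearest unmatched
        stimulus disc; returns (stim index, resp index, distance) triples."""
        d = cdist(stimuli, responses)
        pairs = []
        for _ in range(min(d.shape)):
            i, j = np.unravel_index(np.argmin(d), d.shape)
            pairs.append((i, j, d[i, j]))
            d[i, :] = np.inf   # each disc is used at most once
            d[:, j] = np.inf
        return pairs

    rng = np.random.default_rng(4)
    stim = rng.uniform(0, 20, (6, 2))             # six disc locations (deg)
    resp = stim + rng.normal(0, 0.8, stim.shape)  # noisy marker placements
    errors = [dist for _, _, dist in pair_nearest(stim, resp)]
    print(np.mean(errors))                         # mean location error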
53.421 Visual working memory supports configuration, but not maintenance or application, of attentional control settings
Lingling Wang1 (dangdang@psych.udel.edu), Steven Most1; 1Department of Psychology, University of Delaware
What mechanism allows people to tune attention to search for pre-specified targets? One possibility is that such "attentional sets" involve the holding of target features in visual working memory (VWM). Alternatively, VWM might support the configuration of attentional set but become less central once configuration is completed. We tested these possibilities by manipulating concurrent VWM load during a classic "contingent attentional capture" task, where non-targets that contain features of a pre-specified target typically capture attention (indexed via response time; Folk et al., 1992). In Experiment 1, participants made speeded judgments about a target (identifying it as either '=' or 'X'); the target was always red for half the participants and always green for the other half, and it was preceded by a cue that a) was either the same color as the target or the opposite color, and b) appeared in the same location as the target or in a different location. Each trial occurred during the retention interval of a VWM task, in which participants attempted to remember the colors of either two squares (low load) or four squares (high load). Results revealed that cues matching the target color captured attention more robustly than cues of the opposite color (i.e., contingent capture), but that this was unaffected by VWM load. Experiment 2 introduced the need to configure attentional set trial-by-trial, as well as the between-subjects factor of whether VWM load was induced before or after providing information about the target color on each trial. Results revealed no effect of VWM load when information about the target color appeared prior to the induction of VWM load, but there was a significant effect of VWM load when such information was provided afterwards. Thus, VWM appears crucial for the configuration of attentional set but not necessarily for its maintenance or application.

53.422 Rapid Recovery of Moving Targets Following Task Disruption
David Fencsik1 (david.fencsik@csueastbay.edu), Skyler Place2, Melanie Johnson1, Todd Horowitz3,4; 1Department of Psychology, California State University, East Bay, 2Department of Psychological and Brain Sciences, Indiana University, 3Visual Attention Laboratory, Brigham and Women's Hospital, 4Department of Ophthalmology, Harvard Medical School
Tracking tasks, such as monitoring traffic while driving or supervising children on a playground, would seem to require continuous visual attention. In fact, we can track multiple moving objects even through disruptions to the task, such as looking away. How do we do this? We have previously suggested a mechanism that stores information about tracked objects offline during disruption, so the visual system can perform a secondary task, then later resume tracking without complete loss (Horowitz et al., 2006).
Here we studied the timecourse of target recovery following a brief disruption. Participants tracked a set of moving targets among identical moving distractors. During tracking, all objects disappeared simultaneously, then reappeared after a brief gap. Objects continued to move during the gap. At a delay between 0-1280 ms following reappearance, one object was probed. Participants identified the probed object as a target or distractor. In different experiments, we varied the gap duration from 133-507 ms, the tracking load from 1-4 targets, and the set of probe delays.


In all experiments, RT decreased over probe delays of roughly 0-50 ms following reappearance, then remained constant.
We assumed that the visual system takes some amount of time to recover the targets following the gap. Under this assumption, RT should decline linearly as a function of probe delay with a slope of -1. Recovery time may be estimated as the probe delay at which RT reaches baseline and stops declining. Recovery time was estimated to be about 45 ms in all experiments, regardless of gap duration or tracking load. These results suggest that recovery of tracked objects following disruption occurs rapidly and in parallel.
Acknowledgement: Supported by NIH Grant MH65576 to TSH
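The recovery-time estimate described above corresponds to fitting a broken-stick function, RT(d) = baseline + max(0, t_rec - d), whose pre-recovery slope is -1. A sketch with simulated data follows; apart from the slope-of-minus-one assumption and the ~45 ms example value, all numbers are invented.

    import numpy as np

    def fit_recovery(delays, rts):
        """Fit RT(d) = baseline + max(0, t_rec - d) by grid-searching
        t_rec; for each candidate, the intercept has a closed form."""
        best = (np.inf, None, None)
        for t_rec in np.linspace(0.0, delays.max(), 500):
            shape = np.maximum(0.0, t_rec - delays)
            baseline = np.mean(rts - shape)        # least-squares intercept
            sse = np.sum((rts - (baseline + shape)) ** 2)
            if sse < best[0]:
                best = (sse, t_rec, baseline)
        return best[1], best[2]

    rng = np.random.default_rng(5)
    delays = np.array([0, 10, 20, 30, 40, 50, 80, 160, 320, 640, 1280], float)
    true_rt = 600 + np.maximum(0.0, 45 - delays)   # 45 ms recovery (invented)
    rts = true_rt + rng.normal(0, 5, delays.size)
    print(fit_recovery(delays, rts))               # t_rec estimate near 45 ms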
The project examines the effect of long-term exposure to microgravity on perceived object orientation. Methods: Thirteen astronauts were tested using the Oriented Character Recognition Test (OCHART) as part of the pre-flight data collection process. OCHART measures the orientation at which a letter probe is “perceptually upright” (PU) (Dyde et al., 2006, Exp Brain Res, 173: 612). OCHART was performed while upright and while lying right-side-down (rsd). By varying the background orientation and the orientation of the subjects, the relative contributions of vision, gravity and the body can be determined. Data from 49 undergraduate students were collected for comparison. The students’ data variance was computed and a second pool of 24 student subjects was constructed to match the variance in the astronaut subject pool. Results: When in an upright posture, the direction of PU was more influenced by a tilted visual background for the complete student group than for the astronauts. Astronauts’ PUs had significantly smaller variances than those of the complete student group. Comparing the astronauts with the group of 24 students matched for variance showed no difference in the influence of the tilted background on PU between these two groups. When these two groups’ data were compared in the rsd posture, the direction of PU was reliably closer to the axis of gravity (and further away from the body centre-line) for the students than for the astronauts. Discussion: Initial data suggest that astronauts rely less on the axis of gravity when performing an orientation measure compared to a student group matched for variance. Experiments are ongoing with a subject pool age-matched to the astronaut group.

Acknowledgement: Supported by NASA Cooperative Agreement NCC9-58 with the National Space Biomedical Research Institute, the Canadian Space Agency, and grants from the Natural Sciences and Engineering Research Council of Canada to L.R. Harris and M.R. Jenkin.

53.425 Not peripersonal space but the working area of the hand determines the presence and absence of the visual capture of the felt hand location in a mirror along the sagittal plane
Takako Yoshida 1 (yoshida@cse.sys.t.u-tokyo.ac.jp), Yuki Miyazaki 2, Tenji Wake 3; 1 Graduate School of Interdisciplinary Information Studies, The University of Tokyo, 2 Department of Psychology, Graduate School of Humanities, Tokyo Metropolitan University, 3 Faculty of Human Sciences, Kanagawa University

Based on the mirror-box techniques of Ramachandran and colleagues, when normal healthy participants view their left arm in a mirror positioned along the midsagittal plane, the impression of viewing the right hand (the virtual right hand) visually captures the felt right-hand location, and participants rarely notice the location of their unseen real right hand behind the mirror, which is far from the virtual hand. To investigate the relationship between this illusion and peripersonal space, we evaluated the spatial limit of the illusion along the sagittal plane. Participants put their left hand on the mirror at a fixed position. In each trial, they were required to put their right hand at a random position, to tap the mirror six times with all fingers synchronously with both hands, and to report whether or not they felt that their right hand behind the mirror was located at the mirror-reversed position of their left hand. Hand and finger positions were recorded using an infrared motion-capture system (Library Co., Ltd.).
Plots of their wrist positions showed that the illusion was seen almost anywhere in the working area of the right hand except at its limit, which suggests that it is not peripersonal space but rather the muscle tensions and signals from the subjects’ joints that may erase this illusion when the arm posture is unnatural. To further test this possibility, we kept the observers’ right wrist positions the same as the virtual wrist. Their real right hand was always within their peripersonal space. The participants randomly twisted their right hand along the sagittal plane while keeping their wrist location the same. Again, the illusion disappeared when they twisted their hand to the limit. These results suggest that strong muscle tensions and signals from the joints can overcome visual capture and recalibrate the visual-proprioceptive conflict.

Acknowledgement: Supported by SCOPE to T.Y.

53.426 Multi-modally perceived direction of self-motion from orthogonally directed visual and vestibular stimulation
Kenzo Sakurai 1 (sakurai@mind.tohoku-gakuin.ac.jp), Toshio Kubodera 1, Philip Grove 2, Shuichi Sakamoto 3, Yôiti Suzuki 3; 1 Department of Psychology, Tohoku Gakuin University, 2 School of Psychology, The University of Queensland, 3 Research Institute of Electrical Communication, Tohoku University

We measured observers’ perceived direction of self-motion resulting from the simultaneous presentation of visual and vestibular information, each simulating a different direction of motion. Sakurai et al. (2003, ECVP) reported that when observers experience real leftward/rightward motion while viewing a visual expanding/contracting optic flow pattern consistent with forward/backward self-motion, their perceived motion direction is intermediate to those specified by the visual and vestibular information. This experiment extends that study and generalizes those findings by exploring other visual/vestibular combinations. Specifically, we explored more visual patterns, including translational optic flow consistent with upward/downward or leftward/rightward motion, as well as our previous patterns of expanding/contracting optic flow consistent with forward/backward motion. Observers were seated on an oscillating motor-driven swing providing real motion, while they viewed optic flow patterns, phase-locked to the swing motion, consisting of sine wave gratings in each of four conditions: 1) forward/backward real motion paired with upward/downward translational oscillatory optic flow (horizontal gratings), 2) forward/backward real motion paired with leftward/rightward translational oscillatory optic flow (vertical gratings), 3) leftward/rightward real motion paired with upward/downward optic flow, 4) leftward/rightward real motion paired with expanding/contracting optic flow. Observers were cued to indicate their perceived direction of self-motion during one half of the swing period by setting a virtual pointer presented at the center of the head-mounted display. In every combination of orthogonally directed visual and vestibular stimulation, observers reported distorted directions intermediate to those specified by the visual and vestibular information. For example, reported self-motion directions were forward-and-downward and backward-and-upward in condition 1, forward-and-rightward and backward-and-leftward in condition 2, leftward-and-downward and rightward-and-upward in condition 3, and leftward-and-forward and rightward-and-backward in condition 4. Observers reported the opposite distortions of perceived self-motion for the complementary stimulus pairs in each condition.

Acknowledgement: Supported by Grant-in-Aid of MEXT for Specially Promoted Research (no. 19001004).
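The intermediate self-motion directions reported in 53.426 can be pictured as a weighted average of the directions specified by the two modalities. Below is a minimal sketch under that reading; the equal weighting, the particular angles, and the function name are our illustration, not the authors' model.

import numpy as np

def combined_direction(visual_deg, vestibular_deg, w_visual=0.5):
    """Weighted average of two direction estimates, in degrees.

    Unit vectors are averaged rather than raw angles so that the
    wrap-around at 360 degrees is handled correctly."""
    angles = np.radians([visual_deg, vestibular_deg])
    weights = np.array([w_visual, 1.0 - w_visual])
    x = (weights * np.cos(angles)).sum()
    y = (weights * np.sin(angles)).sum()
    return np.degrees(np.arctan2(y, x)) % 360

# E.g., condition 3: vestibular leftward (180 deg) combined with visual
# information specifying downward (270 deg) gives a down-left intermediate.
print(combined_direction(270, 180))  # 225.0 with equal weights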
53.427 Veridical walking inhibits vection perception
Shin’ichi Onimaru 1 (onimaru@real.tutkie.tut.ac.jp), Takao Sato 2, Michiteru Kitazaki 3; 1 Department of Electronic and Information Engineering, Toyohashi University of Technology, 2 Department of Psychology, University of Tokyo, 3 Research Center for Future Vehicle, Toyohashi University of Technology

Vection (visually induced self-motion perception) has generally been investigated with static observers. We aimed to investigate the effects of walking (body action) on vection perception. Viewpoint motion along the line of sight was simulated in a three-dimensional cloud of dots (2 km/h, 1626 dots visible on average), and its optic flow was presented on a 120-inch rear screen (91 x 75 deg). The direction of simulated motion was forward (expansion) or backward (contraction). Naive participants observed the stimulus while walking forward or backward on a treadmill (2 km/h) at 1.2 m viewing distance for 60 s. All four combinations of 2 optic-flow directions and 2 walking directions were repeated 4 times for each participant, and vection latency and duration were measured. Vection latency was shorter in the backward vection condition than in the forward vection condition. For the forward vection condition, latency was longer when observers walked forward than backward. For the backward vection condition, latency was longer when observers walked backward than forward. Thus, vection perception was inhibited when observers’ walking was in the direction of the simulated viewpoint motion (the direction of optic flow). These results seem paradoxical. Since self-motion perception is a multi-modal perception fusing visual, vestibular, and proprioceptive senses with different weights (weak fusion model), we speculate that the weight of visual self-motion (vection) is increased without walking or with incongruent walking (proprioception). On the contrary, the weight of vection is relatively decreased with veridical walking because both visual and proprioceptive information are available and congruent.

Acknowledgement: Supported by Grant-in-Aid for JSPS Fellows MEXT Japan to SO, Nissan Science Foundation to MK, and The Global COE program ‘Frontiers of Intelligent Sensing’

53.428 Visual and Auditory deterministic signals can facilitate tactile sensations
R. Doti 1 (rafael.doti@umontreal.ca), J.E. Lugo 1, J. Faubert 1; 1 Visual Psychophysics and Perception Laboratory, School of Optometry, Université de Montréal

We report novel tactile-visual and tactile-auditory interactions in humans, demonstrating that a facilitating sound or visual deterministic signal that is synchronous with an excitatory tactile deterministic signal presented at the lower leg increases the peripheral representation of this excitatory signal (deterministic resonance). In a series of experiments we applied a local electrical stimulation and measured the electrical (electromyography, or EMG) response of the right calf muscle while the local electrical stimulation was maintained at subthreshold levels (not detected). By introducing the visual or auditory representation of the local electrical signal (the facilitation signal) to the central system, the signal sensation was recovered and the electrical EMG signal increased. We go further by demonstrating that the neural dynamics of this phenomenon can resemble those of stochastic resonance, by showing similar peripheral effects when introducing auditory noise instead of the same deterministic signal. In the last experiment, we show that the paired deterministic stimulations exhibit response functions similar to stochastic resonance.

Acknowledgement: NSERC-Essilor Chair and NSERC

53.429 Kinesthetic information modulates visual motion perception
Bo Hu 1 (bh@cs.rochester.edu), David Knill 1; 1 Center for Visual Science, University of Rochester

Previously we showed that actively moving a grating pattern disambiguates the perceived direction of visual motion. We demonstrate here that active movement is not necessary: kinesthetic signals from passive movement of the hand can also disambiguate the perceived motion direction. Eight subjects completed 4 active and 4 passive blocks in Experiment 1. In active blocks, subjects slid a dumb-bell shaped device (a bar connecting two 6” plates) on a tabletop along self-chosen directions for 4000 ms. Its position was recorded at 250 Hz. Square-wave gratings (2 deg/cycle), rendered on a monitor, were reflected by a mirror onto the plane co-aligned with the top plate and moved at the same velocity as the subjects’ hand movement. Subjects viewed the gratings through a 12-degree round aperture and reported the perceived grating motion direction. In passive blocks, the device was mounted on a robot arm, which regenerated the motion recorded in the active blocks. Subjects, grasping the bar, were moved by the robot without seeing their arms and performed the same task. Seven subjects’ direction judgments showed modulation from the hand movement in both conditions. The weights given to the kinesthetic signal did not differ significantly between the active and passive conditions (paired t(7)=0.46). In Experiment 2, 8 new subjects completed 4 passive and 4 vision-only blocks, in which subjects reported the motion without grasping the bar.
Six subjects showed kinesthetic modulation in the passive condition, but none in the vision-only condition, excluding the possibility that subjects used visual cues in judging the true motion direction. The results confirm that kinesthetic information modulates motion perception. That a similar magnitude of perceptual modulation occurred in the passive and active conditions indicates that high-level intentional signals about the planned movement direction cannot explain the effect; rather, the brain integrates kinesthetic and visual signals when estimating the visual motion direction.

53.430 Does it feel shiny? Haptic cues affect perceived gloss
Iona S Kerrigan 1 (I.S.Kerrigan@soton.ac.uk), Wendy J Adams 1, Erich W Graf 1; 1 University of Southampton, UK

Human observers combine haptic (touch) and visual cues to estimate object properties such as slant (Ernst, Banks & Buelthoff, 2000) and size (Ernst & Banks, 2002). In the present study we ask whether haptic cues can change a visual percept; specifically, is the perception of gloss influenced by how an object feels? Observers binocularly viewed a single convex shaded bump, either with or without a specular highlight. The specular highlight was either aligned with the diffuse shading or offset by up to 120˚. On visual-only trials observers simply viewed the stimulus and made a 2AFC shiny vs. matte judgement. On visual-haptic trials, observers touched and viewed the stimulus before making their judgement. Stimuli felt either hard and smooth or soft and rubbery. In agreement with previous work (Anderson & Kim, 2009), specular highlights that were closely aligned with the diffuse shading led to shiny percepts. However, as the misalignment increased, the object was increasingly judged as matte. Importantly, when the object felt hard and smooth, observers classed the objects as shiny for larger highlight misalignments. In contrast, when the object felt soft and rubbery, objects appeared matte with smaller offsets between the highlight and the diffuse shading. We conclude that haptic information can alter observers’ visual percepts of material properties.

Acknowledgement: ISK was funded by an ESRC studentship

53.431 Effective tactile noise can decrease luminance modulated thresholds
J.E. Lugo 1 (je.lugo.arce@umontreal.ca), R. Doti 1, J. Faubert 1; 1 Visual Psychophysics and Perception Laboratory, School of Optometry, Université de Montréal

The multisensory FULCRUM principle describes a ubiquitous phenomenon in humans [1,2]. This principle can be interpreted within an energy and frequency model of multisensory neurons’ spontaneous activity. In this context, the sensitivity transitions represent the change from spontaneous activity to firing activity in multisensory neurons. Initially, the energy and frequency content of the multisensory neurons’ activity (supplied by a weak signal) is not enough to be detected, but when the facilitation signal (for example, auditory noise or another deterministic signal) enters the brain, it generates a general activation among multisensory neurons of different regions, modifying their original activity. The result is an integrated activation that promotes sensitivity transitions, and the signals are then perceived. For instance, using psychophysical techniques we demonstrate that auditory or tactile noise can enhance the sensitivity of visual system responses to weak signals. Specifically, we show that effective tactile noise significantly decreased luminance modulated visual thresholds. Because this multisensory facilitation process appears universal and a fundamental property of sensory/perceptual systems, we call it the multisensory FULCRUM principle. A fulcrum is one that supplies capability for action, and we believe that this best describes the fundamental principle at work in these multisensory interactions. [1] Lugo E, Doti R, Faubert J (2008) Ubiquitous Crossmodal Stochastic Resonance in Humans: Auditory Noise Facilitates Tactile, Visual and Proprioceptive Sensations. PLoS ONE 3(8): e2860. doi:10.1371/journal.pone.0002860 [2] Lugo J E, Doti R, Wittich W, Faubert J (2008) Multisensory Integration: Central processing modifies peripheral systems. Psychological Science 19(10): 989-999.

Acknowledgement: NSERC-Essilor Chair and NSERC
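The facilitation described in 53.428 and 53.431 builds on stochastic resonance: a signal too weak to cross threshold on its own is transmitted when a moderate amount of noise is added, while too much noise swamps it. Here is a toy simulation of that behaviour; the threshold, noise levels, and correlation measure are our illustrative choices, not the FULCRUM model itself.

import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
signal = 0.8 * np.sin(2 * np.pi * 5 * t)   # subthreshold: peak 0.8 < threshold
threshold = 1.0

for noise_sd in (0.0, 0.3, 3.0):
    out = (signal + rng.normal(0.0, noise_sd, t.size)) > threshold
    if out.std() == 0:                      # never crosses: nothing transmitted
        corr = 0.0
    else:
        corr = np.corrcoef(out.astype(float), signal)[0, 1]
    print(f"noise sd {noise_sd:.1f}: output-signal correlation = {corr:.2f}")

# Expected pattern: zero with no noise, highest at the moderate noise level,
# and low again once the noise dominates the signal.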
53.432 Semantic congruency, attention, and fixation position modulate conscious perception when viewing a bistable figure
Jhih-Yun Hsiao 1 (r97227115@ntu.edu.tw), Yi-Chuan Chen 2, Charles Spence 2, Su-Ling Yeh 1; 1 Department of Psychology, National Taiwan University, 2 Department of Experimental Psychology, University of Oxford

Bistable figures provide a fascinating window through which to explore human visual awareness, because a constant visual stimulus induces a dynamic alternation between two distinct percepts over time. Here we provide evidence that a background auditory soundtrack (semantically congruent with one or the other percept) can modulate people’s perception of bistable figures; we then further test whether this factor interacts with the factors of selective attention and fixation position that have previously been shown to influence the perception of bistable figures (Meng & Tong, 2004). The participants viewed the “my wife or my step-mother” figure and reported their dominant percept continuously. In Experiment 1, the participants reported seeing the old woman (young lady) for more of the time when listening to the voice of an old woman (young lady). In Experiment 2, this auditory modulation of bistable figure perception was observed regardless of where the participants were instructed to focus their fixation. In Experiment 3, attending to a specific view was found to dominate the percept of the bistable figure, and this factor overrode the modulation of bistable perception by auditory semantic congruency. These results suggest that in the process by which a conscious percept emerges when viewing a bistable figure, the modulations by semantic congruency and selective attention are both independent of the low-level factor of overt fixation position; however, the influence of selective attention is more powerful than that of auditory semantic congruency. These results also imply that the top-down factors of selective attention and semantic congruency may be weighted differently in the formation of visual awareness in ambiguous situations, such as when comparing bistable figures and binocular rivalry (see Chen, Yeh, & Spence, submitted).

Acknowledgement: This research was supported by a joint project funded by the British Academy (CQ ROKK0) and the National Science Council in Taiwan (NSC 97-2911-I-002-038).

53.433 Formal congruency and spatiotemporal proximity in multisensory integration
Elena Makovac 1 (elena.makovac@hotmail.it), Walter Gerbino 1; 1 Department of Psychology “Gaetano Kanizsa”, University of Trieste, Italy

Makovac & Gerbino (2009 a, b) reported increases of the multisensory response enhancement (MRE) when the audiovisual (AV) components of a cross-modal event are (a) formally congruent, as in the takete-maluma phenomenon, and (b) in close spatiotemporal proximity, a necessary condition for multisensory integration by the superior colliculus and other brain areas (Calvert et al., 2000). While previous studies required observers to explicitly evaluate target properties, our MRE effects were obtained by asking observers to detect the occurrence of V targets and to ignore sounds, in both A and AV trials. The data were consistent with general principles of multisensory integration (spatial rule, temporal rule, inverse effectiveness rule). In the present research we utilized a combination of implicit tasks to study three aspects of MRE effects: (1) the interaction between the structural components of cross-modal stimulation, formal congruency and spatiotemporal proximity; (2) the optimal AV asynchrony compatible with low-efficiency stimuli; (3) the relationship between superadditivity and automatic activation as criteria for defining multisensory integration. Formal congruency was manipulated by varying the degree of similarity between the sound intensity profile and the 2D visual shape of short (


in common (considering explicit, objective features) can now be subjected to the same metrics and compared according to their semantic similarity. In Experiment 1, respondents judged abstract visual patterns and pseudowords on the same set of evaluative attributes. By calculating the distance between stimuli in the 3-D evaluative space we obtained the mutual evaluative similarity of visual and verbal stimuli. In Experiment 2, respondents were asked to make explicit matches between abstract visual patterns and auditorily presented pseudowords. The results showed that cross-modal correspondences are mostly predicted by the evaluative similarity of visual and verbal stimuli. Affective evaluation appears to be the most important predictor, followed by arousal and cognitive evaluation. In conclusion, we propose a model for the prediction of cross-modal correspondences based on the semantic (evaluative) similarity of stimuli.

Acknowledgement: This research was supported by Grant #149039D from the Serbian Ministry of Science.

53.436 Divided Attention and Sensory Integration: The Return of the Race Model
Thomas U. Otto 1 (thomas.otto@parisdescartes.fr), Pascal Mamassian 1; 1 Laboratoire Psychologie de la Perception (CNRS UMR 8158), Université Paris Descartes

The sensory brain processes different features of incoming stimuli in parallel. The combination/integration of features within or across sensory modalities can often improve perception and cognition, as expected from probability summation. Interestingly, the literature on divided attention abounds with reports of integration effects that exceed probability summation. In the so-called redundant target paradigm, participants respond as quickly as possible to targets that are defined, for example, by their color (red vs. green) and/or orientation (vertical vs. horizontal). Typically, reaction times to redundant targets defined by both features are faster than to targets defined by a single feature only. In analogy to the higher probability of rolling a “small number” with two dice rather than one, probability summation or race models can in principle account for a speeding up of reaction times. However, according to the influential race-model test (Miller, 1982, Divided attention: evidence for coactivation with redundant signals, Cognitive Psychology, 14, 247-279), reaction times are even faster and performance is even better than predicted by probability summation. Consequently, race models are rejected and a benefit due to sensory integration is assumed. Here, we critically evaluate the race-model test and its interpretation. Fifteen observers participated in the experiment described above and we determined cumulative reaction time distributions based on a total of 3000 trials per condition. We show not only that the fastest reaction times are faster, but also that the slowest reaction times are slower than predicted, a finding that is neglected by Miller’s test and also in the literature. Importantly, in terms of variance, this result indicates that performance is not better but in fact worse. Hence, race models cannot be rejected. We hypothesize that the increased variance is related to capacity-limited decision processes and provide new interpretations for a large variety of studies using the redundant target paradigm.

Acknowledgement: T.U. Otto was supported by the Swiss National Science Foundation and the European Project CODDE.
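The race-model test at issue in 53.436 compares the redundant-target RT distribution with Miller's bound, F_redundant(t) <= F_color(t) + F_orientation(t), at every t. Below is a sketch of that computation on simulated data; the lognormal RT distributions and all parameter values are invented for illustration, and a true race (the minimum of two independent channels) never exceeds the bound.

import numpy as np

rng = np.random.default_rng(1)
n = 3000  # trials per condition, as in the study

# Simulated single-feature RT distributions (ms) and a true race for the
# redundant condition: the faster of two independent channels wins.
rt_color = rng.lognormal(np.log(400), 0.2, n)
rt_orient = rng.lognormal(np.log(410), 0.2, n)
rt_redundant = np.minimum(rng.lognormal(np.log(400), 0.2, n),
                          rng.lognormal(np.log(410), 0.2, n))

def ecdf(rts, ts):
    """Empirical cumulative distribution of rts evaluated at times ts."""
    return (rts[:, None] <= ts[None, :]).mean(axis=0)

ts = np.linspace(200, 800, 121)
bound = np.minimum(1.0, ecdf(rt_color, ts) + ecdf(rt_orient, ts))
violation = (ecdf(rt_redundant, ts) - bound).max()
print(f"max violation of the race-model bound: {violation:.3f}")  # about 0 here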
Multisensory processing: Synesthesia
Orchid Ballroom, Boards 437–444
Tuesday, May 11, 8:30 - 12:30 pm

53.437 Color Input into Motion Processing in Grapheme-Color Synesthetes
Katie Wagner 1 (kgwagner@ucsd.edu), David Brang 1, V.S. Ramachandran 1, Karen Dobkins 1; 1 Psychology, UCSD

Background: It has been proposed that individuals with grapheme-color synesthesia have increased levels of connectivity, particularly between V4 and the Visual Word Form Area (Hubbard and Ramachandran, 2005). To investigate whether increased connectivity may be a widespread phenomenon, we asked whether color and motion interactions are stronger in synesthetes than in controls. To this end, we used a paradigm that indexes the amount of chromatic (red/green) input to motion processing against a benchmark of luminance (light/dark) input to motion processing. Methods: We used a MOT/DET paradigm, which obtains the ratio of the contrast threshold for discriminating the direction of a moving grating (MOT) to the contrast threshold for detecting that same moving grating (DET). Typical adults exhibit MOT/DET ratios near 1.0 for luminance gratings, but closer to 2.0-4.0 for chromatic gratings, suggesting that chromatic information provides weaker input to motion mechanisms than luminance information. Our stimuli were luminance and chromatic horizontal gratings (1.0 cpd, 5.5 Hz, subtending 2.0 x 2.0°). In the MOT task, a grating was presented and participants indicated whether it moved up or down. In the DET task, participants indicated which of two intervals contained the moving grating. The relative contribution of chromatic versus luminance information to motion processing is calculated as the difference in log MOT/DET ratios for chromatic vs. luminance gratings (Diff-Ratio), with values > 0 indicating weaker chromatic input. If synesthetes have greater-than-normal color-motion interactions: 1) their chromatic MOT/DET ratios should be lower than controls’, and 2) their Diff-Ratio should be lower than controls’. Results: While both synesthetes (n = 7) and controls (n = 7) showed Diff-Ratios significantly greater than 0 (indicating weaker chromatic vs. luminance input to motion), there were no group differences in Diff-Ratio or chromatic MOT/DET. Conclusions: At present, we do not find evidence for increased interactions between color and motion in synesthetes.
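The Diff-Ratio summary used in 53.437 is a difference of log threshold ratios. A one-function sketch with invented contrast thresholds (the numbers are ours, chosen only to match the "typical adult" pattern described above):

import numpy as np

def diff_ratio(mot_chrom, det_chrom, mot_lum, det_lum):
    """Difference in log MOT/DET ratios, chromatic minus luminance.

    Values > 0 indicate weaker chromatic than luminance input to motion."""
    return np.log10(mot_chrom / det_chrom) - np.log10(mot_lum / det_lum)

# Hypothetical contrast thresholds: chromatic MOT/DET = 3.0, luminance = 1.0.
print(diff_ratio(mot_chrom=0.09, det_chrom=0.03,
                 mot_lum=0.02, det_lum=0.02))  # about 0.48, i.e., > 0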
53.438 Determinants of synesthetic color choice for Japanese characters
Michiko Asano 1 (asano@L.u-tokyo.ac.jp), Kazuhiko Yokosawa 1; 1 Department of Psychology, Graduate School of Humanities and Sociology, The University of Tokyo

The determinants of synesthetic color choice for Japanese characters were studied in six Japanese synesthetes, who report seeing color for characters, numerals and letters. Four possible determinants were investigated: visual form (script), character frequency, sound, and meaning. Three kinds of Japanese characters were used as stimuli: Hiragana, Katakana, and Kanji. Hiragana and Katakana are both Japanese phonetic characters that represent the same set of syllables, although their visual forms are dissimilar. Hiragana characters appear much more frequently in Japanese texts than do Katakana characters. Kanji characters are logographs, which represent a meaning or a concept, like a content word does. There are many homophones among Kanji characters, and many Kanji characters share the same visual components (radicals) although their sounds and meanings are dissimilar. By using Hiragana, Katakana, and Kanji characters, we could dissociate the effects of sound, visual form, character frequency, and meaning in Japanese grapheme-color synesthesia. From a palette of 138 colors, the synesthetes selected a color corresponding to each character. The experimental sessions were repeated three times to assess the participants’ consistency of color choices. The results for Hiragana and Katakana characters were remarkably consistent. This indicates that color selection depended on character sounds, not on visual form or character frequency. The color selections for Kanji numerals were remarkably consistent with those for Arabic numerals, which had been tested separately. Furthermore, Kanji characters that represent color names, and names of objects with high color diagnosticity, were represented by the corresponding colors, indicating that meaning was a strong determinant of synesthetic color choice for Kanji characters. These results suggest that sound and higher-order processing (i.e., semantics) are involved in Japanese grapheme-color synesthesia.

53.439 When the Inducing Grapheme Changes and When the Induced Synesthetic Color Changes
Suhkyung Kim 1 (everwhite@korea.ac.kr), Chai-Youn Kim 1; 1 Department of Psychology, Korea University

Background: It has been shown that conscious awareness of an inducing grapheme is necessary for synesthetic color experience (Mattingley et al., 2001; Rich & Mattingley, 2005). However, whether grapheme recognition must precede synesthetic color perception has not been addressed. Using particular grapheme pairs that entail ambiguous recognition when rotated, i.e., W-M or 6-9, we investigated the temporal relationship between grapheme recognition and synesthetic color perception. Experiment 1: In 2 separate blocks of trials, 9 synesthetes observed either the letter W(M) or the digit 6(9) followed by a pattern mask. The presented graphemes were at one of seven different angles between 0 and 180 degrees. Observers responded by pressing one of two keyboard buttons indicating the perceived identity of the grapheme in the grapheme recognition task (e.g., W or M) or the experienced synesthetic color (e.g., purple or green for SK) in the synesthetic color task. For most of the synesthetes tested, reaction time (RT) was slower in the synesthetic color task than in the grapheme recognition task. Experiment 2: In 2 separate blocks of trials, a subset of the synesthetes who participated in Experiment 1 observed either the letter or the digit rotating in a clockwise or counterclockwise direction. The initial angle of the grapheme was varied. Observers responded by pressing a button indicating the moment the grapheme changed its identity (e.g., from W to M or from M to W) or the moment the experienced synesthetic color changed (e.g., from purple to green or from green to purple for SK). For all the synesthetes tested, the pattern of results shown by perceptual latency in Experiment 2 paralleled that shown by RT in Experiment 1. To further investigate whether grapheme familiarity influences the microgenesis of synesthetic color experiences, we are now testing Korean synesthetes who see colors for both alphanumeric and Korean characters.

Acknowledgement: This work was supported by a Korea Research Foundation Grant funded by the Korean Government (KRF-2009-332-H00011)

53.440 Vividness of visual imagery predicts spatial priming in grapheme-color synesthetes
Bryan D Alvarez 1 (bryanalvarez@berkeley.edu), Lynn C Robertson 1,2; 1 University of California, Berkeley, CA, 2 VA Northern California Health Care System

Synesthesia is well understood to be an automatic perceptual phenomenon paralleling print color in some ways but differing in others. Following a study presented previously, we address this juxtaposition by asking whether synesthetic color binds to the location of an invoking grapheme in the same way as print color. We tested 17 grapheme-color synesthetes using stereo glasses to produce the perception of two planes in 3D depth. On each trial, an achromatic letter (prime) appeared for 750 msec on the near or far plane of space and participants named the immediately following color patch (probe) quickly and accurately. The prime and probe appeared in the same line of sight and either on the same or different spatial planes. The probe color was either the same as or different from the synesthetic color induced by the prime. Supporting previous work, we found faster responses for naming probe colors congruent with synesthetic colors than incongruent ones, but synesthetes as a group did not show effects of spatial priming, unlike non-synesthetes, who exhibited faster RTs when prime and probe locations were different. However, individual synesthetes showed dramatically different spatial priming effects, exhibiting a positive correlation between their spatial priming scores and mental imagery measures (Marks, 1973). Specifically, synesthetes with more vivid visual imagery were faster to name a color when the prime and probe were on the same plane, while those with weaker visual imagery showed the opposite pattern. Results from 17 non-synesthetes primed with colored letters in the same probe task showed that under the current conditions, negative spatial priming is the norm.
Thus, synesthetes with strong visual imagery appear to overcome the typical prime/probe conflict, suggesting that synesthetic color may operate through a cortical network that interacts with printed color but exists as a separate feature representation.

Acknowledgement: National Eye Institute NIH #EY16975

53.441 New results in neuroscience, behavior, and genetics of synesthesia
Stephanie Nelson 1 (snelson@bcm.edu), Molly Bray 3, Suzanne Leal 4, David Eagleman 2; 1 Department of Neuroscience, Baylor College of Medicine, Houston, USA, 2 Department of Psychiatry, Baylor College of Medicine, Houston, USA, 3 Department of Pediatrics, Baylor College of Medicine, Houston, USA, 4 Department of Human Molecular Genetics, Baylor College of Medicine, Houston, USA

Synesthesia is a phenomenon in which stimulation of one sense triggers an experience in another sense. The most common forms produce an automatic perception of color in response to a grapheme or a word, although there are many forms involving various sensory associations with smell, taste, and touch. Synesthesia is thought to occur in at least 1% of the population, and the associations are quantifiable by measuring their consistency within subjects over time. Using data from the online Synesthesia Battery (www.synesthete.org; Eagleman et al., 2007), we have analyzed the forms of synesthesia reported by almost 6,000 synesthetes. Our results indicate that synesthesia forms tend to cluster into five main groups, one of which we term colored sequence synesthesia (CSS). This clustering pattern suggests that synesthetes with one colored sequence (e.g. number-color) are likely to have others (e.g. letter-color, weekday-color, month-color) but unlikely to have a form like music-color. In an effort to elucidate the neural activity underlying CSS, we present neuroimaging data collected while showing synesthetes clips of black-and-white children’s television. Our preliminary data indicate that synesthetes and non-synesthetes process color in anatomically distinct regions. Finally, the genetic mechanisms responsible for synesthesia have been widely debated but remain largely unknown. We present data from our ongoing family linkage analysis, collected from 48 individuals in five large families. Each affected synesthete was verified for CSS using the Synesthesia Battery. Our results implicate a 23 MB region on chromosome 16. In sum, we combine data from cluster analyses, neuroimaging studies, and genetic linkage analyses to present a coherent picture of the neural basis of synesthesia, an understanding that will serve as a powerful guide to the normal operations of neural cross-talk and perception.

53.442 10 Color-grapheme synesthetes with highly similar learned associations
Nathan Witthoft 1 (witthoft@stanford.edu), Jonathan Winawer 1; 1 Department of Psychology, Stanford University

Recent work on synesthesia has begun to emphasize the role of learning in determining the particular inducer-concurrent pairings manifested in synesthesia (Rich et al., 2005; Smilek et al., 2007; Eagleman, 2009). Previously we reported on a female color-grapheme synesthete with color-letter pairings that closely resembled the colors found in a childhood letter set, strongly suggesting that an environmental stimulus can shape the development of synesthesia (Witthoft & Winawer, 2006). Here we extend this case study to a group.
We present data from an additional 9 synesthetes (4 female / 5 male) with remarkably similar color-letter associations, 8 of whom also recall, or are still in possession of, the same or a similar Fisher Price childhood letter set. All the synesthetes were raised in the US and are in a similar age group (~27-39 years old). Color matching data and behavioral performance indicative of synesthesia were largely gathered via the synesthete.org website (see Eagleman et al., 2007). Additionally, 5/10 subjects have been tested in the laboratory by ourselves or other researchers (Kim & Blake, 2006; Alvarez & Robertson, 2008). Intersubject correlations for the hues assigned to the 24 letters (I and O are excluded) are above 0.99 in some subject pairs, with several subjects choosing almost exactly the same hue for each letter (and all the colors closely matching those in the letter sets). While these data do not comment on the possible importance of genetic factors in determining who will become a synesthete (Ward & Simner, 2002; Barnett et al., 2008; Asher et al., 2009), they do add further support to the idea that, in some people, environmental stimuli can play a strong role in shaping synesthetic associations.

53.443 Motion induced pitch: a case of visual-auditory synesthesia
Casey Noble 1 (caseynoble@u.northwestern.edu), Julia Mossbridge 1, Lucica Iordanescu 1, Aleksandra Sherman 1, Alexandra List 1, Marcia Grabowecky 1,2, Satoru Suzuki 1,2; 1 Department of Psychology, Northwestern University, 2 Interdepartmental Neuroscience Program, Northwestern University

We report a novel feature-based visual-auditory synesthesia that is more elaborate than the previously reported percepts of synchronized auditory beats induced by visual flashes. CN hears specific pitches and chords when she views different motion patterns, including a perfect 5th chord when viewing rotational apparent motion, a tritone chord when the rotation is ambiguous, and complex jarring sounds when viewing dynamic random dots. We discovered that this form of synesthesia builds on simple systematic relationships between visual motion direction and auditory pitch. In the current study, we determined the tuning, the underlying coordinate system, and the sensory impact of this synesthesia. To determine the tuning, we used sinusoidal gratings of 0.12 cycles/deg moving at 22.9 deg/sec, which reliably produced sounds for CN. While viewing a moving sinusoidal grating, CN experiences a 315-Hz sound when the grating moves upward, a 254-Hz sound when it moves horizontally in either direction, and a 217-Hz sound when it moves downward. JNDs were all less than 3 Hz, indicating precise and stable interactions between motion direction and pitch; these relationships were relatively independent of speed. Interestingly, CN experiences no sounds when viewing a square-wave grating, suggesting that the underlying synesthetic interactions are specific to spatial-frequency content as well as direction of motion. We examined the underlying coordinate system of her synesthesia with 90-degree head tilt; the results indicated that CN’s synesthesia arises from head-centered processing of visual motion. Lastly, to demonstrate the sensory impact of the synesthesia, we used a 2IFC motion-coherence detection task. CN reported using synesthetically induced sounds to detect coherent motion. Indeed, her coherence-detection thresholds were lower than those of non-synesthetic control participants. These results suggest that visual processing of motion direction and auditory perception of pitch can maintain surprisingly specific and systematic neural connections.

Acknowledgement: NSF BCS0643191, NIH R01EY0181197-02S1

53.444 Electrophysiological Evidence Supporting the Automaticity of Synaesthetic Number-Forms
Michelle Jarick 1 (majarick@uwaterloo.ca), Colin Hawco 2, Todd Ferretti 2, Mike Dixon 1; 1 University of Waterloo, Waterloo, Ontario, Canada, 2 Wilfrid Laurier University, Waterloo, Ontario, Canada

For individuals with number-form synaesthesia, numbers occupy very specific and highly consistent spatial locations. The number-form synaesthete we studied here (L) experiences the numbers 1 to 10 rising vertically from bottom to top, then extending in a horizontal left-to-right direction from 10 to 20. Using a spatial cueing paradigm, we empirically confirmed L’s subjective reports of her unique number line. The digits 1, 2, 8, or 9 were centrally presented on a computer screen, followed by a target square that appeared at the bottom or top of the display. L was reliably faster at detecting targets in synaesthetically cued relative to uncued locations, whereas controls showed no response time differences. Interestingly, L’s cueing effects disappeared once the targets were misaligned with her number-forms (presented on the left and right of the display). Furthermore, L demonstrated the vertical cueing effects even at short stimulus onset asynchronies (150 ms SOAs) between cue and target onsets, suggesting her attention was automatically shifted to the cued location. Here, we used event-related brain potentials (ERPs) to provide converging evidence for L’s rapid shifts of attention. Compared to non-synaesthetes, L’s brain waves showed an early negative deflection occurring at about 200 ms (N2) at occipital and parietal sites following valid targets, reflecting an early enhancement in attention to validly cued locations. Importantly, this N2 component disappeared once the targets were misaligned with her number-forms (targets appearing on the left and right). Non-synaesthetes showed no ERP differences for detecting the targets whether they were presented vertically or horizontally. These findings substantiate L’s behavioural cueing effects for the vertical targets at short SOAs with converging electrophysiological evidence revealing early evoked potentials to validly cued locations. These findings further strengthen our claim that for this synaesthete (L), digits automatically trigger shifts in spatial attention.

Acknowledgement: This research was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) with grants to M.D. and T.F. and a graduate scholarship to M.J.

Temporal processing: Perception of time
Orchid Ballroom, Boards 445–456
Tuesday, May 11, 8:30 - 12:30 pm

53.445 The effect of luminance signal on adaptation-based duration compression
Inci Ayhan 1 (ucjtiay@ucl.ac.uk), Aurelio Bruno 1, Shin’ya Nishida 2, Alan Johnston 1,3; 1 Division of Psychology and Language Sciences, University College London, 2 Nippon Telegraph & Telephone Corporation, NTT Communication Science Laboratories, 3 CoMPLEX, University College London

Adapting to high temporal frequency luminance-modulated gratings reduces the apparent duration of a subsequently presented sub-second dynamic stimulus (Johnston, Arnold & Nishida, 2006, Current Biology, 16(5): 472-9).
Here we investigate the effect of the luminance signal on the strength of this temporal aftereffect using stimuli defined along the equiluminant S-constant axis and elevated with respect to the equiluminance plane of DKL space (Derrington, Krauskopf & Lennie, 1984, Journal of Physiology, 357: 241-65). We first found the individual equiluminance points using the minimum motion technique (Cavanagh, MacLeod & Anstis, 1987, Journal of the Optical Society of America A, 4(8): 1428-38) for different temporal frequencies and contrasts. We then eliminated the effect of adaptation on perceived speed (of a 7 Hz test) by using interleaved 5 Hz and 10 Hz adaptors, separately for different luminance levels (equiluminance and an intermediate 65 cd/m² luminance difference between the magenta and cyan grating phases). Finally, we used these individual ratios, at which no change occurred in the perceived speed of a 7 Hz test pattern, in our duration experiments. A standard grating (600 ms, 7 Hz, 0.5 cpd) was always displayed at the adapted location (in half of the trials 5° above, in half of the trials 5° below the fixation point), and the duration of a comparison (7 Hz, 0.5 cpd), presented on the unadapted side, was varied over trials (300-1200 ms) to generate a psychometric function. The PSE provided a measure of perceived duration. Both test and adaptor were either isoluminant or luminance-modulated, with a 45 cd/m² or 90 cd/m² luminance difference between the magenta and cyan phases of the chromatic gratings (magenta being darker). We found an apparent temporal compression of the luminance-modulated gratings which decreased with a reduction in luminance contrast and was no longer significantly different from zero at equiluminance. This provides further evidence for the involvement of the magnocellular system in adaptation-based compression.

Acknowledgement: The Leverhulme Trust & NTT Communication Science Laboratories

53.446 Orientation-specific flicker adaptation dilates static time
Laura Ortega 1 (lauraortegat@gmail.com), Emmanuel Guzman-Martinez 1, Marcia Grabowecky 1, Satoru Suzuki 1; 1 Northwestern University, Evanston, IL, U.S.A.

Adapting to a flickering stimulus makes a subsequently presented static stimulus (on the order of 500-1000 ms) appear longer. This flicker-based time-dilation aftereffect has been thought to be mediated by central mechanisms such as increased arousal and attention. We investigated a potential role of low-level visual adaptation in this aftereffect. If flicker adaptation of low-level visual neurons contributes to subsequent temporal dilation, the aftereffect should be orientation specific, stronger when the orientations of the adaptor and test stimuli are the same than when they are different. The arousal hypothesis predicts no orientation specificity; the attention hypothesis predicts the opposite effect, because attention capture by an orientation change should make the different-orientation test stimulus appear longer; and contrast adaptation also predicts the opposite effect, because the less visible same-orientation test stimulus should appear shorter. We used vertical and horizontal Gabors (4.17° radius, 4.32 cycles/deg) as the adaptor (flickered at 5 Hz or static; lasting 5000 ms) and test (static; lasting from 200 to 800 ms) stimuli. The perceived duration of the test stimulus was measured using a 2AFC temporal bisection task (shorter PSEs indicating temporal dilation).
Overall, the test stimulus appeared longer (by 67 ms) when preceded by a flickered adaptor than by a static adaptor, replicating the previous results. Crucially, the test stimulus appeared even longer (by 31 ms) when the flickered adaptor and the test Gabor had the same orientation than when they were orthogonal. These results suggest that flicker-based adaptation of orientation-tuned visual neurons contributes to temporal dilation over and above any effects of arousal, attention capture, and/or contrast adaptation. This orientation-specific time-dilation aftereffect is distinct from the recently reported location-specific but orientation-independent time-shrinkage aftereffect (from a rapidly flickering adaptor to a flickering test). Time perception thus depends on the adaptive states of low-level visual processes in multiple ways.

Acknowledgement: NSF BCS0643191, NIH R01EY018197-02S1, CONACyT EP 0094258

53.447 Influences of stimulus predictability on its perceived duration
Aurelio Bruno 1 (a.bruno@ucl.ac.uk), Inci Ayhan 1, Alan Johnston 1,2; 1 Department of Cognitive, Perceptual and Brain Sciences, University College London, 2 CoMPLEX, University College London

Our ability to judge how long a visual stimulus lasts has been proposed to rely on an internal content-dependent clock, which determines apparent duration using a “predict and compare” strategy. For example, a prediction of what the visual world will look like in 100 ms is continuously compared to the sensory input. When the prediction matches the visual input, the system determines that 100 ms have passed and resets the prediction (Johnston, in Attention and Time, edited by Nobre & Coull (OUP, Oxford), in press). If this is true, it is reasonable to expect that the degree of predictability of a stimulus will influence its perceived duration. In this experiment, we asked subjects to compare the relative duration of a 10 Hz drifting comparison stimulus (variable duration across trials) with a standard stimulus (fixed duration, 600, 1200 or 2400 ms) of different degrees of predictability in different sessions. The standard could be static, drifting at 10 Hz (these two conditions are highly predictable) or mixed (a combination of static and drifting intervals). In this last condition the degree of predictability of the stimulus was low: a static interval always followed a drifting one, but we assigned a duration between 100 and 200 ms to each subinterval, randomly, within and across trials. For all standard durations, the unpredictable (mixed) stimulus looked significantly compressed (~20% reduction). The drifting and the static stimuli differed from their actual duration only for one standard duration in each case (a mild expansion for drifting at 2400 ms, compression for static at 1200 ms, as predicted by Brown, 1995, Perception & Psychophysics, 57(1): 105-116). These results support the idea that interfering with the predictability of a stimulus may disrupt the continuity of a “predict and compare” mechanism and therefore influence its apparent duration.
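The "predict and compare" clock in 53.447 can be caricatured in a few lines: a forward model predicts the input a fixed horizon ahead, and the clock ticks whenever the prediction is confirmed, so an unpredictable stimulus yields fewer ticks and hence a shorter apparent duration. This is our illustration of the idea, not the authors' model; the linear extrapolation, horizon, and tolerance are arbitrary choices.

import numpy as np

def clock_ticks(stim, horizon=3, tolerance=0.15):
    """Count confirmed predictions: each match is read as one elapsed horizon."""
    ticks = 0
    for t in range(1, len(stim) - horizon):
        # Simple forward model: linear extrapolation of the last change.
        prediction = stim[t] + horizon * (stim[t] - stim[t - 1])
        if abs(prediction - stim[t + horizon]) < tolerance:
            ticks += 1
    return ticks

rng = np.random.default_rng(2)
predictable = np.sin(np.linspace(0, 20 * np.pi, 600))  # smooth drifting stimulus
unpredictable = rng.normal(size=600)                   # mixed/random stimulus
print(clock_ticks(predictable), clock_ticks(unpredictable))
# The predictable stimulus accumulates far more ticks, i.e., appears longer.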
53.448 Is Subjective Duration a Signature of Coding Efficiency?
David Eagleman 1,2 (eagleman@bcm.edu), Vani Pariyadath 1; 1 Department of Neuroscience, Baylor College of Medicine, 2 Department of Psychiatry, Baylor College of Medicine

The perceived duration of a stimulus can be modulated by novelty and repetition (Pariyadath & Eagleman, 2007, 2008; Eagleman and Pariyadath, 2009). For example, in a repeated presentation of auditory or visual stimuli, an oddball stimulus of equivalent physical duration appears to last longer, a phenomenon known as the oddball effect. We have proposed that this duration illusion reflects the neural phenomenon of repetition suppression, the diminishment of the neural response to a stimulus that is repeated, suggesting that the illusion reflects not a subjective expansion of the oddball but rather a contraction of the repeated stimuli. In support of this hypothesis, we show that patient populations with impaired repetition responses, such as in schizophrenia, perceive duration illusions differently from healthy controls. We further present neuroimaging data indicating that repetition suppression in stimulus-specific cortical areas influences subjective duration. Results from our lab and several scattered findings in the literature can be compiled to demonstrate that, in general, any stimulus manipulation that increases response magnitude (such as increasing stimulus size, brightness or predictability) leads to a concurrent increase in the perceived duration of the stimulus (Eagleman and Pariyadath, 2009). We propose the novel hypothesis that the experience of duration is a signature of the amount of energy expended in representing a stimulus, that is, of coding efficiency.

Acknowledgement: NIH NS053960

53.449 Individual differences in time perception indicate different modality-independent mechanisms for different temporal durations
Sharon Gilaie-Dotan 1,2 (shagido@gmail.com), Ryota Kanai 1, Geraint Rees 1,2; 1 Institute of Cognitive Neuroscience, UCL, London, UK, 2 Wellcome Trust Centre for Neuroimaging, UCL, London, UK

The ability to estimate elapsed time is fundamental to human behavior, and this ability varies substantially across individuals. Its neural basis remains debated. While some evidence points to a central, modality-independent ‘clock’ underpinning this ability, other empirical data suggest sensory modality-specific ‘clocks’. Whether different brain structures are involved in the estimation of shorter and longer temporal intervals also remains unclear. Here, we took a new approach to these questions by investigating individual differences in the ability to estimate the duration of stimuli in a large group of healthy observers, and their relationship to brain structure. We examined how well individuals could judge the duration of a stimulus presented for either a short (~2 s) or longer (~12 s) duration in either the visual or the auditory modality. We found substantial variation in accuracy across the group of participants; but while the variability in participants’ performance was highly consistent across modalities, it was much weaker across the different estimation durations. We then examined whether these individual differences in behavioral accuracy were reflected in differences in gray matter density, using a voxel-based morphometry analysis applied to structural MRI images of our participants’ brains. Taken together, our data suggest the existence of different modality-independent mechanisms for judging different temporal durations.

Acknowledgement: This work was supported by the European Union and by the Wellcome Trust.

53.450 Audiovisual integration: the duration of uncertain times
Jess Hartcher-O’Brien 1 (jhartcher@tuebingen.mpg.de), Max di Luca 1, Marc Ernst 1; 1 Max Planck Institute for Biological Cybernetics, Tuebingen, Germany

Despite continual temporal discrepancies between sensory inputs, signals arising from the same event are bound together into a coherent percept. It has been suggested that multiple timekeepers monitor the different sensory streams, producing differences in the perceived duration of events. Given this, what integration strategy is adopted for combining sensory information in the time domain? Specifically, if the brain has information about the duration of an event from more than one source, can the uncertainty of the duration estimate decrease, and can the Maximum Likelihood Estimate (MLE) model predict such a change? Using a 2AFC procedure, participants had to judge which interval was longer (1st or 2nd) for auditory, visual and audiovisual stimuli. Each trial contained 2 intervals: a standard stimulus (sampled from one of three durations), and a comparison interval whose duration changed randomly relative to the standard stimulus duration. The reliability of the auditory stimulus was manipulated to produce the unimodal weighting scheme. The data were fit with a cumulative Gaussian psychometric function from which the PSE and JND were extracted. Results for unimodal trials showed JND changes that depended upon the duration of the standard, according to Weber’s law. JND values also decreased with decreases in signal noise. Comparison of the present bimodal results with MLE predictions revealed optimal integration of auditory and visual duration cues. Additionally, the results show that the integration of uncertain visual and auditory duration signals is a weighted average of these signals; that is, shifts in perceived duration (PSE) tended to follow MLE predictions, with shifts toward the more reliable unimodal signal. These results are the first to demonstrate ‘optimal’ integration of sensory information in the time domain, and they contradict other studies applying MLE to this stimulus feature.
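The MLE predictions tested in 53.450 follow from reliability weighting: each modality's weight is its normalized inverse variance (with the JND standing in for the standard deviation), the bimodal PSE is the weighted average of the unimodal PSEs, and the bimodal JND is smaller than either unimodal JND. A sketch with invented numbers (the JNDs and PSEs below are ours, not the reported data):

import numpy as np

def mle_prediction(jnd_a, jnd_v, pse_a, pse_v):
    """MLE cue combination: weights are normalized inverse variances."""
    w_a = (1 / jnd_a**2) / (1 / jnd_a**2 + 1 / jnd_v**2)
    pse_av = w_a * pse_a + (1 - w_a) * pse_v
    jnd_av = np.sqrt((jnd_a**2 * jnd_v**2) / (jnd_a**2 + jnd_v**2))
    return pse_av, jnd_av

# Hypothetical unimodal estimates for a ~500 ms standard (per Weber's law,
# the JNDs would scale with the standard duration):
pse_av, jnd_av = mle_prediction(jnd_a=60, jnd_v=120, pse_a=490, pse_v=520)
print(pse_av, jnd_av)  # 496.0, ~53.7: pulled toward audition, less uncertain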
It is also possible to make a distinction between the timeat which a neural representation of the event is formed (brain time), andthe time at which we experience that event to have occurred (event time).Nevertheless, we expect that temporal relationships that exist in the worldshould be paralleled in visual experience. Here, we report that two synchronizedrunning clocks appear temporally offset when one is presentedin a region of the visual field previously adapted to a sequentially expandingand contracting concentric grating. After 20Hz adaptation, a clock inan adapted region appeared advanced relative to a clock in an unadaptedregion, whereas there was no such effect after 5Hz adaptation. The timeshiftinduced by adaptation cannot have been mediated by changes in theperceived speed of the clock, as 20Hz adaptation decreased, and 5Hz adaptationincreased, its apparent speed. When comparing the two runningclocks, observers were required to divide attention between two locations.However, the same pattern of time-shifts was also evident when observersreported the time on a single clock when exogenously cued by a visual transient.In a final experiment, we found that reaction times on a clock-handposition discrimination task presented in an adapted region did not differfor 5Hz or 20Hz adaptation, showing that the absolute time required forvisual information from an adapted region to be accessed was unaffectedby adaptation. Altogether, our findings show that it is possible to inducespatially localized time-shifts in visual experience. These shifts are not aresult of changes in processing latency. Rather, they indicate a decouplingof visual time representation and the timing of concurrent visual events.53.452 Does audiovisual temporal recalibration store withoutstimulation?Tonja-Katrin Machulla 1 (tonja.machulla@tuebingen.mpg.de), Massimiliano Di Luca 1 ,Marc O. Ernst 1 ; 1 Max Planck Institute for Biological Cybernetics, TuebingenRecent studies have investigated adaptation to temporal discrepanciesbetween different sensory modalities by first exposing participants to asynchronousmultisensory signals, and subsequently assessing the magnitudeof the adaptation effect (the size of the shift in subjective simultaneity).Although never reported, there is reason to assume that the strength of theadaptation effect declines during this measurement period. Usually, shortre-exposures are interleaved with testing to prevent such declining. In thepresent study, we show that a decrease in the strength of adaptation stillcan take place, even when a common re-exposure procedure is used. In asecond experiment, we investigated whether the observed decline is due to:(1) a dissipation of adaptation with the passage of time or, (2) a new adapta-272 <strong>Vision</strong> <strong>Sciences</strong> <strong>Society</strong>See page 3 for Abstract Numbering System


53.452 Does audiovisual temporal recalibration store without stimulation?
Tonja-Katrin Machulla1 (tonja.machulla@tuebingen.mpg.de), Massimiliano Di Luca1, Marc O. Ernst1; 1Max Planck Institute for Biological Cybernetics, Tuebingen
Recent studies have investigated adaptation to temporal discrepancies between different sensory modalities by first exposing participants to asynchronous multisensory signals, and subsequently assessing the magnitude of the adaptation effect (the size of the shift in subjective simultaneity). Although never reported, there is reason to assume that the strength of the adaptation effect declines during this measurement period. Usually, short re-exposures are interleaved with testing to prevent such a decline. In the present study, we show that a decrease in the strength of adaptation can still take place even when a common re-exposure procedure is used. In a second experiment, we investigated whether the observed decline is due to (1) a dissipation of adaptation with the passage of time or (2) a new adaptation induced by the test stimuli. We find that temporal adaptation does not dissipate with time but is stored until new sensory information, i.e., stimuli that differ from those used during the adaptation procedure, is presented. An alternative explanation, namely that adaptation decays over time but is re-established before the first test trial due to the experimental procedure we chose, is addressed in a control experiment. This finding is discussed in terms of Helson's adaptation level (AL) theory [1947, Adaptation-level as frame of reference for prediction of psychophysical data. The American Journal of Psychology, 60, 1-29], according to which the null point of any perceptual dimension, in our case the perception of simultaneity on the dimension of temporal order, is a summarizing statistic of all stimuli presented in the past. Any single stimulus pulls the AL toward its own value, and any single stimulus is judged as though it were being compared with the current AL.

53.453 Color-motion asynchrony depends on stimulus repetition
Thomas Sprague1 (tsprague@cpu.bcm.edu), David Eagleman1,2; 1Department of Neuroscience, Baylor College of Medicine, 2Department of Psychiatry, Baylor College of Medicine
To craft a useful model of the world, the visual system must combine information about different stimulus features processed at different times in different networks in the brain. This perceptual binding can be fooled by certain types of visual stimuli. For example, when two stimulus attributes (e.g., color and motion) are simultaneously alternated between two states (e.g., red/upward motion and green/downward motion), the color is perceived as changing ~80 ms before the motion direction. Previous investigations into this color/motion asynchrony (CMA) illusion have used a repetitive stimulus which alternates between two colors. In contrast, we here measure perceptual asynchrony using repeated (grey-blue-grey-blue-…) and random (grey-blue-grey-orange-…) color sequences paired with alternating directions of vertical motion. The CMA was found to be 17% smaller in the randomized condition (68 ms) than in the repeated condition (82 ms). This result suggests that the asynchrony illusion is related to, or at least modified by, neural repetition suppression—the phenomenon of a diminishing neural response to a repeated stimulus. Using functional neuroimaging, we measured the BOLD signal amplitude in the posterior fusiform gyrus to be 19.3% smaller in the repeated condition than in the random condition. All reported characteristics of the CMA illusion appear to be explained by a model of perceptual decision-making in which neural evidence for each alternative, defined as the concurrent response of the relevant color and motion populations, is accumulated over time and compared. The alternative with the greater evidence is the end result of the decision. In this model, a change in the relative latencies or firing rates of these signals can lead to a change in the final perceptual decision. Our findings demonstrate that stimulus repetition—and thus the amplitude of neural responses—is an important factor in the CMA illusion.
Acknowledgement: NIH R01 NS053960
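The decision-making account above can be illustrated with a toy accumulator. The step-function responses, the 80 ms color-motion latency gap, the lowered rate for repeated colors, and the multiplicative combination rule are all assumptions made here for illustration, not details taken from the abstract:

    import numpy as np

    dt = 0.001
    t = np.arange(0.0, 0.3, dt)                     # a 300 ms accumulation epoch

    def population(onset_s, rate):
        """Toy population response: silent until its latency, then a flat rate."""
        return np.where(t >= onset_s, rate, 0.0)

    def evidence(color_rate, color_latency=0.06, motion_latency=0.14):
        """Evidence for one alternative: the accumulated concurrent (here,
        multiplied) responses of its color and motion populations."""
        joint = population(color_latency, color_rate) * population(motion_latency, 1.0)
        return joint.sum() * dt

    # Repetition suppression is modeled as a lower rate for the repeated color
    print(evidence(color_rate=0.8), evidence(color_rate=1.0))

In such a model, weakening one signal shifts both the total evidence and the time at which one alternative overtakes the other, which is how a rate change can alter the final perceptual decision.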
53.454 The spatial selectivity of neural timing mechanisms for tactile events
Alice Tomassini1 (a.tomassini@studenti.hsr.it), Monica Gori1, David Burr2,3, Giulio Sandini1, Concetta Morrone4; 1Istituto Italiano di Tecnologia, Via Morego 30, 16163 Genova, Italy, 2Dipartimento di Psicologia, Università degli Studi di Firenze, Via S. Salvi 12, 50125 Florence, Italy, 3Institute of Neuroscience, CNR - Pisa, Via Moruzzi 1, 56124 Pisa, Italy, 4Dipartimento di Scienze Fisiologiche, Facoltà di Medicina, Università di Pisa
Adaptation studies [Johnston et al., 2006; Burr et al., 2007] suggest that visual events are timed by multiple, spatially selective mechanisms anchored in real-world rather than retinal coordinates [Burr et al., 2007]. To test whether this is a general property of event timing, we investigated timing mechanisms for touch, using a paradigm similar to that used in vision [Burr et al., 2007]. Subjects adapted to tactile movement by resting their right index finger on a corrugated grating etched on a wheel moving at 15 cm/sec (45 Hz). After adaptation, subjects compared the duration of a test stimulus (a 22 Hz moving grating of variable duration) presented to the adapted hand to a probe presented to the index finger of the left hand for 600 ms after a 500 ms pause. Three different conditions were examined: full adaptation, where the test stimulus was presented to the same index finger in the same position as the adaptor; dermotopic adaptation, where the test was presented to the index finger in a different position in space; and spatiotopic adaptation, where the test was presented to the middle finger moved to the same spatial position as the adaptor. Both perceived speed and perceived duration of the tactile stimulus were affected by adaptation. When the speed of the test was adjusted to compensate for the effects of adaptation, the effects on event time in the dermotopic condition were minimal, while the full and spatiotopic adaptation conditions showed large reductions in perceived duration, up to 40%. These results suggest that, like visual events, tactile events are timed by neural mechanisms that are spatially selective, not in receptor coordinates but in external or body-centered coordinates. This may be important in constructing and updating a body representation within which tactile events are timed.

53.455 The curse of inconsistent auditory-visual perceptual asynchronies
Daniel Linares1 (danilinares@gmail.com), Alex Holcombe1; 1School of Psychology, University of Sydney
Neurophysiological recordings indicate that sensory latencies are shorter for auditory than for visual signals. Whether this processing advantage for sounds causes them to be perceived earlier has been studied using synchrony and temporal order tasks with a click and a flash, but results have been inconsistent across both tasks and individuals. Some or all of the inconsistency may result from temporal attraction and repulsion effects (e.g. temporal ventriloquism) of a click and flash presented close in time. We probed relative perceptual latency in two tasks for which auditory-visual interactions should be smaller or absent. METHODS. In the first, a flash-lag task, participants reported the location of a moving object at the time of a click or a flash. Relative perceptual latency was inferred from the difference in reported positions between the click and the flash conditions. The second task was to judge which of two intervals was longer. A click and a flash were used to bound the intervals, which were several hundreds of milliseconds long to avoid auditory-visual interactions or sensory integration. A shorter perceptual latency for clicks should yield a longer perceived duration for click-flash than for flash-click intervals. RESULTS. Across several individuals, the apparent perceptual latency difference for the flash and the click was inconsistent between our duration and flash-lag tasks, as well as for order and synchrony judgments. CONCLUSION. The absence of a consistent perceptual correlate of neural latency differences is apparently not due solely to sensory interactions between the click and the flash. The temporally extended nature of responses may play a role. Although physically brief, a click and a flash yield sustained neural responses with separate onset and offset transients. Differences in the use of these responses in different tasks and across individuals may result in the large variability in perceptual latencies observed.
Acknowledgement: Supported by an Australian Research Council Discovery Project to AH
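The inference step in the flash-lag task above is simple enough to state in code: with a known object speed, a difference in reported positions converts directly into a latency difference. This is a hypothetical illustration with invented numbers, not the authors' analysis:

    def latency_difference_s(pos_click_deg, pos_flash_deg, speed_deg_per_s):
        """Convert the difference in reported positions of the moving object
        (click vs. flash conditions) into a relative perceptual latency."""
        return (pos_click_deg - pos_flash_deg) / speed_deg_per_s

    # Invented numbers: a 0.5 deg report difference at 10 deg/s of object motion
    print(latency_difference_s(0.5, 0.0, 10.0))   # 0.05 s, i.e. the click leads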
53.456 Visually-evoked but context-dependent distortions in time perception
Michael Esterman1,2 (esterman@jhu.edu), Leon Gmeindl1; 1Department of Psychological and Brain Sciences, Johns Hopkins University, 2VA Boston Healthcare System
Previous results indicate that the rate of change within some stimulus dimensions (e.g., luminance) reliably influences subjects' reports of stimulus duration: for example, rapid rates of change result in overestimated event duration. What remains unclear, however, is the degree to which these bottom-up influences on behavior reflect distortion in the perception of time passing, rather than distortion in retrieval from short-term memory (STM). Specifically, when subjects must either compare the durations of successive stimuli or reproduce stimulus duration – two common experimental paradigms – they may need to "replay" stimuli held in STM. Furthermore, replaying stimuli from STM may take longer for more rapidly changing stimuli, resulting in overestimated duration. Two experiments reported here minimized the need for subjects to hold stimulus durations in STM. The results provide evidence for online, visually-evoked distortions in time perception. However, in subsequent studies we found that distortions in time perception were not tied to absolute rates of change; stimuli with identical rates of change were judged to have different subjective durations depending on context and subjects' expectations. These findings indicate that distortions in time perception arise from an interaction between bottom-up and top-down influences.


Development: Early
Vista Ballroom, Boards 501–512
Tuesday, May 11, 8:30 - 12:30 pm

53.501 Perception of the Müller-Lyer illusion in 3- to 8-month-old infants
Yuka Yamazaki1 (aphrodite1440@yahoo.co.jp), Midori Takashima1, So Kanazawa2, Masami K. Yamaguchi1,3; 1Chuo University, 2Japan Women's University, 3PRESTO, JST
Developmental studies have examined the perception of the Müller-Lyer illusion mainly in childhood (Predebon, 1985; Gentilucci et al., 2001). However, there are no studies of infants' perception of the Müller-Lyer illusion. In the present study we investigated 3-8-month-old infants' perception of the Müller-Lyer illusion by using a familiarization paradigm. A total of 36 Japanese infants aged 3-4, 5-6 and 7-8 months participated in this study. The experiment consisted of three phases, namely the pre-test, the familiarization trial, and the post-familiarization test. In the familiarization trial, two identical Müller-Lyer illusion figures were presented. In the pre-test and the post-familiarization test, a figure with lines of the same length and a figure with lines of different lengths were presented side by side. If infants could perceive the Müller-Lyer illusion, they were expected to show a novelty preference for the same-length figure in the post-familiarization test. Results of the familiarization trial showed that all infants habituated to the familiarization display. A preference score was calculated for each infant in the pre-test and post-familiarization test. This was done by dividing the infant's looking time to the same-length figure during the two test trials by the total looking time over the two test trials, and then multiplying this ratio by 100. To determine whether infants perceived the Müller-Lyer illusion in the familiarization display, we conducted paired t-tests on the preference score for the same-length figure between the pre-test and post-familiarization test. This analysis revealed that 5-6-month-old and 7-8-month-old infants looked significantly longer during post-habituation trials (5-6 months: t(11) = 3.51, p
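The preference-score computation and paired t-test described above amount to the following sketch. The looking times are invented, and scipy's ttest_rel stands in for whatever software the authors actually used:

    from scipy import stats

    def preference_score(look_same_s, look_total_s):
        """Looking time to the same-length figure over total looking time, x100."""
        return 100.0 * look_same_s / look_total_s

    # Invented looking times (seconds) for three infants, pre vs. post
    pre = [preference_score(a, b) for a, b in [(4.1, 9.0), (5.0, 10.2), (3.8, 8.0)]]
    post = [preference_score(a, b) for a, b in [(6.5, 9.1), (7.2, 10.0), (6.0, 8.3)]]
    t, p = stats.ttest_rel(pre, post)   # paired t-test, pre vs. post scores
    print(t, p)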


53.505 Chromatic (Red/Green) and Luminance Contrast Sensitivity in Monozygotic and Dizygotic Twin Infants
Rain Bosworth1 (rain@ucsd.edu), Marie Chuldzhyan1, Karen Dobkins1; 1Department of Psychology, University of California, San Diego
Purpose: To determine the extent to which contrast sensitivity (CS) development is governed by genetic mechanisms vs. environment, we compared CS between pairs of twin infants and pairs of unrelated infants. If genetics have a strong influence on CS, Monozygotic (Mz) twin siblings should be more similar (and more strongly correlated) than Dizygotic (Dz) twin siblings, and both Mz and Dz twins should show greater correlations than unrelated infant pairs. By contrast, if genetics have little influence, correlations should be the same for Dz and Mz twins. In this latter scenario, if both Mz and Dz twins show greater correlations than unrelated infant pairs, this suggests a role of shared environment. The current study measured Luminance (light/dark) and Chromatic (red/green) CS to assess sensitivity of the Magnocellular and Parvocellular pathways, respectively. Methods: Ten Mz and 26 Dz twin pairs were tested (mean age = 4.5 ± 1.5 mos). Zygosity was assessed using a questionnaire and cheek swab kits. Luminance and Chromatic CS were obtained for sinusoidal gratings using forced-choice preferential looking (~200 trials per infant; 0.27 cycles/degree; 4.2 Hz). Results: Multiple regression was conducted on 100 runs of random Twin-1/Twin-2 orderings. Results indicated that the CS of one twin predicted 35-40% of the variance (p
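A minimal sketch of the random-ordering analysis described above, under the simplifying assumption of one-predictor regression (so that variance explained reduces to the squared correlation); the twin-pair values are invented for illustration:

    import numpy as np

    rng = np.random.default_rng(1)

    def mean_variance_explained(pairs, n_orderings=100):
        """Average R^2 for predicting one twin's contrast sensitivity from the
        other's, over random Twin-1/Twin-2 orderings within each pair."""
        pairs = np.asarray(pairs, dtype=float)
        r2 = []
        for _ in range(n_orderings):
            flip = rng.random(len(pairs)) < 0.5
            x = np.where(flip, pairs[:, 1], pairs[:, 0])
            y = np.where(flip, pairs[:, 0], pairs[:, 1])
            r2.append(np.corrcoef(x, y)[0, 1] ** 2)
        return float(np.mean(r2))

    # Invented log contrast-sensitivity values for five twin pairs
    print(mean_variance_explained([(1.2, 1.3), (0.9, 1.0), (1.5, 1.4),
                                   (1.1, 1.2), (0.8, 0.9)]))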


…pattern of cortical regions with a tool preference. A whole-brain group comparison indeed showed that there was no difference across age. Even young children thus activate areas that are part of the dorsal grasping circuit when they passively look at tools. In addition, a behavioural study with the same participants showed that affordances of graspable objects automatically influence actions in an adult-like manner from 6 years onwards. We thus report developmental consistency of the link between tools and actions from early childhood onwards, both in cortical organization and behaviour. These results are consistent with two possible developmental paths: (1) in contrast to the face network, which keeps developing until at least 10 years of age, experience before the 6th year of life is sufficient for the tool network to stabilize into an adult-like organization, or (2) automatic processing of affordances is present at birth. Future directions will be discussed.
Acknowledgement: European Commission Early Stage Career Marie Curie Fellowship fMEST-CT-2005-020725

53.510 Smooth Pursuit Eye Movements and Depth from Motion Parallax in Infancy
Elizabeth Nawrot1 (nawrot@mnstate.edu), Mark Nawrot2, Albert Yonas3; 1Department of Psychology, Minnesota State University Moorhead, 2Center for Visual Neuroscience, Department of Psychology, North Dakota State University, 3Department of Psychology, Institute of Child Development, University of Minnesota
Motion parallax (MP) is a kinetic, monocular cue to depth that relies on both retinal image motion and a pursuit eye movement signal. With MP, depth sign is based on the direction of the smooth pursuit eye movement signal: retinal motion in the same direction as the pursuit signal is perceived as nearer than fixation, while retinal motion in the opposite direction is perceived as farther away than fixation (M. Nawrot & Joyce, 2006). In previous research to understand the development of MP in infants, we (E. Nawrot, Mayo, & M. Nawrot, 2009) used an infant-control habituation procedure with an MP stimulus to determine the average age of dishabituation to a depth-reversed test stimulus. Dishabituation to the change in depth sign is evidence for depth discrimination from MP. Now, our goal is to determine when the developing smooth pursuit system has sufficiently matured in infancy and then directly measure pursuit eye movements in relation to a motion parallax task. We presented 12-20-week-old infants with both a depth-from-MP task and a visual tracking task designed to elicit smooth pursuit (SP). The MP stimulus and procedure are identical to those of previous research (E. Nawrot, Mayo, & M. Nawrot, 2009). Tracking is elicited with a schematic "happy face" that translates at 10 deg/sec. Eye movements are recorded using a Tobii Systems X120 Eye Tracker. We expect to find that SP gain (eye velocity/target velocity) increases across this age range and that pursuit maturity will correlate with the onset of sensitivity to MP. Data collected from 16 infants so far support the hypothesis that depth from MP requires maturation of SP. In general, younger infants demonstrate more saccadic and lower-gain eye movements, without MP, while older infants demonstrate more smooth pursuit tracking of the stimulus and MP.
Acknowledgement: This research is supported by NICHHD R15HD058179 (E.S.N.)
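Pursuit gain, as defined above, is simply the ratio of eye velocity to target velocity; a short sketch with invented velocity samples:

    import numpy as np

    def pursuit_gain(eye_velocity_deg_s, target_velocity_deg_s=10.0):
        """Smooth-pursuit gain: eye velocity over target velocity (1.0 = perfect)."""
        return np.mean(eye_velocity_deg_s) / target_velocity_deg_s

    # Invented velocity samples: a younger vs. an older infant
    print(pursuit_gain([4.2, 5.1, 4.8]))   # ~0.47, low-gain tracking
    print(pursuit_gain([8.9, 9.4, 9.1]))   # ~0.91, near-mature pursuit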
53.511 A salience-mapping method for testing infants' visual working memory for speed vs. luminance
Erik Blaser1 (erik.blaser@umb.edu), Zsuzsa Kaldy1, Henry Lo1, Marisa Biondi1; 1Psychology, UMass Boston
Background. Research on infant cognition has long been concerned with how infants process static vs. moving objects (e.g. Van de Walle & Spelke, 1996; Rakison & Poulin-Dubois, 2002). We are interested in comparing infants' visual working memory (VWM) for speed and luminance. Here we focus on our revised 'salience-mapping' technique (Kaldy & Blaser, 2009) that allows us to generate comparison objects with iso-salient differences from a common baseline object, thereby ensuring fair VWM tests. Methods. Subjects' age was 5;0-6;30. A Tobii T120 eye-tracker measured infants' gaze direction. Experiment 1 (ISM): Salience was calibrated in a preferential looking paradigm by pitting a baseline object (a slowly rotating green star) against a range of objects that increased either in luminance or in speed of rotation. Salience functions were obtained for each of the dimensions. We chose speed and luminance values that were at the 75% iso-salience level. In this way we defined three objects that had the following relationship: the salience difference between the baseline and the luminance comparison and between the baseline and the speed comparison was equal. Experiment 2 (VWM): In this in-progress experiment, two of the three such-defined objects are presented for 3.5 seconds. The two objects disappear for 2 seconds, then reappear, but with one changed in luminance or speed (by the previously calibrated amount) while the other reappears unchanged. Preference, determined from looking time, for the changed (vs. unchanged) object is evidence for memory. Results. Iso-salient differences for luminance and motion were successfully measured in Experiment 1 using our revised salience-mapping technique. While data collection for Experiment 2 is ongoing, we expect better VWM for motion as opposed to luminance. Discussion. In service to VWM experiments, we demonstrated an innovative method for producing psychophysically comparable stimulus differences for infants along the dimensions of speed and luminance.
Acknowledgement: This research was supported by National Institutes of Health Grant 1R15EY017985-01.

53.512 The neural correlates of imitation in children
Angie Eunji Huh1 (anghuh@gmail.com), Susan Jones1, Karin James1; 1Psychological and Brain Sciences, Indiana University Bloomington
This study examines the association between action and perception in the development of the human mirror system (HMS) in children from 4-7 years of age. Imitation is one mechanism that may promote associations between action and perception in the developing brain. Neuroimaging studies in adults have found activation of the same 3 areas in the brain, termed the Human Mirror System, both when participants imitate an action and when they are imitated. Using fMRI, we compared neural activation patterns in children during action production and observation, both in isolation and in the context of imitation. We hypothesized that because both perception and action are required for imitation, the HMS would not be recruited during action or perception alone. Results revealed no overlapping activation in the 3 core areas of the HMS during observation and production of the same actions in isolation. In addition, the children's response pattern during imitation was somewhat different from the pattern previously shown in adults.
Unlike adults, children showed no activation in the inferior parietal lobule during imitation tasks but, similar to adults, did recruit the inferior frontal gyrus and the superior temporal sulcus. These results demonstrate that imitation recruits different brain systems in the adult than in the child. We speculate that imitation recruits a temporal-parietal-frontal pathway in adults and a more direct temporal-frontal pathway in the child.

Perception and action: Mechanisms
Vista Ballroom, Boards 513–522
Tuesday, May 11, 8:30 - 12:30 pm

53.513 Does an auditory distractor allow humans to behave more randomly?
Yoshiaki Tsushima1 (tsushima@wjh.harvard.edu), Ken Nakayama1; 1Psychology, Harvard University
It is well known that human beings are poor random generators (Wagenaar, 1972). For example, people have difficulty in creating random number sequences. What underlies this behavioral tendency? To examine this, we investigated the kinds of environments that can alter the ability to generate randomness. Subjects selected and mouse-clicked three different buttons in a designated field on the computer display (a 4 x 4 grid of square buttons). On each trial, subjects were asked to make a three-button combination that they had never created before, that is, "a new combination". One subject group did the task while listening to the radio, the other group without listening to the radio. The degree of randomness was assessed quantitatively. We find that the average level of randomness of the group listening to the radio is significantly higher than that of the group without the radio. Although the performance of the group with the radio was still worse than the pseudo-random combinations generated by a computer, the auditory distractor led to more random combinations. The present results are in accord with previous findings showing that people are poor random generators. At the same time, they suggest that reducing a subject's concentration on the task enhances the ability to generate randomness, or attenuates suppression of more stereotyped behavior. However, to understand the mechanisms underlying this phenomenon, further psychophysical and physiological experiments are required.
Acknowledgement: JSPS Postdoctoral Fellowship for Research Abroad
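The abstract does not specify how the degree of randomness was quantified; one common choice, sketched here with invented data, is the normalized Shannon entropy of the produced combinations:

    from collections import Counter
    from math import log2

    def randomness_score(combos):
        """Shannon entropy of the produced three-button combinations, divided
        by the maximum possible for this many trials (1.0 = no repeats)."""
        counts = Counter(frozenset(c) for c in combos)
        n = len(combos)
        h = -sum((k / n) * log2(k / n) for k in counts.values())
        return h / log2(n)

    # Invented data: subject A repeats one combination, subject B never repeats
    print(randomness_score([(1, 2, 3), (1, 2, 3), (4, 5, 6), (1, 2, 3)]))  # ~0.41
    print(randomness_score([(1, 2, 3), (4, 5, 6), (7, 8, 9), (2, 5, 11)]))  # 1.0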


53.514 Visual decision making is most influenced by past experience of weak signals
Shigeaki Nishina1 (nishina@jp.honda-ri.com), Dongho Kim2, Takeo Watanabe2; 1Honda Research Institute Japan, 2Department of Psychology, Boston University
Visual decision making is regarded as a process in which sensory signals are integrated toward an appropriate action. Decisions are based not only on current sensory signals, but also on statistical knowledge about past incidences. It is naturally thought that the statistical knowledge formed by stronger visual signals more greatly influences decision making. However, here we find that very weak signals on previous trials have a greater influence on current decisions than do stronger signals. On each trial, subjects were presented with a noisy, oriented stimulus, and asked to report which one of two alternative orientations was presented. The signal-to-noise ratio (0%, 5%, 15%, or 20%) was varied from trial to trial. While the 5% signal was too weak to perceive, the 15% and 20% signals were conspicuous. One session consisted of 6 runs, with 144 trials per run. In every block of 48 trials, one of three pairs of incidence probabilities (33%/66%, 50%/50%, or 66%/33%) was assigned to the two orientations. Two groups of 10 subjects were employed. In the first group, incidence probability was manipulated for the 5%, 15%, and 20% signal trials. In the second group, incidence probability was manipulated for the 15% and 20% signal trials only, and the incidence probability for the 5% signal trials was constantly 50%. While the first group showed a significant degree of positive correlation between the past stimulus sequence and the current response sequence, the second group showed no correlation. Given that the only difference between the two groups was the manipulation of trials with the imperceptible 5% signal, the present results indicate that, contrary to the general assumption, visual decision making is more influenced by past experience of weak signals than by that of stronger signals.
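The history-dependence analysis described above can be sketched as a lagged correlation between stimulus and response sequences. The +1/-1 coding and the data are invented for illustration:

    import numpy as np

    def history_correlation(stimuli, responses, lag=1):
        """Correlation between the stimulus sequence `lag` trials back and the
        current response sequence (the two orientations coded as +1/-1)."""
        s = np.asarray(stimuli[:-lag], dtype=float)
        r = np.asarray(responses[lag:], dtype=float)
        return np.corrcoef(s, r)[0, 1]

    # Invented +/-1 coded sequences for one short run of trials
    stim = [1, 1, -1, 1, -1, -1, 1, 1, -1, 1]
    resp = [1, 1, 1, -1, 1, -1, -1, 1, 1, -1]
    print(history_correlation(stim, resp))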
53.515 Consciousness Thresholds of Motivationally Relevant Stimuli: Faces, Dangerous Animals and Mundane Objects
Elizabeth C. Broyles1, Evelina Tapia1, Adam M. Leventhal2, Bruno G. Breitmeyer1; 1Department of Psychology, University of Houston, 2Department of Preventive Medicine, University of Southern California Keck School of Medicine
Motivationally relevant stimuli might reach awareness earlier than objects without such relevance. We used a visual masking paradigm to compare consciousness thresholds for neutral faces, threatening dogs and cups. Here, the subjects were presented with two stimuli—an image from one of the three object categories and its spatially scrambled counterpart—both of which were followed, at a varying stimulus onset asynchrony (SOA), by a spatially overlapping pattern mask containing features of objects from all three categories. Subjects then identified the category and the location of the unscrambled image. We defined the consciousness threshold as the smallest SOA at which the category and location of the masked image were correctly identified at above-chance accuracy. We reasoned that, on the one hand, (a) awareness of potentially dangerous objects might be adaptive for planning evasive behaviors; on the other hand, (b) consciousness might be suppressed due to the negative valence of such stimuli. We predicted that, of the three object categories tested, 1) faces, which are socially significant and more frequently encountered, would have the lowest consciousness threshold; 2) based on alternative (a), threatening dogs should have a consciousness threshold equal to or higher than that for faces but lower than that for motivationally neutral cups; 3) based on alternative (b), threatening dogs should have the highest consciousness threshold. Our results show that (i) faces have a lower consciousness threshold than both cups and threatening dogs, and (ii) supporting alternative (b), threatening dogs have a higher consciousness threshold than both faces and cups. Therefore, the minimal SOA for attaining awareness of stimuli might depend on their social and motivational relevance. Moreover, these results suggest that threatening information may be suppressed from awareness and therefore attain consciousness only when it becomes more visible at longer target-mask SOAs.

53.516 Effects of Movement Observation on Execution Altered by Response Features and Background Images
Stephen Killingsworth1 (s.killingsworth@vanderbilt.edu), Daniel Levin2; 1Department of Psychology and Human Development, Peabody College, Vanderbilt University, 2Department of Psychology and Human Development, Peabody College, Vanderbilt University
Many studies show that if one is asked to produce a motion while observing another's motion, action production is influenced by the overlap in spatial position/direction (spatial congruency) and the overlap in type of bodily motion (anatomical congruency) between observed and executed motions. In the present study, we examine this effect more closely by modifying the paradigm established by Brass, Bekkering, & Prinz (2001). We asked whether congruency between the kinematics of observed and executed up/down finger motions produces an anatomical effect even when the executed motions (key pressing and key lifting) involve different object-specific goals than the observed motions (downward or upward finger motions). To isolate spatial and anatomical effects, finger motions were presented with the observed hand in both an upright and an inverted orientation. Furthermore, half of our participants saw a uniform gray background behind observed finger motions, and half saw a table image below the hand with a blank wall behind (providing a goal object for downward, but not upward, motions). We find that only key presses (not lifts) showed main effects of spatial and anatomical congruency. We suggest that when a salient familiar goal state for an alternative action is part of a more novel response sequence (e.g. having a key depressed as the starting position for a lift response), the congruency between observed goals and the novel and familiar goal states of responses yields conflicting facilitatory and inhibitory effects. In addition, congruency effects were influenced by the background presented. Specifically, our results show that when participants initially saw upright finger motions above a table, congruency effects were absent when the motions (and background) were subsequently inverted. However, this was not the case if upright or inverted motions were presented on a gray background, or if inverted motions were first presented with a table background.
53.517 Modeling the visual coordination task in de Rugy et al.: It's perception, perception, perception
Geoffrey P. Bingham1 (gbingham@indiana.edu), Winona Snapp-Childs1, Andrew D. Wilson2; 1Psychological & Brain Sciences, Indiana University, 2Institute of Membrane & Systems Biology, University of Leeds
Bingham (2001; 2004a,b) proposed a dynamical model of coordinated rhythmic movement that predicted that the information used was the relative direction of motion, modified by relative speed. de Rugy et al. (2008) tested this prediction by testing the dependence on speed. They reported that movement stability did not depend on relative speed. However, there were limitations that cast doubt on these findings. Among them was the fact that the task used to test the model was not one the model was designed to represent. Snapp-Childs, Wilson and Bingham (submitted) replicated de Rugy et al.'s experiment and obtained results that supported the Bingham information hypothesis, in contrast to the finding of de Rugy et al. We now revise the original Bingham model to apply to this new task, and then compare simulated data to the Snapp-Childs et al. data. To adapt the model to the new task, it had to be revised in three respects. First, the visual coordination task entailed uni-directional (not bi-directional) visual coupling. The revised model was used to successfully simulate the switching experiment of Snapp-Childs et al. Uni-directional coupling yielded a less stable system that switched at 1.25 Hz rather than 3-4 Hz. Second, the task required participants to control and produce specific amplitudes of movement (as well as specific frequencies and relative phases). This entailed another information variable, specifying amplitude, to be incorporated into the dynamical model to control and produce the required amplitudes. Third, the task required that participants correct spontaneous deviations from the required relative phases. The original Bingham model included perceived relative phase. This was now used to detect departures from required phases and to perform corrections. The resulting model successfully simulated the results replicated in Snapp-Childs et al., illustrating emphatically that perception-action models are required to model performance in coordinated rhythmic movement tasks.
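Bingham's full perception-action model is not reproduced in the abstract. As a stand-in, the sketch below simulates the Haken-Kelso-Bunz (HKB) relative-phase equation, a standard reference model in this literature (not the Bingham model itself), which shows the loss of stability at 180° that the switching paradigm probes; all parameter values are hypothetical:

    import numpy as np

    rng = np.random.default_rng(2)

    def relative_phase(a, b, noise=0.5, phi0=np.pi, T=30.0, dt=0.005):
        """HKB relative-phase dynamics: dphi/dt = -a*sin(phi) - 2b*sin(2*phi).
        Anti-phase (phi = pi) is stable only while b/a > 1/4."""
        phi = phi0
        for _ in range(int(T / dt)):
            phi += (-a * np.sin(phi) - 2 * b * np.sin(2 * phi)) * dt \
                   + noise * np.sqrt(dt) * rng.standard_normal()
        return np.degrees(phi % (2 * np.pi))

    # b shrinks as movement frequency rises, so the 180 deg pattern holds at
    # low frequency but switches toward 0 deg (in-phase) at high frequency.
    print(relative_phase(a=1.0, b=1.0))   # stays near 180
    print(relative_phase(a=1.0, b=0.1))   # drifts to near 0 (or 360)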


53.518 The stability of rhythmic movement coordination depends on relative speed
Winona Snapp-Childs1 (wsnappch@indiana.edu), Andrew D. Wilson2, Geoffrey P. Bingham1; 1Psychological & Brain Sciences, Indiana University, 2Institute of Membrane & Systems Biology, University of Leeds
Bingham (2001; 2004a,b) proposed a dynamical model of coordinated rhythmic movement that predicted that the information used was the relative direction of motion, modified by relative speed. de Rugy et al. (2008) tested this prediction by testing the dependence on speed. They reported that movement stability did not depend on relative speed. However, there were limitations that cast doubt on these findings. First, the only reported measure was of stability. It quantified consistency but not accuracy. Second, amplitude, manipulated to alter relative speed, was not reported. Whether the required differences in speed were actually generated is unknown. Finally, the task used to test the model was not one the model was designed to represent. We ran the following studies to test Bingham's hypothesis more precisely. Participants used a joystick to coordinate the movement of two dots on a screen, controlled by computer and joystick respectively. First, we tested stability using the 'switching' paradigm. Participants attempted to produce 180° relative phase at frequencies increasing from 0.5 Hz to 2.0 Hz in 0.25 Hz steps. Switching occurred at 1.25 Hz. Visual coordination is much less stable than bimanual coordination. Next, we assessed movement stability at 0° and 180° by having participants move at 1.0 Hz, 1.25 Hz and 1.50 Hz. The amplitude of the joystick dot was constant while that of the computer dot was either the same or three times larger. 0° with unequal amplitudes had the same relative speed difference as 180° with equal amplitudes; so the stability should be comparable, and less than that of 0° with equal amplitudes. Using a measure of both consistency and accuracy, we found that speed differences affected movement stability as predicted by the Bingham hypothesis (even though amplitudes were somewhat different than required). Bingham, Snapp-Childs and Wilson (submitted) revised the model for the new task and successfully captured these results.

53.519 Reduction of the flash-lag effect in active observation depends upon the learning of the directional relationship between hand and stimulus movements
Makoto Ichikawa1 (ichikawa@L.chiba-u.ac.jp), Yuko Masakura2; 1Department of Psychology, Chiba University, 2Center for Hyper Media Research, Tokyo Polytechnic University
In our previous study, we found that an observer's active control of the stimulus movement would reduce the illusory flash-lag effect when the upward (downward) movement of the stimulus on the frontoparallel display is coupled with the forward (backward) mouse movement on the desk, as in most computer operating systems (Ichikawa & Masakura, 2006, Vision Research). In this study, we examined whether repetitive observation with a directional relationship between the stimulus and hand movements that is opposite to the one used in most computer operating systems, and therefore unfamiliar to observers, would affect the flash-lag effect. In the active condition, 28.8 arc deg of vertical movement of the stimulus (19.1 x 19.0 arc min) on the display corresponded to about 30.0 cm of mouse movement on the desk. The upward (downward) movement of the stimulus was coupled with the backward (forward) mouse movement. In the automatic condition, the stimulus moved automatically with a constant velocity, which was determined by the average of the stimulus movement in the active condition. A flash stimulus (19.1 x 19.0 arc min) was presented beside the moving stimulus. The vertical position lag between the flash and moving stimuli ranged from -76.0 to 76.0 arc min in 19.0 arc min steps (negative values indicate that the position of the flash was behind that of the moving stimulus). Observers judged whether the moving stimulus was below or above the flash.
We measured the flash-lag effect for the active and automatic conditions before and after 360 training trials with the unfamiliar relationship between mouse and stimulus movements. We found that, after the training trials, the flash-lag effect was reduced only in the active condition. This result suggests that the learning of a specific directional relationship between hand and stimulus movements would reduce the flash-lag effect.
Acknowledgement: Grant-in-Aid for Scientific Research #21530760, JSPS

53.520 The Effectivity of Stroboscopic Training on Anticipation Timing
Alan Reichow1 (Alan.Reichow@nike.com), Karl Citek2, Marae Blume3, Cynthia Corbett3, Graham Erickson2, Herb Yoo1; 1Nike, Inc., 2Pacific University, 3Private Practice
Introduction: Accurate and consistent anticipation timing (AT) is considered advantageous during dynamic reactive activities such as automobile driving, baseball batting, and basketball passing. Stroboscopic training has gained interest in the athletic community as a means of improving AT. The purpose of this study was to evaluate stroboscopic training effects on AT. Methods: Forty-four young adult optometry students served as subjects. Pre-training AT was measured at speeds of 10, 20 and 30 mph using the Bassin Anticipation Timer. Subjects were equally divided into an experimental group that trained with functional stroboscopic eyewear and a control group that trained with non-stroboscopic eyewear. Training consisted of 2 weeks of tennis ball catching: underhand tosses at 12 ft (3.7 m) for 10 min per day. Upon completion of training, subjects were immediately retested on AT and then tested again 24 hrs later. Results: Repeated-measures ANOVA revealed that, after training, there was no significant change in AT accuracy for either group, but the experimental group did show significant improvement in consistency at 30 mph, an effect that was maintained for 24 hrs. Discussion: Stroboscopic training did not improve AT accuracy in this study, possibly because of the relatively slow testing speeds used, the simple training activity, and/or the study population. However, it did demonstrate improved AT consistency at the fastest test speed, which was maintained post-training. Future research should consider stroboscopic training effects with speeds and activities more similar to those that an athlete would expect to encounter in his or her sport.

53.521 Performance Affects Perception of Ball Speed in Tennis
Mila Sugovic1 (msugovic@purdue.edu), Jessica Witt1; 1Psychological Sciences, Purdue University
Several studies suggest that action abilities affect spatial perception. For example, a softball appears to be larger when athletes are hitting better (Witt & Proffitt, 2005), and a putting hole appears larger when golfers are playing better (Witt, Linkenauger, Bakdash, & Proffitt, 2008). In the present research, we demonstrate that action abilities in sports, specifically in tennis, affect both spatial perception and perception of ball speed. Students taking tennis lessons estimated the duration of ball travel from a ball machine to the point when it made contact with their racquet by performing an interval reproduction task following each ball return. After successful returns, players estimated the duration of the ball's travel to be longer, suggesting that they perceived the ball to travel more slowly, compared with when they unsuccessfully returned the ball.
Our results suggest that there is a bidirectional relationship between performance and perception, and that perception of ball speed is scaled in relation to one's performance. This finding is consistent with athletic experience. For example, in describing her game, former world number one tennis player Martina Navratilova said, "When I'm in the zone the ball simply appears to move slower; everything slows down."

53.522 Visual Acuity is Essential for Optimal Visual Performance of NFL Players
Herb Yoo1 (herb.yoo@nike.com), Alan Reichow1, Graham Erickson2; 1Nike, Inc., Beaverton, Oregon, 2College of Optometry, Pacific University, Forest Grove, Oregon, USA
The measurement of static visual acuity (SVA) is an essential element of any vision evaluation because degraded visual acuity can have a detrimental effect on many other aspects of visual performance. Although the expected level of SVA depends on the visual task demands of each sport situation, studies have frequently found better SVA in athletes than in non-athletes. The purpose of this study was to compare performance of visual and visuomotor skills in a population of NFL football players with excellent and reduced SVA. Eighty-two NFL football players received a visual performance evaluation at their team's training center over a span of 2 seasons. Based on the athletes' SVA and refractive error measures, they were identified for a referral to an eye care practitioner for potential remediation. An athlete was identified for a referral if (1) SVA of either or both eyes was worse than 20/17 on a Snellen chart or (2) there was significant hyperopia in either eye (cyl ≥ 1.5 diopter). Twenty-eight of the 82 players were identified for a referral. Statistically, the referred and the non-referred populations had significantly different SVA (p


Object recognition: Recognition processes
Vista Ballroom, Boards 523–538
Tuesday, May 11, 8:30 - 12:30 pm

53.523 Object recognition based on hierarchically organized structures of natural objects
Xiaofu He1 (xiaofuhe2008@yahoo.com), Joe Tsien1,2, Zhiyong Yang1,3; 1Brain and Behavior Discovery Institute, Medical College of Georgia, Augusta, GA 30912, 2Department of Neurology, Medical College of Georgia, Augusta, GA 30912, 3Department of Ophthalmology, Medical College of Georgia, Augusta, GA 30912
Humans can recognize objects quickly and accurately despite tremendous variations in their appearance. This amazing ability of rapid categorization has motivated several models of natural vision and object recognition. The conclusion of these models is rather far-reaching: humans achieve rapid categorization in a way similar to these models. Since understanding the computations underlying rapid categorization is important for achieving natural vision, we have re-examined several of these models. In particular, we trained the models and tested them with scenes in which the objects to be categorized were replaced with uniform ellipses. We found that the models categorized most of the scenes with ellipses as having the objects. Therefore, these models do not categorize objects but rather the contexts in which the objects are embedded, and thus provide little clue on how humans achieve rapid categorization. Here, we propose a statistical object recognition model based on a large set of hierarchically organized structures of natural objects. First, a large set of hierarchical object structures is obtained from natural objects. At each level of the hierarchy is a set of object structures, each of which consists of a combination of independent components of natural objects. Each object/category is then represented by a subset of these hierarchical structures, and the natural variations of the object/category by a probability distribution over the underlying structures. Object recognition/categorization is performed as statistical inference. We tested this model on several large datasets and found that it achieves great performance on object recognition/categorization both in isolation and in natural contexts.

53.524 Statistics of natural objects and object recognition
Meng Li1 (MLI@mcg.edu), Zhiyong Yang1,2; 1Brain and Behavior Discovery Institute, Medical College of Georgia, Augusta, GA 30912, 2Department of Ophthalmology, Medical College of Georgia, Augusta, GA 30912
Natural visual scenes and objects entail highly structured statistics, occurring over the full range of variations in the world. Representing these statistics by populations of neurons is a paramount requirement for natural vision. The function of the visual brain, however, has long been taken to be the representation of scene features. It is thus not clear how representing individual features per se could deal with the enormous feature variations and co-occurrences of other features in the natural environment. Here, taking object recognition as an example, we explore a novel hypothesis, namely, that instead of representing features, the visual brain instantiates a large set of hierarchically organized, structured probability distributions (PDs) of natural visual scenes and objects, and that the function of visual cortical circuitry is to perform statistical operations on these PDs to generate the full range of percepts of the natural environment for successful behaviors.
To explore the merits of this hypothesis, we develop a large set of hierarchically organized, structured PDs of natural objects. First, we find all possible local structures of natural objects. Two object structures are deemed the same if they can be transformed into each other by an affine transform (displacement, rotation, scaling) and smooth nonlinear transforms. For each object structure, we then develop a PD that characterizes the natural variations of the structure. Finally, by applying this procedure at a set of spatial scales, we obtain a large set of object structures, each of which is associated with a PD. Using these object structures and associated PDs, we develop hierarchical, structured probabilistic representations of natural objects. Object recognition is performed as statistical inference. Tests on several large databases of objects show that the performance of this model is comparable to or better than some of the best models of object recognition.

53.525 The role of Weibull image statistics in rapid object detection in natural scenes
Iris Groen1 (i.i.a.groen@uva.nl), Sennay Ghebreab2, Victor Lamme1, Steven Scholte1; 1Cognitive Neuroscience Group, Department of Psychology, University of Amsterdam, 2Intelligent Systems Lab, Informatics Institute, University of Amsterdam
The ability of the human brain to extract meaningful information from complex natural scenes in less time than from simple, artificial stimuli is one of the great mysteries of vision. One prominent example of this ability is that natural scenes containing animals lead to a frontal ERP difference compared to scenes without animals as soon as 150 ms after stimulus onset (Thorpe et al., 1996). Whereas these findings make clear that the brain is able to very rapidly distinguish these types of images, it is unclear on the basis of what information this distinction is made—in other words, whether early differences in the ERP between natural stimuli are related to low-level or high-level information in natural images. We have shown previously that the early animal vs. non-animal difference is driven by low-level image statistics of local contrast correlations, as captured by two parameters (beta and gamma) of the Weibull fit to the edge histogram of natural images (Scholte et al., 2009). These parameters can be estimated in a physiologically plausible way and explain 85% of the variance in the early ERP. We are currently expanding on this work by investigating to what extent low-level image statistics, as measured by beta and gamma, are involved in determining the latencies of target vs. non-target ERP differences in cases where other types of stimuli than animals (vehicles) are used and where the specific task the subject is performing is varied (from simple detection to subordinate categorization). Early results confirm our previous findings and extend them to other types of stimuli and tasks.
Thorpe et al. (1996). Speed of processing in the human visual system. Nature, 381(6582), 520-522.
Scholte et al. (2009). Visual gist of natural scenes derived from image statistics parameters [Abstract]. Journal of Vision, 9(8):1039, 1039a.
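A sketch of the Weibull-fit procedure referred to above, assuming the fit is applied to a histogram of local edge magnitudes; the Sobel filter, the scipy fitting routine, and the mapping of the fitted scale and shape parameters onto beta and gamma are stand-ins for details not given in the abstract:

    import numpy as np
    from scipy import ndimage, stats

    def weibull_edge_stats(image):
        """Fit a Weibull distribution to the histogram of local edge magnitudes;
        the fitted scale and shape parameters play the roles of beta and gamma."""
        gx = ndimage.sobel(image.astype(float), axis=0)
        gy = ndimage.sobel(image.astype(float), axis=1)
        edges = np.hypot(gx, gy).ravel()
        shape, _, scale = stats.weibull_min.fit(edges[edges > 0], floc=0)
        return scale, shape            # (beta-like, gamma-like)

    rng = np.random.default_rng(3)
    print(weibull_edge_stats(rng.random((64, 64))))   # stand-in for a photograph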
53.526 Influence of Local Noise Structure on Object Recognition
Henry Galperin1 (hgalp@brandeis.edu), Peter Bex2, Jozsef Fiser3,4; 1Graduate Program in Psychology, Brandeis University, Waltham, MA 02454, 2Schepens Eye Research Institute, Harvard Medical School, Boston, MA 02114, 3Department of Psychology, Brandeis University, Waltham, MA 02454, 4Volen Center for Complex Systems, Brandeis University, Waltham, MA 02454
Psychophysical experiments frequently use random pixel noise to study the coding mechanisms of the visual system. However, interactions in natural scenes occur among elements of larger articulated structures than pixels. We examined how structure in noise influences the ability to recognize objects with a novel coherence paradigm for object recognition. Grayscale images of 200 everyday objects from 40 categories were analyzed with a multi-scale bank of Gabor-wavelet filters whose responses defined the positions, orientations and phases of signal Gabor patches that were used to reconstruct the original image. The proportion of signal to random noise Gabors was varied using a staircase procedure to determine a threshold supporting object recognition on 75% of trials. The noise structure was controlled to produce Gabor chains of varying length (1, 2, 3, or 6 elements) and local orientation, forming straight, smoothly or irregularly curved contours. On each trial, nineteen naïve subjects assigned the reconstructed image to one of four categories, randomly selected from all categories. Object recognition thresholds were invariant to the nature of the underlying local orientation structure of the noise. Increasing the length of the noise contours from 1 to 2 elements increased thresholds (p
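The abstract does not state which staircase rule was used to target 75% correct; a generic n-down/1-up skeleton over signal coherence, run against a simulated 4AFC observer (all of it illustrative, not the authors' code), conveys the procedure:

    import random
    random.seed(4)

    def staircase(respond, level=0.5, step=0.05, n_down=3, n_trials=60):
        """Generic n-down/1-up staircase over the proportion of signal Gabors.
        A 3-down/1-up rule converges near 79% correct; the exact rule used to
        target 75% is not given in the abstract."""
        streak, levels = 0, []
        for _ in range(n_trials):
            if respond(level):                     # correct trial
                streak += 1
                if streak == n_down:
                    level, streak = max(0.0, level - step), 0
            else:                                  # incorrect trial
                level, streak = min(1.0, level + step), 0
            levels.append(level)
        return sum(levels[-20:]) / 20              # threshold estimate

    # Simulated 4AFC observer: 25% guessing plus coherence-driven accuracy
    observer = lambda c: random.random() < 0.25 + 0.75 * min(1.0, c / 0.6)
    print(round(staircase(observer), 3))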


We have previously shown that observers can recognize high-level material categories (e.g. paper, fabric, plastic, etc.) in complex, real-world images even in 40-millisecond exposures (Sharan et al., VSS 2009). This rapid perception of materials is different from object or texture recognition, and it is fairly robust to low-level image degradations such as blurring or contrast inversion. We now turn to computational models and ask if machines can mimic this human performance. Recent work has shown that simple image features based on luminance statistics (Sharan et al., 2008), or based on 5x5 pixel patches (Varma and Zisserman, 2009), are sufficient for some texture and material recognition tasks. We tested state-of-the-art models based on these features on the stimuli that our observers viewed. The performance was poor (categorization rate: Varma-Zisserman = 20%, observers = 90%, chance = 11%). Our stimuli, a diverse collection of photographs derived from Flickr.com, are undoubtedly more challenging than state-of-the-art benchmarks (Dana et al., 1999). We have developed a model that combines low- and mid-level image features, based on color, texture, micro-geometry, outline shape and reflectance properties, in a Bayesian framework. This model achieves a significant improvement over the state of the art on our stimuli (categorization rate: 41%), though it lags human performance by a large margin. Individual features such as color (28%), texture (37%) or outline shape (28%) are also useful. Interestingly, when we ask human observers to categorize materials based on these features alone (e.g. by converting our stimuli to line drawings that convey shape information, or scrambling them to emphasize textures), observer performance is similar to that of the model (20-35%). Taken together, our findings suggest that isolated cues (e.g. color or texture), or simple image features based on these cues, are not sufficient for real-world material recognition.
Acknowledgement: Disney, Microsoft, NTT Japan, NSF

53.528 The role of feedback projections in a biologically realistic, high-performance model of object recognition
Dean Wyatte1 (dean.wyatte@colorado.edu), Randall O'Reilly1; 1Department of Psychology and Neuroscience, University of Colorado, Boulder
Neurons in successive areas along the ventral visual pathway exhibit increasingly complex response properties that facilitate the robust recognition of objects at multiple locations, scales, and orientations in the visual world. Previous biological models of object recognition have focused on leveraging these response properties to build up transformation-invariant representations through a series of feedforward projections. In addition to feedforward projections, feedback projections are abundant throughout visual cortex, yet relatively little is known about their function in vision. Here, we present a model of object recognition that shows how feedback projections can produce considerably robust recognition performance in visual noise. The model is capable of transformation-invariant recognition of 100 different categories of three-dimensional objects and can generalize recognition to novel exemplars with greater than 90% accuracy. When the objects are embedded in spatially correlated visual noise, our model exhibits substantially greater robustness during recognition (a 50% accuracy difference in some cases) compared to a feedforward backpropagation model.
Thus, the top-down flow of activation via feedback projections can help to resolve uncertainty due to noise based on learned visual knowledge. Finally, in contrast with other biological models of object recognition, our model develops all of its critical transformation-invariant representations through general-purpose learning mechanisms that operate via the feedforward and feedback projections. Thus, our model demonstrates how a biologically realistic architecture that supports generic cortical learning is successful at solving the difficult problem of invariant object recognition.
Acknowledgement: ONR N00014-07-1-0651

53.529 GPGPU-based real-time object detection and recognition system
Daniel Parks1 (danielfp@usc.edu), Archit Jain2, John McInerney2, Laurent Itti1,2; 1Neuroscience Program, Hedco Neuroscience Building, University of Southern California, Los Angeles, CA 90089-2520, 2Computer Science Department, Henry Salvatori Computer Science Center, University of Southern California, Los Angeles, CA 90089-0781
Many neuroscience-inspired vision algorithms have been proposed over the past few decades. However, it is difficult to easily compare the various algorithms that have been proposed by investigators. Many are very computationally intensive and are thus hard to run at or near real time. This makes it difficult to rapidly compare different algorithms. Further, it makes it difficult to tweak existing algorithms and to design new algorithms, due to the training and testing framework that must be constructed around each one. With the advent of GPGPU computing, significant speedups on the order of 10-50 times are achievable if the computations are intensive, local, and massively parallel. Many object recognition systems fit this description, so the GPGPU provides an attractive platform. We describe an implemented GPGPU-based system that uses saliency (Itti & Koch, 1998) to detect interesting regions of a scene, and a generic backend that can run various object recognition systems such as HMAX (Riesenhuber & Poggio, 1999) or SIFT (Lowe, 2004). The less intensive front-end system achieved a speedup of only 2x, but HMAX was sped up by 10x (Chikkerur, 2008). We believe that this framework will allow rapid testing and improvement of novel recognition algorithms.

53.530 Top-Down Processes of Model Verification Facilitate Visual Object Categorization under Impoverished Viewing Conditions after 200 ms
Giorgio Ganis1,2,3 (ganis@nmr.mgh.harvard.edu), Haline Schendan2,4; 1Harvard Medical School, Radiology Department, 2Massachusetts General Hospital/Martinos Center for Biomedical Imaging, 3Harvard University, Psychology Department, 4Tufts University, Psychology Department
While objects seen under optimal visual conditions are rapidly categorized, categorizing objects under impoverished viewing conditions (e.g., unusual views, in fog, occlusion) requires more time and may depend more on top-down processing, as hypothesized by object model verification theory. Object categorization involves matching the incoming perceptual information to a stored visuostructural representation in long-term memory. Functional magnetic resonance imaging (fMRI) work found evidence for model verification theory, and implicated ventrolateral prefrontal cortex (VLPFC) and the parietal lobe in top-down modulation of posterior visual processes during the categorization of impoverished images of objects. We replicated the fMRI study with event-related potentials (ERPs) to time model verification processes.
The two-state interactive account of visual object knowledge predicts that top-down processes of model verification modulate object model selection processes supporting categorization during a frontopolar N350, and later categorization processes during a parietal late positive complex (LPC), but not earlier feedforward processes of perceptual categorization during a P150-N170 complex. Twenty-four participants categorized fragmented line drawings of known objects. Impoverishment was defined by a median split of response time, with slower times defining more impoverished (MI) and faster times defining less impoverished (LI) objects. As predicted, after 200 ms, the N350 was larger for MI than LI objects, whereas the LPC was smaller for MI than LI objects. Consistent with the two-state interactive account, object model selection processes supporting categorization occur after 200 ms and can be modulated by top-down processes of model verification implemented in VLPFC-parietal networks to facilitate object constancy under impoverished viewing conditions.

53.531 The Speed of Categorization: A Priority for People?
Michael Mack1 (michael.mack@vanderbilt.edu), Thomas Palmeri1; 1Department of Psychology, Vanderbilt University
Objects are typically categorized fastest at the basic level ("dog") relative to more superordinate ("animal") or subordinate ("Labrador retriever") levels (Rosch et al., 1976). A traditional explanation for this basic-level advantage is that an initial stage of processing first categorizes objects at the basic level (Grill-Spector & Kanwisher, 2005; Jolicoeur, Gluck, & Kosslyn, 1984), but this has been challenged by more recent findings (e.g., Bowers & Jones, 2008; Mace et al., 2009; Mack et al., 2008, 2009; Rogers & Patterson, 2007). In the current study, we explored whether there is temporal priority in processing people by measuring the time course of categorization and evaluating behavioral data using a computational model of perceptual decision making (Ratcliff, 1978). We contrasted speeded categorization of people versus speeded categorization of dogs, manipulating the similarity between the targets and distractors (similar distractors were other animals and dissimilar distractors were nonliving objects) and the homogeneity of the set of distractors (two versus ten object categories). Participants were more accurate and faster for both people and dogs when distractors were dissimilar to the targets, and the homogeneity of distractors did not have an effect on performance. But critically, we found a temporal advantage for categorizing people both in overall reaction times and in measures of minimal processing time for successful categorization. Not only were people categorized faster than dogs, they were also categorized earlier. Model predictions suggested that a temporal advantage for categorizing people arises from both a priority in perceptual encoding and a faster accumulation of evidence for a decision. The current study significantly extends recent work by further characterizing the time course of categorization at different levels and for different kinds of objects and investigating the underlying mechanisms within a computational framework.
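For readers unfamiliar with the Ratcliff (1978) model invoked above, a toy drift-diffusion simulation shows how the two mechanisms suggested by the model fits — faster encoding (a shorter non-decision time) and faster evidence accumulation (a higher drift rate) — both shorten reaction times; every parameter value here is invented:

    import numpy as np

    rng = np.random.default_rng(5)

    def diffusion_rt(drift, boundary=1.0, nondecision=0.3, dt=0.001):
        """One trial of a Ratcliff-style diffusion process: noisy evidence
        accumulates to +/-boundary; RT = non-decision (encoding) + decision time."""
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + np.sqrt(dt) * rng.standard_normal()
            t += dt
        return nondecision + t

    # Hypothetical parameters: 'people' get faster encoding and a higher drift
    people = np.mean([diffusion_rt(2.0, nondecision=0.25) for _ in range(200)])
    dogs = np.mean([diffusion_rt(1.5, nondecision=0.30) for _ in range(200)])
    print(people, dogs)   # people < dogs, mirroring the faster/earlier pattern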


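The decision model invoked here (Ratcliff, 1978) treats categorization as noisy accumulation of evidence toward a response boundary, so "a priority in perceptual encoding" and "a faster accumulation of evidence" map onto two distinct parameters: non-decision time and drift rate. A minimal simulation sketch, with all parameter values assumed for illustration rather than taken from the study:

    # Toy drift-diffusion simulation (illustrative parameters only).
    # Encoding priority -> shorter non-decision time t0;
    # faster evidence accumulation -> larger drift rate v.
    import numpy as np

    def simulate_rt(v, a=1.0, t0=0.3, dt=0.001, noise=1.0, rng=None):
        """One trial: accumulate evidence until |x| reaches the boundary a."""
        if rng is None:
            rng = np.random.default_rng()
        x, t = 0.0, 0.0
        while abs(x) < a:
            x += v * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        return t0 + t, x > 0          # (reaction time, upper boundary hit?)

    rng = np.random.default_rng(0)
    people = [simulate_rt(v=1.2, t0=0.25, rng=rng)[0] for _ in range(2000)]
    dogs   = [simulate_rt(v=0.9, t0=0.30, rng=rng)[0] for _ in range(2000)]
    print(f"mean RT people: {np.mean(people):.3f} s, dogs: {np.mean(dogs):.3f} s")

In this framing, a shorter t0 shifts the entire RT distribution earlier (categorized "earlier"), while a larger v compresses it (categorized "faster"), which is why the two contributions are separable in model fits.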
53.532 Top-down models explain key aspects of a Speed-of-Sight character recognition task
Garrett Kenyon 1,2 (garkenyon@gmail.com), Shawn Barr 2, Michael Ham 1, Vadas Gintautas 1, Cristina Rinaudo 1,2, Ilya Nemenman 3, Marian Anghel 1, Steven Brumby 1, John George 1, Luis Bettencourt 1; 1 Los Alamos National Laboratory, 2 New Mexico Consortium, 3 Emory University
Object recognition is very rapid, typically reaching completion within 150 msec following image onset, consistent with intersaccade intervals in humans. In Speed-of-Sight tasks, recognition can be interrupted by masks presented at a given delay, termed the Stimulus Onset Asynchrony (SOA). Featureless images (black, white, grey, or white noise) are minimally effective as masks, even at very short SOAs (e.g., 20 ms). Optimal masks can significantly compromise object identification at SOAs of 60-80 ms or more. We conducted a 2AFC experiment in which subjects reported the location (left/right) of targets presented next to distractors, with both images quickly replaced by identical masks. To limit image parameters while allowing task difficulty to be varied, images were depicted on a 7-segment LED-like display. Targets were always a specific digit (e.g., "2" or "4"). Masks and distractors consisted of digits, letters, or non-semantic symbols composed from the same 7 segments. To account for the observed variability in mask efficacy for different target-mask combinations, we constructed a model that combined dynamical variables representing feedforward feature detectors (corresponding to the 7 image segments) with high-level pattern detectors for targets, masks, and distractors. Masking was most dependent on feature-level competition: the numeral 8 was an effective universal mask, whereas the numeral 1 was a poor mask, allowing many targets to be reliably detected after a 20 msec SOA. Accounting for mask effectiveness required postulating top-down or feedback influences from pattern detectors that modulate the confidence or persistence of low-level feature detectors. Our results suggest that masking occurs at the level of low-level features and is strongly modulated by top-down or feedback processes, inconsistent with purely feedforward models often proposed to account for Speed-of-Sight results.
Acknowledgement: NSF PetaApps program
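To make the general scheme concrete, the following is a heavily simplified sketch of the architecture described above: leaky feature units for the 7 segments drive pattern units for the target and mask, and the pattern units feed back to prolong the features that support them. All equations, constants, and the 7-segment encodings are our illustrative assumptions, not the authors' published model; the sketch only reproduces the qualitative claim that "8" masks a target better than "1".

    # Simplified feedforward + feedback masking dynamics (illustrative only).
    import numpy as np

    # Hypothetical 7-segment encodings (order: segments a-g); an assumption.
    SEG = {"2": np.array([1, 1, 0, 1, 1, 0, 1], float),
           "8": np.ones(7),
           "1": np.array([0, 1, 1, 0, 0, 0, 0], float)}

    def match(f, tpl):
        """Pattern-unit activity: template match minus off-template activity."""
        on = f @ tpl / tpl.sum()
        off = f @ (1 - tpl) / max((1 - tpl).sum(), 1)
        return max(0.0, on - off)

    def run_trial(target="2", mask="8", soa=0.020, dur=0.300, dt=0.001,
                  tau=0.05, g_fb=0.5):
        f = np.zeros(7)            # leaky low-level segment detectors
        evidence = 0.0             # integrated target-pattern activity
        for step in range(int(dur / dt)):
            stim = SEG[target] if step * dt < soa else SEG[mask]
            p_tgt, p_msk = match(f, SEG[target]), match(f, SEG[mask])
            # Top-down feedback: pattern units sustain their own segments.
            fb = g_fb * (p_tgt * SEG[target] + p_msk * SEG[mask])
            f += dt / tau * (-f + stim + fb)
            evidence += p_tgt * dt
        return evidence

    for m in ("8", "1"):
        print(f"mask {m}: integrated target evidence = {run_trial(mask=m):.3f}")

Under these assumed dynamics, the "8" mask drives every segment and so floods the mismatch term of the target detector (feature-level competition), while the "1" mask activates few segments, leaving the target pattern detectable long after a 20 ms SOA.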
53.533 Comparing Speed-of-Sight studies using rendered vs. natural images
Kevin Sanbonmatsu 1 (kys@lanl.gov), Ryan Bennett 2, Shawn Barr 3, Cristina Renaudo 1,3, Michael Ham 1,3, Vadas Gintautas 1,3, Steven Brumby 3, John George 3, Garrett Kenyon 1,3, Luis Bettencourt 1,3; 1 Los Alamos National Laboratory, 2 University of North Texas, 3 New Mexico Consortium
Viewpoint-invariant object recognition is both an essential capability of biological vision and a key goal of computer vision systems. A critical parameter in biological vision is the amount of time required to recognize an object; this time scale yields information about the algorithm used by the brain to detect objects. Studies that probe this time scale (Speed-of-Sight studies) performed with natural images are often limited because image content is determined by the photographer: such studies rarely contain systematic variations of the scale, orientation, and position of the target object within the image. Semi-realistic three-dimensional rendering of objects and scenes enables more systematic studies, allowing the isolation of specific parameters important for object recognition. To date, a computer vision algorithm that can distinguish between cats and dogs has yet to be developed, and the specific cortical mechanisms that enable biological visual systems to make such distinctions are unknown. We perform a systematic Speed-of-Sight study as a step towards developing such an algorithm by enabling a better understanding of the corresponding biological processing strategies. In our study, participants are given the task of reporting whether or not a cat is present in an image ('cat / no cat' task). The object image is displayed briefly, followed by a mask image. As masks, we use images of dogs as well as 1/f noise. We perform studies with natural images and with rendered images and compare the results.

53.534 Electrophysiological evidence for early visual categorisation at 80 ms
Emmanuel Barbeau 1 (emmanuel.barbeau@cerco.ups-tlse.fr), Denis Fize 1, Holle Kirchner 1, Catherine Liégeois-Chauvel 2, Jean Régis 2, Michèle Fabre-Thorpe 1; 1 Centre de Recherche Cerveau et Cognition UMR 5549 (CNRS - Université Paul Sabatier Toulouse 3), Faculté de Médecine de Rangueil, 31062 Toulouse Cedex 9, France, 2 Laboratoire de Neurophysiologie et de Neuropsychologie U751 (INSERM - Université de la Méditerranée - AP-HM Timone), Faculté de Médecine, 27 boulevard Jean Moulin, 13385 Marseille Cedex 05, France
Objective: Using surface ERPs, it has been shown that the processing of natural scenes can be done in 150 ms. Here, we use intracerebral recordings during a visual Go-Nogo categorisation paradigm to investigate the earliest cerebral effects associated with this task. Methods: CF is a 35-year-old woman who underwent intracerebral investigation for presurgical evaluation of intractable epilepsy. 12 depth electrodes, each containing from 10 to 15 contacts, were implanted. CF performed a Go-Nogo task in which she had to press a button each time she saw a human face among animal faces. Instructions were then reversed. Stimuli were natural scenes in which targets were close-ups of a large variety of faces. Both categories of stimuli were matched for mean contrast, luminance, and spatial frequency. CF also underwent different control tasks. Results: Mean accuracy was 95% correct responses. A focal negative ERP peaking around 80 ms was recorded in the right calcarine fissure anterior to the junction with the parieto-occipital fissure. The maximum amplitude of this ERP was larger for targets (human or animal). With human faces as targets, the ERP peaked at 80 ms with an amplitude of -85 mV, while it showed lower amplitude for non-target animal faces (-71 mV; p
53.536 Top-down processes modulate occipitotemporal cortex to facilitate cognitive decisions with visually impoverished objects after 200 ms: Evidence from neural repetition effects
Haline Schendan 1,2 (haline_e.schendan@tufts.edu), Lisa Lucia 1; 1 Tufts University, 2 MGH Martinos Center
A series of neuroimaging and event-related potential (ERP) studies investigated the brain dynamics of the visual constancy of cognitive decisions about objects. Model verification theory proposes that top-down processes of model prediction and testing in ventrolateral prefrontal cortex (VLPFC) and ventrocaudal intraparietal sulcus (vcIPS) regions implicated in mental rotation modulate occipitotemporal cortex to enable cognitive decisions with highly impoverished objects, such as unusual views. Regarding timing, a two-state interactive account proposes that, after the initial bottom-up pass, these brain regions interact to support cognitive decisions about visual objects during a frontal N3 complex. These accounts, together with multiple memory systems and transfer-appropriate processing theories of memory, predict the largest repetition effects for same unusual views in these brain areas during the N3 complex, because model verification processes are recruited during both study and memory test only in this condition. To test these ideas, repetition effects were compared for objects in unusual and canonical (best) views seen before from the same or the other view during categorization and recognition. Neuroimaging results showed that same unusual views show (a) the most suppression on both tasks in model testing regions in caudal VLPFC (BA 44/6), vcIPS, and dorsal occipitotemporal cortex, and (b) more suppression on categorization than recognition in model prediction regions of mid-VLPFC, lateral occipital, and fusiform cortex. ERP results showed that a frontopolar N350 subcomponent of the N3 complex exhibits the task-general pattern seen in model testing regions, whereas a centrofrontal N390 subcomponent exhibits the categorization-specific pattern seen in the model prediction regions. Altogether, these findings implicate top-down processes between 200 and 500 ms in the visual constancy of cognitive decisions about objects and implicit memory, and indicate that vision and memory theories combined best explain the human brain dynamics of visual object cognition.

53.537 Comparison between discrimination and identification processes using line-drawing stimuli
Kosuke Taniguchi 1 (cosk-t@let.hokudai.ac.jp), Tadayuki Tayama 1; 1 Graduate School of Psychology, Hokkaido Univ.
Object recognition involves several cognitive processes, such as detection, discrimination, identification, and categorization. We have no adequate knowledge about what kinds of stimulus properties are involved in these processes and how the processes are associated with each other. By focusing on the processes of discrimination and identification, the present study aimed to exhibit the difference between these processes. Two experiments were conducted, one with a discrimination task and the other with an identification task. In the discrimination task, two line-drawing objects were briefly presented (200 ms) at the center of the screen and observers were asked to judge whether the objects were the same or not. The procedure of the identification task was almost the same, except that the target was assigned by a word before the two stimuli were presented and observers were asked to choose one of them as the target.
Eight objects were selected from Snodgrass and Vanderwart (1980) as stimuli. The background of the stimuli was masked by black-and-white random noise, and the stimuli themselves (black line-drawing objects) were also masked probabilistically by random noise. We analyzed accuracy and RT on the discrimination and identification tasks and examined the correlation between these values and intrinsic information of the line-drawing objects, in order to investigate which information is involved in the discrimination and identification processes. The results of both tasks showed no clear relationship between accuracy and local information in the stimuli. However, differences in holistic complexity had some influence on the discrimination of line-drawing objects, whereas identification was related to contour-based factors of the object (such as feature points and the complexity of object components). Therefore, we assume that the discrimination process involves the comparison of holistic shapes, whereas the identification process involves using strong features of the object as cues.

53.538 The interaction of structural and conceptual information determines object confusability
Daniel Kinka 1 (kinkadan@gmail.com), Kathryn Roberts 1, Cindy Bukach 1; 1 The University of Richmond, Richmond, VA
The current study examines the impact of conceptual information on competition from shared structural features during an object recall task. Previous studies with category-specific visual agnosia (CSVA) patient ELM show that structural dimensions such as tapering and pinching are stored in a distributed fashion, and that integration of these dimensions can fail during recall due to competition from objects that share values on these dimensions (Dixon, Bub & Arguin, 1997). Visual similarity is therefore determined not only by proximity among distributed diagnostic structural dimensions, but also by the number of values shared by objects within a category. Importantly, ELM was able to use distinctive conceptual information to resolve structural competition during recall. This interaction between structural similarity and conceptual relatedness was replicated in normal recall of newly learned attributes for known objects (Bukach et al., 2004). However, it is difficult to define the relevant diagnostic features of real objects. In the current study, we used novel stimuli to manipulate both the number of shared features and the conceptual relatedness of objects while controlling for similarity due to proximity. Participants associated either conceptually related or unrelated labels with object sets that shared few or many values on diagnostic structural dimensions. Participants in the conceptually related and shared structural values condition made more errors than any other group.
These results are consistent with the pattern of errors from CSVA patient ELM and provide strong evidence for the influence of conceptual information on the resolution of competition from shared values on structural dimensions during normal recall.
Acknowledgement: The University of Richmond

Face perception: Disorders
Vista Ballroom, Boards 539–552
Tuesday, May 11, 8:30 - 12:30 pm

53.539 Face detection in acquired prosopagnosia
Brad Duchaine 1 (b.duchaine@ucl.ac.uk), Lucia Garrido 2, Chris Fox 3, Giuseppe Iaria 4, Alla Sekunova 3, Jason Barton 3; 1 Institute of Cognitive Neuroscience, University College London, 2 Vision Sciences Laboratory, Department of Psychology, Harvard University, 3 Human Vision and Eye Movement Laboratory, Neuro-ophthalmology, University of British Columbia, 4 Department of Psychology, University of Calgary
Detection of faces in visual scenes has received extensive attention in machine vision, but limited research has addressed face detection in humans. Here we assess face detection in six participants with acquired prosopagnosia resulting from a variety of lesions, to better understand the processes and neural areas contributing to face detection and the relation of detection to other stages of face recognition. All six participants showed severe impairments on tests of facial identity recognition, confirming prosopagnosia, and participants were also extensively tested for perceptual discrimination of faces. We used structural MRI to delineate the lesions and functional MRI to show the status of the core regions of the face-processing network (OFA, FFA, STS). Two tasks requiring visual search for the presence of a face among distractor stimuli assessed detection in the patients and 12 age-matched controls. Two participants, R-AT2 and B-AT1, performed normally on both tasks. These patients had anterior temporal lesions that did not affect their core face-processing network. Two participants, R-AT1 and R-IOT4, had severe detection impairments, while the performance of R-IOT1 and B-AT/IOT1 was borderline. These four subjects all showed difficulty on perceptual tasks requiring discrimination of facial identity. Except for R-AT1, all subjects had lesions to right inferior occipitotemporal cortex, with loss of the FFA and OFA in R-IOT1 and B-AT/IOT1 and loss of the FFA alone in R-IOT4. Furthermore, DTI analysis in R-AT1 suggested reduced fractional anisotropy in the region of the FFA and OFA. The association between detection and identity perception suggests that these abilities may be supported by the same processes. Impairment in these abilities correlates with damage to the core face-processing network in the right hemisphere. Face detection deficits in R-IOT4 despite preservation of the right and left OFA indicate that these regions are not sufficient on their own to support detection.
Acknowledgement: Support from ESRC (RES-061-23-0040) to BD and CIHR (MOP-77615) to JB.

53.540 The right anterior temporal and right fusiform variants of acquired prosopagnosia
Alla Sekunova 1,2 (alla_ks@hotmail.com), Brad Duchaine 4, Lúcia Garrido 4, Michael Scheel 1,2, Linda Lanyon 1,2, Jason Barton 1,2,3; 1 Department of Ophthalmology and Visual Sciences, University of British Columbia, 2 Department of Medicine (Neurology), University of British Columbia, 3 Department of Psychology, University of British Columbia, 4 Institute of Cognitive Neuroscience, University College London
Subtypes of acquired prosopagnosia have long been proposed, including apperceptive, associative, and amnestic variants. The relation of functional subtype to underlying variations in lesion anatomy is an area of study. The recent development of fMRI localizers that reliably identify regions of the core face-processing network (FFA, OFA, and STS) in single subjects allows us to transform structure-function questions into investigations of the relation between behaviour and the status of face networks. Here we describe two paradigmatic patients with acquired prosopagnosia. BP had had herpes encephalitis causing a right anterior temporal lesion, with fMRI showing an intact core network but most likely loss of aIT. Neuropsychological testing showed sparing of other perceptual and memory functions, with deficits on the face component of the Warrington Recognition Test and the Cambridge Face Memory Test. She was impaired on the Famous Faces test, but had normal semantic knowledge of people. She was normal on many face perception tasks, including face detection, gender perception, expression perception, discrimination of facial features and configuration, and view-invariant face discrimination. She was impaired on face imagery, however. RG had a right medial occipitotemporal stroke that destroyed the FFA but spared the OFA and STS. Neuropsychological tests showed sparing of other perceptual and memory functions, and he too was impaired on the Famous Faces test but had normal semantic knowledge of people. Unlike BP, RG showed widespread impairments on many face perception tests, including face detection, gender perception, discrimination of facial features and configuration, and view-invariant face discrimination. Imagery of global facial properties was normal, in contrast to BP. We conclude that BP has an amnestic variant of prosopagnosia associated with right anterior temporal damage, likely including aIT, but sparing OFA and FFA, and that RG has an apperceptive variant, from right fusiform damage and loss of the FFA.
Acknowledgement: ESRC (RES-061-23-0040), CIHR (MOP-77615), Michael Smith Foundation for Health Research, Canada Research Chairs Program.

53.541 Residual face-selectivity of the N170 and M170 is related to the status of the occipital and fusiform face areas in acquired prosopagnosia
Ipek Oruc 1,2 (ipek@psych.ubc.ca), Teresa Cheung 4, Kirsten Dalrymple 3, Chris Fox 1,2, Giuseppe Iaria 5, Todd Handy 3, Jason Barton 1,2,3; 1 Department of Ophthalmology and Visual Science, University of British Columbia, 2 Department of Medicine (Neurology), University of British Columbia, 3 Department of Psychology, University of British Columbia, 4 Department of Physics, Simon Fraser University, 5 Department of Psychology, University of Calgary
Event-related potentials (ERPs) recorded with scalp EEG demonstrate that a difference between the perception of face and non-face object stimuli is evident in the N170 potential, usually larger over the posterior regions of the right hemisphere. A similar phenomenon is noted in the M170 potential in magnetoencephalography (MEG). The anatomic origins of this face-selective N170 remain uncertain, with proposals that it may reflect contributions from the FFA, STS, or both. To investigate this, we studied the face-selective N170 using ERP and the M170 using MEG in patients with acquired prosopagnosia. Significance of face/object contrasts in single-subject ERPs was based on nonparametric bootstrap confidence intervals. All patients had undergone extensive neuropsychological and behavioural testing, as well as structural and functional MRI with a dynamic face localizer (Fox et al., Human Brain Mapping 2009) to characterize the post-lesion status of their core face-processing network, namely the FFA, STS, and OFA. Two patients had right or bilateral anterior temporal damage from herpes encephalitis, sparing all components of the core network. The ERP data showed that, despite their prosopagnosia, they still showed a significant difference between faces and objects in the N170 over the right occipitotemporal regions, which was confirmed in the M170 using MEG in one patient. Three patients had occipitotemporal damage, two with loss of the FFA alone and one with loss of the FFA and OFA. Two of these subjects showed no difference between faces and objects in either the N170 or M170; however, one subject with loss of the FFA alone did show a residual face-selective N170. We conclude that STS survival is insufficient on its own to generate a face-selective N170 in some patients, but on the other hand loss of the FFA alone does not always eliminate this electrophysiological phenomenon.
Acknowledgement: NSERC Discovery Grant RGPIN 355879-08, CIHR MOP-77615

53.542 Non-identity based facial information processing in developmental prosopagnosia
Garga Chatterjee 1 (garga@fas.harvard.edu), Bradley Duchaine 2, Ken Nakayama 1; 1 Vision Sciences Laboratory, Department of Psychology, Harvard University, Cambridge, MA, USA, 2 Institute of Cognitive Neuroscience, University College London, UK
Certain models of face processing (Bruce and Young, 1986) postulate that certain types of non-identity based facial information can be processed independently of face identity recognition. Developmental prosopagnosia is characterized by a severe deficit in face-identity recognition. The status of non-identity based face information in this condition would be useful for understanding how face processing happens normally, and also how it happens in individuals with developmental prosopagnosia. Developmental prosopagnosics generally report no subjective deficits in the perception of age, gender, or attractiveness. By looking at associations and dissociations of non-identity facial information such as age, gender, and attractiveness from face-based identity recognition, the issue of parallel streams of information extraction can be evaluated.
Also, the kinds of facial information that are compromised along with face-based identity recognition will speak to the organization of these information processing streams, by showing which deficits go together and which do not. Phenotype differences may also exist in developmental prosopagnosia in the nature of these associations and dissociations; information from individual differences is important in this regard. Correlations of attractiveness judgements by developmental prosopagnosics with those of the control population are discussed. Tests were developed to assess attractiveness, age, and gender perception. By testing 16 developmental prosopagnosics, we show that normal age and gender processing can exist in many such cases in spite of face-identity recognition deficits.

53.543 Recognition of static versus dynamic faces in prosopagnosia
David Raboy 1 (dar54@pitt.edu), Alla Sekunova 2,3, Michael Scheel 2,3, Vaidehi Natu 1, Samuel Weimer 1, Brad Duchaine 4, Jason Barton 2,3, Alice O'Toole 1; 1 The University of Texas at Dallas, 2 Department of Ophthalmology and Visual Science, University of British Columbia, 3 Department of Medicine (Neurology), University of British Columbia, 4 Institute of Cognitive Neuroscience, University College London
A striking finding in the face recognition literature is that motion improves recognition accuracy only when viewing conditions are poor. This may be due to parallel (separate) neural processing of the invariant (identity) information in the fusiform gyrus and the changeable (social communication) information in the superior temporal sulcus (pSTS) (Haxby et al., 2000). The pSTS may serve as a secondary "back-up" route for the recognition of faces from identity-specific facial dynamics (O'Toole et al., 2002). This predicts that prosopagnosics with an intact pSTS may be able to recognize faces when they are presented in motion. We compared face recognition for prosopagnosics with intact STS (n=2) and neurologically intact controls (n=19). In our experiment, we used static and dynamic (speaking/expressing) faces, tested in identical and "changed" stimulus conditions (e.g., a different video with hair change, etc.). Participants learned 40 faces: half from dynamic videos and half from multiple static images extracted from the videos. At test, participants made "old/new" judgments to identical and changed stimuli from the learning session and to novel faces. As expected, controls showed equivalent accuracy for static and dynamic conditions, with better performance for identical than for changed stimuli. Using the same procedure, we tested two prosopagnosic patients: MR, who has a lesion that destroyed the right OFA and FFA, and BP, who has a right anterior temporal lesion sparing these areas. For identical stimuli, MR and BP performed marginally better on static faces than on dynamic faces. For the more challenging problem of recognizing people from changed stimuli, both MR and BP performed substantially better on the dynamic faces. The motion advantage seen for MR and BP in the changed stimulus condition is consistent with the hypothesis that patients with a preserved pSTS may show better face recognition for moving faces.
Acknowledgement: CIHR MOP-77615, Canada Research Chair program, Michael Smith Foundation for Health Research (JB)

53.544 Neural differences between developmental prosopagnosics and super-recognizers
Richard Russell 1 (rrussell@gettysburg.edu), Xiaomin Yue 2, Ken Nakayama 3, Roger B.H. Tootell 2; 1 Department of Psychology, Gettysburg College, 2 Massachusetts General Hospital, Harvard Medical School, 3 Department of Psychology, Harvard University
Developmental prosopagnosia (DP) is a condition marked by very poor face recognition ability despite normal vision and absence of brain damage. At the opposite end of the face recognition spectrum, super-recognizers are people who are exceptionally good at recognizing faces (Russell, Duchaine & Nakayama, 2009). In previous fMRI studies, subjects with DP showed patterns of brain activity similar to normal subjects. Here we extended the range of face recognition ability by comparing fMRI activity in DPs versus super-recognizers. Test stimuli included 1) standard localizers for face-selective activity (face vs. place images), and 2) faces of normal versus reversed contrast polarity. In normal subjects, reversal of contrast polarity produces a deficit in both facial recognition and face-selective brain activity (George et al., 1999; Gilad, Meng, & Sinha, 2009). The results indicate that: 1) DPs had smaller Fusiform Face Areas (FFAs) than the super-recognizers; 2) super-recognizers showed higher face selectivity in FFA compared to DPs; 3) super-recognizers had stronger responses to faces in FFA compared to DPs; 4) in FFA, both groups showed a larger response to faces of normal contrast polarity compared to faces of reversed contrast polarity; 5) in FFA, super-recognizers did not show a larger contrast polarity bias compared to DPs. However, 6) super-recognizers did show a larger contrast polarity bias in the anterior temporal lobe, bilaterally. These results support previous evidence that some aspects of mid-level face processing (e.g., contrast polarity sensitivity in FFA) are automatic and bottom-up in nature, and do not differ as a function of facial recognition. Other aspects of our data (in FFA and the anterior temporal face region) may well be related to the facial recognition differences in these two populations.

53.545 Impaired face recognition despite normal face-space coding and holistic processing: Evidence from a developmental prosopagnosic
Tirta Susilo 1 (tirta.susilo@anu.edu.au), Elinor McKone 1; 1 Department of Psychology, Australian National University, Canberra, ACT 0200, Australia
It has long been presumed that face-space coding and holistic processing are primary determinants of successful face recognition. Here, however, we present a case of a severe developmental prosopagnosic who showed normal face-space coding and holistic processing.
Subject SP (a 23-year-old female) demonstrated severe face perception and face memory deficits, scoring 2.24-6.87 SD below the mean on the Cambridge Face Perception Test, the Cambridge Face Memory Test, and the MACCS Famous Face Test 2008. Her deficits appeared to be face-specific: performance was well within the normal range on a general intellectual test (Raven), a word memory task, the Birmingham Object Recognition Battery, and the Cambridge Car Recognition Test. To investigate SP's face-space coding, we used a wide range of face adaptation aftereffect experiments. Compared to controls, SP showed normal eye-height, expanded/contracted, and identity aftereffects for faces; she also showed normal expanded/contracted aftereffects for side views of horses. Importantly, we ruled out the interpretation that SP's normal face aftereffects arose from object-general representations: exactly like controls (Susilo, McKone, & Edwards, VSS 2009), SP showed weak transfer of aftereffects between a vertically-distorted T-shape and test faces varied in eye-height. To investigate holistic processing, SP completed three composite tasks (one naming, two same/different) with three different sets of faces. She demonstrated normal composite effects for upright faces. Crucially, she showed no composite effect for inverted faces, ruling out the possibility that the upright effect was driven by a general global attentional bias. The case of SP suggests that face-space coding and holistic processing alone may not be sufficient to explain face recognition, and speaks to the possibility of other important determinants behind successful face recognition performance.
Acknowledgement: Supported by Australian Research Council DP0450636 and DP0984558 to EM, scholarship support from ANU Centre for Visual Sciences, and an overseas student fee waiver from ANU Department of Psychology to TS.

53.546 Holistic perception of facial expression in congenital prosopagnosia
Romina Palermo 1 (Romina.Palermo@anu.edu.au), Megan Willis 2, Davide Rivolta 2, C. Ellie Wilson 2, Andrew Calder 3; 1 Department of Psychology, The Australian National University, Canberra, Australia, 2 Macquarie Centre for Cognitive Science (MACCS), Macquarie University, Sydney, Australia, 3 MRC Cognition and Brain Sciences Unit, Cambridge, England, UK
People with developmental or congenital prosopagnosia (CP) have never learned to adequately recognise faces, despite intact sensory and intellectual functioning. Recent research suggests these individuals can have a deficit restricted to recognising facial identity, with no apparent difficulties recognising facial expressions. These findings are currently being used to infer dissociable cognitive systems underlying facial identity and facial expression recognition. However, this logic only holds if the intact facial processes are achieved via the same mechanisms as in controls, rather than via compensatory strategies developed to overcome deficits. At present, this is yet to be established. The aim of this project was to determine whether CPs with apparently intact facial expression recognition abilities were using "normal" mechanisms to process this non-identity information from faces, or whether they were using atypical compensatory mechanisms. We assessed the facial expression recognition abilities of a group of adult CPs and a group of age- and sex-matched controls. All of the CPs demonstrated "spared" facial expression recognition abilities on a sensitive battery of facial expression tasks.
Given this, we assessed whether the CPs would also show evidence of holistic facial expression perception in a composite task. Composite facial expressions were composed by aligning the top half of one expression (e.g., anger) with the bottom half of another (e.g., happiness). Control participants were slower to identify the expressions in one half of the face when the halves were aligned rather than misaligned; that is, a "composite effect". Results suggest that the group of CPs also displayed a composite effect for expression. This indicates that CPs and controls use similar mechanisms to perceive facial expressions, and thus that facial identity and facial expression recognition might indeed be dissociable in developmental disorders of face recognition.

53.547 Are deficits in emotional face processing preventing perception of the Thatcher illusion in a case of prosopagnosia?
Natalie Mestry 1 (nm205@soton.ac.uk), Tamaryn Menneer 1, Hayward Godwin 1, Rosaleen McCarthy 2, Nicholas Donnelly 1; 1 School of Psychology, University of Southampton, UK, 2 Wessex Neurological Institute, Southampton General Hospital, UK
Behavioural studies using the Thatcher illusion are usually assumed to demonstrate configurality in upright face processing. Previously, we reported that PHD, an individual with prosopagnosia, could not discriminate Thatcherized faces but showed some evidence of residual face processing (VSS, 08). Recent functional imaging data suggest a role for emotional expression perception in discriminating Thatcherized from neutral faces (Donnelly & Hadjikhani, in preparation). Here we report a series of emotion perception tasks conducted with PHD and control participants. Results for PHD revealed: (1) specific deficits in distinguishing the magnitude of anger and disgust; (2) poor sensitivity when discriminating faces as one of two given emotions; (3) a within-category deficit for intensity, but no intensity deficit between emotions unless disgust was present; (4) a different solution for PHD relative to controls in a multidimensional scaling study of sameness judgements for faces varying in emotion identity and intensity. We consider possible relationships between PHD's emotion perception and his ability to discriminate Thatcherized from normal faces.
Acknowledgement: ESRC

53.548 Acquired prosopagnosia as a face-specific disorder: Ruling out the visual similarity hypothesis
Thomas Busigny 1 (thomas.busigny@uclouvain.be), Markus Graf 2, Eugène Mayer 3, Bruno Rossion 1; 1 Université Catholique de Louvain, Louvain-la-Neuve, Belgium, 2 Max Planck Institute for Human Cognitive and Brain Sciences, München, Germany, 3 University Hospital of Geneva, Switzerland
Understanding the nature of prosopagnosia, classically defined as a disorder of visual recognition specific to faces following brain damage, can inform us about how visual face recognition is performed in the normal human brain. However, according to a long-standing alternative view of prosopagnosia, the prosopagnosic impairment would rather reflect a general difficulty with fine-grained discrimination in visually homogeneous object categories (Faust, 1955; Damasio et al., 1982; Gauthier et al., 1999). We tested this hypothesis stringently with a well-known brain-damaged prosopagnosic patient (PS; Rossion et al., 2003) in three delayed matching experiments in which the visual similarity between the target and distractor was manipulated parametrically. We used three kinds of stimuli: novel 3D geometric shapes manipulated on single or multiple dimensions, morphed common objects (Hahn et al., 2009), and morphed photographs of a highly homogeneous familiar category (cars). In every experiment, there was no evidence of a steeper increase in error rates and RTs with increasing levels of visual similarity for the patient relative to normal observers. These data categorically rule out an account of acquired prosopagnosia in terms of a general problem of fine-grained discrimination in a visually homogeneous category. Finally, a fourth experiment with faces showed that, compared to normal observers, the patient's impairment with morphed faces was best revealed at the easiest levels of discrimination, i.e., when individual faces differ clearly in global shape rather than in fine-grained details. Overall, these observations indicate that the alternative view of prosopagnosia as a more general impairment of fine-grained discrimination in visually homogeneous object categories does not hold.

53.549 Typical and atypical development of a mid-band spatial frequency bias in face recognition
Hayley C. Leonard 1 (hleona01@students.bbk.ac.uk), Dagmara Annaz 2, Annette Karmiloff-Smith 3, Mark H. Johnson 1; 1 Centre for Brain and Cognitive Development, Birkbeck, University of London, 2 School of Psychology, Middlesex University, 3 Developmental Neurocognition Lab, Birkbeck, University of London
Previous research has found that adults rely on middle spatial frequencies for face recognition. The objectives of the present study were to follow the development of this mid-band bias in the typical population from early childhood, and to compare this development in autism and Williams syndrome. The current paradigm was adapted from the adult literature for use across development and involved masking different spatial frequency bands in face images. Poorer performance when a particular band was masked would imply that this band was being used during face recognition. Thirty-six typically developing controls (TD), eighteen children with high-functioning autism (HFA), and fourteen children with Williams syndrome (WS) between 7 and 15 years old learned to recognise two faces and then determined which face had been masked during presentation in a 2AFC task.
Masks covered the face images at either 8 (LSF), 16 (MSF), or 32 (HSF) cycles per image. The use of each spatial frequency was plotted over developmental time for the three groups. In the TD group, 7-year-olds relied significantly more on HSF information than 15-year-olds, while the use of LSFs and MSFs was not significantly predicted by age. An adult-like bias towards the mid-band was evident by the age of 15. Interestingly, the HFA group followed an almost identical pattern. The WS group, however, demonstrated no change in the use of HSFs with age, but a decrease in the use of LSFs between 7 and 15 years old. Both disorder groups displayed the adult-like mid-band bias found in typical development by the end of the age range studied. These data suggest that the mid-band bias emerges over an extended period of time during childhood. They also confirm the importance of comparing syndromes across a wide age range, demonstrating that the same adult outcome can be achieved through different developmental processes.
Acknowledgement: UK Medical Research Council, Grant G0701484 ID: 85031
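Band masking of this kind is typically implemented in the Fourier domain: the image spectrum is attenuated in an annulus centred on the band of interest, specified here in cycles per image. A hedged sketch of one plausible implementation (the study's exact filter shape and bandwidth are not given in the abstract, so the Gaussian band-stop on a log-frequency axis below is our assumption):

    # Band-stop masking of an image in the Fourier domain (illustrative;
    # the study's exact filter parameters are not given in the abstract).
    import numpy as np

    def mask_band(image, center_cpi, bandwidth=0.5):
        """Attenuate spatial frequencies around center_cpi (cycles per
        image) with a Gaussian band-stop of assumed width (octaves)."""
        h, w = image.shape
        fy = np.fft.fftfreq(h) * h          # cycles per image, vertical
        fx = np.fft.fftfreq(w) * w          # cycles per image, horizontal
        radius = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
        radius[0, 0] = 1e-6                 # avoid log(0) at the DC term
        stop = np.exp(-(np.log2(radius / center_cpi) ** 2)
                      / (2 * bandwidth ** 2))
        spectrum = np.fft.fft2(image) * (1 - stop)
        return np.real(np.fft.ifft2(spectrum))

    face = np.random.rand(256, 256)         # stand-in for a face image
    for cpi in (8, 16, 32):                 # LSF, MSF, HSF conditions
        masked = mask_band(face, cpi)

The logic of the paradigm then follows directly: if recognition accuracy drops most when the 16 cycles-per-image band is removed, the observer was relying on mid-band information.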
53.550 Configural and Feature-based Processing of Human Faces and Their Relation to Autistic Tendencies
Scott Reed 1 (sreed@uoregon.edu), Paul Dassonville 2; 1 Department of Psychology, University of Oregon, 2 Department of Psychology, Institute of Neuroscience, University of Oregon
The developmental disorder of autism has been associated with impairments in the ability to process the spatial configuration of facial features (Davies et al., 1994). Configural processing impairments have therefore been hypothesized to underlie the deficits in emotion recognition and the abnormal face processing strategies observed in this population (Teunisse & de Gelder, 2001). However, in examining the visual scan patterns of faces in those with autism, results have been inconsistent across studies due to the variability of symptoms displayed across small autistic samples. An alternative method that avoids the confound of variant symptoms is to measure autistic tendencies in neuro-typical individuals and examine how visual scan patterns of faces may be modulated by these tendencies. In addition, by disrupting configural processing in neuro-typical individuals through the inversion of faces, we can examine how configural impairments may interact with autistic tendencies to produce deficits in emotion recognition. Subjects (n = 112) completed the Autism, Empathizing, and Systemizing Quotient (AQ, EQ, and SQ) questionnaires and judged the emotional expression in upright and inverted faces while visual scan patterns were recorded. The EQ was negatively associated with fixation time on the mouth in upright faces, but this did not translate directly into changes in accuracy of emotion recognition. Surprisingly, the SQ was found to be positively associated with the magnitude of the inversion effect on accuracy, with high systemizing tendencies associated with greater impairments in processing facial affect when configural processing was further disrupted through inversion of the faces. Because individuals with autism exhibit even more extreme levels of systemizing than our neuro-typical participants, the difficulties that clinical populations demonstrate in recognizing emotion in upright faces may be due to this relationship with systemizing tendencies.

53.551 Relationships Among Emotion Categories: Emotion Aftereffects In High-Functioning Adults with Autism
M.D. Rutherford 1 (rutherm@mcmaster.ca); 1 Psychology, Neuroscience & Behaviour, McMaster University
Visual aftereffects have been used to determine psychological relationships between perceived emotional facial expressions (Rutherford, Chattha & Krysko, 2007). Findings indicate that there is an asymmetrical relationship among perceived emotion categories: numerous negative emotions oppose few positive emotions. People with autism spectrum disorders (ASD) are believed to have atypical perception of emotional facial expressions (e.g., Rutherford & McIntosh, 2007). Two experiments used visual aftereffects to probe the psychological relationships among emotion categories in those with ASD. Experiment 1 was designed to test whether adults with ASD experienced aftereffects when viewing emotional facial expressions. Happy or sad faces served as the adapting images and a neutral image of the same model was the probe image. 19 ASD and 19 control participants saw the adapting image for 45 s and the probe image for 800 ms. Observers chose a label in a 4AFC paradigm to describe the image. Clear evidence of aftereffects resulted. Experiment 2 was designed to probe relationships among the 6 basic emotions. Adapting images were the 6 basic emotions (one per trial) and the probe image was a neutral image of the same model. Responses were obtained via a 6AFC task in which observers chose one of the six basic emotion labels to describe the probe image. The control group replicated previous findings. The ASD group showed evidence of aftereffects, but different patterns of opposition: although happy opposed sad and sad opposed happy, the opposite of anger, fear, and disgust was sad, whereas it was happy for the control group. Also, the opposite of surprise was predominantly disgust for this group. This study is, to our knowledge, the first demonstration of visual aftereffects in a group with ASD. It also provides evidence that aftereffects can be used as a tool to reveal idiosyncratic organization of perceptual categories in special populations.
Acknowledgement: Social Sciences and Humanities Research Council

53.552 The Let's Face It! Program: The assessment and treatment of face processing deficits in children with autism spectrum disorder
Jim Tanaka 1 (jtanaka@uvic.ca), Julie Wolf 2, Robert Schultz 3; 1 Cognition and Brain Sciences Program, Department of Psychology, University of Victoria, 2 Child Study Center, Yale University School of Medicine, 3 Center for Autism Research, The Children's Hospital of Philadelphia
Although it has been well established that individuals with autism exhibit difficulties in their face recognition abilities, it has been debated whether the deficit reflects a category-specific impairment for faces or a perceptual bias toward local-level information. To address this question, the Let's Face It! Skills Battery was administered to children diagnosed with autism spectrum disorder (ASD) and IQ- and age-matched typically developing children. The main finding was that children with ASD were selectively impaired in their ability to recognize faces across changes in orientation and expression. Children with ASD exhibited preserved featural and configural discrimination in the mouth region, but compromised featural and configural discrimination in the eye region. Critically, for non-face objects, children with autism showed normal recognition of automobiles and a superior ability to discriminate featural and configural information in houses. These findings indicate that the face processing deficits in ASD are not due to a local processing bias, but reflect face-specific impairments characterized by a failure to form view-invariant face representations and impaired perception of information in the eyes. Can the face processing deficits of ASD be remediated through perceptual training? In a randomized clinical trial, children (N = 42) received 20 hours of face training with the Let's Face It! (LFI!) computer-based intervention. The LFI! program is comprised of seven interactive computer games that target the specific face impairments of autism. The main finding was that, relative to the waitlist ASD group (N = 37), children in the active treatment group demonstrated significant gains on the parts/wholes test. The treatment group showed improved analytic recognition of the mouth features and holistic recognition of the eyes. These results indicate that a relatively short-term intervention program can produce measurable improvements in the face processing skills of children with autism.
Acknowledgement: James S. McDonnell Foundation, the National Science Foundation (#SBE-0542013), and the National Science and Engineering Research Councils of Canada
Tuesday Afternoon Talks

Memory: Encoding and retrieval
Tuesday, May 11, 2:45 - 4:15 pm
Talk Session, Royal Ballroom 1-3
Moderator: Scott Murray

54.11, 2:45 pm
Evidence For a Fixed Capacity Limit in Visual Selection
Edward Ester 1 (eester@uoregon.edu), Keisuke Fukuda 1, Edward Vogel 1, Edward Awh 1; 1 Department of Psychology, University of Oregon
Recent studies suggest that visual working memory (VWM) is best described by a model that enables the storage of a discrete number of items with limited precision. Motivated by known similarities between the neural mechanisms of visual selection and working memory, we asked whether performance on an attention-demanding selection task could be described by a similar model. Observers were cued to monitor a variable number of locations in a masked visual display and discriminate the orientation of a single target. Performance on this task was well described by a model assuming that observers may select a fixed number of spatial locations with limited precision, while encoding no information from other locations (R2 = .94). In contrast, a shared resource model, which assumes no fixed selection limit and an inverse relationship between the number of selected locations and the precision of information that can be extracted from any one location, provided a relatively poor fit to the observed data (R2 = .46). Furthermore, selection capacity estimates obtained in this task were strongly predictive of VWM capacity estimates obtained in a memory-limited task that employed the same stimulus set. Finally, a cue-evoked N2pc (an ERP component thought to reflect the selection and individuation of objects) was strongly predictive of the number of locations that observers could successfully monitor. This predictive relationship suggests that behavior in this task was limited by the selection of multiple positions rather than the subsequent encoding or storage of information at these locations. Together, these findings suggest that visual selection and VWM storage depend on a common fixed-capacity system that enables the selection or storage of a discrete set of positions or items.
Acknowledgement: NIH R01MH087214
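The two models being compared make conveniently simple, diverging predictions for how performance should fall off with the number of cued locations N. A schematic sketch of that contrast (functional forms and parameter values are chosen for illustration; the paper's actual fitted equations are not reproduced in the abstract):

    # Schematic predictions of the two selection models (illustrative forms).
    def fixed_capacity(n_cued, k=3, precision=1.0):
        """Up to k locations are selected with fixed precision; if the
        probed location was not among them, the observer guesses."""
        guess_rate = max(0.0, 1.0 - k / n_cued)
        return guess_rate, precision

    def shared_resource(n_cued, total_precision=3.0):
        """One divisible resource: every cued location gets an equal share,
        so precision falls continuously and nothing is ever a pure guess."""
        return 0.0, total_precision / n_cued

    for n in (1, 2, 4, 8):
        g_fix, p_fix = fixed_capacity(n)
        g_shr, p_shr = shared_resource(n)
        print(f"N={n}: fixed (guess={g_fix:.2f}, prec={p_fix:.1f})  "
              f"shared (guess={g_shr:.2f}, prec={p_shr:.2f})")

The diagnostic difference is that the fixed-capacity account predicts rising guess rates with constant precision beyond k, whereas the shared-resource account predicts zero guessing but continuously degrading precision.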
54.12, 3:00 pm
Spatial working memory is limited by fixed resolution representations of location
Megan Walsh 1 (meggersw@jhu.edu), Leon Gmeindl 1, Jonathan Flombaum 1, Amy Shelton 1; 1 Department of Psychological and Brain Sciences, Johns Hopkins University
Is spatial working memory (SWM) capacity limited by the resolution with which locations are coded in memory? To address this question, we compared the spatial resolution of location representations in two nearly identical tasks typically characterized by different capacity limits. In particular, we first replicated previous work demonstrating that SWM span is reduced when subjects must recall items in a specific serial order (SO) compared to when they may recall them in any order (AO). In a follow-up experiment, we considered whether the different capacity limits in these tasks result from differences in the resolution of position representations. Specifically, standard span trials were intermingled with trials wherein the test display was missing a single target relative to the memory display, and participants were required to localize the missing target. Resolution was operationally defined as localization precision. Surprisingly, we found no significant resolution differences in SO compared to AO trials. This suggests that SO processing relies on a categorically non-spatial resource, and that it degrades SWM by dual-task interference, not by degrading spatial working memory per se. Importantly, we found no load effect on resolution in the AO condition for target loads within an individual's capacity limit; even a post-hoc comparison of one versus five targets revealed no significant resolution difference. In other words, there were no per-item costs associated with successfully remembering more targets, revealing that representations are coded at a fixed resolution regardless of load. Perhaps most interestingly, resolution did vary participant by participant, and these differences were positively and significantly correlated with individual SWM spans: participants with greater location resolution could remember more targets. Taken together, these results suggest that while, within an individual, objects are always encoded into memory with a fixed spatial resolution, person-by-person differences in resolution may determine differences in capacity.

54.13, 3:15 pm
Encoding of a scene into memory is enhanced at behaviorally relevant points in time
Jeffrey Lin 1 (jytlin@u.washington.edu), Amanda Pype 1, Scott Murray 1, Geoffrey Boynton 1; 1 University of Washington
Considerable evidence suggests that the encoding of visual input into memory is strongly affected by attention. For example, encoding of a scene is reduced if spatial attention is drawn away by a demanding rapid-serial-visual-presentation (RSVP) task at fixation. However, encoding under such conditions of divided attention is improved if a scene is particularly salient or novel. Here, we show that the encoding of a scene is also enhanced at behaviorally relevant points in time, regardless of the content of the scene and the focus of spatial attention. In Experiment 1, after being familiarized with a set of scenes, participants were presented with 16 scenes in an RSVP and were surprisingly unable to recognize whether a specific test scene had appeared in the previous sequence of scenes. In Experiment 2, the same set of scenes was presented, but attention was directed to a demanding task at fixation where the goal was to identify a white target letter among a stream of black distractor letters. As before, one scene was presented for a recognition test immediately after each sequence. Surprisingly, recognition performance was at chance except when the test scenes had been presented concurrently with the white target letters. When interviewed, subjects were unaware of their enhanced memories for these target-concurrent scenes. In Experiment 3, enhanced encoding of visual scenes was also found at the specific time of an auditory target. The results suggest that at behaviorally relevant points in time, visual traces of the visual field are automatically encoded into memory regardless of the spatial focus of attention. It is as though the visual system is performing a 'screen capture' at the time of target identification; such a screen-capture mechanism may play an important role in the retrospective analysis of important events.
Acknowledgement: National Institute of Health (NIH) EY 12925 to GMB.

54.14, 3:30 pm
Magnetic stimulation of frontal brain areas: visual working memory suffers, other forms of visual short-term memory do not
Ilja G. Sligte 1 (I.G.Sligte@uva.nl), H. Steven Scholte 1, Victor A.F. Lamme 1; 1 Cognitive Neuroscience Group, Psychology, University of Amsterdam
To guide our behavior in successful ways, we often need to rely on information that is no longer in view but maintained in visual short-term memory (VSTM). According to recent insights, maintenance of information in VSTM can happen at multiple levels in the neural hierarchy: either low in primary visual cortex (iconic memory), intermediate in extrastriate visual cortex (fragile VSTM), or high in parietal and frontal cortex (working memory). Previously, we [1] have shown that both iconic memory and fragile VSTM can be disrupted while leaving working memory intact (by showing light masks and pattern masks, respectively). Now, by delivering transcranial magnetic stimulation to the right dorsolateral prefrontal cortex (DLPFC) during stimulus maintenance, we show that working memory capacity can be reduced while leaving fragile VSTM intact. This implies that VSTM stores at different levels of the neural hierarchy operate relatively independently of each other.
[1] Sligte, I.G., Scholte, H.S., & Lamme, V.A.F. (2008). Are there multiple visual short-term memory stores? PLoS ONE 3, e1699.
54.15, 3:45 pm
Selective Remembering: Multivoxel Pattern Analysis of Cortical Reactivation During Retrieval of Visual Images
Brice Kuhl 1 (brice.kuhl@yale.edu), Jesse Rissman 2, Marvin Chun 1, Anthony Wagner 2,3; 1 Yale University, Department of Psychology, 2 Stanford University, Department of Psychology, 3 Stanford University, Neurosciences Program
Episodic retrieval is thought to involve reactivation of cortical regions that support encoding. The fidelity of our memories is putatively related to how well target memories are selectively reactivated during retrieval attempts. The present study employed multivoxel pattern analysis (MVPA) of fMRI data from a visual memory task to assess how neural measures of cortical reactivation relate to episodic retrieval. We used a paradigm in which individual cue words (nouns) were associated with photographs of well-known faces and/or scenes. Cue words were associated with a face, a scene, or both. Subjects were then presented with each cue word alone and attempted to retrieve the most recently studied photograph associated with it. Competition, and with it the demand for selective retrieval, existed whenever a cue word was associated with multiple images. Behavioral results indicated high levels of overall retrieval success, but competitive retrieval was associated with lower recall rates and lower levels of retrieval detail. Pattern classification analyses indicated that patterns of activity in ventral temporal cortex elicited during encoding were robustly reactivated during retrieval; that is, classification of the category of retrieved items (face vs. scene) was well above chance. Indeed, the degree of reactivation revealed by pattern analysis increased as a function of the retrieval detail that subjects reported, suggesting a link between reactivation and the phenomenology of visual remembering. Moreover, high levels of retrieval detail were associated with increased activation in the hippocampus, suggesting a role for the hippocampus in supporting detailed retrieval and cortical reactivation. Finally, the behavioral costs associated with competition between images were also reflected in neural measures of ventral temporal reactivation, as classification success was poorer when cue words were associated with multiple images. These results demonstrate a tight link between the subjective experience of visual remembering and neural evidence of perceptual reactivation.
Acknowledgement: This research was supported by research grants NIMH 5R01-MH080309 to A.D.W. and NIH R01-EY014193 and P30-EY000785 to M.M.C.

54.16, 4:00 pm
Object features limit the precision of working memory
Daryl Fougnie 1,2,4 (d.fougnie@vanderbilt.edu), Christopher L. Asplund 1,3,4, Tristan J. Watkins 4, René Marois 1,2,4; 1 Vanderbilt Vision Research Center, 2 Center for Integrative and Cognitive Neuroscience, 3 Vanderbilt Brain Institute, 4 Department of Psychology, Vanderbilt University
An influential theory (Luck & Vogel, 1997) suggests that objects, rather than individual object features, are the fundamental units that limit our capacity to temporarily store visual information. This conclusion was drawn from paradigms in which the observer must detect whether a change occurred between a sample and a probe array when the arrays are separated by a short retention interval.
54.16, 4:00 pm
Object features limit the precision of working memory
Daryl Fougnie1,2,4 (d.fougnie@vanderbilt.edu), Christopher L. Asplund1,3,4, Tristan J. Watkins4, René Marois1,2,4; 1Vanderbilt Vision Research Center, 2Center for Integrative and Cognitive Neuroscience, 3Vanderbilt Brain Institute, 4Department of Psychology, Vanderbilt University
An influential theory (Luck & Vogel, 1997) suggests that objects, rather than individual object features, are the fundamental units that limit our capacity to temporarily store visual information. This conclusion was drawn from paradigms in which the observer must detect whether a change occurred between a sample and a probe array when the arrays are separated by a short retention interval. Such 'change detection' paradigms reveal that increasing the number of objects, but not the number of distinct features, affects working memory performance (Luck & Vogel, 1997; Olson & Jiang, 2002). Using instead a paradigm that independently estimates the number and precision of items stored in working memory (Zhang & Luck, 2008), here we show that the storage of object features is indeed costly. We collected estimates of the precision and guess rate of working memory responses as participants had to remember the color, orientation, or both the color and orientation of isosceles triangles. We found that while the quantity of stored objects is largely unaffected by increasing the number of features per object (no change in guess rate), the fidelity of these representations dramatically decreased. Moreover, selective costs in precision depended on multiple features being contained within the same objects, as effects on both guess rate and fidelity were obtained when the orientation and color features were presented in distinct objects. Thus, in addition to providing evidence against cost-free conjunctions, our results demonstrate that storage of objects and features both limit visual working memory capacity. We argue that previous reports of cost-free conjunctions were due to the insensitivity of the tasks to changes in representational precision. Consistent with this interpretation, we found, using a change detection task, that manipulations of feature load do affect performance when the task places demands on the precision of the stored visual representations.
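The Zhang & Luck (2008) analysis referenced here models each recall response as either a noisy memory of the target or a random guess. A minimal maximum-likelihood fit of that two-parameter mixture might look like the following; the starting values and bounds are assumptions, not the authors' settings.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import vonmises

def fit_mixture(errors):
    """Fit a uniform + von Mises mixture to recall errors (radians,
    response minus target). Returns (guess_rate, kappa), where kappa
    is the von Mises concentration, i.e. memory precision."""
    def neg_log_lik(params):
        guess_rate, kappa = params
        like = (guess_rate / (2 * np.pi)
                + (1 - guess_rate) * vonmises.pdf(errors, kappa))
        return -np.sum(np.log(like))
    fit = minimize(neg_log_lik, x0=[0.1, 5.0],
                   bounds=[(1e-6, 1 - 1e-6), (1e-3, 500.0)])
    return fit.x

# Example: mostly precise responses plus ~10% random guesses
rng = np.random.default_rng(1)
errs = np.where(rng.random(500) < 0.1,
                rng.uniform(-np.pi, np.pi, 500),
                rng.vonmises(0.0, 8.0, 500))
print(fit_mixture(errs))
```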
Attention: Models and mechanisms of search
Tuesday, May 11, 2:45 - 4:15 pm
Talk Session, Royal Ballroom 4-5
Moderator: Arni Kristjansson

54.21, 2:45 pm
Is object recognition serial or parallel?
Alec Scharff1 (scharff@u.washington.edu), John Palmer1; 1Department of Psychology, University of Washington
Can one recognize multiple objects in parallel, as if they were simple features? Or does one "read" objects one-by-one, as if they were words? We consider three models of divided attention: a standard serial model, an unlimited-capacity parallel model, and a fixed-capacity parallel model. The standard serial model analyzes objects one-by-one. The unlimited-capacity parallel model analyzes objects independently and simultaneously. The fixed-capacity parallel model analyzes objects simultaneously, but acquires information at a fixed rate. Methods: For stimuli, we used images of similar animal categories (e.g., bear, wolf, fox). Observers searched a brief display of animal images for target categories. This set of stimuli minimized low-level differences between categories, such as object textures and spatial-frequency spectra. For the experiment, we used several variations on the simultaneous-sequential paradigm to distinguish among the three models. Previously, this paradigm has shown that simple features are processed by an unlimited-capacity parallel model and words are processed by a standard serial model. Results: Current results for objects favor the fixed-capacity parallel model over the standard serial model. And, more decisively, the results reject the unlimited-capacity parallel model.
Acknowledgement: University of Washington Royalty Research Fund

54.22, 3:00 pm
Attention and Uncertainty Limit Visual Search in Noisy Conditions
Richard Hetley1 (rhetley@uci.edu), Barbara Dosher1, Zhong-Lin Lu2; 1Memory, Attention and Perception Laboratory (MAPL), Department of Cognitive Sciences and Institute of Mathematical Behavioral Sciences, University of California, Irvine, CA 92697-5100, USA, 2Laboratory of Brain Processes (LOBES), Dana and David Dornsife Cognitive Neuroscience Imaging Center, Departments of Psychology and Biomedical Engineering, University of Southern California, Los Angeles, CA 90089-1061, USA
Uncertainty models based on signal detection theory (SDT; Green & Swets, 1966), with an unlimited-capacity attention system (Palmer, 1994; Eckstein, 1998), have provided an excellent account of the set-size effects in visual search accuracy. However, spatial cuing experiments have found strong effects of attention: precuing improves accuracy, especially when the target is embedded in a high level of external noise (Lu & Dosher, 1998; Dosher & Lu, 2000). In this research, we attempt to resolve the apparently contradictory conclusions from these two major lines of inquiry in spatial attention. We hypothesize that the conditions in which an effect of spatially-cued attention is substantial correspond to conditions in which attention effects over and above uncertainty occur in visual search. Our analysis suggests that many of the classical visual search experiments have been carried out using stimulus conditions where attention effects on perception are least likely to be found. We studied visual search in a range of external noise and contrast conditions for low and high template overlap (target-distractor similarity). We found that set-size effects in high external noise conditions are larger than expected from decision uncertainty alone: log-log slopes increase sharply with increasing external noise levels, especially in high-precision judgments, showing improved external noise exclusion at smaller set sizes. Additional effects occur in low noise. All these results are well accounted for by a visual model that uses the elaborated perceptual template model (ePTM; Jeon, Lu & Dosher, 2009), the attention mechanisms developed in the PTM framework (Lu & Dosher, 1998; Dosher & Lu, 2000), and the SDT-based uncertainty calculations. Our empirical results and theoretical model generate a common taxonomy of visual attention in spatial cuing and visual search.
Acknowledgement: Funded by 5R01MH81018
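The uncertainty account that both of these abstracts build on is easy to simulate. The sketch below shows how set-size effects arise from decision noise alone under an unlimited-capacity signal detection "max rule" (Palmer, 1994); it is a baseline illustration, not the authors' elaborated perceptual template model.

```python
import numpy as np

rng = np.random.default_rng(0)

def search_accuracy(d_prime, set_size, n_trials=100_000):
    """Predicted 2AFC visual-search accuracy under an SDT max rule.

    Each interval yields `set_size` independent unit-variance noisy
    responses; the target interval contains one signal of strength
    d_prime. The observer picks the interval with the larger maximum.
    Accuracy declines with set size from decision uncertainty alone."""
    target = rng.normal(0.0, 1.0, (n_trials, set_size))
    target[:, 0] += d_prime              # one element carries the signal
    blank = rng.normal(0.0, 1.0, (n_trials, set_size))
    return np.mean(target.max(axis=1) > blank.max(axis=1))

for n in (1, 2, 4, 8):
    print(n, search_accuracy(2.0, n))
```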


54.23, 3:15 pm
Selective attention to transparent motion is by blocking and not by attenuation
John Palmer1 (jpalmer@uw.edu), Victor D. Nguyen1, Cathleen M. Moore2; 1Department of Psychology, University of Washington, 2Department of Psychology, University of Iowa
A filtering paradigm was used to study selection by feature-based attention in transparent motion displays. Observers viewed a single field of dynamic random dots with net motion in several possible directions. The task was to discriminate between relevant directions of motion while ignoring irrelevant directions of motion. For example, one might discriminate between leftward and rightward motion while ignoring diagonal motions. By manipulating the motion strength in both relevant and irrelevant directions, one can test for selection by blocking versus selection by attenuation. With blocking, withdrawing attention prevents detection of even a strong stimulus; with attenuation, withdrawing attention can be overcome by a sufficiently strong stimulus. The results were consistent with blocking and not attenuation. They rule out models of attenuation such as a motion analog of the contrast-gain model of contrast detection. Possible models of blocking include attention switching, response gain, or a selection process in decision rather than perception.

54.24, 3:30 pm
There are no attentional costs when selecting multiple movement goals
Donatas Jonikaitis1 (djonikaitis@googlemail.com), Heiner Deubel1; 1Ludwig Maximilians University
Strong dual-task costs are observed when people perform two tasks at the same time, and these costs appear across different tasks, effectors, and modalities. Reaction time is commonly used to measure dual-task costs, and inferences about the underlying cognitive factors leading to dual-task competition are made based on shorter or longer reaction times. Based on reaction times, it has been suggested that movement goal selection is a major factor leading to dual-task costs when making multiple goal-directed movements at the same time. To investigate this, we asked participants to point and look at two different locations while we varied the time between the cues to start the eye and hand movements. As in previous studies, we observed that participants were slower to start their eye or hand movement if they were planning another movement at that time. Identical results were observed when participants were planning bimanual movements. Second, we measured whether spatial attention caused these observed dual-task costs. Movements might have been delayed because participants were not able to select multiple movement locations in parallel. We measured attention allocation by presenting short attentional probes at different stages of movement planning. In strong contrast to the dual-task costs observed in movement latencies, participants allocated their attention to eye and hand movement goals in parallel and without any cost. The same pattern of results was evident for bimanual movements. These results demonstrate that the observed dual-task costs in goal-directed movements do not arise from movement goal selection. The results also suggest a dissociation between movement latencies and movement goal selection, in that longer movement latencies do not equate to delayed movement goal selection.

54.25, 3:45 pm
How Does Reflexive Visuospatial Attention Speed Target Processing?
Naseem Al-Aidroos1 (naseem.al.aidroos@utoronto.ca), Maha Adamo1, Jacky Tam1, Susanne Ferber1,2, Jay Pratt1; 1University of Toronto, 2Rotman Research Institute
Irrelevant transient stimuli can speed responses to visual targets that appear soon after at the same location (relative to other locations). How do these stimuli speed target processing? Traditionally, they are thought to act as cues that reflexively capture visuospatial attention, a mechanism that provides processing priority to specific regions of the visual field. Here we report behavioral and electrophysiological evidence of the limits of this explanation. In the first experiment we show that while targets are identified faster at cued locations (the classic cueing effect), this effect is increased when the cue and target are visually similar. Thus, the reflexive cueing effect is not a general attentional enhancement of all visual processing within a region of space; rather, some component of the effect is related to the identity of the cue. In a second experiment we used attentional control settings to manipulate whether cues captured attention or not and measured event-related potentials. Cues that captured attention produced a posterior contralateral positivity between 200 and 400 ms after their onset that was absent when they did not capture attention. This component resembles the Ptc, which has been associated with the resolution of perceptual competition between proximal stimuli. More importantly, a similar component was observed time-locked to the target onset, except when the target appeared at a cued location. Thus cues may speed target processing by inducing competition resolution, making this process unnecessary when the target subsequently appears at that location. These results do not fit well with the notion that reflexive attention is a mechanism deployed to enhance visual processing within regions of space. Instead, the present results suggest that transient stimuli initiate perceptual processing, and subsequent targets can exploit these ongoing processes if, for example, they appear at the same location or are visually similar.
Acknowledgement: Natural Sciences and Engineering Research Council of Canada
54.26, 4:00 pm
"Reversals of fortune" in visual search: Fast modulatory effects of financial reward upon visual search performance
Arni Kristjansson1,2 (ak@hi.is), Olafia Sigurjonsdottir1, Jon Driver2; 1Faculty of Psychology, School of Health Sciences, University of Iceland, 2Institute of Cognitive Neuroscience, University College London
Rewards have long been known to modulate overt behavior. Less is known of their effect upon attentional and perceptual processes. Here we investigated whether the (changeable) monetary reward level associated with two different 'pop-out' targets might affect color-singleton visual search and the phenomenon of 'priming of pop-out', i.e. repetition priming for one target type versus the other. Our observers searched for a target diamond shape with a singleton color among distractor diamond shapes of another color (e.g. green among red, or vice versa), then judged whether the target had a notch at top or bottom. Correct judgments led to monetary reward, with symbolic feedback indicating this immediately, while actual financial rewards accumulated for receipt at study end. One particular target color led to higher (10:1) reward for 75% of its correct judgments, while the other singleton target color (counterbalanced over participants) received the higher reward on only 25% of trials. These reward schedules led not only to faster performance overall for the more rewarding target color, but also to increased trial-to-trial priming of pop-out for targets of that color. The actual level of reward received on the preceding trial affected this, as did (orthogonally) the likely level of reward. When reward schedules were reversed within blocks, without explicit instruction, a corresponding reversal of the effect upon search performance emerged significantly within around six trials, asymptoting at around fifteen trials, without observers' explicit knowledge of the contingency. These results establish that not only pop-out search but even priming of pop-out can be influenced by target reward levels, with search performance and priming effects dynamically tracking changes in reward contingencies.
Acknowledgement: University of Iceland Research Fund

Spatial vision: Crowding and mechanisms
Tuesday, May 11, 5:15 - 7:00 pm
Talk Session, Royal Ballroom 1-3
Moderator: Jeremy Freeman

55.11, 5:15 pm
Crowding and metamerism in the ventral stream
Jeremy Freeman1 (freeman@cns.nyu.edu), Eero Simoncelli1,2; 1Center for Neural Science, New York University, 2Howard Hughes Medical Institute, New York University
Vision is degraded in the periphery. The phenomenon of "crowding" provides a striking example: objects closer together than half their eccentricity are unrecognizable. Crowding has been described as statistical or textural averaging of features over spatial regions (Parkes et al., 2001), and recently Balas et al. (2009) showed that applying a texture analysis-synthesis model (Portilla & Simoncelli, 2000) to crowded stimuli simulates crowding effects. We develop this hypothesis with an explicit model of extrastriate ventral stream processing that performs eccentricity-dependent pooling across the entire visual field. Images are decomposed with V1-like filters, followed by simple- and complex-cell-like nonlinearities. Pairwise products among V1 outputs are averaged within overlapping spatial regions that grow with eccentricity according to a single scaling parameter (the ratio of size to eccentricity). If this model captures the information available to human observers, then two properly fixated images with identical model responses should be metamers. We perform experiments to determine the scaling parameter that produces metameric images. Given a natural image, we generate images that have identical model responses, but are otherwise as random as possible. We measure discriminability between such synthetic images as a function of scaling. When images are statistically matched within small pooling regions, performance is at chance (50%), despite substantial differences in the periphery. With larger pooling regions, peripheral differences increase, and discriminability approaches 100%. We fit the psychometric function to estimate the pooling regions (scaling) over which the observer estimates statistics. The result is consistent with the known eccentricity-dependence of crowding, and also with receptive field sizes in macaque mid-ventral areas, particularly V2. Finally, we show that metamers synthesized from classic crowding stimuli (e.g., groups of letters) yield images with jumbled, unidentifiable objects. Thus, the model associates the spatial extent of crowding with mid-ventral receptive field sizes, and provides specific hypotheses for the computations performed by underlying neural populations.
Acknowledgement: NSF Graduate Student Fellowship (J.F.)
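The final fitting step above, estimating the critical scaling from discrimination performance, can be sketched as follows; the Weibull-style form, the data points, and the initial guesses are illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def pc_vs_scaling(s, s0, beta):
    """Proportion correct for discriminating synthesized images, rising
    from chance (0.5) toward 1 as the scaling parameter s exceeds the
    critical scaling s0 of the observer's pooling regions."""
    return 0.5 + 0.5 * (1.0 - np.exp(-(s / s0) ** beta))

# Illustrative data: discriminability vs. model scaling (size/eccentricity)
scaling = np.array([0.2, 0.35, 0.5, 0.8, 1.2])
p_correct = np.array([0.51, 0.56, 0.70, 0.89, 0.98])
(s0, beta), _ = curve_fit(pc_vs_scaling, scaling, p_correct, p0=[0.5, 2.0])
print(s0, beta)
```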
55.12, 5:30 pm
Reduced Neural Activity with Crowding is Independent of Attention and Task Difficulty
Rachel Millin1 (rmillin@usc.edu), A. Cyrus Arman1, Bosco S. Tjan1,2; 1Neuroscience Graduate Program, University of Southern California, 2Department of Psychology, University of Southern California
Form vision in peripheral fields is limited by crowding, the impaired identification of a target object when it is surrounded by other items. Crowding is thought to reflect a failure in feature selection and integration, due to the limited spatial resolution of attention or maladapted low-level receptive field properties related to lateral interactions. We investigated the top-down influences of task difficulty and attention on the neuronal response to crowded and non-crowded letter triplets in the periphery using fMRI. In our first experiment, we changed the distance between the target and flanking letters, and measured the BOLD response while subjects identified the target letter. In a second experiment, we added conditions identical to the non-crowded stimuli but partially scrambled the target letter to increase task difficulty. We found that decreasing target-flanker separation was associated with decreased BOLD in V2 through V4 in both experiments, but degrading the target in the non-crowded condition did not cause any such decrease. Therefore, the reduced BOLD signal observed in the crowded condition was not due to increased task difficulty per se or any related differences in attention demands. To further determine any interaction between attention and crowding, we measured the BOLD signal induced by peripherally presented crowded and non-crowded letter triplets while subjects' attention was directed to an unrelated task at fixation. Two conditions were added in which only the flankers were displayed, without the center letter. We found that in the non-crowded condition, adding the center letter increased the BOLD signal, while in the crowded condition the center letter produced no signal increase, consistent with the interpretation that crowding suppressed the signal from the center letter. Together, these results show that the low-level neural response correlated with crowding is independent of task difficulty and attention, indicative of a bottom-up, input-driven cause of crowding.
Acknowledgement: NIH/NEI R03EY016391, R01EY017707

55.13, 5:45 pm
Crowding and cortical reorganization at and around the PRL: model and predictions
Anirvan Nandy1 (nandy@usc.edu), Bosco Tjan1,2; 1Dept. of Psychology, University of Southern California, 2Neuroscience Graduate Program, University of Southern California
Visual crowding is a ubiquitous phenomenon in peripheral vision and manifests itself as the marked inability to identify shapes when targets are flanked by other objects. It presents a fundamental bottleneck to object recognition for patients with central vision loss. Such patients typically use a stable location in their peripheral retina for fixation for a given task. This location is usually very close to the central scotoma and is known as the preferred retinal locus (PRL). Preliminary studies (Chung & Lin, 2008, ARVO) have shown that the crowding zone measured at the PRL does not exhibit the marked anisotropy that is a hallmark of crowding in the normal periphery (Toet & Levi, 1992). This suggests that there is a process of cortical reorganization that reshapes the crowding zone at the PRL. However, little is known about the underlying causes and the temporal trajectory of the reorganization process. Recently we proposed a computational model that explains crowding as a consequence of inappropriate image statistics that drive the lateral (long-range horizontal) connections underlying the normal peripheral visual field (Nandy & Tjan, 2009, SfN). The temporal overlap of spatial attention and subsequent fovea-centric saccadic eye movements distorts the image statistics to produce the radial anisotropy. By adding to our model the central scotoma and the PRL measured from a patient, we show that the altered image statistics due to the temporal overlap of spatial attention and PRL-centric eye movements would drive the crowding zone at the PRL to become isotropic. We also delineate the developmental time course from pre-scotoma anisotropy to post-scotoma isotropy as a function of exposure to the post-scotoma statistics. Further, our model predicts that the crowding zone at an intact non-PRL location would also undergo reorganization, from anisotropy pointing toward the fovea (pre-scotoma) to anisotropy pointing toward the PRL (post-scotoma).
Acknowledgement: NIH EY016093, EY017707
55.14, 6:00 pm
Crowding combines
Denis G. Pelli1,2 (denis.pelli@nyu.edu), Jeremy Freeman2, Ramakrishna Chakravarthi3; 1Psychology, New York University, 2Center for Neural Science, New York University, 3CNRS, Faculté de Médecine de Rangueil, Université Paul Sabatier, Toulouse, France
Visual crowding provides a window into object recognition: observers fail to recognize objects in clutter. Here we ask, what do they see instead? We analyze observers' errors to show that crowding necessarily reflects the combination of information across multiple complex objects, rather than the mislocalization (or substitution) of one object for another. First, we presented single letters, randomly chosen, in noise in the periphery and tabulated a confusion matrix based on observers' (n=3) reports. We then tested the same observers in a classic crowding task, in which they viewed a triplet (target and two flankers) of closely spaced letters in the periphery (10 deg) and reported the identity of the middle target. For each observer, we tailored the triplets based on that observer's single-letter confusion matrix. One flanker was chosen to be a letter that was most confused with (most "similar" to) the target, and the other was chosen to be a letter that was least confused (least similar). Consistent with the literature, when mistaken, observers tend to report the flankers. The crucial issue, however, is which of the two flankers observers report on these trials. Blind substitution predicts that the two flankers (similar and dissimilar) are equally likely to be reported. Instead, we find that observers are more likely to report the similar flanker (70%) than the dissimilar flanker (30%). The effect of similarity on erroneous responses proves that the response combines information from both the target and the reported flanker. By systematically tailoring the stimuli, we induced a bias in the reports that reveals a pooled, "mongrel-like," underlying percept. Our method, applicable to any object, generalizes the evidence for "compulsory pooling" from the narrow domain of grating orientation (Parkes et al., 2001) to complex, everyday objects.
Acknowledgement: NIH R01-EY04432
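The stimulus-tailoring logic above, picking each observer's most- and least-confusable flankers from their single-letter confusion matrix, reduces to a few lines; symmetrizing the matrix and excluding the target itself are assumptions about details the abstract leaves open.

```python
import numpy as np

def tailor_flankers(confusions, target):
    """Return (most_similar, least_similar) letter indices for a target,
    given a 26x26 confusion matrix (rows: presented, columns: reported)."""
    sim = (confusions + confusions.T) / 2.0   # crude symmetric similarity
    row = sim[target].astype(float)
    row[target] = np.nan                      # a letter cannot flank itself
    return int(np.nanargmax(row)), int(np.nanargmin(row))
```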


55.15, 6:15 pm
Saccade-distorted image statistics explain target-flanker and flanker-flanker interactions in crowding
Bosco S. Tjan1,2 (btjan@usc.edu), Anirvan S. Nandy1; 1Department of Psychology, University of Southern California, 2Neuroscience Graduate Program, University of Southern California
The ability to identify an object in peripheral vision can be severely impaired by clutter. This phenomenon of crowding is ubiquitous and is thought to be a key limitation of form vision in the periphery. The interactions between target and flankers are complex, and no single model can account for the myriad of results. Recently we argued that crowding is due to the improper encoding of image statistics in peripheral V1 (Nandy & Tjan, 2009, SfN). We hypothesize that image statistics are distorted due to a temporal overlap between spatial attention, which gates the acquisition of image statistics, and the subsequent saccadic eye movement it elicits. In terms of mutual information between edge orientations at neighboring positions, the distortion turns smooth continuation into repetition. By fixing all but one parameter in the model using well-known anatomical and eye-movement data unrelated to crowding, we found that the spatial extent of the distortion in orientation statistics precisely reproduces the spatial extent of crowding, with all its tell-tale characteristics: Bouma's Law, radial-tangential anisotropy, and inward-outward asymmetry. The reproduction is robust to the sole free parameter of the model (the hypothesized temporal overlap between attention and saccade). Here, we extended the model to quantify the effects of crowding on orientation discrimination with a Gabor target and different flanker configurations. We proceeded from first principles by using the saccade-distorted image statistics as priors in a Bayesian formulation of a simultaneous segmentation and estimation task within the computational framework of a random field. This model can account for the recent finding of Levi and Carney (2009, Curr. Biol.) that more flankers can cause less crowding. It can also match ordinally the varying levels of crowding induced by the different flanker configurations used in Livne and Sagi (in press, JOV).
Acknowledgement: NIH R01EY016093, R01EY017707

55.16, 6:30 pm
Crowded by drifting Gabors: Is crowding based on physical or perceived stimulus position?
Gerrit W. Maus1,3 (gwmaus@ucdavis.edu), Jason Fischer1,2,3, David Whitney1,2,3; 1Center for Mind and Brain, UC Davis, 2Department of Psychology, UC Davis, 3Department of Psychology, UC Berkeley
In crowding, a stimulus in the periphery (the target) becomes hard to recognize when other stimuli (crowders) are presented nearby. Crowding depends on the distance of the crowders from the target (Bouma, 1970). If the position of the crowders is misperceived, which position matters for the degree of crowding, the physical or the perceived position? In the present experiment the crowders were drifting Gabor gratings whose position appeared shifted in the direction of motion (DeValois & DeValois, 1991). Six gratings, drifting either inward or outward, surrounded a peripheral target grating at systematically varied distances. Observers judged the orientation of the target grating, which could be tilted 2 deg clockwise or counter-clockwise. Observers performed worse, i.e., they experienced more crowding, when the gratings were drifting towards the target and thus were perceived in a position closer to the target. The difference in performance between inward and outward drifting crowders corresponds to the difference in perceived positions, as determined in a separate psychophysical assessment of the size of the mislocalization illusion. The extent of crowding was fully determined by the perceived position, which in the present case is only available after the integration of motion information. Arguably, feedback from motion-sensitive extra-striate areas is involved in the motion mislocalization illusion. If so, the results demonstrate that crowding is not based solely on spatial interactions in the feed-forward stream. Furthermore, the present results indicate that the influence of motion on position perception occurs early, before low-level features such as orientation can be perceptually judged.
Acknowledgement: NIH grant EY018216
55.17, 6:45 pm
Letter crowding increases with flanker complexity
Jean-Baptiste Bernard1 (jb.bernard@berkeley.edu), Susana Chung1; 1School of Optometry, UC Berkeley
Crowding refers to the deleterious influence of nearby contours on visual discrimination. A popular theory of crowding holds that it arises as a consequence of inappropriate feature integration. This theory predicts that the effect of crowding increases with the number of features in close proximity to the target. We tested this prediction by examining how letter crowding depends on the perimetric complexity of flanking letters, a measurement that correlates with the number of features. We analyzed a total of 96,000 trials in which 16 observers (6,000 trials per observer) identified the middle (target) letter of sequences of three crowded lowercase letters (center-to-center separation = 0.8x the x-height) presented at 10° in the inferior visual field. Each letter was randomly drawn from the 26 letters of the Roman alphabet. Eight observers were tested with the Times-Roman font and the other eight with the Courier font. The perimetric complexity (perimeter squared / "ink" area) was determined for each letter, and the sum was calculated for the pair of flankers on each trial. We binned the perimetric complexity of the flankers into 10 groups. In general, the error rate of identifying the target letter increased linearly with the perimetric complexity of the flankers, for both the Times-Roman (r = 0.98) and Courier (r = 0.99) fonts. However, the increase of the error rate with flanker complexity depends on the complexity of the target letter itself: target letters of low complexity are more susceptible to flanker complexity, while target letters of high complexity are less susceptible. These findings are consistent with the prediction based on the inappropriate feature integration account of crowding, and strongly support the speculation that feature integration is a competitive process that depends on the relative proportion of features between the target and flankers.
Acknowledgement: Supported by NIH grant R01-EY012810
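Perimetric complexity as defined above (perimeter squared divided by "ink" area) can be computed directly from a binary glyph image. The edge-counting perimeter estimate below is one simple convention; thresholding of anti-aliased fonts is assumed.

```python
import numpy as np

def perimetric_complexity(glyph):
    """Perimeter^2 / ink area for a binary image (True = ink). The
    perimeter is approximated by counting ink/background pixel edges."""
    g = np.pad(glyph.astype(bool), 1)  # pad with background pixels
    perimeter = (np.sum(g[:, 1:] != g[:, :-1])
                 + np.sum(g[1:, :] != g[:-1, :]))
    return perimeter ** 2 / g.sum()

# A filled n x n square scores (4n)^2 / n^2 = 16; letters score higher.
print(perimetric_complexity(np.ones((20, 20), dtype=bool)))
```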
Perceptual learning: Plasticity and adaptation
Tuesday, May 11, 5:15 - 7:00 pm
Talk Session, Royal Ballroom 4-5
Moderator: Sara Mednick

55.21, 5:15 pm
Adaptation to low signal to noise decreases visual sensitivity
Stephen Engel1 (engel@umn.edu), Peng Zhang1, Min Bao1; 1Psychology, University of Minnesota
Some neurons in sensory systems will be relatively noisy in a given environment or for a given task. Reducing spiking in such neurons could allow more accurate perception and save limited metabolic resources. Whether the nervous system automatically limits the responses of noisy neurons, however, remains unknown. To test this possibility, we measured how subjects' visual sensitivity changed when they adapted to a lowered signal-to-noise ratio at a specified orientation in the visual environment. Eight subjects viewed the world through an "altered reality" system, comprised of a head-mounted gray-scale video camera fed into a laptop computer that drove a head-mounted display (HMD). Vertical information about the world was removed from the video images prior to their display, while keeping overall vertical energy constant. This was performed in real time by randomizing the phases of all vertical Fourier components of the image. Viewing the vertically randomized video images through the HMD, subjects performed everyday tasks, such as playing games and watching movies, in an environment where vertical signals were distracting noise. Prior to and following four hours of adaptation to this environment, contrast detection thresholds were measured for vertical and horizontal sinusoidal patterns (6 deg diameter, 1 cpd, presented 8 deg in the periphery). Following adaptation, vertical thresholds increased by more than 15% relative to horizontal thresholds. A second experiment found a reliable reduction in the apparent contrast of suprathreshold patterns at the noisy orientation following only one hour of adaptation. Sensitivity for simple patterns has been linked to the responses of orientation-selective neurons in early visual cortex. Exposure to low signal to noise may have caused these neurons to decrease their gain. Such decreases could improve perception in tasks that pool across orientations, and save limited metabolic resources for neurons that signal orientations with higher information content.
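The real-time image manipulation described above, randomizing the phases of vertical Fourier components while preserving their amplitudes, can be sketched as follows. The orientation band half-width is an assumed parameter, not the authors' value.

```python
import numpy as np

def scramble_vertical(image, band_deg=10.0, seed=None):
    """Randomize phases of near-vertical Fourier components of a grayscale
    image while keeping their amplitudes (hence overall vertical energy)
    intact, in the spirit of the manipulation described above."""
    rng = np.random.default_rng(seed)
    f = np.fft.fft2(image)
    fy, fx = np.meshgrid(np.fft.fftfreq(image.shape[0]),
                         np.fft.fftfreq(image.shape[1]), indexing="ij")
    # vertical image structure corresponds to frequencies along the fx axis
    ori = np.degrees(np.arctan2(fy, fx)) % 180.0
    mask = np.minimum(ori, 180.0 - ori) < band_deg
    mask[0, 0] = False  # leave the mean luminance (DC) untouched
    # phases of an FFT of real noise are Hermitian, keeping the result real
    rand_phase = np.angle(np.fft.fft2(rng.standard_normal(image.shape)))
    f = np.where(mask, f * np.exp(1j * rand_phase), f)
    return np.fft.ifft2(f).real
```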


55.22, 5:30 pm
Learning enhances fMRI pattern-based selectivity for visual forms in the human brain
Jiaxiang Zhang1 (j.zhang.1@bham.ac.uk), Zoe Kourtzi1; 1School of Psychology, University of Birmingham
Previous neurophysiological experiments have shown that learning shapes the neural representation of low-level stimulus features (e.g., orientation). However, much less is known about the neural mechanisms that mediate learning to discriminate global forms. Here, we combine psychophysical and high-resolution fMRI measurements to investigate learning-dependent changes in the neural representation of forms across the human visual cortex. We employed Glass pattern stimuli defined by dot dipoles and generated by linear morphing between radial and concentric patterns. Observers were trained to perform a categorization task (i.e., they judged whether each stimulus was similar to a radial or a concentric pattern). Observers were trained (2,400 trials) with stimuli presented in noise (40% signal) and were tested in the scanner before and after training while performing the same task. Our behavioural results showed that performance in the discrimination of visual forms was significantly improved after training. Multi-voxel pattern classification analysis (MVPA) showed that stimulus discrimination from fMRI responses was enhanced after training in higher occipito-temporal areas (V3a, V7, V3B/KO, and LOC) but not in early visual areas. To reliably decode stimuli from fMRI responses, we derived pattern-based tuning curves by ranking the voxel-based preferences for each stimulus condition and fitting the MVPA results with a Gaussian function. We observed significant learning-dependent decreases in the width but not the amplitude of the tuning curves in higher occipito-temporal areas. This finding was replicated by a control experiment in which observers were scanned while performing a dot-density discrimination task that controlled for differences in task difficulty across stimulus conditions. These findings suggest that learning shapes the fine-tuned representation of global forms related to neural selectivity in higher visual areas, rather than simply the overall fMRI responsiveness related to attentional gain processes.
Acknowledgement: This work was supported by grants from the Biotechnology and Biological Sciences Research Council to ZK [D52199X, E027436]
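The pattern-based tuning-curve fit described above can be sketched in a few lines; the offsets, responses, and starting parameters below are placeholders for real ranked MVPA output, not the authors' data.

```python
import numpy as np
from scipy.optimize import curve_fit

def tuning(x, amp, width, base):
    """Gaussian tuning over stimulus conditions ranked by pattern preference."""
    return base + amp * np.exp(-x ** 2 / (2.0 * width ** 2))

# condition offset from the preferred stimulus, and the mean pattern-based
# response at each offset (synthetic values here)
offsets = np.arange(-3, 4)
resp = tuning(offsets, 1.0, 1.2, 0.1)
resp += 0.02 * np.random.default_rng(2).standard_normal(offsets.size)
(amp, width, base), _ = curve_fit(tuning, offsets, resp, p0=[1.0, 1.5, 0.0])
print(width)  # the abstract reports that training narrows this width
```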
55.23, 5:45 pm
REM sleep prevents interference in the texture discrimination task
Sara Mednick1 (smednick@ucsd.edu); 1Department of Psychiatry, School of Medicine, UCSD
Improvement in the texture discrimination task (TDT) depends on sleep, shows feature and retinotopic specificity, varies with stimulus exposure, and is vulnerable to interference. Using a classic interference paradigm, Yotsumoto et al. (2009) showed that two-part training on competing background orientations blocked learning of the initial stimulus set. We examined the role of sleep in perceptual interference on a short version of the TDT (to avoid deterioration effects). Interference was tested using two-part training with vertical or horizontal background orientations, which were presented in either the same or different retinotopic locations (lower left and upper right). We examined three conditions: naps with rapid eye movement (REM) sleep, naps without REM sleep, and quiet rest. We found that quiet rest produced no improvement in the interference condition, whereas naps with REM sleep eliminated interference effects. In fact, REM naps showed more than double the magnitude of learning in the interference condition compared with the non-interference condition. Learning in the interference condition was significantly correlated with the amount of REM sleep in the nap; no other sleep stage was related to performance changes. Interestingly, quiet rest produced perceptual learning to the same degree as sleep in the non-interference conditions. In conclusion, when the brain is serially presented with competing information, such as discriminating a target embedded within two different background orientations, REM sleep appears to enhance memory for the information presented first. Thus, the memory trace becomes resilient to interference from subsequent competing targets. Furthermore, although prior studies have compared sleep to active-waking controls, we find that quiet rest may be as effective for learning as REM sleep.
Acknowledgement: K01-MH080992

55.24, 6:00 pm
Transfer in perceptual learning as extrapolation
C. Shawn Green1,2 (csgreen@umn.edu), Daniel Kersten1,2, Paul Schrater1,2,3; 1Department of Psychology, University of Minnesota, 2Center for Cognitive Sciences, University of Minnesota, 3Department of Computer Science, University of Minnesota
Given appropriate training, humans will demonstrate improvement on virtually any perceptual task. However, the learning that occurs is typically highly specific to the training task and stimuli. In the language of machine learning, such specificity is indicative of what is known as policy learning. Policies can be thought of simply as lookup tables that map states onto actions (i.e., "what to do"). Importantly, policies are specific to a given goal; if the goal is changed, knowing what was previously the right thing to do provides no information about what is currently the right thing to do. As an example, in a typical orientation discrimination task ("Was the Gabor tilted clockwise or counterclockwise from a reference angle?"), the optimal policy relies on a discriminant: if the current "state" lies on one side of the discriminant, press 'A'; otherwise, press 'B'. Given this, it is clear why transfer is not observed: this policy is completely inapplicable when the reference angle is rotated by 90°. If perceptual learning is analogous to policy learning, we hypothesized that in order to observe transfer, the training task must promote the development of a policy that can be extrapolated from and will be appropriate for new orientations. To this end, rather than a discrimination task, which promotes the development of an untransferable policy, an orientation estimation task was employed. Subjects were asked to indicate (by rotating a single line) the exact orientation of a quickly flashed Gabor (+/-15° from 45°). The policy that should be learned in this task is a continuous function of orientation, and thus it should be possible to extrapolate to previously unseen orientations. As predicted, full transfer was observed when the stimuli were rotated by 90°. These results and the overall framework provide a novel way of approaching the field of perceptual learning.
Acknowledgement: ONR N 00014-07-1-0937
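The policy-versus-extrapolation distinction drawn above can be made concrete with a toy example (ours, not the authors'): a lookup-table policy trained near 45° has no entries for orientations 90° away, whereas a fitted continuous mapping extrapolates.

```python
import numpy as np

rng = np.random.default_rng(3)
train = rng.uniform(30.0, 60.0, 200)   # training orientations (deg)
test = train + 90.0                    # rotated test orientations

# Policy learning: a lookup table from discretized states to actions.
policy = {int(round(x)): ("A" if x < 45.0 else "B") for x in train}
print(sum(int(round(x)) in policy for x in test))  # 0: no transfer

# Estimation learning: fit a continuous function of orientation; the
# learned (near-identity) mapping extrapolates to unseen orientations.
reports = train + rng.normal(0.0, 2.0, 200)  # noisy orientation reports
coef = np.polyfit(train, reports, 1)
pred = np.polyval(coef, test)
print(np.mean(np.abs(pred - test)))          # small error 90 deg away
```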
55.25, 6:15 pm
Uninformative trials are more effective than informative trials in learning a long-term perceptual bias
Sarah J. Harrison1 (sharrison@sunyopt.edu), Benjamin T. Backus1; 1SUNY College of Optometry
A Bayesian account of perceptual learning predicts that learning should occur only when stimuli are informative about statistical contingencies in the environment (e.g. Kersten, O'Toole, Sereno, Knill & Andersen, 1987). Alternatively, learning could occur in the absence of informative cues to appearance, through practice of the perceptual decision itself. We assessed these two possibilities using a perceptually ambiguous Necker cube stimulus: cue recruitment studies have shown that the perceived rotation direction can be trained to be contingent on the stimulus' retinal location (Backus & Haijiang, 2007; Harrison & Backus, 2009). One group viewed only informative presentations, with the direction of cube rotation disambiguated by disparity and occlusion depth cues. Another group viewed uninformative, ambiguous cubes on more than 96% of presentations. The remaining, informative, trials were sufficient to prime stabilization of the percept (Brascamp, Knapen, Kanai, Noest, van Ee & van den Berg, 2008; Klink, van Ee, Nijs, Brouwer, Noest & van Wezel, 2008), such that the two groups experienced equivalent pairing of perceived rotation direction with retinal location on Day 1. The long-term influence of perceptual experience on Day 1 was assessed on Day 2 by presenting subjects with a 50:50 mix of informative and uninformative stimuli. Informative stimuli had the reverse rotation-location contingency to that experienced the previous day. Those subjects whose perceptual experience on Day 1 had been elicited by the uninformative stimuli were affected very little by the reverse-contingency informative presentations on Day 2, and instead perceived ambiguous cubes as rotating in the same direction as on Day 1. In contrast, subjects whose perceptual experience on Day 1 had been elicited by informative stimuli were more likely to perceive the opposite rotation on Day 2. Hence, contrary to the Bayesian prediction, long-term learning of perceptual appearance was largely driven by "practice", perhaps of the decisional process, while informative presentations played a smaller role.
Acknowledgement: NIH R01-EY-013988, HFSP RPG 3/2006, NSF BCS-0810944


55.26, 6:30 pm
Recovery of stereopsis in human adults with strabismus through perceptual learning
Jian Ding1 (jian.ding@berkeley.edu), Dennis Levi1; 1School of Optometry, University of California, Berkeley, CA 94720, USA
Stereopsis, the process leading to the sensation of depth from retinal disparity, is compromised or absent in strabismus and/or amblyopia. Here we provide the first evidence for the recovery of stereopsis in human adults through perceptual learning: the repetitive practice of a demanding visual task with feedback. Three strabismic adult observers (23-28 years old) without stereopsis but with normal visual acuity participated in the training. Before stereo training, the three observers failed the Randot circle test (≤ 400 arcsec), and also failed to detect a large binocular disparity (≤ 1320 arcsec) in stereoscopic sinewave gratings. Training trials began with a dichoptic cross and a binocular surrounding frame. By decreasing the contrast of the dominant eye's frame until both frames were visible, and adjusting the vertical and horizontal positions of the two frames separately, observers were able to achieve binocular fusion and alignment. Once fusion was achieved, a pair of sinewave gratings, one above the other with identical contrast and spatial frequency, was presented to the two eyes stereoscopically. The lower grating was presented in the same plane as the surround (zero disparity), and the upper grating was presented with a binocular disparity. The observer's task was to judge the relative depth of the top grating (i.e., closer or farther than the bottom grating). Feedback was provided after each trial. Following the training (thousands of trials), all three observers recovered stereopsis, achieving 40-140 arcsec stereoacuity with the Randot circle test, and were able to detect disparities of 70-280 arcsec with stereoscopic sinewave gratings that were jittered in horizontal position to avoid monocular cues. However, even after recovery of local stereopsis, our observers were unable to detect depth in random-dot stereograms. We conclude that perceptual learning may be a useful clinical tool for treating stereoblindness.
Acknowledgement: NEI 5R01EY1728-33

55.27, 6:45 pm
Feedback inhibits untrained motion directions in perceptual learning
Jonathan Dobres1 (jmd@bu.edu), Takeo Watanabe1; 1Boston University Vision Sciences Laboratory
Feedback regarding the correctness of subjects' responses has been shown to have beneficial effects on perceptual learning. It has been shown that feedback can increase the rate of learning (Herzog & Fahle, 1999) or make it possible for an observer to learn with stimuli that would be too difficult to learn in the absence of feedback (Seitz et al., 2006). Given the powerful effects of feedback, it would be worthwhile to examine its deeper characteristics, such as specificity and transfer, but these aspects remain largely unexamined in the literature. To examine the nature of these effects, this study pairs feedback with coherent motion stimuli. Subjects were trained in a 2IFC random-dot motion detection task in which two coherent motion directions (coherence = 10%) were interleaved within training sessions. One direction was always paired with trial feedback, and the other, separated from the first by 90°, had no feedback associated with it. Subjects participated in seven such training sessions, each conducted on a different day. One day before and one day after the training stage, subjects completed pretest and post-test sessions in which they detected motion directions that included the trained directions as well as 16 other directions in a range of ±48° around the directions of training. Results indicate that during training, performance steadily increased for the trained directions both with and without feedback. To our surprise, the results of the test stages were totally different between the two directions: while the observers' detection sensitivity improved only for the direction that had been paired with feedback and its vicinity, performance improvement occurred evenly around the direction that had been paired with no feedback. These results suggest that feedback plays a role in inhibiting directions that are not trained.
Acknowledgement: NIH-NEI R21 EY018925, NIH-NEI R01 EY015980-04A2, NIH-NEI R01 EY019466
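Demanding feedback-driven training of the kind described in 55.26 is typically run with an adaptive staircase; the 2-down/1-up rule below is a common choice and an assumption on our part, since the abstract does not name the authors' exact procedure.

```python
def two_down_one_up(responses, start=1320.0, factor=0.8):
    """Track a disparity (arcsec) across trials with a 2-down/1-up rule:
    two consecutive correct responses lower the disparity, each error
    raises it. `responses` is an iterable of booleans (correct/incorrect).
    This rule converges toward the ~70.7%-correct point of the
    psychometric function."""
    disparity, streak, track = start, 0, []
    for correct in responses:
        track.append(disparity)
        if correct:
            streak += 1
            if streak == 2:
                disparity *= factor
                streak = 0
        else:
            disparity /= factor
            streak = 0
    return track

print(two_down_one_up([True, True, True, True, False, True, True])[-1])
```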


Tuesday Afternoon Posters

Binocular vision: Stereopsis
Royal Ballroom 6-8, Boards 301–316
Tuesday, May 11, 2:45 - 6:45 pm

56.301 Stereopsis in People with Eyes of Different Lengths: Adaptation via Receptor Geometry or Post-receptoral Mechanisms?
Martin Banks1,2,3 (martybanks@berkeley.edu), Kaccie Li1, Jaclyn Wray2, Bjorn Vlaskamp1, Austin Roorda1,2; 1Vision Science Program, UC Berkeley, 2School of Optometry, UC Berkeley, 3Department of Psychology, UC Berkeley
Stereopsis in normal observers is most sensitive when the objects presented to the two eyes are the same size. People with different refractive errors in the two eyes (anisometropes) usually have one eye longer than the other, so their retinal images differ in size for equal-sized objects. We asked whether stereopsis is best in anisometropes when the retinal images or the objects are the same. We measured stereo sensitivity for different object-size ratios. Observers discriminated the orientation of a disparity-defined corrugation. Disparity noise was added to determine coherence thresholds. Threshold was best when object sizes were the same, despite the differing eye lengths. Two mechanisms could account for this result. First, the retina may expand in proportion to eye length such that the number of cones sampling a given visual angle in the two eyes remains unchanged; this is the receptor hypothesis. Second, post-receptoral mechanisms may adjust for the differences in retinal-image size; this is the post-receptor hypothesis. To determine which hypothesis is the better account, we used an adaptive optics (AO) ophthalmoscope to measure linear and angular cone density in the anisometropes tested psychophysically. AO imaging was done with infrared light and dynamic wavefront correction. Images of the cone mosaic were stabilized and averaged, and individual cones identified. We could resolve cones to within ~0.25 deg of the foveal center. Axial length, corneal curvature, and anterior chamber depth were measured using ultrasound, and those parameters were used to calculate retinal-image sizes. Angular cone density was generally higher in the longer eye. Thus, objects of the same size cover more cones in the long eye than in the short eye, which is inconsistent with the receptor hypothesis. We conclude that anisometropes maintain fine stereopsis, despite having eyes of different lengths, via post-receptoral adaptation of the representation of the retinal images.
Acknowledgement: NIH
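The link between axial length and retinal-image size can be illustrated with a reduced-eye approximation. This is our simplification; the authors computed image sizes from measured corneal curvature and anterior chamber depth as well, and the 7.3 mm cornea-to-nodal-point offset below is an assumed schematic-eye value.

```python
import numpy as np

def retinal_image_mm(angle_deg, axial_length_mm, nodal_offset_mm=7.3):
    """Retinal image size for a given visual angle under a reduced-eye
    approximation: the image scales with the posterior nodal distance,
    taken as axial length minus an assumed cornea-to-nodal-point offset."""
    pnd = axial_length_mm - nodal_offset_mm
    return np.tan(np.radians(angle_deg)) * pnd

# a 1 mm longer eye enlarges the retinal image by roughly 6%
print(retinal_image_mm(1.0, 25.0) / retinal_image_mm(1.0, 24.0))
```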
56.302 Do People of Different Heights have Different Horopters?
Emily A. Cooper1 (emilycooper@berkeley.edu), Johannes Burge2, Martin S. Banks1,3,4; 1Helen Wills Neuroscience Institute, University of California, Berkeley, 2Center for Perceptual Systems, University of Texas, Austin, 3School of Optometry, University of California, Berkeley, 4Department of Psychology, University of California, Berkeley
Accurate perception of depth with respect to the ground is critical for walking. The most precise visual cue to depth is binocular disparity. Depth estimates from disparity are most precise for stimuli near corresponding points: pairs of retinal loci that yield the same perceived direction when stimulated. Rays from corresponding points projected into space intersect at the horopter. It would be adaptive if an upright observer's horopter lay in or near the ground. Interestingly, corresponding points deviate systematically near the retinas' vertical meridians: above the left and right foveas they are shifted rightward and leftward, respectively; below the foveas, the shift is opposite. Because of this horizontal shear, the horopter is pitched top-back. Helmholtz noted that this places the horopter near the ground for an upright observer and thereby could optimize depth perception with respect to the ground. We asked whether people with different eye heights and separations have different shear angles, and whether those angles place the horopter in the ground for each individual. We used a dichoptic apparent-motion paradigm to measure the positions of corresponding points at different retinal eccentricities. We also measured cyclovergence to control for eye torsion, and determined the effect of a structured stimulus like the natural environment on cyclovergence. We found a statistically significant, but modest, correlation between predicted and observed shear angles in 28 observers with heights ranging from 4.3 to 7 feet. Thus, corresponding points in most people place the horopter near the ground when they are standing. However, some observers' data were inconsistent with linear shear; their corresponding points yielded curved horopters that cannot be co-planar with the ground.
Acknowledgement: NIH Research Grant R01 EY012851, National Defense Science and Engineering Graduate Fellowship, and UC Berkeley Neuroscience Graduate Program

56.303 The effect of local matching on perceived slant in stereopsis
Rui Ni1 (rui.ni@wichita.edu); 1Department of Psychology, Wichita State University
The difference between two monocular views of a horizontal line can result in either a complete matching or an incomplete matching. Complete matching is consistent with the perception of a line slanted in depth, while incomplete matching is consistent with the perception of a partially occluded line. Without other cues, both complete and incomplete matching are possible, resulting in an ambiguous percept: the line could appear either slanted in depth or partially occluded. This study investigated whether a complete matching of local features would affect the matching of horizontal lines in stereopsis and disambiguate the percept. A CrystalEyes 3 workstation was used to produce the stereo images on a ViewSonic CRT monitor (140 Hz refresh rate, 1024x768 resolution). In the experimental displays, horizontal lines were presented to each eye with interocular differences consistent with both slant perception and occlusion perception. Vertical lines were presented in between the horizontal lines, which specified a unique local matching. In Experiment 1, the vertical lines were presented first, followed by the presentation of the horizontal lines. In Experiment 2, the vertical lines were presented simultaneously with the horizontal lines. In both experiments, the vertical lines were manipulated such that they were presented either in a fronto-parallel plane or in a plane slanted in depth. The subjects were asked to judge the perceived slant of the horizontal lines. The results showed that the matching of the vertical lines is propagated by the visual system to that of the horizontal lines. The matching of local features determined whether slanted lines or partially occluded lines were perceived from the differences between the left and right views of the horizontal lines.
56.304 Effects of orientation and noise on the detection of cyclopean form
Lisa O'Kane1 (lisa.okane@stir.ac.uk), Ross Goutcher1; 1University of Stirling
We present two experiments investigating the effects of local orientation and of different types of noise on observers' perception of cyclopean form. Observers were presented with a stimulus containing line elements distributed randomly across a surface, consistent with a disparity-defined square wave oriented at ±45 deg. The observer's task was to determine whether the stimulus was at a clockwise or counter-clockwise orientation. Each stimulus was comprised of either horizontal or vertical line elements. Line elements had the same local orientation within each trial. Different forms of noise were added to these stimuli in order to obtain 75% performance thresholds for correctly discriminating the orientation of the square wave. In the first experiment, noise was added via the random repositioning of lines in each eye (decorrelation noise). In the second experiment, noise was added by distorting the z positions of the lines in each eye (disparity noise). Thresholds were measured for varying line lengths (11 - 33 arcmin) and stimulus densities (5 - 50%). We find effects of both line element orientation and noise type. In the first experiment, decorrelation noise thresholds were lower for stimuli comprised of vertical line elements, indicating enhanced performance compared to horizontal line stimuli. In the second experiment, horizontal line stimuli showed improved performance compared to vertical line stimuli when lines were short (11 arcmin). However, performance with horizontal line stimuli declined markedly with increasing line length, to a much greater extent than for vertical line stimuli. These results point to effects of noise occurring at multiple levels of processing. Results obtained using decorrelation noise are consistent with cross-correlation as a method of disparity estimation; however, the effects obtained using disparity noise are consistent with disruption at the level of cyclopean form processing.
Acknowledgement: BBSRC Grant # BB/G004803/1 and RCUK Fellowship # EP/E500722/1
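"Cross-correlation as a method of disparity estimation," as invoked above, can be illustrated with a toy windowed matcher; the zero-mean normalization and the exhaustive shift search are generic choices, not the authors' model.

```python
import numpy as np

def estimate_disparity(left, right, max_shift):
    """Return the horizontal offset d (pixels) at which `right`, shifted
    back by d, best matches `left` under zero-mean normalized correlation."""
    def norm(p):
        p = p - p.mean()
        return p / (np.linalg.norm(p) + 1e-12)
    ref = norm(left)
    return max(range(-max_shift, max_shift + 1),
               key=lambda d: np.sum(ref * norm(np.roll(right, -d, axis=1))))

# toy check: a pattern shifted by 3 pixels is recovered
img = np.random.default_rng(4).standard_normal((16, 64))
print(estimate_disparity(img, np.roll(img, 3, axis=1), 8))
```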
56.307 Crossed-line stereograms and the processing of stereo transparency
Ross Goutcher1 (ross.goutcher@stir.ac.uk), Lisa O'Kane1; 1Department of Psychology, University of Stirling
The perception of overlapping surfaces in depth (stereo transparency) presents a challenge to disparity measurement mechanisms, since multiple disparities must be encoded within a single area of the visual field. Here we use stimuli comprised of sets of randomly positioned crossed horizontal and vertical lines to examine how the visual system integrates information over space. Disparity was added to these crossed-line stimuli in two ways. First, horizontal and vertical lines in each cross could be given the same disparities by adding shifts of identical magnitude and direction to both lines (same shift stimuli). Alternatively, horizontal and vertical lines could be given different disparities by adding opposite shifts to each line (opposing shift stimuli). Both methods were used to create stereograms depicting transparent surfaces in depth, with the proportion of same shift crosses varied systematically. Observers were presented with two intervals, one containing a crossed-line transparency stimulus, the other containing a crossed-line stimulus depicting a single plane at fixation. Signal-to-noise ratio was varied by randomly repositioning a proportion of crossed lines independently in each image. Signal-to-noise ratios were always identical between intervals. Observers' task was to determine the interval containing the transparent stimulus. 75% correct thresholds were obtained, indicating the signal-to-noise ratio required to successfully determine the interval containing stereo transparency. Thresholds changed with a change in the proportion of same shift crosses, although the direction of change was not consistent across all observers. These results suggest that difficulties in the processing of stereo transparency exist both at the level of disparity measurement, where opposing shifts limit the effectiveness of spatial integration, and at the level of cyclopean surface interpolation, where, in the case of same shift stimuli, evidence for the presence of two disparities in the same area of the visual field is reduced.
Acknowledgement: Research supported by BBSRC Grant # BB/G004803/1 and RCUK Fellowship # EP/E500722/1.
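The abstract reports 75% correct thresholds but does not say how they were extracted; one standard route is to fit a two-alternative psychometric function to proportion correct versus signal-to-noise ratio and read off the 75% point. A minimal sketch with made-up data (the Weibull form and all numbers are our assumptions):

    import numpy as np
    from scipy.optimize import curve_fit

    def weibull_2afc(x, alpha, beta):
        """2AFC Weibull: 50% guessing floor rising toward 100% correct."""
        return 0.5 + 0.5 * (1.0 - np.exp(-(x / alpha) ** beta))

    snr = np.array([0.05, 0.1, 0.2, 0.4, 0.8])            # signal-to-noise ratios
    p_correct = np.array([0.52, 0.60, 0.74, 0.91, 0.98])  # invented observations
    (alpha, beta), _ = curve_fit(weibull_2afc, snr, p_correct, p0=[0.2, 2.0])

    # SNR supporting 75% correct: solve weibull_2afc(x) = 0.75 for x
    thresh_75 = alpha * np.log(2.0) ** (1.0 / beta)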
56.308 Binocular Capture: The effects of mismatched spatial frequency and opposite contrast polarity
Avesh Raghunandan1 (raghuna@ferris.edu), Shawn Andrus1, Laura Nennig1; 1Michigan College of Optometry, Ferris State University
Background: Binocular capture occurs when the perceived positions of monocular targets are biased by the cyclopean visual direction of surrounding binocular targets. This effect is larger when the vertical separation between monocular targets exceeds the spatial period of the carrier frequency. In an attempt to further elucidate the underlying mechanism mediating this effect, we measured the effects of mismatched spatial frequency targets and opposite contrast targets on the magnitude of binocular capture. Methods: Relative alignment thresholds and bias were measured separately for a pair of vertically separated (8, 30, 60 arcmin) monocular (4' x 66') Vernier spatial frequency (SF) ribbons and a pair of monocular (4' x 66') Gaussian bars presented across a cyclopean random dot depth edge (10 arcmin relative horizontal disparity). Each ribbon of the pair comprised carrier frequencies that were either matched (8 cpd and 1 cpd) or mismatched (top ribbon 1 cpd, bottom ribbon 8 cpd, and vice versa). The Gaussian bars were presented with either matched contrast (bright/bright) or opposite polarity (bright/dark) contrast. Gaussian bars were presented at approximately 3.4 times their contrast detection thresholds. Results: Capture magnitudes increased significantly with vertical separation for the matched 8 cpd and mismatched SF ribbons; however, the matched 1 cpd ribbons failed to show a significant effect of separation on capture magnitude. Both matched and opposite polarity Gaussian bars produced increasing capture with increasing vertical separation; however, the magnitude of capture was significantly larger for the opposite polarity bars. Capture magnitudes exhibited a strong linear dependence on the alignment thresholds for all conditions, but a weak dependence on the alignment thresholds for the matched 1 cpd condition. Conclusions: Stimuli that favor the recruitment of non-linear position mechanisms exhibit greater susceptibility to binocular capture. In these cases the magnitude of capture is strongly dependent on the precision of relative alignment.
Acknowledgement: This research was partially funded by a Ferris Faculty Research Grant Award to the first author

56.309 Interactions between monocular occlusions and binocular disparity in the perceived depth of illusory surfaces
Inna Tsirlin1 (itsirlin@yorku.ca), Laurie Wilcox1, Robert Allison1; 1Centre for Vision Research, York University
Monocular occlusions play an important role in stereoscopic depth perception. They signal depth discontinuities and, in certain configurations, create percepts of illusory occluding surfaces. Previous research showed that in these configurations the visual system not only infers the depth sign of the illusory occluder but also the depth magnitude. It is believed that quantitative depth percepts from occlusion arrangements are based on the constraints imposed by the viewing geometry. That is, the minimum (or maximum) possible depth of the illusory occluder is constrained by the line of sight from the eye in which the feature is hidden. This information is used by the visual system to estimate depth even in arrangements where the maximum (or minimum) possible depth is unconstrained. Here we have evaluated the effects of binocular disparity on the localization in depth of illusory occluders for several different stimuli. In each of the stimuli, the presence of monocular occlusions induced the percept of an illusory occluder at a different depth than the occluded object. In a series of psychophysical experiments we measured the perceived depth of the occluder as we manipulated 1) the occlusion geometry and 2) the disparity of a binocular feature placed next to the illusory surface. Subjects used a disparity probe to match the perceived depth in the stimuli. Our results show that the disparity of binocular features biases the perceived depth of the illusory occluders in the direction unconstrained by the viewing geometry. We argue that the extent to which binocular disparity influences depth percepts from occlusions can serve as a litmus test of the contribution of monocular information to quantitative depth percepts.
Acknowledgement: NSERC to LW and RA
56.310 Relative disparity computation underlies the effects of surround area binocular correlation on depth perception
Shuntaro Aoki1, Hiroshi Shiozaki1, Ichiro Fujita1; 1Laboratory for Cognitive Neuroscience, Graduate School of Frontier Biosciences, Osaka University
Human subjects perceive depth when viewing binocularly correlated stereograms. Binocular anti-correlation of an entire stereogram abolishes depth perception, while anti-correlation of only the center part of the stimulus, accompanied by a correlated surrounding area, reverses the direction of perceived depth. Here we developed a computational model which explains the effects of the surround on depth perception. The model consisted of input units responding to disparity in either the center or the surround of the stimuli. Anti-correlation of the stimuli inverted the disparity tuning curves of the input units, mimicking V1 neurons. Integration of the input units' responses with a threshold operation resulted in relative-disparity-selective units. The model output was the difference between the responses of two relative-disparity-selective units preferring either near or far disparity. The model reproduced the effects of surround area binocular correlation on depth perception. We tested the model with psychophysical experiments using random dot stereograms consisting of center and surround areas. In each trial, all dots in the center had the same disparity (-0.32 or +0.32 deg). Dots in the surround were divided into two groups. Different disparities of equal magnitude but opposite sign were assigned to the dots in each group (0, ±0.1, …, or ±1.0 deg). Each area was either binocularly correlated or anti-correlated. Four human subjects discriminated the depth of the center against the surround. When both the center and the surround were correlated, subjects reported the depth based on the minimum relative disparity between the center and the surround. Stimuli with a correlated center and anti-correlated surround caused reversed depth when the magnitudes of the surround disparities were small (0.2 deg). The results were in agreement with our model. We suggest that relative disparity computation between the center and surround is crucial for the effects of surround area binocular correlation on depth perception.
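The model's logic can be caricatured in a few lines. This is our reduction of the stages named in the abstract (tuning inversion under anti-correlation, a relative-disparity stage, and a near-minus-far readout), not the authors' code; representing anti-correlation as a sign flip of the effective disparity signal is the crudest possible version of "inverted tuning curves":

    def model_depth_report(center_d, surround_d,
                           center_corr=True, surround_corr=True):
        """Returns a signed depth report for the center relative to the
        surround: positive = 'near', negative = 'far'. Disparities follow
        the convention that negative values are crossed (near)."""
        c = center_d if center_corr else -center_d    # anti-correlation inverts
        s = surround_d if surround_corr else -surround_d
        rel = c - s                                   # relative disparity stage
        near = max(0.0, -rel)                         # near-preferring unit
        far = max(0.0, rel)                           # far-preferring unit
        return near - far

With a correlated surround at zero disparity, anti-correlating a near (crossed) center flips the sign of rel and hence of the report, reproducing the reversed depth described above.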
56.311 The effects of binocular disparity on the detection of curved trajectories are independent of motion direction
Russell Pierce1 (rpier001@ucr.edu), Zheng Bian1, George Andersen1; 1Department of Psychology, University of California, Riverside
Pierce, Bian & Andersen (VSS 2009) found that binocular information was important for the detection of curved trajectories. In the current study we examined whether this effect was due to the direction of the motion path. On each trial subjects viewed two computer-generated displays. In one display, a sphere followed a straight trajectory; in the other, another sphere moved along either a concave or a convex curved trajectory relative to the x-axis. We used a two-alternative forced choice procedure (2AFC) without feedback, and participants were instructed to indicate which display simulated a curved trajectory. Thresholds for curved path discrimination from 16 participants were assessed by varying the curvature of the curved trajectories with an adaptive staircase. We manipulated three independent variables: viewing condition (binocular vs. monocular), curve type (concave vs. convex), and direction of the motion path (approaching vs. receding). Each of the 8 combinations was run in a separate block, and the order of blocks was counterbalanced across participants with a Partial Latin Square design. We found thresholds were lower in the binocular condition (M = 6.10 × 10^-5) than in the monocular condition (M = 8.55 × 10^-5). This difference was greater for concave arcs (M difference = 3.70 × 10^-5) than for convex arcs (M difference = 1.19 × 10^-5). Whereas the effects for receding objects tend to be weaker than for approaching objects, there was no significant difference between conditions related to motion direction. These results were consistent with our previous finding in support of the importance of binocular disparity in detecting curved trajectories.
Acknowledgement: Supported by NIH AG031941 and EY018334

56.312 A Comparison of Stereoacuity at 6m of Collegiate Baseball Players in Primary Gaze and Batting Stance
Graham Erickson1 (ericksog@pacificu.edu), Herb Yoo2, Alan Reichow2; 1Pacific University College of Optometry, 2Nike, Inc.
Introduction: Accurate discrimination of distance information and judgments of spatial localization may be advantageous during baseball batting. Stereopsis is traditionally measured in primary gaze; however, a baseball batter's eyes are typically in a lateral gaze direction during batting. The purpose of this study was to compare stereopsis performance at far in primary gaze and in preferred batting stance in a population of collegiate baseball players. Methods: Measurements of 6m stereoacuity were conducted as part of a visual performance assessment for the Pacific University men's baseball team (NCAA Division III) from 2004 to 2009. The athletes were 18-24 years of age (N=149), and only measurements taken during their first season's participation were used for analysis in returning athletes. Threshold stereoacuity was measured using a 2-forced choice paradigm at pre-set rod separations with a Howard-Dolman device. Threshold stereoacuity was subsequently measured with the athlete in preferred batting stance. Results: The mean threshold stereoacuity in primary gaze was significantly better than in batting stance (p
with good self-assessed or measured autostereogram skill. After practice, significant differences between those with poor versus good measured autostereogram skill remained only for vergence facility (p = 0.05), near phoria (p = 0.02), and TNO stereoacuity (p = 0.01 for crossed disparities, p = 0.003 for uncrossed disparities). Binocular visual symptoms at near were not significantly different for the two groups.
Acknowledgement: SCO Summer Research Program is supported in part by Alcon Partner's in Education Program.

56.314 Long distance disparity processing in the human visual cortex: an EEG source imaging study
Benoit Cottereau1 (cottereau@ski.org), Anthony Norcia1, Tzu-Hsun Tsai2, Suzanne McKee1; 1The Smith-Kettlewell Eye Research Institute, San Francisco, 2Department of Ophthalmology, National Taiwan University Hospital, Taipei, Taiwan
We estimated the relative disparity response of neural populations in different visual areas in human cortex with visual evoked potentials and source localization methods. Using dense dynamic random dot patterns, we modulated the disparity of a central disk (4° diameter) from 0 to 12.6' uncrossed disparity at 2 Hz. The disk was surrounded by a static annulus (16° outside diameter) presented in the fixation plane. We varied the gap separating the disk from the annulus parametrically from 0 to 5.5 degrees in six separate conditions. We compared the response amplitudes as a function of gap size to responses to the disk alone within fMRI-defined ROIs across the visual cortex. Based on the average signal-to-noise ratio (6 subjects) for the first harmonic (2 Hz), we found that there was no change in response amplitude for small separations (
mization within a Bayesian framework, the number and length of individual action segments were determined automatically. Performance of these different algorithmic methods was compared with human performance.
[1] Shipley et al., JOV, 4(8), 2004.
[2] Agam & Sekuler, JOV, 8(1), 2008.
[3] Bayerl & Neumann, IEEE PAMI, 29(2), 2007.
[4] Endres et al., NIPS 20, 2008.
Acknowledgement: Funded by the EC FP7 project SEARISE, DFG, and Herman Lilly Schilling Foundation.

56.318 Apparent size biases the perception of speed in rotational motion
Andrés Martín1 (andres.mrtn@gmail.com), Javier Chambeaud1, José Barraza1; 1Instituto de investigación en Luz, Ambiente y Visión (ILAV) - UNT CONICET
Velocity constancy is the ability to equate the physical speeds of objects placed at different depths, despite the fact that objects' angular speeds on the retina change proportionally with depth. Multiple studies have shown that size cues play a central role in the achievement of velocity constancy. On the other hand, some studies have provided evidence showing that depth cues are unnecessary for velocity constancy. However, since retinal size is linearly related to depth, it is reasonable to hypothesize that both cues should affect the perception of speed. We present here the results of two experiments in which we measure the bias of perceived speed and size as a function of depth for rotational motion. We use this type of motion to avoid the effect of the frame on perceived speed, since the reference in rotational motion is its own center. We introduce binocular disparity to produce depth perception. The stimulus consisted of 16 dots (0.15 deg size) located 2 deg away from the center of rotation, undergoing rotational motion. Six observers, the authors and 3 others naïve as to the purpose of this study, took part in the experiment. Results show that observers overestimate the dot speed and pattern size of farther stimuli while perceiving angular velocity as invariant. This result shows that the visual system would re-scale dot speed when the apparent radius increases so as to maintain angular velocity constant. However, the bias in perceived size is much larger than that of speed, which suggests that such re-scaling is not linear.
Acknowledgement: UNT - CONICET
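The invariance the observers exhibit has a compact statement. For a dot at distance r from the rotation center, linear speed and angular velocity are related by

    \[ v = \omega\, r \qquad\Longrightarrow\qquad \hat{\omega} = \hat{v} / \hat{r} \]

If disparity enlarges the apparent radius to r̂ = g·r while perceived angular velocity stays at ω, perceived dot speed must be re-scaled to v̂ = g·v; the finding that the size bias exceeds the speed bias is what motivates the authors' conclusion that the re-scaling is not linear.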
56.319 Discriminating between upward and downward 3-D motion from projected velocity
Myron L. Braunstein1 (mlbrauns@uci.edu), Zheng Bian2, George J. Andersen2; 1Department of Cognitive Sciences, University of California, Irvine, 2Department of Psychology, University of California, Riverside
The direction of motion of an object in a 3-D scene can be ambiguous if only the projected motion path is considered. Specifically, downward motion in the projection can represent either upward or downward motion in the scene. The aim of this study was to determine whether observers could discriminate upward from downward 3-D motion from the projected velocity function alone. The displays consisted of a ball moving towards the observer, below eye level, either against a 3-D scene background or against a uniform background. The projected path of the ball was always downward and was identical across conditions. The average projected speed was also identical across conditions. The projected size changes corresponded to those that would occur for a level path in 3-D, regardless of whether upward or downward motion was simulated. Only the velocity function varied according to the simulated 3-D motion. Two displays, one simulating upward motion and one simulating downward motion, were presented successively in a paired comparison design. The independent variables were the angle between the upward and downward 3-D paths and the type of background: full scene or blank field. To avoid having the ball appear to start from a position on the ground, a cylinder was inserted in the scene and served as a platform from which the ball began its motion. We found that observers were able to discriminate upward from downward 3-D motion with projected trajectories all showing the same downward motion paths. For each background condition, accuracy was determined by the angle between the simulated upward and downward 3-D paths. Accuracy was higher with a scene background than with a uniform background. These results indicate that the projected velocity function is sufficient for discrimination of the direction of 3-D motion, even with motion paths that are identical in the 2-D projection.
Acknowledgement: Supported by NIH grant EY18334

56.320 The aperture problem in three dimensions
Jay Hennig1 (mobeets@mail.utexas.edu), Thad Czuba1,3, Lawrence Cormack1,2,3, Alexander Huk1,2,3,4, Bas Rokers1,2,3,4; 1Center for Perceptual Systems, The University of Texas at Austin, 2Institute for Neuroscience, The University of Texas at Austin, 3Psychology, The University of Texas at Austin, 4Neurobiology, The University of Texas at Austin
The classic aperture problem describes the ambiguity inherent to the motion of a frontoparallel (2D) contour (such as a line or an edge) viewed through a circular aperture. Despite a continuum of 2D velocities consistent with the apertured view, observers consistently perceive the direction of motion as orthogonal to the contour. Here we present an analogous 3D version, where observers judged the 3D direction of motion of a slanted planar surface defined by a moving random dot stereogram presented behind a circular aperture. If the surface is specified by single-frame dot lifetimes, the only potential factors influencing the perceived motion direction of the surface are the change in binocular disparity across time and 3D surface orientation. Provided observers use a similar heuristic in the 2D and 3D cases, such a surface should be perceived as traveling normal to its 3D orientation. In separate sessions, observers judged either the perceived surface slant or the direction of motion of the surface using a bird's-eye-view matching paradigm. We varied the surface slant, the lifetime of individual dots, and the 3D motion direction specified by the dots. Slant judgments were close to veridical in all conditions. When dot lifetimes were more than one frame, and thus unambiguously specified surface motion, motion judgments were consistent with previously reported biases in the perception of 3D motion, and relatively close to veridical. However, when the surface was specified by single-frame dot lifetimes, motion was always perceived as moving directly towards or away from the observer. Thus, in the 3D version of the aperture problem, the perception of surface motion was heavily biased as moving along the line of sight, and not towards the perceived surface normal. These results suggest that the visual system might resolve perceptual ambiguity distinctly in 2D and 3D motion processing.
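The 2D default percept the authors build on has a standard formal statement: through an aperture, only the component of contour velocity along the contour normal is measurable, and observers report exactly that component,

    \[ \mathbf{v}_{\mathrm{perceived}} = (\mathbf{v} \cdot \hat{\mathbf{n}})\, \hat{\mathbf{n}} , \]

where n̂ is the unit normal to the contour. The reported 3D result is the instructive contrast: instead of the analogous surface-normal solution, percepts collapsed onto the line of sight.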
56.321 Use of optic flow and visual direction in steering toward a target
Shuda Li1 (lishuda1980@gmail.com), Diederick C. Niehorster1, Li Li1; 1Department of Psychology, The University of Hong Kong
Previous studies have shown that humans use both optic flow and the target's visual direction in active control of self-motion. Here we develop a methodology that allows a more sensitive measurement of the observer's separate reliance on these cues to steer toward a target. Three observers were asked to use a joystick to steer toward a target with three types of displays: an empty screen with only a target visible, a textured ground plane, and a textured ground with reference posts. To tease apart the observer's use of optic flow and target visual direction cues, we perturbed both heading (Yh) and the simulated gaze direction (Yg) in the display using independent sums of seven harmonically unrelated sinusoids (0.1-2.18 Hz and 0.11-2.21 Hz). The former shifted heading away from the target, while the latter kept heading intact but shifted the target's visual direction on the screen. Observers had control of their heading but not their simulated gaze direction (i.e., Yh is a closed-loop task while Yg is an open-loop task). Ninety-second time series of heading error, gaze direction, and joystick displacement were Fourier analyzed and averaged across six trials. For all three observers, as displays contained more optic flow information, the heading RMS error decreased (mean error: 5.99°, 4.50°, and 4.33° for the empty, textured ground, and textured ground with posts displays, respectively), and observers increasingly controlled heading compared to gaze disturbance (mean ratio of control power correlation: 0.82, 1.08, and 1.40, respectively). Furthermore, Bode plots (frequency response plots) revealed a significant decrease of sensitivity to gaze disturbance (mean control gain: 5.53, -1.03, and -3.76 dB, respectively). These findings show that with enriched optic flow displays observers rely more on heading and less on visual direction to steer toward a target.
Acknowledgement: Hong Kong Research Grant Council, HKU 7471//06H
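The perturbation method lends itself to a short sketch. The specific frequencies, amplitude scaling, and frame rate below are illustrative assumptions; the abstract specifies only seven harmonically unrelated sinusoids spanning roughly 0.1-2.2 Hz and 90-s trials:

    import numpy as np

    rng = np.random.default_rng(0)
    freqs = np.array([0.1, 0.24, 0.41, 0.74, 1.16, 1.71, 2.18])  # Hz, assumed
    amps = 1.0 / freqs                      # assumed amplitude roll-off
    phases = 2 * np.pi * rng.random(len(freqs))

    t = np.arange(0, 90, 1 / 60.0)          # 90-s trial at an assumed 60 Hz
    perturbation = sum(a * np.sin(2 * np.pi * f * t + p)
                       for f, a, p in zip(freqs, amps, phases))

    # Fourier analysis of a response time series attributes power at the
    # perturbed frequencies to the perturbation; power elsewhere is noise.
    spectrum = np.fft.rfft(perturbation)
    freq_axis = np.fft.rfftfreq(len(t), d=1 / 60.0)

Because the heading and gaze perturbations use disjoint frequency sets, the response at each frequency can be assigned unambiguously to one disturbance, which is what makes the separate reliance on the two cues measurable.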


56.322 Global and local influence of form information on human heading perception
Diederick C. Niehorster1 (dcniehorster@hku.hk), Joseph C. K. Cheng1, Li Li1; 1Department of Psychology, The University of Hong Kong
We have previously reported that the static focus of expansion (FOE) in a radial Glass pattern influences human heading perception (Cheng, Khuu, & Li, VSS, 2008). Here we investigate the underlying mechanism. In Experiment 1, we presented observers with an integrated form and motion display, in which the dot pairs in a radial Glass pattern were oriented toward one direction on the screen (the form FOE) while moving toward a different direction in depth (the motion FOE), and a non-integrated display, in which a static radial Glass pattern was superimposed on a regular optic-flow stimulus. Heading judgments were strongly biased towards the form FOE for the integrated but not the non-integrated display (form weight: 0.78 vs. 0.27), indicating that the form influence on heading perception is not a decision bias. In Experiment 2, we manipulated the global form information in the radial Glass pattern by randomly orienting some dot pairs. The heading bias towards the form FOE decreased as the global form signal was degraded, suggesting that the bias is mediated by the global form percept. In Experiment 3, we examined whether observers combined the form and motion FOEs for heading perception in a statistically optimal way. The motion FOE was weighted less than its variance warranted, suggesting that the local orientation of each dot pair in the radial Glass pattern disturbed its perceived motion direction, thus affecting the reliability of the estimated motion FOE in optic flow. By approximating the level of motion direction noise for which integration would be optimal, we found that the strength of the effect of local dot-pair orientation on its perceived motion direction was about 50%. We conclude that the influence of the form FOE on heading perception is due to both global and local interactions between form and motion signals.
Acknowledgement: Hong Kong Research Grant Council, HKU 7471//06H
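"Statistically optimal" in Experiment 3 presumably refers to the standard minimum-variance cue-combination rule; under that reading, the combined FOE estimate and weights are

    \[ \hat{\theta} = w_f\,\theta_f + w_m\,\theta_m, \qquad
       w_m = \frac{1/\sigma_m^2}{1/\sigma_f^2 + 1/\sigma_m^2}, \qquad w_f = 1 - w_m , \]

so a motion FOE "weighted less than its variance warranted" means the empirical w_m fell below this prediction unless extra motion-direction noise is assumed.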
56.323 Rotation is used to perceive path curvature from optic flow
Jeffrey Saunders1 (jsaun@hkucc.hku.hk); 1Department of Psychology, University of Hong Kong
Previous studies have found that observers can reliably judge their future trajectory along a circular path from optic flow. However, observers have difficulty distinguishing straight and circular paths in some conditions, suggesting insensitivity to optical acceleration. How are observers able to account for path curvature when judging a future circular path? One explanation is that instantaneous rotation is used as a cue for curvature. In many situations, such as driving, the body rotates in sync with the change in heading direction, so rotation provides a reliable cue. The purpose of this study was to test whether the visual system relies on rotation to perceive path curvature from optic flow. Stimuli simulated travel along a circular path on a random dot ground plane, with speeds of 2 m/s and curvature (yaw) of 2°/s. Two conditions differed in simulated view rotation. In the rotating view condition, view direction rotated in sync with heading direction, as in previous studies. In the fixed view condition, displays simulated travel along the same circular paths but without change in view direction. In Experiment 1, observers indicated their perceived future path at various distances by adjusting the horizontal position of a pole. Judgments were consistent with curved paths in the rotating view condition, while in the fixed view condition, judgments were consistent with straight paths. In Experiment 2, observers reported whether their perceived path was straight, curved leftward, or curved rightward. Judgments were highly accurate in the rotating view condition, while in the fixed view condition, curved paths were often reported to be straight, and observers did not reliably distinguish the sign of curvature. In both experiments, observers had difficulty perceiving path curvature from optic flow when it was not accompanied by view rotation, consistent with the use of rotation as a perceptual cue to curvature.

56.324 A Visuomotor Aftereffect Requires Effort To Self Locomote Paired With A Mismatch of Optic Flow
Elizabeth Hopkins1 (ebh7z@virginia.edu), Dennis Proffitt1, Tom Banton1; 1Department of Psychology, University of Virginia
When riding in a car, we experience a mismatch between optic flow and self-produced locomotor activity. In this circumstance, we do not experience a subsequent visuomotor adaptation, perhaps due to the absence of locomotor movement. Note, however, that a mismatch produced by pairing locomotion with an absence of optic flow – in this case, caused by wearing a blindfold – does induce an aftereffect (Durgin & Pelah, 1999). The present research investigated what sort of action / optic flow pairings are required to evoke a visuomotor adaptation. The study employed a 2 x 4 design in which one half of the participants experienced optic flow at 3 mph, and one half of the participants experienced zero optic flow. During this time, participants performed one of 4 actions: walking on a treadmill at 3 mph, walking in place, riding a stationary bicycle, or standing still. For each participant, we obtained pre- and post-measures of forward drift during a blind marching-in-place task. We found that pairing zero optical flow with treadmill walking was the only condition evoking a reliable visuomotor adaptation. We conclude that effort to self locomote, coupled with a mismatch of optic flow, is required in order to establish a visuomotor aftereffect.
56.325 Improving Driver Ability to Avoid Collisions when Following a Snowplow
Peter Willemsen1 (willemsn@d.umn.edu), Michele Olsen1, Sara Erickson3, Albert Yonas2; 1Department of Computer Science, University of Minnesota Duluth, 2Institute of Child Development, University of Minnesota, 3Department of Psychology, University of Minnesota
Low luminance contrast occurring with fog or snow under photopic conditions creates extremely dangerous situations when driving, especially when following other vehicles. In these situations, detecting motion of the lead vehicle is greatly reduced due to low contrast sensitivity. In particular, the expansion information necessary for detecting potential collisions may be poorly integrated. We created a driving simulation framework to test alternative lighting configurations on snowplows to improve detection of approach in low luminance contrast situations, reducing the time to respond in a realistic driving simulation study. We compared errors and reaction times in a simulated driving task over virtual 3D roadways in which participants judged whether the lead snowplow vehicle was approaching or withdrawing. We compared lighting that was similar to that used on current snowplows to lighting in which vertical non-flashing bars were added to the outer edges of the snowplow, and to a condition in which bright corners were added. We found a significant drop in response time to information for impending collision when non-flashing vertical bars positioned at the left and right sides of the vehicle were added to a baseline display that had only normal flashing lights. The average response time for the flashing condition was 1.96 seconds, while the reaction time for the vertical bar condition was 1.84 seconds. We also found that when the lights on the corners were added to the vertical bars, average performance again improved, to 1.79 seconds. The ability to detect information for approach under dense fog or snowy conditions can be substantially improved if lighting on the lead vehicle is altered to optimize the light positioning and orientation. These transformations raise the optical expansion information over threshold for subjects in a driving simulation study. Other lighting designs may be even more effective in improving the safety of drivers.
Acknowledgement: NATSRL
56.326 Perception of apparent motion relies on postdictive interpolation
Zoltan Nadasdy1 (zoltan@vis.caltech.edu), Shinsuke Shimojo2; 1Neuroscience Institute, Scott & White Memorial Hospital, Texas A&M Health Science Center, 2Division of Biology, California Institute of Technology
Ever since Wertheimer discovered apparent motion (AM), controversy about its mechanism (i.e., interpolation vs. extrapolation, postdictive vs. predictive) still lingers. In this series of experiments, we addressed both questions by presenting subjects an AM stimulus starting from the middle of the screen (phase 1) and terminating at either left or right (phase 2) unpredictably. The subjects perceived both motions effortlessly, regardless of the apparent direction. Thus, the motion illusion must have been constructed in the brain only after phase 2, which determined the direction of motion. In the same experiment, we also flashed two targets simultaneously, during phase 2, at various spatial locations and asked subjects to report the temporal order of these targets. We found that almost all subjects perceived the two targets sequentially, between the two AM phases in time, when they were flashed between the AM stimuli. No sequential effect was detected on targets outside of the AM trajectory. These results are consistent with the interpolation hypothesis. In a second experiment, we studied the dependency of the sequential effect on different spatiotemporal configurations of targets. We introduced a marker to help subjects disambiguate the order of intermediate targets and asked them to judge the co-occurrence of the marker with either target while the target configuration was varied. We applied two types of AM sequences: a "predictive" one, when targets were presented before the AM, and a "postdictive" one, when targets were presented after the AM. According to the results, the marker helped subjects to perceive the correct temporal order under the predictive condition but not under the postdictive condition. We concluded that apparent motion perception is postdictive, that it relies on interpolation, and that the postdictive interpolation has a sequential masking/delaying effect on the perception of intermediate targets. The neuronal mechanism of this masking is yet to be determined.
Acknowledgement: JST.ERATO Shimojo Implicit Brain Function Project

56.327 Curved apparent motion induced by amodal completion and the launching effect
Sung-Ho Kim1 (sungho4@eden.rutgers.edu), Manish Singh1, Jacob Feldman1; 1Department of Psychology and Center for Cognitive Science, Rutgers University - New Brunswick
Many aspects of amodal completion in static scenes have been studied, but relatively little is known about how completion interacts with moving structures in dynamic scenes. We examined whether amodal completion can bias an apparent motion path towards longer curved paths behind an occluder, which would violate the well-established principle that apparent motion follows the shortest possible path. In a series of experiments, observers viewed motion sequences of two alternating rectangular targets positioned at the ends of a semicircular "tube," with varying inter-stimulus intervals (ISI: 100-500 ms). With short ISIs, observers tended to report simple straight-path motion, i.e., outside the tube. But with longer ISIs, they became increasingly likely to report a curved motion path occluded by the tube. Subjects also reported that at longer ISIs straight motion became jerkier while curved motion became smoother. In the next experiment, we varied the shape of the occluder, and found similar results with no effect of occluder shape. Other experiments investigated whether the motion path could be influenced by a Michotte-style "launch" at the initiation of motion. We added two more small objects which appeared to collide with the motion tokens at offset, in the direction of either the straight path or the curved path. Subjects in these experiments almost exclusively perceived a motion path in the direction of the launch, regardless of ISI, suggesting a very strong bias in the direction of perceived momentum. In sum, our results suggest that (1) the amodal representation of a fully hidden object behind an occluder can bridge the gap between two token locations via a curved motion trajectory, (2) amodal completion in space-time can make curved motion appear relatively smooth and continuous, and (3) the launching effect strongly induces a motion path in the direction of launching.
Acknowledgement: JF supported by NIH EY-15888, MS supported by NSF CCF-0541185
56.328 Planar configuration rather than depth adjacency determines the strength of induced motion
Arash Yazdanbakhsh1 (Arash_Yazdanbakhsh@hms.harvard.edu), Jasmin Leveille1; 1Department of Cognitive and Neural Systems, Center for Adaptive Systems, and Center of Excellence for Learning in Education, Science, and Technology, Boston University
A moving object can induce an opposite direction of motion in a neighboring target, a phenomenon called induced motion. It has been suggested that motion induction is due to target-inducer interactions that are local in visual space. According to this view, increasing the distance between inducer and target should weaken induced motion. Alternatively, separation in depth per se may not determine whether one object can affect the motion of another. Currently available data support either viewpoint. In particular, Gogel and Mac Cracken (1979) observed a strong weakening of induced motion as the target's location in depth moves farther than the inducer, whereas Di Vita and Rock (1997) noted that depth separation did not exert a strong influence. We noticed that the former study employed stimuli whose motion covered a constant visual angle across depth, whereas the second study employed a constant extent of motion on the stimulus display. Here we report the results of psychophysical experiments which leverage this difference to resolve the discrepancy between the two results and show that disparity-based depth is insufficient to determine the strength of induced motion. Participants rated the effect of a horizontally oscillating inducer frame on a vertically oscillating target dot presented at different disparities in an otherwise dark environment. In the constant visual angle condition (similar to Gogel and Mac Cracken, 1979), induced motion decreased with depth separation. In the constant extent condition (similar to Di Vita and Rock, 1997), induced motion was constant across depth. These results imply that factors related to the target's velocity or extent of motion, more than depth, determine the magnitude of motion induction.
Acknowledgement: AY and JL are supported in part by CELEST, an NSF Science of Learning Center (NSF SBE-0354378). JL is also supported in part by the SyNAPSE program of DARPA (HR001109-03-0001, HR001-09-C-0011).
56.329 From Motion to Object: How Visual Cortex Does Motion Vector Decomposition to Create Object-Centered Reference Frames
Jasmin Leveille1 (jasminl@cns.bu.edu), Stephen Grossberg1, Massimiliano Versace1; 1Department of Cognitive and Neural Systems, Center for Adaptive Systems, and Center of Excellence for Learning in Education, Science, and Technology, Boston University
How do spatially disjoint and ambiguous local motion signals in multiple directions generate coherent and unambiguous representations of object motion? Various motion percepts have been shown to obey a rule of vector decomposition, where global motion appears to be subtracted from the true motion path of localized stimulus components (Johansson, 1950). This results in striking percepts wherein objects and their parts are seen as moving relative to a common reference frame. While vector decomposition has been amply confirmed in a variety of experiments, no neural model has explained how it may occur in neural circuits. The current model shows how vector decomposition results from multiple-scale and multiple-depth interactions within and between the form and motion processing streams in V1-V2 and V1-MT. These interactions include form-to-motion interactions from V2 to MT which ensure that precise representations of object motion-in-depth can be computed, as demonstrated by the 3D Formotion model (e.g., Grossberg, Mingolla and Viswanathan, 2001, Vis. Res.; Berzhanskaya, Grossberg and Mingolla, 2007, Vis. Res.) and supported by recent neurophysiological data of Ponce, Lomber, & Born (2008, Nat. Neurosci.). The present work shows how these interactions also cause vector decomposition of moving targets, notably how form grouping, form-to-motion capture, and figure-ground separation mechanisms may work together to simulate the classical Duncker (1929) and Johansson (1950) percepts of vector decomposition and coherent object motion in a frame of reference.
Acknowledgement: Supported in part by CELEST, an NSF Science of Learning Center (SBE-0354378) and the SyNAPSE program of DARPA (HR001109-03-0001, HR001-09-C-0011).
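The vector-decomposition rule the model is built to explain can be written in one line: the motion attributed to a part is its retinal motion minus the common (frame) motion,

    \[ \mathbf{v}_{\mathrm{part/frame}} = \mathbf{v}_{\mathrm{part}} - \mathbf{v}_{\mathrm{common}} . \]

In Duncker's rolling-wheel display, for instance, a light on the rim traces a cycloid on the retina, yet subtracting the hub's common translation leaves pure rotation about the hub, which is what observers report.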


56.330 Visual discrimination of arrival times: Troublesome effects of stimuli and experimental regime
Klaus Landwehr1 (landweh@uni-mainz.de), Robin Baurès2, Daniel Oberfeld1, Heiko Hecht1; 1Allgemeine Experimentelle Psychologie, Universität Mainz, 2UFR STAPS, Université Paris Ouest Nanterre La Défense
Discrimination thresholds for visually perceiving which of two objects, approaching head-on at constant velocity, will arrive earlier at one's station point have been reported to range between 0.016 and 0.250 (Oberfeld & Hecht, 2008; Regan & Hamstra, 1993; Simpson, 1988; Todd, 1981). Values for lateral motion are typically in the lower range (Bootsma & Oudejans, 1993), and values for recession, moment of passage, and complex scenarios in the higher range (Kaiser & Mowafy, 1993; Kim & Grocki, 2006). We compared Todd's (1981) original stimuli and his experimental regime with modified ones. Todd had presented outlines of two virtual squares, optically specified by 24 dots each, in 2AFC with a constant standard. We, in addition, used dot clouds with the same number of dots as Todd's squares, and a standard-free procedure. We also used narrower ranges of object sizes, velocities, and arrival-time differences, fewer trials, and naïve instead of trained observers. We obtained a minor effect of stimulus type and a large one of experimental regime. As verified by detailed analyses by conditions and levels of variables, huge differences in object size and velocity distract from the task. The weak effect of stimulus type is consistent with Simpson's (1988) contention that unspecialized optic-flow analyzers suffice for extracting temporal information; on the other hand, it might also mirror the flexibility of dedicated "looming detectors" – not requiring contours or outlined shapes for proper functioning (cf. Beverley & Regan, 1980; Koenderink, 1985). We observed excessive individual differences, with Weber fractions ranging between 0.017 and 0.123. For 32% of sessions, no psychometric functions could be fitted. We are currently extending our present work to include different trajectories, objects, and contexts, and also control measures of plain motion sensitivity, in order to test the generality of our findings and to better understand differences in performance.
Acknowledgement: Supported by a grant of the Deutsche Forschungsgemeinschaft to Heiko Hecht (HE 2122/6-1: Kontaktzeitschätzung im Kontext) and a post-doc fellowship of the Alexander-von-Humboldt-Stiftung to Robin Baurès.

56.331 Looking off effect – shift of face direction caused by a rotating object
Kotaro Hashimoto1 (khashimo@riec.tohoku.ac.jp), Kazumichi Matsumiya1,2, Ichiro Kuriki1,2, Satoshi Shioiri1,2; 1Graduate School of Information Sciences, Tohoku University, 2Research Institute of Electrical Communication, Tohoku University
[Purpose] Motion influences the perceived position of stationary objects in certain conditions. We found a similar phenomenon, in which rotation signals in depth influence the perceived direction of a briefly presented face image (the looking off effect). When a rotating object is replaced by a face image, the face direction appears to shift in the rotation direction of the moving object. We examined whether the phenomenon is a variation of the 2D motion effect, whether it is a local effect, and whether it is a face-specific phenomenon.
[Experiment] The rotating inducer was a 3D human head and the test stimulus was a 2D cartoon face or a wired object. In a trial, the inducer rotated around the vertical axis from one side to the other. When the inducer was directed to the center (facing the observer), a test stimulus replaced it briefly. The direction of the test stimulus varied between -4° and 4°, and the observer reported the direction of the test (left or right). With the method of constant stimuli, we measured the direction of the test object that appeared to be straight ahead.
[Results] The perceived straight-ahead direction of the test image was shifted in the direction of the inducer rotation. The effect was larger for the face (2.0°) than for the wired object (1.2°). When an upside-down face was used, the amount of the shift was reduced (1.7°). Using the depth-reversed wired object, we confirmed that the effect operates in 3D. We also found no effect when the inducer and the test did not overlap in space. These results indicate that the effect is neither specific to faces nor a 2D phenomenon, although it may be stronger for face images. They suggest that the effect is related to spatial perception rather than to object or face recognition.

Neural mechanisms: Human electrophysiology
Orchid Ballroom, Boards 401–408
Tuesday, May 11, 2:45 - 6:45 pm
56.401 Early VEP magnitude is modulated by structural sparseness and the distribution of spatial frequency contrast in natural scenes
Bruce C. Hansen1 (bchansen@mail.colgate.edu), Theodore Jacques1, Aaron P. Johnson2, Dave Ellemberg3; 1Department of Psychology, Colgate University, Hamilton, NY, USA, 2Department of Psychology, Concordia University, Montréal, QC, Canada, 3Centre de recherche en neuropsychologie et cognition (CERNEC), Université de Montréal, QC, Canada
The contrast response function of early visual evoked potentials (VEPs) elicited by sinusoidal gratings is known to exhibit characteristic potentials associated with the parvocellular and magnocellular pathways. Specifically, the N1 component has been linked with parvocellular processes, while the P1 component has been linked with magnocellular processes (Ellemberg et al., 2001, Spatial Vision). However, little is known regarding the response of those pathways during the encoding of complex (i.e., broadband) stimuli such as natural scenes. Natural scenes are known to vary in terms of: 1) the amount of structural content (i.e., structural sparseness) contained within each image, and 2) the distribution of contrast across spatial frequency (i.e., the 1/f slope of the amplitude spectrum) of each image. Thus, the present study was designed to examine the extent to which the physical characteristics of natural scenes mentioned above modulate early VEPs in humans. The stimuli consisted of 50 natural scene images, grouped according to the slope of their amplitude spectra (five different levels of slope: -0.76, -0.87, -1.0, -1.2, & -1.4) and the degree of structural sparseness (two levels: high and low) contained within each image. We recorded EEGs while participants viewed each natural scene image for 500 ms, preceded by a 500 ms mean-luminance blank from which baseline measurements were taken. The results show that: 1) the relative magnitude of the early VEPs was highly dependent on the amount of structure contained within the scenes, independent of amplitude spectrum slope; 2) the overall magnitude of the early VEPs was dependent on the slope of the amplitude spectrum, such that the presence of more contrast at the higher spatial frequencies yielded higher overall early VEP magnitude. These results suggest that it is the amount of structure at the higher spatial frequencies in natural scenes that dominates early VEPs.
Acknowledgement: NSERC & CFI grants to DE, NSERC to APJ, and CURCG to BCH
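The amplitude-spectrum slope used to group the scenes is the exponent of the characteristic 1/f fall-off of natural images, estimated as the slope of log amplitude against log spatial frequency. A minimal estimator follows (our sketch; the authors' exact pipeline, e.g., windowing, radial averaging, and the fitted frequency range, is not specified in the abstract):

    import numpy as np

    def amplitude_spectrum_slope(image):
        """Fit log amplitude vs. log spatial frequency over all 2-D
        frequency coefficients (no radial averaging, for brevity)."""
        amp = np.abs(np.fft.fftshift(np.fft.fft2(image)))
        h, w = image.shape
        fy = np.fft.fftshift(np.fft.fftfreq(h))
        fx = np.fft.fftshift(np.fft.fftfreq(w))
        radius = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)
        mask = radius > 0                      # exclude the DC term
        slope, _ = np.polyfit(np.log(radius[mask]),
                              np.log(amp[mask] + 1e-12), 1)
        return slope                           # near -1.0 for typical scenes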
56.402 Relative latency of visual evoked responses to reversals in contrast, orientation, and motion direction
Oliver Braddick1 (oliver.braddick@psy.ox.ac.uk), Jin Lee1, Katie McKinnon2, Isobel Neville3, John Wattam-Bell4, Janette Atkinson4; 1Dept of Experimental Psychology, Oxford University, UK, 2St Anne's College, Oxford, UK, 3St Catherine's College, Oxford, UK, 4Visual Development Unit, Dept of Developmental Science, University College London, UK
Conventional VEP recording tests neural responses evoked by reversing pattern contrast. VEPs for orientation-reversal (OR) [Braddick et al (1986), Nature, 320: 617] and direction-reversal (DR) [Wattam-Bell (1991), Vision Res, 31: 287] use stimulus sequences designed to isolate cortical responses to these higher-order changes from responses to contrast change. Since these require more complex processing than contrast changes, we might expect some additional delay of the measured response, reflecting this additional processing. We have tested this hypothesis by recording pattern-reversal (PR), OR, and DR responses, at reversal rates up to 4/sec, from occipital scalp electrodes on adult subjects, and assessing the mean latency of the first positive peak. OR and DR sequences isolate the effect of reversals from accompanying contrast changes by embedding the reversal event within a sequence of equivalent contrast changes ('jumps'). We use two methods to avoid our latency measures being contaminated by responses to jumps: filtering out harmonics in the signal related to the jump frequency, or subtracting a 'jump-only' section of the waveform from the response to reversal + jump. We find very similar latencies for OR and PR responses, suggesting that responses to pattern reversal arise from a level of cortical processing which is already orientation-selective. The DR response is more complex, but typically contains components with a latency 10-20 ms lower than either PR or OR – evidence against any time penalty associated with motion processing. We will discuss these results in relation to possible differences in the balance of magno- and parvocellular inputs to the three responses, and possible 'fast' routes for motion processing bypassing V1. Future work will test the overall temporal properties of the different responses, beyond the initial latency, and also the potential use of this comparison in analysing cortical processing in infancy.
Acknowledgement: Research Grant G0601007 from the Medical Research Council and a Thouron award from the University of Pennsylvania
56.403 Orientation selectivity in primary visual cortex using MEG: an inverse oblique effect?
Loes Koelewijn1,2 (lkoelewi@maccs.mq.edu.au), Julie R. Dumont1, Suresh D. Muthukumaraswamy1, Anina N. Rich2, Krish D. Singh1; 1CUBRIC (Cardiff University Brain Research Imaging Centre), School of Psychology, Cardiff University, Park Place, Cardiff CF10 3AT, UK, 2MACCS (Macquarie Centre for Cognitive Science), Macquarie University, Sydney NSW 2109, Australia
Orientation discrimination is much better for horizontal or vertical than for orientations with a 45-degree tilt. In partial support of these behavioural findings, some animal physiology studies show that a moderately larger number of neurons are tuned to cardinal than to oblique orientations in primary visual cortex, and the former are more tightly tuned to their preferred orientation. A limited number of human neuroimaging studies also support this classic 'oblique effect', with the BOLD response localising the neural effect to V1, and EEG demonstrating both increased response magnitudes and reduced latencies. How is orientation selectivity reflected in the magnetoencephalography signal? The animal literature shows that GABAergic interneurons play a critical role in orientation selectivity. As these inhibitory interneurons are also important for stimulus-induced gamma oscillations, it is likely that responses in the gamma spectrum are influenced by orientation. We measured the evoked response, as well as the initial spike and sustained induced gamma response, to maximum-contrast, 3 cycle/degree, stationary black/white sine-wave circular grating patches of diameter 4 degrees. Three orientations (0, 45 and 90 degrees from vertical) were randomly chosen and presented in the lower left quadrant, 2.5 degrees from fixation. Our results point towards a larger induced gamma response for oblique stimuli over cardinal ones in contralateral V1, reflected both in the initial spike and in the sustained response during stimulus presentation. The specific frequency of the peak response did not differ. Interestingly, in contrast to the EEG findings, we also found this 'inverse oblique effect' in the evoked response around 80 ms post-stimulus. These results may suggest that V1 neurons have a more complex response tuning to non-preferred orientations. Alternatively, the results may be due to an oddball phenomenon, implying that cardinals are grouped in perception and that oblique stimuli attracted attention.
Acknowledgement: CUBRIC - School of Psychology, MACCS Postgraduate Grant, Macquarie International Travel Scholarship

56.404 Neural Mechanism of Inverse Oblique Effect on Broad-band Noise Stimuli: An ERP Study
Yan Song1 (songyan@bnu.edu.cn), Bin Yang1, Fang Wang1, Xiaoli Ma1; 1State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University
When resolution acuity or contrast sensitivity is evaluated using simple stimuli such as lines or gratings, visual performance is often best for horizontal and vertical orientations and worst for oblique orientations. This is the well-known 'oblique effect'. However, using more complex stimuli of broad-band spatial frequencies and orientations, Essock et al. found an inverse oblique effect, in which visual performance is worst for the horizontal orientation and best for oblique orientations. The anisotropy in the number of cortical neurons tuned to different orientations in visual cortex has accounted for the oblique effect in previous fMRI and physiological studies. But the neural mechanism of this inverse oblique effect remains largely unknown. In the present study, seventeen subjects were first tested for orientation salience thresholds before their EEGs were recorded. Thresholds were highest for the horizontal orientation and lowest for oblique orientations, consistent with Essock et al.'s work. Then, we recorded high-resolution electroencephalography from a whole-scalp sensor array while subjects took part in an orientation identification task, in which orientation salience was 1.2-2.5 times the threshold of the horizontal orientation. We found that response accuracies were lower and response times were longer for cardinal orientations than for oblique orientations. The event-related potential results revealed that the difference between cardinal and oblique orientations occurred around 200 ms post stimulus onset, much later than the traditional oblique effect. In addition, the P300 latency was much earlier for oblique orientations than for cardinal orientations. These findings indicate that the inverse oblique effect for broad-band noise stimuli might occur at relatively higher levels of visual information processing and might involve more complex neural mechanisms than the oblique effect.
Acknowledgement: This research was supported by the National Natural Science Foundation of China (No. 30600180) and the Beijing Natural Science Foundation (No. 7073092)
56.405 EEG and MEG Time Functions Are the Same
Stanley Klein1,2,3 (sklein@berkeley.edu), David Kim3, Thom Carney1; 1School of Optometry, UC Berkeley, 2Helen Wills Neuroscience Institute, 3Bioengineering, UC Berkeley
Previous studies with simultaneous EEG and MEG recordings have reported significant intermodal differences. These differences reflect the differential contributions from multiple sources due to the different dependencies of EEG and MEG on tissue conductivity. Our multifocal stimuli, composed of 32 small patches that randomly check-reversed at 30 Hz, activate in a time-locked fashion only early visual areas, while effectively suppressing higher-level processes. We suspect it is this selective activation and the availability of many patches for SVD analysis that permits the impressive intermodal agreement in our study. We performed three analyses to assess the time-function similarity.
1) Estimation of the overall fit by correlation measures. EEG/MEG correlation coefficients for the first three SVD time-function components were [0.99, 0.99, 0.99], [0.94, 0.97, 0.97] and [0.61, 0.57, 0.88] ([S1, S2, S3] denotes the three subjects).
2) Chi-square estimation of intermodal time lag. To p = .001 confidence, we can detect the difference between EEG and MEG when the EEG signal is shifted by [1.6, 1.9, 1.6], [1.5, 1.3, 1.1] and [3.7, 9.3, 1.5] milliseconds for the three components. These surprisingly small values are a powerful demonstration of EEG/MEG agreement, especially since we allowed an arbitrary linear combination of the EEG components to match each MEG component.
3) Identification of signal differences at specific temporal locations. In order to carefully test the significance of temporal regions where the signals differ, we used cluster-based permutation analysis to determine the location and significance of these differences. We found that for the three subjects, the cluster analysis (p
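The SVD analysis behind the reported correlation coefficients can be sketched compactly. This is our reconstruction of the general technique, not the authors' pipeline; eeg and meg below stand for hypothetical channels-by-time arrays recorded simultaneously:

    import numpy as np

    def component_time_functions(data, n_components=3):
        """data: channels x time. SVD factors the recording into spatial
        patterns and their associated time functions; the rows of vt are
        the time functions, ordered by explained variance."""
        centered = data - data.mean(axis=1, keepdims=True)
        u, s, vt = np.linalg.svd(centered, full_matrices=False)
        return vt[:n_components]

    # e.g., correlate the first EEG and MEG time functions:
    # r = np.corrcoef(component_time_functions(eeg)[0],
    #                 component_time_functions(meg)[0])[0, 1]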
the analysis. In a second experiment, we presented two independent random luminance sequences, one on each side of fixation, and participants attended to one or the other during each trial. The oscillation in the VIRF in response to the attended stimulus was 20% greater than for the unattended; this contrasts with the typical decrease of ongoing alpha with attention. Our findings demonstrate that the brain resonates with the random fluctuations in the noise stimulus at its natural frequency (~10 Hz) and disregards other frequencies; in accordance with this interpretation, our participants tended to perceive the stimulus as periodic flicker rather than as a broadband white-noise signal. Therefore each luminance modulation in our stimulus, i.e., each 'event', generated not just one potential, but a periodic series of potentials - 'echoes' of the first, akin to cortico-thalamic reverberation.
Acknowledgement: This research was supported by grants from the ANR (project ANR 06JCJC-0154) and the EURYI to R.V.

56.408 The spatial distribution of VEP responses to temporal modulations of motion contrast in human adults
J.D. Fesi1 (jdf232@psu.edu), R.O. Gilmore1; 1Department of Psychology, The Pennsylvania State University
Single-unit and fMRI evidence suggests that extrastriate cortical regions process motion contrast. In this study, we employed high-density steady-state visual evoked potential (SSVEP) recordings to study the tuning curves of cortical areas sensitive to the separation of a figure from its background using different classes of motion contrast information: direction and global coherence. Participants (n=21; 11 female) viewed moving dot displays (7 cd/m2 size; 8% density) in which four square "figure" regions (9° wide) emerged from and disappeared into the background at a specific frequency (1.2 Hz: 1F1), based on differences in dot direction and motion coherence. We found responses at 1F1 that increased monotonically with both types of motion contrast, observed over midline channels near the occipital pole. Responses at the second harmonic (2F1) were strongest over lateral channels; there the response curves saturated once a minimal threshold of motion contrast magnitude was reached. We interpret the midline activity at 1F1 to reflect the processing of the magnitude of motion contrast information in early visual association areas, possibly V2 or V3A/D. The lateralized activity at 2F1 appears to reflect a non-linear thresholding operation associated with extracting the figure from the background, perhaps engaging lateral occipital cortex (LOC), among other areas. Source modeling techniques will allow us to locate the precise coordinates of the processing centers of these functionally distinct evoked responses in the brain.
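As an illustration of how steady-state amplitudes at the stimulus frequency (1F1) and its second harmonic (2F1) can be read out of a recording, here is a hedged Python/NumPy sketch; the sampling rate and the synthetic signal are assumptions for demonstration, not the study's data.

import numpy as np

fs = 500.0                      # sampling rate in Hz (assumed)
f1 = 1.2                        # figure modulation frequency from the abstract
t = np.arange(0.0, 20.0, 1.0 / fs)

# Synthetic stand-in: responses at 1F1 and 2F1 buried in broadband noise.
rng = np.random.default_rng(0)
signal = (0.8 * np.sin(2 * np.pi * f1 * t)
          + 0.5 * np.sin(2 * np.pi * 2 * f1 * t)
          + rng.standard_normal(t.size))

amps = np.abs(np.fft.rfft(signal)) * 2.0 / t.size   # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
for label, f in (("1F1", f1), ("2F1", 2 * f1)):
    # With a 20 s window the 0.05 Hz bins land exactly on 1.2 and 2.4 Hz.
    print(label, round(float(amps[np.argmin(np.abs(freqs - f))]), 2))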
Attention: Tracking
Orchid Ballroom, Boards 409–425
Tuesday, May 11, 2:45 - 6:45 pm

56.409 Neural measures of interhemispheric information transfer during attentive tracking
Trafton Drew1 (tdrew@uoregon.edu), Todd S. Horowitz2,3, Jeremy Wolfe2,3, Edward K. Vogel1; 1University of Oregon, 2Harvard Medical School, 3Brigham and Women's Hospital
People are generally able to track 4-5 objects as they move amongst visually identical distractors. However, Alvarez & Cavanagh (2005) found that if tracked objects are lateralized to one visual hemifield, tracking capacity is drastically reduced relative to bilateral tracking trials. These data suggest that tracking for each hemifield is carried out independently by the contralateral hemisphere. If so, what happens when an object moves from one hemifield to another? If the right hemisphere is tracking an object that moves to the right visual field, does the left hemisphere pick up the object representation the moment that it crosses the midline, does it preemptively start tracking the object before it crosses the midline, or does it wait until some point after the midline has been crossed? When does the right hemisphere stop tracking the object? We studied these questions using a sustained contralateral negativity that indexes tracking activity during lateralized versions of the attentive tracking task (Drew & Vogel, 2009). We measured contralateral and ipsilateral activity while a tracked object moved horizontally across the midline. As predicted, activity with respect to the hemifield where the object originated initially exhibited a strong contralateral negativity that then flipped to an ipsilateral negativity as the object moved to the opposite hemifield. We found that ipsilateral activity prospectively increased prior to the moment when the object crossed the midline, whereas contralateral activity did not decrease until several hundred milliseconds after the object crossed the midline. This suggests that the two hemispheres were both tracking the object for several hundred milliseconds. Furthermore, we were able to influence the timing of this interaction by manipulating the predictability of object motion. When the object movement was less predictable, the duration of interhemispheric information sharing decreased.

56.410 The coordinate systems used in visual tracking
Piers Howe1,2 (howe@search.bwh.harvard.edu), Yair Pinto1,2, Todd Horowitz1,2; 1Brigham and Women's Hospital, Boston, MA, 2Harvard Medical School, Boston, MA
Tracking moving objects is a fundamental attentional operation. Without tracking, attention cannot be maintained on objects translating through space. Here we ask which coordinate system is used to track objects: retinal (retinotopic), scene-centered (allocentric), or both. While maintaining gaze on a fixation cross, observers tracked three of six disks, which were confined to move within an imaginary square. Relative to the imaginary square, the disks all moved at the same speed. By moving either the imaginary square (and thus the disks contained within), the fixation cross (and thus the eyes), or both, we could increase disk speeds in one coordinate system while leaving them unchanged in the other. Increasing disk speeds in either coordinate system reduced tracking ability by an equal amount. These data support the hypothesis that humans track objects simultaneously in both retinotopic and allocentric coordinates. This finding imposes a strong constraint on models of multiple object tracking.
Acknowledgement: We would like to acknowledge NIH MH65576 to TSH

56.411 Effects of Distinct Distractor Objects in Multiple Object Tracking
Cary Feria1 (cary.feria@sjsu.edu); 1Department of Psychology, San Jose State University
Previous studies investigating the question of whether feature information is maintained during multiple object tracking (MOT) have found mixed results. The present experiment addresses this question by manipulating the color of some of the distractor objects in MOT. Can the visual system filter out distractors that have a distinct feature from the targets? At the beginning of each trial, several circles were displayed, and 5 of them flashed to designate them as targets. Then the circles moved about the screen.
When they stopped moving, one circle was highlighted, and the observer answered whether it was a target or not. On each trial, there were 5 targets, 5 distractors that were identical to the targets, and also several (0, 1, 2, 5, or 10) additional distractors. The additional distractors were either the same color as the targets or a different color. The highlighted circle was always one of the targets or one of the 5 identical distractors. Tracking performance declined as the number of additional distractors increased, both for same-color and different-color additional distractors. Yet tracking performance was higher when the additional distractors were different in color from the targets. These results demonstrate that distractors hinder tracking, but that if distractors have a distinct feature from targets, the distractors' effect is reduced. However, even featurally distinct distractors interfere with tracking to some extent. These findings show that the visual system can use feature information about objects to facilitate MOT, which supports the claim that feature information can be maintained during tracking.

56.412 Adaptive Training in Multiple Object Tracking Expands Attentional Capacity
Todd W Thompson1 (toddt@mit.edu), John DE Gabrieli1, George A Alvarez2; 1Massachusetts Institute of Technology, 2Harvard University
One popular task for measuring attentional capacity is multiple object tracking (MOT), where observers attentively track multiple moving target items among a set of identical distractors. MOT performance depends on a variety of factors, including the number of targets and their speed (Alvarez and Franconeri, 2007). Thus, the MOT task provides two measures of attentional capacity: (1) the number of items that can be tracked at a fixed speed, and (2) the maximum speed at which a fixed number of items can be tracked. Here, we explored whether these measures of attentional capacity can be increased through an adaptive training regime. A threshold procedure was used to determine the speed at which two subjects could track four targets among eight distractors (mean speed = 5.3 deg/s). Subjects then completed twenty sessions of MOT practice (40 trials per day), with the object speed on each trial adaptively updated. When all targets were accurately tracked on two consecutive trials, speed was increased by 1 deg/s. When any targets were missed, the speed was decreased by 1 deg/s. After the last session of training, we assessed the number of items that could be tracked at the initial pre-training speed. Over twenty sessions of training, subjects increased the speed at which they could reliably track four objects from 5.3 deg/s to an average speed of 12.3 deg/s. The maximum speed they could track objects was proportional to the time spent training (r = .843, p
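The adaptive rule described above (speed increases after two consecutive fully correct trials, decreases after any miss) is simple to state in code. Here is a minimal sketch, assuming a track_trial(speed) callback that runs one trial and reports whether all four targets were tracked; the simulated observer below is a stand-in for demonstration, not the study's procedure.

import random

def speed_staircase(track_trial, start_speed=5.3, step=1.0, n_trials=40):
    """Two-correct-up / one-wrong-down staircase over object speed (deg/s)."""
    speed, streak = start_speed, 0
    for _ in range(n_trials):
        if track_trial(speed):
            streak += 1
            if streak == 2:        # two fully correct trials in a row: harder
                speed += step
                streak = 0
        else:                      # any target missed: easier
            speed = max(step, speed - step)
            streak = 0
    return speed

# Stand-in observer whose success probability falls with speed (assumed form).
def simulated_observer(speed, capacity=8.0):
    return random.random() < max(0.0, min(1.0, capacity / speed - 0.4))

print(f"end-of-session speed: {speed_staircase(simulated_observer):.1f} deg/s")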
56.417 Tracking seven is not the same as tracking three: The roles of parallel and serial resources in object tracking
Jonathan Flombaum1 (flombaum@jhu.edu); 1Johns Hopkins University
Tracking a subset of moving targets among a group of identical items, typically studied with the multiple object tracking paradigm (MOT), has long been known to be capacity limited, usually to about three items. But recent work suggests that this limit can increase to as many as eight when objects move slowly enough. The current studies asked whether seven items are tracked in the same way as three. Participants performed MOT while also detecting transient probes that appeared on targets. In Experiment 1, participants tracked between one and five targets. Targets were always revealed one at a time, and in half the trials participants had to identify targets in the same order as they were revealed, adding serial order (SO) memory requirements. Whereas probe detection rates declined linearly as a function of load in the SO task, detection only declined in the spatial task when tracking four or five targets compared to two or three. Monotonic costs associated with additional targets in the SO task reveal the operation of a serial processing mechanism. But the absence of such costs for up to three targets in the spatial condition suggests that two or three targets were tracked in parallel, while tracking four or more demanded serial resources. An experiment with slow object speeds and tracking loads up to seven confirmed these intuitions. Declines in probe detection emerged for more than three targets, though there were no significant costs associated with tracking three compared to two. Further experiments used an object localization procedure and found a similar absence of per-item costs for one to three targets, compared with steep per-item costs for more than three. Overall, these experiments demonstrate that only up to three targets can be tracked in parallel, and that tracking more than three requires the allocation of serial resources.

56.418 The spatial representation in multiple-object tracking
Markus Huff1 (m.huff@iwm-kmrc.de), Frank Papenmeier1, Georg Jahn2; 1Knowledge Media Research Center Tübingen, Germany, 2Department of Psychology, University of Greifswald, Germany
During multiple-object tracking, visual attention is allocated asymmetrically across target and distractor objects: probe-detection experiments showed that visual tracking benefits from inhibiting distractors (Pylyshyn, 2006). However, experiments examining multiple-object tracking within 3D scenes across scene rotations suggest that the spatial relations between all objects may be used in visual tracking (Huff, Jahn, & Schwan, 2009). In the current study, we examined the role of spatial relations among target and distractor objects within 3D scenes. Participants tracked five of ten balls moving on a circular monochromatic floor plane. Halfway through each trial, we abruptly rotated only distractors, only targets, or all objects (complete rotations) during a masking flash of 100 ms. In control conditions there was a flash but no rotation. If spatial relations between all objects are used in multiple-object tracking, performance should be impaired in conditions with distractor rotation. However, if the distractors are processed separately from the targets and if the relations between distractors and targets are irrelevant for tracking, abrupt distractor rotations should not affect tracking.
Additionally, tracking should be easier for complete rotations compared to target-only rotations if relations between all objects are used. Compared to the control condition, abrupt distractor rotations impaired tracking performance only in trials in which distractor rotations led to increased crowding around a target object. When distractor rotation involved no increased crowding, tracking performance was comparable to the control condition. Additionally, abrupt complete rotations impaired tracking performance the same way as target-only rotations did. Two implications can be drawn from this study. First, there is some evidence that targets and distractors are processed separately. More specifically, the spatial relations between targets and distractors seem to be irrelevant for multiple-object tracking. Second, crowding plays an important role in allocating visual attention, as crowding around target objects due to distractor rotations impaired tracking.
Acknowledgement: Deutsche Forschungsgemeinschaft (German Research Foundation) grants HU 1510 4-1 and JA 1761/5-1

56.419 Reallocating Attention in Multiple-Object Tracking Without Explicit Cues
Justin Ericson1,2 (jerics1@lsu.edu), James Christensen1; 1Air Force Research Laboratories, 2Louisiana State University
Wolfe, Place, and Horowitz (2007) presented data from multiple experiments designed to increase the real-world relevance of the typical multiple object tracking paradigm (Pylyshyn & Storm, 1988). They found that continuously adding and removing objects from the tracked set during tracking produced very little change in performance. Ericson and Christensen (VSS 2009) tested the addition and removal of objects during tracking separately, and found that performance increases with an addition and decreases equally with a removal as compared to tracking a fixed set. The increase was explained using a performance model that assumes the probability of losing tracking on a dot is solely a function of the number of dots tracked at that time. A new second experiment demonstrates that the decreased accuracy produced by removing an object is eliminated when the object to be removed is offset rather than cued via sudden onset. The results of applying the performance model support the idea that object tracking can be described as a continuous-time Markov process with highly efficient reallocation of attention as long as cues do not conflict with reallocation.
Acknowledgement: AFRL 711th HPW, Consortium Research Fellowship
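The modeling idea, that the probability of losing a dot depends only on how many dots are currently tracked, can be illustrated with a continuous-time Markov simulation. This is a sketch of that model class, not the authors' implementation; in particular, the hazard function below is an arbitrary illustrative choice.

import numpy as np

def simulate_tracking(n_targets, duration=10.0, rng=np.random.default_rng(0)):
    """Continuous-time Markov sketch: with n dots tracked, the next loss
    arrives after an exponential wait whose rate depends only on n."""
    def loss_rate(n):
        return 0.03 * n * n   # illustrative assumption: per-dot hazard grows with load

    t, n = 0.0, n_targets
    while n > 0:
        t += rng.exponential(1.0 / loss_rate(n))
        if t > duration:
            break
        n -= 1                # one dot drops out of the tracked set
    return n                  # targets still tracked at the end of the trial

trials = [simulate_tracking(4) for _ in range(1000)]
print("mean targets retained:", np.mean(trials))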
56.420 Shrinking or Falling? Naturalistic Optical Transformations Do Not Increase Multiple Object Tracking Capacity
Chris Brown1 (cmbrown1@wichita.edu), Dinithi Perera1, Evan Palmer1; 1Human Factors Program, Department of Psychology, Wichita State University
Objects that shrink and expand at occluding surface boundaries in multiple object tracking are more difficult to follow than objects that delete and accrete (Scholl & Pylyshyn, 1999). Here we ask whether this difficulty is due to the lack of a top-down naturalistic explanation for the shrinking or whether it is due to the bottom-up optical transformation of shrinking itself. If a naturalistic visual context for the shrinking disks were available, would observers exhibit higher tracking capacity? Observers tracked four of eight disks that could pass behind an occluding surface in the middle of the display. The disks moved for 10 seconds and then observers attempted to identify the four disks they were tracking by clicking on them. In the occluded condition, disks deleted/accreted at the occlusion boundary, while in the shrinking condition, disks shrank/expanded at the boundary. In the critical falling condition, disks also shrank/expanded at the occlusion boundary, but a background image with a steep slope made the disks appear to fall into and then emerge from a ravine below the occluding surface. The low-level optical transformations in the shrinking and falling conditions were identical; only the background image varied. Observers had the highest tracking capacity in the occluded condition, followed by the shrinking condition. Contrary to the naturalistic optical transformation hypothesis, observers exhibited the lowest tracking capacity in the falling condition. We interpret this result to indicate that bottom-up optical transformations per se caused the decrease in tracking capacity. If an object shrinks at an occlusion boundary, the visual system seems to stop tracking it, even if there is a naturalistic visual explanation for the shrinking.

56.421 Event-related Potentials Reveal "Intelligent Suppression" during Multiple Object Tracking
Matthew M. Doran1 (mdoran@udel.edu), James E. Hoffman1; 1Department of Psychology, University of Delaware
Recent evidence indicates that distractor objects may be actively inhibited or suppressed during multiple object tracking (MOT; Doran & Hoffman, in press; Pylyshyn, 2006; Pylyshyn et al., 2008); however, the mechanism of suppression is currently unclear. In one view, zones of suppression may surround tracked targets so that objects that are near targets (and therefore within the suppressive region) would be suppressed (Franconeri et al., Psychonomics 2009). Alternatively, it may be that suppression is only applied to distractors that are likely to be confused with targets and therefore interfere with tracking performance (Pylyshyn et al., 2008). We examined this issue by measuring the amplitude of the N1 component of the event-related potential (ERP) elicited by probe flashes presented on targets, nearby distractors, and distant distractors. Critically, some of the distractors were "confusable" with the targets (i.e., they were the same color and shape) while others were not (i.e., they were a different color and shape). If distractors are suppressed via an inhibitory region surrounding targets, then confusability shouldn't matter and both confusable and nonconfusable distractors should be suppressed when they are near targets. Alternatively, if suppression is "intelligent" or selective, then only the confusable distractor objects should be suppressed, perhaps because they are more likely to interfere with accurate tracking.
The results of this experiment support intelligent suppression, as N1 amplitude for probes appearing on nearby distractors was suppressed only when they were confusable with the target. In sum, these data suggest that suppression of distractors during MOT is "intelligent", as it is applied only to distractors that are potentially confusable with targets.

56.422 Blink-induced masking and its effect on Multiple Object Tracking: It's easier to track those that stop during interrupted viewing
Deborah Aks1 (daks@rci.rutgers.edu), Harry Haladjian1,2, Alyssa Kosmides3, Seetha Annamraju4, Hristiyan Kourtev1, Zenon Pylyshyn1; 1Rutgers Center for Cognitive Science, 2Rutgers University Department of Psychology, 3Rutgers University Department of Biomedical Engineering, 4Rutgers University Department of Computer Engineering
When tracking multiple objects, does the visual system encode the location and trajectory of tracked objects? Is encoding only triggered by the abrupt changes that typically occur in the real world, such as when objects disappear behind other objects? We extend our 2009 work examining the role of location-coding in Multiple Object Tracking (MOT) using a novel blink-contingent method, enabling us to control simultaneously both item disappearance and abrupt transitions.[1] Here, we introduce backward masking to the eye-blink paradigm to further control onset transitions. Observers were instructed to blink their eyes when a brief tone was presented midway into each 5 s trial of tracking (4 of 8 circles). Eye blinks induced two events: item disappearance (for 150, 450, or 900 ms) and onset of a mask, which occluded the entire display of items (either for the full disappearance time, or for 75 ms plus a blackout for the remaining interrupt). During their disappearance, objects either continued moving along their trajectories or halted until their reappearance. Therefore, "move" objects reappeared further along their trajectory while "halt" objects did not. Results replicate Keane & Pylyshyn (2006) and Aks et al. (2009), with better tracking when items halt [but here only reliably in the 450 and 900 ms trials]. These trends indicate that trajectory information is not encoded during tracking, and the visual system may refer back to past position samples as a 'best guess' for where tracked items are likely to reappear. Importantly, the "halt" advantage occurred in both blocked and randomized forms of object motion, suggestive of an automated and data-driven tracking mechanism; one not inclined to predict objects' trajectories even when presented in a repeated, and thus predictable, context. [1: Although an eye blink is a sudden event optically, we are typically unaware of it and it is likely not encoded as a transient event.]
Acknowledgement: NSF-IGERT

56.423 Attentional tracking in the absence of consciousness
Eric A. Reavis1 (eric.a.reavis@dartmouth.edu), Peter J. Kohler1, Sheng He2, Peter U. Tse1; 1Department of Psychological and Brain Sciences, Dartmouth College, 2Department of Psychology, University of Minnesota
Visual attention and awareness are closely related but dissociable processes. One demonstration of the dissociation between attention and awareness comes from attentional allocation toward invisible targets (Jiang, Costello, Fang, Huang, & He, 2006). However, there are multiple subtypes of visual attention, and it is probable that there are differences in the relationship between attention and awareness for different subtypes.
Thus, it might be that some aspects of attention, such as orienting to a change in the saliency map of the visual field, demonstrated by Jiang and colleagues (2006), continue to operate without awareness, while other aspects of attention, such as attentional tracking, only occur with awareness. We designed experiments to determine whether attentional tracking can take place without conscious awareness of to-be-tracked moving targets. First, we replicated the result of Jiang and colleagues (2006). We used continuous flash suppression to render images invisible, then measured subjects' performance on a subsequent visible two-alternative forced-choice Gabor orientation judgment task in the location of an invisible attentionally salient or non-salient stimulus. We replicated the result that subjects' discrimination of Gabor orientation was influenced by the attentionally salient stimuli. We then modified the paradigm to test attentional tracking. We presented an attentionally salient stimulus which morphed into a dot, then moved across the screen to a different location while remaining invisible. We tested subjects' performance on the orientation discrimination task in both the original location of the attentional stimulus and the final location of the associated trackable dot. We found that subjects' performance on the orientation task was influenced in both locations, compared to control locations where a non-salient stimulus and trackable dot appeared. This suggests that attentional tracking can occur without awareness of the tracked stimulus. These data imply that even certain advanced aspects of attention are dissociable from awareness.

56.424 Spatial Reference in Multiple Object Tracking
Georg Jahn1 (georg.jahn@uni-greifswald.de), Frank Papenmeier2, Markus Huff2; 1Department of Psychology, University of Greifswald, Germany, 2Knowledge Media Research Center Tübingen, Germany
While tracking multiple targets simultaneously, the configuration in the scene as it is projected onto the picture plane provides stable spatial reference for tracking targets. Multiple object tracking in a 3D scene is robust against smooth movements of the whole scene even without static reference objects (Liu et al., 2005), suggesting that targets and distractors provide enough configurational information. What is important, however, is continuous motion in the picture plane, as revealed by the detrimental effect of abrupt viewpoint changes (Huff, Jahn, & Schwan, 2009). Abrupt viewpoint changes suddenly change the configuration of identical-looking objects in the picture plane and make it difficult to establish correspondence between object locations before and after the viewpoint change. The background in a 3D scene can act as a static and visually distinct spatial reference to solve this correspondence problem. If the presence of a static background turns out to be beneficial, this would demonstrate that static reference objects are used to locate targets in MOT when the configuration of dynamic objects provides insufficient information. We report three experiments employing abrupt viewpoint changes, in which a checkerboard floor plane and a wireframe floor plane improved performance compared to a display lacking any static background. This floor plane effect was found when viewpoint changes of 20° occurred while two targets were tracked and while a single target was tracked. In contrast, tracking 3, 4, or 6 targets showed no benefit from the presence of a floor plane.
We argue that targets are tracked as parts of a continuously changing configuration that provides spatial reference. Discontinuous changes create the need to use static reference objects for relocating targets. Our experiments have revealed narrow limits for relocating targets, which may generalize to other dynamic tasks in which observers move and targets are not continuously in view.
Acknowledgement: Deutsche Forschungsgemeinschaft (German Research Foundation) grants JA 1761/5-1 and HU 1510 4-1

56.425 Investigating virtual object structure in multiple object tracking
Nicole L. Jardine1 (n.jardine@vanderbilt.edu), Adriane E. Seiffert1; 1Department of Psychology, Vanderbilt University
Researchers use multiple object tracking to study how people attend to a set of identical, moving targets. Yantis (1992) showed that people conceive of multiple targets as parts of a virtual object (e.g. three vertices of a triangle), even when targets move independently. We investigated whether strengthening object structure by increasing the similarity of target motion would improve tracking performance. Observers tracked 4 of 12 identical dots moving in a box for 5.6 seconds in order to select targets from distractors at the end of the trial. To increase virtual object structure, we made targets maintain the form of a rigid polygon that rotated, translated, expanded and contracted during the trial. Distractors formed two other polygons that behaved similarly in the same space. We also tested the effect of object symmetry: the polygons were either symmetric shapes, such as diamonds, or specific asymmetric shapes, such as skewed trapezoids. These conditions were compared to tracking randomly moving objects. Motion condition significantly affected proportion correct, F(3,30) = 31.50, p
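Target motion of this kind, a rigid vertex set undergoing smooth random rotation, translation, expansion and contraction, can be generated along the following lines. This sketch is illustrative only; the motion parameters, and the 60 Hz frame-rate assumption behind n_frames=336 for a 5.6 s trial, are not taken from the study.

import numpy as np

def rigid_polygon_frames(vertices, n_frames=336, rng=np.random.default_rng(0)):
    """vertices: (k, 2) array. Returns (n_frames, k, 2) positions in which
    the polygon stays rigid while rotating, translating, and scaling."""
    pts = np.asarray(vertices, dtype=float)
    center = pts.mean(axis=0)
    angle, scale, shift = 0.0, 1.0, np.zeros(2)
    frames = []
    for _ in range(n_frames):
        angle += rng.normal(0.0, 0.02)                             # smooth rotation
        scale = np.clip(scale + rng.normal(0.0, 0.005), 0.7, 1.3)  # expand/contract
        shift += rng.normal(0.0, 0.05, size=2)                     # random-walk translation
        c, s = np.cos(angle), np.sin(angle)
        rot = np.array([[c, -s], [s, c]])
        frames.append((pts - center) @ rot.T * scale + center + shift)
    return np.stack(frames)

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]])
print(rigid_polygon_frames(square).shape)   # (336, 4, 2)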
Attention: Endogenous and exogenous
Orchid Ballroom, Boards 426–429
Tuesday, May 11, 2:45 - 6:45 pm

56.426 Effects of central cue reliabilities on discrimination accuracy and detection speed
Alex Close1 (psp815@bangor.ac.uk), Giovanni D'avossa1, Ayelet Sapir1, John Parkinson1; 1Wolfson Centre for Clinical and Cognitive Neuroscience, Bangor University, UK, and School of Psychology, Bangor University, UK
Partially valid central cues have been extensively used to study the effects of spatial attention on visual performance. However, the effect of cue reliability has not been examined in great detail. We assessed the effects of cue reliability in motion discrimination and speeded detection tasks. Four random-dot kinematograms (RDKs) were presented in the four visual quadrants at 10° eccentricity. Each RDK was contained within a circular aperture of 5° diameter. In the discrimination task, participants reported the direction of translational coherent motion. Coherent motion occurred in only one of the four apertures. Its likely location was indicated by a cue whose reliability varied across trials over 4 levels (25%, 60%, 75%, 86%). Discrimination accuracy was greater when motion stimuli were preceded by valid than by invalid cues. This effect was modulated by cue reliability: the greater the cue reliability, the greater the validity effect. Moreover, validity effects were also found in the 25% reliability condition, when no task-relevant spatial information was provided by the cue. In the detection task, the set-up was the same, except that participants had to report the onset of expanding motion as quickly as possible, rather than motion direction. Validity and reliability effects were also found in this paradigm. We compared performance when two quadrants were cued either by a pro-cue (pointing to the locations where coherent motion would appear) or by an anti-cue (pointing to locations where coherent motion would not appear). Anti-cues were associated with poorer performance than pro-cues. On the other hand, discrimination performance when one or two quadrants were cued, by cues which provided the same amount of spatial information, i.e. one bit, was virtually identical. We conclude that visual performance following central cues reflects both the utility of cued information as well as automatic processes.
Acknowledgement: A. Close is supported by an ESRC studentship

56.427 Revealing the space in symbolically-controlled spatial attention
Alexis Thompson1 (thoalexi@gmail.com), Bradley Gibson1; 1University of Notre Dame
This study investigated the nature of the spatial computations that underlie the symbolic control of visual attention. At issue is the specificity of the spatial information that is conveyed by directional symbols such as spatial words (ABOVE, BELOW, LEFT and RIGHT) and corresponding arrows. Consider a right-pointing arrow that appears at fixation and is intended to direct attention to a location in the periphery. Abundant evidence has been interpreted to suggest that such cues can direct attention to specific locations, based on findings showing that RTs are faster when a subsequent target appears at a location in the cued direction relative to when it appears at an uncued location in the opposite or orthogonal direction. However, previous studies have typically presented targets at only a single location along each cued direction, making it difficult to ascertain the specificity of the spatial information that was conveyed by the cue.
Accordingly, the present study presented targets at one of three possible locations in each direction. Three different spatial cues were compared: word cues, arrow cues, and onset cues. The target always appeared at one of the three locations along the cued direction, with the constraint that it appeared at one of the three locations 80% of the time and at the other two locations 20% of the time. Observers were informed of these contingencies and they were instructed to shift their attention to the most probable location. The specificity of the spatial information conveyed by the three cues was estimated by the magnitude of the cuing effect (uncued RT - cued RT) observed along each cued direction. The results suggested that the word cues and arrow cues conveyed less specific spatial information than the onset cues, thus revealing weaknesses in the computation of metric spatial information necessary for directing attention to specific locations.

56.428 The influence of goal-directed attention on unattended stimulus-driven responses
David Bridwell1 (dbridwel@uci.edu), Sam Thorpe1, Ramesh Srinivasan1; 1Department of Cognitive Sciences, Center for Cognitive Neuroscience, University of California, Irvine
Attention is influenced by internal goals and external stimulus-driven events. In the following experiment we investigate how peripheral goal-directed attention to one visual field modulates the steady-state visual evoked potential (SSVEP) to a flickering noise patch in the opposite visual field. In the experiment, goal-directed attention is shaped by conditions that emphasize 1) enhancement of external noise at the attended location (by detecting changes in external noise contrast); 2) suppression of external noise at the attended location (by detecting a Gabor patch within high external noise); and 3) neither suppression nor enhancement of external noise at the attended location (by detecting a Gabor patch within low external noise). SSVEP responses to the unattended noise flicker allow us to examine how suppression and enhancement of external noise at the attended location may modulate the response to external noise at an unattended location. SSVEP responses to the flickering (f2 = 8 Hz) unattended noise were measured across 40-second trials and Fourier analyzed. We found a patch of occipitoparietal electrodes contralateral to the attended location (ipsilateral to the flicker) whose responses were significantly larger when individuals detected changes in noise contrast at the attended location (condition 1 vs. 2). During Gabor detection within high and low noise (condition 2 vs. 3) we found no significant difference in occipitoparietal responses. These results indicate that 8 Hz occipitoparietal responses to an unattended noise flicker increase when top-down goals match features of the unattended flicker. These occipitoparietal responses do not decrease when top-down goals promote suppression of features at the attended location.
The results are consistent with findings suggesting that peripheral attention suppresses external noise primarily at the attended location (Lu, Lesmes, & Dosher, 2002, Journal of Vision) while feature enhancement extends to unattended locations.
Acknowledgement: Supported by NIH grant 2 R01 MH68004

56.429 The D2 dopamine receptor agonist bromocriptine enhances voluntary but not involuntary spatial attention in humans
William Prinzmetal1 (wprinz@berkeley.edu), Ariel Rokem2, Ayelet Landau1, Deanna Wallace2, Michael Silver2,3, Mark D'Esposito1,2; 1Psychology Department, University of California, Berkeley, 2Helen Wills Neuroscience Institute, University of California, Berkeley, 3School of Optometry, University of California, Berkeley
The neurotransmitter dopamine has been implicated in cognitive control and working memory. Specifically, the D2 dopamine receptor agonist bromocriptine has been demonstrated to affect task switching and resistance to distraction in visual working memory. Here, we systematically manipulated spatial attention in a cueing paradigm and assessed the effects of bromocriptine administration on behavioral performance. Subjects performed a visual discrimination task in which they reported the tilt of a target grating that appeared in one of four locations. Each trial began with the presentation of a cue in one of the four locations. Voluntary and involuntary attention were separately assessed by manipulating cue probability and cue-to-target stimulus onset asynchrony (SOA). Some blocks were anti-predictive: for 20% of the trials, the target appeared in the cued location, and in the remaining 80% of trials, the target appeared in the location diametrically opposite the cue. In other blocks, the cue was not predictive of target location (the target appeared in the cued or other locations with equal probability). In addition, there were two SOAs: long (600 msec) or short (40 msec). In the anti-predictive blocks, for long SOA trials, allocation of voluntary attention to the expected (opposite) target location resulted in shorter response times (RTs). When the SOA was short, the involuntary capture of attention resulted in the opposite pattern: RTs were significantly shorter when the target appeared in the same location as the cue. When the cue was nonpredictive, only involuntary attention effects were observed. Bromocriptine was administered in a double-blind, placebo-controlled, crossover design. We found that bromocriptine enhanced the effect of spatial cueing (the difference in RT for targets appearing at the opposite versus the cued location) for long SOA but not short SOA blocks, and only in the anti-predictive cue condition. This result demonstrates dopaminergic modulation of voluntary but not involuntary spatial attention.
Acknowledgement: Funding support provided by NIH grants DA20600 to Mark D'Esposito and EY17926 to Michael Silver.
Perceptual organization: Contours and 2D form
Orchid Ballroom, Boards 430–444
Tuesday, May 11, 2:45 - 6:45 pm

56.430 A Computational Mid-level Vision Approach for Shape-Specific Saliency Detection
Cristobal Curio1 (cristobal.curio@tuebingen.mpg.de), David Engel1; 1Max Planck Institute for Biological Cybernetics
We present a novel computational approach to visual saliency detection in dynamic natural scenes based on shape-centered image features. Mid-level features, such as medial features, have been recognized as important entities in both human object recognition and computational vision systems [Tarr & Buelthoff 1998, Kimia 2003]. [Kienzle et al 2009] have shown how image-driven gaze predictors can be learned from fixations during free viewing of static natural images, resulting in center-surround receptive fields. Method: Our novel shape-centered vision framework provides a measure of visual saliency and is learning-free. It is based on the estimation of singularities of long-ranging gradient vector flow (GVF) fields, which were originally developed for the alignment of image contours [Xu & Prince 1998]. The GVF uses an optimization scheme to guarantee preservation of gradients at contours and, simultaneously, smoothness of the flow field. These specific properties are similar to filling-in processes in the human brain. Our method reveals the properties of medial-feature shape transforms and provides a mechanism to detect shape-specific information, local scale, and temporal change of scale, in clutter. The approach generates a graph which encodes the shape across a scale space for each image. Results: We have made medial-feature transforms amenable to work in cluttered environments and have demonstrated temporal stability, thus providing a mechanism to track shape over time. The approach can be used to model eye tracking data in dynamic scenes. A fast implementation will provide a useful tool for predicting shape-specific saliency at interactive frame rates.
Acknowledgement: This work was supported by the EU-Project BACS FP6-IST-027140 and the Deutsche Forschungs-Gemeinschaft (DFG) Perceptual Graphics project PAK 38.
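For readers unfamiliar with gradient vector flow, the Xu & Prince (1998) field can be computed by a simple iterative diffusion of the edge-map gradient. Below is a minimal NumPy sketch; the parameter values are illustrative, and this is a generic GVF implementation rather than the authors' shape-transform code. Singularities of the resulting field, points where the flow magnitude approaches zero, are candidates for medial (shape-centered) features.

import numpy as np

def laplacian(a):
    """Five-point Laplacian with wrap-around borders (adequate for a sketch)."""
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4.0 * a)

def gradient_vector_flow(edge_map, mu=0.2, dt=0.25, iters=400):
    """Iterate the GVF update: smooth the field where the edge map is flat,
    keep it anchored to the edge gradient where edges are strong."""
    fy, fx = np.gradient(edge_map)
    b = fx ** 2 + fy ** 2              # squared edge-gradient magnitude
    u, v = fx.copy(), fy.copy()
    for _ in range(iters):
        u += dt * (mu * laplacian(u) - b * (u - fx))
        v += dt * (mu * laplacian(v) - b * (v - fy))
    return u, v

# Toy edge map: a bright square on a dark background.
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0
u, v = gradient_vector_flow(img)
magnitude = np.hypot(u, v)
print("flow minimum (near the medial region):", magnitude[22:42, 22:42].min())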
56.431 Using classification images to reveal the critical features in global shape perception
Ilmari Kurki1,2 (ilmari.kurki@helsinki.fi), Aapo Hyvärinen2,1, Jussi Saarinen1; 1Department of Psychology, University of Helsinki, 2Department of Mathematics and Statistics, University of Helsinki
Radial frequency (RF) contours (sinusoidally modulated circles) have been used to investigate how visual information is integrated in global shape recognition. However, it is not clear (A) whether the integration is based on particular contour features (RF peaks or troughs; contour corners or sides) and (B) how similar the processing of different RF shapes is. Here, classification images (CIs), a psychophysical reverse-correlation technique, were used to estimate the parts of the RF pattern that are critical for shape discrimination (RF pattern versus circle). Stimuli were composed of difference-of-Gaussian patches (center sd = 5.6 arc min, n=32) in an RF contour (r=1.5 deg). Position noise (jitter) along the radial axis was added. The standard RF contour had zero modulation amplitude (a circle). The modulation amplitude of the test was adjusted to keep the proportion of correct detections at 75%. A one-interval shape discrimination task was used with a 4-point rating scale. CIs were computed from the position noise. Both four-cycle (RF4) and five-cycle (RF5) patterns were tested. The CIs show that both radial modulation peaks and troughs are used, suggesting that contour sides and corners are about equally weighted in shape recognition. The amplitude of the features across the contour length varies but is non-zero everywhere. This suggests that detection is largely but not purely a global process. In particular, detection of the RF5 is based more on the features at the top of the shape.
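The classification-image computation itself is simple: average the radial position noise separately by response category and take the difference. Here is a hedged sketch with binary responses for brevity (the study used a 4-point rating scale, which is typically handled by weighting the response categories); the simulated observer is a stand-in, not the authors' data.

import numpy as np

def classification_image(noise, responses):
    """noise: (n_trials, n_patches) radial jitter applied to each element;
    responses: 1 where the observer reported 'RF pattern', 0 for 'circle'.
    Elements with large CI weights drove the observer's decisions."""
    noise = np.asarray(noise, dtype=float)
    responses = np.asarray(responses)
    return (noise[responses == 1].mean(axis=0)
            - noise[responses == 0].mean(axis=0))

# Simulated observer weighting one part of the contour most heavily.
rng = np.random.default_rng(0)
n_trials, n_patches = 5000, 32
noise = rng.standard_normal((n_trials, n_patches))
template = np.cos(np.linspace(0.0, 2.0 * np.pi, n_patches, endpoint=False))
responses = (noise @ template + rng.standard_normal(n_trials) > 0).astype(int)
ci = classification_image(noise, responses)
print("correlation of CI with template:", round(float(np.corrcoef(ci, template)[0, 1]), 2))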
56.432 Boundary information and filling-in in afterimage perception
Jeroen J.A. van Boxtel1 (j.j.a.vanboxtel@gmail.com), Christof Koch1,2; 1Division of Biology, California Institute of Technology, 2Brain and Cognitive Engineering, Korea University, Seoul, Korea
Afterimages are the result of retinal bleaching and other neural adaptation processes. The mainstream idea is that afterimages disappear when the level of adaptation-modulated signal (the true afterimage) is reduced below a contrast-detection threshold (e.g. Leguire & Blake, 1982). Here we manipulated afterimage visibility by projecting a ring (i.e. a boundary) around the afterimage, without changing the visual stimulation at the position of the afterimage itself, and thus without directly affecting adaptation-modulated signals. We show that the duration of the afterimage is lengthened by about 50% with a ring that exactly encompasses the afterimage. Rings that are equal to or smaller than the afterimage in size increase afterimage duration relative to a condition without a ring, while boundaries larger than the afterimage decrease afterimage duration. We find furthermore that maximum modulation occurs for intermediate contrasts of the ring, making attentional capture (by large luminance changes) an unlikely cause of the effect. Finally, placing a ring around the position of an already faded afterimage revives the afterimage. Our data show that boundary signals (i.e. the ring) are crucial in the determination of afterimage perception. When boundary signals are present, the area within the boundary is filled in with features that are present just within its circumference, which is the true afterimage when the ring snugly fits the afterimage area. When the ring is larger than the afterimage, or absent, the filled-in feature is the gray background, which effectively shortens afterimage duration even though the true afterimage is still present. We suggest that an 'active' boundary and filling-in mechanism is involved in afterimage perception, similar to that proposed for peripheral fading and retinally stabilized images.
Acknowledgement: JvB is supported by a Rubicon grant from the Netherlands Organisation for Scientific Research

56.433 Temporal dynamics of contour and surface processing of texture-defined second-order stimuli
Evelina Tapia1 (etapia@uh.edu), Bruno G. Breitmeyer1,2, Jane Jacob1; 1Department of Psychology, University of Houston, Houston TX 77204-5022, 2Center for Neuro-Engineering and Cognitive Sciences, University of Houston, Houston TX 77204-4005
Psychophysical and neurophysiological experiments have demonstrated that at the implicit (i.e. nonconscious) level, first-order luminance-defined contours are processed on average 30-60 ms before surface information. Here, figure-ground segmentation processes establish contours that are later filled in with surface (e.g. wavelength and brightness) information. The present work, using a metacontrast masking paradigm, examines whether the same sequence of processing also characterizes extraction of second-order features. On the one hand, first- and second-order contour and surface features might be processed in an analogous succession: form followed by surface details. On the other hand, a preliminary segmentation based on differences between foreground and background surface elements might be required before texture-defined second-order contours can be fully established. In this study, texture-defined second-order target stimuli were followed, at varying stimulus onset asynchronies (SOAs), by a surrounding but spatially non-overlapping texture-defined second-order mask. First, we established that a typical U-shaped metacontrast function, obtained with first-order stimuli, can also be obtained with these stimuli. Secondly, in contour-discrimination tasks the subjects identified the shapes of the target stimulus, while in surface-discrimination tasks the subjects identified the surface texture elements of the targets. The results indicate that the suppression of texture-defined second-order contour visibility occurs at an earlier SOA than the suppression of texture-defined second-order surface information. These findings are similar to those reported with first-order stimuli and indicate that texture-defined second-order contours are processed before texture-defined second-order surfaces. These findings bear on theories of metacontrast and place constraints on models proposing that first- and second-order features are processed by separate neural circuits.

56.434 Mind the Gap: The Effect of Support Ratio and Retinal Size on Contour Interpolation
Mohini N. Patel1 (patelm29@mcmaster.ca), Bat-Sheva Hadad1, Daphne Maurer1, Terri L. Lewis1; 1Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Canada
Adults see bounded figures even when local image information fails to specify the contours, such as in cases of partial occlusion and illusory contours. Here, we examined the effects of support ratio (the ratio of the physically specified contour to the total edge length) and absolute size on interpolation strength. In Experiment 1, adults (n = 24) discriminated fat from skinny shapes formed by real contours, partially occluded contours, or illusory contours. Across conditions, support ratio and absolute size were varied.
We formed fat and skinny shapes by rotating the corners of the shape (Ringach & Shapley, 1996). In a 3-down, 1-up staircase procedure, the angle of rotation of the corners increased or decreased over trials, producing various curvatures of the shape. The strength of interpolation was measured by the smallest angle of rotation of the corners for which the shape was discriminated accurately as fat or skinny. Interpolation was better for higher support ratios (p
about spatiotemporal integration that involves storing discrete image components in short-term memory and binding them across space and time. Our psychophysical work showed that collinear contours are reliably detected and identified when moving behind a narrow slit, suggesting that spatiotemporal contour integration may not depend on the V1 horizontal connections that are thought to underlie spatial contour integration. In the current study we used fMRI to investigate the neural mechanisms that mediate spatiotemporal contour integration under slit-viewing. The stimuli comprised arrays of randomly orientated Gabor patches, either with (contour stimuli) or without (random stimuli) embedded contour paths. The contour stimuli contained five embedded contours, each comprising 4-9 collinear Gabor elements. The global orientation of the contours was jittered between 30°-45° or 135°-150°. In a blocked design, the contour or random stimuli moved behind a slit whose width (0.75°) was equal to the average inter-element distance. As a result, at any given moment only one or parts of two contour elements could be seen through the slit. Within a block of trials a small proportion (12.5%) of stimuli contained vertical contours as targets. Observers were instructed to detect the target stimuli. A GLM analysis showed stronger fMRI responses to contour stimuli than random stimuli in higher dorsal visual areas (V7, V3B/KO, VIPS) and in the LOC, suggesting that spatiotemporal integration involves interactions between ventral and dorsal visual areas. We hypothesize that the dorsal areas may contribute to binding moving contour elements across space and time, and that the ventral regions may generate the integrated percept of the global contour.
Acknowledgement: This work was supported by grants from the Biotechnology and Biological Sciences Research Council to ZK [D52199X, E027436]

56.440 An improved model for contour completion in V1 using learned feature correlation statistics
Vadas Gintautas1 (vadasg@lanl.gov), Benjamin Kunsberg2, Michael Ham1, Shawn Barr3, Steven Zucker2, Steven Brumby1, Luis M A Bettencourt1, Garrett T Kenyon1,3; 1Los Alamos National Laboratory, 2Yale University, 3New Mexico Consortium
How to establish standards for comparing human and cortically-inspired computer model performance in visual tasks remains largely an open question. Existing standard image classification datasets have several critical shortcomings: 1) limitations in image resolution and number of images during set creation; 2) reference to semantic knowledge, such as the definition of "animal"; and 3) non-parametric complexity or difficulty. To address these shortcomings, we developed a new synthetic dataset consisting of line segments that can form closed contours in 2D ("amoebas"). An "amoeba" is a deformed, segmented circle in which the radius varies with polar angle. Small gaps between segments are preserved so that the contour is not strictly closed. To create a distractor "no-amoeba" image, an amoeba image is divided into boxes of random size, which are rotated through random angles so that their continuity no longer forms a smooth closed object. Randomly superimposed no-amoeba images serve as background clutter.
This dataset is not limited in size, relies on no explicit outside knowledge, has tunable parameters so that the difficulty can be varied, and lends itself naturally to a binary object classification task ("amoeba/no-amoeba") designed to be pop-out for humans. We show that humans display high accuracy (>90%) on this task in psychophysics experiments, even at a short stimulus onset asynchrony of 50 ms. Existing feed-forward computer vision models such as HMAX perform close to chance (50-60%). We present a model for V1 lateral interactions that is biologically motivated and significantly improves performance. The model uses relaxation labeling, where support between edge receptors is based on statistics of pairwise correlations learned from coherent objects, but not incoherent segment noise. We compare the effectiveness of this approach to existing computer vision models as well as to human psychophysics performance, and explore the applicability of this approach to contour completion in natural images.
Acknowledgement: NSF, Los Alamos LDRD-DR
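A contour of the kind described, with radius varying smoothly with polar angle and small gaps between segments, can be generated in a few lines. This sketch follows the verbal description only; the harmonic count, deformation range, and gap layout are assumed parameters, and distractor construction (rotating random boxes) is omitted for brevity.

import numpy as np

def amoeba_contour(n_points=512, n_harmonics=4, n_segments=8, gap_frac=0.1,
                   rng=np.random.default_rng(1)):
    """Return (x, y) points on a deformed, segmented circle ('amoeba')."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    radius = np.ones_like(theta)
    for k in range(1, n_harmonics + 1):       # smooth radial deformation
        radius += rng.uniform(0.0, 0.3) * np.cos(k * theta + rng.uniform(0.0, 2.0 * np.pi))
    radius = np.clip(radius, 0.2, None)       # keep the contour away from the origin
    seg = 2.0 * np.pi / n_segments
    keep = (theta % seg) > gap_frac * seg     # carve a small gap in each segment
    return radius[keep] * np.cos(theta[keep]), radius[keep] * np.sin(theta[keep])

x, y = amoeba_contour()
print(len(x), "contour points")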
56.441 Why do certain spatial after-effects increase with eccentricity?
Elena Gheorghiu1 (elena.gheorghiu@psy.kuleuven.be), Frederick A. A. Kingdom2, Jason Bell2, Rick Gurnsey3; 1Laboratory of Experimental Psychology, University of Leuven, Tiensestraat 102, B-3000, Leuven, Belgium, 2McGill Vision Research, Department of Ophthalmology, McGill University, 687 Pine Avenue W., Montreal H3A 1A1, Quebec, Canada, 3Department of Psychology, Concordia University, 7141 Sherbrooke Street W., Montreal H4B 1R6, Quebec, Canada
Aim. The shape-frequency and shape-amplitude after-effects (SFAE and SAAE) describe the shifts in perceived shape-frequency/shape-amplitude of sinusoidal-shaped contours following adaptation to contours with slightly different shape-frequencies/shape-amplitudes. When measured using pairs of adaptors/tests positioned above and below fixation, both after-effects increase with eccentricity. Why? We have considered the following explanations: (i) scaling (magnification) of contour-shape receptive fields with eccentricity; (ii) reduced spatial interactions between the contour pairs when presented peripherally; (iii) less rapid decline of adaptation at test onset in the periphery; (iv) greater positional uncertainty in the periphery. Methods. We measured SFAEs and SAAEs as a function of eccentricity using a staircase procedure. At each eccentricity, we varied stimulus scale, the spatial separation between the contour pairs, and the time interval between adaptor offset and test onset. We also compared shape-frequency/shape-amplitude discrimination thresholds between center and periphery. Results. We found: (i) similar-size after-effects for all scales at each eccentricity; (ii) only a small increase (~10%) in the after-effects with increased spatial separation between the pair of contours when eccentricity was held constant; (iii) similar temporal rates of decline of adaptation for center and periphery; and (iv) comparable center-to-periphery ratios for shape discrimination thresholds and shape after-effects. Taken together, the results are inconsistent with all the above explanations except positional uncertainty. Conclusion. The increase in the SFAE and SAAE with eccentricity is best explained by increased positional uncertainty in the periphery.

56.442 The Effects of Closure on Contour Shape Learning
Patrick Garrigan1 (pgarriga@sju.edu), Livia Fortunato1, Ashley LaSala1; 1Department of Psychology, Saint Joseph's University
Shape information is fundamental to object recognition. Objects can be recognized from shape in the absence of color, relative size, or contextual information. To support object recognition from shape, a large number of distinguishable shape representations must be learned and stored in memory. In the 4 experiments presented here, participants learned to recognize 16 novel 2D contour shapes over the course of 9 alternating training and test sessions. In the first experiment (Exp. 1), half of the subjects learned to recognize closed shapes and half learned to recognize open shapes of equivalent complexity. We tested how the presence or absence of closure affects 2D contour shape learning. Our results show that closed shapes are easier to learn to recognize than open shapes. We then show that the benefit for recognition is due to better encoding of closed shapes (Exp. 2) and not due to easier comparison between closed shapes and their representations in memory (Exp. 3). Finally, we show that potential closure (contours that appear likely to close behind an occluder) does not lead to better recognition performance. In fact, open shapes are more easily recognized than equivalent, occluded shapes (Exp. 4). Together, these experiments suggest that closed shapes are, in general, more easily or more effectively encoded. However, for the benefits of closure to be realized, a shape must be geometrically closed (the visible contour must close), not just perceptually closed (behind an occluder).

56.443 Kanizsa illusory contour perception in children: a novel approach using eye-tracking
Kimberly Feltner1 (kim.feltner@hotmail.com), Kritika Nayar1, Karen E. Adolph2, Lynne Kiorpes1; 1Center for Neural Science, New York University, 2Psychology, New York University
The time course for the development of sensitivity to Kanizsa illusory contours (KICs) is unclear. Some previous studies have shown that the ability to perceive Kanizsa illusory forms is present near birth, while others suggest that KIC perception develops around age 5 years or later. The variability observed in the literature may in part be due to limitations inherent in
testing infants or young children. We are using a novel approach to this problem, combining a match-to-sample (MTS) test paradigm with eye tracking.
Sixteen participants were recruited into 4 age groups (3-4, 5-6, 7-9 yrs, adults). Participants were seated in front of a touch-sensitive video display and equipped with a head-mounted eye-tracker. Each participant was instructed on the match-to-sample concept, and then completed 2 practice runs consisting of a shape or orientation discrimination task using real, complete forms (10 trials each). They were then tested with 6 different real shapes as sample stimuli and KICs as matching comparison stimuli (40 trials).
Only one of four 3-4 year-olds tested performed above 80% correct (criterion) on both practice runs and the KIC discrimination. All older participants passed criterion on the practice runs, while performance with KICs was 77% (5-6), 96% (7-9), and 100% correct (adult). Interestingly, location of gaze and locus of touch (response) depended on age and stimulus type. With real forms, final gaze position and touch location were directed at the image center regardless of age. However, with KICs, children under 6 primarily looked at and touched individual contour-inducing elements (pacmen), whereas older children and adults looked at and touched the center of the KIC. These results show clear developmental changes in the ability to appreciate illusory forms, and a modification of processing strategy with age and stimulus type.
Acknowledgement: James S. McDonnell Foundation Scholar Award to Lynne Kiorpes

56.444 Switching Percepts of Ambiguous Figures: Specific skills or a General Ability?
Aysu Suben 1 (aysu.suben@yale.edu), Brian J. Scholl 1; 1 Perception & Cognition Lab, Dept. of Psychology, Yale University
The study of ambiguous figures emphasizes that perception involves not only the incoming visual stimulation, but also internal mental processes. But which ones? Previous research has explored both the temporal dynamics of switching rates and the degree to which they can be influenced by intentional control. Moreover, this research has explored both individual differences and several different ambiguous figures. Previous work has rarely put these pieces together, though: Is intentional perceptual switching mediated by a general ability, or a set of skills specific to individual figures? People clearly differ in their ability to intentionally switch percepts, but it is not clear whether the good switchers for different figures are the same people. To find out, we measured the time it took for observers to intentionally switch percepts in response to haphazardly-timed tones, using many ambiguous figures, including both static images (e.g. bistable geometric figures, silhouettes, and semantically interpretable figures such as the duck/rabbit) and dynamic animations (e.g. the Ternus apparent-motion configuration, and bistable structure-from-motion displays). Neither of the extreme possibilities was supported by the data: perceptual switching was neither a completely uniform ability, nor a completely unconnected set of figure-specific skills. Rather, perceptual switching seems to be controlled by a set of specific visual abilities. For example, intentional switching latency was highly inter-correlated for those ambiguous figures that primarily involved depth reversals. At the same time, several other plausible factors (such as whether a figure was dynamic, or whether the ambiguity involved a semantic difference) played no systematic role in mediating perceptual switching. However, strong correlations for specific figure pairs (e.g. between Ternus motion and the duck/rabbit) suggested the existence of previously unsuspected factors. We conclude that volition is expressed in perception not as a monolithic cognitive ability, but rather as a set of particular visual skills.
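
The individual-differences logic of the abstract above reduces to a correlation matrix. A minimal illustration, with simulated numbers standing in for the observers' per-figure mean switch latencies:

    import numpy as np

    rng = np.random.default_rng(0)
    # rows = observers, columns = ambiguous figures (simulated latencies, s)
    latencies = rng.normal(2.0, 0.5, size=(20, 6))

    # figure-by-figure correlations across observers: one general switching
    # ability predicts uniformly high off-diagonal values; purely
    # figure-specific skills predict values near zero
    corr = np.corrcoef(latencies, rowvar=False)
    print(np.round(corr, 2))
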
3D perception: Distance and size
Orchid Ballroom, Boards 445–451
Tuesday, May 11, 2:45 - 6:45 pm

56.445 A surprising influence of retinal size on disparity-defined distance judgments
Arthur Lugtigheid 1 (ajl786@bham.ac.uk), Andrew Welchman 1; 1 School of Psychology, University of Birmingham, UK
From simple geometry, the retinal projection of an object in the environment depends on both its size and its distance from an observer. Thus, given a sensed retinal size, the brain should not know the distance of the object: the retinal measurement is compatible with infinite combinations of physical sizes and distances. A number of previous studies have demonstrated that, in the absence of visual cues to distance, an object with a larger retinal image is perceived as closer than an object with a smaller retinal size. However, when binocular disparity information is available, observers should be able to judge the distance between two objects accurately. Here we report the seemingly surprising result that retinal size influences observers’ judgments of disparity-defined distance. Our stimuli consisted of large and small discs surrounded by a peripheral reference volume of textured cubes that provided a continuous reference frame to support reliable disparity estimates. Observers judged which of two sequentially-presented stimuli of different retinal sizes was closer to them. Disparity-defined depth was varied parametrically to measure psychometric functions. Our results showed a shift in the PSE of around 5 cm, such that large objects were seen as closer than small objects when disparity-defined distance was the same. In contrast, there was no bias when two objects of equal size were presented. Varying the ratio of object sizes and testing objects placed at different distances revealed that the bias increased as (i) the viewing distance increased and (ii) the ratio of the object sizes increased. We propose that the retinal size of an object is probabilistically related to its distance in the environment and that this information is combined with disparity when making judgments of distance.
Acknowledgement: BBSRC, UK
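
The PSE shift reported above comes from fitting psychometric functions. A minimal sketch of one standard approach, a cumulative-Gaussian fit; the data points below are illustrative, not the authors' data:

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    def p_closer(x, pse, sigma):
        # probability of judging the test disc closer, as a cumulative Gaussian
        return norm.cdf(x, loc=pse, scale=sigma)

    x = np.array([-10., -5., -2., 0., 2., 5., 10.])  # relative distance (cm)
    prop = np.array([0.05, 0.20, 0.35, 0.50, 0.70, 0.90, 0.97])  # illustrative

    (pse, sigma), _ = curve_fit(p_closer, x, prop, p0=(0.0, 3.0))
    print(f"PSE = {pse:.1f} cm, sigma = {sigma:.1f} cm")
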
56.446 The intrinsic bias influences the size-distance relationship in the dark
Liu Zhou 1,2, Zijiang J. He 1, Teng Leng Ooi 3; 1 Department of Psychological and Brain Sciences, University of Louisville, 2 Institute of Cognitive Neuroscience, East China Normal University, 3 Department of Basic Sciences, Pennsylvania College of Optometry at Salus University
A dimly-lit target in an otherwise dark environment is perceived as located at the intersection between its projection line from the eye and an implicit slant surface (intrinsic bias) (Ooi et al., Nature 2001). To investigate whether the intrinsic bias affects the size-distance relationship, observers used a perceptual matching task to report the perceived size of a dimly-lit target (0.75 deg) at multiple locations (1.5-6.75 m, and 0 or 0.5 m above the floor). Based on the matched metric size and the physical angular target size, we derived the perceptual target location (perceived target direction is veridical in darkness). We found that the derived perceptual target locations form the profile of a slanted surface, resembling the intrinsic bias. This indicates the intrinsic bias supports size perception in the dark. We then used a blind-walking-gesturing size-estimation task to measure the judged target location and judged target size. In this task, observers walked blindly to traverse the perceived target distance, gestured its perceived height and indicated its size. From the indicated target sizes we derived the perceptual target distances, and compared these with the measured distances. We found a reliable correlation between the two distances, suggesting the intrinsic bias is responsible for both perceived size and distance in the dark. Finally, we investigated the effect of knowledge of target size on judged location. Using the blind-walking-gesturing task, we measured observers’ judged location of the dimly-lit target in the dark when they either had, or had no, knowledge of the physical target size. We found that knowledge of the physical target size (2.5 cm) improves the accuracy of the judged target locations. Altogether our findings reveal that in the reduced-cue environment where the intrinsic bias dictates our perceptual space, there exists a lawful relationship between perceived size and distance, which reflects the uniqueness of our perceptual world.
Acknowledgement: NIH (R01 EY014821)

56.447 The importance of a visual horizon for distance judgments under severely degraded vision
Kristina Rand 1 (kristina.rand@utah.edu), Margaret Tarampi 1, Sarah Creem-Regehr 1, William Thompson 2; 1 Psychology, University of Utah, 2 Computer Science, University of Utah
Critical for understanding mobility in low vision is knowledge of one’s ability to judge an object’s location in the environment under low vision conditions. A recent study investigating distance judgments under severely degraded vision yielded surprising accuracy in a blind walking task (Tarampi et al., 2009). It is suggested in the current study that participants may have had access to certain visual context cues despite the low vision manipulation. Specifically, the angle of declination from the visually defined horizon to the target on the ground may be used to determine distance. To test this hypothesis, normally sighted individuals wore goggles that severely reduced acuity and contrast. All participants monocularly viewed targets which were placed on the ground in a large room at distances of 3, 4.5, or 6 meters. Targets were equated for visual angle, and viewed through a viewing box that restricted the horizontal and vertical field of view in order to occlude the information provided by the sidewalls and ceiling. Participants were instructed to walk to the target location without vision. In one condition, the context in which participants viewed target objects was manipulated such that the perceived intersection of the floor and back wall appeared raised. If the visual frame of reference for the horizontal is used, this manipulation increases the angle of declination to the target, and should lead to an underestimation of distance compared to a condition where the floor is not changed. Preliminary results suggest individuals in the raised floor context condition show a decrease in distance estimates in a blind walking task relative to the control condition. These results support the account that reliance on the visually defined horizon may have contributed, in part, to accuracy in previous blind walking performance under severely degraded vision. Subsequent manipulations will be performed to further explore this hypothesis.
Acknowledgement: This work was supported by NIH grant 1 R01 EY017835-01
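
The angle-of-declination cue invoked above follows from simple geometry: a target on the ground viewed at angle a below the horizon lies at distance d = h / tan(a) for eye height h, so raising the apparent horizon (and hence the declination) shortens the derived distance. A minimal sketch with an assumed eye height:

    import math

    def distance_from_declination(eye_height_m, declination_deg):
        # target on the ground, angle a below the horizon: d = h / tan(a)
        return eye_height_m / math.tan(math.radians(declination_deg))

    h = 1.6                                        # assumed eye height (m)
    print(distance_from_declination(h, 14.9))      # ~6.0 m
    # raising the apparent floor/back-wall intersection increases the
    # declination to the same target, so it should look nearer:
    print(distance_from_declination(h, 18.0))      # ~4.9 m
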
56.448 Ground surface advantage in exocentric distance judgment
Zheng Bian 1 (bianz@ucr.edu), George Andersen 1; 1 University of California, Riverside
Previous studies have shown a ground surface advantage in the organization of 3-D scenes (Bian et al., 2005, 2006). In the current study, we examined whether there was a ground surface advantage in exocentric distance judgments. Observers were presented with displays simulating either a ground plane or a ceiling surface with random black and white checkerboard texture. In Experiment 1, there were three vertical red poles standing on the ground plane or attached to the ceiling surface. The three poles formed an inverted L-shape, so that poles 1 and 2 were separated in depth and poles 2 and 3 were separated horizontally. The observers’ task was to use a joystick to adjust the distance between the two poles that were separated horizontally to match the perceived distance between the two poles separated in depth. In addition to the surface presented, we also manipulated motion parallax, the distance of the L-shape, and the size of the L-shape. In Experiment 2, there were 3 horizontal red poles lying on the ground plane or attached to the ceiling surface. The three poles were parallel to each other and separated in depth. The task of the observer was to bisect the distance between the front pole and the back pole, that is, to adjust the distance between the front pole and the middle pole so that it matched the distance between the middle pole and the back pole. In both experiments we found that judged depth on the ground plane was larger than that on the ceiling surface, suggesting less compression of space on a ground surface as compared to a ceiling surface.
Acknowledgement: NIH AG031941 and EY18334-01
56.449 Perception of Distance in the Most Fleeting of Glimpses
Daniel Gajewski 1 (gajewsk1@gwu.edu), John Philbeck 1; 1 Department of Psychology, George Washington University
Humans can walk without vision to previewed targets (floor-level, 3-5 m distant) without large systematic error and with near perfect sensitivity to target distance, even when targets are glimpsed for as little as 9 ms. To determine whether performance at brief viewing durations is controlled by perceived distance, versus nonperceptual strategies (e.g., inferential reasoning), we compared blind walking to verbal distance estimates and gestured size estimates (9- and 113-ms viewing durations). In Experiment 1, blind walking and verbal reports showed equivalent sensitivity to distance. While there was greater underestimation in verbal reports (-27%) than in blind walking (-17%), the pattern remained constant across viewing durations, despite very different functional requirements across tasks. This suggests that both responses are controlled by the same variable, ostensibly perceived distance. In Experiment 2, targets varied in size and distance, and the required response (blind walking or size gesture) was not revealed until after the glimpse. We assumed that participants would thus be unlikely to use inferential strategies for both responses on each trial. There were large differences in bias: distance was underestimated (-18%) and size was overestimated (47%). The magnitude of bias was unaffected by viewing duration in the blind walking task, but the bias towards overestimation in size judgments was reduced with extended viewing. Nevertheless, participants were highly sensitive to changes in target size, even though the visual angle remained constant across distances. Furthermore, sensitivity to size changes was statistically equivalent to sensitivity to distance changes, suggesting a constant ratio of indicated size to indicated distance when visual angle remained fixed, in accordance with Emmert’s law. This linkage strongly suggests that perceived distance indeed varies during brief glimpses and likely controls the responses tested here. The overall pattern of results provides converging evidence for the idea that distance is perceived even in the most fleeting of glimpses.
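
The Emmert's-law relationship invoked above is plain trigonometry: for a fixed visual angle theta, the physical size consistent with that angle is S = 2 d tan(theta/2), so the ratio S/d is constant across distances. A small illustration (all values assumed):

    import math

    def physical_size(distance_m, visual_angle_deg):
        # S = 2 d tan(theta / 2): size consistent with a given visual angle
        return 2.0 * distance_m * math.tan(math.radians(visual_angle_deg) / 2.0)

    theta = 4.0                       # visual angle held constant (deg, assumed)
    for d in (3.0, 4.0, 5.0):         # previewed target distances (m)
        s = physical_size(d, theta)
        print(f"d = {d} m: S = {s:.2f} m, S/d = {s/d:.3f}")  # S/d is constant
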
56.450 Tool use affects perceived shape: An indirect measure of perceived distance
Jessica Witt 1 (jkwitt@purdue.edu); 1 Department of Psychological Sciences, Purdue University
When targets are presented just beyond arm’s reach, the targets look closer when observers intend to reach to them with a tool, and are thus able to reach them, than when they reach without the tool. This finding is just one of several examples that a person’s ability to act influences perceived distance to objects. In nearly all previous experiments, the dependent measure was a direct measure of perceived distance, such as verbal reports or visual matching tasks. As such, some critics have argued that the previous results are simply due to response bias. According to their account, the target does not actually look closer, but participants report that it is. To counteract this argument, the current experiments used an indirect measure of perceived distance, specifically perceived shape. In several experiments, the target constituted one corner of a triangle. Participants made judgments about the shape of the triangle, then reached to the target with a tool, their hand, or a laser pointer. The results indicate that participants who reached with the tool perceived the shape of the triangle to be shorter. These results suggest that the ability to reach an object changes the perceived distance to the object, and are not due simply to response bias.

56.451 Arousal and imbalance influence size perception
Michael Geuss 1 (michaelgeuss@gmail.com), Jeanine Stefanucci 1, Justin de Benedictis-Kessner 2, Nicholas Stevens 2; 1 University of Utah, 2 College of William and Mary
Previous research has demonstrated that manipulating vision can influence balance (Edwards, 1946). Here, we assess the influence of manipulating balance on size perception. In Experiment 1, participants visually matched the widths of beams when balanced and when unbalanced by standing on a balance board. When unbalanced, participants estimated the widths to be thinner than when balanced. Experiments 2-5 tested possible mechanisms of this effect. In Experiment 2, participants did not estimate the width of the beam differently when viewing the board for a limited amount of time, suggesting that the effect when unbalanced was probably not due to reduced attention to the beam. Experiment 3 tested another hypothesis: that imbalance increases arousal, which affects perception. Participants’ arousal level was increased by jogging in place. They estimated the board as thinner when they jogged as compared to when they were balanced. However, when participants were jogging or unbalanced they may have experienced greater movement in the visual scene. In Experiment 4, we raised participants’ level of arousal without having them move, by asking them to count backward by 7s. When participants counted backward, they estimated the width of the beam as thinner than when not counting. In all conditions that produced an effect (unbalanced, jogging, counting by 7s), participants were aroused, but also performed two tasks simultaneously. In the final experiment, participants viewed arousing pictures before estimating widths. In this case, arousal was increased, but a dual-task paradigm was not employed. Again, participants estimated the width of the beams as smaller after viewing arousing images. Overall, the observed effects on size perception seem to be due to the higher levels of arousal that may be experienced when unbalanced.
Acknowledgement: NIH RO1MH075781-01A2
Eye movements: Perisaccadic perception
Vista Ballroom, Boards 501–516
Tuesday, May 11, 2:45 - 6:45 pm

56.501 Perceptual grouping of contour elements survives saccades
Maarten Demeyer 1 (maarten.demeyer@psy.kuleuven.be), Peter De Graef 1, Karl Verfaillie 1, Johan Wagemans 1; 1 Department of Psychology, Katholieke Universiteit Leuven, Leuven, Belgium
Visual exploration of a scene relies on the frequent execution of saccadic eye movements. At the retinal level, this implies that the projection of the scene constantly undergoes large, rapid displacements. Yet the human perceptual experience is stable and continuous. A long-standing question is then whether visual object representations constructed before a saccade can be retained until after saccade landing, and if so, whether this transsaccadic representation is subsequently employed in postsaccadic processing of the same object. In the present study we show that this is indeed the case for image-abstracted yet detailed representations of visual form. Specifically, subjects glimpsed a closed contour defined by the perceptual grouping (by similarity) of spatially separated local elements in the periphery of the visual field, and made a saccadic eye movement towards it. After saccade landing, this preview information was observed to affect the perceptual grouping speed of a second closed contour in the same spatiotopic location, despite an intrasaccadic change to the grouping principle defining it (good continuation instead of similarity). This yielded a benefit for an identical preview and a cost for a different but well-defined preview contour, compared to a baseline condition in which only vaguely defined form information was contained within the preview display. In addition, it was found that the presaccadic presence of such a vaguely defined preview object by itself already decreased the speed of postsaccadic object contour grouping, relative to conditions in which unstructured preview displays were presented. We conclude that the visual system pools its local feature information across space, cues, and time into transsaccadically persistent object form representations, providing a robust basis for the integration of detailed shape information as well as perceptual continuity across saccades.
Acknowledgement: This research was supported by the Concerted Research Effort Convention GOA of the Research Fund K.U. Leuven (GOA/2005/03-TBA) granted to Géry d’Ydewalle, Karl Verfaillie, and Johan Wagemans, by the European Community through GazeCom project IST-C-033816 to Karl Verfaillie and Peter De Graef, and by a Methusalem grant to Johan Wagemans (METH/08/02).

56.502 The effect of perceptual grouping on perisaccadic spatial distortion
Jianliang Tong 1 (jtongopt@berkeley.edu), Zhi-Lei Zhang 1, Christopher Cantor 1, Clifton Schor 1; 1 School of Optometry, University of California, Berkeley
Purpose: Target flashes immediately before the onset of a saccade appear displaced in the direction of the saccade (perisaccadic spatial distortion). In this study we observed that the horizontal spatial distortions of flashes presented above the fovea, immediately before a horizontal saccade, increase with retinal eccentricity.
We also investigated perceptual interactions between spatial distortions produced by two brief (1 ms) flashes presented simultaneously at different retinal elevations from the fovea. Methods: In condition one, single or horizontally-aligned paired (synchronous) flashes were presented at combinations of three different elevations (1, 4 and 8 deg) above the fixation target before the onset of a horizontal saccade. In condition two, paired flashes were presented with misalignment in both the horizontal (4 deg) and vertical (1 deg) directions. Observers reported the perceived horizontal location of each flash in both conditions. Results: The amount of perceptual mislocalization increased with vertical eccentricity for single-flashed targets. In condition one, vertically-aligned pairs of simultaneous flashes presented at different elevations were distorted equally, by an amount approximately equal to the average of the single-flash distortions; in condition two, distortions of paired flashes preserved the same horizontal offset as presented in the retinal image. Paired perisaccadic distortions were the same for monoptic and dichoptic conditions. Conclusions: Our results suggest that perisaccadic spatial distortions resulting from flashes at different eccentricities undergo perceptual grouping associated with their simultaneous presentation. Grouping most likely occurs after the stage of binocular integration of distorted visual directions.

56.503 Updating for perception: An ERP study of post-saccadic perceptual localization
Jutta Peterburs 1 (jutta.peterburs@rub.de), Kathrin Gajda 2, Christian Bellebaum 1, Klaus-Peter Hoffmann 2, Irene Daum 1; 1 Department of Neuropsychology, Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr University Bochum, Germany, 2 Department of Neuroscience and Department of General Zoology and Neurobiology, Faculty of Biology and Biotechnology, Ruhr University Bochum, Germany
With every eye movement the retinal positions of objects in our environment change, and yet we perceive the world around us as stable. Efference copies of motor commands are used to update the retinal positions across saccades. Saccadic updating after two successive saccades (oculomotor updating) has been shown to take place between the saccades and to involve parietal regions. However, there is evidence that updating of object locations across a single saccade (perceptual updating) may represent a distinct process. The present study investigated the time course and topographical organization of perceptual updating in twenty healthy human subjects by means of simultaneous eye tracking and event-related potential (ERP) recording. Participants were asked to perform single horizontal left- and rightward saccades and then to perceptually localize a target that had been briefly shown before the saccade. Successful completion of the task involved either intra- or interhemispheric updating of the target location. Localization was less precise when updating of visual space was necessary, relative to a control condition with no updating requirement. In the updating condition, we observed a positive deflection over parietal electrode sites starting about 300 ms after saccade onset. This effect was more pronounced over right parietal electrode sites, corroborating findings of right-hemispheric dominance in updating of visual space.
While this component likely reflects memory of the updated stimulus location, we suggest that an earlier negative deflection occurring at about 100 to 180 ms after saccade onset may be more directly linked to the updating process. The need for interhemispheric transfer of target-related information did not have an impact on either localization accuracy or the magnitude of the associated ERP components. These results indicate that perceptual updating takes place shortly after the saccade, a finding similar to what has been observed for oculomotor updating.
Acknowledgement: This research was funded by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG)

56.504 Background is remapped across saccades
Oakyoon Cha 1 (oakyoon@yonsei.ac.kr), Sang Chul Chong 1,2; 1 Graduate Program in Cognitive Science, Yonsei University, 2 Department of Psychology, Yonsei University
Saccadic eye movements evoke a motion that disrupts stable perception. One way to prevent this disruption is to remap visual features before saccades and to maintain the remapped representation during saccades. Since object recognition is important, previous studies have focused on the remapping of objects. Here, however, we investigated the influence of pre-saccadic remapping on the background rather than on objects. In Experiment 1, a display was presented with a fixation point in the center of the left visual field, a 5° sine grating tilted 20 or -20° (figure) located 4° above or below the fixation, and a grating tilted oppositely to the figure (background) presented across the entire left visual field. This display was shown for 3 seconds to produce tilt aftereffects, followed by a fixation period of 300-500 ms. After this variable fixation, participants were required to make a 10° saccade. Their task was to determine whether a sine grating (a probe) presented either above or below the saccadic target was tilted towards the left or the right. Note that the probe could appear in the remapped location of either the figure or the background. We found strong tilt aftereffects in the background region as well as in the figure region, suggesting that background information was also remapped to the location of the saccadic target. In Experiment 2, we investigated whether the remapped representation was maintained during saccades and whether the effect of remapping could be generalized to orientation-specific aftereffects. The figure-ground configuration was similar to that of Experiment 1, but we measured threshold elevation during saccades rather than tilt aftereffects before saccades. We again found significant threshold elevation for both the figure and the ground. Thus, our results suggest that remapping before and during saccades occurs for both the figure and the background.
Acknowledgement: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education, Science and Technology (2009-0089090).
56.505 Translation of a visual stimulus during a saccade is more detectable if it moves perpendicular, rather than parallel, to the saccade
Trinity Crapse 1,2 (crapse@cnbc.cmu.edu), Marc Sommer 1,2; 1 Department of Neuroscience, University of Pittsburgh, 2 Center for the Neural Basis of Cognition
A feature of visuomotor behavior is the stable visual percept that the brain generates despite the volatile input provided by the retinas. One proposed mechanism of visual stability invokes neurons that presaccadically shift their receptive fields. These neurons seem to sample the same region of space twice, permitting a comparison between the presaccadic and postsaccadic samples. We found previously that frontal eye field neurons perform such a comparison operation in the context of intrasaccadic translations. Moreover, these neurons are more sensitive to translations that occur perpendicular to the saccade than parallel to it. Here we extend our physiological findings to the behavioral level. Our aim was to characterize the ability of monkeys to report intrasaccadic translations of visual stimuli. We trained monkeys to perform a scan task in which they made repetitive saccades between two visual targets. After a random number of saccades, a third visual stimulus present on the screen was translated intrasaccadically by a small distance. Upon detecting the change, the monkey was tasked with reporting his percept by immediately saccading to the translated stimulus. We included two conditions: one in which the stimulus translated parallel to the scanning saccades, and another in which it translated perpendicular to them. Based on the physiological findings, we made two predictions: that the monkeys’ performance would increase with the magnitude of the translation, and that a greater proportion of hits would occur for perpendicular than for parallel translations. The data confirmed both predictions. First, monkeys performed at chance level for small translations, attained ~50% levels for translations around 2 degrees, and achieved 90% performance at translations greater than 5 degrees. Second, the monkeys had a higher percentage of hits for perpendicular translations. Taken together with our physiological data, these results suggest that monkeys rely on FEF activity for performing intrasaccadic change detection.

56.506 Preview benefit facilitates word processing in Fixation-Related Brain Potentials
Isabella Fuchs 1 (isabella.fuchs@univie.ac.at), Stefan Hawelka 2, Florian Hutzler 2; 1 University of Vienna, Department of Psychological Basic Research, Vienna, Austria, 2 University of Salzburg, Department of Psychology and Center of Neurocognitive Research, Salzburg, Austria
In fixation-related potential (FRP) experiments stimuli can be presented simultaneously, whereas classical event-related potential (ERP) settings are restricted to serial presentation. Hutzler et al. (2007; Welcome to the real world: Validating fixation-related brain potentials for ecologically valid settings. Brain Research, 1172, 124-129) were able to validate the FRP approach using the old/new effect. They found that this marker effect occurred earlier in FRPs than in a classical ERP setting, whereas the shapes of FRPs and ERPs were similar.
In the current study two possible explanations for this finding were investigated: a preview benefit and the self-pacing of stimulus presentation. To assess these two explanations, we established four different settings. We compared a classical and a self-paced ERP experiment and two FRP settings, in which the target stimulus was either visible or masked until fixation. We found a substantially earlier occurrence of effects only in the FRP setting where the target was already visible before fixation. The results clearly show that processing of the target was significantly facilitated when parafoveal information had been available. Therefore, FRPs indicate a substantial influence of the preview benefit, while there was no effect of the self-paced processing rate on ERPs.

56.507 Dynamic recurrent processing for coordinate transformation explains saccadic suppression of image displacement
Fred H. Hamker 1 (fred.hamker@informatik.tu-chemnitz.de), Arnold Ziesche 1, Heiner Deubel 2; 1 Computer Science, Chemnitz University of Technology, 2 Psychology, Ludwig-Maximilians-Universität München
When we shift our gaze, the image on the retina abruptly changes. What are the mechanisms that establish our subjective experience of visual stability? This question has been addressed more formally by experiments in which the image is displaced during the saccade, showing that subjects do not notice small image displacements. This observation has been interpreted as reflecting a built-in assumption of visual stability, according to which we align our pre-saccadic view with a reference found in the post-saccadic view (Deubel, Schneider, & Bridgeman, Vis Res, 1998), or similarly as a prior expectation that object jumps do not occur (Niemeier et al., Nature, 2003). We simulated the saccadic suppression of image displacement task of Deubel et al. (Vis Res, 1996) with a dynamic recurrent basis function network for coordinate transformation, which maps eye-centered into head-centered representations using realistic visual processing, including visual latency, persistence and saccadic suppression. Eye position is modeled not as a continuous but as a discrete signal, which transiently switches from the pre- to the post-saccadic eye position. This model explains the suppression of displacement by simple dynamic properties of the visual system rather than by the higher-level strategies emphasized in earlier theories: according to the simulations, the displacement is not perceived because the displaced stimulus interacts with the pre-saccadic stimulus trace. The head-centered representation of the pre-saccadic stimulus feeds back to the incoming displaced stimulus response and stabilizes the representation in the presence of small displacements. The model can also explain the paradoxical target-blanking effect, in which displacements become apparent when the target is blanked before or after the saccade: in this case, no stabilization occurs because the pre-saccadic stimulus trace is missing.
Acknowledgement: German Federal Ministry of Education and Research project Visuospatial Cognition
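
The abstract above builds on basis function networks for eye-centered to head-centered coordinate transformation. The sketch below is only a textbook-style gain-field illustration of that mapping, not the authors' dynamic recurrent model; all tuning widths and stimulus values are made up:

    import numpy as np

    retinal = np.linspace(-40, 40, 81)    # preferred retinal positions (deg)
    eye_pos = np.linspace(-20, 20, 41)    # preferred eye positions (deg)

    def basis_response(stim_ret, cur_eye, sigma=5.0):
        # retinal tuning multiplied by an eye-position gain field
        r = np.exp(-(retinal[:, None] - stim_ret) ** 2 / (2 * sigma ** 2))
        g = np.exp(-(eye_pos[None, :] - cur_eye) ** 2 / (2 * sigma ** 2))
        return r * g                      # (retinal x eye-position) population

    resp = basis_response(stim_ret=10.0, cur_eye=-5.0)
    head = retinal[:, None] + eye_pos[None, :]   # head-centered label per unit
    print((resp * head).sum() / resp.sum())      # ~5.0 deg = 10 + (-5)
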
56.508 The influence of saccades on visual masking
Alessio Fracasso 1 (alessio.fracasso@unitn.it), David Melcher 1; 1 CIMeC, Università Degli Studi Di Trento
Visual masking is a well-known phenomenon in which the visibility of a stimulus, the target, is reduced by the rapid presentation of either a subsequent or a preceding stimulus, called the mask. In a typical masking paradigm participants are not allowed to move their eyes and are asked to maintain fixation throughout the trial. Outside the laboratory, however, people typically make several saccadic eye movements per second. Since saccades have been shown to influence the perceived spatial and temporal properties of briefly presented stimuli, we investigated whether these changes in perception might also influence masking. We tested subjects, with and without saccades, in a series of masking experiments including metacontrast masking (a single target followed by a backward mask) and masking of a pattern by noise (repeated forward and backward masking in a sequence). We measured discrimination performance for the target as a function of when the target was shown with respect to saccade onset. We found that performance did indeed change during the time period around saccades, leading either to “unmasking” or to stronger masking of the target, depending on the type of masking and the timing of the target and mask with respect to saccade onset. Overall, the results suggest that changes in perception around the time of saccades influence the spatial and temporal attributes of the target and mask. These results also imply that saccades might be useful, in everyday life, for reducing the influence of masking on the perception of briefly presented targets.

56.509 A model of perisaccadic flash mislocalization in the presence of a simple background stimulus
Jordan Pola 1 (jpola@sunyopt.edu); 1 SUNY State College of Optometry
Subjects tend to mislocalize a perisaccadic target flash presented in the dark (Honda, 1991; Matin & Pearce, 1965). It is often suggested that a primary reason for this perceptual phenomenon is an extraretinal (exR) signal that changes before, during and after the saccade. However, Pola (2004, 2007) proposed that such mislocalization is not simply the effect of a time-varying exR signal, but is a consequence of flash retinal (R) signal persistence interacting with the exR signal. A number of studies have shown that flash mislocalization can be substantially influenced by the presence of background stimuli (e.g., Lappe, Awater & Krekelberg, 2000; Maij, Brenner & Smeets, 2008; Matin, 1976). To explore the role of such stimuli, an R-exR model was used to simulate data from an experiment conducted by Matin (1976). A principal virtue of Matin’s experiment, from the perspective of modeling, is that the perisaccadic flash occurred either in complete darkness or in darkness with a simple background stimulus (a small, stationary target) present throughout and after the saccade. The model simulation indicates that flash R signal persistence is substantially involved in the difference between the features of flash mislocalization in the dark and with the background stimulus. Essentially, the model suggests that just as R signal persistence interacting with the exR signal produces mislocalization in the dark, the persistence, interacting with the background stimulus as well as the exR signal, accounts for some of the main features of mislocalization with the background. These findings, together with earlier results (Pola, 2004, 2007), show that R signal persistence can influence flash mislocalization in a wide range of visual circumstances: i.e., in the dark, with sequential stimuli, and with a variety of background stimuli.
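
As a rough rendering of the R-exR idea above (persistence of the retinal flash signal combined with a sluggish extraretinal eye-position estimate), one can compute a persistence-weighted perceived location. This toy version is our paraphrase, not the published model; every time constant and amplitude below is invented:

    import numpy as np

    t = np.arange(-50, 151)                        # ms relative to saccade onset

    # sluggish extraretinal eye-position estimate for a 10 deg saccade (invented)
    exr = 10.0 / (1.0 + np.exp(-(t - 15) / 20.0))

    # retinal signal of a flash at t = 0, with exponential persistence (invented)
    persistence = np.where(t >= 0, np.exp(-np.clip(t, 0, None) / 40.0), 0.0)

    retinal_loc = 0.0                              # flash position on the retina
    perceived = (persistence * (retinal_loc + exr)).sum() / persistence.sum()
    print(f"perceived location ~ {perceived:.1f} deg")  # shifted saccade-ward
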
56.510 Phase-encoded fMRI investigation of retinotopic remapping responses
Tomas Knapen 1 (tknapen@gmail.com), Jascha Swisher 2, Benjamin Wolfe 2, Frank Tong 2, Patrick Cavanagh 1; 1 Laboratoire Psychologie de la Perception, Université Paris Descartes, 2 Department of Psychology & Vision Research Center, Vanderbilt University
In order to maintain visual stability in the face of continuous saccadic eye movements, the brain has to incorporate the motor commands for upcoming saccades into visual processing. The process by which saccadic motor commands impinge on visual processing is called remapping. In remapping, neurons with receptive fields sensitive to the post-saccadic retinotopic location of the stimulus pre-activate before the saccade, and thus before they receive visual input. Using single-cell physiology in monkeys and fMRI in humans, evidence for remapping has been found in areas ranging from the parietal lobe to lower levels of visual cortex.
We devised a novel method to investigate the topographic properties of remapping activity in the human brain. Subjects made saccades back and forth between two lateral fixation marks every second. An expanding ring stimulus was shown alternately at either fixation position. In separate runs, the ring appeared either at the current gaze position (shifting position in sync with gaze, foveal stimulation condition) or at the position opposite the current gaze location (shifting position out of sync with gaze, remapping condition). In the first condition, the intermittent presentation of the expanding ring stimulus caused a traveling wave of activity, allowing us to employ classic phase-encoded retinotopic mapping techniques. In the second, remapping condition, subjects always made a saccade to the location where a stimulus had just disappeared. Remapping responses to this stimulus presentation should therefore produce responses with phases equivalent to the first condition. Furthermore, phases in the remapping condition should be uncorrelated with a control condition that has identical retinal stimulation but no saccades. We use this novel technique to extend previous results regarding remapping responses in human visual cortex.
Acknowledgement: NIH EY017082, Chaire d’Excellence
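
Classic phase-encoded analysis, as used above, assigns each voxel the phase of its response at the stimulation frequency. A minimal sketch (array shapes, TR count, and cycle count are illustrative):

    import numpy as np

    def phase_map(bold, n_cycles):
        # per-voxel phase of the response at the stimulus frequency; with
        # n_cycles ring cycles per run, that frequency is bin n_cycles of the FFT
        spec = np.fft.rfft(bold, axis=1)
        return np.angle(spec[:, n_cycles])

    rng = np.random.default_rng(1)
    tseries = rng.standard_normal((100, 240))   # illustrative voxels x TRs
    print(phase_map(tseries, n_cycles=8)[:5])   # phase -> eccentricity label
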
56.511 Perisaccadic response properties of MT neurons
Till S. Hartmann 1 (till@rutgers.edu), Frank Bremmer 2, Bart Krekelberg 1; 1 Center for Molecular and Behavioral Neuroscience, Rutgers, Newark, NJ, 2 Department of Neurophysics, Philipps-University Marburg, Germany
Humans perform an average of 3 fast eye movements per second. During these eye movements, the physical image of the world races across the retina at speeds up to 1000 deg/s, yet this movement is not perceived. A recent publication (Bremmer et al., J Neurosci, 2009) described how saccades decrease the population activity of areas MT, MST, VIP, and LIP. The time course of suppression qualitatively matched the perceptual loss of sensitivity around the time of saccades in humans. In our current study, we investigated the temporal dynamics of visual responses around saccades in a large population of MT cells. Unlike the previous study, our design allowed us to investigate perisaccadic responses at high temporal resolution in each cell (and not just at the population level). The monkeys either fixated a central target or performed optokinetic nystagmus (OKN), which was induced by moving random dots. At the same time, a noise stimulus was presented in the background. This stimulus drove the MT cells quite strongly and allowed us to observe the influence of eye movements on an ongoing visual response. We aligned the firing rates of each neuron to the saccades. We corroborated the finding that the population response is reduced for stimuli presented around saccade onset, and enhanced just after saccade offset. However, modulations in single cells were surprisingly heterogeneous. Some cells decreased their firing during and prior to saccades; others increased theirs tenfold. Some modulations started before saccade onset; hence at least part of the modulation in area MT is due to an extraretinal signal. Taken together, these data suggest that while the overall reduction in firing may be a correlate of the behavioral phenomenon of saccadic suppression, a significant number of neurons in area MT change their responses in a manner that is not compatible with simple reduced responsivity.
Acknowledgement: The Pew Charitable Trusts (BK), the National Institutes of Health R01EY17605 (BK), and European Union Research Grant MEMORY 6 FP/043236 NEST (FB)
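
Aligning firing rates on saccade onset, as described above, is a simple slice-and-average operation. A generic sketch with simulated rate traces (window and rates are illustrative, not the recorded data):

    import numpy as np

    def saccade_aligned_average(rates, onsets, window=(-100, 200)):
        # slice each trial's rate trace (ms resolution) around its saccade
        # onset and average across trials
        pre, post = window
        segs = [r[i + pre:i + post] for r, i in zip(rates, onsets)
                if i + pre >= 0 and i + post <= len(r)]
        return np.mean(segs, axis=0)

    rng = np.random.default_rng(2)
    rates = rng.poisson(20, size=(50, 1000)).astype(float)  # simulated trials
    onsets = rng.integers(300, 700, size=50)                # saccade onsets (ms)
    print(saccade_aligned_average(rates, onsets).shape)     # (300,)
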
56.512 Borders between areas with different colors influence perisaccadic mislocalization
Femke Maij 1 (f.maij@fbw.vu.nl), Maria Matziridi 1, Eli Brenner 1, Jeroen B.J. Smeets 1; 1 VU University Amsterdam
The location of a brief flash is misperceived when the flash is presented around the time of a saccade. This phenomenon has been studied intensively, but some questions remain unanswered. In a previous experiment we showed that the saccade target is used as a visual reference when localizing flashes that are presented before or after a saccade. The saccade target is a visual reference, but it also provides direct feedback about the saccade. In the study that we present here, we examine whether other visual structures can also be used to improve the accuracy of the perceived location of the flash. In particular, we examine whether a border between differently colored backgrounds can be used as a visual reference. Moreover, we hypothesized that the perceived location of the flash would always be on the correct background color. This would influence the perceived location whenever the border between the two colors is at a position across which flashes are mislocalized on trials with a uniform background. The results showed that there was indeed an increase in accuracy when the flash was presented before or after the saccade, but flashes presented during the saccade were readily perceived on the background with a different color. This suggests that visual structures other than the saccade target can serve as visual references before and after the saccade, but that they are not effective during saccades.
Acknowledgement: This research was supported by the Netherlands Organization for Scientific Research (NWO, ALW grant 816-02-017).

56.513 Peri-saccadic mislocalization centered at a salient stimulus instead of the saccade goal
Gang Luo 1 (gang.luo@schepens.harvard.edu), Tyler Garass 2, Marc Pomplun 2, Eli Peli 1; 1 Schepens Eye Research Institute, Department of Ophthalmology, Harvard Medical School, 2 Department of Computer Science, University of Massachusetts Boston
It is generally believed that visual localization is performed based on retinotopic position and an efference copy of oculomotor signals. However, this model has difficulty explaining the non-uniformity of many visual mislocalization phenomena associated with eye movements, e.g. the “compressed” pattern of peri-saccadic mislocalization with its locus centered at the saccade target. We investigated whether the mislocalization locus follows the actual saccade landing point or the salient object; these are typically at the same location, but were separated by up to 14° in our study. Subjects made saccades from a fixation marker on the left (-10°) to a position on the right (+10°), and a vertical bar was flashed for one frame (100 Hz frame rate) at -9°, 1°, 9°, or 15°, contemporaneous with the saccades. In the baseline condition, there was a salient target at the saccade landing point (+10°); in the control condition, there was no saccade target and subjects made saccades to the memorized point (+10°). In the main experiment, there was no saccade target either, but a salient stimulus (distracter) randomly appeared at -4° or 5° at the same time as the saccade cue and persisted for 600 ms. The results show that: (a) mislocalization in the baseline condition was consistent with previous publications (compression index CI = 0.44); (b) mislocalization in the control condition was smaller (CI = 0.22); and (c) mislocalization in the main experiment was strong (CI = 0.43), but the mislocalization locus was shifted to the distracter. Our results imply that when retinotopic and oculomotor signals are uncertain (as during eye movements), the spatial coding of an object may be affected by other objects that are spatially well established, in addition to the commonly accepted retinotopic position and oculomotor signals. This influence might contribute to the “compressed” pattern of saccadic mislocalization.
Acknowledgement: NIH grants AG034553 (GL), EY12890 and EY05957 (EP), and EY017988 (MP)
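
The compression index (CI) quoted above can be operationalized in more than one way; one simple reading, consistent with the reported values, is one minus the slope of reported against actual flash positions (slope 1: veridical spacing, CI = 0; slope 0: complete compression onto one locus, CI = 1). The reports below are invented for illustration:

    import numpy as np

    def compression_index(true_pos, reported_pos):
        # CI = 1 - slope of reported vs. true flash position
        slope = np.polyfit(true_pos, reported_pos, 1)[0]
        return 1.0 - slope

    true_pos = np.array([-9.0, 1.0, 9.0, 15.0])   # flashed bar positions (deg)
    reported = np.array([-2.5, 3.0, 7.5, 11.0])   # invented perisaccadic reports
    print(f"CI = {compression_index(true_pos, reported):.2f}")   # ~0.44
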
56.514 Rapid development of spatiotopic representations as revealed by inhibition of return
Yoni Pertzov 1 (pertzov@gmail.com), Ehud Zohary 1, Galia Avidan 2; 1 Interdisciplinary Center for Neural Computation and Dept. of Neurobiology, Hebrew University, Jerusalem, 2 Department of Psychology, Ben Gurion University of the Negev, Beer-Sheva, Israel
Inhibition of return (IOR), a performance decrement for stimuli appearing at recently cued locations, occurs when the target and cue share the same screen position (i.e. spatiotopic mapping; Posner and Cohen, 1984; Maylor and Hockey, 1985). This is in contrast to cue-based attention-facilitation effects, which have recently been suggested to be mapped in a retinotopic reference frame (Golomb et al., 2008), the prevailing representation throughout early visual processing stages. Here, we investigate the dynamics of IOR formation in both reference frames, using a modified cued-location reaction-time task with an intervening saccade between cue and target presentation. This enabled creating trials in which the target was present at the same retinotopic location as the cue, and trials with the same screen position (spatiotopic trials). IOR was primarily found for targets appearing at the same spatiotopic position as the initial cue, when the cue and target were presented in the same visual hemifield, as early as 10 ms after the intervening saccade ended. Therefore, under these experimental conditions, the representation of previously attended locations is not remapped after the execution of a saccade. Rather, either a retinotopic representation is remapped prior to the end of the saccade (using a prospective motor command), or the positions of the cue and target are encoded in a spatiotopic reference frame, regardless of eye position. We suggest that deficits in the formation of such spatiotopic representations due to right parietal lesions may explain classical aspects of neglect syndrome, such as re-fixating previously visited targets in a visual search task.
Acknowledgement: This work was supported by the National Institute for Psychobiology in Israel (NIPI) grant 2-2008-09 to GA and an Israel Science Foundation (ISF) grant to EZ.

56.515 TMS over the Human Frontal Eye Field Distorts Perceptual Stability across Eye Movements
Florian Ostendorf 1 (florian.ostendorf@charite.de), Juliane Kilias 1, Christoph Ploner 1; 1 Dept. of Neurology, Charité Universitätsmedizin Berlin
We perceive a stable outside world despite the constant changes of visual input induced by our own eye movements. An internal monitoring of eye movements may contribute to the seemingly perfect maintenance of perceptual stability. The frontal eye field (FEF) represents a candidate area for the cortical integration of oculomotor monitoring signals: it receives information about an impending eye movement from the brainstem (Sommer and Wurtz, 2002) and exhibits predictive receptive field changes that could serve the trans-saccadic integration of visual space (Umeno and Goldberg, 1997; Sommer and Wurtz, 2006). However, what perceptual consequences may arise from altered remapping circuits within the FEF remains unclear. Here, we show that transcranial magnetic stimulation (TMS) over the FEF distorts perceptual stability across eye movements. To assess trans-saccadic perceptual integration, we asked normal healthy subjects to report the direction of intra-saccadic stimulus displacements.
The saccade target was switched off intra-saccadically and reappeared 250 ms later at a displaced position (Deubel et al., 1996). In a critical condition, we applied offline TMS in a continuous theta-burst stimulation (cTBS) protocol before subjects were tested in this task. The cTBS protocol has been shown to suppress cortical excitability for up to 30 min after stimulation when applied over primary motor cortex (Huang et al., 2005) or the FEF (Nyffeler et al., 2006). We determined perceptual thresholds for intra-saccadic displacement detection. Immediately after cTBS over the right FEF, subjects showed significantly elevated detection thresholds for leftward saccades (i.e., for saccades directed to the hemifield contralateral to the stimulated FEF). Control stimulation over the vertex yielded no significant threshold differences compared to a baseline measure without prior stimulation. These findings indicate that the FEF is involved in the integration of oculomotor feedback signals that support visual stability across eye movements.
Acknowledgement: Supported by BMBF Grant 01GW0653 (Visuospatial Cognition)

56.516 Pre-saccadic attention during development
Thérèse Collins 1 (collins.th@gmail.com), Florian Perdreau 1, Jacqueline Fagard 1; 1 Laboratoire Psychologie de la Perception, Université Paris Descartes & CNRS
Saccadic eye movements to spatial locations are preceded by attentional shifts which enhance perception at the selected location relative to other locations. This link between action preparation and perception is thought to be mediated in part by the frontal eye fields. We examined pre-saccadic attention at different ages to ascertain the role of frontal cortex maturation in the expression of the pre-saccadic perceptual enhancement. We replicated the oft-reported pre-saccadic perceptual enhancement in adults, examined it in adolescents, and used a modified version of the paradigm to test perception during saccade preparation in 10-month-old infants.

Development: Lifespan
Vista Ballroom, Boards 517–528
Tuesday, May 11, 2:45 - 6:45 pm

56.517 Sensory transmission, rate of extraction and asymptotic performance in visual backward masking as a function of age, stimulus intensity and similarity
Gerard Muise 1 (muiseg@umoncton.ca); 1 Laboratoire de Psychologie Cognitive, Université de Moncton, NB
Speed of visual information processing alters as we age. Using a visual backward masking (VBM) task, we compared cohorts of various ages (10, 15, 20, 40 and 60 years of age) on three parameters of a two-stage model (the Lagged-Accrual Model, LAM) presented by Muise, LeBlanc, Lavoie and Arsenault, 1991. Of particular interest were the duration (Tlag) of initial chance performance, reflecting sensory transduction and transmission; the rate (theta) of central information accrual; and the level (alpha) of asymptotic performance. These parameters were shown to vary systematically as a function of age, similarity of stimulus set (CGOQ vs IOSX) and stimulus intensity (0.57, 0.70, 0.86, and 1.06 cd/m2). Surprisingly, speed of sensory processing was already at its fastest for the 10-year-olds. The rate of extraction was at a maximum at 15 years, with a sharp decline for older subjects. Older subjects were less able to take advantage of the enhanced information available in increased stimulus intensities and dissimilarities. Differential asymptotic performance in the youngest group as a function of intensity suggests attentional lapses. Results will also be presented that suggest increasing stimulus intensity may “normalize” parametric VBM performance as a function of age. Older subjects may simply need more intense or higher-contrast stimuli. A discussion follows that may be pertinent to the comparison of different clinical populations in early visual information processing within the context of VBM.
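
One plausible parameterization of the two-stage LAM described above (the exact functional form in Muise et al., 1991 may differ): chance performance until Tlag, then exponential accrual at rate theta toward asymptote alpha.

    import numpy as np

    def lam(t, t_lag, theta, alpha, chance=0.5):
        # chance performance during sensory transmission (t < t_lag), then
        # exponential accrual at rate theta toward asymptote alpha
        dt = np.clip(np.asarray(t, float) - t_lag, 0.0, None)
        return chance + (alpha - chance) * (1.0 - np.exp(-theta * dt))

    soa = np.arange(0, 200, 20)                 # target-mask SOA (ms, assumed)
    print(np.round(lam(soa, t_lag=40, theta=0.03, alpha=0.95), 2))
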
56.518 Aging and the use of implicit standards in the visual perception of length
Ashley Bartholomew 1 (ashley.bartholomew@wku.edu), J. Farley Norman 1, Jessica Swindle 1, Alexandria Boswell 1, Hideko Norman 1; 1 Department of Psychology, Western Kentucky University
A single experiment compared younger (mean age 23.7 years) and older (mean age 74.0 years) observers’ ability to visually discriminate line length using both explicit and implicit standard stimuli. In some conditions, the method of constant stimuli (i.e., an explicitly presented standard stimulus) was used to determine the observers’ difference thresholds, whereas the method of single stimuli (no explicit standard stimulus) was used in other conditions. In the method of single stimuli, younger observers readily form a mental representation of the standard stimulus and can then use that mental standard to make judgments about test stimuli (i.e., whether a “test” line on any given trial is longer or shorter than the standard). The current experiment was designed to evaluate whether increases in age affect observers’ ability to form effective mental standards from a series of test stimuli. If older observers cannot form effective mental standards, then their discrimination performance should deteriorate in the single-stimuli conditions relative to that exhibited by younger observers. In our experiment, the observers’ difference thresholds were 5.845 percent of the standard when the method of constant stimuli was used and improved to 4.57 percent of the standard for the method of single stimuli (a decrease in threshold of 22 percent). Both age groups performed similarly: the older adults’ discrimination performance was equivalent to that of the younger observers. The results of the experiment demonstrate that older observers retain the ability to form effective mental standards from a series of visual stimuli.
56.519 Older adults misjudge deceleration
Harriet Allen 1 (h.a.allen@bham.ac.uk), Mike G. Harris 1; 1 School of Psychology, University of Birmingham, UK
Both speed discrimination and optic flow perception show age-related declines in performance. Given the importance of these signals for maintaining good driving, we investigated how well older adults were able to judge braking. Participants viewed a display of dots simulating constant deceleration over a groundplane towards a visual target. Deceleration was varied from trial to trial, and participants indicated whether or not braking was sufficient to stop safely at the target. Older adults were less likely than young adults to recognize under-braking. For example, from an initial speed of 20 mph, they reported deceleration at only 84% of the required value to be safe on almost half the trials, whereas younger adults made this error on only one fifth of trials. Performance by older adults improved at faster speeds (errors reduced to less than 20%). There was considerable variation in performance between older individuals, with some performing nearly as well as the younger adults but others performing poorly. Both age groups tended to misjudge over-braking; e.g. from 20 mph, younger adults reported 125% of the required braking to be insufficient on almost 100% of trials. There is an age-related decline in the ability to discriminate deceleration rates. This is likely to be linked to the known age-related decline in motion processing mechanisms. Given that good performance on this task also requires integrating changes in motion speed over 1-4 seconds, we suspect that age-related changes in sustained attention also play a role.
Acknowledgement: ESRC, BBSRC
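
The "required value" of deceleration referenced above follows from constant-deceleration kinematics: v^2 = 2 a d, so a = v^2 / (2 d). A small worked example (the target distance is assumed):

    def required_deceleration(speed_mps, distance_m):
        # stop exactly at the target: v^2 = 2 a d  =>  a = v^2 / (2 d)
        return speed_mps ** 2 / (2.0 * distance_m)

    v = 20 * 0.44704                  # 20 mph in m/s
    d = 40.0                          # assumed distance to target (m)
    a = required_deceleration(v, d)
    print(f"required: {a:.2f} m/s^2; 84% of required: {0.84 * a:.2f} m/s^2")
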
56.520 Modulatory effects of binocular disparity and aging upon the perception of speed
J. Farley Norman1 (farley.norman@wku.edu), Cory Burton1, Leah Best1; 1Department of Psychology, Western Kentucky University
Two experiments investigated modulatory effects of a surround upon the perceived speed of a moving central region. Both the surround's depth and velocity (relative to the center) were manipulated. The abilities of younger observers (mean age 23.1 years) were evaluated in Experiment 1, while Experiment 2 was devoted to older participants (mean age 71.3 years). The results of Experiment 1 revealed that changes in the perceived depth of a surround (in this case caused by changes in binocular disparity) significantly influence the perceived speed of a central target. In particular, the center's motion was perceived as fastest when the surround possessed uncrossed binocular disparity relative to the central target. This effect, that targets that are closer than their background are perceived to be faster, only occurred when the center and surround moved in the same direction (and did not occur when center and surround moved in opposite directions). The results of Experiment 2 showed that the perceived speeds of older adults are different: older observers generally perceive nearer targets as faster both when center and surround move in the same direction and when they move in opposite directions. In addition, the older observers' judgments of speed were less precise. These age-related changes in the perception of speed are broadly consistent with the results of recent neurophysiological investigations that find age-related changes in the functionality of cortical area MT.

56.521 The effects of age in the discrimination of curved and linear paths
Amy H. Guindon1 (guindon.amy@gmail.com), Zheng Bian1, George J. Andersen1; 1Department of Psychology, University of California, Riverside
Previous research has found that younger observers show greater accuracy in detecting the curved trajectory of moving objects when a three-dimensional background is present (Gillespie & Braunstein, VSS 2009). The current study examined age-related differences in detecting curved trajectories of moving objects. Younger and older observers viewed two displays in which a ball moved towards the observer. In one of the displays the ball moved along a linear path, while in the other display the ball moved in an upwards arc along one of three curved paths. Curvature of the path was indicated by projected velocity information, size change information, or by both types of information. In addition to age, four independent variables were manipulated: the background information (3D scene vs. uniform background), the information indicating the curved trajectory (velocity, size, or both), the order of the trajectory (linear first vs. curved first), and the curvature of the curved trajectory (three levels). The task was to indicate which display simulated a curved trajectory. A three-way interaction was found between age, background information, and information indicating the curved trajectory. When background information was present, the performance of older observers was similar to that of younger observers when only velocity information was available and when both size and velocity information were available. Accuracy was at chance for both age groups when only size change information was provided. When a uniform background was given, this trend was the same for younger observers. However, for older observers, performance was at chance level regardless of what type of information was available. These results show the importance of velocity information when detecting forward-moving curved paths. The results also suggest that older individuals in particular use ground plane information to determine curved paths.
Acknowledgement: Research supported by NIH EY018334 and AG031941.

56.522 Aging and common fate
Karin S. Pilz1 (pilzk@mcmaster.ca), Eugenie Roudaia1, Patrick J. Bennett1,2, Allison B. Sekuler1,2; 1Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario, Canada, 2Centre for Vision Research, York University, Toronto, Ontario, Canada
Common fate is a fundamental law of Gestalt psychology: elements that move together are perceived as being part of the same object (Wertheimer, 1923). Although common fate suggests that the perception of motion drives object perception, the spatial arrangement of the elements also can have an effect on the perception of motion, even when that arrangement is perceived only via motion. For example, Uttal et al. (2000) showed that dots that moved in a common direction within a cloud of randomly moving dots were detected better when the target dots were arranged collinearly than when they were non-collinear. These results indicate that both motion direction and spatial organization are crucial for target detection in random dot motion displays.
As we age, some aspects of our motion perception remain relatively unchanged, while other aspects are impaired. For example, the ability to integrate form and motion information in the context of higher-level visual stimuli, such as biological motion stimuli, seems to be impaired (Pilz et al., in press).
Here, we investigated the effect of aging on the perception of common fate, and the way common fate interacts with form perception. In the current experiment, older (~65 years) and younger (~23 years) subjects detected a group (collinear or non-collinear) of four coherently moving dots that appeared in one of two sequentially presented sets of randomly moving dots with limited lifetime. The target dots always moved in a common direction, which varied across trials. Compared to younger subjects, older subjects showed a general decline in target detection based on common fate. This decline was significantly greater for non-collinear targets. These results indicate that with aging, form regularity is especially important for detecting coherently moving targets, which may underlie previous results regarding perception in higher-level visual tasks such as biological motion.
Acknowledgement: The current research was supported by the Alexander von Humboldt Foundation (KSP), the Alexander Graham Bell Canada Graduate Scholarship (ER), and the Canadian Institutes of Health Research (ASB and PJB)

56.523 Assessing the effect of aging on spatial frequency selectivity of visual mechanisms with the steady state visually evoked potential (ssVEP)
Stanley W. Govenlock1 (govenlock@mcmaster.ca), Allison B. Sekuler1,2, Patrick J. Bennett1,2; 1Dept of Psychology, Neuroscience and Behaviour, McMaster University, 2Center for Vision Research, York University
Electrophysiological studies suggest that the spatial frequency selectivity of V1 neurons declines with age (Leventhal et al., 2003; Zhang et al., 2008). However, no psychophysical evidence for such an age-related decline has been found in humans (Govenlock et al., Vision Res, submitted). Three possible explanations for this discrepancy exist: (1) psychophysical performance is determined by the few neurons that remain highly selective throughout aging, rather than by a larger, less-selective population of neurons; (2) compensatory mechanisms enhance tuning in older humans; or (3) human and macaque monkeys are differentially affected by age. To test hypothesis 1, we measured the bandwidth of spatial frequency-selective mechanisms using the steady-state visually evoked potential (ssVEP) (Regan & Regan, 1988; Peterzell & Norcia, 1997).
Twelve younger (22 years) and 12 older (69 years) subjects viewed two superimposed, iso-oriented, high contrast Gabor patterns counter-phase flickering at 6.67 (F1) and 8.57 (F2) Hz. The spatial frequency of one Gabor was fixed at 1 cpd; the frequency of the other Gabor varied +/- 0.66 octaves around 1 cpd. The dependent variable, the amplitude of the F1+F2 Hz component of the ssVEP, was measured as a function of the spatial frequency difference (∆f) between the Gabors. In both age groups, F1+F2 amplitude was greatest when ∆f was zero and declined as ∆f increased. The full bandwidth (at half-amplitude) of the F1+F2 response was approximately 0.65 octaves in both age groups. Thus, spatial frequency selectivity, as indexed by the population response registered by the ssVEP, does not become more broadly tuned in older humans. These results do not support the hypothesis that the discrepancy between human and macaque results can be explained by performance being determined by a few highly selective neurons.
Acknowledgement: NSERC, Canada Research Chairs, CIHR Strategic Training Program in Social Interaction and Communication in Healthy Aging
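
The F1+F2 intermodulation measure is a standard frequency-tagging analysis: a response at the sum of the two flicker frequencies can only arise from a mechanism that receives both inputs. A minimal sketch of how such a component can be read out of an EEG record (our illustration, not the authors' analysis code; the sampling rate, record length, and toy signal are assumptions):

    import numpy as np

    def component_amplitude(eeg, fs, freq):
        """Amplitude of the spectral component at `freq` (Hz) in a 1-D EEG record."""
        n = len(eeg)
        spectrum = np.fft.rfft(eeg)
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        k = np.argmin(np.abs(freqs - freq))   # nearest frequency bin
        return 2.0 * np.abs(spectrum[k]) / n  # scale to amplitude units

    # Tag frequencies from the abstract; the intermodulation term is their sum
    f1, f2 = 6.67, 8.57
    fs = 512.0                                # assumed sampling rate (Hz)
    t = np.arange(0, 100, 1 / fs)             # 100 s toy record (exact-bin frequencies)
    rng = np.random.default_rng(0)
    # Toy record: responses at F1, F2, and a nonlinear F1+F2 interaction, plus noise
    eeg = (np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
           + 0.3 * np.sin(2 * np.pi * (f1 + f2) * t)
           + 0.5 * rng.standard_normal(t.size))
    print(component_amplitude(eeg, fs, f1 + f2))  # ~0.3 for this toy signal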
56.524 Effect of aging on the use of orientation and position in shape perception
Eugenie Roudaia1 (roudaia@mcmaster.ca), Patrick J. Bennett1,2, Allison B. Sekuler1,2; 1Psychology, Neuroscience & Behaviour, McMaster University, 2Centre for Vision Research, York University
Grouping local elements to extract shapes is a crucial task of the visual system. Recently, Day and Loffler (2009) showed that shape perception results from a weighted combination of local element positions and orientations, and the weighting of each cue depends on their relative strength. When elements whose orientations were consistent with a pentagon were positioned on a circle, the orientation information dominated the percept and a pentagon shape was perceived. With an increasing number of elements, the position information became more dominant and a circular shape was perceived. Shape discrimination is unaffected by healthy aging (Habak et al., 2009). However, older adults are less influenced by local orientation information when integrating contours (Roudaia et al., 2008). Here, we examined whether the relative roles of orientation and position information in shape perception change with age.
Following Day and Loffler (2009), conflicting target stimuli were created by sampling the orientation of a rounded pentagon with Gabors and positioning them on a circle. Test stimuli were composed of Gabors whose positions and orientations were consistent with pentagon shapes of varying amplitude. On each trial, older (~66 yrs) and younger (~24 yrs) subjects viewed a target and test stimuli in two intervals and judged which shape looked more circular. The amplitude of the test stimulus was varied to measure the point of subjective equality between the perceived target and the test shapes. The number of Gabors comprising the stimuli was varied to manipulate the strength of position information. Consistent with previous findings, the perceived target shape was consistent with a pentagon for stimuli comprising 15-40 elements. This orientation dominance effect disappeared with denser sampling. Interestingly, this effect was equal in older and younger subjects across all sampling levels. These results support the findings that shape perception mechanisms are preserved in older age.
Acknowledgement: Canada Institute of Health Research Grant and Canada Research Chair Program to A.B.S. and P.J.B., and Alexander Graham Bell Canada Graduate Scholarship and CIHR Training Program in ‘‘Communication and Social Interaction in Healthy Aging’’ grant for E.R.

56.525 Effects of aging on discriminating emotions from point-light walkers
Justine M. Y. Spencer1,2 (spencjmy@mcmaster.ca), Allison B. Sekuler1,2, Patrick J. Bennett1,2, Martin A. Giese3, Karin S. Pilz1,2; 1Department of Psychology, Neuroscience and Behaviour, 2McMaster University, 3University of Tubingen
The visual system is able to recognize human motion simply from point lights attached to the major joints of an actor. Moreover, it has been shown that younger adults are able to recognize emotions from such dynamic point-light displays. Here, we investigated whether the ability to recognize emotions from point-light displays changes with age. There is accumulating evidence that older adults are less sensitive to emotional stimuli. For example, it has been shown that older adults are impaired in recognizing emotional expressions from static faces. In addition, it has been shown that older adults have difficulties perceiving visual motion, which might be helpful to recognize emotions from point-light displays. In the current study, ten older (mean = 70.4 years) and ten younger adults (mean = 26.1 years) were asked to identify three emotions (happy, sad, and angry) displayed by four types of point-light walkers: upright and inverted normal walkers, which contained both local motion and global form information; upright scrambled walkers, which contained only local motion information; and upright random-position walkers, which contained only global form information. Observers in both age groups were able to recognize emotions from all types of point-light walkers, but performance was best with upright-normal walkers, worst with scrambled walkers, and intermediate with random-position and inverted-normal walkers. Older subjects performed worse than younger subjects in the scrambled and random-position conditions, but no age difference was found in the upright-normal and inverted-normal conditions. These results suggest that both older and younger adults are able to recognize emotions from point-light walkers on the basis of local motion or global form information alone. However, performance is best when both form and motion information are presented simultaneously, an effect which is enhanced in older subjects.

56.526 The Ebbinghaus Illusion as a function of age: complete psychometric functions
Laurence Thelen1 (laurencethelen@hotmail.com), Roger Watt1; 1Department of Psychology, University of Stirling, Scotland, UK
In the Ebbinghaus illusion the visually perceived size of circles is affected by contrast with the size of neighbouring circles. Children under 6 years are thought to show little or no illusion. We have collected data for groups of children of ages 4 to 9. Each participant was shown a series of images of a pair of target circles: one target circle was surrounded by larger circles; the other by smaller. The relative size of the two target circles was varied and participants were asked which was the larger circle. The proportion of trials on which the target circle surrounded by the larger circles was selected was plotted as a function of target circle size. Normally, one would take the point where this function crossed 50% to be a measure of the illusion. However, we have found that participants up to age 6 have a tendency to report, on a proportion of trials, the target circle surrounded by the larger ones, irrespective of the size of the target circle itself. This suggests visual crowding in these age groups: the occasional intrusion of surround circles into the judgement. When one allows for this, then there is no further difference between any age groups and adults.
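
One way to “allow for” such stimulus-independent responses is to fit a psychometric function with an extra intrusion parameter, so the crossing point is estimated after discounting trials on which the surround captured the response. A minimal sketch (our illustration with toy data; the authors' actual fitting procedure is not specified in the abstract):

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    def psychometric(x, mu, sigma, intrusion):
        """P(choose large-surround target) vs. relative target size x.
        `intrusion` is the rate of size-independent responses toward the
        large-surround target; the cumulative Gaussian carries the size judgement."""
        return intrusion + (1 - intrusion) * norm.cdf(x, loc=mu, scale=sigma)

    # x: relative size of the large-surround target; p: observed choice proportions (toy data)
    x = np.array([-0.20, -0.10, -0.05, 0.0, 0.05, 0.10, 0.20])
    p = np.array([0.15, 0.20, 0.30, 0.45, 0.65, 0.80, 0.95])
    (mu, sigma, intrusion), _ = curve_fit(
        psychometric, x, p, p0=[0.0, 0.1, 0.1], bounds=([-1, 1e-3, 0], [1, 1, 0.5]))
    print(mu)  # bias-corrected point of subjective equality (illusion measure)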
56.527 Age and guile vs. youthful exuberance: Sensory and attentional challenges as they affect performance in older and younger drivers
Lana Trick1 (ltrick@uoguelph.ca), Ryan Toxopeus1, David Wilson1; 1Dept. of Psychology, University of Guelph
With age there are reductions in sensory, attentional, and motor function that would predict deficits in performance in older drivers. A variety of studies suggest that the magnitude of these effects varies with the attentional demands of the task: age deficits in performance are especially notable in tasks where there is high attentional load. These studies typically manipulate attentional load by imposing a secondary task that does not go naturally with driving (e.g. mental arithmetic). In this study, an attempt was made to manipulate the demands of the drive by using challenge factors intrinsic to driving. Three manipulations were investigated: a sensory challenge (driving in fog as compared to driving on a clear day); a traffic density challenge (driving in high as compared to low density traffic); and a navigational challenge (having to use memorized directions, signs and landmarks to navigate while driving as compared to simply “following the road”). The effects of these manipulations were investigated alone and in combination in 19 older adults (M age = 70.8 years) and 21 younger adults (M age = 18.2 years). Participants were tested in a high fidelity driving simulator. Hazard RT, collisions, steering performance and navigational errors were measured. Contrary to prediction, when the driving task was made more challenging, the older drivers performed as well as or better than the younger adults, with significantly fewer collisions and marginally lower hazard RT. This high level of performance may have arisen because older drivers adjusted their speeds more appropriately in the face of different driving challenges. Speed adjustment indices were calculated for each condition and participant. For the older adults, these speed adjustment indices correlated with measures of selective and divided attention, which suggests that older adults with deficits in attentional processing adjust their driving speeds to compensate.
Acknowledgement: Natural Sciences & Engineering Research Council of Canada, Ontario Neurotrauma Foundation, Auto21: Network Centres of Excellence, Canadian Foundation for Innovation


56.528 The role of ageing on searching for a multisensory object in 3-dimensional arrays
Annalisa Setti1,2 (asetti@tcd.ie), Jason S. Chan1,2, Corrina Maguinness1,2, Kate E. Burke1,2, RoseAnne Kenny1,3,4, Fiona N. Newell1,2; 1Institute of Neuroscience, Trinity College Dublin, 2School of Psychology, Trinity College Dublin, 3Department of Medical Gerontology, Trinity College Dublin, 4St. James Hospital, Dublin
With ageing, sensory acuity declines; however, recent studies suggest that perception is compensated by combining inputs from across the various senses [Laurienti, et al., 2006]. However, perception can be compromised when unrelated sensory information is combined across the senses [Poliakoff et al., 2006]. What is not known is how efficient multisensory integration is in older adults when the task is to search for an object in a large spatial array. In a task involving visual target localisation, we investigated whether an auditory stimulus presented from the same location improves performance relative to a visual-only condition, and whether an auditory stimulus presented at a different location (left or right, in front or behind) to the visual target compromises performance. We also tested whether these effects were more pronounced in older than younger adults. In Experiment 1, we manipulated the spatial congruency between the auditory and visual events along the depth plane (z-axis) and in Experiment 2 along the horizontal plane (x-axis). Overall, performance was worse for older than younger adults in both experiments. In particular, performance in the older adults group did not benefit from spatial congruency between the visual target and the auditory non-target, but it was hindered by a sound coming from a spatially incongruent depth location. Conversely, in Experiment 2 no detrimental effect of a spatially incongruent sound on the horizontal plane was found in the older adult group, suggesting that visuo-spatial processing is not affected by sounds mislocated to the left or right. These results show that when auditory and visual stimuli are available, older adults may integrate unreliable auditory inputs to perform a visual task, in particular along the depth plane. These findings support the idea that multisensory integration is enhanced in older relative to younger adults.
Acknowledgement: This research was completed as part of a wider programme of research within the TRIL Centre (Technology Research for Independent Living). The TRIL Centre is a multi-disciplinary research centre, bringing together researchers from UCD, TCD, NUIG and Intel, funded by Intel and IDA Ireland. www.trilcentre.org

Face perception: Eye movements
Vista Ballroom, Boards 529–541
Tuesday, May 11, 2:45 - 6:45 pm

56.529 Dissociating holistic from featural face processing by means of fixation patterns
Meike Ramon1 (meike.ramon@uclouvain.be), Goedele van Belle1, Philippe Lefèvre2, Bruno Rossion1; 1Université catholique de Louvain, Cognition & Development Research Unit, Laboratory of Neurophysiology, 2Université catholique de Louvain, Center for Systems Engineering and Applied Mechanics
In the face processing literature, holistic/configural processing (HP) has classically been dissociated from featural processing (FP) (Sergent, 1984).
HP, the interactivity of feature processing, promotes the generally observed efficiency in recognizing/discriminating individual faces and forms the basis of phenomena such as the whole-part advantage (Tanaka & Farah, 1993) and the composite-face effect (Young et al., 1987). FP, characterized by a lack of such interactivity, renders a local, serial processing style suboptimal for face processing (as seen in acquired prosopagnosia). Past investigations have assessed HP/FP behaviorally, e.g. by discrimination/recognition of features embedded in the facial context, or presented in isolation. Here we suggest that the extent to which HP/FP is engaged varies depending on the information that can be derived from full-face stimuli, and furthermore can be assessed by means of fixation patterns. Participants' eye movements were recorded during a delayed face-matching task. The stimuli were morphs created from personally familiar/unfamiliar faces that were either easily discriminable (50% difference) or more difficult (20%, 50% blurred faces). The rationale was that greater similarity (20%) would decrease HP and increase reliance on individual features (FP). Contrariwise, HP should increase with dissimilarity, or when featural information is unavailable, as is the case for blurred faces (Collishaw & Hole, 2000). For easily discriminable faces, participants fixated the face center, below the eyes (Hsiao & Cottrell, 2008). However, the individual features (eyes, mouth) were fixated more with increased similarity. The distributed nature of fixation patterns, along with more fixations, cannot be attributed to lower performance, as this pattern was not found for blurred faces. Here, despite decreased performance, fixations remained even more centrally located, as if seeing the whole face from the central point was the optimal strategy. Our results are the first to demonstrate that stimulus quality and similarity can determine processing style, which is directly linked to the observed pattern of eye-gaze fixations.

56.530 Gaze contingent methods reveal a loss of holistic perception for inverted faces
Goedele Van Belle1,2,3 (goedele.vanbelle@uclouvain.be), Karl Verfaillie1, Peter De Graef1, Bruno Rossion2,3, Philippe Lefèvre3; 1Laboratorium voor Experimentele Psychologie, University of Leuven, Belgium, 2Unité Cognition et Développement, University of Louvain, Belgium, 3Laboratoire de Neurophysiologie, University of Louvain, Belgium
The face inversion effect (FIE) is often attributed to the inability of the human face recognition system to simultaneously perceive multiple features of an inverted face and integrate them into a single global representation, a process called holistic processing. If inversion reduces holistic processing, then for inverted faces the functional visual field should be constricted, as opposed to global (expanded) for upright faces. Until now, however, there are only indirect indications supporting this hypothesis. In the current experiment, we directly manipulated holistic processing by using a gaze-contingent technique allowing manipulation of the number of facial features simultaneously perceived. First, a gaze-contingent foveal mask covering all foveal information prevented the use of high resolution information, necessary for part-based processing, but allowed holistic processing based on lower resolution peripheral information.
Second, a gaze-contingent foveal window covering all peripheral information prevented the simultaneous use of several facial features, but allowed detailed investigation of each feature individually. A delayed face matching task showed an increased FIE with a foveal mask compared to full view and an almost absent FIE with a foveal window. These data provide direct evidence that the FIE is caused by the inability to process inverted faces holistically.

56.531 Ultra-rapid saccades to faces: the effect of target size
Marie A. Mathey1,2 (marie.mathey@gmail.com), Sébastien M. Crouzet1,2, Simon J. Thorpe1,2; 1Université de Toulouse, UPS, Centre de Recherche Cerveau & Cognition, France, 2CNRS, CerCo, Toulouse, France
When two images are simultaneously flashed left and right of fixation, subjects can initiate saccades to the side where a face is present in as little as 100-110 ms (Crouzet, Kirchner & Thorpe, submitted). In the present study, we tested how performance is affected by reducing the size of the target region within the image. Six different scales were used so that the percentage of the pixels in the image that corresponded to the head (not including hair) was set at 20%, 10%, 5%, 2%, 1% or 0.5%. We generated sets of 100 target and distractor images by taking two photographs of the same scene, one with a human present (target), and another without (distractor). On each trial, a fixation cross was presented for 800-1600 ms followed by a gap lasting 200 ms. Then, a target stimulus at one of the six scales was paired with either one of the matched distractors, or one of 500 other highly varied distractor images. Eight subjects were required to saccade towards the side containing a human target. Although accuracy decreased when face size was reduced, overall performance remained surprisingly high, even for the smallest sizes. For example, accuracy dropped from 94.6% to 84.2% when the face was reduced from 20% to 0.5% of the image. Furthermore, average reaction time was under 150 ms for all six sizes, and the minimum reaction time, defined as the bin where correct responses statistically outnumber errors, was still only 100-110 ms, even for the smallest size. Although there is now increasing evidence that these ultra-rapid saccades towards faces may depend on relatively low level information contained in the power spectrum, the current results demonstrate that this sort of analysis must be performed locally, rather than being a feature of the global power spectrum for the image.
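
The “minimum reaction time” criterion above is typically computed by binning saccadic latencies and finding the first bin in which correct responses significantly outnumber errors. A minimal sketch of that computation (our illustration; the bin width, the one-tailed binomial criterion, and the toy data are assumptions, not taken from the abstract):

    import numpy as np
    from scipy.stats import binomtest

    def minimum_rt(correct_rts, error_rts, bin_ms=10, alpha=0.05):
        """First latency bin where correct saccades significantly outnumber errors."""
        edges = np.arange(0, 500 + bin_ms, bin_ms)
        n_correct, _ = np.histogram(correct_rts, edges)
        n_error, _ = np.histogram(error_rts, edges)
        for lo, c, e in zip(edges, n_correct, n_error):
            n = c + e
            # one-tailed binomial test: more corrects in this bin than chance (p = .5)?
            if n > 0 and binomtest(int(c), int(n), 0.5, alternative='greater').pvalue < alpha:
                return lo  # lower edge of the first significant bin, in ms
        return None

    # Toy latencies (ms): corrects are earlier and more numerous than errors
    rng = np.random.default_rng(0)
    correct = rng.normal(150, 30, 400)
    errors = rng.normal(170, 40, 60)
    print(minimum_rt(correct, errors))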


56.532 Power spectrum cues underlying ultra-fast saccades towards faces
Sébastien M. Crouzet1,2 (sebastien.crouzet@cerco.ups-tlse.fr), Simon J. Thorpe1,2; 1Université de Toulouse, UPS, Centre de Recherche Cerveau & Cognition, France, 2CNRS, CerCo, Toulouse, France
When images of a face and a vehicle are flashed left and right of fixation, subjects can selectively saccade towards the face only 100 ms after image onset (Crouzet et al., submitted). This is so quick that it probably does not allow time for complete analysis of the image by the ventral stream. What sorts of information could be used for triggering such fast saccades? One possibility is that this ultra-rapid processing relies on relatively low level power spectrum (PS) information in the Fourier domain (Honey et al., J. Vis., 2008). Thus, PS normalization in the task can significantly alter face detection performance, especially for the very first saccades (Crouzet et al., ECVP 2008). However, a decrease of performance following PS normalization does not prove that PS-based information is sufficient to perform the task (Gaspar & Rousselet, Vis. Res., 2009). Following the Gaspar and Rousselet paper, we used a swapping procedure to clarify the role of PS information in fast face detection. Our experiment used 3 conditions: (i) original, with the original images; (ii) inverted, in which the face image has the PS of a vehicle, and the vehicle has the PS of a face; and (iii) swapped, where the face has the PS of another face, and the vehicle has the PS of another vehicle. The results showed very similar levels of performance in the original and swapped conditions, and a huge drop in the inverted condition. The conclusion is that, in the early temporal window offered by the saccadic choice task, the visual saccadic system effectively makes use of low level PS information for fast face detection, implying that faces may be detected by some particular combination of spatial frequency and orientation energy.
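
Power-spectrum swapping of this kind is conventionally done in the Fourier domain: each image keeps its own phase spectrum but takes the other image's amplitude spectrum. A minimal sketch of the standard operation (our illustration; the authors' exact implementation is not given in the abstract):

    import numpy as np

    def swap_power_spectra(img_a, img_b):
        """Return copies of two same-sized grayscale images with exchanged
        amplitude spectra: each output keeps its own phase (image structure)
        but carries the other image's power spectrum."""
        fa, fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
        amp_a, amp_b = np.abs(fa), np.abs(fb)
        phase_a, phase_b = np.angle(fa), np.angle(fb)
        a_with_b = np.real(np.fft.ifft2(amp_b * np.exp(1j * phase_a)))
        b_with_a = np.real(np.fft.ifft2(amp_a * np.exp(1j * phase_b)))
        return a_with_b, b_with_a

    # e.g., face_with_vehicle_ps, vehicle_with_face_ps = swap_power_spectra(face, vehicle)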
56.533 Human and foveated ideal observer eye movement strategies during an emotion discrimination task
Matthew Peterson1 (peterson@psych.ucsb.edu), Miguel Eckstein1; 1Dept. of Psychology, UC Santa Barbara
Introduction: Previously, we have shown that eye fixation patterns during a quick, difficult facial identification task are highly observer-specific, with these differences mirroring idiosyncratic fixation-dependent task ability (Peterson, 2009). Here, we extended this exploration to the task of emotion recognition. Specifically, we investigated the optimality with which humans adapt their eye movements to changing task demands during face recognition. Methods: We implemented an ideal observer limited by a human-like foveated visual system in order to evaluate the expected performance for each possible fixation location. In order to assess human strategy optimization we ran observers in two tasks. In both, observers began each trial by fixating along the edge of a monitor. A face embedded in white noise would then appear in the middle of the screen for 350 ms, during which observers made a single eye movement. In one task, observers were shown and then asked to identify one of ten faces. In the second task, observers were shown either a smiling face or a neutral face and asked to choose the displayed emotion. Results: Ideal observer results show that a shift in eye movement strategy between identification and emotion recognition, downward toward the mouth, is optimal. This move is driven largely by the differences in the locations of information concentration between the two stimuli and task types. Humans continued to show individualized fixation patterns across both tasks. Furthermore, humans showed a significant shift in gaze locus between the two conditions on a subject-by-subject and group basis. However, fixations did not shift as much as optimality would suggest. A foveated system with differential upper-field and lower-field visibility (Cameron, 2002) is able to explain the pattern of eye movements. Conclusion: Humans optimally adapt their eye movements depending on the face recognition task at hand and on the individual's observer-specific, fixation-dependent ability.
Acknowledgement: NIH-EY-015925, NSF-DGE-0221713

56.534 Location of pre-stimulus fixation strongly influences subsequent eye-movement patterns during face perception
Joseph Arizpe1 (arizpej@mail.nih.gov), Dwight Kravitz1, Galit Yovel2, Chris Baker1; 1Laboratory of Brain and Cognition, NIMH, NIH, 2Department of Psychology, Tel Aviv University
Interpretation of eye-tracking data rests on the assumption that observed fixation patterns are mainly stimulus and task dependent. Given this assumption, one can draw conclusions regarding where the most diagnostic information is for a given perceptual task (e.g. face recognition, scene identification). If the assumption is true, then the fixation location at stimulus onset should not largely influence subsequent fixation patterns. However, we demonstrate that start location very strongly affects subsequent fixation patterns. Participants viewed upright and inverted faces and were told that they would be required to recognize the faces later in the experiment. We imposed five different start locations relative to the faces: above, below, right, left, and center (tip of nose). We found a distinct pattern of fixations for each start location that extended through at least the first five fixations. In particular, there was a strong fixation bias towards the side of the face opposite the start location. For the center start location, we found the classic result of more fixations to the eyes, particularly the right. The initial saccade from the center start location was delayed relative to the other start locations, suggesting that participants were already sampling information from the face. These general fixation patterns held regardless of face orientation. However, the difference in fixation patterns between upright and inverted faces was dependent on start location. While the central start location produced the classic result (more fixations to eyes for upright and to mouth and nose for inverted), the relative preference for eyes in upright over inverted was dependent on the start location. We conclude that general biases in saccadic programming, as well as stimulus information, influence eye movements during face perception. Eye tracking allows us to tease these influences apart only if both factors are carefully controlled and analyzed.
Acknowledgement: NIMH Intramural program, United States - Israel Binational Science Foundation

56.535 Scan Patterns Predict Facial Attractiveness Judgments
Dario Bombari1 (dario.bombari@psy.unibe.ch), Fred Mast1; 1Institute of Psychology, University of Bern
We investigated the role of eye movements during judgments of facial attractiveness. Forty participants had to rate high and low attractiveness faces while their eye movements were registered.
High attractiveness faces evoked a more configural scanpath compared to low attractiveness faces. This suggests that attractiveness is perceived by collecting information from different regions of the face and by integrating them to form a global representation. Moreover, high attractiveness faces elicited a higher number of fixations compared to low attractiveness faces. Participants looked preferably at the eye region and the poser's left hemiface when the faces were attractive, whereas they spent more time fixating the mouth region and the right hemiface in unattractive faces. Our findings are in line with evidence showing that the perception of attractive and unattractive faces relies on different mechanisms.

56.537 Gaze direction mediates the effect of an angry expression on attention to faces
Anne P. Hillstrom1 (anne.hillstrom@port.ac.uk), Christopher Hanlon1; 1Department of Psychology, University of Portsmouth
An angry expression on someone's face can draw and hold our attention. Research demonstrating this typically uses faces gazing directly at the participant. Other research has shown that gaze direction affects the way emotions are processed, so we looked for an attentional effect of angry expressions with averted gaze. In this study, a serial stream of faces appeared alternating randomly between positions left and right of fixation and participants searched for a target face. All faces either had averted gaze or direct gaze. The target face had either a neutral or an angry expression, as did the face that appeared immediately before the target (the distractor face). We looked for effects of angry expressions drawing attention spatially (when the target was neutral and the distractor was angry, we looked for slower responses when they were at different locations than at the same location; when the target was angry and the distractor was neutral, we looked for faster responses when they were at the same location than at different locations) and also for engagement effects (focusing on trials where target and distractor were at the same location, we looked for slower responses when either the target or distractor was angry compared to neutral). No attention-drawing effects were seen. There was an engagement effect of anger and it was mediated by gaze: (1) Angry targets were responded to more slowly than neutral targets. (2) When gaze was averted, responses were slower when the distractor was angry than when the distractor was neutral. (3) But when gaze was direct, the distractor's expression had no impact. Thus, regardless of gaze, engagement is high when the target is angry.
If an angry non-target face is encountered during search, that face is more disruptive if all faces are looking away than if looking directly at the observer.

56.538 Fear expressions enhance eye gaze discrimination
Daniel H. Lee1 (d23lee@gmail.com), Joshua M. Susskind1, Adam K. Anderson1; 1University of Toronto
Evidence suggests that facial expressions may have originated from a primitive sensory regulatory function (Susskind et al., 2008). For example, wider eye-opening in fear expressions is associated with a subjectively larger visual field and enhanced peripheral stimulus detection. Here we examined the Functional Action Resonance hypothesis (Susskind et al., 2008), predicting that these benefits for fear expressers are, in parallel, passed on to their observers by enhancing gaze directionality discrimination. To test this hypothesis, we derived schematic eye gazes by averaging across 19 individuals expressing canonical fear and disgust facial actions. Eye aperture was interpolated from wide “fear” to its functional opposite, narrow “disgust”, and gaze direction was parametrically modulated from 0 (straight) to 0.25 degrees (left and right). The remainder of the face was removed to examine directly how expression effects on eye aperture influence gaze perception. Participants viewed a pair of eyes and made forced-choice response judgments of left vs. right gaze direction. Logistic regression revealed that accuracy increased with gaze angle and with increased eye aperture characteristic of fear expressions. This effect appears specific to eyes, and not reducible to simple geometric properties, as the discrimination enhancement did not extend to analogous rectangles (matching dimension and proportion) that were not perceived as eyes. In an additional exogenous attentional-cuing experiment, where gaze matched or mismatched the location of a target, participants responded faster to the eccentric targets with increasing eye aperture. This facilitation was correlated with increased visibility of the iris, consistent with how fear physically enhances the gaze signal. In sum, these results support the Functional Action Resonance hypothesis, demonstrating links between how emotions are expressed on the face, their functional roles for the expresser, and how they influence their observer's perception and action.
Acknowledgement: Natural Sciences and Engineering Research Council of Canada
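
The logistic-regression analysis above models trial-by-trial accuracy as a function of gaze angle and eye aperture. A minimal sketch of such a model (our illustration using statsmodels with simulated trials; the predictor coding and generative weights are assumptions, not taken from the abstract):

    import numpy as np
    import statsmodels.api as sm

    # Toy trial data: gaze angle (deg), eye aperture (0 = narrow .. 1 = wide),
    # and whether the left/right judgment was correct (1) or not (0)
    rng = np.random.default_rng(1)
    n = 500
    angle = rng.choice([0.05, 0.15, 0.25], n)
    aperture = rng.uniform(0, 1, n)
    logit = -1.0 + 8.0 * angle + 1.5 * aperture        # assumed generative weights
    correct = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    X = sm.add_constant(np.column_stack([angle, aperture]))
    fit = sm.Logit(correct, X).fit(disp=0)
    print(fit.params)  # positive weights: accuracy rises with angle and with aperture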
56.539 First fixation toward the geometric center of human faces is common across tasks and culture
Helen Rodger1 (helenr@psy.gla.ac.uk), Caroline Blais2, Roberto Caldara1; 1Department of Psychology and Center for Cognitive Neuroimaging (CCNi), University of Glasgow, United Kingdom, 2Départment de Psychologie, Université de Montréal, Canada
Cultural diversity in eye movements has been shown between East Asian (EA) and Western Caucasian (WC) observers across various face processing tasks: the recognition of upright and inverted faces, and categorization by race and expression. Eye-tracking studies in humans have also consistently reported that first gaze fixations are biased toward the center of natural scenes or visual objects. However, whether such a low level perceptual bias is universal remains to be established. To address this question, we re-examined the initial fixations of a large set of eye movement data of EA and WC observers performing diverse tasks: the learning and recognition of (1) upright and (2) inverted faces, (3) categorization by race, and (4) categorization of emotional expressions. In all experiments, to prevent anticipatory eye movement strategies and record a genuine location of the first fixation, we presented faces pseudorandomly in one of the four quadrants of a computer screen. We measured the mean Euclidean distance between the geometric center of each face stimulus and the center-of-gravity of the first saccade across tasks and observers. Consistent with previous visual search findings with objects, the first saccade directed the fovea towards the center-of-gravity of the target face, regardless of the culture of the observer or the task. Interestingly, we observed differences in the onset of the first saccade, with upright faces eliciting the fastest onset compared to inverted or emotionally expressive faces across both groups. The first fixation could relate to a basic visual function and universal human ability to localize objects in the visual environment, possibly representing the entry level for information processing. Top-down factors modulate the speed of preparatory saccades, but not their landing locations. Culture does not shape the landing location of the first fixation, but only modulates subsequent stages of information processing.
Acknowledgement: The Economic and Social Research Council and Medical Research Council (ESRC/RES-060-25-0010)

56.540 You must be looking at me: the influence of auditory signals on the perception of gaze
Raliza S. Stoyanova1 (raliza.stoyanova@mrc-cbu.cam.ac.uk), Michael P. Ewbank1, Andrew J. Calder1; 1MRC Cognition and Brain Sciences Unit, University of Cambridge
The direction of another's eye gaze provides a cue to where they are currently attending (Baron-Cohen, 1995). If that gaze is directed at the observer, it often indicates a deliberate attempt to communicate. However, gaze direction is only one component of a social signal that may include other emotionally salient information in the face or the voice. A recent study from our laboratory has shown that gaze is more likely to be seen as direct in the context of an angry as compared to a fearful or neutral facial expression (Ewbank, Jennings & Calder, in press). This is consistent with the presence of a ‘self-referential bias’ when participants are faced with ambiguously directed gaze in the context of a threatening face. However, it remains unclear whether a self-referential signal in the auditory modality could exert an influence on the perception of gaze. To address this question, we presented neutral faces displaying different degrees of gaze deviation whilst participants heard a name in the unattended auditory channel. Hearing one's own name and seeing direct gaze both capture and hold attention (Moray, 1959; Senju & Hasegawa, 2005). These two ostensive signals have also been shown to activate similar mentalizing regions (Kampe, Frith & Frith, 2003). Given the shared signal value of the two cues, we predicted that participants would evaluate a wider range of gaze deviations as looking directly at them when they simultaneously heard their own name.
Our data supported this hypothesis, showing, for the first time, that communicative intent signalled via the auditory modality influences the visual perception of another's gaze.
Acknowledgement: This research was funded by the UK Medical Research Council project code U.1055.02.001.0001.01 (Andrew J. Calder)

56.541 Enhanced detection of change via direct gaze: Evidence from a change blindness study
Takemasa Yokoyama1 (yokoyama@lit.kobe-u.ac.jp), Kazuya Ishibashi1,2, Shinichi Kita1; 1Department of Psychology, Kobe University, 2Japanese Society for the Promotion of Science
Purpose: A number of questions remain unclear regarding direct gaze. Does change via direct gaze elicit more specific attention than change via non-direct gaze? In addition, change via direct gaze can be categorized into two types: “look toward,” which means gaze changing to look toward observers, and “look away,” which means gaze changing to look away from observers. Which type of change via direct gaze triggers more specific attention? This study answers these questions.
Method: We conducted the one-shot paradigm of the flicker task. The task requires specific attention for change detection, otherwise change blindness occurs. We hence explored how change detection occurred through aspects of attention. To explore the above questions, we compared “look away,” “look toward,” and non-direct gaze change. In the experiments, we prepared six schematic faces positioned at 5 deg visual angle from the central fixation point, on which observers were required to keep their eyes. In the direct gaze conditions, gaze changed from the center to both sides of the eyes (“look away”) or from both sides to the center of the eyes (“look toward”), whereas in the non-direct gaze conditions, gaze changed from side to side of the eyes.
Results and Discussion: The experiments indicated that detection of change via direct gaze was significantly more accurate than detection of change via non-direct gaze. A post hoc analysis showed “look toward” was more effectively detected than “look away.” Moreover, by manipulating the distance of gaze change, explanation by simple motion detection was excluded. Our study showed two novel findings. First, change via direct gaze drew more particular attention than non-direct gaze. Second, “look toward” elicits more specific attention than “look away.” These results demonstrate that individuals pay more attention when they perceive direct gaze and that “look toward” draws more specific attention than “look away” in direct gaze.


Face perception: Parts and configurations
Vista Ballroom, Boards 542–556
Tuesday, May 11, 2:45 - 6:45 pm

56.542 Face identification and the evaluation of holistic indexes: CFE and the whole-part task
Yaroslav Konar1 (konary@mcmaster.ca), Patrick Bennett1,2, Allison Sekuler1,2; 1Department of Psychology, Neuroscience & Behaviour, McMaster University, 2Center for Vision Research, York University
Konar, Bennett and Sekuler (Psychological Science, in press) showed that performance in a standard measure of holistic processing, the composite-face-effect task (CFE), was highly variable across observers and did not correlate with accuracy on a face identification task. This result suggests that the influence of holistic processing on face identification may not be as significant, or automatic, as commonly assumed. Of course, holistic processing can be measured in more than one way, and, although it is typically assumed that the measures are tapping into a single mechanism, that assumption is not typically tested.
Here we examine the reliability of and relations between face identification, the CFE, and another measure of holistic processing, the whole-part task (e.g., Tanaka & Farah, 1993). Our whole-part task was modelled after Leder and Carbon's (2005) second experiment: subjects learned associations between names and whole faces or face parts, and then were tested with whole faces and parts in upright and inverted conditions. Our face set removed external features (hair, chin, and ears) to ensure that discrimination was based on internal facial features.
Consistent with Konar et al., measures of the CFE and identification accuracy exhibited moderate-to-high reliability, but were uncorrelated with each other. As other researchers have found, there was a whole-face superiority effect on the whole-part task: performance was better on whole-face trials regardless of learning or orientation, and the effect had high within-observer reliability. Notably, however, there were no significant correlations between performance in the whole-part task and either the CFE or face identification accuracy.
These results, based on 10 observers, suggest that different holistic tasks may, in fact, be tapping into distinct perceptual mechanisms, neither of which is predictive of our face identification task.
Acknowledgement: NSERC-PGS-D to Yaroslav Konar

56.543 The influence of horizontal structure on face identification as revealed by noise masking
Matthew V. Pachai1 (pachaim@mcmaster.ca), Allison B. Sekuler1,2, Patrick J. Bennett1,2; 1Department of Psychology, Neuroscience, and Behaviour, McMaster University, 2Centre for Vision Research, York University
Dakin and Watt (J Vis., 2009, 9(4):2, 1-10) suggested that face identity is conveyed primarily by the horizontal structure in a face. We evaluated this hypothesis using upright and inverted faces masked with orientation-filtered Gaussian noise. Observers completed a 10-AFC identification task that used faces that varied slightly in viewpoint. Face stimuli were presented in horizontal and vertical noise, and in a noiseless baseline condition. Both face and noise orientation were varied within subjects, with face orientation blocked and counter-balanced across two sessions and noise orientation varying within each session. We measured 71% correct RMS contrast thresholds for each condition and then converted the thresholds into masking ratios, defined as the logarithm of the ratio of the masked and unmasked thresholds. There was a significant effect of noise orientation for upright faces (F(1,11)=5.162, p < .05), but not for inverted faces (p > 0.4). Finally, we simulated the performance of Dakin and Watt's so-called barcode observer for our experimental conditions, and found that the predictions of the model were consistent with the masking data obtained with upright faces. Together, these data suggest that observers may indeed identify faces preferentially using the horizontal structure in the stimulus.
Acknowledgement: NSERC
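
The masking-ratio measure above, in formula form (our notation, with c denoting the RMS contrast thresholds; the base of the logarithm is not stated in the abstract, base 10 assumed):

    \mathrm{MR} = \log_{10}\!\left(\frac{c_{\mathrm{masked}}}{c_{\mathrm{unmasked}}}\right)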
56.544 Facial Perception as a Configural Process
Devin Burns1 (devburns@indiana.edu), Joseph Houpt1, James Townsend1; 1Indiana University
Configural and gestalt processing are general terms given to phenomena where the whole is different from the sum of its parts. Here we explore these phenomena through face perception, a known configural process. Split faces have often been employed as a manipulation that disrupts the configurality typically found in face processing. By applying systems factorial theory we can discover the differences in processing that result from splitting faces. This knowledge can help us further our understanding of what configurality is, and what qualities are necessary to observe it. We find that the difference in this case is due to a reduction in the workload capacity of the system, as measured by Townsend's capacity coefficient. Systems factorial technology is employed to draw conclusions regarding architecture, stopping rule, capacity and independence.
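
For reference, Townsend's workload capacity coefficient for an OR (first-terminating) task compares the cumulative hazard of responding when both signals are present to the sum of the single-signal hazards (standard definition from Townsend & Nozawa, 1995; it is not restated in the abstract):

    C_{\mathrm{OR}}(t) = \frac{H_{AB}(t)}{H_A(t) + H_B(t)}, \qquad H(t) = -\log S(t)

where S(t) is the survivor function of the response times; values of C(t) below 1 indicate limited capacity, the reduction reported for split faces.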
56.545 Attentional weighting in configural face processing
Daniel Fitousi1 (dxf28@psu.edu), Michael Wenger1, Rebecca Von Der Heide1, Jennifer Bittner1; 1Department of Psychology, The Pennsylvania State University
The composite face effect (CFE, Young, Hellawell, & Hay, 1987) has, in recent years, been suggested as one possible empirical signature of the holistic (configural, gestalt, etc.) characteristics of facial perception and cognition. In the CFE, people's performance with one part of a composite face appears to be dependent on the other. Theoretical analyses of the CFE using multidimensional signal detection theory (general recognition theory, GRT) have suggested that the behavioral regularities can potentially have both perceptual and decisional sources, with recent empirical studies documenting the influence of decisional factors in the CFE. However, GRT (like classical univariate signal detection theory) addresses behavioral regularities without assuming specific mechanisms. Consequently, the present study investigated one possible source for the decisional factors that can be involved in the CFE: differential attentional weighting. Our hypothesis was that observers would distribute visual attention to the two components of a composite in accord with the statistical regularities of the presentation frequencies, and that shifts in the distribution of attention would drive shifts in response criteria. We tested this hypothesis using a composite face task in which we varied the base rates (e.g., prior frequencies) for the two halves of the composite stimuli. The base rate manipulation imposed correlational structure on the dimensional space (Garner, 1974), and thus allowed for the emergence of various decisional criteria within individual observers. This enabled us to relate statistical regularities in the stimulus space to various GRT constructs, including those that tap the decisional components. Based on our results, we highlight the need for further theorizing and experimentation on the role of attentional mechanisms in configural face perception.

56.546 Internal and external features of the face are represented holistically in face-selective regions of visual cortex
Jodie Davies-Thompson1 (j.davies@psych.york.ac.uk), Alan Kingstone2, Andrew W. Young1, Timothy J. Andrews1; 1Department of Psychology and York Neuroimaging Centre, University of York, UK, 2Department of Psychology, University of British Columbia, Canada
The perception and recognition of familiar faces depends critically on an analysis of the internal features of the face (eyes, nose, mouth). We therefore contrasted how information about the internal and external (hair, chin, face-outline) features of familiar and unfamiliar faces is represented in face-selective regions. There was a significant response to both the internal and external features of the face when presented in isolation. However, the response to the internal features was greater than the response to the external features. There was significant adaptation to repeated images of either the internal or external features of the face in the FFA. However, the magnitude of this adaptation was greater for the internal features of familiar faces. Next, we asked whether the internal features of the face are represented independently from the external features. There was a release from adaptation in the FFA to composite images in which the internal features were varied but the external features were unchanged, or when the internal features were unchanged but the external features varied, demonstrating a holistic response. Finally, we asked whether the holistic response to faces could be influenced by the context in which the face was presented. We found that adaptation was still evident to composite images in which the face was unchanged but body features were varied.
Together, these findings show that although internal features are important in the neural representation of familiar faces, the face's internal and external features are represented holistically in face-selective regions of the human brain.
Acknowledgement: JD-T is supported by an ESRC studentship.

56.547 Beliefs alone alter holistic face processing... if response bias is not taken into account
Isabel Gauthier1 (isabel.gauthier@vanderbilt.edu), Jennifer Richler1, Olivia Cheung1; 1Psychology, College of Arts and Sciences, Vanderbilt University
Faces are processed holistically, and the composite paradigm is widely used to quantify holistic processing (HP), but there is debate regarding the appropriate design and measures in this task. Important theoretical conclusions hinge on which measure of HP is adopted because different approaches yield qualitatively different results. We argue that some operational definitions of HP are problematic because they are sensitive to top-down influences, even though the underlying concept is assumed to be cognitively impenetrable. Participants matched one half of two sequentially presented face composites while trying to ignore the irrelevant half. In the often-used partial design the irrelevant halves are always different, and HP is indexed by higher hit rates or d' for misaligned vs. aligned composites. Here, we used the complete design, which also includes trials where irrelevant halves are the same. We told one group of subjects that the target half would remain the same on 75% of trials, and another group that it would change on 75% of trials. The true proportion of same/different trials was 50%; the groups differed only in their beliefs about the target halves. We assessed the effect of beliefs on three measures of HP: the difference in hit rate for aligned vs. misaligned trials (the standard measure used in the partial design), d' for aligned vs. misaligned trials based only on partial design trials, and the interaction between congruency and alignment, which can only be obtained from the complete design. Critically, beliefs influenced response biases and altered both partial design measures of HP, while the complete design measure was unaffected. Thus, top-down biases, in addition to stimulus transformations (Cheung et al., 2008), can complicate partial design measures of HP. Many claims about face processing that depend only on partial design measures should be re-examined with more valid measures of HP.
Acknowledgement: This research was supported by grants to the Temporal Dynamics of Learning Center (NSF Science of Learning Center SBE
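
The contrast between hit-rate-based and d′-based indices matters because d′ separates sensitivity from response bias (standard signal detection definition, our notation; it is not restated in the abstract):

    d' = \Phi^{-1}(H) - \Phi^{-1}(F)

where H is the hit rate, F the false-alarm rate, and Φ⁻¹ the inverse standard normal CDF; a belief-induced criterion shift moves H and F together, changing hit rate alone while leaving d′ unchanged.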
56.548 Interactive Processing of Componential and Configural Information in Face Perception
Ruth Kimchi1 (rkimchi@research.haifa.ac.il), Rama Amishav2; 1Department of Psychology & Institute of Information Processing and Decision Making, University of Haifa, 2Institute of Information Processing and Decision Making, University of Haifa
The relative dominance of componential and configural information in face processing is a controversial issue. We investigated this issue by examining how componential information and configural information interact during face processing, using Garner's speeded classification paradigm (Garner, 1974). This paradigm examines the ability to process one dimension of a multidimensional visual stimulus while ignoring another dimension, using selective attention measures, and provides a powerful test of perceptual separability between stimulus dimensions. When classifying upright faces varying in components (eyes, nose, and mouth) and configural information (inter-eye and nose-mouth spacing), observers were unable to selectively attend to components while ignoring irrelevant configural variation, and vice versa (indexed by symmetric Garner interference). Performance with inverted faces showed selective attention to components but not to configural information (indexed by asymmetric Garner interference). When faces varied only in components, spatially distant or spatially close, selective attention to different components was possible (nearly zero Garner interference). These results suggest that facial components are processed independently, and that components dominate the processing of inverted faces. However, when upright faces vary in componential and configural information, as in natural faces, the processing of componential information and the processing of configural information are interdependent, with no necessary dominance of one type of information over the other.

56.549 What Did the Early United States Presidents Really Look Like?: Gilbert Stuart Portraits as a “Rosetta Stone” to the Pre-Photography Era
Eric Altschuler1,2 (eric.altschuler@umdnj.edu), Ahmed Meleis2; 1Departments of Physical Medicine and Rehabilitation and Microbiology & Molecular Genetics, New Jersey Medical School, UMDNJ, 2School of Medicine, New Jersey Medical School, UMDNJ
There are no photographs of the first five United States Presidents (George Washington through James Monroe). However, there does exist a photograph of the sixth President, John Quincy Adams (1767-1848, President 1825-1829). The fact that President John Quincy Adams straddled the eras of portraiture and photography thus offers the exciting possibility of seeing how faithful portraitists in the pre-photography era were, and, if found faithful, of knowing the true likenesses of the early Presidents and other individuals who were never photographed: a veritable “Rosetta Stone” for the pre-photography era. The great American painter Gilbert Stuart (1755-1828) painted the first six presidents. Stuart's 1818 portrait of Quincy Adams bears a striking resemblance to an 1848 photograph of Quincy Adams, even more so when we “aged” Stuart's portrait using a freely available program. Similarly, Stuart's portraits of US Senator Daniel Webster and physician John Collins Warren are remarkably faithful to photographs taken years later. However, conversely, we find a likeness of Quincy Adams painted by another well-known American painter, Charles Willson Peale (1741-1827), to be not as faithful to the photograph as Stuart's. Thus, Stuart's portraits can serve as a “Rosetta Stone” to know the images of individuals who lived before photography. In theory one can bootstrap further back in time. This perspective on portraits also gives a way of viewing artists from all eras: indeed, while Stuart is faithful to his subjects, and his portraits capture critical features of a subject's face, they are not nearly as detailed as portraits by Holbein (c. 1497-1543), for example, Holbein's 1527 portrait of Sir Thomas More. This portrait in turn pales in terms of detail in comparison with van Eyck's 1438 portrait of Cardinal Albergati.
56.550 Downloadable Science: Comparing Data from Internet and Lab-based Psychology Experiments
Laura Germine 1 (lgermine@fas.harvard.edu), Ken Nakayama 1 , Eric Loken 2 , Bradley Duchaine 3 , Christopher Chabris 4 , Garga Chatterjee 1 , Jeremy Wilmer 5 ; 1 Department of Psychology, Harvard University, 2 Department of Human Development and Family Studies, Pennsylvania State University, 3 Institute of Cognitive Neuroscience, University College London, 4 Department of Psychology, Union College, 5 Department of Psychology, Wellesley College
As a medium for conducting behavioral experiments, the internet offers the opportunity to collect large samples from a broad cross-section of the population on a relatively low budget. Despite the increasing use of the internet as a means of gathering data for psychology experiments, it is unclear how comparable internet-based data are to data gathered in the lab. Furthermore, it is not clear how recruitment method might affect data quality: for instance, tests conducted on the internet might produce results comparable to tests conducted in the lab, as long as the participants were recruited through traditional methods (i.e., privately). In order to assess the quality of data from internet-based experiments, we compared data from the Cambridge Face Memory Test (Duchaine & Nakayama, 2006) from participants tested in the lab and on the web, with different recruitment methods. Specifically, data were gathered from (a) 3004 unpaid participants who followed links to our website (testmybrain.org) to 'test their skills' (public/internet), (b) 594 participants recruited through the Australian twin registry, via traditional methods, but tested on the internet at testmybrain.org (private/internet), and (c) 209 participants tested in the lab (private/lab). Reliability, as measured by Cronbach's alpha, was similar across all three datasets (public/internet: 0.90; private/internet: 0.89; private/lab: 0.89). Performance, in terms of proportion correct, was also comparable in the three datasets (public/internet: 0.76, SD = 0.13; private/internet: 0.74, SD = 0.14; private/lab: 0.72, SD = 0.13). Our data indicate that, even for tests like the Cambridge Face Memory Test that include complex visual stimuli (faces), the internet has the potential to provide data comparable to data gathered in the lab and from participants recruited through more traditional methods.
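Cronbach's alpha, the reliability statistic compared across the three samples, is straightforward to compute from item-level scores. A minimal sketch, using simulated 0/1 item data rather than the study's data (the 72-item length mirrors the CFMT; the logistic response model is an assumption for illustration):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)

def simulate(n_subjects, n_items=72):
    # Subjects share one ability distribution; items vary in difficulty.
    ability = rng.normal(0, 1, size=(n_subjects, 1))
    difficulty = rng.normal(0, 1, size=(1, n_items))
    p_correct = 1 / (1 + np.exp(-(ability - difficulty)))
    return (rng.random((n_subjects, n_items)) < p_correct).astype(int)

web, lab = simulate(3004), simulate(209)
print(f"web alpha: {cronbach_alpha(web):.2f}, lab alpha: {cronbach_alpha(lab):.2f}")
print(f"web accuracy: {web.mean():.2f}, lab accuracy: {lab.mean():.2f}")
```

Comparable alpha values across samples of very different size, as reported in the abstract, indicate that internal consistency does not suffer when testing moves from the lab to the web.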


56.551 Heads, bodies and holistic processing in person recognition
Rachel Robbins 1,2 (dr.r.robbins@gmail.com), Max Coltheart 1 ; 1 MACCS, Faculty of Human Science, Macquarie University, 2 Psychology/MARCS, College of Arts, University of Western Sydney
Interest has recently increased in how we recognise human bodies, as well as faces. Here we present two experiments on body identity. Experiment 1 tested holistic processing of unfamiliar bodies using a same-different matching version of the composite task. Results for top-, bottom-, left- and right-halves of bodies were compared to those for top-halves of faces (where the effect is generally largest). Orientation was manipulated between subjects. Results replicated previous findings of a larger composite effect for upright than inverted faces. Results also showed holistic processing for bodies, which was most apparent for left-right splits. This may be because gestures and the like require more integration across the left and right halves of the body than across the top and bottom halves, and because the left-right splits always included the head. In Experiment 2 we tested the relative importance of the head versus the body to person recognition. We trained subjects to name 6 females from full-body pictures. Subjects then named new images, both upright and inverted, sometimes with only the head or body shown. We also included head-body composites (head of one person, body of another). Subjects had a strong tendency to correctly identify the head of these composites (80%) rather than the body (10%). Inversion made people less likely to correctly recognise heads (62%) but slightly more likely to correctly recognise bodies (14%). Upright recognition was best for whole-body (93%) and head-only pictures (91%), but still very good for body-only pictures (63%). Large inversion effects were shown for whole bodies (22%), head-only (25%) and body-only (15%) pictures; all were significant.


56.556 Perception and Visual Working Memory Emphasize Different Aspects of Face Processing
Allison Yamanashi Leib 1 (ayleib@gmail.com), Elise Piazza 1 , Shlomo Bentin 2 , Lynn Robertson 1 ; 1 University of California, Berkeley, 2 Hebrew University
This experiment investigates both the perceptual encoding of configural information and the maintenance of second-order configural information in working memory. We collected data from 32 participants. In the perceptual condition, participants viewed two sequentially presented faces with essentially 0 ISI. The faces were configurally manipulated in either the eyes region, the mouth region, or the contour region. The stimulus set contained 48 faces with 6 degrees of difficulty. Difficulty was increased along a continuum, with a 1-pixel change comprising the hardest condition and a 6-pixel change comprising the easiest condition. In the perceptual task, participants were asked to judge whether the two faces were the same or different. Importantly, participants' attention was directed to the specific face region (eyes, mouth, or contour) that was relevant in each condition. Our findings show that participants perform comparably across the various face regions, suggesting that perceptual encoding of configural information is equivalent across face regions. In the working memory condition, participants again viewed two sequentially presented faces, and their attention was directed in the same manner as before. In contrast to the perceptual experiment, the first face was viewed for varying exposure durations (500 ms, 1500 ms). Additionally, the ISI was varied, although the SOA remained the same throughout conditions. Results showed that performance in the eye region was significantly better than performance in the mouth or contour conditions. These findings suggest that configural eye information is given more weight in working memory than configural mouth or contour information, but that these differences are not accounted for by perceptual processing. This study provides new insight into normal processes of human face perception and memory.


Wednesday Morning Talks

Eye movements: Updating
Wednesday, May 12, 8:15 - 10:00 am
Talk Session, Royal Ballroom 1-3
Moderator: Tamara Watson

61.11, 8:15 am
Dynamics of eye position signals in macaque dorsal areas explain peri-saccadic mislocalization
Adam Morris 1 (adam@vision.rutgers.edu), Michael Kubischik 2 , Klaus-Peter Hoffman 2 , Bart Krekelberg 1 , Frank Bremmer 3 ; 1 Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ, USA, 2 Allgemeine Zoologie und Neurobiologie, Ruhr-Universität, Bochum, Germany, 3 Department of Physics, Philipps-Universität, Marburg, Germany
Human observers mislocalize visual stimuli that are flashed around the time of saccadic eye movements. Specifically, targets presented just before (after) the onset of a saccade are perceived at positions that are shifted in (against) the direction of the eye movement. This biphasic pattern has been attributed to a damped internal representation of eye position across saccades, but this claim has not been verified by electrophysiological data. In the current study, we recorded the extracellular activity of single neurons in two macaque monkeys (four hemispheres) as they performed a combination of fixations and saccadic eye movements in near-darkness. Recordings were performed in four dorsal cortical areas: the lateral and ventral intraparietal areas (LIP, VIP), the middle temporal area (MT), and the medial superior temporal area (MST). Individual neurons in each of these areas were found to have 'eye position fields': a systematic relationship between mean firing rate and the position of the eyes in the orbit. Our analysis used these eye position fields to translate observed instantaneous firing rates across the population into scalar estimates of ongoing eye position. During fixation, the decoder estimated eye position with a good degree of accuracy for all fixation locations. Across saccades, the decoder revealed an anticipatory change in the representation of eye position just prior to the onset of the eye movement, followed by a brief retraction toward the initial fixation position and an eventual stabilization at the final fixation position after around 250 ms. The mismatch between the actual eye position and that encoded by the recorded neurons across saccades predicts a pattern of perceptual mislocalization that is consistent with the human psychophysical data. These results suggest that eye position signals in dorsal cortical regions underlie the localization of peri-saccadic visual targets.
Acknowledgement: NIH R01EY17605, The Pew Charitable Trusts, & NHMRC 525487
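A minimal sketch of this style of population decoding, with simulated planar gain fields standing in for the measured eye position fields (the linear field shape, neuron count, and noise level are assumptions for illustration; in practice the fields would be fit from fixation data first):

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 80

# Each neuron's "eye position field" approximated as a planar gain field:
# rate = b0 + bx * x + by * y (+ noise).
B = np.column_stack([rng.uniform(5, 20, n_neurons),   # baseline rate
                     rng.normal(0, 1.0, n_neurons),   # horizontal gain
                     rng.normal(0, 1.0, n_neurons)])  # vertical gain

def population_rates(eye_xy):
    x, y = eye_xy
    return B @ np.array([1.0, x, y]) + rng.normal(0, 1.0, n_neurons)

def decode(rates):
    # Least-squares inversion of the fitted gain fields: find (x, y)
    # minimizing ||rates - (b0 + bx*x + by*y)||.
    solution, *_ = np.linalg.lstsq(B[:, 1:], rates - B[:, 0], rcond=None)
    return solution

true_position = np.array([8.0, -4.0])   # degrees
print("decoded:", np.round(decode(population_rates(true_position)), 2))  # ~ [8, -4]
```

Applied to instantaneous firing rates around a saccade, a decoder of this kind makes the time course of the internal eye position signal directly observable, which is how the anticipatory shift and retraction described above can be measured.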
61.12, 8:30 am
A study of peri-saccadic remapping in area MT
Wei Song Ong 1,4 (weisong.o@gmail.com), James W Bisley 1,2,3,4 ; 1 Department of Neurobiology, David Geffen School of Medicine at UCLA, Los Angeles, CA 90095, 2 Jules Stein Eye Institute, David Geffen School of Medicine at UCLA, Los Angeles, CA 90095, 3 Department of Psychology and the Brain Research Institute, UCLA, Los Angeles, CA 90095, 4 Interdepartmental PhD Program for Neuroscience, UCLA, Los Angeles, CA 90095
Area MT has traditionally been thought to operate in a retinotopic reference frame; however, there is recent fMRI evidence that human MT has some spatiotopic properties (d'Avossa et al., 2007). Also, we have presented psychophysical evidence that area MT plays a spatiotopic role in the memory-for-motion process (Ong et al., 2009). Here, we recorded from area MT in animals performing visually guided saccades during which a moving dot stimulus (100% coherence) or a circular stimulus was presented. The dot stimulus moved in the preferred direction of the recorded neuron in the pre-saccadic or post-saccadic receptive field for 500 ms, with its onset occurring 80 ms before the saccade target appeared. The luminance-matched circle was flashed for 50 ms in the pre- or post-saccadic receptive field at random time intervals, from 100 ms before the saccade target appeared to 350 ms after. Mean saccadic latency was 192 ± 35 ms. We recorded from 31 neurons and none of them showed pre-saccadic remapping with either stimulus. With the flashed circle, approximately 1/3 of the neurons showed late post-saccadic remapping, defined as when stimuli flashed shortly before the beginning of the saccade induced a neural response after the saccade in the post-saccadic receptive field. We found that the post-saccadic response latencies to the moving dots were similar to onset latencies for most neurons, consistent with saccadic suppression. A subpopulation had shorter latencies, but none were pre-saccadic. These neurons were more likely to show late post-saccadic remapping of the flashed circle. Although no neurons exhibited pre-saccadic remapping, the presence of the late post-saccadic response to a stimulus flashed entirely prior to the saccade indicates that a remapping mechanism may act on MT neurons and could explain results showing spatiotopic processing in area MT.
Acknowledgement: The National Eye Institute, the Kirchgessner Foundation, the Gerald Oppenheimer Family Foundation, the Klingenstein Fund, the McKnight Foundation, the Alfred P. Sloan Foundation.
61.13, 8:45 am
Persistence of Visual Mislocalizations across Eye Movements in a Case of Impaired Visual Location Perception: Implications for Visual Updating and Visual Awareness
Michael McCloskey 1 (michael.mccloskey@jhu.edu), Emma Gregory 1 ; 1 Department of Cognitive Science, Johns Hopkins University
AH, a young woman with a developmental deficit in perceiving the location of visual stimuli, makes left-right and up-down reflection errors in a variety of tasks. For example, she may reach, point, or saccade to the right for an object on her left, or verbally report that a stimulus is at the bottom of a display screen when in fact it is at the top. Extensive testing revealed that AH's impairment is a selective visual deficit, and that her errors arise not in early vision, but rather at a higher level of visual representation. Remarkably, AH's misperceptions of location often persist across eye movements. When she erroneously perceives an object to be on her left while looking straight ahead, and saccades leftward in an effort to fixate the object, she may then report that she is looking at the object, despite the fact that the eye movement shifted the target further into the visual periphery. We argue that these persisting visual mislocalization errors shed light on trans-saccadic processing of location information in the normal visual system. When the eyes move, a new high-level representation of an object's location could be constructed by updating the initial high-level representation to account for the eye movement (using corollary discharge information), and/or by computing a high-level representation de novo from post-saccadic low-level visual representations. From the results of several tasks probing AH's persisting visual mislocalizations, we argue that de novo computation of new high-level representations from new low-level representations is not automatic following an eye movement; as long as low-level representations imply that the visual scene has not changed, new high-level representations may be generated by updating alone. Finally, with respect to visual awareness, we argue that AH's location misperceptions imply that awareness is mediated by high- and not low-level visual representations.


61.14, 9:00 am
The spatial coordinate system for trans-saccadic information storage
I-Fan Lin 1 (i-fan.lin@parisdescartes.fr), Andrei Gorea 1 ; 1 CNRS, Université Paris Descartes
While memory storage of objects' identity and of their spatiotopic locations may sustain cross-saccadic stability of the world, retinotopic location storage may hamper it. Is it then true that saccades perturb retinotopic more than spatiotopic memory storage? We address this issue by assessing localization performance for the penultimate (N-1) saccade target in a series of 3 to 6 saccades. One white letter-pair (target) and eight black letter-pairs (distracters) were displayed on a virtual 3° radius circle around a fixation dot for 100 ms within a 20°x20° gray rectangular frame. Subjects were instructed to saccade to the target. Once the eye landed at the target position, now displaying a fixation dot, a spatially permuted target-distracters arrangement was displayed anew around the fixation dot and triggered the next saccade. At the end of a trial, a color change of the fixation dot prompted subjects to report the location of the target in either retinotopic or spatiotopic coordinates. The retinotopic location was referred to the fixation dot. The spatiotopic location was referred to the gray frame. Identical conditions were run with the eyes maintaining fixation throughout the trial but with the gray frame moving so as to mimic its retinal displacement when the eyes moved. Spatiotopic location was better stored (by ~0.33 d' units) and reported faster (by ~140 ms) in the saccade condition than in the maintained-fixation condition. In contrast, saccades degraded retinotopic location memory (by ~0.29 d' units) and delayed response times (by ~68 ms). The better and faster spatiotopic location storage and retrieval during eye movements is compatible with the notion that spatiotopic representation takes over from retinotopic representation during eye movements, thereby contributing to the stability of the visual world as its projection jumps on our retina from saccade to saccade.
Acknowledgement: CNRS

61.15, 9:15 am
Temporal encoding of visual space by means of fixational eye movements
David Richters 1 (drichters@gmail.com), Ehud Ahissar 2 , Michele Rucci 1,3,4 ; 1 Psychology Department, Boston University, 2 Department of Neurobiology, Weizmann Institute of Science, 3 Department of Biomedical Engineering, Boston University, 4 Program in Neuroscience, Boston University
The processing of fine detail in moving stimuli appears to rely on information encoded in the temporal domain. During natural fixation, all stimuli continually move on the retina because of perpetual eye movements. Retinal motion caused by eye movements allows for the possibility of encoding and decoding spatial information in the temporal domain (Ahissar, 2001). In this study, we examined whether temporal modulations caused by ocular drift contribute to spatial perception. Observers viewed a standard Vernier two-line stimulus through a narrow, digital, retinally stabilized aperture. The aperture was too narrow for both lines of the stimulus to be seen at once, and it moved synchronously with the observer's eye, allowing only a thin fixed vertical stripe of the retina to be stimulated. In each trial, the top line of the Vernier stimulus was randomly selected to be either on the left or on the right of the bottom line. Thus, as the observer's eye moved from left to right, the upper line would be seen first for one stimulus arrangement (top-left) but second for the other arrangement (top-right). The order of line exposures and the timing difference between exposures were determined solely by eye movements. Only fixational drift allowed the lines to be seen; saccades and microsaccades were identified in real time and the stimulus was not displayed during these movements. We show that observers can use the temporal modulations caused by ocular drift to make accurate spatial judgments. This research provides a direct link between fixational eye movements and visual perception and shows that temporally encoded spatial information resulting from eye movements is useful and accessible to the visual system.
Acknowledgement: NIH EY18363, NSF BCS-0719849, NSF CCF-0726901
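The logic of the stimulus reduces to one line of arithmetic: with only a thin vertical strip of retina stimulated, a horizontal drift of velocity v converts the Vernier offset into a temporal separation between the two line exposures. The values below are illustrative, not the experiment's measurements:

```python
# delta_t = delta_x / v: a horizontal drift sweeps the two Vernier lines
# past the narrow stabilized aperture at different times.
drift_speed = 0.5       # deg/s, a plausible fixational drift speed (assumed)
vernier_offset = 0.02   # deg (~1.2 arcmin horizontal offset, assumed)

delta_t = vernier_offset / drift_speed
print(f"temporal separation between line exposures: {delta_t * 1000:.0f} ms")

# For a rightward drift, which line appears first (top before bottom, or
# the reverse) is what distinguishes the top-left from the top-right
# arrangement, so spatial offset is fully encoded in temporal order.
```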
61.16, 9:30 am
Where are you looking? Pseudogaze in afterimages
Daw-An Wu 1,3 (daw-an@caltech.edu), Patrick Cavanagh 2,3 ; 1 Division of Humanities and Social Sciences, Caltech, 2 Laboratoire Psychologie de la Perception, Université Paris Descartes, 3 Department of Psychology, Harvard University
The point in the visual scene that lands on the center of the fovea is assumed to define where we are looking - our direction of gaze. To test this, we asked subjects to "shift their gaze" to different locations in an afterimage. Once subjects had fixated a dim red laser point in a dark room, a strong flash illuminated a matte stimulus. The fixation point was then extinguished, leaving the afterimage as the only visual input. When subjects were asked to fixate points in the far periphery of the afterimage, they reported that the image jumped away in the direction of the attempted gaze shift. For points in the near periphery, however, subjects reported "fixating" them without causing any perceived motion of the afterimage. The region within which gaze could be shifted was generally limited to 2-4 degrees from true center, depending on the subject. Eye tracking data revealed constant movements of the eye, of which the subjects were unaware. During "fixation" of the central point of the afterimage, these drifts were random. When subjects set their gaze on a point within 2-4 degrees of the center, an additional, systematic component of the drift was often produced, in the same direction as the intended offset in gaze. Finally, when they fixated a point to the left of the fovea and then attempted a saccade to a point directly below the fovea, some subjects' eyes moved diagonally in accordance with their subjective experience, while other subjects' eyes moved vertically, in accordance with the target's actual position relative to the fovea. These results suggest that the apparent direction of gaze can be flexibly assigned to an attended object near the fovea, allowing visual coordinates to remain centered on a steady location in the world despite the incessant small eye movements of fixation.
Acknowledgement: NEI EY02958
61.17, 9:45 am
An equivalent noise investigation of saccadic suppression
Tamara Watson 1 (tamarawatson@med.usyd.edu.au), Bart Krekelberg 2 ; 1 Brain and Mind Research Institute, The University of Sydney, 2 Center for Molecular and Behavioral Neuroscience, Rutgers University
It is well known that perisaccadic visual stimuli are less visible than those presented during fixation, and that many visual areas change their response properties perisaccadically. The link between these phenomena remains tentative, however. Our goal was to quantify the behavioral phenomenon to enable a more focused search for its neural mechanism. Several mechanisms may be responsible for reduced perisaccadic visibility: spatial uncertainty [1], internal multiplicative noise, and/or response inhibition [2] (or, equivalently, additive internal noise [3]). We tested these using equivalent noise analysis. Each mechanism predicts a unique pattern of detection thresholds when target stimuli are embedded in external noise [4]. Spatial uncertainty predicts no perisaccadic effect on sensitivity at low external noise, while sensitivity at high external noise should be reduced as the external noise swamps the signal. The multiplicative noise model predicts lower sensitivity at both high and low external noise. The response inhibition model predicts lower sensitivity at low external noise, with equal thresholds at high external noise. In our experiments, participants identified the location of a low spatial frequency grating above or below the fixation point. Stimuli were presented up to 50 ms prior to saccade onset. The targets were embedded in Gaussian noise; stimulus and noise contrast were manipulated independently. Detection thresholds were calculated at each external noise level, at fixation and perisaccadically. We found that response inhibition was sufficient to describe the perisaccadic detection thresholds relative to those found at fixation. [1] Greenhouse and Cohn. 1991. J. Opt. Soc. Am. A, 8:587-595. [2] Burr and Ross. 1982. Vis. Res. 23, 3567-3569. [3] Diamond, Ross and Morrone. 2000. J. Neurosci. 20, 3442-3448. [4] Liu and Dosher. 1998. Vis. Res., 38, 1183-1198.
Acknowledgement: Funded by the Human Frontiers Science Program (TW), the Pew Charitable Trusts, and NIH R01EY17605 (BK).
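The threshold signatures that separate the additive and multiplicative accounts fall out of a standard linear-amplifier equivalent-noise model, in which the contrast threshold grows with the combined external and internal noise variances. A sketch under that assumption (parameter values invented; the spatial-uncertainty model, which needs an explicit uncertainty mechanism, is omitted for brevity):

```python
import numpy as np

def threshold(sigma_ext, sigma_add, gain=1.0):
    # Linear-amplifier model: threshold ~ sqrt(external + additive internal
    # noise variance); a multiplicative deficit is modeled as reduced gain.
    return np.sqrt(sigma_ext**2 + sigma_add**2) / gain

sigma_ext = np.logspace(-2, 0.5, 6)   # external noise levels (arbitrary units)
models = {
    "fixation baseline":            threshold(sigma_ext, sigma_add=0.05),
    "additive noise (inhibition)":  threshold(sigma_ext, sigma_add=0.25),
    "multiplicative noise":         threshold(sigma_ext, sigma_add=0.05, gain=0.5),
}
for name, th in models.items():
    print(f"{name:30s}", np.round(th, 3))
```

Running this shows the diagnostic patterns described in the abstract: the additive (response inhibition) curve is elevated only at low external noise and converges with baseline once external noise dominates, whereas the multiplicative curve sits above baseline at every noise level.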


Perception and action: Navigation and mechanisms
Wednesday, May 12, 8:15 - 10:00 am
Talk Session, Royal Ballroom 4-5
Moderator: William Warren

61.21, 8:15 am
Route selection in complex environments emerges from the dynamics of steering and obstacle avoidance
Brett Fajen 1 (fajenb@rpi.edu), William Warren 2 ; 1 Cognitive Science, Rensselaer Polytechnic Institute, 2 Cognitive and Linguistic Sciences, Brown University
Fajen and Warren (2003) developed a dynamical systems model of steering and obstacle avoidance based on data from human subjects, in which locomotor paths emerge on-line. By linearly combining goal and obstacle components, the model can be used to predict route selection behavior in complex scenes containing multiple obstacles. In this study, we compare the predictions of the steering dynamics (SD) model with models that minimize path length (MPL) and minimize total lateral impulse (MLI), where I = ∫ F dt. The experiment was conducted in a 12 m x 12 m virtual environment viewed through a head-mounted display (FOV 63° H x 53° V). Subjects (N = 11) walked from a home location to a goal 8 m away while avoiding an array of 12 randomly positioned obstacles (2 m posts). There were eight different obstacle arrays, and each array was presented in both the forward and backward directions six times, yielding 16 configurations and a total of 96 trials. The MLI model was the worst predictor of human routes, for the mean total lateral impulse on all observed routes exceeded that of the MLI route by 67%. The MPL model was comparatively better, for the mean length of all observed paths exceeded the minimum path length by just 8%. The SD model generated paths that were nearly identical in length to the human paths, and predicted human routes as well as the MPL model. We conclude that people select routes that nearly minimize path length but not total impulse, and that the SD model captures an on-line control strategy from which human-like, nearly minimum-length paths emerge.
Acknowledgement: NIH R01 EY10923
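For readers unfamiliar with the SD model, one published form of the Fajen and Warren (2003) dynamics can be sketched compactly: heading angular acceleration combines damping, a goal attractor whose stiffness decays with goal distance, and obstacle repellers that decay with both angular and metric distance. The parameter values and one-obstacle scene below are illustrative stand-ins, not the fitted values or the experiment's obstacle arrays:

```python
import numpy as np

b, kg, c1, c2 = 3.25, 7.5, 0.40, 0.40   # damping, goal stiffness, distance decay
ko, c3, c4 = 198.0, 6.5, 0.8            # obstacle repulsion terms (assumed values)
speed, dt = 1.0, 0.01                    # walking speed (m/s), time step (s)

def step(pos, phi, dphi, goal, obstacles):
    bearing = lambda p: np.arctan2(p[1] - pos[1], p[0] - pos[0])
    dist = lambda p: np.hypot(*(p - pos))
    # Goal attractor + damping.
    ddphi = -b * dphi - kg * (phi - bearing(goal)) * (np.exp(-c1 * dist(goal)) + c2)
    # Obstacle repellers.
    for ob in obstacles:
        err = phi - bearing(ob)
        ddphi += ko * err * np.exp(-c3 * abs(err)) * np.exp(-c4 * dist(ob))
    dphi += ddphi * dt
    phi += dphi * dt
    pos = pos + speed * dt * np.array([np.cos(phi), np.sin(phi)])
    return pos, phi, dphi

pos, phi, dphi = np.array([0.0, 0.0]), 0.0, 0.0
goal, obstacles = np.array([8.0, 0.0]), [np.array([4.0, 0.3])]
for _ in range(1500):
    if np.hypot(*(goal - pos)) < 0.3:
        break
    pos, phi, dphi = step(pos, phi, dphi, goal, obstacles)
print("final position:", np.round(pos, 2))   # path bends around the obstacle
```

Because routes emerge from integrating these local attractor/repeller forces on-line, no explicit path planning is needed, which is the point of the comparison with the MPL and MLI optimization models.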
61.22, 8:30 am
Adaptation of visual straight ahead requires an unrestricted field of view
Tracey Herlihey 1 (HerliheyTA@Cardiff.ac.uk), Simon Rushton 1 , Cyril Charron 2 ; 1 School of Psychology, Cardiff University, 2 School of Engineering, Cardiff University
Traditional accounts of adaptation to a rotation of the optic array (due to prisms) identify both visual and proprioceptive sites (Redding & Wallace, 1985). Last year (Brandwood, Rushton & Charron, VSS 2009), we reported the results of a walking experiment: through manipulation of observers' walking behaviour, we demonstrated that the magnitude and site of adaptation depend on the availability of optic flow. Specifically, we found that optic flow plays an important role in the recalibration of perceived visual straight-ahead (Held and Freedman, 1963). This year we have taken a different approach to the same problem: in a repeated measures design we manipulated visual information. Participants wore glasses containing paired horizontally oriented wedge prisms and walked back and forth between targets for a short period of time. During locomotion vision was (i) unrestricted; (ii) restricted to 90°; or (iii) restricted to 90° with 400 ms snapshots (through the use of optical shutters). Perceived visual straight-ahead and perceived proprioceptive straight-ahead were measured before and after exposure to the prisms. Adaptation was defined as the difference between the before and after measures. In the natural, unrestricted condition we found that adaptation was primarily visual. However, the site of adaptation switched towards proprioception as vision became more restricted. Thus, in line with our previous study, we found that the site of adaptation varied with the availability of optic flow. Interestingly, the reduction of adaptation in visual straight-ahead with the restricted field of view in our study may explain the lack of shift in visual straight-ahead in Bruggeman, Zosh & Warren's (2007) study: they used an HMD with a restricted field of view. To conclude, taken together with our previous findings, the results of this study provide further support for the contention that optic flow drives a recalibration of visual straight ahead.

61.23, 8:45 am
Anticipating the actions of others: The goalkeeper problem
Gabriel Diaz 1 (diazg2@rpi.edu), Brett Fajen 1 ; 1 Cognitive Science Department, Rensselaer Polytechnic Institute
When humans observe the actions of others, they can often accurately anticipate the outcome of those actions. This is perhaps best exemplified on the playing field, where athletes must anticipate the outcome of an action based in part on the complex movement of an opponent's body. In this study, we tested the reliability and use of local and distributed sources of information available in the actor's motion. These issues were investigated within the context of blocking a penalty kick in soccer. Because of extreme time constraints, the keeper must anticipate the direction in which the ball is kicked before the ball is contacted, forcing him or her to rely on the kicker's movement. In Experiment 1, we used a motion capture system to record the joint locations of experienced soccer players taking penalty kicks. The reliability of both local (e.g., orientation of the non-kicking foot) and distributed (e.g., mode/motor synergy) sources of information was measured by computing the degree to which each source correlated with true kick direction. Experiment 2 investigated the relationship between the reliability and the use of information sources. The motion data were used to create animations, from a keeper's viewpoint, of a point-light kicker approaching and kicking a ball. On each trial, subjects watched an animation and judged kick direction (left or right). The sources of information upon which judgments were based were identified by computing the correlation between each information source and judged kick direction. By comparing the reliability and use of different sources of information, we can characterize the ability to exploit local and distributed information when anticipating a movement's outcome. In Experiment 3, we presented subjects with artificial stimuli in which only one source of information was reliable, providing a more direct test of people's ability to use specific sources of information.
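The reliability measure in Experiment 1 is, in essence, a correlation between each candidate source and true kick direction (and, in Experiment 2, the same computation against judged direction). A sketch with simulated cue values, purely for illustration; the cue names and effect sizes are assumptions, not the study's motion-capture data:

```python
import numpy as np

rng = np.random.default_rng(2)
n_kicks = 120
kick_dir = rng.choice([-1, 1], n_kicks)   # -1 = left, +1 = right

# Hypothetical sources measured from the kicker's motion: a local cue
# (non-kicking-foot orientation) and a distributed cue (a synergy score),
# each tied to true kick direction with different strength plus noise.
foot_angle = 0.8 * kick_dir + rng.normal(0, 1.0, n_kicks)
synergy_score = 0.4 * kick_dir + rng.normal(0, 1.0, n_kicks)

for name, cue in [("foot angle", foot_angle), ("synergy score", synergy_score)]:
    r = np.corrcoef(cue, kick_dir)[0, 1]
    print(f"reliability of {name}: r = {r:.2f}")
# Replacing kick_dir with judged direction gives the "use" measure.
```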
61.24, 9:00 am
Perceptual Body Illusion Affects Action
Sandra Truong 1,2 (truongs@mail.nih.gov), Regine Zopf 1 , Matthew Finkbeiner 1 , Jason Friedman 1 , Mark Williams 1 ; 1 Macquarie Centre for Cognitive Science, Institute of Human Cognition and Brain Science, Macquarie University, 2 Lab of Brain and Cognition, National Institute of Mental Health, National Institutes of Health
Synchronously stimulating an artificial hand and a participant's hand that is hidden from view induces an apparent proprioceptive shift towards the artificial hand (the Rubber Hand Illusion; RHI), such that participants subjectively report their hand location to be between the real and artificial hand. This effect is reduced or eliminated with asynchronous visual and somatosensory stimulation. Although previously thought of as a purely perceptual illusion, here we show that the RHI influences ballistic movements directly. A repeated measures design was used to compare participants' task performance during synchronous and asynchronous stimulation conditions. First, the RHI was induced, and then participants were asked to perform ballistic hand movements towards targets presented at randomized locations on a touch screen. A motion capture system was used to record hand movement trajectories for analysis. We found significantly larger reaching endpoint errors in the synchronous than in the asynchronous condition. Importantly, these errors were biased to the side of the target opposite the side of the artificial hand, consistent with participants moving their hand as if its position were computed to be intermediate between the real and rubber hand. It is believed that peripersonal space systems integrate multisensory information to form body-part-centered (e.g., hand-centered) maps of local space. The computation of hand position incorporates input from visual, tactile and proprioceptive modalities, and a shift in any or multiple of these sensory mappings, as induced by the RHI, results in a misperception of hand location. This study suggests that the realignment of mappings modulated by the RHI leads to direct effects on reaching biases for action and on visually based proprioceptive judgments. Importantly, these results show, beyond the limitations of the subjective reports of perceived hand position used in previous studies, that the RHI has a fundamental impact on motor action towards visual targets.
Acknowledgement: MAW is a Queen Elizabeth Fellow and this work was funded by the Australian Research Council (DP0984919)

61.25, 9:15 am
Active is good for auditory timing but passive is good for visual timing
Lucica Iordanescu 1 (lucicaiordanescu2010@u.northwestern.edu), Marcia Grabowecky 1,2 , Satoru Suzuki 1,2 ; 1 Department of Psychology, Northwestern University, 2 Interdepartmental Neuroscience Program, Northwestern University
People naturally dance to music, and it has been shown that auditory perception facilitates the generation of precisely timed body movements. Here we investigated the converse: does initiating an action enhance auditory perception of timing? Participants performed a temporal bisection task; they heard a sequence of three sounds (13 ms each) spread over 550 ms. The timing of the middle sound was randomly varied, and participants reported whether the middle sound was temporally closer to the first or last sound. The slope of the resultant psychometric function indicated the precision of temporal bisection. In the active condition, participants initiated each sound sequence (via a key press), whereas in the passive condition each stimulus sequence was initiated by the computer. White noise was played over headphones throughout the experiment to mask key-press sounds. Auditory temporal bisection was more precise in the active than in the passive condition. To determine whether action similarly facilitated visual perception of timing, we repeated the same experiment except that we replaced the brief sounds with brief flashes. Interestingly, visual temporal bisection was more precise in the passive than in the active condition. These opposite results for the auditory and visual modalities indicate that the benefit of action for auditory timing perception cannot be attributed to increased alertness or to reduced temporal uncertainty caused by voluntarily initiating each stimulus sequence. Thus, we have demonstrated a reciprocal relationship between action and auditory perception: as auditory stimuli facilitate precisely timed action, action enhances auditory perception of timing. In contrast, visual timing perception is enhanced when attention is fully focused on the visual modality and is disrupted by action. These results suggest that auditory timing operates synergistically with motor mechanisms, whereas visual timing operates most effectively when neural resources are fully engaged in visual perception.
Acknowledgement: NSF BCS0643191, NIH R01EY018197-02S1
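Bisection precision here is the slope of a fitted psychometric function: the steeper the function relating middle-sound timing to "closer to the last sound" responses, the more precise the bisection. A sketch with invented response proportions (the logistic form and all numbers are assumptions, not the study's data):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, pse, slope):
    # slope is the spread parameter: smaller = steeper = more precise.
    return 1 / (1 + np.exp(-(t - pse) / slope))

# Middle-sound timing (ms after the first sound) and hypothetical
# proportions of "closer to the last sound" responses per condition.
t = np.array([125, 175, 225, 275, 325, 375, 425])
p_active  = np.array([0.03, 0.08, 0.25, 0.55, 0.85, 0.95, 0.99])
p_passive = np.array([0.08, 0.16, 0.32, 0.52, 0.72, 0.86, 0.93])

for name, p in [("active", p_active), ("passive", p_passive)]:
    (pse, slope), _ = curve_fit(logistic, t, p, p0=[275, 40])
    print(f"{name:8s}: PSE = {pse:.0f} ms, spread = {slope:.0f} ms "
          f"(smaller spread = more precise bisection)")
```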


61.26, 9:30 am
Human Echolocation I
Lore Thaler 1 (lthaler2@uwo.ca), Stephen R. Arnott 2 , Melvyn A. Goodale 1 ; 1 Department of Psychology, The University of Western Ontario, 2 Rotman Research Institute, Baycrest Centre
It is common knowledge that animals such as bats and dolphins use echolocation to navigate the environment and/or to locate prey. It is less well known, however, that humans are capable of using echolocation as well. Here we present behavioral and fMRI data from two blind individuals (aged 27 and 45 years) who produce mouth-clicks and use click-based echolocation to go about their everyday activities, which include walking through crowded streets in unknown environments, mountain biking, and other spatially demanding activities. Behavioral testing under regular conditions (i.e., in which each person actively produced clicks) showed that both individuals could resolve the angular position of an object placed in front of them with high accuracy (~2° of auditory angle at a distance of 1.5 meters). This extremely high level of performance is remarkable, but not unexpected, given what they are capable of doing in everyday life. To validate the stimuli we planned to use in fMRI conditions, we took in-ear audio recordings from each individual during active echolocation and played those recordings back using MRI-compatible earphones. In these conditions, both individuals were still able to use echolocation to determine with considerable accuracy the angular position, shape (concave vs. flat), motion (stationary vs. moving), and identity (car vs. tree vs. streetlight) of objects. Importantly, during the recordings, none of the objects emitted any sound but simply offered a sound-reflecting surface. We conclude that echolocation, during both active production and passive listening, enables our two participants to perform tasks that are typically considered impossible without vision. To investigate the neural substrates of their echolocation abilities, we employed our passive listening paradigm in combination with fMRI (see the abstract 'Human Echolocation II').
Acknowledgement: This research was supported by a grant to MAG from the Canadian Institutes of Health Research.
61.27, 9:45 am
Human Echolocation II
Stephen R. Arnott 1 (sarnott@rotman-baycrest.on.ca), Lore Thaler 2 , Melvyn A. Goodale 2 ; 1 Rotman Research Institute, Baycrest Centre, 2 Department of Psychology, The University of Western Ontario
Here we report fMRI data that reveal the neural substrates underlying the echolocation abilities of two blind individuals (aged 27 and 45 years) (see also the abstract 'Human Echolocation I'). A passive listening paradigm was employed in all fMRI experiments. Using fMRI, we found increased BOLD signal in auditory and visual cortices in both persons in response to the presentation of sounds. Remarkably, however, a contrast analysis applied to the whole brain revealed that the BOLD signal in 'visual' cortex increased during the presentation of echolocation sounds as compared to spectrally matched control sounds, while the BOLD signal in auditory cortex remained unchanged. Furthermore, a region-of-interest analysis of visual cortex suggested that the processing of echoes reflected from objects placed to the left or right of the head was associated with increased activity in the contralateral occipital cortex. Finally, when our two participants were instructed to judge either the shape (concave vs. flat) or the location (right vs. left) of a sound-reflecting surface, a contrast analysis applied to the whole brain revealed a stronger BOLD signal in ventral occipital areas during the shape judgment task. Importantly, the sounds that had been used in the shape and location tasks were the same. In their entirety, the results suggest that the echolocation abilities of our two blind participants make use of the functional and topographic organization of visual cortex.
Acknowledgement: This research was supported by a grant to MAG from the Canadian Institutes of Health Research.

3D perception: Depth cues and spatial layout
Wednesday, May 12, 11:00 - 12:45 pm
Talk Session, Royal Ballroom 1-3
Moderator: Martin S. Banks

62.11, 11:00 am
Analyzing the Cues for Recognizing Ramps and Steps
Gordon E. Legge 1 (legge@umn.edu), Deyue Yu 1,2 , Christopher S. Kallie 1 , Tiana M. Bochsler 1 , Rachel Gage 1 ; 1 Psychology Department, University of Minnesota, Twin Cities, 2 University of California, Berkeley
The detection of ramps and steps is important for the safe mobility of people with low vision. We used ramps and steps as stimuli to examine the interacting effects of lighting, object geometry, contrast, viewing distance and spatial resolution. Gray wooden staging was used to construct a sidewalk with a transition to one of five targets: a step up or down, a ramp up or down, or a flat continuation. 48 normally sighted subjects viewed the sidewalk monocularly through blur goggles which reduced acuity to low-vision levels. In each trial, they indicated which of the five targets was present. Here, we report on a probabilistic cue-based model to explain the data in the resulting target/response confusion matrices. The set of cues for distinguishing among the five targets included the contrast at the transition from sidewalk to target, discontinuities in the edge contours of the sidewalk, and variations in the height of the targets in the picture plane. We formulated the problem of recognition in two parts: the independent probabilities of detecting the cues, and the optimal use of the detected cues in making a recognition decision. To estimate the cue probabilities, we derived and solved equations relating the cue probabilities to the conditional probabilities in the cells of the confusion matrices. We found that the high probability of detecting the contrast cue explained the superior visibility of the step up over the step down. Cues determined by discontinuities in the edge contours of the sidewalk were vulnerable to changes in viewing conditions. Cues associated with the height of the targets in the picture plane were more robust across viewing conditions. We conclude that a probabilistic cue-based model can be used to understand the effects of environmental variables on the visibility of ramps and steps.
Acknowledgement: NIH Grant EY017835
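The two-part formulation (independent cue detection, then an optimal decision over whatever cues were detected) can be illustrated with a forward Monte Carlo version of such a model. The cue-to-target table and detection probabilities below are invented and deliberately simplified; the study itself estimated cue probabilities by inverting the observed confusion matrices rather than simulating forward:

```python
import numpy as np

rng = np.random.default_rng(3)
targets = ["step up", "step down", "ramp up", "ramp down", "flat"]

# Hypothetical value each cue takes for each target (toy diagnosticity).
cue_values = {
    "transition contrast": {"step up": 1, "step down": 0, "ramp up": 0,
                            "ramp down": 0, "flat": 0},
    "edge discontinuity":  {"step up": 1, "step down": 1, "ramp up": 1,
                            "ramp down": 1, "flat": 0},
    "height in picture":   {"step up": 2, "step down": 0, "ramp up": 2,
                            "ramp down": 0, "flat": 1},
}
p_detect = {"transition contrast": 0.9, "edge discontinuity": 0.5,
            "height in picture": 0.7}   # invented detection probabilities

def trial(target):
    # Each cue is detected independently with its own probability.
    detected = {c: values[target] for c, values in cue_values.items()
                if rng.random() < p_detect[c]}
    # Optimal decision: choose uniformly among targets consistent with
    # every detected cue value (the true target is always a candidate).
    candidates = [t for t in targets
                  if all(cue_values[c][t] == v for c, v in detected.items())]
    return rng.choice(candidates)

confusion = np.zeros((5, 5))
for i, tgt in enumerate(targets):
    for _ in range(2000):
        confusion[i, targets.index(trial(tgt))] += 1
print(np.round(confusion / 2000, 2))   # predicted confusion matrix
```

With a highly detectable contrast cue that is diagnostic for the step up, the simulated matrix reproduces the qualitative result reported above: the step up is identified far more reliably than the step down.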
62.12, 11:15 am
Direct Physiological Evidence for an Economy of Action: Bioenergetics and the Perception of Spatial Layout
Jonathan Zadra 1 (zadra@virginia.edu), Simone Schnall 2 , Arthur L. Weltman 3 , Dennis Proffitt 1 ; 1 Department of Psychology, University of Virginia, 2 Department of Social and Developmental Psychology, University of Cambridge, 3 Director, Exercise Physiology Laboratory, GCRC, University of Virginia
A good deal of evidence supports the notion that physiological state and the anticipated energetic demands of acting on the environment affect perception (e.g., Proffitt, 2006). Until recently, however, the role of such bioenergetic factors in the perception of spatial layout could only be inferred. Here, we directly assessed the role of bioenergetics by manipulating blood glucose (BG) levels (glucose is the primary source of energy for immediate muscular action). In each experiment, participants ingested either a glucose- or artificially-sweetened (placebo) drink, and multiple blood samples were obtained to assess changes in BG. Two experiments assessing the perception of hill slant showed that people who ingested the glucose drink perceived hills to be less steep. An experiment in which participants gave distance estimates before and again after ingesting a drink revealed that participants given glucose subsequently perceived distances to be shorter, while those given the placebo did not. Furthermore, a battery of self-report measures assessed individual differences on a host of bioenergetically relevant properties. Regardless of the experimental manipulation, individuals with a reduced energy state perceived hills to be steeper and distances to be greater. A final study tested highly trained cyclists on two separate days, before and after 45 minutes of intense pedaling on a stationary bike. They ingested glucose drinks at regular intervals on one day and placebo drinks on the other. After exercising, participants perceived distances to be shorter when given glucose and greater when given placebo drinks. Multiple direct physiological measures obtained during exercise indicated that, across experimental conditions, greater energy expenditure and lower BG levels predicted greater distance estimates, and multiple indicators of physical fitness (heart rate, oxygen consumption, blood lactate) independently predicted shorter distance estimates for more fit individuals. These findings are consistent with the view that spatial perceptions are influenced by bioenergetic factors.


62.13, 11:30 am
Testing the generalizability of perceptual-motor calibration on spatial judgments
Benjamin R. Kunz 1 (benjamin.kunz@psych.utah.edu), Sarah H. Creem-Regehr 1 , William B. Thompson 2 ; 1 Department of Psychology, University of Utah, 2 School of Computing, University of Utah
The relationship between biomechanical action and the perception of self-motion during walking is typically consistent and well-learned, but also adaptable. This perceptual-motor pairing can be recalibrated by creating a mismatch between the visual perception of self-motion and walking speed. Recalibration has been shown to influence subsequent distance judgments such as blindwalking and imagined walking to previously viewed targets (Rieser et al., 1995; Mohler et al., 2006; 2007; Kunz et al., 2009). Whether perceptual-motor recalibration generally influences the scaling of space or the process of spatial updating during movement is an open question. Moreover, it is unknown whether the perceptual-motor calibration resulting from walking influences other types of locomotion that involve spatial updating. We conducted three experiments to determine how broadly perceptual-motor recalibration influences distance perception. In each experiment, participants completed a pretest in a real-world hallway, in which they either blindwalked to previously viewed targets (Experiment 1), matched the perceived size of a sphere with their hands (Experiment 2), or wheeled to previously viewed targets in a wheelchair while blindfolded (Experiment 3). After this pretest, participants donned a head-mounted display and walked through a virtual hallway that appeared to move past them at either twice or half their walking speed. Following this recalibration phase, participants returned to the adjacent hallway and repeated the pretest task. While Experiment 1 replicated previous findings that perceptual-motor recalibration influences posttest blindwalking performance, Experiment 2 showed that adaptation to a new perceptual-motor relationship during walking does not influence size judgments, an indirect measure of perceived distance. Experiment 3 suggests that the effect of perceptual-motor recalibration of walking on blind wheelchair locomotion is relatively weaker and more variable than the effect on blindwalking. These results have implications for understanding the specificity of perceptual-motor calibration and how this calibration influences spatial judgments made within a virtual environment.
Acknowledgement: This work was supported by NSF grant 0745131
62.14, 11:45 am
Blur and Disparity Provide Complementary Distance Information for Human Vision
Robert T. Held 1 (rheld@berkeley.edu), Emily A. Cooper 2 , Martin S. Banks 1,2,3 ; 1 Joint Graduate Group in Bioengineering, University of California, San Francisco and University of California, Berkeley, 2 Helen Wills Neuroscience Institute, University of California, Berkeley, 3 Vision Science Program, University of California, Berkeley
Disparity is generally considered the most precise cue to depth, while blur is considered a coarse, qualitative cue. Depth from disparity and depth from blur have similar underlying geometries: one is based on triangulation between images collected by different eyes, and the other on triangulation between images collected through different parts of the pupil. Thus, from a geometric standpoint, they provide complementary distance information. Physiologically, the two cues have very different sensitivities. Disparity thresholds, expressed as just-noticeable differences in depth, are low near fixation but increase rapidly away from fixation. In contrast, blur thresholds are relatively large and do not vary significantly with position relative to fixation. Thus, one might expect disparity to determine depth discrimination near fixation and blur to determine discrimination away from fixation. We tested this expectation in a psychophysical experiment. Observers were presented a reference and a test stimulus on each trial. The two stimuli were either both in front of fixation or both behind fixation. After each trial, observers indicated which stimulus appeared more distant. We used a novel volumetric display (Love et al., 2009) to present stimuli that contained 1) only disparity information (a Gaussian dot viewed binocularly), 2) only blur information (a disk with 1/f noise viewed monocularly), or 3) both disparity and blur information (a 1/f disk viewed binocularly). As expected, thresholds were lower in the disparity-only condition than in the blur-only condition, and the two-cue thresholds were similar to the disparity-only thresholds when the reference and test were near fixation. The situation reversed, however, behind fixation, where blur thresholds were lower than disparity thresholds and two-cue thresholds were similar to blur-only thresholds. Thus, disparity and blur are complementary sources of information, with disparity providing the best depth information near fixation and blur providing the best information away from fixation.
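The shared triangulation geometry can be written down directly: under standard small-angle approximations, both cues scale with the difference in inverse distance between the target and the fixation/focus distance, with the interocular separation and the pupil aperture as the respective baselines. The numeric values below are typical textbook values, not the authors':

```python
import numpy as np

I = 0.062    # interocular separation, m (typical value, assumed)
A = 0.0046   # pupil aperture diameter, m (typical value, assumed)

def rel_depth(z0, z1):
    """Difference in inverse distance (diopters) between target and fixation."""
    return abs(1.0 / z0 - 1.0 / z1)

z0 = 0.5   # fixation/focus distance, m
for z1 in (0.45, 0.55, 1.0):
    disparity = I * rel_depth(z0, z1)   # radians, small-angle approximation
    blur = A * rel_depth(z0, z1)        # blur-circle angular diameter, radians
    print(f"z1 = {z1:4.2f} m: disparity {np.degrees(disparity)*60:6.1f} arcmin, "
          f"blur {np.degrees(blur)*60:5.1f} arcmin")
```

Because both quantities are proportional to the same depth term, their ratio is fixed by the baselines (A/I, roughly 1/13 here): disparity is geometrically the larger signal, and it is the very different physiological sensitivity to the two signals away from fixation that makes blur the more useful cue there.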
62.15, 12:00 pm
Effects of Shape and Surface Material on Perceived Object Rotation Axis
Gizem Kucukoglu 1 (gizemkucukoglu@gmail.com), Roland Fleming 2 , Katja Doerschner 3 ; 1 Department of Cognitive Sciences, Middle East Technical University, Ankara, Turkey, 2 Max Planck Institute for Biological Cybernetics, Tuebingen, Germany, 3 Department of Psychology and National Research Center for Magnetic Resonance (UMRAM), Bilkent University, Ankara, Turkey
Using rotating matte and shiny objects, Hartung and Kersten (2002) showed how image motion can affect material appearance. What their demonstrations also revealed, though it was not explicitly noted, was that the surface material of an object also affects the perceived axis of rotation. For example, a specular teapot appears to rock back and forth while its matte counterpart is perceived as rotating around a vertical axis, though both objects undergo the same rotation. Why is this so? We argue that the perceived axis of rotation of a moving object involves the integration of multiple sources of flow information. Flow from the object contour alone (the silhouette) can at best provide ambiguous information about an object's rotation axis and at worst give rise to non-rigid percepts. Supplementing contour flow with optic flow arising from the object's material should provide sufficient information to disambiguate the perceived rotation axis; however, the flow patterns arising from moving matte textured objects are very different from those arising from specular ones. Here we argue that it is these differences in flow patterns that lead to the differences in perceived rotation axis in the phenomenon described above. In this work we investigate systematically how 3D shape, contour and surface material contribute to the estimation of the rotation axis and direction of novel, irregular (Experiment I) and rotationally symmetric (Experiment II) objects. We analyze observers' patterns of errors in an orientation estimation task under four different shading conditions: Lambertian, specular, textured and silhouette (examples: http://bilkent.edu.tr/~katja/orientation.html). Rotation axes were randomly sampled from the unit hemisphere. Results show, as expected, the largest errors for the silhouette condition in both experiments. However, the patterns of errors for the remaining shading conditions differ notably across experiments, yielding larger differences between shaders for the rotationally symmetric objects. We will describe how flow patterns predict these differences.
Acknowledgement: KD and GK were supported by EC FP7 Marie Curie IRG-239494. RF was supported by DFG FL 624/1-1


62.16, 12:15 pm
Veridical Perception of Non-rigid 3-D Shapes from Motion Cues
Anshul Jain 1 (anshuljjain@gmail.com), Qasim Zaidi 1 ; 1 Graduate Program in Vision Sciences, SUNY College of Optometry
Many objects in the world are non-rigid when they move. To identify such objects, a visual system has to separate shape changes from movements. Standard structure-from-motion schemes use rigidity assumptions, so they are not applicable to these shapes. Computational solutions proposed for non-rigid shapes require additional constraints on form and motion. Despite an enormous literature on human perception of structure-from-motion, the ability of observers to correctly infer non-rigid 3-D shapes from motion cues has not been examined. We examined whether the human visual system can make metric judgments about simple 3-D shapes using only motion cues, and whether there is a difference in performance between rigid and non-rigid shapes. Stimuli consisted of white dots randomly placed on an opaque black horizontal cylinder on a black background. The cylinder underwent simultaneous rotation about the vertical and depth axes (it did not spin on its own axis). The elliptical cross-section of the cylinder was varied from trial to trial, and observers reported whether the cross-section was deeper or shallower than a perfect circle. The cylinder was either rigid or flexed non-rigidly in depth or in the fronto-parallel plane. The rigid central portion of the cylinder was occluded. Observers' judgments of cross-section circularity were generally slightly shallower than veridical. The non-rigid cylinders were judged as deeper than the rigid cylinders; however, the psychometric functions had similar slopes. Rotation in depth (about the vertical axis) was critical, as 3D shape was not perceived with rotation only in the fronto-parallel plane. We compared human performance with existing computational models. Akhter et al.'s trajectory-basis extension (2008) of Tomasi and Kanade's factorization method (1992) yielded cylindrical shapes similar to human judgments. Koenderink's def-based motion-flow analysis (1986) yielded slants and tilts that were consistent with cylindrical shapes. The human visual system thus does not require rigidity assumptions to extract veridical 3-D shapes from motion.
Acknowledgement: NIH Grants: EY13312 & EY07556
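For reference, the rigid factorization that the trajectory-basis method extends is compact enough to sketch: under orthographic projection, the centered measurement matrix of tracked points has rank 3 and factors into motion and shape via the SVD (Tomasi & Kanade, 1992). The sketch below omits the metric-upgrade step and Akhter et al.'s non-rigid extension; the simulated rotating cylinder is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
n_pts, n_frames = 40, 30

# Dots on a rigid elliptical cylinder, rotated about the vertical (y) axis.
theta0 = rng.uniform(0, 2 * np.pi, n_pts)
pts = np.column_stack([np.cos(theta0), rng.uniform(-1, 1, n_pts), 0.6 * np.sin(theta0)])

rows = []   # build the 2F x P measurement matrix of tracked image coordinates
for f in range(n_frames):
    a = 0.02 * f
    R2 = np.array([[np.cos(a), 0, np.sin(a)],   # first two rows of a rotation:
                   [0,         1, 0        ]])  # orthographic camera
    uv = R2 @ pts.T
    rows.extend([uv[0], uv[1]])
W = np.array(rows)

# Center each row (subtract the per-frame centroid), then factor by SVD.
W0 = W - W.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(W0, full_matrices=False)
M = U[:, :3] * np.sqrt(s[:3])          # 2F x 3 motion, up to an affine ambiguity
S = np.sqrt(s[:3])[:, None] * Vt[:3]   # 3 x P shape, up to an affine ambiguity
print("fraction of energy in rank 3:", s[:3].sum() / s.sum())   # ~1.0 if rigid
```

For a flexing cylinder the centered matrix is no longer rank 3, which is why the non-rigid comparison requires the trajectory-basis extension rather than this rigid factorization.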
62.17, 12:30 pm
Interactions between disparity, parallax and perspective: Linking 'Reverspectives', hollow masks and the apparent motion seen in random dot stereograms
Brian Rogers 1 (bjr@psy.ox.ac.uk); 1 Department of Experimental Psychology, University of Oxford, UK
Background: Three different perceptual scenarios create the appearance of a rotating 3-D structure during observer motion: Patrick Hughes' 'Reverspective' artworks, hollow masks, and the disparate region of a random-dot stereogram. Papathomas (Spatial Vision 21, 2007) has offered a 'higher-level' explanation of the three effects based on the 'expected' optic flow, while Rogers and Gyani (Perception, 2009, in press) have put forward a low-level explanation based on the properties of the stimulation. One problem in understanding these different effects, and their relationships, has been the difficulty of manipulating the variables involved. For example, the direction and amount of parallax motion is a fixed consequence of the particular 3-D structure used. Purpose: The present experiment was designed to independently manipulate (i) the direction and amount of motion parallax, (ii) the binocular disparities, and (iii) the richness of the perspective information. In doing this, we created a continuum between 'reverspectives' (parallax appropriate for a convex 3-D structure), random-dot stereograms (no parallax), and the hollow mask (parallax appropriate for a concave 3-D structure). Methods: A continuous sequence of images was generated depicting a particular 3-D structure seen from a series of different vantage points. The presentation of the sequence was linked to the observer's side-to-side head movements (observer-produced parallax). Results: When either perspective or disparities specified the 3-D structure, the structure appeared to rotate in a direction that was consistent with that information. Perspective information typically dominated over binocular disparities when the two were presented in conflict. Apparent rotation was consistent with a convex interpretation of ambiguous shading information, although the effect reversed when disparities were introduced. Conclusions: There is nothing special about the particular scenarios that have been used previously. Rather, they represent particular points on a continuum of possible combinations of 3-D information. Moreover, there is no need to invoke 'higher-level' explanations.

Face perception: Social cognition
Wednesday, May 12, 11:00 - 12:30 pm
Talk Session, Royal Ballroom 4-5
Moderator: Roberto Caldara

62.21, 11:00 am
Turning neutral to negative: subcortically processed angry faces influence valence decisions
Jorge Almeida 1,2 (jalmeida@wjh.harvard.edu), Petra Pajtas 1 , Bradford Mahon 3 , Ken Nakayama 2 , Alfonso Caramazza 1,4 ; 1 Cognitive Neuropsychology Laboratory, Harvard University, 2 Harvard University Vision Sciences Laboratory, Harvard University, 3 Department of Brain and Cognitive Sciences, University of Rochester, 4 Center for Mind/Brain Sciences (CIMeC), University of Trento, Italy
Fast, efficient detection of potential danger is critical to the survival of an organism. In rodents, this ability is supported by subcortical structures that bypass slow but highly detailed cortical visual processing areas. Similarly, anatomical tracing and functional neuroimaging studies in human and non-human primates suggest that limbic structures responsible for emotional processing, such as the amygdala, receive input from the retina through subcortical structures, as well as through cortical visual areas. Whether outputs from this subcortical pathway can support the perception of emotionally laden stimuli and influence cognitive-level decisions, independently of cortical input, is not known. Here we show that information delivered by the subcortical pathway to the amygdala modulates emotional processing. In Experiments 1 and 2, emotional faces were rendered invisible using an interocular suppression technique (Continuous Flash Suppression; CFS) that prevents visual information from reaching the amygdala via the geniculate-cortical pathway, favoring instead the use of subcortical structures. In these experiments, likeability judgments of novel neutral stimuli (Chinese characters) were modulated by invisible pictures of angry but not happy faces. In Experiment 3, the same emotional faces were rendered invisible through backward masking (BM). BM is a visual masking technique that allows visual information to reach inferior and ventro-temporal regions, which can then serve as afferents to the amygdala alongside subcortical afferents. In this experiment, both happy and angry faces modulated the likeability judgments. This valence-specific effect fits well with the extant literature on the role of the amygdala in threat detection. It also suggests that subcortical processing may be specifically tuned to threat detection. The coarse information processed by the subcortical pathway from the retina to the amygdala may prevent potentially threatening events from going unnoticed, by enhancing arousal levels and directing attentional resources to areas of interest for further detailed geniculo-striate cortical processing.
Acknowledgement: The research reported here was supported by the Fondazione Cassa di Risparmio di Trento e Rovereto to AC.
62.22, 11:15 am
Laughter produces transient and sustained effects on the perception of facial expressions
Aleksandra Sherman1 (aleksandrasherman2014@u.northwestern.edu), Timothy Sweeny1, Marcia Grabowecky1,2, Satoru Suzuki1,2; 1Department of Psychology, Northwestern University, 2Interdepartmental Neuroscience Program, Northwestern University
Laughter is a powerful auditory stimulus conveying positive emotion. Here we demonstrate that laughter modulates visual perception of facial expressions. We simultaneously presented a sound of a laughing child and a schematic face with a happy (upward-curved mouth) or sad (downward-curved mouth) expression. The emotional face was presented either alone or among a crowd of neutral faces, with or without laughter. Participants indicated the valence and magnitude of the perceived expression by selecting a curved segment that most closely resembled the curvature of the mouth of the emotional face. In this way, we were able to measure how laughter influenced both the strength (mean perceived curvature) and tuning (standard deviation of perceived curvature) of the perception of happy and sad facial expressions. We found that when a single emotional face was presented, laughter enhanced the strength of a congruent happy expression. In contrast, when an emotional face was presented in a crowd of neutral faces, laughter enhanced the perceived signal strength of an incongruent sad face. These effects were transient in that they occurred on a trial-to-trial basis. Laughter also produced a sustained effect of selectively enhancing the reliability of perceiving happy faces on no-sound trials following laughter trials compared to a control block with no-sound trials only. These effects arise from interactive processing of auditory laughter and visual facial expression rather than from abstract semantic interactions, because presenting the spoken word "laugh" instead of laughing sounds produced no effects. In summary, simultaneous laughter makes a single happy expression appear happier, but makes an incongruent sad expression stand out in a crowd; these opposite effects based on single versus multiple faces preclude response bias. Laughter also produces a sustained effect of fine-tuning the perception of happy faces. These results demonstrate multifaceted auditory-visual interactions in the processing of facial expressions.
Acknowledgement: NSF BCS 0643191, NIH R018197-02S1


62.23, 11:30 am
Reverse correlation in temporal FACS space reveals diagnostic information during dynamic emotional expression classification
Oliver Garrod1 (oliver@psy.gla.ac.uk), Hui Yu1, Martin Breidt2, Cristobal Curio2, Philippe Schyns1; 1Department of Psychology, Centre for Cognitive Neuroimaging, University of Glasgow, 2Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics
Reverse correlation experiments have previously revealed the locations of facial features crucial for recognition of different emotional expressions, and related these features to brain electrophysiological activity [SchynsEtal07]. However, in social perception we expect the generation and encoding of communicative signals to share a common framework in the brain [SeyfarthCheney03], and neither 'Bubbles' [GosselinSchyns03] nor white-noise-based manipulation effectively targets the input features underlying facial expression generation – the combined activation of sets of facial muscles over time. [CurioEtal06] propose a motion-retargeting method that controls the appearance of facial expression stimuli via a linear 3D Morphable Model [BlanzVetter99] composed of recorded Action Units (AUs). Each AU represents the surface deformation of the face, given the full activation of a particular muscle or muscle group taken from the FACS [EkmanFriesen79] system. The set of weighted linear combinations of AUs is hypothesised as a generative model for the set of typical facial movements for this actor. Here we report the outcome of a facial emotion reverse correlation experiment with one such generative AU model over a space of temporally parameterized AU weights. On each trial, between 1 and 5 AUs are randomly selected. Random timecourses for the selected AUs are generated according to 6 temporal parameters (see supplementary figure). The observer rates the stimulus for each of the 6 'universal emotions' on a continuous confidence scale from 0 to 1 and, from these ratings, optimal AU timecourses (timecourses whose temporal parameters maximize the expected rating for a given expression) are derived per expression and AU. These are then fed as weights into the AU model to reveal the feature dynamics associated with the expression. This method extends Bubbles and reverse correlation techniques to a relevant input space – one that makes explicit hypotheses about the temporal structure of diagnostic information.
Acknowledgement: The Economic and Social Research Council and Medical Research Council (ESRC/RES-060-25-0010)
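The estimation step described above can be made concrete with a short sketch. The rating-weighted average below is one simple estimator of rating-maximizing timecourse parameters (the abstract does not specify the authors' exact estimator), and all trial counts and array layouts are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
N_AUS, N_TRIALS, N_PARAMS = 12, 2000, 6   # hypothetical AU/trial counts

# Simulated experiment: which AUs were shown, with what temporal parameters.
active = np.zeros((N_TRIALS, N_AUS), bool)
params = np.full((N_TRIALS, N_AUS, N_PARAMS), np.nan)
for t in range(N_TRIALS):
    chosen = rng.choice(N_AUS, size=rng.integers(1, 6), replace=False)
    active[t, chosen] = True
    params[t, chosen] = rng.random((len(chosen), N_PARAMS))

ratings = rng.random(N_TRIALS)   # observer's 0-1 rating for one emotion

# Reverse correlation: rating-weighted mean of each AU's temporal
# parameters over the trials on which that AU was present.
optimal = np.full((N_AUS, N_PARAMS), np.nan)
for au in range(N_AUS):
    w = ratings[active[:, au]]
    optimal[au] = (w[:, None] * params[active[:, au], au]).sum(0) / w.sum()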
62.24, 11:45 am
Attenuation of the dorsal-action pathway suppresses fear prioritization: An evolutionary link between emotion and action
Greg West1 (greg.west@utoronto.ca), Adam Anderson1, Jay Pratt1; 1Department of Psychology, University of Toronto
It is widely thought that emotional facial expressions receive privileged neural status compared to their non-affective counterparts. This prioritization, however, comes at a cost, as the neural capacity of the human brain is finite; the prioritization of any one object comes at the expense of other concurrent objects in the visual array competing for awareness (Desimone & Duncan, 1995). Despite this reality, little work has examined the functional benefit derived from the perceptual prioritization of affective information. Why do we preferentially attend to emotional faces? According to evolutionary accounts, emotions originated as adaptations towards action, helping to prepare the organism for locomotion (Darwin, 1872; Frijda, 1986). To directly examine this relationship between emotion and action, we reasoned that the prioritization of affective events may occur via two parallel pathways originating from the retina: a parvocellular (P) pathway projecting to ventral stream structures responsible for object recognition, or a faster and phylogenetically older magnocellular (M) pathway projecting to dorsal stream structures responsible for localization and action. Here we tested whether the fast propagation along the dorsal-action pathway drives an accelerated conduction of fear-based content. We took advantage of the fact that retinal exposure to red diffuse light suppresses M cell neural activity. Using a visual prior-entry procedure, accelerated stimulus perception was assessed while either suppressing the M pathway with red diffuse light, or leaving it unaffected with green diffuse light. We show that the encoding of a fearful face is accelerated, but not when M-channel activity is suppressed. Additional control experiments confirmed that this affective prioritization is driven by coarse low-spatial-frequency information, and that red diffuse light uniquely affects dorsal competition but not top-down ventral competition. Together, our results reveal a dissociation that implicates a privileged neural link between emotion and action that begins at the retina.
Acknowledgement: This work was supported by a Natural Science and Engineering Research Council of Canada grant to J.P. and a Canada graduate scholarship from the Natural Sciences and Engineering Research Council of Canada to G.L.W.

62.25, 12:00 pm
The Speed of Race
Roberto Caldara1 (r.caldara@psy.gla.ac.uk), Luca Vizioli1; 1Department of Psychology and Centre for Cognitive Neuroimaging (CCNi), University of Glasgow, United Kingdom
Race is a universal, socially constructed concept used to categorize humans originating from different geographical locations by salient physiognomic variations (i.e., skin tone, eye shape, etc.). Race is extracted quickly and effectively from faces and, interestingly, such visual categorization impacts upon face processing performance. Humans are noticeably better at recognizing faces from the same- compared to other-racial groups: the so-called other-race effect. This well-established phenomenon is paired with an intriguing paradox: a faster categorization by race of other-race faces. Yet, the visual information and the cortical correlates driving this speed categorization advantage for race remain unknown. To this end, we combined a parametric psychophysical approach with electrophysiological signals recorded from Western and Eastern observers performing face categorization by race. We removed external features and normalized the amplitude spectra, luminance and contrast of all the faces to control for potential confounds that could arise from trivial physiognomic differences between faces of different races. Importantly, we also manipulated the quantity of information available for race categorization by using a linear phase interpolation technique with 11 phase noise levels, ranging from 20% to 70% in 5% increments (see supplementary figure). Consistent with current knowledge, race did not modulate the face-sensitive N170 component with 60% phase signals or above. Strikingly, however, other-race faces containing weak phase signals were categorized more accurately and induced larger amplitude differences on the N170 (arising at 40%, peaking at 50%) in both groups of observers. In contrast, same-race faces showed gradual accuracy and gradual N170 sensitivity to the quantity of phase signal, suggesting a more effective coding of information. Our findings show early categorical perception of race in the visual cortex, allowing a speed advantage to the detriment of fine-grained information coding. The very early detection of race could relate to biologically relevant mechanisms that shape human social interactions.
Acknowledgement: The Economic and Social Research Council and Medical Research Council (ESRC/RES-060-25-0010)
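Phase-interpolation stimuli of the kind described above are typically built in the Fourier domain: the amplitude spectrum is held fixed while the phase spectrum is mixed with random phase. A simplified sketch follows (published implementations additionally interpolate phase angles on the circle and enforce the conjugate symmetry needed for a strictly real image; taking the real part below is a common shortcut):

import numpy as np

def phase_noise(image, signal_frac, rng=np.random.default_rng(0)):
    # Keep the amplitude spectrum fixed; mix original phase with random
    # phase. signal_frac = 1.0 returns the image, 0.0 is pure phase noise.
    F = np.fft.fft2(image)
    amp, phase = np.abs(F), np.angle(F)
    noise = rng.uniform(-np.pi, np.pi, size=phase.shape)
    mixed = signal_frac * phase + (1 - signal_frac) * noise
    # Taking the real part stands in for enforcing conjugate symmetry.
    return np.fft.ifft2(amp * np.exp(1j * mixed)).real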
62.26, 12:15 pm
Internal Representations of Facial Expressions Reveal Cultural Diversity
Rachael E. Jack1,2 (rachael@psy.gla.ac.uk), Roberto Caldara1,2, Philippe G. Schyns1,2; 1Department of Psychology, University of Glasgow, United Kingdom, G12 8QQ, 2Centre for Cognitive Neuroimaging (CCNi), University of Glasgow, United Kingdom, G12 8QQ
We recently (Jack et al., 2009) challenged one of the most widely held beliefs in psychological research – the universality of facial expressions of emotion. Merging behavioural and novel spatio-temporal eye movement analyses, we showed that East Asian (EA) observers decode expressions with a culture-specific strategy that is inadequate to reliably distinguish 'universal' expressions of 'Fear' and 'Disgust.' Using a model observer, we demonstrated that EA observers persistently sample ambiguous eye information while neglecting the mouth, thereby systematically confusing 'Fear' with 'Surprise,' and 'Disgust' with 'Anger.' Our rejection of universality thus raises the question: how are facial expressions represented across cultures? To investigate, we reconstructed the internal representations of Western Caucasian (WC) and EA observers using a reverse correlation technique. On each trial, we added white noise to a racially ambiguous neutral expression face, producing a perceptively different expressive face (Figure 1 in Supplemental Materials shows sample stimuli). We instructed 15 WC and 15 EA naïve observers to categorize stimuli (12,000 trials per observer) according to the 6 Ekman expressions (i.e., 'Happy,' 'Surprise,' 'Fear,' 'Disgust,' 'Anger' and 'Sad'). We then reconstructed the internal representations of each expression by summing the noise templates across trials before adding the neutral face (Figure 2 in Supplemental Materials illustrates the procedure; Figure 3 shows examples of reconstructed representations). Using complementary statistical image processing tools to examine the signal properties of the representations, we reveal that while certain expressive facial features are common across cultures (e.g., wide-open eyes for 'Surprise'), others are culture-specific (e.g., only WC observers showed the mouth for 'Surprise'). Furthermore, we show that modified gaze direction is unique to EA observers (Figure 4 in Supplemental Materials shows examples). For the first time, our results demonstrate that culture shapes the representations of facial expressions, thereby refuting their universality as signals of human social communication.
Acknowledgement: The Economic and Social Research Council (ESRC) and Medical Research Council (MRC) (ESRC/RES-060-25-0010).
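The reconstruction described above is the standard classification-image computation: combine the noise fields over all trials on which the observer gave a particular response (the abstract's summing and a mean differ only by a scale factor), then add the result to the neutral base face. A minimal sketch with illustrative names:

import numpy as np

def internal_representation(noises, responses, base_face, category):
    # noises:    N x H x W white-noise fields added on the N trials
    # responses: length-N array of the observer's categorizations
    # base_face: H x W neutral base image
    template = noises[responses == category].mean(axis=0)
    return base_face + template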


Wednesday Morning Posters

Scene perception: Aesthetics
Orchid Ballroom, Boards 401–409
Wednesday, May 12, 8:30 - 12:30 pm

63.401 Shot Structure and Visual Activity: The Evolution of Hollywood Film
Jordan DeLong1 (jed245@cornell.edu), Kaitlin Brunick1, James Cutting1; 1Psychology, Cornell University
Few stimuli can captivate human attention like a movie; young children and adults alike are drawn to the screen. Properties of film are able to engage viewers' attention for hours, but what is it about a movie that makes people stop and watch?
Recent research into Hollywood film has revealed a number of trends: the average shot length (ASL) in popular film has decreased while the overall running length of films has remained the same. Our current research into this change began by collecting and analyzing a database of over 150 popular Hollywood films ranging from 1935-2005. Utilizing a mixture of algorithmic cut detection and human confirmation, we were able to accurately find shot transitions for all 150 films. Power and autocorrelation analyses show that shots are not only becoming shorter over time, but that the distribution of shot lengths is approaching 1/f. This type of distribution is very similar to the endogenous rhythms found in human reaction times, and is thought to be due to temporal fluctuations in attention. We propose that film has evolved to interface with the rhythms of human attention and, by extension, the temporal structure of the world.
In addition to a changing shot distribution, the correlation of neighboring frames has decreased over the past 70 years. This can be explained as a gradual increase in the amount of visual activity: motion of scenes in front of the camera or movement of the camera itself. This change is evident in the modern Action and Adventure genres, including recent "queasy-cam" films such as Cloverfield and The Bourne Ultimatum. These films, among others, exhibit just how fast-moving, and thus uncorrelated, frames in film have become. These trends in film raise questions about where the limits of visual attention are placed.
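One generic way to quantify the 1/f-like structure reported above (the abstract does not specify the authors' exact estimator) is to fit the log-log slope of the power spectrum of a film's shot-length series; a slope near -1 indicates 1/f behavior, while a slope near 0 indicates an uncorrelated (white) sequence.

import numpy as np

def spectral_slope(shot_lengths):
    # Log-log slope of the power spectrum of a film's shot-length series.
    x = np.asarray(shot_lengths, float)
    x = x - x.mean()
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x))
    keep = freqs > 0                    # drop the DC bin
    slope, _ = np.polyfit(np.log(freqs[keep]), np.log(power[keep]), 1)
    return slope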
63.402 Video content modulates preferences for video enhancement
Philip (Matt) Bronstad1 (matthew.bronstad@schepens.harvard.edu), PremNandhini Satgunam1, Russell Woods1, Eli Peli1; 1Schepens Eye Research Institute, Harvard Medical School
In a study of image quality using local, adaptive contrast enhancement, we noted and investigated how responses varied between subjects and with video content.
Forty normally sighted subjects made pair-wise comparisons of side-by-side views of HD video enhanced at four levels (off, low, medium, and high) by two PureAV RazorVision devices, each separately connected to one of two side-by-side 42" LCD HDTVs. We used logistic regression to derive Thurstone-like preference scales for the enhancement levels.
After each session subjects were asked to explain their preferences. Responses fell into two broad categories, with some subjects preferring enhanced video whereas others did not. This was reflected in their preference scales, which showed substantial, repeatable, individual differences. From each subject's comparisons of 64 video clips an enhancement preference score (EPS) was calculated. EPS was distributed bimodally.
Subjects also indicated that video content was important to their enhancement preferences. Thus, in a post-hoc investigation of video content, we re-calculated EPS twice for each subject: once for videos that had primarily human faces, and once for videos with minimal face content ("non-face"). EPS differed between face and non-face videos, with less enhancement preferred for videos with substantial face content. This was true for most subjects (p=.005), and did not depend on whether subjects generally preferred enhancement or not.
Image quality measurement may be complicated by individual preference differences and by video content. This suggests that image quality metrics need to consider content and that a single model or computational metric for preferred image quality may not be representative of all viewers.
Acknowledgement: NIH grants EY05957, EY16093, and Analog Devices, Inc.
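Deriving Thurstone-like scale values from paired comparisons via logistic regression, as described above, amounts to fitting a model in which the probability of preferring one enhancement level over another depends on the difference of their scale values. A sketch under that assumption (the data layout and the anchoring of the "off" level are illustrative, not the authors' code):

import numpy as np
from sklearn.linear_model import LogisticRegression

def preference_scale(pairs, choices, n_levels=4):
    # pairs:   (i, j) enhancement-level indices shown side by side
    # choices: 1 if level i was preferred on that trial, else 0
    # Level 0 ("off") is the reference, anchored at a scale value of 0.
    X = np.zeros((len(pairs), n_levels - 1))
    for row, (i, j) in enumerate(pairs):
        if i > 0:
            X[row, i - 1] += 1.0
        if j > 0:
            X[row, j - 1] -= 1.0
    model = LogisticRegression(penalty=None, fit_intercept=False)
    model.fit(X, choices)
    # Coefficients are scale values of levels 1..n-1 relative to "off".
    return np.concatenate([[0.0], model.coef_.ravel()])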
63.403 Photo Forensics: How Reliable is the Visual System?
Hany Farid1 (farid@cs.dartmouth.edu), Mary Bravo2; 1Computer Science, Dartmouth College, 2Psychology, Rutgers University, Camden
In 1964, the Warren commission concluded that John F. Kennedy had been assassinated by Lee Harvey Oswald. This conclusion was based in part on the famous "backyard photograph" of Oswald holding a rifle and Marxist newspapers. A number of people, including Oswald himself, have claimed that the photograph was forged. These claims of forgery have been bolstered by what appear to be inconsistencies in the lighting and shadows in the photo.
This is but one of several cases in which accusations of photographic inauthenticity have spawned national or international controversies and conflicts. Because these claims are often based on perceptual judgments of scene geometry, we have examined the ability of observers to make such judgments. To do this, we rendered scenes that were either internally consistent or internally inconsistent with respect to their shadows, reflections, or planar perspective distortions. We then asked 20 observers to judge the veridicality of the scenes. The observers were given unlimited viewing time and no feedback. Except for the most degenerate cases, performance was near chance, even though the information required to make these judgments was readily available in the scenes. We demonstrate the availability of this information by showing that straightforward computational methods can reliably discriminate between possible and impossible scenes.
We have also used computational methods to test the claims of inauthenticity made about the Oswald backyard photo. By constructing a 3D model of the scene, we show that the shadows in the photo are consistent with a single light source. Our psychophysical results suggest that the claims to the contrary arose because human observers are unable to reliably judge certain aspects of scene geometry. Accusations of photo inauthenticity based solely on a visual inspection should be treated with skepticism.

63.404 What velvet teaches us about 3D shape perception
Maarten Wijntjes1 (m.w.a.wijntjes@tudelft.nl), Katja Doerschner2, Gizem Kucukoglu2, Sylvia Pont1; 1Perceptual Intelligence Lab, Industrial Design Engineering, Delft University of Technology, 2Computational and Biological Vision Group, Department of Psychology, Bilkent University
Humans are able to perceive a large variety of optical material properties. The shading patterns that convey these properties are used by painters to render, e.g., a luxurious dining table with golden bowls and crystal glasses. Furthermore, painters have developed techniques to render clothing materials, e.g. the velvet cape of Pope Leo X by Raphael. The shading trick that is often used to convey a velvet appearance is 'inverted Lambertian shading'. While for Lambertian shading the reflected light is highest at frontal illumination, the opposite holds for the hairy surface of velvet. Whereas it seems easy to identify velvet objects in paintings, it is unknown how these shading patterns affect the perception of shape. On the basis of a computational model we predicted that if the velvet shading is (partly) interpreted as Lambertian, the perceived 3D shape should flatten in the viewing direction. This is indeed what we found in a previous study where we used computer-rendered objects. In the present study we used 3D prints of those virtual objects and applied a Lambertian (matte spray paint) and velvet (flock) layer. Photographs of these objects were used in the experiment. We found a similar flattening effect for the velvet surface material. Furthermore, we analyzed the non-linear (second-order) differences in perceived shape between reflectance properties (Lambertian and velvet) and between illumination conditions. Surprisingly, we found that the non-linear differences were larger between illumination conditions.
The results show that real shapes yield perceived shape deviations similar to those of the rendered shapes. This suggests that in both cases shape perception is affected by a Lambertian prior. Furthermore, the non-linear differences can be interpreted as a stronger shape constancy across different BRDFs than across illumination conditions.
Acknowledgement: This work was supported by the Netherlands Organisation for Scientific Research (NWO) and EC FP7 Marie Curie IRG-239494

63.405 Re-examination of methods for measuring pictorial balance perception using Japanese calligraphy
Sharon Gershoni1 (sharon.gershoni@gmail.com), Shaul Hochstein1; 1Neurobiology Department, Faculty of Science, The Hebrew University of Jerusalem
In art, pictorial balance is considered the primary principle that unifies elements of a perceived composition. Balance has the power to turn a random array of elements into a cohesive, harmonious picture. The goal of this study was to investigate the role of pictorial balance in visual organization processing. Previous studies suggest that balance is perceived in a 50-100 ms glance and serves to create a global image scan path, guiding observer gaze and selecting features for processing (Locher & Nagy, 1996). Individual scan paths depend on art training (Nodine et al., 1993) and eye movements relate to visual aesthetics when viewing art (Locher et al., 2007). The Artistic Probabilistic Balance (APB) test (Wilson and Chatterjee, 2005) is a computational balance assessment based on the sum of area balance ratios across eight symmetry axes. APB values successfully matched subject balance ratings and preferences when tested with simple geometric images, without accounting for influences of grouping, training, gaze duration and memory. We now compare APB measures for sixteen Japanese calligraphic characters to results from two psychophysical experiments using brief (200 ms) masked presentation. In Experiment 1, subjects rated balance on a scale of 1-to-6 for Japanese characters presented in the left, center or right of the visual field. In Experiment 2, they rated relative balance in a 2-AFC paradigm for characters presented left/right of fixation. Results show high correlation between performances in different locations and between experiments, but low correlation with APB measures. We suggest a computational revision of APB, assigning different weights to the symmetry axes and adding contribution directionality. This revision revealed interesting orientation preferences in perceiving balance, which were confirmed in new rating experiments using stimuli presented at various rotations.
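An APB-style balance score of the kind discussed above can be sketched as the mean "ink" asymmetry across symmetry axes through the image center. For brevity this illustration uses only the four axes that are simplest to express on a pixel grid (the full APB uses eight), and the proposed revision would replace the plain mean with weighted, signed axis contributions.

import numpy as np

def balance_score(img):
    # img: 2D array of "ink" (e.g., 1 - normalized luminance for dark
    # calligraphy on a light ground). Returns 0 (balanced) to 1.
    # Diagonal splits assume a roughly square image.
    img = np.asarray(img, float)
    h, w = img.shape
    ii, jj = np.indices(img.shape)
    halves = [
        (img[:, : w // 2].sum(), img[:, w - w // 2 :].sum()),      # left/right
        (img[: h // 2, :].sum(), img[h - h // 2 :, :].sum()),      # top/bottom
        (img[ii > jj].sum(), img[ii < jj].sum()),                  # main diagonal
        (img[ii + jj > h - 1].sum(), img[ii + jj < h - 1].sum()),  # anti-diagonal
    ]
    ratios = [abs(a - b) / (a + b) for a, b in halves if a + b > 0]
    return float(np.mean(ratios))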
63.406 Representational Fit in Position and Perspective: A Unified Aesthetic Account
Jonathan S. Gardner1 (jonathansgardner@gmail.com), Stephen E. Palmer1; 1Psychology Department, University of California, Berkeley
Previous research on aesthetic preference for spatial compositions has shown robust and systematic preferences for object locations within frames, such as the center bias, the inward bias, and various ecological biases (Palmer, Gardner, & Wickens, 2008; Gardner & Palmer, VSS-2006, VSS-2008, VSS-2009). These preferences can be dramatically altered, however, by changing contextual meaning through different titles for the same picture (Gardner & Palmer, VSS-2009). Perspective is a similar factor: People also prefer canonical perspectives (Palmer, Rosch, & Chase, 1981) when rating their aesthetic response to pictures of everyday objects (Khalil & McBeath, VSS-2006), but these preferences can also be shifted by changing the context through different titles. Our theoretical account of preference for the composition that best fits the context – which we call "representational fit" (Gardner & Palmer, VSS-2009) – can explain not only preferences in the "default" case, where the goal is simply to present the focal object(s) optimally in a way that best captures its most salient features (e.g., as in stock photography; see Gardner, Fowlkes, Nothelfer, and Palmer, VSS-2008), but also more nuanced and realistic cases in which there is a meaning associated with the image beyond its explicit image content. The current research examines several aspects of this preference for representational fit. People prefer non-standard compositions (with regard to position and/or perspective) more than standard compositions, so long as there is a context that justifies the unexpected composition. Put another way, there is greater artistic value in novelty and violating expectations, provided that the results are meaningful and coherent. These results provide strong evidence for representational fit as an aesthetic theory that unifies fluency accounts, where the default context prevails (Reber, Schwarz, & Winkielman, 2004), with classic aesthetic accounts in terms of novelty and violating expectations, where a nonstandard meaning is intended or inferred.
Acknowledgement: NSF to Stephen Palmer: BCS-0745820

63.407 Aesthetics of Spatial Composition: Semantic Effects in Two-Object Pictures
Mieke H.R. Leyssen1 (mieke.leyssen@student.kuleuven.be), Sarah Linsen1, Jonathan S. Gardner2, Stephen E. Palmer2; 1Department of Psychology, KULeuven, 2Department of Psychology, UC Berkeley
Previous research on aesthetic response to the spatial composition of simple pictures examined preferences for the horizontal position of single objects within a rectangular frame (Palmer, Gardner & Wickens, 2007). The results revealed a center bias for front-facing objects and an inward bias for left- or right-facing objects. The current studies examined aesthetic preferences for compositions containing two objects. Each picture contained one stationary object, whose position was fixed, and one movable object, whose position was adjusted by the participant to create the most aesthetically pleasing composition. The stationary object was presented at one of five equally-spaced locations along a horizontal axis. In the first experiment, four vertically symmetrical objects without a facing direction – two short, wide objects (sponge, cake) and two tall, thin objects (plastic bottle of liquid dish soap, bottle of sparkling wine) – were presented in pairs consisting of one short, wide object and one tall, thin object. The center points of the preferred positions for the movable objects were then binned to compute frequency histograms of their preferred positions. When the two objects were related (wine and cake or liquid soap and sponge), people generally placed the movable object close to the fixed object, whereas when they were unrelated (wine and sponge or liquid soap and cake), people generally placed the movable object far away from the fixed object. In a second study the same data were collected in a between-participants design – such that each participant saw only a single pair of objects – to control for possible demand characteristics arising from the same participant seeing both related and unrelated pairs of objects. The second experiment also assessed the effects of semantic relatedness on the preferred direction of facing.
Acknowledgement: National Science Foundation Grant BCS-0745820
63.408 Aesthetic Preferences in the Size of Images of Real-world Objects
Sarah Linsen1 (sarah.linsen@student.kuleuven.be), Mieke H.R. Leyssen1, Jonathan S. Gardner2, Stephen E. Palmer2; 1Department of Psychology, KULeuven, 2Department of Psychology, UC Berkeley
In previous research, Konkle and Oliva (VSS-2009) found that the preferred visual size ("canonical size") of a picture of an object is proportional to the log of its known physical size: Small physical objects are preferred when their images are small within a frame and large physical objects are preferred when their images are large within a frame. They employed within-participant designs using multiple objects in several different tasks, including a perceptual preference task in which they asked participants to adjust the image size so that the object "looks best." Because of concerns about how the instructions were interpreted (is the image that "looks best" the one at which it "looks most like itself" or the one that is "most aesthetically pleasing"?) and possible demand characteristics (the same person seeing multiple objects of different sizes may implicitly feel pressured to make their relative sizes consistent), we studied image size effects on aesthetic judgments using a two-alternative forced-choice method in both within- and between-participant designs, asking participants to choose the picture that they "like best." In Experiment 1, participants saw all possible pairs of images depicting the same object at six different sizes for twelve real-world objects that varied in physical size. Consistent with Konkle and Oliva's findings, participants preferred small objects to be smaller in the frame and large objects to be larger, regardless of whether they saw only a single object (the between-participant design) or all objects intermixed (the within-participant design). In Experiment 2, we examined whether this effect would still be evident if the amount of visual detail present at different sizes was equated by "posterizing" the images. Here the ecological bias toward relative size effects disappeared. Our findings indicate that multiple factors interact in determining aesthetic responses to images of different sizes.
Acknowledgement: National Science Foundation Grant BCS-0745820
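The canonical-size relation that motivates this study is easy to state quantitatively: preferred image size grows linearly with the log of assumed physical size. A sketch with entirely hypothetical numbers, just to make the form of the relation concrete:

import numpy as np

# Hypothetical data: assumed physical sizes (cm) of six objects and the
# image size (fraction of frame width) at which each was preferred.
physical_cm = np.array([3.0, 10.0, 30.0, 90.0, 180.0, 400.0])
preferred_frac = np.array([0.18, 0.26, 0.35, 0.44, 0.50, 0.58])

# Canonical-size relation: preferred size = a * log(physical size) + b.
a, b = np.polyfit(np.log(physical_cm), preferred_frac, 1)
r = np.corrcoef(a * np.log(physical_cm) + b, preferred_frac)[0, 1]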


63.409 Perceptual, semantic and affective dimensions of the experience of representational and abstract paintings
Slobodan Markovic1 (smarkovi@f.bg.ac.rs); 1University of Belgrade
In this study we investigated the difference between representational and abstract paintings in judgments on three groups of subjective dimensions. All three groups of dimensions were specified in our previous factor-analytic studies: (1) perceptual dimensions Form, Color, Space, and Complexity; (2) semantic dimensions Illusion-Construction of Reality (a bipolar dimension), Expression, Ideology and Decoration; and (3) affective dimensions Hedonic Tone, Arousal, Relaxation and Regularity. Each dimension was represented by three 7-step bipolar scales (perceptual and affective dimensions) or two 7-step unipolar scales (semantic dimensions). Two samples of paintings taken from our previous studies (18 representational and 18 abstract) were used as stimuli. Two groups of participants (N1=30, N2=30) judged either the representational or the abstract paintings. Participants completed three instruments, i.e., judged paintings on three sets of 7-step scales which corresponded to perceptual, semantic and affective dimensions. Results showed that representational paintings were judged as significantly higher on the perceptual dimensions Form and Complexity, the semantic dimension Illusion of Reality (opposite pole of Construction of Reality), and the affective dimension Regularity. We had expected these differences because the representational paintings were made so as to have relatively highly defined, precise, detailed and regular forms which could easily be associated with objects in the physical world. Also, the abstract paintings were judged as higher on the perceptual dimension Color, the semantic dimensions Expression and Construction of Reality (opposite pole of Illusion of Reality), and the affective dimension Arousal. All these differences reflect the basic characteristics of abstract art: abstract paintings are not created to represent anything from physical reality, but to construct a new iconic world with the intention of expressing the artist's emotions and arousing the observer's mind. Color is one of the most effective artistic means which artists use to achieve these goals.

Color and light: Categories, culture and preferences
Orchid Ballroom, Boards 410–420
Wednesday, May 12, 8:30 - 12:30 pm

63.410 What do we know about how humans choose grey levels for images?
Marina Bloj1 (M.Bloj@brad.ac.uk), David Connah2, Graham Finlayson2; 1Bradford Optometry Colour and Lighting Lab, School of Life Sciences, University of Bradford, Bradford, BD7 1DP, UK, 2Department of Computer Science, University of East Anglia, Norwich, NR4 7TJ, UK
From cave paintings that are more than 32000 years old to the work of current artists using charcoal or pastels, humans have used different levels of a single colour (e.g. greyscales) to portray the colourful world that surrounds us. The aim of this study was to investigate how we construct these monochromatic images: do we choose the same relative order of grey levels as other people? How do image content and contrast influence our grey settings? For this purpose, we presented, on a calibrated CRT, 5 simplified cartoon-type colourful images. These images were shown in 4 different versions: two where the content of the image was recognisable (one with an added black outline around each object, the other without). In the other two versions each pixel in an image was grouped with other pixels of the same colour into rectangular areas in a way that made the content of the image abstract while preserving the area taken up by a particular colour. One of these versions had a black outline around each colour area; the other did not. Six naïve, colour-normal observers used a digital 're-colouring tool' to create greyscale versions of the randomly presented colour images. Preliminary analyses of participants' settings for different versions of the images indicate that the addition of black outlines does not seem to affect chosen grey levels. More surprisingly, the change in content from a recognisable scene to abstract rectangles also seems to leave grey levels unchanged. Although each individual's absolute settings were different, the overall ranking was largely preserved across participants for a given image. For our chosen set of test images, the most influential factor driving grey settings was the perceived lightness of the coloured patch, not image content or contrast.
Acknowledgement: This work is supported by joint EPSRC grants number EP/E012248/1 and EP/E12159/1
63.411 Color categories and perceptual grouping
Lucy Pinto1 (pintol2@unr.nevada.edu), Paul Kay2, Michael A. Webster1; 1Psychology, University of Nevada, Reno, 2International Computer Science Institute, UC Berkeley
Studies of reaction times for color discrimination have found faster responses to differences between (e.g. blue vs. green) compared to within (e.g. two shades of blue) color categories (e.g. Gilbert et al., PNAS 2006). The between-category advantage is more prominent in the right visual field and is abolished by verbal interference, consistent with an effect of language on the perceptual response to color. We asked whether an effect of linguistic category might be manifest early, in the perceptual encoding of color, by measuring the influence of color on perceptual grouping, a task which did not require a speeded response. Stimuli were composed of 5 1-deg circles forming the corners and center of a 4-deg square centered 8 deg in the left or right field. Diagonal corners had the same color and differed from the opposite pair by a fixed hue angle of 30 deg in CIELAB. Absolute angle varied over a range spanning blue and green. For each, the center color was varied in a staircase to estimate the angle at which the two diagonals appeared equally salient. Interleaved settings measured the angle of the blue-green boundary for the center spot presented alone. For corner colors spanning blue and green, a strong categorical effect predicts that the point of subjective equality (PSE) should remain tied to the boundary angle. Instead, PSEs varied monotonically with corner color angles, and did not consistently differ between the right and left fields. Perceptual salience as measured by grouping thus showed little influence of linguistic category. These results are consistent with other recent measures pointing to a lack of categorical effects on color similarity judgments (Lindsey and Brown, JOV 2009), and suggest that the influence of language on color could occur late, e.g. at the stage of response selection.
Acknowledgement: EY-10834

63.412 Cross-Cultural Studies of Color Preferences: US, Japan, and Mexico
Kazuhiko Yokosawa1 (yokosawa@l.u-tokyo.ac.jp), Natsumi Yano1, Karen B. Schloss2, Lilia R. Prado-León3, Stephen E. Palmer2; 1Department of Psychology, The University of Tokyo, 2Department of Psychology, University of California, Berkeley, 3Ergonomics Research Center, University of Guadalajara
Consistent with Schloss and Palmer's (VSS-2009) Ecological Valence Theory (EVT) of color preference, 80% of the variance in average American preferences for 32 chromatic colors was explained by the Weighted Affective Valence Estimate (WAVE) of American preferences for the objects that are characteristically those colors. To test predictions of the EVT cross-culturally, corresponding color preferences and ecological WAVE measures were collected in Japan and Mexico for the same 32 colors. American participants showed a broad preference for cool over warm hues, an aversion to dark orange (brown) and dark yellow (olive), and greater preference for more saturated than less saturated colors. Japanese participants showed similar preferences for cool over warm colors, dislike for brown and olive, and high preference for saturated colors, but a greater preference for light colors (pastels) and a lesser preference for dark colors relative to Americans. Mexican participants showed the same aversion to brown and olive, but liked warm and cool colors about equally and tended to like both light and saturated colors less than American and Japanese participants. The WAVEs in each culture were computed from the results of the same three-part procedure: eliciting object descriptions for each of the 32 colors, rating the similarity of the presented color to the colors of the described objects, and rating the affective valence (degree of liking) of each described object. The WAVE for each color is the average valence over objects weighted by the average similarity of the given color to the described object. American WAVEs predict American preferences (r=.89) better than Japanese (r=.77) or Mexican preferences (r=.54). Similarly, Japanese WAVEs predict Japanese color preferences (r=.66) better than American preferences (r=.55) or Mexican preferences (r=.29). These findings are consistent with the EVT, which predicts that culturally specific WAVEs should predict within-culture preferences better than between-culture preferences.
Acknowledgement: National Science Foundation Grant BCS-0745820
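The WAVE computation described above has a compact matrix form: each color's score is the similarity-weighted mean of the valences of the objects described for it. A minimal sketch (the matrix layout is illustrative):

import numpy as np

def wave_scores(similarity, valence):
    # similarity: n_colors x n_objects matrix of rated color-object match
    # valence:    length n_objects vector of mean object valence ratings
    # Each color's WAVE is the similarity-weighted mean of object valences.
    similarity = np.asarray(similarity, float)
    valence = np.asarray(valence, float)
    return (similarity @ valence) / similarity.sum(axis=1)

Within- versus between-culture prediction then reduces to correlating one culture's WAVE vector with a group's mean preference ratings, e.g. np.corrcoef(wave_scores(sim_us, val_us), mean_pref_jp)[0, 1] (array names hypothetical).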
63.413 The Color of Emotionally Expressive Faces
Zoe Xu1 (zxxu@berkeley.edu), Karen B. Schloss1, Stephen E. Palmer1,2; 1Department of Psychology, UC Berkeley, 2Program in Cognitive Science, UC Berkeley
Schloss, Lawler, and Palmer (VSS-2008) investigated the relation between color and classical music by having participants select the 5 colors that "went best" (and, later, the 5 colors that "went worst") with 18 musical pieces from among the 37 colors of the Berkeley Color Project (Palmer & Schloss, submitted). They found that the emotional associations of the colors that were chosen for a particular musical selection were closely related to the emotional associations of that musical selection. They proposed that when people perform this task, they have an emotional response to the music and choose the colors that are most (or least) closely associated with those same emotions. In this study we used the same paradigm to test for analogous associations between colors and emotionally expressive faces. In the color-face task, the participants were presented with the entire array of 37 colors beside a photograph of a face that appeared happy, sad, angry, or calm to varying degrees. Their task was to choose the five colors that were most consistent with the face and (later) the five colors that were least consistent with the face. In the color-emotion task, the same participants rated the strength of association for each of 37 colors with three emotional dimensions: happy-sad, angry-contented, and strong-weak. In the face-emotion task, the same participants rated the strength of association of the faces along the same three emotional dimensions. Analogous to color-music associations, the emotional associations of the colors chosen to go with the faces were highly correlated with the emotional content of the faces. The results are consistent with the general hypothesis that associations between colors and stimuli that have clear emotional content (e.g., classical music and emotionally expressive faces) are mediated by emotion: People choose the colors that have the most similar emotional content.
Acknowledgement: National Science Foundation Grant BCS-0745820

63.414 Is Object Color Memory Categorical?
Ana Van Gulick1 (ana.e.van.gulick@vanderbilt.edu), Michael Tarr2; 1Vanderbilt University, 2Carnegie Mellon University
Shape is considered the most important visual feature of objects, but color can also aid in object recognition and naming. However, it is unclear how much color information is automatically encoded in visual long-term memory. What is the nature of color object memory? And if color is encoded in visual object memory, is it exact or categorical? We investigated these questions in 6 experiments with both color-diagnostic and non-color-diagnostic familiar objects and human faces. The experiments used a study task for object images followed by a perceptual 2-interval forced-choice task between two shifted color versions of the same object image. Experiments 1 and 2 found a preference for shifted color images that stay within the original color category of a color-diagnostic object even if the color is more extreme. Experiment 2 found the same result as Experiment 1 without any study trials, which suggests that color categories in object memory exist and that they are at least in part based on prior knowledge of object color. Experiments 3 and 4 demonstrated that color memory is influenced both by recent perceptual experience, such as the shifted color of a studied object, and by prior knowledge of the typical color of an object, which may be semantically mediated by object identity. Distance in color space was most important only when a color category boundary was crossed. Experiment 5 found evidence for categorical color memory for human faces and food, categories for which color is an important visual feature. Again, categorical color memory seems to be strongest for these object classes, with some evidence for exact color memory. Experiment 6 found no evidence for exact or categorical color memory for non-color-diagnostic objects. Overall, we find that object color memory is categorical for objects for which color is an important diagnostic feature.
63.415 The Good, the Bad and the Ugly: Effects of Object Exposure on Color Preferences
Eli D. Strauss1 (edstrauss@berkeley.edu), Karen B. Schloss2, Stephen E. Palmer1,2; 1Program in Cognitive Science, UC Berkeley, 2Department of Psychology, UC Berkeley
Palmer and Schloss (submitted) proposed an Ecological Valence Theory (EVT) of color preferences, which states that color preferences are determined by individuals' emotional experiences with objects characteristically associated with those colors. An implication of the EVT is that an individual's color preferences change as he/she has new emotional experiences with colored objects. The present experiment tests whether exposing subjects to emotional objects of particular colors produces a reliable change in preferences for those colors. Participants first rated their color preferences for the 37 BCP colors. Half then completed four "spatial aesthetics" tasks in which they were exposed to positive green images (e.g., trees and grass) and negative red images (e.g., wounds and lesions), and the other half did the same with negative green images (e.g., slime and mold) and positive red images (e.g., berries and roses). Both groups also saw neutral objects of other colors. The "spatial aesthetics" tasks were designed to ensure that participants had processed the content of the images: judging whether a verbal label was appropriate, clicking on the center of the focal objects, rating the complexity of the image, and rating their preference for the depicted object. Following these four tasks, participants rated their color preferences again, and difference scores were computed for the corresponding red and green colors. There was an interaction between the change in color preference and the images viewed: Those who saw positive images of a given color (either red or green) showed an increase in preference relative to those who saw negative images of the same color. These results provide causal evidence in support of the EVT by showing that exposure to (or priming of) emotional objects of a particular color can increase or decrease preference for that color, depending on the emotional valence of the objects.
Acknowledgement: National Science Foundation Grant BCS-0745820
63.416 Effects of school spirit on color preferences: Berkeley's Blue-and-Gold vs. Stanford's Red-and-White
Rosa M. Poggesi1 (rosiposi@berkeley.edu), Karen B. Schloss1, Stephen E. Palmer1; 1Department of Psychology, UC Berkeley
According to the Ecological Valence Theory (EVT), people's color preferences are determined by their average affective response to all "things" associated with those colors (Palmer & Schloss, submitted). Accordingly, preference for a color should increase with increasingly positive feelings for a strong associate of that color (e.g., one's university) and decrease with increasingly negative feelings about that same associate. We tested this prediction by comparing color preference ratings from Berkeley and Stanford undergraduates a few weeks before the intensely rivalrous "Big Game." The EVT predicts that students should like their own school colors more than their rival's school colors, and that the degree of preference for these colors should be related to their amount of school spirit. Berkeley and Stanford undergraduates rated their preferences for 40 single colors (the 37 colors of the Berkeley Color Project plus Berkeley-blue, Berkeley-gold, and Stanford-red) and 42 figure-ground color pairs (all pair-wise permutations of Berkeley-blue, Berkeley-gold, Stanford-red, white, light-blue, dark-yellow and light-red). Participants then rated their degree of agreement with five statements designed to assess school spirit. Total school spirit scores from Berkeley and Stanford were combined into a single bipolar dimension by multiplying the Stanford scores by -1. For single colors, there was a significant positive correlation (r=0.44) between school spirit and the signed difference in preference (Berkeley-blue plus Berkeley-gold minus Stanford-red), showing that Berkeley students like blue and gold more than red, whereas Stanford students like red more than blue and gold. Preferences for color pairs showed analogous effects: School spirit was significantly correlated with the difference in preference for pairs containing Berkeley's blue-and-gold and those containing Stanford's red-and-white (r=0.36). These results support the EVT by showing that positive feelings towards one's university promote higher preference for colors associated with that university than for colors associated with a rival university.
Acknowledgement: National Science Foundation Grant BCS-0745820

63.417 An Ecological Account of Individual Differences in Color Preferences
Stephen Palmer1 (palmer@cogsci.berkeley.edu), Karen Schloss1; 1Psychology Department, U. C. Berkeley
Schloss and Palmer (VSS-2009) reported that 80% of the variance in average color preferences for 32 chromatic colors by American participants was explained by an ecological measure of how much people like the objects that are characteristically those colors. The weighted affective valence estimate (WAVE), computed from the results of a multi-task procedure, outperformed three other models containing more free parameters. One group of participants described all objects that came to mind for each color, which were compiled into 222 categories of object descriptions. A second group rated how similar each presented color was to the color of each object described for that color. A third group rated their affective valences (positive-to-negative) for each object from its verbal description. The WAVE for each color is the average valence for each object, weighted by the rated similarity of the given color to the described object. The WAVEs were strongly correlated with average color preference ratings (r=.89). We now show that when WAVEs are calculated at the level of individual participants, they account for significantly more variance in the same individual's color preference ratings than do average WAVEs computed from the entire group. A new group of participants rated their preferences for all 32 colors, after which they provided their own idiosyncratic object descriptions for each color and rated their affective responses to them. They also rated their affective response to each of the 222 object descriptions provided by the original group. The correlation between an individual's color preferences and his/her individual WAVEs, computed from their personal ratings of the 222 object valences, proved to be reliably better than the fit of the group WAVEs, computed from the average affective ratings. Together, the cone-contrast model (Hurlbert & Ling, 2007) and the WAVE predictor explain 58% of the variance in individual participants' preferences.
Acknowledgement: NSF, Google
63.418 Desaturated color scaling does not depend on color context: an MLDS experiment
Delwin Lindsey1 (lindsey.43@osu.edu), Angela Brown2; 1Department of Psychology, Ohio State University, 2College of Optometry, Ohio State University
Previous work showed that visual search for desaturated targets among heterogeneous arrays of white and saturated distractors, of the same hue as their target, is governed by low-level color-opponent responses (Kuzmova, VSS, 2008). Is this a local effect, involving mechanisms that directly process the color signals arising from the targets themselves? Or is it more complex, perhaps involving relatively global perceptual processes that are governed by their low-level inputs? To investigate this question, we asked whether the distractors in the original search experiment influenced the color appearance of the targets. We presented reddish test stimuli of varying saturation and lightness (including the target that was found fastest in the search experiment) and a similar range of tritan-purplish stimuli (including the slowest target), in three color-context conditions: (1) a mixed "distractor" set of 60 red and white squares, (2) a similar set of purple and white squares, and (3) no squares (just the dark surrounding field). We assessed the color appearance of the test stimuli, as a function of saturation, using Maximum Likelihood Difference Scaling. Test stimuli fell at equal ΔE intervals, along two lines in CIELAB, passing through red (10 cd/m2) and white (50 cd/m2), and white and purple (9 cd/m2), respectively. Three subjects (two naïve) viewed randomly selected quadruples of 1-deg colored squares displayed in the center of a color CRT. On each trial, subjects judged whether the top or the bottom pair of test stimuli had the larger color difference. We found no systematic difference in scaled hue across the three color-context conditions. Thus, the distractors in the original search experiment apparently did not influence the color appearance of the target stimuli. This result reinforces our previous conclusion that visual search engages local, low-level color-opponent channels when desaturated targets are embedded in heterogeneous distractor arrays.
Acknowledgement: NIH R21EY018321
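Maximum Likelihood Difference Scaling, as used above, fits a perceptual scale to quadruple judgments by assuming the observer compares the two suprathreshold differences with additive Gaussian decision noise (Maloney & Yang, 2003). A sketch of that standard formulation, with the scale anchored at 0 and 1 (not the authors' code):

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_mlds(quads, resp, n_levels):
    # quads: N x 4 integer array of stimulus indices (a, b, c, d) per trial
    # resp:  1 if the (c, d) pair was judged the larger difference, else 0
    quads = np.asarray(quads)
    resp = np.asarray(resp, float)

    def neg_loglik(p):
        psi = np.concatenate([[0.0], p[:-1], [1.0]])  # anchored scale values
        sigma = abs(p[-1]) + 1e-6                     # decision noise
        a, b, c, d = (psi[quads[:, k]] for k in range(4))
        z = ((d - c) - (b - a)) / sigma
        prob = norm.cdf(z).clip(1e-9, 1 - 1e-9)
        return -(resp * np.log(prob) + (1 - resp) * np.log(1 - prob)).sum()

    x0 = np.concatenate([np.linspace(0, 1, n_levels)[1:-1], [0.2]])
    fit = minimize(neg_loglik, x0, method="Nelder-Mead")
    return np.concatenate([[0.0], fit.x[:-1], [1.0]])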
63.419 Individual Differences in Preference for Harmony
William S. Griscom1 (wgriscom@berkeley.edu), Stephen E. Palmer1; 1Department of Psychology, University of California, Berkeley
Previous research has shown that individuals differ in the degree to which they prefer harmonious color pairs, as measured by the correlation between their ratings of preference for figure-ground color pairs and their ratings of harmony for the same color pairs (Schloss & Palmer, VSS-2007). In this study, we investigated whether individual preference for visually "harmonious" or internally coherent stimuli is consistent across different stimulus types. The stimuli used were: 35 images of a single dot at one of 35 positions inside a rectangular frame, 22 Garner-type 9-dot configurations, and 16 color pairs. All displays were chosen to span the full range of internal coherence possible within the given stimulus type. Twenty subjects were asked to rate aesthetic preferences for each stimulus on a computerized line-mark scale, and were later asked to rate the internal coherence of the same stimuli ("harmony" for color pairs, "goodness of fit" for dot-in-a-frame images, and "simplicity" for Garner dot patterns) using the same method. Subjects also completed the 44-question Big Five Inventory and the 40-question Sensation Seeking Scale. We found that individual subjects' preference for internally coherent stimuli (i.e., harmonious/good-fitting/simple displays) was strongly correlated across different stimulus types: r=.46 for color pairs and dot-in-a-frame images, r=.71 for color pairs and Garner dot patterns, and r=.49 for dot-in-a-frame images and Garner dot patterns. Somewhat surprisingly, the personality measures we examined were not significantly related to preference for harmonious stimuli. These results may indicate an underlying factor (aesthetic style?) connecting preference for harmony across visual stimulus types, and we are currently engaged in an expanded follow-up study using a larger number of color pairs, quasi-randomly generated polygons that differ in the number of sides and degree of symmetry, and harmonious-to-dissonant solo piano music.
Acknowledgement: National Science Foundation Grant BCS-0745820

63.420 Adaptation and visual discomfort
Igor Juricevic1 (juricevi@unr.nevada.edu), Arnold Wilkins2, Michael Webster1; 1Department of Psychology, University of Nevada, Reno, 2Department of Psychology, University of Essex
Images with spatial or chromatic properties that are uncharacteristic of typical visual environments tend to be rated as less comfortable to view (Fernandez and Wilkins, Perception 2007; Land et al., VSS 2009). This effect could reflect how visual responses are normalized for the image statistics that are routinely encountered in natural viewing. Here we examine whether short-term exposure to modified statistics alters judgments of visual discomfort, to assess whether perceptions of discomfort can be recalibrated for the observer's ambient environment, and whether this adjustment reflects renormalization of perceived image qualities through a process like adaptation. Images consisted of a dense collage of overlapping rectangles of different colors (Mondrian patterns) that could be varied in their spatial (e.g. blurred or sharpened) or spectral (e.g. mean color and color and luminance contrast) properties. In further conditions we also explore the effects of prior adaptation on discomfort ratings for images of art that have been shown to be uncomfortable because they include excessive energy at medium spatial frequencies. Observers initially adapted to a rapidly changing sequence of images with a common attribute that would normally appear comfortable (e.g. focused) or uncomfortable (e.g. blurred). They then used 7-point scales to rate both the perceived discomfort and the aesthetic quality of a succession of test images (e.g. ranging from blurred to sharpened) that were each shown interleaved with periods of readaptation. Separate measurements also directly evaluated changes in the adapting attribute (e.g. perceived focus). Adaptation strongly changes the appearance of the images, and we use the changes in the ratings to assess the extent to which perceptions of both discomfort and aesthetics are tied through this adaptation to the average characteristics of the visual environment.
Acknowledgement: EY-10834

Attention: Brain and behavior II
Orchid Ballroom, Boards 421–430
Wednesday, May 12, 8:30 - 12:30 pm

63.421 The spatial distribution of visual attention in early visual cortex
Sucharit Katyal1 (sucharit@mail.utexas.edu), David Ress1; 1Department of Psychology and Section of Neurobiology, University of Texas at Austin
Purpose: Previous studies have suggested that in addition to the spatially specific enhancement of neural activity in early visual areas due to selective attention, there is a suppression of activity corresponding to regions surrounding the attentional target (e.g., Hopf et al., PNAS, 4, 1053-1058, 2004); the suppressive surround may be one source of the poorly understood "negative BOLD" effect often observed in visual cortex. We use high-resolution (1.4-mm) fMRI to measure the spatial distribution of activity within and surrounding an attentional target in early visual cortex. The experiments examine how attention affects this spatial distribution, and how the distribution is modulated by the size of the attentional target. Methods: We compared responses in two conditions. In the attend-toward condition, subjects were sequentially cued to focus attention on one of four circular drifting-grating targets (each 1.2° diameter; 14-s period) arranged diagonally around the fixation point at an eccentricity of 2.75°. Subjects' task was to discriminate small changes in grating orientation of the cued target; task difficulty was adjusted continually to maintain performance. In the attend-away condition, subjects performed a fixation-point task while the target orientations were orthogonally alternated (24-s period). In a separate localizer experiment we precisely mapped the region of cortex stimulated by the target using a blocked stimulus that alternated between target and surround. Results: Cue-driven responses are clearly evident and are maximal at the centers of each target, with targets subtending a diameter of ~3 mm on V1. Responses drop toward the edge of the target, eventually becoming negative between ~7–12 mm from the center. The magnitude and size of this negative-BOLD surround is modulated by attention. Measurements of target size effects are in progress. Discussion: Attention drives a negative surround around a visual target, where cortical activity is reduced below baseline.

63.422 Interactions of sustained spatial attention and surround suppression: an SSVEP study
Ayelet Landau1 (ayeletlandau@berkeley.edu), Anna Kosovicheva2,3, Michael Silver2,3; 1Department of Psychology, University of California, Berkeley, 2Helen Wills Neuroscience Institute, University of California, Berkeley, 3School of Optometry, University of California, Berkeley
We employed steady-state visual evoked potentials (SSVEPs) to measure effects of sustained spatial attention on visual responses and their modulation by orientation-specific surround suppression. Displays contained a circular sinusoidal grating consisting of separate annulus and surround regions. Subjects continuously maintained fixation on a central square for 4.8 s while the contrast of the surround and annulus regions of the display reversed at 12.5 Hz and 16.67 Hz, respectively. In separate blocks, we manipulated spatial attention by instructing participants to detect a contrast decrement either within the target annulus or within the fixation square. Surround suppression of contrast discrimination performance was stronger when the annulus and surround had the same orientation compared to when they were orthogonally oriented. Fourier decomposition was used to separately measure response amplitudes at the surround and annulus frequencies. This analysis revealed distinct scalp topographies for annulus and surround portions of the stimuli, with maximal responses to the annulus at lateral occipital recording sites and maximal surround responses at posterior midline sites. Based on these topographies, sites of interest corresponding to annulus and surround responses were defined. We examined effects of both surround processing and spatial attention. First, orientation-specific modulation of SSVEP responses to the surround was positively correlated with behavioral orientation-specific surround suppression. This finding links neural responses to the surround with surround suppression of discriminability of the annulus. Second, sustained spatial attention exhibited different effects at annulus and surround sites. For annulus sites, attending to the annulus enhanced annulus responses. Furthermore, this enhancement was only observed when the annulus and surround shared the same orientation, corresponding to the condition in which behavioral surround suppression is strongest. For surround sites, attending to the annulus decreased surround responses. These results demonstrate attentional enhancement of responses to a task-relevant stimulus and reduction of responses to an ignored surround.
63.422 Interactions of sustained spatial attention and surround suppression: an SSVEP study
Ayelet Landau1 (ayeletlandau@berkeley.edu), Anna Kosovicheva2,3, Michael Silver2,3; 1Department of Psychology, University of California, Berkeley, 2Helen Wills Neuroscience Institute, University of California, Berkeley, 3School of Optometry, University of California, Berkeley
We employed steady-state visual evoked potentials (SSVEPs) to measure effects of sustained spatial attention on visual responses and their modulation by orientation-specific surround suppression. Displays contained a circular sinusoidal grating consisting of separate annulus and surround regions. Subjects continuously maintained fixation on a central square for 4.8 s while the contrast of the surround and annulus regions of the display reversed at 12.5 Hz and 16.67 Hz, respectively. In separate blocks, we manipulated spatial attention by instructing participants to detect a contrast decrement either within the target annulus or within the fixation square. Surround suppression of contrast discrimination performance was stronger when the annulus and surround had the same orientation compared to when they were orthogonally oriented. Fourier decomposition was used to separately measure response amplitudes at the surround and annulus frequencies. This analysis revealed distinct scalp topographies for annulus and surround portions of the stimuli, with maximal responses to the annulus at lateral occipital recording sites and maximal surround responses at posterior midline sites. Based on these topographies, sites of interest corresponding to annulus and surround responses were defined. We examined effects of both surround processing and spatial attention. First, orientation-specific modulation of SSVEP responses to the surround was positively correlated with behavioral orientation-specific surround suppression. This finding links neural responses to the surround with surround suppression of discriminability of the annulus. Second, sustained spatial attention exhibited different effects at annulus and surround sites. For annulus sites, attending to the annulus enhanced annulus responses. Furthermore, this enhancement was only observed when the annulus and surround shared the same orientation, corresponding to the condition in which behavioral surround suppression is strongest. For surround sites, attending to the annulus decreased surround responses. These results demonstrate attentional enhancement of responses to a task-relevant stimulus and reduction of responses to an ignored surround.
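The Fourier decomposition step described above can be sketched compactly: the amplitude at each tagging frequency is read off the FFT of the recorded trace. The snippet below uses a synthetic single-channel signal; the sampling rate, component amplitudes, and noise level are illustrative assumptions, not the study's recording parameters:

```python
# Sketch: extracting SSVEP amplitudes at the two tagging frequencies via
# Fourier decomposition, as described above. The EEG trace is synthetic.
import numpy as np

fs = 1000.0                      # assumed sampling rate (Hz)
t = np.arange(0, 4.8, 1 / fs)    # one 4.8-s trial, as in the display above

# Synthetic trial: annulus (16.67 Hz) + surround (12.5 Hz) + noise
rng = np.random.default_rng(1)
eeg = (1.5 * np.sin(2 * np.pi * 16.67 * t)
       + 1.0 * np.sin(2 * np.pi * 12.5 * t)
       + rng.normal(scale=2.0, size=t.size))

spectrum = np.abs(np.fft.rfft(eeg)) * 2 / t.size    # amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)

for label, f in [("surround (12.5 Hz)", 12.5), ("annulus (16.67 Hz)", 16.67)]:
    amp = spectrum[np.argmin(np.abs(freqs - f))]    # amplitude at nearest bin
    print(f"{label}: {amp:.2f} (arbitrary units)")
```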
63.423 Attentional modulation in perception of speed occurs in the first motion-processing stage
Fumie Sugimoto1 (fsugimoto@kwansei.ac.jp), Akihiro Yagi1; 1Department of Integrated Psychological Science, Kwansei Gakuin University
Recently, there have been studies indicating that attention not only improves performance but also alters subjective perception. Turatto, Vescovi, and Valsecchi (2007) found this phenomenon in perception of speed, reporting that a moving grating presented at an attended position was perceived as moving faster than when presented at a less attended position. In the present study, we investigated the stages of motion information processing to find where this attentional modulation occurs. Motion information processing consists of a stage where local motion is detected and a following stage where integration is performed to generate the perception of coherent motion. We presented plaid patterns that have the same coherent motion but different components and measured the alteration of the plaids’ perceived speed. If attention affects speed perception in the local motion detection stage, the amount of change in perceived speed by attention should differ between the plaids. On the other hand, if attention affects speed perception in the integration stage, there should be no difference between the changed speeds of the plaids. We used a cueing paradigm to manipulate the participants’ attention. After a cue appeared to the left or right of fixation, two plaids were presented at both peripheral positions simultaneously. The participants’ task was to report the plaid that appeared to move faster. As a result, the speeds of plaids were perceived as faster at cued positions. We then calculated and compared the amount of change in perceived speed of each plaid pattern, and found that the amount of changed speed differed between the plaids. These findings indicate that the alteration in perception of speed by attention occurs in the first stage of motion processing, which performs local motion detection.

63.424 fMRI responses in human MT+ depend on task and not the attended surface
Erik Runeson1 (eruneson@u.washington.edu), Geoffrey Boynton1, Scott Murray1; 1University of Washington
Previous neuroimaging studies show that the effects of attention on responses in the human visual cortex depend on the physical properties of an attended surface. For example, responses in MT+ have been shown to increase when attention is directed to a moving surface relative to a static surface. Separate studies demonstrate that responses in particular visual areas are also dependent on the task being performed. For example, responses in MT+ have been shown to increase during a speed discrimination task relative to color discrimination or shape discrimination tasks. We investigated the separable effects of the attended surface and task on responses throughout human visual cortex. Participants viewed two spatially overlapping but perceptually separable surfaces of dots, one continuously moving and one nearly static. In alternating 20-second blocks composed of two-interval forced-choice trials, the participants attended to either the moving or the nearly static field, and performed either a speed discrimination task or a color discrimination task. Consistent with previous findings, responses in early visual areas (V1-V3) did not depend on either the attended surface or the task being performed. However, responses in MT+ did vary with task, showing greater responses during speed discrimination than during color discrimination, but did not depend on whether a moving or a nearly static surface was being attended. The same pattern of responses was demonstrated in unstimulated areas of MT+ (contralateral hemisphere). The results suggest that overall task-dependent response modulation is independent of the physical properties of an attended surface.

63.425 Feature binding signals in visual cortex
Seth Bouvier1 (sbouvier@princeton.edu), Anne Treisman1; 1Psychology, Princeton University
The contributions of visual cortex neurons to feature binding are not well understood. A central issue is distinguishing models of feature binding in which separate feature-coding neurons are linked together from models in which binding is done by neurons that explicitly encode feature conjunctions. Here, we measured brain responses with fMRI as subjects viewed an annulus containing colored, moving dots and detected the presence of target dots defined either by a single feature (color, or direction of motion) or by a conjunction of those features. Each trial consisted of a 500-millisecond stimulus presentation, followed by a one-second response period. Data were acquired in a blocked design; the blocks contained 16 trials and were separated by 25 seconds of rest. Patterns of activity during the three conditions could be distinguished with machine learning algorithms, as early as V1. Analysis of the voxel-based weights output by the classification algorithm revealed that voxels informative for classification of the conjunction task were less informative for the feature tasks, and vice versa. In other words, when subjects searched for a feature conjunction (e.g., red and down), informative cortical locations were different than when subjects searched for the same color (red) or motion direction (down) separately. This result suggests that separate populations of neurons were activated during binding relative to feature detection. A whole-brain analysis revealed involvement of parietal cortex in binding, even though no task explicitly required shifts of spatial attention. Overall, these data are consistent with a feature binding mechanism sensitive to conjunction-coding neurons in early visual cortex.
Acknowledgement: This research was supported by grant 1000274 from the Israeli Binational Science Foundation, and by NIH grants 2RO1MH058383-04A1 and 1RO1MH062331
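The classification analysis in 63.425 is an instance of standard linear decoding of condition labels from voxel patterns. A minimal sketch on invented data, with scikit-learn's logistic regression standing in for whatever classifier the authors actually used:

```python
# Sketch: linear classification of condition labels from voxel patterns,
# in the spirit of the analysis in 63.425. Voxel data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_blocks, n_voxels = 40, 200
labels = np.repeat([0, 1], n_blocks // 2)      # e.g., feature vs conjunction blocks

# Simulated patterns: each condition weakly activates its own voxel subset,
# mimicking the "separate informative voxels" result described above.
X = rng.normal(size=(n_blocks, n_voxels))
X[labels == 0, :50] += 0.5
X[labels == 1, 50:100] += 0.5

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, X, labels, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")

# Voxel-based weights (cf. the weight analysis above): fit once, inspect.
clf.fit(X, labels)
top = np.argsort(np.abs(clf.coef_[0]))[::-1][:10]
print("most informative voxels:", top)
```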
63.426 Spatial cueing effects in perceptual decisions of humans, monkeys, and bees
Stephen C. Mack1 (mack@psych.ucsb.edu), Dorion B. Liston2, Richard J. Krauzlis3, Lisa Bogusch4, Randolf Menzel4, Miguel P. Eckstein1; 1Department of Psychology, University of California, Santa Barbara, 2Human Systems Integration Division, NASA Ames Research Center, 3Salk Institute for Biological Studies, 4Department of Neurobiology, Freie University, Berlin
Although the influence of predictive spatial cues on perceptual decisions has been studied in humans and monkeys, few studies have directly compared cueing effects across species (Bowman et al., 1993). Here, we investigate the effects of spatial cueing and its interaction with target detectability in a similarly structured paradigm across humans, monkeys, and bees, and compare the results to a Bayesian ideal observer. Methods: Humans and monkeys participated in the same spatial two-alternative forced-choice task in which a Gaussian signal of varying detectability (SNRs = 0, 2.7, 4.0) embedded in white noise had to be localized. Subjects indicated the target location by making a rapid eye movement towards it. Prior to the onset of the stimulus, a brief precue was presented indicating the target location with 75% accuracy. Bees were trained to fly to one of two boxes containing a target of colored cardboard. The distractor box contained a similar piece of cardboard with a color that varied in its discriminability from the target (e.g., blue/blue vs. blue/grey). A secondary black cardboard served as a cue and co-occurred with the target on 80% of the trials. Results: Cueing effects, defined as the difference in proportion correct for validly and invalidly cued trials, were present for all three species but smaller than those predicted by an optimal Bayesian observer. These effects were comparable for humans and monkeys, but smaller for bees. However, consistent with ideal observer predictions, cueing effects increased with decreasing detectability of the target for all three organisms. Conclusions: Our results show that the influence of spatial cues on perceptual decisions is pervasive across species. The modulation of the cueing effect with signal strength for all three organisms is consistent with a Bayesian mechanism whereby sensory data are weighted by prior probabilities.
Acknowledgement: National Science Foundation (0819582)
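The Bayesian ideal observer referenced above has a simple form for this task: weight the likelihood at each location by the cue's 75% prior validity and choose the posterior-maximizing location. A minimal simulation sketch; unit-variance Gaussian internal responses and the decision rule below are illustrative assumptions, not the authors' implementation:

```python
# Sketch: a Bayesian ideal observer for the cued 2AFC localization task above.
# For equal-variance Gaussians the log-likelihood ratio for "target at cued
# location" is snr * (r_cued - r_uncued); the cue contributes log(0.75/0.25).
import numpy as np

rng = np.random.default_rng(3)
n_trials, validity = 100_000, 0.75

for snr in (2.7, 4.0):
    cued_has_target = rng.random(n_trials) < validity    # cue points at target?
    r_cued = rng.normal(loc=np.where(cued_has_target, snr, 0.0))
    r_uncued = rng.normal(loc=np.where(cued_has_target, 0.0, snr))
    log_prior = np.log(validity / (1 - validity))
    llr = snr * (r_cued - r_uncued)
    choose_cued = log_prior + llr > 0
    correct = choose_cued == cued_has_target
    valid_pc = correct[cued_has_target].mean()
    invalid_pc = correct[~cued_has_target].mean()
    print(f"SNR {snr}: cueing effect = {valid_pc - invalid_pc:.3f}")
```

As in the abstract, the simulated cueing effect grows as target detectability drops (larger effect at SNR 2.7 than at 4.0).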
63.427 Gaze position-dependent modulation of the primary visual cortex from the eye proprioceptive representation – an offline TMS-fMRI study
Daniela Balslev1 (d.balslev@gmail.com), Tanja Kassuba1; 1Danish Research Centre for Magnetic Resonance, Copenhagen University Hospital, Hvidovre, Denmark
Planned eye movements produce involuntary shifts in attention. The recent discovery of a proprioceptive eye position signal in the somatosensory cortex (Wang et al., 2007; Balslev & Miall, 2008) prompts the question whether static eye position, too, can affect visual attention. TMS over the left somatosensory cortex targets the eye proprioceptive representation, causing an underestimation of the gaze angle of the right eye (Balslev & Miall, 2008) and a pseudo-neglect for visual targets presented far versus near the perceived direction of gaze despite their equal retinal eccentricity. Namely, when the right eye was rotated leftwards and TMS shifted perceived eye position rightwards, visual detection increased in the right visual field and decreased in the left. When the right eye was rotated rightwards and TMS presumably produced an underestimation of this rotation, shifting perceived eye position leftwards, visual detection decreased in the right hemifield (Balslev, Gowen & Miall, unpublished). Here we used fMRI before and after rTMS over the left somatosensory cortex or a motor cortex control area to investigate whether the eye proprioceptive representation is functionally connected to the visual cortex. Participants (n=11) fixated their right eye on a cross located on the screen either to the left or right of the sagittal plane through the right eye. Visual targets appeared to the left or right of fixation at equal retinal eccentricity. In line with our behavioral results, somatosensory TMS significantly increased neural activity in a right parieto-occipital cluster (cluster p

a colored rectangle in a memory array, and memory was tested at the end of the trial by a test array. In Experiment 1, task-irrelevant circles (probe stimuli) were briefly presented between the memory and test arrays. We found that the task-irrelevant circles matching the color of the rectangle in memory did not elicit N2pc but instead elicited Pd. This suggests that the item matching memory was detected but was suppressed before attention could be captured by it. In Experiment 2, a search-target square was presented on 50% of the trials between the memory and test arrays, requiring a button-press response. In the other 50% of the trials, task-irrelevant circles were presented, as in Experiment 1. We found that the target square elicited an N2pc, and N2pc onset latency was earlier if the target matched the color being held in visual working memory. In contrast, the task-irrelevant circles again elicited Pd if they matched the color being held in visual working memory. These results suggest that sensory inputs matching the contents of visual working memory are detected and have an advantage in competing for attention, but these inputs can be either suppressed or facilitated depending on task demands. Thus, items that match memory automatically receive priority for attention, but the actual deployment of attention is flexibly controlled by top-down factors.

Attention: Features and objects
Orchid Ballroom, Boards 431–449
Wednesday, May 12, 8:30 - 12:30 pm

63.431 Multi-level neural mechanisms of object-based attention
Elias Cohen1 (elias.h.cohen@vanderbilt.edu), Frank Tong1; 1Psychology Department, Vanderbilt University
We used fMRI in conjunction with pattern classification to characterize the neural mechanisms responsible for the attentional selection of complex objects. Stimuli consisted of single faces, single houses, and blended images in which the two images were spatially overlapping. Observers performed a same/different discrimination task involving pairs of sequential stimuli. In the blend condition, observers performed the discrimination task for one object type, which required attending selectively to either faces or houses while ignoring the other object type. Functional activity patterns from individual visual areas (V1-V4, fusiform face area, and parahippocampal place area) were used to train a linear classifier to predict the object category that was seen or attended on independent fMRI test blocks. Activity patterns in both high-level object areas and low-level retinotopic areas could accurately distinguish between single faces and houses. More importantly, activity in these areas could reliably distinguish the target of attentional selection in the blended stimulus condition, across all visual areas tested. When classifiers trained on single-object conditions were used to predict the attended object category in the blended condition, decoding performance was high across all areas, indicating wide-ranging effects of object-based attention. Furthermore, block-by-block analysis of the strength of the object-specific attentional bias revealed a strong correlation between bias signals in object-selective areas and low-level areas.
Finally, the object-specific activity patterns found for attended upright objects could effectively generalize to cases when the same object classes were viewed upside down. Results indicate that the object-selection mechanism involves a multi-level bias modulating diverse feature responses throughout the visual pathway. Selecting one of two overlapping objects involves widespread biasing of cortical activity, resulting in activity patterns that resemble the target object viewed in isolation. Such attentional filtering may be essential for flexible, efficient processing in crowded and complex real-world visual settings.
Acknowledgement: NIH R01-EY017082 and NSF BCS-0642633

63.432 Object-Based Attentional Selection is Affected by Visual Search Strategy
Adam S. Greenberg1 (agreenb@jhu.edu), Steven Yantis1; 1Department of Psychological and Brain Sciences, Johns Hopkins University
Object-based selective attention is often assumed to spread automatically throughout any object to which attention is cued. However, Shomstein and Yantis (2002, 2004) reported that object-based attentional modulation arises only when the target location is uncertain, suggesting that attention is deployed only to behaviorally relevant locations on the objects. In a previous presentation (VSS, 2006) we extended these findings by showing that contingent attentional capture (Folk et al., 1992) is object-based only when target location is uncertain. Here we examined the role of visual search strategy in object-based attention. Displays contained two rectangles with 5 RSVP streams (one at fixation and one at each end of the two objects). The four peripheral streams contained all gray letters and the central stream was either all gray (Experiment 1) or multicolored (Experiment 2). Subjects were aware that the central stream contained a red target letter (or green, for half the subjects) on each trial, which they were to identify. One or two frames prior to target onset, a target-colored or nontarget-colored distracter could appear in one of the peripheral RSVP streams (same or different rectangle). Robust contingent capture was observed in both experiments, but its magnitude differed for distracters on the same versus different objects only in Experiment 1. Furthermore, performance on nontarget-colored distracter trials suggested that subjects used a temporal Singleton Detection Mode (SDM) to detect targets in Experiment 1 and Feature Search Mode (FSM) in Experiment 2 (Bacon & Egeth, 1994). Because FSM requires a more specific definition of the target-defining feature than does SDM, the present data show that uncertainty in the target-defining feature (here, during SDM) can evoke object-based attentional modulation, even when target location is certain. Thus, a more general principle of target uncertainty may guide the allocation of attention to objects.
Acknowledgement: Supported by NIH grants F31-NS055664 to ASG and R01-DA13165 to SY

63.433 When two objects are easier than one: Effects of object occlusion
W. Trammell Neill1 (neill@albany.edu), Yongna Li1; 1Department of Psychology, University at Albany, State University of New York
According to theories of “object-based attention”, it should be easier to divide attention between two attributes of one object than between two attributes of two different objects. However, some studies find that two attributes can be compared faster when on separate objects than when on the same object (e.g., Burnham & Neill, 2006; Cepeda & Kramer, 1999; Davis & Holmes, 2005).
This effect appears to occur when the whole objects can be compared more easily than their component parts (Neill, Li & Seror, 2009). The present experiments investigate whether between-object superiority (BOS) can occur for partially obscured objects, via amodal completion. In the first experiment, subjects judged small notches in the ends of one or two rectangular “objects” as same or different in shape (rectangular and/or triangular). The spatial separation between the notches was equal for within- and between-object presentations. A third, perpendicular object appeared to partially occlude either (a) both target objects, (b) only one target object, or (c) neither target object. Reaction times showed BOS if neither object was occluded, and also a smaller but significant effect when only one object was occluded. Surprisingly, there was no within- vs. between-object difference when both objects were partially occluded. The results suggest that BOS can occur for partially occluded objects, but amodal completion may only occur for an occluded object if cued by a non-occluded object. Follow-up experiments investigate the effects of intra-list and intra-trial cueing of amodal completion on between-object superiority. Implications for theories of object-based attention will be discussed.

63.434 The Role of Surface Feature and Spatiotemporal Continuity in Object-Based Inhibition of Return
Caglar Tas1 (caglar-tas@uiowa.edu), Michael Dodd2, Andrew Hollingworth1; 1Department of Psychology, University of Iowa, 2Department of Psychology, University of Nebraska-Lincoln
How is object correspondence established across dynamic changes in the world so that objects are treated as continuous entities? Correspondence could be established by spatiotemporal continuity or by surface feature continuity. We examined the relative contributions of surface feature and spatiotemporal information to object correspondence, within the context of object-based inhibition of return (IOR). In contrast to previous paradigms that depended on conscious report to assess object correspondence, object-based IOR assesses correspondence implicitly by measuring the efficiency of orienting to a previously attended object. In the present experiments, one of two colored array objects was cued, the objects moved to new locations, and participants executed a saccade to either the previously attended or unattended object. We systematically manipulated the objects’ surface feature and spatiotemporal properties to measure the relative contributions of both sources of information to object correspondence. In Experiments 1 and 2, we kept spatiotemporal information consistent but either changed the objects’ colors to new values (Experiment 1) or swapped the objects’ colors (Experiment 2) during the change in spatial position. Object-based IOR was observed despite a color change in Experiment 1 but was eliminated in Experiment 2 when the objects switched surface features, demonstrating that surface feature continuity contributes to object correspondence in IOR. In Experiment 3, we kept surface feature information consistent but eliminated spatiotemporal continuity by removing the linking motion between the original and updated locations. Surface feature continuity was not sufficient to support object-based IOR in the presence of this type of salient spatiotemporal discontinuity. These data indicate that, contrary to the classic view that only spatiotemporal information drives correspondence operations, object persistence in dynamic displays is also computed on the basis of surface feature continuity. However, spatiotemporal features may be weighted more heavily than surface features in this paradigm.

63.435 Attentional control settings can be object-based
Stacey Parrott1 (staceyparrott2014@u.northwestern.edu), Brian Levinthal1, Steven Franconeri1; 1Northwestern University
We are able to guide attentional selection toward features that are relevant to our current goals. Recent work shows that independent features can be preferentially selected in independent locations of the visual field (Adamo, Pun, Pratt, & Ferber, 2008). The present study demonstrates that our ability to control attentional selection is even more sophisticated, such that we can select for different features of different objects, even when those objects share the same spatial location. To test whether or not participants could hold separate attentional control settings for objects that occupy essentially the same spatial region, participants viewed two object outlines (a horizontal and a vertical rectangle) that between conditions were either spatially separated or overlapped in space. Participants were instructed to press a key whenever a target color was presented within a specific rectangle, such as green within the horizontal object, or blue within the vertical object. Before the target appeared, one outline was cued with either green or blue. This cue could match the color, orientation, or both color and orientation of the subsequent stimulus. For spatially separated objects, results confirmed past results showing independent control settings for objects at different locations (Adamo et al., 2008). Moreover, the effect was equally strong (if not stronger) when the objects overlapped in space, suggesting that observers can form complex attentional control settings for different objects, even when those objects appear at the same location. A control experiment showed that this effect was not due to simple visual or response priming following a compatible cue. We suggest that these control settings are determined by the contents of visual working memory. That is, simultaneously storing the attentional control settings for “horizontal/green” and “vertical/blue” may require a memory representation of a horizontal green and a vertical blue object.

63.436 Studying Object-Based Attention with a Steady/Pulsed-Pedestal Paradigm
Benjamin A. Guenther1 (benguenther@gmail.com), James M. Brown1, Shruti Narang1, Aisha P. Siddiqui1; 1University of Georgia
The steady/pulsed-pedestal paradigm has been shown to be an effective manipulation of relative magnocellular (M) and parvocellular (P) activity (e.g., Leonova, Pokorny, & Smith, 2003; McAnany & Levine, 2007).
However, this manipulation has primarily been used with contrast sensitivity measures. The purposes of the present study were to evaluate the effectiveness of this manipulation using a simple reaction time (RT) measure and then to test previous findings showing specific influences on space- and object-based attention under M- and P-biased conditions. Cuing studies investigating object-based attention have shown the cost for shifting attention within an object is less than for equidistant shifts between two objects (object advantage = within-object RTs < between-object RTs).
A growing body of research has shown that spatial attention alters stimulus appearance on a number of dimensions, including contrast (Carrasco, Ling, & Read, 2004), spatial resolution (Gobell & Carrasco, 2005), and size (Anton-Erxleben, Henrich, & Treue, 2007). Here we explored whether feature-based attention would also influence perceived spatial resolution, and whether its influence would mirror that of spatial attention. Each trial began with a brief presentation of a colored cue in order to direct feature-based attention to that cue’s color. Following the cue presentation, a stimulus display was presented consisting of two differently-colored Landolt squares, one of which matched the cue color on 66% of the trials. Participants performed a two-alternative forced-choice discrimination task requiring them to provide a response indicating which of the two Landolt squares possessed the larger gap. Our results suggested that the effect of feature-based attention on perceptual experience may be different than that of spatial attention.
Acknowledgement: Natural Sciences and Engineering Research Council

63.440 Feature-based attention enhances motion processing during dominance and suppression in binocular rivalry
Miriam Spering1 (mspering@cns.nyu.edu), Marisa Carrasco1; 1Department of Psychology & Center for Neural Science, New York University
Goal. Feature-based attention enhances the neuronal and perceptual representation of consciously perceived visual objects. But can feature-based attention also modulate responses to stimulus properties we are unaware of? Here we use binocular rivalry flash suppression to compare the effects of feature-based attention on perceptual and eye movement responses to visual motion in the presence and absence of visual awareness. Method. Stimuli were two orthogonally oriented, leftward or downward moving sine-wave gratings presented separately to each eye through a stereoscope. We used binocular rivalry flash suppression to manipulate stimulus visibility, rendering one stimulus dominant and the other suppressed. In each trial, the speed of either the dominant or suppressed stimulus briefly increased or decreased for 50 ms. Attention was directed to either leftward or downward motion by a 75%-valid arrow cue. In two judgments, observers indicated (1) whether the speed change was an increment or decrement and (2) their perceived motion direction. Eye movements were recorded throughout each trial. Results. Feature-based attention affected (1) speed discrimination and (2) motion processing. (1) Performance in the speed change discrimination task was better for the dominant than the suppressed stimulus. It was also better for attended than for unattended stimuli, regardless of whether the stimulus was dominant or suppressed. These findings indicate that observers successfully deployed attention to either leftward or downward motion. (2) When the suppressed stimulus was attended, perceptual and eye movement responses were shifted towards the motion direction of the suppressed stimulus.
This study therefore shows that feature-based attention can affect motion processing, as assessed by perception and eye movements, even when a stimulus is substantially weakened through binocular rivalry.
Acknowledgement: German Research Foundation 1172/1-1 (MS), NIH RO1 EY016200 (MC)

63.441 Measuring the spatial spread of feature-based attention to orientation
Alex White1 (alex.white@nyu.edu), Marisa Carrasco1,2; 1Department of Psychology, New York University, 2Center for Neural Science, New York University
Goal: Feature-based attention (FBA) enhances the processing of stimuli that are spatially coextensive with distractors that must be ignored. For some feature dimensions (e.g., color and motion), FBA increases sensitivity to the feature value that distinguishes the target – both at the target location and, remarkably, at ignored or unstimulated locations across the visual field. Several studies have demonstrated location-specific effects of attention to an oriented target that overlaps with a differently-tilted distractor, but there is little evidence for a spatial spread of orientation-based attention.
Methods: We developed a novel psychophysical paradigm to measure the spread of orientation-based attention to peripheral locations where the attended feature is irrelevant. At the center of the display, a group of leftward-tilted white lines overlapped with a group of rightward-tilted white lines. Each of these groups pulsed in luminance at random times throughout the trial. The primary task was to count the number of pulses in one orientation while ignoring the other. Also present in the display were groups of non-overlapping lines arranged in an annulus at 4.5° eccentricity. Half of these groups were tilted leftwards and half rightwards, and, independently, half were red and half green. Once per trial, a randomly selected peripheral group pulsed in luminance, and the observers’ secondary task was to report its color. Importantly, that group’s orientation was not correlated with the orientation of the central target.
Results: Performance in the secondary task was better when the peripheral target’s orientation matched the orientation attended at fixation. Thus, heightened sensitivity to the attended orientation spread to peripheral locations where observers were not instructed to selectively attend and where doing so conferred no advantage. This new paradigm also allows us to investigate the spatial and temporal limits of the involuntary spread of FBA.
Acknowledgement: National Institute of Health Research Grant RO1 EY016200

63.442 The spread of attention across features of a surface
Zachary Raymond Ernst1 (zernst@u.washington.edu), Geoffrey M. Boynton1, Mehrdad Jazayeri2; 1Department of Psychology, University of Washington, 2HHWF, Physiol. & Biophys., University of Washington
Question: Does attending to a single feature within a surface facilitate perceptual judgments about other features of that surface? Methods: Stimuli were composed of two superimposed fields of dots, each associated with a color and a direction of motion. For each surface, each feature changed slowly and independently along a smooth trajectory. Surfaces thus were defined by time-varying conjunctions of motion and color in feature space. While fixating, observers tracked a single feature (color or motion) of one of the two surfaces and reported discontinuities along its trajectory with button presses. Each trial lasted 10 s and contained 0-3 such discontinuities per field.
On approximately 70 percent of the trials, a single discontinuity was also introduced in the untracked feature of either surface with equal probability. At the end of a trial the number of misses and false alarms was provided, and a 3AFC paradigm was used to assess whether observers detected a discontinuity in an untracked feature, and if so, in which surface. Results: Observers were better at detecting discontinuities in the untracked feature when they occurred in the tracked surface compared to the untracked surface. For example, while tracking the motion of one surface, subjects were better at detecting color discontinuities in that surface compared to color discontinuities in the other surface. Conclusion: Our results suggest that attending to a feature of a surface recruits attentional mechanisms associated with other features of that surface. Our preliminary results using fMRI suggest that this feature-tracking task may lead to enhanced modulation of hemodynamic responses associated with multiple features of an attended surface.
Acknowledgement: NIH EY12925

63.443 Feature Exchange: the unstable contribution of features in the maintenance of objects moving along ambiguous trajectories
Arthur Shapiro1 (arthur.shapiro@american.edu), Gideon Caplovitz2,3; 1Department of Psychology, American University, 2Department of Psychology, Princeton University, 3Princeton Neuroscience Institute
We introduce a novel visual phenomenon in which two objects traversing distinct motion trajectories seemingly “exchange” their defining features with each other. We demonstrate feature exchange using an ambiguous motion display in which the motion of two translating objects is consistent with the objects colliding or passing through each other (Metzger, 1938). We observe that when the two objects have different features (e.g., colors, textures, orientations, sizes, faces), collisions may still be perceived, but the features appear to un-bind from one object and bind to the other. The un-binding and re-binding of the features is quite compelling and occurs with both simple (color, size) and complex (texture, faces) features. This dissociation of feature information and motion trajectory suggests a limited (perhaps non-existent) role of feature information in the spatiotemporal maintenance of object representations. In a series of psychophysical experiments, we use the occurrence of feature exchange as an empirical tool for testing this hypothesis. Methods: Observers reported whether they perceived a pass-through or a collision when presented with displays in which the two objects were defined by either the same feature or by one or more different features. We compared the percentage of reported collisions on same-feature and different-feature trials. Results: Although collisions were reported when the two objects differed on one or more feature dimensions (feature exchange), the percentage of these trials was smaller than when the two objects were defined by the same features. Conclusions: Although the existence of feature exchange demonstrates that features do not fully mediate the spatiotemporal maintenance of object representations, they do contribute to it. Feature exchange depends critically on stimulus parameters, such as contrast relative to the background, differences in the relative size of the objects, and whether or not the objects overlap at the point of collision.

63.444 Global feature-based attention distorts feature space
Marc Zirnsak1 (zirnsak@psy.uni-muenster.de), Fred Hamker1,2; 1Psychologisches Institut II, Westf. Wilhelms-Universität Münster, Germany, 2Informatik, Künstliche Intelligenz, TU Chemnitz, Germany
Selective visual attention is generally conceptualized to control the flow of information with respect to the task at hand. Various studies in the space-based and feature-based domains have demonstrated that the visual system achieves this via gain-control mechanisms. These mechanisms are supposed to result in an enhanced neural representation of relevant stimuli or features while irrelevant ones are suppressed. For example, attending to a specific direction of motion results in an enhanced response of those neurons whose tuning characteristics match the attended direction, while neurons that prefer the opposite direction are inhibited. However, some models predict that attention can alter the perceived feature as well (e.g., Hamker, Adv. Cogn. Psychol., 2007). Our study provides psychophysical evidence that global feature-based attention does not simply result in a more salient representation of attended features, but is able to dynamically alter the identity of an encoded stimulus in feature space. While subjects attended to the direction of a target random-dot kinematogram (RDK) presented in one hemifield, another adaptor RDK was presented in the opposite hemifield. After a certain time, subjects indicated the direction of the perceived motion aftereffect of the unattended adaptor RDK. We observed that directions close to the attended target are attracted (32 degrees on average) while directions farther away are repelled (29 degrees on average), resulting in an effective expansion of the feature space between these directions. We explain these effects by model simulations in which gain modulations lead to distortions of the neural population responses if the adaptor direction differs from that of the target. Furthermore, consistent with recent electrophysiological observations, this model predicts changes of tuning curves for cells that are driven by these distorted responses, with the consequence that more cells are recruited to process the attended feature.
Acknowledgement: F. H. was supported by the German Science Foundation (DFG HA 2630/4-1), the European Commission (FP7-ICT: Eyeshots), and the Federal Ministry of Education and Research (BMBF 01GW0653)
63.445 Attention is Directed by Prioritization in Cases of Certainty
Alexandra Fleszar1 (aflesz@gmail.com), Anna Byers2, Sarah Shomstein1; 1George Washington University, 2University of California, San Diego
While some recent studies suggest that object-based attentional selection is driven by spatial uncertainty of target location (Shomstein & Yantis, 2002, 2004), other studies suggest that target-to-object relationship, rather than uncertainty, is the determining factor (Chen & Cave, 2006; Richard, Lee, & Vecera, 2008). In the present study we re-evaluate the contribution of spatial uncertainty to object-based effects as well as examine the interaction of uncertainty with target-to-object relationship. In a series of four experiments we manipulated uncertainty and target-to-object relationship to examine the extent to which target uncertainty contributes to object-based attention. Participants were presented with a series of single and multiple objects that contained “bites” (concavities), and were asked to perform a shape discrimination task in two conditions: (i) when target location was known in advance (certainty), and (ii) when target location was uncertain. In addition, the properties of the “bites” were manipulated: they were either interpreted as being part of the object or as having been placed onto the object. We observed object-based effects when target location was uncertain, and only if the targets were interpreted as being a part of the object. On the other hand, when target location was known in advance, or when the target shapes appeared to be independent of the object (i.e., not a part of it), object-based effects disappeared. These results re-instate the importance of location uncertainty in object-based attentional guidance, and suggest that object properties have to be task-relevant if they are to influence object-based effects.

63.446 Cue Position Alters Perceived Object Space
Francesca C. Fortenbaugh1 (fortenbaugh@berkeley.edu), Lynn C. Robertson1,2; 1Department of Psychology, University of California, Berkeley, 2VA Northern California Health Care System
Brief visual cues are often used to induce involuntary shifts of attention to locations away from the center of gaze. However, it is not fully understood how displacements of the focus of attention alter visual processing and, in particular, the perceived structure of objects in the visual field. The present study addressed this question by presenting cues that were either within or outside the contour of a subsequently presented oval and measuring how cue placement altered the perceived shape of the ovals. On every trial, two white dots were briefly presented as cues (50 ms) at equal eccentricities along the horizontal or vertical meridian (8° or 14°). Following a 100-ms ISI, one of fifteen blue ovals was presented for 100 ms. The ovals were centered at fixation and had horizontal radii of 5°, 11°, or 14°. The height of the ovals was +0%, ±5%, or ±10% of the horizontal radius. Participants responded after every trial whether the oval was wider or taller than a perfect circle. The cues were paired with ovals such that cue positions (inside/outside contour; horizontally/vertically aligned) were uninformative of which dimension of the oval was larger. There were 40 cue-oval combinations and 25 repeats per configuration. There was a significant Cue Side x Cue Configuration x Oval Height interaction, showing that when the cues were located within the oval contour, the percentage of “taller” responses increased for vertically aligned cues and decreased for horizontally aligned cues, relative to when the cues were placed outside the oval contour. However, this pattern of responses was only seen for the middle Oval Heights (0% = circle) and not the most extreme (e.g., ±10%). This double dissociation cannot be explained by a simple response bias and suggests that the relative position of cues can systematically alter subsequent processing of an object’s spatial structure.
Acknowledgement: NSF GRF (F.C.F.) and NIH #EY16975 (L.C.R.)

63.447 Neural representation of targets and distractors during object individuation and identification
Su Keun Jeong1 (skjeong@fas.harvard.edu), Yaoda Xu1; 1Department of Psychology, Harvard University
Many everyday activities, such as driving on a busy street, require the encoding of multiple distinctive target objects among distractor objects. To explain how multiple visual objects are attended and perceived, the neural object file theory argues that our visual system first selects a fixed number of about four objects from a crowded scene based on their spatial/temporal information (object individuation) and then encodes their details (object identification) (Xu & Chun, 2009, TICS). In particular, while object individuation involves the inferior intra-parietal sulcus (IPS), object identification involves the superior IPS and higher visual areas such as the lateral occipital complex (LOC). Because task-irrelevant distractor objects were never present in previous studies, it is unclear how distractor objects are processed and whether they influence the encoding of target objects during object individuation and identification. In the current fMRI study, we asked observers to encode target shapes among distractor shapes, with targets and distractors defined by two different colors. If distractors can be ignored based on their color before objects are individuated and identified, then the presence of distractors should have minimal impact on fMRI responses in the inferior and superior IPS and the LOC. However, if distractors are automatically individuated or even identified, then neural responses in the relevant brain areas should be modulated accordingly. In a third possibility, if irrelevant information is only encoded when central processing resources are unfilled, as argued by the perceptual load theory, then distractors will be processed under low but not under high target load. Consistent with the perceptual load theory, we found that distractors only affected neural responses under low, but not under high, target load. Moreover, the presence of distractors under low target load had different effects on fMRI responses in the inferior and superior IPS and the LOC.
Acknowledgement: This research was supported by NSF grant 0855112 to Y.X.

63.448 Number and Area Perception Engage Similar Representations: Evidence from Discrimination Tasks
Darko Odic1 (darko.odic@jhu.edu), Ryan Ly1, Tim Hunter2, Paul Pietroski2, Jeffrey Lidz2, Justin Halberda1; 1Department of Psychological and Brain Sciences, Johns Hopkins University, 2Department of Linguistics, University of Maryland, College Park
The present research addresses the long-standing question of the similarity between the perception of individual “count” objects (“three cats”) and continuous “mass” objects (“some sand”). Such distinctions have been important in studies of higher cognition (e.g., linguistics) and basic visual representation (e.g., ensemble features). Can mass and count be represented in a shared format? A prominent theory by Chierchia (1998; Natural Language Semantics, 6, 339-405) proposes that mass and count representations share a similar format as ensembles of discrete individuals. This view, in concert with certain theories of mid-level vision (Pylyshyn, 1989; Cognition, 32, 65-97), takes objects to be the foundation upon which higher cognition is built. Our study tested whether number and area perception share a visual format through sets of individuals or through noisy continuous approximations. In Experiment 1, 20 adults were presented with two sets of dots for 150 ms on each trial and were asked to evaluate which set was greater in number (see Figure 1). An additional 20 subjects saw, for 150 ms, images of 2-D blobs made up of two colors, and had to judge which color had the greater area (see Figure 2). A classic psychometric curve derived from a Gaussian subtraction model was highly correlated with the observed data for each condition (R2 > 0.90), suggesting that the representations of number and area share a similar format through noisy approximations. Therefore, the underlying representation, for vision, of both count and mass objects is not a set of discrete individuals, but a continuous Gaussian variable (or a Bayesian probability distribution). These findings speak not only to the psychophysics of area and number discrimination in adults, but also contribute to long-standing debates in lexical semantics, as well as to the relationship between vision and language and the extraction of ensemble features.
Acknowledgement: NSERC PGS-M Fellowship
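The "psychometric curve derived from a Gaussian subtraction model" mentioned above has a standard closed form: the two magnitudes are represented as Gaussians whose widths scale with magnitude, and accuracy is the probability that their difference lands on the correct side of zero. A minimal sketch (the Weber fraction value below is an illustrative assumption, not the fitted estimate):

```python
# Sketch: predicted percent correct for discriminating two magnitudes under a
# Gaussian (Weber-scaled) representation, as in the model described above.
# p(correct) = Phi( (n1 - n2) / (w * sqrt(n1^2 + n2^2)) ), w = Weber fraction.
import math

def percent_correct(n1: float, n2: float, w: float = 0.15) -> float:
    """Predicted accuracy for n1 > n2; w = 0.15 is an illustrative value."""
    z = (n1 - n2) / (w * math.sqrt(n1**2 + n2**2))
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))   # standard normal CDF

for ratio in (2.0, 1.5, 1.25, 1.1):                 # easy -> hard ratios
    print(f"ratio {ratio:4.2f}: p(correct) = {percent_correct(10 * ratio, 10):.3f}")
```

Because the same ratio-dependent curve fits both the dot-number and blob-area data, the one free parameter w can be compared directly across the two tasks.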
Participantsviewed dynamic 2D displays that each included several repeating eventswherein rectangles either moved into or behind containers. Occasionally,the moving rectangles would change either their height or width while outof sight, and observers pressed a key when they detected such changes.Change detection performance mirrored the developmental results: detectionwas significantly better for width changes than for height changes incontainment events, but no such difference was found for occlusion events.This was true even though many observers did not report noticing thesubtle difference between occlusion and containment. These results suggestthat event-type representations are a part of the underlying currencyof adult visual cognition.Spatial vision: MaskingOrchid Ballroom, Boards 450–461Wednesday, May 12, 8:30 - 12:30 pm63.450 Visual performance fields in noiseJared Abrams 1 (jared.abrams@nyu.edu), Marisa Carrasco 1,2 ; 1 Department ofPsychology, New York University, 2 Center for Neural Science, New YorkUniversityGoal: Here we investigated the changes in visual performance fields withexternal noise to determine whether the observed asymmetries are due tosensitivity or internal noise. Contrast sensitivity (CS) varies at isoeccentriclocations. Specifically, CS is higher on the horizontal (East and West) thanthe vertical (North and South) meridian. Along the vertical meridian, CSis higher in the South than the North (e.g. Carrasco, Talgar, & Cameron,2001). Generally, perceptual performance is a conjoint measure of the sensitivityto the signal and the amount of internal noise. Examining CS as afunction of noise contrast reveals the equivalent input noise of the visualsystem. This value indicates the level of external noise at which internalnoise ceases to dominate signal processing (Pelli & Farell, 1999).Method: Observers performed an orientation discrimination task on Gaborsof varying contrast at four isoeccentric locations (North, South, East, andWest). The Gabors were embedded in Gaussian white noise of varyingcontrast. We measured the 75% performance threshold for each locationand noise contrast and plotted threshold versus noise contrast functionsfor each location.Results: Without external noise, thresholds were lowest in the East andWest, higher in the South, and highest in the North. However, as noisecontrast increased, thresholds increased the most at the East and West, lessin the South, and least in the North. Thus, the equivalent input noise islowest on the horizontal meridian, higher in the South, and highest in theNorth. At a high enough level of external noise, the thresholds for all fourlocations were the same. These data suggest that the differences in processingbetween the two meridians and within the vertical meridian are due tovariations in internal noise across the visual field.Acknowledgement: NIH EY016200 to MC and NIH T32 EY007136 to NYU63.451 Orientation uncertainty reveals different detection strategiesin noiseRemy Allard 1 (remy.allard@umontreal.ca), Patrick Cavanagh 1 ; 1 Centre Attention &<strong>Vision</strong>, Laboratoire Psychologie de la Perception, Université Paris DescartesWidely used external noise paradigms are based on the assumption thatadding external noise quantitatively affects performance but does notqualitatively affect the processing strategy. 
63.451 Orientation uncertainty reveals different detection strategies in noise
Remy Allard1 (remy.allard@umontreal.ca), Patrick Cavanagh1; 1Centre Attention & Vision, Laboratoire Psychologie de la Perception, Université Paris Descartes
Widely used external noise paradigms are based on the assumption that adding external noise quantitatively affects performance but does not qualitatively affect the processing strategy. However, we recently found evidence in a crowding task that some forms of external noise do change processing strategies at higher levels, and we confirm this effect of noise on strategy here using an uncertainty-reduction procedure. Specifically, we measured the impact of orientation uncertainty (fixed versus randomized orientation) on contrast detection thresholds of a sine-wave grating in three different noise conditions: noiseless, spatiotemporally localized noise (i.e., signal and noise shared the same spatiotemporal window), and spatiotemporally extended noise (i.e., continuously displayed full-screen dynamic noise). In the no-noise and extended-noise conditions, knowing the orientation of the signal to detect had little impact on performance (orientation uncertainty increased contrast thresholds by 4% and 8%, respectively), suggesting that detection was not based on an orientation recognition strategy but rather mediated by an energy-based strategy, as is generally assumed. In contrast, knowing the orientation of the signal substantially improved performance in the localized-noise condition (orientation uncertainty increased contrast thresholds by 28%), suggesting that detection was based on an orientation recognition strategy. We conclude that spatiotemporally localized external noise can qualitatively affect the processing strategy. Our results suggest that external noise paradigms should use only spatiotemporally extended dynamic noise in order to match the likely characteristics of the internal noise and avoid triggering qualitative changes in the processing strategy. These results raise questions about the validity of the conclusions of many previous studies using external noise paradigms with localized external noise.
Acknowledgement: This research was supported by an FQRNT post-doctoral fellowship to RA and a Chaire d’Excellence grant to PC
63.452 Mutual effects of orientation and contrast within and between the eyes: from summation to suppression
John Cass1 (johncassvision@gmail.com); 1School of Psychology, University of Western Sydney
Previous masking studies estimate the bandwidths of orientation-selective mechanisms from the rate at which threshold elevations decrease as a function of the angular difference between spatio-temporally overlaid target and masking stimuli, with many finding that bandwidths decrease with increasing spatial frequency. In this study we examine, in detail, the effects of relative phase, contrast, and the ocular mode of presentation (monoptic vs. dichoptic) on the relationship between orientation masking and spatial frequency. Results: When relative phase was randomized across trials, we observed masking functions similar to those observed previously. When relative phase was blocked, however (i.e., in phase or anti-phase), monoptic thresholds increased up to ~10 degrees of angular difference, then decreased monotonically out to 90 degrees. This qualitative pattern, evident at all mask contrasts (0.5–64 x threshold), is well fitted by an orientation-defined Difference of Gaussians (DoG) model, consisting of narrowband summation (~5-10 degrees) combined with a broader suppressive component which becomes narrower with increasing spatial frequency. Interestingly, identical non-monotonic patterns were observed dichoptically at low mask contrasts (0.5–~8 x threshold). At higher mask contrasts, however, the sign of the narrowband component reversed to produce what appeared to be narrowband inter-ocular suppression (Baker & Meese, 2007), whilst the broader suppressive component remained almost identical to that derived monoptically. These results indicate that human orientation channels laterally inhibit one another at a neural locus that receives binocular input, with the extent of lateral inhibition decreasing with spatial frequency. Whilst narrowband summation is evident within and between the eyes, narrowband suppression results when inter-ocular contrast differences are high, possibly to reduce inter-ocular redundancy.
Acknowledgement: Australian Research Council Discovery Project (DP0774697)
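An orientation-domain Difference-of-Gaussians model of the kind fitted above can be written directly as a broad suppressive Gaussian minus a narrowband summation component over target-mask orientation difference. The parameterization below is purely illustrative (not the fitted values), but it reproduces the non-monotonic shape described: elevation rising to a peak within roughly 10-20 degrees and declining toward 90:

```python
# Sketch: orientation-domain DoG masking model, as described above.
# In this sign convention the narrow summation component LOWERS thresholds
# (a phase-aligned near-orientation mask aids detection), so the function is
# non-monotonic. Widths and weights are illustrative assumptions.
import numpy as np

def threshold_elevation(dtheta_deg, w_sup=1.0, sd_sup=30.0, w_sum=0.6, sd_sum=6.0):
    """Broad suppression minus narrowband summation (arbitrary units)."""
    d = np.asarray(dtheta_deg, dtype=float)
    suppression = w_sup * np.exp(-d**2 / (2 * sd_sup**2))   # broad component
    summation = w_sum * np.exp(-d**2 / (2 * sd_sum**2))     # narrow component
    return suppression - summation

for d in (0, 10, 20, 45, 90):
    print(f"delta-theta {d:2d} deg: elevation = {threshold_elevation(d):+.3f}")
# Fitting the four parameters per spatial frequency (e.g., with
# scipy.optimize.curve_fit) would mirror the analysis described above; the
# dichoptic high-contrast case corresponds to flipping the sign of w_sum.
```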
63.453 Effects of arbitrary structural choices on the parameters of early spatial vision models
Hannah M.H. Dold 1 (hannah.dold@bccn-berlin.de), Sven Dähne 1, Felix A. Wichmann 1; 1 Modelling of Cognitive Processes, Berlin Institute of Technology and Bernstein Center for Computational Neuroscience, Berlin, Germany
Typical models of early spatial vision are based on a common, generic structure: First, the input image is processed by multiple spatial frequency and orientation selective filters. Thereafter, the output of each filter is non-linearly transformed, either by a non-linear transducer function or, more recently, by a divisive contrast-gain control mechanism. In a third stage noise is injected and, finally, the results are combined to form a decision. Ideally, this decision is consistent with experimental data (Legge and Foley, 1980, Journal of the Optical Society of America, 70(12), 1458-1471; Watson and Ahumada, 2005, Journal of Vision, 5, 717-740).
Often a Gabor filter bank with fixed frequency and orientation spacing forms the first processing stage. These Gabor filters, or Gaussian derivative filters with suitably chosen parameters, both visually resemble simple cells in visual cortex. However, model predictions obtained with either of those two filter banks can deviate substantially (Hesse and Georgeson, 2005, Vision Research, 45, 507-525). Thus, the choice of filter bank potentially influences the fitted parameters of the non-linear transduction/gain-control stage as well as the decision stage. This may be problematic: In the transduction stage, for example, the exponent of a Naka-Rushton type transducer function is interpreted to correspond to different mechanisms, e.g., a mechanism based on stimulus energy if it is around two.
Here we systematically examine the influence of arbitrary choices regarding filter bank properties (the filter form, number, and additional parameters) on the psychophysically interesting parameters at subsequent stages of early spatial vision models. We reimplemented different models within a Python modeling framework and report the modeling results using the ModelFest data (Carney et al., 1999, DOI:10.1117/12.348473).
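The generic first stage described above is easy to sketch. The abstract mentions a Python modeling framework; the code below is not that framework, but a minimal self-contained illustration of a fixed Gabor filter bank, with arbitrary choices for filter count, spacing, and envelope width, which are exactly the kind of structural choices the study examines. A transducer or gain-control stage would then be applied to these responses before the decision stage.

import numpy as np

def gabor(size, sf, theta, phase=0.0, sigma=None):
    """One Gabor filter: sf in cycles per image, theta in radians.
    The envelope width (size/6) is an arbitrary illustrative choice."""
    sigma = sigma or size / 6.0
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * sf * xr / size + phase)
    return envelope * carrier

def filter_bank_responses(image, sfs=(2, 4, 8), n_orient=8):
    """First stage of a generic early-vision model: dot products of the
    image with a fixed grid of Gabor filters (spacing chosen arbitrarily)."""
    responses = {}
    for sf in sfs:
        for k in range(n_orient):
            theta = k * np.pi / n_orient
            responses[(sf, k)] = float(np.sum(image * gabor(image.shape[0], sf, theta)))
    return responses

img = np.random.default_rng(0).standard_normal((64, 64))
out = filter_bank_responses(img)
print(len(out), "filter responses; sample:", out[(4, 0)])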
63.454 On the fate of missed targets
Dov Sagi 1 (dov.sagi@weizmann.ac.il), Andrei Gorea 2; 1 Department of Neurobiology, The Weizmann Institute of Science, Rehovot, Israel, 2 Laboratoire Psychologie de la Perception, Université Paris Descartes & CNRS, Paris, France
Signal Detection Theory assumes no sensory threshold, that is, an internal response monotonically increasing with stimulus strength, available for decision. While a wealth of empirical observations unfailingly sustained this principle, the debate on the existence of a sensory threshold persists. An educated SDT intuition is that an identification response to a non-detected stimulus (Miss) should yield above-chance performance. A high threshold theory predicts chance performance. Using a single-presentation double-task paradigm, consisting of independent detection (Yes/No) and identification (2AFC) reports, data of three highly trained observers show that orientation (±45°) as well as position (±2° eccentricity) discrimination performance for a missed Gabor patch (5 cpd) is very close to chance for criteria as high as 1.3 noise units (σ) above mean noise level (d' up to 2.5). To compare the data with SDT predictions, we make the standard assumption that observers are monitoring two independent neuronal populations corresponding to the two possible targets in a given task. A 'Yes' report in the detection task occurs when a response from either of the two populations crosses a criterion level; an identification report corresponds to the identity of the population producing the largest response ('labelled line'). Receiver Operating Characteristics of these populations were estimated using rating experiments, with the experimental results conforming to earlier reports (σ = 1 + r/4, with r the signal-evoked mean internal response; Green & Swets, 1966; Graham, 1989). SDT predictions using these ROCs match observers' identification rate for Hit trials (r² = 0.9) but exceed by far the measured identification rate for Miss trials (r² = 0.04). While the rating data support the availability of a continuous internal response, the detection/identification data point to the fact that decision criteria in a Yes/No task are more akin to a high threshold inasmuch as they are considered to be boundaries for 'invisible' events.
Acknowledgement: Mairie De Paris
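The SDT prediction being tested here can be reproduced with a short simulation. This sketch assumes the labelled-line model described in the abstract (two independent populations, 'Yes' if either crosses the criterion, identification by the larger response) and the reported ROC rule σ = 1 + r/4; the signal strength r = 1.5 is an arbitrary illustrative value. It shows the above-chance identification on Miss trials that SDT predicts, and that the observers' data did not show.

import numpy as np

rng = np.random.default_rng(2)
N = 100_000
r = 1.5          # mean internal response evoked by the signal (illustrative)
criterion = 1.3  # detection criterion in noise-sd units, as in the abstract

# ROC rule from the abstract: the stimulated population's sd grows with
# signal strength (sigma = 1 + r/4); the unstimulated one has unit sd.
resp_a = rng.normal(r, 1 + r / 4, N)   # population tuned to the shown target
resp_b = rng.normal(0, 1, N)           # population tuned to the other target

detected = (resp_a > criterion) | (resp_b > criterion)   # Yes/No report
identified = resp_a > resp_b                             # labelled-line 2AFC

for label, mask in [("hits", detected), ("misses", ~detected)]:
    print(f"identification given {label}: {identified[mask].mean():.3f}")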


63.455 Do illusory contours prevent spatial interactions?
Lynn Olzak 1 (olzakla@muohio.edu), Patrick Hibbeler 1, Thomas Wickens 2; 1 Department of Psychology, Miami University of Ohio, 2 Department of Psychology, University of California, Berkeley
In the third of a series of concurrent-response experiments examining decision processes in fine spatial discriminations, we have investigated the effect of creating an illusory contour between a center grating patch and a surrounding grating annulus by shifting the relative phase of the two by 180 deg. In single-response paradigms, the phase shift appears to render the two stimulus parts independent (Saylor & Olzak, 2006). Four stimuli were intermingled into a single session of 160 trials. In different conditions of the experiment, observers either made hyperacuity-level orientation or spatial frequency judgments. Both the center and surround grating patches contained a cue to discrimination; for example, left-left, right-right, left-right and right-left. Observers made two consecutive judgments, one on the center and one on the surround (order was counterbalanced across observers), rating their confidence on a 6-point scale that either Pattern A (e.g., left) or Pattern B (e.g., right) was presented in that part of the stimulus. Like previous results found with in-phase abutting stimuli and those separated by a tiny gap, independent processing was strongly rejected. Observers appeared once again to be using a complex decision strategy in which they first classified the stimulus as of the "same" type (left-left or right-right) or of the "different" type (left-right or right-left), and then used the rating scale in a correlated or anti-correlated way for the two decisions. We discuss these findings in the context of General Recognition Theory (Ashby & Townsend, 1986). Saylor, S. A. and Olzak, L. A. (2006) Contextual effects on fine orientation discrimination tasks. Vision Research, 46(18), 2988-2997. Ashby, F. G. and Townsend, J. T. (1986) Varieties of perceptual independence. Psychological Review, 93, 154-179.

63.456 Orientation Dependence of Contextual Suppression Derived from Psychophysical Reverse-Correlation
Christopher A. Henry 1, Michael J. Hawken 1; 1 Center for Neural Science, New York University
The ability to detect a visual target can be affected, often quite dramatically, by the spatial context surrounding it. It is thought that the spatial summation and extra-classical receptive field (eCRF) properties of visual cortical neurons underlie this perceptual metacontrast masking. Orientation-dependent contextual effects have been reported in both perceptual masking and neurophysiology, and it is natural to compare the two.
We have previously investigated the tuning properties of the eCRFs of neurons in macaque V1 (Henry et al., SFN 2008). Using a subspace reverse-correlation paradigm, we measured the temporal impulse-responses of neurons to the presentation of oriented gratings in their eCRFs. In general, the eCRF stimuli suppressed the neurons' responses, most often with the strongest suppression occurring for collinear stimuli.
Here, we measured the orientation dependence of metacontrast masking psychophysically using a similar reverse-correlation paradigm. Parafoveally presented stimuli consisted of an outer annulus – the surround – and an inner patch. A vertical near-threshold contrast grating target was briefly (24 ms) and randomly presented in the inner patch at a rate of 2 Hz. A continuous sequence of gratings was presented in the outer annulus (18 total orientations, changing every 24 ms). During 1-minute blocks subjects were instructed to press a key when they detected a target grating.
Analysis consisted of correlating the randomly-presented stimuli in the outer annulus with subjects' key presses, as we did with neuronal spiking in the previous study. Subjects were less likely to detect the targets in the presence of oriented surrounds, with the most masking coming from collinear stimuli. Moreover, the orientation bandwidth of the masking effect is similar to the bandwidth measured for neuronal eCRFs. Thus, using similar approaches, we have shown that masking effects in human behavior are similar to eCRF suppression observed in macaque V1.
Acknowledgement: This work is supported by NIH grants: EY08300, EY P31-13079
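The analysis step, correlating the surround sequence with the observer's reports, can be sketched as follows. The synthetic observer, the 0.5 baseline report rate, and the 20-degree masking bandwidth are all invented for illustration; the real analysis correlated key-press times with the stimulus sequence, which this sketch collapses into a per-frame report for brevity.

import numpy as np

rng = np.random.default_rng(3)
N_FRAMES = 20_000
N_ORIENT = 18                      # surround orientations in 10-degree steps

surround = rng.integers(0, N_ORIENT, N_FRAMES)   # orientation index per frame
angle = surround * 10.0                          # degrees, 0-170

# Synthetic observer: the probability of reporting the target drops most
# when the surround is collinear with an assumed vertical (90 deg) target.
diff = np.minimum(np.abs(angle - 90.0), 180.0 - np.abs(angle - 90.0))
p_report = 0.5 - 0.3 * np.exp(-0.5 * (diff / 20.0) ** 2)
reports = rng.random(N_FRAMES) < p_report

# Reverse correlation: average the report rate conditioned on each surround
# orientation to recover the orientation-tuned masking kernel.
kernel = np.array([reports[surround == k].mean() for k in range(N_ORIENT)])
for k in range(N_ORIENT):
    print(f"surround {k * 10:3d} deg: report rate {kernel[k]:.3f}")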
63.457 ON/OFF channel asymmetry or consequences of a luminance nonlinearity
Stanley J. Komban 1 (stanley.kj@gmail.com), Jose-Manuel Alonso 1, Qasim Zaidi 1; 1 Graduate Program in Vision Science, SUNY College of Optometry, New York, NY
Hartline (1938) showed that visual information is processed by parallel ON and OFF channels that signal light and dark regions in the visual scene. Later studies have demonstrated that there are functional and anatomical differences between ON and OFF channels in the retina (Chichilnisky & Kalmar, 2002), OFF-center geniculate afferents dominate the cortical representation of the cat area centralis (Jin et al., 2008), and OFF cells outnumber ON cells in the corticocortical output layers 2/3 of macaque striate cortex (Yeh et al., 2009). In psychophysics, the most convincing evidence for the special importance of dark signals was obtained from texture variation discrimination (Chubb & Nam, 2000) and attributed to the OFF channel. A different asymmetry between light and dark neural signals is caused by the compressive photoreceptor response function, and we test whether this nonlinearity can explain some of the published ON/OFF asymmetries. We measured the salience of black versus white spots on a uniformly distributed black/white binary random-noise pattern. Reaction times for detecting one to three black spots were significantly lower than for white spots. Although white and black spots occupy physically equal areas in the background pattern, the white areas look larger – the irradiation illusion (Galileo, 1632). When the perceived areas are equated for each observer by changing the relative distribution, the salience difference disappears. We found that the black/white salience difference also occurs when the Chubb textures are used for backgrounds, and could account for the greater influence of dark texels in their task. We were able to model the percept of the binary background by a compressive non-linearity at the photoreceptor level, so we attribute the salience differences to the non-linearity rather than to greater OFF channel sensitivity.
Acknowledgement: EY07556 and EY13312
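The direction of this explanation can be demonstrated with a few lines of arithmetic. Assuming a Naka-Rushton-style compressive response with an illustrative semi-saturation constant (the abstract does not give the fitted form or parameters), a black texel deviates more from the mid-gray background response than a white texel does, which matches the direction of the reported salience asymmetry.

import numpy as np

def photoreceptor(luminance, semisat=0.5, n=1.0):
    """Naka-Rushton-style compressive response; parameters illustrative."""
    return luminance**n / (luminance**n + semisat**n)

mid = photoreceptor(0.5)                 # response to the mean luminance
inc = photoreceptor(1.0) - mid           # deviation caused by a white texel
dec = mid - photoreceptor(0.0)           # deviation caused by a black texel
print(f"white deviation: {inc:.3f}, black deviation: {dec:.3f}")
# The black texel produces the larger deviation from the background response,
# consistent with black spots being more salient on a binary noise background.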
63.459 When simultaneous presentation results in backward masking
Maria Lev 1 (mariale2@post.tau.ac.il), Uri Polat 1; 1 Faculty of Medicine, Goldschleger Eye Research Institute, Sheba Medical Center, Tel-Aviv University, Israel
Collinear facilitation is an enhancement in the visibility of a target by laterally placed collinear flankers (COL). A non-collinear configuration (parallel, side-by-side, SBS) produces less facilitation. Surprisingly, presentation of COL and SBS configurations simultaneously (CROSS) abolishes the facilitation rather than increases it - a phenomenon that has not been well understood. Here we directly explored the effect of canceled facilitation in the CROSS configuration. We used a Yes/No detection task measuring the hit reports (Phit) and the false-positive reports (false alarm, Pfa) for a low-contrast Gabor target embedded between flankers. We compared COL, SBS and CROSS configurations. We also recorded Evoked Response Potentials (ERP) from the occipital cortex. The results show that Phit was higher for COL and SBS, but decreased for the CROSS configuration. Pfa was also the lowest for the CROSS configuration. Thus, the decision criterion switched from target present (Yes) in COL to target absent (No) in CROSS, reminiscent of our earlier results of suppression by backward masking. However, the amplitude of the P1 ERP component (reflecting the strength of stimulation) is not reduced for the CROSS configuration – opposing the possibility of lateral inhibition between flankers. The amplitude of the N1 component, a marker of lateral interactions, is in correlation with the reduction in collinear facilitation in the SBS configuration. Thus, the ERP results show that the abolished collinear facilitation in the CROSS is not due to lateral inhibition between the COL and the SBS flankers. Interestingly, the latency of SBS is delayed by about 10 ms compared to COL, suggesting that the facilitatory process is selectively canceled due to a backward masking effect by the delayed signal from the SBS. Thus, the perceptual advantage of collinear facilitation may be lost when interfered with by facilitation from the sides, whereas the final perception is determined by the overall spatial-temporal integration of the lateral interactions.
Acknowledgement: Supported by grants from the National Institute for Psychobiology in Israel, funded by the Charles E. Smith Family and the Israel Science Foundation

63.460 Characteristics of dichoptic lateral masking of contrast detection
Erwin Wong 1 (wonge@umsl.edu); 1 Graduate Program in Vision Science, College of Optometry, University of Missouri - St. Louis
Purpose. Past studies show strong effects by flanks on contrast detection. A limited dichoptic study showed no net flank effect (Huang et al., 2006). Here we more extensively investigate dichoptic lateral masking. Methods. Observers: 5 adults with normal vision and 2 non-binocular, non-amblyopic (NBNA) adults. Measure: contrast detection threshold (CDT) for a sinusoid (3 c/deg, 2° diameter, vertical) in isolation and with two flanking sinusoids at 3 separations (edge to edge: 0.5° overlap, abutting, 0.5° separation). Flanks: target sinusoid (normalized 1.5x and 3x flank CDT) oriented vertical or horizontal, or Gaussian blobs (2° diameter) normalized by the flank CDT. Presentations: target to dominant eye, flanks monoptic (abutting only) or dichoptic (3 separations), 2-AFC with the MOCS, and mirror haploscope with septum. Results. For the normal observers, monoptic viewing produced facilitation by collinear flanks (mean 10%±4% (95% CI)), no effect from orthogonal, and suppression by blobs (8%±6%). These effects were greater for 3x CDT flanks, and this difference was not shown under dichoptic viewing. For dichoptic viewing, the overlap condition produced facilitation by collinear flanks (7%±4%), suppression by orthogonal (4%±4%), and no effect by blobs. The abutting condition produced facilitation by collinear flanks (8%±4%) and orthogonal (7%±5%), and no effect by blobs. The separation condition produced suppression by collinear flanks (9%±5%), orthogonal (6%±3%), and blobs (8%±4%). For the NBNA observers, under monoptic and dichoptic viewing all separations showed suppression by collinear flanks (14%±7%), orthogonal (10%±8%) and blobs (8%±4%). Conclusions. Dichoptic integration of contrast across space is similar to the known monocular mechanism: facilitation by flanks slightly overlapping or abutting the target, and masking only via spatial channels (blobs had little effect). However, an exclusive dichoptic mechanism is suggested: flanks suppress rather than have no effect when separated from the target. Dichoptic integration is further supported by the suppression shown by the NBNA observers.

63.461 Shape discrimination in migraineurs
Doreen Wagner 1 (Doreen.Wagner@gcal.ac.uk), Gunter Loffler 1, Velitchko Manahilov 1, Gael E. Gordon 1, Gordon N. Gordon 1, Peter Storch 2; 1 Vision Science Department, Glasgow Caledonian University, Glasgow G4 0BA, UK, 2 Mitteldeutsches Kopfschmerzzentrum, Friedrich-Schiller University, University Clinic Jena, 07747 Jena, Germany
Migraine is a disabling condition for which the underlying neuronal mechanisms remain elusive. Patients with migraine often experience visual hallucinations (aura) and have been shown to exhibit subtle differences in visual processing compared to non-migraineurs interictally. Deficits have been reported under conditions including metacontrast masking and motion perception.
We compared masking effects in migraineurs and headache-free controls using a shape discrimination task, thought to involve processing in extrastriate cortical areas. Observers had to detect subtle deviations in circular contour shapes (radial-frequency patterns [RF]) in the presence of a larger contour mask. Thresholds - defined as the amount of radial amplitude (sharpness of the contours' corners) required to discriminate perfect circles from pentagon-like shapes (RF 5) - were determined using a staircase procedure. The mask, an RF 5 pentagon shape with an amplitude 16 times its detection threshold, was presented at 5 stimulus onset asynchronies (SOA): 0 ms (simultaneous), 66.7 ms, 100 ms, 133.3 ms and 250 ms. Tests (1-deg radius) and mask (1.5-deg radius) were shown for 25 ms. The cross-sectional profile of the contours was given by a fourth derivative of a Gaussian with a peak spatial frequency of 8 cpd. Luminance contrast of all stimuli was 0.9. Nine migraineurs with aura, 9 migraineurs without aura and 10 headache-free controls participated.
Confirming typical masking effects, all subjects showed raised thresholds between SOA 66.7-100 ms compared to simultaneous presentation of mask and test shape (SOA = 0 ms).
While migraineurs without aura performed almost as well as the control group, migraineurs with aura had higher thresholds for all backward masking conditions (SOA > 0), with the peak difference occurring at SOA 66.7 ms (p = 0.036). This finding could reflect a general difficulty for migraineurs with aura to detect shapes in a distractive environment, which might be due to hyperexcitability in migraineurs with aura.
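For readers unfamiliar with the threshold procedure mentioned above, here is a generic staircase sketch. The abstract does not report the staircase rules, step sizes, or stopping criterion, so everything below (a 1-up/2-down rule converging near 70.7% correct, a synthetic observer, multiplicative steps) is an assumption for illustration only, not the authors' procedure.

import numpy as np

rng = np.random.default_rng(4)

def p_correct(amplitude, threshold=0.05, slope=3.0):
    """Synthetic observer: a made-up psychometric function for the
    circle-vs-RF5 discrimination, with 0.5 chance performance."""
    return 0.5 + 0.5 / (1.0 + (threshold / max(amplitude, 1e-9)) ** slope)

def one_up_two_down(start=0.20, step=0.7, n_reversals=12):
    """Generic 1-up/2-down staircase (converges near 70.7% correct)."""
    amp, streak, last_dir, reversals = start, 0, 0, []
    while len(reversals) < n_reversals:
        if rng.random() < p_correct(amp):
            streak += 1
            if streak < 2:
                continue            # need two correct in a row to go down
            streak, direction = 0, -1
        else:
            streak, direction = 0, +1
        if last_dir and direction != last_dir:
            reversals.append(amp)   # record the amplitude at each reversal
        last_dir = direction
        amp *= step if direction < 0 else 1.0 / step
    return float(np.mean(reversals[-8:]))

print(f"estimated radial-amplitude threshold: {one_up_two_down():.4f}")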


Topic Index
Below is a list of talk and poster sessions by topic. Parentheses indicate the abstracts that are included in each session.

3D perception: Binocular and motion cues – Poster Presentation (16.451-16.460) – Friday, May 7, 6:30 - 9:30 pm
3D perception: Depth cues and spatial layout – Oral Presentation (62.11-62.17) – Wednesday, May 12, 11:00 - 12:45 pm
3D perception: Distance and size – Poster Presentation (56.445-56.451) – Tuesday, May 11, 2:45 - 6:45 pm
3D perception: Pictorial cues – Poster Presentation (36.501-36.512) – Sunday, May 9, 2:45 - 6:45 pm
3D perception: Spatial layout – Poster Presentation (43.548-43.556) – Monday, May 10, 8:30 - 12:30 pm
Attention: Brain and behavior I – Poster Presentation (36.451-36.459) – Sunday, May 9, 2:45 - 6:45 pm
Attention: Brain and behavior II – Poster Presentation (63.421-63.430) – Wednesday, May 12, 8:30 - 12:30 pm
Attention: Brain imaging – Oral Presentation (32.21-32.27) – Sunday, May 9, 11:00 - 12:45 pm
Attention: Capture – Poster Presentation (36.432-36.450) – Sunday, May 9, 2:45 - 6:45 pm
Attention: Deciding where we look – Poster Presentation (43.401-43.412) – Monday, May 10, 8:30 - 12:30 pm
Attention: Divided attention – Poster Presentation (26.310-26.319) – Saturday, May 8, 2:45 - 6:45 pm
Attention: Endogenous and exogenous – Poster Presentation (56.426-56.429) – Tuesday, May 11, 2:45 - 6:45 pm
Attention: Eye movements – Poster Presentation (23.313-23.327) – Saturday, May 8, 8:30 - 12:30 pm
Attention: Features and objects – Poster Presentation (63.431-63.449) – Wednesday, May 12, 8:30 - 12:30 pm
Attention: Inattention and attention blindness – Poster Presentation (43.425-43.438) – Monday, May 10, 8:30 - 12:30 pm
Attention: Interactions with eye and hand movement – Oral Presentation (21.11-21.17) – Saturday, May 8, 8:15 - 10:00 am
Attention: Mechanisms and models – Poster Presentation (43.413-43.424) – Monday, May 10, 8:30 - 12:30 pm
Attention: Models and mechanisms of search – Oral Presentation (54.21-54.26) – Tuesday, May 11, 2:45 - 4:15 pm
Attention: Numbers and things – Poster Presentation (33.436-33.444) – Sunday, May 9, 8:30 - 12:30 pm
Attention: Object attention and object tracking – Oral Presentation (52.21-52.27) – Tuesday, May 11, 11:00 - 12:45 pm
Attention: Reward, motivation, emotion – Poster Presentation (16.528-16.541) – Friday, May 7, 6:30 - 9:30 pm
Attention: Spatial selection and modulation – Poster Presentation (23.440-23.456) – Saturday, May 8, 8:30 - 12:30 pm
Attention: Special populations – Poster Presentation (26.320-26.327) – Saturday, May 8, 2:45 - 6:45 pm
Attention: Temporal selection and modulation – Poster Presentation (26.301-26.309) – Saturday, May 8, 2:45 - 6:45 pm
Attention: Time – Oral Presentation (41.21-41.27) – Monday, May 10, 8:15 - 10:00 am
Attention: Tracking – Poster Presentation (56.409-56.425) – Tuesday, May 11, 2:45 - 6:45 pm
Attention: Visual working memory – Poster Presentation (53.416-53.422) – Tuesday, May 11, 8:30 - 12:30 pm
Binocular vision: Models and mechanisms – Oral Presentation (41.11-41.17) – Monday, May 10, 8:15 - 10:00 am
Binocular vision: Rivalry and bistability – Poster Presentation (23.501-23.521) – Saturday, May 8, 8:30 - 12:30 pm
Binocular vision: Rivalry and mechanisms – Oral Presentation (25.11-25.16) – Saturday, May 8, 5:15 - 6:45 pm
Binocular vision: Stereo mechanisms – Poster Presentation (36.540-36.547) – Sunday, May 9, 2:45 - 6:45 pm
Binocular vision: Stereopsis – Poster Presentation (56.301-56.316) – Tuesday, May 11, 2:45 - 6:45 pm
Color and light – Oral Presentation (31.11-31.17) – Sunday, May 9, 8:15 - 10:00 am
Color and light: Adaptation and constancy – Poster Presentation (16.439-16.450) – Friday, May 7, 6:30 - 9:30 pm
Color and light: Categories, culture and preferences – Poster Presentation (63.410-63.420) – Wednesday, May 12, 8:30 - 12:30 pm
Color and light: Lightness and brightness – Poster Presentation (36.415-36.431) – Sunday, May 9, 2:45 - 6:45 pm
Color and light: Mechanisms – Poster Presentation (23.410-23.424) – Saturday, May 8, 8:30 - 12:30 pm
Color and light: Surfaces and materials – Poster Presentation (53.401-53.410) – Tuesday, May 11, 8:30 - 12:30 pm
Development: Disorders – Poster Presentation (16.426-16.438) – Friday, May 7, 6:30 - 9:30 pm
Development: Early – Poster Presentation (53.501-53.512) – Tuesday, May 11, 8:30 - 12:30 pm
Development: Lifespan – Poster Presentation (56.517-56.528) – Tuesday, May 11, 2:45 - 6:45 pm
Development: Mechanisms – Oral Presentation (32.11-32.17) – Sunday, May 9, 11:00 - 12:45 pm
Eye movements: Mechanisms and methods – Poster Presentation (16.414-16.425) – Friday, May 7, 6:30 - 9:30 pm
Eye movements: Perisaccadic perception – Poster Presentation (56.501-56.516) – Tuesday, May 11, 2:45 - 6:45 pm
Eye movements: Selection and cognition – Poster Presentation (43.301-43.315) – Monday, May 10, 8:30 - 12:30 pm
Eye movements: Smooth pursuit – Poster Presentation (26.439-26.446) – Saturday, May 8, 2:45 - 6:45 pm
Eye movements: Top-down effects – Oral Presentation (34.11-34.16) – Sunday, May 9, 2:45 - 4:15 pm
Eye movements: Updating – Oral Presentation (61.11-61.17) – Wednesday, May 12, 8:15 - 10:00 am


Face perception: Brain mechanisms – Oral Presentation (24.21-24.26) – Saturday, May 8, 2:45 - 4:15 pm
Face perception: Development – Poster Presentation (16.514-16.527) – Friday, May 7, 6:30 - 9:30 pm
Face perception: Disorders – Poster Presentation (53.539-53.552) – Tuesday, May 11, 8:30 - 12:30 pm
Face perception: Emotional processing – Poster Presentation (33.501-33.516) – Sunday, May 9, 8:30 - 12:30 pm
Face perception: Experience – Poster Presentation (23.522-23.538) – Saturday, May 8, 8:30 - 12:30 pm
Face perception: Eye movements – Poster Presentation (56.529-56.541) – Tuesday, May 11, 2:45 - 6:45 pm
Face perception: Features – Poster Presentation (36.513-36.528) – Sunday, May 9, 2:45 - 6:45 pm
Face perception: Neural processing – Poster Presentation (43.513-43.529) – Monday, May 10, 8:30 - 12:30 pm
Face perception: Parts and configurations – Poster Presentation (56.542-56.556) – Tuesday, May 11, 2:45 - 6:45 pm
Face perception: Social cognition – Poster Presentation (33.517-33.530) – Sunday, May 9, 8:30 - 12:30 pm
Face perception: Social cognition – Oral Presentation (62.21-62.26) – Wednesday, May 12, 11:00 - 12:30 pm
Memory: Brain mechanisms of working and short-term memory – Poster Presentation (43.316-43.330) – Monday, May 10, 8:30 - 12:30 pm
Memory: Capacity and resolution of working and short-term memory – Poster Presentation (16.542-16.556) – Friday, May 7, 6:30 - 9:30 pm
Memory: Encoding and retrieval – Oral Presentation (54.11-54.16) – Tuesday, May 11, 2:45 - 4:15 pm
Memory: Encoding and retrieval – Poster Presentation (26.447-26.460) – Saturday, May 8, 2:45 - 6:45 pm
Memory: Objects and features in working and short-term memory – Poster Presentation (53.301-53.316) – Tuesday, May 11, 8:30 - 12:30 pm
Memory: Working and short-term memory – Oral Presentation (21.21-21.27) – Saturday, May 8, 8:15 - 10:00 am
Motion: Biological motion – Poster Presentation (33.419-33.435) – Sunday, May 9, 8:30 - 12:30 pm
Motion: Flow, depth, and spin – Poster Presentation (56.317-56.331) – Tuesday, May 11, 2:45 - 6:45 pm
Motion: Mechanisms – Oral Presentation (51.21-51.27) – Tuesday, May 11, 8:15 - 10:00 am
Motion: Mechanisms and Illusions – Poster Presentation (26.427-26.438) – Saturday, May 8, 2:45 - 6:45 pm
Motion: Mechanisms and models – Poster Presentation (43.501-43.512) – Monday, May 10, 8:30 - 12:30 pm
Motion: Perception – Oral Presentation (22.21-22.27) – Saturday, May 8, 11:00 - 12:45 pm
Multisensory processing – Oral Presentation (22.11-22.16) – Saturday, May 8, 11:00 - 12:30 pm
Multisensory processing: Cross-modal perception – Poster Presentation (53.423-53.436) – Tuesday, May 11, 8:30 - 12:30 pm
Multisensory processing: Synesthesia – Poster Presentation (53.437-53.444) – Tuesday, May 11, 8:30 - 12:30 pm
Multisensory processing: Visual-auditory interactions – Poster Presentation (43.530-43.547) – Monday, May 10, 8:30 - 12:30 pm
Neural mechanisms: Adaptation, awareness, action – Poster Presentation (26.401-26.411) – Saturday, May 8, 2:45 - 6:45 pm
Neural mechanisms: Cortex – Oral Presentation (52.11-52.17) – Tuesday, May 11, 11:00 - 12:45 pm
Neural mechanisms: Cortical organization – Poster Presentation (23.401-23.409) – Saturday, May 8, 8:30 - 12:30 pm
Neural mechanisms: Human electrophysiology – Poster Presentation (56.401-56.408) – Tuesday, May 11, 2:45 - 6:45 pm
Neural mechanisms: Neurophysiology and theory – Poster Presentation (36.301-36.314) – Sunday, May 9, 2:45 - 6:45 pm
Object recognition: Categories – Oral Presentation (42.21-42.26) – Monday, May 10, 11:00 - 12:30 pm
Object recognition: Development and learning – Poster Presentation (16.501-16.513) – Friday, May 7, 6:30 - 9:30 pm
Object recognition: Features and categories – Poster Presentation (26.501-26.515) – Saturday, May 8, 2:45 - 6:45 pm
Object recognition: Object and scene processing – Oral Presentation (34.21-34.26) – Sunday, May 9, 2:45 - 4:15 pm
Object recognition: Recognition processes – Poster Presentation (53.523-53.538) – Tuesday, May 11, 8:30 - 12:30 pm
Object recognition: Selectivity and invariance – Poster Presentation (33.542-33.556) – Sunday, May 9, 8:30 - 12:30 pm
Perception and action: Locomotion – Poster Presentation (16.401-16.412) – Friday, May 7, 6:30 - 9:30 pm
Perception and action: Mechanisms – Poster Presentation (53.513-53.522) – Tuesday, May 11, 8:30 - 12:30 pm
Perception and action: Navigation and mechanisms – Oral Presentation (61.21-61.27) – Wednesday, May 12, 8:15 - 10:00 am
Perception and action: Navigation and mechanisms – Poster Presentation (33.318-33.331) – Sunday, May 9, 8:30 - 12:30 pm
Perception and action: Pointing and hitting – Poster Presentation (36.315-36.332) – Sunday, May 9, 2:45 - 6:45 pm
Perception and action: Pointing, reaching, and grasping – Oral Presentation (42.11-42.16) – Monday, May 10, 11:00 - 12:30 pm
Perception and action: Reaching and grasping – Poster Presentation (23.425-23.439) – Saturday, May 8, 8:30 - 12:30 pm
Perceptual learning: Mechanisms and models – Oral Presentation (31.21-31.27) – Sunday, May 9, 8:15 - 10:00 am
Perceptual learning: Mechanisms and models – Poster Presentation (53.317-53.331) – Tuesday, May 11, 8:30 - 12:30 pm
Perceptual learning: Plasticity and adaptation – Oral Presentation (55.21-55.27) – Tuesday, May 11, 5:15 - 7:00 pm
Perceptual learning: Sensory plasticity and adaptation – Poster Presentation (36.401-36.414) – Sunday, May 9, 2:45 - 6:45 pm


Perceptual learning: Specificity and transfer – Poster Presentation (26.412-26.426) – Saturday, May 8, 2:45 - 6:45 pm
Perceptual organization: Contours and 2D form – Oral Presentation (24.11-24.15) – Saturday, May 8, 2:45 - 4:15 pm
Perceptual organization: Contours and 2D form – Poster Presentation (56.430-56.444) – Tuesday, May 11, 2:45 - 6:45 pm
Perceptual organization: Grouping and segmentation – Oral Presentation (51.11-51.17) – Tuesday, May 11, 8:15 - 10:00 am
Perceptual organization: Grouping and segmentation – Poster Presentation (43.439-43.458) – Monday, May 10, 8:30 - 12:30 pm
Perceptual organization: Objects – Poster Presentation (33.410-33.418) – Sunday, May 9, 8:30 - 12:30 pm
Perceptual organization: Temporal processing – Poster Presentation (33.401-33.409) – Sunday, May 9, 8:30 - 12:30 pm
Scene perception – Oral Presentation (25.21-25.26) – Saturday, May 8, 5:15 - 6:45 pm
Scene perception: Aesthetics – Poster Presentation (63.401-63.409) – Wednesday, May 12, 8:30 - 12:30 pm
Scene perception: Categorization and memory – Poster Presentation (33.531-33.541) – Sunday, May 9, 8:30 - 12:30 pm
Scene perception: Mechanisms – Poster Presentation (36.529-36.539) – Sunday, May 9, 2:45 - 6:45 pm
Scene perception: Objects and scenes – Poster Presentation (23.539-23.554) – Saturday, May 8, 8:30 - 12:30 pm
Search: Attention – Poster Presentation (26.527-26.545) – Saturday, May 8, 2:45 - 6:45 pm
Search: Eye movements and mechanisms – Oral Presentation (35.21-35.27) – Sunday, May 9, 5:15 - 7:00 pm
Search: Learning, memory and context – Poster Presentation (33.445-33.457) – Sunday, May 9, 8:30 - 12:30 pm
Search: Neural mechanisms and behavior – Poster Presentation (26.516-26.526) – Saturday, May 8, 2:45 - 6:45 pm
Spatial vision: Cognitive factors – Poster Presentation (53.411-53.415) – Tuesday, May 11, 8:30 - 12:30 pm
Spatial vision: Crowding and eccentricity – Poster Presentation (33.301-33.317) – Sunday, May 9, 8:30 - 12:30 pm
Spatial vision: Crowding and mechanisms – Oral Presentation (55.11-55.17) – Tuesday, May 11, 5:15 - 7:00 pm
Spatial vision: Image statistics and texture – Poster Presentation (23.301-23.312) – Saturday, May 8, 8:30 - 12:30 pm
Spatial vision: Masking – Poster Presentation (63.450-63.461) – Wednesday, May 12, 8:30 - 12:30 pm
Spatial vision: Mechanisms and models – Oral Presentation (35.11-35.17) – Sunday, May 9, 5:15 - 7:00 pm
Spatial vision: Mechanisms and models – Poster Presentation (26.546-26.557) – Saturday, May 8, 2:45 - 6:45 pm
Temporal processing: Mechanisms and models – Poster Presentation (36.548-36.556) – Sunday, May 9, 2:45 - 6:45 pm
Temporal processing: Perception of time – Poster Presentation (53.445-53.456) – Tuesday, May 11, 8:30 - 12:30 pm


Author IndexEntries are indexed by abstract number, not page number; bold entries indicate first author abstracts.“S” entries indicate symposia.AAbbey, C - 23.304, 35.13Abe, S - 23.502, 23.512Abegg, M - 36.404, 43.315Abekawa, N - 23.428Aberg, KC - 26.417Aboitiz, F - 43.319Abrams, J - 23.440, 63.450Abrams, R - 36.432, 36.433Ackerman, C - 43.330Ackermann, JF - 43.305Adamo, M - 43.327, 54.25Adams, RJ - 16.433Adams, W - 33.511Adams, WJ - 24.15, 53.430Adelson, E - 53.527Adolph, K - 42.14Adolph, KE - 56.443Agam, Y - 34.23Agostini, R - 23.312Agrawala, M - 36.508Aguirre, G - 33.539, 43.517, 43.527Ahissar, E - 61.15Ahmad, N - 33.304Ahmed, F - 33.552Ahumada, A - 26.549Ahumada, AJ - S2Aimola Davies, A - 33.523Aissani, C - 23.547, 33.404Aitkin, CD - 16.422Akau, M - 33.301Aks, D - 56.422Al-Aidroos, N - 36.433, 54.25Alais, D - 22.16, 22.24Albrecht, AR - 36.533Albright, T - 26.438Ales, J - 32.27Alessandra, S - 53.508Alexander, R - 26.525Al-Kalbani, M - 16.425Allard, R - 26.428, 63.451Allen, C - 21.16Allen, GC - 43.548Allen, H - 26.544, 35.26, 36.459,43.456, 56.519Allenmark, F - 36.540Alleysson, D - 33.328Allison, R - 16.424, 56.309Almeida, J - 62.21Almeida, Q - 16.406Alonso, J - 63.457Altschuler, E - 36.425, 56.549Alvarez, BD - 53.440Alvarez, G - S4, 26.536, 41.24,43.328, 43.437, 53.312, 56.413Alvarez, GA - 26.310, 56.412Alvarez, J - 53.502Amano, K - 26.518, 43.508Amieva, H - 36.430Aminoff, E - 26.449, 51.15Amir, O - 16.503Amishav, R - 56.548Anand, A - 36.302Andersen, G - 52.11, 56.311, 56.448Andersen, GJ - 26.413, 36.409,56.319, 56.521Andersen, R - 23.435Andersen, T - 43.428Anderson, A - 26.458, 43.522, 62.24Anderson, AK - 56.538Anderson, BL - 24.11Anderson, DE - 16.552, 23.517Anderson, E - 33.503, 33.510Anderson, J - 36.302, 36.321Andrews, L - 43.438Andrews, S - 42.16Andrews, TJ - 56.546Andrus, S - 56.308Anes, MD - 36.521, 43.519Angelone, BL - 16.555Anghel, M - 53.532Annamraju, S - 56.422Annaz, D - 53.549Ansorge, U - 36.448Anstis, S - 22.27Anton-Erxleben, K - 23.440Anzures, G - 23.531, 43.313Aoki, S - 56.310Appelbaum, L - 32.27Apthorp, D - 22.24Arathorn, D - 22.22Arcaro, M - 26.507Arcizet, F - 52.16Arguin, M - 36.516, 36.550, 53.413Arita, J - 26.516Arizpe, J - 56.534Arman, AC - 55.12Armann, R - 33.526Arnell, K - 43.425Arnold, D - 26.555Arnold, PJ - 33.546Arnoldussen, D - 22.21, 23.409Arnott, SR - 61.26, 61.27Arsenault, E - 23.301Asano, M - 53.438Ásgeirsson, Á - 26.531Ashida, H - 26.433Ashton, M - 25.16Asplund, CL - 54.16Atkinson, J - S3, 32.11, 56.402Atkinson, JH - 53.434Attarha, M - 33.303Aurich, M - 26.531Austerweil, JL - 53.331Avidan, G - 24.25, 56.514Awh, E - 16.552, 53.306, 54.11Axelsson, E - 53.502Ayhan, I - 53.445, 53.447Ayyad, J - 36.523Aznar-Casanova, A - 23.321, 34.16BBaccus, W - 33.420Backus, B - 36.412Backus, BT - 31.27, 55.25Badcock, D - 43.501, 43.502Bae, GY - 36.449Baeck, A - 26.405, 26.416Bahdo, L - 26.406Bahrami, B - 22.24, 36.450, 43.329Baker, C - 23.301, 25.26, 33.545,43.450, 56.534Baker, CI - 23.406Baker, D - 23.501Baker, J - 33.438Balas, BJ - S4, 35.23Baldridge, G - 26.515Ballard, D - 43.410Baluch, F - 43.402Banks, M - 36.508, 56.301Banks, MS - 56.302, 62.14Bannerman, R - 23.325, 23.521Banton, T - 56.324Bao, M - 23.311, 55.21Bao, P - 34.24Bar, M - 26.449, 26.508, 34.21, 51.15Barbeau, E - 53.534Barbot, A - 23.441Barenholtz, E - 16.511, 43.543Barense, MD - 26.448, 43.327Barlasov Ioffe, A - 33.437Barnes, T - 43.505Barnett-Cowan, M - 36.552Barr, S - 53.532, 53.533, 56.440Barraza, J - 56.318Barraza, JF - 33.407Barrett, L - 33.510Barrionuevo, P - 16.450Bartholomew, A - 56.518Barton, B - 
16.549, 26.402, 26.403Barton, J - 26.321, 33.556, 36.526,43.315, 53.539, 53.540, 53.541,53.543Barton, JJ - 36.404Barton, K - 33.331Baruch, O - 43.414Barwick, R - 36.523Basak, C - 26.313Baslev, D - 63.427Bates, C - 33.439Battaglia, P - 33.512Battelli, L - 16.459Baugh, L - 26.404Bauhoff, V - 33.446Baurès, R - 56.330Bavelier, D - 26.458, 33.449Beck, C - 56.317Beck, D - 25.23, 26.313, 36.556Beck, DM - 25.22, 26.511Beck, M - 53.308Beck, MR - 26.451Beck, V - 26.528Bedell, H - 53.301Bedford, R - 32.15Beers, A - 36.510Behrmann, M - 24.25, 36.547, 42.22,43.453Beilock, S - 16.540Belkin, M - 26.550Bell, J - 31.11, 56.435, 56.441Bellebaum, C - 56.503Belmore, S - 16.449Belopolsky, A - 23.319Ben Yaish, S - 26.550Benassi, M - 26.320, 26.418Benedetti, D - 23.312Ben-Hur, T - 23.553Benmussa, F - 33.404Bennett, P - 56.542Bennett, PJ - 16.525, 26.319, 26.414,36.553, 56.522, 56.523, 56.524,56.525, 56.543Bennett, R - 53.533Ben-Shahar, O - 43.405, 56.436Benson, V - 16.534Bentin, S - 33.432, 42.25, 43.417,56.556Benton, C - 36.531Ben-Yosef, G - 56.436Berg, D - 21.17, 43.401Berger-Wolf, T - 36.302Bernard, J - 55.17Bertenthal, BI - 23.320Berthet, V - 33.317, 43.426Bertulis, A - 36.555Best, L - 56.520Bettencourt, L - 53.532, 53.533Bettencourt, LM - 56.440Betts, L - 23.528Bex, P - 23.308, 33.307, 43.506,53.526Bi, T - 23.533, 26.419Bian, Z - 56.448Bian, Z - 56.311, 56.319, 56.521Biederman, I - 16.503, 23.539,26.505, 26.506, 33.502Bielevicius, A - 36.555Biggs, A - 36.446Bigoni, A - 26.320Bilenko, NY - 23.405Binda, P - 22.11Bingham, G - 33.548Bingham, GP - 16.460, 53.517,53.518Bingley, T - 36.326, 36.330Binsted, G - 36.315, 36.325, 36.328,36.329Biondi, M - 53.511Birch, EE - 16.433Bishop, S - 43.527Bisley, J - 23.322, 52.16Bisley, JW - 61.12352 <strong>Vision</strong> <strong>Sciences</strong> <strong>Society</strong>


VSS 2010 AbstractsAuthor IndexBittner, J - 23.537, 53.326, 56.545,56.555Blagrove, E - 33.456Blaha, L - 33.417Blais, C - 36.550, 43.312, 56.539Blake, R - 22.15, 23.519, 23.526,25.13, 26.304Blakely, D - 33.445Blakeslee, B - 36.424Blank, I - 36.525Blaser, E - 53.511Blocker, H - 33.508Bloj, M - 23.415, 36.507, 63.410Blume, M - 53.520Blythe, H - 36.545Bochsler, TM - 16.401, 62.11Bogadhi, A - 26.445Bogdan, N - 16.424Bogusch, L - 63.426Boi, M - 21.12Bolzani, R - 26.320, 26.418Bombari, D - 56.535Bonneh, Y - 26.322, 26.548Boot, W - 33.445, 43.406Bordaberry, P - 16.504Bordier, C - 26.401Boremanse, A - 24.22Born, S - 23.454, 26.535, 43.407Bosco, A - 52.17Bostan, SR - 23.445Boswell, A - 36.510, 56.518Bosworth, R - 53.505Bosworth, RG - S3Boucart, M - 23.548, 43.306Bouecke, J - 43.510, 56.317Bourdon, M - 33.517, 36.516Bouvier, S - 63.425Bower, J - 26.413Bower, JD - 36.409Boyer, TW - 23.320Boyle, G - 16.425Boynton, G - 54.13, 56.414, 63.424Boynton, GM - 63.442Bracci, S - 42.24Bracewell, RM - 26.411Braddick, O - S3, 32.11, 56.402Brady, T - 53.312Brady, TF - 21.21Brainard, D - 31.16Brainard, DH - 53.408Braithwaite, J - 35.26, 43.438Brandman, T - 24.26Brang, D - 53.437Brants, M - 23.525, 26.405Brascamp, J - 25.13, 26.304Braun, J - 21.23Braunstein, ML - 56.319Bravo, M - 63.403Bray, M - 53.441Breckenridge, K - 32.11Breidt, M - 62.23Breitmeyer, BG - 53.515, 56.433Bremmer, F - 16.417, 43.536, 56.511,61.11Brennan, AA - 36.324Brenner, E - 42.11, 56.512Breveglieri, R - 52.17Brewer, A - 16.549, 26.402Brewer, AA - 26.403Bridgeman, B - 21.15, 22.13Bridwell, D - 56.428Brillinger, DR - 16.430Brimhall, SE - 16.412Brincat, S - 52.15Brisson, B - 53.313Brockmole, J - 36.413, 43.406Bronstad, P( - 63.402Brown, A - 63.418Brown, C - 56.420Brown, JM - 63.436Broyles, EC - 53.515Bruggeman, H - 16.409Brumby, S - 53.532, 53.533, 56.440Brunamonti, E - 43.321Brunick, K - 63.401Bruno, A - 53.445, 53.447Buchsbaum, M - 26.327Buckingham, G - 36.329Buckthought, A - 23.508Buetti, S - 23.314Buia, C - 34.23Bukach, C - 53.538Bukowski, H - 43.516Bulakowski, PF - 33.302Bulatov, A - 36.555Bulganin, L - 36.426Bullock, D - 34.15Bülthoff, HH - 33.509, 36.552Bülthoff, I - 33.526Buneo, C - 23.435Burdzy, DC - 33.444Burge, J - 35.14, 56.302Burke, KE - 56.528Burns, D - 56.544Burr, D - 53.454, 53.508Burton, C - 56.520Busch, N - 43.415Busigny, T - 53.548Butcher, S - 43.453Butler, JS - 36.552Butler, S - 36.514, 42.12Butts, D - 36.301Byers, A - 63.445Byrne, P - 23.427Byrne, PA - 43.310CC Jackson, M - 53.316Cacciamani, L - 43.439Caddigan, E - 25.22, 25.23Caggiagno, V - 23.430Caharel, S - 23.522Cain, MS - 23.453Caldara, R - 23.523, 33.528, 33.529,43.452, 56.539, 62.25, 62.26Calder, A - 53.546Calder, AJ - 33.526, 56.552, 56.553Camerer, C - 33.512Cameron, I - 26.324Cant, JS - 32.26Cantor, C - 26.434, 41.17, 56.502Cao, B - 36.427Cao, Y - 16.512Caplan, J - 36.322Caplovitz, G - 26.507, 63.443Cappadocia, DC - 43.310Capps, M - 23.515Caramazza, A - 53.535, 62.21Carlin, JD - 56.552Carlisle, NB - 53.416Carlson, L - 33.536Carlson, T - 33.543Carmel, D - 25.14Carney, T - 16.416, 26.556, 56.405Carrasco, M - 16.536, 23.440, 23.441,25.14, 32.23, 63.440, 63.441,63.450Carré, JM - 33.514Cass, J - 22.16, 63.452Cassanello, C - 43.501, 43.502CASTET, E - 43.314Casti, A - 36.301Castronovo, J - 43.324, 53.415Cate, A - 33.414Catherwood, D - 53.502Catrambone, J - 43.457Cauchoix, M - 42.23Caudek, C - 43.549Cavallet, M - 36.441Cavanagh, P - 21.13, 23.317, 36.318,36.548, 56.510, 61.16, 63.451Cavezian, C - 23.553, 26.327Cavina-Pratesi, C - 42.24Ceballos, N - 16.533Cecchini, P - 26.320Cerf, 
M - 23.313Cha, O - 56.504Chabris, C - 24.24, 56.550Chai, B - 25.23Chai, Y - 23.503Chakravarthi, R - 33.313, 33.314,52.21, 55.14Chalk, M - 33.436Chambeaud, J - 56.318Chambers, C - 21.16Champion, RA - 26.444Chan, JS - 43.539, 43.541, 56.528Chan, L - 26.529Chan, W - 43.430Chanab, W - 33.306, 33.312Chandna, A - S3Chang, C - 33.330Chang, DH - 33.425Chang, L - 26.412, 52.11Chappell, M - 43.537Charron, C - 61.22Chasteen, AL - 33.444Chatterjee, G - 16.516, 24.24, 53.542,56.550Chaudhuri, A - 23.516Chaumon, M - 51.15Chelazzi, L - S5, 16.528Chen, C - 23.552, 33.415, 34.26,34.26, 43.455Chen, H - 23.517, 23.527Chen, J - 23.533, 33.408Chen, N - 26.419Chen, SI - S3Chen, X - 53.329, 53.330Chen, Y - 23.427, 43.531, 43.547,53.432Cheng, D - 36.315, 36.325Cheng, J - 33.327Cheng, JC - 56.322Cheng, W - 23.424Cherian, T - 24.21Cheung, O - 56.547Cheung, S - 26.306, 26.513, 33.310Cheung, T - 53.541Cheyne, D - 16.427, 36.451Chiao, C - 23.309Choi, H - 31.24Chokron, S - 23.553, 26.327Cholewiak, SA - 36.511Chong, SC - 23.505, 33.525, 53.314,53.412, 56.504Choo, H - 33.418Chopin, A - 23.515Chou, W - 16.556, 63.438Chouinard, PA - 23.432, 23.433Chrastil, E - 33.322Christensen, J - 53.402, 56.419Christopoulos, V - 23.429Chu, C - 23.552Chu, H - 16.540Chu, W - 26.447Chua, F - 36.447CHUA, PY - 36.530Chubb, C - S4, 23.309, 23.412,36.421Chuldzhyan, M - 53.505Chun, M - 16.531, 51.12, 54.15Chung, S - 33.301, 35.15, 55.17Chung, ST - 33.302, 33.303Cichy, RM - 43.525Cinelli, M - 16.402, 16.404, 16.406Cipriani, R - 36.310, 36.451Cisarik, P - 56.313Citek, K - 53.520Clark, K - 33.451, 43.325Clarke, MP - 36.546Clavagnier, S - 23.437Close, A - 56.426Coakley, D - 16.425Coats, R - 23.426Cohen, A - 16.422Cohen, E - 63.431Cohen, J - 33.318, 63.428Cohen, M - 43.437Colas, J - 25.15Colino, F - 36.325, 36.328, 36.329Coll, M - 53.313Collins, N - 16.425Collins, T - 56.516Colombo, E - 16.450Coltheart, M - 56.551Cong, L - 53.325Conklin, S - 56.438Conlan, L - 16.508Connah, D - 63.410Connell Pensky, AE - 36.457Connolly, A - 26.501Connolly, AC - 26.502Connor, C - 52.15Constantinidis, C - 36.455Conte, M - 23.305Cook, R - 33.431Cooper, EA - 56.302, 62.14Corbet, R - 33.513Corbett, C - 53.520Cormack, L - 56.320Cormack, LK - 51.23Cornelissen, FW - 25.24, 25.25Corradi, N - 23.450Corrow, S - 16.516, 32.12<strong>Vision</strong> <strong>Sciences</strong> <strong>Society</strong>353


Author IndexVSS 2010 AbstractsCortese, F - 16.427Coslett, HB - 23.438Cosman, J - 43.431Cosman, JD - 26.533Cottereau, B - 23.547, 56.314Cottrell, G - 16.437, 33.555, 36.314Courage, ML - 16.433Courtney, S - 43.330Cowell, R - 36.314Cowey, A - 26.407Cowie, D - 16.407Crapse, T - 56.505Crawford, J - 23.434Crawford, JD - 23.427, 36.323,43.310, 43.536Crawford, LE - 43.553Creem-Regehr, S - 56.447Creem-Regehr, SH - 43.548, 62.13Crewther, D - 25.16, 25.16, 43.424Crewther, S - 43.424Cristino, F - 36.531Crognale, M - 23.421Crollen, V - 53.415Crookes, K - 16.517, 32.13Crouzet, SM - 56.531, 56.532Crowell, C - 23.538Cruikshank, L - 36.322Cui, M - 34.14Cunningham, DW - 33.509Curio, C - 36.535, 56.430, 62.23Curran, W - 43.511, 51.21Curtis, C - 16.420Cusack, R - 43.326Cutting, J - 63.401Czuba, T - 56.320Czuba, TB - 51.23DDaar, M - 36.524Dachille, L - 43.529Dähne, S - 63.453Dakin, S - 23.308, 33.307, 43.506Dalrymple, K - 36.454, 53.541Danckert, J - 23.448Dang, L - 16.423Daniel, F - 56.545Daniels, L - 36.456Dannemiller, J - 36.452D’Antona, A - 23.414, 36.423Darke, H - 33.523Darling, EF - 36.438Das, K - 26.517, 35.21Dasgupta, S - 33.421Dassonville, P - 16.434, 53.550Dastrup, E - 33.326Datta, R - S2Daum, I - 56.503Davidenko, N - 26.456Davidoff, J - 16.502, 33.410Davidson, M - 26.407, 43.543Davies-Thompson, J - 56.546Davis, G - 23.451Davis, J - 33.538Davis, N - 56.313D’avossa, G - 56.426Day, B - 16.407de Benedictis-Kessner, J - 56.451de Fockert, J - 23.551de Graaf, TA - 23.510De Graef, P - 26.534, 43.412, 56.501,56.530de Grosbois, J - 36.328De Grosbois, J - 36.325, 36.329de Heering, A - 16.514, 24.23de Jong, MC - 23.506, 23.510de Lange, F - 26.406de Montalembert, M - 36.551De Ryck, K - 43.412de Vries, JP - 16.414Dearing, RR - 53.423Debono, K - 26.446Dechter, E - 32.17Dedrick, D - 26.408DeGrosbois, J - 36.315Dekker, T - 53.509Del Grosso, NA - 43.519Del Viva, MM - 23.312, 23.413Delerue, C - 43.306Delnicki, R - 16.420DeLong, J - 63.401Delord, S - 16.504, 36.430DeLoss, DJ - 26.413Delvenne, J - 43.324Demeyer, M - 56.501Denison, R - 25.12Dent, K - 35.26Desai, M - 32.15Desanghere, L - 23.436Desmarais, G - 43.538D’Esposito, M - 56.429Dessing, JC - 36.323, 43.536Deubel, H - 21.13, 54.24, 56.507Devaney, K - 36.519Devyatko, D - 23.504Dey, AK - 16.525DeYoe, E - S2Di Lollo, V - 26.308Di Luca, M - 22.14, 53.452di Luca, M - 53.450Diaz, G - 61.23Diesendruck, L - 43.405Dieter, K - 22.15Dilks, DD - 23.406, 32.17Dimitrova, Y - 53.401Ding, J - 55.26Ding, Y - 53.329Dixon, L - 43.520Dixon, M - 16.541, 53.444Dobbyn, S - 43.541Dobkins, K - S3, 53.437, 53.504,53.505Dobres, J - 55.27Dodd, M - 63.434Dody, Y - 26.326Doerrfeld, A - 33.429Doerschner, K - 62.15, 63.404Dojat, M - 26.401Dold, HM - 63.453Domini, F - 16.459, 43.549Dong, M - 36.450Dong, Y - 34.22Donk, M - 43.416Donnelly, N - 16.534, 53.547Doon, J - 26.310Doran, MM - 56.421Dormal, G - 33.442Dosher, B - 31.23, 52.24, 54.22Dosher, BA - S2, 23.315, 26.447Doti, R - 53.428, 53.431Dove, CN - 16.433Drew, T - 43.316, 52.23, 56.409Drewes, J - 16.421Dricot, L - 43.513, 43.515, 43.516Driver, J - 21.27, 54.26Drover, JR - 16.433Drummond, L - 63.437Dryden, L - 16.419Du, F - 36.432Du, S - 33.506, 33.507Dubois, J - 26.311Dubois, M - 16.505Dubuc, D - 43.519Duchaine, B - 24.24, 43.521, 53.539,53.540, 53.542, 53.543, 56.550Duijnhouwer, J - 26.443Dumont, JR - 56.403Duncan, C - 23.421Dungan, B - 16.543Dunham, K - 16.419Dunlop, J - 36.523Dupuis-Roy, N - 33.517, 53.413Durán, G - 26.315Durand, F - 43.408Durant, S - 43.512Durette, B - 33.328Durgin, F - 33.540, 43.550, 43.551Dux, P - 41.25Dux, PE - 
41.26Dyde, RT - 53.423, 53.424Dzieciol, A - 43.452EE Raymond, J - 53.316Eagleman, D - 53.441, 53.448, 53.453Eckstein, M - S5, 23.304, 35.13,56.533Eckstein, MP - 23.404, 26.517, 35.21,63.426Edwards, M - 43.501, 43.502Egan, E - 36.502, 36.505Egeth, H - 26.537Ehgoetz Martens, K - 16.402Ehinger, K - 25.21Ehinger, KA - 33.533EJ Linden, D - 53.316Elazary, L - 36.536, 43.420Elder, JH - 24.15Eli, P - 63.402Elkis, V - 33.545Ellard, C - 33.331, 33.457Ellemberg, D - 26.524, 53.324,53.503, 56.401Elliott, J - 23.456Elliott, S - 16.446Ellis, K - 33.538Eloka, O - 23.425Emanuele, B - 26.511Emrich, SM - 23.445, 43.317Endres, D - 56.317Engel, D - 36.535, 56.430Engel, S - 23.311, 55.21English, T - 53.309Ennis, R - 16.448Enns, JT - 26.512, 36.324Epstein, R - 23.549, 33.539Epting, A - 16.457Erdemir, A - 36.320Erickson, G - 53.520, 53.522, 56.312Erickson, S - 56.325Ericson, J - 33.321, 56.419Erkelens, C - 36.506Ernst, M - 22.14, 36.412, 42.15,53.450Ernst, MO - 53.452Ernst, ZR - 63.442Essock, EA - 36.532Ester, E - 54.11Esterman, M - 53.456Etezad-Heydari, L - 43.308Evans, K - 23.542Evans, KK - 33.532Ewbank, MP - 56.553Ewing, K - 33.427FF. Troje, N - 33.424Fabiani, M - 36.556Fabre-Thorpe, M - 23.548, 26.437,53.534Facoetti, A - 23.450, 26.320, 26.323Fagard, J - 56.516Fagot, J - 33.410Fahrenfort, JJ - 33.542Faivre, N - 33.317Fajen, B - 16.403, 16.410, 61.21,61.23Falikman, M - 43.427Fallaha, N - 16.435Fan, J - 43.328Fan, Z - 26.309Fang, F - 23.533, 26.419, 33.408Fantoni, C - 16.459, 43.549Farber, LE - 36.553Farell, B - 23.446Farid, H - 63.403Farzin, F - S3Fath, A - 16.403Fattori, P - 52.17Faubert, J - 16.408, 26.428, 33.516,53.428, 53.431Favelle, SK - 23.530Fei-Fei, L - 25.22, 25.23Feigenson, L - 16.553Feitosa-Santana, C - 23.414Feldman Barrett, L - 33.503Feldman, J - 43.451, 56.327Felgueiras, P - 16.439Feltner, K - 56.443Fencsik, D - 53.309, 53.422Feng, L - 16.429Fenske, MJ - 16.538Ferber, S - 23.445, 43.317, 43.327,54.25Feria, C - 56.411Fermuller, C - 36.417Fernandez, J - 23.446Fernandez-Duque, D - 16.509Fernando, S - 16.438Ferneyhough, E - 16.536Ferretti, T - 53.444Ferrey, AE - 16.538Ferwerda, J - 16.440, 31.15, 53.409Fesi, J - 56.408354 <strong>Vision</strong> <strong>Sciences</strong> <strong>Society</strong>


VSS 2010 AbstractsAuthor IndexFine, I - 52.12Finkbeiner, M - 61.24Finlayson, G - 63.410Firestone, CZ - 33.320Fischer, J - 33.450, 55.16Fischl, B - 32.17Fiser, J - 26.421, 34.14, 53.526Fiset, D - 33.517, 36.515, 36.516,43.312, 53.413Fisher, M - 43.538Fiske, S - 33.452Fitousi, D - 23.537, 53.326, 56.555FitzGibbon, E - 41.11Fize, D - 42.23, 53.534Fleck, MS - 33.451Fleischer, F - 23.430Fleming, R - 36.501, 36.511, 62.15Fleming, RW - S1Fleszar, A - 63.445Fletcher, D - 16.423Flevaris, AV - 36.453, 43.417Flombaum, J - 36.449, 54.12, 56.417Flynn, I - 33.326Fogelson, SV - 26.503Foley, J - 35.12Foley, NC - 43.418Folstein, J - 16.507Folstein, JR - 36.402Foreman, K - 33.529Formankiewicz, MA - 33.304,33.305, 33.308Fortenbaugh, FC - 23.455, 24.14,63.446Fortunato, L - 56.442Foster, DH - 26.518Fougnie, D - 54.16Foulsham, T - 43.303Fox, C - 53.539, 53.541Foxe, JJ - 36.552Fracasso, A - 56.508Francis, G - 23.416Francis, WS - 26.315Franconeri, S - 23.447, 33.418, 51.14,52.26, 63.435Frank, M - 33.534Frank, P - 56.424Franklin, A - 53.502Franz, VH - 23.425, 23.431Freeman, J - S4, 43.301, 43.304,55.11, 55.14Freeman, T - 36.537Freeman, TC - 26.444Friedenberg, J - 26.509Friedman, J - 61.24Friesen, C - 33.508Froyen, V - 43.451Fründ, I - 36.411Fu, M - 26.442Fu, X - 33.527, 53.304Fuchs, I - 56.506Fujimoto, K - 33.551Fujisaki, W - 22.12Fujita, I - 56.310, 56.315Fujiwara, C - 23.449Fukai, T - 36.306Fukuda, K - 16.544, 54.11Fukui, MM - 52.13Fuller, S - 31.27Fulvio, JM - 26.420GG Harris, M - 56.519G. Chambeaud, J - 33.407Gabrieli, JD - 56.412Gage, R - 16.401, 62.11Gajda, K - 56.503Gajewski, D - 56.449Gallant, J - 23.544Gallant, JL - S2, 23.405Galletti, C - 52.17Gallie, B - 16.431Gallistel, C - 53.420Galperin, H - 53.526Ganguli, D - 43.301Ganis, G - 53.530Gao, T - 52.27Gao, X - 36.517Garass, T - 56.513Garay-Vado, AM - 36.443Garcia, JO - 33.422Gardiner, B - 16.521Gardner, J - 32.21, 32.23Gardner, JS - 63.406, 63.407, 63.408Garner, M - 33.511Garrido, L - 53.539, 53.540Garrigan, P - 56.442Garrod, O - 62.23Gatenby, C - 23.401Gauchou, H - 26.457Gaudry, I - 23.553Gauthier, I - 16.506, 16.507, 23.535,36.402, 56.547Ge, L - 23.531, 23.532Gegenfurtner, KR - 26.446, 26.459,31.12, 34.12, 53.410Geisbrecht, B - 35.21Geisler, W - 35.14Geisler, WS - S2, 53.406George, J - 53.532, 53.533Georgeson, M - 23.501Gepshtein, S - 26.438Gerbino, W - 43.448, 53.433Gergely, G - 23.437Gerhard, HE - 53.407Gerhardstein, P - 33.513Germeys, F - 26.534Germine, L - 24.24, 56.550Gershoni, S - 63.405Geuss, M - 43.554, 56.451Geuss, MN - 43.548Ghebreab, S - 23.310, 35.17, 53.525Gheorghiu, E - 31.11, 56.441Ghorashi, S - 26.308Ghuman, A - 51.15Giaschi, D - 16.426, 56.316Gibson, B - 23.451, 36.446, 56.427Giersch, A - 36.554, 43.441, 43.446Giesbrecht, B - 23.456, 26.517,43.422Giese, M - 56.317Giese, MA - 23.430, 56.525Giesel, M - 53.410Gilaie-Dotan, S - 33.432, 33.550,42.25, 53.449Gilchrist, A - 36.415Gilchrist, ID - 36.531Gill, J - 56.413Gillam, B - 41.12Gilman, A - 16.548Gilmore, R - 56.408Gintautas, V - 53.532, 53.533, 56.440Giora, E - 26.431Giovagnoli, S - 26.418Giuliana, L - 53.508Gizzi, M - 26.440Glaser, J - 43.423Glasser, DM - 43.509Glennerster, A - S1Glosson, PE - 21.22Gmeindl, L - 53.456, 54.12Godwin, H - 53.547Goebel, R - 43.515, 43.528Goffaux, V - 33.442, 43.528Golarai, G - 16.526Golby, A - 34.23Gold, J - 36.513Goldstein, J - 33.410Golish, M - 53.434Golomb, J - 42.21Goltz, H - 16.427Gómez-Cuerva, J - 16.535Gomi, H - 23.428Goodale, MA - 23.432, 23.433,61.26, 61.27Goodhew, S - 41.25Goodhew, SC - 41.26Goodman, N - 33.534Goossens, J - 22.21, 23.409Gorbunova, 
E - 43.427Gorchetchnikov, A - 33.329Gordon, GE - 63.461Gordon, GN - 63.461Gore, J - 23.401Gorea, A - 61.14, 63.454Gori, M - 53.454, 53.508Gori, S - 23.450, 26.320, 26.323,26.431Goryo, K - 23.502, 23.512Gosselin, F - 26.514, 33.517, 36.515,36.516, 36.550, 43.312, 53.413Gottesman, CV - 33.537Gottlieb, J - S5Gout, O - 23.553, 26.327Goutcher, R - 56.304, 56.307Govenlock, SW - 56.523Grabowecky, M - 36.453, 41.15,43.534, 43.535, 53.443, 53.446,61.25, 62.22Graf, EW - 24.15, 53.430Graf, M - 53.548Graham, N - 35.11Graham, R - 16.533, 33.508Granata, Y - 33.313Granrud, C - 32.12Gratton, G - 36.556Graves, T - 26.312Gray, K - 33.511Grayem, R - 56.305Green, C - 26.458Green, CS - 26.420, 33.449, 55.24Greenberg, AS - 63.432Greene, M - 23.541Greenlee, MW - 53.320Greenwood, J - 23.308, 33.307Gregory, E - 33.544, 61.13Griffin, H - 33.518Griffiths, TL - 53.331Grill-Spector, K - 16.513Grinshpun, B - 35.11Griscom, WS - 63.419Groen, I - 53.525Gronau, N - 23.550Grose-Fifer, J - 16.524Grossberg, S - 16.512, 33.329, 34.15,36.305, 43.418, 56.329Grossman, E - 33.421Grossman, ED - 33.422Grove, P - 53.426Grueschow, M - 36.408Gu, J - 33.521Guenther, BA - 63.436Guindon, AH - 56.521Guntupalli, JS - 26.502, 26.503Guo, F - 26.517, 35.21Guo, R - 36.433Gupta, R - 43.436Gureckis, TM - 33.401Gurnsey, R - 33.306, 33.312, 56.441Guzman-Martinez, E - 43.534,53.446HHaak, KV - 25.24, 25.25Haberkamp, A - 16.539Haberman, J - 33.450Hackney, A - 16.404Hadad, B - 32.16, 43.443, 56.434Hadj-Bouziane, F - 24.25Hadwin, J - 16.534Haenel, NV - 36.411Hahn, A - 33.524Hairol, MI - 33.304, 33.305, 33.308Haladjian, H - 53.420, 56.422Halberda, J - 53.311, 63.448Halen, K - 16.442Hall, A - 16.527Hallum, LE - S4Ham, M - 53.532, 53.533, 56.440Hamalainen, M - 51.15Hamker, F - 63.444Hamker, FH - 56.507hammal, z - 26.514Hamon, G - 26.534Han, S - 52.24Handy, T - 23.449, 36.454, 53.541Hanlon, C - 56.537Hanseeuw, B - 43.513, 43.516Hansen, BC - 56.401Hansen, T - 31.12Hanssens, J - 16.408Harber, K - 33.429Harders, M - 22.14Harding, G - 36.507Harel, A - 42.25Harel, M - 33.432Harlow, J - 33.508Haroush, K - 41.27Harrigan, K - 16.541Harris, A - 43.517Harris, I - 42.16Harris, J - 36.507Harris, JM - 36.542, 41.13Harris, L - 53.423Harris, LR - 43.540, 53.424Harrison, SJ - 55.25<strong>Vision</strong> <strong>Sciences</strong> <strong>Society</strong>355


Author IndexVSS 2010 AbstractsHartcher-O’Brien, J - 53.450Hartmann, TS - 56.511Harvey, M - 42.12Harwood, M - 16.418Hasegawa, H - 33.412Hasegawa, T - 53.315Hashimoto, K - 56.331Hass, C - 36.307Hasson, U - 43.304Hatzitaki, V - 16.411Havanur, S - 43.456Hawco, C - 53.444Hawelka, S - 56.506Hawken, MJ - 63.456Hawthorne Foss, A - 43.527Haxby, J - 26.501Haxby, JV - 26.502Hayes, J - 26.526Hayhoe, M - 43.306, 43.410Haynes, J - 36.408, 43.525Hays, J - 25.21Hayward, J - 16.426Hayward, W - 26.529, 42.16, 43.430Hayward, WG - 23.530Hayworth, K - 26.505He, D - 26.419He, S - 23.513, 23.514, 26.303,33.433, 36.439, 56.423He, X - 53.523He, Y - 33.527He, ZJ - 23.507, 36.403, 43.449,51.16, 56.446Heath, M - 16.419, 36.326, 36.327,36.328, 36.329, 36.330hebert, s - 26.514Hecht, H - 56.330Hecht, L - 26.305Heeger, D - 32.23Heeger, DJ - S4, 43.304Heffner-Wong, A - 16.554, 36.413Hegdé, J - 16.457, 16.458, 53.330Hegenloh, M - 43.307Hein, E - 24.12Heinen, K - 33.542Heinen, S - 26.439Heinze, H - 36.408Heitz, R - 63.428Held, RT - 62.14Helseth, S - 52.26Henderson, J - 43.411Hennig, J - 56.320Henry, CA - 63.456Henson, RN - 56.553Hérault, J - 33.328Herlihey, T - 61.22Herman, J - 16.418Herrington, J - 33.501Herron, T - 33.414Herzog, M - 21.12Herzog, MH - 26.417, 35.16Hess, R - 26.553Hesse, FW - 56.415Hetley, R - 54.22Heyes, C - 33.431Heyselaar, E - 16.551Hibbard, P - 16.451, 26.429, 36.541Hibbeler, P - 63.455Hibbeler, PJ - 53.324Hibino, H - 33.521Hickey, C - S5, 16.528Hidalgo-Sotelo, B - 33.447Higgins, JS - 33.547Highsmith, J - 16.446Hilkenmeier, F - 26.307, 43.429Hillenbrand, S - 25.12Hillis, J - 33.529Hillstrom, AP - 56.537Hindle, J - 26.411Hine, T - 43.533, 43.537Hinton, G - 43.522Hipp, D - 33.513Hirai, M - 33.424Hiris, E - 33.427Hisakata, R - 26.432Histed, M - 34.15Ho, C - 26.306Hochstein, S - 26.322, 33.437, 41.27,63.405Hock, H - 36.456Hodsoll, J - 36.459Hoffman, JE - 56.421Hoffman, K - 61.11Hoffmann, K - 56.503Hogendoorn, H - 33.543, 53.451Holcombe, A - 53.455, 56.416Hollingworth, A - 16.542, 23.517,34.13, 43.311, 63.434Holloway, SR - 16.412Holmes, T - 23.326Holt, D - 36.519Holtmann-Rice, D - 36.501Hommel, B - 23.314Hon, A - 36.425Hong, SW - 23.526Hooge, IT - 16.414Hoover, A - 16.533Hoover, AE - 43.540Hoover, S - 16.524Hopkins, E - 56.324Horne, G - 36.515Horowitz, T - 52.23, 52.25, 53.422,56.410Horowitz, TS - 33.454, 56.409Horwitz, G - 36.307Hospadaruk, L - 32.21Hou, C - 32.14Hou, F - 16.429Houpt, J - 56.544Howard, S - 16.458Howe, P - 56.410Hsiao, J - 53.432Hsiao, JH - 23.529Hsieh, P - 25.15Hu, B - 53.429Huang, A - 36.425Huang, C - 16.428, 16.429, 41.16,51.25Huang, J - 43.323Huang, P - 43.455Huang, S - 23.452Huang, T - 36.410, 53.317Huber, D - 36.314Hubert-Wallander, B - 33.449Huebner, GM - 26.459Huff, M - 33.446, 56.415, 56.418Hughes, M - 16.509Huh, AE - 53.512Huk, A - 56.320Huk, AC - 51.23Hulleman, J - 26.530, 43.438Hummel, JE - 21.22, 23.540Humphreys, G - 35.26, 36.459,43.438, 43.456Hunt, A - 21.11Hunter, T - 63.448Hunyadi, E - 33.501Hupé, J - 26.401, 33.405Hurwitz, M - 23.448Hussain, Z - 26.414Huth, AG - 23.405Hutzler, F - 56.506Huynh, CM - 23.536Hwang, A - 26.541, 33.552Hwang, AD - 36.413Hwang, S - 53.314Hyun, J - 43.320Hyvärinen, A - 56.431IIaria, G - 33.556, 53.539, 53.541Ichikawa, H - 16.520Ichikawa, M - 53.519Ietswaart, M - 42.24Ihssen, N - 43.322Ikeda, K - 53.315Ikeda, T - 21.17, 43.401Ikkai, A - 16.420Illie, L - S4, 35.23Im, HY - 53.311Interrante, V - 33.325Iordanescu, L - 53.443, 61.25Isa, T - 21.17, 43.401Ishai, A - 33.520Ishibashi, K - 56.541Ishii, M - 16.455Ishii, S - 43.524Issolio, L - 16.450Itakura, S - 
23.531Itier, R - 33.457Itti, L - 21.17, 23.327, 26.324, 33.330,36.536, 43.401, 43.402, 43.420,53.529Ivory, S - 36.415JJ. Calder, A - 56.540Jack, RE - 62.26Jacob, B - 24.22Jacob, J - 56.433Jacob, M - 33.437Jacobs, R - 53.319Jacobs, RA - 16.546Jacques, C - 23.522, 24.23, 43.514Jacques, T - 56.401Jäger, F - 36.426Jahn, G - 56.415, 56.418, 56.424Jain, A - 53.529, 62.16Jain, R - 16.510James, K - 53.512James, T - 43.529Jangraw, D - S5Jankovic, D - 53.435Jansma, BM - 23.524Janssen, P - S1Jantzen, K - 33.524Jardine, NL - 56.425Jarick, M - 53.444Jastorff, J - 23.437Jax, SA - 23.438Jazayeri, M - 51.22, 63.442Jefferies, LN - 26.308Jeffery, L - 16.517, 33.526Jehee, J - 36.401Jenkin, HL - 53.424Jenkin, MR - 53.424Jensen, MS - 36.514, 43.432Jeong, SK - 63.447Jessula, S - 23.508Jiang, F - 43.515Jiang, Y - 33.433, 41.22Jiang, YV - 26.317, 33.515Jin, Z - 26.439Jingling, L - 26.527Jochum, J - 36.426Johnson, A - 16.532, 26.524, 43.302,53.324Johnson, AP - 56.401Johnson, M - 21.25, 21.25, 53.422Johnson, MH - 53.509, 53.549Johnston, A - 22.25, 22.26, 33.431,33.518, 53.401, 53.445, 53.447,53.451Johnston, K - 16.551, 43.321Johnston, S - 26.309, 26.410, 26.411Jolicoeur, P - 36.451Jones, S - 53.512Jonikaitis, D - 21.13, 43.307, 54.24Jordan, K - 33.438Joseph, C - 16.537Joubert, OR - 26.460Judd, T - 43.408Julian, JB - 23.406Juni, MZ - 33.401Juricevic, I - 16.442, 63.420Jurs, B - 26.415Juttner, M - 16.502KKabata, T - 26.538Kaddour, L - 43.324Kaeding, M - 33.325Kafaligonul, H - 43.530Kahn, D - 43.517Kakigi, R - 16.518, 16.519, 16.520Kaldy, Z - 53.511Kalghatgi, S - 53.409Kalia, AA - 36.534Kallenberger, S - 36.544Kallie, CS - 16.401, 36.534, 62.11Kam, J - 23.449Kanai, R - 21.27, 23.408, 33.543,36.450, 53.449Kanan, C - 33.555Kanaya, S - 43.546Kanazawa, S - 16.501, 16.518,16.519, 16.520, 36.509, 53.403,53.501Kane, A - 36.317Kane, D - 43.506Kang, H - 23.520Kang, M - 16.547Kang, X - 33.414Kantner, J - 16.522, 53.321356 <strong>Vision</strong> <strong>Sciences</strong> <strong>Society</strong>


VSS 2010 AbstractsAuthor IndexKanwisher, N - 21.26, 23.406, 25.15,32.17, 42.21, 43.521Karmiloff-Smith, A - 53.549Kashiwase, Y - 63.429Kasper, R - 23.456Kassam, K - 51.15Kassuba, T - 63.427Kastner, S - 26.507, 42.22Kato, R - 21.17, 43.401Katsuki, F - 36.455Katyal, S - 63.421Kaul, C - 22.24, 33.520Kaulard, K - 33.509Kawahara, J - 36.445Kay, P - 63.411Keane, B - 51.13Keating, T - 26.509Keil, B - 32.17Kellman, P - 31.22, 51.13Kellman, PJ - 33.412Kelly, DJ - 43.452Kelly, J - 26.314, 26.422Kelly, K - 16.431Kenney, A - 33.534Kenny, R - 56.528Kenyon, G - 53.532, 53.533Kenyon, GT - 56.440Kerrigan, IS - 53.430Kersten, D - 55.24Kerzel, D - 23.314, 23.454, 26.535,43.407Khan, A - 23.323, 26.439Khawaja, F - 36.311Khesin, A - 25.14Khuu, S - 23.422Kies, S - 36.421Kihara, K - 36.445Kilias, J - 56.515Killian, A - 36.520Killingsworth, S - 53.516Kim, C - 23.520, 53.439Kim, D - 53.318, 53.514, 56.405Kim, J - 23.416, 23.520, 23.539Kim, R - 43.532Kim, S - 23.505, 53.439, 56.327Kim, YJ - 23.511, 36.532Kimchi, R - 56.548Kimura, E - 23.502, 23.512, 36.418Kimura, K - 26.318, 53.315Kingdom, F - 31.11Kingdom, FA - 56.435, 56.441Kingstone, A - 36.454, 43.303,56.546Kinka, D - 53.538Kiorpes, L - 42.14, 56.443Kirchner, H - 53.534Kirkby, J - 36.545Kirkham, N - 53.507Kit, D - 26.455Kita, S - 56.541Kitaoka, A - 26.433Kitazaki, M - 33.434, 53.427Klatzky, R - 36.547Klatzky, RL - 36.512Klein, S - 16.416, 26.426, 26.556,56.405Kleinholdermann, U - 23.431Klingenhoefer, S - 16.417Klink, C - 25.13Knapen, T - 26.304, 56.510Knill, D - 36.319, 53.429Knill, DC - 16.546Knörlein, B - 22.14Ko, PC - 26.302Kobayashi, M - 16.518Koch, C - S5, 23.313, 52.12, 56.432Kochli, DE - 36.521Koelewijn, L - 56.403Koenig-Robert, R - 26.540Kogelschatz, L - 16.511, 43.543Kohl, P - 36.426Kohler, A - 23.403, 51.24Kohler, P - 43.435Kohler, PJ - 26.503, 56.423Kolster, H - 23.407Komban, SJ - 63.457Kömek, K - 36.505Komogortsev, O - 16.533Konar, Y - 56.542Konen, C - 42.22Konkle, T - 34.25, 36.529Konstantinou, N - 43.329Kornprobst, P - 43.504, 43.510Kosmides, A - 56.422Kosovicheva, A - 63.422Kosovicheva, AA - 23.455Kouider, S - 33.317, 43.426Kourtev, H - 56.422Kourtzi, Z - 23.506, 31.21, 55.22,56.439Kowler, E - 16.422, 23.315, 26.440,43.409Koyama, S - 33.521Kramer, R - 33.428Kranjec, A - 53.411Krauzlis, RJ - 51.27, 63.426Kravitz, D - 25.26, 33.545, 56.534Kreiman, G - 34.23, 42.23Kreindel, E - 33.532Kreither, J - 43.319Krekelberg, B - 26.443, 26.504,26.552, 56.511, 61.11, 61.17Kridner, C - 36.537, 36.538Kriegeskorte, N - 56.552Krigolson, O - 36.328Kristjansson, A - 54.26Kristjánsson, Á - 26.531Kromrey, S - 16.458Kuai, S - 56.439Kuang, A - 25.16Kubischik, M - 61.11Kubodera, T - 53.426Kucukoglu, G - 62.15, 63.404Kuefner, D - 24.23Kuhl, B - 54.15Kumada, T - 23.314Kunar, M - 33.453Kunsberg, B - 56.440Kunz, BR - 62.13Kurby, C - 26.532Kuriki, I - 26.433, 56.331, 63.429Kurki, I - 56.431Kuzmova, Y - 33.531, 52.25Kvam, P - 43.457Kveraga, K - 26.508, 51.15Kwak, M - 16.415Kwok, I - 35.11Kwon, M - 42.26Kwon, O - 36.319Kwon, T - 16.454LLaboissière, R - 36.406Laddis, P - 26.438Lages, M - 33.529Lalanne, L - 36.554Laliette, P - 23.553Lam, S - 23.529, 23.530Lamme, V - 23.310, 26.409, 35.17,53.525Lamme, VA - 33.409, 33.542, 53.328,53.417, 54.14, 56.406Lamp, G - 43.424Lamy, D - 26.423Landau, A - 36.457, 56.429, 63.422Landwehr, K - 56.330Landy, MS - S4, 23.441, 43.305Lansey, J - 23.306Lanyon, L - 26.321, 53.540Lao, J - 33.529Lappe, M - 23.324Lappin, J - 36.320Larocque, A - 16.538Larsen, S - 36.440Larson, A - 36.538LaSala, A - 56.442Lassonde, M - 16.435Lau, H - 26.312, 26.406, 
26.407,43.413Lau, S - 33.310Lavie, N - 43.329Lawrence, J - 26.404Lawson, RP - 56.553Lawton, T - 16.437Le, A - 23.439Leal, S - 53.441Leber, AB - 23.518, 36.444Lebrecht, S - 23.535, 26.510Lechak, JR - 36.444Lee, A - 51.26Lee, AL - 26.430Lee, D - 16.531Lee, DH - 56.538Lee, H - 53.412Lee, HS - 51.24Lee, J - 33.326, 33.441, 52.22, 56.402Lee, K - 23.531, 23.532, 43.313Lee, T - 23.520Lee, YF - 23.445Lee, YL - 33.548Leek, C - 16.508, 26.410, 26.411,33.546Lefèvre, P - 56.529, 56.530Legge, G - 26.303, 42.26Legge, GE - 16.401, 36.534, 62.11Lehky, S - 26.547Lengyel, M - 34.14Lenkic, PJ - 26.512Lennert, T - 32.22, 36.451Lenoble, Q - 36.430Leonard, CJ - 36.436Leonard, HC - 53.549Leonardo, Z - 16.555Lepore, F - 16.435, 53.503Lescroart, M - 26.505Lescroart, MD - 26.506Lesmes, L - 16.429, 26.438Lesperance, J - 36.505Lester, BD - 43.316Lev, M - 63.459Leveille, J - 56.328, 56.329Leventhal, AM - 53.515Levi, D - 26.426, 55.26Levi, DM - 16.430Levin, D - 53.516Levin, N - 23.553Levine, M - 36.321Levinthal, B - 16.529, 51.14, 63.435Lewis, D - 23.422Lewis, TL - 32.16, 43.443, 56.434Lewkowicz, D - 43.543Leyssen, MH - 63.407, 63.408Li, A - 36.504Li, H - 33.415Li, K - 56.301Li, L - 33.327, 56.321, 56.322Li, M - 53.524Li, S - 31.21, 56.321Li, W - 36.331, 36.332, 56.439Li, X - 23.507, 33.536Li, Y - 16.454, 36.417, 63.433Li, Z - 33.540, 43.550, 43.551Liao, H - 36.435Libera, CD - S5Liberman, A - 16.526Lidz, J - 63.448Liégeois-Chauvel, C - 53.534Likova, L - 21.24Limber, J - 16.548Limousin, P - 16.407Lin, I - 61.14Lin, J - 54.13, 56.414Lin, L - 26.402, 26.403Lin, ST - 16.432Lin, T - 51.26Linares, D - 53.455, 56.416Linden, D - 43.322Lindsey, D - 63.418Ling, S - 36.401Lingeman, J - 42.14Linhares, J - 16.439, 23.411Link, S - 23.532Linsen, S - 63.407, 63.408Lipp, O - 41.25Lipp, OV - 41.26List, A - 36.453, 53.443Liston, DB - 63.426Listorti, C - 23.318Liu, C - 31.25, 53.527Liu, G - 36.324Liu, H - 34.23Liu, J - 31.23, 35.27Liu, N - 24.25Liu, T - 23.514, 26.303, 32.21Liu, X - 26.425Liverence, BM - 41.21Liversedge, S - 36.545Livingstone, M - 23.527Livitz, G - 16.441Lleras, A - 16.529, 16.540, 26.532,26.545, 36.556, 43.423Lo, H - 53.511Lo, O - 26.513Loebach, J - 43.544Loffler, G - 63.461<strong>Vision</strong> <strong>Sciences</strong> <strong>Society</strong>357


Author IndexVSS 2010 AbstractsLogan, G - 21.14Logvinenko, A - 36.419, 53.404Logvinenko, AD - 16.444Loken, E - 56.550Longfield, D - 16.523Lopez-Calderon, J - 43.319Lorenceau, J - 23.547, 33.404Loschky, L - 33.538, 36.537, 36.538Lossin, F - 36.426Louveton, N - 33.328Loveland, K - 35.27Lovell, PG - 36.531Low, WT - 53.310Lu, H - 26.430, 36.308, 51.26Lu, J - 36.308Lu, Z - S2, 16.428, 16.429, 26.447,31.23, 41.16, 51.25, 52.24, 54.22Lucia, L - 53.536Luck, S - 16.542, 16.545, 26.528,43.319, 43.320, 63.430Luck, SJ - 34.13, 36.436Lugo, J - 53.428, 53.431Lugtigheid, A - 56.445Lundwall, R - 36.452Lunghi, C - 22.11Luo, G - 56.513Lupyan, G - 53.411Luria, R - 53.419Ly, R - 63.448Lynch, D - 56.305Lynn, C - 43.511, 51.21MM. Swallow, K - 43.454Ma, WJ - 16.556, 21.23Ma, X - 56.404Macdonald, J - 26.311, 56.407MacEvoy, S - 33.539Machizawa, M - 21.27Machulla, T - 53.452Mack, M - 53.531Mack, SC - 63.426MacKay, M - 23.313MacKenzie, K - 26.421Macknik, SL - 16.425, 36.549Maddock, R - 52.14Madelain, L - 16.418Madigan, SC - 53.408Madsen, J - 34.23Maertens, M - 36.420Maguinness, C - 43.541, 56.528Mahon, B - 62.21Maier, A - S6Maij, F - 56.512Major, A - 16.438Makovac, E - 53.433Makovski, T - 43.454Malach, R - 42.25Malhotra, P - 42.12Malkoc, G - 31.11Maloney, L - 33.416, 33.448, 43.308Maloney, LT - 23.431, 33.401,33.454, 53.407Mamassian, P - 16.452, 23.515,26.445, 26.546, 36.551, 43.503,53.436Manahilov, V - 63.461Mancuso, G - 16.459Manginelli, A - 36.414Mangini, M - 23.538Maniscalco, B - 26.407, 43.413Manjunath, V - 36.546Mansour, R - 35.27Marchant, A - 23.551Mareschal, D - 32.15, 53.509Mareschal, I - 23.307Markovic, S - 63.409Markowitz, J - 16.512Markus, H - 56.424Marlow, P - 41.12Marois, R - 54.16Marotta, J - 23.436, 26.404Marotta, JJ - 33.440Marsman, J - 25.24Martin, A - 26.519Martín, A - 56.318Martin, R - 33.442Martinez, A - 33.506, 33.507Martínez, M - 26.315Martinez-Conde, S - 16.425, 36.549Martinez-Trujillo, J - 32.22, 36.451,43.318Maruya, K - 43.507, 43.508Maryott, J - 26.452Masakura, Y - 53.519Masson, G - 26.445, 34.11Masson, GS - 43.503, 43.504, 43.510Mast, F - 56.535Masterman, H - 33.457Mathewson, K - 36.556Mathey, MA - 56.531Mathôt, S - 23.316Matin, L - 36.331, 36.332Matsukura, M - 16.542, 34.13Matsumiya, K - 22.23, 56.331,63.429Matsumoto, E - 26.538Matthews, N - 26.314, 26.422Matthews, T - 36.441Matthis, J - 16.410Matziridi, M - 56.512Maurer, D - 16.514, 16.515, 32.16,36.517, 43.443, 56.434Maus, GW - 55.16Ma-Wyatt, A - 36.316, 36.317May, K - 36.541Mayer, E - 53.548Mayer, KM - 33.535Mayer-Brown, S - 36.440Mayhew, S - 31.21Mayo, JP - 36.309McBeath, MK - 16.412McCamy, MB - 16.425McCarthy, R - 53.547McCloskey, M - 33.544, 33.549,61.13McCollough, A - 16.543, 52.23McCormick, CM - 33.514McCourt, M - 36.424McDermott, K - 16.442McDonald, P - 43.541McGovern, D - 56.435McGugin, RW - 23.535McInerney, J - 53.529McIntosh, R - 42.12Mckee, S - 56.314McKeeff, T - 53.535McKerral, M - 16.435McKinnon, K - 56.402McKone, E - 16.517, 16.527, 32.13,33.523, 53.545McOwan, P - 53.401McPeek, R - 23.323, 42.13McQuaid, J - 53.434Medendorp, WP - 36.323Medina, J - 23.438Mednick, S - 55.23Meek, B - 23.436Meese, T - 23.501Meeter, M - 43.429Mei, M - 23.528Meixner, TL - 16.522Mel, B - 16.510, 23.302Melcher, D - 56.508Meleis, A - 56.549Mendez, R - 33.508Mendola, JD - 23.508Mendoza, D - 43.318Meng, M - 24.21, 33.504, 56.554Meng, X - 36.503Menneer, T - 33.417, 53.547Menzel, R - 63.426Mereu, S - 26.532Merriam, EP - 43.304Mesik, J - 33.543Meso, A - 26.553Mestry, N - 53.547Mettler, E - 31.22Meuwese, JD - 53.328Mevorach, C - 26.326, 36.459Mhatre, H - 
33.329Micelli, C - 43.448Michel, M - 53.319Miellet, S - 23.523, 33.528Mihalas, S - 34.22Miksch, S - 36.426Milders, M - 23.325Miles, F - 41.11Miller, E - 34.15Miller, TS - 16.412Miller, W - 53.402Millin, R - 55.12Mingolla, E - 16.441, 26.310, 26.431,36.427, 43.418, 43.505, 56.306Mintz, J - 23.305Miranda, A - 33.439Miriyala Radhakrishn, S - 16.411Mirpour, K - 23.322, 52.16Mishima, T - 36.306Mitra, A - 43.315Mitroff, SR - 23.453, 33.451Miyahara-Self, E - 23.419Miyazaki, Y - 53.425Mizokami, Y - 16.443MJ Thomas, P - 53.316Mobbs, D - 26.407Moher, J - 26.537, 36.437Mojica, A - 33.413Mok, P - 52.26Molteni, M - 26.320, 26.323Mondloch, C - 16.523, 33.530Mondloch, CJ - 33.514, 43.518Moniz, E - 23.534Monnier, P - 23.410Montagnini, A - 26.445, 34.11,43.503Moore, C - 24.12Moore, CM - 23.517, 54.23Moore, KS - 36.438Moore, T - 43.325Mordkoff, JT - 26.533Morgan, L - 33.539Morgan, M - S4, 23.307Morgenstern, Y - 53.406Moriya, J - 33.505Moro, S - 23.321Morris, A - 26.552, 61.11Morris, S - 33.450Morrone, C - 22.11, 53.454Morvan, C - 33.448, 33.454, 43.308Mossbridge, J - 43.535, 53.443Most, S - 53.421Motoyoshi, I - 41.23, 53.403Mou, W - 33.536Mould, MS - 26.518Mouri, C - 23.516Movshon, JA - 51.22Mozgova, O - 33.420Muckli, L - 43.526, 43.542Muggleton, N - 33.419Muise, G - 56.517Mulla, A - 36.326Mulligan, JB - 26.316Mullin, C - 23.543Mundy, P - 36.513Munger, MP - 33.537Munneke, M - 26.406Munoz, D - 26.324, 26.520Murakami, I - 26.432, 26.433,26.435, 26.436, 26.441Murphy-Aagten, D - 36.522Murray, A - 42.16Murray, J - 16.521Murray, RF - 53.406Murray, S - 54.13, 56.414, 63.424Muthukumaraswamy, S - 26.309Muthukumaraswamy, SD - 56.403Myers, L - 35.22NNadasdy, Z - 56.326Nagai, J - 53.414Nah, G - 16.432Naito, S - 43.434Nakajima, Y - 26.551Nakano, L - S4Nakashima, R - 23.554Nakato, E - 16.518, 16.519Nakayama, K - 16.516, 23.527,24.24, 33.512, 43.437, 53.513,53.542, 53.544, 56.550, 62.21Nandy, A - 55.13Nandy, AS - 55.15Nanez, J - 26.412Narang, S - 63.436Narasimham, G - 36.320Narayanan, V - 16.425Nardini, M - 32.15Nascimento, S - 16.439, 23.411Naselaris, T - 23.405, 23.544Natu, V - 33.522, 53.543Navalpakkam, V - S5, 26.521Nawrot, E - 53.510Nawrot, M - 16.456, 53.510358 <strong>Vision</strong> <strong>Sciences</strong> <strong>Society</strong>


VSS 2010 AbstractsAuthor IndexNayar, K - 56.443Neely, K - 36.327Neider, M - 33.445Neill, WT - 63.433Nelson, R - 56.438Nelson, S - 53.441Nemenman, I - 53.532Nennig, L - 56.308Neumann, H - 43.510, 53.327,56.306, 56.317Neville, I - 56.402Newell, FN - 43.539, 43.541, 56.528Ngo, KJ - 26.448Nguyen, H - 36.407Nguyen, VD - 54.23Nguyen-Tri, D - 26.428Ni, R - 52.11, 56.303Nichols, D - 36.456Nicol, J - 43.538Niebur, E - 34.22Niehorster, DC - 56.321, 56.322Nielsen, S - 43.428Niemeier, M - 23.439Nishida, S - 22.12, 22.26, 43.501,43.502, 43.507, 43.508, 53.445Nishimura, A - 26.318Nishimura, M - 42.22Nishina, S - 53.514Noble, C - 53.443Nolan, AM - 43.556Nolan, H - 36.552Norcia, A - 32.14, 32.27, 33.411,56.314Norcia, AM - S3Norman, H - 56.518Norman, JF - 36.510, 56.518, 56.520Noudoost, B - 43.325November, A - 26.456Noyce, A - 26.452Nunez-Elizalde, A - 43.527OOakley, JP - 26.518Oba, S - 43.524Obadia, M - 26.327Oberfeld, D - 56.330O’Brien, J - 16.530O’Callaghan, C - 42.16O’Connor, J - 26.301Odic, D - 63.448Ogiya, M - 26.318Ogmen, H - 21.12, 53.301O’Herron, P - 51.17Ohtsu, K - 33.324Ohzawa, I - 26.453O’Kane, L - 56.304, 56.307Okazaki, Y - 56.315Olagbaju, O - 35.27Oleskiw, TD - 24.15Oliva, A - S4, 23.541, 25.21, 26.460,33.447, 33.533, 33.534, 34.25,36.529Olivers, C - 22.16, 26.307, 43.429Olk, B - 36.443Olkkonen, M - 31.16Olman, C - 23.402Olman, CA - 56.437Olsen, M - 56.325Olzak, L - 63.455Olzak, LA - 53.324Omlor, L - 56.317O’Neil, SF - 16.447Ong, W - 52.16Ong, WS - 23.322, 61.12Onimaru, S - 53.427Ono, K - 33.434Ons, B - 43.458Ooi, TL - 23.507, 36.403, 43.449,51.16, 56.446Oouchi, Y - 33.324Op de Beeck, H - 23.525, 26.405,26.416Op De Beeck, H - 33.553Or, CC - 26.427Orban, G - 23.407, 34.14Orban, GA - S1, 23.437O’Reilly, R - 53.528Orhan, AE - 53.319Ortega, L - 53.446Oruc, I - 53.541Oruç, I - 36.526Osborn, AF - 43.433Osborne, V - 16.555O’Shea, J - 36.508Ostendorf, F - 56.515O’Sullivan, C - 43.541Osunkunle, O - 36.546Otero-Millan, J - 16.425O’Toole, A - 33.522, 36.523, 53.543Otsuka, Y - 16.518, 16.519, 53.403Otto, TU - 53.436Owens, DA - 43.433PP. 
Ewbank, M - 56.540Pachai, MV - 16.525, 56.543Pack, C - 36.310, 36.311Pack, W - 16.416Pajtas, P - 62.21Palermo, R - 23.545, 53.546Pallett, P - 33.504Palmer, E - 33.439, 56.420Palmer, J - 54.21, 54.23Palmer, S - 63.417Palmer, SE - 24.14, 63.406, 63.407,63.408, 63.412, 63.413, 63.415,63.416, 63.419Palmeri, T - 16.507, 21.14, 53.531Palmero-Soler, E - 24.22, 24.23Palomares, M - S3, 32.14, 33.411Pan, JS - 16.460Pannasch, S - 43.403Papanastassiou, A - 34.23Papathomas, T - 23.503Papenmeier, F - 56.415, 56.418Paradis, A - 23.547, 33.404Parasurman, R - 26.301Pardhan, S - 53.305Pardo-Vazquez, JL - 36.549Paré, M - 16.420, 16.551, 35.25Pare, M - 43.321Pariyadath, V - 53.448Park, J - 23.444, 36.527Park, S - 36.529Parker, L - 56.438Parkes, K - 53.434Parkinson, J - 33.435, 56.426Parks, D - 53.529Parr, L - 36.522Parrott, S - 63.435Partanen, M - 16.426Pascalis, O - 43.313Passingham, R - 26.407Pastakia, B - 36.511Pasupathy, A - 52.15Patel, MN - 56.434Patel, S - 35.15Patten, M - 36.543Patterson, M - 53.310Pavlovskaya, M - 26.322Payne, H - 26.544Paz, N - 23.419Pearson, D - 35.27Peck, C - S5Pedersini, R - 33.454Peeters, R - 23.407Pegors, T - 23.549Peissig, J - 36.520Peissig, JJ - 23.534, 23.536Peli, E - 23.406, 56.513Pelli, D - 33.313, 33.314Pelli, DG - 55.14Pellicano, E - 16.517Peng, C - 25.26Pennartz, C - S5Perdreau, F - 56.516Perera, D - 56.420Peretz, I - 26.514Pérez Zapata, L - 34.16Perez Zapata, L - 23.321Perez, C - 23.553, 26.327Perlato, A - S5Perona, P - S5, 26.521Perrinet, L - 26.445Perrinet, LU - 43.503Perrone, JA - 33.323, 51.27Persaud, N - 26.407Pertzov, Y - 56.514Pestilli, F - 32.23Peterburs, J - 56.503Peters, A - 16.407Peterson, M - 33.413, 56.533Peterson, MA - 24.13, 26.448,33.406, 43.439Peterson, MS - 26.301, 36.442Peterzell, D - 53.434Petrini, K - 33.430Petro, L - 43.526, 43.542Petrov, A - 26.424Petrov, Y - 36.422Petrova, D - 23.423Petters, D - 16.502Peverelli, M - 26.323Pfeiffer, T - 53.506Pham, A - 42.14Phelps, E - 16.536Philbeck, J - 43.555, 56.449phillips, F - 36.505Phillips, J - 31.15, 36.523Phillips, L - 33.325Piazza, E - 56.556Pichler, P - 36.526Pick, H - 36.320Pidcock, M - 16.527Pierce, R - 56.311Pietroski, P - 63.448Pilz, KS - 56.522, 56.525Pinna, B - 23.417, 43.444Pins, D - 23.548Pinsker, EA - 36.438Pinto, L - 63.411Pinto, Y - 56.410Pisoni, D - 43.544Pitcher, D - 43.521Piwek, L - 33.430Pizlo, Z - 16.454, 43.457Place, S - 53.422Plank, T - 53.320Ploner, C - 56.515Poggesi, RM - 63.416Poggio, T - 36.312Pohl, K - 23.403Poirier, FJ - 33.516Pola, J - 56.509Polat, U - 26.548, 53.323, 63.459Poletti, M - 23.306, 23.318Pollick, F - 33.430Pollmann, S - 36.414Poltoratski, S - 53.418Pomerantz, J - 43.447Pomplun, M - 26.541, 33.552,36.413, 56.513Pont, S - 63.404Pont, SC - 53.405Powell, N - 31.26Prablanc, C - 36.406Prado-León, LR - 63.412Prasad, S - 23.438Pratt, J - 33.444, 36.433, 54.25, 62.24Preston, A - 26.526Preston, TJ - 23.404Priftis, K - 26.323Prime, S - 23.434Prime, SL - 33.440Prins, N - 26.557Prinz, W - 33.435Prinzmetal, W - 36.457, 56.429Priot, A - 36.406Proffitt, D - 43.553, 56.324, 62.12Prudhomme, C - 36.556Pun, C - 43.317Punzi, G - 23.312, 23.413Purcell, B - 63.428Puri, A - 33.450Putnam, N - 22.22Pyles, J - 33.421Pylyshyn, Z - 53.420, 56.422Pype, A - 54.13, 56.414QQian, J - 36.422Qin, J - 33.309Qu, Z - 16.506, 53.329Quinn, CF - 56.437Quinn, P - 43.313RR. Saunders, D - 33.424Raboy, D - 33.522, 53.543Rademaker, RL - 16.550Radulescu, P - 36.433Rafal, R - 26.325, 42.13<strong>Vision</strong> <strong>Sciences</strong> <strong>Society</strong>359


Author IndexVSS 2010 AbstractsRaghunandan, A - 56.308Rahnev, D - 26.406Raj, A - S4Rajan, A - 33.439Rajashekar, U - 43.301Rajsic, J - 63.439Ramachandra, C - 23.302Ramachandran, V - 53.437Ramírez, F - 43.525Ramon, M - 23.522, 56.529Ramscar, M - 26.456Rand, K - 56.447Rangel, A - S5Rashal, E - 33.311Raudies, F - 53.327, 56.306Rawji, F - 16.438Raymond, J - 16.530Raymond, JE - 16.535Rayner, K - 36.545Raz, N - 23.553Read, J - 36.540Read, JC - 36.546Reavis, E - 43.435Reavis, EA - 26.503, 56.423Reddoch, S - 35.27Reed Jones, J - 26.408Reed, S - 53.550Rees, G - 21.27, 22.24, 26.554,33.432, 33.520, 33.550, 36.450,43.329, 43.442, 53.449Reeves, A - 56.305Régis, J - 53.534Reh, T - 43.536Reichow, A - 53.520, 53.522, 56.312Reilly, RB - 36.552Reiss, J - 56.438Remus, D - 16.513Rémy, F - 23.548Renaudo, C - 53.533Renken, R - 25.24, 25.25Rensink, R - 26.457, 26.515Ress, D - 63.421Rhea, C - 33.319Rhodes, G - 16.517, 33.526Rice, HJ - 43.541Rich, A - 53.309Rich, AN - 56.403Richard, B - 26.524Richards, H - 16.534Richler, J - 56.547Richters, D - 61.15Ridderinkhof, R - 26.409Riddle, M - 36.419Rider, A - 22.26Rieiro, H - 36.549Ries, B - 33.325Rieser, J - 36.320Righi, G - 23.534, 43.515Riley, M - 33.501Rinaudo, C - 53.532Ringbauer, S - 53.327Ringer, R - 33.538, 36.537, 36.538Rio, K - 33.319Ripamonti, C - 23.418Rissman, J - 54.15Ritchie, K - 21.11, 23.521Rivera, SM - S3Rivolta, D - 23.545, 53.546Rizzo, M - 33.326, 33.441Robbins, R - 56.551Roberts, K - 53.538Robertson, L - 56.556Robertson, LC - 23.455, 43.417,53.440, 63.446Robertson, M - 26.429Roddy, G - 33.306, 33.312Rodger, H - 56.539Rodrigues, A - 16.524Rodzon, K - 33.438Roe, A - 36.308Roelfsema, PR - S5Rogers, B - 62.17Rogers, DK - 43.539Roggeveen, AB - 26.319Rokem, A - 52.14, 53.322, 56.429Rokers, B - 51.23, 56.320Rolfs, M - 21.13Roller, B - 33.413Romeo, A - 33.403Roorda, A - 16.430, 22.22, 56.301Roper, ZJ - 26.533Rosander, K - 42.14Rosen, S - 33.313, 33.314Rosenberg, A - 36.423Rosengarth, K - 53.320Rosenholtz, R - S4, 35.23, 53.527Ross, NM - 43.409Rossi, EA - 16.430Rossini, JC - 26.522Rossion, B - 16.514, 23.522, 24.22,24.23, 43.513, 43.514, 43.515,43.516, 53.548, 56.529, 56.530Rossit, S - 42.12Roth, E - 16.446Rothkopf, C - 43.410Rothlein, D - 33.549Roudaia, E - 56.522, 56.524Roumes, C - 36.406Rovet, J - 16.436Rowe, J - 56.552Rowland, J - 31.14Roy, C - 43.312Roy, M - 16.435Rubin, N - 23.511, 52.13Rucci, M - 23.306, 23.318, 61.15Rudd, ME - 36.416Ruffino, M - 23.450, 26.320, 26.323Rufin, V - 23.546Runeson, E - 63.424Rusch, M - 33.326, 33.441Rushton, S - 61.22Russell, R - 23.527, 53.544Russell, W - 63.402Rutherford, M - 53.551Rutledge, T - 53.434SS. 
Stoyanova, R - 56.540Saarinen, J - 56.431Sack, AT - 23.510Sadr, J - 36.515Saegusa, C - 23.444, 36.527Saenz, M - 52.12Sagi, D - 63.454Sahraie, A - 21.11, 23.325, 23.521Said, C - 36.528Saiki, J - 26.453, 43.309, 53.302Sakai, K - 36.306Sakamoto, S - 53.426Sakurai, K - 53.426Salat, D - 52.11Salvagio, E - 33.406, 33.413Sampasivam, S - 56.435Sampath, V - 53.504Sanada, M - 53.315Sanbonmatsu, K - 53.533Sanchez-Rockliffe, A - 43.424Sandini, G - 53.454, 53.508Sanocki, T - 16.530, 33.452, 33.541,36.539Santos, EM - 26.440Sapir, A - 56.426Sapkota, R - 53.305Sasaki, Y - 26.412, 52.11Satgunam, P - 63.402Sato, M - 16.455, 23.442Sato, T - 26.551, 53.427Saunders, DR - 33.426Saunders, J - 56.323Savazzi, S - 26.511Savoy, S - 16.537Sawada, T - 16.454, 24.13, 36.509,43.457Sawaki, R - 63.430Sawayama, M - 36.418Saxe, R - 32.17Saygin, AP - 33.419, 33.432Sayim, B - 35.16Scalf, P - 26.313Scarfe, P - 22.25, 26.429Schall Jr., M - 33.441Schall, J - 21.14, 63.428Scharff, A - 54.21Scharlau, I - 26.307, 43.421, 43.429Scheel, M - 33.556, 53.540, 53.543Schendan, H - 53.530, 53.536Schiller, P - 16.415Schiltz, C - 33.442, 43.513, 43.516,43.528Schindel, R - 26.555Schirillo, J - 36.419Schlicht, E - 33.512Schloss, K - 63.417Schloss, KB - 24.14, 63.412, 63.413,63.415, 63.416Schmalzl, L - 23.545Schmidt, C - 36.544Schmidt, F - 33.402Schmidt, J - 33.455Schmidt, T - 16.539, 33.402, 36.426Schnall, S - 62.12Schneegans, S - 43.311Schneider, KA - 32.25Schneiderman, M - 43.318Schneps, M - 16.554Schneps, MH - 36.413Schnitzer, BS - 16.422, 23.315Scholl, BJ - 36.533, 41.21, 52.27,56.444, 63.449Scholte, H - 23.310Scholte, HS - 33.409, 33.542, 53.328,54.14, 56.406Scholte, S - 26.409, 35.17, 53.525Schoonveld, W - S5Schor, C - 41.17, 56.502Schor, CM - 26.434Schotter, E - 36.545Schrater, P - 23.429, 31.26, 55.24Schrater, PR - 26.420Schreiber, K - 26.504Schuerer, M - 23.415Schultz, R - 33.501, 53.552Schulz, J - 43.403Schumacher, J - 23.402Schumacher, JF - 56.437Schütz, AC - 26.446, 34.12Schwalm, M - 51.24Schwan, S - 33.446Schwarzkopf, DS - 22.24, 26.554,33.550, 43.442Schyns, P - 43.526, 62.23Schyns, PG - 62.26Scilipoti, E - 36.405Scolari, M - 32.24Sears, T - 33.538Sebastian, S - 43.457Sedgwick, H - 41.12Sedgwick, HA - 43.556Segalowitz, SJ - 43.518Seiffert, A - 33.443Seiffert, AE - 26.302, 56.425Seitz, A - 33.436Sekuler, A - 56.542Sekuler, AB - 16.525, 26.319, 26.414,36.553, 56.522, 56.523, 56.524,56.525, 56.543Sekuler, R - 26.452, 43.323Sekunova, A - 33.556, 53.539,53.540, 53.543Serences, J - 32.24, 36.314Sereno, A - 26.547Sereno, M - 23.408Sereno, MI - 53.509Sergio, L - 23.434Series, P - 33.436Seron, X - 53.415Serrano-Pedraza, I - 36.546Serre, T - 36.312, 42.23Setti, A - 56.528Sexton, J - 23.401Sha, L - 56.554Shachar, M - 23.550Shah, M - 36.425Shalev, L - 26.326, 36.459Shams, L - 43.532Shapiro, A - 51.25, 63.443Shapiro, K - 26.309, 43.322Shapley, R - 23.511, 36.304Shapley, RM - 36.303Sharan, L - 53.527Sheedy, J - 26.526Sheinberg, D - 36.312Sheinberg, DL - 36.313Sheldon, C - 33.556Sheliga, B - 41.11Shelton, A - 54.12Shen, J - 23.327Shen, K - 35.25Shen, YJ - 16.531Sherman, A - 26.536, 36.453, 53.443,62.22Sheth, B - 35.27, 36.407Shevell, S - 16.449, 23.413, 36.423,51.11Shevell, SK - 23.414360 <strong>Vision</strong> <strong>Sciences</strong> <strong>Society</strong>


VSS 2010 AbstractsAuthor IndexShi, Y - 16.454Shibata, K - 43.523Shibata, T - 43.524Shiffrar, M - 16.537, 33.429Shikauchi, M - 43.524Shim, WM - 21.26Shimojo, E - 23.444, 36.527Shimojo, S - 23.444, 33.512, 36.527,56.326Shimozaki, S - 36.428Shin, E - 26.545Shin, J - 36.504Shin, K - 33.316Shioiri, S - 22.23, 26.318, 56.331,63.429Shiozaki, H - 56.310Shivik, J - 33.438Shneor, E - 33.437Shoda, M - 53.414Shomstein, S - 36.440, 52.22, 63.437,63.445Shooner, C - 53.301Short, L - 33.514, 33.530Shotts, M - 36.513Shroff, P - 16.431Siagian, C - 33.330Siddiqui, AP - 63.436Siegel, E - 33.503, 33.510Sigurdardottir, HM - 36.313Sigurjonsdottir, O - 54.26Sillan, O - 36.406Silvanto, J - 33.550Silver, M - 25.12, 34.15, 52.14,53.322, 56.429, 63.422Simic, N - 16.436Simoncelli, E - 43.301, 55.11Simoncelli, EP - S4Simoncini, C - 43.503Simons, DJ - 23.540, 36.514, 43.432Simpson, S - 36.404Sims, CR - 16.546Singal, G - 24.21Singer, J - 36.312Singer, W - 23.403, 51.24Singh, K - 26.309Singh, KD - 56.403Singh, M - 36.511, 43.451, 56.327Singhal, A - 36.322Sinha, P - 24.21Skilters, J - 43.444Slater, A - 43.313Sligte, IG - 53.417, 54.14Smeets, JB - 42.11, 56.512Smeulders, A - 23.310, 35.17Smilek, D - 33.331Smirl, J - 36.315Smith, D - 41.13Smith, F - 43.526, 43.542Smith, T - 23.320, 43.411Snapp-Childs, W - 53.517, 53.518Snodderly, M - 36.431Snyder, K - 26.455Sole Puig, M - 23.321Solomon, J - S4, 23.307Solski, A - 56.316Sommer, M - 56.505Sommer, MA - 36.309Song, C - 26.554Song, J - 23.323, 42.13Song, S - 16.430Song, Y - 53.325, 56.404Sonnert, G - 36.413Soowamber, R - 16.408Soroker, N - 26.322Soska, K - 42.14Souto, D - 23.454, 34.11Souverneva, A - 23.444Speck, O - 36.408Spector, K - 16.526Spehar, B - 43.440Speiser, R - 16.554Spence, C - 43.547, 53.432Spencer, J - 43.311Spencer, JM - 56.525Spering, M - 63.440Sperling, G - S2, 25.11Spitschan, M - 36.542Sprague, T - 53.453Springer, A - 33.435Srinivasan, K - 36.305Srinivasan, N - 43.436Srinivasan, R - 56.428Srivastava, N - 36.549Stanisor, L - S5Stanley, D - 16.536Stansbury, D - 23.544Steelman-Allen, KS - 26.316Steeves, J - 16.431, 23.543Steeves, JK - 43.540Stefanucci, J - 43.554, 56.451Stefanucci, JK - 43.548Steinberg, JB - 36.438Steinman, R - 43.457Steinman, S - 56.313Sterkin, A - 26.548, 53.323Stetten, G - 36.512Stevens, N - 56.451Stevenson, S - 22.22, 36.407Stewart, E - 36.316Stigliani, A - 33.540Stokes, S - 33.523Stoner, G - 43.530Storch, P - 63.461Stransky, D - 41.14Strasburger, H - 36.544Strauss, ED - 24.14, 63.415Street, WN - 36.514Strickland, B - 63.449Striemer, CL - 23.433Stringham, J - 36.431Strnad, L - 53.535Stroyan, K - 16.456Stubitz, S - 43.425Stupina, A - 43.447Su, YG - 23.507Su, YR - 43.449, 51.16Suben, A - 56.444Suchow, J - 41.24Suchy-Dicey, C - 43.419Sugarman, M - 33.449Sugimoto, F - 63.423Sugovic, M - 53.521Sullivan, B - 26.455, 43.410Sulman, N - 33.541, 36.539Sun, H - 23.452Sun, V - 26.442Sun, Y - 51.11Sunny, MM - 26.543, 36.434Supèr, H - 23.321, 33.403, 34.16Surkys, T - 36.555Suryakumar, R - 16.424Susilo, T - 53.545Susskind, J - 43.522Susskind, JM - 56.538Suzuki, S - 36.453, 41.15, 43.534,43.535, 53.443, 53.446, 61.25,62.22Suzuki, Y - 53.426Swallow, K - 41.22Swallow, KM - 26.317Sweeny, T - 41.15, 62.22Swindle, J - 36.510, 56.518Swisher, J - 23.401, 36.401, 56.510Sy, J - 43.422Symons, L - 33.524Szinte, M - 23.317TTa, KN - 36.324Tadin, D - 22.15, 43.509Tadros, K - 53.413Tahir, HJ - 26.434Takahama, S - 26.453Takashima, M - 53.501Takaura, K - 21.17, 43.401Takemura, H - 26.436Tam, D - 36.504Tam, J - 54.25Tamber-Rosenau, B - 36.437Tan, C - 36.312Tanaka, J - 23.535, 
36.515, 43.520,53.321, 53.552Tanaka, JW - 16.522, 36.514Tanca, M - 43.444Taniguchi, K - 53.537Tanno, Y - 33.505Tapia, E - 53.515, 56.433Tarampi, M - 56.447Tarr, M - 23.535, 26.510, 43.515,63.414Tarr, MJ - 23.534Tas, C - 63.434Tassone, F - S3Taubert, J - 36.522Tayama, T - 53.537Taylor, JE - 16.405Taylor, L - 16.517te Pas, S - 16.453te Pas, SF - 53.405Tenenbaum, J - 33.534Tenenbaum, JB - 21.21Terao, M - 26.441Teszka, R - 43.303Tey, F - 16.432Thabet, M - 26.427Thaler, L - 61.26, 61.27Tharp, I - 33.410Theeuwes, J - S5, 16.528, 22.16,23.316, 23.319, 43.407Thelen, L - 56.526Thomas, C - 26.508Thomas, L - 33.443Thomas, M - 23.546Thomas, N - 43.321Thompson, A - 56.427Thompson, J - 33.420Thompson, P - 51.27Thompson, R - 56.552Thompson, S - 23.402Thompson, TW - 56.412Thompson, W - 56.447Thompson, WB - 43.548, 62.13Thomson, DM - 33.323Thorpe, S - 56.428Thorpe, SJ - 56.531, 56.532Tillman, M - 16.447Timney, B - 16.438Tiruveedhula, P - 22.22Tisdall, MD - 32.17Tjan, B - 36.513, 55.13Tjan, BS - 33.309, 33.315, 33.316,34.24, 55.12, 55.15Tlapale, É - 43.504Tlapale, E - 43.510To, MP - 36.530Todd, J - 16.540, 36.502Todd, JT - S1, 36.518Todor, A - 43.404Todorov, A - 36.528Todorović, D - 36.429Tokunaga, R - 16.444, 36.419, 53.404Tolhurst, DJ - 36.530Tolhurst, DJ - 36.531Tomassini, A - 53.454Tong, F - 16.550, 23.401, 36.401,56.510, 63.431Tong, J - 56.502Tootell, R - 36.519Tootell, RB - 53.544Torralba, A - 23.541, 25.21, 33.533,33.534, 43.408Torres, E - 23.435Toskovic, O - 43.552Tower-Richardi, SM - 36.444Townsend, J - 56.544Toxopeus, R - 56.527Tran, C - 23.419Tran, M - 56.305Treisman, A - 63.425Tremblay, E - 16.435Tremblay, S - 53.313Triantafyllou, C - 32.17Trick, L - 26.408, 56.527Tripathy, S - 53.301Troiani, V - 33.501Troje, N - 33.423Troje, NF - 33.425, 33.426Troncoso, XG - 16.425Troscianko, T - 36.531Troyer, M - 43.544Truong, G - 16.426Truong, S - 61.24Tsai, T - 56.314Tsai, Y - 36.410Tse, P - 43.435Tse, PU - 26.503, 56.423Tseng, P - 21.15, 22.13, 26.324Tseng, Y - 43.423, 53.303Tsien, J - 23.303, 53.523Tsirlin, I - 56.309Tsotsos, LE - 26.319Tsuchiya, N - S6Tsuruhara, A - 36.509Tsushima, Y - 53.513Tsutsui, K - 26.318<strong>Vision</strong> <strong>Sciences</strong> <strong>Society</strong>361


Author IndexVSS 2010 AbstractsTurgeon, C - 53.503Turpin-Lavallée, P - 16.408Turret, J - 33.543Twedt, E - 43.553Tyler, C - 26.454, 34.26, 43.455Tyler, S - 33.422UUchikawa, K - 23.442Ueda, Y - 43.309Umemoto, A - 53.306Ungerleider, L - 24.25Uniat, D - 36.325Unuma, H - 33.412VV. Jiang, Y - 43.454Valadao, D - 23.448Valdois, S - 16.505van Assche, M - 43.446Van Assche, M - 43.441Van Belle, G - 56.530van Belle, G - 56.529van Boxtel, JJ - S6, 26.304, 56.432van Dam, L - 36.412, 42.15van den Berg, A - 23.409, 26.443van den Berg, AV - 22.21van den Berg, R - 21.23van den Hurk, J - 23.524Van der Burg, E - 22.16van der Kooij, K - 16.453, 53.405van der Kouwe, A - 32.17van der Linde, I - 53.305van der Togt, C - S5van Ee, R - 23.506, 23.509, 23.510van Gaal, S - 26.409, 33.542Van Gulick, A - 63.414Van Humbeeck, N - 43.445van Kemenade, B - 33.419van Koningsbruggen, M - 26.325van Lamsweerde, A - 53.308van Loon, AM - 56.406van Stijn, S - 51.24van Wezel, R - 25.13, 26.443Vandenbroucke, AR - 53.417vanEe, R - 36.506vanOostveen, R - 16.406VanRullen, R - 16.421, 26.311,26.437, 26.540, 43.415, 52.21,56.407Varakin, DA - 26.451Varghese, L - 35.27Vavassis, A - 16.532Vayssière, N - 23.548Vaziri Pashkam, M - 36.318Vaziri-Pashkam, M - 56.416Vecera, S - 23.517, 26.305, 33.326,33.441, 43.431Vecera, SP - 26.533Veenemans, A - 36.548Velichkovsky, B - 43.403Verfaillie, K - 26.534, 56.501, 56.530Verghese, P - 35.24Verhoef, B - S1Vermaercke, B - 33.553Versace, M - 56.329Verstraten, FA - 16.414, 53.451Vese, L - 51.26Vesia, M - 23.434, 43.310Vettel, J - 36.520Vetter, P - 43.542Vicente, GI - 23.536Vicente-Grabovetsky, A - 43.326Vickery, T - 51.12Victor, J - 23.305Vida, M - 16.515Villano, M - 23.538Vincent, W - 23.408Viret, A - 23.553Vishwanath, D - 16.451Visser, T - 41.25Visser, TA - 41.26Viswanathan, J - 43.315Vitu, F - 43.314Vizioli, L - 23.523, 43.452, 62.25Vlaskamp, B - 56.301Vo, M - 43.411Vogel, E - 16.543, 52.23, 53.419,54.11Vogel, EK - 16.544, 43.316, 56.409Voigt, K - 23.326Volbrecht, V - 23.410Volcic, R - 23.324Von Der Heide, R - 23.537, 53.326,56.545, 56.555von der Heydt, R - 26.519, 34.22,51.17von Grunau, M - 16.532von Grünau, M - 26.522, 36.441von Hofsten, C - 42.14von Muhlenen, A - 26.543, 36.434von Trapp, G - 42.14Voorhies, R - 43.420Voss, J - S6Vrkljan, BH - 26.319Vu, A - 23.544Vu, AT - 23.405Vu, L - 43.440Vuong, Q - 36.520Vuong, QC - 33.535WWade, A - 23.443, 31.13, 31.14Wagatsuma, N - 36.306Wagemans, J - 23.525, 26.405,43.412, 43.445, 43.458, 56.501Wagner, A - 54.15Wagner, D - 63.461Wagner, K - 53.437, 53.504Wake, T - 53.425Wakui, E - 16.502, 33.410Wald, LL - 32.17Walker Renninger, L - 16.423Wallace, D - 56.429Wallace, JM - 33.315, 33.316Wallisch, P - 51.22Wallman, J - 16.418Wallraven, C - 33.509Walsh, E - 33.552Walsh, M - 54.12Walsh, V - 33.419, 43.521Walter, A - 23.415Walther, D - 25.23Walther, DB - 25.22Wang, F - 56.404Wang, H - 43.304Wang, J - 23.443Wang, L - 33.433, 53.421Wang, R - 26.426Wang, RF - 33.547Wang, Y - 16.433, 53.329Wang, Z - 23.531Wann, J - 23.426Ward, R - 33.428Ware, C - 16.548Warren, PA - 26.444Warren, W - 16.409, 33.318, 33.319,33.321, 33.322, 61.21Warren, WH - 33.320Watamaniuk, S - 26.439Watanabe, T - S6, 26.412, 31.24,31.25, 36.410, 43.419, 43.523,52.11, 53.317, 53.318, 53.514,55.27Watkins, TJ - 54.16Watson, A - 23.419, 26.549Watson, D - 33.453, 33.456, 43.438Watson, T - 61.17Watt, R - 43.445, 56.526Wattam-Bell, J - 56.402Waugh, SJ - 33.304, 33.305, 33.308Weber, J - 43.515Webster, M - 16.443, 16.446, 63.420Webster, MA - 16.442, 16.447,63.411Weigelt, S - 23.403Weiler, J - 36.326Weimer, S - 36.523, 
53.543Weiskrantz, L - 21.11Weiß, K - 43.421Weissman, DH - 36.438Welch, L - 36.405Welchman, A - 36.543, 56.445Welchman, AE - S1Wells, ET - 23.518Weltman, AL - 62.12Weng, Q - 26.419Wenger, M - 16.534, 23.537, 33.417,53.326, 56.545, 56.555Werner, J - 16.446Werner, JS - 23.417West, G - 62.24Westheimer, G - 35.16Wexler, M - 23.317, 36.506Wheeler, A - 43.313Whitbread, M - 24.11White, A - 43.533, 43.537, 63.441White, B - 26.520White, D - 33.510Whitney, D - S3, 33.450, 55.16Whittaker, G - 36.407Whitwell, RL - 23.432Wichmann, FA - 36.411, 36.420,63.453Wickens, T - 63.455Wickham, C - 16.430Wiering, MA - 16.414Wijntjes, M - 63.404Wilbraham, DA - 36.518Wilcox, L - 41.14, 56.309, 56.316Wilder, JD - 16.422Wilimzig, C - 21.14Wilken, P - 21.23Wilkins, A - 63.420Wilkinson, F - 23.528, 26.427Willemsen, P - 56.325Willenbockel, V - 36.515, 36.516Williams McGugin, R - 16.506Williams, C - 26.450Williams, M - 21.16, 23.545, 24.24,53.307, 61.24Williamson, CA - 16.434Williamson, DK - 33.426Willis, M - 53.546Wilmer, J - 24.24, 56.550Wilson, AD - 53.517, 53.518Wilson, CE - 53.546Wilson, D - 56.527, 63.439Wilson, H - 23.528Wilson, HR - 26.427, 36.524Wilson, K - 43.327Wilson, N - 16.555Winawer, J - 53.442Windeatt, S - 26.429Wing, E - 36.440Winkler, A - 23.412Winter, D - 22.13Wismeijer, D - 36.506Witt, J - 53.521, 56.450Witt, JK - 16.405Witthoft, N - 53.442Wokke, ME - 33.409Wolf, J - 53.552Wolf, K - 53.506Wolf, TR - 16.425Wolfe, B - 56.510Wolfe, J - 23.541, 23.542, 26.539,33.531, 35.22, 56.409Wolfe, JM - 33.454, 33.532Wolfson, SS - 35.11Wolk, D - 43.517Won, B - 33.515Wong, A - 16.427, 16.427, 16.508Wong, AC - 16.506Wong, E - 63.460Wong, J - 36.442Wong, YK - 36.402Woodman, G - 16.547, 26.516,53.307, 63.428Woodman, GF - 53.416Woods, AJ - 43.555Woods, D - 33.414Wray, J - 56.301Wright, CE - 23.412Wright, J - 26.552Wu, B - 36.512Wu, C - 26.437Wu, D - 61.16Wu, R - 16.503, 53.507Wüstenberg, T - 36.544Wyatte, D - 53.528XXiao, B - 31.13Xiao, J - 25.21Xing, D - 36.303, 36.304Xu, J - 23.303Xu, JP - 36.403Xu, M - 16.445Xu, X - 33.502Xu, Y - 23.447, 32.26, 53.418, 63.447Xu, Z - 63.413362 <strong>Vision</strong> <strong>Sciences</strong> <strong>Society</strong>


VSS 2010 AbstractsAuthor IndexXuan, Y - 33.527, 53.304YYagi, A - 33.551, 63.423Yamaguchi, MK - 16.501, 16.518,16.519, 16.520, 36.509, 53.403,53.501Yamamoto, N - 33.551Yamanashi Leib, A - 56.556Yamashita, W - 16.501Yamazaki, Y - 53.501Yan, X - 23.434, 43.310Yang, A - 16.432Yang, B - 56.404Yang, C - 53.303Yang, E - 23.519, 23.526Yang, H - 33.408Yang, J - 53.403Yang, Q - 22.22Yang, Z - 23.303, 53.523, 53.524Yano, N - 63.412Yantis, S - 63.432Yao, R - 23.540, 36.514Yao-N’dre, M - 43.314Yashar, A - 26.423Yau, J - 52.15Yazdanbakhsh, A - 26.431, 36.305,36.427, 56.328Yeh, C - 36.303, 36.304Yeh, S - 36.435, 43.531, 43.547,53.432, 63.438Yeh, Y - 23.552, 53.303Yehezkel, O - 26.548, 26.550, 53.323Yeshurun, Y - 33.311, 43.414Yin, L - 33.513Yokosawa, K - 23.554, 43.546,53.438, 63.412Yokoyama, T - 56.541Yonas, A - 16.516, 32.12, 36.509,53.510, 56.325Yoo, H - 53.520, 53.522, 56.312Yoon, J - 16.526, 52.14Yoon, T - 33.525Yoonessi, A - 31.17, 43.450Yoshida, M - 21.17, 43.401Yoshida, T - 53.425Yoshioka, Y - 26.453Yotsumoto, Y - 26.412, 52.11Young, AW - 56.546Yovel, G - 24.26, 36.525, 56.534Yu, C - 23.320, 26.425, 26.426,53.325, 56.439Yu, D - 33.301, 33.302, 33.303, 62.11Yu, H - 62.23Yuan, J - 16.445Yue, X - 36.519, 53.544Yuille, A - 51.26Yuksel-Sokmen, O - 16.524ZYund, EW - 33.414Zachariou, V - 36.547Zacher, JE - 53.424Zacks, J - 26.532Zadra, J - 62.12Zaidi, Q - 16.448, 31.17, 36.503,62.16, 63.457Zalevsky, Z - 26.550Zanker, J - 23.326Zanker, JM - 43.512Zannoli, M - 16.452Zarei, A - 43.302Zdravković, S - 36.429Zeiner, KM - 36.542Zelinsky, G - 26.525, 33.455, 43.404Zhang, B - 16.445Zhang, D - 16.440Zhang, G - 53.325Zhang, H - 33.416, 33.448, 43.308,53.304Zhang, J - 26.426, 55.22Zhang, K - 33.433Zhang, M - 36.439Zhang, P - 23.311, 23.513, 55.21Zhang, S - S5, 23.304Zhang, T - 26.425zhang, w - 16.545Zhang, X - 33.513Zhang, Y - 36.439Zhang, Z - 41.17, 56.502Zhao, L - 33.554Zhao, M - 23.315Zhaoping, L - 26.523, 26.542, 36.541Zheng, X - 43.518Zhou, J - 16.428, 41.16Zhou, L - 26.542, 56.446Zhou, X - 33.529Zhou, Y - 16.428, 16.429, 41.16Zhu, D - 32.21Zhuang, X - 23.503Ziemek, TR - 43.548Ziesche, A - 56.507Zirnsak, M - 63.444Zlotnik, A - 26.550Zohary, E - 56.514Zopf, R - 61.24Zosh, J - 16.553Zottoli, T - 16.524Zucker, S - 56.440<strong>Vision</strong> <strong>Sciences</strong> <strong>Society</strong>363

