Author Rolls, Edmund T.

Title Computational neuroscience of vision / Edmund T. Rolls and Gustavo Deco
Published Oxford : Oxford University Press, 2002

Description 1 online resource (xviii, 569 pages, 2 unnumbered pages of plates) : illustrations (some color)
Contents 1.2 Neurons 2 -- 1.3 Neurons in a network 2 -- 1.4 Synaptic modification 4 -- 1.5 Long-Term Potentiation and Long-Term Depression 7 -- 1.6 Distributed representations 11 -- 1.6.2 Advantages of different types of coding 12 -- 1.7 Neuronal network approaches versus connectionism 13 -- 1.8 Introduction to three neuronal network architectures 14 -- 1.9 Systems-level analysis of brain function 16 -- 1.10 The fine structure of the cerebral neocortex 21 -- 1.10.1 The fine structure and connectivity of the neocortex 21 -- 1.10.2 Excitatory cells and connections 21 -- 1.10.3 Inhibitory cells and connections 23 -- 1.10.4 Quantitative aspects of cortical architecture 25 -- 1.10.5 Functional pathways through the cortical layers 27 -- 1.10.6 The scale of lateral excitatory and inhibitory effects, and the concept of modules 29 -- 1.11 Backprojections in the cortex 30 -- 1.11.1 Architecture 30 -- 1.11.2 Learning 31 -- 1.11.3 Recall 33 -- 1.11.4 Semantic priming 34 -- 1.11.5 Attention 34 -- 1.11.6 Autoassociative storage, and constraint satisfaction 34
2 The primary visual cortex 36 -- 2.2 Retina and lateral geniculate nuclei 37 -- 2.3 Striate cortex: Area V1 43 -- 2.3.1 Classification of V1 neurons 43 -- 2.3.2 Organization of the striate cortex 45 -- 2.3.3 Visual streams within the striate cortex 48 -- 2.4 Computational processes that give rise to V1 simple cells 49 -- 2.4.1 Linsker's method: Information maximization 50 -- 2.4.2 Olshausen and Field's method: Sparseness maximization 53 -- 2.5 The computational role of V1 for form processing 55 -- 2.6 Backprojections to the lateral geniculate nucleus 55
3 Extrastriate visual areas 57 -- 3.2 Visual pathways in extrastriate cortical areas 57 -- 3.3 Colour processing 61 -- 3.3.1 Trichromacy theory 61 -- 3.3.2 Colour opponency, and colour contrast: Opponent cells 61 -- 3.4 Motion and depth processing 65 -- 3.4.1 The motion pathway 65 -- 3.4.2 Depth perception 67
4 The parietal cortex 70 -- 4.2 Spatial processing in the parietal cortex 70 -- 4.2.1 Area LIP 71 -- 4.2.2 Area VIP 73 -- 4.2.3 Area MST 74 -- 4.2.4 Area 7a 74 -- 4.3 The neuropsychology of the parietal lobe 75 -- 4.3.1 Unilateral neglect 75 -- 4.3.2 Balint's syndrome 77 -- 4.3.3 Gerstmann's syndrome 79
5 Inferior temporal cortical visual areas 81 -- 5.2 Neuronal responses in different areas 81 -- 5.3 The selectivity of one population of neurons for faces 83 -- 5.4 Combinations of face features 84 -- 5.5 Distributed encoding of object and face identity 84 -- 5.5.1 Distributed representations evident in the firing rate distributions 85 -- 5.5.2 The representation of information in the responses of single neurons to a set of stimuli 90 -- 5.5.3 The representation of information in the responses of a population of inferior temporal visual cortex neurons 94 -- 5.5.4 Advantages for brain processing of the distributed representation of objects and faces 98 -- 5.5.5 Should one neuron be as discriminative as the whole organism, in object encoding systems? 103 -- 5.5.6 Temporal encoding in the spike train of a single neuron 105 -- 5.5.7 Temporal synchronization of the responses of different cortical neurons 108 -- 5.5.8 Conclusions on cortical encoding 111 -- 5.6 Invariance in the neuronal representation of stimuli 112 -- 5.6.1 Size and spatial frequency invariance 112 -- 5.6.2 Translation (shift) invariance 113 -- 5.6.3 Reduced translation invariance in natural scenes 113 -- 5.6.4 A view-independent representation of objects and faces 115 -- 5.7 Face identification and face expression systems 118 -- 5.8 Learning in the inferior temporal cortex 120 -- 5.9 Cortical processing speed 122
6 Visual attentional mechanisms 126 -- 6.2 The classical view 126 -- 6.2.1 The spotlight metaphor and feature integration theory 126 -- 6.2.2 Computational models of visual attention 129 -- 6.3 Biased competition -- single cell studies 132 -- 6.3.1 Neurophysiology of attention 133 -- 6.3.2 The role of competition 135 -- 6.3.3 Evidence of attentional bias 136 -- 6.3.4 Non-spatial attention 136 -- 6.3.5 High-resolution buffer hypothesis 139 -- 6.4 Biased competition -- fMRI 140 -- 6.4.1 Neuroimaging of attention 140 -- 6.4.2 Attentional effects in the absence of visual stimulation 141 -- 6.5 The computational role of top-down feedback connections 142
7 Neural network models 145 -- 7.2 Pattern association memory 145 -- 7.2.1 Architecture and operation 146 -- 7.2.2 The vector interpretation 149 -- 7.2.3 Properties 150 -- 7.2.4 Prototype extraction, extraction of central tendency, and noise reduction 151 -- 7.2.5 Speed 151 -- 7.2.6 Local learning rule 152 -- 7.2.7 Implications of different types of coding for storage in pattern associators 158 -- 7.3 Autoassociation memory 159 -- 7.3.1 Architecture and operation 160 -- 7.3.2 Introduction to the analysis of the operation of autoassociation networks 161 -- 7.3.3 Properties 163 -- 7.3.4 Use of autoassociation networks in the brain 170 -- 7.4 Competitive networks, including self-organizing maps 171 -- 7.4.1 Function 171 -- 7.4.2 Architecture and algorithm 171 -- 7.4.3 Properties 173 -- 7.4.4 Utility of competitive networks in information processing by the brain 178 -- 7.4.5 Guidance of competitive learning 180 -- 7.4.6 Topographic map formation 182 -- 7.4.7 Radial Basis Function networks 187 -- 7.4.8 Further details of the algorithms used in competitive networks 188 -- 7.5 Continuous attractor networks 192 -- 7.5.2 The generic model of a continuous attractor network 195 -- 7.5.3 Learning the synaptic strengths between the neurons that implement a continuous attractor network 196 -- 7.5.4 The capacity of a continuous attractor network 198 -- 7.5.5 Continuous attractor models: moving the activity packet of neuronal activity 198 -- 7.5.6 Stabilization of the activity packet within the continuous attractor network when the agent is stationary 202 -- 7.5.7 Continuous attractor networks in two or more dimensions 203 -- 7.5.8 Mixed continuous and discrete attractor networks 203 -- 7.6 Network dynamics: the integrate-and-fire approach 204 -- 7.6.1 From discrete to continuous time 204 -- 7.6.2 Continuous dynamics with discontinuities 205 -- 7.6.3 Conductance dynamics for the input current 207 -- 7.6.4 The speed of processing of one-layer attractor networks with integrate-and-fire neurons 209 -- 7.6.5 The speed of processing of a four-layer hierarchical network with integrate-and-fire attractor dynamics in each layer 212 -- 7.6.6 Spike response model 215 -- 7.7 Network dynamics: introduction to the mean field approach 216 -- 7.8 Mean-field based neurodynamics 218 -- 7.8.1 Population activity 218 -- 7.8.2 A basic computational module based on biased competition 220 -- 7.8.3 Multimodular neurodynamical architectures 221 -- 7.9 Interacting attractor networks 224 -- 7.10 Error correction networks 228 -- 7.10.1 Architecture and general description 229 -- 7.10.2 Generic algorithm (for a one-layer network taught by error correction) 229 -- 7.10.3 Capability and limitations of single-layer error-correcting networks 230 -- 7.10.4 Properties 234 -- 7.11 Error backpropagation multilayer networks 236 -- 7.11.2 Architecture and algorithm 237 -- 7.11.3 Properties of multilayer networks trained by error backpropagation 238 -- 7.12 Biologically plausible networks 239 -- 7.13 Reinforcement learning 240 -- 7.14 Contrastive Hebbian learning: the Boltzmann machine 241
8 Models of invariant object recognition 243 -- 8.2 Approaches to invariant object recognition 244 -- 8.2.1 Feature spaces 244 -- 8.2.2 Structural descriptions and syntactic pattern recognition 245 -- 8.2.3 Template matching and the alignment approach 247 -- 8.2.4 Invertible networks that can reconstruct their inputs 248 -- 8.2.5 Feature hierarchies 249 -- 8.3 Hypotheses about object recognition mechanisms 253 -- 8.4 Computational issues in feature hierarchies 257 -- 8.4.1 The architecture of VisNet 258 -- 8.4.2 Initial experiments with VisNet 266 -- 8.4.3 The optimal parameters for the temporal trace used in the learning rule 274 -- 8.4.4 Different forms of the trace learning rule, and their relation to error correction and temporal difference learning 275 -- 8.4.5 The issue of feature binding, and a solution 284 -- 8.4.6 Operation in a cluttered environment 295 -- 8.4.7 Learning 3D transforms 301 -- 8.4.8 Capacity of the architecture, and incorporation of a trace rule into a recurrent architecture with object attractors 307 -- 8.4.9 Vision in natural scenes -- effects of background versus attention 313 -- 8.5 Synchronization and syntactic binding 319 -- 8.6 Further approaches to invariant object recognition 320 -- 8.7 Processes involved in object identification 321
9 The cortical neurodynamics of visual attention -- a model 323 -- 9.2 Physiological constraints 324 -- 9.2.1 The dorsal and ventral paths of the visual cortex 324 -- 9.2.2 The biased competition hypothesis 326 -- 9.2.3 Neuronal receptive fields 327 -- 9.3 Architecture of the model 328 -- 9.3.1 Overall architecture of the model 328 -- 9.3.2 Formal description of the model 331 -- 9.3.3 Performance measures 336 -- 9.4 Simulations of basic experimental findings 336 -- 9.4.1 Simulations of single-cell experiments 337 -- 9.4.2 Simulations of fMRI experiments 339 -- 9.5 Object recognition and spatial search 341 -- 9.5.1 Dynamics of spatial attention and object recognition 343 -- 9.5.2 Dynamics of object attention and visual search 345
Summary This new book from Edmund Rolls presents a unique approach to understanding the complex subject of vision. It will be useful to psychologists interested in vision and attentional processes, neuroscientists, and vision scientists.
Bibliography Includes bibliographical references and index
Notes Print version record
Subject Vision.
Computational neuroscience.
Neuropsychology.
Neurophysiology.
Neurosciences.
Computational Biology
Models, Neurological
Visual Perception -- physiology
Computer Simulation
Vision, Ocular
sight (sense)
simulation.
Form Electronic book
Author Deco, Gustavo
ISBN 9780191689277
0191689270