Mean-field analysis of orientation selectivity in inhibition-dominated networks of spiking neurons
© Sadeh et al.; licensee Springer. 2014
Received: 13 March 2014
Accepted: 14 March 2014
Published: 19 March 2014
Mechanisms underlying the emergence of orientation selectivity in the primary visual cortex are highly debated. Here we study the contribution of inhibition-dominated random recurrent networks to orientation selectivity, and more generally to sensory processing. By simulating and analyzing large-scale networks of spiking neurons, we investigate tuning amplification and contrast invariance of orientation selectivity in these networks. In particular, we show how selective attenuation of the common mode and amplification of the modulation component take place in these networks. Selective attenuation of the baseline, which is governed by the exceptional eigenvalue of the connectivity matrix, removes the unspecific, redundant signal component and ensures the invariance of selectivity across different contrasts. Selective amplification of modulation, which is governed by the operating regime of the network and depends on the strength of coupling, amplifies the informative signal component and thus increases the signal-to-noise ratio. Here, we perform a mean-field analysis which accounts for this process.
Keywords: Orientation selectivity, Contrast invariance, Inhibition-dominated, Mean-field analysis, Common-mode attenuation, Tuning amplification, Operating regime
Neurons in sensory cortices of mammals often respond selectively to certain features of a stimulus. A well-known example in the visual system is the orientation of an elongated bar (Hubel and Wiesel 1962, 1968). This specificity of neuronal responses is believed to be a fundamental building block of stimulus processing and perception in the mammalian brain. Although well studied for more than half a century now, it is not fully clear which neuronal mechanisms generate this selectivity. In particular, it is not clear which kind of network structure is necessary for its establishment, and whether different system architectures are employed by different species.
In the center of the current debate is the role of feedforward vs. recurrent connectivity in the initial establishment of selectivity, as well as its further properties like contrast invariance (Alitto and Usrey 2004; Niell and Stryker 2008; Sclar and Freeman 1982) and tuning sharpening (Ferster and Miller 2000; Sompolinsky and Shapley 1997). Models relying on a purely feedforward structure, as it was originally suggested by Hubel and Wiesel (1962), cannot explain why the orientation tuning of both neuronal spiking and membrane potentials is invariant with respect to stimulus contrast (Anderson et al. 2000) (see Finn et al. (2007), however, for a more elaborate feedforward model to account for contrast invariance). On the other hand, the prevailing recurrent network models for the intra-cortical origin of contrast-invariant orientation selectivity (Ben-Yishai et al. 1995; Somers et al. 1995) cannot explain how highly selective neuronal responses emerge in mice (Niell and Stryker 2008), as they rely on feature specific connectivity which rodents seem to lack, at least at the onset of eye opening (Ko et al. 2013)a. Also, it is not clear whether the orderly arrangement of preferred features on the cortical surface, which has been described in most primates and carnivores (Blasdel and Salama 1986; Bonhoeffer and Grinvald 1991; Ohki et al. 2006; Ts’o et al. 1990), is necessary for the emergence of feature selectivity, or serves any function at all (Horton and Adams 2005). Different answers to these questions would have radically different implications for understanding higher brain functions, and how to study them.
Here we investigate the emergence of contrast invariant orientation selectivity in large-scale networks of spiking neurons with dominant inhibition. The biological motivation for studying such networks comes from experimental studies showing a functional dominance of inhibition (Haider et al. 2013; Rudolph et al. 2007). The dominance of inhibition is probably a consequence of the dense, local pattern of inhibitory connectivity, which has been reported for different cortices (Fino and Yuste 2011; Hofer et al. 2011; Packer and Yuste 2011). From a theoretical point of view, such networks are well studied (Brunel 2000; van Vreeswijk and Sompolinsky 1996), and they have been shown to exhibit asynchronous-irregular (AI) activity states that in many respects resemble the spiking activity recorded in the mammalian neocortex.
We first show that highly selective tuning curves, which are contrast invariant, can be obtained in these networks, even in the absence of any feature-specific connectivity and any spatial network structure. We then analyze these networks by proposing a simplified mean-field description, which predicts the main properties of output orientation selectivity in the networks. The analysis identifies the mechanisms responsible for tuning amplification and contrast invariance. We show that the results hold for a wide range of parameters, and for networks operating in different recurrent regimes.
Contrast invariant orientation selectivity in random networks
Table of notations and parameters
- Membrane time constant
- Synaptic time constant
- g — inhibition dominance ratio
- N — number of neurons
- w_ij — synaptic weight from neuron j to neuron i
- Preferred orientation (PO)
- Baseline firing rate
- 6 000 ms
- OSI — orientation selectivity index
- Orientation selectivity vector
- SDI — scatter degree index
- γ_B (γ_M) — baseline (modulation) gain
- ζ — linearized neuronal gain
- T_i(θ)
If neurons are sorted according to their preferred orientation, the differences in the firing rates become visible (Figure 1B). Neurons with a preferred orientation closer to the orientation of the stimulus on average respond with higher rates, while the neurons closer to the orthogonal orientation are mostly silent. The cosine tuning of the input is reflected by the cosine tuning of firing rates across the population (Figure 1B, right).
To directly verify this invariance, we compare the OSI of all neurons at different contrasts. Plotting the OSIs at the medium contrast (MC) vs. the lowest contrast (LC), and at the highest contrast (HC) vs. the medium contrast, indeed reveals that the majority of neurons show a remarkable robustness of their tuning curves upon a change in contrast (Figure 3C).
The high selectivity in the network emerges despite the fact that each neuron receives input from a large pool of neurons with heterogeneous selectivity and different preferred orientations (Figure 3D). In fact, the PO distribution of presynaptic neurons is essentially uniform (Figure 3D, top histogram), and the presynaptic OSI distribution (Figure 3D, right histogram) is very similar to the OSI distribution of the whole population (Figure 3A). Therefore, the output response is highly selective, despite the fact that the input is quite heterogeneous, as reported in experiments (Chen et al. 2011; Jia et al. 2010; Varga et al. 2011).
This result is similar to a recent study of orientation selectivity in rodents (Hansel and van Vreeswijk 2012), in that both show that random networks are capable of generating selective output responses. In the following, we provide a detailed mathematical analysis of the mechanisms involved in this process. Our mean-field analysis indeed enables us to compute the mean output responses of the networks quite precisely.
A reduced linear rate model of the network
The emergence of different processing pathways is a consequence of the linear recurrent dynamics: Any activity vector (describing either input to the network, or output from the network) can be decomposed in terms of a sum s = s_B + s_M. The part s_B is a pure baseline vector, representing the mean response rate of each neuron across all stimuli. The remaining part s_M is a pure modulation vector, with zero baseline. If the input is processed by a recurrent network that operates linearly on the input according to r = A s for some effective matrix A (cf. Eq. (3)), it is evident that r_B = A s_B and r_M = A s_M (see Section Theoretical analysis for further explanation). In particular, there is no cross-talk in the processing of baseline, s_B, and modulation, s_M, whatsoever. The independent, non-interfering processing of the baseline and the modulation component of the input corresponds exactly to the two separate pathways depicted in Figure 4B.
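The absence of cross-talk follows from linearity alone: averaging over stimuli commutes with any linear operator. A minimal numerical sketch (all values illustrative; the matrix and the cosine-tuned inputs below are stand-ins, not the parameters of the simulated networks):

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_theta = 200, 8
A = rng.normal(size=(N, N)) / np.sqrt(N)       # arbitrary effective linear operator

# Input: uniform baseline plus a modulation that averages to zero over stimuli
s_B = np.full(N, 5.0)
thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
pref = rng.uniform(0, np.pi, N)                # random preferred orientations
s_M = 2.0 * np.cos(2 * (thetas[:, None] - pref[None, :]))   # zero mean over theta

r = (s_B + s_M) @ A.T                          # output for every stimulus
r_B = r.mean(axis=0)                           # output baseline (mean over stimuli)

assert np.allclose(r_B, A @ s_B)               # baseline maps to baseline only
assert np.allclose(r - r_B, s_M @ A.T)         # modulation maps to modulation only
```

Whatever the entries of A, the output baseline depends only on the input baseline, and the output modulation only on the input modulation.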
For the networks considered in this work, mean and variance of all inputs are identical, and all neurons in the recurrent network have the same number of excitatory and inhibitory recurrent afferents. Therefore, the entries of the baseline vector of the input are all identical, and it is mapped to a baseline output firing rate r_B, which is again a uniform vector. This means that uniform vectors are eigenvectors of the matrices W and A, respectively. The eigenvalue belonging to these eigenvectors is exactly the feedback gain β_B of the baseline. In networks with dominant inhibition, β_B is negative. As a consequence, the corresponding closed-loop gain γ_B is a positive number, smaller than 1 (Figure 4C). The effective enhancement of feature specificity mediated by the recurrent operation of the network is the result of different closed-loop gains for modulation, γ_M, and baseline, γ_B. For the example network of Figures 1, 2 and 3, this leads to comparable strengths of baseline and modulation in the output tuning curves (Figure 4D), despite the much weaker modulation in the input.
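A small sketch of the baseline pathway (illustrative parameters; the closed-loop relation γ_B = 1/(1 − β_B) is the standard steady-state gain of a scalar linear feedback loop, used here as an assumption consistent with the description above). With a fixed number of excitatory and inhibitory inputs per neuron, the uniform vector is an exact eigenvector:

```python
import numpy as np

rng = np.random.default_rng(1)
N, N_exc = 1000, 800
C_E, C_I = 80, 20                  # fixed excitatory/inhibitory in-degrees
J, g = 0.1, 8.0                    # EPSP amplitude and inhibition dominance ratio

# Every neuron receives exactly C_E excitatory and C_I inhibitory afferents
W = np.zeros((N, N))
for i in range(N):
    W[i, rng.choice(N_exc, C_E, replace=False)] = J
    W[i, N_exc + rng.choice(N - N_exc, C_I, replace=False)] = -g * J

u = np.ones(N)                      # uniform (baseline) vector
beta_B = J * (C_E - g * C_I)        # common row sum = exceptional eigenvalue
assert np.allclose(W @ u, beta_B * u)        # uniform vector is an eigenvector

gamma_B = 1.0 / (1.0 - beta_B)      # closed-loop gain of the baseline pathway
assert beta_B < 0 and 0.0 < gamma_B < 1.0    # dominant inhibition attenuates it
```

For these values β_B = −8, so the baseline is attenuated by γ_B = 1/9; the more negative β_B, the stronger the common-mode suppression.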
To see how these gains change with the strength of recurrent coupling, we fixed the network connectivity and only changed the post-synaptic amplitudes in the network. We then computed the mean baseline and modulation gains in each network, corresponding to the cross marks in Figure 4D. Note that the gains are now computed from individual tuning curves, r_i(θ). For each output tuning curve, the baseline and modulation components are computed as the zeroth and second Fourier components, respectively, and then averaged across the population (see Methods for details). For the cosine tuning we use here, these gains equal the population gains described before (for rate vectors over the population).
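The per-neuron decomposition can be sketched as follows (`baseline_and_modulation` is a hypothetical helper, not the authors' code; it assumes rates sampled at equidistant orientations in [0, π)):

```python
import numpy as np

def baseline_and_modulation(rates, thetas):
    """Zeroth and second Fourier components of a tuning curve: the baseline
    and the modulation depth (orientation has period pi, hence 2*theta)."""
    F0 = rates.mean()
    F2 = np.abs(np.sum(rates * np.exp(-2j * thetas))) * 2 / len(thetas)
    return F0, F2

thetas = np.linspace(0, np.pi, 8, endpoint=False)
rates = 10.0 + 4.0 * np.cos(2 * (thetas - 0.3))    # cosine tuning curve
F0, F2 = baseline_and_modulation(rates, thetas)

# For a pure cosine tuning curve the two components are recovered exactly
assert np.isclose(F0, 10.0) and np.isclose(F2, 4.0)
```

The baseline and modulation gains then follow as the ratios of these components between output and input tuning curves.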
Stability of the network
A simplified mean-field analysis of the network
To compute the gains for baseline and modulation of inputs, respectively, we employ a simplified mean-field approximation, which considers the corresponding average gains. For that, we need to compute the mean baseline and modulation rate of output tuning curves in the network.
Next, we compute the mean modulation component of output tuning curves, r_M. Here we make an approximation: We neglect the modulation of other neurons in the network and consider only the modulation of the input to one neuron (see Section Modulation of the firing rate and of the membrane potential). This is equivalent to assuming ‘perfect balance’ in terms of modulation, where all the modulation components of recurrent inputs from the network cancel each other perfectly (β_M = 0), such that only the feedforward modulation remains.
Based on this simplification, the mean modulation of the response, r_M, is already well predicted (Figure 8B). The residual small discrepancy compared to numerical simulations, which is most pronounced for intermediate recurrence and high contrast, should be accounted for by including network interactions (Section Mixing of preferred features in random recurrent networks) that amplify the modulation, and spike correlations that are ignored in the simplified treatment presented here.
The non-interference property of baseline and modulation can also be directly demonstrated from the weight matrix. The fact that the baseline input does not have any component along the modulation vectors is clear from the eigenvector of W that corresponds to the exceptional eigenvalue. To show the opposite, namely that an input modulation vector induces none, or only negligible, baseline in the output response, we performed explicit numerical simulations of the product W s_M. The result is shown in Figure 9B, which demonstrates that the expected value of W s_M (over orientation) is exactly zero, therefore not introducing any baseline component, as we discussed above.
Therefore, baseline and modulation are processed separately and independently, with no cross-talk involved, provided the network acts linearly on its inputs. In contrast, we currently cannot mathematically justify the assumption of perfect balance. The reason is that the modulation vectors, unlike the baseline vector, are not eigenvectors of the weight matrix, W. As a result, it is not justified to replace W in the product W s_M with a scalar value β_M. Treating the problem more rigorously could involve expanding the modulation vector in terms of the eigenvectors corresponding to the bulk eigenvalues of W (Figure 4C), and obtaining the gain accordingly. This gain would not, in general, be a single scalar value, nor would it be exactly zero, as we have assumed here. We come back to a more precise treatment of the problem in Section Linear tuning in recurrent networks.
Tuning of recurrent inputs
The assumption of perfect balance of modulation is the first-order approximation we make to obtain average gains of the network. Here we numerically check how far this assumption is from the result of our simulations. To answer this question, we investigate tuning of different components of the input to neurons in a network. We reconstruct the input from excitatory and inhibitory presynaptic sources by replacing each spike with the synaptic kernel (alpha function) and adding all the contributions to obtain a shot-noise signal. We then compute the mean value of this signal as the mean presynaptic excitation and inhibition, respectively.
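This reconstruction can be sketched as follows (`alpha_kernel` and `shot_noise` are hypothetical helpers with illustrative parameter values; the alpha function here is peak-normalized, which need not match the normalization used in the simulations):

```python
import numpy as np

def alpha_kernel(tau_s, dt, n_tau=10):
    """Alpha function (t/tau_s) * exp(1 - t/tau_s), truncated at n_tau * tau_s."""
    t = np.arange(0.0, n_tau * tau_s, dt)
    return (t / tau_s) * np.exp(1.0 - t / tau_s)

def shot_noise(spike_times, tau_s=2.0, dt=0.1, T=1000.0):
    """Replace each spike by the synaptic kernel and sum all contributions."""
    train = np.zeros(int(T / dt))
    np.add.at(train, (np.asarray(spike_times) / dt).astype(int), 1.0)
    return np.convolve(train, alpha_kernel(tau_s, dt))[:len(train)]

# A Poisson-like train at 5 spikes/ms: the mean of the shot-noise signal
# approximates rate * kernel integral (= e * tau_s for this kernel)
rng = np.random.default_rng(2)
spikes = rng.uniform(0.0, 1000.0, 5000)
signal = shot_noise(spikes)
assert np.isclose(signal.mean(), 5.0 * np.e * 2.0, rtol=0.1)
```

Averaging such signals over the excitatory and inhibitory presynaptic pools separately then yields the mean presynaptic excitation and inhibition.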
Linear tuning in recurrent networks
The simplified mean-field analysis discussed above accounts for the average tuning curve and the mean selectivity in the network. It does not, however, account for the distribution of orientation selectivity across neurons. In this section, we resort to a linear analysis of modulation processing, in order to provide an approximate analytic treatment of this distribution.
For this linear analysis we need to make two additional assumptions. First, we assume that modulations in the network can be treated as small perturbations about the baseline, and that the dynamics can be linearized about this operating point. Note that this assumption implies that the contribution of nonlinearities like rectification is negligible. Second, we assume that the mixture of tunings is linear. This assumption is justified as we have used cosine tuning (i.e. linear tuning) in the inputs (see Methods for details). This allows us to represent each tuning curve as a 2D feature vector. Since the operation of the network on feature vectors is linear, the mixture of tunings reduces to the vectorial summation of the corresponding tuning vectors (see below, and Methods).
Here we are neglecting the contribution of higher-order recurrent inputs (terms of order W² and higher) in the processing.
The next step is to interpret the above equation for tuning curves. Since we have assumed cosine tuning for the input, we can represent each input tuning curve by a vector, S_i. Likewise, we represent output tuning curves by vectors R_i. We refer to these vectors as the Tuning Vectors. For a 2D feature like orientation selectivity we obtain 2D Tuning Vectors. The angle of this vector represents the input preferred orientation, and its length is a measure of orientation selectivity (it is indeed equal to the OSI before normalization). For notational convenience, we represent these 2D vectors by complex numbers. We identify real parts with x-directions, and imaginary parts with y-directions. Any 2D vector then corresponds to a complex number in a one-to-one fashion.
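As a sketch (the helper `tuning_vector` is hypothetical; it assumes cosine tuning sampled at equidistant orientations): the Tuning Vector is the complex second Fourier component of the tuning curve, so its angle is twice the preferred orientation and its magnitude is the modulation depth.

```python
import numpy as np

def tuning_vector(rates, thetas):
    """Complex Tuning Vector: angle/2 gives the preferred orientation,
    the magnitude measures selectivity (the OSI before normalization)."""
    return np.sum(rates * np.exp(2j * thetas)) * 2 / len(thetas)

thetas = np.linspace(0, np.pi, 16, endpoint=False)
po = 0.8                                       # preferred orientation (rad)
rates = 10.0 + 3.0 * np.cos(2 * (thetas - po))
z = tuning_vector(rates, thetas)

assert np.isclose(np.angle(z) / 2, po)         # angle encodes the PO
assert np.isclose(np.abs(z), 3.0)              # length encodes modulation depth
```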
Here R and S are vectors with complex elements, representing output and input tuning of all neurons in the network, respectively. This is now an equation which expresses the output Tuning Vectors in terms of a linear mixture of input Tuning Vectors. Similar to Eq. (7), we are neglecting higher-order terms here.
An individual output Tuning Vector, R_i, is then given by R_i = S_i + ζ Σ_j w_ij S_j: The output tuning of each neuron is a mixture of its input tuning and the weighted vectors of all presynaptic sources. For the specific example of the neuron in Figure 2A, all the contributions of presynaptic sources are shown in Figure 11B, for excitatory and inhibitory populations separately. Each small jump in the complex plane represents the contribution of a presynaptic Tuning Vector, the size of which is ζJ or −ζgJ for excitatory and inhibitory presynaptic sources, respectively. S_i is normalized to 1.
Although each presynaptic contribution is much smaller than the input (of order ζJ in this case), the resultant vector (dashed lines) can be large. In particular, the resultant vector of inhibition is comparable in length to the input vector for this specific example (Figure 11C, left). Since the angle of this vector is close to the input angle, it leads to an amplification of the resultant tuning, although it typically also changes the preferred orientation (Figure 11C, left). This explains why the OSI of this neuron (0.65) is larger than the mean OSI of the network (0.42, see Figures 2 and 3).
Not all the vectors resulting from recurrent contributions, however, are large, nor do they all have a preferred orientation similar to the input tuning. Indeed, as the connectivity is random and the initial preferred orientations are assigned randomly to each neuron, the preferred orientations of the resultant vectors are also random. This is shown in Figure 11C, right panel, where all the recurrent Tuning Vectors are explicitly computed (from the input Tuning Vectors and the connectivity) and plotted in the complex plane. The distribution of the vectors in this plane is a 2D Gaussian, according to the Central Limit Theorem. As a result, most Tuning Vectors have small magnitude, and only a few of greater magnitude contribute to the tail of the distribution.
The distribution of this length is plotted in Figure 11D, for different subpopulations of neurons. The peak of the distribution for excitatory neurons is at smaller values than for inhibitory neurons, and the overall length of the recurrent tuning is mainly determined by inhibition in the network. Knowing the standard deviation of the distributions of Tuning Vectors, one obtains the distribution of vector lengths (a Rayleigh distribution; see Methods, Eq. (62)). In this example, the standard deviations are σ_exc = 0.11, σ_inh = 0.44 and σ_tot = 0.45, for excitation, for inhibition, and for all recurrent neurons, respectively. The length of tuning vectors predicted by this result is plotted in Figure 11D (dashed lines). The length of the overall tuning, i.e. the recurrent Tuning Vectors vectorially combined with the input Tuning Vector (green), can also be computed (see Methods, Eq. (63)). Normalizing the input Tuning Vectors to length 1, the resulting distribution is plotted in the same figure (dashed purple line).
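These length statistics can be sketched numerically (σ = 0.45 is the example value quoted above; the mean σ√(π/2) of the Rayleigh distribution is a standard property of the length of an isotropic, zero-mean 2D Gaussian):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, n = 0.45, 100_000   # std of each component of the recurrent Tuning Vectors

# Recurrent Tuning Vectors: sums of many random presynaptic contributions,
# hence 2D Gaussian in the complex plane (Central Limit Theorem)
z = rng.normal(0.0, sigma, n) + 1j * rng.normal(0.0, sigma, n)

lengths = np.abs(z)        # Rayleigh-distributed lengths
assert np.isclose(lengths.mean(), sigma * np.sqrt(np.pi / 2), rtol=0.02)

# Combined tuning: input Tuning Vector (normalized to 1) plus recurrent vector
combined = np.abs(1.0 + z)
frac_amplified = (combined > 1.0).mean()   # enhanced selectivity
frac_attenuated = (combined < 1.0).mean()  # attenuated selectivity
```

Neurons whose recurrent vector roughly aligns with the input vector end up amplified, the rest attenuated, reproducing the broad combined distribution described above.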
From Figure 11D one can compare the strength of the input tuning (green line) with the tuning generated within the random network (black distribution). The mean length of the recurrent tuning is smaller than the feedforward one, and only few neurons show comparable tuning strength. The combined tuning strength (purple line), which mixes feedforward and recurrent components, has a broad distribution: many neurons show less tuning than their input (less than 1, attenuated), and many others have enhanced selectivity (greater than 1, amplified). In general, amplification happens when the randomly generated Tuning Vector within the network is roughly aligned with the initial Tuning Vector, and attenuation happens for recurrent Tuning Vectors in the opposite directionsb. A random recurrent network, thus, is in itself capable of attenuating and amplifying orientation selectivity. This is a mechanism in addition to the selective gains of baseline and modulation described before. It comes, however, at the expense of shifting the tuning curves of neurons away from their initial, feedforward preferred orientations.
Regimes of orientation selectivity
As the strength of recurrent couplings increases, the network becomes more effective in attenuating the baseline and selectively enhancing the modulation (Figure 13 and Figure 5). This leads to more stable OSI distributions across different contrasts (Figure 14B and 14C), and hence makes the selectivity more robust. Moreover, as a consequence of stronger recurrence in the network, output POs deviate more from their initial POs (Figure 12B and 12C), since the strength of the recurrent contributions (recurrent Tuning Vectors) has increased. This is summarized for all networks in Figure 12D by a scatter degree index (SDI), which quantifies the degree of PO deviation in each network.
Notably, the SDI does not increase linearly with recurrent connection strength. Rather, it saturates already for rather weak connections, approaching an asymptotic value. The reason for this behavior is that the contribution of recurrent Tuning Vectors to the final tuning depends on J and the linear gain, ζ, as described in the previous section (see Eq. (8)). Although J is monotonically increasing in Figure 12D, the effective strength of recurrent Tuning Vectors depends on the product Jζ. It appears, therefore, that the linear gain does not increase as J increases, or even decreases.
This trend is further supported by the shape and size of the tuning curves (Figure 13). For networks with strong recurrent coupling, the maximum firing rate of tuning curves decreases and the modulation component becomes smaller. Since the linear gains determine the embedded gain of neurons in the network in response to modulations, this trend also suggests that these gains are decreasing in the highly recurrent regime. This was indeed visible in the behavior of gains and firing rates for modulations, shown in Figures 5A and 8A, respectively.
Although increasing the recurrence stabilizes the OSI, it makes the neurons of the network less feature selective, if the recurrence is too large (Figure 14A-D). There is, therefore, a trade-off between orientation selectivity and contrast invariance in the networks. Increasing the recurrence makes the negative feedback in the baseline pathway stronger, making the divisive suppression of the baseline – and hence the contrast invariance – more effective. This comes, however, at the expense of a decrease in the gain in the modulation pathway, which makes the responses weaker and less selective. We have quantified this trade-off by dividing the mean OSI of individual tuning curves in a network by its standard deviation across different contrasts. The intermediate recurrent regime shows optimal behavior with large and stable OSI (Figure 14D), and it more or less coincides with the region of optimal tuning amplification (Figure 5).
Tuning and invariance of membrane potentials
If the recurrent compensation were not effective, a different behavior would emerge. In fact, if the recurrent coupling is very weak (Figure 16B, left), the free membrane potential is above threshold for almost all orientations. In such a case, the response to all orientations is in the mean-driven regime, which yields significant firing rates for the preferred as well as orthogonal orientations. As a result, the so-called iceberg effect broadens the tuning curves significantly upon increasing the contrast. Increasing the recurrent coupling shifts the mean membrane potential down and the network operates in the fluctuation-driven regime; this makes the tuning of the membrane potential and the resulting spiking activity more robust and contrast invariant (Figure 16B, center and right panels). Indeed, for the intermediate recurrent regime (Figure 16B, center) the tuning is perfectly contrast invariant.
Spiking activity in inhibition-dominated networks
The recurrent excitation in inhibition-dominated networks of the sort we are studying here is over-compensated by the surplus recurrent inhibition. Some residual inhibition remains as the net effect of recurrent interactions. If the recurrent coupling is strong enough, this net inhibition determines the effective threshold of neurons in the network. Therefore, it is not the threshold mechanism of neurons which cuts off the responses at non-preferred orientations. Balance of excitation and inhibition within the network, in contrast, governs the generation of spikes and, hence, attenuation and amplification of the baseline and modulation components, respectively.
Considering spike-triggered averages of the net excitatory and inhibitory input (Figure 17B) reveals that spike emission in the network is mainly governed by recurrent inhibition, rather than recurrent excitation. Therefore, in the inhibition-dominated regime, fluctuations of inhibition are the main determinant of spiking activity, in agreement with the results of experimental studies (Rudolph et al. 2007).
Modulation gain depends on the operating point of the network
The result of this prediction for different contrasts is plotted as solid lines in Figure 18A.
Note that this prediction is obtained under the assumption of a Gaussian distribution of input to all neurons. The prediction, therefore, fails to be exact if this assumption is violated. The distribution is, in fact, skewed as a result of correlations in the network (Kuhn et al. 2003). The deviation from a Gaussian distribution increases for higher recurrences, explaining the discrepancy of our predictions for highly recurrent regimes.
The mean membrane potential is crucial in determining the overall gain of the linearized dynamics. A very hyperpolarized membrane potential, far below threshold, reduces the chance of an input perturbation reaching the firing threshold and eliciting a spike. It therefore affects the gain of modulation, as shown in Figure 18B: The mean output modulation of tuning curves, γ_M, is indeed inversely related to the mean distance to threshold. This suggests that the embedded gain of neurons in the network in response to modulation is also inversely proportional to the distance to threshold. Although the mean modulation in the input is below the spike threshold of a single neuron, fluctuations are nevertheless capable of eliciting reasonable firing rates. The resulting linearization of the f-I curve, as shown in Figure 11A, is a result of the noise, σ_B, generated within the network due to the balance of excitation and inhibition.
for δ s = 100 spikes/s (Figure 19).
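The inverse relation between distance to threshold and modulation gain can be illustrated with a generic noise-smoothed transfer function (an error-function nonlinearity standing in for the linearized f-I curve; all parameter values are arbitrary, not those of the simulated neurons):

```python
import math

def transfer(mu, v_th=20.0, sigma=4.0, r_max=100.0):
    """Smoothed f-I curve: input noise of size sigma turns the hard threshold
    at v_th into a gradual, error-function-shaped transfer."""
    return r_max * 0.5 * (1.0 + math.erf((mu - v_th) / (math.sqrt(2.0) * sigma)))

def linearized_gain(mu, dmu=1e-3):
    """Embedded gain: slope of the transfer function at the operating point mu."""
    return (transfer(mu + dmu) - transfer(mu - dmu)) / (2.0 * dmu)

# The further the mean membrane potential sits below threshold,
# the smaller the gain for small modulations around the operating point
assert linearized_gain(18.0) > linearized_gain(14.0) > linearized_gain(8.0)
```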
In summary, the inhibitory feedback in a recurrent network contributes to orientation selectivity in crucial ways. First, it provides a negative feedback which offsets the baseline component of the input tuning curves and leads to divisive attenuation of the common mode (selective attenuation in the baseline pathway). Second, it sets the operating point of the network and determines the linearized, embedded gain, which in turn determines the modulation gain (selective amplification in the modulation pathway). Moreover, the feature selectivity generated by the recurrent network as a result of summing many inputs of random selectivity leads to either amplification or attenuation of the feedforward tuning (random summation of recurrent tuning). Since the contribution of each presynaptic modulation vector must be weighted according to the linearized gain, the bulk spectrum of the connectivity matrix W must also be weighted accordingly. The spectrum of the network with EPSP = 0.2 mV shown in Figure 6A, for instance, was obtained by normalizing J by Vth. The linear gain suggests now that J should be multiplied by ζ = 0.026, which is a factor of about 2 smaller than 1/Vth = 0.05. This leads to a corresponding decrease in the radius of the bulk. It implies that none of the modulation modes corresponding to the bulk are actually unstable at this operating point of the network. Indeed, if we plot the normalized radius, ρ_norm, for all the networks at different contrasts, ρ_norm never exceeds one (Figure 19, inset). This means that, although the coupling strength is monotonically increasing, the network dynamics stabilizes the spectrum in inhibition-dominated networks (see Pernice et al. (2012) for related observations).
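The rescaling of the bulk spectrum by the linearized gain can be checked numerically (a sketch with illustrative network parameters, using ζ = 0.026 from the text; the exceptional eigenvalue is identified as the largest modulus and excluded from the bulk):

```python
import numpy as np

rng = np.random.default_rng(4)
N, N_exc = 500, 400
C_E, C_I = 40, 10                 # fixed in-degrees (10% connectivity)
J, g = 0.2, 8.0                   # EPSP amplitude and inhibition dominance ratio
zeta = 0.026                      # linearized gain (value quoted in the text)

W = np.zeros((N, N))
for i in range(N):
    W[i, rng.choice(N_exc, C_E, replace=False)] = J
    W[i, N_exc + rng.choice(N - N_exc, C_I, replace=False)] = -g * J

eig = np.sort(np.abs(np.linalg.eigvals(zeta * W)))
outlier, bulk_radius = eig[-1], eig[-2]     # exceptional eigenvalue vs bulk edge

assert np.isclose(outlier, abs(zeta * J * (C_E - g * C_I)))   # common row sum
assert bulk_radius < 1.0          # all bulk (modulation) modes are stable here

# Doubling the gain doubles the bulk radius: the spectrum scales linearly
eig2 = np.sort(np.abs(np.linalg.eigvals(2 * zeta * W)))
assert np.isclose(eig2[-2], 2 * bulk_radius, rtol=1e-6)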
Using large-scale simulations and associated mean-field analysis of networks of spiking neurons, we have demonstrated how highly selective neuronal responses can be obtained in random networks without any spatial or feature specific structure. Our mathematical analysis pinpoints the mechanisms responsible for selective attenuation of the common mode and selective amplification of modulation, and predicts some essential properties of these networks.
A generic model of local circuitry
Here we discussed the specific case of orientation selectivity in the early visual system, as we were able to link our findings to an ample body of experimental literature. However, our model could have a much broader scope. It proposes a general mechanism for the emergence of strong feature selectivity, which could actually be at work in other sensory modalities as well. Our network model can thus be conceived as a generic model for the local cortical circuitry, which enhances feature selectivity and ensures contrast invariance of processing, without resorting to feature-specific structure or experience-dependent fine tuning.
Our analysis suggests that a randomly connected network with dominant inhibition is already capable of selectively removing the uninformative common mode of a stimulus that is represented by the network in a distributed fashion, while preserving the informative modulations in the response pattern induced by stimulation. This way, the tuned part gains salience, and the signal-to-noise ratio improves. The network also amplifies the tuned component (signal) by two mechanisms: First, by modulating the embedded gain through adjusting the operating point of the network, and second, by recurrently mixing presynaptic selectivities and thereby amplifying weakly tuned inputs in some neurons.
Regimes of orientation selectivity
The same mechanisms could also lead to attenuation of the signal in the network. First, increasing the recurrent coupling in the network increases the mean distance of the membrane potential to the firing threshold, which in turn decreases the modulation gain in the network. Second, the recurrent mixing of weak tunings in the network generates a distribution of selectivity, with attenuation in many neurons. As a result, this mechanism cannot increase the selectivity of a certain fraction of the neuronal population beyond the input selectivity, unless this is compromised by a decrease in the selectivity of another fraction of the population.
In addition, there is a trade-off between selectivity and contrast invariance of the neuronal responses. Increasing the degree of recurrence in the network makes the selectivity more invariant and its distribution more robust with regard to variations of contrast, but it decreases the overall gain of modulation. As a result, there is a region of intermediate recurrence in our networks, where tuning amplification is most pronounced, and orientation selectivity has the largest value with the lowest sensitivity to contrast.
Our computational study, therefore, suggests an intermediate regime of recurrence as the optimal regime of feature selectivity for early sensory processing. This is the state of the network which exhibits the stimulus driven properties of neurons observed in experiments (Hofer et al. 2011), while preserving other important features like strong and contrast invariant orientation tuning. In this regime, the feature selectivity of neurons would exhibit the least deviation from their input selectivities, essentially reflecting the tuning of the feedforward input. The role of the recurrent network at this stage would then be to enhance this selectivity, by performing operations like increasing the signal-to-noise ratio and contrast invariant gain control.
This scenario might best explain the state of the input layer L4, in which orientation selectivity first emerges in the cortex. The same is not necessarily true for more recurrent layers like L2/3 or L5, which are involved in later stages of sensory processing like learning, association and motor control. It is, therefore, plausible that different regimes of recurrence exist in different layers, which may be suited to perform different types of processing. One measure for the degree of recurrence in a network is the tuning of the total recurrent input. As the recurrent coupling increases in a network, the tuning of the recurrent input generated within the network increases as well, and the assumption of untuned total input becomes a questionable approximation.
There are indeed contradictory results reported in experimental studies on the tuning of input in rodents: Both untuned inhibition (Li et al. 2012a, b; Liu et al. 2011) and co-tuning of excitation and inhibition (Tan et al. 2011) have been reported by different laboratories. Our results show that even in the absence of significant tuning of the total input received from the recurrent network, another mechanism of selective attenuation and amplification can lead to strong selectivity and contrast invariance. This, however, does not exclude a random summation of selectivity within the recurrent network as a contributing mechanism. Indeed, in the first example network we investigated here, both mechanisms were at work. It is, therefore, plausible that both tuned and untuned components exist to some degree in such networks, but the exact mixture depends on the operating point and, specifically, on the degree of recurrence. This would therefore suggest that in more recurrent layers like L2/3 more neurons with strong input tuning should be recorded, while in the input layer L4 untuned inputs preponderate.
It should be noted, however, that our discussion here applies to recurrent excitation and inhibition. Tuned excitation and inhibition, when measured in terms of the total excitatory and inhibitory conductances in intracellular recordings, reflect the total excitatory and inhibitory input that a neuron receives. It is therefore possible that the tuning of the feedforward input dominates the measured tuning. Likewise, feedforward inhibition, mediated by disynaptic pathways, can have the same tuning as the feedforward excitatory input, as the latter drives it. The mechanisms discussed here, however, apply to recurrent excitation and inhibition, since they are a consequence of the dynamics of a network of synaptically connected neurons and, in particular, recruitment of feedback inhibition within the network.
Recurrent vs. feedforward inhibition
Recurrent inhibition in our networks selectively feeds the mean signal back and subtracts it from the tuning curves. The overall effect of this subtraction results in a divisive attenuation of the baseline. This untuned suppression has been experimentally demonstrated to play a crucial role in increasing the selectivity (Shapley et al. 2003; Xing et al. 2011). More specifically, it has been recently shown that in L4 of mouse visual cortex, it underlies sharpening of orientation selectivity (Li et al. 2012a,b). This is also consistent with the results of a recent study on the role of somatostatin expressing (SOM +) GABAergic neurons in orientation selectivity (Wilson et al. 2012) (but see Lee et al. (2012b)). The subtraction of the baseline, attributed to this specific subtype of inhibition by Wilson et al. (2012), effectively leads to the selective attenuation/division of the baseline, as described in the present article. Reduction of this inhibition would therefore lead to a constant increase in the baseline activity of the tuning curves, which has indeed been recently reported in Dlx1(-/-) mice with selective reduction of activity in dendrite targeting inhibitory interneurons (Mao et al. 2012).
This mechanism is in contrast to the role of feedforward inhibition by fast spiking, parvalbumin expressing (PV +) GABAergic interneurons. As opposed to SOM + neurons, which are more recurrent, these neurons are mainly driven by feedforward input (Adesnik et al. 2012). Unlike SOM + neurons, which are involved in dendritic computation and controlling the input, PV + neurons are better suited for controlling the output, as they innervate the peri-somatic regions (Di Cristo et al. 2004; Ma et al. 2010). Consistently, they are also more effective during the transient responses, as reflected in their activation pattern (Tan et al. 2008). In contrast, SOM + neurons are better suited for sustained activity.
These properties might then suggest that feedforward inhibition is primarily involved in gain control (Atallah et al. 2012), which uniformly rescales all components of the tuning curves. The attenuation of the baseline, therefore, comes at the expense of attenuating the modulation. It cannot be selective to the baseline, in contrast to the recurrent mechanism we have modeled here. Note that, for simplicity, we have not considered feedforward inhibition in our model. However, feedforward inhibition could easily be added to the model by mediating the same excitatory input to each neuron by an inhibitory neuron. Doing so would effectively lead to a change in the overall feedforward gain, provided that inhibition is not strong enough to result in rectification (compare with Lee et al. (2012b)).
If PV + neurons are strongly driven by feedforward input, and if the feedforward input is only slightly tuned, as we assumed here, the responses of PV + neurons should be only weakly modulated. In contrast, as the results of our simulation showed here, inhibitory neurons involved in recurrent computations can be highly selective, although they receive weakly modulated inputs. In agreement with this, SOM + inhibitory neurons involved in recurrent inhibition have been reported to be as selective as excitatory neurons, in contrast to PV + neurons with broader selectivity (Ma et al. 2010). However, it should be possible to make PV + interneurons more selective, by providing more recurrent inhibition to them. In fact, it has recently been reported that activating SOM + inhibitory neurons can unmask and enhance the selectivity of PV + cells by suppressing untuned input (Cottam et al. 2013).
Note that, as contrast invariance depends on the selective attenuation of the baseline, it should be the result of a recurrent mechanism. We therefore suggest that the constant increase of untuned inhibition that neurons receive upon increasing the contrast (Li et al. 2012b) should be a result of the recurrent, and not of the feedforward, inhibition. This may explain why dark-reared mice in this experiment, which lacked a broadening of PV + responses, still showed an aggregate untuned input from the network and hence highly selective responses (Li et al. 2012b). Although individual inhibitory neurons were on average highly selective in our networks, the emergent result of the interaction of excitation and inhibition led to an effectively untuned inhibition, which increases proportionally with contrast. This is again consistent with the results of Li et al. (2012b), who demonstrated that “blocking the broadening of output responses of individual inhibitory neurons does not block the broadening of the aggregate inhibitory input to excitatory neurons”. It is also consistent with the results of a previous report, demonstrating that “broad inhibitory tuning” of fast spiking cells is “not required for developmental sharpening of excitatory tuning” (Kuhlman et al. 2011). Based on these results, we therefore hypothesize that untuned inhibition might be an emergent property of an inhibition-dominated network, and not a feedforward consequence of broadly tuned fast spiking neurons.
Comparison with other models
Most existing recurrent theories of orientation selectivity consider the case of species like carnivores and primates, with a clustered organization of selectivity in orientation maps (Ben-Yishai et al. 1995; Somers et al. 1995). Consistent with the proximity of neurons with similar selectivity, these theories assume a feature specific connectivity of neurons. The Mexican hat profile of connectivity which they assume leads to a more broadly tuned inhibition, which suppresses the mean, and to a sharper tuning of excitatory input, which amplifies the modulation. Therefore, these models cannot be applied to the case of a salt-and-pepper structure, as found in rodents, with no apparent spatial or feature specific connectivity.
Even in species with orientation maps, there seem to be some issues with these models. First, they rely on – and predict – a sharpening mechanism of selectivity due to tuned recurrent excitation. However, the late (presumably recurrent) sharpening of selectivity, which these theories predict, has not been observed in experiments (Gillespie et al. 2001; Sharon and Grinvald 2002). Also, the orientation selectivity of neurons seems to be the same as that of their feedforward input, since the preferred orientation of neurons does not change with recurrent interactions (Gillespie et al. 2001). Rather than sharpening of tuning curves, a more plausible function of the recurrent network is increasing the modulation ratio, by suppressing the baseline (Sharon and Grinvald 2002).
This was indeed the main mechanism of orientation selectivity we described in our networks here. As it is based on essentially linear processing, our model predicts no sharpening of the tuning as a result of recurrent interactions, but only an increase in modulation depth, not affecting the tuning width (Sharon and Grinvald 2002). Sharpening of tuning curves would only be a consequence of the feedforward nonlinearity, reflected in a nonlinear transfer function of neurons. As our results do not depend on the power-law transfer function of single neurons, our model would also work if the operating regime of the cortex suggested a smaller exponent of the power-law (Xing et al. 2011). Moreover, as we demonstrated above, this mechanism does not have to be accompanied, on average, by a large shift between the input and output preferred orientations.
Another consequence of the sharpening theories is the emergence of strong pairwise correlations in the network (Seriés et al. 2004). This seems not to be consistent with the very low correlations reported in the neocortex (Ecker et al. 2010); moreover, it has recently been shown that highly selective neurons in the input layer of monkey V1 exhibit very low noise correlations (Hansen et al. 2012). This imposes an important constraint on recurrent models which need sharper tuning of excitatory input to the neurons as compared to inhibition, as this sharper tuning leads to higher noise correlations in the local network (see Figure 5 in Hansen et al. (2012)). Hence, it might be difficult for these models to simultaneously account for both sharp orientation selectivity and low pairwise correlations in the input layers.
The mechanism we discussed here, in contrast, neither needs nor predicts strong pairwise correlations in the network. In fact, our mean-field analysis was even based on the assumption of no correlations in the network. As the network operates in the AI state, the amount of information available to a linear read-out from our networks would therefore be several times higher than in sharpening theories, comparable to feedforward models (Seriés et al. 2004).
In comparison to feedforward models, however, our analysis suggests that contrast invariant tuning of both membrane potentials and spiking activity (Anderson et al. 2000) can robustly and reliably emerge through the action of a recurrent network. Contrast invariance is a critical property of feature selectivity, which ensures reliable and consistent feature detection for a wide range of different stimuli. Without a network mechanism of the sort described here, neurons would need a specific fine-tuning for each contrast, in order to be selective for the same feature. The network mechanism proposed here provides a generic mechanism to dynamically achieve contrast invariance, without the need for feature specific wiring, special correlation structure, power-law transfer function, contrast-dependent variability, shunting inhibition, synaptic depression, adaptation or learning. However, it remains to be experimentally verified whether intra-cortical recurrent connectivity is indeed necessary for contrast invariance, or whether feedforward mechanisms are enough to account for this phenomenon (Finn et al. 2007; Priebe and Ferster 2012; Sadagopan and Ferster 2012). A crucial experiment would be to test whether the tuning of the membrane potential is still contrast invariant if lateral interactions in the cortex are deactivated (Chung and Ferster 1998; Ferster et al. 1996; Kara et al. 2002).
There are, however, several issues which need to be further examined in future works.
Extending the scope of linear analysis
First, our analysis is mainly provided to compute the mean values of baseline and modulation gains in the network. It is therefore necessary to extend the analysis such that it accounts for the distribution of these gains. Also, the model assumes cosine tuning of inputs (linear tuning), and linear network operation (e.g. no rectification in the tuning curves). It would therefore be revealing to see if, and to which degree, the results of the linear analysis hold in the presence of nonlinearities reported to exist in the biological cortex (Anderson et al. 2000).
We used current-based LIF neurons with unconstrained membrane potentials in our simulations, since this gave us the opportunity to perform a theoretical analysis of the network dynamics. It is, however, necessary to test to which degree the results of our study change with a different neuron model. For instance, an alternative model like conductance-based LIF neurons would not allow the distance of the membrane potential to threshold to increase unboundedly. This was not the case here, as we did not impose any lower boundary condition on our current-based LIF neurons.
Such a difference may change the effective gain of neurons and, as a result, a different eigenvalue spectrum might be obtained. This, in turn, may change network dynamics and lead to a qualitatively different behavior of the network. It might also have consequences for the structure of correlations in the network, and may affect the AI state. Our preliminary results indicate that imposing a lower boundary condition on current-based LIF neurons can amplify correlations and synchrony in the network, to the extent that the network no longer operates in the AI state. Feature selectivity is still obtained and even enhanced in the network: tuning curves show a higher average OSI and higher maximum firing rates, with more rectification and reduced tuning widths as a consequence, while contrast invariance is preserved (not shown). Such a scenario should be analyzed in more detail in future studies.
It is also important to study the effect of different connectivity patterns in the network. Here, we have modeled the dominance of inhibition by increasing the relative strength of IPSPs. An alternative implementation is to increase the density of inhibitory axonal projections, i.e. the connection probability of inhibitory neurons. Dense patterns of connectivity have been reported for inhibition in different cortices (Fino and Yuste 2011; Hofer et al. 2011; Packer and Yuste 2011), and seem to be a common motif. Such a change in network connectivity might have consequences for sensory processing. For instance, it implies a decrease in temporal fluctuations of the local inhibitory population and, likewise, in the quenched noise of preferred orientations, which can, in turn, affect the tuning of inputs and the amplification or attenuation of orientation selectivity.
Also, the model and the analysis provided here should be extended to account for networks with spatial structure. It should be analytically studied how distance dependent connectivity, and in particular different connectivity profiles for excitation and inhibition, affect the results obtained here. It has been shown, for instance, that balanced networks can show topologically invariant statistics (Yger et al. 2011). It would therefore be interesting to see if the same analysis also applies to functional properties of these networks. Of special interest would be to study how the spectrum of the network changes (Voges et al. 2011), and to which degree this predicts the operation of the network, in particular, how the spatial extent of excitation and inhibition affects this behavior. Experimentally, it has been reported that inhibition is more local than excitation in terms of anatomical projections. Many theoretical models, however, assume broader inhibition for convenience. Studying the functional properties of networks with realistic patterns of connectivity might therefore shed light on this aspect.
Orientation selectivity and orientation maps
As we were primarily interested in the emergence of orientation selectivity in species without orientation maps, we studied here random networks with salt-and-pepper structure. However, the model could be extended to networks with spatial or functional maps, which imply feature specific connections. As opposed to the Mexican hat profile assumed in the ring model, if one now assumes a more realistic pattern of local inhibition and longer range excitation, different dynamic properties might follow (Hansen et al. 2012; McLaughlin et al. 2000; Pernice et al. 2011). The analysis of this new regime of orientation selectivity therefore calls for further study. The results of this modeling would in turn help identify the basic mechanisms that are responsible for the emergence of orientation selectivity in different species with different structures, and eventually provide an answer to the question whether common design principles exist, or whether different strategies have been recruited by different species.
Simulation and analysis of network dynamics
Neuron model and surrogate spike trains
The current I i (t) represents the total input to the neuron, the integration of which is governed by the leak resistance R and the membrane time constant τ = 20 ms. When the voltage reaches the threshold at Vth = 20 mV, a spike is generated and transmitted to all postsynaptic neurons, and the membrane potential is reset to the resting potential at V0 = 0 mV. It remains at this level for a short absolute refractory period, tref = 2 ms, during which all synaptic currents are shunted.
To simulate spiking inputs to neurons from outside the network (e.g. from the lateral geniculate nucleus, LGN), we resorted to the conceptually simpler model of a Poisson process. The associated surrogate spike trains have the property that spikes are generated randomly and independently with a prescribed firing rate at each point in time. The linear superposition of an arbitrary number of Poisson processes (as in the case of multiple afferents) is again a Poisson process. The rate of the superposition process is exactly the linear sum of the rates of its components, and it can be effectively simulated as a single process with high rate.
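The superposition property is easy to verify numerically. The following sketch (with purely illustrative afferent counts and rates, not the parameters of our simulations) compares spike counts from many explicitly summed Poisson afferents with a single process at the summed rate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative values (assumptions, not the paper's parameters):
n_aff = 1000      # number of afferents
rate = 5.0        # rate of each afferent (spikes/s)
duration = 1.0    # observation window (s)
trials = 2000

# Counts from explicitly superposing n_aff independent Poisson processes ...
superposed = rng.poisson(rate * duration, size=(trials, n_aff)).sum(axis=1)

# ... versus a single Poisson process at the summed rate.
single = rng.poisson(n_aff * rate * duration, size=trials)
```

Both count distributions have mean and variance close to n_aff · rate · duration, so a single high-rate generator can stand in for thousands of afferents.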
Network connectivity and activity dynamics
The networks considered in this study comprised N = 12 500 neurons, f = 80% of which were excitatory and 1 − f = 20% inhibitory. Synaptic connections were drawn randomly and independently, such that each neuron received exactly 1 000 inputs from the excitatory and 250 from the inhibitory neuron population, respectively. This amounted to an overall connectivity of ε=10%, as suggested by statistical neuroanatomy of local cortical networks (Braitenberg and Schüz 1998). The wiring imposed in our model was in accordance with Dale’s principle, i.e. each neuron formed the same type of synapse with all its postsynaptic partners, either excitatory or inhibitory (Kriener et al. 2008). Self-connections were excluded.
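A scaled-down sketch of this wiring scheme – fixed in-degrees, Dale's principle, no self-connections – might look as follows (the sizes here are illustrative, not the N = 12 500 of the actual simulations):

```python
import numpy as np

def build_connectivity(n_exc, n_inh, k_exc, k_inh, seed=0):
    """Draw, for each target neuron, exactly k_exc excitatory and k_inh
    inhibitory presynaptic sources, excluding self-connections.
    Excitatory neurons are indexed 0..n_exc-1, inhibitory ones after.
    """
    rng = np.random.default_rng(seed)
    n = n_exc + n_inh
    conn = []
    for target in range(n):
        exc_pool = np.setdiff1d(np.arange(n_exc), [target])
        inh_pool = np.setdiff1d(np.arange(n_exc, n), [target])
        conn.append((rng.choice(exc_pool, size=k_exc, replace=False),
                     rng.choice(inh_pool, size=k_inh, replace=False)))
    return conn

# 80% excitatory, 20% inhibitory, 10% connection probability, as in the text.
conn = build_connectivity(n_exc=400, n_inh=100, k_exc=40, k_inh=10)
```

Dale's principle is respected by construction: the sign of a connection is determined entirely by the identity of the source population.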
After each spike, the membrane potential was reset to the resting potential at 0 mV, therefore the size of the voltage jump is Vreset = Vth − V0 = Vth. The cross-neuron coupling W ij encodes the amplitude of the postsynaptic potential (PSP), corresponding to a synapse from neuron j (source) to neuron i (target). A uniform transmission delay of D = 1.5 ms was assumed for all recurrent synapses in the network. The spike train X i (t) stands for the accumulated external input to neuron i, and the corresponding synapses have connection strength of amplitude J s .
In all the simulations described in our paper, in fact, we used stereotyped synaptic transients of finite width, instead of normalized impulses, as postsynaptic currents. All synaptic kernels had the shape of an alpha-function, with a fixed time constant τsyn = 0.5 ms, replacing the delta-functions in the spike trains. The peak amplitude of the kernel is J α , to which we refer as EPSP to denote the strength of post-synaptic potentials. The parameter W ij corresponds to the integral of the PSP, which is J = e τsyn EPSP.
In this model, keeping all time constants at fixed values, the efficacy of a synaptic connection is determined by the peak amplitude of the PSP. For any specific network, we assumed that all recurrent excitatory synapses induce excitatory postsynaptic potentials of the same peak amplitude, EPSP. The peak amplitudes of inhibitory postsynaptic potentials were taken to be a fixed multiple, g, of the excitatory ones, such that IPSP = − g EPSP. For all our results presented in the main text, individual inhibitory couplings were assumed to be much more effective than excitatory ones: The inhibition-excitation ratio was fixed at g = 8. As a consequence, recurrent connectivity in our networks was characterized by a net surplus of inhibition, since the small number of inhibitory neurons was over-compensated by the strength of individual inhibitory couplings. Fixing the balance between recurrent excitation and inhibition in this way is an important concept in models of cortical dynamics, although measurements in real brains are difficult (see e.g. Okun and Lampl (2008)).
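The dominance condition can be checked directly from these parameters; a minimal sketch (the EPSP value is one illustrative choice from the explored range, and J = e τsyn EPSP is the PSP integral given in the Methods):

```python
import math

# Parameter values used in the text; the EPSP amplitude is illustrative.
f, g = 0.8, 8.0            # excitatory fraction, inhibition-excitation ratio
epsp = 0.2e-3              # peak EPSP amplitude (V)
tau_syn = 0.5e-3           # synaptic time constant (s)
k_exc, k_inh = 1000, 250   # excitatory / inhibitory in-degrees

# Integral of the alpha-function PSP: J = e * tau_syn * EPSP.
J = math.e * tau_syn * epsp

# Inhibition dominates when g exceeds the ratio of in-degrees,
# g > f / (1 - f) = 4; here g = 8.
dominated = g > f / (1 - f)

# The net recurrent synaptic weight per target neuron is then negative.
net_weight = k_exc * J - k_inh * g * J
```

With g = 8 the inhibitory in-degree, though four times smaller, contributes twice the excitatory weight in absolute terms, so the summed recurrent coupling is negative.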
In different simulations, we used excitatory synapses with an EPSP amplitude in the range between 0 mV and 1.0 mV. Accordingly, inhibitory synapses had IPSP amplitudes between 0 mV and − 8.0 mV. All external inputs in our simulations were excitatory, and the amplitude of their synapses, EPSPffw, was fixed at 0.1 mV throughout all simulations.
This configuration of parameters, combined with a stationary driving input to each neuron in the network, was previously shown to induce relatively low rates in all neurons, while spike trains are irregular, and pairwise correlations remain weak (Brunel 2000). These properties are known to be a result of complex recurrent network dynamics, and not a consequence of random inputs (e.g. Poisson spike trains) that drive the network (Kriener et al. 2008; van Vreeswijk and Sompolinsky 1996). Inhibitory feedback actively decorrelates the network activity (Pernice et al. 2011; Renart et al. 2010; Tetzlaff et al. 2012). The resulting states of network dynamics are dubbed asynchronous-irregular, AI, and they are thought to closely resemble the dynamic states of neocortical networks recorded with extracellular electrodes (Ecker et al. 2010).
In this parameter setting, the degree of recurrence in any specific network is essentially determined by the amplitude of excitatory postsynaptic potentials, EPSP, of the recurrent connections. Recurrence can be effectively quantified by the spectral radius of the connectivity matrix , which scales linearly with the EPSP amplitude. This fact is explained in more detail below.
The baseline s B is the mean level of input across all orientations. We used a logarithmic relation between input contrast C and input baseline, s B ∝ log10(1 + 100C), as a practical way to specify the input intensity, inspired by biology.
In all our simulations, the relative amplitude, m of the stimulus dependent modulation was fixed to a fraction of 10% of the baseline level, corresponding to setting m = 0.1. The parameter θ∗ signified the stimulus orientation at which the neuron received its maximal input, smax = (1 + m) s B . It represented the initial preferred orientation, Input PO, of the neuron, a parameter that was randomly and independently assigned to each neuron in the population.
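Putting the contrast law and the cosine modulation together, the input tuning can be sketched as follows (the proportionality constant k of the contrast law is an arbitrary assumption here; the text only specifies s B ∝ log10(1 + 100C)):

```python
import numpy as np

def input_rate(theta_deg, theta_pref_deg, contrast, k=10000.0, m=0.1):
    """Cosine-tuned input rate s(theta) = s_B(C) * (1 + m cos(2 (theta - theta*))).

    s_B = k * log10(1 + 100 C); k is an assumed proportionality constant.
    """
    s_b = k * np.log10(1 + 100 * contrast)
    return s_b * (1 + m * np.cos(2 * np.deg2rad(theta_deg - theta_pref_deg)))

# The maximal input, at theta = theta*, equals (1 + m) * s_B.
```

The modulation depth m scales the peak-to-trough ratio, (1 + m)/(1 − m), independently of contrast, which is exactly the property that makes the input tuning contrast invariant.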
To measure the output tuning curves in numerical simulations, we stimulated the networks for 12 different stimulus orientations, covering the full range between 0° and 180° in steps of 15°. Each simulation was run for 6.3 s, using a simulation time step of 0.1 ms. In order to include only steady state activity into our analysis, and to avoid onset transients, the first 300 ms in each simulation were discarded. The output tuning curve of any neuron in the network was obtained in terms of its average firing rate r in response to each stimulus orientation θ, and normally plotted as a curve r (θ). An output tuning curve would be termed contrast invariant, if its overall shape does not depend on the contrast, C, of the stimulus.
To explore the interaction between feedforward and recurrent connectivity on orientation selectivity, we systematically changed two parameters in our networks: The mean input firing rate, s B , and the EPSP amplitude as a measure for the strength of recurrent coupling. We changed these two parameters in a network, while fixing all other parameters, including the network topology given by a specific realization of the random synaptic connectivity, , the inhibition-excitation ratio, g, and the input modulation ratio, m. We used three different values for the baseline intensity, s B = 12 000, 16 000, and 20 000 spikes/s, corresponding to low, medium, and high contrast, C ≈ 9%, 39%, and 99%, respectively.
Free membrane potential
Contrast invariance of the membrane potential tuning is, in the case of a tuned spike response, compromised by the reset mechanism in our neuron model: After each spike, the membrane potential is reset to its resting value. This exerts a negative contribution to the membrane potential, which effectively imposes the opposite tuning as compared to the output spiking. As a result, when the neuron fires more (higher contrasts), it inevitably attains a more negative membrane potential. To correct for this phenomenon, we add this negative contribution back to the membrane potential (V fm = V + τ r V reset) or, equivalently, keep the neuron from spiking by raising its threshold to very high levels. In membrane potential recordings from real neurons, it is also common to correct for this phenomenon by cutting out the spikes including their aftereffects. If used with care, this procedure is essentially equivalent to the correction we applied here.
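The correction itself is a one-line computation; a sketch using the parameter values of our neuron model (τ = 20 ms, V reset = V th = 20 mV):

```python
def free_membrane_potential(v_mean, rate, tau=0.020, v_reset=0.020):
    """Reset-corrected ("free") membrane potential V_fm = V + tau * r * V_reset.

    v_mean is the time-averaged membrane potential (V), rate in spikes/s;
    tau = 20 ms and V_reset = 20 mV, as in the neuron model of the Methods.
    """
    return v_mean + tau * rate * v_reset

# A neuron firing at 10 spikes/s with a measured mean of 10 mV has a free
# membrane potential of 10 mV + 20 ms * 10/s * 20 mV = 14 mV.
v_fm = free_membrane_potential(0.010, 10.0)
```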
Numerical methods and simulation software
The implementation of the LIF model employed in the present study is based on a numerical method known as “exact integration” (Diesmann et al. 2001; Rotter and Diesmann 1999). Numerical simulations of all networks were performed using the neuronal simulation environment NEST (Gewaltig and Diesmann 2007). This tool has been developed to support the reliable, precise and performant numerical simulations of networks of spiking neurons.
The PO, θ∗, was then extracted as the angle, , of the OSV. Its length, , yielded a measure for the degree of orientation selectivity, OSI (Ringach et al. 2002). For a highly selective neuron, which is only active for one orientation, and remains silent for all other orientations, the OSI would be 1; for a completely unselective neuron responding with the same firing rate for all orientations, this measure returns 0.
where rpref = r (θ∗) is the firing rate at the preferred stimulus orientation, θ∗, and rorth = r (θ∗ + 90°) is the firing rate for the orientation orthogonal to it.
Since the output preferred (and hence the orthogonal) orientations of a neuron are computed from Eq. (16), we need to interpolate between the data points of a tuning curve to obtain rpref and rorth. To do this, we fit a cosine function to the tuning curve sampled at 12 equidistant orientations, employing a nonlinear least squares method. Then, the cosine fit of the tuning curve is evaluated at PO and PO + 90° to obtain rpref and rorth, respectively. Negative numbers, whenever occurring, were replaced by 0. This is similar to what experimentalists typically do (see e.g. Niell and Stryker (2008), although by fitting different functions, like a Gaussian or a von Mises probability density; also see Hansel and van Vreeswijk (2012)), and therefore allows us to compare our distributions to their reported results.
irrespective of the baseline firing rate s B , and of the preferred orientation θ∗. Thus, in the case of our input tuning curves with m = 0.1, we have OSI = 0.05 and OSI∗ = 0.1, respectively.
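Both measures can be computed directly from a sampled tuning curve. The sketch below recovers the quoted values, OSI = m/2 = 0.05 and OSI∗ = m = 0.1, for a noise-free cosine tuning curve (for this ideal case, reading off the sampled peak and trough replaces the cosine fit):

```python
import numpy as np

def osi_vector(rates, thetas_deg):
    """OSI and PO from the orientation selectivity vector (Ringach et al. 2002):
    the resultant of the rates weighted by exp(2i*theta); OSI is its length,
    PO half its angle."""
    z = np.sum(rates * np.exp(2j * np.deg2rad(thetas_deg))) / np.sum(rates)
    return np.abs(z), (np.rad2deg(np.angle(z)) / 2) % 180

def osi_star(r_pref, r_orth):
    """Modulation-style index OSI* = (r_pref - r_orth) / (r_pref + r_orth)."""
    return (r_pref - r_orth) / (r_pref + r_orth)

thetas = np.arange(0, 180, 15)        # 12 equidistant orientations, as in the text
m, po_true = 0.1, 60.0
rates = 10.0 * (1 + m * np.cos(2 * np.deg2rad(thetas - po_true)))

osi, po = osi_vector(rates, thetas)             # OSI = m/2 = 0.05, PO = 60
star = osi_star(rates.max(), rates.min())       # OSI* = m = 0.1
```

For a pure cosine tuning curve the vector measure returns exactly half the modulation depth, which is why the two indices differ by a factor of two for our inputs.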
Baseline and modulation gain
We refer to this as the F2 component of a tuning curve.
The transformation of preferred orientation induced by a recurrent network is visualized by a scatter plot showing Output PO vs. Input PO for all neurons (Figure 12). Weakly recurrent networks essentially preserve the preferred orientations of the input to each neuron, leading to scatter plots centered about the diagonal. For networks with increased recurrence, the output PO deviates from the input PO, and off-diagonal elements occur more frequently. To quantify the deviation, we first compute the difference, Δ PO n = OutputPO n − InputPO n , for each neuron. Observe that orientation should be taken modulo 180° and Δ PO n = 90° represents the largest possible difference between input and output PO.
Note that as Δ PO spans the half-circle, i.e. the range [0°,180°], we have taken half the resultant angle as the SDI. If all Output POs are exactly the same as the Input POs, the SDI returns zero; the maximum scatter from the Input POs corresponds to a uniform distribution of Δ PO, for which the SDI returns ≈ 40.5°.
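The text fixes the SDI only through its two limiting values; one reconstruction consistent with both (an assumption about the exact formula) is half the circular angular deviation of the doubled Δ PO angles:

```python
import numpy as np

def sdi(delta_po_deg):
    """Scatter of Delta PO as half the circular angular deviation.

    Angles are doubled to map orientations onto the full circle; with
    resultant length R, the angular deviation is sqrt(2 * (1 - R)).
    Returns 0 when all Delta PO coincide and ~40.5 degrees for a uniform
    distribution, matching the limits quoted in the text.
    """
    z = np.exp(2j * np.deg2rad(np.asarray(delta_po_deg, dtype=float)))
    R = np.abs(z.mean())
    return np.rad2deg(np.sqrt(2.0 * (1.0 - R))) / 2.0
```

For uniformly scattered Δ PO the resultant vanishes (R = 0), giving sqrt(2) rad ≈ 81.03°, half of which is the quoted ≈ 40.5°.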
Firing rates and membrane potential statistics
Mean firing rates in recurrent networks
The tuning curves considered in this work reflect time-averaged firing rates of neurons in a recurrent network. From our numerical simulations, it became clear that the time averaged membrane potential is indicative of the operating point of the network with regard to the tuning properties of its neurons (see Section Results). Therefore, we begin our analysis by considering time averaged equations.
Observe that transmission delays do not matter for temporal averages, and that the above equation holds for networks of LIF neurons with arbitrary connectivity.
We assume that the matrix is always invertible, with inverse . If two out of the three variables , and are known, the third one can then be computed in a straightforward fashion.
Eigenvalue spectrum of homogeneous random networks
We now specifically consider a recurrent network of excitatory and inhibitory neurons, as discussed above. It is assumed that N E = f N neurons are excitatory, forming synapses of uniform strength J with their postsynaptic targets. The remaining neurons N I = N − N E = (1 − f) N are inhibitory, forming synapses of uniform strength −g J. The factor g > 0 describes the relative strength of inhibitory synapses. We refer to the network as being “inhibition dominated”, if the lower number of inhibitory neurons is compensated for by stronger inhibitory weights. In the case considered here, this amounts to the condition g > N E /N I = f/(1 − f) (Brunel 2000).
The density of eigenvalues within the circle is in general non-uniform, and it can be approximated by a density derived in Rajan and Abbott (2006).
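Both features of the spectrum – a bulk of eigenvalues inside a circle and a single exceptional, strongly negative eigenvalue – are easy to observe numerically. A scaled-down sketch (sizes and weights are illustrative; self-connections are not excluded here, which leaves the argument unchanged):

```python
import numpy as np

rng = np.random.default_rng(1)

# Scaled-down inhibition-dominated network (illustrative parameters).
n, f, eps, g, J = 500, 0.8, 0.1, 8.0, 0.1
n_exc = int(f * n)
k_exc, k_inh = int(eps * n_exc), int(eps * (n - n_exc))

# Fixed in-degree weight matrix: J for excitatory, -g*J for inhibitory sources.
W = np.zeros((n, n))
for i in range(n):
    W[i, rng.choice(n_exc, size=k_exc, replace=False)] = J
    W[i, n_exc + rng.choice(n - n_exc, size=k_inh, replace=False)] = -g * J

eig = np.linalg.eigvals(W)

# With fixed in-degrees every row sums to the same value, so the vector of
# ones is an eigenvector: the exceptional eigenvalue is exactly
# J * (k_exc - g * k_inh), large and negative whenever g > f / (1 - f).
lam_exc = J * (k_exc - g * k_inh)    # = 0.1 * (40 - 80) = -4
```

The remaining eigenvalues form the bulk whose density is described by the Rajan-Abbott result; the exceptional eigenvalue governs the attenuation of the common mode.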
Eq. (26), which relates input, output and membrane potentials under stationary conditions, has an effective coefficient matrix . Its eigenvalue spectrum consists of numbers λ−1, where λ is from the spectrum of . Likewise, the eigenvalues of are (1 − λ)−1. This can either be derived directly, or it can be implied by the spectral mapping theorem (Higham 2008). The associated eigenvectors are the same in each case.
Self-consistent firing rates in homogeneous random networks
with and .
Employing a mean field ansatz, the above theory can be applied to networks of identical pulse-coupled LIF neurons, randomly connected with homogeneous in-degrees, and driven by external excitatory input of the same strength. Under these circumstances, all neurons exhibit the same mean firing rate, which can be determined by a straight-forward self-consistency argument (Amit and Brunel 1997; Brunel 2000): The firing rate r is a function of the first two moments of the input fluctuations, μ and σ2, as described by Eq. (31). Both parameters are, in turn, functions of the firing rate r. This leads to a fixed point equation, the root of which can be found numerically. Here we employed Newton’s method, verifying the convergence of the iteration by appropriate means.
where s is the input (stimulus) firing rate, and r is the mean response rate of all neurons in the network, respectively. Here, J s denotes the EPSP amplitude of external inputs, and J denotes the amplitude of recurrent EPSPs. The inhibition-excitation ratio g has been introduced above. The remaining structural parameters are the number of neurons in the network, N, the connection probability, ε, and the fraction f of neurons in the network that are excitatory, implying that a fraction 1 − f is inhibitory.
Correction for α-synapses
The treatment described above is only approximating the networks considered in numerical simulations, since we chose biologically more realistic LIF neurons with alpha-synapses. In order to make use of the same analytical framework as just described, we made the simplifying assumption that all the presynaptic current is delivered immediately, and that the input current to each neuron is still white. We therefore need to obtain the effective values for mean and variance.
Therefore, we choose the value J = J α e τsyn as the effective value for the mean input. This is equivalent to the integral under the PSC, i.e. the total amount of current that is delivered by an alpha synapse with peak J α .
This suggests .
Transformation of tuning by a recurrent network
Baseline and modulation
This means that the recurrent network defined by the coupling matrix , and the matrix derived from it, processes baseline and modulation components separately and independently, with no cross-talk involved. In other words, pure modulation input will not attain any baseline through network processing, and vice versa. This is exactly the meaning of Figure 4B.
Note, however, that, as the mean membrane potential actually depends on the input in a highly nonlinear fashion, the above equations determine the network response only implicitly. Moreover, for the processing to be independent, it is necessary that and depends only on s B and s M , respectively, with no cross-talk. For the baseline firing rates, this implies that is not affected by the modulation in the input. As baseline firing rates are the same in our homogeneous networks, the mean (over neurons in the network) should therefore be more or less constant in one experiment with fixed s B . We have checked this numerically in our simulations by plotting the standard deviation (over neurons) of the mean membrane potential (over time and over orientation) for 24 sampled excitatory and inhibitory neurons (Figure 16). The variance is indeed much smaller than v M , the modulation of the membrane potential due to modulation in the input, s M , and this is consistent for all recurrent regimes.
Baseline attenuation by inhibition dominated random networks
In the case of uniform input an explicit solution of the mean firing rate can be obtained by the mean field approximation, as described in Section Self-consistent firing rates in homogeneous random networksSelf-consistent firing rates in homogeneous random networks. The mean membrane potential is then determined by Eq. (26). We will now discuss how the modulation part of tuned inputs can be approximated.
Modulation of the firing rate and of the membrane potential
Although this approximation is not strictly true, it holds on average: As the result of numerical simulation in Figure 9B demonstrated, had a narrow distribution around zero. Similar distribution is expected for , as can be expanded in terms of powers of (Eq. (6)) under the assumption of stability.
In terms of diagrammatic illustration of Figure 4B, this is equivalent to β M ≈ 0. This in turn implies that the net input from the recurrent network is on average untuned. For EPSP=0.2 mV, this input tuning is shown in Figure 10. Although the tuning of recurrent input is (compared to feedforward tuning) not negligible for all neurons, it holds on average, such that average tuning curves have the same shape as the input tuning (Figure 10B). We therefore use this approximation to compute the mean modulation gain of the network.
The linear mixture of tuning curves described by Eq. (37) is reduced to an amplification or attenuation of the respective input tuning curves.
The parameters J s , J, g, N, ε and f are the same as above.
The resultant firing rate of the neuron according to Eq. (31) now differs from its baseline firing rate. The difference is, in fact, a good estimate for the modulation in the output firing rate of this particular neuron. Due to our general homogeneity assumptions, all neurons in the network will have the same output modulation, notwithstanding the fact that they all have different preferred orientations. Note that, to be consistent, the correction of J due to the refractory period should be performed based on the modulated firing rate. This becomes specifically important for low recurrences, where the modulation rate is higher and, as a result, the effect of shunting of input due to refractory period (Jrtref) becomes more prominent.
If the firing rates have been determined self-consistently, Eq. (26) yields the corresponding membrane potentials directly. This is true for both the baseline and for the modulation, which are defined in the obvious way also for the membrane potential.
Operating regime of network
The modulation gains can also be computed by linearizing the dynamics around the baseline operating regime of the network.
Our results revealed that the mean modulation gain, γ M , in the network depends on the mean distance of the membrane potential from the threshold (Figure 18). Subthreshold modulations (with regard to the mean-driven threshold) were capable of eliciting output firing activity, and the input-output relationship (the gain) was inversely proportional to the distance to threshold. One way to interpret these results is to summarize them in terms of the mean and standard deviation of the input that a neuron receives on average from the network, in the baseline state. To this, and alternatively to the mean and standard deviation of membrane potential, which is uniquely determined by the input, we refer as the operating point of the network.
The effect of mean, μ B , and standard deviation of input, σ B , on the modulation gain can be described, respectively, as shifting the mean membrane potential (and hence determining the mean distance to threshold), and smoothing (linearizing) the f-I curve (Anderson et al. 2000; Miller and Troyer 2002). The linearized gains can then be obtained by perturbing the input around the baseline state, as it was described in Section Tuning and invariance of membrane potentials and Linear tuning in recurrent networks. This total embedded gain of the neuron in response to this perturbation determines the effective coupling strength and, as Figure 19 demonstrated, predicts the mean modulation gain in these networks quite well.
The embedded gain modulates both feedforward and recurrent couplings. The fact that recurrent connections are now effectively weighted by these linear gains might suggest an explanation why the network exhibits stable activity even for highly recurrent regimes. As Figure 6A showed, if the spectrum of is computed from the weight matrix normalized by Vth, the radius of the bulk of eigenvalues, ρ, would be larger than one already for an intermediate recurrent regime (EPSP = 0.2, J = 0.27 mV, and ρ = 1.68 in this example). If one now computes the normalized radius by weighting the coupling strength according to linear gains (i.e. W ij = ζ J, instead of J/Vth), the new normalized radius, ρnorm, is not unstable anymore (ρnorm ≈ 0.87 in this example). This coincides with our observations of the numerical simulations.
This enhanced stability has indeed been demonstrated to be the case for all networks we have studied here (Figure 19, inset). If one now add to this that ζ is inversely proportional to distance to threshold, it follows that the network dynamically settles in a regime of operation which stabilizes the bulk of eigenvalues. This is due to the fact that in inhibition dominated networks, increasing the recurrent coupling also increases the negative feedback within the network, which results in more hyperpolarized average membrane potentials of the neurons.g This in turn leads to a smaller relative contribution of each spike from a presynaptic source to the firing activity of the postsynaptic neuron, since the distance to threshold has effectively increased. The overall increase or decrease in the effective gain, ζ J, depends on how exactly the mean membrane potential, v, is affected by J and how ζ is in turn depending on v.
Linear tuning in the strongly recurrent regime
For networks with weak to intermediate recurrence, the assumption of “perfect balance” allowed a rather accurate prediction of the gain for the tuned part of network activation (“modulation”). This assumption, however, fails in the case of strongly recurrent networks. Under the constraints of “linear tuning” some aspects of the problem can be nevertheless treated.
Vectorial features and linear tuning
We now consider stimulus features that can be represented by vectors in for some n ≥ 1.
The direction of a moving light dot stimulus in the visual field is represented by an angle in [0,2 π) or, alternatively, by a vector in , the 1-dimensional sphere. The speed of the movement can be considered simultaneously with its direction, if encoded by the length of the vector. Any vector in is then corresponding to a valid stimulus.
The factor 2 in the argument of the cosine and the sine function makes sure that a rotation of the grating by π is mapped to the initial orientation again. Note that this parametrization already implies that “cross-orientation suppression” follows from the obvious relation ω (ϕ + π/2) = − ω (ϕ).
A stimulus for studying color vision is represented by the activation profiles of the different types of receptor cells in the retina, distinguished by their specific light absorption spectrum. For example, trichromacy in humans and closely related monkeys involves the differential activation of the three different types of cones (S,M,L). This leads, in a natural way, to a representation of a color stimulus in terms of a vector in .
Linear tuning in recurrent networks
where is the vector of baseline activities and is a matrix the rows of which are given by the transposed preferred features . Therefore, apart from nonlinear distortions induced by nonzero mean membrane potentials, all neurons in the recurrent network are again linearly tuned, with baselines given by the components of the vector , and preferred features encoded by the rows of the matrix .
Mixing of preferred features in random recurrent networks
Because linear tuning curves are linearly transformed according to Eq. (56), we can actually compute the recurrent preferred features that result from this transformation. Note, however, that the actual tuning curves will, in general, be contaminated by nonlinear distortions by that are not reflected by the linear mix of preferred features of the inputs to the network. To keep the present discussion simple, we ignore this complication here.
For a random network as described above, the factor β is the same for all rows j, and it describes now the mean attenuation/amplification of the tuning strength performed by the recurrent network.
which is a Weibull distribution with shape parameter 2 and scale parameter .
is the modified Bessel function of the first kind and order zero.
b Note that, as a π-periodic parameter θ is now mapped to a 2π-periodic parameter 2θ, an opposite direction here implies an orthogonal orientation in the original parameter space.
c One should also consider the possible effect of feature-specific connectivity (Ko et al. 20112013) on this behavior, which predicts a preponderance of connections between neurons with similar selectivity and hence a more tuned input to neurons.
d Tuning width is unchanged if the baseline activity is removed, and half-width at half-height (HWHH) is computed from the Gaussian fit to the tuned part, as done in (Sharon and Grinvald 2002). Tuning amplification would effectively decrease the tuning width, however, if the baseline is also taken into consideratio (as in Wilson et al. (2012)).
e Note that we do not exclude the presence of nonlinear mechanisms in the biological cortex and their contribution to orientation selectivity. The nonlinearity of the neuronal transfer function, as well as other nonlinear mechanisms like nonlinear dendritic amplification (Jia et al. 2010; Lavzin et al. 2012; Lee et al. 2012a) or synchronization of thalamic inputs (Bruno and Sakmann 2006; Stanley et al. 2012), may contribute in addition to obtain higher selectivity.
f We verified however, if the cosine fit imposes a general constraint on the tuning curves and changes the OSI*. To check this, we use an alternative way to obtain rpref and rorth from the smoothened tuning curves, namely by linear interpolation between the data points. The result of this is compared with the result of the cosine fit in Figure 3B, and does not change the conclusions qualitatively.
g If there is no minimum boundary condition, as it is the case for our neuron model, this hyperpolarization can grow beyond all bounds. Note that this would not be the case if one uses another neuron model, like conductance-base LIF neurons. The treatment of the problem in those cases might, therefore, be different.
The authors wish to thank A Aertsen, C Boucsein, G Grah, J Kirsch and A Kumar for their comments on previous versions of the manuscript. We also thank the developers of the simulation software NEST (see http://www.nest-initiative.org) and the maintainers of the BCF computing facilities for their support throughout this study. Funding by the German Ministry of Education and Research (BCCN Freiburg, grant 01GQ0420 and BFNT Freiburg*Tübingen, grant 01GQ0830) is gratefully acknowledged. The article processing charge was covered by the open access publication fund of the University of Freiburg.
- Adesnik H, Bruns W, Taniguchi H, Huang ZJ, Scanziani M: A neural circuit for spatial summation in visual cortex. Nature 2012, 490(7419):226-231. http://www.ncbi.nlm.nih.gov/pubmed/23060193 10.1038/nature11526View ArticleGoogle Scholar
- Alitto HJ, Usrey WM: Influence of contrast on orientation and temporal frequency tuning in ferret primary visual cortex. J Neurophysiol 2004, 91(6):2797-2808. http://www.ncbi.nlm.nih.gov/pubmed/14762157 10.1152/jn.00943.2003View ArticleGoogle Scholar
- Amit DJ, Brunel N: Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex. Cerebral Cortex 1997, 7(3):237-252. http://www.ncbi.nlm.nih.gov/pubmed/9143444 10.1093/cercor/7.3.237View ArticleGoogle Scholar
- Anderson JS, Lampl I, Gillespie DC, Ferster D: The Contribution of Noise to Contrast Invariance of Orientation Tuning in Cat Visual Cortex. Science 2000, 290(5498):1968-1972. http://www.ncbi.nlm.nih.gov/pubmed/11110664 10.1126/science.290.5498.1968View ArticleGoogle Scholar
- Atallah BV, Bruns W, Carandini M, Scanziani M: Parvalbumin-expressing interneurons linearly transform cortical responses to visual stimuli. Neuron 2012, 73(1):159-170. http://www.ncbi.nlm.nih.gov/pubmed/22243754 10.1016/j.neuron.2011.12.013View ArticleGoogle Scholar
- Batschelet E: Circular statistics in biology (Mathematics in biology). Academic Press Inc; 1981.Google Scholar
- Ben-Yishai R, Bar-Or RL, Sompolinsky H: Theory of Orientation Tuning in Visual Cortex. Proc Nat Acad Sci 1995, 92(9):3844-3848. http://www.ncbi.nlm.nih.gov/pubmed/7731993 10.1073/pnas.92.9.3844View ArticleGoogle Scholar
- Blasdel GG, Salama G: Voltage-sensitive dyes reveal a modular organization in monkey striate cortex. Nature 1986, 321(6070):579-585. http://www.ncbi.nlm.nih.gov/pubmed/3713842 10.1038/321579a0View ArticleGoogle Scholar
- Bollobás B: Random Graphs, 2nd edn. No. 73 in Cambridge studies in advanced mathematics. Cambridge University Press; 2001. http://www.cambridge.org/ve/academic/subjects/mathematics/discrete-mathematics-information-theory-and-coding/random-graphs-2nd-editionGoogle Scholar
- Bonhoeffer T, Grinvald A: Iso-orientation domains in cat visual cortex are arranged in pinwheel-like patterns. Nature 1991, 353(6343):429-4231. http://www.ncbi.nlm.nih.gov/pubmed/1896085 10.1038/353429a0View ArticleGoogle Scholar
- Braitenberg V, Schüz A: Cortex: statistics and geometry of neuronal connectivity. Springer; 1998. http://link.springer.com/book/10.1007%2F978-3-662-03733-1View ArticleGoogle Scholar
- Brunel N: Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J Comput Neurosci 2000, 8(3):183-208. http://www.ncbi.nlm.nih.gov/pubmed/10809012 10.1023/A:1008925309027View ArticleGoogle Scholar
- Bruno RM, Sakmann B: Cortex is driven by weak but synchronously active thalamocortical synapses. Science 2006, 312(5780):1622-1627. http://www.ncbi.nlm.nih.gov/pubmed/16778049 10.1126/science.1124593View ArticleGoogle Scholar
- Chapman B, Stryker MP: Development of orientation selectivity in ferret visual cortex and effects of deprivation. J Neurosci 1993, 13(12):5251-5262. http://www.ncbi.nlm.nih.gov/pubmed/8254372Google Scholar
- Chen X, Leischner U, Rochefort NL, Nelken I, Konnerth A: Functional mapping of single spines in cortical neurons in vivo. Nature 2011, 475(7357):501-505. http://www.ncbi.nlm.nih.gov/pubmed/21706031 10.1038/nature10193View ArticleGoogle Scholar
- Chung S, Ferster D: Strength and orientation tuning of the thalamic input to simple cells revealed by electrically evoked cortical suppression. Neuron 1998, 20(6):1177-1189. http://www.ncbi.nlm.nih.gov/pubmed/9655505 10.1016/S0896-6273(00)80498-5View ArticleGoogle Scholar
- Cottam JCH, Smith SL, Häusser M: Target-specific effects of somatostatin-expressing interneurons on neocortical visual processing. J Neurosci 2013, 33(50):19,567-19,578. http://www.jneurosci.org/content/33/50/19567.abstract 10.1523/JNEUROSCI.2624-13.2013View ArticleGoogle Scholar
- Di Cristo G, Wu C, Chattopadhyaya B, Ango F, Knott G, Welker E, Svoboda K, Huang ZJ: Subcellular domain-restricted GABAergic innervation in primary visual cortex in the absence of sensory and thalamic inputs. Nat Neurosci 2004, 7(11):1184-1186. http://www.ncbi.nlm.nih.gov/pubmed/15475951 10.1038/nn1334View ArticleGoogle Scholar
- Diesmann M, Gewaltig MO, Rotter S, Aertsen A: State space analysis of synchronous spiking in cortical networks. Neurocomput 2001, 38-40: 565-571. http://material.brainworks.uni-freiburg.de/publications-brainworks/2001/journal%20papers/mdcns00.pdfView ArticleGoogle Scholar
- Douglas RJ, Koch C, Mahowald M, Martin KA, Suarez HH: Recurrent excitation in neocortical circuits. Science 1995, 269(5226):981-985. http://www.ncbi.nlm.nih.gov/pubmed/7638624 10.1126/science.7638624View ArticleGoogle Scholar
- Dragoi V, Rivadulla C, Sur M: Foci of orientation plasticity in visual cortex. Nature 2001, 411(6833):80-86. http://www.ncbi.nlm.nih.gov/pubmed/11333981 10.1038/35075070View ArticleGoogle Scholar
- Ecker AS, Berens P, Keliris GA, Bethge M, Logothetis NK, Tolias AS: Decorrelated neuronal firing in cortical microcircuits. Science 2010, 327(5965):584-587. http://www.ncbi.nlm.nih.gov/pubmed/20110506 10.1126/science.1179867View ArticleGoogle Scholar
- Erdős P, Rényi A: On random graphs I. Publicationes Mathematicae (Debrecen) 1959, 6: 290-297.Google Scholar
- Ferster D, Miller KD: Neural mechanisms of orientation selectivity in the visual cortex. Annu Rev Neurosci 2000, 23(1):441-471. http://www.ncbi.nlm.nih.gov/pubmed/10845071 10.1146/annurev.neuro.23.1.441View ArticleGoogle Scholar
- Ferster D, Chung S, Wheat H: Orientation selectivity of thalamic input to simple cells of cat visual cortex. Nature 1996, 380(6571):249-252. http://www.ncbi.nlm.nih.gov/pubmed/8637573 10.1038/380249a0View ArticleGoogle Scholar
- Finn IM, Priebe NJ, Ferster D: The emergence of contrast-invariant orientation tuning in simple cells of cat visual cortex. Neuron 2007, 54(1):137-152. http://www.ncbi.nlm.nih.gov/pubmed/17408583 10.1016/j.neuron.2007.02.029View ArticleGoogle Scholar
- Fino E, Yuste R: Dense inhibitory connectivity in neocortex. Neuron 2011, 69(6):1188-203. http://www.ncbi.nlm.nih.gov/pubmed/21435562 10.1016/j.neuron.2011.02.025View ArticleGoogle Scholar
- Gewaltig MO, Diesmann M: Nest (neural simulation tool). Scholarpedia 2007, 2(4):1430. ) http://www.scholarpedia.org/article/NEST_(NEural_Simulation_Tool 10.4249/scholarpedia.1430View ArticleGoogle Scholar
- Gillespie DC, Lampl I, Anderson JS, Ferster D: Dynamics of the orientation-tuned membrane potential response in cat primary visual cortex. Nature Neuroscience 2001, 4(10):1014-1019. http://www.ncbi.nlm.nih.gov/pubmed/11559853 10.1038/nn731View ArticleGoogle Scholar
- Haider B, Häusser M, Carandini M: Inhibition dominates sensory responses in the awake cortex. Nature 2013, 493(7430):97-100. http://www.ncbi.nlm.nih.gov/pubmed/23172139View ArticleGoogle Scholar
- Hansel D, van Vreeswijk C: How noise contributes to contrast invariance of orientation tuning in cat visual cortex. J Neurosci 2002, 22(12):5118-5128. http://www.ncbi.nlm.nih.gov/pubmed/12077207Google Scholar
- van Vreeswijk C, Hansel D: The Mechanism of Orientation Selectivity in Primary Visual Cortex without a Functional Map. J Neurosci 2012, 32(12):4049-4064. http://www.ncbi.nlm.nih.gov/pubmed/22442071 10.1523/JNEUROSCI.6284-11.2012View ArticleGoogle Scholar
- Hansen BJ, Chelaru MI, Dragoi V: Correlated Variability in Laminar Cortical Circuits. Neuron 2012, 76(3):590-602. http://www.ncbi.nlm.nih.gov/pubmed/23141070 10.1016/j.neuron.2012.08.029View ArticleGoogle Scholar
- Higham NJ: Functions of Matrices: Theory and Computation. Philadelphia, PA, USA: Society for Industrial and Applied Mathematics; 2008.View ArticleGoogle Scholar
- Hofer SB, Ko H, Pichler B, Vogelstein J, Ros H, Zeng H, Lein E, Lesica NA, Mrsic-Flogel TD: Differential connectivity and response dynamics of excitatory and inhibitory neurons in visual cortex. Nature Neurosci 2011, 14(8):1045-1052. http://www.ncbi.nlm.nih.gov/pubmed/21765421 10.1038/nn.2876View ArticleGoogle Scholar
- Horton JC, Adams DL: The cortical column: a structure without a function. Philosophical Transactions of the Royal Society of London 2005, 360(1456):837-862. http://www.ncbi.nlm.nih.gov/pubmed/15937015 10.1098/rstb.2005.1623View ArticleGoogle Scholar
- Hubel DH, Wiesel TN: Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. J Physiol 1962, 160: 106-154. http://www.ncbi.nlm.nih.gov/pubmed/14449617View ArticleGoogle Scholar
- Wiesel TN, Hubel DH: Receptive fields and functional architecture of monkey striate cortex. The Journal of Physiology 1968, 195(1):215-243. http://www.ncbi.nlm.nih.gov/pubmed/4966457View ArticleGoogle Scholar
- Jia H, Rochefort NL, Chen X, Konnerth A: Dendritic organization of sensory input to cortical neurons in vivo. Nature 2010, 464(7293):1307-1312. http://www.ncbi.nlm.nih.gov/pubmed/20428163 10.1038/nature08947View ArticleGoogle Scholar
- Kara P, Pezaris JS, Yurgenson S, Reid RC: The spatial receptive field of thalamic inputs to single cortical simple cells revealed by the interaction of visual and electrical stimulation. Proc Nat Acad Sci 2002, 99(25):16,261-16,266. http://www.ncbi.nlm.nih.gov/pubmed/12461179 10.1073/pnas.242625499View ArticleGoogle Scholar
- Ko H, Hofer SB, Pichler B, Buchanan KA, Sjöström PJ, Mrsic-Flogel TD: Functional specificity of local synaptic connections in neocortical networks. Nature 2011, 473(7345):87-91. http://www.ncbi.nlm.nih.gov/pubmed/21478872 10.1038/nature09880View ArticleGoogle Scholar
- Ko H, Cossell L, Baragli C, Antolik J, Clopath C, Hofer SB, Mrsic-Flogel TD: The emergence of functional microcircuits in visual cortex. Nature 2013, 496(7443):96-100. http://www.ncbi.nlm.nih.gov/pubmed/23552948 10.1038/nature12015View ArticleGoogle Scholar
- Kriener B, Tetzlaff T, Aertsen A, Diesmann M, Rotter S: Correlations and population dynamics in cortical networks. Neural Comput 2008, 20(9):2185-226. http://www.ncbi.nlm.nih.gov/pubmed/18439141 10.1162/neco.2008.02-07-474View ArticleGoogle Scholar
- Kuhlman SJ, Tring E, Trachtenberg JT: Fast-spiking interneurons have an initial orientation bias that is lost with vision. Nature Neurosci 2011, 14(9):1121-1123. http://www.ncbi.nlm.nih.gov/pubmed/21750548 10.1038/nn.2890View ArticleGoogle Scholar
- Kuhn A, Aertsen A, Rotter S: Higher-order statistics of input ensembles and the response of simple model neurons. Neural Computation 2003, 15(1):67-101. http://www.ncbi.nlm.nih.gov/pubmed/12590820 10.1162/089976603321043702View ArticleGoogle Scholar
- Lavzin M, Rapoport S, Polsky A, Garion L, Schiller J: Nonlinear dendritic processing determines angular tuning of barrel cortex neurons in vivo. Nature 2012. http://www.ncbi.nlm.nih.gov/pubmed/22940864Google Scholar
- Lee D, Lin BJ, Lee AK: Hippocampal Place Fields Emerge upon Single-Cell Manipulation of Excitability During Behavior. Science 2012a, 337(6096):849-853. http://www.ncbi.nlm.nih.gov/pubmed/22904011 10.1126/science.1221489View ArticleGoogle Scholar
- Lee SH, Kwan AC, Zhang S, Phoumthipphavong V, Flannery JG, Masmanidis SC, Taniguchi H, Huang ZJ, Zhang F, Boyden ES, Deisseroth K, Dan Y: Activation of specific interneurons improves V1 feature selectivity and visual perception. Nature 2012b, 488(7411):379-383. http://www.ncbi.nlm.nih.gov/pubmed/22878719 10.1038/nature11312View ArticleGoogle Scholar
- Li YT, Ma WP, Li LY, Ibrahim LA, Wang SZ, Tao HW: Broadening of inhibitory tuning underlies contrast-dependent sharpening of orientation selectivity in mouse visual cortex. J Neurosci 2012a, 32(46):16,466-16,477. http://www.ncbi.nlm.nih.gov/pubmed/23152629 10.1523/JNEUROSCI.3221-12.2012View ArticleGoogle Scholar
- Li YT, Ma WP, Pan CJ, Zhang LI, Tao HW: Broadening of cortical inhibition mediates developmental sharpening of orientation selectivity. J Neurosci 2012b, 32(12):3981-3991. http://www.ncbi.nlm.nih.gov/pubmed/22442065 10.1523/JNEUROSCI.5514-11.2012View ArticleGoogle Scholar
- Liu BH, Li YT, Ma WP, Pan CJ, Zhang LI, Tao HW: Broad inhibition sharpens orientation selectivity by expanding input dynamic range in mouse simple cells. Neuron 2011, 71(3):542-554. http://www.ncbi.nlm.nih.gov/pubmed/21835349 10.1016/j.neuron.2011.06.017View ArticleGoogle Scholar
- Ma W, Liu B, Li Y, Huang ZJ, Zhang LI, Tao HW: Visual representations by cortical somatostatin inhibitory neurons–selective but with weak and delayed responses. J Neurosci 2010, 30(43):14,371-14,379. http://www.ncbi.nlm.nih.gov/pubmed/20980594 10.1523/JNEUROSCI.3248-10.2010View ArticleGoogle Scholar
- Mao R, Schummers J, Knoblich U, Lacey CJ, Van Wart A, Cobos I, Kim C, Huguenard JR, Rubenstein JLR, Sur M: Influence of a subtype of inhibitory interneuron on stimulus-specific responses in visual cortex. Cerebral Cortex 2012, 22(3):493-508. http://www.ncbi.nlm.nih.gov/pubmed/21666125 10.1093/cercor/bhr057View ArticleGoogle Scholar
- McLaughlin D, Shapley R, Shelley M, Wielaard DJ: A neuronal network model of macaque primary visual cortex (V1): orientation selectivity and dynamics in the input layer 4Calpha. Proc Nat Acad Sci 2000, 97(14):8087-8092. http://www.ncbi.nlm.nih.gov/pubmed/10869422 10.1073/pnas.110135097View ArticleGoogle Scholar
- Miller KD, Troyer TW: Neural noise can explain expansive, power-law nonlinearities in neural response functions. J Neurophysiol 2002, 87(2):653-659. http://www.ncbi.nlm.nih.gov/pubmed/11826034Google Scholar
- Murthy VN, Fetz EE: Effects of input synchrony on the firing rate of a three-conductance cortical neuron model. Neural Comput 1994, 6(6):1111-1126. http://www.mitpressjournals.org/doi/abs/10.1162/neco.19126.96.36.1991 10.1162/neco.19188.8.131.521View ArticleGoogle Scholar
- Niell CM, Stryker MP: Highly selective receptive fields in mouse visual cortex. J Neurosci 2008, 28(30):7520-7536. http://www.ncbi.nlm.nih.gov/pubmed/18650330 10.1523/JNEUROSCI.0623-08.2008View ArticleGoogle Scholar
- Ohki K, Chung S, Kara P, Hübener M, Bonhoeffer T, Reid RC: Highly ordered arrangement of single neurons in orientation pinwheels. Nature 2006, 442(7105):925-928. http://www.ncbi.nlm.nih.gov/pubmed/16906137 10.1038/nature05019View ArticleGoogle Scholar
- Okun M, Lampl I: Instantaneous correlation of excitation and inhibition during ongoing and sensory-evoked activities. Nature Neurosci 2008, 11(5):535-537. http://www.ncbi.nlm.nih.gov/pubmed/18376400 10.1038/nn.2105View ArticleGoogle Scholar
- Packer AM, Yuste R: J Neurosci. 2011, 31(37):13,260-13,271.http://www.ncbi.nlm.nih.gov/pubmed/21917809Google Scholar
- Pernice V, Staude B, Cardanobile S, Rotter S: How structure determines correlations in neuronal networks. PLoS Comput Biol 2011, 7(5):e1002,059. http://www.ncbi.nlm.nih.gov/pubmed/21625580 10.1371/journal.pcbi.1002059View ArticleGoogle Scholar
- Staude B, Cardanobile S, Rotter S, Pernice V: Recurrent interactions in spiking networks with arbitrary topology. Phys Rev E 2012, 85(3 Pt 1):031,916. http://www.ncbi.nlm.nih.gov/pubmed/22587132Google Scholar
- Priebe NJ, Ferster D: Mechanisms of Neuronal Computation in Mammalian Visual Cortex. Neuron 2012, 75(2):194-208. http://www.ncbi.nlm.nih.gov/pubmed/22841306 10.1016/j.neuron.2012.06.011View ArticleGoogle Scholar
- Rajan K, Abbott LF: Eigenvalue spectra of random matrices for neural networks. Phys Rev Lett 2006, 97(18):188,104. http://www.ncbi.nlm.nih.gov/pubmed/17155583View ArticleGoogle Scholar
- Renart A, de la Rocha J, Bartho P, Hollender L, Parga N, Reyes A, Harris KD: The asynchronous state in cortical circuits. Science 2010, 327(5965):587-590. http://www.ncbi.nlm.nih.gov/pubmed/20110507 10.1126/science.1179850View ArticleGoogle Scholar
- Ricciardi LM: Diffusion processes and related topics on biology. Berlin: Springer-Verlag; 1977.View ArticleGoogle Scholar
- Ringach DL, Shapley RM, Hawken MJ: Orientation selectivity in macaque V1: diversity and laminar dependence. J Neurosci 2002, 22(13):5639-5651. http://www.ncbi.nlm.nih.gov/pubmed/12097515Google Scholar
- Rotter S, Diesmann M: Exact digital simulation of time-invariant linear systems with applications to neuronal modeling. Biol Cybern 1999, 81(5-6):381-402. http://www.ncbi.nlm.nih.gov/pubmed/10592015 10.1007/s004220050570View ArticleGoogle Scholar
- Rudolph M, Pospischil M, Timofeev I, Destexhe A: Inhibition determines membrane potential dynamics and controls action potential generation in awake and sleeping cat cortex. J Neurosci 2007, 27(20):5280-5290. http://www.ncbi.nlm.nih.gov/pubmed/17507551 10.1523/JNEUROSCI.4652-06.2007View ArticleGoogle Scholar
- Sadagopan S, Ferster D: Feedforward origins of response variability underlying contrast invariant orientation tuning in cat visual cortex. Neuron 2012, 74(5):911-923. http://www.ncbi.nlm.nih.gov/pubmed/22681694 10.1016/j.neuron.2012.05.007View ArticleGoogle Scholar
- Sclar G, Freeman RD: Orientation selectivity in the cat’s striate cortex is invariant with stimulus contrast. Experimental Brain Res 1982, 46(3):457-461. http://www.ncbi.nlm.nih.gov/pubmed/7095050 10.1007/BF00238641View ArticleGoogle Scholar
- Seriès P, Latham PE, Pouget A: Tuning curve sharpening for orientation selectivity: coding efficiency and the impact of correlations. Nat Neurosci 2004, 7(10):1129-1135. http://www.ncbi.nlm.nih.gov/pubmed/15452579, doi:10.1038/nn1321
- Shapley R, Hawken M, Ringach DL: Dynamics of orientation selectivity in the primary visual cortex and the importance of cortical inhibition. Neuron 2003, 38(5):689-699. http://www.ncbi.nlm.nih.gov/pubmed/12797955, doi:10.1016/S0896-6273(03)00332-5
- Sharon D, Grinvald A: Dynamics and constancy in cortical spatiotemporal patterns of orientation processing. Science 2002, 295(5554):512-515. http://www.ncbi.nlm.nih.gov/pubmed/11799249, doi:10.1126/science.1065916
- Siegert AJF: On the first passage time probability problem. Phys Rev 1951, 81:617-623. http://link.aps.org/doi/10.1103/PhysRev.81.617, doi:10.1103/PhysRev.81.617
- Somers DC, Nelson SB, Sur M: An emergent model of orientation selectivity in cat visual cortical simple cells. J Neurosci 1995, 15(8):5448-5465. http://www.ncbi.nlm.nih.gov/pubmed/7643194
- Sompolinsky H, Shapley R: New perspectives on the mechanisms for orientation selectivity. Curr Opin Neurobiol 1997, 7(4):514-522. http://www.ncbi.nlm.nih.gov/pubmed/9287203, doi:10.1016/S0959-4388(97)80031-1
- Stanley GB, Jin J, Wang Y, Desbordes G, Wang Q, Black MJ, Alonso JM: Visual orientation and directional selectivity through thalamic synchrony. J Neurosci 2012, 32(26):9073-9088. http://www.ncbi.nlm.nih.gov/pubmed/22745507, doi:10.1523/JNEUROSCI.4968-11.2012
- Tan Z, Hu H, Huang ZJ, Agmon A: Robust but delayed thalamocortical activation of dendritic-targeting inhibitory interneurons. Proc Natl Acad Sci 2008, 105(6):2187-2192. http://www.ncbi.nlm.nih.gov/pubmed/18245383, doi:10.1073/pnas.0710628105
- Tan AYY, Brown BD, Scholl B, Mohanty D, Priebe NJ: Orientation selectivity of synaptic input to neurons in mouse and cat primary visual cortex. J Neurosci 2011, 31(34):12339-12350. http://www.ncbi.nlm.nih.gov/pubmed/21865476, doi:10.1523/JNEUROSCI.2039-11.2011
- Tetzlaff T, Helias M, Einevoll GT, Diesmann M: Decorrelation of neural-network activity by inhibitory feedback. PLoS Comput Biol 2012, 8(8):e1002596. http://www.ncbi.nlm.nih.gov/pubmed/23133368, doi:10.1371/journal.pcbi.1002596
- Ts’o D, Frostig R, Lieke E, Grinvald A: Functional organization of primate visual cortex revealed by high resolution optical imaging. Science 1990, 249(4967):417-420. http://www.ncbi.nlm.nih.gov/pubmed/2165630, doi:10.1126/science.2165630
- Varga Z, Jia H, Sakmann B, Konnerth A: Dendritic coding of multiple sensory inputs in single cortical neurons in vivo. Proc Natl Acad Sci 2011, 108(37):15420-15425. http://www.ncbi.nlm.nih.gov/pubmed/21876170, doi:10.1073/pnas.1112355108
- Voges N, Aertsen A, Rotter S: Structural models of cortical networks with long-range connectivity. Math Probl Eng 2012, Article ID 484812. http://material.bccn.uni-freiburg.de/publications-bccn/2012/Voges12_484812.pdf
- van Vreeswijk C, Sompolinsky H: Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science 1996, 274(5293):1724-1726. http://www.ncbi.nlm.nih.gov/pubmed/8939866, doi:10.1126/science.274.5293.1724
- Wilson NR, Runyan CA, Wang FL, Sur M: Division and subtraction by distinct cortical inhibitory networks in vivo. Nature 2012, 488(7411):343-348. http://www.ncbi.nlm.nih.gov/pubmed/22878717, doi:10.1038/nature11347
- Xing D, Ringach DL, Hawken MJ, Shapley RM: Untuned suppression makes a major contribution to the enhancement of orientation selectivity in macaque V1. J Neurosci 2011, 31(44):15972-15982. http://www.ncbi.nlm.nih.gov/pubmed/22049440, doi:10.1523/JNEUROSCI.2245-11.2011
- Yger P, El Boustani S, Destexhe A, Frégnac Y: Topologically invariant macroscopic statistics in balanced networks of conductance-based integrate-and-fire neurons. J Comput Neurosci 2011, 31(2):229-245. http://www.ncbi.nlm.nih.gov/pubmed/21222148, doi:10.1007/s10827-010-0310-z
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.