# The topology of the directed clique complex as a network invariant

- Paolo Masulli^{1}
- Alessandro E. P. Villa^{1}

**Received: **2 October 2015

**Accepted: **17 March 2016

**Published: **31 March 2016

## Abstract

We introduce new algebro-topological invariants of directed networks, based on the topological construction of the directed clique complex. The shape of the underlying directed graph is encoded in a way that can be studied mathematically to obtain network invariants such as the Euler characteristic and the Betti numbers. Two applications illustrate the use of the Euler characteristic. First, we investigate how the evolution of a Boolean recurrent artificial neural network is influenced by its topology under a dynamics involving pruning and strengthening of the connections, and show that the topological features of the directed clique complex influence the dynamical evolution of the network. The second application considers the directed clique complex in a broader framework, to define an invariant of directed networks, the network degree invariant, which is constructed by computing the topological invariant on a sequence of sub-networks filtered by the minimum in- or out-degree of the nodes. The application of the Euler characteristic presented here can be extended to any directed network and provides a new method for the assessment of specific functional features associated with the network topology.

### Keywords

Simplicial homology; Network invariant; Recurrent neural network; Synfire chain; Synaptic plasticity

### Mathematics Subject Classification

Primary 05C10; Secondary 55-04

## Background

The main interest of algebraic topology is to study and understand the functional properties of spatial structures. Algebro-topological constructions have been applied successfully in the field of data science (Carlsson 2009) with the application of the framework of persistent homology, which has proved to be a powerful tool to understand the inner structure of a data set by representing it as a sequence of topological spaces. A network is a set of points satisfying precise properties of connectedness, which can be used to define a class of topological spaces. Network theory aims to understand and describe the shape and the structure of networks, and the application of the tools developed within the framework of algebraic topology can provide new insights into network properties in several research fields.

The directed clique complex is a rigorous way to encode the topological features of a network in the mathematical framework of a simplicial complex, allowing the construction of a class of invariants which have only recently been applied for the first time in the context of network theory (Giusti et al. 2015; Hess 2015). Active nodes are those nodes whose state depends on a set of precise rules determined by the network topology and dynamics. In a highly interconnected network of such nodes, the activity of each node is necessarily related to the combined activity of the afferent nodes transmitted by the connecting edges. Due to the presence of reciprocal connections between certain nodes, re-entrant activity occurs within such a network. Hence, selected pathways through the network may emerge because of dynamical processes that may produce activity-dependent connection pruning. The overall goal of these studies is to understand the properties of a network given the topology described by its link structure.

Neuronal networks are complex systems characterized by coupled nonlinear dynamics. Their study is a long-standing scientific program in mathematics and physics (Abarbanel et al. 1996; Amit 1992; Freeman 1994; Guckenheimer and Holmes 1983). In general, the synchronization of two systems means that their time evolution is periodic, with the same period and, perhaps, the same phase (Malagarriga et al. 2015). This notion of synchronization is not sufficient in a context where the systems are excited by non-periodic signals representing their complex environment. Synchronization of chaotic systems has been discovered (Afraimovich et al. 1986; Fujisaka and Yamada 1983; Pikovsky 1984) and has since become an important research topic in mathematics (Ashwin et al. 1994), physics (Ott and Sommerer 1994) and engineering (Chen 1999). In interconnected cell assemblies embedded in a recurrent neural network, some ordered sequences of intervals within spike trains of individual neurons, and across spike trains recorded from different neurons, will recur whenever an identical stimulus is presented. Such recurring, ordered, and precise interspike interval relationships are referred to as “preferred firing sequences”. One such example can be represented by brain circuits shaped by developmental and learning processes (Edelman 1993). The application of tools from algebraic topology to the study of these systems and networks will be of great use for determining deterministic chaotic behavior in experimental data and for developing biologically relevant neural network models that do not wipe out temporal information (Babloyantz et al. 1985; Celletti and Villa 1996; Celletti et al. 1997; Mpitsos et al. 1988; Rapp et al. 1986).

In the current study we introduce a mathematical object, called the directed clique complex, encoding the link structure of networks in which the edges (or links) have a given orientation. This object is a simplicial complex that can be studied with the techniques of algebraic topology to obtain invariants such as the Euler characteristic and the Betti numbers. We propose general constructions valid for any directed network, but we present an application to evolvable Boolean recurrent neural networks with convergent/divergent layered structure (Abeles 1991) with an embedded dynamics of synaptic plasticity. The Euler characteristic, which is defined by the network connectivity, is computed during the network evolution. We show evidence that this topological invariant predicts how the network will evolve under the effect of the pruning dynamics. Although this is just a toy example of the dynamics observed in biological neuronal networks, we suggest that algebraic topology can be used to investigate the properties of more refined biologically inspired models and their temporal patterns. We show also that, for a directed network, the Euler characteristic computed on a sequence of networks generated by filtering its nodes by in- and out-degree can provide a general metric helpful for network classification. Hence, the topological invariants computed for each network in the filtration give a sequence of numbers that may be interpreted as a fingerprint of the complete network.

## Results and discussion

### Dynamics of artificial neural networks

We considered a directed graph representing a simplified model of a feedforward neural network with convergent/divergent layered structure and a few embedded recurrent connections. In this model, the nodes represent individual neurons and the connections between them are oriented edges with a weight given by the connection strength. We computed the Euler characteristic and its variation during the evolution of such networks, both for the entirety of the nodes in the network and for the sub-network induced by the nodes that are active at each time step, in order to detect how the structure changes as the network evolves. The Betti numbers and their variation during the network evolution were also computed, but we do not discuss this topological measurement further. Notice that activation of the networks follows a very simple dynamics. The nodes of the input layer are activated at regular time intervals, which is not meant to be biologically realistic but has been adopted for the simplicity of the model. It was shown elsewhere (Iglesias and Villa 2007, 2008) that a stable activity level in a network like this could be achieved only with an appropriate balance of excitatory and inhibitory connections. The networks studied here are oversimplified and formed only by excitatory nodes. We selected the ranges of the parameters such that the simulations maintained a level of activity for 100 steps with neither saturation nor extinction of the activity, thus suggesting that connection pruning enabled topological changes. However, notice that even within selected areas of the parameter space of the simulations we observed that the activity level tended either to increase towards paroxysmal activation (i.e., saturation) or to decrease towards complete inactivation (i.e., extinction).

The type of dynamics underlying the neural network evolution and the structure of the directed clique complex of that network at the very beginning of the simulation (i.e. before the occurrence of connection pruning) were correlated. This was possibly the most unexpected and significant result regarding the dynamics of artificial neural networks. In the simulations leading to the activation of at least 5 % of the nodes, the average number of active units was correlated with the number of simplices, in the directed clique complex, of dimension two (Pearson correlation coefficient \(r_{(370)} = 0.560\), \(p < 0.001\)) and dimension three \((r_{(370)} = 0.445, \; p < 0.001)\). This may appear surprising because the topology of the directed clique complex of a network *a priori* ignores any dynamics of pruning, the evolution of the network topology and how this is going to influence the activation level. However, the rationale is that directed cliques are fully connected sub-networks, i.e. sub-networks with an initial and a final node that are connected in the highest possible number of ways. Hence, a high number of directed cliques leads to a higher chance of propagation of the activation through the network. Notice that it is essential to consider here only *directed* cliques, because the activation of a node occurs only if the connected upstream nodes are activated. Activation is indeed a phenomenon that propagates in a directional way prescribed by the connectivity pattern. The invariant presented here should also be considered as a complementary measurement of complexity for the assessment of the computational power of Boolean recurrent neural networks (Cabessa and Villa 2014).

### Network filtrations and invariants

The in- and out-degrees of nodes are important factors in shaping the network topology. We applied our topological construction to devise invariants for any directed network. We compute the Euler characteristic on a sequence of sub-networks defined by the *directed degeneracy* of their nodes, in other words the in- and out-degrees of the vertices, as described in detail in the Methods section. Two separate sequences are defined because in- and out-degrees represent different aspects of the network connectivity. The sequences of sub-networks of a given network form a *filtration* in the sense that each network appearing in the sequence is contained in all those that follow. The values of the Euler characteristic for each network of the sequences give rise to two separate sequences of integers that measure the shape and the topology of the complete network. We propose this invariant to describe general directed networks.

A distinct topological invariant defined for non-directed networks, referred to as the Betti curves, was recently proposed by Giusti et al. (2015), following the idea of filtering the network by the weight of the connections. This invariant appears well suited for continuously distributed connection weights, for instance when the weights are related to the distances of points and represent a symmetric relation between nodes. In the case of directed networks with modifiable values of connection weights restricted to a limited set (Iglesias et al. 2005), the network dynamics evolves towards a bimodal distribution of the connection weights, densely grouped near the minimum and maximum values of the range. This is a general behaviour in neuronal networks (Song et al. 2000). In this kind of network, filtering by the connection weights following Giusti et al. (2015) is not suitable, because most connections would have the same weight. Our approach for directed networks is to filter the connections by the in- and out-degrees separately, in order to measure how the nodes of each degree shape the topology of the network. It is important to point out that other methods are based on spectral properties of the adjacency matrix and therefore only make sense if all the transformations of the network data are linear (Brouwer and Haemers 2012).

The results presented here open the way to further applications of the topological invariants. The analytic study of the values of the Euler characteristic in the filtrations framework can provide a metric of similarity between networks which depends only on their internal topology, thus allowing the application of clustering algorithms for the detection of distinct functional classes of networks. The study of complex brain networks in clinical neuroscience offers a particularly promising field of application of the new topological invariant, as suggested by other studies using different techniques to the same aim (Fallani et al. 2014; Stam et al. 2007). In particular we foresee extending this application to the study of brain dynamics during decision-making tasks and neuroeconomic games (Fiori et al. 2013; Guy et al. 2016). Another promising application is the study of the temporal dynamics of neural activity. The finding of precise and repeating firing sequences in experimental and simulated spike train recordings has been discussed with respect to the existence of synfire chains (Abeles 1982, 1991) or chaotic attractors (Celletti and Villa 1996; Villa et al. 1998). In both cases the underlying network structure is assumed to be a directed graph. This hypothesis, together with the assumption of spike-timing modifiable connections, provides a rational basis for the application of topological invariants towards understanding the association between topological structures and neural coding.

## Conclusions

We have developed new invariants for directed networks using techniques derived from algebraic topology, showing that this subject provides a very useful set of tools for understanding networks and their functional and dynamical properties. Simple invariants such as the Euler characteristic can already detect changes in the network topology. The promising results shown here are a contribution to the application of algebraic topology to the study of more complex networks and their dynamics, including biologically inspired models of neuronal networks. We believe that the framework presented here may open the way to many computational applications to unveil structures in data and network sciences.

## Methods

### Graphs and clique complexes

An *abstract oriented simplicial complex* *K* (Hatcher 2002) is the data of a set \(K_0\) of vertices and sets \(K_n\) of lists \(\sigma = (x_0,\ldots , x_n)\) of elements of \(K_0\) (called *n-simplices*), for \(n \ge 1\), with the property that, if \(\sigma = (x_0, \ldots , x_n)\) belongs to \(K_n\), then any sublist \((x_{i_0}, \ldots , x_{i_k})\) of \(\sigma\) belongs to \(K_k\). The sublists of \(\sigma\) are called *faces*.

Let *G* be a directed graph with vertex set *V* and edge set *E*, with no self-loops and no double edges, and denote with *N* the cardinality of *V*. Associated to *G*, we can construct its *(directed) clique complex* *K*(*G*), which is the directed simplicial complex given by \(K(G)_0 = V\) and such that each *n*-simplex contained in \(K(G)_n\) is a directed \((n+1)\)-clique, that is, a completely connected directed subgraph with \(n+1\) vertices. Notice that an *n*-simplex is thought of as an object of dimension *n* and consists of \(n+1\) vertices.
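
The construction can be sketched in Python by brute-force enumeration over vertex orderings; this is exponential and only meant to illustrate the definition on small graphs (the function name is ours; the actual simulations used an adapted igraph routine, as described below). A directed *n*-simplex is taken to be an ordered tuple \((x_0, \ldots, x_n)\) with an edge \(x_i \rightarrow x_j\) for every \(i < j\):

```python
from itertools import combinations, permutations

def directed_simplices(vertices, edges, max_dim):
    """Enumerate the directed cliques of a digraph up to dimension max_dim.

    A directed n-simplex is an ordered tuple (x_0, ..., x_n) of vertices
    with an edge x_i -> x_j for every i < j (no self-loops, no double edges).
    Returns a dict mapping each dimension n to the list of its n-simplices.
    """
    E = set(edges)
    K = {0: [(v,) for v in vertices]}
    for n in range(1, max_dim + 1):
        K[n] = []
        for subset in combinations(vertices, n + 1):
            for order in permutations(subset):
                if all((order[i], order[j]) in E
                       for i in range(n + 1) for j in range(i + 1, n + 1)):
                    K[n].append(order)
    return K
```

For the triangle with edges 0→1, 0→2, 1→2, this finds a single directed 2-simplex, (0, 1, 2), since that is the only ordering compatible with all three edges.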

### The topological invariants

The *Euler characteristic* of the directed clique complex *K*(*G*) of *G* is the integer defined by \(\chi (K(G)) = \sum _{n=0}^{N} (-1)^n \, \#K(G)_n\), where \(\#K(G)_n\) denotes the number of *n*-simplices of \(K(G)\). We consider, for each *n*, the vector space \(\mathbf {Z}/2\langle K(G)_n\rangle\) given by the linear combinations of *n*-simplices with coefficients in the field of two elements \(\mathbf {Z}/2\). We can define the *boundary maps* \(\partial _n :\mathbf {Z}/2\langle K(G)_n\rangle \rightarrow \mathbf {Z}/2\langle K(G)_{n-1}\rangle\), which are given by mapping each simplex to the sum of its faces. Then we can define the quantities \(\beta _n(K(G)) = \dim \ker \partial _n - \dim {\text {im}}\, \partial _{n+1}\), that is, the difference between the dimension of the space of linear combinations of *n*-simplices whose boundary is zero and the dimension of the space of boundaries of \((n+1)\)-simplices. It can be checked that, if we apply a boundary map twice on any linear combination of simplices, we get zero, and so the quantities \(\beta _n(K(G))\) are always non-negative integers. These classically known numbers take the name of *Betti numbers* and, for each *n*, the *n*-th Betti number \(\beta _n(K(G))\) corresponds to the dimension of the *n*-th homology space (with \(\mathbf {Z}/2\)-coefficients) of the clique complex *K*(*G*) of *G*.

The intuitive sense of this construction is to count the “holes” that remain in the graph after we have filled all the directed cliques. In particular, the *n*-th Betti number counts the *n*-dimensional holes. One can also see that \(\beta _0\) counts the number of connected components of the graph. A classical result in topology shows a connection between the Euler characteristic and the Betti numbers, expressed by the identity: \(\chi (K(G)) = \sum _{n=0}^N (-1)^n \beta _n(K(G))\), which gives another way of computing the Euler characteristic.
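
Both invariants can be computed directly from the simplex counts and the \(\mathbf{Z}/2\) boundary matrices. The following is a minimal sketch (the function names are ours, not the paper's software), assuming the complex is stored as a dict mapping each dimension *n* to the list of its *n*-simplices as vertex tuples:

```python
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over the two-element field Z/2."""
    M = (M % 2).astype(np.int64)
    rank = 0
    rows, cols = M.shape
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]      # move the pivot row up
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] = (M[r] + M[rank]) % 2      # eliminate column c
        rank += 1
    return rank

def euler_characteristic(K):
    """chi(K) = sum_n (-1)^n (number of n-simplices)."""
    return sum((-1) ** n * len(s) for n, s in K.items())

def betti_numbers(K, max_dim):
    """beta_n = dim ker d_n - dim im d_{n+1}, with Z/2 coefficients."""
    index = {n: {s: i for i, s in enumerate(K[n])} for n in K}

    def boundary_rank(n):
        # rank of d_n : C_n -> C_{n-1}, mapping a simplex to the sum of its faces
        if n not in K or n - 1 not in K or not K[n]:
            return 0
        D = np.zeros((len(K[n - 1]), len(K[n])), dtype=np.int64)
        for j, s in enumerate(K[n]):
            for k in range(len(s)):
                D[index[n - 1][s[:k] + s[k + 1:]], j] += 1
        return gf2_rank(D)

    # dim ker d_n = dim C_n - rank d_n, hence the formula below
    return [len(K.get(n, [])) - boundary_rank(n) - boundary_rank(n + 1)
            for n in range(max_dim + 1)]
```

On the directed clique complex of the triangle 0→1, 0→2, 1→2 this gives \(\chi = 3 - 3 + 1 = 1\) and Betti numbers (1, 0, 0), consistent with the identity \(\chi = \sum_n (-1)^n \beta_n\).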

Notice that the construction of the directed clique complex of a given network *G* does not involve any choice, and therefore, since the Betti numbers and the Euler characteristic of a simplicial complex are well-defined quantities for a simplicial complex (Hatcher 2002), our constructions produce quantities that are well-defined for the network *G*, and we shall refer to them simply as the Euler characteristic and the Betti numbers of *G*.

### Boolean recurrent artificial neural networks

#### Network structure and dynamics

The artificial recurrent neural networks consist of a finite number of Boolean neurons organized in layers with a convergent/divergent connection structure (Abeles 1991). The networks are composed of 50 layers, each with 10 Boolean neurons. The first layer is the input layer and all its 10 neurons are activated at the same time at a fixed frequency of 0.1, i.e. every 10 time steps of the history. Each neuron in a layer is connected to a number *f* of target neurons, chosen uniformly at random, belonging to the next downstream layer. The networks include *recurrence* in their structure, meaning that a small fraction *g* of the neurons appears in two different layers. This means that a neuron *k* that is also identified as neuron *l* is characterized by the union of the input connections of neurons *k* and *l*, as well as by the union of their respective efferent projections.

The state \(S_i(t)\) of a neuron *i* takes values 0 (inactive) or 1 (active), and all Boolean neurons are set inactive at the beginning of the simulation. The state \(S_i(t)\) is a function of its activation variable \(V_i(t)\) and a threshold \(\theta\), such that \(S_i(t) = \mathcal {H}(V_i(t)-\theta )\), where \(\mathcal {H}\) is the Heaviside function: \(\mathcal {H}(x)=0\) for \(x<0\) and \(\mathcal {H}(x)=1\) for \(x\ge 0\). At each time step, the value \(V_i(t)\) of the activation variable of the *i*th neuron is calculated as \(V_i(t+1) = \sum _{j} S_j(t) w_{ji}(t)\), where \(w_{ji}(t)\) are the weights of the directed connections from any \(j\)th neuron projecting to neuron *i*. The connection weights can only take four values, i.e. \(w_1 = 0.1\), \(w_2 = 0.2\), \(w_3 = 0.4\), \(w_4 = 0.8\). At the beginning of the simulations all connection weights are randomly uniformly distributed among the four possible values. The states of all the neurons are updated synchronously at each time step.
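
The synchronous update rule can be sketched with NumPy (a sketch under the stated conventions; the function name `step` is ours):

```python
import numpy as np

def step(S, W, theta):
    """One synchronous update of the Boolean network state.

    S: binary state vector S_j(t); W[j, i]: weight w_ji of the connection
    from neuron j to neuron i (0 where no connection exists); theta: the
    activation threshold. Implements S_i(t+1) = H(V_i(t+1) - theta) with
    V_i(t+1) = sum_j S_j(t) w_ji and H the Heaviside function (H(0) = 1).
    """
    V = S @ W                        # activation variables V_i(t+1)
    return (V >= theta).astype(int)  # Heaviside threshold
```

With a single connection of weight \(w_4 = 0.8\) from neuron 0 to neuron 1 and \(\theta = 0.8\), an active neuron 0 activates neuron 1 at the next step.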

The network dynamics implements activity-dependent plasticity of the connection weights. Whenever the activation of a connection does not lead to the activation of its target neuron during an interval lasting *a* time steps, its weight is weakened to the level immediately below the current one. Whenever the weight of a connection reaches the lowest level, the connection is removed from the network (Iglesias et al. 2005). The pruning of the connections thus selects the most significant ones and changes the topology of the network. Similarly, whenever a connection with a weight \(w_m\) is activated for at least \(m+1\) consecutive time steps, the connection weight is strengthened to the level immediately above the current one. Hence, the parameter space of our simulations was defined by four parameters: the number *f* of layer-to-layer downstream connections in the range 3–10 by steps of 1, the small fraction *g* of the neurons appearing in two different layers in the range 1–3 % by steps of 1 %, the threshold of activation \(\theta\) in the range 0.8–1.4 by steps of 0.1, and the interval *a* of the weakening dynamics of the connections in the range 7–9 by steps of 1.
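
A minimal sketch of this plasticity rule for a single connection (the counters for ineffective and consecutive activations are bookkeeping we assume around the stated rule; the function name is hypothetical):

```python
WEIGHT_LEVELS = [0.1, 0.2, 0.4, 0.8]   # the admissible values w_1..w_4

def update_weight(level, ineffective_steps, consecutive_active, a):
    """Activity-dependent plasticity for one connection (sketch).

    level: index 0..3 into WEIGHT_LEVELS (index m-1 holds weight w_m),
    or None once the connection has been pruned.
    ineffective_steps: steps the connection fired without activating its target.
    consecutive_active: consecutive steps the connection has been activated.
    a: the weakening interval. Returns the new level (None = pruned).
    """
    if level is None:                  # already pruned: stays removed
        return None
    if ineffective_steps >= a:         # weaken; prune at the lowest level
        return None if level == 0 else level - 1
    if consecutive_active >= (level + 1) + 1:   # w_m needs m+1 activations
        return min(level + 1, len(WEIGHT_LEVELS) - 1)
    return level
```

For instance, a connection at the lowest level \(w_1\) that stays ineffective for *a* steps is pruned, while a \(w_3\) connection activated for 4 consecutive steps is strengthened to \(w_4\).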

#### Implementation of the simulations

The simulation software was implemented from scratch in Python. The network evolved with the dynamics explained above and the program computed the directed clique complex at each change of the network topology. For the entire network, the directed clique complex was computed each time the connectivity changed because of pruning. For the sub-network of the active nodes, the computation was carried out at each step of the simulation.

The computed directed clique complexes were used to compute the Euler characteristic both for the complexes representing the entire network and for the sub-complexes of the active nodes. To compute the directed clique complex of a network we used the implementation of the algorithm of Tsukiyama et al. (1977) in the igraph Python package (Csardi and Nepusz 2006), adapted to find directed cliques. The experiments were run in parallel on several CPUs using the tool GNU Parallel (Tange 2011).

### Network filtrations

#### Network structures

Many essential topological features of a network are determined by the distribution of edges over its graph. Different types of distributions result in different types of networks. For instance, pure random networks (RN) are formed assuming that edges in the network are independent of each other and they are equally likely to occur (Erdös and Rényi 1959; Gilbert 1959). For RN we have used the algorithm implemented in the Python package ‘NetworkX’ (https://networkx.github.io/) (Hagberg et al. 2008) with the function ‘erdos_renyi_graph’ with parameters number of nodes \(n=40\) and the probability for edge creation \(p=0.2\).

These simple construction assumptions are generally not followed in networks obtained experimentally from ecological or gene systems, telecommunication networks or the Internet which are characterized by short average path lengths and high clustering, resulting in the so called small-world topology (SW) (Newman 2000; Watts and Strogatz 1998). For SW we used the same Python package ‘NetworkX’ (Hagberg et al. 2008) with the function ‘newman_watts_strogatz_graph’ with parameters number of nodes \(n=40\) and the number of connected neighbours in ring topology \(k=20\) and the probability for adding a new edge \(p=0.4\).

Other real-world networks such as brain, social networks, power grids and transportation networks exhibit topologies where more connected nodes, hubs, are more likely to receive new edges. The presence of these hubs and a power law distribution for the degree of the nodes defines scale-free networks (SF) (Barabási and Albert 1999). For SF we used the same Python package ‘NetworkX’ (Hagberg et al. 2008) with the function ‘barabasi_albert_graph’ with parameters number of nodes \(n=40\) and the number of edges to attach from a new node \(m=10\).
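
For reference, the three generators can be called exactly as described; the fixed seed is our addition, only for reproducibility:

```python
import networkx as nx

# Random (RN), small-world (SW) and scale-free (SF) networks with the
# parameters stated in the text.
rn = nx.erdos_renyi_graph(n=40, p=0.2, seed=42)
sw = nx.newman_watts_strogatz_graph(n=40, k=20, p=0.4, seed=42)
sf = nx.barabasi_albert_graph(n=40, m=10, seed=42)
```

All three graphs have 40 nodes; they differ in their degree distributions, which is what the degree filtration below is designed to detect.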

#### Network degree invariant

Given a directed network *G*, we define two filtrations by sub-networks (ordered sequences of networks in which each network is a sub-network of all the following ones) using the in- and out-degree of nodes. Let *ODF* (*G*) be the out-degree filtration of *G*: the *i*-th network \(ODF(G)_i\) in this filtration is the sub-network of *G* induced by the vertices having out-degree at least *i* and all the target nodes of their outgoing connections. In the same way we define the in-degree filtration *IDF* (*G*): the *i*-th network \(IDF(G)_i\) in this filtration is the sub-network of *G* induced by the vertices having in-degree at least *i* and all the source nodes of their incoming connections.
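
A sketch of the out-degree filtration with NetworkX (the function name is ours; the in-degree filtration is analogous, with `in_degree` and `predecessors` in place of `out_degree` and `successors`):

```python
import networkx as nx

def out_degree_filtration(G):
    """Out-degree filtration ODF(G) of a directed network (sketch).

    ODF(G)_i is the sub-network of G induced by the vertices of out-degree
    at least i together with the target nodes of their outgoing connections.
    Returns a dict mapping each level i to the corresponding sub-network.
    """
    max_deg = max(d for _, d in G.out_degree())
    odf = {}
    for i in range(max_deg + 1):
        core = {v for v, d in G.out_degree() if d >= i}
        targets = {w for v in core for w in G.successors(v)}
        odf[i] = G.subgraph(core | targets).copy()
    return odf
```

Computing the Euler characteristic of each level of the filtration then yields the sequence of integers used as the network degree invariant.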

We computed the Euler characteristic for each network of the two filtrations, obtaining two sequences of integers, which are plotted to display a measure of the network topology, as a function of the degree levels of the filtration, normalized by the maximum degree present in the network. For example, let us consider the case illustrated in Fig. 2b: one of the random networks with \(n=40\) vertices that we have generated with a parameter \(p=0.20\), as described above, had a maximum out-degree of its vertices equal to 19. Therefore all the filtration levels have been divided by this value to normalize them (between 0 and 1).

For each network family (SF, RN, SW), we generated \(N=50\) distinct networks with different seeds for the random numbers generator (the seeds were uniformly distributed integers in the interval [1, 10000]). We calculated the network degree filtration invariant sequences for in- and out-degree, which were then averaged for each network family and represented in Fig. 2 with the 95 % pointwise confidence bands.

## Declarations

### Authors’ contributions

Conceived and designed the experiments: PM, AEPV. Developed the mathematical construction, implemented the simulation, analyzed the results and drafted the manuscript: PM, AEPV. Both authors read and approved the final manuscript.

### Acknowledgements

This work was partially supported by the Swiss National Science Foundation Grant CR13I1-138032. We wish to thank Prof. Kathryn Hess, Prof. Ran Levi and Paweł Dłotko for suggesting the idea of using oriented cliques in order to define simplices in a directed graph, as it was shown in the talk given at the SIAM DS15 Conference in Snowbird (USA) in May 2015.

### Competing interests

The authors declare that they have no competing interests.

**Open Access**This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

## References

- Abarbanel HD, Rabinovich MI, Selverston A, Bazhenov MV, Huerta R, Sushchik MM, Rubchinskii LL (1996) Synchronization in neural assemblies. Physics-Uspekhi 39:1–26
- Abeles M (1982) Local cortical circuits. An electrophysiological study, studies of brain function, vol 6. Springer, Berlin
- Abeles M (1991) Corticonics: neural circuits of the cerebral cortex, 1st edn. Cambridge University Press, Cambridge
- Afraimovich V, Veritchev N, Rabinovich M (1986) Stochastically synchronized oscillators in dissipative systems. Radiophys Quantum Electron 29:795
- Amit DJ (1992) Modeling brain function: the world of attractor neural networks. Cambridge University Press, Cambridge
- Ashwin P, Buescu J, Stewart I (1994) Bubbling of attractors and synchronization of chaotic oscillators. Phys Lett A 193:126–139
- Babloyantz A, Nicolis G, Salazar M (1985) Evidence for chaotic dynamics of brain activity during the sleep cycle. Phys Lett A 111:152–156
- Barabási AL, Albert R (1999) Emergence of scaling in random networks. Science 286(5439):509–512. doi:10.1126/science.286.5439.509
- Brouwer AE, Haemers WH (2012) Spectra of graphs. Universitext, Springer
- Cabessa J, Villa AEP (2014) An attractor-based complexity measurement for Boolean recurrent neural networks. PLoS One 9(4):e94204. doi:10.1371/journal.pone.0094204
- Carlsson G (2009) Topology and data. Bull Am Math Soc 46(2):255–308
- Celletti A, Villa AEP (1996) Low dimensional chaotic attractors in the rat brain. Biol Cybern 74:387–394
- Celletti A, Lorenzana VMB, Villa AEP (1997) Correlation dimension for paired discrete time series. J Stat Phys 89:877–884
- Chen G (1999) Controlling chaos and bifurcations in engineering systems. CRC Press, Boca Raton
- Csardi G, Nepusz T (2006) The igraph software package for complex network research. Int J Complex Syst 1695. http://igraph.org
- Edelman GM (1993) Topobiology: an introduction to molecular embryology. Basic Books, New York
- Erdös P, Rényi A (1959) On random graphs, I. Publ Math 6:290–297
- Fallani FDV, Richiardi J, Chavez M, Achard S (2014) Graph analysis of functional brain networks: practical issues in translational neuroscience. Phil Trans R Soc B 369(1653):20130521
- Fiori M, Lintas A, Mesrobian S, Villa AEP (2013) Effect of emotion and personality on deviation from purely rational decision-making. In: Guy VT, Karny M, Wolpert D (eds) Decision making and imperfection. Springer, Berlin, pp 129–161
- Freeman WJ (1994) Neural networks and chaos. J Theor Biol 171:13–18
- Fujisaka H, Yamada T (1983) Stability theory of synchronized motion in coupled oscillator systems. Progr Theor Phys 69:32
- Gilbert EN (1959) Random graphs. Ann Math Stat 30(4):1141–1144. doi:10.1214/aoms/1177706098
- Giusti C, Pastalkova E, Curto C, Itskov V (2015) Clique topology reveals intrinsic geometric structure in neural correlations. ArXiv e-prints 1502:06172
- Guckenheimer J, Holmes P (1983) Nonlinear oscillations, dynamical systems, and bifurcations of vector fields. Applied mathematical sciences. Springer, New York
- Guy TV, Kárný M, Lintas A, Villa AE (2016) Theoretical models of decision-making in the Ultimatum game: fairness vs. reason. In: Wang R, Pan X (eds) Advances in cognitive neurodynamics (V): Proceedings of the fifth international conference on cognitive neurodynamics-2015. Springer, Singapore, pp 185–191
- Hagberg AA, Schult DA, Swart PJ (2008) Exploring network structure, dynamics, and function using NetworkX. In: Proceedings of the 7th Python in science conference (SciPy2008). Pasadena, CA USA, pp 11–15
- Hatcher A (2002) Algebraic topology. Cambridge University Press, Cambridge
- Hess K (2015) An algebro-topological perspective on hierarchical modularity of networks. In: SIAM conference on applications of dynamical systems DS15, Snowbird, USA, p 126
- Iglesias J, Villa AEP (2007) Effect of stimulus-driven pruning on the detection of spatiotemporal patterns of activity in large neural networks. Biosystems 89(1–3, SI):287–293. doi:10.1016/j.biosystems.2006.05.020
- Iglesias J, Villa AEP (2008) Emergence of preferred firing sequences in large spiking neural networks during simulated neuronal development. Int J Neural Syst 18(4):267–277. doi:10.1142/S0129065708001580
- Iglesias J, Eriksson J, Grize F, Tomassini M, Villa AE (2005) Dynamics of pruning in simulated large-scale spiking neural networks. Biosystems 79(1):11–20
- Malagarriga D, Villa A, Garcia-Ojalvo J, Pons AJ, Hilgetag CC (2015) Mesoscopic segregation of excitation and inhibition in a brain network model. PLoS Comput Biol 11:e1004007
- Mpitsos GJ, Burton RMJ, Creech HC, Soinila SO (1988) Evidence for chaos in spike trains of neurons that generate rhythmic motor patterns. Brain Res Bull 21:529–538
- Newman ME (2000) Models of the small world. J Stat Phys 101(3):819–841
- Ott E, Sommerer JC (1994) Blowout bifurcations: the occurrence of riddled basins. Phys Lett A 188:39–47
- Pikovsky AS (1984) On the interaction of strange attractors. Z Physik B 55:149
- Rapp PE, Zimmerman ID, Albano AM, deGuzman GC, Greenbaum NN, Bashore TR (1986) Experimental studies of chaotic neural behavior: cellular activity and electroencephalographic signal. In: Othmer HG (ed) Nonlinear oscillations in biology and chemistry. Springer, Berlin, pp 175–805
- Song S, Miller KD, Abbott LF (2000) Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nat Neurosci 3(9):919–926
- Stam CJ, Jones BF, Nolte G, Breakspear M, Scheltens P (2007) Small-world networks and functional connectivity in Alzheimer’s disease. Cereb Cortex 17(1):92–99. doi:10.1093/cercor/bhj127
- Tange O (2011) GNU Parallel—the command-line power tool. ;login: The USENIX Magazine 36(1):42–47. doi:10.5281/zenodo.16303. http://www.gnu.org/s/parallel
- Tsukiyama S, Ide M, Ariyoshi H, Shirakawa I (1977) A new algorithm for generating all the maximal independent sets. SIAM J Comput 6(3):505–517
- Villa AEP, Tetko IV, Celletti A, Riehle A (1998) Chaotic dynamics in the primate motor cortex depend on motor preparation in a reaction-time task. Curr Psychol Cognit 17:763–780
- Watts DJ, Strogatz SH (1998) Collective dynamics of small-world networks. Nature 393(6684):440–442. doi:10.1038/30918