Vertex collocation profiles: theory, computation, and results

We describe the vertex collocation profile (VCP) concept. VCPs provide rich information about the surrounding local structure of embedded vertex pairs. VCP analysis offers a new tool for researchers and domain experts to understand the underlying growth mechanisms in their networks and to analyze link formation mechanisms in the appropriate sociological, biological, physical, or other context. The same resolution that gives the VCP method its analytical power also enables it to perform well at link prediction. We first develop the theory, mathematics, and algorithms underlying VCPs. We provide timing results to demonstrate that the algorithms scale well even for large networks. Then we demonstrate VCP methods performing link prediction competitively with unsupervised and supervised methods across different network families. Unlike many analytical tools, VCPs inherently generalize to multirelational data, which provides them with unique power in complex modeling tasks. To demonstrate this, we apply the VCP method to longitudinal networks by encoding temporally resolved information into different relations. In this way, the transitions between VCP elements represent temporal evolutionary patterns in the longitudinal network data. Results show that VCPs can use this additional data, typically challenging to employ, to improve predictive model accuracies. We conclude with our perspectives on the VCP method and its future in network science, particularly link prediction.


Introduction
A vertex collocation profile (VCP) is a vector describing all isomorphically distinct collocations of two vertices within all possible isomorphism classes of three or more vertices. Just like ordinary isomorphism concepts, VCPs generalize naturally to directed, multirelational structures. VCPs are superficially similar to both local triangle counting (Becchetti et al. 2010) and motif analysis (Milo et al. 2002), but the theory and methods presented here are distinguished even at a superficial level by non-trivial generalization to structures encompassing direction, multiple relations, and any number of vertices. In detailed study, VCPs are strikingly differentiated by their incorporation of isomorphic equivalence and vertex pair collocation.
Among other tasks, VCPs are particularly suited to link prediction, since links are described by two vertices of interest. Link prediction is the task of inferring links in a graph G_{t+1} based on the observation of a graph G_t. It may be that t+1 follows t in time, or it may be that t+1 represents some other modification of the graph, such as the inclusion of additional links from expensive experiments (Getoor and Diehl 2005). Link prediction stated in this manner is a binary classification task in which the positive class comprises links that form and the negative class comprises links that do not form. Many existing link prediction models compress a selection of basic information in theoretically or empirically guided ways. By contrast, VCPs comprehensively represent the local topological information describing the embedding of the source and target vertices. VCPs also apply naturally to multirelational networks and can thereby encode a variety of additional information, including even continuous quantities, such as edge weights, via quantization. We demonstrate the effectiveness of VCPs in link prediction with purely structural information. Then we show how VCPs can incorporate temporal information, notoriously difficult to model effectively, to achieve further remarkable increases in performance.

Related work
The fundamental idea of counting structural forms in a network to better understand its properties is not new. Motif analysis (Milo et al. 2002) analyzes the prevalence of structural forms in different types of networks to look for statistical signatures that provide information, such as taxonomic designations, about the network. Counting graphlets, connected induced subgraphs, to which a node is incident (Pržulj 2007) has already been used for a variety of descriptive and analytical purposes in networks. A related method counts partially closed multirelational triads anywhere in a graph and then computes the conditional transition probability of the triad closing with the link type of interest. Unpublished at the time of the original exposition of VCPs, the triad transition matrix (TTM) approach (Juszczyszyn et al. 2011a, 2011b) is similar in that triadic transition probabilities are used to construct a score. VCPs are subtly but critically distinguished from both of these methods in the same manner: they do not employ transition probabilities either directly or indirectly but rather describe the collocation of vertex pairs within subgraph isomorphism classes embedded within the network topology.
The vanilla task of link prediction as we have defined it has a wealth of supporting literature, and it would be impossible to cover it all here. Liben-Nowell and Kleinberg offered a seminal guide to the topic in (Liben-Nowell and Kleinberg 2007). The work most directly related uses structural forms to inform transition probabilities (Juszczyszyn et al. 2011a, 2011b). VCPs take advantage of the supervised classification framework in (Lichtenwalter et al. 2010), which involves undersampling, bootstrap aggregation, and random forest or random subspace classification algorithms, by substituting the simple feature vector derived from topological analysis with the VCP. There are several other supervised classification frameworks (Al Hasan et al. 2006; Wang et al. 2007) for link prediction that use basic topological characteristics, unsupervised link predictors, node attributes, and other information to construct their feature vectors.
Since we venture to incorporate temporal information into our models, this research niche in link prediction also merits some discussion. In several studies of link prediction, authors have used longitudinal data, a series of events with timestamps of varying resolution that describes a network evolving through time (Liben-Nowell and Kleinberg 2007; Lichtenwalter et al. 2010; Murata and Moriyasu 2007; Scellato et al. 2011; Sharan and Neville 2008), to perform link prediction without actually using the temporal component in their models. One recent work attempts to answer the question of when rather than if a link will form in the future (Sun et al. 2012). Also distinct are works that perform general modeling based on temporal link analysis, which has a broader supporting literature (Amitay et al. 2004; Hill et al. 2006). It is much less common to actually consider the temporal dimension as a factor in constructing link prediction models due to the difficulty of creating representations and models that effectively incorporate time (Hill et al. 2006). (Liben-Nowell and Kleinberg 2007) briefly considers methods of treating temporally delineated periods. The only publications of which we are aware that directly address the problem of predicting links using temporal information are (Acar et al. 2009) and (Qiu et al. 2011). The authors of (Acar et al. 2009) achieve predictions by applying factorizations to third-order tensors, in which the complete link structure is contained in two of the dimensions as it exists at discrete temporal intervals indicated by the third dimension. Alternatively, they describe a method of collapsing the temporal information into a weighted two-dimensional link representation on which they subsequently perform matrix factorizations to achieve predictions. The authors of (Qiu et al. 2011) extend the framework proposed in (Lichtenwalter et al. 2010) to incorporate two compressed features extracted from the longitudinal data, which they term activeness and recency. In (Juszczyszyn et al. 2011a) and (Juszczyszyn et al. 2011b), the authors use the mean value of triad transition matrices from multiple time windows, and thus they do not actually benefit from temporal changes in network dynamics evident in the underlying longitudinal information. To our knowledge, ours is the first work to use multirelational models to encode temporal information for the purpose of performing data-driven predictions with machine learning algorithms. We employ VCPs to accomplish this. By encoding temporal information as multiple edge types in a multirelational graph representation, we can take advantage of the power of VCPs as a sophisticated multirelational modeling technique to incorporate trends in network evolution directly into our models.

Contributions
This work offers several improved and expanded treatments of vertex collocation profiles compared to our work in (Lichtenwalter and Chawla 2012). In addition to a significantly generalized and more formal theoretical treatment, this work provides the following specific areas of new coverage. It generalizes the formulation of vertex collocation profiles to unify handling of undirected multirelational and directed multirelational networks. Simultaneously, it expands upon the original limited treatment of directionality so that the theoretical coverage of directed structures is as complete and rigorous as for undirected structures. We offer a formal definition of graph isomorphism and use it to provide and elucidate an equally formal definition of vertex collocation profiles. Accompanying this theoretical generalization are new source code offerings that allow users almost unrestricted freedom in scaling their computations to large, highly multirelational networks. We overcame some of the complexities of combinatorial explosion faced in highly multirelational networks by opting for memoized dynamic computation of isomorphisms. This has allowed us to apply vertex collocation profiles to networks with hundreds of different relations. We briefly demonstrate how statistical properties of structural presence make this feasible in time and space for most large, sparse networks using a worst-case random graph model. Finally, we select the incorporation of longitudinal information as a challenging problem in which vertex collocation profiles can offer new insights. We choose to view longitudinal data from a multirelational perspective and show that the incorporation of temporality in this way with vertex collocation profiles offers striking benefits in predictive efficacy.
All theory in the paper is distilled into clean, optimized implementations of asymptotically optimal algorithms in C++. The source code is freely and publicly available at https://github.com/rlichtenwalter/vcp. We have also incorporated VCP algorithms into the LPmade (Lichtenwalter and Chawla 2011) link prediction software freely and publicly available on MLOSS at http://mloss.org/software/view/307/.

Organization
We first offer a new, more general formal definition of vertex collocation profiles in Section "Vertex collocation profiles". We describe how graph isomorphism relates to our work, and we provide the relevant mathematics and theory to take the reader from a definition of graph isomorphism to a definition and mathematical implementation of vertex collocation profiles. In Section "Algorithms", we move to a description of two simple and slow but general and easily understood algorithms. Then, we provide faster, more sophisticated algorithms and describe how and why they work. Having laid the foundation for understanding and implementing the techniques, we move to demonstrating that they are highly predictive, and Section "Data" introduces and describes the data sets employed to that effect. In Section "The VCP method and link prediction", we demonstrate the impressive performance of VCP vectors in supervised link prediction. Finally, Section "VCPs and multirelational data" expands our empirical coverage, showing how the inherent multirelational nature of vertex collocation profiles can be used to successfully tackle the challenge of incorporating temporality in predictive models. We offer some concluding remarks in Section "Conclusions".

Vertex collocation profiles
Formally, a vertex collocation profile (VCP), written as VCP^{n,r,d}_{i,j}, is a vector describing the relationship between two vertices, v_i and v_j, in terms of their common membership in all possible subgraphs of n ≥ 3 vertices over r ≥ 1 relations, R, with d ∈ {0 = undirected, 1 = directed} directionality. A VCP element, VCP^{n,r,d}_{i,j}(x), is defined as a distinct embedding of v_i and v_j within a particular isomorphism class of n vertices and is represented by a uniquely addressable cell in the VCP vector. Figure 1 illustrates the first 16 elements of VCP^{3,1,1}_{s,t}. When referring to a VCP vector generally and not with respect to specific vertices, we can write VCP^{n,r,d}, excluding the subscript.
We can encode the connectivity in any multirelational network of r relations and d directionality with 2^{(d+1)r} different types of connections. Notably, we consider total lack of connectivity as itself a type of connection. Undirected unirelational networks exhibit two types of connectivity: existent and nonexistent. Directed unirelational networks exhibit four types of connectivity: nonexistent, outward, inward, and bidirectional. We choose not to consider self-loops in this treatment for clarity, but including them is theoretically and practically trivial.
The cardinality of VCP^{n,r,d} depends upon the number of vertices, n, the number of types of relationship between them, r = |R|, and whether or not directionality is considered, d. The space grows exponentially in the number of vertices, with the base given by the cardinality of the power set of undirected or directed relations. To explore this growth, we define a subgraph universe as the instantiation of G^{n,r,d} with a specific (n, r, d) tuple, so that the universe encompasses the collection of all possible subgraphs with n labeled vertices, r relations, and d ∈ {0 = undirected, 1 = directed} directionality. Importantly, G^{n,r,d} differs from VCP^{n,r,d} in that the former has n labeled vertices and does not collapse isomorphically equivalent structures, whereas the latter has 2 labeled vertices and collapses isomorphisms on unlabeled vertices. It always holds that |G^{n,r,d}| ≥ |VCP^{n,r,d}|.
Equation 1 describes the number of subgraphs in G^{n,r,d}. The base, 2^{(d+1)r}, is the cardinality of the power set of edge types. The exponent, n(n−1)/2, describes the number of vertex pairs across which edges may be present:

|G^{n,r,d}| = (2^{(d+1)r})^{n(n−1)/2}.    (1)

Table 1 explores this growth for small values of n and r with d = 0; it is a matrix of outputs from Equation 1.
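As a concrete check of this growth, Equation 1 can be evaluated directly. The following sketch (the function name `universe_size` is ours) computes |G^{n,r,d}| for parameter values small enough that the result fits in 64 bits:

```cpp
#include <cassert>
#include <cstdint>

// Number of subgraphs in the universe G^{n,r,d} per Equation 1: the base
// 2^((d+1)r) counts connection types per vertex pair, and the exponent
// n(n-1)/2 counts the unordered vertex pairs. Valid only while the total
// shift amount stays below 64.
std::uint64_t universe_size(unsigned n, unsigned r, unsigned d) {
    unsigned bits_per_pair = (d + 1) * r;  // log2 of connection types per pair
    unsigned pairs = n * (n - 1) / 2;      // unordered vertex pairs
    return std::uint64_t{1} << (bits_per_pair * pairs);
}
```

For example, the undirected unirelational universe on three vertices contains 2^3 = 8 subgraphs, and its directed counterpart contains 4^3 = 64.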
The number of subgraphs grows at a rate exponential in n and r, and incorporating directionality exacerbates the growth. The rate of growth of VCP vector cardinalities is much slower due to superlinear increases in the isomorphisms with increasing n, but VCP cardinality nonetheless grows quickly. Fortunately, the most important information is typically located close to the source and target vertices and is easily captured with small n, and many networks have only a few types of relationships.

Isomorphisms
Isomorphic graphs are structurally equivalent graphs that vary in appearance only based on the identities of constituent vertices. More formally, we say that two multirelational graphs G = (V, E_1, ..., E_r) and G' = (V', E'_1, ..., E'_r) are isomorphic if there exists a bijective function f : V → V' such that (v_i, v_j) ∈ E_k ⟺ (f(v_i), f(v_j)) ∈ E'_k for all k ∈ {1, ..., r}, and we can write G ≅ G'. Isomorphism classes are sets of differing size containing all graphs within a given universe G^{n,r,d} that are isomorphic to a particular exemplar G_x ∈ G^{n,r,d}; formally, IC_{G^{n,r,d}}(x) = {G' ∈ G^{n,r,d} : G' ≅ G_x}. VCP elements are closely related to isomorphism classes. In Figure 2, it is impossible to distinguish subgraph 2 from subgraph 4. These subgraphs map to the same VCP element, VCP^{4,1,0}_{s,t}(2), and the count of that element is the sum of the counts of the isomorphic subgraphs.
Isomorphisms that require a mapping between v_s and v_t, for instance subgraph 2 and subgraph 16 in Figure 2, do not share the same VCP element even though they reside within the same isomorphism class. VCP elements ignore isomorphisms that require mapping v_s to v_t because VCP describes the local embedding of these two explicitly identified vertices. In undirected graphs, elements such as VCP^{4,1,0}_{s,t}(4) and VCP^{4,1,0}_{s,t}(14) together supply information regarding symmetry or asymmetry in the density of the embedding of v_s and v_t. The distinction in directed graphs is more obvious and relates to the potential significance of the difference in the local topologies of the source of a new link and its target. Figure 2 shows all the subgraphs pertinent to VCP^{4,1,0} and their corresponding mappings to elements.

Figure 2: Subgraphs 0 through 31 from VCP^{4,1,0}_{s,t} and the mapping of isomorphic subgraphs to VCP elements. Structures 32 through 63 include a connection between the two free vertices and create 20 additional elements.
Thus we can say that VCP^{n,r,d}_{s,t}(x) ⊆ IC_{G^{n,r,d}}(x), and we can say that two multirelational graphs G = (V, E_1, ..., E_r) and G' = (V', E'_1, ..., E'_r) are VCP-isomorphic with respect to s and t if there exists a bijective function f_{s,t} : V → V' such that f_{s,t}(v_s) = v_s, f_{s,t}(v_t) = v_t, and (v_i, v_j) ∈ E_k ⟺ (f_{s,t}(v_i), f_{s,t}(v_j)) ∈ E'_k for all k ∈ {1, ..., r}. Determining the cardinality of VCP^{n,r,d} is related to the complex problem of determining the number of isomorphism classes in a graph of n vertices. In VCP^{3,r,d}, each enumerated subgraph maps uniquely to a VCP element: v_s and v_t are not mappable, and there is only one permutation of the remaining vertex. In VCP^{4,r,0}, there are two mappable vertices and two permutations of those vertices. The number of mappable vertex permutations in VCP^{n,r,d} is described by (n − 2)!, and the interaction of permutations with the appearance of isomorphisms is complex. The derivation of a general formula for |VCP^{n,r,d}| for all n, r, and d is extremely combinatorially involved. We have instead provided software that computes VCP cardinalities and subgraph-to-element mappings for n ∈ [3, ∞), r ∈ [1, ∞), and d ∈ {0, 1}. Table 2 shows the cardinality of all undirected VCPs with fewer than a million elements, but we shall demonstrate how it is possible in practice to perform analysis using VCP vectors with cardinalities many orders of magnitude greater than this.

Addressing
We define a VCP addressing scheme similar to the isomorphism certificate addressing scheme in (Kreher and Stinson 1999). The address space for the subgraphs from which the elements are derived is constructed by assigning r bits to each cell in the subgraph adjacency matrix and defining a significance value for the cell. The value of each edge in the matrix is defined as the index of the lexicographically ordered power set, P(R), corresponding to the ordered set of R relations on that edge. Significance is assigned in increasing lexicographical order above the principal diagonal starting with e_{1,2} and ending with e_{n−1,n}. Figures 1 and 2 demonstrate the indexing scheme for two different values of n and r. For any selection of vertices v_{s=1}, v_{t=2}, v_3, ..., v_n, this addressing scheme will map the resulting multirelational subgraph to an index that exists within a set of indices of isomorphic structures. The procedure is defined and described more formally below. Assume a square matrix A of order n representing adjacency, with subscripts {i, j ∈ N : i ≤ n, j ≤ n}, in which cells indicate members of P(R). In particular, a_{i,j} corresponds to the binary-coded integral value of the bits denoting membership in the lexicographical ordering of R.
Define a square matrix V of order n to represent the value of edges. This matrix will be used to provide a unique address for A through the grand sum of the Hadamard product of A and V. For undirected networks, we define a matrix symmetric across the principal diagonal to represent edges valued in increasing lexicographical order of their incident vertices. Each cell reserves r bits for the 2^r possible relations that may occur on the edge: the cell for the pair with significance index k (counting from 0 in the order e_{1,2}, ..., e_{n−1,n}) takes the value 2^{rk}, mirrored across the diagonal.
The symmetric property of V in the undirected case allows for symmetry in subgraphs with vertex permutations that cross the principal diagonal. For directed graphs, this symmetry is broken in vertex permutations, and this must be reflected within the value matrix. Each cell still reserves r bits to represent the 2^r values in P(R), but an additional r bits are reserved across the principal diagonal to represent P(R) in the opposite direction: for the pair with significance index k, the cell above the diagonal takes the value 2^{2rk} and the mirrored cell below the diagonal takes the value 2^{2rk+r}.
Given the matrix A for multirelational adjacencies and the matrix V for edge values, we can compute the address of any subgraph G_x ∈ G^{n,r,d} as the grand sum of their Hadamard product:

addr(G_x) = Σ_{i=1}^{n} Σ_{j=1}^{n} a_{i,j} · v_{i,j}.

This provides a complete addressing scheme for any number of subgraphs with any number of relations with or without directionality. To construct isomorphism classes within this addressing scheme, we start with a mapping vector K = (1, 2, ..., n) and execute all permutations of K from i = 3 to i = n, which we define as the set K_{3,n}. The exclusion of i = 1 and i = 2 from permutation excludes isomorphisms that result from mappings between v_s and v_t. Each permutation corresponds to an isomorphic subgraph within the isomorphism class containing all of the permutations. To determine addresses for the permutations, we compute

addr_K(G_x) = Σ_{i=1}^{n} Σ_{j=1}^{n} a_{K(i),K(j)} · v_{i,j}.

We can define a canonical subgraph representative within an isomorphism class as the member with the minimal address over K ∈ K_{3,n}. The set of canonical subgraph representatives as defined by the application of K_{3,n} to A in G^{n,r,d} is precisely the set of VCP^{n,r,d} elements. The addresses for elements in VCP^{4,1,0} are provided in Figure 2. Because manual identification of isomorphism classes is error-prone and difficult, especially as the number of subgraphs increases, we have provided a program that outputs the mapping from all subgraph indices to their corresponding element addresses for all VCPs.
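For the undirected unirelational case (r = 1, d = 0), the addressing and canonicalization just described can be sketched as follows. The helper names, the 0-indexed convention (vertices 0 and 1 standing for v_s and v_t), and the choice of the minimum address as the canonical representative are our own assumptions, not the paper's exact conventions:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <set>
#include <vector>

// Significance index of the pair (i,j), i < j, in the lexicographic order
// e_{1,2}, ..., e_{n-1,n} (0-indexed here).
static int pair_index(int n, int i, int j) {
    int k = 0;
    for (int a = 0; a < n; ++a)
        for (int b = a + 1; b < n; ++b) {
            if (a == i && b == j) return k;
            ++k;
        }
    return -1;
}

// Address of the subgraph under a vertex relabeling perm: the grand sum of the
// Hadamard product of the permuted 0/1 adjacency matrix and the value matrix
// whose cell of significance k holds 2^k.
std::uint64_t address(const std::vector<std::vector<int>>& adj,
                      const std::vector<int>& perm) {
    int n = static_cast<int>(adj.size());
    std::uint64_t addr = 0;
    for (int i = 0; i < n; ++i)
        for (int j = i + 1; j < n; ++j)
            if (adj[perm[i]][perm[j]])
                addr |= std::uint64_t{1} << pair_index(n, i, j);
    return addr;
}

// Canonical representative: minimum address over permutations of the free
// vertices 2..n-1 (the set K_{3,n} of the text), holding v_s and v_t fixed.
std::uint64_t canonical_address(const std::vector<std::vector<int>>& adj) {
    int n = static_cast<int>(adj.size());
    std::vector<int> perm(n);
    for (int i = 0; i < n; ++i) perm[i] = i;
    std::uint64_t best = address(adj, perm);
    while (std::next_permutation(perm.begin() + 2, perm.end()))
        best = std::min(best, address(adj, perm));
    return best;
}
```

Enumerating all 64 four-vertex undirected subgraphs through this canonicalization collapses them into 40 distinct elements, consistent with the 20 + 20 element split described for Figure 2.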

Directionality
Directed networks with r relations are theoretically and practically distinct from undirected networks with 2r relations. The subgraph-to-element mapping differs with directed networks because A and V are no longer symmetric. This introduces more isomorphic equivalences and decreases the cardinality of a directed VCP vector by comparison to its undirected pseudoequivalent, a fact demonstrated in Figure 3. For instance, VCP^{4,1,1} contains only 2112 elements, whereas its undirected pseudoequivalent, VCP^{4,2,0}, contains 2176 elements.
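Both cardinalities can be reproduced by brute-force enumeration. The sketch below uses our own compact encoding (2 bits per vertex pair, the 6 pairs of a 4-vertex subgraph in lexicographic order, vertices 0 and 1 fixed as v_s and v_t) and collapses each subgraph with its image under the single free-vertex swap; for directed graphs the swap also reverses the orientation of the free-free pair:

```cpp
#include <algorithm>
#include <cassert>
#include <set>

// 2-bit state of pair p within a 12-bit subgraph code. Directed reading:
// bit 0 = edge u->v, bit 1 = edge v->u (u < v). Undirected 2-relation
// reading: bit 0 = relation 1 present, bit 1 = relation 2 present.
static unsigned pair_state(unsigned code, int p) { return (code >> (2 * p)) & 3u; }

// Image of a subgraph code under swapping the free vertices 2 and 3.
static unsigned swap_free(unsigned code, bool directed) {
    // pair order: 0:(0,1) 1:(0,2) 2:(0,3) 3:(1,2) 4:(1,3) 5:(2,3)
    unsigned s[6];
    s[0] = pair_state(code, 0);         // (0,1) untouched
    s[1] = pair_state(code, 2);         // (0,2) <- (0,3)
    s[2] = pair_state(code, 1);         // (0,3) <- (0,2)
    s[3] = pair_state(code, 4);         // (1,2) <- (1,3)
    s[4] = pair_state(code, 3);         // (1,3) <- (1,2)
    unsigned kl = pair_state(code, 5);  // (2,3) maps to itself...
    if (directed)                       // ...but its orientation reverses
        kl = ((kl & 1u) << 1) | (kl >> 1);
    s[5] = kl;
    unsigned out = 0;
    for (int p = 0; p < 6; ++p) out |= s[p] << (2 * p);
    return out;
}

// Number of VCP elements: distinct canonical codes over all 4^6 subgraphs.
unsigned count_elements(bool directed) {
    std::set<unsigned> canon;
    for (unsigned code = 0; code < 4096; ++code)
        canon.insert(std::min(code, swap_free(code, directed)));
    return static_cast<unsigned>(canon.size());
}
```

Running `count_elements(true)` and `count_elements(false)` recovers the 2112 and 2176 element counts cited above.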

Dynamic vs. static computation
With matrices A and V and mapping vector K, we have described the means by which one can generate the set of elements in an isomorphism class and the canonical representative of that class. We can enumerate the subgraph-to-element mapping by constructing the universe of A indexed in lexicographically increasing order, computing the canonical isomorphism class representative for each A, and mapping the index to a contiguous, increasing value of VCP elements. This static computation determines VCP elements and addresses so that when a particular subgraph is identified, its VCP vector element may be determined with a constant-time operation in a precomputed vector.

It follows from Equation 1 that enumerating members of G^{n,r,d} is an O(2^{n(n−1)(d+1)r/2}) operation. Determining the canonical representative of an arbitrary subgraph is an O(n!) operation, because |K_{3,n}| = (n − 2)!, but all subgraphs discovered in the process no longer require the computation, so O(2^{n(n−1)(d+1)r/2}) necessarily bounds the total cost. Even so, with a complexity of O(2^{n(n−1)(d+1)r/2}), higher values of n and r make precomputation intractable. Even if computation were tractable, the size of the data structure may exceed memory and the size of the mapping may even exceed disk space.
Enumerating subgraphs and determining their isomorphism class is prohibitively expensive in many ways, but it is also unnecessary. We can instead dynamically compute the canonical representative of a given subgraph in G^{n,r,d} when we first observe it. Since we have not enumerated potential preceding VCP elements, and since subgraph addresses of the canonical representatives may be extremely large as n and r grow, we cannot convert this canonical representation into a contiguous, invertible VCP element index and are thus forced to use an associative data structure with O(log n) rather than O(1) lookup time during counting. In losing this ability, however, we gain the ability to compute and represent VCP output in main memory so long as the population of the VCP vector is sufficiently sparse. In Section "Extension to complex networks", we show that this sparsity requirement will almost always hold regardless of the (n, r, d) tuple.
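A minimal sketch of this dynamic strategy follows. The structure and names are ours, and `canonicalize` stands in for the permutation minimization described earlier; the point is that counts live in an associative container keyed by canonical address, and the raw-address-to-canonical-address computation is memoized so each distinct subgraph form pays the permutation cost only once:

```cpp
#include <cassert>
#include <cstdint>
#include <map>

// Sparse, dynamically populated VCP counter. Lookups cost O(log m) for m
// distinct keys observed so far, in exchange for never materializing the
// full subgraph-to-element mapping.
struct DynamicVcpCounter {
    std::map<std::uint64_t, std::uint64_t> counts;  // canonical address -> count
    std::map<std::uint64_t, std::uint64_t> cache;   // memoized raw -> canonical

    template <typename Canonicalize>
    void observe(std::uint64_t raw_address, Canonicalize canonicalize) {
        auto it = cache.find(raw_address);
        if (it == cache.end())  // first observation: compute and memoize
            it = cache.emplace(raw_address, canonicalize(raw_address)).first;
        ++counts[it->second];
    }
};
```

The memory footprint is proportional to the number of distinct canonical forms actually observed, which is what makes high (n, r) settings feasible on sparse networks.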

Algorithms
We have just described methods whereby the elements in a given VCP vector may be either enumerated or determined dynamically during counting. We now describe how to count the number of times each (v s , v t ) pair appears embedded in a certain isomorphism class. We begin by introducing two algorithms that cover the simplest case where n = 3. Then we cover algorithms for the much more complicated case where n = 4. Several of the algorithms assume undirected graphs as input to simplify exposition, but all are easily generalizable to accept directed graphs.
For purposes of complexity analysis, we assume an underlying ordered compressed sparse row graph representation. This enables O(|V|) iteration through vertices and O(log |Γ(v_i)|) edge searches, where Γ(v_i) represents an ordered set of the neighbors of v_i. The appearance of Γ in algorithms presents problems for complexity analysis, because the expectation of Σ_{v_j ∈ Γ(v_i)} |Γ(v_j)| is dependent on the kernel rather than just the first moment of the degree distribution of the graph (Jürgen and Carley 2012). Though the first moment of the degree distribution must always be 2|E|/|V| in either undirected or directed graphs, the distribution is often Poisson, log-normal, power law, or double Pareto log-normal (Reed and Jorgensen 2004), some of which possess heavy tails. The strict approach is to admit a worst-case distribution describing a star and correspondingly recognize that a single neighborhood may then contain |V| − 1 vertices. We shall make the relaxed assumption, for more discriminative complexity analysis, that the distribution possesses a uniform kernel with expected value 2|E|/|V|, and substitute this expectation for |Γ(v_i)|. Then the approximation error in the case of an adversarial kernel is readily quantified as |V| − |E|/|V|, reaching its maximum in sparse graphs when |E| ≈ |V| and its minimum in dense graphs when |E| ≈ |V|^2.
Throughout the algorithms, we refer to a procedure that determines the index of the multirelational edge e_{i,j} in P(R), the lexicographically ordered power set of relations. This procedure can derive power set indices efficiently by setting individual bits in the index according to the presence of the relation corresponding to that bit and indexing the bits by the natural order of the relations themselves. It is possible to maintain integers representing the power set index in the graph representation to allow accomplishing this in O(1) time.
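The bit-setting derivation of the power set index might look like the following sketch (the function name and the boolean-vector interface are our own); relation b is present on the edge exactly when bit b of the index is set:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Index of a multirelational edge in the lexicographically ordered power set
// P(R): bit b of the index is set exactly when relation b occurs on the edge,
// with bits ordered by the natural order of the relations.
unsigned powerset_index(const std::vector<bool>& relation_present) {
    unsigned index = 0;
    for (std::size_t b = 0; b < relation_present.size(); ++b)
        if (relation_present[b]) index |= 1u << b;
    return index;
}
```

Because the index is just the binary encoding of relation membership, storing it directly on each edge makes the lookup O(1), as the text notes.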
Algorithm complexities are stated in terms of the time requirement for pairwise-independent VCP vector generation for a single pair of vertices. The edge search operation is implicitly indicated in the algorithms when e_{i,j} appears, and it refers to the query for an edge between v_i and v_j.

VCP^{3,r,d} Algorithms
We begin by covering a member of the most naïve class of algorithms, represented by Algorithm 1. This algorithm is worth studying for three reasons. First, this algorithm is trivially generalized to any value of n by introducing an additional loop for each additional vertex and accumulating corresponding information about the new vertex pairings that become possible. Second, this algorithm most transparently describes the goal of the computation and is thus instructive for understanding the nature of VCP vectors. Third, for moderate values of n, the clarity of these algorithms allows for the production of test output with which to write more sophisticated algorithms.
Though it offers conceptual advantages, this algorithm is unacceptably slow. Since it must loop through the set V to compute the vector, it requires O(|V|) edge searches.

Algorithm 1: VCP^{3,r,1} (naïve)

We can easily generalize this complexity from Algorithm 1 to the entire class of naïve algorithms for any n. With each incrementation of n, we add an additional loop through the set V, yielding O(|V|^{n−2}) iterations, but we must also perform a number of edge search operations in each inner loop related to the number of containing loops, with up to O(n) searches in the innermost loop. The cost of the edge search operation has already been described as O(log |Γ(v_i)|), and substituting the O(log(|E|/|V|)) approximation yields a general complexity for the naïve class of algorithms of O(n · |V|^{n−2} · log(|E|/|V|)). For Algorithm 1, the time complexity is O(|V| · log(|E|/|V|)). Fortunately, it is possible to do much better. We can recognize that, excluding the connection between v_s and v_t from consideration, connected structures must include members of Γ(v_s) or Γ(v_t). All structural counts are dictated by the cardinalities of the sets resulting from taking the differences and intersection of these. We can simultaneously account for unconnected structures by subtracting their union from V, excluding v_s and v_t from the result, and taking the cardinality of the remaining set. Algorithm 2 formalizes the undirected variant of the procedure. The analysis of this algorithm reduces to the analysis of the union, difference, and intersection set operations, all of which are easily implemented in O(|U| + |T|), where U and T are the respective sets.
The algorithm also incurs the O(log(|E|/|V|)) cost of the search for e_{i,j}, but this is subsumed when we substitute the expected neighborhood cardinality 2|E|/|V| for |Γ(v_s)| and |Γ(v_t)|, giving an overall complexity of O(|E|/|V|). In efficient implementations, the four set operations are collapsible into a single linear scan through Γ(v_i) and Γ(v_j), and such implementations allow the approach to handle any value of r through application of the power set indexing procedure to members of Γ(v_i) and Γ(v_j) with no change to asymptotic complexity.
The usage of set operations reduces the time requirement of the algorithm by a factor of (|V|^2 · log(|E|/|V|)) / |E|. We shall next demonstrate that much greater savings are possible with VCP^{4,r,d} using more clever set operations and corrections for double-counting.
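In the spirit of Algorithm 2, a single merge-style scan over the sorted neighbor lists suffices in the undirected unirelational case. The sketch below is ours, not the paper's listing: it assumes sorted adjacency lists with no self-loops, and it uses its own element indexing (bit 0 = e_{i,k}, bit 1 = e_{j,k}, bit 2 = e_{i,j}) rather than the paper's addressing scheme:

```cpp
#include <algorithm>
#include <cassert>
#include <climits>
#include <cstdint>
#include <vector>

// VCP^{3,1,0} for the pair (i, j) via one linear scan of the two sorted
// neighbor lists, replacing the naive loop over all of V. The four
// neighbor categories (common, i-only, j-only, neither) fully determine
// the element counts; "neither" is derived by subtraction from |V| - 2.
std::vector<std::uint64_t> vcp3(const std::vector<std::vector<int>>& nbrs,
                                int i, int j, int num_vertices) {
    std::vector<std::uint64_t> vcp(8, 0);
    const std::vector<int>& gi = nbrs[i];
    const std::vector<int>& gj = nbrs[j];
    bool eij = std::binary_search(gi.begin(), gi.end(), j);
    std::uint64_t both = 0, only_i = 0, only_j = 0;
    std::size_t a = 0, b = 0;
    while (a < gi.size() || b < gj.size()) {
        int va = a < gi.size() ? gi[a] : INT_MAX;
        int vb = b < gj.size() ? gj[b] : INT_MAX;
        if (va == vb)     { ++both; ++a; ++b; }             // common neighbor
        else if (va < vb) { if (va != j) ++only_i; ++a; }   // neighbor of i only
        else              { if (vb != i) ++only_j; ++b; }   // neighbor of j only
    }
    std::uint64_t none = std::uint64_t(num_vertices) - 2 - both - only_i - only_j;
    std::uint64_t base = eij ? 4 : 0;   // bit 2: edge (i,j) present
    vcp[base + 0] = none;               // v_k adjacent to neither
    vcp[base + 1] = only_i;             // edge (i,k) only
    vcp[base + 2] = only_j;             // edge (j,k) only
    vcp[base + 3] = both;               // edges (i,k) and (j,k)
    return vcp;
}
```

The scan touches each neighbor list once, so the cost matches the O(|Γ(v_i)| + |Γ(v_j)|) set-operation analysis above.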

VCP^{4,r,d} Algorithms
The VCP^{3,r,d} algorithms directly increment element counts corresponding to the underlying subgraph structure index, because no isomorphic equivalences are present with only one free vertex. In VCP^{4,r,d} algorithms, the presence of two free vertices requires the determination of canonical isomorphism class representatives from observed structures. We designate this operation in the algorithms with a dedicated mapping procedure and implement it in O(1) time with a precomputed vector of mappings.
We first provide Algorithm 3 as an undirected exemplar of the class of naïve algorithms applicable in the case of four vertices. It works by iterating through every pair of free vertices in the graph and considering them in combination with v_s and v_t to count the resulting structures. As demonstrated by the generalized complexity analysis provided above, its asymptotic complexity is O(|V|^2 · log(|E|/|V|)).
Algorithm 3: VCP^{4,r,0} (naïve)

It is again possible to reduce the complexity by restricting consideration to neighbors, but the procedure is more complex. The algorithm commences by considering the neighbor sets of v_s and v_t to compose structures of three vertices. These structures are extensible either by the addition of another neighbor of either v_s or v_t or by the addition of a neighbor of their neighbors. To calculate the counts of sparse structures, which are too numerous to cheaply enumerate, the algorithm maintains counts of the number of observed connected pairs, the number of observed unconnected pairs, and the number of vertices that are two hops from v_i or v_j, written as α, β, and γ respectively. These quantities provide sufficient information to differentiate unobserved structures in which the two free vertices are isolates within the subgraph from those in which they are isolated from v_s and v_t but connected to each other. Algorithm 4 formalizes the procedure and provides more details.

Algorithm 4: VCP^{4,r,0}

The critical components of the cost depend on the neighborhood cardinalities of v_i and v_j rather than on |V|, demonstrating that graph sparsity is an important factor in the feasibility of the algorithm. We call special attention to three lines in Algorithm 4, the purposes of which may not be obvious. Line 37 computes the number of structures in which the v_k under consideration is a member and in which a fourth vertex is isolated and thus unexplored through consideration of neighbors. This is equivalently the number of vertices in the network that are neither v_i nor v_j nor a neighbor of v_i, v_j, or v_k, given by |V \ ({v_i, v_j} ∪ Γ(v_i) ∪ Γ(v_j) ∪ Γ(v_k))|. Lines 39 and 40 consider structures in which v_k and v_l are connected to neither v_i nor v_j.
Such structures are divided into two cases: one in which v_k and v_l are connected to each other and one in which they are not. Counting these structures directly is expensive because v_k and v_l are not explored as neighbors of v_i and v_j. Fortunately, careful curation of the number of previously encountered connections and gaps facilitates the computation of these quantities. Line 39 computes the number of structures where v_k and v_l are connected, which is equivalent to the total number of edges in the network excepting those encountered directly during counting. Line 40, which computes the number of structures where v_k and v_l are not connected to each other, is significantly more complicated to derive. We start with the number of gaps we know to exist in the network, (1/2)|V|(|V| − 1) − |E|. From this we must subtract all of the gaps that we encountered while counting, expressed as β, but we must remember that β only describes the number of gaps directly observed. We must additionally incorporate gaps contributed by the structure count computed by line 37. Outside the loop, this number may be expressed in aggregate as the product of the number of vertices that may serve as v_k and the count computed at line 37. This expression gives the number of such structures, but it only accounts for gaps not directly observed between v_k and v_l. The sparse subgraphs also contribute unobserved gaps involving v_i and v_j. Subtracting these two quantities from the number of possible gaps minus the number of directly observed gaps provides us with the count of the least connected isomorphism class present in VCP 4,1,0 i,j. This explication also applies to Algorithm 5, which has identical computations at lines 36, 38, and 39.

Since VCP 4,r,d computation determines the reachability of vertices from v_i, it must share the lower bound. We summarize the algorithms and complexities covered in Table 3. The final row does not reflect a specific provided algorithm but instead addresses the general theoretical bound on how well any pairwise-independent VCP computation may perform.

Extension to complex networks
It is largely trivial to extend VCP algorithms to more complex networks, including any form of edge feature such as directionality, weight, temporality, or different relation types: any information describing edges or vertex pairs that either exists categorically or can be quantized. One amenable network representation associates an ordered set of bits with each edge. Each bit corresponds to the presence of a particular relation or some Boolean descriptor for a pair of vertices. Where unirelational algorithms determine the existence of an edge, multirelational algorithms instead evaluate the edge as the binary-coded integral value of the ordered set of bits. This is one conceivable implementation of (P(R), e_x,y) in Algorithm 1.
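A minimal sketch of this representation follows; the relation names and the tiny graph are hypothetical, chosen purely for illustration.

```python
# Ordered-bit edge representation: bit i corresponds to relation i.
RELATIONS = ["cites", "coauthors", "emails"]        # illustrative; |R| = 3

def encode(relations_present):
    """Binary-coded integral value of the ordered set of relation bits."""
    return sum(1 << RELATIONS.index(r) for r in relations_present)

edges = {}                                  # (u, v) -> bitmask in [1, 2**|R|)
edges[("a", "b")] = encode(["cites", "emails"])     # 0b101 = 5
edges[("b", "c")] = encode(["coauthors"])           # 0b010 = 2

def has_edge(u, v):
    """The unirelational existence test becomes a nonzero-bitmask test."""
    return edges.get((u, v), 0) != 0

def has_relation(u, v, r):
    """Per-relation test via the corresponding bit."""
    return bool(edges.get((u, v), 0) & (1 << RELATIONS.index(r)))
```

The same bitmask doubles as the integer index into per-connectivity counters, which is what makes the multirelational extension nearly free computationally.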
A complication emerges as r increases related to determining counts of multirelational edges in subgraphs with disconnected substructures. VCP n,r,d counts structures including those in which none of {v_3, . . . , v_n} is connected to either v_1 or v_2. It is impossible to take advantage of set operations on neighbor sets, because such free vertices do not appear in any neighbor set of v_i or v_j. There are two alternative methods of counting. The first, computationally infeasible even for small networks, is to enumerate all (|V| choose n−2) sets of free vertices. The second approach, which works well for n = 4 and generalizes to n > 4, is to associate with each existing member of P(R) the count of its occurrence and to decrement the count each time a corresponding member is encountered in computing a given vector VCP 4,r,d i,j. Knowing how many of the P(R) connectivities the graph contains and how many are present in direct involvement with v_i and v_j, it is possible to determine the count of connectivities of vertices not in direct involvement. This admits a correct count of VCP elements. While this method will outperform enumeration, it can still become problematic when r is large because of the space and time involved in creating and maintaining the counts. Though we included these structures in our counts, empirical results show that surveying the connectivity of subgraphs with no direct connectivity to v_i and v_j reveals much less information than surveying connectivity within subgraphs connected to v_i and v_j. Omitting this information from the VCP vector offers an opportunity for increased speed. The feasibility of multirelational applications of VCP analysis with r ≥ 10 depends largely upon the number of VCP elements that are populated. Even before the number of enumerable subgraphs reaches values such as 2^60 for G 4,10,0, storing the complete VCP vector becomes impossible.
Fortunately, most graphs are friendly to VCP analysis, and only an infinitesimal fraction of the total number of possible isomorphism classes is present. Theoretical statements regarding expectations for the number of VCP elements in various random graph models may be possible, but we turn to strong empirical demonstrations corresponding to worst-case scenarios. In the most difficult case, VCP analysis must describe all pairs of vertices in a graph with moderate density and random linkage. Coverage of all pairs guarantees that any existing structures must be observed. Moderate density is necessary to create a large population of both sparsely and densely connected subgraphs. Random linkage is necessary to ensure that the sample exhibits a variety of isomorphism classes.
We simulate this adversarial environment with a specialized Erdős–Rényi random graph model with four parameters: {|V|, |R|, P(e_i,j), P(r_x ∈ R)}. Since we will perform VCP analysis on all pairs, |V| places the effective upper bound on the number of different structures we can observe at (|V| choose 4). The value |R| is the number of different relations that can exist over an edge, and it controls the cardinality of the set of isomorphism classes. The probabilistic parameter P(e_i,j) is the traditional p parameter from the G(n, p)-model Erdős–Rényi random graph, except that it controls whether or not an edge is a candidate for multirelational population. For edges whose assigned uniform random value falls below P(e_i,j), the probabilistic parameter P(r_x ∈ R) is used as a threshold against which uniform random values are tested for each of the r possible relations to determine the relations that will apply on that edge. Figure 4 shows the number of VCP elements observed in all-pairs analysis of 1,764 random graphs constructed in this manner.
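A sketch of this generator under the stated parameterization (the function name and seeding discipline are our own):

```python
import random
from itertools import combinations

def multirelational_er(n, num_relations, p_edge, p_rel, seed=0):
    """Four-parameter random model {|V|, |R|, P(e_ij), P(r_x in R)}.

    Pairs whose uniform draw falls below p_edge become candidates for
    multirelational population; each of the |R| relations is then added
    with probability p_rel.  Candidates receiving no relation remain
    non-edges, so p_edge = 1 does not imply a link.
    """
    rng = random.Random(seed)
    edges = {}
    for u, v in combinations(range(n), 2):
        if rng.random() < p_edge:               # candidate for population
            mask = 0
            for bit in range(num_relations):
                if rng.random() < p_rel:
                    mask |= 1 << bit
            if mask:                            # at least one relation present
                edges[(u, v)] = mask
    return edges
```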
For densities below 30%, there are simply too few edges to allow for the formation of dense isomorphism class exemplars, and less than 50% of the possible VCP 4,r,0 elements appear. The probability of observing relations on edges is also influential. P(r x ∈ R) must be neither extremely low, in which case all substructures are sparse, nor extremely high, in which case all substructures are fully populated with all relations. The lower the density, the more critical it is that the relational probabilistic parameter be in a moderate zone. Extremely high density alone is not sufficient to prevent the formation of most isomorphism classes since P(e i,j ) = 1 does not necessarily imply a link and because the random selection of elements from P(R) will still allow for the construction of many isomorphism class exemplars even when involving every edge in the network.
The graphs with |V| = 250 offer (250 choose 4) = 158,882,750 selections of 4 vertices, whereas those with |V| = 100 offer only (100 choose 4) = 3,921,225. The heat maps indicate that observed diversity is limited by the random chance of appearance when the number of subgraphs increases or the number of possible selections of vertices decreases. This manifests as a shrinking of the region, in terms of the two probabilistic parameters, where many elements are observed, and it introduces another helpful mechanism for maintaining manageable VCP element presence.
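The selection counts above are binomial coefficients and can be checked directly:

```python
from math import comb

# Number of ways to choose 4 vertices from |V|, as quoted above.
assert comb(250, 4) == 158_882_750
assert comb(100, 4) == 3_921_225
```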
In reality, the situation is much less adversarial and more conducive to VCP analysis than this. First, as r increases to extreme values and |VCP n,r,d| grows as a result, the exponential growth in the cardinality of the set of isomorphism classes quickly outstrips (|V| choose n), which places an effective upper bound on the number of observable elements. More significantly, densities are often fractional percentages of their maximum values, so typical analytical tasks operate in areas of the heat map even more favorable than those displayed. It is also rare for all relations to be highly and equally common across the entire network. Links usually form according to underlying guiding motivations, which induce structural similarities throughout the topology. Taken together, these factors greatly limit the size of the observed subset of highly multirelational VCP vectors to manageable output sizes. Further, as we shall show in Section "Computational challenges", these factors also work to limit the entropy of the values within any particular VCP element.

Data
We present results for several data sets in Table 4 to demonstrate the performance of the techniques under comparison for different families of networks. Though all of these data sets contain information with which to generate edge weights, we are interested in providing purely structural comparison here, so all quantitative results are based on networks constructed without edge weights.
calls is a stream of 262 million cellular phone calls from a major cellular phone service provider. We construct directed networks from the calls by creating a node v_i for each caller and a directed link e_i,j from v_i to v_j if and only if v_i calls v_j. To reduce the noising effect of robot callers and automated answering systems, we exclude nodes that only place calls and have an out-degree above 1,000, as well as nodes that only receive calls and have an in-degree above 1,000. sms is a stream of 84 million text messages from the same source as calls, constructed in the same manner.
condmat (Newman 2001a) is a stream of 19,464 multiagent events representing condensed matter physics collaborations from 1995 to 2000. We construct undirected networks from the collaborations by creating a node for each author in the event and an undirected link connecting each pair of authors. For all experiments involving condmat, we use the years 1995 to 1999 for constructing training data and the year 2000 for testing.
dblp-cite (Ley 2002) is a citation network based on the DBLP computer science bibliography. Each researcher is a node v i and directed networks are formed by viewing a citation by researcher v i of work by researcher v j as a directed link e i,j . The dblp-collab network uses the same raw data, but links are based on co-authorship collaborations. An undirected link exists between v i and v j if both are authors on the same paper.
disease-g (Davis and Chawla 2011) is a network in which nodes represent diseases and the links between diseases represent the co-occurrence of particular genotypic characteristics. Links are undirected. This network is not longitudinal, but finding unobserved links is an important task, so we have no choice but to estimate performance by randomly removing links to construct test sets. The disease-p network is from the same source as disease-g, but links in disease-p represent the co-occurrence of phenotypic characteristics. Predicting common expressions between diseases is uninteresting, since expressions either are observed between diseases or they are not, so the practical value of phenotypic predictions is negligible. Nonetheless, holding out phenotypic links and subsequently predicting their presence is equally instructive for the purposes of predictor evaluation.
hepth-cite and hepth-collab (McGovern et al. 2003) are formed in exactly the same way as dblp-cite and dblp-collab respectively. The raw data for these networks is a set of publications in theoretical high-energy physics.
The huddle data set from (Raeder et al. 2011) is transaction data gathered at a convenience store on the University of Notre Dame campus. The data was collected from June 2004 to February 2007. Products are represented by nodes, and products purchased together in the same transaction are represented by undirected links.
The patents-collab (Valverde et al. 2007) data set is constructed from the data at the National Bureau of Economic Research. Nodes represent authors of patents and undirected links are formed between authors who work together on the same patent.

Computational feasibility
To demonstrate computational feasibility, we computed VCP vectors serially on one core of commodity desktop hardware. Our machine included an Intel Core i5-2500K running at 4.8 GHz with 6 MB of level 3 cache and 8 GB of 1600 MHz dual-channel memory. Table 5 shows the results of computing the vectors for all ℓ = 2 pairs, those spanning a geodesic distance of 2. Despite the positive serial feasibility, VCP algorithms are naturally parallel: each vector computation is completely independent of the others, so multi-core, grid, or cloud distribution allows for perfect linear speedup. Despite the 8 GB of memory available on the machine, we note that, aside from the graph representations themselves, actual processing for all algorithms required less than 128 KB for all data sets and fit into the 32 KB level 1 cache for most. Nevertheless, we also developed and make available O(1) space algorithms that are only slightly slower.

The VCP method and link prediction
Link prediction has been defined and framed in many ways, but it is in essence the task of accepting a network G = (V, E) and predicting whether there is or will be an unobserved link e_s,t between a pair of nodes v_s and v_t where v_s, v_t ∈ V and e_s,t ∉ E. In multirelational networks, we simply accept that E is a vector, and we are often also interested in predicting the relational type of a link in addition to its existence. VCPs offer several advantages in tackling link prediction. These include incorporating many different hypotheses about underlying link formation mechanisms through the identification of all vertex relationships in subgraphs of size n, and natively handling multirelational data.
We compare the link prediction efficacy of VCPs with that of a selection of other methods. We select representatives from different predictor families established as strong by the prevailing literature (Liben-Nowell and Kleinberg 2007). Unsupervised methods include the Adamic/Adar predictor based on common neighbors (Adamic and Adar 2001), the Katz path-based predictor (Katz 1953), and the preferential attachment model (Barabási et al. 2002; Newman 2001b). We also compare against the HPLP supervised link prediction framework of Lichtenwalter et al. (2010), which combines a variety of unsupervised predictors into a classification model.
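For concreteness, the two simplest unsupervised baselines can be sketched as follows, assuming an undirected graph stored as neighbor sets. Note that a common neighbor necessarily has degree at least two, so the logarithm in the Adamic/Adar score is never zero.

```python
import math

def adamic_adar(adj, u, v):
    """Adamic/Adar: sum over common neighbors z of 1 / log(deg(z)).
    Rarer shared neighbors contribute more to the score."""
    return sum(1.0 / math.log(len(adj[z])) for z in adj[u] & adj[v])

def preferential_attachment(adj, u, v):
    """Preferential attachment: the product of the two degrees."""
    return len(adj[u]) * len(adj[v])
```

The Katz predictor and HPLP require path enumeration and model induction respectively and are omitted from this sketch.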

Experimental setup
To run our experiments, we integrated VCP with the LPmade link prediction software. LPmade uses a GNU make architecture to automate the steps necessary to perform supervised link prediction. This integration ensured a fair and comparable testing regime across predictors and allows those interested in using VCP for link prediction and other purposes to test it easily on their own networks.
To perform supervised classification with a VCP vector, we use the unmodified VCP vector, VCP n,r,d i,j , as the classification feature vector. The class label is then determined by the existence or nonexistence of the edge v i , v j in a future form of the network. Non-longitudinal data is handled in the same way as longitudinal data by randomly assigning timestamps to edges, so for non-longitudinal data the future form of the network corresponds to a random hold-out of edges. Classification data is not divided from a single data set generated from a single underlying network but is instead constructed from different underlying network data. Training data uses older forms of the network than testing data, and training labels always come from data strictly preceding the testing label period to prevent data leakage. Experiments are fully reproducible through the use of the LPmade framework.
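The labeling scheme can be sketched as follows. Here `featurize` is a stand-in for the VCP vector computation, and the candidate enumeration is simplified to all non-adjacent pairs in the older network.

```python
def make_examples(train_events, label_events, featurize):
    """Build supervised examples as described above: features come from
    an older form of the network, labels from a strictly later period.

    `featurize(adj, u, v)` stands in for the VCP vector computation.
    """
    adj = {}
    for u, v in train_events:                    # older network form
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    future = {frozenset(e) for e in label_events}
    examples = []
    for u in adj:
        for v in adj:
            if u < v and v not in adj[u]:        # candidate non-edges
                label = frozenset((u, v)) in future
                examples.append((featurize(adj, u, v), label))
    return examples
```

Because the label period strictly follows the feature period, no information about the target edges leaks into the features.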
We experimented with HPLP using random subspaces as well, but we determined that random subspaces produced comparable or decreased performance relative to the original reference implementation, so we present HPLP results unmodified, using random forests (Breiman 2001).
We used the default values from HPLP of 10 bags of 10 random forest trees, 10 bags of 10 random subspaces for VCP classifiers, and training set undersampling to 25% positive class prevalence. We did not change the size or distribution of the testing data. For undirected networks, we resolve f(v_s, v_t) = f(v_t, v_s) by computing the arithmetic mean to serve as the final prediction output. By default, LPmade includes features that consider edge weights, such as the sum of incoming and outgoing link weights. We are interested in the comparative prediction performance of the link structure alone, so we ran all predictors on the networks disregarding edge weights. There are many ways to assign edge weights to the networks here, and the particular choice of edge weight and the precise decision about how to incorporate it into the VCPs would distract from the study.
Computing and evaluating predictions using any prediction method for all possible links on large, sparse networks is infeasible for multiple computational reasons, including time and storage capacity. Link sparsity is likewise challenging for statistical modeling because of the extremely low prior probability of link formation (Getoor 2003; Lichtenwalter et al. 2010; O'Madadhain et al. 2005; Rattigan and Jensen 2005). Link prediction within a two-hop geodesic distance provides much greater baseline precision in many networks (Lichtenwalter et al. 2010; Scellato et al. 2011), so effectively predicting links within this set offers a strong indicator of reasonable deployment performance. For all compared prediction methods, we restricted the prediction task by distance and considered only potential links spanning two hops within the training data, due to their higher prior probability of formation and computational feasibility.
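A simple way to enumerate this restricted candidate set, sketched for undirected neighbor-set adjacency:

```python
def two_hop_pairs(adj):
    """Enumerate vertex pairs at geodesic distance exactly two: pairs
    sharing a neighbor but not directly connected.  This is the
    restricted candidate set used for all compared predictors."""
    pairs = set()
    for u in adj:
        for z in adj[u]:              # one hop
            for v in adj[z]:          # two hops
                if v != u and v not in adj[u]:
                    pairs.add(frozenset((u, v)))
    return pairs
```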

Prediction performance
The area under the receiver operating characteristic curve (AUROC) can be deceptive in scenarios with extreme imbalance (Hand 2009), and the area under the precision-recall curve (AUPR) exhibits higher sensitivity in the same scenarios (Davis and Goadrich 2006). We provide results for those interested in traditional AUROC, but we also present AUPR results and mainly restrict our analysis to them. Table 6 shows the comparative AUROC and AUPR performance of Adamic/Adar, Katz, preferential attachment, HPLP, and VCPs in link prediction for potential links spanning a geodesic distance of two hops.
In general, we expect the information content of VCPs to increase in the left-to-right order presented in Table 6, though depending on the significance of directedness in the network, the expectation of comparative performance of VCP 3,1,1 and VCP 4,1,0 may change. We point the reader to calls, dblp-cite, dblp-collab, disease-g, disease-p, hepth-cite, huddle, patents-collab, and sms as conformant examples. We suspect that the exceptions indicate cases in which the classification ensemble failed to create a sufficiently optimized model in the high-dimensional space.
In 7 of the 11 networks, VCP classification offers superior AUPR performance. In a slightly different selection of 7 networks, it offers superior AUROC performance. In some of the cases in which VCP offers the best performance, the differences are quite wide: in the sms network it offers an AUPR 2.3 times as high as that of the best competitor, and in the condmat network its AUPR is 1.53 times that of the nearest competitor. In two of the networks in which VCP classification does not provide the best performance, HPLP does. As an interesting side note, when weights are removed as they were to obtain these results, HPLP does not always outperform the unsupervised predictors. Figure 5 takes a closer look at the performance differences. The black dashed line represents the baseline performance of a random predictor. Across all the selected networks, VCP maintains high precision longer at increasing values of recall. This is especially important in link prediction, where high precision is so difficult to achieve.
Despite the competitive performance that the VCP method exhibits, it is not our intent to present the most effective possible classification scheme. There is great potential for improvement through feature selection, dimensionality reduction, and alternative classification algorithms. Increasing the number of trees in the forest is also likely to improve performance in the complex feature space. Yet another option for potential improvement is to concatenate VCP 3,r,d and VCP 4,r,d into a single feature vector. VCP vectors contain a wealth of information, and the task is simply to determine how best to employ it toward whatever goals are desired.

VCPs and multirelational data
In VCP algorithms, any procedures that consider a particular edge for unirelational data can instead determine the value of the edge as the binary-coded integral value of the ordered set of bits for multirelational data. Excepting the cost of allocating storage necessary to maintain counts for multirelational structural elements, the computational complexity of the multirelational extension is no greater than for unirelational networks. Figure 6 shows the first 16 elements of VCP 3,2,0 .
We can capture the power of longitudinal data by encoding the snapshot in which an event occurs as its relation type. Consistent with the approach proposed by (Sharan and Neville 2008) and subsequently used by (Acar et al. 2009), we divide our data into a set of discrete, non-overlapping snapshots. Suppose a continuous stream of event records arrives with arbitrarily fine temporal granularity. We divide the stream into t + 1 chunks s_0 to s_t, sized according to selected criteria, often but not necessarily equally. In some cases, data is generated by a non-stationary process with periodicity, so encapsulating the periodicity consistently might be an important criterion. From the chunks, we can create corresponding graph snapshots G_0 to G_t such that all events involving entities i and j in an arbitrary chunk s_a appear as links e_i,j between vertices v_i and v_j in G_a. When we want a graph snapshot to be more temporally expansive than a single chunk, say including s_a, s_a+1, . . . , s_a+b with 1 < b ≤ t − a, we can either compute the minimum common supergraph of multiple consecutive graph snapshots or equivalently compute a new graph snapshot containing the events within the desired bounds. When performing supervised learning on longitudinal data, conceptual and practical complexities arise that are absent from standard supervised learning tasks. These complexities are covered in great detail in the expansive literature on data stream mining (Gaber et al. 2005), and many of the topics endemic to all mining tasks on longitudinal data apply here. Figure 7 demonstrates the specific methodology employed within our experiments. As with non-longitudinal classification problems, there exists a model construction step using training data and a performance evaluation or deployment prediction step using testing data.
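Under these conventions, encoding snapshot membership as a relation bitmask might look like the following sketch, where the chunk boundaries are assumed to be sorted split points on the event timestamps:

```python
from bisect import bisect_right

def snapshot_relations(events, boundaries):
    """Encode the snapshot in which an event occurs as its relation type.

    `events` are (u, v, time) records; `boundaries` are the sorted split
    points dividing the stream into chunks s_0 .. s_t.  Bit a of an
    edge's mask is set iff the pair interacts during chunk s_a.
    """
    edges = {}
    for u, v, time in events:
        chunk = bisect_right(boundaries, time)       # index of chunk s_a
        key = (min(u, v), max(u, v))                 # undirected pair
        edges[key] = edges.get(key, 0) | (1 << chunk)
    return edges
```

The resulting bitmasks feed directly into the multirelational VCP machinery, so transitions between VCP elements capture temporal evolutionary patterns.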
The basic application of supervised learning to link prediction on longitudinal data is depicted in Figure 7a as a reference. Each color in each of the training and testing time lines represents a separate network snapshot.
In the basic application in Figure 7a, there are four snapshots: the two blue snapshots are used for constructing a feature vector. In the training time line, the red snapshot provides labels for model induction. In the testing time line, the red snapshot is the newly observed data on which we test our model. The red testing snapshot must always have the same boundaries for fair model comparison regardless of changes to any other snapshots. We also choose to maintain the boundaries on the red snapshot in training, since varying the snapshot may cause changes in the observed prior and conditional probabilities of the link formation labels. We likewise make an effort to reflect the duration of the red snapshot in testing with the duration of the red snapshot in training to give the model the best chance at facing the same distribution on which it was trained.
Figures 7b and c demonstrate supervised learning with two temporal relations, which requires six distinct snapshots in total for training and testing. Figures 7d and e demonstrate supervised learning with three temporal relations, which requires eight distinct snapshots. In our temporal models, we handle data granularity in two different ways. In the first, snapshot granularity remains consistent throughout the modeled data. In the second, we decrease snapshot granularity exponentially with each snapshot and emphasize the most recent events by making the most recent snapshot relatively short. The depictions in Figures 7b and d are of evenly divided snapshots; those in Figures 7c and e are of exponentially divided snapshots. In Figure 7d, we suppose that G_t−6 is derived from the oldest available data in the stream, so we cannot extend the oldest snapshot to its full length. In such cases, it may also be wise to limit the length of the oldest testing snapshot to present the closest possible testing data to the model. Smaller snapshots effectively place greater emphasis on individual events in the multirelational VCP method. Links from short periods and links from long periods both receive their own relations, so short periods with few events afford the contained events greater discriminative potential. Links occurring within the same snapshot are also guaranteed to occur closer together in time, which may decrease noise and increase the clarity of temporally proximate evolutionary trends. In Figures 7b and d, events that occurred recently are given the same weight as those that occurred in the more distant past. In Figures 7c and e, recent events are effectively given more weight by placing fewer, more temporally proximate events in the same relation.
In all of the depictions in Figure 7, the number of snapshots excluding the red testing snapshot is equal to the number of relations that will appear in multirelational isomorphism classes of three or four vertices like those in Figure 6.

Longitudinal data
We draw our conclusions from a selection of four of the network data sets above. Table 7 presents some basic information about the data in a temporal context. The table also indicates the chunks into which we divided the events. For instance, we divide the longitudinal condmat data into 6 chunks, each spanning 12 months. The division of data into chunks is arbitrary but represents our attempt to balance data sufficiency in each chunk against the production of enough chunks to allow for experiments varying the number and granularity of snapshots. Figure 8 provides the domain and distribution of longitudinal events for each of the networks. The collaboration networks exhibit a general increase in the rate of growth. The growth in the dblp-collab network is consistent with (Ley 2002), but the anomaly in 1994-1996 is not; upon further examination of the data, we can offer no explanation for it. The hepth-collab network data reduces in volume for 2002 due to incomplete information for that year. The huddle product co-purchasing network shows a cyclical reduction during summer months, which we attribute to the presence of fewer students on campus. There is a lesser cyclical reduction during the winter break.

Experiments
To explore the many possibilities of encoding temporal information into multirelational representations, we perform several experiments designed to confirm or refute certain hypotheses. Areas we explore include how to divide data, whether event recency is significant, whether our method appropriately captures temporal trends or is simply a surrogate for weight, and to what extent our approach scales. Performance is again determined based on predictions within ℓ = 2 of all training data.

Precluding surrogacy for weight
Defining relations in terms of successive exclusive network snapshots allows the relations to offer a rudimentary encoding of weight. An alternate explanation for any performance disparities between temporally aware and unaware predictors is that the temporally aware predictors merely offer an indication of edge weight. To preclude this alternate explanation, we compute performance before and after randomly reordering events. If dividing data into snapshots and specifying multiple relations is only serving as a surrogate for weight, we expect to see approximately the same performance. If the order of the events is significant, we expect to see higher performance from temporally ordered data.
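The reordering procedure can be sketched as a timestamp shuffle that perturbs nothing but event order; the multiset of timestamps and the set of interacting pairs are preserved exactly, so any aggregate weight information survives the shuffle.

```python
import random

def reorder_events(events, seed=0):
    """Randomly reassign timestamps among (u, v, time) events.

    The pair sequence and the multiset of timestamps are unchanged; only
    the association between pairs and times is randomized.  If temporal
    relations merely encode weight, predictions from the reordered
    stream should roughly match those from the original.
    """
    rng = random.Random(seed)
    times = [t for (_, _, t) in events]
    rng.shuffle(times)
    return [(u, v, t) for (u, v, _), t in zip(events, times)]
```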
The results in Table 8 demonstrate the benefits of temporal ordering. There is no case in which the model performs as well with randomly reordered data, and in some cases multiple relations over randomly reordered data perform worse than the unirelational baseline. This strongly refutes the hypothesis that encoding temporal information in VCPs boosts performance merely through edge weight surrogacy.

Analyzing split points
Separating fine-grained longitudinal data into discrete, bounded units for constructing network snapshots elicits questions such as how many snapshots to use, at what temporal locations data should be divided, and whether to maintain a consistent snapshot size. We conduct a simple experiment to determine an appropriate temporal location at which to divide these data sets in the case of two snapshots. This experiment also tests the hypothesis that greater focus on recent events is beneficial, and the results lead to cogent theories about how to parameterize more snapshots. Figure 9 shows the performance achievable at all possible splits for two snapshots given the predetermined chunk boundaries. Take s_i as a value in the domains of the Figure 9 sub-figures and s_t+1 as the final chunk in the data stream as indicated in Table 7. Then in training the first snapshot includes chunks [s_0, s_i), and the second snapshot includes [s_i, s_t−1]. The penultimate chunk, s_t, is reserved for training labels. For this experiment, to ensure that training and testing feature data volumes are constant, the first testing snapshot entirely subsumes the first training snapshot. In testing, the first snapshot includes chunks [s_1, s_i+1), and the second snapshot includes [s_i+1, s_t]. The final chunk, s_t+1, is reserved for testing labels. The horizontal lines correspond to baselines in which supervised models are built using VCP n,1,0, which represents no temporal discrimination.
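One way to make the chunk indexing concrete, assuming the conventions just stated (chunks s_0 through s_t+1, so the chunk count equals t + 2; the function name and return layout are our own):

```python
def two_snapshot_splits(num_chunks):
    """Enumerate the two-snapshot split configurations evaluated in
    Figure 9 for a stream of chunks s_0 .. s_{t+1}.

    For each split index i, returns the two training snapshots as
    half-open chunk-index ranges ([s_0, s_i) and [s_i, s_{t-1}]), the
    training-label chunk s_t, the shifted testing snapshots, and the
    testing-label chunk s_{t+1}.
    """
    t = num_chunks - 2
    configs = []
    for i in range(1, t):                         # each snapshot nonempty
        configs.append({
            "train": ((0, i), (i, t)),            # [s_0,s_i), [s_i,s_{t-1}]
            "train_label": t,                     # s_t
            "test": ((1, i + 1), (i + 1, t + 1)), # [s_1,s_{i+1}), [s_{i+1},s_t]
            "test_label": t + 1,                  # s_{t+1}
        })
    return configs
```

For the 6-chunk condmat stream this yields three candidate splits, matching the domain of the corresponding Figure 9 sub-figure.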
The time-resolved multirelational VCPs almost always outperform their non-longitudinal unirelational counterparts for some split value. The only counterexample is condmat, for which VCP 3,1,0 outperforms VCP 3,2,0. We additionally observe that while VCP 4,r,0 outperforms VCP 3,r,0 in all cases except condmat and nearly ties for hepth-collab AUPR and huddle AUROC, the addition of temporal discrimination into the models appears to allow VCP 4,r,0 significantly greater explanatory power across all networks. Consistent with our hypothesis regarding the significance of recency, performance increases as the second period encompasses fewer, more temporally proximate events. This confirmation is particularly stark because, excepting condmat, the highest-performing split places only s_t−1 in the second snapshot, and yet this outperforms models constructed without splitting the data.

Effectiveness of temporal data
Having demonstrated the importance of splitting periods and recency, we compare the performance of the temporally non-resolved baseline with that of evenly and exponentially spaced snapshots in Table 9. We are interested in how much the addition of temporal information can improve the results. Gray cells indicate data sets for which there are insufficient chunks to meaningfully differentiate equal and exponential snapshot sizes with three snapshots. We consider it critical to compare the different variations using the same volume and quality of data, so we deviate from the method displayed in Figure 7 in two minor ways. First, to prevent explanations of differences in performance due to variations in data, all methods use the entire quantity of historically available data for training regardless of how that data is divided into periods. In both equal and exponentially growing snapshot sizes, when there are too few or too many chunks for assignment, we select the oldest snapshot to deviate from its ideal size. This decision is based on the premise, substantiated by Figure 9, that this is the snapshot of least significance to model performance. Second, to help account for potential periodic biases, especially in the huddle data set, the most recent snapshot in the exponential results starts with two rather than one underlying chunk. These results show that temporally aware predictors universally outperform the most directly comparable non-temporal baselines. Gains are consistently large when moving from no temporal awareness to division of time into two snapshots with VCP_{n,2,0}. They are less consistent moving from VCP_{n,2,0} to three snapshots with VCP_{n,3,0}. In particular, we note that condmat and hepth-collab do not offer significant gains, or even suffer losses, when moving to three snapshots. We point out the small relative volume of data in these cases and cite the curse of dimensionality.
As the number of distinct temporal-structural elements increases, the amount of information contained within any one element is reduced as the data falls into increasingly many fine-grained buckets. With the voluminous dblp-collab and huddle data sets, there is still sufficient data to lend statistical credence to the greater number of features, and this is reflected by large performance gains when moving from VCP_{4,2,0} to VCP_{4,3,0}.
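The exponentially growing snapshot sizes, with the oldest snapshot absorbing any deviation from its ideal size, can be sketched as follows (a hypothetical helper reflecting our reading of the scheme above; the newest snapshot defaults to two underlying chunks as described):

```python
def exponential_snapshots(num_chunks, num_snapshots, newest_size=2):
    """Partition chunk indices 0..num_chunks-1 into snapshots whose ideal
    sizes double with age, oldest first. The oldest snapshot absorbs any
    surplus or deficit, since (per Figure 9) it matters least to model
    performance. Assumes num_chunks is large enough that the oldest
    snapshot remains non-empty after adjustment."""
    sizes = [newest_size * 2 ** k for k in range(num_snapshots)]
    sizes = sizes[::-1]                      # reorder oldest first
    sizes[0] += num_chunks - sum(sizes)      # oldest deviates from ideal
    snapshots, start = [], 0
    for size in sizes:
        snapshots.append(list(range(start, start + size)))
        start += size
    return snapshots
```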

Computational challenges
The first computational challenge in highly multirelational environments relates to the time and space required to compute and store the translational subgraph-to-element vector. In our coverage of VCP theory, we described static and dynamic subgraph-to-element isomorphism class computations. Several considerations motivate the use of one or the other. Static computation gives contiguous, invertible indices for consecutive VCP elements and offers the fastest possible subgraph-to-element translations. Memoized dynamic computation, though potentially slower, also potentially decreases the amount of memory necessary to create non-contiguous but unique indices for VCP elements by identifying and storing only elements that are observed. For static computation, the space required to represent the translation from subgraphs in G_{n,r,d} to elements in VCP_{n,r,d} increases in direct proportion to |G_{n,r,d}|. If the VCP vector is large, then constructing the full translational vector has high cost in time and space. Dynamic computation using arbitrary-scale integer indices overcomes this challenge and makes it possible to handle VCP vectors with extremely large cardinalities. For instance, we have effectively applied VCP analysis to semantic graphs in which the vector contained 7.66 × 10^53 elements. The ontological consistency in the semantic graphs placed tight constraints on VCP population, and fewer than 1000 elements were actually present. Dynamic translation incurs only the cost of these elements.
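A minimal sketch of the memoized dynamic translation (class and method names are ours; canonicalizing subgraph addresses into isomorphism class representatives is assumed to happen upstream):

```python
class DynamicElementIndex:
    """Memoized dynamic subgraph-to-element translation. Arbitrary-precision
    subgraph addresses are mapped to small compact indices on first
    observation, so only elements actually present in the graph ever
    consume memory."""

    def __init__(self):
        self._index = {}  # canonical subgraph address -> compact index

    def element(self, address):
        # setdefault memoizes: an unseen address gets the next compact index
        return self._index.setdefault(address, len(self._index))

    def __len__(self):
        return len(self._index)
```

Because Python integers are arbitrary precision, an address drawn from a universe of 7.66 × 10^53 possible elements costs no more to store than the handful of elements actually observed.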
The second challenge is related to allocation costs, output volume, and limitations of disk storage. Output volume grows with the size of the VCP vector. Since large networks may contain billions of edges and link prediction target pair sets may be even larger, the space necessary to represent the output of highly multirelational VCP analysis is problematic. As we showed with Figure 4, even in worst-case networks, the number of non-zero elements in the vector may be small even when its cardinality is large. Even before the value of r grows enough to constrain subgraph computation and space for translation vectors, the cost of repeatedly allocating and deallocating the space for large VCP vectors becomes troublesome. The solution is to maintain an associative data structure of the vector elements that exist for a given vertex pair, which will often be a small fraction even of the subset of elements existing in the graph. Rather than outputting the full vector, we output a list of (index, count) pairs, removing the need to allocate prohibitive amounts of space for every element and substantially reducing the cost of printing and storing the output. From a classification perspective, this offers zero-cost filtering of VCP elements for which there is no representative membership in the training set, sacrificing no information.
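A sketch of the sparse output scheme, assuming a stream of element indices produced by embedding one vertex pair in its surrounding subgraphs (the function name is ours):

```python
from collections import Counter

def sparse_vcp_output(element_indices):
    """Emit a VCP vector as sorted (index, count) pairs instead of a dense
    vector. Only elements observed for this vertex pair are ever stored,
    so no space is allocated for the full vector."""
    counts = Counter(element_indices)  # associative structure of observed elements
    return sorted(counts.items())
```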
These solutions make VCP analysis feasible for nearly all combinations of n, r, and d likely to appear in real-world data. We next turn to computational challenges involved in performing classification with VCP output. The dblp-collab network has 1.07 × 10^7 potential ℓ = 2 links that populate 12,436 VCP_{4,2,0} elements with non-zero values. The huddle network has 7.29 × 10^6 potential ℓ = 2 links that populate 16,635 VCP_{4,2,0} elements. Assuming 8-byte feature representations for 10 million training instances, typical of ℓ = 2 link prediction in moderately sized social networks, the in-memory data set footprint for fully populated VCP_{4,3,0} is 5.2 TB. This is strictly an engineering problem, and many solutions are readily available. Information density in VCP elements varies widely, so dimensionality reduction and feature selection are well suited to reducing data volume. These methods must be exercised strategically, however. We cannot load 5.2 TB of data into memory to run such procedures. Processing the training data file one feature at a time requires reading more than 100 PB of I/O. Reading the file once and using process substitution requires thousands of simultaneously active processes and completely overwhelms the scheduler. We can first reduce data volumes to arbitrary levels by undersampling, which also reduces the number of pairs for which VCP vectors must be computed. In dblp-collab, when the negative class is reduced to a 3:1 ratio, only 88,744 instances remain. This ratio, which was practicable enough to obtain our results, makes many dimensionality reduction and feature selection schemes feasible, and even inefficient implementations of classification algorithms such as naïve Bayes classifiers and decision forests can process such volumes of data in tractable time.
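The undersampling step might be sketched as follows (a hypothetical helper, not the paper's implementation):

```python
import random

def undersample(pairs, labels, ratio=3, seed=0):
    """Reduce the negative class to at most `ratio` negatives per positive.
    Returns a shuffled list of (pair, label) tuples."""
    rng = random.Random(seed)
    positives = [p for p, y in zip(pairs, labels) if y == 1]
    negatives = [p for p, y in zip(pairs, labels) if y == 0]
    kept = rng.sample(negatives, min(len(negatives), ratio * len(positives)))
    sample = [(p, 1) for p in positives] + [(p, 0) for p in kept]
    rng.shuffle(sample)
    return sample
```

Because undersampling happens before feature extraction, it also reduces the number of pairs for which VCP vectors must be computed at all.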
Compression methods that are lossless with respect to class entropy, such as binary quantization of VCP elements for which all non-zero values correspond to only a single class, can also make data set storage and processing more efficient.
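A sketch of the quantization test, assuming per-instance feature values and class labels (function names are ours):

```python
def quantizable(values, labels):
    """True when binarizing this VCP element loses no class information:
    every non-zero value occurs only in instances of a single class."""
    classes = {y for v, y in zip(values, labels) if v != 0}
    return len(classes) <= 1

def binarize(values):
    """Binary quantization: replace each count with a presence bit."""
    return [1 if v != 0 else 0 for v in values]
```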
An even more scalable solution results from a consequence of the curse of dimensionality. We observe that as the number of VCP elements increases for a given graph, the number of distinct non-zero values within each element decreases due to increasing information sparsity. Figure 10 shows the cumulative distribution of distinct values across VCP_{4,2,0} and VCP_{4,3,0}. In all cases, though there are more than 60 times as many features with four relations, the sum of the cardinalities of the sets of distinct values across the features is less than doubled. We use this fact to implement a single-pass ordered map representation of each feature in the data set that associates values with their frequency of occurrence. The representation is highly compact: the huddle data set, which requires more than 80 GB of memory in matrix representation, requires only 1.4 GB in the map. This method effectively implements a fast, non-contiguous counting sort that allows for nearly instantaneous computation of feature selection measures such as information gain. Information gain is fast, but the well-known limitation of all filtering methods of feature selection is that they fail to consider feature subsets and thus cannot account for feature orthogonality, correlation, and dependencies. Nonetheless, when we constructed models using only the 1000 strongest VCP elements, we were able to reduce classifier training times by a factor of 100, reduce memory requirements by a factor of 50, and achieve AUROC and AUPR performance within 2% of that achieved by models bestowed with all VCP elements.
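A sketch of information gain computed from such value-to-frequency maps, built in a single pass over one feature (names are ours):

```python
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy in bits of a list of class counts."""
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

def information_gain(values, labels):
    """Information gain of one VCP element, computed from a single-pass
    map of value -> per-class frequencies rather than a dense matrix."""
    by_value = {}  # feature value -> Counter of class labels
    for v, y in zip(values, labels):
        by_value.setdefault(v, Counter())[y] += 1
    n = len(labels)
    h_before = entropy(list(Counter(labels).values()))
    h_after = sum(sum(c.values()) / n * entropy(list(c.values()))
                  for c in by_value.values())
    return h_before - h_after
```

Since the number of distinct values per feature is small, each map stays tiny even when the instance count is in the millions.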

Conclusions
VCP is a new method for link analysis with solid theoretical roots. We presented evidence of its utility in some applications here, but there are many possible applications. It is useful for post hoc analysis of classification output and comparative analysis of network link structure, and it competes effectively with existing link prediction methods, often outperforming them by wide margins. In well-established networks with past observational data, VCP can serve as a sensitive change detection mechanism for tracking the evolving link formation process. In addition to link prediction and link analysis for the purpose of network growth modeling, VCP can be used for link or vertex pair clustering. Its ability to handle multiple relations naturally extends its utility into many domains and offers an alternative to the practice of combining or discarding edge types or edge directionality. We showed how VCP analysis can be useful in incorporating temporality into link prediction with a multirelational encoding, and this offered further improvements beyond what we had already obtained with unirelational VCP predictions. Optimized implementations of the VCP_{3,r,d} and VCP_{4,r,d} algorithms are available at https://github.com/rlichtenwalter/vcp. The repository includes subgraph-to-element mapping software, which generates static subgraph-to-element mappings for 3 ≤ n ≤ 8, 1 ≤ r ≤ 8, d ∈ {0, 1} and dynamic subgraph-to-element mappings for any values of n, r, and d. Algorithms for VCP_{3,1,d} and VCP_{4,1,d} are also integrated into the LPmade link prediction framework and are available at http://mloss.org/software/view/307/. Most of the data sets are publicly available elsewhere, but we have also published all public data sets at http://nd.edu/~dial/vcp/.

Appendix A
In the original exposition of VCP analysis in (Lichtenwalter and Chawla 2012), we provided the equation for the number of subgraphs in universes G_{n,r,0} given by the tuples (n, r) and deliberately excluded a target prediction relation between v_s and v_t from consideration. The equation appeared thus:

2^{r((n(n−1)/2) − 1)}

The edge e_{s,t} received special treatment for two reasons. First, particular analytical domains offer additional knowledge about this edge. For instance, in link prediction we will usually be interested in analyzing only vertex pairs (v_s, v_t) for which e_{s,t} ∉ E. In link clustering, we are likely interested in e_{s,t} ∈ E. Second, for a given (v_s, v_t), only 2^{−r} of all elements will be non-zero if the edge does not receive special treatment, because e_{s,t} takes a constant form in all embeddings. The information about the nature of e_{s,t} is thus more succinctly described, when it is desired at all, as a single integer taking values [0, 2^{(d+1)r} − 1], which is sufficient to fully define the edge.
The treatment in Section "Vertex collocation profiles" gains greatly in theoretical consistency and mathematical coherence by not treating e_{s,t} specially. Additionally, goals such as tensor analysis of VCP vectors may gain from the verbose representation. The provided VCP code implementations are fully aligned with the presentation in Section "Vertex collocation profiles". When one is using VCP for link prediction specifically, it may be convenient to alter this scheme by effectively shifting the lexicographical ordering of the edges so that e_{s,t} is the edge of highest value, as presented in (Lichtenwalter and Chawla 2012). By placing that edge in the most significant position, in the common link prediction case where no edge exists between v_s and v_t, unpopulated elements are grouped in the second half of the vector. It is trivial to modify our code to achieve this by adjusting the value of enum constants and using a modified subgraph-to-element mapping. Then one may truncate the vector either prior to outputting it in code or afterward with, for instance, the Unix cut command.
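A sketch of the index reordering, assuming the Appendix B addressing in which e_{s,t} occupies the least significant r bits (the function is ours):

```python
def shift_target_edge(index, total_bits, r):
    """Move the r bits encoding e_{s,t} from least to most significant
    position. In link prediction, e_{s,t} is absent, so those bits are
    zero and every populated element lands in the low 2^(total_bits - r)
    indices; the remainder of the vector can then be truncated."""
    est_bits = index & ((1 << r) - 1)  # e_{s,t} was least significant
    rest = index >> r                  # all other edges
    return (est_bits << (total_bits - r)) | rest
```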

Appendix B
An anonymous reviewer has suggested presenting a more intuitive explanation of subgraph addressing. We maintain our primary explanation in the main body for its rigor, its ties to spectral graph theory, and its mathematical clarity in the permutation approach to identifying isomorphisms. We recognize, however, that the following explanation is more intuitive for many. We note that this only covers subgraph addressing and not VCP addressing. We thank the anonymous reviewer for providing the following explanation, which we have adjusted somewhat.
Our goal is to map a subgraph to an index in the subgraph universe G_{n,r,d}, which has cardinality |G_{n,r,d}| = 2^{(1/2)n(n−1)(d+1)r}. Let the vertices of the subgraph be labeled v_1 = v_s, v_2 = v_t, v_3, . . . , v_n. Assume the graph is undirected, so d = 0. Then there are p = C(n, 2) possible edges. Without loss of generality, we assume that these edges are lexicographically ordered and that the lower index is always first, so that edge 1 is (v_s, v_t), edge 2 is (v_s, v_3), and edge p is (v_{n−1}, v_n). The index into the VCP vector is represented in binary with most significant bits first as:

Edge p | Edge p−1 | · · · | Edge 1

Each edge is an r-bit value representing which relationships are present. For example, if there are r = 3 possible edge types, then the presence of the first and second is represented as 011, and the presence of none is represented as 000.
If the network is directed, so d = 1, then the number of possible edges doubles, since we separately consider (v_i, v_j) and (v_j, v_i). We maintain the same lexicographical ordering on edges, but we admit both directions. So the representation is:

Edge 2p | Edge 2p−1 | · · · | Edge 1

Clearly the number of bits is 2^d · C(n, 2) · r. Since 2^d = d + 1 for d ∈ {0, 1}, we can see that this is the correct number of indices.
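The addressing can be sketched as follows (our code; for d = 1 we adopt one possible convention, placing (v_i, v_j) immediately before (v_j, v_i) in the edge ordering):

```python
def subgraph_index(n, r, d, edges):
    """Index of a subgraph in G_{n,r,d}. `edges` maps a vertex pair (i, j),
    1-based with v_1 = v_s and v_2 = v_t, to an r-bit relation mask;
    absent pairs contribute 0. Edges are lexicographically ordered and
    edge 1 is least significant, matching the layout above."""
    pairs = []
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            pairs.append((i, j))
            if d == 1:
                pairs.append((j, i))  # admit both directions
    index = 0
    for k, pair in enumerate(pairs):  # edge k+1 occupies bits [k*r, (k+1)*r)
        index |= edges.get(pair, 0) << (k * r)
    return index
```

With n = 3, r = 3, d = 0 there are 9 bits, so the maximal index is 511 = 2^{(1/2)·3·2·1·3} − 1, matching the cardinality formula above.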