06 Signals, Cochains & Feature Spaces
With the domain structure (the complex) established, we now need to put data on it. In physics, you assign fields to points (scalar fields), curves (line integrals), surfaces (flux), and volumes (densities). The discrete version of this is a cochain.
Let $\mathcal{X}_k$ denote the set of all rank-$k$ cells in a CC. A $k$-cochain is a function $\mathbf{h}: \mathcal{X}_k \to \mathbb{R}^{d_k}$ assigning a feature vector to each rank-$k$ cell. The space of all $k$-cochains is $C^k(\mathcal{X}; \mathbb{R}^{d_k}) \cong \mathbb{R}^{|\mathcal{X}_k| \times d_k}$.
Physics connection: This is exactly the discrete de Rham complex. In EM, 0-cochains correspond to scalar potentials at nodes, 1-cochains to line integrals of the vector potential along edges, 2-cochains to magnetic flux through faces, and 3-cochains to charge density in volumes. The TDL framework generalizes this by allowing learned feature vectors instead of single scalars.
The Full State of a CC
The complete state of a CC with maximum rank $K$ is a tuple of cochains, one per rank:

$$\mathbf{H} = \left(\mathbf{H}^{(0)}, \mathbf{H}^{(1)}, \ldots, \mathbf{H}^{(K)}\right), \qquad \mathbf{H}^{(k)} \in \mathbb{R}^{|\mathcal{X}_k| \times d_k}.$$

Each row of $\mathbf{H}^{(k)}$ is the feature vector of one rank-$k$ cell. A topological neural network layer takes this full tuple as input and produces an updated tuple as output: it updates features across all ranks simultaneously.
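As a concrete sketch, the full state is just one array per rank. The toy complex below (a single filled triangle) and the feature dimensions are illustrative choices, not values from the paper:

```python
import numpy as np

# Toy CC with max rank K=2: a single filled triangle.
# 3 vertices, 3 edges, 1 face; feature dims d_k are arbitrary choices.
rng = np.random.default_rng(0)
n_cells = {0: 3, 1: 3, 2: 1}   # |X_0|, |X_1|, |X_2|
d = {0: 4, 1: 8, 2: 16}        # d_0, d_1, d_2

# H[k] is the rank-k cochain: one feature row per rank-k cell,
# i.e. an array of shape (|X_k|, d_k).
H = {k: rng.standard_normal((n_cells[k], d[k])) for k in range(3)}

for k in range(3):
    assert H[k].shape == (n_cells[k], d[k])
```

A layer then maps the dict `H` to an updated dict of the same shapes.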
07 Neighborhood & Adjacency
In a graph, the neighborhood of a node is simple: the set of nodes connected by an edge. In a CC, there are many possible notions of neighborhood, because cells can be related by sharing vertices, sharing boundaries, or being contained in one another. The Hajij et al. paper systematically catalogs these.
Three Types of Adjacency
Cells in a CC can be related in three basic ways, each of which is encoded as a matrix:
- Incidence: a $k$-cell lies on the boundary of a $(k\!+\!1)$-cell.
- Lower adjacency: two $k$-cells share a $(k\!-\!1)$-cell on their boundaries.
- Upper adjacency: two $k$-cells both lie on the boundary of a common $(k\!+\!1)$-cell.
The Neighborhood Matrix Zoo
For a CC with ranks 0, 1, ..., $K$, the paper defines a rich set of matrices encoding different neighborhood relations. The most important ones:
| Matrix | Dimension | Meaning |
|---|---|---|
| $\mathbf{B}_{k,k+1}$ | $|\mathcal{X}_k| \times |\mathcal{X}_{k+1}|$ | Incidence: which $k$-cells are on the boundary of which $(k\!+\!1)$-cells |
| $\mathbf{A}_{k,\downarrow} = \mathbf{B}_{k-1,k}^\top \mathbf{B}_{k-1,k}$ | $|\mathcal{X}_k| \times |\mathcal{X}_k|$ | Lower adjacency: $k$-cells sharing a $(k\!-\!1)$-cell boundary |
| $\mathbf{A}_{k,\uparrow} = \mathbf{B}_{k,k+1} \mathbf{B}_{k,k+1}^\top$ | $|\mathcal{X}_k| \times |\mathcal{X}_k|$ | Upper adjacency: $k$-cells sharing a $(k\!+\!1)$-cell coface |
Standard graph adjacency is a special case. Up to its diagonal (which records vertex degrees), the usual graph adjacency matrix $\mathbf{A}$ is the upper adjacency of rank-0 cells (vertices) via rank-1 cells (edges): $\mathbf{A}_{0,\uparrow} = \mathbf{B}_{0,1}\mathbf{B}_{0,1}^\top$. The entire GNN framework operates with just this one matrix; topological deep learning opens up all the others.
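These matrix identities are easy to verify on a small example. Below, a minimal sketch on a single filled triangle (the incidence matrices are written out by hand for this toy complex):

```python
import numpy as np

# Incidence matrices for a filled triangle:
# vertices v0,v1,v2; edges e0=(v0,v1), e1=(v1,v2), e2=(v0,v2); one face f0.
B01 = np.array([[1, 0, 1],     # row = vertex, column = edge:
                [1, 1, 0],     # which vertices bound which edges
                [0, 1, 1]])    # shape |X_0| x |X_1|
B12 = np.ones((3, 1), dtype=int)  # all three edges bound f0; |X_1| x |X_2|

# Upper adjacency of vertices via edges. The diagonal holds vertex
# degrees; the off-diagonal part is the ordinary graph adjacency matrix.
A0_up = B01 @ B01.T
A = A0_up - np.diag(np.diag(A0_up))

# Lower adjacency of edges: edges sharing a boundary vertex.
A1_down = B01.T @ B01

# Every vertex pair of the triangle is adjacent.
assert A.tolist() == [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
# Every edge pair shares exactly one vertex (diagonal = 2 boundary vertices).
assert A1_down.tolist() == [[2, 1, 1], [1, 2, 1], [1, 1, 2]]
```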
08 Higher-Order Message Passing
With the neighborhood structure defined, we can now write down the general message-passing scheme on a CC. This is the paper's main contribution: a unified framework for updating features on cells of any rank, using messages from cells of any rank.
The General Update Rule
For a rank-$k$ cell $x$, the update at layer $\ell$ aggregates messages over a collection of neighborhood functions $\mathcal{N}_1, \ldots, \mathcal{N}_n$:

$$\mathbf{h}_x^{(\ell+1)} = \beta\!\left(\mathbf{h}_x^{(\ell)},\; \bigotimes_{i=1}^{n} \bigoplus_{y \in \mathcal{N}_i(x)} \alpha_i\!\left(\mathbf{h}_x^{(\ell)}, \mathbf{h}_y^{(\ell)}\right)\right),$$

where $\alpha_i$ computes messages, $\bigoplus$ aggregates within a neighborhood, $\bigotimes$ aggregates across neighborhoods, and $\beta$ updates the cell's state. The neighborhoods include:
- the boundary cells of $x$ (rank $k\!-\!1$),
- the coboundary cells of $x$ (rank $k\!+\!1$),
- the lower-adjacent rank-$k$ cells (sharing a boundary cell with $x$),
- the upper-adjacent rank-$k$ cells (sharing a coface with $x$).
In Matrix Form
For a full layer operating on all rank-$k$ cells simultaneously, the update takes an elegant matrix form. A simplified version for a rank-$k$ cochain $\mathbf{H}^{(k)}$:

$$\mathbf{H}^{(k),\ell+1} = \sigma\!\left(\mathbf{A}_{k,\uparrow}\mathbf{H}^{(k),\ell}\mathbf{W}_{\uparrow} + \mathbf{A}_{k,\downarrow}\mathbf{H}^{(k),\ell}\mathbf{W}_{\downarrow} + \mathbf{B}_{k-1,k}^\top\mathbf{H}^{(k-1),\ell}\mathbf{W}_{\mathcal{B}} + \mathbf{B}_{k,k+1}\mathbf{H}^{(k+1),\ell}\mathbf{W}_{\mathcal{C}}\right)$$
This should look familiar — it's a direct generalization of the GCN update $\mathbf{H}^{(\ell+1)} = \sigma(\hat{\mathbf{A}}\mathbf{H}^{(\ell)}\mathbf{W}^{(\ell)})$, but now with multiple adjacency matrices operating across multiple ranks.
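A minimal NumPy sketch of such a layer, again on the toy filled-triangle complex. The weight names, the ReLU nonlinearity, and the toy dimensions are our illustrative choices, not the paper's:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def homp_layer_rank_k(Hk, Hkm1, Hkp1, Bkm1k, Bkk1, W_up, W_down, W_bnd, W_cob):
    """Simplified higher-order message-passing update for rank-k cells.

    Messages arrive via upper/lower adjacency (same rank) and via
    incidence with the ranks below (k-1) and above (k+1).
    """
    A_up = Bkk1 @ Bkk1.T       # upper adjacency of rank-k cells
    A_down = Bkm1k.T @ Bkm1k   # lower adjacency of rank-k cells
    return relu(A_up @ Hk @ W_up
                + A_down @ Hk @ W_down
                + Bkm1k.T @ Hkm1 @ W_bnd   # up from rank k-1 boundaries
                + Bkk1 @ Hkp1 @ W_cob)     # down from rank k+1 cofaces

# Toy example: update the edge (rank-1) features of a filled triangle.
rng = np.random.default_rng(0)
B01 = np.array([[1, 0, 1], [1, 1, 0], [0, 1, 1]])  # vertices x edges
B12 = np.ones((3, 1))                              # edges x faces
d0, d1, d2, d_out = 4, 8, 16, 8
H0, H1, H2 = (rng.standard_normal((n, d))
              for n, d in [(3, d0), (3, d1), (1, d2)])
Ws = [rng.standard_normal((d, d_out)) for d in (d1, d1, d0, d2)]

H1_new = homp_layer_rank_k(H1, H0, H2, B01, B12, *Ws)
assert H1_new.shape == (3, d_out)
```

Setting the boundary and coboundary weights to zero and keeping only the upper-adjacency term recovers a GCN-style update on a single rank.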
The key structural insight: In a standard GNN, information can only flow along edges (rank 1). In HOMP, information flows up (from boundaries to volumes), down (from volumes to boundaries), and laterally (between same-rank cells that share higher or lower cells). This enables the network to capture multi-scale, multi-resolution physics natively.