Chapter 06

Signals, Neighborhoods & Message Passing

Cochains as feature spaces, the three types of adjacency, and the general higher-order message passing framework.

In this chapter
  1. 06 Signals, Cochains & Feature Spaces
  2. 07 Neighborhood & Adjacency
  3. 08 Higher-Order Message Passing

06 Signals, Cochains & Feature Spaces

With the domain structure (the complex) established, we now need to put data on it. In physics, you assign fields to points (scalar fields), curves (line integrals), surfaces (flux), and volumes (densities). The discrete version of this is a cochain.

Definition — $k$-Cochain

Let $\mathcal{X}_k$ denote the set of all rank-$k$ cells in a combinatorial complex (CC). A $k$-cochain is a function $\mathbf{h}: \mathcal{X}_k \to \mathbb{R}^{d_k}$ assigning a feature vector to each rank-$k$ cell. The space of all $k$-cochains is $C^k(\mathcal{X}; \mathbb{R}^{d_k}) \cong \mathbb{R}^{|\mathcal{X}_k| \times d_k}$.

[Figure panels: features on a complex with five vertices, five edges, and one face. 0-cochain $\mathbf{h}^0 \in \mathbb{R}^{5 \times d_0}$, e.g. material type, temperature; 1-cochain $\mathbf{h}^1 \in \mathbb{R}^{5 \times d_1}$, e.g. contact conductance, flux; 2-cochain $\mathbf{h}^2 \in \mathbb{R}^{1 \times d_2}$, e.g. effective conductivity tensor.]
Figure 6.1. A cochain assigns feature vectors to cells of each rank. Crucially, the feature dimension $d_k$ can differ by rank — vertex features (material properties at a point) can have different dimensionality than edge features (interface properties) or face features (effective tensor properties).

Physics connection: This is exactly the discrete de Rham complex. In electromagnetism, 0-cochains correspond to scalar potentials at nodes, 1-cochains to line integrals of the vector potential along edges, 2-cochains to magnetic flux through faces, and 3-cochains to charge density in volumes. The TDL framework generalizes this by allowing learned feature vectors instead of single scalars.

The Full State of a CC

The complete state of a CC with maximum rank $K$ is a tuple of cochains at each rank:

State of a Combinatorial Complex
$$\mathbf{H} = \left(\mathbf{H}^{(0)}, \, \mathbf{H}^{(1)}, \, \ldots, \, \mathbf{H}^{(K)}\right) \quad \text{where} \quad \mathbf{H}^{(k)} \in \mathbb{R}^{|\mathcal{X}_k| \times d_k}$$

Each row of $\mathbf{H}^{(k)}$ is the feature vector of one rank-$k$ cell. A topological neural network layer takes this full tuple as input and produces an updated tuple as output — it updates features across all ranks simultaneously.
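As a concrete sketch, the state tuple can be held as a plain tuple of arrays, one matrix per rank. The cell counts and feature dimensions below are made up to match the five-vertex, five-edge, one-face example of Figure 6.1; this is an illustration, not the paper's implementation.

```python
import numpy as np

# Hypothetical complex: 5 vertices (rank 0), 5 edges (rank 1), 1 face (rank 2).
n_cells = {0: 5, 1: 5, 2: 1}   # |X_k| for each rank k
d = {0: 3, 1: 2, 2: 4}         # feature dimension d_k may differ by rank

rng = np.random.default_rng(seed=0)

# The full state H is a tuple of cochains: H[k] has shape (|X_k|, d_k),
# one feature row per rank-k cell.
H = tuple(rng.normal(size=(n_cells[k], d[k])) for k in range(3))

print(H[0].shape, H[1].shape, H[2].shape)  # (5, 3) (5, 2) (1, 4)
```

A layer then maps this tuple to a new tuple of the same per-rank shapes (possibly with new feature dimensions), updating all ranks at once.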



07 Neighborhood & Adjacency

In a graph, the neighborhood of a node is simple: the set of nodes connected by an edge. In a CC, there are many possible notions of neighborhood, because cells can be related by sharing vertices, sharing boundaries, or being contained in one another. The Hajij et al. paper systematically catalogs these.

Three Types of Adjacency

[Figure panels: (1) Co-adjacency: same rank, sharing a boundary cell; two rank-2 cells $\sigma_1$ and $\sigma_2$ sharing an edge are adjacent; encoded by an $\mathbf{A}_{k,k}$ matrix (graph adjacency $\mathbf{A}$ is the $\mathbf{A}_{0,0}$ case via shared edges). (2) Incidence: cross-rank containment; a rank-1 cell $\tau \subset \sigma$ (rank 2) is incident to $\sigma$, lying on its boundary; encoded by $\mathbf{B}_{k,k+1}$ (boundary maps rank $k+1$ to rank $k$, coboundary maps rank $k$ to rank $k+1$). (3) Upper/lower adjacency: same rank, sharing a higher or lower cell; edges $e_1$ and $e_2$ sharing the coface $\sigma$ are upper-adjacent; $\mathrm{coA}_{k,k} = \mathbf{B}\mathbf{B}^\top$.]
Figure 7.1. Three types of adjacency in a topological domain. Co-adjacency (same rank, shared lower cell) generalizes the graph adjacency matrix. Incidence (cross-rank containment) captures the boundary/coboundary relationship. Upper/lower adjacency (same rank, shared higher/lower cell) enables message passing mediated by higher-order context.

The Neighborhood Matrix Zoo

For a CC with ranks 0, 1, ..., $K$, the paper defines a rich set of matrices encoding different neighborhood relations. The most important ones:

- $\mathbf{B}_{k,k+1}$, of dimension $|\mathcal{X}_k| \times |\mathcal{X}_{k+1}|$. Incidence: which $k$-cells are on the boundary of which $(k\!+\!1)$-cells.
- $\mathbf{A}_{k,\downarrow} = \mathbf{B}_{k-1,k}^\top \mathbf{B}_{k-1,k}$, of dimension $|\mathcal{X}_k| \times |\mathcal{X}_k|$. Lower adjacency: $k$-cells sharing a $(k\!-\!1)$-cell on their boundary.
- $\mathbf{A}_{k,\uparrow} = \mathbf{B}_{k,k+1} \mathbf{B}_{k,k+1}^\top$, of dimension $|\mathcal{X}_k| \times |\mathcal{X}_k|$. Upper adjacency: $k$-cells sharing a $(k\!+\!1)$-cell coface.

Standard graph adjacency is a special case. The usual graph adjacency matrix $\mathbf{A}$ is the upper adjacency of rank-0 cells (vertices) via rank-1 cells (edges): $\mathbf{A} = \mathbf{A}_{0,\uparrow} = \mathbf{B}_{0,1}\mathbf{B}_{0,1}^\top$. The entire GNN framework operates with just this one matrix. Topological deep learning opens up all the others.
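To make this concrete, here is a minimal NumPy sketch (the 4-cycle graph is an arbitrary example, not from the paper) showing that $\mathbf{B}_{0,1}\mathbf{B}_{0,1}^\top$ recovers the graph adjacency matrix up to its diagonal:

```python
import numpy as np

# Vertex-edge incidence B_{0,1} for a 4-cycle: rows = vertices, cols = edges.
# Column j has a 1 in the rows of the two vertices that edge e_j connects.
B01 = np.array([
    [1, 0, 0, 1],   # v0 lies on e0 and e3
    [1, 1, 0, 0],   # v1 lies on e0 and e1
    [0, 1, 1, 0],   # v2 lies on e1 and e2
    [0, 0, 1, 1],   # v3 lies on e2 and e3
])

# Upper adjacency of vertices: A_{0,up} = B_{0,1} B_{0,1}^T.
A_up = B01 @ B01.T
# Off-diagonal entries count shared edges; the diagonal holds vertex degrees.
# Zeroing the diagonal recovers the ordinary graph adjacency matrix A.
A = A_up - np.diag(np.diag(A_up))

# Lower adjacency of edges: A_{1,down} = B_{0,1}^T B_{0,1}
# (edges sharing a vertex; its diagonal counts vertices per edge).
A1_down = B01.T @ B01

print(A)  # adjacency of the 4-cycle: each vertex linked to its two neighbors
```

The same two products, applied at any rank, generate the whole matrix zoo above.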



08 Higher-Order Message Passing

With the neighborhood structure defined, we can now write down the general message-passing scheme on a CC. This is the paper's main contribution: a unified framework for updating features on cells of any rank, using messages from cells of any rank.

The General Update Rule

For a rank-$k$ cell $x$, the update at layer $\ell$ aggregates messages from multiple neighborhood types:

Higher-Order Message Passing (HOMP)
$$\mathbf{h}_x^{(k, \ell+1)} = \text{AGG}\!\left(\mathbf{h}_x^{(k,\ell)}, \;\; \bigoplus_{\mathcal{N} \in \{\mathcal{N}_\downarrow, \mathcal{N}_\uparrow, \mathcal{N}_{\text{adj}}, \mathcal{N}_{\text{inc}}\}} \bigoplus_{y \in \mathcal{N}(x)} m_\mathcal{N}\!\left(\mathbf{h}_x^{(k,\ell)},\, \mathbf{h}_y^{(r_y, \ell)}\right)\right)$$

where $r_y$ denotes the rank of cell $y$ and the neighborhoods include:

[Figure: cell $x$ (rank $k$) with four incoming channels: boundary $\mathcal{N}_\downarrow$ (rank $k{-}1$), cofaces $\mathcal{N}_\uparrow$ (rank $k{+}1$), lower-adjacent $\mathcal{N}_{\mathrm{adj},\downarrow}$ and upper-adjacent $\mathcal{N}_{\mathrm{adj},\uparrow}$ (rank $k$); each arrow is a separate message-passing channel with its own learnable parameters.]
Figure 8.1. A rank-$k$ cell receives messages from four types of neighborhoods: boundary cells (rank $k\!-\!1$), coface cells (rank $k\!+\!1$), lower-adjacent same-rank cells (sharing a boundary cell), and upper-adjacent same-rank cells (sharing a coface). Each channel has independent learnable weights.
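A minimal per-cell sketch of the HOMP rule above (not the paper's code: the caller supplies the neighborhood lists and message functions, both $\bigoplus$ operators are taken as sums, and AGG is tanh applied to the self state plus the pooled messages):

```python
import numpy as np

def homp_update(h_x, neighborhoods, messages, agg=np.tanh):
    """One HOMP step for a single cell x (a minimal sketch).

    h_x           : (d,) current feature vector of cell x
    neighborhoods : dict mapping a neighborhood name to the list of
                    neighbor feature vectors {h_y : y in N(x)}
    messages      : dict mapping the same names to message functions
                    m_N(h_x, h_y) -> (d,)
    """
    total = np.zeros_like(h_x)
    for name, nbrs in neighborhoods.items():   # outer sum over neighborhood types
        m_N = messages[name]
        for h_y in nbrs:                       # inner sum over y in N(x)
            total += m_N(h_x, h_y)
    return agg(h_x + total)                    # AGG combines self state and messages

# Toy usage: two neighborhood types with simple hand-written messages.
h_x = np.zeros(4)
nbhds = {"up": [np.ones(4)], "down": [np.ones(4), np.ones(4)]}
msgs = {"up": lambda hx, hy: hy, "down": lambda hx, hy: 0.5 * hy}
h_new = homp_update(h_x, nbhds, msgs)
```

In a learned model each $m_\mathcal{N}$ would be a small neural network with its own parameters, and cross-rank messages would map a rank-$r_y$ feature vector into the rank-$k$ feature space.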

In Matrix Form

For a full layer operating on all rank-$k$ cells simultaneously, the update takes an elegant matrix form. Here's the simplified version for a rank-$k$ cochain $\mathbf{H}^{(k)}$:

Matrix form of one message-passing channel
$$\mathbf{H}^{(k, \ell+1)} = \sigma\!\left(\mathbf{A}_{k,\downarrow}\, \mathbf{H}^{(k,\ell)}\, \mathbf{W}_{\downarrow}^{(\ell)} + \mathbf{A}_{k,\uparrow}\, \mathbf{H}^{(k,\ell)}\, \mathbf{W}_{\uparrow}^{(\ell)} + \mathbf{B}_{k,k+1}\, \mathbf{H}^{(k+1,\ell)}\, \mathbf{W}_{\text{inc}}^{(\ell)} + \mathbf{H}^{(k,\ell)}\, \mathbf{W}_{\text{self}}^{(\ell)}\right)$$

This should look familiar — it's a direct generalization of the GCN update $\mathbf{H}^{(\ell+1)} = \sigma(\hat{\mathbf{A}}\mathbf{H}^{(\ell)}\mathbf{W}^{(\ell)})$, but now with multiple adjacency matrices operating across multiple ranks.
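A NumPy sketch of this one-channel layer follows; the function name, argument names, and toy shapes are illustrative choices, not the paper's API.

```python
import numpy as np

def homp_channel(Hk, Hk1, A_down, A_up, B, W_down, W_up, W_inc, W_self,
                 act=np.tanh):
    """One matrix-form HOMP channel updating all rank-k cells at once.

    Hk     : (n_k, d_k)    rank-k cochain H^(k)
    Hk1    : (n_k1, d_k1)  rank-(k+1) cochain H^(k+1)
    A_down : (n_k, n_k)    lower adjacency A_{k,down}
    A_up   : (n_k, n_k)    upper adjacency A_{k,up}
    B      : (n_k, n_k1)   incidence B_{k,k+1}
    W_*    : learnable weights; W_down, W_up, W_self are (d_k, d_out),
             W_inc is (d_k1, d_out)
    """
    return act(A_down @ Hk @ W_down
               + A_up @ Hk @ W_up
               + B @ Hk1 @ W_inc
               + Hk @ W_self)

# Toy shapes: 5 rank-k cells, 2 rank-(k+1) cells, d_k = 3, d_{k+1} = 2, d_out = 4.
rng = np.random.default_rng(1)
Hk, Hk1 = rng.normal(size=(5, 3)), rng.normal(size=(2, 2))
A = rng.integers(0, 2, size=(5, 5))
B = rng.integers(0, 2, size=(5, 2))
out = homp_channel(Hk, Hk1, A, A, B,
                   rng.normal(size=(3, 4)), rng.normal(size=(3, 4)),
                   rng.normal(size=(2, 4)), rng.normal(size=(3, 4)))
print(out.shape)  # (5, 4)
```

Setting $\mathbf{W}_\downarrow = \mathbf{W}_{\text{inc}} = \mathbf{W}_{\text{self}} = 0$ and keeping only the $\mathbf{A}_{k,\uparrow}$ term at rank 0 collapses this back to a plain GCN-style update.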

The key structural insight: In a standard GNN, information can only flow along edges (rank 1). In HOMP, information flows up (from boundaries to volumes), down (from volumes to boundaries), and laterally (between same-rank cells that share higher or lower cells). This enables the network to capture multi-scale, multi-resolution physics natively.