9. Hodge Theory & the Laplacian
If you know the graph Laplacian $\mathbf{L} = \mathbf{D} - \mathbf{A}$ (or its normalized variants), you know that it governs diffusion on graphs and that its eigenvectors provide a spectral basis for graph signals. The Hodge Laplacian generalizes this to all ranks.
For rank-$k$ cells in a simplicial or cell complex, the $k$-th Hodge Laplacian is:
$$\mathbf{L}_k = \underbrace{\mathbf{B}_{k-1,k}^\top \mathbf{B}_{k-1,k}}_{\text{lower Laplacian } \mathbf{L}_k^{\text{down}}} + \underbrace{\mathbf{B}_{k,k+1} \mathbf{B}_{k,k+1}^\top}_{\text{upper Laplacian } \mathbf{L}_k^{\text{up}}}$$

with the convention $\mathbf{B}_{-1,0} = 0$ and $\mathbf{B}_{K,K+1} = 0$. For $k=0$, the lower term vanishes and this recovers the graph Laplacian: $\mathbf{L}_0 = \mathbf{B}_{0,1}\mathbf{B}_{0,1}^\top$.
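As a concrete check, here is a minimal NumPy sketch (the toy complex and variable names are my own, not from the paper) that builds the boundary matrices of a small complex with four vertices, five edges, and one filled triangle, then assembles $\mathbf{L}_0$ and $\mathbf{L}_1$ from them:

```python
import numpy as np

# Toy complex: vertices {0,1,2,3}, edges (0,1),(0,2),(1,2),(1,3),(2,3),
# and one filled triangle (0,1,2).
# B1[v, e] = -1 / +1 if vertex v is the tail / head of oriented edge e.
B1 = np.array([
    [-1, -1,  0,  0,  0],
    [ 1,  0, -1, -1,  0],
    [ 0,  1,  1,  0, -1],
    [ 0,  0,  0,  1,  1],
], dtype=float)

# B2[e, t] = +/-1 for edge e's signed appearance in triangle t's boundary.
B2 = np.array([[1], [-1], [1], [0], [0]], dtype=float)

# Fundamental identity of a complex: the boundary of a boundary vanishes.
assert np.allclose(B1 @ B2, 0)

L0 = B1 @ B1.T                 # graph Laplacian (upper term only; B_{-1,0} = 0)
L1 = B1.T @ B1 + B2 @ B2.T     # down + up Hodge Laplacian on edges

# L0 reproduces D - A for the underlying graph.
D = np.diag([2.0, 3.0, 3.0, 2.0])
A = (np.abs(B1) @ np.abs(B1).T > 0).astype(float) - np.eye(4)
```

Note that the orientation signs cancel in $\mathbf{L}_0$, which is why the graph Laplacian can be written without ever choosing edge orientations.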
Physics connection (Hodge decomposition): The kernel of $L_k$ (harmonic cochains) corresponds to topological features: $\dim \ker L_k = \beta_k$, the $k$-th Betti number. For $k=0$, $\beta_0$ counts connected components; for $k=1$, $\beta_1$ counts independent loops. This is the discrete Hodge theorem, the discrete analogue of the decomposition of differential forms into exact, coexact, and harmonic components. Every $k$-cochain $\omega$ admits: $$\omega = \underbrace{d_{k-1} \alpha}_{\text{exact}} + \underbrace{d_k^* \beta}_{\text{coexact}} + \underbrace{\gamma}_{\text{harmonic}}$$ where $d_k$ is the coboundary operator ($B_{k,k+1}^\top$ in matrix form) and $d_k^*$ is its adjoint ($B_{k,k+1}$).
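Both statements can be verified numerically. The sketch below (reusing my toy complex from earlier: four vertices, five edges, one filled triangle) computes the Betti numbers as nullities and splits a random 1-cochain into its three Hodge components by orthogonal projection, which is valid because $B_1 B_2 = 0$ makes the exact and coexact subspaces orthogonal:

```python
import numpy as np

# Toy complex: 4 vertices, 5 edges, one filled triangle (0,1,2).
B1 = np.array([[-1, -1,  0,  0,  0],
               [ 1,  0, -1, -1,  0],
               [ 0,  1,  1,  0, -1],
               [ 0,  0,  0,  1,  1]], dtype=float)
B2 = np.array([[1], [-1], [1], [0], [0]], dtype=float)
L1 = B1.T @ B1 + B2 @ B2.T

# Betti numbers as nullities of the Hodge Laplacians.
beta0 = 4 - np.linalg.matrix_rank(B1 @ B1.T)   # 1 connected component
beta1 = 5 - np.linalg.matrix_rank(L1)          # 1 unfilled loop (1-2-3)

# Hodge-decompose a 1-cochain: project onto col(B1^T) (exact) and
# col(B2) (coexact); the remainder is harmonic.
rng = np.random.default_rng(0)
omega = rng.standard_normal(5)
exact    = B1.T @ np.linalg.lstsq(B1.T, omega, rcond=None)[0]
coexact  = B2   @ np.linalg.lstsq(B2,   omega, rcond=None)[0]
harmonic = omega - exact - coexact

# The harmonic part lies in ker(L1): it circulates around the unfilled loop.
assert np.allclose(L1 @ harmonic, 0)
```

Here `np.linalg.lstsq` is simply a convenient way to compute the orthogonal projections onto the exact and coexact subspaces.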
Why Hodge Matters for Neural Networks
The Hodge Laplacian provides a spectrally meaningful diffusion operator for each rank. Just as spectral GNNs use eigenvectors of $L_0$ to define graph convolutions (via the graph Fourier transform), spectral topological networks can use eigenvectors of $L_k$ to define convolutions on $k$-cochains. The Hodge decomposition ensures these convolutions respect the topological structure of the complex.
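A minimal sketch of such a spectral convolution on 1-cochains, again on my toy complex and with a hand-picked heat-kernel response $g(\lambda) = e^{-\lambda}$ (both my choices for illustration, not a specific published filter):

```python
import numpy as np

# Toy complex: 4 vertices, 5 edges, one filled triangle (0,1,2).
B1 = np.array([[-1, -1,  0,  0,  0],
               [ 1,  0, -1, -1,  0],
               [ 0,  1,  1,  0, -1],
               [ 0,  0,  0,  1,  1]], dtype=float)
B2 = np.array([[1], [-1], [1], [0], [0]], dtype=float)
L1 = B1.T @ B1 + B2 @ B2.T

def spectral_filter(x, L, g):
    """Convolve cochain x with spectral response g via the eigenbasis of L,
    the rank-k analogue of the graph Fourier transform."""
    lam, U = np.linalg.eigh(L)         # L is symmetric PSD
    return U @ (g(lam) * (U.T @ x))    # transform, filter, inverse transform

x = np.array([1.0, 0.0, 0.0, 0.0, 0.0])              # impulse on one edge
y = spectral_filter(x, L1, lambda lam: np.exp(-lam))  # heat-kernel smoothing
```

Because $g(0) = 1$ for this choice of $g$, the harmonic (zero-eigenvalue) component of the input passes through unchanged, which is one concrete sense in which the convolution respects the complex's topology.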
10. The Architecture Zoo
The Hajij et al. paper surveys and unifies a large number of architectures that had been proposed independently. The summaries below map the main families of this landscape.
Selected Architecture Summaries
Simplicial Neural Networks (SNN / SCN)
Operate on simplicial complexes using the Hodge Laplacians $L_k$ as the message-passing operator. The update for rank-$k$ features is a polynomial filter on $L_k$, analogous to ChebNet on graphs. These architectures respect the Hodge decomposition.
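A one-layer sketch of such a polynomial filter on edge features (my own variable names and weights; real SNNs learn the coefficients $\theta_p$, often filter $\mathbf{L}_k^{\text{down}}$ and $\mathbf{L}_k^{\text{up}}$ separately, and ChebNet proper uses Chebyshev polynomials of a rescaled Laplacian rather than the plain monomial basis used here for brevity):

```python
import numpy as np

# Toy complex: 4 vertices, 5 edges, one filled triangle (0,1,2).
B1 = np.array([[-1, -1,  0,  0,  0],
               [ 1,  0, -1, -1,  0],
               [ 0,  1,  1,  0, -1],
               [ 0,  0,  0,  1,  1]], dtype=float)
B2 = np.array([[1], [-1], [1], [0], [0]], dtype=float)
L1 = B1.T @ B1 + B2 @ B2.T

def snn_layer(X, L, thetas):
    """Polynomial filter sum_p theta_p * L^p @ X on rank-k features X."""
    out = np.zeros_like(X)
    Lp = np.eye(L.shape[0])           # L^0
    for theta in thetas:
        out += theta * (Lp @ X)
        Lp = Lp @ L                   # next power of L
    return out

rng = np.random.default_rng(0)
X1 = rng.standard_normal((5, 3))                    # 3 features per edge
H1 = snn_layer(X1, L1, thetas=[0.5, -0.2, 0.05])    # degree-2 filter
```

Because the filter is a polynomial in $L_1$, each output edge feature depends only on edges within a few boundary/coboundary hops, exactly as ChebNet filters are localized on graphs.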
Message Passing Simplicial Networks (MPSN)
The most general simplicial architecture: it uses separate message functions for boundary ($\mathcal{N}_\downarrow$), coboundary ($\mathcal{N}_\uparrow$), and adjacency ($\mathcal{N}_{\text{adj}}$) neighborhoods at each rank, making it a strict generalization of the standard graph MPNN.
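The shape of an MPSN-style update for edge (rank-1) features can be sketched as follows. This is a simplified linear-message variant with my own weight names, one weight matrix per neighborhood type, not the paper's exact parameterization:

```python
import numpy as np

# Toy complex: 4 vertices, 5 edges, one filled triangle (0,1,2).
B1 = np.array([[-1, -1,  0,  0,  0],
               [ 1,  0, -1, -1,  0],
               [ 0,  1,  1,  0, -1],
               [ 0,  0,  0,  1,  1]], dtype=float)
B2 = np.array([[1], [-1], [1], [0], [0]], dtype=float)

def mpsn_edge_update(H0, H1, H2, B1, B2, Wb, Wc, Wd, Wu):
    """One MPSN-style update for edge features, with separate weights per
    neighborhood: boundary vertices, coboundary triangles, and lower/upper
    adjacency among edges."""
    inc_down, inc_up = np.abs(B1), np.abs(B2)   # unsigned incidences
    A_down = inc_down.T @ inc_down              # edges sharing a vertex
    A_up = inc_up @ inc_up.T                    # edges sharing a triangle
    np.fill_diagonal(A_down, 0)
    np.fill_diagonal(A_up, 0)
    msg = (inc_down.T @ H0 @ Wb     # messages from boundary vertices
           + inc_up @ H2 @ Wc       # messages from coboundary triangles
           + A_down @ H1 @ Wd       # messages from lower-adjacent edges
           + A_up @ H1 @ Wu)        # messages from upper-adjacent edges
    return np.maximum(msg, 0)       # ReLU nonlinearity

rng = np.random.default_rng(0)
d = 4
H0, H1, H2 = (rng.standard_normal((n, d)) for n in (4, 5, 1))
Ws = [rng.standard_normal((d, d)) for _ in range(4)]
H1_new = mpsn_edge_update(H0, H1, H2, B1, B2, *Ws)
```

Setting the boundary, coboundary, and upper-adjacency weights to zero collapses this to an ordinary graph-MPNN update on the line graph of lower adjacencies, which is the sense in which MPSN strictly generalizes the standard MPNN.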
CW Networks (CWN)
Extend MPSN to cell complexes, allowing non-simplicial cells. The key addition: messages can now flow along boundaries of arbitrary polygon/polyhedron cells, not just triangles and tetrahedra.
The Hajij et al. Unification
The paper's central technical contribution is showing that all of these architectures — and several more — can be expressed as instances of a single tensor diagram on a CC. The tensor diagram specifies which neighborhood matrices are used, how messages are computed, and how they're aggregated. Different choices of these components recover different existing architectures.