03 Simplicial Complexes
The first natural extension of a graph is the simplicial complex — the combinatorial object most familiar from algebraic topology and finite element methods. If you've worked with tetrahedral meshes in FEA, you already know simplicial complexes intimately; you just may not have called them that.
An abstract simplicial complex $\mathcal{K}$ on a vertex set $V$ is a collection of non-empty subsets of $V$ (called simplices) such that:
(i) For every $v \in V$, the singleton $\{v\} \in \mathcal{K}$ (all vertices are in $\mathcal{K}$).
(ii) If $\sigma \in \mathcal{K}$ and $\tau \subseteq \sigma$ with $\tau \neq \emptyset$, then $\tau \in \mathcal{K}$ (the closure property).
A simplex $\sigma$ with $|\sigma| = k+1$ is called a $k$-simplex and has rank $k$.
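The two axioms above are easy to check mechanically. The sketch below is illustrative (the function name and representation as frozensets are my own choices, not from any library): it verifies that every vertex appears as a singleton and that every simplex is downward closed.

```python
from itertools import combinations

def is_simplicial_complex(simplices):
    """Check the two axioms of an abstract simplicial complex.
    `simplices` is an iterable of non-empty vertex sets. Illustrative sketch.
    """
    K = {frozenset(s) for s in simplices}
    vertices = set().union(*K) if K else set()
    # (i) every vertex must appear as a singleton simplex
    if any(frozenset({v}) not in K for v in vertices):
        return False
    # (ii) closure: every non-empty proper subset of a simplex is in K
    for sigma in K:
        for r in range(1, len(sigma)):
            for tau in combinations(sigma, r):
                if frozenset(tau) not in K:
                    return False
    return True

# A filled triangle: one 2-simplex plus all of its faces
triangle = [{0}, {1}, {2}, {0, 1}, {0, 2}, {1, 2}, {0, 1, 2}]
# Dropping the edge {1, 2} violates the closure property
broken = [{0}, {1}, {2}, {0, 1}, {0, 2}, {0, 1, 2}]
```

Note how expensive closure is: a single $k$-simplex forces $2^{k+1} - 2$ faces into the complex, which is exactly the rigidity discussed next.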
Why This Matters and Why It's Limiting
Simplicial complexes have beautiful algebraic structure — boundary operators, chain groups, homology — and they're the foundation of finite element methods. But they have a fundamental rigidity: a $k$-simplex always has exactly $k+1$ vertices. A 2-cell is always a triangle. A 3-cell is always a tetrahedron. This is fine for tetrahedral meshes, but overly restrictive for general deep learning.
The limitation: In a simplicial complex, you cannot represent a square (4-vertex) face, a pentagonal region, or any $k$-cell with more than $k+1$ vertices. Yet many physical domains — hexahedral mesh elements, arbitrary material regions in chip layouts, polygonal geometry — require exactly this flexibility.
04 Cell Complexes (CW Complexes)
To relax the "exactly $k+1$ vertices" constraint while preserving hierarchical structure, we turn to cell complexes — specifically, regular CW complexes.
A regular cell complex $\mathcal{X}$ is a topological space decomposed into cells $\{e_\alpha^k\}$ of various dimensions $k$, such that:
(i) Each $k$-cell $e_\alpha^k$ is homeomorphic to an open $k$-disk.
(ii) The boundary of each $k$-cell is a union of lower-dimensional cells in $\mathcal{X}$.
(iii) (Regularity) The attaching map of each cell restricts to a homeomorphism on the boundary sphere, so cells attach without self-identifications.
The key relaxation: a 2-cell in a cell complex can be a square, a pentagon, a hexagon — any polygon. It only needs to be bounded by 1-cells (edges) and 0-cells (vertices).
Cell complexes relax the geometric restriction of simplices but retain the topological inclusion constraint: the boundary of every $k$-cell must be composed of $(k\!-\!1)$-cells that are already in the complex. This is the cell-complex version of closure: you can have a square face, but all four of its bounding edges must be present.
FEA connection: If you've worked with hexahedral or mixed-element FEM meshes, you've already been working with cell complexes. A hex mesh uses 3-cells (cubes/hexahedra) whose boundaries are quadrilateral 2-cells — impossible in a simplicial complex.
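A minimal combinatorial model of this boundary-inclusion constraint can be sketched as follows. The representation (cell ids mapped to dimensions, plus an explicit boundary map) and the function name are my own illustrative choices, not a standard API:

```python
def is_cell_complex(cells, boundary):
    """Check the boundary-inclusion constraint of a regular cell complex.
    `cells` maps cell id -> dimension k; `boundary` maps cell id -> set of
    ids of the (k-1)-cells on its boundary. Illustrative sketch only.
    """
    for c, k in cells.items():
        for b in boundary.get(c, set()):
            # every boundary cell must exist and have dimension exactly k-1
            if b not in cells or cells[b] != k - 1:
                return False
    return True

# A square 2-cell: one face, four edges, four vertices -- not expressible
# as a single simplex, but perfectly fine as a cell complex.
cells = {"v0": 0, "v1": 0, "v2": 0, "v3": 0,
         "e01": 1, "e12": 1, "e23": 1, "e30": 1,
         "f": 2}
boundary = {"e01": {"v0", "v1"}, "e12": {"v1", "v2"},
            "e23": {"v2", "v3"}, "e30": {"v3", "v0"},
            "f": {"e01", "e12", "e23", "e30"}}
```

Deleting any one of the four edges makes the check fail: the square face may exist only if all of its bounding edges do.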
05 Combinatorial Complexes
This is the central construction of the Hajij et al. paper and the key innovation. A combinatorial complex (CC) relaxes both the rigid geometry of simplicial complexes and the strict boundary requirements of cell complexes.
A combinatorial complex $(S, \mathcal{X}, \text{rk})$ consists of:
• A finite set $S$ (the vertex set)
• A set $\mathcal{X} \subseteq \mathcal{P}(S) \setminus \{\emptyset\}$ of non-empty subsets of $S$ (the cells)
• A rank function $\text{rk}: \mathcal{X} \to \mathbb{Z}_{\geq 0}$
satisfying two axioms:
(CC1) For all $s \in S$: $\{s\} \in \mathcal{X}$ and $\text{rk}(\{s\}) = 0$ (every vertex is a rank-0 cell).
(CC2) If $x, y \in \mathcal{X}$ with $x \subsetneq y$, then $\text{rk}(x) < \text{rk}(y)$ (rank respects inclusion).
Axiom CC2 is the crucial design choice. It says: if one cell is strictly contained in another (as a set of vertices), the contained cell must have strictly lower rank. But — critically — it does not require every face of a cell to be present.
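Both axioms translate directly into code. The sketch below (function name and frozenset encoding are illustrative choices) checks CC1 and CC2 for a candidate triple $(S, \mathcal{X}, \text{rk})$; the example builds a square face of rank 2 with only one of its edges present, which is legal in a CC but illegal in both simplicial and cell complexes:

```python
def is_combinatorial_complex(S, cells, rk):
    """Check axioms CC1 and CC2 of a combinatorial complex.
    `cells` is a set of frozensets of vertices from S; `rk` maps each cell
    to a non-negative integer rank. Illustrative sketch only.
    """
    # (CC1) every vertex is a rank-0 cell
    for s in S:
        if frozenset({s}) not in cells or rk[frozenset({s})] != 0:
            return False
    # (CC2) strict inclusion implies strictly lower rank
    for x in cells:
        for y in cells:
            if x < y and not rk[x] < rk[y]:  # `<` is proper-subset for frozensets
                return False
    return True

S = {0, 1, 2, 3}
# A rank-2 "square" cell with only one bounding edge present: no closure needed.
cells = {frozenset({s}) for s in S} | {frozenset({0, 1}), frozenset({0, 1, 2, 3})}
rk = {c: 0 if len(c) == 1 else (1 if len(c) == 2 else 2) for c in cells}
```

Note that CC2 constrains only cells that happen to be nested; it never forces a missing face into existence.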
Why CCs Are the Right Abstraction for Neural Networks
The key insight is that in deep learning, we don't need the full topological structure that algebraic topologists care about (homology, homotopy). We need a computationally tractable way to define neighborhoods, pass messages between cells of different ranks, and aggregate information hierarchically. Combinatorial complexes provide exactly this, with minimal structural overhead.
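As one concrete example of such a neighborhood, the rank-$(k{-}1)$ cells contained in a rank-$k$ cell form the set a message-passing layer would aggregate over. The set-comprehension sketch below is illustrative only; real TDL libraries typically encode these neighborhoods as sparse incidence matrices rather than computing them per cell:

```python
def boundary_neighbors(cells, rk, x):
    """Cells one rank below x that are contained in x -- the boundary
    neighborhood a message-passing layer aggregates over. Sketch only.
    """
    return {y for y in cells if y < x and rk[y] == rk[x] - 1}

# A square face whose boundary edges are only partially present -- legal in a CC
cells = {frozenset({v}) for v in range(4)}
cells |= {frozenset({0, 1}), frozenset({2, 3}), frozenset({0, 1, 2, 3})}
rk = {c: {1: 0, 2: 1, 4: 2}[len(c)] for c in cells}
face = frozenset({0, 1, 2, 3})
# boundary_neighbors(cells, rk, face) contains just the two existing edges
```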
| Property | Simplicial | Cell | Combinatorial |
|---|---|---|---|
| $k$-cell vertex count | Exactly $k+1$ | Any ≥ $k+1$ | Any ≥ 2 for rank ≥ 1 |
| Closure / boundary | Full downward closure | Boundary cells required | None required |
| Rank constraint | $\text{rk}(\sigma) = |\sigma| - 1$ | Dimension from topology | $x \subsetneq y \Rightarrow \text{rk}(x) < \text{rk}(y)$ |
| Hypergraph superset? | No | No | Yes — every hypergraph is a CC |
| Simplicial superset? | — | Yes | Yes |
The crucial generalization: A combinatorial complex subsumes both simplicial complexes and hypergraphs as special cases. This means any neural network architecture defined on a CC automatically works on graphs, simplicial complexes, cell complexes, and hypergraphs.
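The hypergraph case of this subsumption is especially simple: vertices become rank-0 cells and every hyperedge, of any size, becomes a rank-1 cell. The sketch below is my own illustrative encoding; note the labeled assumption in the comment, since CC2 requires a caveat when hyperedges nest:

```python
def hypergraph_as_cc(vertices, hyperedges):
    """Encode a hypergraph as a combinatorial complex: each vertex is a
    rank-0 cell, each hyperedge a rank-1 cell. Illustrative sketch.
    Assumption: no hyperedge strictly contains another (nested hyperedges
    would need distinct ranks to satisfy CC2).
    """
    cells = {frozenset({v}): 0 for v in vertices}
    for e in hyperedges:
        cells[frozenset(e)] = 1  # any size is allowed: no closure needed
    return cells

# Three hyperedges of sizes 3, 3, and 2 -- impossible as 1-simplices
cc = hypergraph_as_cc({0, 1, 2, 3, 4}, [{0, 1, 2}, {2, 3, 4}, {0, 4}])
```

CC2 holds trivially here: the only strict inclusions run from singletons (rank 0) into hyperedges (rank 1), so any architecture defined on CCs applies to this hypergraph unchanged.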