Chapter 08

Applications & Reading Roadmap

Connecting topological deep learning to thermal physics simulation, and a guided roadmap through the full Hajij et al. paper.

In this chapter
  1. From Topology to Thermal Physics
  2. Roadmap to the Full Paper

11. From Topology to Thermal Physics

With the mathematical framework in place, we can now see why topological deep learning is a natural fit for thermal simulation — the application that Vinci AI has commercialized.

A Chip Package as a Combinatorial Complex

Consider a 3D stacked IC package. Its structure maps naturally to a CC:

Chip Package → Combinatorial Complex
  Physical structure: TIM (volume), Carrier Si (volume), BEOL (detailed geometry), Hybrid Bonding (volume), Si 2 (volume), with interfaces between layers.
  CC representation:
    Rank 0 (vertices): grid points, via nodes
    Rank 1 (edges): material interfaces, thermal paths
    Rank 2 (faces / tiles): homogenization regions, surfaces
    Rank 3 (volumes): material layers, package volumes
Figure 11.1. A chip package maps to a CC: grid nodes are rank-0 cells, material interfaces are rank-1, homogenization tiles are rank-2, and volumetric layers are rank-3. Heat source distributions, boundary conditions, and material properties become features (cochains) at the appropriate rank.
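To make the mapping concrete, here is a minimal sketch (illustrative names and shapes, not the paper's API) of a tiny combinatorial-complex fragment: cells stored per rank, boundary relations recorded as unsigned incidence matrices, and a temperature cochain attached at rank 0.

```python
import numpy as np

# Rank-0 cells: four grid points on a layer interface
vertices = ["v0", "v1", "v2", "v3"]

# Rank-1 cells: edges along the material interface (a square loop)
edges = [("v0", "v1"), ("v1", "v2"), ("v2", "v3"), ("v3", "v0")]

# Rank-2 cell: one homogenization tile bounded by all four edges
tiles = [(0, 1, 2, 3)]  # indices into `edges`

# Incidence matrix B1: rows = vertices, columns = edges (unsigned)
B1 = np.zeros((len(vertices), len(edges)))
for j, (a, b) in enumerate(edges):
    B1[vertices.index(a), j] = 1
    B1[vertices.index(b), j] = 1

# Incidence matrix B2: rows = edges, columns = tiles
B2 = np.zeros((len(edges), len(tiles)))
for j, tile in enumerate(tiles):
    for e in tile:
        B2[e, j] = 1

# A rank-0 cochain: one temperature feature per vertex
T = np.array([300.0, 310.0, 305.0, 298.0])

print(B1.shape, B2.shape, T.shape)  # (4, 4) (4, 1) (4,)
```

Each column of B1 sums to 2 (every edge has two boundary vertices) and each column of B2 sums to 4 (the tile is bounded by four edges); in a full package model the same pattern extends to rank-3 volumes.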

Why Anisotropic Message Passing Matters

In the EPTC 2025 paper, the BEOL layer exhibits extreme anisotropy: $k_y^{\text{eff}} / k_z^{\text{eff}} \approx 23$. Heat flows far more easily in-plane than through-plane. A standard GNN treats all message-passing directions identically. The copresheaf structure (Hajij et al., NeurIPS 2025, arXiv:2505.21251) adds direction-dependent, learnable linear maps between cells, so the network can learn that vertical and horizontal information flow have fundamentally different character without this being hard-coded.
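The directional contrast is easy to see in a weighted Laplacian. Below is an illustrative sketch (grid size and unit conductivities are arbitrary choices, not values from the paper) of a small 2D slice with in-plane edge weights 23x the through-plane ones; a direction-aware scheme can exploit exactly this asymmetry, which a standard GNN ignores.

```python
import numpy as np

ny, nz = 4, 3                       # ny in-plane points, nz through-plane
k_inplane, k_through = 23.0, 1.0    # effective conductivities (arbitrary units)

n = ny * nz
L = np.zeros((n, n))

def idx(y, z):
    """Flatten (in-plane, through-plane) grid coordinates to a node index."""
    return y * nz + z

for y in range(ny):
    for z in range(nz):
        i = idx(y, z)
        if y + 1 < ny:              # in-plane neighbor: high conductivity
            j = idx(y + 1, z)
            L[i, i] += k_inplane; L[j, j] += k_inplane
            L[i, j] -= k_inplane; L[j, i] -= k_inplane
        if z + 1 < nz:              # through-plane neighbor: low conductivity
            j = idx(y, z + 1)
            L[i, i] += k_through; L[j, j] += k_through
            L[i, j] -= k_through; L[j, i] -= k_through

# Rows sum to zero (energy conservation); off-diagonal weights carry the
# 23x directional contrast between in-plane and through-plane transport.
print(np.allclose(L.sum(axis=1), 0))  # True
```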

The CTNN advantage for physics: In a copresheaf topological neural network, the "restriction maps" between cells are learnable linear operators, not fixed aggregation rules. For thermal simulation, this means the network can learn that information flowing from a BEOL tile to its neighbor through a shared edge should be weighted differently than information flowing up through a layer interface — because these represent physically different thermal transport pathways.
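A toy version of this idea can be written in a few lines. The sketch below is hypothetical (cell names, feature dimension, and the residual-plus-tanh update are illustrative choices, not the CTNN paper's architecture): each directed cell pair gets its own linear map, so a message from a neighboring tile and a message from a layer interface are transformed by different operators before aggregation.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 3                                  # feature dimension per cell
features = {c: rng.standard_normal(d) for c in ["tile_a", "tile_b", "iface"]}

# One learnable "restriction map" per directed neighbor relation.
# In training these would be parameters; here they are random placeholders.
neighbors = [("tile_a", "tile_b"), ("iface", "tile_b")]
rho = {pair: rng.standard_normal((d, d)) for pair in neighbors}

def copresheaf_step(features, neighbors, rho):
    """One message-passing step: each sender's feature is pushed through
    its own linear map before summing at the receiver, instead of a
    single shared aggregation rule for all directions."""
    out = {c: np.zeros(d) for c in features}
    for src, dst in neighbors:
        out[dst] += rho[(src, dst)] @ features[src]
    # Residual connection plus nonlinearity
    return {c: np.tanh(features[c] + out[c]) for c in features}

updated = copresheaf_step(features, neighbors, rho)
print(sorted(updated))  # ['iface', 'tile_a', 'tile_b']
```

Cells with no incoming relations (here `tile_a` and `iface`) are only passed through the pointwise update, while `tile_b` aggregates two differently-transformed messages: the direction-dependence lives entirely in `rho`.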

From Message Passing to PDE Solving

The steady-state heat equation with a source term, $\nabla \cdot (k \nabla T) + q = 0$, reduces on a discrete mesh to a linear system $\mathbf{L}\mathbf{T} = \mathbf{f}$, where $\mathbf{L}$ is a Laplacian weighted by thermal conductivities and $\mathbf{f}$ collects heat sources and boundary terms. The Hodge Laplacian $L_k$ on a CC is a direct generalization. The connection is not merely analogical: message passing on a CC with Hodge-Laplacian-weighted adjacency is literally performing iterative relaxation of the discrete heat equation.

The connection: Message Passing ↔ PDE Iteration
$$\underbrace{\mathbf{T}^{(n+1)} = \mathbf{T}^{(n)} - \alpha \left(\mathbf{L}_k \mathbf{T}^{(n)} - \mathbf{f}\right)}_{\text{relaxation iteration on the heat equation}} \quad \longleftrightarrow \quad \underbrace{\mathbf{H}^{(\ell+1)} = \sigma\!\left(\hat{\mathbf{L}}_k \mathbf{H}^{(\ell)} \mathbf{W}^{(\ell)}\right)}_{\text{Topological neural network layer}}$$

The neural network version replaces the fixed step size $\alpha$ with learned weights $\mathbf{W}$, the linear update with a nonlinear activation $\sigma$, and the single Laplacian with a combination of adjacency matrices from multiple ranks and neighborhoods. This is why a well-trained topological neural network can solve PDEs much faster than traditional iterative solvers — it learns an optimized, nonlinear, multi-scale iteration scheme.
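The left-hand iteration can be run directly. Here is a small sketch (a 1D rod with fixed end temperatures; sizes, conductivity, and step size are illustrative choices) showing that the fixed-point update $\mathbf{T} \leftarrow \mathbf{T} - \alpha(\mathbf{L}\mathbf{T} - \mathbf{f})$ converges to the direct solution of $\mathbf{L}\mathbf{T} = \mathbf{f}$.

```python
import numpy as np

n = 5
k = 1.0
# Path-graph Laplacian for a 1D rod with uniform conductivity k
L = 2 * k * np.eye(n) - k * np.eye(n, k=1) - k * np.eye(n, k=-1)
L[0, :] = 0.0; L[0, 0] = 1.0        # Dirichlet boundary condition rows
L[-1, :] = 0.0; L[-1, -1] = 1.0
f = np.zeros(n)
f[0], f[-1] = 300.0, 400.0          # fixed end temperatures

T = np.zeros(n)
alpha = 0.4                          # step size small enough to contract
for _ in range(2000):
    T = T - alpha * (L @ T - f)      # relaxation iteration

T_direct = np.linalg.solve(L, f)     # direct sparse/dense solve for comparison
print(np.allclose(T, T_direct, atol=1e-6))  # True
```

The converged profile is linear (300, 325, 350, 375, 400), as expected for a uniform rod; the learned network replaces this slow fixed iteration with a trained, nonlinear, multi-rank update.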



12. Roadmap to the Full Paper

You now have the conceptual and mathematical vocabulary needed to read Hajij et al., "Topological Deep Learning: Going Beyond Graph Data" (arXiv:2206.00606). Here is a guide to the paper's structure, mapped to what you've learned:

| Paper Section | What You'll Find | Background from This Guide |
| --- | --- | --- |
| §1–2: Introduction | Motivation, related work | Section 1 (Why Go Beyond Graphs) |
| §3: Topological Domains | Formal definitions of simplicial, cell, and combinatorial complexes | Sections 3–5 |
| §4: Topological Signals | Cochains, cochain spaces | Section 6 |
| §5: Higher-Order Neighborhoods | Adjacency and incidence matrices, neighborhood functions | Section 7 |
| §6: Tensor Diagrams | Unified computational framework (new material; builds on Section 8) | Section 8 (HOMP) |
| §7: Message Passing | General HOMP framework, push-forward and merge operators | Section 8 |
| §8: Topological Pooling | Coarsening strategies (new material) | Analogous to graph pooling |
| §9: Architectures | Detailed review of SNN, MPSN, CWN, etc. | Section 10 |
| Appendices | Homology, Betti numbers, Hodge theory | Section 9 |

Key Papers for Further Study

  1. The foundational paper — Hajij et al., "Topological Deep Learning: Going Beyond Graph Data." arXiv:2206.00606 (2022). Read this next.
  2. The Vinci architecture — Hajij, Bastian, Osentoski, Kabaria et al., "Copresheaf Topological Neural Networks." arXiv:2505.21251 (2025), NeurIPS 2025. Adds the directional/learnable message-passing maps (copresheaf structure).
  3. Open-source implementation — TopoModelX (github.com/pyt-team/TopoModelX). PyTorch implementations of the architectures discussed in the paper.
  4. Benchmarking — Hajij et al., "A Framework for Benchmarking Topological Deep Learning." arXiv:2406.06642 (2024). Systematic comparison of TDL architectures on standardized tasks.
  5. The thermal application — Kabaria et al., "Thermal Sensitivity Analysis of 3D IC Face-to-Back Stacking Using Foundation Models for Physics." IEEE EPTC 2025. Shows TDL applied to real semiconductor thermal simulation.

The big picture: Topological deep learning isn't just a new flavor of GNN. It's a category-theoretic framework that treats deep learning layers as functors on topological domains. The combinatorial complex provides the most general domain; cochains provide the signals; higher-order message passing provides the computation; and the Hodge Laplacian provides spectral grounding. The Vinci/CTNN extension adds morphisms between feature spaces (the copresheaf), turning each message-passing step into a structure-preserving map rather than a flat aggregation. This is what enables it to capture the anisotropic, heterogeneous physics of semiconductor thermal simulation.