The goal of this project was to test whether a PINN can dynamically resolve physical fields at arbitrary length scales — effectively zooming in to reveal finer spatial structure. These results show that yes, it can — but with a clear tradeoff: resolving shorter wavelengths demands exponentially more compute.
For low-to-moderate spatial frequency (ka ≤ π), the PINN converges to 2–8% L2 error in 12–17 minutes (2–3% for ka ≥ 1). At these scales, PINNs offer genuine advantages over mesh-based solvers: continuous field access at any coordinate, no mesh generation, and physics enforced by construction. The network is a differentiable, resolution-independent surrogate — it can be queried anywhere without interpolation or remeshing.
As ka grows, training cost scales steeply: ka=2π needed 5× more epochs and a 50% wider network (259 min vs 12 min). At ka=3π, a 10+ hour run plateaued at 68% L2 despite loss dropping 5 orders of magnitude. Zooming in to finer features is equivalent to probing higher spatial frequencies, and the network faces a fundamental resolution–compute tradeoff analogous to the Nyquist limit in signal processing. At the frontier, compute budget, not architecture or loss design, is the binding constraint.
| ka | L2 Rel Error | Mean Error | Max Error | Network | Epochs | Runtime |
|---|---|---|---|---|---|---|
| ka = 0.5 | 8.23% | 1.90% | 4.04% | 256 / 4L / 64ff | 10K+200 | 12 min |
| ka = 1.0 | 3.57% | 1.34% | 2.99% | 256 / 4L / 64ff | 10K+200 | 13 min |
| ka = π | 2.41% | 1.09% | 2.96% | 256 / 4L / 64ff | 10K+200 | 17 min |
| ka = 2π | 2.00% | 0.93% | 4.37% | 384 / 6L / 96ff | 50K+300 | 259 min |
2–8% L2 across a 12× range of spatial frequency with a single architecture family (widened only at ka = 2π).
Fourier features with σ=k embed the wavelength directly, defeating spectral bias.
The convergent ka=2π run used default loss weights but 2.5× more epochs — 49%→2%. Compute budget, not loss rebalancing, was the binding constraint.
All runs: circle boundary + BGT2 ABC, L = 3.0. Trained on 4× NVIDIA RTX 4000 Ada.
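The σ=k Fourier-feature embedding can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the project's code: the Gaussian frequency matrix, the fixed seed, and the absence of a 2π scaling are assumptions; only σ = k and the 64-feature width come from the results above.

```python
import numpy as np

def fourier_features(xy, B):
    """Random Fourier feature embedding: gamma(x) = [cos(x B^T), sin(x B^T)].

    xy : (n, 2) array of spatial coordinates
    B  : (m, 2) frequency matrix, rows drawn ~ N(0, sigma^2) with sigma = k
    """
    proj = xy @ B.T  # (n, m) projections onto the random frequencies
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)  # (n, 2m)

# Hypothetical setup mirroring the "64ff" runs: sigma = k ties the
# embedding's frequency content directly to the wavelength.
k = 2 * np.pi
B = np.random.default_rng(0).normal(0.0, k, size=(64, 2))
emb = fourier_features(np.zeros((1, 2)), B)  # embedding of the origin
```

The embedded coordinates, rather than raw (x, y), are fed to the MLP, which is what lets the network represent oscillations at wavenumber k without fighting spectral bias.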
Mean error stays below 2% across all ka values. Max error peaks at 4.4% for ka=2π, localized to the shadow boundary. All runs exhibit the Adam → L-BFGS transition (sharp loss drop). Higher ka requires progressively more compute: 12 min at ka=0.5 vs 259 min at ka=2π.
L2 relative error is highest at low ka because the scattered field amplitude is small (smaller normalization denominator) and these runs received 4× less compute (10K vs 50K epochs, smaller network). The low-ka errors could likely be reduced with comparable training budgets.
| | ka ≤ π | ka = 2π |
|---|---|---|
| Neurons | 256 | 384 |
| Layers | 4 | 6 |
| Fourier feat. | 64 | 96 |
| Adam epochs | 10K | 50K |
| L-BFGS iters | 200 | 300 |
| λpde / λbc | 1.0 / 10 | 1.0 / 10 |
| Wall time | 12–17 min | 259 min |
Default loss weights used for the convergent ka=2π run. An earlier attempt with rebalanced weights (λpde=0.25, λbc=15) and only 20K epochs stalled at 49% L2 error — sufficient training budget, not loss engineering, was the decisive factor.
PINN predictions vs analytic Bessel/Hankel series. Each comparison shows PINN | Analytic | Error.
Parameter-matched comparison — FF-PINN (64 Fourier features, 256 hidden, ~296K params) vs plain MLP (272 hidden, ~298K params). Both used identical 10K Adam + 200 L-BFGS schedules — shorter than production — to isolate the architectural effect. Same loss weights (λpde=1, λbc=10).
| | FF-PINN | Plain MLP |
|---|---|---|
| L2 relative | 2.42% | 2.42% |
| Max error | 2.97% | 2.99% |
No meaningful difference — at low frequency, the standard MLP can represent the scattered field without Fourier features. Spectral bias is not a bottleneck here.
| | FF-PINN | Plain MLP | Ratio |
|---|---|---|---|
| L2 relative | 10.1% | 11.7% | 1.15× worse |
| Max error | 22.7% | 47.3% | 2.1× worse |
The max error doubles without Fourier features. The plain MLP’s L-BFGS phase actually increased its error (L2: 10.9%→11.7%), while the FF-PINN’s L-BFGS phase dramatically improved it (19.6%→10.1%). This suggests Fourier features create a smoother loss landscape that second-order optimizers can exploit more effectively.
Assumes purely outgoing plane waves. Works on any boundary shape but ignores wavefront curvature — introducing O(1/r) reflection error.
The curvature correction \(\phi_s/(2r)\) accounts for the \(1/\sqrt{r}\) amplitude decay of cylindrical waves, annihilating one more term in the scattered field series [Bayliss, Gunzburger & Turkel, 1982].
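Under that description, the boundary operator applied at the outer radius \(r = L\) takes the form (a reconstruction from the text above; the document does not print the operator itself):

\[ \frac{\partial \phi_s}{\partial r} - ik\,\phi_s + \frac{\phi_s}{2r} = 0 \quad \text{at } r = L. \]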
19-circle hexagonal lattice (3 concentric rings), rs = 0.15, d = 0.4, same cluster radius as single cylinder. No analytic solution — pure PINN.
| | Single Cylinder | Honeycomb Lattice |
|---|---|---|
| Scatterers | 1, r = 1.0 | 19, rs = 0.15 |
| Validation | Analytic (Bessel/Hankel) | PINN only |
| ka | 0.5 – 2π | 2.0 |
| Loss Component | Value | Status |
|---|---|---|
| PDE | 4.92e-6 | Converged |
| BC (19 surfaces) | 2.00e-7 | Converged |
| ABC | 1.41e-7 | Converged |
| Total | 9.06e-6 | Converged |
Each component is an MSE of the corresponding residual evaluated at sampled collocation points. The scattered-field formulation means the Neumann BC target is known analytically from the incident field.
| Region | Points | Method |
|---|---|---|
| Interior | 10K–20K | Uniform rejection sampling |
| Scatterer BC | 200–400 | Uniform on circle |
| Outer ABC | 400–600 | Uniform on circle |
Collocation points are resampled every 2,000 epochs during Adam training with 50% blending (old + new) for stability.
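A minimal NumPy sketch of the interior sampling and blended resampling described above; the function names and the exact blending mechanics (keep a random half, redraw the rest) are illustrative assumptions consistent with "50% blending (old + new)".

```python
import numpy as np

def sample_interior(n, a=1.0, L=3.0, rng=None):
    """Uniform rejection sampling in the annulus a < r < L."""
    rng = np.random.default_rng() if rng is None else rng
    pts = np.empty((0, 2))
    while len(pts) < n:
        cand = rng.uniform(-L, L, size=(2 * n, 2))       # oversample the bounding square
        r = np.hypot(cand[:, 0], cand[:, 1])
        pts = np.vstack([pts, cand[(r > a) & (r < L)]])  # keep points inside the annulus
    return pts[:n]

def resample_blend(old_pts, a=1.0, L=3.0, rng=None):
    """Every 2,000 Adam epochs: keep a random half of the old set,
    draw the other half fresh (the 50% blending in the text)."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(old_pts)
    keep = rng.choice(n, size=n // 2, replace=False)
    return np.vstack([old_pts[keep], sample_interior(n - n // 2, a, L, rng)])

pts = sample_interior(1000)
pts = resample_blend(pts)
```

Retaining half the old points keeps the loss landscape from shifting abruptly between resampling events, while the fresh half prevents the network from overfitting a fixed point cloud.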
Accuracy is measured against the analytic Bessel/Hankel series solution on a dense 200×200 grid. Metrics: L2 relative error and max pointwise error. The series uses \(N = \lceil ka + 20 \rceil\) terms for convergence.
Acoustic scattering of a time-harmonic plane wave off a sound-hard (rigid) circular cylinder is one of the canonical problems in mathematical physics with a known exact solution. This makes it an ideal validation benchmark for PINN accuracy.
The incident plane wave \(\phi_{\text{inc}} = e^{ikx}\) is decomposed into cylindrical harmonics via the Jacobi-Anger identity:

\[ e^{ikx} = e^{ikr\cos\theta} = \sum_{n=-\infty}^{\infty} i^n\, J_n(kr)\, e^{in\theta}, \]
where \(J_n\) is the Bessel function of the first kind of order \(n\), and \((r,\theta)\) are polar coordinates centered on the cylinder.
The scattered field is expanded in outgoing cylindrical waves using Hankel functions of the first kind \(H_n^{(1)}\), which satisfy the Sommerfeld radiation condition \(\lim_{r\to\infty} \sqrt{r}\,(\partial\phi_s/\partial r - ik\phi_s) = 0\):

\[ \phi_s(r,\theta) = \sum_{n=-\infty}^{\infty} A_n\, H_n^{(1)}(kr)\, e^{in\theta}. \]
For a rigid cylinder the total field satisfies a Neumann (zero normal velocity) condition at the surface \(r = a\):

\[ \left.\frac{\partial (\phi_{\text{inc}} + \phi_s)}{\partial r}\right|_{r=a} = 0. \]
Substituting the series and solving term-by-term gives the exact coefficients:

\[ A_n = -\,i^n\, \frac{J_n'(ka)}{H_n^{(1)\prime}(ka)}, \]
where primes denote derivatives with respect to the argument. The complete exact scattered field is therefore:

\[ \phi_s(r,\theta) = -\sum_{n=-\infty}^{\infty} i^n\, \frac{J_n'(ka)}{H_n^{(1)\prime}(ka)}\, H_n^{(1)}(kr)\, e^{in\theta}. \]
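The series is straightforward to evaluate with SciPy's Bessel/Hankel routines. A sketch (the project's own implementation may differ in vectorization; only the truncation rule \(N = \lceil ka + 20 \rceil\) comes from the text):

```python
import numpy as np
from scipy.special import jvp, hankel1, h1vp

def scattered_field(r, theta, k=2.0, a=1.0):
    """Exact scattered field for a sound-hard cylinder of radius a."""
    N = int(np.ceil(k * a + 20))  # truncation: N = ceil(ka + 20)
    phi_s = np.zeros(np.shape(r), dtype=complex)
    for n in range(-N, N + 1):
        # Coefficient from the Neumann boundary condition at r = a
        A_n = -(1j ** n) * jvp(n, k * a) / h1vp(n, k * a)
        phi_s = phi_s + A_n * hankel1(n, k * r) * np.exp(1j * n * theta)
    return phi_s
```

The sound-hard condition can be verified numerically: the radial derivative of \(\phi_{\text{inc}} + \phi_s\) vanishes at \(r = a\).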
The PINN is trained on three distinct sets of collocation points, each enforcing a different physical constraint. Points are resampled every 2,000 Adam epochs with 50% blending (old + new) for stability.
Uniformly sampled inside the annular domain (between scatterer and ABC boundary). These points enforce the Helmholtz PDE: the scattered field must satisfy the wave equation everywhere in the fluid.
Sampled on the scatterer surface (r = a). These enforce the sound-hard boundary condition: the normal derivative of the total field must vanish, meaning no energy passes through the cylinder wall.
Sampled on the circular outer boundary (r = L). These enforce the absorbing boundary condition, which approximates a non-reflecting boundary so outgoing waves exit without artificial reflections.
Training minimizes a weighted sum of three residual losses, each corresponding to one of the point types above. All losses are mean squared residuals.
The weights \(\lambda\) balance the three objectives. All production runs use default weights: \(\lambda_{\text{pde}} = 1\), \(\lambda_{\text{bc}} = 10\), \(\lambda_{\text{abc}} = 1\). The elevated \(\lambda_{\text{bc}}\) ensures the sound-hard boundary condition is enforced strongly.
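Written out, the training objective is:

\[ \mathcal{L} = \lambda_{\text{pde}}\,\mathcal{L}_{\text{pde}} + \lambda_{\text{bc}}\,\mathcal{L}_{\text{bc}} + \lambda_{\text{abc}}\,\mathcal{L}_{\text{abc}}, \qquad (\lambda_{\text{pde}},\ \lambda_{\text{bc}},\ \lambda_{\text{abc}}) = (1,\ 10,\ 1). \]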
Measures how well the network output satisfies the Helmholtz wave equation at interior collocation points. A value near zero means the predicted field is a valid solution to the PDE. Computed via autograd second derivatives split into real and imaginary parts.
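As a sanity check of what this residual measures (independent of the autograd implementation used in training), a finite-difference evaluation shows that an outgoing cylindrical wave drives the Helmholtz residual to zero; the sample point and step size here are arbitrary choices:

```python
import numpy as np
from scipy.special import hankel1

k, h = 2.0, 1e-3
x0, y0 = 1.5, 0.8  # arbitrary interior point (r ≈ 1.7)

def phi(x, y):
    # Outgoing cylindrical wave: an exact Helmholtz solution away from r = 0
    return hankel1(0, k * np.hypot(x, y))

# 5-point finite-difference Laplacian
lap = (phi(x0 + h, y0) + phi(x0 - h, y0) + phi(x0, y0 + h)
       + phi(x0, y0 - h) - 4 * phi(x0, y0)) / h**2
residual = lap + k**2 * phi(x0, y0)  # Helmholtz residual: ~0 for a true solution
```

The PDE loss is the mean of \(|\text{residual}|^2\) over all interior collocation points, with real and imaginary parts handled as two coupled real equations.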
Enforces the sound-hard (rigid) boundary on the cylinder surface: the total normal velocity must be zero. The network learns the scattered field whose normal derivative cancels that of the known incident wave \(\phi_{\text{inc}} = e^{ikx}\).
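Since \(\phi_{\text{inc}} = e^{ikr\cos\theta}\), the target normal derivative is closed-form:

\[ \left.\frac{\partial \phi_s}{\partial r}\right|_{r=a} = -\left.\frac{\partial \phi_{\text{inc}}}{\partial r}\right|_{r=a} = -\,ik\cos\theta\; e^{ika\cos\theta}. \]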
Penalizes spurious reflections from the truncated domain boundary. The BGT2 operator assumes outgoing cylindrical waves with \(1/\sqrt{r}\) amplitude decay, absorbing two leading terms of the scattered field expansion to minimize artificial reflections back into the domain.
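A quick SciPy check of the absorption claim: applying the boundary operator, with and without the \(\phi_s/(2r)\) curvature correction, to an outgoing cylindrical wave \(H_0^{(1)}(kr)\) shows the correction suppresses the leftover residual. The operator form here is inferred from the description above, not taken from the project's code.

```python
import numpy as np
from scipy.special import hankel1

k = 1.0

def residuals(r):
    """Boundary-operator residuals on phi = H0^(1)(kr) at radius r."""
    phi = hankel1(0, k * r)
    dphi = -k * hankel1(1, k * r)     # d/dr H0^(1)(kr) = -k H1^(1)(kr)
    sommerfeld = dphi - 1j * k * phi  # outgoing-wave condition only
    bgt = sommerfeld + phi / (2 * r)  # add the curvature correction
    return abs(sommerfeld), abs(bgt)

s50, b50 = residuals(50.0)
s100, b100 = residuals(100.0)
```

Asymptotically the uncorrected residual decays like \(r^{-3/2}\) while the corrected one decays like \(r^{-5/2}\), which is why the corrected operator reflects less energy back into the domain.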
After training, the PINN is evaluated against the analytic Bessel/Hankel series solution on a dense 200×200 grid. The scattered field is complex-valued \(\phi_s = u + iv\), so all errors use complex magnitude.
The magnitude of the complex error at each grid point. Shown as heatmaps in the error gallery plots. Reveals where the network struggles — typically near the scatterer surface and in the shadow region where field gradients are steepest.
The primary accuracy metric, reported as a percentage. Measures overall solution quality normalized by the magnitude of the true field, so it is comparable across different ka values where field amplitudes vary.
The worst-case error anywhere in the domain. Sensitive to localized failures — a network can have low L2 error but high max error if it fails in a small region (e.g., near the shadow boundary or Poisson bright spot).
Average absolute error across all grid points. Less sensitive to outliers than max error but less normalized than L2 relative error. Useful for judging typical prediction quality at an arbitrary point in the domain.
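The three metrics can be written compactly for complex-valued fields. A sketch with hypothetical array names (`pred` and `exact` would be the 200×200 complex grids); whether max and mean error are additionally normalized by the field scale is a reporting convention not pinned down here, so they are left as raw magnitudes:

```python
import numpy as np

def error_metrics(pred, exact):
    """L2 relative, max pointwise, and mean absolute error for complex fields."""
    err = np.abs(pred - exact)  # complex-magnitude error at each grid point
    return {
        "l2_rel": np.linalg.norm(err) / np.linalg.norm(np.abs(exact)),
        "max": err.max(),
        "mean": err.mean(),
    }
```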