Abstract
Partitioned neural network functions are used to approximate the solution of partial differential equations. The problem domain is partitioned into non-overlapping subdomains, and a partitioned neural network function is defined on each subdomain, so that each network approximates the solution locally. To obtain a convergent neural network solution, continuity conditions on the partitioned neural network functions across the subdomain interfaces must be included in the loss function used to train the network parameters. In this work, by introducing suitable interface values, the loss function is reformulated into a sum of localized loss functions, and each localized loss function is used to train the corresponding local network parameters. In addition, to accelerate convergence of the neural network solution, each localized loss function is enriched with an augmented Lagrangian term, in which the interface and boundary conditions are enforced as constraints on the local solutions via Lagrange multipliers. The local network parameters and Lagrange multipliers are then found by optimizing the localized loss function. To exploit the localized loss functions for parallel computation, an iterative algorithm is also proposed. The training performance and convergence of the proposed algorithms are studied numerically on various test examples.
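To make the formulation concrete, below is a minimal sketch (not the paper's code) of one localized loss with an augmented Lagrangian interface term, for a 1D Poisson problem -u'' = f restricted to a single subdomain. The network architecture `u_theta`, the penalty weight `rho`, and the interface value `g_iface` are illustrative assumptions; the paper's exact formulation may differ.

```python
# Minimal sketch, assuming a 1D Poisson problem -u'' = f on one subdomain.
# The network u_theta, penalty weight rho, and interface value g_iface are
# illustrative assumptions, not taken from the paper.
import jax
import jax.numpy as jnp

def u_theta(params, x):
    # Small one-hidden-layer network approximating u on this subdomain.
    w1, b1, w2, b2 = params
    h = jnp.tanh(w1 * x + b1)        # hidden activations, shape (width,)
    return jnp.dot(w2, h) + b2       # scalar network output

def pde_residual(params, x, f):
    # Residual of -u'' = f at a single collocation point x.
    u_xx = jax.grad(jax.grad(u_theta, argnums=1), argnums=1)(params, x)
    return -u_xx - f(x)

def local_loss(params, lam, xs, f, x_iface, g_iface, rho):
    # Localized loss: mean squared PDE residual on interior points plus an
    # augmented Lagrangian term enforcing u(x_iface) = g_iface, where
    # g_iface is the interface value shared with the neighbouring subdomain.
    res = jax.vmap(lambda x: pde_residual(params, x, f))(xs)
    c = u_theta(params, x_iface) - g_iface   # interface constraint violation
    return jnp.mean(res ** 2) + lam * c + 0.5 * rho * c ** 2

# Hypothetical usage on subdomain [0, 0.5] with f(x) = pi^2 sin(pi x):
key = jax.random.PRNGKey(0)
width = 16
params = (jax.random.normal(key, (width,)), jnp.zeros(width),
          jax.random.normal(key, (width,)) / width, jnp.zeros(()))
f = lambda x: jnp.pi ** 2 * jnp.sin(jnp.pi * x)
xs = jnp.linspace(0.0, 0.5, 32)
loss = local_loss(params, 0.0, xs, f, 0.5, jnp.sin(jnp.pi * 0.5), 10.0)
```

In the classical augmented Lagrangian method, one would alternate minimization over the network parameters with a multiplier update such as lam ← lam + rho * c; the abstract states only that the parameters and multipliers are found by optimizing the localized loss, so the exact update scheme here is an assumption.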
| Original language | English |
| --- | --- |
| Article number | 117168 |
| Journal | Computer Methods in Applied Mechanics and Engineering |
| Volume | 429 |
| DOIs | |
| Publication status | Published - 1 Sept 2024 |
Bibliographical note
Publisher Copyright: © 2024 Elsevier B.V.
Keywords
- Iterative algorithm
- Lagrange multipliers
- Localized loss function
- Non-overlapping domain decomposition
- Partial differential equations
- Partitioned neural networks