Fluid-structure interaction is a widespread phenomenon in nature. Although its numerical modeling has come a long way, the application of numerical design tools to these multiphysics problems still lags behind. Gradient-based optimization is currently the most popular approach in topology optimization, so it is necessary to use mesh deformation techniques with continuous, smooth derivatives. In this work, we address mesh deformation techniques for structured, quadrilateral meshes. We discuss and comment on two legacy mesh deformation techniques, namely the spring analogy model and the linear elasticity model. In addition, we propose a new technique based on the Yeoh hyperelasticity model. We focus on mesh quality as a gateway to mesh admissibility. We propose layered selective stiffening, in which the elements adjacent to the fluid-structure interface - where the bulk of the mesh distortion occurs - are stiffened in consecutive layers. Both the legacy models and the new model are able to sustain large deformations without degrading mesh quality, and the results are further improved by layered selective stiffening.
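As a minimal sketch of the spring analogy idea (our toy illustration, not the paper's implementation): each mesh edge is treated as a linear spring whose stiffness is the inverse of its undeformed length, and each free node is relaxed to the stiffness-weighted average of its neighbors.

```python
import numpy as np

def spring_relax(ref_nodes, nodes, edges, fixed, iters=100):
    """Spring-analogy mesh smoothing: edge stiffness k = 1 / (undeformed
    edge length), so short edges resist distortion more strongly; every
    free node is moved to the stiffness-weighted average of its neighbors."""
    k = {}
    nbrs = {i: [] for i in range(len(nodes))}
    for a, b in edges:
        k[(a, b)] = k[(b, a)] = 1.0 / np.linalg.norm(ref_nodes[a] - ref_nodes[b])
        nbrs[a].append(b)
        nbrs[b].append(a)
    x = nodes.copy()
    for _ in range(iters):
        for i in range(len(x)):
            if i in fixed:
                continue
            w = np.array([k[(i, j)] for j in nbrs[i]])
            x[i] = (w[:, None] * x[nbrs[i]]).sum(axis=0) / w.sum()
    return x

# 3x3 node patch of a unit square; only the center node (index 4) is free.
ref = np.array([(0.5 * (i % 3), 0.5 * (i // 3)) for i in range(9)], float)
edges = [(i, i + 1) for i in (0, 1, 3, 4, 6, 7)] + [(i, i + 3) for i in range(6)]
deformed = ref.copy()
deformed[6:, 1] += 0.5            # lift the top boundary, as at an FSI interface
fixed = set(range(9)) - {4}
out = spring_relax(ref, deformed, edges, fixed)
```

With equal undeformed edge lengths the free center node settles at the plain average of its four neighbors, i.e. it absorbs a quarter of the boundary motion.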
This paper introduces a novel method for the efficient and accurate computation of volume fractions on unstructured polyhedral meshes, where the phase boundary is an orientable hypersurface, implicitly given as the iso-contour of a sufficiently smooth level-set function. Locally, i.e. in each mesh cell, we compute a principal coordinate system in which the hypersurface can be approximated as the graph of an osculating paraboloid. A recursive application of the \textsc{Gaussian} divergence theorem then allows us to transform the volume integrals analytically into curve integrals associated with the polyhedron faces, which can easily be approximated numerically by standard \textsc{Gauss-Legendre} quadrature. This face-based formulation enables application to unstructured meshes and considerably simplifies the numerical procedure in three spatial dimensions. We discuss the theoretical foundations and provide details of the numerical algorithm. Finally, we present numerical results for convex and non-convex hypersurfaces embedded in cuboidal and tetrahedral meshes, showing both high accuracy and third- to fourth-order convergence with spatial resolution.
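The core reduction can be illustrated with a two-dimensional toy version (ours, not the paper's algorithm): the divergence theorem converts an integral over a polygonal cell into per-face flux integrals, each of which is handled by Gauss-Legendre quadrature.

```python
import numpy as np

def polygon_integral_div(verts, F, n_gauss=4):
    """Integrate div(F) over a polygon (CCW vertex list) by converting the
    area integral into face (flux) integrals via the divergence theorem,
    each approximated with Gauss-Legendre quadrature."""
    t, w = np.polynomial.legendre.leggauss(n_gauss)   # nodes/weights on [-1, 1]
    total = 0.0
    m = len(verts)
    for i in range(m):
        a = np.asarray(verts[i], float)
        b = np.asarray(verts[(i + 1) % m], float)
        mid, half = 0.5 * (a + b), 0.5 * (b - a)
        length = np.linalg.norm(b - a)
        tangent = (b - a) / length
        normal = np.array([tangent[1], -tangent[0]])  # outward for CCW ordering
        for tk, wk in zip(t, w):
            total += wk * (length / 2.0) * np.dot(F(mid + tk * half), normal)
    return total

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
area = polygon_integral_div(square, lambda p: 0.5 * p)   # div(0.5*p) = 1
```

Choosing an antiderivative field F with div F = f reduces any area integral of f to face integrals; e.g. F = (x^2/2, 0) integrates f = x over the cell.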
An adapted deflation preconditioner is employed to accelerate the solution of linear systems resulting from the discretization of fracture mechanics problems with well-conditioned extended/generalized finite elements. The deflation space typically used for linear elasticity problems is enriched with additional vectors, accounting for the enrichment functions used, thus effectively removing low frequency components of the error. To further improve performance, deflation is combined, in a multiplicative way, with a block-Jacobi preconditioner, which removes high frequency components of the error as well as linear dependencies introduced by enrichment. The resulting scheme is tested on a series of non-planar crack propagation problems and compared to alternative linear solvers in terms of performance.
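A rough sketch of how deflation works in practice (a generic deflated conjugate gradient method, not the paper's XFEM-specific scheme): the coarse space W is projected out of the iteration, so the low-frequency error components spanned by W are handled by a small direct solve.

```python
import numpy as np

def deflated_cg(A, b, W, tol=1e-10, maxit=500):
    """Deflated CG: with Q = W (W^T A W)^{-1} W^T and P = I - A Q,
    CG runs on the projected system P A y = P b, and the coarse
    component Q b is recombined at the end."""
    E = W.T @ A @ W                                # small coarse operator
    Q = lambda v: W @ np.linalg.solve(E, W.T @ v)  # coarse-space solve
    P = lambda v: v - A @ Q(v)                     # deflation projector
    x = np.zeros_like(b)
    r = P(b)
    p = r.copy()
    rs = r @ r
    it = 0
    while np.sqrt(rs) > tol and it < maxit:
        Ap = P(A @ p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
        it += 1
    return Q(b) + x - Q(A @ x), it   # u = Q b + (I - Q A) x

# 1D Laplacian with piecewise-constant (low-frequency) deflation vectors.
n = 32
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.sin(np.linspace(0, 3, n))
W = np.zeros((n, 4))
for j in range(4):
    W[8 * j:8 * (j + 1), j] = 1.0
u, iters = deflated_cg(A, b, W)
```

The recombination is exact: since A(I - QA) = PA, the returned vector satisfies Au = b whenever CG has converged on the projected system.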
In recent years, the concept of continuous-aperture MIMO (CAP-MIMO) has been revisited to achieve improved communication performance with limited antenna apertures. Unlike classical MIMO composed of discrete antennas, CAP-MIMO has a continuous antenna surface, which is expected to generate an arbitrary current distribution (i.e., pattern) and induce controllable spatial electromagnetic waves. In this way, information can be modulated directly on the electromagnetic waves, making it promising to approach the ultimate capacity of finite apertures. Pattern design is the key factor determining the system performance of CAP-MIMO, but it has not been well studied in the literature. In this paper, we propose pattern-division multiplexing to design the patterns for CAP-MIMO. Specifically, we first derive the system model of a typical CAP-MIMO system, which allows us to formulate the capacity maximization problem. Then, we propose a general pattern-division multiplexing technique that transforms the design of continuous pattern functions into the design of their projection lengths on a finite set of orthogonal bases, thereby overcoming the challenge of optimizing over continuous functions. Based on this technique, we further propose an alternating-optimization-based pattern design scheme to solve the formulated capacity maximization problem. Simulation results show that the capacity achieved by the proposed scheme is about 260% higher than that of the benchmark scheme, which demonstrates the effectiveness of the proposed pattern-division multiplexing for CAP-MIMO.
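The basic move - replacing a continuous pattern function by its projection lengths on a finite orthogonal basis - can be sketched in one dimension (with a toy cosine basis of our choosing, not the paper's basis):

```python
import numpy as np

# Sample a 1-D "aperture" and build a discretely orthonormal cosine basis.
x = np.linspace(0.0, 1.0, 400, endpoint=False)
dx = x[1] - x[0]
basis = [np.ones_like(x)] + [np.sqrt(2) * np.cos(2 * np.pi * k * x)
                             for k in range(1, 8)]

# A continuous pattern is reduced to finitely many projection lengths ...
pattern = 1.5 * basis[0] + 0.7 * basis[3]
coeffs = [np.sum(pattern * phi) * dx for phi in basis]

# ... and reconstructed exactly from them when it lies in the span.
recon = sum(c * phi for c, phi in zip(coeffs, basis))
```

Optimizing over the finite coefficient vector, rather than over the function itself, is what makes the otherwise infinite-dimensional design problem tractable.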
An essential ingredient of a spectral method is the choice of suitable bases for test and trial spaces. On complex domains, these bases are harder to devise, necessitating the use of domain partitioning techniques such as the spectral element method. In this study, we introduce the Projection Extension (PE) method, an approach that yields spectrally accurate solutions to various problems on complex geometries without requiring domain decomposition. This technique builds on the insights used by extension methodologies such as the immersed boundary smooth extension and Smooth Forcing Extension (SFE) methods that are designed to improve the order of accuracy of the immersed boundary method. In particular, it couples an accurate extension procedure, which functions on arbitrary domains regardless of connectedness or regularity, with a least-squares minimization of the boundary condition residual. The resulting procedure is stable under iterative application and straightforward to generalize to higher dimensions. Moreover, it rapidly and robustly yields exponentially convergent solutions to a number of challenging test problems, including elliptic and parabolic equations, Newtonian fluid flow, and viscoelastic problems.
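A one-dimensional caricature of the extension idea (our toy, not the PE method itself): fit a truncated Fourier series by least squares to data given only on a physical subdomain; the fitted series is smooth and periodic on the whole box and thus extends the function beyond the subdomain.

```python
import numpy as np

# Data only on the "physical" subdomain [0, 0.6] of the periodic box [0, 1).
xs = np.linspace(0.0, 0.6, 200)
f = np.exp(xs)

# Truncated Fourier basis sampled on the subdomain.
K = 12
cols = [np.ones_like(xs)]
for k in range(1, K + 1):
    cols.append(np.cos(2 * np.pi * k * xs))
    cols.append(np.sin(2 * np.pi * k * xs))
B = np.column_stack(cols)

# Least-squares fit: the coefficients define a globally smooth periodic
# function that agrees with f on the subdomain to high accuracy.
c, *_ = np.linalg.lstsq(B, f, rcond=None)
residual = np.max(np.abs(B @ c - f))
```

Once the extension exists on the simple periodic box, fast spectral solvers become available there regardless of the shape of the original domain.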
Majority-SAT is the problem of determining whether an input $n$-variable formula in conjunctive normal form (CNF) has at least $2^{n-1}$ satisfying assignments. Majority-SAT and related problems have been studied extensively in various AI communities interested in the complexity of probabilistic planning and inference. Although Majority-SAT has been known to be PP-complete for over 40 years, the complexity of a natural variant has remained open: Majority-$k$SAT, where the input CNF formula is restricted to have clause width at most $k$. We prove that for every $k$, Majority-$k$SAT is in P. In fact, for any positive integer $k$ and rational $\rho \in (0,1)$ with bounded denominator, we give an algorithm that can determine whether a given $k$-CNF has at least $\rho \cdot 2^n$ satisfying assignments, in deterministic linear time (whereas the previous best-known algorithm ran in exponential time). Our algorithms have interesting positive implications for counting complexity and the complexity of inference, significantly reducing the known complexities of related problems such as E-MAJ-$k$SAT and MAJ-MAJ-$k$SAT. At the heart of our approach is an efficient method for solving threshold counting problems by extracting sunflowers found in the corresponding set system of a $k$-CNF. We also show that the tractability of Majority-$k$SAT is somewhat fragile. For the closely related GtMajority-SAT problem (where we ask whether a given formula has greater than $2^{n-1}$ satisfying assignments) which is known to be PP-complete, we show that GtMajority-$k$SAT is in P for $k\le 3$, but becomes NP-complete for $k\geq 4$. These results are counterintuitive, because the ``natural'' classifications of these problems would have been PP-completeness, and because there is a stark difference in the complexity of GtMajority-$k$SAT and Majority-$k$SAT for all $k\ge 4$.
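For concreteness, the decision problem itself fits in a few lines of brute force (exponential time, unlike the linear-time algorithm of the paper; the encoding conventions below are our own):

```python
from itertools import product

def count_sat(n, clauses):
    """Count satisfying assignments of a CNF over x_1..x_n by enumeration.
    Clauses are tuples of nonzero ints: +i means x_i, -i means NOT x_i."""
    return sum(
        all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
        for bits in product([False, True], repeat=n)
    )

def majority_sat(n, clauses):
    # Does the formula have at least 2^(n-1) satisfying assignments?
    return count_sat(n, clauses) >= 2 ** (n - 1)
```

For example, a single width-3 clause over 3 variables rules out exactly one assignment, so `majority_sat(3, [(1, 2, 3)])` holds (7 of 8 assignments satisfy it).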
In this paper, a time-periodic MGRIT algorithm is proposed as a means to reduce the time-to-solution of numerical algorithms by exploiting the time periodicity inherent to many applications in science and engineering. The time-periodic MGRIT algorithm is applied to a variety of linear and nonlinear single- and multiphysics problems that are periodic in time. It is demonstrated that the proposed parallel-in-time algorithm obtains the same time-periodic steady-state solution as sequential time-stepping. It is further shown that the required number of MGRIT iterations can be estimated a priori and that the new MGRIT variant significantly and consistently reduces the time-to-solution compared to sequential time-stepping, irrespective of the number of spatial dimensions, of linear or nonlinear PDE models, of single-physics or coupled problems, and of the employed computing resources. The numerical experiments demonstrate that the time-periodic MGRIT algorithm enables a greater level of parallelism, yielding faster turnaround and thus facilitating the solution of more complex and more realistic problems.
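The sequential time-stepping baseline that MGRIT is compared against can be sketched on a scalar toy problem (our own example, u' = -u + sin t with backward Euler): march period after period until the state at the start of a period repeats.

```python
import math

def step_period(u, dt, steps):
    """One period of backward Euler for the scalar ODE u' = -u + sin(t),
    with period T = 2*pi: u_{n+1} = (u_n + dt*sin(t_{n+1})) / (1 + dt)."""
    t = 0.0
    for _ in range(steps):
        t += dt
        u = (u + dt * math.sin(t)) / (1.0 + dt)
    return u

def periodic_steady_state(u0, steps=1000, tol=1e-10, max_periods=200):
    """Sequential time-stepping: repeat periods until the period-start
    state stops changing, i.e. the time-periodic steady state is reached."""
    dt = 2.0 * math.pi / steps
    u = u0
    for _ in range(max_periods):
        u_next = step_period(u, dt, steps)
        if abs(u_next - u) < tol:
            return u_next
        u = u_next
    return u

u_star = periodic_steady_state(5.0)
```

The exact periodic solution is u(t) = (sin t - cos t)/2, so the period-start value approaches -0.5 up to discretization error, and the converged cycle is independent of the initial state; a parallel-in-time method must reproduce this same fixed point.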
We consider sensitivity of a generic stochastic optimization problem to model uncertainty. We take a non-parametric approach and capture model uncertainty using Wasserstein balls around the postulated model. We provide explicit formulae for the first order correction to both the value function and the optimizer and further extend our results to optimization under linear constraints. We present applications to statistics, machine learning, mathematical finance and uncertainty quantification. In particular, we provide explicit first-order approximation for square-root LASSO regression coefficients and deduce coefficient shrinkage compared to the ordinary least squares regression. We consider robustness of call option pricing and deduce a new Black-Scholes sensitivity, a non-parametric version of the so-called Vega. We also compute sensitivities of optimized certainty equivalents in finance and propose measures to quantify robustness of neural networks to adversarial examples.
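The flavor of such first-order corrections can be seen in a toy one-point example of our own (not taken from the paper): for $f(x) = x^2$ and the postulated model $\mu = \delta_{x_0}$, maximizing over the $2$-Wasserstein ball of radius $\delta$ gives
\[
V(\delta) = \sup_{W_2(\nu,\,\delta_{x_0}) \le \delta} \mathbb{E}_\nu\!\left[x^2\right] = \left(|x_0| + \delta\right)^2,
\qquad
V'(0) = 2|x_0| = \Big(\mathbb{E}_{\delta_{x_0}}\big[|f'(x)|^2\big]\Big)^{1/2},
\]
i.e. the first-order correction is a gradient-norm quantity evaluated under the postulated model, consistent with the explicit formulae derived in the paper.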
Graph Neural Networks (GNNs) have received considerable attention for learning on graph-structured data across a wide variety of tasks. The propagation mechanism, whose effectiveness has been demonstrated repeatedly, is the most fundamental component of GNNs. Although most GNNs follow a message-passing scheme, little effort has been made to discover and analyze the essential relations between their propagation mechanisms. In this paper, we establish a surprising connection between different propagation mechanisms through a unified optimization problem, showing that despite the proliferation of GNN variants, their propagation mechanisms are in fact optimal solutions of a feature-fitting function over a wide class of graph kernels, combined with a graph regularization term. Our unified optimization framework, which summarizes the commonalities between several of the most representative GNNs, not only provides a macroscopic view of the relations between different GNNs, but also opens up new opportunities for flexibly designing new ones. Within the proposed framework, we observe that existing works usually adopt naive graph convolutional kernels for the feature-fitting function, and we further develop two novel objective functions with adjustable graph kernels exhibiting low-pass or high-pass filtering capabilities, respectively. Moreover, we provide convergence proofs and expressive-power comparisons for the proposed models. Extensive experiments on benchmark datasets show that the proposed GNNs not only outperform state-of-the-art methods but also effectively alleviate over-smoothing, further verifying the feasibility of designing GNNs within our unified optimization framework.
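A minimal instance of this viewpoint (our sketch, using the well-known personalized-PageRank-style propagation, not the paper's general kernels): the fixed point of the propagation iteration coincides with the minimizer of a feature-fitting term plus graph-Laplacian regularization, $F^* = \arg\min_F \|F - X\|_F^2 + \lambda\,\mathrm{tr}(F^\top L F)$.

```python
import numpy as np

# Path graph on 4 nodes with self-loops, symmetrically normalized.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
A_hat = A + np.eye(4)
d = A_hat.sum(axis=1)
A_norm = A_hat / np.sqrt(np.outer(d, d))
L = np.eye(4) - A_norm                      # normalized graph Laplacian

X = np.random.default_rng(0).normal(size=(4, 3))   # node features
lam = 2.0

# Closed-form minimizer of ||F - X||^2 + lam * tr(F^T L F).
F_closed = np.linalg.solve(np.eye(4) + lam * L, X)

# The corresponding propagation iteration converges to the same minimizer:
# F <- (X + lam * A_norm F) / (1 + lam), a contraction since ||A_norm|| <= 1.
F = X.copy()
for _ in range(200):
    F = (X + lam * (A_norm @ F)) / (1.0 + lam)
```

Changing the fitting term or the kernel in the objective immediately yields a different propagation rule, which is the design freedom the framework exposes.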
Since deep neural networks were developed, they have made substantial contributions to everyday life, and machine learning now supports decision-making in almost every aspect of it. However, despite this achievement, the design and training of neural networks remain challenging and unpredictable procedures. To lower the technical barrier for non-expert users, automated hyper-parameter optimization (HPO) has become a popular topic in both academia and industry. This paper provides a review of the most essential topics in HPO. The first section introduces the key hyper-parameters related to model training and structure, and discusses their importance and methods for defining their value ranges. Then, the review focuses on major optimization algorithms and their applicability, covering their efficiency and accuracy, especially for deep learning networks. The study next reviews major services and toolkits for HPO, comparing their support for state-of-the-art search algorithms, compatibility with major deep learning frameworks, and extensibility with user-designed modules. The paper concludes with the problems that arise when HPO is applied to deep learning, a comparison of optimization algorithms, and prominent approaches for model evaluation under limited computational resources.
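As a baseline for the algorithms surveyed, random search - often the first HPO method tried - fits in a few lines (a generic sketch; the hyper-parameter names and the toy objective are illustrative, not tied to any specific toolkit):

```python
import random

def random_search(objective, space, n_trials=50, seed=0):
    """Minimal random-search HPO: sample each hyper-parameter uniformly
    from its range and keep the lowest-scoring configuration."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("inf")
    for _ in range(n_trials):
        cfg = {name: rng.uniform(lo, hi) for name, (lo, hi) in space.items()}
        score = objective(cfg)
        if score < best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

def toy_objective(cfg):
    # Stand-in for a validation loss, minimized at lr=0.1, momentum=0.9.
    return (cfg["lr"] - 0.1) ** 2 + (cfg["momentum"] - 0.9) ** 2

space = {"lr": (0.0, 0.3), "momentum": (0.5, 1.0)}
best_cfg, best_score = random_search(toy_objective, space, n_trials=200)
```

More sophisticated methods covered in the review (Bayesian optimization, Hyperband, etc.) replace the uniform sampling with a model- or budget-aware choice of the next configuration, but expose the same search-space/objective interface.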
Multi-Agent Systems (MAS) are an active area of research within Artificial Intelligence, with an increasingly important impact on industrial and other real-world applications. Within a MAS, autonomous agents interact to pursue personal interests and/or to achieve common objectives. Distributed Constraint Optimization Problems (DCOPs) have emerged as one of the prominent formalisms for governing the agents' autonomous behavior, where both algorithms and communication models are driven by the structure of the specific problem. During the last decade, several extensions of the DCOP model have enabled it to support MAS in complex, real-time, and uncertain environments. This survey provides an overview of the DCOP model, gives a classification of its multiple extensions, and addresses both resolution methods and applications that find a natural mapping within each class of DCOPs. The proposed classification suggests several future perspectives for DCOP extensions and identifies challenges in the design of efficient resolution algorithms, possibly through the adaptation of strategies from different areas.
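A DCOP instance in its simplest form can be stated concretely (a toy graph-coloring example with a centralized brute-force solver - far from the distributed algorithms the survey covers, but it fixes the problem definition):

```python
from itertools import product

def solve_dcop(domains, constraints):
    """Centralized brute-force DCOP solver: enumerate every joint
    assignment and return the one with minimum total constraint cost.
    domains: {agent: [values]}; constraints: list of ((a_i, a_j), cost_fn)."""
    agents = list(domains)
    best, best_cost = None, float("inf")
    for values in product(*(domains[a] for a in agents)):
        assign = dict(zip(agents, values))
        cost = sum(fn(assign[i], assign[j]) for (i, j), fn in constraints)
        if cost < best_cost:
            best, best_cost = assign, cost
    return best, best_cost

# Toy instance: three agents pick colors; adjacent agents clash on equal colors.
domains = {"a": [0, 1], "b": [0, 1], "c": [0, 1]}
clash = lambda u, v: 1 if u == v else 0
constraints = [(("a", "b"), clash), (("b", "c"), clash)]
best, cost = solve_dcop(domains, constraints)
```

Actual DCOP algorithms distribute this optimization: each agent controls only its own variable and exchanges messages with its neighbors in the constraint graph instead of enumerating the joint space.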