This paper extends the nonsmooth Relaxed Variational Approach (RVA) to topology optimization, proposed by the authors in a preceding work, to the solution of thermal optimization problems. First, the RVA topology optimization method is briefly discussed and, then, it is applied to a set of representative problems in which the thermal compliance, the deviation of the heat flux from a given field and the average temperature are minimized. For each optimization problem, the relaxed topological derivative and the corresponding adjoint equations are presented. This set of expressions is then discretized in the context of the finite element method and used in the optimization algorithm to update the characteristic function. Finally, some representative (3D) thermal topology optimization examples are presented to assess the performance of the proposed method, and the Relaxed Variational Approach solutions are compared with those obtained with the level set method in terms of the cost function, the topology design and the computational cost.
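As a rough illustration of the update step mentioned above (a minimal sketch, not the authors' implementation: the sensitivity field, target volume fraction and bisection tolerance are hypothetical placeholders), the characteristic function can be updated by thresholding a relaxed-topological-derivative field, with the threshold found by bisection so that a prescribed volume fraction is met:

```python
import numpy as np

def update_characteristic(sensitivity, volume_fraction, tol=1e-6):
    """Threshold a sensitivity field (standing in for the FEM-evaluated relaxed
    topological derivative) so that the characteristic function chi matches a
    target volume fraction; the level is found by bisection."""
    lo, hi = sensitivity.min(), sensitivity.max()
    while hi - lo > tol:
        level = 0.5 * (lo + hi)
        chi = (sensitivity >= level).astype(float)  # material where sensitivity is high
        if chi.mean() > volume_fraction:            # too much material -> raise the level
            lo = level
        else:
            hi = level
    return chi

# Toy sensitivity field on a 3D grid (sign conventions differ across formulations).
rng = np.random.default_rng(0)
sens = rng.standard_normal((16, 16, 16))
chi = update_characteristic(sens, volume_fraction=0.4)
print(chi.mean())  # ~0.4
```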
We consider a Johnson-N\'ed\'elec FEM-BEM coupling, which is a direct and non-symmetric coupling of finite and boundary element methods, in order to solve interface problems for the magnetostatic Maxwell's equations with the magnetic vector potential ansatz. In the FEM domain, equations may be non-linear, whereas they are exclusively linear in the BEM part, to guarantee the existence of a fundamental solution. First, the weak problem is formulated in quotient spaces to avoid resorting to a saddle point formulation. Second, we establish well-posedness of the arising problem in this setting, using the framework of Lipschitz and strongly monotone operators as well as a stability result for a special type of non-linearity that is typically considered in magnetostatic applications. Then, the discretization is performed in the isogeometric context, i.e., the same type of basis functions that are used for geometry design are considered as ansatz functions for the discrete setting. In particular, NURBS are employed for geometry considerations, and B-Splines, which can be understood as a special type of NURBS, for analysis purposes. In this context, we derive a priori estimates w.r.t. h-refinement and point out an interesting behavior of the BEM: the convergence rates improve when a functional of the solution is evaluated in the exterior BEM domain, which may lead to a doubling of the convergence rate under certain assumptions. Finally, we end the paper with a numerical example illustrating the theoretical results, along with a conclusion and an outlook.
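For readers unfamiliar with the isogeometric setting, the B-Spline ansatz functions can be evaluated via the standard Cox-de Boor recursion; the sketch below is a minimal illustration (the knot vector and degree are arbitrary choices, not tied to the paper's solver):

```python
import numpy as np

def bspline_basis(i, p, knots, x):
    """Cox-de Boor recursion: value of the i-th B-Spline basis of degree p at x.
    Note the half-open support convention, which drops the final knot value."""
    if p == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:
        left = (x - knots[i]) / (knots[i + p] - knots[i]) \
               * bspline_basis(i, p - 1, knots, x)
    if knots[i + p + 1] > knots[i + 1]:
        right = (knots[i + p + 1] - x) / (knots[i + p + 1] - knots[i + 1]) \
                * bspline_basis(i + 1, p - 1, knots, x)
    return left + right

# Quadratic basis on an open knot vector; the basis forms a partition of unity.
knots = [0, 0, 0, 0.5, 1, 1, 1]
vals = [bspline_basis(i, 2, knots, 0.3) for i in range(4)]
print(vals, sum(vals))  # sums to 1
```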
The importance of Variational Autoencoders reaches far beyond standalone generative models -- the approach is also used for learning latent representations and can be generalized to semi-supervised learning. This requires a thorough analysis of their commonly known shortcomings: posterior collapse and approximation errors. This paper analyzes VAE approximation errors caused by the combination of the ELBO objective with the choice of the encoder probability family, in particular under conditional independence assumptions. We identify the subclass of generative models consistent with the encoder family. We show that the ELBO optimizer is pulled away from the likelihood optimizer towards this consistent subset. Furthermore, this subset cannot be enlarged, and the respective error cannot be decreased, by considering deeper encoder networks alone.
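To make the objects under study concrete, the following sketch shows a generic ELBO with a factorized Gaussian encoder, i.e., the conditional independence assumption discussed above (all shapes, the toy decoder, and the Gaussian likelihood are illustrative assumptions, not the paper's models):

```python
import numpy as np

def elbo(x, mu, log_var, decode):
    """ELBO with factorized Gaussian q(z|x) = prod_i N(mu_i, exp(log_var_i))
    and a standard normal prior; the reconstruction term is Monte Carlo estimated."""
    eps = np.random.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps                        # reparameterization
    recon = -0.5 * np.sum((x - decode(z)) ** 2)                 # Gaussian log-lik. (up to const.)
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)  # KL(q || N(0, I))
    return recon - kl

decode = lambda z: z @ np.full((2, 4), 0.5)                     # toy linear decoder
x = np.ones(4)
print(elbo(x, mu=np.zeros(2), log_var=np.zeros(2), decode=decode))
```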
Time-space fractional Bloch-Torrey equations (TSFBTEs) are developed by some researchers to investigate the relationship between diffusion and fractional-order dynamics. In this paper, we first propose a second-order implicit difference scheme for TSFBTEs by employing the recently proposed L2-type formula [A. A. Alikhanov, C. Huang, Appl. Math. Comput. (2021) 126545]. Then, we prove the stability and the convergence of the proposed scheme. Based on such a numerical scheme, an L2-type all-at-once system is derived. In order to solve this system in a parallel-in-time fashion, a bilateral preconditioning technique is designed to accelerate the convergence of Krylov subspace solvers according to the special structure of the coefficient matrix of the system. We theoretically show that the condition number of the preconditioned matrix is uniformly bounded by a constant for the time fractional order $\alpha \in (0,0.3624)$. Numerical results are reported to show the efficiency of our method.
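While the bilateral preconditioner above exploits the specific structure of the L2-type all-at-once matrix, the generic pattern it accelerates, a preconditioned Krylov iteration on a structured system, can be sketched as follows (a circulant preconditioner for a Toeplitz test matrix is used purely as a stand-in; it is not the authors' construction):

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import LinearOperator, gmres

# Symmetric Toeplitz test matrix, loosely mimicking a fractional-derivative stencil.
n = 256
c = np.zeros(n)
c[0] = 2.0
c[1:] = -1.0 / (1.0 + np.arange(1, n)) ** 1.5
A = toeplitz(c)

# Strang-type circulant approximation of A, inverted cheaply via the FFT.
s = c.copy()
s[n // 2 + 1:] = c[1:n // 2][::-1]
eig = np.fft.fft(s)
M = LinearOperator((n, n), matvec=lambda v: np.real(np.fft.ifft(np.fft.fft(v) / eig)))

b = np.ones(n)
x, info = gmres(A, b, M=M)   # M approximates inv(A) and clusters the spectrum
print(info, np.linalg.norm(A @ x - b))
```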
In this paper, we consider asymptotical regularization with convex constraints for nonlinear ill-posed problems. The method allows the use of non-smooth penalty terms, including L1-like and total-variation-like penalty functionals, which are significant for reconstructing special features of solutions such as sparsity and piecewise constancy. Under certain conditions we give convergence properties of the method. Moreover, we propose Runge-Kutta type methods to discretize the initial value problems, which yields new iterative regularization methods.
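As a minimal illustration of the last point (for a linear operator and without the convex constraint or non-smooth penalty, so far simpler than the setting of the paper), asymptotical regularization integrates the flow $\dot{x}(t) = A^{*}(y - Ax(t))$, and a Runge-Kutta discretization of this initial value problem yields an iterative regularization method:

```python
import numpy as np

def rk2_regularization(A, y, steps=200, dt=0.1):
    """Heun's (RK2) discretization of the asymptotic regularization flow
    x'(t) = A^T (y - A x(t)); stopping after finitely many steps acts
    as the regularization."""
    x = np.zeros(A.shape[1])
    f = lambda x: A.T @ (y - A @ x)
    for _ in range(steps):
        k1 = f(x)
        k2 = f(x + dt * k1)
        x = x + 0.5 * dt * (k1 + k2)
    return x

# Toy ill-posed problem with noisy data.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20)) / np.sqrt(50)
x_true = rng.standard_normal(20)
y = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.linalg.norm(rk2_regularization(A, y) - x_true))
```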
In the problem of active sequential hypothesis testing (ASHT), a learner seeks to identify the true hypothesis from among a known set of hypotheses. The learner is given a set of actions and knows the probability distribution of the outcome of each action under each hypothesis. Given a target error $\delta>0$, the goal is to sequentially select the fewest number of actions so as to identify the true hypothesis with probability at least $1 - \delta$. Motivated by applications in which the number of hypotheses or actions is massive (e.g., genomics-based cancer detection), we propose efficient (greedy, in fact) algorithms and provide the first approximation guarantees for ASHT, under two types of adaptivity. Both of our guarantees are independent of the number of actions and logarithmic in the number of hypotheses. We numerically evaluate the performance of our algorithms using both synthetic and real-world DNA mutation data, demonstrating that our algorithms outperform previously proposed heuristic policies by large margins.
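A greedy policy in this spirit can be written compactly; the sketch below selects the action whose outcome is most informative about the hypothesis under the current posterior, measured by mutual information (an illustrative rule, not the paper's exact selection objective or guarantee):

```python
import numpy as np

def greedy_action(posterior, outcome_probs):
    """Pick the action maximizing mutual information between the hypothesis
    (under `posterior`) and the action's outcome.
    outcome_probs[a, h, o] = P(outcome o | action a, hypothesis h)."""
    best_a, best_mi = None, -np.inf
    for a in range(outcome_probs.shape[0]):
        p_o = posterior @ outcome_probs[a]      # marginal outcome distribution
        mi = sum(posterior[h] * outcome_probs[a, h] @ np.log(
                 np.maximum(outcome_probs[a, h], 1e-12) / np.maximum(p_o, 1e-12))
                 for h in range(len(posterior)))
        if mi > best_mi:
            best_a, best_mi = a, mi
    return best_a

# 3 hypotheses, 2 actions with binary outcomes; action 1 separates them better.
probs = np.array([[[0.5, 0.5], [0.5, 0.5], [0.6, 0.4]],
                  [[0.9, 0.1], [0.1, 0.9], [0.1, 0.9]]])
print(greedy_action(np.ones(3) / 3, probs))  # -> 1
```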
Locating an object is key in many applications, notably in high-stakes real-world scenarios such as detecting humans or obstacles in vehicular networks. In such applications, pointwise estimates are not enough, and the full area of locations compatible with the acquired measurements should be available for robust and safe navigation. This paper presents a scalable algorithm for creating a superset of all possible target locations, given range measurements with bounded error. The assumption of bounded error is mild, since both hardware characteristics and the application scenario impose upper bounds on measurement errors, and the bounded set can be taken from confidence regions of the error distributions. We construct the superset through convex relaxations that use Linear Fractional Representations (LFRs), a well-known technique in robust control. Additionally, we provide a statistical interpretation for the set of possible target positions within the framework of robust estimation. Finally, we provide empirical validation by comparing our LFR method with a standard semidefinite relaxation. Our approach is shown to pay off for small to moderate noise levels: the supersets created by our method are tighter than the benchmark ones, being about 20% smaller in size. Furthermore, our method tends to be tight: the size of the supersets is, in median terms, within a 3% margin of a lower bound computed via grid search.
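The grid-search lower bound referenced at the end amounts to an exhaustive check of the bounded-error range constraints; a minimal version of that check (with hypothetical anchors, ranges and error bound) looks as follows:

```python
import numpy as np

def feasible_grid(anchors, ranges, eps, lim=5.0, res=200):
    """Return grid points x with | ||x - a_i|| - r_i | <= eps for all anchors,
    i.e., a gridded approximation of the set of possible target locations."""
    g = np.linspace(-lim, lim, res)
    X, Y = np.meshgrid(g, g)
    pts = np.stack([X.ravel(), Y.ravel()], axis=1)
    ok = np.ones(len(pts), dtype=bool)
    for a, r in zip(anchors, ranges):
        d = np.linalg.norm(pts - a, axis=1)
        ok &= np.abs(d - r) <= eps
    return pts[ok]

anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
target = np.array([1.5, 2.0])
ranges = np.linalg.norm(anchors - target, axis=1) + 0.1  # biased but bounded errors
print(len(feasible_grid(anchors, ranges, eps=0.3)))
```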
Valuation problems, such as feature interpretation, data valuation and model valuation for ensembles, are becoming increasingly important in many machine learning applications. Such problems are commonly solved by well-known game-theoretic criteria, such as the Shapley value or the Banzhaf index. In this work, we present a novel energy-based treatment for cooperative games, with a theoretical justification from the maximum entropy framework. Surprisingly, by conducting variational inference on the energy-based model, we recover various game-theoretic valuation criteria through one-step gradient ascent on the mean-field ELBO objective. This observation also verifies the rationality of existing criteria, as they all attempt to decouple the correlations among the players through the mean-field approach. By running gradient ascent for multiple steps, we obtain a trajectory of valuations, among which we define the valuation with the best conceivable decoupling error as the Variational Index. We experimentally demonstrate that the proposed Variational Index enjoys intriguing properties on certain synthetic and real-world valuation problems.
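To make the recovery of valuations by mean-field inference concrete, here is a small enumeration-based sketch (the toy game, step size and stopping rule are hypothetical, and this is not the paper's implementation): starting from uniform participation probabilities, the first gradient step on $\mathbb{E}_q[v(S)] + H(q)$ reproduces average marginal contributions in the Banzhaf spirit.

```python
import itertools
import numpy as np

def mean_field_valuation(v, n, steps=1, lr=0.2):
    """Gradient ascent on the mean-field ELBO E_q[v(S)] + H(q) for the coalition
    model p(S) ~ exp(v(S)), with q factorized over participation probs p_i.
    d E_q[v] / d p_i = E_q[v | i in S] - E_q[v | i not in S], exact by enumeration."""
    p = np.full(n, 0.5)
    for _ in range(steps):
        grad = np.zeros(n)
        for i in range(n):
            others = [k for k in range(n) if k != i]
            for rest in itertools.product([0, 1], repeat=n - 1):
                w = np.prod([p[j] if b else 1 - p[j] for j, b in zip(others, rest)])
                s1 = rest[:i] + (1,) + rest[i:]
                s0 = rest[:i] + (0,) + rest[i:]
                grad[i] += w * (v(s1) - v(s0))
        grad += np.log((1 - p) / p)          # gradient of the entropy term
        p = np.clip(p + lr * grad, 1e-3, 1 - 1e-3)
    return p

# Toy 3-player game: v(S) = |S|, plus a bonus if players 0 and 1 cooperate.
v = lambda s: sum(s) + 2.0 * (s[0] and s[1])
print(mean_field_valuation(v, 3))  # players 0 and 1 valued above player 2
```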
We propose an optimal low-resolution precoding technique that minimizes the symbol error probability of the users. Unlike existing approaches that rely on QPSK modulation, the proposed approach allows for any PSK modulation order in the derivation of the minimum symbol error probability objective function. Moreover, the proposed method solves the corresponding discrete optimization problem optimally via a sophisticated branch-and-bound method. In addition, we propose different approaches based on greedy search to compute practical solutions. Numerical simulations confirm the superiority of the proposed minimum symbol error probability criterion in terms of symbol error rate when compared with the established MMDDT and MMSE approaches.
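A greedy variant of such a discrete search can be sketched generically; the snippet below performs coordinate-wise greedy selection of M-PSK transmit symbols under a simple distance surrogate (the channel, objective and modulation order are illustrative stand-ins, not the paper's exact SEP criterion):

```python
import numpy as np

def greedy_psk_precoder(H, s, M=8, sweeps=3):
    """Coordinate-wise greedy search over an M-PSK transmit alphabet: each
    entry of x is set, in turn, to the PSK symbol minimizing ||s - H x||."""
    alphabet = np.exp(2j * np.pi * np.arange(M) / M)
    x = np.ones(H.shape[1], dtype=complex)       # initial all-ones PSK vector
    for _ in range(sweeps):
        for i in range(len(x)):
            costs = []
            for sym in alphabet:
                x[i] = sym
                costs.append(np.linalg.norm(s - H @ x))
            x[i] = alphabet[int(np.argmin(costs))]
    return x

rng = np.random.default_rng(2)
H = (rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8))) / np.sqrt(2)
s = np.exp(1j * rng.uniform(0, 2 * np.pi, 4))    # desired receive-side symbols
x = greedy_psk_precoder(H, s)
print(np.linalg.norm(s - H @ x))
```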
In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
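The smoothing underlying DRS replaces $f$ by $f_\gamma(x) = \mathbb{E}[f(x + \gamma Z)]$ with Gaussian $Z$; the standard Monte Carlo gradient estimator for the smoothed function, shown below on a non-smooth toy objective, is a generic sketch rather than the full decentralized algorithm:

```python
import numpy as np

def smoothed_grad(f, x, gamma=0.1, samples=256, rng=None):
    """Monte Carlo gradient of the Gaussian smoothing f_gamma(x) = E[f(x + gamma Z)]:
    grad f_gamma(x) = E[ Z (f(x + gamma Z) - f(x)) ] / gamma (f(x) is a control variate)."""
    rng = rng or np.random.default_rng(0)
    Z = rng.standard_normal((samples, len(x)))
    fx = f(x)
    vals = np.array([f(x + gamma * z) for z in Z])
    return (Z * (vals - fx)[:, None]).mean(axis=0) / gamma

# Non-smooth objective f(x) = ||x||_1, minimized via its smoothed surrogate.
f = lambda x: np.abs(x).sum()
x = np.array([1.0, -2.0, 0.5])
for t in range(1, 101):
    x -= (0.5 / np.sqrt(t)) * smoothed_grad(f, x, rng=np.random.default_rng(t))
print(f(x))
```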
Amortized inference has led to efficient approximate inference for large datasets. The quality of posterior inference is largely determined by two factors: a) the ability of the variational distribution to model the true posterior and b) the capacity of the recognition network to generalize inference over all datapoints. We analyze approximate inference in variational autoencoders in terms of these factors. We find that suboptimal inference is often due to amortizing inference rather than the limited complexity of the approximating distribution. We show that this is due in part to the generator learning to accommodate the choice of approximation. Furthermore, we show that the parameters used to increase the expressiveness of the approximation play a role in generalizing inference rather than simply increasing the complexity of the approximation.
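The amortization gap discussed here can be measured directly by comparing the ELBO at the encoder's output with the ELBO after per-datapoint optimization of the variational parameters. A self-contained sketch on a linear Gaussian toy model (all parameters, including the deliberately suboptimal "encoder", are hypothetical) is:

```python
import numpy as np

w, sigma = 2.0, 0.5                         # decoder p(x|z) = N(w z, sigma^2), prior N(0,1)

def elbo(x, mu, log_s):
    """Analytic ELBO for q(z|x) = N(mu, exp(log_s)^2) in the linear Gaussian model."""
    s2 = np.exp(2 * log_s)
    recon = -0.5 * np.log(2 * np.pi * sigma**2) \
            - ((x - w * mu)**2 + w**2 * s2) / (2 * sigma**2)
    kl = 0.5 * (s2 + mu**2 - 1.0 - 2 * log_s)
    return recon - kl

def refine(x, mu, log_s, steps=500, lr=0.01):
    """Per-datapoint gradient ascent on the ELBO, removing the amortization gap."""
    for _ in range(steps):
        s2 = np.exp(2 * log_s)
        g_mu = w * (x - w * mu) / sigma**2 - mu
        g_ls = -w**2 * s2 / sigma**2 + 1.0 - s2      # d ELBO / d log_s
        mu, log_s = mu + lr * g_mu, log_s + lr * g_ls
    return mu, log_s

x = 1.7
mu_a, log_s_a = 0.3 * x, 0.0                # a (suboptimal) amortized encoder output
print("amortized ELBO:", elbo(x, mu_a, log_s_a))
print("refined ELBO:  ", elbo(x, *refine(x, mu_a, log_s_a)))
```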