
While constraints arise naturally in many physical models, their treatment in mathematical and numerical models varies widely, depending on the nature of the constraint and the availability of simulation tools to enforce it. In this paper, we consider the solution of discretized PDE models that have a natural constraint on the positivity (or non-negativity) of the solution. While discretizations of such models often offer analogous positivity properties for their exact solutions, the use of approximate solution algorithms (and the unavoidable effects of floating-point arithmetic) often destroys any guarantee that the computed approximate solution satisfies the (discretized form of the) physical constraints, unless the discrete model is solved to much higher precision than discretization error would dictate. Here, we introduce a class of iterative solution algorithms, based on the unigrid variant of multigrid methods, in which such positivity constraints can be preserved throughout the approximate solution process. Numerical results for one- and two-dimensional model problems show both the effectiveness of the approach and the trade-off required to ensure positivity of approximate solutions throughout the solution process.
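
A minimal sketch of the underlying idea, assuming a one-dimensional Dirichlet Poisson matrix A (illustrative Python, not the authors' implementation): unigrid sweeps over coarse-grid hat functions represented on the fine grid, and the usual one-dimensional line-search step is clipped so that the iterate stays componentwise nonnegative. The clipping is where the trade-off mentioned above enters, since limited steps can slow convergence.

import numpy as np

def unigrid_positive(A, f, u, levels, tol=1e-10, max_sweeps=50):
    """Unigrid sweeps with a positivity-preserving step limiter (illustrative only)."""
    n = len(f)
    directions = []
    for ell in range(levels):                    # ell = 0 is the finest level
        stride = 2 ** ell
        for center in range(stride, n, stride):  # coarse nodes on level ell
            d = np.zeros(n)
            for j in range(max(center - stride, 0), min(center + stride + 1, n)):
                d[j] = max(0.0, 1.0 - abs(j - center) / stride)  # hat function
            directions.append(d)
    for _ in range(max_sweeps):
        r = f - A @ u
        if np.linalg.norm(r) < tol:
            break
        for d in directions:
            Ad = A @ d
            alpha = (r @ d) / (Ad @ d)           # unconstrained 1-D minimizer
            pos, neg = d > 0, d < 0              # clip so that u + alpha*d >= 0
            lo = (-u[pos] / d[pos]).max() if pos.any() else -np.inf
            hi = (-u[neg] / d[neg]).min() if neg.any() else np.inf
            alpha = min(max(alpha, lo), hi)
            u = u + alpha * d
            r = r - alpha * Ad
    return u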

Related content

Neural point estimators are neural networks that map data to parameter point estimates. They are fast, likelihood-free and, due to their amortised nature, amenable to fast bootstrap-based uncertainty quantification. In this paper, we aim to raise awareness among statisticians of this relatively new inferential tool, and to facilitate its adoption by providing user-friendly open-source software. We also give attention to the ubiquitous problem of making inference from replicated data, which we address in the neural setting using permutation-invariant neural networks. Through extensive simulation studies we show that these neural point estimators can quickly and optimally (in a Bayes sense) estimate parameters in weakly-identified and highly-parameterised models with relative ease. We demonstrate their applicability through an analysis of extreme sea-surface temperature in the Red Sea where, after training, we obtain parameter estimates and bootstrap-based confidence intervals from hundreds of spatial fields in a fraction of a second.
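
As a rough illustration of the permutation-invariant architecture mentioned above, here is a DeepSets-style sketch in PyTorch; the class name, layer sizes, and pooling choice are illustrative assumptions, not the authors' software. Each of the m replicates is encoded by a shared network psi, the encodings are mean-pooled (which makes the output invariant to the ordering of replicates), and phi maps the pooled summary to a parameter estimate.

import torch
import torch.nn as nn

class PermutationInvariantEstimator(nn.Module):
    def __init__(self, data_dim, summary_dim, param_dim):
        super().__init__()
        self.psi = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(),
                                 nn.Linear(64, summary_dim), nn.ReLU())
        self.phi = nn.Sequential(nn.Linear(summary_dim, 64), nn.ReLU(),
                                 nn.Linear(64, param_dim))

    def forward(self, x):                 # x: (batch, m_replicates, data_dim)
        pooled = self.psi(x).mean(dim=1)  # pooling over replicates => invariance
        return self.phi(pooled)           # one point estimate per data set

Training such an estimator typically minimizes an empirical Bayes risk, e.g. mean-squared error over (parameter, data) pairs simulated from the prior and the model, which is what makes the approach likelihood-free and amortised.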

The vertex cover problem is a fundamental and widely studied combinatorial optimization problem. It is known that its standard linear programming relaxation is integral for bipartite graphs and half-integral for general graphs. As a consequence, the natural rounding algorithm based on this relaxation computes an optimal solution for bipartite graphs and a $2$-approximation for general graphs. This raises the question of whether one can interpolate the rounding curve of the standard linear programming relaxation in a beyond-worst-case manner, depending on how close the graph is to being bipartite. In this paper, we consider a simple rounding algorithm that exploits the knowledge of an induced bipartite subgraph to attain improved approximation ratios. Equivalently, we suppose that we work with a pair $(G, S)$ consisting of a graph $G$ and an odd cycle transversal $S$. If $S$ is a stable set, we prove a tight approximation ratio of $1 + 1/\rho$, where $2\rho -1$ denotes the odd girth (i.e., the length of the shortest odd cycle) of the contracted graph $\tilde{G} := G /S$ and satisfies $\rho \in [2,\infty]$. If $S$ is an arbitrary set, we prove a tight approximation ratio of $\left(1+1/\rho \right) (1 - \alpha) + 2 \alpha$, where $\alpha \in [0,1]$ is a natural parameter measuring the quality of the set $S$. The technique used to prove tight improved approximation ratios relies on a structural analysis of the contracted graph $\tilde{G}$. Tightness is shown by constructing classes of weight functions matching the obtained upper bounds. As a byproduct of the structural analysis, we obtain improved tight bounds on the integrality gap and the fractional chromatic number of 3-colorable graphs. We also discuss algorithmic applications in order to find good odd cycle transversals and show optimality of the analysis.
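
For reference, the standard linear programming relaxation discussed above reads, in weighted form,
\[
\min \sum_{v\in V} w_v x_v \quad\text{s.t.}\quad x_u + x_v \ge 1 \;\;\forall \{u,v\}\in E, \qquad 0 \le x_v \le 1 \;\;\forall v\in V,
\]
whose extreme points are half-integral ($x_v\in\{0,\tfrac12,1\}$). The natural rounding algorithm selects every vertex with $x_v\ge 1/2$, which yields the $2$-approximation for general graphs and an optimal solution for bipartite graphs.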

A nonlinear optimization method is proposed for the solution of inverse medium problems with spatially varying properties. To avoid the prohibitively large number of unknown control variables resulting from standard grid-based representations, the misfit is instead minimized in a small subspace spanned by the first few eigenfunctions of a judicious elliptic operator, which itself depends on the previous iteration. By repeatedly adapting both the dimension and the basis of the search space, regularization is inherently incorporated at each iteration without the need for extra Tikhonov penalization. Convergence is proved under an angle condition, which is incorporated into the resulting \emph{Adaptive Spectral Inversion} (ASI) algorithm. The ASI approach compares favorably to standard grid-based inversion using $L^2$-Tikhonov regularization when applied to an elliptic inverse problem. The improved accuracy resulting from the newly included angle condition is further demonstrated via numerical experiments from time-dependent inverse scattering problems.
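
Schematically (notation illustrative, not taken from the paper), each ASI iteration restricts the grid-based unknown to a low-dimensional spectral subspace:
\[
u^{(k)} = \sum_{j=1}^{K_k} \beta_j^{(k)} \phi_j^{(k)}, \qquad
\beta^{(k)} = \operatorname*{arg\,min}_{\beta\in\mathbb{R}^{K_k}} \; \tfrac12 \Big\| F\Big(\sum_{j=1}^{K_k}\beta_j\,\phi_j^{(k)}\Big) - y^{\delta}\Big\|^2,
\]
where $\phi_1^{(k)},\dots,\phi_{K_k}^{(k)}$ are the first few eigenfunctions of an elliptic operator built from the previous iterate, $F$ is the forward map, and $y^{\delta}$ the measured data. Both the dimension $K_k$ and the basis are adapted from one iteration to the next, which provides the implicit regularization referred to above.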

The proximal Galerkin finite element method is a high-order, nonlinear numerical method that preserves the geometric and algebraic structure of bound constraints in infinite-dimensional function spaces. This paper introduces the proximal Galerkin method and applies it to solve free-boundary problems, enforce discrete maximum principles, and develop scalable, mesh-independent algorithms for optimal design. The paper begins with a derivation of the latent variable proximal point (LVPP) method: an unconditionally stable alternative to the interior point method. LVPP is an infinite-dimensional optimization algorithm that may be viewed as having an adaptive (Bayesian) barrier function that is updated with a new informative prior at each (outer loop) optimization iteration. One of the main benefits of this algorithm is witnessed when analyzing the classical obstacle problem. Therein, we find that the original variational inequality can be replaced by a sequence of semilinear partial differential equations (PDEs) that are readily discretized and solved with, e.g., high-order finite elements. Throughout this work, we arrive at several unexpected contributions that may be of independent interest. These include (1) a semilinear PDE we refer to as the entropic Poisson equation; (2) an algebraic/geometric connection between high-order positivity-preserving discretizations and infinite-dimensional Lie groups; and (3) a gradient-based, bound-preserving algorithm for two-field density-based topology optimization. The complete latent variable proximal Galerkin methodology combines ideas from nonlinear programming, functional analysis, tropical algebra, and differential geometry and can potentially lead to new synergies among these areas as well as within variational and numerical analysis.
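
As a schematic of the kind of iteration LVPP builds on (a generic Bregman proximal-point step for a lower-bound constraint $u\ge\varphi$; this is not the paper's exact formulation):
\[
u_k = \operatorname*{arg\,min}_{u \ge \varphi}\; \Big\{ J(u) + \tfrac{1}{\alpha_k}\, D(u, u_{k-1}) \Big\}, \qquad
D(u,w) = R(u) - R(w) - \langle R'(w),\, u - w\rangle,
\]
where $R$ is an entropy adapted to the constraint, e.g. $R(u)=\int (u-\varphi)\ln(u-\varphi)$. Writing the first-order optimality condition in terms of a latent variable $\psi$ with $u = \varphi + e^{\psi}$ removes the inequality constraint and leads, for each $k$, to a semilinear PDE in which the previous latent variable $\psi_{k-1}$ plays the role of the informative prior mentioned above.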

We present an intimate connection among the following fields: (a) distributed local algorithms, coming from the area of computer science; (b) finitary factors of iid processes, coming from the area of analysis of randomized processes; (c) descriptive combinatorics, coming from the area of combinatorics and measure theory. In particular, we study locally checkable labellings in grid graphs from all three perspectives. Most of our results are for perspective (b), where we prove time hierarchy theorems akin to those known in field (a) [Chang, Pettie FOCS 2017]. This approach, which borrows techniques from fields (a) and (c), implies a number of results about possible complexities of finitary factor solutions. Among others, it answers three open questions of [Holroyd et al. Annals of Prob. 2017] as well as the more general question of [Brandt et al. PODC 2017], who asked for a formal connection between fields (a) and (b). In general, we hope that our treatment will help to view all three perspectives as part of a common theory of locality, in which we follow the insightful paper of [Bernshteyn 2020+].

Subjects in clinical studies that investigate paired body parts can carry a disease on either both sides (bilateral) or a single side (unilateral) of the organs. Data in such studies may consist of both bilateral and unilateral records. However, the correlation between the paired organs is often ignored, which may lead to biased interpretations. Recent literature has taken the correlation into account. For example, Ma and Wang (2021) proposed three asymptotic procedures for testing the homogeneity of proportions of multiple groups using combined bilateral and unilateral data and recommended the score test. It is important to note, however, that the asymptotic behavior is not guaranteed when the sample size is small, resulting in uncontrolled type I error rates. In this paper, we extend their work by considering exact approaches and compare these methods with the score test proposed by Ma and Wang (2021) in terms of type I error and statistical power. Additionally, two real-world examples are used to illustrate the application of the proposed approaches.
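
A toy illustration of the exact-calibration idea, for two independent binomial samples only (this is not the combined bilateral/unilateral model of the paper): the p-value of a score-type statistic is obtained by enumerating all possible outcomes under the null at the estimated common proportion, rather than by appealing to chi-square asymptotics, which is the kind of calibration that remains reliable at small sample sizes where the asymptotics break down.

from itertools import product
from scipy.stats import binom

def exact_score_pvalue(x1, n1, x2, n2):
    """Exact-style p-value for H0: p1 = p2 in a toy two-sample binomial setting."""
    def score_stat(a, b):
        p_hat = (a + b) / (n1 + n2)            # pooled estimate under H0
        if p_hat in (0.0, 1.0):
            return 0.0
        var = p_hat * (1 - p_hat) * (1 / n1 + 1 / n2)
        return (a / n1 - b / n2) ** 2 / var    # score (Pearson-type) statistic
    t_obs = score_stat(x1, x2)
    p0 = (x1 + x2) / (n1 + n2)                 # estimated null proportion
    pval = 0.0
    for a, b in product(range(n1 + 1), range(n2 + 1)):
        if score_stat(a, b) >= t_obs - 1e-12:  # sum null probabilities of outcomes
            pval += binom.pmf(a, n1, p0) * binom.pmf(b, n2, p0)  # at least as extreme
    return pval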

Rubik's Cube (RC) is a well-known and computationally challenging puzzle that has motivated AI researchers to explore efficient alternative representations and problem-solving methods. The ideal situation for planning is that a problem expressed in a standard notation be solved optimally and efficiently using a general-purpose solver and heuristics. The fastest solver for RC today is DeepCubeA, which uses a custom representation; another approach uses the Scorpion planner with a State-Action-Space+ (SAS+) representation. In this paper, we present the first RC representation in the popular PDDL language, so that the domain becomes more accessible to PDDL planners, competitions, and knowledge engineering tools, and is more human-readable. We then bridge across the existing approaches and compare their performance. We find that, in one comparable experiment, DeepCubeA solves all problems of varying complexity, albeit only 18\% of its plans are optimal. For the same problem set, Scorpion with the SAS+ representation and pattern database heuristics solves 61.50\% of the problems, while FastDownward with the PDDL representation and the FF heuristic solves 56.50\% of the problems, out of which all the plans generated were optimal. Our study provides valuable insights into the trade-offs between representational choice and plan optimality that can help researchers design future strategies for challenging domains, combining general-purpose solving methods (planning, reinforcement learning), heuristics, and representations (standard or custom).

We introduce and study a scale of operator classes on the annulus that is motivated by the $\mathcal{C}_{\rho}$ classes of $\rho$-contractions of Nagy and Foia\c{s}. In particular, our classes are defined in terms of the contractivity of the double-layer potential integral operator over the annulus. We prove that if, in addition, complete contractivity is assumed, then one obtains a complete characterization involving certain variants of the $\mathcal{C}_{\rho}$ classes. Recent work of Crouzeix-Greenbaum and Schwenninger-de Vries allows us to also obtain relevant K-spectral estimates, generalizing existing results from the literature on the annulus. Finally, we exhibit a special case where these estimates can be significantly strengthened.
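
For context, the class referenced above has the standard Nagy-Foia\c{s} definition: an operator $T$ on a Hilbert space $H$ belongs to $\mathcal{C}_{\rho}$ if it admits a unitary $\rho$-dilation, that is,
\[
T^n = \rho\, P_H U^n\big|_H \qquad \text{for all } n\ge 1,
\]
for some unitary $U$ on a Hilbert space $K\supseteq H$, where $P_H$ denotes the orthogonal projection of $K$ onto $H$. In particular, $\mathcal{C}_1$ consists of the contractions and $\mathcal{C}_2$ of the operators with numerical radius at most one.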

We investigate the problem of finding anomalies in a $d$-dimensional random field via multiscale scanning in the presence of nuisance parameters. This covers the common situation that either the baseline level or additional parameters such as the variance are unknown and have to be estimated from the data. We argue that state-of-the-art approaches to determine asymptotically correct critical values for the multiscale scanning statistic will in general fail when such parameters are naively replaced by plug-in estimators. In contrast, we suggest estimating the nuisance parameters on the largest scale and using the remaining scales for multiscale scanning. We prove a uniform invariance principle for the resulting adjusted multiscale statistic (AMS), which is widely applicable and provides a computationally feasible way to simulate asymptotically correct critical values. We illustrate the implications of our theoretical results in a simulation study and in a real data example from super-resolution STED microscopy. This allows us to identify interesting regions inside a specimen in a pre-scan with controlled family-wise error rate.
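
Schematically (notation illustrative, not the paper's), a calibrated multiscale scan statistic takes the form
\[
T \;=\; \max_{h\in\mathcal{H}}\; \max_{t} \left( \frac{\big| \sum_{i:\, x_i \in B_h(t)} \big(Y_i - \hat\mu\big) \big|}{\hat\sigma\, \sqrt{|B_h(t)|}} \;-\; \mathrm{pen}(h) \right),
\]
where $B_h(t)$ is a window of scale $h$ at position $t$ and $\mathrm{pen}(h)$ is a scale-dependent penalty. The point made above is that plugging estimators $\hat\mu,\hat\sigma$ computed from the same data into every scale invalidates the usual limit theory; the adjusted statistic instead reserves the largest scale for estimating the nuisance parameters and scans only over the remaining scales.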

Let $\hat\Sigma=\frac{1}{n}\sum_{i=1}^n X_i\otimes X_i$ denote the sample covariance operator of centered i.i.d. observations $X_1,\dots,X_n$ in a real separable Hilbert space, and let $\Sigma=\mathbf{E}(X_1\otimes X_1)$. The focus of this paper is to understand how well the bootstrap can approximate the distribution of the operator norm error $\sqrt n\|\hat\Sigma-\Sigma\|_{\text{op}}$, in settings where the eigenvalues of $\Sigma$ decay as $\lambda_j(\Sigma)\asymp j^{-2\beta}$ for some fixed parameter $\beta>1/2$. Our main result shows that the bootstrap can approximate the distribution of $\sqrt n\|\hat\Sigma-\Sigma\|_{\text{op}}$ at a rate of order $n^{-\frac{\beta-1/2}{2\beta+4+\epsilon}}$ with respect to the Kolmogorov metric, for any fixed $\epsilon>0$. In particular, this shows that the bootstrap can achieve near $n^{-1/2}$ rates in the regime of large $\beta$, which substantially improves on previous near $n^{-1/6}$ rates in the same regime. In addition to obtaining faster rates, our analysis leverages a fundamentally different perspective based on coordinate-free techniques. Moreover, our result holds in greater generality, and we propose a new model that is compatible with both elliptical and Mar\v{c}enko-Pastur models in high-dimensional Euclidean spaces, which may be of independent interest.
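
A hedged sketch of the standard nonparametric bootstrap for this statistic (not necessarily the paper's exact scheme): resample the observations with replacement, recompute the covariance, and recenter at the sample covariance.

import numpy as np

def bootstrap_opnorm(X, n_boot=1000, rng=None):
    """X: (n, d) array of centered observations; returns bootstrap draws of the statistic."""
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    sigma_hat = X.T @ X / n                  # sample covariance (data assumed centered)
    draws = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)     # resample rows with replacement
        Xb = X[idx]
        sigma_star = Xb.T @ Xb / n
        # operator norm = largest singular value of the (symmetric) difference
        draws[b] = np.sqrt(n) * np.linalg.norm(sigma_star - sigma_hat, ord=2)
    return draws

Quantiles of the returned draws then serve as approximations to those of $\sqrt n\|\hat\Sigma-\Sigma\|_{\text{op}}$; the paper quantifies how quickly this approximation improves as the eigenvalue decay parameter $\beta$ grows.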
