
We introduce several spatially adaptive model order reduction approaches tailored to non-coercive elliptic boundary value problems, specifically parametric-in-frequency Helmholtz problems. The offline information is computed by means of adaptive finite elements, so that each snapshot lives on a different discrete space that resolves the local singularities of the solution and is adjusted to the considered frequency value. A rational surrogate is then assembled adopting either a least-squares or an interpolatory approach, yielding the standard rational interpolation method (SRI), a vector- or function-valued version of it ($\mathcal{V}$-SRI), and the minimal rational interpolation method (MRI). In the context of building an approximation for linear or quadratic functionals of the Helmholtz solution, we perform several numerical experiments to compare the proposed methodologies. Our simulations show that, for interior resonant problems (whose singularities are encoded by poles on the real axis), the spatially adaptive $\mathcal{V}$-SRI and MRI work comparably well. In contrast, when dealing with exterior scattering problems, whose frequency response is mostly smooth, the $\mathcal{V}$-SRI method seems to be the best-performing one.
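
As a rough illustration of the least-squares flavor of such surrogates, the following sketch (ours, not the paper's implementation) fits a rational function $r(z) = p(z)/q(z)$ to scalar samples of a frequency response, e.g. a linear functional of Helmholtz snapshots; the linearization $q(z_j) f_j - p(z_j) = 0$ and the normalization $q_0 = 1$ are our own simplifying choices.

```python
# Hedged sketch: linearized least-squares rational fit of scalar
# frequency-response samples f_j = f(z_j). Not the paper's SRI code.
import numpy as np

def rational_ls_fit(z, f, num_deg, den_deg):
    """Fit f(z) ~ p(z)/q(z) with deg p = num_deg, deg q = den_deg >= 1."""
    Vp = np.vander(z, num_deg + 1, increasing=True)  # monomial basis for p
    Vq = np.vander(z, den_deg + 1, increasing=True)  # monomial basis for q
    # Linearize q(z_j) f_j - p(z_j) = 0 with q_0 := 1; unknowns are the
    # numerator coefficients and the remaining denominator coefficients.
    A = np.hstack([Vp, -(f[:, None] * Vq[:, 1:])])
    coef, *_ = np.linalg.lstsq(A, f, rcond=None)
    p, q = coef[:num_deg + 1], np.concatenate([[1.0], coef[num_deg + 1:]])

    def r(zz):
        zz = np.atleast_1d(zz)
        return (np.vander(zz, num_deg + 1, increasing=True) @ p) / \
               (np.vander(zz, den_deg + 1, increasing=True) @ q)
    return r, np.roots(q[::-1])  # surrogate and its poles (roots of q)

# Example: a response with one real pole, as in interior resonance.
# z = np.linspace(0.0, 1.0, 30); f = 1.0 / (z - 2.0)
# r, poles = rational_ls_fit(z, f, 2, 2)   # poles should approach 2
```

For interior resonant problems, the recovered poles (the roots of $q$) would approximate the real resonant frequencies mentioned above.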

Related content

In this work, isogeometric mortaring is used for the simulation of a six-pole permanent magnet synchronous machine. Isogeometric mortaring is especially well suited for the efficient computation of rotating electric machines, as it allows for an exact geometry representation for arbitrary rotation angles without the need for remeshing. The appropriate B-spline spaces needed for the solution of Maxwell's equations and the corresponding mortar spaces are introduced. Unlike in classical finite element methods, their construction is straightforward in the isogeometric case. The torque in the machine is computed using two different methods: Arkkio's method and a method based on the Lagrange multipliers from the mortaring.
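
For reference, Arkkio's method evaluates the torque as an air-gap average of the Maxwell stress; in common notation (our symbols, which need not match the paper's), $$T \;=\; \frac{L_z}{\mu_0\,(r_o - r_i)} \int_{S_{\mathrm{ag}}} r\, B_r B_\varphi \,\mathrm{d}S,$$ where $S_{\mathrm{ag}}$ is the air-gap annulus between the radii $r_i$ and $r_o$, $L_z$ is the axial length, and $B_r$, $B_\varphi$ are the radial and azimuthal flux-density components.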

We present a novel and readily automatable approach to formal verification of programs with underspecified semantics, i.e., a language semantics that leaves open the order of certain evaluations. First, we reduce this problem to non-determinism of distributed systems, automatically extracting a distributed Active Object model from underspecified, sequential C code. This translation process provides a fully formal semantics for the considered C subset. In the extracted model, every non-deterministic choice corresponds to one possible evaluation order. This step also automatically translates specifications in the ANSI/ISO C Specification Language (ACSL) into method contracts and object invariants for Active Objects. We then perform verification on the specified Active Object model. To this end, we have implemented the theorem prover Crowbar, based on the Behavioral Program Logic (BPL), which verifies the extracted model with respect to the translated specification and thereby ensures the original property of the C code for all possible evaluation orders. By relying on model extraction, we can use standard tools without designing a new, complex program logic to deal with underspecification. Our case study is highly underspecified and cannot be verified with existing tools for C.
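
To make the notion of underspecification concrete: C leaves the evaluation order of, e.g., the two operands of `+` unspecified. The sketch below (our illustration, unrelated to the paper's tool chain) brute-forces both evaluation orders of two side-effecting calls; a verifier must establish the desired property for every such order.

```python
# Hedged illustration: in C, the operand order of `f() + g()` is
# unspecified. We mimic "verify for all evaluation orders" by enumerating
# every order and checking the property of interest on each one.
from itertools import permutations

def run(order):
    log = []
    def f(): log.append("f"); return 1
    def g(): log.append("g"); return 2
    calls = {"f": f, "g": g}
    total = sum(calls[name]() for name in order)  # one admissible order
    return total, log

for order in permutations(["f", "g"]):
    total, log = run(order)
    assert total == 3          # the value is order-independent here...
    print(order, total, log)   # ...but the observable side effects are not
```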

In this work, we examine sampling problems with non-smooth potentials. We propose a novel Markov chain Monte Carlo algorithm for sampling from non-smooth potentials. We provide a non-asymptotic analysis of our algorithm and establish a polynomial-time complexity $\tilde {\cal O}(d\varepsilon^{-1})$ to obtain a total variation distance of $\varepsilon$ to the target density, which improves on most existing results under the same assumptions. Our method is based on the proximal bundle method and an alternating sampling framework. This framework requires the so-called restricted Gaussian oracle, which can be viewed as a sampling counterpart of the proximal mapping in convex optimization. One key contribution of this work is a fast algorithm that realizes the restricted Gaussian oracle for any convex non-smooth potential with a bounded Lipschitz constant.
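
A minimal sketch of the alternating sampling framework (ours, with simplifications): the auxiliary variable is drawn exactly, while the restricted Gaussian oracle, whose fast exact realization via the proximal bundle method is the paper's contribution, is replaced here by a few Metropolis steps.

```python
# Sketch of the alternating sampling framework for a target ~ exp(-f(x)),
# f convex and possibly non-smooth. Step (i): y ~ N(x, eta*I), exact.
# Step (ii): x ~ pi(x|y) ~ exp(-f(x) - |x - y|^2 / (2*eta)), the
# "restricted Gaussian oracle", here only approximated by Metropolis.
import numpy as np

rng = np.random.default_rng(0)

def approx_rgo(f, y, eta, n_mh=20):
    x = y.copy()
    logp = lambda u: -f(u) - np.sum((u - y) ** 2) / (2 * eta)
    for _ in range(n_mh):
        prop = x + np.sqrt(eta) * rng.standard_normal(x.shape)
        if np.log(rng.uniform()) < logp(prop) - logp(x):
            x = prop
    return x

def alternating_sampler(f, x0, eta, n_iter):
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        y = x + np.sqrt(eta) * rng.standard_normal(x.shape)  # exact Gaussian step
        x = approx_rgo(f, y, eta)                            # approximate RGO
        yield x.copy()

# Example: sample from the non-smooth density ~ exp(-||x||_1) in 2D.
chain = list(alternating_sampler(lambda u: np.sum(np.abs(u)), np.zeros(2), 0.5, 1000))
```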

Let $P$ be a linear differential operator over $\mathcal{D} \subset \mathbb{R}^d$ and $U = (U_x)_{x \in \mathcal{D}}$ a second-order stochastic process. In the first part of this article, we prove a new necessary and sufficient condition for all the trajectories of $U$ to satisfy the partial differential equation (PDE) $P(U) = 0$. This condition is formulated in terms of the covariance kernel of $U$. When compared to previous similar results, the novelty lies in that the equality $P(U) = 0$ is understood in the \textit{sense of distributions}, which is a relevant framework for PDEs. This theorem provides valuable insights for the second part of this article, devoted to performing "physically informed" machine learning for the homogeneous three-dimensional free-space wave equation. We perform Gaussian process regression (GPR) on pointwise observations of a solution of this PDE. To do so, we propagate Gaussian process (GP) priors over its initial conditions through the wave equation. We obtain explicit formulas for the covariance kernel of the propagated GP, which can then be used for GPR. We then explore the particular cases of radial symmetry and a point source. For the former, we derive convolution-free GPR formulas; for the latter, we show a direct link between GPR and the classical triangulation method for point source localization used in GPS systems. Additionally, this Bayesian framework provides a new answer to the ill-posed inverse problem of reconstructing initial conditions for the wave equation with a limited number of sensors, and simultaneously enables the inference of physical parameters from these data. Finally, we illustrate this physically informed GPR on a number of practical examples.
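
Once a covariance kernel is in hand, the regression step itself is standard. The sketch below (ours) shows it with a placeholder RBF kernel; in the paper's setting, `k` would be the closed-form kernel obtained by propagating the prior through the wave equation.

```python
# Hedged sketch: vanilla GP regression for a user-supplied kernel k.
# The kernel here is a placeholder RBF, not the wave-propagated kernel.
import numpy as np

def gpr_posterior(X, y, Xs, k, noise=1e-6):
    K = k(X, X) + noise * np.eye(len(X))       # observation Gram matrix
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = k(Xs, X)                              # cross-covariances
    mean = Ks @ alpha                          # posterior mean at Xs
    v = np.linalg.solve(L, Ks.T)
    cov = k(Xs, Xs) - v.T @ v                  # posterior covariance at Xs
    return mean, cov

rbf = lambda A, B: np.exp(-0.5 * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))
# X = np.random.rand(30, 3); y = np.sin(X @ np.ones(3))
# mean, cov = gpr_posterior(X, y, X, rbf)
```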

We propose an infinity Laplacian method to address the problem of interpolation on an unstructured point cloud. In doing so, we seek the labeling function with the smallest infinity norm of its gradient. By introducing a non-local gradient, the continuous functional is approximated with a discrete form. The discrete problem is convex and can be solved efficiently with the split Bregman method. Experimental results indicate that our approach provides consistent interpolations, and the labeling functions obtained are globally smooth, even at extremely low sampling rates. More importantly, convergence of the discrete minimizer to the optimal continuous labeling function is proved using $\Gamma$-convergence and compactness, which guarantees the reliability of the infinity Laplacian method in various potential applications.
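
For intuition, a close and simpler relative of this problem is the graph infinity-Laplacian interpolant, whose value at each unlabeled node is the midpoint of the extreme neighboring values. The sketch below (ours; the paper instead minimizes a non-local gradient functional via split Bregman) solves that simpler problem by fixed-point iteration.

```python
# Hedged sketch: unweighted graph infinity-harmonic interpolation. At every
# unlabeled node i the solution satisfies u_i = (max_{j~i} u_j + min_{j~i} u_j)/2,
# which we reach by sweeping until (approximate) convergence.
import numpy as np

def inf_laplacian_interpolate(neighbors, labels, n_iter=500):
    """neighbors: list of integer index lists (graph adjacency);
    labels: dict {node: value} for the labeled nodes."""
    u = np.zeros(len(neighbors))
    for i, v in labels.items():
        u[i] = v
    for _ in range(n_iter):
        for i in range(len(neighbors)):
            if i not in labels:
                nb = u[neighbors[i]]
                u[i] = 0.5 * (nb.max() + nb.min())  # midpoint of extremes
    return u
```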

Learning rate schedules are ubiquitously used to speed up and improve optimization. Many different policies have been introduced on an empirical basis, and theoretical analyses have been developed for convex settings. However, in many realistic problems the loss landscape is high-dimensional and non-convex, a case for which results are scarce. In this paper we present a first analytical study of the role of learning rate scheduling in this setting, focusing on Langevin optimization with a learning rate decaying as $\eta(t)=t^{-\beta}$. We begin by considering models where the loss is a Gaussian random function on the $N$-dimensional sphere ($N\rightarrow \infty$), featuring an extensive number of critical points. We find that to speed up optimization without getting stuck in saddles, one must choose a decay rate $\beta<1$, contrary to convex setups where $\beta=1$ is generally optimal. We then add to the problem a signal to be recovered. In this setting, the dynamics decompose into two phases: an \emph{exploration} phase, where the dynamics navigate through rough parts of the landscape, followed by a \emph{convergence} phase, where the signal is detected and the dynamics enter a convex basin. In this case, it is optimal to keep a large learning rate during the exploration phase to escape the non-convex region as quickly as possible, then use the convex criterion $\beta=1$ to converge rapidly to the solution. Finally, we demonstrate that our conclusions hold in a common regression task involving neural networks.
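
A minimal sketch of the dynamics under study (names and defaults are ours): gradient descent plus Gaussian noise, with the step size decaying as $\eta(t) = \eta_0\, t^{-\beta}$.

```python
# Hedged sketch: Langevin optimization with a polynomially decaying
# learning rate. Per the discussion above, beta < 1 keeps enough noise to
# escape saddles in the exploration phase; beta = 1 suits the convex phase.
import numpy as np

rng = np.random.default_rng(1)

def langevin(grad_loss, x0, eta0=0.1, beta=0.5, temperature=1.0, n_steps=10_000):
    x = np.asarray(x0, dtype=float)
    for t in range(1, n_steps + 1):
        eta = eta0 * t ** (-beta)                       # eta(t) = eta0 * t^-beta
        noise = np.sqrt(2.0 * eta * temperature) * rng.standard_normal(x.shape)
        x = x - eta * grad_loss(x) + noise
    return x

# Example: a toy quadratic loss with grad L(x) = x.
# x_final = langevin(lambda x: x, x0=np.ones(10))
```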

Many papers in the field of integer linear programming (ILP, for short) are devoted to problems of the type $\max\{c^\top x \colon A x = b,\, x \in \mathbb{Z}^n_{\geq 0}\}$, where all the entries of $A,b,c$ are integer, parameterized by the number of rows of $A$ and $\|A\|_{\max}$. This class of problems is known under the name of ILP problems in the standard form; the word "bounded" is added if additionally $x \leq u$ for some integer vector $u$. Recently, many new sparsity, proximity, and complexity results were obtained for bounded and unbounded ILP problems in the standard form. In this paper, we consider ILP problems in the canonical form $$\max\{c^\top x \colon b_l \leq A x \leq b_r,\, x \in \mathbb{Z}^n\},$$ where $b_l$ and $b_r$ are integer vectors. We assume that the integer matrix $A$ has rank $n$, $(n + m)$ rows, and $n$ columns, and we parameterize the problem by $m$ and $\Delta(A)$, where $\Delta(A)$ is the maximum of the $n \times n$ sub-determinants of $A$, taken in absolute value. We show that any ILP problem in the standard form can be polynomially reduced to some ILP problem in the canonical form, preserving $m$ and $\Delta(A)$, but the reverse reduction is not always possible. More precisely, we define the class of generalized ILP problems in the standard form, which includes an additional group constraint, and prove its equivalence to ILP problems in the canonical form. We generalize known sparsity, proximity, and complexity bounds for ILP problems in the canonical form. Additionally, in some cases we strengthen previously known results for ILP problems in the canonical form, and in others we give shorter proofs. Finally, we consider the special cases of $m \in \{0,1\}$. In this way, we give specialized sparsity, proximity, and complexity bounds for problems on simplices, Knapsack problems, and Subset-Sum problems.
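
For intuition only (our illustration, not the paper's parameter-preserving reduction), a bounded standard-form problem embeds into the canonical form by stacking the equality constraints with the box constraints: $$\max\{c^\top x \colon A x = b,\ 0 \leq x \leq u,\ x \in \mathbb{Z}^n\} \;=\; \max\Big\{c^\top x \colon \begin{pmatrix} b \\ 0 \end{pmatrix} \leq \begin{pmatrix} A \\ I_n \end{pmatrix} x \leq \begin{pmatrix} b \\ u \end{pmatrix},\ x \in \mathbb{Z}^n\Big\}.$$ The stacked matrix has full column rank $n$ thanks to the identity block, but this naive embedding need not preserve the parameters $m$ and $\Delta(A)$, which is precisely what the reduction via generalized standard-form problems accomplishes.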

We study regression adjustments with additional covariates in randomized experiments under covariate-adaptive randomizations (CARs) when subject compliance is imperfect. We develop a regression-adjusted local average treatment effect (LATE) estimator that is proven to improve efficiency in the estimation of LATEs under CARs. Our adjustments can be parametric in linear and nonlinear forms, nonparametric, or high-dimensional. Even when the adjustments are misspecified, our proposed estimator is still consistent and asymptotically normal, and our inference method still achieves the exact asymptotic size under the null. When the adjustments are correctly specified, our estimator achieves the minimum asymptotic variance. When the adjustments are parametrically misspecified, we construct a new estimator that is weakly more efficient than both linearly and nonlinearly adjusted estimators, as well as the one without any adjustment. Simulation evidence and an empirical application confirm the efficiency gains achieved by regression adjustments relative to both the estimator without adjustment and the standard two-stage least squares estimator.
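
As a toy illustration of the linear case (ours; a simple special case rather than the paper's estimator), one can adjust both the outcome and the treatment-uptake arms of the Wald ratio for covariate imbalance before forming the LATE estimate:

```python
# Hedged sketch: a linearly regression-adjusted Wald/LATE estimator.
# Z: randomized instrument (0/1), D: realized treatment, Y: outcome,
# X: covariates. Each arm mean is recentered by the fitted linear
# effect of that arm's covariate imbalance.
import numpy as np

def adjusted_mean(V, X, Xbar):
    beta, *_ = np.linalg.lstsq(np.c_[np.ones(len(X)), X], V, rcond=None)
    return V.mean() - (X.mean(axis=0) - Xbar) @ beta[1:]

def late_adjusted(Y, D, Z, X):
    Xbar = X.mean(axis=0)
    m = lambda V, z: adjusted_mean(V[Z == z], X[Z == z], Xbar)
    return (m(Y, 1) - m(Y, 0)) / (m(D, 1) - m(D, 0))  # adjusted Wald ratio
```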

In this work, we consider fracture propagation in nearly incompressible and (fully) incompressible materials using a phase-field formulation. We use a mixed form of the elasticity equation to overcome volume locking effects and develop a robust nonlinear and linear solution scheme and a preconditioner for the resulting system. The coupled variational inequality system, which is solved monolithically, consists of three unknowns: displacements, pressure, and phase-field. Nonlinearities due to coupling, constitutive laws, and crack irreversibility are treated with a combined Newton algorithm for the nonlinearities in the partial differential equation and a primal-dual active set strategy for the crack irreversibility constraint. The linear system in each Newton step is solved iteratively with a flexible generalized minimal residual method (GMRES). The key contribution of this work is the development of a problem-specific preconditioner that leverages the saddle-point structure of the displacement and pressure variables. Four numerical examples in pure solids and pressure-driven fractures are conducted on uniformly and locally refined meshes to investigate the robustness of the solver with respect to the Poisson ratio as well as the discretization and regularization parameters.
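
The generic saddle-point idea behind such preconditioners can be sketched as follows (ours, not the paper's specific construction): for $K = \begin{pmatrix} A & B^\top \\ B & 0 \end{pmatrix}$, precondition GMRES with the block-diagonal operator $\mathrm{diag}(A, S)$, where $S$ approximates the Schur complement $B A^{-1} B^\top$; for nearly incompressible elasticity, a scaled pressure mass matrix is a typical choice.

```python
# Hedged sketch: block-diagonal preconditioning of a saddle-point system
# with SciPy. A is the displacement block, S a Schur-complement surrogate
# (e.g., a scaled pressure mass matrix); both are assembled elsewhere.
import numpy as np
import scipy.sparse.linalg as spla

def block_diag_preconditioner(A, S):
    solve_A = spla.factorized(A.tocsc())   # sparse LU of the (1,1) block
    solve_S = spla.factorized(S.tocsc())   # sparse LU of the Schur surrogate
    n, m = A.shape[0], S.shape[0]
    matvec = lambda r: np.concatenate([solve_A(r[:n]), solve_S(r[n:])])
    return spla.LinearOperator((n + m, n + m), matvec=matvec)

# Usage (K, rhs, and the pressure mass matrix M_p come from the discretization):
# x, info = spla.gmres(K, rhs, M=block_diag_preconditioner(A, M_p))
```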

This work presents a region-growing image segmentation approach based on superpixel decomposition. Starting from an initial contour-constrained over-segmentation of the input image, the segmentation is achieved by iteratively merging similar superpixels into regions. This approach raises two key issues: (1) how to compute the similarity between superpixels in order to perform accurate merging, and (2) in which order those superpixels should be merged. To address them, we first introduce a robust adaptive multi-scale superpixel similarity in which region comparisons are made both at the content and the common-border level. Second, we propose a global merging strategy to efficiently guide the region merging process; this strategy uses an adaptive merging criterion to ensure that the best region aggregations are given the highest priority. This allows us to reach a final segmentation into consistent regions with strong boundary adherence. We perform experiments on the BSDS500 image dataset to show the extent to which our method compares favorably with other well-known image segmentation algorithms. The obtained results demonstrate the promising potential of the proposed approach.
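
The global merging strategy can be sketched with a single priority queue over all adjacent region pairs, so that the best aggregation anywhere in the image is always performed first. In the sketch below (ours), the mean-feature distance is a placeholder for the paper's multi-scale, border-aware similarity.

```python
# Hedged sketch: globally ordered region merging with a priority queue and
# lazy invalidation. features: {region: (feature_sum, pixel_count)};
# adjacency: {region: set of neighboring regions}.
import heapq

def merge_regions(features, adjacency, n_target):
    def dist(a, b):  # placeholder similarity: squared mean-feature distance
        (sa, ca), (sb, cb) = features[a], features[b]
        return sum((x / ca - y / cb) ** 2 for x, y in zip(sa, sb))
    ver = {r: 0 for r in features}          # version stamps for invalidation
    heap = [(dist(a, b), ver[a], ver[b], a, b)
            for a in adjacency for b in adjacency[a] if a < b]
    heapq.heapify(heap)
    alive = set(features)
    while len(alive) > n_target and heap:
        _, va, vb, a, b = heapq.heappop(heap)
        if a not in alive or b not in alive or ver[a] != va or ver[b] != vb:
            continue                        # stale pair: skip
        sa, ca = features[a]; sb, cb = features[b]
        features[a] = ([x + y for x, y in zip(sa, sb)], ca + cb)  # pool b into a
        ver[a] += 1
        adjacency[a] = (adjacency[a] | adjacency[b]) - {a, b}
        for c in adjacency[b] - {a}:
            adjacency[c].discard(b)
            adjacency[c].add(a)
        alive.discard(b)
        for c in adjacency[a] & alive:      # re-queue a's updated pairs
            heapq.heappush(heap, (dist(a, c), ver[a], ver[c], a, c))
    return alive
```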
