A method is proposed for evaluation of single and double layer potentials of the Laplace and Helmholtz equations on piecewise smooth manifold boundary elements with constant densities. The method is based on a novel two-term decomposition of the layer potentials, derived by means of differential geometry. The first term is an integral of a differential 2-form which can be reduced to contour integrals using Stokes' theorem, while the second term is related to the element curvature. This decomposition reduces the degree of singularity and the curvature term can be further regularized by a polar coordinate transform. The method can handle singular and nearly singular integrals. Numerical results validating the accuracy of the method are presented for all combinations of single and double layer potentials, for the Laplace and Helmholtz kernels, and for singular and nearly singular integrals.
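The regularizing effect of a polar coordinate transform can be seen on a model weakly singular integral: the Jacobian $r$ of the polar map cancels a $1/r$ kernel singularity, leaving a smooth one-dimensional integrand. A minimal illustration (not the paper's decomposition; the element and quadrature are illustrative):

```python
import math

# Integrate the weakly singular kernel 1/r over the triangle with vertices
# (0,0), (1,0), (0,1), singular at the origin vertex. In polar coordinates
# the area element r dr dtheta cancels 1/r, leaving the smooth 1D integral
# of R(theta) = 1/(cos theta + sin theta) over [0, pi/2].
def singular_integral_polar(n_theta=200):
    h = (math.pi / 2) / n_theta
    total = 0.0
    for k in range(n_theta):
        theta = (k + 0.5) * h  # midpoint rule on a smooth integrand
        total += 1.0 / (math.cos(theta) + math.sin(theta))
    return total * h
```

The exact value here is $\sqrt{2}\,\ln(1+\sqrt{2})$, and the regularized quadrature converges at the usual smooth-integrand rate despite the singular kernel.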

We introduce a lower bounding technique for the min max correlation clustering problem and, based on this technique, a combinatorial 4-approximation algorithm for complete graphs. This improves upon the previous best known approximation guarantees of 5, using a linear program formulation (Kalhan et al., 2019), and 40, for a combinatorial algorithm (Davies et al., 2023). We extend this algorithm by a greedy joining heuristic and show empirically that it improves the state of the art in solution quality and runtime on several benchmark datasets.
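The objective being approximated can be stated concretely: on a complete signed graph, the cost of a clustering is the maximum, over vertices, of the number of incident edges in disagreement. A minimal sketch for evaluating it (names illustrative, not from the paper):

```python
from itertools import combinations

def min_max_disagreements(nodes, positive_edges, clustering):
    """Max over nodes of the number of disagreeing incident edges.

    positive_edges: set of frozensets {u, v} labeled '+'; every other
    pair of distinct nodes is implicitly labeled '-' (complete graph).
    clustering: dict mapping node -> cluster id.
    """
    disagreements = {v: 0 for v in nodes}
    for u, v in combinations(nodes, 2):
        same_cluster = clustering[u] == clustering[v]
        positive = frozenset((u, v)) in positive_edges
        # A '+' edge disagrees if cut; a '-' edge disagrees if joined.
        if positive != same_cluster:
            disagreements[u] += 1
            disagreements[v] += 1
    return max(disagreements.values())
```

This per-vertex max is what distinguishes the min-max variant from the classical (sum-of-disagreements) correlation clustering objective.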

We study least-squares trace regression when the parameter is the sum of an $r$-low-rank and an $s$-sparse matrix and a fraction $\epsilon$ of the labels is corrupted. For subgaussian distributions, we highlight three design properties. The first, termed $\PP$, handles additive decomposition and follows from a product process inequality. The second, termed $\IP$, handles both label contamination and additive decomposition. It follows from Chevet's inequality. The third, termed $\MP$, handles the interaction between the design and feature-dependent noise. It follows from a multiplier process inequality. Jointly, these properties entail the near-optimality of a tractable estimator with respect to the effective dimensions $d_{\eff,r}$ and $d_{\eff,s}$ of the low-rank and sparse components, $\epsilon$, and the failure probability $\delta$. This rate has the form $$ \mathsf{r}(n,d_{\eff,r}) + \mathsf{r}(n,d_{\eff,s}) + \sqrt{(1+\log(1/\delta))/n} + \epsilon\log(1/\epsilon). $$ Here, $\mathsf{r}(n,d_{\eff,r})+\mathsf{r}(n,d_{\eff,s})$ is the optimal uncontaminated rate, independent of $\delta$. Our estimator is adaptive to $(s,r,\epsilon,\delta)$ and, for a fixed absolute constant $c>0$, it attains the mentioned rate with probability $1-\delta$ uniformly over all $\delta\ge\exp(-cn)$. Setting aside matrix decomposition, our analysis also entails optimal bounds for a robust estimator adapted to the noise variance. Finally, we consider robust matrix completion. We highlight a new property of this problem: one can robustly and optimally estimate the incomplete matrix regardless of the \emph{magnitude of the corruption}. Our estimators are based on ``sorted'' versions of Huber's loss. We present simulations matching the theory. In particular, they reveal the superiority of the ``sorted'' Huber loss over the classical Huber loss.
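For reference, the classical Huber loss with threshold $\tau$ interpolates between quadratic and absolute loss, $$ \rho_\tau(u) = \begin{cases} u^2/2, & |u| \le \tau, \\ \tau |u| - \tau^2/2, & |u| > \tau; \end{cases} $$ the ``sorted'' versions used above apply, as we read the abstract, level-dependent thresholds to the order statistics of the residuals (in the spirit of SLOPE-type penalties); the precise construction is given in the paper.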

Stress prediction in porous materials and structures is challenging due to the high computational cost associated with direct numerical simulations. Convolutional Neural Network (CNN) based architectures have recently been proposed as surrogates to approximate and extrapolate the solution of such multiscale simulations. These methodologies are usually limited to 2D problems due to the high computational cost of 3D voxel based CNNs. We propose a novel geometric learning approach based on a Graph Neural Network (GNN) that efficiently deals with three-dimensional problems by performing convolutions over 2D surfaces only. Following our previous developments using pixel-based CNNs, we train the GNN to automatically add local fine-scale stress corrections to an inexpensively computed coarse stress prediction in the porous structure of interest. Our method is Bayesian and generates densities of stress fields, from which credible intervals may be extracted. As a second scientific contribution, we propose to improve the extrapolation ability of our network by deploying a strategy of online physics-based corrections. Specifically, at the inference stage, we condition the posterior predictions of our probabilistic model to satisfy partial equilibrium at the microscale. This is done using an Ensemble Kalman algorithm, to ensure tractability of the Bayesian conditioning operation. We show that this innovative methodology allows us to alleviate the effect of undesirable biases observed in the outputs of the uncorrected GNN, and generally improves the accuracy of the predictions.
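The generic building block behind such Ensemble Kalman conditioning is the stochastic analysis step, which shifts each ensemble member toward the constraint data using sample covariances. A hedged sketch (the operator names `obs_op`, `obs_cov` are illustrative, not the paper's):

```python
import numpy as np

def enkf_update(ensemble, obs, obs_op, obs_cov, rng):
    """One stochastic Ensemble Kalman analysis step.

    ensemble: (m, d) forecast members; obs: (k,); obs_op: (k, d) linear map.
    """
    m = ensemble.shape[0]
    predicted = ensemble @ obs_op.T                        # (m, k)
    state_anom = ensemble - ensemble.mean(axis=0)
    pred_anom = predicted - predicted.mean(axis=0)
    cov_sy = state_anom.T @ pred_anom / (m - 1)            # cross-covariance (d, k)
    cov_yy = pred_anom.T @ pred_anom / (m - 1) + obs_cov   # innovation covariance (k, k)
    gain = np.linalg.solve(cov_yy, cov_sy.T).T             # Kalman gain (d, k)
    # Perturbed observations keep the posterior ensemble spread consistent.
    perturbed = obs + rng.multivariate_normal(np.zeros(obs.size), obs_cov, size=m)
    return ensemble + (perturbed - predicted) @ gain.T
```

The appeal in this setting is that only ensemble statistics are needed, so the conditioning stays tractable even when the predictive density has no closed form.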

The paper generalizes Lazarus Fuchs' theorem on the solutions of complex ordinary linear differential equations with regular singularities to the case of ground fields of arbitrary characteristic, giving a precise description of the shape of each solution. This completes partial investigations started by Taira Honda and Bernard Dwork. The main features are the introduction of a differential ring $\mathcal{R}$ in infinitely many variables mimicking the role of the (complex) iterated logarithms, and the proof that adding these "logarithms" already provides sufficiently many primitives so as to solve any differential equation with regular singularity in $\mathcal{R}$. A key step in the proof is the reduction of the involved differential operator to an Euler operator, its normal form, to solve Euler equations in $\mathcal{R}$ and to lift their (monomial) solutions to solutions of the original equation. The first (and already very striking) example of this setup is the exponential function $\exp_p$ in positive characteristic, solution of $y' = y$. We prove that it necessarily involves all variables and we construct its explicit (and quite mysterious) power series expansion. Additionally, relations of our results to the Grothendieck-Katz $p$-curvature conjecture and related conjectures will be discussed.
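For orientation, the classical characteristic-zero picture behind the reduction to an Euler operator: writing $\theta = t\,\frac{d}{dt}$, an Euler equation with constant coefficients has monomial solutions determined by its indicial polynomial, $$ \sum_{i=0}^{n} a_i\,\theta^i\, y = 0, \qquad \theta\, t^{\rho} = \rho\, t^{\rho} \;\Longrightarrow\; y = t^{\rho} \text{ solves the equation iff } \chi(\rho) := \sum_{i=0}^{n} a_i\,\rho^i = 0, $$ with repeated roots of $\chi$ contributing solutions involving logarithms; the point of the paper is that, in positive characteristic, the ring $\mathcal{R}$ of iterated-logarithm variables supplies the analogous primitives.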

We introduce a novel structure-preserving method in order to approximate the compressible ideal Magnetohydrodynamics (MHD) equations. This technique addresses the MHD equations using a non-divergence formulation, where the contributions of the magnetic field to the momentum and total mechanical energy are treated as source terms. Our approach uses the Marchuk-Strang splitting technique and involves three distinct components: a compressible Euler solver, a source-system solver, and an update procedure for the total mechanical energy. The scheme allows for significant freedom in the choice of Euler solver, while the magnetic field is discretized using a curl-conforming finite element space, yielding exact preservation of the involution constraints. We prove that the method preserves invariant domain properties, including positivity of density, positivity of internal energy, and the minimum principle of the specific entropy. If the scheme used to solve Euler's equation conserves total energy, then the resulting MHD scheme can be proven to preserve total energy. Similarly, if the scheme used to solve Euler's equation is entropy-stable, then the resulting MHD scheme is entropy stable as well. In our approach, the CFL condition does not depend on magnetosonic wave-speeds, but only on the usual maximum wave speed from Euler's system. To validate the effectiveness of our method, we solve a variety of ideal MHD problems, showing that the method is capable of delivering high-order accuracy in space for smooth problems, while also offering unconditional robustness in the shock hydrodynamics regime.
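The Marchuk-Strang splitting itself can be illustrated on a toy scalar ODE, $u' = -u + u^2$, split into two exactly solvable subproblems: a half step of one operator, a full step of the other, then a second half step yields second-order accuracy. A minimal sketch (the subproblems are illustrative, not the MHD sub-solvers):

```python
import math

def strang_step(u, dt):
    # Half step of the "source" subproblem u' = u**2 (exact substep solution).
    u = u / (1.0 - 0.5 * dt * u)
    # Full step of the "decay" subproblem u' = -u (exact substep solution).
    u = u * math.exp(-dt)
    # Second half step of u' = u**2 completes the symmetric composition.
    u = u / (1.0 - 0.5 * dt * u)
    return u

def integrate(u0, t_end, n_steps):
    dt = t_end / n_steps
    u = u0
    for _ in range(n_steps):
        u = strang_step(u, dt)
    return u
```

For $u(0) = 1/2$ the exact solution is $u(t) = 1/(1 + e^{t})$, so the splitting error is directly measurable; the symmetric half/full/half composition is what upgrades plain Lie splitting from first to second order.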

This work presents a comparative study to numerically compute impulse approximate controls for parabolic equations with various boundary conditions. Theoretical controllability results have been recently investigated using a logarithmic convexity estimate at a single time based on a Carleman commutator approach. We propose a numerical algorithm for computing the impulse controls with minimal $L^2$-norms by adapting a penalized Hilbert Uniqueness Method (HUM) combined with a Conjugate Gradient (CG) method. We consider static boundary conditions (Dirichlet and Neumann) and dynamic boundary conditions. Some numerical experiments based on our developed algorithm are given to validate and compare the theoretical impulse controllability results.
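The CG component can be sketched generically: in penalized HUM the Gramian is typically available only as a matrix-vector product (each application solves a forward and an adjoint problem), which is exactly the matrix-free setting CG is suited to. A hedged, generic sketch (not the paper's implementation):

```python
import numpy as np

def conjugate_gradient(apply_A, b, tol=1e-10, max_iter=500):
    """Solve A x = b for an SPD operator given only as a matvec."""
    x = np.zeros_like(b)
    r = b - apply_A(x)          # initial residual
    p = r.copy()                # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)   # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # conjugate the next direction
        rs = rs_new
    return x
```

Because only `apply_A` is required, the same loop runs unchanged whether the operator is an explicit matrix or a pair of PDE solves.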

Conformal inference is a fundamental and versatile tool that provides distribution-free guarantees for many machine learning tasks. We consider the transductive setting, where decisions are made on a test sample of $m$ new points, giving rise to $m$ conformal $p$-values. While classical results only concern their marginal distribution, we show that their joint distribution follows a P\'olya urn model, and establish a concentration inequality for their empirical distribution function. The results hold for arbitrary exchangeable scores, including {\it adaptive} ones that can use the covariates of the test+calibration samples at training stage for increased accuracy. We demonstrate the usefulness of these theoretical results through uniform, in-probability guarantees for two machine learning tasks of current interest: interval prediction for transductive transfer learning and novelty detection based on two-class classification.
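The marginal construction underlying this analysis is simple to state: with $n$ calibration scores and the convention that larger scores are more nonconforming, each test point gets the $p$-value $p_j = (1 + \#\{i : s_i \ge s_j\})/(n+1)$. A minimal sketch:

```python
import numpy as np

def conformal_p_values(cal_scores, test_scores):
    """Marginal conformal p-values; small p-values flag atypical test points."""
    cal = np.asarray(cal_scores, dtype=float)
    n = cal.size
    return np.array([(1 + np.sum(cal >= s)) / (n + 1) for s in test_scores])
```

Each $p_j$ is marginally super-uniform under exchangeability; the transductive question studied above is how the $m$ values behave jointly.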

We present a new high-order accurate spectral element solution to the two-dimensional scalar Poisson equation subject to a general Robin boundary condition. The solution is based on a simplified version of the shifted boundary method employing a continuous arbitrary order $hp$-Galerkin spectral element method as the numerical discretization procedure. The simplification relies on a polynomial correction to avoid explicitly evaluating high-order partial derivatives from the Taylor series expansion, which traditionally have been used within the shifted boundary method. In this setting, we apply an extrapolation and novel interpolation approach to project the basis functions from the true domain onto the approximate surrogate domain. The resulting solution provides a method that naturally incorporates curved geometrical features of the domain, overcomes complex and cumbersome mesh generation, and avoids problems with small cut cells. Dirichlet, Neumann, and general Robin boundary conditions are enforced weakly through: i) a generalized Nitsche's method and ii) a generalized Aubin's method. For this, a consistent asymptotic preserving formulation of the embedded Robin formulations is presented. We present several numerical experiments and analysis of the algorithmic properties of the different weak formulations. We include convergence studies under polynomial order ($p$) increase of the basis functions and mesh ($h$) refinement, together with matrix conditioning, to highlight the spectral and algebraic convergence features, respectively. This is done to assess the influence of errors across variational formulations, polynomial order, mesh size, and mappings between the true and surrogate boundaries.
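For reference, the classical symmetric Nitsche form for the Poisson problem with Dirichlet data $g$ (the standard textbook starting point that the shifted boundary method transfers to the surrogate boundary, not the paper's exact surrogate-domain formulation) reads: find $u_h$ such that, for all test functions $v_h$, $$ \int_\Omega \nabla u_h \cdot \nabla v_h \,dx - \int_\Gamma (\partial_n u_h)\, v_h \,ds - \int_\Gamma (\partial_n v_h)\, u_h \,ds + \frac{\gamma}{h} \int_\Gamma u_h\, v_h \,ds = \int_\Omega f\, v_h \,dx - \int_\Gamma (\partial_n v_h)\, g \,ds + \frac{\gamma}{h} \int_\Gamma g\, v_h \,ds, $$ where $\gamma > 0$ is a penalty parameter and $h$ the mesh size; the boundary-condition data are enforced weakly through the boundary terms rather than built into the approximation space.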

For problems of time-harmonic scattering by rational polygonal obstacles, embedding formulae express the far-field pattern induced by any incident plane wave in terms of the far-field patterns for a relatively small (frequency-independent) set of canonical incident angles. Although these remarkable formulae are exact in theory, here we demonstrate that: (i) they are highly sensitive to numerical errors in practice, and (ii) direct calculation of the coefficients in these formulae may be impossible for particular sets of canonical incident angles, even in exact arithmetic. Only by overcoming these practical issues can embedding formulae provide a highly efficient approach to computing the far-field pattern induced by a large number of incident angles. Here we propose solutions for problems (i) and (ii), backed up by theory and numerical experiments. Problem (i) is solved using techniques from computational complex analysis: we reformulate the embedding formula as a complex contour integral and prove that this is much less sensitive to numerical errors. In practice, this contour integral can be efficiently evaluated by residue calculus. Problem (ii) is addressed using techniques from numerical linear algebra: we oversample, considering more canonical incident angles than are necessary, thus expanding the space of valid coefficient vectors. The coefficient vectors can then be selected using either a least squares approach or column subset selection.
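The stabilization-by-contour-integration idea can be illustrated generically: for analytic data, Cauchy's integral formula evaluated by the trapezoidal rule on a circle is spectrally accurate and replaces a potentially ill-conditioned pointwise evaluation by a well-behaved average. A textbook sketch (not the embedding-formula integrand itself):

```python
import cmath

def cauchy_eval(f, center, radius, z0, n=64):
    """Evaluate f(z0) via Cauchy's integral formula on a circle.

    Uses the trapezoidal rule, which converges geometrically for
    periodic analytic integrands; z0 must lie inside the circle.
    """
    total = 0j
    for k in range(n):
        w = cmath.exp(2j * cmath.pi * k / n)  # point on the unit circle
        z = center + radius * w
        # f(z)/(z - z0) * dz with dz = i*radius*w*dtheta folds into this term
        total += f(z) * radius * w / (z - z0)
    return total / n
```

Even modest values of $n$ reproduce analytic function values to near machine precision, which is the kind of error insensitivity the contour reformulation exploits.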

We introduce and analyse a family of hash and predicate functions that are more likely to produce collisions for small reducible configurations of vectors. These may offer practical improvements to lattice sieving for short vectors. In particular, in one asymptotic regime the family exhibits significantly different convergent behaviour than existing hash functions and predicates.
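For context, a standard angular LSH of the kind used in sieving (SimHash-style; not the new family introduced above) buckets vectors by the sign pattern of random projections, so that candidate reductions are sought within buckets rather than over all pairs. A minimal sketch:

```python
import numpy as np

def simhash_bucket(v, hyperplanes):
    """Bucket id = sign pattern of v against random hyperplanes.

    Vectors at small angle agree on most signs, so near-parallel
    (hence reducible) pairs collide with elevated probability.
    """
    return tuple(int(h @ v > 0.0) for h in hyperplanes)
```

The families studied above play the same structural role but are designed so that collisions are biased toward small *reducible* configurations specifically.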
