The boundary element method is an efficient algorithm for simulating acoustic propagation through homogeneous objects embedded in free space. The conditioning of the system matrix strongly depends on physical parameters such as density, wavespeed and frequency. In particular, high contrast in density and wavespeed across a material interface leads to an ill-conditioned discretisation matrix. Therefore, the convergence of Krylov methods to solve the linear system is slow. Here, specialised boundary integral formulations are designed for the case of acoustic scattering at high-contrast media. The eigenvalues of the resulting system matrix accumulate at two points in the complex plane that depend on the density ratio and stay away from zero. The spectral analysis of the Calder\'on preconditioned PMCHWT formulation yields a single accumulation point. Benchmark simulations demonstrate the computational efficiency of the high-contrast Neumann formulation for scattering at high-contrast media.

Evolution strategies (ESs) are zeroth-order stochastic black-box optimization heuristics invariant to monotonic transformations of the objective function. They evolve a multivariate normal distribution, from which candidate solutions are generated. Among different variants, CMA-ES is nowadays recognized as one of the state-of-the-art zeroth-order optimizers for difficult problems. Despite ample empirical evidence that ESs with a step-size control mechanism converge linearly, theoretical guarantees of linear convergence of ESs have been established only on limited classes of functions. In particular, theoretical results on convex functions are missing, where zeroth-order and also first-order optimization methods are often analyzed. In this paper, we establish almost sure linear convergence and a bound on the expected hitting time of an ES family, namely the $(1+1)_\kappa$-ES, which includes the (1+1)-ES with (generalized) one-fifth success rule and an abstract covariance matrix adaptation with bounded condition number, on a broad class of functions. The analysis holds for monotonic transformations of positively homogeneous functions and of quadratically bounded functions, the latter of which particularly includes monotonic transformations of strongly convex functions with Lipschitz continuous gradients. To the best of the authors' knowledge, this is the first work that proves linear convergence of an ES on such a broad class of functions.
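
As a concrete illustration (not the paper's $(1+1)_\kappa$-ES, which additionally adapts a covariance matrix), a minimal (1+1)-ES with a one-fifth success rule can be sketched as follows; the step-size multipliers are our own choice, tuned so that a 1/5 success rate leaves the step size unchanged on average:

```python
import numpy as np

def one_plus_one_es(f, x0, sigma0=1.0, iters=3000, seed=0):
    """Minimal (1+1)-ES with a one-fifth success rule (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx, sigma = f(x), sigma0
    # Multiplicative step-size update; stationary exactly at a 1/5 success rate,
    # since 0.2 * log(up) + 0.8 * log(down) = 0.
    up, down = np.exp(0.2), np.exp(-0.05)
    for _ in range(iters):
        y = x + sigma * rng.standard_normal(x.size)
        fy = f(y)
        if fy <= fx:               # success: accept the offspring, grow sigma
            x, fx, sigma = y, fy, sigma * up
        else:                      # failure: keep the parent, shrink sigma
            sigma *= down
    return x, fx

sphere = lambda z: float(np.dot(z, z))
x_best, f_best = one_plus_one_es(sphere, np.ones(5))
```

On the sphere function this exhibits the linear convergence the paper analyzes in far greater generality.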

This paper presents a hybrid numerical method for linear collisional kinetic equations with diffusive scaling. The aim of the method is to reduce the computational cost of kinetic equations by taking advantage of the lower dimensionality of the asymptotic fluid model while reducing the error induced by the latter approach. It relies on two criteria motivated by a perturbative approach to obtain a dynamic domain decomposition. The first criterion quantifies how far the particle distribution function is from a local equilibrium in velocity. The second depends only on the macroscopic quantities, which are available on the whole computational domain. Interface conditions are handled using a micro-macro decomposition, and the method is significantly more efficient than a standard full kinetic approach. Some properties of the hybrid method, such as conservation of mass, are also investigated.
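
A minimal sketch of the first criterion, assuming a 1D velocity grid; the function name and the relative-$L^2$ measure are our own illustrative choices, not the paper's exact indicator:

```python
import numpy as np

def equilibrium_defect(f, v):
    """Relative L2 distance between a velocity distribution f(v) and the
    Maxwellian sharing its first three moments (mass, momentum, temperature)."""
    dv = v[1] - v[0]
    rho = np.sum(f) * dv                          # mass
    u = np.sum(f * v) * dv / rho                  # bulk velocity
    T = np.sum(f * (v - u) ** 2) * dv / rho       # temperature
    M = rho / np.sqrt(2 * np.pi * T) * np.exp(-(v - u) ** 2 / (2 * T))
    return np.sqrt(np.sum((f - M) ** 2)) / np.sqrt(np.sum(f ** 2))

v = np.linspace(-8.0, 8.0, 401)
maxwellian = np.exp(-v ** 2 / 2) / np.sqrt(2 * np.pi)
bimodal = 0.5 * (np.exp(-(v - 2) ** 2 / 2)
                 + np.exp(-(v + 2) ** 2 / 2)) / np.sqrt(2 * np.pi)
```

A Maxwellian yields an essentially zero defect, while a bimodal (far-from-equilibrium) distribution yields a large one, which is the kind of signal a dynamic domain decomposition can act on.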

In this paper, we propose Nesterov Accelerated Shuffling Gradient (NASG), a new algorithm for convex finite-sum minimization problems. Our method integrates the traditional Nesterov acceleration momentum with different shuffling sampling schemes. We show that our algorithm achieves an improved rate of $\mathcal{O}(1/T)$ using unified shuffling schemes, where $T$ is the number of epochs. This rate is better than that of any other shuffling gradient method in the convex regime. Our convergence analysis requires neither a bounded-domain assumption nor a bounded-gradient condition. For randomized shuffling schemes, we improve the convergence bound further. Under a suitable initial condition, we show that our method converges faster within a small neighborhood of the solution. Numerical simulations demonstrate the efficiency of our algorithm.
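
A rough sketch of the idea, combining one shuffled pass per epoch with a Nesterov-style extrapolation between epochs, might look as follows on a toy consistent least-squares problem; the function name, step size, and momentum schedule are our own illustrative choices, not the paper's exact algorithm:

```python
import numpy as np

# Hypothetical toy setup: consistent least squares, f_i(x) = 0.5 * (a_i.x - b_i)^2.
rng = np.random.default_rng(0)
n, d = 20, 5
A = rng.standard_normal((n, d))
x_star = rng.standard_normal(d)
b = A @ x_star

def shuffled_nesterov(A, b, lr=0.01, epochs=600, seed=1):
    """Shuffling gradient passes wrapped in per-epoch Nesterov extrapolation."""
    rng = np.random.default_rng(seed)
    n = len(b)
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(epochs):
        z = y.copy()
        for i in rng.permutation(n):            # one reshuffled pass per epoch
            z -= lr * (A[i] @ z - b[i]) * A[i]  # incremental gradient of f_i
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = z + ((t - 1.0) / t_next) * (z - x)  # Nesterov momentum between epochs
        x, t = z, t_next
    return x

x_hat = shuffled_nesterov(A, b)
```

Because the system is consistent, every component gradient vanishes at the solution, so the shuffled pass has no bias floor there.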

Computations of incompressible flows with velocity boundary conditions require the solution of a Poisson equation for pressure with all-Neumann boundary conditions. Discretization of such a Poisson equation results in a rank-deficient coefficient matrix. When a non-conservative discretization method such as finite differences, finite elements, or a spectral scheme is used, the matrix also carries an inconsistency that causes the residuals of the iterative solution to saturate at a threshold level that depends on the spatial resolution and the order of the discretization scheme. In this paper, we examine this inconsistency for a high-order meshless discretization scheme suitable for solving the equations on complex domains. The high-order meshless method uses polyharmonic spline radial basis functions (PHS-RBF) with appended polynomials to interpolate scattered data and constructs the discrete equations by collocation. The PHS-RBF provides the flexibility to vary the order of discretization by increasing the degree of the appended polynomial. In this study, we examine the convergence of the inconsistency for different spatial resolutions and different degrees of the appended polynomials by solving the Poisson equation for a manufactured solution as well as the Navier-Stokes equations for several fluid flows. We observe that the inconsistency decreases faster than the error in the final solution and eventually becomes vanishingly small at sufficient spatial resolution. The rate of convergence of the inconsistency is observed to be similar to or better than the rate of convergence of the discretization errors. This beneficial observation makes it unnecessary to regularize the Poisson equation by fixing either the mean pressure or the pressure at an arbitrary point. A simple point solver such as SOR is seen to converge well, although it can be further accelerated using multilevel methods.
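
A minimal sketch of the PHS-RBF interpolation step with an appended degree-1 polynomial (the function name is ours; collocation for PDEs builds on the same saddle-point structure). With the appended linear basis, linear functions are reproduced exactly:

```python
import numpy as np

def phs_interpolate(pts, f_vals, query, m=3):
    """PHS-RBF interpolation phi(r) = r^m with an appended degree-1 polynomial.
    Solves the standard saddle-point system [A P; P^T 0] for the coefficients."""
    r = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    A = r ** m                                       # polyharmonic kernel r^3
    P = np.column_stack([np.ones(len(pts)), pts])    # appended basis [1, x, y]
    n, q = len(pts), P.shape[1]
    # The zero block enforces the polynomial moment (orthogonality) conditions.
    M = np.block([[A, P], [P.T, np.zeros((q, q))]])
    coef = np.linalg.solve(M, np.concatenate([f_vals, np.zeros(q)]))
    rq = np.linalg.norm(query[:, None, :] - pts[None, :, :], axis=-1)
    Pq = np.column_stack([np.ones(len(query)), query])
    return rq ** m @ coef[:n] + Pq @ coef[n:]

rng = np.random.default_rng(0)
pts = rng.random((30, 2))                            # scattered nodes in the unit square
lin = lambda p: 2.0 * p[:, 0] - p[:, 1] + 1.0        # a linear test function
query = rng.random((10, 2))
vals = phs_interpolate(pts, lin(pts), query)
```

Raising the degree of the appended polynomial (and enlarging `P` accordingly) is what raises the discretization order in the paper's scheme.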

Computational feasibility is a widespread concern that guides the framing and modeling of biological and artificial intelligence. The specification of cognitive system capacities is often shaped by unexamined intuitive assumptions about the search space and complexity of a subcomputation. However, a mistaken intuition might make such initial conceptualizations misleading for what empirical questions appear relevant later on. We undertake here computational-level modeling and complexity analyses of segmentation - a widely hypothesized subcomputation that plays a requisite role in explanations of capacities across domains - as a case study to show how crucial it is to formally assess these assumptions. We mathematically prove two sets of results regarding hardness and search space size that may run counter to intuition, and position their implications with respect to existing views on the subcapacity.

We obtain new equitightness and $C([0,T];L^p(\mathbb{R}^N))$-convergence results for numerical approximations of generalized porous medium equations of the form $$ \partial_tu-\mathfrak{L}[\varphi(u)]=g\qquad\text{in $\mathbb{R}^N\times(0,T)$}, $$ where $\varphi:\mathbb{R}\to\mathbb{R}$ is continuous and nondecreasing, and $\mathfrak{L}$ is a local or nonlocal diffusion operator. Our results include slow diffusions, strongly degenerate Stefan problems, and fast diffusions above a critical exponent. These results improve the previous $C([0,T];L_{\text{loc}}^p(\mathbb{R}^N))$-convergence obtained in a series of papers on the topic by the authors. To have equitightness and global $L^p$-convergence, some additional restrictions on $\mathfrak{L}$ and $\varphi$ are needed. Most commonly used symmetric operators $\mathfrak{L}$ are still included: the Laplacian, fractional Laplacians, and other generators of symmetric L\'evy processes with some fractional moment. We also discuss extensions to nonlinear possibly strongly degenerate convection-diffusion equations.
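
As a toy illustration of the equations under study (not the paper's numerical schemes), an explicit finite-difference discretization of the 1D porous medium equation $\partial_t u = \partial_{xx}(u^m)$ conserves mass for a compactly supported bump; grid, time step, and initial data below are our own choices:

```python
import numpy as np

def pme_step(u, dx, dt, m=2):
    """One explicit step for u_t = (u^m)_xx in 1D; boundary nodes held at zero
    (the compactly supported bump never reaches the domain boundary here)."""
    v = u ** m
    unew = u.copy()
    unew[1:-1] += dt / dx ** 2 * (v[2:] - 2.0 * v[1:-1] + v[:-2])
    return unew

x = np.linspace(-2.0, 2.0, 201)
dx = x[1] - x[0]
u0 = np.maximum(1.0 - x ** 2, 0.0)   # compactly supported initial bump
# Explicit stability restriction: dt on the order of dx^2 / (2 m max(u)^(m-1)).
dt = 0.1 * dx ** 2
u = u0.copy()
for _ in range(500):
    u = pme_step(u, dx, dt)
```

The discrete Laplacian telescopes over the interior nodes, so the total mass is conserved to roundoff while the peak decays, mirroring the mass-preservation and smoothing properties the convergence theory relies on.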

Particle smoothers are SMC (Sequential Monte Carlo) algorithms designed to approximate the joint distribution of the states given observations from a state-space model. We propose dSMC (de-Sequentialized Monte Carlo), a new particle smoother that is able to process $T$ observations in $\mathcal{O}(\log T)$ time on parallel architectures. This compares favourably with standard particle smoothers, the complexity of which is linear in $T$. We derive $\mathcal{L}_p$ convergence results for dSMC, with an explicit upper bound, polynomial in $T$. We then discuss how to reduce the variance of the smoothing estimates computed by dSMC by (i) designing good proposal distributions for sampling the particles at the initialization of the algorithm, and (ii) using lazy resampling to increase the number of particles used in dSMC. Finally, we design a particle Gibbs sampler based on dSMC, which is able to perform parameter inference in a state-space model at $\mathcal{O}(\log T)$ cost on parallel hardware.

Cyber-resilience is an increasing concern in the development of autonomous navigation solutions for marine vessels. This paper scrutinizes cyber-resilience properties of marine navigation through a prism with three edges: multiple-sensor information fusion, diagnosis of not-normal behaviours, and change detection. It proposes a two-stage estimator for diagnosis and mitigation of sensor signals used for coastal navigation. Developing a likelihood-field approach, a first stage extracts shoreline features from radar and matches them to the electronic navigation chart. A second stage associates buoy and beacon features from the radar with chart information. Using real data logged during sea tests, combined with simulated spoofing, the paper verifies the ability to promptly diagnose and isolate an attempt to compromise position measurements. A new approach is suggested for high-level processing of received data to evaluate their consistency, which is agnostic to the underlying technology of the individual sensory input. A combined parametric Gaussian modelling and kernel density estimation approach is suggested and compared with a generalized likelihood ratio change detector that uses sliding windows. The paper shows how detection of deviations from nominal behaviour and isolation of the affected components are possible when under attack or when sensor defects occur.
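
A minimal sketch of a sliding-window GLR change detector for a mean shift in Gaussian noise; the threshold, window length, and simulated spoofing signal are our own illustrative choices, not the paper's tuned detector:

```python
import numpy as np

def glr_statistic(x, w, sigma=1.0):
    """Sliding-window GLR statistic for a mean change away from 0, assuming
    Gaussian noise of known standard deviation sigma."""
    means = np.convolve(x, np.ones(w) / w, mode="valid")   # window means
    return w * means ** 2 / (2.0 * sigma ** 2)

rng = np.random.default_rng(1)
x = rng.standard_normal(400)
x[250:] += 2.0                       # simulated spoofing: the mean jumps by 2 sigma
g = glr_statistic(x, w=30)
alarm = int(np.argmax(g > 12.0))     # first window index crossing the threshold
```

The statistic stays small under nominal behaviour and crosses the threshold shortly after the injected change, illustrating the timely-detection property the paper verifies on sea-trial data.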

Promoting behavioural diversity is critical for solving games with non-transitive dynamics where strategic cycles exist, and there is no consistent winner (e.g., Rock-Paper-Scissors). Yet, there is a lack of rigorous treatment for defining diversity and constructing diversity-aware learning dynamics. In this work, we offer a geometric interpretation of behavioural diversity in games and introduce a novel diversity metric based on \emph{determinantal point processes} (DPP). By incorporating the diversity metric into best-response dynamics, we develop \emph{diverse fictitious play} and \emph{diverse policy-space response oracle} for solving normal-form games and open-ended games. We prove the uniqueness of the diverse best response and the convergence of our algorithms on two-player games. Importantly, we show that maximising the DPP-based diversity metric provably enlarges the \emph{gamescape} -- convex polytopes spanned by agents' mixtures of strategies. To validate our diversity-aware solvers, we test them on tens of games that show strong non-transitivity. Results suggest that our methods achieve much lower exploitability than state-of-the-art solvers by finding effective and diverse strategies.
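
A minimal sketch of a DPP-style diversity score over a population's payoff vectors; this is our own simplified variant (log-determinant of the Gram matrix of normalized rows), not the paper's exact metric:

```python
import numpy as np

def dpp_diversity(payoffs):
    """Log-determinant of the Gram matrix of row-normalized payoff vectors.
    Larger when behaviours are closer to mutually orthogonal; a small ridge
    keeps the determinant well-defined for near-duplicate populations."""
    P = payoffs / np.linalg.norm(payoffs, axis=1, keepdims=True)
    L = P @ P.T
    sign, logdet = np.linalg.slogdet(L + 1e-9 * np.eye(len(P)))
    return logdet

diverse = np.eye(3)                             # mutually orthogonal behaviours
redundant = np.array([[1.0, 0.00, 0.00],
                      [1.0, 0.01, 0.00],
                      [1.0, 0.00, 0.01]])       # near-duplicate behaviours
```

An orthogonal population scores near zero (the maximum for normalized rows), while near-duplicates score far lower, matching the geometric intuition that determinants measure spanned volume.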

We consider the question: how can you sample good negative examples for contrastive learning? We argue that, as with metric learning, learning contrastive representations benefits from hard negative samples (i.e., points that are difficult to distinguish from an anchor point). The key challenge toward using hard negatives is that contrastive methods must remain unsupervised, making it infeasible to adopt existing negative sampling strategies that use label information. In response, we develop a new class of unsupervised methods for selecting hard negative samples where the user can control the amount of hardness. A limiting case of this sampling results in a representation that tightly clusters each class, and pushes different classes as far apart as possible. The proposed method improves downstream performance across multiple modalities, requires only a few additional lines of code to implement, and introduces no computational overhead.
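
One simple way to realize user-controlled hardness, sketched here with our own parameter names, is to reweight candidate negatives by an exponential of their similarity to the anchor; `beta = 0` recovers uniform sampling, and larger `beta` concentrates on harder negatives:

```python
import numpy as np

def sample_hard_negatives(anchor, pool, beta, k, rng):
    """Draw k negatives from `pool` without replacement, upweighting candidates
    by exp(beta * cosine similarity to the anchor)."""
    a = anchor / np.linalg.norm(anchor)
    P = pool / np.linalg.norm(pool, axis=1, keepdims=True)
    logits = beta * (P @ a)                  # similarity-scaled log-weights
    w = np.exp(logits - logits.max())        # numerically stable softmax
    w /= w.sum()
    return rng.choice(len(pool), size=k, replace=False, p=w)

rng = np.random.default_rng(0)
anchor = np.array([1.0, 0.0])
near = rng.normal([1.0, 0.0], 0.05, size=(5, 2))     # hard: close to the anchor
far = rng.normal([-1.0, 0.0], 0.05, size=(95, 2))    # easy: far from the anchor
pool = np.vstack([near, far])
idx = sample_hard_negatives(anchor, pool, beta=20.0, k=3, rng=rng)
```

With a large `beta`, essentially all sampled negatives come from the hard cluster, even though it is a small minority of the pool, and no label information is used anywhere.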
