
We present a mapping algorithm to compute large-scale magnetic field maps in indoor environments with approximate Gaussian process (GP) regression. Mapping the spatial variations of the ambient magnetic field can be used for localization algorithms in indoor areas. GP regression is a suitable tool for computing such maps because it provides predictions of the magnetic field at new locations along with uncertainty quantification. Because full GP regression has a complexity that grows cubically with the number of data points, approximations for GPs have been extensively studied. In this paper, we build on the structured kernel interpolation (SKI) framework, speeding up inference by exploiting efficient Krylov subspace methods. More specifically, we incorporate SKI with derivatives (D-SKI) into the scalar potential model for magnetic field modeling and compute both predictive mean and covariance with a complexity that is linear in the number of data points. In our simulations, we show that our method achieves better accuracy than current state-of-the-art methods on magnetic field maps with a growing mapping area. In our large-scale experiments, we construct magnetic field maps from up to 40000 three-dimensional magnetic field measurements in less than two minutes on a standard laptop.
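
A minimal sketch of the SKI idea in one dimension may help fix notation: the kernel matrix is approximated as $K \approx W K_{uu} W^\top$ with sparse interpolation weights $W$ onto a regular inducing grid, and the linear system is solved with a Krylov method (conjugate gradients) using only fast matrix-vector products. This is a generic illustration, not the paper's D-SKI scalar-potential model; the 1-D setup and all names are our own.

```python
# Hedged sketch of SKI-style GP inference (1-D toy, not the paper's method):
# K ~= W @ Kuu @ W.T with sparse linear interpolation onto an inducing grid.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import cg, LinearOperator

def rbf(a, b, ell=0.5):
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

def interp_weights(x, grid):
    # Linear interpolation of each x onto its two neighbouring grid points.
    h = grid[1] - grid[0]
    idx = np.clip(np.searchsorted(grid, x) - 1, 0, len(grid) - 2)
    w = (x - grid[idx]) / h
    rows = np.repeat(np.arange(len(x)), 2)
    cols = np.stack([idx, idx + 1], axis=1).ravel()
    vals = np.stack([1 - w, w], axis=1).ravel()
    return csr_matrix((vals, (rows, cols)), shape=(len(x), len(grid)))

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 4, 2000))
y = np.sin(3 * x) + 0.1 * rng.standard_normal(2000)
grid = np.linspace(0, 4, 100)
W, Kuu, noise = interp_weights(x, grid), rbf(grid, grid), 0.1**2

# Each matrix-vector product with (W Kuu W^T + noise I) costs O(n + m^2).
mv = lambda v: W @ (Kuu @ (W.T @ v)) + noise * v
A = LinearOperator((len(x), len(x)), matvec=mv)
alpha, _ = cg(A, y, atol=1e-10)

x_star = np.linspace(0, 4, 5)
W_star = interp_weights(x_star, grid)
mean = W_star @ (Kuu @ (W.T @ alpha))   # predictive mean at test points
print(mean)
```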

Related content

Sample selection models represent a common methodology for correcting bias induced by data missing not at random. It is well known that these models are not empirically identifiable without exclusion restrictions; in other words, the model must include variables that predict missingness but do not affect the outcome of interest. The drive to establish this requirement often leads to the inclusion of irrelevant variables in the model. A recent proposal uses adaptive LASSO to circumvent this problem, but its performance depends on the so-called covariance assumption, which can be violated in small to moderate samples. Additionally, there are as yet no tools for post-selection inference in this model. To address these challenges, we propose two families of spike-and-slab priors to conduct Bayesian variable selection in sample selection models. These prior structures allow for constructing a Gibbs sampler with tractable conditionals, which is scalable to the dimensions of practical interest. We illustrate the performance of the proposed methodology through a simulation study and present a comparison against adaptive LASSO and stepwise selection. We also provide two applications using publicly available real data. An implementation and code to reproduce the results in this paper can be found at https://github.com/adam-iqbal/selection-spike-slab
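
To make the spike-and-slab mechanics concrete, here is a hedged toy sketch for plain linear regression (not the paper's sample-selection likelihood): each coefficient has prior $(1-\gamma_j)\delta_0 + \gamma_j N(0,\tau^2)$, and a Gibbs sweep samples each $(\gamma_j, \beta_j)$ pair from its tractable conditional with $\beta_j$ integrated out. Hyperparameter names and values are illustrative only.

```python
# Toy spike-and-slab Gibbs sampler for linear regression (our simplification,
# not the paper's sample-selection model).
import numpy as np

def gibbs_spike_slab(X, y, n_iter=2000, tau2=1.0, sigma2=1.0, pi0=0.2, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros(p)
    draws = np.zeros((n_iter, p))
    for t in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]          # residual without beta_j
            a = X[:, j] @ X[:, j] / sigma2 + 1.0 / tau2   # posterior precision
            c = X[:, j] @ r / sigma2
            # Bayes factor for gamma_j = 1 vs 0, with beta_j integrated out.
            log_bf = -0.5 * np.log(tau2 * a) + 0.5 * c**2 / a
            p_incl = 1.0 / (1.0 + (1 - pi0) / pi0 * np.exp(-log_bf))
            if rng.uniform() < p_incl:
                beta[j] = rng.normal(c / a, np.sqrt(1.0 / a))
            else:
                beta[j] = 0.0
        draws[t] = beta
    return draws

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
y = X[:, 0] * 2.0 - X[:, 3] * 1.5 + rng.standard_normal(200)
draws = gibbs_spike_slab(X, y)
print((draws[1000:] != 0).mean(axis=0))   # posterior inclusion frequencies
```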

We consider the estimation of generalized additive models using basis expansions coupled with Bayesian model selection. Although Bayesian model selection is an intuitively appealing tool for regression splines, its use has traditionally been limited to Gaussian additive regression because of the availability of a tractable form of the marginal model likelihood. We extend the method to encompass the exponential family of distributions using the Laplace approximation to the likelihood. Although the approach exhibits success with any Gaussian-type prior distribution, there remains a lack of consensus regarding the best prior distribution for nonparametric regression through model selection. We observe that the classical unit information prior distribution for variable selection may not be well-suited for nonparametric regression using basis expansions. Instead, our investigation reveals that mixtures of g-priors are more suitable. We consider various mixtures of g-priors to evaluate their performance in estimating generalized additive models. Furthermore, we conduct a comparative analysis of several priors for knots to identify the most practically effective strategy. Our extensive simulation studies demonstrate the superiority of model selection-based approaches over other Bayesian methods.
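
As a rough illustration of the Laplace-approximated marginal likelihood that drives such model selection, the following hedged sketch scores a logistic model under a g-prior-style Gaussian prior; the setup is our own simplification, not the paper's GAM basis-expansion machinery.

```python
# Laplace approximation to the marginal likelihood of a logistic model with a
# g-prior-style covariance (illustrative simplification).
import numpy as np
from scipy.optimize import minimize

def neg_log_post(beta, X, y, prior_prec):
    eta = X @ beta
    nll = np.sum(np.logaddexp(0.0, eta) - y * eta)   # stable logistic NLL
    return nll + 0.5 * beta @ prior_prec @ beta

def log_marginal_laplace(X, y, g=100.0):
    n, d = X.shape
    prior_prec = (X.T @ X) / g       # g-prior: beta ~ N(0, g (X^T X)^{-1})
    res = minimize(neg_log_post, np.zeros(d), args=(X, y, prior_prec), method="BFGS")
    beta_hat = res.x
    mu = 1.0 / (1.0 + np.exp(-(X @ beta_hat)))
    H = X.T @ (X * (mu * (1 - mu))[:, None]) + prior_prec  # Hessian at the mode
    _, logdet_H = np.linalg.slogdet(H)
    _, logdet_prec = np.linalg.slogdet(prior_prec)
    # log p(y|M) ~= log p(y, beta_hat|M) + (d/2) log 2*pi - 0.5 log|H|
    log_prior_norm = 0.5 * logdet_prec - 0.5 * d * np.log(2 * np.pi)
    return -res.fun + log_prior_norm + 0.5 * d * np.log(2 * np.pi) - 0.5 * logdet_H

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 4))
y = (rng.uniform(size=300) < 1 / (1 + np.exp(-X[:, 0]))).astype(float)
print(log_marginal_laplace(X, y))   # compare this score across candidate models
```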

In this work, an efficient and robust isogeometric three-dimensional solid-beam finite element is developed for large deformations and finite rotations, with merely displacements as degrees of freedom. The finite strain theory and hyperelastic constitutive models are considered, and B-splines and NURBS are employed for the finite element discretization. Like finite elements based on Lagrange polynomials, NURBS-based formulations are affected by the non-physical phenomenon of locking, which overconstrains the field variables, degrades solution accuracy, and deteriorates convergence behavior. To avoid this problem within the context of a solid-beam formulation, the Assumed Natural Strain (ANS) method is applied to alleviate membrane and transverse shear locking, and the Enhanced Assumed Strain (EAS) method is used against Poisson thickness locking. Furthermore, the Mixed Integration Point (MIP) method is employed to make the formulation more efficient and robust. The proposed novel isogeometric solid-beam element is tested on several single-patch and multi-patch benchmark problems, and it is validated against classical solid finite elements and isoparametric solid-beam elements. The results show that the proposed formulation alleviates the locking effects and significantly improves the performance of the isogeometric solid-beam element. With the developed element, efficient and accurate predictions of the mechanical properties of lattice-based structured materials can be achieved. The proposed solid-beam element inherits the merits of both solid elements, e.g. flexible boundary conditions, and beam elements, i.e. higher computational efficiency.
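
For readers unfamiliar with the discretization ingredients, the sketch below evaluates B-spline basis functions via the Cox-de Boor recursion, the building block of such isogeometric formulations; it is a generic side illustration, unrelated to the ANS/EAS/MIP element itself.

```python
# Cox-de Boor recursion for B-spline basis functions (generic illustration).
import numpy as np

def bspline_basis(i, p, u, knots):
    """Value of the i-th B-spline basis function of degree p at parameter u."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + p] != knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) \
               * bspline_basis(i, p - 1, u, knots)
    right = 0.0
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) \
                * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

# Quadratic basis on an open knot vector: 7 knots, degree 2 -> 4 functions.
knots = np.array([0, 0, 0, 0.5, 1, 1, 1], dtype=float)
vals = [bspline_basis(i, 2, 0.3, knots) for i in range(4)]
print(vals, sum(vals))   # partition of unity: the values sum to 1.0
```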

We propose a non-linear state-space model to examine the relationship between CO$_2$ emissions, energy sources, and macroeconomic activity, using data from 1971 to 2019. CO$_2$ emissions are modeled as a weighted sum of fossil fuel use, with emission conversion factors that evolve over time to reflect technological changes. GDP is expressed as the outcome of linearly increasing energy efficiency and total energy consumption. The model is estimated using CO$_2$ data from the Global Carbon Budget, GDP statistics from the World Bank, and energy data from the International Energy Agency (IEA). The model's projections of CO$_2$ emissions and GDP from 2020 to 2100 are based on energy scenarios from the Shared Socioeconomic Pathways (SSP) and the IEA's Net Zero roadmap. Emissions projections from the model are consistent with these scenarios but predict lower GDP growth. An alternative model version, assuming exponential energy efficiency improvement, produces GDP growth rates more in line with the benchmark projections. Our results imply that if internationally agreed net-zero objectives are to be fulfilled and economic growth is to follow SSP or IEA scenarios, then drastic changes in energy efficiency, not consistent with historical trends, are needed.
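
Our reading of the described observation equations, in notation of our own choosing (the paper's symbols may differ):

```latex
% Hedged reading of the observation equations (our notation):
%   F_{i,t}: use of fossil fuel i;  \phi_{i,t}: time-varying conversion factor;
%   E_t: total energy consumption;  e_t: energy efficiency, linear in t.
\begin{align}
  \mathrm{CO2}_t &= \sum_{i} \phi_{i,t}\, F_{i,t}, \\
  \mathrm{GDP}_t &= e_t\, E_t, \qquad e_t = e_0 + e_1 t .
\end{align}
```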

This paper introduces a time-domain combined field integral equation for electromagnetic scattering by a perfect electric conductor. The new equation is obtained by leveraging the quasi-Helmholtz projectors, which separate both the unknown and the source fields into solenoidal and irrotational components. These two components are then appropriately rescaled to cure the loss of accuracy that occurs when the time step is large. Yukawa-type integral operators with a purely imaginary wave number are also used as a Calderon preconditioner to eliminate the ill-conditioning of the matrix systems. The stabilized time-domain electric and magnetic field integral equations are linearly combined in a Calderon-like fashion, then temporally discretized using a proper pair of trial functions, resulting in a marching-on-in-time linear system. The novel formulation is immune to spurious resonances, dense discretization breakdown, large-time-step breakdown and dc instabilities stemming from non-trivial kernels. Numerical results for both simply-connected and multiply-connected scatterers corroborate the theoretical analysis.
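
For orientation, the quasi-Helmholtz projectors are conventionally built from the loop ($\Lambda$) and star ($\Sigma$) transformation matrices as below; this notation is the standard one from the literature and is assumed rather than taken from this paper.

```latex
% Standard quasi-Helmholtz projectors; + denotes the Moore-Penrose pseudoinverse.
P^{\Sigma} = \Sigma\,(\Sigma^{\mathsf T}\Sigma)^{+}\,\Sigma^{\mathsf T},
\qquad
P^{\Lambda H} = \mathbf{I} - P^{\Sigma},
\qquad
\mathbf{j} = \underbrace{P^{\Lambda H}\mathbf{j}}_{\text{solenoidal}}
           + \underbrace{P^{\Sigma}\mathbf{j}}_{\text{irrotational}} .
```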

In this work we consider the two-dimensional instationary Navier-Stokes equations with homogeneous Dirichlet/no-slip boundary conditions. We show error estimates for the fully discrete problem, where a discontinuous Galerkin method in time and inf-sup stable finite elements in space are used. Recently, best approximation type error estimates for the Stokes problem in the $L^\infty(I;L^2(\Omega))$, $L^2(I;H^1(\Omega))$ and $L^2(I;L^2(\Omega))$ norms have been shown. The main result of the present work extends the error estimate in the $L^\infty(I;L^2(\Omega))$ norm to the Navier-Stokes equations, by pursuing an error splitting approach and an appropriate duality argument. In order to discuss the stability of solutions to the discrete primal and dual equations, a specially tailored discrete Gronwall lemma is presented. The techniques developed towards showing the $L^\infty(I;L^2(\Omega))$ error estimate also allow us to show best approximation type error estimates in the $L^2(I;H^1(\Omega))$ and $L^2(I;L^2(\Omega))$ norms, which complement this work.
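
For context, a generic discrete Gronwall lemma has the following form; the paper's specially tailored variant differs, so this statement is only an orientation point.

```latex
% Generic discrete Gronwall lemma (orientation only, not the paper's variant):
\text{If } a_n \le b + \sum_{k=0}^{n-1} \lambda_k a_k
\text{ for all } n, \text{ with } \lambda_k \ge 0 \text{ and } b \ge 0,
\text{ then } a_n \le b\, \exp\!\Bigl(\sum_{k=0}^{n-1} \lambda_k\Bigr).
```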

Because rogue waves are transient and steep, domain decomposition techniques are well suited to simulating rogue wave solutions. In this context, the backward compatible PINN (bc-PINN) is a temporally sequential scheme that solves PDEs over successive time segments while satisfying all previously obtained solutions. In this work, we propose improvements to the original bc-PINN algorithm in two respects, based on the characteristics of error propagation. One is to modify the loss term that ensures backward compatibility by selecting the earliest learned solution for each sub-domain as the pseudo-reference solution. The other is to adopt the concatenation of the solutions obtained from the individual subnetworks as the final form of the predicted solution. The improved backward compatible PINN (Ibc-PINN) is applied to study data-driven higher-order rogue waves for the nonlinear Schr\"{o}dinger (NLS) equation and the AB system to demonstrate its effectiveness and advantages. Transfer learning and initial condition guided learning (ICGL) techniques are also utilized to accelerate the training. Moreover, an error analysis conducted on each sub-domain shows that Ibc-PINN accumulates error more slowly, which yields a growing advantage in accuracy. In short, the numerical results fully indicate that Ibc-PINN significantly outperforms bc-PINN in terms of accuracy and stability without sacrificing efficiency.
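
A hedged sketch of our reading of the two modifications (not the authors' code; function and variable names are hypothetical):

```python
# Sketch of the Ibc-PINN ideas as we read them from the abstract: (1) each
# earlier sub-domain's pseudo-reference is the EARLIEST network that learned
# it; (2) the final prediction concatenates the per-segment subnetworks.
import torch

def bc_loss(net_k, earliest_nets, segment_points, pde_residual, x_pde):
    """Training loss for segment k: PDE residual on segment k plus a
    backward-compatibility term against frozen earliest-learned solutions."""
    loss = (pde_residual(net_k, x_pde) ** 2).mean()
    for net_s, x_s in zip(earliest_nets, segment_points):
        with torch.no_grad():
            u_ref = net_s(x_s)            # frozen pseudo-reference solution
        loss = loss + ((net_k(x_s) - u_ref) ** 2).mean()
    return loss

def predict(nets, segments, x):
    # Final prediction: concatenation of per-segment subnetworks rather than
    # the last network evaluated everywhere; segments are assumed to
    # partition the time axis, with time as the last input coordinate.
    out = torch.empty(x.shape[0], 1)
    for net_s, (t0, t1) in zip(nets, segments):
        mask = (x[:, -1] >= t0) & (x[:, -1] < t1)
        out[mask] = net_s(x[mask])
    return out
```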

We study the problem of clustering networks whose nodes have imputed or physical positions in a single dimension, for example prestige hierarchies or the similarity dimension of hyperbolic embeddings. Existing algorithms, such as the critical gap method and other greedy strategies, only offer approximate solutions to this problem. Here, we introduce a dynamic programming approach that returns provably optimal solutions in polynomial time -- O(n^2) steps -- for a broad class of clustering objectives. We demonstrate the algorithm through applications to synthetic and empirical networks and show that it outperforms existing heuristics by a significant margin, with a similar execution time.
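
The following hedged sketch shows the dynamic programming pattern for one concrete objective, within-cluster sum of squared deviations; it runs in O(kn^2) for k clusters, whereas the paper's formulation achieves O(n^2) for a broader class of objectives.

```python
# Optimal 1-D clustering by dynamic programming (sum-of-squares objective).
import numpy as np

def cluster_1d(positions, k):
    x = np.sort(np.asarray(positions, dtype=float))
    n = len(x)
    s1 = np.concatenate([[0.0], np.cumsum(x)])        # prefix sums
    s2 = np.concatenate([[0.0], np.cumsum(x * x)])

    def cost(i, j):  # within-cluster sum of squares of x[i:j], O(1) via prefixes
        return s2[j] - s2[i] - (s1[j] - s1[i]) ** 2 / (j - i)

    INF = float("inf")
    dp = np.full((k + 1, n + 1), INF)                  # dp[c, j]: best cost of
    arg = np.zeros((k + 1, n + 1), dtype=int)          # c clusters over x[:j]
    dp[0, 0] = 0.0
    for c in range(1, k + 1):
        for j in range(c, n + 1):
            for i in range(c - 1, j):                  # last cluster is x[i:j]
                v = dp[c - 1, i] + cost(i, j)
                if v < dp[c, j]:
                    dp[c, j], arg[c, j] = v, i
    bounds, j = [], n                                  # backtrack boundaries
    for c in range(k, 0, -1):
        i = arg[c, j]
        bounds.append((i, j))
        j = i
    return dp[k, n], bounds[::-1]

obj, bounds = cluster_1d([0.1, 0.2, 0.3, 5.0, 5.1, 9.7], k=3)
print(obj, bounds)   # near-zero objective; clusters {0..2}, {3..4}, {5}
```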

With the rising concern about model interpretability, the application of eXplainable AI (XAI) tools to deepfake detection models has recently become a topic of interest. In image classification tasks, XAI tools highlight the pixels that influence a model's decision. This helps in troubleshooting the model and determining areas that may require further tuning of parameters. With a wide range of tools available, choosing the right tool for a model becomes necessary, as each one may highlight different sets of pixels for a given image. There is thus a need to evaluate the different tools and identify the best-performing ones. Generic XAI evaluation methods, such as the insertion or removal of salient pixels/segments, are applicable to general image classification tasks but may produce less meaningful results when applied to deepfake detection models because of how those models operate. In this paper, we perform experiments to show that generic removal/insertion XAI evaluation methods are not suitable for deepfake detection models, and we propose and implement an XAI evaluation approach specifically suited to them.
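
For reference, a generic deletion-style evaluation metric of the kind critiqued here can be sketched as follows (stand-in model and data; not the paper's proposed deepfake-specific approach):

```python
# Generic deletion metric: remove the most salient pixels first and track the
# drop in model score; the area under the resulting curve scores the saliency.
import numpy as np

def deletion_curve(model, image, saliency, steps=20, baseline=0.0):
    """model: callable image -> scalar score; saliency: per-pixel importance."""
    order = np.argsort(saliency.ravel())[::-1]        # most salient first
    img = image.copy()
    scores = [model(img)]
    chunk = max(1, len(order) // steps)
    for start in range(0, len(order), chunk):
        idx = np.unravel_index(order[start:start + chunk], image.shape)
        img[idx] = baseline                           # delete salient pixels
        scores.append(model(img))
    return np.array(scores)   # lower area under the curve = better saliency

# Toy usage with a stand-in "model" that just averages the top-left patch.
rng = np.random.default_rng(0)
image = rng.uniform(size=(32, 32))
saliency = np.zeros((32, 32))
saliency[:8, :8] = 1.0
model = lambda im: im[:8, :8].mean()
print(deletion_curve(model, image, saliency)[:5])     # score drops as expected
```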

This paper considers the problem of robust iterative Bayesian smoothing in nonlinear state-space models with additive noise using Gaussian approximations. Iterative methods are known to improve smoothed estimates but are not guaranteed to converge, motivating the development of more robust versions of the algorithms. The aim of this article is to present Levenberg-Marquardt (LM) and line-search extensions of the classical iterated extended Kalman smoother (IEKS) as well as the iterated posterior linearisation smoother (IPLS). The IEKS has previously been shown to be equivalent to the Gauss-Newton (GN) method. We derive a similar GN interpretation for the IPLS. Furthermore, we show that an LM extension for both iterative methods can be achieved with a simple modification of the smoothing iterations, enabling algorithms with efficient implementations. Our numerical experiments show the importance of robust methods, in particular for the IEKS-based smoothers. The computationally expensive IPLS-based smoothers are naturally robust but can still benefit from further regularisation.
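
The Levenberg-Marquardt idea underlying these extensions can be sketched generically: a Gauss-Newton step solves $(J^\top J)\,d = -J^\top r$, and LM adds a damping term $\lambda I$ whose size is adapted to whether the cost actually decreased. The sketch below is a plain nonlinear least-squares illustration, not the IEKS/IPLS smoothing recursions themselves.

```python
# Generic Levenberg-Marquardt for nonlinear least squares (illustration only).
import numpy as np

def lm_step(residual, jac, x, lam):
    r, J = residual(x), jac(x)
    # Damped Gauss-Newton step: (J^T J + lam I) d = -J^T r.
    d = np.linalg.solve(J.T @ J + lam * np.eye(len(x)), -J.T @ r)
    return x + d

def lm_optimise(residual, jac, x, lam=1e-2, iters=50):
    cost = lambda z: 0.5 * residual(z) @ residual(z)
    for _ in range(iters):
        x_new = lm_step(residual, jac, x, lam)
        if cost(x_new) < cost(x):
            x, lam = x_new, lam / 3       # accept step, trust the model more
        else:
            lam *= 3                      # reject step, increase damping
    return x

# Toy problem: fit y = exp(a t) + b with true (a, b) = (0.7, 0.3).
t = np.linspace(0, 1, 30)
y = np.exp(0.7 * t) + 0.3
residual = lambda p: np.exp(p[0] * t) + p[1] - y
jac = lambda p: np.stack([t * np.exp(p[0] * t), np.ones_like(t)], axis=1)
print(lm_optimise(residual, jac, np.array([0.0, 0.0])))   # -> approx [0.7 0.3]
```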
