
This paper is devoted to finding numerical solutions of general one-dimensional nonlinear systems of third-order boundary value problems (BVPs) for a pair of functions using the Galerkin weighted residual method. We derive the mathematical formulations in matrix form, in detail, by exploiting Bernstein polynomials as basis functions. The proposed method attains reasonable accuracy on a few examples. At the end of the study, a comparison is made between the approximate and exact solutions, and also with the solutions of existing methods. Our results converge monotonically to the exact solutions. In addition, we show that the derived formulations remain applicable when a complicated higher-order BVP is reduced to a lower-order system of BVPs, and the performance of the numerical solutions is satisfactory.
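To make the recipe concrete, here is a minimal sketch of a Bernstein-basis Galerkin weighted-residual solver for a single linear third-order BVP (the paper treats nonlinear systems for a pair of functions; the manufactured problem $u''' + u = f$ with exact solution $u = x^2(1-x)$, the degree $n$, and the row-replacement treatment of boundary conditions are our illustrative choices):

```python
import numpy as np
from math import comb
from numpy.polynomial import Polynomial

n = 8  # Bernstein degree (illustrative choice)

def bernstein(k, n):
    # B_{k,n}(x) = C(n,k) * x^k * (1-x)^(n-k), built as a Polynomial object
    return comb(n, k) * Polynomial([0.0, 1.0])**k * Polynomial([1.0, -1.0])**(n - k)

basis = [bernstein(k, n) for k in range(n + 1)]

# Galerkin weighted-residual system for u''' + u = f on (0,1):
#   A[i,j] = int_0^1 (phi_j''' + phi_j) * phi_i dx,  b[i] = int_0^1 f * phi_i dx
xq, wq = np.polynomial.legendre.leggauss(20)
xq, wq = 0.5 * (xq + 1.0), 0.5 * wq      # map Gauss-Legendre nodes to [0,1]

f = lambda x: -6.0 + x**2 - x**3          # manufactured so that u = x^2 (1-x)
A = np.zeros((n + 1, n + 1))
b = np.zeros(n + 1)
for i, phi in enumerate(basis):
    b[i] = np.sum(wq * f(xq) * phi(xq))
    for j, psi in enumerate(basis):
        A[i, j] = np.sum(wq * (psi.deriv(3)(xq) + psi(xq)) * phi(xq))

# Impose u(0) = u(1) = u'(0) = 0 by replacing the first three equations
bcs = [(lambda p: p(0.0), 0.0), (lambda p: p(1.0), 0.0),
       (lambda p: p.deriv(1)(0.0), 0.0)]
for row, (bc, val) in enumerate(bcs):
    A[row] = [bc(p) for p in basis]
    b[row] = val

c = np.linalg.solve(A, b)
u_h = lambda x: sum(ck * p(x) for ck, p in zip(c, basis))
xs = np.linspace(0.0, 1.0, 11)
print(np.max(np.abs(u_h(xs) - xs**2 * (1 - xs))))   # near machine precision
```

Since the exact solution here is a cubic contained in the degree-8 Bernstein space, the error should sit at machine-precision level; for the nonlinear systems of the paper one would instead linearize (e.g. by Newton iteration) and solve such a matrix system repeatedly.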

Related content

Understanding the generalization properties of heavy-tailed stochastic optimization algorithms has attracted increasing attention in recent years. While illuminating interesting aspects of stochastic optimizers by using heavy-tailed stochastic differential equations as proxies, prior works either provided expected generalization bounds or introduced non-computable information-theoretic terms. Addressing these drawbacks, in this work we prove high-probability generalization bounds for heavy-tailed SDEs which do not contain any nontrivial information-theoretic terms. To achieve this goal, we develop new proof techniques based on estimating the entropy flows associated with the so-called fractional Fokker-Planck equation (a partial differential equation that governs the evolution of the distribution of the corresponding heavy-tailed SDE). In addition to obtaining high-probability bounds, we show that our bounds have a better dependence on the dimension of parameters as compared to prior art. Our results further identify a phase-transition phenomenon, which suggests that heavy tails can be either beneficial or harmful depending on the problem structure. We support our theory with experiments conducted in a variety of settings.
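As a concrete illustration of the objects involved, here is a minimal simulation of a heavy-tailed SGD proxy, i.e. an Euler discretization of an SDE driven by symmetric $\alpha$-stable noise (the quadratic objective, step size, and tail index below are illustrative assumptions, not the paper's experimental setup):

```python
import numpy as np
from scipy.stats import levy_stable

# Heavy-tailed SDE proxy for SGD (a sketch; alpha, eta, sigma are illustrative):
#   theta_{k+1} = theta_k - eta * grad f(theta_k) + eta^(1/alpha) * sigma * L_k,
# where L_k has i.i.d. symmetric alpha-stable coordinates.
rng = np.random.default_rng(0)
d, alpha, eta, sigma, steps = 10, 1.6, 1e-2, 0.1, 2000

grad = lambda theta: theta   # gradient of the toy quadratic f(theta) = ||theta||^2 / 2
theta = rng.normal(size=d)
for _ in range(steps):
    L = levy_stable.rvs(alpha, 0.0, size=d, random_state=rng)
    theta = theta - eta * grad(theta) + eta**(1.0 / alpha) * sigma * L

print(np.linalg.norm(theta))
```

Setting `alpha = 2` recovers Gaussian noise, while `alpha < 2` produces the heavy-tailed jumps whose effect on generalization the bounds quantify.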

Recent strides in nonlinear model predictive control (NMPC) underscore a dependence on numerical advancements to efficiently and accurately solve large-scale problems. Given the substantial number of variables characterizing typical whole-body optimal control (OC) problems - often numbering in the thousands - exploiting the sparse structure of the numerical problem becomes crucial to meet computational demands, typically in the range of a few milliseconds. Addressing the linear-quadratic regulator (LQR) problem is a fundamental building block for computing Newton or Sequential Quadratic Programming (SQP) steps in direct optimal control methods. This paper concentrates on equality-constrained problems featuring implicit system dynamics and dual regularization, a characteristic of advanced interior-point or augmented Lagrangian solvers. Here, we introduce a parallel algorithm for solving an LQR problem with dual regularization. Leveraging a rewriting of the LQR recursion through block elimination, we first enhance the efficiency of the serial algorithm and then generalize it to handle parametric problems. This extension enables us to split decision variables and solve multiple subproblems concurrently. Our algorithm is implemented in our nonlinear numerical optimal control library ALIGATOR. It showcases improved performance over previous serial formulations, and we validate its efficacy by deploying it in the model predictive control of a real quadruped robot.
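For reference, a sketch of the serial building block: a backward Riccati sweep for a time-varying LQR with dual regularization, which we model here as a proximal (Moreau) smoothing of the cost-to-go with parameter mu (this modeling choice and all variable names are our assumptions; the paper's block-elimination rewriting and its parallel, parametric extension are not reproduced):

```python
import numpy as np

def lqr_riccati_dual_reg(A, B, Q, R, q, r, P_T, p_T, mu):
    """Backward Riccati sweep for a time-varying LQR with dual regularization,
    modeled as Moreau smoothing of the cost-to-go with parameter mu:
    Hessian P -> (I + mu P)^{-1} P, gradient p -> (I + mu P)^{-1} p.
    This is our assumption, not the paper's exact formulation."""
    T, n = len(A), A[0].shape[0]
    Ps, ps = [None] * (T + 1), [None] * (T + 1)
    Ks, ks = [None] * T, [None] * T
    Ps[T], ps[T] = P_T, p_T
    for t in reversed(range(T)):
        M = np.eye(n) + mu * Ps[t + 1]
        Pt1 = np.linalg.solve(M, Ps[t + 1])   # smoothed cost-to-go Hessian
        pt1 = np.linalg.solve(M, ps[t + 1])   # smoothed cost-to-go gradient
        Quu = R[t] + B[t].T @ Pt1 @ B[t]
        Qux = B[t].T @ Pt1 @ A[t]
        Ks[t] = -np.linalg.solve(Quu, Qux)                  # feedback gain
        ks[t] = -np.linalg.solve(Quu, r[t] + B[t].T @ pt1)  # feedforward term
        Ps[t] = Q[t] + A[t].T @ Pt1 @ A[t] + Qux.T @ Ks[t]
        ps[t] = q[t] + A[t].T @ pt1 + Qux.T @ ks[t]
    return Ks, ks, Ps, ps

# Tiny usage on random stage data
T, nx, nu = 50, 4, 2
rng = np.random.default_rng(0)
Ad = [np.eye(nx) + rng.normal(scale=0.01, size=(nx, nx)) for _ in range(T)]
Bd = [0.1 * rng.normal(size=(nx, nu)) for _ in range(T)]
Ks, ks, Ps, ps = lqr_riccati_dual_reg(
    Ad, Bd, [np.eye(nx)] * T, [np.eye(nu)] * T,
    [np.zeros(nx)] * T, [np.zeros(nu)] * T, np.eye(nx), np.zeros(nx), mu=1e-6)
print(Ks[0])
```

A parallel version would condense segments of the horizon independently and stitch them through a small coupling system, which is roughly the direction the abstract describes.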

Separation bounds are a fundamental measure of the complexity of solving a zero-dimensional system, as they measure how difficult it is to separate its zeroes. In the positive-dimensional case, the notion of reach takes their place. In this paper, we provide bounds on the reach of a smooth algebraic variety in terms of several invariants of interest: the condition number, Smale's $\gamma$ and the bit-size. We also provide probabilistic bounds for random algebraic varieties under some general assumptions.
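For orientation, recall the two notions in hedged form (our notation; the paper's precise normalizations may differ). The reach of a smooth variety $M \subset \mathbb{R}^n$ is
\[
  \tau(M) \;=\; \sup\bigl\{\, r \ge 0 \;:\; \text{every } x \text{ with } \operatorname{dist}(x, M) < r \text{ has a unique nearest point on } M \,\bigr\},
\]
and Smale's $\gamma$ at a zero $x$ of a system $f$ is commonly taken as
\[
  \gamma(f, x) \;=\; \sup_{k \ge 2}\, \Bigl\| Df(x)^{\dagger}\, \tfrac{D^k f(x)}{k!} \Bigr\|^{\frac{1}{k-1}},
\]
with $Df(x)^{\dagger}$ a (pseudo)inverse of the derivative. Lower-bounding $\tau(M)$ by such quantities is what makes the reach an effective complexity measure for positive-dimensional zero sets.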

In the rapidly evolving field of autonomous driving, precise segmentation of LiDAR data is crucial for understanding complex 3D environments. Traditional approaches often rely on disparate, standalone codebases, hindering unified advancements and fair benchmarking across models. To address these challenges, we introduce MMDetection3D-lidarseg, a comprehensive toolbox designed for the efficient training and evaluation of state-of-the-art LiDAR segmentation models. We support a wide range of segmentation models and integrate advanced data augmentation techniques to enhance robustness and generalization. Additionally, the toolbox provides support for multiple leading sparse convolution backends, optimizing computational efficiency and performance. By fostering a unified framework, MMDetection3D-lidarseg streamlines development and benchmarking, setting new standards for research and application. Our extensive benchmark experiments on widely used datasets demonstrate the effectiveness of the toolbox. The codebase and trained models are publicly available, promoting further research and innovation in the field of LiDAR segmentation for autonomous driving.

This paper is about learning the parameter-to-solution map for systems of partial differential equations (PDEs) that depend on a potentially large number of parameters, covering all PDE types for which a stable variational formulation (SVF) can be found. A central constituent is the notion of a variationally correct residual loss function, meaning that its value is always uniformly proportional to the squared solution error in the norm determined by the SVF, hence facilitating rigorous a posteriori accuracy control. It is based on a single variational problem, associated with the family of parameter-dependent fiber problems, employing the notion of direct integrals of Hilbert spaces. Since in its original form the loss function is given as a dual test norm of the residual, a central objective is to develop equivalent computable expressions. A first critical role is played by hybrid hypothesis classes, whose elements are piecewise polynomial in (low-dimensional) spatio-temporal variables with parameter-dependent coefficients that can be represented, e.g., by neural networks. Second, working with first-order SVFs, we distinguish two scenarios: (i) the test space can be chosen as an $L_2$-space (e.g. for elliptic or parabolic problems) so that residuals live in $L_2$ and can be evaluated directly; (ii) when trial and test spaces for the fiber problems (e.g. for transport equations) depend on the parameters, we use ultraweak formulations. In combination with Discontinuous Petrov-Galerkin concepts, the hybrid format is then instrumental to arrive at variationally correct computable residual loss functions. Our findings are illustrated by numerical experiments representing (i) and (ii), namely elliptic boundary value problems with piecewise constant diffusion coefficients and pure transport equations with a parameter-dependent convection field.
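In schematic form (our notation, not necessarily the paper's), variational correctness says the following: if the SVF is encoded by a boundedly invertible operator $B : X \to Y'$ with data $f$ and exact solution $u$, then the residual loss
\[
  \mathcal{L}(v) \;=\; \| f - B v \|_{Y'}^2
  \qquad \text{satisfies} \qquad
  c\,\| u - v \|_{X}^{2} \;\le\; \mathcal{L}(v) \;\le\; C\,\| u - v \|_{X}^{2}
  \quad \text{for all } v \in X,
\]
with constants $0 < c \le C$ coming from the stability of the formulation. This equivalence is what enables rigorous a posteriori accuracy control; the computational difficulty lies in evaluating the dual norm $\|\cdot\|_{Y'}$, which is exactly what the hybrid hypothesis classes and ultraweak/DPG constructions address.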

This paper presents a novel algorithm, based on use of rational approximants of randomly scalarized boundary integral resolvents, for the evaluation of acoustic and electromagnetic resonances in open and closed cavities; for simplicity we restrict treatment to cavities in two-dimensional space. The desired open cavity resonances (also known as ``eigenvalues'' for interior problems, and ``scattering poles'' for exterior and open problems) are obtained as the poles of associated rational approximants; both the approximants and their poles are obtained by means of the recently introduced AAA rational-approximation algorithm. In fact, the proposed resonance-search method applies to any nonlinear eigenvalue problem (NEP) associated with a given function $F: U \to \mathbb{C}^{n\times n}$, wherein a complex value $k$ is sought for which $F_k w = 0$ for some nonzero $w\in \mathbb{C}^n$. For the cavity problems considered in this paper, $F_k$ is taken as a spectrally discretized version of a Green function-based boundary integral operator at spatial frequency $k$. In all cases, the scalarized resolvent is given by an expression of the form $u^* F_k^{-1} v$, where $u,v \in \mathbb{C}^n$ are fixed random vectors. A variety of numerical results are presented for both scattering resonances and other NEPs, demonstrating the accuracy of the method even for high-frequency states.
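The mechanics can be illustrated on a toy NEP where the answer is known: for the linear family $F_k = A - kI$, the poles of the scalarized resolvent $u^* F_k^{-1} v$ are exactly the eigenvalues of $A$. The sketch below samples this function and extracts poles with an off-the-shelf AAA routine (we use the open-source baryrat package as a stand-in for whichever AAA implementation the paper employs; the matrix, sampling segment, and tolerance are illustrative):

```python
import numpy as np
from baryrat import aaa   # third-party AAA implementation (an assumption;
                          # any AAA routine with pole extraction would do)

rng = np.random.default_rng(1)
n = 40
A = np.diag(rng.uniform(1.0, 5.0, n)) + 0.05j * np.diag(rng.standard_normal(n))

# Linear NEP F_k = A - k I, so the true "resonances" are the eigenvalues of A.
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)

def scalarized_resolvent(k):
    # g(k) = u^* F_k^{-1} v: a scalar meromorphic function whose poles
    # coincide with the values of k at which F_k is singular
    return np.vdot(u, np.linalg.solve(A - k * np.eye(n), v))

# Sample g on a segment in the complex plane away from the poles, fit with
# AAA, and read off the poles of the rational approximant.
ks = np.linspace(0.5, 5.5, 200) + 0.3j
g = np.array([scalarized_resolvent(k) for k in ks])
r = aaa(ks, g, tol=1e-12)
poles = r.poles()

# Keep poles near the sampling window (AAA can produce a few spurious ones)
found = np.sort_complex(poles[(poles.real > 0.9) & (poles.real < 5.1)])
print(np.sort_complex(np.diag(A))[:5])
print(found[:5])
```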

Triply periodic minimal surfaces (TPMS) are a class of metamaterials with a variety of applications and well-known primitives. We present a new method for discovering novel microscale TPMS structures with exceptional energy-dissipation capabilities, achieving double the energy absorption of the best existing TPMS primitive structure. Our approach employs a parametric representation, allowing seamless interpolation between structures and representing a rich TPMS design space. We show that simulations are intractable for optimizing microscale hyperelastic structures, and instead propose a sample-efficient computational strategy for rapidly discovering structures with extreme energy dissipation using limited amounts of empirical data from 3D-printed and tested microscale metamaterials. This strategy ensures high-fidelity results but involves time-consuming 3D printing and testing. To address this, we leverage an uncertainty-aware Deep Ensembles model to predict microstructure behaviors and identify which structures to 3D-print and test next. We iteratively refine our model through batch Bayesian optimization, selecting structures for fabrication that maximize exploration of the performance space and exploitation of our energy-dissipation objective. Using our method, we produce the first open-source dataset of hyperelastic microscale TPMS structures, including a set of novel structures that demonstrate extreme energy dissipation capabilities. We show several potential applications of these structures in protective equipment and bone implants.
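A minimal sketch of the selection loop may help fix ideas: an ensemble of small regressors (standing in for the paper's Deep Ensembles model) is fit to the already-tested structures, and the next batch to 3D-print is chosen by an upper-confidence-bound score that trades off predicted energy dissipation against ensemble disagreement (the UCB acquisition, all hyperparameters, and the placeholder data are our simplifying assumptions):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def fit_ensemble(X, y, K=5):
    # Each member sees a bootstrap resample and its own random initialization
    members = []
    for k in range(K):
        idx = rng.integers(0, len(X), len(X))
        m = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=k).fit(X[idx], y[idx])
        members.append(m)
    return members

def select_batch(members, candidates, batch_size=8, beta=1.0):
    preds = np.stack([m.predict(candidates) for m in members])  # (K, N)
    mean, std = preds.mean(axis=0), preds.std(axis=0)
    ucb = mean + beta * std        # exploit high predicted dissipation,
                                   # explore where the ensemble disagrees
    return np.argsort(-ucb)[:batch_size]

# X: TPMS shape parameters already printed/tested; y: measured energy dissipation
X = rng.uniform(0, 1, size=(40, 6))
y = rng.uniform(0, 1, 40)                      # placeholder measurements
cands = rng.uniform(0, 1, size=(500, 6))       # untested candidate parameters
batch = select_batch(fit_ensemble(X, y), cands)
print(batch)   # indices of the next structures to 3D-print and test
```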

In the present paper, an algorithm for the numerical solution of the external Dirichlet generalized harmonic problem for a sphere by the method of probabilistic solution (MPS) is given, where ``generalized'' indicates that the boundary function has a finite number of discontinuity curves of the first kind. The algorithm consists of the following main stages: (1) the transition from an infinite domain to a finite domain by an inversion; (2) the consideration of a new Dirichlet generalized harmonic problem, on the basis of the Kelvin theorem, for the obtained finite domain; (3) the numerical solution of the new problem for the finite domain by the MPS, which in turn is based on a computer simulation of the Wiener process; (4) finding the probabilistic solution of the posed generalized problem at any fixed points of the infinite domain from the solution of the new problem. For illustration, numerical examples are considered and results are presented.
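Stage (3) is the computational core, so here is a minimal sketch of the MPS for the classical interior Dirichlet problem on the unit ball, where $u(x) = \mathbb{E}[g(W_\tau)]$ for a Wiener process $W$ started at $x$ and stopped at its first boundary hit (the inversion, the Kelvin transform, and the handling of discontinuity curves from the paper are omitted; the boundary function and step size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def g(p):                      # boundary function on the unit sphere
    return p[..., 2] ** 2      # g = z^2; its harmonic extension is known

def mps_estimate(x, n_paths=20000, dt=1e-3):
    # Euler simulation of Brownian paths until exit (small dt reduces the
    # overshoot bias at the boundary; a sketch, not a tuned scheme)
    x = np.broadcast_to(x, (n_paths, 3)).copy()
    alive = np.ones(n_paths, dtype=bool)
    exit_pts = np.empty_like(x)
    while alive.any():
        x[alive] += rng.normal(scale=np.sqrt(dt), size=(alive.sum(), 3))
        hit = np.linalg.norm(x[alive], axis=1) >= 1.0
        idx = np.flatnonzero(alive)[hit]
        # Project the overshoot points back onto the sphere
        exit_pts[idx] = x[idx] / np.linalg.norm(x[idx], axis=1, keepdims=True)
        alive[idx] = False
    return g(exit_pts).mean()

x0 = np.array([0.2, 0.1, 0.3])
print(mps_estimate(x0))
# Exact harmonic extension of z^2 on the unit ball: z^2 + (1 - |x|^2)/3
print(x0[2]**2 + (1 - np.dot(x0, x0)) / 3)
```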

Estimators of doubly robust functionals typically rely on estimating two complex nuisance functions, such as the propensity score and the conditional outcome mean for the average treatment effect functional. We consider the problem of how to estimate nuisance functions so as to obtain optimal rates of convergence for a doubly robust nonparametric functional that has found applications across the causal inference and conditional independence testing literature. For several plug-in type estimators and a one-step type estimator, we illustrate how different tuning-parameter choices for the nuisance function estimators, together with different sample-splitting strategies, affect the rate of estimating the functional of interest. For each of these estimators and each sample-splitting strategy, we show that undersmoothing the nuisance function estimators is necessary under low regularity conditions to obtain optimal rates of convergence for the functional of interest. With suitable nuisance function tuning and sample-splitting strategies, we show that some of these estimators can achieve minimax rates of convergence in all H\"older smoothness classes of the nuisance functions.
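For concreteness, a sketch of one such estimator: the one-step (AIPW) estimator of the average treatment effect with two-fold sample splitting, where random forests stand in for whatever nuisance estimators and undersmoothing schedules the theory actually prescribes (the data-generating process and all tuning choices below are illustrative assumptions):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))
e = 1 / (1 + np.exp(-X[:, 0]))                 # true propensity score
A = rng.binomial(1, e)                         # treatment indicator
Y = X[:, 0] + A * 1.0 + rng.normal(size=n)     # outcome; true ATE = 1.0

def aipw_crossfit(X, A, Y, n_folds=2):
    folds = np.array_split(rng.permutation(n), n_folds)
    psi = np.empty(n)
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        # Fit nuisances on the training folds only (sample splitting)
        ps = RandomForestClassifier(n_estimators=200).fit(X[train], A[train])
        mu1 = RandomForestRegressor(n_estimators=200).fit(
            X[train][A[train] == 1], Y[train][A[train] == 1])
        mu0 = RandomForestRegressor(n_estimators=200).fit(
            X[train][A[train] == 0], Y[train][A[train] == 0])
        e_hat = np.clip(ps.predict_proba(X[test])[:, 1], 0.01, 0.99)
        m1, m0 = mu1.predict(X[test]), mu0.predict(X[test])
        # Efficient influence function of the ATE, evaluated on held-out data
        psi[test] = (m1 - m0
                     + A[test] * (Y[test] - m1) / e_hat
                     - (1 - A[test]) * (Y[test] - m0) / (1 - e_hat))
    return psi.mean(), psi.std() / np.sqrt(n)

est, se = aipw_crossfit(X, A, Y)
print(f"ATE estimate: {est:.3f} +/- {1.96 * se:.3f}")
```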

The existence of representative datasets is a prerequisite for many successful artificial intelligence and machine learning models. However, the subsequent application of these models often involves scenarios that are inadequately represented in the data used for training. The reasons for this are manifold and range from time and cost constraints to ethical considerations. As a consequence, the reliable use of these models, especially in safety-critical applications, remains a major challenge. Leveraging additional, already existing sources of knowledge is key to overcoming the limitations of purely data-driven approaches, and eventually to increasing the generalization capability of these models. Furthermore, predictions that conform with knowledge are crucial for making trustworthy and safe decisions even in underrepresented scenarios. This work provides an overview of existing techniques and methods in the literature that combine data-based models with existing knowledge. The identified approaches are structured according to the categories of integration, extraction, and conformity. Special attention is given to applications in the field of autonomous driving.
