
In this work, we introduce the novel application of the adaptive mesh refinement (AMR) technique in the global stability analysis of incompressible flows. The design of an accurate mesh for transitional flows is crucial. Indeed, an inadequate resolution might introduce numerical noise that triggers premature transition. With AMR, we enable the design of three different and independent meshes: one for the non-linear base flow and one each for the linear direct and adjoint solutions. Each mesh is designed to reduce the truncation and quadrature errors for its respective solution, which are measured via the spectral error indicator. We provide details about the workflow and the refining procedure. The numerical framework is validated for the two-dimensional flow past a circular cylinder, computing a portion of the spectrum for the linearised direct and adjoint Navier-Stokes operators.
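The indicator-driven refinement loop can be illustrated schematically. The sketch below, a minimal 1D stand-in, splits every cell whose error indicator exceeds a tolerance; the `tanh` layer field and the crude variation-based indicator are invented for the example and are much simpler than the spectral error indicator used in the paper:

```python
import numpy as np

def refine(mesh, indicator, tol):
    """One adaptive sweep: split every cell whose error indicator
    exceeds tol (generic AMR step, not the paper's spectral indicator)."""
    new_mesh = []
    for a, b in mesh:
        if indicator(a, b) > tol:
            mid = 0.5 * (a + b)
            new_mesh += [(a, mid), (mid, b)]  # split the flagged cell
        else:
            new_mesh.append((a, b))           # keep the cell as-is
    return new_mesh

# Toy field with a sharp layer near x = 0.5; the indicator is the local
# variation of u over a cell, a crude stand-in for a spectral estimate.
u = lambda x: np.tanh(50 * (x - 0.5))
indicator = lambda a, b: abs(u(b) - u(a))

mesh = [(i / 8, (i + 1) / 8) for i in range(8)]
for _ in range(4):                            # four refinement sweeps
    mesh = refine(mesh, indicator, tol=0.2)

widths = [b - a for a, b in mesh]
print(len(mesh), min(widths))                 # cells cluster around the layer
```

Running three such loops independently — one each for the base flow, direct, and adjoint solutions — yields the three tailored meshes the abstract describes.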


In this work, we present new proofs of convergence for Plug-and-Play (PnP) algorithms. PnP methods are efficient iterative algorithms for solving image inverse problems where regularization is performed by plugging a pre-trained denoiser into a proximal algorithm, such as Proximal Gradient Descent (PGD) or Douglas-Rachford Splitting (DRS). Recent research has explored convergence by incorporating a denoiser that can be written exactly as a proximal operator. However, the corresponding PnP algorithm must then be run with stepsize equal to $1$. The stepsize condition for nonconvex convergence of the proximal algorithm in use then translates into restrictive conditions on the regularization parameter of the inverse problem, which can severely degrade the restoration capacity of the algorithm. In this paper, we present two remedies for this limitation. First, we provide a novel convergence proof for PnP-DRS that does not impose any restrictions on the regularization parameter. Second, we examine a relaxed version of the PGD algorithm that converges across a broader range of regularization parameters. Our experimental study, conducted on deblurring and super-resolution problems, demonstrates that both of these solutions enhance the accuracy of image restoration.
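The relaxed iteration can be sketched in a few lines. Below is a hedged, illustrative version of a relaxed PnP proximal-gradient step — the relaxation scheme and all parameters are invented for the example and are not the paper's exact algorithm; soft-thresholding stands in for a pre-trained denoiser:

```python
import numpy as np

def relaxed_pnp_pgd(y, A, denoiser, step=0.5, relax=0.8, n_iter=100):
    """Relaxed Plug-and-Play proximal gradient descent (sketch).

    Minimises 0.5 * ||A x - y||^2 with the regulariser imposed implicitly
    by `denoiser`; `relax` averages the denoised point with the plain
    gradient step (illustrative relaxation, not the paper's exact scheme).
    """
    x = A.T @ y                                    # crude initialisation
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                   # gradient of data fidelity
        z = x - step * grad                        # forward (gradient) step
        x = (1 - relax) * z + relax * denoiser(z)  # relaxed denoising step
    return x

# Toy example: identity forward operator, with soft-thresholding as a
# stand-in for a pre-trained denoiser.
rng = np.random.default_rng(0)
x_true = np.zeros(50)
x_true[[3, 17, 40]] = [2.0, -1.5, 3.0]
A = np.eye(50)
y = A @ x_true + 0.05 * rng.standard_normal(50)
soft = lambda v: np.sign(v) * np.maximum(np.abs(v) - 0.1, 0.0)
x_hat = relaxed_pnp_pgd(y, A, soft)
print(np.linalg.norm(x_hat - x_true))
```

Setting `relax` below 1 is what decouples the effective denoising strength from the stepsize, which is the practical point of the relaxation discussed in the abstract.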

Reliable quantification of epistemic and aleatoric uncertainty is of crucial importance in applications where models are trained in one environment but applied to multiple different environments, as often occurs in real-world applications such as climate science or mobility analysis. We propose a simple approach using surjective normalizing flows to identify out-of-distribution data sets for deep neural network models in a single forward pass. The method builds on recent developments in deep uncertainty quantification and generative modeling with normalizing flows. We apply our method to a synthetic data set that has been simulated using a mechanistic model from the mobility literature and several data sets simulated from interventional distributions induced by soft and atomic interventions on that model, and demonstrate that our method can reliably discern out-of-distribution data from in-distribution data. We compare the surjective flow model to a Dirichlet process mixture model and a bijective flow and find that the surjections are a crucial component to reliably distinguish in-distribution from out-of-distribution data.
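The single-forward-pass scoring idea is generic: fit a density model on in-distribution data, then flag points whose log-likelihood falls below a low quantile of the training scores. The sketch below uses a diagonal Gaussian as a stand-in for the learned flow density (training an actual surjective flow is out of scope here); the shifted distribution mimics an interventional data set:

```python
import numpy as np

rng = np.random.default_rng(1)

# In-distribution training data and a shifted "interventional" OOD set.
x_train = rng.normal(0.0, 1.0, size=(500, 2))
x_in    = rng.normal(0.0, 1.0, size=(200, 2))
x_out   = rng.normal(4.0, 1.0, size=(200, 2))

# Fit a diagonal Gaussian as a stand-in for the learned flow density.
mu, sigma = x_train.mean(axis=0), x_train.std(axis=0)

def log_density(x):
    """Log-likelihood under the fitted model (one forward pass)."""
    z = (x - mu) / sigma
    return -0.5 * (z ** 2 + np.log(2 * np.pi)).sum(axis=1) - np.log(sigma).sum()

# Threshold at a low quantile of the training scores: points scoring
# below it are declared out-of-distribution.
tau = np.quantile(log_density(x_train), 0.05)
ood_rate_in  = np.mean(log_density(x_in)  < tau)
ood_rate_out = np.mean(log_density(x_out) < tau)
print(ood_rate_in, ood_rate_out)
```

With a normalizing flow, `log_density` would be the base-distribution log-likelihood plus the log-determinant of the flow's Jacobian, still computable in one forward pass.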

In this paper, we consider robust nonparametric regression using deep neural networks with ReLU activation function. While several existing theoretically justified methods are geared towards robustness against identically distributed heavy-tailed noise, the rise of adversarial attacks has emphasized the importance of safeguarding estimation procedures against systematic contamination. We approach this statistical issue by shifting our focus towards estimating conditional distributions. To address it robustly, we introduce a novel estimation procedure based on $\ell$-estimation. Under a mild model assumption, we establish general non-asymptotic risk bounds for the resulting estimators, showcasing their robustness against contamination, outliers, and model misspecification. We then delve into the application of our approach using deep ReLU neural networks. When the model is well-specified and the regression function belongs to an $\alpha$-H\"older class, employing $\ell$-type estimation on suitable networks enables the resulting estimators to achieve the minimax optimal rate of convergence. Additionally, we demonstrate that deep $\ell$-type estimators can circumvent the curse of dimensionality by assuming the regression function closely resembles the composition of several H\"older functions. To attain this, new deep fully-connected ReLU neural networks have been designed to approximate this composition class. This approximation result may be of independent interest.
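The value of a robust criterion under systematic contamination can be seen on a toy linear problem. The sketch below contrasts least squares with a least-absolute-deviations fit computed by iteratively reweighted least squares — a generic robust stand-in, plainly not the paper's $\ell$-estimation criterion; all data and numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
x = rng.uniform(-1, 1, n)
y = 2.0 * x + 0.1 * rng.standard_normal(n)   # true slope is 2
idx = np.argsort(x)[-20:]                    # contaminate 10% of the data
y[idx] += 10.0                               # systematic upward shift

X = np.column_stack([x, np.ones(n)])
beta_ls = np.linalg.lstsq(X, y, rcond=None)[0]   # plain least squares

# Least absolute deviations via iteratively reweighted least squares:
# weights 1/|residual| progressively downweight the contaminated points.
beta = beta_ls.copy()
for _ in range(100):
    w = 1.0 / (np.abs(y - X @ beta) + 1e-8)
    beta = np.linalg.solve((X.T * w) @ X, (X.T * w) @ y)

print(beta_ls[0], beta[0])   # LS slope is pulled away from 2; LAD is not
```

The same qualitative behaviour — bounded influence of a contaminated fraction of the sample — is what the paper's risk bounds establish for deep $\ell$-type estimators in the nonparametric setting.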

We present a component-based model order reduction procedure to efficiently and accurately solve parameterized incompressible flows governed by the Navier-Stokes equations. Our approach leverages a non-overlapping optimization-based domain decomposition technique to determine the control variable that minimizes jumps across the interfaces between sub-domains. To solve the resulting constrained optimization problem, we propose both Gauss-Newton and sequential quadratic programming methods, which effectively transform the constrained problem into an unconstrained formulation. Furthermore, we integrate model order reduction techniques into the optimization framework, to speed up computations. In particular, we incorporate localized training and adaptive enrichment to reduce the burden associated with the training of the local reduced-order models. Numerical results are presented to demonstrate the validity and effectiveness of the overall methodology.
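The optimization-based coupling can be illustrated on a toy two-subdomain Poisson problem: the control is the interface value, the objective is the jump of one-sided fluxes across the interface, and a Gauss-Newton step with a finite-difference Jacobian solves this (here scalar and linear) problem. Every discretization choice below is invented for the example and far simpler than the Navier-Stokes setting of the paper:

```python
import numpy as np

def poisson_dirichlet(n, a, b, ua, ub, f=1.0):
    """Solve -u'' = f on (a, b) with u(a)=ua, u(b)=ub by finite
    differences; returns grid and solution including boundary points."""
    x = np.linspace(a, b, n + 1)
    h = x[1] - x[0]
    A = (np.diag(2 * np.ones(n - 1)) - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h**2     # tridiagonal Laplacian
    rhs = f * np.ones(n - 1)
    rhs[0] += ua / h**2                            # fold in boundary data
    rhs[-1] += ub / h**2
    u = np.zeros(n + 1)
    u[0], u[-1] = ua, ub
    u[1:-1] = np.linalg.solve(A, rhs)
    return x, u

def flux_jump(g, n=40):
    """Mismatch of one-sided fluxes at the interface x = 0.5 when both
    subdomains use interface value g (the control variable)."""
    x1, u1 = poisson_dirichlet(n, 0.0, 0.5, 0.0, g)
    x2, u2 = poisson_dirichlet(n, 0.5, 1.0, g, 0.0)
    h = x1[1] - x1[0]
    return (u1[-1] - u1[-2]) / h - (u2[1] - u2[0]) / h

# Gauss-Newton on the scalar control: the problem is linear, so a step
# with a finite-difference Jacobian essentially lands on the optimum.
g = 0.0
for _ in range(3):
    r = flux_jump(g)
    J = (flux_jump(g + 1e-6) - r) / 1e-6
    g -= r / J

# Exact solution of -u'' = 1 with zero boundary data is u = x(1-x)/2,
# so the optimal interface value should be close to u(0.5) = 0.125.
print(g)
```

In the paper's setting the control lives on all sub-domain interfaces, the subdomain solves are (reduced-order) Navier-Stokes solves, and the Gauss-Newton/SQP iteration acts on the resulting nonlinear constrained problem.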

In this paper, we derive a kinetic description of swarming particle dynamics in an interacting multi-agent system featuring emerging leaders and followers. Agents are classically characterized by their position and velocity, plus a continuous parameter quantifying their degree of leadership. The microscopic processes ruling the change of velocity and degree of leadership are independent, non-conservative and non-local in the physical space, so as to account for long-range interactions. From the kinetic description, we then obtain a macroscopic model under a hydrodynamic limit reminiscent of that used to tackle the hydrodynamics of weakly dissipative granular gases, thus relying in particular on a regime of small non-conservative and short-range interactions. Numerical simulations in one- and two-dimensional domains show that the limiting macroscopic model is consistent with the original particle dynamics and furthermore can reproduce classical emerging patterns typically observed in swarms.
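A minimal particle-level sketch of leader/follower alignment: a Cucker-Smale-type rule in which an agent's weight in the interactions grows with its (here static) degree of leadership. The kernel and all parameters are invented, and the sketch omits the non-conservative leadership dynamics of the paper's kinetic model:

```python
import numpy as np

rng = np.random.default_rng(6)
N, dt, steps = 50, 0.05, 400
x = rng.uniform(0.0, 1.0, (N, 2))      # positions
v = rng.standard_normal((N, 2))        # velocities
w = rng.uniform(0.0, 1.0, N)           # degree of leadership in [0, 1]

for _ in range(steps):
    dx = x[None, :, :] - x[:, None, :]
    dist2 = (dx ** 2).sum(axis=-1)
    phi = (1.0 + dist2) ** -0.25       # long-range communication kernel
    infl = phi * w[None, :]            # leaders weigh more in alignment
    dv = (infl[:, :, None] * (v[None, :, :] - v[:, None, :])).sum(axis=1) / N
    v += dt * dv                       # align velocities
    x += dt * v                        # transport positions

spread = np.linalg.norm(v - v.mean(axis=0), axis=1).max()
print(spread)                          # velocity consensus emerges
```

The hydrodynamic limit in the paper replaces such a particle system with macroscopic equations for density, momentum, and mean leadership, valid when non-conservative effects are small.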

Using a novel modeling approach based on the so-called environmental stress level (ESL), we develop a mathematical model to describe systematically the collective influence of oxygen concentration and stiffness of the extracellular matrix on the response of tumor cells to a combined chemotherapeutic treatment. We perform Bayesian calibrations of the resulting model using particle filters, with in vitro experimental data for different hepatocellular carcinoma cell lines. The calibration results support the validity of our mathematical model. Furthermore, they shed light on individual as well as synergistic effects of hypoxia and tissue stiffness on tumor cell dynamics under chemotherapy.
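Particle-filter calibration of a single parameter can be sketched on a toy logistic-growth model standing in for the ESL-based tumor model; the data, prior, noise levels, and jitter below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "tumor size" observations from a logistic growth model
# with true growth rate r_true (a stand-in for the ESL model).
r_true, K, dt, obs_noise = 0.5, 10.0, 0.5, 0.3

def step(x, r):
    return x + dt * r * x * (1 - x / K)

x, data = 1.0, []
for _ in range(30):
    x = step(x, r_true)
    data.append(x + obs_noise * rng.standard_normal())

# Bootstrap particle filter: particles carry the state together with a
# static growth-rate parameter to be calibrated.
n = 2000
r_particles = rng.uniform(0.1, 1.0, n)   # prior over the growth rate
x_particles = np.full(n, 1.0)
for y in data:
    x_particles = step(x_particles, r_particles)   # propagate
    w = np.exp(-0.5 * ((y - x_particles) / obs_noise) ** 2)
    w /= w.sum()                                   # normalised weights
    idx = rng.choice(n, size=n, p=w)               # multinomial resampling
    x_particles, r_particles = x_particles[idx], r_particles[idx]
    r_particles += 0.01 * rng.standard_normal(n)   # jitter against collapse

r_hat = r_particles.mean()
print(r_hat)   # posterior mean concentrates near the true rate
```

The calibrations in the paper work the same way in principle, but over the multi-parameter ESL model and with in vitro data in place of the synthetic series.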

Many of the tools available for robot learning were designed for Euclidean data. However, many applications in robotics involve manifold-valued data. A common example is orientation; this can be represented as a 3-by-3 rotation matrix or a quaternion, the spaces of which are non-Euclidean manifolds. In robot learning, manifold-valued data are often handled by relating the manifold to a suitable Euclidean space, either by embedding the manifold or by projecting the data onto one or several tangent spaces. These approaches can result in poor predictive accuracy and convoluted algorithms. In this paper, we propose an "intrinsic" approach to regression that works directly within the manifold. It involves taking a suitable probability distribution on the manifold, letting its parameter be a function of a predictor variable, such as time, then estimating that function non-parametrically via a "local likelihood" method that incorporates a kernel. We name the method kernelised likelihood estimation. The approach is conceptually simple and generally applicable to different manifolds. We implement it for three different types of manifold-valued data that commonly appear in robotics applications. The results of these experiments show better predictive accuracy than projection-based algorithms.
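On the circle $S^1$ (planar orientations), the kernelised local-likelihood idea reduces to a kernel-weighted MLE of the von Mises mean direction, which is simply a weighted circular mean — computed intrinsically, with no tangent-space projection of the data. The drifting signal and bandwidth below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Orientations on S^1: the true mean direction drifts linearly with
# time; observations carry von Mises-like angular noise.
t = np.linspace(0.0, 1.0, 200)
theta_true = 0.5 + 2.0 * t
theta_obs = theta_true + 0.2 * rng.standard_normal(t.size)

def local_circular_mean(t_query, t, theta, bw=0.1):
    """Kernel-weighted MLE of the von Mises mean direction at t_query.

    The weighted circular mean is the arctan2 of weighted sine/cosine
    sums -- the local-likelihood estimate, intrinsic to the circle.
    """
    w = np.exp(-0.5 * ((t - t_query) / bw) ** 2)   # Gaussian kernel in time
    s = (w * np.sin(theta)).sum()
    c = (w * np.cos(theta)).sum()
    return np.arctan2(s, c)

est = np.array([local_circular_mean(tq, t, theta_obs) for tq in t])
err = np.abs(np.angle(np.exp(1j * (est - theta_true))))  # geodesic error
print(err.mean())
```

For rotation matrices or quaternions the same recipe applies with a matrix Fisher or Bingham-type distribution in place of the von Mises, at the cost of a less closed-form weighted MLE.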

In this paper we propose a novel and general approach to design semi-implicit methods for the simulation of fluid-structure interaction problems in a fully Eulerian framework. In order to properly present the new method, we focus on the two-dimensional version of the general model developed to describe full membrane elasticity. The approach consists of treating the elastic source term by writing an evolution equation on the structure stress tensor, even if it is nonlinear. Then, it is possible to show that its semi-implicit discretization allows us to add to the linear system of the Navier-Stokes equations some consistent dissipation terms that depend on the local deformation and stiffness of the membrane. Due to the linearly implicit discretization, the approach does not need iterative solvers and can be easily applied to any Eulerian framework for fluid-structure interaction. Its stability properties are studied by performing a Von Neumann analysis on a simplified one-dimensional model, proving that, thanks to the additional dissipation, the discretized coupled system is unconditionally stable. Several numerical experiments are shown for two-dimensional problems by comparing the new method to the original explicit scheme and studying the effect of structure stiffness and mesh refinement on the membrane dynamics. The newly designed scheme is able to relax the time step restrictions that affect the explicit method and to crucially reduce the computational cost, especially when very stiff membranes are under consideration.
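The stabilising effect of treating a stiff source term implicitly can already be seen on a scalar toy problem — a drastic simplification of the membrane stress equation, not the paper's scheme: backward Euler remains stable at time steps where forward Euler blows up.

```python
def advance(k, dt, n_steps, implicit):
    """Integrate u' = -k*u with forward (explicit) or backward
    (implicit) Euler: a scalar toy for a stiff elastic source term."""
    u = 1.0
    for _ in range(n_steps):
        u = u / (1.0 + k * dt) if implicit else u * (1.0 - k * dt)
    return u

k, dt = 100.0, 0.1                           # stiffness, step far above 2/k
u_exp = advance(k, dt, 50, implicit=False)   # explicit: amplifies each step
u_imp = advance(k, dt, 50, implicit=True)    # implicit: decays, like the ODE
print(abs(u_exp), abs(u_imp))
```

This is the scalar analogue of the Von Neumann argument in the paper: the implicit treatment contributes dissipation proportional to the stiffness, so stability no longer constrains the admissible time step.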

In this paper we discuss a deterministic form of ensemble Kalman inversion as a regularization method for linear inverse problems. By interpreting ensemble Kalman inversion as a low-rank approximation of Tikhonov regularization, we are able to introduce a new sampling scheme based on the Nystr\"om method that improves practical performance. Furthermore, we formulate an adaptive version of ensemble Kalman inversion where the sample size is coupled with the regularization parameter. We prove that the proposed scheme yields an order optimal regularization method under standard assumptions if the discrepancy principle is used as a stopping criterion. The paper concludes with a numerical comparison of the discussed methods for an inverse problem of the Radon transform.
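A hedged sketch of ensemble Kalman inversion with a discrepancy-principle stopping rule, on a small random linear problem standing in for the Radon transform. Note this is the basic stochastic (perturbed-observation) variant, not the paper's deterministic formulation, and the Nyström sampling and adaptive sample-size coupling are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(4)

# Linear inverse problem y = A x + noise; A is a random stand-in
# for a discretised Radon transform.
d, m = 30, 20
A = rng.standard_normal((d, m)) / np.sqrt(m)
x_true = np.sin(np.linspace(0, 3, m))
noise_lvl = 0.01
y = A @ x_true + noise_lvl * rng.standard_normal(d)

# Ensemble Kalman inversion: repeatedly update an ensemble drawn from
# the prior; stop once ||A x_mean - y|| <= tau * delta (discrepancy).
J = 200                                    # ensemble size
X = rng.standard_normal((m, J))            # prior ensemble, N(0, I)
tau, delta = 1.5, noise_lvl * np.sqrt(d)
for _ in range(50):
    x_mean = X.mean(axis=1, keepdims=True)
    if np.linalg.norm(A @ x_mean[:, 0] - y) <= tau * delta:
        break                              # discrepancy principle met
    dX = X - x_mean                        # ensemble anomalies
    dY = A @ dX
    C_xy = dX @ dY.T / (J - 1)             # cross-covariance
    C_yy = dY @ dY.T / (J - 1)
    K = C_xy @ np.linalg.inv(C_yy + noise_lvl**2 * np.eye(d))  # Kalman gain
    Y_pert = y[:, None] + noise_lvl * rng.standard_normal((d, J))
    X = X + K @ (Y_pert - A @ X)           # update every ensemble member

x_hat = X.mean(axis=1)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(rel_err)
```

The low-rank-Tikhonov view in the paper corresponds to the fact that the update only acts within the ensemble span, which is exactly where a Nyström-type sampling scheme can improve the approximation.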

The main goal of this work is to improve the efficiency of training binary neural networks, which are low-latency, low-energy networks. The main contribution is the proposal of two solutions, comprising topology changes and training strategies, that allow the network to achieve near state-of-the-art performance while training efficiently. Training time and the memory required during training are the two factors that determine training efficiency.
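A standard ingredient behind trainable binary networks is the straight-through estimator (STE): binarize the weights in the forward pass, but let gradients flow through as if binarization were the (clipped) identity. Below is a minimal sketch on a toy linear layer — all shapes and data are invented, and the paper's specific topology changes and training strategies are not reproduced:

```python
import numpy as np

def forward(w, x):
    """Forward pass with binarized weights in {-1, +1}."""
    return x @ np.sign(w)

def ste_grad(w, grad_wb):
    """Straight-through estimator: pass the gradient w.r.t. the binary
    weights through to the latent real weights, clipped outside [-1, 1]."""
    return grad_wb * (np.abs(w) <= 1.0)

rng = np.random.default_rng(5)
w_target = np.sign(rng.standard_normal(8))   # binary "teacher" weights
w = 0.1 * rng.standard_normal(8)             # latent real-valued weights
for _ in range(200):
    x = rng.standard_normal((16, 8))
    y = x @ w_target                         # targets from the teacher
    pred = forward(w, x)
    grad_wb = x.T @ (pred - y) / 16          # dL/d(binary weights), L = MSE/2
    w -= 0.1 * ste_grad(w, grad_wb)
    w = np.clip(w, -1.0, 1.0)                # keep latent weights bounded

acc = np.mean(np.sign(w) == w_target)
print(acc)                                   # fraction of recovered signs
```

The latent real-valued copy of the weights is precisely the training-memory overhead that efficiency-oriented methods, such as those in this work, aim to reduce.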
