
Extended Dynamic Mode Decomposition (EDMD) is a data-driven tool for forecasting and model reduction of dynamics, which has been extensively taken up in the physical sciences. While the method is conceptually simple, in deterministic chaos it is unclear what its properties are or even what it converges to. In particular, it is not clear how EDMD's least-squares approximation treats the classes of regular functions needed to make sense of chaotic dynamics. In this paper we develop a general, rigorous theory of EDMD on the simplest examples of chaotic maps: analytic expanding maps of the circle. Proving a new result in the theory of orthogonal polynomials on the unit circle (OPUC), we show that in the infinite-data limit, the least-squares projection is exponentially efficient for polynomial observable dictionaries. As a result, we show that the forecasts and Koopman spectral data produced using EDMD in this setting converge to the physically meaningful limits, at an exponential rate. This demonstrates that with only a relatively small polynomial dictionary, EDMD can be very effective, even when the sampling measure is not uniform. Furthermore, our OPUC result suggests that data-based least-squares projections may be a very effective approximation strategy.
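
To fix ideas, the EDMD procedure analyzed here amounts, for a polynomial dictionary on the circle, to a plain least-squares regression between dictionary snapshots. The sketch below is a minimal illustration only: the doubling map $x \mapsto 2x \bmod 2\pi$, the dictionary truncation, and all function names are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

# Minimal EDMD sketch on the circle; the doubling map and the dictionary
# truncation degree N are illustrative choices, not taken from the paper.
def edmd_circle(theta, N):
    """theta: samples in [0, 2*pi); N: maximal dictionary degree."""
    z = np.exp(1j * theta)        # lift samples to the unit circle
    z_next = np.exp(2j * theta)   # doubling map T(x) = 2x (mod 2*pi)
    ks = np.arange(-N, N + 1)
    Psi = z[:, None] ** ks[None, :]            # dictionary at the samples
    Psi_next = z_next[:, None] ** ks[None, :]  # dictionary at mapped samples
    # Least-squares Koopman matrix K solving Psi @ K ~ Psi_next
    K, *_ = np.linalg.lstsq(Psi, Psi_next, rcond=None)
    return np.linalg.eigvals(K)

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 5000)  # i.i.d. draws from a sampling measure
print(np.sort(np.abs(edmd_circle(theta, 8)))[::-1][:5])  # leading eigenvalue moduli
```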

Related Content

Allowing for space- and time-dependence of mass in Klein--Gordon equations resolves the problem of negative probability density and of violation of Lorentz covariance of interaction in quantum mechanics. Moreover, it extends their applicability to the domain of quantum cosmology, where the variation in mass may be accompanied by high oscillations. In this paper we propose a third-order exponential integrator, whose main idea lies in embedding the oscillations triggered by the possibly highly oscillatory component intrinsically into the numerical discretisation. While high oscillation typically requires appropriately small time steps, an application of Filon methods allows implementation with large time steps even in the presence of very high oscillation. This greatly improves the efficiency of the time-stepping algorithm. The proof of convergence and of its rate is nontrivial and requires an alternative representation of the equation under consideration. We derive careful bounds on the growth of the global error of the time discretisation and prove that, contrary to standard intuition, the error of time integration does not grow as the frequency of oscillations increases. Several numerical simulations are presented to confirm the theoretical investigations and the robustness of the method in all oscillatory regimes.
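
The key mechanism, integrating the oscillatory kernel exactly against a polynomial interpolant of the slowly varying factor, is visible already in the simplest Filon rule. The sketch below is a generic linear Filon quadrature for $\int_0^h f(t)e^{i\omega t}\,dt$, not the paper's third-order integrator; the function names and the interpolation degree are assumptions.

```python
import numpy as np

def filon_linear(f, h, omega):
    """Filon-type rule for I = int_0^h f(t) * exp(1j*omega*t) dt.
    f is interpolated linearly and the oscillatory moments are integrated
    exactly, so accuracy does not degrade as omega grows."""
    f0, f1 = f(0.0), f(h)
    if abs(omega) * h < 1e-8:               # non-oscillatory fallback: trapezoid
        return 0.5 * h * (f0 + f1)
    iw = 1j * omega
    e = np.exp(iw * h)
    m0 = (e - 1.0) / iw                     # int_0^h exp(iw*t) dt
    m1 = h * e / iw - (e - 1.0) / iw**2     # int_0^h t * exp(iw*t) dt
    return f0 * m0 + (f1 - f0) / h * m1

# The cost is independent of omega: only the resolution of f matters.
print(filon_linear(np.cos, 0.1, 1.0e6))
```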

We study the type of solutions to which stochastic gradient descent converges when used to train a single hidden-layer multivariate ReLU network with the quadratic loss. Our results are based on a dynamical stability analysis. In the univariate case, it was shown that linearly stable minima correspond to network functions (predictors) whose second derivative has a bounded weighted $L^1$ norm. Notably, the bound gets smaller as the step size increases, implying that training with a large step size leads to `smoother' predictors. Here we generalize this result to the multivariate case, showing that a similar result applies to the Laplacian of the predictor. We demonstrate the tightness of our bound on the MNIST dataset and show that it accurately captures the behavior of the solutions as a function of the step size. Additionally, we prove a depth separation result on the approximation power of ReLU networks corresponding to stable minima of the loss. Specifically, although shallow ReLU networks are universal approximators, we prove that stable shallow networks are not: there is a function that cannot be well approximated by stable single hidden-layer ReLU networks trained with a non-vanishing step size, while the same function can be realized as a stable two hidden-layer ReLU network. Finally, we prove that if a function is sufficiently smooth (in a Sobolev sense), then it can be approximated arbitrarily well using shallow ReLU networks that correspond to stable solutions of gradient descent.
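
The dynamical-stability viewpoint is easiest to see in one dimension: plain gradient descent with step size $\eta$ escapes any minimum whose curvature exceeds $2/\eta$, which is the basic mechanism behind step-size-dependent implicit bias. The toy sketch below illustrates only this classical stability threshold, not the paper's SGD analysis or its weighted-norm bound.

```python
import numpy as np

def gd_is_stable(curvature, eta, steps=200, x0=1e-3):
    """Run GD on the 1-D quadratic loss L(x) = 0.5 * curvature * x**2 and
    report whether the iterates stay bounded near the minimum at 0."""
    x = x0
    for _ in range(steps):
        x -= eta * curvature * x   # GD update: x <- x - eta * L'(x)
    return abs(x) <= abs(x0)

eta = 0.1
for a in (5.0, 19.0, 21.0):        # the stability threshold is 2/eta = 20
    print(f"curvature {a:5.1f}: stable = {gd_is_stable(a, eta)}")
```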

Randomized controlled trials (RCTs) are the gold standard for causal inference, but they are often powered only for average effects, making estimation of heterogeneous treatment effects (HTEs) challenging. Conversely, large-scale observational studies (OS) offer a wealth of data but suffer from confounding bias. Our paper presents a novel framework that leverages OS data to enhance the efficiency of estimating conditional average treatment effects (CATEs) from RCTs while mitigating common biases. We propose an innovative approach to combining RCT and OS data that goes beyond the traditional practice of borrowing only control arms from external sources. The framework relaxes the typical assumption of CATE invariance across populations, acknowledging the often unaccounted-for systematic differences between RCT and OS participants. We demonstrate this through the special case of a linear outcome model, where the CATE is sparsely different between the two populations. The core of our framework relies on learning potential outcome means from OS data and using them as a nuisance parameter in CATE estimation from RCT data. We further illustrate through experiments that using OS findings reduces the variance of the CATE estimated from RCTs and can decrease the required sample size for detecting HTEs.
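
The core two-stage idea, learning arm-wise outcome means on the OS and then fitting only the corrections on the RCT, can be sketched in the linear special case. Everything below is illustrative: the plain OLS stages stand in for the paper's nuisance and sparse-correction estimators, and the synthetic data are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def ols(X, y):
    """Least squares with an intercept column (illustrative stand-in)."""
    return np.linalg.lstsq(np.c_[np.ones(len(X)), X], y, rcond=None)[0]

def predict(beta, X):
    return np.c_[np.ones(len(X)), X] @ beta

beta_true = np.array([1.0, -1.0, 0.5])

# Large observational study with a confounded treatment assignment.
n = 4000
X_os = rng.normal(size=(n, 3))
A_os = (X_os[:, 0] + rng.normal(size=n) > 0).astype(int)
y_os = X_os @ beta_true + A_os * (1.0 + X_os[:, 0]) + rng.normal(size=n)

# Stage 1: arm-wise potential-outcome means learned from the OS.
m0 = ols(X_os[A_os == 0], y_os[A_os == 0])
m1 = ols(X_os[A_os == 1], y_os[A_os == 1])

# Small RCT with randomized treatment.
m = 300
X = rng.normal(size=(m, 3))
A = rng.binomial(1, 0.5, m)
y = X @ beta_true + A * (1.0 + X[:, 0]) + rng.normal(size=m)

# Stage 2: fit only the (ideally small) per-arm corrections on the RCT.
d0 = ols(X[A == 0], y[A == 0] - predict(m0, X[A == 0]))
d1 = ols(X[A == 1], y[A == 1] - predict(m1, X[A == 1]))

def cate(x):
    """CATE estimate: corrected treated mean minus corrected control mean."""
    return (predict(m1, x) + predict(d1, x)) - (predict(m0, x) + predict(d0, x))

print(cate(np.zeros((1, 3))))   # the true CATE at x = 0 is 1.0
```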

Evolutionary differential equation discovery has proved to be a tool for obtaining equations with fewer a priori assumptions than conventional approaches, such as sparse symbolic regression over a complete library of possible terms. The equation discovery field contains two independent directions. The first is purely mathematical and concerns differentiation, the object of optimization, and its relation to functional spaces, among other questions. The second is dedicated to the statement of the optimization problem itself. Both topics are worth investigating in order to improve the algorithm's ability to handle experimental data in a more automated way, without significant pre-processing or a priori knowledge of their nature. In this paper, we compare single-objective optimization, which considers only the discrepancy between selected terms in the equation, with multi-objective optimization, which additionally takes into account the complexity of the obtained equation. The proposed comparison approach is demonstrated on classical model examples: the Burgers equation, the wave equation, and the Korteweg--de Vries equation.
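
The distinction under study is between selecting candidate equations by discrepancy alone and keeping a Pareto front over (discrepancy, complexity) pairs. A minimal, generic sketch of the two selection rules follows; it is not the authors' evolutionary operators, and the candidate encoding is a hypothetical example.

```python
def dominates(a, b):
    """Pareto dominance for objective tuples (lower is better)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def single_objective_best(population, discrepancy):
    """Single-objective selection: discrepancy alone decides."""
    return min(population, key=discrepancy)

def pareto_front(population, discrepancy, complexity):
    """Multi-objective selection: keep non-dominated candidates."""
    objs = {id(p): (discrepancy(p), complexity(p)) for p in population}
    return [p for p in population
            if not any(dominates(objs[id(q)], objs[id(p)]) for q in population)]

# Hypothetical candidates scored by residual norm and number of terms.
pop = [{"terms": 2, "res": 0.30}, {"terms": 5, "res": 0.28}, {"terms": 3, "res": 0.05}]
print(pareto_front(pop, lambda p: p["res"], lambda p: p["terms"]))
```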

We study cut finite element discretizations of a Darcy interface problem based on the mixed finite element pairs $\textbf{RT}_0\times Q_0$, $\textbf{BDM}_1\times Q_0$, and $\textbf{RT}_1\times Q_1$. Here $Q_k$ is the space of discontinuous polynomial functions of degree $k$, $\textbf{RT}_{k}$ is the Raviart-Thomas space, and $\textbf{BDM}_k$ is the Brezzi-Douglas-Marini space. We show that the standard ghost penalty stabilization, often added in the weak forms of cut finite element methods for stability and control of the condition number of the resulting linear system matrix, destroys the divergence-free property of the considered element pairs. Therefore, we propose two corrections to the standard stabilization strategy: using macro-elements and new stabilization terms for the pressure. By decomposing the computational mesh into macro-elements and applying ghost penalty terms only on interior edges of macro-elements, stabilization is active only where needed. By modifying the standard stabilization terms for the pressure we recover the optimal approximation of the divergence without losing control of the condition number of the linear system matrix. We derive a priori error estimates for the proposed unfitted finite element discretization based on $\textbf{RT}_k\times Q_k$, $k\geq 0$. Numerical experiments indicate that with the new method we have 1) optimal rates of convergence of the approximate velocity and pressure; 2) well-posed linear systems where the condition number of the system matrix scales as it does for fitted finite element discretizations; 3) optimal rates of convergence of the approximate divergence with pointwise divergence-free approximations of solenoidal velocity fields. All three properties hold independently of how the interface is positioned relative to the computational mesh.
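
For reference, the standard ghost-penalty term whose interplay with the divergence is at issue has, for the velocity, the generic textbook form

$$ g_h(u_h, v_h) \;=\; \sum_{F \in \mathcal{F}_g} \sum_{j=0}^{k} \gamma_j \, h^{2j+1} \big( [\partial_n^j u_h], [\partial_n^j v_h] \big)_F, $$

where $\mathcal{F}_g$ is the set of stabilized facets, $[\cdot]$ denotes the jump across a facet, $\partial_n^j$ the $j$-th normal derivative, and $\gamma_j > 0$ are penalty parameters. In the macro-element strategy proposed here, $\mathcal{F}_g$ contains only facets interior to macro-elements; the exact pressure terms of the paper are not reproduced in this generic form.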

Singularly perturbed problems present an inherent difficulty due to the presence of a thin boundary layer in their solutions. To overcome this difficulty, we propose using deep operator networks (DeepONets), a method previously shown to be effective in approximating nonlinear operators between infinite-dimensional Banach spaces. In this paper, we demonstrate for the first time the application of DeepONets to one-dimensional singularly perturbed problems, achieving promising results that suggest their potential as a robust tool for solving this class of problems. We consider the convergence rate of the approximation error incurred by the operator networks in approximating the solution operator, and examine the generalization gap and empirical risk, all of which are shown to converge uniformly with respect to the perturbation parameter. By utilizing Shishkin mesh points as the locations at which the loss function is evaluated, we conduct several numerical experiments that provide further support for the effectiveness of operator networks in capturing the singular boundary layer behavior.
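
A Shishkin mesh is piecewise uniform with a problem-dependent transition point that places half the points inside the layer. A minimal sketch, assuming a single boundary layer at $x=1$ of width $O(\varepsilon)$ (the constant $\sigma$ and the layer location depend on the problem at hand):

```python
import numpy as np

def shishkin_mesh(N, eps, sigma=2.0):
    """Piecewise-uniform Shishkin mesh on [0, 1] with N subintervals (N even),
    refined near a boundary layer at x = 1."""
    tau = min(0.5, sigma * eps * np.log(N))           # transition point
    coarse = np.linspace(0.0, 1.0 - tau, N // 2 + 1)  # uniform outside the layer
    fine = np.linspace(1.0 - tau, 1.0, N // 2 + 1)    # uniform inside the layer
    return np.concatenate([coarse, fine[1:]])

pts = shishkin_mesh(64, 1e-4)   # e.g. used as loss-evaluation locations
```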

Topology optimization is a powerful tool utilized in various fields for structural design. However, its application has primarily been restricted to static or passively moving objects, mainly focusing on hard materials with limited deformations and contact capabilities. Designing soft and actively moving objects, such as soft robots equipped with actuators, poses challenges due to simulating dynamics problems involving large deformations and intricate contact interactions. Moreover, the optimal structure depends on the object's motion, necessitating a simultaneous design approach. To address these challenges, we propose "4D topology optimization," an extension of density-based topology optimization that incorporates the time dimension. This enables the simultaneous optimization of both the structure and self-actuation of soft bodies for specific dynamic tasks. Our method utilizes multi-indexed and hierarchically organized density variables distributed over the spatiotemporal design domain, representing the material layout, actuator layout, and time-varying actuation. These variables are efficiently optimized using gradient-based methods. Forward and backward simulations of soft bodies are performed using the material point method, a Lagrangian-Eulerian hybrid approach, implemented in a recent automatic differentiation framework. We present several numerical examples of self-actuating soft body designs aimed at achieving locomotion, posture control, and rotation tasks. The results demonstrate the effectiveness of our method in successfully designing soft bodies with complex structures and biomimetic movements, benefiting from its high degree of design freedom.
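
The hierarchical parameterization can be pictured as nested density fields: actuators can exist only where material exists, and actuation varies over time only where actuators exist. The sketch below shows one plausible such parameterization with smooth projections; the variable names, array shapes, and projection functions are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_cells, n_steps = 1024, 100
rng = np.random.default_rng(0)
phi_mat = rng.normal(size=n_cells)               # raw material design variables
phi_act = rng.normal(size=n_cells)               # raw actuator design variables
phi_time = rng.normal(size=(n_cells, n_steps))   # raw time-varying actuation

rho_mat = sigmoid(phi_mat)                       # material density in [0, 1]
rho_act = rho_mat * sigmoid(phi_act)             # actuators only inside material
actuation = rho_act[:, None] * np.tanh(phi_time) # bounded actuation signal
# All three fields are differentiable in the raw variables, so they can be
# optimized jointly with gradients from a differentiable MPM simulator.
```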

This paper addresses the scheduling problem on two identical parallel machines with a single server in charge of the loading and unloading operations of jobs. Each job has to be loaded by the server before being processed on one of the two machines and unloaded by the same server after its processing. No delay is allowed between loading and processing, or between processing and unloading. The objective is the minimization of the makespan. This problem, referred to as $P2,S1|s_j,t_j|C_{\max}$, generalizes the classical parallel machine scheduling problem with a single server which performs only the loading (i.e., setup) operation of each job. For this NP-hard problem, no solution algorithm has been proposed in the literature. Therefore, we present two mixed-integer linear programming (MILP) formulations: one with completion-time variables along with two valid inequalities, and one with time-indexed variables. In addition, we propose some polynomial-time solvable cases and a tight theoretical lower bound. We also show that the minimization of the makespan is equivalent to the minimization of the total idle time on the machines. To solve large-sized instances of the problem, an efficient General Variable Neighborhood Search (GVNS) metaheuristic with two mechanisms for finding an initial solution is designed. The GVNS is evaluated by comparing its performance with the results provided by the MILPs and another metaheuristic. The results show that the average percentage deviation of GVNS from the theoretical lower bound is within 0.642%. Some managerial insights are presented and our results are compared with the related literature.
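
To fix ideas, a generic GVNS skeleton: shake in increasingly distant neighborhoods, locally improve, and restart the neighborhood index on improvement. This is the textbook pattern only; the paper's tailored neighborhoods, initial-solution mechanisms, and makespan evaluation for $P2,S1|s_j,t_j|C_{\max}$ are not reproduced here, and the toy problem below is a hypothetical stand-in.

```python
import random

def gvns(x0, shake_moves, local_move, cost, max_iter=100, seed=0):
    """Generic GVNS skeleton. shake_moves[k](x, rng) returns a random neighbor
    of x in the k-th neighborhood; local_move(x) returns an improving neighbor
    of x or None when x is locally optimal."""
    rng = random.Random(seed)
    best, best_cost = x0, cost(x0)
    for _ in range(max_iter):
        k = 0
        while k < len(shake_moves):
            x = shake_moves[k](best, rng)   # shaking
            y = local_move(x)               # local descent
            while y is not None:
                x, y = y, local_move(y)
            if cost(x) < best_cost:         # improvement: restart neighborhoods
                best, best_cost = x, cost(x)
                k = 0
            else:                           # otherwise: try the next neighborhood
                k += 1
    return best, best_cost

# Toy usage: minimize (x - 42)^2 over the integers with two shaking radii.
shakes = [lambda x, r: x + r.randint(-1, 1), lambda x, r: x + r.randint(-10, 10)]
def improve(x):
    for step in (-1, 1):
        if (x + step - 42) ** 2 < (x - 42) ** 2:
            return x + step
    return None
print(gvns(0, shakes, improve, lambda x: (x - 42) ** 2))
```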

Fast development in science and technology has driven the need for proper statistical tools to capture special data features such as abrupt changes or sharp contrasts. Many applications in data science seek spatiotemporal reconstruction from a sequence of time-dependent objects with discontinuities or singularities, e.g. dynamic computerized tomography (CT) images with edges. Traditional methods based on Gaussian processes (GP) may not provide satisfactory solutions since they tend to offer over-smooth prior candidates. Recently, the Besov process (BP), defined by wavelet expansions with random coefficients, has been proposed as a more appropriate prior for this type of Bayesian inverse problem. While BP outperforms GP in imaging analysis by producing edge-preserving reconstructions, it does not automatically incorporate the temporal correlation inherent in dynamically changing images. In this paper, we generalize BP to the spatiotemporal domain (STBP) by replacing the random coefficients in the series expansion with stochastic time functions following a Q-exponential process, which governs the temporal correlation strength. The mathematical and statistical properties of STBP are carefully studied. A white-noise representation of STBP is also proposed to facilitate point estimation through maximum a posteriori (MAP) estimation and uncertainty quantification (UQ) by posterior sampling. Two limited-angle CT reconstruction examples and a highly non-linear inverse problem involving the Navier-Stokes equation are used to demonstrate the advantage of the proposed STBP in preserving spatial features while accounting for temporal changes, compared with the classic STGP and a time-uncorrelated approach.
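
The building block, a wavelet series with heavy-tailed random coefficients whose scales decay across levels, is easy to sample. A minimal sketch, assuming Haar wavelets on $[0,1]$, Laplace coefficients (the $q=1$ case), and a generic level-decay exponent; in STBP these scalar draws become correlated stochastic time functions, which is not shown here.

```python
import numpy as np

def haar(j, k, x):
    """Haar mother wavelet dilated to level j and shifted by k, on [0, 1]."""
    t = (2.0**j) * x - k
    return (2.0**(j / 2)) * (((0.0 <= t) & (t < 0.5)).astype(float)
                             - ((0.5 <= t) & (t < 1.0)).astype(float))

def besov_sample(x, J, s=1.0, rng=None):
    """Draw one path u(x) = sum_{j<J} sum_k 2**(-j*s) * xi_jk * psi_jk(x)
    with i.i.d. Laplace coefficients xi (a sketch of a q = 1 Besov-type prior;
    the exact decay exponent depends on the smoothness convention)."""
    rng = rng or np.random.default_rng()
    u = np.zeros_like(x)
    for j in range(J):
        xi = rng.laplace(size=2**j)          # heavy-tailed coefficients
        for k in range(2**j):
            u += 2.0**(-j * s) * xi[k] * haar(j, k, x)
    return u

x = np.linspace(0.0, 1.0, 512)
path = besov_sample(x, J=7, s=1.0)           # piecewise structure, sharp jumps
```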

We consider the problem of variational Bayesian inference in a latent variable model where a (possibly complex) observed stochastic process is governed by the solution of a latent stochastic differential equation (SDE). Motivated by the challenges that arise when trying to learn an (almost arbitrary) latent neural SDE from large-scale data, such as efficient gradient computation, we take a step back and study a specific subclass instead. In our case, the SDE evolves on a homogeneous latent space and is induced by stochastic dynamics of the corresponding (matrix) Lie group. In learning problems, SDEs on the unit $n$-sphere are arguably the most relevant incarnation of this setup. Notably, for variational inference, the sphere not only facilitates using a truly uninformative prior SDE, but we also obtain a particularly simple and intuitive expression for the Kullback-Leibler divergence between the approximate posterior and prior process in the evidence lower bound. Experiments demonstrate that a latent SDE of the proposed type can be learned efficiently by means of an existing one-step geometric Euler-Maruyama scheme. Despite restricting ourselves to a less diverse class of SDEs, we achieve competitive or even state-of-the-art performance on various time series interpolation and classification benchmarks.
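
The one-step geometric Euler-Maruyama scheme acts on the state by a group element, so iterates remain exactly on the sphere. A minimal sketch with skew-symmetric (Lie-algebra-valued) drift and diffusion generators; the constant-generator parameterization below is an illustrative assumption, since in a learned model the generators would be state- and parameter-dependent.

```python
import numpy as np
from scipy.linalg import expm

def geometric_em_step(x, h, drift, diffusions, rng):
    """One geometric Euler-Maruyama step on the unit sphere S^{n-1}.
    drift and each element of diffusions are skew-symmetric n x n matrices,
    so expm(...) is orthogonal and the norm of x is preserved exactly."""
    Omega = h * drift
    for V in diffusions:
        Omega = Omega + np.sqrt(h) * rng.standard_normal() * V
    return expm(Omega) @ x

n, h = 3, 0.01
rng = np.random.default_rng(0)
A = rng.normal(size=(n, n)); drift = A - A.T           # skew-symmetric drift
B = rng.normal(size=(n, n)); diffusions = [B - B.T]    # one noise generator
x = np.array([1.0, 0.0, 0.0])
for _ in range(100):
    x = geometric_em_step(x, h, drift, diffusions, rng)
print(np.linalg.norm(x))   # stays 1 up to floating-point error
```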
