
We are interested in a fast solver for the Stokes equations, discretized with multi-patch Isogeometric Analysis. In recent years, several inf-sup stable discretizations for the Stokes problem have been proposed, but the analysis was often restricted to single-patch domains. We focus on one of the simplest approaches, the isogeometric Taylor--Hood element. We show how stability results for single-patch domains can be carried over to multi-patch domains. While this is possible, the stability constant strongly depends on the shape of the geometry. We construct a Dual-Primal Isogeometric Tearing and Interconnecting (IETI-DP) solver that does not suffer from this effect. We give a convergence analysis and provide numerical tests.
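
For context, the discrete inf-sup (LBB) condition that an isogeometric Taylor--Hood velocity/pressure pair $(V_h, Q_h)$ has to satisfy is stated below; the exact norms and constants in the paper may differ, but the multi-patch question is precisely whether the constant $\beta$ can be chosen independently of the mesh size and of the patch geometry.

\[
\inf_{q_h \in Q_h \setminus \{0\}} \; \sup_{v_h \in V_h \setminus \{0\}}
\frac{\int_\Omega q_h \, \nabla \cdot v_h \, \mathrm{d}x}
     {\|v_h\|_{H^1(\Omega)} \, \|q_h\|_{L^2(\Omega)}}
\;\ge\; \beta > 0 .
\]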

Related content

In practical applications, data is used to make decisions in two steps: estimation and optimization. First, a machine learning model estimates parameters for a structural model relating decisions to outcomes. Second, a decision is chosen to optimize the structural model's predicted outcome as if its parameters were correctly estimated. Due to its flexibility and simple implementation, this ``estimate-then-optimize'' approach is often used for data-driven decision-making. Errors in the estimation step can lead the estimate-then-optimize approach to sub-optimal decisions that result in regret, i.e., a difference in value between the decision made and the best decision available with knowledge of the structural model's parameters. We provide a novel bound on this regret for smooth and unconstrained optimization problems. Using this bound, in settings where estimated parameters are linear transformations of sub-Gaussian random vectors, we provide a general procedure for experimental design to minimize the regret resulting from estimate-then-optimize. We demonstrate our approach on simple examples and a pandemic control application.
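
As a minimal illustration of the estimate-then-optimize pipeline and of the regret it incurs, the sketch below uses a toy smooth, unconstrained quadratic structural model; the function names and the model itself are illustrative assumptions, not the paper's setting.

\begin{verbatim}
import numpy as np

def optimize(theta):
    # Toy structural model f(x; theta) = -0.5 * x @ x + theta @ x (smooth, unconstrained);
    # its maximizer has the closed form x* = theta, so the optimization step is trivial here.
    return theta

def regret(theta_true, theta_hat):
    # Value of a decision x under the true parameters.
    value = lambda x: -0.5 * x @ x + theta_true @ x
    x_star = optimize(theta_true)   # best decision with known parameters
    x_hat = optimize(theta_hat)     # estimate-then-optimize decision
    return value(x_star) - value(x_hat)

rng = np.random.default_rng(0)
theta_true = np.array([1.0, -2.0, 0.5])
theta_hat = theta_true + 0.3 * rng.standard_normal(3)   # estimation error
print(regret(theta_true, theta_hat))                    # = 0.5 * ||theta_hat - theta_true||^2
\end{verbatim}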

In this paper we study the convergence rate of a finite volume approximation of the compressible Navier--Stokes--Fourier system. To this end we first show the local existence of a highly regular unique strong solution and analyse its global extension in time as long as the density and temperature remain bounded. We make the physically reasonable assumption that the numerical density and temperature are uniformly bounded from above and below. The relative energy provides us with an elegant way to derive a priori error estimates between finite volume solutions and the strong solution.
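
For orientation only: in the simpler barotropic case, the relative energy functional comparing a (numerical) solution $(\varrho, \mathbf{u})$ with the strong solution $(r, \mathbf{U})$ takes the form below, where $H$ denotes the pressure potential; the Navier--Stokes--Fourier setting used in the paper augments this with temperature and entropy contributions.

\[
\mathcal{E}\big(\varrho, \mathbf{u} \,\big|\, r, \mathbf{U}\big)
= \int_\Omega \Big( \tfrac12\, \varrho\, |\mathbf{u} - \mathbf{U}|^2
+ H(\varrho) - H'(r)\,(\varrho - r) - H(r) \Big) \, \mathrm{d}x .
\]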

Neyman (1923/1990) introduced the randomization model, which contains the notion of potential outcomes to define causal effects and a framework for large-sample inference based on the design of the experiment. However, the existing theory for this framework is far from complete, especially when the number of treatment levels diverges and the group sizes vary considerably across treatment levels. We provide a unified discussion of statistical inference under the randomization model with general group sizes across treatment levels. We formulate the estimator in terms of a linear permutational statistic and use results based on Stein's method to derive various Berry--Esseen bounds on the linear and quadratic functions of the estimator. These new Berry--Esseen bounds serve as the basis for design-based causal inference with possibly diverging treatment levels and diverging dimension of causal effects. We also fill an important gap by proposing novel variance estimators for experiments with possibly many treatment levels without replications. Equipped with the newly developed results, design-based causal inference in general settings becomes more convenient, with stronger theoretical guarantees.
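
A minimal sketch of the standard design-based (Neyman) estimator for a single pairwise contrast with replicated treatment levels is given below; the variable names are illustrative, and the paper's novel variance estimators for the unreplicated case are not reproduced here.

\begin{verbatim}
import numpy as np

def neyman_contrast(y, z, a, b):
    """Difference-in-means estimate of E[Y(a)] - E[Y(b)] and the usual conservative
    Neyman variance estimate, given observed outcomes y and treatment labels z."""
    ya, yb = y[z == a], y[z == b]
    tau_hat = ya.mean() - yb.mean()
    # Conservative: the (unidentifiable) effect-heterogeneity term of the true
    # randomization variance is dropped, so this upper-bounds the variance.
    var_hat = ya.var(ddof=1) / len(ya) + yb.var(ddof=1) / len(yb)
    return tau_hat, var_hat

rng = np.random.default_rng(1)
z = rng.integers(0, 5, size=500)           # five treatment levels, unequal group sizes
y = 0.4 * z + rng.standard_normal(500)     # observed outcomes
print(neyman_contrast(y, z, a=1, b=0))
\end{verbatim}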

Many organizations run thousands of randomized experiments, or A/B tests, to statistically quantify and detect the impact of product changes. Analysts use these results to inform decision-making around deployment and investment opportunities, making the time it takes to detect an effect a key priority. Often, these experiments are conducted on customers arriving sequentially; however, the analysis is only performed at the end of the study. This is undesirable because strong effects can be detected before the end of the study, which is especially relevant for risk mitigation when the treatment effect is negative. Alternatively, analysts could perform hypothesis tests more frequently and stop the experiment when the estimated causal effect is statistically significant; this practice is often called "peeking." Unfortunately, peeking invalidates the statistical guarantees and quickly leads to a substantial uncontrolled type-1 error. Our paper provides valid confidence sequences from the design-based perspective, where we condition on the full set of potential outcomes and perform inference on the obtained sample. Our design-based confidence sequence accommodates a wide variety of sequential experiments in an assumption-light manner. In particular, we build confidence sequences for 1) the average treatment effect for different individuals arriving sequentially, 2) the reward mean difference in multi-arm bandit settings with adaptive treatment assignments, 3) the contemporaneous treatment effect for a single time series experiment with potential carryover effects in the potential outcomes, and 4) the average contemporaneous treatment effect in panel experiments. We further provide a variance reduction technique that incorporates modeling assumptions and covariates to reduce the confidence sequence width in proportion to how well the analyst can predict the next outcome.
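
The inflation caused by peeking is easy to reproduce by simulation: under the null, applying a naive fixed-level z-test after every new observation rejects far more often than the nominal 5%. The snippet below is purely illustrative and does not use the paper's design-based confidence sequences.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n, reps = 1000, 2000
rejections = 0
for _ in range(reps):
    x = rng.standard_normal(n)                # null: zero treatment effect throughout
    t = np.arange(1, n + 1)
    zstats = np.cumsum(x) / np.sqrt(t)        # running z-statistic after each observation
    if np.any(np.abs(zstats[10:]) > 1.96):    # "peek" at every interim analysis
        rejections += 1
print("type-1 error with peeking:", rejections / reps)   # far above the nominal 0.05
\end{verbatim}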

When missing values occur in multi-view data, all features in a view are likely to be missing simultaneously. This leads to very large quantities of missing data which, especially when combined with high-dimensionality, makes the application of conditional imputation methods computationally infeasible. We introduce a new meta-learning imputation method based on stacked penalized logistic regression (StaPLR), which performs imputation in a dimension-reduced space. We evaluate the new imputation method with several imputation algorithms using simulations. The results show that meta-level imputation of missing values leads to good results at a much lower computational cost, and makes the use of advanced imputation algorithms such as missForest and predictive mean matching possible in settings where they would otherwise be computationally infeasible.
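
A rough sketch of the meta-level idea, assuming scikit-learn and entirely illustrative names (this is not the authors' StaPLR implementation): each observed view is summarized by an out-of-fold penalized logistic prediction, missing views are then imputed in that low-dimensional meta space, and the meta-learner is fit on the completed meta-features.

\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

def meta_level_impute(views, y):
    """views: list of (n, p_v) arrays, with all-NaN rows where a view is missing."""
    meta = np.full((len(y), len(views)), np.nan)
    for v, X in enumerate(views):
        obs = ~np.isnan(X).any(axis=1)                 # rows where this view is observed
        clf = LogisticRegression(penalty="l2", max_iter=1000)
        # Out-of-fold predicted probabilities become the meta-feature for this view.
        meta[obs, v] = cross_val_predict(clf, X[obs], y[obs], cv=5,
                                         method="predict_proba")[:, 1]
    # Impute the missing meta-features in the low-dimensional meta space.
    meta_imputed = IterativeImputer(random_state=0).fit_transform(meta)
    return LogisticRegression(max_iter=1000).fit(meta_imputed, y)

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
v1 = rng.standard_normal((200, 50)) + y[:, None]
v2 = rng.standard_normal((200, 80)) + y[:, None]
v2[rng.random(200) < 0.3] = np.nan        # 30% of rows are missing the whole second view
model = meta_level_impute([v1, v2], y)
\end{verbatim}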

Domain decomposition methods are a set of widely used tools for parallelization of partial differential equation solvers. Convergence is well studied for elliptic equations, but in the case of parabolic equations there are hardly any results for general Lipschitz domains in two or more dimensions. The aim of this work is therefore to construct a new framework for analyzing nonoverlapping domain decomposition methods for the heat equation in a space-time Lipschitz cylinder. The framework is based on a variational formulation, inspired by recent studies of space-time finite elements using Sobolev spaces with fractional time regularity. In this framework, the time-dependent Steklov--Poincar\'e operators are introduced and their essential properties are proven. We then derive the interface interpretations of the Dirichlet--Neumann, Neumann--Neumann and Robin--Robin methods and show that these methods are well defined. Finally, we prove convergence of the Robin--Robin method and introduce a modified method with stronger convergence properties.
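
In its classical elliptic form (the paper works with the time-dependent, space-time variational analogue), the Robin--Robin iteration for two nonoverlapping subdomains $\Omega_1, \Omega_2$ with interface $\Gamma$ alternates subdomain solves with Robin transmission conditions, where $s>0$ is the Robin parameter and $n_i$ the outward unit normal of $\Omega_i$:

\[
\partial_{n_1} u_1^{k+1} + s\, u_1^{k+1} = -\,\partial_{n_2} u_2^{k} + s\, u_2^{k}
\quad\text{on } \Gamma,
\qquad
\partial_{n_2} u_2^{k+1} + s\, u_2^{k+1} = -\,\partial_{n_1} u_1^{k+1} + s\, u_1^{k+1}
\quad\text{on } \Gamma,
\]

with the original equation and boundary conditions retained in the interior of each subdomain.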

Non-convex sampling is a key challenge in machine learning, central to non-convex optimization in deep learning as well as to approximate probabilistic inference. Despite its significance, many important theoretical challenges remain: existing guarantees (1) typically only hold for the averaged iterates rather than the more desirable last iterates, (2) lack convergence metrics that capture the scales of the variables, such as Wasserstein distances, and (3) mainly apply to elementary schemes such as stochastic gradient Langevin dynamics. In this paper, we develop a new framework that resolves the above issues by harnessing several tools from the theory of dynamical systems. Our key result is that, for a large class of state-of-the-art sampling schemes, their last-iterate convergence in Wasserstein distances can be reduced to the study of their continuous-time counterparts, which is much better understood. Coupled with standard assumptions of MCMC sampling, our theory immediately yields the last-iterate Wasserstein convergence of many advanced sampling schemes such as proximal, randomized mid-point, and Runge--Kutta integrators. Beyond existing methods, our framework also motivates more efficient schemes that enjoy the same rigorous guarantees.
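
For reference, the elementary scheme mentioned above, stochastic gradient Langevin dynamics, iterates as in the sketch below; the proximal, randomized mid-point, and Runge--Kutta integrators covered by the framework refine this basic update.

\begin{verbatim}
import numpy as np

def sgld(grad_U, x0, step, n_iter, rng):
    """(Stochastic gradient) Langevin dynamics targeting exp(-U):
    x_{k+1} = x_k - step * grad_U(x_k) + sqrt(2 * step) * N(0, I)."""
    x = np.array(x0, dtype=float)
    for _ in range(n_iter):
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
    return x   # the last iterate, the object of the Wasserstein guarantees above

# Example: a non-convex double-well potential U(x) = (x^2 - 1)^2.
grad_U = lambda x: 4.0 * x * (x ** 2 - 1.0)
print(sgld(grad_U, x0=[2.0], step=1e-2, n_iter=10_000, rng=np.random.default_rng(3)))
\end{verbatim}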

Deep learning (DL) is becoming indispensable to contemporary stochastic analysis and finance; nevertheless, it is still unclear how to design a principled DL framework for approximating infinite-dimensional causal operators. This paper proposes a "geometry-aware" solution to this open problem by introducing a DL model-design framework that takes suitable infinite-dimensional linear metric spaces as inputs and returns universal sequential DL models adapted to these linear geometries: we call these models Causal Neural Operators (CNOs). Our main result states that the models produced by our framework can uniformly approximate, on compact sets and across arbitrary finite-time horizons, H\"older or smooth trace class operators which causally map sequences between the given linear metric spaces. Consequently, we deduce that a single CNO can efficiently approximate the solution operator to a broad range of SDEs, allowing us to simultaneously approximate predictions from families of SDE models, which is vital to computationally robust finance. We also deduce that a CNO can approximate the solution operator to most stochastic filtering problems, implying that a single CNO can simultaneously filter a family of partially observed stochastic volatility models.
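
The toy snippet below only illustrates what causality means for a sequence-to-sequence map, namely that the output at step $t$ may depend on the inputs up to step $t$ only; it is not the Causal Neural Operator architecture, and all names and shapes are made up.

\begin{verbatim}
import numpy as np

def causal_map(xs, W_in, W_rec, W_out):
    """Toy causal sequence-to-sequence map: the t-th output depends only on
    x_1, ..., x_t, here through a recurrent hidden state."""
    h = np.zeros(W_rec.shape[0])
    ys = []
    for x in xs:                              # process the sequence in time order
        h = np.tanh(W_in @ x + W_rec @ h)     # the state summarizes the past inputs
        ys.append(W_out @ h)
    return np.stack(ys)

rng = np.random.default_rng(4)
W_in, W_rec, W_out = (rng.standard_normal((8, 3)),
                      rng.standard_normal((8, 8)),
                      rng.standard_normal((2, 8)))
print(causal_map(rng.standard_normal((5, 3)), W_in, W_rec, W_out).shape)   # (5, 2)
\end{verbatim}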

The exact computation of the matching distance for multi-parameter persistence modules is an active area of research in computational topology. A practical, exact computation of this distance would make multi-parameter persistent homology a viable option for data analysis. In this paper, we provide theoretical results for the computation of the matching distance in two dimensions, along with a geometric interpretation of the lines through parameter space realizing this distance. The crucial point of the proposed method is that it can be easily implemented.
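
Recall that, up to the choice of weight normalization, the matching distance between two $2$-parameter persistence modules $M$ and $N$ restricts them to lines of positive slope and takes a weighted supremum of one-parameter bottleneck distances,

\[
d_{\mathrm{match}}(M, N) \;=\; \sup_{\ell}\; w(\ell)\, d_B\big(M|_{\ell},\, N|_{\ell}\big),
\]

where $\ell$ ranges over the lines of positive slope in the parameter plane, $M|_{\ell}$ is the one-parameter restriction of $M$ to $\ell$, $d_B$ is the bottleneck distance, and $w(\ell)$ is a weight ensuring stability; the lines realizing the supremum are the ones given a geometric interpretation above.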

We study the problem of online learning in two-sided non-stationary matching markets, where the objective is to converge to a stable match. In particular, we consider the setting where one side of the market, the arms, has a fixed and known set of preferences over the other side, the players. While this problem has been studied when the players have fixed but unknown preferences, in this work we study how to learn when the preferences of the players are time-varying. We propose the {\it Restart Competing Bandits (RCB)} algorithm, which combines a simple {\it restart strategy} to handle the non-stationarity with the {\it competing bandits} algorithm \citep{liu2020competing} designed for the stationary case. We show that, with the proposed algorithm, each player incurs a uniform sub-linear regret of $\widetilde{\mathcal{O}}(L_T^{1/2} T^{1/2})$, where $L_T$ is the number of changes in the underlying preferences of the agents. We also discuss extensions of this algorithm to the case where the number of changes need not be known a priori.
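
A schematic of the restart idea is sketched below; the epoch length and the inner stationary routine are placeholders (the actual RCB schedule and the competing-bandits subroutine are specified in the paper and in \citep{liu2020competing}).

\begin{verbatim}
import numpy as np

def restart_competing_bandits(n_rounds, epoch_length, make_state, stationary_step):
    """Run a stationary competing-bandits routine in epochs, wiping its state at the
    start of every epoch so that stale preference estimates are discarded."""
    state, history = make_state(), []
    for t in range(n_rounds):
        if t % epoch_length == 0:              # restart: forget everything learned so far
            state = make_state()
        state, matched_arm, reward = stationary_step(state, t)
        history.append((t, matched_arm, reward))
    return history

# Dummy demo (uniformly random matching, Bernoulli rewards), purely illustrative.
rng = np.random.default_rng(5)
demo_step = lambda state, t: (state, int(rng.integers(3)), float(rng.random() < 0.5))
print(len(restart_competing_bandits(n_rounds=20, epoch_length=5,
                                    make_state=dict, stationary_step=demo_step)))
\end{verbatim}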
