
PDDSparse is a new hybrid parallelisation scheme for solving large-scale elliptic boundary value problems on supercomputers, which can be described as a Feynman-Kac formula for domain decomposition. At its core lies a sparse stochastic linear system for the solutions on the interfaces, whose entries are generated via Monte Carlo simulations. Assuming small statistical errors, we show that the random system matrix ${\tilde G}(\omega)$ is close to a nonsingular M-matrix $G$, i.e. ${\tilde G}(\omega)+E=G$ where $\|E\|/\|G\|$ is small. Using nonstandard arguments, we bound $\|G^{-1}\|$ and the condition number of $G$, showing that both grow moderately with the degrees of freedom of the discretisation. Moreover, the truncated Neumann series for $G^{-1}$, which is straightforward to calculate, is the basis for an excellent preconditioner for ${\tilde G}(\omega)$. These findings are supported by numerical evidence.
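The preconditioning idea can be sketched in a few lines. The snippet below is a minimal illustration, not the PDDSparse implementation: a generic diagonally dominant M-matrix stands in for $G$, a small random perturbation mimics the Monte Carlo matrix ${\tilde G}(\omega)$, and the truncated Neumann series for $G^{-1}$ is built from the Jacobi splitting $G = D - N$.

```python
import numpy as np

# Stand-in for the PDDSparse interface matrix: a small nonsingular
# M-matrix G (diagonally dominant with nonpositive off-diagonal entries).
n = 50
G = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# Mimic the Monte Carlo matrix: G_tilde = G - E with ||E||/||G|| small.
rng = np.random.default_rng(0)
E = 1e-3 * rng.standard_normal((n, n))
G_tilde = G - E

# Truncated Neumann series for G^{-1} from the Jacobi splitting G = D - N:
#   G^{-1} = (I - D^{-1} N)^{-1} D^{-1} = sum_k (D^{-1} N)^k D^{-1}.
D_inv = np.diag(1.0 / np.diag(G))
J = np.eye(n) - D_inv @ G          # J = D^{-1} N, spectral radius < 1
M, term, K = np.zeros((n, n)), np.eye(n), 8
for _ in range(K + 1):
    M += term @ D_inv              # add J^k D^{-1}
    term = term @ J

print("cond(G_tilde)     =", np.linalg.cond(G_tilde))
print("cond(M @ G_tilde) =", np.linalg.cond(M @ G_tilde))
```

For this diagonally dominant example the preconditioned matrix has a condition number close to one, since the truncation error of the series decays like the spectral radius of $J$ raised to the power $K+1$.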

Related content

We introduce the higher-order refactoring problem, where the goal is to compress a logic program by discovering higher-order abstractions, such as map, filter, and fold. We implement our approach in Stevie, which formulates the refactoring problem as a constraint optimisation problem. Our experiments on multiple domains, including program synthesis and visual reasoning, show that refactoring can improve the learning performance of an inductive logic programming system, specifically improving predictive accuracies by 27% and reducing learning times by 47%. We also show that Stevie can discover abstractions that transfer to multiple domains.
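To make the notion of a higher-order abstraction concrete, here is a hedged Python analogue of what Stevie does for logic programs. The system itself operates on Prolog-style clauses and solves a constraint optimisation problem; this sketch only illustrates why abstracting a shared skeleton into map compresses a program.

```python
# Before refactoring: two first-order definitions sharing one recursive
# skeleton and differing only in the per-element operation.
def double_all(xs):
    return [] if not xs else [2 * xs[0]] + double_all(xs[1:])

def succ_all(xs):
    return [] if not xs else [xs[0] + 1] + succ_all(xs[1:])

# After refactoring: the shared skeleton becomes a single higher-order
# 'map' abstraction, and each program shrinks to one line.
def map_(f, xs):
    return [] if not xs else [f(xs[0])] + map_(f, xs[1:])

double_all2 = lambda xs: map_(lambda x: 2 * x, xs)
succ_all2 = lambda xs: map_(lambda x: x + 1, xs)

assert double_all2([1, 2, 3]) == double_all([1, 2, 3]) == [2, 4, 6]
assert succ_all2([1, 2, 3]) == succ_all([1, 2, 3]) == [2, 3, 4]
```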

The representation of a dynamic problem in ASP usually boils down to using copies of variables and constraints, one for each time stamp, whether the problem is encoded directly or via an action or temporal language. The multiplication of variables and constraints is commonly done during grounding, and the solver is completely ignorant of the temporal relationship among the different instances. On the other hand, a key factor in the performance of today's ASP solvers is conflict-driven constraint learning. Our question is whether a constraint learned for particular time steps can be generalized and reused at other time stamps, and ultimately whether this enhances the overall solver performance on temporal problems. Since the domain of time is known in advance, we study conditions under which learned dynamic constraints can be generalized. We propose a simple translation of the original logic program such that, for the translated programs, the learned constraints can be generalized to other time points. Additionally, we identify a property of temporal problems that allows us to generalize all learned constraints to all time steps. It turns out that this property is satisfied by many planning problems. Finally, we empirically evaluate the impact of adding the generalized constraints to an ASP solver.
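The generalization step itself is mechanical once a learned constraint is known to be time-shift invariant. The following hedged Python sketch models atoms as (name, time) pairs and shifts a learned nogood across the horizon; the atom names are hypothetical, and the actual machinery lives inside the ASP solver and operates on nogoods over solver literals.

```python
def shift(nogood, delta):
    """Shift every time stamp in a learned nogood by delta."""
    return frozenset((name, t + delta) for name, t in nogood)

def generalize(nogood, horizon):
    """All time-shifted copies of the nogood that fit within 0..horizon."""
    times = [t for _, t in nogood]
    lo, hi = min(times), max(times)
    return [shift(nogood, d) for d in range(-lo, horizon - hi + 1)]

# A constraint learned at steps 3 and 4 (hypothetical atoms) ...
learned = frozenset([("robot_at_door", 3), ("door_locked", 4)])
# ... is reused at every admissible pair of consecutive steps up to T = 6.
for ng in generalize(learned, horizon=6):
    print(sorted(ng, key=lambda atom: atom[1]))
```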

It is a challenge to manage infinite- or high-dimensional data in situations where storage, transmission, or computation resources are constrained. In the simplest scenario, where the data consist of a noisy infinite-dimensional signal, we introduce the notion of local \emph{effective dimension} (i.e., pertinent to the underlying signal), and formulate and study the problem of its recovery on the basis of noisy data. This problem is related to the problems of adaptive quantization, (lossy) data compression, and oracle signal estimation. We apply a Bayesian approach and study frequentist properties of the resulting posterior; a purely frequentist version of the results is also provided. We derive upper and lower bounds for identifying the local effective dimension, which show that only so-called \emph{one-sided inference} on the local effective dimension can be ensured, whereas \emph{two-sided inference} is in general impossible. We establish the \emph{minimal} conditions under which two-sided inference can be made. Finally, we consider the connection to the problem of smoothness estimation for traditional smoothness scales (Sobolev scales).
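The one-sided phenomenon is easy to see in a toy version of the sequence model. In the hedged sketch below (all parameter choices are illustrative, not from the paper), coefficients surviving a universal threshold certify a lower bound on the effective dimension, while signal hidden below the noise level is invisible, which is exactly why two-sided inference fails in general.

```python
import numpy as np

# Toy sequence model: observe X_i = theta_i + eps * xi_i and try to
# recover the effective dimension of theta. All parameters illustrative.
rng = np.random.default_rng(1)
n, eps = 1000, 0.05
d_true = 40
theta = np.zeros(n)
theta[:d_true] = 0.5 / np.sqrt(1.0 + np.arange(d_true))   # decaying signal
X = theta + eps * rng.standard_normal(n)

# Slightly inflated universal threshold: pure noise stays below it with
# high probability, so surviving indices certify genuine signal.
thresh = 1.1 * eps * np.sqrt(2.0 * np.log(n))
above = np.nonzero(np.abs(X) > thresh)[0]
d_hat = above.max() + 1 if above.size else 0

# One-sided flavour: d_hat certifies the dimension is *at least* d_hat,
# but signal hidden below the noise level is invisible, so a matching
# upper confidence bound (two-sided inference) is impossible in general.
print("true dimension:", d_true, "  certified lower bound:", d_hat)
```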

The scheduling problem is a key class of optimization problems with numerous applications in both practical and theoretical settings. For the scheduling problem, probabilistic analysis is a basic tool for investigating the performance of scheduling algorithms, and has therefore been carried out in a large body of prior work. However, existing probabilistic analyses have several limitations; for example, they are largely restricted to i.i.d. scenarios because of their analytical simplicity. This paper provides a new framework for probabilistic analysis of the scheduling problem that addresses these limitations. As a consequence, we obtain several theorems, including a theoretical limit for the scheduling problem that applies to \emph{general, non-i.i.d. probability distributions}. Several information-theoretic techniques, such as the \emph{information-spectrum method}, turn out to be useful in proving our results. Since the scheduling problem is related to many other research fields, we expect our framework to yield further interesting applications in the future.
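As a hedged illustration of why the i.i.d. assumption matters, the following simulation (not taken from the paper) compares greedy list scheduling under i.i.d. job sizes and under Markov-modulated job sizes with the same long-run mean; the correlated regime inflates the variability of the makespan.

```python
import numpy as np

rng = np.random.default_rng(2)

def makespan(jobs, m):
    """Greedy list scheduling: assign each job to the least-loaded machine."""
    loads = np.zeros(m)
    for j in jobs:
        loads[loads.argmin()] += j
    return loads.max()

def markov_jobs(n, p_stay=0.95):
    """Job sizes from a two-regime Markov chain (long-run mean 1.0)."""
    sizes, state = [], 0
    for _ in range(n):
        sizes.append(rng.exponential(0.5 if state == 0 else 1.5))
        if rng.random() > p_stay:
            state = 1 - state
    return np.array(sizes)

n, m, reps = 500, 10, 200
iid = np.array([makespan(rng.exponential(1.0, n), m) for _ in range(reps)])
mkv = np.array([makespan(markov_jobs(n), m) for _ in range(reps)])
print(f"i.i.d. jobs : makespan mean {iid.mean():.1f}, std {iid.std():.2f}")
print(f"Markov jobs : makespan mean {mkv.mean():.1f}, std {mkv.std():.2f}")
```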

We derive bounds on the moduli of the eigenvalues of a special type of matrix rational function using three techniques: (1) applying the Bauer-Fike theorem to an associated block matrix of the given matrix rational function; (2) associating a real rational function and applying Rouché's theorem to the matrix rational function; and (3) applying a numerical radius inequality to a block matrix associated with the matrix rational function. These bounds are compared when the coefficients are unitary matrices. Numerical examples are given to illustrate the results obtained.
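The linearization underlying technique (1) is easy to demonstrate numerically in the simplest special case, a monic quadratic matrix polynomial: its eigenvalues coincide with those of a block companion matrix $C$, so any induced norm of $C$ bounds their moduli. The hedged sketch below uses random coefficient matrices rather than the unitary coefficients of the paper's comparison, and does not reproduce the Bauer-Fike-based refinement.

```python
import numpy as np

# Monic quadratic matrix polynomial P(z) = z^2 I + z A1 + A0: its
# eigenvalues are those of the block companion matrix C below, so any
# induced norm of C bounds their moduli.
rng = np.random.default_rng(3)
k = 4
A0 = rng.standard_normal((k, k))
A1 = rng.standard_normal((k, k))

C = np.block([[np.zeros((k, k)), np.eye(k)],
              [-A0, -A1]])

eigs = np.linalg.eigvals(C)
print("largest eigenvalue modulus:", np.abs(eigs).max())
print("2-norm bound ||C||_2      :", np.linalg.norm(C, 2))
print("infinity-norm bound       :", np.linalg.norm(C, np.inf))
```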

We present new Neumann-Neumann algorithms based on a time domain decomposition applied to unconstrained parabolic optimal control problems. After a spatial semi-discretization, the Lagrange multiplier approach provides a coupled forward-backward optimality system, which can be solved using a time domain decomposition. Due to the forward-backward structure of the optimality system, nine variants of the Neumann-Neumann algorithm arise. We analyze their convergence behavior and determine the optimal relaxation parameter for each algorithm. Our analysis reveals that the most natural algorithms are only good smoothers, while other choices lead to efficient solvers. We illustrate our analysis with numerical experiments.
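To show what the coupled forward-backward optimality system looks like, here is a hedged sketch for a scalar model problem (a one-dimensional stand-in for the semi-discretized PDE, with illustrative parameters): minimizing $\frac12\int (y-\hat y)^2 + \frac{\nu}{2}\int u^2$ subject to $y'=ay+u$ gives a state equation running forward, an adjoint equation running backward, and the coupling $u=-\lambda/\nu$. The snippet assembles one simple discretization of this system and solves it all at once, i.e., the monolithic problem that the time domain decomposition splits into subinterval solves.

```python
import numpy as np

# Model problem: min_u 1/2*int((y - yhat)^2) + nu/2*int(u^2), y' = a*y + u.
# Optimality: y' = a*y - l/nu (forward), l' = -a*l - (y - yhat) (backward),
# with y(0) = y0 and l(T) = 0; the control is recovered as u = -l/nu.
a, nu, T, N = -1.0, 1e-2, 1.0, 100
dt = T / N
yhat = np.ones(N + 1)                         # target trajectory
y0 = 0.0

# Unknowns z = [y_0..y_N, l_0..l_N]; assemble the (2N+2) x (2N+2) system.
M = np.zeros((2 * N + 2, 2 * N + 2))
b = np.zeros(2 * N + 2)
Y, L = 0, N + 1                               # offsets of the y and l blocks

M[0, Y] = 1.0; b[0] = y0                      # initial condition y_0 = y0
for k in range(N):                            # state, implicit Euler:
    r = 1 + k                                 # (1-dt*a) y_{k+1} - y_k + dt*l_{k+1}/nu = 0
    M[r, Y + k + 1] = 1.0 - dt * a
    M[r, Y + k] = -1.0
    M[r, L + k + 1] = dt / nu
M[N + 1, L + N] = 1.0                         # terminal condition l_N = 0
for k in range(N):                            # adjoint, one-step recursion:
    r = N + 2 + k                             # l_{k+1} - (1-dt*a) l_k + dt*(y_k - yhat_k) = 0
    M[r, L + k + 1] = 1.0
    M[r, L + k] = -1.0 + dt * a
    M[r, Y + k] = dt
    b[r] = dt * yhat[k]

z = np.linalg.solve(M, b)
y, l = z[:N + 1], z[N + 1:]
print("y(T) =", round(y[-1], 4), "  max |u| =", round(np.abs(l / nu).max(), 2))
```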

Statistical inference for high-dimensional parameters (HDPs) can be based on their intrinsic correlation; that is, parameters that are close spatially or temporally tend to have similar values. This is why nonlinear mixed-effects models (NMMs) are commonly (and appropriately) used for models with HDPs. Conversely, in many practical applications of NMMs, the random effects (REs) are actually correlated HDPs that should remain constant during repeated sampling for frequentist inference. In both scenarios, inference should be conditional on the REs rather than marginal, i.e., obtained by integrating out the REs. In this paper, we first summarize recent theory of conditional inference for NMMs, and then propose a bias-corrected RE predictor and confidence interval (CI). We also extend this methodology to accommodate the case where some REs are not associated with data. Simulation studies indicate that this new approach leads to substantial improvement in the conditional coverage rate of RE CIs, including CIs for smooth functions in generalized additive models, compared to the existing method based on marginal inference.
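The gap between marginal and conditional coverage is visible in even the simplest mixed model. The hedged simulation below (a toy one-way model with known variances, not the paper's estimator) conditions on a fixed value of the random effect and shows the naive interval, calibrated marginally, undercovering when $|b|$ is large.

```python
import numpy as np

# Toy one-way model y_ij = b_i + e_ij with known variances: the BLUP
# shrinks the group mean toward zero, so conditionally on a fixed b the
# naive (marginally calibrated) 95% interval undercovers for large |b|.
rng = np.random.default_rng(4)
tau2, sig2, m, reps = 1.0, 1.0, 5, 20000
shrink = tau2 / (tau2 + sig2 / m)
sd_blup = np.sqrt(shrink * sig2 / m)      # usual marginal prediction sd

for b in (0.0, 1.0, 2.0):                 # condition on a fixed RE value
    ybar = b + rng.normal(0.0, np.sqrt(sig2 / m), reps)
    blup = shrink * ybar
    cover = np.mean(np.abs(blup - b) <= 1.96 * sd_blup)
    print(f"b = {b:.1f}: conditional coverage of naive 95% CI = {cover:.3f}")
```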

We study the problem of parameter estimation for large exchangeable interacting particle systems when a sample of discrete observations from a single particle is available. We propose a novel method based on martingale estimating functions constructed by employing the eigenvalues and eigenfunctions of the generator of the mean field limit, where the law of the process is replaced by the (unique) invariant measure of the mean field dynamics. We then prove that our estimator is asymptotically unbiased and asymptotically normal when the number of observations and the number of particles tend to infinity, and we provide a rate of convergence to the exact parameter values. Finally, we present several numerical experiments which show the accuracy of our estimator and corroborate our theoretical findings, even when the mean field dynamics exhibit more than one steady state.
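The eigenfunction-based estimating idea can be illustrated on the simplest possible example. The hedged sketch below uses a single Ornstein-Uhlenbeck process (so the generator and its eigenfunctions are explicit) rather than an interacting particle system, and recovers the drift parameter from discrete observations.

```python
import numpy as np

# Single OU particle dX = -alpha*X dt + dW: phi(x) = x is an eigenfunction
# of the generator with eigenvalue -alpha, so E[X_{t+D} | X_t] = e^{-alpha*D} X_t.
# Zeroing the martingale estimating function
#   G(alpha) = sum_k X_k * (X_{k+1} - e^{-alpha*D} X_k)
# gives a closed-form estimator for alpha.
rng = np.random.default_rng(5)
alpha, D, n = 2.0, 0.1, 20000

rho = np.exp(-alpha * D)                      # exact OU transition
s = np.sqrt((1.0 - rho**2) / (2.0 * alpha))
X = np.empty(n)
X[0] = 0.0
for k in range(n - 1):
    X[k + 1] = rho * X[k] + s * rng.standard_normal()

rho_hat = np.dot(X[:-1], X[1:]) / np.dot(X[:-1], X[:-1])
alpha_hat = -np.log(rho_hat) / D
print("true alpha:", alpha, "  estimated:", round(alpha_hat, 3))
```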

A new sparse semiparametric model is proposed, which incorporates the influence of two functional random variables on a scalar response in a flexible and interpretable manner. One of the functional covariates is included through a single-index structure, while the other is included linearly through the high-dimensional vector formed by its discretised observations. For this model, two new algorithms are presented for selecting relevant variables in the linear part and estimating the model. Both procedures utilise the functional origin of the linear covariates. Finite sample experiments demonstrate the scope of application of both algorithms: the first is a fast algorithm that provides a solution, without loss of predictive ability, to the significant computational cost of standard variable selection methods for estimating this model, and the second completes the set of relevant linear covariates provided by the first, thus improving its predictive efficiency. Asymptotic results provide theoretical support for both procedures. A real data application demonstrates the applicability of the presented methodology in terms of prediction, interpretability of the outputs, and low computational cost.
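As a hedged baseline for the linear part of the model, the sketch below generates curves observed on a fine grid, makes the response depend on a few grid points, and selects variables with an off-the-shelf $\ell_1$ penalty; the paper's algorithms go further by exploiting the functional (ordered and smooth) origin of these covariates, which plain lasso ignores.

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Curves observed on a p-point grid; the response depends linearly on a
# few grid points ('impact points'). Plain LassoCV is a stand-in baseline.
# With correlated columns the selected points may land on neighbours of
# the true ones.
rng = np.random.default_rng(6)
n, p = 200, 100
X = np.cumsum(rng.standard_normal((n, p)), axis=1) / np.sqrt(p)  # rough curves

relevant = [20, 55, 80]                    # true impact points on the grid
beta = np.zeros(p)
beta[relevant] = [2.0, -1.5, 1.0]
y = X @ beta + 0.1 * rng.standard_normal(n)

fit = LassoCV(cv=5).fit(X, y)
print("true impact points  :", relevant)
print("selected grid points:", np.nonzero(np.abs(fit.coef_) > 0.05)[0])
```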

In arXiv:2305.03945 [math.NA], a first-order optimization algorithm was introduced to solve time-implicit schemes of reaction-diffusion equations. In this work, we conduct theoretical studies on this first-order algorithm equipped with a quadratic regularization term. We provide sufficient conditions under which the proposed algorithm and its time-continuous limit converge exponentially fast to the desired time-implicit numerical solution. We show both theoretically and numerically that the convergence rate is independent of the grid size, which makes our method suitable for large-scale problems. The efficiency of our algorithm is verified via a series of numerical examples conducted on various types of reaction-diffusion equations. The choice of optimal hyperparameters and comparisons with classical root-finding algorithms are also discussed in the numerical section.
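The underlying idea is that one implicit time step is the solution of an optimization problem, which a first-order method can attack directly. The hedged sketch below (plain gradient descent with an illustrative step size; the paper's algorithm, with its quadratic regularization, achieves a rate independent of the grid size) performs one implicit Euler step of a 1D Allen-Cahn equation, chosen here as a representative reaction-diffusion example, by descending on the proximal objective.

```python
import numpy as np

# One implicit Euler step of 1D Allen-Cahn u_t = kappa*u_xx - (u^3 - u)
# is the minimizer of the proximal objective
#   J(v) = ||v - u^n||^2 / (2*dt) + kappa/2 * v.Lv + sum(v^4/4 - v^2/2),
# so the nonlinear solve becomes first-order descent on J. Homogeneous
# Dirichlet boundaries are built into L; all parameters are illustrative.
n, kappa, dt = 200, 1e-3, 0.1
h = 1.0 / n
x = np.linspace(0.0, 1.0, n)
u = np.tanh((x - 0.5) / 0.1)                  # profile at the current step

L = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def grad_J(v, un):
    return (v - un) / dt + kappa * (L @ v) + (v**3 - v)

step = 1.0 / (1.0 / dt + 4.0 * kappa / h**2 + 2.0)   # ~1/Lipschitz(grad J)
v = u.copy()
for _ in range(500):
    v -= step * grad_J(v, u)                  # plain gradient descent

print("residual ||grad J(v)|| after 500 iterations:", np.linalg.norm(grad_J(v, u)))
```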
