
To avoid poor empirical performance in Metropolis-Hastings and other accept-reject-based algorithms, practitioners often tune them by trial and error. Lower bounds on the convergence rate are developed in both total variation and Wasserstein distances in order to identify how the simulations will fail, so that these settings can be avoided, providing guidance on tuning. Particular attention is paid to using the lower bounds to study the convergence complexity of accept-reject-based Markov chains and to constrain the rate of convergence for geometrically ergodic Markov chains. The theory is applied in several settings. For example, if the target density concentrates with a parameter $n$ (e.g., posterior concentration, Laplace approximations), it is demonstrated that the convergence rate of a Metropolis-Hastings chain can tend to $1$ exponentially fast if the tuning parameters do not depend carefully on $n$. This is demonstrated with Bayesian logistic regression under Zellner's g-prior, when the dimension and sample size increase in such a way that $d/n \to \gamma \in (0, 1)$, and with flat-prior Bayesian logistic regression as $n \to \infty$.
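The concentration phenomenon can be illustrated with a toy random-walk Metropolis experiment (a sketch with our own simplified target $N(0, 1/n)$, not the paper's construction or bounds):

```python
import math
import random

def rwm_acceptance_rate(n, proposal_scale, iters=20000, seed=0):
    """Estimate the acceptance rate of a random-walk Metropolis chain whose
    target N(0, 1/n) concentrates as n grows (a toy stand-in for posterior
    concentration)."""
    rng = random.Random(seed)
    x, accepted = 0.0, 0
    for _ in range(iters):
        y = x + rng.gauss(0.0, proposal_scale)
        log_ratio = -0.5 * n * (y * y - x * x)  # log target ratio for N(0, 1/n)
        # 1 - random() lies in (0, 1], so the log is always defined
        if math.log(1.0 - rng.random()) < log_ratio:
            x, accepted = y, accepted + 1
    return accepted / iters
```

With a fixed proposal scale the acceptance rate collapses toward zero as $n$ grows, while scaling the proposal like $n^{-1/2}$ keeps it stable; this is exactly the kind of $n$-dependent tuning the lower bounds make precise.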

Related content

Learning precise surrogate models of complex computer simulations and physical machines often requires long-lasting or expensive experiments. Furthermore, the modeled physical dependencies exhibit nonlinear and nonstationary behavior. Machine learning methods used to produce the surrogate model should therefore address these problems by providing a scheme to keep the number of queries small, e.g., by using active learning, and by being able to capture the nonlinear and nonstationary properties of the system. One way of modeling the nonstationarity is to induce input partitioning, a principle that has proven advantageous in active learning for Gaussian processes. However, existing methods either assume a known partitioning, need to introduce complex sampling schemes, or rely on very simple geometries. In this work, we present a simple yet powerful kernel family that incorporates a partitioning that (i) is learnable via gradient-based methods and (ii) uses a geometry more flexible than previous ones, while still being applicable in the low-data regime. It thus provides a good prior for active learning procedures. We empirically demonstrate excellent performance on various active learning tasks.
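A minimal sketch of the gated-partition idea (our own illustrative construction, not the paper's kernel family): a sigmoid gate softly splits the input space along a learnable hyperplane, and each side gets its own stationary RBF kernel. Because the gate and the lengthscales are all differentiable, the partition can be learned by gradient methods.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rbf(X1, X2, lengthscale):
    """Standard squared-exponential kernel matrix."""
    d2 = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def partition_kernel(X1, X2, w, b, ls1, ls2):
    """Gated kernel: s(x) = sigmoid(w.x + b) softly assigns points to two
    regions with their own RBF lengthscales. The sum of two products of
    positive-semidefinite kernels, hence itself positive semidefinite."""
    s1, s2 = sigmoid(X1 @ w + b), sigmoid(X2 @ w + b)
    k1 = np.outer(s1, s2) * rbf(X1, X2, ls1)
    k2 = np.outer(1 - s1, 1 - s2) * rbf(X1, X2, ls2)
    return k1 + k2
```

Each summand is a Schur product of a rank-one PSD matrix with an RBF Gram matrix, which keeps the whole kernel valid while letting the two regions have different smoothness.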

In this paper, we show the $L^p$-resolvent estimate for the finite element approximation of the Stokes operator for $p \in \left( \frac{2N}{N+2}, \frac{2N}{N-2} \right)$, where $N \ge 2$ is the dimension of the domain. We expect that this estimate can be applied to error estimates for finite element approximation of the non-stationary Navier--Stokes equations, since studies in this direction have been successful in the numerical analysis of nonlinear parabolic equations. To derive the resolvent estimate, we introduce the solution of the Stokes resolvent problem with a discrete external force. We then obtain a local energy error estimate using a novel localization technique and establish global $L^p$-type error estimates. The restriction on $p$ is caused by the treatment of lower-order terms appearing in the local energy error estimate. Our result may be a breakthrough in the $L^p$-theory of finite element methods for the non-stationary Navier--Stokes equations.
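Schematically, an $L^p$-resolvent estimate of the kind described takes the form (this is the standard shape of such statements, not a quotation of the paper's theorem):

```latex
\| (\lambda I + A_h)^{-1} P_h f \|_{L^p(\Omega)}
  \le \frac{C}{1 + |\lambda|} \, \| f \|_{L^p(\Omega)},
\qquad \lambda \in \Sigma_{\theta}, \quad
p \in \left( \tfrac{2N}{N+2}, \tfrac{2N}{N-2} \right),
```

with $A_h$ the discrete Stokes operator, $P_h$ a projection onto the finite element velocity space, $\Sigma_\theta$ a sector in the complex plane, and $C$ independent of the mesh size $h$. Uniformity in $h$ is what makes such estimates usable for error analysis of time discretizations.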

This paper studies model checking for general parametric regression models with no dimension reduction structure on the high-dimensional vector of predictors. Using an existing test as an initial test, this paper combines the sample-splitting technique and the conditional studentization approach to construct a COnditionally Studentized Test (COST). Unlike existing tests, whether the initial test is based on global or local smoothing, and whether the dimension of the predictor vector and the number of parameters are fixed or diverge at a certain rate as the sample size goes to infinity, the proposed test always has a normal weak limit under the null hypothesis. Further, the test can detect local alternatives distinct from the null hypothesis at the fastest possible rate of convergence in hypothesis testing. We also discuss the optimal sample splitting for power performance. The numerical studies offer information on its merits and limitations in finite-sample cases. As a generic methodology, it could be applied to other testing problems.
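The sample-splitting-plus-studentization mechanism can be sketched on a toy model-checking problem (a hypothetical simplification, not the paper's exact COST statistic): fit the parametric model on one split, then studentize the mean residual on the other split conditional on that fit, so the statistic is asymptotically $N(0,1)$ under the null regardless of how the initial fit was obtained.

```python
import math
import random

def cost_sketch(x, y, seed=0):
    """Toy conditionally studentized model check for the no-intercept model
    y = beta * x. Split 1 estimates beta; split 2 studentizes the mean
    residual conditional on that estimate. Approximately N(0,1) under the
    null; large when the model is misspecified."""
    idx = list(range(len(x)))
    random.Random(seed).shuffle(idx)
    half = len(idx) // 2
    i1, i2 = idx[:half], idx[half:]
    # split 1: least-squares estimate of beta (plays the role of the initial fit)
    beta = sum(x[i] * y[i] for i in i1) / sum(x[i] ** 2 for i in i1)
    # split 2: residuals conditional on beta-hat, then studentize their mean
    r = [y[i] - beta * x[i] for i in i2]
    m = sum(r) / len(r)
    v = sum((ri - m) ** 2 for ri in r) / (len(r) - 1)
    return m / math.sqrt(v / len(r))
```

Under a correct linear model the statistic stays near zero, while a quadratic truth (which a fitted slope cannot absorb) pushes it far into the tail.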

New emerging technologies powered by Artificial Intelligence (AI) have the potential to disruptively transform our societies for the better. In particular, data-driven learning approaches (i.e., Machine Learning (ML)) have been a true revolution in the advancement of multiple technologies in various application domains. But at the same time, there is growing concern about certain intrinsic characteristics of these methodologies that carry potential risks to both safety and fundamental rights. Although there are mechanisms in the adoption process to minimize these risks (e.g., safety regulations), these do not exclude the possibility of harm occurring, and if this happens, victims should be able to seek compensation. Liability regimes will therefore play a key role in ensuring basic protection for victims using or interacting with these systems. However, the same characteristics that make AI systems inherently risky, such as lack of causality, opacity, unpredictability, or their self- and continuous-learning capabilities, may lead to considerable difficulties when it comes to proving causation. This paper presents three case studies, as well as the methodology used to reach them, that illustrate these difficulties. Specifically, we address the cases of cleaning robots, delivery drones, and robots in education. The outcome of the proposed analysis suggests the need to revise liability regimes to alleviate the burden of proof on victims in cases involving AI technologies.

Out-of-distribution (OOD) detection aims at enhancing standard deep neural networks to distinguish anomalous inputs from the original training data. Previous progress has introduced various approaches in which the in-distribution training data, and even several OOD examples, are prerequisites. However, due to privacy and security concerns, such auxiliary data tend to be impractical in real-world scenarios. In this paper, we propose a data-free method that requires no training on natural data, called Class-Conditional Impressions Reappearing (C2IR), which utilizes image impressions from the fixed model to recover class-conditional feature statistics. Based on these, we introduce Integral Probability Metrics to estimate layer-wise class-conditional deviations and obtain layer weights by Measuring Gradient-based Importance (MGI). The experiments verify the effectiveness of our method and indicate that C2IR outperforms other post-hoc methods and reaches performance comparable to the full-access (ID and OOD) detection method, especially on the far-OOD dataset (SVHN).
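The scoring step can be sketched as follows (a hypothetical simplification of the C2IR pipeline: we assume per-layer class-conditional means and standard deviations have already been recovered, and use a crude nearest-class deviation in place of the paper's integral probability metric):

```python
import numpy as np

def ood_score(features, class_stats, layer_weights):
    """Combine layer-wise class-conditional deviations into one OOD score.

    features      : list of per-layer feature vectors for one sample
    class_stats   : per layer, a list of (mean, std) pairs, one per class
    layer_weights : per-layer importance weights (the MGI step in the paper)

    Larger score = more likely out-of-distribution."""
    score = 0.0
    for layer, feat in enumerate(features):
        # standardized distance to the nearest class-conditional statistics
        devs = [np.linalg.norm((feat - mu) / (sd + 1e-8))
                for mu, sd in class_stats[layer]]
        score += layer_weights[layer] * min(devs)
    return score
```

A sample close to some class's recovered statistics at every layer scores low; a far-OOD sample deviates at every layer and scores high.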

The log-logistic regression model is one of the most commonly used accelerated failure time (AFT) models in survival analysis, for which statistical inference methods have mainly been established under the frequentist framework. Recently, Bayesian inference for log-logistic AFT models using Markov chain Monte Carlo (MCMC) techniques has also been widely developed. In this work, we develop an alternative to MCMC methods and infer the parameters of the log-logistic AFT model via a mean-field variational Bayes (VB) algorithm. A piecewise approximation technique is embedded in deriving the update equations of the VB algorithm to achieve conjugacy. The proposed VB algorithm is evaluated and compared with typical frequentist inference methods using simulated data under various scenarios, and a publicly available dataset is employed for illustration. We demonstrate that the proposed VB algorithm achieves good estimation accuracy and is not sensitive to sample size, censoring rate, or prior information.
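For reference, the model itself (not the VB inference machinery) is simple to state: $\log T = x^\top\beta + \sigma\varepsilon$ with $\varepsilon$ standard logistic, which gives a closed-form survival function.

```python
import math

def loglogistic_aft_survival(t, x, beta, sigma):
    """Survival function of the log-logistic AFT model:
    log T = x'beta + sigma * eps with eps ~ standard logistic, so
    S(t | x) = P(T > t) = 1 / (1 + exp((log t - x'beta) / sigma)).
    (The VB algorithm in the abstract infers beta and sigma; this only
    states the model being fitted.)"""
    z = (math.log(t) - sum(xi * bi for xi, bi in zip(x, beta))) / sigma
    return 1.0 / (1.0 + math.exp(z))
```

The median survival time is $\exp(x^\top\beta)$, at which the survival probability equals one half; covariates act multiplicatively on time, which is the defining AFT property.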

We present and analyze a high-order discontinuous Galerkin method for the space discretization of the wave propagation model in thermo-poroelastic media. The proposed scheme supports general polytopal grids. Stability analysis and $hp$-version error estimates in suitable energy norms are derived for the semi-discrete problem. The fully discrete scheme is then obtained by employing an implicit Newmark-$\beta$ time integration scheme. A wide set of numerical simulations is reported, both for the verification of the theoretical estimates and for examples of physical interest. A comparison with the results of the poroelastic model is also provided, highlighting the differences between the predictive capabilities of the two models.
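The Newmark-$\beta$ time stepper named above can be sketched on a scalar model problem $m\ddot{u} + ku = 0$ (a toy stand-in; the paper applies it to the DG-discretized thermo-poroelastic system):

```python
def newmark_beta(m, k, u0, v0, h, steps, beta=0.25, gamma=0.5):
    """Newmark-beta integration of m*u'' + k*u = 0.
    beta = 1/4, gamma = 1/2 is the implicit average-acceleration variant,
    which is unconditionally stable and conserves the discrete energy of
    linear undamped systems."""
    u, v = u0, v0
    a = -k * u / m  # initial acceleration from the equation of motion
    for _ in range(steps):
        # displacement predictor using the old acceleration
        u_pred = u + h * v + h * h * (0.5 - beta) * a
        # solve the (scalar) implicit equation for the new acceleration
        a_new = -k * u_pred / (m + beta * h * h * k)
        u = u_pred + h * h * beta * a_new
        v = v + h * ((1 - gamma) * a + gamma * a_new)
        a = a_new
    return u, v
```

For a multi-degree-of-freedom system the scalar solve becomes a linear solve with the matrix $M + \beta h^2 K$, which is what makes the scheme implicit.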

Many machine learning problems can be framed in the context of estimating functions, and often these are time-dependent functions that are estimated in real time as observations arrive. Gaussian processes (GPs) are an attractive choice for modeling real-valued nonlinear functions due to their flexibility and uncertainty quantification. However, the typical GP regression model suffers from several drawbacks: 1) conventional GP inference scales as $O(N^{3})$ with respect to the number of observations; 2) updating a GP model sequentially is not trivial; and 3) covariance kernels typically enforce stationarity constraints on the function, while GPs with non-stationary covariance kernels are often intractable in practice. To overcome these issues, we propose a sequential Monte Carlo algorithm to fit infinite mixtures of GPs that capture non-stationary behavior while allowing for online, distributed inference. Our approach empirically improves performance over state-of-the-art methods for online GP estimation in the presence of non-stationarity in time-series data. To demonstrate the utility of our proposed online Gaussian process mixture-of-experts approach in applied settings, we show that we can successfully implement an optimization algorithm using online Gaussian process bandits.
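The sequential-update difficulty in point 2) can be illustrated with a minimal online GP (our own sketch of the building block, not the paper's SMC mixture): a block-inverse update absorbs each new observation in $O(N^2)$ instead of refactoring the full $O(N^3)$ kernel matrix.

```python
import numpy as np

def rbf(x1, x2, ls=1.0):
    return np.exp(-0.5 * (x1 - x2) ** 2 / ls ** 2)

class OnlineGP:
    """1-D GP regression with a sequentially maintained inverse kernel
    matrix, updated via the Schur-complement block-inverse identity."""
    def __init__(self, noise=1e-2):
        self.noise, self.X, self.y, self.Kinv = noise, [], [], None

    def add(self, x, y):
        if not self.X:
            self.Kinv = np.array([[1.0 / (rbf(x, x) + self.noise)]])
        else:
            k = np.array([rbf(x, xi) for xi in self.X])
            Kk = self.Kinv @ k
            s = rbf(x, x) + self.noise - k @ Kk  # Schur complement
            self.Kinv = np.block([
                [self.Kinv + np.outer(Kk, Kk) / s, (-Kk / s)[:, None]],
                [(-Kk / s)[None, :], np.array([[1.0 / s]])],
            ])
        self.X.append(x)
        self.y.append(y)

    def predict(self, x):
        k = np.array([rbf(x, xi) for xi in self.X])
        return k @ self.Kinv @ np.array(self.y)
```

Even with this trick, the per-step cost still grows with $N$ and the kernel is stationary, which is precisely the gap the mixture-of-experts construction targets.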

Following the breakthrough work of Tardos in the bit-complexity model, Vavasis and Ye gave the first exact algorithm for linear programming in the real model of computation with running time depending only on the constraint matrix. For solving a linear program (LP) $\max\, c^\top x,\: Ax = b,\: x \geq 0,\: A \in \mathbb{R}^{m \times n}$, Vavasis and Ye developed a primal-dual interior point method using a 'layered least squares' (LLS) step, and showed that $O(n^{3.5} \log (\bar{\chi}_A+n))$ iterations suffice to solve (LP) exactly, where $\bar{\chi}_A$ is a condition measure controlling the size of solutions to linear systems related to $A$. Monteiro and Tsuchiya, noting that the central path is invariant under rescalings of the columns of $A$ and $c$, asked whether there exists an LP algorithm depending instead on the measure $\bar{\chi}^*_A$, defined as the minimum $\bar{\chi}_{AD}$ value achievable by a column rescaling $AD$ of $A$, and gave strong evidence that this should be the case. We resolve this open question affirmatively. Our first main contribution is an $O(m^2 n^2 + n^3)$ time algorithm which works on the linear matroid of $A$ to compute a nearly optimal diagonal rescaling $D$ satisfying $\bar{\chi}_{AD} \leq n(\bar{\chi}^*_A)^3$. This algorithm also allows us to approximate the value of $\bar{\chi}_A$ up to a factor $n (\bar{\chi}^*_A)^2$. As our second main contribution, we develop a scaling invariant LLS algorithm, together with a refined potential function based analysis for LLS algorithms in general. With this analysis, we derive an improved $O(n^{2.5} \log n\log (\bar{\chi}^*_A+n))$ iteration bound for optimally solving (LP) using our algorithm. The same argument also yields a factor $n/\log n$ improvement on the iteration complexity bound of the original Vavasis-Ye algorithm.
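A quick numerical illustration of why column rescaling matters (using the ordinary matrix condition number as a crude, hypothetical proxy for $\bar{\chi}_A$, which is a different and harder-to-compute quantity): a single badly scaled column makes the constraint matrix ill-conditioned, and a diagonal rescaling $D$ repairs it. Finding a near-optimal such $D$ is what the first contribution above accomplishes for $\bar{\chi}$.

```python
import numpy as np

# Constraint matrix with one badly scaled column.
A = np.array([[1.0, 1e6, 0.0],
              [0.0, 1e6, 1.0]])
# Diagonal column rescaling that undoes the bad scaling.
D = np.diag([1.0, 1e-6, 1.0])
print(np.linalg.cond(A), np.linalg.cond(A @ D))
```

The rescaled matrix $AD$ is well-conditioned even though $A$ is not, mirroring how $\bar{\chi}^*_A$ can be far smaller than $\bar{\chi}_A$.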

This work outlines a fast, high-precision time-domain solver for scalar, electromagnetic and gravitational perturbations on hyperboloidal foliations of Kerr space-times. Time-domain Teukolsky equation solvers have typically used explicit methods, which numerically violate Noether symmetries and are Courant-limited. These restrictions can limit the performance of explicit schemes when simulating long-time extreme mass ratio inspirals, expected to appear in the LISA band for 2-5 years. We thus explore symmetric (exponential, Padé or Hermite) integrators, which are unconditionally stable and known to preserve certain Noether symmetries and phase-space volume. For linear hyperbolic equations, these implicit integrators can be cast in explicit form, making them well-suited for long-time evolution of black hole perturbations. The 1+1 modal Teukolsky equation is discretized in space using polynomial collocation methods and reduced to a linear system of ordinary differential equations, coupled via mode-coupling arrays and discretized (matrix) differential operators. We use a matricization technique to cast the mode-coupled system in a form amenable to a method-of-lines framework, which simplifies numerical implementation and enables efficient parallelization on CPU and GPU architectures. We test our numerical code by studying late-time tails of Kerr spacetime perturbations in the sub-extremal and extremal cases.
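The "implicit integrators cast in explicit form" remark can be sketched for a generic linear system $du/dt = Lu$ (a toy illustration of the principle, not the paper's Teukolsky discretization): the implicit midpoint rule, i.e. the lowest-order diagonal Padé approximant, gives $u_{n+1} = (I - \tfrac{h}{2}L)^{-1}(I + \tfrac{h}{2}L)\,u_n$, and because $L$ is constant the solve can be precomputed once into a single propagator matrix applied explicitly each step.

```python
import numpy as np

def midpoint_propagator(L, h):
    """One-time construction of the implicit-midpoint (Cayley) propagator
    P = (I - h/2 L)^{-1} (I + h/2 L) for the linear system du/dt = L u.
    After this solve, every time step is an explicit matrix multiply."""
    I = np.eye(L.shape[0])
    return np.linalg.solve(I - 0.5 * h * L, I + 0.5 * h * L)
```

For skew-symmetric $L$ (a norm-preserving flow, as for a wave equation in first-order form) the Cayley transform is exactly orthogonal, so the scheme preserves the norm for any step size: there is no Courant limit, which is the property exploited for long-time evolutions.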
