
The behavior of a generalized random environment integer-valued autoregressive model of higher order with geometric marginal distribution and negative binomial thinning operator (abbrev. $RrNGINAR(\mathcal{M,A,P})$) is dictated by a realization $\{z_n\}_{n=1}^\infty$ of an auxiliary Markov chain called the random environment process. The element $z_n$ represents the state of the environment at time $n\in\mathbb{N}$ and determines three different parameters of the model at that time. In order to use the $RrNGINAR(\mathcal{M,A,P})$ model, one first needs to estimate $\{z_n\}_{n=1}^\infty$, which has so far been done by K-means clustering of the data. We argue that this approach ignores some information and performs poorly in certain situations. We propose a new method for estimating $\{z_n\}_{n=1}^\infty$ that applies a data transformation before clustering, in order to reduce the information loss. To confirm its efficiency, we compare the new approach with the usual one on simulated and real-life data, and observe the benefits obtained from our method.
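As a point of reference, the baseline estimation step can be sketched as follows: cluster the observed count series and read off a state label for each time point. This is a minimal illustration of the usual K-means approach only; the transformation introduced in the paper is not specified in the abstract, and the toy two-regime series below is an assumption.

```python
# A minimal sketch of the baseline state-estimation step: K-means
# clustering of the observed counts to recover the hidden environment
# states z_n. Not the paper's proposed (transformation-based) method.
import numpy as np
from sklearn.cluster import KMeans

def estimate_states(counts, n_states):
    """Assign each observation X_n an estimated environment state z_n."""
    X = np.asarray(counts, dtype=float).reshape(-1, 1)
    km = KMeans(n_clusters=n_states, n_init=10, random_state=0).fit(X)
    # Relabel clusters so that state 0 has the smallest mean, etc.
    order = np.argsort(km.cluster_centers_.ravel())
    relabel = np.empty(n_states, dtype=int)
    relabel[order] = np.arange(n_states)
    return relabel[km.labels_]

# Example: a toy series switching between two geometric regimes.
rng = np.random.default_rng(1)
series = np.concatenate([rng.geometric(0.5, 100) - 1,   # mean ~1
                         rng.geometric(0.1, 100) - 1])  # mean ~9
z_hat = estimate_states(series, n_states=2)
```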


In this paper we apply the ideas of New Q-Newton's method directly to a system of equations, exploiting the special structure of the cost function $f=||F||^2$, where $F=(f_1,\ldots ,f_m)$. The first algorithm proposed here is a modification of the Levenberg-Marquardt algorithm, for which we prove some new results on global convergence and avoidance of saddle points. The second algorithm proposed here is a modification of New Q-Newton's method Backtracking, where we use the operator $\nabla ^2f(x)+\delta ||F(x)||^{\tau}$ instead of $\nabla ^2f(x)+\delta ||\nabla f(x)||^{\tau}$. This new version is more suitable than New Q-Newton's method Backtracking itself, while currently having a stronger saddle-point avoidance guarantee than Levenberg-Marquardt algorithms. A general scheme for second-order methods for solving systems of equations is also proposed. We also discuss a way to avoid the situation where the limit of the constructed sequence solves $H(x)^{\intercal}F(x)=0$ but not $F(x)=0$.
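The update rule of the second algorithm can be illustrated schematically. The sketch below substitutes the Gauss-Newton approximation $2J^{\intercal}J$ for the full Hessian $\nabla^2 f$ and omits the backtracking line search and the saddle-point-avoiding perturbations, so it is a simplified stand-in rather than the authors' algorithm; the test system and the values of $\delta$ and $\tau$ are assumptions.

```python
# A schematic sketch of one damped-Newton iteration for f = ||F||^2 with
# the regularization term delta * ||F(x)||^tau * I mentioned above.
# NOT the authors' full algorithm; it only illustrates the update rule.
import numpy as np

def regularized_newton_step(F, J, x, delta=1.0, tau=1.0):
    """One step x -> x - (H + delta*||F(x)||^tau * I)^{-1} grad f(x),
    where H is the Gauss-Newton approximation of the Hessian of ||F||^2."""
    Fx, Jx = F(x), J(x)
    grad = 2.0 * Jx.T @ Fx                       # gradient of ||F||^2
    H = 2.0 * Jx.T @ Jx                          # Gauss-Newton Hessian
    reg = delta * np.linalg.norm(Fx) ** tau      # vanishes at a root of F
    step = np.linalg.solve(H + reg * np.eye(len(x)), grad)
    return x - step

# Example: solve F(x) = (x0^2 + x1 - 3, x0 - x1) = 0.
F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] - x[1]])
J = lambda x: np.array([[2 * x[0], 1.0], [1.0, -1.0]])
x = np.array([2.0, 0.0])
for _ in range(30):
    x = regularized_newton_step(F, J, x)
# x converges to a root of F, here approximately (1.30, 1.30).
```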

Induction benefits from useful priors. Penalized regression approaches, like ridge regression, shrink weights toward zero, but zero association is usually not a sensible prior. Inspired by the simple and robust decision heuristics that humans use, we constructed non-zero priors for penalized regression models that provide robust and interpretable solutions across several tasks. Our approach enables estimates from a constrained model to serve as a prior for a more general model, yielding a principled way to interpolate between models of differing complexity. We successfully applied this approach to a number of decision and classification problems, as well as to the analysis of simulated brain imaging data. Models with robust priors had excellent worst-case performance, and their solutions followed from the form of the heuristic used to derive the prior. These new algorithms can serve applications in data analysis and machine learning, as well as help in understanding how people transition from novice to expert performance.
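The core construction admits a compact closed form: ridge regression that penalizes distance to a non-zero prior $w_0$ supplied by a constrained model, $\hat{w} = \arg\min_w ||y - Xw||^2 + \lambda ||w - w_0||^2$. The sketch below uses a hypothetical equal-weights heuristic to produce $w_0$; the task and the regularization strength are illustrative assumptions.

```python
# A minimal sketch of ridge regression that shrinks toward a non-zero
# prior w0 obtained from a simpler, constrained model (here a hypothetical
# "equal weights" heuristic), instead of shrinking toward zero.
import numpy as np

def ridge_with_prior(X, y, w0, lam):
    """Solve argmin_w ||y - Xw||^2 + lam * ||w - w0||^2 in closed form."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y + lam * w0)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = X @ np.array([1.2, 0.9, 1.1, 1.0, 0.8]) + 0.1 * rng.normal(size=50)

# Constrained model: fit a single shared coefficient c, set w0 = c * 1.
s = X.sum(axis=1)
c = (s @ y) / (s @ s)
w0 = c * np.ones(X.shape[1])

w = ridge_with_prior(X, y, w0, lam=10.0)
# Large lam pulls the solution toward the heuristic; lam = 0 gives OLS,
# so lam interpolates between the constrained and the general model.
```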

We consider a dynamical system with two sources of uncertainties: (1) a parameterized input with a known probability distribution and (2) a stochastic input-to-response (ItR) function with heteroscedastic randomness. Our purpose is to efficiently quantify the extreme response probability when the ItR function is expensive to evaluate. This problem setup arises often in physics and engineering problems, with the randomness in ItR coming either from intrinsic uncertainties (say, as the solution of a stochastic equation) or from additional (critical) uncertainties that are not incorporated in the input parameter space. To reduce the required number of samples, we develop a sequential Bayesian experimental design method that leverages variational heteroscedastic Gaussian process regression (VHGPR) to account for the stochastic ItR, along with a new criterion for sequentially selecting the next-best samples. The validity of our new method is first tested on two synthetic problems with artificially defined stochastic ItR functions. Finally, we demonstrate the application of our method to an engineering problem of estimating the extreme ship motion probability in an ensemble of wave groups, where the uncertainty in ItR naturally originates from the uncertain initial condition of the ship motion in each wave group.
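To make the target quantity concrete: once a surrogate supplies a response mean $\mu(\theta)$ and a heteroscedastic noise level $\sigma(\theta)$, the extreme response probability is the input-averaged Gaussian exceedance, estimable by plain Monte Carlo over the known input distribution. The sketch below uses hypothetical stand-in functions for the surrogate, not VHGPR, and does not reproduce the sequential design criterion.

```python
# A minimal sketch of the quantity being estimated: with a heteroscedastic
# surrogate (mean mu(theta), noise std sigma(theta)), the extreme-response
# probability P(r > r*) is the average Gaussian tail over the input law.
import numpy as np
from scipy.stats import norm

def exceedance_probability(mu, sigma, theta_samples, r_star):
    """Monte Carlo estimate of P(response > r_star) under the surrogate."""
    m = mu(theta_samples)
    s = sigma(theta_samples)
    return norm.sf(r_star, loc=m, scale=s).mean()

rng = np.random.default_rng(0)
theta = rng.normal(size=100_000)             # known input distribution
mu = lambda t: np.sin(t)                     # hypothetical surrogate mean
sigma = lambda t: 0.1 + 0.2 * np.abs(t)      # heteroscedastic noise level
p_extreme = exceedance_probability(mu, sigma, theta, r_star=1.5)
```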

In a recent breakthrough, Mahadev constructed a classical verification of quantum computation (CVQC) protocol for a classical client to delegate decision problems in BQP to an untrusted quantum prover under computational assumptions. In this work, we explore further the feasibility of CVQC for the more general sampling problems in BQP and with the desirable blindness property. We contribute affirmative solutions to both as follows. (1) Motivated by the sampling nature of many quantum applications (e.g., quantum algorithms for machine learning and quantum supremacy tasks), we initiate the study of CVQC for quantum sampling problems (denoted by SampBQP). More precisely, in a CVQC protocol for a SampBQP problem, the prover and the verifier are given an input $x\in \{0,1\}^n$ and a quantum circuit $C$, and the goal of the classical client is to learn a sample from the output $z \leftarrow C(x)$ up to a small error, from its interaction with an untrusted prover. We demonstrate its feasibility by constructing a four-message CVQC protocol for SampBQP based on the quantum Learning With Errors assumption. (2) The blindness of a CVQC protocol refers to the property that the prover learns nothing about the client's input, and hence is blind. It is a highly desirable property that has been intensively studied for the delegation of quantum computation. We provide a simple yet powerful generic compiler that transforms any CVQC protocol into a blind one while preserving its completeness and soundness errors as well as the number of rounds. Applying our compiler to (a parallel repetition of) Mahadev's CVQC protocol for BQP and to our CVQC protocol for SampBQP yields the first constant-round blind CVQC protocols for BQP and SampBQP, with negligible and inverse-polynomial soundness errors respectively, and negligible completeness errors.

Many real-world optimization problems involve uncertain parameters with probability distributions that can be estimated using contextual feature information. In contrast to the standard approach of first estimating the distribution of the uncertain parameters and then optimizing the objective based on the estimate, we propose an integrated conditional estimation-optimization (ICEO) framework that estimates the underlying conditional distribution of the random parameter while considering the structure of the optimization problem. We directly model the relationship between the conditional distribution of the random parameter and the contextual features, and then estimate the probabilistic model with an objective that aligns with the downstream optimization problem. We show that our ICEO approach is asymptotically consistent under moderate regularity conditions and further provide finite-sample performance guarantees in the form of generalization bounds. Computationally, performing estimation with the ICEO approach is a non-convex and often non-differentiable optimization problem. We propose a general methodology for approximating the potentially non-differentiable mapping from the estimated conditional distribution to the optimal decision by a differentiable function, which greatly improves the performance of gradient-based algorithms applied to the non-convex problem. We also provide a polynomial optimization solution approach in the semi-algebraic case. Numerical experiments are conducted to show the empirical success of our approach in different situations, including settings with limited data samples and model mismatch.
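One simple instance of the smoothing idea is to replace an argmin over a finite candidate set of decisions with a softmin average, which is differentiable in the estimated parameter. The temperature and the candidate set below are illustrative assumptions, not the paper's construction.

```python
# A minimal sketch of smoothing a non-differentiable estimate-to-decision
# mapping: the hard argmin over a finite set of candidate decisions is
# replaced by a softmin-weighted average of the candidates.
import numpy as np

def soft_decision(c_hat, candidates, eps=0.1):
    """Differentiable surrogate for argmin_z c_hat @ z over candidates."""
    costs = candidates @ c_hat                  # cost of each candidate
    w = np.exp(-(costs - costs.min()) / eps)    # stabilized softmin weights
    w /= w.sum()
    return w @ candidates                       # convex blend of decisions

# Example: two candidate decisions; as eps -> 0 this recovers the argmin.
Z = np.array([[1.0, 0.0], [0.0, 1.0]])
print(soft_decision(np.array([0.3, 0.7]), Z))   # ~ [0.98, 0.02]
```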

We study random design linear regression with no assumptions on the distribution of the covariates and with a heavy-tailed response variable. In this distribution-free regression setting, we show that boundedness of the conditional second moment of the response given the covariates is a necessary and sufficient condition for achieving nontrivial guarantees. As a starting point, we prove an optimal version of the classical in-expectation bound for the truncated least squares estimator due to Gy\"{o}rfi, Kohler, Krzy\.{z}ak, and Walk. However, we show that this procedure fails with constant probability for some distributions despite its optimal in-expectation performance. Then, combining the ideas of truncated least squares, median-of-means procedures, and aggregation theory, we construct a non-linear estimator achieving excess risk of order $d/n$ with an optimal sub-exponential tail. While existing approaches to linear regression for heavy-tailed distributions focus on proper estimators that return linear functions, we highlight that the improperness of our procedure is necessary for attaining nontrivial guarantees in the distribution-free setting.
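The truncated least squares baseline is straightforward to sketch: fit ordinary least squares, then clip the fitted function at a level $\beta$. The choice of $\beta$ and the heavy-tailed toy data below are assumptions, and the paper's improper median-of-means aggregation is not reproduced.

```python
# A minimal sketch of the truncated least squares estimator discussed
# above: an OLS predictor whose outputs are clipped to [-beta, beta],
# which controls the damage a heavy-tailed response can do.
import numpy as np

def truncated_least_squares(X, y, beta):
    """OLS predictor with outputs truncated to the interval [-beta, beta]."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return lambda X_new: np.clip(X_new @ w, -beta, beta)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
# Student-t noise with 2 degrees of freedom: finite variance is borderline.
y = X @ np.array([1.0, -2.0, 0.5]) + rng.standard_t(df=2, size=200)
predict = truncated_least_squares(X, y, beta=10.0)
y_hat = predict(X)
```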

This paper develops and analyzes a general iterative framework for solving parameter-dependent and random convection-diffusion problems. It is inspired by the multi-modes method of [7,8] and the ensemble method of [20] and extends those methods into a more general and unified framework. The main idea of the framework is to reformulate the underlying problem as one with parameter-independent convection and diffusion coefficients and a parameter-dependent (and solution-dependent) right-hand side; a fixed-point iteration is then employed to compute the solution of the reformulated problem. The main benefit of the proposed approach is that an efficient direct solver and a block Krylov subspace iterative solver can be used at each iteration, allowing the $LU$ matrix factorization to be reused, or an efficient matrix-matrix multiplication to be performed, for all parameters, which in turn results in significant computational savings. Convergence and rates of convergence are established for the iterative method both at the variational continuous level and at the finite element discrete level under some structure conditions. Several strategies for establishing reformulations of parameter-dependent and random diffusion and convection-diffusion problems are proposed and their computational complexity is analyzed. Several 1-D and 2-D numerical experiments are also provided to demonstrate the efficiency of the proposed iterative method and to validate the theoretical convergence results.
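The computational pattern behind the savings can be sketched directly: factorize the parameter-independent operator once, then let every fixed-point iteration for every parameter reuse the $LU$ factors against a parameter- and solution-dependent right-hand side. The 1-D toy operator below is an assumption, not the paper's discretization.

```python
# A minimal sketch of the fixed-point iteration with LU reuse: the
# parameter-independent operator A is factorized once, and each iteration
# for each parameter only triggers cheap triangular solves.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

n = 100
h = 1.0 / (n + 1)
# 1-D discrete Laplacian (toy stand-in for the reformulated operator).
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
lu = lu_factor(A)                        # factorize once, reuse everywhere

def solve_for_parameter(theta, n_iter=50):
    """Fixed-point iteration u <- A^{-1} (f + theta * u)."""
    u = np.zeros(n)
    f = np.ones(n)
    for _ in range(n_iter):
        u = lu_solve(lu, f + theta * u)  # parameter-dependent RHS only
    # Converges since theta < lambda_min(A) ~ pi^2 (a contraction here).
    return u

solutions = [solve_for_parameter(theta) for theta in np.linspace(0.0, 5.0, 10)]
```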

This work proposes a novel tensor train random projection (TTRP) method for dimension reduction, where pairwise distances can be approximately preserved. Our TTRP is systematically constructed through a tensor train (TT) representation with TT-ranks equal to one. Based on the tensor train format, this new random projection method can speed up the dimension reduction procedure for high-dimensional datasets and requires lower storage costs, with little loss in accuracy, compared with existing methods. We provide a theoretical analysis of the bias and the variance of TTRP, which shows that this approach is an expected isometric projection with bounded variance, and we show that the Rademacher distribution is an optimal choice for generating the corresponding TT-cores. Detailed numerical experiments with synthetic datasets and the MNIST dataset are conducted to demonstrate the efficiency of TTRP.
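With all TT-ranks equal to one, the projection factorizes into small cores applied mode by mode, so the full projection matrix never needs to be formed. Below is a minimal sketch with Rademacher cores, in line with the distribution the paper identifies as optimal; the dimension split and the $1/\sqrt{M}$ scaling convention are assumptions.

```python
# A minimal sketch of a rank-one tensor train random projection: the input
# vector is reshaped into a tensor and each mode is contracted with a small
# Rademacher core, instead of multiplying by one huge dense matrix.
import numpy as np

def ttrp(x, in_dims, out_dims, rng):
    """Project x (length prod(in_dims)) down to length prod(out_dims)."""
    T = x.reshape(in_dims)
    for axis, (n, m) in enumerate(zip(in_dims, out_dims)):
        G = rng.choice([-1.0, 1.0], size=(m, n))    # Rademacher TT-core
        T = np.tensordot(G, T, axes=([1], [axis]))  # contract one mode
        T = np.moveaxis(T, 0, axis)                 # restore mode order
    M = int(np.prod(out_dims))
    return T.reshape(M) / np.sqrt(M)                # expected isometry

rng = np.random.default_rng(0)
x = rng.normal(size=64 * 64)                             # dimension 4096
y = ttrp(x, in_dims=(64, 64), out_dims=(4, 4), rng=rng)  # dimension 16
```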

In this article, a discrete analogue of the continuous Teissier distribution is presented. Several of its important distributional characteristics are derived. The unknown parameter is estimated using the method of maximum likelihood and the method of moments. Two real-data applications are presented to show the applicability of the proposed model.
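A common way to build such a discrete analogue is survival discretization, $P(X=k)=S(k)-S(k+1)$ for $k=0,1,2,\ldots$, applied to the continuous survival function. The sketch below assumes the Teissier (Muth) survival function $S(t)=\exp(\lambda t + (1-e^{\lambda t})/\lambda)$; the paper's exact parametrization may differ.

```python
# A minimal sketch of the survival-discretization construction of a
# discrete analogue: P(X = k) = S(k) - S(k+1). The survival function below
# is an assumed Teissier (Muth) form, used for illustration only.
import numpy as np

def teissier_survival(t, lam):
    return np.exp(lam * t + (1.0 - np.exp(lam * t)) / lam)

def discrete_teissier_pmf(k, lam):
    """P(X = k) from discretizing the continuous survival function."""
    k = np.asarray(k, dtype=float)
    return teissier_survival(k, lam) - teissier_survival(k + 1, lam)

lam = 0.5
k = np.arange(20)
pmf = discrete_teissier_pmf(k, lam)
# The pmf telescopes to S(0) = 1, so it defines a valid distribution;
# the MLE maximizes sum(log(discrete_teissier_pmf(data, lam))) over lam.
```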

Importance sampling is one of the most widely used variance reduction strategies in Monte Carlo rendering. In this paper, we propose a novel importance sampling technique that uses a neural network to learn how to sample from a desired density represented by a set of samples. Our approach considers an existing Monte Carlo rendering algorithm as a black box. During a scene-dependent training phase, we learn to generate samples with a desired density in the primary sample space of the rendering algorithm using maximum likelihood estimation. We leverage a recent neural network architecture that was designed to represent real-valued non-volume preserving ('Real NVP') transformations in high-dimensional spaces. We use Real NVP to non-linearly warp primary sample space and obtain desired densities. In addition, Real NVP efficiently computes the determinant of the Jacobian of the warp, which is required to implement the change of integration variables implied by the warp. A main advantage of our approach is that it is agnostic to the underlying light transport effects, and can be combined with many existing rendering techniques by treating them as a black box. We show that our approach leads to effective variance reduction in several practical scenarios.
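The building block of the warp can be sketched as a single affine coupling layer: half of the primary-sample-space coordinates pass through unchanged, the other half are scaled and shifted conditioned on the first half, and the triangular Jacobian makes the log-determinant a cheap sum. The tiny fixed functions standing in for the trained scale and translation networks below are assumptions.

```python
# A minimal sketch of one Real NVP affine coupling layer, the building
# block used to warp primary sample space. Toy functions replace the
# trained scale (s) and translation (t) networks.
import numpy as np

def coupling_forward(u, s, t):
    """Warp u = (u1, u2): keep u1, scale/shift u2 conditioned on u1.
    Returns the warped sample and log|det Jacobian|, which implements
    the change of integration variables implied by the warp."""
    d = u.shape[-1] // 2
    u1, u2 = u[..., :d], u[..., d:]
    log_scale = s(u1)                        # would be a neural net
    y2 = u2 * np.exp(log_scale) + t(u1)      # affine transform of u2
    y = np.concatenate([u1, y2], axis=-1)
    log_det = log_scale.sum(axis=-1)         # Jacobian is triangular
    return y, log_det

# Toy stand-ins for the scale and translation networks.
s = lambda u1: 0.5 * np.tanh(u1)
t = lambda u1: 0.1 * u1

u = np.random.default_rng(0).uniform(size=(8, 4))  # primary sample space
y, log_det = coupling_forward(u, s, t)
# For uniform u, the warped sample's density is exp(-log_det).
```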
