
The logistic regression model is one of the most popular data generation models for noisy binary classification problems. In this work, we study the sample complexity of estimating the parameters of the logistic regression model up to a given $\ell_2$ error, in terms of the dimension and the inverse temperature, with standard normal covariates. The inverse temperature controls the signal-to-noise ratio of the data generation process. While both generalization bounds and the asymptotic performance of the maximum-likelihood estimator for logistic regression are well studied, a non-asymptotic sample complexity bound for parameter estimation that makes the dependence on the error and the inverse temperature explicit is absent from previous analyses. We show that the sample complexity curve has two change-points in terms of the inverse temperature, clearly separating the low, moderate, and high temperature regimes.
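
As a concrete illustration (not taken from the paper), the following minimal Python sketch generates data from this model with a unit-norm parameter and inverse temperature beta, and measures the $\ell_2$ error of the maximum-likelihood estimate; all dimensions and values are made up.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical instance of the model: covariates x_i ~ N(0, I_d) and labels
# y_i ~ Bernoulli(sigmoid(beta * <theta*, x_i>)), where beta is the inverse
# temperature and ||theta*||_2 = 1.
rng = np.random.default_rng(0)
d, n, beta = 5, 2000, 2.0
theta_star = rng.standard_normal(d)
theta_star /= np.linalg.norm(theta_star)

X = rng.standard_normal((n, d))
p = 1.0 / (1.0 + np.exp(-beta * (X @ theta_star)))
y = rng.binomial(1, p)

# Maximum-likelihood estimate of w = beta * theta*; the l2 parameter error
# is measured against the scaled true parameter.
def neg_log_lik(w):
    z = X @ w
    return np.sum(np.logaddexp(0.0, z) - y * z)

w_hat = minimize(neg_log_lik, np.zeros(d), method="BFGS").x
print("l2 error:", np.linalg.norm(w_hat - beta * theta_star))
```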

Related Content

Longitudinal data are important in numerous fields, such as healthcare, sociology, and seismology, but real-world datasets present notable challenges for practitioners because they can be high-dimensional, contain structured missingness patterns, and have measurement time points governed by an unknown stochastic process. While various solutions have been suggested, the majority of them have been designed to account for only one of these challenges. In this work, we propose a flexible and efficient latent-variable model that is capable of addressing all of these limitations. Our approach utilizes Gaussian processes to capture temporal correlations between samples and their associated missingness masks, as well as to model the underlying point process. We construct our model as a variational autoencoder with encoder and decoder models parameterised by deep neural networks, and develop a scalable amortised variational inference approach for efficient model training. We demonstrate competitive performance using both simulated and real datasets.
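
For intuition, here is a minimal, self-contained sketch of the ingredients the model combines, not the paper's architecture: a Gaussian-process draw over a latent trajectory at time points from a Poisson process, with a missingness mask applied; all rates, kernels, and noise levels are assumptions.

```python
import numpy as np

# Measurement times governed by a homogeneous Poisson process (assumed rate).
rng = np.random.default_rng(1)
t = np.cumsum(rng.exponential(scale=0.5, size=30))

# A squared-exponential kernel encodes temporal correlation between samples.
def rbf(t1, t2, lengthscale=1.0, variance=1.0):
    return variance * np.exp(-0.5 * (t1[:, None] - t2[None, :]) ** 2 / lengthscale**2)

K = rbf(t, t) + 1e-6 * np.eye(len(t))                    # jitter for stability
z = np.linalg.cholesky(K) @ rng.standard_normal(len(t))  # latent GP draw

# Structured missingness: entries are dropped according to a (here random) mask.
mask = rng.random(len(t)) < 0.7                          # True = observed
x = np.where(mask, z + 0.1 * rng.standard_normal(len(t)), np.nan)
print(np.round(x, 2))
```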

Quantum computers are now on the brink of outperforming their classical counterparts. One way to demonstrate the advantage of quantum computation is through quantum random sampling performed on quantum computing devices. However, existing tools for verifying that a quantum device indeed performed the classically intractable sampling task are either impractical or not scalable to the quantum advantage regime. The verification problem thus remains an outstanding challenge. Here, we experimentally demonstrate efficiently verifiable quantum random sampling in the measurement-based model of quantum computation on a trapped-ion quantum processor. We create and sample from random cluster states, which are at the heart of measurement-based computing, up to a size of 4 x 4 qubits. By exploiting the structure of these states, we are able to recycle qubits during the computation to sample from entangled cluster states that are larger than the qubit register. We then efficiently estimate the fidelity to verify the prepared states -- in single instances and on average -- and compare our results to cross-entropy benchmarking. Finally, we study the effect of experimental noise on the certificates. Our results and techniques provide a feasible path toward a verified demonstration of a quantum advantage.
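
As a small-scale illustration of the states involved (not the trapped-ion experiment itself), this sketch builds a 2 x 2 cluster state as a dense statevector and checks one of its stabilizers, the structural property that makes efficient fidelity estimation possible; sizes and orientations are chosen for illustration.

```python
import numpy as np
from functools import reduce

# Single-qubit operators.
I = np.eye(2); X = np.array([[0, 1], [1, 0]]); Z = np.diag([1.0, -1.0])

def kron_all(ops):
    return reduce(np.kron, ops)

n = 4                                   # qubits on a 2 x 2 grid
edges = [(0, 1), (2, 3), (0, 2), (1, 3)]

# Start in |+>^n, then apply a controlled-Z gate on every lattice edge.
psi = np.ones(2**n) / np.sqrt(2**n)
for (a, b) in edges:
    diag = np.ones(2**n)
    for idx in range(2**n):
        bits = [(idx >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[a] == 1 and bits[b] == 1:
            diag[idx] = -1.0            # CZ flips the sign of |...1...1...>
    psi = diag * psi

# Stabilizer K_0 = X_0 Z_1 Z_2 (vertex 0 with its grid neighbors) fixes the state.
K0 = kron_all([X, Z, Z, I])
print(np.allclose(K0 @ psi, psi))       # True
```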

When the target of inference is a real-valued function of probability parameters in the k-sample multinomial problem, variance estimation may be challenging. In small samples, methods like the nonparametric bootstrap or delta method may perform poorly. We propose a novel general method in this setting for computing exact p-values and confidence intervals, meaning that type I error rates are correctly bounded and confidence intervals achieve at least nominal coverage at all sample sizes. Our method is applicable to any real-valued function of multinomial probabilities, accommodating an arbitrary number of samples with varying category counts. We describe the method and provide an implementation of it in R, with some computational optimization to ensure broad applicability. Simulations demonstrate our method's ability to maintain correct coverage rates in settings where the nonparametric bootstrap fails.
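
To illustrate the exactness principle on a simpler, classical example (not the paper's algorithm), the sketch below computes an exact unconditional p-value for comparing two binomial proportions by taking a supremum over a grid of the nuisance parameter; counts and grid resolution are made up.

```python
import numpy as np
from scipy.stats import binom

# Testing H0: p1 = p2 with two small binomials. An exact p-value takes the
# supremum over the nuisance parameter, so the type I error rate is bounded
# at all sample sizes -- unlike bootstrap or delta-method approximations.
n1, n2 = 8, 10
x1, x2 = 6, 3                            # observed counts
t_obs = x1 / n1 - x2 / n2                # statistic: difference in proportions

grid = np.linspace(1e-6, 1 - 1e-6, 200)  # grid over the common nuisance p
pvals = []
for p in grid:
    prob = 0.0
    for a in range(n1 + 1):
        for b in range(n2 + 1):
            # Accumulate the probability of outcomes at least as extreme.
            if abs(a / n1 - b / n2) >= abs(t_obs) - 1e-12:
                prob += binom.pmf(a, n1, p) * binom.pmf(b, n2, p)
    pvals.append(prob)

print("exact p-value:", max(pvals))      # supremum over the grid
```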

The multiscale simplicial flat norm (MSFN) of a d-cycle is a family of optimal homology problems indexed by a scale parameter $\lambda \geq 0$. Each instance (mSFN) optimizes the total weight of a homologous d-cycle and a bounding (d+1)-chain, with one of the components scaled by $\lambda$. We propose a min-cost flow formulation for solving instances of mSFN at a given scale $\lambda$ in polynomial time in the case of (d+1)-dimensional simplicial complexes embedded in $\mathbb{R}^{d+1}$ and homology over $\mathbb{Z}$. Furthermore, we establish weak and strong duality for mSFN, as well as the complementary slackness conditions. Additionally, we prove optimality conditions for directed flow formulations with cohomology over $\mathbb{Z}_+$. Next, we propose an approach based on the multiscale flat norm, a notion of distance between objects defined in the field of geometric measure theory, to compute the distance between a pair of planar geometric networks. Using a triangulation of the domain containing the input networks, the flat norm distance between two networks at a given scale can be computed by solving a linear program. In addition, this computation automatically identifies the 2D regions (patches) that capture where the two networks differ. We demonstrate through 2D examples that the flat norm distance can capture the variations of inputs more accurately than the commonly used Hausdorff distance. As a notion of stability, we also derive upper bounds on the flat norm distance between a simple 1D curve and its perturbed version as a function of the radius of perturbation, for a restricted class of perturbations. We demonstrate our approach on a set of actual power networks from a county in the USA. Our approach can be extended to validate synthetic networks created for multiple infrastructures such as transportation, communication, water, and gas networks.
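
The linear-programming view of the flat norm can be shown on a toy instance. The sketch below (an illustration, not the paper's min-cost flow formulation) computes $F_\lambda(t) = \min_s \|t - \partial s\|_1 + \lambda \|s\|_1$ for two unit-weight triangles sharing an edge, using a standard variable-splitting LP; the complex and weights are made up.

```python
import numpy as np
from scipy.optimize import linprog

# Boundary matrix: rows are edges (0,1),(1,2),(0,2),(1,3),(2,3);
# columns are the oriented triangles (0,1,2) and (1,3,2).
B = np.array([[ 1,  0],
              [ 1, -1],
              [-1,  0],
              [ 0,  1],
              [ 0, -1]])
t = B @ np.array([1, 1])    # input 1-cycle: boundary of the two-triangle union

lam = 0.5
m, k = B.shape
# Split x = x+ - x- and s = s+ - s- so the L1 objective becomes linear,
# subject to x+ - x- + B(s+ - s-) = t, i.e. x = t - boundary(s).
c = np.concatenate([np.ones(2 * m), lam * np.ones(2 * k)])
A_eq = np.hstack([np.eye(m), -np.eye(m), B, -B])
res = linprog(c, A_eq=A_eq, b_eq=t, bounds=(0, None))
print("flat norm at lambda=0.5:", res.fun)  # 1.0: filling both triangles wins
```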

Factor models are widely used for dimension reduction in the analysis of multivariate data. This is achieved through decomposition of a p x p covariance matrix into the sum of two components. Through a latent factor representation, these components can be interpreted as a diagonal matrix of idiosyncratic variances and a shared variation matrix, that is, the product of a p x k factor loadings matrix and its transpose. If k << p, this defines a parsimonious factorisation of the covariance matrix. Historically, little attention has been paid to incorporating prior information in Bayesian analyses using factor models where, at best, the prior for the factor loadings is order invariant. In this work, a class of structured priors is developed that can encode ideas of dependence structure about the shared variation matrix. The construction allows data-informed shrinkage towards sensible parametric structures while also facilitating inference over the number of factors. Using an unconstrained reparameterisation of stationary vector autoregressions, the methodology is extended to stationary dynamic factor models. For computational inference, parameter-expanded Markov chain Monte Carlo samplers are proposed, including an efficient adaptive Gibbs sampler. Two substantive applications showcase the scope of the methodology and its inferential benefits.
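
A minimal numerical sketch of this decomposition, with made-up dimensions, shows the parsimony argument: the factor structure has far fewer free parameters than an unconstrained covariance.

```python
import numpy as np

# Sigma = Lambda Lambda^T + Psi, where Lambda is p x k and Psi is diagonal.
rng = np.random.default_rng(2)
p, k = 12, 3
Lambda = rng.standard_normal((p, k))           # factor loadings
Psi = np.diag(rng.uniform(0.5, 1.5, size=p))   # idiosyncratic variances

Sigma = Lambda @ Lambda.T + Psi

# Parameter counts: unconstrained covariance vs. factor structure.
print("free params, full covariance:", p * (p + 1) // 2)   # 78
print("free params, factor model:   ", p * k + p)           # 48 (ignoring rotation)
```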

Rational approximation is a powerful tool to obtain accurate surrogates for nonlinear functions that are easy to evaluate and linearize. The interpolatory adaptive Antoulas--Anderson (AAA) method is one approach to construct such approximants numerically. For large-scale vector- and matrix-valued functions, however, the direct application of the set-valued variant of AAA becomes inefficient. We propose and analyze a new sketching approach for such functions called sketchAAA that, with high probability, leads to much better approximants than previously suggested approaches while retaining efficiency. The sketching approach works in a black-box fashion where only evaluations of the nonlinear function at sampling points are needed. Numerical tests with nonlinear eigenvalue problems illustrate the efficacy of our approach, with speedups above 200 for sampling large-scale black-box functions without sacrificing accuracy.
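
To make the sketching idea concrete (a simplified illustration, not the sketchAAA implementation), the snippet below compresses samples of a large vector-valued function with a random Gaussian sketch; a set-valued rational approximation such as AAA would then be run on the s sketched components rather than all n entries. The function, sizes, and sampling points are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, s = 10_000, 8                        # ambient dimension vs. sketch size
Z = 1j * np.linspace(0.1, 10.0, 50)     # sampling points on the imaginary axis

# Hypothetical black-box function F: C -> C^n (n simple poles spread over [1, 5]).
poles = np.linspace(1.0, 5.0, n)
F = lambda z: 1.0 / (z - poles)

# Random Gaussian sketch: only the s-dimensional images S^T F(z) are kept.
S = rng.standard_normal((n, s)) / np.sqrt(s)
sketched = np.array([S.T @ F(z) for z in Z])
print(sketched.shape)                   # (50, 8): input for set-valued AAA
```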

Riemannian optimization is concerned with problems where the independent variable lies on a smooth manifold. A number of problems from numerical linear algebra fall into this category, where the manifold is usually specified by special matrix structures, such as orthogonality or definiteness. Following this line of research, we investigate tools for Riemannian optimization on the symplectic Stiefel manifold. We complement the existing set of numerical optimization algorithms with a Riemannian trust region method tailored to the symplectic Stiefel manifold. To this end, we derive a matrix formula for the Riemannian Hessian under a right-invariant metric. Moreover, we propose a novel retraction for approximating the Riemannian geodesics. Finally, we conduct a comparative study in which we juxtapose the performance of the Riemannian variants of the steepest descent, conjugate gradients, and trust region methods on selected matrix optimization problems that feature symplectic constraints.
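
For reference, here is a minimal sketch of the constraint that defines the symplectic Stiefel manifold, $\mathrm{Sp}(2k, 2n) = \{X \in \mathbb{R}^{2n \times 2k} : X^\top J_{2n} X = J_{2k}\}$, together with a simple feasible point; the dimensions are made up.

```python
import numpy as np

def J(m):
    """Standard symplectic form on R^{2m}."""
    return np.block([[np.zeros((m, m)), np.eye(m)],
                     [-np.eye(m), np.zeros((m, m))]])

n, k = 5, 2
# A feasible point built from identity columns e_1..e_k and e_{n+1}..e_{n+k}.
E = np.zeros((2 * n, 2 * k))
for i in range(k):
    E[i, i] = 1.0           # e_{i+1}
    E[n + i, k + i] = 1.0   # e_{n+i+1}

print(np.allclose(E.T @ J(n) @ E, J(k)))  # True: E lies on Sp(2k, 2n)
```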

We develop a high order accurate numerical method for solving the elastic wave equation in second-order form. We hybridize the computationally efficient Cartesian grid formulation of finite differences with geometrically flexible discontinuous Galerkin methods on unstructured grids by a penalty based technique. At the interface between the two methods, we construct projection operators for the pointwise finite difference solutions and discontinuous Galerkin solutions based on piecewise polynomials. In addition, we optimize the projection operators for both accuracy and spectrum. We prove that the overall discretization conserves a discrete energy, and verify optimal convergence in numerical experiments.
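
The energy-conservation claim can be illustrated on a much simpler toy problem (the 1D wave equation in second-order form with a central difference in space and leapfrog in time, not the paper's hybridized FD-DG scheme): the shifted discrete energy below is conserved to machine precision.

```python
import numpy as np

# 1D wave equation u_tt = c^2 u_xx with homogeneous Dirichlet boundaries,
# discretized as u'' = -A u and advanced with leapfrog.
m = 199                                     # interior grid points
dx = 1.0 / (m + 1)
c = 1.0
dt = 0.5 * dx / c                           # CFL-stable time step
A = (c / dx) ** 2 * (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1))

x = np.linspace(dx, 1 - dx, m)
u_prev = np.exp(-200 * (x - 0.5) ** 2)      # Gaussian pulse, zero initial velocity
u = u_prev - 0.5 * dt**2 * (A @ u_prev)     # second-order Taylor start-up

# Shifted discrete energy E^{n+1/2}; leapfrog conserves it exactly.
def energy(u_old, u_new):
    v = (u_new - u_old) / dt
    return 0.5 * v @ v + 0.5 * u_new @ (A @ u_old)

E0 = energy(u_prev, u)
for _ in range(1000):
    u_prev, u = u, 2 * u - u_prev - dt**2 * (A @ u)
print("relative energy drift:", abs(energy(u_prev, u) - E0) / E0)  # ~1e-15
```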

Practical parameter identifiability in ODE-based epidemiological models is a known issue, yet one that merits further study. It is essentially ubiquitous due to noise and errors in real data. In this study, to avoid uncertainty stemming from data of unknown quality, simulated data with added noise are used to investigate practical identifiability in two distinct epidemiological models. Particular emphasis is placed on the role of initial conditions, which are assumed unknown, except those that are directly measured. Instead of focusing on one method of estimation, we use and compare results from various widely used methods, including maximum likelihood and Markov chain Monte Carlo (MCMC) estimation. Among other findings, our analysis revealed that the MCMC estimator is overall more robust than the point estimators considered. Its estimates and predictions are improved when the initial conditions of certain compartments are fixed so that the model becomes globally identifiable. For the point estimators, whether fixing or fitting the initial conditions that are not directly measured improves parameter estimates is model-dependent. Specifically, in the standard SEIR model, fixing the initial condition for the susceptible population S(0) improved parameter estimates, while this was not true when fixing the initial condition of the asymptomatic population in a more involved model. Our study corroborates that the quality of parameter estimates changes depending on whether the time series under consideration is pre-peak or post-peak. Finally, our examples suggest that in the presence of significantly noisy data, the value of structural identifiability is moot.
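
A compressed sketch of this kind of experiment (simplified, with made-up parameter values and noise level): simulate a standard SEIR model, perturb the observed compartment with noise, and re-fit the rate parameters with S(0) fixed, one of the scenarios the abstract compares.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

N = 1e5  # total population (assumed)

def seir(t, y, beta, sigma, gamma):
    S, E, I, R = y
    return [-beta * S * I / N,
            beta * S * I / N - sigma * E,
            sigma * E - gamma * I,
            gamma * I]

p_true = (0.4, 0.2, 0.1)
y0 = [N - 10, 0, 10, 0]                     # initial conditions, S(0) fixed
t_obs = np.linspace(0, 120, 60)
sol = solve_ivp(seir, (0, 120), y0, t_eval=t_obs, args=p_true, rtol=1e-8)
rng = np.random.default_rng(4)
data = sol.y[2] * (1 + 0.05 * rng.standard_normal(t_obs.size))  # noisy I(t)

def resid(p):
    s = solve_ivp(seir, (0, 120), y0, t_eval=t_obs, args=tuple(p), rtol=1e-8)
    return s.y[2] - data

fit = least_squares(resid, x0=[0.3, 0.3, 0.2], bounds=(1e-4, 2.0))
print("true:", p_true, "estimated:", np.round(fit.x, 3))
```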

This paper investigates the problem of regression model generation. A model is a superposition of primitive functions. The model structure is described by a weighted colored graph. Each graph vertex corresponds to some primitive function. An edge assigns a superposition of two functions. The weight of an edge equals the probability of superposition. To generate an optimal model one has to reconstruct its structure from its graph adjacency matrix. The proposed algorithm reconstructs the minimum spanning tree from the weighted colored graph. This paper presents a novel solution based on the prize-collecting Steiner tree algorithm. This algorithm is compared with its alternatives.
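
On a toy weighted graph (made-up vertices and probabilities), the minimum-spanning-tree baseline looks as follows; the prize-collecting Steiner tree variant the paper proposes would additionally trade vertex prizes against edge costs.

```python
import networkx as nx

# Vertices are primitive functions; edge weights encode superposition
# probabilities. Edge cost = 1 - probability, so that likely superpositions
# are preferred by the minimum spanning tree.
G = nx.Graph()
edges = [("sin", "exp", 0.9), ("exp", "lin", 0.7),
         ("sin", "lin", 0.2), ("lin", "sqrt", 0.6)]
for u, v, prob in edges:
    G.add_edge(u, v, weight=1.0 - prob, prob=prob)

tree = nx.minimum_spanning_tree(G, weight="weight")
print(sorted(tree.edges()))   # skeleton of the generated model
```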
