
Cross-validation is the standard approach for tuning parameter selection in many non-parametric regression problems. However, its use is less common in change-point regression, perhaps because its prediction-error-based criterion may appear to permit small spurious changes and hence be less well suited to estimating the number and locations of change-points. We show that, in fact, the problems of cross-validation with squared error loss are more severe and can lead to systematic under- or over-estimation of the number of change-points, and to highly suboptimal estimation of the mean function, in simple settings where changes are easily detectable. We propose two simple approaches to remedy these issues, the first involving the use of absolute error rather than squared error loss, and the second involving modifying the holdout sets used. For the latter, we provide conditions that permit consistent estimation of the number of change-points for a general change-point estimation procedure. We show that these conditions are satisfied for least squares estimation using new results on its performance when supplied with the incorrect number of change-points. Numerical experiments show that our new approaches are competitive with common change-point methods using classical tuning parameter choices when error distributions are well specified, but can substantially outperform them in misspecified models. An implementation of our methodology is available in the R package crossvalidationCP on CRAN.
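As a rough illustration of the absolute-error criterion (a minimal sketch, not the authors' crossvalidationCP implementation), the code below selects the number of change-points by an interleaved odd/even split: a least-squares piecewise-constant fit on one half is scored on the other half with absolute rather than squared error. The dynamic program, the interleaved split, and the candidate range K_max are illustrative choices.

```python
import numpy as np

def best_segmentation(y, n_segments):
    """Least-squares piecewise-constant fit with a fixed number of segments,
    found by a simple O(n^2 * n_segments) dynamic program.  Returns the
    change-point positions (segment boundaries) within y."""
    n = len(y)
    csum = np.concatenate(([0.0], np.cumsum(y)))
    csum2 = np.concatenate(([0.0], np.cumsum(y ** 2)))

    def sse(i, j):  # squared-error cost of fitting one constant to y[i:j]
        s, s2, m = csum[j] - csum[i], csum2[j] - csum2[i], j - i
        return s2 - s * s / m

    dp = np.full((n_segments + 1, n + 1), np.inf)
    arg = np.zeros((n_segments + 1, n + 1), dtype=int)
    dp[0, 0] = 0.0
    for k in range(1, n_segments + 1):
        for j in range(k, n + 1):
            costs = [dp[k - 1, i] + sse(i, j) for i in range(k - 1, j)]
            best = int(np.argmin(costs))
            dp[k, j], arg[k, j] = costs[best], best + k - 1
    cps, j = [], n                      # backtrack the segment boundaries
    for k in range(n_segments, 0, -1):
        j = arg[k, j]
        cps.append(j)
    return sorted(cps)[1:]              # drop the leading boundary at 0

def piecewise_means(y, cps):
    """Fitted piecewise-constant mean given change-point positions."""
    bounds, fit = [0] + list(cps) + [len(y)], np.empty(len(y))
    for a, b in zip(bounds[:-1], bounds[1:]):
        fit[a:b] = y[a:b].mean()
    return fit

def cv_number_of_changepoints(y, K_max=5):
    """Choose the number of change-points by odd/even-split cross-validation
    with ABSOLUTE error loss on the held-out half."""
    train, test = y[0::2], y[1::2]
    m = min(len(train), len(test))
    scores = []
    for K in range(K_max + 1):
        fit = piecewise_means(train, best_segmentation(train, K + 1))
        scores.append(np.mean(np.abs(test[:m] - fit[:m])))
    return int(np.argmin(scores)), scores

rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)])
K_hat, scores = cv_number_of_changepoints(y)
print("estimated number of change-points:", K_hat)   # expect 1 for this example
```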


We propose to improve the convergence properties of the single-reference coupled cluster (CC) method through an augmented Lagrangian formalism. The conventional CC method recasts a linear, exponentially large eigenvalue problem as the problem of determining the roots of a nonlinear system of equations of manageable size. However, current numerical procedures for solving this system of equations to obtain the lowest eigenvalue suffer from two practical issues: first, the iterative solution of the CC equations may fail to converge, and second, even when it converges, it may converge to other -- potentially unphysical -- states, which are stationary points of the CC energy expression. We show that both issues can be dealt with when a suitably defined energy is minimized in addition to solving the original CC equations. We further propose an augmented Lagrangian method for coupled cluster (alm-CC) to solve the resulting constrained optimization problem. We numerically investigate the proposed augmented Lagrangian formulation, showing that the convergence towards the ground state is significantly more stable and that the optimization procedure is less susceptible to local minima. Furthermore, the computational cost of alm-CC is comparable to that of the conventional CC method.
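The key ingredient, minimizing an energy subject to the original equations as equality constraints, rests on the generic augmented Lagrangian loop sketched below. This is only a schematic illustration: the functions f and g are toy placeholders, not actual coupled cluster quantities, and the parameters are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, g, x0, rho=10.0, n_outer=20, tol=1e-8):
    """Generic augmented Lagrangian loop: minimize f(x) subject to g(x) = 0.

    Each outer iteration minimizes
        L_rho(x, lam) = f(x) + lam . g(x) + (rho / 2) * ||g(x)||^2
    over x, then performs the first-order multiplier update lam <- lam + rho * g(x)."""
    x = np.asarray(x0, dtype=float)
    lam = np.zeros_like(np.atleast_1d(g(x)))
    for _ in range(n_outer):
        def L(z):
            gz = np.atleast_1d(g(z))
            return f(z) + lam @ gz + 0.5 * rho * gz @ gz
        x = minimize(L, x, method="BFGS").x
        gx = np.atleast_1d(g(x))
        if np.linalg.norm(gx) < tol:
            break
        lam = lam + rho * gx       # multiplier update
        rho *= 2.0                 # tighten the penalty
    return x, lam

# Toy illustration (not a coupled cluster problem): minimize an "energy"
# f(x) = x0^2 + x1^2 subject to the single "amplitude equation" x0 + x1 - 1 = 0.
f = lambda x: x[0] ** 2 + x[1] ** 2
g = lambda x: np.array([x[0] + x[1] - 1.0])
x_opt, lam_opt = augmented_lagrangian(f, g, x0=[0.0, 0.0])
print(x_opt)  # close to [0.5, 0.5]
```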

We consider an arbitrary bounded discrete time series. Based on its statistical features, and without any use of the Fourier transform, we find an almost periodic function that suitably characterizes the corresponding time series.

We address the problem of the best uniform approximation of a continuous function on a convex domain. The approximation is by linear combinations of a finite system of functions (not necessarily Chebyshev) under arbitrary linear constraints. By modifying the concepts of alternance and of the Remez iterative procedure, we present a method that demonstrates its efficiency in numerical problems. A linear rate of convergence is proved under some favourable assumptions. Special attention is paid to systems of complex exponentials, Gaussian functions, and lacunary algebraic and trigonometric polynomials. Applications to signal processing, linear ODEs, switching dynamical systems, and Markov-Bernstein type inequalities are considered.
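The modified alternance/Remez procedure itself is not reproduced here, but the minimax problem it targets can be illustrated on a fine grid, where best uniform approximation (optionally with extra linear constraints on the coefficients) reduces to a small linear program. The basis, grid, and target function below are illustrative choices.

```python
import numpy as np
from scipy.optimize import linprog

def minimax_fit(f_vals, basis_vals):
    """Discretized best uniform (Chebyshev) approximation via a linear program.

    f_vals     : (m,) target values f(x_i) on a grid
    basis_vals : (m, n) matrix with entries phi_j(x_i)
    Solves  min_{c, t} t  subject to  -t <= (basis_vals @ c - f_vals)_i <= t.
    Variables are ordered as (c_1, ..., c_n, t)."""
    m, n = basis_vals.shape
    cost = np.zeros(n + 1)
    cost[-1] = 1.0                                   # objective: minimize t
    A_ub = np.vstack([
        np.hstack([basis_vals, -np.ones((m, 1))]),   #  residual - t <= 0
        np.hstack([-basis_vals, -np.ones((m, 1))]),  # -residual - t <= 0
    ])
    b_ub = np.concatenate([f_vals, -f_vals])
    bounds = [(None, None)] * n + [(0, None)]
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n], res.x[-1]                      # coefficients, minimax error

# Example: best uniform approximation of exp(x) on [0, 1] by 1, x, x^2.
x = np.linspace(0.0, 1.0, 500)
basis = np.vstack([np.ones_like(x), x, x ** 2]).T
coeffs, err = minimax_fit(np.exp(x), basis)
print(coeffs, err)
```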

We propose a method for obtaining parsimonious decompositions of networks into higher order interactions which can take the form of arbitrary motifs. The method is based on a class of analytically solvable generative models, where vertices are connected via explicit copies of motifs, which in combination with non-parametric priors allow us to infer higher order interactions from dyadic graph data without any prior knowledge of the types or frequencies of such interactions. Crucially, we also consider 'degree-corrected' models that correctly reflect the degree distribution of the network and consequently prove to be a better fit for many real-world networks than non-degree-corrected models. We test the presented approach on simulated data, for which we recover the set of underlying higher order interactions to a high degree of accuracy. For empirical networks, the method identifies concise sets of atomic subgraphs from among thousands of candidates that cover a large fraction of edges and include higher order interactions of known structural and functional significance. The method not only produces an explicit higher order representation of the network but also a fit of the network to analytically tractable models, opening new avenues for the systematic study of higher order network structures.
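A purely illustrative sketch of the generative picture (explicit motif copies projected down to a dyadic edge set) is given below; the inference machinery and non-parametric priors of the paper are not shown, and the motif set (triangles plus single edges) and sizes are hypothetical.

```python
import random

def network_from_motifs(n_vertices, n_triangles, n_edges, seed=0):
    """Generate a simple graph as the dyadic projection of explicit motif copies:
    a set of triangles (3-clique motifs) plus a set of single edges.  Only the
    resulting edge set would be observed in dyadic graph data."""
    rng = random.Random(seed)
    edges, motifs = set(), []
    for _ in range(n_triangles):
        u, v, w = rng.sample(range(n_vertices), 3)
        motifs.append(("triangle", (u, v, w)))
        edges |= {frozenset(p) for p in [(u, v), (v, w), (u, w)]}
    for _ in range(n_edges):
        u, v = rng.sample(range(n_vertices), 2)
        motifs.append(("edge", (u, v)))
        edges.add(frozenset((u, v)))
    return motifs, edges

motifs, edges = network_from_motifs(n_vertices=50, n_triangles=20, n_edges=30)
print(len(motifs), "motif copies projected onto", len(edges), "distinct edges")
```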

The univariate dimension reduction (UDR) method is an effective way to estimate the statistical moments of the output in a large class of uncertainty quantification (UQ) problems. UDR's fundamental strategy is to approximate the original function using univariate functions so that the UQ cost scales only linearly with the dimension of the problem. Nonetheless, UDR's effectiveness can diminish when uncertain inputs have high variance, particularly when assessing the output's second and higher-order statistical moments. This paper proposes a new method, gradient-enhanced univariate dimension reduction (GUDR), that enhances the accuracy of UDR by incorporating univariate gradient function terms into the UDR approximation function. Theoretical results indicate that the GUDR approximation is expected to be one order more accurate than UDR in approximating the original function, and it is expected to generate more accurate results in computing the output's second and higher-order statistical moments. Our proposed method uses a computational graph transformation strategy to efficiently evaluate the GUDR approximation function on tensor-grid quadrature inputs, and uses the tensor-grid input-output data to compute the statistical moments of the output. With an efficient automatic differentiation method to compute the gradients, our method preserves UDR's linear scaling of computation time with problem dimension. Numerical results show that GUDR is more accurate than UDR in estimating the standard deviation of the output and performs comparably to the method of moments using a third-order Taylor series expansion.
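For reference, the baseline UDR construction that GUDR augments approximates the original function by a sum of univariate slices through a reference point, so that moments can be computed from one-dimensional quadratures. The sketch below illustrates this standard UDR decomposition for independent Gaussian inputs; the gradient-enhanced terms of GUDR and the computational-graph transformation are not shown, and all parameter choices are illustrative.

```python
import numpy as np

def udr_moments(f, mu, sigma, n_quad=7):
    """Mean and variance of the univariate dimension reduction (UDR) surrogate

        f(x) ~ sum_i f(mu_1, ..., x_i, ..., mu_d) - (d - 1) * f(mu),

    for independent Gaussian inputs x_i ~ N(mu_i, sigma_i^2), using a 1D
    Gauss-Hermite quadrature per dimension.  The cost is d * n_quad function
    evaluations, i.e. linear in the dimension d."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    d = len(mu)
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_quad)  # weight exp(-t^2/2)
    weights = weights / np.sqrt(2.0 * np.pi)                     # normalise to N(0, 1)
    f_mu = f(mu)

    g_mean = np.zeros(d)      # E[f(mu_1, ..., x_i, ..., mu_d)]
    g_sq_mean = np.zeros(d)   # E[f(mu_1, ..., x_i, ..., mu_d)^2]
    for i in range(d):
        vals = np.empty(n_quad)
        for q, t in enumerate(nodes):
            xi = mu.copy()
            xi[i] = mu[i] + sigma[i] * t
            vals[q] = f(xi)
        g_mean[i] = weights @ vals
        g_sq_mean[i] = weights @ vals ** 2

    mean = g_mean.sum() - (d - 1) * f_mu      # mean of the UDR surrogate
    var = (g_sq_mean - g_mean ** 2).sum()     # independent terms: variances add
    return mean, var

# Toy check on f(x) = x0^2 + 3*x1 with x ~ N(0, I), where the UDR surrogate is
# exact: mean = E[x0^2] = 1, variance = Var(x0^2) + Var(3*x1) = 2 + 9 = 11.
f = lambda x: x[0] ** 2 + 3.0 * x[1]
print(udr_moments(f, mu=[0.0, 0.0], sigma=[1.0, 1.0]))
```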

The amount of information in the satisfiability problem (SAT) is considered. SAT can be polynomial-time solvable when the solving algorithm holds an exponential amount of information. It is also established that the Kolmogorov complexity of SAT is constant. It is argued that the amount of information in SAT grows at least exponentially with the size of the input instance. The amount of information in SAT is compared with the amount of information held in fixed-code algorithms and with the information generated over runtime.

Utilizing non-concurrent controls in the analysis of late-entering experimental arms in platform trials has recently received considerable attention, at both the academic and regulatory levels. While incorporating these data can lead to increased power and lower required sample sizes, it might also introduce bias into the effect estimators if temporal drifts are present in the trial. Aiming to mitigate the potential calendar time bias, we propose various frequentist model-based approaches that leverage the non-concurrent control data while adjusting for time trends. One of the currently available frequentist models incorporates time as a categorical fixed effect, separating the duration of the trial into periods, defined as time intervals bounded by any treatment arm entering or leaving the platform. In this work, we propose two extensions of this model. First, we consider an alternative definition of the time covariate by dividing the trial into fixed-length calendar time intervals. Second, we propose alternative methods to adjust for time trends. In particular, we investigate adjusting for autocorrelated random effects to account for dependency between closer time intervals and employing spline regression to model time with a smooth polynomial function. We evaluate the performance of the proposed approaches in a simulation study and illustrate their use by means of a case study.
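As an illustration of two of the time adjustments discussed above (a categorical period fixed effect versus a smooth spline in calendar time), the sketch below fits both to a small simulated data set with statsmodels. The column names, period boundaries, drift, and effect sizes are hypothetical and not taken from the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical platform-trial data: one row per patient, with treatment arm,
# outcome y, calendar time of recruitment, and a derived period variable.
rng = np.random.default_rng(1)
n = 600
df = pd.DataFrame({
    "arm": rng.choice(["control", "arm1", "arm2"], size=n),
    "time": np.sort(rng.uniform(0.0, 24.0, size=n)),   # months since trial start
})
df["period"] = pd.cut(df["time"], bins=[0, 8, 16, 24],
                      labels=["1", "2", "3"], include_lowest=True)
drift = 0.05 * df["time"]                               # linear calendar-time drift
effect = df["arm"].map({"control": 0.0, "arm1": 0.3, "arm2": 0.5})
df["y"] = drift + effect + rng.normal(0.0, 1.0, size=n)

# Model 1: adjust for time via a categorical period fixed effect.
fit_period = smf.ols("y ~ C(arm, Treatment('control')) + C(period)", data=df).fit()

# Model 2: adjust for time via a smooth spline in calendar time (B-spline basis).
fit_spline = smf.ols("y ~ C(arm, Treatment('control')) + bs(time, df=4)", data=df).fit()

print(fit_period.params.filter(like="arm"))
print(fit_spline.params.filter(like="arm"))
```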

Homogeneous normalized random measures with independent increments (hNRMIs) represent a broad class of Bayesian nonparametric priors and thus are widely used. In this paper, we obtain the strong law of large numbers, the central limit theorem and the functional central limit theorem of hNRMIs when the concentration parameter $a$ approaches infinity. To quantify the convergence rate of the obtained central limit theorem, we further study the Berry-Esseen bound, which turns out to be of the form $O \left( \frac{1}{\sqrt{a}}\right)$. As an application of the central limit theorem, we present the functional delta method, which can be employed to obtain the limit of the quantile process of hNRMIs. As an illustration of the central limit theorems, we demonstrate the convergence numerically for the Dirichlet processes and the normalized inverse Gaussian processes with various choices of the concentration parameters.
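For the Dirichlet process special case mentioned above, this kind of numerical check can be sketched in a few lines using the exact marginal law $P(A) \sim \mathrm{Beta}(a P_0(A), a(1 - P_0(A)))$ of a Dirichlet process on a fixed set $A$; this is only an illustration of the type of experiment described, not the paper's own study.

```python
import numpy as np
from scipy import stats

# For P ~ DP(a, P0) and a fixed set A with p0 = P0(A), the random probability
# P(A) is exactly Beta(a*p0, a*(1 - p0)).  As the concentration parameter a
# grows, sqrt(a) * (P(A) - p0) should be approximately N(0, p0 * (1 - p0)).
rng = np.random.default_rng(0)
p0 = 0.3
for a in [10.0, 100.0, 1000.0]:
    draws = rng.beta(a * p0, a * (1.0 - p0), size=100_000)
    z = np.sqrt(a) * (draws - p0) / np.sqrt(p0 * (1.0 - p0))
    ks = stats.kstest(z, "norm").statistic   # distance to the standard normal limit
    print(f"a = {a:7.0f}   KS distance to N(0,1) = {ks:.4f}")
```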

We introduce a novel particle-in-Fourier (PIF) scheme that extends the applicability of PIF to non-periodic boundary conditions. Our method handles free space boundary conditions by replacing the Fourier Laplacian operator in PIF with a mollified Green's function, as first introduced by Vico-Greengard-Ferrando. This modification yields highly accurate free space solutions to the Vlasov-Poisson system, while still maintaining energy conservation up to an error bounded by the time step size. We also explain how to extend our scheme to arbitrary Dirichlet boundary conditions via standard potential theory, which we illustrate in detail for Dirichlet boundary conditions on a circular boundary. We support our approach with proof-of-concept numerical results from two-dimensional plasma test cases that demonstrate the accuracy, efficiency, and conservation properties of the scheme.

There is currently considerable excitement within government about the potential of artificial intelligence to improve public service productivity through the automation of complex but repetitive bureaucratic tasks, freeing up the time of skilled staff. Here, we explore the size of this opportunity by mapping out the scale of citizen-facing bureaucratic decision-making procedures within UK central government and measuring their potential for AI-driven automation. We estimate that UK central government conducts approximately one billion citizen-facing transactions per year in the provision of around 400 services, of which approximately 143 million are complex repetitive transactions. We estimate that 84% of these complex transactions are highly automatable, representing a huge potential opportunity: saving even an average of just one minute per complex transaction would save the equivalent of approximately 1,200 person-years of work every year. We also develop a model to estimate the volume of transactions a government service undertakes, providing a way for government to avoid conducting time-consuming transaction volume measurements. Finally, we find that there is high turnover in the types of services government provides, meaning that automation efforts should focus on general procedures rather than on the services themselves, which are likely to evolve over time. Overall, our work presents a novel perspective on the structure and functioning of modern government, and on how it might evolve in the age of artificial intelligence.
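The headline person-years figure follows from simple arithmetic; a back-of-envelope check is shown below, assuming roughly 2,000 working hours per person-year (an assumption; the report's exact conversion factor is not stated here).

```python
# Back-of-envelope check of the ~1,200 person-years figure.
complex_transactions = 143_000_000        # complex repetitive transactions per year
minutes_saved_each = 1                    # assumed saving per transaction
hours_per_person_year = 2_000             # assumed working hours per person-year

person_years = complex_transactions * minutes_saved_each / 60 / hours_per_person_year
print(round(person_years))                # roughly 1,200 person-years
```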
