
The statistical analysis of univariate quantiles is a well-developed research topic. However, there is a need for research in multivariate quantiles. We construct bivariate (conditional) quantiles using the level curves of a vine copula based bivariate regression model. Vine copulas are graph-theoretical models identified by a sequence of linked trees, which allow for separate modelling of the marginal distributions and the dependence structure. We introduce a novel graph structure model (given by a tree sequence) specifically designed for a symmetric treatment of two responses in a predictive regression setting. We establish computational tractability of the model and a straightforward way of obtaining different conditional distributions. Using vine copulas, the typical shortfalls of regression, such as the need for transformations or interactions of predictors, collinearity, or quantile crossings, are avoided. We illustrate the copula based bivariate level curves for different copula distributions and show how they can be adjusted to form valid quantile curves. We apply our approach to weather measurements from Seoul, Korea. This data example emphasizes the benefits of joint bivariate response modelling, in contrast to fitting two separate univariate regressions or assuming conditional independence, for bivariate response data in the presence of conditional dependence.
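A minimal numerical sketch of the level-curve idea: for a single Gaussian copula (standing in for the fitted vine copula of the paper), the density is evaluated on a grid of the unit square and the level whose upper level set carries probability $\alpha$ is located, turning a density level curve into a valid quantile region. The correlation value, grid, and all names are illustrative choices, not the paper's implementation.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

rho = 0.6
mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

def copula_density(u1, u2):
    """Gaussian copula density c(u1, u2) on the unit square."""
    z = np.stack([norm.ppf(u1), norm.ppf(u2)], axis=-1)
    return mvn.pdf(z) / (norm.pdf(z[..., 0]) * norm.pdf(z[..., 1]))

# Find the density level whose upper level set (a highest-density region)
# carries probability alpha; this is the adjustment that turns a level
# curve into a valid bivariate quantile curve.
uu = np.linspace(0.001, 0.999, 400)
U1, U2 = np.meshgrid(uu, uu)
dens = copula_density(U1, U2)
cell = (uu[1] - uu[0]) ** 2
flat = np.sort(dens.ravel())[::-1]
cum = np.cumsum(flat) * cell
alpha = 0.9
level = flat[np.searchsorted(cum, alpha)]
inside = dens >= level                      # alpha-level quantile region
print(f"approximate coverage: {dens[inside].sum() * cell:.3f}")  # ~0.9
```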

Related Content

Partial differential equations (PDEs) have become an essential tool for modeling complex physical systems. Such equations are typically solved numerically via mesh-based methods, such as finite element methods, the outputs of which consist of the solutions on a set of mesh nodes over the spatial domain. However, these simulations are often prohibitively costly, which makes surveying the input space infeasible. In this paper, we propose an efficient emulator that simultaneously predicts the outputs on a set of mesh nodes, with theoretical justification of its uncertainty quantification. The novelty of the proposed method lies in the incorporation of the mesh node coordinates into the statistical model. In particular, the proposed method segments the mesh nodes into multiple clusters via a Dirichlet process prior and fits a Gaussian process model in each. Most importantly, by revealing the underlying clustering structures, the proposed method can extract valuable flow physics present in the systems, which can be used to guide further investigations. Real examples demonstrate that the proposed method has smaller prediction errors than its main competitors, with competitive computation time, and identifies interesting clusters of mesh nodes that exhibit coherent input-output relationships and possess physical significance, such as satisfying boundary conditions. An R package for the proposed methodology is provided in an open repository.
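As a rough sketch of the two-stage construction described above, the snippet below clusters mesh nodes using their coordinates together with the response via sklearn's truncated variational approximation to a Dirichlet process mixture, then fits an independent Gaussian process per cluster. The toy data and all parameter choices are ours, not the paper's.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
coords = rng.uniform(size=(500, 2))              # mesh node coordinates
y = np.where(coords[:, 0] < 0.5,                 # two toy "flow regimes"
             np.sin(8 * coords[:, 1]),
             coords[:, 0] ** 2)

# Cluster on (coordinates, response): a truncated variational DP mixture
# stands in for the paper's full Dirichlet process prior.
features = np.column_stack([coords, y])
dp = BayesianGaussianMixture(
    n_components=10,                             # truncation level
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(features)
labels = dp.predict(features)

# One independent GP emulator per discovered cluster of mesh nodes.
models = {}
for k in np.unique(labels):
    mask = labels == k
    gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
    models[k] = gp.fit(coords[mask], y[mask])
```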

We propose a computationally and statistically efficient procedure for segmenting univariate data under piecewise linearity. The proposed moving sum (MOSUM) methodology detects multiple change points where the underlying signal undergoes discontinuous jumps and/or slope changes. Theoretically, it controls the family-wise error rate at a given significance level asymptotically and achieves consistency in multiple change point detection, as well as matching the minimax optimal rate of estimation when the signal is piecewise linear and continuous, all under weak assumptions permitting serial dependence and heavy-tailedness. Computationally, the complexity of the MOSUM procedure is $O(n)$, which, combined with its good performance on simulated datasets, makes it highly attractive in comparison with existing methods. We further demonstrate its good performance on a real-data example on rolling element bearing prognostics.
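The moving-sum mechanics can be sketched in a few lines; the variant below targets mean changes only (the paper's procedure handles piecewise-linear signals and calibrated thresholds), and the bandwidth $G$ is an arbitrary choice. Cumulative sums give the advertised $O(n)$ cost.

```python
import numpy as np

def mosum(x, G):
    """Scaled difference of adjacent window sums; unit variance under
    no change for unit-variance noise. O(n) via cumulative sums."""
    s = np.concatenate([[0.0], np.cumsum(np.asarray(x, dtype=float))])
    n = len(x)
    t = np.arange(G, n - G + 1)
    right = s[t + G] - s[t]                 # sum over (t, t+G]
    left = s[t] - s[t - G]                  # sum over (t-G, t]
    return np.abs(right - left) / np.sqrt(2 * G)

rng = np.random.default_rng(1)
x = np.concatenate([np.zeros(200), np.full(200, 3.0)]) + rng.normal(size=400)
stat = mosum(x, G=50)
print("detected change near:", stat.argmax() + 50)   # true change at 200
```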

As quantum theory allows for information processing and computing tasks that are otherwise not possible with classical systems, there is a need for a quantum Internet beyond existing network systems. At the same time, the realization of a desirably functional quantum Internet is hindered by fundamental and practical challenges such as high loss during the transmission of quantum systems, decoherence due to interaction with the environment, fragility of quantum states, etc. We study the implications of these constraints by analyzing the limitations on the scaling and robustness of the quantum Internet. Considering quantum networks, we present practical bottlenecks for secure communication, delegated computing, and resource distribution among end nodes. Motivated by the power of abstraction in graph theory (in association with quantum information theory), we consider graph-theoretic quantifiers to assess network robustness and provide critical values of communication lines for viable communication over the quantum Internet. In particular, we begin by discussing limitations on the usefulness of isotropic states as device-independent quantum key repeaters, which otherwise could be useful for device-independent quantum key distribution. We consider some quantum networks of practical interest, ranging from satellite-based networks connecting far-off spatial locations to currently available quantum processor architectures within computers, and analyze their robustness for performing quantum information processing tasks. Some of these tasks form primitives for delegated quantum computing, e.g., entanglement distribution and quantum teleportation. For some examples of quantum networks, we present algorithms to perform different quantum network tasks of interest, such as constructing the network structure, finding the shortest path between a pair of end nodes, and optimizing the flow of resources at a node.
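A small illustration of one of the network tasks mentioned above: finding the most reliable path for entanglement distribution between two end nodes. Multiplicative per-link success probabilities become additive $-\log$ weights, so a standard shortest-path algorithm applies. The topology and probabilities below are invented for the example.

```python
import math
import networkx as nx

# Toy network: edges carry an entanglement-distribution success probability.
G = nx.Graph()
links = [("A", "B", 0.9), ("B", "C", 0.8), ("A", "C", 0.5), ("C", "D", 0.95)]
for u, v, p in links:
    G.add_edge(u, v, weight=-math.log(p), p=p)   # -log turns products into sums

# Dijkstra on the -log weights finds the maximum-probability path.
path = nx.shortest_path(G, "A", "D", weight="weight")
p_succ = math.prod(G[u][v]["p"] for u, v in zip(path, path[1:]))
print(path, f"end-to-end success probability {p_succ:.3f}")
```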

We introduce a new class of absorbing boundary conditions (ABCs) for the Helmholtz equation. The proposed ABCs can be derived from a certain simple class of perfectly matched layers using $L$ discrete layers together with the $Q_N$ Lagrange finite element and the $N$-point Gauss-Legendre quadrature reduced integration rule. The proposed ABCs are classified by a tuple $(L,N)$, and achieve reflection error of order $O(R^{2LN})$ for some $R<1$. The new ABCs generalise the perfectly matched discrete layers proposed by Guddati and Lim [Int. J. Numer. Meth. Engng 66 (6) (2006) 949-977], including them as type $(L,1)$. An analysis of the proposed ABCs is performed, motivated by the work of Ainsworth [J. Comput. Phys. 198 (1) (2004) 106-130]. The new ABCs facilitate numerical implementations of the Helmholtz problem with ABCs when $Q_N$ finite elements are used in the physical domain. Moreover, the analysis presented in this work provides further insight and may aid the development of ABCs in related areas.
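To make the discretization ingredients named above concrete, the sketch below assembles a 1D Helmholtz element matrix with a degree-$N$ Lagrange element and the $N$-point Gauss-Legendre rule, which is a reduced rule since the mass-matrix integrand has degree $2N$. This is only a generic finite element toy under our own conventions; the actual ABC construction follows the cited papers.

```python
import numpy as np

N = 3
nodes = np.linspace(-1.0, 1.0, N + 1)        # Q_N element nodes on [-1, 1]
xq, wq = np.polynomial.legendre.leggauss(N)  # N-point Gauss-Legendre rule

def lagrange_polys(nodes):
    """Lagrange basis polynomials for the given nodes, as np.poly1d objects."""
    polys = []
    for i, xi in enumerate(nodes):
        p = np.poly1d([1.0])
        for xj in np.delete(nodes, i):
            p = p * np.poly1d([1.0, -xj]) * (1.0 / (xi - xj))
        polys.append(p)
    return polys

phi = lagrange_polys(nodes)
P = np.array([[p(x) for x in xq] for p in phi])          # basis at quad points
D = np.array([[p.deriv()(x) for x in xq] for p in phi])  # derivatives
M = (P * wq) @ P.T    # mass matrix, under-integrated (integrand degree 2N)
K = (D * wq) @ D.T    # stiffness matrix, integrated exactly (degree 2N - 2)
k = 2.0
helmholtz_element = K - k**2 * M             # toy 1D Helmholtz element matrix
```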

Replication studies are increasingly conducted to assess the credibility of scientific findings. Most of these replication attempts target studies with a superiority design, and there is a lack of methodology for the analysis of replication studies with alternative types of designs, such as equivalence. In order to fill this gap, we propose two approaches, the two-trials rule and the sceptical TOST procedure, adapted from methods used in superiority settings. Both methods have the same overall Type-I error rate, but the sceptical TOST procedure allows replication success even for non-significant original or replication studies. This leads to a larger project power and other differences in relevant operating characteristics. Both methods can be used for sample size calculation of the replication study, based on the results of the original study. The two methods are applied to data from the Reproducibility Project: Cancer Biology.
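A minimal sketch of the two-trials rule in the equivalence setting, under a normal approximation: replication success requires the TOST p-values of both the original and the replication study to fall below $\alpha$. The estimates, standard errors, and margin below are invented, and the sceptical TOST procedure itself is not reproduced here.

```python
from scipy.stats import norm

def tost_p(est, se, delta):
    """TOST p-value for H0: |theta| >= delta (normal approximation)."""
    p_lower = 1.0 - norm.cdf((est + delta) / se)   # test theta <= -delta
    p_upper = norm.cdf((est - delta) / se)         # test theta >= +delta
    return max(p_lower, p_upper)

alpha, delta = 0.05, 0.3
p_orig = tost_p(est=0.05, se=0.10, delta=delta)    # original study
p_rep = tost_p(est=-0.02, se=0.09, delta=delta)    # replication study
success = (p_orig <= alpha) and (p_rep <= alpha)   # two-trials rule
print(f"p_orig={p_orig:.4f}, p_rep={p_rep:.4f}, success={success}")
```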

Deep neural networks (DNNs) are singular statistical models which exhibit complex degeneracies. In this work, we illustrate how a quantity known as the \emph{learning coefficient}, introduced in singular learning theory, precisely quantifies the degree of degeneracy in deep neural networks. Importantly, we demonstrate that degeneracy in DNNs cannot be accounted for by simply counting the number of "flat" directions. We propose a computationally scalable approximation of a localized version of the learning coefficient using stochastic gradient Langevin dynamics. To validate our approach, we demonstrate its accuracy in low-dimensional models with known theoretical values. Importantly, the local learning coefficient correctly recovers the ordering of degeneracy between various parameter regions of interest. An experiment on MNIST shows that the local learning coefficient can reveal the inductive bias of stochastic optimizers towards more or less degenerate critical points.
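A toy sketch of the estimation recipe, assuming a WBIC-style estimator: run stochastic gradient Langevin dynamics on the posterior tempered at inverse temperature $\beta = 1/\log n$, localized around a critical point $w_0$, and compare the average tempered loss with the loss at $w_0$. For the one-dimensional loss $L(w) = w^4$ the true learning coefficient is $1/4$; the step size, chain length, and localization strength are ad hoc choices.

```python
import numpy as np

def L(w):                  # toy loss with a degenerate minimum at w = 0
    return w ** 4

def grad_L(w):
    return 4.0 * w ** 3

rng = np.random.default_rng(0)
n = 10_000                          # nominal sample size
beta = 1.0 / np.log(n)              # WBIC inverse temperature
w0, w, eps, gamma = 0.0, 0.0, 1e-4, 1.0

losses = []
for _ in range(200_000):            # SGLD on the localized tempered posterior
    drift = beta * n * grad_L(w) + gamma * (w - w0)
    w += -0.5 * eps * drift + np.sqrt(eps) * rng.normal()
    losses.append(L(w))

lam_hat = n * beta * (np.mean(losses) - L(w0))   # learning coefficient estimate
print(f"estimated local learning coefficient: {lam_hat:.3f} (true value 0.25)")
```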

In recent decades, a growing number of discoveries in fields of mathematics have been assisted by computer algorithms, primarily for exploring large parameter spaces that humans would take too long to investigate. As computers and algorithms become more powerful, an intriguing possibility arises: the interplay between human intuition and computer algorithms can lead to discoveries of novel mathematical concepts that would otherwise remain elusive. To realize this vision, we have developed a massively parallel computer algorithm that discovers an unprecedented number of continued fraction formulas for fundamental mathematical constants. The sheer number of formulas discovered by the algorithm unveils a novel mathematical structure that we call the conservative matrix field. Such matrix fields (1) unify thousands of existing formulas, (2) generate infinitely many new formulas, and most importantly, (3) lead to unexpected relations between different mathematical constants, including multiple integer values of the Riemann zeta function. Conservative matrix fields also enable new mathematical proofs of irrationality. In particular, we can use them to generalize the celebrated proof by Ap\'ery for the irrationality of $\zeta(3)$. Utilizing thousands of personal computers worldwide, our computer-supported research strategy demonstrates the power of experimental mathematics, highlighting the prospects of large-scale computational approaches to tackle longstanding open problems and discover unexpected connections across diverse fields of science.
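The matrix viewpoint can be illustrated with a classical example: a continued fraction is a product of $2\times 2$ integer matrices with rows $(a_k, 1)$ and $(1, 0)$, whose entries give the convergents. The sketch below evaluates the simple continued fraction of $e$, $[2;1,2,1,1,4,1,1,6,\dots]$, this way; the conservative matrix fields of the paper organize whole families of such products.

```python
from fractions import Fraction
import math

def e_cf_terms(n):
    """First n terms of the simple continued fraction of e: [2;1,2,1,1,4,...]."""
    terms, k = [2], 1
    while len(terms) < n:
        terms += [1, 2 * k, 1]
        k += 1
    return terms[:n]

def convergent(terms):
    """Convergent p/q via the recurrence equivalent to the product of the
    matrices [[a, 1], [1, 0]] over the partial quotients a."""
    p, p_prev, q, q_prev = 1, 0, 0, 1
    for a in terms:
        p, p_prev = a * p + p_prev, p
        q, q_prev = a * q + q_prev, q
    return Fraction(p, q)

approx = convergent(e_cf_terms(20))
print(float(approx) - math.e)      # essentially zero in double precision
```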

Differential geometric approaches are ubiquitous in several fields of mathematics, physics and engineering, and their discretizations enable the development of network-based mathematical and computational frameworks, which are essential for large-scale data science. The Forman-Ricci curvature (FRC), a statistical measure based on Riemannian geometry and designed for networks, is known for its high capacity for extracting geometric information from complex networks. However, extracting information from dense networks remains challenging due to the combinatorial explosion of high-order network structures. Motivated by this challenge, we develop a set-theoretic representation theory for high-order network cells and FRC, together with their associated concepts and properties, which provides an alternative and efficient formulation for computing high-order FRC in complex networks. We provide pseudo-code and a software implementation, named FastForman, as well as a benchmark comparison with alternative implementations. Crucially, our representation theory reveals previous computational bottlenecks and accelerates the computation of FRC. As a consequence, our findings open new research possibilities in complex systems where higher-order geometric computations are required.
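For orientation, the base (edge-level) case of the FRC is easy to compute directly: for an unweighted graph, $F(u,v) = 4 - \deg(u) - \deg(v)$, and the triangle-augmented version adds $3\,t(u,v)$ for the $t(u,v)$ triangles through the edge. The sketch below implements only this base case; FastForman's contribution is the efficient extension to higher-order cells.

```python
import networkx as nx

def forman_curvature(G, augmented=True):
    """Edge-level Forman-Ricci curvature for an unweighted graph; the
    augmented variant also counts triangles (order-2 cells)."""
    curv = {}
    for u, v in G.edges():
        f = 4 - G.degree(u) - G.degree(v)
        if augmented:
            t = len(set(G[u]) & set(G[v]))   # triangles through (u, v)
            f += 3 * t
        curv[(u, v)] = f
    return curv

G = nx.karate_club_graph()
curv = forman_curvature(G)
print("curvature range:", min(curv.values()), "to", max(curv.values()))
```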

In this paper, we address the problem of modeling data with periodic autoregressive (PAR) time series and additive noise. In most cases, such data are processed assuming a noise-free model (i.e., without additive noise), which is not a realistic assumption in practice. The first two steps in PAR model identification are order selection and period estimation, so the main focus here is on these issues. Finally, the model should be validated, so a procedure for analyzing the residuals, which are considered here as multidimensional vectors, is proposed. Both order and period selection, as well as model validation, are addressed using the characteristic function (CF) of the residual series. The CF is used to obtain the probability density function, which is utilized in the information criterion and for testing the distribution of the residuals. To complete the PAR model analysis, a procedure for estimating the coefficients is also necessary; however, this issue is only mentioned here, as it constitutes a separate task under parallel investigation. The presented methodology can be considered a general framework for analyzing data with periodically non-stationary characteristics disturbed by finite-variance external noise. The original contribution lies in the selection of the optimal model order, the period identification, and the analysis of the residuals. All these findings have been inspired by our previous work on machine condition monitoring that used PAR modeling.
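A minimal sketch of the characteristic-function ingredient, with synthetic residuals standing in for the PAR residual series: the empirical CF is compared with the CF of a fitted Gaussian through an integrated weighted distance. The grid and weight are our choices; the paper embeds this machinery in an information criterion and a residual distribution test.

```python
import numpy as np

rng = np.random.default_rng(0)
resid = rng.standard_normal(1_000)              # stand-in for PAR residuals

t = np.linspace(-5, 5, 401)
ecf = np.exp(1j * np.outer(t, resid)).mean(axis=1)       # empirical CF
mu, sig = resid.mean(), resid.std()
gauss_cf = np.exp(1j * t * mu - 0.5 * (sig * t) ** 2)    # fitted Gaussian CF

# Integrated weighted squared distance between the two CFs; small values
# are consistent with Gaussian residuals.
dist = np.trapz(np.abs(ecf - gauss_cf) ** 2 * np.exp(-t**2), t)
print(f"CF distance: {dist:.4f}")
```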

We propose a new way to assess certain short constructed responses to mathematics items. Our approach uses a pipeline that identifies the key values specified by the student in their response. This allows us to determine the correctness of the response, as well as to identify any misconceptions. The information from the value identification pipeline can then be used to provide feedback to the teacher and student. The value identification pipeline consists of two fine-tuned language models. The first model determines whether a value is implicit in the student response. The second model identifies where in the response the key value is specified. We consider both a generic model that can be used for any prompt and value, and models that are specific to each prompt and value. The value identification pipeline is a more accurate and informative way to assess short constructed responses than traditional rubric-based scoring, and it can be used to provide more targeted feedback to students, helping them improve their understanding of mathematics.
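A hypothetical sketch of the two-stage pipeline using off-the-shelf Hugging Face pipelines: a sequence classifier for the presence decision and a token classifier for span identification. The model paths and the label set are placeholders, not released artifacts of the paper.

```python
from transformers import pipeline

# Stage 1: does the response contain the key value (possibly implicitly)?
presence = pipeline("text-classification", model="path/to/presence-model")
# Stage 2: where in the response is the key value specified?
locator = pipeline("token-classification", model="path/to/span-model",
                   aggregation_strategy="simple")

response = "First I doubled 12 to get 24, then added 6, so the answer is 30."
if presence(response)[0]["label"] == "PRESENT":   # assumed label set
    for span in locator(response):
        print(span["word"], span["start"], span["end"])
```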
