
Max-autoregressive moving average (Max-ARMA) processes are powerful tools for modelling time series data with heavy-tailed behaviour; they are a non-linear version of the popular autoregressive moving average models. River flow data typically exhibit heavy tails and non-linearity, as large precipitation events cause sudden spikes in the data that then decay exponentially. Stationary Max-ARMA models are therefore a suitable candidate for capturing the unique temporal dependence structure exhibited by river flows. This paper contributes to advancing our understanding of the extremal properties of stationary Max-ARMA processes. We detail the first approach for deriving the extremal index and the lagged asymptotic dependence coefficient of a general Max-ARMA process, together with an efficient simulation method. We use these extremal properties, coupled with the belief that Max-ARMA processes provide only an approximation to extreme river flow, to fit a model that can broadly capture river flow behaviour over a high threshold. We make our inference under a reparametrisation that gives a simpler parameter space and excludes cases where any parameter is non-identifiable. We illustrate the results for river flow data from the UK River Thames.
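As a concrete illustration of the process class, here is a minimal simulation sketch for a Max-ARMA(p, q) process with unit Fréchet innovations; the recursion follows the standard definition, but the parameter values, burn-in length, and function names are illustrative assumptions rather than the paper's efficient simulation method.

```python
import numpy as np

def simulate_max_arma(n, alpha, beta, burn_in=500, seed=0):
    """Simulate a Max-ARMA(p, q) process
       X_t = max(a_1 X_{t-1}, ..., a_p X_{t-p}, Z_t, b_1 Z_{t-1}, ..., b_q Z_{t-q})
    with i.i.d. unit Frechet innovations Z_t. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    p, q = len(alpha), len(beta)
    total = n + burn_in
    # Unit Frechet draws: Z = -1 / log(U) for U ~ Uniform(0, 1)
    z = -1.0 / np.log(rng.uniform(size=total))
    x = np.empty(total)
    for t in range(total):
        candidates = [z[t]]
        candidates += [alpha[i] * x[t - 1 - i] for i in range(p) if t - 1 - i >= 0]
        candidates += [beta[j] * z[t - 1 - j] for j in range(q) if t - 1 - j >= 0]
        x[t] = max(candidates)
    return x[burn_in:]

# Example: Max-ARMA(1, 1) with autoregressive coefficient 0.7 -- spikes
# followed by geometric decay, qualitatively mimicking river-flow recessions.
series = simulate_max_arma(1000, alpha=[0.7], beta=[0.3])
```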

Related Content

The unique solvability and error analysis of the original Lagrange multiplier approach proposed in [8] for gradient flows are studied in this paper. We identify a necessary and sufficient condition that must be satisfied for the nonlinear algebraic equation arising from the original Lagrange multiplier approach to admit a unique solution in the neighborhood of its exact solution, and propose a modified Lagrange multiplier approach so that the computation can continue even if this condition is not satisfied. Using the Cahn-Hilliard equation as an example, we rigorously prove the unique solvability and establish optimal error estimates of a second-order Lagrange multiplier scheme, assuming this condition holds and the time step is sufficiently small. We also present numerical results demonstrating that the modified Lagrange multiplier approach is much more robust and can use much larger time steps than the original approach.
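To make the role of the nonlinear algebraic equation concrete, the following is a toy sketch of solving a scalar multiplier equation g(eta) = 0 by Newton's method, with a failure flag standing in for situations where local unique solvability breaks down; the function `solve_multiplier`, the test equation, and its time-step dependence are all illustrative assumptions, not the scheme of [8].

```python
import numpy as np

def solve_multiplier(g, dg, eta0, tol=1e-12, max_iter=50):
    """Newton iteration for a scalar multiplier equation g(eta) = 0.
    Returns (eta, converged); failure signals that a fallback (e.g. a
    modified multiplier update) should be used instead."""
    eta = eta0
    for _ in range(max_iter):
        r = g(eta)
        if abs(r) < tol:
            return eta, True
        d = dg(eta)
        if abs(d) < 1e-14:          # near-singular: no locally unique root
            return eta, False
        eta -= r / d
    return eta, False

# Toy equation: a unique root exists near eta = 1 when the "time step" dt is
# small, but the iteration can fail for large dt -- mirroring the idea of a
# solvability condition tied to the step size.
dt = 0.1
g = lambda eta: eta - 1.0 + dt * np.sin(5.0 * eta)
dg = lambda eta: 1.0 + 5.0 * dt * np.cos(5.0 * eta)
eta, ok = solve_multiplier(g, dg, eta0=1.0)
```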

A neural network (NN) designed for challenging machine learning tasks is in general a highly nonlinear mapping that contains massive variational parameters. The high complexity of an NN, if unbounded or unconstrained, can unpredictably cause severe issues including over-fitting, loss of generalization power, and prohibitive hardware cost. In this work, we propose a general compression scheme that significantly reduces the variational parameters of an NN by encoding them into a deep automatically-differentiable tensor network (ADTN) that contains exponentially fewer free parameters. The superior compression performance of our scheme is demonstrated on several widely recognized NNs (FC-2, LeNet-5, AlexNet, ZFNet and VGG-16) and datasets (MNIST, CIFAR-10 and CIFAR-100). For instance, we compress two linear layers in VGG-16, with approximately $10^{7}$ parameters, into two ADTNs with just 424 parameters, while the testing accuracy on CIFAR-10 improves from $90.17\%$ to $91.74\%$. Our work suggests the TN as an exceptionally efficient mathematical structure for representing the variational parameters of NNs, exhibiting superior compressibility over the commonly used matrices and multi-way arrays.
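The general idea, parameterizing a large weight matrix through a contraction of small tensor cores rather than storing it explicitly, can be sketched with a simple tensor-train-style layer; this is a generic stand-in, not the paper's ADTN architecture, and all shapes and ranks below are invented for illustration.

```python
import torch
import torch.nn as nn

class TTLinear(nn.Module):
    """Toy tensor-factorized linear layer: the (out, in) weight matrix is
    never stored; it is contracted on the fly from two small cores. A
    generic tensor-train-style sketch, not the paper's ADTN."""
    def __init__(self, in_shape=(32, 32), out_shape=(16, 16), rank=4):
        super().__init__()
        self.in_shape, self.out_shape = in_shape, out_shape
        # Two cores: core1[o1, i1, r] and core2[r, o2, i2]
        self.core1 = nn.Parameter(0.1 * torch.randn(out_shape[0], in_shape[0], rank))
        self.core2 = nn.Parameter(0.1 * torch.randn(rank, out_shape[1], in_shape[1]))

    def forward(self, x):                      # x: (batch, in1 * in2)
        b = x.shape[0]
        x = x.view(b, *self.in_shape)          # (b, i1, i2)
        # Contract the input with the cores instead of a dense matrix.
        y = torch.einsum('bij,air,rkj->bak', x, self.core1, self.core2)
        return y.reshape(b, -1)                # (b, o1 * o2)

# 4096 core parameters replace the 262144 weights of a dense 1024 -> 256 layer.
layer = TTLinear()
out = layer(torch.randn(8, 32 * 32))
```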

Log-linear models are widely used to express the association in multivariate frequency data on contingency tables. This paper focuses on the power analysis for testing the goodness-of-fit hypothesis for this model type. Conventionally, for power-related sample size calculations, a deviation from the null hypothesis (effect size) is specified by means of the chi-square goodness-of-fit index. It is argued that the odds ratio is a more natural measure of effect size, with the advantage of having a data-relevant interpretation. Therefore, a class of log-affine models specified by odds ratios whose values deviate from those of the null by a small amount can be chosen as an alternative. Being expressed as sets of constraints on odds ratios, both hypotheses are represented by smooth surfaces in the probability simplex, and thus the power analysis can be given a geometric interpretation as well. A concept of geometric power is introduced, and a Monte Carlo algorithm for its estimation is proposed. The framework is applied to the power analysis of goodness-of-fit in the context of multinomial sampling. An iterative scaling procedure for generating distributions from a log-affine model is described and its convergence is proved. To illustrate, the geometric power analysis is carried out for data from a clinical study.
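The iterative scaling idea can be sketched with classical iterative proportional fitting (IPF), which rescales a positive table to prescribed margins while leaving all odds ratios unchanged; this generic sketch is only in the spirit of the paper's procedure, and the starting table and margins are made up for illustration.

```python
import numpy as np

def ipf(table, row_margins, col_margins, tol=1e-10, max_iter=1000):
    """Classical iterative proportional fitting: rescale a positive two-way
    table so its margins match the targets. Row/column rescaling preserves
    all cross-product (odds) ratios."""
    t = table.astype(float).copy()
    for _ in range(max_iter):
        t *= (row_margins / t.sum(axis=1))[:, None]   # match row sums
        t *= (col_margins / t.sum(axis=0))[None, :]   # match column sums
        if np.allclose(t.sum(axis=1), row_margins, atol=tol):
            return t
    return t

# Start from a table whose odds ratio deviates from 1 (the "effect size"),
# then scale it to uniform margins: the odds ratio (here 4) is invariant.
start = np.array([[2.0, 1.0], [1.0, 2.0]])
fitted = ipf(start, row_margins=np.array([0.5, 0.5]),
             col_margins=np.array([0.5, 0.5]))
odds_ratio = fitted[0, 0] * fitted[1, 1] / (fitted[0, 1] * fitted[1, 0])
```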

Semi-Lagrangian (SL) schemes are highly efficient for simulating transport equations and are widely used across various applications. Despite their success, designing genuinely multi-dimensional and conservative SL schemes remains a significant challenge. Building on our previous work [Chen et al., J. Comput. Phys., V490 112329, (2023)], we introduce a conservative machine-learning-based SL finite difference (FD) method that allows for extra-large time step evolution. At the core of our approach is a novel dynamical graph neural network designed to handle the complexities associated with accurately tracking upstream points along characteristics. This proposed neural transport solver learns the conservative SL FD discretization directly from data, improving accuracy and efficiency compared to traditional numerical schemes while significantly simplifying algorithm implementation. We validate the method's effectiveness and efficiency through numerical tests on benchmark transport equations in both one and two dimensions, as well as the nonlinear Vlasov-Poisson system.
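For readers unfamiliar with the SL mechanics, here is a textbook sketch of one semi-Lagrangian step for 1D linear advection on a periodic grid, tracing characteristics back to upstream points and interpolating linearly; it shows why large CFL numbers are unproblematic for SL schemes, but it is not the paper's learned, conservative solver.

```python
import numpy as np

def sl_advection_step(u, a, dt, dx):
    """One semi-Lagrangian step for u_t + a u_x = 0 on a periodic 1D grid:
    trace each grid point back along its characteristic and interpolate
    linearly at the departure point. Textbook sketch only."""
    n = len(u)
    x = np.arange(n) * dx
    x_up = (x - a * dt) % (n * dx)       # upstream departure points
    idx = np.floor(x_up / dx).astype(int)
    w = x_up / dx - idx                  # linear interpolation weight
    return (1 - w) * u[idx % n] + w * u[(idx + 1) % n]

# CFL number a*dt/dx = 5: the scheme stays stable because a larger time step
# only shifts the departure points, unlike explicit Eulerian FD methods.
n, dx = 128, 1.0 / 128
u = np.exp(-200 * (np.arange(n) * dx - 0.5) ** 2)
u = sl_advection_step(u, a=1.0, dt=5 * dx, dx=dx)
```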

A change point detection (CPD) framework assisted by a predictive machine learning model, called "Predict and Compare", is introduced and characterised in relation to other state-of-the-art online CPD routines, which it outperforms in terms of false positive rate and out-of-control average run length. The method focuses on improving standard methods from sequential analysis, such as the CUSUM rule, in terms of these quality measures. This is achieved by replacing the trend estimation functionals typically used, such as the running mean, with more sophisticated predictive models (Predict step), and comparing their prognoses with the actual data (Compare step). The two models used in the Predict step are the ARIMA model and the LSTM recurrent neural network. However, the framework is formulated in general terms, so as to allow the use of prediction or comparison methods other than those tested here. The power of the method is demonstrated in a tribological case study, in which change points separating the run-in, steady-state, and divergent wear phases are detected with very few false positives.
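A minimal sketch of the Predict-and-Compare loop, assuming a two-sided CUSUM as the Compare step and a running-mean predictor as a simple stand-in for the ARIMA/LSTM models; the threshold, drift, and synthetic data are illustrative, not calibrated values from the paper.

```python
import numpy as np

def predict_and_compare(data, predictor, threshold, drift=0.0):
    """Generic Predict-and-Compare loop: forecast the next point from the
    history (Predict), feed the forecast error into a two-sided CUSUM
    (Compare), and raise an alarm when either statistic crosses the threshold."""
    s_pos = s_neg = 0.0
    for t in range(1, len(data)):
        residual = data[t] - predictor(data[:t])
        s_pos = max(0.0, s_pos + residual - drift)
        s_neg = max(0.0, s_neg - residual - drift)
        if s_pos > threshold or s_neg > threshold:
            return t                     # alarm: change point detected
    return None

# Stand-in predictor: the running mean (the baseline the paper improves upon).
running_mean = lambda history: history.mean()

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(0, 1, 200) + 3.0])
alarm = predict_and_compare(x, running_mean, threshold=8.0, drift=0.5)
```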

Out-of-distribution (OOD) detection, crucial for reliable pattern classification, discerns whether a sample originates outside the training distribution. This paper concentrates on the high-dimensional features output by the final convolutional layer, which contain rich image features. Our key idea is to project these high-dimensional features into two specific feature subspaces, leveraging the dimensionality reduction capacity of the network's linear layers, trained with the Predefined Evenly-Distributed Class Centroids (PEDCC) loss. This involves calculating the cosines of three projection angles and the norms of the features, thereby identifying distinctive information for in-distribution (ID) and OOD data that assists in OOD detection. Building upon this, we modify the batch normalization (BN) and ReLU layers preceding the fully connected layer, diminishing their impact on the output feature distributions and thereby widening the distribution gap between ID and OOD data features. Our method requires only the training of the classification network model, eschewing any need for input pre-processing or specific OOD data pre-tuning. Extensive experiments on several benchmark datasets demonstrate that our approach delivers state-of-the-art performance. Our code is available at //github.com/Hewell0/ProjOOD.
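To convey the flavour of norm-and-angle-based scoring, here is a toy score combining the feature norm with the cosine of the angle to the nearest predefined class centroid; the centroids, features, and scoring rule are all invented for illustration, whereas the paper's method uses three projection angles from PEDCC-trained linear layers (see the repository above for the actual implementation).

```python
import numpy as np

def ood_score(feature, class_centroids):
    """Toy OOD score: feature norm times the cosine to the best-aligned
    class centroid. ID samples tend to have large, well-aligned features;
    thresholding a score like this flags OOD inputs. Illustration only."""
    norm = np.linalg.norm(feature)
    cos = class_centroids @ feature / (
        np.linalg.norm(class_centroids, axis=1) * norm + 1e-12)
    return norm * cos.max()

# Hypothetical 4-class setup with evenly distributed unit centroids in 2D.
centroids = np.array([[1, 0], [0, 1], [-1, 0], [0, -1]], dtype=float)
id_like  = ood_score(np.array([3.0, 0.2]), centroids)   # high score (ID-like)
ood_like = ood_score(np.array([0.3, 0.3]), centroids)   # low score (OOD-like)
```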

The cosinor model is frequently used to represent gene expression as a function of the 24-hour day-night cycle time at which a corresponding tissue sample is collected. However, the timing of many biological processes is based on individual-specific internal timing systems that are offset relative to day-night cycle time. When these offsets are unknown, they pose a challenge for statistical analyses with a cosinor model. Specifically, when sample collection times are mis-recorded, cosinor regression can yield attenuated parameter estimates, which in turn attenuate test statistics. This attenuation bias inflates type II error rates in identifying genes with oscillatory behavior. This paper proposes a heuristic method to account for unknown offsets when tissue samples are collected in a longitudinal design. The method first estimates individual-specific cosinor models for each gene. The times of sample collection for an individual are then translated based on the estimated phase-shifts across every gene. Simulation studies confirm that this method mitigates bias in estimation and inference. Illustrations with real data from three circadian biology studies show that the method produces parameter estimates and inferences akin to those obtained when each individual's offset is known.
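Since the heuristic builds on per-individual cosinor fits, here is a standard cosinor regression sketch via linearization to a sine-cosine basis and ordinary least squares; the 3-hour offset in the example is made up to show how the fitted acrophase absorbs an unknown time shift.

```python
import numpy as np

def fit_cosinor(t, y, period=24.0):
    """Fit the cosinor model y = M + A*cos(w*t - phi), w = 2*pi/period, by
    linearizing to y = M + b1*cos(w*t) + b2*sin(w*t) and solving OLS.
    Standard cosinor regression, the building block of per-individual fits."""
    w = 2 * np.pi / period
    X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    (M, b1, b2), *_ = np.linalg.lstsq(X, y, rcond=None)
    amplitude = np.hypot(b1, b2)
    acrophase = np.arctan2(b2, b1)       # phase, in radians
    return M, amplitude, acrophase

# Data collected with a hidden 3-hour offset: the fitted acrophase recovers
# 2*pi*3/24 = pi/4, which is the basis for translating collection times.
rng = np.random.default_rng(0)
t = np.arange(0, 48, 2.0)
y = 5 + 2 * np.cos(2 * np.pi * (t - 3) / 24) + rng.normal(0, 0.3, len(t))
M, A, phi = fit_cosinor(t, y)
```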

We consider the problem of finite-time identification of linear dynamical systems from $T$ samples of a single trajectory. Recent results have predominantly focused on the setup where no structural assumption is made on the system matrix $A^* \in \mathbb{R}^{n \times n}$, and have consequently analyzed the ordinary least squares (OLS) estimator in detail. We instead assume that prior structural information on $A^*$ is available, which can be captured in the form of a convex set $\mathcal{K}$ containing $A^*$. For the ensuing constrained least squares estimator, we derive non-asymptotic error bounds in the Frobenius norm that depend on the local size of $\mathcal{K}$ at $A^*$. To illustrate the usefulness of these results, we instantiate them for four examples, namely when (i) $A^*$ is sparse and $\mathcal{K}$ is a suitably scaled $\ell_1$ ball; (ii) $\mathcal{K}$ is a subspace; (iii) $\mathcal{K}$ consists of matrices each of which is formed by sampling a bivariate convex function on a uniform $n \times n$ grid (convex regression); (iv) $\mathcal{K}$ consists of matrices each row of which is formed by uniform sampling (with step size $1/T$) of a univariate Lipschitz function. In all these situations, we show that $A^*$ can be reliably estimated for values of $T$ much smaller than what is needed in the unconstrained setting.
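For orientation, a minimal sketch of the unconstrained baseline: estimating $A^*$ from a single trajectory by OLS. The constrained estimators studied in the paper would replace this solve with a least squares problem over $\mathcal{K}$; the simulation setup below (dimension, noise level, diagonal $A^*$) is invented for illustration.

```python
import numpy as np

def ols_system_id(X):
    """Estimate A in x_{t+1} = A x_t + noise from one trajectory by ordinary
    least squares: A_hat = argmin_A sum_t ||x_{t+1} - A x_t||^2. The
    constrained variants project/solve over a convex set K instead (e.g. an
    l1 ball when A is known to be sparse)."""
    past, future = X[:-1], X[1:]                   # each (T-1, n)
    B, *_ = np.linalg.lstsq(past, future, rcond=None)
    return B.T                                     # so that x_{t+1} ~ A_hat @ x_t

# Simulate T samples of a stable, sparse (diagonal) system and recover A.
rng = np.random.default_rng(0)
n, T = 5, 500
A_true = np.diag(0.8 * np.ones(n))
X = np.zeros((T, n))
for t in range(T - 1):
    X[t + 1] = A_true @ X[t] + 0.1 * rng.normal(size=n)
A_hat = ols_system_id(X)
```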

With the recent success of generative models for images and text, the evaluation of generative models has gained a lot of attention. Whereas most generative models are compared in terms of scalar values such as the Fréchet Inception Distance (FID) or the Inception Score (IS), Sajjadi et al. (2018) proposed a definition of a precision-recall curve to characterize the closeness of two distributions. Since then, various approaches to precision and recall have emerged (Kynkaanniemi et al., 2019; Naeem et al., 2020; Park & Kim, 2023). They center their attention on the extreme values of precision and recall, but beyond this, the ties between them remain elusive. In this paper, we unify most of these approaches under the same umbrella, relying on the work of Simon et al. (2019). In doing so, we are able not only to recover entire curves, but also to expose the sources of the reported pitfalls of the metrics concerned. We also provide consistency results that go well beyond those presented in the corresponding literature. Lastly, we study the different behaviors of the curves obtained experimentally.
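To fix ideas, here is a small-scale sketch of one of the cited precision-recall notions, in the spirit of the k-NN manifold estimator of Kynkaanniemi et al. (2019), computed on raw two-dimensional features; it is not a re-implementation of any single cited estimator, and the data are synthetic.

```python
import numpy as np

def knn_radii(points, k=3):
    """Distance from each point to its k-th nearest neighbour (self excluded)."""
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    return np.sort(d, axis=1)[:, k]      # column 0 is the point itself

def improved_pr(real, fake, k=3):
    """Manifold-style precision/recall: precision = fraction of fake samples
    falling inside some k-NN ball of the real set; recall = the symmetric
    quantity with the roles of real and fake swapped."""
    r_rad, f_rad = knn_radii(real, k), knn_radii(fake, k)
    d = np.linalg.norm(fake[:, None] - real[None, :], axis=-1)   # (F, R)
    precision = np.mean((d <= r_rad[None, :]).any(axis=1))
    recall = np.mean((d.T <= f_rad[None, :]).any(axis=1))
    return precision, recall

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1, (200, 2))
fake = rng.normal(0.5, 1, (200, 2))      # shifted model distribution
p, r = improved_pr(real, fake)
```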

We present MULTIGAIN 2.0, a major extension to the controller synthesis tool MULTIGAIN, built on top of the probabilistic model checker PRISM. This new version extends MULTIGAIN's multi-objective capabilities by allowing for the formal verification and synthesis of controllers for probabilistic systems with multi-dimensional long-run average reward structures, steady-state constraints, and linear temporal logic properties. Additionally, MULTIGAIN 2.0 can modify the underlying linear program to prevent unbounded-memory and other unintuitive solutions, and it visualizes Pareto curves in the two- and three-dimensional cases to facilitate trade-off analysis in multi-objective scenarios.
