
In this paper, we consider the multicollinearity problem in the gamma regression model when the model parameters are linearly restricted. Such linear restrictions arise from prior information and serve to ensure the validity of scientific theories or structural consistency based on physical phenomena. To draw relevant statistical inferences from a model, any available knowledge and prior information on the model parameters should be taken into account. This paper therefore proposes an algorithm for obtaining a Bayesian estimator of the parameters of a gamma regression model subject to linear inequality restrictions. We then show that the proposed estimator outperforms ordinary estimators, such as the maximum likelihood and ridge estimators, in terms of pertinence and accuracy, through Monte Carlo simulations and an application to a real dataset.
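As a rough illustration of how a restricted Bayesian fit of this kind can be computed (a minimal sketch, not the paper's algorithm), the code below runs a random-walk Metropolis sampler for a log-link gamma regression whose flat prior is truncated to the region $A\beta \ge b$; the design, the restriction matrix `A`, the bound `b`, and the fixed shape `nu` are all made-up toy choices.

```python
# Hedged sketch: Metropolis sampling for a gamma regression (log link) with a
# flat prior truncated to the linear inequality region A @ beta >= b.
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(0)

def gamma_loglik(beta, X, y, nu=2.0):
    """Log-likelihood of y_i ~ Gamma(shape=nu, mean=exp(x_i' beta))."""
    mu = np.exp(X @ beta)
    return np.sum(nu * np.log(nu / mu) - gammaln(nu)
                  + (nu - 1.0) * np.log(y) - nu * y / mu)

def restricted_posterior_sample(X, y, A, b, beta0, n_iter=5000, step=0.05):
    """Random-walk Metropolis; proposals outside A @ beta >= b are rejected."""
    beta = beta0.copy()
    ll = gamma_loglik(beta, X, y)
    draws = []
    for _ in range(n_iter):
        prop = beta + step * rng.standard_normal(beta.size)
        if np.all(A @ prop >= b):          # flat prior truncated to the restriction
            ll_prop = gamma_loglik(prop, X, y)
            if np.log(rng.uniform()) < ll_prop - ll:
                beta, ll = prop, ll_prop
        draws.append(beta.copy())
    return np.array(draws)[n_iter // 2:]   # discard burn-in

# Toy data with the illustrative restriction beta_1 >= 0 (A, b, nu are made up).
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = rng.gamma(shape=2.0, scale=np.exp(0.5 + 0.8 * X[:, 1]) / 2.0)
A, b = np.array([[0.0, 1.0]]), np.array([0.0])
print(restricted_posterior_sample(X, y, A, b, beta0=np.zeros(2)).mean(axis=0))
```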

Related content

Decreasing costs and improved sensor and monitoring technology (e.g., fiber optics and strain gauges) have led to measurements being taken in ever closer proximity to each other. When using such spatially dense measurement data in Bayesian system identification strategies, the correlation in the model prediction error can become significant. The widely adopted assumption of uncorrelated Gaussian error may lead to inaccurate parameter estimation and overconfident predictions, which in turn may lead to sub-optimal decisions. This paper addresses the challenges of performing Bayesian system identification for structures when large datasets are used, considering both spatial and temporal dependencies in the model uncertainty. We present an approach to efficiently evaluate the log-likelihood function, and we utilize nested sampling to compute the evidence for Bayesian model selection. The approach is first demonstrated on a synthetic case and then applied to a real-world steel bridge for which measurements are available. The results show that the assumption of dependence in the model prediction uncertainties is decisively supported by the data. The proposed developments enable the use of large datasets and account for the dependency when performing Bayesian system identification, even when a relatively large number of uncertain parameters is inferred.
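One standard way to make the log-likelihood evaluation tractable for dense sensor grids (shown here as a hedged sketch, not the paper's formulation) is to assume a separable, Kronecker-structured covariance for the prediction error, so that only the small spatial and temporal correlation factors ever need to be factorized; the exponential correlation model and all dimensions below are illustrative.

```python
# Sketch: Gaussian log-likelihood with Sigma = sigma^2 * kron(C_t, C_s),
# evaluated via the small spatial/temporal factors only.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def exp_corr(points, length):
    """Exponential correlation matrix for 1-D coordinates `points` (toy model)."""
    d = np.abs(points[:, None] - points[None, :])
    return np.exp(-d / length)

def kron_gaussian_loglik(E, C_s, C_t, sigma2):
    """log N(vec(E); 0, sigma2 * kron(C_t, C_s)); E is the (Ns, Nt) residual matrix."""
    Ns, Nt = E.shape
    Ls, Lt = cho_factor(C_s), cho_factor(C_t)
    logdet = (Ns * Nt * np.log(sigma2)
              + Nt * 2.0 * np.sum(np.log(np.diag(Ls[0])))
              + Ns * 2.0 * np.sum(np.log(np.diag(Lt[0]))))
    # vec(E)' Sigma^{-1} vec(E) = sum(E * (C_s^{-1} E C_t^{-1})) / sigma2
    quad = np.sum(E * cho_solve(Ls, cho_solve(Lt, E.T).T)) / sigma2
    return -0.5 * (Ns * Nt * np.log(2 * np.pi) + logdet + quad)

# Toy check against the dense computation.
rng = np.random.default_rng(1)
xs, ts = np.linspace(0, 10, 6), np.linspace(0, 1, 8)
C_s, C_t = exp_corr(xs, 3.0), exp_corr(ts, 0.2)
E = rng.standard_normal((6, 8))
dense = 0.5 * np.kron(C_t, C_s)
v = E.T.reshape(-1)                      # column-major vec(E)
ref = -0.5 * (48 * np.log(2 * np.pi) + np.linalg.slogdet(dense)[1]
              + v @ np.linalg.solve(dense, v))
print(kron_gaussian_loglik(E, C_s, C_t, 0.5), ref)
```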

This paper extends the gradient-based reconstruction approach of Chamarthi \cite{chamarthi2023gradient} to genuine high-order accuracy for inviscid test cases involving smooth flows. A seventh-order accurate scheme is derived using the same stencil as the explicit fourth-order scheme proposed in Ref. \cite{chamarthi2023gradient}, which also has low dissipation properties. The proposed method is seventh-order accurate under the assumption that the variables at the \textit{cell centres are point values}. A problem-independent discontinuity detector is used to obtain high-order accuracy. Accordingly, primitive or conservative variable reconstruction is performed around regions of discontinuities, whereas flux reconstruction is applied in smooth solution regions. The proposed approach can still share the derivatives between the inviscid and viscous fluxes, which is the main idea behind the gradient-based reconstruction. Several standard benchmark test cases are presented. The proposed method is more efficient than the seventh-order weighted compact nonlinear scheme (WCNS) for the test cases considered in this paper.
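For intuition about the order-of-accuracy claim only (no discontinuity detector, flux splitting, or the paper's actual stencil coefficients are included), the toy check below interpolates a degree-six polynomial through seven cell-centre point values and evaluates it at the cell face, confirming roughly seventh-order error decay for a smooth profile.

```python
# Illustrative only: seven-point (degree-6) interpolation of point values at
# cell centres to the face x_{i+1/2}; error should drop by ~2^7 per refinement.
import numpy as np

def face_value_7th(u, i):
    """Interpolate u at x_{i+1/2} from the 7 point values u_{i-3..i+3}."""
    s = np.arange(-3, 4)                       # stencil offsets in units of h
    coeffs = np.polyfit(s, u[i - 3:i + 4], 6)  # unique degree-6 interpolant
    return np.polyval(coeffs, 0.5)             # evaluate at the cell face

for n in (16, 32, 64):
    h = 2 * np.pi / n
    x = h * np.arange(n)
    u = np.sin(x)
    i = n // 2
    print(n, abs(face_value_7th(u, i) - np.sin(x[i] + 0.5 * h)))
```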

In the big data era, researchers face a series of problems. Even standard approaches/methodologies, such as linear regression, can be difficult or problematic with huge volumes of data. Traditional approaches to regression in big datasets may suffer due to the large sample size, since they involve inverting huge data matrices, or simply because the data cannot fit in memory. Approaches proposed in the literature are based on selecting representative subdata on which to run the regression. Existing approaches select the subdata using information criteria and/or properties of orthogonal arrays. In the present paper, we improve upon existing algorithms by providing a new algorithm based on the D-optimality approach. We provide simulation evidence of its performance. Evidence about the parameters of the proposed algorithm is also provided, in order to clarify the trade-offs between execution time and information gain. Real data applications are also provided.
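A hedged sketch of the general idea (not the paper's exact algorithm): greedily grow the subdata set by repeatedly adding the row that most increases $\det(X_S^{\top}X_S)$, which by the rank-one determinant identity amounts to picking the candidate with the largest $x^{\top}M^{-1}x$; the seed size, ridge term, and toy data are illustrative.

```python
# Sketch: greedy D-optimal subdata selection with rank-one determinant updates,
# det(M + x x') = det(M) * (1 + x' M^{-1} x), and Sherman-Morrison for M^{-1}.
import numpy as np

def greedy_d_optimal(X, k, n_seed=None, rng=None):
    rng = np.random.default_rng(rng)
    n, d = X.shape
    n_seed = n_seed or 2 * d
    selected = list(rng.choice(n, size=n_seed, replace=False))
    # Small ridge keeps the seed information matrix invertible (toy choice).
    M_inv = np.linalg.inv(X[selected].T @ X[selected] + 1e-8 * np.eye(d))
    mask = np.ones(n, dtype=bool)
    mask[selected] = False
    while len(selected) < k:
        cand = np.flatnonzero(mask)
        gains = np.einsum("ij,jk,ik->i", X[cand], M_inv, X[cand])  # x' M^{-1} x
        j = cand[np.argmax(gains)]
        x = X[j]
        Mx = M_inv @ x
        M_inv -= np.outer(Mx, Mx) / (1.0 + x @ Mx)   # Sherman-Morrison update
        selected.append(j)
        mask[j] = False
    return np.array(selected)

# Toy use: pick 100 of 50,000 rows and fit OLS on the subdata only.
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(50_000), rng.normal(size=(50_000, 4))])
beta = np.array([1.0, 2.0, -1.0, 0.5, 0.0])
y = X @ beta + rng.normal(size=50_000)
idx = greedy_d_optimal(X, k=100, rng=3)
print(np.linalg.lstsq(X[idx], y[idx], rcond=None)[0])
```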

This paper introduces two randomized preconditioning techniques for robustly solving kernel ridge regression (KRR) problems with a medium to large number of data points ($10^4 \leq N \leq 10^7$). The first method, RPCholesky preconditioning, is capable of accurately solving the full-data KRR problem in $O(N^2)$ arithmetic operations, assuming sufficiently rapid polynomial decay of the kernel matrix eigenvalues. The second method, KRILL preconditioning, offers an accurate solution to a restricted version of the KRR problem involving $k \ll N$ selected data centers at a cost of $O((N + k^2) k \log k)$ operations. The proposed methods solve a broad range of KRR problems and overcome the failure modes of previous KRR preconditioners, making them ideal for practical applications.
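The sketch below illustrates the general recipe such preconditioners follow, under stated assumptions and without claiming to reproduce the paper's implementation: a randomly pivoted partial Cholesky factor $F$ with $K \approx FF^{\top}$ is built by sampling pivots in proportion to the residual diagonal, and $FF^{\top} + \lambda I$ (inverted via the Woodbury identity) is used as a preconditioner for conjugate gradients on $(K + \lambda I)\alpha = y$; the kernel, sizes, and rank are toy choices.

```python
# Hedged sketch of a randomly pivoted Cholesky preconditioner for KRR.
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def rp_cholesky(K, k, rng=None):
    """Rank-k approximation K ~ F F' with pivots drawn from the residual diagonal."""
    rng = np.random.default_rng(rng)
    n = K.shape[0]
    F = np.zeros((n, k))
    d = np.diag(K).copy()
    for j in range(k):
        s = rng.choice(n, p=d / d.sum())      # pivot ~ residual diagonal
        g = K[:, s] - F[:, :j] @ F[s, :j]
        F[:, j] = g / np.sqrt(g[s])
        d = np.clip(d - F[:, j] ** 2, 0.0, None)
    return F

def krr_solve(K, y, lam, k=100, rng=None):
    n = K.shape[0]
    F = rp_cholesky(K, k, rng)
    small = np.linalg.cholesky(lam * np.eye(k) + F.T @ F)
    def precond(v):                           # (F F' + lam I)^{-1} v via Woodbury
        w = np.linalg.solve(small.T, np.linalg.solve(small, F.T @ v))
        return (v - F @ w) / lam
    M = LinearOperator((n, n), matvec=precond)
    alpha, info = cg(K + lam * np.eye(n), y, M=M)
    return alpha

# Toy kernel ridge regression problem.
rng = np.random.default_rng(4)
X = rng.uniform(size=(2000, 3))
K = np.exp(-np.sum((X[:, None] - X[None]) ** 2, axis=-1) / 0.5)
y = np.sin(X.sum(axis=1)) + 0.1 * rng.normal(size=2000)
alpha = krr_solve(K, y, lam=1e-3, k=150, rng=5)
print(np.linalg.norm((K + 1e-3 * np.eye(2000)) @ alpha - y))
```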

Assessing causal effects in the presence of unmeasured confounding is a challenging problem. Although auxiliary variables, such as instrumental variables, are commonly used to identify causal effects, they are often unavailable in practice due to stringent and untestable conditions. To address this issue, previous research has utilized linear structural equation models to show that the causal effect can be identified when the noise variables of the treatment and outcome are both non-Gaussian. In this paper, we investigate the problem of identifying the causal effect using auxiliary covariates and non-Gaussianity from the treatment. Our key idea is to characterize the impact of unmeasured confounders using an observed covariate, assuming the confounders are all Gaussian. The auxiliary covariate can be an invalid instrument or an invalid proxy variable. We demonstrate that the causal effect can be identified using this measured covariate, even when the only source of non-Gaussianity comes from the treatment. We then extend the identification results to the multi-treatment setting and provide sufficient conditions for identification. Based on our identification results, we propose a simple and efficient procedure for calculating causal effects and show the $\sqrt{n}$-consistency of the proposed estimator. Finally, we evaluate the performance of our estimator through simulation studies and an application.
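To make the setting concrete (this simulates the data-generating structure only and does not implement the paper's identification procedure), the toy below has a Gaussian unmeasured confounder, an observed covariate that is an invalid proxy, a treatment with non-Gaussian (uniform) noise, and a Gaussian outcome error; naive least squares on the observed variables is biased for the causal effect, which is precisely the gap the identification results address. All coefficients are made up.

```python
# Simulation sketch of the setting only (not the paper's estimator).
import numpy as np

rng = np.random.default_rng(6)
n, beta = 100_000, 1.5

U = rng.normal(size=n)                       # unmeasured Gaussian confounder
W = 0.8 * U + rng.normal(size=n)             # observed (invalid) proxy of U
eps_T = rng.uniform(-1, 1, size=n)           # non-Gaussian treatment noise
T = 1.0 * U + 0.5 * W + eps_T
Y = beta * T + 0.7 * U + rng.normal(size=n)  # Gaussian outcome noise

# Naive regression of Y on (T, W): biased because U is omitted.
Z = np.column_stack([T, W, np.ones(n)])
print("naive OLS coefficient on T:", np.linalg.lstsq(Z, Y, rcond=None)[0][0])
print("true causal effect:        ", beta)
```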

We present a registration method for model reduction of parametric partial differential equations with dominating advection effects and moving features. Registration refers to the use of a parameter-dependent mapping to make the set of solutions to these equations more amenable to approximation using classical reduced basis methods. The proposed approach draws on concepts from optimal transport theory, as we utilize Monge embeddings to construct these mappings in a purely data-driven way. The method relies on a single interpretable hyper-parameter. We discuss how our approach relates to existing works that combine model order reduction and optimal transport theory. Numerical results are provided to demonstrate the effect of the registration. These include a model problem where the solution is itself a probability density and one where it is not.
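As a one-dimensional illustration of the optimal-transport ingredient (purely a toy; the paper's construction is data-driven and not limited to densities on the line), the Monge map between two densities with CDFs $F_{\mathrm{ref}}$ and $F_{\mathrm{target}}$ is $T = F_{\mathrm{target}}^{-1} \circ F_{\mathrm{ref}}$, which registers a reference bump onto its advected copy:

```python
# Toy 1-D Monge map via inverse CDFs; the exact map for a pure shift is x -> x + 2.
import numpy as np

def monge_map_1d(x, ref_density, target_density):
    """Return T(x) with T = F_target^{-1} o F_ref, computed on the grid x."""
    dx = np.gradient(x)
    F_ref = np.cumsum(ref_density * dx); F_ref /= F_ref[-1]
    F_tgt = np.cumsum(target_density * dx); F_tgt /= F_tgt[-1]
    # Invert F_target by interpolation: T(x_i) solves F_target(T) = F_ref(x_i).
    return np.interp(F_ref, F_tgt, x)

x = np.linspace(-8, 8, 3201)
rho0 = np.exp(-0.5 * x ** 2)                 # reference bump
rho1 = np.exp(-0.5 * (x - 2.0) ** 2)         # advected (shifted) bump
T = monge_map_1d(x, rho0, rho1)
interior = np.abs(x) <= 2.0
print(np.max(np.abs(T[interior] - (x[interior] + 2.0))))  # small discretization error
```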

Feature selection is one of the most relevant processes in any methodology for creating a statistical learning model. Generally, existing algorithms establish some criterion to select the most influential variables, discarding those that do not contribute any relevant information to the model. This methodology makes sense in a classical static situation where the joint distribution of the data does not vary over time. However, when dealing with real data, it is common to encounter the problem of dataset shift and, specifically, changes in the relationships between variables (concept shift). In this case, the influence of a variable cannot be the only indicator of its quality as a regressor of the model, since the relationship learned in the training phase may not correspond to the current situation. Thus, we propose a new feature selection methodology for regression problems that takes this fact into account, using Shapley values to study the effect that each variable has on the predictions. Five examples are analysed: four correspond to typical situations where the method matches the state of the art, and one relates to electricity price forecasting in the Iberian market, where a concept shift has occurred. In the latter case, the proposed algorithm improves the results significantly.
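The sketch below shows the Shapley-value ingredient in isolation (a generic Monte-Carlo permutation estimator with a background sample, not the paper's full selection procedure): each feature's relevance is summarized by the mean absolute Shapley value of a fitted model's predictions; the linear toy model and all sizes are illustrative.

```python
# Hedged sketch: permutation-based Shapley values for a fitted regressor,
# with "absent" features replaced by values drawn from a background sample.
import numpy as np

def shapley_values(predict, x, background, n_perm=200, rng=None):
    """Approximate Shapley values of predict(x) w.r.t. a background sample."""
    rng = np.random.default_rng(rng)
    d = x.shape[0]
    phi = np.zeros(d)
    for _ in range(n_perm):
        order = rng.permutation(d)
        z = background[rng.integers(len(background))].copy()  # start: all absent
        prev = predict(z[None])[0]
        for j in order:                      # add features one at a time
            z[j] = x[j]
            cur = predict(z[None])[0]
            phi[j] += cur - prev
            prev = cur
    return phi / n_perm

# Toy regression with an irrelevant third feature.
rng = np.random.default_rng(7)
X = rng.normal(size=(1000, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.normal(size=1000)
beta = np.linalg.lstsq(np.column_stack([X, np.ones(1000)]), y, rcond=None)[0]
predict = lambda Z: Z @ beta[:3] + beta[3]

phi = np.mean([np.abs(shapley_values(predict, X[i], X[:100], rng=i))
               for i in range(50)], axis=0)
print("mean |Shapley value| per feature:", phi)   # third feature should be near 0
```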

We study the classical problem of predicting an outcome variable, $Y$, using a linear combination of a $d$-dimensional covariate vector, $\mathbf{X}$. We are interested in linear predictors whose coefficients solve \begin{align*} \inf_{\boldsymbol{\beta} \in \mathbb{R}^d} \left( \mathbb{E}_{\mathbb{P}_n} \left[ \left|Y-\mathbf{X}^{\top}\boldsymbol{\beta} \right|^r \right] \right)^{1/r} + \delta \, \rho\left(\boldsymbol{\beta}\right), \end{align*} where $\delta>0$ is a regularization parameter, $\rho:\mathbb{R}^d\to \mathbb{R}_+$ is a convex penalty function, $\mathbb{P}_n$ is the empirical distribution of the data, and $r\geq 1$. We present three sets of new results. First, we provide conditions under which linear predictors based on these estimators solve a \emph{distributionally robust optimization} problem: they minimize the worst-case prediction error over distributions that are close to each other in a type of \emph{max-sliced Wasserstein metric}. Second, we provide a detailed finite-sample and asymptotic analysis of the statistical properties of the balls of distributions over which the worst-case prediction error is analyzed. Third, we use the distributionally robust optimality and our statistical analysis to present i) an oracle recommendation for the choice of regularization parameter, $\delta$, that guarantees good out-of-sample prediction error; and ii) a test statistic to rank the out-of-sample performance of two different linear estimators. None of our results rely on sparsity assumptions about the true data-generating process; thus, they broaden the scope of use of the square-root lasso and related estimators in prediction problems.
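For the special case $r = 2$ and $\rho(\cdot) = \lVert\cdot\rVert_1$, the objective above reduces to the square-root lasso. The snippet below solves this special case directly with cvxpy; the value of $\delta$ and the toy data are illustrative and unrelated to the oracle recommendation developed in the paper.

```python
# Square-root lasso: ||y - X beta||_2 / sqrt(n) + delta * ||beta||_1, via cvxpy.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(8)
n, d = 200, 10
X = rng.normal(size=(n, d))
beta_true = np.concatenate([np.array([2.0, -1.5, 1.0]), np.zeros(d - 3)])
y = X @ beta_true + rng.normal(size=n)

delta = 0.1                                   # illustrative, not the oracle choice
beta = cp.Variable(d)
objective = cp.norm(y - X @ beta, 2) / np.sqrt(n) + delta * cp.norm1(beta)
cp.Problem(cp.Minimize(objective)).solve()
print(np.round(beta.value, 3))
```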

We develop the concept of the exponential stochastic inequality (ESI), a novel notation that simultaneously captures high-probability and in-expectation statements. It is especially well suited to succinctly state, prove, and reason about excess-risk and generalization bounds in statistical learning, specifically, but not restricted to, those of the PAC-Bayesian type. We show that the ESI satisfies transitivity and other properties which allow us to use it like a standard, non-stochastic inequality. We substantially extend the original definition from Koolen et al. (2016) and show that general ESIs satisfy a host of useful additional properties, including a novel Markov-like inequality. We show how ESIs relate to, and clarify, PAC-Bayesian bounds, subcentered subgamma random variables, and \emph{fast-rate conditions} such as the central and Bernstein conditions. We also show how the ideas can be extended to random scaling factors (learning rates).
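For concreteness, one common way such a notation is formalized (recalled here as a sketch; the paper's conventions for the symbol and its generalizations may differ) is: for $\eta > 0$, write $X \trianglelefteq_\eta Y$ if $\mathbb{E}\!\left[e^{\eta(X - Y)}\right] \le 1$. This single statement implies both $\mathbb{E}[X] \le \mathbb{E}[Y]$ (by Jensen's inequality) and, for every $\delta \in (0,1)$, $\Pr\!\left(X \ge Y + \log(1/\delta)/\eta\right) \le \delta$ (by Markov's inequality applied to $e^{\eta(X-Y)}$), which is how an ESI packages in-expectation and high-probability statements at once.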

Experimental data are often composed of variables measured independently, at different sampling rates (non-uniform $\Delta t$ between successive measurements); at a specific time point, only a subset of all variables may be sampled. Approaches to identifying dynamical systems from such data typically use interpolation, imputation, or subsampling to reorganize or modify the training data \textit{prior} to learning. Partial physical knowledge may also be available \textit{a priori} (accurately or approximately), and data-driven techniques can complement this knowledge. Here we exploit neural network architectures based on numerical integration methods and \textit{a priori} physical knowledge to identify the right-hand side of the underlying governing differential equations. Iterates of such neural-network models allow for learning from data sampled at arbitrary time points \textit{without} data modification. Importantly, we integrate the network with available partial physical knowledge in "physics-informed gray boxes"; this enables learning unknown kinetic rates or microbial growth functions while simultaneously estimating experimental parameters.
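A hedged sketch of such an architecture (the network size, the known decay term, and the logistic-growth ground truth are all illustrative placeholders, not the paper's model): the ODE right-hand side is a known physics term plus a small neural network for the unknown kinetics, and a classical RK4 step built from it is trained to match measurements taken at non-uniform times, with each sample pair using its own $\Delta t$.

```python
# Sketch of a "physics-informed gray box": known decay + learned kinetics,
# wrapped in an RK4 step and trained on non-uniformly sampled data.
import torch
import torch.nn as nn

class GrayBoxRHS(nn.Module):
    def __init__(self, decay_rate=0.1):
        super().__init__()
        self.decay_rate = decay_rate              # known physics parameter
        self.net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

    def forward(self, x):
        known = -self.decay_rate * x              # known part of dx/dt
        unknown = self.net(x)                     # learned kinetic term
        return known + unknown

def rk4_step(rhs, x, dt):
    """One classical RK4 step; dt may differ between successive samples."""
    k1 = rhs(x)
    k2 = rhs(x + 0.5 * dt * k1)
    k3 = rhs(x + 0.5 * dt * k2)
    k4 = rhs(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Synthetic measurements at non-uniform times from dx/dt = -0.1 x + 0.5 x (1 - x).
torch.manual_seed(0)
t = torch.sort(torch.rand(60) * 10.0).values
x = torch.zeros(60, 1)
x[0] = 0.2
for i in range(59):                               # fine reference integration
    xi, dt = x[i], (t[i + 1] - t[i]) / 20
    for _ in range(20):
        xi = xi + dt * (-0.1 * xi + 0.5 * xi * (1 - xi))
    x[i + 1] = xi

rhs = GrayBoxRHS()
opt = torch.optim.Adam(rhs.parameters(), lr=1e-2)
for epoch in range(500):
    opt.zero_grad()
    pred = rk4_step(rhs, x[:-1], (t[1:] - t[:-1]).unsqueeze(1))
    loss = ((pred - x[1:]) ** 2).mean()
    loss.backward()
    opt.step()
print("final one-step training loss:", float(loss))
```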
