
Linear Mixed Effects (LME) models have been widely applied to clustered data in many areas, including marketing research, clinical trials, and biomedical studies. Inference can be conducted via maximum likelihood if Normal distributions are assumed for the random effects. However, in many applications in economics, business, and medicine, it is often essential to impose constraints on the regression parameters that reflect their real-world interpretations. In this paper we therefore extend the classical (unconstrained) LME model to allow for sign constraints on its overall coefficients. We propose a symmetric doubly truncated Normal (SDTN) distribution for the random effects in place of the unconstrained Normal distribution commonly found in the classical literature. This change substantially complicates inference, since the exact distribution of the dependent variable becomes analytically intractable. We therefore develop likelihood-based approaches that estimate the unknown model parameters using an approximation of this exact distribution. Simulation studies show that the proposed constrained model not only improves the real-world interpretability of the results but also achieves model fits comparable to those of the existing unconstrained model.
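As a minimal sketch of the constrained setup (not the authors' estimation code), the snippet below simulates clustered data from a random-slope LME in which the random effects follow a symmetric doubly truncated Normal distribution; the truncation bound `c`, group sizes, and all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)

def simulate_constrained_lme(n_groups=50, n_per_group=10, beta=2.0, sigma_b=0.5,
                             sigma_e=1.0, c=1.5):
    """Simulate y_ij = x_ij * (beta + b_i) + e_ij with b_i ~ SDTN(0, sigma_b^2) on [-c, c].

    Truncating the random slopes symmetrically at +/- c keeps every group-level
    coefficient beta + b_i non-negative whenever c <= beta, which is the kind of
    sign constraint the abstract refers to. All values are illustrative.
    """
    lo, hi = -c / sigma_b, c / sigma_b          # standardized truncation limits
    b = truncnorm.rvs(lo, hi, scale=sigma_b, size=n_groups, random_state=rng)
    x = rng.uniform(0, 1, size=(n_groups, n_per_group))
    e = rng.normal(0, sigma_e, size=(n_groups, n_per_group))
    y = x * (beta + b[:, None]) + e             # random-slope formulation
    return x, y

x, y = simulate_constrained_lme()
print(x.shape, y.shape)
```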

Related Content

We study the problem of maximizing the probability that (i) an electric component or financial institution $X$ does not default before another component or institution $Y$, and (ii) $X$ and $Y$ default jointly, within the class of all random variables $X,Y$ with given univariate continuous distribution functions $F$ and $G$, respectively. We show that these maximization problems correspond to finding copulas maximizing the mass of the endograph $\Gamma^\leq(T)$ and of the graph $\Gamma(T)$ of $T=G \circ F^-$, respectively. After providing simple, copula-based proofs for the existence of copulas attaining the two maxima $\overline{m}_T$ and $\overline{w}_T$, we generalize the results to general (not necessarily monotonic) transformations $T:[0,1] \rightarrow [0,1]$ and derive simple, easily calculable formulas for $\overline{m}_T$ and $\overline{w}_T$ involving the distribution function $F_T$ of $T$ (interpreted as a random variable on $[0,1]$). These formulas are then used to characterize all non-decreasing transformations $T:[0,1] \rightarrow [0,1]$ for which $\overline{m}_T$ and $\overline{w}_T$ coincide. A strongly consistent estimator for the maximum probability that $X$ does not default before $Y$ is derived and proven to be asymptotically normal under very mild regularity conditions. Several examples and graphics illustrate the main results and falsify some seemingly natural conjectures.
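The following toy Monte Carlo illustration (not the paper's estimator) shows why the probability that $X$ does not default before $Y$ is a property of the copula and not of the marginals alone: with fixed Exponential marginals, the independence coupling and the comonotone coupling give very different values of $P(X \geq Y)$, and in this simple case the comonotone coupling attains the maximum. The marginal choices are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Marginals: default times X ~ Exp(1), Y ~ Exp(2) (illustrative choice).
u = rng.uniform(size=n)
v = rng.uniform(size=n)

# Independence copula: X and Y built from independent uniforms.
x_ind, y_ind = -np.log(1 - u), -np.log(1 - v) / 2.0

# Comonotone coupling: both default times driven by the same uniform.
x_com, y_com = -np.log(1 - u), -np.log(1 - u) / 2.0

print("P(X >= Y), independence:", np.mean(x_ind >= y_ind))   # approx. 2/3
print("P(X >= Y), comonotone:  ", np.mean(x_com >= y_com))   # exactly 1
```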

This paper proposes an algorithm to estimate the parameters of a censored linear regression model in which the regression errors are autocorrelated and the innovations follow a Student-$t$ distribution. The Student-$t$ distribution is widely used for modeling datasets whose errors contain outliers and a more substantial probability of extreme values. The maximum likelihood (ML) estimates are obtained via the SAEM algorithm [1], a stochastic approximation of the EM algorithm designed for models in which the E-step has no analytic form. Expressions for computing the observed Fisher information matrix [2] are also provided. The proposed model is illustrated on a real dataset with left-censored and missing observations. We also conducted two simulation studies to examine the asymptotic properties of the estimates and the robustness of the model.
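As an illustration of the data-generating model only (the SAEM estimation itself is not reproduced here), the snippet below simulates a linear regression with AR(1) errors driven by Student-$t$ innovations and then left-censors the response; all parameter values and the censoring level are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_censored_ar1_t(n=300, beta=(1.0, 0.5), phi=0.6, nu=4.0,
                            sigma=1.0, censor_level=0.0):
    """Simulate y_t = x_t' beta + eps_t with AR(1) errors driven by Student-t
    innovations, then left-censor observations below `censor_level`.

    Returns the design matrix, the (partially censored) responses, and the
    censoring indicator. Parameter values are illustrative only.
    """
    x = np.column_stack([np.ones(n), rng.normal(size=n)])
    innov = sigma * rng.standard_t(nu, size=n)
    eps = np.zeros(n)
    for t in range(1, n):
        eps[t] = phi * eps[t - 1] + innov[t]
    y_latent = x @ np.asarray(beta) + eps
    censored = y_latent < censor_level
    y_obs = np.where(censored, censor_level, y_latent)
    return x, y_obs, censored

x, y, cens = simulate_censored_ar1_t()
print(f"{cens.mean():.1%} of observations are left-censored")
```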

In this paper, the problem of state estimation, in the context of both filtering and smoothing, is considered for nonlinear state-space models. Due to the nonlinear nature of the models, the state estimation problem is generally intractable: it involves integrals of general nonlinear functions, and the filtered and smoothed state distributions lack closed-form solutions. As such, it is common to approximate the state estimation problem. In this paper, we develop an assumed Gaussian solution based on variational inference, which offers the key advantage of a flexible, yet principled, mechanism for approximating the required distributions. Our main contribution is a new formulation of the state estimation problem as an optimisation problem, which can then be solved using standard optimisation routines that employ exact first- and second-order derivatives. The resulting state estimation approach involves a minimal number of assumptions and applies directly to nonlinear systems with both Gaussian and non-Gaussian probabilistic models. The performance of our approach is demonstrated on several examples: a challenging scalar system, a model of a simple robotic system, and a target tracking problem using a von Mises-Fisher distribution; in each case it outperforms alternative assumed Gaussian approaches to state estimation.
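The sketch below illustrates the general idea of assumed Gaussian variational state estimation on a single scalar measurement update: the Gaussian mean and (log) standard deviation are treated as decision variables and a Monte Carlo approximation of the negative ELBO is minimized with a standard optimizer. The toy model, the fixed reparameterized samples, and the use of numerical rather than exact derivatives are simplifying assumptions of this sketch, not features of the paper's method.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)

# Scalar toy model (assumed): prior x ~ N(0, 1), observation y = x**2 / 2 + noise.
m0, p0, r = 0.0, 1.0, 0.1
y_obs = 1.3
z = rng.normal(size=2000)          # fixed base samples (reparameterization)

def neg_elbo(params):
    m, log_s = params
    s = np.exp(log_s)
    x = m + s * z                   # samples from q(x) = N(m, s^2)
    log_lik = norm.logpdf(y_obs, loc=x**2 / 2.0, scale=np.sqrt(r))
    log_prior = norm.logpdf(x, loc=m0, scale=np.sqrt(p0))
    entropy = 0.5 * np.log(2 * np.pi * np.e * s**2)   # Gaussian entropy of q
    return -(np.mean(log_lik + log_prior) + entropy)

res = minimize(neg_elbo, x0=[0.5, np.log(0.5)], method="BFGS")
m_opt, s_opt = res.x[0], np.exp(res.x[1])
print(f"assumed-Gaussian posterior: mean {m_opt:.3f}, std {s_opt:.3f}")
```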

Learning the relationships between various entities from time-series data is essential in many applications. Gaussian graphical models have been studied to infer these relationships. However, existing algorithms process data in a batch at a central location, which limits their use in scenarios where data are gathered by different agents. In this paper, we propose a distributed sparse inverse covariance algorithm to learn the network structure (i.e., the dependencies among observed entities) in real time from data collected by distributed agents. Our approach builds on an online graphical alternating minimization algorithm, augmented with a consensus term that allows agents to learn the desired structure cooperatively. We allow the system designer to select the number of communication rounds and optimization steps per data point. We characterize the rate of convergence of our algorithm and provide simulations on synthetic datasets.
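A toy, batch-mode sketch of the two main ingredients, local sparse-precision updates and a consensus term, is given below (the paper's algorithm is online and communication-aware, which this sketch does not reproduce); the step size, penalty, consensus weight, and network setup are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def soft_threshold(theta, tau):
    """Soft-threshold off-diagonal entries (the l1 proximal operator)."""
    out = np.sign(theta) * np.maximum(np.abs(theta) - tau, 0.0)
    np.fill_diagonal(out, np.diag(theta))
    return out

# Toy setup: 4 agents observe data from the same 5-dimensional Gaussian.
p, n_agents, n_local = 5, 4, 200
true_prec = np.eye(p) + 0.4 * np.diag(np.ones(p - 1), 1) + 0.4 * np.diag(np.ones(p - 1), -1)
cov = np.linalg.inv(true_prec)
data = [rng.multivariate_normal(np.zeros(p), cov, size=n_local) for _ in range(n_agents)]
S = [x.T @ x / n_local for x in data]            # local sample covariances

thetas = [np.eye(p) for _ in range(n_agents)]    # local precision estimates
step, lam, consensus_weight = 0.05, 0.05, 0.5

for it in range(200):
    # Local proximal-gradient step on -logdet(theta) + tr(S_i theta) + lam * ||theta||_1.
    thetas = [soft_threshold(th - step * (S_i - np.linalg.inv(th)), step * lam)
              for th, S_i in zip(thetas, S)]
    # Consensus step: mix each local estimate with the network average.
    avg = sum(thetas) / n_agents
    thetas = [(1 - consensus_weight) * th + consensus_weight * avg for th in thetas]

print(np.round(thetas[0], 2))
```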

This paper is concerned with error estimates of the fully discrete generalized finite element method (GFEM) with optimal local approximation spaces for solving elliptic problems with heterogeneous coefficients. The local approximation spaces are constructed using eigenvectors of local eigenvalue problems solved by the finite element method on some sufficiently fine mesh with mesh size $h$. The error bound of the discrete GFEM approximation is proved to converge as $h\rightarrow 0$ towards that of the continuous GFEM approximation, which was shown to decay nearly exponentially in previous works. Moreover, even for fixed mesh size $h$, a nearly exponential rate of convergence of the local approximation errors with respect to the dimension of the local spaces is established. An efficient and accurate method for solving the discrete eigenvalue problems is proposed by incorporating the discrete $A$-harmonic constraint directly into the eigensolver. Numerical experiments are carried out to confirm the theoretical results and to demonstrate the effectiveness of the method.
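As a generic illustration of how a local approximation space can be extracted from a discrete eigenproblem (the actual method solves FEM eigenproblems with a discrete $A$-harmonic constraint, which is not reproduced here), the sketch below solves a generalized eigenvalue problem for a stand-in SPD matrix pencil and keeps a fixed number of eigenvectors; which end of the spectrum is retained depends on how the eigenproblem is posed.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(5)

# Stand-ins for discrete operators on a fine local mesh (illustrative random SPD
# matrices, not actual stiffness/mass matrices with an A-harmonic constraint).
n = 80
M1 = rng.normal(size=(n, n)); A = M1 @ M1.T + n * np.eye(n)
M2 = rng.normal(size=(n, n)); B = M2 @ M2.T + n * np.eye(n)

# Solve the generalized eigenproblem A v = lambda B v and keep m eigenvectors
# as the basis of an m-dimensional local approximation space.
m = 10
vals, vecs = eigh(A, B)              # eigenvalues returned in ascending order
local_space = vecs[:, -m:]           # columns span the local space (largest eigenvalues here)
print("selected eigenvalues:", np.round(vals[-m:], 3))
```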

In the context of epidemiology, policies for disease control are often devised through a mixture of intuition and brute force: the set of logically conceivable policies is narrowed down to a small family described by a few parameters, after which linearization or grid search is used to identify the optimal policy within that family. This scheme risks leaving out more complex (and perhaps counter-intuitive) policies that could tackle the disease more efficiently. In this article, we use techniques from convex optimization theory and machine learning to optimize over disease-control policies described by hundreds of parameters. In contrast to past control-theoretic approaches to policy optimization, our framework can handle arbitrary uncertainties in the initial conditions and in the model parameters governing the spread of the disease, as well as stochastic models. In addition, our methods allow optimization over policies that remain constant over weekly periods, specified by either continuous or discrete (e.g., lockdown on/off) government measures. We illustrate our approach by minimizing the total time required to eradicate COVID-19 within the Susceptible-Exposed-Infected-Recovered (SEIR) model proposed by Kissler \emph{et al.} (March 2020).
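A crude sketch of the optimization task is given below: a discrete-time SEIR model is simulated under a weekly on/off lockdown schedule, and a biased random search (a placeholder for the convex-optimization and machine-learning machinery of the paper) looks for the schedule with the shortest time to eradication. All epidemiological parameters are illustrative and are not taken from the cited model.

```python
import numpy as np

rng = np.random.default_rng(6)

def seir_days_to_eradication(weekly_lockdown, beta_open=0.4, beta_locked=0.05,
                             sigma=1 / 5.2, gamma=1 / 10, n_pop=1e6, i0=100,
                             threshold=1.0, dt=1.0):
    """Simulate a discrete-time SEIR model under a weekly on/off lockdown schedule
    and return the first day on which E + I drops below `threshold`
    (or infinity if that never happens within the schedule's horizon).
    All parameter values are illustrative."""
    s, e, i, r = n_pop - i0, 0.0, float(i0), 0.0
    for day in range(7 * len(weekly_lockdown)):
        beta = beta_locked if weekly_lockdown[day // 7] else beta_open
        new_inf = beta * s * i / n_pop * dt
        new_sym = sigma * e * dt
        new_rec = gamma * i * dt
        s, e, i, r = s - new_inf, e + new_inf - new_sym, i + new_sym - new_rec, r + new_rec
        if e + i < threshold:
            return day + 1
    return np.inf

# Biased random search over 26 weekly on/off decisions (a stand-in for the
# principled policy optimization described in the abstract).
best_policy, best_time = None, np.inf
for _ in range(2000):
    p_lock = rng.uniform()                              # candidate-specific lockdown intensity
    policy = (rng.random(26) < p_lock).astype(int)
    t = seir_days_to_eradication(policy)
    if t < best_time:
        best_policy, best_time = policy, t

if best_policy is None:
    print("no candidate eradicated the disease within the horizon")
else:
    print("best eradication time (days):", best_time,
          "locked weeks:", int(best_policy.sum()))
```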

Denoising is one of the most important data processing tasks and is generally a prerequisite for downstream image analysis in many fields. Despite their superior denoising performance, supervised deep denoising methods require paired noise-clean or noise-noise samples that are often unavailable in practice. On the other hand, unsupervised deep denoising methods such as Noise2Void and its variants predict masked pixels from their neighboring pixels in single noisy images. However, these unsupervised algorithms only work under the independent-noise assumption, whereas real noise is usually correlated and exhibits complex structural patterns. Here we propose the first-of-its-kind feature similarity-based unsupervised denoising approach that works in a nonlocal and nonlinear fashion to suppress not only independent but also correlated noise. Our approach is referred to as Noise2Sim because different noisy sub-images with similar signals are extracted to form as many training pairs as possible, so that the parameters of a deep denoising network can be optimized in a self-learning fashion. Theoretically, we establish that Noise2Sim is equivalent to supervised learning methods under mild conditions. Experimentally, Noise2Sim achieves excellent results on natural, microscopic, low-dose CT, and photon-counting micro-CT images, removing both independent and correlated image noise and outperforming competing denoising methods. Noise2Sim could open a new direction of research and lead to the development of adaptive denoising tools for diverse applications.
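The sketch below illustrates only the pair-construction step suggested by the name Noise2Sim: for each patch of a single noisy image, the most similar other patch is located and the two are paired as (input, target) for self-learning. The patch size, stride, similarity measure, and the absence of any network training are simplifications of this sketch, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(7)

def similar_patch_pairs(noisy, patch=8, stride=8, n_pairs=64):
    """Extract non-overlapping patches from a single noisy image and, for each
    query patch, pick its most similar *other* patch (smallest L2 distance).
    The resulting (input, target) pairs could then be used to train a denoiser
    in a self-learning fashion; this sketch only builds the pairs."""
    h, w = noisy.shape
    patches = [noisy[i:i + patch, j:j + patch]
               for i in range(0, h - patch + 1, stride)
               for j in range(0, w - patch + 1, stride)]
    patches = np.stack(patches).reshape(len(patches), -1)
    # Pairwise squared distances between flattened patches.
    d2 = ((patches[:, None, :] - patches[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)                  # exclude self-matches
    nearest = d2.argmin(axis=1)
    idx = rng.choice(len(patches), size=min(n_pairs, len(patches)), replace=False)
    return patches[idx], patches[nearest[idx]]    # (inputs, targets)

image = rng.normal(0.5, 0.1, size=(128, 128))     # stand-in for a noisy image
inputs, targets = similar_patch_pairs(image)
print(inputs.shape, targets.shape)
```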

Stochastic processes are random variables with values in some space of paths. However, reducing a stochastic process to a path-valued random variable ignores its filtration, i.e., the flow of information carried by the process through time. By conditioning the process on its filtration, we introduce a family of higher order kernel mean embeddings (KMEs) that generalizes the notion of KME and captures additional information related to the filtration. We derive empirical estimators for the associated higher order maximum mean discrepancies (MMDs) and prove consistency. We then construct a filtration-sensitive kernel two-sample test able to pick up information that is missed by the standard MMD test. In addition, leveraging our higher order MMDs, we construct a family of universal kernels on stochastic processes that allows real-world calibration and optimal stopping problems in quantitative finance (such as the pricing of American options) to be solved via classical kernel-based regression methods. Finally, adapting existing tests for conditional independence to the case of stochastic processes, we design a causal-discovery algorithm to recover the causal graph of structural dependencies among interacting bodies solely from observations of their multidimensional trajectories.
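For orientation, the snippet below implements the standard (first-order, non-filtration-aware) biased MMD estimator with an RBF kernel on flattened paths; it is exactly the information missed by such an estimator that the higher order, filtration-conditioned MMDs of the paper are designed to capture. The kernel choice, bandwidth, and path construction are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(8)

def rbf_kernel(a, b, bandwidth=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def mmd2_biased(x, y, bandwidth=1.0):
    """Biased estimator of the squared maximum mean discrepancy between the
    samples x and y (each row is one observation, e.g. a flattened path)."""
    kxx = rbf_kernel(x, x, bandwidth)
    kyy = rbf_kernel(y, y, bandwidth)
    kxy = rbf_kernel(x, y, bandwidth)
    return kxx.mean() + kyy.mean() - 2 * kxy.mean()

# Two samples of short "paths", flattened to vectors; the second has a drift.
x = rng.normal(size=(100, 20)).cumsum(axis=1)
y = (rng.normal(size=(100, 20)) + 0.3).cumsum(axis=1)
print("MMD^2 estimate:", mmd2_biased(x, y))
```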

In this work, we compare three approaches to modeling the scores of soccer matches with regard to their predictive performance, based on all matches from the four previous FIFA World Cups 2002-2014: Poisson regression models, random forests, and ranking methods. While the former two are based on the teams' covariate information, the latter estimates ability parameters that best reflect the teams' current strength. Within this comparison, the best-performing prediction methods on the training data turn out to be the ranking methods and the random forests. However, we show that by combining the random forest with the team ability parameters from the ranking methods as an additional covariate, we can improve the predictive power substantially. This combination of methods is therefore chosen as the final model, and based on its estimates the FIFA World Cup 2018 is simulated repeatedly to obtain winning probabilities for all teams. The model slightly favors Spain over the defending champion Germany. Additionally, we provide survival probabilities for all teams at all tournament stages as well as the most probable tournament outcome.
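A minimal sketch of the combined model on synthetic data is given below: team covariates are augmented with a hypothetical ranking-based ability difference and a random forest is fitted to goal counts. The covariates, the ability column, and the goal-generating mechanism are all synthetic stand-ins, not the paper's data or ranking estimates.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(9)

# Synthetic stand-ins: team covariates (e.g. market value, FIFA rank) plus an
# "ability" parameter that would, in the paper, come from the ranking methods.
n_matches = 500
covariates = rng.normal(size=(n_matches, 5))
ability_diff = rng.normal(size=(n_matches, 1))   # hypothetical ranking-based ability difference
goals = rng.poisson(lam=np.exp(0.3 * ability_diff[:, 0] + 0.1 * covariates[:, 0]))

# Combined model: random forest on covariates augmented with the ability column.
X = np.hstack([covariates, ability_diff])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, goals)
print("feature importances:", np.round(model.feature_importances_, 3))
```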

Large margin nearest neighbor (LMNN) is a metric learner that optimizes the performance of the popular $k$NN classifier. However, the resulting metric relies on pre-selected target neighbors. In this paper, we study the feasibility of LMNN's optimization constraints with respect to these target points and introduce a mathematical measure of the size of the feasible region of the optimization problem. We enhance the optimization framework of LMNN with a weighting scheme that prefers data triplets yielding a larger feasible region. This increases the chances of obtaining a good metric as the solution of LMNN's problem. We evaluate the resulting feasibility-based LMNN algorithm on synthetic and real datasets. The empirical results show improved accuracy across different types of datasets in comparison to regular LMNN.
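The sketch below shows an LMNN-style triplet hinge loss with per-triplet weights, which is where feasibility-based weights would enter; how those weights are computed from the feasible-region measure is not reproduced here, and the weights, data, and triplet selection are all illustrative assumptions.

```python
import numpy as np

def weighted_triplet_hinge_loss(L, anchors, targets, impostors, weights, margin=1.0):
    """LMNN-style triplet loss under a linear transform L, with per-triplet weights.

    In a feasibility-based variant, `weights` would up-weight triplets whose
    constraints leave a larger feasible region; here they are passed in as given.
    """
    def d2(a, b):                                   # squared distance under L
        diff = (a - b) @ L.T
        return (diff ** 2).sum(axis=1)

    pull = d2(anchors, targets)                                       # keep target neighbors close
    push = np.maximum(0.0, margin + pull - d2(anchors, impostors))    # hinge on impostors
    return np.sum(weights * (pull + push))

rng = np.random.default_rng(10)
X = rng.normal(size=(30, 4))
idx = rng.integers(0, 30, size=(50, 3))             # (anchor, target, impostor) triplets
w = rng.uniform(0.5, 1.5, size=50)                  # stand-in feasibility weights
loss = weighted_triplet_hinge_loss(np.eye(4), X[idx[:, 0]], X[idx[:, 1]], X[idx[:, 2]], w)
print(f"weighted LMNN loss: {loss:.3f}")
```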
