
The development of data acquisition systems is facilitating the collection of data that are apt to be modelled as functional data. In some applications, the interest lies in identifying significant differences in group functional means defined by varying experimental conditions, which is known as functional analysis of variance (FANOVA). With real data, it is common that the sample under study is contaminated by outliers, which can strongly bias the analysis. In this paper, we propose a new robust nonparametric functional ANOVA method (RoFANOVA) that downweights the influence of outlying functional data on the results of the analysis. It is implemented through a permutation test whose test statistic is obtained via a functional extension of the classical robust $M$-estimator. By means of an extensive Monte Carlo simulation study, the proposed test is compared with alternatives already presented in the literature, in both one-way and two-way designs. The performance of the RoFANOVA is demonstrated in the framework of a motivating real case study in the field of additive manufacturing that deals with the analysis of spatter ejections. The RoFANOVA method is implemented in the R package rofanova, available online at //github.com/unina-sfere/rofanova.
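As a rough illustration of the idea, the sketch below runs a permutation test for equality of robust functional group means on a common grid, with the pointwise Huber $M$-estimate computed by iteratively reweighted least squares. All names are hypothetical and the statistic is a simplification for exposition; this is not the rofanova package's API.

    import numpy as np

    def huber_mean(x, c=1.345, n_iter=20):
        # Pointwise Huber M-estimate of location via IRLS; x is (n_curves, n_grid).
        mu = np.median(x, axis=0)
        scale = np.median(np.abs(x - mu), axis=0) / 0.6745 + 1e-12
        for _ in range(n_iter):
            r = (x - mu) / scale
            w = np.minimum(1.0, c / np.maximum(np.abs(r), 1e-12))  # Huber weights
            mu = (w * x).sum(axis=0) / w.sum(axis=0)
        return mu

    def rofanova_like_test(curves, labels, n_perm=999, seed=None):
        # Permutation p-value for equality of robust functional group means.
        rng = np.random.default_rng(seed)
        def stat(lab):
            means = np.array([huber_mean(curves[lab == g]) for g in np.unique(lab)])
            grand = means.mean(axis=0)
            return float(((means - grand) ** 2).sum())
        t_obs = stat(labels)
        t_perm = np.array([stat(rng.permutation(labels)) for _ in range(n_perm)])
        return (1 + np.sum(t_perm >= t_obs)) / (1 + n_perm)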

Related content

App extension points introduced in iOS 8 for interaction between apps and between apps and the system:
  • Today (iOS and OS X): widgets for the Today view of Notification Center
  • Share (iOS and OS X): post content to web services or share content with others
  • Actions (iOS and OS X): app extensions to view or manipulate content inside another app
  • Photo Editing (iOS): edit a photo or video within Apple's Photos app using extensions from third-party apps
  • Finder Sync (OS X): remote file storage in the Finder with support for Finder content annotation
  • Storage Provider (iOS): an interface between files inside an app and other apps on a user's device
  • Custom Keyboard (iOS): system-wide alternative keyboards


The manuscript discusses how to incorporate random effects into quantile regression models for clustered data, with a focus on settings with many but small clusters. The paper has three contributions: (i) documenting that existing methods may lead to severely biased estimators of the fixed effects parameters; (ii) proposing a new two-step estimation methodology, in which predictions of the random effects are first computed by a pseudo-likelihood approach (the LQMM method) and then used as offsets in standard quantile regression; (iii) proposing a novel bootstrap sampling procedure to reduce the bias of the two-step estimator and compute confidence intervals. The proposed estimation and associated inference are assessed numerically through rigorous simulation studies and applied to an AIDS Clinical Trials Group (ACTG) study.
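The two-step logic can be sketched as follows. Note that the first step below crudely approximates the random-effect predictions by cluster medians of pooled residuals, a stand-in for the paper's LQMM pseudo-likelihood step, and `two_step_quantreg` is a hypothetical name, not an implementation of the paper's method.

    import numpy as np
    import statsmodels.api as sm

    def two_step_quantreg(y, X, cluster, q=0.5):
        X1 = sm.add_constant(X)
        pooled = sm.QuantReg(y, X1).fit(q=q)   # pooled fit, ignores clustering
        resid = y - pooled.predict(X1)
        # Step 1 (stand-in for LQMM predictions): cluster-specific offsets.
        offsets = np.zeros(len(y))
        for g in np.unique(cluster):
            offsets[cluster == g] = np.median(resid[cluster == g])
        # Step 2: quantile regression with the offsets held fixed; QuantReg has
        # no offset argument, so the offset is subtracted from the response.
        return sm.QuantReg(y - offsets, X1).fit(q=q)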

Many studies have reported associations between later-life cognition and socioeconomic position in childhood, young adulthood, and mid-life. However, the vast majority of these studies are unable to quantify how these associations vary over time and with respect to several demographic factors. Varying coefficient (VC) models, which treat the covariate effects in a linear model as nonparametric functions of additional effect modifiers, offer an appealing way to overcome these limitations. Unfortunately, state-of-the-art VC modeling methods require computationally prohibitive parameter tuning or make restrictive assumptions about the functional form of the covariate effects. In response, we propose VCBART, which estimates the covariate effects in a VC model using Bayesian Additive Regression Trees. With simple default hyperparameter settings, VCBART outperforms existing methods in terms of covariate effect estimation and prediction. Using VCBART, we predict the cognitive trajectories of 4,167 subjects from the Health and Retirement Study using multiple measures of socioeconomic position and physical health. We find that socioeconomic position in childhood and young adulthood has small effects that do not vary with age. In contrast, the effects of measures of mid-life physical health tend to vary with respect to age, race, and marital status. An R package implementing VCBART is available at //github.com/skdeshpande91/VCBART.
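In standard notation (the abstract does not spell this out), the varying coefficient model being estimated has the form $$ y_{it} = \beta_0(\mathbf{z}_{it}) + \sum_{j=1}^{p} \beta_j(\mathbf{z}_{it})\, x_{itj} + \varepsilon_{it}, $$ where the $\mathbf{z}_{it}$ are effect modifiers (e.g., age, race, marital status) and VCBART approximates each unknown function $\beta_j(\cdot)$ by a sum of regression trees.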

We present a new method for face recognition from digital images acquired under varying illumination conditions. The method is based on mathematical modeling of local gradient distributions using the Radon Cumulative Distribution Transform (R-CDT). We demonstrate that lighting variations cause certain types of deformations of local image gradient distributions which, when expressed in the R-CDT domain, can be modeled as a subspace. Face recognition is then performed using a nearest subspace in the R-CDT domain of local gradient distributions. Experimental results demonstrate that the proposed method outperforms other alternatives in several face recognition tasks with challenging illumination conditions. Python code implementing the proposed method is available as part of the software package PyTransKit.
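The nearest-subspace step can be sketched as below; the R-CDT feature extraction itself is assumed to be done elsewhere (e.g., with PyTransKit), and the function names and subspace dimension are illustrative choices, not the package's API.

    import numpy as np

    def fit_subspaces(features, labels, dim=8):
        # Orthonormal basis (left singular vectors) of each class's feature matrix.
        bases = {}
        for c in np.unique(labels):
            Xc = features[labels == c].T               # (d, n_c), columns are samples
            U, _, _ = np.linalg.svd(Xc, full_matrices=False)
            bases[c] = U[:, :dim]
        return bases

    def predict(bases, x):
        # Assign x to the class whose subspace leaves the smallest residual.
        def residual(U):
            return np.linalg.norm(x - U @ (U.T @ x))
        return min(bases, key=lambda c: residual(bases[c]))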

Despite increasing access to functional data, effective methods for flexibly estimating the underlying functional trend are still scarce. We therefore develop a functional version of trend filtering for estimating the trend of functional data indexed by time or located on a general graph, extending conventional trend filtering, a powerful nonparametric trend estimation technique for scalar data. We formulate the new trend filtering by introducing penalty terms based on the $L_2$-norm of the differences of adjacent trend functions. We develop an efficient iterative algorithm for optimizing the objective function obtained by orthonormal basis expansion. Furthermore, we introduce additional penalty terms to eliminate redundant basis functions, which leads to automatic adaptation of the number of basis functions. The tuning parameter in the proposed method is selected via cross-validation. We demonstrate the proposed method through simulation studies and applications to real-world datasets.
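One plausible way to write the objective described above (the precise form in the paper may differ, e.g., in the order of differencing) is $$ \min_{f_1,\dots,f_T}\; \sum_{t=1}^{T} \big\| y_t - f_t \big\|_{L_2}^2 \;+\; \lambda \sum_{(s,t)\in E} \big\| f_s - f_t \big\|_{L_2}, $$ where the $y_t$ are the observed functions, $E$ is the edge set of the index graph (consecutive pairs for a time index), and the unsquared $L_2$ penalty encourages adjacent trend functions to fuse, in the spirit of scalar trend filtering.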

There has been significant attention given to developing data-driven methods for tailoring patient care based on individual patient characteristics. Dynamic treatment regimes formalize this through a sequence of decision rules that map patient information to a suggested treatment. The data for estimating and evaluating treatment regimes are ideally gathered through Sequential Multiple Assignment Randomized Trials (SMARTs), though longitudinal observational studies are commonly used due to the potentially prohibitive cost of conducting a SMART. Such trials are typically sized for simple comparisons of fixed treatment sequences and, in the case of observational studies, a priori sample size calculations are often not performed at all. We develop sample size procedures for the estimation of dynamic treatment regimes from observational studies. Our approach uses pilot data to ensure that a study will have sufficient power for comparing the value of the optimal regime, i.e., the expected outcome if all patients in the population were treated by following the optimal regime, with a known comparison mean. Our approach also ensures that the value of the estimated optimal treatment regime is within an a priori specified range of the value of the true optimal regime with high probability. We examine the performance of the proposed procedure with a simulation study and use it to size a study for reducing depressive symptoms using data from electronic health records.
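As a simplified stand-in for the pilot-data-based procedure (not the paper's actual algorithm), a one-sided one-sample power calculation for comparing the value of the optimal regime with a known mean $\mu_0$ looks like this:

    from scipy.stats import norm

    def sample_size(sigma_pilot, delta, alpha=0.05, power=0.8):
        # n such that a one-sided z-test of the value against mu0 detects a
        # difference of delta with the requested power; sigma_pilot is the
        # pilot-data estimate of the per-observation standard deviation of
        # the value estimator.
        z = norm.ppf(1 - alpha) + norm.ppf(power)
        return int((z * sigma_pilot / delta) ** 2) + 1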

We give a review of recent ANOVA-like procedures for testing group differences based on data in a metric space and present a new such procedure. Our statistic is based on the classic Levene's test for detecting differences in dispersion. It uses only pairwise distances of data points and can be computed quickly and precisely in situations where the computation of barycenters ("generalized means") in the data space is slow, possible only by approximation, or even infeasible. We show the asymptotic normality of our test statistic and present simulation studies for spatial point pattern data, in which we compare the various procedures in a one-way ANOVA setting. As an application, we perform a two-way ANOVA on a data set of bubbles in a mineral flotation process.
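A schematic version of a Levene-type dispersion statistic that uses only a precomputed pairwise distance matrix might look like the following; the paper's actual statistic and its asymptotic calibration differ in detail.

    import numpy as np
    from itertools import combinations

    def within_group_distances(D, idx):
        # All pairwise distances among the observations in idx
        # (each group needs at least two observations).
        return np.array([D[i, j] for i, j in combinations(idx, 2)])

    def levene_like_stat(D, labels):
        # Compare mean within-group dispersion across groups, Levene-style.
        groups = [np.flatnonzero(labels == g) for g in np.unique(labels)]
        disp = [within_group_distances(D, idx) for idx in groups]
        grand = np.concatenate(disp).mean()
        return sum(len(d) * (d.mean() - grand) ** 2 for d in disp)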

In this paper we consider a class of unfitted finite element methods for scalar elliptic problems. These so-called CutFEM methods use standard finite element spaces on a fixed unfitted triangulation combined with the Nitsche technique and a ghost penalty stabilization. As a model problem we consider the application of such a method to the Poisson interface problem. We introduce and analyze a new class of preconditioners that is based on a subspace decomposition approach. The unfitted finite element space is split into two subspaces, where one subspace is the standard finite element space associated to the background mesh and the second subspace is spanned by all cut basis functions corresponding to nodes on the cut elements. We will show that this splitting is stable, uniformly in the discretization parameter and in the location of the interface in the triangulation. Based on this we introduce an efficient preconditioner that is uniformly spectrally equivalent to the stiffness matrix. Using a similar splitting, it is shown that the same preconditioning approach can also be applied to a fictitious domain CutFEM discretization of the Poisson equation. Results of numerical experiments are included that illustrate optimality of such preconditioners for the Poisson interface problem and the Poisson fictitious domain problem.
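Schematically, in generic subspace-correction notation (not necessarily the paper's), the splitting and the resulting additive preconditioner read $$ V_h = V_h^{\mathrm{std}} \oplus V_h^{\Gamma}, \qquad C^{-1} = R_{\mathrm{std}}^{T}\, A_{\mathrm{std}}^{-1}\, R_{\mathrm{std}} \;+\; R_{\Gamma}^{T}\, A_{\Gamma}^{-1}\, R_{\Gamma}, $$ where $V_h^{\Gamma}$ is spanned by the basis functions associated with nodes on cut elements, $R_{\mathrm{std}}$ and $R_{\Gamma}$ are the restrictions onto the two subspaces, and the uniform stability of the splitting is what makes the preconditioner uniformly spectrally equivalent to the stiffness matrix.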

Ensemble methods based on subsampling, such as random forests, are popular in applications due to their high predictive accuracy. Existing literature views a random forest prediction as an infinite-order incomplete U-statistic to quantify its uncertainty. However, these methods focus on a small subsampling size of each tree, which is theoretically valid but practically limited. This paper develops an unbiased variance estimator based on incomplete U-statistics, which allows the tree size to be comparable with the overall sample size, making statistical inference possible in a broader range of real applications. Simulation results demonstrate that our estimators enjoy lower bias and more accurate confidence interval coverage without additional computational costs. We also propose a local smoothing procedure to reduce the variation of our estimator, which shows improved numerical performance when the number of trees is relatively small. Further, we investigate the ratio consistency of our proposed variance estimator under specific scenarios. In particular, we develop a new "double U-statistic" formulation to analyze the Hoeffding decomposition of the estimator's variance.
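For an incomplete U-statistic built from $N$ subsamples (trees) of size $s$ drawn from $n$ observations, a common first-order variance approximation in this literature, with $\zeta_1$ and $\zeta_s$ the usual Hoeffding variance components, is $$ \operatorname{Var}\big(U_{n,s,N}\big) \;\approx\; \frac{s^2}{n}\,\zeta_1 \;+\; \frac{\zeta_s}{N}; $$ the regime targeted here, where $s$ is comparable with $n$, is precisely where such small-$s$ approximations and the estimators built on them break down.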

Dyadic data is often encountered when quantities of interest are associated with the edges of a network. As such it plays an important role in statistics, econometrics and many other data science disciplines. We consider the problem of uniformly estimating a dyadic Lebesgue density function, focusing on nonparametric kernel-based estimators taking the form of dyadic empirical processes. Our main contributions include the minimax-optimal uniform convergence rate of the dyadic kernel density estimator, along with strong approximation results for the associated standardized and Studentized $t$-processes. A consistent variance estimator enables the construction of valid and feasible uniform confidence bands for the unknown density function. A crucial feature of dyadic distributions is that they may be "degenerate" at certain points in the support of the data, a property making our analysis somewhat delicate. Nonetheless our methods for uniform inference remain robust to the potential presence of such points. For implementation purposes, we discuss procedures based on positive semi-definite covariance estimators, mean squared error optimal bandwidth selectors and robust bias-correction techniques. We illustrate the empirical finite-sample performance of our methods both in simulations and with real-world data. Our technical results concerning strong approximations and maximal inequalities are of potential independent interest.
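In standard notation, the dyadic kernel density estimator discussed above takes the form $$ \hat f(w) \;=\; \frac{2}{n(n-1)} \sum_{1 \le i < j \le n} \frac{1}{h}\, K\!\left( \frac{W_{ij} - w}{h} \right), $$ where $W_{ij}$ is the observation attached to the pair of nodes $i$ and $j$, $K$ is a kernel, and $h$ a bandwidth; the U-statistic structure of the double sum is what drives both the dyadic empirical-process theory and the possible degeneracy at certain points of the support.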

In this paper, we propose a conceptually simple and geometrically interpretable objective function, i.e. additive margin Softmax (AM-Softmax), for deep face verification. In general, the face verification task can be viewed as a metric learning problem, so learning large-margin face features whose intra-class variation is small and inter-class difference is large is of great importance in order to achieve good performance. Recently, Large-margin Softmax and Angular Softmax have been proposed to incorporate the angular margin in a multiplicative manner. In this work, we introduce a novel additive angular margin for the Softmax loss, which is intuitively appealing and more interpretable than the existing works. We also emphasize and discuss the importance of feature normalization in the paper. Most importantly, our experiments on LFW BLUFR and MegaFace show that our additive margin softmax loss consistently performs better than the current state-of-the-art methods using the same network architecture and training dataset. Our code has also been made available at //github.com/happynear/AMSoftmax
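Following the paper's formulation, the loss scales the cosine logits by $s$ after subtracting the additive margin $m$ from the target-class cosine. The PyTorch re-implementation below is a sketch written for this summary, not the authors' released code.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AMSoftmaxLoss(nn.Module):
        def __init__(self, in_features, n_classes, s=30.0, m=0.35):
            super().__init__()
            self.W = nn.Parameter(torch.randn(in_features, n_classes))
            self.s, self.m = s, m

        def forward(self, x, target):
            # Cosine similarity between L2-normalized features and class weights.
            cos = F.normalize(x, dim=1) @ F.normalize(self.W, dim=0)
            # Subtract the additive margin from the target-class cosine only.
            onehot = F.one_hot(target, cos.size(1)).to(cos.dtype)
            logits = self.s * (cos - self.m * onehot)
            return F.cross_entropy(logits, target)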
