
Automated variable selection is widely applied in statistical model development. Algorithms such as forward, backward, or stepwise selection are available in statistical software packages like R and SAS. Many researchers have criticized the use of these algorithms because the resulting models are not based on theory and tend to be unstable. Furthermore, simulation studies have shown that they often select incorrect variables due to random effects, which makes these model-building strategies unreliable. In this article, a comprehensive stepwise selection algorithm tailored to logistic regression is proposed. It uses multiple criteria in variable selection instead of relying on a single measure, such as a $p$-value or Akaike's information criterion, which ensures robustness and soundness of the final outcome. The result of the selection process need not be unambiguous: the algorithm may select multiple models that can be considered statistically equivalent. A simulation study demonstrates the superiority of the proposed variable selection method over available alternatives.
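
As a concrete illustration of combining criteria, below is a minimal Python sketch of a forward stepwise loop for logistic regression that admits a variable only when two measures agree (a Wald $p$-value threshold and an AIC improvement). The paper's actual criteria and entry/exit rules may differ, and the `forward_stepwise` helper is hypothetical.

```python
import numpy as np
import statsmodels.api as sm

def forward_stepwise(X, y, p_enter=0.05):
    """Forward selection for logistic regression that requires BOTH a
    significant Wald p-value AND an AIC improvement before adding a variable.
    (Illustrative criteria only; the paper combines its own set of measures.)
    X: pandas DataFrame of candidate predictors; y: binary response."""
    selected, remaining = [], list(X.columns)
    current_aic = sm.Logit(y, np.ones((len(y), 1))).fit(disp=0).aic
    while remaining:
        best = None
        for var in remaining:
            model = sm.Logit(y, sm.add_constant(X[selected + [var]])).fit(disp=0)
            if model.pvalues[var] < p_enter and model.aic < current_aic:
                if best is None or model.aic < best[1]:
                    best = (var, model.aic)
        if best is None:          # no candidate satisfies all criteria: stop
            break
        selected.append(best[0])
        remaining.remove(best[0])
        current_aic = best[1]
    return selected
```

Swapping in or adding further criteria (e.g., BIC or a likelihood-ratio test) only changes the inner condition, which is the sense in which multiple criteria harden the selection.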

Related Content

A useful capability is that of classifying some agent's behavior using data from a sequence, or trace, of sensor measurements. The sensor selection problem involves choosing a subset of available sensors to ensure that, when generated, observation traces will contain enough information to determine whether the agent's activities match some pattern. Generalizing prior work, this paper studies a formulation in which multiple behavioral itineraries may be supplied, with sensors selected to distinguish between behaviors. This allows one to pose fine-grained questions, e.g., to position the agent's activity on a spectrum. In addition, with multiple itineraries, one can also ask about choices of sensors under which some behavior is always plausibly concealed by (or mistaken for) another. Using sensor ambiguity to limit the acquisition of knowledge is a strong privacy guarantee, a form of guarantee which some earlier work examined under formulations distinct from our inter-itinerary conflation approach. By concretely formulating privacy requirements for sensor selection, this paper connects both lines of work in a novel fashion: privacy, where the information acquired is bounded from above, and behavior verification, where sensor choices are bounded from below. We examine the worst-case computational complexity that results from both types of bounds, proving that upper bounds are more challenging under standard computational complexity assumptions. The problem is intractable in general, but we introduce an approach to solving it that can exploit interrelationships between constraints, and we identify opportunities for optimizations. Case studies are presented to demonstrate the usefulness and scalability of the proposed solution, and to assess the impact of the optimizations.
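
To make the two kinds of bounds concrete, here is a toy Python sketch under the simplifying assumption that behaviors are event sequences and a sensor choice simply projects away unobserved events; the paper's itinerary formalism is richer, and `project` and `distinguishes` are hypothetical helpers.

```python
def project(trace, sensors):
    """Observation sequence a given sensor choice would produce: events not
    covered by any chosen sensor are simply unobserved (dropped)."""
    return [e for e in trace if e in sensors]

def distinguishes(sensors, trace_a, trace_b):
    """True if the chosen sensors yield different observation traces for the
    two behaviors, i.e., the behaviors are not conflated under this choice."""
    return project(trace_a, sensors) != project(trace_b, sensors)

# Toy example: with only the "door" sensor, both routes look identical
# (a privacy-style upper bound holds); adding "lab" separates them
# (a verification-style lower bound holds).
route_1 = ["door", "hall", "lab", "door"]
route_2 = ["door", "hall", "office", "door"]
print(distinguishes({"door"}, route_1, route_2))         # False -> conflated
print(distinguishes({"door", "lab"}, route_1, route_2))  # True  -> distinguished
```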

Diagnostic classification models (DCMs) are psychometric models designed to classify examinees according to their proficiency or non-proficiency on specified latent characteristics. These models are well suited to providing diagnostic and actionable feedback to support formative assessment efforts. Several DCMs have been developed and applied in different settings. This study proposes a DCM with a functional form similar to the 1-parameter logistic item response theory (IRT) model. Using data from a large-scale mathematics education research study, we demonstrate that the proposed DCM has measurement properties akin to the Rasch and 1-parameter logistic IRT models, including test score sufficiency, item-free and person-free measurement, and invariant item and person ordering. We discuss the implications and limitations of these developments, as well as directions for future research.
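
For reference, the 1-parameter logistic IRT model gives the probability of a correct response as $P(X=1 \mid \theta, b) = 1/(1+e^{-(\theta-b)})$. Below is a minimal sketch of a DCM analogue with the same functional form, where a binary mastery indicator replaces the continuous ability; this is an illustrative form only, not the paper's exact parameterization.

```python
import numpy as np

def p_correct_1pl(theta, b):
    """Standard 1PL/Rasch item response function: ability theta, difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def p_correct_dcm(alpha, b, delta):
    """Hypothetical DCM analogue with a 1PL-like form: the continuous ability
    is replaced by a binary mastery indicator alpha scaled by delta, so masters
    and non-masters sit at two fixed points of the same logistic curve.
    (Illustrative only; see the paper for the exact model.)"""
    return 1.0 / (1.0 + np.exp(-(delta * alpha - b)))

print(p_correct_dcm(alpha=1, b=0.5, delta=2.0))  # probability for a master
print(p_correct_dcm(alpha=0, b=0.5, delta=2.0))  # probability for a non-master
```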

Compared to mean regression and quantile regression, the literature on modal regression is very sparse. We propose a unified framework for Bayesian modal regression based on a family of unimodal distributions indexed by the mode, along with other parameters that allow for flexible shapes and tail behaviors. Following prior elicitation, we carry out regression analysis of simulated data and datasets from several real-life applications. Besides drawing inference for covariate effects that are easy to interpret, we consider prediction and model selection under the proposed Bayesian modal regression framework. Evidence from these analyses suggests that the proposed inference procedures are very robust to outliers, enabling one to discover interesting covariate effects missed by mean or median regression and to construct much tighter prediction intervals than those from mean or median regression. Computer programs for implementing the proposed Bayesian modal regression are available at //github.com/rh8liuqy/Bayesian_modal_regression.
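
As a sketch of the idea, one convenient unimodal family is the Gumbel distribution, whose mode equals its location parameter, so regressing the location on covariates models the conditional mode directly. The toy log-posterior below assumes Gumbel errors and vague normal priors; the paper's family of distributions is more general.

```python
import numpy as np
from scipy.stats import gumbel_r

def log_posterior(params, X, y):
    """Log posterior for a toy Bayesian modal regression. Because the Gumbel
    mode equals its location, X @ beta is the conditional mode of y given X.
    (One convenient unimodal family; the paper's framework is more general.)"""
    beta, log_scale = params[:-1], params[-1]
    scale = np.exp(log_scale)
    mode = X @ beta                                      # conditional mode
    loglik = gumbel_r.logpdf(y, loc=mode, scale=scale).sum()
    logprior = -0.5 * (beta @ beta) / 100.0 - 0.5 * log_scale**2 / 100.0
    return loglik + logprior
```

Feeding this function to a generic MCMC sampler (or maximizing it for a modal-regression point estimate) yields inference on covariate effects for the conditional mode.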

Robust iterative methods for solving large sparse systems of linear algebraic equations often require careful tuning of the corresponding parameters. To improve performance on the problem of interest, problem-specific parameter tuning is required, which in practice can be a time-consuming and tedious task. This paper proposes an optimization algorithm for tuning the numerical method parameters. The algorithm combines an evolution strategy with a pre-trained neural network used to filter individuals when constructing the new generation. The proposed coupling of the two optimization approaches integrates the adaptivity of the evolution strategy with a priori knowledge encoded in the neural network. Using the neural network as a preliminary filter significantly relaxes the prediction accuracy requirements and allows the pre-trained network to be reused across a wide range of linear systems. A detailed evaluation of the algorithm's efficiency is performed for a set of model linear systems, including ones from the SuiteSparse Matrix Collection and systems from turbulent flow simulations. The results show that the pre-trained neural network can be effectively reused to optimize parameters for various linear systems, and a significant speedup can be achieved at the cost of about 100 trial solves. The hybrid evolution strategy reduces the calculation time by more than a factor of 6 for the black-box matrices from the SuiteSparse Matrix Collection and by a factor of 1.4-2 for the sequence of linear systems arising when modeling turbulent flows. This yields a speedup of up to 1.8 times for the turbulent flow simulations performed in the paper.
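
A minimal sketch of the hybrid loop, assuming a (mu, lambda)-style evolution strategy and a surrogate network that only needs to rank candidates; the paper's exact strategy and filtering rule may differ, and `evaluate`/`surrogate` are placeholders for the trial solve and the pre-trained network.

```python
import numpy as np

def es_with_nn_filter(evaluate, surrogate, dim, pop=32, keep=8, sigma=0.3, iters=20):
    """Evolution strategy in which a pre-trained surrogate network pre-screens
    offspring: only candidates the surrogate predicts to be promising are
    passed to the expensive trial solve.
    evaluate  -- true cost (e.g., solver runtime for a parameter vector)
    surrogate -- cheap predicted cost from the pre-trained network
    (Illustrative loop; the paper's strategy and filtering rule may differ.)"""
    mean = np.zeros(dim)
    for _ in range(iters):
        offspring = mean + sigma * np.random.randn(pop, dim)
        # NN filter: keep only candidates with the best predicted cost. Only
        # the ranking matters, which keeps the accuracy requirement weak.
        scores = np.array([surrogate(x) for x in offspring])
        survivors = offspring[np.argsort(scores)[:keep]]
        # Expensive trial solves happen only for the filtered survivors.
        true_costs = np.array([evaluate(x) for x in survivors])
        elite = survivors[np.argsort(true_costs)[: keep // 2]]
        mean = elite.mean(axis=0)
    return mean
```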

Exponential-family random graph models (ERGMs) have emerged as an important framework for modeling social networks for a wide variety of relational types. ERGMs for valued networks are less well developed than their unvalued counterparts and pose particular computational challenges. Network data with edge values on the non-negative integers (count-valued networks) are an important such case, with examples ranging from the magnitude of migration and trade flows between places to the frequency of interactions and encounters between individuals. Here, we propose an efficient parallelizable subsampled maximum pseudo-likelihood estimation (MPLE) scheme for count-valued ERGMs and compare its performance with existing Contrastive Divergence (CD) and Monte Carlo Maximum Likelihood Estimation (MCMLE) approaches via a simulation study based on migration flow networks in two U.S. states. Our results suggest that edge value variance is a key factor in method performance, while network size mainly influences their relative merits in computational time. For small-variance networks, all methods perform well in point estimation, while CD greatly overestimates uncertainties and MPLE underestimates them for dependence terms; all methods estimate quickly for small networks, but CD and subsampled multi-core MPLE provide speed advantages as network size increases. For large-variance networks, both MPLE and MCMLE offer high-quality estimates of coefficients and their uncertainty, but MPLE is significantly faster than MCMLE; MPLE is also a better seeding method for MCMLE than CD, as the latter makes MCMLE more prone to convergence failure.
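
For dyad-independent terms with a Poisson reference measure, the pseudo-likelihood reduces to a Poisson regression of edge counts on change statistics; the sketch below exploits that reduction and shows only the subsampling step, omitting the multi-core dispatch and dependence-term handling of the actual scheme. The helper name and precomputed inputs are assumptions.

```python
import numpy as np
import statsmodels.api as sm

def subsampled_mple(edge_values, change_stats, frac=0.2, seed=0):
    """Sketch of subsampled MPLE for a count-valued ERGM: treat each dyad's
    count as Poisson with log-rate linear in its change statistics, and fit
    the resulting Poisson GLM on a random subsample of dyads.
    edge_values  -- 1D array of dyad counts
    change_stats -- (n_dyads, n_terms) array of precomputed change statistics
    (Illustrative reduction; the paper's estimator handles dependence terms
    and runs the subsample fits in parallel.)"""
    rng = np.random.default_rng(seed)
    n = len(edge_values)
    idx = rng.choice(n, size=int(frac * n), replace=False)
    model = sm.GLM(edge_values[idx], change_stats[idx],
                   family=sm.families.Poisson())
    return model.fit().params
```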

Existing error-bounded lossy compression techniques control the pointwise error during compression to guarantee the integrity of the decompressed data. However, they typically do not explicitly preserve the topological features in the data. When performing post hoc analysis of decompressed data using topological methods, it is desirable to preserve topology in the compression process so as to obtain topologically consistent and correct scientific insights. In this paper, we introduce TopoSZ, an error-bounded lossy compression method that preserves the topological features in 2D and 3D scalar fields. Specifically, we aim to preserve the types and locations of local extrema as well as the level set relations among critical points captured by contour trees in the decompressed data. The main idea is to derive topological constraints from the contour-tree-induced segmentation of the data domain and to incorporate these constraints into a customized error-controlled quantization strategy from the classic SZ compressor. Our method allows users to control both the pointwise error and the loss of topological features during the compression process with a global error bound and a persistence threshold.
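
A minimal sketch of the constraint-handling idea, assuming linear-scaling quantization against a global error bound $\xi$ and per-point intervals derived from the segmentation; TopoSZ's actual predictor-based codec and constraint resolution are richer than this.

```python
import numpy as np

def quantize_with_constraints(values, xi, lower, upper):
    """Topology-aware error-bounded quantization sketch: each value is first
    quantized to within the global error bound xi (rounding to a multiple of
    2*xi gives pointwise error <= xi), then clamped to a per-point interval
    [lower, upper] derived from the contour-tree-induced segmentation, so the
    decompressed value cannot cross a critical level and alter the topology.
    Assumes each original value lies inside its own interval.
    (Illustrative only; TopoSZ's actual codec is richer.)"""
    codes = np.round(values / (2 * xi))   # integer quantization codes
    recon = codes * 2 * xi                # decompressed approximation
    return np.clip(recon, lower, upper)   # enforce topological interval
```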

Neural point estimators are neural networks that map data to parameter point estimates. They are fast, likelihood-free and, due to their amortised nature, amenable to fast bootstrap-based uncertainty quantification. In this paper, we aim to raise awareness among statisticians of this relatively new inferential tool and to facilitate its adoption by providing user-friendly open-source software. We also give attention to the ubiquitous problem of making inference from replicated data, which we address in the neural setting using permutation-invariant neural networks. Through extensive simulation studies we show that these neural point estimators can quickly and, in a Bayes sense, optimally estimate parameters in weakly identified and highly parameterised models. We demonstrate their applicability through an analysis of extreme sea-surface temperature in the Red Sea where, after training, we obtain parameter estimates and bootstrap-based confidence intervals from hundreds of spatial fields in a fraction of a second.
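
Below is a minimal PyTorch sketch of such a permutation-invariant (DeepSets-style) estimator, where mean pooling over replicates guarantees invariance to their order; the dimensions and layer sizes are placeholders, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class PermutationInvariantEstimator(nn.Module):
    """DeepSets-style neural point estimator for m i.i.d. replicates:
    psi embeds each replicate, mean pooling across replicates makes the
    output invariant to their order, and phi maps the pooled summary to
    the parameter estimate. (A generic sketch of the architecture class.)"""
    def __init__(self, data_dim=1, hidden=64, n_params=2):
        super().__init__()
        self.psi = nn.Sequential(nn.Linear(data_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU())
        self.phi = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_params))

    def forward(self, x):                  # x: (batch, m_replicates, data_dim)
        return self.phi(self.psi(x).mean(dim=1))

est = PermutationInvariantEstimator()
theta_hat = est(torch.randn(8, 30, 1))    # 8 datasets, 30 replicates each
```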

Supervised classification algorithms are used to solve a growing number of real-life problems around the globe. Their performance is strictly connected with the quality of the labels used in training. Unfortunately, acquiring good-quality annotations for many tasks is infeasible or too expensive to be done in practice. To tackle this challenge, active learning algorithms are commonly employed to select only the most relevant data for labeling. However, this is possible only when the quality and quantity of labels acquired from experts are sufficient. Unfortunately, in many applications a trade-off is necessary between having individual samples annotated by multiple annotators to increase label quality and annotating new samples to increase the total number of labeled instances. In this paper, we address the issue of faulty data annotations in the context of active learning. In particular, we propose two novel annotation unification algorithms that utilize unlabeled parts of the sample space. The proposed methods require little to no intersection between the samples annotated by different experts. Our experiments on four public datasets demonstrate the robustness and superiority of the proposed methods, in both the estimation of annotator reliability and the assignment of actual labels, over state-of-the-art algorithms and simple majority voting.
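
For contrast with the proposed methods, the sketch below implements a simple reliability-weighted voting baseline in the Dawid-Skene spirit, alternately refining consensus labels and annotator reliabilities. It assumes every sample has at least one label; the paper's unification algorithms, which additionally exploit unlabeled regions of the sample space, are not reproduced here.

```python
def weighted_label_fusion(votes, n_iters=10):
    """Iterative reliability-weighted voting (a baseline, not the paper's
    unification algorithms): start from uniform weights, then alternately
    (1) recompute consensus weighting each vote by annotator reliability and
    (2) score each annotator by agreement with the current consensus.
    votes[i] is a dict mapping annotator id -> label for sample i."""
    annotators = sorted({a for v in votes for a in v})
    weight = {a: 1.0 for a in annotators}
    consensus = []
    for _ in range(n_iters):
        consensus = []
        for v in votes:
            tally = {}
            for a, lab in v.items():
                tally[lab] = tally.get(lab, 0.0) + weight[a]
            consensus.append(max(tally, key=tally.get))
        for a in annotators:
            pairs = [(consensus[i], v[a]) for i, v in enumerate(votes) if a in v]
            weight[a] = sum(c == l for c, l in pairs) / max(len(pairs), 1)
    return consensus, weight
```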

Transfer learning aims to improve the performance of target learners on target domains by transferring the knowledge contained in different but related source domains. In this way, the dependence on large amounts of target domain data for constructing target learners can be reduced. Due to its wide application prospects, transfer learning has become a popular and promising area in machine learning. Although there are already some valuable and impressive surveys on transfer learning, they introduce approaches in a relatively isolated way and lack the recent advances in the field. Given the rapid expansion of the transfer learning area, it is both necessary and challenging to comprehensively review the relevant studies. This survey attempts to connect and systematize existing transfer learning research and to summarize and interpret its mechanisms and strategies in a comprehensive way, which may help readers better understand the current research status and ideas. Unlike previous surveys, this survey paper reviews over forty representative transfer learning approaches from the perspectives of data and model. The applications of transfer learning are also briefly introduced. To show the performance of different transfer learning models, twenty representative models are evaluated on three different datasets, i.e., Amazon Reviews, Reuters-21578, and Office-31. The experimental results demonstrate the importance of selecting appropriate transfer learning models for different applications in practice.

With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\beta^{n})/(1-\beta)$, where $n$ is the number of samples and $\beta \in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.
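
The resulting per-class weights follow directly from the stated formula; below is a minimal sketch, where normalizing the weights to sum to the number of classes is one common convention.

```python
import numpy as np

def class_balanced_weights(samples_per_class, beta=0.999):
    """Class-balanced weights from the effective number of samples
    E_n = (1 - beta**n) / (1 - beta): each class is weighted by 1 / E_n,
    then the weights are normalized to sum to the number of classes."""
    n = np.asarray(samples_per_class, dtype=float)
    effective_num = (1.0 - np.power(beta, n)) / (1.0 - beta)
    weights = 1.0 / effective_num
    return weights * len(n) / weights.sum()

# A long-tailed toy: the rare class receives a much larger weight.
print(class_balanced_weights([5000, 500, 50], beta=0.999))
```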
