
In this article, we develop methods for sample size and power calculations in four-level intervention studies when intervention assignment is carried out at any level, with a particular focus on cluster randomized trials (CRTs). CRTs involving four levels are becoming popular in health care research, where the effects are measured, for example, from evaluations (level 1) within participants (level 2) in divisions (level 3) that are nested in clusters (level 4). In such multi-level CRTs, we consider three types of intraclass correlations between different evaluations to account for such clustering: that of the same participant, that of different participants from the same division, and that of different participants from different divisions in the same cluster. Assuming arbitrary link and variance functions, with the proposed correlation structure as the true correlation structure, closed-form sample size formulas for randomization carried out at any level (including individually randomized trials within a four-level clustered structure) are derived based on the generalized estimating equations approach using the model-based variance and using the sandwich variance with an independence working correlation matrix. We demonstrate that empirical power corresponds well with that predicted by the proposed method for as few as 8 clusters, when data are analyzed using the matrix-adjusted estimating equations for the correlation parameters with a bias-corrected sandwich variance estimator, under both balanced and unbalanced designs.
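
As a deliberately simplified illustration of how the three intraclass correlations enter a sample size calculation, the sketch below computes the number of clusters per arm for a continuous outcome with cluster-level randomization, using the familiar design-effect inflation. The identity-link restriction, the balanced cluster sizes, and the specific design-effect form are assumptions of this sketch, not the paper's general GEE-based formulas.

```python
import math
from scipy.stats import norm

def clusters_per_arm(delta, sigma, m, n_div, d, icc1, icc2, icc3,
                     alpha=0.05, power=0.80):
    """Clusters per arm for comparing two means under cluster-level
    randomization in a four-level design (simplified, hypothetical case).

    m      : evaluations per participant (level 1 per level 2)
    n_div  : participants per division   (level 2 per level 3)
    d      : divisions per cluster       (level 3 per level 4)
    icc1-3 : ICCs for the same participant, different participants in the
             same division, and different divisions in the same cluster.
    """
    # Design effect implied by the nested-exchangeable correlation structure.
    de = 1 + (m - 1) * icc1 + m * (n_div - 1) * icc2 + m * n_div * (d - 1) * icc3
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    k = 2 * (z ** 2) * (sigma ** 2) * de / (delta ** 2 * m * n_div * d)
    return math.ceil(k)
```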

Related content

There is increasing interest in applying verification tools to programs that contain bitvector operations. SMT solvers, which serve as a foundation for these tools, have accordingly increased their support for bitvector reasoning through bit-blasting and linear arithmetic approximations. Still, verification tools remain limited in termination and LTL verification of bitvector programs. In this work, we show that a similar linear arithmetic approximation of bitvector operations can be performed at the source level through program transformations. Specifically, we introduce new paths that over-approximate bitvector operations with linear conditions/constraints, increasing branching but allowing us to better exploit the well-developed integer reasoning and interpolation of verification tools. We present two sets of rules, namely rewriting rules and weakening rules, that can be implemented as a bitwise-branching program transformation; the added branching paths help verification tools handle a wider range of verification tasks over bitvector programs. Our experiments show that this exploitation of integer reasoning and interpolation enables competitive termination verification of bitvector programs and leads to the first effective technique for LTL verification of bitvector programs.
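
To make the over-approximation idea concrete, the snippet below randomly tests two illustrative linear constraints that soundly over-approximate unsigned bitwise AND and OR; these particular inequalities are examples chosen here, not the paper's full sets of rewriting and weakening rules.

```python
import random

# Illustrative constraints only: a bitwise-branching transformation could
# replace `r = x & y` (or `r = x | y`) by a nondeterministic r restricted by
# the corresponding linear inequalities, so an integer-reasoning back end
# never needs to interpret the bitwise operator itself.
def and_overapprox_holds(x, y):
    r = x & y
    return 0 <= r <= min(x, y)

def or_overapprox_holds(x, y):
    r = x | y
    return max(x, y) <= r <= x + y

for _ in range(100_000):
    x, y = random.randrange(2**32), random.randrange(2**32)
    assert and_overapprox_holds(x, y) and or_overapprox_holds(x, y)
```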

Let $\pi\in \Pi(\mu,\nu)$ be a coupling between two probability measures $\mu$ and $\nu$ on a Polish space. In this article we propose and study a class of nonparametric measures of association between $\mu$ and $\nu$, which we call Wasserstein correlation coefficients. These coefficients are based on the Wasserstein distance between $\nu$ and the disintegration $\pi_{x_1}$ of $\pi$ with respect to the first coordinate. We also establish basic statistical properties of this new class of measures: we develop a statistical theory for strongly consistent estimators and determine their convergence rate in the case of compactly supported measures $\mu$ and $\nu$. Throughout our analysis we make use of the so-called adapted/bicausal Wasserstein distance; in particular, we rely on results established in [Backhoff, Bartl, Beiglb\"ock, Wiesel. Estimating processes in adapted Wasserstein distance. 2020]. Our approach applies to probability laws on general Polish spaces.
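
For orientation, one plausible form of such a coefficient, consistent with the description above but not necessarily the paper's exact definition, normalizes the average Wasserstein distance between the disintegrations and $\nu$:
\[
\rho_W(\pi) \;=\; \frac{\displaystyle\int W_p\!\left(\pi_{x_1},\, \nu\right)\, \mu(dx_1)}{\displaystyle\sup_{\tilde{\pi}\in\Pi(\mu,\nu)} \int W_p\!\left(\tilde{\pi}_{x_1},\, \nu\right)\, \mu(dx_1)}.
\]
With this normalization the coefficient vanishes when $\pi=\mu\otimes\nu$ (since then $\pi_{x_1}=\nu$ for $\mu$-almost every $x_1$) and takes values in $[0,1]$ for compactly supported measures.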

Functional data analysis has attracted considerable interest and is facing new challenges, one of which is data that increasingly arrive in a streaming manner. In this article we develop an online nonparametric method to dynamically update the estimates of mean and covariance functions for functional data. The kernel-type estimates can be decomposed into two sufficient statistics depending on the data-driven bandwidths. We propose to approximate the future optimal bandwidths by a sequence of dynamically changing candidates and to combine the corresponding statistics across blocks to form the updated estimates. The proposed online method is easy to compute from the stored sufficient statistics and the current data block. We derive the asymptotic normality and, more importantly, relative-efficiency lower bounds for the online estimates of the mean and covariance functions. This provides insight into the relationship between estimation accuracy and computational cost, as driven by the length of the candidate bandwidth sequence. Simulations and real data examples are provided to support these findings.
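
A minimal sketch of the sufficient-statistic idea for the mean function is given below; it uses a single fixed bandwidth and a Gaussian kernel, whereas the proposed method maintains the statistics for a whole sequence of dynamically changing candidate bandwidths and combines them across blocks.

```python
import numpy as np

class OnlineMeanFunction:
    """Streaming Nadaraya-Watson estimate of the mean function on a grid.

    Sketch only: one fixed bandwidth.  The abstract's method keeps sufficient
    statistics for a sequence of candidate bandwidths, which is omitted here.
    """
    def __init__(self, grid, bandwidth):
        self.grid = np.asarray(grid)          # evaluation points t
        self.h = bandwidth
        self.s0 = np.zeros_like(self.grid)    # running sum of kernel weights
        self.s1 = np.zeros_like(self.grid)    # running sum of weighted responses

    def update(self, times, values):
        """Absorb one data block of (observation time, value) pairs."""
        times, values = np.asarray(times), np.asarray(values)
        u = (self.grid[:, None] - times[None, :]) / self.h
        k = np.exp(-0.5 * u**2) / (np.sqrt(2 * np.pi) * self.h)  # Gaussian kernel
        self.s0 += k.sum(axis=1)
        self.s1 += k @ values

    def estimate(self):
        return self.s1 / np.maximum(self.s0, 1e-12)
```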

A fundamental algorithm for data analytics at the edge of wireless networks is distributed principal component analysis (DPCA), which finds the most important information embedded in a distributed high-dimensional dataset by distributed computation of a reduced-dimension data subspace, called principal components (PCs). In this paper, to support one-shot DPCA in wireless systems, we propose a framework of analog MIMO transmission featuring uncoded analog transmission of local PCs for estimating the global PCs. To cope with channel distortion and noise, two maximum-likelihood (global) PC estimators are presented, corresponding to the cases with and without receive channel state information (CSI). The first design, termed the coherent PC estimator, is derived by solving a Procrustes problem and takes the form of regularized channel inversion, where the regularization attempts to alleviate the effects of both channel noise and data noise. The second, termed the blind PC estimator, is designed based on the subspace channel-rotation-invariance property and computes a centroid of the received local PCs on a Grassmann manifold. Using manifold-perturbation theory, tight bounds on the mean square subspace distance (MSSD) of both estimators are derived for performance evaluation. The results reveal simple scaling laws of MSSD with respect to device population, data and channel signal-to-noise ratios (SNRs), and array sizes. More importantly, both estimators are found to obey identical scaling laws, suggesting that CSI is dispensable for accelerating DPCA. Simulation results validate the derived results and demonstrate the promising latency performance of the proposed analog MIMO transmission.
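
The blind estimator can be illustrated by the sketch below, which uses the chordal (projection-Frobenius) mean on the Grassmann manifold as one concrete way to compute a centroid of the received local PCs; the paper's estimator exploits the same rotation invariance, but its exact construction and the perturbation analysis are not reproduced here.

```python
import numpy as np

def blind_pc_estimate(received_pcs, k):
    """Centroid of received local PC subspaces on the Grassmann manifold.

    Sketch using the chordal mean: orthonormalize each received matrix
    (discarding any channel rotation/scaling), average the projection
    matrices, and take the dominant k-dimensional eigenspace.
    """
    d = received_pcs[0].shape[0]
    p_bar = np.zeros((d, d))
    for y in received_pcs:
        q, _ = np.linalg.qr(np.asarray(y))   # orthonormal basis of the received subspace
        q = q[:, :k]
        p_bar += q @ q.T                     # projection matrix onto that subspace
    p_bar /= len(received_pcs)
    _, vecs = np.linalg.eigh(p_bar)          # eigenvalues in ascending order
    return vecs[:, -k:]                      # dominant k-dimensional eigenspace
```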

Cluster analysis requires many decisions: the clustering method and the implied reference model, the number of clusters, and, often, several hyper-parameters and algorithmic tunings. In practice, one produces several partitions, and a final one is chosen based on validation or selection criteria. There exists an abundance of validation methods that, implicitly or explicitly, assume a certain clustering notion. Moreover, they are often restricted to operating on partitions obtained from a specific method. In this paper, we focus on groups that can be well separated by quadratic or linear boundaries. The reference cluster concept is defined through the quadratic discriminant score function and parameters describing the clusters' size, center, and scatter. We develop two cluster-quality criteria called quadratic scores. We show that these criteria are consistent with groups generated from a general class of elliptically symmetric distributions. The quest for groups of this type is common in applications. The connection with likelihood theory for mixture models and model-based clustering is investigated. Based on bootstrap resampling of the quadratic scores, we propose a selection rule that allows choosing among many clustering solutions. The proposed method has the distinctive advantage that it can compare partitions that cannot be compared with other state-of-the-art methods. Extensive numerical experiments and the analysis of real data show that, even if some competing methods turn out to be superior in some setups, the proposed methodology achieves better overall performance.
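
As a rough illustration of the reference cluster concept, the sketch below evaluates a hard-assignment quadratic score of a given partition from each cluster's proportion, center, and scatter; the two criteria developed in the paper and the bootstrap-based selection rule are not reproduced here.

```python
import numpy as np

def quadratic_hard_score(X, labels):
    """Average quadratic discriminant score of points under their assigned
    clusters, with cluster proportion, center, and scatter estimated from the
    partition itself.  Illustrative sketch only.
    """
    X = np.asarray(X)
    labels = np.asarray(labels)
    n, total = len(X), 0.0
    for g in np.unique(labels):
        xg = X[labels == g]
        pi = len(xg) / n                                  # cluster proportion
        mu = xg.mean(axis=0)                              # cluster center
        sigma = np.cov(xg, rowvar=False)                  # cluster scatter
        inv = np.linalg.inv(sigma)
        _, logdet = np.linalg.slogdet(sigma)
        diff = xg - mu
        quad = np.einsum('ij,jk,ik->i', diff, inv, diff)  # Mahalanobis terms
        total += np.sum(np.log(pi) - 0.5 * logdet - 0.5 * quad)
    return total / n
```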

Large health care data repositories such as electronic health records (EHR) open new opportunities to derive individualized treatment strategies that improve disease outcomes. We study the problem of estimating sequential treatment rules tailored to patients' individual characteristics, often referred to as dynamic treatment regimes (DTRs). We seek the optimal DTR that maximizes the discontinuous value function through direct maximization of a Fisher consistent surrogate loss function. We show that a large class of concave surrogates fails to be Fisher consistent, in contrast to the classic setting of binary classification. We further characterize a non-concave family of Fisher consistent smooth surrogate functions, which can be optimized with gradient descent using off-the-shelf machine learning algorithms. Compared to the existing direct search approach under the support vector machine framework (Zhao et al., 2015), our proposed DTR estimation via surrogate loss optimization (DTRESLO) method is more computationally scalable to large sample sizes and allows for a broader functional class for the predictor effects. We establish theoretical properties for our proposed DTR estimator and obtain a sharp upper bound on the regret of our DTRESLO method. Finite sample performance of the proposed estimator is evaluated through extensive simulations and an application to deriving an optimal DTR for the treatment of sepsis using EHR data from patients admitted to intensive care units.
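
The flavor of surrogate-based value maximization can be conveyed by a single-stage sketch in which the indicator of concordance between treatment and decision rule is replaced by a smooth sigmoid, giving a non-concave objective amenable to gradient ascent; this generic stand-in is not the paper's Fisher consistent surrogate family for sequential decisions.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def smoothed_value(beta, X, A, Y, weights):
    """Single-stage illustration: the indicator 1{A = sign(f(X))} in an
    inverse-probability-weighted value estimate is replaced by a smooth
    sigmoid.  Hypothetical sketch, not the paper's exact surrogate.

    X: covariates (n, p); A: treatments in {-1, +1}; Y: outcomes;
    weights: inverse propensity weights.
    """
    f = X @ beta                          # linear decision function
    concordance = sigmoid(A * f)          # smooth stand-in for 1{A f(X) > 0}
    return np.mean(weights * Y * concordance)
```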

We propose a generalized CUR (GCUR) decomposition for matrix pairs $(A, B)$. Given matrices $A$ and $B$ with the same number of columns, such a decomposition provides low-rank approximations of both matrices simultaneously, in terms of some of their rows and columns. We obtain the indices for selecting the subset of rows and columns of the original matrices using the discrete empirical interpolation method (DEIM) on the generalized singular vectors. When $B$ is square and nonsingular, there are close connections between the GCUR of $(A, B)$ and the DEIM-induced CUR of $AB^{-1}$. When $B$ is the identity, the GCUR decomposition of $A$ coincides with the DEIM-induced CUR decomposition of $A$. We also show a similar connection between the GCUR of $(A, B)$ and the CUR of $AB^+$ for a nonsquare but full-rank matrix $B$, where $B^+$ denotes the Moore--Penrose pseudoinverse of $B$. While a CUR decomposition acts on one data set, a GCUR factorization jointly decomposes two data sets. The algorithm may be suitable for applications where one is interested in extracting the most discriminative features of one data set relative to another. In numerical experiments, we demonstrate the advantages of the new method over the standard CUR approximation for recovering data perturbed with colored noise and for subgroup discovery.
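
The DEIM index-selection step that drives the GCUR construction can be sketched as follows; the routine expects the (generalized) singular vectors as input, so the GSVD computation itself and the assembly of the final factors are omitted.

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM index selection: one row index per column of U.

    Applied to the column/row generalized singular vectors of (A, B), these
    indices select which columns/rows of the original matrices enter the
    GCUR factors.
    """
    n, k = U.shape
    idx = np.zeros(k, dtype=int)
    idx[0] = np.argmax(np.abs(U[:, 0]))
    for j in range(1, k):
        # Interpolate column j at the indices chosen so far ...
        c = np.linalg.solve(U[np.ix_(idx[:j], range(j))], U[idx[:j], j])
        # ... and pick the row where the interpolation residual is largest.
        r = U[:, j] - U[:, :j] @ c
        idx[j] = np.argmax(np.abs(r))
    return idx
```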

The problem of Approximate Nearest Neighbor (ANN) search is fundamental in computer science and has benefited from significant progress in the past couple of decades. However, most work has been devoted to point sets, whereas complex shapes have not been sufficiently treated. Here, we focus on distance functions between discretized curves in Euclidean space: they appear in a wide range of applications, from road segments to time-series in general dimension. For $\ell_p$-products of Euclidean metrics, for any $p$, we design simple and efficient data structures for ANN, based on randomized projections, which are of independent interest. They serve to solve proximity problems under a notion of distance between discretized curves, which generalizes both the discrete Fr\'echet and Dynamic Time Warping distances. These are the most popular and practical approaches to comparing such curves. We offer the first data structures and query algorithms for ANN with arbitrarily good approximation factor, at the expense of increased space usage and preprocessing time over existing methods. Query time complexity is comparable to or significantly improved upon by our algorithms, which are especially efficient when the length of the curves is bounded.
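
As a bare-bones illustration of the projection step, the sketch below applies a Gaussian random projection to discretized curves of a common length, treated as concatenated vectors; the actual data structures reduce Fr\'echet- and DTW-type proximity queries to such $\ell_p$-product problems, which this sketch does not attempt.

```python
import numpy as np

rng = np.random.default_rng(0)

def project_curves(curves, target_dim):
    """Gaussian random projection of discretized curves.

    Assumes all curves have the same number m of points in R^d and flattens
    each into a vector in R^{m*d}; a Johnson-Lindenstrauss-style projection
    then approximately preserves pairwise Euclidean distances.
    """
    X = np.stack([np.asarray(c).ravel() for c in curves])      # shape (n, m*d)
    G = rng.normal(size=(X.shape[1], target_dim)) / np.sqrt(target_dim)
    return X @ G
```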

The Everyday Sexism Project documents everyday examples of sexism reported by volunteer contributors from all around the world. It collected 100,000 entries in 13+ languages within the first 3 years of its existence. The content of reports in various languages submitted to Everyday Sexism is a valuable source of crowdsourced information with great potential for feminist and gender studies. In this paper, we take a computational approach to analyzing the content of reports. We use topic-modelling techniques to extract emerging topics and concepts from the reports, and to map the semantic relations between those topics. The resulting picture closely resembles and adds to that arrived at through qualitative analysis, showing that this form of topic modelling could be useful for sifting through datasets that have not previously been subject to any analysis. More precisely, we produce a map of topics for two different resolutions of our topic model and discuss the connections between the identified topics. In the low-resolution picture, for instance, we find Public space/Street, Online, Work related/Office, Transport, School, Media harassment, and Domestic abuse. Among these, the strongest connection is between Public space/Street harassment and Domestic abuse and sexism in personal relationships. The strength of the relationships between topics illustrates the fluid and ubiquitous nature of sexism, with no single experience being unrelated to another.
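
A generic topic-modelling pipeline of the kind described is sketched below with gensim; it is illustrative only, since the authors' preprocessing, model choice, and the two resolutions are not specified here beyond the num_topics parameter.

```python
from gensim import corpora, models

def fit_topics(reports, num_topics):
    """Fit an LDA topic model to a list of report strings (sketch only).

    Running this at two values of num_topics would give two 'resolutions'
    in the sense discussed in the abstract.
    """
    texts = [report.lower().split() for report in reports]     # naive tokenization
    dictionary = corpora.Dictionary(texts)
    corpus = [dictionary.doc2bow(t) for t in texts]             # bag-of-words corpus
    lda = models.LdaModel(corpus=corpus, id2word=dictionary,
                          num_topics=num_topics, passes=10, random_state=0)
    return lda.show_topics(num_topics=num_topics, num_words=8)
```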

Many problems in signal processing reduce to nonparametric function estimation. We propose a new methodology, piecewise convex fitting (PCF), and give a two-stage adaptive estimate. In the first stage, the number and locations of the change points are estimated using strong smoothing. In the second stage, a constrained smoothing spline fit is performed with the smoothing level chosen to minimize the MSE. The imposed constraint is that a single change point occurs in a region about each empirical change point of the first-stage estimate. This constraint is equivalent to requiring that the third derivative of the second-stage estimate has a single sign in a small neighborhood about each first-stage change point. We sketch how PCF may be applied to signal recovery, instantaneous frequency estimation, surface reconstruction, image segmentation, spectral estimation, and multivariate adaptive regression.
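
A sketch of the first stage is given below: the data are smoothed strongly and the sign changes of the third derivative of the smooth fit are taken as the estimated change points. The constrained second-stage smoothing-spline fit is omitted, and the smoothing level here is a user-supplied placeholder rather than an MSE-minimizing choice.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def first_stage_change_points(x, y, smoothing):
    """Stage-one estimate of change-point locations (illustrative sketch).

    Fits a heavily smoothed quintic spline and returns the x-locations where
    the sign of its third derivative flips on the observation grid.
    """
    x, y = np.asarray(x), np.asarray(y)
    spline = UnivariateSpline(x, y, k=5, s=smoothing)   # strong smoothing
    d3 = spline.derivative(3)(x)                        # third derivative on the grid
    flips = np.where(np.diff(np.sign(d3)) != 0)[0]      # indices of sign changes
    return x[flips]
```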
