
Integrating multiple observational studies to make unconfounded causal or descriptive comparisons of group potential outcomes in a large natural population is challenging. Moreover, retrospective cohorts, being convenience samples, are usually unrepresentative of the natural population of interest and have groups with unbalanced covariates. We propose a general covariate-balancing framework based on pseudo-populations that extends established weighting methods to the meta-analysis of multiple retrospective cohorts with multiple groups. Additionally, by maximizing the effective sample sizes of the cohorts, we propose a FLEXible, Optimized, and Realistic (FLEXOR) weighting method appropriate for integrative analyses. We develop new weighted estimators for unconfounded inferences on wide-ranging population-level features and estimands relevant to group comparisons of quantitative, categorical, or multivariate outcomes. The asymptotic properties of these estimators are examined, and accurate small-sample procedures are devised for quantifying estimation uncertainty. Through simulation studies and meta-analyses of TCGA datasets, we discover the differential biomarker patterns of the two major breast cancer subtypes in the United States and demonstrate the versatility and reliability of the proposed weighting strategy, especially for the FLEXOR pseudo-population.
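
To make the weighting recipe concrete, the following Python sketch builds a pseudo-population for multiple groups via generalized propensity scores and reports Kish's effective sample size (ESS), the quantity FLEXOR maximizes. The uniform tilting function h(x) = 1 and the simulated data are illustrative assumptions; the paper's specific FLEXOR tilt is not reproduced here.

```python
# Minimal sketch of pseudo-population balancing weights and effective
# sample size for multiple groups; illustrative, not the FLEXOR method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p, n_groups = 500, 4, 3
X = rng.normal(size=(n, p))                   # covariates
g = rng.integers(0, n_groups, size=n)         # group labels

# Generalized propensity scores e_k(x) = P(G = k | X = x).
e = LogisticRegression(max_iter=1000).fit(X, g).predict_proba(X)

# Pseudo-population weights w_i = h(x_i) / e_{g_i}(x_i); h(x) = 1 is an
# illustrative choice (standard IPW). FLEXOR instead chooses the tilt h
# to maximize the effective sample size.
w = 1.0 / e[np.arange(n), g]

# Kish's effective sample size: (sum w)^2 / sum(w^2).
ess = w.sum() ** 2 / (w ** 2).sum()
print(f"ESS = {ess:.1f} of n = {n}")

# Weighted comparison of group means for a simulated outcome y.
y = X @ rng.normal(size=p) + (g == 1) + rng.normal(size=n)
for k in range(n_groups):
    mask = g == k
    print(k, np.average(y[mask], weights=w[mask]))
```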

Related content

In rectangularly-pulsed orthogonal frequency division multiplexing (OFDM) systems, constant-amplitude (CA) sequences are desirable for constructing preamble/pilot waveforms that facilitate system parameter identification (SPI). Orthogonal CA sequences are generally preferred in SPI applications such as random-access channel identification. However, the number of conventional orthogonal CA sequences (e.g., Zadoff-Chu sequences) that can be adopted in cellular communication without causing sequence identification ambiguity is insufficient, and this shortage severely degrades performance in SPI tasks that require a large number of identification sequences. Moreover, rectangularly-pulsed OFDM preamble/pilot waveforms carrying conventional CA sequences suffer from large power spectral sidelobes and thus exhibit low spectral compactness. This paper is therefore motivated to develop several order-I CA sequence families that contain more orthogonal CA sequences while endowing the corresponding OFDM preamble/pilot waveforms with fast-decaying spectral sidelobes. Because they provide more orthogonal sequences, the developed order-I CA sequence families can enhance SPI performance when a large number of identification sequences is required over multipath channels with short-delay profiles, while composing spectrally compact OFDM preamble/pilot waveforms.
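
As background for the conventional family the paper improves on, the sketch below generates a Zadoff-Chu sequence and checks the two properties at stake: constant amplitude and orthogonality of its cyclic shifts (a consequence of the ideal periodic autocorrelation). The length and root are arbitrary illustrative choices; the paper's order-I constructions are not reproduced here.

```python
# Minimal sketch: Zadoff-Chu (ZC) constant-amplitude sequences and the
# orthogonality of their cyclic shifts.
import numpy as np

def zadoff_chu(N, u):
    """Root-u ZC sequence of odd length N (u coprime to N)."""
    n = np.arange(N)
    return np.exp(-1j * np.pi * u * n * (n + 1) / N)

N, u = 13, 1
x = zadoff_chu(N, u)
print("constant amplitude:", np.allclose(np.abs(x), 1.0))

# The N cyclic shifts of a single ZC root are mutually orthogonal.
shifts = np.stack([np.roll(x, s) for s in range(N)])
gram = shifts @ shifts.conj().T / N
print("orthogonal shifts:", np.allclose(gram, np.eye(N)))
```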

Blockchains require deterministic execution in order to reach consensus. This is often guaranteed in languages designed to write smart contracts, such as Solidity. Application-specific blockchains or "appchains" allow the blockchain application logic to be written using general-purpose programming languages, giving developers more flexibility but also additional responsibilities. In particular, developers must ensure that their blockchain application logic does not contain any sources of non-determinism. Any source of non-determinism may be a potential source of vulnerabilities. This paper focuses on the use of Static Application Security Testing (SAST) tools to detect such sources of non-determinism at development time. We focus on Cosmos, a prominent open-source project that lets developers build interconnected networks of application-specific blockchains. Cosmos provides a Software Development Kit (SDK) that allows these chains to be implemented in the Go programming language. We create a corpus of 11 representative Cosmos-based appchains to analyze for sources of non-determinism in Go. As part of our study, we identified cosmos-sdk-codeql, a set of CodeQL code analysis rules for Cosmos applications. We find that these rules generate many false positives and propose a refactored set of rules that more precisely detects sources of non-determinism only in code that runs as part of the blockchain logic. We demonstrate a significant increase in the precision of the rules, making the SAST tool more effective and hence potentially contributing to enhanced security for Cosmos-based blockchains.
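
The paper's analysis targets Go constructs such as map iteration; as a language-neutral illustration (Python is used for examples throughout this page), the sketch below shows the same class of bug, iteration order over a hash-based collection leaking into consensus-relevant state, together with the canonical-ordering fix that such SAST rules push developers toward.

```python
# Hypothetical appchain state derived from a hash-based collection.
# With PYTHONHASHSEED unset, set iteration order can differ between
# interpreter runs -- the Python analog of Go's randomized map
# iteration that cosmos-sdk-codeql targets.
import hashlib

accounts = {"alice", "bob", "carol"}

def state_digest_nondeterministic():
    # BAD: depends on hash-randomized iteration order, so two validators
    # executing the same block could derive different digests.
    return hashlib.sha256(",".join(accounts).encode()).hexdigest()

def state_digest_deterministic():
    # GOOD: impose a canonical order before deriving consensus state.
    return hashlib.sha256(",".join(sorted(accounts)).encode()).hexdigest()

print(state_digest_deterministic())
```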

This paper studies the spatial manifestations of order reduction that occur when time-stepping initial-boundary-value problems (IBVPs) with high-order Runge-Kutta methods. For such IBVPs, geometric structures arise that do not have an analog in ODE IVPs: boundary layers appear, induced by a mismatch between the approximation error in the interior and at the boundaries. To understand those boundary layers, an analysis of the modes of the numerical scheme is conducted, which explains under which circumstances boundary layers persist over many time steps. Based on this, two remedies to order reduction are studied: first, a new condition on the Butcher tableau, called weak stage order, that is compatible with diagonally implicit Runge-Kutta schemes; and second, the impact of modified boundary conditions on the boundary layer theory is analyzed.
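
The order-reduction phenomenon is easy to reproduce numerically. The sketch below applies a classical third-order, L-stable SDIRK scheme (stage order one) to the Prothero-Robinson problem y' = lam*(y - g(t)) + g'(t) with exact solution y = g, the standard stiff ODE surrogate for the behavior the paper studies in IBVPs; the observed convergence order drops from three to about two in the stiff regime.

```python
# Minimal sketch of order reduction on the Prothero-Robinson problem,
# using Alexander's 3-stage, L-stable, order-3 SDIRK (stage order 1).
import numpy as np

gamma = 0.435866521508459   # root of g^3 - 3g^2 + (3/2)g - 1/6 = 0
A = np.zeros((3, 3))
A[0, 0] = gamma
A[1, :2] = [(1 - gamma) / 2, gamma]
b = np.array([-(6*gamma**2 - 16*gamma + 1) / 4,
              (6*gamma**2 - 20*gamma + 5) / 4,
              gamma])
A[2, :] = b                  # stiffly accurate: last row equals b
c = A.sum(axis=1)

g, gp = np.sin, np.cos       # manufactured exact solution
lam = -1e6                   # stiff parameter

def error_at_T(h, T=1.0):
    y = g(0.0)
    for k in range(round(T / h)):
        t, F = k * h, np.zeros(3)
        for i in range(3):
            ti = t + c[i] * h
            rhs = y + h * (A[i, :i] @ F[:i]) \
                    + h * A[i, i] * (-lam * g(ti) + gp(ti))
            Yi = rhs / (1.0 - h * A[i, i] * lam)   # linear stage solve
            F[i] = lam * (Yi - g(ti)) + gp(ti)
        y += h * (b @ F)
    return abs(y - g(T))

errs = [error_at_T(1.0 / n) for n in (20, 40, 80, 160)]
print("observed orders:",                 # ~2 instead of classical 3
      [np.log2(e0 / e1) for e0, e1 in zip(errs, errs[1:])])
```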

When multiple models are considered in regression problems, the model averaging method can be used to weight and integrate the models. In the present study, we examine how the predictive performance of the estimator depends on the dimensionality of the explanatory variables when a generalization of the model averaging method is used in a linear model. We specifically consider the case of high-dimensional explanatory variables, with multiple linear models deployed for subsets of these variables, and derive the optimal weights that yield the best predictions. We also observe that the double-descent phenomenon occurs in the model averaging estimator. Furthermore, we obtain theoretical results by adapting methods such as the random forest to linear regression models. Finally, we verify the findings through numerical experiments.
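
As a concrete instance of the setup, the sketch below fits several linear submodels on random subsets of a high-dimensional feature vector and computes averaging weights that minimize held-out squared error. This is the generic stacking recipe on simulated data, not the paper's exact optimal-weight derivation.

```python
# Minimal sketch of model averaging over linear models fit on feature
# subsets, with weights chosen to minimize held-out prediction error.
import numpy as np

rng = np.random.default_rng(0)
n, p, M = 200, 120, 6                       # samples, features, submodels
beta = rng.normal(size=p) / np.sqrt(p)
X = rng.normal(size=(n, p))
y = X @ beta + 0.5 * rng.normal(size=n)

X_tr, y_tr = X[:120], y[:120]
X_va, y_va = X[120:], y[120:]
subsets = [rng.choice(p, size=30, replace=False) for _ in range(M)]

def fit_predict(cols):
    coef, *_ = np.linalg.lstsq(X_tr[:, cols], y_tr, rcond=None)
    return X_va[:, cols] @ coef

P = np.column_stack([fit_predict(s) for s in subsets])   # (n_va, M)

# Optimal averaging weights minimize ||y_va - P w||^2 (here fit and
# evaluated on the same split for brevity).
w, *_ = np.linalg.lstsq(P, y_va, rcond=None)
mse_avg = np.mean((y_va - P @ w) ** 2)
mse_single = min(np.mean((y_va - P[:, m]) ** 2) for m in range(M))
print(f"best single model MSE: {mse_single:.3f}")
print(f"averaged model MSE:    {mse_avg:.3f}")
```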

Unsupervised representation learning has recently helped automatic speech recognition (ASR) tackle tasks with limited labeled data. Following this, hardware limitations and application requirements raise the question of how to exploit large pre-trained models efficiently and reduce their complexity. In this work, we study a challenging low-resource conversational telephony speech corpus from the medical domain in Vietnamese and German. We show the benefits of using unsupervised techniques beyond simple fine-tuning of large pre-trained models, discuss how to adapt them to a practical telephony task including bandwidth transfer, and investigate different data conditions for pre-training and fine-tuning. We outperform the project baselines by 22% relative using pre-training techniques. Further gains of 29% are achieved by refinements of architecture and training, and 6% by adding 0.8 h of in-domain adaptation data.
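
For orientation, the sketch below shows the generic Hugging Face recipe this line of work builds on: fine-tuning a pre-trained wav2vec 2.0 model with its CTC head, after upsampling 8 kHz telephony audio to the 16 kHz rate the model was pre-trained on (the "bandwidth transfer" issue). The checkpoint name, dummy audio, and transcript are placeholders, not the paper's system.

```python
# Minimal sketch of CTC fine-tuning of a pre-trained wav2vec 2.0 model
# on telephony audio; illustrative, not the paper's exact setup.
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_name = "facebook/wav2vec2-base-960h"   # placeholder checkpoint
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name)

# Telephony audio is 8 kHz; the model expects 16 kHz wideband input.
waveform_8k = torch.randn(1, 8000)           # 1 s of dummy audio
waveform_16k = torchaudio.functional.resample(waveform_8k, 8000, 16000)

inputs = processor(waveform_16k.squeeze().numpy(),
                   sampling_rate=16000, return_tensors="pt")
labels = processor(text="HELLO WORLD", return_tensors="pt").input_ids

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
loss = model(inputs.input_values, labels=labels).loss   # CTC loss
loss.backward()
optimizer.step()
print(float(loss))
```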

As responsible AI gains importance in machine learning algorithms, properties such as fairness, adversarial robustness, and causality have received considerable attention in recent years. However, despite their individual significance, there remains a critical gap in simultaneously exploring and integrating these properties. In this paper, we propose a novel approach that examines the relationship between individual fairness, adversarial robustness, and structural causal models in heterogeneous data spaces, particularly when dealing with discrete sensitive attributes. We use structural causal models and sensitive attributes to create a fair metric and apply it to measure semantic similarity among individuals. By introducing a novel causal adversarial perturbation and applying adversarial training, we create a new regularizer that combines individual fairness, causality, and robustness in the classifier. Our method is evaluated on both real-world and synthetic datasets, demonstrating its effectiveness in achieving an accurate classifier that simultaneously exhibits fairness, adversarial robustness, and causal awareness.
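
To make the regularizer concrete, the sketch below performs one step of adversarial training in which the perturbation is projected off a sensitive direction, a crude stand-in for perturbing only along directions a fair metric treats as semantically equivalent. The projection matrix, model, and data are illustrative assumptions, not the paper's causal construction.

```python
# Minimal sketch of adversarial training with a fairness-constrained
# perturbation; illustrative, not the paper's method.
import torch
import torch.nn as nn

torch.manual_seed(0)
d = 5
model = nn.Sequential(nn.Linear(d, 16), nn.ReLU(), nn.Linear(16, 2))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Illustrative assumption: the last coordinate proxies the sensitive
# attribute; P projects perturbations onto its orthogonal complement.
s = torch.zeros(d); s[-1] = 1.0
P = torch.eye(d) - torch.outer(s, s)

x = torch.randn(32, d)
y = torch.randint(0, 2, (32,))
eps, lam = 0.1, 1.0

# 1) Build a fair-metric-constrained adversarial example (one step).
x_adv = x.clone().requires_grad_(True)
loss_fn(model(x_adv), y).backward()
g = x_adv.grad @ P                           # drop sensitive direction
delta = eps * g / (g.norm(dim=1, keepdim=True) + 1e-12)
x_pert = (x + delta).detach()

# 2) Train on clean loss plus the robustness/fairness regularizer.
model.zero_grad()
total = loss_fn(model(x), y) + lam * loss_fn(model(x_pert), y)
total.backward()
opt.step()
print(float(total))
```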

Traditional geometric registration-based estimation methods exploit the CAD model only implicitly, which makes them dependent on observation quality and vulnerable to occlusion. To address this problem, this paper proposes a bidirectional correspondence prediction network with a point-wise attention-aware mechanism. This network not only requires the model points to predict the correspondence but also explicitly models the geometric similarities between observations and the model prior. Our key insight is that the correlations between each model point and scene point provide essential information for learning point-pair matches. To further tackle the correlation noise caused by feature distribution divergence, we design a simple but effective pseudo-siamese network to improve feature homogeneity. Experimental results on the public LineMOD, YCB-Video, and Occ-LineMOD datasets show that the proposed method outperforms other state-of-the-art methods under the same evaluation criteria. Its robustness in estimating poses is greatly improved, especially in environments with severe occlusion.
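
The core mechanism, explicit point-wise correlation between model-prior features and scene features used to predict soft correspondences in both directions, can be sketched in a few lines of PyTorch. All shapes and features below are illustrative placeholders, not the paper's network.

```python
# Minimal sketch of bidirectional point-wise cross-attention between
# CAD-model points and scene points for soft correspondence prediction.
import torch

B, Nm, Ns, C = 2, 128, 256, 64        # batch, model pts, scene pts, feat dim
feat_model = torch.randn(B, Nm, C)    # per-point features of CAD model
feat_scene = torch.randn(B, Ns, C)    # per-point features of observation
model_xyz = torch.randn(B, Nm, 3)
scene_xyz = torch.randn(B, Ns, 3)

# Correlation between every model point and every scene point.
attn_m2s = torch.softmax(
    feat_model @ feat_scene.transpose(1, 2) / C**0.5, dim=-1)
attn_s2m = torch.softmax(
    feat_scene @ feat_model.transpose(1, 2) / C**0.5, dim=-1)

# Model -> scene: each model point attends to scene points to predict
# its corresponding scene coordinate, and vice versa for scene -> model.
pred_m2s = attn_m2s @ scene_xyz       # (B, Nm, 3) soft correspondences
pred_s2m = attn_s2m @ model_xyz       # (B, Ns, 3)
print(pred_m2s.shape, pred_s2m.shape)
```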

The existence of representative datasets is a prerequisite for many successful artificial intelligence and machine learning models. However, the subsequent application of these models often involves scenarios that are inadequately represented in the data used for training. The reasons for this are manifold and range from time and cost constraints to ethical considerations. As a consequence, the reliable use of these models, especially in safety-critical applications, is a major challenge. Leveraging additional, already existing sources of knowledge is key to overcoming the limitations of purely data-driven approaches and, eventually, to increasing the generalization capability of these models. Furthermore, predictions that conform with knowledge are crucial for making trustworthy and safe decisions even in underrepresented scenarios. This work provides an overview of existing techniques and methods in the literature that combine data-based models with existing knowledge. The identified approaches are structured according to the categories of integration, extraction, and conformity. Special attention is given to applications in the field of autonomous driving.

In contrast to batch learning, where all training data is available at once, continual learning represents a family of methods that accumulate knowledge and learn continuously from data arriving in sequential order. Mirroring the human ability to learn, fuse, and accumulate new knowledge arriving at different time steps, continual learning is considered to have high practical significance and has been studied in various artificial intelligence tasks. In this paper, we present a comprehensive review of the recent progress of continual learning in computer vision. In particular, the works are grouped by their representative techniques, including regularization, knowledge distillation, memory, generative replay, parameter isolation, and combinations of these techniques. For each category, both its characteristics and its applications in computer vision are presented. At the end of this overview, we discuss several subareas where continuous knowledge accumulation is potentially helpful but continual learning has not yet been well studied.
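
As a concrete taste of the regularization category, the sketch below implements a penalty in the spirit of Elastic Weight Consolidation (EWC): parameters important for a previous task are discouraged from drifting, with importance given by a Fisher-information estimate. The constant importance values here are an illustrative shortcut.

```python
# Minimal sketch of an EWC-style regularization penalty for continual
# learning; illustrative importance values, not a full implementation.
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
loss_fn = nn.CrossEntropyLoss()

# After finishing task A, store a parameter snapshot and importances.
theta_old = {n: p.detach().clone() for n, p in model.named_parameters()}
fisher = {n: torch.ones_like(p) for n, p in model.named_parameters()}
# (Real EWC estimates `fisher` from squared gradients on task A data.)

def ewc_penalty(model, lam=100.0):
    return lam / 2 * sum(
        (fisher[n] * (p - theta_old[n]) ** 2).sum()
        for n, p in model.named_parameters())

# A training step on task B keeps knowledge of task A via the penalty.
x, y = torch.randn(16, 10), torch.randint(0, 2, (16,))
loss = loss_fn(model(x), y) + ewc_penalty(model)
loss.backward()
print(float(loss))
```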

We introduce a multi-task setup for identifying and classifying entities, relations, and coreference clusters in scientific articles. We create SciERC, a dataset that includes annotations for all three tasks, and develop a unified framework called Scientific Information Extractor (SciIE) with shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. We further show that the framework supports construction of a scientific knowledge graph, which we use to analyze information in the scientific literature.
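
The key architectural idea, a single span encoder shared by the entity, relation, and coreference heads, can be sketched as follows. The encoder outputs, span encoding (endpoint concatenation plus a pooled interior), and head sizes are illustrative assumptions, not the released SciIE code.

```python
# Minimal sketch of shared span representations feeding three task heads.
import torch
import torch.nn as nn

T, H = 50, 128                       # tokens, hidden size
token_repr = torch.randn(1, T, H)    # e.g. BiLSTM/encoder outputs

def span_repr(i, j):
    """Endpoint concatenation + mean pool: a common span encoding."""
    inside = token_repr[:, i:j + 1].mean(dim=1)
    return torch.cat([token_repr[:, i], token_repr[:, j], inside], dim=-1)

spans = [(0, 2), (5, 5), (7, 10)]
S = torch.cat([span_repr(i, j) for i, j in spans])    # (n_spans, 3H)

entity_head = nn.Linear(3 * H, 6)            # 6 entity types
rel_head = nn.Linear(6 * H, 7)               # 7 relation types on pairs
coref_head = nn.Linear(6 * H, 1)             # antecedent score on pairs

ent_logits = entity_head(S)                  # all heads share S
pair = torch.cat([S[0], S[2]]).unsqueeze(0)  # one candidate span pair
rel_logits = rel_head(pair)
coref_score = coref_head(pair)
print(ent_logits.shape, rel_logits.shape, coref_score.shape)
```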
