
Multivariate matching algorithms "pair" similar study units in an observational study to remove potential bias and confounding effects caused by the absence of randomization. In one-to-one multivariate matching, a large number of pairs to be matched means both rich information from a large sample and a heavy matching workload; an efficient matching algorithm that needs only limited auxiliary matching knowledge, provided through a "training" set of units paired by domain experts, is therefore practically appealing. We propose a novel one-to-one matching algorithm based on a quadratic score function $S_{\beta}(x_i,x_j)= \beta^T (x_i-x_j)(x_i-x_j)^T \beta$. The weights $\beta$, which can be interpreted as a variable importance measure, are designed to minimize the score between paired training units while maximizing the score between unpaired training units. Further, in the typical but intricate case where the training set is much smaller than the set of unpaired units, we propose a \underline{s}emisupervised \underline{c}ompanion \underline{o}ne-\underline{t}o-\underline{o}ne \underline{m}atching \underline{a}lgorithm (SCOTOMA) that makes the best use of the unpaired units. The proposed weight estimator is proved to be consistent when the true matching criterion is indeed the quadratic score function. When the model assumptions are violated, we demonstrate through a series of simulations that the proposed algorithm still outperforms several popular competing matching algorithms. We apply the proposed algorithm to a real-world study of the effect of in-person schooling on community COVID-19 transmission rates for policy-making purposes.
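As a concrete illustration, note that $S_{\beta}(x_i,x_j)=(\beta^T(x_i-x_j))^2$, the squared projection of the covariate difference onto $\beta$. The sketch below (NumPy/SciPy; the unit-norm constraint, trade-off weight, and optimizer are our own illustrative assumptions, not the paper's estimator) fits weights that make paired training units score low and unpaired units score high.

```python
import numpy as np
from scipy.optimize import minimize

def score(beta, xi, xj):
    """Quadratic score S_beta(x_i, x_j) = beta^T (x_i - x_j)(x_i - x_j)^T beta,
    i.e. the squared projection of the covariate difference onto beta."""
    d = xi - xj
    return float(np.dot(beta, d) ** 2)

def objective(beta, paired, unpaired, lam=1.0):
    """Toy surrogate of the training criterion: small scores for paired
    training units, large scores for unpaired ones (lam is an assumed
    trade-off weight, not taken from the paper)."""
    within = sum(score(beta, xi, xj) for xi, xj in paired)
    between = sum(score(beta, xi, xj) for xi, xj in unpaired)
    return within - lam * between

rng = np.random.default_rng(0)
p = 5
paired = [(x, x + 0.1 * rng.standard_normal(p)) for x in rng.standard_normal((10, p))]
unpaired = [(rng.standard_normal(p), rng.standard_normal(p)) for _ in range(10)]

# Constrain ||beta|| = 1 so the objective cannot be driven to -infinity.
cons = {"type": "eq", "fun": lambda b: np.dot(b, b) - 1.0}
res = minimize(objective, x0=np.ones(p) / np.sqrt(p),
               args=(paired, unpaired), constraints=cons)
print("estimated weights:", res.x)
```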

Related content

Instruments comprising items with ordered response categories are ubiquitous for studying many constructs of interest. However, such an item response format may lead to items whose response categories are endorsed infrequently or not at all. In maximum likelihood estimation, this results in non-existent estimates for the thresholds. This work focuses on a Bayesian estimation approach to counter this issue: the question changes from whether an estimate exists to how to effectively construct threshold priors. The proposed prior specification reconceptualizes the threshold prior as a prior on the probability of each response category, a metric that is easier to manipulate while maintaining the necessary ordering constraints on the thresholds. The resulting induced prior is more communicable, and we demonstrate statistical efficiency comparable to that of existing threshold priors. Evidence is provided using a simulated data set, a Monte Carlo simulation study, and an example multi-group item-factor model analysis. All analyses demonstrate that an at least relatively informative threshold prior is necessary to avoid inefficient posterior sampling and to increase confidence in the coverage rates of posterior credible intervals.
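A minimal sketch of the induced-prior idea, assuming a probit link and a Dirichlet prior on the category probabilities (both are illustrative choices, not necessarily the paper's specification): each draw of probabilities maps to automatically ordered thresholds.

```python
import numpy as np
from scipy import stats

def thresholds_from_probs(pi):
    """Map category probabilities to ordered probit thresholds:
    tau_k = Phi^{-1}(pi_1 + ... + pi_k), k = 1, ..., K-1."""
    cum = np.cumsum(pi)[:-1]          # drop the final cumulative value of 1.0
    return stats.norm.ppf(cum)

# Assumed prior on category probabilities (a Dirichlet is an illustrative choice);
# the prior it induces on the thresholds automatically respects the ordering.
rng = np.random.default_rng(1)
alpha = np.array([2.0, 2.0, 2.0, 2.0])   # four response categories
for pi in rng.dirichlet(alpha, size=5):
    print(np.round(thresholds_from_probs(pi), 3))
```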

This contribution introduces the idea of refinement patterns for the generation of optimal meshes in the context of the Finite Element Method. The main idea is to generate a library of possible patterns according to which elements can be refined and to use this library to inform an h-adaptive code on how to handle complex refinements in regions of interest. There are no restrictions on the type of elements that can be refined, and patterns can be generated for any element type. The main advantage of this approach is that it allows optimal meshes to be generated in a systematic way: even if a certain pattern is not yet available, it can easily be included through a simple text file with nodes and sub-elements. The contribution presents a detailed methodology for incorporating refinement patterns into h-adaptive Finite Element Method codes and demonstrates the effectiveness of the approach through mesh refinement for problems with complex geometries.
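A minimal sketch of how such a pattern library could be read from plain text files; the file layout and data structures below are illustrative assumptions, not the format used by the contribution.

```python
import numpy as np

def load_pattern(path):
    """Read a refinement pattern from a plain-text file.

    Assumed (illustrative) layout:
        NODES
        x y            # one reference-element coordinate per line
        SUBELEMENTS
        i j k          # node indices of each sub-element
    """
    nodes, subs, block = [], [], None
    with open(path) as f:
        for line in f:
            tok = line.split("#")[0].split()
            if not tok:
                continue
            if tok[0] in ("NODES", "SUBELEMENTS"):
                block = tok[0]
            elif block == "NODES":
                nodes.append([float(v) for v in tok])
            elif block == "SUBELEMENTS":
                subs.append([int(v) for v in tok])
    return np.array(nodes), np.array(subs, dtype=int)

# The library is then simply a dictionary keyed by element type and pattern name,
# which the h-adaptive driver consults when a flagged element must be split.
if __name__ == "__main__":
    import tempfile
    demo = "NODES\n0 0\n1 0\n0 1\n0.5 0.5\nSUBELEMENTS\n0 1 3\n0 3 2\n"
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write(demo)
    nodes, subs = load_pattern(f.name)
    print(nodes, subs, sep="\n")
```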

Recent papers initiated the study of a generalization of group testing in which the potentially contaminated sets are the members of a given hypergraph F=(V,E). This generalization finds application in contexts where contaminations can be conditioned by some kind of social or geographical clustering. The paper focuses on few-stage group testing algorithms, i.e., slightly adaptive algorithms in which tests are performed in stages and all tests performed in the same stage must be decided at the very beginning of the stage. In particular, the paper presents the first two-stage algorithm that uses o(d log |E|) tests for general hypergraphs with hyperedges of size at most d, and a three-stage algorithm that improves on the number of tests of the best known three-stage algorithm by a factor of d^{1/6}. These algorithms are special cases of an s-stage algorithm designed for an arbitrary positive integer s <= d. The design of this algorithm resorts to a new non-adaptive algorithm (one-stage algorithm), i.e., an algorithm in which all tests must be decided beforehand. Further, we derive a lower bound for non-adaptive group testing. For |E| sufficiently large, this lower bound is very close to the upper bound on the number of tests of the best non-adaptive group testing algorithm known in the literature, and it is the first lower bound that improves on the information-theoretic lower bound Omega(log |E|).
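For intuition only, the sketch below implements a naive two-stage scheme on a toy hypergraph: the first stage uses random non-adaptive pools decided up front, and the second stage individually tests every vertex still consistent with the stage-one outcomes. It illustrates the staging constraint, not the test-efficient algorithms of the paper.

```python
import itertools
import random

def test(pool, contaminated):
    """A group test is positive iff the pool intersects the contaminated set."""
    return bool(pool & contaminated)

def two_stage(V, E, contaminated, n_pools=20, seed=0):
    """Naive two-stage scheme (illustration only): random stage-one pools prune
    the candidate hyperedges, then stage two tests each remaining vertex."""
    rng = random.Random(seed)
    pools = [set(v for v in V if rng.random() < 0.5) for _ in range(n_pools)]
    outcomes = [test(p, contaminated) for p in pools]
    # keep hyperedges whose intersection pattern matches all stage-one outcomes
    candidates = [e for e in E
                  if all(test(p, set(e)) == o for p, o in zip(pools, outcomes))]
    stage2 = set(itertools.chain.from_iterable(candidates))
    positives = {v for v in stage2 if test({v}, contaminated)}
    return positives, len(pools) + len(stage2)

V = set(range(12))
E = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, 11), (0, 4, 8)]
found, n_tests = two_stage(V, E, contaminated={3, 4, 5})
print(found, "using", n_tests, "tests")
```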

We consider the problem of approximating the regression function from noisy vector-valued data by an online learning algorithm using an appropriate reproducing kernel Hilbert space (RKHS) as a prior. In an online algorithm, i.i.d. samples become available one by one through a random process and are successively processed to build approximations to the regression function. We are interested in the asymptotic performance of such online approximation algorithms and show that the expected squared error in the RKHS norm can be bounded by $C^2 (m+1)^{-s/(2+s)}$, where $m$ is the current number of processed data points, the parameter $0<s\leq 1$ expresses an additional smoothness assumption on the regression function, and the constant $C$ depends on the variance of the input noise, the smoothness of the regression function, and further parameters of the algorithm.
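A minimal sketch of an online kernel regression update of this flavor; the scalar-valued output, Gaussian kernel, and decaying step size are our illustrative assumptions, whereas the paper treats vector-valued data and a specific parameter schedule.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

class OnlineKernelRegression:
    """Stochastic-gradient regression in an RKHS: the iterate is kept as a
    kernel expansion f_m = sum_i a_i K(x_i, .).  The decaying step size is an
    illustrative choice, not the schedule analyzed in the paper."""
    def __init__(self, kernel=gaussian_kernel, gamma0=0.5):
        self.kernel, self.gamma0 = kernel, gamma0
        self.centers, self.coeffs = [], []

    def predict(self, x):
        return sum(a * self.kernel(c, x) for c, a in zip(self.centers, self.coeffs))

    def update(self, x, y, m):
        gamma = self.gamma0 / (m + 1) ** 0.5
        residual = self.predict(x) - y
        self.centers.append(x)
        self.coeffs.append(-gamma * residual)

rng = np.random.default_rng(2)
model = OnlineKernelRegression()
for m in range(200):
    x = rng.uniform(-1, 1, size=2)
    y = np.sin(3 * x[0]) + 0.1 * rng.standard_normal()   # noisy observation
    model.update(x, y, m)
print(model.predict(np.array([0.3, 0.0])))
```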

We analyze a bilinear optimal control problem for the Stokes--Brinkman equations: the control variable enters the state equations as a coefficient. In two- and three-dimensional Lipschitz domains, we perform a complete continuous analysis that includes the existence of solutions and first- and second-order optimality conditions. We also develop two finite element methods that differ fundamentally in whether the admissible control set is discretized or not. For each of the proposed methods, we perform a convergence analysis and derive a priori error estimates; the latter under the assumption that the domain is convex. Finally, assuming that the domain is Lipschitz, we develop an a posteriori error estimator for each discretization scheme and obtain a global reliability bound.

We study the data-driven selection of causal graphical models using constraint-based algorithms, which determine the existence or non-existence of edges (causal connections) in a graph by testing a series of conditional independence hypotheses. In settings where the ultimate scientific goal is to use the selected graph to inform estimation of some causal effect of interest (e.g., by selecting a valid and sufficient set of adjustment variables), we argue that a "cautious" approach to graph selection should control the probability of falsely removing edges and prefer dense, rather than sparse, graphs. We propose a simple inversion of the usual conditional independence testing procedure: to remove an edge, test the null hypothesis of conditional association greater than some user-specified threshold, rather than the null of independence. This equivalence-testing formulation of the independence constraints leads to a procedure with desirable statistical properties and behaviors that better match the inferential goals of certain scientific studies, for example, observational epidemiological studies that aim to estimate causal effects in the face of causal model uncertainty. We illustrate our approach on a data example from environmental epidemiology.
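A minimal sketch of the inverted test for a single edge, using a Fisher-$z$ equivalence (TOST-style) test on a partial correlation; the use of partial correlations and the normal approximation are illustrative assumptions, not the only option.

```python
import numpy as np
from scipy import stats

def equivalence_test_edge(r, n, k, delta=0.1, alpha=0.05):
    """Remove the edge only if we can reject H0: |rho| >= delta, i.e. if the
    Fisher-z confidence region for the partial correlation rho lies entirely
    inside (-delta, delta).
    r: sample partial correlation, n: sample size, k: conditioning-set size;
    delta and alpha are user-specified."""
    se = 1.0 / np.sqrt(n - k - 3)
    z, z_delta = np.arctanh(r), np.arctanh(delta)
    # two one-sided tests against the equivalence boundaries +/- delta
    p_upper = stats.norm.cdf((z - z_delta) / se)    # H0: rho >= +delta
    p_lower = stats.norm.sf((z + z_delta) / se)     # H0: rho <= -delta
    return max(p_upper, p_lower) < alpha            # True => remove the edge

print(equivalence_test_edge(r=0.02, n=2000, k=2))   # True: near-independence established
print(equivalence_test_edge(r=0.02, n=100,  k=2))   # False: edge is cautiously retained
```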

Reduced rank regression (RRR) is a widely employed model for investigating the linear association between multiple response variables and a set of predictors. While RRR has been extensively explored in various works, the focus has predominantly been on continuous response variables, overlooking other types of outcomes. This study shifts attention to the Bayesian perspective on generalized linear models (GLMs) within the RRR framework. In this work, we relax the requirement that the link function of the generalized linear model be canonical. We examine the properties of fractional posteriors in GLMs within the RRR context, where a fractional power of the likelihood is utilized. By employing a spectral scaled Student prior distribution, we establish consistency and concentration results for the fractional posterior. Our results highlight adaptability, as they do not necessitate prior knowledge of the rank of the parameter matrix. These results are in line with those found in the frequentist literature. Additionally, an examination of model misspecification is undertaken, underscoring the effectiveness of our approach in such scenarios.
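For readers unfamiliar with the construction, a fractional posterior tempers the likelihood with a power $\alpha\in(0,1)$ before combining it with the prior (the notation here is generic, not the paper's): $$\pi_{\alpha}(\theta \mid \mathcal{D}) \;\propto\; L(\theta;\mathcal{D})^{\alpha}\,\pi(\theta), \qquad 0<\alpha<1,$$ so that $\alpha=1$ recovers the ordinary posterior and smaller $\alpha$ downweights the data, which is one reason fractional posteriors behave well under model misspecification.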

Two sequential estimators are proposed for the odds p/(1-p) and the log odds log(p/(1-p)), respectively, using independent Bernoulli random variables with parameter p as inputs. The estimators are unbiased and guarantee that the variance of the estimation error divided by the true value of the odds, or the variance of the estimation error of the log odds, is less than a target value for any p in (0,1). The estimators are close to optimal in the sense of Wolfowitz's bound.
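For context, one classical sequential construction that yields an exactly unbiased odds estimator is inverse binomial sampling: draw Bernoulli trials until a preset number $r$ of failures is observed; the number of successes $S$ then satisfies $E[S]=r\,p/(1-p)$, so $S/r$ is unbiased for the odds. The sketch below illustrates this construction only; the paper's estimators additionally control the stated (relative) variance uniformly in $p$.

```python
import numpy as np

def inverse_sampling_odds(sample_bernoulli, r=50):
    """Inverse binomial sampling: stop after r failures; the success count S
    has mean r*p/(1-p), so S/r is an unbiased estimator of the odds.
    Illustrative construction only, not the paper's sequential estimators."""
    successes, failures = 0, 0
    while failures < r:
        if sample_bernoulli():
            successes += 1
        else:
            failures += 1
    return successes / r

rng = np.random.default_rng(3)
p = 0.7
estimates = [inverse_sampling_odds(lambda: rng.random() < p, r=200) for _ in range(100)]
print(np.mean(estimates), "vs true odds", p / (1 - p))
```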

Quantum hypothesis testing (QHT) has been traditionally studied from the information-theoretic perspective, wherein one is interested in the optimal decay rate of error probabilities as a function of the number of samples of an unknown state. In this paper, we study the sample complexity of QHT, wherein the goal is to determine the minimum number of samples needed to reach a desired error probability. By making use of the wealth of knowledge that already exists in the literature on QHT, we characterize the sample complexity of binary QHT in the symmetric and asymmetric settings, and we provide bounds on the sample complexity of multiple QHT. In more detail, we prove that the sample complexity of symmetric binary QHT depends logarithmically on the inverse error probability and inversely on the negative logarithm of the fidelity. As a counterpart of the quantum Stein's lemma, we also find that the sample complexity of asymmetric binary QHT depends logarithmically on the inverse type II error probability and inversely on the quantum relative entropy. We then provide lower and upper bounds on the sample complexity of multiple QHT, with it remaining an intriguing open question to improve these bounds. The final part of our paper outlines and reviews how the sample complexity of QHT is relevant to a broad swathe of research areas and can enhance understanding of many fundamental concepts, including quantum algorithms for simulation and search, quantum learning and classification, and foundations of quantum mechanics. As such, we view our paper as an invitation to researchers coming from different communities to study and contribute to the problem of sample complexity of QHT, and we outline a number of open directions for future research.
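Schematically, suppressing constants and lower-order terms (the notation is ours), the dependencies described above read $$n_{\mathrm{sym}}(\varepsilon)\;\approx\;\frac{\log(1/\varepsilon)}{-\log F(\rho,\sigma)},\qquad n_{\mathrm{asym}}(\varepsilon_{\mathrm{II}})\;\approx\;\frac{\log(1/\varepsilon_{\mathrm{II}})}{D(\rho\,\Vert\,\sigma)},$$ where $F(\rho,\sigma)$ denotes the fidelity and $D(\rho\,\Vert\,\sigma)$ the quantum relative entropy between the two hypothesis states, and $\varepsilon_{\mathrm{II}}$ is the type II error probability in the asymmetric setting.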

We establish necessary and sufficient conditions for the invertibility of symmetric three-by-three block matrices having a double saddle-point structure, conditions that guarantee the unique solvability of double saddle-point systems. We consider various scenarios, including the case where all diagonal blocks are allowed to be rank deficient. Under certain conditions related to the ranks of the blocks and the intersections of their kernels, an explicit formula for the inverse is derived.
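For orientation, one symmetric block pattern commonly referred to as a double saddle-point matrix in the literature is $$\mathcal{A}\;=\;\begin{bmatrix} A & B^{T} & 0\\ B & -D & C^{T}\\ 0 & C & E \end{bmatrix},$$ with diagonal blocks $A$, $D$, $E$ (any of which may be rank deficient) and coupling blocks $B$, $C$; the precise sign convention and zero pattern considered in the paper may differ from this illustrative form.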
