
We study deviations by a group of agents in the three main types of matching markets: the house allocation, the marriage, and the roommates models. For a given instance, we call a matching $k$-stable if no other matching exists that is more beneficial to at least $k$ out of the $n$ agents. This concept generalizes the recently studied majority stability. We prove that whereas verifying $k$-stability of a given matching is polynomial-time solvable in all three models, the complexity of deciding whether a $k$-stable matching exists depends on $\frac{k}{n}$ and is characteristic of each model.
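As a rough illustration of the definition only (not the paper's polynomial-time verification algorithm), the sketch below brute-forces $k$-stability in a toy house allocation instance; all function names and preference data are hypothetical.

```python
from itertools import permutations

def blocking_agents(prefs, current, candidate):
    """Agents who strictly prefer their house under `candidate` to the one
    under `current`; prefs[a] lists agent a's houses from best to worst."""
    rank = [{h: i for i, h in enumerate(p)} for p in prefs]
    return [a for a in range(len(prefs))
            if rank[a][candidate[a]] < rank[a][current[a]]]

def is_k_stable(prefs, matching, k):
    """Brute-force check of the definition: `matching` is k-stable iff no other
    matching is more beneficial to at least k agents.  Exponential in n; the
    paper's point is that verification can instead be done in polynomial time."""
    n = len(prefs)
    for cand in map(list, permutations(range(n))):
        if cand != matching and len(blocking_agents(prefs, matching, cand)) >= k:
            return False
    return True

# Toy instance: 3 agents, 3 houses; the matching assigns house i to agent i.
prefs = [[0, 1, 2], [0, 1, 2], [2, 1, 0]]
print(is_k_stable(prefs, [0, 1, 2], k=2))   # True: no rival matching helps 2 agents
```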

Related content

CC (Computational Complexity) stands out in the field of computational complexity. Its subject matter lies at the intersection of mathematics and theoretical computer science, with a clear mathematical profile and rigorous mathematical formatting.
August 31, 2023

Blind quantum computation (BQC) protocols enable quantum algorithms to be executed on third-party quantum agents while keeping the data and algorithm confidential. Previous proposals for measurement-based BQC require preparing a highly entangled cluster state. In this paper, we show that such a requirement is not necessary. Our protocol only requires pre-shared Bell pairs between delegated quantum agents, and no classical or quantum information exchange between agents is needed during the execution. Our proposal requires fewer quantum resources than previous ones by eliminating the need for a universal cluster state.
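The protocol itself is not reproduced here, but the entanglement resource it presupposes is easy to exhibit. A minimal NumPy sketch preparing the Bell pair $(|00\rangle + |11\rangle)/\sqrt{2}$ from $|00\rangle$ with a Hadamard and a CNOT:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Start from |00>, apply H to the first qubit, then CNOT.
state = CNOT @ np.kron(H, np.eye(2)) @ np.array([1.0, 0.0, 0.0, 0.0])
print(state)   # [0.707 0 0 0.707], i.e. (|00> + |11>)/sqrt(2)
```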

This paper focuses on the problem of testing the null hypothesis that the regression functions of several populations are equal under a general nonparametric homoscedastic regression model. It is well known that linear kernel regression estimators are sensitive to atypical responses. These distorted estimates influence the test statistic constructed from them, so the conclusions obtained when testing equality of several regression functions may also be affected. In recent years, the use of testing procedures based on empirical characteristic functions has shown good practical properties. For that reason, to provide more reliable inferences, we construct a test statistic that combines characteristic functions and residuals obtained from a robust smoother under the null hypothesis. The asymptotic distribution of the test statistic is studied under the null hypothesis and under root-$n$ contiguous alternatives. A Monte Carlo study is performed to compare the finite sample behaviour of the proposed test with the classical one obtained using local averages. The reported numerical experiments show the advantage of the proposed methodology over the one based on Nadaraya-Watson estimators for finite samples. An illustration with a real data set is also provided and makes it possible to investigate the sensitivity of the $p$-value to the bandwidth selection.
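The following sketch conveys the general recipe rather than the authors' exact statistic: residuals from a pooled robust smoother fitted under the null (here a local median, standing in for the paper's robust smoother) are compared through their empirical characteristic functions. The kernel, bandwidth, frequency grid, and weighting are all illustrative choices.

```python
import numpy as np

def robust_smoother(x, y, grid, h):
    """Local-median smoother: a simple robust alternative to Nadaraya-Watson."""
    fitted = np.empty(len(grid))
    for j, g in enumerate(grid):
        w = np.abs(x - g) <= h                 # uniform kernel window
        fitted[j] = np.median(y[w]) if w.any() else np.median(y)
    return fitted

def ecf(res, t):
    """Empirical characteristic function of residuals over frequencies t."""
    return np.exp(1j * np.outer(t, res)).mean(axis=1)

rng = np.random.default_rng(0)
n = 200
x1, x2 = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
y1 = np.sin(2 * np.pi * x1) + 0.3 * rng.standard_t(3, n)   # heavy-tailed errors
y2 = np.sin(2 * np.pi * x2) + 0.3 * rng.standard_t(3, n)

# Under H0 both regression functions coincide, so fit one pooled robust
# smoother and take each sample's residuals from it.
x, y = np.concatenate([x1, x2]), np.concatenate([y1, y2])
grid = np.linspace(0, 1, 50)
m_hat = robust_smoother(x, y, grid, h=0.1)
r1 = y1 - np.interp(x1, grid, m_hat)
r2 = y2 - np.interp(x2, grid, m_hat)

# L2-type distance between the two residual ECFs over a frequency grid.
t = np.linspace(-5, 5, 101)
dt = t[1] - t[0]
T_stat = n * np.sum(np.abs(ecf(r1, t) - ecf(r2, t)) ** 2) * dt
print(T_stat)   # calibrate by permutation or bootstrap in practice
```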

We define game semantics for the constructive $\mu$-calculus and prove its correctness. We use these game semantics to prove that the $\mu$-calculus collapses to modal logic over $\mathsf{CS5}$ frames. Finally, we prove the completeness of $\mathsf{\mu CS5}$ over $\mathsf{CS5}$ frames.

This paper presents the error analysis of numerical methods on graded meshes for stochastic Volterra equations with weakly singular kernels. We first prove a novel regularity estimate for the exact solution by analyzing the associated convolution structure. This reveals that the exact solution exhibits an initial singularity, in the sense that its H\"older exponent on any neighborhood of $t=0$ is lower than that on every compact subset of $(0,T]$. Motivated by the initial singularity, we then construct the Euler--Maruyama method, the fast Euler--Maruyama method, and the Milstein method based on graded meshes. By establishing their pointwise-in-time error estimates, we give the grading exponents of the meshes that attain the optimal uniform-in-time convergence orders, which improve on those of the uniform-mesh case. Numerical experiments are finally reported to confirm the sharpness of the theoretical findings.
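A minimal sketch of a left-point Euler--Maruyama scheme on a graded mesh for a toy stochastic Volterra equation with weakly singular kernel $K(u)=u^{\alpha-1}$; the drift, diffusion, grading exponent, and step count are illustrative, not the paper's calibrated choices.

```python
import numpy as np

rng = np.random.default_rng(1)

T, N, r = 1.0, 200, 2.0        # horizon, number of steps, grading exponent
alpha = 0.6                    # K(u) = u**(alpha - 1) is weakly singular at u = 0
t = T * (np.arange(N + 1) / N) ** r     # graded mesh, clustered near t = 0

b = lambda x: -x               # drift
sigma = lambda x: 0.5          # diffusion (constant, for simplicity)
K = lambda u: u ** (alpha - 1.0)

dt = np.diff(t)
dW = rng.normal(0.0, np.sqrt(dt))

X = np.empty(N + 1)
X[0] = 1.0
for m in range(1, N + 1):
    # Left-point quadrature: the kernel is evaluated at t_m - t_j > 0, so the
    # singularity is never hit; cost is O(N^2) (the fast variant lowers this).
    kern = K(t[m] - t[:m])
    X[m] = X[0] + np.sum(kern * b(X[:m]) * dt[:m]) + np.sum(kern * sigma(X[:m]) * dW[:m])

print(X[-1])
```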

Estimating the success probability of a binomial distribution is a basic problem, which appears in almost all introductory statistics courses and arises frequently in various studies. In some cases, the parameter of interest is a difference between two probabilities, and the current work studies the construction of confidence intervals for this parameter when the sample size is small. Our goal is to find the shortest confidence intervals under the constraint that the coverage probability be larger than a predetermined level. For the two-sample case, there is no known algorithm that achieves this goal, but various heuristic procedures have been suggested, and the present work aims at finding optimal confidence intervals. In the one-sample case, an algorithm that finds optimal confidence intervals was presented by Blyth and Still (1983). It is based on solving small, local optimization problems and then using an inversion step to find the globally optimal solution. We show that this approach fails in the two-sample case, and therefore, in order to find optimal confidence intervals, one needs to solve a global optimization problem rather than small local ones, which is computationally much harder. We present and discuss the suitable global optimization problem. Using the Gurobi package, we find near-optimal solutions when the sample sizes are smaller than 15, and we compare these solutions to some existing methods, both approximate and exact. We find that the improvement in terms of length with respect to the best competitor varies between 1.5\% and 5\% for different parameters of the problem. Therefore, we recommend the use of the new confidence intervals when both sample sizes are smaller than 15. Tables of the confidence intervals are given in the Excel file in this link.
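To make the coverage constraint concrete, the sketch below computes the exact coverage of the textbook Wald interval for $p_1-p_2$ at small sample sizes by enumerating all $(x_1, x_2)$ outcomes; the Wald interval serves only as an example method, not the paper's optimized solution.

```python
import numpy as np
from scipy.stats import binom

n1 = n2 = 10
z = 1.96   # nominal 95% level

def wald_ci(x1, x2):
    """Textbook Wald interval for p1 - p2, used here only as an example method."""
    p1, p2 = x1 / n1, x2 / n2
    se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return p1 - p2 - z * se, p1 - p2 + z * se

def exact_coverage(p1, p2):
    """Exact coverage at (p1, p2): total probability of all outcomes (x1, x2)
    whose interval contains the true difference p1 - p2."""
    cov = 0.0
    for x1 in range(n1 + 1):
        for x2 in range(n2 + 1):
            lo, hi = wald_ci(x1, x2)
            if lo <= p1 - p2 <= hi:
                cov += binom.pmf(x1, n1, p1) * binom.pmf(x2, n2, p2)
    return cov

# The minimum coverage over a parameter grid falls well below the nominal 95%,
# which is why the paper enforces the coverage constraint exactly while
# minimizing interval length.
grid = np.linspace(0.05, 0.95, 19)
print(min(exact_coverage(p1, p2) for p1 in grid for p2 in grid))
```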

The modeling of high-frequency data that characterize financial asset transactions has been an area of considerable interest among statisticians and econometricians, above all the analysis of time series of financial durations. Autoregressive conditional duration (ACD) models have been the main tool for modeling financial transaction data, where duration is usually defined as the time interval between two successive events. These models are usually specified in terms of a time-varying mean (or median) conditional duration. In this paper, a new extension of ACD models is proposed which is built on the basis of log-symmetric distributions reparametrized by their quantile. The proposed quantile log-symmetric conditional duration autoregressive model allows us to model different percentiles instead of the traditionally used conditional mean (or median) duration. We carry out an in-depth study of theoretical properties and practical issues, such as parameter estimation using the maximum likelihood method and diagnostic analysis based on residuals. A detailed Monte Carlo simulation study is also carried out to evaluate the performance of the proposed models and the estimation method in retrieving the true parameter values, as well as to evaluate a proposed form of residuals. Finally, the proposed class of models is applied to a price duration data set and then used to derive a semi-parametric intraday value-at-risk (IVaR) model.
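A hypothetical simulation sketch of the quantile-reparametrization idea: a log-normal (one log-symmetric member) is reparametrized by its $q$-th quantile, and a log-ACD-type recursion drives that quantile. The recursion form and all coefficient values are illustrative, not the paper's exact specification.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

q = 0.9                      # modeled percentile (e.g. the 90th) instead of the mean
sigma = 0.5                  # log-scale dispersion
z_q = norm.ppf(q)

omega, a, b = 0.1, 0.1, 0.8  # ACD-type recursion coefficients (illustrative)

n = 1000
x = np.empty(n)              # simulated durations
logQ = np.empty(n)           # log of the conditional q-quantile
logQ[0] = omega / (1 - a - b)

for i in range(n):
    # Log-normal reparametrized by its q-quantile Q_i:
    # x_i = Q_i * exp(sigma * (eps - z_q)) has q-quantile exactly Q_i.
    x[i] = np.exp(logQ[i] + sigma * (rng.standard_normal() - z_q))
    if i + 1 < n:
        logQ[i + 1] = omega + a * np.log(x[i]) + b * logQ[i]

# By construction, about a fraction q of durations fall below their
# conditional quantile.
print(np.mean(x <= np.exp(logQ)))   # approximately 0.9
```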

The Pauli stabilizer formalism is perhaps the most thoroughly studied means of procuring quantum error-correcting codes, whereby the code is obtained through commuting Pauli operators and ``stabilized'' by them. In this work we show that every quantum error-correcting code, including Pauli stabilizer codes and subsystem codes, has a similar structure, in that the code can be stabilized by commuting ``Paulian'' operators which share many features with Pauli operators and which form a \textbf{Paulian stabilizer group}. By employing a controlled gate we can measure these Paulian operators to acquire the error syndrome. Examples concerning codeword stabilized codes and bosonic codes are presented; notably, one of the examples has been demonstrated experimentally, and the observable for detecting the error turns out to be Paulian, showing the potential utility of this approach. This work provides a possible route to implementing error-correcting codes and to finding new ones.
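The controlled-gate measurement mentioned above can be illustrated with the standard syndrome-extraction circuit: for a state stabilized by an operator $U$, an ancilla prepared via a Hadamard, a controlled-$U$, and a second Hadamard makes the ancilla outcome reveal the $\pm 1$ eigenvalue. A NumPy sketch for the ordinary Pauli stabilizer $Z\otimes Z$ of a Bell state (the Paulian case would work analogously; this toy example is not from the paper):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def controlled(U):
    """|0><0| (x) I + |1><1| (x) U, with the ancilla as the first factor."""
    d = U.shape[0]
    return np.block([[np.eye(d), np.zeros((d, d))],
                     [np.zeros((d, d)), U]])

# Data register: Bell state (|00> + |11>)/sqrt(2), stabilized by Z (x) Z.
data = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
ZZ = np.kron(Z, Z)

# Syndrome extraction: H on the ancilla, controlled-ZZ, H again, read the ancilla.
circuit = np.kron(H, np.eye(4)) @ controlled(ZZ) @ np.kron(H, np.eye(4))

state = circuit @ np.kron(np.array([1.0, 0.0]), data)   # ancilla starts in |0>
print(np.sum(np.abs(state[:4]) ** 2))   # 1.0: ancilla reads 0, syndrome +1

# An X error on the first data qubit anticommutes with Z (x) Z, flipping the syndrome.
err = np.kron(I2, np.kron(X, I2)) @ np.kron(np.array([1.0, 0.0]), data)
print(np.sum(np.abs((circuit @ err)[:4]) ** 2))          # 0.0: syndrome -1
```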

Introduction: Oblique Target-rotation in the context of exploratory factor analysis is a relevant method for the investigation of the oblique independent clusters model. It has been argued that minimizing single cross-loadings by means of target rotation may lead to large effects of sampling error on the target-rotated factor solutions. Method: In order to minimize effects of sampling error on the results of Target-rotation, we propose to compute the mean cross-loadings for each block of salient loadings of the independent clusters model and to perform target rotation on the block-wise mean cross-loadings. The resulting transformation matrix is then applied to the complete unrotated loading matrix in order to produce mean Target-rotated factors. Results: A simulation study based on correlated independent factor models revealed that mean oblique Target-rotation resulted in smaller negative bias of factor inter-correlations than conventional Target-rotation based on single loadings, especially when the sample size was small and the number of factors was large. An empirical example revealed that the similarity of Target-rotated factors computed for small subsamples with Target-rotated factors of the total sample was more pronounced for mean Target-rotation than for conventional Target-rotation. Discussion: Mean Target-rotation can be recommended in the context of oblique independent factor models, especially for small samples. An R-script and an SPSS-script for this form of Target-rotation are provided in the Appendix.
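A schematic NumPy sketch of the contrast between the two procedures (the abstract's scripts are in R and SPSS; this is an independent illustration). The rotation step is a plain column-wise least-squares solve without the normalization constraints of published oblique target-rotation algorithms, and the loading matrix is simulated.

```python
import numpy as np

def target_rotate(A, target):
    """Column-wise least-squares transformation T = argmin ||A T - target||_F.
    A simplified (unnormalized) oblique Procrustes step, not a published algorithm."""
    return np.linalg.solve(A.T @ A, A.T @ target)

# Simulated unrotated loadings: 6 variables, 2 factors, independent-clusters
# structure (variables 0-2 salient on factor 1, variables 3-5 on factor 2).
rng = np.random.default_rng(3)
A = np.array([[.7, .1], [.6, .0], [.8, .2],
              [.1, .7], [.0, .6], [.2, .8]]) + rng.normal(0, .05, (6, 2))
blocks = [np.arange(0, 3), np.arange(3, 6)]

# Conventional Target-rotation: a target with a zero at every single cross-loading.
target_full = np.array([[.7, 0], [.7, 0], [.7, 0], [0, .7], [0, .7], [0, .7]])
L_conv = A @ target_rotate(A, target_full)

# Mean Target-rotation: rotate the block-wise MEAN loading matrix, then apply
# the resulting transformation to the complete unrotated loading matrix.
A_mean = np.vstack([A[b].mean(axis=0) for b in blocks])      # 2 x 2
T_mean = target_rotate(A_mean, np.array([[.7, 0], [0, .7]]))
L_mean = A @ T_mean

print(np.round(L_conv, 2))
print(np.round(L_mean, 2))   # mean Target-rotated factor loadings
```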

A major family of sufficient dimension reduction (SDR) methods, called inverse regression, commonly requires the distribution of the predictor $X$ to have a linear $E(X|\beta^\mathsf{T}X)$ and a degenerate $\mathrm{var}(X|\beta^\mathsf{T}X)$ for the desired reduced predictor $\beta^\mathsf{T}X$. In this paper, we adjust the first- and second-order inverse regression methods by modeling $E(X|\beta^\mathsf{T}X)$ and $\mathrm{var}(X|\beta^\mathsf{T}X)$ under a mixture model assumption on $X$, which allows these terms to convey more complex patterns and is most suitable when $X$ has a clustered sample distribution. The proposed SDR methods build a natural path between inverse regression and the localized SDR methods, and in particular inherit the advantages of both; that is, they are $\sqrt{n}$-consistent, efficiently implementable, directly adjustable under high-dimensional settings, and fully recover the desired reduced predictor. These findings are illustrated by simulation studies and a real data example at the end, which also suggest the effectiveness of the proposed methods for non-clustered data.
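For orientation, here is the classical first-order inverse regression method (sliced inverse regression) that serves as the baseline being adjusted; the mixture-model versions of $E(X|\beta^\mathsf{T}X)$ and $\mathrm{var}(X|\beta^\mathsf{T}X)$ proposed in the paper are not implemented here.

```python
import numpy as np

def sir(X, y, n_slices=10, d=1):
    """Classical sliced inverse regression (Li, 1991): the first-order inverse
    regression baseline that the paper adjusts under a mixture model on X."""
    n, p = X.shape
    mu = X.mean(axis=0)
    L = np.linalg.cholesky(np.cov(X, rowvar=False))
    Z = np.linalg.solve(L, (X - mu).T).T          # standardized predictor
    slices = np.array_split(np.argsort(y), n_slices)
    # Weighted outer products of within-slice means of the standardized X.
    M = sum(len(s) / n * np.outer(Z[s].mean(axis=0), Z[s].mean(axis=0))
            for s in slices)
    _, vecs = np.linalg.eigh(M)
    beta = np.linalg.solve(L.T, vecs[:, -d:])     # back-transform to the X scale
    return beta / np.linalg.norm(beta, axis=0)

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 5))
b = np.array([1.0, 1.0, 0.0, 0.0, 0.0])
y = (X @ b) ** 3 + 0.1 * rng.normal(size=500)
print(sir(X, y).ravel())   # approximately +/- (0.71, 0.71, 0, 0, 0)
```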

In epidemiology and the social sciences, propensity score methods are popular for estimating treatment effects from observational data, and multiple imputation is popular for handling covariate missingness. However, how to appropriately combine multiple imputation with propensity score analysis is not completely clear. This paper aims to bring clarity to the consistency (or lack thereof) of methods that have been proposed, focusing on the within approach (where the effect is estimated separately in each imputed dataset and the multiple estimates are then combined) and the across approach (where typically the propensity scores are averaged across imputed datasets before being used for effect estimation). We show that the within method is valid and can be used with any causal effect estimator that is consistent in the full-data setting. Existing across methods are inconsistent, but a different across method that averages the inverse probability weights across imputed datasets is consistent for propensity score weighting. We also comment on methods that rely on imputing a function of the missing covariate rather than the covariate itself, including imputation of the propensity score and of the probability weight. Based on the consistency results and practical flexibility, we recommend generally using the standard within method. Throughout, we provide intuition to make the results meaningful to the broad audience of applied researchers.
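A schematic sketch of the two approaches for inverse probability weighting. The imputation step is a crude stand-in (resampling observed values, ignoring $t$ and $y$) used only to make the code self-contained; a real analysis would use a proper imputation model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_ate(y, t, ps):
    """Horvitz-Thompson inverse-probability-weighted average treatment effect."""
    return np.mean(t * y / ps) - np.mean((1 - t) * y / (1 - ps))

rng = np.random.default_rng(5)
n, M = 2000, 5
x_true = rng.normal(size=n)                        # confounder, partly missing
t = rng.binomial(1, 1 / (1 + np.exp(-x_true)))     # treatment depends on x
y = 1.0 * t + x_true + rng.normal(size=n)          # true effect = 1
miss = rng.random(n) < 0.3

# Crude stand-in for multiple imputation (resampling observed values); not a
# valid imputation model, only enough to exercise the two estimation paths.
imputed = []
for _ in range(M):
    x = x_true.copy()
    x[miss] = rng.choice(x_true[~miss], miss.sum())
    imputed.append(x.reshape(-1, 1))

# Within approach: estimate the effect in each imputed dataset, then pool.
ests, weights = [], []
for X in imputed:
    ps = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]
    ests.append(ipw_ate(y, t, ps))
    weights.append(t / ps + (1 - t) / (1 - ps))
print("within:", np.mean(ests))

# Consistent across variant: average the WEIGHTS over imputations, estimate once.
w = np.mean(weights, axis=0)
print("across (avg. weights):", np.mean(w * t * y) - np.mean(w * (1 - t) * y))
```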
