
Fuzzy Message Detection (FMD) is a recent cryptographic primitive invented by Beck et al. (CCS'21), in which an untrusted server performs coarse message filtering for its clients in a recipient-anonymous way. In FMD, besides the true-positive messages, clients download from the server cover messages determined by their false-positive detection rates. What is more, within FMD, the server cannot distinguish between genuine and cover traffic. In this paper, we formally analyze the privacy guarantees of FMD from four different angles. First, we evaluate what privacy provisions are offered by FMD. We find that FMD does not provide relationship anonymity without additional cryptographic techniques protecting the senders' identities. Moreover, FMD provides a reasonable degree of recipient unlinkability only when users apply considerable false-positive rates and there is concurrently significant traffic. Second, we perform a differential privacy (DP) analysis and coin a relaxed DP definition to capture the privacy guarantees FMD yields. Third, we study FMD through a game-theoretic lens and argue why FMD is not sustainable without altruistic users. Finally, we simulate FMD on real-world communication data. Our theoretical and empirical results help FMD users select appropriate false-positive detection rates for various applications with given privacy requirements.
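To make the false-positive mechanism concrete, the following minimal sketch simulates only the statistical behaviour an FMD server induces for one recipient, not the cryptographic scheme itself: every non-matching message is flagged independently with the recipient's chosen false-positive rate, so the downloaded cover traffic grows with that rate. The function name, message counts, and rates are illustrative assumptions.

```python
# Minimal sketch (not the cryptographic scheme): simulates the *statistical*
# behaviour an FMD server induces, assuming each non-matching message is
# flagged for a recipient independently with that recipient's chosen
# false-positive rate. Names and parameters are illustrative.
import random

def simulate_detection(num_messages, true_indices, false_positive_rate, rng=random):
    """Return the indices the server hands to a recipient: all true messages
    plus cover messages drawn with the chosen false-positive rate."""
    flagged = set(true_indices)
    for m in range(num_messages):
        if m not in flagged and rng.random() < false_positive_rate:
            flagged.add(m)  # cover message, indistinguishable from a true one
    return flagged

if __name__ == "__main__":
    rng = random.Random(0)
    total, mine = 10_000, set(range(25))          # 25 genuine messages
    for p in (2**-2, 2**-4, 2**-8):               # FMD rates are powers of 1/2
        downloaded = simulate_detection(total, mine, p, rng)
        print(f"p = {p:>8.4f}: downloaded {len(downloaded):>5} of {total} "
              f"({len(downloaded) - len(mine)} cover messages)")
```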

Related content

The common way to optimize auction and pricing systems is to set aside a small fraction of the traffic to run experiments. This leads to the question: how can we learn the most from the smallest amount of data? For truthful auctions, this is the \emph{sample complexity} problem. For posted price auctions, we no longer have access to samples. Instead, the algorithm is allowed to choose a price $p_t$; then, for a fresh sample $v_t \sim \mathcal{D}$, we learn the sign $s_t = \operatorname{sign}(p_t - v_t) \in \{-1,+1\}$. How many pricing queries are needed to estimate a given parameter of the underlying distribution? We give tight upper and lower bounds on the number of pricing queries required to find an approximately optimal reserve price for general, regular, and MHR distributions. Interestingly, for regular distributions, the pricing query and sample complexities match. But for general and MHR distributions, we show a strict separation between them. All known results on sample complexity for revenue optimization follow from a variant of using the optimal reserve price of the empirical distribution. In the pricing query complexity setting, we show that learning the entire distribution within an error of $\epsilon$ in Lévy distance requires strictly more pricing queries than estimating the reserve price. Instead, our algorithm uses a new property we identify, called \emph{relative flatness}, to quickly zoom into the right region of the distribution and obtain the optimal pricing query complexity.
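The pricing-query model can be illustrated with a naive baseline that repeatedly queries each price on a grid and keeps the empirically best one; this is not the relative-flatness algorithm from the abstract, just a sketch of the feedback the model allows. The function names, the grid, and the exponential example distribution are assumptions made for illustration.

```python
# Illustrative sketch of the pricing-query model only: at each step we pick a
# price p, draw a fresh value v ~ D, and observe sign(p - v). The grid-search
# estimator below is a naive baseline, *not* the relative-flatness algorithm
# described in the abstract; all names are illustrative.
import random

def pricing_query(price, sample_value):
    """The only feedback the model allows: +1 if the price exceeded the
    buyer's value (no sale), -1 otherwise."""
    return 1 if price > sample_value else -1

def estimate_sell_probability(price, draw_value, queries, rng):
    sales = sum(pricing_query(price, draw_value(rng)) == -1 for _ in range(queries))
    return sales / queries

def naive_reserve_search(draw_value, price_grid, queries_per_price, rng):
    """Pick the grid price with the best empirical revenue p * P(v >= p)."""
    best_price, best_revenue = None, -1.0
    for p in price_grid:
        rev = p * estimate_sell_probability(p, draw_value, queries_per_price, rng)
        if rev > best_revenue:
            best_price, best_revenue = p, rev
    return best_price, best_revenue

if __name__ == "__main__":
    rng = random.Random(1)
    exponential = lambda r: r.expovariate(1.0)        # an MHR distribution
    grid = [i / 20 for i in range(1, 61)]             # prices 0.05 .. 3.00
    p_star, rev = naive_reserve_search(exponential, grid, 2000, rng)
    print(f"estimated reserve ~ {p_star:.2f}, empirical revenue ~ {rev:.3f}")
    # The true optimum for Exp(1) is p = 1 with revenue 1/e ~ 0.368.
```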

We consider the query complexity of finding a local minimum of a function defined on a graph, where at most $k$ rounds of interaction with the oracle are allowed. Rounds model parallel settings, where each query takes resources to complete and is executed on a separate processor. Thus the query complexity in $k$ rounds informs how many processors are needed to achieve a parallel time of $k$. We focus on the $d$-dimensional grid $[n]^d$, where the dimension $d$ is a constant, and consider two regimes for the number of rounds: constant and polynomial in $n$. We give algorithms and lower bounds that characterize the trade-off between the number of rounds of adaptivity and the query complexity of local search. When the number of rounds $k$ is constant, we show that the query complexity of local search in $k$ rounds is $\Theta\bigl(n^{\frac{d^{k+1} - d^k}{d^k - 1}}\bigr)$, for both deterministic and randomized algorithms. When the number of rounds is polynomial, i.e. $k = n^{\alpha}$ for $0 < \alpha < d/2$, the randomized query complexity is $\Theta\left(n^{d-1 - \frac{d-2}{d}\alpha}\right)$ for all $d \geq 5$. For $d=3$ and $d=4$, we show that the same upper bound expression holds and give almost matching lower bounds. The local search analysis also enables us to characterize the query complexity of computing a Brouwer fixed point in rounds. Our proof technique for lower bounding the query complexity in rounds may be of independent interest as an alternative to the classical relational adversary method of Aaronson from the fully adaptive setting.
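As a small illustration of the round model (not of the round-optimal algorithms above), the sketch below runs plain steepest descent on the two-dimensional grid, where each step issues one parallel round containing all neighbour queries; the bookkeeping shows how rounds and total queries are counted. The names and the toy objective are illustrative.

```python
# Sketch of the round-based query model only (not the round-optimal algorithms
# from the abstract): queries inside one round must be chosen before any of
# that round's answers are revealed. Here, plain steepest descent on the 2-D
# grid [n]^2 spends one round per step, querying all neighbours in parallel.
# Function and variable names are illustrative.

def neighbours(p, n):
    x, y = p
    cand = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
    return [(a, b) for a, b in cand if 0 <= a < n and 0 <= b < n]

def local_search_in_rounds(f, n, start):
    """Return (local_minimum, rounds_used, total_queries)."""
    current, rounds, queries = start, 0, 1
    current_val = f(current)                      # initial query
    while True:
        batch = neighbours(current, n)            # one parallel round of queries
        values = {q: f(q) for q in batch}
        rounds += 1
        queries += len(batch)
        best = min(values, key=values.get)
        if values[best] >= current_val:           # no improving neighbour
            return current, rounds, queries       # -> local minimum
        current, current_val = best, values[best]

if __name__ == "__main__":
    n = 64
    f = lambda p: (p[0] - 40) ** 2 + (p[1] - 7) ** 2   # unique minimum at (40, 7)
    print(local_search_in_rounds(f, n, start=(0, 0)))
```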

The blind deconvolution problem amounts to reconstructing both a signal and a filter from the convolution of the two. It constitutes a prominent topic in the mathematical and engineering literature. In this work, we analyze a sparse version of the problem: the filter $h\in \mathbb{R}^\mu$ is assumed to be $s$-sparse, and the signal $b \in \mathbb{R}^n$ is taken to be $\sigma$-sparse, with both supports unknown. We observe the convolution of the filter with a linear transformation of the signal. Motivated by practically important multi-user communication applications, we derive a recovery guarantee for the simultaneous demixing and deconvolution setting. We achieve efficient recovery by relaxing the problem to a hierarchical sparse recovery, for which we can build on a flexible framework. The price we pay for this is guarantees that are somewhat sub-optimal relative to the number of free parameters of the problem. The signal model we consider is sufficiently general to capture many applications in a number of engineering fields. Despite the practical importance of the bi-sparse and generalized demixing settings, we provide the first rigorous performance guarantees for efficient and simple algorithms in these settings. We complement our analytical results with numerical simulations. We find evidence that the sub-optimal scaling $s^2\sigma \log(\mu)\log(n)$ of our derived sufficient condition is likely overly pessimistic and that the observed performance is better described by a scaling proportional to $s\sigma$ up to log factors.
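The measurement model can be sketched as follows: generate an $s$-sparse filter and a $\sigma$-sparse signal, apply a linear transform to the signal, and observe the circular convolution of the two. The Gaussian transform, the circular-convolution choice, and all names are illustrative assumptions; the hierarchical sparse recovery itself is not shown.

```python
# Sketch of the bi-sparse measurement model from the abstract (not the
# hierarchical-sparse recovery algorithm): we observe the circular convolution
# of an s-sparse filter h in R^mu with a linear transform of a sigma-sparse
# signal b in R^n. The Gaussian choice of the transform B is an assumption
# made for illustration.
import numpy as np

def bisparse_observation(mu, n, s, sigma, rng):
    h = np.zeros(mu)
    h[rng.choice(mu, size=s, replace=False)] = rng.standard_normal(s)          # s-sparse filter
    b = np.zeros(n)
    b[rng.choice(n, size=sigma, replace=False)] = rng.standard_normal(sigma)   # sigma-sparse signal
    B = rng.standard_normal((mu, n)) / np.sqrt(mu)   # linear transform of the signal
    x = B @ b                                        # transformed signal lives in R^mu
    y = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)))  # circular convolution
    return y, h, b, B

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y, h, b, B = bisparse_observation(mu=128, n=256, s=4, sigma=6, rng=rng)
    print(y.shape, np.count_nonzero(h), np.count_nonzero(b))  # (128,) 4 6
```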

Clinical studies often encounter truncation-by-death problems, which may render the outcomes undefined. Statistical analysis based only on observed survivors may lead to biased results because the characteristics of survivors may differ greatly between treatment groups. Under the principal stratification framework, a meaningful causal parameter, the survivor average causal effect in the always-survivor group, can be defined. This causal parameter may not be identifiable in observational studies, where the treatment assignment and the survival or outcome process are confounded by unmeasured features. In this paper, we propose a new method to deal with unmeasured confounding when the outcome is truncated by death. First, a new method is proposed to identify the heterogeneous conditional survivor average causal effect based on a substitutional variable under monotonicity. Second, under additional assumptions, the survivor average causal effect on the overall population is also identified. Furthermore, we consider estimation and inference for the conditional survivor average causal effect based on parametric and nonparametric methods with good asymptotic properties. Good finite-sample properties are demonstrated by simulation and sensitivity analysis. The proposed method is applied to investigate the effect of allogeneic stem cell transplantation types on leukemia relapse.
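A small simulation (not the identification strategy proposed above) illustrates why survivor-only comparisons can be biased and what the survivor average causal effect targets: potential survival and outcomes are generated with an unmeasured frailty, and the naive contrast among observed survivors is compared with the effect in the always-survivor stratum. All parameter values are made up.

```python
# Illustrative simulation (not the identification strategy of the paper):
# shows why comparing observed survivors can be biased, and what the survivor
# average causal effect (SACE) targets instead. All parameter values are made up.
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
frailty = rng.standard_normal(N)                    # unmeasured health status

# Potential survival under control/treatment (monotone: treatment never harms survival).
s0 = (frailty > 0.5).astype(int)
s1 = (frailty > -0.5).astype(int)

# Potential outcomes, meaningful only when the patient survives.
y0 = 1.0 + frailty + rng.standard_normal(N)
y1 = 1.3 + frailty + rng.standard_normal(N)         # true individual effect = 0.3

z = rng.integers(0, 2, N)                           # randomised treatment
s_obs = np.where(z == 1, s1, s0)
y_obs = np.where(z == 1, y1, y0)

always_survivor = (s0 == 1) & (s1 == 1)
sace = (y1 - y0)[always_survivor].mean()            # effect in the always-survivor stratum

naive = y_obs[(z == 1) & (s_obs == 1)].mean() - y_obs[(z == 0) & (s_obs == 1)].mean()
print(f"true SACE ~ {sace:.3f}, naive survivor-only contrast ~ {naive:.3f}")
```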

The first step towards investigating the effectiveness of a treatment is to split the population into the control and the treatment groups and then compare the average responses of the two groups to the treatment. In order to ensure that the difference between the two groups is caused only by the treatment, it is crucial for the control and the treatment groups to have similar statistics. The validity and reliability of trials are determined by the similarity of the two groups' statistics. Covariate balancing methods increase the similarity between the distributions of the two groups' covariates. However, often in practice, there are not enough samples to accurately estimate the groups' covariate distributions. In this paper, we empirically show that covariate balancing with the standardized mean difference (SMD) covariate balance measure is susceptible to adversarial treatment assignments when the population size is limited. Adversarial treatment assignments are those that are admitted by the covariate balance measure but result in large average treatment effect (ATE) estimation errors. To support this argument, we provide an optimization-based algorithm, namely Adversarial Treatment ASsignment in TREatment Effect Trials (ATASTREET), to find adversarial treatment assignments for the IHDP-1000 dataset.
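For reference, the standardized mean difference balance check that such adversarial assignments must pass can be sketched in a few lines; this is not the ATASTREET optimization, and the 0.1 threshold and the names are illustrative.

```python
# Minimal sketch of the standardized mean difference (SMD) balance check the
# abstract refers to; this is *not* the ATASTREET optimization, just the
# balance measure an adversarial assignment would have to pass. Names and the
# 0.1 rule-of-thumb threshold are illustrative.
import numpy as np

def standardized_mean_difference(X, z):
    """Per-covariate SMD between treatment (z == 1) and control (z == 0) groups."""
    treated, control = X[z == 1], X[z == 0]
    pooled_sd = np.sqrt((treated.var(axis=0, ddof=1) + control.var(axis=0, ddof=1)) / 2)
    return np.abs(treated.mean(axis=0) - control.mean(axis=0)) / pooled_sd

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 5))                 # 200 subjects, 5 covariates
    z = rng.integers(0, 2, 200)                       # a candidate assignment
    smd = standardized_mean_difference(X, z)
    print("SMD per covariate:", np.round(smd, 3))
    print("balanced under a 0.1 rule of thumb:", bool((smd < 0.1).all()))
```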

Fast-developing artificial intelligence (AI) technology has enabled various applied systems deployed in the real world, impacting people's everyday lives. However, many current AI systems have been found to be vulnerable to imperceptible attacks, biased against underrepresented groups, lacking in user privacy protection, and so on, which not only degrades the user experience but also erodes society's trust in all AI systems. In this review, we strive to provide AI practitioners with a comprehensive guide for building trustworthy AI systems. We first introduce the theoretical framework of important aspects of AI trustworthiness, including robustness, generalization, explainability, transparency, reproducibility, fairness, privacy preservation, alignment with human values, and accountability. We then survey leading approaches to these aspects in industry. To unify the currently fragmented approaches towards trustworthy AI, we propose a systematic approach that considers the entire lifecycle of AI systems, ranging from data acquisition to model development, to development and deployment, and finally to continuous monitoring and governance. In this framework, we offer concrete action items to practitioners and societal stakeholders (e.g., researchers and regulators) to improve AI trustworthiness. Finally, we identify key opportunities and challenges in the future development of trustworthy AI systems, where we see the need for a paradigm shift towards comprehensive trustworthy AI systems.

The notion of uncertainty is of major importance in machine learning and constitutes a key element of machine learning methodology. In line with the statistical tradition, uncertainty has long been perceived as almost synonymous with standard probability and probabilistic predictions. Yet, due to the steadily increasing relevance of machine learning for practical applications and related issues such as safety requirements, new problems and challenges have recently been identified by machine learning scholars, and these problems may call for new methodological developments. In particular, this includes the importance of distinguishing between (at least) two different types of uncertainty, often referred to as aleatoric and epistemic. In this paper, we provide an introduction to the topic of uncertainty in machine learning as well as an overview of attempts so far at handling uncertainty in general and at formalizing this distinction in particular.
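One common formalization of this distinction (a sketch of one approach, not the only one) decomposes the predictive uncertainty of an ensemble: total uncertainty is the entropy of the averaged prediction, aleatoric uncertainty is the average member entropy, and epistemic uncertainty is their difference. The function names and toy ensembles below are illustrative.

```python
# Sketch of one common aleatoric/epistemic decomposition (not the only
# formalization): with an ensemble of predictive distributions, total
# uncertainty is the entropy of the averaged prediction, aleatoric uncertainty
# is the average member entropy, and epistemic uncertainty is their difference.
import numpy as np

def entropy(p, eps=1e-12):
    p = np.clip(p, eps, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

def uncertainty_decomposition(member_probs):
    """member_probs: array of shape (ensemble_size, num_classes)."""
    mean_pred = member_probs.mean(axis=0)
    total = entropy(mean_pred)
    aleatoric = entropy(member_probs).mean()
    epistemic = total - aleatoric
    return total, aleatoric, epistemic

if __name__ == "__main__":
    # Members agree on an uncertain prediction -> mostly aleatoric uncertainty.
    agree = np.array([[0.5, 0.5], [0.5, 0.5], [0.5, 0.5]])
    # Members confidently disagree -> mostly epistemic uncertainty.
    disagree = np.array([[0.99, 0.01], [0.01, 0.99], [0.99, 0.01]])
    for name, probs in [("agree", agree), ("disagree", disagree)]:
        t, a, e = uncertainty_decomposition(probs)
        print(f"{name}: total={t:.3f} aleatoric={a:.3f} epistemic={e:.3f}")
```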

Machine learning is a widely used method for generating predictions. These predictions are more accurate when the model is trained on a larger dataset. On the other hand, the data is usually divided among different entities. For privacy reasons, the training can be done locally and the models can then be safely aggregated among the participants. However, if there are only two participants in \textit{Collaborative Learning}, safe aggregation loses its power since the output of the training already contains much information about the participants. To resolve this issue, they must employ privacy-preserving mechanisms, which inevitably affect the accuracy of the model. In this paper, we model the training process as a two-player game in which each player aims to achieve higher accuracy while preserving its privacy. We introduce the notion of \textit{Price of Privacy}, a novel approach to measuring the effect of privacy protection on the accuracy of the model. We develop a theoretical model for different player types, and we either find or prove the existence of a Nash Equilibrium under some assumptions. Moreover, we confirm these assumptions via a Recommendation Systems use case: for a specific learning algorithm, we apply three privacy-preserving mechanisms to two real-world datasets. Finally, as complementary work for the designed game, we interpolate the relationship between privacy and accuracy for this use case and present three other methods to approximate it in a real-world scenario.
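A hedged sketch of a price-of-privacy style measure is given below; the paper's exact definition may differ. It computes the relative accuracy a participant gives up when a privacy-preserving mechanism is applied, compared with unprotected collaborative training; all numbers and names are hypothetical.

```python
# Hedged sketch of a price-of-privacy style measure (the exact definition in
# the paper may differ): the relative accuracy lost when a privacy-preserving
# mechanism is applied to collaborative training, compared with training
# collaboratively without protection. All numbers below are made up.

def price_of_privacy(acc_collaborative, acc_private):
    """Relative accuracy loss caused by the privacy mechanism (0 = free, 1 = total)."""
    if acc_collaborative <= 0:
        raise ValueError("collaborative accuracy must be positive")
    return max(0.0, (acc_collaborative - acc_private) / acc_collaborative)

if __name__ == "__main__":
    # Hypothetical two-player setting: accuracy without protection vs. with
    # three privacy mechanisms of increasing strength.
    baseline = 0.82
    for name, acc in [("weak DP noise", 0.80), ("moderate DP noise", 0.74),
                      ("strong DP noise", 0.61)]:
        print(f"{name:>18}: PoP = {price_of_privacy(baseline, acc):.3f}")
```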

Detecting objects and estimating their pose remains one of the major challenges of the computer vision research community. There is a compromise between localizing the objects and estimating their viewpoints: the detector ideally needs to be view-invariant, while the pose estimation process should be able to generalize to the category level. This work explores the use of deep learning models to solve both problems simultaneously. To do so, we propose three novel deep learning architectures, which are able to perform joint detection and pose estimation, and in which we gradually decouple the two tasks. We also investigate whether the pose estimation problem should be solved as a classification or a regression problem, which is still an open question in the computer vision community. We present a comparative analysis of all our solutions and of the methods that currently define the state of the art for this problem. We use the PASCAL3D+ and ObjectNet3D datasets for a thorough experimental evaluation and to present the main results. With the proposed models we achieve state-of-the-art performance on both datasets.
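The classification formulation of viewpoint estimation can be sketched by discretizing the azimuth into bins and predicting a bin index, while the regression formulation predicts the angle directly; the 24-bin choice below is a common convention on PASCAL3D+-style benchmarks and is used here only as an assumption, as are the helper names.

```python
# Small sketch of the classification-vs-regression question for viewpoint
# estimation: the classification route discretizes the azimuth into bins
# (24 bins is assumed here as a common benchmark convention) and predicts a
# bin index; regression would predict the angle directly.

NUM_BINS = 24
BIN_WIDTH = 360.0 / NUM_BINS

def azimuth_to_bin(azimuth_deg):
    return int((azimuth_deg % 360.0) // BIN_WIDTH)

def bin_to_azimuth(bin_index):
    return (bin_index + 0.5) * BIN_WIDTH          # bin centre, in degrees

def angular_error(pred_deg, true_deg):
    diff = abs(pred_deg - true_deg) % 360.0
    return min(diff, 360.0 - diff)

if __name__ == "__main__":
    true_azimuth = 97.0
    b = azimuth_to_bin(true_azimuth)
    print(f"bin {b}, recovered {bin_to_azimuth(b):.1f} deg, "
          f"quantization error {angular_error(bin_to_azimuth(b), true_azimuth):.1f} deg")
```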

Robust estimation is much more challenging in high dimensions than it is in one dimension: Most techniques either lead to intractable optimization problems or estimators that can tolerate only a tiny fraction of errors. Recent work in theoretical computer science has shown that, in appropriate distributional models, it is possible to robustly estimate the mean and covariance with polynomial time algorithms that can tolerate a constant fraction of corruptions, independent of the dimension. However, the sample and time complexity of these algorithms is prohibitively large for high-dimensional applications. In this work, we address both of these issues by establishing sample complexity bounds that are optimal, up to logarithmic factors, as well as giving various refinements that allow the algorithms to tolerate a much larger fraction of corruptions. Finally, we show on both synthetic and real data that our algorithms have state-of-the-art performance and suddenly make high-dimensional robust estimation a realistic possibility.
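A simplified filtering-style sketch conveys the flavour of dimension-independent robust mean estimation (the algorithms above are more refined): while the empirical covariance has an unusually large top eigenvalue, remove the points that project furthest along that direction and average the rest. The threshold and removal fraction are illustrative choices.

```python
# Simplified filtering-style sketch of dimension-independent robust mean
# estimation (the algorithms in the abstract are more refined): while the
# empirical covariance has an unusually large top eigenvalue, remove the
# points that project furthest along that direction, then average the rest.
# The eigenvalue threshold and removal fraction are illustrative choices.
import numpy as np

def filtered_mean(X, eig_threshold=1.5, drop_frac=0.02, max_iter=50):
    X = X.copy()
    for _ in range(max_iter):
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)
        if eigvals[-1] <= eig_threshold:          # covariance looks clean, stop
            break
        v = eigvecs[:, -1]                        # direction of largest variance
        scores = np.abs((X - mu) @ v)
        keep = scores <= np.quantile(scores, 1.0 - drop_frac)
        X = X[keep]                               # drop the most suspicious points
    return X.mean(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, n = 100, 5000
    clean = rng.standard_normal((n, d))                       # true mean 0, identity cov
    outliers = rng.standard_normal((n // 10, d)) + 5.0        # 10% corruption
    X = np.vstack([clean, outliers])
    print("naive mean error   :", np.linalg.norm(X.mean(axis=0)))
    print("filtered mean error:", np.linalg.norm(filtered_mean(X)))
```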
