
The union-closed sets conjecture, attributed to Péter Frankl (1979), states that for any non-empty finite union-closed family of finite sets, other than the family consisting only of the empty set, there is an element that is in at least half of the sets in the family. We prove a version of Frankl's conjecture for families distributed according to any one of infinitely many distributions. As a corollary, in the intersection-closed reformulation of Frankl's conjecture, we obtain that it is true for families distributed according to any one of infinitely many Maxwell--Boltzmann distributions with inverse temperatures bounded below by a positive universal constant. Frankl's original conjecture corresponds to zero inverse temperature.
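
For reference, a direct transcription of the prose statement above: a finite family $\mathcal{F}$ is union-closed if $A, B \in \mathcal{F}$ implies $A \cup B \in \mathcal{F}$, and the conjecture asserts
\[
  \mathcal{F}\ \text{finite and union-closed},\quad \emptyset \neq \mathcal{F} \neq \{\emptyset\}
  \;\Longrightarrow\;
  \exists\, x:\ \bigl|\{A \in \mathcal{F} : x \in A\}\bigr| \;\ge\; \tfrac{1}{2}\,|\mathcal{F}|.
\]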

Related Content

The majority of fault-tolerant distributed algorithms are designed assuming a nominal corruption model, in which at most a fraction $f_n$ of parties can be corrupted by the adversary. However, due to the infamous Sybil attack, nominal models are not sufficient to express the trust assumptions in open (i.e., permissionless) settings. Instead, permissionless systems typically operate in a weighted model, where each participant is associated with a weight and the adversary can corrupt a set of parties holding at most a fraction $f_w$ of total weight. In this paper, we suggest a simple way to transform a large class of protocols designed for the nominal model into the weighted model. To this end, we formalize and solve three novel optimization problems, which we collectively call the weight reduction problems, that allow us to map large real weights into small integer weights while preserving the properties necessary for the correctness of the protocols. In all cases, we manage to keep the sum of the integer weights to be at most linear in the number of parties, resulting in extremely efficient protocols for the weighted model. Moreover, we demonstrate that, on weight distributions that emerge in practice, the sum of the integer weights tends to be far below the theoretical worst case and is often even smaller than the number of participants. While, for some protocols, our transformation requires an arbitrarily small reduction in resilience (i.e., $f_w = f_n - \epsilon$), surprisingly, for many important problems we manage to obtain weighted solutions with the same resilience ($f_w = f_n$) as nominal ones. Notable examples include asynchronous consensus, verifiable secret sharing, erasure-coded distributed storage and broadcast protocols.
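
As a loose illustration of the flavor of weight reduction (an assumption-laden toy, not one of the paper's weight reduction problems or algorithms; the function name and the proportional-rounding rule are ours), the sketch below maps real weights to integer weights whose sum is at most linear in the number of parties:

```python
import math

def proportional_integer_weights(weights, target_sum=None):
    """Toy proportional rounding: map real weights to small integer weights.

    Rounding up keeps every party represented, and the integer sum is at most
    target + n, i.e., linear in the number of parties. This conveys only the
    flavor of weight reduction; it does not by itself control how the
    adversary's weight fraction may change, which is what the paper's
    optimization problems are designed to guarantee (f_w = f_n - epsilon, or
    even f_w = f_n).
    """
    n = len(weights)
    total = sum(weights)
    target = target_sum if target_sum is not None else n
    return [math.ceil(w * target / total) for w in weights]

# Example: heavy-tailed real weights mapped to integers summing to at most 2n.
real_w = [1000.0, 250.0, 3.5, 0.7, 0.7, 0.1]
print(proportional_integer_weights(real_w))
```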

We are interested in creating statistical methods to provide informative summaries of random fields through the geometry of their excursion sets. To this end, we introduce an estimator for the length of the perimeter of excursion sets of random fields on $\mathbb{R}^2$ observed over regular square tilings. The proposed estimator acts on the empirically accessible binary digital images of the excursion regions and computes the length of a piecewise linear approximation of the excursion boundary. The estimator is shown to be consistent as the pixel size decreases, without the need for any normalization constant and without assuming Gaussianity or isotropy of the underlying random field. In this general framework, even when the domain grows to cover $\mathbb{R}^2$, the estimation error is shown to be of smaller order than the side length of the domain. For affine, strongly mixing random fields, this translates to a multivariate Central Limit Theorem for our estimator when multiple levels are considered simultaneously. Finally, we conduct several numerical studies to investigate statistical properties of the proposed estimator in the finite-sample data setting.
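
To make the ingredients concrete, here is a minimal sketch of measuring the length of a piecewise-linear boundary extracted from a binary digital image via marching squares (using `skimage.measure.find_contours`). It only illustrates the kind of computation involved; it is not the paper's estimator, and it inherits none of the paper's consistency guarantees:

```python
import numpy as np
from skimage import measure  # marching squares gives piecewise-linear contours

def perimeter_from_binary_image(binary_image, pixel_size=1.0):
    """Sum the lengths of the piecewise-linear contours of a binary image."""
    contours = measure.find_contours(binary_image.astype(float), 0.5)
    total = 0.0
    for c in contours:               # c: (k, 2) array of contour vertices
        seg = np.diff(c, axis=0)     # consecutive segment vectors
        total += np.sqrt((seg ** 2).sum(axis=1)).sum()
    return total * pixel_size

# Example: binary image of the excursion set {field > level} on a square grid.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
field = np.sin(8 * x)[None, :] + np.cos(8 * x)[:, None] \
    + 0.1 * rng.standard_normal((200, 200))
print(perimeter_from_binary_image(field > 0.5, pixel_size=1 / 200))
```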

This paper focuses on the algebraic theory underlying the study of the complexity and the algorithms for the Constraint Satisfaction Problem (CSP). We unify, simplify, and extend parts of the three approaches that have been developed to study the CSP over finite templates - absorption theory that was used to characterize CSPs solvable by local consistency methods (JACM'14), and Bulatov's and Zhuk's theories that were used for two independent proofs of the CSP Dichotomy Theorem (FOCS'17, JACM'20). As the first contribution we present an elementary theorem about primitive positive definability and use it to obtain the starting points of Bulatov's and Zhuk's proofs as corollaries. As the second contribution we propose and initiate a systematic study of minimal Taylor algebras. This class of algebras is broad enough so that it suffices to verify the CSP Dichotomy Theorem on this class only, but still is unusually well behaved. In particular, many concepts from the three approaches coincide in the class, which is in striking contrast with the general setting. We believe that the theory initiated in this paper will eventually result in a simple and more natural proof of the Dichotomy Theorem that employs a simpler and more efficient algorithm, and will help in attacking complexity questions in other CSP-related problems.

Reconfigurable intelligent surfaces (RISs) allow controlling the propagation environment in wireless networks through reconfigurable elements. Recently, beyond diagonal RISs (BD-RISs) have been proposed as novel RIS architectures whose scattering matrix is not limited to being diagonal. However, BD-RISs have been studied assuming continuous-valued scattering matrices, which are hard to implement in practice. In this paper, we address this problem by proposing two solutions to realize discrete-valued group connected and fully connected RISs. First, we propose scalar-discrete RISs, in which each entry of the RIS impedance matrix is independently discretized. Second, we propose vector-discrete RISs, where the entries in each group of the RIS impedance matrix are jointly discretized. In both solutions, the codebook is designed offline so as to minimize the distortion caused in the RIS impedance matrix by the discretization operation. Numerical results show that vector-discrete RISs achieve higher performance than scalar-discrete RISs at the cost of increased optimization complexity. Furthermore, fewer resolution bits per impedance are necessary to achieve the performance upper bound as the group size of the group connected architecture increases. In particular, only a single resolution bit is sufficient in fully connected RISs to approximately achieve the performance upper bound.
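
The scalar/vector distinction can be sketched as plain nearest-neighbor quantization of reactance values. In this hedged sketch the codebooks are arbitrary placeholders; in the paper they are designed offline to minimize the distortion on the impedance matrix, which is not reproduced here:

```python
import numpy as np

def scalar_discretize(reactances, codebook):
    """Round each reactance independently to the nearest codebook value
    (the scalar-discrete idea)."""
    cb = np.asarray(codebook)
    idx = np.abs(reactances[..., None] - cb).argmin(axis=-1)
    return cb[idx]

def vector_discretize(group_vectors, codewords):
    """Map each group's vector of reactances jointly to the nearest codeword
    (the vector-discrete idea). group_vectors: (G, d), codewords: (K, d)."""
    d2 = ((group_vectors[:, None, :] - codewords[None, :, :]) ** 2).sum(-1)
    return codewords[d2.argmin(axis=1)]

# Example with placeholder codebooks (not optimized for distortion).
rng = np.random.default_rng(1)
x = rng.uniform(-100, 100, size=(4, 3))          # 4 groups of 3 reactances
scalar_cb = np.linspace(-100, 100, 4)            # 2 bits per reactance
vector_cb = rng.uniform(-100, 100, size=(8, 3))  # 3 bits per group
print(scalar_discretize(x, scalar_cb))
print(vector_discretize(x, vector_cb))
```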

In a well-calibrated risk prediction model, the average predicted probability is close to the true event rate for any given subgroup. Such models are reliable across heterogeneous populations and satisfy strong notions of algorithmic fairness. However, the task of auditing a model for strong calibration is well-known to be difficult -- particularly for machine learning (ML) algorithms -- due to the sheer number of potential subgroups. As such, common practice is to only assess calibration with respect to a few predefined subgroups. Recent developments in goodness-of-fit testing offer potential solutions but are not designed for settings with weak signal or where the poorly calibrated subgroup is small, as they either overly subdivide the data or fail to divide the data at all. We introduce a new testing procedure based on the following insight: if we can reorder observations by their expected residuals, there should be a change in the association between the predicted and observed residuals along this sequence if a poorly calibrated subgroup exists. This lets us reframe the problem of calibration testing as one of changepoint detection, for which powerful methods already exist. We begin by introducing a sample-splitting procedure where a portion of the data is used to train a suite of candidate models for predicting the residual, and the remaining data are used to perform a score-based cumulative sum (CUSUM) test. To further improve power, we then extend this adaptive CUSUM test to incorporate cross-validation, while maintaining Type I error control under minimal assumptions. Compared to existing methods, the proposed procedure consistently achieved higher power in simulation studies and more than doubled the power when auditing a mortality risk prediction model.
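
A simplified sketch of the core idea (order the audit data by a candidate model's predicted residuals, then scan a standardized cumulative sum of observed-minus-predicted outcomes) is shown below. It is not the paper's full procedure, which also involves sample splitting, a suite of candidate models, cross-validation, and calibrated critical values; the names and the toy data-generating process are ours:

```python
import numpy as np

def cusum_calibration_statistic(y, p, residual_scores):
    """Score-based CUSUM scan over observations ordered by predicted residual."""
    y, p = np.asarray(y, float), np.asarray(p, float)
    order = np.argsort(-np.asarray(residual_scores))   # most suspect first
    r = y[order] - p[order]                            # observed - predicted
    s = np.cumsum(r)
    scale = np.sqrt(np.cumsum(p[order] * (1 - p[order])))
    scale[scale == 0] = np.inf                         # guard tiny prefixes
    return np.max(np.abs(s) / scale)

# Example: a model that under-predicts risk on a hidden subgroup.
rng = np.random.default_rng(2)
p = rng.uniform(0.05, 0.4, size=2000)
subgroup = rng.random(2000) < 0.15
y = rng.binomial(1, np.where(subgroup, np.clip(p + 0.25, 0, 1), p))
score = subgroup.astype(float) + 0.1 * rng.standard_normal(2000)  # stand-in for a fitted residual model
print(cusum_calibration_statistic(y, p, score))
```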

We propose a novel method for testing serial independence of object-valued time series in metric spaces, which are more general than Euclidean or Hilbert spaces. The proposed method is fully nonparametric, free of tuning parameters, and can capture all nonlinear pairwise dependence. The key concept used in this paper is the distance covariance in metric spaces, which is extended to auto distance covariance for object-valued time series. Furthermore, we propose a generalized spectral density function to account for pairwise dependence at all lags and construct a Cramér-von Mises type test statistic. New theoretical arguments are developed to establish the asymptotic behavior of the test statistic. A wild bootstrap is also introduced to obtain the critical values of the non-pivotal limiting null distribution. Extensive numerical simulations and two real data applications are conducted to illustrate the effectiveness and versatility of our proposed method.
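
The basic ingredient, an empirical auto distance covariance at a fixed lag built from pairwise distances, can be sketched as follows. This is only the standard biased (V-statistic) estimator with a user-supplied metric; the paper's test additionally builds a generalized spectral density, a Cramér-von Mises statistic, and a wild bootstrap, none of which appear here:

```python
import numpy as np

def auto_distance_covariance(series, lag, dist):
    """Biased empirical auto distance covariance at a given lag,
    for a series of objects compared through the metric `dist`."""
    x = series[:-lag] if lag > 0 else series
    y = series[lag:]
    n = len(x)
    a = np.array([[dist(x[i], x[j]) for j in range(n)] for i in range(n)])
    b = np.array([[dist(y[i], y[j]) for j in range(n)] for i in range(n)])
    # Double-center both distance matrices, then average their product.
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    return (A * B).mean()

# Example with a Euclidean series; any metric space works via `dist`.
rng = np.random.default_rng(3)
z = list(np.cumsum(rng.standard_normal((300, 2)), axis=0))
print(auto_distance_covariance(z, lag=1, dist=lambda u, v: np.linalg.norm(u - v)))
```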

To date, the only way to argue polynomial lower bounds for dynamic algorithms is via fine-grained complexity arguments. These arguments rely on strong assumptions about specific problems, such as the Strong Exponential Time Hypothesis (SETH) and the Online Matrix-Vector Multiplication Conjecture (OMv). While they have led to many exciting discoveries, dynamic algorithms still miss out on some of the benefits and lessons of the traditional ``coarse-grained'' approach that relates classes of problems such as P and NP to one another. In this paper we initiate the study of a coarse-grained complexity theory for dynamic algorithms. Below are some of the questions that this theory can answer. What if dynamic Orthogonal Vectors (OV) is easy in the cell-probe model? A research program for proving polynomial unconditional lower bounds for dynamic OV in the cell-probe model is motivated by the fact that many conditional lower bounds can be shown via reductions from the dynamic OV problem. Since the cell-probe model is more powerful than the word RAM and has historically allowed smaller upper bounds, it might turn out that dynamic OV is easy in the cell-probe model, making this research direction infeasible. Our theory implies that if this is the case, there will be very interesting algorithmic consequences: if dynamic OV can be maintained in polylogarithmic worst-case update time in the cell-probe model, then so can several important dynamic problems, such as $k$-edge connectivity, $(1+\epsilon)$-approximate mincut, $(1+\epsilon)$-approximate matching, planar nearest neighbors, Chan's subset union and 3-vs-4 diameter. The same conclusion holds when we replace dynamic OV by, e.g., subgraph connectivity, single-source reachability, Chan's subset union, or 3-vs-4 diameter. Lower bounds for $k$-edge connectivity via dynamic OV? (See the full abstract in the pdf file.)

$\text{TT}^{\Box}_{{\mathcal C}}$ is a generic family of effectful, extensional type theories with a forcing interpretation parameterized by modalities. This paper identifies a subclass of $\text{TT}^{\Box}_{{\mathcal C}}$ theories that internally realizes continuity principles through stateful computations, such as reference cells. The principle of continuity is a fundamental property that holds for a number of intuitionistic theories such as System T. Roughly speaking, it states that functions on real numbers only need finite approximations of these numbers to compute their results. Continuity principles have generally been justified using semantical arguments, but it is known that the modulus of continuity of functions can be computed using effectful computations such as exceptions or reference cells. In this paper, the modulus of continuity of functionals on the Baire space is computed directly using the stateful computations enabled internally in the theory.
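
The stateful trick can be illustrated in an ordinary effectful language: wrap the argument of a functional so that a mutable cell records the largest index it queries. This mirrors the idea the paper internalizes in the type theory; it is not the paper's formal construction, and the names below are ours:

```python
def modulus_via_state(F, alpha):
    """Compute F(alpha) together with a modulus of continuity of F at alpha,
    where alpha is a point of the Baire space represented as int -> int."""
    largest = -1
    def instrumented(n):
        nonlocal largest
        largest = max(largest, n)   # record the deepest query made by F
        return alpha(n)
    value = F(instrumented)
    # F's value at alpha depends only on alpha(0), ..., alpha(largest).
    return value, largest + 1

# Example: a functional that inspects finitely many positions of its argument.
F = lambda f: f(0) + f(3) * f(7)
alpha = lambda n: n % 5
print(modulus_via_state(F, alpha))   # (6, 8): value 0 + 3*2, modulus 8
```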

The history of ternary adders goes back more than six decades. Since then, a multitude of ternary full adders (TFAs) have been presented in the literature. This paper conducts a review of TFAs so that one can become familiar with the design methodologies used and their prevalence. Moreover, despite the numerous TFAs, almost none of them are in their simplest form. A large number of transistors could have been eliminated by considering a partial TFA instead of a complete one. According to our investigation, only 28.6% of the previous designs are partial TFAs. They could have been simplified even further by assuming a partial TFA with an output carry voltage of 0V or VDD. This way, in a single-VDD design, voltage division inside the carry-generator part would have been eliminated and less power dissipated. As far as we have found, there are only three partial TFAs with this favorable condition in the literature. Additionally, most of the simulation setups in the previous articles are not realistic enough; therefore, the simulation results reported in these papers are neither comparable nor entirely valid. This motivated us to conduct a survey, elaborate on this issue, and enhance some of the previous designs. Among 84 papers, 10 different TFAs (from 11 papers) are selected, simplified, and simulated in this paper. Simulation results obtained with HSPICE and a 32nm CNFET technology reveal that the simplified partial TFAs outperform their original versions in terms of delay, power, and transistor count.
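
The arithmetic reason a partial design suffices is easy to check exhaustively. Assuming the common convention that a partial TFA restricts the carry input to two levels, the check below confirms that with unbalanced ternary digits {0, 1, 2} and a binary carry-in, the carry-out also never exceeds 1, so a ternary-valued carry path is unnecessary:

```python
from itertools import product

# a, b are ternary digits; cin is restricted to {0, 1} (partial TFA assumption).
max_cout = 0
for a, b, cin in product(range(3), range(3), range(2)):
    total = a + b + cin              # at most 2 + 2 + 1 = 5
    s, cout = total % 3, total // 3  # ternary sum digit and carry-out
    max_cout = max(max_cout, cout)
print("maximum carry-out:", max_cout)  # prints 1
```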

The dominating NLP paradigm of training a strong neural predictor to perform one task on a specific dataset has led to state-of-the-art performance in a variety of applications (e.g., sentiment classification, span-prediction-based question answering, or machine translation). However, it builds upon the assumption that the data distribution is stationary, i.e., that the data is sampled from a fixed distribution both at training and test time. This way of training is inconsistent with how we as humans are able to learn from and operate within a constantly changing stream of information. Moreover, it is ill-adapted to real-world use cases where the data distribution is expected to shift over the course of a model's lifetime. The first goal of this thesis is to characterize the different forms this shift can take in the context of natural language processing, and to propose benchmarks and evaluation metrics to measure its effect on current deep learning architectures. We then proceed to take steps to mitigate the effect of distributional shift on NLP models. To this end, we develop methods based on parametric reformulations of the distributionally robust optimization framework. Empirically, we demonstrate that these approaches yield more robust models on a selection of realistic problems. In the third and final part of this thesis, we explore ways of efficiently adapting existing models to new domains or tasks. Our contribution to this topic takes inspiration from information geometry to derive a new gradient update rule that alleviates catastrophic forgetting issues during adaptation.
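
For context, the generic distributionally robust optimization objective that this framework builds on can be written as below; the thesis develops parametric reformulations of such objectives, and the KL-ball uncertainty set shown here is only one common choice, not necessarily the one used in the thesis:
\[
  \min_{\theta}\ \sup_{q \in \mathcal{Q}(p)} \mathbb{E}_{x \sim q}\bigl[\ell(\theta; x)\bigr],
  \qquad
  \mathcal{Q}(p) = \{\, q : D_{\mathrm{KL}}(q \,\|\, p) \le \rho \,\},
\]
where $p$ is the training distribution and $\rho$ bounds the size of the admissible distributional shift.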
