
Secure multi-party computation using a physical deck of cards, often called card-based cryptography, has been extensively studied during the past decade. Card-based protocols to compute various Boolean functions have been developed. As each input bit is typically encoded by two cards, computing an $n$-variable Boolean function requires at least $2n$ cards. We are interested in optimal protocols that use exactly $2n$ cards. In particular, we focus on symmetric functions. In this paper, we formulate the problem of developing $2n$-card protocols to compute $n$-variable symmetric Boolean functions by classifying all such functions into several NPN-equivalence classes. We then summarize existing protocols that can compute some representative functions from these classes, and also solve some open problems in the cases $n=4$, 5, 6, and 7. In particular, we develop a protocol to compute a function $k$Mod3, which determines whether the sum of all inputs is congruent to $k$ modulo 3 ($k \in \{0,1,2\}$).
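
To make the target concrete, the sketch below implements only the truth semantics of $k$Mod3 as an ordinary function; it is not the card-based protocol itself, and the identifiers are illustrative.

```python
# A minimal sketch of the Boolean function kMod3 (truth semantics only,
# not the two-cards-per-bit protocol). Function and variable names are ours.

def k_mod_3(bits, k):
    """Return 1 iff the sum of the input bits is congruent to k modulo 3."""
    assert k in (0, 1, 2)
    return int(sum(bits) % 3 == k)

# Example with n = 5 inputs: three 1-bits, and 3 mod 3 == 0.
print(k_mod_3([1, 0, 1, 1, 0], k=0))  # 1
print(k_mod_3([1, 0, 1, 1, 0], k=1))  # 0
```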

Related Content

The problem of finding a maximum $2$-matching without short cycles has received significant attention due to its relevance to the Hamilton cycle problem. This problem is generalized to finding a maximum $t$-matching which excludes specified complete $t$-partite subgraphs, where $t$ is a fixed positive integer. The polynomial solvability of this generalized problem remains an open question. In this paper, we present polynomial-time algorithms for the following two cases of this problem: in the first case, the forbidden complete $t$-partite subgraphs are edge-disjoint; in the second case, the maximum degree of the input graph is at most $2t-1$. Our result for the first case extends the previous work of Nam (1994), which showed the polynomial solvability of finding a maximum $2$-matching without cycles of length four when the forbidden cycles of length four are vertex-disjoint. The second result expands upon the works of B\'{e}rczi and V\'{e}gh (2010) and Kobayashi and Yin (2012), which focused on graphs with maximum degree at most $t+1$. Our algorithms are obtained by exploiting the discrete structure of restricted $t$-matchings and by employing an algorithm for the Boolean edge-CSP.
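
As a concrete instance of the definitions, the sketch below checks the special case $t=2$ with forbidden $K_{2,2}$ subgraphs, i.e. whether a candidate edge set is a $2$-matching (every vertex has degree at most 2) containing no cycle of length four. It only verifies a candidate; it is not the paper's algorithm, and all identifiers are ours.

```python
from itertools import combinations

def is_c4_free_2_matching(vertices, edges):
    """Is `edges` a 2-matching on `vertices` containing no 4-cycle (K_{2,2})?"""
    deg = {v: 0 for v in vertices}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    if any(d > 2 for d in deg.values()):
        return False                      # some vertex exceeds degree 2
    chosen = {frozenset(e) for e in edges}
    for a, b, c, d in combinations(vertices, 4):
        # The three distinct 4-cycles on {a, b, c, d}.
        for order in ((a, b, c, d), (a, b, d, c), (a, c, b, d)):
            ring = [frozenset((order[i], order[(i + 1) % 4])) for i in range(4)]
            if all(e in chosen for e in ring):
                return False              # forbidden 4-cycle found
    return True

V = [0, 1, 2, 3]
print(is_c4_free_2_matching(V, [(0, 1), (1, 2), (2, 3)]))          # True
print(is_c4_free_2_matching(V, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # False
```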

A \emph{geometric graph} is a graph whose vertex set is a set of points in general position in the plane and whose edges are straight line segments joining these points. We show that for every integer $k \ge 2$, there exists a constant $c>0$ such that the following holds. The edges of every dense geometric graph can be colored with $k$ colors such that the number of pairs of edges of the same color that cross is at most $(1/k-c)$ times the total number of pairs of edges that cross. The case when $k=2$ and the graph is a complete geometric graph was proved by Aichholzer et al. [\emph{GD} 2019].
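
The statement is easy to probe empirically. The sketch below (our own illustration, not the paper's coloring) builds a complete geometric graph on random points, colors its edges uniformly at random, and measures the fraction of crossing pairs that are monochromatic; a uniformly random coloring achieves about $1/k$ in expectation, whereas the theorem guarantees a coloring that beats $1/k$ by a constant.

```python
import random
from itertools import combinations

def crosses(p, q, r, s):
    """Do segments pq and rs properly cross? (Assumes general position.)"""
    def orient(a, b, c):
        return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])
    return (orient(p, q, r) * orient(p, q, s) < 0 and
            orient(r, s, p) * orient(r, s, q) < 0)

random.seed(0)
k = 2
pts = [(random.random(), random.random()) for _ in range(12)]
edges = list(combinations(range(len(pts)), 2))
color = {e: random.randrange(k) for e in edges}

total = mono = 0
for e, f in combinations(edges, 2):
    if len({*e, *f}) == 4 and crosses(pts[e[0]], pts[e[1]], pts[f[0]], pts[f[1]]):
        total += 1
        mono += color[e] == color[f]
print(mono / total)  # close to 1/k for a uniformly random coloring
```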

Differentially private (DP) machine learning algorithms involve many sources of randomness, such as random initialization, random batch subsampling, and shuffling. However, such randomness is difficult to account for when proving differential privacy bounds, because it induces mixture distributions for the algorithm's output that are hard to analyze. This paper focuses on improving privacy bounds for shuffling models and for one-iteration differentially private gradient descent (DP-GD) with random initialization using $f$-DP. We derive a closed-form expression of the trade-off function for shuffling models that outperforms the most up-to-date results based on $(\epsilon,\delta)$-DP. Moreover, we investigate the effects of random initialization on the privacy of one-iteration DP-GD. Our numerical computations of the trade-off function indicate that random initialization can enhance the privacy of DP-GD. Our analysis of $f$-DP guarantees for these mixture mechanisms relies on an inequality for trade-off functions introduced in this paper, which implies the joint convexity of $F$-divergences. Finally, we study an $f$-DP analog of the advanced joint convexity of the hockey-stick divergence related to $(\epsilon,\delta)$-DP and apply it to analyze the privacy of mixture mechanisms.
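
For readers less familiar with $f$-DP, a trade-off function maps a type-I error budget to the least achievable type-II error when distinguishing neighboring datasets. The snippet below evaluates the classical Gaussian trade-off function $G_\mu(\alpha)=\Phi(\Phi^{-1}(1-\alpha)-\mu)$ from the $f$-DP literature as background; it is not the paper's closed-form expression for shuffling models.

```python
# Background f-DP example: the Gaussian trade-off function characterizing
# mu-GDP (Dong, Roth, and Su). Not the paper's shuffling-model bound.
from scipy.stats import norm

def gaussian_tradeoff(alpha, mu):
    """Smallest achievable type-II error at type-I error `alpha` under mu-GDP."""
    return norm.cdf(norm.ppf(1 - alpha) - mu)

for alpha in (0.01, 0.05, 0.10):
    print(f"alpha={alpha:.2f}  beta>={gaussian_tradeoff(alpha, mu=1.0):.4f}")
```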

Kuiper's $V_n$ statistic, a measure of the difference between an ideal distribution and an empirical distribution, is of great significance in goodness-of-fit testing. However, Kuiper's formulae for computing the cumulative distribution function, the false positive probability, and the upper tail quantile of $V_n$ cannot be applied to the case of small sample size $n$, since the approximation error is $\mathcal{O}(n^{-1})$. In this work, our contributions are threefold: firstly, the approximation error is reduced to $\mathcal{O}(n^{-(k+1)/2})$, where $k$ is the expansion order, via a \textit{high order expansion} (HOE) of the exponent of the differential operator; secondly, a novel high order formula with approximation error $\mathcal{O}(n^{-3})$ is obtained through extensive calculation; thirdly, fixed-point algorithms are designed for solving the Kuiper pair of critical values and upper tail quantiles based on this formula. The high order expansion method for Kuiper's $V_n$ statistic is applicable in applications with more than $5$ data samples. The principles, algorithms, and code of the method make it attractive for goodness-of-fit testing.
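
For reference, the statistic itself is straightforward to compute: $V_n = D_n^+ + D_n^-$, the sum of the maximal one-sided deviations between the empirical and hypothesized CDFs. The sketch below implements only this standard definition, not the high order expansion.

```python
import numpy as np
from scipy.stats import norm

def kuiper_statistic(sample, cdf):
    """Kuiper's V_n = D_n^+ + D_n^- of `sample` against a hypothesized CDF."""
    x = np.sort(np.asarray(sample))
    n = len(x)
    u = cdf(x)                         # F(x_(1)), ..., F(x_(n))
    i = np.arange(1, n + 1)
    d_plus = np.max(i / n - u)         # empirical CDF above the model
    d_minus = np.max(u - (i - 1) / n)  # empirical CDF below the model
    return d_plus + d_minus

rng = np.random.default_rng(0)
print(kuiper_statistic(rng.normal(size=30), norm.cdf))
```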

We present an alternating direction method of multipliers (ADMM) for a generic overlapping group lasso problem, where the groups may overlap in an arbitrary way. In addition, we prove lower and upper bounds for both the $\ell_1$ sparse group lasso problem and the $\ell_0$ sparse group lasso problem, and we propose algorithms for computing these bounds.
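
As a sketch of the general idea (not necessarily the paper's exact splitting), the classical way to handle arbitrary overlaps in ADMM is variable duplication: each group receives its own copy of the coordinates it covers, constrained to agree with the shared variable. A minimal NumPy version for $\min_x \tfrac12\|Ax-b\|_2^2 + \lambda\sum_g \|x_{G_g}\|_2$ follows; all names are ours.

```python
import numpy as np

def overlapping_group_lasso_admm(A, b, groups, lam, rho=1.0, iters=300):
    """ADMM with one duplicated copy z_g (and scaled dual u_g) per group."""
    n = A.shape[1]
    counts = np.zeros(n)                 # how many groups cover each coordinate
    for g in groups:
        counts[g] += 1.0
    z = [np.zeros(len(g)) for g in groups]
    u = [np.zeros(len(g)) for g in groups]
    H = A.T @ A + rho * np.diag(counts)  # fixed system for the x-update
    Atb = A.T @ b
    for _ in range(iters):
        rhs = Atb.copy()                 # x-update: quadratic solve
        for g, zg, ug in zip(groups, z, u):
            rhs[g] += rho * (zg - ug)
        x = np.linalg.solve(H, rhs)
        for j, g in enumerate(groups):   # z-update: group soft-threshold
            v = x[g] + u[j]
            nv = np.linalg.norm(v)
            z[j] = max(0.0, 1.0 - lam / (rho * nv)) * v if nv > 0 else v
            u[j] += x[g] - z[j]          # scaled dual update
    return x

rng = np.random.default_rng(0)
A, b = rng.normal(size=(40, 10)), rng.normal(size=40)
groups = [np.arange(0, 4), np.arange(2, 6), np.arange(6, 10)]  # overlapping
print(np.round(overlapping_group_lasso_admm(A, b, groups, lam=2.0), 3))
```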

Target similarity tuning (TST) is a method for selecting relevant examples for natural language (NL) to code generation with large language models (LLMs) in order to improve performance. Its goal is to adapt a sentence embedding model so that the similarity between two NL inputs matches the similarity between their associated code outputs. In this paper, we propose several methods to apply and improve TST in the real world. First, we replace the sentence transformer with embeddings from a larger model, which reduces sensitivity to the language distribution and thus provides more flexibility in the synthetic generation of examples, and we train a tiny model that transforms these embeddings into a space where embedding similarity matches code similarity, which allows the model to remain a black box and only requires a few matrix multiplications at inference time. Second, we show how to efficiently select a smaller number of training examples to train the TST model. Third, we introduce a ranking-based evaluation for TST that does not require end-to-end code generation experiments, which can be expensive to perform.
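
The first idea lends itself to a compact sketch: keep the black-box embeddings frozen and fit only a small map $W$ so that cosine similarity after the map tracks code similarity. Everything below (shapes, the squared-error training signal, the random placeholders) is illustrative, not the authors' exact setup.

```python
import torch
import torch.nn.functional as F

n, d, out = 64, 256, 64
e = torch.randn(n, d)                 # frozen black-box NL embeddings
S = torch.rand(n, n)                  # placeholder code-similarity targets
S = (S + S.T) / 2

W = torch.nn.Linear(d, out, bias=False)   # the "tiny model": one matrix
opt = torch.optim.Adam(W.parameters(), lr=1e-3)
for step in range(500):
    z = F.normalize(W(e), dim=1)
    pred = z @ z.T                    # cosine similarities after the map
    loss = ((pred - S) ** 2).mean()   # push them toward code similarity
    opt.zero_grad(); loss.backward(); opt.step()

# At inference, ranking candidate examples costs one matrix multiplication.
```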

The study of robustness has received much attention due to its inevitability in data-driven settings where many systems face uncertainty. One such example of concern is Bayesian Optimization (BO), where uncertainty is multi-faceted, yet only a limited number of works are dedicated to this direction. In particular, the work of Kirschner et al. (2020) bridges the existing literature on Distributionally Robust Optimization (DRO) by casting the BO problem through the lens of DRO. While this work is pioneering, it admittedly suffers from various practical shortcomings, such as finite-context assumptions, leaving open the main question: can one devise a computationally tractable algorithm for solving this DRO-BO problem? In this work, we tackle this question with a large degree of generality by considering robustness against data shift measured by $\varphi$-divergences, which subsume many popular choices, such as the $\chi^2$-divergence, Total Variation, and the standard Kullback-Leibler (KL) divergence. We show that the DRO-BO problem in this setting is equivalent to a finite-dimensional optimization problem which, even in the continuous-context setting, can be easily implemented with provable sublinear regret bounds. We then show experimentally that our method surpasses existing methods, corroborating the theoretical results.
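
To illustrate the kind of finite-dimensional reduction at play, consider the KL member of the family: by standard $\varphi$-divergence duality, $\sup_{\mathrm{KL}(Q\|P)\le\varepsilon}\mathbb{E}_Q[f] = \inf_{\lambda>0}\,\lambda\varepsilon + \lambda\log\mathbb{E}_P[e^{f/\lambda}]$, a one-dimensional problem. The sketch below evaluates this dual on a toy discrete distribution; it is our illustration of the classical duality, not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def kl_worst_case_mean(f_vals, p, eps):
    """Worst-case E_Q[f] over the KL ball {Q : KL(Q||P) <= eps} via its dual."""
    def dual(log_lam):                 # optimize lambda > 0 in log-space
        lam = np.exp(log_lam)
        return lam * eps + lam * np.log(np.dot(p, np.exp(f_vals / lam)))
    res = minimize_scalar(dual, bounds=(-4, 4), method="bounded")
    return dual(res.x)

p = np.full(4, 0.25)
f_vals = np.array([0.1, 0.4, 0.2, 0.9])
print("nominal mean:", float(p @ f_vals))
print("robust mean :", kl_worst_case_mean(f_vals, p, eps=0.1))
```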

Differential privacy guarantees allow the results of a statistical analysis involving sensitive data to be released without compromising the privacy of any individual taking part. Achieving such guarantees generally requires the injection of noise, either directly into parameter estimates or into the estimation process. Instead of artificially introducing perturbations, sampling from Bayesian posterior distributions has been shown to be a special case of the exponential mechanism, producing consistent and efficient private estimates without altering the data generative process. The application of current approaches has, however, been limited by their strong bounding assumptions, which do not hold even for basic models such as simple linear regressors. To ameliorate this, we propose $\beta$D-Bayes, a posterior sampling scheme from a generalised posterior targeting the minimisation of the $\beta$-divergence between the model and the data generating process. This provides private estimation that is generally applicable without requiring changes to the underlying model and consistently learns the data generating parameter. We show that $\beta$D-Bayes produces more precise estimation for the same privacy guarantees, and further facilitates differentially private estimation via posterior sampling for complex classifiers and continuous regression models such as neural networks for the first time.
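
To give a flavor of the $\beta$D posterior, the sketch below evaluates the density-power ($\beta$-divergence) loss for a Gaussian location model, where the integral term has a closed form, and computes the generalised posterior on a grid with $\sigma$ known. It is a toy illustration of the loss being targeted, not the paper's private sampling scheme; all concrete values are ours.

```python
import numpy as np
from scipy.stats import norm

def beta_loss(x, mu, sigma, beta):
    """Density-power loss: -(1/beta) f(x)^beta + (1/(1+beta)) * int f^{1+beta}."""
    integral = (2 * np.pi * sigma**2) ** (-beta / 2) / np.sqrt(1 + beta)
    return -norm.pdf(x, mu, sigma) ** beta / beta + integral / (1 + beta)

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 95), np.full(5, 8.0)])  # 5% outliers

mus = np.linspace(-1.0, 2.0, 400)
log_post = np.array([norm.logpdf(m, 0.0, 10.0)        # N(0, 10^2) prior
                     - beta_loss(x, m, 1.0, beta=0.5).sum()
                     for m in mus])
print(mus[np.argmax(log_post)])  # stays near 0 despite the outliers
```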

In the online packet scheduling problem with deadlines (PacketSchD, for short), the goal is to schedule transmissions of packets that arrive over time in a network switch and need to be sent across a link. Each packet has a deadline, representing its urgency, and a non-negative weight, representing its priority. Only one packet can be transmitted in any time slot, so if the system is overloaded, some packets will inevitably miss their deadlines and be dropped. In this scenario, the natural objective is to compute a transmission schedule that maximizes the total weight of packets that are successfully transmitted. The problem is inherently online, with scheduling decisions made without knowledge of future packet arrivals. The central problem concerning PacketSchD, which has been a subject of intensive study since 2001, is to determine the optimal competitive ratio of online algorithms, namely the worst-case ratio between the optimum total weight of a schedule (computed by an offline algorithm) and the weight of a schedule computed by a (deterministic) online algorithm. We solve this open problem by presenting a $\phi$-competitive online algorithm for PacketSchD (where $\phi\approx 1.618$ is the golden ratio), matching the previously established lower bound.
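
To fix the model, the sketch below simulates the setting with the obvious greedy rule: in every slot, transmit the heaviest pending packet that has not expired. This is only a baseline illustrating the online model; the $\phi$-competitive algorithm of the paper is substantially more involved.

```python
import heapq

def greedy_schedule(packets, horizon):
    """packets: (arrival, deadline, weight) triples; one transmission per slot."""
    by_arrival = sorted(packets)
    pending, i, total = [], 0, 0.0
    for t in range(horizon):
        while i < len(by_arrival) and by_arrival[i][0] <= t:
            a, d, w = by_arrival[i]
            heapq.heappush(pending, (-w, d))   # max-heap on weight
            i += 1
        while pending:
            neg_w, d = heapq.heappop(pending)
            if d >= t:                         # still alive: transmit it
                total += -neg_w
                break
            # otherwise the packet has already expired and is dropped
    return total

pkts = [(0, 1, 1.0), (0, 0, 0.6), (1, 1, 0.9)]
print(greedy_schedule(pkts, horizon=2))  # greedy collects 1.9 here
```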

Click-through rate (CTR) prediction plays a critical role in recommender systems and online advertising. The data used in these applications are multi-field categorical data, where each feature belongs to one field. Field information has been shown to be important, and several works consider fields in their models. In this paper, we propose a novel approach to model the field information effectively and efficiently. The proposed approach is a direct improvement of FwFM and is named Field-matrixed Factorization Machines (FmFM, or $FM^2$). We also propose a new interpretation of FM and FwFM within the FmFM framework and compare it with FFM. Besides pruning the cross terms, our model supports field-specific variable dimensions of embedding vectors, which acts as soft pruning. We also propose an efficient way to minimize the dimensions while keeping the model performance. The FmFM model can be optimized further by caching the intermediate vectors, so that it takes only thousands of floating-point operations (FLOPs) to make a prediction. Our experimental results show that it can outperform FFM, which is more complex. The FmFM model's performance is also comparable to that of DNN models, which require many more FLOPs at runtime.
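
As we read the abstract, the core FmFM interaction replaces FwFM's per-field-pair scalar with a per-field-pair matrix: each cross term becomes $\langle v_i M_{F(i),F(j)}, v_j\rangle$. The sketch below scores one example with fixed embedding dimensions for brevity (the model also allows field-specific dimensions, making $M$ rectangular); linear terms and feature lookups are omitted, and all values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n_fields, dim = 3, 4
v = rng.normal(size=(n_fields, dim))                 # one active embedding per field
M = rng.normal(size=(n_fields, n_fields, dim, dim))  # per-field-pair matrices

score = 0.0
for i in range(n_fields):
    for j in range(i + 1, n_fields):
        score += (v[i] @ M[i, j]) @ v[j]             # <v_i M_{F(i),F(j)}, v_j>
print(score)
```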
