
We provide the first $\mathit{constant}$-$\mathit{round}$ construction of post-quantum non-malleable commitments under the minimal assumption that $\mathit{post}$-$\mathit{quantum}$ $\mathit{one}$-$\mathit{way}$ $\mathit{functions}$ exist. We achieve the standard notion of non-malleability with respect to commitments. Prior constructions required $\Omega(\log^*\lambda)$ rounds under the same assumption. We achieve our results through a new technique for constant-round non-malleable commitments which is easier to use in the post-quantum setting. The technique also yields an almost elementary proof of security for constant-round non-malleable commitments in the classical setting, which may be of independent interest. When combined with existing work, our results yield the first constant-round quantum-secure multiparty computation for both classical and quantum functionalities $\mathit{in}$ $\mathit{the}$ $\mathit{plain}$ $\mathit{model}$, under the $\mathit{polynomial}$ hardness of quantum fully-homomorphic encryption and quantum learning with errors.

Related Content

For a permutation $\pi: [K]\rightarrow [K]$, a sequence $f: \{1,2,\cdots, n\}\rightarrow \mathbb R$ contains a $\pi$-pattern of size $K$ if there is a sequence of indices $(i_1, i_2, \cdots, i_K)$ with $i_1<i_2<\cdots<i_K$ such that $f(i_a)<f(i_b)$ whenever $\pi(a)<\pi(b)$, for $a,b\in [K]$. Otherwise, $f$ is referred to as $\pi$-free. The special case $\pi = (1,2,\cdots, K)$ is referred to as the monotone pattern. \cite{newman2017testing} initiated the study of testing $\pi$-freeness with one-sided error, focusing on two specific problems: testing freeness of the monotone pattern and of the $(1,3,2)$ pattern. For testing the monotone pattern $(1,2,\cdots,K)$, \cite{ben2019finding} improved the $(\log n)^{O(K^2)}$ non-adaptive query complexity of \cite{newman2017testing} to $O((\log n)^{\lfloor \log_{2} K\rfloor})$, and \cite{ben2019optimal} subsequently proposed an adaptive algorithm with $O(\log n)$ query complexity. However, no progress had yet been made on testing $(1,3,2)$-freeness. In this work, we present an adaptive algorithm for testing $(1,3,2)$-freeness whose query complexity is $O(\epsilon^{-2}\log^4 n)$, significantly improving over the $O(\epsilon^{-7}\log^{26}n)$-query adaptive algorithm of \cite{newman2017testing}. This improvement is achieved mainly by identifying a new structure embedded in the patterns.
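To make the definition concrete, here is a minimal brute-force checker for $\pi$-pattern containment (the function names are ours). It enumerates all $\binom{n}{K}$ index tuples, so it is exponentially slower than the sublinear-query testers discussed above; it only illustrates what those testers are deciding.

```python
from itertools import combinations

def contains_pattern(f, pi):
    """Check by brute force whether the sequence f contains the pattern pi.

    pi is a permutation of 1..K given as a tuple; we search for indices
    i_1 < ... < i_K whose values are ordered exactly as pi prescribes.
    Exponential-time illustration of the definition only.
    """
    K = len(pi)
    for idx in combinations(range(len(f)), K):
        vals = [f[i] for i in idx]
        if all((vals[a] < vals[b]) == (pi[a] < pi[b])
               for a in range(K) for b in range(K) if a != b):
            return True
    return False

# (1,3,2)-pattern: a rise followed by a value strictly in between.
print(contains_pattern([1, 5, 3], (1, 3, 2)))  # True
print(contains_pattern([3, 2, 1], (1, 3, 2)))  # False: decreasing, hence pi-free
```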

We embark on a systematic study of the $(k+1)$-th derivative of $x^{k-r}H(x^r)$, where $H(x):=-x\log x-(1-x)\log(1-x)$ is the binary entropy and $k>r\geq 1$ are integers. Our motivation is the conjectural entropy inequality $\alpha_k H(x^k)\geq x^{k-1}H(x)$, where $0<\alpha_k<1$ is given by a functional equation. The $k=2$ case was the key technical tool driving recent breakthroughs on the union-closed sets conjecture. We express $\frac{d^{k+1}}{dx^{k+1}}x^{k-r}H(x^r)$ as a rational function, an infinite series, and a sum over generalized Stirling numbers. This reduces the proof of the entropy inequality for real $k$ to showing that an associated polynomial has only two real roots in the interval $(0,1)$, and in particular lets us prove the inequality for fractional exponents such as $k=3/2$. The proof suggests a new framework for proving tight inequalities for sums of polynomials times logarithms of polynomials, which converts such an inequality into a statement about the real roots of a simpler associated polynomial.
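The derivative in question can be inspected directly with a computer algebra system. The following sympy sketch (the variable names and the sample choice $k=2$, $r=1$ are ours) computes $\frac{d^{k+1}}{dx^{k+1}}x^{k-r}H(x^r)$ for small integer parameters: after $k+1$ differentiations the logarithms cancel, leaving the rational function whose numerator roots the abstract refers to.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
H = -x*sp.log(x) - (1 - x)*sp.log(1 - x)   # binary entropy

def derivative(k, r):
    """(k+1)-th derivative of x**(k-r) * H(x**r), simplified."""
    expr = x**(k - r) * H.subs(x, x**r)
    return sp.simplify(sp.diff(expr, x, k + 1))

# k = 2, r = 1: third derivative of x*H(x); the logarithms cancel and a
# rational function remains, consistent with the claim above.
d = derivative(2, 1)
print(d)
print(sp.solve(sp.Eq(sp.numer(sp.together(d)), 0), x))  # real roots of the numerator
```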

Algorithmic meta-theorems state that problems that can be formalized in a fixed logic can be solved efficiently on classes of structures with certain properties. A prominent example is Courcelle's Theorem, which states that all problems expressible in monadic second-order logic can be solved efficiently on structures of small treewidth. Such theorems are usually proven by a generic algorithm for the model-checking problem of the given logic, which is often complex and rarely leads to highly efficient solutions. Alternatively, we can solve the model-checking problem by grounding the given logic to propositional logic, for which dedicated solvers are available. Such encodings will, however, usually not preserve the input's treewidth. This paper investigates whether all problems definable in monadic second-order logic can efficiently be encoded into SAT such that the input's treewidth bounds the treewidth of the resulting formula. We answer this in the affirmative and, hence, provide an alternative proof of Courcelle's Theorem. Our technique can naturally be extended: There are treewidth-aware reductions from the optimization version of Courcelle's Theorem to MaxSAT and from the counting version of the theorem to #SAT. By using encodings to SAT, we obtain, ignoring polynomial factors, the same running time for the model-checking problem as we would with dedicated algorithms. We complement our upper bounds with new lower bounds based on ETH; and we show that the block size of the input's formula and the treewidth of the input's structure are tightly linked. We also provide matching upper and lower bounds for a fragment of guarded MSO, only using SAT-based techniques.
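The paper's generic treewidth-aware encoding is not reproduced here, but a toy instance conveys the idea. In the following sketch (our own illustration, not the construction from the paper), the MSO-definable problem 3-COLORABILITY is encoded into CNF so that every clause touches only the variables of one vertex or of one edge; consequently the primal graph of the formula is the input graph with each vertex blown up into a 3-clique, and its treewidth is at most $3(\mathrm{tw}(G)+1)-1$.

```python
def three_coloring_cnf(n, edges):
    """CNF for 3-colorability of a graph on vertices 0..n-1.

    Variable var(v, c) means 'vertex v gets color c'.  Every clause mentions
    only the variables of a single vertex or of the two endpoints of an edge,
    so any tree decomposition of the graph yields one for the formula's
    primal graph with bags at most three times as large.
    """
    var = lambda v, c: 3 * v + c + 1          # DIMACS-style positive integers
    clauses = []
    for v in range(n):
        clauses.append([var(v, c) for c in range(3)])          # >= 1 color
        for c1 in range(3):
            for c2 in range(c1 + 1, 3):
                clauses.append([-var(v, c1), -var(v, c2)])     # <= 1 color
    for (u, v) in edges:
        for c in range(3):
            clauses.append([-var(u, c), -var(v, c)])           # proper coloring
    return clauses

# Triangle: 3-colorable, treewidth 2; the CNF's primal treewidth stays O(1).
print(three_coloring_cnf(3, [(0, 1), (1, 2), (0, 2)]))
```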

A new $H(\textrm{divdiv})$-conforming finite element is presented, which avoids the need for super-smoothness by redistributing the degrees of freedom to edges and faces. This leads to a hybridizable mixed method with superconvergence for the biharmonic equation. Moreover, new finite element divdiv complexes are established. Finally, new weak Galerkin and $C^0$ discontinuous Galerkin methods for the biharmonic equation are derived.

Physics-Informed Neural Networks (PINNs) have proven effective in solving partial differential equations (PDEs), especially when some data are available, by seamlessly blending data and physics. However, extending PINNs to high-dimensional and even high-order PDEs encounters significant challenges due to the computational cost of automatic differentiation in the residual loss. Herein, we address the limitations of PINNs in handling high-dimensional and high-order PDEs by introducing Hutchinson Trace Estimation (HTE). Starting with the second-order high-dimensional PDEs ubiquitous in scientific computing, HTE replaces the calculation of the entire Hessian matrix with Hessian-vector products (HVPs). This approach alleviates the computational bottleneck via Taylor-mode automatic differentiation and significantly reduces memory consumption, from the full Hessian matrix to a single HVP. We further show HTE's convergence to the original PINN loss and its unbiasedness under specific conditions. Comparisons with Stochastic Dimension Gradient Descent (SDGD) highlight the distinct advantages of HTE, particularly in scenarios with significant variance among dimensions. We then extend HTE to higher-order and higher-dimensional PDEs, specifically the biharmonic equation. By employing tensor-vector products (TVPs), HTE efficiently computes the colossal tensor associated with the fourth-order high-dimensional biharmonic equation, saving memory and enabling rapid computation. The effectiveness of HTE is illustrated through experimental setups, demonstrating convergence rates comparable to SDGD under memory and speed constraints. Additionally, HTE proves valuable in accelerating the Gradient-Enhanced PINN (gPINN) variant as well as the solution of the biharmonic equation. Overall, HTE opens up a new capability in scientific machine learning for tackling high-order and high-dimensional PDEs.
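For intuition, here is a minimal JAX sketch of Hutchinson trace estimation for the Laplacian, i.e., the trace of the Hessian (function names and the sample count are ours; the paper's implementation, e.g. its use of Taylor-mode AD, may differ). The estimator $\mathrm{tr}(H)\approx \frac{1}{S}\sum_s v_s^\top H v_s$ with Rademacher vectors $v_s$ needs only Hessian-vector products, never the full $d\times d$ Hessian.

```python
import jax
import jax.numpy as jnp

def hutchinson_laplacian(u, x, key, num_samples=32):
    """Unbiased estimate of tr(Hess u(x)), i.e. the Laplacian of u at x.

    tr(H) = E_v[v^T H v] for Rademacher v; each v^T H v needs only one
    Hessian-vector product (forward-over-reverse AD), never the full Hessian.
    """
    grad_u = jax.grad(u)
    hvp = lambda v: jax.jvp(grad_u, (x,), (v,))[1]      # H @ v
    vs = jax.random.rademacher(key, (num_samples, x.shape[0]), dtype=x.dtype)
    return jnp.mean(jax.vmap(lambda v: v @ hvp(v))(vs))

# Sanity check in d = 1000 dimensions: Laplacian of sum(sin(x)) is -sum(sin(x)).
key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (1000,))
u = lambda x: jnp.sum(jnp.sin(x))
print(hutchinson_laplacian(u, x, key), -jnp.sum(jnp.sin(x)))
```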

We consider linear codes over a finite field of odd characteristic derived from determinantal varieties, namely those defined by symmetric matrices of bounded rank. A formula for the weight of a codeword is derived. Using this formula, we compute the minimum distance of the codes corresponding to matrices whose rank is upper-bounded by any fixed even integer; a conjecture is proposed for the cases where the upper bound is odd. At the end of the article, tables of the weights of these codes are given for spaces of symmetric matrices up to order $5$. We also correct typographical errors in Proposition 1.1/3.1 of [3] and in its last table, and we rewrite Corollary 4.9 of that paper along with its usage in the proof of Proposition 4.10.
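As a small illustration of the construction as we read it (the precise evaluation pairing used in the paper may differ), the following brute force enumerates the variety of $2\times 2$ symmetric matrices over $\mathbb{F}_3$ of rank at most $1$ and computes the weight of every nonzero linear functional evaluated at its points.

```python
import itertools

q = 3  # F_3; a 2x2 symmetric matrix (a, b; b, c) has rank <= 1 iff det = 0 mod q
points = [(a, b, c) for a, b, c in itertools.product(range(q), repeat=3)
          if (a * c - b * b) % q == 0]

weights = set()
for (fa, fb, fc) in itertools.product(range(q), repeat=3):
    if (fa, fb, fc) == (0, 0, 0):
        continue
    # Codeword: evaluate the linear functional (fa, fb, fc) at every point
    # of the variety; its weight is the number of nonzero coordinates.
    w = sum((fa * a + fb * b + fc * c) % q != 0 for (a, b, c) in points)
    weights.add(w)

print(sorted(weights), "minimum distance:", min(weights))
```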

We explore the concept of separating systems of vertex sets of graphs. A separating system of a set $X$ is a collection of subsets of $X$ such that for any pair of distinct elements in $X$, there exists a set in the separating system that contains exactly one of the two elements. A separating system of the vertex set of a graph $G$ is called a vertex-separating path (tree) system of $G$ if the elements of the separating system are paths (trees) in the graph $G$. In this paper, we focus on the size of the smallest vertex-separating path (tree) system for different types of graphs, including trees, grids, and maximal outerplanar graphs.
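A short Python check (ours, for illustration) makes the separation condition concrete: a family of subsets separates $X$ exactly when every pair of distinct elements is split by some member of the family.

```python
from itertools import combinations

def is_separating(X, system):
    """Check that `system` (a family of subsets of X) separates X:
    every pair of distinct elements must be split by some set that
    contains exactly one of the two."""
    return all(any((u in S) != (v in S) for S in system)
               for u, v in combinations(X, 2))

# Path 1-2-3-4: its subpaths {1,2} and {2,3} already split all six
# pairs of vertices, so two paths form a vertex-separating path system.
print(is_separating({1, 2, 3, 4}, [{1, 2}, {2, 3}]))  # True
```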

We study the problem of contextual feature selection, where the goal is to learn a predictive function while identifying subsets of informative features conditioned on specific contexts. Towards this goal, we generalize the recently proposed stochastic gates (STG) of Yamada et al. [2020] by modeling the probabilistic gates as conditional Bernoulli variables whose parameters are predicted from the contextual variables. Our new scheme, termed conditional-STG (c-STG), comprises two networks: a hypernetwork that maps contextual variables to probabilistic feature-selection parameters, and a prediction network that maps the selected features to the response variable. Training the two networks simultaneously ensures the comprehensive incorporation of context and feature selection within a unified model. We provide a theoretical analysis examining several properties of the proposed framework. Importantly, our model makes feature selection more flexible and adaptive, and can therefore better capture the nuances and variations in the data. We apply c-STG to simulated and real-world datasets, including healthcare, housing, and neuroscience data, and demonstrate that it effectively selects contextually meaningful features, thereby enhancing predictive performance and interpretability.
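A minimal PyTorch sketch of the two-network scheme follows, assuming the hard-sigmoid Gaussian gates of STG (Yamada et al. [2020]); the layer widths, the $0.5$ shift, and the regularizer form are illustrative choices rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ConditionalSTG(nn.Module):
    """Sketch of c-STG: a hypernetwork maps the context to the means of
    per-feature Gaussian-perturbed gates (as in STG); a prediction network
    consumes the gated features."""
    def __init__(self, d_feat, d_ctx, sigma=0.5):
        super().__init__()
        self.sigma = sigma
        self.hyper = nn.Sequential(nn.Linear(d_ctx, 64), nn.ReLU(),
                                   nn.Linear(64, d_feat))
        self.pred = nn.Sequential(nn.Linear(d_feat, 64), nn.ReLU(),
                                  nn.Linear(64, 1))

    def forward(self, x, ctx):
        mu = self.hyper(ctx)                           # context -> gate means
        eps = self.sigma * torch.randn_like(mu) if self.training else 0.0
        z = torch.clamp(mu + eps + 0.5, 0.0, 1.0)      # hard-sigmoid gate in [0, 1]
        # Expected number of open gates, P(gate > 0), via the Gaussian CDF.
        reg = torch.distributions.Normal(0., 1.).cdf((mu + 0.5) / self.sigma).mean()
        return self.pred(x * z), reg
```

A training step would add the returned regularizer, scaled by a sparsity coefficient, to the prediction loss, driving context-dependent gates toward zero.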

Natural data observed in $\mathbb{R}^n$ is often constrained to an $m$-dimensional manifold $\mathcal{M}$, where $m < n$. This work focuses on building theoretically principled generative models for such data. Current generative models learn $\mathcal{M}$ by mapping an $m$-dimensional latent variable through a neural network $f_\theta: \mathbb{R}^m \to \mathbb{R}^n$. These procedures, which we call pushforward models, suffer from a fundamental limitation: manifolds cannot in general be represented with a single parameterization, so attempts to do so lead either to computational instability or to an inability to learn probability densities within the manifold. To remedy this problem, we propose to model $\mathcal{M}$ as a neural implicit manifold: the set of zeros of a neural network. We then learn the probability density within $\mathcal{M}$ with a constrained energy-based model, which employs a constrained variant of Langevin dynamics to train and to sample from the learned manifold. In experiments on synthetic and natural data, we show that our model can learn manifold-supported distributions with complex topologies more accurately than pushforward models.
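The following dependency-light sketch (ours; the paper's constrained sampler may differ in detail) illustrates one way to realize constrained Langevin dynamics on an implicit manifold: take an unconstrained Langevin step for the energy, then project back onto the zero set of $F$ by descending on $\|F(x)\|^2$.

```python
import numpy as np

def constrained_langevin(E, F, x, steps=1000, eta=1e-3, proj_iters=20):
    """Sketch: Langevin dynamics restricted to the implicit manifold {x : F(x) = 0}.

    Each iteration takes an unconstrained Langevin step for the energy E and
    projects back onto the zero set of F by gradient descent on F(x)**2
    (finite-difference gradients keep the sketch dependency-free)."""
    grad = lambda g, x, h=1e-5: np.array(
        [(g(x + h * e) - g(x - h * e)) / (2 * h) for e in np.eye(len(x))])
    for _ in range(steps):
        x = x - eta * grad(E, x) + np.sqrt(2 * eta) * np.random.randn(len(x))
        for _ in range(proj_iters):                  # project: minimize F(x)**2
            x = x - 0.2 * F(x) * grad(F, x)
    return x

# Unit circle as a stand-in implicit manifold, with a simple energy.
F = lambda x: x @ x - 1.0
E = lambda x: x[0]                                   # biases samples toward x0 = -1
print(constrained_langevin(E, F, np.array([1.0, 0.0])))
```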

Cold-start problems are a long-standing challenge for practical recommendation systems. Most existing recommendation algorithms rely on extensive observed data and are brittle in recommendation scenarios with few interactions. This paper addresses such problems using few-shot learning and meta-learning. Our approach is based on the insight that good generalization from a few examples relies on both a generic model initialization and an effective strategy for adapting this model to newly arising tasks. To accomplish this, we combine scenario-specific learning with model-agnostic sequential meta-learning and unify them into an integrated end-to-end framework, the Scenario-specific Sequential Meta learner ($s^2$Meta). Our meta-learner produces a generic initial model by aggregating contextual information from a variety of prediction tasks, and it adapts effectively to specific tasks by leveraging learning-to-learn knowledge. Extensive experiments on various real-world datasets demonstrate that our proposed model achieves significant gains over the state of the art for cold-start problems in online recommendation. The model is deployed in the Guess You Like section on the front page of Mobile Taobao.
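For readers unfamiliar with the "generic initialization plus fast adaptation" recipe, the following first-order MAML-style toy (entirely ours; the paper's $s^2$Meta is a sequential, scenario-specific recommender, not this linear model) shows the two nested loops in their simplest form.

```python
import numpy as np

def maml_linear(tasks, meta_steps=500, inner_lr=0.1, meta_lr=0.01, d=5):
    """Generic first-order MAML loop for least-squares tasks: learn an
    initialization w that performs well after one gradient step per task.
    Toy illustration of 'generic initialization + fast adaptation' only."""
    w = np.zeros(d)
    for _ in range(meta_steps):
        meta_grad = np.zeros(d)
        for X_s, y_s, X_q, y_q in tasks:             # support/query split
            g = 2 * X_s.T @ (X_s @ w - y_s) / len(y_s)
            w_task = w - inner_lr * g                # inner adaptation step
            meta_grad += 2 * X_q.T @ (X_q @ w_task - y_q) / len(y_q)
        w -= meta_lr * meta_grad / len(tasks)        # first-order meta-update
    return w

# Synthetic tasks: random linear regressions sharing the input distribution.
rng = np.random.default_rng(0)
tasks = []
for _ in range(8):
    w_true = rng.normal(size=5)
    X = rng.normal(size=(20, 5))
    y = X @ w_true
    tasks.append((X[:10], y[:10], X[10:], y[10:]))
print(maml_linear(tasks))
```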
