
Circuit knitting, a method for connecting quantum circuits across multiple processors to simulate nonlocal quantum operations, is a promising approach for distributed quantum computing. While various techniques have been developed for circuit knitting, we uncover fundamental limitations to the scalability of this technology. We prove that the sampling overhead of circuit knitting is exponentially lower bounded by the exact entanglement cost of the target bipartite dynamics, even for the asymptotic overhead in the parallel cut regime. Specifically, we prove that the regularized sampling overhead of any bipartite quantum channel, assisted with local operations and classical communication (LOCC), is lower bounded by the exponential of its exact entanglement cost under separability-preserving operations. Furthermore, we show that the regularized sampling overhead for simulating a general bipartite channel via LOCC is lower bounded by the $\kappa$-entanglement and the max-Rains information, providing efficiently computable benchmarks. Our work reveals a profound connection between virtual quantum information processing via quasi-probability decomposition and quantum Shannon theory, highlighting the critical role of entanglement in distributed quantum computing.
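A minimal sketch of where the exponential scaling comes from, under generic assumptions (the coefficients below are hypothetical, not the paper's construction): a quasi-probability decomposition writes a nonlocal channel as $\mathcal{N} = \sum_i c_i \mathcal{L}_i$ with local channels $\mathcal{L}_i$, Monte Carlo simulation pays a variance factor of $\gamma^2$ per cut with $\gamma = \sum_i |c_i|$, and overheads multiply across parallel cuts.

```python
import numpy as np

def sampling_overhead(coeffs: np.ndarray) -> float:
    """Sampling overhead gamma^2 of one quasi-probability decomposition,
    where gamma is the 1-norm of the decomposition coefficients."""
    gamma = np.abs(coeffs).sum()
    return gamma ** 2

# Hypothetical coefficients for cutting a single nonlocal gate; a CNOT
# cut is known to admit a decomposition with gamma = 3 (overhead 9).
coeffs = np.array([0.5, 0.5, 0.5, -0.5, 0.5, -0.5])  # sum |c_i| = 3
per_cut = sampling_overhead(coeffs)

for n_cuts in (1, 2, 4, 8):
    # Overheads multiply across parallel cuts: exponential in n_cuts.
    print(f"{n_cuts} cut(s) -> overhead {per_cut ** n_cuts:.0f}")
```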

Related content

We revisit the evaluation of logical formulas that allow both uninterpreted relations, constrained to be finite, and an interpreted vocabulary over an infinite domain. This formalism was denoted embedded finite model theory in the past. It is clear that the expressiveness and evaluation complexity of formulas of this type depend heavily on the infinite structure. If we embed in a wild structure like the integers with additive and multiplicative arithmetic, logic is extremely expressive and formulas are impossible to evaluate effectively. On the other hand, for some well-known decidable structures, the expressiveness and evaluation complexity are similar to the situation without the additional structure. The latter phenomenon was formalized via the notion of "Restricted Quantifier Collapse": adding quantification over the infinite structure does not add expressiveness. Beyond these two extremes little was known. In this work we show that the possibilities for expressiveness and complexity are much wider. We show that we can get almost any possible complexity of evaluation while staying within a decidable structure. We also show that in some decidable structures there is a disconnect between the expressiveness of the logic and its complexity, in that we cannot eliminate quantification over the structure, but this is not due to an ability to embed complex relational computation in the logic. We show failure of collapse for the theory of finite fields and the related theory of pseudo-finite fields, which involves coding computation in the logic. As a by-product, we establish new lower bounds for the complexity of decision procedures for several decidable theories of fields, including the theory of finite fields. In the process of investigating this landscape, we investigate several weakenings of collapse.
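To make the setting concrete, here is a small illustrative formula of our own (not taken from the paper), with a finite uninterpreted relation $R$ embedded in an infinite arithmetic structure:

```latex
% Illustrative embedded finite model formula: R is finite and
% uninterpreted, while + and the quantifier range come from the
% infinite structure (x and y are not confined to R's active domain).
\varphi(z) \;=\; \exists x\, \exists y\,
  \bigl( R(x) \wedge R(y) \wedge x + y = z \bigr)
% Restricted Quantifier Collapse holds when every such formula is
% equivalent to one whose quantifiers range only over the active
% domain (the elements occurring in R).
```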

We revisit the fundamental problem of learning with distribution shift, in which a learner is given labeled samples from a training distribution $D$ and unlabeled samples from a test distribution $D'$, and is asked to output a classifier with low test error. The standard approach in this setting is to bound the loss of a classifier in terms of some notion of distance between $D$ and $D'$. These distances, however, seem difficult to compute and do not lead to efficient algorithms. We depart from this paradigm and define a new model called testable learning with distribution shift, where we can obtain provably efficient algorithms for certifying the performance of a classifier on a test distribution. In this model, a learner outputs a classifier with low test error whenever samples from $D$ and $D'$ pass an associated test; moreover, the test must accept if the marginal of $D$ equals the marginal of $D'$. We give several positive results for learning well-studied concept classes such as halfspaces, intersections of halfspaces, and decision trees when the marginal of $D$ is Gaussian or uniform on $\{\pm 1\}^d$. Prior to our work, no efficient algorithms for these basic cases were known without strong assumptions on $D'$. For halfspaces in the realizable case (where there exists a halfspace consistent with both $D$ and $D'$), we combine a moment-matching approach with ideas from active learning to simulate an efficient oracle for estimating disagreement regions. To extend to the non-realizable setting, we apply recent work from testable (agnostic) learning. More generally, we prove that any function class with low-degree $L_2$-sandwiching polynomial approximators can be learned in our model. We apply constructions from the pseudorandomness literature to obtain the required approximators.
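A minimal sketch of the certification pattern, with an assumed test (coordinate-wise moment matching) rather than the paper's actual algorithms: the learner accepts, and only then certifies its classifier, when low-degree empirical moments of the two marginals agree; equal marginals make the test accept as required.

```python
import numpy as np

def moments_match(x_train, x_test, degree=2, tau=0.1):
    """Accept only if empirical coordinate moments up to `degree`
    agree within slack tau; equal marginals concentrate and pass."""
    for d in range(1, degree + 1):
        m_tr = (x_train ** d).mean(axis=0)
        m_te = (x_test ** d).mean(axis=0)
        if np.max(np.abs(m_tr - m_te)) > tau:
            return False  # reject: marginals look shifted
    return True  # accept: safe to certify the learned classifier

rng = np.random.default_rng(0)
x_tr = rng.standard_normal((5000, 10))
x_te = rng.standard_normal((5000, 10))   # same marginal: accept
x_shift = x_te + 0.5                     # shifted marginal: reject
print(moments_match(x_tr, x_te), moments_match(x_tr, x_shift))
```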

With the increasing penetration of machine learning applications in critical decision-making areas, calls for algorithmic fairness have become more prominent. Although there are various ways to improve algorithmic fairness through learning with fairness constraints, their performance does not generalize well to the test set. A fair algorithm with better generalizability is therefore needed. This paper proposes a novel adaptive reweighing method to eliminate the impact of distribution shifts between training and test data on model generalizability. Most previous reweighing methods assign a unified weight to each (sub)group. In contrast, our method granularly models the distance from the sample predictions to the decision boundary. Our adaptive reweighing method prioritizes samples closer to the decision boundary and assigns them higher weights to improve the generalizability of fair classifiers. Extensive experiments validate the generalizability of our adaptive priority reweighing method for accuracy and fairness measures (i.e., equal opportunity, equalized odds, and demographic parity) on tabular benchmarks. We also highlight the performance of our method in improving the fairness of language and vision models. The code is available at //github.com/che2198/APW.
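A minimal sketch of the weighting idea, with a hypothetical rule of our own (the repository's exact scheme may differ): weights peak for samples whose predictions sit near the decision boundary, rather than one uniform weight per (sub)group.

```python
import numpy as np

def priority_weights(margins: np.ndarray, alpha: float = 2.0) -> np.ndarray:
    """Weight ~ exp(-alpha * |margin|): largest at the decision boundary.
    `margins` are signed distances of predictions from the boundary,
    e.g. classifier logits; near-zero margin means an uncertain sample."""
    w = np.exp(-alpha * np.abs(margins))
    return w * len(w) / w.sum()   # normalize to mean weight 1

margins = np.array([-3.0, -0.2, 0.05, 0.4, 2.5])
print(priority_weights(margins).round(2))
```

The resulting weights would then feed an ordinary retraining step, e.g. as per-sample weights in the training loss.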

In statistical mechanics, computing the partition function is generally difficult. An approximation method using a variational autoregressive network (VAN) has been proposed recently. This approach offers the advantage of directly calculating the generation probabilities while obtaining a large number of samples. The present study introduces a novel approximation method that employs, in conjunction with VAN, samples derived from quantum annealing machines, which are empirically assumed to adhere to the Gibbs-Boltzmann distribution. When applied to the finite-size Sherrington-Kirkpatrick model, the proposed method demonstrates enhanced accuracy compared to the traditional VAN approach and other approximate methods, such as the widely utilized naive mean field.
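A minimal sketch of the underlying variational principle on a toy two-spin model (ours, not the paper's VAN or annealer pipeline): any trial distribution $q$ with exactly computable sample probabilities, the key property of an autoregressive network, yields the bound $F_q = \mathbb{E}_{s\sim q}[E(s) + T\ln q(s)] \ge F = -T\ln Z$.

```python
import numpy as np

rng = np.random.default_rng(1)
J, T = 1.0, 1.0   # two-spin toy model with energy E(s) = -J*s1*s2

def energy(s):
    return -J * s[:, 0] * s[:, 1]

def sample_and_logq(p, n):
    """Factorized trial distribution: each spin is +1 w.p. p,
    independently; log q(s) is exactly computable per sample."""
    u = rng.random((n, 2))
    s = np.where(u < p, 1.0, -1.0)
    logq = np.where(s > 0, np.log(p), np.log(1 - p)).sum(axis=1)
    return s, logq

s, logq = sample_and_logq(p=0.5, n=100_000)
F_q = (energy(s) + T * logq).mean()          # variational upper bound on F
Z_exact = 2 * (np.exp(J / T) + np.exp(-J / T))
print(f"ln Z >= {-F_q / T:.3f} (variational), exact ln Z = {np.log(Z_exact):.3f}")
```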

Proving equivalence between functional programs is a fundamental problem in program verification, which often amounts to reasoning about algebraic data types (ADTs) and compositions of structural recursions. Modern theorem provers address this problem by applying structural induction, which is insufficient for proving many equivalence theorems. In such cases, one has to invent a set of lemmas, prove these lemmas by additional induction, and use these lemmas to prove the original theorem. There is, however, a lack of systematic understanding of what lemmas are needed for inductive proofs and how these lemmas can be synthesized automatically. This paper presents directed lemma synthesis, an effective approach to automating equivalence proofs by discovering critical lemmas using program synthesis techniques. We first identify two induction-friendly forms of propositions that give formal guarantees of the progress of the proof. We then propose two tactics that synthesize and apply lemmas, thereby transforming the proof goal into induction-friendly forms. Both tactics reduce lemma synthesis to a specialized class of program synthesis problems with efficient algorithms. Experimental results demonstrate the effectiveness of our approach: compared to state-of-the-art equivalence checkers employing heuristic-based lemma enumeration, directed lemma synthesis saves 95.47% of runtime on average and solves 38 more tasks on an extended version of the standard benchmark set.
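A classic instance of the phenomenon (illustrative; not necessarily drawn from the paper's benchmark set): proving $\mathit{rev}(\mathit{rev}\,xs) = xs$ by structural induction gets stuck at the cons case until one invents and separately proves an auxiliary lemma.

```latex
% Goal (plain structural induction gets stuck at the cons case):
\mathit{rev}(\mathit{rev}\,xs) = xs
% The cons case reduces to:
\mathit{rev}(\mathit{rev}\,xs \mathbin{+\!\!+} [x]) = x :: xs
% Auxiliary lemma to synthesize, provable by its own induction,
% after which the original goal follows from the hypothesis:
\mathit{rev}(ys \mathbin{+\!\!+} [x]) = x :: \mathit{rev}\,ys
```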

When optimizing machine learning models, there are various scenarios where gradient computations are challenging or even infeasible. Furthermore, in reinforcement learning (RL), preference-based RL, which only compares between options, has wide applications, including reinforcement learning with human feedback in large language models. In this paper, we systematically study the optimization of a smooth function $f\colon\mathbb{R}^n\to\mathbb{R}$ assuming only an oracle that compares function values at two points and tells which is larger. When $f$ is convex, we give two algorithms using $\tilde{O}(n/\epsilon)$ and $\tilde{O}(n^{2})$ comparison queries, respectively, to find an $\epsilon$-optimal solution. When $f$ is nonconvex, our algorithm uses $\tilde{O}(n/\epsilon^2)$ comparison queries to find an $\epsilon$-approximate stationary point. All these results match the best-known zeroth-order algorithms with function evaluation queries in their dependence on $n$, thus suggesting that \emph{comparisons are all you need for optimizing smooth functions using derivative-free methods}. In addition, we give an algorithm for escaping saddle points and reaching an $\epsilon$-second-order stationary point of a nonconvex $f$, using $\tilde{O}(n^{1.5}/\epsilon^{2.5})$ comparison queries.
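A minimal sketch of what comparison-only optimization looks like (a naive coordinate scheme of our own, not the paper's algorithms with the stated query complexities): the oracle `comp` is the only access to $f$, and comparing symmetric perturbations recovers the sign of each directional derivative.

```python
import numpy as np

def comp(f, x, y):
    return f(x) < f(y)  # the only access to f we allow ourselves

def comparison_descent(f, x, step=0.5, iters=200, shrink=0.9):
    n = len(x)
    for _ in range(iters):
        for i in range(n):
            e = np.zeros(n); e[i] = step
            # Compare f(x + step*e_i) against f(x - step*e_i): the
            # smaller side reveals the sign of the descent direction.
            cand = x + e if comp(f, x + e, x - e) else x - e
            if comp(f, cand, x):       # accept only real improvements
                x = cand
        step *= shrink                 # anneal the step size
    return x

f = lambda x: (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2
print(comparison_descent(f, np.array([4.0, 4.0])).round(2))  # ~[1, -0.5]
```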

Conventional modal analysis techniques face difficulties in handling nonstationary phenomena, such as transient, nonperiodic, or intermittent phenomena. This paper presents a variational mode decomposition-based nonstationary coherent structure (VMD-NCS) analysis that enables the extraction and analysis of coherent structures from high-dimensional spatiotemporal data of nonstationary phenomena. The VMD-NCS analysis decomposes the input spatiotemporal data into intrinsic coherent structures (ICSs) that represent nonstationary spatiotemporal patterns and exhibit coherence in both the spatial and temporal directions. Unlike many conventional modal analysis techniques, the proposed method accounts for temporal changes in the spatial distribution. The VMD-NCS analysis was validated on the transient growth phenomena in the flow around a cylinder. It was confirmed that the temporal changes in the spatial distribution, depicting the transient growth of vortex shedding where fluctuations arising in the far-wake region gradually approach the near-wake region, were represented as a single ICS. Furthermore, in the analysis of the quasi-periodic flow field around a pitching airfoil, the temporal changes in the spatial distribution and amplitude of vortex shedding behind the airfoil, influenced by the pitching motion of the airfoil, were captured as a single ICS. The impact of two parameters, the number of ICSs ($K$) and the penalty factor related to temporal coherence ($\alpha$), was investigated. The results revealed that $K$ has a significant impact on the VMD-NCS analysis results. In the case of a relatively high $K$, the VMD-NCS analysis tends to extract more periodic spatiotemporal patterns resembling the results of dynamic mode decomposition. In the case of a small $K$, it tends to extract more nonstationary spatiotemporal patterns.
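A minimal one-dimensional sketch of the VMD machinery that the method builds on (the paper applies it to high-dimensional spatiotemporal data; this toy shows only the mode decomposition itself): each mode is repeatedly Wiener-filtered around its center frequency, which is then re-estimated from the mode's spectrum, with a dual variable enforcing reconstruction.

```python
import numpy as np

def vmd(signal, K=2, alpha=2000.0, tau=0.1, iters=500):
    """Toy 1D variational mode decomposition (no mirror extension)."""
    N = len(signal)
    f_hat = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(N)             # normalized frequency grid
    u_hat = np.zeros((K, len(freqs)), dtype=complex)
    omega = np.linspace(0.05, 0.45, K)     # initial center frequencies
    lam = np.zeros(len(freqs), dtype=complex)
    for _ in range(iters):
        for k in range(K):
            residual = f_hat - u_hat.sum(axis=0) + u_hat[k]
            # Wiener-filter update: mode k is a band around omega_k;
            # alpha controls the bandwidth penalty.
            u_hat[k] = (residual + lam / 2) / (1 + 2 * alpha * (freqs - omega[k]) ** 2)
            power = np.abs(u_hat[k]) ** 2
            omega[k] = (freqs * power).sum() / (power.sum() + 1e-12)
        lam = lam + tau * (f_hat - u_hat.sum(axis=0))   # dual ascent
    return np.fft.irfft(u_hat, n=N), omega

t = np.arange(1000)
x = np.sin(2 * np.pi * 0.02 * t) + 0.5 * np.sin(2 * np.pi * 0.2 * t)
modes, centers = vmd(x, K=2)
print(centers.round(3))   # centers should sit near 0.02 and 0.2
```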

We present the notion of a multilevel, slashable quorum system, where an application can obtain gradual levels of assurance that a certain value is bound to be decided (or "finalized") in a global consensus procedure, unless a large number of Byzantine processes are exposed to slashing (that is, a penalty on staked assets). Our construction is a highly parameterized generalization of quorum systems based on finite projective spaces, with asymptotic high availability and optimal slashing properties. In particular, we show that any quorum system whose ground elements are disjoint subsets of nodes (e.g., "committees" in committee-based consensus protocols) has asymptotic high availability under very reasonable conditions, a general proof with significance of its own. Under similarly relaxed conditions, we show that our construction has asymptotically optimal slashing properties with respect to message complexity and process load; this illustrates a fundamental trade-off between message complexity, load, and slashing. Our multilevel construction allows nodes to decide how many "levels" of finalization assurance they wish to obtain, noting that this functionality, if applied to a proof-of-stake blockchain, can be seen either as (i) a form of early, slashing-based, probabilistic block finalization, or (ii) a service for reorg tolerance.
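A minimal sketch of the classical building block (the Fano plane $PG(2,2)$, the smallest finite projective plane; the paper's multilevel, slashable construction is a far-reaching generalization): its lines form a quorum system in which every two quorums intersect, and load is balanced across nodes.

```python
from itertools import combinations

# The seven lines of the Fano plane PG(2,2) over nodes 0..6.
FANO_LINES = [
    {0, 1, 2}, {0, 3, 4}, {0, 5, 6},
    {1, 3, 5}, {1, 4, 6}, {2, 3, 6}, {2, 4, 5},
]

# Quorum intersection: any two lines of a projective plane meet in
# exactly one point, so no two quorums can decide inconsistently.
assert all(len(q1 & q2) == 1 for q1, q2 in combinations(FANO_LINES, 2))

# Load: each of the 7 nodes appears in exactly 3 of the 7 quorums,
# i.e. quorum size and per-node load are both O(sqrt(#nodes)).
load = [sum(node in q for q in FANO_LINES) for node in range(7)]
print(load)  # [3, 3, 3, 3, 3, 3, 3]
```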

Classical numerical schemes have traditionally been employed to solve partial differential equations (PDEs). Recently, neural network-based methods have emerged. Despite these advancements, neural network-based methods, such as physics-informed neural networks (PINNs) and neural operators, exhibit deficiencies in robustness and generalization. To address these issues, numerous studies have integrated classical numerical frameworks with machine learning techniques, incorporating neural networks into parts of traditional numerical methods. In this study, we focus on hyperbolic conservation laws by replacing traditional numerical fluxes with neural operators. To this end, we develop loss functions inspired by established numerical schemes for conservation laws and approximate numerical fluxes using Fourier neural operators (FNOs). Our experiments demonstrate that our approach combines the strengths of both traditional numerical schemes and FNOs, outperforming standard FNO methods in several respects. For instance, our method is robust, has resolution invariance, and is feasible as a data-driven method. In particular, it can make continuous predictions over time and exhibits superior generalization capabilities on out-of-distribution (OOD) samples, challenges that existing neural operator methods encounter.
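A minimal sketch of the structural idea, assuming Burgers' equation and a classical Lax-Friedrichs flux as a stand-in for the trained FNO: because the update is in conservative form, swapping in a learned flux function keeps discrete conservation by construction.

```python
import numpy as np

def lax_friedrichs_flux(u_left, u_right, c):
    # Placeholder for a learned flux; f(u) = u^2/2 (Burgers' equation).
    f = lambda u: 0.5 * u ** 2
    return 0.5 * (f(u_left) + f(u_right)) - 0.5 * c * (u_right - u_left)

def step(u, dt, dx, flux):
    """One conservative finite-volume update with periodic boundaries:
    u_i <- u_i - dt/dx * (F_{i+1/2} - F_{i-1/2})."""
    u_l, u_r = u, np.roll(u, -1)          # states at interfaces i+1/2
    c = np.abs(u).max()                   # local wave-speed bound
    F = flux(u_l, u_r, c)                 # flux at every interface
    return u - dt / dx * (F - np.roll(F, 1))

x = np.linspace(0, 1, 200, endpoint=False)
u = np.sin(2 * np.pi * x)
for _ in range(100):
    u = step(u, dt=0.002, dx=1 / 200, flux=lax_friedrichs_flux)
print(f"total mass drift: {abs(u.mean()):.2e}")  # conservation check
```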

We propose a new method for the event extraction (EE) task based on an imitation learning framework; specifically, inverse reinforcement learning (IRL) via a generative adversarial network (GAN). The GAN estimates proper rewards according to the difference between the actions committed by the expert (i.e., the ground truth) and by the agent among the complicated states in the environment. The EE task benefits from these dynamic rewards because instances and labels exhibit varying degrees of difficulty and the gains are expected to be diverse: e.g., an ambiguous but correctly detected trigger or argument should receive high gains. Traditional RL models usually neglect such differences and pay equal attention to all instances. Moreover, our experiments demonstrate that the proposed framework outperforms state-of-the-art methods, without explicit feature engineering.
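A minimal sketch of the reward mechanism in GAIL style, with hypothetical dimensions and a generic discriminator (not the paper's EE architecture): the discriminator scores (state, action) pairs against expert behavior, and its output defines a per-instance dynamic reward, so hard-but-correct decisions can earn larger gains than easy ones.

```python
import torch
import torch.nn as nn

STATE_DIM, N_LABELS = 128, 34   # assumed sizes for illustration

discriminator = nn.Sequential(
    nn.Linear(STATE_DIM + N_LABELS, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)

def dynamic_reward(state, action_onehot):
    """GAIL-style reward: high when the agent's labeling decision is
    indistinguishable from the expert's under the discriminator."""
    d = discriminator(torch.cat([state, action_onehot], dim=-1))
    return -torch.log(1.0 - d + 1e-8)   # larger d => larger reward

state = torch.randn(4, STATE_DIM)                  # e.g. trigger encodings
action = nn.functional.one_hot(
    torch.tensor([0, 3, 3, 17]), N_LABELS).float() # predicted labels
print(dynamic_reward(state, action).squeeze(-1))
```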
