
Baker and Norine initiated the study of graph divisors as a graph-theoretic analogue of the Riemann-Roch theory for Riemann surfaces. One of the key concepts of graph divisor theory is the {\it rank} of a divisor on a graph. The importance of the rank is well illustrated by Baker's {\it Specialization lemma}, stating that the dimension of a linear system can only go up under specialization from curves to graphs, leading to a fruitful interaction between divisors on graphs and curves. Due to its decisive role, determining the rank is a central problem in graph divisor theory. Kiss and T\'othm\'eresz reformulated the problem using chip-firing games, and showed that computing the rank of a divisor on a graph is NP-hard via reduction from the Minimum Feedback Arc Set problem. In this paper, we strengthen their result by establishing a connection between chip-firing games and the Minimum Target Set Selection problem. As a corollary, we show that the rank is difficult to approximate to within a factor of $O(2^{\log^{1-\varepsilon}n})$ for any $\varepsilon > 0$ unless $P=NP$. Furthermore, assuming the Planted Dense Subgraph Conjecture, the rank is difficult to approximate to within a factor of $O(n^{1/4-\varepsilon})$ for any $\varepsilon>0$.
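To make the chip-firing formulation concrete, here is a minimal Python sketch (assuming a simple `networkx` graph; multi-edges are ignored, and all names are illustrative, not the authors' code) of the two primitives that rank computations build on: a single firing move, and Dhar's burning algorithm, which finds the maximal set of vertices that can fire together without driving any vertex other than the source into debt.

```python
import networkx as nx

def fire(G, D, v):
    """One chip-firing move: vertex v sends one chip along each incident edge."""
    D = dict(D)
    for u in G.neighbors(v):
        D[v] -= 1
        D[u] += 1
    return D

def dhar_unburnt(G, D, q):
    """Dhar's burning algorithm from source q (simple-graph version).
    Returns the set of unburnt vertices: the maximal set that can fire
    simultaneously without sending any vertex outside q into debt."""
    burnt = {q}
    changed = True
    while changed:
        changed = False
        for v in set(G) - burnt:
            # v burns once it has more burnt neighbours than chips
            if sum(1 for u in G.neighbors(v) if u in burnt) > D[v]:
                burnt.add(v)
                changed = True
    return set(G) - burnt
```

Repeatedly firing the unburnt set drives a divisor toward a q-reduced representative, from which effectiveness queries of the kind underlying the hardness reduction can be answered.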

Related content

We consider the fundamental problem of constructing fast and small circuits for binary addition. We propose a new algorithm with running time $\mathcal O(n \log_2 n)$ for constructing linear-size $n$-bit adder circuits with a significantly better depth guarantee compared to previous approaches: Our circuits have a depth of at most $\log_2 n + \log_2 \log_2 n + \log_2 \log_2 \log_2 n + \text{const}$, improving upon the previously best circuits by [12] with a depth of at most $\log_2 n + 8 \sqrt{\log_2 n} + 6 \log_2 \log_2 n + \text{const}$. Hence, we decrease the gap to the lower bound of $\log_2 n + \log_2 \log_2 n + \text{const}$ by [5] significantly from $\mathcal O (\sqrt{\log_2 n})$ to $\mathcal O(\log_2 \log_2 \log_2 n)$. Our core routine is a new algorithm for the construction of a circuit for a single carry bit, or, more generally, for an And-Or path, i.e., a Boolean function of type $t_0 \lor ( t_1 \land (t_2 \lor ( \dots t_{m-1}) \dots ))$. We compute linear-size And-Or path circuits with a depth of at most $\log_2 m + \log_2 \log_2 m + 0.65$ in time $\mathcal O(m \log_2 m)$. These are the first And-Or path circuits known that, up to an additive constant, match the lower bound by [5] and at the same time have a linear size. The previously fastest And-Or path circuits are worse in depth by only an additive constant, but have a much larger size, on the order of $\mathcal O (m \log_2 m)$.
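The And-Or path structure is easy to state in code. The sketch below (plain Python; function names are hypothetical) evaluates $t_0 \lor (t_1 \land (t_2 \lor \dots))$ and checks by brute force over 4-bit operands that the carry-out of binary addition is exactly such a path applied to alternating generate/propagate signals.

```python
def and_or_path(t):
    """Evaluate t0 | (t1 & (t2 | (t3 & ...))), working from the innermost term out.
    Even positions are combined by OR, odd positions by AND."""
    acc = t[-1]
    for i in range(len(t) - 2, -1, -1):
        acc = (t[i] | acc) if i % 2 == 0 else (t[i] & acc)
    return acc

def carry_out(a, b, n):
    """Carry-out of n-bit addition a+b as an And-Or path over alternating
    generate (g) and propagate (p) signals:
    c_n = g_{n-1} | (p_{n-1} & (g_{n-2} | (p_{n-2} & ... g_0)))."""
    g = [((a >> i) & 1) & ((b >> i) & 1) for i in range(n)]   # both bits set
    p = [((a >> i) & 1) ^ ((b >> i) & 1) for i in range(n)]   # exactly one bit set
    t = []
    for i in range(n - 1, 0, -1):
        t += [g[i], p[i]]
    t.append(g[0])
    return and_or_path(t)

# brute-force check against integer addition
assert all(carry_out(a, b, 4) == ((a + b) >> 4) & 1
           for a in range(16) for b in range(16))
```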

Quantum computing has recently emerged as a transformative technology. Yet, its promised advantages rely on efficiently translating quantum operations into viable physical realizations. In this work, we use generative machine learning models, specifically denoising diffusion models (DMs), to facilitate this transformation. Leveraging text-conditioning, we steer the model to produce desired quantum operations within gate-based quantum circuits. Notably, DMs allow us to sidestep, during training, the exponential overhead inherent in the classical simulation of quantum dynamics -- a consistent bottleneck in preceding ML techniques. We demonstrate the model's capabilities across two tasks: entanglement generation and unitary compilation. The model excels at generating new circuits and supports typical DM extensions such as masking and editing to, for instance, align the circuit generation to the constraints of the targeted quantum device. Given their flexibility and generalization abilities, we envision DMs as pivotal in quantum circuit synthesis, enhancing both practical applications and insights into theoretical quantum computation.
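The claim about sidestepping classical simulation rests on how DMs are trained in general: the model only learns to denoise corrupted encodings of circuits, never to simulate them. As a rough illustration only (the paper's circuit encoding, text conditioning, and denoiser architecture are not reproduced here), the snippet below shows the generic DDPM forward-noising step $x_t = \sqrt{\bar\alpha_t}\,x_0 + \sqrt{1-\bar\alpha_t}\,\varepsilon$ applied to a stand-in array.

```python
import numpy as np

def forward_noising(x0, t, alpha_bar, rng):
    """Generic DDPM forward step: x_t = sqrt(abar_t)*x0 + sqrt(1-abar_t)*eps.
    x0 is any array-valued encoding (here, a stand-in for an encoded circuit);
    training regresses a denoiser onto eps -- no quantum simulation involved."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)        # a common linear noise schedule
alpha_bar = np.cumprod(1.0 - betas)
x0 = rng.standard_normal((8, 8))             # hypothetical encoded circuit
xt, eps = forward_noising(x0, 500, alpha_bar, rng)
```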

We introduce the concept of half-closed nodes for nodal Discontinuous Galerkin (DG) discretisations. This is in contrast to the more commonly used closed nodes in DG, where nodes are placed on every boundary of each element. Half-closed nodes relax this constraint by requiring nodes only on a subset of the boundaries of each element, and this extra freedom in node placement allows for increased efficiency in the assembly of DG operators. To determine which element boundaries half-closed nodes are placed on, we outline a simple procedure based on switch functions. We examine the effect of the different node types on operator sparsity and show that, in particular for the Laplace operator, there is no difference in sparsity between half-closed and closed nodes. We also discuss some linear solver techniques commonly used for Finite Element or Discontinuous Galerkin methods, such as static condensation and block-based methods, and how they can be applied to half-closed DG discretisations.
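Of the solver techniques mentioned, static condensation is the most self-contained to illustrate. The dense NumPy sketch below is a toy (in an actual DG code the interior block is block-diagonal per element and eliminated locally): it removes interior unknowns via a Schur complement, solves the reduced system for the retained unknowns, and back-substitutes.

```python
import numpy as np

def static_condensation(K, f, interior):
    """Solve K u = f by eliminating the `interior` unknowns first.
    With b = retained indices and i = interior indices, the reduced system is
    S u_b = f_b - K_bi K_ii^{-1} f_i,  S = K_bb - K_bi K_ii^{-1} K_ib."""
    n = K.shape[0]
    i = np.asarray(interior)
    b = np.setdiff1d(np.arange(n), i)
    Kbb, Kbi = K[np.ix_(b, b)], K[np.ix_(b, i)]
    Kib, Kii = K[np.ix_(i, b)], K[np.ix_(i, i)]
    S = Kbb - Kbi @ np.linalg.solve(Kii, Kib)          # Schur complement
    ub = np.linalg.solve(S, f[b] - Kbi @ np.linalg.solve(Kii, f[i]))
    ui = np.linalg.solve(Kii, f[i] - Kib @ ub)         # back-substitution
    u = np.empty(n)
    u[b], u[i] = ub, ui
    return u
```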

We propose an operator learning approach to accelerate geometric Markov chain Monte Carlo (MCMC) for solving infinite-dimensional Bayesian inverse problems (BIPs). While geometric MCMC employs high-quality proposals that adapt to posterior local geometry, it requires repeated computations of gradients and Hessians of the log-likelihood, which becomes prohibitive when the parameter-to-observable (PtO) map is defined through expensive-to-solve parametric partial differential equations (PDEs). We consider a delayed-acceptance geometric MCMC method driven by a neural operator surrogate of the PtO map, where the proposal exploits fast surrogate predictions of the log-likelihood and, simultaneously, its gradient and Hessian. To achieve a substantial speedup, the surrogate must accurately approximate the PtO map and its Jacobian, which often demands a prohibitively large number of PtO map samples via conventional operator learning methods. In this work, we present an extension of derivative-informed operator learning [O'Leary-Roseberry et al., J. Comput. Phys., 496 (2024)] that uses joint samples of the PtO map and its Jacobian. This leads to derivative-informed neural operator (DINO) surrogates that accurately predict the observables and posterior local geometry at a significantly lower training cost than conventional methods. Cost and error analysis for reduced basis DINO surrogates are provided. Numerical studies demonstrate that DINO-driven MCMC generates effective posterior samples 3--9 times faster than geometric MCMC and 60--97 times faster than prior geometry-based MCMC. Furthermore, the training cost of DINO surrogates breaks even compared to geometric MCMC after just 10--25 effective posterior samples.
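The delayed-acceptance mechanism itself is independent of the DINO surrogate. The sketch below (a symmetric proposal is assumed; the paper's geometry-aware proposals and surrogate gradients/Hessians are not modeled) shows the two-stage accept/reject structure: the cheap surrogate screens every proposal, and the expensive exact posterior is evaluated only for proposals that survive the first stage.

```python
import numpy as np

def delayed_acceptance_mh(x0, logpost, logpost_surr, propose, n_steps, rng=None):
    """Two-stage (delayed-acceptance) Metropolis-Hastings, symmetric proposal.
    Stage 1 accepts/rejects under the surrogate density; stage 2 corrects
    for the surrogate error, so the exact posterior is the stationary law."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, float)
    lp, lps = logpost(x), logpost_surr(x)
    chain = [x.copy()]
    for _ in range(n_steps):
        y = propose(x, rng)
        lps_y = logpost_surr(y)                      # cheap screening evaluation
        if np.log(rng.uniform()) < lps_y - lps:      # stage 1: surrogate ratio
            lp_y = logpost(y)                        # expensive evaluation
            if np.log(rng.uniform()) < (lp_y - lp) - (lps_y - lps):  # stage 2
                x, lp, lps = y, lp_y, lps_y
        chain.append(x.copy())
    return np.array(chain)
```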

The least squares Monte Carlo (LSM) algorithm proposed by Longstaff and Schwartz (2001) is widely used for pricing Bermudan options. The LSM estimator contains an undesirable look-ahead bias, and the conventional technique for avoiding it requires additional simulation paths. We present the leave-one-out LSM (LOOLSM) algorithm to eliminate look-ahead bias without doubling simulations. We also show that look-ahead bias is asymptotically proportional to the regressors-to-paths ratio. Our findings are demonstrated with several option examples in which the LSM algorithm overvalues the options. The LOOLSM method can be extended to other regression-based algorithms that improve the LSM method.
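The leave-one-out fits need not be recomputed path by path: the standard hat-matrix identity $\hat y_{-i} = (\hat y_i - h_{ii} y_i)/(1 - h_{ii})$ yields all of them from a single regression. A sketch for a generic least-squares fit follows (in a LOOLSM setting, $X$ would hold the regression basis evaluated on the paths at one exercise date, a usage this snippet only gestures at):

```python
import numpy as np

def loo_predictions(X, y):
    """All leave-one-out least-squares predictions from one fit, via
    yhat_{-i} = (yhat_i - h_ii * y_i) / (1 - h_ii), with h_ii = diag of the
    hat matrix H = X (X^T X)^{-1} X^T."""
    Q, _ = np.linalg.qr(X)                 # thin QR: H = Q Q^T
    h = np.sum(Q**2, axis=1)               # leverages h_ii as row sums of Q^2
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    yhat = X @ beta
    return (yhat - h * y) / (1.0 - h)

# sanity check against an explicit refit on toy data
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.standard_normal(50)
loo = loo_predictions(X, y)
mask = np.arange(50) != 7
beta7, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
assert np.isclose(loo[7], X[7] @ beta7)
```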

The Lamport diagram is a pervasive and intuitive tool for informal reasoning about "happens-before" relationships in a concurrent system. However, traditional axiomatic formalizations of Lamport diagrams can be painful to work with in a mechanized setting like Agda. We propose an alternative, inductive formalization -- the causal separation diagram (CSD) -- that takes inspiration from string diagrams and concurrent separation logic, but enjoys a graphical syntax similar to Lamport diagrams. Critically, CSDs are based on the idea that causal relationships between events are witnessed by the paths that information follows between them. To that end, we model happens-before as a dependent type of paths between events. The inductive formulation of CSDs enables their interpretation into a variety of semantic domains. We demonstrate the interpretability of CSDs with a case study on properties of logical clocks, widely used mechanisms for reifying causal relationships as data. We carry out this study by implementing a series of interpreters for CSDs, culminating in a generic proof of Lamport's clock condition that is parametric in a choice of clock. We instantiate this proof on Lamport's scalar clock, on Mattern's vector clock, and on the matrix clocks of Raynal et al. and of Wuu and Bernstein, yielding verified implementations of each. The CSD formalism and our case study are mechanized in the Agda proof assistant.
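For readers who want the scalar-clock rules outside Agda, here is the textbook update logic in plain Python (an illustration, not the paper's mechanized artifact). Lamport's clock condition, $e \to f \Rightarrow C(e) < C(f)$, holds because every rule strictly increases the local time and a receive takes a max with the incoming timestamp.

```python
from dataclasses import dataclass

@dataclass
class LamportClock:
    time: int = 0

    def local_event(self):
        self.time += 1                        # internal step: tick
        return self.time

    def send(self):
        self.time += 1                        # tick, then piggyback timestamp
        return self.time

    def recv(self, msg_time):
        self.time = max(self.time, msg_time) + 1   # join with sender, then tick
        return self.time

a, b = LamportClock(), LamportClock()
ts = a.send()          # a.time == 1
b.local_event()        # b.time == 1
b.recv(ts)             # b.time == max(1, 1) + 1 == 2
assert b.time > ts     # send happens-before receive, so C(send) < C(recv)
```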

Given a standard myopic dynamic process among coalition structures, an absorbing set is a minimal collection of such structures that is never left once entered through that process. Absorbing sets are an important solution concept in coalition formation games, but they have drawbacks: they can be large and hard to obtain. In this paper, we characterize an absorbing set in terms of a collection consisting of a small number of sets of coalitions that we refer to as a "reduced form" of a game. We apply our characterization to study convergence to stability in several economic environments.
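Read over a finite transition digraph on coalition structures, a set that is never left once entered is a closed set, and the minimal closed sets are exactly the strongly connected components with no outgoing edges. Under that reading (a simplification of the paper's setting; state names below are hypothetical), absorbing sets can be enumerated directly:

```python
import networkx as nx

def absorbing_sets(transitions):
    """Absorbing sets of a finite myopic process, read as the terminal
    strongly connected components of its transition digraph.
    `transitions` maps each coalition structure to its one-step successors."""
    G = nx.DiGraph(transitions)
    C = nx.condensation(G)                      # DAG of SCCs
    return [C.nodes[c]["members"]               # original states in each SCC
            for c in C.nodes if C.out_degree(c) == 0]

# toy process: "a|b|c" <-> "ab|c" -> "abc", which is stable
moves = {"a|b|c": ["ab|c"], "ab|c": ["abc", "a|b|c"], "abc": ["abc"]}
print(absorbing_sets(moves))                    # [{'abc'}]
```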

Deflation techniques are typically used to shift isolated clusters of small eigenvalues in order to obtain a tighter distribution and a smaller condition number. Such changes induce a positive effect in the convergence behavior of Krylov subspace methods, which are among the most popular iterative solvers for large sparse linear systems. We develop a deflation strategy for symmetric saddle point matrices by taking advantage of their underlying block structure. The vectors used for deflation come from an elliptic singular value decomposition relying on the generalized Golub-Kahan bidiagonalization process. The block targeted by deflation is the off-diagonal one since it features a problematic singular value distribution for certain applications. One example is the Stokes flow in elongated channels, where the off-diagonal block has several small, isolated singular values, depending on the length of the channel. Applying deflation to specific parts of the saddle point system is important when using solvers such as CRAIG, which operates on individual blocks rather than the whole system. The theory is developed by extending the existing framework for deflating square matrices before applying a Krylov subspace method like MINRES. Numerical experiments confirm the merits of our strategy and lead to interesting questions about using approximate vectors for deflation.
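The "deflate, then run a Krylov method" pattern from the last paragraph can be sketched generically. Below, for a symmetric system $Ax = b$ and given deflation vectors $U$ (generic here; in the paper they come from an elliptic singular value decomposition via generalized Golub-Kahan bidiagonalization), the components of the solution in $\mathrm{span}(U)$ are solved directly, and MINRES runs on the projected operator $PA = A - AU\,E^{-1}(AU)^\top$ with $E = U^\top A U$, which is again symmetric.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, minres

def deflated_minres(A, b, U):
    """MINRES on a symmetric system A x = b with span(U) deflated.
    Solve P A xt = P b, then recombine: x = U E^{-1} U^T b + P^T xt,
    where P = I - A U E^{-1} U^T and E = U^T A U."""
    AU = A @ U
    E = U.T @ AU                                   # small Galerkin matrix
    solveE = lambda r: np.linalg.solve(E, r)
    P = lambda v: v - AU @ solveE(U.T @ v)         # projector P
    n = A.shape[0]
    op = LinearOperator((n, n), matvec=lambda v: P(A @ v), dtype=float)
    xt, info = minres(op, P(b))
    # P^T xt = xt - U E^{-1} (A U)^T xt, using symmetry of A
    x = U @ solveE(U.T @ b) + xt - U @ solveE(AU.T @ xt)
    return x, info
```

The identity $A(U E^{-1} U^\top b + P^\top \tilde x) = AQb + PA\tilde x = AQb + Pb = b$ (with $Q = UE^{-1}U^\top$) is what justifies the recombination step.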

The diffusion of charged particles in a graph can be modeled using random walks on a weighted graph. We give strategies to hide (or cloak) changes in a subgraph from the perspective of measurements of expected net particle charges made at nodes away from the cloaked subgraph. We distinguish between passive and active strategies, depending on whether the strategy involves injecting particles. The passive strategy can hide topology and edge weight changes. In addition to these capabilities, the active strategy can also hide sources of particles, at the cost of prior knowledge of the expected net particle charges in the reference graph. The strategies we present rely on discrete analogues of classical potential theory, including a Calder\'on calculus on graphs.
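The discrete potential theory underlying such strategies starts from the weighted graph Laplacian and the discrete Dirichlet problem. A minimal sketch (plain NumPy, no cloaking logic) of that basic object: the harmonic extension of boundary data, the kind of solve that computations of expected responses at measurement nodes reduce to.

```python
import numpy as np

def laplacian(n, weighted_edges):
    """Weighted graph Laplacian L = D - W from a list of (u, v, weight)."""
    L = np.zeros((n, n))
    for u, v, w in weighted_edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

def dirichlet_solve(L, boundary, u_boundary):
    """Discrete Dirichlet problem: find u with (L u)_i = 0 at interior nodes
    and prescribed values on the boundary nodes."""
    n = L.shape[0]
    b = np.asarray(boundary)
    i = np.setdiff1d(np.arange(n), b)
    ui = np.linalg.solve(L[np.ix_(i, i)], -L[np.ix_(i, b)] @ np.asarray(u_boundary))
    u = np.empty(n)
    u[b], u[i] = u_boundary, ui
    return u

# path graph 0-1-2-3, unit weights: harmonic means linear interpolation
L = laplacian(4, [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0)])
print(dirichlet_solve(L, [0, 3], [0.0, 1.0]))   # ~[0, 1/3, 2/3, 1]
```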

We present a generalization of the discrete Lehmann representation (DLR) to three-point correlation and vertex functions in imaginary time and Matsubara frequency. The representation takes the form of a linear combination of judiciously chosen exponentials in imaginary time, and products of simple poles in Matsubara frequency, which are universal for a given temperature and energy cutoff. We present a systematic algorithm to generate compact sampling grids, from which the coefficients of such an expansion can be obtained by solving a linear system. We show that the explicit form of the representation can be used to evaluate diagrammatic expressions involving infinite Matsubara sums, such as polarization functions or self-energies, with controllable, high-order accuracy. This collection of techniques establishes a framework through which methods involving three-point objects can be implemented robustly, with a substantially reduced computational cost and memory footprint.
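For the two-point case that this work generalizes, the DLR expansion and its fitting step are short to write down. In the sketch below the fermionic kernel $K(\tau,\omega) = e^{-\tau\omega}/(1+e^{-\beta\omega})$ is assumed, and both the pole locations $\omega_k$ and sampling nodes $\tau_j$ are taken as given (in practice they are selected by rank-revealing decompositions); the coefficients then come from one linear solve, as described in the abstract.

```python
import numpy as np

def dlr_kernel(tau, omega, beta):
    """Fermionic imaginary-time kernel K(tau, w) = e^{-tau*w}/(1 + e^{-beta*w}),
    rewritten for w < 0 as e^{(beta-tau)*w}/(1 + e^{beta*w}) so that all
    exponents are non-positive (numerically stable for 0 <= tau <= beta)."""
    tau, omega = np.broadcast_arrays(tau, omega)
    pos = omega >= 0
    out = np.empty(tau.shape)
    out[pos] = np.exp(-tau[pos] * omega[pos]) / (1 + np.exp(-beta * omega[pos]))
    out[~pos] = (np.exp((beta - tau[~pos]) * omega[~pos])
                 / (1 + np.exp(beta * omega[~pos])))
    return out

def fit_dlr_coeffs(tau_nodes, omega_poles, G_samples, beta):
    """Recover coefficients g_k of G(tau) ~ sum_k K(tau, w_k) g_k from
    samples G(tau_j) on the given nodes, via one least-squares solve."""
    K = dlr_kernel(tau_nodes[:, None], omega_poles[None, :], beta)
    g, *_ = np.linalg.lstsq(K, G_samples, rcond=None)
    return g
```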
