
We study the following characterization problem. Given a set $T$ of terminals and a $(2^{|T|}-2)$-dimensional vector $\pi$ whose coordinates are indexed by proper subsets of $T$, is there a graph $G$ that contains $T$, such that for all subsets $\emptyset\subsetneq S\subsetneq T$, $\pi_S$ equals the value of the min-cut in $G$ separating $S$ from $T\setminus S$? The only known necessary conditions are submodularity and a special class of linear inequalities given by Chaudhuri, Subrahmanyam, Wagner and Zaroliagis. Our main result is a new class of linear inequalities, concerning laminar families, that generalizes all previous ones. Using our new class of inequalities, we can generalize Karger's approximate min-cut counting result to graphs with terminals.
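For concreteness, here is a minimal sketch of the object in question: the cut vector $\pi$ realized by a given graph, computed with off-the-shelf max-flow. The example graph, terminal set, and unit capacities are invented for illustration and are not taken from the paper.

```python
# Hedged sketch: computing the terminal cut vector pi_S of a given graph G,
# i.e. the data the characterization problem asks to realize. The graph,
# terminals and capacities below are made up for illustration.
from itertools import combinations

import networkx as nx

def terminal_cut_vector(G, T, capacity="capacity"):
    """Return {frozenset(S): min-cut value separating S from T \\ S} for all
    proper nonempty subsets S of the terminal set T."""
    T = list(T)
    pi = {}
    for r in range(1, len(T)):
        for S in combinations(T, r):
            H = G.copy()
            for s in S:                                   # super-source side
                H.add_edge("src", s, **{capacity: float("inf")})
            for t in set(T) - set(S):                     # super-sink side
                H.add_edge(t, "snk", **{capacity: float("inf")})
            cut_value, _ = nx.minimum_cut(H, "src", "snk", capacity=capacity)
            pi[frozenset(S)] = cut_value
    return pi

# Example: a 4-cycle with unit capacities and three terminals.
G = nx.cycle_graph(4)
nx.set_edge_attributes(G, 1, "capacity")
print(terminal_cut_vector(G, T=[0, 1, 2]))
```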

Related content

The current state-of-the-art theoretical analysis of Actor-Critic (AC) algorithms significantly lags in addressing the practical aspects of AC implementations. This crucial gap needs bridging to bring the analysis in line with practical implementations of AC. To address this, we advocate for considering the MMCLG criteria: \textbf{M}ulti-layer neural network parametrization for actor/critic, \textbf{M}arkovian sampling, \textbf{C}ontinuous state-action spaces, the performance of the \textbf{L}ast iterate, and \textbf{G}lobal optimality. These aspects are practically significant and have been largely overlooked in existing theoretical analyses of AC algorithms. In this work, we address these gaps by providing the first comprehensive theoretical analysis of AC algorithms that encompasses all five crucial practical aspects (covering the MMCLG criteria). We establish a global convergence sample complexity bound of $\tilde{\mathcal{O}}\left(\epsilon^{-3}\right)$. We achieve this result through our novel use of the weak gradient domination property of MDPs and our unique analysis of the error in critic estimation.
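As a point of reference for the ingredients named by the MMCLG criteria, below is a minimal, textbook-style one-step actor-critic loop with multi-layer actor and critic networks trained along a single Markovian trajectory of a toy MDP. The toy dynamics, network sizes, and step sizes are invented for the sketch; this is not the algorithm analyzed in the paper.

```python
# Hedged sketch: a generic one-step actor-critic loop with two-hidden-layer
# actor/critic networks and Markovian sampling on an invented toy MDP.
import torch
import torch.nn as nn

torch.manual_seed(0)
S_DIM, N_ACTIONS, GAMMA = 3, 2, 0.95

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.Tanh(),
                         nn.Linear(64, 64), nn.Tanh(),
                         nn.Linear(64, out_dim))

actor, critic = mlp(S_DIM, N_ACTIONS), mlp(S_DIM, 1)
opt_a = torch.optim.SGD(actor.parameters(), lr=1e-3)
opt_c = torch.optim.SGD(critic.parameters(), lr=1e-2)

def step(state, action):
    # Toy continuous-state dynamics and reward (purely illustrative).
    next_state = torch.tanh(state + 0.1 * (action - 0.5) + 0.05 * torch.randn(S_DIM))
    reward = -next_state.abs().mean()
    return next_state, reward

state = torch.zeros(S_DIM)
for t in range(5000):                       # one long Markovian trajectory
    logits = actor(state)
    dist = torch.distributions.Categorical(logits=logits)
    action = dist.sample()
    next_state, reward = step(state, action.item())

    # TD(0) critic update.
    with torch.no_grad():
        target = reward + GAMMA * critic(next_state)
    td_error = target - critic(state)
    opt_c.zero_grad(); (td_error ** 2).backward(); opt_c.step()

    # Policy-gradient actor update using the TD error as advantage estimate.
    opt_a.zero_grad(); (-td_error.detach() * dist.log_prob(action)).backward(); opt_a.step()

    state = next_state.detach()
```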

We study local filters for the Lipschitz property of real-valued functions $f: V \to [0,r]$, where the Lipschitz property is defined with respect to an arbitrary undirected graph $G=(V,E)$. We give nearly optimal local Lipschitz filters both with respect to $\ell_1$-distance and $\ell_0$-distance. Previous work only considered unbounded-range functions over $[n]^d$. Jha and Raskhodnikova (SICOMP '13) gave an algorithm for such functions with lookup complexity exponential in $d$, which Awasthi et al. (ACM Trans. Comput. Theory) showed was necessary in this setting. We demonstrate that important applications of local Lipschitz filters can be accomplished with filters for functions with bounded range. For functions $f: [n]^d\to [0,r]$, we circumvent the lower bound and achieve running time $(d^r\log n)^{O(\log r)}$ for the $\ell_1$-respecting filter and $d^{O(r)}\text{polylog } n$ for the $\ell_0$-respecting filter. Our local filters provide a novel Lipschitz extension that can be implemented locally. Furthermore, we show that our algorithms have nearly optimal dependence on $r$ for the domain $\{0,1\}^d$. In addition, our lower bound resolves an open question of Awasthi et al., removing one of the conditions necessary for their lower bound for general range. We prove our lower bound via a reduction from distribution-free Lipschitz testing and a new technique for proving hardness for adaptive algorithms. We provide two applications of our local filters to arbitrary real-valued functions. In the first application, we use them in conjunction with the Laplace mechanism for differential privacy and noisy binary search to provide mechanisms for privately releasing outputs of black-box functions, even in the presence of malicious clients. In the second application, we use our local filters to obtain the first nontrivial tolerant tester for the Lipschitz property.
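As a non-local baseline for what a Lipschitz filter outputs, the following sketch computes the inf-convolution $g(x)=\min_y\,(f(y)+d(x,y))$, which is $1$-Lipschitz with respect to the graph metric and agrees with $f$ whenever $f$ is already Lipschitz, then clips to $[0,r]$, and finally shows the Laplace-mechanism step from the privacy application. Unlike the local filters in the paper, this sketch reads all of $f$; the graph, $f$, $r$, and $\varepsilon$ are invented.

```python
# Hedged sketch: a *global* (non-local) Lipschitz filter via inf-convolution
# with the graph metric, plus a Laplace release of one filtered value.
import numpy as np
import networkx as nx

def lipschitz_filter(G, f, r):
    """Inf-convolution of f with the graph metric, clipped to [0, r]."""
    dist = dict(nx.all_pairs_shortest_path_length(G))     # graph metric d(.,.)
    g = {x: min(f[y] + dist[x][y] for y in G.nodes) for x in G.nodes}
    return {x: min(g[x], r) for x in G.nodes}

rng = np.random.default_rng(0)
G = nx.path_graph(5)
f = {0: 0.0, 1: 3.0, 2: 1.0, 3: 1.5, 4: 4.0}   # violates the Lipschitz property
g = lipschitz_filter(G, f, r=4.0)               # 1-Lipschitz w.r.t. graph distance
print(g)

# Laplace mechanism on the filtered (hence sensitivity-1) value; the noise
# scale 1/eps assumes a 1-Lipschitz output, as in the privacy application.
eps = 1.0
print(g[2] + rng.laplace(scale=1.0 / eps))
```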

We build a finite volume scheme for the scalar conservation law $\partial_t u + \partial_x (H(x, u)) = 0$ with bounded initial condition for a wide class of flux functions $H$ that are convex with respect to the second variable. The main idea behind the construction of the scheme is to use the theory of discontinuous flux. We prove that the resulting approximating sequence converges boundedly almost everywhere on $\mathopen]0, +\infty\mathclose[$ to the entropy solution.
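For intuition, here is a generic explicit finite volume discretization of $\partial_t u + \partial_x(H(x,u))=0$ with a local Lax-Friedrichs (Rusanov) numerical flux. It is only a standard baseline, not the scheme built from the theory of discontinuous flux in the paper; the particular flux $H$, initial datum, and grid parameters are invented.

```python
# Hedged sketch: a standard explicit finite volume scheme with a Rusanov flux
# for u_t + (H(x, u))_x = 0, with H convex in u for each fixed x.
import numpy as np

def H(x, u):
    k = 1.0 + 0.5 * np.sign(x)               # spatially varying coefficient
    return 0.5 * k * u ** 2                    # convex in u for each fixed x

def dH_du(x, u):
    return (1.0 + 0.5 * np.sign(x)) * u

# Grid and bounded initial condition.
L, N, T_final = 2.0, 400, 0.5
dx = 2 * L / N
x = -L + dx * (np.arange(N) + 0.5)             # cell centers
u = np.where(np.abs(x) < 0.5, 1.0, 0.0)        # bounded initial datum

t = 0.0
while t < T_final:
    a = np.max(np.abs(dH_du(x, u))) + 1e-12    # max wave speed
    dt = min(0.4 * dx / a, T_final - t)        # CFL condition
    # Rusanov flux at interface i+1/2, evaluated at the interface abscissa.
    xi = x + 0.5 * dx
    uL, uR = u, np.roll(u, -1)
    flux = 0.5 * (H(xi, uL) + H(xi, uR)) - 0.5 * a * (uR - uL)
    u = u - dt / dx * (flux - np.roll(flux, 1))
    t += dt

print("final cell averages (min, max):", u.min(), u.max())
```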

We study frequency domain electromagnetic scattering at a bounded, penetrable, and inhomogeneous obstacle $ \Omega \subset \mathbb{R}^3 $. From the Stratton-Chu integral representation, we derive a new representation formula when constant reference coefficients are given for the interior domain. The resulting integral representation contains the usual layer potentials, but also volume potentials on $\Omega$. Then it is possible to follow a single-trace approach to obtain boundary integral equations perturbed by traces of compact volume integral operators with weakly singular kernels. The coupled boundary and volume integral equations are discretized with a Galerkin approach using the usual curl-conforming and div-conforming finite elements on the boundary and in the volume. Compression techniques and special quadrature rules for singular integrands are required for an efficient and accurate method. Numerical experiments provide evidence that our new formulation enjoys promising properties.

To address the limitations of current hate speech detection models, we introduce \textsf{SGHateCheck}, a novel framework designed for the linguistic and cultural context of Singapore and Southeast Asia. It extends the functional testing approach of HateCheck and MHC, employing large language models for translation and paraphrasing into Singapore's main languages and refining these with native annotators. \textsf{SGHateCheck} reveals critical flaws in state-of-the-art models, highlighting their inadequacy in sensitive content moderation. This work aims to foster the development of more effective hate speech detection tools for diverse linguistic environments, particularly in Singaporean and Southeast Asian contexts.
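To make the functional-testing idea concrete, here is a hypothetical sketch of a HateCheck-style harness: test cases tagged with a language and a functionality, evaluated per functionality against a classifier. The field names, example cases, and classifier stub are invented and are not \textsf{SGHateCheck}'s actual schema or data.

```python
# Hedged sketch: the general shape of a HateCheck-style functional test
# harness with per-functionality accuracy reporting. All names are hypothetical.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class TestCase:
    text: str
    language: str          # e.g. "en", "zh", "ms", "ta"
    functionality: str     # e.g. "negated_hate", "slur_homonym"
    gold_label: str        # "hateful" or "non-hateful"

def evaluate(model, cases):
    """Return per-(language, functionality) accuracy of `model` on the suite."""
    hits, totals = defaultdict(int), defaultdict(int)
    for c in cases:
        key = (c.language, c.functionality)
        totals[key] += 1
        hits[key] += int(model(c.text) == c.gold_label)
    return {k: hits[k] / totals[k] for k in totals}

# Trivial stand-in classifier and two invented placeholder cases.
dummy_model = lambda text: "non-hateful"
suite = [
    TestCase("…", "en", "negated_hate", "non-hateful"),
    TestCase("…", "ms", "slur_homonym", "non-hateful"),
]
print(evaluate(dummy_model, suite))
```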

We present a flexible, deterministic numerical method for computing left-tail rare events of sums of non-negative, independent random variables. The method is based on iterative numerical integration of linear convolutions by means of Newton-Cotes rules. The periodicity properties of convolved densities combined with the trapezoidal rule are exploited to produce a robust and efficient method, and the method is flexible in the sense that it can be applied to all kinds of non-negative continuous random variables. We present an error analysis and study the benefits of utilizing Newton-Cotes rules versus the fast Fourier transform (FFT) for numerical integration, showing that although there can be efficiency benefits to using the FFT, Newton-Cotes rules tend to preserve the relative error better, and indeed do so at an acceptable computational cost. Numerical studies on problems with both known and unknown rare-event probabilities showcase the method's performance and support our theoretical findings.
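The core mechanism can be sketched as follows: because the summands are non-negative, the density of the sum on $[0,\gamma]$ depends only on the factors' densities on $[0,\gamma]$, so the left tail $P(X_1+\dots+X_n\le\gamma)$ can be obtained by iterated convolution on a grid over $[0,\gamma]$. The sketch below uses only the trapezoidal rule (the paper employs higher-order Newton-Cotes rules and provides an error analysis) on an invented example, i.i.d. Exp(1) summands, where the exact answer is the Erlang CDF.

```python
# Hedged sketch: left-tail probability of a sum of non-negative independent RVs
# via iterated trapezoidal convolution of densities on a grid over [0, gamma].
import math

import numpy as np

def left_tail(pdfs, gamma, m=2000):
    """P(sum of independent non-negative RVs <= gamma); pdfs is a list of
    vectorized density functions; grid of m+1 points with spacing gamma/m."""
    h = gamma / m
    t = np.linspace(0.0, gamma, m + 1)
    dens = pdfs[0](t)
    for pdf in pdfs[1:]:
        g = pdf(t)
        conv = np.convolve(dens, g)[: m + 1]
        dens = h * (conv - 0.5 * (dens[0] * g + g[0] * dens))   # trapezoid weights
    return h * (dens.sum() - 0.5 * (dens[0] + dens[-1]))        # integrate on [0, gamma]

n, gamma = 10, 2.0                       # a left-tail (rare) event for Exp(1) summands
expo = lambda t: np.exp(-t)
approx = left_tail([expo] * n, gamma)
exact = 1.0 - math.exp(-gamma) * sum(gamma ** k / math.factorial(k) for k in range(n))
print(f"approx = {approx:.3e}, exact = {exact:.3e}")
```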

Assume that an $N$-bit sequence $S$ of $k$ numbers encoded as Elias gamma codes is given as input. We present space-efficient algorithms for sorting, dense ranking and competitive ranking on $S$ in the word RAM model with word size $\Omega(\log N)$ bits. Our algorithms run in $O(k + \frac{N}{\log N})$ time and use $O(N)$ bits. The sorting algorithm returns the given numbers in sorted order, stored within a bit-vector of $N$ bits, whereas our ranking algorithms construct data structures that allow us subsequently to return the dense/competitive rank of each number $x$ in $S$ in constant time. For numbers $x \in \mathbb{N}$ with $x > N$, we require the position $p_x$ of $x$ as the input for our dense-/competitive-rank data structure. As an application of the algorithms above, we give an algorithm for tree isomorphism that runs in $O(n)$ time and uses $O(n)$ bits on $n$-node trees. Finally, we generalize our result for tree isomorphism to forests and outerplanar graphs, while maintaining a space usage of $O(n)$ bits. The previous best linear-time isomorphism algorithms for trees, forests and outerplanar graphs all use $\Theta(n \log n)$ bits.
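For reference, here is a plain (deliberately not space-efficient) implementation that decodes an Elias-gamma-coded bit string and computes the sorted order together with dense and competitive ranks; it uses $\Theta(k \log N)$ bits of working space and only pins down the operations that the $O(N)$-bit algorithms support. The rank conventions (1-based, "1223" dense and "1224" competitive) are one common choice and may differ from the paper's exact definition.

```python
# Hedged sketch: decode Elias gamma codes and compute dense/competitive ranks.
def decode_elias_gamma(bits):
    """Decode a bit string of concatenated Elias gamma codes into a list of ints."""
    numbers, i = [], 0
    while i < len(bits):
        n = 0
        while bits[i] == "0":                       # unary part: n leading zeros
            n, i = n + 1, i + 1
        numbers.append(int(bits[i:i + n + 1], 2))   # binary part: n + 1 bits
        i += n + 1
    return numbers

def dense_and_competitive_ranks(numbers):
    sorted_distinct = sorted(set(numbers))
    dense = {v: r for r, v in enumerate(sorted_distinct, start=1)}
    smaller = {v: sum(x < v for x in numbers) for v in sorted_distinct}
    return ([dense[x] for x in numbers],
            [1 + smaller[x] for x in numbers])

S = "1" "010" "1" "00101" "011"            # gamma codes of 1, 2, 1, 5, 3
nums = decode_elias_gamma(S)
print(nums, sorted(nums))
print(dense_and_competitive_ranks(nums))   # ([1, 2, 1, 4, 3], [1, 3, 1, 5, 4])
```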

For a graph $G$, a subset $S\subseteq V(G)$ is called a resolving set of $G$ if, for any two vertices $u,v\in V(G)$, there exists a vertex $w\in S$ such that $d(w,u)\neq d(w,v)$. The Metric Dimension problem takes as input a graph $G$ on $n$ vertices and a positive integer $k$, and asks whether there exists a resolving set of size at most $k$. In another metric-based graph problem, Geodetic Set, the input is a graph $G$ and an integer $k$, and the objective is to determine whether there exists a subset $S\subseteq V(G)$ of size at most $k$ such that, for any vertex $u \in V(G)$, there are two vertices $s_1, s_2 \in S$ such that $u$ lies on a shortest path from $s_1$ to $s_2$. These two classical problems turn out to be intractable with respect to the natural parameter, i.e., the solution size, as well as most structural parameters, including the feedback vertex set number and pathwidth. Some of the very few existing tractable results state that they are both FPT with respect to the vertex cover number $vc$. More precisely, we observe that both problems admit an FPT algorithm running in time $2^{\mathcal{O}(vc^2)}\cdot n^{\mathcal{O}(1)}$, and a kernelization algorithm that outputs a kernel with $2^{\mathcal{O}(vc)}$ vertices. We prove that unless the Exponential Time Hypothesis fails, Metric Dimension and Geodetic Set, even on graphs of bounded diameter, admit neither an FPT algorithm running in time $2^{o(vc^2)}\cdot n^{\mathcal{O}(1)}$ nor a kernelization algorithm that reduces the solution size and outputs a kernel with $2^{o(vc)}$ vertices. The versatility of our technique enables us to apply it to both these problems. We only know of one other problem in the literature that admits such a tight lower bound. Similarly, the list of known problems with exponential lower bounds on the number of vertices in kernelized instances is very short.
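To pin down the two definitions, the following brute-force checkers verify whether a given set is a resolving set or a geodetic set and search for a smallest one on a small invented example; they take exponential time and are unrelated to the FPT algorithms parameterized by vertex cover discussed above.

```python
# Hedged sketch: brute-force checkers for resolving sets and geodetic sets,
# useful only for sanity-checking small instances.
from itertools import combinations

import networkx as nx

def is_resolving_set(G, S, dist):
    return all(any(dist[w][u] != dist[w][v] for w in S)
               for u, v in combinations(G.nodes, 2))

def is_geodetic_set(G, S, dist):
    # u is covered if it lies on a shortest s1-s2 path for some s1, s2 in S.
    def covered(u):
        return u in S or any(dist[s1][u] + dist[u][s2] == dist[s1][s2]
                             for s1, s2 in combinations(S, 2))
    return all(covered(u) for u in G.nodes)

def smallest(G, predicate):
    dist = dict(nx.all_pairs_shortest_path_length(G))
    for k in range(1, G.number_of_nodes() + 1):
        for S in combinations(G.nodes, k):
            if predicate(G, set(S), dist):
                return set(S)

G = nx.cycle_graph(6)
print("metric dimension witness:", smallest(G, is_resolving_set))
print("geodetic set witness:    ", smallest(G, is_geodetic_set))
```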

Solving a linear system $Ax=b$ is a fundamental scientific computing primitive for which numerous solvers and preconditioners have been developed. These come with parameters whose optimal values depend on the system being solved and are often impossible or too expensive to identify; thus in practice sub-optimal heuristics are used. We consider the common setting in which many related linear systems need to be solved, e.g. during a single numerical simulation. In this scenario, can we sequentially choose parameters that attain a near-optimal overall number of iterations, without extra matrix computations? We answer in the affirmative for Successive Over-Relaxation (SOR), a standard solver whose parameter $\omega$ has a strong impact on its runtime. For this method, we prove that a bandit online learning algorithm--using only the number of iterations as feedback--can select parameters for a sequence of instances such that the overall cost approaches that of the best fixed $\omega$ as the sequence length increases. Furthermore, when given additional structural information, we show that a contextual bandit method asymptotically achieves the performance of the instance-optimal policy, which selects the best $\omega$ for each instance. Our work provides the first learning-theoretic treatment of high-precision linear system solvers and the first end-to-end guarantees for data-driven scientific computing, demonstrating theoretically the potential to speed up numerical methods using well-understood learning algorithms.
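The feedback loop can be illustrated as follows: run SOR with a chosen $\omega$, observe only the iteration count, and feed it to a bandit that selects $\omega$ from a fixed grid for the next instance. The sketch below uses a simple epsilon-greedy heuristic on an invented stream of SPD systems; it is not the bandit algorithm analyzed in the paper and carries no guarantees.

```python
# Hedged sketch: SOR iteration count as bandit feedback, with an epsilon-greedy
# choice of omega over a discretized grid for a stream of related systems.
import numpy as np

rng = np.random.default_rng(0)

def sor_iterations(A, b, omega, tol=1e-8, max_iter=10_000):
    """Run SOR on Ax = b and return the number of iterations to reach tol."""
    n = len(b)
    x = np.zeros(n)
    for it in range(1, max_iter + 1):
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            return it
    return max_iter

def random_spd_system(n=30):
    # A stream of "related" SPD systems (invented for the example).
    M = rng.normal(size=(n, n)) * 0.1
    A = M @ M.T + np.diag(2.0 + rng.random(n))
    return A, rng.normal(size=n)

omegas = np.linspace(0.8, 1.9, 12)                # discretized parameter grid
counts, sums = np.zeros(len(omegas)), np.zeros(len(omegas))
total = 0
for t in range(100):                              # sequence of instances
    if rng.random() < 0.2 or counts.min() == 0:   # explore
        j = rng.integers(len(omegas))
    else:                                         # exploit the best average so far
        j = int(np.argmin(sums / counts))
    A, b = random_spd_system()
    iters = sor_iterations(A, b, omegas[j])       # iteration count = only feedback
    counts[j] += 1
    sums[j] += iters
    total += iters

best = int(np.argmin(sums / np.maximum(counts, 1)))
print("total iterations:", total, "| preferred omega:", omegas[best])
```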

Posterior sampling, i.e., using the exponential mechanism to sample from the posterior distribution, provides $\varepsilon$-pure differential privacy (DP) guarantees and does not suffer from the potentially unbounded privacy breach introduced by $(\varepsilon,\delta)$-approximate DP. In practice, however, one needs to apply approximate sampling methods such as Markov chain Monte Carlo (MCMC), thus re-introducing the unappealing $\delta$-approximation error into the privacy guarantees. To bridge this gap, we propose the Approximate SAmple Perturbation (abbr. ASAP) algorithm, which perturbs an MCMC sample with noise proportional to its Wasserstein-infinity ($W_\infty$) distance from a reference distribution that satisfies pure DP or pure Gaussian DP (i.e., $\delta=0$). We then leverage a Metropolis-Hastings algorithm to generate the sample and prove that the algorithm converges in $W_\infty$ distance. We show that by combining our new techniques with a localization step, we obtain the first nearly linear-time algorithm that achieves the optimal rates in the DP-ERM problem with strongly convex and smooth losses.
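The overall shape of the sample-then-perturb pipeline can be sketched as follows: draw an approximate posterior sample with Metropolis-Hastings, then add noise whose scale is driven by a bound on the $W_\infty$ distance to a pure-DP reference distribution. The log-posterior, proposal, noise calibration, and the bound itself are placeholders below; the actual ASAP calibration and its privacy accounting are in the paper.

```python
# Hedged sketch: Metropolis-Hastings sampling followed by a noise perturbation
# whose scale is tied to an *assumed* W_infinity bound (placeholder values).
import numpy as np

rng = np.random.default_rng(1)

def log_posterior(theta):
    # Placeholder target (a 2-d Gaussian posterior), not tied to any real data.
    return -0.5 * np.sum((theta - np.array([1.0, -0.5])) ** 2)

def metropolis_hastings(log_p, theta0, steps=5000, step_size=0.5):
    theta = np.array(theta0, dtype=float)
    lp = log_p(theta)
    for _ in range(steps):
        prop = theta + step_size * rng.normal(size=theta.shape)
        lp_prop = log_p(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            theta, lp = prop, lp_prop
    return theta

w_inf_bound = 0.05       # assumed bound on the W_infinity distance (placeholder)
epsilon = 1.0            # target pure-DP parameter (placeholder)
sample = metropolis_hastings(log_posterior, theta0=[0.0, 0.0])
release = sample + rng.laplace(scale=w_inf_bound / epsilon, size=sample.shape)
print("perturbed sample:", release)
```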
