
In this article, we provide a unified and simplified approach to derandomize central results in the area of fault-tolerant graph algorithms. Given a graph $G$, a vertex pair $(s,t) \in V(G)\times V(G)$, and a set of edge faults $F \subseteq E(G)$, a replacement path $P(s,t,F)$ is an $s$-$t$ shortest path in $G \setminus F$. For integer parameters $L,f$, a replacement path covering (RPC) is a collection of subgraphs of $G$, denoted by $\mathcal{G}_{L,f}=\{G_1,\ldots, G_r \}$, such that for every set $F$ of at most $f$ faults (i.e., $|F|\le f$) and every replacement path $P(s,t,F)$ of at most $L$ edges, there exists a subgraph $G_i\in \mathcal{G}_{L,f}$ that contains all the edges of $P(s,t,F)$ and none of the edges of $F$. The covering value of the RPC $\mathcal{G}_{L,f}$ is then defined to be the number of subgraphs in $\mathcal{G}_{L,f}$. We present efficient deterministic constructions of $(L,f)$-RPCs whose covering values almost match the randomized ones, for a wide range of parameters. Our time and value bounds improve considerably over the previous construction of Parter (DISC 2019). We also provide an almost matching lower bound for the value of these coverings. A key application of our deterministic constructions is the derandomization of the algebraic construction of the distance sensitivity oracle of Weimann and Yuster (FOCS 2010). The preprocessing and query times of our deterministic algorithm nearly match the randomized bounds. This resolves an open problem of Alon, Chechik, and Cohen (ICALP 2019).
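To make the definition concrete, here is a brute-force probe (our illustration, not the paper's construction) that checks a candidate RPC on a small unweighted graph, using networkx. A subgraph of $G \setminus F$ contains some replacement path exactly when it avoids $F$ and preserves the hop distance of $G \setminus F$, so the sketch tests that weaker "covers some replacement path" condition; the full definition quantifies over every replacement path.

```python
import itertools
import networkx as nx

def covers_some_rp(G, rpc, f, L):
    """Exponential-time probe: for every fault set F with |F| <= f and
    every pair (s, t) whose replacement path uses <= L edges, check that
    some subgraph in `rpc` avoids F and realizes the hop distance of
    G \\ F, i.e. contains *some* replacement path (illustration only)."""
    for k in range(f + 1):
        for F in itertools.combinations(G.edges(), k):
            H = G.copy()
            H.remove_edges_from(F)
            for s in G:
                for t in G:
                    if s == t or not nx.has_path(H, s, t):
                        continue
                    d = nx.shortest_path_length(H, s, t)  # hops in G \ F
                    if d > L:
                        continue
                    ok = any(
                        all(not Gi.has_edge(u, v) for u, v in F)
                        and Gi.has_node(s) and Gi.has_node(t)
                        and nx.has_path(Gi, s, t)
                        and nx.shortest_path_length(Gi, s, t) == d
                        for Gi in rpc
                    )
                    if not ok:
                        return False
    return True
```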

Related content

Multi-agent path planning (MAPP) is the problem of planning collision-free trajectories from start to goal locations for a team of agents. This work studies a largely unexplored setting of MAPP in which streams of agents must pass through their start and goal locations with high throughput. We tackle this problem by formulating a new variant of MAPP, called periodic MAPP, in which the timing of agent appearances is periodic. The objective in periodic MAPP is to find a periodic plan, a set of collision-free trajectories that the agent streams can reuse repeatedly over periods, with a period that is as small as possible. To meet this objective, we propose a solution method based on constraint relaxation and optimization. We show that a periodic plan, once found, can also be used in the more practical case in which agents in a stream appear at random times. We confirm the effectiveness of our method, compared with baseline methods, in terms of throughput in several scenarios that abstract autonomous intersection management tasks.
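As a toy illustration of what a periodic plan must satisfy (our sketch, under simplifying assumptions: discretized time, trajectories given as lists of cell positions, and only vertex conflicts checked, not edge swaps), one can verify that copies of the trajectories launched every $m$ steps never collide:

```python
from math import ceil

def periodic_plan_ok(trajs, m):
    """Probe a candidate periodic plan with period m: a fresh copy of
    every trajectory is launched each m time steps, and no two active
    copies may occupy the same cell at the same time. Vertex conflicts
    only; edge (swap) conflicts are ignored for brevity."""
    T = max(len(tr) for tr in trajs)
    shifts = [k * m for k in range(ceil(T / m))]  # launches that can overlap
    for i, a in enumerate(trajs):
        for j, b in enumerate(trajs):
            for s in shifts:
                if s == 0 and j <= i:
                    continue  # skip self-vs-self and duplicate pairs
                # copy of b launched s steps after the copy of a
                for t in range(s, min(len(a), len(b) + s)):
                    if a[t] == b[t - s]:
                        return False  # vertex collision between copies
    return True
```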

For a positive integer $k$, a proper $k$-coloring of a graph $G$ is a mapping $f: V(G) \rightarrow \{1,2, \ldots, k\}$ such that $f(u) \neq f(v)$ for each edge $uv \in E(G)$. The smallest integer $k$ for which there is a proper $k$-coloring of $G$ is called the chromatic number of $G$, denoted by $\chi(G)$. A \emph{locally identifying coloring} (for short, lid-coloring) of a graph $G$ is a proper $k$-coloring of $G$ such that every pair of adjacent vertices with distinct closed neighborhoods have distinct sets of colors in their closed neighborhoods. The smallest integer $k$ such that $G$ has a lid-coloring with $k$ colors is called the \emph{locally identifying chromatic number} (for short, \emph{lid-chromatic number}) of $G$, denoted by $\chi_{lid}(G)$. In this paper, we study lid-colorings of the Cartesian product and tensor product of two graphs. We prove that if $G$ and $H$ are two connected graphs having at least two vertices, then (a) $\chi_{lid}(G \square H) \leq \chi(G) \chi(H)-1$ and (b) $\chi_{lid}(G \times H) \leq \chi(G) \chi(H)$. Here $G \square H$ and $G \times H$ denote the Cartesian and tensor products of $G$ and $H$, respectively. We also give exact values of the lid-chromatic number of the Cartesian product (resp. tensor product) of two paths, of a cycle and a path, and of two cycles.
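The definition is easy to check mechanically; a minimal sketch (our hypothetical helper `is_lid_coloring`, using networkx) for a coloring given as a vertex-to-color dict:

```python
import networkx as nx

def is_lid_coloring(G, f):
    """Check that f (dict: vertex -> color) is a locally identifying
    coloring of G: proper, and adjacent vertices with distinct closed
    neighborhoods see distinct color sets on their closed neighborhoods."""
    closed = {v: set(G[v]) | {v} for v in G}  # closed neighborhood N[v]
    for u, v in G.edges():
        if f[u] == f[v]:
            return False  # not a proper coloring
        if closed[u] != closed[v]:
            if {f[w] for w in closed[u]} == {f[w] for w in closed[v]}:
                return False  # colors fail to identify u and v locally
    return True
```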

We propose a reinforcement-learning-based method to identify important configurations that connect reactant and product states along chemical reaction paths. By shooting multiple trajectories from these configurations, we can generate an ensemble of configurations concentrated on the transition path ensemble. This configuration ensemble can be effectively employed in a neural-network-based partial differential equation solver to obtain an approximate solution of a restricted backward Kolmogorov equation, even when the dimension of the problem is very high. The resulting solution, known as the committor function, encodes mechanistic information for the reaction and can in turn be used to evaluate reaction rates.
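For reference, the committor $q$ solves a backward-Kolmogorov boundary-value problem; in the standard overdamped-Langevin setting with potential $V$, inverse temperature $\beta$, reactant set $A$, and product set $B$ it takes the form below (the "restricted" variant in the abstract confines the equation to the sampled region):

```latex
% Committor boundary-value problem (standard overdamped-Langevin form);
% the restricted variant confines it to the sampled transition region.
\[
  \beta^{-1}\,\Delta q(x) \;-\; \nabla V(x)\cdot\nabla q(x) \;=\; 0,
  \qquad x \in \Omega \setminus (A \cup B),
\]
\[
  q\big|_{\partial A} = 0 \;(\text{reactant}),
  \qquad
  q\big|_{\partial B} = 1 \;(\text{product}).
\]
```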

Existing online learning algorithms for adversarial Markov Decision Processes achieve ${O}(\sqrt{T})$ regret after $T$ rounds of interactions even if the loss functions are chosen arbitrarily by an adversary, with the caveat that the transition function has to be fixed. This is because it has been shown that adversarial transition functions make no-regret learning impossible. Despite such impossibility results, in this work, we develop algorithms that can handle both adversarial losses and adversarial transitions, with regret increasing smoothly in the degree of maliciousness of the adversary. More concretely, we first propose an algorithm that enjoys $\widetilde{{O}}(\sqrt{T} + C^{\textsf{P}})$ regret, where $C^{\textsf{P}}$ measures how adversarial the transition functions are and can be at most ${O}(T)$. While this algorithm itself requires knowledge of $C^{\textsf{P}}$, we further develop a black-box reduction approach that removes this requirement. Moreover, we also show that further refinements of the algorithm not only maintain the same regret bound, but also simultaneously adapt to easier environments (where losses are generated in a certain stochastically constrained manner, as in Jin et al. [2021]) and achieve $\widetilde{{O}}(U + \sqrt{UC^{\textsf{L}}} + C^{\textsf{P}})$ regret, where $U$ is some standard gap-dependent coefficient and $C^{\textsf{L}}$ is the amount of corruption on losses.
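To fix intuition for the quantities involved (a hedged sketch on our part; the paper's formal definitions may differ in details such as the episodic structure), regret compares the learner's policies $\pi_t$ against the best fixed policy in hindsight, and $C^{\textsf{P}}$ can be read as the cumulative deviation of the realized transitions $P_t$ from some fixed reference transition function $\bar{P}$:

```latex
% Hedged sketch of the quantities; the paper's exact definitions may differ.
\[
  \mathrm{Reg}_T \;=\; \sum_{t=1}^{T} V^{\pi_t}\!\left(P_t,\ell_t\right)
  \;-\; \min_{\pi}\,\sum_{t=1}^{T} V^{\pi}\!\left(P_t,\ell_t\right),
\]
\[
  C^{\mathsf{P}} \;=\; \sum_{t=1}^{T}\,
  \max_{s,a}\;\bigl\|P_t(\cdot\mid s,a)-\bar{P}(\cdot\mid s,a)\bigr\|_{1}
  \quad\text{(one natural choice of corruption measure).}
\]
```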

Given a closed simple polygon $P$, we say two points $p,q$ see each other if the segment $pq$ is fully contained in $P$. The art gallery problem seeks a minimum-size set $G\subset P$ of guards that sees $P$ completely. The only currently correct algorithm to solve the art gallery problem exactly uses algebraic methods and is attributed to Sharir. As the art gallery problem is $\exists\mathbb{R}$-complete, it seems unlikely that algebraic methods can be avoided without additional assumptions. In this paper, we introduce the notion of vision stability. To describe vision stability, consider an enhanced guard that can see "around the corner" by an angle of $\delta$, or a diminished guard whose vision is "blocked" by an angle of $\delta$ at reflex vertices. A polygon $P$ has vision stability $\delta$ if the optimal number of enhanced guards needed to guard $P$ is the same as the optimal number of diminished guards. We argue that most relevant polygons are vision-stable. We describe a one-shot vision-stable algorithm that computes an optimal guard set for vision-stable polygons in polynomial time, solving one integer program; it is guaranteed to find the optimal solution for every vision-stable polygon. We also implemented an iterative vision-stable algorithm and show that its practical performance is slower than, but comparable with, other state-of-the-art algorithms. Our iterative algorithm is inspired by and closely follows the one-shot algorithm, but delays several steps and computes them only when deemed necessary. Given a chord $c$ of a polygon, we denote by $n(c)$ the number of vertices visible from $c$. The chord-width of a polygon is the maximum of $n(c)$ over all possible chords $c$. The set of vision-stable polygons admits an FPT algorithm when parameterized by chord-width. Furthermore, the one-shot algorithm runs in FPT time when parameterized by the number of reflex vertices.
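The basic visibility predicate from the first sentence has a direct computational reading; a minimal sketch using shapely (our illustration; the $\delta$-enhanced and diminished guards need additional angular bookkeeping that is omitted here):

```python
from shapely.geometry import LineString, Polygon

def sees(P: Polygon, p, q) -> bool:
    """p and q see each other iff segment pq stays within the closed
    polygon P; covers() (unlike contains()) accepts boundary contact."""
    return P.covers(LineString([p, q]))

# e.g. an L-shaped polygon where the reflex corner blocks one pair:
P = Polygon([(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)])
assert sees(P, (0, 0), (2, 1))        # segment stays in the bottom arm
assert not sees(P, (2, 0), (1, 2))    # blocked by the reflex vertex (1, 1)
```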

The paper revisits the robust $s$-$t$ path problem, one of the most fundamental problems in robust optimization. In the problem, we are given a directed graph with $n$ vertices and $k$ distinct cost functions (scenarios) defined over edges, and we aim to choose an $s$-$t$ path whose total cost under the worst-case scenario is as small as possible. Viewing each cost function as being associated with an agent, our goal is to find a common $s$-$t$ path minimizing the maximum objective among all agents, and thus create a fair solution for them. The problem is hard to approximate within $o(\log k)$ by any quasi-polynomial-time algorithm unless $\mathrm{NP} \subseteq \mathrm{DTIME}(n^{\mathrm{poly}\log n})$, and the best approximation ratio known to date is $\widetilde{O}(\sqrt{n})$, which is based on the natural flow linear program. A longstanding open question is whether a polylogarithmic approximation can be achieved when quasi-polynomial running time is allowed. We give the first polylogarithmic approximation for robust $s$-$t$ path since the problem was proposed more than two decades ago. In particular, we introduce an $O(\log n \log k)$-approximation algorithm running in quasi-polynomial time. The algorithm is built on a novel linear program formulation for a decision-tree-type structure, which enables us to get rid of the $\Omega(\max\{k,\sqrt{n}\})$ integrality gap of the natural flow LP. Further, we also consider some well-known graph classes, e.g., graphs with bounded treewidth, and show that the polylogarithmic approximation can be achieved in polynomial time on these graphs. We hope the techniques proposed in this paper can offer new insights into the robust $s$-$t$ path problem and related problems in robust optimization.
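On tiny instances the min-max objective can be checked directly by exhaustive enumeration (an exponential-time baseline for intuition only, using networkx; it is in no way the paper's algorithm):

```python
import networkx as nx

def robust_path_bruteforce(G, scenarios, s, t):
    """Among all simple s-t paths of a directed graph G, pick one that
    minimizes the maximum total cost over all scenarios. `scenarios`
    is a list of dicts mapping each directed edge (u, v) to its cost."""
    best_path, best_val = None, float("inf")
    for path in nx.all_simple_paths(G, s, t):
        edges = list(zip(path, path[1:]))
        worst = max(sum(c[e] for e in edges) for c in scenarios)
        if worst < best_val:
            best_path, best_val = path, worst
    return best_path, best_val
```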

Determinantal point processes (DPPs) were introduced by Macchi as a model in quantum optics in the 1970s. Since then, they have been widely used as models and subsampling tools in statistics and computer science. Most applications require sampling from a DPP, and given their quantum origin, it is natural to wonder whether sampling a DPP on a quantum computer is easier than on a classical one. We focus here on DPPs over a finite state space, which are distributions over the subsets of $\{1,\dots,N\}$ parametrized by an $N\times N$ Hermitian kernel matrix. Vanilla sampling consists of two steps, of respective costs $\mathcal{O}(N^3)$ and $\mathcal{O}(Nr^2)$ operations on a classical computer, where $r$ is the rank of the kernel matrix. A large first part of the paper explains why the state of the art in quantum simulation of fermionic systems already yields quantum DPP sampling algorithms. We then modify existing quantum circuits and discuss their insertion in a full DPP sampling pipeline that starts from practical kernel specifications. The bottom line is that, with $P$ (classical) parallel processors, we can divide the preprocessing cost by $P$ and build a quantum circuit with $\mathcal{O}(Nr)$ gates that samples a given DPP, with depth varying from $\mathcal{O}(N)$ to $\mathcal{O}(r\log N)$ depending on qubit-communication constraints on the target machine. We also connect existing work on the simulation of superconductors to Pfaffian point processes, which generalize DPPs and would be a natural addition to the machine learner's toolbox. Finally, the circuits are empirically validated on a classical simulator and on 5-qubit machines.
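The vanilla two-step sampler referred to above can be sketched in a few lines of numpy (an HKPV-style implementation; pivoting and re-orthonormalization details vary between references, so treat this as illustrative):

```python
import numpy as np

def sample_dpp(K, rng=None):
    """Vanilla two-step DPP sampler: an O(N^3) eigendecomposition, then
    O(N r^2) sequential sampling from the induced projection kernel.
    K must be Hermitian with eigenvalues in [0, 1]."""
    rng = rng or np.random.default_rng()
    lam, V = np.linalg.eigh(K)                 # step 1: O(N^3)
    V = V[:, rng.random(lam.shape) < lam]      # keep eigenvector i w.p. lam_i
    sample = []
    while V.shape[1] > 0:                      # step 2: one item per column
        p = (np.abs(V) ** 2).sum(axis=1)       # p_i proportional to ||row i||^2
        p /= p.sum()
        i = rng.choice(len(p), p=p)
        sample.append(i)
        j = np.argmax(np.abs(V[i]))            # pivot column with V[i, j] != 0
        V = V - np.outer(V[:, j] / V[i, j], V[i])  # zero out row i
        V = np.delete(V, j, axis=1)            # drop the (now zero) pivot
        if V.shape[1] > 0:                     # re-orthonormalize the rest
            V, _ = np.linalg.qr(V)
    return sorted(sample)
```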

Dimensionality reduction (DR) algorithms compress high-dimensional data into a lower-dimensional representation while preserving important features of the data. DR is a critical step in many analysis pipelines, as it enables visualisation, noise reduction and efficient downstream processing of the data. In this work, we introduce the ProbDR variational framework, which interprets a wide range of classical DR algorithms as probabilistic inference algorithms. ProbDR encompasses PCA, CMDS, LLE, LE, MVU, diffusion maps, kPCA, Isomap, (t-)SNE, and UMAP. In our framework, a low-dimensional latent variable is used to construct a covariance, precision, or graph Laplacian matrix, which can be used as part of a generative model for the data. Inference is done by optimizing an evidence lower bound. We demonstrate the internal consistency of our framework and show that it enables the use of probabilistic programming languages (PPLs) for DR. Additionally, we illustrate that the framework facilitates reasoning about unseen data and argue that our generative models approximate Gaussian processes (GPs) on manifolds. By providing a unified view of DR, our framework facilitates communication, reasoning about uncertainties, model composition, and extensions, particularly when domain knowledge is present.
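As a toy rendering of this generative view (our construction, considerably simpler than the paper's models: we fit a maximum-likelihood point estimate rather than an evidence lower bound), low-dimensional latents $X$ parametrize an $n \times n$ covariance $XX^\top + \sigma^2 I$ that is matched to the empirical covariance of the data:

```python
import torch

def probdr_toy_fit(Y, d=2, steps=500, lr=0.05):
    """Toy maximum-likelihood analogue of the covariance view (our
    sketch): latents X define the covariance C = X X^T + s^2 I of an
    n-point generative model, fitted to the empirical covariance of Y
    (an n x features tensor)."""
    n = Y.shape[0]
    S = Y @ Y.T / Y.shape[1]                  # empirical n x n covariance
    X = torch.randn(n, d, requires_grad=True)
    log_s2 = torch.zeros((), requires_grad=True)
    opt = torch.optim.Adam([X, log_s2], lr=lr)
    for _ in range(steps):
        C = X @ X.T + torch.exp(log_s2) * torch.eye(n)
        # Gaussian negative log-likelihood of the data, up to constants
        nll = torch.logdet(C) + torch.trace(torch.linalg.solve(C, S))
        opt.zero_grad()
        nll.backward()
        opt.step()
    return X.detach()                         # low-dimensional embedding
```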

We design replicable algorithms in the context of statistical clustering under the recently introduced notion of replicability from Impagliazzo et al. [2022]. According to this definition, a clustering algorithm is replicable if, with high probability, its output induces the exact same partition of the sample space after two executions on different inputs drawn from the same distribution, when its internal randomness is shared across the executions. We propose such algorithms for the statistical $k$-medians, statistical $k$-means, and statistical $k$-centers problems by utilizing approximation routines for their combinatorial counterparts in a black-box manner. In particular, we demonstrate a replicable $O(1)$-approximation algorithm for statistical Euclidean $k$-medians ($k$-means) with $\operatorname{poly}(d)$ sample complexity. We also describe an $O(1)$-approximation algorithm with an additional $O(1)$ additive error for statistical Euclidean $k$-centers, albeit with $\exp(d)$ sample complexity. In addition, we provide experiments on synthetic distributions in 2D, using the $k$-means++ implementation from sklearn as a black box, that validate our theoretical results.
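The definition suggests a quick empirical probe (our illustration only; vanilla k-means++ with a shared seed will typically fail it, which is precisely what motivates replicable algorithms): run the same seeded sklearn k-means on two independent samples and compare the partitions induced on a fixed reference set. `sample_dist` is a hypothetical callable producing i.i.d. points.

```python
import numpy as np
from sklearn.cluster import KMeans

def replicability_probe(sample_dist, k=3, seed=0, n=2000):
    """Run k-means++ with the SAME internal randomness on two independent
    samples and test whether the induced partitions of a fixed reference
    grid agree up to a renaming of cluster ids. `sample_dist(rng, n)` is
    a hypothetical callable returning n i.i.d. points in 2D."""
    grid = np.random.default_rng(2).uniform(-5, 5, size=(500, 2))
    rng = np.random.default_rng(1)
    labels = []
    for _ in range(2):
        X = sample_dist(rng, n)                           # fresh sample
        km = KMeans(n_clusters=k, init="k-means++",
                    n_init=10, random_state=seed).fit(X)  # shared seed
        labels.append(km.predict(grid))
    pairs = set(zip(labels[0], labels[1]))
    # identical partitions (up to renaming) induce a one-to-one pairing
    return len(pairs) == len(set(labels[0])) == len(set(labels[1]))
```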

Classic algorithms and machine learning systems like neural networks are both abundant in everyday life. While classic computer science algorithms are suitable for precise execution of exactly defined tasks such as finding the shortest path in a large graph, neural networks allow learning from data to predict the most likely answer in more complex tasks such as image classification, which cannot be reduced to an exact algorithm. To get the best of both worlds, this thesis explores combining both concepts, leading to more robust, better-performing, more interpretable, more computationally efficient, and more data-efficient architectures. The thesis formalizes the idea of algorithmic supervision, which allows a neural network to learn from or in conjunction with an algorithm. When integrating an algorithm into a neural architecture, it is important that the algorithm is differentiable, so that the architecture can be trained end-to-end and gradients can be propagated back through the algorithm in a meaningful way. To make algorithms differentiable, this thesis proposes a general method for continuously relaxing algorithms by perturbing variables and approximating the expectation value in closed form, i.e., without sampling. In addition, this thesis proposes differentiable algorithms, such as differentiable sorting networks, differentiable renderers, and differentiable logic gate networks. Finally, this thesis presents alternative training strategies for learning with algorithms.
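The "perturb and integrate in closed form" recipe is easiest to see on the smallest primitive, a comparator: perturbing the inputs with logistic noise of scale $\tau$ makes the expectation of the hard comparison $\mathbb{1}[x > y]$ exactly $\sigma((x-y)/\tau)$, which is the building block of a differentiable sorting network (a minimal PyTorch sketch under these assumptions, not the thesis code itself):

```python
import torch

def soft_greater(x, y, tau=1.0):
    """Continuous relaxation of the hard comparison 1[x > y]: perturb the
    inputs with logistic noise of scale tau and take the expectation in
    closed form, E[1[x - y + eps > 0]] = sigmoid((x - y) / tau)."""
    return torch.sigmoid((x - y) / tau)

def soft_swap(a, b, tau=1.0):
    """One conditional swap of a differentiable sorting network:
    converges to (min, max) as tau -> 0, yet stays differentiable."""
    p = soft_greater(a, b, tau)      # soft 'probability' that a > b
    lo = p * b + (1 - p) * a         # soft minimum
    hi = p * a + (1 - p) * b         # soft maximum
    return lo, hi
```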
