
The Transposition Distance Problem (TDP) is a classical problem in genome rearrangements that seeks the minimum number of transpositions needed to transform one linear chromosome into another, the two chromosomes being represented by permutations $\pi$ and $\sigma$. This paper focuses on the equivalent problem of Sorting By Transpositions (SBT), where $\sigma$ is the identity permutation $\iota$. Specifically, we investigate properties of palisades, a family of permutations that are ``hard'' to sort, as they require many transpositions above the celebrated lower bound devised by Bafna and Pevzner. By determining the transposition distance of palisades, we are able to provide the exact transposition diameter of $3$-permutations (TD3), a special subset of the Symmetric Group $S_n$ essential for the study of approximate solutions for SBT using the simplification technique. The exact value of TD3 had remained unknown since Elias and Hartman showed an upper bound for it. Another consequence of determining the transposition distance of palisades is that, when the lower bound of Bafna and Pevzner is used, it is impossible to guarantee approximation ratios lower than $1.375$ for SBT. This finding has significant implications for the study of SBT, a problem that has been the subject of intense research efforts for the past 25 years.
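
As a concrete reference for the operation involved, the sketch below (ours, not taken from the paper) applies a transposition $\rho(i,j,k)$, which exchanges two adjacent blocks of a permutation, and computes the transposition distance of tiny permutations by brute-force breadth-first search; the Bafna-Pevzner lower bound and palisades are not implemented here.

```python
from collections import deque

def apply_transposition(perm, i, j, k):
    """Transposition rho(i, j, k), 0 <= i < j < k <= n: exchange the
    adjacent blocks perm[i:j] and perm[j:k]."""
    return perm[:i] + perm[j:k] + perm[i:j] + perm[k:]

def transposition_distance(perm):
    """Breadth-first search to the identity; feasible only for small n."""
    n = len(perm)
    identity = tuple(range(1, n + 1))
    dist = {tuple(perm): 0}
    queue = deque([tuple(perm)])
    while queue:
        cur = queue.popleft()
        if cur == identity:
            return dist[cur]
        for i in range(n):
            for j in range(i + 1, n):
                for k in range(j + 1, n + 1):
                    nxt = tuple(apply_transposition(list(cur), i, j, k))
                    if nxt not in dist:
                        dist[nxt] = dist[cur] + 1
                        queue.append(nxt)

print(transposition_distance([3, 1, 2]))   # 1: a single transposition sorts it
```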

Related content

Multilevel estimators aim at reducing the variance of Monte Carlo statistical estimators by combining samples generated with simulators of different costs and accuracies. In particular, the recent work of Schaden and Ullmann (2020) on the multilevel best linear unbiased estimator (MLBLUE) introduces a framework unifying several multilevel and multifidelity techniques. The MLBLUE is reintroduced here using a variance minimization approach rather than the regression approach of Schaden and Ullmann. We then discuss possible extensions of the scalar MLBLUE to a multidimensional setting, i.e. from the expectation of scalar random variables to the expectation of random vectors. Several estimators of increasing complexity are proposed: a) multilevel estimators with scalar weights, b) with element-wise weights, c) with spectral weights and d) with general matrix weights. The computational cost of each method is discussed. We finally extend the MLBLUE to the estimation of second-order moments in the multidimensional case, i.e. to the estimation of covariance matrices. The multilevel estimators proposed are e) a multilevel estimator with scalar weights and f) one with element-wise weights. In large-dimension applications such as data assimilation for geosciences, the latter estimator is computationally unaffordable. As a remedy, we also propose g) a multilevel covariance matrix estimator with optimal multilevel localization, inspired by the optimal localization theory of M\'en\'etrier and Aulign\'e (2015). Some practical details on weighted MLMC estimators of covariance matrices are given in the appendix.
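
To make the variance-minimization viewpoint concrete, here is a minimal two-level sketch with a single scalar weight, in the spirit of a control-variate estimator; the models `f` and `g` below are hypothetical stand-ins, and this is not the full MLBLUE of Schaden and Ullmann.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):                         # "high-fidelity" model (hypothetical)
    return np.sin(x) + 0.05 * x**2

def g(x):                         # cheap "low-fidelity" model (hypothetical)
    return x - x**3 / 6

# A small budget of paired high/low-fidelity samples, plus many cheap samples.
x_pair = rng.normal(size=200)
x_cheap = rng.normal(size=20_000)
f_s, g_s = f(x_pair), g(x_pair)

# Scalar weight minimizing the variance of f_s - alpha * (g_s - E[g]).
cov = np.cov(f_s, g_s)
alpha = cov[0, 1] / cov[1, 1]

mu_g = g(x_cheap).mean()                       # accurate estimate of E[g]
estimate = f_s.mean() - alpha * (g_s.mean() - mu_g)
print(estimate, f_s.mean())                    # variance-reduced vs. plain MC
```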

Providing formal safety and performance guarantees for autonomous systems is becoming increasingly important. Hamilton-Jacobi (HJ) reachability analysis is a popular formal verification tool for providing these guarantees, since it can handle general nonlinear system dynamics, bounded adversarial system disturbances, and state and input constraints. However, it involves solving a PDE, whose computational and memory complexity scales exponentially with respect to the state dimensionality, making its direct use on large-scale systems intractable. A recently proposed method called DeepReach overcomes this challenge by leveraging a sinusoidal neural PDE solver for high-dimensional reachability problems, whose computational requirements scale with the complexity of the underlying reachable tube rather than the state space dimension. Unfortunately, neural networks can make errors and thus the computed solution may not be safe, which falls short of our overarching goal of providing formal safety assurances. In this work, we propose a method to compute an error bound for the DeepReach solution. This error bound can then be used for reachable tube correction, resulting in a safe approximation of the true reachable tube. We also propose a scenario-based optimization approach to compute a probabilistic bound on this error correction for general nonlinear dynamical systems. We demonstrate the efficacy of the proposed approach in obtaining probabilistically safe reachable tubes for high-dimensional rocket-landing and multi-vehicle collision-avoidance problems.
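
As a rough illustration of the scenario-based flavor (a generic one-dimensional scenario bound, not necessarily the specific bound derived in this work), one can sample states, take the worst observed discrepancy between the learned value and a trusted reference as the correction, and attach a probabilistic guarantee to it; the two value functions below are placeholders, not DeepReach itself.

```python
import numpy as np

rng = np.random.default_rng(1)

def learned_value(x):        # placeholder for a neural value function
    return float(np.sin(x).sum())

def reference_value(x):      # placeholder for a trusted pointwise check
    return float(np.sin(x).sum() + 0.01 * rng.standard_normal())

N, beta = 2000, 1e-6         # number of sampled states, confidence parameter
states = rng.uniform(-1.0, 1.0, size=(N, 3))
errors = np.array([abs(learned_value(x) - reference_value(x)) for x in states])

delta = errors.max()                  # scenario solution: worst sampled error
eps = 1.0 - beta ** (1.0 / N)         # violation level for one decision variable
print(f"|error| <= {delta:.4f} with probability >= {1 - eps:.4f} "
      f"(confidence 1 - {beta:.0e})")
```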

In this paper, we study the well-known "Heavy Ball" method for convex and nonconvex optimization introduced by Polyak in 1964, and establish its convergence under a variety of situations. Traditionally, most algorithms use "full-coordinate update," that is, at each step, every component of the argument is updated. However, when the dimension of the argument is very high, it is more efficient to update some but not all components of the argument at each iteration. We refer to this as "batch updating" in this paper. When gradient-based algorithms are used together with batch updating, in principle it is sufficient to compute only those components of the gradient for which the argument is to be updated. However, if a method such as backpropagation is used to compute these components, computing only some components of the gradient does not offer much saving over computing the entire gradient. Therefore, to achieve a noticeable reduction in CPU usage at each step, one can use first-order differences to approximate the gradient. The resulting estimates are biased and also have unbounded variance. Thus some delicate analysis is required to ensure that the HB algorithm converges when batch updating is used instead of full-coordinate updating, and/or approximate gradients are used instead of true gradients. In this paper, we establish the almost sure convergence of the iterations to the stationary point(s) of the objective function under suitable conditions; in addition, we also derive upper bounds on the rate of convergence. To the best of our knowledge, there is no other paper that combines all of these features. This paper is dedicated to the memory of Boris Teodorovich Polyak.
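
A minimal sketch (our own toy setting, not the paper's assumptions) of Heavy Ball with batch updating and forward-difference gradient approximations: at each step only a random subset of coordinates is updated, and only the corresponding partial derivatives are approximated.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):                                    # smooth toy objective (assumption)
    return 0.5 * np.sum(x ** 2) + np.sum(np.cos(x))

def fd_partial(f, x, i, h):
    """Forward-difference approximation of the i-th partial derivative."""
    e = np.zeros_like(x)
    e[i] = h
    return (f(x + e) - f(x)) / h

d, steps, batch = 50, 2000, 5
eta, mu, h = 0.05, 0.9, 1e-4                 # step size, momentum, FD increment
x = rng.normal(size=d)
x_prev = x.copy()

for _ in range(steps):
    coords = rng.choice(d, size=batch, replace=False)    # batch of coordinates
    x_new = x.copy()
    for i in coords:                         # only these partials are computed
        g_i = fd_partial(f, x, i, h)
        x_new[i] = x[i] - eta * g_i + mu * (x[i] - x_prev[i])
    x_prev, x = x, x_new

print(f(x))    # should approach the minimum value f(0) = d
```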

The notion of graph covers is a discretization of covering spaces introduced and deeply studied in topology. In discrete mathematics and theoretical computer science, they have attracted a lot of attention from both the structural and complexity perspectives. Nonetheless, disconnected graphs were usually omitted from the considerations with the explanation that it is sufficient to understand coverings of the connected components of the target graph by components of the source one. However, different (but equivalent) versions of the definition of covers of connected graphs generalize to non-equivalent definitions for disconnected graphs. The aim of this paper is to summarize this issue and to compare three different approaches to covers of disconnected graphs: 1) locally bijective homomorphisms, 2) globally surjective locally bijective homomorphisms (which we call \emph{surjective covers}), and 3) locally bijective homomorphisms which cover every vertex the same number of times (which we call \emph{equitable covers}). The standpoint of our comparison is the complexity of deciding if an input graph covers a fixed target graph. We show that both surjective and equitable covers satisfy what certainly is a natural and welcome property: covering a disconnected graph is polynomial-time decidable if it is polynomial-time decidable for every connected component of the graph, and it is NP-complete if it is NP-complete for at least one of its components. We further argue that the third variant, equitable covers, is the most natural one, in particular when considering covers of colored graphs. Moreover, the complexities of surjective and equitable covers differ from the fixed-parameter complexity point of view. In line with the current trends in topological graph theory, as well as its applications in mathematical physics, we consider graphs in a very general sense[...]
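
For concreteness, the following small sketch (ours, not from the paper) checks whether a given vertex map between two simple graphs, connected or not, is a locally bijective homomorphism, i.e. a covering projection; graphs are represented as adjacency dictionaries.

```python
def is_locally_bijective_homomorphism(G, H, f):
    """G, H: adjacency dicts of simple graphs; f: dict mapping V(G) -> V(H)."""
    for v, nbrs in G.items():
        images = [f[u] for u in nbrs]
        # homomorphism: every image of a neighbour of v is a neighbour of f(v)
        if any(w not in H[f[v]] for w in images):
            return False
        # local bijectivity: the images hit each neighbour of f(v) exactly once
        if len(set(images)) != len(images) or set(images) != set(H[f[v]]):
            return False
    return True

# Example: the 6-cycle covers the triangle, every vertex covered twice.
C6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
C3 = {i: [(i - 1) % 3, (i + 1) % 3] for i in range(3)}
print(is_locally_bijective_homomorphism(C6, C3, {i: i % 3 for i in range(6)}))  # True
```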

We consider the problem of estimating the causal effect of a treatment on an outcome in linear structural causal models (SCM) with latent confounders when we have access to a single proxy variable. Several methods (such as the difference-in-differences (DiD) estimator or negative outcome control) have been proposed in this setting in the literature. However, these approaches require either restrictive assumptions on the data-generating model or access to at least two proxy variables. We propose a method to estimate the causal effect using cross moments between the treatment, the outcome, and the proxy variable. In particular, we show that the causal effect can be identified with simple arithmetic operations on the cross moments if the latent confounder in the linear SCM is non-Gaussian. In this setting, the DiD estimator provides an unbiased estimate only in the special case where the latent confounder has exactly the same direct causal effects on the outcomes in the pre-treatment and post-treatment phases. This translates to the common trend assumption in DiD, which we effectively relax. Additionally, we provide an impossibility result showing that the causal effect cannot be identified if the observational distribution over the treatment, the outcome, and the proxy is jointly Gaussian. Our experiments on both synthetic and real-world datasets showcase the effectiveness of the proposed approach in estimating the causal effect.

The multigrid V-cycle method is a popular method for solving systems of linear equations. It computes an approximate solution by using smoothing on fine levels and solving a system of linear equations on the coarsest level. How the coarsest-level system is solved depends on the size and difficulty of the problem. If the size permits, it is typical to use a direct method based on LU or Cholesky decomposition. In settings with large coarsest-level problems, approximate solvers such as iterative Krylov subspace methods, or direct methods based on low-rank approximation, are often used. The accuracy of the coarsest-level solver is typically determined based on the experience of the users with the concrete problems and methods. In this paper we present an approach to analyzing the effects of approximate coarsest-level solves on the convergence of the V-cycle method for symmetric positive definite problems. Using this approach we discuss how the convergence of the V-cycle method may be affected by (1) the choice of the tolerance in a stopping criterion based on the relative residual norm for an iterative coarsest-level solver or (2) the choices of the low-rank threshold parameter and finite precision arithmetic for a block low-rank direct coarsest-level solver. Furthermore, we present new coarsest-level stopping criteria tailored to the multigrid method and suggest a heuristic strategy for their effective use in practical computations.
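
To fix ideas, here is a rough geometric-multigrid sketch for the 1D Poisson problem (our own toy example): a recursive V-cycle with weighted Jacobi smoothing in which the coarsest system is solved only approximately, by CG stopped at a relative-residual tolerance. It is meant only to show where the coarsest-level tolerance enters; it is not the block low-rank solver considered in the paper.

```python
import numpy as np

def poisson1d(n):
    """Discrete -u'' on n interior points of (0, 1), h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def jacobi(A, x, b, sweeps=3, omega=2.0 / 3.0):
    d = np.diag(A)
    for _ in range(sweeps):
        x = x + omega * (b - A @ x) / d
    return x

def cg(A, b, tol):
    """Plain conjugate gradients stopped at relative residual norm `tol`."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    nb = np.linalg.norm(b)
    while np.linalg.norm(r) > tol * nb:
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x

def restrict(r):                         # full weighting: fine -> coarse
    return 0.25 * (r[0:-2:2] + 2.0 * r[1:-1:2] + r[2::2])

def prolong(e, n_fine):                  # linear interpolation: coarse -> fine
    x = np.zeros(n_fine)
    x[1:-1:2] = e
    x[0:-2:2] += 0.5 * e
    x[2::2] += 0.5 * e
    return x

def v_cycle(A, b, x, coarse_tol, min_n=7):
    n = len(b)
    if n <= min_n:                       # coarsest level: approximate solve only
        return cg(A, b, coarse_tol)
    x = jacobi(A, x, b)                  # pre-smoothing
    r_c = restrict(b - A @ x)
    e_c = v_cycle(poisson1d((n - 1) // 2), r_c, np.zeros_like(r_c), coarse_tol)
    x = x + prolong(e_c, n)
    return jacobi(A, x, b)               # post-smoothing

n = 127
A, b, x = poisson1d(n), np.ones(n), np.zeros(n)
for _ in range(10):
    x = v_cycle(A, b, x, coarse_tol=1e-2)
print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```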

We consider an inverse problem for a finite graph $(X,E)$ where we are given a subset of vertices $B\subset X$ and the distances $d_{(X,E)}(b_1,b_2)$ of all vertices $b_1,b_2\in B$. The distance of points $x_1,x_2\in X$ is defined as the minimal number of edges needed to connect the two vertices, so all edges have length 1. The inverse problem is a discrete version of the boundary rigidity problem in Riemannian geometry or the inverse travel time problem in geophysics. We show that this problem has a unique solution under certain conditions and develop quantum computing methods to solve it. We prove the following uniqueness result: when $(X,E)$ is a tree and $B$ is the set of leaves of the tree, the graph $(X,E)$ can be uniquely determined in the class of all graphs having a fixed number of vertices. We present a quantum computing algorithm which produces a graph $(X,E)$, or one such graph, that has a given number of vertices and the required distances between vertices in $B$. To this end we develop an algorithm that takes in a qubit representation of a graph and combine it with Grover's search algorithm. The algorithm can be implemented using only $O(|X|^2)$ qubits, the same order as the number of elements in the adjacency matrix of $(X,E)$. It also has a quadratic improvement in computational cost compared to standard classical algorithms. Finally, we consider applications in the theory of computation, and show that a slight modification of the above inverse problem is NP-complete: all NP problems can be reduced to the discrete inverse problem we consider.
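
As a classical point of comparison (a brute-force sketch of ours, not the quantum algorithm of the paper), one can enumerate all graphs on a fixed vertex set and keep those whose BFS distances between the vertices of $B$ equal the prescribed values; the quadratic quantum speed-up comes from replacing this exhaustive search with Grover's algorithm over qubit-encoded adjacency matrices.

```python
from collections import deque
from itertools import combinations, product

def bfs_dist(adj, s):
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def graphs_matching(n, target):
    """Yield edge lists of graphs on {0,...,n-1} realizing the distances in
    `target`, a dict {(b1, b2): d}.  Cost is 2^(n(n-1)/2) -- tiny n only."""
    pairs = list(combinations(range(n), 2))
    for mask in product([0, 1], repeat=len(pairs)):
        edges = [p for p, bit in zip(pairs, mask) if bit]
        adj = {v: [] for v in range(n)}
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)
        if all(bfs_dist(adj, b1).get(b2) == d for (b1, b2), d in target.items()):
            yield edges

# B = {0, 2} with d(0, 2) = 2 on 3 vertices: only the path 0-1-2 matches.
print(list(graphs_matching(3, {(0, 2): 2})))
```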

In this research paper, we address the Distinct Elements estimation problem in the context of streaming algorithms. The problem involves estimating the number of distinct elements in a given data stream $\mathcal{A} = (a_1, a_2,\ldots, a_m)$, where $a_i \in \{1, 2, \ldots, n\}$. Over the past four decades, the Distinct Elements problem has received considerable attention, theoretically and empirically, leading to the development of space-optimal algorithms. A recent sampling-based algorithm proposed by Chakraborty et al. [11] has garnered significant interest and has even attracted the attention of the renowned computer scientist Donald E. Knuth, who wrote an article on the same topic [6] and called the algorithm CVM. In this paper, we thoroughly examine these algorithms (referred to as CVM1, CVM2, DonD, and DonD' in [6]). We first unify all these algorithms under the name cutoff-based algorithms, and then provide an approximation and bias analysis of them.
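
For reference, here is a sketch of the simple "textbook" variant of the sampling-based estimator of Chakraborty et al.; the variants CVM1, CVM2, DonD, and DonD' analysed in the paper differ from it in details (e.g. in how the cutoff is updated), so this is illustrative only.

```python
import random

def cvm_estimate(stream, s):
    """Estimate the number of distinct elements using a buffer of size < s."""
    p = 1.0
    buf = set()
    for a in stream:
        buf.discard(a)                    # keep at most one copy of each value
        if random.random() < p:
            buf.add(a)
        if len(buf) == s:                 # buffer full: thin it, halve the rate
            buf = {x for x in buf if random.random() < 0.5}
            p /= 2.0
            if len(buf) == s:
                raise RuntimeError("thinning removed nothing (very unlikely)")
    return len(buf) / p

stream = [random.randrange(1000) for _ in range(100_000)]
print(cvm_estimate(stream, s=400), len(set(stream)))
```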

The rapidly increasing quality of AI-generated content makes it difficult to distinguish between human and AI-generated texts, which may lead to undesirable consequences for society. Therefore, it becomes increasingly important to study properties of human texts that are invariant across text domains and varying proficiency levels of human writers, can be easily calculated for any language, and can robustly separate natural and AI-generated texts regardless of the generation model and sampling method. In this work, we propose such an invariant of human texts, namely the intrinsic dimensionality of the manifold underlying the set of embeddings of a given text sample. We show that the average intrinsic dimensionality of fluent texts in natural language hovers around the value $9$ for several alphabet-based languages and around $7$ for Chinese, while the average intrinsic dimensionality of AI-generated texts for each language is $\approx 1.5$ lower, with a clear statistical separation between human-generated and AI-generated distributions. This property allows us to build a score-based artificial text detector. The proposed detector's accuracy is stable over text domains, generator models, and human writer proficiency levels, outperforming SOTA detectors in model-agnostic and cross-domain scenarios by a significant margin.
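
To make the quantity concrete, here is a generic nearest-neighbour (Levina-Bickel style) maximum-likelihood estimator of intrinsic dimension applied to a set of embedding vectors; it is a stand-in of our own, and the estimator actually used in this work for text-embedding manifolds may differ.

```python
import numpy as np

def mle_intrinsic_dimension(X, k=10):
    """X: (n_points, dim) array of embeddings; returns an ID estimate."""
    sq = (X ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T       # pairwise sq. distances
    np.fill_diagonal(d2, np.inf)
    knn = np.sqrt(np.clip(np.sort(d2, axis=1)[:, :k], 0.0, None))
    # Per-point MLE: (k - 1) / sum_{j<k} log(r_k / r_j), averaged over points.
    local = (k - 1) / np.log(knn[:, -1:] / knn[:, :-1]).sum(axis=1)
    return float(local.mean())

# Sanity check: points on a 7-dimensional subspace embedded in 768 dimensions
# should yield an estimate close to 7.
rng = np.random.default_rng(0)
Z = rng.normal(size=(2000, 7)) @ rng.normal(size=(7, 768))
print(mle_intrinsic_dimension(Z))
```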

In this work, we prove rigorous error estimates for a hybrid method introduced in [15] for solving the time-dependent radiation transport equation (RTE). The method relies on a splitting of the kinetic distribution function for the radiation into uncollided and collided components. A high-resolution method (in angle) is used to approximate the uncollided component, and a low-resolution method is used to approximate the collided component. After each time step, the kinetic distribution is reinitialized to be entirely uncollided. For this analysis, we consider a mono-energetic problem on a periodic domain, with constant material cross-sections of arbitrary size. To focus the analysis, we assume the uncollided equation is solved exactly and the collided part is approximated in angle via a spherical harmonic expansion ($\text{P}_N$ method). Using a non-standard set of semi-norms, we obtain estimates of the form $C(\varepsilon,\sigma,\Delta t)N^{-s}$, where $s\geq 1$ denotes the regularity of the solution in angle, $\varepsilon$ and $\sigma$ are scattering parameters, $\Delta t$ is the time step before reinitialization, and $C$ is a complicated function of $\varepsilon$, $\sigma$, and $\Delta t$. These estimates involve analysis of the multiscale RTE that includes, but necessarily goes beyond, the usual spectral analysis. We also compute error estimates for the monolithic $\text{P}_N$ method with the same resolution as the collided part in the hybrid. Our results highlight the benefits of the hybrid approach over the monolithic discretization in both highly scattering and streaming regimes.
