
Many important problems in Bioinformatics (e.g., assembly or multi-assembly) admit multiple solutions, while the final objective is to report only one. A common approach to deal with this uncertainty is to find safe partial solutions (e.g., contigs) that are common to all solutions. Previous research on safety has focused on polynomial-time solvable problems, whereas many successful and natural models are NP-hard to solve, leaving a lack of "safety tools" for such problems. We propose the first method for computing all safe solutions for an NP-hard problem, minimum flow decomposition. We obtain our results by developing a "safety test" for paths based on a general Integer Linear Programming (ILP) formulation. Moreover, we provide implementations with practical optimizations aimed at reducing the total ILP time, the most efficient of these being based on a recursive group-testing procedure. Results: Experimental results on the transcriptome datasets of Shao and Kingsford (TCBB, 2017) show that all safe paths for minimum flow decompositions correctly recover up to 90% of the full RNA transcripts, which is at least 25% more than previously known safe paths, such as those of (Caceres et al., TCBB 2021), (Zheng et al., RECOMB 2021), and (Khan et al., RECOMB 2022, ESA 2022). Moreover, despite the NP-hardness of the problem, we can report all safe paths for 99.8% of the over 27,000 non-trivial graphs of this dataset in only 1.5 hours. Our results suggest that, on perfect data, there is less ambiguity than previously thought in the notoriously hard RNA assembly problem. Availability: https://github.com/algbio/mfd-safety
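The recursive group-testing idea can be sketched as a generic driver around the ILP safety test. In the sketch below, `all_safe` is a hypothetical oracle standing in for a single ILP call that certifies that every path in a group is safe; the halving strategy is illustrative and not necessarily the paper's exact procedure.

```python
# A minimal sketch of recursive group testing for path safety.
# `all_safe(group)` is a hypothetical oracle: one ILP call that
# returns True iff every path in `group` is safe.

def find_safe_paths(paths, all_safe):
    """Return the subset of `paths` certified safe with few oracle calls."""
    safe = []

    def recurse(group):
        if not group:
            return
        if all_safe(group):       # one ILP call certifies the whole group
            safe.extend(group)
        elif len(group) > 1:      # mixed group: split in half and recurse
            mid = len(group) // 2
            recurse(group[:mid])
            recurse(group[mid:])
        # else: a single path that failed the test is unsafe; drop it

    recurse(list(paths))
    return safe
```

When most candidate paths are safe, large groups pass in a single call, so the number of ILP invocations stays far below one test per path.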

Related content

Generalized linear mixed models are powerful tools for analyzing clustered data, where the unknown parameters are classically (and most commonly) estimated by the maximum likelihood and restricted maximum likelihood procedures. However, since the likelihood-based procedures are known to be highly sensitive to outliers, M-estimators have become popular as a means to obtain robust estimates under possible data contamination. In this paper, we prove that, for sufficiently smooth general loss functions defining the M-estimators in generalized linear mixed models, the tail probability of the deviation between the estimated and the true regression coefficients has an exponential bound. This implies an exponential rate of consistency of these M-estimators under appropriate assumptions, generalizing the existing exponential consistency results from univariate to multivariate responses. We further illustrate this theoretical result for the special examples of the maximum likelihood estimator and the robust minimum density power divergence estimator, a popular model-based M-estimator, in the settings of linear and logistic mixed models, comparing the theoretical rate with the empirical rate of convergence through simulation studies.
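Schematically, an exponential tail bound of this kind takes the following form; the constants and the exact dependence on the sample size are illustrative placeholders, not the paper's precise statement.

```latex
% Illustrative exponential tail bound for an M-estimator
% \hat{\beta}_n of the true coefficient vector \beta_0;
% c_1, c_2 > 0 are schematic constants.
\[
  \Pr\bigl(\lVert \hat{\beta}_n - \beta_0 \rVert > \varepsilon\bigr)
  \;\le\; c_1 \exp\bigl(-c_2\, n\, \varepsilon^2\bigr),
  \qquad \varepsilon > 0,
\]
% which yields consistency of \hat{\beta}_n at an exponential rate.
```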

We study the computational complexity of multi-stage robust optimization problems. Such problems are formulated with alternating min/max quantifiers and therefore naturally fall into a higher stage of the polynomial hierarchy. Despite this, almost no hardness results with respect to the polynomial hierarchy are known. In this work, we examine the hardness of robust two-stage adjustable and robust recoverable optimization with budgeted uncertainty sets. Our main technical contribution is the introduction of a technique tailored to prove $\Sigma^p_3$-hardness of such problems. We highlight a difference between continuous and discrete budgeted uncertainty: in the discrete case, a wide range of problems indeed becomes complete for the third stage of the polynomial hierarchy; in particular, this applies to the TSP, independent set, and vertex cover problems. In the continuous case, however, this does not happen, and the problems remain in the first stage of the hierarchy. Finally, if we allow the uncertainty to affect not only the objective but also multiple constraints, then this distinction disappears, and even in the continuous case we encounter hardness for the third stage of the hierarchy. This shows that even robust problems which are already NP-complete can still exhibit a significant computational difference between column-wise and row-wise uncertainty.
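The alternating quantifier structure behind these results can be written schematically as a min-max-min problem; the sets and costs below are generic placeholders rather than the paper's exact model.

```latex
% Generic two-stage robust problem: first-stage decision x,
% adversarial scenario u from a budgeted uncertainty set U, and
% second-stage recourse y. The exists-forall-exists pattern is
% what places such problems in \Sigma^p_3.
\[
  \min_{x \in X} \;\max_{u \in U}\; \min_{y \in Y(x)}\;
  c(x) + d(u)^{\top} y .
\]
```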

We introduce algebraic machine reasoning, a new reasoning framework that is well-suited for abstract reasoning. Effectively, algebraic machine reasoning reduces the difficult process of novel problem-solving to routine algebraic computation. The fundamental algebraic objects of interest are the ideals of some suitably initialized polynomial ring. We explain how solving Raven's Progressive Matrices (RPMs) can be realized as a computational problem in algebra, combining various well-known algebraic subroutines such as computing the Gr\"obner basis of an ideal and checking for ideal containment. Crucially, the additional algebraic structure satisfied by ideals allows for more operations on ideals beyond the set-theoretic ones. Our algebraic machine reasoning framework is not only able to select the correct answer from a given answer set, but is also able to generate the correct answer given only the question matrix. Experiments on the I-RAVEN dataset yield an overall $93.2\%$ accuracy, which significantly outperforms the current state-of-the-art accuracy of $77.0\%$ and exceeds human performance at $84.4\%$ accuracy.
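To make the algebraic subroutines concrete, here is a minimal sketch of ideal-membership testing via a Gröbner basis in SymPy; the toy polynomials are illustrative and do not encode an RPM instance.

```python
# Minimal sketch: ideal membership via Groebner bases in SymPy.
# A polynomial f lies in the ideal <g1, ..., gk> iff its remainder
# on division by a Groebner basis of the ideal is zero.
from sympy import symbols, groebner

x, y = symbols("x y")
gens = [x**2 + y**2 - 1, x - y]        # toy ideal, not an RPM encoding
G = groebner(gens, x, y, order="lex")  # Groebner basis of the ideal

f = 2*x**2 - 1                         # candidate polynomial
_, remainder = G.reduce(f)             # normal form of f modulo G
print(remainder == 0)                  # True iff f is in the ideal
```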

We consider a variant of contextual bandits in which the algorithm consumes multiple resources subject to linear constraints on total consumption. This problem generalizes contextual bandits with knapsacks (CBwK), allowing for packing and covering constraints, as well as positive and negative resource consumption. We present a new algorithm that is simple, computationally efficient, and admits vanishing regret. It is statistically optimal for CBwK when an algorithm must stop once some constraint is violated. Our algorithm builds on LagrangeBwK (Immorlica et al., FOCS 2019), a Lagrangian-based technique for CBwK, and SquareCB (Foster and Rakhlin, ICML 2020), a regression-based technique for contextual bandits. Our analysis leverages the inherent modularity of both techniques.
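Schematically, the Lagrangian game underlying this line of work couples reward with resource consumption as follows; the normalization is illustrative and not the precise objective used by LagrangeBwK.

```latex
% Schematic Lagrangian for bandits with d resources: the primal
% player chooses an action x to maximize, the dual player chooses
% multipliers \lambda \ge 0 to minimize; r(x) is expected reward,
% c_j(x) expected consumption of resource j with budget B_j over
% horizon T.
\[
  \mathcal{L}(x, \lambda)
  \;=\; r(x) \;+\; \sum_{j=1}^{d} \lambda_j
  \Bigl(\tfrac{B_j}{T} - c_j(x)\Bigr).
\]
```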

We investigate a family of bilevel imaging learning problems where the lower-level instance corresponds to a convex variational model involving first- and second-order nonsmooth sparsity-based regularizers. By using geometric properties of the primal-dual reformulation of the lower-level problem and introducing suitable auxiliary variables, we are able to reformulate the original bilevel problems as Mathematical Programs with Complementarity Constraints (MPCCs). For the latter, we prove tight constraint qualification conditions (MPCC-RCPLD and partial MPCC-LICQ) and derive Mordukhovich (M-) and Strong (S-) stationarity conditions. The stationarity systems for the MPCC also translate into stationarity conditions for the original formulation. Second-order sufficient optimality conditions are derived as well, together with a local uniqueness result for stationary points. The proposed reformulation may be extended to problems in function spaces, leading to MPCCs with constraints on the gradient of the state. The MPCC reformulation also enables the efficient use of available large-scale nonlinear programming solvers, as shown in a companion paper, where different imaging applications are studied.
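For orientation, a generic MPCC has the standard form below; this is the textbook template, not the specific reformulation derived in the paper.

```latex
% Generic mathematical program with complementarity constraints:
% besides ordinary constraints, the variable groups G(z) and H(z)
% must be nonnegative and componentwise complementary.
\[
  \min_{z} \; f(z)
  \quad \text{s.t.} \quad
  g(z) \le 0, \quad h(z) = 0, \quad
  0 \le G(z) \;\perp\; H(z) \ge 0 .
\]
```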

We propose a robust framework for planar pose graph optimization (PGO) contaminated by loop-closure outliers. Our framework rejects outliers by first decoupling the robust PGO problem, formulated with a Truncated Least Squares (TLS) kernel, into two subproblems. Then, the framework introduces a linear angle representation to rewrite the first subproblem, which is originally formulated with rotation matrices. The framework is configured with the Graduated Non-Convexity (GNC) algorithm to solve the two non-convex subproblems in succession without initial guesses. Thanks to the linearity of both subproblems, our framework requires only linear solvers to optimally solve the optimization problems encountered in GNC. We extensively validate the proposed framework, named DEGNC-LAF (DEcoupled Graduated Non-Convexity with Linear Angle Formulation), on planar PGO benchmarks. It runs significantly faster (sometimes by more than 30 times) than the standard, general-purpose GNC while producing high-quality estimates.
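The TLS kernel caps the quadratic cost at a threshold, which is what rejects outliers but makes the problem non-convex; with residual $r$ and truncation parameter $\bar{c}$:

```latex
% Truncated Least Squares kernel: quadratic for small residuals,
% constant beyond the threshold, so gross outliers stop influencing
% the estimate.
\[
  \rho_{\mathrm{TLS}}(r) \;=\; \min\{\, r^2,\; \bar{c}^{\,2} \,\}.
\]
```

GNC then optimizes a graduated sequence of smoother surrogates of this kernel, which is where the linearity of the two subproblems pays off.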

The ability to compose code in a modular fashion is important to the construction of large programs. In the logic programming setting, it is desirable that such capabilities be realized through logic-based devices. We describe an approach for doing this here. In our scheme, a module corresponds to a block of code whose external view is mediated by a signature. Signatures thus impose a form of hiding that is explained logically via existential quantification over predicate, function, and constant names. Modules interact through the mechanism of accumulation, which translates into conjoining their clauses while respecting the scopes of the existential quantifiers introduced by signatures. We show that this simple device for statically structuring name spaces suffices to realize features related to code scoping for which the dynamic control of predicate definitions was earlier considered necessary. The module capabilities we present have previously been implemented via the compile-time inlining of accumulated modules, an approach that does not support separate compilation. We redress this situation by showing how each distinct module can be compiled separately, with inlining realized by a later, complementary, and equally efficient linking phase.
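Logically, hiding via a signature amounts to existentially quantifying the hidden names over the module's clauses, and accumulation to conjunction that respects those quantifiers; schematically (with $p$ a predicate name hidden by the signature of $M$):

```latex
% Schematic logical reading of modules: M with clauses D hides p,
% and accumulating M into a module with clauses D' conjoins the
% code while keeping p scoped over D only.
\[
  M \;\equiv\; \exists p.\, D,
  \qquad
  M' \ \text{accumulating}\ M \;\equiv\; (\exists p.\, D) \wedge D'.
\]
```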

Verified compositional compilation (VCC) is a notion of modular compiler verification that supports the compilation of heterogeneous programs. The key to achieving VCC is to design a semantic interface that enables the composition of correctness theorems for compiling individual modules. Most existing techniques for VCC fix a semantic interface from the very beginning and force it down to every single compiler pass. This requires significant changes to the existing framework and makes it difficult to understand the relationship between the conditions enforced by the semantic interface and the actual requirements of compiler passes. A different approach is to design appropriate semantic interfaces for individual compiler passes and combine them into a unified interface which faithfully reflects the requirements of the underlying passes. However, this requires vertically composable simulation relations, which were traditionally considered very difficult to construct even with extensive changes to compiler verification frameworks. We propose a bottom-up approach to constructing unified semantic interfaces for VCC. Our starting point is CompCertO, an extension of CompCert -- the state-of-the-art verified compiler -- that supports VCC but lacks a unified interface. We discover that a CompCert Kripke Logical Relation (CKLR) in CompCertO provides a uniform notion of memory protection for evolving memory states across modules and is transitively composable. Based on this uniform and composable CKLR, we merge the simulation relations for all the compiler passes in CompCertO (except for three value analysis passes) into a unified interface. We demonstrate the conciseness and effectiveness of this unified interface by applying it to verify the compositional compilation of a non-trivial heterogeneous program with mutual recursion.
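The composability property at stake can be stated schematically: writing $M_1 \preceq_{R} M_2$ for a simulation between modules under relation $R$, vertical composition requires simulations for adjacent passes to compose into one for the combined pass (the direction of the relation composition is shown only schematically here).

```latex
% Vertical composition of simulations: correctness proofs of two
% adjacent compiler passes, each with its own simulation relation,
% must compose into a simulation for the composed pass.
\[
  M_1 \preceq_{R_1} M_2 \;\wedge\; M_2 \preceq_{R_2} M_3
  \;\Longrightarrow\;
  M_1 \preceq_{R_2 \circ R_1} M_3 .
\]
```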

This paper develops efficient preconditioned iterative solvers for incompressible flow problems discretised by an enriched Taylor-Hood mixed approximation, in which the usual pressure space is augmented by a piecewise constant pressure to ensure local mass conservation. This enrichment process causes over-specification of the pressure, which complicates the design and implementation of efficient solvers for the resulting linear systems. We first describe the impact of this choice of pressure space on the matrices involved. Next, we show how to recover effective solvers for Stokes problems, with a preconditioner based on the singular pressure mass matrix, and for Oseen systems arising from linearised Navier-Stokes equations, by using a two-stage pressure convection-diffusion strategy. The codes used to generate the numerical results are available online.
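For context, the discretised Stokes problem leads to a saddle-point system, and the pressure mass matrix enters through a block preconditioner of the familiar form below; this is the standard template, with the difficulty addressed in the paper being that $M_p$ is singular for the enriched pressure space.

```latex
% Stokes saddle-point system (velocity block A, divergence B) and
% a standard block-diagonal preconditioner built from the pressure
% mass matrix M_p.
\[
  \begin{pmatrix} A & B^{\top} \\ B & 0 \end{pmatrix}
  \begin{pmatrix} u \\ p \end{pmatrix}
  =
  \begin{pmatrix} f \\ g \end{pmatrix},
  \qquad
  \mathcal{P} =
  \begin{pmatrix} A & 0 \\ 0 & M_p \end{pmatrix}.
\]
```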

Physics-based covariance models provide a systematic way to construct covariance models that are consistent with the underlying physical laws in Gaussian process analysis. The unknown parameters in the covariance models can be estimated using maximum likelihood estimation, but direct construction of the covariance matrix and classical strategies for computing with it require $n$ physical model runs, $n^2$ storage complexity, and $n^3$ computational complexity. To address these challenges, we propose to approximate the discretized covariance function using hierarchical matrices. By utilizing randomized range sketching for the individual off-diagonal blocks, the construction of the hierarchical covariance approximation requires $O(\log{n})$ physical model applications, and the maximum likelihood computations require $O(n\log^2{n})$ effort per iteration. We also propose a new approach for computing exactly the trace of products of hierarchical matrices, which makes the expected Fisher information matrix computable in $O(n\log^2{n})$ as well. The construction is entirely matrix-free, and the derivatives of the covariance matrix can be approximated in the same hierarchical structure by differentiating the whole process. Numerical results demonstrate the effectiveness, accuracy, and efficiency of the proposed method for parameter estimation and uncertainty quantification.
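The randomized range sketching of the off-diagonal blocks follows the well-known randomized range-finder pattern. Below is a minimal NumPy sketch; `apply_block` is a hypothetical matrix-free operator standing in for applying one off-diagonal covariance block through physical model runs, and the fixed sketch sizes are illustrative.

```python
# Minimal sketch of a randomized range finder for one off-diagonal
# block of the covariance matrix. `apply_block` applies the m-by-n
# block to a set of test vectors (here it would hide the physical
# model runs).
import numpy as np

def sketch_block(apply_block, n, rank, oversample=10, seed=None):
    """Return an orthonormal basis Q approximating the block's range."""
    rng = np.random.default_rng(seed)
    omega = rng.standard_normal((n, rank + oversample))  # random probes
    Y = apply_block(omega)              # m x (rank + oversample) samples
    Q, _ = np.linalg.qr(Y)              # orthonormal range basis
    return Q                            # block is approx. Q @ (Q.T @ block)

# Toy usage with an explicit low-rank block standing in for the operator:
A = np.outer(np.arange(1.0, 101.0), np.ones(80))   # rank-1, 100 x 80
Q = sketch_block(lambda W: A @ W, n=80, rank=5)
print(Q.shape)                                     # (100, 15)
```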
