
In 2006, Biere, Jussila, and Sinz made the key observation that the underlying logic behind algorithms for constructing Reduced, Ordered Binary Decision Diagrams (BDDs) can be encoded as steps in a proof in the extended resolution logical framework. Through this, a BDD-based Boolean satisfiability (SAT) solver can generate a checkable proof of unsatisfiability for a set of clauses. Such a proof indicates that the formula is truly unsatisfiable without requiring the user to trust the BDD package or the SAT solver built on top of it. We extend their work to enable arbitrary existential quantification of the formula variables, a critical capability for BDD-based SAT solvers. We demonstrate the utility of this approach by applying a BDD-based solver, implemented by modifying an existing BDD package, to several challenging Boolean satisfiability problems. Our results demonstrate scaling for parity formulas, as well as the Urquhart, mutilated chessboard, and pigeonhole problems, far beyond that of other proof-generating SAT solvers.
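As background on the proof framework: extended resolution lets a proof introduce a fresh variable defined to equal a formula over existing variables, and each BDD node is exactly such a definition, an if-then-else over its branch variable. Below is a minimal sketch (function and variable names are mine, not the authors' code) emitting the four defining clauses for an extension variable u with u <-> ITE(x, t, f), in DIMACS-style integer literals.

```python
# Emit extended-resolution defining clauses for a fresh variable u with
# u <-> ITE(x, t, f), the defining equation of a BDD node.
# Literals are nonzero ints in DIMACS convention (negative = negated).
# Illustrative sketch only; not taken from the paper's implementation.

def ite_definition_clauses(u: int, x: int, t: int, f: int) -> list[list[int]]:
    """Clauses asserting u <-> (x ? t : f)."""
    return [
        [-u, -x, t],   # u and  x  imply t
        [-u,  x, f],   # u and !x  imply f
        [ u, -x, -t],  # x and  t  imply u
        [ u,  x, -f],  # !x and f  imply u
    ]

if __name__ == "__main__":
    # Extension variable 4 defined as ITE over variables 1, 2, 3.
    for clause in ite_definition_clauses(4, 1, 2, 3):
        print(" ".join(map(str, clause)) + " 0")  # DRAT/DIMACS-style line
```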

Related content

SAT is the premier annual conference for researchers studying the theory and applications of the propositional satisfiability problem. Beyond plain propositional satisfiability, its scope includes Boolean optimization (such as MaxSAT and pseudo-Boolean (PB) constraints), quantified Boolean formulas (QBF), satisfiability modulo theories (SMT), and constraint programming (CP) for problems with a clear connection to Boolean-level reasoning.
May 19, 2023

Bayesian inference is a powerful tool for combining information in complex settings, a task of increasing importance in modern applications. However, Bayesian inference with a flawed model can produce unreliable conclusions. This review discusses approaches to performing Bayesian inference when the model is misspecified, where by misspecified we mean that the analyst is unwilling to act as if the model is correct. Much has been written about this topic, and in most cases we do not believe that a conventional Bayesian analysis is meaningful when there is serious model misspecification. Nevertheless, in some cases it is possible to use a well-specified model to give meaning to a Bayesian analysis of a misspecified model, and we focus on such cases. Three main classes of methods are discussed: restricted likelihood methods, which use a model based on a non-sufficient summary of the original data; modular inference methods, which use a model constructed from coupled submodels, only some of which are correctly specified; and the use of a reference model to construct a projected posterior or predictive distribution for a simplified model considered useful for prediction or interpretation.
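To make the restricted-likelihood idea concrete, here is a toy sketch of my own construction (not from the review): inference for a normal location parameter conditions only on an insufficient, robust summary of contaminated data, with the summary posterior approximated by rejection ABC.

```python
# Toy illustration (my own construction, not from the review) of a
# restricted-likelihood posterior: condition on an insufficient, robust
# summary (the sample median) rather than the full data, approximated
# here by rejection ABC.

import numpy as np

rng = np.random.default_rng(0)

# Observed data: a contaminated sample, so the N(theta, 1) model is misspecified.
data = np.concatenate([rng.normal(0.0, 1.0, 95), rng.normal(8.0, 1.0, 5)])
s_obs = np.median(data)  # insufficient summary, robust to the contamination
n = len(data)

# Prior theta ~ N(0, 10^2); accept draws whose simulated median is close to s_obs.
kept = []
for theta in rng.normal(0.0, 10.0, 20_000):
    if abs(np.median(rng.normal(theta, 1.0, n)) - s_obs) < 0.1:
        kept.append(theta)

print(f"restricted posterior mean ~ {np.mean(kept):.2f} over {len(kept)} draws")
```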

Submodularity in combinatorial optimization has been the topic of many studies, and various algorithmic techniques exploiting the submodularity of a studied problem have been proposed. It is therefore natural to ask, in cases where the cost function of the studied problem is not submodular, whether it is possible to approximate this cost function with a proxy submodular function. We answer this question in the negative for two major problems in metric optimization, namely Steiner Tree and Uncapacitated Facility Location, by proving super-constant lower bounds on the submodularity gap for these problems, which contrast with the constant-factor cost-sharing schemes known for them. Technically, our lower bounds build on strong lower bounds for the online variants of these two problems. Nevertheless, online lower bounds do not always imply submodularity lower bounds: we show that Maximum Bipartite Matching does not exhibit any submodularity gap, despite its online variant being only (1 - 1/e)-competitive in the randomized setting.
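For reference, a standard way to formalize the quantity in question (my notation; the paper's exact definition may differ in details): submodularity and the submodularity gap of a cost function.

```latex
% A set function f : 2^V -> R is submodular when marginal gains diminish:
\[
  f(A \cup \{x\}) - f(A) \;\ge\; f(B \cup \{x\}) - f(B)
  \qquad \text{for all } A \subseteq B \subseteq V,\; x \in V \setminus B.
\]
% The submodularity gap of a cost function c is then the smallest factor
% alpha >= 1 by which some submodular f approximates c:
\[
  \alpha^{*}(c) \;=\; \inf_{f \text{ submodular}}
  \min\bigl\{ \alpha \ge 1 : f(S) \le c(S) \le \alpha\, f(S)
  \;\; \text{for all } S \subseteq V \bigr\}.
\]
```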

Subsampling of node sets is useful in contexts such as multilevel methods, computer graphics, and machine learning. On uniform grid-based node sets, the process of subsampling is simple. However, on node sets with high density variation, the process of coarsening a node set through node elimination is more interesting. A novel method for the subsampling of variable density node sets is presented here. Additionally, two novel node set quality measures are presented to determine the ability of a subsampling method to preserve the quality of an initial node set. The new subsampling method is demonstrated on the test problems of solving the Poisson and Laplace equations by multilevel radial basis function-generated finite differences (RBF-FD) iterations. High-order solutions with robust convergence are achieved in linear time with respect to node set size.
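For flavor, here is a generic density-aware coarsening sketch (a simple greedy heuristic of my own, not the paper's novel method): repeatedly remove the node whose nearest neighbor is closest, so dense regions are thinned first and the density variation of the set is roughly preserved.

```python
# Generic variable-density coarsening by node elimination (illustrative
# heuristic only, not the paper's method): drop the most crowded node,
# i.e., the one with the smallest nearest-neighbor distance.

import numpy as np
from scipy.spatial import cKDTree

def coarsen(nodes: np.ndarray, keep_fraction: float = 0.5) -> np.ndarray:
    pts = list(map(tuple, nodes))
    target = int(len(pts) * keep_fraction)
    while len(pts) > target:
        tree = cKDTree(pts)                # rebuilt each pass: O(n^2 log n) total,
        d, _ = tree.query(pts, k=2)        # fine for a sketch; d[:, 1] is the
        pts.pop(int(np.argmin(d[:, 1])))   # distance to the nearest other node
    return np.array(pts)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    nodes = rng.random((400, 2)) ** 2      # variable-density test set
    print(coarsen(nodes, 0.25).shape)
```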

A posteriori ratemaking in insurance uses a Bayesian credibility model to update the current premiums of a contract by taking into account policyholders' attributes and their claim history. Most data-driven models used for this task are mathematically intractable, and premiums must then be obtained through numerical methods such as MCMC simulation. However, these methods can be computationally expensive and prohibitive for large portfolios when applied at the policyholder level. Additionally, these computations become ``black-box" procedures, as there is no expression showing how the claim history of policyholders is used to update their premiums. To address these challenges, this paper proposes a surrogate modeling approach to inexpensively derive a closed-form expression for computing the Bayesian credibility premiums of any given model. As part of the methodology, the paper introduces the ``credibility index", a summary statistic of a policyholder's claim history that serves as the main input of the surrogate model and that is sufficient for several distribution families, including the exponential dispersion family. As a result, the computational burden of a posteriori ratemaking for large portfolios is reduced through direct evaluation of the closed-form expression, which additionally provides a transparent and interpretable way of computing Bayesian premiums.
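For context, the kind of closed form the surrogate is meant to recover is exemplified by the classical Buhlmann credibility premium (textbook background, not the paper's surrogate expression):

```latex
% Premium for year n+1 given claim history X_1, ..., X_n and risk parameter Theta:
\[
  P_{n+1} \;=\; Z\,\bar{X}_n + (1 - Z)\,\mu,
  \qquad
  Z = \frac{n}{n + k},
  \qquad
  k = \frac{\mathbb{E}\bigl[\operatorname{Var}(X \mid \Theta)\bigr]}
           {\operatorname{Var}\bigl(\mathbb{E}[X \mid \Theta]\bigr)},
\]
% a convex combination of the policyholder's own mean \bar{X}_n and the
% portfolio mean \mu = \mathbb{E}[X]; the credibility weight Z grows with n.
```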

The approximate stabilizer rank of a quantum state is the minimum number of terms in any approximate decomposition of that state into stabilizer states. Bravyi and Gosset showed that the approximate stabilizer rank of a so-called "magic" state like $|T\rangle^{\otimes n}$, up to polynomial factors, is an upper bound on the number of classical operations required to simulate an arbitrary quantum circuit with Clifford gates and $n$ $T$ gates. As a result, an exponential lower bound on this quantity seems inevitable. Despite this intuition, several attempts using various techniques could not yield better than a linear lower bound on the "exact" rank of $|T\rangle^{\otimes n}$, meaning the minimal size of a decomposition that exactly produces the state. However, an "approximate" rank is more realistically related to the cost of simulating quantum circuits because exact rank is not robust to errors; there are quantum states with exponentially large exact ranks but constant approximate ranks even with arbitrarily small approximation parameters. No lower bound better than $\tilde \Omega(\sqrt n)$ has been known for the approximate rank. In this paper, we improve this lower bound to $\tilde \Omega (n)$ for a wide range of the approximation parameters. Our approach is based on a strong lower bound on the approximate rank of a quantum state sampled from the Haar measure and a step-by-step analysis of the approximate rank of a magic-state teleportation protocol to sample from the Haar measure.
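For precision, the two quantities the abstract contrasts can be stated as follows (standard definitions, my notation):

```latex
% Exact stabilizer rank: fewest stabilizer-state terms reproducing |psi> exactly.
\[
  \chi(\psi) \;=\; \min\Bigl\{ r \,:\, |\psi\rangle = \sum_{i=1}^{r} c_i\,|\phi_i\rangle,
  \;\; |\phi_i\rangle \text{ stabilizer states},\; c_i \in \mathbb{C} \Bigr\}.
\]
% delta-approximate stabilizer rank: the cheapest exact rank within distance
% delta, which is the robust quantity relevant to simulation cost.
\[
  \chi_{\delta}(\psi) \;=\; \min\bigl\{ \chi(\psi') \,:\,
  \bigl\|\, |\psi'\rangle - |\psi\rangle \,\bigr\| \le \delta \bigr\}.
\]
```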

Key economic variables are often published with a significant delay of over a month. The nowcasting literature has arisen to provide fast, reliable estimates of delayed economic indicators and is closely related to filtering methods in signal processing. The path signature is a mathematical object which captures geometric properties of sequential data; it naturally handles missing data from mixed frequency and/or irregular sampling -- issues often encountered when merging multiple data sources -- by embedding the observed data in continuous time. Calculating path signatures and using them as features in models has achieved state-of-the-art results in fields such as finance, medicine, and cyber security. We look at the nowcasting problem by applying regression on signatures, a simple linear model on these nonlinear objects that we show subsumes the popular Kalman filter. We quantify the performance via a simulation exercise, and through application to nowcasting US GDP growth, where we see a lower error than a dynamic factor model based on the New York Fed staff nowcasting model. Finally we demonstrate the flexibility of this method by applying regression on signatures to nowcast weekly fuel prices using daily data. Regression on signatures is an easy-to-apply approach that allows great flexibility for data with complex sampling patterns.
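To make "regression on signatures" concrete, here is a minimal sketch (my own, library-free) computing the first two signature levels of a piecewise-linear path; in the simplest pipeline these terms become the regressors in a linear model.

```python
# Compute signature levels 1 and 2 of a piecewise-linear path directly
# from increments (no signature library needed). Illustrative sketch only.

import numpy as np

def signature_level_1_2(path: np.ndarray):
    """path: (T, d) array of observations embedded as a path in R^d."""
    dx = np.diff(path, axis=0)                 # segment increments, shape (T-1, d)
    s1 = dx.sum(axis=0)                        # level 1: total increment
    # Level 2: S^{ij} = sum_t (X_t - X_0)^i dX_t^j + 1/2 dX_t^i dX_t^j,
    # with X_t taken at each segment's left endpoint.
    x_rel = np.cumsum(dx, axis=0) - dx
    s2 = x_rel.T @ dx + 0.5 * (dx.T @ dx)
    return s1, s2

if __name__ == "__main__":
    t = np.linspace(0, 1, 100)
    path = np.column_stack([t, np.sin(2 * np.pi * t)])  # time-augmented series
    s1, s2 = signature_level_1_2(path)
    print(s1, s2, sep="\n")                    # s2 + s2.T equals outer(s1, s1)
```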

Answer Set Programming with Quantifiers (ASP(Q)) extends Answer Set Programming (ASP) to allow for declarative and modular modeling of problems from the entire polynomial hierarchy. The first implementation of ASP(Q), called qasp, was based on a translation to Quantified Boolean Formulae (QBF) with the aim of exploiting well-developed and mature QBF-solving technology. However, the QBF encoding employed in qasp is very general and can produce formulas that are hard for existing QBF solvers to evaluate because of the large number of symbols and sub-clauses. In this paper, we present a new implementation that builds on the ideas of qasp and features both a more efficient encoding procedure and new optimized encodings of ASP(Q) programs in QBF. The new encodings produce smaller formulas (in terms of the number of quantifiers, variables, and clauses) and result in a more efficient evaluation process. An algorithm selection strategy automatically combines several QBF-solving back-ends to further increase performance. An experimental analysis, conducted on known benchmarks, shows that the new system outperforms qasp.
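For a sense of the target format (a toy formula of mine, not qasp's actual output), the following prints the standard QDIMACS encoding of the true QBF "forall x exists y (x <-> y)":

```python
# Build a tiny QBF in QDIMACS, the standard input format of QBF solvers.
# Variables: 1 = x (universal), 2 = y (existential).

qdimacs = "\n".join([
    "p cnf 2 2",   # preamble: 2 variables, 2 clauses
    "a 1 0",       # universal quantifier block: x
    "e 2 0",       # existential quantifier block: y
    "1 -2 0",      #  x or not y
    "-1 2 0",      # not x or  y   (together: x <-> y)
])
print(qdimacs)
```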

Quantum repeaters are a critical technology for scalable quantum networking. A key challenge is managing how repeaters provide quantum entanglement to distant quantum computers. We focus on the RuleSet architecture, a decentralized way to manage repeaters. The RuleSet concept is designed to scale to the management of future quantum repeaters, and is well suited to this because of its flexibility and asynchronous operation; however, it is still defined only at the conceptual level, and writing raw RuleSets by hand is very difficult. In this thesis, we introduce a new programming language, called "RuLa", for writing RuleSets in an intuitive and coherent way. The way RuLa defines Rules and RuleSets closely mirrors how they are executed, so programmers can construct RuleSets the way they want repeaters to execute them. We provide examples of how RuleSets are defined in RuLa and what the compiler outputs. We also discuss future use cases and applications of this language.
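As a rough intuition for the abstraction involved (a hypothetical Python model of mine; RuLa's actual syntax and semantics are defined in the thesis), a Rule pairs a condition on local repeater state with an action, and a RuleSet groups the Rules installed on one repeater:

```python
# Hypothetical model of the Rule/RuleSet structure; not RuLa code.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]   # evaluated against local repeater state
    action: Callable[[dict], None]      # executed when the condition holds

@dataclass
class RuleSet:
    owner: str                          # repeater this RuleSet is installed on
    rules: List[Rule] = field(default_factory=list)

    def step(self, state: dict) -> None:
        # Rules fire independently, matching the asynchronous execution model.
        for rule in self.rules:
            if rule.condition(state):
                rule.action(state)

# Example: consume one of two entangled pairs (e.g., for purification).
rs = RuleSet("repeater_A", [Rule(
    "purify",
    condition=lambda s: s.get("pairs_with_B", 0) >= 2,
    action=lambda s: s.update(pairs_with_B=s["pairs_with_B"] - 1),
)])
rs.step({"pairs_with_B": 2})
```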

This study proposes a hybrid deep-learning-metaheuristic framework with a bi-level architecture for road network design problems (NDPs). We train a graph neural network (GNN) to approximate the solution of the user equilibrium (UE) traffic assignment problem, and use inferences made by the trained model to compute fitness function evaluations for a genetic algorithm (GA) that approximates solutions to NDPs. Using two NDP variants and an exact solver as benchmarks, we show that our proposed framework can provide solutions within a 5% gap of the global optimum in less than 1% of the time required to find the optimal results. Our framework can be utilized within an expert system for infrastructure planning to intelligently determine the best infrastructure management decisions. Given its flexibility, the framework can easily be adapted to many other decision problems that can be modeled as bi-level problems on graphs. We also observe several promising future directions and propose a brief research agenda for this topic. The key observation inspiring future research is that evaluating the fitness function through GNN inference takes on the order of milliseconds, which points to an opportunity and a need for novel heuristics that 1) cope well with noisy fitness function values provided by neural networks, and 2) use the significantly greater computation time available to them to explore the search space effectively (rather than merely efficiently). This opens a new avenue for a modern class of metaheuristics crafted for use with AI-powered predictors.
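Schematically, the upper-level loop looks like the following sketch (all problem details and the surrogate are placeholders of mine, not the paper's implementation): a GA over binary link-addition designs whose fitness comes from a cheap surrogate call standing in for trained-GNN inference of UE travel times.

```python
# Bi-level loop sketch: GA upper level, surrogate-predicted lower level.
# Everything here is illustrative; the real fitness would call a trained GNN.

import numpy as np

rng = np.random.default_rng(0)
N_LINKS, POP, GENS, BUDGET = 20, 40, 50, 5

def surrogate_ue_cost(design: np.ndarray) -> float:
    # Stand-in for GNN inference: returns a noisy predicted network cost
    # in milliseconds of compute, which is what makes this loop cheap.
    return float(design @ np.linspace(1.0, -1.0, N_LINKS) + rng.normal(0, 0.1))

def repair(design: np.ndarray) -> np.ndarray:
    """Drop random links until the design respects the construction budget."""
    ones = np.flatnonzero(design)
    if len(ones) > BUDGET:
        design[rng.choice(ones, size=len(ones) - BUDGET, replace=False)] = 0
    return design

pop = np.array([repair(rng.integers(0, 2, N_LINKS)) for _ in range(POP)])
for _ in range(GENS):
    fit = np.array([surrogate_ue_cost(d) for d in pop])
    parents = pop[np.argsort(fit)[: POP // 2]]          # keep lowest predicted cost
    cut = rng.integers(1, N_LINKS, size=POP // 2)       # one-point crossover
    kids = np.array([repair(np.where(np.arange(N_LINKS) < c, a, b))
                     for a, b, c in zip(parents, parents[::-1], cut)])
    kids[rng.random(kids.shape) < 0.05] ^= 1            # bit-flip mutation
    pop = np.vstack([parents, kids])

print("best design:", pop[np.argmin([surrogate_ue_cost(d) for d in pop])])
```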

In this paper, we propose the multivariate range Value-at-Risk (MRVaR) and the multivariate range covariance (MRCov) as two risk measures and explore their desirable properties in risk management. In particular, we explain why such range-based risk measures are appropriate for risk management for both regulatory and investment purposes. The multivariate range correlation matrix (MRCorr) is introduced accordingly. To facilitate analytical treatment, we derive explicit expressions of the MRVaR and the MRCov for the multivariate (log-)elliptical distribution family. Cases frequently used in industry, such as the normal, Student-$t$, logistic, Laplace, and Pearson type VII distributions, are presented with numerical examples. As an application, we propose a range-based mean-variance framework for optimal portfolio selection. We calculate the range-based efficient frontiers of the optimal portfolios based on real stock-return data. Both the numerical examples and the efficient frontiers demonstrate consistency with the desirable properties of the range-based risk measures.
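For orientation, the univariate notion that the MRVaR generalizes (standard definition; the paper's multivariate construction is its own): the range Value-at-Risk averages quantiles over a band of confidence levels.

```latex
% Range Value-at-Risk over levels 0 < alpha < beta <= 1:
\[
  \mathrm{RVaR}_{\alpha,\beta}(X) \;=\; \frac{1}{\beta - \alpha}
  \int_{\alpha}^{\beta} \mathrm{VaR}_{u}(X)\,\mathrm{d}u,
\]
% which recovers VaR_alpha as beta -> alpha and expected shortfall as
% beta -> 1, so risk is measured only inside the chosen probability range.
```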
