
We consider a first-order logic for the integers with addition. This logic extends classical first-order logic by modulo-counting, threshold-counting and exact-counting quantifiers, all applied to tuples of variables (here, residues are given as terms while moduli and thresholds are given explicitly). Our main result shows that satisfaction for this logic is decidable in two-fold exponential space. If only threshold- and exact-counting quantifiers are allowed, we prove an upper bound of alternating two-fold exponential time with linearly many alternations. This latter result almost matches Berman's exact complexity of first-order logic without counting quantifiers. To obtain these results, we first translate threshold- and exact-counting quantifiers into classical first-order logic in polynomial time (which already proves the second result). To handle the remaining modulo-counting quantifiers for tuples, we reduce them in doubly exponential time to modulo-counting quantifiers for single elements. For these quantifiers, we provide a quantifier elimination procedure similar to Reddy and Loveland's procedure for first-order logic and analyse the growth of the coefficients, constants, and moduli appearing in this process. The bounds obtained this way allow us to restrict quantification in the original formula to integers of bounded size, which then implies the first result mentioned above. Our logic is incomparable with the logic considered by Chistikov et al. in 2022. They allow more general counting operations in quantifiers, but only unary quantifiers. The move from unary to non-unary quantifiers is non-trivial, since, e.g., the non-unary version of the H\"artig quantifier results in an undecidable theory.
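
For concreteness, and as a purely schematic sketch of the syntax (the paper fixes its own notation), one can read a threshold-counting quantifier $\exists^{\ge t}(x,y)\,\varphi(x,y)$ as "at least $t$ pairs $(x,y)$ satisfy $\varphi$", an exact-counting quantifier $\exists^{=t}(x,y)\,\varphi(x,y)$ as "exactly $t$ such pairs exist", and a modulo-counting quantifier $\exists^{s \bmod q}(x,y)\,\varphi(x,y)$ as "the number of pairs satisfying $\varphi$ is congruent to $s$ modulo $q$", where the threshold $t$ and the modulus $q$ are given explicitly while the residue $s$ may be a term.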

Related content

In this paper, we present an error estimate of a second-order linearized finite element (FE) method for the 2D Navier-Stokes equations with variable density. In order to obtain error estimates, we first introduce an equivalent form of the original system. We then propose a general BDF2-FE method for solving this equivalent form, where the Taylor-Hood FE space is used to discretize the Navier-Stokes equations and a conforming FE space is used to discretize the density equation. We show that our scheme ensures discrete energy dissipation. Under the assumption of sufficiently smooth strong solutions, an error estimate is presented for our numerical scheme for variable-density incompressible flow in two dimensions. Finally, some numerical examples are provided to confirm our theoretical results.
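
For orientation (a generic sketch of the time discretization; the paper's scheme combines it with a linearization of the nonlinear terms), the second-order backward differentiation formula (BDF2) approximates the time derivative at $t_{n+1}$ by
$$ \partial_t u(t_{n+1}) \approx \frac{3u^{n+1} - 4u^{n} + u^{n-1}}{2\tau}, $$
where $\tau$ denotes the time-step size; treating the convective terms by second-order extrapolation from $u^{n}$ and $u^{n-1}$ then yields a scheme that only requires solving linear systems at every time step.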

The majority of fault-tolerant distributed algorithms are designed assuming a nominal corruption model, in which at most a fraction $f_n$ of parties can be corrupted by the adversary. However, due to the infamous Sybil attack, nominal models are not sufficient to express the trust assumptions in open (i.e., permissionless) settings. Instead, permissionless systems typically operate in a weighted model, where each participant is associated with a weight and the adversary can corrupt a set of parties holding at most a fraction $f_w$ of the total weight. In this paper, we suggest a simple way to transform a large class of protocols designed for the nominal model into the weighted model. To this end, we formalize and solve three novel optimization problems, which we collectively call the weight reduction problems, that allow us to map large real weights into small integer weights while preserving the properties necessary for the correctness of the protocols. In all cases, we manage to keep the sum of the integer weights at most linear in the number of parties, resulting in extremely efficient protocols for the weighted model. Moreover, we demonstrate that, on weight distributions that emerge in practice, the sum of the integer weights tends to be far from the theoretical worst case and is often even smaller than the number of participants. While, for some protocols, our transformation requires an arbitrarily small reduction in resilience (i.e., $f_w = f_n - \epsilon$), surprisingly, for many important problems we manage to obtain weighted solutions with the same resilience ($f_w = f_n$) as nominal ones. Notable examples include asynchronous consensus, verifiable secret sharing, erasure-coded distributed storage, and broadcast protocols.
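
To illustrate the flavor of the problem (a naive rounding baseline for intuition only, with hypothetical names; it is not one of the paper's weight reduction algorithms and it does not preserve the corruption thresholds):

import math
from typing import List

def round_weights(real_weights: List[float], target_sum: int) -> List[int]:
    """Scale real weights so they sum to roughly target_sum, then round up.

    Rounding up never shrinks a party's relative weight, but the integer total
    may exceed target_sum by up to len(real_weights). The paper's weight
    reduction problems additionally require that any set holding at most a
    fraction f_w of the real weight holds at most a fraction f_n of the
    integer weight, which this naive rounding does not guarantee.
    """
    total = sum(real_weights)
    scale = target_sum / total
    return [max(1, math.ceil(w * scale)) for w in real_weights]

# Example: map large stake-like weights to small integers whose sum is O(n).
weights = [1_250_000.0, 3.5, 980_000.0, 42.0, 77_000.0]
print(round_weights(weights, target_sum=2 * len(weights)))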

Neural abstractions have recently been introduced as formal approximations of complex, nonlinear dynamical models. They comprise a neural ODE and a certified upper bound on the error between the abstract neural network and the concrete dynamical model. So far, neural abstractions have been obtained exclusively as neural networks consisting entirely of $ReLU$ activation functions, resulting in neural ODE models that have piecewise affine dynamics and which can be equivalently interpreted as linear hybrid automata. In this work, we observe that the utility of an abstraction depends on its use: some scenarios might require coarse abstractions that are easier to analyse, whereas others might require more complex, refined abstractions. We therefore consider neural abstractions of alternative shapes, namely either piecewise constant or nonlinear non-polynomial (specifically, obtained via sigmoidal activations). We employ formal inductive synthesis procedures to generate neural abstractions that result in dynamical models with these semantics. Empirically, we demonstrate the trade-off that these different neural abstraction templates exhibit vis-a-vis their precision and synthesis time, as well as the time required for their safety verification (done via reachability computation). We improve existing synthesis techniques to enable abstraction of higher-dimensional models, and we additionally discuss the abstraction of complex neural ODEs to improve the efficiency of reachability analysis for these models.
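
As a rough illustration of what a sigmoidal abstraction template can look like (a hypothetical sketch; the paper synthesizes such networks together with a certified error bound, which is not computed here):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class SigmoidalNeuralODE:
    """x' = W2 @ sigmoid(W1 @ x + b1) + b2 -- a nonlinear, non-polynomial template."""
    def __init__(self, W1, b1, W2, b2):
        self.W1, self.b1, self.W2, self.b2 = W1, b1, W2, b2

    def f(self, x):
        return self.W2 @ sigmoid(self.W1 @ x + self.b1) + self.b2

    def simulate(self, x0, dt=1e-3, steps=1000):
        # Simple forward-Euler roll-out of the abstract dynamics.
        xs = [np.asarray(x0, dtype=float)]
        for _ in range(steps):
            xs.append(xs[-1] + dt * self.f(xs[-1]))
        return np.stack(xs)

# Example: a 2-dimensional abstraction with 8 hidden sigmoidal neurons.
rng = np.random.default_rng(0)
ode = SigmoidalNeuralODE(rng.normal(size=(8, 2)), rng.normal(size=8),
                         rng.normal(size=(2, 8)), rng.normal(size=2))
traj = ode.simulate([1.0, -0.5])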

The Bregman-Kaczmarz method is an iterative method which can solve strongly convex problems with linear constraints and uses only one or a selected number of rows of the system matrix in each iteration, thereby making it amenable to large-scale systems. To speed up convergence, we investigate acceleration by heavy ball momentum in the so-called dual update. Heavy ball acceleration of the Kaczmarz method with constant parameters has turned out to be difficult to analyze; in particular, to the best of our knowledge, no accelerated convergence for the L2-error of the iterates has been proven. Here we propose a way to choose the momentum parameter adaptively by a minimal-error principle, similar to a recently proposed method for the standard randomized Kaczmarz method. The momentum parameter can be chosen to exactly minimize the error in the next iterate or to minimize a relaxed version of the minimal-error principle. The former choice leads to a theoretically optimal step, while the latter is cheaper to compute. We prove improved convergence results compared to the non-accelerated method. Numerical experiments show that the proposed methods can accelerate convergence in practice, also for matrices which arise from applications such as computational tomography.
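
For intuition, a plain randomized Kaczmarz iteration with heavy ball momentum can be sketched as follows (with a fixed momentum parameter beta; the paper instead works with the Bregman/dual update and chooses the momentum adaptively by a minimal-error principle, which is not reproduced here):

import numpy as np

def kaczmarz_heavy_ball(A, b, beta=0.3, iters=5000, seed=0):
    """Randomized Kaczmarz with a fixed heavy-ball momentum term.

    x_{k+1} = x_k + (b_i - <a_i, x_k>) / ||a_i||^2 * a_i + beta * (x_k - x_{k-1}),
    with rows sampled with probability proportional to ||a_i||^2.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = np.sum(A**2, axis=1)
    probs = row_norms / row_norms.sum()
    x_prev = np.zeros(n)
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        proj = (b[i] - A[i] @ x) / row_norms[i] * A[i]
        x, x_prev = x + proj + beta * (x - x_prev), x
    return x

# Example on a small consistent system.
rng = np.random.default_rng(1)
A = rng.normal(size=(200, 50))
x_true = rng.normal(size=50)
print(np.linalg.norm(kaczmarz_heavy_ball(A, A @ x_true) - x_true))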

Since the 1970s, with the work of McNaughton, Papert and Sch\"utzenberger, a regular language has been known to be definable in first-order logic if and only if its syntactic monoid is aperiodic. This algebraic characterisation of a fundamental logical fragment has been extended to the quantitative case by Droste and Gastin, dealing with polynomially ambiguous weighted automata and a restricted fragment of weighted first-order logic. In the quantitative setting, the full weighted first-order logic (without the restriction on quantifier alternation used by Droste and Gastin) is more powerful than weighted automata, and extensions of the automata with two-way navigation and pebble or nesting capabilities have been introduced to deal with it. In this work, we characterise the fragment of these extended weighted automata that recognises exactly the full weighted first-order logic, under the condition that the automata are polynomially ambiguous.

The ever-growing use of wind energy makes it necessary to optimize turbine operation through pitch angle controllers and to maintain turbines through early fault detection. It is crucial to have accurate and robust models imitating the behavior of wind turbines, in particular to predict the generated power as a function of the wind speed. Existing empirical and physics-based models have limitations in capturing the complex relations between the input variables and the power, aggravated by wind variability. Data-driven methods offer new opportunities to enhance wind turbine modeling from large datasets by improving accuracy and efficiency. In this study, we used physics-informed neural networks to reproduce historical data from 4 turbines in a wind farm while imposing certain physical constraints on the model. The developed models for regression of the power, torque, and power coefficient as output variables showed great accuracy for both the real data and the physical equations governing the system. Lastly, an efficient evidential layer provided uncertainty estimates of the predictions, which proved to be consistent with the absolute error and made it possible to define a confidence interval around the power curve.
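
Schematically, a physics-informed loss for the power regression can combine a data-fitting term with the residual of the standard power relation $P = \frac{1}{2}\rho A C_p v^3$ (a hypothetical sketch; the constraints, inputs and architecture used in the study may differ):

import torch
import torch.nn as nn

RHO, AREA = 1.225, 5027.0  # air density [kg/m^3] and rotor swept area [m^2] (illustrative values)

class PowerNet(nn.Module):
    """Predicts power P and power coefficient Cp from wind speed (and possibly other inputs)."""
    def __init__(self, n_inputs=1):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_inputs, 64), nn.Tanh(),
                                  nn.Linear(64, 64), nn.Tanh(),
                                  nn.Linear(64, 2))  # outputs: [P, Cp]

    def forward(self, x):
        return self.body(x)

def physics_informed_loss(model, x, p_measured, lam=1.0):
    out = model(x)
    p_pred, cp_pred = out[:, 0], out[:, 1]
    v = x[:, 0]
    # Data term: fit the measured power.
    data_loss = torch.mean((p_pred - p_measured) ** 2)
    # Physics term: predictions should satisfy P = 0.5 * rho * A * Cp * v^3.
    residual = p_pred - 0.5 * RHO * AREA * cp_pred * v ** 3
    return data_loss + lam * torch.mean(residual ** 2)

# Example: one gradient evaluation on synthetic samples.
model = PowerNet()
x = torch.rand(32, 1) * 25.0   # wind speeds in [0, 25] m/s
p = 1e6 * torch.rand(32)       # synthetic measured power in W
physics_informed_loss(model, x, p).backward()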

Penrose tilings are the most famous aperiodic tilings, and they have been studied extensively. In particular, patterns composed of hexagons ($H$), boats ($B$) and stars ($S$) were exhibited early on, and many physicists have published on what they later called $HBS$ tilings, but no article or book combines all we know about them. That work is done here, before introducing new decorations and properties, including explicit substitutions. For the latter, the star comes in three versions, so we have 5 prototiles in what we call the Star tileset. Yet this set yields exactly the strict $HBS$ tilings formed using 3 tiles decorated with, for instance, either the usual decorations (arrows) or Ammann bar markings. Another new tileset called Gemstones is also presented, derived from the Star tileset.

Despite their importance for assessing reliability of predictions, uncertainty quantification (UQ) measures for machine learning models have only recently begun to be rigorously characterized. One prominent issue is the curse of dimensionality: it is commonly believed that the marginal likelihood should be reminiscent of cross-validation metrics and that both should deteriorate with larger input dimensions. We prove that by tuning hyperparameters to maximize marginal likelihood (the empirical Bayes procedure), the performance, as measured by the marginal likelihood, improves monotonically with the input dimension. On the other hand, we prove that cross-validation metrics exhibit qualitatively different behavior that is characteristic of double descent. Cold posteriors, which have recently attracted interest due to their improved performance in certain settings, appear to exacerbate these phenomena. We verify empirically that our results hold for real data, beyond our considered assumptions, and we explore consequences involving synthetic covariates.
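
For reference, the quantity maximized by empirical Bayes is the Gaussian-process log marginal likelihood; a standard sketch of its computation is given below (illustrative only, not the paper's experimental setup):

import numpy as np

def rbf_kernel(X, lengthscale, variance):
    """Isotropic RBF kernel: k(x, x') = variance * exp(-||x - x'||^2 / (2 l^2))."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return variance * np.exp(-0.5 * sq_dists / lengthscale**2)

def log_marginal_likelihood(X, y, lengthscale, variance, noise):
    """log p(y | X, hyperparameters) for a zero-mean GP with RBF kernel."""
    n = len(y)
    K = rbf_kernel(X, lengthscale, variance) + noise * np.eye(n)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ alpha - np.sum(np.log(np.diag(L))) - 0.5 * n * np.log(2 * np.pi)

# Empirical Bayes tunes (lengthscale, variance, noise) to maximize this value;
# the paper studies how the maximized value behaves as the input dimension grows.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(50, 10)), rng.normal(size=50)
print(log_marginal_likelihood(X, y, lengthscale=2.0, variance=1.0, noise=0.1))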

Modeling multivariate time series has long been a subject that has attracted researchers from a diverse range of fields, including economics, finance, and traffic. A basic assumption behind multivariate time series forecasting is that its variables depend on one another, but, upon looking closely, it is fair to say that existing methods fail to fully exploit latent spatial dependencies between pairs of variables. In recent years, meanwhile, graph neural networks (GNNs) have shown high capability in handling relational dependencies. However, GNNs require well-defined graph structures for information propagation, which means they cannot be applied directly to multivariate time series where the dependencies are not known in advance. In this paper, we propose a general graph neural network framework designed specifically for multivariate time series data. Our approach automatically extracts the uni-directed relations among variables through a graph learning module, into which external knowledge such as variable attributes can be easily integrated. A novel mix-hop propagation layer and a dilated inception layer are further proposed to capture the spatial and temporal dependencies within the time series. The graph learning, graph convolution, and temporal convolution modules are jointly learned in an end-to-end framework. Experimental results show that our proposed model outperforms the state-of-the-art baseline methods on 3 of 4 benchmark datasets and achieves on-par performance with other approaches on two traffic datasets which provide extra structural information.
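
The following sketch shows how a sparse, uni-directed graph can be learned from node embeddings (a schematic in the spirit of such graph learning modules; the exact parameterization in the paper may differ):

import torch
import torch.nn as nn

class GraphLearner(nn.Module):
    """Learns a sparse, uni-directed adjacency matrix from node embeddings."""
    def __init__(self, num_nodes, emb_dim=40, alpha=3.0, k=20):
        super().__init__()
        self.emb1 = nn.Embedding(num_nodes, emb_dim)
        self.emb2 = nn.Embedding(num_nodes, emb_dim)
        self.lin1 = nn.Linear(emb_dim, emb_dim)
        self.lin2 = nn.Linear(emb_dim, emb_dim)
        self.alpha, self.k = alpha, k

    def forward(self, idx):
        m1 = torch.tanh(self.alpha * self.lin1(self.emb1(idx)))
        m2 = torch.tanh(self.alpha * self.lin2(self.emb2(idx)))
        # Antisymmetric score makes the learned relations uni-directed:
        # if a[i, j] > 0 then a[j, i] is clamped to 0 by the ReLU.
        a = torch.relu(torch.tanh(self.alpha * (m1 @ m2.T - m2 @ m1.T)))
        # Keep only the top-k neighbours per node to obtain a sparse graph.
        mask = torch.zeros_like(a)
        topk = a.topk(self.k, dim=1).indices
        mask.scatter_(1, topk, 1.0)
        return a * mask

# Example: learn an adjacency matrix over 137 series (e.g., sensors).
adj = GraphLearner(num_nodes=137)(torch.arange(137))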

Many natural language processing tasks rely solely on sparse dependencies between a few tokens in a sentence. Soft attention mechanisms show promising performance in modeling local/global dependencies via soft probabilities between every pair of tokens, but they are neither effective nor efficient when applied to long sentences. By contrast, hard attention mechanisms directly select a subset of tokens but are difficult and inefficient to train due to their combinatorial nature. In this paper, we integrate both soft and hard attention into one context fusion model, "reinforced self-attention (ReSA)", to their mutual benefit. In ReSA, a hard attention trims a sequence for a soft self-attention to process, while the soft attention feeds reward signals back to facilitate the training of the hard one. For this purpose, we develop a novel hard attention mechanism called "reinforced sequence sampling (RSS)", which selects tokens in parallel and is trained via policy gradient. Using two RSS modules, ReSA efficiently extracts the sparse dependencies between each pair of selected tokens. We finally propose an RNN/CNN-free sentence-encoding model, "reinforced self-attention network (ReSAN)", solely based on ReSA. It achieves state-of-the-art performance on both the Stanford Natural Language Inference (SNLI) and Sentences Involving Compositional Knowledge (SICK) datasets.
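
A rough sketch of a parallel token-selection layer trained with REINFORCE is given below (a generic hard-attention sketch with our own naming, not the paper's exact RSS architecture):

import torch
import torch.nn as nn

class SequenceSampler(nn.Module):
    """Samples a binary keep/drop decision per token, in parallel."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.scorer = nn.Linear(hidden_dim, 1)

    def forward(self, token_states):
        # token_states: (batch, seq_len, hidden_dim)
        probs = torch.sigmoid(self.scorer(token_states)).squeeze(-1)  # (batch, seq_len)
        mask = torch.bernoulli(probs)                                  # hard, non-differentiable selection
        log_probs = torch.log(probs * mask + (1 - probs) * (1 - mask) + 1e-8)
        return mask, log_probs

def reinforce_loss(log_probs, reward):
    # reward: a scalar (or per-example) signal fed back from the downstream soft attention / task loss.
    return -(reward * log_probs.sum(dim=1)).mean()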
