
A classic result by Stockmeyer gives a non-elementary lower bound for the emptiness problem for star-free generalized regular expressions. This result is intimately connected to the satisfiability problem for interval temporal logic, notably for formulas that make use of the so-called chop operator. Such an operator can indeed be interpreted as the inverse of the concatenation operation on regular languages, and this correspondence enables reductions between non-emptiness of star-free generalized regular expressions and satisfiability of formulas of the interval temporal logic of chop under the homogeneity assumption. In this paper, we study the complexity of the satisfiability problem for suitable weakenings of the chop interval temporal logic, which can be equivalently viewed as fragments of Halpern and Shoham's interval logic. We first consider the logic $\mathsf{BD}_{hom}$, featuring the modalities $B$, for \emph{begins}, corresponding to the prefix relation on pairs of intervals, and $D$, for \emph{during}, corresponding to the infix relation. The homogeneous models of $\mathsf{BD}_{hom}$ naturally correspond to languages defined by restricted forms of regular expressions that use union, complementation, and the inverses of the prefix and infix relations. This fragment has recently been shown to be PSPACE-complete. In this paper, we study the extension of $\mathsf{BD}_{hom}$ with the temporal neighborhood modality $A$ (corresponding to the Allen relation \emph{Meets}), and prove that this extension increases both the expressiveness and the complexity of the logic. In particular, we show that the resulting logic $\mathsf{BDA}_{hom}$ is EXPSPACE-complete.
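
As a point of reference, standard Halpern–Shoham-style clauses for the modalities involved, under the homogeneity assumption, are sketched below; conventions (e.g., on point intervals and strict versus non-strict containment) vary across the literature, so the paper's exact definitions should be consulted.

```latex
% Semantics over intervals [x,y], x <= y, on a linear order (sketch).
% Homogeneity: a proposition letter holds on an interval iff it holds at every point of it.
\[
\begin{aligned}
M,[x,y] &\models p &&\iff p \in V(z) \text{ for every } x \le z \le y \quad (\text{homogeneity})\\
M,[x,y] &\models \langle B \rangle \varphi &&\iff M,[x,z] \models \varphi \text{ for some } z \text{ with } x \le z < y\\
M,[x,y] &\models \langle D \rangle \varphi &&\iff M,[z,t] \models \varphi \text{ for some } z,t \text{ with } x < z \le t < y\\
M,[x,y] &\models \langle A \rangle \varphi &&\iff M,[y,z] \models \varphi \text{ for some } z \ge y
\end{aligned}
\]
```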

Related content

We consider the NP-hard problem of finding the closest rank-one binary tensor to a given binary tensor, which we refer to as the rank-one Boolean tensor factorization (BTF) problem. This optimization problem can be used to recover a planted rank-one tensor from noisy observations. We formulate rank-one BTF as the problem of minimizing a linear function over a highly structured multilinear set. Leveraging our prior results regarding the facial structure of multilinear polytopes, we propose novel linear programming relaxations for rank-one BTF. We then establish deterministic sufficient conditions under which our proposed linear programs recover a planted rank-one tensor. To analyze the effectiveness of these deterministic conditions, we consider a semi-random model for the noisy tensor, and obtain high probability recovery guarantees for the linear programs. Our theoretical results as well as numerical simulations indicate that certain facets of the multilinear polytope significantly improve the recovery properties of linear programming relaxations for rank-one BTF.
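
As a rough illustration of the kind of formulation involved (not the strengthened relaxations proposed in the paper), the following sketch sets up the baseline McCormick-style LP relaxation of rank-one BTF for a tiny tensor with scipy; the toy instance and variable names are ours.

```python
# Baseline (McCormick-style) LP relaxation of rank-one Boolean tensor factorization.
# Illustrative sketch only; the paper studies stronger relaxations based on
# facets of the multilinear polytope.
import numpy as np
from scipy.optimize import linprog

# Toy 2x2x2 binary tensor: a planted rank-one tensor with one flipped entry (noise).
x_true, y_true, z_true = np.array([1, 0]), np.array([1, 1]), np.array([1, 0])
T = np.einsum("i,j,k->ijk", x_true, y_true, z_true)
T[1, 1, 0] ^= 1  # flip one entry

n1, n2, n3 = T.shape
nvar = n1 + n2 + n3 + n1 * n2 * n3   # variables [x, y, z, w], all relaxed to [0, 1]

def w_index(i, j, k):
    return n1 + n2 + n3 + (i * n2 + j) * n3 + k

# Objective: sum_{T=1} (1 - w_ijk) + sum_{T=0} w_ijk  (the constant part is dropped).
c = np.zeros(nvar)
for i in range(n1):
    for j in range(n2):
        for k in range(n3):
            c[w_index(i, j, k)] = -1.0 if T[i, j, k] == 1 else 1.0

# McCormick constraints linking w_ijk to the product x_i * y_j * z_k.
A, b = [], []
for i in range(n1):
    for j in range(n2):
        for k in range(n3):
            wi = w_index(i, j, k)
            for var in (i, n1 + j, n1 + n2 + k):   # w <= x_i, w <= y_j, w <= z_k
                row = np.zeros(nvar); row[wi] = 1.0; row[var] = -1.0
                A.append(row); b.append(0.0)
            row = np.zeros(nvar)                   # w >= x_i + y_j + z_k - 2
            row[wi] = -1.0; row[i] = 1.0; row[n1 + j] = 1.0; row[n1 + n2 + k] = 1.0
            A.append(row); b.append(2.0)

res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
              bounds=[(0.0, 1.0)] * nvar, method="highs")
x_lp, y_lp, z_lp = res.x[:n1], res.x[n1:n1 + n2], res.x[n1 + n2:n1 + n2 + n3]
print("LP value (up to a constant):", res.fun)
print("x:", np.round(x_lp, 3), "y:", np.round(y_lp, 3), "z:", np.round(z_lp, 3))
```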

We present MULTIGAIN 2.0, a major extension to the controller synthesis tool MULTIGAIN, built on top of the probabilistic model checker PRISM. This new version extends MULTIGAIN's multi-objective capabilities, by allowing for the formal verification and synthesis of controllers for probabilistic systems with multi-dimensional long-run average reward structures, steady-state constraints, and linear temporal logic properties. Additionally, MULTIGAIN 2.0 can modify the underlying linear program to prevent unbounded-memory and other unintuitive solutions and visualizes Pareto curves, in the two- and three-dimensional cases, to facilitate trade-off analysis in multi-objective scenarios.
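
MULTIGAIN is driven through PRISM models and properties, which we do not reproduce here. To give a flavour of the kind of linear program underlying long-run average synthesis with steady-state and multi-objective constraints, the sketch below solves the classical occupation-measure LP for a toy unichain MDP; the MDP, rewards, and thresholds are invented, and the tool's actual program is more general.

```python
# Occupation-measure LP for long-run average reward in a (unichain) MDP, with a
# steady-state constraint and a second long-run average reward kept above a threshold.
# Illustrative sketch only; MULTIGAIN/PRISM build and solve a more general program.
import numpy as np
from scipy.optimize import linprog

S, A = 3, 2                       # toy MDP: 3 states, 2 actions
P = np.zeros((S, A, S))           # P[s, a, s'] = transition probability
P[0, 0] = [0.9, 0.1, 0.0]; P[0, 1] = [0.2, 0.7, 0.1]
P[1, 0] = [0.5, 0.4, 0.1]; P[1, 1] = [0.0, 0.1, 0.9]
P[2, 0] = [0.3, 0.0, 0.7]; P[2, 1] = [0.6, 0.2, 0.2]
r1 = np.array([[1.0, 4.0], [0.0, 2.0], [5.0, 0.0]])   # reward to maximise (long-run average)
r2 = np.array([[2.0, 0.0], [3.0, 1.0], [0.0, 0.0]])   # second reward, constrained below

nvar = S * A                      # variables x[s, a] >= 0: long-run state-action frequencies
idx = lambda s, a: s * A + a

# Flow constraints: sum_a x(s,a) = sum_{s',a} P[s',a,s] x(s',a), plus normalisation.
A_eq, b_eq = [], []
for s in range(S):
    row = np.zeros(nvar)
    for a in range(A):
        row[idx(s, a)] += 1.0
    for sp in range(S):
        for a in range(A):
            row[idx(sp, a)] -= P[sp, a, s]
    A_eq.append(row); b_eq.append(0.0)
A_eq.append(np.ones(nvar)); b_eq.append(1.0)

# Steady-state constraint: long-run fraction of time spent in state 2 at most 0.3.
# Multi-objective constraint: long-run average of r2 at least 0.5.
A_ub, b_ub = [], []
row = np.zeros(nvar); row[idx(2, 0)] = 1.0; row[idx(2, 1)] = 1.0
A_ub.append(row); b_ub.append(0.3)
A_ub.append(-r2.flatten()); b_ub.append(-0.5)

res = linprog(-r1.flatten(), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0, None)] * nvar, method="highs")
x = res.x.reshape(S, A)
policy = x / np.maximum(x.sum(axis=1, keepdims=True), 1e-12)   # randomised policy
print("optimal long-run average of r1:", -res.fun)
print("stationary policy (rows = states):\n", np.round(policy, 3))
```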

All known quantifier elimination procedures for Presburger arithmetic require doubly exponential time for eliminating a single block of existentially quantified variables. It has even been claimed in the literature that this upper bound is tight. We observe that this claim is incorrect and develop, as the main result of this paper, a quantifier elimination procedure eliminating a block of existentially quantified variables in singly exponential time. As corollaries, we can establish the precise complexity of numerous problems. Examples include deciding (i) monadic decomposability for existential formulas, (ii) whether an existential formula defines a well-quasi ordering or, more generally, (iii) certain formulas of Presburger arithmetic with Ramsey quantifiers. Moreover, despite the exponential blowup, our procedure shows that under mild assumptions, even NP upper bounds for decision problems about quantifier-free formulas can be transferred to existential formulas. The technical basis of our results is a kind of small model property for parametric integer programming that generalizes the seminal results by von zur Gathen and Sieveking on small integer points in convex polytopes.
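
For intuition, here are toy single-variable elimination steps of the kind such procedures perform (our examples, not taken from the paper); note the need for divisibility constraints in the quantifier-free result.

```latex
% Divisibility predicates are needed for quantifier-free equivalents:
\[
\exists x\,(y = 2x) \;\Longleftrightarrow\; y \equiv 0 \pmod{2}.
\]
% Eliminating one existentially quantified variable (Cooper-style test terms):
\[
\exists x\,\bigl( 3 \le x \;\wedge\; 2x \le y \;\wedge\; x \equiv 0 \pmod{3} \bigr)
\;\Longleftrightarrow\;
\bigvee_{j=0}^{2} \bigl( 2(3+j) \le y \;\wedge\; 3+j \equiv 0 \pmod{3} \bigr)
\;\Longleftrightarrow\;
y \ge 6 .
\]
```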

We present a novel approach for modeling bounded count time series data, by deriving accurate upper and lower bounds for the variance of a bounded count random variable while maintaining a fixed mean. Leveraging these bounds, we propose semiparametric mean and variance joint (MVJ) models utilizing a clipped-Laplace link function. These models offer a flexible and feasible structure for both mean and variance, accommodating various scenarios of under-dispersion, equi-dispersion, or over-dispersion in bounded time series. The proposed MVJ models feature a linear mean structure with positive regression coefficients summing to one and allow for negative regression coefficients and autocorrelations. We demonstrate that the autocorrelation structure of MVJ models mirrors that of an autoregressive moving-average (ARMA) process, provided the proposed clipped-Laplace link functions with nonnegative regression coefficients summing to one are utilized. We establish conditions ensuring the stationarity and ergodicity properties of the MVJ process, along with demonstrating the consistency and asymptotic normality of the conditional least squares estimators. To aid model selection and diagnostics, we introduce two model selection criteria and apply two model diagnostic statistics. Finally, we conduct simulations and real data analyses to investigate the finite-sample properties of the proposed MVJ models, providing insights into their efficacy and applicability in practical scenarios.
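
The exact bounds derived in the paper are not reproduced here, but the following sketch illustrates the type of statement involved, using two classical facts for an integer-valued variable on $\{0,\dots,n\}$ with mean $\mu$: the Bhatia–Davis upper bound $\mu(n-\mu)$ and the lower bound $\{\mu\}(1-\{\mu\})$, where $\{\mu\}$ is the fractional part of $\mu$.

```python
# Classical variance bounds for a count random variable supported on {0, ..., n}
# with fixed mean mu; the paper derives bounds of this flavour for its MVJ models.
import numpy as np

def var_upper(mu: float, n: int) -> float:
    """Bhatia-Davis bound, attained by the two-point law on {0, n}."""
    return mu * (n - mu)

def var_lower(mu: float) -> float:
    """Attained by the two-point law on {floor(mu), ceil(mu)}."""
    frac = mu - np.floor(mu)
    return frac * (1.0 - frac)

# Quick check against a binomial distribution with the same mean mu = n*p.
rng = np.random.default_rng(0)
n, p = 10, 0.37
mu = n * p
samples = rng.binomial(n, p, size=200_000)
print(f"mean {mu:.2f}:  lower {var_lower(mu):.3f}  "
      f"binomial variance {samples.var():.3f}  upper {var_upper(mu, n):.3f}")
# The binomial variance n*p*(1-p) = 2.331 sits strictly between the two bounds.
```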

We consider the problem of the discrete-time approximation of the solution of a one-dimensional SDE with piecewise locally Lipschitz drift and continuous diffusion coefficients with polynomial growth. In this paper, we study the strong convergence of a (semi-explicit) exponential-Euler scheme previously introduced in Bossy et al. (2021). We show the usual 1/2 rate of convergence for the exponential-Euler scheme when the drift is continuous. When the drift is discontinuous, the convergence rate is penalised by a factor $\epsilon$ that decreases with the time step. We examine the case of a diffusion coefficient vanishing at zero, which requires a positivity-preservation condition and a convergence analysis that exploits the negative and exponential moments of the scheme, with the help of the change-of-time technique introduced in Berkaoui et al. (2008). Asymptotic behaviour and theoretical stability of the exponential scheme, as well as numerical experiments, are also presented.
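
The scheme of Bossy et al. (2021) is not reproduced here; the sketch below only shows the generic shape of a semi-explicit exponential-Euler step for an SDE $dX = (a - bX)\,dt + \sigma\sqrt{X}\,dW$ whose diffusion vanishes at zero, with the linear part of the drift integrated exactly and a crude positive-part fix for positivity. The coefficients and the positivity fix are illustrative choices, not the paper's.

```python
# Generic semi-explicit exponential-Euler step for dX = (a - b X) dt + sigma * sqrt(X) dW.
# The linear drift is integrated exactly over the step; the diffusion is added explicitly.
# Illustrative sketch, not the scheme analysed in Bossy et al. (2021).
import numpy as np

def exp_euler_path(x0, a, b, sigma, T, n_steps, rng):
    dt = T / n_steps
    decay = np.exp(-b * dt)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        drift_part = a / b + (x[k] - a / b) * decay       # exact solution of x' = a - b x
        diff_part = sigma * np.sqrt(max(x[k], 0.0)) * dw  # explicit diffusion increment
        x[k + 1] = max(drift_part + diff_part, 0.0)       # crude positivity preservation
    return x

rng = np.random.default_rng(1)
path = exp_euler_path(x0=0.5, a=1.0, b=2.0, sigma=0.4, T=1.0, n_steps=1_000, rng=rng)
print("X_T ~", path[-1], " min over path:", path.min())
```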

The goal of multi-objective optimisation is to identify a collection of points which describe the best possible trade-offs between the multiple objectives. To solve this vector-valued optimisation problem, practitioners often appeal to scalarisation functions that transform the multi-objective problem into a collection of single-objective problems. This set of scalarised problems can then be solved using traditional single-objective optimisation techniques. In this work, we formalise this convention into a general mathematical framework. We show how this strategy effectively recasts the original multi-objective optimisation problem into a single-objective optimisation problem defined over sets. An appropriate class of objective functions for this new problem is that of R2 utilities, which are utility functions defined as a weighted integral over the scalarised optimisation problems. As part of our work, we show that these utilities are monotone and submodular set functions which can be optimised effectively using greedy optimisation algorithms. We then analyse the performance of these greedy algorithms both theoretically and empirically. Our analysis largely focusses on Bayesian optimisation, which is a popular probabilistic framework for black-box optimisation.
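
To make the set-function view concrete, here is a small sketch of greedy maximisation of an R2-style utility built from Chebyshev scalarisations of a toy bi-objective problem; the problem, weight sample, and candidate pool are invented for illustration.

```python
# Greedy maximisation of an R2 utility: U(X) = mean_w max_{x in X} s_w(f(x)),
# where each s_w is a (negated) weighted Chebyshev scalarisation. U is a monotone
# submodular set function, so greedy selection is a natural strategy.
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    """Toy bi-objective function (both objectives to be minimised)."""
    return np.stack([x[:, 0] ** 2 + x[:, 1] ** 2,
                     (x[:, 0] - 1.0) ** 2 + (x[:, 1] - 1.0) ** 2], axis=1)

candidates = rng.uniform(-1.0, 2.0, size=(500, 2))   # candidate decision vectors
Y = f(candidates)                                     # their objective vectors
ref = Y.min(axis=0)                                   # ideal/reference point
weights = rng.dirichlet(np.ones(2), size=64)          # weight vectors for scalarisation

# scal[w, i] = negated weighted Chebyshev value of candidate i under weight vector w.
scal = -np.max(weights[:, None, :] * (Y[None, :, :] - ref[None, None, :]), axis=2)

def r2_utility(selected):
    return scal[:, selected].max(axis=1).mean()

# Greedy: repeatedly add the candidate with the largest marginal utility gain.
selected = []
for _ in range(5):
    gains = [r2_utility(selected + [i]) for i in range(len(candidates))]
    selected.append(int(np.argmax(gains)))
    print(f"picked {selected[-1]}, utility = {r2_utility(selected):.4f}")
```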

Considered herein is a class of Boussinesq systems of Bona-Smith type that describe water waves in bounded two-dimensional domains with slip-wall boundary conditions and variable bottom topography. Such boundary conditions are necessary in situations involving water waves in channels, ports, and generally in basins with solid boundaries. We prove that, given appropriate initial conditions, the corresponding initial-boundary value problems have unique solutions locally in time, which is a fundamental property of deterministic mathematical modeling. Moreover, we demonstrate that the systems under consideration adhere to three basic conservation laws for water waves: mass, vorticity, and energy conservation. The theoretical analysis of these specific Boussinesq systems leads to a conservative mixed finite element formulation. Using explicit, relaxation Runge-Kutta methods for the discretization in time, we devise a fully discrete scheme for the numerical solution of initial-boundary value problems with slip-wall conditions, preserving mass, vorticity, and energy. Finally, we present a series of challenging numerical experiments to assess the applicability of the new numerical model.
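
The mixed finite element discretisation is beyond a short sketch, but the relaxation Runge–Kutta idea used for time stepping can be illustrated on a toy ODE with a quadratic invariant: after each Runge–Kutta step, the update is rescaled by a factor $\gamma$ chosen so that the invariant is preserved exactly. The oscillator example and the RK4 tableau below are ours, not the paper's scheme.

```python
# Relaxation Runge-Kutta: rescale each RK update by gamma so that a quadratic invariant
# (here the energy |u|^2 of a harmonic oscillator) is conserved exactly.
# Toy illustration of the time-stepping idea, not the paper's mixed FEM scheme.
import numpy as np

def rhs(u):
    x, v = u
    return np.array([v, -x])            # harmonic oscillator: x^2 + v^2 is conserved

def rk4_increment(u, dt):
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    return dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

def relaxation_step(u, dt):
    d = rk4_increment(u, dt)
    # Choose gamma so that |u + gamma*d|^2 = |u|^2 (gamma ~ 1 for an accurate method;
    # the relaxed step is interpreted as reaching time t_n + gamma*dt).
    gamma = -2.0 * np.dot(u, d) / np.dot(d, d)
    return u + gamma * d, gamma

u, dt, energy0 = np.array([1.0, 0.0]), 0.1, 1.0
for n in range(1000):
    u, gamma = relaxation_step(u, dt)
print("energy drift:", np.dot(u, u) - energy0, " last gamma:", gamma)
```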

Overdamped Langevin dynamics are reversible stochastic differential equations which are commonly used to sample probability measures in high-dimensional spaces, such as the ones appearing in computational statistical physics and Bayesian inference. By varying the diffusion coefficient, there are in fact infinitely many overdamped Langevin dynamics which are reversible with respect to the target probability measure at hand. This suggests to optimize the diffusion coefficient in order to increase the convergence rate of the dynamics, as measured by the spectral gap of the generator associated with the stochastic differential equation. We analytically study this problem here, obtaining in particular necessary conditions on the optimal diffusion coefficient. We also derive an explicit expression of the optimal diffusion in some appropriate homogenized limit. Numerical results, both relying on discretizations of the spectral gap problem and Monte Carlo simulations of the stochastic dynamics, demonstrate the increased quality of the sampling arising from an appropriate choice of the diffusion coefficient.
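
In standard form, the family of dynamics in question and the spectral-gap criterion can be written as follows (scalar or matrix-valued diffusion coefficient $D$; the notation and the choice of normalisation on $D$ are ours):

```latex
% Overdamped Langevin dynamics with a position-dependent diffusion coefficient D,
% all reversible with respect to the target measure pi:
\[
dX_t = \bigl( D(X_t)\,\nabla \log \pi(X_t) + \nabla \!\cdot\! D(X_t) \bigr)\,dt
       + \sqrt{2\,D(X_t)}\, dW_t ,
\qquad
\mathcal{L} f = \frac{1}{\pi}\,\nabla \!\cdot\! \bigl( \pi\, D\, \nabla f \bigr).
\]
% The convergence rate is measured by the spectral gap of the generator, to be
% maximised over admissible D (e.g. under a bound on the magnitude of D):
\[
\lambda(D) \;=\; \inf \Bigl\{ \int \nabla f \cdot D\, \nabla f \, d\pi \;:\;
\int f \, d\pi = 0, \ \int f^2 \, d\pi = 1 \Bigr\}.
\]
```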

Detecting multiple unknown objects in noisy data is a key problem in many scientific fields, such as electron microscopy imaging. A common model for the unknown objects is the linear subspace model, which assumes that the objects can be expanded in some known basis (such as the Fourier basis). In this paper, we develop an object detection algorithm that, under the linear subspace model, is asymptotically guaranteed to detect all objects, while controlling the family-wise error rate or the false discovery rate. Numerical simulations show that the algorithm also controls the error rate with high power in the non-asymptotic regime, even in highly challenging settings. We apply the proposed algorithm to an experimental electron microscopy data set and show that it outperforms existing standard software.
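
The paper's detection statistic under the linear subspace model is not reproduced here; the sketch below only shows the generic Benjamini–Hochberg step used to turn per-candidate p-values into an FDR-controlled detection set (the scores, level, and planted signal are invented).

```python
# Benjamini-Hochberg step: turn per-candidate detection p-values into a set of
# detections with false discovery rate controlled at level q. Illustrative only;
# the paper's test statistic and asymptotic guarantees are specific to its model.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
m = 1000                                   # candidate object locations
signal = np.zeros(m); signal[:30] = 3.5    # 30 true objects with SNR ~ 3.5
scores = signal + rng.normal(size=m)       # matched-filter-like scores under noise
pvals = norm.sf(scores)                    # one-sided p-values (no object => N(0,1))

def benjamini_hochberg(pvals, q=0.1):
    m = len(pvals)
    order = np.argsort(pvals)
    passed = np.nonzero(pvals[order] <= q * np.arange(1, m + 1) / m)[0]
    rejected = np.zeros(m, dtype=bool)
    if passed.size:
        k = passed[-1]                     # largest index meeting the BH condition
        rejected[order[:k + 1]] = True
    return rejected

detected = benjamini_hochberg(pvals, q=0.1)
true_obj = signal > 0
print("detections:", detected.sum(),
      " true detections:", (detected & true_obj).sum(),
      " false detections:", (detected & ~true_obj).sum())
```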

In large-scale systems there are fundamental challenges when centralised techniques are used for task allocation. The number of interactions is limited by resource constraints such as on computation, storage, and network communication. We can increase scalability by implementing the system as a distributed task-allocation system, sharing tasks across many agents. However, this also increases the resource cost of communications and synchronisation, and is difficult to scale. In this paper we present four algorithms to solve these problems. The combination of these algorithms enables each agent to improve their task allocation strategy through reinforcement learning, while changing how much they explore the system in response to how optimal they believe their current strategy is, given their past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource usage limits, limiting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, to then carry out those tasks. We also simulate real-life system effects such as networking instability. Our solution is shown to solve the task allocation problem to within 6.7% of the theoretical optimum for the system configurations considered. It provides 5x better performance recovery than approaches without knowledge retention when system connectivity is impaired, and it has been tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
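
The four algorithms themselves are not reproduced here; the sketch below only illustrates the stated idea of coupling a learned allocation strategy with an exploration rate that shrinks as the agent becomes more confident that its current strategy is close to optimal (the update rules and constants are invented).

```python
# Toy sketch: an agent learns which worker to send a subtask to (running-average values),
# while its exploration rate tracks how confident it is in its current best choice.
# Invented update rules for illustration; not the paper's four algorithms.
import numpy as np

rng = np.random.default_rng(0)
true_success = np.array([0.3, 0.55, 0.8, 0.4])   # hidden per-worker success rates
q = np.zeros(4)                                   # estimated value of each worker
counts = np.zeros(4)

for t in range(2000):
    best = int(np.argmax(q))
    confidence = counts[best] / (counts[best] + 10.0)        # grows with experience
    epsilon = 0.5 * (1.0 - confidence)                       # explore less when confident
    worker = rng.integers(4) if rng.random() < epsilon else best
    reward = float(rng.random() < true_success[worker])      # did the subtask succeed?
    counts[worker] += 1
    q[worker] += (reward - q[worker]) / counts[worker]       # running-average update

print("estimated success rates:", np.round(q, 2), " final epsilon:", round(epsilon, 3))
```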
