
We prove that the scaled maximum steady-state waiting time and the scaled maximum steady-state queue length among $N$ $GI/GI/1$-queues in the $N$-server fork-join queue converge to a normally distributed random variable as $N\to\infty$. The maximum steady-state waiting time in this queueing system scales around $\frac{1}{\gamma}\log N$, where $\gamma$ is determined by the cumulant generating function $\Lambda$ of the service distribution and solves the Cram\'er-Lundberg equation with stochastic service times and deterministic inter-arrival times. The value $\frac{1}{\gamma}\log N$ is reached at a certain hitting time, and the number of arrivals until that hitting time satisfies a central limit theorem with standard deviation $\frac{\sigma_A}{\sqrt{\Lambda'(\gamma)\gamma}}$. Using the distributional Little's law, we extend this result to the maximum queue length. Finally, we extend these results to a fork-join queue with different classes of servers.
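
To make the scaling constant concrete, here is a minimal numerical sketch of solving a Cram\'er-Lundberg-type equation for $\gamma$. It assumes the equation takes the standard form $\Lambda(\gamma) = \gamma a$ for a deterministic inter-arrival time $a$, and picks exponential service times with rate $\mu$ purely for illustration; neither choice is prescribed by the paper.

```python
# Minimal sketch: solve the Cramer-Lundberg equation Lambda(gamma) = gamma * a
# for the decay rate gamma, assuming exponential service times with rate mu
# and a deterministic inter-arrival time a (illustrative choices).
import numpy as np
from scipy.optimize import brentq

mu, a = 2.0, 1.0             # service rate and inter-arrival time (stability needs 1/mu < a)

def cgf(g):                  # cumulant generating function of an Exp(mu) service time
    return np.log(mu / (mu - g))

# gamma solves Lambda(gamma) = gamma * a on (0, mu)
gamma = brentq(lambda g: cgf(g) - g * a, 1e-9, mu - 1e-9)
print(f"decay rate gamma = {gamma:.4f}; max waiting time scales as log(N)/gamma")
```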

Related content

We present the OGAN algorithm for automatic requirement falsification of cyber-physical systems. System inputs and outputs are represented as piecewise constant signals over time, while requirements are expressed in signal temporal logic. OGAN can find inputs that are counterexamples for the safety of a system, revealing design, software, or hardware defects before the system is taken into operation. The OGAN algorithm works by training a generative machine learning model to produce such counterexamples. It executes tests atomically and does not require any previous model of the system under test. We evaluate OGAN using the ARCH-COMP benchmark problems, and the experimental results show that generative models are a viable method for requirement falsification. OGAN can be applied to new systems with little effort, has few requirements for the system under test, and exhibits state-of-the-art CPS falsification efficiency and effectiveness.
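
As a rough illustration of the falsification loop, the sketch below replaces OGAN's neural generator and discriminator with random candidate generation plus a 1-nearest-neighbor surrogate of the requirement's robustness; the toy system, the requirement, and all parameters are invented stand-ins, not the authors' setup.

```python
# Simplified sketch of generative falsification in the spirit of OGAN:
# sample candidate input signals, rank them with a surrogate model of the
# requirement's robustness fitted to past tests, and execute only the most
# promising candidate. The real OGAN trains neural generator/discriminator
# networks; everything here is an illustrative stand-in.
import numpy as np

rng = np.random.default_rng(0)

def system(u):                       # toy system under test: smoothed input signal
    return np.convolve(u, np.ones(3) / 3, mode="same")

def robustness(y):                   # STL-style requirement "always y < 1.5":
    return 1.5 - y.max()             # negative value = requirement falsified

X, R = [], []                        # tested inputs and observed robustness
for step in range(200):
    cands = rng.uniform(-2, 2, size=(64, 20))    # piecewise-constant candidates
    if X:                            # surrogate: 1-nearest-neighbor prediction
        D = np.linalg.norm(cands[:, None] - np.array(X)[None], axis=2)
        pred = np.array(R)[D.argmin(axis=1)]
        u = cands[pred.argmin()]     # test the candidate predicted most unsafe
    else:
        u = cands[0]
    r = robustness(system(u))        # one atomic test execution
    X.append(u); R.append(r)
    if r < 0:
        print(f"falsified at test {step}: robustness {r:.3f}")
        break
```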

We give a simple proof that assuming the Exponential Time Hypothesis (ETH), determining the winner of a Rabin game cannot be done in time $2^{o(k \log k)} \cdot n^{O(1)}$, where $k$ is the number of pairs of vertex subsets involved in the winning condition and $n$ is the vertex count of the game graph. While this result follows from the lower bounds provided by Calude et al. [SIAM J. Comp. 2022], our reduction is simpler and arguably provides more insight into the complexity of the problem. In fact, the analogous lower bounds discussed by Calude et al., for solving Muller games and multidimensional parity games, follow as simple corollaries of our approach. Our reduction also highlights the usefulness of a certain pivot problem -- Permutation SAT -- which may be of independent interest.
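
For readers unfamiliar with the winning condition that the parameter $k$ counts, the snippet below spells out the standard Rabin acceptance check on the set of vertices visited infinitely often; the pairs and sets are illustrative.

```python
# A play is winning iff for some Rabin pair (E_i, F_i) the set Inf of
# vertices visited infinitely often avoids E_i and meets F_i.
def rabin_wins(inf_set, pairs):
    return any(not (inf_set & E) and bool(inf_set & F) for E, F in pairs)

pairs = [({1}, {2}), ({3}, {4})]          # k = 2 Rabin pairs
print(rabin_wins({2, 4}, pairs))          # True: the first pair is satisfied
print(rabin_wins({1, 2}, pairs))          # False: no pair is satisfied
```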

A new numerical domain decomposition method is proposed for solving elliptic equations on compact Riemannian manifolds. The advantage of this method is that it avoids global triangulations or grids on the manifold. Our method is numerically tested on some $4$-dimensional manifolds such as the unit sphere $S^{4}$, the complex projective space $\mathbb{CP}^{2}$ and the product manifold $S^{2} \times S^{2}$.
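
The manifold setting does not fit in a few lines, but the sketch below shows the underlying domain decomposition mechanism in flat 1D: an alternating Schwarz iteration for $-u'' = f$ on two overlapping subdomains, each solved locally with Dirichlet data taken from the current iterate. The grid, subdomains, and right-hand side are illustrative, not the paper's method.

```python
# Alternating Schwarz iteration for -u'' = f on (0,1), two overlapping
# subdomains, standard 3-point finite differences on each local solve.
import numpy as np

n, f = 101, np.ones(101)                  # grid size and right-hand side f = 1
x, h = np.linspace(0, 1, n), 1 / (n - 1)
u = np.zeros(n)                           # zero boundary values / initial guess

def solve_local(u, lo, hi):               # Dirichlet solve on subdomain [lo, hi]
    m = hi - lo - 1
    A = (np.diag(2 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    b = f[lo + 1:hi].copy()
    b[0] += u[lo] / h**2; b[-1] += u[hi] / h**2   # boundary data from current iterate
    u[lo + 1:hi] = np.linalg.solve(A, b)

for it in range(50):                      # alternate over the overlapping subdomains
    solve_local(u, 0, 60)                 # subdomain 1: x in [0, 0.6]
    solve_local(u, 40, n - 1)             # subdomain 2: x in [0.4, 1]

print("max error vs exact x(1-x)/2:", np.abs(u - x * (1 - x) / 2).max())
```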

In [3] it was shown that four seemingly different algorithms for computing low-rank approximate solutions $X_j$ to the solution $X$ of large-scale continuous-time algebraic Riccati equations (CAREs) $0 = \mathcal{R}(X) := A^HX+XA+C^HC-XBB^HX$ generate the same sequence $X_j$ when used with the same parameters. The Hermitian low-rank approximations $X_j$ are of the form $X_j = Z_jY_jZ_j^H,$ where $Z_j$ is a matrix with only few columns and $Y_j$ is a small square Hermitian matrix. Each $X_j$ generates a low-rank Riccati residual $\mathcal{R}(X_j)$ whose norm can be evaluated easily, allowing for an efficient termination criterion. Here a new family of methods to generate such low-rank approximate solutions $X_j$ of CAREs is proposed. Each member of this family generates the same sequence of $X_j$ as the four previously known algorithms. The approach is based on a block rational Arnoldi decomposition and an associated block rational Krylov subspace spanned by $A^H$ and $C^H.$ Two specific versions of the general algorithm are considered; one turns out to be a rediscovery of the RADI algorithm, while the other allows for a slightly more efficient implementation compared to the RADI algorithm. Moreover, our approach allows for adding more than one shift at a time.
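
The cheap residual evaluation can be made concrete: for $X = Z Y Z^H$ the residual factors as $\mathcal{R}(X) = W T W^H$ with the thin matrix $W = [C^H, Z, A^H Z]$, so its Frobenius norm follows from a thin QR of $W$ without ever forming an $n \times n$ matrix. A small self-check of this identity, with randomly generated matrices standing in for a real problem:

```python
# Verify ||R(X)||_F can be computed from thin factors only, for X = Z Y Z^H.
import numpy as np

rng = np.random.default_rng(1)
n, m, p, k = 400, 2, 3, 12
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
B, C = rng.standard_normal((n, m)), rng.standard_normal((p, n))
Z = np.linalg.qr(rng.standard_normal((n, k)))[0]   # some low-rank basis
Y = np.eye(k)                                      # small Hermitian core

G = Z.conj().T @ B @ B.conj().T @ Z                # k x k
W = np.hstack([C.conj().T, Z, A.conj().T @ Z])     # n x (p + 2k), thin
T = np.zeros((p + 2 * k, p + 2 * k))
T[:p, :p] = np.eye(p)                              # C^H C block
T[p:p + k, p:p + k] = -Y @ G @ Y                   # -X B B^H X block
T[p:p + k, p + k:] = Y                             # X A and A^H X blocks
T[p + k:, p:p + k] = Y

Rw = np.linalg.qr(W, mode="r")                     # thin QR, O(n k^2) work
res_norm = np.linalg.norm(Rw @ T @ Rw.conj().T)    # = ||R(X)||_F, no n x n matrix

# compare against the dense residual for this moderate-size example
X = Z @ Y @ Z.conj().T
R = A.conj().T @ X + X @ A + C.conj().T @ C - X @ B @ B.conj().T @ X
print(res_norm, np.linalg.norm(R))                 # the two values agree
```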

We construct a graph with $n$ vertices where the smoothed runtime of the 3-FLIP algorithm for the 3-Opt Local Max-Cut problem can be as large as $2^{\Omega(\sqrt{n})}$. This provides the first example where a local search algorithm for the Max-Cut problem can fail to be efficient in the framework of smoothed analysis. We also give a new construction of graphs where the runtime of the FLIP algorithm for the Local Max-Cut problem is $2^{\Omega(n)}$ for any pivot rule. This graph is much smaller and has a simpler structure than previous constructions.
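
For reference, the FLIP algorithm analyzed here is simply local search over single-vertex moves: flip a vertex to the other side of the cut while doing so increases the cut weight. A minimal sketch, with a random weighted graph and a first-improvement pivot rule chosen purely for illustration:

```python
# FLIP local search for Max-Cut on a random weighted graph.
import numpy as np

rng = np.random.default_rng(2)
n = 30
W = rng.uniform(0, 1, (n, n)); W = np.triu(W, 1); W += W.T   # symmetric weights
side = rng.integers(0, 2, n)                                  # initial cut

def gain(v):                      # cut-weight change from flipping vertex v
    same = side == side[v]        # (W[v, v] = 0, so including v is harmless)
    return W[v, same].sum() - W[v, ~same].sum()

flips = 0
while True:
    improving = [v for v in range(n) if gain(v) > 1e-12]
    if not improving:
        break                     # local optimum: no single flip improves the cut
    side[improving[0]] ^= 1       # pivot rule: flip the first improving vertex
    flips += 1

cut = W[np.ix_(side == 0, side == 1)].sum()
print(f"local optimum after {flips} flips, cut weight {cut:.3f}")
```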

We investigate the properties of the high-order discontinuous Galerkin spectral element method (DGSEM) with implicit backward-Euler time stepping for the approximation of a hyperbolic linear scalar conservation equation in multiple space dimensions. We first prove that the DGSEM scheme in one space dimension preserves a maximum principle for the cell-averaged solution when the time step is large enough. This property, however, no longer holds in multiple space dimensions, and we propose to use the flux-corrected transport limiting [Boris and Book, J. Comput. Phys., 11 (1973)] based on a low-order approximation using graph viscosity to impose a maximum principle on the cell-averaged solution. These results allow the use of a linear scaling limiter [Zhang and Shu, J. Comput. Phys., 229 (2010)] to impose a maximum principle at nodal values within elements. Then, we investigate the inversion of the linear systems resulting from the time-implicit discretization at each time step. We prove that the diagonal blocks are invertible and provide efficient algorithms for their inversion. Numerical experiments in one and two space dimensions are presented to illustrate the conclusions of the present analyses.
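
The linear scaling limiter of Zhang and Shu referenced above admits a very short implementation: nodal values are compressed linearly toward the cell average until they satisfy the global bounds. The sketch below assumes equal-weight nodes, so the cell average is a plain mean; in practice the quadrature-weighted average is used, and the average itself must already lie in the bounds.

```python
# Linear scaling limiter: push nodal values of one element into [m, M]
# while preserving the cell average.
import numpy as np

def scaling_limiter(u, m, M):
    ubar = u.mean()                        # cell average (assumed to lie in [m, M])
    theta = min(1.0,
                (M - ubar) / max(u.max() - ubar, 1e-14),
                (ubar - m) / max(ubar - u.min(), 1e-14))
    return ubar + theta * (u - ubar)       # linear scaling around the average

u = np.array([0.95, 1.10, -0.08, 0.40])    # overshoots the bounds [0, 1]
print(scaling_limiter(u, 0.0, 1.0))        # limited values, same mean
```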

The aim of ordinal classification is to predict the ordered labels of the output from a set of observed inputs. Interval-valued data refers to data in the form of intervals. For the first time, interval-valued data and interval-valued functional data are considered as inputs in an ordinal classification problem. Six ordinal classifiers for interval data and interval-valued functional data are proposed. Three of them are parametric: one is based on ordinal binary decompositions and the other two on ordered logistic regression. The other three are based on distances between interval data and kernels on interval data: one uses the weighted $k$-nearest-neighbor technique for ordinal classification, another combines kernel principal component analysis with an ordinal classifier, and the sixth, which performs best, uses a kernel-induced ordinal random forest. They are compared with na\"ive approaches in an extensive experimental study with synthetic data sets and original real data sets on human global development and weather. The results show that taking the ordering and the interval-valued information into account improves accuracy. The source code and data sets are available at //github.com/aleixalcacer/OCFIVD.
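
As a flavor of the distance-based methods, here is a sketch of ordinal $k$-nearest-neighbor classification of interval-valued inputs. The Hausdorff-type distance, the median-based label aggregation, and the toy data are illustrative choices, not the paper's exact constructions.

```python
# Ordinal k-NN on interval-valued features using a Hausdorff-type distance.
import numpy as np

def interval_dist(a, b):
    """Hausdorff distance between intervals [lo, hi], summed over features."""
    return np.maximum(np.abs(a[..., 0] - b[..., 0]),
                      np.abs(a[..., 1] - b[..., 1])).sum(axis=-1)

def knn_ordinal(train_X, train_y, x, k=3):
    d = interval_dist(train_X, x)          # distances to all training items
    neigh = train_y[np.argsort(d)[:k]]
    return int(round(np.median(neigh)))    # the median respects the label order

# toy data: 2 interval-valued features per item, ordered labels 0 < 1 < 2
train_X = np.array([[[0, 1], [0, 1]], [[2, 3], [1, 2]], [[4, 6], [3, 5]],
                    [[0, 2], [1, 2]], [[5, 6], [4, 5]]], dtype=float)
train_y = np.array([0, 1, 2, 0, 2])
x = np.array([[2.1, 3.2], [1.0, 2.1]])
print(knn_ordinal(train_X, train_y, x))    # predicted ordinal label
```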

We construct and analyze finite element approximations of the Einstein tensor in dimension $N \ge 3$. We focus on the setting where a smooth Riemannian metric tensor $g$ on a polyhedral domain $\Omega \subset \mathbb{R}^N$ has been approximated by a piecewise polynomial metric $g_h$ on a simplicial triangulation $\mathcal{T}$ of $\Omega$ having maximum element diameter $h$. We assume that $g_h$ possesses single-valued tangential-tangential components on every codimension-1 simplex in $\mathcal{T}$. Such a metric is not classically differentiable in general, but it turns out that one can still attribute meaning to its Einstein curvature in a distributional sense. We study the convergence of the distributional Einstein curvature of $g_h$ to the Einstein curvature of $g$ under refinement of the triangulation. We show that in the $H^{-2}(\Omega)$-norm, this convergence takes place at a rate of $O(h^{r+1})$ when $g_h$ is an optimal-order interpolant of $g$ that is piecewise polynomial of degree $r \ge 1$. We provide numerical evidence to support this claim.

Contraction in Wasserstein 1-distance with explicit rates is established for generalized Hamiltonian Monte Carlo with stochastic gradients under possibly nonconvex conditions. The algorithms considered include splitting schemes of the kinetic Langevin diffusion. As a consequence, quantitative Gaussian concentration bounds are provided for empirical averages. Convergence in Wasserstein 2-distance, total variation, and relative entropy is also established, together with numerical bias estimates.
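
A typical member of this algorithm class is a splitting scheme for kinetic Langevin dynamics with stochastic gradients. The sketch below implements a BAOAB-type step; the quadratic target, the artificial gradient noise, and all parameters are illustrative, meant only to show the kind of scheme covered, not the paper's exact method.

```python
# BAOAB-type splitting step for kinetic Langevin with a noisy gradient.
import numpy as np

rng = np.random.default_rng(3)
gamma_fric, h, beta = 1.0, 0.1, 1.0        # friction, step size, inverse temperature

def grad_U(x):                             # stochastic gradient of U(x) = x^2/2
    return x + 0.1 * rng.standard_normal(x.shape)   # noise mimics minibatching

def baoab_step(x, v):
    v -= 0.5 * h * grad_U(x)               # B: half kick
    x += 0.5 * h * v                       # A: half drift
    c = np.exp(-gamma_fric * h)            # O: exact Ornstein-Uhlenbeck on velocity
    v = c * v + np.sqrt((1 - c**2) / beta) * rng.standard_normal(v.shape)
    x += 0.5 * h * v                       # A: half drift
    v -= 0.5 * h * grad_U(x)               # B: half kick
    return x, v

x, v = np.full(1000, 5.0), np.zeros(1000)  # 1000 chains from a far-off start
for _ in range(2000):
    x, v = baoab_step(x, v)
print("empirical var ~ 1/beta:", x.var())  # samples concentrate near N(0, 1)
```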

Next Point-of-Interest (POI) recommendation is a critical task in location-based services, which aims to provide personalized suggestions for the user's next destination. Previous works on POI recommendation have mainly focused on modeling the user's spatial preference. However, existing works that leverage spatial information are based only on the aggregation of users' previously visited positions, which discourages the model from recommending POIs in novel areas. This trait of position-based methods harms the model's performance in many situations. Additionally, incorporating sequential information into the user's spatial preference remains a challenge. In this paper, we propose Diff-POI, a diffusion-based model that samples the user's spatial preference for next POI recommendation. Inspired by the wide application of diffusion algorithms in sampling from distributions, Diff-POI encodes the user's visiting sequence and spatial characteristics with two tailor-designed graph encoding modules, followed by a diffusion-based sampling strategy to explore the user's spatial visiting trends. We leverage the diffusion process and its reverse form to sample from the posterior distribution and optimize the corresponding score function. We design a joint training and inference framework to optimize and evaluate the proposed Diff-POI. Extensive experiments on four real-world POI recommendation datasets demonstrate the superiority of Diff-POI over state-of-the-art baseline methods. Further ablation and parameter studies reveal the functionality and effectiveness of the proposed diffusion-based sampling strategy in addressing the limitations of existing methods.
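
The core sampling mechanism can be illustrated in isolation: Langevin dynamics driven by a score function draws samples from a target distribution, which is the building block behind score-based reverse diffusion. In the sketch below, the known score of a 1D Gaussian mixture stands in for the learned, preference-conditioned score a model like Diff-POI would provide.

```python
# Unadjusted Langevin sampling driven by a (here analytically known) score.
import numpy as np

rng = np.random.default_rng(4)

def score(x):                    # score of the mixture 0.5*N(-2,1) + 0.5*N(2,1)
    w1 = np.exp(-0.5 * (x - 2) ** 2)
    w2 = np.exp(-0.5 * (x + 2) ** 2)
    return (w1 * (2 - x) + w2 * (-2 - x)) / (w1 + w2)

x = rng.standard_normal(5000)    # start from noise
eps = 0.05
for _ in range(500):             # Langevin steps: drift along the score + noise
    x += eps * score(x) + np.sqrt(2 * eps) * rng.standard_normal(x.shape)

print("sample mean/var:", x.mean(), x.var())   # bimodal samples around -2 and 2
```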
