
The classic technique of Baker [J. ACM '94] is the most fundamental approach for designing approximation schemes on planar, or more generally topologically constrained, graphs, and it has been applied in a myriad of variants and settings throughout the last 30 years. In this work we propose a dynamic variant of Baker's technique, where instead of finding an approximate solution in a given static graph, the task is to design a data structure for maintaining an approximate solution in a fully dynamic graph, that is, a graph that changes over time by edge deletions and edge insertions. Specifically, we address the two most basic problems -- Maximum Weight Independent Set and Minimum Weight Dominating Set -- and we prove the following: for a fully dynamic $n$-vertex planar graph $G$, one can:

* maintain a $(1-\varepsilon)$-approximation of the maximum weight of an independent set in $G$ with amortized update time $f(\varepsilon)\cdot n^{o(1)}$; and,
* under the additional assumption that the maximum degree of the graph is bounded at all times by a constant, also maintain a $(1+\varepsilon)$-approximation of the minimum weight of a dominating set in $G$ with amortized update time $f(\varepsilon)\cdot n^{o(1)}$.

In both cases, $f(\varepsilon)$ is doubly-exponential in $\mathrm{poly}(1/\varepsilon)$ and the data structure can be initialized in time $f(\varepsilon)\cdot n^{1+o(1)}$. All our results in fact hold in the larger generality of any graph class that excludes a fixed apex graph as a minor.
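To make the layering idea concrete, below is a minimal sketch of the classical static version of Baker's technique for maximum weight independent set, assuming a connected graph given as an adjacency dict with a weight dict; the exact bounded-treewidth solver used on each $O(1/\varepsilon)$-outerplanar piece is replaced here by a brute-force placeholder, and the paper's dynamic data structures are not shown.

```python
from collections import deque
from itertools import combinations

def bfs_layers(adj, root):
    """Partition the vertices into BFS layers from a root (assumes connectivity)."""
    layer = {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in layer:
                layer[v] = layer[u] + 1
                queue.append(v)
    return layer

def exact_mwis(adj, verts, w):
    """Placeholder exact solver (brute force); Baker's technique would run
    a treewidth dynamic program on each bounded-treewidth piece instead."""
    best = 0
    vs = list(verts)
    for r in range(len(vs) + 1):
        for cand in combinations(vs, r):
            S = set(cand)
            if all(v not in S for u in S for v in adj[u]):
                best = max(best, sum(w[u] for u in S))
    return best

def baker_mwis(adj, w, root, eps):
    """(1 - eps)-approximate MWIS via layer deletion: for each shift s,
    drop every k-th BFS layer and solve the low-treewidth remainder."""
    k = max(2, round(1 / eps))
    layer = bfs_layers(adj, root)
    best = 0
    for s in range(k):
        kept = {v for v in adj if layer[v] % k != s}
        sub = {u: [v for v in adj[u] if v in kept] for u in kept}
        best = max(best, exact_mwis(sub, kept, w))
    return best  # the best shift loses at most OPT / k of the optimum's weight
```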

Related Content

We introduce a new technique called Drapes to enhance the sensitivity of searches for new physics at the LHC. By training diffusion models on side-band data, we show how background templates for the signal region can be generated either directly from noise or by partially applying the diffusion process to existing data. In the partial diffusion case, data can be drawn from side-band regions, with the inverse diffusion performed for new target conditional values, or from the signal region, preserving the distribution over the conditional property that defines the signal region. We apply this technique to the hunt for resonances using the LHCO di-jet dataset, and achieve state-of-the-art performance for background template generation using high-level input features. We also show how Drapes can be applied to low-level inputs with jet constituents, reducing the dependence of the model on the choice of input observables. Using jet constituents we can further improve sensitivity to the signal process, but we observe a loss in performance where the signal significance before applying any selection is below 4$\sigma$.
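As a rough illustration of the partial-diffusion step under standard DDPM conventions, the sketch below noises events only up to an intermediate step and then denoises them back under a new conditional value; `denoise`, `cond_new`, and the noise schedule `alphas_bar` are hypothetical placeholders, not the paper's actual model or training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def partial_diffuse(x0, t_mid, alphas_bar, denoise, cond_new):
    """Partially diffuse real events to step t_mid, then reverse-diffuse
    back to t=0 under a new conditional value. alphas_bar[t] is the
    cumulative product of the noise schedule, with alphas_bar[0] = 1."""
    a_bar = alphas_bar[t_mid]
    # forward process: jump straight to step t_mid via the closed-form marginal
    x = np.sqrt(a_bar) * x0 + np.sqrt(1 - a_bar) * rng.standard_normal(x0.shape)
    # reverse process: standard DDPM ancestral sampling back down to t = 0
    for t in range(t_mid, 0, -1):
        a_t = alphas_bar[t] / alphas_bar[t - 1]
        eps_hat = denoise(x, t, cond_new)   # hypothetical trained noise model
        mean = (x - (1 - a_t) / np.sqrt(1 - alphas_bar[t]) * eps_hat) / np.sqrt(a_t)
        x = mean + np.sqrt(1 - a_t) * rng.standard_normal(x.shape) if t > 1 else mean
    return x
```

Choosing `t_mid` close to 0 keeps the template close to the input data, while taking it to the final step recovers generation directly from noise.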

A simple graph on $n$ vertices may contain many maximum cliques. But how many can it potentially contain? We define prime and composite graphs, and we show that if $n \ge 15$, then the graphs with the maximum number of maximum cliques have to be composite. Moreover, we prove an edge bound from which it follows that if any factor $G_i$ of a composite graph has $\omega(G_i) \ge 5$, then the graph cannot have the maximum number of maximum cliques. Using this, we show that the graph containing $3^{\lfloor n/3 \rfloor}c$ maximum cliques has the maximum number of maximum cliques on $n$ vertices, where $c\in\{1,\frac{4}{3},2\}$ depending on $n \bmod 3$.
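The stated extremal count is easy to evaluate; the helper below (a hypothetical name, simply transcribing the abstract's formula) returns it for a given $n$.

```python
def max_clique_count_bound(n):
    """Largest number of maximum cliques an n-vertex graph can contain,
    per the formula 3**(n // 3) * c with c in {1, 4/3, 2} by n mod 3."""
    c = {0: 1, 1: 4 / 3, 2: 2}[n % 3]
    return round(3 ** (n // 3) * c)

# e.g. max_clique_count_bound(6) == 9, (7) == 12, (8) == 18
```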

Introducing a coupling framework reminiscent of FETI methods, but here in abstract form, we establish conditions for stability and minimal requirements for well-posedness at the continuous level, as well as conditions on local solvers for the approximation of subproblems. We then discuss stability of the resulting Lagrange multiplier methods and show stability under a mesh condition relating the local discretizations and the mortar space. If this condition is not satisfied, we show how a stabilization acting only on the multiplier can be used to achieve stability. The design of preconditioners for the Schur complement system is discussed in the unstabilized case. Finally, we discuss some applications that fit the framework.
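As a toy illustration of the multiplier solve behind such methods, the dense sketch below reduces the saddle-point system to its Schur complement on the multiplier; in an actual FETI-style solver $A$ is block-diagonal across subdomains, $A^{-1}$ is applied by local solvers, and $S$ is preconditioned rather than formed explicitly.

```python
import numpy as np

def solve_saddle_point(A, B, f, g):
    """Solve [[A, B.T], [B, 0]] [u; lam] = [f; g] via the dual Schur
    complement S = B A^{-1} B.T acting on the Lagrange multiplier."""
    Ainv_f = np.linalg.solve(A, f)
    Ainv_Bt = np.linalg.solve(A, B.T)
    S = B @ Ainv_Bt                              # Schur complement
    lam = np.linalg.solve(S, B @ Ainv_f - g)     # multiplier solve
    u = Ainv_f - Ainv_Bt @ lam                   # recover the primal unknowns
    return u, lam
```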

The arrival of AI techniques in computation, with the potential for hallucinations and non-robustness, has made trustworthiness of algorithms a focal point. However, the trustworthiness of many classical approaches is not well understood. This is the case for feature selection, a classical problem in the sciences, statistics, machine learning, etc., where the LASSO optimisation problem is standard. Despite its widespread use, it has not been established when the output of algorithms attempting to compute support sets of minimisers of LASSO, in order to do feature selection, can be trusted. In this paper we establish that no (randomised) algorithm that works on all inputs can determine the correct support sets (with probability $> 1/2$) of minimisers of LASSO when reading approximate input, regardless of precision and computing power. However, we define a LASSO condition number and design an efficient algorithm that, provided the input data is well-posed (has finite condition number), computes these support sets in time polynomial in the dimensions and in the logarithm of the condition number. For ill-posed inputs the algorithm runs forever; hence, it never produces a wrong answer. Furthermore, the algorithm computes an upper bound on the condition number whenever this is finite. Finally, for any algorithm defined on an open set containing a point with infinite condition number, there is an input for which the algorithm will either run forever or produce a wrong answer. Our impossibility results stem from generalised hardness of approximation -- within the Solvability Complexity Index (SCI) hierarchy framework -- which generalises the classical phenomenon of hardness of approximation.
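For context, a standard way to compute a candidate support set is proximal gradient descent (ISTA) followed by thresholding, as sketched below; the paper's point is precisely that, on approximate input, no such procedure can always be trusted without extra information such as their condition number.

```python
import numpy as np

def lasso_support(A, y, lam, tol=1e-8, iters=50_000):
    """Candidate support of a minimiser of 0.5*||Ax - y||^2 + lam*||x||_1,
    computed with ISTA; the threshold tol is an ad hoc choice, which is
    exactly the kind of decision the paper shows cannot be certified in general."""
    L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L            # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return np.flatnonzero(np.abs(x) > tol)
```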

A common approach to analysing count time series is to fit models based on random sum operators. As an alternative, this paper introduces time series models based on a random multiplication operator, which is simply the multiplication of a variable operand by an integer-valued random coefficient whose mean is the constant operand. This operation is embedded into autoregressive-like models with integer-valued random inputs, referred to as RMINAR. Two special variants are studied, namely the N0-valued random coefficient auto-regressive model and the N0-valued random coefficient multiplicative error model. Furthermore, Z-valued extensions are considered. The dynamic structure of the proposed models is studied in detail. In particular, their solutions are everywhere strictly stationary and ergodic, a fact that is uncommon both in the literature on integer-valued time series models and in that on real-valued random coefficient auto-regressive models. The parameters of the RMINAR model are then estimated using a four-stage weighted least squares estimator, with consistency and asymptotic normality established everywhere in the parameter space. Finally, the new RMINAR models are illustrated with simulated and empirical examples.
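A minimal simulation sketch of an N0-valued RMINAR(1) recursion is given below; the Poisson choices for the random coefficient and the innovation are illustrative assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_rminar(alpha, eps_mean, T, x0=1):
    """Simulate X_t = M_t * X_{t-1} + eps_t, where M_t is an integer-valued
    random coefficient with mean alpha (the random multiplication operator)
    and eps_t is an N0-valued innovation; Poisson laws are used for both here."""
    x = np.empty(T, dtype=np.int64)
    x[0] = x0
    for t in range(1, T):
        m = rng.poisson(alpha)            # E[M_t] = alpha, the constant operand
        x[t] = m * x[t - 1] + rng.poisson(eps_mean)
    return x

path = simulate_rminar(alpha=0.6, eps_mean=2.0, T=500)
```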

This paper is dedicated to the numerical solution of a fourth-order singular perturbation problem using the interior penalty virtual element method (IPVEM) proposed in [42]. The study introduces modifications to the jumps and averages in the penalty term and presents an automated mesh-dependent selection of the penalty parameter. Drawing inspiration from the modified Morley finite element methods, we leverage the conforming interpolation technique to handle the lower-order part of the bilinear form. Through our analysis, we establish optimal convergence in the energy norm and provide a rigorous proof of uniform convergence with respect to the perturbation parameter in the lowest-order case.
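For orientation only, interior penalty methods for fourth-order problems typically weight the jump terms by a mesh-dependent factor of the generic form sketched below; this simple sigma/h scaling is a common default and does not reproduce the paper's automated selection.

```python
def ip_penalties(edge_lengths, sigma=10.0):
    """Generic mesh-dependent interior-penalty weights sigma / h_e, one per
    edge; sigma is a user-chosen constant here, whereas the IPVEM paper
    selects the penalty automatically from mesh-dependent quantities."""
    return {e: sigma / h for e, h in edge_lengths.items()}
```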

The aim of this paper is to give a systematic mathematical interpretation of the diffusion problem on which Graph Neural Network (GNN) models are based. The starting point of our approach is a dissipative functional leading to dynamical equations that allow us to study the symmetries of the model. We discuss the conserved charges and provide a charge-preserving numerical method for solving the dynamical equations. As in any dynamical system, so also in GRAph Neural Diffusion (GRAND), knowing the charge values and their conservation along the evolution flow could provide a way to understand how GNNs and other networks learn.
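To illustrate what charge preservation can mean here, the sketch below integrates the linear core of GRAND-style diffusion, dX/dt = (A - I)X, with explicit Euler; uniform weights stand in for learned attention, and every node is assumed to have at least one out-edge so that A is column-stochastic, which makes the total mass sum(X) a conserved charge of both the flow and the discrete steps.

```python
import numpy as np

def grand_diffusion(X, edges, n, steps=100, dt=0.1):
    """Explicit-Euler integration of dX/dt = (A - I) X on a directed graph.
    A column-stochastic A gives 1^T (A - I) = 0, so the 'charge' sum(X)
    is conserved both in continuous time and by each Euler step."""
    A = np.zeros((n, n))
    for u, v in edges:
        A[v, u] = 1.0                    # uniform attention as a stand-in
    A /= A.sum(axis=0, keepdims=True)    # assumes every column is nonzero
    for _ in range(steps):
        X = X + dt * (A @ X - X)         # one charge-preserving Euler step
    return X
```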

Spatial data can come in a variety of different forms, but two of the most common generating models for such observations are random fields and point processes. Whilst it is known that spectral analysis can unify these two data forms, specific methodology for the related estimation has yet to be developed. In this paper, we solve this problem by extending multitaper estimation to estimate the spectral density matrix function for multivariate spatial data, where the processes can be any combination of point processes and random fields. We discuss finite-sample and asymptotic theory for the proposed estimators, as well as specific details of the implementation, including how to perform estimation on non-rectangular domains and how to correctly implement multitapering for processes sampled in different ways, e.g. continuously versus on a regular grid.
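As a small point of reference, the classical multitaper recipe on a regular 1-D grid is sketched below with sine tapers; the paper's estimators generalise this to multivariate combinations of point processes and random fields on general spatial domains, which this toy version does not cover.

```python
import numpy as np

def multitaper_psd_1d(x, K=5):
    """Average of K sine-tapered periodograms for a regularly sampled series;
    the sine tapers sqrt(2/(n+1)) * sin(pi*k*t/(n+1)) are orthonormal."""
    n = x.size
    t = np.arange(1, n + 1)
    psd = np.zeros(n)
    for k in range(1, K + 1):
        taper = np.sqrt(2 / (n + 1)) * np.sin(np.pi * k * t / (n + 1))
        psd += np.abs(np.fft.fft(taper * x)) ** 2 / K
    return np.fft.fftfreq(n), psd
```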

In large-scale systems there are fundamental challenges when centralised techniques are used for task allocation: the number of interactions is limited by resource constraints such as limits on computation, storage, and network communication. We can increase scalability by implementing the system as a distributed task-allocation system, sharing tasks across many agents. However, this also increases the resource cost of communication and synchronisation, which itself is difficult to scale. In this paper we present four algorithms to solve these problems. In combination, they enable each agent to improve its task-allocation strategy through reinforcement learning, while adjusting how much it explores the system in response to how close to optimal it believes its current strategy to be, given its past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource-usage limits, restricting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks. We also simulate real-life system effects such as networking instability. Our solution is shown to solve the task-allocation problem to within 6.7% of the theoretical optimum within the system configurations considered. It provides 5x better performance recovery than approaches without knowledge retention when system connectivity is impacted, and is tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
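A toy sketch of the confidence-driven exploration idea is given below: a bandit-style learner whose exploration rate grows when recent outcomes fall short of its best observed outcome; the class and parameter names are hypothetical stand-ins, not the paper's four algorithms.

```python
import random

class TaskAgent:
    """Learns which peer to delegate a subtask to, exploring more when
    recent rewards lag the best reward seen so far."""
    def __init__(self, peers, alpha=0.1):
        self.q = {p: 0.0 for p in peers}         # estimated value per peer
        self.alpha = alpha                       # learning rate
        self.eps, self.best = 1.0, 0.0           # exploration rate, best reward

    def choose_peer(self):
        if random.random() < self.eps:
            return random.choice(list(self.q))   # explore
        return max(self.q, key=self.q.get)       # exploit current strategy

    def update(self, peer, reward):
        self.q[peer] += self.alpha * (reward - self.q[peer])
        self.best = max(self.best, reward)
        # explore in proportion to how far this outcome lags the best seen
        gap = (self.best - reward) / (abs(self.best) + 1e-9)
        self.eps = min(1.0, max(0.05, gap))
```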

Graph representation learning for hypergraphs can be used to extract patterns among the higher-order interactions that are critically important in many real-world problems. Current approaches designed for hypergraphs, however, are unable to handle different types of hypergraphs and are typically not generic across learning tasks. Indeed, models that can predict variable-sized heterogeneous hyperedges have not been available. Here we develop a new self-attention-based graph neural network called Hyper-SAGNN, applicable to homogeneous and heterogeneous hypergraphs with variable hyperedge sizes. We perform extensive evaluations on multiple datasets, including four benchmark network datasets and two single-cell Hi-C datasets in genomics. We demonstrate that Hyper-SAGNN significantly outperforms state-of-the-art methods on traditional tasks while also achieving strong performance on a new task called outsider identification. Hyper-SAGNN will be useful for graph representation learning to uncover complex higher-order interactions in different applications.
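A minimal sketch of the core scoring idea, as we read it, is below: each node in a candidate tuple gets a static embedding and a dynamic embedding produced by self-attention over the tuple, and the tuple is scored by how far the two disagree; the layer sizes, activations, and use of a stock attention module are illustrative simplifications of the actual architecture.

```python
import torch
import torch.nn as nn

class HyperSAGNNSketch(nn.Module):
    """Score a variable-sized node tuple as a putative hyperedge by
    comparing static (per-node) and dynamic (tuple-aware) embeddings."""
    def __init__(self, in_dim, hid=64):
        super().__init__()
        self.static = nn.Linear(in_dim, hid)
        self.proj = nn.Linear(in_dim, hid)
        self.attn = nn.MultiheadAttention(hid, num_heads=4, batch_first=True)
        self.out = nn.Linear(hid, 1)

    def forward(self, x):                        # x: (batch, tuple_size, in_dim)
        s = torch.tanh(self.static(x))           # static embeddings
        h = torch.tanh(self.proj(x))
        d, _ = self.attn(h, h, h)                # dynamic embeddings via self-attention
        score = self.out((d - s) ** 2).mean(dim=1)  # per-node scores, averaged
        return torch.sigmoid(score)              # probability of being a hyperedge
```

Because the attention pools over whatever tuple length it is given, the same module handles variable hyperedge sizes.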
