
In the metric distortion problem there is a set of candidates and a set of voters, all residing in the same metric space. The objective is to choose a candidate with minimum social cost, defined as the total distance of the chosen candidate from all voters. The challenge is that the algorithm receives only ordinal input from each voter, in the form of a list of candidates ranked in non-decreasing order of their distances from her, whereas the objective function is cardinal. The distortion of an algorithm is its worst-case approximation factor with respect to the optimal social cost. A series of papers culminated in a 3-distortion algorithm, which is tight with respect to all deterministic algorithms. Aiming to overcome the limitations of worst-case analysis, we revisit the metric distortion problem through the learning-augmented framework, in which the algorithm is provided with some prediction regarding the optimal candidate. The quality of this prediction is unknown, and the goal is to evaluate the performance of the algorithm under an accurate prediction (known as consistency), while simultaneously providing worst-case guarantees even for arbitrarily inaccurate predictions (known as robustness). For our main result, we characterize the robustness-consistency Pareto frontier for the metric distortion problem. We first identify an inevitable trade-off between robustness and consistency. We then devise a family of learning-augmented algorithms that achieves any desired robustness-consistency pair on this Pareto frontier. Furthermore, we provide a more refined analysis of the distortion bounds as a function of the prediction error (with consistency and robustness being the two extremes). Finally, we also prove distortion bounds that integrate the notion of $\alpha$-decisiveness, which quantifies the extent to which a voter prefers her favorite candidate over the rest.
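To make the objective concrete, here is a small sketch (illustrative only; the function names and the toy instance are ours, not the paper's) that computes each candidate's social cost from a voter-candidate distance matrix and the resulting distortion of a chosen candidate. Note that an algorithm for the problem sees only the voters' rankings, never these distances; the matrix is available only to the analysis.

```python
import numpy as np

def social_cost(dist, c):
    """Total distance of candidate c from all voters; dist[v, c] is the
    metric distance between voter v and candidate c."""
    return dist[:, c].sum()

def distortion(dist, chosen):
    """Ratio of the chosen candidate's social cost to the optimal social cost."""
    costs = dist.sum(axis=0)           # social cost of every candidate
    return costs[chosen] / costs.min()

# Toy instance: 3 voters and 2 candidates on the real line (a metric space).
dist = np.array([[0.0, 1.0],
                 [0.0, 1.0],
                 [2.0, 1.0]])
print(distortion(dist, chosen=1))      # 3.0 / 2.0 = 1.5
```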

Related content

Neural temporal point processes (TPPs) have shown promise for modeling continuous-time event sequences. However, capturing the interactions between events is challenging yet critical for performing inference tasks, such as forecasting, on event sequence data. Existing TPP models have focused on parameterizing the conditional distribution of future events but struggle to model event interactions. In this paper, we propose a novel approach that leverages Neural Relational Inference (NRI) to learn a relation graph that infers interactions while simultaneously learning the dynamics patterns from observational data. Our approach, the Contrastive Relational Inference-based Hawkes Process (CRIHP), reasons about event interactions under a variational inference framework. It utilizes intensity-based learning to search for prototype paths that contrast relationship constraints. Extensive experiments on three real-world datasets demonstrate the effectiveness of our model in capturing event interactions for event sequence modeling tasks.
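For background, the notion of events exciting future events is easiest to see in a classical (non-neural) Hawkes process; the sketch below is standard textbook material, not the CRIHP architecture, whose intensity is parameterized by a neural network.

```python
import numpy as np

def hawkes_intensity(t, events, mu=0.5, alpha=0.8, beta=1.0):
    """Classical exponentially decaying Hawkes intensity:
    lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)),
    so each past event raises the arrival rate of future ones."""
    past = events[events < t]
    return mu + alpha * np.exp(-beta * (t - past)).sum()

events = np.array([0.5, 1.2, 3.0])     # observed event times
print(hawkes_intensity(4.0, events))   # intensity just after the sequence
```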

The reconstruction of electrical excitation patterns through the unobserved depth of the tissue is essential to realizing the potential of computational models in cardiac medicine. We have utilized experimental optical-mapping recordings of cardiac electrical excitation on the epicardial and endocardial surfaces of a canine ventricle as observations directing a local ensemble transform Kalman filter (LETKF) data assimilation scheme. We demonstrate that the inclusion of explicit information about the stimulation protocol can marginally improve the confidence of the ensemble reconstruction and the reliability of the assimilation over time. Likewise, we consider the efficacy of stochastic modeling additions to the assimilation scheme in the context of experimentally derived observation sets. Approximation error is addressed at both the observation and modeling stages, through the uncertainty of observations and the specification of the model used in the assimilation ensemble. We find that perturbative modifications to the observations have marginal to deleterious effects on the accuracy and robustness of the state reconstruction. Further, we find that incorporating additional information from the observations into the model itself (in the case of stimulus and stochastic currents) yields only a marginal improvement in reconstruction accuracy over a fully autonomous model, while complicating the model itself and thus introducing potential for new types of model error. That the inclusion of explicit modeling information has negligible to negative effects on the reconstruction implies the need for new avenues for optimization of data assimilation schemes applied to cardiac electrical excitation.
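As a rough illustration of the assimilation step, the following is a minimal stochastic ensemble Kalman filter analysis update, a simpler relative of the LETKF used in the paper (the localisation and ensemble-transform machinery are omitted, and all names are ours).

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ensemble, y, H, obs_var):
    """One stochastic EnKF analysis step with perturbed observations.
    ensemble: (n_members, n_state); y: (n_obs,); H: (n_obs, n_state)."""
    X = ensemble
    HX = X @ H.T                                 # members in observation space
    A = X - X.mean(axis=0)                       # state anomalies
    HA = HX - HX.mean(axis=0)                    # observation-space anomalies
    n = X.shape[0]
    S = HA.T @ HA / (n - 1) + obs_var * np.eye(H.shape[0])
    K = (A.T @ HA / (n - 1)) @ np.linalg.inv(S)  # Kalman gain
    Y = y + rng.normal(0.0, np.sqrt(obs_var), size=HX.shape)  # perturbed obs
    return X + (Y - HX) @ K.T
```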

Many physical systems are governed by ordinary or partial differential equations (see, for example, the chapters "Differential Equations" and "System of Differential Equations"). Typically the solutions of such systems are functions of time or of a single space variable (in the case of ODEs), or they depend on multidimensional space coordinates or on space and time (in the case of PDEs). In some cases, the solutions may depend on several time or space scales. Examples include the damped harmonic oscillator in the two extreme cases of very small or very large damping; the cardiovascular system, where the thickness of the arteries and veins varies from centimeters to microns; the shallow water equations, which are valid when the water depth is small compared to the typical wavelength of surface waves; and sorption kinetics, in which the range of interaction of a surfactant with an air bubble is much smaller than the size of the bubble itself. In all such cases a detailed simulation of the models which resolves all space or time scales is often inefficient or intractable, and usually even unnecessary to provide a reasonable description of the behavior of the system. In the chapter "Multiscale modeling with differential equations" we present examples of systems described by ODEs and PDEs which are intrinsically multiscale, and illustrate how suitable modeling provides an effective way to capture the essential behavior of the solutions of such systems without resolving the small scales.
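The damped harmonic oscillator example can be made concrete. In nondimensional form, x'' + 2ζx' + x = 0; very small and very large damping ζ produce widely separated time scales, and a stiff-aware integrator is the pragmatic way to resolve both regimes. A minimal sketch (our notation; SciPy's Radau method is one standard implicit choice):

```python
import numpy as np
from scipy.integrate import solve_ivp

def damped_oscillator(t, y, zeta):
    """y = [x, v] for the nondimensional oscillator x'' + 2*zeta*x' + x = 0."""
    x, v = y
    return [v, -2.0 * zeta * v - x]

for zeta in (0.01, 100.0):  # very small vs. very large damping
    sol = solve_ivp(damped_oscillator, (0.0, 50.0), [1.0, 0.0],
                    args=(zeta,), method="Radau", rtol=1e-8)
    print(zeta, sol.y[0, -1])
```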

Collective perception is a foundational problem in swarm robotics, in which the swarm must reach consensus on a coherent representation of the environment. An important variant of collective perception casts it as a best-of-$n$ decision-making process, in which the swarm must identify the most likely representation out of a set of alternatives. Past work on this variant primarily focused on characterizing how different algorithms navigate the speed-vs-accuracy tradeoff in a scenario where the swarm must decide on the most frequent environmental feature. Crucially, past work on best-of-$n$ decision-making assumes the robot sensors to be perfect (noise- and fault-less), limiting the real-world applicability of these algorithms. In this paper, we derive from first principles an optimal, probabilistic framework for minimalistic swarm robots equipped with flawed sensors. We then validate our approach in a scenario where the swarm collectively decides on the frequency of a certain environmental feature, and study the speed and accuracy of the decision-making process with respect to several parameters of interest. Our approach can provide timely and accurate frequency estimates even in the presence of severe sensory noise.
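A toy version of the flawed-sensor idea (our own simplification, not the paper's framework): a robot whose binary sensor flips readings with a known probability can still form an exact posterior over the feature frequency.

```python
import numpy as np

def posterior_over_frequency(readings, flip_prob, grid=np.linspace(0, 1, 101)):
    """Grid posterior over a feature frequency f given binary readings that a
    flawed sensor flips with probability flip_prob, using
    P(read 1 | f) = f * (1 - flip_prob) + (1 - f) * flip_prob."""
    p1 = grid * (1 - flip_prob) + (1 - grid) * flip_prob
    log_post = np.zeros_like(grid)                 # flat prior
    for z in readings:
        log_post += np.log(p1 if z == 1 else 1 - p1)
    post = np.exp(log_post - log_post.max())
    return grid, post / post.sum()

grid, post = posterior_over_frequency([1, 0, 1, 1, 0, 1], flip_prob=0.1)
print(grid[np.argmax(post)])   # maximum a posteriori frequency estimate
```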

Discrete state spaces represent a major computational challenge to statistical inference, since the computation of normalisation constants requires summation over large or possibly infinite sets, which can be impractical. This paper addresses this computational challenge through the development of a novel generalised Bayesian inference procedure suitable for discrete intractable likelihoods. Inspired by recent methodological advances for continuous data, the main idea is to update beliefs about model parameters using a discrete Fisher divergence, in lieu of the problematic intractable likelihood. The result is a generalised posterior that can be sampled from using standard computational tools, such as Markov chain Monte Carlo, circumventing the intractable normalising constant. The statistical properties of the generalised posterior are analysed, with sufficient conditions for posterior consistency and asymptotic normality established. In addition, a novel and general approach to calibration of generalised posteriors is proposed. Applications are presented on lattice models for discrete spatial data and on multivariate models for count data, where in each case the methodology facilitates generalised Bayesian inference at low computational cost.
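Schematically, a generalised (Gibbs) posterior swaps the log-likelihood for a loss, π(θ | x) ∝ π(θ) exp(−β Σᵢ loss(θ, xᵢ)), and can be sampled with standard MCMC. The sketch below uses a stand-in squared-error loss; the paper's discrete Fisher divergence would take its place, and all names here are ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def gibbs_log_post(theta, data, loss, log_prior, beta=1.0):
    # Generalised posterior: prior updated through a loss, not a likelihood,
    # so no normalising constant of the model is ever evaluated.
    return log_prior(theta) - beta * sum(loss(theta, x) for x in data)

def metropolis(data, loss, log_prior, theta0=0.0, steps=5000, scale=0.5):
    """Random-walk Metropolis targeting the generalised posterior."""
    theta, lp = theta0, gibbs_log_post(theta0, data, loss, log_prior)
    samples = []
    for _ in range(steps):
        prop = theta + scale * rng.normal()
        lp_prop = gibbs_log_post(prop, data, loss, log_prior)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta)
    return np.array(samples)

data = rng.poisson(3.0, size=50)
loss = lambda th, x: (x - th) ** 2   # stand-in for the discrete Fisher divergence
log_prior = lambda th: -0.5 * th ** 2 / 100.0
print(metropolis(data, loss, log_prior, theta0=1.0)[1000:].mean())
```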

Discontinuous motion, i.e., motion composed of multiple continuous motions with sudden changes in direction or velocity in between, can be seen in state-aware robotic tasks. Such robotic tasks are often coordinated with sensor information such as images. In recent years, Dynamic Movement Primitives (DMPs), a method for generating motor behaviors suitable for robotics, have garnered several deep-learning-based improvements that allow associations between sensor information and DMP parameters. While the deep learning framework does remedy DMPs' inability to directly associate with an input, we found that it has difficulty learning DMP parameters for complex motions that require a large number of basis functions to reconstruct. In this paper we propose a novel deep learning network architecture called the Deep Segmented DMP Network (DSDNet), which generates variable-length segmented motion by combining a network that predicts multiple sets of DMP parameters, a double-stage decoder network, and a predictor of the number of segments. The proposed method is evaluated on both artificial data (object cutting & pick-and-place) and real data (object cutting), where it achieves high generalization capability, task achievement, and data efficiency compared to previous methods on generating discontinuous long-horizon motions.
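For reference, the primitive whose parameters DSDNet predicts is the classical discrete DMP. A minimal rollout (a standard Ijspeert-style formulation with our own parameter defaults, not the DSDNet code) shows why complex motions need many basis-function weights.

```python
import numpy as np

def dmp_rollout(y0, g, weights, tau=1.0, dt=0.01,
                alpha=25.0, beta=25.0 / 4.0, alpha_x=3.0):
    """Integrate one discrete DMP from start y0 to goal g. `weights` are the
    Gaussian basis-function weights of the forcing term; richer motions
    require more basis functions for accurate reconstruction."""
    n = len(weights)
    centers = np.exp(-alpha_x * np.linspace(0, 1, n))  # spaced in phase x
    widths = n ** 1.5 / centers
    y, z, x, path = y0, 0.0, 1.0, []
    for _ in range(int(1.0 / dt)):
        psi = np.exp(-widths * (x - centers) ** 2)
        f = x * (g - y0) * (psi @ weights) / (psi.sum() + 1e-10)
        z += dt / tau * (alpha * (beta * (g - y) - z) + f)  # transformation system
        y += dt / tau * z
        x += dt / tau * (-alpha_x * x)                      # canonical system
        path.append(y)
    return np.array(path)

print(dmp_rollout(0.0, 1.0, np.zeros(10))[-1])  # converges toward the goal
```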

The massively parallel computation (MPC) model requires sublinear time complexity, which can be achieved by breaking dynamic programs into a set of sparse dynamic programs that can be divided, solved, and merged in sublinear time. The rectangle escape problem (REP) is defined as follows: for $n$ axis-aligned rectangles inside an axis-aligned bounding box $B$, extend each rectangle in only one of the four directions (up, down, left, or right) until it reaches $B$, so that the density $k$ is minimized, where $k$ is the maximum number of extensions of rectangles to the boundary that pass through a point inside the bounding box $B$. REP is NP-hard for $k>1$. If the rectangles are points of a grid (or unit squares of a grid), the problem is called the square escape problem (SEP), and it is still NP-hard. We give a $2$-approximation algorithm for SEP with $k\geq2$ with time complexity $O(n^{3/2}k^2)$, which improves on the at-least-quadratic time complexity of existing algorithms. Moreover, the approximation ratio of our algorithm for $k\geq 3$ is $3/2$, which is tight. We also give an $8$-approximation algorithm for REP with time complexity $O(n\log n+nk)$, and an MPC version of this algorithm for $k=O(1)$, which is the first parallel algorithm for this problem.
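The density objective itself is straightforward to evaluate; the brute-force checker below (ours, not the approximation algorithm from the paper) computes $k$ for a given assignment of extension directions on an integer grid.

```python
import numpy as np

def density(rects, dirs, W, H):
    """Density k after extending each rectangle to the boundary of a W x H box
    in its chosen direction. Rectangles are (x1, y1, x2, y2) with half-open
    integer cell ranges; k counts overlapping extension strips only."""
    cover = np.zeros((H, W), dtype=int)
    for (x1, y1, x2, y2), d in zip(rects, dirs):
        if d == "up":
            cover[y2:H, x1:x2] += 1
        elif d == "down":
            cover[0:y1, x1:x2] += 1
        elif d == "left":
            cover[y1:y2, 0:x1] += 1
        elif d == "right":
            cover[y1:y2, x2:W] += 1
    return cover.max()

print(density([(1, 1, 2, 2), (3, 1, 4, 2)], ["up", "up"], 5, 5))  # k = 1
```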

Recently, a considerable literature has grown up around the theme of Graph Convolutional Networks (GCNs). How to effectively leverage the rich structural information in complex graphs, such as knowledge graphs with heterogeneous types of entities and relations, is a primary open challenge in the field. Most GCN methods are either restricted to graphs with a homogeneous type of edges (e.g., citation links only), or focus on representation learning for nodes only instead of jointly propagating and updating the embeddings of both nodes and edges for target-driven objectives. This paper addresses these limitations by proposing a novel framework, namely the Knowledge Embedding based Graph Convolutional Network (KE-GCN), which combines the power of GCNs in graph-based belief propagation and the strengths of advanced knowledge embedding (a.k.a. knowledge graph embedding) methods, and goes beyond. Our theoretical analysis shows that KE-GCN offers an elegant unification of several well-known GCN methods as specific cases, with a new perspective of graph convolution. Experimental results on benchmark datasets show the advantageous performance of KE-GCN over strong baseline methods in the tasks of knowledge graph alignment and entity classification.
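As the homogeneous base case that KE-GCN generalises, one vanilla GCN propagation step is shown below (a textbook sketch; the actual KE-GCN update additionally propagates and updates edge embeddings).

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN step: H = ReLU(D^{-1/2} (A + I) D^{-1/2} X W), i.e. each node
    averages its (self-looped, degree-normalised) neighbours' features and
    applies a learned linear map followed by a nonlinearity."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)
```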

Graph Neural Networks (GNNs) have proven to be useful for many different practical applications. However, many existing GNN models have implicitly assumed homophily among the nodes connected in the graph, and therefore have largely overlooked the important setting of heterophily, where most connected nodes are from different classes. In this work, we propose a novel framework called CPGNN that generalizes GNNs for graphs with either homophily or heterophily. The proposed framework incorporates an interpretable compatibility matrix for modeling the heterophily or homophily level in the graph, which can be learned in an end-to-end fashion, enabling it to go beyond the assumption of strong homophily. Theoretically, we show that replacing the compatibility matrix in our framework with the identity (which represents pure homophily) reduces our framework to GCN. Our extensive experiments demonstrate the effectiveness of our approach in more realistic and challenging experimental settings with significantly less training data compared to previous works: CPGNN variants achieve state-of-the-art results in heterophily settings with or without contextual node features, while maintaining comparable performance in homophily settings.
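A schematic of the compatibility-matrix idea (our simplification, not the exact CPGNN update rule): neighbours' class beliefs are mixed through a learned matrix, and choosing the identity for that matrix recovers plain homophilous averaging.

```python
import numpy as np

def compatibility_propagate(A, B, H_bar):
    """One propagation step of class beliefs B (n_nodes x n_classes) through a
    compatibility matrix H_bar, where H_bar[i, j] models how likely a class-i
    node is to neighbour a class-j node. H_bar = identity assumes homophily."""
    deg = A.sum(axis=1, keepdims=True)
    return (A @ B @ H_bar) / np.maximum(deg, 1.0)
```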

It is important to detect anomalous inputs when deploying machine learning systems. The use of larger and more complex inputs in deep learning magnifies the difficulty of distinguishing between anomalous and in-distribution examples. At the same time, diverse image and text data are available in enormous quantities. We propose leveraging these data to improve deep anomaly detection by training anomaly detectors against an auxiliary dataset of outliers, an approach we call Outlier Exposure (OE). This enables anomaly detectors to generalize and detect unseen anomalies. In extensive experiments on natural language processing and small- and large-scale vision tasks, we find that Outlier Exposure significantly improves detection performance. We also observe that cutting-edge generative models trained on CIFAR-10 may assign higher likelihoods to SVHN images than to CIFAR-10 images; we use OE to mitigate this issue. We also analyze the flexibility and robustness of Outlier Exposure, and identify characteristics of the auxiliary dataset that improve performance.
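In classification, the OE objective is typically the standard cross-entropy on in-distribution data plus a term that pushes the model's predictions on auxiliary outliers toward the uniform distribution. A NumPy sketch of that combined loss (our implementation, with `lam` as the mixing weight):

```python
import numpy as np

def oe_loss(logits_in, labels_in, logits_out, lam=0.5):
    """Cross-entropy on the in-distribution batch plus lam * cross-entropy
    between outlier predictions and the uniform distribution over classes."""
    def log_softmax(z):
        z = z - z.max(axis=1, keepdims=True)
        return z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ls_in = log_softmax(logits_in)
    ce_in = -ls_in[np.arange(len(labels_in)), labels_in].mean()
    ce_out = -log_softmax(logits_out).mean()   # CE to uniform over classes
    return ce_in + lam * ce_out
```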
