
We provide a deterministic algorithm for computing the $5$-edge-connected components of an undirected multigraph in linear time. Although there were good indications that this computation can be performed in linear time, no such algorithm was known prior to this work. Thus, our paper answers a theoretical question and sheds light on the possibility that a solution may exist for general $k$. A key component in our algorithm is an oracle for answering connectivity queries for pairs of vertices in the presence of at most four edge failures. Specifically, the oracle has size $O(n)$, it can be constructed in linear time, and it answers connectivity queries in the presence of at most four edge failures in worst-case constant time, where $n$ denotes the number of vertices of the graph. We note that this is a result of independent interest. Our paper can be considered a follow-up to recent work on computing the $4$-edge-connected components in linear time. However, in dealing with the computation of the $5$-edge-connected components, we are faced with unique challenges that do not appear when dealing with lower connectivity. The problem is that the $4$-edge cuts in $3$-edge-connected graphs are entangled in various complicated ways that make it difficult to organize them compactly. Here we provide a novel analysis of those cuts that reveals the existence of various interesting structures. These can be exploited to disentangle and collect only those cuts that are essential in computing the $5$-edge-connected components. This analysis may provide a clue for a general solution for the $k$-edge-connected components, or for other related graph connectivity problems.
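
To make the oracle's interface concrete, here is a naive baseline in Python that answers the same queries by explicit search: given a multigraph as an edge list and at most four failed edge indices, it checks whether two vertices remain connected. It runs in $O(n+m)$ time per query rather than the paper's worst-case constant time, so it is only a sketch of the query semantics, not of the paper's data structure.

```python
from collections import defaultdict, deque

def connected_under_failures(edges, failed, u, v):
    """Is u connected to v in the multigraph given by `edges` after the
    edges whose indices are in `failed` (at most four of them) are
    removed?  One BFS per query, so O(n + m) time rather than the
    paper's O(1); this only pins down what the oracle computes."""
    banned = set(failed)                       # failed edge *indices*
    adj = defaultdict(list)
    for i, (a, b) in enumerate(edges):
        adj[a].append((b, i))
        adj[b].append((a, i))
    seen, queue = {u}, deque([u])
    while queue:
        x = queue.popleft()
        for y, i in adj[x]:
            if i not in banned and y not in seen:
                seen.add(y)
                queue.append(y)
    return v in seen

# A 4-cycle survives one edge failure, but two failures on opposite
# sides separate the opposite vertices 0 and 2.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(connected_under_failures(cycle, [0], 0, 2))      # True
print(connected_under_failures(cycle, [0, 2], 0, 2))   # False
```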

Related content

We propose a new representation of functions in Sobolev spaces on an $N$-dimensional hyper-rectangle, expressing such functions in terms of their admissible derivatives, evaluated along lower boundaries of the domain. These boundary values are either finite-dimensional or exist in the space $L_{2}$ of square-integrable functions, free of the continuity constraints inherent to the Sobolev space. Moreover, we show that the map from this space of boundary values to the Sobolev space is given by an integral operator with polynomial kernel, and we prove that this map is invertible. Using this result, we propose a method for polynomial approximation of functions in Sobolev space, reconstructing such an approximation from polynomial projections of the boundary values. We prove that this approximation is optimal with respect to a discrete-continuous Sobolev norm, and show through numerical examples that it exhibits better convergence behavior than direct projection of the function. Finally, we show that this approach may also be adapted to use a basis of step functions, to construct accurate piecewise polynomial approximations that do not suffer from, e.g., the Gibbs phenomenon.
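
As a hedged one-dimensional illustration of the idea (the paper treats $N$-dimensional hyper-rectangles), the sketch below represents $f$ on $[0,1]$ by the boundary value $f(0)$ and the derivative $f' \in L_{2}$, projects the derivative onto Legendre polynomials, and reconstructs the approximation by integration; the test function and the degree are arbitrary choices, not taken from the paper.

```python
import numpy as np
from numpy.polynomial import legendre

def project_legendre(g, deg, npts=200):
    """L2 projection of g on [0, 1] onto Legendre polynomials of degree <= deg."""
    t, w = legendre.leggauss(npts)            # Gauss nodes/weights on [-1, 1]
    x = 0.5 * (t + 1.0)                       # map the quadrature rule to [0, 1]
    coeffs = []
    for k in range(deg + 1):
        Pk = legendre.Legendre.basis(k, domain=[0.0, 1.0])
        # Shifted-Legendre orthogonality: \int_0^1 Pk(x)^2 dx = 1 / (2k + 1).
        coeffs.append((2 * k + 1) * np.sum(0.5 * w * g(x) * Pk(x)))
    return legendre.Legendre(coeffs, domain=[0.0, 1.0])

f = lambda x: np.sin(3.0 * x) + x**2              # illustrative test function
fprime = lambda x: 3.0 * np.cos(3.0 * x) + 2.0 * x

p = project_legendre(fprime, deg=6)               # project the *derivative*, not f
P = p.integ(lbnd=0.0)                             # antiderivative vanishing at 0
F = lambda xs: P(xs) + f(0.0)                     # integral operator + boundary value

xs = np.linspace(0.0, 1.0, 7)
print(np.max(np.abs(F(xs) - f(xs))))              # small reconstruction error
```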

We study the problem of estimating the distribution of the return of a policy using an offline dataset that is not generated from the policy, i.e., distributional offline policy evaluation (OPE). We propose an algorithm called Fitted Likelihood Estimation (FLE), which performs a sequence of Maximum Likelihood Estimation (MLE) steps and has the flexibility of integrating any state-of-the-art probabilistic generative model, as long as it can be trained via MLE. FLE can be used for both finite-horizon and infinite-horizon discounted settings where rewards can be multi-dimensional vectors. Our theoretical results show that for finite-horizon and infinite-horizon discounted settings, FLE can learn distributions that are close to the ground truth under total variation distance and Wasserstein distance, respectively. Our theoretical results hold under the conditions that the offline data covers the test policy's traces and that the supervised-learning MLE procedures succeed. Experimentally, we demonstrate the performance of FLE with two generative models, Gaussian mixture models and diffusion models. For the multi-dimensional reward setting, FLE with diffusion models is capable of estimating the complicated distribution of the return of a test policy.
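
The sketch below is a minimal tabular instance of the FLE recursion under stated assumptions: per-state Gaussians fit by MLE stand in for the paper's generative models, the synthetic dataset is invented for the demo, and conditioning on actions is elided. At each step $t$, the regression target for a transition is the reward plus a sample drawn from the already-fitted step-$(t+1)$ model.

```python
import numpy as np

rng = np.random.default_rng(0)

H, S = 3, 4                                    # horizon, number of states
# Synthetic offline dataset: trajectories of (state, reward, next_state).
data = [[(int(rng.integers(S)), float(rng.normal(1.0, 0.5)), int(rng.integers(S)))
         for _ in range(H)] for _ in range(500)]

def make_sampler(params):
    # params: state -> (mean, std); returns a sampler for the fitted model.
    return lambda s: rng.normal(*params.get(s, (0.0, 1e-8)))

models = [None] * (H + 1)
models[H] = lambda s: 0.0                      # zero return past the horizon

for t in reversed(range(H)):
    targets = {s: [] for s in range(S)}
    for traj in data:
        s, r, s_next = traj[t]
        targets[s].append(r + models[t + 1](s_next))   # bootstrapped sample
    # Gaussian MLE per state: sample mean and (biased) sample std.
    params = {s: (float(np.mean(v)), float(np.std(v)) + 1e-8)
              for s, v in targets.items() if v}
    models[t] = make_sampler(params)

# Samples from the fitted return distribution for state 0 at time 0.
draws = [models[0](0) for _ in range(1000)]
print(np.mean(draws), np.std(draws))           # mean ~3.0 (3 steps of ~1.0 reward)
```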

We introduce a new approach for identifying and characterizing voids within two-dimensional (2D) point distributions through the integration of Delaunay triangulation and Voronoi diagrams, combined with a Minimal Distance Scoring algorithm. Our methodology begins with the computation of the convex hull vertices of the point cloud, followed by a systematic selection of optimal line segments, strategically chosen for their likelihood of intersecting internal void regions. We then utilize Delaunay triangulation in conjunction with Voronoi diagrams to ascertain the initial points for the construction of the maximal internal curve envelope, adopting a pseudo-recursive approach for higher-order void identification. In each iteration, the existing collection of maximal internal curve envelope points serves as a basis for identifying additional candidate points. This iterative process is inherently self-converging, ensuring progressive refinement of the void's shape with each successive computation cycle. The mathematical robustness of this method allows for efficient convergence to a stable solution, reflecting both the geometric intricacies and the topological characteristics of the voids within the point cloud. The resulting method aims to balance geometric accuracy with computational practicality, improving the understanding of void shapes within point clouds and suggesting a potential framework for exploring more complex, multi-dimensional data analysis.
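
The snippet below assembles the geometric ingredients the method combines, using `scipy.spatial` on a synthetic point cloud with a planted circular void; the Minimal Distance Scoring and the pseudo-recursive envelope refinement are the paper's contribution and are only gestured at here by ranking Voronoi vertices by their depth inside empty regions.

```python
import numpy as np
from scipy.spatial import ConvexHull, Delaunay, Voronoi

rng = np.random.default_rng(1)

# Synthetic 2D cloud with a circular void carved out of the center.
pts = rng.uniform(-1.0, 1.0, size=(800, 2))
pts = pts[np.linalg.norm(pts, axis=1) > 0.4]

hull = ConvexHull(pts)            # step 1: hull vertices of the cloud
tri = Delaunay(pts)               # Delaunay triangulation of the cloud
vor = Voronoi(pts)                # dual Voronoi diagram

# Voronoi vertices deep inside empty regions are natural seed candidates
# for a void envelope: rank them by distance to the nearest input point
# (larger distance = deeper inside a void).
inside = vor.vertices[np.all(np.abs(vor.vertices) < 1.0, axis=1)]
d = np.min(np.linalg.norm(inside[:, None, :] - pts[None, :, :], axis=2), axis=1)
seeds = inside[np.argsort(d)[::-1][:10]]          # ten deepest candidates

print("hull vertices:", len(hull.vertices))
print("top seed (near the void center):", seeds[0], "depth:", d.max())
```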

We consider estimation of a functional parameter of a realistically modeled data distribution based on independent and identically distributed observations. Suppose that the true function is defined as the minimizer of the expectation of a specified loss function over its parameter space. We are given $J$ estimators of the true function, which are viewed as a data-adaptive coordinate transformation for the true function. For any $J$-dimensional real-valued cadlag function with finite sectional variation norm, we define a candidate ensemble estimator as the mapping from the data into the composition of the cadlag function and the $J$ estimated functions. Using $V$-fold cross-validation, we define the cross-validated empirical risk of each cadlag-function-specific ensemble estimator. We then define the Meta Highly Adaptive Lasso Minimum Loss Estimator (M-HAL-MLE) as the cadlag function that minimizes this cross-validated empirical risk over all cadlag functions with a uniform bound on the sectional variation norm. For each of the $V$ training samples, this yields a composition of the M-HAL-MLE ensemble and the $J$ estimated functions trained on the training sample. We can estimate the true function with the average of these $V$ estimated functions, which we call the M-HAL super-learner. The M-HAL super-learner converges to the oracle estimator at a rate $n^{-2/3}$ (up to a $\log n$-factor) with respect to excess risk, where the oracle estimator minimizes the excess risk among all considered ensembles. The excess risk of the oracle estimator relative to the true function is generally of second order. Under weak conditions on the $J$ candidate estimators, target features of the undersmoothed M-HAL super-learner are asymptotically linear estimators of the corresponding target features of the true function, with influence curve either the efficient influence curve or, potentially, a super-efficient influence curve.
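
As a schematic illustration of the cross-validation structure only (not of HAL itself), the sketch below builds a super-learner from $J$ base estimators with $V$-fold cross-validation, with an ordinary least-squares meta-step standing in for the M-HAL-MLE over cadlag functions; the data and base learners are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(2)

n, V, J = 400, 5, 3
x = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, n)

def fit_base(j, xtr, ytr):
    # J candidate estimators: polynomial fits of increasing degree.
    return np.polynomial.Polynomial.fit(xtr, ytr, deg=j + 1)

folds = np.arange(n) % V
Z = np.zeros((n, J))                    # cross-validated base predictions
for v in range(V):
    tr, te = folds != v, folds == v
    for j in range(J):
        Z[te, j] = fit_base(j, x[tr], y[tr])(x[te])

# Meta-step: the composition minimizing cross-validated empirical risk
# (here a linear composition fit by least squares, in place of HAL).
w, *_ = np.linalg.lstsq(Z, y, rcond=None)

# Final super-learner: the learned composition applied to base learners
# refit on all the data.
bases = [fit_base(j, x, y) for j in range(J)]
predict = lambda t: sum(wj * b(t) for wj, b in zip(w, bases))
print("CV risk of the ensemble:", np.mean((Z @ w - y) ** 2))
```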

Earth system forecasting has traditionally relied on complex physical models that are computationally expensive and require significant domain expertise. In the past decade, the unprecedented increase in spatiotemporal Earth observation data has enabled data-driven forecasting models using deep learning techniques. These models have shown promise for diverse Earth system forecasting tasks, but they either struggle with handling uncertainty or neglect domain-specific prior knowledge; as a result, they average possible futures into blurred forecasts or generate physically implausible predictions. To address these limitations, we propose a two-stage pipeline for probabilistic spatiotemporal forecasting: 1) we develop PreDiff, a conditional latent diffusion model capable of producing probabilistic forecasts; 2) we incorporate an explicit knowledge alignment mechanism to align forecasts with domain-specific physical constraints. This is achieved by estimating the deviation from imposed constraints at each denoising step and adjusting the transition distribution accordingly. We conduct empirical studies on two datasets: N-body MNIST, a synthetic dataset with chaotic behavior, and SEVIR, a real-world precipitation nowcasting dataset. Specifically, we impose the law of conservation of energy in N-body MNIST and anticipated precipitation intensity in SEVIR. Experiments demonstrate the effectiveness of PreDiff in handling uncertainty, incorporating domain-specific prior knowledge, and generating forecasts that exhibit high operational utility.
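
The toy sketch below illustrates the knowledge-alignment idea in isolation: at each reverse step a placeholder denoiser is applied, the violation of an assumed constraint is estimated (the sample mean pinned to a target value, a stand-in for a conservation law), and the transition is shifted down the violation's gradient. None of the components are PreDiff's trained model; everything here is an assumption for the demo.

```python
import torch

torch.manual_seed(0)

def denoise_step(x, t):
    # Placeholder reverse-diffusion transition: shrink toward zero
    # and inject a little noise (a real model would predict the mean).
    return 0.95 * x + 0.05 * torch.randn_like(x)

def constraint_violation(x, target=1.0):
    # Stand-in domain constraint: the sample mean should equal `target`
    # (playing the role of, e.g., conservation of energy).
    return (x.mean() - target) ** 2

x = torch.randn(64)                            # start from pure noise
guidance_scale = 10.0                          # arbitrary guidance strength
for t in reversed(range(50)):
    x = denoise_step(x, t)
    x = x.detach().requires_grad_(True)
    constraint_violation(x).backward()         # deviation from the constraint
    x = (x - guidance_scale * x.grad).detach() # adjust the transition

print("final mean, pulled toward the target 1.0:", round(x.mean().item(), 3))
```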

Rough set theory is a well-known mathematical framework that can deal with inconsistent data by providing lower and upper approximations of concepts. A prominent property of these approximations is their granular representation: that is, they can be written as unions of simple sets, called granules. The latter can be identified with "if..., then..." rules, which form the backbone of rough set rule induction. It has been shown previously that this property can be maintained for various fuzzy rough set models, including those based on ordered weighted average (OWA) operators. In this paper, we focus on some instances of the general class of fuzzy quantifier-based fuzzy rough sets (FQFRS). In these models, the lower and upper approximations are evaluated using binary and unary fuzzy quantifiers, respectively. One of the main targets of this study is to examine the granular representation of different models of FQFRS. The main findings reveal that Choquet-based fuzzy rough sets can be represented granularly under the same conditions as OWA-based fuzzy rough sets, whereas Sugeno-based fuzzy rough sets can always be represented granularly. This observation highlights the potential of these models for resolving data inconsistencies and managing noise.
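
For concreteness, the sketch below evaluates OWA-based lower and upper approximations numerically; the relation, the concept, and the weight vectors are invented, and the snippet illustrates the operators themselves rather than proving anything about their granular decomposition.

```python
import numpy as np

def owa(values, weights):
    # OWA: weighted sum of the values sorted in decreasing order.
    return float(np.sort(values)[::-1] @ weights)

def lower_approx(R, A, w):
    # (R v A)(x) = OWA_w over y of the Kleene-Dienes implication
    # I(R(x, y), A(y)), with I(a, b) = max(1 - a, b).
    return np.array([owa(np.maximum(1.0 - R[x], A), w) for x in range(len(A))])

def upper_approx(R, A, w):
    # (R ^ A)(x) = OWA_w over y of the t-norm T(R(x, y), A(y)), T = min.
    return np.array([owa(np.minimum(R[x], A), w) for x in range(len(A))])

R = np.array([[1.0, 0.8, 0.3, 0.1],
              [0.8, 1.0, 0.5, 0.2],
              [0.3, 0.5, 1.0, 0.7],
              [0.1, 0.2, 0.7, 1.0]])       # fuzzy similarity relation
A = np.array([0.9, 0.8, 0.3, 0.1])         # fuzzy concept

w_low = np.array([0.0, 0.1, 0.3, 0.6])     # soft "min": mass on small values
w_upp = np.array([0.6, 0.3, 0.1, 0.0])     # soft "max": mass on large values
print("lower:", lower_approx(R, A, w_low))
print("upper:", upper_approx(R, A, w_upp))
```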

We devise a deterministic algorithm for minimum Steiner cut that uses polylogarithmically many maximum-flow calls and near-linear time outside of these calls. This improves on the algorithm of Li and Panigrahi (FOCS 2020), which takes $O(m^{1+\epsilon})$ time outside of its maximum-flow calls. Our algorithm thus shows that deterministic minimum Steiner cut can be solved in maximum-flow time up to polylogarithmic factors, given any black-box deterministic maximum-flow algorithm. Our main technical contribution is a novel deterministic graph decomposition method for terminal vertices, which generalizes all existing $s$-strong partitioning methods and may have future applications.
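
For context, the classical baseline the paper improves on fits in a few lines: a minimum Steiner cut must separate a fixed terminal from some other terminal, so $|T|-1$ max-flow (min-cut) calls suffice. The sketch below implements this baseline with `networkx`; the toy graph is illustrative, and the paper's contribution is reducing the number of calls to polylogarithmic.

```python
import networkx as nx

def min_steiner_cut(G, terminals, capacity="capacity"):
    """Baseline: min over t in T \\ {t0} of the min t0-t cut, using one
    max-flow call per terminal (the paper needs only polylog calls)."""
    t0, rest = terminals[0], terminals[1:]
    best = None
    for t in rest:
        value, (S, _) = nx.minimum_cut(G, t0, t, capacity=capacity)
        if best is None or value < best[0]:
            best = (value, S)
    return best   # (cut value, side containing t0)

G = nx.Graph()
for u, v, c in [(0, 1, 3), (1, 2, 1), (2, 3, 3), (3, 0, 1), (1, 3, 1)]:
    G.add_edge(u, v, capacity=c)

value, side = min_steiner_cut(G, terminals=[0, 2])
print(value, sorted(side))   # cut of value 3 separating terminals 0 and 2
```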

Join-preserving maps on the discrete time scale $\omega^+$, referred to as time warps, have been proposed as graded modalities that can be used to quantify the growth of information in the course of program execution. The set of time warps forms a simple distributive involutive residuated lattice -- called the time warp algebra -- that is equipped with residual operations relevant to potential applications. In this paper, we show that although the time warp algebra generates a variety that lacks the finite model property, it nevertheless has a decidable equational theory. We also describe an implementation of a procedure for deciding equations in this algebra, written in the OCaml programming language, that makes use of the Z3 theorem prover.
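
As a toy sanity check (not the paper's OCaml/Z3 decision procedure), one can restrict time warps to a finite chain, where the join-preserving self-maps are exactly the monotone maps fixing $0$, and verify equations by brute force; the sketch below checks one family of equations that hold in the time warp algebra, namely that composition distributes over pointwise join on both sides.

```python
from itertools import combinations_with_replacement

N = 3
chain = range(N + 1)

# All join-preserving self-maps of the chain {0, ..., N}: monotone maps
# with f(0) = 0, encoded as tuples of values.
warps = [(0,) + rest for rest in combinations_with_replacement(chain, N)]

compose = lambda f, g: tuple(f[g[x]] for x in chain)        # f after g
join = lambda f, g: tuple(max(a, b) for a, b in zip(f, g))  # pointwise sup

for f in warps:
    for g in warps:
        for h in warps:
            assert compose(join(f, g), h) == join(compose(f, h), compose(g, h))
            assert compose(h, join(f, g)) == join(compose(h, f), compose(h, g))
print(f"checked distributivity for all {len(warps)}^3 triples")
```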

The existence of representative datasets is a prerequisite for many successful artificial intelligence and machine learning models. However, the subsequent application of these models often involves scenarios that are inadequately represented in the data used for training. The reasons for this are manifold and range from time and cost constraints to ethical considerations. As a consequence, the reliable use of these models, especially in safety-critical applications, remains a major challenge. Leveraging additional, already existing sources of knowledge is key to overcoming the limitations of purely data-driven approaches, and eventually to increasing the generalization capability of these models. Furthermore, predictions that conform with knowledge are crucial for making trustworthy and safe decisions, even in underrepresented scenarios. This work provides an overview of existing techniques and methods in the literature that combine data-driven models with existing knowledge. The identified approaches are structured according to the categories of integration, extraction, and conformity. Special attention is given to applications in the field of autonomous driving.

As soon as abstract mathematical computations were adapted to computation on digital computers, the problem of efficient representation, manipulation, and communication of the numerical values in those computations arose. Strongly related to the problem of numerical representation is the problem of quantization: in what manner should a set of continuous real-valued numbers be distributed over a fixed discrete set of numbers so as to minimize the number of bits required while also maximizing the accuracy of the attendant computations? This perennial problem of quantization is particularly relevant whenever memory and/or computational resources are severely restricted, and it has come to the forefront in recent years due to the remarkable performance of neural network models in computer vision, natural language processing, and related areas. Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16; in practice, reductions of 4x to 8x are often realized in these applications. Thus, it is not surprising that quantization has emerged recently as an important and very active sub-area of research in the efficient implementation of computations associated with neural networks. In this article, we survey approaches to the problem of quantizing the numerical values in deep neural network computations, covering the advantages and disadvantages of current methods. With this survey and its organization, we hope to have presented a useful snapshot of current research in quantization for neural networks and to have provided an organization that eases the evaluation of future research in this area.
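
The basic scheme most surveyed methods build on, uniform affine quantization, fits in a few lines; the sketch below quantizes a synthetic weight tensor to 4-bit integers using min/max calibration (one simple choice among the many alternatives the survey covers) and measures the round-trip error.

```python
import numpy as np

def quantize(x, bits=4):
    # Uniform affine quantization: q = round(x / scale) + zero_point,
    # with scale and zero_point calibrated from the observed min/max.
    qmin, qmax = 0, 2**bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = round(-x.min() / scale)
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int32)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    # Recover approximate real values: x_hat = scale * (q - zero_point).
    return scale * (q.astype(np.float64) - zero_point)

rng = np.random.default_rng(3)
w = rng.normal(0.0, 1.0, 10_000)              # stand-in for a weight tensor
q, s, z = quantize(w, bits=4)
err = np.abs(dequantize(q, s, z) - w)
print(f"int4: mean |error| = {err.mean():.4f}, max |error| = {err.max():.4f}")
```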
