Delaunay triangulation (DT) is an important geometric structure used in many areas, such as computer vision, terrain modeling, spatial clustering, and networking. Kinetic data structures have become important in computational geometry for dealing with moving objects; however, maintaining a dynamically changing Delaunay triangulation of moving points is challenging, since the triangulation must be updated repeatedly, and if the points move far enough it is better to rebuild it. One approach to handling moving points is an incremental algorithm: when the points move slowly, updating incrementally is faster than rebuilding. Furthermore, sequential algorithms can be computationally expensive for large datasets, so parallelism is a natural way to speed up the computation. In this paper, we propose a parallel algorithm for moving points: the dataset is divided into equal partitions, and each partition is assigned to one block. After each time step, every block restores the Delaunay constraints using delete and insert operations. We show that this algorithm is faster than serial algorithms.
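To make the partition-per-block structure concrete, here is a minimal Python sketch; it is an illustration, not the paper's implementation. Since SciPy offers no point deletion, a full local rebuild stands in for the per-block delete/insert updates described above, and the stitching of triangulations across partition boundaries is omitted.

import numpy as np
from scipy.spatial import Delaunay

def step(points, velocities, dt=0.01):
    # Advance every point by one time step.
    return points + dt * velocities

def update_blocks(points, n_blocks=4):
    # Split the dataset into equal partitions and recompute one local
    # Delaunay triangulation per block (stand-in for delete/insert repairs).
    blocks = np.array_split(points, n_blocks)
    return [Delaunay(block) for block in blocks]

rng = np.random.default_rng(0)
pts = rng.random((4000, 2))
vel = 0.1 * (rng.random((4000, 2)) - 0.5)    # slowly moving points
for _ in range(5):                           # a few time steps
    pts = step(pts, vel)
    triangulations = update_blocks(pts)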
Approximated forms of the RII and RIII redistribution matrices are frequently applied to simplify the numerical solution of the radiative transfer problem for polarized radiation, taking partial frequency redistribution (PRD) effects into account. A widely used approximation for RIII is to consider its expression under the assumption of complete frequency redistribution (CRD) in the observer frame (RIII CRD). The adequacy of this approximation for modeling intensity profiles has been firmly established. By contrast, its suitability for modeling scattering polarization signals has only been analyzed in a few studies, considering simplified settings. In this work, we aim to quantitatively assess the impact and the range of validity of the RIII CRD approximation in the modeling of scattering polarization. We first present an analytic comparison between RIII and RIII CRD. We then compare the results of radiative transfer calculations, out of local thermodynamic equilibrium, performed with RIII and RIII CRD in realistic 1D atmospheric models. We focus on the chromospheric Ca I line at 4227 Å and on the photospheric Sr I line at 4607 Å.
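For illustration only, and restricted to the scalar, unpolarized case (the polarized RIII is a matrix), the CRD assumption means that the joint frequency dependence of the redistribution function factorizes into a product of absorption profiles at the incoming frequency $\nu'$ and the outgoing frequency $\nu$:
$$ R_{\rm III}^{\rm CRD}(\nu',\nu) \;\propto\; \varphi(\nu')\,\varphi(\nu). $$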
Hindsight Experience Replay (HER) is a technique used in reinforcement learning (RL) that has proven to be very efficient for training off-policy RL-based agents to solve goal-based robotic manipulation tasks using sparse rewards. Even though HER improves the sample efficiency of RL-based agents by learning from mistakes made in past experiences, it does not provide any guidance while exploring the environment. This leads to very large training times due to the volume of experience required to train an agent with this replay strategy. In this paper, we propose a method that uses primitive behaviours, previously learned to solve simple tasks, to guide the agent toward more rewarding actions during exploration while it learns more complex tasks. This guidance is not provided by a manually designed curriculum; instead, a critic network decides at each timestep whether to use the actions proposed by the previously learned primitive policies. We evaluate our method by comparing its performance against HER and other, more efficient variations of this algorithm on several block manipulation tasks. We demonstrate that agents learn a successful policy faster with our proposed method, both in terms of sample efficiency and computation time. Code is available at //github.com/franroldans/qmp-her.
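The following Python fragment sketches the critic-gated action selection described above; all interfaces (policy, critic, primitives) are hypothetical placeholders and do not reflect the released code.

import numpy as np

def select_action(critic, policy, primitives, obs, goal):
    # Collect the learner's own proposal and the primitive policies' proposals,
    # score each with the critic, and execute the highest-valued one.
    candidates = [policy(obs, goal)] + [p(obs, goal) for p in primitives]
    q_values = [critic(obs, a, goal) for a in candidates]
    return candidates[int(np.argmax(q_values))]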
Compositionality is an important feature of discrete symbolic systems, such as language and programs, as it enables them to have infinite capacity despite a finite symbol set. It serves as a useful abstraction for reasoning in both cognitive science and AI, yet the interface between continuous and symbolic processing is often imposed by fiat at the algorithmic level, such as by means of quantization or a softmax sampling step. In this work, we explore how discretization could be implemented in a more neurally plausible manner through the modeling of attractor dynamics that partition the continuous representation space into basins that correspond to sequences of symbols. Building on established work in attractor networks and introducing novel training methods, we show that imposing structure in the symbolic space can produce compositionality in the attractor-supported representation space of rich sensory inputs. Lastly, we argue that our model exhibits an information bottleneck that is thought to play a role in conscious experience, decomposing the rich information of a sensory input into stable components encoding symbolic information.
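As a generic illustration of attractor dynamics carving a continuous space into symbol-like basins (a toy Hopfield-style network, not the architecture or training method proposed in the paper):

import numpy as np

patterns = np.array([[1, -1, 1, -1], [1, 1, -1, -1]], dtype=float)  # the "symbols"
W = patterns.T @ patterns / patterns.shape[1]                       # Hebbian weights
np.fill_diagonal(W, 0.0)

state = np.array([0.9, -0.2, 0.8, -0.7])     # noisy continuous input
for _ in range(10):                           # attractor dynamics
    state = np.tanh(4.0 * (W @ state))
print(np.sign(state))                         # settles into the nearest stored pattern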
Generalized cross-validation (GCV) is a widely used method for estimating the squared out-of-sample prediction risk that applies a scalar, multiplicative degrees-of-freedom adjustment to the squared training error. In this paper, we examine the consistency of GCV for estimating the prediction risk of arbitrary ensembles of penalized least squares estimators. We show that GCV is inconsistent for any finite ensemble of size greater than one. Towards repairing this shortcoming, we identify a correction that adds a scalar term based on degrees-of-freedom-adjusted training errors from each ensemble component. The proposed estimator (termed CGCV) maintains the computational advantages of GCV and requires no sample splitting, model refitting, or out-of-bag risk estimation. The estimator stems from a finer inspection of the ensemble risk decomposition and two intermediate risk estimators for the components in this decomposition. We provide a non-asymptotic analysis of CGCV and the two intermediate risk estimators for ensembles of convex penalized estimators under Gaussian features and a linear response model. In the special case of ridge regression, we extend the analysis to general feature and response distributions using random matrix theory, which establishes model-free uniform consistency of CGCV.
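For reference, here is a minimal numpy sketch of the standard GCV estimator for a single ridge fit, i.e., the multiplicatively adjusted training error whose naive ensemble analogue the paper shows to be inconsistent; the additive CGCV correction itself is not reproduced here.

import numpy as np

def ridge_gcv(X, y, lam):
    # GCV for ridge: mean squared training error divided by (1 - df/n)^2,
    # where df is the trace of the smoother matrix X (X'X + lam I)^{-1} X'.
    n, d = X.shape
    G = X.T @ X + lam * np.eye(d)
    beta = np.linalg.solve(G, X.T @ y)
    df = np.trace(X @ np.linalg.solve(G, X.T))
    train_err = np.mean((y - X @ beta) ** 2)
    return train_err / (1.0 - df / n) ** 2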
Post's Correspondence Problem (the PCP) is a classical decision problem in theoretical computer science that asks whether for pairs of free monoid morphisms $g, h\colon\Sigma^*\to\Delta^*$ there exists any non-trivial $x\in\Sigma^*$ such that $g(x)=h(x)$. Post's Correspondence Problem for a group $\Gamma$ takes pairs of group homomorphisms $g, h\colon F(\Sigma)\to \Gamma$ instead, and similarly asks whether there exists an $x$ such that $g(x)=h(x)$ holds for non-elementary reasons. The restrictions imposed on $x$ in order to get non-elementary solutions lead to several interpretations of the problem; we consider the natural restriction asking that $x \notin \ker(g) \cap \ker(h)$ and prove that the resulting interpretation of the PCP is undecidable for arbitrary hyperbolic $\Gamma$, but decidable when $\Gamma$ is virtually nilpotent. We also study this problem for group constructions such as subgroups, direct products and finite extensions. This problem is equivalent to an interpretation due to Myasnikov, Nikolaev and Ushakov when one map is injective.
We propose a new method to compare survival data based on Higher Criticism (HC) of P-values obtained from many exact hypergeometric tests. The method can accommodate censorship and is sensitive to moderate differences in some unknown and relatively few time intervals, attaining much better power against such differences than the log-rank test and other tests that are popular under non-proportional hazard alternatives. We demonstrate the usefulness of the HC-based test in detecting rare differences compared to existing tests using simulated data and actual gene expression data. Additionally, we analyze the asymptotic power of our method under a piecewise homogeneous exponential decay model with rare and weak departures, describing two groups experiencing failure rates that are usually identical over time except in a few unknown instances in which the second group's failure rate is higher. Under an asymptotic calibration of the model's parameters, the power of the HC-based test experiences a phase transition across the plane of the rarity and intensity parameters that mirrors the phase transition in a two-sample rare and weak normal means setting. In particular, the phase transition curve of our test indicates a larger region in which it is fully powered than the corresponding region of the log-rank test. The latter attains a phase transition curve that is analogous to that of a test based on Fisher's combination statistic of the hypergeometric P-values. To our knowledge, this is the first analysis of a rare and weak signal detection model that involves individually dependent effects in a non-Gaussian setting.
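A minimal Python sketch of the two ingredients, with the caveat that the exact 2x2 table construction and calibration used in the paper may differ: an exact hypergeometric tail P-value per event time, followed by the classic Donoho-Jin Higher Criticism statistic over the resulting P-values.

import numpy as np
from scipy.stats import hypergeom

def hypergeom_pval(k, d, n1, n2):
    # P-value of seeing at least k of the d failures at one event time in
    # group 2, given n1 and n2 subjects at risk in the two groups.
    return hypergeom.sf(k - 1, n1 + n2, n2, d)

def hc_statistic(pvals, alpha0=0.5):
    # Higher Criticism over the smallest alpha0 fraction of sorted P-values.
    p = np.sort(np.asarray(pvals, dtype=float))
    n = len(p)
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p) + 1e-12)
    return hc[: max(1, int(alpha0 * n))].max()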
I propose an alternative algorithm to compute the MMS voting rule. Instead of using linear programming, the new algorithm computes the maximin support value of a committee by solving a sequence of maximum flow problems.
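One concrete way to realize such a sequence of maximum flow problems (an illustrative sketch, not necessarily the paper's exact construction) is to binary-search over the support level t and test the feasibility of each level with a max-flow computation:

import networkx as nx

def supports_t(ballots, committee, t):
    # Max-flow feasibility test: can every voter's unit weight be split among
    # approved committee members so that each member receives at least t?
    G = nx.DiGraph()
    for v, approved in enumerate(ballots):
        G.add_edge("s", ("v", v), capacity=1.0)
        for c in approved & committee:
            G.add_edge(("v", v), ("c", c), capacity=1.0)
    for c in committee:
        G.add_edge(("c", c), "t", capacity=t)
    flow, _ = nx.maximum_flow(G, "s", "t")
    return flow >= t * len(committee) - 1e-9

def maximin_support(ballots, committee, iters=40):
    # Binary search over the support level t.
    lo, hi = 0.0, float(len(ballots))
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if supports_t(ballots, committee, mid) else (lo, mid)
    return lo

ballots = [{0}, {0, 1}, {1}, {0, 1}]          # approval ballots
print(maximin_support(ballots, {0, 1}))       # 2.0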
Complexity is a fundamental concept underlying statistical learning theory that aims to inform generalization performance. Parameter count, while successful in low-dimensional settings, is not well-justified for overparameterized settings when the number of parameters is more than the number of training samples. We revisit complexity measures based on Rissanen's principle of minimum description length (MDL) and define a novel MDL-based complexity (MDL-COMP) that remains valid for overparameterized models. MDL-COMP is defined via an optimality criterion over the encodings induced by a good Ridge estimator class. We provide an extensive theoretical characterization of MDL-COMP for linear models and kernel methods and show that it is not just a function of parameter count, but rather a function of the singular values of the design or the kernel matrix and the signal-to-noise ratio. For a linear model with $n$ observations, $d$ parameters, and i.i.d. Gaussian predictors, MDL-COMP scales linearly with $d$ when $d<n$, but the scaling is exponentially smaller -- $\log d$ for $d>n$. For kernel methods, we show that MDL-COMP informs minimax in-sample error, and can decrease as the dimensionality of the input increases. We also prove that MDL-COMP upper bounds the in-sample mean squared error (MSE). Via an array of simulations and real-data experiments, we show that a data-driven Prac-MDL-COMP informs hyper-parameter tuning for optimizing test MSE with ridge regression in limited data settings, sometimes improving upon cross-validation and (always) saving computational costs. Finally, our findings suggest that the recently observed double descent phenomenon in overparameterized models might be a consequence of the choice of non-ideal estimators.
Problems from metric graph theory such as Metric Dimension, Geodetic Set, and Strong Metric Dimension have recently had a strong impact on the field of parameterized complexity by being the first problems in NP to admit double-exponential lower bounds in the treewidth, and even in the vertex cover number for the latter. We initiate the study of enumerating minimal solution sets for these problems and show that they are also of great interest in enumeration. More specifically, we show that enumerating minimal resolving sets in graphs and minimal geodetic sets in split graphs are equivalent to hypergraph dualization, arguably one of the most important open problems in algorithmic enumeration. This provides two new natural examples for a question that has emerged in different works over the last decade: for which vertex (or edge) set graph property $\Pi$ is the enumeration of minimal (or maximal) subsets satisfying $\Pi$ equivalent to hypergraph dualization? As only very few properties are known to fit within this context -- namely, properties related to minimal domination -- our results make significant progress in characterizing such properties and provide new angles of approach for tackling hypergraph dualization. In a second step, we consider cases where our reductions do not apply, namely graphs with no long induced paths, and show these cases to be mainly tractable.
The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. The lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inference conditioned on explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
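A schematic rendering of the comparison step (hypothetical quantities, not the study's fitted model): Shepard's law says perceived similarity decays exponentially with distance in a psychological similarity space, here taken between the AI's saliency map and the explanation the participant would have given.

import numpy as np

def shepard_similarity(ai_saliency, human_saliency, sensitivity=1.0):
    # Shepard's universal law of generalization: similarity = exp(-c * distance).
    d = np.linalg.norm(np.ravel(ai_saliency) - np.ravel(human_saliency))
    return np.exp(-sensitivity * d)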