This paper deals with the problem of finding the preferred extensions of an argumentation framework by means of a bijection with the naive sets of another framework. First, we consider the case where an argumentation framework is naive-bijective: its naive sets and preferred extensions coincide. Recognizing naive-bijective argumentation frameworks is hard in general, but we show that it is tractable for frameworks with bounded in-degree. Next, we give a bijection between the preferred extensions of an admissible-closed argumentation framework (one in which the intersection of two admissible sets is admissible) and the naive sets of another framework on the same set of arguments. On the other hand, we prove that identifying admissible-closed argumentation frameworks is coNP-complete. Finally, we introduce the notion of irreducible self-defending sets, namely those that are not the union of other self-defending sets. It turns out that there exists a bijection between the preferred extensions of an argumentation framework and the naive sets of a framework on its irreducible self-defending sets. Consequently, the preferred extensions of argumentation frameworks with certain lattice properties can be listed with polynomial delay and polynomial space.
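For orientation, the standard Dung-style notions underlying these bijections can be stated for a framework $(A, R)$ with arguments $A$ and attack relation $R$ (a minimal sketch, using generic notation):

\[
\begin{aligned}
S \subseteq A \text{ is conflict-free} \; &\iff \; \text{no } a, b \in S \text{ satisfy } (a, b) \in R,\\
S \text{ is naive} \; &\iff \; S \text{ is } \subseteq\text{-maximal conflict-free},\\
S \text{ defends } a \; &\iff \; \text{every } b \text{ with } (b, a) \in R \text{ is attacked by some } c \in S,\\
S \text{ is admissible} \; &\iff \; S \text{ is conflict-free and defends all its elements},\\
S \text{ is preferred} \; &\iff \; S \text{ is } \subseteq\text{-maximal admissible}.
\end{aligned}
\]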
About ten years ago, a paper proposed the first integer linear programming formulation for the constrained two-dimensional guillotine cutting problem (with unlimited cutting stages). Since then, six other formulations have followed, five of them in the last two years. This spike of interest has left no opportunity for a comprehensive comparison between the formulations. We review each formulation and compare their empirical results over instance datasets from the literature. We adapt most formulations to allow for piece rotation; the possibility of such an adaptation was predicted, but not realized, by the prior work. The results show the dominance of pseudo-polynomial formulations up to the point where the instances become intractable for them, while more compact formulations keep achieving good primal solutions. Our study also reveals a small but consistent advantage of the Gurobi solver over the CPLEX solver in our context; that the choice of solver hardly benefits one formulation over another; and a mistake in the generation of the T instances, which should have the same optima with or without guillotine cuts. Finally, we propose hybridising the most recent formulation with a prior formulation for a restricted version of the problem. The hybridisations reduce the branch-and-bound time by about 20% thanks to the symmetries they break.
A variant of the standard notion of branching bisimilarity for processes with discrete relative timing is proposed which is coarser than the standard notion. Using a version of ACP (Algebra of Communicating Processes) with abstraction for processes with discrete relative timing, it is shown that the proposed variant allows both the functional correctness and the performance properties of the PAR (Positive Acknowledgement with Retransmission) protocol to be analyzed. In the version of ACP concerned, the difference between the standard notion of branching bisimilarity and its proposed variant is characterized by a single axiom schema.
We consider an unknown multivariate function representing a system, such as a complex numerical simulator, that takes both deterministic and uncertain inputs. Our objective is to estimate the set of deterministic inputs leading to outputs whose probability (with respect to the distribution of the uncertain inputs) of belonging to a given set is less than a given threshold. This problem, which we call Quantile Set Inversion (QSI), occurs for instance in robust (reliability-based) optimization, when looking for the set of solutions that satisfy the constraints with sufficiently large probability. To solve the QSI problem, we propose a Bayesian strategy based on Gaussian process modeling and the Stepwise Uncertainty Reduction (SUR) principle: the points at which the function is evaluated are chosen sequentially so as to approximate the set of interest efficiently. We illustrate the performance and interest of the proposed SUR strategy through several numerical experiments.
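In symbols (a formalization with generic names, not necessarily the paper's notation): given $f(x, u)$ with deterministic input $x \in \mathbb{X}$ and uncertain input $U \sim \mathbb{P}_U$, a target set $T$ and a threshold $\alpha$, QSI seeks to estimate

\[
\Gamma \;=\; \bigl\{\, x \in \mathbb{X} \;:\; \mathbb{P}_U\bigl( f(x, U) \in T \bigr) \le \alpha \,\bigr\}.
\]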
This article presents the openCFS submodule "scattered data reader" for coupling multi-physical simulations performed in different simulation programs. A typical use case is the forward coupling of a surface vibration simulation (mechanical system) to an acoustic propagation simulation using time-dependent acoustic absorbing material as a noise mitigation measure. The nearest-neighbor search over the source and target points of the interpolation is performed using the FLANN or the CGAL library. In doing so, the coupled field (e.g., surface velocity) is interpolated from a source representation, consisting of field values stored and organized in a file directory, to a target representation, namely the quadrature points in the case of the finite element method. A test case of the functionality, called "Abc2dcsvt", is provided in the "testsuite" module of the openCFS software. The scattered data reader module has been successfully applied in numerous studies on flow-induced sound generation. Within this short article, the functionality and usability of the module are described.
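The interpolation step can be pictured with a generic sketch (not the openCFS API; FLANN/CGAL are replaced here by a SciPy k-d tree, and all names are illustrative):

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
src_pts = rng.random((500, 3))    # source points, e.g. nodes of the vibrating surface
src_vals = np.sin(src_pts[:, 0])  # field values stored at the source points (e.g. surface velocity)
tgt_pts = rng.random((200, 3))    # target points, e.g. quadrature points of the acoustic FE mesh

tree = cKDTree(src_pts)           # spatial index playing the role of the FLANN/CGAL search
_, idx = tree.query(tgt_pts, k=1) # nearest source point for every target point
tgt_vals = src_vals[idx]          # nearest-neighbor interpolation onto the target representation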
We present a rigorous and precise analysis of the maximum degree and the average degree in a dynamic duplication-divergence graph model introduced by Sol\'e, Pastor-Satorras et al., in which the graph grows according to a duplication-divergence mechanism, i.e., by iteratively creating a copy of some node and then randomly altering the neighborhood of the new node with probability $p$. This model captures the growth of some real-world networks, e.g., biological or social networks. In this paper, we prove that for some $0 < p < 1$ the maximum degree and the average degree of a duplication-divergence graph on $t$ vertices are asymptotically concentrated with high probability around $t^p$ and $\max\{t^{2 p - 1}, 1\}$, respectively, i.e., they are within at most a polylogarithmic factor of these values with probability at least $1 - t^{-A}$ for any constant $A > 0$.
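One common reading of this mechanism can be simulated as follows (a sketch under the assumption that each inherited edge is retained independently with probability $p$; function and variable names are illustrative):

import random

def duplication_divergence(t, p, seed=None):
    # Grow a graph on t vertices: start from a single edge, then repeatedly
    # duplicate a uniformly random node and keep each inherited edge
    # independently with probability p.
    rng = random.Random(seed)
    adj = {0: {1}, 1: {0}}
    while len(adj) < t:
        new = len(adj)
        src = rng.randrange(new)                              # node to duplicate
        adj[new] = {v for v in adj[src] if rng.random() < p}  # divergence step
        for v in adj[new]:
            adj[v].add(new)
    return adj

g = duplication_divergence(10_000, 0.6, seed=1)
degrees = [len(nbrs) for nbrs in g.values()]
print(max(degrees), sum(degrees) / len(degrees))  # compare with t^p and max(t^{2p-1}, 1)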
Stochastic inversion problems typically arise when one wants to quantify the uncertainty affecting the inputs of computer models. They consist in estimating input distributions from noisy, observable outputs, and such problems are increasingly examined in Bayesian contexts where the targeted inputs are affected by stochastic uncertainties. In this regard, a stochastic input can be qualified as meaningful if it explains most of the output uncertainty. While such inverse problems are characterized by identifiability conditions, signal-to-noise constraints that formalize this meaningfulness should be accounted for within the definition of the model, prior to inference. This article investigates the possibility of forcing a solution to be meaningful in the context of parametric uncertainty quantification, through the tools of global sensitivity analysis and information theory (variance, entropy, Fisher information). Such forcings mainly take the form of constraints placed on the input covariance, and can be made explicit by considering linear or linearizable models. Simulated experiments indicate that, when injected into the modeling process, these constraints can limit the influence of measurement or process noise on the estimation of the input distribution, and give hope for future extensions to a fully non-linear framework, for example through the use of linear Gaussian mixtures.
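A generic formalization of this setting (the notation is illustrative, not the article's): the observations are noisy outputs of a model $f$ whose inputs are drawn from a parametric distribution, and the inverse problem is to recover that distribution:

\[
Y_i = f(X_i) + \varepsilon_i, \qquad X_i \overset{\text{i.i.d.}}{\sim} P_\theta, \qquad i = 1, \dots, n,
\]

where the $\varepsilon_i$ model measurement or process noise and the goal is to infer $\theta$ (hence $P_\theta$) from $Y_1, \dots, Y_n$.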
The main computational cost per iteration of adaptive cubic regularization methods for solving large-scale nonconvex problems is the computation of the step $s_k$, which requires an approximate minimizer of the cubic model. We propose a new approach in which this minimizer is sought in a low-dimensional subspace that, in contrast to classical approaches, is reused for a number of iterations. A regularized Newton step to correct $s_k$ is also incorporated whenever needed. We show that our method increases efficiency while preserving the worst-case complexity of classical cubic regularization methods. We also explore the use of rational Krylov subspaces for the subspace minimization, to overcome some of the issues encountered when using polynomial Krylov subspaces. We provide several experimental results illustrating the gains of the new approach when compared to classical implementations.
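For reference, in standard adaptive cubic regularization (ARC) notation, the step $s_k$ approximately minimizes

\[
m_k(s) \;=\; f(x_k) + \nabla f(x_k)^{\top} s + \tfrac{1}{2}\, s^{\top} B_k s + \tfrac{\sigma_k}{3}\, \| s \|^3,
\]

where $B_k$ is a (possibly approximate) Hessian and $\sigma_k > 0$ is the adaptive regularization parameter; in the proposed approach this minimization is restricted to a low-dimensional subspace that is reused over several iterations.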
We propose, analyze, and realize a variational multiclass segmentation scheme that partitions a given image into multiple regions exhibiting specific properties. Our method determines multiple functions that encode the segmentation regions by minimizing an energy functional combining information from different channels. Multichannel image data can be obtained by lifting the image into a higher-dimensional feature space using specific multichannel filtering, or may already be provided by the imaging modality under consideration, such as an RGB image or multimodal medical data. Experimental results show that the proposed method performs well in various scenarios. In particular, promising results are presented for two medical applications involving classification of brain abscesses and tumor growth, respectively. As our main theoretical contributions, we prove the existence of global minimizers of the proposed energy functional and show its stability and convergence with respect to noisy inputs. These results also apply to the special case of binary segmentation, where they are likewise novel.
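As a rough illustration of the kind of functional involved (a generic multichannel multiclass energy, not necessarily the authors' choice): with label functions $u_1, \dots, u_K$ encoding the regions and channel-derived data terms $f_k$, one may minimize

\[
E(u_1, \dots, u_K) \;=\; \sum_{k=1}^{K} \int_{\Omega} f_k(x)\, u_k(x)\, dx \;+\; \lambda \sum_{k=1}^{K} \mathrm{TV}(u_k), \qquad \sum_{k=1}^{K} u_k = 1, \quad u_k \ge 0,
\]

where the total-variation terms regularize the region boundaries and $\lambda > 0$ balances data fidelity against smoothness.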
V. Levenshtein first proposed the sequence reconstruction problem in 2001. This problem studies the model in which the same sequence from some set is transmitted over multiple channels and the decoder receives the different outputs. Assume that the transmitted sequence is at distance $d$ from some code and that there are at most $r$ errors in every channel. The sequence reconstruction problem then asks for the minimum number of channels required to recover the transmitted sequence exactly; this number has to be greater than the maximum size of the intersection of two metric balls of radius $r$ whose centers are at distance at least $d$. In this paper, we study the sequence reconstruction problem for permutations under the Hamming distance. In this model we define a Cayley graph over the symmetric group, study its properties, and find the exact value of the largest intersection of two of its metric balls for $d = 2r$. Moreover, we give a lower bound on the largest intersection of two metric balls for $d = 2r - 1$.
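In symbols (restating the criterion above with generic notation): the transmitted sequence can be recovered exactly as soon as the number of channels $N$ satisfies

\[
N \;\ge\; 1 + \max_{\substack{x, y \\ d(x, y) \ge d}} \bigl| B_r(x) \cap B_r(y) \bigr|,
\]

where $B_r(z)$ denotes the metric ball of radius $r$ centered at $z$.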
This paper addresses the problem of providing robust estimators under a functional logistic regression model. Logistic regression is a popular tool in classification problems with two populations. As in functional linear regression, regularization tools are needed to compute estimators for the functional slope. Traditional methods are based on dimension reduction or penalization combined with maximum likelihood or quasi--likelihood techniques, and for that reason they may be affected by misclassified points, especially those associated with functional covariates with atypical behaviour. The proposal given in this paper adapts some of the best practices used when the covariates are finite--dimensional to provide reliable estimates. Under regularity conditions, consistency of the resulting estimators and rates of convergence for the predictions are derived. A numerical study illustrates the finite-sample performance of the proposed method and reveals its stability under different contamination scenarios. A real data example is also presented.
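The underlying model (a standard formulation of functional logistic regression, with generic notation): for a functional covariate $X$ on an interval $\mathcal{I}$ and a binary response $Y$,

\[
\mathbb{P}(Y = 1 \mid X) \;=\; F\Bigl( \alpha_0 + \int_{\mathcal{I}} X(t)\, \beta_0(t)\, dt \Bigr), \qquad F(u) = \frac{1}{1 + e^{-u}},
\]

where $\beta_0$ is the functional slope to be estimated through regularization.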