In this paper, we study the perturbation analysis of a class of composite optimization problems, a convenient and unified framework for treating both theoretical and algorithmic aspects of constrained optimization. Perturbation analysis is central to both the theoretical and the computational study of optimization problems. Under mild assumptions on the objective function, we define a strong second order sufficient condition (SSOSC) for the composite optimization problem and prove that the following conditions are equivalent: the SSOSC together with the nondegeneracy condition, the nonsingularity of Clarke's generalized Jacobian of the nonsmooth system at a Karush-Kuhn-Tucker (KKT) point, and the strong regularity of the KKT point. These results provide an important way to characterize the stability of the KKT point. For the convex composite optimization problem, a special case of the general problem, we establish the equivalence between the primal/dual second order sufficient condition and the dual/primal strict Robinson constraint qualification, and the equivalence between the primal/dual SSOSC and the dual/primal nondegeneracy condition. Moreover, we prove that the dual nondegeneracy condition is equivalent to the nonsingularity of Clarke's generalized Jacobian of the subproblem arising in the augmented Lagrangian method. These theoretical results lay a solid foundation for designing efficient algorithms.
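To fix ideas, here is a minimal sketch of the problem class and of the nonsmooth KKT system we have in mind; the precise standing assumptions are those of the paper, and the proximal reformulation below is the standard one (our assumption, not a quotation of the paper's notation):
\[
\min_{x}\; f(x) + \theta(g(x)),
\]
with $f$ and $g$ twice continuously differentiable and $\theta$ convex, proper, and lower semicontinuous. A KKT pair $(\bar{x},\bar{y})$ satisfies
\[
\nabla f(\bar{x}) + \nabla g(\bar{x})^{*}\bar{y} = 0, \qquad \bar{y} \in \partial\theta(g(\bar{x})),
\]
and the inclusion can be rewritten, via the proximal mapping, as the nonsmooth equation $g(\bar{x}) - \operatorname{prox}_{\theta}\big(g(\bar{x}) + \bar{y}\big) = 0$, whose Clarke generalized Jacobian is the object whose nonsingularity is characterized above.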
In this work, for a given oriented graph $D$, we study its interval and hull numbers in the oriented geodetic, $P_3$, and $P_3^*$ convexities. We believe the last of these is formally defined and studied for the first time in this paper, although its undirected version is well known in the literature. Concerning bounds, for a strongly oriented graph $D$ in the oriented geodetic convexity, we prove that $ohng(D)\leq m(D)-n(D)+2$ and that there is at least one strongly oriented graph $D$ such that $ohng(D) = m(D)-n(D)$. We also determine exact values for the hull numbers in these three convexities for tournaments, which imply polynomial-time algorithms to compute them. These results allow us to deduce polynomial-time algorithms to compute $ohnp(D)$ when the underlying graph of $D$ is split or cobipartite. Moreover, we provide a meta-theorem by proving that if deciding whether $oing(D)\leq k$ or $ohng(D)\leq k$ is NP-hard or W[i]-hard parameterized by $k$, for some $i\in\mathbb{Z}_+^*$, then the same holds even if the underlying graph of $D$ is bipartite. Next, we prove that deciding whether $ohnp(D)\leq k$ or $ohnps(D)\leq k$ is W[2]-hard parameterized by $k$, even if $D$ is acyclic and its underlying graph is bipartite; that deciding whether $ohng(D)\leq k$ is W[2]-hard parameterized by $k$, even if $D$ is acyclic; that deciding whether $oinp(D)\leq k$ or $oinps(D)\leq k$ is NP-complete, even if $D$ has no directed cycles and the underlying graph of $D$ is chordal bipartite; and that deciding whether $oinp(D)\leq k$ or $oinps(D)\leq k$ is W[2]-hard parameterized by $k$, even if the underlying graph of $D$ is split. Finally, we also argue that the interval and hull numbers in the oriented $P_3$ and $P_3^*$ convexities can be computed in cubic time for graphs of bounded clique-width by using Courcelle's theorem.
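For readers less familiar with the terminology, the following records the definitions we rely on; this is our reading of the standard notions, of which the oriented versions studied in the paper are the natural directed analogues. In the geodetic convexity,
\[
I(S) \;=\; S \,\cup\, \{\, v \in V(D) : v \text{ lies on a shortest directed path between two vertices of } S \,\};
\]
in the $P_3$ convexity, $v$ is generated when there are $u, w \in S$ with a directed path $u \to v \to w$, and the $P_3^*$ variant additionally requires $u$ and $w$ to be non-adjacent. The interval number is the minimum size of a set $S$ with $I(S) = V(D)$, and the hull number is the minimum size of a set $S$ whose convex hull (the smallest convex set containing $S$, obtained by iterating $I$) is $V(D)$.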
Sylvester matrix equations are ubiquitous in scientific computing. However, few solution techniques exist for their generalized multiterm version, even though such equations arise in an increasingly large number of applications. In this work, we consider algebraic parameter-free preconditioning techniques for the iterative solution of generalized multiterm Sylvester equations. They consist of constructing low Kronecker rank approximations of either the operator itself or its inverse. While the former requires solving standard Sylvester equations in each iteration, the latter only requires matrix-matrix multiplications, which are highly optimized on modern computer architectures. Moreover, low Kronecker rank approximate inverses can easily be combined with sparse approximate inverse techniques, thereby enhancing their performance with little or no damage to their effectiveness.
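As a minimal illustration of the approach (our own sketch, with illustrative matrices and an illustrative Kronecker-rank-1 choice for the approximate inverse, not the paper's construction), the snippet below applies a multiterm Sylvester operator $\mathcal{L}(X)=\sum_i A_i X B_i^T$ inside GMRES and preconditions it with a single Kronecker factor pair, using the identity $(N\otimes M)\,\mathrm{vec}(X)=\mathrm{vec}(M X N^T)$ so that the preconditioner costs only two matrix-matrix multiplications:

```python
# Sketch: generalized multiterm Sylvester operator L(X) = sum_i A_i X B_i^T,
# preconditioned by a Kronecker-rank-1 approximate inverse P^{-1}(X) = M X N^T.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
n, m, q = 30, 20, 3                       # problem sizes and number of terms
A = [np.eye(n) + 0.1 * rng.standard_normal((n, n)) for _ in range(q)]
B = [np.eye(m) + 0.1 * rng.standard_normal((m, m)) for _ in range(q)]
C = rng.standard_normal((n, m))

def sylv(X):                              # apply L(X) = sum_i A_i X B_i^T
    return sum(Ai @ X @ Bi.T for Ai, Bi in zip(A, B))

# Illustrative Kronecker-rank-1 choice: invert the "aggregated" single-term
# operator (sum_i A_i) X (sum_i B_i)^T.
Minv = np.linalg.inv(sum(A))
Ninv = np.linalg.inv(sum(B))

def prec(X):                              # apply P^{-1}: two matrix products
    return Minv @ X @ Ninv.T

vec = lambda X: X.reshape(-1)             # consistent matricization helpers
mat = lambda x: x.reshape(n, m)
Lop = LinearOperator((n * m, n * m), matvec=lambda x: vec(sylv(mat(x))))
Pop = LinearOperator((n * m, n * m), matvec=lambda x: vec(prec(mat(x))))

x, info = gmres(Lop, vec(C), M=Pop)
print("converged:", info == 0, " rel. residual:",
      np.linalg.norm(sylv(mat(x)) - C) / np.linalg.norm(C))
```

Replacing the dense inverses above by sparse approximate inverses would preserve the matrix-multiplication-only structure that makes this preconditioner attractive.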
In this manuscript, we highlight a new phenomenon of complex algebraic singularity formation for solutions of a large class of genuinely nonlinear partial differential equations (PDEs). We start from a Cauchy datum that is holomorphic ramified around the smooth locus and sufficiently singular. We then expect the existence of a solution that is holomorphic ramified around the singular locus $S$ defined by the vanishing of the discriminant of an algebraic equation. Notice, moreover, that the monodromy of the Cauchy datum is Abelian, whereas that of the solution is non-Abelian. Furthermore, the singular locus $S$ depends on the Cauchy datum, in contrast to the Leray principle (stated for linear problems only). This phenomenon stems from the fact that the PDE is genuinely nonlinear and the Cauchy datum is sufficiently singular. First, we investigate the case of the inviscid Burgers equation. We then state a general conjecture describing the expected phenomenon, which we view as a working programme for developing interesting new mathematics. We also state Conjecture 2, a particular case of the general conjecture that retains all the flavour and difficulty of the subject, and we propose a new algorithm with a map $F$ such that a fixed point of $F$ would give a solution to the problem associated with Conjecture 2. Finally, we perform elaborate numerical tests suggesting that a Banach norm should exist for which the mapping $F$ is a contraction, so that the solution (with the above specific algebraic structure) should be unique. This work is a continuation of Leichtnam (1993).
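As a concrete illustration in the Burgers case (our own minimal example, chosen for simplicity), take
\[
u_t + u\,u_x = 0, \qquad u(0,x) = u_0(x) = x^{1/2},
\]
a datum holomorphic ramified around $\{x=0\}$. The method of characteristics gives the implicit relation $u = u_0(x - tu)$, so $u$ satisfies the algebraic equation
\[
u^2 + t\,u - x = 0,
\]
whose discriminant is $t^2 + 4x$. The solution is holomorphic ramified around the singular locus $S = \{t^2 + 4x = 0\}$, which visibly depends on the Cauchy datum.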
This paper presents a method for assessing the thematic agreement of geospatial data products with different semantics and spatial granularities, which may be affected by spatial offsets between test and reference data. The proposed method uses a multi-scale framework that allows for a probabilistic evaluation of whether thematic disagreement between datasets is induced by spatial offsets arising from the different nature of the datasets. We test our method on real-estate-derived settlement locations and remote-sensing-derived building footprint data.
This research addresses the crucial issue of pollution from aircraft operations by optimizing gate allocation and runway scheduling simultaneously, a combination not previously explored. The study presents a genetic algorithm-based method for minimizing pollution from fuel combustion during aircraft take-off and landing at airports. The algorithm integrates the optimization of both landing gates and take-off/landing runways, accounting for the correlation between engine operation time and pollutant levels, and it employs constraint handling techniques to manage the intricate time and resource limitations inherent in airport operations. Additionally, the study conducts a thorough sensitivity analysis of the model, with particular emphasis on the mutation factor and the type of penalty function, to fine-tune the optimization process. This dual-focus optimization strategy represents a significant step toward reducing the environmental impact of the aviation sector and supports more comprehensive and efficient airport operation management.
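The following self-contained sketch illustrates the flavor of such a genetic algorithm; the encoding, the taxi-time emissions proxy, the static penalty, and all parameter values are our illustrative assumptions and not the paper's model:

```python
# Minimal GA sketch for joint gate/runway assignment (illustrative only).
import random

random.seed(1)
N_AIRCRAFT, N_GATES, N_RUNWAYS = 12, 4, 2
# taxi[g][r]: minutes of engine-on taxi between gate g and runway r
taxi = [[random.randint(5, 20) for _ in range(N_RUNWAYS)] for _ in range(N_GATES)]
slot = [random.randint(0, 30) for _ in range(N_AIRCRAFT)]  # scheduled slot (min)
EMISSION_RATE = 1.0   # pollutant units per engine-on minute (proxy)
PENALTY = 100.0       # static penalty per constraint violation

def fitness(chrom):
    """Emissions proxy plus penalties; chrom = [(gate, runway), ...]."""
    cost = sum(EMISSION_RATE * taxi[g][r] for g, r in chrom)
    # penalty: aircraft within 10 minutes of each other may not share a gate
    for i in range(N_AIRCRAFT):
        for j in range(i + 1, N_AIRCRAFT):
            if chrom[i][0] == chrom[j][0] and abs(slot[i] - slot[j]) < 10:
                cost += PENALTY
    return cost

def random_chrom():
    return [(random.randrange(N_GATES), random.randrange(N_RUNWAYS))
            for _ in range(N_AIRCRAFT)]

def crossover(a, b):                       # one-point crossover
    cut = random.randrange(1, N_AIRCRAFT)
    return a[:cut] + b[cut:]

def mutate(chrom, rate=0.1):               # 'rate' plays the mutation-factor role
    return [(random.randrange(N_GATES), random.randrange(N_RUNWAYS))
            if random.random() < rate else gene for gene in chrom]

pop = [random_chrom() for _ in range(60)]
for gen in range(200):
    pop.sort(key=fitness)
    elite = pop[:10]                       # truncation selection
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(50)]
print("best emissions proxy:", fitness(min(pop, key=fitness)))
```

The mutation rate and the penalty weight are exactly the two knobs that a sensitivity analysis of the kind described above would vary.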
We analyze call center data on properties such as agent heterogeneity, customer patience, and breaks. We then compare simulation models that differ in the ways these properties are modeled, and classify them according to the extent to which they reproduce the actual service levels and average waiting times. We obtain a theoretical understanding of how to distinguish between model error and other aspects such as random noise. We conclude that explicitly modeling breaks and agent heterogeneity is crucial for obtaining a precise model.
With the growing prevalence of machine learning and artificial intelligence-based medical decision support systems, it is equally important to ensure that these systems provide patient outcomes in a fair and equitable fashion. This paper presents a framework for detecting areas of algorithmic bias in medical-AI decision support systems. Our approach efficiently identifies potential biases in medical-AI models, specifically in the context of sepsis prediction, by employing the Classification and Regression Trees (CART) algorithm. We verify our methodology through a series of synthetic data experiments, showcasing its ability to precisely estimate areas of bias in controlled settings. The effectiveness of the approach is further validated using electronic medical records from Grady Memorial Hospital in Atlanta, Georgia. These experiments demonstrate the practical use of our strategy in a clinical environment, where it can serve as a vital instrument for guaranteeing fairness and equity in AI-based medical decisions.
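A minimal sketch of the underlying idea (our illustration with synthetic data; the feature names, the planted bias region, and the error signal are assumptions, not the paper's pipeline): fit a shallow CART to per-patient prediction error, so that leaves with high mean error expose candidate regions of bias.

```python
# CART-based bias-region detection on synthetic data: a "model" that ignores
# an elevated-risk subgroup produces high error there, and a shallow tree
# fitted to the per-sample error recovers that region.
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
n = 5000
age = rng.uniform(18, 90, n)
group = rng.integers(0, 2, n)              # binary demographic indicator
X = np.column_stack([age, group])

true_risk = 0.1 + 0.004 * (age - 18)
true_risk = true_risk + 0.3 * ((group == 1) & (age > 65))  # planted bias region
y = rng.binomial(1, np.clip(true_risk, 0, 1))              # synthetic outcomes

model_pred = 0.1 + 0.004 * (age - 18)      # biased model: misses the subgroup
err = np.abs(y - model_pred)               # per-patient error signal

tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=200).fit(X, err)
print(export_text(tree, feature_names=["age", "group"]))
# Leaves with high predicted mean error flag candidate bias regions,
# here expected around group = 1 and age > 65.
```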
In this paper we consider a superlinear one-dimensional elliptic boundary value problem that generalizes the one studied by Moore and Nehari in [43]. Specifically, we deal with piecewise-constant weight functions in front of the nonlinearity that vanish on an arbitrary number $\kappa\geq 1$ of regions. We study, from an analytic and numerical point of view, the number of positive solutions as a function of a parameter $\lambda$ and of $\kappa$. Our main results are twofold. On the one hand, we study analytically the behavior of the solutions, as $\lambda\downarrow-\infty$, in the regions where the weight vanishes. This result leads us to conjecture the existence of $2^{\kappa+1}-1$ solutions for sufficiently negative $\lambda$. On the other hand, we support this conjecture with numerical simulations, which also shed light on the structure of the global bifurcation diagrams in $\lambda$ and on the profiles of positive solutions. Finally, we give additional numerical results suggesting that the same high-multiplicity result holds for a much larger class of weights, including weights arbitrarily close to situations where the positive solution is unique.
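For orientation, the prototypical setting we have in mind has the following form (this display is our sketch of the natural generalization; the exact hypotheses are those stated in the paper):
\[
-u'' = \lambda u + a(x)\,u^{p} \ \text{ in } (0,1), \qquad u(0) = u(1) = 0, \qquad u > 0, \quad p > 1,
\]
where $a \geq 0$ is piecewise constant and vanishes exactly on $\kappa \geq 1$ disjoint subintervals, so that the weight is positive on $\kappa + 1$ components. Heuristically (our reading), the conjectured count $2^{\kappa+1}-1$ matches the number of nonempty subsets of these positive components, with one solution concentrating on each such subset.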
In the error estimation of finite element solutions to the Poisson equation, one usually imposes a shape regularity assumption on the meshes. In this paper, we show that even if the shape regularity condition is violated, the standard error estimate can be obtained, provided that "bad" elements (those violating the shape regularity or maximum angle condition) are virtually covered by "good" simplices. A numerical experiment confirms the theoretical result.
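Concretely, the standard estimate referred to is the first-order bound for the finite element solution $u_h$, stated here in its usual form under our assumption that $u \in H^2(\Omega)$:
\[
\| u - u_h \|_{H^1(\Omega)} \;\leq\; C\,h\,| u |_{H^2(\Omega)},
\]
where $h$ is the mesh size and the constant $C$ classically depends on the shape regularity of the mesh; the point of the paper is that this bound survives when the "bad" elements are virtually covered by "good" simplices.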
In this work, we present new constructions of topological subsystem codes based on semi-regular Euclidean and hyperbolic tessellations. These constructions give new families of codes, and we also provide a new family of codes obtained through an existing construction due to Sarvepalli and Brown. In addition, we prove new results that allow us to determine the parameters of these new codes.