We study the maximum-average submatrix problem, in which, given an $N \times N$ matrix $J$, one needs to find the $k \times k$ submatrix with the largest average of entries. We study the problem for random matrices $J$ whose entries are i.i.d. random variables by mapping it to a variant of the Sherrington-Kirkpatrick spin-glass model at fixed magnetization. We characterize analytically the phase diagram of the model as a function of the submatrix average and the submatrix size $k$ in the limit $N\to\infty$. We consider submatrices of size $k = m N$ with $0 < m < 1$. We find a rich phase diagram, including dynamical and static one-step replica symmetry breaking (1-RSB) transitions, as well as full-step replica symmetry breaking (FRSB). In the limit $m \to 0$, we find a simpler phase diagram featuring a frozen 1-RSB phase, where the Gibbs measure is composed of exponentially many pure states, each with zero entropy.
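To make the objective concrete, here is a minimal brute-force sketch (our own illustration, feasible only for very small $N$; the paper's analysis concerns the limit $N\to\infty$). It enumerates $k$ rows and $k$ columns independently; if the paper's definition restricts to principal submatrices, one would instead take the column set equal to the row set.

```python
import itertools
import numpy as np

def max_avg_submatrix(J, k):
    """Exhaustive search for the k x k submatrix of J (any k rows and
    k columns) with the largest average entry. Exponential in N; for
    illustration on small instances only."""
    N = J.shape[0]
    best, best_avg = None, -np.inf
    for rows in itertools.combinations(range(N), k):
        for cols in itertools.combinations(range(N), k):  # or cols = rows
            avg = J[np.ix_(rows, cols)].mean()
            if avg > best_avg:
                best, best_avg = (rows, cols), avg
    return best, best_avg

rng = np.random.default_rng(0)
J = rng.standard_normal((8, 8))   # i.i.d. Gaussian entries
print(max_avg_submatrix(J, 3))
```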
The conic bundle implementation of the spectral bundle method for large-scale semidefinite programming solves in each iteration a semidefinite quadratic subproblem by an interior point approach. For larger cutting model sizes, the limiting operation is collecting and factorizing a Schur complement of the primal-dual KKT system. We explore possibilities to improve on this by an iterative approach that exploits structural low-rank properties. Two preconditioning approaches are proposed and analyzed; both may be of interest for rank-structured positive definite systems in general. The first employs projections onto random subspaces; the second projects onto a subspace that is chosen deterministically based on structural interior point properties. For both approaches, theoretical bounds are derived for the associated condition number. In the instances tested, the deterministic preconditioner provides surprisingly efficient control of the actual condition number. The results suggest that for large-scale instances the iterative solver is usually the better choice if precision requirements are moderate or if the size of the Schur complemented system clearly exceeds the active dimension within the subspace giving rise to the cutting model of the bundle method.
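The abstract does not spell out the preconditioners; as a rough, hypothetical illustration of the random-subspace idea for rank-structured positive definite systems, a Nyström-style sketch might look as follows (function names and the regularization shift `mu` are our assumptions, not the paper's construction):

```python
import numpy as np

def random_subspace_preconditioner(A, r, mu, rng):
    """Approximate inverse of the positive definite matrix A + mu*I, built
    from a rank-r projection onto a random subspace (Nystrom-style sketch):
    exact on the captured subspace, scaled identity on its complement."""
    n = A.shape[0]
    Omega = rng.standard_normal((n, r))    # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)         # orthonormal basis of the sketch
    lam, U = np.linalg.eigh(Q.T @ A @ Q)   # small r x r eigenproblem
    W = Q @ U                              # approximate eigenvectors of A
    def apply(x):
        c = W.T @ x
        return W @ (c / (lam + mu)) + (x - W @ c) / mu
    return apply

rng = np.random.default_rng(0)
G = rng.standard_normal((200, 12))
A = G @ G.T                                # rank-structured PSD test matrix
M = random_subspace_preconditioner(A, r=12, mu=1e-3, rng=rng)
# M can be wrapped in a scipy.sparse.linalg.LinearOperator and passed to cg.
```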
At present, the prevailing method to diagnose the severity of the diabetic foot (DF) depends on professional podiatrists. However, podiatrists often have a heavy workload, especially in underdeveloped and developing countries and regions, and there are frequently not enough of them to meet the rapidly growing treatment needs of DF patients. It is therefore necessary to develop a medical system that assists in diagnosing DF, in order to relieve part of the podiatrists' workload and to provide timely, relevant information to patients with DF. In this paper, we develop a system that can classify and localize Wagner-grade diabetic foot ulcers in real time. First, we constructed an annotated dataset of 2688 diabetic foot images. Then, to enable the system to detect diabetic foot ulcers accurately and in real time, we build on the YOLOv3 algorithm, coupling it with image fusion, label smoothing, and a varying learning-rate schedule to improve the robustness and predictive accuracy of the original algorithm. Finally, the refined YOLOv3 model was deployed on an Android smartphone to predict the class and location of diabetic foot ulcers in real time. The experimental results validate that the improved YOLOv3 algorithm achieves a mAP of 91.95% and meets the needs of real-time detection and analysis of diabetic foot Wagner ulcers on mobile devices such as smartphones. This work has the potential to lead to a paradigm shift in the clinical treatment of DF and to provide an effective healthcare solution for DF tissue analysis and healing-status monitoring.
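Among the listed refinements, label smoothing is a standard technique; a minimal sketch follows (the paper's smoothing parameter is not given, so `eps=0.1` is an assumption; Wagner grades 0-5 give 6 classes):

```python
import numpy as np

def smooth_labels(one_hot, eps=0.1):
    """Label smoothing: soften hard 0/1 classification targets, which
    regularizes the classifier head used for Wagner grading."""
    num_classes = one_hot.shape[-1]
    return one_hot * (1.0 - eps) + eps / num_classes

y = np.eye(6)[2]           # one-hot target for Wagner grade 2 (grades 0-5)
print(smooth_labels(y))    # [0.0167, 0.0167, 0.9167, 0.0167, 0.0167, 0.0167]
```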
In the MAXSAT problem, we are given a set $V$ of $m$ variables and a collection $C$ of $n$ clauses over $V$, and we seek a truth assignment that maximizes the number of satisfied clauses. This problem is $\textit{NP}$-hard even in its restricted version, the 2-MAXSAT problem, in which every clause contains at most 2 literals. In this paper, we discuss a polynomial-time algorithm to solve this problem, with time complexity bounded by $O(n^2m^3)$. Hence, we provide a proof that $P = \textit{NP}$.
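For orientation, the 2-MAXSAT objective itself is easy to state; the following brute-force sketch (our own, exponential in $m$) illustrates the problem being solved, not the paper's claimed polynomial-time algorithm:

```python
from itertools import product

def max_sat(clauses, m):
    """Brute-force MAXSAT for illustration (exponential in m). A clause is
    a tuple of literals; literal i > 0 means x_i, and i < 0 means not x_i."""
    best = 0
    for bits in product([False, True], repeat=m):
        sat = sum(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
        best = max(best, sat)
    return best

# 2-MAXSAT instance: all four sign patterns over x1, x2; any assignment
# satisfies exactly 3 of the 4 clauses.
print(max_sat([(1, 2), (-1, 2), (1, -2), (-1, -2)], m=2))  # -> 3
```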
Current ethical debates on the use of artificial intelligence (AI) in health care treat AI as a product of technology in three ways: first, by assessing risks and potential benefits of currently developed AI-enabled products with ethical checklists; second, by proposing ex ante lists of ethical values seen as relevant for the design and development of assisting technology; and third, by promoting AI systems that incorporate moral reasoning as part of the automation process. In contrast, we propose a fourth approach to AI, namely as a methodological tool to assist ethical reflection. We provide a concept of an AI-simulation informed by three separate elements: 1) stochastic human behavior models based on behavioral data for simulating realistic settings, 2) qualitative empirical data on value statements regarding internal policy, and 3) visualization components that aid in understanding the impact of changes in these variables. The potential of this approach is to inform an interdisciplinary field about anticipated ethical challenges or ethical trade-offs in concrete settings and, hence, to spark a re-evaluation of design and implementation plans. This may be particularly useful for applications that deal with extremely complex values and behavior or with limitations on the communication resources of affected persons (e.g., in dementia care or in the care of persons with cognitive impairment). Simulation does not replace ethical reflection but does allow for detailed, context-sensitive analysis during the design process and prior to implementation. Finally, we discuss the inherently quantitative methods of analysis afforded by stochastic simulations, as well as their potential for ethical discussion and how simulations with AI can improve traditional forms of thought experiments and future-oriented technology assessment.
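As a purely hypothetical illustration of element 1), a toy stochastic behavior model could be a small Markov chain over activity states; the states and transition probabilities below are invented for the sketch, whereas in the proposed concept they would be fit from behavioral data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented toy behavior model: a Markov chain over daily activity states.
states = ["resting", "active", "seeking_help"]
P = np.array([[0.70, 0.25, 0.05],    # transition probabilities from "resting"
              [0.30, 0.60, 0.10],    # ... from "active"
              [0.50, 0.30, 0.20]])   # ... from "seeking_help"

def simulate(steps, s=0):
    """Sample a trajectory of activity states of the given length."""
    traj = [states[s]]
    for _ in range(steps):
        s = rng.choice(3, p=P[s])
        traj.append(states[s])
    return traj

print(simulate(10))
```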
We consider a node where packets of fixed size (in bits) are generated at arbitrary intervals. The node is required to maintain the peak age of information (AoI) at the monitor below a threshold by transmitting a (possibly strict) subset of the generated packets. At any time, depending on packet availability and the current AoI, the node can choose which packet to transmit and at what transmission speed (in bits per second). Power consumption is a monotonically increasing convex function of the transmission speed. In this paper, for any given time horizon, the objective is to find a causal policy that minimizes the total energy consumption while satisfying the peak AoI constraint. We consider the competitive ratio as the performance metric, defined as the ratio of the expected cost of a causal policy to the expected cost of an optimal offline policy that knows the input (packet generation times) in advance. We first derive a lower bound on the competitive ratio of all causal policies in terms of the system parameters (such as the power function, packet size, and peak AoI threshold), and then propose a particular policy for which we show that its competitive ratio has a similar order of dependence on the system parameters as the derived lower bound.
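In standard notation (symbols ours, not the paper's), the competitive ratio of a causal policy $\pi$ can be written as

\[
\mathrm{CR}(\pi) \;=\; \sup_{\sigma}\, \frac{\mathbb{E}\left[\mathrm{Cost}_{\pi}(\sigma)\right]}{\mathbb{E}\left[\mathrm{Cost}_{\mathrm{OPT}}(\sigma)\right]},
\]

where $\sigma$ ranges over the admissible packet-generation sequences and $\mathrm{OPT}$ denotes the optimal offline policy that knows $\sigma$ in advance.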
Kemeny's rule is one of the most studied and well-known voting schemes, with various important applications in computational social choice and biology. Recently, Kemeny's rule was generalized via a set-wise approach by Gilbert et al. Following this paradigm, we have shown in \cite{Phung-Hamel-2023} that the $3$-wise Kemeny voting scheme induced by the $3$-wise Kendall-tau distance presents interesting advantages in comparison with the classical Kemeny rule. While the $3$-wise Kemeny problem, which consists of computing the set of $3$-wise consensus rankings of a voting profile, is NP-hard, we establish in this paper several generalizations to the $3$-wise Kemeny voting scheme of the Major Order Theorems obtained in \cite{Milosz-Hamel-2020} for the classical Kemeny rule; these achieve a substantial search-space reduction by efficiently determining, in polynomial time, the relative order of pairs of alternatives. Essentially, our theorems quantify precisely the non-trivial property that if the preference for one alternative over another in an election is strong enough, not only in the head-to-head competition but even when taking into consideration one or two more alternatives, then the relative order of these two alternatives in every $3$-wise consensus ranking must be as expected. Moreover, we show that the well-known $3/4$-majority rule of Betzler et al. for the classical Kemeny rule is only valid, with respect to the $3$-wise Kemeny scheme, for elections with no more than $5$ alternatives. Examples are also provided to show that the $3$-wise Kemeny rule is more resistant to manipulation than the classical one.
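For orientation, the classical (pairwise) Kendall-tau distance that the $3$-wise distance generalizes can be computed as follows (a minimal sketch of our own; the $3$-wise variant additionally takes triples of alternatives into account):

```python
from itertools import combinations

def kendall_tau(r1, r2):
    """Classical Kendall-tau distance: the number of pairs of alternatives
    ordered differently by the two rankings r1 and r2."""
    pos1 = {a: i for i, a in enumerate(r1)}
    pos2 = {a: i for i, a in enumerate(r2)}
    return sum(1 for a, b in combinations(r1, 2)
               if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0)

# pairs (1,2) and (3,4) are discordant, all other pairs agree
print(kendall_tau([1, 2, 3, 4], [2, 1, 4, 3]))  # -> 2
```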
We study the $d$-dimensional Vector Bin Packing ($d$VBP) problem, a generalization of Bin Packing with central applications in resource allocation and scheduling. In $d$VBP, we are given a set of items, each of which is characterized by a $d$-dimensional volume vector; the objective is to partition the items into a minimum number of subsets (bins), such that the total volume of items in each subset is at most $1$ in each dimension. Our main result is an asymptotic approximation algorithm for $d$VBP that yields a ratio of $(1+\ln d-\chi(d) +\varepsilon)$ for all $d \in \mathbb{N}$ and any $\varepsilon > 0$; here, $\chi(d)$ is some strictly positive function. This improves upon the best known asymptotic ratio of $ \left(1+ \ln d +\varepsilon\right)$ due to Bansal, Caprara and Sviridenko (SICOMP 2010) for any $d >3$. By slightly modifying our algorithm to include an initial matching phase and applying a tighter analysis we obtain an asymptotic approximation ratio of $\left(\frac{4}{3}+\varepsilon\right)$ for the special case of $d=2$, thus substantially improving the previous best ratio of $\left(\frac{3}{2}+\varepsilon\right)$ due to Bansal, Elias and Khan (SODA 2016). Our algorithm iteratively solves a configuration LP relaxation for the residual instance (from previous iterations) and samples a small number of configurations based on the solution for the configuration LP. While iterative rounding was already used by Karmarkar and Karp (FOCS 1982) to establish their celebrated result for classic (one-dimensional) Bin Packing, iterative randomized rounding is used here for the first time in the context of (Vector) Bin Packing. Our results show that iterative randomized rounding is a powerful tool for approximating $d$VBP, leading to simple algorithms with improved approximation guarantees.
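For contrast with the LP-based approach, the following first-fit heuristic (a simple baseline of our own, not the paper's iterative randomized rounding algorithm) illustrates the $d$VBP setting:

```python
def first_fit_dvbp(items):
    """First-fit heuristic for d-dimensional Vector Bin Packing: place each
    item (a tuple of d volumes in [0,1]) into the first open bin whose load
    stays at most 1 in every dimension; open a new bin if none fits."""
    bins = []                       # each bin is its current load vector
    for item in items:
        for load in bins:
            if all(l + v <= 1.0 for l, v in zip(load, item)):
                for i, v in enumerate(item):
                    load[i] += v
                break
        else:
            bins.append(list(item))
    return len(bins)

items = [(0.6, 0.2), (0.3, 0.7), (0.4, 0.4), (0.5, 0.1)]
print(first_fit_dvbp(items))  # -> 2 bins for this 2-dimensional instance
```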
We present algorithms based on satisfiability problem (SAT) solving, as well as answer set programming (ASP), for solving the problem of determining inconsistency degrees in propositional knowledge bases. We consider six different inconsistency measures whose respective decision problems lie on the first level of the polynomial hierarchy. Namely, these are the contension inconsistency measure, the forgetting-based inconsistency measure, the hitting set inconsistency measure, the max-distance inconsistency measure, the sum-distance inconsistency measure, and the hit-distance inconsistency measure. In an extensive experimental analysis, we compare the SAT-based and ASP-based approaches with each other, as well as with a set of naive baseline algorithms. Our results demonstrate that, overall, both the SAT-based and the ASP-based approaches clearly outperform the naive baseline methods in terms of runtime. The results further show that the proposed ASP-based approaches outperform the SAT-based ones with regard to all six inconsistency measures considered in this work. Moreover, we conduct additional experiments to explain the aforementioned results in greater detail.
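As an example of what a naive baseline can look like, the contension inconsistency measure of a CNF knowledge base can be computed by exhaustive search over three-valued (Priest) interpretations; the sketch below is our own illustration (restricted to clause sets), not the paper's encoding:

```python
from itertools import product

# Naive baseline for the contension measure on a CNF knowledge base: the
# minimum number of atoms assigned the paradoxical value B over all
# three-valued (Priest) models. Literal i > 0 is atom i, i < 0 its negation.
T, B, F = 2, 1, 0            # truth order F < B < T; negation swaps T and F

def lit_val(l, v):
    x = v[abs(l) - 1]
    return x if l > 0 else 2 - x

def contension(clauses, n):
    best = n                 # the all-B interpretation always satisfies K
    for v in product((T, B, F), repeat=n):
        # a clause is satisfied iff its maximum literal value is T or B
        if all(max(lit_val(l, v) for l in c) >= B for c in clauses):
            best = min(best, sum(1 for x in v if x == B))
    return best

# K = {x1, not x1, x2}: assigning B to x1 alone resolves the conflict
print(contension([(1,), (-1,), (2,)], n=2))  # -> 1
```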
In temporal extensions of Answer Set Programming (ASP) based on linear time, the behavior of dynamic systems is captured by sequences of states. While this representation reflects their relative order, it abstracts away the specific times associated with each state. However, timing constraints are important in many applications, for instance when planning and scheduling go hand in hand. We address this by developing a metric extension of linear-time temporal equilibrium logic, in which temporal operators are constrained by intervals over the natural numbers. The resulting Metric Equilibrium Logic provides the foundation of an ASP-based approach for specifying qualitative and quantitative dynamic constraints. To this end, we define a translation of metric formulas into monadic first-order formulas and give a correspondence between their models in Metric Equilibrium Logic and Monadic Quantified Equilibrium Logic, respectively. Interestingly, our translation provides a blueprint for implementation in terms of ASP modulo difference constraints.
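For illustration (notation ours, following common conventions for metric temporal operators indexed by intervals), a metric formula may require that every occurrence of $\mathit{alarm}$ be followed by $\mathit{shutdown}$ within $3$ to $5$ time units:

\[
\square\,\bigl(\mathit{alarm} \rightarrow \Diamond_{[3,5]}\,\mathit{shutdown}\bigr),
\]

where the interval $[3,5]$ over the natural numbers constrains the scope of the eventually operator $\Diamond$.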
Classical mathematical techniques such as discrete integration, gradient descent optimization, and state estimation (exemplified by the Runge-Kutta method, Gauss-Newton minimization, and the extended Kalman filter or EKF, respectively) rely on linear algebra and hence are only applicable to state vectors belonging to Euclidean spaces when implemented as described in the literature. This document discusses how to modify these methods so that they can be applied to non-Euclidean state vectors, such as those containing rotations and full motions of rigid bodies. To do so, it provides an in-depth review of the concept of manifolds or Lie groups, together with their tangent spaces or Lie algebras, their exponential and logarithmic maps, the analysis of perturbations, the treatment of uncertainty and covariance, and in particular the definitions of the Jacobians required to employ the previously mentioned calculus methods. These concepts are particularized to the specific cases of the SO(3) and SE(3) Lie groups, known as the special orthogonal and special Euclidean groups of $\mathbb{R}^3$, which represent rigid body rotations and motions, describing their various possible parameterizations as well as their advantages and disadvantages.
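As a concrete instance of an exponential map, the SO(3) case reduces to Rodrigues' formula; a minimal NumPy sketch (our own illustration):

```python
import numpy as np

def hat(w):
    """Map a rotation vector in R^3 to its skew-symmetric matrix in so(3)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(w):
    """Exponential map so(3) -> SO(3) via Rodrigues' formula."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3) + hat(w)    # first-order expansion near the identity
    K = hat(w / theta)               # skew matrix of the unit rotation axis
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

R = exp_so3(np.array([0.0, 0.0, np.pi / 2]))   # 90-degree rotation about z
print(np.round(R, 3))                    # rotates the x-axis onto the y-axis
print(np.allclose(R @ R.T, np.eye(3)))   # orthogonality check -> True
```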