Robins et al. (2008, 2017) applied the theory of higher-order influence functions (HOIFs) to derive an estimator of the mean $\psi$ of an outcome $Y$ in a missing data model with $Y$ missing at random conditional on a vector $X$ of continuous covariates. In contrast to previous estimators, theirs is semiparametric efficient under the minimal conditions of Robins et al. (2009b), but it additionally requires a (non-minimal) smoothness condition on the density $g$ of $X$, because it depends on a nonparametric estimate of $g$. In this paper, we introduce a new HOIF estimator that has the same asymptotic properties as the original one but imposes no smoothness requirement on $g$. This is important for two reasons. First, one rarely has knowledge of the properties of $g$. Second, even when $g$ is smooth, if the dimension of $X$ is even moderate, accurate nonparametric estimation of its density is not feasible at the sample sizes often encountered in applications. In fact, to the best of our knowledge, this new HOIF estimator remains the only semiparametric efficient estimator of $\psi$ under minimal conditions, despite the rapidly growing literature on causal effect estimation. We also show that our estimator generalizes to the entire class of functionals considered by Robins et al. (2008), which includes the average effect of a treatment on a response $Y$ when a vector $X$ suffices to control confounding, and the expected conditional variance of a response $Y$ given a vector $X$. Simulation experiments demonstrate that our new estimator outperforms those of Robins et al. (2008, 2017) in finite samples when $g$ is not very smooth.
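For context, the first-order analogue of such estimators is the familiar augmented inverse-probability-weighted (doubly robust) estimator; HOIF estimators append higher-order U-statistic correction terms to it. A minimal sketch, in notation of our own choosing ($R$ the missingness indicator, $\hat\pi$ an estimate of $P(R=1\mid X)$, $\hat b$ an estimate of $E[Y\mid X, R=1]$):

$$\hat\psi_1 \;=\; \frac{1}{n}\sum_{i=1}^{n}\left[\hat b(X_i) + \frac{R_i}{\hat\pi(X_i)}\bigl(Y_i - \hat b(X_i)\bigr)\right].$$

The residual bias of $\hat\psi_1$ involves products of the estimation errors of $\hat\pi$ and $\hat b$; the higher-order terms of an HOIF estimator are designed to reduce exactly this bias.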
We design new deterministic CONGEST approximation algorithms for \emph{maximum weight independent set (MWIS)} in \emph{sparse graphs}. As our main results, we obtain new $\Delta(1+\epsilon)$-approximation algorithms as well as algorithms whose approximation ratio depends only on $\alpha$, in graphs with maximum degree $\Delta$ and arboricity $\alpha$. For (deterministic) $\Delta(1+\epsilon)$-approximation, the current state of the art is a recent breakthrough by Faour et al.\ [SODA 2023], who gave an $O(\log^{2} (\Delta W)\cdot \log (1/\epsilon)+\log^{*}n)$-round algorithm, where $W$ is the largest node weight (this bound translates to $O(\log^{2} n\cdot\log (1/\epsilon))$ under the common assumption that $W=\text{poly}(n)$). As for $\alpha$-dependent approximations, a deterministic CONGEST $(8(1+\epsilon)\cdot\alpha)$-approximation algorithm with runtime $O(\log^{3} n\cdot\log (1/\epsilon))$ can be derived by combining the aforementioned algorithm of Faour et al.\ with a method presented by Kawarabayashi et al.\ [DISC 2020].
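As background for the $\Delta$-dependent guarantee, the sequential weight-greedy rule already yields a $\Delta$-approximation: each selected vertex blocks at most $\Delta$ neighbours, none heavier than itself. The sketch below shows only this sequential baseline (with a simple adjacency-dict input of our own choosing), not the distributed CONGEST algorithms discussed above.

```python
# Sequential weight-greedy baseline for maximum weight independent set:
# repeatedly take the heaviest remaining vertex and delete its neighbours.
# In a graph of maximum degree Delta this is a Delta-approximation.
# This is NOT the CONGEST algorithm of the paper; it only illustrates the
# sequential rule that local algorithms emulate.

def greedy_mwis(weights, adj):
    """weights: {v: w_v}, adj: {v: set of neighbours}."""
    remaining = set(weights)
    chosen = set()
    while remaining:
        v = max(remaining, key=lambda u: weights[u])  # heaviest remaining vertex
        chosen.add(v)
        remaining.discard(v)
        remaining -= adj[v]                           # its neighbours are now blocked
    return chosen

# Example: path a-b-c with weights 1, 3, 1 -> greedy picks {b}.
print(greedy_mwis({"a": 1, "b": 3, "c": 1},
                  {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}))
```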
We propose the Structured Language Generation Model (SLGM), which combines a new loss function and a new inference method for better generalization to structured outputs. Previous studies on structure prediction (e.g., NER, RE) make use of explicit dataset information, which boosts performance but can hinder robust generalization in real-world situations. Instead, our model is given generalized format information about the data indirectly. With this format information, we can reduce the sequence-to-sequence problem to a classification problem via loss calibration and formatted decoding. Our experimental results show that SLGM maintains performance without dataset information and produces far fewer format errors. We also show that our model can work like an adapter on individual datasets, with no additional training.
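The formatted-decoding idea can be illustrated generically: at each generation step, logits of tokens that would violate the target output format are masked before selection. The sketch below uses a hypothetical `allowed_token_ids` hook and a Hugging-Face-style model interface; it illustrates the general mechanism, not SLGM's actual decoder.

```python
import torch

# Generic format-constrained greedy decoding: tokens that would break the
# target format are masked out of the next-token logits before the argmax.
# `allowed_token_ids(prefix)` is a hypothetical hook encoding the format
# (e.g. "the label must come from a fixed tag set").  Assumes batch size 1
# and a model that returns an object with a `.logits` tensor.

def formatted_greedy_decode(model, input_ids, allowed_token_ids, eos_id, max_len=32):
    generated = input_ids
    for _ in range(max_len):
        logits = model(generated).logits[:, -1, :]           # next-token logits
        mask = torch.full_like(logits, float("-inf"))
        mask[:, list(allowed_token_ids(generated))] = 0.0    # keep only format-legal tokens
        next_id = (logits + mask).argmax(dim=-1, keepdim=True)
        generated = torch.cat([generated, next_id], dim=-1)
        if next_id.item() == eos_id:
            break
    return generated
```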
Facial analysis has emerged as a prominent area of research with diverse applications, including cosmetic surgery programs, the beauty industry, photography, and entertainment. Manipulating patient images often necessitates professional image processing software. This study contributes a model that detects blemishes and skin lesions in facial images using a convolutional neural network and a machine learning approach. The proposed method offers advantages such as a simple architecture, speed, and suitability for image processing, while avoiding the complexities associated with traditional methods. The model comprises four main steps: area selection, scanning of the chosen region, lesion diagnosis, and marking of the identified lesion. Raw data for this research were collected from a reputable clinic in Tehran specializing in skincare and beauty services. The dataset includes administrative information, clinical data, and facial and profile images. A total of 2300 patient images were extracted from these raw data. A software tool was developed to crop and label lesions, with input from two treatment experts. In the lesion preparation phase, the selected area was standardized to $50 \times 50$ pixels. Subsequently, a convolutional neural network model was employed for lesion labeling. The classification model demonstrated high accuracy, with a specificity of 0.98 for healthy skin and 0.97 for lesioned skin. Internal validation involved performance indicators and cross-validation, while external validation compared the model's performance indicators with those of a transfer learning approach using the VGG16 deep network. Compared to existing studies, the results of this research demonstrate the efficacy and desirability of the proposed model and methodology.
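As an illustration of the kind of classifier involved, the following is a minimal CNN sketch for binary classification of $50 \times 50$ skin patches; the layer sizes and channel counts are our own illustrative choices, not the architecture reported in the study.

```python
import torch
import torch.nn as nn

# Minimal CNN sketch for classifying 50x50 facial-skin patches as
# healthy vs. lesioned.  All layer sizes are illustrative assumptions.

class PatchClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 50 -> 25
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 25 -> 12
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 12 * 12, 64), nn.ReLU(),
            nn.Linear(64, 2),  # two classes: healthy / lesioned
        )

    def forward(self, x):          # x: (batch, 3, 50, 50)
        return self.classifier(self.features(x))

logits = PatchClassifier()(torch.randn(4, 3, 50, 50))
print(logits.shape)  # torch.Size([4, 2])
```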
Given an edge-weighted complete metric graph with $n$ vertices, the maximum weight metric triangle packing problem asks for a set of $n/3$ vertex-disjoint triangles that maximizes the total weight of the triangles in the packing. Several simple methods achieve a 2/3-approximation ratio, but this barrier is not easy to break. Chen et al. proposed a randomized approximation algorithm with an expected ratio of $(0.66768-\varepsilon)$ for any constant $\varepsilon>0$. In this paper, we improve the approximation ratio to $(0.66835-\varepsilon)$. Furthermore, we can derandomize our algorithm.
We present a polynomial-time algorithm for online differentially private synthetic data generation. For a data stream within the hypercube $[0,1]^d$ and an infinite time horizon, we develop an online algorithm that generates a differentially private synthetic dataset at each time $t$. This algorithm achieves a near-optimal accuracy bound of $O(t^{-1/d}\log(t))$ for $d\geq 2$ and $O(t^{-1}\log^{4.5}(t))$ for $d=1$ in the 1-Wasserstein distance. This result generalizes previous work on the continual release model, from counting queries to Lipschitz queries. Compared to the offline case, where the entire dataset is available at once, our approach requires only an extra polylog factor in the accuracy bound.
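For background, the continual release model referenced above is typically illustrated by the classical binary-tree counting mechanism, which keeps one noisy partial sum per dyadic interval of time steps and rebuilds each running count from $O(\log t)$ of them. The sketch below implements only that standard counting mechanism (assuming values in $[0,1]$ and simple composition across levels); it is not the synthetic-data algorithm of this paper.

```python
import numpy as np

# Classical binary-tree mechanism for differentially private continual
# counting.  Each dyadic interval of time steps receives one Laplace-noised
# partial sum; the running count at time t is rebuilt from the intervals
# given by the binary expansion of t.  Assumes each update value lies in
# [0, 1] and uses simple composition across the tree levels.

class BinaryTreeCounter:
    def __init__(self, epsilon, horizon):
        self.levels = max(1, int(np.ceil(np.log2(horizon)))) + 1
        self.eps_per_level = epsilon / self.levels
        self.partial = {}   # (level, idx) -> true sum accumulating over that interval
        self.noisy = {}     # (level, idx) -> noised sum, once the interval is complete
        self.t = 0

    def update(self, x):
        """Ingest one value x at the next time step; return a private running count."""
        self.t += 1
        for level in range(self.levels):
            idx = (self.t - 1) >> level                     # dyadic interval containing time t
            self.partial[(level, idx)] = self.partial.get((level, idx), 0.0) + x
            if self.t % (1 << level) == 0:                  # interval complete: noise it once
                self.noisy[(level, idx)] = (self.partial[(level, idx)]
                                            + np.random.laplace(scale=1.0 / self.eps_per_level))
        # the prefix [1..t] decomposes into dyadic intervals matching the set bits of t
        count, start = 0.0, 0
        for level in reversed(range(self.levels)):
            if self.t & (1 << level):
                count += self.noisy[(level, start >> level)]
                start += 1 << level
        return count

counter = BinaryTreeCounter(epsilon=1.0, horizon=1024)
stream = np.random.binomial(1, 0.3, size=100)
private_counts = [counter.update(x) for x in stream]
```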
The angular synchronization problem aims to accurately estimate (up to a constant additive phase) a set of unknown angles $\theta_1, \dots, \theta_n\in[0, 2\pi)$ from $m$ noisy measurements of their offsets $\theta_i-\theta_j \;\mbox{mod} \; 2\pi$. Applications include, for example, sensor network localization, phase retrieval, and distributed clock synchronization. An extension of the problem to the heterogeneous setting (dubbed $k$-synchronization) is to estimate $k$ groups of angles simultaneously, given noisy observations (with unknown group assignment) from each group. Existing methods for angular synchronization usually perform poorly in high-noise regimes, which are common in applications. In this paper, we leverage neural networks for the angular synchronization problem, and its heterogeneous extension, by proposing GNNSync, a theoretically grounded, end-to-end trainable framework using directed graph neural networks. In addition, new loss functions are devised to encode synchronization objectives. Experimental results on extensive data sets demonstrate that GNNSync attains competitive, and often superior, performance against a comprehensive set of baselines for the angular synchronization problem and its extension, validating the robustness of GNNSync even at high noise levels.
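In its standard formulation (not necessarily the exact loss used by GNNSync), one seeks angles that best explain the noisy offsets $\delta_{ij} \approx \theta_i - \theta_j \bmod 2\pi$ observed on the edges $E$ of a measurement graph:

$$\min_{\theta_1,\dots,\theta_n\in[0,2\pi)} \; \sum_{(i,j)\in E} w_{ij}\,\bigl(1-\cos(\theta_i-\theta_j-\delta_{ij})\bigr),$$

where $w_{ij}$ are edge weights. Classical spectral methods recover the angles from the phases of a leading eigenvector of the Hermitian matrix with entries $e^{\mathbf{i}\,\delta_{ij}}$ for $(i,j)\in E$ (and $0$ otherwise), where $\mathbf{i}$ denotes the imaginary unit.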
Bayesian optimization has been successfully applied to optimize black-box functions where the number of evaluations is severely limited. However, in many real-world applications, it is hard or impossible to know in advance which designs are feasible due to physical or system limitations, which leads to the even more challenging problem of optimizing an unknown function under unknown constraints. In this paper, we observe that in such scenarios the optimal solution typically lies on the boundary between the feasible and infeasible regions of the design space, making the problem considerably harder than problems with interior optima. Inspired by this observation, we propose BE-CBO, a new Bayesian optimization method that efficiently explores the boundary between feasible and infeasible designs. To identify the boundary, we learn the constraints with an ensemble of neural networks, which outperforms standard Gaussian processes at capturing complex boundaries. Our method demonstrates superior performance against state-of-the-art methods in comprehensive experiments on synthetic and real-world benchmarks.
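To make the constraint-learning ingredient concrete, the sketch below shows a generic ensemble of small neural-network classifiers for an unknown feasibility constraint, using prediction spread as an uncertainty signal near the boundary. Hyperparameters, class names, and the toy problem are our own illustrative choices, not the BE-CBO implementation.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Generic feasibility model: an ensemble of small MLP classifiers, each
# trained on a bootstrap resample.  The mean prediction estimates the
# probability of feasibility; the spread across members flags uncertainty
# near the feasible/infeasible boundary.

class FeasibilityEnsemble:
    def __init__(self, n_members=5, seed=0):
        self.rng = np.random.default_rng(seed)
        self.members = [MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000,
                                      random_state=i) for i in range(n_members)]

    def fit(self, X, y):                      # y[i] = 1 if design X[i] was feasible
        for m in self.members:
            idx = self.rng.integers(0, len(X), size=len(X))   # bootstrap resample
            m.fit(X[idx], y[idx])
        return self

    def predict_proba(self, X):
        probs = np.stack([m.predict_proba(X)[:, 1] for m in self.members])
        return probs.mean(axis=0), probs.std(axis=0)          # feasibility estimate, disagreement

# Toy example: the feasible region is the unit disc in 2-D.
X = np.random.default_rng(1).uniform(-2, 2, size=(200, 2))
y = (np.linalg.norm(X, axis=1) <= 1.0).astype(int)
mean_p, spread = FeasibilityEnsemble().fit(X, y).predict_proba(X[:5])
```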
(I) We revisit the algorithmic problem of finding all triangles in a graph $G=(V,E)$ with $n$ vertices and $m$ edges. According to a result of Chiba and Nishizeki (1985), this task can be achieved by a combinatorial algorithm running in $O(m \alpha) = O(m^{3/2})$ time, where $\alpha= \alpha(G)$ is the graph arboricity. We provide a new, very simple combinatorial algorithm for finding all triangles in a graph and show that it is amenable to the same running-time analysis. We derive these worst-case bounds from first principles and with very simple proofs that do not rely on classic results due to Nash-Williams from the 1960s. (II) We extend our arguments to the problem of finding all small complete subgraphs of a given fixed size. We show that the dependency on $m$ and $\alpha$ in the running time $O(\alpha^{\ell-2} \cdot m)$ of the algorithm of Chiba and Nishizeki for listing all copies of $K_\ell$, where $\ell \geq 3$, is asymptotically tight. (III) We give improved arboricity-sensitive running times for counting and/or detection of copies of $K_\ell$, for small $\ell \geq 4$. A key ingredient in our algorithms is, once again, the algorithm of Chiba and Nishizeki. Our new algorithms are faster than all previous algorithms in certain high-range arboricity intervals for every $\ell \geq 7$.
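To illustrate the flavour of such simple combinatorial routines, the sketch below lists all triangles by ordering the vertices by degree, orienting every edge toward the later endpoint, and intersecting forward neighbourhoods; this classic scheme runs in $O(m^{3/2})$ time and is not necessarily the new algorithm of the paper.

```python
from collections import defaultdict

# Classic triangle listing: order vertices by non-decreasing degree, orient
# each edge from the earlier to the later endpoint, and close triangles by
# intersecting forward neighbourhoods.  Each triangle is reported exactly once.

def list_triangles(edges):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    rank = {v: i for i, v in enumerate(sorted(adj, key=lambda v: (len(adj[v]), v)))}
    forward = {v: {u for u in adj[v] if rank[u] > rank[v]} for v in adj}
    triangles = []
    for u, v in edges:
        a, b = (u, v) if rank[u] < rank[v] else (v, u)
        for w in forward[a] & forward[b]:      # common forward neighbours close a triangle
            triangles.append((a, b, w))
    return triangles

# Example: K4 contains four triangles.
k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(len(list_triangles(k4)))  # 4
```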
This work proposes a class of locally differentially private mechanisms for linear queries, and range queries in particular, that leverages correlated input perturbation to simultaneously achieve unbiasedness, consistency, statistical transparency, and control over utility requirements, with accuracy targets expressed either on specified query margins or as implied by the hierarchical database structure. The proposed Cascade Sampling algorithm instantiates the mechanism exactly and efficiently. Our theoretical bounds show that the mechanism attains near-optimal utility, and empirically it is competitive with output perturbation methods.
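To make the contrast with output perturbation concrete, the sketch below shows the simplest form of *independent* input perturbation for range queries under local differential privacy: every answer is derived from one shared noisy histogram, so unbiasedness and mutual consistency hold by construction. The correlated-noise design and accuracy-target control of Cascade Sampling are not implemented here; the bin count, $\epsilon$, and function names are illustrative.

```python
import numpy as np

# Independent input perturbation baseline: each user releases a Laplace-noised
# one-hot histogram of their value (L1 sensitivity 2, so scale 2/eps gives
# eps-LDP), and every range query is answered from the aggregated noisy
# histogram.  Zero-mean noise makes answers unbiased, and deriving all answers
# from the same perturbed data makes them mutually consistent.

def local_noisy_histogram(values, n_bins, epsilon, rng):
    reports = np.zeros(n_bins)
    for v in values:
        one_hot = np.zeros(n_bins)
        one_hot[v] = 1.0
        reports += one_hot + rng.laplace(scale=2.0 / epsilon, size=n_bins)
    return reports

def range_query(noisy_hist, lo, hi):
    """Count query over bins lo..hi (inclusive), answered from the shared noisy histogram."""
    return float(noisy_hist[lo:hi + 1].sum())

rng = np.random.default_rng(0)
data = rng.integers(0, 8, size=5000)
hist = local_noisy_histogram(data, n_bins=8, epsilon=1.0, rng=rng)
# Consistency by construction: the two answers below agree (up to float rounding).
print(range_query(hist, 0, 3) + range_query(hist, 4, 7), range_query(hist, 0, 7))
```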
We develop a general theory for optimizing the frequentist regret of sequential learning problems, in which efficient bandit and reinforcement learning algorithms can be derived from unified Bayesian principles. We propose a novel optimization approach that generates "algorithmic beliefs" at each round and uses Bayesian posteriors to make decisions. The optimization objective used to create these "algorithmic beliefs," which we term the "Algorithmic Information Ratio," is an intrinsic complexity measure that effectively characterizes the frequentist regret of any algorithm. To the best of our knowledge, this is the first systematic approach to making Bayesian-type algorithms prior-free and applicable to adversarial settings in a generic and optimal manner. Moreover, the algorithms are simple and often efficient to implement. As a major application, we present a novel algorithm for multi-armed bandits that achieves "best-of-all-worlds" empirical performance in stochastic, adversarial, and non-stationary environments. We also illustrate how these principles can be used in linear bandits, bandit convex optimization, and reinforcement learning.
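For background on the "Bayesian posteriors to make decisions" ingredient, the following is a minimal sketch of classical Thompson sampling for a Bernoulli bandit with fixed Beta priors; the paper's contribution of optimizing prior-free "algorithmic beliefs" each round is not implemented here.

```python
import numpy as np

# Classical Thompson sampling for a Bernoulli bandit: maintain a Beta posterior
# per arm, sample a mean from each posterior, and play the arm with the largest
# sample.  This is the standard Bayesian baseline, not the paper's algorithm.

def thompson_sampling(true_means, horizon=1000, seed=0):
    rng = np.random.default_rng(seed)
    k = len(true_means)
    successes, failures = np.ones(k), np.ones(k)     # Beta(1, 1) priors
    total_reward = 0.0
    for _ in range(horizon):
        theta = rng.beta(successes, failures)        # sample a mean for each arm
        arm = int(np.argmax(theta))                  # act greedily on the sampled means
        reward = rng.binomial(1, true_means[arm])
        successes[arm] += reward
        failures[arm] += 1 - reward
        total_reward += reward
    return total_reward

print(thompson_sampling([0.2, 0.5, 0.7]))
```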