The classic online facility location problem deals with finding an optimal set of facilities in an online fashion when demand requests arrive one at a time and facilities must be opened to serve these requests. In this work, we study two variants of the online facility location problem: (1) weighted requests and (2) congestion. Both variants are motivated by applications to real-life scenarios, and previously known results on online facility location cannot be directly adapted to analyse them. Weighted requests: In this variant, each demand request is a pair $(x,w)$, where $x$ is the location of the demand and $w$ is the corresponding weight of the request. The cost of serving request $(x,w)$ at facility $F$ is $w\cdot d(x,F)$. For this variant, given $n$ requests, we present an online algorithm attaining a competitive ratio of $\mathcal{O}(\log n)$ in the secretarial model for weighted requests and show that it is optimal. Congestion: The congestion variant considers the case where there is an additional congestion cost that grows with the number of requests served by each facility. For this variant, when the congestion cost is a monomial, we show that there exists an algorithm attaining a constant competitive ratio. This constant is a function of the exponent of the monomial and the facility opening cost, but independent of the number of requests.
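As an illustrative sketch (not the paper's algorithm), a classical Meyerson-style randomized rule for online facility location can be adapted to weighted requests by scaling the distance to the nearest open facility by the request's weight. The function name, the uniform opening cost $f$, and the one-dimensional metric are all assumptions made for this sketch:

```python
import random

def online_facility_location(requests, f, rng=random.Random(0)):
    """Process weighted requests (x, w) online. Open a facility at x with
    probability min(1, w*d/f), where d is the distance to the nearest open
    facility (a Meyerson-style rule, adapted here to weighted requests)."""
    facilities = []
    total_cost = 0.0
    for x, w in requests:
        if not facilities:
            facilities.append(x)          # first request: open a facility, pay f
            total_cost += f
            continue
        d = min(abs(x - y) for y in facilities)   # 1-D metric for simplicity
        if rng.random() < min(1.0, w * d / f):
            facilities.append(x)          # open a new facility at x, pay f
            total_cost += f
        else:
            total_cost += w * d           # serve at the nearest open facility
    return facilities, total_cost
```

The rule balances opening and service costs: a heavy or remote request is more likely to trigger a new facility, mirroring how the weight multiplies the service cost $w\cdot d(x,F)$.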
Recent discussions on the future of metropolitan cities underscore the pivotal role of (social) equity, driven by demographic and economic trends. More equitable policies can foster a city's economic success and social stability. In this work, we focus on identifying metropolitan areas with distinct economic and social levels in the greater Los Angeles area, one of the most diverse yet unequal areas in the United States. Utilizing American Community Survey data, we propose a Bayesian model for boundary detection based on income distributions. The model identifies areas with significant income disparities, offering actionable insights for policymakers seeking to address social and economic inequalities. Our approach, formalized as a Bayesian structural learning framework, models areal densities through finite mixture models. Efficient posterior computation is facilitated by a transdimensional Markov chain Monte Carlo sampler. The methodology is validated via extensive simulations and applied to the income distributions of the greater Los Angeles area. We identify several boundaries in the income distributions, which can be explained in light of other social dynamics such as crime rates and healthcare, demonstrating the usefulness of such an analysis to policymakers.
Rational best approximations (in a Chebyshev sense) to real functions are characterized by an equioscillating approximation error. Similar results do not hold for rational best approximations to complex functions in general. In the present work, we consider unitary rational approximations to the exponential function on the imaginary axis, i.e., rational functions that map the imaginary axis to the unit circle. In the class of unitary rational functions, best approximations are shown to exist, to be uniquely characterized by equioscillation of a phase error, and to possess a super-linear convergence rate. Furthermore, the best approximations have full degree (i.e., they are non-degenerate), attain their maximum approximation error at points of equioscillation, and interpolate at intermediate points. Asymptotic properties of the poles, interpolation nodes, and equioscillation points of these approximants are studied. Three algorithms that have proven very effective for computing unitary rational approximations, including candidates for best approximations, are briefly sketched. Some consequences for numerical time integration are discussed. In particular, time propagators based on unitary best approximants are unitary, symmetric, and A-stable.
Speech bandwidth extension (BWE) has demonstrated promising performance in enhancing perceptual speech quality in real communication systems. Most existing BWE research focuses on fixed upsampling ratios, disregarding the fact that the effective bandwidth of captured audio may fluctuate frequently due to varying capture devices and transmission conditions. In this paper, we propose a novel streaming adaptive bandwidth extension solution, dubbed BAE-Net, which handles low-resolution speech with unknown and varying effective bandwidth. To address the challenge of blindly recovering both the high-frequency magnitude and phase content of speech, we devise a dual-stream architecture that incorporates magnitude inpainting and phase refinement. For potential applications on edge devices, this paper also introduces BAE-Net-lite, a lightweight, streaming, and efficient variant. Quantitative results demonstrate the superiority of BAE-Net in terms of both performance and computational efficiency compared with existing state-of-the-art BWE methods.
We extend the notion of boycotts in cooperative games from one-on-one boycotts between single players to boycotts between coalitions. We prove that convex games offer a proper setting for studying the impact of boycotts. Boycotts have a heterogeneous effect: individual players targeted by many-on-one boycotts suffer most, while non-participating players may actually benefit from a boycott.
We consider unregularized robust M-estimators for linear models under Gaussian design and heavy-tailed noise, in the proportional asymptotics regime where the sample size $n$ and the number of features $p$ both increase such that $p/n \to \gamma\in (0,1)$. An estimator of the out-of-sample error of a robust M-estimator is analysed and proved to be consistent for a large family of loss functions that includes the Huber loss. As an application of this result, we propose an adaptive tuning procedure for the scale parameter $\lambda>0$ of a given loss function $\rho$: choosing $\hat \lambda$ in a given interval $I$ to minimize the out-of-sample error estimate of the M-estimator constructed with loss $\rho_\lambda(\cdot) = \lambda^2 \rho(\cdot/\lambda)$ leads to the optimal out-of-sample error over $I$. The proof relies on a smoothing argument: the unregularized M-estimation objective function is perturbed, or smoothed, with a Ridge penalty that vanishes as $n\to+\infty$, and we show that the unregularized M-estimator of interest inherits properties of its smoothed version.
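A minimal numerical sketch of the λ-scaled loss family and the tuning idea, assuming the Huber loss and using held-out validation error as a stand-in for the paper's out-of-sample error estimate (which, unlike this sketch, requires no sample splitting); all function names are hypothetical:

```python
import numpy as np

def huber_rho(r, lam):
    # scaled loss rho_lambda(r) = lam^2 * rho(r / lam), with rho the Huber loss
    z = np.abs(r / lam)
    return lam**2 * np.where(z <= 1, 0.5 * z**2, z - 0.5)

def fit_m_estimator(X, y, lam, n_iter=50):
    """Huber M-estimator via IRLS: weight w_i = psi(r_i/lam)/(r_i/lam)."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]      # least-squares start
    for _ in range(n_iter):
        z = (y - X @ beta) / lam
        w = np.where(np.abs(z) <= 1, 1.0, 1.0 / np.maximum(np.abs(z), 1e-12))
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return beta

def select_lambda(X, y, X_val, y_val, grid):
    """Pick lambda in the grid minimizing held-out squared error, a stand-in
    for the consistent out-of-sample error estimate analysed in the paper."""
    errs = [np.mean((y_val - X_val @ fit_m_estimator(X, y, lam))**2)
            for lam in grid]
    return grid[int(np.argmin(errs))]
```

The point of the scaling $\rho_\lambda(\cdot)=\lambda^2\rho(\cdot/\lambda)$ is that $\lambda$ interpolates between a robust regime (small $\lambda$, loss nearly absolute) and a least-squares regime (large $\lambda$, loss nearly quadratic).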
The aim of this article is to investigate the well-posedness, stability, and convergence of solutions to the time-dependent Maxwell's equations for the electric field in conductive media, in both continuous and discrete settings. The situation we consider represents a physical problem in which a subdomain is immersed in a homogeneous medium characterized by constant dielectric permittivity and conductivity. It is well known that in such homogeneous regions the solution to Maxwell's equations also solves the wave equation, which makes computations very efficient. Our problem can therefore be considered a coupling problem, for which we derive stability and convergence analyses. A number of numerical examples validate the theoretical convergence rates of the proposed stabilized explicit finite element scheme.
This study investigates the misclassification excess risk bound in the context of 1-bit matrix completion, a significant problem in machine learning involving the recovery of an unknown matrix from a limited subset of its entries. Matrix completion has garnered considerable attention in the last two decades due to its diverse applications across various fields. Unlike conventional approaches that deal with real-valued samples, 1-bit matrix completion is concerned with binary observations. While prior research has predominantly focused on the estimation error of proposed estimators, our study shifts attention to the prediction error. This paper offers a theoretical analysis of the prediction errors of two previous methods based on the logistic regression model: one employing max-norm constrained minimization and the other nuclear-norm penalization. Significantly, our findings demonstrate that the latter achieves the minimax-optimal rate without an additional logarithmic term. These novel results contribute to a deeper understanding of 1-bit matrix completion by shedding light on the predictive performance of specific methodologies.
Every sufficiently large matrix with small spectral norm has a nearby low-rank matrix when distance is measured in the maximum norm (Udell \& Townsend, SIAM J Math Data Sci, 2019). We use the Hanson--Wright inequality to improve the estimate of this distance for matrices with incoherent column and row spaces. In numerical experiments with several classes of matrices, we study how well the theoretical upper bound describes the approximation errors achieved by the method of alternating projections.
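A minimal sketch of the alternating projections method referred to above, under the assumption that it alternates between the (nonconvex) set of rank-$k$ matrices, via truncated SVD, and the max-norm ball $\{B : \|B-A\|_{\max}\le\varepsilon\}$, via entrywise clipping; the function names are ours:

```python
import numpy as np

def trunc_svd(B, k):
    # projection onto the set of matrices of rank at most k (truncated SVD)
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

def alternating_projections(A, k, eps, n_iter=200):
    """Alternate between rank-k matrices and the max-norm ball
    {B : |B_ij - A_ij| <= eps}; return the final rank-k iterate."""
    B = trunc_svd(A, k)
    for _ in range(n_iter):
        C = np.clip(B, A - eps, A + eps)   # project onto the max-norm ball
        B = trunc_svd(C, k)                # project back onto the rank-k set
    return B
```

Since the rank-$k$ set is nonconvex, convergence is not guaranteed in general; in practice the iteration typically drives the entrywise error toward the theoretical bound, which is what the numerical experiments measure.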
Coordinate exchange (CEXCH) is a popular algorithm for generating exact optimal experimental designs. The authors of CEXCH advocated for a highly greedy implementation: one that exchanges and optimizes single-element coordinates of the design matrix. We revisit the effect of greediness on CEXCH's efficacy for generating highly efficient designs. We implement the single-element CEXCH (most greedy), a design-row optimization exchange (medium greedy), and particle swarm optimization (PSO; least greedy) on 21 exact response surface design scenarios under the $D$- and $I$-criteria, which have well-known optimal designs that have been reproduced by several researchers. We found essentially no difference in performance between the most greedy and the medium greedy CEXCH. PSO did exhibit better efficacy than CEXCH for generating $D$-optimal designs and most $I$-optimal designs, but not to a strong degree under our parametrization. This work suggests that further investigation of the greediness dimension and its effect on CEXCH efficacy, on a wider suite of models and criteria, is warranted.
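To make the greediness notion concrete, here is a hedged sketch of single-element coordinate exchange for $D$-optimality on a two-factor quadratic model; the candidate levels, model expansion, and function names are assumptions of this sketch, not the paper's implementation:

```python
import itertools
import numpy as np

def d_criterion(X):
    # log-determinant of the information matrix X'X (D-criterion, maximized)
    sign, logdet = np.linalg.slogdet(X.T @ X)
    return logdet if sign > 0 else -np.inf

def expand(D):
    # full quadratic model in 2 factors: 1, x1, x2, x1*x2, x1^2, x2^2
    x1, x2 = D[:, 0], D[:, 1]
    return np.column_stack([np.ones(len(D)), x1, x2, x1 * x2, x1**2, x2**2])

def cexch(D, levels=(-1.0, 0.0, 1.0), n_passes=10):
    """Single-element coordinate exchange: sweep over every coordinate of
    the design matrix, trying each candidate level and keeping improvements."""
    D = D.copy()
    best = d_criterion(expand(D))
    for _ in range(n_passes):
        improved = False
        for i, j in itertools.product(range(D.shape[0]), range(D.shape[1])):
            old = D[i, j]
            for lev in levels:
                D[i, j] = lev
                score = d_criterion(expand(D))
                if score > best + 1e-12:
                    best, old, improved = score, lev, True
            D[i, j] = old                  # restore the best value found
        if not improved:
            break                          # local optimum reached
    return D, best
```

The "medium greedy" variant discussed above would instead re-optimize an entire design row at a time, and PSO would move whole candidate designs through the search space rather than accepting single-coordinate improvements.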
Discretizing a solution in the Fourier domain rather than the time domain presents a significant advantage in solving transport problems that vary smoothly and periodically in time, such as cardiorespiratory flows. The finite element solution of the resulting time-spectral formulation is investigated here for the convection-diffusion equations. In addition to the baseline Galerkin method, we consider stabilized approaches inspired by the streamline upwind Petrov/Galerkin (SUPG), Galerkin/least-squares (GLS), and variational multiscale (VMS) methods. We also introduce a new augmented SUPG (ASU) method that, by design, produces a nodally exact solution in one dimension for piecewise linear interpolation functions. Comparing these five methods on 1D, 2D, and 3D canonical test cases shows that while the ASU is the most accurate overall, it exhibits stability issues in extremely oscillatory flows with a high Womersley number in 3D. The GLS method, which is identical to the VMS method for this problem, presents an attractive alternative owing to its excellent stability and reasonable accuracy.
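As background for the nodal-exactness property mentioned above, the classical steady 1D analogue can be sketched: for $a u' - \kappa u'' = 0$ with linear elements, SUPG with the well-known "optimal" $\tau = \frac{h}{2a}(\coth \mathrm{Pe} - 1/\mathrm{Pe})$ is nodally exact. This is a standard illustration, not the paper's time-spectral ASU method; the function name and parameters are ours:

```python
import numpy as np

def supg_conv_diff_1d(a=1.0, kappa=0.05, n_el=10):
    """Steady 1-D convection-diffusion a u' - kappa u'' = 0, u(0)=0, u(1)=1,
    linear elements plus SUPG with the classical optimal tau, which makes
    the discrete solution nodally exact."""
    h = 1.0 / n_el
    Pe = a * h / (2 * kappa)                       # element Peclet number
    tau = (h / (2 * a)) * (1 / np.tanh(Pe) - 1 / Pe)
    kappa_eff = kappa + tau * a**2                 # SUPG = added diffusion in 1-D
    n = n_el + 1
    A = np.zeros((n, n)); b = np.zeros(n)
    Ke = (a / 2) * np.array([[-1.0, 1.0], [-1.0, 1.0]]) \
         + (kappa_eff / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    for e in range(n_el):                          # assemble element matrices
        A[e:e + 2, e:e + 2] += Ke
    A[0, :] = 0; A[0, 0] = 1; b[0] = 0             # Dirichlet u(0) = 0
    A[-1, :] = 0; A[-1, -1] = 1; b[-1] = 1         # Dirichlet u(1) = 1
    u = np.linalg.solve(A, b)
    x = np.linspace(0, 1, n)
    u_exact = (np.exp(a * x / kappa) - 1) / (np.exp(a / kappa) - 1)
    return u, u_exact
```

For linear elements, $u''$ vanishes element-wise, so the SUPG term reduces to the extra diffusion $\tau a^2$, and the stencil reproduces the exact nodal ratio $e^{ah/\kappa}$; the ASU method above generalizes this design goal to the time-spectral setting.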