A new coupling rule for the Lighthill-Whitham-Richards model at merging junctions is introduced that preserves the ratio of the inflow from a given road to the total inflow into the junction. This rule is considered both in the context of the original traffic flow model and in a relaxation setting, giving rise to two different Riemann solvers that are discussed for merging 2-to-1 junctions. Numerical experiments suggest that the relaxation-based Riemann solver yields suitable predictions in both free-flow and congestion scenarios without relying on flow maximization.
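For illustration, the ratio-preservation idea can be phrased as a supply-demand allocation at the junction. The following minimal sketch assumes a Greenshields fundamental diagram and the usual demand/supply functions and enforces a fixed inflow ratio alpha in (0, 1) exactly; it is an assumed illustration, not necessarily the paper's precise coupling rule or either of its Riemann solvers.

    # Hedged sketch: flux allocation at a 2-to-1 merge that keeps the inflow
    # ratio q1/(q1+q2) fixed at a prescribed value alpha in (0, 1).  The
    # Greenshields flux f(rho) = rho*(1 - rho) and the demand/supply functions
    # below are illustrative assumptions.

    F_MAX, RHO_C = 0.25, 0.5          # capacity and critical density of f

    def flux(rho):
        return rho * (1.0 - rho)      # assumed fundamental diagram

    def demand(rho):                  # maximal flow an incoming road can send
        return flux(rho) if rho <= RHO_C else F_MAX

    def supply(rho):                  # maximal flow the outgoing road can absorb
        return F_MAX if rho <= RHO_C else flux(rho)

    def merge_fluxes(rho1, rho2, rho3, alpha):
        """Junction fluxes (q1, q2) with q1/(q1+q2) = alpha enforced exactly."""
        total = min(demand(rho1) / alpha,          # road 1 caps the total inflow
                    demand(rho2) / (1.0 - alpha),  # road 2 caps the total inflow
                    supply(rho3))                  # outgoing road caps the total
        return alpha * total, (1.0 - alpha) * total

    print(merge_fluxes(0.3, 0.7, 0.6, alpha=0.4))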
The present article aims to design and analyze efficient first-order strong schemes for a generalized Aït-Sahalia type model arising in mathematical finance and evolving in the positive domain $(0, \infty)$, which possesses a diffusion term with superlinear growth and a highly nonlinear drift that blows up at the origin. Such a complicated structure unavoidably causes essential difficulties in the construction and convergence analysis of time discretizations. By incorporating implicitness in the term $\alpha_{-1} x^{-1}$ and a corrective mapping $\Phi_h$ in the recursion, we develop a novel class of explicit and unconditionally positivity-preserving (i.e., for any step-size $h>0$) Milstein-type schemes for the underlying model. In both the non-critical and the general critical case, we introduce a new approach to analyze mean-square error bounds of the proposed schemes without relying on a priori high-order moment bounds of the numerical approximations. The expected order-one mean-square convergence is attained for the proposed schemes. This theoretical guarantee can be used to justify the optimal complexity of the multilevel Monte Carlo method. Numerical experiments are finally provided to verify the theoretical findings.
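To illustrate how implicitness in the singular term $\alpha_{-1} x^{-1}$ alone can keep iterates positive for any step size, here is a minimal Euler-type sketch for an Aït-Sahalia type diffusion; it is not the paper's Milstein-type scheme, omits the corrective mapping $\Phi_h$, and uses illustrative parameter values throughout.

    # Hedged sketch: a semi-implicit Euler-type step that treats only the
    # singular term a_m1/x implicitly, which keeps iterates strictly positive
    # for any step size h > 0.  Not the paper's Milstein-type scheme; all
    # parameter values are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    # assumed dynamics: dX = (a_m1/X - a0 + a1*X - a2*X**r) dt + b*X**rho dW
    a_m1, a0, a1, a2, r, b, rho = 1.0, 0.5, 0.3, 0.2, 2.0, 0.5, 1.5

    def step(x, h):
        dW = rng.normal(0.0, np.sqrt(h))
        # explicit part of the drift plus the diffusion increment
        A = x + (-a0 + a1 * x - a2 * x**r) * h + b * x**rho * dW
        # solve  x_new = A + a_m1*h/x_new  for its positive root
        return 0.5 * (A + np.sqrt(A**2 + 4.0 * a_m1 * h))

    x, h = 1.0, 1e-2
    path = [x]
    for _ in range(1000):
        x = step(x, h)
        path.append(x)
    print(min(path) > 0.0)            # positivity holds along the whole path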
In small area estimation, it is a sound strategy to rely on data measured over time. However, linear mixed models struggle to properly capture time dependencies when the number of lags is large. Given the lack of published studies addressing robust prediction in small areas with time-dependent data, this research extends M-quantile models to this setting. The proposed methodology addresses this challenge and relaxes the widely imposed assumption of unit-level independence. Under the new model, robust bias-corrected predictors of small area linear indicators are derived. Additionally, the optimal selection of the robustness parameter for bias correction is explored, contributing theoretically to the field and enhancing outlier detection. For the estimation of the mean squared error (MSE), a first-order approximation and analytical estimators are obtained under general conditions. Several simulation experiments evaluate the performance of the fitting algorithm, the new predictors, and the resulting MSE estimators, as well as the optimal selection of the robustness parameter. Finally, an application to data from the Spanish Living Conditions Survey illustrates the usefulness of the proposed predictors.
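As background, a generic M-quantile regression fit can be obtained by iteratively reweighted least squares with an asymmetric Huber influence function. The sketch below shows only this basic recipe; the time-dependent, small-area structure, the bias-corrected predictors, and the choice of the robustness parameter developed in the paper are not included, and the function names and tuning constants are assumptions.

    # Hedged sketch: generic M-quantile linear regression via iteratively
    # reweighted least squares with an asymmetric Huber influence function.
    import numpy as np

    def huber_psi(u, c=1.345):
        return np.clip(u, -c, c)

    def mquantile_fit(X, y, q=0.5, c=1.345, tol=1e-8, max_iter=200):
        beta = np.linalg.lstsq(X, y, rcond=None)[0]           # OLS start
        for _ in range(max_iter):
            r = y - X @ beta
            s = np.median(np.abs(r - np.median(r))) / 0.6745  # robust scale (MAD)
            u = r / max(s, 1e-12)
            asym = np.where(u > 0, 2.0 * q, 2.0 * (1.0 - q))  # asymmetric weights
            u_safe = np.where(np.abs(u) < 1e-12, 1e-12, u)
            w = asym * huber_psi(u_safe, c) / u_safe          # IRLS weights psi(u)/u
            beta_new = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
            if np.max(np.abs(beta_new - beta)) < tol:
                return beta_new
            beta = beta_new
        return beta

    # toy usage with heavy-tailed noise (illustrative data only)
    rng = np.random.default_rng(1)
    X = np.column_stack([np.ones(200), rng.normal(size=200)])
    y = 1.0 + 2.0 * X[:, 1] + rng.standard_t(df=3, size=200)
    print(mquantile_fit(X, y, q=0.75))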
The rapid pace of development in quantum computing technology has sparked a proliferation of benchmarks for assessing the performance of quantum computing hardware and software. Good benchmarks empower scientists, engineers, programmers, and users to understand a computing system's power, but bad benchmarks can misdirect research and inhibit progress. In this Perspective, we survey the science of quantum computer benchmarking. We discuss the role of benchmarks and benchmarking, and how good benchmarks can drive and measure progress towards the long-term goal of useful quantum computations, i.e., "quantum utility". We explain how different kinds of benchmarks quantify the performance of different parts of a quantum computer, survey existing benchmarks, critically discuss recent trends in benchmarking, and highlight important open research questions in this field.
Retinex theory models an image as the product of illumination and reflection components; it has received extensive attention and is widely used in image enhancement, segmentation, and color restoration. However, it has rarely been used for additive noise removal because the Retinex model of a noisy image involves both multiplication and addition operations. In this paper, we propose an exponential Retinex decomposition model for image denoising based on hybrid non-convex regularization and oscillation modeling in a weak space. The proposed model uses non-convex first-order total variation (TV) and non-convex second-order TV to regularize the reflection component and the illumination component, respectively, and employs the weak $H^{-1}$ norm to measure the residual component. By utilizing different regularizers, the proposed model effectively decomposes the image into reflection, illumination, and noise components. An alternating direction method of multipliers (ADMM) combined with the majorize-minimization (MM) algorithm is developed to solve the proposed model. Furthermore, we provide a detailed proof of the convergence of the algorithm. Numerical experiments validate both the proposed model and the algorithm. Compared with several state-of-the-art denoising models, the proposed model achieves superior performance in terms of peak signal-to-noise ratio (PSNR) and mean structural similarity (MSSIM).
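Schematically, a decomposition energy of the kind described above can be written as follows, where $R$, $L$, and $f$ denote the reflection component, the illumination component, and the observed image, $\phi$ is a non-convex penalty, and $\lambda_1$, $\lambda_2$, $\mu$ are weights; this is an assumed generic form for orientation only, not necessarily the paper's exact functional.
\[
\min_{R,\,L}\;\; \lambda_1 \sum_{x} \phi\big(|\nabla R(x)|\big)
\;+\; \lambda_2 \sum_{x} \phi\big(|\nabla^2 L(x)|\big)
\;+\; \frac{\mu}{2}\,\big\| f - e^{R+L} \big\|_{H^{-1}}^2 ,
\]
with the residual $f - e^{R+L}$ (the noise component) measured in the weak $H^{-1}$ norm.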
A common method for estimating the Hessian operator from random samples on a low-dimensional manifold is to locally fit a quadratic polynomial. Although widely used, it has been unclear whether this estimator introduces bias, especially on complex manifolds with boundaries and nonuniform sampling; rigorous theoretical guarantees for its asymptotic behavior have been lacking. We show that, under mild conditions, this estimator converges asymptotically to the Hessian operator, with the effects of nonuniform sampling and curvature proving negligible, even near boundaries. Our analysis framework simplifies the intensive computations required for a direct analysis.
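The generic local-quadratic-fit recipe can be sketched as follows: local PCA provides tangent coordinates, and the second-order coefficients of a least-squares quadratic fit are read off as the Hessian. Neighborhood size, weighting, and boundary handling are assumptions here and may differ from the estimator analyzed in the paper.

    # Hedged sketch: local-quadratic-fit Hessian estimation -- local PCA for
    # tangent coordinates, then least-squares fitting of a degree-2 polynomial.
    import numpy as np

    def local_hessian(points, values, i, k=30, d=2):
        """Estimate the d x d Hessian of `values` at sample i from k neighbors."""
        x0 = points[i]
        nbrs = np.argsort(np.linalg.norm(points - x0, axis=1))[1:k + 1]
        Y = points[nbrs] - x0
        _, _, Vt = np.linalg.svd(Y, full_matrices=False)
        T = Y @ Vt[:d].T                            # tangent (PCA) coordinates
        quad = np.column_stack([T[:, a] * T[:, b]
                                for a in range(d) for b in range(a, d)])
        A = np.column_stack([np.ones(k), T, quad])  # 1, linear, quadratic terms
        coef = np.linalg.lstsq(A, values[nbrs] - values[i], rcond=None)[0]
        H = np.zeros((d, d))
        idx = 1 + d                                 # skip intercept and gradient
        for a in range(d):
            for b in range(a, d):
                c = coef[idx]; idx += 1
                if a == b:
                    H[a, a] = 2.0 * c               # (1/2) H_aa t_a^2 term
                else:
                    H[a, b] = H[b, a] = c           # H_ab t_a t_b term
        return H

    # toy check on flat 2D data with f(x, y) = x^2 + x*y (Hessian [[2,1],[1,0]]);
    # eigenvalues are compared since the tangent basis is only defined up to rotation
    rng = np.random.default_rng(2)
    P = rng.uniform(-1.0, 1.0, size=(500, 2))
    f = P[:, 0]**2 + P[:, 0] * P[:, 1]
    print(np.linalg.eigvalsh(local_hessian(P, f, i=0)))   # ~ 1 +/- sqrt(2)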
A new decoder for the SIF test problems of the CUTEst collection is described, which produces problem files allowing the computation of values and derivatives of the objective function and constraints of most CUTEst problems directly within "native" Matlab, Python or Julia, without any additional installation or interfacing with MEX files or Fortran programs. When used with Matlab, the new problem files optionally support reduced-precision computations.
We prove that random quantum circuits on any geometry, including a 1D line, can form approximate unitary designs over $n$ qubits in $\log n$ depth. In a similar manner, we construct pseudorandom unitaries (PRUs) in 1D circuits in $\text{poly} \log n $ depth, and in all-to-all-connected circuits in $\text{poly} \log \log n $ depth. In all three cases, the $n$ dependence is optimal and improves exponentially over known results. These shallow quantum circuits have low complexity and create only short-range entanglement, yet are indistinguishable from unitaries with exponential complexity. Our construction glues local random unitaries on $\log n$-sized or $\text{poly} \log n$-sized patches of qubits to form a global random unitary on all $n$ qubits. In the case of designs, the local unitaries are drawn from existing constructions of approximate unitary $k$-designs, and hence also inherit an optimal scaling in $k$. In the case of PRUs, the local unitaries are drawn from existing unitary ensembles conjectured to form PRUs. Applications of our results include proving that classical shadows with 1D log-depth Clifford circuits are as powerful as those with deep circuits, demonstrating superpolynomial quantum advantage in learning low-complexity physical systems, and establishing quantum hardness for recognizing phases of matter with topological order.
This work describes the development and validation of a digital twin for a semi-autogenous grinding (SAG) mill controlled by an expert system. The digital twin consists of three modules emulating a closed-loop system: fuzzy logic for the expert control, a state-space model for the regulatory control, and a recurrent neural network for the SAG mill process. The model was trained with 68 hours of data and validated with 8 hours of test data. It predicts the mill's behavior over a 2.5-minute horizon with a 30-second sampling time. A disturbance-detection module evaluates the need for retraining, and the digital twin shows promise for supervising the SAG mill together with the expert control system. Future work will focus on integrating this digital twin into real-time optimization strategies with industrial validation.
This investigation first focuses on showing that two metric parameters represent the same object in graph theory: we prove that the multiset resolving sets and the ID-colorings of graphs coincide. We also consider some computational and combinatorial problems concerning the multiset dimension or, equivalently, the ID-number of graphs. We prove that the decision problem associated with computing the multiset dimension of a graph is NP-complete. We consider the multiset dimension of king grids and prove that it is bounded above by 4. We also characterize the strong product graphs with one complete factor whose multiset dimension is finite.
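For concreteness, the multiset dimension can be computed by brute force from its standard definition (all vertices must receive distinct multisets of distances to the chosen set). The sketch below is exponential in the number of vertices and is meant only as an illustration on small graphs; it does not reproduce the paper's complexity or structural results.

    # Hedged sketch: brute-force multiset dimension of a small connected graph.
    from itertools import combinations
    from collections import deque

    def bfs_dist(adj, s):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return dist

    def multiset_dimension(adj):
        V = list(adj)
        D = {v: bfs_dist(adj, v) for v in V}
        for k in range(1, len(V) + 1):
            for W in combinations(V, k):
                reps = [tuple(sorted(D[v][w] for w in W)) for v in V]
                if len(set(reps)) == len(V):      # all multiset codes distinct
                    return k, W
        return float("inf"), None                 # no multiset resolving set exists

    # a path on 5 vertices: its multiset dimension is 1 (an endpoint resolves it)
    path5 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
    print(multiset_dimension(path5))              # expect (1, (0,))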
In this work we propose a discretization of the second boundary condition for the Monge-Ampère equation arising in geometric optics and optimal transport. The proposed discretization is a natural generalization of the popular Oliker-Prussner method introduced in 1988. For the discretization of the differential operator, we use a discrete analogue of the subdifferential. Existence, uniqueness, and stability of solutions to the discrete problem are established, and convergence results to the continuous problem are given.
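For orientation, the discrete subdifferential underlying Oliker-Prussner-type methods can be written as follows on nodes $x_1,\dots,x_N$; this is a schematic recap of the classical interior setting with assumed notation, while the paper's treatment of the second boundary condition is more involved.
\[
\partial u_h(x_i) \;=\; \big\{\, p \in \mathbb{R}^d \;:\; u_h(x_j) \,\ge\, u_h(x_i) + p \cdot (x_j - x_i)\ \ \text{for all } j \,\big\},
\qquad
\big|\partial u_h(x_i)\big| \;=\; f_i , \quad i = 1,\dots,N,
\]
where $|\cdot|$ denotes Lebesgue measure; the second boundary condition additionally constrains the union of these subdifferentials to lie in a prescribed target set.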