When solving the Poisson equation on honeycomb hexagonal grids, we show that the $P_1$ virtual element is superconvergent of order three in the $H^1$ norm and of order two in the $L^2$ and $L^\infty$ norms. We define a local post-processing which lifts the superconvergent $P_1$ solution to a $P_3$ solution with optimal-order accuracy. The theory is confirmed by a numerical test.
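Measured against the baseline $P_1$ rates ($O(h)$ in $H^1$, $O(h^2)$ in $L^2$ and $L^\infty$), the claimed superconvergence orders read as follows; this is our arithmetic gloss of the abstract, not a quoted theorem, and logarithmic factors in $L^\infty$ may be hidden:
\[
|u - u_h|_{H^1} = O(h^{1+3}), \qquad
\|u - u_h\|_{L^2} = O(h^{2+2}), \qquad
\|u - u_h\|_{L^\infty} = O(h^{2+2}).
\]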
We develop a Bayesian modeling framework to address a pressing real-life problem faced by the police in tackling insurgent gangs. Unlike criminals associated with common crimes such as robbery, theft or street crime, insurgent gangs are trained in sophisticated arms and strategize against the government to weaken its resolve. They are constantly on the move, operating over large areas, damaging national property and terrorizing ordinary citizens. Unlike the more commonly addressed problem of modeling crime events, our context requires modeling the movement of insurgent gangs, which is more valuable to police forces in preempting their activities and apprehending them. This paper evolved as a collaborative work with the Indian police to help augment their tactics with a systematic method, by integrating past data on observed gang locations with the expert knowledge of police officers. A methodological challenge in modeling the movement of insurgent gangs is that the data on their locations are incomplete, since the gangs are observable only at irregularly separated time points. Based on a weighted kernel density formulation for temporal data, we analytically derive the closed form of the likelihood, conditional on incomplete past observations. Building on the current tactics used by the police, we devise an approach for constructing an expert prior on gang locations, along with a sequential Bayesian procedure for estimation and prediction. We also propose a new metric for predictive assessment that complements another known metric used in similar problems.
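The weighted kernel density formulation can be sketched as follows: each past sighting contributes a spatial kernel, weighted by the recency of the observation. The Gaussian kernel and exponential time-decay weights below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def weighted_kde(x_query, obs_locs, obs_times, t_now, bandwidth=1.0, decay=0.1):
    """Weighted Gaussian KDE over 2-D gang sightings.

    Each sighting i is weighted by recency, w_i = exp(-decay * (t_now - t_i));
    both the kernel and the weight function are hypothetical stand-ins.
    """
    w = np.exp(-decay * (t_now - np.asarray(obs_times)))   # recency weights
    w /= w.sum()                                           # normalize weights
    d2 = np.sum((np.asarray(obs_locs) - x_query) ** 2, axis=1)
    k = np.exp(-d2 / (2 * bandwidth ** 2)) / (2 * np.pi * bandwidth ** 2)
    return float(np.dot(w, k))                             # density at x_query
```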
Recently, text-to-image (T2I) synthesis has undergone significant advancements, particularly with the emergence of Large Language Models (LLMs) and their enhancement of Large Vision Models (LVMs), which have greatly improved the instruction-following capabilities of traditional T2I models. Nevertheless, previous methods focus on improving generation quality while introducing unsafe factors into prompts. We find that appending specific camera descriptions to prompts can improve safety. Consequently, we propose a simple and safe prompt engineering method (SSP) that improves image generation quality by providing optimal camera descriptions. Specifically, we assemble a dataset of original prompts drawn from multiple existing datasets. To select the optimal camera, we design an optimal-camera matching approach and implement a classifier that automatically matches original prompts to camera descriptions. Appending the matched camera description to an original prompt yields an optimized prompt for subsequent LVM image generation. Experiments demonstrate that SSP improves semantic consistency by an average of 16% over competing methods and safety metrics by 48.9%.
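The optimization step itself is lightweight: classify the original prompt, look up the matched camera description, and append it. The sketch below is a hypothetical illustration; the camera set and classifier interface are assumptions, not the SSP implementation.

```python
# Illustrative camera descriptions; the paper's actual set is not reproduced here.
CAMERA_DESCRIPTIONS = [
    "shot on a DSLR, 50mm lens, f/1.8",
    "wide-angle landscape photo, 24mm lens",
    "macro photograph, ring light",
]

def optimize_prompt(prompt: str, classify) -> str:
    """Append the camera description matched by the classifier to the prompt."""
    idx = classify(prompt)          # classifier maps a prompt to a camera index
    return f"{prompt}, {CAMERA_DESCRIPTIONS[idx]}"

# Usage with a trivial stand-in classifier:
print(optimize_prompt("a cat on a sofa", classify=lambda p: 0))
```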
We introduce a pressure-robust Finite Element Method for the linearized magnetohydrodynamics equations in three space dimensions, which is provably quasi-robust also in the presence of high fluid and magnetic Reynolds numbers. The proposed scheme uses a non-conforming BDM approach with suitable DG terms for the fluid part, combined with an $H^1$-conforming choice for the magnetic fluxes. The method also introduces a specific CIP-type stabilization associated with the coupling terms. Finally, the theoretical results are further validated by numerical experiments.
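For context, the two robustness notions claimed above can be stated schematically (a generic gloss, not the paper's theorem): the velocity error bound takes the form
\[
\|\mathbf{u} - \mathbf{u}_h\| \;\le\; C \inf_{\mathbf{v}_h \in \mathbf{V}_h} \|\mathbf{u} - \mathbf{v}_h\|,
\]
with $C$ independent of the pressure (pressure robustness) and uniformly bounded with respect to the fluid and magnetic Reynolds numbers (quasi-robustness).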
Accurate uncertainty quantification is crucial for the safe deployment of language models (LMs), and prior research has demonstrated improvements in the calibration of modern LMs. Our study focuses on in-context learning (ICL), a prevalent method for adapting static LMs through tailored prompts, and examines the balance between performance and calibration across a broad spectrum of natural language understanding and reasoning tasks. Through comprehensive experiments, we observe that, as the number of ICL examples grows, models first become more miscalibrated before achieving better calibration, and that miscalibration tends to arise in low-shot settings. Moreover, we find that methods aimed at improving usability, such as fine-tuning and chain-of-thought (CoT) prompting, can lead to miscalibration and unreliable natural language explanations, suggesting that new methods may be required for scenarios where models are expected to be reliable.
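Calibration in this setting is commonly quantified with the expected calibration error (ECE); the sketch below is a standard implementation, included for reference rather than as the paper's exact metric.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, then average |accuracy - confidence|
    across bins, weighted by the fraction of samples in each bin."""
    confidences = np.asarray(confidences)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece
```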
We study Whitney-type estimates for the approximation of convex functions in the uniform norm on various convex multivariate domains, paying particular attention to the dependence of the involved constants on the dimension and the geometry of the domain.
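For reference, a Whitney-type estimate in the uniform norm has the generic shape
\[
E_{r-1}(f)_{C(\Omega)} \;:=\; \inf_{\deg P \le r-1} \|f - P\|_{C(\Omega)}
\;\le\; C(r, d, \Omega)\, \omega_r\!\left(f, \operatorname{diam} \Omega\right),
\]
where $\omega_r$ is the $r$-th modulus of smoothness; the question studied here, in our paraphrase, is how $C(r, d, \Omega)$ behaves in the dimension $d$ and the geometry of the convex domain $\Omega$ when $f$ is additionally convex.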
Lawson's iteration is a classical and effective method for solving the linear (polynomial) minimax approximation problem in the complex plane. Extending Lawson's iteration to the rational minimax approximation with both high computational efficiency and a theoretical guarantee is challenging. A recent work [L.-H. Zhang, L. Yang, W. H. Yang and Y.-N. Zhang, A convex dual programming for the rational minimax approximation and Lawson's iteration, 2023, arxiv.org/pdf/2308.06991v1] reveals that Lawson's iteration can be viewed as a method for solving the dual of the original rational minimax problem, and a new type of Lawson's iteration was proposed there. The dual problem recovers the original minimax solution under Ruttan's sufficient condition, and, numerically, the proposed Lawson's iteration was observed to converge monotonically with respect to the dual objective function. In this paper, we perform a theoretical convergence analysis of Lawson's iteration for both the linear and rational minimax approximations. In particular, we show that (i) for the linear minimax approximation, the near-optimal Lawson exponent $\beta$ in Lawson's iteration is $\beta=1$, and (ii) for the rational minimax approximation, the proposed Lawson's iteration converges monotonically with respect to the dual objective function for any sufficiently small $\beta>0$, and the convergent solution fulfills complementary slackness: all nodes associated with positive weights attain the maximum error.
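In its classical linear form, Lawson's iteration is an iteratively reweighted least-squares loop: solve a weighted least-squares problem, then update the weights by the pointwise errors raised to the exponent $\beta$. A minimal real-valued sketch (the abstract's setting is the complex plane; this stand-in uses $\beta = 1$, the near-optimal exponent in the linear case):

```python
import numpy as np

def lawson_linear_minimax(A, f, beta=1.0, iters=100):
    """Lawson's iteration for min_c max_i |f_i - (A c)_i| via reweighted
    least squares: w_i <- w_i * |e_i|**beta, renormalized each step."""
    m = A.shape[0]
    w = np.full(m, 1.0 / m)                       # initial uniform weights
    for _ in range(iters):
        sw = np.sqrt(w)
        c, *_ = np.linalg.lstsq(A * sw[:, None], f * sw, rcond=None)
        e = np.abs(f - A @ c)                     # pointwise errors
        w = w * e ** beta
        w /= w.sum()                              # keep w a probability vector
    return c, w

# Usage: near-minimax cubic approximation of |x| on [-1, 1].
x = np.linspace(-1, 1, 200)
c, w = lawson_linear_minimax(np.vander(x, 4), np.abs(x))
```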
For a matrix $A$ which satisfies Crouzeix's conjecture, we construct several classes of matrices from $A$ for which the conjecture also holds. We discover a new link between cyclicity and Crouzeix's conjecture, which shows that Crouzeix's conjecture holds in full generality if and only if it holds for the differentiation operator on a class of analytic functions. We pose several open questions which, if answered affirmatively, would prove Crouzeix's conjecture. We also begin an investigation into Crouzeix's conjecture for symmetric matrices and, in the case of $3 \times 3$ matrices, we show that Crouzeix's conjecture holds for symmetric matrices if and only if it holds for analytic truncated Toeplitz operators.
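For context, Crouzeix's conjecture asserts that the numerical range $W(A)$ is a $2$-spectral set for $A$:
\[
\|p(A)\| \;\le\; 2 \max_{z \in W(A)} |p(z)|
\quad \text{for every polynomial } p,
\qquad
W(A) := \{\, x^* A x \;:\; \|x\|_2 = 1 \,\};
\]
the best constant currently known unconditionally is $1 + \sqrt{2}$, due to Crouzeix and Palencia.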
The HTTPS protocol has enforced a higher level of robustness against several attacks; however, it is not easy to set up the required certificates on intranets, nor is HTTPS effective when server confidentiality cannot be relied upon, as in the case of cloud services, or when the server could be compromised. A simple method is proposed to encrypt the data on the client side, using WebAssembly, so that data are never transferred to the server as clear text. Searching fields on the server is made possible by an encoding scheme that ensures a stable prefix correspondence between ciphertext and plaintext. The method has been developed for a semantic medical database, and allows accessing personal data using an additional password while maintaining non-sensitive information in clear form. WebAssembly has been chosen to guarantee fast and efficient execution of encryption/decryption operations and because it produces modules that are very robust against reverse engineering. The code is available at //github.com/mfalda/client-encdec.
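The prefix-correspondence idea can be illustrated as follows: encode a string block by block, making each output block a keyed function of the plaintext prefix consumed so far, so equal plaintext prefixes produce equal ciphertext prefixes and prefix search reduces to prefix matching on encodings. The sketch below is a one-way searchable tag, not the paper's full encryption scheme; the construction is an assumption for illustration only.

```python
import hmac, hashlib

def prefix_encode(text: str, key: bytes) -> str:
    """Encode text so that equal plaintext prefixes yield equal encoded prefixes."""
    out, prefix = [], ""
    for ch in text:
        prefix += ch                # each block depends on the whole prefix so far
        tag = hmac.new(key, prefix.encode(), hashlib.sha256).hexdigest()[:4]
        out.append(tag)
    return "".join(out)

# A query for fields starting with "abc" matches any stored encoding that
# starts with prefix_encode("abc", key).
```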
The paper introduces a new meshfree pseudospectral method based on Gaussian radial basis function (RBF) collocation to solve fractional Poisson equations. Hypergeometric functions are used to represent the fractional Laplacian of Gaussian RBFs, allowing the stiffness matrix entries to be computed efficiently. Unlike existing RBF-based methods, our approach yields a Toeplitz-structured stiffness matrix when the RBF centers are equally spaced, enabling efficient matrix-vector multiplications using fast Fourier transforms. We conduct a comprehensive study of shape parameter selection, addressing challenges related to ill-conditioning and numerical stability. The main contributions of our work are a rigorous stability analysis and error estimates for the Gaussian RBF collocation method, representing, to the best of our knowledge, a first attempt at rigorous analysis of RBF-based methods for fractional PDEs. We conduct numerical experiments to validate our analysis and provide practical insights for implementation.
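The FFT-based multiplication the abstract exploits is the standard circulant embedding of a Toeplitz matrix; a minimal sketch of the generic technique, not the paper's code:

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply an n-by-n Toeplitz matrix (first column c, first row r,
    with c[0] == r[0]) by x in O(n log n): embed into a 2n-by-2n
    circulant, which the FFT diagonalizes."""
    n = len(x)
    col = np.concatenate([c, [0.0], r[:0:-1]])    # circulant's first column
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

# Verify against a dense Toeplitz multiply:
c = np.array([4.0, 1.0, 0.5]); r = np.array([4.0, 2.0, 1.0]); x = np.array([1.0, 2.0, 3.0])
T = np.array([[c[i - j] if i >= j else r[j - i] for j in range(3)] for i in range(3)])
assert np.allclose(T @ x, toeplitz_matvec(c, r, x))
```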
ESGReveal is a method for efficiently extracting and analyzing Environmental, Social, and Governance (ESG) data from corporate reports, addressing the critical need for reliable ESG information retrieval. The approach uses Large Language Models (LLMs) enhanced with Retrieval Augmented Generation (RAG) techniques. The ESGReveal system includes an ESG metadata module for targeted queries, a preprocessing module for assembling databases, and an LLM agent for data extraction. Its efficacy was evaluated using ESG reports from 166 companies across various sectors listed on the Hong Kong Stock Exchange in 2022, ensuring comprehensive industry and market-capitalization representation. With GPT-4, ESGReveal achieved an accuracy of 76.9% in data extraction and 83.7% in disclosure analysis, an improvement over baseline models, highlighting the framework's capacity to refine the precision of ESG data analysis. Moreover, the study revealed a demand for stronger ESG disclosures, with environmental and social data disclosure rates standing at 69.5% and 57.2%, respectively, suggesting room for greater corporate transparency. While current iterations of ESGReveal do not process pictorial information, a functionality intended for future enhancement, the study calls for continued research to further develop and compare the analytical capabilities of various LLMs. In summary, ESGReveal is a stride forward in ESG data processing, offering stakeholders a sophisticated tool to better evaluate and advance corporate sustainability efforts. Its evolution is promising in promoting transparency in corporate reporting and aligning with broader sustainable development aims.
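A hypothetical sketch of the retrieval-then-extraction loop such a pipeline implies (the retriever and LLM interfaces, function names, and prompt are assumptions, not ESGReveal's actual API):

```python
def extract_metric(metric_query: str, retriever, llm) -> str:
    """RAG-style extraction: retrieve report chunks relevant to an ESG metric,
    then ask the LLM to extract the value strictly from those chunks."""
    chunks = retriever.search(metric_query, top_k=5)      # retrieval step
    context = "\n\n".join(chunks)
    prompt = (f"Using only the report excerpts below, extract '{metric_query}'. "
              f"Answer 'not disclosed' if it is absent.\n\n{context}")
    return llm.complete(prompt)                           # LLM agent step
```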