We systematically evaluated the performance of seven large language models in generating programming code across a range of prompt strategies, programming languages, and task difficulties. GPT-4 substantially outperforms other large language models, including Gemini Ultra and Claude 2. The coding performance of GPT-4 varies considerably with different prompt strategies. In most LeetCode and GeeksforGeeks coding contests evaluated in this study, GPT-4 employing the optimal prompt strategy outperforms 85 percent of human participants. Additionally, GPT-4 demonstrates strong capabilities in translating code between different programming languages and in learning from past errors. The computational efficiency of the code generated by GPT-4 is comparable to that of human programmers. These results suggest that GPT-4 has the potential to serve as a reliable assistant in programming code generation and software development.
In an XOR-based alternating block cipher, the plaintext is masked by a sequence of layers, each performing a distinct action: a highly nonlinear permutation, a linear transformation, and a bitwise key addition. When assessing resistance against classical differential attacks (where differences are computed with respect to XOR), cryptanalysts need only account for the differential probabilities introduced by the nonlinear layer, since it is the only layer whose differential transitions are not deterministic. Computing differentials with respect to another difference operation, however, runs into the difficulty of understanding how such differentials propagate through the XOR-affine layers of the cipher. In this paper we introduce a special family of braces that enables the derivation of a set of differences whose interaction with every layer of an XOR-based alternating block cipher can be understood. We show that such braces can also be described in terms of alternating binary algebras of nilpotency class two. Additionally, we present a method to compute the automorphism group of these structures through an equivalence between bilinear maps. In doing so, we characterise the XOR-linear permutations for which the differential transitions with respect to the new difference are deterministic, facilitating an alternative differential attack.
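As standard background (not specific to this paper): for an $n$-bit nonlinear layer $S$, a classical XOR-differential attack tracks the probabilities

$$\mathrm{DP}_S(a \to b) = \frac{\#\{x \in \mathbb{F}_2^n : S(x) \oplus S(x \oplus a) = b\}}{2^n},$$

whereas an XOR-linear layer $L$ satisfies $L(x) \oplus L(x \oplus a) = L(a)$ for every $x$, so its transitions are deterministic. The braces introduced here aim to recover exactly this asymmetry for a difference operation other than XOR.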
Deep neural networks (DNNs) trained for image denoising are able to generate high-quality samples with score-based reverse diffusion algorithms. These impressive capabilities seem to imply an escape from the curse of dimensionality, but recent reports of memorization of the training set raise the question of whether these networks are learning the "true" continuous density of the data. Here, we show that two DNNs trained on non-overlapping subsets of a dataset learn nearly the same score function, and thus the same density, when the number of training images is large enough. In this regime of strong generalization, diffusion-generated images are distinct from the training set, and are of high visual quality, suggesting that the inductive biases of the DNNs are well-aligned with the data density. We analyze the learned denoising functions and show that the inductive biases give rise to a shrinkage operation in a basis adapted to the underlying image. Examination of these bases reveals oscillating harmonic structures along contours and in homogeneous regions. We demonstrate that trained denoisers are inductively biased towards these geometry-adaptive harmonic bases since they arise not only when the network is trained on photographic images, but also when it is trained on image classes supported on low-dimensional manifolds for which the harmonic basis is suboptimal. Finally, we show that when trained on regular image classes for which the optimal basis is known to be geometry-adaptive and harmonic, the denoising performance of the networks is near-optimal.
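The connection between denoising and score-based sampling invoked here is the classical Miyasawa/Tweedie identity: for observations $y = x + z$ with $z \sim \mathcal{N}(0, \sigma^2 \mathrm{Id})$, the minimum mean squared error denoiser $\hat{x}(y) = \mathbb{E}[x \mid y]$ determines the score of the noisy density via

$$\nabla_y \log p_\sigma(y) = \frac{\hat{x}(y) - y}{\sigma^2},$$

so two networks that compute nearly the same denoising function also define nearly the same score, and hence generate samples from nearly the same density.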
We propose a novel p-branch-and-bound method for solving two-stage stochastic programming problems whose deterministic equivalents are represented by non-convex mixed-integer quadratically constrained quadratic programming (MIQCQP) models. The precision of the solution generated by the p-branch-and-bound method can be adjusted arbitrarily by altering the value of the precision factor p. The proposed method combines two key techniques. The first, named p-Lagrangian decomposition, generates a mixed-integer relaxation of a dual problem with a separable structure for a primal non-convex MIQCQP problem. The second is a version of the classical dual decomposition approach that is applied to solve the Lagrangian dual problem and ensures that integrality and non-anticipativity conditions are met once the optimal solution is obtained. This paper also presents a comparative analysis of the efficiency of the p-branch-and-bound method with two alternative solution methods for the dual problems as a subroutine: the proximal bundle method and Frank-Wolfe progressive hedging. The latter algorithm relies on the interpolation of linearisation steps, similar to those taken in the Frank-Wolfe method, as an inner loop in classic progressive hedging. The efficiency of the p-branch-and-bound method was tested on randomly generated instances, on which it demonstrated superior performance over the commercial solver Gurobi.
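As background (standard dual decomposition, not the paper's specific relaxation), the scenario formulation behind such methods makes non-anticipativity explicit and dualises it: for scenarios $s$ with probabilities $p_s$,

$$\min_{x_s, z} \sum_s p_s f_s(x_s) \ \ \text{s.t.} \ \ x_s = z \ \forall s, \qquad L(\lambda) = \sum_s \min_{x_s} \left[ p_s f_s(x_s) + \lambda_s^\top x_s \right] \ \text{with} \ \sum_s \lambda_s = 0,$$

which makes the dual problem separable by scenario; the proximal bundle and Frank-Wolfe progressive hedging subroutines compared here are alternative ways of maximising $L(\lambda)$.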
We show that established performance metrics in binary classification, such as the F-score, the Jaccard similarity coefficient or Matthews' correlation coefficient (MCC), are not robust to class imbalance in the sense that if the proportion of the minority class tends to $0$, the true positive rate (TPR) of the Bayes classifier under these metrics tends to $0$ as well. Thus, in imbalanced classification problems, these metrics favour classifiers which ignore the minority class. To alleviate this issue we introduce robust modifications of the F-score and the MCC for which, even in strongly imbalanced settings, the TPR is bounded away from $0$. We numerically illustrate the behaviour of the various performance metrics in simulations as well as on a credit default data set. We also discuss connections to the ROC and precision-recall curves and give recommendations on how to combine their usage with performance metrics.
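A minimal numerical sketch of the degeneracy (illustrative TPR/FPR values, not taken from the paper): for a classifier with fixed TPR and FPR, the F-score collapses as the prevalence $\pi$ of the positive class shrinks, so maximising it forces the classifier towards a vanishing false positive rate and, in practice, a vanishing TPR.

```python
import numpy as np

def f1_score(tpr, fpr, pi):
    """F1 as a function of TPR, FPR and the prevalence pi of the positive class."""
    precision = tpr * pi / (tpr * pi + fpr * (1 - pi))
    recall = tpr
    return 2 * precision * recall / (precision + recall)

for pi in [0.5, 0.1, 0.01, 0.001, 0.0001]:
    print(f"pi={pi:g}: F1={f1_score(0.9, 0.05, pi):.4f}")

# F1 decays towards 0 as pi -> 0, because precision is destroyed by the
# majority class: a classifier can only keep F1 high by becoming extremely
# conservative, which is the mechanism driving the Bayes-optimal TPR to 0.
```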
Determining which parameters of a non-linear model best describe a set of experimental data is a fundamental problem in science, and it has gained much traction lately with the rise of complex large-scale simulators (a.k.a. black-box simulators). The likelihood of such models is typically intractable, which is why classical MCMC methods cannot be used. Simulation-based inference (SBI) stands out in this context by only requiring a dataset of simulations to train deep generative models capable of approximating the posterior distribution that relates input parameters to a given observation. In this work, we consider a tall data extension in which multiple observations are available and one wishes to leverage their shared information to better infer the parameters of the model. The method we propose is built upon recent developments from the flourishing score-based diffusion literature and allows us to estimate the tall data posterior distribution simply using information from the score network trained on individual observations. We compare our method to recently proposed competing approaches on various numerical experiments and demonstrate its superiority in terms of numerical stability and computational cost.
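The identity motivating such score composition is elementary Bayes algebra (exact in the noiseless case; handling the diffused scores is the paper's contribution): for $n$ conditionally i.i.d. observations,

$$p(\theta \mid x_{1:n}) \propto p(\theta)^{1-n} \prod_{i=1}^{n} p(\theta \mid x_i), \qquad \nabla_\theta \log p(\theta \mid x_{1:n}) = \sum_{i=1}^{n} \nabla_\theta \log p(\theta \mid x_i) - (n-1)\, \nabla_\theta \log p(\theta),$$

so a score network trained on single observations already carries the information needed for the tall data posterior.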
We prove the uniform convergence of the geometric multigrid V-cycle for hybrid high-order (HHO) and other discontinuous skeletal methods. Our results generalize previously established results for HDG methods, and our multigrid method uses standard smoothers and local solvers that are bounded, convergent, and consistent. We use a weak version of elliptic regularity in our proofs. Numerical experiments confirm our theoretical results.
In recent years, open-source models have gained immense popularity in various fields, including legal language modeling and analysis. These models have proven highly effective at tasks such as summarizing legal documents, extracting key information, and even predicting case outcomes. This has revolutionized the legal industry, enabling lawyers, researchers, and policymakers to quickly access and analyze vast amounts of legal text, saving time and resources. This paper presents a novel approach to legal language modeling (LLM) and analysis using open-source models from Hugging Face. We leverage Hugging Face embeddings via LangChain and Sentence Transformers to develop an LLM tailored for legal texts. We then demonstrate the application of this model by extracting insights from the official Constitution of India. Our methodology involves preprocessing the data, splitting it into chunks, indexing them with ChromaDB through LangChain's vector stores, and employing the google/flan-t5-xxl model for analysis. The trained model is tested on the Indian Constitution, which is available in PDF format. Our findings suggest that our approach holds promise for efficient legal language processing and analysis.
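A hypothetical sketch of the described pipeline is given below; it uses older (pre-0.1) LangChain module paths, and the file name, chunk sizes, and embedding model are assumptions rather than the authors' exact choices. Running it requires a HUGGINGFACEHUB_API_TOKEN environment variable plus the langchain, chromadb, pypdf, and sentence-transformers packages.

```python
# Illustrative sketch only: file name, chunking parameters, and embedding
# model are assumptions, not the authors' exact configuration.
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Chroma
from langchain.llms import HuggingFaceHub
from langchain.chains import RetrievalQA

# Load the PDF and split it into overlapping chunks for retrieval.
docs = PyPDFLoader("constitution_of_india.pdf").load()
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(docs)

# Embed the chunks with a Sentence-Transformers model and index them in Chroma.
emb = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
store = Chroma.from_documents(chunks, emb)

# Query the indexed text with Flan-T5-XXL served through the Hugging Face Hub.
llm = HuggingFaceHub(repo_id="google/flan-t5-xxl",
                     model_kwargs={"temperature": 0.1, "max_length": 256})
qa = RetrievalQA.from_chain_type(llm=llm, retriever=store.as_retriever())
print(qa.run("What does Article 21 of the Constitution guarantee?"))
```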
We present a fast generative modeling approach for resistive memories that reproduces the complex statistical properties of real-world devices. To enable efficient modeling of analog circuits, the model is implemented in Verilog-A. Trained on extensive measurement data from integrated 1T1R arrays (6,000 cycles of 512 devices), the model uses an autoregressive stochastic process to accurately capture the cross-correlations between the switching parameters, while non-linear transformations ensure agreement with both cycle-to-cycle (C2C) and device-to-device (D2D) variability. Benchmarks show that this statistically comprehensive model achieves read/write throughputs exceeding those of even highly simplified and deterministic compact models.
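A minimal Python sketch of the kind of autoregressive process described, with illustrative (not fitted) AR coefficient, cross-correlation, and marginal transforms:

```python
import numpy as np

rng = np.random.default_rng(0)
n_cycles, n_params = 6000, 2                 # e.g. SET voltage and HRS resistance
phi = 0.8                                    # assumed AR(1) coefficient
L = np.linalg.cholesky(np.array([[1.0, 0.5],   # assumed cross-correlation
                                 [0.5, 1.0]]))

# AR(1) process with correlated innovations and stationary unit variance.
z = np.zeros(n_params)
samples = np.empty((n_cycles, n_params))
for t in range(n_cycles):
    z = phi * z + np.sqrt(1 - phi**2) * (L @ rng.standard_normal(n_params))
    samples[t] = z

# Non-linear marginal transforms map the Gaussian process onto the measured
# C2C distributions (the log-normal resistance here is an assumption).
v_set = 1.2 + 0.1 * samples[:, 0]
r_hrs = np.exp(11.0 + 0.4 * samples[:, 1])
```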
This chapter explores the role of patent protection in algorithmic surveillance and whether ordre public exceptions from patentability should apply to such patents, due to their potential to enable human rights violations. It concludes that in most cases, it is undesirable to exclude algorithmic surveillance patents from patentability, as the patent system is ill-equipped to evaluate the impacts of the exploitation of such technologies. Furthermore, the disclosure of such patents has positive externalities from the societal perspective by opening the black box of surveillance for public scrutiny.
Identifying keywords in an open-vocabulary context is crucial for personalizing interactions with smart devices. Previous approaches to open-vocabulary keyword spotting depend on a shared embedding space created by audio and text encoders. However, these approaches suffer from heterogeneous modality representations (i.e., audio-text mismatch). To address this issue, our proposed framework leverages knowledge acquired from a pre-trained text-to-speech (TTS) system. This knowledge transfer incorporates awareness of audio projections into the text representations derived from the text encoder. The performance of the proposed approach is compared with various baseline methods across four different datasets. The robustness of our proposed model is evaluated by assessing its performance across different word lengths and in an out-of-vocabulary (OOV) scenario. Additionally, the effectiveness of transfer learning from the TTS system is investigated by analyzing its different intermediate representations. The experimental results show that, on the challenging LibriPhrase Hard dataset, the proposed approach outperforms the cross-modality correspondence detector (CMCD) method, improving area under the curve (AUC) by 8.22% and equal error rate (EER) by 12.56%.
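For reference, AUC and EER can be computed from detection scores as sketched below (synthetic scores stand in for keyword/non-keyword outputs, and scikit-learn is an assumed tool, not one named in the paper):

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Synthetic scores: positives (keyword present) vs. negatives (absent).
rng = np.random.default_rng(0)
labels = np.r_[np.ones(500), np.zeros(500)]
scores = np.r_[rng.normal(1.0, 1.0, 500), rng.normal(0.0, 1.0, 500)]

auc = roc_auc_score(labels, scores)
fpr, tpr, _ = roc_curve(labels, scores)
fnr = 1 - tpr
eer = fpr[np.nanargmin(np.abs(fnr - fpr))]   # operating point where FPR ~= FNR
print(f"AUC={auc:.3f}  EER={eer:.3f}")
```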