The distribution of the minimum of Brownian motion or of the Cauchy process is well known via the reflection principle. Here we consider the problem of finding the minimum sample by sample, which we call online minimum search. We consider the golden-section search method as a candidate, but we show quantitatively that the bisection method is more efficient. The bisection method involves a hierarchical parameter that tunes the depth to which each sub-search is conducted, somewhat as a depth-first search generates a topological ordering on nodes. Finally, we consider the possibility of using harmonic measure, an idea that has so far remained unexplored.
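A minimal sketch of the bisection idea on a single Brownian sample path, assuming a standard Brownian-bridge midpoint refinement (the depth argument plays the role of the hierarchical parameter mentioned above; this is an illustration, not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

def min_by_bisection(t0, w0, t1, w1, depth):
    """Recursively bisect [t0, t1], sampling the Brownian-bridge midpoint,
    and return the smallest value seen down to the given depth."""
    if depth == 0:
        return min(w0, w1)
    tm = 0.5 * (t0 + t1)
    # Brownian-bridge midpoint: mean is the average of the endpoint values,
    # variance is (t1 - t0) / 4.
    wm = 0.5 * (w0 + w1) + rng.normal(0.0, np.sqrt((t1 - t0) / 4.0))
    left = min_by_bisection(t0, w0, tm, wm, depth - 1)
    right = min_by_bisection(tm, wm, t1, w1, depth - 1)
    return min(left, right)

# Approximate minimum of one Brownian path on [0, 1], refined to depth 12.
w1 = rng.normal(0.0, 1.0)  # W(1) ~ N(0, 1)
print(min_by_bisection(0.0, 0.0, 1.0, w1, depth=12))
```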
The LC method described in this work approximates the roots of polynomial equations in one variable. This book explores the LC method, which uses geometric structures built from lines L and circumferences C in the complex plane, derived from the polynomial coefficients. These structures depend on the inclination angle of a line through a fixed point that is meant to contain one of the roots; each structure is associated with an error measure that indicates its proximity to that root, without knowing the root's location a priori. On a computer with parallel processing capabilities, several of these geometric structures can be constructed at the same time, varying the inclination angle of the lines through the fixed point, to obtain an error-measure map from which the approximate locations of all polynomial roots can be identified. To show how the LC method works, the book includes numerical examples for quadratic, cubic, and quartic polynomials, as well as for polynomials of degree five and higher. It also includes R programs that reproduce the results of the examples on a typical personal computer; these programs use vectorized operations instead of loops, which can be seen as a basic and accessible form of parallel processing. The book closes with an invitation to explore beyond the basic ideas and concepts described here, motivating the development of a more efficient and complete computational implementation of the LC method.
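As an illustration of the vectorization point, here is a sketch in Python/NumPy (the book's own programs are in R, and the error measure below, the minimum of |p(z)| along each line, is a hypothetical stand-in, not the LC method's actual construction): an error value is computed for a whole grid of inclination angles at once, with no explicit loops.

```python
import numpy as np

# Hypothetical stand-in error measure: min |p(z)| along each line through z0.
coeffs = [1, 0, -2, 2]                    # p(z) = z^3 - 2z + 2 (example)
z0 = 0.0 + 0.0j                           # fixed point of the lines
thetas = np.linspace(0, np.pi, 720)       # inclination angles
r = np.linspace(-5, 5, 2001)              # positions along each line

# Vectorized over the full (angle, position) grid -- the "basic form of
# parallel processing" referred to above.
z = z0 + np.exp(1j * thetas)[:, None] * r[None, :]
err = np.abs(np.polyval(coeffs, z)).min(axis=1)   # one error value per angle
print(thetas[err.argmin()])               # angle whose line passes nearest a root
```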
We prove explicit uniform two-sided bounds for the phase functions of Bessel functions and of their derivatives. As a consequence, we obtain new enclosures for the zeros of Bessel functions and their derivatives in terms of inverse values of some elementary functions. These bounds are valid, with a few exceptions, for all zeros and all Bessel functions with non-negative indices. We provide numerical evidence showing that our bounds either improve or closely match the best previously known ones.
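The paper's explicit bounds are not reproduced here, but the kind of numerical comparison involved can be sketched, e.g. by checking the zeros of J_0 computed by SciPy against McMahon's classical first-order asymptotic approximation (an illustration only, not the enclosures of this work):

```python
import numpy as np
from scipy.special import jn_zeros

nu = 0
k = np.arange(1, 11)
beta = (k + nu / 2 - 0.25) * np.pi        # McMahon's leading term
mu = 4 * nu**2
approx = beta - (mu - 1) / (8 * beta)     # first-order McMahon approximation
exact = jn_zeros(nu, 10)                  # first 10 zeros of J_0
print(np.max(np.abs(exact - approx)))     # already small for the first zeros
```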
Knowing which countries contribute the most to pushing the boundaries of knowledge in science and technology has social and political importance. However, common citation metrics do not adequately measure this contribution. Measuring it requires more stringent metrics suited to the highly influential breakthrough papers that push the boundaries of knowledge, which are very highly cited but very rare. Here I used the recently described Rk index, specifically designed to address this issue. I applied this index to 25 countries and the EU across 10 key research topics, five technological and five biomedical, analyzing domestic and internationally collaborative papers separately. In technological topics, the Rk indices of domestic papers show that the USA, China, and the EU are the overall leaders; other countries are clearly behind. The USA is notably ahead of China, and the EU is far behind China. The same analysis of biomedical topics shows an overwhelming dominance of the USA, with the EU ahead of China. The analysis of internationally collaborative papers further demonstrates US dominance. These results conflict with current country rankings based on less stringent indicators.
In this paper I will develop a lambda-term calculus, lambda-2Int, for a bi-intuitionistic logic and discuss its implications for the notions of sense and denotation of derivations in a bilateralist setting. To do so, I will take the Curry-Howard correspondence, which is well established between the simply typed lambda-calculus and natural deduction systems for intuitionistic logic, and apply it to a bilateralist proof system displaying two derivability relations, one for proving and one for refuting. The basis will be the natural deduction system of Wansing's bi-intuitionistic logic 2Int, which I will turn into a term-annotated form. This requires a type theory that extends to a two-sorted typed lambda-calculus. I will present such a term-annotated proof system for 2Int and prove a Dualization Theorem relating proofs and refutations in this system. On the basis of these formal results I will argue that they give us interesting insights into questions about sense and denotation, as well as synonymy and identity of proofs, from a bilateralist point of view.
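To give a rough flavor of the two-sorted setup, here is a toy sketch (entirely hypothetical; lambda-2Int's actual term formation and typing rules are richer) in which terms carry one of two sorts, for proofs and refutations, and dualization swaps the sorts involutively, in the spirit of the Dualization Theorem:

```python
from dataclasses import dataclass

# Hypothetical two-sorted term syntax: each term is a proof-term or a
# refutation-term; dualization flips the sort recursively.
@dataclass(frozen=True)
class Term:
    sort: str      # "proof" or "refutation"
    head: str      # constructor name
    args: tuple    # sub-terms

def dualize(t: Term) -> Term:
    flipped = "refutation" if t.sort == "proof" else "proof"
    return Term(flipped, t.head, tuple(dualize(a) for a in t.args))

t = Term("proof", "pair", (Term("proof", "x", ()), Term("refutation", "y", ())))
assert dualize(dualize(t)) == t   # dualization is an involution
```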
Numerical simulation of moving immersed solid bodies in fluids is now practiced routinely, following the pioneering work of Peskin and co-workers on the immersed boundary method (IBM), of Glowinski and co-workers on the fictitious domain method (FDM), and of others on related methods. Many variants of the IBM and FDM approaches have been published, most of which rely on a background mesh for the fluid equations and track the solid body using Lagrangian points. The key idea common to these methods is to treat the entire fluid-solid domain as a fluid and then to constrain the fluid within the solid domain to move in accordance with the solid governing equations. The immersed solid body can be rigid or deforming. Thus, in all these methods the fluid domain is extended into the solid domain. In this review, we provide a mathematical perspective on various immersed methods by recasting the governing equations in an extended-domain form for the fluid. The solid equations are used to impose appropriate constraints on the fluid that is extended into the solid domain. This leads to extended-domain constrained fluid-solid governing equations that provide a unified framework for various immersed-body techniques. The unified constrained governing equations in the strong form are independent of the temporal or spatial discretization schemes. We show that particular choices of time stepping and spatial discretization lead to the different techniques reported in the literature, ranging from freely moving rigid bodies to elastic self-propelling bodies. These techniques have wide-ranging applications, including aquatic locomotion, underwater vehicles, car aerodynamics, organ physiology (e.g., cardiac flow, esophageal transport, respiratory flows), and wave energy converters, among others. We conclude with comments on outstanding challenges and future directions.
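A minimal sketch of the constraint step for a rigid body in 2D, assuming the simplest possible setting (a velocity field on a grid plus a boolean solid mask; this illustrates the shared idea, not any specific published scheme): the velocity inside the solid region is projected onto the nearest rigid-body motion.

```python
import numpy as np

def constrain_rigid(u, v, mask, x, y):
    """Overwrite (u, v) inside the solid mask with the least-squares
    rigid-body motion (translation + rotation about the centroid)."""
    xs, ys = x[mask], y[mask]
    us, vs = u[mask], v[mask]
    U, V = us.mean(), vs.mean()              # best-fit translation
    rx, ry = xs - xs.mean(), ys - ys.mean()  # offsets from the centroid
    # Least-squares angular velocity about the centroid.
    omega = (rx * vs - ry * us).sum() / (rx**2 + ry**2).sum()
    u, v = u.copy(), v.copy()
    u[mask] = U - omega * ry                 # rigid motion: u = U - omega*ry
    v[mask] = V + omega * rx                 #               v = V + omega*rx
    return u, v

# Example: an arbitrary "fluid" field with a circular solid patch.
n = 64
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
mask = (x - 0.5)**2 + (y - 0.5)**2 < 0.1**2
u, v = np.sin(np.pi * y), np.cos(np.pi * x)
u, v = constrain_rigid(u, v, mask, x, y)
```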
Algorithmic decision support (ADS), using machine-learning-based AI, is becoming a major part of many processes. Organizations introduce ADS to improve decision-making and make optimal use of data, thereby possibly avoiding deviations from the normative "homo economicus" and the biases that characterize human decision-making. A closer look at the development process of ADS systems reveals that ADS itself results from a series of largely unspecified human decisions. These begin with deliberations about which decisions to support with ADS, continue with choices made while developing the ADS, and end with the use of the ADS output for decisions. Finally, conclusions are implemented in organizational settings, often without analyzing the implications of the decision support. The paper explores some issues in developing and using ADS, pointing to behavioral aspects that should be considered when implementing ADS in organizational settings. It points out directions for further research, which is essential for gaining an informed understanding of these processes and their vulnerabilities.
Background and Objective: In recent years, Artificial Intelligence (AI), and in particular Deep Neural Networks (DNNs), have become a relevant research topic in biomedical image segmentation, owing to the availability of ever more datasets along with the establishment of well-known competitions. Despite the popularity of DNN-based segmentation on the research side, these techniques remain almost unused in daily clinical practice, even though they could effectively support physicians during the diagnostic process. Apart from issues related to the explainability of a neural model's predictions, such systems are not integrated into the diagnostic workflow, and a standardization of their use is needed to achieve this goal. Methods: This paper presents IODeep, a new DICOM Information Object Definition (IOD) aimed at storing both the weights and the architecture of a DNN already trained on a particular image dataset, labeled with the acquisition modality, the anatomical region, and the disease under investigation. Results: The IOD architecture is presented, along with a DNN selection algorithm from the PACS server based on the labels outlined above and a simple PACS viewer purposely designed to demonstrate the effectiveness of the DICOM integration; no modifications are required on the PACS server side. A service-based architecture in support of the entire workflow has also been implemented. Conclusion: IODeep ensures full integration of a trained AI model in a DICOM infrastructure, and it also enables scenarios where a trained model is either fine-tuned with hospital data or trained in a federated learning scheme shared by different hospitals. In this way, AI models can be tailored to the real data produced by a radiology ward, thus improving the physician's decision-making process. Source code is freely available at //github.com/CHILab1/IODeep.git
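A sketch of the selection step only (field names below are illustrative, not the actual IODeep attribute tags): given models stored with the three labels described above, pick the one matching the study being viewed.

```python
from dataclasses import dataclass

# Illustrative model descriptor; the real IOD stores weights and
# architecture as DICOM attributes.
@dataclass
class ModelIOD:
    modality: str        # e.g. "MR"
    anatomy: str         # e.g. "BRAIN"
    disease: str         # e.g. "GLIOMA"
    weights_uri: str

def select_model(models, modality, anatomy, disease):
    """Return the first stored model whose labels match the study."""
    matches = [m for m in models
               if (m.modality, m.anatomy, m.disease)
               == (modality, anatomy, disease)]
    return matches[0] if matches else None

store = [ModelIOD("MR", "BRAIN", "GLIOMA", "pacs://models/unet-brain")]
print(select_model(store, "MR", "BRAIN", "GLIOMA"))
```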
In the evolving e-commerce field, recommendation systems crucially shape user experience and engagement. The rise of Consumer-to-Consumer (C2C) recommendation systems, noted for their flexibility and ease of access for consumer sellers, marks a significant trend. However, academic attention remains largely on Business-to-Consumer (B2C) models, and the few existing C2C recommendation datasets are limited in item attributes, user diversity, and scale. The intricacy of C2C recommendation systems is further accentuated by the dual roles users assume as both sellers and buyers, introducing a spectrum of less uniform and more varied inputs. Addressing this, we introduce MerRec, the first large-scale dataset specifically for C2C recommendation, sourced from the Mercari e-commerce platform and covering millions of users and products over 6 months in 2023. MerRec includes not only standard features such as user_id, item_id, and session_id, but also unique elements such as timestamped action types, product taxonomy, and textual product attributes, offering a comprehensive dataset for research. Extensively evaluated across six recommendation tasks, this dataset establishes a new benchmark for the development of advanced recommendation algorithms in real-world scenarios, bridging the gap between academia and industry and propelling the study of C2C recommendation.
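A minimal sketch of how session-structured logs of this kind can be turned into next-item prediction examples (column names follow the abstract; the toy values and action types are made up, and MerRec's real schema should be checked against the release):

```python
import pandas as pd

# Toy events in the spirit of the schema described above.
df = pd.DataFrame({
    "session_id": [1, 1, 1, 2, 2],
    "timestamp":  [10, 20, 30, 5, 8],
    "user_id":    ["u1", "u1", "u1", "u2", "u2"],
    "item_id":    ["a", "b", "c", "d", "e"],
    "action":     ["view", "like", "buy", "view", "view"],
})
df = df.sort_values(["session_id", "timestamp"])

# Turn each session into (history, target) next-item examples.
examples = []
for _, s in df.groupby("session_id"):
    items = s["item_id"].tolist()
    for i in range(1, len(items)):
        examples.append({"history": items[:i], "target": items[i]})
print(examples[0])   # {'history': ['a'], 'target': 'b'}
```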
Relation extraction is an efficient way of mining the extraordinary wealth of human knowledge on the Web. Existing methods, however, rely on domain-specific training data or produce noisy outputs. We focus here on extracting targeted relations from semi-structured web pages, given only a short description of the relation. We present GraphScholarBERT, an open-domain information extraction method based on a joint graph and language model structure. GraphScholarBERT can generalize to previously unseen domains without additional data or training and produces only clean extraction results matched to the search keyword. Experiments show that GraphScholarBERT improves extraction F1 scores by as much as 34.8% compared to previous work in a zero-shot domain and zero-shot website setting.
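This is not GraphScholarBERT itself, but the zero-shot matching idea, scoring candidate fields from a semi-structured page against a short relation description, can be sketched with off-the-shelf sentence embeddings (the library and model name are common choices, not the paper's):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# A relation description and candidate (key, value) pairs from a page.
relation = "the university a researcher is affiliated with"
candidates = [("Affiliation", "MIT"), ("Email", "a@b.edu"), ("Office", "32-D462")]

# Embed the description and the field keys, then keep the best cosine match.
q = model.encode(relation, normalize_embeddings=True)
keys = model.encode([k for k, _ in candidates], normalize_embeddings=True)
best = int(np.argmax(keys @ q))
print(candidates[best])          # expected: ("Affiliation", "MIT")
```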
Advances in artificial intelligence (AI) are fueling a new paradigm of discoveries in the natural sciences. Today, AI has started to advance the natural sciences by improving, accelerating, and enabling our understanding of natural phenomena across a wide range of spatial and temporal scales, giving rise to a new area of research known as AI for science (AI4Science). As an emerging research paradigm, AI4Science is unique in that it is an enormous and highly interdisciplinary area. Thus, a unified and technical treatment of this field is needed yet challenging. This work aims to provide a technically thorough account of a subarea of AI4Science, namely AI for quantum, atomistic, and continuum systems. These areas aim at understanding the physical world from the subatomic (wavefunctions and electron density) and atomic (molecules, proteins, materials, and interactions) to the macro (fluids, climate, and subsurface) scales, and they form an important subarea of AI4Science. A unique advantage of focusing on these areas is that they largely share a common set of challenges, thereby allowing a unified and foundational treatment. A key common challenge is how to capture physics first principles, especially symmetries, in natural systems by deep learning methods. We provide an in-depth yet intuitive account of techniques for achieving equivariance to symmetry transformations. We also discuss other common technical challenges, including explainability, out-of-distribution generalization, knowledge transfer with foundation and large language models, and uncertainty quantification. To facilitate learning and education, we provide categorized lists of resources that we found useful. We strive to be thorough and unified, and we hope this initial effort may spark broader community interest and further efforts to advance AI4Science.
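A minimal sketch of what equivariance means in practice, using an EGNN-style coordinate update (a generic illustration, not a specific method from the text): because the update moves points along pairwise difference vectors with weights that depend only on distances, it commutes with global rotations.

```python
import numpy as np

def update(x):
    """Move each point along its pairwise difference vectors, weighted by
    an isotropic (distance-only) function -- a rotation-equivariant update."""
    diff = x[:, None, :] - x[None, :, :]            # (n, n, 3) differences
    d2 = (diff**2).sum(-1, keepdims=True)           # squared distances
    w = np.exp(-d2)                                  # isotropic edge weights
    return x + (w * diff).sum(axis=1)

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))        # a random orthogonal map
assert np.allclose(update(x @ Q.T), update(x) @ Q.T)  # f(Qx) = Q f(x)
```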