We sometimes need to compute the most significant digits of the product of small integers with a multiplier requiring much storage, such as a large integer (e.g., $5^{100}$) or an irrational number ($\pi$). As long as the integers are sufficiently small, we only need access to the most significant digits of the multiplier. We provide an efficient algorithm that, given a truncated multiplier and a desired number of digits, computes the range of integers for which the truncation suffices.
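The abstract does not reproduce the algorithm itself; as an illustration of the problem being solved, the following Python sketch finds by brute force how large an integer can grow before a truncated multiplier stops producing the correct leading digits (the multiplier, truncation length, and digit count are illustrative choices of ours):

    def truncate(x: int, t: int) -> int:
        """Keep the t most significant decimal digits of x, zeroing the rest."""
        s = str(x)
        return int(s[:t]) * 10 ** (len(s) - t)

    def leading_digits(x: int, d: int) -> str:
        """The d most significant decimal digits of x."""
        return str(x)[:d]

    def max_safe_integer(multiplier: int, truncated: int, d: int) -> int:
        """Scan upward and return the last n before the first disagreement
        between n * truncated and n * multiplier in the top d digits."""
        n = 1
        while leading_digits(n * truncated, d) == leading_digits(n * multiplier, d):
            n += 1
        return n - 1

    M = 5 ** 100                 # the full multiplier (70 decimal digits)
    T = truncate(M, 12)          # a 12-digit truncation of it
    print(max_safe_integer(M, T, d=8))

An efficient algorithm of the kind the abstract promises would compute this bound directly rather than by scanning.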
In this paper, we report an important discovery about nonconforming immersed finite element (IFE) methods that use integral values on edges as degrees of freedom for solving elliptic interface problems. We show that these IFE methods, without penalties, are not guaranteed to converge optimally if the tangential derivative of the exact solution and the jump of the coefficient are nonzero on the interface. A nontrivial counterexample is provided to support our theoretical analysis. To recover the optimal convergence rates, we develop a new nonconforming IFE method with additional terms defined locally on interface edges. The new method is parameter-free, which removes the limitation of the conventional partially penalized IFE method. We show that the IFE basis functions are unisolvent on arbitrary triangles, a case not previously considered in the literature. Furthermore, unlike approaches based on multipoint Taylor expansions, we derive the optimal approximation capabilities of both the Crouzeix-Raviart and the rotated-$Q_1$ IFE spaces via a unified approach that easily handles the case of variable coefficients. Finally, optimal error estimates in both the $H^1$- and $L^2$-norms are proved and confirmed by numerical experiments.
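For context, these methods target the standard second-order elliptic interface problem; the display below is a sketch of the usual setting with homogeneous jump conditions, not a formula quoted from the paper:
\[
  -\nabla\cdot\bigl(\beta\,\nabla u\bigr)=f \quad\text{in } \Omega^{+}\cup\Omega^{-},
  \qquad [u]_{\Gamma}=0,
  \qquad \Bigl[\beta\,\frac{\partial u}{\partial \mathbf{n}}\Bigr]_{\Gamma}=0,
\]
where the interface $\Gamma$ splits the domain $\Omega$ into $\Omega^{+}$ and $\Omega^{-}$ and the coefficient $\beta$ is discontinuous across $\Gamma$. The degeneracy condition in the abstract concerns the tangential derivative of $u$ and the coefficient jump $[\beta]$ along $\Gamma$.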
In this paper we study power series with coefficients equal to a product of a generic sequence and an explicitly given function of a positive parameter expressible in terms of Pochhammer symbols. Four types of such series are treated. We show that logarithmic concavity (convexity) of the generic sequence leads to logarithmic concavity (convexity) of the sum of the series with respect to the argument of the explicitly given function. The logarithmic concavity (convexity) is derived from a stronger property, i.e., positivity (negativity) of the power series coefficients of the so-called generalized Turánian. Applications to special functions such as the generalized hypergeometric function and the Fox-Wright function are also discussed.
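To fix terminology (a standard definition, not specific to this paper): a positive function $\mu\mapsto f(\mu;x)$ is logarithmically concave in the parameter $\mu$ when
\[
  f(\mu;x)^{2}-f(\mu-1;x)\,f(\mu+1;x)\;\ge\;0,
\]
and logarithmically convex when the inequality is reversed. Up to normalization and parameter shifts, the left-hand side regarded as a power series in $x$ is the Turánian in question; positivity (negativity) of all of its coefficients is the stronger property from which log-concavity (log-convexity) of the sum follows.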
In this paper we consider the zeros of the chromatic polynomial of series-parallel graphs. Complementing a result of Sokal showing that these zeros are dense outside the disk $|q-1|\leq1$, we show that they are dense in the half-plane $\Re(q)>3/2$, and we show that there exists an open region $U$ containing the interval $(0,32/27)$ such that $U\setminus\{1\}$ contains no zeros of the chromatic polynomial of any series-parallel graph. We also disprove a conjecture of Sokal by showing that, for each sufficiently large integer $\Delta$, there exists a series-parallel graph in which all vertices but one have degree at most $\Delta$ and whose chromatic polynomial has a zero with real part exceeding $\Delta$.
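As background (standard facts rather than results of the paper): the chromatic polynomial $P(G,q)$ counts the proper $q$-colorings of $G$ and satisfies the deletion-contraction recurrence
\[
  P(G,q)=P(G\setminus e,q)-P(G/e,q),
\]
and two-terminal series-parallel networks are built from a single edge by repeated series and parallel compositions, a recursive structure that makes their chromatic polynomials especially tractable for locating zeros.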
When studying the association between treatment and a clinical outcome, a parametric multivariable model of the conditional outcome expectation is often used to adjust for covariates. The treatment coefficient of the outcome model targets a conditional treatment effect. Model-based standardization is typically applied to average the model predictions over the target covariate distribution, and to generate a covariate-adjusted estimate of the marginal treatment effect. The standard approach to model-based standardization involves maximum-likelihood estimation and use of the non-parametric bootstrap. We introduce a novel, general-purpose, model-based standardization method based on multiple imputation that is easily applicable when the outcome model is a generalized linear model. We term our proposed approach multiple imputation marginalization (MIM). MIM consists of two main stages: the generation of synthetic datasets and their analysis. MIM accommodates a Bayesian statistical framework, which naturally allows for the principled propagation of uncertainty, integrates the analysis into a probabilistic framework, and allows for the incorporation of prior evidence. We conduct a simulation study to benchmark the finite-sample performance of MIM in conjunction with a parametric outcome model. The simulations provide proof-of-principle in scenarios with binary outcomes, continuous-valued covariates, a logistic outcome model and the marginal log odds ratio as the target effect measure. When parametric modeling assumptions hold, MIM yields unbiased estimation in the target covariate distribution, valid coverage rates, and precision and efficiency comparable to those of the standard approach to model-based standardization.
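The baseline that MIM is benchmarked against, maximum-likelihood model-based standardization (g-computation), can be sketched as follows for the abstract's logistic-model setting; this is a generic illustration rather than the MIM procedure, and all names are ours:

    import numpy as np
    import statsmodels.api as sm

    def _design(a, X):
        """Intercept, treatment indicator, covariates."""
        return np.column_stack([np.ones(len(a)), a, X])

    def marginal_log_or(y, a, X):
        """Fit a logistic outcome model, then average its predictions over
        the covariate distribution with treatment set to 1 and to 0."""
        fit = sm.GLM(y, _design(a, X), family=sm.families.Binomial()).fit()
        p1 = fit.predict(_design(np.ones_like(a), X)).mean()
        p0 = fit.predict(_design(np.zeros_like(a), X)).mean()
        return np.log(p1 / (1 - p1)) - np.log(p0 / (1 - p0))

MIM replaces the single maximum-likelihood fit and the bootstrap with the generation of multiple synthetic datasets and their analysis inside a Bayesian model.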
SQL query performance is critical in database applications, and query rewriting is a technique that transforms an original query into an equivalent query with better performance. In a wide range of database-supported systems, a distinctive problem arises: both the application and the database layer are black boxes, and developers need to use their knowledge of the data and domain to rewrite the queries sent from the application to the database for better performance. Unfortunately, existing solutions do not give users enough freedom to express their rewriting needs. To address this problem, we propose QueryBooster, a novel middleware-based service architecture for human-centered query rewriting, in which users can use its expressive and easy-to-use rule language (called VarSQL) to formulate rewriting rules based on their needs. QueryBooster also allows users to express rewriting intentions by providing examples of an original query and its rewritten form; it automatically generalizes them to rewriting rules and suggests high-quality ones. We conduct a user study to show the benefits of VarSQL for formulating rewriting rules. Our experiments on real and synthetic workloads show the effectiveness of the rule-suggesting framework and the significant advantages of using QueryBooster for human-centered query rewriting to improve end-to-end query performance.
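The abstract does not show VarSQL's syntax, so the following Python sketch only conveys the flavor of a pattern-based rewriting rule; the regex encoding, the rule itself, and all names are hypothetical and are not VarSQL:

    import re
    from datetime import date, timedelta

    # Toy rule: replace a non-sargable CAST-on-column predicate with an
    # index-friendly range predicate over the underlying timestamp column.
    PATTERN = re.compile(
        r"CAST\((?P<col>\w+) AS DATE\)\s*=\s*'(?P<day>\d{4}-\d{2}-\d{2})'",
        re.IGNORECASE,
    )

    def rewrite(query: str) -> str:
        def repl(m):
            nxt = date.fromisoformat(m.group("day")) + timedelta(days=1)
            return (f"{m.group('col')} >= '{m.group('day')}' "
                    f"AND {m.group('col')} < '{nxt}'")
        return PATTERN.sub(repl, query)

    print(rewrite("SELECT * FROM orders WHERE CAST(created AS DATE) = '2023-05-01'"))

A rule language like VarSQL would express the pattern and replacement declaratively, with variables standing for table and column names, rather than as handwritten regular expressions.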
Imitation is a key component of human social behavior, and is widely used by both children and adults as a way to navigate uncertain or unfamiliar situations. But in an environment populated by multiple heterogeneous agents pursuing different goals or objectives, indiscriminate imitation is unlikely to be an effective strategy: the imitator must instead determine who is most useful to copy. Many factors likely play into these judgements, depending on context and the availability of information. Here we investigate the hypothesis that these decisions involve inferences about other agents' reward functions. We suggest that people preferentially imitate the behavior of others they deem to have reward functions similar to their own. We further argue that these inferences can be made on the basis of very sparse or indirect data, by leveraging an inductive bias toward positing the existence of different \textit{groups} or \textit{types} of people with similar reward functions, allowing learners to select imitation targets without direct evidence of alignment.
Multi-access Edge Computing (MEC) is one of the enabling technologies of the fifth generation (5G) of mobile networks. MEC enables services with strict latency requirements by bringing computing capabilities close to the users. As with any new technology, the dependability of MEC is one of the aspects that need to be carefully studied. In this paper, we propose a two-level model to compute the availability of a 5G-MEC system. We then use the model to evaluate the availability of a 5G-MEC system under various configurations. The results show that a single redundancy of the 5G-MEC elements leads to an acceptable availability. To reach high availability, the software failure intensity of the management elements of 5G and MEC should be reduced.
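The two-level model itself is not given in the abstract; the sketch below shows only the standard series/parallel availability arithmetic on which such models are built, with made-up element availabilities:

    from math import prod

    def redundant(a: float, n: int) -> float:
        """Availability of n active replicas of an element with
        availability a: the element is up unless all replicas are down."""
        return 1.0 - (1.0 - a) ** n

    def chain(*avail: float) -> float:
        """Availability of components that must all be up at once."""
        return prod(avail)

    a = 0.999                               # hypothetical element availability
    print(chain(*[redundant(a, 1)] * 5))    # five elements, no redundancy
    print(chain(*[redundant(a, 2)] * 5))    # one redundant replica each

With these illustrative numbers, a single redundant replica per element lifts the system availability from roughly 0.995 to about 0.999995, mirroring the abstract's observation that single redundancy already yields an acceptable availability.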
Emerging applications in the IoT domain require ultra-low-power and high-performance end-nodes to deal with complex near-sensor data analytics. Domains such as audio, radar, and Structural Health Monitoring require many computations to be performed in the frequency domain rather than in the time domain. We present ECHOES, a System-on-a-Chip (SoC) composed of a RISC-V core enhanced with fixed- and floating-point digital signal processing (DSP) extensions and a Fast Fourier Transform (FFT) hardware accelerator targeting emerging frequency-domain applications. The proposed SoC features an autonomous I/O engine supporting a wide set of peripherals, including ultra-low-power radars, MEMS, and digital microphones over the I2S protocol with a full-duplex time-division-multiplexing DSP mode, making ECHOES the first open-source SoC offering this functionality and enabling simultaneous communication with up to 16 I/O devices. ECHOES, fabricated in 65 nm CMOS technology, reaches a peak performance of 0.16 GFLOPS and a peak energy efficiency of 9.68 GFLOPS/W on a wide range of floating- and fixed-point general-purpose DSP kernels. The FFT accelerator achieves performance up to 10.16 GOPS with an efficiency of 199.8 GOPS/W, improving performance and efficiency by up to 41.1x and 11.2x, respectively, over a software implementation of this task, which is critical for frequency-domain processing.
When predicting future events, it is common to issue forecasts that are probabilistic, taking the form of probability distributions over the range of possible outcomes. Such forecasts can be evaluated using proper scoring rules. Proper scoring rules condense forecast performance into a single numerical value, allowing competing forecasters to be ranked and compared. To facilitate the use of scoring rules in practical applications, the scoringRules package in R provides popular scoring rules for a wide range of forecast distributions. This paper discusses an extension to the scoringRules package that additionally permits the implementation of popular weighted scoring rules. Weighted scoring rules allow particular outcomes to be targeted during forecast evaluation, recognising that certain outcomes are often of more interest than others when assessing forecast quality. This introduces the potential for very flexible, user-oriented evaluation of probabilistic forecasts. We discuss the theory underlying weighted scoring rules, and describe how they can readily be implemented in practice using scoringRules. Functionality is available for weighted versions of several popular scoring rules, including the logarithmic score, the continuous ranked probability score (CRPS), and the energy score. Two case studies demonstrate this, in which weighted scoring rules are applied to univariate and multivariate probabilistic forecasts in the fields of meteorology and economics.
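scoringRules is an R package, so as a language-neutral illustration of what a weighted scoring rule computes, the following Python sketch evaluates the threshold-weighted CRPS, $\mathrm{twCRPS}(F,y)=\int w(z)\,(F(z)-\mathbf{1}\{y\le z\})^{2}\,\mathrm{d}z$, for an ensemble forecast by numerical integration (the ensemble, weight function, and threshold are made up for the example):

    import numpy as np

    def tw_crps(ensemble, y, w, grid):
        """Threshold-weighted CRPS of an ensemble forecast, integrating
        w(z) * (F(z) - 1{y <= z})^2 over a grid of thresholds z."""
        ens = np.sort(np.asarray(ensemble))
        F = np.searchsorted(ens, grid, side="right") / ens.size  # empirical CDF
        ind = (grid >= y).astype(float)
        return np.trapz(w(grid) * (F - ind) ** 2, grid)

    rng = np.random.default_rng(1)
    ens = rng.normal(0.0, 1.0, size=100)
    grid = np.linspace(-8.0, 8.0, 4001)
    # Weight only outcomes above 1.5, e.g. to focus evaluation on extremes.
    print(tw_crps(ens, y=2.0, w=lambda z: (z > 1.5).astype(float), grid=grid))

Setting $w(z)\equiv 1$ recovers the unweighted CRPS, which is one way to sanity-check such an implementation against the package's own output.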
A recent trend in real-time rendering is the use of new hardware ray tracing capabilities. A distance field representation is often proposed as an alternative when hardware ray tracing is deemed too costly, and the two are seen as competing approaches. In this work, we show that both approaches can work together effectively within a single ray query on modern hardware. We use hardware ray tracing where precision matters most, while avoiding its heavy cost by using a distance field where possible. Although the approach is simple, in our experiments the resulting tracing algorithm overcomes the associated overhead and allows a user-defined middle ground between the performance of distance field traversal and the improved visual quality of hardware ray tracing.
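The abstract does not spell out the handoff criterion, so the sketch below shows only the general structure of such a combined traversal; precise_trace is a stand-in for the hardware ray query, and the thresholds are illustrative:

    import numpy as np

    def hybrid_trace(origin, direction, sdf, precise_trace,
                     handoff=5e-2, t_max=100.0):
        """Sphere-trace a signed distance field and, once the ray is near a
        surface (where a coarse SDF is least trustworthy), hand the short
        remaining segment to a precise ray query."""
        t = 0.0
        while t < t_max:
            d = sdf(origin + t * direction)
            if d < handoff:
                # Near geometry: pay for the precise query only on a
                # short, bounded segment.
                return precise_trace(origin, direction, t, t + 2.0 * handoff)
            t += d          # safe step: no surface closer than d
        return None         # ray escaped the scene

    # Toy usage: a unit sphere at the origin with an analytic "precise" hit.
    sphere = lambda p: np.linalg.norm(p) - 1.0
    def exact_hit(o, d, t0, t1):
        b = np.dot(o, d); c = np.dot(o, o) - 1.0
        disc = b * b - c
        if disc < 0.0:
            return None
        t = -b - np.sqrt(disc)
        return t if t0 <= t <= t1 else None

    print(hybrid_trace(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]),
                       sphere, exact_hit))

Because the precise query is confined to a short segment near the surface, its cost stays bounded while the cheap distance-field steps cover the empty space, which is the middle ground the abstract describes.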