In this paper, we consider the problem of deciding the existence of real solutions to a system of polynomial equations with real coefficients that is invariant under the action of the symmetric group. We construct and analyze a Monte Carlo probabilistic algorithm which solves this problem, under some regularity assumptions on the input, by taking advantage of the symmetry invariance property. The complexity of our algorithm is polynomial in $d^s$, ${{n+d} \choose d}$, and ${{n} \choose {s+1}}$, where $n$ is the number of variables and $d$ is the maximal degree of the $s$ input polynomials defining the real algebraic set under study. In particular, this complexity is polynomial in $n$ when $d$ and $s$ are fixed and is equal to $n^{O(1)}2^n$ when $d=n$.
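To make the input class concrete, here is a minimal sketch (using sympy, with a hypothetical example system of our own, not one from the paper) that checks invariance of a small system under every permutation of its variables; here $n=3$, $s=2$, $d=2$.

```python
from itertools import permutations
from sympy import symbols

# Hypothetical S_3-invariant system: n = 3 variables, s = 2 polynomials, d = 2.
x1, x2, x3 = symbols('x1 x2 x3')
F = [x1**2 + x2**2 + x3**2 - 1,    # sum of squares: symmetric
     x1*x2 + x1*x3 + x2*x3]        # elementary symmetric polynomial

vars_ = (x1, x2, x3)
for sigma in permutations(vars_):
    sub = dict(zip(vars_, sigma))
    # simultaneous substitution applies the permutation of variables
    assert all((f.subs(sub, simultaneous=True) - f).expand() == 0 for f in F)
print("F is invariant under the action of S_3")
```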
Let $F$ be a finite field, let $f$ be a function from $F$ to $F$, and let $a$ be a nonzero element of $F$. The discrete derivative of $f$ in direction $a$ is $\Delta_a f \colon F \to F$ with $(\Delta_a f)(x)=f(x+a)-f(x)$. The differential spectrum of $f$ is the multiset of cardinalities of all the fibers of all the derivatives $\Delta_a f$ as $a$ runs through $F^*$. The function $f$ is almost perfect nonlinear (APN) if the largest cardinality in the differential spectrum is $2$. Almost perfect nonlinear functions are of interest as cryptographic primitives. If $d$ is a positive integer, the power function over $F$ with exponent $d$ is the function $f \colon F \to F$ with $f(x)=x^d$ for every $x \in F$. Only a small number of infinite families of APN power functions are known. In this paper, we re-express the exponents for one such family in a more convenient form. This enables us to give the differential spectrum and, even more, to determine the sizes of the individual fibers of the derivatives.
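As an illustration of these definitions (not of the paper's specific family), the following sketch computes the differential spectrum of a power function over a prime field GF(p) directly from the definition and tests the APN property; restricting to prime fields is our simplifying assumption.

```python
from collections import Counter

def differential_spectrum(p, d):
    """Differential spectrum of f(x) = x^d over the prime field GF(p)."""
    f = [pow(x, d, p) for x in range(p)]
    spectrum = Counter()
    for a in range(1, p):                        # a runs through F*
        fibers = Counter((f[(x + a) % p] - f[x]) % p for x in range(p))
        for b in range(p):                       # include empty fibers (size 0)
            spectrum[fibers.get(b, 0)] += 1
    return spectrum

def is_apn(p, d):
    """f is APN iff the largest fiber of any derivative has cardinality 2."""
    return max(differential_spectrum(p, d)) == 2

print(is_apn(7, 3))   # True: each Delta_a(x^3) = 3a*x^2 + 3a^2*x + a^3 is a
                      # nonconstant quadratic, so every fiber has size <= 2
```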
The Fast Reciprocal Square Root Algorithm is a well-established approximation technique consisting of two stages: first, a coarse approximation is obtained by manipulating the bit pattern of the floating-point argument using integer instructions, and second, the coarse result is refined through one or more steps, traditionally using Newton iteration but alternatively using improved expressions with carefully chosen numerical constants found by other authors. The algorithm was widely used before microprocessors carried built-in hardware support for computing reciprocal square roots. At the time of writing, however, there is in general no hardware acceleration for computing other fixed fractional powers. This paper generalises the algorithm to cater to all rational powers and to support any polynomial degree(s) in the refinement step(s), and, under the assumption of unlimited floating-point precision, provides a procedure which automatically constructs provably optimal constants in all of these cases. It is also shown that, under certain assumptions, the use of monic refinement polynomials yields results which are much better placed with respect to the cost/accuracy tradeoff than those obtained using general polynomials. Further extensions are also analysed, and several new best approximations are given.
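For reference, the special case that the paper generalises ($x^{-1/2}$ in single precision, with one degree-1 Newton refinement and the historical bit-level constant 0x5F3759DF rather than the provably optimal constants constructed in the paper) can be sketched as follows.

```python
import struct

def fast_rsqrt(x):
    """Classic two-stage approximation of x**(-0.5) in single precision."""
    # Stage 1: coarse guess by manipulating the bit pattern as an integer.
    i = struct.unpack('<I', struct.pack('<f', x))[0]
    i = 0x5F3759DF - (i >> 1)             # historical "magic" constant
    y = struct.unpack('<f', struct.pack('<I', i))[0]
    # Stage 2: one Newton refinement step for g(y) = 1/y**2 - x.
    y = y * (1.5 - 0.5 * x * y * y)
    return y

print(fast_rsqrt(4.0))   # ~0.4992, vs. the exact value 0.5
```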
Price discrimination, which refers to the strategy of setting different prices for different customer groups, has been widely used in online retailing. Although it helps boost the collected revenue for online retailers, it can raise serious fairness concerns and may even violate regulations and laws. This paper studies the problem of dynamic discriminatory pricing under fairness constraints. In particular, we consider a finite selling horizon of length $T$ for a single product with two groups of customers. Each group has its own unknown demand function that needs to be learned. In each selling period, the seller determines the price for each group and observes their purchase behavior. While existing literature mainly focuses on maximizing revenue, ensuring fairness among different customers has not been fully explored in the dynamic pricing literature. This work adopts the fairness notion from Cohen et al. (2022). For price fairness, we propose a dynamic pricing policy that is optimal in terms of regret and that enforces the strict price fairness constraint. In contrast to the standard $\sqrt{T}$-type regret in online learning, we show that the optimal regret in our case is $\tilde{O}(T^{4/5})$. We further extend our algorithm to a more general notion of fairness, which includes demand fairness as a special case. To handle this general class, we propose a soft fairness constraint and develop a dynamic pricing policy that achieves $\tilde{O}(T^{4/5})$ regret. We also demonstrate that our algorithmic techniques can be adapted to more general scenarios such as fairness among multiple groups of customers.
In this paper, we propose a method for estimating model parameters from Small-Angle Scattering (SAS) data based on Bayesian inference. Conventional SAS data analyses involve manual parameter adjustment by analysts or optimization using gradient methods. These analysis processes tend to rely on heuristics and may converge to local solutions. Furthermore, it is difficult to evaluate the reliability of the results obtained by conventional analysis methods. Our method addresses these problems by estimating model parameters as probability distributions from SAS data within the framework of Bayesian inference. We evaluate the performance of our method through numerical experiments using artificial data for representative measurement-target models. From the results of the numerical experiments, we show that our method provides not only high accuracy and reliability of estimation, but also perspectives on the transition point of estimability with respect to the measurement time and the lower bound of the angular domain of the measured data.
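As a hedged illustration of the general idea (a generic random-walk Metropolis sampler with a Gaussian likelihood for a standard Guinier scattering model; the paper's actual models, priors, and sampler may differ), one might sample a posterior over a radius-of-gyration parameter as follows.

```python
import numpy as np

def intensity(q, Rg):
    """Guinier model I(q) = exp(-(q*Rg)**2 / 3), a standard small-angle form."""
    return np.exp(-(q * Rg) ** 2 / 3.0)

rng = np.random.default_rng(1)
q = np.linspace(0.005, 0.05, 100)
Rg_true, noise = 30.0, 0.01
data = intensity(q, Rg_true) + noise * rng.normal(size=q.size)

def log_post(Rg):
    if not 1.0 < Rg < 100.0:                     # uniform prior on (1, 100)
        return -np.inf
    resid = data - intensity(q, Rg)
    return -0.5 * np.sum((resid / noise) ** 2)   # Gaussian log-likelihood

# Random-walk Metropolis: yields a posterior distribution over Rg,
# not a single point fit, so reliability can be read off the spread.
Rg, samples = 20.0, []
for _ in range(20000):
    prop = Rg + rng.normal()
    if np.log(rng.random()) < log_post(prop) - log_post(Rg):
        Rg = prop
    samples.append(Rg)
post = np.array(samples[5000:])                  # discard burn-in
print(post.mean(), post.std())                   # estimate and its uncertainty
```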
This paper presents a fast high-order method for the solution of two-dimensional problems of scattering by penetrable inhomogeneous media, with application to high-frequency configurations containing (possibly) discontinuous refractivities. The method relies on a hybrid direct/iterative combination of (1)~a differential volumetric formulation (based on the use of appropriate Chebyshev differentiation matrices enacting the Laplace operator) and (2)~a second-kind boundary integral formulation. The approach enjoys low dispersion and high-order accuracy for smooth refractivities, as well as second-order accuracy (while maintaining low dispersion) in the discontinuous refractivity case. The solution approach proceeds by application of Impedance-to-Impedance (ItI) maps to couple the volumetric and boundary discretizations. The volumetric linear algebra solutions are obtained by means of a multifrontal solver, and the coupling with the boundary integral formulation is achieved via an application of the iterative linear-algebra solver GMRES. In particular, the existence and uniqueness theory presented in this paper provides an affirmative answer to an open question concerning the existence of a uniquely solvable second-kind ItI-based formulation for the overall scattering problem under consideration. Relying on a modestly-demanding scatterer-dependent precomputation stage (requiring in practice a computing cost of the order of $O(N^{\alpha})$ operations, with $\alpha \approx 1.07$, for an $N$-point discretization), together with fast ($O(N)$-cost) single-core runs for each incident field considered, the proposed algorithm can effectively solve scattering problems for large and complex objects possibly containing strong refractivity contrasts and discontinuities.
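As background for the volumetric discretization, here is a minimal sketch of the standard Chebyshev differentiation matrix (the classical construction popularized by Trefethen's cheb routine, not the paper's specific 2D operators), squared to enact a 1D Laplace operator.

```python
import numpy as np

def cheb(n):
    """Differentiation matrix D and grid x on n+1 Chebyshev points."""
    if n == 0:
        return np.zeros((1, 1)), np.ones(1)
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))       # negative-sum trick for the diagonal
    return D, x

D, x = cheb(16)
lap_1d = D @ D                        # 1D second-derivative (Laplace) operator
# Sanity check: (d^2/dx^2) sin(x) = -sin(x); the error is tiny (spectral accuracy).
print(np.max(np.abs(lap_1d @ np.sin(x) + np.sin(x))))
```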
In this work, we study the convergence and performance of nonlinear solvers for the Bidomain equations after decoupling the ordinary and partial differential equations of the cardiac system. Firstly, we provide a rigorous proof of the global convergence of Quasi-Newton methods, such as BFGS, and nonlinear Conjugate-Gradient methods, such as Fletcher--Reeves, for the Bidomain system, by analyzing an auxiliary variational problem under physically reasonable hypotheses. Secondly, we compare several nonlinear Bidomain solvers in terms of execution time, robustness with respect to the data, and parallel scalability. Our findings indicate that Quasi-Newton methods are the best choice for nonlinear Bidomain systems, since they exhibit faster convergence rates compared to standard Newton--Krylov methods, while maintaining robustness and scalability. Furthermore, first-order methods also demonstrate competitiveness and serve as a viable alternative, particularly for matrix-free implementations that are well-suited for GPU computing.
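To make the solver classes concrete, here is a generic toy comparison via scipy on a standard smooth test problem (the Rosenbrock function, a stand-in of our choosing, not the Bidomain system); scipy's "CG" is a nonlinear conjugate-gradient variant, and "Newton-CG" plays the role of a Newton--Krylov method.

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

# Compare a Quasi-Newton method (BFGS), nonlinear CG, and a Newton-Krylov-style
# method (Newton-CG) on the same smooth minimization problem.
x0 = np.full(50, 1.5)
for method in ("BFGS", "CG", "Newton-CG"):
    res = minimize(rosen, x0, jac=rosen_der, method=method,
                   options={"maxiter": 10000})
    print(f"{method:10s} iters={res.nit:5d} f={res.fun:.2e}")
```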
This paper examines Principal Component Analysis (PCA), a seminal method in statistics and machine learning for reducing data dimensionality. PCA is typically computed via the Singular Value Decomposition (SVD), a process that includes the step of centering, that is, the subtraction of the mean location from the data set. We study the influence of this critical yet often ignored or downplayed centering step, and investigate the conditions under which two PCA embeddings, one derived from SVD with centering and the other without, can be viewed as aligned. As part of this analysis, we examine the relationship between the first singular vector and the mean direction, and link this observation to the congruity between the SVDs of the centered and uncentered matrices. Furthermore, we explore, from a spectral-analysis standpoint, the potential implications of omitting centering when performing PCA via SVD. Our investigation emphasizes the importance of understanding and acknowledging the subtleties involved in the computation of PCA, and we believe it offers a valuable contribution to the nuanced understanding of this foundational statistical method.
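The phenomenon at the core of the analysis can be reproduced in a few lines (a toy numpy illustration on our own simulated data, not the paper's formal conditions): when the mean is large relative to the spread, the first right-singular vector of the uncentered matrix essentially points along the mean direction.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8)) + 10.0              # data with a large mean offset

# PCA via SVD *with* centering.
Xc = X - X.mean(axis=0)
Vt_centered = np.linalg.svd(Xc, full_matrices=False)[2]

# SVD *without* centering.
Vt_raw = np.linalg.svd(X, full_matrices=False)[2]

mu = X.mean(axis=0)
mean_dir = mu / np.linalg.norm(mu)
# The first uncentered singular vector nearly coincides with the mean
# direction (absolute inner product close to 1), whereas the leading
# centered components describe the spread around the mean.
print(abs(Vt_raw[0] @ mean_dir))
```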
One of the major open problems in complexity theory is proving super-logarithmic lower bounds on the depth of circuits (i.e., $\mathbf{P}\not\subseteq\mathbf{NC}^{1}$). Karchmer, Raz, and Wigderson (Computational Complexity 5(3/4), 1995) suggested approaching this problem by proving that the depth complexity of a composition of functions $f\diamond g$ is roughly the sum of the depth complexities of $f$ and $g$ (the KRW conjecture). They showed that the validity of this conjecture would imply that $\mathbf{P}\not\subseteq\mathbf{NC}^{1}$. The intuition that underlies the KRW conjecture is that the composition $f\diamond g$ should behave like a "direct-sum problem", in a certain sense, and therefore the depth complexity of $f\diamond g$ should be the sum of the individual depth complexities. Nevertheless, there are two obstacles to turning this intuition into a proof: first, we do not know how to prove that $f\diamond g$ must behave like a direct-sum problem; second, we do not know how to prove that the complexity of the latter direct-sum problem is indeed the sum of the individual complexities. In this work, we focus on the second obstacle. To this end, we study a notion called "strong composition", which is the same as $f\diamond g$ except that it is forced to behave like a direct-sum problem. We prove a variant of the KRW conjecture for strong composition, thus overcoming the above second obstacle. This result demonstrates that the first obstacle above is the crucial barrier toward resolving the KRW conjecture. Along the way, we develop some general techniques that might be of independent interest.
Iterative decoder failures of quantum low density parity check (QLDPC) codes are attributed to substructures in the code's graph, known as trapping sets, as well as to degenerate errors that can arise in quantum codes. Failure-inducing sets are subsets of codeword coordinates that, when initially in error, lead to decoding failure in a trapping set. In this paper, we examine the failure-inducing sets of QLDPC codes under syndrome-based iterative decoding and their connection to absorbing sets in classical LDPC codes.
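As a classical (non-quantum) toy illustration of the concept, the following sketch is our own: a simple bit-flipping decoder stands in for the syndrome-based iterative decoders studied in the paper, and a brute-force search reports small error supports on which decoding fails or miscorrects, i.e., failure-inducing sets of this toy code/decoder pair.

```python
import numpy as np
from itertools import combinations

def bit_flip_decode(H, syndrome, max_iter=50):
    """Toy syndrome-based bit-flipping decoder over GF(2)."""
    e = np.zeros(H.shape[1], dtype=int)
    for _ in range(max_iter):
        s = (H @ e + syndrome) % 2          # residual syndrome
        if not s.any():
            return e, True                  # estimate matches the syndrome
        e[np.argmax(H.T @ s)] ^= 1          # flip bit in most unsatisfied checks
    return e, False

# Brute-force search for failure-inducing sets of weight <= 2 in a tiny
# (arbitrary, illustrative) classical parity-check code.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
for k in (1, 2):
    for support in combinations(range(H.shape[1]), k):
        err = np.zeros(H.shape[1], dtype=int)
        err[list(support)] = 1
        est, ok = bit_flip_decode(H, (H @ err) % 2)
        if not ok or not np.array_equal(est, err):   # non-convergence or miscorrection
            print("failure-inducing set:", support)
```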
Understanding how and why certain communities bear a disproportionate burden of disease is challenging due to the scarcity of data on these communities. Surveys provide a useful avenue for accessing hard-to-reach populations, as many surveys specifically oversample understudied and vulnerable populations. When survey data are used for analysis, it is important to account for the complex survey design that gave rise to the data in order to avoid biased conclusions. The field of Bayesian survey statistics aims to account for such survey design while leveraging the advantages of Bayesian models, which can flexibly handle sparsity through borrowing of information and provide a coherent inferential framework to easily obtain variances for complex models and data types. For these reasons, Bayesian survey methods seem uniquely well-suited for health disparities research, where heterogeneity and sparsity are frequent considerations. This review discusses three main approaches found in the Bayesian survey methodology literature: 1) multilevel regression and post-stratification, 2) weighted pseudolikelihood-based methods, and 3) synthetic population generation. We discuss advantages and disadvantages of each approach, examine recent applications and extensions, and consider how these approaches may be leveraged to improve research in population health equity.
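As a minimal sketch of the second approach, the core ingredient is a weighted (pseudo-)log-likelihood in which each unit's contribution is scaled by its survey weight; here it is maximized rather than embedded in a posterior, and the normal model and made-up weights are our own toy assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.0, size=200)   # survey responses (toy data)
w = rng.uniform(0.5, 3.0, size=200)            # sampling weights (toy values)
w = w * y.size / w.sum()                       # normalize weights to sum to n

def neg_weighted_loglik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    # Pseudolikelihood: each unit's normal log-likelihood times its weight.
    ll = w * (-0.5 * np.log(2 * np.pi) - log_sigma
              - 0.5 * ((y - mu) / sigma) ** 2)
    return -ll.sum()

res = minimize(neg_weighted_loglik, x0=[0.0, 0.0])
print(res.x[0], np.exp(res.x[1]))   # design-weighted estimates of mu and sigma
```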