Functional magnetic resonance imaging analytical workflows are highly flexible, with no definite consensus on how to choose a pipeline. While methods have been developed to explore this analytical space, the relationships between the different pipelines remain poorly understood. We use community detection algorithms to explore the pipeline space and assess its stability across different contexts. We show that there are subsets of pipelines that give similar results, especially those sharing specific parameters (e.g., the number of motion regressors or the software package), with relative stability across groups of participants. By visualizing the differences between these subsets, we describe the effect of pipeline parameters and derive general relationships in the analytical space.
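For intuition, the grouping step described above can be sketched as follows: given a pairwise similarity matrix between pipeline results, connect two pipelines whenever their similarity exceeds a threshold and take connected components as communities. This is a minimal stand-in for the community detection algorithms actually used in the paper; the `communities` function, the threshold rule, and the numbers below are illustrative assumptions.

```python
from collections import deque

def communities(sim, threshold):
    """Partition pipelines into communities: link two pipelines when the
    similarity of their results exceeds `threshold`, then return the
    connected components (a simple stand-in for community detection)."""
    n = len(sim)
    seen, parts = set(), []
    for s in range(n):
        if s in seen:
            continue
        comp, queue = [], deque([s])
        seen.add(s)
        while queue:
            u = queue.popleft()
            comp.append(u)
            for v in range(n):
                if v not in seen and sim[u][v] >= threshold:
                    seen.add(v)
                    queue.append(v)
        parts.append(sorted(comp))
    return parts

# Toy similarity matrix: pipelines 0-1 agree closely, as do 2-3.
sim = [[1.0, 0.9, 0.2, 0.2],
       [0.9, 1.0, 0.2, 0.2],
       [0.2, 0.2, 1.0, 0.85],
       [0.2, 0.2, 0.85, 1.0]]
```

Raising the threshold splits the graph into smaller communities, mirroring how the stability of pipeline subsets can be probed across contexts.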
A modification of Amdahl's law for the case of an increasing number of processor elements in a computer system is considered. The coefficient $k$ linking the accelerations of a parallel and a parallel specialized computer system is determined. The limiting values of the coefficient are investigated and its theoretical maximum is calculated. It is proved that $k > 1$ for any positive increment of processor elements. The obtained formulas are combined into a single method that allows one to determine the maximum theoretical acceleration of a parallel specialized computer system in comparison with the acceleration of a minimal parallel computer system. The method is tested on the Apriori, k-nearest neighbors, CDF 9/7, fast Fourier transform and naive Bayesian classifier algorithms.
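As a quick illustration of the quantities involved, the sketch below evaluates the classical (unmodified) Amdahl's law and the ratio of speedups before and after an increment of processor elements. The function names and the reading of $k$ as this ratio are illustrative assumptions, not the paper's exact construction.

```python
def amdahl_speedup(p, n):
    """Classical Amdahl's law: speedup of a program whose parallelizable
    fraction is p when run on n processor elements."""
    return 1.0 / ((1.0 - p) + p / n)

def speedup_ratio(p, n, dn):
    """Hypothetical coefficient linking the speedup after an increment of
    processor elements (n -> n + dn) to the speedup of the minimal system."""
    return amdahl_speedup(p, n + dn) / amdahl_speedup(p, n)
```

For any parallel fraction $p > 0$ the speedup is strictly increasing in $n$, so the ratio exceeds 1 for every positive increment, consistent with the abstract's claim that $k > 1$.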
We consider a general multivariate model in which the univariate marginal distributions are known up to a parameter vector, and we are interested in estimating that parameter vector without specifying the joint distribution, except for the marginals. If we assume independence between the marginals and maximize the resulting quasi-likelihood, we obtain a consistent but inefficient quasi-maximum likelihood estimator (QMLE). If we assume a parametric copula (other than independence), we obtain a full MLE (FMLE), which is efficient under a correct copula specification but may be biased if the copula is misspecified. Instead, we propose a sieve MLE estimator (SMLE), which improves over the QMLE but does not have the drawbacks of the full MLE. We model the unknown part of the joint distribution using the Bernstein-Kantorovich polynomial copula and assess the resulting improvement over the QMLE and over a misspecified FMLE in terms of relative efficiency and robustness. We derive the asymptotic distribution of the new estimator and show that it reaches the relevant semiparametric efficiency bound. Simulations suggest that the sieve MLE can be almost as efficient as the FMLE relative to the QMLE, provided there is enough dependence between the marginals. We demonstrate the practical value of the new estimator with several applications. First, we apply the SMLE in an insurance context, where we build a flexible semi-parametric claim loss model for a scenario in which one of the variables is censored. As in the simulations, the use of the SMLE leads to tighter parameter estimates. Next, we consider financial risk management examples and show how the use of the SMLE leads to superior Value-at-Risk predictions. The paper comes with an online archive that contains all code and datasets.
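The QMLE baseline described above can be sketched in a few lines: under the independence assumption the quasi-likelihood factorizes, so each marginal is fitted separately even when the data are dependent. The Gaussian marginals, the correlation value, and all numbers below are illustrative assumptions, not the paper's application.

```python
import random
import statistics

random.seed(0)
n = 5000
# Correlated pairs: z2 shares the Gaussian shock z1, so the two
# coordinates are dependent (corr = 0.8), but the QMLE below ignores it.
xs, ys = [], []
for _ in range(n):
    z1 = random.gauss(0, 1)
    z2 = 0.8 * z1 + 0.6 * random.gauss(0, 1)  # still N(0, 1) marginally
    xs.append(1.0 + z1)         # marginal N(1, 1)
    ys.append(-2.0 + 2.0 * z2)  # marginal N(-2, 4)

# Under the (wrong) independence assumption the quasi-likelihood
# factorizes, so each marginal is fitted on its own; for Gaussian
# marginals the QMLE is just the per-margin sample mean and std.
mu_x, sigma_x = statistics.fmean(xs), statistics.pstdev(xs)
mu_y, sigma_y = statistics.fmean(ys), statistics.pstdev(ys)
```

The marginal parameters are recovered consistently despite the ignored dependence, which is exactly the consistency-without-efficiency trade-off the abstract contrasts with the SMLE.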
We present some algorithms that provide useful topological information about curves in surfaces. One of the main algorithms computes the geometric intersection number of two properly embedded 1-manifolds $C_1$ and $C_2$ in a compact orientable surface $S$. The surface $S$ is presented via a triangulation or a handle structure, and the 1-manifolds are given in normal form via their normal coordinates. The running time is bounded above by a polynomial function of the number of triangles in the triangulation (or the number of handles in the handle structure) and the logarithms of the weights of $C_1$ and $C_2$. This algorithm represents an improvement over previous work, since its running time depends polynomially on the size of the triangulation of $S$ and it can deal with closed surfaces, unlike many earlier algorithms. Another algorithm, with similar bounds on its running time, can determine whether $C_1$ and $C_2$ are isotopic. We also present a closely related algorithm that can be used to place a standard 1-manifold into normal form.
In this work, we determine the material parameters of the relaxed micromorphic generalized continuum model for a given periodic microstructure. This is achieved through a least squares fitting of the total energy of the relaxed micromorphic homogeneous continuum to the total energy of the fully-resolved heterogeneous microstructure, governed by classical linear elasticity. The relaxed micromorphic model is a generalized continuum that utilizes the $\Curl$ of a micro-distortion field instead of its full gradient as in the classical micromorphic theory, leading to several advantages and differences. The most crucial advantage is that it operates between two well-defined scales. These scales are determined by linear elasticity with microscopic and macroscopic elasticity tensors, which bound the stiffness of the relaxed micromorphic continuum from above and below, respectively. While the macroscopic elasticity tensor is established a priori through standard periodic first-order homogenization, the microscopic elasticity tensor remains to be determined. Additionally, the characteristic length parameter, associated with the curvature measure, controls the transition between the micro- and macro-scales. Both the microscopic elasticity tensor and the characteristic length parameter are determined here using a computational approach based on the least squares fitting of energies. This process involves the consideration of an adequate number of quadratic deformation modes and different specimen sizes. We conduct a comparative analysis between the least squares fitting results of the relaxed micromorphic model, the fitting of a skew-symmetric micro-distortion field (Cosserat-micropolar model), and the fitting of the classical micromorphic model with two different formulations for the curvature...
We derive bounds on the moduli of the eigenvalues of a special type of matrix rational function using the following techniques: (1) applying the Bauer-Fike theorem to an associated block matrix of the given matrix rational function, (2) associating a real rational function and applying Rouch\'e's theorem to the matrix rational function, and (3) applying a numerical radius inequality to a block matrix associated with the matrix rational function. These bounds are compared when the coefficients are unitary matrices. Numerical examples are given to illustrate the results obtained.
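For reference, the scalar form of the Bauer-Fike theorem states that for a diagonalizable matrix $A = V \Lambda V^{-1}$, every eigenvalue $\mu$ of $A + E$ satisfies $\min_i |\mu - \lambda_i| \le \kappa(V)\,\|E\|$. The sketch below checks this in the simplest setting of a symmetric $2 \times 2$ matrix, where the eigenvector matrix is orthogonal and $\kappa_2(V) = 1$; it illustrates the classical theorem only, not the block-matrix construction used in the paper.

```python
import math

def eig2_sym(a, b, c):
    """Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]]."""
    m = (a + c) / 2.0
    r = math.hypot((a - c) / 2.0, b)
    return m - r, m + r

# A = [[2, 1], [1, 2]] is symmetric (hence normal), so kappa_2(V) = 1
# and Bauer-Fike reads: |mu - lambda_i| <= ||E||_2 for some eigenvalue
# lambda_i of A.
lam = eig2_sym(2.0, 1.0, 2.0)          # eigenvalues of A
mu = eig2_sym(2.0, 1.0 + 0.05, 2.0)    # perturb the off-diagonal by 0.05
# For E = [[0, 0.05], [0.05, 0]] the spectral norm ||E||_2 is 0.05.
shift = max(abs(mu[0] - lam[0]), abs(mu[1] - lam[1]))
```

Here the eigenvalue shift exactly saturates the Bauer-Fike bound, which is the degenerate best case for a normal matrix.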
Understanding fluid movement in multi-pored materials is vital for energy security and physiology. For instance, shale (a geological material) and bone (a biological material) exhibit multiple pore networks. Double porosity/permeability models provide a mechanics-based approach to describe hydrodynamics in such porous materials. However, current theoretical results primarily address the steady-state response, and their counterparts in the transient regime are still wanting. The primary aim of this paper is to fill this knowledge gap. We present three principal properties -- with rigorous mathematical arguments -- that the solutions under the double porosity/permeability model satisfy in the transient regime: backward-in-time uniqueness, reciprocity, and a variational principle. We employ the ``energy method'' -- by exploiting the physical total kinetic energy of the flowing fluid -- to establish the first property and Cauchy-Riemann convolutions to prove the next two. The results reported in this paper -- which qualitatively describe the dynamics of fluid flow in double-pored media -- have (a) theoretical significance, (b) practical applications, and (c) considerable pedagogical value. In particular, these results will benefit practitioners and computational scientists in checking the accuracy of numerical simulators. The backward-in-time uniqueness lays a firm theoretical foundation for pursuing inverse problems in which one recovers the initial conditions from data about the solution at a later instant.
It is known that standard stochastic Galerkin methods encounter challenges when solving partial differential equations with high-dimensional random inputs; these challenges are typically caused by the large number of stochastic basis functions required. It therefore becomes crucial to choose effective basis functions so that the dimension of the stochastic approximation space can be reduced. In this work, we focus on the stochastic Galerkin approximation associated with generalized polynomial chaos (gPC), and explore the gPC expansion based on the analysis of variance (ANOVA) decomposition. A concise form of the gPC expansion is presented for each component function of the ANOVA expansion, and an adaptive ANOVA procedure is proposed to construct the overall stochastic Galerkin system. Numerical results demonstrate the efficiency of the proposed adaptive ANOVA stochastic Galerkin method for both diffusion and Helmholtz problems.
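The dimension reduction at stake can be quantified with standard counting: a total-degree-$p$ gPC basis in $d$ variables has $\binom{d+p}{d}$ functions, while truncating the ANOVA expansion to component functions involving at most $\nu$ variables keeps far fewer. The sketch below performs this counting; the function names are illustrative, and the paper's adaptive selection procedure is not reproduced.

```python
from math import comb

def gpc_dim_total_degree(d, p):
    """Number of gPC basis functions of total degree <= p in d variables."""
    return comb(d + p, d)

def gpc_dim_anova(d, p, nu):
    """Basis size when the gPC expansion is truncated to ANOVA component
    functions involving at most nu variables (interaction order <= nu)."""
    # comb(d, k): choice of the k active variables;
    # comb(p, k): multi-indices with exactly k nonzero entries, degree <= p.
    return sum(comb(d, k) * comb(p, k) for k in range(nu + 1))
```

For $d = 10$ and $p = 3$, the full basis has 286 functions, while keeping only up-to-two-variable ANOVA components leaves 166; taking $\nu = d$ recovers the full count, by the Vandermonde identity.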
Any interactive protocol between a pair of parties can be reliably simulated in the presence of noise with a multiplicative overhead on the number of rounds (Schulman 1996). The reciprocal of the best (least) overhead is called the interactive capacity of the noisy channel. In this work, we present a lower bound on the interactive capacity of the binary erasure channel. Our lower bound improves the best known bound, due to Ben-Yishai et al. (2021), by roughly a factor of 1.75. The improvement comes from a tighter analysis of the correctness of the simulation protocol using error pattern analysis: instead of using the well-known technique of bounding the least number of erasures needed to make the simulation fail, we identify and bound the probability of the specific erasure patterns that cause simulation failure. We remark that error pattern analysis can be useful in other problems involving stochastic noise, such as bounding the interactive capacity of other channels.
Most existing neural network-based approaches for solving stochastic optimal control problems via the associated backward dynamic programming principle rely on the ability to simulate the underlying state variables. However, in some problems this simulation is infeasible, leading to a discretization of the state variable space and the need to train one neural network for each data point. This approach becomes computationally inefficient for large state variable spaces. In this paper, we consider a class of such stochastic optimal control problems and introduce an effective solution based on multitask neural networks. To train our multitask neural network, we introduce a novel scheme that dynamically balances the learning across tasks. Through numerical experiments on real-world derivatives pricing problems, we demonstrate that our method outperforms state-of-the-art approaches.
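One common way to balance learning across tasks dynamically is to weight each task's loss by its relative training rate, so that tasks whose loss has decayed least receive larger weights. The sketch below implements this simple loss-ratio weighting as an illustrative assumption; it is not necessarily the scheme proposed in the paper.

```python
def balance_weights(current_losses, initial_losses):
    """Dynamic task weights from relative training rates: a task whose
    loss has decayed least (largest current/initial ratio) gets the
    largest weight. Weights are normalized to sum to the task count."""
    ratios = [c / i for c, i in zip(current_losses, initial_losses)]
    total = sum(ratios)
    return [len(ratios) * r / total for r in ratios]
```

At each training step the total loss would be the weighted sum of per-task losses; recomputing the weights every few steps keeps slower tasks from being neglected.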
In this work, a Generalized Finite Difference (GFD) scheme is presented for effectively computing the numerical solution of a parabolic-elliptic system modelling a bacterial strain with density-suppressed motility. The GFD method is a meshless method known for its simplicity in solving non-linear boundary value problems over irregular geometries. The paper first introduces the basic elements of the GFD method, and then an explicit-implicit scheme is derived. The convergence of the method is proven under a bound on the time step, and an algorithm is provided for its computational implementation. Finally, some examples are considered, comparing the results obtained with a regular mesh and with an irregular cloud of points.
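The core GFD building block is a least squares fit of a local Taylor expansion over an irregular cloud of points. The one-dimensional sketch below recovers a second derivative from scattered neighbors; the paper's scheme works in higher dimensions, with more Taylor terms and distance-based weights, which are omitted here for brevity.

```python
def gfd_second_derivative(x0, f0, pts):
    """Generalized finite difference estimate of f''(x0) from an irregular
    cloud of 1D points pts = [(x_i, f_i), ...], via a least squares fit of
    the Taylor expansion f_i - f_0 = h_i f' + (h_i^2 / 2) f''."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for x, f in pts:
        h = x - x0
        df = f - f0
        c1, c2 = h, 0.5 * h * h
        a11 += c1 * c1; a12 += c1 * c2; a22 += c2 * c2
        b1 += c1 * df;  b2 += c2 * df
    # Cramer's rule on the 2x2 normal equations; second unknown is f''.
    det = a11 * a22 - a12 * a12
    return (a11 * b2 - a12 * b1) / det

# Irregular cloud around x0 = 0 for f(x) = 3x + x^2, so f''(0) = 2 and
# the quadratic Taylor fit is exact.
cloud = [(x, 3.0 * x + x * x) for x in (-0.3, -0.1, 0.15, 0.4)]
d2 = gfd_second_derivative(0.0, 0.0, cloud)
```

Because the test function is quadratic, the two-term Taylor model is exact and the least squares fit recovers the second derivative to machine precision on this uneven point cloud.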