Though the method-of-moments implementation of the electric-field integral equation plays an important role in computational electromagnetics, it poses many code-verification challenges due to its various sources of numerical error. In this paper, we provide an approach through which we can apply the method of manufactured solutions to isolate and verify the solution-discretization error. We accomplish this by manufacturing both the surface current and the Green's function. Because the resulting equations are poorly conditioned, we reformulate them as a set of constraints for an optimization problem that selects the solution closest to the manufactured solution. We demonstrate the effectiveness of this approach for cases with and without coding errors.
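As a minimal sketch of this reformulation (not the authors' EFIE code), the Python snippet below takes a generic ill-conditioned system Zx = v with a manufactured solution x_m and, rather than solving the system directly, selects the solution of the discretized constraints that is closest to x_m; the matrix, its conditioning, and the dimensions are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ill-conditioned discretization matrix Z (a stand-in for the
# method-of-moments impedance matrix) and a manufactured solution x_m.
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
W, _ = np.linalg.qr(rng.standard_normal((n, n)))
Z = U @ np.diag(np.logspace(0, -12, n)) @ W.T   # condition number ~1e12
x_m = np.sin(np.linspace(0.0, np.pi, n))        # manufactured solution
v = Z @ x_m                                      # manufactured right-hand side

# Naive direct solve: roundoff in the ill-conditioned system pollutes x.
x_naive = np.linalg.solve(Z, v)

# Reformulation: min ||x - x_m||_2  subject to  Z x = v.
# The minimizer is x = x_m + Z^+ (v - Z x_m), i.e. the smallest correction
# to x_m that satisfies the (discretized) constraints; rcond truncates the
# numerically rank-deficient directions.
x_opt = x_m + np.linalg.pinv(Z, rcond=1e-10) @ (v - Z @ x_m)

print("naive error:    ", np.linalg.norm(x_naive - x_m))
print("optimized error:", np.linalg.norm(x_opt - x_m))
```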
An important issue for many economic experiments is how the experimenter can ensure sufficient power for rejecting one or more hypotheses. Here, we apply methods developed mainly within the field of clinical trials for testing multiple hypotheses simultaneously in adaptive, two-stage designs. Our main goal is to illustrate how this approach can be used to improve the power of economic experiments. After briefly introducing the relevant theory, we perform a simulation study supported by the open-source R package asd in order to evaluate the power of several candidate designs. The simulations show that the power to reject at least one hypothesis can be improved while still ensuring strong control of the overall Type I error probability, and without increasing the total sample size and thus the costs of the study. The derived designs are further illustrated by applying them to two real-world data sets from experimental economics.
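The snippet below is a simplified Monte Carlo illustration of this kind of design, not a reproduction of the asd package: two treatments are compared against a control, the better arm is selected after stage one, stage-wise p-values are pooled with an inverse-normal combination test, and the closed testing principle is used for familywise error control. Sample sizes, effect sizes, and combination weights are assumed for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Simplified two-stage design with two treatments vs. control: stage 1 tests
# both arms, the better arm continues to stage 2, and stage-wise p-values are
# pooled with an inverse-normal combination test. Closed testing: H_j is
# rejected iff both the intersection hypothesis (Bonferroni-adjusted stage-1
# p-value) and H_j itself are rejected at level alpha.
alpha, n1, n2, n_sim = 0.025, 50, 50, 20_000
effects = np.array([0.3, 0.2])       # standardized effects of the two arms
w1, w2 = np.sqrt(0.5), np.sqrt(0.5)  # pre-specified combination weights

rejections = 0
for _ in range(n_sim):
    # Stage-1 z-statistics for each treatment-vs-control comparison.
    z1 = effects * np.sqrt(n1 / 2) + rng.standard_normal(2)
    sel = int(np.argmax(z1))         # select the better-performing arm
    # Stage-2 z-statistic for the selected arm only.
    z2 = effects[sel] * np.sqrt(n2 / 2) + rng.standard_normal()
    p1_sel = norm.sf(z1[sel])
    p1_int = min(1.0, 2 * min(norm.sf(z1)))   # Bonferroni for intersection
    p2 = norm.sf(z2)
    # Inverse-normal combination of the stage-wise p-values.
    comb = lambda p1: w1 * norm.isf(p1) + w2 * norm.isf(p2)
    reject_int = comb(p1_int) > norm.isf(alpha)
    reject_sel = comb(p1_sel) > norm.isf(alpha)
    rejections += reject_int and reject_sel   # closed testing principle

print("power to reject at least one hypothesis:", rejections / n_sim)
```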
Omics technologies are powerful tools for analyzing patterns in gene expression data for thousands of genes. Due to a number of systematic variations in experiments, raw gene expression data are often obscured by undesirable technical noise. Various normalization techniques have been designed to remove these non-biological errors prior to any statistical analysis. One of the reasons for normalizing data is the need to recover the covariance matrix used in gene network analysis. In this paper, we introduce a novel normalization technique, called the covariance shift (C-SHIFT) method. This normalization algorithm uses optimization techniques together with the blessing-of-dimensionality philosophy and an energy minimization hypothesis for covariance matrix recovery under additive noise (known in biology as bias). It is thus well suited to the analysis of logarithmic gene expression data. Numerical experiments on synthetic data demonstrate the method's advantage over classical normalization techniques, namely Rank, Quantile, cyclic LOESS (locally estimated scatterplot smoothing), and MAD (median absolute deviation) normalization. We also evaluate the performance of the C-SHIFT algorithm on real biological data.
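To make the additive-noise model concrete, the sketch below (which does not implement C-SHIFT itself) corrupts synthetic log-scale expression data with a per-sample bias and shows how strongly the naive sample covariance is distorted; the sizes, covariance structure, and bias magnitude are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Additive-bias model on the logarithmic scale: each sample (column) of the
# log-expression matrix is shifted by its own technical bias, Y = X + 1 b^T.
p, n = 200, 100                       # genes x samples
C_true = 0.3 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
L = np.linalg.cholesky(C_true)
X = L @ rng.standard_normal((p, n))   # clean log-expression data
b = rng.normal(0.0, 2.0, size=n)      # per-sample additive bias
Y = X + b[np.newaxis, :]              # observed, bias-corrupted data

def cov_err(A):
    """Frobenius distance between a sample covariance and the truth."""
    return np.linalg.norm(np.cov(A) - C_true)

# The bias is shared by all genes within a sample, so the naive covariance
# of Y is systematically inflated; removing each sample's mean across genes
# (a simple centering, not C-SHIFT itself) undoes much of the damage and
# motivates dedicated covariance-recovery methods.
Y_centered = Y - Y.mean(axis=0, keepdims=True)
print("clean data:     ", cov_err(X))
print("biased data:    ", cov_err(Y))
print("sample-centered:", cov_err(Y_centered))
```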
Two crucial factors for accurate numerical simulations of cardiac electromechanics, which are also essential to reproduce the synchronous activity of the heart, are: i) accounting for the interaction between the heart and the circulatory system, which determines the pressure and volume loads in the heart chambers; ii) reconstructing the muscular fiber architecture that drives the electrophysiology signal and the myocardium contraction. In this work, we present a 3D biventricular electromechanical model coupled with a 0D closed-loop model of the whole cardiovascular system that addresses both of these crucial factors. To this end, we introduce a boundary condition for the mechanical problem that accounts for the neglected part of the domain located above the biventricular basal plane and that is consistent with the principles of momentum and energy conservation. We also discuss in detail the coupling conditions between the 3D and the 0D models. We perform electromechanical simulations in physiological conditions using the 3D-0D model and show that our results match the experimental data for relevant mechanical biomarkers available in the literature. Furthermore, we investigate different arrangements of cross-fiber active contraction. We show that an active tension along the sheet direction counteracts the myofiber contraction, while one along the sheet-normal direction enhances the cardiac work. Finally, several myofiber architectures are analyzed. We show that different fiber fields in the septal area and in the transmural wall affect the pumping function of the left ventricle.
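As a minimal example of the 0D side of such a coupling, the sketch below integrates a generic two-element Windkessel (not the authors' closed-loop circulation model): arterial pressure is driven by a prescribed ejection flow, whereas in a 3D-0D coupled model the flow would come from the 3D mechanics and the pressure would be fed back as a boundary load. Parameter values and the flow pulse are illustrative assumptions.

```python
import numpy as np

# Two-element Windkessel, the simplest 0D circulation compartment:
#   C dP/dt = Q_in(t) - P / R,
# with P the arterial pressure, Q_in the flow ejected by the ventricle,
# R the peripheral resistance and C the arterial compliance.
R, C = 1.0, 1.5          # mmHg s/mL, mL/mmHg (illustrative values)
T, dt = 0.8, 1e-3        # heartbeat period and time step [s]
t = np.arange(0.0, 10 * T, dt)

def q_in(time):
    """Half-sine ejection pulse during the first third of each beat."""
    phase = time % T
    return 300.0 * np.sin(np.pi * phase / (T / 3)) if phase < T / 3 else 0.0

P = np.empty_like(t)
P[0] = 80.0              # initial pressure [mmHg]
for k in range(len(t) - 1):   # forward-Euler integration of the 0D model
    P[k + 1] = P[k] + dt * (q_in(t[k]) - P[k] / R) / C

print(f"pressure range over last beat: {P[-int(T/dt):].min():.1f}"
      f" - {P[-int(T/dt):].max():.1f} mmHg")
```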
Support structures are required to successfully build structural parts in the powder bed fusion process of additive manufacturing. In this study, we present a topology optimization method for support structures that improves heat dissipation during the build process. First, we construct a numerical method that obtains the temperature field during the build, modeled as transient heat conduction with a volumetric heat flux. Next, we formulate an optimization problem for maximizing heat dissipation and develop an optimization algorithm incorporating level-set-based topology optimization. The sensitivity of the objective function is derived using the adjoint variable method. Finally, several numerical examples are provided to demonstrate the effectiveness and validity of the proposed method.
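A sketch of the forward problem alone is given below, under illustrative assumptions (uniform grid, periodic lateral boundaries, a crude moving heat source standing in for the layer being fused); the level-set topology optimization itself is not reproduced.

```python
import numpy as np

# Explicit finite-difference sketch of transient heat conduction with a
# volumetric heat flux:  dT/dt = alpha * Laplacian(T) + q / (rho * c_p).
nx = ny = 64
dx = 1e-3                        # grid spacing [m]
alpha = 5e-6                     # thermal diffusivity [m^2/s]
rho_cp = 3e6                     # volumetric heat capacity [J/(m^3 K)]
dt = 0.2 * dx**2 / alpha / 4     # stable explicit step (CFL-limited)

T = np.zeros((ny, nx))           # temperature above ambient [K]
q = np.zeros((ny, nx))

for step in range(2000):
    # Volume heat flux applied in a band that rises over time, a crude
    # stand-in for the layer currently being fused.
    q[:] = 0.0
    layer = min(ny - 1, step // 40)
    q[layer, :] = 5e9             # [W/m^3]

    # Five-point Laplacian; np.roll gives periodic boundaries for brevity.
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / dx**2
    T += dt * (alpha * lap + q / rho_cp)
    T[0, :] = 0.0                 # build plate held at ambient: strengthening
                                  # this dissipation path is the support's job

print("peak temperature rise [K]:", T.max())
```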
This paper considers the numerical treatment of the time-dependent Gross-Pitaevskii equation. In order to conserve the time invariants of the equation as accurately as possible, we propose a Crank-Nicolson-type time discretization that is combined with a suitable generalized finite element discretization in space. The space discretization is based on the technique of Localized Orthogonal Decompositions (LOD) and makes it possible to capture the time invariants with an accuracy of order $\mathcal{O}(H^6)$ with respect to the chosen mesh size $H$. This accuracy is preserved thanks to the conservation properties of the time-stepping method. Furthermore, we prove that the resulting scheme approximates the exact solution in the $L^{\infty}(L^2)$-norm with order $\mathcal{O}(\tau^2 + H^4)$, where $\tau$ denotes the time-step size. The computational efficiency of the method is demonstrated in numerical experiments for a benchmark problem with a known exact solution.
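The sketch below shows the conservative Crank-Nicolson mechanism on a standard periodic finite-difference grid rather than the paper's LOD space discretization: the cubic term is evaluated at the averaged density $(|u^{n+1}|^2 + |u^n|^2)/2$ and the resulting nonlinear system is solved by fixed-point iteration, which preserves the discrete mass. The grid, potential, and parameters are illustrative assumptions.

```python
import numpy as np

# Mass-conserving Crank-Nicolson-type step for the 1D Gross-Pitaevskii
# equation  i u_t = -u_xx + V u + beta |u|^2 u  on a periodic grid.
n, L, beta, tau = 256, 2 * np.pi, 1.0, 1e-3
x = np.linspace(0.0, L, n, endpoint=False)
h = L / n
V = 0.5 * np.cos(x) ** 2

# Periodic second-difference operator (dense for brevity).
D2 = (np.roll(np.eye(n), 1, 0) + np.roll(np.eye(n), -1, 0)
      - 2 * np.eye(n)) / h**2

u = np.exp(1j * x) + 0.1 * np.exp(-(x - np.pi) ** 2)  # initial state

def step(u_old):
    """One CN step; the cubic term uses the conservative average
    (|u_new|^2 + |u_old|^2)/2, resolved by fixed-point iteration."""
    u_new = u_old.copy()
    for _ in range(20):
        rho = 0.5 * (np.abs(u_new) ** 2 + np.abs(u_old) ** 2)
        M = D2 - np.diag(V + beta * rho)
        A = 1j * np.eye(n) / tau + 0.5 * M
        b = (1j * np.eye(n) / tau - 0.5 * M) @ u_old
        u_next = np.linalg.solve(A, b)
        if np.linalg.norm(u_next - u_new) < 1e-12:
            break
        u_new = u_next
    return u_new

mass0 = h * np.sum(np.abs(u) ** 2)
for _ in range(100):
    u = step(u)
print("relative mass drift:", abs(h * np.sum(np.abs(u)**2) - mass0) / mass0)
```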
In this paper, we propose a direct parallel-in-time (PinT) algorithm for time-dependent problems with first- or second-order time derivatives. We use a second-order boundary value method as the time integrator, which leads to a tridiagonal time-discretization matrix. Instead of solving the corresponding all-at-once system iteratively, we diagonalize the time-discretization matrix, which yields a direct parallel implementation across all time levels. A crucial issue for this methodology is how the condition number of the eigenvector matrix $V$ grows as $n$ is increased, where $n$ is the number of time levels. A large condition number leads to a large roundoff error in the diagonalization procedure, which could seriously pollute the numerical accuracy. Based on a novel connection between the characteristic equation and the Chebyshev polynomials, we present explicit formulas for computing $V$ and $V^{-1}$, by which we prove that $\mathrm{Cond}_2(V)=\mathcal{O}(n^{2})$. This implies that the diagonalization process is well-conditioned and the roundoff error grows only moderately as $n$ increases; thus, compared to other direct PinT algorithms, a much larger $n$ can be used to yield satisfactory parallelism. Numerical results obtained on a parallel machine are given to support our findings, where a speedup of over 60 times is achieved with 256 cores.
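The diagonalization mechanism can be sketched generically as below, with a hypothetical tridiagonal time matrix B standing in for the paper's boundary-value-method matrix: after computing B = V diag(d) V^{-1}, the all-at-once system decouples into independent shifted spatial solves, and Cond_2(V) measures the roundoff amplification.

```python
import numpy as np

rng = np.random.default_rng(3)

# Generic diagonalization-based direct PinT solve for the all-at-once system
#   B U + U A^T = F,   i.e.   (B (x) I + I (x) A) vec(U) = vec(F),
# with B an (assumed) tridiagonal time-discretization matrix and A a small
# symmetric spatial operator.
n_t, n_x, tau = 64, 32, 0.01
B = np.diag(np.full(n_t - 1, -1.0), -1) + np.diag(np.full(n_t - 1, 1.0), 1)
B[0, 0], B[-1, -1] = 2.0, 2.0   # hypothetical boundary closures
B /= 2 * tau                     # centered-difference-like stencil
A = (np.diag(np.full(n_x, 2.0)) - np.diag(np.ones(n_x - 1), 1)
     - np.diag(np.ones(n_x - 1), -1))   # 1D Laplacian stencil
F = rng.standard_normal((n_t, n_x))

# Step 1: eigendecomposition of the time matrix, B = V diag(d) V^{-1}.
d, V = np.linalg.eig(B)
print("Cond_2(V):", np.linalg.cond(V))   # roundoff amplification factor

# Step 2: transform the right-hand side across time levels: G = V^{-1} F.
G = np.linalg.solve(V, F)

# Step 3: n_t decoupled spatial solves, the embarrassingly parallel part.
W = np.stack([np.linalg.solve(d[i] * np.eye(n_x) + A, G[i])
              for i in range(n_t)])

# Step 4: transform back, U = V W, and check the all-at-once residual.
U = V @ W
M = np.kron(B, np.eye(n_x)) + np.kron(np.eye(n_t), A)
print("residual:", np.linalg.norm(M @ U.ravel() - F.ravel()))
```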
In this work, we optimally solve the problem of multiplierless design of second-order Infinite Impulse Response filters with a minimum number of adders. Given a frequency specification, we design a stable direct-form filter with hardware-aware fixed-point coefficients that yields a minimal number of adders when all multiplications are replaced by bit shifts and additions. The coefficient design, quantization, and implementation steps, typically conducted independently, are gathered into one global optimization problem, modeled through integer linear programming and efficiently solved using generic solvers. We guarantee the frequency-domain specifications and stability, which, together with the optimal number of adders, will significantly simplify design-space exploration for filter designers. The optimal filters are implemented within the FloPoCo IP core generator and synthesized for Field-Programmable Gate Arrays. With respect to state-of-the-art three-step filter design methods, our one-step design approach achieves, on average, a 42% reduction in the number of lookup tables and a 21% improvement in delay.
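To illustrate what "multiplierless" means here (background only, not the paper's ILP formulation), the snippet below decomposes an integer coefficient into signed powers of two so that the multiplication becomes a short chain of shifts and adders; the example coefficient is arbitrary.

```python
def csd(c: int):
    """Canonical signed-digit decomposition of an integer coefficient:
    returns signed powers of two whose sum equals c. Each term beyond the
    first costs one adder/subtractor when the multiplication is replaced
    by shifts and additions (illustration only; the paper finds the true
    adder-minimal implementation via integer linear programming)."""
    terms, k = [], 0
    while c != 0:
        if c & 1:
            d = 2 - (c & 3)       # digit in {-1, +1}; avoids adjacent nonzeros
            terms.append((d, k))  # contributes d * 2**k
            c -= d
        c //= 2
        k += 1
    return terms

# Example: a quantized coefficient 119 = 0b1110111.
coeff = 119
terms = csd(coeff)
expr = " ".join(f"{'+' if d > 0 else '-'} (x << {k})" for d, k in terms)
print(f"{coeff} * x = {expr}")          # i.e. 128x - 8x - x
print("adders needed:", len(terms) - 1)
assert sum(d * 2**k for d, k in terms) == coeff
```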
We present a novel isogeometric method, namely the Immersed Boundary-Conformal Method (IBCM), that features a layer of discretization conformal to the boundary while employing a simple background mesh for the remaining domain. In this manner, we combine the geometric flexibility of the immersed boundary method with the advantages of a conformal discretization, such as intuitive control of mesh resolution around the boundary, higher accuracy per degree of freedom, automatic satisfaction of interface kinematic conditions, and the ability to strongly impose Dirichlet boundary conditions. In the proposed method, starting with a boundary representation of a geometric model, we extrude it to obtain a corresponding conformal layer. Next, a given background B-spline mesh is cut with the conformal layer, leading to two disconnected regions: an exterior region and an interior region. Depending on the problem of interest, one of the two regions is selected to be coupled with the conformal layer through Nitsche's method. Such a construction involves Boolean operations such as difference and union, which require proper stabilization to deal with arbitrarily cut elements. In this regard, we follow our earlier work on the minimal stabilization method [1]. Finally, we solve several 2D benchmark problems to demonstrate the improved accuracy and expected convergence of IBCM. Two applications involving complex geometries, a spanner model and a fiber-reinforced composite model, are also studied to show the potential of the method. Moreover, we demonstrate the effectiveness of IBCM in an application that exhibits boundary-layer phenomena.
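Since Nitsche's method carries the coupling in IBCM, a minimal 1D illustration may help: the sketch below imposes Dirichlet data weakly on a standard P1 finite element discretization of the Poisson problem, with consistency, symmetry, and penalty terms. The mesh, penalty value, and test problem are illustrative assumptions; none of the 2D immersed machinery is reproduced.

```python
import numpy as np

# Symmetric Nitsche imposition of Dirichlet data for -u'' = f on (0,1),
# with linear finite elements on a uniform mesh and an assumed penalty gamma.
n_el = 64
h = 1.0 / n_el
gamma = 10.0
x = np.linspace(0.0, 1.0, n_el + 1)
f = lambda s: np.pi**2 * np.sin(np.pi * s)     # exact solution sin(pi x)
g0, g1 = 0.0, 0.0                              # Dirichlet data

# Standard stiffness matrix for P1 elements and a lumped load vector.
A = np.zeros((n_el + 1, n_el + 1))
for e in range(n_el):
    A[e:e+2, e:e+2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
b = h * f(x); b[0] *= 0.5; b[-1] *= 0.5

# Nitsche terms at both ends: the boundary flux is the one-sided derivative
# built from the end node i and its neighbor j.
for (i, j, g) in [(0, 1, g0), (n_el, n_el - 1, g1)]:
    A[i, i] -= 2.0 / h          # consistency + symmetry terms
    A[i, j] += 1.0 / h
    A[j, i] += 1.0 / h
    A[i, i] += gamma / h        # penalty term
    b[i] += (gamma - 1.0) / h * g
    b[j] += g / h

u = np.linalg.solve(A, b)
print("max error vs sin(pi x):", np.abs(u - np.sin(np.pi * x)).max())
```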
Poor laryngeal muscle coordination that results in abnormal glottal posturing is believed to be a primary etiologic factor in common voice disorders such as non-phonotraumatic vocal hyperfunction. An imbalance in the activity of antagonistic laryngeal muscles is hypothesized to play a key role in the alteration of normal vocal fold biomechanics that results in the dysphonia associated with such disorders. Current low-order models are ill-suited to testing this hypothesis, since they do not capture the co-contraction of antagonistic laryngeal muscle pairs. To address this limitation, a scheme for controlling a self-sustained triangular body-cover model through intrinsic muscle activation is introduced. The approach builds upon prior efforts and allows for exploring the role of antagonistic muscle pairs in phonation. The proposed scheme is validated through its close agreement with prior studies based on finite element models, excised larynges, and clinical data for both sustained and time-varying vocal gestures. Pilot simulations of abnormal scenarios illustrate that poorly regulated, elevated muscle activity results in more abducted prephonatory posturing, which leads to inefficient phonation and compensatory increases in subglottal pressure to regain loudness. The proposed tool is deemed sufficiently accurate and flexible for future comprehensive investigations of non-phonotraumatic vocal hyperfunction and other disorders of laryngeal motor control.
Deep neural network architectures have traditionally been designed and explored with human expertise in a long-lasting trial-and-error process. This process requires a huge amount of time, expertise, and resources. To address this tedious problem, we propose a novel algorithm to automatically find the optimal hyperparameters of a deep network architecture. We specifically focus on designing neural architectures for the medical image segmentation task. Our proposed method is based on policy gradient reinforcement learning, in which the reward function is a segmentation evaluation utility (i.e., the Dice index). We show the efficacy of the proposed method and its low computational cost in comparison with state-of-the-art medical image segmentation networks. We also present a new architecture design, a densely connected encoder-decoder CNN, as a strong baseline architecture on which to apply the proposed hyperparameter search algorithm. We apply the proposed algorithm to each layer of the baseline architecture. As an application, we train the proposed system on cine cardiac MR images from the Automated Cardiac Diagnosis Challenge (ACDC) at MICCAI 2017. Starting from the baseline segmentation architecture, the resulting network achieves state-of-the-art accuracy without any trial-and-error architecture design or close supervision of hyperparameter changes.
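A skeletal version of such a search loop is sketched below, with one loud caveat: the reward function is a hypothetical stand-in for "train the candidate network and return its Dice index", and the hyperparameter grid, learning rate, and baseline are illustrative assumptions. It shows the REINFORCE update on categorical policies over discrete hyperparameter choices.

```python
import numpy as np

rng = np.random.default_rng(4)

# Discrete hyperparameter choices (hypothetical grid) and one categorical
# policy, parameterized by logits, per hyperparameter.
choices = {"filters": [16, 32, 64, 128], "kernel": [3, 5, 7], "layers": [2, 3, 4, 5]}
names = list(choices)
logits = {k: np.zeros(len(v)) for k, v in choices.items()}
lr, baseline = 0.5, 0.0

def dice_reward(cfg):
    """Hypothetical proxy for 'train the network, return its Dice index';
    here it simply peaks at filters=64, kernel=3, layers=4, with noise."""
    target = {"filters": 64, "kernel": 3, "layers": 4}
    miss = sum(cfg[k] != target[k] for k in names)
    return max(0.0, 0.9 - 0.15 * miss + 0.02 * rng.standard_normal())

for step in range(300):
    # Sample one configuration from the current policies.
    probs = {k: np.exp(l - l.max()) / np.exp(l - l.max()).sum()
             for k, l in logits.items()}
    idx = {k: rng.choice(len(choices[k]), p=probs[k]) for k in names}
    cfg = {k: choices[k][idx[k]] for k in names}
    r = dice_reward(cfg)
    baseline += 0.1 * (r - baseline)        # moving-average baseline
    # REINFORCE: grad of log pi(a) w.r.t. logits is onehot(a) - probs,
    # scaled by the advantage (reward minus baseline).
    for k in names:
        grad = -probs[k]
        grad[idx[k]] += 1.0
        logits[k] += lr * (r - baseline) * grad

best = {k: choices[k][int(np.argmax(logits[k]))] for k in names}
print("most probable configuration:", best)
```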