A new domain decomposition method for Maxwell's equations in conductive media is presented. Using this method, reconstruction algorithms are developed for determining the dielectric permittivity function from time-dependent scattered electric field data. All reconstruction algorithms are based on an optimization approach that seeks a stationary point of the Lagrangian. Adaptive reconstruction algorithms and space mesh refinement indicators are also presented. Our computational tests show qualitative reconstruction of the dielectric permittivity function using an anatomically realistic breast phantom.
The Gaussian graphical model is routinely employed to model the joint distribution of multiple random variables. The graph it induces is not only useful for describing the relationships between random variables but also critical for improving statistical estimation precision. In high-dimensional data analysis, despite an abundant literature on estimating this graph structure, testing the adequacy of its specification at a global level remains severely underdeveloped. To make progress, this paper proposes a novel goodness-of-fit test that is computationally easy and theoretically tractable. Under the null hypothesis, the asymptotic distribution of the proposed test statistic is shown to be a Gumbel distribution. Interestingly, the location parameter of this limiting Gumbel distribution depends on the dependence structure under the null. We further develop a novel consistency-empowered test statistic for the case where the true structure is nested in the postulated structure, obtained by amplifying the noise incurred in estimation. Extensive simulations illustrate that the proposed test procedure has the right size under the null and is powerful under the alternative. As an application, we apply the test to a COVID-19 data set, demonstrating that it can serve as a valuable tool for choosing a graph structure that improves estimation efficiency.
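The general recipe behind such max-type tests with Gumbel calibration can be illustrated on a much simpler null (complete independence, i.e., an empty graph), using the classical limit of Cai and Jiang (2011) for the maximal sample correlation; this is only a sketch of the generic machinery, not the paper's statistic, whose location parameter adapts to the postulated structure.

```python
import numpy as np

def max_corr_test(X):
    """Gumbel-calibrated max-type test of complete independence (sketch).
    Under H0 (i.i.d. Gaussian coordinates, all pairwise correlations zero),
    T = n * max_{i<j} r_ij^2 - 4*log(p) + log(log(p)) converges to a Gumbel
    law with cdf exp(-(8*pi)^{-1/2} * exp(-t/2))."""
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)          # p x p sample correlation matrix
    iu = np.triu_indices(p, k=1)              # strictly upper-triangular pairs
    T = n * np.max(R[iu] ** 2) - 4 * np.log(p) + np.log(np.log(p))
    pval = 1.0 - np.exp(-np.exp(-T / 2) / np.sqrt(8 * np.pi))
    return T, pval

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 50))            # H0 holds: independent columns
_, p0 = max_corr_test(X)
X[:, 1] = X[:, 0] + 0.3 * rng.standard_normal(500)   # inject strong dependence
_, p1 = max_corr_test(X)
```

The test rejects decisively once a single strong pairwise dependence is planted, while the p-value under the null stays non-degenerate.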
In this paper, we consider a fully discrete approximation of an abstract evolution equation, employing a non-conforming spatial approximation and finite differences in time (Rothe-Galerkin method). The main result is the convergence of the discrete solutions to a weak solution of the continuous problem. The result can therefore be interpreted either as a justification of the numerical method or as an alternative way of constructing weak solutions. We formulate the problem in the very general and abstract setting of so-called non-conforming Bochner pseudo-monotone operators, which allows for a unified treatment of several evolution problems. Our abstract results for non-conforming Bochner pseudo-monotone operators establish (weak) convergence upon verifying a few natural assumptions on the operators and on the discretization spaces. Hence, applications and extensions to several other evolution problems can be performed easily. We exemplify the applicability of our approach with several DG schemes for the unsteady $p$-Navier-Stokes problem. The results of some numerical experiments are reported in the final section.
For solving a broad class of nonconvex programming problems on an unbounded constraint set, we provide a self-adaptive step-size strategy that avoids line-search techniques, and we establish the convergence of a generic approach under mild assumptions. Specifically, the objective function need not be convex. Unlike descent line-search algorithms, the method does not require a known Lipschitz constant to determine the initial step size. Its crucial feature is the steady reduction of the step size until a certain condition is fulfilled. In particular, it yields a new gradient projection approach for optimization problems with an unbounded constraint set. The correctness of the proposed method is verified by preliminary results on some computational examples. To demonstrate its effectiveness for large-scale problems, we apply it to machine learning experiments, including supervised feature selection, multivariable logistic regression, and neural networks for classification.
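The idea of shrinking the step size from a local curvature estimate instead of running a line search can be sketched as follows; this is a generic adaptive rule under assumed details (nonnegative-orthant constraint, halving-style shrink), not the authors' exact scheme.

```python
import numpy as np

def proj_nonneg(x):
    # projection onto the (unbounded) nonnegative orthant
    return np.maximum(x, 0.0)

def adaptive_grad_proj(grad, x0, lam0=1.0, iters=400):
    """Gradient projection with a self-adaptively shrinking step size:
    whenever the local Lipschitz estimate ||g - g_prev|| / ||x - x_prev||
    contradicts the current step, the step is reduced, so neither a global
    Lipschitz constant nor a line search is needed."""
    lam = lam0
    x_prev = x0
    x = proj_nonneg(x0 - lam * grad(x0))
    for _ in range(iters):
        g, g_prev = grad(x), grad(x_prev)
        num = np.linalg.norm(x - x_prev)
        den = np.linalg.norm(g - g_prev)
        if den > 0.0 and lam * den > num:
            lam = 0.5 * num / den      # shrink toward half the inverse local curvature
        x_prev, x = x, proj_nonneg(x - lam * g)
    return x

# test problem: min ||Ax - b||^2 subject to x >= 0 (unbounded feasible set)
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
grad = lambda x: 2.0 * A.T @ (A @ x - b)
x = adaptive_grad_proj(grad, np.zeros(5))
```

Because the step size only ever shrinks when the local estimate is violated, no Lipschitz constant enters the algorithm, yet the iterates settle at a projected stationary point.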
We consider the reliable implementation of high-order unfitted finite element methods on Cartesian meshes with hanging nodes for elliptic interface problems. We construct a reliable algorithm that merges small interface elements with their surrounding elements to automatically generate a finite element mesh whose elements are large with respect to both domains. We propose new basis functions for the interface elements to control the growth of the condition number of the stiffness matrix in terms of the finite element approximation order, the number of elements of the mesh, and the interface deviation, which quantifies the mesh resolution of the geometry of the interface. Numerical examples are presented to illustrate the competitive performance of the method.
In many applications, heterogeneous treatment effects on a censored response variable are of primary interest, and it is natural to evaluate the effects at different quantiles (e.g., the median). The large number of potential effect modifiers, the unknown structure of the treatment effects, and the presence of right censoring pose significant challenges. In this paper, we develop a hybrid forest approach, called Hybrid Censored Quantile Regression Forest (HCQRF), to assess heterogeneous effects varying with high-dimensional variables. The hybrid estimation approach takes advantage of both random forests and censored quantile regression. We propose a doubly-weighted estimation procedure that combines a redistribution-of-mass weight, to handle censoring, with an adaptive nearest-neighbor weight derived from the forest, to handle high-dimensional effect functions. We also propose a variable importance decomposition to measure the impact of a variable on the treatment effect function. Extensive simulation studies demonstrate the efficacy and stability of HCQRF as well as the effectiveness of the variable importance decomposition. We apply HCQRF to a clinical trial of colorectal cancer, obtaining insightful estimates of the treatment effects and meaningful variable importance results that confirm the necessity of the decomposition.
Patankar schemes have attracted increasing interest in recent years because they preserve the positivity of the analytical solution of a production-destruction system (PDS) irrespective of the chosen time step size. Although they are now of great interest, for a long time it was not clear what stability properties such schemes have. Recently, a new stability approach based on Lyapunov stability with an extension of the center manifold theorem has been proposed to study the stability properties of positivity-preserving time integrators. In this work, we study the stability properties of the classical modified Patankar--Runge--Kutta schemes (MPRK) and the modified Patankar Deferred Correction (MPDeC) approaches. We prove that most of the considered MPRK schemes are stable for any time step size and compute the stability function of MPDeC. Investigating its properties numerically reveals that most MPDeC schemes are also stable irrespective of the chosen time step size. Finally, we verify our theoretical results with numerical simulations.
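To make the unconditional-positivity mechanism concrete, here is a minimal sketch of the first-order modified Patankar-Euler step for a conservative PDS, the simplest member of the family (the work above analyzes the higher-order MPRK and MPDeC variants). Weighting production terms by $y_j^{n+1}/y_j^n$ and destruction terms by $y_i^{n+1}/y_i^n$ turns the update into a linear solve with an M-matrix, which guarantees positivity and conservation for any step size.

```python
import numpy as np

def mpe_step(y, dt, prod):
    """One modified Patankar-Euler step for a conservative
    production-destruction system y_i' = sum_j (p_ij - d_ij) with
    d_ij = p_ji.  prod(y) returns P with P[i, j] = p_ij(y) >= 0 and
    zero diagonal.  The Patankar weights make the update the linear
    system M y_new = y, solvable unconditionally."""
    P = prod(y)
    D = P.T                                   # conservative PDS: d_ij = p_ji
    n = len(y)
    M = (np.eye(n)
         + dt * np.diag(D.sum(axis=1) / y)    # weighted destruction terms
         - dt * P / y[None, :])               # weighted production terms
    return np.linalg.solve(M, y)

# classic linear test problem: y1' = y2 - 5*y1, y2' = 5*y1 - y2
prod = lambda v: np.array([[0.0, v[1]], [5.0 * v[0], 0.0]])
y = np.array([0.9, 0.1])
for _ in range(20):
    y = mpe_step(y, dt=2.0, prod=prod)        # dt far beyond explicit-Euler stability
```

With dt = 2.0 an explicit Euler step would produce negative concentrations, while the Patankar-weighted step stays positive, conserves the total mass exactly, and approaches the steady state $(1/6, 5/6)$.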
In a mixed generalized linear model, the objective is to learn multiple signals from unlabeled observations: each sample comes from exactly one signal, but it is not known which one. We consider the prototypical problem of estimating two statistically independent signals in a mixed generalized linear model with Gaussian covariates. Spectral methods are a popular class of estimators which output the top two eigenvectors of a suitable data-dependent matrix. However, despite their wide applicability, their design is still obtained via heuristic considerations, and the number of samples $n$ needed to guarantee recovery is super-linear in the signal dimension $d$. In this paper, we develop exact asymptotics for spectral methods in the challenging proportional regime in which $n$ and $d$ grow large and their ratio converges to a finite constant. By doing so, we are able to optimize the design of the spectral method and combine it with a simple linear estimator in order to minimize the estimation error. Our characterization exploits a mix of tools from random matrix theory, free probability, and the theory of approximate message passing algorithms. Numerical simulations for mixed linear regression and phase retrieval display the advantage enabled by our analysis over existing designs of spectral methods.
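A baseline spectral method of the kind being optimized can be sketched for two-component mixed linear regression with the textbook preprocessing $T(y) = y^2$; this is a generic heuristic design, not the optimized preprocessing derived in the work above.

```python
import numpy as np

def spectral_mlr(X, y):
    """Baseline spectral estimator for two-component mixed linear
    regression with preprocessing T(y) = y^2: the top two eigenvectors
    of D = (1/n) * sum_i y_i^2 x_i x_i^T estimate the span of the two
    signals, since for Gaussian x and a balanced mixture
    E[D] = c * I + beta1 beta1^T + beta2 beta2^T."""
    n, _ = X.shape
    D = (X * (y ** 2)[:, None]).T @ X / n
    _, vecs = np.linalg.eigh(D)        # eigenvalues in ascending order
    return vecs[:, -2:]                # eigenvectors of the two largest

# mixed data: each sample is generated by one of two orthogonal signals
rng = np.random.default_rng(0)
d, n = 10, 20000
b1 = np.zeros(d); b1[0] = 2.0
b2 = np.zeros(d); b2[1] = 2.0
X = rng.standard_normal((n, d))
labels = rng.integers(0, 2, size=n)    # hidden component of each sample
y = np.where(labels == 0, X @ b1, X @ b2) + 0.1 * rng.standard_normal(n)
V = spectral_mlr(X, y)                 # d x 2 orthonormal basis estimate
```

In this well-separated toy setting the estimated two-dimensional subspace captures almost all of both signals; the proportional-regime analysis concerns precisely how to choose the preprocessing when $n/d$ is only a constant.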
Assignment mechanisms for many-to-one matching markets with preferences revolve around the key concept of stability. Using school choice as our matching market application, we introduce the problem of jointly allocating a school capacity expansion and finding the best stable allocation for the students in the expanded market. We analyze the problem theoretically, focusing on the trade-off behind the multiplicity of student-optimal assignments, the incentive properties, and the problem's complexity. Due to the impossibility of efficiently solving the problem with classical methods, we generalize existing mathematical programming formulations of stability constraints to our setting, most of which result in integer quadratically-constrained programs. In addition, we propose a novel mixed-integer linear programming formulation that is exponentially large in the problem size. We show that its stability constraints can be separated in linear time, leading to an effective cutting-plane method. We evaluate the performance of our approaches in a detailed computational study, and we find that our cutting-plane method outperforms mixed-integer programming solvers applied to the formulations obtained by extending existing approaches. We also propose two heuristics that are effective for large instances of the problem. Finally, we use data from the Chilean school choice system to demonstrate the impact of capacity planning under stability conditions. Our results show that each additional school seat can benefit multiple students. Moreover, our methodology can prioritize the assignment of previously unassigned students or improve the assignment of several students through improvement chains. These insights empower the decision-maker in tuning the matching algorithm to provide a fair application-oriented solution.
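Separating a violated stability constraint amounts to finding a blocking pair of the current matching. A hypothetical plain-Python sketch of such a separation oracle, under an assumed dictionary-based data layout (none of these names come from the paper), might look like:

```python
def blocking_pairs(match, stu_pref, sch_rank, cap):
    """Find all blocking pairs of a many-to-one matching; each one
    corresponds to a violated stability constraint, so this doubles as a
    separation oracle in a cutting-plane method.  match[s] is student s's
    school, stu_pref[s] lists s's acceptable schools in decreasing
    preference, sch_rank[c][s] is school c's rank of s (lower is better),
    and cap[c] is c's (possibly expanded) capacity."""
    roster = {c: [] for c in cap}
    for s, c in match.items():
        roster[c].append(s)
    # worst admitted student's rank at each school (None if empty)
    worst = {c: max((sch_rank[c][s] for s in roster[c]), default=None)
             for c in cap}
    blocks = []
    for s, prefs in stu_pref.items():
        for c in prefs:
            if c == match[s]:
                break                          # s prefers its current school to the rest
            if len(roster[c]) < cap[c] or (worst[c] is not None
                                           and sch_rank[c][s] < worst[c]):
                blocks.append((s, c))          # (s, c) blocks the matching
    return blocks
```

Each student-school pair is inspected at most once, which is the intuition behind linear-time separation: a stable matching yields no pairs, while swapping two students against their preferences immediately produces one.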
Unsupervised domain adaptation has recently emerged as an effective paradigm for generalizing deep neural networks to new target domains. However, there is still enormous potential to be tapped before fully supervised performance is reached. In this paper, we present a novel active learning strategy to assist knowledge transfer in the target domain, dubbed active domain adaptation. We start from the observation that energy-based models exhibit free energy biases when training (source) and test (target) data come from different distributions. Inspired by this inherent mechanism, we empirically show that a simple yet efficient energy-based sampling strategy selects more valuable target samples than existing approaches that require particular architectures or distance computations. Our algorithm, Energy-based Active Domain Adaptation (EADA), queries groups of target data that incorporate both domain characteristics and instance uncertainty into every selection round. Meanwhile, by compactly aligning the free energy of the target data around the source domain via a regularization term, the domain gap can be implicitly diminished. Through extensive experiments, we show that EADA surpasses state-of-the-art methods on well-known challenging benchmarks with substantial improvements, making it a useful option in the open world. Code is available at //github.com/BIT-DA/EADA.
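The free energy of a classifier's logits, $F(x) = -T \log\sum_k e^{f_k(x)/T}$, is the quantity behind this bias: samples poorly covered by the source model tend to receive higher free energy. The sketch below shows only this domain-characteristic scoring step (EADA additionally folds in instance uncertainty); the function names are illustrative, not from the released code.

```python
import numpy as np

def free_energy(logits, T=1.0):
    """Free energy F(x) = -T * logsumexp(f(x) / T) of classifier logits,
    computed with the max-shift trick for numerical stability.  Confident,
    source-like samples get low (very negative) free energy; samples far
    from the training distribution get higher free energy."""
    z = logits / T
    m = z.max(axis=1, keepdims=True)
    return -T * (m.squeeze(1) + np.log(np.exp(z - m).sum(axis=1)))

def select_for_labeling(logits, k):
    """Active-selection step: pick the k samples with the highest free
    energy as annotation candidates."""
    return np.argsort(-free_energy(logits))[:k]
```

For instance, a sample with confident logits like `[10, 0, 0]` has free energy near -10, while an ambiguous sample with near-zero logits sits near $-\log K$ and is selected first.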
Attributed graph clustering is challenging as it requires joint modelling of graph structures and node attributes. Recent progress on graph convolutional networks has proved that graph convolution is effective in combining structural and content information, and several recent methods based on it have achieved promising clustering performance on some real attributed networks. However, there is limited understanding of how graph convolution affects clustering performance and how to properly use it to optimize performance for different graphs. Existing methods essentially use graph convolution of a fixed and low order that only takes into account neighbours within a few hops of each node, which underutilizes node relations and ignores the diversity of graphs. In this paper, we propose an adaptive graph convolution method for attributed graph clustering that exploits high-order graph convolution to capture global cluster structure and adaptively selects the appropriate order for different graphs. We establish the validity of our method by theoretical analysis and extensive experiments on benchmark datasets. Empirical results show that our method compares favourably with state-of-the-art methods.
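The high-order graph convolution at the heart of this approach can be sketched as repeated application of a low-pass filter $G = I - L_s/2$ built from the symmetric normalized Laplacian; the order-selection heuristic of the method is omitted here, and the fixed order below is purely illustrative.

```python
import numpy as np

def high_order_filter(A, X, k):
    """Apply the k-order low-pass graph filter X_bar = (I - L_s/2)^k X,
    where L_s is the symmetric normalized Laplacian of adjacency A with
    self-loops added.  Larger k aggregates information from farther
    neighbourhoods, smoothing node features toward the global cluster
    structure before a standard clustering step (e.g., spectral clustering)."""
    n = len(A)
    A = A + np.eye(n)                          # add self-loops
    d = A.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(d))
    G = np.eye(n) - 0.5 * (np.eye(n) - Dinv @ A @ Dinv)   # I - L_s/2
    Xb = X.copy()
    for _ in range(k):
        Xb = G @ Xb
    return Xb

# toy graph: two disconnected triangles; high-order filtering drives the
# features of each triangle's nodes together, exposing the two clusters
A = np.zeros((6, 6))
A[:3, :3] = 1.0
A[3:, 3:] = 1.0
np.fill_diagonal(A, 0.0)
X = np.random.default_rng(0).standard_normal((6, 4))
Xb = high_order_filter(A, X, k=10)
```

Because the filter's spectrum lies in [0, 1] with the value 1 only on the smooth (within-cluster constant) components, increasing the order k suppresses within-cluster feature variation while preserving the between-cluster differences that a clustering step then exploits.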