
A multiphysics data-analytic platform is established for imaging poroelastic interfaces of finite permeability (e.g., hydraulic fractures) from elastic waveforms and/or acoustic pore-pressure measurements. This is accomplished through recent advances in the design of non-iterative sampling methods for inverse scattering. The direct problem is formulated via the Biot equations in the frequency domain, where a network of discontinuities is illuminated by a set of total body forces and fluid volumetric sources, while the induced (acoustic and elastic) scattered waves are monitored in an arbitrary near-field configuration. A thin-layer approximation is deployed to capture the rough and multiphase nature of interfaces whose spatially varying hydro-mechanical properties are a priori unknown. In this setting, the well-posedness analysis of the forward problem yields the admissibility conditions for the contact parameters. In light of these conditions, the poroelastic scattering operator and its first and second factorizations are introduced, and their mathematical properties are carefully examined. It is shown that the non-selfadjoint nature of the Biot system leads to an intrinsically asymmetric factorization of the scattering operator, which may be symmetrized in certain limits. These results furnish a robust framework for the systematic design of regularized, convex cost functionals whose minimizers underpin the multiphysics imaging indicators. The proposed solution is implemented synthetically, with application to the spatiotemporal reconstruction of hydraulic fracture networks via deep-well measurements.
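
As a schematic illustration of the asymmetry noted above (the operator symbols are invented placeholders, not the paper's notation): in factorization-style sampling methods the data operator typically factors through the interface, and non-selfadjointness makes the outer factors differ.

```latex
% Schematic only: $\Lambda$, $G$, $\tilde{G}$, $T$ are illustrative
% placeholders, not the paper's notation. First factorization -- the
% scattering (data) operator factors through the interface with distinct
% left and right factors due to the non-selfadjoint Biot system:
\[
  \Lambda \;=\; \tilde{G}\, T\, G^{*}, \qquad \tilde{G} \neq G \ \text{in general},
\]
% so the factorization is intrinsically asymmetric; only in limits where
% the system becomes selfadjoint does it symmetrize:
\[
  \Lambda \;=\; G\, T\, G^{*}.
\]
```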

Related content

The classical Langevin Monte Carlo (LMC) method seeks i.i.d. samples from a target distribution by descending along the gradient of its log-density. It is popular partly due to its fast convergence rate. However, the numerical cost can be high because the gradient is sometimes hard to obtain. One approach to eliminating the gradient computation is to employ the concept of an "ensemble," where a large number of particles are evolved together so that neighboring particles provide gradient information to one another. In this article, we discuss two algorithms that integrate this ensemble feature into LMC, and their associated properties. Our findings have two sides: 1. By directly surrogating the gradient with the ensemble approximation, we develop Ensemble Langevin Monte Carlo. We show that this method is unstable due to a potentially small denominator that induces high variance, and we provide a counterexample that exhibits this instability explicitly. 2. We then change strategy and apply the ensemble approximation to the gradient only in a constrained manner, so as to eliminate the unstable points. The resulting algorithm is termed Constrained Ensemble Langevin Monte Carlo. We show that, with proper tuning, the surrogation takes place often enough to bring reasonable numerical savings, while the induced error remains low enough to maintain the fast convergence rate, up to a controllable discretization and ensemble error. This combination of ensemble methods and LMC sheds light on the design of gradient-free algorithms that produce i.i.d. samples almost exponentially fast.
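
To make the idea concrete, here is a minimal numpy sketch of LMC with a generic gradient-free ensemble surrogate: each particle fits a local linear model of potential differences against particle offsets by least squares. This surrogate is a stand-in for "neighboring particles provide gradient information," not the specific construction analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

U = lambda x: 0.5 * np.dot(x, x)        # potential of a standard Gaussian target

def lmc_step(x, h, grad):
    """Plain Langevin Monte Carlo update (exact gradient available)."""
    return x - h * grad(x) + np.sqrt(2 * h) * rng.standard_normal(x.shape)

def ensemble_grad(X, i):
    """Gradient surrogate at particle i: least-squares fit of U(x_j) - U(x_i)
    against x_j - x_i over the ensemble. A generic stand-in, NOT the paper's
    exact construction."""
    dX = X - X[i]                                     # (J, d) offsets
    dU = np.array([U(x) for x in X]) - U(X[i])        # (J,) potential gaps
    g, *_ = np.linalg.lstsq(dX, dU, rcond=None)       # local linear model
    return g

J, d, h = 64, 2, 0.05
x = rng.standard_normal(d)
for _ in range(500):                                  # reference: plain LMC
    x = lmc_step(x, h, lambda z: z)                   # grad U(z) = z here

X = rng.standard_normal((J, d)) * 3.0                 # over-dispersed ensemble
for _ in range(500):                                  # gradient-free version
    G = np.stack([ensemble_grad(X, i) for i in range(J)])
    X = X - h * G + np.sqrt(2 * h) * rng.standard_normal(X.shape)
print("ensemble mean (near 0):", X.mean(axis=0))
```

For the quadratic potential the least-squares fit equals the true gradient plus a bias of the order of the ensemble spread, a toy analogue of the induced-error/numerical-saving tradeoff the abstract describes; the instability the paper identifies appears here as near-rank-deficiency of the offset matrix when particles cluster.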

We present a new algorithm for the approximate near neighbor problem that combines classical ideas from group testing with locality-sensitive hashing (LSH). We reduce the near neighbor search problem to a group testing problem by designating neighbors as "positives," non-neighbors as "negatives," and approximate membership queries as group tests. We instantiate this framework using distance-sensitive Bloom Filters to Identify Near-Neighbor Groups (FLINNG). We prove that FLINNG has sub-linear query time and show that our algorithm comes with a variety of practical advantages. For example, FLINNG can be constructed in a single pass through the data, consists entirely of efficient integer operations, and does not require any distance computations. We conduct large-scale experiments on high-dimensional search tasks such as genome search, URL similarity search, and embedding search over the massive YFCC100M dataset. In our comparison with leading algorithms such as HNSW and FAISS, we find that FLINNG can provide up to a 10x query speedup with substantially smaller indexing time and memory.
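
A toy numpy sketch of the group-testing reduction described above: points are randomly partitioned into groups over several repetitions, each group carries a cheap approximate membership test (here a bank of SimHash signatures as a stand-in for the distance-sensitive Bloom filters FLINNG actually uses), and a query reports the points whose every containing group tests positive. All names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def simhash(X, planes):
    """Signed random projections -> binary codes (a cheap LSH stand-in
    for distance-sensitive Bloom filters)."""
    return (X @ planes.T > 0).astype(np.uint8)

class GroupTestIndex:
    def __init__(self, X, reps=4, groups=32, bits=16):
        n, dim = X.shape
        self.planes = rng.standard_normal((bits, dim))
        codes = simhash(X, self.planes)
        self.reps = []
        for _ in range(reps):
            assign = rng.integers(0, groups, size=n)      # random partition
            # each group stores its members' codes (the "group filter")
            tables = [set(map(bytes, codes[assign == g])) for g in range(groups)]
            self.reps.append((assign, tables))
        self.n = n

    def query(self, q):
        qc = bytes(simhash(q[None, :], self.planes)[0])
        alive = np.ones(self.n, dtype=bool)
        for assign, tables in self.reps:
            positive = np.array([qc in tables[g] for g in range(len(tables))])
            alive &= positive[assign]     # keep points whose group tests positive
        return np.flatnonzero(alive)      # candidate near neighbors

X = rng.standard_normal((1000, 32))
idx = GroupTestIndex(X)
print(idx.query(X[0]))                    # contains 0, plus few false positives
```

Note how the index mirrors the abstract's claims: it is built in a single pass, the query uses only integer/bit operations, and no distance computation ever occurs.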

The Ensemble Kalman inversion (EKI), proposed by Iglesias et al. for the solution of Bayesian inverse problems of the type $y = A u^\dagger + \varepsilon$, with $u^\dagger$ an unknown parameter and $y$ a given datum, is a powerful tool usually derived from a sequential Monte Carlo point of view. It describes the dynamics of an ensemble of particles $\{u^j(t)\}_{j=1}^J$, whose initial empirical measure is sampled from the prior, evolving over an artificial time $t$ towards an approximate solution of the inverse problem. Using spectral techniques, we provide a complete description of the deterministic dynamics of EKI and their asymptotic behavior in parameter space. In particular, we analyze the dynamics of deterministic EKI, averaged quantities of stochastic EKI, and mean-field EKI. We show that in the linear Gaussian regime, the Bayesian posterior can only be recovered in the mean-field limit, not with finite sample sizes or deterministic EKI. Furthermore, we show that, even in the deterministic case, residuals in parameter space do not decrease monotonically in the Euclidean norm, and we suggest a problem-adapted norm in which monotonicity can be proved. Finally, we derive a system of ordinary differential equations governing the spectrum and eigenvectors of the covariance matrix.
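
A minimal numpy sketch of the deterministic EKI dynamics for a linear model, integrating the preconditioned gradient flow $\dot u^j = -C(u)\, A^\top \Gamma^{-1}(A u^j - y)$ with forward Euler; the toy problem and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

d, m, J = 5, 3, 20
A = rng.standard_normal((m, d))           # linear forward operator
u_true = rng.standard_normal(d)
y = A @ u_true                            # noise-free datum for simplicity
Gamma_inv = np.eye(m)                     # inverse noise covariance

U = rng.standard_normal((J, d))           # ensemble sampled from the prior
dt = 0.02
for _ in range(1000):
    mean = U.mean(axis=0)
    C = (U - mean).T @ (U - mean) / J     # empirical covariance C(u)
    resid = A @ U.T - y[:, None]          # A u_j - y, shape (m, J)
    U = U - dt * (C @ A.T @ Gamma_inv @ resid).T   # preconditioned flow
print("data misfits:", np.linalg.norm(A @ U.T - y[:, None], axis=0))
```

The data misfit shrinks along the flow, but, consistent with the abstract, the parameter-space residuals $\|u^j - u^\dagger\|$ need not decrease monotonically in the Euclidean norm.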

We introduce two data completion algorithms for limited-aperture problems in inverse acoustic scattering. Both algorithms are independent of the topological and physical properties of the unknown scatterers. The main idea is to relate the limited-aperture data to the full-aperture data via the prolate matrix. The data completion algorithms are simple and fast, since only an approximate inversion of the prolate matrix is involved. We then combine the data completion algorithms with imaging methods, such as the factorization method and the direct sampling method, for object reconstruction. A variety of numerical examples are presented to illustrate the effectiveness and robustness of the proposed algorithms.
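
A small numpy sketch of the core linear-algebra step: forming the prolate matrix $P_{jk} = \sin(2\pi W (j-k)) / (\pi (j-k))$ (with diagonal $2W$) and applying a spectral-truncation pseudo-inverse. The model relation `d_limited = P @ d_full` and the bandwidth/cutoff values are simplifying assumptions for illustration, not the paper's exact setup.

```python
import numpy as np

def prolate_matrix(N, W):
    """Prolate matrix: P[j, k] = sin(2*pi*W*(j-k)) / (pi*(j-k)), diag = 2W."""
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    diff = j - k
    safe = np.where(diff == 0, 1, diff)           # guard the 0/0 diagonal
    return np.where(diff == 0, 2.0 * W,
                    np.sin(2 * np.pi * W * safe) / (np.pi * safe))

def truncated_inverse(P, tol=1e-6):
    """Approximate inverse keeping eigenvalues above tol; the prolate
    spectrum has a sharp plateau near 1 and a cliff toward 0."""
    lam, V = np.linalg.eigh(P)
    keep = lam > tol
    return (V[:, keep] / lam[keep]) @ V[:, keep].T

N, W = 64, 0.25
P = prolate_matrix(N, W)
rng = np.random.default_rng(3)
d_full = rng.standard_normal(N)
d_limited = P @ d_full                    # hypothetical limited-aperture model
d_rec = truncated_inverse(P) @ d_limited  # the data completion step
print("relative error on recoverable part:",
      np.linalg.norm(P @ (d_rec - d_full)) / np.linalg.norm(P @ d_full))
```

Only the components in the retained eigenspace are recoverable, which is why the error is measured through $P$; this is also why the approximate (regularized) inversion is cheap and stable.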

In this paper, we are interested in nonparametric kernel estimation of a generalized regression function, including the conditional cumulative distribution and conditional quantile functions, based on an incomplete sample $(X_t, Y_t, \zeta_t)_{t \in \mathbb{R}^+}$ of copies of a continuous-time stationary ergodic process $(X, Y, \zeta)$. The predictor $X$ takes values in an infinite-dimensional space, whereas the real-valued response $Y$ is observed when $\zeta = 1$ and missing whenever $\zeta = 0$. Pointwise and uniform consistency (with rates) of these estimators, as well as a central limit theorem, are established. The conditional bias and asymptotic quadratic error are also provided. Asymptotic and bootstrap-based confidence intervals for the generalized regression function are discussed as well. A first simulation study compares discrete-time with continuous-time estimation. A second simulation discusses the selection of the optimal sampling mesh in the continuous-time case. Finally, it is worth noting that our results are stated under an ergodicity assumption alone, without any classical mixing conditions.
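
A minimal numpy sketch of the available-case Nadaraya-Watson-type estimator such frameworks build on: only observed responses ($\zeta_t = 1$) contribute, kernel-weighted by a (semi-)metric on the functional predictor. The kernel, bandwidth, metric, and toy data are illustrative choices, not the paper's.

```python
import numpy as np

def nw_estimate(x0, X, Y, zeta, h, metric=None):
    """Nadaraya-Watson regression with missing-at-random responses:
    only pairs with zeta_t = 1 contribute (available-case estimator)."""
    if metric is None:                        # L2 distance between curves
        metric = lambda a, b: np.linalg.norm(a - b)
    d = np.array([metric(x0, x) for x in X])
    w = zeta * np.exp(-0.5 * (d / h) ** 2)    # Gaussian kernel weights
    return np.sum(w * Y) / np.sum(w)

rng = np.random.default_rng(4)
n, p = 500, 50
t = np.linspace(0, 1, p)
X = rng.standard_normal((n, 1)) * np.sin(2 * np.pi * t)  # functional predictors
m = lambda x: np.trapz(x ** 2, t)                        # true regression operator
Y = np.array([m(x) for x in X]) + 0.1 * rng.standard_normal(n)
zeta = rng.random(n) < 0.7                               # ~30% responses missing
x0 = 1.5 * np.sin(2 * np.pi * t)
print("estimate:", nw_estimate(x0, X, Y, zeta, h=2.0), "truth:", m(x0))
```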

Federated learning (FL) is an emerging, privacy-preserving machine learning paradigm that has drawn tremendous attention in both academia and industry. A unique characteristic of FL is heterogeneity, which resides in the varying hardware specifications and dynamic states of the participating devices. In theory, heterogeneity can exert a huge influence on the FL training process, e.g., rendering a device unavailable for training or unable to upload its model updates. Unfortunately, these impacts have never been systematically studied and quantified in the existing FL literature. In this paper, we carry out the first empirical study to characterize the impacts of heterogeneity in FL. We collect large-scale data from 136k smartphones that faithfully reflect heterogeneity in real-world settings, and we build a heterogeneity-aware FL platform that complies with the standard FL protocol while taking heterogeneity into consideration. Based on the data and the platform, we conduct extensive experiments comparing the performance of state-of-the-art FL algorithms under heterogeneity-aware and heterogeneity-unaware settings. The results show that heterogeneity causes non-trivial performance degradation in FL, including up to a 9.2% accuracy drop, a 2.32x increase in training time, and undermined fairness. Furthermore, analyzing potential impact factors, we find that device failure and participant bias are two major sources of this degradation. Our study provides insightful implications for FL practitioners: on the one hand, our findings suggest that FL algorithm designers account for heterogeneity during evaluation; on the other hand, they urge system providers to design specific mechanisms to mitigate its impacts.
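
A toy numpy sketch of the kind of effect the study quantifies: FedAvg on a synthetic non-IID task where, in one setting, the failure-prone half of the devices drops out of most rounds (participant bias). The failure rates, model, and data are invented for illustration and are unrelated to the authors' platform.

```python
import numpy as np

def fedavg(biased_dropout, rounds=200, clients=50, d=10, lr=0.1, seed=5):
    """One-vector FedAvg on least squares; toy stand-in for device failure."""
    rng = np.random.default_rng(seed)          # same data across both settings
    w_true = rng.standard_normal(d)
    shifts = rng.standard_normal((clients, d)) # per-client data skew (non-IID)
    data = []
    for i in range(clients):
        X = rng.standard_normal((20, d))
        data.append((X, X @ (w_true + 0.5 * shifts[i])))
    # failure-prone half of the devices when biased dropout is on
    drop = (np.where(np.arange(clients) < clients // 2, 0.8, 0.0)
            if biased_dropout else np.zeros(clients))
    w = np.zeros(d)
    for _ in range(rounds):
        updates = [w - lr * X.T @ (X @ w - y) / len(y)        # local step
                   for i, (X, y) in enumerate(data)
                   if rng.random() >= drop[i]]                # device available?
        if updates:
            w = np.mean(updates, axis=0)                      # server averages
    w_star = w_true + 0.5 * shifts.mean(axis=0)   # (roughly) mean of client optima
    return np.linalg.norm(w - w_star)

print("error, uniform participation:", fedavg(False))
print("error, biased participation: ", fedavg(True))
```

With biased participation the averaged model drifts toward the always-available clients, a toy analogue of the fairness degradation and participant bias reported in the study.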

Graph convolutional networks (GCNs) have shown promising results in processing graph data by extracting structure-aware features. This has given rise to extensive work in geometric deep learning focused on designing network architectures that ensure neuron activations conform to regularity patterns within the input graph. In most cases, however, the graph structure is only accounted for by enforcing similarity of activations between adjacent nodes, which in turn degrades the results. In this work, we augment GCN models with richer notions of regularity by leveraging cascades of band-pass filters known as geometric scattering. The resulting graph features incorporate multiscale representations of local graph structures while avoiding the overly smooth activations forced by previous architectures. Moreover, inspired by the skip connections used in residual networks, we introduce graph residual convolutions that reduce high-frequency noise caused by joining together information at multiple scales. Our hybrid architecture introduces a new model for semi-supervised learning on graph-structured data, and its potential is demonstrated on node classification tasks over multiple graph datasets, where it outperforms leading GCN models.
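
A small numpy sketch of the band-pass ingredient: diffusion wavelets $\Psi_k = P^{2^{k-1}} - P^{2^k}$ built from the lazy random walk $P = (I + A D^{-1})/2$, with first-order scattering features $|\Psi_k x|$. This is a generic geometric-scattering construction assumed for illustration; the paper's hybrid additionally combines such channels with GCN layers and residual convolutions.

```python
import numpy as np

def scattering_features(A, x, K=3):
    """First-order geometric scattering on a graph: lazy random walk
    P = (I + A D^-1)/2, wavelets Psi_k = P^(2^{k-1}) - P^(2^k),
    band-pass features |Psi_k x|."""
    n = A.shape[0]
    deg = A.sum(axis=0)
    P = 0.5 * (np.eye(n) + A / deg[None, :])   # column-stochastic lazy walk
    feats = [x]                                # keep the raw signal as well
    prev = P @ x                               # P^(2^0) x
    for k in range(1, K + 1):
        cur = prev
        for _ in range(2 ** (k - 1)):          # advance to P^(2^k) x
            cur = P @ cur
        feats.append(np.abs(prev - cur))       # |Psi_k x|, band-pass at scale k
        prev = cur
    return np.stack(feats, axis=-1)            # (n, K+1) multiscale features

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.array([1.0, 0.0, 0.0, 0.0])
print(scattering_features(A, x))
```

The absolute value after each wavelet is the nonlinearity of the scattering cascade; unlike repeated low-pass GCN aggregation, these features retain high-frequency local structure.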

Traditional 3D face models learn a latent representation of faces using linear subspaces built from no more than 300 training scans of a single database. The main roadblock to building a large-scale face model from diverse 3D databases lies in the lack of dense correspondence among raw scans. To address these problems, this paper proposes an innovative framework that jointly learns a nonlinear face model from a diverse set of raw 3D scan databases and establishes dense point-to-point correspondence among their scans. Specifically, by treating input raw scans as unorganized point clouds, we explore the use of PointNet architectures to convert point clouds into identity and expression feature representations, from which decoder networks recover the 3D face shapes. Furthermore, we propose a weakly supervised learning approach that does not require correspondence labels for the scans. We demonstrate the superior dense correspondence and representation power of the proposed method in shape and expression, as well as its contribution to single-image 3D face reconstruction.
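
A minimal PyTorch sketch of the architectural idea: a PointNet-style encoder (shared per-point MLP plus max pooling) split into identity and expression codes, and an MLP decoder that maps the codes to a fixed-topology 3D shape, which is what yields dense correspondence. All layer sizes and the module layout are invented for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn

class FaceAutoencoder(nn.Module):
    """PointNet-style encoder -> (identity, expression) codes -> shape decoder.
    Sizes are illustrative placeholders."""
    def __init__(self, n_verts=5000, id_dim=128, exp_dim=64):
        super().__init__()
        self.point_mlp = nn.Sequential(           # shared per-point MLP
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 256), nn.ReLU(),
        )
        self.to_id = nn.Linear(256, id_dim)       # identity code head
        self.to_exp = nn.Linear(256, exp_dim)     # expression code head
        self.decoder = nn.Sequential(             # codes -> corresponded verts
            nn.Linear(id_dim + exp_dim, 512), nn.ReLU(),
            nn.Linear(512, n_verts * 3),
        )
        self.n_verts = n_verts

    def forward(self, pts):                       # pts: (B, N, 3) raw point cloud
        f = self.point_mlp(pts)                   # (B, N, 256) per-point features
        g = f.max(dim=1).values                   # order-invariant global feature
        z = torch.cat([self.to_id(g), self.to_exp(g)], dim=-1)
        out = self.decoder(z)
        return out.view(-1, self.n_verts, 3)      # fixed topology => dense corresp.

model = FaceAutoencoder()
scan = torch.randn(2, 8192, 3)                    # two unorganized raw scans
print(model(scan).shape)                          # torch.Size([2, 5000, 3])
```

Because every decoded shape shares the same vertex ordering, vertex $i$ of any two outputs is in correspondence by construction, even though the input scans are unorganized.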

Owing to continuous advances in mathematical programming, Mixed Integer Optimization has become competitive with popular regularization methods for selecting features in regression problems. The approach exhibits unquestionable foundational appeal and versatility, but also poses important challenges. We tackle these challenges, reducing the computational burden of tuning the sparsity bound (a parameter critical for effectiveness) and improving performance in the presence of feature collinearity and of signals that vary in nature and strength. Importantly, we render the approach efficient and effective in applications of realistic size and complexity, without resorting to relaxations or heuristics in the optimization, or abandoning rigorous cross-validation tuning. Computational viability and improved performance in subtler scenarios are achieved with a multi-pronged blueprint that leverages characteristics of the Mixed Integer Programming framework and employs whitening, a data pre-processing step.
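
A small numpy sketch of the underlying problem: best-subset regression under a sparsity bound $k$, with $k$ tuned by cross-validation. Exhaustive enumeration stands in for the Mixed Integer solver here, which is precisely what makes the problem feasible only at toy sizes; the MIO machinery the abstract refers to is what scales it.

```python
import numpy as np
from itertools import combinations

def best_subset(X, y, k):
    """Exhaustive best-subset least squares with |support| = k: a brute-force
    stand-in for the Mixed Integer formulation, feasible only for small p."""
    best, best_rss = None, np.inf
    for S in combinations(range(X.shape[1]), k):
        cols = list(S)
        beta, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
        r = y - X[:, cols] @ beta
        if r @ r < best_rss:
            best, best_rss = (cols, beta), r @ r
    return best

def cv_tune_k(X, y, ks, folds=5):
    """Cross-validate the sparsity bound k (the tuning-critical parameter)."""
    fold_of = np.arange(len(y)) % folds
    errs = []
    for k in ks:
        e = 0.0
        for f in range(folds):
            tr, te = fold_of != f, fold_of == f
            cols, beta = best_subset(X[tr], y[tr], k)
            e += np.sum((y[te] - X[te][:, cols] @ beta) ** 2)
        errs.append(e)
    return ks[int(np.argmin(errs))]

rng = np.random.default_rng(6)
X = rng.standard_normal((100, 10))
y = X[:, [1, 4]] @ np.array([2.0, -3.0]) + 0.1 * rng.standard_normal(100)
k = cv_tune_k(X, y, ks=[1, 2, 3])
print("chosen k:", k, "support:", best_subset(X, y, k)[0])
```

The nested loop over folds and candidate bounds shows why tuning the sparsity bound dominates the computational budget, which is the burden the paper's blueprint targets.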

Image segmentation is one of the most critical tasks in hyperspectral remote sensing image processing. Recently, the convolutional neural network (CNN) has established itself as a powerful model for segmentation and classification by demonstrating excellent performance. The use of a graphical model such as a conditional random field (CRF) contributes further by capturing contextual information and thus improving segmentation performance. In this paper, we propose a method to segment hyperspectral images by considering both spectral and spatial information within a combined framework consisting of a CNN and a CRF. We use multiple spectral cubes to learn deep features with the CNN, and then formulate a deep CRF with CNN-based unary and pairwise potential functions to effectively extract semantic correlations between patches consisting of three-dimensional data cubes. Efficient piecewise training is applied to avoid computationally expensive iterative CRF inference. Furthermore, we introduce a deep deconvolution network that refines the segmentation masks. We also introduce a new dataset and evaluate our method on it, along with several widely adopted benchmark datasets. Comparing our results with those of several state-of-the-art models demonstrates the promising potential of our method.
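
A compact numpy sketch of the CRF refinement idea on a label grid: mean-field updates combine unary potentials (here random stand-ins for network outputs) with a Potts pairwise penalty over 4-neighborhoods. The paper's deep CRF instead uses CNN-based, learned pairwise potentials with piecewise training precisely to avoid this iterative inference; the sketch only illustrates what the pairwise term buys.

```python
import numpy as np

def meanfield_crf(unary, w=1.0, iters=10):
    """Mean-field inference for a grid CRF with Potts pairwise potentials.
    unary: (H, W, L) per-pixel negative log-likelihoods."""
    Q = np.exp(-unary)
    Q /= Q.sum(axis=-1, keepdims=True)            # initialize with unaries
    for _ in range(iters):
        msg = np.zeros_like(Q)                    # sum of neighbor marginals
        msg[1:, :] += Q[:-1, :]; msg[:-1, :] += Q[1:, :]
        msg[:, 1:] += Q[:, :-1]; msg[:, :-1] += Q[:, 1:]
        energy = unary - w * msg                  # Potts: reward agreeing labels
        Q = np.exp(-(energy - energy.min(axis=-1, keepdims=True)))
        Q /= Q.sum(axis=-1, keepdims=True)
    return Q.argmax(axis=-1)

rng = np.random.default_rng(7)
truth = (np.arange(32)[:, None] + np.arange(32)[None, :] > 32).astype(int)
unary = rng.normal(0, 1.5, (32, 32, 2))           # noisy stand-in for CNN scores
unary[np.arange(32)[:, None], np.arange(32)[None, :], truth] -= 2.0
labels = meanfield_crf(unary, w=0.8)
print("pixel accuracy after CRF smoothing:", (labels == truth).mean())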
