
In this paper, we develop sixth-order hybrid finite difference methods (FDMs) for the elliptic interface problem $-\nabla \cdot( a\nabla u)=f$ in $\Omega\backslash \Gamma$, where $\Gamma$ is a smooth interface inside $\Omega$. The variable scalar coefficient $a>0$ and the source $f$ are possibly discontinuous across $\Gamma$. The hybrid FDMs utilize a $9$-point compact stencil at interior regular points of the grid and a $13$-point stencil at irregular points near $\Gamma$. For interior regular points away from $\Gamma$, we obtain a sixth-order $9$-point compact FDM satisfying the sign and sum conditions that ensure the M-matrix property. We also derive sixth-order compact ($4$-point for corners and $6$-point for edges) FDMs satisfying the sign and sum conditions for the M-matrix property at any boundary point subject to (mixed) Dirichlet/Neumann/Robin boundary conditions. Thus, for the elliptic problem without interface (i.e., $\Gamma$ is empty), our compact FDM has the M-matrix property for any mesh size $h>0$ and consequently satisfies the discrete maximum principle, which guarantees the theoretical sixth-order convergence. For irregular points near $\Gamma$, we propose fifth-order $13$-point FDMs, whose stencil coefficients can be efficiently calculated by recursively solving several small linear systems. Theoretically, the proposed high-order FDMs use high-order (partial) derivatives of the coefficient $a$, the source term $f$, the interface curve $\Gamma$, the two jump functions along $\Gamma$, and the functions on $\partial \Omega$. Numerically, we always use function values to approximate all required high-order (partial) derivatives in our hybrid FDMs without losing accuracy. Our numerical experiments confirm the sixth-order convergence in the $l_{\infty}$ norm of the proposed hybrid FDMs for the elliptic interface problem.
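As a concrete illustration (ours, not reproduced from the paper), the sign and sum conditions behind the M-matrix property take the following standard form for a $9$-point compact stencil with center coefficient $C_0$ and neighbor coefficients $C_1,\dots,C_8$:
$$ C_0 > 0, \qquad C_k \le 0 \ \text{ for } k=1,\dots,8, \qquad \sum_{k=0}^{8} C_k \ge 0. $$
Under these conditions the assembled system matrix has a positive diagonal, nonpositive off-diagonal entries, and is (weakly) diagonally dominant, hence an M-matrix, which is what yields the discrete maximum principle invoked above.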

Related Content

For an input graph $G=(V, E)$ and a source vertex $s \in V$, the \emph{$\alpha$-approximate vertex fault-tolerant distance sensitivity oracle} (\emph{$\alpha$-VSDO}) returns an $\alpha$-approximation of the distance from $s$ to $t$ in $G-x$ for any query $(x, t)$. It is a data-structure version of the so-called single-source replacement path problem (SSRP). In this paper, we present a new \emph{nearly linear time} algorithm for constructing the $(1 + \epsilon)$-VSDO for any weighted directed graph of $n$ vertices and $m$ edges with integer weights in the range $[1, W]$, and any constant $\epsilon \in (0, 1]$. More precisely, the presented oracle attains $\tilde{O}(m / \epsilon + n /\epsilon^2)$ construction time, $\tilde{O}(n/ \epsilon)$ size, and $\tilde{O}(1/\epsilon)$ query time for any polynomially bounded $W$. To the best of our knowledge, this is the first non-trivial result for SSRP/VSDO beating the trivial $\tilde{O}(mn)$ computation time for directed graphs with polynomially bounded edge weights. Such a result was previously unknown even for the setting of $(1 + \epsilon)$-approximation. It also implies that the known barrier of $\Omega(m\sqrt{n})$ time for the exact SSRP by Chechik and Magen~[ICALP 2020] does not apply to the case of approximation.
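To pin down the query semantics, here is a minimal brute-force reference in Python: it answers each query $(x, t)$ exactly (i.e., $\alpha = 1$) by rerunning Dijkstra on $G - x$, at $\tilde{O}(m)$ cost per query. It only illustrates what a VSDO computes; the oracle above replaces this per-query recomputation with a precomputed nearly-linear-size structure.

```python
import heapq

def dijkstra_avoiding(adj, s, skip):
    """Shortest-path distances from s in the graph with vertex `skip` removed.
    adj maps each vertex to a list of (neighbor, weight) pairs."""
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            if v == skip:
                continue  # simulate the deletion of vertex x
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

class BruteForceVSDO:
    """Exact (alpha = 1) baseline: recompute from scratch on every query."""
    def __init__(self, adj, s):
        self.adj, self.s = adj, s

    def query(self, x, t):
        return dijkstra_avoiding(self.adj, self.s, x).get(t, float("inf"))

adj = {0: [(1, 2), (2, 5)], 1: [(2, 1)], 2: []}
print(BruteForceVSDO(adj, 0).query(1, 2))  # 5: the path through vertex 1 is gone
```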

In robust optimization problems, the magnitude of perturbations is relatively small. Consequently, solutions within certain regions are less likely to represent the robust optima when perturbations are introduced. Hence, a more efficient search process would benefit from increased opportunities to explore promising regions where global optima or good local optima are situated. In this paper, we introduce a novel robust evolutionary algorithm named the dual-stage robust evolutionary algorithm (DREA), aimed at discovering robust solutions. DREA operates in two stages: the peak-detection stage and the robust solution-searching stage. The primary objective of the peak-detection stage is to identify peaks in the fitness landscape of the original optimization problem. The robust solution-searching stage then focuses on swiftly identifying the robust optimal solution using information obtained from the peaks discovered in the first stage. Together, these two stages enable the proposed DREA to efficiently obtain the robust optimal solution for the optimization problem. This approach achieves a balance between solution optimality and robustness by separating the search processes for optimal and robust optimal solutions. Experimental results demonstrate that DREA significantly outperforms five state-of-the-art algorithms across 18 test problems of diverse complexity. Moreover, on higher-dimensional robust optimization problems (100-$D$ and 200-$D$), DREA also outperforms all five counterpart algorithms.
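The abstract does not specify DREA's evolutionary operators, so the following is only a hypothetical Python sketch of the two-stage idea: multi-start hill climbing stands in for the peak-detection stage, and Monte Carlo perturbation sampling stands in for the robustness criterion used to re-rank the peaks.

```python
import numpy as np

rng = np.random.default_rng(0)

def hill_climb(f, x0, step=0.1, iters=200):
    """Simple local search: a stand-in for the peak-detection stage."""
    x = x0.copy()
    for _ in range(iters):
        cand = x + rng.normal(scale=step, size=x.shape)
        if f(cand) > f(x):
            x = cand
    return x

def robust_fitness(f, x, delta=0.2, samples=64):
    """Mean fitness under random perturbations: one common robustness measure."""
    noise = rng.uniform(-delta, delta, size=(samples,) + x.shape)
    return np.mean([f(x + n) for n in noise])

def dual_stage_search(f, dim=2, starts=20):
    # Stage 1: detect peaks of the original (unperturbed) landscape.
    peaks = [hill_climb(f, rng.uniform(-5, 5, size=dim)) for _ in range(starts)]
    # Stage 2: re-rank the detected peaks by the robustness criterion.
    return max(peaks, key=lambda p: robust_fitness(f, p))

best = dual_stage_search(lambda x: -np.sum(x ** 2), dim=2)  # toy maximization
```

The point of the split is visible in the sketch: the expensive robustness evaluation (many perturbed fitness calls) is spent only on a few candidate peaks rather than on the whole search population.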

We propose a novel data-driven semi-confirmatory factor analysis (SCFA) model that addresses the absence of model specification and handles the estimation and inference tasks with high-dimensional data. Confirmatory factor analysis (CFA) is a prevalent and pivotal technique for statistically validating the covariance structure of latent common factors derived from multiple observed variables. In contrast to other factor analysis methods, CFA offers a flexible covariance modeling approach for common factors, enhancing the interpretability of relationships between the common factors, as well as between common factors and observations. However, the application of classic CFA models faces dual barriers: the lack of a prerequisite specification of "non-zero loadings" or factor membership (i.e., categorizing the observations into distinct common factors), and the formidable computational burden in high-dimensional scenarios where the number of observed variables surpasses the sample size. To bridge these two gaps, we propose the SCFA model by integrating the underlying high-dimensional covariance structure of observed variables into the CFA model. Additionally, we offer computationally efficient solutions (i.e., closed-form uniformly minimum variance unbiased estimators) and ensure accurate statistical inference through closed-form exact variance estimators for all model parameters and factor scores. Through an extensive simulation analysis benchmarking against standard computational packages, SCFA exhibits superior performance in estimating model parameters and recovering factor scores, while substantially reducing the computational load, across both low- and high-dimensional scenarios. It exhibits moderate robustness to model misspecification. We illustrate the practical application of the SCFA model by conducting factor analysis on a high-dimensional gene expression dataset.
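As background (standard CFA notation, not specific to SCFA), the covariance structure under validation is
$$ x = \Lambda f + \epsilon, \qquad \operatorname{Cov}(x) = \Sigma = \Lambda \Phi \Lambda^{\top} + \Psi, $$
where $\Lambda$ is the loading matrix, $\Phi$ is the covariance of the common factors $f$, and $\Psi$ is the (typically diagonal) covariance of the unique errors $\epsilon$. The "non-zero loadings" specification mentioned above amounts to fixing the zero pattern of $\Lambda$ in advance, which is exactly the prerequisite that SCFA removes.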

We consider the problem of tracking multiple, unknown, and time-varying numbers of objects using a distributed network of heterogeneous sensors. In an effort to derive a formulation for practical settings, we consider limited and unknown sensor fields-of-view (FoVs), limited local computational resources at the sensors, and limited communication channel capacity. The resulting distributed multi-object tracking algorithm involves solving an NP-hard multidimensional assignment problem, either optimally for small problems or sub-optimally for general practical problems. For general problems, we propose an efficient distributed multi-object tracking algorithm that performs track-to-track fusion using a clustering-based analysis of the state space transformed into a density space to mitigate the complexity of the assignment problem. The proposed algorithm can group local track estimates for fusion more efficiently than existing approaches. To ensure globally consistent track identities across a network of nodes as objects move between FoVs, we develop a graph-based algorithm to achieve label consensus and minimise track segmentation. Numerical experiments with a synthetic and a real-world trajectory dataset demonstrate that our proposed method is significantly more computationally efficient than state-of-the-art solutions, achieving similar tracking accuracy and bandwidth requirements but with improved label consistency.
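As a rough illustration of clustering-based track-to-track fusion (our sketch; the paper's density-space transform is not detailed in this summary), the following Python snippet groups local track estimates by proximity with DBSCAN and fuses each group with equal-weight covariance intersection, a standard fusion rule for estimates with unknown cross-correlations:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def covariance_intersection(means, covs):
    """N-way covariance intersection with equal weights."""
    w = 1.0 / len(means)
    info = sum(w * np.linalg.inv(P) for P in covs)          # fused information
    vec = sum(w * np.linalg.inv(P) @ m for m, P in zip(means, covs))
    P = np.linalg.inv(info)
    return P @ vec, P                                        # fused mean, cov

def fuse_local_tracks(means, covs, eps=2.0):
    """Group local track estimates by spatial proximity, then fuse each group."""
    labels = DBSCAN(eps=eps, min_samples=1).fit_predict(np.vstack(means))
    fused = []
    for k in set(labels):
        idx = [i for i, lab in enumerate(labels) if lab == k]
        fused.append(covariance_intersection([means[i] for i in idx],
                                             [covs[i] for i in idx]))
    return fused

means = [np.array([0.0, 0.0]), np.array([0.1, -0.1]), np.array([5.0, 5.0])]
covs = [np.eye(2) * 0.5] * 3
fused = fuse_local_tracks(means, covs)  # two fused tracks, one per cluster
```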

In this paper, we propose a novel and general framework to construct tight framelet systems on graphs with localized supports based on hierarchical partitions. Our construction provides parametrized graph framelet systems with great generality based on partition trees, by which we are able to find the size of a low-dimensional subspace that best fits the low-rank structure of a family of signals. The orthogonal decomposition of subspaces provides a key ingredient for the definition of "generalized vanishing moments" for graph framelets. In a data-adaptive setting, the graph framelet systems can be learned by solving an optimization problem on Stiefel manifolds with respect to our parametrization. Moreover, such graph framelet systems can be further improved by solving a subsequent optimization problem on Stiefel manifolds, aiming to provide the utmost sparsity for a given family of graph signals. Experimental results show that our learned graph framelet systems achieve superior performance in non-linear approximation and denoising tasks.
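The optimization on Stiefel manifolds mentioned above can be pictured with a minimal projected-gradient sketch in Python (ours, assuming a QR-based retraction; the paper's actual solver is not specified here):

```python
import numpy as np

def stiefel_retract(X):
    """Retract a matrix onto the Stiefel manifold via thin QR, so that the
    columns become orthonormal; the sign fix makes the retraction unique."""
    Q, R = np.linalg.qr(X)
    return Q * np.sign(np.diag(R))

def gradient_step_on_stiefel(X, egrad, lr=1e-2):
    """One projected-gradient step: move along the Euclidean gradient of the
    objective, then retract back to the manifold. A minimal stand-in for the
    framelet-learning optimization described above."""
    return stiefel_retract(X - lr * egrad)

X = stiefel_retract(np.random.default_rng(0).normal(size=(20, 5)))
```

The retraction keeps the learned columns orthonormal after every step, preserving the orthogonality constraints that the parametrization imposes.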

We develop a post-selection inference method for the Cox proportional hazards model with interval-censored data, which provides asymptotically valid p-values and confidence intervals conditional on the model selected by lasso. The method is based on a pivotal quantity that is shown to converge to a uniform distribution under local alternatives. The proof can be adapted to many other regression models, which is illustrated by the extension to generalized linear models and the Cox model with right-censored data. Our method involves estimation of the efficient information matrix, for which several approaches are proposed with proof of their consistency. Thorough simulation studies show that our method has satisfactory performance in samples of modest sizes. The utility of the method is illustrated via an application to an Alzheimer's disease study.
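For reference (standard background, not the paper's new contribution), the model under selection is the Cox proportional hazards model with lasso-penalized estimation:
$$ \lambda(t \mid Z) = \lambda_0(t)\exp(\beta^{\top} Z), \qquad \hat{\beta} = \arg\min_{\beta}\,\bigl\{-\ell_n(\beta) + \lambda \|\beta\|_1\bigr\}, $$
where $\ell_n$ is the (interval-censored) log-likelihood. The proposed method then provides p-values and confidence intervals that are valid conditionally on the selected support $\hat{S} = \{j : \hat{\beta}_j \neq 0\}$, rather than treating $\hat{S}$ as if it had been fixed in advance.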

Large-scale text-to-image (T2I) diffusion models have been extended to text-guided video editing, yielding impressive zero-shot video editing performance. Nonetheless, the generated videos usually show spatial irregularities and temporal inconsistencies because the temporal characteristics of videos have not been faithfully modeled. In this paper, we propose an elegant yet effective Temporal-Consistent Video Editing (TCVE) method to mitigate the temporal inconsistency challenge for robust text-guided video editing. In addition to utilizing a pretrained T2I 2D Unet for spatial content manipulation, we establish a dedicated temporal Unet architecture to faithfully capture the temporal coherence of the input video sequences. Furthermore, to establish coherence and interrelation between the spatial-focused and temporal-focused components, we formulate a cohesive spatial-temporal modeling unit. This unit effectively interconnects the temporal Unet with the pretrained 2D Unet, thereby enhancing the temporal consistency of the generated videos while preserving the capacity for video content manipulation. Quantitative results and visualizations demonstrate that TCVE achieves state-of-the-art performance in both video temporal consistency and video editing capability, outperforming existing methods.
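The abstract does not give the exact layer design, so the following PyTorch sketch is only a hypothetical illustration of a spatial-temporal unit: a frame-wise 2D convolution (standing in for a pretrained 2D Unet block) combined with a residual temporal convolution over the frame axis.

```python
import torch
import torch.nn as nn

class SpatialTemporalUnit(nn.Module):
    """Hypothetical spatial-temporal modeling unit, not the paper's exact
    architecture: a 2D block applied per frame plus a temporal pathway."""
    def __init__(self, channels):
        super().__init__()
        self.spatial = nn.Conv2d(channels, channels, 3, padding=1)
        # Kernel (3, 1, 1): mixes information across neighboring frames only.
        self.temporal = nn.Conv3d(channels, channels, (3, 1, 1), padding=(1, 0, 0))

    def forward(self, x):  # x: (batch, channels, frames, H, W)
        b, c, t, h, w = x.shape
        # Apply the spatial block to each frame independently.
        s = self.spatial(x.permute(0, 2, 1, 3, 4).reshape(b * t, c, h, w))
        s = s.reshape(b, t, c, h, w).permute(0, 2, 1, 3, 4)
        # Residual temporal pathway encourages cross-frame consistency.
        return s + self.temporal(s)

video = torch.randn(1, 8, 4, 16, 16)  # 4 frames of 16x16 features, 8 channels
out = SpatialTemporalUnit(8)(video)
```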

We develop a new efficient sequential approximate leverage score algorithm, SALSA, using methods from randomized numerical linear algebra (RandNLA) for large matrices. We demonstrate that, with high probability, SALSA's approximations are within a $(1 + O(\varepsilon))$ factor of the true leverage scores. In addition, we show that SALSA surpasses existing approximation methods in both theoretical computational complexity and numerical accuracy. These theoretical results are subsequently utilized to develop an efficient algorithm, named LSARMA, for fitting an appropriate ARMA model to large-scale time series data. Our proposed algorithm is, with high probability, guaranteed to find the maximum likelihood estimates of the parameters of the true underlying ARMA model. Furthermore, it has a worst-case running time that significantly improves on those of the state-of-the-art alternatives in big-data regimes. Empirical results on large-scale data strongly support these theoretical results and underscore the efficacy of our new approach.
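For readers new to leverage scores: the $i$-th leverage score of a tall matrix $A$ is the $i$-th diagonal entry of the hat matrix $A(A^{\top}A)^{-1}A^{\top}$. The exact computation below (via a thin QR factorization) gives the quantity that SALSA approximates to within $(1 + O(\varepsilon))$; SALSA's RandNLA machinery itself is not reproduced here.

```python
import numpy as np

def leverage_scores(A):
    """Exact leverage scores of a tall matrix A (n x d, n >= d): the diagonal
    of the hat matrix, computed stably as squared row norms of the thin-QR
    factor Q."""
    Q, _ = np.linalg.qr(A)
    return np.einsum("ij,ij->i", Q, Q)

rng = np.random.default_rng(0)
A = rng.normal(size=(10_000, 20))
scores = leverage_scores(A)
assert np.isclose(scores.sum(), 20)  # leverage scores sum to rank(A)
```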

Object detection typically assumes that training and test data are drawn from an identical distribution, which, however, does not always hold in practice. Such a distribution mismatch leads to a significant performance drop. In this work, we aim to improve the cross-domain robustness of object detection. We tackle the domain shift on two levels: 1) the image-level shift, such as image style, illumination, etc., and 2) the instance-level shift, such as object appearance, size, etc. We build our approach on the recent state-of-the-art Faster R-CNN model, and design two domain adaptation components, on the image level and the instance level, to reduce the domain discrepancy. The two domain adaptation components are based on H-divergence theory, and are implemented by learning a domain classifier in an adversarial training manner. The domain classifiers on different levels are further reinforced with a consistency regularization to learn a domain-invariant region proposal network (RPN) in the Faster R-CNN model. We evaluate our newly proposed approach on multiple datasets, including Cityscapes, KITTI, and SIM10K. The results demonstrate the effectiveness of our proposed approach for robust object detection in various domain shift scenarios.
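Adversarial domain classifiers of this kind are commonly implemented with a gradient reversal layer (GRL); the PyTorch sketch below shows that standard construction (a generic illustration, not this paper's exact architecture): the layer is the identity in the forward pass and negates gradients in the backward pass, so minimizing the domain classification loss simultaneously pushes the features toward domain invariance.

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negated (scaled) gradient in the backward
    pass, training the features adversarially against the domain classifier."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class ImageLevelDomainClassifier(nn.Module):
    """Image-level domain classifier head; an instance-level one is analogous."""
    def __init__(self, dim, lam=1.0):
        super().__init__()
        self.lam = lam
        self.head = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, feat):  # feat: (batch, dim) pooled backbone features
        return self.head(GradReverse.apply(feat, self.lam))  # domain logits

feats = torch.randn(4, 256, requires_grad=True)
logits = ImageLevelDomainClassifier(256)(feats)
loss = nn.functional.binary_cross_entropy_with_logits(
    logits, torch.zeros_like(logits))
loss.backward()  # gradients reaching `feats` are reversed by the GRL
```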

In this paper, we propose jointly learned attention and recurrent neural network (RNN) models for multi-label classification. While approaches based on either model exist (e.g., for the task of image captioning), training such network architectures typically requires pre-defined label sequences. For multi-label classification, it is desirable to have a robust inference process so that prediction errors do not propagate and degrade performance. Our proposed model uniquely integrates attention and Long Short-Term Memory (LSTM) models, which not only addresses the above problem but also allows one to identify visual objects of interest with varying sizes without prior knowledge of a particular label ordering. More importantly, label co-occurrence information can be jointly exploited by our LSTM model. Finally, by advancing the technique of beam search, prediction of multiple labels can be efficiently achieved by our proposed network model.
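As a generic illustration of beam search over label sequences (ours; the paper's exact decoding rules are not given in the abstract), the Python sketch below keeps the `beam_width` highest-scoring partial sequences at each step and returns the best finished sequence as an order-free label set. The scorer `step_scores` is a hypothetical stand-in for the attention-LSTM model.

```python
import numpy as np

def beam_search_labels(step_scores, beam_width=3, max_labels=4, stop=0):
    """Beam search over label indices; `step_scores(seq)` returns log-probabilities
    over the label vocabulary given the labels predicted so far, and index
    `stop` terminates a sequence."""
    beams, finished = [((), 0.0)], []
    for _ in range(max_labels):
        candidates = []
        for seq, score in beams:
            logp = step_scores(seq)
            for lbl in np.argsort(logp)[-beam_width:]:  # top-k next labels
                cand = (seq + (int(lbl),), score + float(logp[lbl]))
                (finished if lbl == stop else candidates).append(cand)
        beams = sorted(candidates, key=lambda c: c[1])[-beam_width:]
        if not beams:
            break
    best = max(finished + beams, key=lambda c: c[1])
    return set(best[0]) - {stop}  # multi-label output: an order-free label set

# Dummy scorer for illustration only (a random distribution per step).
rng = np.random.default_rng(1)
def dummy_scores(seq):
    return np.log(rng.dirichlet(np.ones(6)))

print(beam_search_labels(dummy_scores))
```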
