
We consider two hypothesis testing problems for low-rank and high-dimensional tensor signals, namely the tensor signal alignment and tensor signal matching problems. These problems are challenging due to the high dimension of tensors and the lack of meaningful test statistics. By exploiting a recent tensor contraction method, we propose and validate relevant test statistics using eigenvalues of a data matrix resulting from the tensor contraction. The matrix exhibits long-range dependence among its entries, which makes its analysis challenging, involved, and distinct from standard random matrix theory. Our approach provides a novel framework for addressing hypothesis testing problems in the context of high-dimensional tensor signals.
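
As a rough illustration of the general recipe (not the paper's specific contraction or test statistic), the sketch below, assuming a rank-one signal-plus-noise tensor and a random probe direction, contracts an order-3 tensor along one mode and uses the top singular value of the resulting data matrix as a spectral statistic.

```python
# Illustrative sketch (not the paper's exact construction): contract a noisy
# rank-one order-3 tensor along one mode, then use the top singular value of
# the resulting matrix as a test statistic for signal presence.
import numpy as np

rng = np.random.default_rng(0)
n = 100
beta = 2.0                      # signal strength (hypothetical parameter)

# Unit signal components
u, v, w = (rng.standard_normal(n) for _ in range(3))
u, v, w = (x / np.linalg.norm(x) for x in (u, v, w))

# Rank-one signal plus Gaussian noise, T in R^{n x n x n}
T = beta * np.einsum('i,j,k->ijk', u, v, w) \
    + rng.standard_normal((n, n, n)) / np.sqrt(n)

# Contract along the third mode with a probe direction a (here: random unit vector)
a = rng.standard_normal(n)
a /= np.linalg.norm(a)
M = np.einsum('ijk,k->ij', T, a)        # data matrix from tensor contraction

# Spectral test statistic: largest singular value of the contracted matrix
stat = np.linalg.svd(M, compute_uv=False)[0]
print(f"top singular value = {stat:.3f}")
```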

Related content

Most of the scientific literature on causal modeling considers the structural framework of Pearl and the potential-outcome framework of Rubin to be formally equivalent, and therefore interchangeably uses do-interventions and the potential-outcome subscript notation to write counterfactual outcomes. In this paper, we agnostically superimpose the two causal models to specify under which mathematical conditions structural counterfactual outcomes and potential outcomes need to, do not need to, can, or cannot be equal (almost surely or in law). Our comparison serves as a reminder that a structural causal model and a Rubin causal model compatible with the same observations do not have to coincide, and highlights real-world problems where they cannot even correspond. Then, we examine common claims and practices from the causal-inference literature in the light of these results. In doing so, we aim to clarify the relationship between the two causal frameworks and the interpretation of their respective counterfactuals.
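
For orientation, a minimal notational sketch of the two objects being compared, in standard textbook notation rather than the paper's own conditions:

```latex
% Minimal notational sketch of the two counterfactual objects being compared
% (standard textbook notation, not the paper's specific conditions).
\begin{align*}
  \text{Structural (Pearl):}\quad
    & Y_{\mathrm{do}(X=x)}(u) := Y_{M_x}(u),
      \text{ where } M_x \text{ is the SCM } M \text{ with the equation for } X
      \text{ replaced by } X := x; \\
  \text{Potential outcome (Rubin):}\quad
    & Y_x \text{ is taken as a primitive random variable on the same unit space;} \\
  \text{Consistency:}\quad
    & X = x \;\Longrightarrow\; Y_x = Y \quad \text{(almost surely).}
\end{align*}
```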

Phase-field models of fatigue are capable of reproducing the main phenomenology of fatigue behavior. However, phase-field computations in the high-cycle fatigue regime are prohibitively expensive, due to the need to resolve spatially the small length scale inherent to phase-field models and temporally the loading history over several millions of cycles. As a remedy, we propose a fully adaptive acceleration scheme based on the cycle-jump technique, where the cycle-by-cycle resolution of an appropriately determined number of cycles is skipped while predicting the local system evolution during the jump. The novelty of our approach is a cycle-jump criterion that determines the appropriate cycle-jump size based on a target increment of a global variable which monitors the advancement of fatigue. We propose the definition and meaning of this variable for three general stages of the fatigue life. In comparison to existing acceleration techniques, our approach requires no parameters or bounds for the cycle-jump size, and it works independently of the material, specimen, or loading conditions. Since one of the monitoring variables is the fatigue crack length, we introduce an accurate, flexible, and efficient method for its computation, which overcomes the issues of conventional crack-tip tracking algorithms and enables the consideration of several cracks evolving at the same time. The performance of the proposed acceleration scheme is demonstrated with representative numerical examples, which show a speedup reaching four orders of magnitude in the high-cycle fatigue regime with consistently high accuracy.
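
A schematic sketch of one adaptive cycle-jump step is given below; the variable names, the finite-difference rate estimate, and the linear extrapolation of the local state are illustrative assumptions, not the paper's exact criterion or predictor.

```python
# Schematic sketch of an adaptive cycle-jump step (names and the linear
# extrapolation are illustrative assumptions, not the paper's exact scheme).
import numpy as np

def cycle_jump(N_hist, q_hist, state, state_prev, dq_target, min_jump=1):
    """Choose a jump size from the recent trend of a global fatigue
    variable q(N) and linearly extrapolate the local state over the jump.

    N_hist, q_hist : cycle numbers and global-variable values of the last
                     resolved cycles (e.g. fatigue crack length).
    state, state_prev : local fields (e.g. nodal damage) at the last two
                        resolved cycles, as NumPy arrays.
    dq_target : target increment of q per jump.
    """
    dq_dN = (q_hist[-1] - q_hist[-2]) / (N_hist[-1] - N_hist[-2])
    if dq_dN <= 0:                       # no measurable advancement yet
        return N_hist[-1] + min_jump, state
    jump = max(int(dq_target / dq_dN), min_jump)
    rate = (state - state_prev) / (N_hist[-1] - N_hist[-2])
    state_jumped = state + jump * rate   # predict evolution during the jump
    return N_hist[-1] + jump, state_jumped

# Usage with toy data
N_hist = np.array([1000, 1001])
q_hist = np.array([0.50, 0.5002])        # e.g. crack length in mm
state = np.full(5, 0.30); state_prev = np.full(5, 0.2995)
print(cycle_jump(N_hist, q_hist, state, state_prev, dq_target=0.05))
```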

For several types of information relations, the induced rough set system RS does not form a lattice but only a partially ordered set. However, by studying its Dedekind-MacNeille completion DM(RS), one may reveal new important properties of rough set structures. Building upon D. Umadevi's work on describing joins and meets in DM(RS), we previously investigated pseudo-Kleene algebras defined on DM(RS) for reflexive relations. This paper delves deeper into the order-theoretic properties of DM(RS) in the context of reflexive relations. We describe the completely join-irreducible elements of DM(RS) and characterize when DM(RS) is a spatial completely distributive lattice. We show that even in the case of a non-transitive reflexive relation, DM(RS) can form a Nelson algebra, a property generally associated with quasiorders. We introduce a novel concept, the core of a relational neighborhood, and use it to provide a necessary and sufficient condition for DM(RS) to form a Nelson algebra.
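
As background for readers unfamiliar with the construction, the sketch below builds the rough approximation pairs induced by a small reflexive, non-transitive relation via relational neighborhoods; the Dedekind-MacNeille completion DM(RS) studied in the paper is not constructed here.

```python
# Background sketch: rough approximation pairs induced by a reflexive
# relation via relational neighborhoods R(x) = {y : x R y}.  The paper's
# object of study is the Dedekind-MacNeille completion of the resulting
# system RS, which is not constructed here.
from itertools import combinations

U = {0, 1, 2, 3}
R = {(x, x) for x in U} | {(0, 1), (1, 2), (3, 2)}   # reflexive, non-transitive

def nbhd(x):
    return {y for y in U if (x, y) in R}

def lower(X):
    return frozenset(x for x in U if nbhd(x) <= X)

def upper(X):
    return frozenset(x for x in U if nbhd(x) & X)

# Rough set system: all approximation pairs (lower(X), upper(X)), X subset of U
subsets = [set(c) for r in range(len(U) + 1) for c in combinations(U, r)]
RS = {(lower(X), upper(X)) for X in subsets}
print(f"|RS| = {len(RS)} approximation pairs")
```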

When a large number of points is required, the traditional uniform sampling approach for approximating integrals with the Monte Carlo method becomes inefficient. In this work, we leverage good lattice point sets from number-theoretic methods for sampling purposes and develop a deep learning framework that integrates good lattice point sets with Physics-Informed Neural Networks. This framework is designed to address low-regularity and high-dimensional problems. Furthermore, rigorous mathematical proofs are provided for our algorithm, demonstrating its validity. Lastly, in the experimental section, we employ numerical experiments involving the Poisson equation with low regularity, the two-dimensional inverse Helmholtz equation, and high-dimensional linear and nonlinear problems to illustrate the effectiveness of our algorithm from a numerical perspective.
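
A minimal sketch of the kind of number-theoretic sampling involved, assuming a simple rank-1 (Korobov/Fibonacci-type) generating vector rather than the paper's specific good lattice point construction or its PINN integration:

```python
# Sketch of a rank-1 (Korobov-type) good lattice point set, the kind of
# number-theoretic sampling the framework builds on; the generating vector
# below is a simple illustrative choice, not the paper's construction.
import numpy as np

def good_lattice_points(N, g):
    """Points x_i = frac(i * g / N), i = 0..N-1, for generating vector g."""
    i = np.arange(N)[:, None]
    return (i * np.asarray(g)[None, :] / N) % 1.0

N, d = 1597, 2                       # Fibonacci N gives a good 2-d lattice
pts_glp = good_lattice_points(N, g=[1, 987])       # 987 = Fibonacci neighbour
pts_mc = np.random.default_rng(0).random((N, d))

f = lambda x: np.prod(1 + 0.5 * np.cos(2 * np.pi * x), axis=1)   # exact integral = 1
print("GLP estimate:", f(pts_glp).mean())
print("MC  estimate:", f(pts_mc).mean())
```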

Multi-modal perception methods are thriving in the autonomous driving field because they make better use of complementary data from different sensors. Such methods depend on calibration and synchronization between sensors to obtain accurate environmental information. Robustness to spatial misalignment in the autonomous driving object detection process has already been studied; however, research on temporal alignment is relatively scarce. Since LiDAR point clouds are more challenging to transfer in real time in real-world experiments, our study uses historical LiDAR frames to better align features when LiDAR data lags exist. We design a Timealign module that predicts LiDAR features and combines them with the observation to tackle such time misalignment, building on the SOTA GraphBEV framework.
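
The sketch below gives one hypothetical way such a block could be structured in PyTorch: predict current-frame BEV features from a few historical frames and fuse them with the lagged observation through a learned gate. The layer layout, names, and gating scheme are assumptions for illustration only, not the module used in the paper or in GraphBEV.

```python
# Hypothetical sketch of a Timealign-style block: predict current-frame LiDAR
# BEV features from K historical frames and fuse them with the lagged
# observation.  The layer layout is an assumption for illustration only and
# is not the module used in the paper or in GraphBEV.
import torch
import torch.nn as nn

class TimeAlignSketch(nn.Module):
    def __init__(self, channels: int, history: int):
        super().__init__()
        # Predict current features from concatenated historical frames
        self.predict = nn.Conv2d(channels * history, channels, kernel_size=3, padding=1)
        # Gate deciding, per location, how much to trust prediction vs observation
        self.gate = nn.Sequential(
            nn.Conv2d(channels * 2, channels, kernel_size=1), nn.Sigmoid()
        )

    def forward(self, history_feats, lagged_obs):
        # history_feats: (B, K, C, H, W), lagged_obs: (B, C, H, W)
        b, k, c, h, w = history_feats.shape
        pred = self.predict(history_feats.reshape(b, k * c, h, w))
        g = self.gate(torch.cat([pred, lagged_obs], dim=1))
        return g * pred + (1 - g) * lagged_obs   # fused, time-aligned features

feats = torch.randn(2, 3, 64, 128, 128)          # 3 historical BEV frames
obs = torch.randn(2, 64, 128, 128)               # lagged observation
print(TimeAlignSketch(channels=64, history=3)(feats, obs).shape)
```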

The dynamics of magnetization in ferromagnetic materials are modeled by the Landau-Lifshitz equation, which presents significant challenges due to its inherent nonlinearity and non-convex constraint. These complexities necessitate efficient numerical methods for micromagnetics simulations. The Gauss-Seidel Projection Method (GSPM), first introduced in 2001, is among the most efficient techniques currently available. However, existing GSPMs are limited to first-order accuracy. This paper introduces two novel second-order accurate GSPMs based on a combination of the biharmonic equation and the second-order backward differentiation formula, achieving computational complexity comparable to that of solving the scalar biharmonic equation implicitly. The first proposed method achieves unconditional stability through Gauss-Seidel updates, while the second method exhibits conditional stability with a Courant-Friedrichs-Lewy constant of 0.25. Through consistency analysis and numerical experiments, we demonstrate the efficacy and reliability of these methods. Notably, the first method displays unconditional stability in micromagnetics simulations, even when the stray field is updated only once per time step.
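
For reference, the equation being discretized, in its standard nondimensional form (coefficients and the set of field contributions vary across references):

```latex
% Standard nondimensional Landau-Lifshitz equation solved by GSPM-type
% schemes (textbook form; normalizations vary by reference).
\begin{equation*}
  \partial_t \mathbf{m}
  = -\,\mathbf{m} \times \mathbf{h}_{\mathrm{eff}}
    \;-\; \alpha\, \mathbf{m} \times \bigl(\mathbf{m} \times \mathbf{h}_{\mathrm{eff}}\bigr),
  \qquad |\mathbf{m}| = 1,
  \qquad
  \mathbf{h}_{\mathrm{eff}} = \Delta \mathbf{m} + \text{(anisotropy, stray and external fields)},
\end{equation*}
where the non-convex constraint $|\mathbf{m}|=1$ is enforced by a pointwise projection step.
```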

High-dimensional, higher-order tensor data are gaining prominence in a variety of fields, including but not limited to computer vision and network analysis. Tensor factor models, induced from noisy versions of tensor decompositions or factorizations, are natural and potent instruments for studying a collection of tensor-variate objects that may be dependent or independent. However, the development of statistical inferential theories for the estimation of various low-rank structures, which customarily play the role of signals in tensor factor models, is still at an early stage. In this paper, we attempt to ``decode" the estimation of a higher-order tensor factor model by leveraging tensor matricization. Specifically, we recast it into mode-wise traditional high-dimensional vector/fiber factor models, enabling the deployment of conventional principal component analysis (PCA) for estimation. Demonstrated on the Tucker tensor factor model (TuTFaM), which is induced from the noisy version of the widely used Tucker decomposition, we find that estimations of the signal components are essentially mode-wise PCA techniques, and that the involvement of projection and iteration enhances the signal-to-noise ratio to various extents. We establish the inferential theory of the proposed estimators, conduct rich simulation experiments, and illustrate how the proposed estimations work in tensor reconstruction and in clustering for independent video and dependent economic datasets, respectively.
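
A minimal sketch of the mode-wise PCA idea via matricization, assuming a synthetic Tucker signal-plus-noise model; the paper's projected and iterative refinements are not reproduced:

```python
# Sketch of the mode-wise PCA idea for a Tucker-type tensor factor model:
# unfold each observation along a mode and take leading eigenvectors of the
# aggregated covariance.  Illustrative only; the paper's projected/iterative
# refinements are not reproduced here.
import numpy as np

rng = np.random.default_rng(1)
T, dims, ranks = 200, (20, 15, 10), (3, 2, 2)

# True loading matrices (orthonormal columns) and random cores per sample
A = [np.linalg.qr(rng.standard_normal((d, r)))[0] for d, r in zip(dims, ranks)]
X = np.empty((T, *dims))
for t in range(T):
    G = rng.standard_normal(ranks)
    S = np.einsum('abc,ia,jb,kc->ijk', G, *A)        # Tucker signal
    X[t] = 3.0 * S + 0.5 * rng.standard_normal(dims) # signal + noise

def modewise_pca(X, mode, rank):
    """Estimate the mode-`mode` loading space by PCA of mode unfoldings."""
    mats = [np.moveaxis(x, mode, 0).reshape(x.shape[mode], -1) for x in X]
    cov = sum(m @ m.T for m in mats) / len(mats)
    vals, vecs = np.linalg.eigh(cov)
    return vecs[:, -rank:]                           # leading eigenvectors

A1_hat = modewise_pca(X, mode=0, rank=ranks[0])
# Principal-angle check: column spaces of A1_hat and A[0] should be close
print(np.linalg.svd(A1_hat.T @ A[0], compute_uv=False))
```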

Approximating field variables and data vectors from sparse samples is a key challenge in computational science. Widely used methods such as gappy proper orthogonal decomposition and empirical interpolation rely on linear approximation spaces, limiting their effectiveness for data representing transport-dominated and wave-like dynamics. To address this limitation, we introduce quadratic manifold sparse regression, which trains quadratic manifolds with a sparse greedy method and computes approximations on the manifold through novel nonlinear projections of sparse samples. The nonlinear approximations obtained with quadratic manifold sparse regression achieve orders of magnitude higher accuracies than linear methods on data describing transport-dominated dynamics in numerical experiments.
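
For context, the sketch below implements the linear baseline (gappy POD reconstruction from sparse samples) that quadratic manifold sparse regression is designed to improve upon; the quadratic manifold training and nonlinear projection themselves are not implemented here.

```python
# Sketch of the linear baseline (gappy POD): reconstruct a full state from
# sparse samples via least squares in a POD basis.  The paper's quadratic
# manifold sparse regression replaces this linear space with a trained
# quadratic manifold; that part is not implemented here.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 200)
snapshots = np.stack([np.exp(-200 * (x - 0.2 - 0.3 * t) ** 2)
                      for t in np.linspace(0, 1, 50)], axis=1)   # travelling pulse

# POD basis from snapshot SVD (linear approximation space)
V = np.linalg.svd(snapshots, full_matrices=False)[0][:, :10]

# Sparse samples of an unseen state
y_full = np.exp(-200 * (x - 0.37) ** 2)
p = rng.choice(len(x), size=20, replace=False)      # sample locations

coeffs, *_ = np.linalg.lstsq(V[p], y_full[p], rcond=None)
y_rec = V @ coeffs                                   # gappy POD reconstruction
print("relative error:", np.linalg.norm(y_rec - y_full) / np.linalg.norm(y_full))
```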

We consider the problem of causal inference based on observational data (or the related missing data problem) with a binary or discrete treatment variable. In that context, we study inference for the counterfactual density functions and contrasts thereof, which can provide more nuanced information than counterfactual means and the average treatment effect. We impose the shape constraint of log-concavity, a type of unimodality constraint, on the counterfactual densities, and then develop doubly robust estimators of the log-concave counterfactual density based on augmented inverse-probability weighted pseudo-outcomes. We provide conditions under which the estimator is consistent in various global metrics. We also develop asymptotically valid pointwise confidence intervals for the counterfactual density functions and for differences and ratios thereof, which serve as a building block for more comprehensive analyses of distributional differences. Finally, we present a method for using our estimator to construct density confidence bands.
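
For reference, the standard augmented inverse-probability weighted (AIPW) pseudo-outcome for a treatment level a, in textbook form; the paper's log-concave density estimator is built from quantities of this type:

```latex
% Standard AIPW pseudo-outcome for treatment level a (textbook form); the
% paper builds its log-concave density estimator from quantities of this type.
\begin{equation*}
  \hat{Y}^{a}
  \;=\;
  \frac{\mathbb{1}\{A = a\}}{\hat{\pi}_a(X)} \bigl(Y - \hat{\mu}_a(X)\bigr)
  \;+\; \hat{\mu}_a(X),
\end{equation*}
where $\hat{\pi}_a(X)$ estimates $\Pr(A = a \mid X)$ and $\hat{\mu}_a(X)$ estimates
$\mathbb{E}[Y \mid A = a, X]$; the construction is doubly robust in that it remains
consistent if either nuisance estimate is consistent.
```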

Two sequential estimators are proposed for the odds p/(1-p) and the log odds log(p/(1-p)), respectively, using independent Bernoulli random variables with parameter p as inputs. The estimators are unbiased and guarantee that the variance of the estimation error divided by the true value of the odds, or the variance of the estimation error of the log odds, is less than a target value for any p in (0,1). The estimators are close to optimal in the sense of Wolfowitz's bound.
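
As a classical illustration of sequential unbiased odds estimation (not the paper's estimators, which additionally control the error variance): under inverse sampling that stops at the r-th failure, the number of observed successes S satisfies E[S] = r p/(1-p), so S/r is unbiased for the odds.

```python
# Classical illustration (not the paper's estimator): under inverse sampling
# that stops at the r-th failure, the number of observed successes S satisfies
# E[S] = r * p/(1-p), so S/r is an unbiased sequential estimator of the odds.
# The paper's estimators additionally control the (relative) error variance.
import numpy as np

def odds_estimate(p, r, rng):
    successes = failures = 0
    while failures < r:                     # sequential sampling rule
        if rng.random() < p:
            successes += 1
        else:
            failures += 1
    return successes / r

rng = np.random.default_rng(3)
p, r = 0.3, 50
estimates = [odds_estimate(p, r, rng) for _ in range(20_000)]
print("mean estimate:", np.mean(estimates), " true odds:", p / (1 - p))
```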
