We present a new computational method for simulating reflection high-energy electron diffraction (RHEED) and total-reflection high-energy positron diffraction (TRHEPD) experiments, two techniques commonly used for the structural analysis of material surfaces. The present paper improves the conventional numerical approach, the multi-slice method, for faster computation: the new method avoids the matrix-eigenvalue solver in the computation of matrix exponentials and can adopt higher-order ordinary differential equation solvers. Moreover, we propose a high-performance implementation based on multi-thread parallelization and cache-reusable subroutines. In our tests, the new method performs up to 2,000 times faster than the conventional method.
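
To make the contrast concrete, here is a minimal sketch, not the paper's code, of computing the action of a matrix exponential with a high-order ODE solver instead of an eigenvalue decomposition; the random matrix is a hypothetical stand-in for the propagation matrix of one slice.

```python
# A minimal sketch, not the paper's code: compute y(z) = exp(A*z) @ y0 by
# integrating the linear ODE dy/dz = A @ y with a high-order solver, avoiding
# any eigenvalue decomposition of A. The random matrix A is a hypothetical
# stand-in for the propagation matrix of one slice.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 200
A = 0.05 * rng.standard_normal((n, n))   # assumed slice matrix
y0 = rng.standard_normal(n)
z_end = 1.0

# Eigenvalue-free route: 8th-order explicit Runge-Kutta (DOP853).
sol = solve_ivp(lambda z, y: A @ y, (0.0, z_end), y0,
                method="DOP853", rtol=1e-10, atol=1e-12)
y_ode = sol.y[:, -1]

# Conventional dense matrix exponential, for comparison.
y_ref = expm(A * z_end) @ y0
print("relative error:", np.linalg.norm(y_ode - y_ref) / np.linalg.norm(y_ref))
```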

With computational models becoming more expensive and complex, surrogate models have gained increasing attention in many scientific disciplines and are often necessary to conduct sensitivity studies, parameter optimization, etc. In the scientific discipline of uncertainty quantification (UQ), model input quantities are often described by probability distributions. To construct a surrogate model, a space-filling design is generated in the input space to define training points, and the computational model is evaluated at these points. The physical parameter space is often transformed into an i.i.d. uniform input space so that space-filling training procedures can be applied in a sensible way. Owing to this transformation, however, surrogate modeling techniques tend to lose prediction accuracy. Therefore, a new method is proposed in this paper in which the input parameter transformations are applied to the basis functions of universal kriging. To speed up hyperparameter optimization for universal kriging, suitable expressions for efficient gradient-based optimization are developed. Several benchmark functions are investigated, and the proposed method is compared with conventional methods.
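
The following is a minimal universal-kriging sketch under assumed ingredients (a Gaussian correlation kernel, fixed hyperparameters, and a hypothetical inverse-CDF transform standing in for the paper's input transformation), illustrating where the transformed basis functions enter.

```python
# Minimal universal-kriging sketch; hyperparameters are fixed rather than
# optimized, and the Gaussian-input transform is an illustrative assumption.
import numpy as np
from scipy.stats import norm

def gauss_corr(X1, X2, theta):
    # Gaussian correlation R_ij = exp(-sum_k theta_k * (x1_ik - x2_jk)^2)
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2 * theta).sum(-1)
    return np.exp(-d2)

rng = np.random.default_rng(1)
X = rng.uniform(size=(40, 1))                       # i.i.d. uniform design space
T = norm.ppf(X)                                     # assumed physical (Gaussian) space
y = np.sin(T[:, 0]) + 0.5 * T[:, 0]                 # toy response

theta = np.array([5.0])
F = np.column_stack([np.ones(len(X)), T])           # basis on *transformed* inputs
R = gauss_corr(X, X, theta) + 1e-10 * np.eye(len(X))
Ri = np.linalg.solve(R, np.eye(len(X)))
beta = np.linalg.solve(F.T @ Ri @ F, F.T @ Ri @ y)  # generalized least squares
gamma = Ri @ (y - F @ beta)

def predict(Xs):
    fs = np.column_stack([np.ones(len(Xs)), norm.ppf(Xs)])
    r = gauss_corr(Xs, X, theta)
    return fs @ beta + r @ gamma                    # universal-kriging mean

print(predict(rng.uniform(size=(5, 1))))
```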

An asymptotic-preserving (AP) and energy-stable scheme for the Euler-Poisson system under the quasineutral scaling is designed and analysed. Correction terms are introduced in the convective fluxes and the electrostatic potential, which lead to the dissipation of mechanical energy and entropy stability. The resolution of the semi-implicit-in-time, finite-volume-in-space, fully discrete scheme involves two steps: the solution of an elliptic problem for the potential and an explicit evaluation for the density and velocity. The proposed scheme possesses several physically relevant attributes, such as entropy stability and consistency with the weak formulation of the continuous Euler-Poisson system. The AP property of the scheme, i.e. the boundedness of the mesh parameters with respect to the Debye length and its consistency with the quasineutral limit system, is shown. The results of numerical case studies are presented to substantiate the robustness and efficiency of the proposed method.
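
A structural sketch of that two-step resolution follows; the 1D periodic grid, the scaled Poisson equation, and the Rusanov flux are illustrative assumptions, and the paper's correction terms and semi-implicit coupling are not reproduced.

```python
# Structural sketch only: one time step split into (1) an elliptic solve for
# the potential and (2) an explicit finite-volume update. The 1D periodic
# grid, the scaled Poisson equation -lam^2 * phi_xx = rho - 1 (unit ion
# background), and the Rusanov flux for isothermal Euler (p = rho) are
# illustrative assumptions; the paper's correction terms are not reproduced.
import numpy as np

n, lam, dt = 256, 1e-2, 1e-4
dx = 1.0 / n
x = np.arange(n) * dx
rho = 1.0 + 0.01 * np.sin(2 * np.pi * x)   # near-quasineutral density
u = np.zeros(n)

def poisson_periodic(rhs):
    # Spectral solve of -lam^2 * phi'' = rhs with zero-mean phi.
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    rhs_hat = np.fft.fft(rhs)
    phi_hat = np.zeros_like(rhs_hat)
    phi_hat[k != 0] = rhs_hat[k != 0] / (lam**2 * k[k != 0]**2)
    return np.fft.ifft(phi_hat).real

def rusanov(rho, u):
    # Local Lax-Friedrichs fluxes at interfaces i+1/2 (sound speed = 1).
    q = rho * u
    f1, f2 = q, q * u + rho
    a = np.maximum(np.abs(u) + 1.0, np.roll(np.abs(u) + 1.0, -1))
    F1 = 0.5 * (f1 + np.roll(f1, -1)) - 0.5 * a * (np.roll(rho, -1) - rho)
    F2 = 0.5 * (f2 + np.roll(f2, -1)) - 0.5 * a * (np.roll(q, -1) - q)
    return F1, F2

phi = poisson_periodic(rho - 1.0)                        # step 1: elliptic solve
E = -(np.roll(phi, -1) - np.roll(phi, 1)) / (2 * dx)     # E = -phi_x
F1, F2 = rusanov(rho, u)                                 # step 2: explicit update
q = rho * u
rho = rho - dt / dx * (F1 - np.roll(F1, 1))
q = q - dt / dx * (F2 - np.roll(F2, 1)) + dt * rho * E   # momentum with force term
u = q / rho
print("max density deviation:", np.abs(rho - 1.0).max())
```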

We consider the community detection problem in random hypergraphs under the non-uniform hypergraph stochastic block model (HSBM), where each hyperedge appears independently with some given probability depending only on the labels of its vertices. We establish, for the first time in the literature, a sharp threshold for exact recovery in this non-uniform case, subject to minor constraints; in particular, we consider the model with multiple communities ($K \geq 2$). One crucial point is that, by aggregating information from all the uniform layers, exact recovery is achievable in cases where it would appear impossible if each layer were considered alone. Two efficient algorithms that successfully achieve exact recovery above the threshold are provided. The theoretical analysis of our algorithms relies on the concentration and regularization of the adjacency matrix for non-uniform random hypergraphs, which could be of independent interest. We also address some open problems regarding parameter knowledge and estimation.
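
As a rough illustration of the layer-aggregation idea, the sketch below sums weighted adjacency matrices built from each uniform layer and clusters the leading eigenvectors; this generic spectral pipeline and the layer weights are assumptions, not the paper's algorithms.

```python
# Generic spectral stand-in (not the paper's algorithms): aggregate the
# uniform layers of a non-uniform hypergraph into one weighted adjacency
# matrix, then cluster its top-K eigenvectors with k-means.
import numpy as np
from itertools import combinations
from scipy.cluster.vq import kmeans2

def layer_adjacency(n, hyperedges):
    # A_ij = number of hyperedges in this layer containing both i and j.
    A = np.zeros((n, n))
    for e in hyperedges:
        for i, j in combinations(e, 2):
            A[i, j] += 1
            A[j, i] += 1
    return A

def aggregate_and_cluster(n, layers, weights, K):
    A = sum(w * layer_adjacency(n, L) for w, L in zip(weights, layers))
    vals, vecs = np.linalg.eigh(A)
    U = vecs[:, -K:]                         # top-K eigenvectors
    _, labels = kmeans2(U, K, minit="++", seed=0)
    return labels

# Toy example: a 2-uniform layer (graph edges) and a 3-uniform layer.
layers = [[(0, 1), (1, 2), (3, 4), (4, 5)],
          [(0, 1, 2), (3, 4, 5)]]
print(aggregate_and_cluster(6, layers, weights=[1.0, 1.0], K=2))
```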

Karger (STOC 1995) gave the first FPTAS for the network (un)reliability problem, setting in motion research over the next three decades that obtained increasingly faster running times, eventually leading to a $\tilde{O}(n^2)$-time algorithm (Karger, STOC 2020). This represented a natural culmination of this line of work because the algorithmic techniques used can enumerate $\Theta(n^2)$ (near-)minimum cuts. In this paper, we go beyond this quadratic barrier and obtain a faster FPTAS for the network unreliability problem. Our algorithm runs in $m^{1+o(1)} + \tilde{O}(n^{1.5})$ time. Our main contribution is a new estimator for network unreliability in very reliable graphs. These graphs are usually the bottleneck for network unreliability since the disconnection event is rare. Our estimator is obtained by defining an appropriate importance sampling subroutine on a dual spanning tree packing of the graph. To complement this estimator for very reliable graphs, we use recursive contraction for moderately reliable graphs. We show that an interleaving of sparsification and contraction yields a better parametrization of the recursive contraction algorithm, with a faster running time matching the one obtained for the very reliable case.
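
For context, the sketch below shows the naive Monte Carlo baseline (not the paper's estimator): a very reliable graph disconnects with tiny probability u, so naive sampling needs on the order of 1/u trials, which is exactly the rare-event bottleneck that the importance-sampling subroutine targets.

```python
# Naive Monte Carlo baseline for network unreliability u(G, p) = P(G
# disconnects when each edge fails independently with probability p).
# This is for context only; it is not the paper's estimator.
import random

def connected(n, edges):
    # Union-find connectivity check on the surviving edges.
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    comps = n
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            comps -= 1
    return comps == 1

def naive_unreliability(n, edges, p, samples=100_000, seed=0):
    rng = random.Random(seed)
    fails = 0
    for _ in range(samples):
        surviving = [e for e in edges if rng.random() >= p]
        fails += not connected(n, surviving)
    return fails / samples

# 4-cycle with edge failure probability 0.05.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(naive_unreliability(4, edges, p=0.05))
```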

The Black-Scholes (B-S) equation has recently been extended to a class of tempered time-fractional B-S equations, which constitute an interesting mathematical model in option pricing. In this study, we provide a fast numerical method to approximate the solution of the tempered time-fractional B-S model. To achieve high-order accuracy in space and overcome the weak initial singularity of the exact solution, we combine a compact difference operator with an L1-type approximation under nonuniform time steps to obtain the numerical scheme. The proposed difference scheme is proved to be unconditionally stable and convergent. Moreover, the kernel function in the tempered Caputo fractional derivative is approximated by a sum of exponentials, which leads to a fast, unconditionally stable compact difference method with reduced computational cost. Finally, numerical results demonstrate the effectiveness of the proposed methods.
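
The sum-of-exponentials ingredient can be illustrated as follows; the construction below (a log-substituted trapezoidal quadrature of an integral representation of the power-law kernel) is an assumed, generic recipe rather than the paper's quadrature.

```python
# Assumed, generic sum-of-exponentials construction (not the paper's
# quadrature): approximate the weakly singular kernel t^(-alpha) by
# sum_j w_j * exp(-s_j * t), discretizing the identity
#   t^(-alpha) = (1 / Gamma(alpha)) * int_0^inf s^(alpha - 1) * exp(-s*t) ds
# with the substitution s = e^x and a trapezoidal rule. Multiplying by the
# tempering factor exp(-lambda*t) merely shifts every exponent s_j by lambda.
import numpy as np
from scipy.special import gamma

def soe_nodes(alpha, x_min=-20.0, x_max=15.0, m=350):
    xs = np.linspace(x_min, x_max, m)
    h = xs[1] - xs[0]
    s = np.exp(xs)                              # exponential rates s_j
    w = h * np.exp(alpha * xs) / gamma(alpha)   # quadrature weights w_j
    return s, w

alpha = 0.6
s, w = soe_nodes(alpha)
for t in (1e-3, 1e-1, 1.0, 10.0):
    approx = np.sum(w * np.exp(-s * t))
    print(f"t = {t:g}, relative error = {abs(approx - t**-alpha) * t**alpha:.2e}")
```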

Existing action recognition methods are typically actor-specific due to the intrinsic topological and apparent differences among the actors. This requires actor-specific pose estimation (e.g., humans vs. animals), leading to cumbersome model-design complexity and high maintenance costs. Moreover, such methods often focus on learning the visual modality alone, with single-label classification, whilst neglecting other available information sources (e.g., class name text) and the concurrent occurrence of multiple actions. To overcome these limitations, we propose a new approach called 'actor-agnostic multi-modal multi-label action recognition,' which offers a unified solution for various types of actors, including humans and animals. We further formulate a novel Multi-modal Semantic Query Network (MSQNet) model in a transformer-based object detection framework (e.g., DETR), characterized by leveraging visual and textual modalities to better represent the action classes. The elimination of actor-specific model designs is a key advantage, as it removes the need for actor pose estimation altogether. Extensive experiments on five publicly available benchmarks show that our MSQNet consistently outperforms prior actor-specific alternatives on human and animal single- and multi-label action recognition tasks by up to 50%. Code will be released at //github.com/mondalanindya/MSQNet.
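
The query-fusion idea can be sketched as follows; this simplified module (names, sizes, and the mean-pooled fusion) is a hypothetical rendering for illustration, not the released MSQNet code.

```python
# Hypothetical, simplified sketch: action-class text embeddings are fused
# with a pooled visual feature to form decoder queries; a transformer decoder
# attends to visual tokens; each query yields one sigmoid multi-label logit.
import torch
import torch.nn as nn

class MultiModalQueryHead(nn.Module):
    def __init__(self, d_model=256, n_heads=8, n_layers=2):
        super().__init__()
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, n_layers)
        self.fuse = nn.Linear(2 * d_model, d_model)  # text + visual -> query
        self.classify = nn.Linear(d_model, 1)        # one logit per class query

    def forward(self, visual_tokens, text_embed):
        # visual_tokens: (B, N, D) patch/frame features from any backbone
        # text_embed:    (C, D) one embedding per action class name
        B = visual_tokens.size(0)
        pooled = visual_tokens.mean(dim=1, keepdim=True)           # (B, 1, D)
        txt = text_embed.unsqueeze(0).expand(B, -1, -1)            # (B, C, D)
        queries = self.fuse(torch.cat(
            [txt, pooled.expand(-1, txt.size(1), -1)], dim=-1))    # (B, C, D)
        out = self.decoder(queries, visual_tokens)                 # (B, C, D)
        return self.classify(out).squeeze(-1)                      # (B, C) logits

head = MultiModalQueryHead()
logits = head(torch.randn(2, 64, 256), torch.randn(10, 256))
loss = nn.functional.binary_cross_entropy_with_logits(
    logits, torch.randint(0, 2, (2, 10)).float())   # multi-label BCE
```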

We present two families of sub-grid scale (SGS) turbulence models developed for large-eddy simulation (LES) purposes. Their development required the formulation of physics-informed, robust, and efficient deep learning (DL) algorithms which, unlike state-of-the-art analytical modeling techniques, can produce high-order complex non-linear relations between inputs and outputs. Explicit filtering of data from direct simulations of the canonical channel flow at two friction Reynolds numbers, $Re_\tau\approx 395$ and $590$, provided accurate data for training and testing. The two sets of models use different network architectures. One architecture uses tensor basis neural networks (TBNN) and embeds the simplified analytical model form of the general effective-viscosity hypothesis, thus incorporating the Galilean, rotational, and reflectional invariances. The other architecture is a relatively simple network that incorporates only the Galilean invariance. However, this simpler architecture has better feature-extraction capacity owing to its ability to establish relations between, and extract information from, cross-components of the integrity-basis tensors and the SGS stresses. Both sets of models are used to predict the SGS stresses for feature datasets generated with different filter widths and at different Reynolds numbers. It is shown that, due to its better feature-learning capabilities, the simpler model outperforms the invariance-embedding model in statistical performance metrics. In a priori tests, both sets of models provide similar levels of dissipation and backscatter. Based on the test results, both sets of models should be usable in actual a posteriori LES.
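
A minimal TBNN-style module is sketched below, assuming the standard form in which an MLP maps scalar invariants to coefficients that weight precomputed basis tensors; the invariant and basis definitions are left abstract, and this is not the paper's trained model.

```python
# Sketch of the tensor-basis idea (an assumed minimal form): an MLP maps
# scalar invariants to coefficients g_i, and the predicted SGS stress is the
# invariance-preserving combination sum_i g_i * T_i of basis tensors,
# following the general effective-viscosity hypothesis.
import torch
import torch.nn as nn

class TBNN(nn.Module):
    def __init__(self, n_invariants=5, n_basis=10, width=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_invariants, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, n_basis))

    def forward(self, invariants, basis):
        # invariants: (B, n_invariants) scalar invariants of the filtered field
        # basis:      (B, n_basis, 3, 3) integrity-basis tensors T_i
        g = self.mlp(invariants)                       # (B, n_basis)
        return torch.einsum("bi,bixy->bxy", g, basis)  # (B, 3, 3) SGS stress

model = TBNN()
tau = model(torch.randn(8, 5), torch.randn(8, 10, 3, 3))
print(tau.shape)  # torch.Size([8, 3, 3])
```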

Since their introduction in Abadie and Gardeazabal (2003), Synthetic Control (SC) methods have quickly become one of the leading methods for estimating causal effects in observational studies with panel data. Formal discussions often motivate SC methods by the assumption that the potential outcomes were generated by a factor model. Here we study SC methods from a design-based perspective, assuming a model for the selection of the treated unit(s) and period(s). We show that the standard SC estimator is generally biased under random assignment. We propose a Modified Unbiased Synthetic Control (MUSC) estimator that guarantees unbiasedness under random assignment and derive its exact, randomization-based, finite-sample variance. We also propose an unbiased estimator for this variance. We document, in settings with real data, that under random assignment, SC-type estimators can have root mean-squared errors substantially lower than those of other common estimators. We show that such an improvement is weakly guaranteed if the treated period is similar to the other periods, for example, if the treated period was randomly selected. While our results only directly apply in settings where treatment is assigned randomly, we believe that they can complement model-based approaches even for observational studies.
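
The standard SC weight fit referred to above can be sketched as follows (the MUSC modification itself is not reproduced here): nonnegative weights summing to one are chosen so that the weighted controls track the treated unit in the pre-treatment periods.

```python
# Sketch of the standard synthetic-control weight fit (not the MUSC
# estimator): minimize pre-treatment tracking error over the simplex.
import numpy as np
from scipy.optimize import minimize

def sc_weights(Y0_pre, y1_pre):
    # Y0_pre: (T_pre, J) control outcomes; y1_pre: (T_pre,) treated outcomes.
    J = Y0_pre.shape[1]
    obj = lambda w: np.sum((y1_pre - Y0_pre @ w) ** 2)
    res = minimize(obj, np.full(J, 1.0 / J),
                   bounds=[(0.0, 1.0)] * J,
                   constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
    return res.x

rng = np.random.default_rng(2)
Y0_pre = rng.normal(size=(20, 5))
y1_pre = Y0_pre @ np.array([0.5, 0.3, 0.2, 0.0, 0.0]) + 0.01 * rng.normal(size=20)
print(np.round(sc_weights(Y0_pre, y1_pre), 3))  # recovers ~[0.5, 0.3, 0.2, 0, 0]
```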

Information about action costs is critical for real-world AI planning applications. Rather than rely solely on declarative action models, recent approaches also use black-box external action cost estimators, often learned from data, that are applied during the planning phase. These, however, can be computationally expensive, and produce uncertain values. In this paper we suggest a generalization of deterministic planning with action costs that allows selecting between multiple estimators for action cost, to balance computation time against bounded estimation uncertainty. This enables a much richer -- and correspondingly more realistic -- problem representation. Importantly, it allows planners to bound plan accuracy, thereby increasing reliability, while reducing unnecessary computational burden, which is critical for scaling to large problems. We introduce a search algorithm, generalizing $A^*$, that solves such planning problems, and additional algorithmic extensions. In addition to theoretical guarantees, extensive experiments show considerable savings in runtime compared to alternatives.
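
The skeleton below illustrates the general shape of such a search, with a deliberately naive selection policy (re-price the candidate plan with the exact estimator only when its accumulated uncertainty exceeds a tolerance); it is an assumption-laden illustration, not the paper's algorithm.

```python
# Illustrative skeleton only (the selection policy is a naive stand-in, not
# the paper's algorithm): A* where each action cost comes from a cheap,
# uncertain estimator; the found plan is re-priced with an expensive
# estimator only if its accumulated uncertainty exceeds a tolerance eps.
import heapq

def astar(start, goal, succ, cheap_cost, expensive_cost, h, eps):
    # succ(s) -> iterable of (action, next_state)
    # cheap_cost(a) -> (estimate, uncertainty); expensive_cost(a) -> exact cost
    frontier = [(h(start), 0.0, 0.0, start, [])]
    best_g = {start: 0.0}
    while frontier:
        f, g, unc, s, plan = heapq.heappop(frontier)
        if s == goal:
            if unc <= eps:
                return plan, g
            # Uncertainty too large: re-price the plan with the exact estimator.
            return plan, sum(expensive_cost(a) for a in plan)
        for a, t in succ(s):
            c, u = cheap_cost(a)
            gt = g + c
            if gt < best_g.get(t, float("inf")):
                best_g[t] = gt
                heapq.heappush(frontier, (gt + h(t), gt, unc + u, t, plan + [a]))
    return None, float("inf")

# Toy line graph 0 -> 1 -> 2 with unit actions.
succ = lambda s: [((s, s + 1), s + 1)] if s < 2 else []
plan, cost = astar(0, 2, succ, cheap_cost=lambda a: (1.0, 0.2),
                   expensive_cost=lambda a: 1.05, h=lambda s: 0.0, eps=0.1)
print(plan, cost)
```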

In the realm of Tiny AI, we introduce "You Only Look at Interested Cells" (YOLIC), an efficient method for object localization and classification on edge devices. Seamlessly blending the strengths of semantic segmentation and object detection, YOLIC offers superior computational efficiency and precision. By adopting Cells of Interest for classification instead of individual pixels, YOLIC encapsulates relevant information, reduces computational load, and enables rough object shape inference. Importantly, the need for bounding box regression is obviated, as YOLIC capitalizes on a predetermined cell configuration that provides information about potential object location, size, and shape. To tackle the limitations of single-label classification, a multi-label classification approach is applied to each cell, effectively recognizing overlapping or closely situated objects. This paper presents extensive experiments on multiple datasets, demonstrating that YOLIC achieves detection performance comparable to that of state-of-the-art YOLO algorithms while surpassing them in speed, exceeding 30 fps on a Raspberry Pi 4B CPU. All resources related to this study, including datasets, the cell designer, the image annotation tool, and source code, have been made publicly available on our project website at //kai3316.github.io/yolic.github.io
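
The cell-wise formulation can be sketched as follows; the backbone, cell count, and head layout are assumptions for illustration, not the released YOLIC code.

```python
# Minimal sketch of the cell-wise classification idea (sizes and layout are
# assumptions): a small backbone ends in a single linear layer that emits one
# multi-label logit vector per predefined cell, so no bounding-box regression
# branch is needed.
import torch
import torch.nn as nn

class CellClassifier(nn.Module):
    def __init__(self, num_cells, num_classes):
        super().__init__()
        self.backbone = nn.Sequential(        # stand-in feature extractor
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(64, num_cells * num_classes)
        self.num_cells, self.num_classes = num_cells, num_classes

    def forward(self, x):
        logits = self.head(self.backbone(x))
        return logits.view(-1, self.num_cells, self.num_classes)

model = CellClassifier(num_cells=96, num_classes=5)
logits = model(torch.randn(2, 3, 224, 224))           # (2, 96, 5)
targets = torch.randint(0, 2, logits.shape).float()   # multi-label per cell
loss = nn.functional.binary_cross_entropy_with_logits(logits, targets)
```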
