
In this paper, we propose a numerical algorithm based on a cell-centered finite volume method to compute the distance from given objects on a three-dimensional computational domain discretized by polyhedral cells. Inspired by the vanishing viscosity method, a Laplacian-regularized eikonal equation is solved, and the Soner boundary condition is applied on the boundary of the domain to avoid a non-viscosity solution. As the regularization parameter, which depends on a characteristic length of the discretized domain, is reduced, a corresponding numerical solution is computed. Convergence to the viscosity solution is verified numerically as the characteristic length, and accordingly the regularization parameter, becomes smaller. The numerical experiments confirm a second experimental order of convergence in the $L^1$ norm error for a smooth solution. Compared to an algorithm that solves a time-dependent form of the eikonal equation, the proposed algorithm dramatically reduces the computational cost when a larger number of cells is used or when the region of interest is far away from the given objects. Moreover, parallel computing on decomposed domains with a $1$-ring face neighborhood structure can be implemented straightforwardly in a standard cell-centered finite volume code.
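
As a brief sketch of the setup described above (generic notation; the exact formulation and sign conventions follow the paper), the regularized problem reads, for a small parameter $\varepsilon > 0$ tied to the characteristic length of the mesh,
\[
\begin{aligned}
|\nabla d_{\varepsilon}| - \varepsilon\, \Delta d_{\varepsilon} &= 1 && \text{in } \Omega, \\
d_{\varepsilon} &= 0 && \text{on } \Gamma \text{ (the given objects)}, \\
\nabla d_{\varepsilon} \cdot \mathbf{n} &\ge 0 && \text{on } \partial\Omega \text{ (Soner boundary condition)},
\end{aligned}
\]
so that $d_{\varepsilon}$ approaches the viscosity solution of the eikonal equation $|\nabla d| = 1$, i.e., the distance function, as $\varepsilon \to 0$.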

Related Content

Over-the-air computation (AirComp) is a known technique in which wireless devices transmit values by analog amplitude modulation so that a function of these values is computed over the communication channel at a common receiver. The physical principle is the superposition property of electromagnetic waves, which naturally returns sums of analog values. Consequently, the applications of AirComp are almost entirely restricted to analog communication systems. However, using digital communications for over-the-air computation would have several benefits, such as error correction, synchronization, acquisition of channel state information, and easier adoption by current digital communication systems. Nevertheless, a common belief is that digital modulations are generally unfeasible for computation tasks because the overlap of digitally modulated signals yields signals that appear meaningless for these tasks. This paper challenges this belief and proposes a fundamentally new computing method, named ChannelComp, for performing over-the-air computations with any digital modulation. In particular, we propose digital modulation formats that allow us to compute a wider class of functions than AirComp can compute, and we propose a feasibility optimization problem that determines the optimal digital modulation for computing functions over the air. The simulation results verify the superior performance of ChannelComp in comparison to AirComp, particularly for product functions, with an improvement of around 10 dB in the computation error.
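
As a rough illustration of the feasibility idea (not the paper's actual optimization; the modulation mapping, the two-transmitter setup, and the noiseless channel are simplifying assumptions), the following sketch checks whether a given digital constellation lets a receiver decode a target function from the noiseless superposition: the computation is feasible only if input tuples that produce the same superposed signal also produce the same function value.

```python
import itertools

def aircomp_feasible(constellation, func, num_tx=2, tol=1e-9):
    """Check whether `func` of the transmitted symbols is decodable from the
    noiseless superposition of `num_tx` signals using `constellation`
    (a dict mapping each input symbol to a complex modulation point)."""
    superposed = {}  # received point -> function value it must decode to
    for inputs in itertools.product(constellation, repeat=num_tx):
        rx = sum(constellation[s] for s in inputs)          # channel adds the waveforms
        key = (round(rx.real / tol), round(rx.imag / tol))  # quantize for comparison
        value = func(inputs)
        if key in superposed and superposed[key] != value:
            return False  # two colliding superpositions demand different outputs
        superposed[key] = value
    return True

# BPSK (bit b -> +1/-1) lets two devices compute both the sum and the product
# of their bits over the air, because colliding points share the function value.
bpsk = {0: 1 + 0j, 1: -1 + 0j}
print(aircomp_feasible(bpsk, lambda x: x[0] + x[1]))  # True
print(aircomp_feasible(bpsk, lambda x: x[0] * x[1]))  # True
```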

The numerical solution of partial differential equations (PDEs) is difficult, having driven a century of research so far. Recently, there have been pushes to build neural--numerical hybrid solvers, which piggy-back on the modern trend towards fully end-to-end learned systems. Most works so far can only generalize over a subset of the properties a generic solver would face, including: resolution, topology, geometry, boundary conditions, domain discretization regularity, dimensionality, etc. In this work, we build a solver satisfying these properties, where all the components are based on neural message passing, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators. We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes. In order to encourage stability in training autoregressive models, we put forward a method based on the principle of zero-stability, posing stability as a domain adaptation problem. We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
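
As a minimal sketch of why message passing representationally contains finite differences (a simplifying illustration, not the paper's architecture; the 1D periodic grid and the hand-set message function are assumptions), a single aggregation of edge messages built from neighbor differences reproduces the standard central second-difference stencil:

```python
import numpy as np

def message_passing_step(u, edges, message_fn):
    """One generic message-passing update on node values `u`:
    each edge (i, j, dx) sends message_fn(u[j] - u[i], dx) to node i,
    and messages arriving at the same node are summed."""
    out = np.zeros_like(u)
    for i, j, dx in edges:
        out[i] += message_fn(u[j] - u[i], dx)
    return out

# 1D periodic grid; edge features carry the relative displacement to the neighbor.
n = 64
h = 2 * np.pi / n
x = np.arange(n) * h
u = np.sin(x)
edges = [(i, (i + 1) % n, +h) for i in range(n)] + [(i, (i - 1) % n, -h) for i in range(n)]

# With this particular hand-chosen message function, the aggregation is exactly
# the central finite-difference Laplacian (u_{i+1} - 2 u_i + u_{i-1}) / h^2;
# in the neural solver a learned message function plays this role.
laplacian = message_passing_step(u, edges, lambda du, dx: du / dx**2)
print(np.max(np.abs(laplacian + np.sin(x))))  # small: d^2/dx^2 sin(x) = -sin(x)
```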

An important requirement in the standard finite element method (FEM) is that all elements in the underlying mesh must be tangle-free, i.e., the Jacobian must be positive throughout each element. To relax this requirement, an isoparametric tangled finite element method (i-TFEM) was recently proposed for linear elasticity problems. It was demonstrated that i-TFEM leads to optimal convergence even for severely tangled meshes. In this paper, i-TFEM is generalized to nonlinear elasticity. Specifically, a variational formulation is proposed that leads to a local modification of the tangent stiffness matrix associated with tangled elements, and an additional piecewise compatibility constraint. i-TFEM reduces to standard FEM for tangle-free meshes. The effectiveness and convergence characteristics of i-TFEM are demonstrated through a series of numerical experiments involving both compressible and incompressible problems.

A class of implicit Milstein type methods is introduced and analyzed in the present article for stochastic differential equations (SDEs) with non-globally Lipschitz drift and diffusion coefficients. By incorporating a pair of method parameters $\theta, \eta \in [0, 1]$ into both the drift and diffusion parts, the new schemes are a kind of drift-diffusion double implicit method. Within a general framework, we offer upper mean-square error bounds for the proposed schemes based on certain error terms that involve only the exact solution process. Such error bounds help us to easily analyze the mean-square convergence rates of the schemes, without relying on a priori high-order moment estimates of the numerical approximations. Under an additional globally polynomial growth condition, we recover the expected mean-square convergence rate of order one for the considered schemes with $\theta \in [\tfrac12, 1], \eta \in [0, 1]$. Also, some of the proposed schemes are applied to solve three SDE models evolving in the positive domain $(0, \infty)$. More specifically, the particular drift-diffusion implicit Milstein method ($\theta = \eta = 1$) is utilized to approximate the Heston $\tfrac32$-volatility model and the stochastic Lotka-Volterra competition model. The semi-implicit Milstein method ($\theta = 1, \eta = 0$) is used to solve the Ait-Sahalia interest rate model. Thanks to the previously obtained error bounds, we reveal the optimal mean-square convergence rate of the positivity-preserving schemes under more relaxed conditions, compared with existing relevant results in the literature. Numerical examples are also reported to confirm the previous findings.
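
As a hedged sketch of the semi-implicit special case ($\theta = 1, \eta = 0$) for a scalar SDE $dX_t = f(X_t)\,dt + g(X_t)\,dW_t$ (this is the classical drift-implicit Milstein step; the paper's $\eta$-implicitness in the diffusion part and its three example models are not reproduced here, and the coefficients below are illustrative), one time step solves a nonlinear equation for $X_{n+1}$:

```python
import numpy as np
from scipy.optimize import brentq

def drift_implicit_milstein_step(x, h, dW, f, g, dg):
    """One drift-implicit (theta = 1) Milstein step for dX = f(X)dt + g(X)dW:
    solve  y = x + f(y)*h + g(x)*dW + 0.5*g(x)*dg(x)*(dW**2 - h)  for y."""
    explicit_part = x + g(x) * dW + 0.5 * g(x) * dg(x) * (dW**2 - h)
    residual = lambda y: y - f(y) * h - explicit_part
    # Bracket a root; for the mean-reverting example below a wide positive bracket works.
    return brentq(residual, 1e-12, 1e3)

# Illustrative mean-reverting SDE with multiplicative noise (not one of the paper's
# three models): dX = kappa*(mu - X) dt + sigma*X dW.
kappa, mu, sigma = 2.0, 1.0, 0.5
f = lambda x: kappa * (mu - x)
g = lambda x: sigma * x
dg = lambda x: sigma          # g'(x)

rng = np.random.default_rng(0)
h, x = 0.01, 1.0
for _ in range(1000):
    x = drift_implicit_milstein_step(x, h, rng.normal(0.0, np.sqrt(h)), f, g, dg)
print(x)  # stays positive and fluctuates around mu
```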

State-of-the-art NPUs are typically architected as a self-contained sub-system with multiple heterogeneous hardware computing modules and a dataflow-driven programming model. The industry lacks a well-established methodology and tools to evaluate and compare the performance of NPUs from different architectures. We present an event-based performance modeling framework, VPU-EM, targeting scalable performance evaluation of modern NPUs across diversified AI workloads. The framework adopts a high-level event-based system-simulation methodology to abstract away design details for speed, while maintaining hardware pipelining, concurrency and interaction with software task scheduling. It is natively developed in Python and built to interface directly with AI frameworks such as TensorFlow, PyTorch, ONNX and OpenVINO, linking various in-house NPU graph compilers to achieve optimized full-model performance. Furthermore, VPU-EM also provides the capability to model the power characteristics of an NPU in Power-EM mode to enable joint performance/power analysis. Using VPU-EM, we conduct performance/power analysis of models from representative neural network architectures. We demonstrate that even though this framework is developed for Intel VPU, an Intel in-house NPU IP technology, the methodology can be generalized for the analysis of modern NPUs.
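
As a rough sketch of the general event-based simulation methodology (a generic discrete-event loop, not VPU-EM's actual implementation; the two-engine pipeline, task names, and latencies below are made up for illustration), tasks are advanced by popping timestamped completion events from a priority queue instead of simulating every clock cycle:

```python
import heapq

def simulate(tasks):
    """Minimal discrete-event loop: pop the earliest completion event, mark the task
    done, and launch any task whose dependencies are satisfied and whose engine is
    idle. `tasks` maps name -> (engine, duration, deps)."""
    done, engine_busy_until, events, started = set(), {}, [], set()

    def try_launch(now):
        for name, (engine, duration, deps) in tasks.items():
            if name in started:
                continue
            if all(d in done for d in deps) and engine_busy_until.get(engine, 0.0) <= now:
                started.add(name)
                engine_busy_until[engine] = now + duration
                heapq.heappush(events, (now + duration, name, engine))

    now = 0.0
    try_launch(now)
    while events:
        now, name, engine = heapq.heappop(events)
        done.add(name)
        print(f"t={now:6.1f} us  {name} done on {engine}")
        try_launch(now)
    return now

# Hypothetical two-layer workload: the second DMA prefetch overlaps with conv0.
tasks = {
    "load0": ("dma", 10.0, []),
    "conv0": ("mac", 40.0, ["load0"]),
    "load1": ("dma", 10.0, []),
    "conv1": ("mac", 35.0, ["load1", "conv0"]),
}
print(f"total latency: {simulate(tasks):.1f} us")
```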

In this paper, a new feature selection algorithm, called SFE (Simple, Fast, and Efficient), is proposed for high-dimensional datasets. The SFE algorithm performs its search process using a search agent and two operators: non-selection and selection. It comprises two phases: exploration and exploitation. In the exploration phase, the non-selection operator performs a global search over the entire problem search space for irrelevant, redundant, trivial, and noisy features, and changes the status of features from selected to non-selected. In the exploitation phase, the selection operator searches the problem search space for features with a high impact on the classification results, and changes the status of features from non-selected to selected. The proposed SFE is successful at feature selection from high-dimensional datasets. However, once it has reduced the dimensionality of a dataset, its performance cannot be improved significantly further. In such situations, an evolutionary computation method can be used to find a more efficient subset of features in the new, reduced search space. To overcome this issue, this paper proposes a hybrid algorithm, SFE-PSO (SFE with particle swarm optimization), to find an optimal feature subset. The efficiency and effectiveness of SFE and SFE-PSO for feature selection are evaluated on 40 high-dimensional datasets. Their performance is compared with six recently proposed feature selection algorithms. The results obtained indicate that the two proposed algorithms significantly outperform the other algorithms and can be used as efficient and effective algorithms for selecting features from high-dimensional datasets.
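
As a loose sketch of the two operators described above (the flip counts, the two-phase split, the fitness function, and the classifier are placeholders, not the paper's exact settings), a binary mask over features is perturbed by a non-selection operator (dropping selected features) and a selection operator (adding non-selected ones), keeping a change only when it improves a classification-based fitness:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def fitness(mask, X, y):
    """Placeholder fitness: cross-validated accuracy of a k-NN on the selected features."""
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=3)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

def sfe_like_search(X, y, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    mask = rng.integers(0, 2, size=X.shape[1])           # random initial subset
    best = fitness(mask, X, y)
    for t in range(iters):
        cand = mask.copy()
        exploring = t < iters // 2                       # crude two-phase split
        if exploring and cand.sum() > 1:                 # non-selection: drop a feature
            cand[rng.choice(np.flatnonzero(cand), size=1)] = 0
        else:                                            # selection: add a feature
            zeros = np.flatnonzero(cand == 0)
            if zeros.size:
                cand[rng.choice(zeros, size=1)] = 1
        score = fitness(cand, X, y)
        if score >= best:                                # greedy acceptance
            mask, best = cand, score
    return mask, best

# Tiny demo on synthetic data (stand-in for a high-dimensional dataset).
X, y = make_classification(n_samples=120, n_features=30, n_informative=5, random_state=0)
mask, acc = sfe_like_search(X, y)
print(mask.sum(), "features selected, CV accuracy", round(acc, 3))
```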

Computing accurate splines of degree greater than three is still a challenging task in today's applications. In this type of interpolation, high-order derivatives are needed on the given mesh. As these derivatives are rarely known and are often not easy to approximate accurately, high-degree splines are difficult to obtain using standard approaches. In Beaudoin (1998), Beaudoin and Beauchemin (2003), and Pepin et al. (2019), a new method to compute spline approximations of low or high degree from equidistant interpolation nodes, based on the discrete Fourier transform, is analyzed. The accuracy of this method depends greatly on the accuracy of the boundary conditions. An algorithm for the computation of the boundary conditions can be found in Beaudoin (1998) and Beaudoin and Beauchemin (2003). However, this algorithm lacks robustness since the approximation of the boundary conditions is strongly dependent on the choice of $\theta$ arbitrary parameters, $\theta$ being the degree of the spline. The goal of this paper is therefore to propose two new robust algorithms, independent of arbitrary parameters, for the computation of the boundary conditions in order to obtain accurate splines of any degree. Numerical results will be presented to show the efficiency of these new approaches.
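
As a small illustration of how much the boundary conditions matter (using SciPy's cubic splines as a stand-in; the DFT-based high-degree method of the paper is not reproduced here), the same equidistant data interpolated with natural versus exact clamped end conditions gives noticeably different errors:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Equidistant nodes of a smooth test function with nonzero end derivatives.
x = np.linspace(0.0, 2.0, 9)
y = np.exp(x)

natural = CubicSpline(x, y, bc_type="natural")            # assumes zero second derivatives
clamped = CubicSpline(x, y, bc_type=((1, np.exp(x[0])),   # exact first derivatives at the ends
                                     (1, np.exp(x[-1]))))

xx = np.linspace(x[0], x[-1], 1001)
print(f"max error, natural BCs: {np.max(np.abs(natural(xx) - np.exp(xx))):.2e}")  # larger, end-dominated
print(f"max error, clamped BCs: {np.max(np.abs(clamped(xx) - np.exp(xx))):.2e}")  # smaller with accurate BCs
```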

Agglomeration-based strategies are important both within adaptive refinement algorithms and for constructing scalable multilevel algebraic solvers. In order to automatically perform agglomeration of polygonal grids, we propose the use of Machine Learning (ML) strategies, which can naturally exploit geometrical information about the mesh in order to preserve grid quality, enhancing the performance of numerical methods and reducing the overall computational cost. In particular, we employ the k-means clustering algorithm and Graph Neural Networks (GNNs) to partition the connectivity graph of a computational mesh. Moreover, GNNs have high online inference speed and the advantage of naturally and simultaneously processing both the graph structure of the mesh and geometrical information, such as the areas of the elements or their barycentric coordinates. These techniques are compared with METIS, a standard algorithm for graph partitioning, which is designed to process only the graph information of the mesh. We demonstrate that the ML strategies enhance performance in terms of quality metrics. These models also show a good degree of generalization when applied to more complex geometries, such as brain MRI scans, and the capability of preserving grid quality. The effectiveness of these strategies is also demonstrated when they are applied to MultiGrid (MG) solvers in a Polygonal Discontinuous Galerkin (PolyDG) framework. In the considered experiments, GNNs show overall the best performance in terms of inference speed, accuracy, and flexibility of the approach.
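
As a toy sketch of the k-means variant (clustering element barycenters with scikit-learn; the structured quadrilateral grid and the cluster count are illustrative choices, not the paper's setup), each cluster of elements becomes one agglomerated polygonal element:

```python
import numpy as np
from sklearn.cluster import KMeans

# Barycenters of the elements of a toy 20x20 structured grid on the unit square.
nx = ny = 20
cx, cy = np.meshgrid((np.arange(nx) + 0.5) / nx, (np.arange(ny) + 0.5) / ny)
barycenters = np.column_stack([cx.ravel(), cy.ravel()])   # one row per element

# Agglomerate the 400 elements into 16 clusters; each cluster label defines
# one coarse (agglomerated) polygonal element.
labels = KMeans(n_clusters=16, n_init=10, random_state=0).fit_predict(barycenters)

sizes = np.bincount(labels)
print("elements per agglomerate:", sizes.min(), "to", sizes.max())
```

In the setting of the paper, the clustering also accounts for the mesh connectivity graph (and the GNNs process it directly); this sketch clusters on barycenters only.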

The weak maximum principle of the isoparametric finite element method is proved for the Poisson equation under the Dirichlet boundary condition in a (possibly concave) curvilinear polyhedral domain with edge openings smaller than $\pi$, a class which includes smooth domains and smooth deformations of convex polyhedra. The proof relies on the analysis of a dual elliptic problem with a discontinuous coefficient matrix arising from the isoparametric finite elements. Therefore, the standard $H^2$ elliptic regularity required in the literature's proofs of the weak maximum principle does not hold for this dual problem. To overcome this difficulty, we decompose the solution into a smooth part and a nonsmooth part, and estimate the two parts by $H^2$ and $W^{1,p}$ estimates, respectively. As an application of the weak maximum principle, we prove a maximum-norm best approximation property of the isoparametric finite element method for the Poisson equation in a curvilinear polyhedron. The proof contains non-trivial modifications of Schatz's argument due to the non-conformity of the isoparametric finite elements, which requires us to construct a globally smooth flow map that maps the curvilinear polyhedron to a perturbed larger domain on which we can establish the $W^{1,\infty}$ regularity estimate of the Poisson equation uniformly with respect to the perturbation.
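
For reference, the two properties discussed above usually take the following generic form (a sketch with $S_h$ denoting the isoparametric finite element space, $a_h$ the discrete bilinear form, and $u_h$ the Galerkin solution; constants, norms, and logarithmic factors may differ from the paper's precise statements):
\[
\|u_h\|_{L^\infty(\Omega)} \le C\, \|u_h\|_{L^\infty(\partial\Omega)}
\quad \text{whenever } a_h(u_h, v_h) = 0 \ \text{ for all } v_h \in S_h \cap H^1_0(\Omega),
\]
and, as a consequence,
\[
\|u - u_h\|_{L^\infty(\Omega)} \le C\, \ell_h \inf_{\chi_h \in S_h} \|u - \chi_h\|_{L^\infty(\Omega)},
\]
with $\ell_h$ a possible logarithmic factor in $h$ for low-order elements.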

With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\beta^{n})/(1-\beta)$, where $n$ is the number of samples and $\beta \in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.
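
As a short sketch of the re-weighting scheme (the class counts and $\beta$ below are arbitrary placeholders), the per-class weight is the inverse of the effective number of samples $(1-\beta^{n})/(1-\beta)$, typically normalized so the weights sum to the number of classes:

```python
import numpy as np

def class_balanced_weights(samples_per_class, beta=0.999):
    """Weights proportional to 1 / effective number, E_n = (1 - beta^n) / (1 - beta),
    normalized to sum to the number of classes."""
    samples_per_class = np.asarray(samples_per_class, dtype=float)
    effective_num = (1.0 - np.power(beta, samples_per_class)) / (1.0 - beta)
    weights = 1.0 / effective_num
    return weights * len(samples_per_class) / weights.sum()

# Hypothetical long-tailed class counts: the rare classes receive larger weights.
counts = [5000, 2000, 500, 100, 20]
print(np.round(class_balanced_weights(counts, beta=0.999), 3))
```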
