
We construct an efficient class of increasingly high-order (up to 17th-order) essentially non-oscillatory schemes with multi-resolution (ENO-MR) for solving hyperbolic conservation laws. The candidate stencils for constructing ENO-MR schemes range from the first-order one-point stencil up to the designed very-high-order stencil. The proposed ENO-MR schemes adopt a very simple and efficient strategy that only requires computing the highest-order derivatives of a subset of the candidate stencils. Besides simplicity and high efficiency, ENO-MR schemes are completely parameter-free and essentially scale-invariant. Theoretical analysis and numerical computations show that ENO-MR schemes achieve the designed high-order convergence in smooth regions, which may contain high-order critical points (local extrema), and retain the ENO property for strong shocks. In addition, ENO-MR schemes capture complex flow structures very well.
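
As a rough illustration of the multi-resolution stencil-selection idea, consider the following minimal sketch. The smoothness test below is a hypothetical stand-in (with an arbitrary threshold) for the paper's parameter-free criterion; it only conveys the structure of enlarging candidate stencils while checking their highest-order undivided differences.

```python
import numpy as np

def select_stencil_order(u, i, max_width=5):
    """Toy multi-resolution stencil selection around cell i (assumes i is
    away from the domain boundaries).

    Starting from the one-point stencil, enlarge the candidate stencil as
    long as its highest-order undivided difference stays comparable to the
    first differences, i.e. the data looks smooth there.  The factor 10.0
    is a hypothetical threshold, not the paper's parameter-free test.
    """
    order = 1
    ref = np.abs(np.diff(u[i - 1:i + 2])).max() + 1e-14  # first differences
    for k in range(2, max_width + 1):
        stencil = u[i - k // 2 : i - k // 2 + k]
        highest = np.abs(np.diff(stencil, n=k - 1)).max()  # (k-1)-th difference
        if highest > 10.0 * ref:   # hypothetical non-smoothness flag
            break                  # stencil likely crosses a discontinuity
        order = k
    return order
```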


The rise of AI in human contexts places new demands on automated systems to be transparent and explainable. We examine some anthropomorphic ideas and principles relevant to such accountability in order to develop a theoretical framework for thinking about digital systems in complex human contexts and the problem of explaining their behaviour. Structurally, systems are made of modular and hierarchical components, which we abstract in a new system model using notions of modes and mode transitions. A mode is an independent component of the system with its own objectives, monitoring data, and algorithms. The behaviour of a mode, including its transitions to other modes, is determined by functions that interpret each mode's monitoring data in the light of its objectives and algorithms. We show how these belief functions can help explain system behaviour by visualising their evaluation as trajectories in higher-dimensional geometric spaces. These ideas are formalised mathematically by abstract and concrete simplicial complexes. We offer three techniques: a framework for design heuristics, a general system theory based on modes, and a geometric visualisation. We apply them in three types of human-centred systems.
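
A minimal sketch of the mode-based model may help fix ideas. The data structure, names, and the belief-function signature below are our own illustration, not the paper's formalism:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Mode:
    """A mode: an independent component with its own objectives,
    monitoring data, and algorithms, as described above."""
    name: str
    objectives: List[str]
    # belief function: monitoring data -> score for each candidate next mode
    belief: Callable[[Dict[str, float]], Dict[str, float]]

def transition(current: Mode, modes: Dict[str, Mode],
               monitoring: Dict[str, float]) -> Mode:
    """Evaluate the current mode's belief function on its monitoring data
    and move to the highest-scoring mode.  A trajectory of such evaluations
    is what the paper visualises geometrically."""
    scores = current.belief(monitoring)
    return modes[max(scores, key=scores.get)]
```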

Auditory spatial attention detection (ASAD) aims to decode the attended spatial location from EEG in a multiple-speaker setting. ASAD methods are inspired by the brain lateralization of cortical neural responses during the processing of auditory spatial attention, and show promising performance for the task of auditory attention decoding (AAD) with neural recordings. Previous ASAD methods do not fully exploit the spatial distribution of the EEG electrodes, which may limit their performance. In the present work, we transform the original EEG channels into a two-dimensional (2D) spatial topological map, so that the EEG data form a three-dimensional (3D) arrangement containing spatial-temporal information. A 3D deep convolutional neural network (DenseNet-3D) is then used to extract temporal and spatial features of the neural representation for the attended locations. The results show that the proposed method achieves higher decoding accuracy than the state-of-the-art (SOTA) method (94.3% compared to XANet's 90.6%) with a 1-second decision window on the widely used KULeuven (KUL) dataset. The code to implement our work is available on GitHub: //github.com/xuxiran/ASAD_DenseNet
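
The channel-to-topography step can be sketched as follows. The grid size and electrode positions are illustrative only; the actual mapping would follow the EEG montage used in the dataset:

```python
import numpy as np

def to_3d_volume(eeg, positions, grid=(9, 9)):
    """Scatter (channels, time) EEG onto a 2D electrode grid, producing a
    (height, width, time) volume suitable for a 3D CNN such as DenseNet-3D.

    eeg:        array of shape (C, T)
    positions:  (row, col) grid coordinate for each of the C channels
    """
    C, T = eeg.shape
    vol = np.zeros((*grid, T), dtype=eeg.dtype)
    for ch, (r, c) in enumerate(positions):
        vol[r, c, :] = eeg[ch]          # unmapped grid cells stay zero
    return vol

# e.g. 64 channels, 128 samples (a 1-second window at 128 Hz)
rng = np.random.default_rng(0)
eeg = rng.standard_normal((64, 128))
positions = [(i // 8 + 1, i % 8) for i in range(64)]  # illustrative layout
volume = to_3d_volume(eeg, positions)                 # shape (9, 9, 128)
```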

We initiate the study of Boolean function analysis on high-dimensional expanders. We give a random-walk based definition of high-dimensional expansion, which coincides with the earlier definition in terms of two-sided link expanders. Using this definition, we describe an analog of the Fourier expansion and the Fourier levels of the Boolean hypercube for simplicial complexes. Our analog is a decomposition into approximate eigenspaces of random walks associated with the simplicial complexes. Our random-walk definition and the decomposition have the additional advantage that they extend to the more general setting of posets, encompassing both high-dimensional expanders and the Grassmann poset, which appears in recent work on the unique games conjecture. We then use this decomposition to extend the Friedgut-Kalai-Naor theorem to high-dimensional expanders. Our results demonstrate that a constant-degree high-dimensional expander can sometimes serve as a sparse model for the Boolean slice or hypercube, and quite possibly additional results from Boolean function analysis can be carried over to this sparse model. Therefore, this model can be viewed as a derandomization of the Boolean slice, containing only $|X(k-1)|=O(n)$ points in contrast to $\binom{n}{k}$ points in the $k$-slice (which consists of all $n$-bit strings with exactly $k$ ones).
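
In rough schematic form (our notation, suppressing the precise quantifiers of the paper), the decomposition says that every function $f$ on the $k$-faces $X(k)$ of the complex splits as

\[ f = f_0 + f_1 + \cdots + f_k, \qquad \| M f_i - \lambda_i f_i \| \le \epsilon \, \| f_i \| \quad (0 \le i \le k), \]

where $M$ is the associated random-walk operator, the levels $i$ play the role of Fourier levels, and the approximation error $\epsilon$ is governed by the expansion of the complex. On the Boolean hypercube the analogous decomposition is exact and recovers the usual Fourier levels.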

We study the Euler scheme for scalar non-autonomous stochastic differential equations whose diffusion coefficient is not globally Lipschitz but a fractional power of a globally Lipschitz function. We analyse the strong error and establish a criterion, which relates the convergence order of the Euler scheme to an inverse moment condition for the diffusion coefficient. Our result in particular applies to Cox-Ingersoll-Ross-, Chan-Karolyi-Longstaff-Sanders- or Wright-Fisher-type stochastic differential equations and thus provides a unifying framework.
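
For concreteness, here is a minimal Euler-Maruyama sketch for a CIR-type equation covered by this setting; the truncation of the square root and all parameter values are ours, chosen for illustration, and are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def euler_maruyama_cir(x0, kappa, theta, sigma, T, n):
    """Euler scheme for the CIR-type SDE
        dX_t = kappa*(theta - X_t) dt + sigma*sqrt(X_t) dW_t,
    whose diffusion coefficient is a fractional power (exponent 1/2) of
    the globally Lipschitz map x -> x.  The truncation max(x, 0) keeps
    the square root well defined along the discrete path.
    """
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        dW = rng.normal(scale=np.sqrt(dt))
        x[k + 1] = (x[k] + kappa * (theta - x[k]) * dt
                    + sigma * np.sqrt(max(x[k], 0.0)) * dW)
    return x

path = euler_maruyama_cir(x0=0.5, kappa=1.0, theta=0.8, sigma=0.4, T=1.0, n=1000)
```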

In variable selection, a selection rule that prescribes the permissible sets of selected variables (called a "selection dictionary") is desirable due to the inherent structural constraints among the candidate variables. Such selection rules can be complex in real-world data analyses, and failing to incorporate them can not only compromise the interpretability of the model but also reduce prediction accuracy. However, no general framework has been proposed to formalize selection rules and their applications, which poses a significant challenge for practitioners seeking to integrate these rules into their analyses. In this work, we establish a framework for structured variable selection that can incorporate universal structural constraints. We develop a mathematical language for constructing arbitrary selection rules, in which the selection dictionary is formally defined. We demonstrate that all selection rules can be expressed as combinations of operations on constructs, facilitating the identification of the corresponding selection dictionary. Once this selection dictionary is derived, practitioners can apply their own user-defined criteria to select the optimal model. Additionally, our framework enhances existing penalized regression methods for variable selection by providing guidance on how to appropriately group variables to achieve the desired selection rule. Furthermore, the framework opens the door to new $\ell_0$-norm-based penalized regression techniques that respect arbitrary selection rules, expanding the possibilities for more robust and tailored model development.
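
A toy example of a selection dictionary may clarify the concept. The rule below ("x2 may be selected only if x1 is") is a hypothetical example of a structural constraint, and the enumeration simply lists all permissible selected-variable sets:

```python
from itertools import chain, combinations

variables = ("x1", "x2", "x3")

def permissible(subset):
    # hypothetical selection rule: x2 may enter only together with x1
    return "x2" not in subset or "x1" in subset

powerset = chain.from_iterable(
    combinations(variables, r) for r in range(len(variables) + 1))
dictionary = [set(s) for s in powerset if permissible(s)]
# [set(), {'x1'}, {'x3'}, {'x1','x2'}, {'x1','x3'}, {'x1','x2','x3'}]
```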

We present accurate and mathematically consistent formulations of a diffuse-interface model for two-phase flow problems involving rapid evaporation. The model addresses challenges including jumps in the density field of several orders of magnitude, which lead to large velocity and pressure jumps across the liquid-vapor interface, along with dynamically changing interface topologies. To this end, we integrate an incompressible Navier--Stokes solver, combined with a conservative level-set formulation and a regularized, i.e., diffuse, representation of discontinuities, into a matrix-free adaptive finite element framework. The achievements are threefold. First, this work proposes mathematically consistent definitions for the level-set transport velocity in the diffuse interface region by extrapolating the velocity from the liquid or gas phase; these exhibit superior prediction accuracy for the evaporated mass and the resulting interface dynamics compared to a local velocity evaluation, especially for highly curved interfaces. Second, we show that accurate prediction of the evaporation-induced pressure jump requires a consistent, namely a reciprocal, density interpolation across the interface, which satisfies local mass conservation. Third, the combination of diffuse interface models for evaporation with standard Stokes-type constitutive relations for viscous flows leads to significant pressure artifacts in the diffuse interface region; to mitigate these, we propose a modification of such constitutive models. The aforementioned properties are validated through selected analytical and numerical examples. The presented model promises new insights into the simulation-based prediction of melt-vapor interactions in thermal multiphase flows, such as in laser-based powder bed fusion of metals.
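
In our notation (with $H(\phi)\in[0,1]$ a regularized indicator of the liquid phase across the diffuse interface), the reciprocal density interpolation referred to above can be written as

\[ \frac{1}{\rho(\phi)} = \frac{H(\phi)}{\rho_\ell} + \frac{1 - H(\phi)}{\rho_v}, \]

in contrast to the arithmetic mean $\rho = H\rho_\ell + (1-H)\rho_v$; interpolating $1/\rho$ rather than $\rho$ is what makes the interpolation consistent with local mass conservation in the sense described above.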

With insurers benefiting from ever-larger amounts of data of increasing complexity, we explore in this paper a data-driven method to model dependence within multilevel claims. More specifically, we start from the non-parametric estimator for Archimedean copula generators introduced by Genest and Rivest (1993) and extend it to diverse flexible censoring scenarios using techniques derived from survival analysis. We implement a graphical selection procedure for copulas, which we validate using goodness-of-fit methods applied to complete, single-censored, and double-censored bivariate data. We illustrate the performance of our model with multiple simulation studies. We then apply our methodology to a recent Canadian automobile insurance dataset, where we seek to model the dependence between the activation delays of correlated coverages. We show that our model performs well in selecting the best-fitting copula for the data at hand, especially when the dataset is large, and that the results can then be used as part of a larger claims reserving methodology.
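
The complete-data starting point can be sketched briefly. The empirical Kendall distribution below is the classical ingredient of the Genest-Rivest (1993) estimator; the censoring extensions developed in the paper are not shown:

```python
import numpy as np

def empirical_kendall(u, v):
    """Empirical Kendall distribution K(t) = P(C(U,V) <= t) for complete
    bivariate data, from which the Archimedean generator is recovered in
    the Genest-Rivest approach."""
    n = len(u)
    # pseudo-observations: proportion of points strictly below (u_i, v_i)
    w = np.array([np.sum((u < u[i]) & (v < v[i]))
                  for i in range(n)]) / (n - 1)
    return lambda t: float(np.mean(w <= t))

rng = np.random.default_rng(1)
u, v = rng.uniform(size=(2, 500))   # independent toy sample
K = empirical_kendall(u, v)
print(K(0.5))   # for independence K(t) = t - t*log(t), so K(0.5) ~ 0.85
```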

It is well known that a multilinear system with a nonsingular M-tensor and a positive right-hand side has a unique positive solution. Tensor splitting methods, which generalize the classical iterative methods for linear systems, have been proposed for finding this unique positive solution. The Alternating Anderson-Richardson (AAR) method is an effective way to accelerate classical iterative methods. In this study, we apply the idea of AAR to find the unique positive solution quickly. We first present a tensor Richardson method based on tensor regular splittings, then apply Anderson acceleration to it to derive a tensor Anderson-Richardson method, and finally employ the tensor Anderson-Richardson method periodically within the tensor Richardson method to obtain a tensor AAR method. Numerical experiments show that the proposed method is effective in accelerating tensor splitting methods.
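
Anderson acceleration of a generic fixed-point map can be sketched as follows; here `g` stands in for one tensor Richardson sweep, and this is a textbook formulation rather than the paper's exact tensor AAR algorithm (in particular, the periodic switching between plain Richardson and Anderson steps is not shown):

```python
import numpy as np

def anderson(g, x0, m=3, iters=100, tol=1e-10):
    """Anderson acceleration of the fixed-point iteration x <- g(x),
    keeping a sliding window of the last m iterate/residual differences.
    """
    xs, fs = [x0], [g(x0) - x0]
    x = x0 + fs[0]                              # plain Richardson first step
    for _ in range(iters):
        f = g(x) - x                            # current residual
        if np.linalg.norm(f) < tol:
            break
        xs.append(x); fs.append(f)
        xs, fs = xs[-(m + 1):], fs[-(m + 1):]   # sliding memory window
        dX = np.column_stack([xs[j + 1] - xs[j] for j in range(len(xs) - 1)])
        dF = np.column_stack([fs[j + 1] - fs[j] for j in range(len(fs) - 1)])
        gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
        x = x + f - (dX + dF) @ gamma           # Anderson-mixed update
    return x

# e.g. accelerate the contraction g(x) = 0.5*x + b toward x* = 2b
b = np.ones(5)
x_star = anderson(lambda x: 0.5 * x + b, np.zeros(5))
```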

We present an accelerated greedy strategy for training projection-based reduced-order models for parametric steady and unsteady partial differential equations. Our approach exploits hierarchical approximate proper orthogonal decomposition to speed up the construction of the empirical test space for least-squares Petrov-Galerkin formulations, a progressive construction of the empirical quadrature rule based on a warm start of the non-negative least-squares algorithm, and a two-fidelity sampling strategy to reduce the number of expensive greedy iterations. We illustrate the performance of our method on two test cases: a two-dimensional compressible inviscid flow past an LS89 blade at moderate Mach number, and a three-dimensional nonlinear mechanics problem to predict the long-time structural response of the standard section of a nuclear containment building under external loading.
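
The overall training loop has the familiar weak-greedy shape sketched below. All callables are illustrative placeholders; the hierarchical APOD update, the warm-started NNLS quadrature, and the two-fidelity switch live inside them in the actual method:

```python
def greedy_rom(train_params, solve_fom, estimate_error, update_basis,
               tol=1e-4, max_iters=20):
    """Schematic weak-greedy construction of a reduced-order model:
    locate the worst-approximated parameter with a cheap error indicator,
    run one expensive high-fidelity solve there, and enrich the reduced
    basis (e.g. by hierarchical APOD) with the new snapshot.
    """
    basis = []
    for _ in range(max_iters):
        errors = {mu: estimate_error(basis, mu) for mu in train_params}
        mu_star = max(errors, key=errors.get)
        if errors[mu_star] < tol:
            break
        snapshot = solve_fom(mu_star)           # expensive greedy iteration
        basis = update_basis(basis, snapshot)
    return basis
```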

We consider scalar semilinear elliptic PDEs, where the nonlinearity is strongly monotone, but only locally Lipschitz continuous. To linearize the arising discrete nonlinear problem, we employ a damped Zarantonello iteration, which leads to a linear Poisson-type equation that is symmetric and positive definite. The resulting system is solved by a contractive algebraic solver such as a multigrid method with local smoothing. We formulate a fully adaptive algorithm that equibalances the various error components coming from mesh refinement, iterative linearization, and the algebraic solver. We prove that the proposed adaptive iteratively linearized finite element method (AILFEM) guarantees convergence with optimal complexity, where the rates are understood with respect to the overall computational cost (i.e., the computational time). Numerical experiments investigate the involved adaptivity parameters.
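
In its standard form (our notation), the damped Zarantonello step reads: given the iterate $u^n$ and a damping parameter $\delta > 0$, the next iterate $u^{n+1}$ solves

\[ (\nabla u^{n+1}, \nabla v)_{L^2(\Omega)} = (\nabla u^n, \nabla v)_{L^2(\Omega)} + \delta \, \bigl[ F(v) - a(u^n; v) \bigr] \quad \text{for all } v, \]

where $a(\cdot;\cdot)$ denotes the strongly monotone semilinear form and $F$ the load. Each linearization step thus amounts to a symmetric positive definite Poisson-type solve, which the contractive algebraic solver then approximates.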
