
We present adapted Zhang Neural Networks (AZNN) in which the parameter settings for the exponential decay constant $\eta$ and the length of the start-up phase of basic ZNN are adapted to the problem at hand. Specifically, we study experiments with AZNN for factoring time-varying square matrices into products of time-varying symmetric matrices and for the time-varying matrix square root problem. Departing from the generally used small $\eta$ values and minimal-length start-up phases of ZNN, we adapt the basic ZNN method to work with large or even gigantic $\eta$ settings and start-up phases of arbitrary length, using Euler's low-accuracy finite difference formula. These adaptations speed up AZNN's convergence significantly and lower its solution error bounds for our chosen problems to near the machine constant or even below. Parameter-varying AZNN also allows us to find full-rank symmetrizers of static matrices reliably, for example for the Kahan and Frank matrices and for matrices with highly ill-conditioned eigenvalues and complicated Jordan structures of dimensions from $n = 2$ on up. This helps in cases where full-rank static matrix symmetrizers have never been computed successfully before.
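
The paper's AZNN parameter schedules are not reproduced here. Purely as a hedged sketch of the underlying mechanism, the Python snippet below applies the basic ZNN design formula to the time-varying square root problem $X(t)^2 = A(t)$: with error $E = X^2 - A$ and the decay law $\dot E = -\eta E$, each step solves the Sylvester equation $X\dot X + \dot X X = \dot A - \eta(X^2 - A)$ for $\dot X$ and advances via Euler's formula. The settings of eta, tau, T and the test matrix A(t) are illustrative choices only.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def znn_matrix_sqrt(A, dA, X0, eta=1e3, tau=1e-5, T=0.01):
    """Track X(t) with X(t)^2 ~ A(t) via basic ZNN + Euler discretization."""
    X = X0.copy()
    for k in range(int(T / tau)):
        t = k * tau
        rhs = dA(t) - eta * (X @ X - A(t))   # dA/dt - eta*E(t)
        dX = solve_sylvester(X, X, rhs)      # solves X*dX + dX*X = rhs
        X = X + tau * dX                     # Euler's low-accuracy formula
    return X

# Example: A(t) = (2 + sin t) * I has the exact square root sqrt(2 + sin t) * I.
A  = lambda t: (2.0 + np.sin(t)) * np.eye(3)
dA = lambda t: np.cos(t) * np.eye(3)
X  = znn_matrix_sqrt(A, dA, X0=np.eye(3))
print(np.linalg.norm(X @ X - A(0.01)))       # residual after the start-up phase
```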

Related content


In this paper, we present a novel pseudospectral (PS) method for solving a new class of initial-value problems (IVPs) of time-dependent one-dimensional fractional partial differential equations (FPDEs) with variable coefficients and periodic solutions. A key ingredient of our work is the use of the recently developed periodic RL/Caputo fractional derivative (FD) operators with sliding positive fixed memory length of Bourafa et al. [1], or their reduced forms obtained by Elgindy [2], as the natural FD operators to accurately model FPDEs with periodic solutions. The proposed method converts the IVP into a well-conditioned linear system of equations using the PS method based on Fourier collocations and Gegenbauer quadratures. The reduced linear system has a simple special structure and can be solved accurately and rapidly by standard linear system solvers. A rigorous study of the error and convergence of the proposed method is presented. The ideas and results presented in this paper are expected to prove useful in addressing more general problems involving FPDEs with periodic solutions.
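
The periodic fractional derivative operators of [1] and [2] are beyond a short sketch; as a hedged illustration of just the Fourier-collocation ingredient named above, the snippet below differentiates a periodic function spectrally on equispaced nodes, the basic building block of such collocation schemes. The node count and test function are arbitrary choices.

```python
import numpy as np

def fourier_diff(u):
    """Spectral derivative of a periodic sample on [0, 2*pi)."""
    N = u.size
    k = np.fft.fftfreq(N, d=1.0 / N)               # integer wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
err = np.max(np.abs(fourier_diff(np.sin(3 * x)) - 3 * np.cos(3 * x)))
print(err)                                         # near machine precision
```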

The $hp$-adaptive finite element method (FEM) - where one independently chooses the mesh size ($h$) and polynomial degree ($p$) to be used on each cell - has long been known to have better theoretical convergence properties than either $h$- or $p$-adaptive methods alone. However, it is not widely used, owing at least in part to the difficulty of the underlying algorithms and the lack of widely usable implementations. This is particularly true when used with continuous finite elements. Herein, we discuss the algorithms necessary for a comprehensive and generic implementation of $hp$-adaptive finite element methods on distributed-memory, parallel machines. In particular, we present a multi-stage algorithm for the unique enumeration of degrees of freedom (DoFs) suitable for continuous finite element spaces, describe considerations for weighted load balancing, and discuss the transfer of variable-size data between processes. We illustrate the performance of our algorithms with numerical examples and demonstrate that they scale reasonably up to at least 16,384 Message Passing Interface (MPI) processes. We provide a reference implementation of our algorithms as part of the open-source library deal.II.
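
The paper's multi-stage enumeration algorithm covers many cases (ghost-data exchange, hp-constraints, etc.) that a few lines cannot; as a hypothetical, MPI-free toy of one core idea — a DoF shared by several subdomains is owned by the lowest-ranked process touching it, and each process numbers its owned DoFs within a contiguous range obtained from an exclusive prefix sum — consider this Python sketch. All names are invented for illustration.

```python
def enumerate_dofs(dofs_on_rank):
    """dofs_on_rank[r] = set of DoF keys visible on (simulated) process r."""
    nranks = len(dofs_on_rank)
    owner = {}
    for r in range(nranks):                 # lowest touching rank owns a DoF
        for d in dofs_on_rank[r]:
            owner.setdefault(d, r)
    counts = [sum(1 for o in owner.values() if o == r) for r in range(nranks)]
    nxt, start = [], 0
    for c in counts:                        # exclusive prefix sum -> ranges
        nxt.append(start)
        start += c
    numbering = {}
    for d in sorted(owner):                 # deterministic global numbering
        numbering[d] = nxt[owner[d]]
        nxt[owner[d]] += 1
    return numbering

# Two simulated processes sharing the interface DoFs "c" and "d":
print(enumerate_dofs([{"a", "b", "c", "d"}, {"c", "d", "e"}]))
# {'a': 0, 'b': 1, 'c': 2, 'd': 3, 'e': 4}
```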

In 1-bit matrix completion, the aim is to estimate an underlying low-rank matrix from a partial set of binary observations. We propose a novel method for 1-bit matrix completion called MMGN. Our method is based on the majorization-minimization (MM) principle, which in our setting yields a sequence of standard low-rank matrix completion sub-problems. We solve each of these sub-problems by a factorization approach that explicitly enforces the assumed low-rank structure and then apply a Gauss-Newton method. Our numerical studies and an application to a real-data example illustrate that, compared to existing methods, MMGN outputs comparable if not more accurate estimates, is often significantly faster, and is less sensitive to the spikiness of the underlying matrix.
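
As a hedged sketch of the MM principle at work here (with a logistic link): since the logistic log-likelihood gradient is 1/4-Lipschitz, each MM step majorizes the negative log-likelihood by a quadratic, leaving the standard low-rank completion sub-problem named above. MMGN solves that sub-problem with Gauss-Newton; the sketch below swaps in plain alternating least squares (ALS) for brevity, so it illustrates the MM outer loop rather than MMGN itself.

```python
import numpy as np

def mm_1bit(Y, mask, rank=2, outer=30, inner=5, lam=1e-6, seed=0):
    """MM outer loop; the inner low-rank completion is solved by ALS."""
    m, n = Y.shape
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.normal(size=(m, rank))
    V = 0.1 * rng.normal(size=(n, rank))
    reg = lam * np.eye(rank)
    for _ in range(outer):
        M = U @ V.T
        G = mask * (-Y / (1.0 + np.exp(Y * M)))  # gradient of the neg. log-lik.
        T = M - 4.0 * G                          # majorizer target (L = 1/4)
        for _ in range(inner):                   # ALS on min ||mask*(U V^T - T)||
            for i in range(m):
                w = mask[i] > 0
                U[i] = np.linalg.solve(V[w].T @ V[w] + reg, V[w].T @ T[i, w])
            for j in range(n):
                w = mask[:, j] > 0
                V[j] = np.linalg.solve(U[w].T @ U[w] + reg, U[w].T @ T[w, j])
    return U @ V.T

# Toy usage: rank-2 truth, logistic 1-bit observations on ~60% of the entries.
rng = np.random.default_rng(1)
M0 = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 40))
mask = (rng.random((30, 40)) < 0.6).astype(float)
Y = mask * np.where(rng.random((30, 40)) < 1 / (1 + np.exp(-M0)), 1.0, -1.0)
print(np.linalg.norm(mm_1bit(Y, mask) - M0) / np.linalg.norm(M0))
```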

ChatGPT is a large language model recently released by OpenAI. In this technical report, we explore for the first time the capability of ChatGPT for programming numerical algorithms. Specifically, we examine the capability of ChatGPT to generate codes for numerical algorithms in different programming languages, to debug and improve codes written by users, to complete missing parts of numerical codes, to rewrite available codes in other programming languages, and to parallelize serial codes. Additionally, we assess whether ChatGPT can recognize if given codes were written by humans or by machines. To reach this goal, we consider a variety of mathematical problems such as the Poisson equation, the diffusion equation, the incompressible Navier-Stokes equations, compressible inviscid flow, eigenvalue problems, solving linear systems of equations, and storing sparse matrices. Furthermore, we exemplify scientific machine learning, such as physics-informed neural networks and convolutional neural networks, with applications to computational physics. Through these examples, we investigate the successes, failures, and challenges of ChatGPT. Examples of failures include producing singular matrices, performing operations on arrays with incompatible sizes, and interrupting the generation of relatively long codes. Our outcomes suggest that ChatGPT can successfully program numerical algorithms in different programming languages, but certain limitations and challenges exist that require further improvement of this machine learning model.
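
The report's actual prompts and ChatGPT transcripts are not reproduced here. As an illustration of one problem type on the list above, this is a standard second-order finite-difference solver for the 1-D Poisson problem $-u'' = f$ on $(0,1)$ with homogeneous Dirichlet boundary conditions, i.e. the kind of code such an experiment would ask for.

```python
import numpy as np

def poisson_1d(f, n=100):
    """Second-order central differences for -u'' = f, u(0) = u(1) = 0."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)               # interior grid points
    A = (np.diag(2.0 * np.ones(n))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    return x, np.linalg.solve(A, f(x))

# Manufactured solution u = sin(pi x), hence f = pi^2 sin(pi x):
x, u = poisson_1d(lambda x: np.pi**2 * np.sin(np.pi * x))
print(np.max(np.abs(u - np.sin(np.pi * x))))     # O(h^2) error
```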

In this work, we consider the numerical computation of ground states and dynamics of single-component Bose-Einstein condensates (BECs). The corresponding models are spatially discretized with a multiscale finite element approach known as Localized Orthogonal Decomposition (LOD). Despite the outstanding approximation properties of such a discretization in the context of BECs, taking full advantage of it without creating severe computational bottlenecks can be tricky. In this paper, we therefore present two fully discrete numerical approaches that are formulated in such a way that they take special account of the structure of the LOD spaces. One approach is devoted to the computation of ground states, the other to the computation of dynamics. A central focus of this paper is also the discussion of implementation aspects that are very important for the practical realization of the methods. In particular, we discuss the use of suitable data structures that keep the memory costs economical. The paper concludes with various numerical experiments in 1d, 2d, and 3d that investigate the convergence rates and approximation properties of the methods and demonstrate their performance and computational efficiency, also in comparison to spectral and standard finite element approaches.
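
The paper's actual LOD data structures are not shown here. Purely as a toy sketch of the memory-economy point — store the localized basis as a sparse matrix $B$ so that the coarse system is the sparse triple product $B^T A_h B$ and no dense objects are formed — consider the following, in which the columns of B are narrow placeholder patches rather than true LOD basis functions (which come from corrector problems).

```python
import scipy.sparse as sp

nf, nc = 1000, 50                  # fine-grid and coarse-space sizes (toy)
Ah = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(nf, nf), format="csr")

rows, cols, vals = [], [], []
for j in range(nc):                # placeholder localized columns
    center = j * (nf // nc)
    for i in range(max(0, center - 10), min(nf, center + 10)):
        rows.append(i); cols.append(j)
        vals.append(1.0 - abs(i - center) / 10.0)
B = sp.csr_matrix((vals, (rows, cols)), shape=(nf, nc))

A_lod = (B.T @ Ah @ B).tocsr()     # small, still sparse coarse matrix
print(A_lod.shape, A_lod.nnz)
```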

The scene graph is a new data structure describing objects and their pairwise relationships within image scenes. As the size of scene graphs in vision applications grows, how to losslessly and efficiently store such data on disk or transmit it over the network becomes an unavoidable problem. However, the compression of scene graphs has seldom been studied before because of their complicated data structures and distributions. Existing solutions usually involve general-purpose compressors or graph structure compression methods, which are weak at reducing the redundancy in scene graph data. This paper introduces a new lossless compression framework with adaptive predictors for the joint compression of objects and relations in scene graph data. The proposed framework consists of a unified prior extractor and specialized element predictors adapted to different data elements. Furthermore, to exploit the context information within and between graph elements, a Graph Context Convolution is proposed to support different graph context modeling schemes for different graph elements. Finally, a learned distribution model is devised to predict numerical data under complicated conditional constraints. Experiments conducted on labeled and generated scene graphs prove the effectiveness of the proposed framework in the scene graph lossless compression task.
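
None of the framework's predictors are reproduced here. As a hedged, generic illustration of the prediction-based coding principle it builds on: if a model assigns probability p to the symbol that actually occurs, an entropy coder spends about -log2(p) bits on it, so sharper predictors directly shrink the compressed scene graph. The label probabilities below are hypothetical.

```python
import numpy as np

def ideal_bits(probs):
    """probs[i] = model probability assigned to the i-th observed symbol."""
    return float(np.sum(-np.log2(np.asarray(probs))))

# Coding four object labels from a 150-label vocabulary:
uniform  = np.full(4, 1.0 / 150)            # no learned predictor
adaptive = np.array([0.6, 0.5, 0.7, 0.4])   # hypothetical learned probabilities
print(ideal_bits(uniform))                  # ~28.9 bits
print(ideal_bits(adaptive))                 # ~3.6 bits
```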

Human cognition follows a ``large-scale first'' mechanism and therefore possesses adaptive multi-granularity description capabilities, which give rise to computational characteristics such as efficiency, robustness, and interpretability. Although most existing artificial intelligence learning methods have certain multi-granularity features, they do not fully align with the ``large-scale first'' cognitive mechanism. Multi-granularity granular-ball computing is an important model method developed in recent years. It uses granular-balls of different sizes to adaptively represent and cover the sample space, and performs learning based on these granular-balls. Since the number of coarse-grained granular-balls is smaller than the number of sample points, granular-ball computing is more efficient; the coarse-grained characteristics of granular-balls are less likely to be affected by fine-grained sample points, making them more robust; and the multi-granularity structure of granular-balls can produce topological structures and coarse-grained descriptions, providing natural interpretability. Granular-ball computing has by now been effectively extended to various fields of artificial intelligence, yielding theoretical methods such as granular-ball classifiers, granular-ball clustering methods, granular-ball neural networks, granular-ball rough sets, and granular-ball evolutionary computation, and significantly improving the efficiency, noise robustness, and interpretability of existing methods. It has good innovation, practicality, and development potential. This article provides a systematic introduction to these methods, analyzes the main problems currently faced by granular-ball computing, discusses its primary application scenarios, and offers references and suggestions to future researchers for improving this theory.
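
As a hedged sketch of the adaptive covering idea described above, the snippet below implements the 2-means splitting loop commonly described in the granular-ball literature: recursively split the data until every ball is pure or small enough, then keep a (center, radius, majority label) triple per ball. The purity threshold, minimum size, and mean-distance radius are illustrative choices, not the authors' exact settings.

```python
import numpy as np

def two_means(X, iters=10, seed=0):
    """Plain 2-means used only to split a ball in two."""
    rng = np.random.default_rng(seed)
    c = X[rng.choice(len(X), size=2, replace=False)].astype(float)
    for _ in range(iters):
        lab = np.argmin(((X[:, None, :] - c[None, :, :]) ** 2).sum(-1), axis=1)
        for k in range(2):
            if (lab == k).any():
                c[k] = X[lab == k].mean(0)
    return lab

def granular_balls(X, y, purity=0.9, min_size=4):
    """Cover (X, y) with balls of the form (center, mean radius, majority label)."""
    out, stack = [], [(X, y)]
    while stack:
        Xb, yb = stack.pop()
        vals, cnt = np.unique(yb, return_counts=True)
        pure = cnt.max() / len(yb) >= purity or len(yb) <= min_size
        lab = None if pure else two_means(Xb)
        if pure or (lab == lab[0]).all():        # pure, small, or unsplittable
            center = Xb.mean(0)
            radius = np.linalg.norm(Xb - center, axis=1).mean()
            out.append((center, radius, vals[cnt.argmax()]))
        else:
            for k in range(2):
                stack.append((Xb[lab == k], yb[lab == k]))
    return out

# Toy usage: two labeled Gaussian blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(len(granular_balls(X, y)), "granular-balls")
```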

Tensor contraction operations in computational chemistry consume significant fractions of computing time on large-scale computing platforms. The widespread use of tensor contractions between large multi-dimensional tensors in describing electronic structure theory has motivated the development of multiple tensor algebra frameworks targeting heterogeneous computing platforms. In this paper, we present Tensor Algebra for Many-body Methods (TAMM), a framework for productive and performance-portable development of scalable computational chemistry methods. The TAMM framework decouples the specification of the computation from the execution of these operations on available high-performance computing systems. With this design choice, scientific application developers (domain scientists) can focus on the algorithmic requirements using the tensor algebra interface provided by TAMM, whereas high-performance computing developers can focus on various optimizations of the underlying constructs, such as efficient data distribution, optimized scheduling algorithms, and efficient use of intra-node resources (e.g., GPUs). The modular structure of TAMM allows it to be extended to support different hardware architectures and to incorporate new algorithmic advances. We describe the TAMM framework and our approach to the sustainable development of tensor contraction-based methods in computational chemistry applications. We present case studies that highlight the ease of use as well as the performance and productivity gains compared to other implementations.
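
TAMM itself is a C++ framework, and nothing below is its real API. As a hedged toy of the stated design choice only — keep the specification of a contraction as data and let a separate executor decide how to run it — here is a Python sketch in which the backend could be swapped without touching the specification.

```python
import numpy as np

class ContractionSpec:
    """The *what*: an einsum expression plus the names of its operands."""
    def __init__(self, expr, *operands):
        self.expr, self.operands = expr, operands

class NumpyExecutor:
    """The *how*: one possible backend; a GPU or distributed executor
    could implement the same run() without changing any specification."""
    def run(self, spec, tensors):
        return np.einsum(spec.expr, *(tensors[name] for name in spec.operands))

spec = ContractionSpec("ia,ab->ib", "T1", "V")
tensors = {"T1": np.random.rand(4, 6), "V": np.random.rand(6, 5)}
print(NumpyExecutor().run(spec, tensors).shape)   # (4, 5)
```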

This PhD thesis contains several contributions to the field of statistical causal modeling. Statistical causal models are statistical models embedded with causal assumptions that allow for the inference and reasoning about the behavior of stochastic systems affected by external manipulation (interventions). This thesis contributes to the research areas concerning the estimation of causal effects, causal structure learning, and distributionally robust (out-of-distribution generalizing) prediction methods. We present novel and consistent linear and non-linear causal effects estimators in instrumental variable settings that employ data-dependent mean squared prediction error regularization. Our proposed estimators show, in certain settings, mean squared error improvements compared to both canonical and state-of-the-art estimators. We show that recent research on distributionally robust prediction methods has connections to well-studied estimators from econometrics. This connection leads us to prove that general K-class estimators possess distributional robustness properties. We, furthermore, propose a general framework for distributional robustness with respect to intervention-induced distributions. In this framework, we derive sufficient conditions for the identifiability of distributionally robust prediction methods and present impossibility results that show the necessity of several of these conditions. We present a new structure learning method applicable in additive noise models with directed trees as causal graphs. We prove consistency in a vanishing identifiability setup and provide a method for testing substructure hypotheses with asymptotic family-wise error control that remains valid post-selection. Finally, we present heuristic ideas for learning summary graphs of nonlinear time-series models.
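
The K-class estimators mentioned above have a standard closed form from econometrics: with $P_Z$ the projection onto the instruments $Z$ and $M_Z = I - P_Z$, $\hat\beta(\kappa) = (X^\top(I - \kappa M_Z)X)^{-1} X^\top(I - \kappa M_Z)Y$, so $\kappa = 0$ recovers OLS and $\kappa = 1$ recovers two-stage least squares. A sketch on toy instrumental-variable data follows; the data-generating process is invented for illustration.

```python
import numpy as np

def k_class(X, Z, y, kappa):
    """General K-class estimator; kappa=0 is OLS, kappa=1 is TSLS."""
    n = len(y)
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)       # projection onto instruments
    W = np.eye(n) - kappa * (np.eye(n) - Pz)     # I - kappa * M_Z
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

rng = np.random.default_rng(1)
n = 2000
Z = rng.normal(size=(n, 1))                      # instrument
h = rng.normal(size=n)                           # hidden confounder
X = (Z[:, 0] + h + rng.normal(size=n)).reshape(-1, 1)
y = 2.0 * X[:, 0] + h + rng.normal(size=n)       # true causal effect: 2
print(k_class(X, Z, y, 0.0), k_class(X, Z, y, 1.0))  # OLS biased; TSLS near 2
```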

This work considers the question of how convenient access to copious data impacts our ability to learn causal effects and relations. In what ways is learning causality in the era of big data different from -- or the same as -- the traditional one? To answer this question, this survey provides a comprehensive and structured review of both traditional and frontier methods in learning causality and relations along with the connections between causality and machine learning. This work points out on a case-by-case basis how big data facilitates, complicates, or motivates each approach.
