
Point processes model the occurrence of a countable number of random points over some support. They can model diverse phenomena, such as chemical reactions, stock market transactions and social interactions. We show that JumpProcesses.jl is a fast, general-purpose library for simulating point processes. JumpProcesses.jl was first developed for simulating jump processes via stochastic simulation algorithms (SSAs), including Doob's method, Gillespie's methods, and Kinetic Monte Carlo methods. Historically, jump processes have been developed in the context of dynamical systems to describe dynamics with discrete jumps. In contrast, the development of point processes has been more focused on describing the occurrence of random events. In this paper, we bridge the gap between the treatment of point and jump process simulation. The algorithms previously included in JumpProcesses.jl can be mapped to three general methods developed in statistics for simulating evolutionary point processes. Our comparative exercise revealed that the library initially lacked an efficient algorithm for simulating processes with variable intensity rates. We therefore extended JumpProcesses.jl with a new simulation algorithm, Coevolve, that enables the rapid simulation of processes with locally-bounded variable intensity rates. It is now possible to efficiently simulate any point process on the real line with a non-negative, left-continuous, history-adapted and locally bounded intensity rate, whether or not it is coupled with differential equations. This extension significantly improves the computational performance of JumpProcesses.jl when simulating such processes, enabling it to become one of the few readily available, fast, general-purpose libraries for simulating evolutionary point processes.
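
To make the usage concrete, below is a minimal sketch of simulating a self-exciting (Hawkes-like) process with a bounded variable intensity via the Coevolve aggregator. It follows the JumpProcesses.jl interface as publicly documented (VariableRateJump with urate/rateinterval bounds, Coevolve(), SSAStepper()), but the rate constants and the single-jump dependency graph are illustrative assumptions; treat it as a sketch rather than the paper's benchmark code.

```julia
# Sketch: self-exciting process with a locally bounded, history-dependent
# intensity, simulated with the Coevolve aggregator from JumpProcesses.jl.
using JumpProcesses

λ0, α, β = 0.5, 0.8, 2.0          # baseline rate, excitation size, decay (illustrative)
events = Float64[]                 # event history

# Conditional intensity: λ(t) = λ0 + α * Σ exp(-β (t - t_j)) over past events
rate(u, p, t) = λ0 + α * sum((exp(-β * (t - tj)) for tj in events); init = 0.0)

# The intensity decays between events, so its current value is a valid upper
# bound; here the bound is declared over the whole simulation horizon.
urate(u, p, t) = rate(u, p, t)
rateinterval(u, p, t) = 10.0

affect!(integrator) = (push!(events, integrator.t); integrator.u[1] += 1)

jump  = VariableRateJump(rate, affect!; urate = urate, rateinterval = rateinterval)
dprob = DiscreteProblem([0], (0.0, 10.0))
jprob = JumpProblem(dprob, Coevolve(), jump; dep_graph = [[1]])
sol   = solve(jprob, SSAStepper())  # sol.u[end][1] is the final event count
```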

Related content

Processing is the name of an open-source programming language and its accompanying integrated development environment (IDE). Processing is used in the electronic arts and visual design communities to teach the fundamentals of programming, and it has been employed in a large number of new-media and interactive art works.

This work proposes the extended functional tensor train (EFTT) format for compressing and working with multivariate functions on tensor product domains. Our compression algorithm combines tensorized Chebyshev interpolation with a low-rank approximation algorithm that is entirely based on function evaluations. Compared to existing methods based on the functional tensor train format, our approach often reduces the required storage, sometimes considerably, while achieving the same accuracy. In particular, compared to the algorithm from [Gorodetsky, Karaman and Marzouk, Comput. Methods Appl. Mech. Eng., 347 (2019)], we reduce the number of function evaluations required to achieve a prescribed accuracy by more than 96% in some cases.
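
The EFTT algorithm itself is more involved, but the underlying premise, that samples of a smooth function on a tensorized Chebyshev grid are strongly compressible, can be illustrated with a toy two-dimensional sketch. The test function, grid size, and SVD-based truncation below are illustrative choices, not the paper's method.

```julia
# Toy illustration (not the EFTT algorithm): sample a bivariate function on a
# tensorized Chebyshev grid and truncate the sample matrix with an SVD. The
# rank needed for a given tolerance hints at how compressible f is.
using LinearAlgebra

cheb(n) = [cos(π * k / n) for k in 0:n]      # Chebyshev points on [-1, 1]

f(x, y) = 1 / (1 + 25 * (x^2 + y^2))          # smooth test function
n = 64
x, y = cheb(n), cheb(n)
A = [f(xi, yj) for xi in x, yj in y]          # (n+1) x (n+1) function evaluations

F = svd(A)
tol = 1e-10
r = count(>(tol * F.S[1]), F.S)               # numerical rank at tolerance
Ar = F.U[:, 1:r] * Diagonal(F.S[1:r]) * F.Vt[1:r, :]

println("rank $r of $(n + 1); max error ≈ $(maximum(abs, A - Ar))")
```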

Auditory spatial attention detection (ASAD) aims to decode the attended spatial location from EEG in a multiple-speaker setting. ASAD methods are inspired by the brain lateralization of cortical neural responses during the processing of auditory spatial attention, and they show promising performance for the task of auditory attention decoding (AAD) with neural recordings. Previous ASAD methods do not fully exploit the spatial distribution of EEG electrodes, which may limit their performance. In the present work, by mapping the original EEG channels onto a two-dimensional (2D) spatial topological map, the EEG data are transformed into a three-dimensional (3D) arrangement containing spatial-temporal information. A 3D deep convolutional neural network (DenseNet-3D) is then used to extract temporal and spatial features of the neural representation for the attended locations. The results show that the proposed method achieves higher decoding accuracy than the state-of-the-art (SOTA) method (94.4% compared to XANet's 90.6%) with a 1-second decision window on the widely used KULeuven (KUL) dataset. The code to implement our work is available on GitHub: https://github.com/xuxiran/ASAD_DenseNet
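
As a rough illustration of the data layout (not the paper's DenseNet-3D), the sketch below uses Flux.jl with a hypothetical 8x8 scalp topography stacked over 64 time samples; the layer sizes and two-class output are assumptions made only to show how a 3D convolutional stack consumes the spatial-temporal arrangement.

```julia
# Minimal 3D-CNN sketch in Flux.jl (not the paper's DenseNet-3D): EEG channels
# mapped onto a hypothetical 8x8 scalp grid, stacked over 64 time samples,
# classified into left/right attended location.
using Flux

model = Chain(
    Conv((3, 3, 3), 1 => 8, relu; pad = 1),   # spatial-temporal features
    MaxPool((2, 2, 2)),
    Conv((3, 3, 3), 8 => 16, relu; pad = 1),
    MaxPool((2, 2, 2)),
    Flux.flatten,
    Dense(2 * 2 * 16 * 16 => 2),              # two spatial classes (left/right)
    softmax,
)

x = rand(Float32, 8, 8, 64, 1, 32)            # (width, height, time, channel, batch)
ŷ = model(x)                                  # 2 x 32 class probabilities
```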

Unveiling the underlying governing equations of nonlinear dynamic systems remains a significant challenge, especially in the presence of noisy observations and without prior knowledge. This study proposes R-DISCOVER, a framework designed to robustly uncover open-form partial differential equations (PDEs) from limited and noisy data. The framework operates through two alternating update processes: discovering and embedding. The discovering phase employs symbolic representation and a reinforcement learning (RL)-guided hybrid PDE generator to efficiently produce diverse open-form PDEs with tree structures. A neural network-based predictive model fits the system response and serves as the reward evaluator for the generated PDEs. PDEs with superior fits are used to iteratively optimize the generator via the RL method, and the best-performing PDE is selected by a parameter-free stability metric. The embedding phase integrates the initially identified PDE from the discovering process as a physical constraint into the predictive model for robust training. The traversal of PDE trees automates the construction of the computational graph and the embedding process without human intervention. Numerical experiments demonstrate our framework's capability to uncover governing equations from nonlinear dynamic systems with limited and highly noisy data and to outperform other physics-informed neural network-based discovery methods. This work opens new potential for exploring real-world systems with limited understanding.

This paper aims at the algorithmic/theoretical core of reinforcement learning (RL) by introducing the novel class of proximal Bellman mappings. These mappings are defined in reproducing kernel Hilbert spaces (RKHSs) to benefit from the rich approximation properties and inner product of RKHSs. They are shown to belong to the powerful Hilbertian family of (firmly) nonexpansive mappings, regardless of the values of their discount factors, and they possess ample degrees of design freedom to even reproduce attributes of the classical Bellman mappings and to pave the way for novel RL designs. An approximate policy-iteration scheme is built on the proposed class of mappings to solve the problem of selecting online, at every time instance, the "optimal" exponent $p$ in a $p$-norm loss to combat outliers in linear adaptive filtering, without training data and without any knowledge of the statistical properties of the outliers. Numerical tests on synthetic data showcase the superior performance of the proposed framework over several non-RL and kernel-based RL schemes.

The autologistic actor attribute model, or ALAAM, is the social influence counterpart of the better-known exponential-family random graph model (ERGM) for social selection. Extensive experience with ERGMs has shown that the problem of near-degeneracy, which often occurs with simple models, can be overcome by using "geometrically weighted" or "alternating" statistics. In the much more limited empirical applications of ALAAMs to date, the problem of near-degeneracy, although theoretically expected, appears to have been less of an issue. In this work I present a comprehensive survey of ALAAM applications, showing that this model has to date only been used with relatively small networks, in which near-degeneracy does not appear to be a problem. I show that near-degeneracy does occur in simple ALAAM models of larger empirical networks, define some geometrically weighted ALAAM statistics analogous to those for ERGM, and demonstrate that models with these statistics do not suffer from near-degeneracy and hence can be estimated where they could not be with the simple statistics.
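
For context on what "geometrically weighted" means here, the standard ERGM statistic being referenced takes the form
$$u(\mathbf{y};\alpha) \;=\; e^{\alpha}\sum_{k=1}^{n-1}\left\{1-\left(1-e^{-\alpha}\right)^{k}\right\} D_k(\mathbf{y}),$$
where $D_k(\mathbf{y})$ counts the nodes of degree $k$ in the network $\mathbf{y}$ and $\alpha > 0$ controls the geometric down-weighting; the declining weights keep any single high-degree configuration from dominating the statistic, which is the mechanism that counters near-degeneracy. The attribute-based ALAAM analogues are defined in the paper itself.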

The vast majority of reduced-order models (ROMs) first obtain a low dimensional representation of the problem from high-dimensional model (HDM) training data, which is afterwards used to obtain a system of reduced complexity. Unfortunately, convection-dominated problems generally have a slowly decaying Kolmogorov n-width, which makes obtaining an accurate ROM built solely from training data very challenging. The accuracy of a ROM can be improved through enrichment with HDM solutions; however, due to the large computational expense of HDM evaluations for complex problems, they can only be used parsimoniously to obtain relevant computational savings. In this work, we exploit the local spatial and temporal coherence often exhibited by these problems to derive an accurate, cost-efficient approach that repeatedly combines HDM and ROM evaluations without a separate training phase. Our approach obtains solutions at a given time step by either fully solving the HDM or by combining partial HDM and ROM solves. A dynamic sampling procedure identifies regions that require the HDM solution for global accuracy, and the remainder of the flow is reconstructed using the ROM. Moreover, solutions combining both HDM and ROM solves use spatial filtering to eliminate potential spurious oscillations that may develop. We test the proposed method on inviscid compressible flow problems and demonstrate speedups of up to an order of magnitude.

This paper presents a numerical method for the simulation of elastic solid materials coupled to fluid inclusions. The application is motivated by the modeling of vascularized tissues and by problems in medical imaging that target the estimation of effective (i.e., macroscale) material properties, taking into account the influence of microscale dynamics, such as fluid flow in the microvasculature. The method is based on the recently proposed Reduced Lagrange Multipliers framework. In particular, the interface between the solid and fluid domains is not resolved within the computational mesh for the elastic material but discretized independently, imposing the coupling condition via non-matching Lagrange multipliers. Exploiting the multiscale properties of the problem, we reduce the resulting Lagrange multiplier space to a lower-dimensional characteristic set. We present the details of the stability analysis of the resulting method, considering a non-standard boundary condition that enforces a local deformation on the solid-fluid boundary. The method is validated with several numerical examples.

Climate models must simulate hundreds of future scenarios for hundreds of years at coarse resolutions, and a handful of high-resolution decadal simulations to resolve localized extreme events. Using Oceananigans.jl, written from scratch in Julia, we report several achievements: First, a global ocean simulation with breakthrough horizontal resolution -- 488m -- reaching 15 simulated days per day (0.04 simulated years per day; SYPD). Second, Oceananigans simulates the global ocean at 488m with breakthrough memory efficiency on just 768 Nvidia A100 GPUs, a fraction of the resources available on current and upcoming exascale supercomputers. Third, and arguably most significant for climate modeling, Oceananigans achieves breakthrough energy efficiency reaching 0.95 SYPD at 1.7 km on 576 A100s and 9.9 SYPD at 10 km on 68 A100s -- the latter representing the highest horizontal resolutions employed by current IPCC-class ocean models. Routine climate simulations with 10 km ocean components are within reach.
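
For readers unfamiliar with the library, a minimal Oceananigans.jl driver looks roughly like the sketch below. The constructor names (RectilinearGrid, NonhydrostaticModel, Simulation, run!) follow the public documentation, but the toy grid, buoyancy choice, and time step are illustrative assumptions and bear no relation to the 488m global configuration reported above.

```julia
# Minimal Oceananigans.jl sketch (toy resolution, CPU by default); not the
# 488m global setup described in the abstract.
using Oceananigans

grid  = RectilinearGrid(size = (64, 64, 32), extent = (1e5, 1e5, 1e3))  # meters
model = NonhydrostaticModel(; grid, tracers = (:T, :S), buoyancy = SeawaterBuoyancy())
simulation = Simulation(model; Δt = 60.0, stop_time = 86400.0)          # one day
run!(simulation)
```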

Modern high-throughput sequencing assays efficiently capture not only gene expression and different levels of gene regulation but also a multitude of genome variants. Focused analysis of alternative alleles of variable sites at homologous chromosomes of the human genome reveals allele-specific gene expression and allele-specific gene regulation by assessing the allelic imbalance of read counts at individual sites. Here we formally describe MIXALIME, an advanced statistical framework for detecting allelic imbalance in read counts at single-nucleotide variants detected in diverse omics studies (ChIP-Seq, ATAC-Seq, DNase-Seq, CAGE-Seq, and others). MIXALIME accounts for copy-number variants and aneuploidy and for reference read-mapping bias, and it provides several scoring models to balance sensitivity against specificity when scoring data with varying levels of overdispersion caused by experimental noise.
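
As a point of contrast with MIXALIME's dedicated scoring models, the simplest possible check for allelic imbalance at a single heterozygous site is an exact binomial test against a balanced 0.5 allelic fraction, sketched below with hypothetical read counts. It ignores exactly the effects (copy-number variation and aneuploidy, reference mapping bias, overdispersion) that MIXALIME is designed to handle.

```julia
# Naive baseline, not MIXALIME's model: exact binomial test of allelic balance
# (p = 0.5) at one heterozygous site with hypothetical read counts.
using HypothesisTests

ref_reads, alt_reads = 63, 37
test = BinomialTest(ref_reads, ref_reads + alt_reads, 0.5)
println("allelic fraction = $(ref_reads / (ref_reads + alt_reads)), ",
        "p-value = $(pvalue(test))")
```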

We explore a link between complexity and physics for circuits of given functionality. Taking advantage of the connection between circuit counting problems and the derivation of ensembles in statistical mechanics, we tie the entropy of circuits of a given functionality and fixed number of gates to circuit complexity. We use thermodynamic relations to connect the quantity analogous to the equilibrium temperature to the exponent describing the exponential growth of the number of distinct functionalities as a function of complexity. This connection is intimately related to the finite compressibility of typical circuits. Finally, we use the thermodynamic approach to formulate a framework for the obfuscation of programs of arbitrary length -- an important problem in cryptography -- as thermalization through recursive mixing of neighboring sections of a circuit, which can be viewed as the mixing of two containers with "gases of gates". This recursive process equilibrates the average complexity and leads to the saturation of the circuit entropy, while preserving the functionality of the overall circuit. The thermodynamic arguments hinge on ergodicity in the space of circuits, which we conjecture is limited to disconnected ergodic sectors due to fragmentation. The notion of fragmentation has important implications for the problem of circuit obfuscation, as it implies that there are circuits with the same size and functionality that cannot be connected via local moves. Furthermore, we argue that fragmentation is unavoidable unless the complexity classes NP and coNP coincide, a statement that implies the collapse of the polynomial hierarchy of complexity theory to its first level.
