
The circuits comprising superconducting optoelectronic synapses, dendrites, and neurons are described by numerically cumbersome and formally opaque coupled differential equations. Reference 1 showed that a phenomenological model of superconducting loop neurons eliminates the need to solve the Josephson circuit equations that describe synapses and dendrites. The initial goal of the model was to decrease the time required for simulations, yet an additional benefit was increased transparency of the underlying neural circuit operations and conceptual clarity regarding the connection of loop neurons to other physical systems. While the original model simplified the treatment of the Josephson-junction dynamics, essentially by considering only low-pass versions of the dendritic outputs, it resorted to an awkward treatment of spikes generated by semiconductor transmitter circuits: threshold crossings had to be checked explicitly, and time steps in which the somatic threshold was reached required separate handling. Here we extend that model to simplify the treatment of spikes coming from somas, again exploiting the fact that in neural systems the downstream recipients of spike events almost always perform low-pass filtering. We provide comparisons between the first and second phenomenological models, quantifying the accuracy of the additional approximations. We identify regions of circuit parameter space in which the extended model works well and regions where it works poorly. For some circuit parameters it is possible to represent the downstream dendritic response to a single spike as well as to coincidences or sequences of spikes, indicating that the model is not simply a reduction to rate coding. The governing equations are shown to be nearly identical to those ubiquitous in the neuroscience literature for modeling leaky-integrator dendrites and neurons.
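To make the leaky-integrator connection concrete, here is a minimal sketch of the kind of dynamics the abstract alludes to: a downstream unit low-pass filtering an incoming spike train. All names and values (tau, w, the spike times) are illustrative assumptions, not the paper's circuit parameters or notation.

```python
import numpy as np

# Minimal sketch (not the paper's actual model): a leaky integrator
# low-pass filtering a presynaptic spike train, roughly
# ds/dt = -s/tau + sum_k w * delta(t - t_k).
def leaky_integrator(spike_times, w=1.0, tau=50.0, dt=0.1, t_max=500.0):
    times = np.arange(0.0, t_max, dt)
    s = np.zeros_like(times)
    spikes = set(np.round(np.asarray(spike_times) / dt).astype(int))
    for i in range(1, len(times)):
        s[i] = s[i - 1] + dt * (-s[i - 1] / tau)   # exponential leak
        if i in spikes:                            # spike arrival: jump by w
            s[i] += w
    return times, s

# A coincidence of two spikes drives the state roughly twice as high as a
# single spike, so the downstream response is not purely rate-coded.
t, s_single = leaky_integrator([100.0])
t, s_pair = leaky_integrator([100.0, 105.0])
print(s_single.max(), s_pair.max())
```

The coincidence example mirrors the abstract's point that the filtered representation still distinguishes single spikes from near-coincident pairs.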

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Its attendees come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum in which participants can exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition will give the modeling community an opportunity to further advance the foundations of modeling and to propose innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
November 5, 2024

We consider the problem of estimating the spectrum of a symmetric, bounded-entry (not necessarily PSD) matrix via entrywise sampling. This problem was introduced by [Bhattacharjee, Dexter, Drineas, Musco, Ray '22], where it was shown that one can obtain an $\epsilon n$ additive approximation to all eigenvalues of $A$ by sampling a principal submatrix of dimension $\frac{\text{poly}(\log n)}{\epsilon^3}$. We improve their analysis by showing that it suffices to sample a principal submatrix of dimension $\tilde{O}(\frac{1}{\epsilon^2})$ (with no dependence on $n$). This matches known lower bounds and therefore resolves the sample complexity of this problem up to $\log\frac{1}{\epsilon}$ factors. Using similar techniques, we give a tight $\tilde{O}(\frac{1}{\epsilon^2})$ bound for obtaining an additive $\epsilon\|A\|_F$ approximation to the spectrum of $A$ via squared row-norm sampling, improving on the previous best $\tilde{O}(\frac{1}{\epsilon^{8}})$ bound. We also address the problem of approximating the top eigenvector of a bounded-entry PSD matrix $A$. In particular, we show that sampling $O(\frac{1}{\epsilon})$ columns of $A$ suffices to produce a unit vector $u$ with $u^T A u \geq \lambda_1(A) - \epsilon n$. This matches what one could achieve via the sampling bound of [Musco, Musco '17] for the special case of approximating the top eigenvector, but does not require adaptivity. As additional applications, we observe that our sampling results can be used to design a faster eigenvalue-estimation sketch for dense matrices, resolving a question of [Swartworth, Woodruff '23], and can be combined with [Musco, Musco '17] to achieve $O(1/\epsilon^3)$ (adaptive) sample complexity for approximating the spectrum of a bounded-entry PSD matrix to $\epsilon n$ additive error.
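For intuition, here is a sketch of the basic estimator this line of work analyzes: the eigenvalues of a uniformly sampled principal submatrix, rescaled by $n/k$, approximate the spectrum of $A$ to additive $O(\epsilon n)$ error. The constant in the choice of $k$ and the random test matrix are assumptions for illustration; the papers' estimators include further refinements (e.g., how the submatrix eigenvalues are aligned with those of $A$).

```python
import numpy as np

# Sketch: eigenvalues of a random k x k principal submatrix, rescaled by
# n/k, approximate the eigenvalues of A to additive ~ eps * n error when
# k ~ 1/eps^2 (assumed form; not the papers' exact estimator).
def sampled_spectrum(A, k, rng=np.random.default_rng(0)):
    n = A.shape[0]
    idx = rng.choice(n, size=k, replace=False)     # uniform index sample
    sub = A[np.ix_(idx, idx)]                      # principal submatrix
    return (n / k) * np.linalg.eigvalsh(sub)       # rescaled eigenvalues

n, eps = 2000, 0.2
A = np.sign(np.random.default_rng(1).standard_normal((n, n)))
A = np.triu(A) + np.triu(A, 1).T                   # symmetric, entries in {-1, +1}
est = sampled_spectrum(A, k=int(1 / eps**2) * 25)
print(est.max(), np.linalg.eigvalsh(A).max())      # compare top eigenvalues
```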

Validity, reliability, and fairness are core ethical principles embedded in classical argument-based assessment validation theory. These principles are also central to the Standards for Educational and Psychological Testing (2014), which recommended best practices for early applications of artificial intelligence (AI) to automated scoring of written and spoken responses in high-stakes assessments. Responsible AI (RAI) principles and practices set forth by the AI ethics community are critical to ensuring the ethical use of AI across various industry domains. Advances in generative AI have led to new policies, as well as guidance, about the implementation of RAI principles for assessments using AI. Building on Chapelle's foundational validity-argument work addressing the application of assessment validation theory to technology-based assessment, we propose a unified assessment framework that considers classical test validation theory together with assessment-specific and domain-agnostic RAI principles and practices. The framework addresses responsible AI use for assessment that supports validity arguments, alignment with AI ethics to maintain human values and oversight, and the broader social responsibility associated with AI use.

Model editing aims to data-efficiently correct predictive errors of large pre-trained models while ensuring generalization to neighboring failures and locality to minimize unintended effects on unrelated examples. While significant progress has been made in editing Transformer-based large language models, effective strategies for editing vision Transformers (ViTs) in computer vision remain largely untapped. In this paper, we take initial steps towards correcting predictive errors of ViTs, particularly those arising from subpopulation shifts. Taking a locate-then-edit approach, we first address the where-to-edit challenge by meta-learning a hypernetwork on CutMix-augmented data generated for editing reliability. This trained hypernetwork produces generalizable binary masks that identify a sparse subset of structured model parameters, responsive to real-world failure samples. Afterward, we solve the how-to-edit problem by simply fine-tuning the identified parameters using a variant of gradient descent to achieve successful edits. To validate our method, we construct an editing benchmark that introduces subpopulation shifts towards natural underrepresented images and AI-generated images, thereby revealing the limitations of pre-trained ViTs for object recognition. Our approach not only achieves superior performance on the proposed benchmark but also allows for adjustable trade-offs between generalization and locality. Our code is available at https://github.com/hustyyq/Where-to-Edit.
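The "how-to-edit" step lends itself to a short sketch: given a binary mask over a parameter tensor, fine-tune only the masked entries by zeroing all other gradients. The random mask and the tiny linear model below are illustrative stand-ins; in the paper the mask comes from the meta-learned hypernetwork and the model is a ViT.

```python
import torch

# Sketch of masked fine-tuning: update only parameters where mask == 1,
# freeze everything else. Mask and model here are illustrative.
def masked_edit(model, mask_dict, x, y_target, steps=10, lr=1e-3):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(x), y_target)
        loss.backward()
        for name, p in model.named_parameters():
            if name in mask_dict:
                p.grad *= mask_dict[name]   # keep gradients only inside the mask
            else:
                p.grad = None               # freeze parameters without a mask
        opt.step()
    return model

model = torch.nn.Linear(16, 4)
mask = {"weight": (torch.rand(4, 16) < 0.1).float()}  # sparse illustrative mask
x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))
masked_edit(model, mask, x, y)
```

Restricting updates to a sparse mask is what buys locality: parameters outside the mask, and hence behavior on unrelated inputs, are untouched.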

Gaussian graphical regressions have emerged as a powerful approach for regressing the precision matrix of a Gaussian graphical model on covariates, which, unlike traditional Gaussian graphical models, can help determine how graphs are modulated by high-dimensional subject-level covariates and recover both the population-level and subject-level graphs. To fit the model, a multi-task learning approach achieves lower error rates than node-wise regressions. However, due to the high complexity and dimensionality of the Gaussian graphical regression problem, the important task of statistical inference remains unexplored. We propose a class of debiased estimators based on multi-task learners for statistical inference in Gaussian graphical regressions. We show that debiasing can be performed quickly and separately for the multi-task learners. In a key debiasing step that estimates the inverse covariance matrix, we propose a novel projection technique that dramatically reduces computational costs in optimization to scale only with the sample size $n$. We show that our debiased estimators enjoy a fast convergence rate and asymptotically follow a normal distribution, enabling valid statistical inference such as constructing confidence intervals and performing hypothesis tests. Simulation studies confirm the practical utility of the proposed approach, and we further apply it to analyze gene co-expression graph data from a brain cancer study, revealing meaningful biological relationships.
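For readers unfamiliar with debiasing, the generic one-step correction below conveys the idea; it is the standard debiased-lasso form, not the paper's exact multi-task construction, and $\hat{\Theta}$ stands in for whatever approximate inverse covariance the projection technique produces.

```latex
% Generic one-step debiasing (illustrative form): correct the regularized
% estimate by a term involving an approximate inverse covariance \hat\Theta.
\[
\hat{\beta}^{\mathrm{deb}}
  = \hat{\beta}
  + \frac{1}{n}\,\hat{\Theta} X^{\top}\bigl(y - X\hat{\beta}\bigr).
\]
% Under suitable sparsity conditions, \sqrt{n}(\hat{\beta}^{\mathrm{deb}} - \beta)
% is asymptotically normal, which licenses confidence intervals and tests.
```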

Using statistical learning methods to analyze stochastic simulation outputs can significantly enhance decision-making by uncovering relationships between different simulated systems and between a system's inputs and outputs. We focus on clustering multivariate empirical distributions of simulation outputs to identify patterns and trade-offs among performance measures. We present a novel agglomerative clustering algorithm that utilizes the regularized Wasserstein distance to cluster these multivariate empirical distributions. This framework has several important use cases, including anomaly detection, pre-optimization, and online monitoring. In numerical experiments involving a call-center model, we demonstrate how this methodology can identify staffing plans that yield similar performance outcomes and inform policies for intervening when queue lengths signal potentially worsening system performance.
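A sketch of the pipeline the abstract describes appears below: pairwise entropy-regularized Wasserstein distances between empirical output distributions, fed into agglomerative clustering. The structure (uniform sample weights, squared-Euclidean ground cost, average linkage) is an assumption for illustration, not the paper's exact algorithm.

```python
import numpy as np
import ot                                   # POT: Python Optimal Transport
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import cdist, squareform

# Entropy-regularized Wasserstein distance between two empirical
# distributions, each given as an array of multivariate output samples.
def reg_wasserstein(X, Y, reg=0.1):
    a = np.full(len(X), 1.0 / len(X))       # uniform weights on samples
    b = np.full(len(Y), 1.0 / len(Y))
    M = cdist(X, Y, metric="sqeuclidean")   # ground cost between outputs
    return ot.sinkhorn2(a, b, M, reg)       # regularized OT cost

rng = np.random.default_rng(0)
# Three simulated systems, each an empirical distribution of 2-D outputs
# (e.g., mean wait time and queue length per replication).
systems = [rng.normal(loc=m, size=(100, 2)) for m in (0.0, 0.1, 3.0)]

n = len(systems)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = reg_wasserstein(systems[i], systems[j])

labels = fcluster(linkage(squareform(D), method="average"), t=2, criterion="maxclust")
print(labels)                               # the first two systems cluster together
```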

Many scientific questions in biomedical, environmental, and psychological research involve understanding the effects of multiple factors on outcomes. While factorial experiments are ideal for this purpose, randomized controlled treatment assignment is generally infeasible in many empirical studies. Therefore, investigators must rely on observational data, where drawing reliable causal inferences for multiple factors remains challenging. As the number of treatment combinations grows exponentially with the number of factors, some treatment combinations can be rare or missing by chance in observed data, further complicating factorial effects estimation. To address these challenges, we propose a novel weighting method tailored to observational studies with multiple factors. Our approach uses weighted observational data to emulate a randomized factorial experiment, enabling simultaneous estimation of the effects of multiple factors and their interactions. Our investigations reveal a crucial nuance: achieving balance among covariates, as in single-factor scenarios, is necessary but insufficient for unbiased estimation of factorial effects; balancing the factors themselves is also essential in multi-factor settings. Moreover, we extend our weighting method to handle missing treatment combinations in observed data. Finally, we study the asymptotic behavior of the new weighting estimators and propose a consistent variance estimator, providing reliable inferences on factorial effects in observational studies.
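The baseline idea of weighting toward an emulated factorial experiment can be sketched as follows: treat each factor combination as one multinomial treatment, estimate generalized propensity scores, and weight by their inverse. This is a plain inverse-probability-weighting illustration, not the paper's proposed estimator, and the data-generating process is invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative two-factor observational study: both factors depend on X.
rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 3))                          # covariates
f1 = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))     # factor 1
f2 = rng.binomial(1, 1 / (1 + np.exp(-X[:, 1])))     # factor 2
combo = 2 * f1 + f2                                  # 4 treatment combinations
y = 1.0 * f1 + 0.5 * f2 + 0.25 * f1 * f2 + X @ np.ones(3) + rng.normal(size=n)

# Generalized propensity scores for the combined treatment, then inverse
# weights so the weighted data mimic a randomized factorial experiment.
ps = LogisticRegression(max_iter=1000).fit(X, combo).predict_proba(X)
w = 1.0 / ps[np.arange(n), combo]

# Weighted cell means recover factorial effects, e.g. the main effect of
# factor 1 as a difference of averaged cell means.
cell = {c: np.average(y[combo == c], weights=w[combo == c]) for c in range(4)}
main_f1 = (cell[2] + cell[3]) / 2 - (cell[0] + cell[1]) / 2
print(main_f1)   # near 1.125 under this data-generating process
```

Note that the weights here target covariate balance only; the abstract's point is that in multi-factor settings one must additionally balance the factors themselves.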

Existing neural operator architectures face challenges when solving multiphysics problems with coupled partial differential equations (PDEs) due to complex geometries, interactions between physical variables, and the limited amounts of high-resolution training data. To address these issues, we propose Codomain Attention Neural Operator (CoDA-NO), which tokenizes functions along the codomain or channel space, enabling self-supervised learning or pretraining of multiple PDE systems. Specifically, we extend positional encoding, self-attention, and normalization layers to function spaces. CoDA-NO can learn representations of different PDE systems with a single model. We evaluate CoDA-NO's potential as a backbone for learning multiphysics PDEs over multiple systems by considering few-shot learning settings. On complex downstream tasks with limited data, such as fluid flow simulations, fluid-structure interactions, and Rayleigh-Bénard convection, we found CoDA-NO to outperform existing methods by over 36%.
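To illustrate what tokenizing along the codomain means, the sketch below treats each channel (physical variable) of a discretized function as one token and applies self-attention across those tokens. This is a structural caricature inspired by the abstract, not the released CoDA-NO architecture, which additionally lifts positional encoding, attention, and normalization to function spaces.

```python
import torch

# Channel-wise ("codomain") tokenization sketch: tokens are the physical
# variables, and self-attention mixes information across them.
class ChannelAttention(torch.nn.Module):
    def __init__(self, grid_size, heads=4):
        super().__init__()
        self.attn = torch.nn.MultiheadAttention(grid_size, heads, batch_first=True)

    def forward(self, u):                    # u: (batch, channels, grid)
        out, _ = self.attn(u, u, u)          # tokens = channels, dim = grid values
        return out

u = torch.randn(2, 3, 64)                    # e.g. 3 coupled variables on 64 points
print(ChannelAttention(64)(u).shape)         # (2, 3, 64)
```

Because the token count equals the number of physical variables, a single model can in principle be applied to PDE systems with different sets of variables, which is what enables the multi-system pretraining described above.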

Causality can be described in terms of a structural causal model (SCM) that carries information on the variables of interest and their mechanistic relations. For most processes of interest the underlying SCM will only be partially observable, thus causal inference tries to leverage any exposed information. Graph neural networks (GNN), as universal approximators on structured input, are viable candidates for causal learning, suggesting a tighter integration with SCM. To this end we present a theoretical analysis from first principles that establishes a novel connection between GNN and SCM while providing an extended view on general neural-causal models. We then establish a new model class for GNN-based causal inference that is necessary and sufficient for causal effect identification. Our empirical illustration on simulations and standard benchmarks validates our theoretical proofs.

Embedding entities and relations into a continuous multi-dimensional vector space has become the dominant method for knowledge graph embedding in representation learning. However, most existing models fail to represent hierarchical knowledge, such as the similarities and dissimilarities of entities within one domain. We propose to learn domain representations on top of existing knowledge graph embedding models, such that entities with similar attributes are organized into the same domain. Such hierarchical knowledge of domains can provide further evidence for link prediction. Experimental results show that domain embeddings yield a significant improvement over the most recent state-of-the-art baseline knowledge graph embedding models.
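One plausible way to realize this, sketched below under stated assumptions, is to augment a TransE-style translation score with a domain-agreement term; the formulation, the weighting $\alpha$, and the entity-to-domain assignment are all hypothetical, as the abstract does not specify the exact scoring function.

```python
import numpy as np

# Illustrative sketch (assumed formulation, not the paper's exact model):
# a TransE-style score plus a term rewarding entities from nearby domains.
rng = np.random.default_rng(0)
dim, n_ent, n_rel, n_dom = 16, 100, 10, 5
E = rng.normal(size=(n_ent, dim))           # entity embeddings
R = rng.normal(size=(n_rel, dim))           # relation embeddings
D = rng.normal(size=(n_dom, dim))           # domain embeddings
dom = rng.integers(0, n_dom, size=n_ent)    # entity -> domain assignment

def score(h, r, t, alpha=0.5):
    base = -np.linalg.norm(E[h] + R[r] - E[t])        # TransE translation score
    domain = -np.linalg.norm(D[dom[h]] - D[dom[t]])   # domain-agreement term
    return base + alpha * domain

print(score(3, 1, 7))
```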

The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
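At the core of the Transformer is scaled dot-product attention, $\mathrm{Attention}(Q, K, V) = \mathrm{softmax}(QK^{\top}/\sqrt{d_k})\,V$, which the paper composes into multi-head attention; the minimal NumPy version below shows just the single-head computation on random inputs.

```python
import numpy as np

# Scaled dot-product attention from "Attention Is All You Need":
# Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.
def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(5, 8)) for _ in range(3))
print(attention(Q, K, V).shape)                       # (5, 8)
```

The $\sqrt{d_k}$ scaling keeps the dot products in a range where the softmax retains usable gradients as the key dimension grows.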
