
We present a flexible data-driven method for dynamical system analysis that does not require explicit model discovery. The method is rooted in well-established techniques for approximating the Koopman operator from data and is implemented as a semidefinite program that can be solved numerically. The method is agnostic to whether the data are generated by a deterministic or stochastic process, so its implementation requires no prior adjustments by the user to accommodate these different scenarios. Rigorous convergence results justify the applicability of the method, while also extending and unifying similar results from across the literature. Examples of discovering Lyapunov functions and of performing ergodic optimization for both deterministic and stochastic dynamics illustrate these convergence results and demonstrate the performance of the method.
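
The semidefinite program itself is not spelled out in the abstract, so the sketch below only illustrates the well-established Koopman-approximation step the method builds on: plain EDMD (extended dynamic mode decomposition), with a hypothetical monomial dictionary and a logistic-map data source chosen purely for illustration.

\begin{verbatim}
import numpy as np

def edmd_koopman(X, Y, dictionary):
    # Least-squares (EDMD) estimate of the Koopman matrix K such that
    # dictionary(Y) ~ dictionary(X) @ K, from snapshot pairs (X, Y).
    PX = dictionary(X)                     # (n_samples, n_features)
    PY = dictionary(Y)
    K, *_ = np.linalg.lstsq(PX, PY, rcond=None)
    return K

def monomials(X):
    # Hypothetical dictionary: monomials up to degree 2 for a 1-D state.
    x = X.ravel()
    return np.column_stack([np.ones_like(x), x, x**2])

# Snapshot pairs from the logistic map x' = 3.5 x (1 - x).
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(500, 1))
Y = 3.5 * X * (1.0 - X)
K = edmd_koopman(X, Y, monomials)         # 3x3 Koopman matrix on the span
\end{verbatim}

The same least-squares estimate applies unchanged to stochastic data, in which case K approximates the stochastic Koopman (transfer) operator; this is one way to see why no user adjustment is needed across the two scenarios.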

Related Content

Here we consider a problem of multiple measurement vector (MMV) compressed sensing with multiple signal sources. The observation model is motivated by the application of {\em unsourced random access} in wireless cell-free MIMO (multiple-input multiple-output) networks. We present a novel and rigorous high-dimensional analysis of the AMP (approximate message passing) algorithm devised for the model. As the system dimensions, all of the same order, say $\mathcal O(L)$, tend to infinity, we show that the empirical dynamical order parameters describing the dynamics of the AMP converge to deterministic limits, given by a state-evolution equation, at the rate $\mathcal O(L^{-\frac 1 2})$. Furthermore, we show the asymptotic consistency of the AMP analysis with the replica-symmetric calculation for the static problem. In addition, we discuss some interesting aspects of unsourced random access (or initial access) in cell-free systems, the application motivating the algorithm.
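
The MMV AMP analyzed above is model-specific and not reproduced here; as a minimal sketch of the iteration family, the snippet below implements textbook scalar AMP for $y = Ax + w$ with a soft-threshold denoiser. The Onsager correction is what keeps the effective noise approximately Gaussian, so that scalar state-evolution quantities (the order parameters above) track the dynamics; the threshold parameter alpha is a hypothetical tuning choice.

\begin{verbatim}
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def amp(A, y, n_iter=30, alpha=1.5):
    # Textbook AMP for y = A x + w with a soft-threshold denoiser.
    m, n = A.shape
    x, z = np.zeros(n), y.copy()
    for _ in range(n_iter):
        tau = alpha * np.sqrt(np.mean(z ** 2))    # empirical noise level
        x_new = soft(x + A.T @ z, tau)
        # Onsager term: (residual / m) * divergence of the denoiser,
        # which for soft thresholding is the number of nonzeros.
        onsager = (z / m) * np.count_nonzero(x_new)
        z = y - A @ x_new + onsager
        x = x_new
    return x
\end{verbatim}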

We present a comprehensive computational study of a class of linear system solvers, called the {\it Triangle Algorithm} (TA) and {\it Centering Triangle Algorithm} (CTA), developed by Kalantari \cite{kalantari23}. The algorithms compute an approximate solution or minimum-norm solution to $Ax=b$ or $A^TAx=A^Tb$, where $A$ is an $m \times n$ real matrix of arbitrary rank. The algorithms specialize when $A$ is symmetric positive semi-definite. Based on the description and theoretical properties of TA and CTA from \cite{kalantari23}, we give an implementation of the algorithms that is easy to use for practitioners, versatile across a wide range of problems, and robust in that it imposes no constraints on $A$. Next, we compare our implementation computationally with the Matlab implementations of two state-of-the-art algorithms, GMRES and ``lsqminnorm''. We consider square and rectangular matrices, with $m$ up to $10000$ and $n$ up to $1000000$, encompassing a variety of applications. The results indicate that our implementation outperforms GMRES and ``lsqminnorm'' in both runtime and quality of residuals. Moreover, the relative residuals of CTA decrease considerably faster and more consistently than those of GMRES, and our implementation reaches high-precision approximations in less time than GMRES takes to report failure to converge. With respect to ``lsqminnorm'', our implementation runs faster while producing better solutions. Additionally, we present a theoretical study of the dynamics of the residual iterates of CTA and complement it with revealing visualizations. Lastly, we extend TA to LP feasibility problems by handling non-negativity constraints. Computational results show that our implementation of this extension is on par with those of TA and CTA, suggesting applicability to linear programming and related problems.
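
The TA and CTA iterations follow \cite{kalantari23} and are not reconstructed here; the sketch below only illustrates, in Python with SciPy standing in for the Matlab baselines, the kind of relative-residual comparison against GMRES and a minimum-norm least-squares solver described above. The problem size and tolerances are hypothetical.

\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import gmres, lsqr

rng = np.random.default_rng(1)
m, n = 500, 400                       # hypothetical rectangular test case
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# GMRES requires a square operator, so run it on the normal equations
# A^T A x = A^T b; lsqr approximates the minimum-norm least-squares
# solution, playing the role of Matlab's lsqminnorm.
x_g, info = gmres(A.T @ A, A.T @ b)
x_l = lsqr(A, b, atol=1e-12, btol=1e-12)[0]

for name, x in (("gmres", x_g), ("lsqr", x_l)):
    r = np.linalg.norm(A.T @ (A @ x - b)) / np.linalg.norm(A.T @ b)
    print(name, "relative normal-equation residual:", r)
\end{verbatim}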

Noiseless compressive sensing is a protocol that enables undersampling and later recovery of a signal without loss of information. This compression is possible because the signal is sufficiently sparse in a given basis. Currently, the algorithm offering the best tradeoff between compression rate, robustness, and speed for compressive sensing is LASSO (the l1-norm penalty). However, many studies have pointed out that lp-norm penalties with p smaller than one could give better performance at the price of losing convexity. In this work, we focus on the extreme case of l0-based reconstruction, a task that is complicated by the discontinuity of the loss. In the first part of the paper, we describe, via statistical physics methods and in particular the replica method, how the solutions to this optimization problem are arranged in a clustered structure. We observe two distinct regimes: one at low compression rate where the signal can be recovered exactly, and one at high compression rate where the signal cannot be recovered accurately. In the second part, we present two message-passing algorithms, based on our first results, for the l0-norm optimization problem. The proposed algorithms are able to recover the signal at compression rates higher than those achieved by LASSO while remaining computationally efficient.
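
The clustered solution structure and the two message-passing algorithms are the paper's contribution and are not reconstructed here; as a simple point of contrast, the snippet below implements iterative hard thresholding (IHT), a classic baseline that enforces the l0 constraint directly by keeping the k largest-magnitude entries after each gradient step. The sparsity level k is assumed known.

\begin{verbatim}
import numpy as np

def iht(A, y, k, n_iter=200, step=None):
    # Iterative hard thresholding for min ||y - Ax||^2 s.t. ||x||_0 <= k.
    m, n = A.shape
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L for the smooth part
    x = np.zeros(n)
    for _ in range(n_iter):
        g = x + step * A.T @ (y - A @ x)         # gradient step
        idx = np.argpartition(np.abs(g), -k)[-k:]  # k largest magnitudes
        x = np.zeros(n)
        x[idx] = g[idx]                          # hard projection onto l0 ball
    return x
\end{verbatim}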

Since concerns about privacy leakage strongly discourage users from sharing their data, federated learning has gradually become a promising technique, in both academia and industry, for achieving collaborative learning without leaking information about local data. Unfortunately, most federated learning solutions cannot simultaneously verify the execution of each participant's local machine learning model and protect the privacy of user data. In this article, we first propose a Zero-Knowledge Proof-based Federated Learning (ZKP-FL) scheme on blockchain. It leverages zero-knowledge proofs for both the computation on local data and the aggregation of local model parameters, aiming to verify the computation process without requiring the plaintext of the local data. We further propose a Practical ZKP-FL (PZKP-FL) scheme to support fractional and non-linear operations. Specifically, we explore a Fraction-Integer mapping function and use Taylor expansion to efficiently handle non-linear operations while maintaining the accuracy of the federated learning model. We also analyze the security of PZKP-FL. Performance analysis demonstrates that the whole running time of the PZKP-FL scheme is under approximately one minute with parallel execution.
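
The exact Fraction-Integer mapping of PZKP-FL is not specified in the abstract; the sketch below shows one natural reading of the idea: a hypothetical fixed-point scaling that moves fractions into the integers a ZKP circuit operates over, combined with a truncated Taylor expansion of the sigmoid activation. SCALE and the truncation degree are illustrative choices, not the paper's parameters.

\begin{verbatim}
from fractions import Fraction

SCALE = 10**6  # hypothetical precision of the Fraction-Integer mapping

def to_int(x: float) -> int:
    # Map a fraction to an integer so a ZKP circuit can work over Z.
    return round(x * SCALE)

def from_int(v: int) -> float:
    return v / SCALE

def sigmoid_taylor_int(v: int, terms: int = 4) -> int:
    # Taylor expansion of sigmoid at 0: 1/2 + x/4 - x^3/48 + x^5/480 - ...
    x = Fraction(v, SCALE)
    coeffs = [Fraction(1, 2), Fraction(1, 4), Fraction(0),
              Fraction(-1, 48), Fraction(0), Fraction(1, 480)]
    acc = Fraction(0)
    for i, c in enumerate(coeffs[:terms]):
        acc += c * x**i
    return round(acc * SCALE)

print(from_int(sigmoid_taylor_int(to_int(0.5))))  # ~0.6224 vs exact 0.6225
\end{verbatim}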

We propose a machine-learning approach to model long-term out-of-sample dynamics of brain activity from task-dependent fMRI data. Our approach comprises three stages. First, we exploit Diffusion maps (DMs) to discover a set of variables that parametrize the low-dimensional manifold on which the emergent high-dimensional fMRI time series evolve. Then, we construct reduced-order models (ROMs) on the embedded manifold via two techniques: Feedforward Neural Networks (FNNs) and the Koopman operator. Finally, to predict the out-of-sample long-term dynamics of brain activity in the ambient fMRI space, we solve the pre-image problem, coupling DMs with Geometric Harmonics (GH) when using FNNs, and using the Koopman modes per se otherwise. For our illustrations, we assess the performance of the two proposed schemes on a benchmark fMRI dataset with recordings during a visuo-motor task. The results suggest that just a few (for the particular task, five) non-linear coordinates of the high-dimensional fMRI time series provide a good basis for modelling and out-of-sample prediction of the brain activity. Furthermore, we show that the proposed approaches outperform the one-step-ahead predictions of the naive random walk model, which, in contrast to our scheme, relies on knowledge of the signals at the previous time step. Importantly, we show that the proposed Koopman operator approach provides, for all practical purposes, results equivalent to the FNN-GH approach, thus bypassing the need to train a non-linear map and to use GH to extrapolate predictions in the ambient fMRI space; one can instead use the low-frequency truncation of the DM function space of $L^2$-integrable functions to predict the coordinate functions in the fMRI space and to solve the pre-image problem.
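
As a minimal sketch of the first stage, the snippet below computes a plain diffusion-maps embedding: Gaussian kernel, row-normalized Markov matrix, leading non-trivial eigenvectors. The bandwidth heuristic is an illustrative choice, and the density-normalization step used in practice is omitted for brevity.

\begin{verbatim}
import numpy as np

def diffusion_maps(X, n_coords=5, eps=None):
    # Pairwise squared distances between samples (rows of X).
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    if eps is None:
        eps = np.median(D2)                  # heuristic kernel bandwidth
    K = np.exp(-D2 / eps)                    # Gaussian affinities
    P = K / K.sum(axis=1, keepdims=True)     # row-stochastic Markov matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    sel = order[1:n_coords + 1]              # skip the trivial eigenvector
    return vecs.real[:, sel] * vals.real[sel]
\end{verbatim}

In the fMRI setting above, the rows of X would be the high-dimensional signal snapshots, and the five retained coordinates would parametrize the manifold on which the ROMs are built.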

Processing-in-memory (PIM) promises to alleviate the data movement bottleneck in modern computing systems. However, current real-world PIM systems have the inherent disadvantage that their hardware is more constrained than in conventional processors (CPU, GPU), due to the difficulty and cost of building processing elements near or inside the memory. As a result, general-purpose PIM architectures support fairly limited instruction sets and struggle to execute complex operations such as transcendental functions and other hard-to-calculate operations (e.g., square root). These operations are particularly important for some modern workloads, e.g., activation functions in machine learning applications. In order to provide support for transcendental (and other hard-to-calculate) functions in general-purpose PIM systems, we present \emph{TransPimLib}, a library that provides CORDIC-based and LUT-based methods for trigonometric functions, hyperbolic functions, exponentiation, logarithm, square root, etc. We develop an implementation of TransPimLib for the UPMEM PIM architecture and perform a thorough evaluation of TransPimLib's methods in terms of performance and accuracy, using microbenchmarks and three full workloads (Blackscholes, Sigmoid, Softmax). We open-source all our code and datasets at~\url{https://github.com/CMU-SAFARI/transpimlib}.
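
TransPimLib itself targets UPMEM's C toolchain and fixed-point arithmetic; the sketch below only illustrates the CORDIC idea in Python with floats. Rotation-mode CORDIC computes sine and cosine with shift-and-add style updates and a small table of precomputed angles, which is what makes it attractive for constrained PIM cores.

\begin{verbatim}
import math

N = 24                                          # CORDIC iterations
ANGLES = [math.atan(2.0 ** -i) for i in range(N)]
GAIN = math.prod(math.cos(a) for a in ANGLES)   # scaling constant ~0.6073

def cordic_sincos(theta):
    # Rotation mode; theta must lie in [-pi/2, pi/2].
    x, y, z = 1.0, 0.0, theta
    for i in range(N):
        d = 1.0 if z >= 0.0 else -1.0           # rotate toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return y * GAIN, x * GAIN                   # (sin, cos)

print(cordic_sincos(0.7), (math.sin(0.7), math.cos(0.7)))
\end{verbatim}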

The Frank-Wolfe (FW) method is a popular approach for solving optimization problems with the structured constraints that arise in machine learning applications. In recent years, stochastic versions of FW have gained popularity, motivated by large datasets for which computing the full gradient is prohibitively expensive. In this paper, we present two new variants of the FW algorithm for stochastic finite-sum minimization. Our algorithms achieve the best convergence guarantees among existing stochastic FW approaches for both convex and non-convex objective functions. Our methods do not suffer from the need to permanently collect large batches, which is common to many stochastic projection-free approaches. Moreover, our second approach requires neither large batches nor full deterministic gradients, a typical weakness of many techniques for finite-sum problems. The faster theoretical rates of our approaches are confirmed experimentally.
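
The two variants above are not reconstructed here; as a baseline illustration of the projection-free template they refine, here is vanilla mini-batch Frank-Wolfe, where the constraint set enters only through a linear minimization oracle (the l1-ball oracle below is one common instance). All names and parameters are illustrative.

\begin{verbatim}
import numpy as np

def stochastic_fw(grad_batch, lmo, x0, n_iter=500, batch=32, n_data=10000):
    # Mini-batch Frank-Wolfe: a stochastic gradient estimate, an LMO call,
    # and a convex-combination step; no projection is ever needed.
    x = x0.copy()
    rng = np.random.default_rng(0)
    for t in range(n_iter):
        idx = rng.integers(0, n_data, size=batch)
        g = grad_batch(x, idx)          # stochastic gradient estimate
        s = lmo(g)                      # argmin over the constraint set
        gamma = 2.0 / (t + 2.0)         # classic step-size schedule
        x = (1.0 - gamma) * x + gamma * s
    return x

def lmo_l1(g, radius=1.0):
    # LMO for the l1 ball: a signed vertex along the largest |gradient|.
    s = np.zeros_like(g)
    i = np.argmax(np.abs(g))
    s[i] = -radius * np.sign(g[i])
    return s
\end{verbatim}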

Recently, graph neural networks (GNNs) have been gaining considerable attention for simulating dynamical systems, since their inductive nature enables zero-shot generalizability. Similarly, physics-informed inductive biases in deep-learning frameworks have been shown to give superior performance in learning the dynamics of physical systems. There is a growing volume of literature that attempts to combine these two approaches. Here, we evaluate the performance of thirteen different graph neural networks, namely Hamiltonian and Lagrangian graph neural networks, graph neural ODEs, and their variants with explicit constraints and different architectures. We briefly explain the theoretical formulations, highlighting the similarities and differences in the inductive biases and graph architectures of these systems. We evaluate these models on spring, pendulum, gravitational, and 3D deformable solid systems, comparing performance in terms of rollout error, conserved quantities such as energy and momentum, and generalizability to unseen system sizes. Our study demonstrates that GNNs with additional inductive biases, such as explicit constraints and decoupling of kinetic and potential energies, exhibit significantly enhanced performance. Further, all the physics-informed GNNs exhibit zero-shot generalizability to system sizes an order of magnitude larger than the training systems, thus providing a promising route to simulating large-scale realistic systems.
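
The decoupling of kinetic and potential energy mentioned above can be made concrete with a minimal, non-graph sketch: for a separable Hamiltonian $H = p^2/(2m) + V(q)$, a symplectic leapfrog rollout only ever queries the gradient of V, which is the piece a Hamiltonian (graph) network would learn. The pendulum example and all names here are illustrative, not any of the thirteen benchmarked models.

\begin{verbatim}
import numpy as np

def leapfrog(q, p, grad_V, dt=0.01, steps=1000, mass=1.0):
    # Kick-drift-kick integrator for H = p^2/(2m) + V(q); a learned
    # model would replace grad_V with a network's gradient.
    traj = [q.copy()]
    for _ in range(steps):
        p = p - 0.5 * dt * grad_V(q)   # half kick
        q = q + dt * p / mass          # drift
        p = p - 0.5 * dt * grad_V(q)   # half kick
        traj.append(q.copy())
    return np.array(traj), p

# Simple pendulum with V(q) = -cos(q), so grad_V = sin.
traj, _ = leapfrog(np.array([1.0]), np.array([0.0]), grad_V=np.sin)
\end{verbatim}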

Interest in the field of Explainable Artificial Intelligence has been growing for decades and has accelerated recently. As Artificial Intelligence models have become more complex, and often more opaque, with the incorporation of complex machine learning techniques, explainability has become more critical. Recently, researchers have been investigating and tackling explainability with a user-centric focus, seeking explanations that account for trustworthiness, comprehensibility, explicit provenance, and context-awareness. In this chapter, we leverage our survey of explanation literature in Artificial Intelligence and closely related fields, and use these past efforts to generate a set of explanation types that we believe reflect the expanded needs of explanation for today's artificial intelligence applications. We define each type and provide an example question that would motivate the need for that style of explanation. We believe this set of explanation types will help future system designers generate and prioritize requirements and, further, help generate explanations better aligned with users' and situational needs.

The era of big data provides researchers with convenient access to copious data. However, people often have little prior knowledge about these data. The increasing prevalence of big data challenges the traditional methods of learning causality, which were developed for settings with limited amounts of data and solid prior causal knowledge. This survey aims to close the gap between big data and learning causality with a comprehensive and structured review of traditional and frontier methods, and a discussion of some open problems in learning causality. We begin with preliminaries of learning causality. Then we categorize and revisit methods of learning causality for typical problems and data types. After that, we discuss the connections between learning causality and machine learning. Finally, we present some open problems that show the great potential of learning causality with data.
