
The growing availability and usage of low precision floating point formats has attracted considerable interest in developing lower or mixed precision algorithms for scientific computing problems. In this paper we investigate the possibility of exploiting lower precision computing in LSQR for solving discrete linear ill-posed problems. We analyze the choice of proper computing precisions in the two main parts of LSQR: the construction of Lanczos vectors and the updating procedure of iterative solutions. We show that, under some mild conditions, the Lanczos vectors can be computed using single precision without any loss of accuracy in the final regularized solutions as long as the noise level is not extremely small. We also show that the most time-consuming part of updating the iterative solutions can be performed using single precision without sacrificing any accuracy. The results indicate that the most time-consuming parts of the algorithm can be implemented in single precision, and thus the performance of LSQR for solving discrete linear ill-posed problems can be significantly enhanced. Numerical experiments are carried out to test the single precision variants of LSQR and confirm our results.
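As a hedged illustration (not the authors' implementation), the sketch below runs the Golub-Kahan bidiagonalization underlying LSQR in single precision and solves the small projected least-squares problem in double precision; reorthogonalization and the usual recursive solution updates are omitted, and regularization for ill-posed problems would come from stopping the iteration early.

```python
# A minimal, illustrative sketch (not the paper's exact implementation): Golub-Kahan
# bidiagonalization carried out in single precision, with the small projected
# least-squares problem solved in double precision.
import numpy as np

def lsqr_mixed_precision(A, b, n_iter=30):
    A32 = A.astype(np.float32)            # Lanczos vectors built in single precision
    b32 = b.astype(np.float32)

    beta = np.linalg.norm(b32)
    u = b32 / beta
    v = A32.T @ u
    alpha = np.linalg.norm(v)
    v = v / alpha

    U, V = [u], [v]
    alphas, betas = [alpha], [beta]

    for _ in range(n_iter):
        u_next = A32 @ V[-1] - alphas[-1] * U[-1]
        beta = np.linalg.norm(u_next)
        u_next = u_next / beta
        v_next = A32.T @ u_next - beta * V[-1]
        alpha = np.linalg.norm(v_next)
        v_next = v_next / alpha
        U.append(u_next); V.append(v_next)
        alphas.append(alpha); betas.append(beta)

    # Projected problem min_y || beta_1 e_1 - B_k y ||, solved in double precision.
    k = n_iter
    B = np.zeros((k + 1, k))
    for i in range(k):
        B[i, i] = alphas[i]
        B[i + 1, i] = betas[i + 1]
    rhs = np.zeros(k + 1); rhs[0] = betas[0]
    y, *_ = np.linalg.lstsq(B, rhs, rcond=None)
    Vk = np.stack(V[:k], axis=1).astype(np.float64)
    return Vk @ y
```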

Related Content

This paper deals with Hermite osculatory interpolating splines. For a partition of a real interval endowed with a refinement obtained by dividing each subinterval into two smaller subintervals, we consider a space of smooth splines, of the lowest possible degree, with additional smoothness at the vertices of the initial partition. A normalized B-spline-like representation for the considered spline space is provided. In addition, several quasi-interpolation operators based on blossoming and control polynomials are developed. Some numerical tests are presented and compared with recent works to illustrate the performance of the proposed approach.
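The paper's spline space and its B-spline-like basis are not reproduced here; purely as a generic illustration of Hermite-type interpolation on a refined partition (matching values and first derivatives at the knots), one can use SciPy's cubic Hermite spline:

```python
# Illustrative only: standard cubic Hermite interpolation with SciPy, not the
# osculatory spline space or quasi-interpolants constructed in the paper.
import numpy as np
from scipy.interpolate import CubicHermiteSpline

f, df = np.sin, np.cos

x = np.linspace(0.0, np.pi, 6)                                    # coarse partition
x_refined = np.sort(np.concatenate([x, (x[:-1] + x[1:]) / 2]))    # split each subinterval in two

spline = CubicHermiteSpline(x_refined, f(x_refined), df(x_refined))

t = np.linspace(0.0, np.pi, 200)
print("max interpolation error:", np.max(np.abs(spline(t) - f(t))))
```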

Many complex tasks can be decomposed into simpler, independent parts. Discovering such underlying compositional structure has the potential to enable compositional generalization. Despite progress, our most powerful systems struggle to compose flexibly. It therefore seems natural to make models more modular to help capture the compositional nature of many tasks. However, it is unclear under which circumstances modular systems can discover hidden compositional structure. To shed light on this question, we study a teacher-student setting with a modular teacher where we have full control over the composition of ground truth modules. This allows us to relate the problem of compositional generalization to that of identification of the underlying modules. In particular, we study modularity in hypernetworks representing a general class of multiplicative interactions. We show theoretically that identification up to linear transformation purely from demonstrations is possible without having to learn an exponential number of module combinations. We further demonstrate empirically that, under the theoretically identified conditions, meta-learning from finite data can discover modular policies that generalize compositionally in a number of complex environments.
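As a toy illustration of the setting (the module count, dimensions, and task code below are invented for this sketch, not taken from the paper), a modular teacher can be written as a hypernetwork whose generated weights are a code-weighted combination of module weight tensors, i.e. a multiplicative interaction between the task code and the input:

```python
# A toy sketch (not the paper's exact architecture): a "hypernetwork" whose
# generated linear map is a linear combination of module weight matrices.
import numpy as np

rng = np.random.default_rng(0)
n_modules, d_in, d_out = 4, 8, 3

teacher_modules = rng.standard_normal((n_modules, d_out, d_in))

def teacher(x, task_code):
    # The effective weight matrix is W(task) = sum_m task_code[m] * teacher_modules[m],
    # so the output is multiplicative in (task_code, x).
    W = np.tensordot(task_code, teacher_modules, axes=1)
    return W @ x

# A demonstration from one task that mixes modules 0 and 2 equally; a student
# would try to identify the modules (up to linear transformation) from such data.
task = np.zeros(n_modules); task[[0, 2]] = 0.5
x = rng.standard_normal(d_in)
print(teacher(x, task))
```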

The growing complexity of real-world systems necessitates interdisciplinary solutions to confront myriad challenges in modeling, analysis, management, and control. To meet these demands, the parallel systems method, rooted in the Artificial systems, Computational experiments, and Parallel execution (ACP) approach, has been developed. The method cultivates a cycle, termed parallel intelligence, which iteratively creates data, acquires knowledge, and refines the actual system. Over the past two decades, the parallel systems method has continuously woven together advanced knowledge and technologies from various disciplines, offering versatile interdisciplinary solutions for complex systems across diverse fields. This review explores the origins and fundamental concepts of the parallel systems method, showcasing its accomplishments in the form of a diverse array of parallel technologies and applications, while also anticipating potential challenges. We posit that this method will considerably augment sustainable development while enhancing interdisciplinary communication and cooperation.

We address the problem of best uniform approximation of a continuous function on a convex domain. The approximation is by linear combinations of a finite system of functions (not necessarily Chebyshev) under arbitrary linear constraints. By modifying the concept of alternance and the Remez iterative procedure, we present a method that demonstrates its efficiency in numerical problems. A linear rate of convergence is proved under some favourable assumptions. Special attention is paid to systems of complex exponentials, Gaussian functions, and lacunary algebraic and trigonometric polynomials. Applications to signal processing, linear ODEs, switching dynamical systems, and Markov-Bernstein type inequalities are considered.
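As a hedged, brute-force illustration of the underlying problem (not the modified Remez procedure of the paper), best uniform approximation on a discretized domain can be posed as a linear program; a Remez-type exchange avoids this dense discretization by iterating on a small set of alternance points:

```python
# Minimal sketch: best uniform approximation on a grid as a linear program,
#   minimize t  subject to  |f(x_i) - sum_j c_j g_j(x_i)| <= t  for all grid points x_i.
import numpy as np
from scipy.optimize import linprog

f = np.exp                                                           # function to approximate
basis = [lambda x: np.ones_like(x), lambda x: x, lambda x: x**2]     # finite system of functions

x = np.linspace(-1.0, 1.0, 201)
G = np.column_stack([g(x) for g in basis])                           # (n_points, n_basis)
fx = f(x)

n = G.shape[1]
c_obj = np.zeros(n + 1); c_obj[-1] = 1.0                             # variables [c_1..c_n, t]
A_ub = np.block([[ G, -np.ones((len(x), 1))],                        #  G c - t <=  f
                 [-G, -np.ones((len(x), 1))]])                       # -G c - t <= -f
b_ub = np.concatenate([fx, -fx])
res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n + [(0, None)])
coeffs, t = res.x[:n], res.x[-1]
print("uniform error on the grid:", t)
```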

Timely implementation of interventions to slow cognitive decline among older adults requires accurate monitoring to detect changes in cognitive function. Data gathered using wearable devices that can continuously monitor factors known to be associated with cognition could be used to train machine learning models and develop wearable-based cognitive monitoring systems. Using data from over 2,400 older adults in the National Health and Nutrition Examination Survey (NHANES), we developed prediction models to differentiate older adults with normal cognition from those with poor cognition based on outcomes from three cognitive tests measuring different domains of cognitive function. During repeated cross-validation, CatBoost, XGBoost, and Random Forest models performed best when predicting cognition based on processing speed, working memory, and attention (median AUCs >0.82) compared to immediate and delayed recall (median AUCs >0.72) and categorical verbal fluency (median AUC >0.68). Activity and sleep parameters were also more strongly associated with processing speed, working memory, and attention than with other cognitive subdomains. Our work provides proof of concept that wearable-based cognitive monitoring systems may be a viable alternative to traditional methods for monitoring processing speed, working memory, and attention. We further identified novel metrics that could be targets in future causal studies seeking to better understand how sleep and activity parameters influence cognitive function among older adults.
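A schematic sketch of the evaluation protocol on synthetic data (the NHANES features, CatBoost/XGBoost models, and preprocessing are not reproduced here): repeated stratified cross-validation of a Random Forest classifier scored by AUC, analogous to predicting normal versus poor cognition from wearable-derived activity and sleep features.

```python
# Schematic only: repeated cross-validation with AUC scoring on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

X, y = make_classification(n_samples=2400, n_features=30, n_informative=10, random_state=0)

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)

aucs = cross_val_score(model, X, y, scoring="roc_auc", cv=cv, n_jobs=-1)
print("median AUC:", np.median(aucs))
```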

Deep generative models aim to learn the underlying distribution of data and generate new samples. Despite the diversity of generative models and their high-quality generation performance in practice, most of them lack rigorous theoretical convergence proofs. In this work, we aim to establish some convergence results for OT-Flow, one of the deep generative models. First, by reformulating the framework of the OT-Flow model, we establish the $\Gamma$-convergence of the OT-Flow formulation to the corresponding optimal transport (OT) problem as the regularization parameter $\alpha$ goes to infinity. Second, since the loss function is approximated by the Monte Carlo method in training, we also establish the convergence of the discrete loss function to the continuous one as the sample size $N$ goes to infinity. Meanwhile, the approximation capability of the neural network provides an upper bound for the discrete loss function at the minimizers. The proofs in both aspects provide convincing assurances for OT-Flow.
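Schematically, and with details that may differ from the paper's exact OT-Flow formulation, the role of $\alpha$ can be illustrated on a penalized dynamical OT functional: the terminal constraint is replaced by a penalty weighted by $\alpha$, and as $\alpha \to \infty$ the penalized functionals $\Gamma$-converge (under suitable coercivity assumptions) to the constrained OT problem.

```latex
% Schematic statement only; the actual OT-Flow loss contains additional terms.
% D denotes some divergence (e.g., KL) between the terminal and target densities.
\[
J_\alpha(\rho, v) \;=\; \int_0^1\!\!\int |v(x,t)|^2\,\rho(x,t)\,dx\,dt
\;+\; \alpha\, D\!\big(\rho(\cdot,1),\,\rho_{\mathrm{target}}\big),
\qquad \partial_t \rho + \nabla\!\cdot(\rho v) = 0,\ \ \rho(\cdot,0)=\rho_0,
\]
\[
J_\alpha \;\xrightarrow{\ \Gamma\ }\; J_\infty(\rho,v) =
\begin{cases}
\displaystyle\int_0^1\!\!\int |v|^2 \rho\,dx\,dt, & \rho(\cdot,1) = \rho_{\mathrm{target}},\\[4pt]
+\infty, & \text{otherwise},
\end{cases}
\qquad \alpha \to \infty.
\]
```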

In the search for highly efficient decoders for short LDPC codes that approach maximum likelihood performance, a relayed decoding strategy, which activates the ordered statistics decoding process upon failure of a neural min-sum decoder, is enhanced with three innovations. Firstly, soft information gathered at each step of the neural min-sum decoder is leveraged to forge a new reliability measure using a convolutional neural network. This measure aids in constructing the most reliable basis of ordered statistics decoding, strengthening the decoding process by excluding error-prone bits or concentrating them in a smaller area. Secondly, an adaptive ordered statistics decoding process is introduced, guided by a derived decoding path comprising prioritized blocks, each containing distinct test error patterns. The priority of these blocks is determined from statistical data gathered during the query phase. Furthermore, effective complexity management methods are devised by adjusting the decoding path's length or refining constraints on the involved blocks. Thirdly, a simple auxiliary criterion is introduced to reduce computational complexity by minimizing the number of candidate codewords before selecting the optimal estimate. Extensive experimental results and complexity analysis strongly support the proposed framework, demonstrating its advantages in terms of high throughput, low complexity, and independence from noise variance, in addition to superior decoding performance.
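For context, a minimal sketch of one plain (non-neural) min-sum check-node update, the core operation that the neural min-sum decoder augments with learned scaling and offset parameters; this is generic background, not the paper's decoder.

```python
# One min-sum check-node update: the outgoing message on each edge is the product
# of the signs of the other incoming LLRs times the minimum of their magnitudes.
import numpy as np

def check_node_update(incoming_llrs):
    L = np.asarray(incoming_llrs, dtype=float)
    signs, mags = np.sign(L), np.abs(L)
    out = np.empty_like(L)
    for i in range(len(L)):
        others = np.delete(np.arange(len(L)), i)
        out[i] = np.prod(signs[others]) * np.min(mags[others])
    return out

print(check_node_update([1.2, -0.4, 3.0, -2.5]))
```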

We propose a method for obtaining parsimonious decompositions of networks into higher-order interactions, which can take the form of arbitrary motifs. The method is based on a class of analytically solvable generative models in which vertices are connected via explicit copies of motifs, which, in combination with non-parametric priors, allows us to infer higher-order interactions from dyadic graph data without any prior knowledge of the types or frequencies of such interactions. Crucially, we also consider 'degree-corrected' models that correctly reflect the degree distribution of the network and consequently prove to be a better fit for many real-world networks than non-degree-corrected models. We test the presented approach on simulated data, for which we recover the set of underlying higher-order interactions with a high degree of accuracy. For empirical networks, the method identifies concise sets of atomic subgraphs from within thousands of candidates that cover a large fraction of edges and include higher-order interactions of known structural and functional significance. The method not only produces an explicit higher-order representation of the network but also a fit of the network to analytically tractable models, opening new avenues for the systematic study of higher-order network structures.
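As a naive illustration of what "covering a large fraction of edges with atomic subgraphs" means (not the paper's Bayesian generative inference), one can measure the edge coverage of a single candidate motif, the triangle, with networkx:

```python
# Naive illustration: fraction of a network's edges covered by triangle copies.
import networkx as nx
from itertools import combinations

G = nx.karate_club_graph()

covered = set()
for clique in nx.enumerate_all_cliques(G):
    if len(clique) == 3:                                  # each 3-clique is a triangle copy
        covered.update(frozenset(e) for e in combinations(clique, 2))

edges = {frozenset(e) for e in G.edges()}
print("fraction of edges covered by triangles:", len(covered & edges) / len(edges))
```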

The univariate dimension reduction (UDR) method is a way to estimate the statistical moments of the output that is effective for a large class of uncertainty quantification (UQ) problems. UDR's fundamental strategy is to approximate the original function using univariate functions so that the UQ cost scales only linearly with the dimension of the problem. Nonetheless, UDR's effectiveness can diminish when uncertain inputs have high variance, particularly when assessing the output's second- and higher-order statistical moments. This paper proposes a new method, gradient-enhanced univariate dimension reduction (GUDR), that enhances the accuracy of UDR by incorporating univariate gradient function terms into the UDR approximation function. Theoretical results indicate that the GUDR approximation is expected to be one order more accurate than UDR in approximating the original function, and to generate more accurate results when computing the output's second- and higher-order statistical moments. Our proposed method uses a computational graph transformation strategy to efficiently evaluate the GUDR approximation function on tensor-grid quadrature inputs, and uses the tensor-grid input-output data to compute the statistical moments of the output. With an efficient automatic differentiation method to compute the gradients, our method preserves UDR's linear scaling of computation time with problem dimension. Numerical results show that GUDR is more accurate than UDR in estimating the standard deviation of the output and performs comparably to the method of moments using a third-order Taylor series expansion.
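For reference, the standard UDR approximation, which GUDR augments with univariate gradient terms, replaces a $d$-variate function by a sum of univariate slices through a reference point, $f(x) \approx \sum_i f(\mu_1,\dots,x_i,\dots,\mu_d) - (d-1) f(\mu)$. A minimal mean-estimation sketch for independent standard-normal inputs (not the paper's GUDR implementation or graph transformation):

```python
# Minimal UDR sketch: estimate E[f(X)] for independent standard-normal inputs
# using univariate Gauss-Hermite quadrature along each coordinate.
import numpy as np
from numpy.polynomial.hermite_e import hermegauss   # probabilists' Hermite nodes/weights

def udr_mean(f, mu, n_nodes=7):
    d = len(mu)
    nodes, weights = hermegauss(n_nodes)
    weights = weights / weights.sum()               # normalize to a probability measure
    mean = (1 - d) * f(mu)
    for i in range(d):
        for xk, wk in zip(nodes, weights):
            x = mu.copy()
            x[i] = mu[i] + xk                        # vary only coordinate i
            mean += wk * f(x)
    return mean

f = lambda x: np.exp(0.1 * x.sum()) + x[0] * x[1]    # toy test function
mu = np.zeros(4)
print("UDR mean estimate:", udr_mean(f, mu))
```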

Decision making and learning in the presence of uncertainty have attracted significant attention in view of the increasing need to achieve robust and reliable operation. When the uncertainty stems from adversarial attacks, this need becomes even more prominent. In this paper we focus on linear and nonlinear classification problems and propose a novel adversarial training method for robust classifiers, inspired by Support Vector Machine (SVM) margins. We view robustness through a data-driven lens and derive finite-sample complexity bounds for both linear and nonlinear classifiers in binary and multi-class scenarios. Notably, our bounds match those of natural classifiers. Our algorithm minimizes a worst-case surrogate loss using Linear Programming (LP) and Second Order Cone Programming (SOCP) for linear and nonlinear models, respectively. Numerical experiments on the benchmark MNIST and CIFAR10 datasets show that our approach performs comparably to state-of-the-art methods, without needing adversarial examples during training. Our work offers a comprehensive framework for enhancing the robustness of binary linear and nonlinear classifiers, embedding robustness in learning in the presence of adversaries.
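As a hedged illustration of the general idea (not necessarily the paper's exact formulation): for a linear classifier under $\ell_\infty$-bounded perturbations of radius $\varepsilon$, the worst-case hinge loss has the closed form $\max(0,\, 1 - y(w^\top x + b) + \varepsilon\|w\|_1)$, which is LP-representable and can be minimized without generating adversarial examples; a cvxpy sketch on synthetic data:

```python
# Hedged sketch: robust hinge-loss minimization for a linear classifier under
# l_inf-bounded perturbations, written as an LP-representable convex problem.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
m, d, eps = 200, 5, 0.1
X = rng.standard_normal((m, d))
w_true = rng.standard_normal(d)
y = np.sign(X @ w_true + 0.1 * rng.standard_normal(m))

w = cp.Variable(d)
b = cp.Variable()
# Worst case over ||delta||_inf <= eps of the hinge loss:
#   max(0, 1 - y_i (w.x_i + b) + eps * ||w||_1)
margins = cp.multiply(y, X @ w + b)
robust_hinge = cp.pos(1 - margins + eps * cp.norm1(w))
problem = cp.Problem(cp.Minimize(cp.sum(robust_hinge) / m))
problem.solve()
print("robust training loss:", problem.value)
```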
