
Neural integral equations are deep learning models based on the theory of integral equations, where the model consists of an integral operator and the corresponding equation (of the second kind), which is learned through an optimization procedure. This approach makes it possible to leverage the nonlocal properties of integral operators in machine learning, but it is computationally expensive. In this article, we introduce a framework for neural integral equations based on spectral methods that allows us to learn an operator in the spectral domain, yielding both lower computational cost and high interpolation accuracy. We study the properties of our methods and establish theoretical guarantees regarding the approximation capabilities of the model and the convergence of the numerical methods to solutions. We provide numerical experiments to demonstrate the practical effectiveness of the resulting model.
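The computational advantage of working in the spectral domain can be illustrated with a minimal pure-Python sketch. This is not the paper's learned model: the learned operator is replaced by a fixed, illustrative convolution kernel cos(x - t) on a periodic grid, where the second-kind equation u = f + Ku becomes diagonal in the Fourier domain and the solve reduces to a pointwise division.

```python
import cmath
import math

def dft(a):
    """Naive O(N^2) discrete Fourier transform (enough for a small demo)."""
    N = len(a)
    return [sum(a[j] * cmath.exp(-2j * math.pi * j * p / N) for j in range(N))
            for p in range(N)]

def idft(A):
    N = len(A)
    return [sum(A[p] * cmath.exp(2j * math.pi * j * p / N) for p in range(N)) / N
            for j in range(N)]

def solve_second_kind(f_vals, k_vals):
    """Solve u = f + K u for a periodic convolution kernel K(x - t),
    diagonally in the Fourier domain (assumes 1 - h*k_hat[p] != 0)."""
    N = len(f_vals)
    h = 2 * math.pi / N                      # trapezoidal weight on [0, 2*pi)
    f_hat = dft(f_vals)
    k_hat = dft(k_vals)
    u_hat = [fp / (1 - h * kp) for fp, kp in zip(f_hat, k_hat)]
    return [z.real for z in idft(u_hat)]

N = 32
xs = [2 * math.pi * j / N for j in range(N)]
f = [math.sin(x) + 0.5 * math.cos(2 * x) for x in xs]
k = [math.cos(x) for x in xs]                # kernel K(x - t) = cos(x - t)
u = solve_second_kind(f, k)

# residual of the discretized equation u = f + (2*pi/N) * sum_m K(x_j - t_m) u_m
h = 2 * math.pi / N
Ku = [h * sum(k[(j - m) % N] * u[m] for m in range(N)) for j in range(N)]
residual = max(abs(u[j] - f[j] - Ku[j]) for j in range(N))
```

The whole solve costs one forward and one inverse transform plus an elementwise division, in contrast to assembling and inverting a dense N-by-N quadrature matrix.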



We consider a boundary value problem (BVP) modelling one-dimensional heat conduction with radiation, which is derived from the Stefan-Boltzmann law. The solution depends strongly on the problem parameters, which makes it difficult to estimate. We use an analytical approach to determine upper and lower bounds on the exact solution of the BVP, which allows us to estimate the latter. Finally, we support our theoretical arguments with numerical data obtained by implementing them in the Maple computer algebra system.
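How upper and lower bounds can bracket the solution of such a nonlinear BVP can be sketched with a shifted monotone (sub/supersolution) iteration on a finite-difference grid. The model below, u'' = sigma*u^4 on [0,1] with u(0) = u(1) = 1, is an illustrative dimensionless stand-in for the radiative problem, not the paper's exact formulation; starting from the subsolution 0 and the supersolution 1 yields increasing lower and decreasing upper iterates that bracket the discrete solution.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-, main, super-diagonals a, b, c."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def monotone_step(u, sigma, shift, h):
    """One step of the shifted iteration u'' - shift*u = sigma*u_k^4 - shift*u_k,
    with boundary values u(0) = u(1) = 1 moved to the right-hand side."""
    n = len(u)
    a = [1.0 / h ** 2] * n
    b = [-2.0 / h ** 2 - shift] * n
    c = [1.0 / h ** 2] * n
    d = [sigma * u[i] ** 4 - shift * u[i] for i in range(n)]
    d[0] -= 1.0 / h ** 2
    d[-1] -= 1.0 / h ** 2
    return thomas(a, b, c, d)

n, sigma = 49, 1.0
h = 1.0 / (n + 1)
shift = 4.0 * sigma          # >= 4*sigma*M^3 with M = 1 keeps the map monotone
lower = [0.0] * n            # subsolution:   0'' >= sigma*0^4
upper = [1.0] * n            # supersolution: 1'' <= sigma*1^4
for _ in range(60):
    lower = monotone_step(lower, sigma, shift, h)
    upper = monotone_step(upper, sigma, shift, h)

gap = max(hi - lo for lo, hi in zip(lower, upper))
```

The shift makes the right-hand side nonincreasing in u on [0,1], which is what forces the two sequences to stay ordered and converge toward each other.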

Bayesian model updating facilitates the calibration of analytical models based on observations and the quantification of uncertainties in model parameters such as stiffness and mass. This process significantly enhances damage assessment and response predictions in existing civil structures. Predominantly, current methods employ modal properties identified from acceleration measurements to evaluate the likelihood of the model parameters. This modal analysis-based likelihood generally involves a prior assumption regarding the mass parameters. In civil structures, accurately determining mass parameters proves challenging owing to the time-varying nature of imposed loads. The resulting inaccuracy potentially introduces biases while estimating the stiffness parameters, which affects the assessment of structural response and associated damage. Addressing this issue, the present study introduces a stress-resultant-based approach for Bayesian model updating independent of mass assumptions. This approach utilizes system identification on strain and acceleration measurements to establish the relationship between nodal displacements and elemental stress resultants. Employing static analysis to depict this relationship aids in assessing the likelihood of stiffness parameters. Integrating this static-analysis-based likelihood with a modal-analysis-based likelihood facilitates the simultaneous estimation of mass and stiffness parameters. The proposed approach was validated using numerical examples on a planar frame and experimental studies on a full-scale moment-resisting steel frame structure.
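The mass/stiffness identifiability issue can be made concrete with a toy one-degree-of-freedom system (all numbers here are invented for illustration, and a grid search stands in for Bayesian sampling): a modal likelihood only constrains the ratio k/m, so the stiffness estimate is biased by any wrong mass assumption, while adding a static-analysis likelihood (displacement under a known load, as the stress-resultant approach enables) pins down k and m jointly.

```python
import math

# toy 1-DOF system: natural frequency w = sqrt(k/m), static displacement u = F/k
k_true, m_true, F = 2.0, 1.0, 1.0
w_obs = math.sqrt(k_true / m_true)   # from acceleration (modal) data
u_obs = F / k_true                   # from strain-derived stress resultants
sigma = 0.05                         # illustrative observation-noise scale

ks = [1.0 + 0.01 * i for i in range(201)]   # stiffness grid
ms = [0.5 + 0.01 * j for j in range(101)]   # mass grid

def log_post(k, m, use_static):
    """Unnormalized log-posterior with flat priors on the grid."""
    lp = -((math.sqrt(k / m) - w_obs) ** 2) / (2 * sigma ** 2)  # modal term
    if use_static:
        lp += -((F / k - u_obs) ** 2) / (2 * sigma ** 2)        # static term
    return lp

best = max(((k, m) for k in ks for m in ms),
           key=lambda km: log_post(km[0], km[1], True))
best_modal = max(((k, m) for k in ks for m in ms),
                 key=lambda km: log_post(km[0], km[1], False))
```

With the modal term alone, every grid point on the ridge k/m = 2 is equally likely, so the maximizer only recovers the ratio; the combined likelihood has a unique maximum at the true (k, m).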

The convergence of expectation-maximization (EM)-based algorithms typically requires continuity of the likelihood function with respect to all the unknown parameters (optimization variables). The requirement is not met when parameters comprise both discrete and continuous variables, making the convergence analysis nontrivial. This paper introduces a set of conditions that ensure the convergence of a specific class of EM algorithms that estimate a mixture of discrete and continuous parameters. Our results offer a new analysis technique for iterative algorithms that solve mixed-integer non-linear optimization problems. As a concrete example, we prove the convergence of the EM-based sparse Bayesian learning algorithm in [1] that estimates the state of a linear dynamical system with jointly sparse inputs and bursty missing observations. Our results establish that the algorithm in [1] converges to the set of stationary points of the maximum likelihood cost with respect to the continuous optimization variables.
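The flavor of a mixed discrete/continuous estimation problem can be shown with a toy model (this is not the sparse Bayesian learning algorithm of [1]): observations y_t = a*x_{t-d} + noise with a continuous amplitude a and a discrete delay d. Because the continuous parameter has a closed-form maximizer for each fixed discrete value, profiling it out reduces maximum likelihood to a finite search, which is the kind of structure a convergence analysis for mixed parameters must handle.

```python
import random

random.seed(0)
T, true_d, true_a = 200, 2, 1.5
x = [random.gauss(0, 1) for _ in range(T + 5)]
# mixed parameter pair: continuous amplitude, discrete delay
y = [true_a * x[t - true_d] + random.gauss(0, 0.1) for t in range(5, T + 5)]

def best_amplitude(d):
    """Closed-form least-squares amplitude for a fixed candidate delay."""
    num = sum(y[i] * x[5 + i - d] for i in range(T))
    den = sum(x[5 + i - d] ** 2 for i in range(T))
    return num / den

def sse(d, a):
    """Sum of squared errors; the (negative) log-likelihood up to scaling."""
    return sum((y[i] - a * x[5 + i - d]) ** 2 for i in range(T))

# discrete search with the continuous parameter profiled out
d_hat = min(range(5), key=lambda d: sse(d, best_amplitude(d)))
a_hat = best_amplitude(d_hat)
```

Each candidate delay is scored at its own optimal amplitude, so the selected pair is a joint maximizer over the mixed parameter space by construction.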

We present a theoretical analysis of the performance of transformers with softmax attention on in-context linear regression tasks. While the existing literature predominantly focuses on the convergence of transformers with single-/multi-head attention, our research centers on comparing their performance. We conduct an exact theoretical analysis to demonstrate that multi-head attention with a substantial embedding dimension performs better than single-head attention. As the number of in-context examples D increases, the prediction loss for both single- and multi-head attention is O(1/D), with multi-head attention attaining a smaller multiplicative constant. In addition to the simplest data distribution setting, we consider more scenarios, e.g., noisy labels, local examples, correlated features, and prior knowledge. We observe that, in general, multi-head attention is preferred over single-head attention. Our results verify the effectiveness of the design of multi-head attention in the transformer architecture.
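The prediction mechanism under study can be sketched with a deliberately simplified softmax-attention predictor (a hypothetical construction, not the paper's exact architecture): keys are the in-context inputs, values are their labels, and "heads" attend to disjoint slices of the feature vector, with their predictions averaged.

```python
import math
import random

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def attention_predict(xs, ys, xq, num_heads):
    """Toy softmax-attention prediction for the query xq: each head scores
    the examples on its own feature slice and averages the labels."""
    d = len(xq)
    assert d % num_heads == 0
    hd = d // num_heads
    preds = []
    for h in range(num_heads):
        lo, hi = h * hd, (h + 1) * hd
        scores = [sum(x[i] * xq[i] for i in range(lo, hi)) for x in xs]
        w = softmax(scores)
        preds.append(sum(wi * yi for wi, yi in zip(w, ys)))
    return sum(preds) / num_heads

random.seed(1)
d, D = 4, 64
beta = [0.5, -1.0, 2.0, 0.3]   # task vector defining this regression prompt
xs = [[random.gauss(0, 1) for _ in range(d)] for _ in range(D)]
ys = [sum(b * xi for b, xi in zip(beta, x)) for x in xs]
xq = [random.gauss(0, 1) for _ in range(d)]

y_single = attention_predict(xs, ys, xq, num_heads=1)
y_multi = attention_predict(xs, ys, xq, num_heads=4)
```

Each head's output is a convex combination of the in-context labels, so both predictions necessarily lie between the smallest and largest label in the prompt; the single- versus multi-head comparison in the paper concerns how fast such predictors approach the true regression value as D grows.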

We revisit the contribution made by Harrow, Hassidim, and Lloyd to quantum matrix equation solving, with emphasis on the algorithm description and the details of the error analysis derivation. Moreover, we study the behavior of the amplitudes of the phase register upon completion of the quantum phase estimation step. This study aids in understanding the choice of the phase register size and its interrelation with the duration of the Hamiltonian simulation in the algorithm's setup phase.
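The phase-register amplitudes in question follow a standard closed form: for an eigenstate with eigenphase phi and an n-qubit register, the amplitude of outcome k is a geometric sum over the 2^n control states. A short sketch computes the resulting measurement distribution and shows it concentrates on the register value closest to 2^n * phi, which is exactly what governs the register-size trade-off.

```python
import cmath
import math

def qpe_probabilities(phi, n):
    """Probability of reading k from an n-qubit phase register after
    textbook quantum phase estimation on an eigenstate with eigenphase phi."""
    N = 2 ** n
    probs = []
    for k in range(N):
        # uniform superposition picks up phase 2*pi*j*phi; inverse QFT
        # interferes it against the reference frequency k/N
        amp = sum(cmath.exp(2j * math.pi * j * (phi - k / N)) for j in range(N)) / N
        probs.append(abs(amp) ** 2)
    return probs

n, phi = 5, 0.30
probs = qpe_probabilities(phi, n)
peak = max(range(2 ** n), key=probs.__getitem__)   # most likely register value
```

When phi is exactly representable in n bits the distribution is a point mass; otherwise the mass spreads over neighboring values, which is the leakage the register-size analysis must control.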

In the field of causal modeling, potential outcomes (PO) and structural causal models (SCMs) stand as the predominant frameworks. However, these frameworks face notable challenges in practically modeling counterfactuals, formalized as parameters of the joint distribution of potential outcomes. Counterfactual reasoning holds paramount importance in contemporary decision-making processes, especially in scenarios that demand personalized incentives based on the joint values of $(Y(0), Y(1))$. This paper begins with an investigation of the PO and SCM frameworks for modeling counterfactuals. Through the analysis, we identify an inherent model capacity limitation, termed the ``degenerative counterfactual problem'', emerging from the consistency rule that is the cornerstone of both frameworks. To address this limitation, we introduce a novel \textit{distribution-consistency} assumption, and in alignment with it, we propose the Distribution-consistency Structural Causal Models (DiscoSCMs), offering enhanced capabilities to model counterfactuals. To concretely reveal the enhanced model capacity, we introduce a new identifiable causal parameter, \textit{the probability of consistency}, which holds practical significance within DiscoSCM alone, showcased with a personalized incentive example. Furthermore, we provide a comprehensive set of theoretical results about the ``Ladder of Causation'' within the DiscoSCM framework. We hope this work opens new avenues for future research on counterfactual modeling, ultimately enhancing our understanding of causality and its real-world applications.
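What kind of quantity a parameter of the joint distribution of potential outcomes is can be shown with a tiny numeric example (the probabilities below are invented, and this illustrates only the parameter's definition, not its identification within DiscoSCM): for a binary outcome, the probability that the two potential outcomes agree is a functional of the joint table that marginal distributions alone do not determine.

```python
# hypothetical joint distribution of (Y(0), Y(1)) for a binary outcome
joint = {(0, 0): 0.30, (0, 1): 0.25, (1, 0): 0.05, (1, 1): 0.40}

# probability that the two potential outcomes coincide: P(Y(0) == Y(1))
p_consistency = joint[(0, 0)] + joint[(1, 1)]

# the average treatment effect, by contrast, needs only the marginals
p1 = joint[(0, 1)] + joint[(1, 1)]   # P(Y(1) = 1)
p0 = joint[(1, 0)] + joint[(1, 1)]   # P(Y(0) = 1)
ate = p1 - p0
```

Different joint tables with the same marginals (hence the same ATE) give different values of p_consistency, which is why personalized-incentive decisions based on (Y(0), Y(1)) jointly require more than marginal causal effects.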

We present efficient methods for calculating linear recurrences of hypergeometric double sums and, more generally, of multiple sums. In particular, we supplement this approach with the algorithmic theory of contiguous relations, which guarantees the applicability of our method for many input sums. In addition, we elaborate new techniques to optimize the underlying key task of our method to compute rational solutions of parameterized linear recurrences.
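The kind of output such algorithms certify can be checked numerically on a classical example (the recurrence below is well known and stated by hand; the code only verifies it, it does not run the paper's method): the hypergeometric sum S(n) = sum_k C(n,k)^2 equals the central binomial coefficient and satisfies a first-order linear recurrence with polynomial coefficients.

```python
from math import comb

def S(n):
    """Hypergeometric sum: sum over k of binomial(n, k) squared."""
    return sum(comb(n, k) ** 2 for k in range(n + 1))

# verify the linear recurrence (n + 1) * S(n + 1) == (4n + 2) * S(n)
# in exact integer arithmetic over a range of n
ok = all((n + 1) * S(n + 1) == (4 * n + 2) * S(n) for n in range(30))
```

Recurrence-finding algorithms return such polynomial-coefficient relations symbolically; exact integer checks of this kind are a cheap sanity test of a candidate recurrence before proving it.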

Transit functions serve not only as abstractions of betweenness and convexity but are also closely connected with clustering systems. Here, we investigate the canonical transit functions of binary clustering systems inspired by pyramids, i.e., interval hypergraphs. We provide alternative characterizations of weak hierarchies, and describe union-closed binary clustering systems as a subclass of pyramids and weakly pyramidal clustering systems as an interesting generalization.
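The defining property of a pyramid, i.e., an interval hypergraph, is checkable directly on small examples: some linear order of the ground set must make every cluster a consecutive block. A brute-force sketch over all permutations (feasible only for tiny universes, and purely illustrative of the definition):

```python
from itertools import permutations

def is_interval_hypergraph(universe, clusters):
    """Brute-force test: does some linear order of the universe make
    every cluster an interval (a consecutive block of positions)?"""
    for order in permutations(universe):
        pos = {v: i for i, v in enumerate(order)}
        if all(max(pos[v] for v in c) - min(pos[v] for v in c) == len(c) - 1
               for c in clusters):
            return True
    return False

X = [1, 2, 3, 4]
# intervals of the natural order 1 < 2 < 3 < 4: a pyramidal system
pyramid = [{1}, {2}, {3}, {4}, {1, 2}, {2, 3, 4}, {1, 2, 3, 4}]
# forces a cycle of required adjacencies, so no linear order works
not_pyramid = [{1, 2}, {3, 4}, {1, 3}, {2, 4}]
```

In the second system each two-element cluster forces its members to be adjacent, and the four forced adjacencies form a cycle on four elements, which no linear order can realize; efficient recognition of this property is the consecutive-ones problem.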

Calibrating simulation models that take large quantities of multi-dimensional data as input is a hard simulation optimization problem. Existing adaptive sampling strategies offer a methodological solution, but due to extreme noise levels and heteroskedasticity of system responses, they may not sufficiently reduce the computational cost of estimation or advance the solution algorithm within a limited budget. We propose integrating stratification with adaptive sampling to improve optimization efficiency. Stratification can exploit local dependence in the simulation inputs and outputs, yet the state of the art does not provide a full capability to adaptively stratify the data as different solution alternatives are evaluated. We devise two procedures for data-driven calibration problems that involve a large dataset with multiple covariates, calibrating models within a fixed overall simulation budget. The first approach dynamically stratifies the input data using binary trees, while the second uses closed-form solutions based on linearity assumptions between the objective function and concomitant variables. We find that dynamic adjustment of the stratification structure accelerates optimization and reduces run-to-run variability in the generated solutions. Our case study on calibrating a wind power simulation model, widely used in the wind industry, shows that the proposed stratified adaptive sampling yields better-calibrated parameters under a limited budget.
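Why stratification pays off under heteroskedasticity can be quantified with a deterministic variance comparison on a toy response model (the model below is invented for illustration; stratum moments are computed on a fine grid rather than simulated, so the comparison is exact up to discretization): crude Monte Carlo versus proportional and Neyman (optimal) allocation across four equal-width strata of the input.

```python
import math

# toy heteroskedastic response: Y = sin(2*pi*X) + N(0, (0.1 + 2X)^2), X ~ U[0,1)
H, grid, n = 4, 4000, 400          # H equal strata, budget of n samples
mu_h, var_h = [], []
for h in range(H):
    xs = [(h + (i + 0.5) / grid) / H for i in range(grid)]
    means = [math.sin(2 * math.pi * x) for x in xs]
    noise = [(0.1 + 2.0 * x) ** 2 for x in xs]
    m = sum(means) / grid
    # within-stratum variance: noise variance plus variance of the mean surface
    v = sum(noise) / grid + sum((mm - m) ** 2 for mm in means) / grid
    mu_h.append(m)
    var_h.append(v)

mu = sum(mu_h) / H
var_y = sum(var_h) / H + sum((m - mu) ** 2 for m in mu_h) / H  # total variance

v_crude = var_y / n                            # plain Monte Carlo
v_prop = sum(var_h) / H / n                    # proportional allocation
v_neyman = (sum(math.sqrt(v) for v in var_h) / H) ** 2 / n   # Neyman allocation
```

Proportional allocation removes the between-strata component of the variance, and Neyman allocation further tilts the budget toward the noisiest strata; adaptive stratification aims to realize these gains when the strata themselves must be learned from data.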

The family of log-concave density functions contains various kinds of common probability distributions. Due to the shape restriction, it is possible to find a nonparametric estimate of the density, for example, the nonparametric maximum likelihood estimate (NPMLE). However, the associated uncertainty quantification for the NPMLE is less well developed. The current techniques for uncertainty quantification are Bayesian, using a Dirichlet process prior combined with Markov chain Monte Carlo (MCMC) sampling from the posterior. In this paper, we start with the NPMLE and use a version of the martingale posterior distribution to quantify uncertainty about the NPMLE. The algorithm can be implemented in parallel and hence is fast. We prove the convergence of the algorithm by constructing suitable submartingales. We also illustrate the results with different models, settings, and some real data, and compare our method with existing approaches in the literature.
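The martingale posterior mechanism of predictive resampling can be sketched in a few lines for a much simpler functional than the log-concave NPMLE (this is a generic Polya-urn illustration, not the paper's algorithm): forward-sample future observations from the current empirical predictive, append them, and record the functional of the completed sequence; repeating this gives a sample of posterior draws, and the independent forward passes are what makes the scheme embarrassingly parallel.

```python
import random
import statistics

def martingale_posterior_mean(data, horizon, rng):
    """One forward pass of predictive resampling with the empirical
    (Polya-urn) predictive; returns the mean of the completed sequence."""
    seq = list(data)
    for _ in range(horizon):
        seq.append(rng.choice(seq))   # draw from current predictive, update
    return sum(seq) / len(seq)

rng = random.Random(42)
data = [rng.gauss(0.0, 1.0) for _ in range(50)]
draws = [martingale_posterior_mean(data, horizon=500, rng=rng)
         for _ in range(200)]
center = statistics.fmean(draws)   # concentrates near the sample mean
spread = statistics.stdev(draws)   # quantifies uncertainty about the mean
```

The running functional along each forward pass is a martingale, so the draws are centered at the observed-data value of the functional; their spread is the uncertainty quantification, obtained without MCMC.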
