
We propose the use of machine learning techniques to find optimal quadrature rules for the construction of stiffness and mass matrices in isogeometric analysis (IGA). We initially consider 1D spline spaces of arbitrary degree spanned over uniform and non-uniform knot sequences; the generated optimal rules are then used for integration over higher-dimensional spaces in a tensor-product sense. The quadrature rule search is posed as an optimization problem and solved by a machine learning strategy based on gradient descent. However, since the optimization space is highly non-convex, the success of the search depends strongly on the number of quadrature points and on the parameter initialization. We therefore use a dynamic programming strategy that initializes the parameters from the optimal solution over the spline space with a lower number of knots. With this method, we find optimal quadrature rules for spline spaces arising from IGA discretizations with up to 50 uniform elements and polynomial degrees up to 8, demonstrating the generality of the approach in this scenario. For non-uniform partitions, the method also finds an optimal rule in a reasonable number of test cases. We also assess the generated optimal rules in two practical case studies, namely the eigenvalue problem of the Laplace operator and the eigenfrequency analysis of freeform curved beams, where the latter demonstrates the applicability of the method to curved geometries. In particular, the proposed method yields savings with respect to traditional Gaussian integration of up to 44% in 1D, 68% in 2D, and 82% in 3D spaces.
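
To make the optimization concrete, here is a minimal sketch of the idea of searching for a quadrature rule by gradient descent. It is not the paper's implementation: instead of spline test functions, it uses monomials on [0, 1] as a simplified stand-in, for which the m-point Gauss-Legendre rule is the known optimum; the initialization, learning rate, and iteration count are all assumptions.

```python
# Minimal sketch (not the paper's implementation): search for a quadrature
# rule by gradient descent. The rule must integrate x^j, j = 0..2m-1,
# exactly on [0, 1]; the m-point Gauss-Legendre rule is the known optimum.
import numpy as np

m = 3                          # number of quadrature points (assumed)
degrees = np.arange(2 * m)     # test-function degrees the rule must match
exact = 1.0 / (degrees + 1)    # exact integrals: int_0^1 x^j dx = 1/(j+1)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.1, 0.9, m))   # point initialization (assumed)
w = np.full(m, 1.0 / m)                 # weight initialization (assumed)

lr = 0.02
for step in range(20000):
    V = x[None, :] ** degrees[:, None]           # V[j, i] = x_i^j
    r = V @ w - exact                            # residuals, one per degree
    grad_w = V.T @ r                             # d(0.5*||r||^2)/dw_i
    dV = degrees[:, None] * x[None, :] ** np.clip(degrees[:, None] - 1, 0, None)
    grad_x = (dV * w[None, :]).T @ r             # d(0.5*||r||^2)/dx_i
    w -= lr * grad_w
    x -= lr * grad_x

print("points :", x)   # with a good run, close to Gauss-Legendre nodes
print("weights:", w)   # ... and the corresponding weights
```

In the paper's setting the exactness conditions would instead range over the spline basis, and the dynamic-programming initialization from a coarser knot sequence would replace the random start used here.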

Related content

In this paper, we propose a new model for forecasting time series data distributed on a matrix-shaped spatial grid, using historical spatio-temporal data together with auxiliary vector-valued time series data. We model the matrix time series as an autoregressive process, in which a future matrix is jointly predicted by the historical values of the matrix time series and an auxiliary vector time series. The matrix predictors are associated with row/column-specific autoregressive matrix coefficients that map the predictors to the future matrices via a bilinear transformation. The vector predictors are mapped to matrices by taking a mode product with a 3D coefficient tensor. Given the high dimensionality of the coefficient tensor and the underlying spatial structure of the data, we propose to estimate the tensor by estimating one functional coefficient for each covariate, with a 2D input domain, from a reproducing kernel Hilbert space. We jointly estimate the autoregressive matrix coefficients and the functional coefficients under a penalized maximum likelihood estimation framework, coupled with an alternating minimization algorithm. Large-sample asymptotics of the estimators are established, and the performance of the model is validated with extensive simulation studies and a real data application to forecasting global total electron content distributions.
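
As a minimal illustration of the model structure (assumed notation, not the authors' code), the sketch below forms a one-step prediction from the bilinear autoregressive term plus the mode-3 product of a 3D coefficient tensor with the auxiliary vector; all dimensions and coefficient values are invented for the example.

```python
# Minimal sketch (assumed notation): one-step prediction in a bilinear
# matrix autoregression with an auxiliary vector covariate,
#   X_{t+1} ~ A @ X_t @ B.T + C x_3 z_t,
# where C is a (rows x cols x q) tensor and x_3 contracts z_t
# against the third mode.
import numpy as np

rng = np.random.default_rng(1)
p, q, d = 8, 6, 3                          # grid rows, grid cols, aux length
A = rng.normal(scale=0.3, size=(p, p))     # row-specific AR coefficients
B = rng.normal(scale=0.3, size=(q, q))     # column-specific AR coefficients
C = rng.normal(scale=0.1, size=(p, q, d))  # 3D coefficient tensor

X_t = rng.normal(size=(p, q))   # current matrix observation
z_t = rng.normal(size=d)        # current auxiliary vector

# Bilinear autoregressive term + mode-3 product of the tensor with z_t.
X_next = A @ X_t @ B.T + np.einsum("ijk,k->ij", C, z_t)
print(X_next.shape)   # (8, 6)
```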

Recent studies have discovered that Chain-of-Thought prompting (CoT) can dramatically improve the performance of Large Language Models (LLMs), particularly on complex tasks involving mathematics or reasoning. Despite this enormous empirical success, the underlying mechanisms behind CoT and how it unlocks the potential of LLMs remain elusive. In this paper, we take a first step towards theoretically answering these questions. Specifically, we examine the capacity of LLMs with CoT to solve fundamental mathematical and decision-making problems. We start by giving an impossibility result showing that no bounded-depth Transformer can directly output correct answers for basic arithmetic/equation tasks unless the model size grows super-polynomially with respect to the input length. In contrast, we then prove by construction that autoregressive Transformers of constant size suffice to solve both tasks by generating CoT derivations in a commonly used mathematical language format. Moreover, we show that LLMs with CoT can solve a general class of decision-making problems known as Dynamic Programming, justifying the power of CoT in tackling complex real-world tasks. Finally, extensive experiments on four tasks show that, while Transformers consistently fail to predict the answers directly, they can reliably learn to generate correct solutions step-by-step given sufficient CoT demonstrations.
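
To illustrate the step-by-step format the theory concerns (a hypothetical example, not necessarily one of the paper's benchmark tasks), the sketch below solves a small Dynamic Programming problem, longest increasing subsequence, while printing each subproblem solution; these intermediate lines play the role of the CoT tokens a Transformer would generate before emitting the final answer.

```python
# Illustrative sketch only: the flavor of a CoT derivation for a Dynamic
# Programming task. Each printed line is one subproblem solution, the kind
# of intermediate step a model generates token-by-token under CoT rather
# than producing the final answer in a single forward pass.
def longest_increasing_subsequence(a):
    # dp[i] = length of the longest increasing subsequence ending at a[i]
    dp = []
    for i, x in enumerate(a):
        best = 1 + max((dp[j] for j in range(i) if a[j] < x), default=0)
        dp.append(best)
        print(f"step {i}: dp[{i}] = {best}  (subsequence ending in {x})")
    return max(dp)

print("answer:", longest_increasing_subsequence([3, 1, 4, 1, 5, 9, 2, 6]))
```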

Contextual Bayesian Optimization (CBO) is a powerful framework for optimizing black-box, expensive-to-evaluate functions with respect to design variables, while efficiently integrating relevant contextual information about the environment, such as experimental conditions. In many practical scenarios, however, the relevance of the contextual variables is not known beforehand. Moreover, the contextual variables can sometimes be optimized themselves, a setting that current CBO algorithms do not take into account. Since optimizing contextual variables may be costly, this raises the question of determining a minimal relevant subset. In this paper, we frame this problem as a cost-aware model-selection BO task and address it with a novel method, Sensitivity-Analysis-Driven Contextual BO (SADCBO). We learn the relevance of context variables by sensitivity analysis of the posterior surrogate model at specific input points, while minimizing the cost of optimization by leveraging recent developments on early stopping for BO. We empirically evaluate the proposed SADCBO against alternatives in synthetic experiments, together with extensive ablation studies, and demonstrate consistent improvement across examples.
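
The following sketch shows one plausible instantiation of the sensitivity-analysis ingredient (our assumed reading of the mechanism, not SADCBO itself): fit a Gaussian process surrogate, then rank input dimensions by the average squared finite-difference gradient of the posterior mean at the observed points. The toy function, kernel, and step size are all illustrative choices.

```python
# Hedged sketch (not SADCBO): rank input variables by a derivative-based
# sensitivity of a GP surrogate's posterior mean, estimated with central
# finite differences at the observed input points.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)
# Toy data: dimension 0 matters strongly, dimension 2 is nearly irrelevant.
X = rng.uniform(-1, 1, size=(60, 3))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] + 0.01 * X[:, 2]

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X, y)

h = 1e-3
sens = np.zeros(X.shape[1])
for d in range(X.shape[1]):
    e = np.zeros(X.shape[1]); e[d] = h
    grad = (gp.predict(X + e) - gp.predict(X - e)) / (2 * h)
    sens[d] = np.mean(grad ** 2)   # derivative-based global sensitivity

print("sensitivity per dimension:", sens)  # dim 0 >> dim 1 >> dim 2
```

A dimension whose sensitivity falls below a threshold would then be a candidate for exclusion from the optimized context subset.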

Understanding the gradient variance of black-box variational inference (BBVI) is a crucial step for establishing its convergence and developing algorithmic improvements. However, existing studies have yet to show that the gradient variance of BBVI satisfies the conditions used to study the convergence of stochastic gradient descent (SGD), the workhorse of BBVI. In this work, we show that BBVI satisfies a matching bound corresponding to the $ABC$ condition used in the SGD literature when applied to smooth and quadratically growing log-likelihoods. Our results generalize to nonlinear covariance parameterizations widely used in the practice of BBVI. Furthermore, we show that the variance of the mean-field parameterization has provably superior dimensional dependence.
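
For reference, the $ABC$ condition referred to above is commonly stated in the SGD literature in roughly the following form (the exact constants and placement of factors vary by reference, so treat this as an assumed paraphrase):

```latex
% ABC condition on the stochastic gradient g(x): for some A, B, C >= 0
% and f^* a lower bound on the objective f,
\mathbb{E}\left[\lVert g(x) \rVert^2\right]
  \;\le\; 2A\,\bigl(f(x) - f^*\bigr)
  \;+\; B\,\lVert \nabla f(x) \rVert^2 \;+\; C .
```

Intuitively, the bound lets the gradient noise grow with suboptimality (the $A$ term) and with the gradient norm (the $B$ term), and allows a constant floor ($C$), which is strictly weaker than the classical bounded-variance assumption.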

Loop invariants are software properties that hold before and after every iteration of a loop. As such, invariants provide inductive arguments that are key to automating the verification of program loops. The problem of generating loop invariants, in particular invariants described by polynomial relations (so-called polynomial invariants), is therefore one of the hardest problems in software verification. In this paper we advocate an alternative solution to invariant generation. Rather than inferring invariants from loops, we synthesise loops from invariants. That is, we generate loops that satisfy a given set of polynomials; in other words, our synthesised loops are correct by construction. Our work turns the problem of loop synthesis into a symbolic computation challenge. We employ techniques from algebraic geometry to synthesise loops whose polynomial invariants are described by pure difference binomials. We show that such complex polynomial invariants need "only" linear loops, opening up new avenues in program optimisation. We prove the existence of non-trivial loops with linear updates for polynomial invariants generated by pure difference binomials. Importantly, we introduce an algorithmic approach that constructs linear loops from such polynomial invariants by generating linear recurrence sequences that have specified algebraic relations among their terms.
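
As a toy illustration of the phenomenon (a standard example supplied here, not taken from the paper), the loop below has purely linear updates yet preserves the degree-2 invariant x^2 - y = 0, a pure difference binomial:

```python
# Toy check (illustrative, not the paper's algorithm): a *linear* loop whose
# updates preserve the polynomial invariant x^2 - y = 0. The update
# (x, y) <- (2x, 4y) is linear, yet maintains a degree-2 relation:
# if x^2 = y holds on entry, then (2x)^2 = 4x^2 = 4y.
x, y = 3, 9              # initial state chosen so that x**2 == y
for _ in range(10):
    assert x * x == y    # invariant holds before every iteration
    x, y = 2 * x, 4 * y  # linear updates
assert x * x == y        # ... and after the loop
print("invariant x^2 - y = 0 preserved:", x, y)
```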

It is well known that the Cauchy problem for the Laplace equation is ill-posed in the sense of Hadamard: small deviations in the Cauchy data may lead to large errors in the solution. It has been observed that if a bound is imposed on the solution, a conditional stability estimate holds, which gives a reasonable way to construct stable algorithms. However, it is impossible to obtain good results at all points of the domain. Although numerical methods for the Cauchy problem for the Laplace equation have been widely studied for a long time, some points remain unclear, for example: how should a numerical solution be evaluated, that is, does it approximate the Cauchy data well while respecting the bound on the solution, and at which points are the numerical results reliable? In this paper, we prove a conditional stability estimate that is quantitatively related to the harmonic measure. The harmonic measure can be used as an indicator function to evaluate the numerical result pointwise, which further enables us to find a reliable subdomain where the local convergence rate is higher than a certain order.
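
A typical form of such a conditional stability estimate (assumed notation; the precise statement in the paper may differ) reads: if u is harmonic in a domain Omega, bounded by M, and its Cauchy data on the accessible boundary part Gamma are of size epsilon, then

```latex
% Typical two-constant conditional stability estimate (assumed form):
% \omega(x) denotes the harmonic measure of the accessible boundary
% part \Gamma \subset \partial\Omega with respect to \Omega at x.
|u(x)| \;\le\; C\, \varepsilon^{\omega(x)}\, M^{\,1-\omega(x)},
\qquad x \in \Omega .
```

Since the harmonic measure omega(x) decays away from Gamma, the estimate degrades there, which is consistent with certifying the numerical solution only on a subdomain.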

Electroencephalography (EEG) provides noninvasive measures of brain activity and has been found valuable for the diagnosis of some chronic disorders. Specifically, pre-treatment EEG signals in the alpha and theta frequency bands have demonstrated some association with antidepressant response, which is well known to have a low rate. We aim to design an integrated pipeline that improves the response rate of major depressive disorder patients by developing an individualized treatment policy guided by resting-state pre-treatment EEG recordings and other treatment effect modifiers. We first design an innovative automatic site-specific EEG preprocessing pipeline to extract features that carry stronger signal than the raw data. We then estimate the conditional average treatment effect using causal forests, and use a doubly robust technique to improve the efficiency of the estimation of the average treatment effect. We present evidence of heterogeneity in the treatment effect and of the modifying power of the EEG features, as well as a significant average treatment effect, a result that cannot be obtained by conventional methods. Finally, we employ an efficient policy-learning algorithm to learn an optimal depth-2 treatment assignment decision tree and compare its performance with Q-learning and outcome-weighted learning via simulation studies and an application to a large multi-site, double-blind randomized controlled clinical trial, EMBARC.
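
The doubly robust step refers to a generic technique; a minimal sketch of the standard AIPW estimator of the average treatment effect is given below, with toy data and simple nuisance models standing in for the paper's causal-forest pipeline (in practice one would also use cross-fitting):

```python
# Hedged sketch of a doubly robust (AIPW) average-treatment-effect
# estimator; the data and nuisance models are illustrative, not the
# paper's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(3)
n = 2000
X = rng.normal(size=(n, 4))                     # toy covariates/features
e = 1 / (1 + np.exp(-X[:, 0]))                  # true propensity score
A = rng.binomial(1, e)                          # treatment assignment
Y = 2.0 * A + X[:, 1] + rng.normal(size=n)      # outcome, true ATE = 2.0

# Nuisance models: propensity e(x) and outcome regressions mu_1, mu_0.
e_hat = LogisticRegression().fit(X, A).predict_proba(X)[:, 1]
mu1 = LinearRegression().fit(X[A == 1], Y[A == 1]).predict(X)
mu0 = LinearRegression().fit(X[A == 0], Y[A == 0]).predict(X)

# AIPW: consistent if either the propensity or the outcome model is correct.
psi = (mu1 - mu0
       + A * (Y - mu1) / e_hat
       - (1 - A) * (Y - mu0) / (1 - e_hat))
print("AIPW ATE estimate:", psi.mean())         # close to 2.0
```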

This paper focuses on the expected difference in borrowers' repayment when there is a change in the lender's credit decisions. Classical estimators overlook confounding effects, and hence their estimation error can be substantial. We therefore propose an alternative approach to constructing estimators that greatly reduces this error. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the power of the classical and proposed estimators in estimating the causal quantities. The comparison is tested across a wide range of models, including linear regression models, tree-based models, and neural-network-based models, under simulated datasets that exhibit different levels of causality, degrees of nonlinearity, and distributional properties. Most importantly, we apply our approach to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction in estimation error is strikingly large when the causal effects are correctly accounted for.
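
To see why ignoring confounding inflates the error, here is a small simulation in the spirit of the setting (entirely illustrative; the paper's estimators are not reproduced here): borrowers with better credit scores are both more likely to receive a limit increase and more likely to repay, so a naive comparison overstates the effect, while a simple regression adjustment recovers it.

```python
# Illustrative simulation (not the paper's estimator): confounding biases a
# naive comparison; a simple regression adjustment reduces the error.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n = 5000
score = rng.normal(size=n)                        # confounder: creditworthiness
limit_up = (score + rng.normal(size=n) > 0) * 1   # better scores get raises
repay = 1.0 * limit_up + 2.0 * score + rng.normal(size=n)  # true effect = 1.0

# Naive difference in means: contaminated by the confounder.
naive = repay[limit_up == 1].mean() - repay[limit_up == 0].mean()

# Adjust for the confounder by including it in the outcome regression.
Z = np.column_stack([limit_up, score])
adjusted = LinearRegression().fit(Z, repay).coef_[0]

print(f"naive: {naive:.2f}  adjusted: {adjusted:.2f}  truth: 1.00")
```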

The Q-learning algorithm is known to be affected by maximization bias, i.e., the systematic overestimation of action values, an important issue that has recently received renewed attention. Double Q-learning has been proposed as an efficient algorithm to mitigate this bias. However, this comes at the price of an underestimation of action values, in addition to increased memory requirements and slower convergence. In this paper, we introduce a new way to address the maximization bias in the form of a "self-correcting algorithm" for approximating the maximum of an expected value. Our method balances the overestimation of the single estimator used in conventional Q-learning against the underestimation of the double estimator used in Double Q-learning. Applying this strategy to Q-learning results in Self-correcting Q-learning. We show theoretically that this new algorithm enjoys the same convergence guarantees as Q-learning while being more accurate. Empirically, it performs better than Double Q-learning in domains with high-variance rewards, and it even attains faster convergence than Q-learning in domains with rewards of zero or low variance. These advantages transfer to a Deep Q Network implementation, which we call Self-correcting DQN, and which outperforms regular DQN and Double DQN on several tasks in the Atari 2600 domain.
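
The bias trade-off is easy to reproduce in a Monte Carlo sketch (illustrative only; the paper's self-correcting estimator, designed to sit between the two, is not implemented here):

```python
# Monte Carlo sketch of the bias the abstract describes: estimating
# max_a E[R_a] over noisy actions. The single estimator (max of sample
# means, as in Q-learning) overestimates; the double estimator (as in
# Double Q-learning) underestimates.
import numpy as np

rng = np.random.default_rng(5)
n_actions, n_samples, n_trials = 10, 20, 20000
means = np.linspace(0.0, 0.5, n_actions)   # true action values; max is 0.5

single, double = [], []
for _ in range(n_trials):
    r = rng.normal(means[:, None], 1.0, size=(n_actions, 2 * n_samples))
    a_means = r[:, :n_samples].mean(axis=1)   # first independent estimator
    b_means = r[:, n_samples:].mean(axis=1)   # second independent estimator
    single.append(a_means.max())              # Q-learning-style estimate
    double.append(b_means[a_means.argmax()])  # Double Q-learning-style

print(f"single-estimator bias: {np.mean(single) - 0.5:+.3f}")  # positive
print(f"double-estimator bias: {np.mean(double) - 0.5:+.3f}")  # negative
```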

Graphical causal inference as pioneered by Judea Pearl arose from research on artificial intelligence (AI), and for a long time had little connection to the field of machine learning. This article discusses where links have been and should be established, introducing key concepts along the way. It argues that the hard open problems of machine learning and AI are intrinsically related to causality, and explains how the field is beginning to understand them.
