
In the field of equation learning, exhaustively considering all possible equations derived from a basis function dictionary is infeasible. Sparse regression and greedy algorithms have emerged as popular approaches to tackle this challenge. However, the presence of multicollinearity poses difficulties for sparse regression techniques, and greedy steps may inadvertently exclude terms of the true equation, leading to reduced identification accuracy. In this article, we present an approach that strikes a balance between comprehensiveness and efficiency in equation learning. Inspired by stepwise regression, our approach combines the coefficient of determination, $R^2$, and the Bayesian model evidence, $p(\boldsymbol y|\mathcal M)$, in a novel way. Our procedure is characterized by a comprehensive search with only a minor reduction of the model space at each iteration step. With two flavors of our approach and the adoption of $p(\boldsymbol y|\mathcal M)$ for bi-directional stepwise regression, we present a total of three new avenues for equation learning. Through three extensive numerical experiments involving random polynomials and dynamical systems, we compare our approach against four state-of-the-art methods and two standard approaches. The results demonstrate that our comprehensive search approach surpasses all other methods in terms of identification accuracy. In particular, the second flavor of our approach establishes an efficient overfitting penalty based solely on $R^2$ and achieves the highest rates of exact equation recovery.
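
As a rough illustration of this kind of procedure, the Python sketch below performs forward stepwise selection over a term dictionary, ranking candidates by $R^2$ and discarding only a small fraction of the weakest candidates at each iteration. All names and the pruning rule are illustrative assumptions rather than the authors' implementation, and the Bayesian evidence $p(\boldsymbol y|\mathcal M)$ criterion is omitted for brevity.

    # Minimal sketch (not the authors' implementation): forward stepwise
    # selection over a basis-function dictionary, ranked by R^2, pruning
    # only a small fraction of the candidate terms per iteration.
    import numpy as np

    def r_squared(y, X):
        """R^2 of the least-squares fit of y on the columns of X."""
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ coef
        return 1.0 - np.sum(resid**2) / np.sum((y - y.mean())**2)

    def stepwise_r2(y, dictionary, max_terms=5, prune_frac=0.1):
        """dictionary maps a term name to its column of evaluations."""
        selected, remaining = [], dict(dictionary)
        for _ in range(max_terms):
            if not remaining:
                break
            scores = {name: r_squared(y, np.column_stack(
                          [dictionary[s] for s in selected] + [col]))
                      for name, col in remaining.items()}
            best = max(scores, key=scores.get)
            selected.append(best)
            remaining.pop(best)
            # keep the search nearly exhaustive: drop only the weakest few
            ranked = sorted(scores, key=scores.get)
            for name in ranked[:max(1, int(prune_frac * len(ranked)))]:
                remaining.pop(name, None)
        return selected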

Related content


We study the problem of distribution shift generally arising in machine-learning augmented hybrid simulation, where parts of simulation algorithms are replaced by data-driven surrogates. We first establish a mathematical framework to understand the structure of machine-learning augmented hybrid simulation problems, and the cause and effect of the associated distribution shift. We show correlations between distribution shift and simulation error both numerically and theoretically. Then, we propose a simple methodology based on tangent-space regularized estimator to control the distribution shift, thereby improving the long-term accuracy of the simulation results. In the linear dynamics case, we provide a thorough theoretical analysis to quantify the effectiveness of the proposed method. Moreover, we conduct several numerical experiments, including simulating a partially known reaction-diffusion equation and solving Navier-Stokes equations using the projection method with a data-driven pressure solver. In all cases, we observe marked improvements in simulation accuracy under the proposed method, especially for systems with high degrees of distribution shift, such as those with relatively strong non-linear reaction mechanisms, or flows at large Reynolds numbers.
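
To make the setting concrete, the following schematic sketch shows a hybrid step in which a learned surrogate replaces part of the update, together with one plausible form of a tangent-space regularized training loss. The function names, the orthonormal tangent basis, and the penalty itself are assumptions for illustration and may differ from the estimator analyzed in the work above.

    # Schematic sketch only: a hybrid simulation step with a data-driven
    # surrogate, and one plausible form of a tangent-space regularized
    # training loss (names and penalty are illustrative assumptions).
    import numpy as np

    def hybrid_step(state, step_physics, surrogate, dt):
        """Known physics first, then the learned component."""
        return surrogate(step_physics(state, dt))

    def tangent_regularized_loss(pred, target, tangent_basis, lam=0.1):
        """Data-fit term plus a penalty on the part of the residual that
        leaves an estimated tangent space of the training data
        (tangent_basis: matrix with orthonormal columns), which discourages
        the simulated trajectory from drifting off-distribution."""
        resid = pred - target
        off_plane = resid - tangent_basis @ (tangent_basis.T @ resid)
        return np.sum(resid**2) + lam * np.sum(off_plane**2)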

SARRIGUREN, a new complete algorithm for SAT based on counting clauses (also valid for Unique-SAT and #SAT), is described, analyzed, and tested. While existing complete SAT algorithms slow down on clauses with many literals, this is an advantage for SARRIGUREN: the more literals a clause has, the higher the probability of overlap among clauses, a property that makes the clause-counting process more efficient. In fact, it achieves $O(m^2 \times n/k)$ time complexity for random $k$-SAT instances with $n$ variables and $m$ relatively dense clauses, where the density level is relative to the number of variables $n$; that is, clauses are relatively dense when $k\geq7\sqrt{n}$. Although worst cases with exponential complexity could exist in theory, the probability of encountering them in random $k$-SAT with relatively dense clauses is practically zero. The algorithm has been tested empirically, and this polynomial time complexity also holds for $k$-SAT instances with less dense clauses ($k\geq5\sqrt{n}$). The density can, for example, be as low as 0.049 when working with $n=20000$ variables and $k=989$ literals. In addition, two complementary algorithms are presented that provide the solutions to $k$-SAT instances and valuable information about the number of solutions for each literal. Although this algorithm does not settle the NP=P question (it is not a polynomial algorithm for 3-SAT), it broadens the knowledge of the subject, because $k$-SAT with $k>3$ and dense clauses is no harder than 3-SAT. Moreover, the Python implementation of the algorithms, all input datasets, and the results obtained in the experiments are made available.
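
For intuition about why overlap between clauses matters in a counting-based approach, the self-contained Python snippet below states the elementary counting facts involved; it is illustrative only and is not the SARRIGUREN algorithm itself.

    # Illustrative only (not the SARRIGUREN implementation): a clause with
    # k distinct literals over n variables is falsified by exactly
    # 2**(n - k) assignments, and the falsifying sets of two clauses
    # overlap unless the clauses contain complementary literals.
    def falsified_count(clause, n):
        """Assignments falsifying a single clause (literals are nonzero
        ints, with -v denoting the negation of variable v)."""
        return 2 ** (n - len(set(abs(l) for l in clause)))

    def falsified_by_either(c1, c2, n):
        """Assignments falsifying c1 or c2, by inclusion-exclusion."""
        if any(-l in c2 for l in c1):      # complementary literals:
            both = 0                       # falsifying sets are disjoint
        else:
            vars_union = set(abs(l) for l in c1) | set(abs(l) for l in c2)
            both = 2 ** (n - len(vars_union))
        return falsified_count(c1, n) + falsified_count(c2, n) - both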

High-frequency problems remain a notable challenge for numerical methods for partial differential equations. In this paper, a learning-based numerical method (LbNM) is proposed for the Helmholtz equation with high frequency. The main novelty is the use of Tikhonov regularization to stably learn the solution operator by exploiting relevant information, especially the fundamental solutions. Applying the learned solution operator to a new boundary input then yields an updated solution quickly. Based on the method of fundamental solutions and the quantitative Runge approximation, we give an error estimate, which indicates the interpretability and generalizability of the present method. Numerical results validate the error analysis and demonstrate the high-precision and high-efficiency features of the method.
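
The two classical ingredients named above, the method of fundamental solutions and Tikhonov regularization, can be sketched in a few lines of Python. The geometry, wavenumber, source placement, and boundary data below are illustrative assumptions, and the snippet does not reproduce the learned solution operator of the paper.

    # Minimal sketch: method of fundamental solutions for the 2-D Helmholtz
    # equation with a Tikhonov-regularized least-squares fit of the
    # boundary data (all parameters are illustrative).
    import numpy as np
    from scipy.special import hankel1

    k = 10.0                                                     # wavenumber
    theta = np.linspace(0, 2*np.pi, 200, endpoint=False)
    bdry = np.column_stack([np.cos(theta), np.sin(theta)])       # unit circle
    src  = 1.5 * np.column_stack([np.cos(theta), np.sin(theta)]) # sources outside

    def phi(x, y):
        """Fundamental solution of the 2-D Helmholtz equation."""
        r = np.linalg.norm(x - y, axis=-1)
        return 0.25j * hankel1(0, k * r)

    A = phi(bdry[:, None, :], src[None, :, :])   # collocation matrix
    g = np.exp(1j * k * bdry[:, 0])              # boundary data (plane-wave trace)

    alpha = 1e-8                                 # Tikhonov parameter
    coef = np.linalg.solve(A.conj().T @ A + alpha * np.eye(A.shape[1]),
                           A.conj().T @ g)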

Generating random and pseudorandom numbers with a deterministic system is a long-standing challenge in theoretical research and engineering applications. Several pseudorandom number generators based on the inversive congruential method have been designed as attractive alternatives to those based on the classical linear congruential method. This paper discloses the least period of sequences generated by iterating an inversive pseudorandom number generator over the ring $\mathbb{Z}_e$ by transforming it into a second-order linear congruential recurrence relation. Depending on whether the sequence is periodic or ultimately periodic, all states in the domain can be attributed to two types of objects: cycles of different lengths and one unilaterally connected digraph whose structure remains unchanged with respect to the parameter $e$. The graph structure of the generator over the ring $\mathbb{Z}_e$ is precisely disclosed through rigorous theoretical analysis and verified experimentally. The adopted analysis methodology can be extended to study the graph structure of other nonlinear maps.
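
As a concrete illustration of the kind of map being analyzed, the short sketch below iterates an inversive congruential step $x \mapsto (a x^{-1} + c) \bmod e$ and measures the tail length and least period of one trajectory. The parameters and the convention used at non-invertible states are placeholder assumptions rather than the paper's construction.

    # Illustrative sketch: iterate an inversive congruential map over Z_e
    # and report (tail length, least period) of one trajectory in its
    # functional graph.  Requires Python >= 3.8 for pow(x, -1, e).
    from math import gcd

    def icg_step(x, a, c, e):
        if gcd(x, e) == 1:                 # x is a unit in Z_e
            return (a * pow(x, -1, e) + c) % e
        return c                           # placeholder convention for non-units

    def tail_and_period(x0, a, c, e):
        seen, x, n = {}, x0, 0
        while x not in seen:
            seen[x] = n
            x = icg_step(x, a, c, e)
            n += 1
        return seen[x], n - seen[x]        # (tail length, least period)

    print(tail_and_period(1, a=17, c=5, e=2**8))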

The class of Basic Feasible Functionals (BFF) is the second-order counterpart of the class of first-order functions computable in polynomial time. We present several implicit characterizations of BFF based on a typed programming language of terms. These terms may perform calls to non-recursive imperative procedures. The type discipline has two layers: the terms follow a standard simply-typed discipline and the procedures follow a standard tier-based type discipline. BFF consists exactly of the second-order functionals that are computed by typable and terminating programs. The completeness of this characterization, surprisingly, still holds in the absence of lambda-abstraction. Moreover, the termination requirement can be specified as a completeness-preserving instance, which can be decided in time quadratic in the size of the program. As typing is decidable in polynomial time, we obtain the first tractable (i.e., decidable in polynomial time), sound, complete, and implicit characterization of BFF, thus solving a problem that has been open for more than 20 years.

The hazard function represents one of the main quantities of interest in the analysis of survival data. We propose a general approach for parametrically modelling the dynamics of the hazard function using systems of autonomous ordinary differential equations (ODEs). This modelling approach can be used to provide qualitative and quantitative analyses of the evolution of the hazard function over time. Our proposal capitalises on the extensive literature of ODEs which, in particular, allow for establishing basic rules or laws on the dynamics of the hazard function via the use of autonomous ODEs. We show how to implement the proposed modelling framework in cases where there is an analytic solution to the system of ODEs or where an ODE solver is required to obtain a numerical solution. We focus on the use of a Bayesian modelling approach, but the proposed methodology can also be coupled with maximum likelihood estimation. A simulation study is presented to illustrate the performance of these models and the interplay of sample size and censoring. Two case studies using real data are presented to illustrate the use of the proposed approach and to highlight the interpretability of the corresponding models. We conclude with a discussion on potential extensions of our work and strategies to include covariates into our framework.
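
As a minimal numerical sketch of this ODE-based formulation, the snippet below evolves a hazard under an illustrative autonomous law (logistic growth, not a model taken from the work above) and recovers the survival function from the cumulative hazard via $S(t) = \exp\{-H(t)\}$.

    # Minimal sketch with illustrative dynamics: the hazard h(t) obeys an
    # autonomous ODE, dh/dt = r*h*(1 - h/K), and the survival function is
    # obtained from the cumulative hazard, dH/dt = h, S(t) = exp(-H(t)).
    import numpy as np
    from scipy.integrate import solve_ivp

    r, K, h0 = 1.5, 2.0, 0.1               # illustrative parameters

    def rhs(t, y):
        h, H = y                            # hazard and cumulative hazard
        return [r * h * (1 - h / K), h]

    t = np.linspace(0, 5, 200)
    sol = solve_ivp(rhs, (0, 5), [h0, 0.0], t_eval=t)
    hazard = sol.y[0]
    survival = np.exp(-sol.y[1])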

We consider nonlinear solvers for the incompressible, steady (or at a fixed time step for unsteady) Navier-Stokes equations in the setting where partial measurement data of the solution is available. The measurement data is incorporated/assimilated into the solution through a nudging term added to the Picard iteration that penalizes the difference between the coarse-mesh interpolants of the true solution and of the solver solution, analogous to how continuous data assimilation (CDA) is implemented for time-dependent PDEs. This setting was considered in [Li et al. {\it CMAME} 2023], and we extend the methodology by improving the analysis to hold in the $L^2$ norm instead of a weighted $H^1$ norm whose weight depends on the coarse mesh width, and by treating the case of noisy measurement data. For noisy measurement data, we prove that the CDA-Picard method is stable and convergent, up to the size of the noise. Numerical tests illustrate the results and show that a very good strategy when using noisy data is to use CDA-Picard to generate an initial guess for the classical Newton iteration.
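
Schematically, and in notation assumed here rather than quoted from the papers, one CDA-Picard step takes the current velocity iterate $u_k$ and finds $(u_{k+1}, p_{k+1})$ satisfying, for all test functions $(v, q)$,
$$ \nu(\nabla u_{k+1}, \nabla v) + (u_k \cdot \nabla u_{k+1}, v) - (p_{k+1}, \nabla \cdot v) + (\nabla \cdot u_{k+1}, q) + \mu\,(I_H u_{k+1} - I_H u, I_H v) = (f, v), $$
where $I_H$ denotes the interpolant onto the coarse measurement mesh, $u$ the (possibly noisy) measured solution, and $\mu > 0$ the nudging parameter.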

We study the optimal linear prediction of a random function, assuming it takes values in an infinite dimensional Hilbert space. We begin by characterizing the mean square prediction error (MSPE) associated with a linear predictor and discussing the minimal achievable MSPE. This analysis reveals that, in general, there are multiple non-unique linear predictors that minimize the MSPE, and even if a unique solution exists, consistently estimating it from finite samples is generally impossible. Nevertheless, we can define asymptotically optimal linear operators whose empirical MSPEs approach the minimal achievable level as the sample size increases. We show that, interestingly, standard post-dimension reduction estimators, which have been widely used in the literature, attain such asymptotic optimality under minimal conditions.
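
In the notation assumed here (not necessarily that of the paper), for jointly distributed random elements $X$ and $Y$ of a separable Hilbert space $\mathcal H$ with finite second moments, the mean square prediction error of a bounded linear predictor $A$ is
$$ \mathrm{MSPE}(A) = \mathbb{E}\,\| Y - A X \|_{\mathcal H}^2, $$
and the asymptotically optimal estimators discussed above are those whose empirical MSPE converges to $\inf_A \mathrm{MSPE}(A)$ as the sample size grows.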

Categorical compositional distributional semantics is an approach to modelling language that combines the success of vector-based models of meaning with the compositional power of formal semantics. However, this approach was developed without an eye to cognitive plausibility. Vector representations of concepts and concept binding are also of interest in cognitive science, and have been proposed as a way of representing concepts within a biologically plausible spiking neural network. This work proposes a way for compositional distributional semantics to be implemented within a spiking neural network architecture, with the potential to address problems in concept binding, and we give a small implementation. We also describe a means of training word representations using labelled images.

We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve over the existing information-theoretic bounds, are applicable to a wider range of algorithms, and solve two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning.
