
In this paper we prove a new theorem that yields an exact updating formula for linear regression model residuals, allowing segmented cross-validation residuals to be computed for any choice of cross-validation strategy without model refitting. The required matrix inversions are limited by the cross-validation segment sizes and can be executed with high efficiency in parallel. The well-known formula for leave-one-out cross-validation follows as a special case of our theorem. In situations where the cross-validation segments consist of small groups of repeated measurements, we suggest a heuristic strategy for fast serial approximation of the cross-validated residuals and the associated PRESS statistic. We also suggest strategies for quick estimation of the exact minimum PRESS value and of the full PRESS function over a selected interval of regularisation values. The computational effectiveness of the parameter selection for Ridge/Tikhonov regression modelling that results from our theoretical findings and heuristic arguments is demonstrated on several practical applications.
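
As a concrete illustration, the sketch below (our own, not the paper's code) checks the well-known block generalisation of the leave-one-out identity for ridge regression, which is in the spirit of the updating formula described above: for a held-out segment $S$, the cross-validated residuals are $(I - H_{SS})^{-1} e_S$, where $H$ is the hat matrix and $e$ the full-data residuals, so only a $|S| \times |S|$ inverse is needed per segment.

```python
import numpy as np

rng = np.random.default_rng(0)
X, y = rng.normal(size=(50, 4)), rng.normal(size=50)
lam = 0.1  # ridge penalty

# Hat matrix for ridge regression and full-data residuals
H = X @ np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T)
e = y - H @ y

# Segmented CV residuals for segment S via the block updating identity:
# e_S^cv = (I - H_SS)^{-1} e_S  (only a |S| x |S| inverse is needed)
S = np.array([3, 7, 11])
e_cv = np.linalg.solve(np.eye(len(S)) - H[np.ix_(S, S)], e[S])

# Check against explicit refitting with segment S removed
mask = np.ones(len(y), dtype=bool); mask[S] = False
Xt, yt = X[mask], y[mask]
beta = np.linalg.solve(Xt.T @ Xt + lam * np.eye(X.shape[1]), Xt.T @ yt)
assert np.allclose(e_cv, y[S] - X[S] @ beta)
```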

Related content

Pin fins are essential to the cooling of turbine blades, and their design has consequently seen significant research in the past. With the developments in metal additive manufacturing, novel design approaches involving complex geometries are now feasible. To that end, this article presents a Bayesian optimization approach for designing inline pins that achieve low pressure loss. The pin-fin shape is defined using featurized (parametrized) piecewise cubic splines in 2D, with the complexity of the shape depending on the number of splines used. For method development, the study uses three splines, so that a unique pin-fin design is defined by five features. After the design is specified, a computational fluid dynamics-based model computes the pressure drop across the pin-fin array. Bayesian optimization is carried out on a Gaussian process-based surrogate to obtain the combination of pin-fin features that minimizes the pressure drop. The results show that the optimization tends toward an aerodynamic design with low pressure drop, in line with existing knowledge. Furthermore, multiple optimization runs are conducted with varying amounts of input data; they reveal that convergence to a similar optimal design is achieved with as few as twenty-five initial design-of-experiments points for the surrogate. Sensitivity analysis shows that the distance between the rows of pin fins is the most dominant feature influencing the pressure drop. In summary, the newly developed automated framework demonstrates remarkable capability in designing pin fins with superior performance characteristics.
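
A minimal sketch of this kind of loop, using scikit-optimize's GP-based `gp_minimize` (not the authors' framework): `pressure_drop` is a hypothetical stand-in for the CFD evaluation, and the five feature ranges are assumed for illustration.

```python
# GP-surrogate Bayesian optimisation over five pin-fin shape features.
from skopt import gp_minimize

def pressure_drop(features):
    # Placeholder objective: in the paper this is a CFD simulation.
    a, b, c, d, row_spacing = features
    return (a - 0.3)**2 + (b - 0.5)**2 + c*c + d*d + 2.0*(row_spacing - 1.5)**2

space = [(0.0, 1.0)] * 4 + [(0.5, 3.0)]  # assumed feature ranges
res = gp_minimize(pressure_drop, space,
                  n_initial_points=25,   # the DoE budget the study found sufficient
                  n_calls=60, random_state=0)
print(res.x, res.fun)  # best feature combination and its (surrogate) pressure drop
```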

A compression function is a map that slims down an observational set into a subset of reduced size while preserving its informational content. In multiple applications, the event that a new observation changes the compressed set is interpreted as that observation carrying extra information; in learning theory, this corresponds to misclassification or misprediction. In this paper, we lay the foundations of a new theory that allows one to keep control on the probability of a change of compression (called the "risk"). We identify conditions under which the cardinality of the compressed set is a consistent estimator of the risk (without any upper limit on the size of the compressed set) and prove unprecedentedly tight bounds on the risk under a generally applicable condition of preference. All results hold in a fully agnostic setup, without requiring any a priori knowledge of the probability distribution of the observations. Not only do these results offer valid support for developing trust in observation-driven methodologies, they also play a fundamental role in learning techniques as a tool for hyper-parameter tuning.
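
A toy example (ours, not from the paper) makes the "change of compression" notion concrete: compress a sample of reals to its maximum. A new observation changes the compression iff it exceeds the current maximum, and under exchangeability this risk is $1/(N+1)$, which the sketch below checks by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(1)
N, trials = 20, 200_000
changes = 0
for _ in range(trials):
    sample = rng.exponential(size=N + 1)
    # last point plays the "new" observation; compression = max of the first N
    changes += sample[-1] > sample[:-1].max()
print(changes / trials, "vs theoretical", 1 / (N + 1))
```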

Community detection is a key aspect of network analysis, as it allows for the identification of groups and patterns within a network. With the ever-increasing size of networks, it is crucial to have fast algorithms to analyze them efficiently. The Louvain method is a modularity-based greedy algorithm that refines a partition of the network into communities over several iterations. Even on big, dense networks, it is renowned for finding high-quality communities. However, it can be at least a factor of ten slower than community detection techniques based on label propagation, which are generally extremely fast but obtain communities of lower quality. Researchers have suggested a number of methods for parallelizing and improving the Louvain algorithm. To decide which strategy is generally the best fit and which parameter values yield the highest performance without compromising community quality, it is critical to assess the performance and accuracy of these existing approaches. In this report we implement single-threaded and multi-threaded versions of the static Louvain algorithm, carefully examine the method's specifics, make the required tweaks and optimizations, and determine suitable parameter values. The tolerance between passes can be changed to adjust the method's performance. With an initial tolerance of 0.01 and a tolerance decline factor of 10, an asynchronous version of the algorithm produced the best results. Overall, our findings indicate that the approach is not well suited to shared-memory parallelism; one potential workaround is to break the graph into manageable chunks that can be processed independently and then merged back together.
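
For reference, the (sequential) Louvain method is available off the shelf in NetworkX; the report's optimized single- and multi-threaded implementations are separate from this.

```python
import networkx as nx

G = nx.karate_club_graph()
# threshold plays the role of the per-pass tolerance discussed above
communities = nx.community.louvain_communities(G, resolution=1.0, threshold=1e-7, seed=0)
print(len(communities), [sorted(c) for c in communities])
```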

We consider optimal sensor placement for a family of linear Bayesian inverse problems characterized by a deterministic hyper-parameter. The hyper-parameter describes distinct configurations in which measurements can be taken of the observed physical system. To optimally reduce the uncertainty in the system's model with a single set of sensors, the initial sensor placement needs to account for the non-linear state changes of all admissible configurations. We address this requirement through an observability coefficient that links the posteriors' uncertainties directly to the choice of sensors. We propose a greedy sensor selection algorithm that iteratively improves the observability coefficient for all configurations through orthogonal matching pursuit. The algorithm accommodates explicitly correlated noise models even for large sets of candidate sensors, and remains computationally efficient for high-dimensional forward models through model order reduction. We demonstrate our approach on a large-scale geophysical model of the Perth Basin, and provide numerical studies of optimality and scalability with respect to classic optimal experimental design utility functions.
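
To illustrate the greedy flavour of such selection schemes, the sketch below implements a classic greedy D-optimality heuristic from optimal experimental design (related in spirit to, but simpler than, the paper's OMP-based observability criterion): at each step, pick the candidate observation row that most increases the log-determinant of the Fisher information of the selected set.

```python
import numpy as np

def greedy_doptimal(C, k, ridge=1e-6):
    """C: (num_candidates, state_dim) candidate observation rows."""
    n, d = C.shape
    selected = []
    M = ridge * np.eye(d)          # regularised information matrix
    for _ in range(k):
        Minv = np.linalg.inv(M)
        # gain of row c: log det(M + c c^T) - log det(M) = log(1 + c^T Minv c)
        gains = np.einsum('ij,jk,ik->i', C, Minv, C)
        gains[selected] = -np.inf  # never pick a sensor twice
        best = int(np.argmax(gains))
        selected.append(best)
        M = M + np.outer(C[best], C[best])
    return selected

C = np.random.default_rng(2).normal(size=(200, 10))
print(greedy_doptimal(C, k=5))
```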

In regression problems where no true underlying model is known, conformal prediction methods enable prediction intervals to be constructed without any assumptions on the distribution of the underlying data, except that the training and test data are assumed to be exchangeable. However, these methods bear a heavy computational cost: to be carried out exactly, the regression algorithm would need to be refitted infinitely many times. In practice, the conformal prediction method is run by considering only a finite grid of finely spaced values for the response variable. This paper develops discretized conformal prediction algorithms that are guaranteed to cover the target value with the desired probability, and that offer a tradeoff between computational cost and prediction accuracy.
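
The generic grid-based procedure that the paper refines looks roughly as follows (a minimal sketch, not the authors' exact algorithm): for each trial response value on the grid, refit the model on the augmented data and keep the value if the new point's conformity score is not among the most extreme.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def conformal_interval(X, y, x_new, y_grid, alpha=0.1):
    n = len(y)
    accepted = []
    for y_trial in y_grid:
        Xa = np.vstack([X, x_new])
        ya = np.append(y, y_trial)
        model = LinearRegression().fit(Xa, ya)   # one refit per grid value
        scores = np.abs(ya - model.predict(Xa))  # absolute-residual conformity score
        # keep y_trial if the new point's score rank is small enough
        rank = np.sum(scores <= scores[-1])
        if rank <= np.ceil((1 - alpha) * (n + 1)):
            accepted.append(y_trial)
    return (min(accepted), max(accepted)) if accepted else None

rng = np.random.default_rng(3)
X = rng.normal(size=(40, 2))
y = X @ [1.0, -2.0] + rng.normal(scale=0.5, size=40)
print(conformal_interval(X, y, x_new=np.array([[0.5, 0.5]]),
                         y_grid=np.linspace(-6, 6, 241)))
```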

Peridynamic (PD) theory is significant and promising in engineering and materials science; however, its nonlocality entails an enormous computational cost. Our main contribution, which overcomes the restrictions of the existing fast method, is a general computational framework for linear bond-based peridynamic models based on the meshfree method, called the matrix-structure-based fast method (MSBFM). It is suitable for the general case, including 2D/3D problems, static and dynamic problems, and problems with general boundary conditions, in particular problems with crack propagation. Accordingly, we provide a general calculation flow chart. The proposed computational framework is practical and easily embedded into existing computational algorithms. With this framework, the computational cost is reduced from $O(N^2)$ to $O(N\log N)$ and the storage requirement from $O(N^2)$ to $O(N)$, where $N$ is the number of degrees of freedom. Finally, the vast reduction of the computational and memory requirements is verified by numerical examples.
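
The $O(N\log N)$ and $O(N)$ figures are characteristic of structure-exploiting matvecs: on uniform discretizations, nonlocal stiffness matrices have Toeplitz-like structure, so only one row/column needs storing and the product can be formed via circulant embedding and the FFT. The 1D sketch below shows that standard trick (illustrative of the matrix-structure idea only; the MSBFM additionally handles 2D/3D, general boundary conditions, and crack propagation).

```python
import numpy as np

def toeplitz_matvec(first_col, first_row, x):
    n = len(x)
    # embed the Toeplitz matrix in a circulant one of size 2n-1
    c = np.concatenate([first_col, first_row[:0:-1]])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x, n=len(c)))
    return y[:n].real

# assumed nonlocal "stiffness" kernel decaying with distance
n = 8
col = np.array([2.0, -0.5, -0.25, 0, 0, 0, 0, 0])
x = np.arange(n, dtype=float)
T = np.array([[col[abs(i - j)] for j in range(n)] for i in range(n)])  # symmetric Toeplitz
assert np.allclose(T @ x, toeplitz_matvec(col, col, x))
```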

Learning in neural networks is often framed as a problem in which targeted error signals are directly propagated to parameters and used to produce updates that improve network behaviour. Backpropagation of error (BP) is an example of such an approach and has proven to be a highly successful application of stochastic gradient descent to deep neural networks. We propose constrained parameter inference (COPI) as a new principle for learning. The COPI approach assumes that learning can be set up in a manner where parameters infer their own values based upon observations of their local neuron activities. We find that this estimation of network parameters is possible under the constraints of decorrelated neural inputs and top-down perturbations of neural states for credit assignment. We show that the decorrelation required by COPI allows learning at extremely high learning rates, competitive with those of adaptive optimizers as used with BP. We further demonstrate that COPI affords a new approach to feature analysis and network compression. Finally, we argue that COPI may shed new light on learning in biological networks, given the evidence for decorrelation in the brain.
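
To give a feel for the decorrelation constraint, here is one simple decorrelation rule of the kind such schemes rely on (an illustrative sketch, not necessarily the paper's exact update): adapt a matrix M so that z = M x has approximately identity covariance.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(5, 5))   # mixing matrix producing correlated inputs
M = np.eye(5)
eta = 0.01
for _ in range(3000):
    x = A @ rng.normal(size=(5, 256))     # batch of correlated inputs
    z = M @ x
    C = (z @ z.T) / x.shape[1]            # batch covariance of decorrelated output
    M -= eta * (C - np.eye(5)) @ M        # push the covariance toward identity

z = M @ (A @ rng.normal(size=(5, 20000)))
print(np.round((z @ z.T) / 20000, 2))     # approximately the identity matrix
```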

In the sequential decision-making setting, an agent aims to achieve systematic generalization over a large, possibly infinite, set of environments. Such environments are modeled as discrete Markov decision processes with both states and actions represented through a feature vector. The underlying structure of the environments allows the transition dynamics to be factored into two components: one that is environment-specific and another that is shared. As an example, consider a set of environments that all share the same laws of motion. In this setting, the agent can collect a finite number of reward-free interactions from a subset of these environments. The agent must then be able to approximately solve any planning task defined over any environment in the original set, relying on the above interactions only. Can we design a provably efficient algorithm that achieves this ambitious goal of systematic generalization? In this paper, we give a partially positive answer to this question. First, we provide a tractable formulation of systematic generalization by employing a causal viewpoint. Then, under specific structural assumptions, we provide a simple learning algorithm that guarantees any desired planning error up to an unavoidable sub-optimality term, while showcasing a polynomial sample complexity.

We present a model-inversion algorithm, CKLEMAP, for data assimilation and parameter estimation in partial differential equation models of physical systems with spatially heterogeneous parameter fields. These fields are approximated using low-dimensional conditional Karhunen-Lo\'{e}ve expansions (CKLEs), which are constructed using Gaussian process regression models of these fields trained on the parameters' measurements. We then assimilate measurements of the state of the system and compute the maximum a posteriori (MAP) estimate of the CKLE coefficients by solving a nonlinear least-squares problem. When solving this optimization problem, we efficiently compute the Jacobian of the vector objective by exploiting the sparsity structure of the linear system of equations associated with the forward solution of the physics problem. The CKLEMAP method provides better scalability than the standard MAP method, in which the number of unknowns to be estimated equals the number of elements in the numerical forward model. In CKLEMAP, by contrast, the number of unknowns (the CKLE coefficients) is controlled by the smoothness of the parameter field and the number of measurements, and is in general much smaller than the number of discretization nodes, which leads to a significant reduction of computational cost with respect to the standard MAP method. To show its advantage in scalability, we apply CKLEMAP to estimate the transmissivity field in a two-dimensional steady-state subsurface flow model of the Hanford Site by assimilating synthetic measurements of transmissivity and hydraulic head. We find that the execution time of CKLEMAP scales nearly linearly, as $N^{1.33}$, where $N$ is the number of discretization nodes, while the execution time of standard MAP scales as $N^{2.91}$. The CKLEMAP method improved execution time without sacrificing accuracy compared to standard MAP.
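
The building block behind this parameterization is the truncated Karhunen-Loeve expansion, sketched below for an unconditional Gaussian field on a 1D grid (the conditioning on parameter measurements used in CKLEMAP, and the kernel choice, are assumptions of this illustration).

```python
import numpy as np

n = 200
s = np.linspace(0.0, 1.0, n)
# squared-exponential covariance (assumed kernel and length scale)
C = np.exp(-(s[:, None] - s[None, :])**2 / (2 * 0.1**2))

w, V = np.linalg.eigh(C)                  # eigen-decomposition of the covariance
idx = np.argsort(w)[::-1]
w, V = w[idx], V[:, idx]
k = np.searchsorted(np.cumsum(w) / w.sum(), 0.99) + 1   # keep 99% of the variance

xi = np.random.default_rng(5).normal(size=k)   # KL coefficients (the MAP unknowns)
field = V[:, :k] @ (np.sqrt(w[:k]) * xi)       # one realization of the random field
print(k, field.shape)                          # k << n discretization nodes
```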

A directed acyclic graph (DAG) provides valuable prior knowledge that is often discarded in regression tasks in machine learning. We show that the independences arising from the presence of collider structures in DAGs provide meaningful inductive biases, which constrain the regression hypothesis space and improve predictive performance. We introduce collider regression, a framework to incorporate probabilistic causal knowledge from a collider structure into a regression problem. When the hypothesis space is a reproducing kernel Hilbert space, we prove a strictly positive generalisation benefit under mild assumptions and provide closed-form estimators of the empirical risk minimiser. Experiments on synthetic and climate model data demonstrate performance gains of the proposed methodology.
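
For context, the unconstrained RKHS empirical risk minimiser already has a closed form, namely kernel ridge regression, sketched below; the paper's collider-constrained variant modifies this baseline estimator and is not reproduced here.

```python
import numpy as np

def krr_fit_predict(X, y, X_test, lam=1e-2, ls=1.0):
    def k(A, B):  # squared-exponential kernel (assumed choice)
        d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
        return np.exp(-d2 / (2 * ls**2))
    # closed-form ERM: alpha = (K + lam I)^{-1} y
    alpha = np.linalg.solve(k(X, X) + lam * np.eye(len(X)), y)
    return k(X_test, X) @ alpha

rng = np.random.default_rng(6)
X = rng.uniform(-2, 2, size=(60, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=60)
print(krr_fit_predict(X, y, np.array([[0.0], [1.0]])))
```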
