
This paper presents a novel approach to solving the Flying Sidekick Travelling Salesman Problem (FSTSP) using a state-of-the-art self-adaptive genetic algorithm. The FSTSP is a combinatorial optimisation problem that extends the Travelling Salesman Problem (TSP) by introducing drones. The objective is to minimise the total time needed to visit all locations while strategically deploying a drone to serve hard-to-reach customer locations. To the best of my knowledge, this is the first time a self-adaptive genetic algorithm (GA) has been used to solve the FSTSP. Experimental results on smaller problem instances demonstrate that the algorithm finds a greater number of optimal solutions and a smaller percentage gap to the optimal solution than rival algorithms. Moreover, on larger problem instances, the algorithm outperforms all rival algorithms at every problem size while maintaining a reasonably low computation time.
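
As a rough illustration of the self-adaptive idea, the sketch below evolves a population in which each individual carries its own mutation rate alongside a customer permutation and per-customer drone flags. The encoding, the toy fitness function and all parameter values are assumptions for illustration, not the representation or operators used in the paper.

```python
# Minimal sketch of a self-adaptive GA on a simplified FSTSP-style encoding.
# Assumptions (not from the paper): each individual is a customer permutation
# plus per-customer "drone" flags, and carries its own mutation rate sigma
# that is itself mutated (the "self-adaptive" part). The fitness below is a
# crude stand-in for the true truck/drone completion time.
import random, math

N = 12                                   # number of customers
coords = [(random.random(), random.random()) for _ in range(N)]

def dist(a, b):
    return math.dist(coords[a], coords[b])

def fitness(ind):
    perm, drone, _ = ind
    truck = [c for c in perm if not drone[c]]
    time = sum(dist(truck[i], truck[i + 1]) for i in range(len(truck) - 1))
    time += 0.4 * sum(drone)             # placeholder cost for drone-served customers
    return time

def new_individual():
    perm = random.sample(range(N), N)
    drone = [random.random() < 0.2 for _ in range(N)]
    return (perm, drone, 0.3)            # last entry: self-adapted mutation rate

def mutate(ind):
    perm, drone, sigma = ind
    sigma = max(0.01, sigma * math.exp(0.2 * random.gauss(0, 1)))  # mutate the rate first
    perm, drone = perm[:], drone[:]
    if random.random() < sigma:          # swap two customers in the truck tour
        i, j = random.sample(range(N), 2)
        perm[i], perm[j] = perm[j], perm[i]
    if random.random() < sigma:          # flip one drone-assignment flag
        k = random.randrange(N)
        drone[k] = not drone[k]
    return (perm, drone, sigma)

pop = [new_individual() for _ in range(40)]
for gen in range(200):
    pop.sort(key=fitness)
    parents = pop[:20]
    pop = parents + [mutate(random.choice(parents)) for _ in range(20)]
print("best makespan (toy model):", round(fitness(min(pop, key=fitness)), 3))
```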

Related content

A novel regression method is introduced and studied. The procedure weights squared residuals according to their magnitude. Unlike classic least squares, which treats every squared residual as equally important, the new procedure exponentially down-weights squared residuals that lie far from the cloud of all residuals and assigns a constant weight of one to squared residuals that lie close to the center of the squared-residual cloud. The new procedure keeps a good balance between robustness and efficiency: it possesses the highest breakdown point attainable by any regression-equivariant procedure, is much more robust than classic least squares, and is much more efficient than the benchmark robust method, the least trimmed squares (LTS) of Rousseeuw (1984). With a smooth weight function, the new procedure can be computed very quickly by first-order (first-derivative) and second-order (second-derivative) methods. Assertions and other theoretical findings are verified in simulated and real data examples.
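
A minimal sketch of the down-weighting idea, implemented as iteratively reweighted least squares: squared residuals near the center of the squared-residual cloud receive weight one, while those beyond a cutoff are exponentially down-weighted. The specific weight function, cutoff rule and tuning constant below are assumptions, not the paper's definitions.

```python
# Minimal sketch of the down-weighting idea via iteratively reweighted least
# squares (IRLS). The weight function below (weight 1 near the centre of the
# squared residuals, exponential decay beyond a cutoff) is an assumption, not
# the paper's exact definition.
import numpy as np

def weighted_fit(X, y, n_iter=20, c=3.0):
    Xd = np.column_stack([np.ones(len(y)), X])       # add intercept
    beta = np.linalg.lstsq(Xd, y, rcond=None)[0]     # start from ordinary LS
    for _ in range(n_iter):
        r2 = (y - Xd @ beta) ** 2
        cutoff = c * np.median(r2)
        w = np.where(r2 <= cutoff, 1.0, np.exp(-(r2 - cutoff) / (cutoff + 1e-12)))
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(sw[:, None] * Xd, sw * y, rcond=None)[0]
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = X @ np.array([2.0, -1.0]) + 0.5 + rng.normal(scale=0.3, size=200)
y[:20] += 15.0                                       # gross outliers
print(weighted_fit(X, y))                            # close to [0.5, 2, -1] despite outliers
```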

In this paper we explore the concept of sequential inductive prediction intervals using theory from sequential testing. We furthermore introduce a 3-parameter PAC definition of prediction intervals that, via simulation, allows us to achieve nearly sharp bounds with high probability.
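
The toy simulation below illustrates the kind of guarantee being targeted: an inductive prediction interval is built from calibration residuals, and Monte Carlo repetition estimates how often its empirical coverage exceeds 1 - eps. The quantile construction and the Gaussian residuals are placeholders; the sequential construction and the exact 3-parameter PAC definition from the paper are not reproduced.

```python
# Minimal sketch of checking an inductive prediction interval by simulation.
# The interval is a plain residual-quantile construction; this only
# illustrates the "coverage holds with high probability" style of guarantee
# being estimated by Monte Carlo, not the paper's sequential procedure.
import numpy as np

rng = np.random.default_rng(1)
eps, n_cal, n_test, n_rep = 0.1, 200, 1000, 500
hits = 0
for _ in range(n_rep):
    cal = rng.normal(size=n_cal)                 # calibration residuals
    test = rng.normal(size=n_test)               # future residuals
    q = np.quantile(np.abs(cal), 1 - eps)        # symmetric interval half-width
    coverage = np.mean(np.abs(test) <= q)
    hits += coverage >= 1 - eps
print("fraction of repetitions with coverage >= 1-eps:", hits / n_rep)
```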

This paper considers the problem of robust iterative Bayesian smoothing in nonlinear state-space models with additive noise using Gaussian approximations. Iterative methods are known to improve smoothed estimates but are not guaranteed to converge, motivating the development of more robust versions of the algorithms. The aim of this article is to present Levenberg-Marquardt (LM) and line-search extensions of the classical iterated extended Kalman smoother (IEKS) as well as the iterated posterior linearisation smoother (IPLS). The IEKS has previously been shown to be equivalent to the Gauss-Newton (GN) method. We derive a similar GN interpretation for the IPLS. Furthermore, we show that an LM extension for both iterative methods can be achieved with a simple modification of the smoothing iterations, enabling algorithms with efficient implementations. Our numerical experiments show the importance of robust methods, in particular for the IEKS-based smoothers. The computationally expensive IPLS-based smoothers are naturally robust but can still benefit from further regularisation.
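
The Levenberg-Marquardt modification referred to above can be pictured on a generic nonlinear least-squares problem: the Gauss-Newton normal equations are damped by a lambda*I term, and lambda is adapted according to whether the cost decreased. The toy residual below is only a stand-in for the smoothing cost handled by the IEKS/IPLS iterations.

```python
# Minimal sketch of the Levenberg-Marquardt idea the paper embeds in the
# smoothing iterations: the Gauss-Newton step (J^T J) dx = -J^T r is
# regularised by a damping term lambda*I, and lambda is adapted depending on
# whether the cost decreased. The toy residual is an assumption, not the
# smoother's cost function.
import numpy as np

def residual(x):
    return np.array([x[0] ** 2 + x[1] - 11.0, x[0] + x[1] ** 2 - 7.0])

def jacobian(x):
    return np.array([[2 * x[0], 1.0], [1.0, 2 * x[1]]])

x, lam = np.array([0.0, 0.0]), 1.0
for _ in range(50):
    r, J = residual(x), jacobian(x)
    dx = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)   # damped GN step
    if np.sum(residual(x + dx) ** 2) < np.sum(r ** 2):
        x, lam = x + dx, lam * 0.5       # accept step, trust the model more
    else:
        lam *= 2.0                       # reject step, increase damping
print("solution:", x)                    # converges to a root, here near (3, 2)
```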

This article presents Recurrent Neural Network (RNN) models as a surrogate for computationally intensive meso-scale simulation of woven composites. Leveraging transfer learning, the RNN models address the initialization challenges and sparse-data issues inherent in cyclic shear strain loads. A mean-field model generates a comprehensive data set representing elasto-plastic behavior. In simulations, arbitrary six-dimensional strain histories are used to predict stresses, with random-walk loading as the source task and cyclic loading conditions as the target task. Incorporating sub-scale properties enhances the versatility of the RNN. To achieve accurate predictions, the network architecture and hyper-parameter configurations are tuned by grid search. The results of this study demonstrate that transfer learning can effectively adapt the RNN to varying strain conditions, establishing its potential as a useful tool for modeling path-dependent responses in woven composites.
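
A minimal sketch of the transfer-learning setup, assuming a GRU that maps six-dimensional strain histories to stresses: the network is pre-trained on random-walk-style strain paths (source task) and then fine-tuned, with the recurrent core frozen, on cyclic paths (target task). The synthetic material response and all hyper-parameters are placeholders, not the mean-field data or tuned configuration from the paper.

```python
# Minimal sketch of the transfer-learning setup: a GRU maps 6-D strain
# histories to 6-D stress histories, is pre-trained on one loading regime and
# then fine-tuned on another. The synthetic "material" below is a placeholder,
# not the mean-field model used in the paper.
import torch
import torch.nn as nn

class StressRNN(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(input_size=6, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 6)

    def forward(self, strain):                      # (batch, time, 6)
        out, _ = self.rnn(strain)
        return self.head(out)                       # (batch, time, 6)

def fake_response(strain):                          # placeholder path-dependent map
    return torch.cumsum(strain, dim=1) * 0.1 + strain

def train(model, strain, stress, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(strain), stress)
        loss.backward()
        opt.step()
    return loss.item()

model = StressRNN()
src = torch.cumsum(torch.randn(64, 50, 6) * 0.01, dim=1)      # random-walk strain (source task)
print("source loss:", train(model, src, fake_response(src), epochs=100, lr=1e-3))

t = torch.linspace(0, 6.28, 50).view(1, 50, 1)
tgt = 0.02 * torch.sin(t).repeat(8, 1, 6)                     # cyclic strain (target task)
for p in model.rnn.parameters():
    p.requires_grad = False                                   # freeze the recurrent core
print("target loss:", train(model, tgt, fake_response(tgt), epochs=100, lr=1e-3))
```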

This paper shows that a class of codes such as Reed-Muller (RM) codes have vanishing bit-error probability below capacity on symmetric channels. The proof relies on the notion of `camellia codes': a class of symmetric codes decomposable into `camellias', i.e., set systems that differ from sunflowers by allowing for scattered petal overlaps. The proof then follows from a boosting argument on the camellia petals with second moment Fourier analysis. For erasure channels, this gives a self-contained proof of the bit-error result in Kudekar et al.'17, without relying on sharp thresholds for monotone properties (Friedgut-Kalai'96). For error channels, this gives a shortened proof of Reeves-Pfister'23 with an exponentially tighter bound, and a proof variant of the bit-error result in Abbe-Sandon'23. The control of the full (block) error probability still requires Abbe-Sandon'23 for RM codes.

Robust Markov Decision Processes (RMDPs) are a widely used framework for sequential decision-making under parameter uncertainty. RMDPs have been extensively studied when the objective is to maximize the discounted return, but little is known about average optimality (optimizing the long-run average of the rewards obtained over time) and Blackwell optimality (remaining discount optimal for all discount factors sufficiently close to 1). In this paper, we prove several foundational results for RMDPs beyond the discounted return. We show that average optimal policies can be chosen stationary and deterministic for sa-rectangular RMDPs but, perhaps surprisingly, that history-dependent (Markovian) policies strictly outperform stationary policies for average optimality in s-rectangular RMDPs. We also study Blackwell optimality for sa-rectangular RMDPs, where we show that {\em approximate} Blackwell optimal policies always exist, although Blackwell optimal policies may not exist. We also provide a sufficient condition for their existence, which encompasses virtually all examples from the literature. We then discuss the connection between average and Blackwell optimality, and we describe several algorithms to compute the optimal average return. Interestingly, our approach leverages the connections between RMDPs and stochastic games.
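
For context, the sketch below runs standard robust value iteration on a tiny sa-rectangular RMDP with an L1 (total-variation) uncertainty set around a nominal kernel, for a fixed discount factor. The two-state instance and the uncertainty radius are assumptions; the paper's treatment of average-reward and Blackwell optimality goes beyond this discounted computation.

```python
# Minimal sketch of robust value iteration for a discounted sa-rectangular
# RMDP: for each (s, a) the adversary picks the worst kernel in an L1 ball
# around the nominal one. The two-state MDP, rewards and radius are
# assumptions for illustration only.
import numpy as np

n_s, n_a, gamma, delta = 2, 2, 0.95, 0.2
r = np.array([[1.0, 0.0], [0.0, 2.0]])                 # reward r[s, a]
p_nom = np.array([[[0.9, 0.1], [0.2, 0.8]],            # nominal kernel P[s, a, s']
                  [[0.5, 0.5], [0.1, 0.9]]])

def worst_case_value(p_sa, v):
    # inner minimisation over {p : ||p - p_sa||_1 <= delta, p a distribution}:
    # move up to delta/2 of mass from the highest-value states to the lowest one
    p = p_sa.copy()
    lo = np.argmin(v)
    budget = delta / 2.0
    for s_next in np.argsort(v)[::-1]:
        if s_next == lo or budget <= 0:
            continue
        take = min(budget, p[s_next], 1.0 - p[lo])
        p[s_next] -= take
        p[lo] += take
        budget -= take
    return p @ v

v = np.zeros(n_s)
for _ in range(500):
    q = np.array([[r[s, a] + gamma * worst_case_value(p_nom[s, a], v)
                   for a in range(n_a)] for s in range(n_s)])
    v = q.max(axis=1)
print("robust discounted values:", v, "greedy policy:", q.argmax(axis=1))
```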

We discuss probabilistic neural networks with a fixed internal representation as models for machine understanding. Here understanding is intended as mapping data to an already existing representation which encodes an {\em a priori} organisation of the feature space. We derive the internal representation by requiring that it satisfies the principles of maximal relevance and of maximal ignorance about how different features are combined. We show that, when hidden units are binary variables, these two principles identify a unique model -- the Hierarchical Feature Model (HFM) -- which is fully solvable and provides a natural interpretation in terms of features. We argue that learning machines with this architecture enjoy a number of interesting properties, like the continuity of the representation with respect to changes in parameters and data, the possibility to control the level of compression and the ability to support functions that go beyond generalisation. We explore the behaviour of the model with extensive numerical experiments and argue that models where the internal representation is fixed reproduce a learning modality which is qualitatively different from that of traditional models such as Restricted Boltzmann Machines.

We present a novel Finite Volume (FV) scheme on unstructured polygonal meshes that is provably compliant with the Second Law of Thermodynamics and the Geometric Conservation Law (GCL) at the same time. The governing equations are provided by a subset of the class of symmetric and hyperbolic thermodynamically compatible (SHTC) models. Our numerical method discretizes the equations for the conservation of momentum, total energy, distortion tensor and thermal impulse vector, hence accounting in one single unified mathematical formalism for a wide range of physical phenomena in continuum mechanics. By means of two conservative corrections directly embedded in the definition of the numerical fluxes, the new schemes are proven to satisfy two extra conservation laws, namely an entropy balance law and a geometric equation that links the distortion tensor to the density evolution. As such, the classical mass conservation equation can be discarded. Firstly, the GCL is derived at the continuous level, and subsequently it is satisfied by introducing the new concepts of general potential and generalized Gibbs relation. Once compatibility of the GCL is ensured, thermodynamic compatibility is tackled in the same manner, thus achieving the satisfaction of a local cell entropy inequality. The two corrections are orthogonal, meaning that they can coexist simultaneously without interfering with each other. The compatibility of the new FV schemes holds true at the semi-discrete level, and time integration of the governing PDE is carried out relying on Runge-Kutta schemes. A large suite of test cases demonstrates the structure preserving properties of the schemes at the discrete level as well.
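
A much-simplified illustration of the semi-discrete finite-volume plus Runge-Kutta structure mentioned above: cell averages are updated from interface fluxes and the resulting ODE system is advanced with an SSP-RK2 scheme, here for 1-D Burgers with a Rusanov flux. The SHTC system, the polygonal meshes and the entropy/GCL-compatible flux corrections of the paper are not reproduced.

```python
# Minimal sketch of the semi-discrete FV + Runge-Kutta structure: cell
# averages evolve according to interface fluxes, and the ODE system is
# integrated with SSP-RK2. 1-D Burgers with a Rusanov flux is a simplified
# stand-in for the SHTC system treated in the paper.
import numpy as np

nx, L, cfl, t_end = 200, 1.0, 0.4, 0.3
dx = L / nx
x = (np.arange(nx) + 0.5) * dx
u = np.sin(2 * np.pi * x) + 1.5                        # initial cell averages

def rhs(u):
    ul = u
    ur = np.roll(u, -1)                                # periodic right neighbour
    s = np.maximum(np.abs(ul), np.abs(ur))             # Rusanov wave-speed estimate
    flux = 0.5 * (0.5 * ul**2 + 0.5 * ur**2) - 0.5 * s * (ur - ul)
    return -(flux - np.roll(flux, 1)) / dx             # flux difference per cell

t = 0.0
while t < t_end:
    dt = min(cfl * dx / np.max(np.abs(u)), t_end - t)
    u1 = u + dt * rhs(u)                               # SSP-RK2 stage 1
    u = 0.5 * (u + u1 + dt * rhs(u1))                  # SSP-RK2 stage 2
    t += dt
print("first few cell averages at t =", t_end, ":", u[:5])
```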

In this paper we develop a novel neural network model for predicting the implied volatility surface. Prior financial domain knowledge is taken into account. A new activation function that incorporates the volatility smile is proposed and used for the hidden nodes that process the underlying asset price. In addition, financial conditions, such as the absence of arbitrage, the boundaries and the asymptotic slope, are embedded into the loss function. This is one of the very first studies to discuss a methodological framework that incorporates prior financial domain knowledge into neural network architecture design and model training. The proposed model outperforms the benchmark models on S&P 500 index option data spanning 20 years. More importantly, the domain knowledge is satisfied empirically, showing the model is consistent with the existing financial theories and conditions related to the implied volatility surface.
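
A hedged sketch of the general recipe of baking domain knowledge into both the architecture and the loss: a smile-shaped (convex) activation acts on the moneyness input, and a soft penalty discourages calendar arbitrage by pushing total implied variance to be non-decreasing in maturity. The activation form, the penalty and the synthetic quotes are assumptions, not the specific construction used in the paper.

```python
# Minimal sketch of embedding domain knowledge in the architecture and loss:
# a smile-shaped activation on the moneyness input, plus a soft penalty that
# pushes total implied variance to be non-decreasing in maturity (a calendar
# no-arbitrage condition). Both choices are stand-ins, not the paper's.
import torch
import torch.nn as nn

class SmileNet(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.money = nn.Linear(1, hidden)       # processes log-moneyness k
        self.tau = nn.Linear(1, hidden)         # processes time to maturity
        self.out = nn.Linear(hidden, 1)

    def smile_act(self, z):
        return nn.functional.softplus(z) + 0.1 * z ** 2    # convex, smile-like

    def forward(self, k, tau):
        h = self.smile_act(self.money(k)) + torch.tanh(self.tau(tau))
        return nn.functional.softplus(self.out(h))         # implied vol > 0

def loss_fn(model, k, tau, iv_obs, lam=1.0):
    fit = nn.functional.mse_loss(model(k, tau), iv_obs)
    # calendar penalty: total variance w = iv^2 * tau should not decrease in tau
    tau2 = tau + 0.1
    w1, w2 = (model(k, tau) ** 2) * tau, (model(k, tau2) ** 2) * tau2
    return fit + lam * torch.relu(w1 - w2).mean()

model = SmileNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
k = torch.randn(256, 1) * 0.2                              # synthetic quotes
tau = torch.rand(256, 1) + 0.05
iv_obs = 0.2 + 0.3 * k ** 2                                # toy smile surface
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model, k, tau, iv_obs)
    loss.backward()
    opt.step()
print("final loss:", loss.item())
```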

This paper does not describe a working system. Instead, it presents a single idea about representation which allows advances made by several different groups to be combined into an imaginary system called GLOM. The advances include transformers, neural fields, contrastive representation learning, distillation and capsules. GLOM answers the question: How can a neural network with a fixed architecture parse an image into a part-whole hierarchy which has a different structure for each image? The idea is simply to use islands of identical vectors to represent the nodes in the parse tree. If GLOM can be made to work, it should significantly improve the interpretability of the representations produced by transformer-like systems when applied to vision or language.
