
Parent selection plays an important role in evolutionary algorithms, and many strategies exist to select the parent pool before breeding the next generation. Methods often rely on average error over the entire dataset as a criterion to select the parents, which can lead to information loss due to aggregating all test cases. Under epsilon-lexicase selection, the population enters a selection pool that is iteratively reduced by using each test case individually, discarding individuals with an error higher than the elite error plus the median absolute deviation (MAD) of errors for that particular test case. In an attempt to better capture differences in performance of individuals on cases, we propose a new criterion that splits errors into two partitions that minimize the total variance within partitions. Our method was embedded into the FEAT symbolic regression algorithm and evaluated with the SRBench framework, containing 122 black-box synthetic and real-world regression problems. The empirical results show that our approach outperforms traditional epsilon-lexicase selection on the real-world datasets while performing equivalently on the synthetic ones.
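A minimal sketch of the two-partition split criterion, assuming the boundary is found by an exhaustive scan over the sorted per-case errors and using the sum of squared deviations as the within-partition variance term; the function name and discard rule are illustrative, not the paper's implementation:

```python
import numpy as np

def variance_split_threshold(errors):
    """Split a 1-D array of per-case errors into two partitions that
    minimize the total within-partition variance, and return the
    boundary value (individuals with errors above it would be discarded)."""
    e = np.sort(np.asarray(errors, dtype=float))
    best_cost, best_threshold = np.inf, e[-1]
    for k in range(1, len(e)):          # candidate split between e[k-1] and e[k]
        low, high = e[:k], e[k:]
        # total within-partition variance (sum of squared deviations)
        cost = ((low - low.mean()) ** 2).sum() + ((high - high.mean()) ** 2).sum()
        if cost < best_cost:
            best_cost, best_threshold = cost, low[-1]
    return best_threshold

# Example: the three small errors form one partition, the two large ones the other,
# so the returned threshold is 0.03.
print(variance_split_threshold([0.01, 0.02, 0.03, 0.9, 1.1]))
```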

Related content

The notion of P-stability has played an influential role in approximating permanents, in rapidly sampling realizations of graphic degree sequences, and in studying and improving network privacy. However, until now there has been little insight into the structure of P-stable degree sequence families. In this paper we remedy this deficiency. We show that if an infinite set of graphic degree sequences, characterized by some simple inequalities on their fundamental parameters, is P-stable, then it is ``fully graphic'', meaning that every degree sequence with an even sum that satisfies the specified inequalities is graphic. The reverse statement also holds: an infinite, fully graphic set of degree sequences characterized by some simple inequalities on their fundamental parameters is P-stable. Along the way, we significantly strengthen some well-known older results, and we construct new P-stable families of degree sequences.
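As background for the term ``graphic'', a minimal sketch of the classical Erdős--Gallai graphicality test for a single degree sequence (this is standard material, not part of the paper's machinery):

```python
def is_graphic(degrees):
    """Erdős–Gallai test: a sequence of non-negative integers is graphic iff
    its sum is even and, after sorting in non-increasing order, every prefix
    satisfies  sum_{i<=k} d_i <= k(k-1) + sum_{i>k} min(d_i, k)."""
    d = sorted(degrees, reverse=True)
    if sum(d) % 2 != 0:
        return False
    n = len(d)
    for k in range(1, n + 1):
        lhs = sum(d[:k])
        rhs = k * (k - 1) + sum(min(x, k) for x in d[k:])
        if lhs > rhs:
            return False
    return True

print(is_graphic([3, 3, 2, 2, 2]))  # True: a 5-cycle plus one chord realizes it
print(is_graphic([4, 1, 1, 1]))     # False: the degree sum is odd
```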

We investigate a class of combinatory algebras, called ribbon combinatory algebras, in which we can interpret both the braided untyped linear lambda calculus and framed oriented tangles. Any reflexive object in a ribbon category gives rise to a ribbon combinatory algebra. Conversely, from a ribbon combinatory algebra, we can construct a ribbon category with a reflexive object, from which the combinatory algebra can be recovered. To show this, and also to give the equational characterisation of ribbon combinatory algebras, we make use of the internal PRO construction developed in Hasegawa's recent work. Interestingly, we can characterise ribbon combinatory algebras in two different ways: as balanced combinatory algebras with a trace combinator, and as balanced combinatory algebras with duality.

We study three kinetic Langevin samplers: the Euler discretization and the BU and UBU splitting schemes. We provide contraction results in $L^1$-Wasserstein distance for non-convex potentials. These results are based on a carefully tailored distance function and an appropriate coupling construction. Additionally, the error in the $L^1$-Wasserstein distance between the true target measure and the invariant measure of the discretization scheme is bounded. To reach $\varepsilon$-accuracy in $L^1$-Wasserstein distance, we show complexity guarantees of order $\mathcal{O}(\sqrt{d}/\varepsilon)$ for the Euler scheme and $\mathcal{O}(d^{1/4}/\sqrt{\varepsilon})$ for the UBU scheme under appropriate regularity assumptions on the target measure. The results are applicable to interacting particle systems and provide bounds for sampling probability measures of mean-field type.
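A minimal sketch of the Euler-Maruyama discretization of kinetic Langevin dynamics, assuming the standard underdamped formulation $dX = V\,dt$, $dV = -\nabla U(X)\,dt - \gamma V\,dt + \sqrt{2\gamma}\,dB$; the potential, friction and step size below are illustrative, and the BU/UBU splittings are not shown:

```python
import numpy as np

def kinetic_langevin_euler(grad_U, x0, v0, step, gamma, n_steps, rng=None):
    """Euler-Maruyama discretization of kinetic Langevin dynamics.
    Returns the trajectory of positions."""
    rng = rng or np.random.default_rng(0)
    x, v = np.array(x0, float), np.array(v0, float)
    xs = [x.copy()]
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x_new = x + step * v
        v_new = v - step * grad_U(x) - step * gamma * v + np.sqrt(2 * gamma * step) * noise
        x, v = x_new, v_new
        xs.append(x.copy())
    return np.array(xs)

# Example: sample a standard Gaussian target, U(x) = |x|^2 / 2, grad U(x) = x.
traj = kinetic_langevin_euler(lambda x: x, x0=np.zeros(2), v0=np.zeros(2),
                              step=0.05, gamma=1.0, n_steps=10_000)
print(traj[2000:].std(axis=0))  # roughly 1 in each coordinate, up to discretization bias
```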

Maximal regularity is a class of a priori estimates for parabolic-type equations that plays an important role in the theory of nonlinear differential equations. The aim of this paper is to investigate the temporally discrete counterpart of maximal regularity for the discontinuous Galerkin (DG) time-stepping method. We establish such an estimate without a logarithmic factor over a quasi-uniform temporal mesh. To show the main result, we introduce the temporally regularized Green's function and then reduce the discrete maximal regularity to a weighted error estimate for its DG approximation. Our results should be useful for the investigation of DG approximations of nonlinear parabolic problems.

We study the problem of online unweighted bipartite matching with $n$ offline vertices and $n$ online vertices, where one wishes to be competitive against the optimal offline algorithm. While the classic RANKING algorithm of Karp et al. [1990] provably attains a competitive ratio of $1-1/e > 1/2$, we show that no learning-augmented method can be both 1-consistent and strictly better than $1/2$-robust under the adversarial arrival model. Meanwhile, under the random arrival model, we show how one can utilize methods from distribution testing to design an algorithm that takes in external advice about the online vertices and provably achieves a competitive ratio interpolating between any ratio attainable by advice-free methods and the optimal ratio of 1, depending on the advice quality.
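For context, a minimal sketch of the classic RANKING algorithm referenced above (the learning-augmented and advice-taking algorithms of the paper are not shown; names and the tiny instance are illustrative):

```python
import random

def ranking(offline, online_arrivals, rng=random.Random(0)):
    """RANKING of Karp, Vazirani and Vazirani: draw a uniformly random
    priority order over the offline vertices once, then match each arriving
    online vertex to its highest-priority unmatched neighbor (if any)."""
    rank = {u: i for i, u in enumerate(rng.sample(offline, len(offline)))}
    matched, matching = set(), []
    for v, neighbors in online_arrivals:           # online vertices in arrival order
        free = [u for u in neighbors if u not in matched]
        if free:
            u = min(free, key=rank.__getitem__)    # highest priority = smallest rank
            matched.add(u)
            matching.append((u, v))
    return matching

# Tiny example: offline {a, b}; online x (adjacent to a and b) arrives before y (adjacent to a).
print(ranking(["a", "b"], [("x", ["a", "b"]), ("y", ["a"])]))
```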

The Lamport diagram is a pervasive and intuitive tool for informal reasoning about "happens-before" relationships in a concurrent system. However, traditional axiomatic formalizations of Lamport diagrams can be painful to work with in a mechanized setting like Agda. We propose an alternative, inductive formalization -- the causal separation diagram (CSD) -- that takes inspiration from string diagrams and concurrent separation logic, but enjoys a graphical syntax similar to Lamport diagrams. Critically, CSDs are based on the idea that causal relationships between events are witnessed by the paths that information follows between them. To that end, we model happens-before as a dependent type of paths between events. The inductive formulation of CSDs enables their interpretation into a variety of semantic domains. We demonstrate the interpretability of CSDs with a case study on properties of logical clocks, widely-used mechanisms for reifying causal relationships as data. We carry out this study by implementing a series of interpreters for CSDs, culminating in a generic proof of Lamport's clock condition that is parametric in a choice of clock. We instantiate this proof on Lamport's scalar clock, on Mattern's vector clock, and on the matrix clocks of Raynal et al. and of Wuu and Bernstein, yielding verified implementations of each. The CSD formalism and our case study are mechanized in the Agda proof assistant.
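A minimal sketch of Lamport's scalar clock and the clock condition it satisfies (if event $e$ happens-before event $f$, then $C(e) < C(f)$), written in plain Python rather than Agda and independent of the CSD machinery:

```python
from dataclasses import dataclass

@dataclass
class Process:
    """Lamport's scalar clock: increment on every local event and send;
    on receipt, take the max of the local clock and the message timestamp,
    then increment."""
    clock: int = 0

    def local_event(self):
        self.clock += 1
        return self.clock

    def send(self):
        self.clock += 1
        return self.clock            # timestamp carried by the message

    def receive(self, msg_timestamp):
        self.clock = max(self.clock, msg_timestamp) + 1
        return self.clock

# p sends a message to q; the receive event is causally after the send event.
p, q = Process(), Process()
ts_send = p.send()
ts_recv = q.receive(ts_send)
assert ts_send < ts_recv             # the clock condition on this pair of events
```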

Deflation techniques are typically used to shift isolated clusters of small eigenvalues in order to obtain a tighter distribution and a smaller condition number. Such changes have a positive effect on the convergence behavior of Krylov subspace methods, which are among the most popular iterative solvers for large sparse linear systems. We develop a deflation strategy for symmetric saddle point matrices by taking advantage of their underlying block structure. The vectors used for deflation come from an elliptic singular value decomposition relying on the generalized Golub-Kahan bidiagonalization process. The block targeted by deflation is the off-diagonal one, since it features a problematic singular value distribution in certain applications. One example is Stokes flow in elongated channels, where the off-diagonal block has several small, isolated singular values, depending on the length of the channel. Applying deflation to specific parts of the saddle point system is important when using solvers such as CRAIG, which operates on individual blocks rather than on the whole system. The theory is developed by extending the existing framework for deflating square matrices before applying a Krylov subspace method such as MINRES. Numerical experiments confirm the merits of our strategy and lead to interesting questions about using approximate vectors for deflation.
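As a point of reference for the square-matrix framework that the paper extends to saddle point blocks, a minimal sketch of standard projection-based deflation for a symmetric system before a MINRES solve; the deflation space, helper names, and the toy example are ours, not the paper's:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, minres

def deflated_solve(A, b, Z):
    """Solve A x = b with the directions spanned by the columns of Z
    (e.g. eigenvectors of small eigenvalues) projected out before MINRES,
    using the standard projector P = I - A Z E^{-1} Z^T with E = Z^T A Z.
    The final solution is x = Z E^{-1} Z^T b + P^T y, where P A y = P b."""
    E = Z.T @ A @ Z
    Einv_Zt = np.linalg.solve(E, Z.T)
    def P(v):                                   # apply the deflation projector
        return v - A @ (Z @ (Einv_Zt @ v))
    n = len(b)
    op = LinearOperator((n, n), matvec=lambda v: P(A @ v))   # symmetric deflated operator
    y, _ = minres(op, P(b))
    return Z @ (Einv_Zt @ b) + (y - Z @ (Einv_Zt @ (A @ y)))

# Toy example: an SPD matrix with two tiny isolated eigenvalues, deflated exactly.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))
A = Q @ np.diag(np.concatenate(([1e-6, 2e-6], np.linspace(1, 10, 48)))) @ Q.T
b = rng.standard_normal(50)
x = deflated_solve(A, b, Z=Q[:, :2])
print(np.linalg.norm(A @ x - b))                # small residual despite the tiny eigenvalues
```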

In 1990, Jakeman (see \cite{jakeman1990statistics}) defined the binomial process as a special case of the classical birth-death process in which the probability of birth is proportional to the difference between a fixed number and the number of individuals present. Later, Cahoy and Polito (2012) (see \cite{cahoy2012fractional}) studied a fractional generalization of this process, which they called the fractional binomial process (FBP). In this paper, we study second-order properties of the FBP and the long-range behavior of the FBP and its noise process. We also estimate the parameters of the FBP using the method-of-moments procedure. Finally, we present simulated sample paths of the FBP together with the simulation algorithm.
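A minimal Gillespie-style sketch of the classical binomial process described above, in which the birth rate is proportional to the gap between a fixed ceiling and the current population; the rates and parameters are illustrative, and the fractional generalization studied in the paper is not shown:

```python
import numpy as np

def simulate_binomial_process(N, lam, mu, n0, t_max, rng=None):
    """Simulate a birth-death chain with birth rate lam*(N - n) and death
    rate mu*n, i.e. the classical (non-fractional) binomial process."""
    rng = rng or np.random.default_rng(0)
    t, n = 0.0, n0
    times, states = [t], [n]
    while t < t_max:
        birth, death = lam * (N - n), mu * n
        total = birth + death
        if total == 0:
            break
        t += rng.exponential(1.0 / total)          # waiting time to the next event
        n += 1 if rng.random() < birth / total else -1
        times.append(t)
        states.append(n)
    return np.array(times), np.array(states)

times, states = simulate_binomial_process(N=20, lam=0.5, mu=0.3, n0=0, t_max=50.0)
print(states[-1])   # population at the end of the sample path
```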

When modeling a vector of risk variables, extreme scenarios are often of special interest. The peaks-over-thresholds method hinges on the notion that, asymptotically, the excesses over a vector of high thresholds follow a multivariate generalized Pareto distribution. However, the existing literature has primarily concentrated on the setting in which all risk variables are simultaneously large. In reality, this assumption is often not met, especially in high dimensions. In response to this limitation, we study scenarios where distinct groups of risk variables may exhibit joint extremes while others do not. These discernible groups are derived from the angular measure inherent in the corresponding max-stable distribution, whence the term extreme direction. We explore such extreme directions within the framework of multivariate generalized Pareto distributions, with a focus on their probability density functions with respect to an appropriate dominating measure. Furthermore, we provide a stochastic construction that allows any prespecified set of risk groups to constitute the distribution's extreme directions. This construction takes the form of a smoothed max-linear model and accommodates the full spectrum of conceivable max-stable dependence structures. Additionally, we introduce a generic simulation algorithm tailored to multivariate generalized Pareto distributions, offering specific implementations for extensions of the logistic and H\"usler-Reiss families capable of carrying arbitrary extreme directions.
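A minimal sketch of a plain (unsmoothed) max-linear construction, shown only to illustrate how the support pattern of the coefficient matrix fixes which groups of components can be jointly extreme; the smoothing and the logistic / H\"usler-Reiss extensions of the paper are not shown, and the coefficients are illustrative:

```python
import numpy as np

def max_linear_sample(A, size, rng=None):
    """Draw samples from the max-linear model X_j = max_i A[j, i] * Z_i,
    where the Z_i are i.i.d. standard Frechet factors.  Components sharing
    positive coefficients in the same column can be extreme together when
    that factor is large."""
    rng = rng or np.random.default_rng(0)
    d, m = A.shape
    Z = -1.0 / np.log(rng.uniform(size=(size, m)))   # standard Frechet samples
    return np.max(A[None, :, :] * Z[:, None, :], axis=2)

# Two factors: the first drives joint extremes of components {0, 1},
# the second drives extremes of component {2} alone.
A = np.array([[1.0, 0.0],
              [0.7, 0.0],
              [0.0, 1.0]])
X = max_linear_sample(A, size=5)
print(X.shape)  # (5, 3)
```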

The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. The lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inference conditioned on explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
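A minimal sketch of the Shepard generalization term used for comparison, assuming a city-block distance and a free sensitivity parameter; the toy saliency maps and the way the similarity would weight a participant's prediction are illustrative and not the paper's fitted model:

```python
import numpy as np

def shepard_similarity(x, y, sensitivity=1.0):
    """Shepard's universal law of generalization: the tendency to generalize
    from y to x decays exponentially with their distance in a psychological
    similarity space."""
    return np.exp(-sensitivity * np.linalg.norm(np.asarray(x) - np.asarray(y), ord=1))

# Toy use: compare the AI's saliency map with the map a participant would give,
# and turn the distance into a similarity that could weight the participant's
# prediction of the AI's decision.
ai_map = np.array([0.8, 0.1, 0.1])       # hypothetical saliency over 3 image regions
human_map = np.array([0.6, 0.3, 0.1])
print(shepard_similarity(ai_map, human_map, sensitivity=2.0))
```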
