
We compare two different linear dimensionality reduction strategies for the multigroup classification problem: the trace ratio method and Fisher's discriminant analysis. Trace ratio optimization has recently gained popularity due to its computational efficiency and the occasionally better classification results it yields; however, a statistical understanding of the method is still incomplete. We study and compare the properties of the two methods. We then propose a robust version of the trace ratio method to handle the presence of outliers in the data, and we reinterpret an asymptotic perturbation bound for the trace ratio solution in a contamination setting. Finally, we compare the performance of the trace ratio method and Fisher's discriminant analysis on both synthetic and real datasets, using classical and robust estimators.
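
As a rough illustration (not taken from the paper, and not the robust variant proposed here), the following NumPy sketch implements the classical iterative trace ratio algorithm on between-class and within-class scatter matrices; the function names and convergence settings are illustrative assumptions.

```python
import numpy as np

def scatter_matrices(X, y):
    """Between-class (Sb) and within-class (Sw) scatter matrices."""
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mu, mc - mu)
        Sw += (Xc - mc).T @ (Xc - mc)
    return Sb, Sw

def trace_ratio(Sb, Sw, p, n_iter=50, tol=1e-10):
    """Maximise tr(W' Sb W) / tr(W' Sw W) over orthonormal W with p columns."""
    lam = 0.0
    for _ in range(n_iter):
        # Take the eigenvectors of Sb - lam * Sw for the p largest eigenvalues
        vals, vecs = np.linalg.eigh(Sb - lam * Sw)
        W = vecs[:, -p:]
        new_lam = np.trace(W.T @ Sb @ W) / np.trace(W.T @ Sw @ W)
        if abs(new_lam - lam) < tol:
            break
        lam = new_lam
    return W, lam
```

For comparison, Fisher's discriminant analysis solves the generalized eigenproblem $S_b w = \lambda S_w w$ instead; for more than one projection direction the two solutions generally differ.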


We provide a new sequent calculus that enjoys syntactic cut-elimination and strongly terminating backward proof search for the intuitionistic Strong L\"ob logic $\sf{iSL}$, an intuitionistic modal logic with a provability interpretation. A novel measure on sequents is used to prove both the termination of the naive backward proof search strategy and the admissibility of cut in a syntactic and direct way, leading to a straightforward cut-elimination procedure. All proofs have been formalised in the interactive theorem prover Coq.

Deep neural networks have shown remarkable performance when trained on independent and identically distributed data from a fixed set of classes. However, in real-world scenarios, it can be desirable to train models on a continuous stream of data where multiple classification tasks are presented sequentially. This scenario, known as Continual Learning (CL), poses challenges to standard learning algorithms, which struggle to maintain knowledge of old tasks while learning new ones. This stability-plasticity dilemma remains central to CL, and multiple metrics have been proposed to measure stability and plasticity separately. However, none considers the increasing difficulty of the classification task, which inherently results in performance loss for any model. With this in mind, we analyze some limitations of current metrics and identify the presence of setup-induced forgetting. We therefore propose new metrics that account for the task's increasing difficulty. Through experiments on benchmark datasets, we demonstrate that our proposed metrics can provide new insights into the stability-plasticity trade-off achieved by models in the continual learning environment.
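
For context, the conventional stability and plasticity metrics that the abstract contrasts with can be computed from an accuracy matrix as in the sketch below; this shows the standard average-accuracy and forgetting measures, not the difficulty-adjusted metrics proposed in the paper, and the function names are illustrative.

```python
import numpy as np

def average_accuracy(R):
    """R[i, j]: accuracy on task j after training on tasks 0..i. Final average accuracy."""
    T = R.shape[0]
    return float(R[T - 1, :].mean())

def average_forgetting(R):
    """Average drop from the best accuracy ever reached on each old task to its final accuracy."""
    T = R.shape[0]
    drops = [R[:T - 1, j].max() - R[T - 1, j] for j in range(T - 1)]
    return float(np.mean(drops))
```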

One of the central quantities of probabilistic seismic risk assessment studies is the fragility curve, which represents the probability of failure of a mechanical structure conditional on a scalar measure derived from the seismic ground motion. Estimating such curves is a difficult task because, for most structures of interest, few data are available. For this reason, a wide range of methods in the literature rely on a parametric log-normal model. Bayesian approaches allow for efficient learning of the model parameters. However, the choice of the prior distribution has a non-negligible influence on the posterior distribution, and therefore on any resulting estimate. We propose a thorough study of this parametric Bayesian estimation problem when the data are binary (i.e. the data indicate the state of the structure, failure or non-failure). Using reference prior theory as a foundation, we suggest an objective approach to the prior choice. This approach leads to the Jeffreys prior, which is explicitly derived for this problem for the first time. The posterior distribution is proven to be proper (i.e. it integrates to unity) with the Jeffreys prior and improper with some classical priors from the literature. The posterior distribution with the Jeffreys prior is also shown to vanish at the boundaries of the parameter domain, so sampling the posterior distribution of the parameters does not produce anomalously small or large values and, in turn, does not produce degenerate fragility curves such as unit step functions. The numerical results on three different case studies illustrate these theoretical predictions.
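
For reference, the standard parametric log-normal fragility model and the binary-data likelihood underlying such a Bayesian analysis take the form below; the notation ($\alpha$ for the median capacity, $\beta$ for the log standard deviation) is the usual convention and is stated here as an assumption rather than taken from the paper.
\[
P_f(a) \;=\; \mathbb{P}(\text{failure}\mid \mathrm{IM}=a) \;=\; \Phi\!\left(\frac{\ln a - \ln\alpha}{\beta}\right),
\qquad
L(\alpha,\beta) \;=\; \prod_{i=1}^{n} P_f(a_i)^{z_i}\,\bigl(1-P_f(a_i)\bigr)^{1-z_i},
\]
where $\Phi$ is the standard normal cdf and $z_i\in\{0,1\}$ indicates failure of the structure under the $i$-th ground motion with intensity measure $a_i$.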

We introduce new control-volume finite-element discretization schemes suitable for solving the Stokes problem. Within a common framework, we present different approaches for constructing such schemes. The first and most established strategy employs a non-overlapping partitioning into control volumes. The second is a new idea based on a splitting into two sets of control volumes: the first set yields a partition of the domain, while the second contains the remaining, overlapping control volumes required for stability. The third is a hybrid approach in which finite volumes are combined with finite elements based on a hierarchical splitting of the ansatz space. All approaches are based on typical finite element function spaces but yield locally mass and momentum conservative discretization schemes that can be interpreted as finite volume schemes. We apply all strategies to the inf-sup stable MINI finite-element pair. Various test cases, including convergence tests and the numerical observation of the boundedness of the number of preconditioned Krylov solver iterations, as well as more complex scenarios of flow around obstacles or through a three-dimensional vessel bifurcation, demonstrate the stability and robustness of the schemes.
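
As background (a sketch of the setting, not of the schemes themselves), the Stokes problem and the discrete inf-sup condition that a stable velocity-pressure pair such as the MINI element must satisfy read
\[
-\mu\Delta \mathbf{u} + \nabla p = \mathbf{f}, \qquad \nabla\cdot\mathbf{u} = 0 \quad \text{in } \Omega,
\]
\[
\inf_{q_h\in Q_h}\;\sup_{\mathbf{v}_h\in V_h}\;\frac{(q_h,\nabla\cdot\mathbf{v}_h)}{\|q_h\|_{0}\,\|\mathbf{v}_h\|_{1}} \;\ge\; \beta \;>\; 0,
\]
with $V_h$ and $Q_h$ the discrete velocity and pressure spaces and $\beta$ independent of the mesh size.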

Hawkes processes are often applied to model dependence and interaction phenomena in multivariate event data sets, such as neuronal spike trains, social interactions, and financial transactions. In the nonparametric setting, learning the temporal dependence structure of Hawkes processes is generally a computationally expensive task, all the more so with Bayesian estimation methods. In particular, for generalised nonlinear Hawkes processes, Markov chain Monte Carlo methods applied to compute the doubly intractable posterior distribution are not scalable to high-dimensional processes in practice. Recently, efficient algorithms targeting a mean-field variational approximation of the posterior distribution have been proposed. In this work, we first unify existing variational Bayes approaches under a general nonparametric inference framework, and analyse the asymptotic properties of these methods under easily verifiable conditions on the prior, the variational class, and the nonlinear model. Secondly, we propose a novel sparsity-inducing procedure, and derive an adaptive mean-field variational algorithm for the popular sigmoid Hawkes processes. Our algorithm is parallelisable and therefore computationally efficient in high-dimensional settings. Through an extensive set of numerical simulations, we also demonstrate that our procedure is able to adapt to the dimensionality of the parameter of the Hawkes process, and is partially robust to some types of model misspecification.
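
For orientation, the generic nonlinear multivariate Hawkes intensity with a sigmoid link, of the kind referred to above, can be written as below; the symbols ($\nu_k$ for the background rate, $h_{kl}$ for the interaction functions, $\Lambda_k$ for the scale) follow common usage and are assumptions here, not the paper's exact notation.
\[
\lambda_k(t) \;=\; \phi_k\!\Bigl(\nu_k + \sum_{l=1}^{K}\int_0^{t^-} h_{kl}(t-s)\,\mathrm{d}N_l(s)\Bigr),
\qquad
\phi_k(x) \;=\; \frac{\Lambda_k}{1+e^{-x}},
\]
where $N_1,\dots,N_K$ are the component counting processes and $\phi_k$ is the (bounded) sigmoid link function.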

We define some Schnyder-type combinatorial structures on a class of planar triangulations of the pentagon which are closely related to 5-connected triangulations. The combinatorial structures have three incarnations defined in terms of orientations, corner-labelings, and woods respectively. The wood incarnation consists of five spanning trees crossing each other in an orderly fashion. As with Schnyder woods on triangulations, it induces, for each vertex, a partition of the inner triangles into face-connected regions (five regions here). We show that the induced barycentric vertex-placement, where each vertex is at the barycenter of the five outer vertices with weights given by the number of faces in each region, yields a planar straight-line drawing.
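
In formula form, the barycentric placement described above can be written as follows (the notation $R_i(v)$ and $a_i$ is introduced here for illustration and is not necessarily the paper's):
\[
\mathrm{pos}(v) \;=\; \sum_{i=1}^{5} \frac{|R_i(v)|}{F}\, a_i ,
\]
where $a_1,\dots,a_5$ are the positions of the five outer vertices, $R_i(v)$ is the face-connected region of inner triangles associated with $a_i$ for vertex $v$, and $F=\sum_{i=1}^{5}|R_i(v)|$ is the total number of inner triangles.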

This paper presents the error analysis of numerical methods on graded meshes for stochastic Volterra equations with weakly singular kernels. We first prove a novel regularity estimate for the exact solution by analyzing the associated convolution structure. This reveals that the exact solution exhibits an initial singularity in the sense that its H\"older continuity exponent on any neighborhood of $t=0$ is lower than that on every compact subset of $(0,T]$. Motivated by the initial singularity, we then construct the Euler--Maruyama method, fast Euler--Maruyama method, and Milstein method based on graded meshes. By establishing their pointwise-in-time error estimates, we give the grading exponents of the meshes that attain the optimal uniform-in-time convergence orders, which improve those of the uniform mesh case. Numerical experiments are finally reported to confirm the sharpness of the theoretical findings.
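
As a generic illustration of the setting (the specific kernels, exponents, and scheme details here are assumptions rather than the paper's exact formulation), a weakly singular stochastic Volterra equation, a graded mesh, and the left-point Euler--Maruyama scheme on it can be written as
\[
X(t) = X_0 + \int_0^t (t-s)^{\alpha-1} b(X(s))\,\mathrm{d}s + \int_0^t (t-s)^{\beta-1}\sigma(X(s))\,\mathrm{d}W(s),
\qquad
t_n = T\Bigl(\tfrac{n}{N}\Bigr)^{r},\quad n=0,\dots,N,
\]
\[
X_{n+1} = X_0 + \sum_{j=0}^{n} (t_{n+1}-t_j)^{\alpha-1}\, b(X_j)\,(t_{j+1}-t_j)
        + \sum_{j=0}^{n} (t_{n+1}-t_j)^{\beta-1}\, \sigma(X_j)\,\bigl(W(t_{j+1})-W(t_j)\bigr),
\]
where $\alpha\in(0,1)$, $\beta\in(\tfrac12,1)$, and $r\ge 1$ is the grading exponent that concentrates mesh points near the initial singularity at $t=0$.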

Recently, several algorithms have been proposed for decomposing reactive synthesis specifications into independent and simpler sub-specifications. Inspired by the approach of Antonio Iannopollo (2018), who designed the so-called DC algorithm, we present a solution that takes his ideas further and provides a mathematical formalisation of the strategy behind DC. We rigorously define the main notions involved in the algorithm, explain the technique, and demonstrate its application on examples. The core technique of DC is the detection of independent variables in linear temporal logic formulae, exploiting the power and efficiency of a model checker. Although the DC algorithm is sound, it is not complete, as its author already pointed out. In this paper, we provide a counterexample that shows this fact and propose relevant changes to the original DC strategy to ensure its correctness. The modification of DC and the detailed proof of its soundness and completeness are the main contributions of this work.
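
A toy example of the kind of variable-based decomposition targeted here (the variables $r_1,g_1,r_2,g_2$ are purely illustrative and not taken from the paper):
\[
\varphi \;=\; \underbrace{\mathbf{G}(r_1 \rightarrow \mathbf{F}\, g_1)}_{\varphi_1}\;\wedge\;\underbrace{\mathbf{G}(r_2 \rightarrow \mathbf{F}\, g_2)}_{\varphi_2},
\]
where $\varphi_1$ and $\varphi_2$ share no variables, so the synthesis problem for $\varphi$ (with inputs $r_i$ and outputs $g_i$) can be solved by synthesizing the two smaller sub-specifications independently.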

We present a multigrid algorithm to efficiently solve the large saddle-point systems of equations that typically arise in PDE-constrained optimization under uncertainty. The algorithm is based on a collective smoother that at each iteration sweeps over the nodes of the computational mesh and solves a reduced saddle-point system whose size depends on the number $N$ of samples used to discretize the probability space. We show that this reduced system can be solved with optimal $O(N)$ complexity. We test the multigrid method on three problems: a linear-quadratic problem, for which the multigrid method is used to solve the linear optimality system directly; a nonsmooth problem with box constraints and $L^1$-norm penalization on the control, in which the multigrid scheme is used within a semismooth Newton iteration; and a risk-averse problem with the smoothed CVaR risk measure, where the multigrid method is called within a preconditioned Newton iteration. In all cases, the multigrid algorithm exhibits very good performance and robustness with respect to all parameters of interest.
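
For reference, the CVaR risk measure in its Rockafellar--Uryasev form, together with one common smoothing of the positive part, is shown below; the smoothing parameter $\varepsilon$ and the particular smoothing formula are illustrative assumptions, not necessarily those used in the paper.
\[
\mathrm{CVaR}_{\beta}(X) \;=\; \min_{t\in\mathbb{R}}\Bigl\{\, t + \tfrac{1}{1-\beta}\,\mathbb{E}\bigl[(X-t)_+\bigr] \Bigr\},
\qquad
(x)_+ \;\approx\; \tfrac{1}{2}\bigl(x + \sqrt{x^2+\varepsilon^2}\,\bigr),
\]
where $\beta\in(0,1)$ is the confidence level and the smoothing makes the resulting objective amenable to Newton-type methods.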

Permutation tests are widely used for statistical hypothesis testing when the sampling distribution of the test statistic under the null hypothesis is analytically intractable or unreliable due to finite sample sizes. One critical challenge in the application of permutation tests in genomic studies is that an enormous number of permutations is often needed to obtain reliable estimates of very small $p$-values, leading to intensive computational effort. To address this issue, we develop algorithms for the accurate and efficient estimation of small $p$-values in permutation tests for paired and independent two-group genomic data. Our approaches leverage a novel framework that parameterizes the permutation sample spaces of these two types of data using the Bernoulli and conditional Bernoulli distributions, respectively, combined with the cross-entropy method. The performance of the proposed algorithms is demonstrated through application to two simulated datasets and two real-world gene expression datasets generated by microarray and RNA-Seq technologies, with comparisons to existing methods such as crude permutations and SAMC; the results show that our approaches achieve orders-of-magnitude gains in computational efficiency when estimating small $p$-values. Our approaches offer promising solutions for improving the computational efficiency of existing permutation test procedures and for developing new permutation-based testing methods in genomic data analysis.
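
As a baseline illustration (this is the crude Monte Carlo estimator the paper improves upon, not the proposed cross-entropy method), the sketch below estimates a paired two-group permutation $p$-value: for paired data, a permutation amounts to independently flipping the sign of each paired difference, which is the Bernoulli parameterisation of the permutation space with fair coins. The function name and default arguments are illustrative.

```python
import numpy as np

def paired_permutation_pvalue(x, y, n_perm=100_000, seed=None):
    """Crude two-sided permutation p-value for paired samples x and y.

    Each permutation flips the sign of every paired difference independently
    (a Bernoulli(1/2) draw per pair), enumerating the paired permutation space.
    """
    rng = np.random.default_rng(seed)
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    t_obs = abs(d.mean())                                     # observed test statistic
    signs = rng.choice([-1.0, 1.0], size=(n_perm, d.size))    # Bernoulli(1/2) sign flips
    t_perm = np.abs((signs * d).mean(axis=1))                 # permuted statistics
    return (1 + np.sum(t_perm >= t_obs)) / (n_perm + 1)       # add-one Monte Carlo estimate
```

For a true $p$-value of, say, $10^{-8}$, this crude estimator would require on the order of $10^{9}$ permutations for a usable estimate, which is the computational bottleneck that the cross-entropy-based importance tilting is designed to remove.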
