
Multi-fidelity models are of great importance due to their capability of fusing information coming from different numerical simulations, surrogates, and sensors. We focus on the approximation of high-dimensional scalar functions with low intrinsic dimensionality. By introducing a low-dimensional bias we can fight the curse of dimensionality affecting these quantities of interest, especially in many-query applications. We seek a gradient-based reduction of the parameter space through linear active subspaces or a nonlinear transformation of the input space. We then build a low-fidelity response surface based on this reduction, thus enabling nonlinear autoregressive multi-fidelity Gaussian process regression without the need to run new simulations with simplified physical models. This has great potential in the data-scarcity regime affecting many engineering applications. In this work we present a new multi-fidelity approach that involves active subspaces and the nonlinear level-set learning method, starting from the preliminary analysis previously conducted in Romor et al. 2020. The proposed framework is tested on two high-dimensional benchmark functions and on a more complex car aerodynamics problem. We show how a low intrinsic-dimensionality bias can increase the accuracy of Gaussian process response surfaces.
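As a rough illustration of the pipeline sketched above, the following snippet builds a one-dimensional active subspace from the eigendecomposition of the gradient outer-product matrix and fits a Gaussian process response surface on the reduced coordinate; the toy quantity of interest, its analytic gradient, and the scikit-learn kernel choices are illustrative assumptions, not the benchmarks or the multi-fidelity scheme used in the paper.

```python
# Minimal sketch: linear active-subspace reduction followed by a GP response
# surface on the reduced inputs. `f` and `grad_f` are illustrative placeholders,
# not the benchmarks used in the paper.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
dim, n_samples = 10, 300

# Toy quantity of interest with low intrinsic dimensionality.
w = rng.standard_normal(dim)
f = lambda X: np.sin(X @ w)
grad_f = lambda X: np.cos(X @ w)[:, None] * w           # analytic gradients

X = rng.uniform(-1.0, 1.0, size=(n_samples, dim))
y = f(X)

# Active subspace: eigendecomposition of the empirical gradient covariance.
G = grad_f(X)
C = G.T @ G / n_samples
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]
W1 = eigvecs[:, order[:1]]                               # 1-D active subspace

# Low-fidelity response surface on the reduced coordinate.
Z = X @ W1
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(Z, y)

X_test = rng.uniform(-1.0, 1.0, size=(100, dim))
print("test RMSE:", np.sqrt(np.mean((gp.predict(X_test @ W1) - f(X_test))**2)))
```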

Related content

We present a multigrid algorithm to efficiently solve the large saddle-point systems of equations that typically arise in PDE-constrained optimization under uncertainty. The algorithm is based on a collective smoother that at each iteration sweeps over the nodes of the computational mesh and solves a reduced saddle-point system whose size depends on the number $N$ of samples used to discretize the probability space. We show that this reduced system can be solved with optimal $O(N)$ complexity. We test the multigrid method on three problems: a linear-quadratic problem, for which the multigrid method is used to solve the linear optimality system directly; a nonsmooth problem with box constraints and $L^1$-norm penalization on the control, in which the multigrid scheme is used within a semismooth Newton iteration; and a risk-averse problem with the smoothed CVaR risk measure, where the multigrid method is called within a preconditioned Newton iteration. In all cases, the multigrid algorithm exhibits very good performance and robustness with respect to all parameters of interest.
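To make the $O(N)$ claim concrete, the sketch below solves an illustrative node-local reduced saddle-point system in which $N$ sample-wise state and adjoint unknowns are coupled only through a single shared control; the scalar state equations are a stand-in for the paper's discretized operators, and the elimination mirrors a Schur-complement-type reduction rather than the authors' exact smoother.

```python
# Illustrative O(N) solve of a reduced saddle-point system of the kind arising
# in a collective smoother: N sample-wise state/adjoint unknowns coupled only
# through a single shared control. The scalar "state equations" a_i y_i = u
# stand in for the node-local discretized PDE, not the paper's operators.
import numpy as np

rng = np.random.default_rng(1)
N, beta = 200, 1e-2                # number of samples, control regularization
a = rng.uniform(0.5, 2.0, N)       # sample-wise scalar "state operators"
d = rng.standard_normal(N)         # sample-wise data

# Optimality conditions of: min (1/2N) sum_i (y_i - d_i)^2 + (beta/2) u^2
# subject to a_i y_i = u for every sample i. Eliminating states y and adjoints
# p leaves a scalar equation for the shared control u; each step costs O(N).
u = np.sum(d / (N * a)) / (beta + np.sum(1.0 / (N * a**2)))
p = (a * d - u) / (N * a**2)
y = d - N * a * p

# Cross-check against a dense solve of the full (2N+1) x (2N+1) KKT system.
K = np.zeros((2 * N + 1, 2 * N + 1))
rhs = np.zeros(2 * N + 1)
K[:N, :N] = np.eye(N) / N;   K[:N, N + 1:] = np.diag(a);   rhs[:N] = d / N
K[N, N] = beta;              K[N, N + 1:] = -1.0
K[N + 1:, :N] = np.diag(a);  K[N + 1:, N] = -1.0
sol = np.linalg.solve(K, rhs)
print(np.allclose(sol[:N], y), np.isclose(sol[N], u), np.allclose(sol[N + 1:], p))
```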

Biological sequencing data consist of read counts, e.g. of specified taxa, and often exhibit sparsity (zero-count inflation) and overdispersion (extra-Poisson variability). As most sequencing techniques provide an arbitrary total count, taxon-specific counts should ideally be treated as proportions under the compositional data-analytic framework. There is increasing interest in the role of the gut microbiome composition in mediating the effects of different exposures on health outcomes. Most previous approaches to compositional mediation have addressed the problem of identifying potentially mediating taxa among a large number of candidates. We here consider causal inference in compositional mediation when a priori knowledge is available about the hierarchy for a restricted number of taxa, building on a single hypothesis structured in terms of contrasts between appropriate sub-compositions. Based on the theory of multiple contemporaneous mediators and the assumed causal graph, we define non-parametric estimands for overall and coordinate-wise mediation effects, and show how these indirect effects can be estimated from empirical data using simple parametric linear models. The mediators have straightforward and coherent interpretations, related to specific causal questions about the interrelationships between the sub-compositions. We perform a simulation study focusing on the impact of sparsity and overdispersion on the estimation of mediation effects. While the estimators are unbiased, their precision depends, for any given magnitude of indirect effect, on sparsity and the relative magnitudes of exposure-to-mediator and mediator-to-outcome effects in a complex manner. We demonstrate the approach on empirical data, finding an inverse association of fibre intake with insulin level, mainly attributable to direct rather than indirect effects.
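A minimal sketch of the kind of parametric estimation described above is given below: the mediator is a log-ratio contrast between two simulated sub-compositions of counts, and the indirect effect is obtained from the product of coefficients of two ordinary linear models fitted with statsmodels; the data-generating process, variable names, and effect sizes are invented for illustration and do not correspond to the paper's estimands or empirical data.

```python
# Minimal sketch of mediation through a compositional contrast: the mediator is
# a log-ratio between two sub-compositions of taxon counts, and the indirect
# effect is estimated with the product-of-coefficients from two linear models.
# The data-generating process and variable names are illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500
exposure = rng.binomial(1, 0.5, n).astype(float)        # e.g. high-fibre diet

# Simulated counts for two sub-compositions (taxa groups A and B).
lam_A = np.exp(1.0 + 0.5 * exposure + 0.3 * rng.standard_normal(n))
lam_B = np.exp(1.0 - 0.2 * exposure + 0.3 * rng.standard_normal(n))
counts_A = rng.poisson(50 * lam_A) + 1                  # +1 avoids log(0)
counts_B = rng.poisson(50 * lam_B) + 1

# Mediator: contrast between the two sub-compositions (a log-ratio).
M = np.log(counts_A) - np.log(counts_B)
outcome = 0.2 * exposure - 0.4 * M + rng.standard_normal(n)

# Exposure -> mediator model.
fit_m = sm.OLS(M, sm.add_constant(exposure)).fit()
a_hat = fit_m.params[1]

# Mediator -> outcome model, adjusting for exposure.
X = sm.add_constant(np.column_stack([exposure, M]))
fit_y = sm.OLS(outcome, X).fit()
b_hat, direct = fit_y.params[2], fit_y.params[1]

print("indirect (a*b):", a_hat * b_hat, "direct:", direct)
```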

This work investigates multiple testing by considering minimax separation rates in the sparse sequence model, when the testing risk is measured as the sum FDR+FNR (False Discovery Rate plus False Negative Rate). First, using the popular beta-min separation condition, with all nonzero signals separated from $0$ by at least some amount, we determine the sharp minimax testing risk asymptotically and thereby explicitly describe the transition from "achievable multiple testing with vanishing risk" to "impossible multiple testing". Adaptive multiple testing procedures achieving the corresponding optimal boundary are provided: the Benjamini--Hochberg procedure with a properly tuned level, and an empirical Bayes $\ell$-value ('local FDR') procedure. We prove that the FDR and FNR make non-symmetric contributions to the testing risk for most optimal procedures, with the FNR part being dominant at the boundary. The multiple testing hardness is then investigated for classes of arbitrary sparse signals. A number of extensions, including results for classification losses and convergence rates in the case of large signals, are also presented.
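The snippet below illustrates the risk notion used above by computing the empirical FDR+FNR of the Benjamini--Hochberg procedure on a simulated sparse Gaussian sequence; the sparsity, signal size, and nominal level are toy choices and are not tuned to the optimal boundary derived in the paper.

```python
# Minimal sketch: empirical FDR + FNR of the Benjamini-Hochberg procedure on a
# sparse Gaussian sequence model. Signal size and sparsity are toy values, not
# the calibrated beta-min boundary studied in the paper.
import numpy as np
from scipy.stats import norm

def benjamini_hochberg(pvals, level):
    """Return a boolean rejection mask for the BH step-up rule at `level`."""
    n = len(pvals)
    order = np.argsort(pvals)
    thresh = level * np.arange(1, n + 1) / n
    below = pvals[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(n, dtype=bool)
    reject[order[:k]] = True
    return reject

rng = np.random.default_rng(3)
n, s, mu, level = 10_000, 100, 4.0, 0.1
risks = []
for _ in range(50):
    signal = np.zeros(n, dtype=bool)
    signal[:s] = True
    x = mu * signal + rng.standard_normal(n)
    pvals = 2 * norm.sf(np.abs(x))                 # two-sided p-values
    reject = benjamini_hochberg(pvals, level)
    fdp = np.sum(reject & ~signal) / max(np.sum(reject), 1)
    fnp = np.sum(~reject & signal) / max(s, 1)     # missed signals / #signals
    risks.append(fdp + fnp)
print("empirical FDR+FNR risk:", np.mean(risks))
```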

One of the fundamental problems in shape analysis is to align curves or surfaces before computing geodesic distances between their shapes. Finding the optimal reparametrization realizing this alignment is a computationally demanding task, typically done by solving an optimization problem on the diffeomorphism group. In this paper, we propose an algorithm for constructing approximations of orientation-preserving diffeomorphisms by composition of elementary diffeomorphisms. The algorithm is implemented using PyTorch, and is applicable to both unparametrized curves and surfaces. Moreover, we show universal approximation properties for the constructed architectures, and obtain bounds on the Lipschitz constants of the resulting diffeomorphisms.
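Since the abstract mentions a PyTorch implementation, the following sketch shows one simple way to compose elementary orientation-preserving diffeomorphisms of $[0,1]$ and fit the composition to a target reparametrization by gradient descent; the sinusoidal perturbation family, the depth, and the target map are assumptions made for illustration, not the architecture proposed in the paper.

```python
# Minimal sketch of approximating an orientation-preserving diffeomorphism of
# [0, 1] by composing elementary maps x -> x + a*sin(pi*k*x)/(pi*k). Each factor
# fixes the endpoints and is strictly increasing whenever |a| < 1, so the
# composition is again an orientation-preserving diffeomorphism. The choice of
# basis and depth is illustrative, not the architecture used in the paper.
import math
import torch

class ElementaryDiffeo(torch.nn.Module):
    """x -> x + a*sin(pi*k*x)/(pi*k): fixes 0 and 1, increasing for |a| < 1."""
    def __init__(self, frequency: int):
        super().__init__()
        self.k = frequency
        self.raw_a = torch.nn.Parameter(torch.zeros(1))

    def forward(self, x):
        a = 0.99 * torch.tanh(self.raw_a)          # keeps |a| < 1
        return x + a * torch.sin(math.pi * self.k * x) / (math.pi * self.k)

class ComposedDiffeo(torch.nn.Module):
    """Composition of elementary diffeomorphisms of [0, 1]."""
    def __init__(self, n_layers: int = 8):
        super().__init__()
        self.layers = torch.nn.ModuleList(
            ElementaryDiffeo(k) for k in range(1, n_layers + 1))

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

# Fit the composition to a toy target reparametrization of [0, 1].
target = lambda x: torch.expm1(2.0 * x) / math.expm1(2.0)
model = ComposedDiffeo()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
x = torch.linspace(0.0, 1.0, 200).unsqueeze(1)
for _ in range(500):
    opt.zero_grad()
    loss = torch.mean((model(x) - target(x)) ** 2)
    loss.backward()
    opt.step()
print("final mean-squared fitting error:", loss.item())
```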

Context: Software model optimization is a process that automatically generates design alternatives, typically to enhance quantifiable non-functional properties of software systems, such as performance and reliability. Multi-objective evolutionary algorithms have been shown to be effective in this context for assisting the designer in identifying trade-offs between the desired non-functional properties. Objective: In this work, we investigate the effects of imposing a time budget to limit the search for design alternatives, which inevitably affects the quality of the resulting alternatives. Method: The effects of time budgets are analyzed by investigating both the quality of the generated design alternatives and their structural features when varying the budget and the genetic algorithm (NSGA-II, PESA2, SPEA2). This is achieved by employing multi-objective quality indicators and a tree-based representation of the search space. Results: The study reveals that the time budget significantly affects the quality of Pareto fronts, especially for performance and reliability. NSGA-II is the fastest algorithm, while PESA2 generates the highest-quality solutions. The imposition of a time budget results in structurally distinct models compared to those obtained without a budget, indicating that the search process is influenced by both the budget and the algorithm selection. Conclusions: In software model optimization, imposing a time budget can be effective in saving optimization time, but designers should carefully consider the trade-off between time and solution quality in the Pareto front, along with the structural characteristics of the generated models. By making informed choices about the specific genetic algorithm, designers can achieve different trade-offs.
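As a toy illustration of how a wall-clock budget truncates a multi-objective search, the sketch below samples candidate solutions of an arbitrary bi-objective function until the budget expires and maintains an archive of non-dominated points; it stands in for the budgeted NSGA-II, PESA2, and SPEA2 runs of the study and does not model the software-architecture search space.

```python
# Toy illustration of a time-budgeted multi-objective search: candidates are
# sampled until the wall-clock budget expires and an archive of non-dominated
# points is kept. The bi-objective test function is arbitrary; this is not the
# study's NSGA-II / PESA2 / SPEA2 setup.
import time
import numpy as np

def objectives(x):
    """A simple bi-objective test function (both objectives to be minimized)."""
    return np.array([np.sum(x ** 2), np.sum((x - 1.0) ** 2)])

rng = np.random.default_rng(4)
budget_seconds = 1.0           # the imposed time budget
archive, evaluations = [], 0   # archive of non-dominated objective vectors
start = time.perf_counter()
while time.perf_counter() - start < budget_seconds:
    f = objectives(rng.uniform(-1.0, 2.0, size=5))
    # Drop archive members dominated by the new point, then add it if it is
    # itself non-dominated (a PESA2-style external archive).
    archive = [g for g in archive if not (np.all(f <= g) and np.any(f < g))]
    if not any(np.all(g <= f) and np.any(g < f) for g in archive):
        archive.append(f)
    evaluations += 1

print(f"{evaluations} evaluations within the {budget_seconds:.1f}s budget, "
      f"archive size {len(archive)}")
```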

Introduction: Oblique Target-rotation in the context of exploratory factor analysis is a relevant method for the investigation of the oblique independent clusters model. It has been argued that minimizing single cross-loadings by means of target rotation may lead to large effects of sampling error on the target-rotated factor solutions. Method: In order to minimize the effects of sampling error on the results of Target-rotation, we propose to compute the mean cross-loadings for each block of salient loadings of the independent clusters model and to perform target rotation for the block-wise mean cross-loadings. The resulting transformation matrix is then applied to the complete unrotated loading matrix in order to produce mean Target-rotated factors. Results: A simulation study based on correlated independent factor models revealed that mean oblique Target-rotation resulted in smaller negative bias of factor inter-correlations than conventional Target-rotation based on single loadings, especially when the sample size was small and the number of factors was large. An empirical example revealed that the similarity of Target-rotated factors computed for small subsamples to the Target-rotated factors of the total sample was more pronounced for mean Target-rotation than for conventional Target-rotation. Discussion: Mean Target-rotation can be recommended in the context of oblique independent factor models, especially for small samples. An R-script and an SPSS-script for this form of Target-rotation are provided in the Appendix.
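The following heavily simplified sketch conveys the block-wise averaging idea: block means of the unrotated loadings are mapped onto a target pattern by a plain unweighted least-squares fit (instead of the full oblique Target-rotation criterion), and the resulting transformation matrix is applied to the complete loading matrix; the loading pattern, target, and fitting criterion are illustrative simplifications, not the procedure implemented in the R and SPSS scripts.

```python
# Heavily simplified sketch of block-wise "mean Target-rotation": average the
# unrotated loadings within each block of salient variables, fit a
# transformation matrix to a target for those block means (here by plain
# least squares rather than the oblique Target-rotation criterion), and apply
# that matrix to the complete unrotated loading matrix. Sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(5)
n_vars_per_block, n_factors = 6, 3
n_vars = n_vars_per_block * n_factors
blocks = np.repeat(np.arange(n_factors), n_vars_per_block)

# Unrotated loading matrix (stand-in for the extraction-stage output).
A = 0.1 * rng.standard_normal((n_vars, n_factors))
A[np.arange(n_vars), blocks] += 0.7

# Block-wise mean loadings: one row per block of salient variables.
A_mean = np.vstack([A[blocks == b].mean(axis=0) for b in range(n_factors)])

# Target: each block's mean loadings should be salient on "its" factor only.
B = np.eye(n_factors)

# Least-squares transformation mapping the block means onto the target,
# then applied to the complete unrotated loading matrix.
T, *_ = np.linalg.lstsq(A_mean, B, rcond=None)
A_rotated = A @ T
print(np.round(np.vstack([A_rotated[blocks == b].mean(axis=0)
                          for b in range(n_factors)]), 3))
```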

Combining sum factorization, weighted quadrature, and row-based assembly enables efficient higher-order computations for tensor-product splines. We aim to transfer these concepts to immersed boundary methods, which perform simulations on a regular background mesh cut by a boundary representation that defines the domain of interest. To this end, we present a novel concept for dividing the support of cut basis functions to obtain regular parts suited for sum factorization. These regions require special discontinuous weighted quadrature rules, while Gauss-like quadrature rules integrate the remaining support. Two linear elasticity benchmark problems confirm the derived estimate for the computational costs of the different integration routines and their combination. Although the presence of cut elements reduces the speed-up, their contribution to the overall computation time declines with h-refinement.
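A minimal illustration of the sum-factorization idea on an uncut tensor-product element is given below: the 2D mass matrix is obtained from 1D quadrature contractions (collapsing to a Kronecker product) and checked against brute-force assembly over the full 2D quadrature grid; the monomial basis and quadrature sizes are stand-ins chosen only to exhibit the tensor-product structure, not the spline spaces or cut-cell rules of the paper.

```python
# Minimal illustration of sum factorization on an uncut tensor-product element:
# the 2D mass matrix is assembled from 1D quadrature contractions (reducing to
# a Kronecker product of 1D mass matrices) instead of looping over the full 2D
# quadrature grid. Basis and quadrature are toy choices.
import numpy as np
from numpy.polynomial.legendre import leggauss

n_basis, n_quad = 4, 6
nodes, weights = leggauss(n_quad)                  # 1D Gauss quadrature on [-1, 1]

# 1D basis values at the quadrature points (monomials as a stand-in for
# B-spline basis functions; only the tensor-product structure matters here).
B1 = np.vander(nodes, n_basis, increasing=True).T  # shape (n_basis, n_quad)

# Sum factorization: contract one direction at a time.
M1 = (B1 * weights) @ B1.T                         # 1D mass matrix
M_sumfac = np.kron(M1, M1)                         # 2D mass matrix

# Reference: brute-force assembly over the full 2D quadrature grid.
W2 = np.outer(weights, weights).ravel()
B2 = np.einsum('iq,jp->ijqp', B1, B1).reshape(n_basis**2, n_quad**2)
M_full = (B2 * W2) @ B2.T
print("max difference:", np.abs(M_sumfac - M_full).max())
```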

In natural language processing (NLP), deep neural networks (DNNs) can model complex interactions among context words and have achieved impressive results on a range of NLP tasks. Prior work on feature interaction attribution mainly focuses on symmetric interactions, which only explain the additional influence of a set of words in combination and fail to capture the asymmetric influences that contribute to model predictions. In this work, we propose an asymmetric feature interaction attribution explanation model that aims to explore asymmetric higher-order feature interactions in the inference of deep neural NLP models. By representing our explanation with a directed interaction graph, we experimentally demonstrate the interpretability of the graph for discovering asymmetric feature interactions. Experimental results on two sentiment classification datasets show the superiority of our model over state-of-the-art feature interaction attribution methods in identifying influential features for model predictions. Our code is available at //github.com/StillLu/ASIV.
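As a rough illustration of what a directed interaction graph can encode, the snippet below scores a simple occlusion-based proxy for asymmetric pairwise influence and stores it in a networkx DiGraph; the toy lexicon model `model_prob`, the influence measure, and the threshold are assumptions for illustration only and are not the attribution model proposed in the paper.

```python
# Rough sketch of building a directed interaction graph from asymmetric
# pairwise influences. The influence measure is a simple occlusion-based proxy,
# NOT the attribution model proposed in the paper; `model_prob` is a toy
# stand-in for a sentiment classifier's positive-class probability.
import itertools
import networkx as nx

def model_prob(tokens):
    """Toy lexicon 'classifier': positive hits damped by negators."""
    positive, negators = {"good", "great"}, {"not", "never"}
    score = sum(t in positive for t in tokens)
    return 0.1 + 0.8 * score / (1 + sum(t in negators for t in tokens))

def masked(tokens, drop):
    """Return the token list with the positions in `drop` removed."""
    return [t for k, t in enumerate(tokens) if k not in drop]

tokens = "the movie was not good".split()
base = model_prob(tokens)
graph, eps = nx.DiGraph(), 1e-6
for i, j in itertools.permutations(range(len(tokens)), 2):
    # Occlusion-based contribution of word j, with word i present vs. removed.
    contrib_with_i = base - model_prob(masked(tokens, {j}))
    contrib_without_i = (model_prob(masked(tokens, {i}))
                         - model_prob(masked(tokens, {i, j})))
    # Directed influence of i on j: the interaction term scaled by the size of
    # j's own contributions, so the i->j and j->i weights generally differ.
    weight = ((contrib_with_i - contrib_without_i)
              / (abs(contrib_with_i) + abs(contrib_without_i) + eps))
    if abs(weight) > 0.05:
        graph.add_edge(tokens[i], tokens[j], weight=round(weight, 3))

print(sorted(graph.edges(data=True), key=lambda e: -abs(e[2]["weight"])))
```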

Permutation tests are widely used for statistical hypothesis testing when the sampling distribution of the test statistic under the null hypothesis is analytically intractable or unreliable due to finite sample sizes. One critical challenge in the application of permutation tests in genomic studies is that an enormous number of permutations is often needed to obtain reliable estimates of very small $p$-values, leading to intensive computational effort. To address this issue, we develop algorithms for the accurate and efficient estimation of small $p$-values in permutation tests for paired and independent two-group genomic data. Our approaches leverage a novel framework for parameterizing the permutation sample spaces of these two types of data using the Bernoulli and conditional Bernoulli distributions, respectively, combined with the cross-entropy method. The performance of the proposed algorithms is demonstrated on two simulated datasets and two real-world gene expression datasets generated by microarray and RNA-Seq technologies, with comparisons to existing methods such as crude permutations and SAMC; the results show that our approaches achieve orders-of-magnitude gains in computational efficiency when estimating small $p$-values. Our approaches offer promising solutions for improving the computational efficiency of existing permutation test procedures and for developing new permutation-based testing methods in genomic data analysis.
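The sketch below follows the general cross-entropy recipe suggested by the abstract for the paired case: sign flips are parameterized by Bernoulli probabilities, tuned over a few cross-entropy iterations, and then used as an importance-sampling proposal to estimate a small one-sided $p$-value; the batch size, elite fraction, and clipping are arbitrary choices rather than the algorithmic details of the paper.

```python
# Sketch of estimating a small one-sided p-value for a paired two-group test by
# importance sampling over sign flips, with the flip probabilities tuned by the
# cross-entropy method. This follows the generic CE recipe for rare events; the
# elite fraction, sample sizes, and clipping are arbitrary choices.
import numpy as np

rng = np.random.default_rng(7)
d = rng.standard_normal(40) + 0.8          # paired differences (toy data)
t_obs = d.mean()                           # observed statistic
n, m, rho = len(d), 2000, 0.1              # dimension, batch size, elite fraction

def log_weight(X, q):
    """log of (null pmf) / (proposal pmf) for sign-flip indicators X."""
    return n * np.log(0.5) - (X * np.log(q) + (1 - X) * np.log(1 - q)).sum(axis=1)

def statistic(X):
    return ((1 - 2 * X) * d).mean(axis=1)  # flip the sign of pair i when X[:, i] = 1

# Cross-entropy iterations: raise the level gamma towards t_obs while adapting q.
q = np.full(n, 0.5)
for _ in range(50):
    X = (rng.random((m, n)) < q).astype(float)
    T = statistic(X)
    gamma = min(t_obs, np.quantile(T, 1 - rho))
    elite = T >= gamma
    W = np.exp(log_weight(X[elite], q))
    q = np.clip((W[:, None] * X[elite]).sum(axis=0) / W.sum(), 0.05, 0.95)
    if gamma >= t_obs:
        break

# Final importance-sampling estimate of the permutation p-value under the null.
X = (rng.random((m, n)) < q).astype(float)
p_hat = np.mean((statistic(X) >= t_obs) * np.exp(log_weight(X, q)))
print("estimated permutation p-value:", p_hat)
```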

Hashing has been widely used in approximate nearest neighbor search for large-scale database retrieval because of its computational and storage efficiency. Deep hashing, which devises convolutional neural network architectures to extract the semantic information or features of images, has received increasing attention recently. In this survey, several deep supervised hashing methods for image retrieval are evaluated, and I identify three main directions for deep supervised hashing methods, with several comments made at the end. Moreover, to break through the bottleneck of existing hashing methods, I propose a Shadow Recurrent Hashing (SRH) method as an attempt. Specifically, I devise a CNN architecture to extract the semantic features of images and design a loss function that encourages similar images to be projected close to each other. To this end, I propose a concept: the shadow of the CNN output. During the optimization process, the CNN output and its shadow guide each other so as to approach the optimal solution as closely as possible. Several experiments on the CIFAR-10 dataset show the satisfying performance of SRH.
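For context, the sketch below shows a generic deep supervised hashing setup in PyTorch: a small CNN maps images to continuous codes through tanh, and a pairwise loss pulls same-label codes together, pushes different-label codes apart, and penalizes distance from binary values; this illustrates the common recipe surveyed above and is not the proposed SRH method or its shadow mechanism.

```python
# Generic deep supervised hashing sketch: a small CNN produces k continuous
# codes through tanh, and a pairwise loss encourages same-label codes to be
# close, different-label codes to be far, and all codes to be near-binary.
# This is the common recipe the survey covers, not the SRH method itself.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HashNet(nn.Module):
    def __init__(self, n_bits: int = 16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(64, n_bits)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return torch.tanh(self.head(h))            # codes in (-1, 1)

def pairwise_hash_loss(codes, labels, margin=8.0, quant_weight=0.1):
    dist = torch.cdist(codes, codes) ** 2                       # squared L2
    same = (labels[:, None] == labels[None, :]).float()
    pull = same * dist                                          # similar pairs close
    push = (1 - same) * F.relu(margin - dist)                   # dissimilar pairs apart
    quant = (codes.abs() - 1).pow(2).mean()                     # near-binary codes
    return pull.mean() + push.mean() + quant_weight * quant

# One toy training step on random CIFAR-10-shaped data.
model = HashNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.randn(32, 3, 32, 32)
labels = torch.randint(0, 10, (32,))
opt.zero_grad()
codes = model(images)
loss = pairwise_hash_loss(codes, labels)
loss.backward()
opt.step()
print("loss:", loss.item(), "binary codes shape:", torch.sign(codes).shape)
```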
