
Recently established directed dependence measures for pairs $(X,Y)$ of random variables build upon the natural idea of comparing the conditional distributions of $Y$ given $X=x$ with the marginal distribution of $Y$. They assign pairs $(X,Y)$ values in $[0,1]$; the value is $0$ if and only if $X$ and $Y$ are independent, and it is $1$ exclusively when $Y$ is a function of $X$. Here we show that comparing randomly drawn conditional distributions with each other instead, or, equivalently, analyzing how sensitively the conditional distribution of $Y$ given $X=x$ depends on $x$, opens the door to constructing novel families of dependence measures $\Lambda_\varphi$ induced by general convex functions $\varphi: \mathbb{R} \rightarrow \mathbb{R}$, containing, e.g., Chatterjee's coefficient of correlation as a special case. After establishing additional useful properties of $\Lambda_\varphi$, we focus on continuous $(X,Y)$, translate $\Lambda_\varphi$ to the copula setting, consider the $L^p$-version, and establish an estimator which is strongly consistent in full generality. A real data example and a simulation study illustrate the chosen approach and the performance of the estimator. Complementing the aforementioned results, we show how a slight modification of the construction underlying $\Lambda_\varphi$ can be used to define new measures of explainability generalizing the fraction of explained variance.
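Among the special cases mentioned above, Chatterjee's coefficient is simple enough to state in a few lines. The following is a minimal sketch of its classical rank-based estimator (no-ties formula); the function name is ours, and this illustrates only the special case, not an estimator for general $\Lambda_\varphi$.

```python
import numpy as np

def chatterjee_xi(x, y):
    """Chatterjee's rank correlation (no-ties formula):
    sort the sample by x, rank the corresponding y's, and
    measure how much consecutive ranks jump."""
    x = np.asarray(x)
    y = np.asarray(y)
    n = len(x)
    order = np.argsort(x, kind="stable")
    # ranks r_i = #{j : y_j <= y_i}, taken in x-sorted order
    r = np.argsort(np.argsort(y[order])) + 1
    return 1.0 - 3.0 * np.abs(np.diff(r)).sum() / (n**2 - 1)
```

On a large sample, the estimate is close to $1$ when $Y$ is a (possibly non-monotone) function of $X$ and close to $0$ under independence.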


Preference-based optimization algorithms are iterative procedures that seek the optimal calibration of a decision vector based only on comparisons between pairs of different tunings. At each iteration, a human decision-maker expresses a preference between two calibrations (samples), highlighting which one, if any, is better than the other. The optimization procedure must use the observed preferences to find the tuning of the decision vector that is most preferred by the decision-maker, while also minimizing the number of comparisons. In this work, we formulate the preference-based optimization problem from a utility theory perspective. Then, we propose GLISp-r, an extension of a recent preference-based optimization procedure called GLISp. The latter uses a Radial Basis Function surrogate to describe the tastes of the decision-maker. Iteratively, GLISp proposes new samples to compare with the best calibration available by trading off exploitation of the surrogate model and exploration of the decision space. In GLISp-r, we propose a different criterion to use when looking for new candidate samples that is inspired by MSRS, a popular procedure in the black-box optimization framework. Compared to GLISp, GLISp-r is less likely to get stuck on local optima of the preference-based optimization problem. We motivate this claim theoretically, with a proof of global convergence, and empirically, by comparing the performance of GLISp and GLISp-r on several benchmark optimization problems.
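The surrogate-plus-exploration idea can be illustrated in a few lines. This is a rough sketch of ours, not GLISp's actual acquisition function or its preference-constrained fit: we fit a Gaussian-RBF model to stand-in utility scores and score candidates by surrogate value minus a distance-based exploration bonus.

```python
import numpy as np

def rbf_surrogate(X, s, eps=1.0, reg=1e-8):
    """Fit a Gaussian-RBF interpolant through (sample, score) pairs.
    The scores s stand in for the latent utility (e.g. derived from
    pairwise wins); GLISp itself fits the coefficients directly from
    the preference constraints instead."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    Phi = np.exp(-(eps * D) ** 2)
    c = np.linalg.solve(Phi + reg * np.eye(len(X)), s)

    def f(x):
        d = np.linalg.norm(X - x, axis=-1)
        return np.exp(-(eps * d) ** 2) @ c

    return f

def next_sample(f, X, candidates, delta=1.0):
    """Acquisition: minimize surrogate value (exploitation) minus a
    distance-to-nearest-sample bonus (exploration), weighted by delta."""
    def score(x):
        return f(x) - delta * np.linalg.norm(X - x, axis=-1).min()
    return min(candidates, key=score)
```

With a small `delta` the search exploits the surrogate's minimum; with a large `delta` it moves toward unexplored regions of the decision space.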

Temporal analysis of products (TAP) reactors enable experiments that probe numerous kinetic processes within a single set of experimental data through variations in pulse intensity, delay, or temperature. Selecting additional TAP experiments often involves arbitrary selection of reaction conditions or the use of chemical intuition. To make experiment selection in TAP more robust, we explore the efficacy of model-based design of experiments (MBDoE) for precision in TAP reactor kinetic modeling. We successfully applied this approach to a case study of synthetic oxidative propane dehydrogenation (OPDH) that involves pulses of propane and oxygen. We found that experiments identified as optimal through the MBDoE for precision generally reduce parameter uncertainties to a higher degree than alternative experiments. The performance of MBDoE for model divergence was also explored for OPDH, with the relevant active sites (catalyst structure) being unknown. An experiment that maximized the divergence between the three proposed mechanisms was identified and led to clear mechanism discrimination. However, re-optimization of kinetic parameters eliminated the ability to discriminate. The findings yield insight into the prospects and limitations of MBDoE for TAP and transient kinetic experiments.
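The precision criterion can be made concrete with a toy version of model-based design of experiments: compare candidate experiments by the determinant of the Fisher information built from finite-difference parameter sensitivities (D-optimality). The exponential-decay model and all names below are our own simplification, not the OPDH kinetic model.

```python
import numpy as np

def sensitivities(model, theta, h=1e-6):
    """Finite-difference sensitivity matrix S[i, j] = dy_i / dtheta_j."""
    y0 = model(theta)
    S = np.empty((len(y0), len(theta)))
    for j in range(len(theta)):
        tp = theta.astype(float).copy()
        tp[j] += h
        S[:, j] = (model(tp) - y0) / h
    return S

def d_optimal(candidate_models, theta):
    """D-optimal experiment selection: pick the candidate whose
    Fisher information J = S^T S has the largest determinant."""
    dets = []
    for m in candidate_models:
        S = sensitivities(m, theta)
        dets.append(np.linalg.det(S.T @ S))
    return int(np.argmax(dets))
```

For a decay model $y(t) = A e^{-kt}$, an observation window that is long relative to $1/k$ carries far more information about $k$ than a very short one, and the criterion selects it accordingly.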

Graph Neural Networks (GNNs) have emerged as formidable resources for processing graph-based information across diverse applications. While the expressive power of GNNs has traditionally been examined in the context of graph-level tasks, their potential for node-level tasks, such as node classification, where the goal is to interpolate missing node labels from the observed ones, remains relatively unexplored. In this study, we investigate the proficiency of GNNs for such classifications, which can also be cast as a function interpolation problem. Explicitly, we focus on ascertaining the optimal configuration of weights and layers required for a GNN to successfully interpolate a band-limited function over Euclidean cubes. Our findings highlight a pronounced efficiency in utilizing GNNs to generalize a band-limited function within an $\varepsilon$-error margin. Remarkably, achieving this task necessitates only $O_d((\log\varepsilon^{-1})^d)$ weights and $O_d((\log\varepsilon^{-1})^d)$ training samples. We explore how this criterion stacks up against the explicit constructions of currently available Neural Networks (NNs) designed for similar tasks. Significantly, our result is obtained by drawing an innovative connection between the GNN structures and classical sampling theorems. In essence, our work marks a meaningful contribution to the research domain, advancing our understanding of practical GNN applications.

Collaborative filtering (CF) has become a popular method for developing recommender systems (RSs) where ratings of a user for new items are predicted based on her past preferences and available preference information of other users. Despite the popularity of CF-based methods, their performance is often greatly limited by the sparsity of observed entries. In this study, we explore the data augmentation and refinement aspects of Maximum Margin Matrix Factorization (MMMF), a widely accepted CF technique for rating predictions, which have not been investigated before. We exploit the inherent characteristics of CF algorithms to assess the confidence level of individual ratings and propose a semi-supervised approach for rating augmentation based on self-training. We hypothesize that any CF algorithm's predictions with low confidence are due to some deficiency in the training data and hence, the performance of the algorithm can be improved by adopting a systematic data augmentation strategy. We iteratively use some of the ratings predicted with high confidence to augment the training data and remove low-confidence entries through a refinement process. By repeating this process, the system learns to improve prediction accuracy. Our method is experimentally evaluated on several state-of-the-art CF algorithms and leads to informative rating augmentation, improving the performance of the baseline approaches.
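The self-training loop can be sketched generically. The sketch below substitutes a plain SGD matrix factorization and a naive "distance to the nearest discrete rating level" confidence score for the paper's MMMF-specific machinery, so everything here is illustrative rather than the proposed method.

```python
import numpy as np

def mf_fit(R, mask, k=2, iters=300, lr=0.02, reg=0.1, seed=0):
    """Plain SGD matrix factorization on the observed entries only."""
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((R.shape[0], k))
    V = 0.1 * rng.standard_normal((R.shape[1], k))
    rows, cols = np.nonzero(mask)
    for _ in range(iters):
        for i, j in zip(rows, cols):
            e = R[i, j] - U[i] @ V[j]
            U[i] += lr * (e * V[j] - reg * U[i])
            V[j] += lr * (e * U[i] - reg * V[j])
    return U @ V.T

def self_train(R, mask, rounds=3, tau=0.1):
    """Rating augmentation by self-training: predictions within tau
    of a discrete rating level are treated as high-confidence, rounded,
    and fed back into the training data as pseudo-observations."""
    R = R.copy()
    mask = mask.copy()
    for _ in range(rounds):
        P = mf_fit(R, mask)
        near = np.abs(P - np.round(P)) < tau       # confidence proxy
        new = near & (~mask)                       # unobserved entries only
        R[new] = np.round(P[new])
        mask |= new
    return mf_fit(R, mask)
```

A refinement step that drops low-confidence observed entries, as described above, would follow the same pattern with the complementary condition.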

We investigate a class of parametric elliptic semilinear partial differential equations of second order with homogeneous essential boundary conditions, where the coefficients and the right-hand side (and hence the solution) may depend on a parameter. This model can be seen as a reaction-diffusion problem with a polynomial nonlinearity in the reaction term. The efficiency of various numerical approximations across the entire parameter space is closely related to the regularity of the solution with respect to the parameter. We show that if the coefficients and the right-hand side are analytic or Gevrey class regular with respect to the parameter, the same type of parametric regularity is valid for the solution. The key ingredient of the proof is the combination of the alternative-to-factorial technique from our previous work [1] with a novel argument for the treatment of the power-type nonlinearity in the reaction term. As an application of this abstract result, we obtain rigorous convergence estimates for numerical integration of semilinear reaction-diffusion problems with random coefficients using Gaussian and Quasi-Monte Carlo quadrature. Our theoretical findings are confirmed in numerical experiments.
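On the quadrature side, the flavor of a Quasi-Monte Carlo rule can be shown with a rank-1 lattice in two parameters. The Fibonacci generating vector and the toy integrand below are choices of ours for illustration, unrelated to the paper's weighted lattice constructions.

```python
import numpy as np

def lattice_rule(f, n, z):
    """Rank-1 lattice rule: equal-weight average of f over the
    n points {i * z / n mod 1}, i = 0, ..., n-1."""
    i = np.arange(n)
    pts = (i[:, None] * z[None, :] / n) % 1.0
    return f(pts).mean()
```

For smooth integrands on $[0,1]^s$, such rules converge markedly faster than plain Monte Carlo sampling with the same number of points.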

We consider the problem of counting 4-cycles ($C_4$) in an undirected graph $G$ of $n$ vertices and $m$ edges (in bipartite graphs, 4-cycles are also often referred to as $\textit{butterflies}$). There have been a number of previous algorithms for this problem based on sorting the graph by degree and using randomized hash tables. These are appealing in theory due to compact storage and fast access on average. But the performance of hash tables can degrade unpredictably, and they are also vulnerable to adversarial inputs. We develop a new simpler algorithm for counting $C_4$ requiring $O(m\bar\delta(G))$ time and $O(n)$ space, where $\bar \delta(G) \leq O(\sqrt{m})$ is the $\textit{average degeneracy}$ parameter introduced by Burkhardt, Faber \& Harris (2020). It has several practical improvements over previous algorithms; for example, it is fully deterministic, does not require any sorting of the input graph, and uses only addition and array access in its inner loops. To the best of our knowledge, all previous efficient algorithms for $C_4$ counting have required $\Omega(m)$ space in addition to storing the input graph. Our algorithm is very simple and easily adapted to count 4-cycles incident to each vertex and edge. Empirical tests demonstrate that our array-based approach is $4\times$ -- $7\times$ faster on average compared to popular hash table implementations.
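The array-based idea can be sketched in a few lines, here without the degeneracy ordering that yields the paper's $O(m\bar\delta(G))$ bound: tally length-2 paths in a single reusable integer array, with no hashing and no sorting.

```python
def count_c4(adj):
    """Count 4-cycles using only array accesses and additions.
    For each vertex u, tally length-2 paths u-v-w with w > u; a pair
    of opposite vertices {u, w} with c common neighbours contributes
    C(c, 2) cycles, and each cycle has two opposite pairs, hence // 2."""
    n = len(adj)
    paths = [0] * n          # reusable path-count array, O(n) space
    total = 0
    for u in range(n):
        touched = []
        for v in adj[u]:
            for w in adj[v]:
                if w > u:
                    if paths[w] == 0:
                        touched.append(w)
                    paths[w] += 1
        for w in touched:
            c = paths[w]
            total += c * (c - 1) // 2
            paths[w] = 0     # reset only the entries we touched
        # keeping 'touched' avoids an O(n) clear per vertex
    return total // 2
```

Resetting only the touched entries keeps the per-vertex overhead proportional to the work done, which is the key to the compact $O(n)$ working space.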

We present an efficient preconditioner for linear problems $A x=y$. It guarantees monotonic convergence of the memory-efficient fixed-point iteration for all accretive systems of the form $A = L + V$, where $L$ is an approximation of $A$, and the system is scaled so that the discrepancy is bounded by $\lVert V \rVert<1$. In contrast to common splitting preconditioners, our approach is not restricted to any particular splitting. Therefore, the approximate problem can be chosen so that an analytic solution is available to efficiently evaluate the preconditioner. We prove that the only preconditioner with this property has the form $(L+I)(I - V)^{-1}$. This unique form moreover permits the elimination of the forward problem from the preconditioned system, often halving the time required per iteration. We demonstrate and evaluate our approach for wave problems, diffusion problems, and pantograph delay differential equations. With the latter we show how the method extends to general, not necessarily accretive, linear systems.
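A dense toy sketch of the resulting iteration, assuming the split $A = L + V$: we apply Richardson iteration preconditioned by $B = (I-V)(L+I)^{-1}$, the inverse of the form stated above. Here $(L+I)^{-1}$ is computed numerically for simplicity, whereas the point of the method is to choose $L$ so that this inverse is available analytically; the function name and setup are ours.

```python
import numpy as np

def split_precond_solve(L, V, y, iters=200):
    """Fixed-point iteration x <- x + B (y - A x) for A = L + V,
    with the split preconditioner B = (I - V)(L + I)^{-1}."""
    n = len(y)
    I = np.eye(n)
    A = L + V
    B = (I - V) @ np.linalg.inv(L + I)
    x = np.zeros(n)
    for _ in range(iters):
        x = x + B @ (y - A @ x)
    return x
```

In the test below we take $L = I$ and a symmetric $V$ with $\lVert V\rVert = 0.4$, for which the iteration matrix is easily checked to be a contraction, so the iterates converge to the exact solution.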

The convergence of the first-order Euler scheme and an approximative variant thereof, along with convergence rates, are established for rough differential equations driven by c\`adl\`ag paths satisfying a suitable criterion, namely the so-called Property (RIE), along time discretizations with vanishing mesh size. This property is then verified for almost all sample paths of Brownian motion, It\^o processes, L\'evy processes and general c\`adl\`ag semimartingales, as well as the driving signals of both mixed and rough stochastic differential equations, relative to various time discretizations. Consequently, we obtain pathwise convergence in $p$-variation of the Euler--Maruyama scheme for stochastic differential equations driven by these processes.
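For orientation, the classical Euler--Maruyama scheme for $\mathrm{d}X_t = \mu(X_t)\,\mathrm{d}t + \sigma(X_t)\,\mathrm{d}W_t$ reads as follows; this is a standard textbook sketch, not the approximative variant or the rough-path machinery of the paper.

```python
import numpy as np

def euler_maruyama(mu, sigma, x0, T, n, rng):
    """One path of the Euler-Maruyama scheme on n uniform steps:
    x_{k+1} = x_k + mu(x_k) dt + sigma(x_k) dW_k."""
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    dW = rng.standard_normal(n) * np.sqrt(dt)   # Brownian increments
    for k in range(n):
        x[k + 1] = x[k] + mu(x[k]) * dt + sigma(x[k]) * dW[k]
    return x
```

With $\sigma \equiv 0$ the scheme reduces to the deterministic first-order Euler method, which is a convenient sanity check.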

We develop lower bounds on communication in the memory hierarchy or between processors for nested bilinear algorithms, such as Strassen's algorithm for matrix multiplication. We build on a previous framework that establishes communication lower bounds by use of the rank expansion, or the minimum rank of any fixed size subset of columns of a matrix, for each of the three matrices encoding a bilinear algorithm. This framework provides lower bounds for a class of dependency directed acyclic graphs (DAGs) corresponding to the execution of a given bilinear algorithm, in contrast to other approaches that yield bounds for specific DAGs. However, our lower bounds only apply to executions that do not compute the same DAG node multiple times. Two bilinear algorithms can be nested by taking Kronecker products between their encoding matrices. Our main result is a lower bound on the rank expansion of a matrix constructed by a Kronecker product derived from lower bounds on the rank expansion of the Kronecker product's operands. We apply the rank expansion lower bounds to obtain novel communication lower bounds for nested Toom-Cook convolution, Strassen's algorithm, and fast algorithms for contraction of partially symmetric tensors.
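Nesting via Kronecker products of the encoding matrices can be made concrete with Strassen's $\langle 2,2,2\rangle$ algorithm: writing it as a bilinear triple $(U, V, W)$ with $c = W((Ua) \odot (Vb))$, the nested $4\times 4$ algorithm uses the Kronecker products of the base encodings and 49 scalar multiplications. The vectorization order below is the block-recursive one the Kronecker identity requires; the function names are ours.

```python
import numpy as np

# Strassen's 2x2 algorithm as a bilinear triple (U, V, W):
# c = W @ ((U @ a) * (V @ b)) for row-major vectorized 2x2 operands.
U = np.array([[1, 0, 0, 1], [0, 0, 1, 1], [1, 0, 0, 0], [0, 0, 0, 1],
              [1, 1, 0, 0], [-1, 0, 1, 0], [0, 1, 0, -1]])
V = np.array([[1, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, -1], [-1, 0, 1, 0],
              [0, 0, 0, 1], [1, 1, 0, 0], [0, 0, 1, 1]])
W = np.array([[1, 0, 0, 1, -1, 0, 1], [0, 0, 1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0, 0, 0], [1, -1, 1, 0, 0, 1, 0]])

def vec(A):
    """Block-recursive vectorization: index order (outer row, outer col,
    inner row, inner col), matching the Kronecker grouping."""
    return A.reshape(2, 2, 2, 2).transpose(0, 2, 1, 3).reshape(16)

def unvec(c):
    return c.reshape(2, 2, 2, 2).transpose(0, 2, 1, 3).reshape(4, 4)

def strassen4(A, B):
    """4x4 product from nesting Strassen in itself: the encodings of
    the nested algorithm are Kronecker products of the base encodings."""
    U2, V2, W2 = np.kron(U, U), np.kron(V, V), np.kron(W, W)
    return unvec(W2 @ ((U2 @ vec(A)) * (V2 @ vec(B))))
```

The rank expansion bounds in the abstract concern exactly the column structure of matrices like `U2`, `V2`, and `W2` built by such Kronecker products.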

The accurate representation of precipitation in Earth system models (ESMs) is crucial for reliable projections of the ecological and socioeconomic impacts in response to anthropogenic global warming. The complex cross-scale interactions of processes that produce precipitation are challenging to model, however, inducing potentially strong biases in ESM fields, especially regarding extremes. State-of-the-art bias correction methods only address errors in the simulated frequency distributions locally at every individual grid cell. Improving unrealistic spatial patterns of the ESM output, which would require spatial context, has not been possible so far. Here, we show that a post-processing method based on physically constrained generative adversarial networks (cGANs) can correct biases of a state-of-the-art, CMIP6-class ESM both in local frequency distributions and in the spatial patterns at once. While our method improves local frequency distributions as well as gold-standard bias-adjustment frameworks do, it strongly outperforms any existing method in the correction of spatial patterns, especially in terms of the characteristic spatial intermittency of precipitation extremes.
