
The paper presents a collection of analytical benchmark problems selected to serve as stress tests for the assessment of multifidelity optimization methods. In addition, the paper discusses a comprehensive ensemble of metrics and criteria recommended for the rigorous and meaningful assessment of the performance of multifidelity strategies and algorithms.

Related Content

This paper studies policy optimization algorithms for multi-agent reinforcement learning. We begin by proposing an algorithm framework for two-player zero-sum Markov Games in the full-information setting, where each iteration consists of a policy update step at each state using a certain matrix game algorithm, and a value update step with a certain learning rate. This framework unifies many existing and new policy optimization algorithms. We show that the state-wise average policy of this algorithm converges to an approximate Nash equilibrium (NE) of the game, as long as the matrix game algorithms achieve low weighted regret at each state, with respect to weights determined by the speed of the value updates. Next, we show that this framework instantiated with the Optimistic Follow-The-Regularized-Leader (OFTRL) algorithm at each state (and smooth value updates) can find an $\widetilde{\mathcal{O}}(T^{-5/6})$ approximate NE in $T$ iterations, which improves over the current best $\widetilde{\mathcal{O}}(T^{-1/2})$ rate of symmetric policy optimization type algorithms. We also extend this algorithm to multi-player general-sum Markov Games and show an $\widetilde{\mathcal{O}}(T^{-3/4})$ convergence rate to Coarse Correlated Equilibria (CCE). Finally, we provide a numerical example to verify our theory and investigate the importance of smooth value updates, and find that using "eager" value updates instead (equivalent to the independent natural policy gradient algorithm) may significantly slow down the convergence, even on a simple game with $H=2$ layers.
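
As a toy illustration of the per-state update, the sketch below runs Optimistic Hedge (an instance of OFTRL with entropy regularization) on a single zero-sum matrix game; the paper's algorithm couples such matrix-game updates with smooth value updates across the states of a Markov game. The payoff matrix (rock-paper-scissors), step size, and horizon here are illustrative choices, not the paper's settings.

```python
import numpy as np

# Optimistic Hedge = FTRL with entropy regularizer and a prediction equal
# to the most recent loss vector. On rock-paper-scissors, the average
# policies of both players approach the uniform Nash equilibrium.

A = np.array([[0.0, -1.0, 1.0],   # payoff to player 1 (row player)
              [1.0, 0.0, -1.0],
              [-1.0, 1.0, 0.0]])

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

eta, T = 0.1, 5000
Lx, Ly = np.zeros(3), np.zeros(3)          # cumulative losses
gx_prev, gy_prev = np.zeros(3), np.zeros(3)
avg_x, avg_y = np.zeros(3), np.zeros(3)

for t in range(T):
    # Optimistic step: play against cumulative loss plus last loss.
    x = softmax(-eta * (Lx + gx_prev))
    y = softmax(-eta * (Ly + gy_prev))
    gx = -A @ y        # loss to player 1, who maximizes x^T A y
    gy = A.T @ x       # loss to player 2, who minimizes x^T A y
    Lx += gx; Ly += gy
    gx_prev, gy_prev = gx, gy
    avg_x += x; avg_y += y

avg_x /= T; avg_y /= T
print("average policies:", avg_x, avg_y)   # both near (1/3, 1/3, 1/3)
```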

We propose a new iterative method using machine learning algorithms to fit an imprecise regression model to data that consist of intervals rather than point values. The method is based on a single-layer interval neural network which can be trained to produce an interval prediction. It seeks parameters for the optimal model that minimize the mean squared error between the actual and predicted interval values of the dependent variable, using first-order gradient-based optimization and interval analysis computations to model the measurement imprecision of the data. The method captures the relationship between the explanatory variables and a dependent variable by fitting an imprecise regression model that is linear with respect to the unknown interval parameters even when the regression model itself is nonlinear. We consider the explanatory variables to be precise point values, but the measured dependent values are characterized by interval bounds without any probabilistic information. Thus, the imprecision is modeled non-probabilistically even while the scatter of dependent values is modeled probabilistically by homoscedastic Gaussian distributions. The proposed iterative method estimates the lower and upper bounds of the expectation region, which is an envelope of all possible precise regression lines obtained by ordinary regression analysis based on any configuration of real-valued points from the respective intervals and their corresponding x-values.
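
A minimal sketch of the core fitting loop follows: an interval-valued linear model $y \in [w_{lo} x + b_{lo},\, w_{hi} x + b_{hi}]$ trained on interval observations by first-order gradient descent on an interval mean squared error. We assume $x \ge 0$ so interval multiplication preserves the bound ordering; the synthetic data, learning rate, and iteration count are all illustrative.

```python
import numpy as np

# Fit lower and upper bounding lines to interval-valued observations
# [y_lo, y_hi] by gradient descent on mean((p_lo - y_lo)^2 + (p_hi - y_hi)^2).

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 5.0, size=200)
y_mid = 2.0 * x + 1.0 + rng.normal(0.0, 0.3, size=200)
half = rng.uniform(0.2, 0.6, size=200)      # interval half-widths (imprecision)
y_lo, y_hi = y_mid - half, y_mid + half

w_lo = w_hi = b_lo = b_hi = 0.0
lr = 1e-3
for _ in range(5000):
    p_lo = w_lo * x + b_lo                  # predicted lower bound
    p_hi = w_hi * x + b_hi                  # predicted upper bound
    e_lo, e_hi = p_lo - y_lo, p_hi - y_hi   # bound-wise residuals
    # Gradients of the interval MSE with respect to each parameter.
    w_lo -= lr * 2 * np.mean(e_lo * x)
    b_lo -= lr * 2 * np.mean(e_lo)
    w_hi -= lr * 2 * np.mean(e_hi * x)
    b_hi -= lr * 2 * np.mean(e_hi)

print(f"lower line: y = {w_lo:.2f}x + {b_lo:.2f}")
print(f"upper line: y = {w_hi:.2f}x + {b_hi:.2f}")
```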

In this article, we address the problem of reducing the number of required samples for Spherical Near-Field Antenna Measurements (SNF) by using Compressed Sensing (CS). A condition for ensuring the numerical performance of sparse recovery algorithms is the design of a sensing matrix with low mutual coherence. Without fixing any part of the sampling pattern, we propose sampling points that minimize the mutual coherence of the respective sensing matrix using an augmented Lagrangian method. Numerical experiments show that the proposed sampling scheme achieves a higher recovery success, in terms of the phase transition diagram, than other known sampling patterns such as the spiral and Hammersley sampling schemes. Furthermore, we demonstrate that the application of CS with an optimized sensing matrix requires fewer samples than classical approaches to reconstruct the Spherical Mode Coefficients (SMCs) and the far-field pattern.
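
For reference, the quantity being minimized is the mutual coherence: the largest normalized inner product between any two distinct columns of the sensing matrix. The short sketch below computes it; the random complex matrix merely stands in for the SNF sensing matrix induced by a choice of sampling points.

```python
import numpy as np

def mutual_coherence(A):
    # Normalize the columns, form the Gram matrix, and return the
    # largest off-diagonal entry in absolute value.
    G = A / np.linalg.norm(A, axis=0, keepdims=True)
    gram = np.abs(G.conj().T @ G)
    np.fill_diagonal(gram, 0.0)
    return gram.max()

rng = np.random.default_rng(1)
A = rng.standard_normal((64, 256)) + 1j * rng.standard_normal((64, 256))
print("mutual coherence:", mutual_coherence(A))
```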

This paper presents a multi-scale method for convection-dominated diffusion problems in the regime of large P\'eclet numbers. The application of the solution operator to piecewise constant right-hand sides on some arbitrary coarse mesh defines a finite-dimensional coarse ansatz space with favorable approximation properties. For some relevant error measures, including the $L^2$-norm, the Galerkin projection onto this generalized finite element space even yields $\varepsilon$-independent error bounds, $\varepsilon$ being the singular perturbation parameter. By constructing an approximate local basis, the approach becomes a novel multi-scale method in the spirit of the Super-Localized Orthogonal Decomposition (SLOD). The error caused by basis localization can be estimated in an a posteriori way. In contrast to existing multi-scale methods, numerical experiments indicate $\varepsilon$-independent convergence without preasymptotic effects even in the under-resolved regime of large mesh P\'eclet numbers.
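
To make the coarse-space construction concrete, here is a one-dimensional sketch under simplifying assumptions: the solution operator of $-\varepsilon u'' + u' = f$ (homogeneous Dirichlet boundary conditions, upwind convection on a fine grid) is applied to the indicator functions of the coarse cells, and the problem is then Galerkin-projected onto their span. The discretization, mesh sizes, and right-hand side are illustrative, and this omits the localization step that makes the method practical.

```python
import numpy as np

eps, n_fine, n_coarse = 1e-3, 1024, 16
h = 1.0 / n_fine
x = np.linspace(h, 1.0 - h, n_fine - 1)            # interior fine nodes

# Fine-scale operator: central diffusion plus upwind convection.
main = eps * 2 / h**2 + 1.0 / h
lower = -eps / h**2 - 1.0 / h
upper = -eps / h**2
A = (np.diag(np.full(n_fine - 1, main))
     + np.diag(np.full(n_fine - 2, lower), -1)
     + np.diag(np.full(n_fine - 2, upper), 1))

# Coarse ansatz space: solution operator applied to the indicator
# function of each coarse cell.
H = 1.0 / n_coarse
basis = np.column_stack([
    np.linalg.solve(A, ((x >= k * H) & (x < (k + 1) * H)).astype(float))
    for k in range(n_coarse)
])

# Galerkin projection of the problem with a smooth right-hand side.
f = np.sin(np.pi * x)
coarse_mat = basis.T @ (A @ basis)
u_coarse = basis @ np.linalg.solve(coarse_mat, basis.T @ f)
u_fine = np.linalg.solve(A, f)
print("relative L2 error:",
      np.linalg.norm(u_coarse - u_fine) / np.linalg.norm(u_fine))
```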

Solving combinatorial optimization (CO) problems on graphs is among the fundamental tasks for upstream applications in data mining, machine learning and operations research. Despite the inherent NP-hardness of CO, heuristic, branch-and-bound, and learning-based solvers have been developed to tackle CO problems as accurately as possible within limited time budgets. However, a practical metric for the sensitivity of CO solvers remains largely unexplored. Existing theoretical metrics require the optimal solution, which is infeasible to obtain, and the gradient-based adversarial attack metric from deep learning is incompatible with non-learning solvers, which are usually non-differentiable. In this paper, we develop the first practically feasible robustness metric for general combinatorial optimization solvers. Our metric provides a no-worse optimal cost guarantee and thus does not require optimal solutions, and we tackle the non-differentiability challenge by resorting to black-box adversarial attack methods. Extensive experiments are conducted on 14 unique combinations of solvers and CO problems, and we demonstrate that the performance of state-of-the-art solvers such as Gurobi can degrade by over 20% under the given time limit on the hard instances discovered by our robustness metric, raising concerns about the robustness of combinatorial optimization solvers.
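
The sketch below illustrates the no-worse optimal cost idea under an assumed setup: for maximum clique, adding edges can never shrink the optimum, so any degradation of a heuristic solver's output on the perturbed instance is attributable to the solver, not a harder optimum. The greedy solver and the random-search "attack" are illustrative stand-ins for the black-box solvers and attack methods the paper actually evaluates.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
adj = rng.random((n, n)) < 0.3
adj = np.triu(adj, 1); adj = adj | adj.T   # symmetric, no self-loops

def greedy_clique(adj):
    # Black-box heuristic: grow a clique starting from high-degree vertices.
    order = np.argsort(-adj.sum(1))
    clique = []
    for v in order:
        if all(adj[v, u] for u in clique):
            clique.append(v)
    return len(clique)

base = greedy_clique(adj)
worst = base
for _ in range(200):                        # random-search attack
    i, j = rng.integers(0, n, 2)
    if i == j or adj[i, j]:
        continue
    trial = adj.copy()
    trial[i, j] = trial[j, i] = True        # adding an edge cannot shrink
    worst = min(worst, greedy_clique(trial))  # the optimal clique size

print(f"greedy clique on original: {base}, worst under attack: {worst}")
```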

Ensemble Kalman filters (EnKF) employ a Monte Carlo approach to represent covariance information, and are affected by sampling errors in operational settings where the number of model realizations is much smaller than the model state dimension. To alleviate the effects of these errors, EnKF relies on model-specific heuristics such as covariance localization, which takes advantage of the spatial locality of correlations among the model variables. This work proposes an approach to alleviating sampling errors that utilizes the locally averaged-in-time dynamics of the model, described in terms of a climatological covariance of the dynamical system. We use this covariance as the target matrix in covariance shrinkage methods, and develop a stochastic covariance shrinkage approach in which synthetic ensemble members are drawn to enrich both the ensemble subspace and the ensemble transformation. We additionally show how this methodology can be localized, similar to the state-of-the-art LETKF method, and demonstrate that, for a certain model setup, our methodology significantly outperforms it.
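
A minimal sketch of the shrinkage step, under assumed toy settings: blend the rank-deficient ensemble covariance with a climatological target, $P_{\text{shrunk}} = \gamma B + (1 - \gamma) P_{\text{ens}}$. The exponential-decay target matrix and the fixed weight $\gamma$ are illustrative; in practice the weight would be estimated (e.g., in a Ledoit-Wolf style), and the paper's stochastic variant additionally draws synthetic members.

```python
import numpy as np

rng = np.random.default_rng(3)
n_state, n_ens = 40, 10               # far fewer members than state dimensions

# Synthetic climatological covariance with spatially decaying correlations.
B = np.exp(-np.abs(np.subtract.outer(range(n_state), range(n_state))) / 5.0)
ens = rng.multivariate_normal(np.zeros(n_state), B, size=n_ens)

P_ens = np.cov(ens, rowvar=False)     # rank-deficient sample covariance
gamma = 0.3                           # fixed shrinkage weight (illustrative)
P_shrunk = gamma * B + (1.0 - gamma) * P_ens

print("rank of P_ens:   ", np.linalg.matrix_rank(P_ens))     # <= n_ens - 1
print("rank of P_shrunk:", np.linalg.matrix_rank(P_shrunk))  # full rank
```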

We systematically describe the problem of simultaneous surrogate modeling of mixed variables (i.e., continuous, integer and categorical variables) in the Bayesian optimization (BO) context. We provide a unified hybrid model, using both Monte Carlo tree search (MCTS) and Gaussian processes (GP), that encompasses and generalizes multiple state-of-the-art mixed BO surrogates. Based on this architecture, we propose a new dynamic model selection criterion over novel candidate families of covariance kernels, including non-stationary kernels and associated families. We study a range of benchmark problems whose results support the superiority of our model and highlight the effectiveness of our method compared with most state-of-the-art mixed-variable BO methods.
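
For intuition, one common mixed-variable GP covariance that such unified models generalize is a product of an RBF kernel on the continuous coordinates and an overlap (Hamming-type) kernel on the categorical ones; a minimal sketch follows. The lengthscale, mismatch penalty, and toy points are assumptions for illustration.

```python
import numpy as np

def mixed_kernel(x1, x2, c1, c2, ls=1.0, lam=0.5):
    # RBF similarity on the continuous part ...
    rbf = np.exp(-np.sum((x1 - x2) ** 2) / (2 * ls ** 2))
    # ... times an exponential penalty per categorical mismatch.
    overlap = np.exp(-lam * np.sum(c1 != c2))
    return rbf * overlap

x_a, c_a = np.array([0.1, 0.4]), np.array(["red", "small"])
x_b, c_b = np.array([0.2, 0.5]), np.array(["blue", "small"])
print("k(a, b) =", mixed_kernel(x_a, x_b, c_a, c_b))
```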

High-dimensional Partial Differential Equations (PDEs) are a popular mathematical modelling tool, with applications ranging from finance to computational chemistry. However, standard numerical techniques for solving these PDEs are typically affected by the curse of dimensionality. In this work, we tackle this challenge while focusing on stationary diffusion equations defined over a high-dimensional domain with periodic boundary conditions. Inspired by recent progress in sparse function approximation in high dimensions, we propose a new method called compressive Fourier collocation. Combining ideas from compressive sensing and spectral collocation, our method replaces the use of structured collocation grids with Monte Carlo sampling and employs sparse recovery techniques, such as orthogonal matching pursuit and $\ell^1$ minimization, to approximate the Fourier coefficients of the PDE solution. We conduct a rigorous theoretical analysis showing that the approximation error of the proposed method is comparable with the best $s$-term approximation (with respect to the Fourier basis) to the solution. Using the recently introduced framework of random sampling in bounded Riesz systems, our analysis shows that the compressive Fourier collocation method mitigates the curse of dimensionality with respect to the number of collocation points under sufficient conditions on the regularity of the diffusion coefficient. We also present numerical experiments that illustrate the accuracy and stability of the method for the approximation of sparse and compressible solutions.
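
The sparse-recovery core of the approach can be sketched in one dimension under simplifying assumptions: sample a function at random (Monte Carlo) points and recover its few active Fourier-cosine coefficients with orthogonal matching pursuit. In the actual method the matrix columns would be images of Fourier modes under the PDE operator evaluated at collocation points; here we simply recover a sparse function, and the basis size, sample count, and sparsity level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
N, m, s = 128, 40, 3                   # basis size, samples, sparsity
pts = rng.uniform(0.0, 1.0, m)         # Monte Carlo collocation points
Phi = np.cos(np.outer(pts, np.arange(N)) * 2 * np.pi)  # collocation matrix

c_true = np.zeros(N)
c_true[[2, 17, 53]] = [1.0, -0.5, 0.25]
b = Phi @ c_true                       # noiseless point samples

def omp(Phi, b, s):
    # Greedily pick the column most correlated with the residual,
    # then re-fit by least squares on the selected support.
    residual, support = b.copy(), []
    for _ in range(s):
        j = np.argmax(np.abs(Phi.T @ residual))
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], b, rcond=None)
        residual = b - Phi[:, support] @ coef
    c = np.zeros(Phi.shape[1]); c[support] = coef
    return c

c_hat = omp(Phi, b, s)
print("recovery error:", np.linalg.norm(c_hat - c_true))
```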

Causal inference has been a critical research topic across many domains, such as statistics, computer science, education, public policy and economics, for decades. Nowadays, estimating causal effects from observational data has become an appealing research direction owing to the large amount of available data and the low budget requirement compared with randomized controlled trials. Propelled by the rapid development of machine learning, various causal effect estimation methods for observational data have sprung up. In this survey, we provide a comprehensive review of causal inference methods under the potential outcome framework, one of the most well-known causal inference frameworks. The methods are divided into two categories depending on whether they require all three assumptions of the potential outcome framework. For each category, both the traditional statistical methods and the recent machine learning enhanced methods are discussed and compared. The plausible applications of these methods are also presented, including applications in advertising, recommendation, medicine and so on. Moreover, the commonly used benchmark datasets and open-source codes are summarized, which can help researchers and practitioners explore, evaluate and apply causal inference methods.
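
As a small worked example of one classical estimator in this family, the sketch below computes the average treatment effect (ATE) by inverse probability weighting (IPW) on synthetic observational data with a single confounder. The data-generating process is assumed for illustration, and the propensity score reuses the true model form rather than being fitted, as it would be in practice (e.g., by logistic regression).

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20000
x = rng.normal(size=n)                       # confounder
p = 1.0 / (1.0 + np.exp(-0.8 * x))           # true propensity score
t = rng.random(n) < p                        # confounded treatment assignment
y = 2.0 * t + 1.5 * x + rng.normal(size=n)   # outcome; true ATE = 2.0

# Propensity estimates (here: the true model form, for brevity).
e = 1.0 / (1.0 + np.exp(-0.8 * x))

ate_naive = y[t].mean() - y[~t].mean()       # biased by the confounder
ate_ipw = np.mean(t * y / e) - np.mean((~t) * y / (1 - e))
print(f"naive: {ate_naive:.3f}, IPW: {ate_ipw:.3f} (truth: 2.0)")
```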

Many tasks in natural language processing can be viewed as multi-label classification problems. However, most existing models are trained with the standard cross-entropy loss function and use a fixed prediction policy (e.g., a threshold of 0.5) for all labels, which completely ignores the complexity of, and dependencies among, different labels. In this paper, we propose a meta-learning method to capture these complex label dependencies. More specifically, our method utilizes a meta-learner to jointly learn training policies and prediction policies for the different labels. The training policies are then used to train the classifier with the cross-entropy loss function, and the prediction policies are applied at inference time. Experimental results on fine-grained entity typing and text classification demonstrate that our proposed method obtains more accurate multi-label classification results.
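
To see why a fixed 0.5 threshold is suboptimal and what a per-label prediction policy replaces, the sketch below tunes each label's threshold to maximize its F1 on held-out scores. This post-hoc grid search is only an illustration on toy data; the paper's meta-learner learns such prediction policies jointly with the training policies.

```python
import numpy as np

rng = np.random.default_rng(6)
n, L = 1000, 5
gold = rng.random((n, L)) < np.linspace(0.05, 0.5, L)   # labels of varying rarity
noise = rng.normal(0.0, 0.25, (n, L))
probs = np.clip(gold * 0.6 + 0.2 + noise, 0.0, 1.0)     # noisy classifier scores

def f1(pred, gold):
    tp = (pred & gold).sum()
    return 2 * tp / max(pred.sum() + gold.sum(), 1)

thresholds = []
for j in range(L):
    cand = np.linspace(0.05, 0.95, 19)                  # candidate thresholds
    best = max(cand, key=lambda th: f1(probs[:, j] > th, gold[:, j]))
    thresholds.append(best)

print("per-label thresholds:", np.round(thresholds, 2))  # rarely all 0.5
```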
