
Bayesian approaches are one of the primary methodologies for tackling inverse problems in high dimensions. Such an inverse problem arises in hydrology when inferring the permeability field of a porous medium from flow data. It is common practice to decompose the unknown field in some basis and infer the decomposition parameters instead of directly inferring the unknown. Given the multiscale nature of permeability fields, wavelets are a natural choice for parameterizing them. This study uses a Bayesian approach to incorporate the statistical sparsity that characterizes discrete wavelet coefficients. First, we impose a prior distribution that incorporates the hierarchical structure of the wavelet coefficients and the smoothness of the reconstruction via scale-dependent hyperparameters. Then, a Sequential Monte Carlo (SMC) method adaptively explores the posterior density at different scales, followed by model selection based on Bayes factors. Finally, the permeability field is reconstructed from the coefficients using a multiresolution approach based on second-generation wavelets. Observations from the pressure-sensor grid network are computed via the Multilevel Adaptive Wavelet Collocation Method (AWCM). Results highlight the importance of prior modeling on parameter estimation in the inverse problem.
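As a rough illustration of the scale-dependent prior idea, the sketch below places an independent zero-mean Gaussian prior on wavelet coefficients whose variance shrinks with level and reconstructs a log-permeability sample. It uses PyWavelets and a first-generation `db2` wavelet as stand-ins; the paper's second-generation wavelets, AWCM forward model, and SMC sampler are not reproduced, and all names and numbers here are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): a scale-dependent Gaussian prior on
# 1-D wavelet coefficients and reconstruction of a log-permeability field.
import numpy as np
import pywt

def log_prior(coeffs, sigma0=1.0, decay=2.0):
    """Zero-mean Gaussian prior whose variance shrinks with scale,
    encoding sparsity/smoothness: Var(level j) ~ sigma0^2 * decay^(-2j)."""
    logp = 0.0
    for j, c in enumerate(coeffs):          # coeffs[0] is the coarsest level
        sigma_j = sigma0 * decay ** (-j)
        logp += -0.5 * np.sum((np.asarray(c) / sigma_j) ** 2)
    return logp

rng = np.random.default_rng(0)
# Draw one prior sample of coefficients on a 3-level decomposition of length 64.
template = pywt.wavedec(np.zeros(64), "db2", level=3)
sample = [rng.normal(0.0, 1.0 * 2.0 ** (-j), size=np.shape(c))
          for j, c in enumerate(template)]
log_k = pywt.waverec(sample, "db2")         # reconstructed log-permeability
print(log_prior(sample), log_k.shape)
```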

Related Content

We derive and study time-uniform confidence spheres - termed confidence sphere sequences (CSSs) - which contain the mean of random vectors with high probability simultaneously across all sample sizes. Inspired by the original work of Catoni and Giulini, we unify and extend their analysis to cover both the sequential setting and a variety of distributional assumptions. More concretely, our results include an empirical-Bernstein CSS for bounded random vectors (resulting in a novel empirical-Bernstein confidence interval), a CSS for sub-$\psi$ random vectors, and a CSS for heavy-tailed random vectors based on a sequentially valid Catoni-Giulini estimator. Finally, we provide a version of our empirical-Bernstein CSS that is robust to contamination by Huber noise.
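For context, the snippet below computes the classical fixed-sample empirical-Bernstein interval of Maurer and Pontil for bounded scalars, a simpler cousin of the time-uniform, vector-valued CSSs developed in the paper; it is an illustrative assumption, not the paper's construction.

```python
# Illustrative sketch only: the fixed-sample empirical-Bernstein bound
# (Maurer & Pontil, 2009) for [0, 1]-bounded scalars.
import numpy as np

def empirical_bernstein_ci(x, delta=0.05):
    """Two-sided confidence interval for the mean of x_i in [0, 1].
    Each one-sided bound is applied at level delta/2 (union bound)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    mean = x.mean()
    var = x.var(ddof=1)                      # sample variance
    width = np.sqrt(2 * var * np.log(4 / delta) / n) \
            + 7 * np.log(4 / delta) / (3 * (n - 1))
    return mean - width, mean + width

rng = np.random.default_rng(1)
print(empirical_bernstein_ci(rng.beta(2, 5, size=500)))
```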

The problem of optimizing discrete phases in a reconfigurable intelligent surface (RIS) to maximize the received power at a user equipment is addressed. Necessary and sufficient conditions to achieve this maximization are given. These conditions are employed in an algorithm to achieve the maximization. New versions of the algorithm are given that are proven to achieve convergence in N or fewer steps, whether or not the direct link is completely blocked, where N is the number of RIS elements, whereas previously published results achieve this in KN or 2N steps, where K is the number of discrete phases, e.g., [1], [2]. Thus, for a discrete-phase RIS, the techniques presented in this paper achieve the optimum received power in the smallest number of steps published in the literature. In addition, in each of those N steps, the techniques presented in this paper determine only one or a small number of phase shifts with a simple elementwise update rule, which results in a substantial reduction of computation time compared to the algorithms in the literature, e.g., [2], [3].
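To make the setting concrete, the toy sketch below performs a plain coordinate-wise discrete phase update that aligns each RIS element with the rest of the received sum. It is not the paper's provably optimal N-step algorithm; channel model, K, and all parameter values are illustrative assumptions.

```python
# Toy sketch: greedy elementwise update maximizing |h0 + sum_n g_n exp(j*theta_n)|
# over a K-level discrete phase set.
import numpy as np

def coordinatewise_phase_update(h0, g, K=4, sweeps=2):
    phases = 2 * np.pi * np.arange(K) / K
    theta = np.zeros(len(g))
    for _ in range(sweeps):
        for n in range(len(g)):
            # received sum excluding element n
            rest = h0 + np.sum(g * np.exp(1j * theta)) - g[n] * np.exp(1j * theta[n])
            # pick the discrete phase that best aligns g[n] with the rest
            theta[n] = phases[np.argmax(np.abs(rest + g[n] * np.exp(1j * phases)))]
    return theta, np.abs(h0 + np.sum(g * np.exp(1j * theta))) ** 2

rng = np.random.default_rng(2)
N = 32
h0 = rng.normal() + 1j * rng.normal()        # direct link (could be ~0 if blocked)
g = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
print(coordinatewise_phase_update(h0, g))
```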

Monte Carlo and Quasi-Monte Carlo methods present a convenient approach for approximating the expected value of a random variable. Algorithms exist to adaptively sample the random variable until a user-defined absolute error tolerance is satisfied with high probability. This work describes an extension of such methods that supports adaptive sampling to satisfy general error criteria for functions of a common array of expectations. Although several functions involving multiple expectations are being evaluated, only one random sequence is required, albeit sometimes of larger dimension than the underlying randomness. These enhanced Monte Carlo and Quasi-Monte Carlo algorithms are implemented in the QMCPy Python package with support for economic and parallel function evaluation. We exemplify these capabilities on problems from machine learning and global sensitivity analysis.
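The schematic below illustrates the general idea of adaptively sampling until an error criterion on a function of several expectations is met, using plain NumPy and a CLT/delta-method error estimate. It is not the QMCPy implementation and carries none of its guarantees; the ratio-of-means example, sampler, and tolerances are assumptions made for illustration.

```python
# Schematic sketch: double the sample size until an error estimate for
# f(E[X], E[Y]) = E[X] / E[Y] falls below a tolerance.
import numpy as np

def adaptive_ratio_of_means(sampler, tol=1e-3, n0=2**10, n_max=2**22):
    xs, ys = sampler(n0)
    while len(xs) < n_max:
        mx, my = xs.mean(), ys.mean()
        grad = np.array([1.0 / my, -mx / my**2])          # delta-method gradient
        cov = np.cov(np.vstack([xs, ys])) / len(xs)
        err = 2.58 * np.sqrt(grad @ cov @ grad)           # ~99% CLT half-width
        if err < tol:
            break
        new_x, new_y = sampler(len(xs))                   # double the sample
        xs, ys = np.concatenate([xs, new_x]), np.concatenate([ys, new_y])
    return xs.mean() / ys.mean(), err

rng = np.random.default_rng(3)
def sampler(n):
    u = rng.random((n, 2))          # one common random sequence feeds both expectations
    return np.cos(u.sum(axis=1)), 1.0 + u[:, 0] ** 2

print(adaptive_ratio_of_means(sampler))
```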

Influence Maximization (IM) is a crucial problem in data science. The goal is to find a fixed-size set of highly influential seed vertices on a network to maximize the influence spread along the edges. While IM is NP-hard on commonly used diffusion models, a greedy algorithm can achieve a $(1-1/e)$-approximation by repeatedly selecting the vertex with the highest marginal gain in influence as the next seed. Due to these theoretical guarantees, a rich literature focuses on improving the performance of the greedy algorithm. To estimate the marginal gain, existing work either runs Monte Carlo (MC) simulations of influence spread or pre-stores hundreds of sketches (usually per-vertex information). However, these approaches can be inefficient in time (MC simulation) or space (storing sketches), preventing the ideas from scaling to today's large-scale graphs. This paper significantly improves the scalability of IM using two key techniques. The first is a sketch-compression technique for the independent cascade model on undirected graphs. It allows combining the simulation and sketching approaches to achieve a time-space tradeoff. The second technique includes new data structures for parallel seed selection. Using our new approaches, we implemented PaC-IM: Parallel and Compressed IM. We compare PaC-IM with state-of-the-art parallel IM systems on a 96-core machine with 1.5TB memory. PaC-IM can process large-scale graphs with up to 900M vertices and 74B edges in about 2 hours. On average across all tested graphs, our uncompressed version is 5--18$\times$ faster and about 1.4$\times$ more space-efficient than existing parallel IM systems. Using compression further saves 3.8$\times$ space with only 70% overhead in time on average.
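For readers unfamiliar with the baseline, this is a minimal sketch of the classical greedy algorithm mentioned above: marginal gains are estimated with Monte Carlo simulations of the independent cascade model. PaC-IM's sketch compression and parallel data structures are not shown; the tiny graph, propagation probability, and run counts are illustrative assumptions.

```python
# Greedy IM baseline with MC estimation of spread under the IC model.
import random

def ic_spread(adj, seeds, p=0.1, runs=200):
    total = 0
    for _ in range(runs):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in adj.get(u, ()):
                    if v not in active and random.random() < p:
                        active.add(v); nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / runs

def greedy_im(adj, k):
    seeds = []
    for _ in range(k):
        base = ic_spread(adj, seeds)
        best = max((v for v in adj if v not in seeds),
                   key=lambda v: ic_spread(adj, seeds + [v]) - base)
        seeds.append(best)
    return seeds

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
random.seed(0)
print(greedy_im(adj, 2))
```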

Differentiable Filters, as recursive Bayesian estimators, possess the ability to learn complex dynamics by deriving state transition and measurement models exclusively from data. This data-driven approach eliminates the reliance on explicit analytical models while maintaining the essential algorithmic components of the filtering process. However, the gain mechanism remains non-differentiable, limiting its adaptability to specific task requirements and contextual variations. To address this limitation, this paper introduces an innovative approach called α-MDF (Attention-based Multimodal Differentiable Filter). α-MDF leverages modern attention mechanisms to learn multimodal latent representations for accurate state estimation in soft robots. By incorporating attention mechanisms, α-MDF offers the flexibility to tailor the gain mechanism to the unique nature of the task and context. The effectiveness of α-MDF is validated through real-world state estimation tasks on soft robots. Our experimental results demonstrate significant reductions in state estimation errors, consistently surpassing differentiable filter baselines by up to 45% in the domain of soft robotics.
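The PyTorch sketch below conveys the general idea of replacing a fixed gain with attention that fuses a predicted latent state with multimodal observation embeddings. It is a schematic assumption, not the authors' α-MDF architecture; module names, dimensions, and the GRU transition model are invented for illustration.

```python
# Schematic filter step: attention weights act as a learned, adaptive gain.
import torch
import torch.nn as nn

class AttentionGainFilterCell(nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        self.dynamics = nn.GRUCell(dim, dim)             # learned transition model
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, z_prev, action, obs_tokens):
        # obs_tokens: (batch, n_modalities, dim) embeddings of each sensor modality
        z_pred = self.dynamics(action, z_prev)           # predict step
        # update step: attend from the predicted state to the observations
        z_upd, _ = self.attn(z_pred.unsqueeze(1), obs_tokens, obs_tokens)
        return z_upd.squeeze(1)

cell = AttentionGainFilterCell()
z = torch.zeros(8, 32)
print(cell(z, torch.randn(8, 32), torch.randn(8, 2, 32)).shape)  # -> (8, 32)
```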

The first part of this thesis focuses on maximizing the overall recommendation accuracy. This accuracy is usually evaluated with some user-oriented metric tailored to the recommendation scenario, but because recommendation is usually treated as a machine learning problem, recommendation models are trained to maximize some other generic criterion that does not necessarily align with the one ultimately captured by the user-oriented evaluation metric. Recent research aims at bridging this gap between training and evaluation via direct ranking optimization, but still assumes that the metric used for evaluation should also be the metric used for training. We challenge this assumption, mainly because some metrics are more informative than others. Indeed, we show that models trained via the optimization of a loss inspired by Rank-Biased Precision (RBP) tend to yield higher accuracy, even when accuracy is measured with metrics other than RBP. However, the superiority of this RBP-inspired loss stems from further benefiting users who are already well served, rather than helping those who are not. This observation inspires the second part of this thesis, where our focus turns to helping non-mainstream users. These are users who are difficult to recommend to, either because there is not enough data to model them, or because they have niche tastes and thus few similar users to look at when recommending in a collaborative way. These differences in mainstreamness introduce a bias reflected in an accuracy gap between users or user groups, which we try to narrow.
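For reference, the snippet below computes Rank-Biased Precision itself for a ranked recommendation list; the thesis optimizes a differentiable surrogate inspired by this metric, which is not reproduced here. The relevance vector and persistence value are illustrative.

```python
# RBP = (1 - p) * sum_k rel_k * p^(k-1), where p is the user persistence.
import numpy as np

def rbp(relevance_in_rank_order, p=0.8):
    rel = np.asarray(relevance_in_rank_order, dtype=float)
    discounts = p ** np.arange(len(rel))     # geometric discount per rank
    return (1 - p) * np.sum(rel * discounts)

# Binary relevance of the top-5 recommended items, best-ranked first.
print(rbp([1, 0, 1, 1, 0], p=0.8))
```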

With the emergence of Artificial Intelligence, numerical algorithms are moving towards more approximate approaches. For methods such as PCA or diffusion maps, it is necessary to compute eigenvalues of a large matrix, which may also be dense depending on the kernel. A global method, i.e., a method that requires all data points simultaneously, scales with the data dimension N and not with the intrinsic dimension d; an exact dense eigendecomposition has complexity $\mathcal{O}(N^{3})$. We have combined two frameworks, $\mathsf{datafold}$ and $\mathsf{GOFMM}$. The first framework computes diffusion maps, where the computational bottleneck is the eigendecomposition, while with the second framework we compute the eigendecomposition approximately within the iterative Lanczos method. A hierarchical approximation approach scales roughly with a runtime complexity of $\mathcal{O}(N \log N)$ vs. $\mathcal{O}(N^{3})$ for a classic approach. We evaluate the approach on two benchmark datasets -- S-curve and MNIST -- with strong and weak scaling using OpenMP and MPI on dense matrices with a maximum size of $100k \times 100k$.
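As a rough illustration of the pipeline, the sketch below builds a diffusion-map embedding and computes the leading eigenpairs with SciPy's iterative Lanczos solver (`eigsh`) instead of a dense $\mathcal{O}(N^{3})$ factorization. The dense Gaussian kernel here stands in for the GOFMM-compressed operator (Lanczos only needs matrix-vector products, which is where such a compressed operator would plug in); the toy S-curve data, bandwidth, and function names are assumptions, and neither the datafold nor the GOFMM API is used.

```python
import numpy as np
from scipy.sparse.linalg import eigsh
from scipy.spatial.distance import cdist

def diffusion_map(X, eps=1.0, n_components=2):
    K = np.exp(-cdist(X, X, "sqeuclidean") / eps)      # Gaussian kernel
    d = K.sum(axis=1)
    A = K / np.sqrt(np.outer(d, d))                    # symmetric normalization
    vals, vecs = eigsh(A, k=n_components + 1, which="LM")   # Lanczos iterations
    order = np.argsort(vals)[::-1]
    vecs = vecs[:, order] / np.sqrt(d)[:, None]        # back to row-stochastic eigenvectors
    return vecs[:, 1:n_components + 1] * vals[order][1:n_components + 1]

rng = np.random.default_rng(4)
t = rng.uniform(0, 3 * np.pi, 500)                     # toy S-curve-like point cloud
X = np.column_stack([np.sin(t), rng.uniform(0, 2, 500), np.cos(t) - 1])
print(diffusion_map(X).shape)
```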

Fast inverse kinematics (IK) is a central component in robotic motion planning. For complex robots, IK methods are often based on root search and non-linear optimization algorithms. These algorithms can be massively sped up by using a neural network to predict a good initial guess, which is then refined in a few numerical iterations. Going beyond previous work on learning-based IK, we present a learning approach for the fundamentally more complex problem of IK with collision avoidance, in diverse and previously unseen environments. From a detailed analysis of the IK learning problem, we derive a network and unsupervised learning architecture that removes the need for a sample data generation step. Using the trained network's prediction as an initial guess for a two-stage Jacobian-based solver allows fast and accurate computation of the collision-free IK. For the humanoid robot Agile Justin (19 DoF), the collision-free IK is solved in less than 10 milliseconds (on a single CPU core) and with an accuracy of $10^{-4}$ m and $10^{-3}$ rad, based on a high-resolution world model generated from the robot's integrated 3D sensor. Our method massively outperforms a random multi-start baseline in a benchmark with the 19 DoF humanoid and challenging 3D environments. It requires ten times less training time than a supervised training method while achieving comparable results.
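The toy sketch below shows only the refinement stage: damped least-squares Jacobian iterations for a planar 3-link arm, starting from an initial guess `q0` that, in the paper, would come from the trained network. Collision avoidance, the two-stage solver, and the real 19-DoF Agile Justin model are omitted; link lengths, damping, and targets are illustrative assumptions.

```python
# Damped least-squares (DLS) IK refinement for a planar 3-link arm.
import numpy as np

L = np.array([0.4, 0.3, 0.2])                          # toy link lengths

def fk(q):
    angles = np.cumsum(q)
    return np.array([np.sum(L * np.cos(angles)), np.sum(L * np.sin(angles))])

def jacobian(q):
    angles = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(angles[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(angles[i:]))
    return J

def refine_ik(q0, target, damping=1e-3, iters=50, tol=1e-6):
    q = q0.copy()
    for _ in range(iters):
        err = target - fk(q)
        if np.linalg.norm(err) < tol:
            break
        J = jacobian(q)
        # DLS step: dq = J^T (J J^T + damping*I)^-1 * err
        q += np.linalg.solve(J @ J.T + damping * np.eye(2), err) @ J
    return q, np.linalg.norm(target - fk(q))

print(refine_ik(np.array([0.1, 0.2, 0.3]), np.array([0.5, 0.4])))
```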

Pre-trained Language Models (PLMs), which are trained on large text corpora via self-supervised learning, have yielded promising performance on various tasks in Natural Language Processing (NLP). However, although PLMs with huge numbers of parameters can effectively capture rich knowledge from massive training text and benefit downstream tasks at the fine-tuning stage, they still have limitations, such as poor reasoning ability, due to the lack of external knowledge. Research has been dedicated to incorporating knowledge into PLMs to tackle these issues. In this paper, we present a comprehensive review of Knowledge-Enhanced Pre-trained Language Models (KE-PLMs) to provide a clear insight into this thriving field. We introduce appropriate taxonomies for Natural Language Understanding (NLU) and Natural Language Generation (NLG), respectively, to highlight these two main tasks of NLP. For NLU, we divide the types of knowledge into four categories: linguistic knowledge, text knowledge, knowledge graph (KG), and rule knowledge. The KE-PLMs for NLG are categorized into KG-based and retrieval-based methods. Finally, we point out some promising future directions for KE-PLMs.

While it is nearly effortless for humans to quickly assess the perceptual similarity between two images, the underlying processes are thought to be quite complex. Despite this, the most widely used perceptual metrics today, such as PSNR and SSIM, are simple, shallow functions, and fail to account for many nuances of human perception. Recently, the deep learning community has found that features of the VGG network trained on the ImageNet classification task have been remarkably useful as a training loss for image synthesis. But how perceptual are these so-called "perceptual losses"? What elements are critical for their success? To answer these questions, we introduce a new Full Reference Image Quality Assessment (FR-IQA) dataset of perceptual human judgments, orders of magnitude larger than previous datasets. We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics. We find that deep features outperform all previous metrics by large margins. More surprisingly, this result is not restricted to ImageNet-trained VGG features, but holds across different deep architectures and levels of supervision (supervised, self-supervised, or even unsupervised). Our results suggest that perceptual similarity is an emergent property shared across deep visual representations.
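To make the notion of a deep-feature distance concrete, the sketch below compares unit-normalized VGG-16 activations of two images at a few layers using recent torchvision. It is an unweighted schematic, not the calibrated metric released with the paper (which learns per-channel weights from the human-judgment dataset); the chosen layer indices and input sizes are illustrative assumptions.

```python
# Unweighted deep-feature distance over a handful of VGG-16 layers.
import torch
import torchvision.models as models

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
layers = {3, 8, 15, 22}                                # relu1_2, relu2_2, relu3_3, relu4_3
mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

def deep_feature_distance(x, y):
    """x, y: (1, 3, H, W) tensors with values in [0, 1]."""
    x, y = (x - mean) / std, (y - mean) / std          # ImageNet normalization
    dist = 0.0
    with torch.no_grad():
        for i, layer in enumerate(vgg):
            x, y = layer(x), layer(y)
            if i in layers:
                fx = x / (x.norm(dim=1, keepdim=True) + 1e-10)    # unit-normalize channels
                fy = y / (y.norm(dim=1, keepdim=True) + 1e-10)
                dist = dist + ((fx - fy) ** 2).sum(dim=1).mean()  # spatial average per layer
    return dist

a, b = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
print(deep_feature_distance(a, b).item())
```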
