
Inductive conformal predictors (ICPs) are algorithms that generate prediction sets, rather than point predictions, which are valid at a user-defined confidence level under the sole assumption of exchangeability. These algorithms are useful for reliable machine learning and are growing in popularity. The ICP development process involves dividing development data into three parts: training, calibration and test. When development data are limited or expensive, how best to divide them remains an open question. This study presents several experiments that explore this question and considers the case for allowing overlap of examples between training and calibration sets. Conclusions are drawn that will be of value to academics and practitioners planning to use ICPs.
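
To make the split concrete, here is a minimal sketch of the standard split (inductive) conformal procedure for regression, assuming absolute residuals as nonconformity scores; the function and variable names are illustrative, not the study's code.

```python
import numpy as np

def icp_interval(model, X_cal, y_cal, x_new, alpha=0.1):
    """Split/inductive conformal prediction interval for regression.

    Assumes `model` was fit on a disjoint training split; the calibration
    split is used only to compute nonconformity scores.
    """
    scores = np.abs(y_cal - model.predict(X_cal))   # nonconformity scores
    n = len(scores)
    # Finite-sample-corrected quantile level; clipped at 1 for tiny n.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(scores, level)
    y_hat = model.predict(x_new)
    return y_hat - q, y_hat + q                     # coverage >= 1 - alpha
```

The trade-off the study examines is visible here: examples moved out of the training split reduce the accuracy of `model`, while examples moved out of the calibration split make the quantile estimate noisier.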

Related content

Structural dynamics models with nonlinear stiffness appear, for example, when analyzing systems with nonlinear material behavior or undergoing large deformations. For complex systems, these models become too large for real-time applications or multi-query workflows. Hence, model reduction is needed. However, the mathematical operators of these models are often not available since, as is common in industry practice, the models are constructed using commercial simulation software. In this work, we propose an operator inference-based approach aimed at inferring, from data generated by the simulation model, reduced-order models (ROMs) of structural dynamics systems with stiffness terms represented by polynomials of arbitrary degree. To ensure physically meaningful models, we impose constraints on the inference such that the model is guaranteed to exhibit stability properties. Convexity of the optimization problem associated with the inference is maintained by applying a sum-of-squares relaxation to the polynomial term. To further reduce the size of the ROM and improve numerical conditioning of the inference, we also propose a novel clustering-based sparsification of the polynomial term. We validate the proposed method on several numerical examples, including a representative 3D Finite Element Model (FEM) of a steel piston rod.
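
As a point of reference, below is a minimal sketch of plain, unconstrained operator inference for a system with linear and quadratic reduced terms; the stability constraints, sum-of-squares relaxation and clustering-based sparsification described above are deliberately omitted, and all names are illustrative.

```python
import numpy as np

def operator_inference(Q, dQ):
    """Fit reduced operators A (linear) and H (quadratic) by least squares,
    so that dq/dt ~= A q + H (q kron q).

    Q  : (r, k) reduced-state snapshots (one column per time instant)
    dQ : (r, k) matching time-derivative snapshots
    """
    r, k = Q.shape
    # Quadratic (Kronecker) features for every snapshot column.
    Q2 = np.vstack([np.kron(Q[:, j], Q[:, j]) for j in range(k)]).T
    D = np.vstack([Q, Q2])                          # (r + r*r, k) data matrix
    # Solve min_O ||O D - dQ||_F with O = [A, H] stacked side by side.
    O = np.linalg.lstsq(D.T, dQ.T, rcond=None)[0].T
    return O[:, :r], O[:, r:]                       # A, H
```

The conditioning issue the paper targets is apparent even here: the polynomial block `Q2` contributes r^2 unknowns per output row and quickly becomes ill-conditioned, which motivates the proposed sparsification.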

A scoring system is a simple decision model that checks a set of features, adds a certain number of points to a total score for each feature that is satisfied, and finally makes a decision by comparing the total score to a threshold. Scoring systems have a long history of active use in safety-critical domains such as healthcare and justice, where they provide guidance for making objective and accurate decisions. Given their genuine interpretability, the idea of learning scoring systems from data is obviously appealing from the perspective of explainable AI. In this paper, we propose a practically motivated extension of scoring systems called probabilistic scoring lists (PSL), as well as a method for learning PSLs from data. Instead of making a deterministic decision, a PSL represents uncertainty in the form of probability distributions, or, more generally, probability intervals. Moreover, in the spirit of decision lists, a PSL evaluates features one by one and stops as soon as a decision can be made with enough confidence. To evaluate our approach, we conduct a case study in the medical domain.
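
For intuition, here is a toy deterministic scoring system of the kind described above; the rules and point values are invented for illustration, and the probabilistic extension would replace the final threshold comparison with probability estimates evaluated feature by feature.

```python
def score_decision(features, rules, threshold):
    """Toy scoring system: each rule is a (feature_name, points) pair.

    Adds the points of every satisfied feature, then compares the total
    score to the threshold.
    """
    total = sum(pts for name, pts in rules if features.get(name))
    return total >= threshold

# Hypothetical clinical-style rules (names and points are invented).
rules = [("age_over_65", 2), ("high_blood_pressure", 1), ("diabetes", 1)]
patient = {"age_over_65": True, "diabetes": True}
print(score_decision(patient, rules, threshold=3))  # True: 2 + 1 >= 3
```

A PSL, in contrast, would walk through `rules` one at a time and stop early once the running score pins the class probability down with enough confidence.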

We introduce a conceptually simple and efficient algorithm for seamless parametrization, a key element in constructing quad layouts and texture charts on surfaces. More specifically, we consider the construction of parametrizations with prescribed holonomy signatures, i.e., a set of angles at singularities and rotations along homology loops; preserving these is essential for constructing parametrizations that follow an input field, as well as for user control of the parametrization structure. Our algorithm performs exceptionally well on a large dataset based on Thingi10k [Zhou and Jacobson 2016] (16156 meshes), as well as on a challenging smaller dataset of [Myles et al. 2014], converging, on average, in 9 iterations. Although the algorithm lacks a formal mathematical guarantee, the presented empirical evidence and the connections between convex optimization and closely related algorithms suggest that a similar formulation can be found for this algorithm in the future.

We introduce the novel class $(E_\alpha)_{\alpha \in [-\infty,1)}$ of reverse map projection embeddings, each one defining a unique new method of encoding classical data into quantum states. Inspired by well-known map projections from the unit sphere onto its tangent planes, used in practice in cartography, these embeddings address the common drawback of the amplitude embedding method, wherein scalar multiples of data points are identified and information about the norm of the data is lost. We show how reverse map projections can be utilised as equivariant embeddings for quantum machine learning. Using these methods, we can leverage symmetries in classical datasets to significantly strengthen performance on quantum machine learning tasks. Finally, we select four values of $\alpha$ with which to perform a simple classification task, taking $E_\alpha$ as the embedding and experimenting with both equivariant and non-equivariant setups. We compare their results with those of standard amplitude embedding.
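
The drawback of amplitude embedding mentioned above is easy to demonstrate; the following numpy-only sketch (no quantum SDK) shows that scalar multiples of a data point are mapped to the same state.

```python
import numpy as np

def amplitude_embedding(x):
    """Standard amplitude embedding: encode x as the normalized vector
    x / ||x||. Normalization discards the norm, so x and c*x (c > 0)
    are identified -- the information loss the reverse map projections
    are designed to avoid.
    """
    x = np.asarray(x, dtype=float)
    return x / np.linalg.norm(x)

x = np.array([1.0, 2.0, 2.0, 0.0])
print(np.allclose(amplitude_embedding(x), amplitude_embedding(3 * x)))  # True
```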

ROME and MEMIT are largely believed to be two different model editing algorithms, with the major difference between them being the ability to perform batched edits. In this paper, we unify these two algorithms under a single conceptual umbrella, optimizing for the same goal, which we call the preservation-memorization objective. ROME uses an equality constraint to optimize this objective and performs one edit at a time, whereas MEMIT employs a more flexible least-squares constraint that allows for batched edits. We generalize ROME and enable batched editing under the equality constraint in the form of EMMET, an Equality-constrained Mass Model Editing algorithm for Transformers. EMMET can perform batched edits up to a batch size of 10,000, with performance very similar to MEMIT across multiple dimensions. With the introduction of EMMET, we truly unify ROME and MEMIT and show that both algorithms are equivalent in terms of their optimization objective, their abilities (singular and batched editing), their model editing performance and their limitations.
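
To make the contrast concrete, here is a schematic of the least-squares flavor of the preservation-memorization objective: preserve a layer's outputs on a set of previously seen keys while memorizing new values for the edited keys. This is only an illustrative closed form under these assumptions, not ROME, MEMIT or EMMET as actually derived.

```python
import numpy as np

def batched_edit(W0, K0, K1, V1, lam=1e-4):
    """Least-squares preservation-memorization update (schematic).

    W0 : (d_out, d_in) original weight matrix
    K0 : (d_in, n0) keys whose outputs should be preserved
    K1 : (d_in, n1) keys for the batch of new edits
    V1 : (d_out, n1) target values for the edited keys

    Minimizes ||W K0 - W0 K0||^2 + ||W K1 - V1||^2 plus a small ridge term.
    """
    d_in = W0.shape[1]
    C = K0 @ K0.T + K1 @ K1.T + lam * np.eye(d_in)
    B = W0 @ (K0 @ K0.T) + V1 @ K1.T
    return B @ np.linalg.inv(C)   # stationary point of the objective
```

Replacing the soft memorization term with a hard constraint W K1 = V1, solved via Lagrange multipliers, gives the equality-constrained shape that ROME uses for single edits and EMMET extends to batches.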

We apply functional acceleration to the Policy Mirror Descent (PMD) family of algorithms, which covers a wide range of novel and fundamental methods in Reinforcement Learning (RL). Leveraging duality, we propose a momentum-based PMD update. By taking the functional route, our approach is independent of the policy parametrization and applicable to large-scale optimization, covering previous applications of momentum at the level of policy parameters as a special case. We theoretically analyze several properties of this approach and complement them with a numerical ablation study, which serves to illustrate the policy optimization dynamics on the value polytope relative to different algorithmic design choices in this space. We further characterize numerically several features of the problem setting relevant for functional acceleration, and lastly, we investigate the impact of approximation on the learning mechanics of these methods.
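
As a toy illustration of the idea, consider tabular softmax policies with the KL mirror map, where a vanilla PMD step adds the scaled action values to the dual variables (the logits); a heavy-ball-style momentum term can then be accumulated in that dual space. This is only a sketch under these simplifying assumptions, not the paper's general functional update.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def pmd_momentum_step(logits, Q, eta=1.0, beta=0.9, velocity=None):
    """One momentum-accelerated PMD step for a tabular softmax policy.

    Vanilla PMD with the KL mirror map updates logits += eta * Q
    (i.e., pi_{t+1} is proportional to pi_t * exp(eta * Q)); here a
    heavy-ball velocity term is accumulated on top of that update.
    """
    if velocity is None:
        velocity = np.zeros_like(logits)
    velocity = beta * velocity + eta * Q    # momentum in the dual space
    logits = logits + velocity
    return softmax(logits), logits, velocity
```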

We study the fundamental limits of matching pursuit, or the pure greedy algorithm, for approximating a target function $ f $ by a linear combination $f_n$ of $n$ elements from a dictionary. When the target function is contained in the variation space corresponding to the dictionary, many impressive works over the past few decades have obtained upper and lower bounds on the error $\|f-f_n\|$ of matching pursuit, but these bounds do not match. The main contribution of this paper is to close this gap and obtain a sharp characterization of the decay rate, $n^{-\alpha}$, of matching pursuit. Specifically, we construct a worst-case dictionary which shows that the existing best upper bound cannot be significantly improved. It turns out that, unlike other greedy algorithm variants which converge at the optimal rate $n^{-1/2}$, the convergence rate $n^{-\alpha}$ is suboptimal. Here, $\alpha \approx 0.182$ is determined by the solution of a certain non-linear equation.
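
For reference, the pure greedy algorithm whose decay rate is being characterized is only a few lines in the finite-dimensional case; this sketch assumes a matrix D with unit-norm dictionary atoms as columns.

```python
import numpy as np

def matching_pursuit(f, D, n):
    """Pure greedy algorithm / matching pursuit.

    At each step, select the atom most correlated with the current
    residual and subtract the corresponding projection.
    """
    r = f.astype(float).copy()
    f_n = np.zeros_like(r)
    for _ in range(n):
        c = D.T @ r                  # correlations with all atoms
        j = np.argmax(np.abs(c))     # greedily chosen atom
        f_n += c[j] * D[:, j]
        r -= c[j] * D[:, j]          # residual f - f_n
    return f_n, r
```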

Ammann bars are segments (decorations) on the tiles of a tiling such that requiring them to form straight lines across the tiling forces non-periodicity. Only a few cases are known, starting with Robert Ammann's observations on Penrose tiles, but there is no general explanation or construction. In this article we propose a general method for cut and project tilings based on the notion of subperiods, and we illustrate it with an aperiodic set of 36 decorated prototiles related to what we call Cyrenaic tilings.

Key Point Analysis (KPA) aims for quantitative summarization that provides key points (KPs) as succinct textual summaries and quantities measuring their prevalence. KPA studies for arguments and reviews have been reported in the literature. A majority of KPA studies for reviews adopt supervised learning to extract short sentences as KPs before matching KPs to review comments for quantification of KP prevalence. Recent abstractive approaches still generate KPs based on sentences, often leading to KPs with overlapping and hallucinated opinions, and inaccurate quantification. In this paper, we propose Prompted Aspect Key Point Analysis (PAKPA) for quantitative review summarization. PAKPA employs aspect sentiment analysis and prompted in-context learning with Large Language Models (LLMs) to generate and quantify KPs grounded in aspects for business entities, which achieves faithful KPs with accurate quantification, and removes the need for large amounts of annotated data for supervised training. Experiments on the popular review dataset Yelp and the aspect-oriented review summarization dataset SPACE show that our framework achieves state-of-the-art performance. Source code and data are available at: //github.com/antangrocket1312/PAKPA
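
A heavily simplified schematic of the aspect-grounded generate-then-quantify idea follows; `extract_aspect_sentiment` and `llm_generate_kp` are hypothetical placeholders for the aspect-based sentiment analysis and prompted in-context generation stages, not PAKPA's actual API.

```python
from collections import defaultdict

def aspect_kpa(comments, extract_aspect_sentiment, llm_generate_kp):
    """Cluster comments by (aspect, sentiment), generate one KP per
    cluster via a prompted LLM, and report cluster size as prevalence.
    """
    clusters = defaultdict(list)
    for comment in comments:
        for aspect, sentiment in extract_aspect_sentiment(comment):
            clusters[(aspect, sentiment)].append(comment)
    return [
        {"kp": llm_generate_kp(aspect, sentiment, members),
         "aspect": aspect, "sentiment": sentiment,
         "prevalence": len(members)}
        for (aspect, sentiment), members in clusters.items()
    ]
```

Because each KP is generated from, and counted over, a whole aspect-sentiment cluster rather than a single extracted sentence, overlapping KPs and miscounted prevalence are less likely by construction.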

It is widely believed that modeling the relationships between objects helps in representing and ultimately describing an image. Nevertheless, there has been little evidence supporting this idea in image description generation. In this paper, we introduce a new design that explores the connections between objects for image captioning under the umbrella of an attention-based encoder-decoder framework. Specifically, we present a Graph Convolutional Networks plus Long Short-Term Memory (dubbed GCN-LSTM) architecture that integrates both semantic and spatial object relationships into the image encoder. Technically, we build graphs over the detected objects in an image based on their spatial and semantic connections. The representation of each object region proposal is then refined by leveraging the graph structure through GCN. With the learnt region-level features, our GCN-LSTM capitalizes on an LSTM-based captioning framework with an attention mechanism for sentence generation. Extensive experiments are conducted on the COCO image captioning dataset, and superior results are reported compared to state-of-the-art approaches. More remarkably, GCN-LSTM increases CIDEr-D performance from 120.1% to 128.7% on the COCO testing set.
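
The refinement step over the object graph can be pictured as a standard graph-convolution layer; the following numpy sketch uses the vanilla symmetric-normalized form for illustration (the paper's encoder distinguishes semantic and spatial relations, which is omitted here).

```python
import numpy as np

def gcn_refine(X, A, W):
    """One GCN layer over a graph of detected object regions.

    X : (N, d) region features from the object detector
    A : (N, N) adjacency built from spatial/semantic relations
    W : (d, d_out) learnable weight matrix
    """
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)       # ReLU activation
```

The refined region features are then fed to the attention-equipped LSTM decoder for sentence generation.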
