
Scaling analysis is a technique in computational political science that assigns a political actor (e.g. a politician or party) a score on a predefined scale based on a (typically long) body of text (e.g. a parliamentary speech or an election manifesto). For example, political scientists have often used the left-right scale to systematically analyse the political landscapes of different countries. NLP methods for automatic scaling analysis can find broad application provided they (i) are able to deal with long texts and (ii) work robustly across domains and languages. In this work, we implement and compare two approaches to automatic scaling analysis of political-party manifestos: label aggregation, a pipeline strategy relying on annotations of individual statements from the manifestos, and long-input-Transformer-based models, which compute scaling values directly from raw text. We carry out our analysis on the Comparative Manifestos Project dataset, covering 41 countries and 27 languages, and find that the task can be efficiently solved by state-of-the-art models, with label aggregation producing the best results.
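As a minimal sketch of the label-aggregation idea, suppose each manifesto statement has already been classified as left-leaning, neutral, or right-leaning; a document-level score in the spirit of the Manifesto Project's RILE index then reduces to a normalized count. The three-way labels and the [-100, 100] scale below are illustrative assumptions, not the paper's exact pipeline:

```python
# Hypothetical label aggregation: each statement is labeled -1 (left),
# 0 (neutral), or +1 (right); the manifesto's scaling value is the
# normalized difference of right and left counts, on a [-100, 100] scale.
def scale_manifesto(statement_labels: list[int]) -> float:
    n_right = sum(1 for s in statement_labels if s > 0)
    n_left = sum(1 for s in statement_labels if s < 0)
    return 100.0 * (n_right - n_left) / len(statement_labels)

print(scale_manifesto([1, 1, 0, -1, 1, 0, -1, 0]))  # 12.5
```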

Related content

The finite volume method (FVM) is a widely used mesh-based technique, renowned for its computational efficiency and accuracy, but it bears significant drawbacks, particularly in mesh generation and in handling complex boundary interfaces or conditions. On the other hand, the smoothed particle hydrodynamics (SPH) method, a popular meshless alternative, inherently circumvents mesh generation and yields smoother numerical outcomes, but at the expense of computational efficiency. Numerous researchers have therefore strategically amalgamated the strengths of both methods to investigate complex flow phenomena, and this synergy has yielded precise and computationally efficient outcomes. However, algorithms that weakly couple the two methods tend to be intricate, raising issues of versatility, implementation, and mutual adaptation to hardware and coding structures. Thus, achieving a robust and strong coupling of FVM and SPH in a unified framework is imperative. Because the two methods employ differing boundary algorithms in Wang's work, the crucial step in establishing a strong coupling within a unified SPH framework lies in incorporating the FVM boundary algorithm into the Eulerian SPH method. In this paper, we propose a straightforward algorithm in the Eulerian SPH method, algorithmically equivalent to that in FVM and grounded in the principle of zero-order consistency. Several numerical examples, including fully and weakly compressible flows with various boundary conditions in the Eulerian SPH method, validate the stability and accuracy of the proposed algorithm.
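For intuition, here is a minimal sketch of the zero-order consistency condition that such a boundary treatment must preserve: in SPH, the kernel-weighted volumes of neighbouring particles should sum to one, and a truncated kernel support at a wall violates this unless the boundary algorithm compensates. This is a generic 1D illustration, not the paper's algorithm:

```python
import numpy as np

def cubic_spline_kernel(r, h):
    # Standard 1D cubic spline SPH kernel with support radius 2h.
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)  # 1D normalization constant
    return sigma * np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                   np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))

x = np.linspace(0.0, 1.0, 101)  # uniform particle positions
dx = x[1] - x[0]
h = 1.2 * dx
# Zero-order consistency: sum_j V_j W(x_i - x_j, h) should equal 1.
shepard = np.array([np.sum(dx * cubic_spline_kernel(xi - x, h)) for xi in x])
# Interior particles satisfy the condition to high accuracy; at the domain
# ends the kernel support is truncated and the sum drops below 1, which is
# exactly the deficit a boundary algorithm must account for.
print(shepard[50], shepard[0])  # ~1.0 in the interior, ~0.5 at the wall
```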

We present an accurate and interpretable method for answer extraction in machine reading comprehension that is reminiscent of case-based reasoning (CBR) from classical AI. Our method (CBR-MRC) builds upon the hypothesis that contextualized answers to similar questions share semantic similarities with each other. Given a test question, CBR-MRC first retrieves a set of similar cases from a nonparametric memory and then predicts an answer by selecting the span in the test context that is most similar to the contextualized representations of answers in the retrieved cases. The semi-parametric nature of our approach allows it to attribute a prediction to the specific set of evidence cases, making it a desirable choice for building reliable and debuggable QA systems. We show that CBR-MRC achieves accuracy comparable to that of large reader models and outperforms baselines by 11.5 and 8.4 EM on NaturalQuestions and NewsQA, respectively. Further, we demonstrate the ability of CBR-MRC to identify not just the correct answer tokens but also the span with the most relevant supporting evidence. Lastly, we observe that contexts for certain question types show higher lexical diversity than others and find that CBR-MRC is robust to these variations, whereas the performance of fully parametric methods drops.
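A minimal sketch of the case-based span-selection step, assuming contextualized vectors are already available; random placeholders stand in for encoder outputs (e.g. BERT representations), and the function names are illustrative:

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

# Hypothetical contextualized representations; in practice these would
# come from a frozen text encoder.
rng = np.random.default_rng(0)
case_answer_reps = rng.normal(size=(5, 768))      # answers of 5 retrieved cases
candidate_span_reps = rng.normal(size=(20, 768))  # spans in the test context

# Score each candidate span by its mean similarity to the retrieved case
# answers and predict the best-matching span; the retrieved cases double
# as the evidence attributed to the prediction.
scores = np.array([
    np.mean([cosine(s, c) for c in case_answer_reps])
    for s in candidate_span_reps
])
best_span = int(np.argmax(scores))
```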

Domain experts often possess valuable physical insights that are overlooked in fully automated decision-making processes such as Bayesian optimisation. In this article, we apply high-throughput (batch) Bayesian optimisation alongside anthropological decision theory to enable domain experts to influence the selection of optimal experiments. Our methodology exploits the hypothesis that humans are better at making discrete choices than continuous ones and enables experts to influence critical early decisions. At each iteration we solve an augmented multi-objective optimisation problem across a number of alternate solutions, maximising both the sum of their utility function values and the determinant of their covariance matrix, equivalent to their total variability. By selecting the knee point of the resulting Pareto front, we return at each iteration a set of alternate solutions that have both high utility values and are reasonably distinct, from which the expert selects one for evaluation. We demonstrate that even in the case of an uninformed practitioner, our algorithm recovers the regret of standard Bayesian optimisation.
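To make the augmented objective concrete, here is a minimal sketch of how a candidate batch might be scored on the two competing criteria, assuming an RBF kernel and a generic utility function; both are illustrative stand-ins, and the paper's exact acquisition and kernel may differ:

```python
import numpy as np

def rbf_kernel(X, lengthscale=0.5):
    # Covariance matrix of the candidate batch under an RBF kernel.
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def batch_objectives(X, utility, jitter=1e-6):
    # Objective 1: total utility of the alternate solutions.
    total_utility = float(np.sum(utility(X)))
    # Objective 2: determinant of their covariance matrix, i.e. the
    # batch's total variability; larger means more distinct alternates.
    K = rbf_kernel(X) + jitter * np.eye(len(X))
    variability = float(np.linalg.det(K))
    return total_utility, variability

# Example: five 2-D candidates scored under a toy quadratic utility.
X = np.random.default_rng(1).uniform(size=(5, 2))
print(batch_objectives(X, lambda X: -np.sum((X - 0.5) ** 2, axis=1)))
```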

In biomedical research, computational methods have become indispensable and their use is increasing, making the efficient allocation of computing resources paramount. Practitioners routinely allocate resources far in excess of what is required for batch processing jobs, leading not just to inflated wait times and costs but also to unnecessary carbon emissions. This is not without reason, however, as accurately determining resource needs is complex, affected by the nature of tools, data size, and analysis parameters, especially on popular servers that handle numerous jobs. The Galaxy platform, a web-based hub for biomedical analysis used globally by scientists, exemplifies this challenge. Serving nearly half a million registered users and managing around 2 million monthly jobs, Galaxy is growing faster than the resources at its disposal, necessitating smarter resource utilization. To address this, we have developed Total Perspective Vortex (TPV), a software package that right-sizes resource allocations for each job. TPV dynamically sets resource requirements for individual jobs and performs meta-scheduling across heterogeneous resources. It also includes a first-of-its-kind community-curated database of default resource requirements for nearly 1,000 popular bioinformatics tools. Deployments in Galaxy Australia and Galaxy Europe demonstrate its effectiveness in meta-scheduling user jobs and improving the experience of systems administrators managing Galaxy servers.
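As an illustration of what per-job right-sizing involves, here is a hypothetical rule table mapping a tool and its input size to a resource request. The tool names, scaling factors, and function are invented for this sketch and do not reflect TPV's actual configuration format:

```python
# Hypothetical right-sizing rule: look up per-tool defaults and scale
# memory with input size, with a floor so small inputs still get enough.
def right_size(tool_id: str, input_gb: float) -> dict:
    defaults = {
        "bwa_mem": {"cores": 4, "mem_gb_per_input_gb": 3.0, "min_mem_gb": 8},
        "fastqc":  {"cores": 1, "mem_gb_per_input_gb": 0.5, "min_mem_gb": 2},
    }
    rule = defaults.get(
        tool_id, {"cores": 1, "mem_gb_per_input_gb": 1.0, "min_mem_gb": 4}
    )
    mem_gb = max(rule["min_mem_gb"], rule["mem_gb_per_input_gb"] * input_gb)
    return {"cores": rule["cores"], "mem_gb": mem_gb}

print(right_size("bwa_mem", 10.0))  # {'cores': 4, 'mem_gb': 30.0}
```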

Disinformation research has proliferated in reaction to widespread false, problematic beliefs purported to explain major social phenomena. Yet while the effects of disinformation are well known, there is less consensus about its causes; the research spans several disciplines, each focusing on different pieces. This article contributes to this growing field by reviewing prevalent U.S. disinformation discourse (academic writing, media, and corporate and government narratives) and outlining the dominant understanding, or paradigm, of the disinformation problem, analyzing cross-disciplinary discourse about the content, individual, group, and institutional layers of the problem. The result is an individualistic explanation that largely blames social media, malicious individuals or nations, and irrational people. Yet this understanding has shortcomings: notably, its limited, individualistic views of truth and rationality obscure the influence of oppressive ideologies and of media and domestic actors in creating flawed worldviews and spreading disinformation. The article concludes by putting forth an alternative, sociopolitical paradigm in which subjective models of the world, largely informed by social and group identity, govern rationality and information processing, and are formed and catered to by institutional actors (corporations, media, political parties, and the government) seeking to maintain or gain legitimacy for their actions.

In many applications, it is desirable to obtain extreme eigenvalues and eigenvectors of large Hermitian matrices using efficient and compact algorithms. In particular, orthogonalization-free methods are preferred for large-scale problems: they find eigenspaces of extreme eigenvalues without explicitly computing orthogonal vectors in each iteration. For the top $p$ eigenvalues, the simplest orthogonalization-free method is to find the best rank-$p$ approximation to a positive semi-definite Hermitian matrix by solving the unconstrained Burer-Monteiro formulation. We show that the nonlinear conjugate gradient method for the unconstrained Burer-Monteiro formulation is equivalent to a Riemannian conjugate gradient method on a quotient manifold with the Bures-Wasserstein metric; its global convergence to a stationary point can therefore be proven. Numerical tests suggest that the method is efficient for computing the largest $p$ eigenvalues of large-scale matrices when those eigenvalues are distributed nearly uniformly.
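A minimal sketch of the unconstrained Burer-Monteiro formulation for the top-$p$ eigenspace, using a plain conjugate gradient solver from SciPy rather than the Riemannian method analysed in the paper:

```python
import numpy as np
from scipy.optimize import minimize

# Best rank-p approximation of a PSD matrix A via the unconstrained
# Burer-Monteiro objective f(Y) = ||A - Y Y^T||_F^2; the minimizer's
# columns span the top-p eigenspace, with no orthogonalization needed.
rng = np.random.default_rng(0)
n, p = 50, 3
M = rng.normal(size=(n, n))
A = M @ M.T  # positive semi-definite test matrix

def fun_grad(y):
    Y = y.reshape(n, p)
    R = Y @ Y.T - A
    # Gradient of ||Y Y^T - A||_F^2 is 4 (Y Y^T - A) Y.
    return np.sum(R * R), (4.0 * R @ Y).ravel()

res = minimize(fun_grad, rng.normal(size=n * p), jac=True, method="CG")
Y = res.x.reshape(n, p)
# Y Y^T now approximates the projection of A onto its top-p eigenspace.
print(np.sort(np.linalg.eigvalsh(Y @ Y.T))[-p:])  # ~ top-p eigenvalues of A
```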

We consider the application of the generalized Convolution Quadrature (gCQ) to approximate the solution of an important class of sectorial problems. The gCQ is a generalization of Lubich's Convolution Quadrature (CQ) that allows for variable time steps. The available stability and convergence theory for the gCQ requires unrealistic regularity assumptions on the data, which do not hold in many applications of interest, such as the approximation of subdiffusion equations. It is well known that for insufficiently smooth data the original CQ, with uniform steps, exhibits an order reduction close to the singularity. We generalize the analysis of the gCQ to data satisfying realistic regularity assumptions and provide sufficient conditions for stability and convergence on arbitrary sequences of time points. We consider the particular case of graded meshes and show how to choose them optimally, according to the behaviour of the data. An important advantage of the gCQ method is that it allows for a fast, memory-reduced implementation. We describe how the fast and oblivious gCQ can be implemented and illustrate our theoretical results with several numerical experiments.
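For concreteness, a minimal sketch of the kind of graded mesh in question: points $t_j = T (j/N)^{\gamma}$ cluster near the initial time, where the solution of, e.g., a subdiffusion problem is least regular. The grading law is the standard one; the optimal exponent depends on the data's regularity, as analysed in the paper:

```python
import numpy as np

# Graded time mesh on [0, T]: gamma = 1 recovers the uniform mesh, while
# gamma > 1 concentrates points near t = 0 to resolve the start-time
# singularity without shrinking the steps everywhere.
def graded_mesh(T: float, N: int, gamma: float) -> np.ndarray:
    j = np.arange(N + 1)
    return T * (j / N) ** gamma

print(graded_mesh(1.0, 8, 2.0))  # steps grow quadratically away from t = 0
```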

We introduce the new setting of open-vocabulary object 6D pose estimation, in which a textual prompt is used to specify the object of interest. In contrast to existing approaches, in our setting (i) the object of interest is specified solely through the textual prompt, (ii) no object model (e.g. CAD or video sequence) is required at inference, (iii) the object is imaged from two different viewpoints of two different scenes, and (iv) the object was not observed during the training phase. To operate in this setting, we introduce a novel approach that leverages a Vision-Language Model to segment the object of interest from two distinct scenes and to estimate its relative 6D pose. The key to our approach is a carefully devised strategy to fuse object-level information provided by the prompt with local image features, resulting in a feature space that can generalize to novel concepts. We validate our approach on a new benchmark based on two popular datasets, REAL275 and Toyota-Light, which collectively encompass 39 object instances appearing in four thousand image pairs. The results demonstrate that our approach outperforms both a well-established hand-crafted method and a recent deep learning-based baseline in estimating the relative 6D pose of objects in different scenes. Project website: //jcorsetti.github.io/oryon-website/.
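While the paper's contribution is the fusion of prompt and image features, the final relative-pose computation in pipelines of this kind typically reduces to aligning matched 3D points. A minimal sketch of that standard building block (the Kabsch alignment), shown here under the assumption of given 3D-3D correspondences rather than as the paper's exact procedure:

```python
import numpy as np

def kabsch(P, Q):
    # Least-squares rigid transform (R, t) with Q ~ R @ P + t, for
    # N x 3 arrays of matched 3D points from the two scenes.
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```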

We propose an innovative and generic methodology for analysing individual and collective behaviour through individual trajectory data. The work is motivated by the analysis of GPS trajectories of fishing vessels, collected from regulatory tracking data in the context of marine biodiversity conservation and ecosystem-based fisheries management. We build a low-dimensional latent representation of trajectories using convolutional neural networks as a non-linear mapping, by training a conditional variational auto-encoder that takes covariates into account. The posterior distributions of the latent representations can be linked to the characteristics of the actual trajectories. We compare the latent distributions of trajectories with the Bhattacharyya coefficient, which is well suited to comparing distributions, and use it to analyse the variation of each vessel's individual behaviour over time. For collective behaviour analysis, we build proximity graphs and use an extension of the stochastic block model for multiple networks, which clusters individuals based on their sets of trajectories. Applied to French fishing vessels, the method yields groups of vessels whose individual and collective behaviours exhibit spatio-temporal patterns over the period 2014-2018.
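Since a VAE's posteriors are Gaussian, the Bhattacharyya coefficient admits a closed form; a minimal sketch of the generic formula, not tied to the paper's implementation:

```python
import numpy as np

def bhattacharyya_coefficient(mu1, S1, mu2, S2):
    # Closed-form Bhattacharyya coefficient BC = exp(-D_B) between two
    # multivariate Gaussians N(mu1, S1) and N(mu2, S2); BC = 1 for
    # identical distributions and tends to 0 as they separate.
    S = 0.5 * (S1 + S2)
    diff = mu1 - mu2
    d_b = 0.125 * diff @ np.linalg.solve(S, diff) \
        + 0.5 * np.log(np.linalg.det(S)
                       / np.sqrt(np.linalg.det(S1) * np.linalg.det(S2)))
    return np.exp(-d_b)

# Example: two 2-D latent posteriors with unit covariances.
I = np.eye(2)
print(bhattacharyya_coefficient(np.zeros(2), I, np.ones(2), I))
```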

We study the statistical capacity of classical binary perceptrons with general thresholds $\kappa$. After recognizing the connection between the capacity and bilinearly indexed (bli) random processes, we utilize recent progress in studying such processes to characterize the capacity. In particular, we rely on \emph{fully lifted} random duality theory (fl RDT), established in \cite{Stojnicflrdt23}, to create a general framework for studying perceptron capacities. Successful underlying numerical evaluations are required for the framework (and ultimately the entire fl RDT machinery) to become fully operational in practice. We present results obtained in that direction and uncover that the capacity characterizations are achieved on the second (first non-trivial) level of \emph{stationarized} full lifting. The obtained results \emph{exactly} match the replica symmetry breaking predictions obtained through statistical physics replica methods in \cite{KraMez89}. Most notably, for the famous zero-threshold scenario, $\kappa=0$, we recover the well-known scaled capacity $\alpha\approx0.8330786$.
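For reference, a minimal statement of the quantity being characterized, assuming the standard formulation of the binary perceptron with margin $\kappa$ (the normalization convention is the usual one and may differ in detail from the paper's):

```latex
% Patterns x_1,\dots,x_m \in \mathbb{R}^n are stored if some binary weight
% vector satisfies every margin constraint; the capacity is the largest
% pattern-to-dimension ratio for which this remains possible.
\exists\, w \in \{-1,+1\}^n:\quad
  \frac{1}{\sqrt{n}}\, w^{\top} x_i \;\ge\; \kappa, \qquad i = 1,\dots,m,
\qquad
\alpha_c(\kappa) \;=\; \lim_{n\to\infty} \frac{m_{\max}(n)}{n}.
```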
