
We revisit the recent breakthrough result of Gkatzelis et al. on (single-winner) metric voting, which showed that the optimal distortion of 3 can be achieved by a mechanism called Plurality Matching. The rule picks an arbitrary candidate for whom a certain candidate-specific bipartite graph contains a perfect matching, and thus, it is not neutral (i.e., symmetric with respect to candidates). Subsequently, a much simpler rule called Plurality Veto was shown to achieve distortion 3 as well. This rule only constructs such a matching implicitly, but the winner depends on the order in which voters are processed, and thus, it is not anonymous (i.e., symmetric with respect to voters). We provide an intuitive interpretation of this matching by generalizing the classical notion of the (proportional) veto core in social choice theory. This interpretation opens up a number of immediate consequences. Previous methods for electing a candidate from the veto core can be interpreted simply as matching algorithms. Different election methods realize different matchings, in turn leading to different sets of candidates as winners. For a broad generalization of the veto core, we show that the generalized veto core is equal to the set of candidates who can emerge as winners under a natural class of matching algorithms reminiscent of Serial Dictatorship. Extending these matching algorithms into continuous time, we obtain a highly practical voting rule with optimal distortion 3, which is also intuitive and easy to explain: Each candidate starts off with public support equal to his plurality score. From time 0 to 1, every voter continuously brings down, at rate 1, the support of her bottom choice among not-yet-eliminated candidates. A candidate is eliminated if he is opposed by a voter after his support reaches 0. On top of being anonymous and neutral, this rule satisfies many other axioms desirable in practice.
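The continuous-time rule described above can be simulated exactly by a discrete-event sketch: between eliminations, every active candidate's support decreases linearly, so it suffices to jump to the next time at which some opposed candidate's support hits zero. The following Python rendering is illustrative, not the authors' implementation; the function name and return convention are our own.

```python
def continuous_plurality_veto(profile):
    """Event-driven simulation of the continuous-time veto rule.

    `profile` is a list of rankings (best candidate first), one per voter,
    all over the same candidate set.  Each candidate starts with support
    equal to his plurality score; from time 0 to 1 every voter pushes
    down, at rate 1, the support of her bottom choice among surviving
    candidates; a candidate opposed while at zero support is eliminated.
    Returns the set of surviving candidate(s).
    """
    candidates = set(profile[0])
    support = {c: 0.0 for c in candidates}
    for ranking in profile:
        support[ranking[0]] += 1.0            # plurality scores
    active = set(candidates)
    t = 0.0
    while t < 1.0 and len(active) > 1:
        # each voter opposes her bottom choice among active candidates
        opposers = {c: 0 for c in active}
        for ranking in profile:
            bottom = next(c for c in reversed(ranking) if c in active)
            opposers[bottom] += 1
        # eliminate anyone already at zero support who is opposed
        hit = {c for c in active if support[c] <= 1e-12 and opposers[c] > 0}
        if hit:
            active -= hit
            continue
        # advance to the next event: an opposed candidate reaching zero
        dt = min([support[c] / opposers[c] for c in active if opposers[c] > 0]
                 + [1.0 - t])
        for c in active:
            support[c] -= opposers[c] * dt
        t += dt
    return active
```

For example, with three voters ranking (a, b, c), (a, b, c), and (b, c, a), candidate c starts at support 0 and is opposed, so he is eliminated immediately; b is then depleted by two voters and falls at time 1/2, leaving a as the winner.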

Related Content

We study a change point model based on a stochastic partial differential equation (SPDE) corresponding to the heat equation governed by the weighted Laplacian $\Delta_\vartheta = \nabla\vartheta\nabla$, where $\vartheta=\vartheta(x)$ is a space-dependent diffusivity. As a basic problem the domain $(0,1)$ is considered with a piecewise constant diffusivity with a jump at an unknown point $\tau$. Based on local measurements of the solution in space with resolution $\delta$ over a finite time horizon, we construct a simultaneous M-estimator for the diffusivity values and the change point. The change point estimator converges at rate $\delta$, while the diffusivity constants can be recovered with convergence rate $\delta^{3/2}$. Moreover, when the diffusivity parameters are known and the jump height vanishes with the spatial resolution tending to zero, we derive a limit theorem for the change point estimator and identify the limiting distribution. For the mathematical analysis, a precise understanding of the SPDE with discontinuous $\vartheta$, tight concentration bounds for quadratic functionals in the solution, and a generalisation of classical M-estimators are developed.
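A toy analogue of the simultaneous M-estimation can be stated for direct observations of a piecewise-constant signal: scan over candidate jump locations, fit a constant on each side by least squares, and keep the split with the smallest residual sum of squares. This is a deliberate caricature of the SPDE setting (where the role of the observations is played by local statistics of the solution); all names below are illustrative.

```python
import numpy as np

def changepoint_lse(x, y):
    """Least-squares fit of a piecewise-constant function with one jump.

    Scans all splits of the sorted data, fits the mean on each side, and
    returns (tau_hat, theta_minus, theta_plus): the estimated jump
    location (midpoint between neighbouring design points) and the two
    level estimates.
    """
    order = np.argsort(x)
    x = np.asarray(x, float)[order]
    y = np.asarray(y, float)[order]
    best = None
    for k in range(1, len(x)):               # jump between x[k-1] and x[k]
        m1, m2 = y[:k].mean(), y[k:].mean()
        sse = ((y[:k] - m1) ** 2).sum() + ((y[k:] - m2) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, 0.5 * (x[k - 1] + x[k]), m1, m2)
    return best[1], best[2], best[3]
```

In the paper's setting the localisation rate of the change point estimator is driven by the spatial resolution $\delta$ of the measurements, which is mirrored here by the grid spacing limiting how precisely the split can be placed.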

Encrypted mempools are a class of solutions aimed at preventing or reducing negative externalities of MEV extraction using cryptographic privacy. Mempool encryption aims to hide information related to pending transactions until a block including the transactions is committed, targeting the prevention of frontrunning and similar behaviour. Among the various methods of encryption, threshold schemes are particularly interesting for the design of MEV mitigation mechanisms, as their distributed nature and minimal hardware requirements harmonize with a broader goal of decentralization. This work looks beyond the formal and technical cryptographic aspects of threshold encryption schemes to focus on the market and incentive implications of implementing encrypted mempools as MEV mitigation techniques. In particular, this paper argues that deploying such protocols without proper consideration and understanding of their market impact invites several undesired outcomes; its ultimate goal is to stimulate further analysis of this class of solutions beyond purely cryptographic considerations. Included in the paper is an overview of a series of problems, various candidate solutions in the form of mempool encryption techniques with a focus on threshold encryption, potential drawbacks to these solutions, and Osmosis as a case study. The paper targets a broad audience and remains agnostic to blockchain design where possible while drawing mostly from financial examples.

A key challenge in Bayesian decentralized data fusion is the `rumor propagation' or `double counting' phenomenon, where previously sent data circulates back to its sender. It is often addressed by approximate methods like covariance intersection (CI), which takes a weighted average of the estimates to compute the bound. The problem is that this bound is not tight, i.e., the estimate is often over-conservative. In this paper, we show that by exploiting the probabilistic independence structure in multi-agent decentralized fusion problems, a tighter bound can be found using (i) an expansion to the CI algorithm that uses multiple (non-monolithic) weighting factors instead of one (monolithic) factor in the original CI and (ii) a general optimization scheme that is able to compute optimal bounds and fully exploit an arbitrary dependency structure. We compare our methods and show that on a simple problem, they converge to the same solution. We then test our new non-monolithic CI algorithm on a large-scale target tracking simulation and show that it achieves a tighter bound and a more accurate estimate compared to the original monolithic CI.
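The classical (monolithic) CI baseline that the abstract extends fuses two estimates with unknown cross-correlation by a convex combination of their information matrices, with a single scalar weight chosen to minimise the trace of the fused covariance. A minimal sketch, with a simple grid search over the weight (the paper's non-monolithic variant would replace this scalar by multiple weighting factors):

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=100):
    """Monolithic CI fusion of two estimates (x1, P1) and (x2, P2).

    Fused information is omega * P1^{-1} + (1 - omega) * P2^{-1}; the
    weight omega is chosen on a grid to minimise the trace of the fused
    covariance.  Returns the fused mean and covariance.
    """
    P1_inv, P2_inv = np.linalg.inv(P1), np.linalg.inv(P2)
    best = None
    for omega in np.linspace(1e-3, 1.0 - 1e-3, n_grid):
        P = np.linalg.inv(omega * P1_inv + (1.0 - omega) * P2_inv)
        if best is None or np.trace(P) < best[0]:
            x = P @ (omega * P1_inv @ x1 + (1.0 - omega) * P2_inv @ x2)
            best = (np.trace(P), x, P)
    return best[1], best[2]
```

The guarantee of CI is consistency for any cross-correlation, at the price of conservatism; for two diagonal covariances diag(1, 4) and diag(4, 1), the optimal weight is 1/2 and the fused trace drops to 3.2, below either input's trace of 5.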

Laplace learning is a popular machine learning algorithm for finding missing labels from a small number of labelled feature vectors using the geometry of a graph. More precisely, Laplace learning is based on minimising a graph-Dirichlet energy, equivalently a discrete Sobolev $\Wkp{2}{1}$ semi-norm, constrained to taking the values of known labels on a given subset. The variational problem is asymptotically ill-posed as the number of unlabelled feature vectors goes to infinity while the number of given labels stays finite, due to a lack of regularity in minimisers of the continuum Dirichlet energy in any dimension higher than one. In particular, continuum minimisers are not continuous. One solution is to consider higher-order regularisation, which is the analogue of minimising Sobolev $\Wkp{s}{2}$ semi-norms. In this paper we consider the asymptotics of minimising a graph variant of the Sobolev $\Wkp{s}{2}$ semi-norm with pointwise constraints. We show that, as expected, one needs $s>d/2$ where $d$ is the dimension of the data manifold. We also show that there must be an upper bound on the connectivity of the graph; that is, highly connected graphs lead to degenerate behaviour of the minimiser even when $s>d/2$.
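Minimising the graph-Dirichlet energy subject to label constraints has a well-known closed form: the constrained minimiser is harmonic on the unlabelled nodes, so it solves a linear system in the unlabelled block of the graph Laplacian. A minimal sketch of this baseline (the function name and interface are illustrative):

```python
import numpy as np

def laplace_learning(W, labeled_idx, labels):
    """Graph-based label propagation by Dirichlet-energy minimisation.

    `W` is a symmetric nonnegative weight matrix.  The minimiser of
    u^T L u subject to u agreeing with `labels` on `labeled_idx` is
    harmonic on the unlabelled nodes: L_uu u_u = -L_ul g.
    """
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W                  # graph Laplacian
    labeled_idx = list(labeled_idx)
    unlabeled = [i for i in range(n) if i not in set(labeled_idx)]
    L_uu = L[np.ix_(unlabeled, unlabeled)]
    L_ul = L[np.ix_(unlabeled, labeled_idx)]
    u = np.zeros(n)
    u[labeled_idx] = labels
    u[unlabeled] = np.linalg.solve(L_uu, -L_ul @ np.asarray(labels, float))
    return u
```

On a path graph 0-1-2 with endpoints labelled 0 and 1, the harmonic interpolant assigns 1/2 to the middle node, which is exactly the linear interpolation one expects in dimension one (where, as the abstract notes, the continuum problem is still well behaved).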

We study dynamic algorithms in the model of algorithms with predictions. We assume the algorithm is given imperfect predictions regarding future updates, and we ask how such predictions can be used to improve the running time. This can be seen as a model interpolating between classic online and offline dynamic algorithms. Our results give smooth tradeoffs between these two extreme settings. First, we give algorithms for incremental and decremental transitive closure and approximate APSP that take as an additional input a predicted sequence of updates (edge insertions, or edge deletions, respectively). They preprocess it in $\tilde{O}(n^{(3+\omega)/2})$ time, and then handle updates in $\tilde{O}(1)$ worst-case time and queries in $\tilde{O}(\eta^2)$ worst-case time. Here $\eta$ is an error measure that can be bounded by the maximum difference between the predicted and actual insertion (deletion) time of an edge, i.e., by the $\ell_\infty$-error of the predictions. The second group of results concerns fully dynamic problems with vertex updates, where the algorithm has access to a predicted sequence of the next $n$ updates. We show how to solve fully dynamic triangle detection, maximum matching, single-source reachability, and more, in $O(n^{\omega-1}+n\eta_i)$ worst-case update time. Here $\eta_i$ denotes how much earlier the $i$-th update occurs than predicted. Our last result is a reduction that transforms a worst-case incremental algorithm without predictions into a fully dynamic algorithm which is given a predicted deletion time for each element at the time of its insertion. As a consequence we can, e.g., maintain fully dynamic exact APSP with such predictions in $\tilde{O}(n^2)$ worst-case vertex insertion time and $\tilde{O}(n^2 (1+\eta_i))$ worst-case vertex deletion time (for the prediction error $\eta_i$ defined as above).

In this paper, we present a novel learning-based shared control framework. This framework deploys first-order Dynamical Systems (DS) as motion generators providing the desired reference motion, and a Variable Stiffness Dynamical System (VSDS) \cite{chen2021closed} for haptic guidance. We show how to shape several features of our controller in order to achieve authority allocation and local motion refinement, in addition to the inherent ability of the controller to automatically synchronize with the human state during joint task execution. We validate our approach in a teleoperated task scenario, where we also showcase the ability of our framework to deal with situations that require updating task knowledge due to possible changes in the task scenario or in the environment. Finally, we conduct a user study to compare the performance of our VSDS controller for guidance generation to two state-of-the-art controllers in a target reaching task. The results show that our VSDS controller has the highest success rate of task execution among all conditions. Moreover, our VSDS controller helps reduce the execution time and task load significantly, and was selected as the most favorable controller by participants.
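The reference-motion layer of such a framework is a first-order DS of the form xdot = -A (x - x*), which contracts to the target from any start state. The sketch below, with an explicit Euler integration and an illustrative gain matrix A (not the paper's learned dynamics), shows the kind of generator the VSDS guidance is built on top of:

```python
import numpy as np

def ds_motion(x0, target, A=None, dt=0.01, steps=1000):
    """Integrate a first-order DS motion generator xdot = -A (x - x*).

    `A` should be positive definite so the flow contracts to `target`;
    here it defaults to the identity.  Returns the integrated path as an
    array of shape (steps + 1, dim).
    """
    x = np.asarray(x0, float).copy()
    target = np.asarray(target, float)
    A = np.eye(len(x)) if A is None else np.asarray(A, float)
    path = [x.copy()]
    for _ in range(steps):
        x = x - dt * (A @ (x - target))     # explicit Euler step
        path.append(x.copy())
    return np.array(path)
```

Because the generator is a state-dependent vector field rather than a time-indexed trajectory, it automatically re-synchronizes with wherever the human moves the robot, which is the property the abstract highlights for joint task execution.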

Moderate calibration, the expected event probability among observations with predicted probability $\pi$ being equal to $\pi$, is a desired property of risk prediction models. Current graphical and numerical techniques for evaluating moderate calibration of clinical prediction models are mostly based on smoothing or grouping the data. Moreover, there is no widely accepted inferential method for the null hypothesis that a model is moderately calibrated. In this work, we discuss recently developed, and propose novel, methods for the assessment of moderate calibration for binary responses. The methods are based on the limiting distributions of functions of standardized partial sums of prediction errors converging to the corresponding laws of Brownian motion. The novel method relies on well-known properties of the Brownian bridge which enable joint inference on mean and moderate calibration, leading to a unified 'bridge' test for detecting miscalibration. Simulation studies indicate that the bridge test is more powerful, often substantially, than the alternative test. As a case study we consider a prediction model for short-term mortality after a heart attack. Moderate calibration can be assessed without requiring arbitrary grouping of data or using methods that require tuning of parameters. We suggest graphical presentation of the partial sum curves and reporting the strength of evidence indicated by the proposed methods when examining model calibration.
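The basic object behind these tests, the standardized partial sum of prediction errors, is easy to compute: order the observations by predicted probability and cumulate the errors, standardized by the model-implied variance. This is an illustrative computation of the curve (not the authors' exact statistic, which involves further functionals of this process):

```python
import numpy as np

def partial_sum_curve(p, y):
    """Standardized partial sums of prediction errors, ordered by risk.

    `p` are predicted event probabilities, `y` the binary outcomes.
    Under moderate calibration the returned curve behaves asymptotically
    like Brownian motion, which is what the limit-based tests exploit.
    """
    p, y = np.asarray(p, float), np.asarray(y, float)
    order = np.argsort(p)
    p, y = p[order], y[order]
    errors = y - p
    scale = np.sqrt(np.sum(p * (1.0 - p)))   # model-implied std. dev.
    return np.cumsum(errors) / scale
```

Plotting this curve against the cumulative fraction of observations gives the graphical presentation the abstract recommends: systematic drift away from zero indicates regions of miscalibration, while a well-calibrated model keeps the curve within the fluctuation band of Brownian motion.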

The central problem we address in this work is estimation of the parameter support set S, the set of indices corresponding to nonzero parameters, in the context of a sparse parametric likelihood model for count-valued multivariate time series. We develop a computationally intensive algorithm that performs the estimation by aggregating support sets obtained by applying the LASSO to data subsamples. Our approach is to identify several well-fitting candidate models and estimate S by the most frequently-used parameters, thus \textit{aggregating} candidate models rather than selecting a single candidate deemed optimal in some sense. While our method is broadly applicable to any selection problem, we focus on the generalized vector autoregressive model class, and in particular the Poisson case, because of (i) the difficulty of support estimation under the complex dependence in the data, (ii) recent work applying the LASSO in this context, and (iii) interesting applications in network recovery from discrete multivariate time series. We establish benchmark methods based on the LASSO and present empirical results demonstrating the superior performance of our method. Additionally, we present an application estimating ecological interaction networks from paleoclimatology data.
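The aggregation idea, in its simplest i.i.d. regression form, is: fit the LASSO on many subsamples, record which coefficients are nonzero in each fit, and keep the indices selected in at least a given fraction of the fits. The sketch below uses a plain coordinate-descent LASSO and illustrative tuning parameters; the paper's setting (subsampling a dependent time series and a Poisson likelihood) requires more care than this caricature.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Plain coordinate-descent LASSO (illustrative helper)."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding coordinate j
            r = y - X @ beta + X[:, j] * beta[j]
            z = X[:, j] @ r
            beta[j] = np.sign(z) * max(abs(z) - lam, 0.0) / col_sq[j]
    return beta

def aggregate_support(X, y, lam, n_sub=50, frac=0.7, thresh=0.6, seed=0):
    """Estimate the support by aggregating LASSO fits on subsamples.

    Keeps the indices whose coefficient is nonzero in at least `thresh`
    of `n_sub` fits, each on a random subsample of size frac * n.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    counts = np.zeros(X.shape[1])
    for _ in range(n_sub):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        counts += lasso_cd(X[idx], y[idx], lam) != 0
    return np.flatnonzero(counts / n_sub >= thresh)
```

The selection-frequency threshold is what distinguishes aggregation from picking a single "best" model: coefficients that appear only in occasional, unstable fits are filtered out.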

We develop a statistical inference method, with uniform confidence bands, for an optimal transport map between distributions on the real line. The concept of optimal transport (OT) is used to measure distances between distributions, and OT maps can be used to construct such distances. OT has been applied in many fields in recent years, and its statistical properties have attracted much interest. In particular, since the OT map is a function, uniform norm-based statistical inference is significant for visualization and interpretation. In this study, we derive a limit distribution of the uniform norm of the estimation error of the OT map, and then develop a uniform confidence band based on it. In addition to our limit theorem, we develop a smoothed bootstrap method, with a validity guarantee on the asymptotic coverage probability of the confidence band. Our proof is based on the functional delta method and the representation of OT maps on the reals.
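The representation underlying the one-dimensional case is that, for quadratic cost, the OT map is the composition of the target quantile function with the source CDF, $T = F_{\mathrm{target}}^{-1} \circ F_{\mathrm{source}}$. A plug-in empirical sketch (the paper's inference adds smoothing and bootstrap bands on top of such an estimator):

```python
import numpy as np

def empirical_ot_map(source, target, x):
    """Plug-in estimator of the 1-D OT map for quadratic cost.

    Evaluates T(x) = F_target^{-1}(F_source(x)) using the empirical CDF
    of `source` and the empirical quantile function of `target`.
    """
    src = np.sort(np.asarray(source, float))
    F = np.searchsorted(src, x, side="right") / len(src)   # empirical CDF
    return np.quantile(np.asarray(target, float), np.clip(F, 0.0, 1.0))
```

For instance, if the target sample is a shift of the source sample by a constant c, the estimated map is approximately x + c, and a uniform band around the estimated curve quantifies how far the true map can deviate from it simultaneously over all x.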

Nonparametric estimators for the mean and the covariance functions of functional data are proposed. The setup covers a wide range of practical situations. The random trajectories are not necessarily differentiable, have unknown regularity, and are measured with error at discrete design points. The measurement error could be heteroscedastic. The design points could be either randomly drawn or common for all curves. The estimators depend on the local regularity of the stochastic process generating the functional data. We consider a simple estimator of this local regularity which exploits the replication and regularization features of functional data. Next, we use the "smoothing first, then estimate" approach for the mean and the covariance functions. The estimators can be applied with either sparsely or densely sampled curves, are easy to calculate and to update, and perform well in simulations. Simulations built upon a real data example illustrate the effectiveness of the new approach.
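The "smoothing first, then estimate" approach for the mean function amounts to smoothing each discretely observed noisy curve onto a common grid and then averaging the smoothed curves pointwise. A minimal sketch using a Nadaraya-Watson kernel smoother; the fixed bandwidth `h` is an illustrative stand-in for the regularity-adaptive choice the abstract describes:

```python
import numpy as np

def mean_function_estimate(times, values, grid, h=0.1):
    """Smooth each curve onto `grid`, then average across curves.

    `times[i]` and `values[i]` are the design points and noisy
    observations of curve i; each curve is smoothed with a Gaussian
    Nadaraya-Watson estimator of bandwidth `h`, and the smoothed curves
    are averaged pointwise to estimate the mean function.
    """
    smoothed = []
    for t, v in zip(times, values):
        t, v = np.asarray(t, float), np.asarray(v, float)
        K = np.exp(-0.5 * ((grid[:, None] - t[None, :]) / h) ** 2)
        smoothed.append((K @ v) / K.sum(axis=1))
    return np.mean(smoothed, axis=0)
```

Because each curve is smoothed separately before averaging, the same code applies whether curves share a common design or have their own random design points, matching the sparse-to-dense flexibility claimed above.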
