
Selling a perfectly divisible item to potential buyers is a fundamental task with apparent applications to pricing communication bandwidth and cloud computing services. Surprisingly, despite the rich literature on single-item auctions, revenue maximization when selling a divisible item is a much less understood objective. We introduce a Bayesian setting, in which the potential buyers have concave valuation functions (defined for each possible item fraction) that are randomly chosen according to known probability distributions. Extending the sequential posted pricing paradigm, we focus on mechanisms that use linear pricing, charging a fixed price for the whole item and proportional prices for fractions of it. Our goal is to understand the power of such mechanisms by bounding the gap between the expected revenue that can be achieved by the best among these mechanisms and the maximum expected revenue that can be achieved by any mechanism assuming mild restrictions on the behavior of the buyers. Under regularity assumptions for the probability distributions, we show that this revenue gap depends only logarithmically on a natural parameter characterizing the valuation functions and the number of agents. Our results follow by bounding the objective value of a mathematical program that maximizes the ex-ante relaxation of optimal revenue under linear pricing revenue constraints.
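As a rough illustration of the linear-pricing mechanisms studied here, the following sketch simulates sequential posted pricing of a divisible item. The square-root valuation form, the uniform distribution of the valuation parameter, and all function names are assumptions for the toy, not the paper's model.

```python
import random

def buyer_demand(theta: float, price: float) -> float:
    """Utility-maximizing fraction for a buyer with concave valuation
    v(x) = theta * sqrt(x) facing linear price `price` per unit:
    maximize theta*sqrt(x) - price*x over x in [0, 1].
    The first-order condition gives x* = (theta / (2*price))**2, capped at 1."""
    return min(1.0, (theta / (2.0 * price)) ** 2)

def expected_revenue(price: float, n_buyers: int, rng: random.Random,
                     n_trials: int = 2000) -> float:
    """Monte Carlo estimate of expected revenue when buyers arrive
    sequentially and each takes their utility-maximizing share of
    whatever fraction of the item remains."""
    total = 0.0
    for _ in range(n_trials):
        remaining = 1.0
        for _ in range(n_buyers):
            theta = rng.uniform(0.5, 2.0)  # assumed valuation distribution
            x = min(buyer_demand(theta, price), remaining)
            total += price * x
            remaining -= x
            if remaining <= 0.0:
                break
    return total / n_trials
```

One could then grid-search over `price` to approximate the best linear-pricing mechanism in this toy setting.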

Related content

Gradient-boosted decision tree algorithms are increasingly used in actuarial applications, as they show superior predictive performance over traditional generalized linear models. Many improvements and refinements of the original gradient boosting machine algorithm exist. We present in a unified notation, and contrast, the existing point and probabilistic gradient-boosted decision tree algorithms: GBM, XGBoost, DART, LightGBM, CatBoost, EGBM, PGBM, XGBoostLSS, cyclic GBM, and NGBoost. In a comprehensive numerical study, we compare their performance on five publicly available claim frequency and severity datasets of various sizes, comprising different numbers of (high-cardinality) categorical variables. We explain how varying exposure-to-risk can be handled with boosting in frequency models. We compare the algorithms on the basis of computational efficiency, predictive performance, and model adequacy. LightGBM and XGBoostLSS win in terms of computational efficiency. The fully interpretable EGBM achieves predictive performance competitive with that of the black-box algorithms considered. We find that there is no trade-off between model adequacy and predictive accuracy: both are achievable simultaneously.
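The standard way to handle varying exposure-to-risk in a frequency model is a log-exposure offset in a Poisson boosting objective (passed as `base_margin` in XGBoost or `init_score` in LightGBM). A minimal stdlib sketch of the gradient and Hessian such an objective supplies, with hypothetical function names:

```python
import math

def poisson_grad_hess(y, raw_score, log_exposure):
    """Gradient and Hessian of the Poisson negative log-likelihood
    with a log-exposure offset, as used as a boosting objective:
        mean  mu_i = exposure_i * exp(raw_score_i)
        nll_i = mu_i - y_i * (raw_score_i + log_exposure_i) + const,
    so  grad_i = mu_i - y_i  and  hess_i = mu_i."""
    grads, hesss = [], []
    for yi, fi, li in zip(y, raw_score, log_exposure):
        mu = math.exp(fi + li)
        grads.append(mu - yi)
        hesss.append(mu)
    return grads, hesss

def constant_leaf_step(y, raw_score, log_exposure):
    """One Newton (constant-leaf) boosting step: the optimal constant
    added to every raw score is -sum(grad) / sum(hess)."""
    g, h = poisson_grad_hess(y, raw_score, log_exposure)
    return -sum(g) / sum(h)
```

Because exposure enters only through the offset, a tree split is judged on claims-per-exposure rather than raw counts, which is the behavior the paper's frequency models need.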

Many inverse problems are ill-posed and need to be complemented by prior information that restricts the class of admissible models. Bayesian approaches encode this information as prior distributions that impose generic properties on the model such as sparsity, non-negativity or smoothness. However, for complex structured models such as images, graphs or three-dimensional (3D) objects, generic prior distributions tend to favor models that differ markedly from those observed in the real world. Here we explore the use of diffusion models as priors that are combined with experimental data within a Bayesian framework. We use 3D point clouds to represent 3D objects such as household items or biomolecular complexes formed from proteins and nucleic acids. We train diffusion models that generate coarse-grained 3D structures at a medium resolution and integrate these with incomplete and noisy experimental data. To demonstrate the power of our approach, we focus on the reconstruction of biomolecular assemblies from cryo-electron microscopy (cryo-EM) images, which is an important inverse problem in structural biology. We find that posterior sampling with diffusion model priors allows for 3D reconstruction from very sparse, low-resolution and partial observations.
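The mechanics of posterior sampling with a learned prior score can be seen in a one-dimensional toy: unadjusted Langevin dynamics driven by the sum of a prior score and a likelihood score. Here both are Gaussian so the exact posterior is known; in the paper's setting the prior score would come from the diffusion model instead. All names and settings below are illustrative assumptions.

```python
import math
import random

def posterior_langevin(y, sigma_lik, n_steps=20000, eps=1e-3, seed=0):
    """Unadjusted Langevin sampling from p(x|y) ∝ p(y|x) p(x) in 1D,
    standing in for posterior sampling with a diffusion-model prior.
    Prior: x ~ N(0, 1).  Likelihood: y ~ N(x, sigma_lik**2).
    Returns the mean of the second half of the chain."""
    rng = random.Random(seed)
    x, samples = 0.0, []
    for t in range(n_steps):
        score_prior = -x                         # d/dx log N(x; 0, 1)
        score_lik = (y - x) / sigma_lik ** 2     # d/dx log N(y; x, sigma^2)
        x += eps * (score_prior + score_lik) + math.sqrt(2 * eps) * rng.gauss(0, 1)
        if t > n_steps // 2:
            samples.append(x)
    return sum(samples) / len(samples)
```

For `y = 2` and `sigma_lik = 1` the exact posterior is N(1, 1/2), so the chain's mean should hover near 1.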

We study the recovery of one-dimensional semipermeable barriers for a stochastic process in a planar domain. The considered process acts like Brownian motion when away from the barriers and is reflected upon contact until a sufficient but random amount of interaction has occurred, determined by the permeability, after which it passes through. Given a sequence of samples, we ask when one can determine the location and shape of the barriers. This paper identifies several different recovery regimes, determined by the available observation period and the time between samples, with qualitatively different behavior. The observation period $T$ dictates whether the full barriers or only certain pieces can be recovered, and the sampling rate significantly influences the convergence rate as $T\to \infty$. This rate turns out to be polynomial for fixed-frequency data, but exponentially fast in a high-frequency regime. Further, the environment's impact on the difficulty of the problem is quantified using interpretable parameters in the recovery guarantees, and is found to also be regime-dependent. For instance, the curvature of the barriers affects the convergence rate for fixed-frequency data, but becomes irrelevant when $T\to \infty$ with high-frequency data. The results are accompanied by explicit algorithms, and we conclude by illustrating the application to real-life data.
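To fix intuition for the process being sampled, here is a one-dimensional caricature (not the paper's planar setting or its recovery algorithm): a random walk that is reflected at a barrier and passes through after a geometric number of contacts governed by the permeability. Function names and parameters are assumptions for the toy.

```python
import random

def sample_path(n_steps, kappa, dt=0.01, x0=1.0, seed=0):
    """1D toy of a semipermeable barrier at 0: the walker moves like
    Brownian motion (Gaussian increments), is reflected on contact,
    and passes through with probability kappa at each contact, so the
    number of contacts before passage is geometric."""
    rng = random.Random(seed)
    x, side = x0, 1  # side = +1 above the barrier, -1 below
    path = []
    for _ in range(n_steps):
        new = x + rng.gauss(0.0, dt ** 0.5)
        if side * new < 0:            # attempted crossing of the barrier
            if rng.random() < kappa:  # barrier lets the walker through
                side, x = -side, new
            else:                     # reflected back to the same side
                x = side * abs(new)
        else:
            x = new
        path.append(x)
    return path
```

With `kappa = 0` the barrier is impermeable and the path never changes sign; larger `kappa` makes crossings easier, which is the quantity the recovery problem must infer from discrete samples.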

Preconditioned eigenvalue solvers offer the possibility to incorporate preconditioners for the solution of large-scale eigenvalue problems, as they arise from the discretization of partial differential equations. The convergence analysis of such methods is intricate. Even for the relatively simple preconditioned inverse iteration (PINVIT), which targets the smallest eigenvalue of a symmetric positive definite matrix, the celebrated analysis by Neymeyr is highly nontrivial and only yields convergence if the starting vector is fairly close to the desired eigenvector. In this work, we prove a new non-asymptotic convergence result for a variant of PINVIT. Our proof proceeds by analyzing an equivalent Riemannian steepest descent method and leveraging convexity-like properties. We show a convergence rate that nearly matches the one of PINVIT. As a major benefit, we require a condition on the starting vector that tends to be less stringent. This improved global convergence property is demonstrated for two classes of preconditioners with theoretical bounds and a range of numerical experiments.
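A minimal dense sketch of the PINVIT update analyzed here, for the smallest eigenvalue of a symmetric positive definite matrix. For illustration the test below uses the exact inverse as preconditioner; in practice $P^{-1}$ only approximates $A^{-1}$, which is exactly the regime the convergence analysis addresses.

```python
def pinvit(A, P_inv, x, n_iters=200):
    """Preconditioned inverse iteration (PINVIT) for the smallest
    eigenvalue of a symmetric positive definite matrix A, given as
    lists of lists:
        x <- x - P_inv (A x - rho(x) x),  then normalize,
    where rho(x) = x^T A x / x^T x is the Rayleigh quotient."""
    def matvec(M, v):
        return [sum(Mi[j] * v[j] for j in range(len(v))) for Mi in M]
    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))
    for _ in range(n_iters):
        Ax = matvec(A, x)
        rho = dot(x, Ax) / dot(x, x)
        r = [axi - rho * xi for axi, xi in zip(Ax, x)]   # residual
        d = matvec(P_inv, r)                             # preconditioned step
        x = [xi - di for xi, di in zip(x, d)]
        norm = dot(x, x) ** 0.5
        x = [xi / norm for xi in x]
    Ax = matvec(A, x)
    return dot(x, Ax) / dot(x, x), x
```

With the exact inverse preconditioner the update reduces to scaled inverse power iteration, so the Rayleigh quotient converges to the smallest eigenvalue.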

Feature attributions attempt to highlight which inputs drive predictive power. Good attributions or explanations are thus those that select inputs retaining this predictive power; accordingly, evaluations of explanations score their quality of prediction. However, for a class of explanations called encoding explanations, evaluations produce scores better than what appears possible from the values the explanation reveals. Probing for encoding remains a challenge because there is no general characterization of where the extra predictive power comes from. We develop a definition of encoding that identifies this extra predictive power via conditional dependence and show that the definition fits existing examples of encoding. This definition implies that, in contrast to encoding explanations, non-encoding explanations contain all the informative inputs used to produce the explanation, giving them a "what you see is what you get" property that makes them transparent and simple to use. Next, we prove that existing scores (ROAR, FRESH, EVAL-X) do not rank non-encoding explanations above encoding ones, and we develop STRIPE-X, which ranks them correctly. After empirically demonstrating the theoretical insights, we use STRIPE-X to show that, despite being prompted to produce non-encoding explanations for a sentiment analysis task, an LLM generates explanations that encode.
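A deliberately extreme toy makes the encoding phenomenon concrete (this is an illustration of the concept, not STRIPE-X or any score from the paper; all names are hypothetical). The explanation below reveals nothing informative in the input values, yet its selection pattern alone recovers the label, i.e. the mask is conditionally dependent on the label:

```python
def encoding_explanation(x, y):
    """A deliberately 'encoding' explanation: it keeps the feature at
    position y, so the selection pattern itself leaks the label even
    though every revealed value is identical (here, all zeros)."""
    return [i == y for i in range(len(x))]  # boolean keep-mask

def predict_from_mask(mask):
    """A probe that reads only the mask, never the input values."""
    return mask.index(True)

# Inputs are constant, so the values carry zero information about y;
# any predictive power must come from the mask itself.
data = [([0.0, 0.0, 0.0], y) for y in (0, 1, 2)]
preds = [predict_from_mask(encoding_explanation(x, y)) for x, y in data]
```

A value-based evaluation would credit this explanation with perfect predictive power, which is exactly the failure mode a proper encoding probe must detect.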

A statistical network model with overlapping communities can be generated as a superposition of mutually independent random graphs of varying size. The model is parameterized by the number of nodes, the number of communities, and the joint distribution of the community size and the edge probability. This model admits sparse parameter regimes with power-law limiting degree distributions and non-vanishing clustering coefficients. This article presents large-scale approximations of clique and cycle frequencies for graph samples generated by the model, which are valid for regimes with unbounded numbers of overlapping communities. Our results reveal the growth rates of these subgraph frequencies and show that their theoretical densities can be reliably estimated from data.
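A small simulation of the superposition model and one of the subgraph statistics discussed (triangles, i.e. 3-cliques) can be written directly from the model description; the fixed community size and edge probability below are arbitrary choices for illustration.

```python
import random
from itertools import combinations

def sample_graph(n, n_communities, size_dist, edge_prob, seed=0):
    """Superposition model: each community picks a random node subset
    of size drawn from size_dist and an Erdos-Renyi graph on it; the
    union of all community edges forms the observed network."""
    rng = random.Random(seed)
    edges = set()
    for _ in range(n_communities):
        k = size_dist(rng)
        members = rng.sample(range(n), min(k, n))
        for u, v in combinations(members, 2):
            if rng.random() < edge_prob:
                edges.add((min(u, v), max(u, v)))
    return edges

def triangle_count(n, edges):
    """Count 3-cliques by checking closure over adjacency sets."""
    adj = {i: set() for i in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return sum(1 for u, v, w in combinations(range(n), 3)
               if v in adj[u] and w in adj[u] and w in adj[v])
```

Repeating such draws over growing `n` is how one would empirically check the growth rates of subgraph frequencies that the article derives.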

We prove, for stably computably enumerable formal systems, direct analogues of the first and second incompleteness theorems of G\"odel. A typical stably computably enumerable set is the set of Diophantine equations with no integer solutions, and in particular such sets are generally not computably enumerable. This therefore gives the first extension of the second incompleteness theorem to formal systems that are not classically computably enumerable. Let us motivate this with a somewhat physical application. Let $\mathcal{H}$ be the suitable infinite time limit (stabilization in the sense of the paper) of the mathematical output of humanity, specializing to first-order sentences in the language of arithmetic (for simplicity), and understood as a formal system. Suppose that all the relevant physical processes in the formation of $\mathcal{H}$ are Turing computable. Then as defined $\mathcal{H}$ may \emph{not} be computably enumerable, but it is stably computably enumerable. Thus, the classical G\"odel disjunction applied to $\mathcal{H}$ is meaningless, but applying our incompleteness theorems to $\mathcal{H}$ we obtain a sharper version of G\"odel's disjunction: assuming $\mathcal{H} \vdash PA$, either $\mathcal{H}$ is not stably computably enumerable, or $\mathcal{H}$ is not 1-consistent (in particular not sound), or $\mathcal{H}$ cannot prove a certain true statement of arithmetic (and cannot disprove it if in addition $\mathcal{H}$ is 2-consistent).

Delay Tolerant Networking (DTN) aims to address a myriad of significant networking challenges that appear in time-varying settings, such as mobile and satellite networks, wherein changes in network topology are frequent and often subject to environmental constraints. Within this paradigm, routing problems are often solved by extending classical graph-theoretic path finding algorithms, such as the Bellman-Ford or Floyd-Warshall algorithms, to the time-varying setting; such extensions are simple to understand, but they have strict optimality criteria and can exhibit non-polynomial scaling. Acknowledging this, we study time-varying shortest path problems on metric graphs whose vertices are traced by semi-algebraic curves. As an exemplary application, we establish a polynomial upper bound on the number of topological critical events encountered by a set of $n$ satellites moving along elliptic curves in low Earth orbit (per orbital period). Experimental evaluations on networks derived from STARLINK satellite TLEs demonstrate that not only does this geometric framework allow for routing schemes between satellites that require recomputation an order of magnitude less often than graph-based methods, but it also shows that metric spanner properties exist in metric graphs derived from real-world data, opening the door for broader applications of geometric DTN routing.
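The graph-based baseline the paper contrasts against is an earliest-arrival search over a contact plan, in the spirit of time-varying Bellman-Ford/Dijkstra extensions. A minimal sketch (the contact-tuple format and function name are assumptions, not a specific DTN standard):

```python
import heapq

def earliest_arrival(contacts, source, target, t_start=0.0):
    """Dijkstra-style earliest-arrival search over a contact plan.
    contacts: list of (u, v, t_open, t_close, delay), meaning the link
    u -> v is usable if we are at u no later than t_close; transmission
    starts at max(arrival_time, t_open) and takes `delay`."""
    best = {source: t_start}
    heap = [(t_start, source)]
    while heap:
        t, u = heapq.heappop(heap)
        if u == target:
            return t
        if t > best.get(u, float("inf")):
            continue  # stale heap entry
        for cu, cv, t_open, t_close, delay in contacts:
            if cu != u or t > t_close:
                continue
            arrive = max(t, t_open) + delay
            if arrive < best.get(cv, float("inf")):
                best[cv] = arrive
                heapq.heappush(heap, (arrive, cv))
    return float("inf")
```

Every topology change forces a recomputation of such searches, which is the cost the paper's geometric framework reduces by bounding the number of topological critical events.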

Insurance losses due to flooding can be estimated by simulating and then summing a large number of losses for each year in a large set of hypothetical years of flood events. Replicated realisations lead to Monte Carlo return-level estimates and associated uncertainty. The procedure, however, is highly computationally intensive. We develop and use a new, Bennett-like concentration inequality to provide conservative but relatively accurate estimates of return levels. Bennett's inequality accounts for the different variances of each of the variables in a sum but uses a uniform upper bound on their support. Motivated by the variability in the total insured value of risks within a portfolio, we incorporate both individual upper bounds and variances and obtain tractable concentration bounds. Simulation studies and application to a representative portfolio demonstrate a substantial tightening compared with Bennett's bound. We then develop an importance-sampling procedure that repeatedly samples the loss for each year from the distribution implied by the concentration inequality, leading to conservative estimates of the return levels and their uncertainty using orders of magnitude less computation. This enables a simulation study of the sensitivity of the predictions to perturbations in quantities that are usually assumed fixed and known but, in truth, are not.
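For reference, the classical Bennett bound that the paper's inequality tightens (by replacing the single uniform bound $b$ with individual upper bounds) can be evaluated directly:

```python
import math

def bennett_tail(t, variances, b):
    """Classical Bennett upper bound on P(sum_i (X_i - E[X_i]) >= t),
    assuming each centered summand is bounded above by b:
        exp(-(V / b**2) * h(b * t / V)),  h(u) = (1+u)*log(1+u) - u,
    where V is the sum of the individual variances."""
    V = sum(variances)
    u = b * t / V
    h = (1.0 + u) * math.log(1.0 + u) - u
    return math.exp(-(V / b ** 2) * h)
```

Since `h` is nonnegative and increasing, the bound is at most 1 and decreases in `t`; the paper's version exploits per-risk upper bounds (e.g. total insured values) instead of the worst-case `b`, shrinking `b**2` effectively and tightening the tail estimate.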

As the spatial features of multivariate data are increasingly central in researchers' applied problems, there is a growing demand for novel spatially-aware methods that are flexible, easily interpretable, and scalable to large data. We develop inside-out cross-covariance (IOX) models for multivariate spatial likelihood-based inference. IOX leads to valid cross-covariance matrix functions which we interpret as inducing spatial dependence on independent replicates of a correlated random vector. The resulting sample cross-covariance matrices are "inside-out" relative to the ubiquitous linear model of coregionalization (LMC). However, unlike LMCs, our methods offer direct marginal inference, easy prior elicitation of covariance parameters, the ability to model outcomes with unequal smoothness, and flexible dimension reduction. As a covariance model for a q-variate Gaussian process, IOX leads to scalable models for noisy vector data as well as flexible latent models. For large n cases, IOX complements Vecchia approximations and related process-based methods based on sparse graphical models. We demonstrate superior performance of IOX on synthetic datasets as well as on colorectal cancer proteomics data. An R package implementing the proposed methods is available at github.com/mkln/spiox.
