
This paper deals with a projection least squares estimator of the drift function of a jump diffusion process $X$ computed from multiple independent copies of $X$ observed on $[0,T]$. Risk bounds are established on this estimator and on an associated adaptive estimator. Finally, some numerical experiments are provided.
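As a rough illustration of the kind of estimator described above (not the paper's exact construction), the sketch below simulates independent copies of a jump diffusion, projects the drift onto a small trigonometric dictionary, and fits the coefficients by least squares on the increments. The simulated model, the basis, and the dimension $m$ are illustrative assumptions.

```python
# Hypothetical sketch: projection least-squares drift estimator from N copies
# of a (jump) diffusion observed on a regular grid of [0, T]. The basis, the
# simulated model and all parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# --- simulate N copies of an Ornstein-Uhlenbeck process with compound-Poisson jumps
N, n, T = 400, 200, 1.0
dt = T / n
b_true = lambda x: -2.0 * x          # drift to be recovered
X = np.zeros((N, n + 1))
for k in range(n):
    jumps = rng.poisson(2.0 * dt, size=N) * rng.normal(0.0, 0.3, size=N)
    X[:, k + 1] = X[:, k] + b_true(X[:, k]) * dt \
                  + 0.5 * np.sqrt(dt) * rng.normal(size=N) + jumps

# --- projection space: first m trigonometric functions on [-A, A]
A, m = 2.0, 7
def basis(x):
    cols = [np.ones_like(x)]
    for j in range(1, m):
        cols.append(np.cos(j * np.pi * x / A) if j % 2 else np.sin(j * np.pi * x / A))
    return np.stack(cols, axis=-1) / np.sqrt(A)

# --- least squares: regress the increments X_{t+dt} - X_t onto dt * basis(X_t)
Phi = basis(X[:, :-1]).reshape(-1, m)          # design matrix
Y = (X[:, 1:] - X[:, :-1]).reshape(-1)         # increments
theta, *_ = np.linalg.lstsq(Phi * dt, Y, rcond=None)

grid = np.linspace(-1.5, 1.5, 7)
print(np.c_[grid, basis(grid) @ theta, b_true(grid)])   # estimate vs. truth
```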

Related Content

This paper investigates the mean square error (MSE)-optimal conditional mean estimator (CME) in one-bit quantized systems in the context of channel estimation with jointly Gaussian inputs. We analyze the relationship of the generally nonlinear CME to the linear Bussgang estimator, a well-known method based on Bussgang's theorem. We highlight the novel observation that the Bussgang estimator is equal to the CME in several special cases, including the case of univariate Gaussian inputs and the case of multiple observations in the absence of additive noise prior to the quantization. For the general case, we conduct numerical simulations to quantify the gap between the Bussgang estimator and the CME; this gap increases for higher dimensions and longer pilot sequences. We propose an optimal pilot sequence, motivated by insights from the CME, and derive a novel closed-form expression of the MSE for that case. Afterwards, we find a closed-form limit of the MSE in the regime of asymptotically many pilots that also holds for the Bussgang estimator. Lastly, we present numerical experiments for various system parameters and different performance metrics which illuminate the behavior of the optimal channel estimator in the quantized regime. In this context, the well-known stochastic resonance effect that appears in quantized systems can be quantified.
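The coincidence of the Bussgang estimator and the CME in the univariate Gaussian case can be checked numerically. The short sketch below compares the closed-form coefficient $\sqrt{2/\pi}\,\sigma_h^2/\sqrt{\sigma_h^2+\sigma_n^2}$ with an empirical conditional mean; the variances and sample size are illustrative assumptions, not values from the paper.

```python
# Minimal Monte Carlo sketch: for a univariate Gaussian input h and a one-bit
# observation y = sign(h + n), the Bussgang (linear MMSE) estimator coincides
# with the conditional mean estimator E[h | y]. All values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
sigma_h, sigma_n, M = 1.0, 0.5, 2_000_000

h = sigma_h * rng.normal(size=M)
y = np.sign(h + sigma_n * rng.normal(size=M))

# Bussgang / closed-form CME coefficient: sqrt(2/pi) * sigma_h^2 / sqrt(sigma_h^2 + sigma_n^2)
c = np.sqrt(2 / np.pi) * sigma_h**2 / np.sqrt(sigma_h**2 + sigma_n**2)

cme_empirical = h[y > 0].mean()          # Monte Carlo estimate of E[h | y = +1]
print(f"closed form: {c:.4f}   empirical CME: {cme_empirical:.4f}")
print(f"MSE of Bussgang estimate: {np.mean((h - c * y) ** 2):.4f}")
```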

Privacy-preserving inference via edge or encrypted computing paradigms encourages users of machine learning services to confidentially run a model on their personal data for a target task and only share the model's outputs with the service provider, e.g., to activate further services. Nevertheless, despite all confidentiality efforts, we show that a "vicious" service provider can approximately reconstruct its users' personal data by observing only the model's outputs, while keeping the target utility of the model very close to that of an "honest" service provider. We show the possibility of jointly training a target model (to be run at the users' side) and an attack model for data reconstruction (to be secretly used at the server's side). We introduce the "reconstruction risk": a new measure for assessing the quality of reconstructed data that better captures the privacy risk of such attacks. Experimental results on 6 benchmark datasets show that for low-complexity data types, or for tasks with a larger number of classes, a user's personal data can be approximately reconstructed from the outputs of a single target inference task. We propose a potential defense mechanism that helps to distinguish vicious from honest classifiers at inference time. We conclude the paper by discussing current challenges and open directions for future studies. We open-source our code and results as a benchmark for future work.
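A minimal sketch of the joint-training idea, assuming a synthetic dataset and a small fully connected architecture (both placeholders, not the paper's setup): the target classifier's outputs are fed to a decoder trained to reconstruct the input, and both models are optimized under a combined loss.

```python
# Hedged sketch of the attack idea: jointly train a target classifier (run on the
# user side) and a decoder that reconstructs the input from the classifier's
# outputs alone (run on the server side). Architecture, data, and the weight
# `lam` are illustrative assumptions, not the paper's exact setup.
import torch
import torch.nn as nn

torch.manual_seed(0)
d_in, n_classes, lam = 64, 10, 1.0

target = nn.Sequential(nn.Linear(d_in, 128), nn.ReLU(), nn.Linear(128, n_classes))
decoder = nn.Sequential(nn.Linear(n_classes, 128), nn.ReLU(), nn.Linear(128, d_in))
opt = torch.optim.Adam(list(target.parameters()) + list(decoder.parameters()), lr=1e-3)

for step in range(200):                      # toy loop on synthetic data
    x = torch.randn(256, d_in)               # stands in for users' private inputs
    labels = (x[:, 0] > 0).long() % n_classes
    logits = target(x)                       # what the user would share
    x_hat = decoder(logits.softmax(dim=1))   # server-side reconstruction attempt
    loss = nn.functional.cross_entropy(logits, labels) \
           + lam * nn.functional.mse_loss(x_hat, x)
    opt.zero_grad(); loss.backward(); opt.step()

print(f"final joint loss: {loss.item():.3f}")
```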

An implicit variable-step BDF2 scheme is established for solving the space fractional Cahn-Hilliard equation, involving the fractional Laplacian, derived from a gradient flow in the negative-order Sobolev space $H^{-\alpha}$, $\alpha\in(0,1)$. The Fourier pseudo-spectral method is applied for the spatial approximation. The proposed scheme inherits the energy dissipation law in the form of a modified discrete energy under a sufficient restriction on the time-step ratios. The convergence of the fully discrete scheme is rigorously established using a newly proved discrete embedding-type convolution inequality that handles the fractional Laplacian. Besides, the mass conservation and the unique solvability are also theoretically guaranteed. Numerical experiments are carried out to show both the accuracy and the energy dissipation for various interface widths. In particular, the multiple-time-scale evolution of the solution is captured by an adaptive time-stepping strategy in short-to-long time simulations.
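For concreteness, a constant-step variant of such a scheme can be sketched as follows (the paper's scheme is variable-step and comes with a rigorous analysis; the stabilization constant $S$, the interface width, the grid, and the double-well nonlinearity $f(u)=u^3-u$ are illustrative assumptions). The fractional Laplacian and the linear part are handled diagonally in Fourier space, while the nonlinearity is extrapolated.

```python
# Illustrative constant-step sketch of a stabilized BDF2 / Fourier pseudo-spectral
# solver for the space-fractional Cahn-Hilliard equation
#   u_t = -(-Delta)^alpha ( -eps^2 Delta u + u^3 - u )
# on a periodic square. Parameters (eps, S, grid size) are demo assumptions.
import numpy as np

n, L, eps, alpha, S, tau = 128, 2 * np.pi, 0.1, 0.7, 2.0, 1e-3
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
kx, ky = np.meshgrid(k, k, indexing="ij")
lap = kx**2 + ky**2                      # symbol of -Delta
frac = lap**alpha                        # symbol of (-Delta)^alpha

rng = np.random.default_rng(0)
u0 = 0.05 * rng.standard_normal((n, n))
f = lambda u: u**3 - u

# one stabilized semi-implicit Euler step to start the two-step BDF2 scheme
u0h = np.fft.fft2(u0)
u1h = (u0h / tau - frac * (np.fft.fft2(f(u0)) - S * u0h)) \
      / (1 / tau + frac * (eps**2 * lap + S))
u_prev, u_curr = u0h, u1h

for step in range(500):
    ext = 2 * u_curr - u_prev                      # extrapolated value in Fourier space
    fext = np.fft.fft2(f(np.real(np.fft.ifft2(ext))))
    rhs = (4 * u_curr - u_prev) / (2 * tau) - frac * (fext - S * ext)
    u_next = rhs / (3 / (2 * tau) + frac * (eps**2 * lap + S))
    u_prev, u_curr = u_curr, u_next

u = np.real(np.fft.ifft2(u_curr))
print("mass drift:", abs(u.mean() - u0.mean()))    # zero mode is preserved up to round-off
```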

We develop a novel Monte Carlo algorithm for the vector consisting of the supremum, the time at which the supremum is attained, and the position at a given (constant) time of an exponentially tempered Lévy process. The algorithm, based on the increments of the process without tempering, converges geometrically fast (as a function of the computational cost) for discontinuous and locally Lipschitz functions of the vector. We prove that the corresponding multilevel Monte Carlo estimator has optimal computational complexity (i.e. of order $\varepsilon^{-2}$ if the mean squared error is at most $\varepsilon^2$) and provide its central limit theorem (CLT). Using the CLT we construct confidence intervals for barrier option prices and various risk measures based on drawdown under the tempered stable (CGMY) model calibrated/estimated on real-world data. We provide non-asymptotic and asymptotic comparisons of our algorithm with existing approximations, leading to rule-of-thumb guidelines for users to choose the best method for a given set of parameters. We illustrate the performance of the algorithm with numerical examples.
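The multilevel structure can be illustrated on a much simpler driving process. The sketch below couples fine and coarse discretizations of a Brownian motion with drift (not the tempered-stable model treated in the paper) to estimate the expectation of a functional of the supremum; all parameters, the payoff, and the per-level sample sizes are illustrative assumptions.

```python
# Generic multilevel Monte Carlo sketch for E[f(sup_{t<=T} X_t)] where X is, for
# simplicity, a Brownian motion with drift; it only illustrates the coupled
# fine/coarse level structure behind the O(eps^-2) complexity heuristic.
import numpy as np

rng = np.random.default_rng(0)
T, mu, sigma = 1.0, 0.1, 0.3
f = lambda s: np.maximum(s - 0.2, 0.0)          # e.g. a lookback-style payoff

def level_estimator(level, n_samples):
    """Return samples of f(sup on fine grid) - f(sup on coarse grid)."""
    n_fine = 2 ** level
    dt = T / n_fine
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_samples, n_fine))
    inc_fine = mu * dt + sigma * dW
    sup_fine = np.maximum(np.cumsum(inc_fine, axis=1).max(axis=1), 0.0)
    if level == 0:
        return f(sup_fine)
    inc_coarse = inc_fine[:, 0::2] + inc_fine[:, 1::2]     # same Brownian increments
    sup_coarse = np.maximum(np.cumsum(inc_coarse, axis=1).max(axis=1), 0.0)
    return f(sup_fine) - f(sup_coarse)

samples_per_level = [2 ** (16 - l) for l in range(9)]      # fewer samples on finer levels
estimate = sum(level_estimator(l, m).mean() for l, m in enumerate(samples_per_level))
print(f"MLMC estimate of E[f(sup X)]: {estimate:.4f}")
```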

Linear mixed models (LMMs) are suitable for clustered data and are common in biometrics, medicine, survey statistics and many other fields. In those applications it is essential to carry out valid inference after selecting a subset of the available variables. We construct confidence sets for the fixed effects in Gaussian LMMs that are based on Lasso-type estimators. Aside from providing confidence regions, this also allows one to quantify the joint uncertainty of both variable selection and parameter estimation in the procedure. To show that the resulting confidence sets for the fixed effects are uniformly valid over the parameter spaces of both the regression coefficients and the covariance parameters, we also prove a novel result on the uniform Cramer consistency of the restricted maximum likelihood (REML) estimators of the covariance parameters. The superiority of the constructed confidence sets over naive post-selection procedures is validated in simulations and illustrated with a study of the acid neutralization capacity of lakes in the United States.
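A toy, known-covariance version of a uniformly valid confidence region for the fixed effects can be sketched as follows. This is not the paper's Lasso- and REML-based construction: the random-intercept structure, the variances, and the score-based region are illustrative assumptions. The region collects all coefficient vectors whose whitened score stays below a simulated quantile, which gives exact coverage in this simplified setting (the paper instead plugs in REML estimates of the covariance parameters).

```python
# Hedged toy sketch: score-based confidence region for the fixed effects in a
# random-intercept Gaussian LMM with known variance components (the paper uses
# Lasso-type estimators and REML-estimated covariances instead).
import numpy as np

rng = np.random.default_rng(0)
n_groups, n_per, p, alpha = 50, 10, 8, 0.05
n = n_groups * n_per
X = rng.normal(size=(n, p))
beta = np.r_[1.0, -0.5, np.zeros(p - 2)]
groups = np.repeat(np.arange(n_groups), n_per)
tau2, sig2 = 0.5, 1.0                               # random-intercept and noise variances
y = X @ beta + rng.normal(0, np.sqrt(tau2), n_groups)[groups] + rng.normal(0, np.sqrt(sig2), n)

# marginal covariance of y is block diagonal with blocks V_block (assumed known here)
V_block = sig2 * np.eye(n_per) + tau2 * np.ones((n_per, n_per))
Vinv_block = np.linalg.inv(V_block)
def score(resid):                                   # X^T V^{-1} r, computed block-wise
    return sum(X[groups == g].T @ Vinv_block @ resid[groups == g] for g in range(n_groups))

# simulate the (1 - alpha)-quantile of ||X^T V^{-1} eps||_inf under the model
sims = []
for _ in range(500):
    eps = rng.normal(0, np.sqrt(tau2), n_groups)[groups] + rng.normal(0, np.sqrt(sig2), n)
    sims.append(np.abs(score(eps)).max())
q = np.quantile(sims, 1 - alpha)

# the confidence set is {b : ||X^T V^{-1}(y - X b)||_inf <= q}; check it covers beta
print("true beta covered:", np.abs(score(y - X @ beta)).max() <= q)
```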

We study a natural extension of classical empirical risk minimization, where the hypothesis space is a random subspace of a given space. In particular, we consider possibly data-dependent subspaces spanned by a random subset of the data, recovering Nystrom approaches for kernel methods as a special case. Considering random subspaces naturally leads to computational savings, but the question is whether the corresponding learning accuracy is degraded. These statistical-computational tradeoffs have recently been explored for the least squares loss and for self-concordant loss functions, such as the logistic loss. Here, we work to extend these results to convex Lipschitz loss functions that might not be smooth, such as the hinge loss used in support vector machines. This unified analysis requires developing new proofs that use different technical tools, such as sub-Gaussian inputs, to achieve fast rates. Our main results show the existence of different settings, depending on how hard the learning problem is, for which computational efficiency can be improved with no loss in performance.
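The hinge-loss instance of this setup can be tried directly with scikit-learn: a random Nystrom subspace of varying dimension followed by a linear SVM, i.e., hinge-loss ERM on the projected features. The dataset, the kernel parameter, and the subspace sizes below are illustrative assumptions.

```python
# Sketch of the statistical-computational tradeoff: empirical risk minimization
# with the hinge loss over a random Nystrom subspace of increasing dimension m.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.kernel_approximation import Nystroem
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=5000, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for m in (50, 200, 800):                 # dimension of the random subspace
    model = make_pipeline(
        Nystroem(kernel="rbf", gamma=0.05, n_components=m, random_state=0),
        LinearSVC(C=1.0, max_iter=5000),  # hinge-loss ERM on the projected features
    )
    model.fit(X_tr, y_tr)
    print(f"m = {m:4d}   test accuracy = {model.score(X_te, y_te):.3f}")
```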

The optimal stopping problem is one of the core problems in financial markets, with broad applications such as pricing American and Bermudan options. The deep BSDE method [Han, Jentzen and E, PNAS, 115(34):8505-8510, 2018] has shown great power in solving high-dimensional forward-backward stochastic differential equations (FBSDEs) and has inspired many applications. However, the method solves backward stochastic differential equations (BSDEs) in a forward manner, which cannot be used for optimal stopping problems that in general require running the BSDE backward. To overcome this difficulty, a recent paper [Wang, Chen, Sudjianto, Liu and Shen, arXiv:1807.06622, 2018] proposed the backward deep BSDE method to solve the optimal stopping problem. In this paper, we provide a rigorous theory for the backward deep BSDE method. Specifically, (1) we derive an a posteriori error estimate, i.e., the error of the numerical solution can be bounded by the training loss function; and (2) we give an upper bound on the loss function, which can be made sufficiently small thanks to universal approximation. We present two numerical examples whose performance is consistent with the proved theory.
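For readers who want a runnable point of comparison, the classical Longstaff-Schwartz backward regression for a Bermudan put is sketched below. It is not the backward deep BSDE method, but it exhibits the same backward-in-time structure of the optimal stopping problem discussed above; model and contract parameters are illustrative assumptions.

```python
# Classical Longstaff-Schwartz backward induction for a Bermudan put, used here
# only as a hedged baseline for the optimal stopping problem (not the backward
# deep BSDE method). All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
S0, K, r, sigma, T, n_ex, n_paths = 100.0, 100.0, 0.05, 0.2, 1.0, 50, 100_000
dt = T / n_ex
disc = np.exp(-r * dt)

# simulate geometric Brownian motion paths on the exercise grid
Z = rng.normal(size=(n_paths, n_ex))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z, axis=1))

payoff = lambda s: np.maximum(K - s, 0.0)
V = payoff(S[:, -1])                               # value at maturity

for t in range(n_ex - 2, -1, -1):                  # backward induction
    V *= disc
    itm = payoff(S[:, t]) > 0                      # regress only on in-the-money paths
    coeffs = np.polyfit(S[itm, t], V[itm], deg=3)  # continuation-value regression
    cont = np.polyval(coeffs, S[itm, t])
    exercise = payoff(S[itm, t]) > cont
    V[itm] = np.where(exercise, payoff(S[itm, t]), V[itm])

price = disc * V.mean()
print(f"Bermudan put price estimate: {price:.3f}")
```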

Low-dose computed tomography (CT) plays a significant role in reducing the radiation risk in clinical applications. However, lowering the radiation dose significantly degrades the image quality. The rapid development and wide application of deep learning have brought new directions for low-dose CT imaging algorithms. Therefore, we propose a fully unsupervised one-sample diffusion model (OSDM) in the projection domain for low-dose CT reconstruction. To extract sufficient prior information from a single sample, the Hankel matrix formulation is employed. Besides, penalized weighted least-squares and total variation terms are introduced to achieve superior image quality. Specifically, we first train a score-based generative model on one sinogram by extracting a large number of tensors from the structural Hankel matrix as the network input to capture the prior distribution. Then, at the inference stage, the stochastic differential equation solver and the data-consistency step are performed iteratively to obtain the sinogram data. Finally, the image is obtained through the filtered back-projection algorithm. The reconstructed results are close to the normal-dose counterparts. The results prove that OSDM is a practical and effective model for reducing artifacts and preserving image quality.
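Only the final step above (filtered back-projection of a noisy sinogram) is easy to show in a few lines; the sketch below uses scikit-image and leaves out the score-based prior, the Hankel-matrix training, and the data-consistency iterations. The phantom, angular sampling, and noise level are illustrative assumptions.

```python
# Minimal sketch of the final reconstruction step only: filtered back-projection
# of a noisy, low-dose-like sinogram with scikit-image (the OSDM prior and
# data-consistency iterations are not shown).
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

rng = np.random.default_rng(0)
image = rescale(shepp_logan_phantom(), 0.5)            # 200 x 200 phantom
angles = np.linspace(0.0, 180.0, 90, endpoint=False)   # sparse acquisition

sinogram = radon(image, theta=angles)
sinogram_noisy = sinogram + 0.5 * rng.standard_normal(sinogram.shape)   # crude noise proxy

recon = iradon(sinogram_noisy, theta=angles, filter_name="ramp")
rmse = np.sqrt(np.mean((recon - image) ** 2))
print(f"FBP reconstruction RMSE vs. phantom: {rmse:.4f}")
```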

Constant Function Market Makers (CFMMs) are a crucial tool for creating exchange markets, have been deployed effectively in the context of prediction markets, and are now especially prominent within the modern Decentralized Finance ecosystem. We show that for any set of beliefs about future asset prices, there exists an optimal CFMM trading function that maximizes the fraction of trades that a CFMM can settle. This trading function is the optimal solution of a convex program. The program therefore gives a tractable framework for market makers to compile their belief distribution over the future prices of the underlying assets into the trading function of a maximally capital-efficient CFMM. Our optimization framework further extends to capture the tradeoffs between fee revenue, arbitrage loss, and the opportunity costs of liquidity providers. Analyzing the program shows how considerations of profit and loss qualitatively distort the optimal liquidity allocation. Our model additionally explains the diversity of CFMM designs that appear in practice. We show that careful analysis of our convex program enables inference of a market maker's beliefs about future asset prices, and that these beliefs mirror the folklore intuition for several widely used CFMMs. Developing the program requires a new notion of the liquidity of a CFMM at any price point, and the core technical challenge lies in the analysis of the KKT conditions of an optimization over an infinite-dimensional Banach space.
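A deliberately coarse, finite-dimensional caricature of this idea (the paper's program is over an infinite-dimensional space of trading functions) is to allocate a capital budget across discretized price levels so as to maximize the expected settled fraction of demanded volume under a belief distribution. The cvxpy sketch below does exactly that; the price grid, belief density, demand profile, and budget are all illustrative assumptions.

```python
# Toy finite-dimensional caricature: choose liquidity per price level to maximize
# the belief-weighted fraction of demanded volume that can be settled, subject
# to a capital budget. Not the paper's program; all numbers are illustrative.
import numpy as np
import cvxpy as cp

prices = np.linspace(0.5, 2.0, 16)              # discretized future price levels
beliefs = np.exp(-(prices - 1.2) ** 2 / 0.1)    # market maker's belief density
beliefs /= beliefs.sum()
demand = np.full_like(prices, 10.0)             # trade volume demanded at each level
budget = 40.0

liq = cp.Variable(len(prices), nonneg=True)     # liquidity placed at each price level
settled_fraction = cp.sum(cp.multiply(beliefs / demand, cp.minimum(liq, demand)))
problem = cp.Problem(cp.Maximize(settled_fraction),
                     [cp.sum(cp.multiply(prices, liq)) <= budget])
problem.solve()

print("expected settled fraction:", round(problem.value, 3))
print("liquidity allocation:", np.round(liq.value, 2))
```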

We address the problem of integrating data from multiple observational and interventional studies to eventually compute counterfactuals in structural causal models. We derive a likelihood characterisation for the overall data that leads us to extend a previous EM-based algorithm from the case of a single study to that of multiple ones. The new algorithm learns to approximate the (unidentifiability) region of model parameters from such mixed data sources. On this basis, it delivers interval approximations to counterfactual results, which collapse to points in the identifiable case. The algorithm is very general: it works on semi-Markovian models with discrete variables and can compute any counterfactual. Moreover, it automatically determines whether a problem is feasible (i.e., whether the parameter region is nonempty), which is a necessary step to avoid returning incorrect results. Systematic numerical experiments show the effectiveness and accuracy of the algorithm, while hinting at the benefits of integrating heterogeneous data to obtain informative bounds in case of unidentifiability.
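A toy single-study version of this procedure is sketched below: an EM over a latent binary confounder is run from many random restarts, and the spread of the resulting (unidentifiable) interventional query P(Y=1 | do(X=1)) approximates its interval. The paper's algorithm additionally integrates interventional studies and handles arbitrary counterfactuals; the observed counts and the model size here are illustrative assumptions.

```python
# Hedged toy version of the EM-for-bounds idea on a single observational study:
# latent binary confounder U with U -> X, U -> Y, X -> Y. Different EM restarts
# land at different parameters compatible with the data, and the spread of the
# (unidentifiable) query P(Y=1 | do(X=1)) approximates its interval.
import numpy as np

rng = np.random.default_rng(0)
# observed joint counts N[x, y] from an observational study (illustrative numbers)
N = np.array([[400.0, 100.0],
              [150.0, 350.0]])

def em_interventional(n_iter=300):
    pi = rng.dirichlet([1, 1])                    # P(U)
    a = rng.uniform(0.1, 0.9, size=2)             # P(X=1 | U=u)
    b = rng.uniform(0.1, 0.9, size=(2, 2))        # P(Y=1 | X=x, U=u)
    for _ in range(n_iter):
        # E-step: posterior weights w[x, y, u] proportional to the complete likelihood
        w = np.zeros((2, 2, 2))
        for x in range(2):
            for y in range(2):
                for u in range(2):
                    w[x, y, u] = pi[u] * (a[u] if x else 1 - a[u]) * (b[x, u] if y else 1 - b[x, u])
        w /= w.sum(axis=2, keepdims=True)
        # M-step: weighted maximum-likelihood updates from expected counts
        resp = N[:, :, None] * w                  # expected counts per (x, y, u)
        pi = resp.sum(axis=(0, 1)) / N.sum()
        a = resp[1].sum(axis=0) / resp.sum(axis=(0, 1))
        b = resp[:, 1, :] / resp.sum(axis=1)
    # unidentifiable query: P(Y=1 | do(X=1)) = sum_u P(u) P(Y=1 | X=1, u)
    return float(pi @ b[1])

estimates = [em_interventional() for _ in range(30)]
print(f"approximate interval for P(Y=1 | do(X=1)): [{min(estimates):.3f}, {max(estimates):.3f}]")
```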
