
In this paper, we study a spline collocation method for the numerical solution of the optimal transport problem. We mainly solve the Monge-Ampère equation with the second boundary condition numerically by proposing a center matching algorithm. We prove pointwise convergence of our iterative algorithm under the assumption that the spline iterates are bounded. We use the Monge-Ampère equation with Dirichlet boundary conditions, together with some known solutions of the Monge-Ampère equation with the second boundary condition, to demonstrate the effectiveness of our algorithm. We then apply our method to some real-life problems; one application is the use of optimal transportation to convert fisheye images into standard rectangular images.
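
As a hedged illustration of the kind of Dirichlet test problem mentioned above, the sketch below runs the classical Poisson fixed-point iteration of Benamou, Froese and Oberman for det(D²u) = f on a finite-difference grid, using the known convex solution u = exp((x²+y²)/2). It is a toy stand-in: the paper's spline collocation and center matching algorithm are not implemented here, and the grid size and iteration count are arbitrary choices.

```python
# Finite-difference sketch of the Benamou-Froese-Oberman Poisson iteration
# for the Dirichlet Monge-Ampere problem det(D^2 u) = f.  Toy stand-in only:
# not the paper's spline collocation or center matching algorithm.
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import factorized

n = 64
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
u_exact = np.exp((X**2 + Y**2) / 2)            # known convex solution
f = (1 + X**2 + Y**2) * np.exp(X**2 + Y**2)    # det(D^2 u_exact)

# Interior 5-point Laplacian; Dirichlet boundary data is moved to the RHS.
T = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n - 2, n - 2)) / h**2
I = identity(n - 2)
lap_solve = factorized((kron(T, I) + kron(I, T)).tocsc())

u = u_exact.copy()                             # boundary values are exact
bc = np.zeros((n - 2, n - 2))                  # boundary neighbors of interior nodes
bc[0, :] += u[0, 1:-1];  bc[-1, :] += u[-1, 1:-1]
bc[:, 0] += u[1:-1, 0];  bc[:, -1] += u[1:-1, -1]
poisson = lambda rhs: lap_solve((rhs - bc / h**2).ravel()).reshape(n - 2, n - 2)

u[1:-1, 1:-1] = poisson(2 * np.sqrt(f[1:-1, 1:-1]))   # convex initial guess
for _ in range(100):
    uxx = (u[2:, 1:-1] - 2 * u[1:-1, 1:-1] + u[:-2, 1:-1]) / h**2
    uyy = (u[1:-1, 2:] - 2 * u[1:-1, 1:-1] + u[1:-1, :-2]) / h**2
    uxy = (u[2:, 2:] - u[2:, :-2] - u[:-2, 2:] + u[:-2, :-2]) / (4 * h**2)
    # (lap u)^2 = (uxx - uyy)^2 + 4 uxy^2 + 4 det(D^2 u) for convex u
    u[1:-1, 1:-1] = poisson(np.sqrt(np.maximum(
        (uxx - uyy)**2 + 4 * uxy**2 + 4 * f[1:-1, 1:-1], 0.0)))

print("max error vs exact solution:", np.abs(u - u_exact).max())
```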

Related Content

In this paper, we identify the criteria for the selection of the minimal and most efficient covariate adjustment sets for the regression calibration method developed by Carroll, Ruppert, and Stefanski (CRS, 1992), used to correct bias due to continuous exposure measurement error. We utilize directed acyclic graphs to illustrate how subject matter knowledge can aid in the selection of such adjustment sets. Valid measurement error correction requires the collection of data on (1) all common causes of the true exposure and the outcome and (2) all common causes of the measurement error and the outcome, in both the main study and the validation study. For the CRS regression calibration method to be valid, researchers need to minimally adjust for covariate set (1) in both the measurement error model (MEM) and the outcome model, and to adjust for covariate set (2) at least in the MEM. In practice, we recommend including the minimal covariate adjustment set in both the MEM and the outcome model. In contrast with the regression calibration method developed by Rosner, Spiegelman, and Willett, under the CRS method it is valid, and more efficient, to adjust for correlates of the true exposure or of the measurement error that are not risk factors in the MEM only. We applied the proposed covariate selection approach to the Health Professionals Follow-up Study, examining the effect of fiber intake on cardiovascular incidence. In this study, we demonstrated potential issues with a data-driven approach to building the MEM that is agnostic to the structural assumptions. We extend the originally proposed estimators to settings where effect modification by a covariate is allowed. Finally, we caution against the use of the regression calibration method to calibrate true nutrition intake using biomarkers.
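
To make the two-stage structure of regression calibration concrete, here is a minimal simulated sketch: the MEM E[X | W, Z] is fit in a validation study where the true exposure X is observed, and the calibrated prediction replaces the error-prone measurement W in the main-study outcome model. The data-generating values and the purely linear setup are illustrative assumptions, not the CRS estimators or the Health Professionals Follow-up Study analysis.

```python
# Minimal simulated illustration of regression calibration: fit the
# measurement error model (MEM) in a validation study, then replace the
# error-prone exposure with its calibrated prediction in the outcome model.
# Simplified linear sketch; not the paper's estimators.
import numpy as np

rng = np.random.default_rng(0)
n_val, n_main = 500, 5000

def simulate(n):
    z = rng.normal(size=n)                  # confounder: affects X and Y
    x = 0.5 * z + rng.normal(size=n)        # true exposure
    w = x + rng.normal(scale=0.8, size=n)   # error-prone measurement
    y = 1.0 * x + 0.7 * z + rng.normal(size=n)
    return z, x, w, y

# Validation study: X observed, fit the MEM  E[X | W, Z]
z_v, x_v, w_v, _ = simulate(n_val)
A = np.column_stack([np.ones(n_val), w_v, z_v])
gamma, *_ = np.linalg.lstsq(A, x_v, rcond=None)

# Main study: X unobserved; plug calibrated X-hat into the outcome model
z_m, _, w_m, y_m = simulate(n_main)
x_hat = np.column_stack([np.ones(n_main), w_m, z_m]) @ gamma
B = np.column_stack([np.ones(n_main), x_hat, z_m])
beta, *_ = np.linalg.lstsq(B, y_m, rcond=None)

naive = np.linalg.lstsq(np.column_stack([np.ones(n_main), w_m, z_m]),
                        y_m, rcond=None)[0]
print("true effect 1.0 | calibrated:", round(beta[1], 3),
      "| naive (attenuated):", round(naive[1], 3))
```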

In this paper, we provide novel tail bounds on the optimization error of Stochastic Mirror Descent for convex and Lipschitz objectives. Our analysis extends the existing tail bounds from the classical light-tailed sub-Gaussian noise case to heavier-tailed noise regimes. We study the optimization error of the last iterate as well as the average of the iterates. We instantiate our results in two important cases: a class of noise with exponential tails and one with polynomial tails. A remarkable feature of our results is that they do not require an upper bound on the diameter of the domain. Finally, we support our theory with illustrative experiments that compare the behavior of the average of the iterates with that of the last iterate in heavy-tailed noise regimes.
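
As a concrete instance of the algorithm being analyzed, the sketch below runs stochastic mirror descent with the negative-entropy mirror map on the probability simplex (exponentiated gradient) and compares the average iterate with the last iterate under polynomially tailed (Student-t) gradient noise. The objective, step size, and noise law are illustrative choices, not the paper's experimental setup.

```python
# Stochastic mirror descent with the negative-entropy mirror map on the
# simplex (exponentiated gradient), under heavy-tailed gradient noise.
import numpy as np

rng = np.random.default_rng(1)
d, T = 50, 5000
c = rng.normal(size=d)              # minimize f(x) = <c, x> over the simplex
x = np.ones(d) / d                  # uniform starting point
avg = np.zeros(d)
eta = 0.5 / np.sqrt(T)              # fixed step size

for t in range(1, T + 1):
    noise = rng.standard_t(df=2.5, size=d)   # polynomial (Student-t) tails
    g = c + noise                            # unbiased heavy-tailed gradient
    x = x * np.exp(-eta * g)                 # mirror step in the dual space
    x /= x.sum()                             # Bregman projection onto simplex
    avg += (x - avg) / t                     # running average of the iterates

f = lambda p: c @ p
print("f(last) =", round(f(x), 4), " f(avg) =", round(f(avg), 4),
      " f* =", round(c.min(), 4))
```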

In the present paper, we introduce a new method for the automated generation of residential distribution grid models, based on novel building load estimation methods and a two-stage optimization for the generation of the 20 kV and 400 V grid topologies. Using the introduced load estimation methods, various open or proprietary data sources can be utilized to estimate the load of residential buildings. These data sources include building footprints from OpenStreetMap, 3D building data from OSM Buildings, and the number of electricity meters per address provided by the respective distribution system operator (DSO). To evaluate the introduced methods, we compare the grid models generated from the different available data sources for a specific suburban residential area against the real grid topology provided by the DSO. This evaluation yields two key findings: First, the automated 20 kV network generation methodology works well when compared to the real network. Second, utilizing public 3D building data for load estimation significantly increases the resulting model accuracy compared to 2D data and enables results similar to models based on DSO-supplied meter data, substantially reducing the dependence on such typically proprietary data.
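
A minimal sketch of the kind of load estimation heuristic such a pipeline could apply is shown below: annual demand is derived from the OSM footprint area, a floor count inferred from 3D building height, or, when available, the DSO meter count. The function and all constants are hypothetical illustrations, not the paper's calibrated estimation methods.

```python
# Hypothetical residential load heuristic; constants are assumed, not the
# paper's calibrated values.
FLOOR_HEIGHT_M = 3.0          # assumed storey height
DEMAND_KWH_PER_M2 = 30.0      # assumed specific annual demand
DEMAND_KWH_PER_METER = 3500.0 # assumed annual demand per electricity meter

def residential_load_kwh(footprint_m2, building_height_m=None, meters=None):
    if meters is not None:                       # DSO meter count, if available
        return meters * DEMAND_KWH_PER_METER
    floors = (max(1, round(building_height_m / FLOOR_HEIGHT_M))
              if building_height_m else 1)       # fall back to 2D footprint only
    return footprint_m2 * floors * DEMAND_KWH_PER_M2

print(residential_load_kwh(120, building_height_m=9.0))  # 3 floors -> 10800.0
```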

In this work, we consider the problem of regularization in minimum mean-squared error (MMSE) linear filters. Exploiting the relationship with statistical machine learning methods, the regularization parameter is found from the observed signals in a simple and automatic manner. The proposed approach is illustrated through system identification examples, where the automatic regularization yields near-optimal results.
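
The abstract does not spell out the automatic rule, so the sketch below illustrates one standard data-driven choice: selecting the ridge (Tikhonov) regularization weight of an MMSE-style linear filter by generalized cross-validation (GCV) in an FIR system identification example. The signal model and the GCV criterion are assumptions for illustration, not necessarily the authors' method.

```python
# Automatic ridge regularization via generalized cross-validation (GCV)
# for FIR system identification.  One standard data-driven rule; the
# abstract does not specify the authors' exact criterion.
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 30
h_true = np.exp(-0.3 * np.arange(p))          # FIR impulse response to identify
u = rng.normal(size=n + p)                    # input signal
X = np.column_stack([u[p - k : p - k + n] for k in range(p)])  # Toeplitz regressors
y = X @ h_true + 0.5 * rng.normal(size=n)     # noisy output

U, s, Vt = np.linalg.svd(X, full_matrices=False)
Uty = U.T @ y

def gcv(lam):
    filt = s**2 / (s**2 + lam)                # ridge shrinkage factors
    resid = y - U @ (filt * Uty)
    dof = filt.sum()                          # effective degrees of freedom
    return (resid @ resid) / (n - dof) ** 2

lams = np.logspace(-4, 4, 200)
lam = lams[np.argmin([gcv(l) for l in lams])]
h_hat = Vt.T @ (s / (s**2 + lam) * Uty)       # ridge estimate at chosen lambda
print("lambda =", round(lam, 4),
      "| estimation error =", round(np.linalg.norm(h_hat - h_true), 4))
```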

In this paper, we address the issue of increasing the performance of reinforcement learning (RL) solutions for autonomous racing cars when navigating under conditions where practical vehicle modelling errors (commonly known as \emph{model mismatches}) are present. To address this challenge, we propose a partial end-to-end algorithm that decouples the planning and control tasks. Within this framework, an RL agent generates a trajectory comprising a path and velocity, which is subsequently tracked using a pure pursuit steering controller and a proportional velocity controller, respectively. In contrast, many current learning-based (i.e., reinforcement and imitation learning) algorithms utilise an end-to-end approach whereby a deep neural network directly maps from sensor data to control commands. By leveraging the robustness of a classical controller, our partial end-to-end driving algorithm exhibits better robustness towards model mismatches than standard end-to-end algorithms.
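
To make the classical tracking layer concrete, here is a minimal sketch of pure pursuit steering toward a lookahead point on a planned path, together with a proportional velocity controller. The wheelbase, lookahead distance, and gain are invented values, and the sketch omits the RL planner entirely.

```python
# Pure pursuit steering + proportional velocity control; assumed vehicle
# parameters, not the authors' full partial end-to-end agent.
import numpy as np

WHEELBASE, LOOKAHEAD, KP_V = 0.33, 1.2, 2.0   # assumed vehicle parameters

def pure_pursuit_steer(pose, path):
    """pose = (x, y, heading); path = (N, 2) array of waypoints."""
    x, y, th = pose
    d = np.hypot(path[:, 0] - x, path[:, 1] - y)
    # first waypoint at least LOOKAHEAD away (else the final waypoint)
    far = d >= LOOKAHEAD
    idx = np.argmax(far) if far.any() else len(path) - 1
    dx, dy = path[idx, 0] - x, path[idx, 1] - y
    alpha = np.arctan2(dy, dx) - th            # goal bearing in vehicle frame
    # pure pursuit curvature -> steering angle for a kinematic bicycle
    return np.arctan2(2 * WHEELBASE * np.sin(alpha), LOOKAHEAD)

def velocity_control(v, v_ref):
    return KP_V * (v_ref - v)                  # proportional acceleration command

# usage on a toy straight path
path = np.column_stack([np.linspace(0, 10, 50), np.zeros(50)])
print(pure_pursuit_steer((0.0, 0.5, 0.0), path), velocity_control(2.0, 3.0))
```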

In this paper we investigate the parameterized complexity of the task of counting and detecting occurrences of small patterns in unit disk graphs: Given an $n$-vertex unit disk graph $G$ with an embedding of ply $p$ (that is, the graph is represented as an intersection graph of closed disks of unit size, and each point is contained in at most $p$ disks) and a $k$-vertex unit disk graph $P$, count the number of (induced) copies of $P$ in $G$. For general patterns $P$, we give a $2^{O(pk/\log k)}n^{O(1)}$ time algorithm for counting pattern occurrences. We show this is tight, even for ply $p=2$ and $k=n$: any $2^{o(n/\log n)}n^{O(1)}$ time algorithm violates the Exponential Time Hypothesis (ETH). For most natural classes of patterns, such as connected graphs and independent sets, we present the following results: First, we give a $(pk)^{O(\sqrt{pk})}n^{O(1)}$ time algorithm, which is nearly tight under the ETH for bounded ply and many patterns. Second, for $p=k^{O(1)}$ we provide a Turing kernelization (i.e., we give a polynomial-time preprocessing algorithm to reduce the instance size to $k^{O(1)}$). Our approach combines previous tools developed for planar subgraph isomorphism, such as `efficient inclusion-exclusion' from [Nederlof, STOC'20] and `isomorphism checks' from [Bodlaender et al., ICALP'16], with a different separator hierarchy and a new bound on the number of non-isomorphic separations of small order, tailored to unit disk graphs.

In this paper, we study the (decentralized) distributed optimization problem with high-dimensional sparse structure. Building upon the FedDA algorithm, we propose a (Decentralized) FedDA-GT algorithm, which incorporates the \textbf{gradient tracking} technique. It is able to eliminate the heterogeneity among different clients' objective functions while ensuring a dimension-free convergence rate. Compared to the vanilla FedDA approach, (D)FedDA-GT can significantly reduce the communication complexity, from ${O}(s^2\log d/\varepsilon^{3/2})$ to a more efficient ${O}(s^2\log d/\varepsilon)$. In cases where strong convexity is applicable, we introduce a multistep mechanism, resulting in the Multistep ReFedDA-GT algorithm, a slightly modified version of FedDA-GT. This approach achieves an impressive communication complexity of ${O}\left(s\log d \log \frac{1}{\varepsilon}\right)$ through repeated calls to the ReFedDA-GT algorithm. Finally, we conduct numerical experiments, illustrating that our proposed algorithms enjoy the dual advantage of being dimension-free and heterogeneity-free.
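
The gradient tracking mechanism itself can be shown in a few lines: each agent keeps, next to its iterate, a tracker of the network-average gradient that is mixed with neighbors and corrected by local gradient differences. The dense decentralized least-squares toy below illustrates only this mechanism under an assumed ring topology; it implements none of FedDA-GT's dual averaging, sparsity handling, or communication schedule.

```python
# Decentralized gradient tracking on a toy least-squares problem.
import numpy as np

rng = np.random.default_rng(3)
m, d = 4, 10                               # agents, dimension
A = [rng.normal(size=(20, d)) for _ in range(m)]
b = [rng.normal(size=20) for _ in range(m)]
grad = lambda i, x: A[i].T @ (A[i] @ x - b[i]) / 20

# doubly stochastic mixing matrix for a ring of 4 agents
W = np.array([[.5, .25, 0, .25], [.25, .5, .25, 0],
              [0, .25, .5, .25], [.25, 0, .25, .5]])

x = np.zeros((m, d))
y = np.array([grad(i, x[i]) for i in range(m)])   # initialize trackers
eta = 0.05
for _ in range(2000):
    x_new = W @ x - eta * y                       # mix, then descend along tracker
    g_old = np.array([grad(i, x[i]) for i in range(m)])
    g_new = np.array([grad(i, x_new[i]) for i in range(m)])
    y = W @ y + g_new - g_old                     # tracker update
    x = x_new

# all agents agree on the minimizer of the average objective
x_star = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
print("consensus error:", np.abs(x - x_star).max())
```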

In this paper, we study the problem of optimizing a linear program whose variables are the answers to a conjunctive query. For this, we propose the language LP(CQ) for specifying linear programs whose constraints and objective functions depend on the answer sets of conjunctive queries. We contribute an efficient algorithm for solving programs in a fragment of LP(CQ). The natural approach constructs a linear program having as many variables as there are elements in the answer set of the queries. Our approach constructs a linear program having the same optimal value but fewer variables. This is done by exploiting the structure of the conjunctive queries, using generalized hypertree decompositions of small width to factorize elements of the answer set together. We illustrate the various applications of LP(CQ) programs on three examples: optimizing deliveries of resources, minimizing noise for differential privacy, and computing the s-measure of patterns in graphs as needed for data mining.
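
For intuition, the sketch below spells out the naive encoding that the factorized approach improves on: one LP variable per answer of a conjunctive query, here for an invented delivery toy problem. The relations, costs, and demands are made up, and the hypertree-decomposition-based factorization that is the paper's actual contribution is not implemented.

```python
# Naive LP-over-query-answers encoding: one variable per answer of
# answers(w, c) :- warehouse(w), route(w, c), city(c).
from itertools import product
from scipy.optimize import linprog

warehouse = {"w1": 30, "w2": 40}                  # capacities
city = {"c1": 25, "c2": 35}                       # demands
route = {("w1", "c1"): 4, ("w1", "c2"): 6,        # shipping cost per unit
         ("w2", "c1"): 5, ("w2", "c2"): 3}
answers = [(w, c) for w, c in product(warehouse, city) if (w, c) in route]

cost = [route[a] for a in answers]
A_ub, b_ub = [], []
for w, cap in warehouse.items():                  # capacity rows: sum <= cap
    A_ub.append([1 if a[0] == w else 0 for a in answers]); b_ub.append(cap)
for c, dem in city.items():                       # demand rows: -sum <= -dem
    A_ub.append([-1 if a[1] == c else 0 for a in answers]); b_ub.append(-dem)

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(answers))
print(dict(zip(answers, res.x.round(2))), "cost:", res.fun)
```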

In this paper we consider the inverse problem of electrical conductivity retrieval starting from boundary measurements, in the framework of Electrical Resistance Tomography (ERT). In particular, the focus is on non-iterative reconstruction algorithms, compatible with real-time applications. In this work a new non-iterative reconstruction method for Electrical Resistance Tomography, termed the Kernel Method, is presented. The imaging algorithm deals with the problem of retrieving the shape of one or more anomalies embedded in a known background. The foundation of the proposed method is the idea that if there exists a current flux at the boundary (Neumann data) able to produce the same voltage measurements on two different configurations, with and without the anomaly, respectively, then the corresponding electric current density for the problem involving only the background material vanishes in the region occupied by the anomaly. Consistent with this observation, the Kernel Method consists of (i) evaluating a proper current flux $g$ at the boundary, (ii) solving one direct problem on a configuration without the anomaly, driven by $g$, and (iii) reconstructing the anomaly from the spatial plot of the power density, as the region in which the power density vanishes. This new tomographic method has a very simple numerical implementation at a very low computational cost. Besides theoretical results and justifications of our method, we present a large number of numerical examples to show the potential of this new algorithm.
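
A toy version of steps (ii) and (iii) is sketched below. We sidestep step (i) by hand-picking Neumann data whose background solution is the harmonic function u = (x-1/2)² - (y-1/2)² on the unit square, so the direct solve has a closed form and the current density vanishes at the center, mimicking the flux step (i) would produce for a small central anomaly. A real implementation would obtain g from measurements and solve the Neumann problem numerically.

```python
# Toy illustration of Kernel Method steps (ii)-(iii): evaluate the
# background power density and threshold its near-vanishing region.
import numpy as np

n = 101
x = np.linspace(0, 1, n)
X, Y = np.meshgrid(x, x, indexing="ij")
u = (X - 0.5) ** 2 - (Y - 0.5) ** 2            # background potential, sigma = 1

ux, uy = np.gradient(u, x, x)                  # electric field components
power = ux**2 + uy**2                          # power density sigma * |grad u|^2

mask = power < 0.05 * power.max()              # step (iii): near-vanishing region
cx, cy = X[mask].mean(), Y[mask].mean()
print(f"recovered anomaly around ({cx:.2f}, {cy:.2f}), area ~ {mask.mean():.3f}")
```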

In this paper, we tackle two challenges in multimodal learning for visual recognition: 1) when a modality is missing, either during training or testing, in real-world situations; and 2) when the computational resources are not available to finetune heavy transformer models. To this end, we propose to utilize prompt learning to mitigate the two challenges together. Specifically, our modality-missing-aware prompts can be plugged into multimodal transformers to handle general missing-modality cases, while requiring less than 1% learnable parameters compared to training the entire model. We further explore the effect of different prompt configurations and analyze the robustness to missing modality. Extensive experiments are conducted to show the effectiveness of our prompt learning framework, which improves performance under various missing-modality cases while alleviating the requirement of heavy model re-training. Code is available.
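
A minimal sketch of the missing-aware prompt idea is shown below: a small set of learnable prompt tokens, selected by which modality is absent, is prepended to the input sequence of a frozen multimodal transformer, so only the prompts are trained. The dimensions, prompt length, and three missing-modality cases are illustrative assumptions, not the paper's exact configuration.

```python
# Missing-modality-aware prompts prepended to a frozen transformer backbone.
import torch
import torch.nn as nn

class MissingAwarePrompts(nn.Module):
    def __init__(self, d_model=768, prompt_len=16):
        super().__init__()
        # one learnable prompt per missing-modality case (assumed cases)
        self.prompts = nn.ParameterDict({
            case: nn.Parameter(torch.randn(prompt_len, d_model) * 0.02)
            for case in ["complete", "missing_image", "missing_text"]
        })

    def forward(self, tokens, case):
        # tokens: (batch, seq, d_model) embeddings from the frozen backbone
        p = self.prompts[case].expand(tokens.size(0), -1, -1)
        return torch.cat([p, tokens], dim=1)      # prompt-prepended sequence

backbone = nn.TransformerEncoder(                 # stand-in for a frozen
    nn.TransformerEncoderLayer(768, 12, batch_first=True), 2)  # multimodal model
for param in backbone.parameters():
    param.requires_grad = False                   # only the prompts are trained

prompts = MissingAwarePrompts()
x = torch.randn(4, 40, 768)                       # fused image+text embeddings
out = backbone(prompts(x, "missing_text"))
print(out.shape)                                  # torch.Size([4, 56, 768])
```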
