
This work proposes $\texttt{NePhi}$, a neural deformation model that results in approximately diffeomorphic transformations. In contrast to the predominant voxel-based approaches, $\texttt{NePhi}$ represents deformations functionally, which allows for memory-efficient training and inference. This is of particular importance for large volumetric registrations. Further, while medical image registration approaches representing transformation maps via multi-layer perceptrons have been proposed, $\texttt{NePhi}$ facilitates pairwise optimization-based registration $\textit{as well as}$ learning-based registration via predicted or optimized global and local latent codes. Lastly, as deformation regularity is a highly desirable property for most medical image registration tasks, $\texttt{NePhi}$ makes use of gradient inverse consistency regularization, which empirically results in approximately diffeomorphic transformations. We show the performance of $\texttt{NePhi}$ on two 2D synthetic datasets as well as on real 3D lung registration. Our results show that $\texttt{NePhi}$ can achieve accuracies similar to those of voxel-based representations in a single-resolution registration setting while using less memory and allowing for faster instance optimization.
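
As a rough illustration of the functional-representation idea (a minimal sketch with assumed architecture choices, not $\texttt{NePhi}$'s actual design), a coordinate MLP conditioned on a latent code can stand in for a dense voxel displacement grid:

```python
# Minimal sketch of a coordinate-based deformation network (hypothetical
# architecture): an MLP maps a spatial coordinate and a latent code to a
# displacement, so the deformation is stored as network weights plus a latent
# vector rather than as a dense voxel grid.
import torch
import torch.nn as nn

class DeformationMLP(nn.Module):
    def __init__(self, dim=3, latent_dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),  # displacement vector
        )

    def forward(self, coords, latent):
        # coords: (N, dim) query points; latent: (latent_dim,) code for this image pair
        z = latent.expand(coords.shape[0], -1)
        return coords + self.net(torch.cat([coords, z], dim=-1))  # warped points

# Usage: evaluate the map only at the points needed, instead of a full volume.
model = DeformationMLP()
pts = torch.rand(1024, 3)   # sampled coordinates in [0, 1]^3
code = torch.zeros(64)      # global latent code (predicted or optimized)
warped = model(pts, code)
```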

Related content

Image registration is a classic problem and technical challenge in image processing research. Its purpose is to compare or fuse images of the same object acquired under different conditions, for example images from different acquisition devices, taken at different times, or from different viewing angles; registration between images of different objects is sometimes needed as well. Concretely, given two images from a dataset, registration seeks a spatial transformation that maps one image onto the other so that points corresponding to the same spatial location in the two images are brought into one-to-one correspondence, thereby achieving information fusion. The technique is widely used in computer vision, medical image processing, materials mechanics, and other fields. Depending on the application, some work focuses on fusing the two images via the resulting transformation, while other work studies the transformation itself to obtain mechanical properties of the object.
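
As a generic mathematical formulation of this idea (a standard template rather than the setup of any specific paper above), registration seeks a spatial transformation $\varphi$ aligning a moving image $I_m$ to a fixed image $I_f$ by balancing an image-similarity term against a regularizer on the transformation:

$$\hat{\varphi} \;=\; \arg\min_{\varphi}\; \mathcal{D}\!\left(I_m \circ \varphi,\, I_f\right) \;+\; \lambda\, \mathcal{R}(\varphi),$$

where $\mathcal{D}$ measures image dissimilarity (e.g. sum of squared differences) and $\mathcal{R}$ penalizes irregular deformations.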

We present an alternating direction method of multipliers (ADMM) for a generic overlapping group lasso problem, where the groups may overlap in an arbitrary way. We also prove lower and upper bounds for both the $\ell_1$ sparse group lasso problem and the $\ell_0$ sparse group lasso problem, and propose algorithms for computing these bounds.
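
For reference, a common form of the overlapping group lasso objective (the paper's exact formulation may differ) is

$$\min_{x \in \mathbb{R}^n} \; \tfrac{1}{2}\|Ax - b\|_2^2 \;+\; \lambda \sum_{g \in \mathcal{G}} w_g \|x_g\|_2,$$

where the groups $g \in \mathcal{G}$ may share coordinates and $x_g$ denotes the subvector of $x$ indexed by $g$; the $\ell_1$ (resp. $\ell_0$) sparse group lasso additionally includes a $\|x\|_1$ (resp. $\|x\|_0$) term.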

Target similarity tuning (TST) is a method for selecting relevant examples in natural language (NL)-to-code generation with large language models (LLMs) to improve performance. Its goal is to adapt a sentence embedding model so that the similarity between two NL inputs matches the similarity between their associated code outputs. In this paper, we propose different methods to apply and improve TST in the real world. First, we replace the sentence transformer with embeddings from a larger model, which reduces sensitivity to the language distribution and thus provides more flexibility in synthetic generation of examples. We then train a tiny model that transforms these embeddings into a space where embedding similarity matches code similarity; this allows the larger model to remain a black box and requires only a few matrix multiplications at inference time. Second, we show how to efficiently select a smaller number of training examples to train the TST model. Third, we introduce a ranking-based evaluation for TST that does not require end-to-end code generation experiments, which can be expensive to perform.
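
A minimal sketch of the "tiny model" idea (hypothetical names and loss; not the authors' implementation): learn a single linear map over frozen NL embeddings so that cosine similarity in the projected space matches a precomputed code-similarity target.

```python
import torch
import torch.nn.functional as F

def train_projection(nl_emb, code_sim, dim_out=256, epochs=200, lr=1e-3):
    # nl_emb: (N, d) frozen embeddings from the larger model
    # code_sim: (N, N) pairwise similarity of the associated code outputs
    W = torch.randn(nl_emb.shape[1], dim_out, requires_grad=True)
    opt = torch.optim.Adam([W], lr=lr)
    for _ in range(epochs):
        z = F.normalize(nl_emb @ W, dim=-1)    # project and L2-normalize
        pred_sim = z @ z.T                     # cosine-similarity matrix
        loss = F.mse_loss(pred_sim, code_sim)  # match code similarity
        opt.zero_grad(); loss.backward(); opt.step()
    return W.detach()  # at inference, similarity needs only a few matrix multiplications
```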

Given a vector dataset $\mathcal{X}$ and a query vector $\vec{x}_q$, graph-based Approximate Nearest Neighbor Search (ANNS) aims to build a graph index $G$ and approximately return vectors with minimum distances to $\vec{x}_q$ by searching over $G$. The main drawback of graph-based ANNS is that a graph index can be too large to fit into memory, especially for a large-scale $\mathcal{X}$. To solve this, a Product Quantization (PQ)-based hybrid method called DiskANN was proposed to store a low-dimensional PQ index in memory and retain the graph index on SSD, thus reducing memory overhead while ensuring high search accuracy. However, it suffers from two I/O issues that significantly affect the overall efficiency: (1) a long routing path from the entry vertex to the query's neighborhood, which results in a large number of I/O requests, and (2) redundant I/O requests during the routing process. We propose an optimized DiskANN++ to overcome the above issues. Specifically, for the first issue, we present a query-sensitive entry-vertex selection strategy that replaces DiskANN's static, graph-central entry vertex with a dynamically determined entry vertex close to the query. For the second issue, we present an isomorphic mapping of DiskANN's graph index to optimize the SSD layout and propose an asynchronously optimized Pagesearch based on the optimized SSD layout as an alternative to DiskANN's beamsearch. Comprehensive experimental studies on eight real-world datasets demonstrate DiskANN++'s superior efficiency: we achieve a notable 1.5x to 2.2x improvement in QPS over DiskANN under the same accuracy constraint.
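
An illustrative sketch (not DiskANN++'s actual implementation) of the query-sensitive entry vertex idea: instead of always starting the on-disk graph search from a fixed, graph-central vertex, pick from a small in-memory candidate set the vertex whose compressed (PQ) distance to the query is smallest, and start routing from there to shorten the path.

```python
import numpy as np

def select_entry_vertex(query, candidate_ids, pq_distance):
    # pq_distance(query, vid) -> approximate distance from the in-memory PQ index
    dists = np.array([pq_distance(query, vid) for vid in candidate_ids])
    return candidate_ids[int(np.argmin(dists))]

# Usage (with a toy stand-in for the PQ distance function):
# entry = select_entry_vertex(q, sampled_ids,
#                             lambda q, v: np.linalg.norm(q - vectors[v]))
```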

This note is concerned with deterministic constructions of $m \times N$ matrices satisfying a restricted isometry property from $\ell_2$ to $\ell_1$ on $s$-sparse vectors. Similarly to the standard ($\ell_2$ to $\ell_2$) restricted isometry property, such constructions can be found in the regime $m \asymp s^2$, at least in theory. With effectiveness of implementation in mind, two simple constructions are presented in the less pleasing but still relevant regime $m \asymp s^4$. The first one, executing a Las Vegas strategy, is quasideterministic and applies in the real setting. The second one, exploiting Golomb rulers, is explicit and applies to the complex setting. As a stepping stone, an explicit isometric embedding from $\ell_2^n(\mathbb{C})$ to $\ell_4^{cn^2}(\mathbb{C})$ is presented. Finally, the extension of the problem from sparse vectors to low-rank matrices is raised as an open question.
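
For orientation, one common way to state a restricted isometry property from $\ell_2$ to $\ell_1$ (up to an overall rescaling of the matrix; conventions vary across papers) is

$$(1 - \delta)\,\|x\|_2 \;\le\; \|Ax\|_1 \;\le\; (1 + \delta)\,\|x\|_2 \qquad \text{for all } s\text{-sparse } x,$$

where $A$ is the $m \times N$ measurement matrix and $\delta \in (0,1)$ is the restricted isometry constant.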

Differential privacy guarantees allow the results of a statistical analysis involving sensitive data to be released without compromising the privacy of any individual taking part. Achieving such guarantees generally requires the injection of noise, either directly into parameter estimates or into the estimation process. Instead of artificially introducing perturbations, sampling from Bayesian posterior distributions has been shown to be a special case of the exponential mechanism, producing consistent and efficient private estimates without altering the data generative process. The application of current approaches has, however, been limited by their strong bounding assumptions, which do not hold for basic models such as simple linear regressors. To ameliorate this, we propose $\beta$D-Bayes, a posterior sampling scheme from a generalised posterior targeting the minimisation of the $\beta$-divergence between the model and the data generating process. This provides private estimation that is generally applicable without requiring changes to the underlying model and that consistently learns the data generating parameter. We show that $\beta$D-Bayes produces more precise estimates for the same privacy guarantees, and for the first time facilitates differentially private estimation via posterior sampling for complex classifiers and continuous regression models such as neural networks.
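
Schematically (conventions for the loss vary, and this is not the paper's exact notation), such a generalised posterior takes a Gibbs form in which the log-likelihood is replaced by a $\beta$-divergence-based loss:

$$\pi_{\beta}(\theta \mid x_{1:n}) \;\propto\; \pi(\theta)\, \exp\!\Big(-\sum_{i=1}^{n} \ell_{\beta}(x_i, \theta)\Big),$$

where minimising $\sum_i \ell_{\beta}(x_i, \theta)$ corresponds to minimising the $\beta$-divergence between the model and the data generating process.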

We present ASPIRO, an approach for verbalising structured data into short template sentences in zero- to few-shot settings. Unlike previous methods, our approach prompts large language models (LLMs) to directly produce entity-agnostic templates, rather than relying on LLMs to faithfully copy the given example entities or validating/crafting the templates manually. We incorporate LLM re-prompting, triggered by algorithmic parsing checks, as well as consistency validation induced by the PARENT metric, to identify and rectify template generation problems in real time. Compared to direct LLM output, ASPIRO reduces the parsing error rate of generated verbalisations of RDF triples on the DART dataset by 66\% on average. Our best 5-shot text-davinci-003 setup, scoring BLEU of 50.62, METEOR of 45.16, BLEURT of 0.82, NUBIA of 0.87, and PARENT of 0.8962 on the Rel2Text dataset, competes effectively with recent fine-tuned pre-trained language models.
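
A hypothetical sketch of re-prompting guarded by a parsing check (illustrative of the ASPIRO idea, not the authors' code); `llm` is any callable mapping a prompt string to a completion string, and the placeholder check is an assumption:

```python
def verbalise_with_retries(llm, relation, max_retries=3):
    def parse_ok(template):
        # Minimal check: the template must be entity-agnostic, i.e. keep
        # subject/object placeholders instead of copying concrete entities.
        return "<subject>" in template and "<object>" in template

    prompt = (f"Write one entity-agnostic template sentence for the relation "
              f"'{relation}', using <subject> and <object> as placeholders.")
    template = ""
    for _ in range(max_retries + 1):
        template = llm(prompt)
        if parse_ok(template):
            return template
        # Re-prompt with feedback about the failed validation.
        prompt += f"\nThe previous output was invalid: {template!r}. Please fix it."
    return template  # fall back to the last attempt
```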

Computing the proximal operator of the sparsity-promoting piecewise exponential (PiE) penalty $1-e^{-|x|/\sigma}$ with a given shape parameter $\sigma>0$, a popular nonconvex surrogate of the $\ell_0$-norm, is fundamental in feature selection via support vector machines, image reconstruction, zero-one programming problems, compressed sensing, and related areas. Due to the nonconvexity of PiE, its proximal operator has long been evaluated via an iteratively reweighted $\ell_1$ algorithm, which substitutes PiE with its first-order approximation; however, the solutions obtained in this way are only critical points. Based on an exact characterization of the proximal operator of PiE, we explore how the iteratively reweighted $\ell_1$ solution deviates from the true proximal operator in certain regions, which can be explicitly identified in terms of $\sigma$, the initial value, and the regularization parameter in the definition of the proximal operator. Moreover, the initial value can be chosen adaptively and simply to ensure that the iteratively reweighted $\ell_1$ solution belongs to the proximal operator of PiE.
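
With $\lambda > 0$ denoting the regularization parameter mentioned above, the proximal operator in question is the standard one applied to the PiE penalty (written here for scalar arguments):

$$\operatorname{prox}_{\lambda\,\mathrm{PiE}}(y) \;=\; \arg\min_{x \in \mathbb{R}} \; \lambda\left(1 - e^{-|x|/\sigma}\right) + \tfrac{1}{2}(x - y)^2 .$$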

We explore the possibility of fully replacing a plasma physics kinetic simulator with a graph neural network-based simulator. We focus on this class of surrogate models given the similarity between their message-passing update mechanism and the traditional physics solver update, and the possibility of enforcing known physical priors in the graph construction and update. We show that our model learns the kinetic plasma dynamics of the one-dimensional plasma model, a predecessor of contemporary kinetic plasma simulation codes, and recovers a wide range of well-known kinetic plasma processes, including plasma thermalization, electrostatic fluctuations about thermal equilibrium, the drag on a fast sheet, and Landau damping. We compare the performance against the original plasma model in terms of run-time, conservation laws, and temporal evolution of key physical quantities. The limitations of the model are presented and possible directions for higher-dimensional surrogate models for kinetic plasmas are discussed.
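
A generic message-passing update of the kind alluded to (a minimal sketch; the authors' architecture, graph construction, and physical priors are not reproduced here): each node aggregates messages from its neighbors and updates its state, loosely mirroring how a particle/sheet solver updates from local interactions.

```python
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    def __init__(self, node_dim=16, msg_dim=16):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * node_dim, msg_dim), nn.ReLU())
        self.upd = nn.Linear(node_dim + msg_dim, node_dim)

    def forward(self, h, edges):
        # h: (N, node_dim) node states; edges: (E, 2) LongTensor of (src, dst) pairs
        src, dst = edges[:, 0], edges[:, 1]
        m = self.msg(torch.cat([h[src], h[dst]], dim=-1))              # per-edge messages
        agg = torch.zeros(h.size(0), m.size(1)).index_add_(0, dst, m)  # sum messages at dst
        return h + self.upd(torch.cat([h, agg], dim=-1))               # residual node update

# Usage: h_next = MessagePassingLayer()(torch.randn(8, 16),
#                                        torch.tensor([[0, 1], [1, 2], [2, 0]]))
```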

We present an $O(1)$-approximate fully dynamic algorithm for the $k$-median and $k$-means problems on metric spaces with amortized update time $\tilde O(k)$ and worst-case query time $\tilde O(k^2)$. We complement our theoretical analysis with the first in-depth experimental study for the dynamic $k$-median problem on general metrics, focusing on comparing our dynamic algorithm to the current state-of-the-art by Henzinger and Kale [ESA'20]. Finally, we also provide a lower bound for dynamic $k$-median which shows that any $O(1)$-approximate algorithm with $\tilde O(\text{poly}(k))$ query time must have $\tilde \Omega(k)$ amortized update time, even in the incremental setting.

Multiple-input multiple-output (MIMO) systems have been the defining mobile communications technology in recent generations. With ever-increasing demands looming towards the sixth generation (6G), additional degrees of freedom are needed to deliver further gains beyond MIMO. To this end, the fluid antenna system (FAS) has emerged as a new way to obtain spatial diversity using reconfigurable, position-switchable antennas. Considering the case with more than one port activated on a 2D fluid antenna surface at both ends, we take an information-theoretic approach to study the achievable performance limits of MIMO-FAS. First, we propose a suboptimal scheme, referred to as QR MIMO-FAS, to maximize the rate at high signal-to-noise ratio (SNR) via joint port selection, transmit and receive beamforming, and power allocation. We then derive the optimal diversity and multiplexing tradeoff (DMT) of MIMO-FAS. From the DMT, we highlight that MIMO-FAS outperforms traditional MIMO antenna systems. Further, we introduce a new metric, the q-outage capacity, which jointly considers rate and outage probability. Under this metric, our results indicate that MIMO-FAS greatly surpasses traditional MIMO.
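
For context, the classical diversity and multiplexing tradeoff (in the sense of Zheng and Tse; stated here generically, not as the paper's MIMO-FAS-specific result) characterizes schemes in the high-SNR limit through

$$r \;=\; \lim_{\mathrm{SNR} \to \infty} \frac{R(\mathrm{SNR})}{\log \mathrm{SNR}}, \qquad d \;=\; -\lim_{\mathrm{SNR} \to \infty} \frac{\log P_e(\mathrm{SNR})}{\log \mathrm{SNR}},$$

where $r$ is the multiplexing gain, $d$ the diversity gain, $R$ the transmission rate, and $P_e$ the error (or outage) probability; the tradeoff curve $d^*(r)$ gives the largest diversity gain achievable at each multiplexing gain $r$.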
