
The method of common lines is a well-established reconstruction technique in cryogenic electron microscopy (cryo-EM), which can be used to extract the relative orientations of an object given tomographic projection images from different directions. In this paper, we deal with an analogous problem in optical diffraction tomography. Based on the Fourier diffraction theorem, we show that rigid motions of the object, i.e., rotations and translations, can be determined by detecting common circles in the Fourier-transformed data. We introduce two methods to identify common circles. The first one is motivated by the common line approach for projection images and detects the relative orientation by parameterizing the common circles in the two images. The second one assumes a smooth motion over time and calculates the angular velocity of the rotational motion via an infinitesimal version of the common circle method. Interestingly, using the stereographic projection, both methods can be reformulated as common line methods, but these lines are, in contrast to those used in cryo-EM, not confined to pass through the origin and allow for a full reconstruction of the relative orientations. Numerical proof-of-concept examples demonstrate the performance of our reconstruction methods.
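As a toy illustration of the common line idea underlying this family of methods (this is a generic sketch, not the paper's common-circle algorithm; the random test volume and all sizes are assumptions), the snippet below extracts two central sections of a 3D Fourier transform, which by the projection-slice theorem correspond to projection images from two directions, and detects their common line by brute-force correlation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random test volume, mean-removed so the DC peak does not dominate every line.
vol = rng.random((32, 32, 32))
vol -= vol.mean()
F = np.fft.fftshift(np.fft.fftn(vol))
plane_a = np.abs(F[16, :, :])  # central section with normal e0
plane_b = np.abs(F[:, 16, :])  # central section with normal e1

def central_line(plane, theta, n=31):
    # Nearest-neighbour samples along the line through the centre at angle theta.
    c = plane.shape[0] // 2
    r = np.linspace(-(c - 1), c - 1, n)
    ys = np.rint(c + r * np.sin(theta)).astype(int)
    xs = np.rint(c + r * np.cos(theta)).astype(int)
    return plane[ys, xs]

def centered_lines(plane, thetas):
    out = []
    for t in thetas:
        line = central_line(plane, t)
        out.append(line - line.mean())
    return out

def detect_common_line(pa, pb, n_angles=36):
    # Brute-force search over pairs of central lines for maximal correlation.
    thetas = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    lines_a = centered_lines(pa, thetas)
    lines_b = centered_lines(pb, thetas)
    best, best_corr = (0.0, 0.0), -np.inf
    for i, la in enumerate(lines_a):
        for j, lb in enumerate(lines_b):
            corr = la @ lb / (np.linalg.norm(la) * np.linalg.norm(lb))
            if corr > best_corr:
                best_corr = corr
                best = (np.degrees(thetas[i]), np.degrees(thetas[j]))
    return best, best_corr

(angle_a, angle_b), corr = detect_common_line(plane_a, plane_b)
```

For this pair of sections the true common line is the horizontal line through the centre of both planes (angle 0 in each), and the search recovers it with correlation close to 1; repeating this over many image pairs is what constrains the relative orientations in cryo-EM.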


This paper presents a novel, efficient, high-order accurate, and stable spectral element-based model for computing the complete three-dimensional linear radiation and diffraction problem for floating offshore structures. We present a solution to a pseudo-impulsive formulation in the time domain, where the frequency-dependent quantities, such as added mass, radiation damping, and wave excitation force for arbitrary heading angle, $\beta$, are evaluated using Fourier transforms from the tailored time-domain responses. The spatial domain is tessellated by an unstructured high-order hybrid configured mesh and represented by piece-wise polynomial basis functions in the spectral element space. Fourth-order accurate time integration is employed through an explicit four-stage Runge-Kutta method and complemented by fourth-order finite difference approximations for time differentiation. To reduce the computational burden, the model can make use of symmetry boundaries in the domain representation. The key piece of the numerical model -- the discrete Laplace solver -- is validated through $p$- and $h$-convergence studies. Moreover, to highlight the capabilities of the proposed model, we present proof-of-concept examples of simple floating bodies (a sphere and a box). Lastly, we consider a much more involved case of an oscillating water column, including generalized modes resembling the piston motion and wave-sloshing effects inside the wave energy converter chamber. In this case, the spectral element model trivially computes the infinite-frequency added mass, which is a singular problem for conventional boundary element type solvers.
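The pseudo-impulsive workflow (solve in the time domain, then Fourier-transform the tailored responses to obtain frequency-dependent coefficients) can be sketched on a toy kernel. The time-domain response K(t) = e^{-t} below is purely hypothetical, chosen only because its cosine transform is known in closed form:

```python
import numpy as np

# Hypothetical time-domain response; in the model this would come from the
# simulated pseudo-impulsive radiation response of the floating body.
t = np.linspace(0.0, 30.0, 3001)
K = np.exp(-t)

def cosine_transform(omega):
    # B(omega) = int_0^inf K(t) cos(omega t) dt, evaluated by the trapezoidal rule.
    y = K * np.cos(omega * t)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

omegas = np.array([0.5, 1.0, 2.0])
B_num = np.array([cosine_transform(w) for w in omegas])
B_exact = 1.0 / (1.0 + omegas**2)  # analytic cosine transform of exp(-t)
```

The numerical transform agrees with the analytic one to roughly the quadrature error, which is the same mechanism by which the frequency-dependent added mass and damping are recovered from time-domain data.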

Exoplanet detection by direct imaging is a difficult task: the faint signals from the objects of interest are buried under a spatially structured nuisance component induced by the host star. The exoplanet signals can only be identified when combining several observations with dedicated detection algorithms. In contrast to most existing methods, we propose to learn a model of the spatial, temporal and spectral characteristics of the nuisance directly from the observations. In a pre-processing step, a statistical model of their correlations is built locally, and the data are centered and whitened to improve both their stationarity and signal-to-noise ratio (SNR). A convolutional neural network (CNN) is then trained in a supervised fashion to detect the residual signature of synthetic sources in the pre-processed images. Our method leads to a better trade-off between precision and recall than standard approaches in the field. It also outperforms a state-of-the-art algorithm based solely on a statistical framework. Moreover, exploiting the spectral diversity improves performance compared to a similar model built solely from spatio-temporal data.
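The centering-and-whitening pre-processing step can be illustrated on synthetic data. This is a generic sketch, not the paper's locally estimated spatio-temporal-spectral model: we draw correlated samples, estimate their covariance, and apply the inverse Cholesky factor so that the whitened data have identity empirical covariance:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a stack of observations: n samples of a
# p-dimensional signal (e.g. a pixel patch) with a correlated nuisance.
n, p = 2000, 5
A = rng.normal(size=(p, p))
X = rng.normal(size=(n, p)) @ A.T  # rows are correlated observations

# Center, estimate the covariance, and whiten with its inverse Cholesky factor.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (n - 1)
L = np.linalg.cholesky(cov)
Xw = np.linalg.solve(L, Xc.T).T  # whitened data

cov_w = Xw.T @ Xw / (n - 1)  # should be (numerically) the identity
```

In the paper this step is applied locally so that the nuisance statistics can vary across the field of view; the CNN then operates on the whitened residuals.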

In this paper, we present new high-probability PAC-Bayes bounds for different types of losses. Firstly, for losses with a bounded range, we present a strengthened version of Catoni's bound that holds uniformly for all parameter values. This leads to new fast-rate and mixed-rate bounds that are interpretable and tighter than previous bounds in the literature. Secondly, for losses with more general tail behaviors, we introduce two new parameter-free bounds: a PAC-Bayes Chernoff analogue when the loss's cumulant generating function is bounded, and a bound when the loss's second moment is bounded. These two bounds are obtained using a new technique based on a discretization of the space of possible events for the "in probability" parameter optimization problem. Finally, we extend all previous results to anytime-valid bounds using a simple technique applicable to any existing bound.
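For context, a classical fixed-parameter PAC-Bayes bound for losses in [0, 1] (one common form of McAllester's bound, not one of the new bounds of this paper) can be evaluated as:

```python
import math

def mcallester_bound(emp_risk, kl, n, delta):
    # One common form of McAllester's PAC-Bayes bound for [0,1]-valued losses:
    # with probability >= 1 - delta over the sample,
    #   R(Q) <= R_hat(Q) + sqrt((KL(Q||P) + ln(2 sqrt(n) / delta)) / (2 n)).
    return emp_risk + math.sqrt((kl + math.log(2.0 * math.sqrt(n) / delta)) / (2.0 * n))

# Hypothetical values: empirical risk 0.10, KL(Q||P) = 5 nats, 10k samples.
b = mcallester_bound(emp_risk=0.10, kl=5.0, n=10_000, delta=0.05)
```

The slow-rate sqrt(KL/n) term is exactly what fast-rate and mixed-rate bounds of the kind studied in the paper improve upon in low-risk regimes.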

We study dynamic $(1-\epsilon)$-approximate rounding of fractional matchings -- a key ingredient in numerous breakthroughs in the dynamic graph algorithms literature. Our first contribution is a surprisingly simple deterministic rounding algorithm in bipartite graphs with amortized update time $O(\epsilon^{-1} \log^2 (\epsilon^{-1} \cdot n))$, matching an (unconditional) recourse lower bound of $\Omega(\epsilon^{-1})$ up to logarithmic factors. Moreover, this algorithm's update time improves provided the minimum (non-zero) weight in the fractional matching is lower bounded throughout. Combining this algorithm with novel dynamic \emph{partial rounding} algorithms to increase this minimum weight, we obtain several algorithms that improve this dependence on $n$. For example, we give a high-probability randomized algorithm with $\tilde{O}(\epsilon^{-1}\cdot (\log\log n)^2)$-update time against adaptive adversaries. (We use Soft-Oh notation, $\tilde{O}$, to suppress polylogarithmic factors in the argument, i.e., $\tilde{O}(f)=O(f\cdot \mathrm{poly}(\log f))$.) Using our rounding algorithms, we also round known $(1-\epsilon)$-decremental fractional bipartite matching algorithms with no asymptotic overhead, thus improving on state-of-the-art algorithms for the decremental bipartite matching problem. Further, we provide extensions of our results to general graphs and to maintaining almost-maximal matchings.
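The fact that fractional matchings in bipartite graphs can be rounded with essentially no loss can be checked on a toy instance. The fractional values below are made up, and the brute-force search is of course nothing like the paper's dynamic algorithms; it only exhibits the integrality that rounding exploits:

```python
from itertools import permutations

# A fractional matching on a small bipartite graph (every vertex load <= 1).
# In bipartite graphs the matching polytope is integral, so an integral
# matching of at least the fractional value always exists.
frac = {(0, 0): 0.5, (0, 1): 0.5, (1, 0): 0.5,
        (1, 1): 0.25, (1, 2): 0.25, (2, 2): 0.75}
frac_value = sum(frac.values())  # 2.75

def best_integral_matching(frac, n=3):
    # Brute force over left-to-right assignments; keep the largest matching
    # supported on the fractional solution's edge set.
    best = []
    for perm in permutations(range(n)):
        m = [(i, perm[i]) for i in range(n) if (i, perm[i]) in frac]
        if len(m) > len(best):
            best = m
    return best

matching = best_integral_matching(frac)
```

Here an integral matching of size 3 exists even though the fractional value is 2.75; the algorithmic challenge addressed by the paper is maintaining such a rounding under updates with polylogarithmic (or better) amortized cost.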

Motion prediction for automated vehicles in complex environments is a difficult task that must be mastered if automated vehicles are to be used in arbitrary situations. Many factors influence the future motion of traffic participants, ranging from traffic rules and the interactions between participants to the personal habits of human drivers. We therefore present a novel approach for graph-based prediction built on a heterogeneous holistic graph representation that combines temporal information, properties of and relations between traffic participants, as well as relations with static elements like the road network. The information is encoded through different types of nodes and edges, both of which are enriched with arbitrary features. We evaluated the approach on the INTERACTION and Argoverse datasets and conducted an informative ablation study to demonstrate the benefit of different types of information for motion prediction quality.
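A heterogeneous holistic graph of this kind can be sketched with plain data structures (in practice one would use a library such as PyTorch Geometric; all node/edge types and features below are hypothetical):

```python
# Minimal illustration of a heterogeneous graph with typed, feature-enriched
# nodes and edges. Types and features are invented for the sketch.
graph = {
    "nodes": {
        "vehicle": [{"id": 0, "feat": [12.3, 0.8]},   # e.g. speed, heading
                    {"id": 1, "feat": [8.1, -0.2]}],
        "lane":    [{"id": 0, "feat": [50.0]}],        # e.g. speed limit
    },
    "edges": {
        # (source type, relation, target type) -> list of (src id, dst id)
        ("vehicle", "interacts_with", "vehicle"): [(0, 1)],
        ("vehicle", "drives_on", "lane"): [(0, 0), (1, 0)],
    },
}

n_vehicles = len(graph["nodes"]["vehicle"])
n_edge_types = len(graph["edges"])
```

A heterogeneous graph neural network would then learn separate message-passing functions per edge type, which is what lets one representation mix dynamic participants and static map elements.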

Variational regularization is commonly used to solve linear inverse problems, and involves augmenting a data fidelity term with a regularizer. The regularizer promotes a priori information, and is weighted by a regularization parameter. Selection of an appropriate regularization parameter is critical, with different choices leading to very different reconstructions. Existing strategies such as the discrepancy principle and the L-curve can be used to determine a suitable parameter value, but in recent years a supervised machine learning approach called bilevel learning has been employed. Bilevel learning is a powerful framework for determining optimal parameters, and involves solving a nested optimisation problem. While previous strategies enjoy various theoretical results, the well-posedness of bilevel learning in this setting is still a developing field. One necessary property is positivity of the determined regularization parameter. In this work, we provide a new condition that better characterises the positivity of optimal regularization parameters than the existing theory. Numerical results verify and explore this new condition for both small- and large-scale problems.
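For comparison with bilevel learning, the classical discrepancy-principle parameter choice can be sketched for Tikhonov regularization on a synthetic problem (problem sizes, noise level, and the parameter grid are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear inverse problem y = A x + noise (all data synthetic).
m, d = 60, 40
A = rng.normal(size=(m, d)) / np.sqrt(m)
x_true = rng.normal(size=d)
sigma = 0.05
y = A @ x_true + sigma * rng.normal(size=m)

def tikhonov(lam):
    # Variational reconstruction: argmin_x ||A x - y||^2 + lam * ||x||^2.
    return np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ y)

# Discrepancy principle: pick the parameter whose residual matches the
# expected noise norm sigma * sqrt(m).
target = sigma * np.sqrt(m)
lams = np.logspace(-6, 1, 50)
residuals = np.array([np.linalg.norm(A @ tikhonov(l) - y) for l in lams])
lam_star = lams[int(np.argmin(np.abs(residuals - target)))]
```

Bilevel learning instead treats the chosen parameter as the minimizer of a reconstruction loss on training pairs, which is where positivity of the optimal parameter becomes a genuine theoretical question.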

While road obstacle detection techniques have become increasingly effective, they typically ignore the fact that, in practice, the apparent size of the obstacles decreases as their distance to the vehicle increases. In this paper, we account for this by computing a scale map encoding the apparent size of a hypothetical object at every image location. We then leverage this scale map to (i) generate training data by injecting onto the road synthetic objects whose size corresponds to the perspective foreshortening; and (ii) incorporate perspective information in the decoding part of the detection network to guide the obstacle detector. Our results on standard benchmarks show that, together, these two strategies significantly boost the obstacle detection performance, allowing our approach to consistently outperform state-of-the-art methods in terms of instance-level obstacle detection.
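Under a flat-road pinhole-camera assumption, such a scale map has a closed form: an object of physical height H standing on the road at image row v has apparent height f·H/depth(v), with depth(v) = f·h_cam/(v − v_horizon), so the focal length cancels. A minimal sketch (all camera parameters are hypothetical):

```python
import numpy as np

# Hypothetical camera: focal length f (px), camera height h_cam (m),
# object height H (m), horizon row v_horizon, image height n_rows.
f, h_cam, H = 800.0, 1.5, 0.5
v_horizon, n_rows = 240, 480

rows = np.arange(n_rows)
below = rows > v_horizon  # only rows below the horizon lie on the road
depth = np.where(below, f * h_cam / np.maximum(rows - v_horizon, 1), np.inf)
scale = np.where(below, f * H / depth, 0.0)  # apparent object height in pixels
```

The scale grows linearly with the distance below the horizon, which is exactly the foreshortening pattern used both to size the injected synthetic obstacles and to condition the decoder.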

Quantitative tissue information, such as the light scattering properties, is considered a key factor in the detection of cancerous cells in medical diagnosis. A promising method to obtain these data is optical coherence tomography (OCT). In this article, we therefore discuss the reconstruction of the refractive index from OCT data, employing a Gaussian-beam-based forward model. We consider in particular samples with a layered structure, meaning that the refractive index as a function of depth is well approximated by a piecewise constant function. For the reconstruction, we present a layer-by-layer method in which, at every step, the refractive index is obtained via a discretized least squares minimization. For an approximated form of the minimization problem, we present an existence and uniqueness result. The applicability of the proposed method is then verified by reconstructing refractive indices of layered media from both simulated and experimental OCT data.
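A drastically simplified version of layer-by-layer index recovery uses only the Fresnel reflection coefficient at normal incidence, r = (n1 − n2)/(n1 + n2), inverted interface by interface. This replaces the paper's Gaussian-beam model and least-squares step with an exact toy inversion, and the layer indices are made up:

```python
def fresnel_r(n1, n2):
    # Amplitude reflection coefficient at a planar interface, normal incidence.
    return (n1 - n2) / (n1 + n2)

def recover_indices(n0, reflections):
    # Invert r = (n_prev - n_next) / (n_prev + n_next) one interface at a time:
    # n_next = n_prev * (1 - r) / (1 + r).
    ns, n_prev = [], n0
    for r in reflections:
        n_next = n_prev * (1.0 - r) / (1.0 + r)
        ns.append(n_next)
        n_prev = n_next
    return ns

# Hypothetical layered sample with air (n = 1.0) on top.
true_ns = [1.33, 1.40, 1.36]
rs, n_prev = [], 1.0
for n in true_ns:
    rs.append(fresnel_r(n_prev, n))
    n_prev = n

recovered = recover_indices(1.0, rs)
```

The same layer-by-layer structure appears in the actual method: each layer's index is determined from the data attributable to its interface before moving one layer deeper.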

Change-point analysis has been successfully applied to detect changes in multivariate data streams over time. In many applications, when data are observed over a graph/network, a change does not occur simultaneously in all coordinates but instead spreads from an initial source coordinate to the neighbouring coordinates over time. We propose a new method, SpreadDetect, that estimates both the source coordinate and the initial timepoint of change in such a setting. We prove that under appropriate conditions, the SpreadDetect algorithm consistently estimates both the source coordinate and the timepoint of change and that the minimal signal size detectable by the algorithm is minimax optimal. The practical utility of the algorithm is demonstrated through numerical experiments and a real COVID-19 dataset.
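The spreading-change setting can be simulated on a path graph. The naive onset detector below (first threshold crossing) merely stands in for the SpreadDetect statistics, which we do not reproduce; all sizes and the shift magnitude are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Path graph with p coordinates; a mean shift starts at `source` at time t0
# and spreads to graph neighbours one hop per time step.
p, T, source, t0, shift = 11, 60, 4, 20, 5.0
X = rng.normal(scale=0.3, size=(p, T))
for i in range(p):
    onset = t0 + abs(i - source)  # hop distance delays the change
    X[i, onset:] += shift

# Naive estimator: per coordinate, first time the signal exceeds half the
# shift; the source is the coordinate whose change is detected earliest.
onsets = np.argmax(X > shift / 2.0, axis=1)
est_source = int(np.argmin(onsets))
est_t0 = int(onsets[est_source])
```

With a large shift this naive scheme already recovers source and timepoint; the interesting regime, addressed by the paper, is small signal sizes, where pooling information across coordinates according to the spreading pattern is essential.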

In recent years, Graph Neural Networks have reported outstanding performance in tasks like community detection, molecule classification and link prediction. However, the black-box nature of these models prevents their application in domains like health and finance, where understanding the models' decisions is essential. Counterfactual Explanations (CE) provide such understanding through examples. Moreover, the literature on CE is flourishing with novel explanation methods tailored to graph learning. In this survey, we analyse the existing Graph Counterfactual Explanation methods, providing the reader with an organisation of the literature according to a uniform formal notation for definitions, datasets, and metrics, thus simplifying comparisons with respect to the methods' advantages and disadvantages. We discuss seven methods and sixteen synthetic and real datasets, providing details on the possible generation strategies. We highlight the most common evaluation strategies and formalise nine of the metrics used in the literature. We also introduce the evaluation framework GRETEL and show how it can be extended and used while providing a further dimension of comparison encompassing reproducibility aspects. Finally, we discuss how counterfactual explanation interplays with privacy and fairness, before delving into open challenges and future work.
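The core idea of a graph counterfactual can be shown with a deliberately trivial classifier and a greedy edge-deletion search (both hypothetical; the surveyed methods are far more sophisticated):

```python
def classify(edges, threshold=3):
    # Toy graph classifier: predicts 1 ("dense") iff the edge count reaches
    # the threshold. Real settings would use a trained GNN here.
    return int(len(edges) >= threshold)

def counterfactual(edges, threshold=3):
    # Greedily delete edges until the prediction flips. The resulting graph is
    # a counterfactual example; the deleted edges "explain" the decision.
    edges = list(edges)
    removed = []
    original = classify(edges, threshold)
    while classify(edges, threshold) == original and edges:
        removed.append(edges.pop())
    return edges, removed

g = [(0, 1), (1, 2), (2, 3), (3, 0)]
cf_edges, removed = counterfactual(g)
```

A useful counterfactual is a *minimal* such perturbation that remains plausible under the data distribution, which is precisely what the surveyed methods optimise for and what the formalised metrics measure.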
