
This work investigates conditions for quantitative image reconstruction in multispectral computed tomography (MSCT), which remains a topic of active research. In MSCT, one seeks to obtain from data the spatial distribution of the linear attenuation coefficient within the imaged subject at a given X-ray energy, referred to as a virtual monochromatic image (VMI). As a VMI is often decomposed into a linear combination of basis images with known decomposition coefficients, reconstructing a VMI is tantamount to reconstructing the basis images. An empirical, but highly effective, two-step data-domain-decomposition (DDD) method has been developed and used widely for quantitative image reconstruction in MSCT. In the two-step DDD method, step (1) estimates the so-called basis sinograms from data by inverting a nonlinear transform, whereas step (2) reconstructs basis images from the estimated basis sinograms. A VMI can then readily be obtained as a linear combination of the reconstructed basis images. As step (2) involves the inversion of a straightforward linear system, step (1) is the key component of the DDD method, in which a nonlinear system must be inverted to estimate the basis sinograms from data. In this work, we consider a {\it discrete} form of the nonlinear system in step (1), and then carry out theoretical and numerical analyses of conditions on the existence, uniqueness, and stability of its solution for accurately estimating the discrete basis sinograms, leading to quantitative reconstruction of VMIs in MSCT.
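To make the two-step structure concrete, the following minimal numerical sketch sets up a discrete two-basis, two-spectrum version of step (1) and inverts it per ray with Newton's method; the energy grid, spectra, basis attenuation values, and the finite-difference Newton solver are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Hypothetical discrete setup: 2 spectral measurements, 2 basis materials,
# energies sampled on a small grid (all numbers are illustrative).
energies = np.array([40.0, 60.0, 80.0, 100.0])          # keV
spectra = np.array([[0.4, 0.4, 0.2, 0.0],                # low-kVp spectrum (normalized)
                    [0.1, 0.3, 0.4, 0.2]])               # high-kVp spectrum (normalized)
mu = np.array([[0.30, 0.22, 0.18, 0.16],                 # basis 1 attenuation vs energy
               [0.90, 0.50, 0.35, 0.28]])                # basis 2 attenuation vs energy

def forward(a):
    """Nonlinear data model of step (1): basis line integrals a -> -log of
    spectrally weighted transmission, one value per spectrum."""
    return -np.log(spectra @ np.exp(-mu.T @ a))

def jacobian(a, eps=1e-6):
    """Finite-difference Jacobian (2x2) of the forward map."""
    J = np.zeros((2, 2))
    f0 = forward(a)
    for k in range(2):
        da = np.zeros(2); da[k] = eps
        J[:, k] = (forward(a + da) - f0) / eps
    return J

def estimate_basis_sinogram(data, iters=20):
    """Step (1): invert the discrete nonlinear system with Newton's method."""
    a = np.zeros(2)
    for _ in range(iters):
        a = a - np.linalg.solve(jacobian(a), forward(a) - data)
    return a

# Simulate one ray with known basis line integrals and recover them.
a_true = np.array([2.0, 0.5])
a_est = estimate_basis_sinogram(forward(a_true))
print(a_est)   # close to a_true when the discrete system is well conditioned

# Step (2) would then reconstruct each basis image from its estimated sinogram
# with an ordinary linear CT reconstruction, and a VMI at energy E is the
# linear combination sum_k mu_k(E) * basis_image_k.
```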

Related content

We explore sim-to-real transfer of deep reinforcement learning controllers for a heavy vehicle with active suspensions designed for traversing rough terrain. While related research primarily focuses on lightweight robots with electric motors and fast actuation, this study uses a forestry vehicle with a complex hydraulic driveline and slow actuation. We simulate the vehicle using multibody dynamics and apply system identification to find an appropriate set of simulation parameters. We then train policies in simulation using various techniques to mitigate the sim-to-real gap, including domain randomization, action delays, and a reward penalty to encourage smooth control. In reality, the policies trained with action delays and a penalty for erratic actions perform at nearly the same level as in simulation. In experiments on level ground, the motion trajectories closely overlap when turning to either side, as well as in a route-tracking scenario. When faced with a ramp that requires active use of the suspensions, the simulated and real motions are in close alignment, showing that the actuator model together with system identification yields a sufficiently accurate model of the actuators. We observe that policies trained without the additional action penalty exhibit fast switching, or bang-bang control; these produce smooth motions and high performance in simulation but transfer poorly to reality. We find that the policies make only marginal use of the local height map for perception, showing no indication of look-ahead planning. However, the strong transfer capability means that further development of perception and performance can be largely confined to simulation.
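As one way to read the "reward penalty to encourage smooth control", the sketch below subtracts a quadratic penalty on consecutive action changes from a task reward; the functional form and the weight are assumptions, not the paper's exact reward.

```python
import numpy as np

def smooth_control_reward(task_reward, action, prev_action, penalty_weight=0.1):
    """Task reward minus a penalty on action changes, discouraging bang-bang
    control (quadratic form and weight are illustrative assumptions)."""
    action_change = np.asarray(action) - np.asarray(prev_action)
    return task_reward - penalty_weight * float(action_change @ action_change)

# Example: two consecutive suspension commands for a 4-actuator vehicle.
prev_a = np.array([0.2, 0.2, -0.1, 0.0])
a      = np.array([0.9, -0.8, 0.7, -0.6])        # erratic switch
print(smooth_control_reward(1.0, a, prev_a))      # heavily penalized
```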

Generating accurate extremes from an observational data set is crucial when seeking to estimate risks associated with the occurrence of future extremes that could be larger than those already observed. Applications range from natural disasters to financial crashes. Generative approaches from the machine learning community do not apply to extreme samples without careful adaptation. In parallel, asymptotic results from extreme value theory (EVT) provide a theoretical framework for modelling multivariate extreme events, especially through the notion of multivariate regular variation. Bridging these two fields, this paper details a variational autoencoder (VAE) approach for sampling multivariate heavy-tailed distributions, i.e., distributions likely to exhibit extremes of particularly large intensity. We illustrate the relevance of our approach on a synthetic data set and on a real data set of discharge measurements along the Danube river network; the latter shows the potential of our approach for flood risk assessment. In addition to outperforming the standard VAE on the tested data sets, we also provide a comparison with a competing EVT-based generative approach. On the tested cases, our approach improves the learning of the dependency structure between extremes.
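As background for the multivariate-regular-variation framework the approach builds on (not the paper's VAE itself), the sketch below generates heavy-tailed multivariate samples via the standard polar decomposition: a Pareto-distributed radius multiplied by an angular component on the simplex.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_regularly_varying(n, dim=3, tail_index=2.5):
    """Toy heavy-tailed sampler via the polar decomposition used in multivariate
    regular variation: X = R * Theta, with R Pareto(tail_index) and Theta an
    angular component on the positive simplex (illustration of the EVT framework)."""
    radius = rng.pareto(tail_index, size=(n, 1)) + 1.0     # heavy-tailed radius
    angles = rng.dirichlet(np.ones(dim), size=n)           # angular (spectral) component
    return radius * angles

x = sample_regularly_varying(10_000)
norms = x.sum(axis=1)                                      # L1 norm (components are non-negative)
print(np.quantile(norms, [0.5, 0.99, 0.999]))              # widening tail quantiles
```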

The paper concerns the $d$-dimensional stochastic approximation recursion, $$ \theta_{n+1}= \theta_n + \alpha_{n + 1} f(\theta_n, \Phi_{n+1}) $$ in which $\Phi$ is a geometrically ergodic Markov chain on a general state space $\textsf{X}$ with stationary distribution $\pi$, and $f:\Re^d\times\textsf{X}\to\Re^d$. The main results are established under a version of the Donsker-Varadhan Lyapunov drift condition known as (DV3), and a stability condition for the mean flow with vector field $\bar{f}(\theta)=\textsf{E}[f(\theta,\Phi)]$, with $\Phi\sim\pi$. (i) $\{ \theta_n\}$ is convergent a.s. and in $L_4$ to the unique root $\theta^*$ of $\bar{f}(\theta)$. (ii) A functional CLT is established, as well as the usual one-dimensional CLT for the normalized error. (iii) The CLT holds for the normalized version, $z_n := \sqrt{n} (\theta^{\text{PR}}_n -\theta^*)$, of the averaged parameters, $\theta^{\text{PR}}_n := n^{-1} \sum_{k=1}^n\theta_k$, subject to standard assumptions on the step-size. Moreover, the normalized covariance converges, $$ \lim_{n \to \infty} n \textsf{E} [ \widetilde{\theta}^{\text{PR}}_n (\widetilde{\theta}^{\text{PR}}_n)^T ] = \Sigma_\theta^*,\;\;\;\textit{with $\widetilde{\theta}^{\text{PR}}_n = \theta^{\text{PR}}_n -\theta^*$,} $$ where $\Sigma_\theta^*$ is the minimal covariance of Polyak and Ruppert. (iv) An example is given in which $f$ and $\bar{f}$ are linear in $\theta$, and the Markov chain $\Phi$ is geometrically ergodic but does not satisfy (DV3). While the algorithm is convergent, the second moment is unbounded: $ \textsf{E} [ \| \theta_n \|^2 ] \to \infty$ as $n\to\infty$.
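A minimal sketch of the recursion and of Polyak-Ruppert averaging on a toy scalar instance with Markovian (AR(1)) noise; the specific $f$, the chain, and the step-size exponent are illustrative choices, not the paper's example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy instance: f(theta, Phi) = -(theta - Phi), so fbar(theta) = -(theta - E[Phi])
# and the unique root is theta* = E[Phi] = 0.  Phi is a geometrically ergodic AR(1) chain.
def run_sa(n_steps=100_000, rho=0.7):
    phi, theta, theta_sum = 0.0, 0.0, 0.0
    for n in range(1, n_steps + 1):
        phi = 0.9 * phi + rng.normal()           # Markovian noise (AR(1), stationary mean 0)
        alpha = n ** (-rho)                       # step-size exponent in (1/2, 1)
        theta = theta + alpha * (-(theta - phi))  # theta_{n+1} = theta_n + alpha_{n+1} f(theta_n, Phi_{n+1})
        theta_sum += theta
    return theta, theta_sum / n_steps             # last iterate and Polyak-Ruppert average

theta_last, theta_pr = run_sa()
print(theta_last, theta_pr)   # both near theta* = 0; the average has smaller variance
```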

We study the problem of estimating a large, low-rank matrix corrupted by additive noise of unknown covariance, assuming one has access to additional side information in the form of noise-only measurements. We study the Whiten-Shrink-reColor (WSC) workflow, where a "noise covariance whitening" transformation is applied to the observations, followed by appropriate singular value shrinkage and a "noise covariance re-coloring" transformation. We show that under the mean square error loss, a unique, asymptotically optimal shrinkage nonlinearity exists for the WSC denoising workflow, and calculate it in closed form. To this end, we calculate the asymptotic eigenvector rotation of the random spiked F-matrix ensemble, a result which may be of independent interest. With sufficiently many pure-noise measurements, our optimally tuned WSC denoising workflow outperforms, in mean square error, matrix denoising algorithms based on optimal singular value shrinkage that do not make similar use of noise-only side information; numerical experiments show that our procedure's relative performance is particularly strong in challenging statistical settings with high dimensionality and a large degree of heteroscedasticity.
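A rough sketch of the WSC workflow, with a generic soft-threshold shrinker standing in for the paper's asymptotically optimal nonlinearity; the dimensions, the heteroscedastic noise model, and the threshold value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def wsc_denoise(Y, noise_only, shrink):
    """Whiten -> shrink singular values -> re-color.  The whitener is built from
    noise-only measurements; `shrink` is a placeholder nonlinearity."""
    Sigma = np.cov(noise_only, rowvar=False)          # noise covariance from side information
    L = np.linalg.cholesky(Sigma)
    Yw = Y @ np.linalg.inv(L).T                       # whiten (rows now have ~identity covariance)
    U, s, Vt = np.linalg.svd(Yw, full_matrices=False)
    Yw_shrunk = (U * shrink(s)) @ Vt                  # singular value shrinkage
    return Yw_shrunk @ L.T                            # re-color

# Low-rank signal plus heteroscedastic noise; extra pure-noise rows as side information.
n, p, r = 400, 50, 3
signal = rng.normal(size=(n, r)) @ rng.normal(size=(r, p))
stds = rng.uniform(0.5, 3.0, size=p)                  # per-column noise level
Y = signal + rng.normal(size=(n, p)) * stds
noise_only = rng.normal(size=(2000, p)) * stds

# Soft threshold placed above the whitened noise bulk edge ~ sqrt(n) + sqrt(p).
soft = lambda s, t=30.0: np.maximum(s - t, 0.0)
X_hat = wsc_denoise(Y, noise_only, soft)
print(np.linalg.norm(X_hat - signal) / np.linalg.norm(signal))   # relative error
```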

Large language models (LLMs), like ChatGPT, have shown some human-like cognitive abilities. To compare these abilities across models, benchmarks (i.e., sets of standard test questions) from different fields (e.g., Literature, Biology, and Psychology) are often adopted, and results are reported under traditional metrics such as accuracy, recall, and F1. However, evaluating LLMs in this way can be inefficient and inaccurate from a cognitive science perspective. Inspired by Computerized Adaptive Testing (CAT) used in psychometrics, we propose an adaptive testing framework for LLM evaluation. Rather than using a standard test set and simply reporting accuracy, this approach dynamically adjusts the characteristics of the test questions, such as difficulty, based on the model's performance. This allows for a more accurate estimation of the model's abilities using fewer questions. More importantly, it allows LLMs to be compared with humans easily, which is essential for NLP models that aim for human-level ability. Our diagnostic reports find that ChatGPT often behaves like a ``careless student'', prone to slips and occasional guessing. We conduct a fine-grained diagnosis and rank the six latest instruction-tuned LLMs along three aspects, Subject Knowledge, Mathematical Reasoning, and Programming, where GPT4 significantly outperforms the other models and reaches the cognitive ability of middle-level students. Different tests for different models using efficient adaptive testing -- we believe this has the potential to become a new norm in evaluating large language models.
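A hedged sketch of the adaptive-testing idea in its standard psychometric form: a 2PL item response model and selection of the next question by maximal Fisher information at the current ability estimate; the item bank and the selection rule are illustrative, not necessarily the paper's exact procedure.

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL item response model: probability of a correct answer at ability theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def fisher_information(theta, a, b):
    p = p_correct(theta, a, b)
    return a ** 2 * p * (1 - p)

def select_next_item(theta_hat, items, asked):
    """Pick the unasked item with maximal Fisher information at the ability estimate."""
    best, best_info = None, -1.0
    for i, (a, b) in enumerate(items):
        if i in asked:
            continue
        info = fisher_information(theta_hat, a, b)
        if info > best_info:
            best, best_info = i, info
    return best

# Item bank: (discrimination a, difficulty b) per question -- illustrative values.
items = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 1.5), (2.0, 2.5)]
theta_hat, asked = 0.0, set()
nxt = select_next_item(theta_hat, items, asked)
print(nxt, items[nxt])   # the most informative item given the current ability estimate
```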

The autonomous control of flippers plays an important role in enhancing the intelligent operation of tracked robots within complex environments. While existing methods rely mainly on hand-crafted control models, in this paper we introduce a novel approach that leverages deep reinforcement learning (DRL) for autonomous flipper control in complex terrains. Specifically, we propose a new DRL network, AT-D3QN, which ensures safe and smooth flipper control for tracked robots. It comprises two modules: a feature extraction and fusion module for extracting and integrating robot and environment state features, and a deep Q-Learning control generation module that incorporates expert knowledge to obtain a smooth and efficient control strategy. To train the network, a novel reward function is proposed that considers both learning efficiency and passing smoothness. A simulation environment is constructed using the Pymunk physics engine for training. We then apply the trained model directly to a more realistic Gazebo simulation for quantitative analysis. The consistently high performance of the proposed approach validates its superiority over manual teleoperation.
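One plausible reading of a reward that considers both learning efficiency and passing smoothness is sketched below; the terms and weights are assumptions, not the paper's reward function.

```python
def flipper_reward(forward_progress, pitch_change, flipper_angle_change,
                   w_progress=1.0, w_pitch=0.5, w_smooth=0.2):
    """Reward trading off traversal progress against body oscillation and abrupt
    flipper motion (terms and weights are illustrative assumptions)."""
    return (w_progress * forward_progress
            - w_pitch * abs(pitch_change)
            - w_smooth * abs(flipper_angle_change))

# A step that advances the robot but jerks the flippers earns less reward.
print(flipper_reward(forward_progress=0.10, pitch_change=0.02, flipper_angle_change=0.5))
```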

In this article, we present a time-space Chebyshev pseudospectral method (TS-CPsM) to approximate solutions of the generalised Burgers-Fisher (gBF) equation. The Chebyshev-Gauss-Lobatto (CGL) points serve as the foundation of the proposed method, which uses collocation in both the time and space directions. Further, using a mapping, the non-homogeneous initial-boundary value problem is transformed into a homogeneous problem, and a system of algebraic equations is obtained. The resulting nonlinear system is solved with the Newton-Raphson method. A stability analysis of the proposed method is carried out. Test problems considered by various researchers are used to illustrate the robustness and practicality of the presented approach. The approximate solutions obtained with the proposed method are highly accurate and significantly better than existing results.
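The building blocks of such a method are the CGL points and the Chebyshev differentiation matrix; the sketch below uses the standard pseudospectral construction (it illustrates the ingredients, not the full TS-CPsM solver).

```python
import numpy as np

def cheb_diff_matrix(N):
    """Chebyshev-Gauss-Lobatto points on [-1, 1] and the first-order
    differentiation matrix (standard pseudospectral construction)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)          # CGL points
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # diagonal via negative row sums
    return D, x

# Sanity check: differentiate u(x) = x^3 at the collocation points.
D, x = cheb_diff_matrix(16)
print(np.max(np.abs(D @ x**3 - 3 * x**2)))            # machine-precision (spectral) accuracy
```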

In a simple connected graph $G=(V,E)$, a subset of vertices $S \subseteq V$ is a dominating set if every vertex $v \in V\setminus S$ is adjacent to some vertex $x \in S$. A number of real-life problems can be modeled using this problem, which is known to be among the difficult NP-hard problems in its class. We formulate the problem as an integer linear program (ILP), shown below, and compare its performance with two existing state-of-the-art exact algorithms, as well as with the exact implicit-enumeration and heuristic algorithms that we propose here. Our exact algorithm found optimal solutions much faster than the ILP and the above two exact algorithms on medium-density instances. For graphs of considerable size, our heuristic algorithm was much faster than both the ILP and our exact algorithm. It found an optimal solution for more than half of the tested instances and improved the previously known state-of-the-art solutions for almost all the tested benchmark instances. For the instances where the optimum was not found, it gave an average approximation error of $1.18$.
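A standard ILP formulation of minimum dominating set, consistent with the description above (the paper's exact model may differ in details such as strengthening constraints): $$ \min \sum_{v \in V} x_v \quad \text{subject to} \quad x_v + \sum_{u \in N(v)} x_u \ge 1 \;\; \forall v \in V, \qquad x_v \in \{0,1\} \;\; \forall v \in V, $$ where $N(v)$ denotes the neighbourhood of $v$ and $x_v = 1$ indicates that $v$ is placed in the dominating set.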

Existing traffic signal control systems rely on oversimplified rule-based methods, and even RL-based methods are often suboptimal and unstable. To address this, we propose a cooperative multi-objective architecture called Multi-Objective Multi-Agent Deep Deterministic Policy Gradient (MOMA-DDPG), which estimates multiple reward terms for traffic signal control optimization using age-decaying weights. Our approach involves two types of agents: one focuses on optimizing local traffic at each intersection, while the other aims to optimize global traffic throughput. We evaluate our method using real-world traffic data collected from an Asian country's traffic cameras. Despite the inclusion of a global agent, our solution remains decentralized as this agent is no longer necessary during the inference stage. Our results demonstrate the effectiveness of MOMA-DDPG, outperforming state-of-the-art methods across all performance metrics. Additionally, our proposed system minimizes both waiting time and carbon emissions. Notably, this paper is the first to link carbon emissions and global agents in traffic signal control.
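One possible reading of estimating multiple reward terms with age-decaying weights is sketched below, with per-term exponential decay over training episodes; the schedule and the choice of terms are assumptions, not the paper's definition.

```python
import numpy as np

def combined_reward(reward_terms, episode,
                    base_weights=(1.0, 1.0, 1.0), decays=(1.0, 0.999, 0.995)):
    """Combine several traffic-signal reward terms with weights that decay with
    the training episode ("age"); the exponential schedule and per-term rates
    are illustrative assumptions."""
    weights = np.asarray(base_weights) * np.asarray(decays) ** episode
    return float(weights @ np.asarray(reward_terms, dtype=float))

# Example terms: negative waiting time, throughput, negative estimated emissions.
print(combined_reward([-12.0, 8.0, -3.5], episode=0))
print(combined_reward([-12.0, 8.0, -3.5], episode=500))   # decayed terms contribute less
```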

Generative Adversarial Networks (GANs) can produce images of surprising complexity and realism, but they are generally modeled to sample from a single latent source, ignoring the explicit spatial interactions between the multiple entities that may be present in a scene. Capturing such complex interactions between different objects in the world, including their relative scaling, spatial layout, occlusion, and viewpoint transformation, is a challenging problem. In this work, we propose to model object composition in a GAN framework as a self-consistent composition-decomposition network. Our model is conditioned on object images drawn from their marginal distributions and generates a realistic image from their joint distribution by explicitly learning the possible interactions. We evaluate our model through qualitative experiments and user evaluations in both scenarios, when either paired or unpaired examples of the individual object images and the joint scenes are available during training. Our results reveal that, at test time, the learned model captures potential interactions between the two input object domains and composes them into plausible new scene instances.
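A minimal PyTorch sketch of the conditioning idea: a generator that maps a pair of object images, drawn from their marginal distributions, to one composite scene; the architecture, shapes, and the closing note about the decomposition branch are purely illustrative, not the paper's network.

```python
import torch
import torch.nn as nn

class Composer(nn.Module):
    """Toy composition generator: maps a pair of object images to one composite
    scene (architecture is purely illustrative)."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, obj_a, obj_b):
        return self.net(torch.cat([obj_a, obj_b], dim=1))   # condition on both objects

# Two batches of object images from their marginal distributions -> composite scenes.
obj_a, obj_b = torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64)
scene = Composer()(obj_a, obj_b)
print(scene.shape)   # torch.Size([4, 3, 64, 64])
# A discriminator on the scene and a decomposition network mapping it back to the
# two objects would close the self-consistent composition-decomposition loop.
```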
