
Regularized imaging spectroscopy was introduced for the construction of electron flux images at different energies from count visibilities recorded by the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI). In this work we extend this approach to data from the Spectrometer/Telescope for Imaging X-rays (STIX) on board the Solar Orbiter mission. Our aims are to demonstrate the feasibility of regularized imaging spectroscopy as a method for the analysis of STIX data, and to show how such analysis can lead to insights into the physical processes affecting the nonthermal electrons responsible for the hard X-ray emission observed by STIX. STIX records imaging data in an intrinsically different manner from RHESSI: rather than sweeping the angular frequency plane in a set of concentric circles (one circle per detector), STIX uses $30$ collimators, each corresponding to a specific angular frequency. In this paper we derive an appropriate modification of the previous computational approach for the analysis of the visibilities observed by STIX. This approach also allows the observed count data to be placed into non-uniformly spaced energy bins. We show that the regularized imaging spectroscopy approach is not only feasible for the analysis of STIX visibilities, but also more reliable than in the RHESSI case. Application of the technique to several well-observed flares reveals details of the variation of the electron flux spectrum throughout the flare sources. We conclude that the visibility-based regularized imaging spectroscopy approach is well suited to the analysis of STIX data. We also use STIX electron flux spectral images to track, for the first time, the behavior of the accelerated electrons along their path from the acceleration site in the solar corona toward the chromosphere.
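The core inversion step can be illustrated, in a deliberately simplified form, by zeroth-order Tikhonov regularization (a generic sketch: `K`, `v`, and `lam` are placeholder names, and the actual RHESSI/STIX pipelines use more elaborate constraints and parameter-selection rules):

```python
import numpy as np

def tikhonov_solve(K, v, lam):
    """Zeroth-order Tikhonov regularization: the minimizer of
    ||K x - v||^2 + lam * ||x||^2, i.e. the solution of the regularized
    normal equations (K^T K + lam I) x = K^T v. A simplified stand-in for
    the regularized inversion of count visibilities; real imaging
    spectroscopy adds positivity and smoothness constraints and chooses
    lam by discrepancy-type criteria."""
    n = K.shape[1]
    return np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ v)
```

On an ill-conditioned forward operator with noisy data, even a tiny `lam` typically reduces the reconstruction error by orders of magnitude relative to the unregularized solve.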


The multigrid V-cycle method is a popular method for solving systems of linear equations. It computes an approximate solution by using smoothing on the fine levels and solving a system of linear equations on the coarsest level. The choice of coarsest-level solver depends on the size and difficulty of the problem. If the size permits, it is typical to use a direct method based on LU or Cholesky decomposition. In settings with large coarsest-level problems, approximate solvers such as iterative Krylov subspace methods, or direct methods based on low-rank approximation, are often used. The accuracy of the coarsest-level solver is typically chosen based on the users' experience with the specific problems and methods. In this paper we present an approach to analyzing the effects of approximate coarsest-level solves on the convergence of the V-cycle method for symmetric positive definite problems. Using these results, we derive a coarsest-level stopping criterion through which we may control the difference between the approximation computed by a V-cycle method with an approximate coarsest-level solver and the approximation that would be computed if the coarsest-level problems were solved exactly. The coarsest-level stopping criterion may thus be set up so that the V-cycle method converges to a chosen finest-level accuracy in (nearly) the same number of V-cycle iterations as the V-cycle method with an exact coarsest-level solver. We also use the theoretical results to discuss how the convergence of the V-cycle method may be affected by the choice of tolerance in a coarsest-level stopping criterion based on the relative residual norm.
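A minimal sketch of the setting, for a 1D Poisson model problem: a recursive V-cycle whose coarsest level is solved either directly ("exactly") or approximately by CG run to a relative residual tolerance. The grid hierarchy, weighted-Jacobi smoother, and CG coarse solver are illustrative choices, not the paper's setup:

```python
import numpy as np

def poisson_1d(n):
    """1D Poisson matrix (tridiagonal, SPD) on n interior points of (0,1)."""
    h = 1.0 / (n + 1)
    return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def jacobi(A, b, x, sweeps, omega=2.0 / 3.0):
    """Weighted-Jacobi smoother."""
    d = np.diag(A)
    for _ in range(sweeps):
        x = x + omega * (b - A @ x) / d
    return x

def cg(A, b, rel_tol):
    """Plain CG run to a relative residual tolerance: the approximate
    coarsest-level solver."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rr = r @ r
    nb = np.linalg.norm(b)
    while np.sqrt(rr) > rel_tol * nb:
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rr_new = r @ r
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

def v_cycle(A_levels, b, x, level=0, nu=2, coarse_rel_tol=0.0):
    """One V(nu,nu)-cycle; coarse_rel_tol=0 means exact coarsest solve."""
    A = A_levels[level]
    if level == len(A_levels) - 1:                      # coarsest level
        if coarse_rel_tol == 0.0:
            return np.linalg.solve(A, b)                # exact (direct) solve
        return cg(A, b, coarse_rel_tol)                 # approximate solve
    x = jacobi(A, b, x, nu)                             # pre-smoothing
    r = b - A @ x
    rc = 0.25 * (r[0:-2:2] + 2.0 * r[1::2] + r[2::2])   # full-weighting restriction
    ec = v_cycle(A_levels, rc, np.zeros_like(rc), level + 1, nu, coarse_rel_tol)
    e = np.zeros_like(x)                                # linear interpolation
    e[1::2] = ec
    e[2:-1:2] = 0.5 * (ec[:-1] + ec[1:])
    e[0] = 0.5 * ec[0]
    e[-1] = 0.5 * ec[-1]
    return jacobi(A, b, x + e, nu)                      # post-smoothing
```

On this toy problem, a modest coarse tolerance (e.g. a relative residual of 1e-2) already gives essentially the same iteration count as the exact coarsest-level solve, which is the regime a well-chosen stopping criterion is meant to guarantee.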

We introduce a novel sampler called the energy based diffusion generator for generating samples from arbitrary target distributions. The sampling model employs a structure similar to a variational autoencoder, utilizing a decoder to transform latent variables from a simple distribution into random variables approximating the target distribution, and we design an encoder based on the diffusion model. Leveraging the powerful modeling capacity of the diffusion model for complex distributions, we can obtain an accurate variational estimate of the Kullback-Leibler divergence between the distributions of the generated samples and the target. Moreover, we propose a decoder based on generalized Hamiltonian dynamics to further enhance sampling performance. Through empirical evaluation, we demonstrate the effectiveness of our method across various complex distribution functions, showcasing its superiority compared to existing methods.

We consider wave scattering from a system of highly contrasting resonators with time-modulated material parameters. In this setting, the wave equation reduces to a system of coupled Helmholtz equations that models the scattering problem. We consider the one-dimensional setting. In order to understand the energy of the system, we prove a novel higher-order discrete, capacitance matrix approximation of the subwavelength resonant quasifrequencies. Further, we perform numerical experiments to support and illustrate our analytical results and show how periodically time-dependent material parameters affect the scattered wave field.

In this paper we analyze a space-time unfitted finite element method for the discretization of scalar surface partial differential equations on evolving surfaces. For higher order approximations of the evolving surface we use the technique of (iso)parametric mappings for which a level set representation of the evolving surface is essential. We derive basic results in which certain geometric characteristics of the exact space-time surface are related to corresponding ones of the numerical surface approximation. These results are used in a complete error analysis of a higher order space-time TraceFEM.

Recently, text-to-image (T2I) synthesis has undergone significant advancements, particularly with the emergence of Large Language Models (LLMs) and their extension to Large Vision Models (LVMs), which have greatly enhanced the instruction-following capabilities of traditional T2I models. Nevertheless, previous methods focus on improving generation quality while introducing unsafe factors into prompts. We find that appending specific camera descriptions to prompts can enhance safety performance. Consequently, we propose a simple and safe prompt engineering method (SSP) that improves image generation quality by providing optimal camera descriptions. Specifically, we create a dataset of original prompts drawn from multiple datasets. To select the optimal camera, we design an optimal-camera matching approach and implement a classifier that automatically performs the matching for original prompts. Appending the camera descriptions to the original prompts yields optimized prompts for subsequent LVM image generation. Experiments demonstrate that SSP improves semantic consistency by an average of 16% and safety metrics by 48.9% compared with existing methods.

We propose a semi-analytic Stokes expansion ansatz for finite-depth standing water waves and devise a recursive algorithm to solve the system of differential equations governing the expansion coefficients. We implement the algorithm on a supercomputer using arbitrary-precision arithmetic. The Stokes expansion introduces hyperbolic trigonometric terms that require exponentiation of power series. We handle this efficiently using Bell polynomials. Under mild assumptions on the fluid depth, we prove that there are no exact resonances, though small divisors may occur. Sudden changes in growth rate in the expansion coefficients are found to correspond to imperfect bifurcations observed when families of standing waves are computed using a shooting method. A direct connection between small divisors in the recursive algorithm and imperfect bifurcations in the solution curves is observed, where the small divisor excites higher-frequency parasitic standing waves that oscillate on top of the main wave. A 109th order Pad\'e approximation maintains 25--30 digits of accuracy on both sides of the first imperfect bifurcation encountered for the unit-depth problem. This suggests that even if the Stokes expansion is divergent, there may be a closely related convergent sequence of rational approximations.
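The power-series exponentiation mentioned above can be sketched with the standard recurrence that is equivalent to evaluating complete Bell polynomials: if $g = \exp(f)$ with $f = \sum_k f_k x^k$, then $g_0 = e^{f_0}$ and $n\,g_n = \sum_{k=1}^{n} k\,f_k\,g_{n-k}$. The sketch below uses floats; the computation described in the abstract would use arbitrary-precision arithmetic:

```python
import math

def series_exp(f, N):
    """Coefficients g[0..N] of exp(sum_k f[k] x^k), truncated at order N,
    via the recurrence n*g_n = sum_{k=1}^{n} k*f[k]*g[n-k], which is
    equivalent to evaluating complete Bell polynomials of the f[k]."""
    g = [math.exp(f[0])]
    for n in range(1, N + 1):
        s = sum(k * f[k] * g[n - k] for k in range(1, min(n, len(f) - 1) + 1))
        g.append(s / n)
    return g
```

The hyperbolic trigonometric terms then follow coefficient-wise, e.g. $\cosh f = (\exp f + \exp(-f))/2$. As a sanity check, `series_exp([0.0, 1.0], N)` reproduces the coefficients $1/n!$ of $e^x$.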

After introducing a bit-plane quantum representation for a multi-image, we present a novel way to encrypt and decrypt multiple images using a quantum computer. Our encryption scheme is based on a two-stage scrambling, applied on the one hand to the images and their bit planes and on the other hand to the pixel positions, each stage using quantum baker maps. The resulting quantum multi-image is then diffused with controlled CNOT gates, using a sine chaotification of a two-dimensional H\'enon map as well as Chebyshev polynomials. Decryption proceeds by applying all the inverse quantum gates in the reverse order.
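A classical-bit sketch of the diffusion stage may help: XOR with a chaotic keystream is the classical analogue of controlled-CNOT diffusion (a CNOT controlled by a key bit flips the target exactly like XOR, so applying it twice decrypts). The sine-chaotified H\'enon map below is a guessed variant for illustration only; the paper's exact map, parameters, and quantum circuits differ:

```python
import math

def sine_henon_keystream(n, a=1.4, b=0.3, x0=0.1, y0=0.2, burn=200):
    """Byte keystream from a sine-chaotified 2D Henon map: each classical
    Henon update is wrapped in sin(pi * .) to keep the orbit in [-1, 1].
    This variant and its parameters are illustrative guesses."""
    x, y = x0, y0
    stream = []
    for i in range(burn + n):                 # discard a transient of `burn` steps
        x, y = (math.sin(math.pi * (1.0 - a * x * x + y)),
                math.sin(math.pi * b * x))
        if i >= burn:
            stream.append(int((x + 1.0) * 5.0e5) % 256)
    return stream

def xor_diffuse(pixels, key):
    """XOR diffusion of a byte sequence; XOR is an involution, so the
    same call both encrypts and decrypts."""
    return [p ^ k for p, k in zip(pixels, key)]
```

Because XOR is its own inverse, diffusing twice with the same keystream recovers the original bytes, mirroring how decryption applies the inverse gates in reverse order.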

Numerous statistical methods have been developed to explore genomic imprinting and maternal effects, which are causes of parent-of-origin patterns in complex human diseases. Most of these methods, however, either model only one of these two confounded epigenetic effects, or make strong yet unrealistic assumptions about the population to avoid over-parameterization. A recent partial-likelihood method (LIME$_{\rm DSP}$) can identify both epigenetic effects based on discordant sib-pair family data without such assumptions. Theoretical and empirical studies have shown its validity and robustness. Since the LIME$_{\rm DSP}$ method obtains parameter estimates by maximizing a partial likelihood, it is interesting to compare its efficiency with that of a full-likelihood maximizer. To overcome the over-parameterization difficulty of the full likelihood, this study proposes a discordant sib-pair design based Monte Carlo Expectation Maximization (MCEM$_{\rm DSP}$) method to detect imprinting and maternal effects jointly. The unknown mating-type probabilities, which are nuisance parameters, are treated as latent variables in the EM algorithm, and Monte Carlo samples are used to numerically approximate the expectation function, which cannot be computed algebraically. Our simulation results show that, although the MCEM$_{\rm DSP}$ algorithm takes longer to compute, it generally detects both epigenetic effects with higher power, demonstrating that it can be a good complement to the LIME$_{\rm DSP}$ method.
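The MCEM mechanism, replacing an intractable E-step expectation by an average over sampled latent variables, can be sketched on a toy mixture model (here the expectation actually has a closed form, so the sampling is purely illustrative; the model and all parameter values are our choices, not those of MCEM$_{\rm DSP}$):

```python
import math
import random

def mcem_two_means(x, mu_init=(-1.0, 1.0), iters=25, m=100, seed=1):
    """Toy Monte Carlo EM for an equal-weight mixture of N(mu0,1), N(mu1,1).
    The E-step expectation is approximated by averaging over m sampled
    latent labels per data point, the same mechanism MCEM uses when the
    expectation cannot be computed algebraically."""
    rng = random.Random(seed)
    mu0, mu1 = mu_init
    for _ in range(iters):
        s = [0.0, 0.0]
        n = [0.0, 0.0]
        for xi in x:
            # posterior probability of label 1 under the current parameters
            t = (mu1 - mu0) * xi + 0.5 * (mu0 ** 2 - mu1 ** 2)
            p1 = 1.0 / (1.0 + math.exp(-t))
            for _ in range(m):                 # Monte Carlo E-step
                z = 1 if rng.random() < p1 else 0
                s[z] += xi
                n[z] += 1.0
        mu0 = s[0] / max(n[0], 1.0)            # M-step: weighted means
        mu1 = s[1] / max(n[1], 1.0)
    return mu0, mu1
```

On well-separated simulated clusters, the Monte Carlo noise from the sampled labels is small relative to the sampling variability of the data, so the estimates converge close to the true component means.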

To obtain strong convergence rates of numerical schemes, the overwhelming majority of existing works impose a global monotonicity condition on the coefficients of the SDEs. By contrast, most SDEs arising in applications do not have globally monotone coefficients. As a recent breakthrough, the authors of [Hutzenthaler, Jentzen, Ann. Probab., 2020] presented a perturbation theory for stochastic differential equations (SDEs), which is crucial to recovering strong convergence rates of numerical schemes in a non-globally monotone setting. However, only a convergence rate of order $1/2$ was obtained there for time-stepping schemes such as the stopped increment-tamed Euler-Maruyama (SITEM) method. That work raised, as an open problem, the natural question of whether a convergence rate higher than $1/2$ can be obtained when higher-order schemes are used. The present work addresses this problem. To this end, we develop new perturbation estimates that are able to reveal the order-one strong convergence of numerical methods. As a first application of the newly developed estimates, we establish the expected order-one pathwise uniformly strong convergence of the SITEM method for additive noise driven SDEs and multiplicative noise driven second-order SDEs with non-globally monotone coefficients. As a second application, we propose and analyze a positivity-preserving explicit Milstein-type method for the Lotka-Volterra competition model driven by multi-dimensional noise, for which a pathwise uniformly strong convergence rate of order one is recovered under mild assumptions. These results are new and significantly improve the existing theory. Numerical experiments are provided to confirm the theoretical findings.
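For orientation, a drift-tamed Euler-Maruyama step (a simpler relative of the SITEM scheme, which additionally tames the whole increment and stops near large excursions) can be sketched on a double-well SDE with the non-globally monotone drift $\mu(x) = x - x^3$:

```python
import math
import random

def tamed_euler(mu, sigma, x0, T, n, seed=0):
    """Drift-tamed Euler-Maruyama in the spirit of Hutzenthaler-Jentzen-
    Kloeden: the drift increment is divided by (1 + h*|mu(x)|), so each
    drift step has magnitude below 1 and the superlinear drift cannot
    cause the explosions that plain Euler-Maruyama is known to suffer."""
    rng = random.Random(seed)
    h = T / n
    x = x0
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(h))      # Brownian increment
        m = mu(x)
        x = x + h * m / (1.0 + h * abs(m)) + sigma(x) * dw
    return x
```

The taming factor tends to 1 as the step size shrinks, so on compact sets the scheme agrees with plain Euler-Maruyama to higher order; the analysis in the abstract concerns how much convergence order such schemes retain globally.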

Hashing has been widely used in approximate nearest neighbor search for large-scale database retrieval because of its computational and storage efficiency. Deep hashing, which devises convolutional neural network architectures to exploit and extract the semantic information or features of images, has received increasing attention recently. In this survey, several deep supervised hashing methods for image retrieval are evaluated, and I identify three main directions for deep supervised hashing methods, with several comments made at the end. Moreover, to break through the bottleneck of existing hashing methods, I propose a Shadow Recurrent Hashing (SRH) method as an attempt. Specifically, I devise a CNN architecture to extract the semantic features of images and design a loss function that encourages similar images to be projected close together. To this end, I propose a concept: the shadow of the CNN output. During the optimization process, the CNN output and its shadow guide each other so as to approach the optimal solution as closely as possible. Several experiments on the CIFAR-10 dataset show the satisfying performance of SRH.
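A generic pairwise (contrastive) loss of the kind used by deep supervised hashing methods, pulling similar pairs together and pushing dissimilar pairs at least a margin apart in the continuous code space, can be sketched as follows (a common formulation for illustration, not the specific SRH objective):

```python
import numpy as np

def pairwise_hash_loss(codes, sim, margin=2.0):
    """Contrastive pairwise loss over continuous hash codes.
    codes: (n, k) real-valued network outputs (pre-binarization).
    sim:   (n, n) 0/1 similarity matrix (1 = same class).
    Similar pairs are penalized by their squared distance; dissimilar
    pairs are penalized only when closer than `margin`."""
    diff = codes[:, None, :] - codes[None, :, :]
    d = np.linalg.norm(diff, axis=-1)                      # pairwise distances
    loss = sim * d**2 + (1 - sim) * np.maximum(0.0, margin - d)**2
    iu = np.triu_indices(len(codes), k=1)                  # each pair once
    return loss[iu].mean()
```

In a full pipeline this loss would be applied to mini-batches of CNN outputs, with the binary codes obtained afterwards by taking signs.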
