
We present a study of a standard plasma physics test, Landau damping, using the Particle-In-Cell (PIC) algorithm. Landau damping is the collisionless damping of small-amplitude oscillations in a plasma. In the PIC method, a hybrid discretization is constructed with a grid of finitely supported basis functions to represent the electric, magnetic, and/or gravitational fields, and a distribution of delta functions to represent the particle field. Approximations to the dispersion relation are found to be inadequate for accurately calculating the electric-field frequency and damping rate when parameters of the physical system, such as the plasma frequency or thermal velocity, are varied. We present a full derivation and numerical solution of the dispersion relation, and verify the PETSc-PIC numerical solutions of the Vlasov-Poisson system over a large range of wave numbers and charge densities.
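
As a concrete illustration of the kind of computation involved, the following is a minimal sketch of numerically solving the kinetic dispersion relation for Langmuir waves, assuming a Maxwellian equilibrium; the parameter values, normalisations, and root-finding strategy are illustrative and not taken from the paper.

```python
import numpy as np
from scipy.optimize import root
from scipy.special import wofz  # Faddeeva function w(z)

def Z(zeta):
    """Plasma dispersion function, Z(zeta) = i*sqrt(pi)*w(zeta)."""
    return 1j * np.sqrt(np.pi) * wofz(zeta)

def dielectric(omega_parts, k, v_th, omega_p):
    """Real and imaginary parts of the Langmuir-wave dielectric function."""
    omega = omega_parts[0] + 1j * omega_parts[1]
    zeta = omega / (np.sqrt(2.0) * k * v_th)
    lam_D = v_th / omega_p                        # Debye length
    eps = 1.0 + (1.0 / (k * lam_D) ** 2) * (1.0 + zeta * Z(zeta))
    return [eps.real, eps.imag]

k, v_th, omega_p = 0.5, 1.0, 1.0                  # assumed normalised parameters
# Bohm-Gross frequency as initial guess for the real part; small damping guess.
guess = [np.sqrt(omega_p**2 + 3.0 * (k * v_th) ** 2), -0.1]
sol = root(dielectric, guess, args=(k, v_th, omega_p))
omega_r, gamma = sol.x
print(f"frequency = {omega_r:.4f}, damping rate = {gamma:.4f}")
```

For this choice of $k\lambda_D = 0.5$ the complex root lands near the textbook value $\omega \approx 1.416 - 0.153i$, which is the sort of reference solution a PIC run can be verified against.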

Related content

This paper is concerned with the multi-frequency factorization method for imaging the support of a wave-number-dependent source function. It is supposed that the source function is given by the Fourier transform of some time-dependent source with an a priori given radiating period. Using multi-frequency far-field data at a fixed observation direction, we provide a computational criterion for characterizing the smallest strip containing the support and perpendicular to the observation direction. Far-field data from sparse observation directions can be used to recover a $\Theta$-convex polygon containing the support. The inversion algorithm is proven valid even with multi-frequency near-field data in three dimensions. Connections to time-dependent inverse source problems are discussed in the near-field case. We also comment on possible extensions to source functions with two disconnected supports. Numerical tests in both two and three dimensions are implemented to show the effectiveness and feasibility of the approach. This paper provides numerical analysis for a frequency-domain approach to recovering the support of an admissible class of time-dependent sources.
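
To make the link between multi-frequency data at one observation direction and the strip explicit, here is a heavily simplified one-dimensional caricature (not the paper's factorization-method indicator): at a fixed direction, the far-field pattern as a function of the wave number is, up to constants, the Fourier transform of the source projected onto that direction, so a band-limited inversion of the data reveals the smallest strip. The grid sizes and test source below are assumptions for illustration.

```python
import numpy as np

t = np.linspace(-10, 10, 1024)                    # coordinate along direction d
dt = t[1] - t[0]
g = ((t > 1.0) & (t < 3.5)).astype(float)         # projected source, support (1, 3.5)

k = np.linspace(-20, 20, 801)                     # available frequency band
dk = k[1] - k[0]
# simulated multi-frequency "far-field" data: Fourier transform of g
data = (g * np.exp(-1j * np.outer(k, t))).sum(axis=1) * dt

# band-limited inversion of the data back to the spatial coordinate
rec = ((data * np.exp(1j * np.outer(t, k))).sum(axis=1) * dk / (2 * np.pi)).real
strip = t[np.abs(rec) > 0.5 * np.abs(rec).max()]
print(f"estimated strip: [{strip.min():.2f}, {strip.max():.2f}]")
```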

This paper studies the performance of a transmission and reception scheme for massive access under two practical challenges. The first is the near-far problem: an access point often receives signals from different transmitting devices at vastly different signal strengths. The second is that the signals from different devices may be subject to arbitrary, analog, and heterogeneous delays. This paper considers a fully asynchronous model, which is more realistic than the frame- or symbol-level synchrony assumed in most existing work. A main theorem characterizes the asymptotic scaling of the codelength with the number of devices, a device delay upper bound, and the dynamic range of received signal strengths across devices. The scaling result suggests potential advantages of grouping devices with similar received signal strengths and letting the groups use time sharing. The performance of the proposed scheme is evaluated using simulations with and without grouping.
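
The grouping idea can be made concrete with a small sketch: sort devices by received signal strength and cut the sorted list whenever the within-group dynamic range would exceed a threshold, then serve the groups in separate time-sharing slots. The threshold, the strength distribution, and the greedy cutting rule are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
strengths_db = np.sort(rng.uniform(-90.0, -30.0, size=20))  # received powers, dBm

def group_by_dynamic_range(strengths_db, max_range_db=15.0):
    """Greedy cut: start a new group once the dynamic range would be exceeded."""
    groups, current = [], [strengths_db[0]]
    for s in strengths_db[1:]:
        if s - current[0] <= max_range_db:
            current.append(s)
        else:
            groups.append(current)
            current = [s]
    groups.append(current)
    return groups

for slot, g in enumerate(group_by_dynamic_range(strengths_db)):
    print(f"slot {slot}: {len(g)} devices, dynamic range {g[-1] - g[0]:.1f} dB")
```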

Evolutionary computation has shown its superiority in dynamic optimization, but for (dynamic) time-linkage problems, theoretical studies have revealed possible weaknesses of evolutionary computation. Since the time-linkage problem analyzed so far only considers the influence of an extremely strong negative time-linkage effect, it remains unclear whether the weakness also appears in problems with more general time-linkage effects. Moreover, understanding in depth the relationship between the time-linkage effect and algorithmic features is important for building up our knowledge of which algorithmic features are good for which kinds of problems. In this paper, we analyze the general time-linkage effect and consider the time-linkage OneMax with general weights, whose absolute value reflects the strength of the effect and whose sign reflects its positive or negative influence. We prove that, except for some small and positive time-linkage effects (that is, for weights $0$ and $1$), randomized local search (RLS) and the (1+1) EA fail with positive probability to converge to the global optimum. More precisely, for negative time-linkage effects (negative weights), neither algorithm efficiently reaches the global optimum, and the probability of failing to converge to the global optimum is at least $1-o(1)$. For not-so-small positive time-linkage effects (positive weights greater than $1$), this failure probability is at most $c+o(1)$, where $c$ is a constant strictly less than $1$.
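
For concreteness, here is a minimal sketch of randomized local search on one plausible formalisation of a time-linkage OneMax: the fitness of the current bit string is assumed to be its number of ones plus a weight $w$ times the first bit of the previously queried solution, so the objective at each step depends on the search history. The exact fitness function analysed in the paper may differ.

```python
import random

def fitness(x_prev, x, w):
    """Assumed time-linkage OneMax: OneMax(x) plus w times the first bit
    of the previously queried solution."""
    return sum(x) + w * x_prev[0]

def rls(n=50, w=-50, steps=10_000, seed=0):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    x_prev = x[:]
    fit = fitness(x_prev, x, w)
    for _ in range(steps):
        y = x[:]
        y[rng.randrange(n)] ^= 1                  # flip exactly one bit
        new_fit = fitness(x, y, w)                # the history is now x itself
        if new_fit >= fit:                        # elitist acceptance
            x_prev, x, fit = x, y, new_fit
    return sum(x)                                 # equals n iff all-ones reached

print(rls(w=-50), rls(w=2))
```

Note that the acceptance step compares fitness values computed against different histories; this shifting landscape is exactly what makes time-linkage problems hard for elitist algorithms.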

At STOC 2002, Eiter, Gottlob, and Makino presented a technique called ordered generation that yields an $n^{O(d)}$-delay algorithm listing all minimal transversals of an $n$-vertex hypergraph of degeneracy $d$, for an appropriate definition of degeneracy. More recently, at IWOCA 2019, Conte, Kant\'e, Marino, and Uno asked whether, even for a more restrictive notion of degeneracy, this XP-delay algorithm parameterized by $d$ could be made FPT-delay parameterized by $d$ and the maximum degree $\Delta$, i.e., an algorithm with delay $f(d,\Delta)\cdot n^{O(1)}$ for some computable function $f$. We answer this question in the affirmative whenever the hypergraph corresponds to the closed neighborhoods of a graph, i.e., we show that the intimately related problem of enumerating minimal dominating sets in graphs admits an FPT-delay algorithm parameterized by the degeneracy and the maximum degree.
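
To make the enumerated object concrete, the following brute-force sketch lists all minimal dominating sets of a toy graph by checking, for each vertex subset, that the closed neighbourhoods of its members cover the graph and that no proper subset also does. The paper's FPT-delay algorithm is of course far more refined; the graph here is an assumed example.

```python
from itertools import combinations

def dominates(adj, S):
    """True if every vertex lies in the closed neighbourhood of some v in S."""
    covered = set()
    for v in S:
        covered |= {v} | adj[v]
    return covered == set(adj)

def is_minimal(adj, S):
    return dominates(adj, S) and all(not dominates(adj, S - {v}) for v in S)

adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}      # the path on four vertices
minimal_ds = [set(S) for r in range(1, len(adj) + 1)
              for S in combinations(adj, r) if is_minimal(adj, set(S))]
print(minimal_ds)
```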

Many modern devices, including critical infrastructures, depend on the reliable operation of electrical power conversion systems. The small size and versatility of switched-mode power converters have resulted in their widespread adoption. Whereas transformer-based systems passively convert voltage, switched-mode converters feature an actively regulated feedback loop, which relies on accurate sensor measurements. Previous academic work has shown that many types of sensors are vulnerable to Intentional Electromagnetic Interference (IEMI) attacks, and it has been postulated that power converters, too, are affected. In this paper, we present the first detailed study of switched-mode power converters under IEMI attacks targeting their voltage and current sensors. We present a theoretical framework for evaluating IEMI attacks against feedback-based power supplies in the general case. We experimentally validate our theoretical predictions by analyzing multiple AC-DC and DC-DC converters, automotive-grade current sensors, and dedicated battery chargers, and demonstrate the systematic vulnerability of all examined categories under real-world conditions. Finally, we demonstrate that sensor attacks on power converters can cause permanent damage to Li-Ion batteries during the charging process.
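
The dependence of the feedback loop on sensor fidelity can be illustrated with a toy discrete-time model: a PI controller regulates the measured output voltage, so a constant offset injected into the sensor reading is transferred, with opposite sign, onto the true output. All plant and controller constants below are illustrative assumptions, not measurements from the paper.

```python
def simulate(v_ref=5.0, sensor_bias=0.0, steps=5000, dt=1e-5):
    """Toy buck-style converter with a PI loop acting on a *measured* voltage."""
    v_in, tau = 12.0, 1e-3          # assumed supply voltage, plant time constant
    kp, ki = 0.5, 200.0             # assumed PI gains
    v_out, integ = 0.0, 0.0
    for _ in range(steps):
        measured = v_out + sensor_bias            # IEMI shifts what the loop sees
        err = v_ref - measured
        integ += err * dt
        duty = min(max(kp * err + ki * integ, 0.0), 1.0)
        v_out += (duty * v_in - v_out) * dt / tau  # first-order plant response
    return v_out

print(f"no attack: {simulate():.2f} V")                    # regulates near 5 V
print(f"+1 V bias: {simulate(sensor_bias=1.0):.2f} V")     # settles near 4 V
```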

In causal inference, sensitivity analysis is important to assess the robustness of study conclusions to key assumptions. We perform sensitivity analysis of the assumption that missing outcomes are missing completely at random. We follow a Bayesian approach, which is nonparametric for the outcome distribution and can be combined with an informative prior on the sensitivity parameter. We give insight into the posterior and provide theoretical guarantees in the form of Bernstein-von Mises theorems for estimating the mean outcome. We study different parametrisations of the model involving Dirichlet process priors on the distribution of the outcome and on the distribution of the outcome conditional on the subject being treated. We show that these parametrisations incorporate a prior on the sensitivity parameter in different ways and discuss their relative merits. We also present a simulation study, showing the performance of the methods in finite-sample scenarios.
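
As a rough sketch of this style of analysis, the following uses a Bayesian bootstrap as a simple stand-in for a Dirichlet process posterior on the observed-outcome distribution, combined with an informative normal prior on a sensitivity parameter that shifts the mean of the missing outcomes away from its missing-completely-at-random value. The data, the prior, and the shift parametrisation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
y_obs = rng.normal(2.0, 1.0, size=80)        # observed outcomes
n_mis = 20                                   # number of missing outcomes

draws = []
for _ in range(4000):
    w = rng.dirichlet(np.ones(len(y_obs)))   # Bayesian bootstrap weights
    mean_obs = np.sum(w * y_obs)
    delta = rng.normal(0.0, 0.5)             # informative prior on sensitivity
    mean_mis = mean_obs + delta              # assumed departure from MCAR
    p_obs = len(y_obs) / (len(y_obs) + n_mis)
    draws.append(p_obs * mean_obs + (1 - p_obs) * mean_mis)

print(np.mean(draws), np.quantile(draws, [0.025, 0.975]))
```

Setting the prior standard deviation of `delta` to zero recovers the MCAR analysis, so widening it directly exhibits how much the posterior for the mean outcome depends on the missingness assumption.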

This paper introduces a novel Bayesian approach to detect changes in the variance of a Gaussian sequence model, focusing on quantifying the uncertainty in the change point locations and providing a scalable algorithm for inference. Such a measure of uncertainty is necessary when change point methods are deployed in sensitive applications, for example, when one is interested in determining whether an organ is viable for transplant. The key to our proposal is framing the problem as a product of multiple single changes in the scale parameter. We fit the model through an iterative procedure similar to what is done for additive models. The novelty is that each iteration returns a probability distribution on time instances, which captures the uncertainty in the change point location. Leveraging a recent result in the literature, we show that our proposal is a variational approximation of the exact model posterior distribution. We study the algorithm's convergence and the change point localization rate. Extensive simulation studies illustrate the performance of our method and the possibility of generalizing it to more complex data-generating mechanisms. We apply the new model to data from an experiment involving a novel technique for assessing the viability of a liver, and to oceanographic data.
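
The single-change building block admits a short, exact illustration: with zero-mean Gaussian segments and conjugate inverse-gamma priors on the two segment variances, the marginal likelihood of each segment is available in closed form, and normalising over candidate change locations yields precisely the kind of probability distribution on time instances described above. The priors and data below are illustrative assumptions; the paper's method composes many such factors variationally.

```python
import numpy as np
from scipy.special import gammaln

def log_marginal(x, a=2.0, b=2.0):
    """log m(x) for x_i ~ N(0, s2) with s2 ~ InvGamma(a, b), s2 integrated out."""
    n, S = len(x), np.sum(x**2)
    return (a * np.log(b) - gammaln(a) + gammaln(a + n / 2)
            - (a + n / 2) * np.log(b + S / 2) - (n / 2) * np.log(2 * np.pi))

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1.0, 100), rng.normal(0, 3.0, 100)])

taus = np.arange(10, len(x) - 10)             # candidate change locations
logp = np.array([log_marginal(x[:t]) + log_marginal(x[t:]) for t in taus])
post = np.exp(logp - logp.max())
post /= post.sum()                            # distribution over locations
print("posterior mode:", taus[np.argmax(post)])
```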

This letter proposes a new user-cooperative offloading protocol, based on user reciprocity, for backscatter communication (BackCom)-aided mobile edge computing systems with efficient computation. Its essence is that each user switches alternately between the active and the BackCom modes across time slots, so that in each slot one user works in the active mode while the other works in the BackCom mode. In particular, the user in the BackCom mode can always exploit the signal transmitted by the user in the active mode for additional data transmission in a spectrum-sharing manner. To evaluate the proposed protocol, a computation efficiency (CE) maximization problem is formulated by jointly optimizing power control, time scheduling, reflection coefficient (RC) adjustment, and computing frequency allocation, while satisfying physical constraints on the maximum energy budget, the computing frequency threshold, the minimum number of computed bits, and the harvested energy threshold. To solve this non-convex problem, Dinkelbach's method and the quadratic transform are first employed to transform the complex fractional forms into linear ones. Then, an iterative algorithm is designed by decomposing the resulting problem to obtain a suboptimal solution. Closed-form solutions for the transmit power, the RC, and the local computing frequency are provided for further insight. Besides, the analytical performance gain of the reciprocal mode is also derived. Simulation results demonstrate that the proposed scheme outperforms benchmark schemes in terms of CE.
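
Since Dinkelbach's method carries much of the algorithmic load here, below is a minimal sketch of it on a toy scalar computation-efficiency problem: maximising a rate-like numerator over an energy-like denominator by repeatedly solving the parametrised subtraction problem. The toy objective and the grid-search inner solver are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def f(p):   # computed-bits-style numerator (assumed rate model)
    return np.log2(1.0 + 10.0 * p)

def g(p):   # consumed-energy denominator: transmit power plus circuit power
    return p + 0.1

def dinkelbach(p_grid, tol=1e-9, max_iter=100):
    lam = 0.0
    for _ in range(max_iter):
        # inner problem: maximize f(p) - lam * g(p) (here by grid search)
        vals = f(p_grid) - lam * g(p_grid)
        p_star = p_grid[np.argmax(vals)]
        if f(p_star) - lam * g(p_star) < tol:    # F(lam) = 0 at the optimum
            return p_star, lam
        lam = f(p_star) / g(p_star)              # update the ratio parameter
    return p_star, lam

p_grid = np.linspace(1e-4, 1.0, 10_000)
p_star, ce = dinkelbach(p_grid)
print(f"optimal power ~ {p_star:.4f}, computation efficiency ~ {ce:.4f}")
```

The update rule is the standard one: each iterate of $\lambda$ equals the ratio achieved by the previous inner maximiser, and the sequence converges monotonically to the maximal ratio.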

Diffusion models (DMs) can generate realistic images with text guidance using large-scale datasets. However, they demonstrate limited controllability in the output space of the generated images. We propose a novel learning method for text-guided image editing, namely \texttt{iEdit}, that generates images conditioned on a source image and a textual edit prompt. As a fully-annotated dataset with target images does not exist, previous approaches perform subject-specific fine-tuning at test time or adopt contrastive learning without a target image, leading to issues in preserving the fidelity of the source image. We propose to automatically construct a dataset derived from LAION-5B, containing pseudo-target images paired with descriptive edit prompts, given input image-caption pairs. This dataset gives us the flexibility to introduce a weakly supervised loss function, generating the pseudo-target image from the latent noise of the source image conditioned on the edit prompt. To encourage localised editing and to preserve or modify spatial structures in the image, we propose a loss function that uses segmentation masks to guide the editing during training and, optionally, at inference. Our model is trained on the constructed dataset of 200K samples with constrained GPU resources. It shows favourable results against its counterparts in terms of image fidelity and CLIP alignment score, and qualitatively for editing both generated and real images.
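
A plausible shape for such a weakly supervised, mask-guided objective is sketched below in PyTorch: a standard denoising loss on the pseudo-target latents, with per-pixel weights boosted inside the segmentation mask so that the edit stays localised. The network signature, the scheduler representation, and the weighting scheme are assumptions for illustration and may differ from the paper's actual loss.

```python
import torch
import torch.nn.functional as F

def masked_diffusion_loss(unet, x0_target, mask, text_emb, alphas_cumprod):
    """x0_target: pseudo-target latents (B,C,H,W); mask: (B,1,H,W) in [0,1];
    alphas_cumprod: 1-D tensor of cumulative noise-schedule products."""
    b = x0_target.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (b,), device=x0_target.device)
    alpha_bar = alphas_cumprod[t].view(b, 1, 1, 1)
    noise = torch.randn_like(x0_target)
    # forward diffusion: noisy latent at the sampled timestep
    x_t = alpha_bar.sqrt() * x0_target + (1 - alpha_bar).sqrt() * noise
    pred = unet(x_t, t, text_emb)                 # predicts the added noise
    per_pixel = F.mse_loss(pred, noise, reduction="none")
    # weight the edit region more heavily to keep editing localised (assumed)
    weights = 1.0 + mask
    return (weights * per_pixel).mean()
```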

Minimum flow decomposition (MFD) is the NP-hard problem of finding a smallest decomposition of a network flow/circulation $X$ on a directed graph $G$ into weighted source-to-sink paths whose superposition equals $X$. We show that, for acyclic graphs, considering the \emph{width} of the graph (the minimum number of paths needed to cover all of its edges) yields advances in our understanding of its approximability. For the version of the problem that uses only non-negative weights, we identify and characterise a new class of \emph{width-stable} graphs, for which a popular heuristic is an $O(\log |X|)$-approximation ($|X|$ being the total flow of $X$), and strengthen its worst-case approximation ratio from $\Omega(\sqrt{m})$ to $\Omega(m / \log m)$ for sparse graphs, where $m$ is the number of edges in the graph. We also study a new problem on graphs with cycles, Minimum Cost Circulation Decomposition (MCCD), and show that it generalises MFD through a simple reduction. For the version that also allows negative weights, we give a $(\lceil \log \Vert X \Vert \rceil + 1)$-approximation ($\Vert X \Vert$ being the maximum absolute value of $X$ on any edge) using a power-of-two approach, combined with parity-fixing arguments and a decomposition of unitary circulations ($\Vert X \Vert \leq 1$), using a generalised notion of width for this problem. Finally, we disprove a conjecture of Kloster et al. [ALENEX 2018] about the linear independence of minimum (non-negative) flow decompositions, but show that its useful implication (polynomial-time assignment of weights to a given set of paths to decompose a flow) holds for the negative version.
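
For reference, a classic greedy heuristic for the non-negative version works as sketched below: repeatedly extract a source-to-sink path of maximum bottleneck value and subtract it from the flow. This illustrates the style of heuristic whose approximation behaviour is analysed above; the toy graph, its topological order, and the flow values are assumptions.

```python
def widest_path(flow, s, t, topo):
    """Max-bottleneck s-t path over edges with positive remaining flow."""
    best = {v: 0.0 for v in topo}
    best[s] = float("inf")
    pred = {}
    for u in topo:                        # DAG: process in topological order
        for v, f in flow[u].items():
            w = min(best[u], f)
            if f > 0 and w > best[v]:
                best[v], pred[v] = w, u
    if best[t] == 0.0:
        return None, 0.0
    path, v = [t], t
    while v != s:
        v = pred[v]
        path.append(v)
    return path[::-1], best[t]

def greedy_decompose(flow, s, t, topo):
    paths = []
    while True:
        p, w = widest_path(flow, s, t, topo)
        if p is None:
            return paths
        for u, v in zip(p, p[1:]):
            flow[u][v] -= w
        paths.append((p, w))

flow = {"s": {"a": 5, "b": 3}, "a": {"t": 5}, "b": {"t": 3}, "t": {}}
print(greedy_decompose(flow, "s", "t", ["s", "a", "b", "t"]))
# -> [(['s', 'a', 't'], 5), (['s', 'b', 't'], 3)]
```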
