
In this paper, we delve into the capacity problem of the additive vertically-drifted first arrival position noise channel, which models a communication system in which the position of molecules is harnessed to convey information. Drawing inspiration from the principles governing vector Gaussian interference channels, we examine this capacity problem under a covariance constraint on the input distributions. We offer analytical upper and lower bounds on this capacity in a three-dimensional spatial setting, obtained through a careful analysis of the characteristic function coupled with an investigation of the associated stability properties. The results of this study contribute to the ongoing effort to understand the fundamental limits of molecular communication systems.
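
A covariance constraint on the input is typically formalized as below (a standard formulation from the covariance-constrained capacity literature, which we assume matches the paper's setup; $\mathsf{S}$ denotes the constraint matrix):

% Capacity under a covariance constraint (assumed generic formulation):
% maximize mutual information over input laws whose covariance matrix is
% dominated, in the positive semidefinite order, by a fixed matrix S.
\[
  C(\mathsf{S}) \;=\; \sup_{p_{X}\,:\,\mathrm{Cov}(X)\,\preceq\,\mathsf{S}} I(X;Y).
\]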

Related Content

A query game is a pair of a set $Q$ of queries and a set $\mathcal{F}$ of functions, or codewords, $f:Q\rightarrow \mathbb{Z}$. We think of this as a two-player game. One player, Codemaker, picks a hidden codeword $f\in \mathcal{F}$. The other player, Codebreaker, then tries to determine $f$ by asking a sequence of queries $q\in Q$, after each of which Codemaker must respond with the value $f(q)$. The goal of Codebreaker is to uniquely determine $f$ using as few queries as possible. Two classical examples of such games are coin-weighing with a spring scale and Mastermind, which are of interest both as recreational games and for their connection to information theory. In this paper, we present a general framework for finding short solutions to query games. As applications, we give new self-contained proofs of the query complexity of variations of the coin-weighing problem, and prove new results showing that the deterministic query complexity of Mastermind with $n$ positions and $k$ colors is $\Theta(n \log k/ \log n + k)$ if only black-peg information is provided, and $\Theta(n \log k / \log n + k/n)$ if both black- and white-peg information is provided. In the deterministic setting, these are the first solutions for Mastermind that are optimal up to constant factors for any $k\geq n^{1-o(1)}$.
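
For readers unfamiliar with the terminology, black pegs count exact position matches and white pegs count additional colour matches in the wrong positions; a minimal sketch of this feedback rule (our own illustration, not part of the paper) is:

from collections import Counter

def mastermind_feedback(code, guess):
    # Black pegs: positions where the guess agrees with the hidden code exactly.
    black = sum(c == g for c, g in zip(code, guess))
    # White pegs: remaining colour matches ignoring position
    # (multiset intersection minus the exact matches).
    common = sum((Counter(code) & Counter(guess)).values())
    return black, common - black

# n = 4 positions, colours drawn from {1, ..., k}.
print(mastermind_feedback([1, 2, 3, 4], [4, 3, 3, 1]))  # (1, 2)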

This work, for the first time, introduces two constant-factor approximation algorithms with linear query complexity for non-monotone submodular maximization over a ground set of size $n$ subject to a knapsack constraint, $\mathsf{DLA}$ and $\mathsf{RLA}$. $\mathsf{DLA}$ is a deterministic algorithm that provides an approximation factor of $6+\epsilon$, while $\mathsf{RLA}$ is a randomized algorithm with an approximation factor of $4+\epsilon$. Both run in $O(n \log(1/\epsilon)/\epsilon)$ query complexity. The key to obtaining a constant approximation ratio with a linear number of queries lies in: (1) dividing the ground set into two appropriate subsets to find a near-optimal solution over these subsets with linear queries, and (2) combining a threshold greedy with properties of two disjoint sets, or with a random selection process, to improve solution quality. In addition to the theoretical analysis, we have evaluated our proposed solutions on three applications: Revenue Maximization, Image Summarization, and Maximum Weighted Cut, showing that our algorithms not only return results comparable to state-of-the-art algorithms but also require significantly fewer queries.
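
To give a flavour of the threshold-greedy ingredient mentioned above, the following sketch (our own simplified illustration, not the paper's $\mathsf{DLA}$ or $\mathsf{RLA}$) adds an element whenever its marginal gain per unit cost clears a threshold and the knapsack budget still permits it:

def threshold_greedy_pass(elements, f, cost, budget, tau):
    # Illustrative single pass of a density-threshold greedy rule: keep an element
    # if its marginal gain per unit cost is at least tau and it fits the budget.
    selected, spent = [], 0.0
    for e in elements:
        gain = f(selected + [e]) - f(selected)
        if cost[e] > 0 and spent + cost[e] <= budget and gain >= tau * cost[e]:
            selected.append(e)
            spent += cost[e]
    return selected

# Toy usage: coverage-style objective over subsets of {0, 1, 2, 3}.
groups = {0: {1, 2}, 1: {2, 3}, 2: {3}, 3: {1, 2, 3, 4}}
f = lambda S: len(set().union(*(groups[e] for e in S))) if S else 0
print(threshold_greedy_pass([3, 0, 1, 2], f, {0: 1, 1: 1, 2: 1, 3: 2}, budget=3, tau=1.0))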

The central limit theorem (CLT) is one of the most fundamental results in probability, and establishing its rate of convergence has been a key question since the 1940s. For independent random variables, a series of recent works established optimal error bounds under the Wasserstein-$p$ distance (with $p\geq 1$). In this paper, we extend those results to locally dependent random variables, which include $m$-dependent random fields and U-statistics. Under conditions on the moments and the dependency neighborhoods, we derive optimal rates in the CLT for the Wasserstein-$p$ distance. Our proofs rely on approximating the empirical average of dependent observations by the empirical average of i.i.d. random variables. To do so, we expand the Stein equation to arbitrary orders by adapting Stein's dependency neighborhood method. Finally, we illustrate the applicability of our results by obtaining efficient tail bounds.
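
As a quick numerical illustration of the setting (our own toy example of local dependence, using SciPy's one-dimensional Wasserstein distance, not the paper's bounds), one can watch the Wasserstein-1 error of standardized sums of 1-dependent variables shrink as $n$ grows:

import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

def standardized_sum_1dep(n, n_samples):
    # X_i = (U_i + U_{i+1}) / sqrt(2) with i.i.d. uniform U_i of mean 0 and variance 1,
    # so the X_i are 1-dependent; Var(X_1 + ... + X_n) = 2n - 1.
    u = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=(n_samples, n + 1))
    x = (u[:, :-1] + u[:, 1:]) / np.sqrt(2.0)
    return x.sum(axis=1) / np.sqrt(2.0 * n - 1.0)

for n in (10, 100, 1000):
    w1 = wasserstein_distance(standardized_sum_1dep(n, 20000),
                              rng.standard_normal(20000))
    print(n, round(w1, 4))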

Electroencephalogram (EEG) signals reflect brain activity across different brain states, characterized by distinct frequency distributions. Using multifractal analysis tools, we investigate the scaling behaviour of different classes of EEG signals and artifacts. We show that brain states associated with sleep and general anaesthesia are not in general characterized by scale invariance. The lack of scale invariance motivates the development of artifact removal algorithms capable of operating independently at each scale. We examine here the properties of the wavelet quantile normalization (WQN) algorithm, a recently introduced adaptive method for real-time correction of transient artifacts in EEG signals. We establish general results regarding the regularization properties of the WQN algorithm, showing how it can eliminate singularities introduced by artifacts, and we compare it to traditional thresholding algorithms. Furthermore, we show that the algorithm's performance is independent of the wavelet basis. We finally examine its continuity and boundedness properties and illustrate its distinctive non-local action on the wavelet coefficients through pathological examples.
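
As a rough sketch of the quantile-mapping idea behind WQN (our own simplified rendering using PyWavelets, not the authors' exact implementation), each detail level of an artifacted epoch is remapped onto the empirical quantiles of a clean reference epoch, without ever amplifying coefficients:

import numpy as np
import pywt

def wqn_sketch(signal, reference, wavelet="sym4", level=5):
    # Illustrative sketch of wavelet quantile normalization, not the published algorithm.
    cs = pywt.wavedec(signal, wavelet, level=level)      # artifacted epoch
    cr = pywt.wavedec(reference, wavelet, level=level)   # clean reference epoch
    out = [cs[0]]                                        # keep approximation coefficients
    for ds, dr in zip(cs[1:], cr[1:]):
        # Rank each coefficient magnitude, map it to the corresponding quantile
        # of the reference magnitudes; keep signs and never amplify.
        ranks = np.argsort(np.argsort(np.abs(ds)))
        ref_q = np.quantile(np.abs(dr), (ranks + 0.5) / len(ds))
        out.append(np.sign(ds) * np.minimum(np.abs(ds), ref_q))
    return pywt.waverec(out, wavelet)[: len(signal)]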

The covariance of two random variables measures the average joint deviations from their respective means. We generalise this well-known measure by replacing the means with other statistical functionals such as quantiles, expectiles, or thresholds. Deviations from these functionals are defined via generalised errors, often induced by identification or moment functions. As a normalised measure of dependence, a generalised correlation is constructed. Replacing the common Cauchy-Schwarz normalisation by a novel Fr\'echet-Hoeffding normalisation, we obtain attainability of the entire interval $[-1, 1]$ for any given marginals. We uncover favourable properties of these new dependence measures. The families of quantile and threshold correlations give rise to function-valued distributional correlations, exhibiting the entire dependence structure. They lead to tail correlations, which should arguably supersede the coefficients of tail dependence. Finally, we construct summary covariances (correlations), which arise as (normalised) weighted averages of distributional covariances. We retrieve Pearson covariance and Spearman correlation as special cases. The applicability and usefulness of our new dependence measures are illustrated on demographic data from the Panel Study of Income Dynamics.
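
For intuition, a sample version of a quantile-based generalised covariance (our own illustrative reading via the usual quantile identification function, not necessarily the paper's exact definition or normalisation) looks like:

import numpy as np

def quantile_covariance(x, y, alpha=0.5, beta=0.5):
    # Generalised errors for the alpha- and beta-quantiles via the standard
    # identification function 1{X <= q} - level, then averaged jointly.
    ex = (x <= np.quantile(x, alpha)).astype(float) - alpha
    ey = (y <= np.quantile(y, beta)).astype(float) - beta
    return float(np.mean(ex * ey))

rng = np.random.default_rng(1)
x = rng.standard_normal(10000)
y = 0.7 * x + 0.3 * rng.standard_normal(10000)
print(quantile_covariance(x, y, 0.9, 0.9))   # positive: upper-tail co-movement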

Exploring the origin and properties of magnetic fields is crucial to the development of many fields, such as physics, astronomy, and meteorology. We focus on the edge element approximation and theoretical analysis of a celestial dynamo system with quasi-vacuum boundary conditions. The system not only ensures that the magnetic field on the spherical shell is generated from the dynamo model, but also lends itself to the application of the edge element method. We demonstrate the existence, uniqueness, and stability of the solution to the system by a fixed point theorem. Then, we approximate the system using the edge element method, which is particularly effective for electromagnetic field problems. Moreover, we discuss the stability of the corresponding discrete scheme, and the convergence is demonstrated by subsequent numerical tests. Finally, we simulate the three-dimensional time evolution of the spherical interface dynamo model, and the characteristics of the simulated magnetic field are consistent with existing work.
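
For context, the magnetic field in such dynamo models is typically governed by the magnetic induction equation, stated here in a generic form that we assume underlies the system ($\mathbf{u}$ is the velocity field and $\eta$ the magnetic diffusivity):

% Generic induction equation for dynamo models (assumed form, not quoted from the paper).
\[
  \frac{\partial \mathbf{B}}{\partial t}
  \;=\; \nabla \times \left( \mathbf{u} \times \mathbf{B} \right)
  \;-\; \nabla \times \left( \eta \, \nabla \times \mathbf{B} \right),
  \qquad \nabla \cdot \mathbf{B} = 0 .
\]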

Many stochastic processes in the physical and biological sciences can be modelled using Brownian dynamics with multiplicative noise. However, numerical integrators for these processes can lose accuracy or even fail to converge when the diffusion term is configuration-dependent. One remedy is to construct a transform to a constant-diffusion process and sample the transformed process instead. In this work, we explain how coordinate-based and time-rescaling-based transforms can be used either individually or in combination to map a general class of variable-diffusion Brownian motion processes into constant-diffusion ones. The transforms are invertible, thus allowing recovery of the original dynamics. We motivate our methodology using examples in one dimension before considering multivariate diffusion processes. We illustrate the benefits of the transforms through numerical simulations, demonstrating how the right combination of integrator and transform can improve computational efficiency and the order of convergence to the invariant distribution. Notably, the transforms that we derive are applicable to a class of multibody, anisotropic Stokes-Einstein diffusion that has applications in biophysical modelling.
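
A minimal one-dimensional sketch of a coordinate-based transform (our own toy example with multiplicative noise $\sigma X$, not the paper's multibody setting): the change of variables $Y = \log(X)/\sigma$ turns the state-dependent diffusion into a constant one, and $X$ is recovered as $\exp(\sigma Y)$.

import numpy as np

def euler_maruyama(x0, drift, diffusion, dt, n_steps, rng):
    # Plain Euler-Maruyama for dX = drift(X) dt + diffusion(X) dW.
    x = x0
    for _ in range(n_steps):
        x = x + drift(x) * dt + diffusion(x) * np.sqrt(dt) * rng.standard_normal()
    return x

# Original process: dX = -X dt + sigma * X dW  (X > 0).
# Transformed process (by Ito's formula): dY = (-1/sigma - sigma/2) dt + dW, Y = log(X)/sigma.
sigma, dt, rng = 0.5, 1e-3, np.random.default_rng(2)
x_end = euler_maruyama(1.0, lambda x: -x, lambda x: sigma * x, dt, 5000, rng)
y_end = euler_maruyama(0.0, lambda y: -1.0 / sigma - sigma / 2.0, lambda y: 1.0,
                       dt, 5000, rng)
print(x_end, np.exp(sigma * y_end))   # samples of X(T) from either coordinate system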

Elementary trapping sets (ETSs) are the main culprits behind the performance of LDPC codes in the error floor region. Because ETSs are numerous, structurally complex, and computationally difficult to enumerate, eliminating dominant ETSs when designing LDPC codes is a pivotal issue for improving the error floor behavior. In practice, researchers commonly address this problem by avoiding certain special graph structures so as to keep specific ETSs out of the Tanner graph. In this paper, we deduce the exact Tur\'an number of $\theta(1,2,2)$ and prove that all $(a,b)$-ETSs in a variable-regular Tanner graph with degree $d_L(v)=\gamma$ must satisfy the bound $b\geq a\gamma-\frac{1}{2}a^2$, which improves the lower bound obtained by Amirzade when the girth is 6. For the case of girth 8, by limiting the relation between any two 8-cycles in the Tanner graph, we prove the similar inequality $b\geq a\gamma-\frac{a(\sqrt{8a-7}-1)}{2}$. Simulation results show that the designed codes have good performance with a lower error floor over additive white Gaussian noise channels.
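
As a quick worked check of the two bounds quoted above (our own arithmetic on the stated expressions):

from math import sqrt

def ets_lower_bounds(a, gamma):
    # Lower bounds on b for an (a, b)-ETS with variable node degree gamma:
    # the girth-6 bound and the sharper girth-8 bound from the abstract.
    girth6 = a * gamma - 0.5 * a * a
    girth8 = a * gamma - 0.5 * a * (sqrt(8 * a - 7) - 1)
    return girth6, girth8

print(ets_lower_bounds(a=6, gamma=4))   # (6.0, ~7.79): the girth-8 bound is stronger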

The Chinese Remainder Theorem for the integers says that every system of congruence equations is solvable as long as the system satisfies an obvious necessary condition. This statement can be generalized in a natural way to arbitrary algebraic structures using the language of Universal Algebra. In this context, an algebra is a structure of a first-order language with no relation symbols, and a congruence on an algebra is an equivalence relation on its base set compatible with its fundamental operations. A tuple of congruences of an algebra is called a Chinese Remainder tuple if every system involving them is solvable. In this article we study the complexity of deciding whether a tuple of congruences of a finite algebra is a Chinese Remainder tuple. This problem, which we denote CRT, is easily seen to lie in coNP. We prove that it is actually coNP-complete and also show that it is tractable when restricted to several well-known classes of algebras, such as vector spaces and distributive lattices. The polynomial algorithms we exhibit are made possible by purely algebraic characterizations of Chinese Remainder tuples for algebras in these classes, which constitute interesting results in their own right. Among these, an elegant characterization of Chinese Remainder tuples of finite distributive lattices stands out. Finally, we address the restriction of CRT to an arbitrary equational class $\mathcal{V}$ generated by a two-element algebra. Here we establish an (almost) dichotomy by showing that, unless $\mathcal{V}$ is the class of semilattices, the problem is either coNP-complete or tractable.
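
To make the definition concrete, a brute-force check over a small carrier set (our own illustration on partitions of $\mathbb{Z}_{12}$; we read "system" as one whose chosen congruence classes are pairwise consistent, which is an assumption about the precise definition):

from itertools import product

def is_cr_tuple(congruences):
    # congruences: list of partitions (lists of blocks) of the same base set.
    # Check that every pairwise-consistent choice of one block per congruence
    # has a common element, i.e. the corresponding system is solvable.
    for blocks in product(*congruences):
        pairwise_ok = all(set(b1) & set(b2) for b1 in blocks for b2 in blocks)
        if pairwise_ok and not set.intersection(*(set(b) for b in blocks)):
            return False
    return True

# Congruences modulo 2, 3 and 4 on Z_12: classically a Chinese Remainder tuple.
mods = [2, 3, 4]
partitions = [[[x for x in range(12) if x % m == r] for r in range(m)] for m in mods]
print(is_cr_tuple(partitions))   # True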

Many food products involve mixtures of ingredients, where the mixtures can be expressed as combinations of ingredient proportions. In many cases, the quality and the consumer preference may also depend on the way in which the mixtures are processed. The processing is generally defined by the settings of one or more process variables. Experimental designs studying the joint impact of the mixture ingredient proportions and the settings of the process variables are called mixture-process variable experiments. In this article, we show how to combine mixture-process variable experiments and discrete choice experiments to quantify and model consumer preferences for food products that can be viewed as processed mixtures. First, we describe the modeling of data from such combined experiments. Next, we describe how to generate D- and I-optimal designs for choice experiments involving mixtures and process variables, and we compare the two kinds of designs using two examples.
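
To illustrate how such combined data might be modelled, the sketch below evaluates multinomial-logit choice probabilities in which each alternative's utility follows a second-order Scheffé mixture model crossed with a single process variable (an illustrative model form and parameter vector of our own, not necessarily the paper's exact specification):

import numpy as np

def choice_probabilities(alternatives, beta):
    # Each alternative is (x1, x2, x3, z): three ingredient proportions summing
    # to 1 plus one coded process variable z.
    def features(x1, x2, x3, z):
        return np.array([x1, x2, x3,                 # linear blending terms
                         x1 * x2, x1 * x3, x2 * x3,  # nonlinear blending terms
                         x1 * z, x2 * z, x3 * z])    # mixture-by-process interactions
    utilities = np.array([features(*alt) @ beta for alt in alternatives])
    expu = np.exp(utilities - utilities.max())       # stabilised softmax
    return expu / expu.sum()

# Hypothetical preference parameters and a choice set of two processed mixtures.
beta = np.array([1.0, 0.5, 0.2, 0.8, 0.1, 0.3, 0.4, -0.2, 0.0])
print(choice_probabilities([(0.6, 0.3, 0.1, 1.0), (0.2, 0.5, 0.3, -1.0)], beta))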
