
We establish a coding theorem and a matching converse theorem for separate encodings and joint decoding of individual sequences using finite-state machines. The achievable rate region is characterized in terms of the Lempel-Ziv (LZ) complexities, the conditional LZ complexities and the joint LZ complexity of the two source sequences. An important ingredient needed to this end, which may be of interest in its own right, is a certain asymptotic form of a chain rule for LZ complexities, which we establish in this work. The main emphasis in the achievability scheme is on the universal decoder and its properties. We then show that the achievable rate region is universally attainable by a modified version of Draper's universal incremental Slepian-Wolf (SW) coding scheme, provided that there exists a low-rate reliable feedback link.
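
By analogy with the classical Slepian-Wolf region, and purely for orientation (the precise normalizations and statements are the ones given in the paper), the chain rule and the resulting rate region can be pictured as follows:

```latex
% Illustrative only, by analogy with the classical Slepian-Wolf region; the
% exact normalizations and statements are those of the paper.
% LZ(x), LZ(y|x), LZ(x,y): individual, conditional and joint LZ complexities.
\frac{1}{n}\,\mathrm{LZ}(\boldsymbol{x},\boldsymbol{y})
  \approx \frac{1}{n}\,\mathrm{LZ}(\boldsymbol{x})
        + \frac{1}{n}\,\mathrm{LZ}(\boldsymbol{y}\mid\boldsymbol{x}) + o(1),
\qquad
\begin{cases}
R_x \gtrsim \tfrac{1}{n}\,\mathrm{LZ}(\boldsymbol{x}\mid\boldsymbol{y}),\\[2pt]
R_y \gtrsim \tfrac{1}{n}\,\mathrm{LZ}(\boldsymbol{y}\mid\boldsymbol{x}),\\[2pt]
R_x + R_y \gtrsim \tfrac{1}{n}\,\mathrm{LZ}(\boldsymbol{x},\boldsymbol{y}).
\end{cases}
```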

Related content

In contemporary problems involving genetic or neuroimaging data, thousands of hypotheses need to be tested. Due to their high power, and finite-sample guarantees on type I error under weak assumptions, Monte-Carlo permutation tests are often considered the gold standard for these settings. However, the enormous computational effort required for (thousands of) permutation tests is a major burden. Recently, Fischer and Ramdas (2024) constructed a permutation test for a single hypothesis in which the permutations are drawn sequentially one-by-one and the testing process can be stopped at any point without inflating the type I error. They showed that the number of permutations can be substantially reduced (under null and alternative) while the power remains similar. We show how their approach can be modified to make it suitable for a broad class of multiple testing procedures. In particular, we discuss its use with the Benjamini-Hochberg procedure and illustrate the application on a large dataset.
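
As a rough sketch of the sequential idea, a stopped permutation p-value in the spirit of Besag and Clifford (1991) draws permutations one at a time and stops once enough exceedances of the observed statistic have been seen. This is only an illustration of early stopping, not the exact anytime-valid construction of Fischer and Ramdas (2024); `statistic`, `h` and `max_perms` are illustrative names and choices.

```python
import numpy as np

def sequential_perm_pvalue(data, labels, statistic, h=10, max_perms=10_000, seed=None):
    """Stopped permutation p-value (Besag & Clifford style): draw permutations
    one by one and stop after h exceedances of the observed statistic, or
    after max_perms draws.  Sketch only; the Fischer-Ramdas (2024)
    construction differs in its details."""
    rng = np.random.default_rng(seed)
    t_obs = statistic(data, labels)
    exceed = 0
    for draws in range(1, max_perms + 1):
        if statistic(data, rng.permutation(labels)) >= t_obs:
            exceed += 1
            if exceed == h:
                return h / draws            # stopped early
    return (exceed + 1) / (max_perms + 1)   # exhausted the permutation budget
```

Such p-values could then be passed to a multiplicity correction such as Benjamini-Hochberg, although the combination studied in the paper requires additional care to preserve FDR control.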

We propose a unified theoretical framework to examine the energy dissipation properties at all stages of explicit exponential Runge-Kutta (EERK) methods for gradient flow problems. The core of the framework is to construct the differential form of an EERK method by using the difference coefficients of the method and the so-called discrete orthogonal convolution kernels. As the main result, we prove that an EERK method preserves the original energy dissipation law unconditionally if the associated differentiation matrix is positive semi-definite. A simple indicator, namely the average dissipation rate, is also introduced for these multi-stage methods to evaluate the overall energy dissipation rate of an EERK method, so that one can choose proper parameters in some parameterized EERK methods or compare different kinds of EERK methods. Some existing EERK methods in the literature are evaluated from the perspective of preserving the original energy dissipation law and the energy dissipation rate. Some numerical examples are also included to support our theory.
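
As a small illustration of the kind of criterion involved, positive semi-definiteness of a (generally non-symmetric) differentiation matrix can be checked through the eigenvalues of its symmetric part. The matrix `D` below is a placeholder, not a differentiation matrix of any particular EERK scheme.

```python
import numpy as np

def is_positive_semidefinite(D, tol=1e-12):
    """x^T D x >= 0 for all x iff the symmetric part (D + D^T)/2 is
    positive semi-definite, tested here via its smallest eigenvalue."""
    S = 0.5 * (D + D.T)
    return float(np.linalg.eigvalsh(S).min()) >= -tol

# Placeholder 2x2 matrix standing in for an EERK differentiation matrix.
D = np.array([[1.0, 0.5],
              [0.0, 1.0]])
print(is_positive_semidefinite(D))  # True for this toy example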

The characterization of the solution set for a class of algebraic Riccati inequalities is studied. This class arises in the passivity analysis of linear time-invariant control systems. Eigenvalue perturbation theory for the Hamiltonian matrix associated with the Riccati inequality is used to analyze the extremal points of the solution set.
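
For orientation, one common convention for this setup (signs and scalings vary across the literature, and the paper's exact formulation may differ) is the passivity Riccati inequality for a system $(A,B,C,D)$ with $R := D + D^{\top} \succ 0$, together with its associated Hamiltonian matrix:

```latex
% One common convention; the paper's formulation may differ.
A^{\top}X + XA + (XB - C^{\top})R^{-1}(B^{\top}X - C) \preceq 0,
\qquad X = X^{\top} \succeq 0,

H =
\begin{pmatrix}
A - BR^{-1}C & BR^{-1}B^{\top}\\
-\,C^{\top}R^{-1}C & -\bigl(A - BR^{-1}C\bigr)^{\top}
\end{pmatrix}.
```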

Boundary condition (BC) calibration to assimilate clinical measurements is an essential step in any subject-specific simulation of cardiovascular fluid dynamics. Bayesian calibration approaches have successfully quantified the uncertainties inherent in identified parameters. Yet, routinely estimating the posterior distribution for all BC parameters in 3D simulations has been unattainable due to the prohibitive computational demand. We propose an efficient method to identify Windkessel parameter posteriors using results from a single high-fidelity three-dimensional (3D) model evaluation. We only evaluate the 3D model once for an initial choice of BCs and use the result to create a highly accurate zero-dimensional (0D) surrogate. We then perform Sequential Monte Carlo (SMC) using the optimized 0D model to derive the high-dimensional Windkessel BC posterior distribution. We validate this approach in a publicly available dataset of N=72 subject-specific vascular models. We found that optimizing 0D models to match 3D data a priori lowered their median approximation error by nearly one order of magnitude. In a subset of models, we confirm that the optimized 0D models still generalize to a wide range of BCs. Finally, we present the high-dimensional Windkessel parameter posterior for different measured signal-to-noise ratios in a vascular model using SMC. We further validate that the 0D-derived posterior is a good approximation of the 3D posterior. The minimal computational demand of our method, which uses a single 3D simulation, combined with the open-source nature of all software and data used in this work, will increase the accessibility and efficiency of Bayesian Windkessel calibration in cardiovascular fluid dynamics simulations.
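
For context, a three-element (RCR) Windkessel boundary condition relates outlet flow $Q(t)$ and pressure $P(t)$ through a proximal resistance, a compliance and a distal resistance. A minimal sketch of integrating such a model (with made-up parameter values and arbitrary units, not the paper's 0D surrogate or dataset) is:

```python
import numpy as np

def simulate_rcr(Q, dt, Rp, C, Rd, Pd=0.0):
    """Forward-Euler integration of a 3-element Windkessel (RCR) model.
    State Pc is the pressure across the capacitor; the outlet pressure is
    P = Rp*Q + Pc and the capacitor obeys C*dPc/dt = Q - (Pc - Pd)/Rd.
    Illustrative sketch only."""
    Pc = Pd
    P = np.empty_like(Q, dtype=float)
    for i, q in enumerate(Q):
        P[i] = Rp * q + Pc
        Pc += dt * (q - (Pc - Pd) / Rd) / C
    return P

# Hypothetical pulsatile inflow and made-up parameter values.
t = np.linspace(0.0, 1.0, 1000)
Q = 10.0 * np.maximum(np.sin(2.0 * np.pi * t), 0.0)
P = simulate_rcr(Q, dt=t[1] - t[0], Rp=0.1, C=1.0, Rd=1.0)
```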

We provide full theoretical guarantees for the convergence behaviour of diffusion-based generative models under the assumption of strongly log-concave data distributions, while our approximating class of functions used for score estimation consists of Lipschitz continuous functions. We demonstrate the power of our approach via a motivating example: sampling from a Gaussian distribution with unknown mean. In this case, explicit estimates are provided for the associated optimization problem, i.e. score approximation, and these are combined with the corresponding sampling estimates. As a result, we obtain the best known upper bounds, in terms of key quantities of interest such as the dimension and rates of convergence, for the Wasserstein-2 distance between the data distribution (Gaussian with unknown mean) and our sampling algorithm. Beyond the motivating example, and in order to allow for the use of a diverse range of stochastic optimizers, we present our results using an $L^2$-accurate score estimation assumption, which crucially is formed under an expectation with respect to the stochastic optimizer and our novel auxiliary process that uses only known information. This approach yields the best known convergence rate for our sampling algorithm.
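
As a sanity check on the Gaussian example, the score is available in closed form. Assuming, purely for illustration, the usual variance-preserving Ornstein-Uhlenbeck forward noising:

```latex
% Variance-preserving OU forward process, assumed here for illustration.
X_t \mid (X_0 = x_0) \sim N\!\bigl(e^{-t}x_0,\,(1-e^{-2t})I_d\bigr),
\qquad
X_0 \sim N(\mu, I_d) \;\Rightarrow\; X_t \sim N\!\bigl(e^{-t}\mu,\, I_d\bigr),
\qquad
\nabla \log p_t(x) = -\bigl(x - e^{-t}\mu\bigr).
```

In particular, score estimation in this example reduces to estimating the single mean vector $\mu$, which helps explain why fully explicit optimization and sampling estimates are attainable in this case.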

We detail the mathematical formulation of the line of "functional quantizer" modules developed by the Mathematics and Music Lab (MML) at Michigan Technological University, for the VCV Rack software modular synthesizer platform, which allow synthesizer players to tune oscillators to new musical scales based on mathematical functions. For example, we describe the recently released MML Logarithmic Quantizer (LOG QNT) module that tunes synthesizer oscillators to the non-Pythagorean musical scale introduced by indie band The Apples in Stereo.
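
In broad strokes, a functional quantizer builds a scale by evaluating a function at integer indices and then snaps an incoming pitch to the nearest scale tone. The sketch below is a hypothetical illustration of that idea only; the mappings actually used by the MML modules (including LOG QNT) are those documented with VCV Rack.

```python
import numpy as np

def functional_scale(f, n_tones, base_freq=261.63):
    """Scale frequencies base_freq * f(1), ..., base_freq * f(n_tones), for a
    positive increasing function f.  Hypothetical mapping for illustration."""
    return base_freq * np.array([f(n) for n in range(1, n_tones + 1)])

def quantize(freq, scale_freqs):
    """Snap an incoming frequency to the nearest scale tone."""
    return float(scale_freqs[np.argmin(np.abs(scale_freqs - freq))])

# Example: a logarithmically spaced scale (illustrative, not the LOG QNT scale).
scale = functional_scale(lambda n: 1.0 + np.log(n + 1.0), n_tones=12)
print(quantize(440.0, scale))
```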

We consider problems where many, somewhat redundant, hypotheses are tested and we are interested in reporting the most precise rejections, with false discovery rate (FDR) control. This is the case, for example, when researchers are interested both in individual hypotheses and in group hypotheses corresponding to intersections of sets of the original hypotheses, at several resolution levels. A concrete application is in genome-wide association studies, where, depending on the signal strength, it might be possible to resolve the influence of individual genetic variants on a phenotype with greater or lower precision. To adapt to the unknown signal strength, analyses are conducted at multiple resolutions, and researchers are most interested in the more precise discoveries. Assuring FDR control on the findings reported by such adaptive searches is, however, often impossible. To design a multiple comparison procedure that allows for an adaptive choice of resolution with FDR control, we leverage e-values and linear programming. We adapt this approach to problems where knockoffs and group knockoffs have been successfully applied to test conditional independence hypotheses. We demonstrate its efficacy by analyzing data from the UK Biobank.
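
One standard building block in the e-value FDR literature is the e-BH procedure of Wang and Ramdas, which applies a BH-like step directly to e-values. The sketch below shows only this generic ingredient, not the linear-programming step that handles the multi-resolution structure in the paper.

```python
import numpy as np

def e_bh(e_values, alpha=0.1):
    """e-BH procedure (Wang & Ramdas): with m e-values, reject the hypotheses
    with the k largest e-values, where k is the largest integer such that the
    k-th largest e-value is at least m / (alpha * k).  Returns rejected indices."""
    e = np.asarray(e_values, dtype=float)
    m = e.size
    order = np.argsort(-e)                        # indices by decreasing e-value
    passed = e[order] >= m / (alpha * np.arange(1, m + 1))
    k = int(np.flatnonzero(passed).max()) + 1 if passed.any() else 0
    return order[:k]
```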

In symbolic integration, the Risch--Norman algorithm aims to find closed forms of elementary integrals over differential fields by making an ansatz for the integral, which is usually based on heuristic degree bounds. Norman presented an approach that avoids degree bounds and relies only on the completion of reduction systems. We give a formalization of his approach and develop a refined completion process, which terminates in more instances. In some situations where the algorithm does not terminate, one can detect patterns that still allow complete infinite reduction systems to be described. We present such infinite systems for the fields generated by Airy functions and complete elliptic integrals, respectively. Moreover, we show how complete reduction systems can be used to find rigorous degree bounds. In particular, we give a general formula for weighted degree bounds and apply it to find tight bounds for the above examples.
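
As a tiny worked instance of the ansatz-with-degree-bound idea (standard background, not the refined completion process developed here): to integrate $x e^{x^{2}}$ one works in the field generated by $\theta = e^{x^{2}}$, guesses a candidate $q = p(x)\,\theta$ with $p$ of heuristically bounded degree, and compares coefficients.

```latex
% Toy example of the Risch--Norman ansatz with a heuristic degree bound.
\theta = e^{x^{2}},\quad \theta' = 2x\,\theta,\quad
q = p(x)\,\theta \;\Rightarrow\; q' = \bigl(p'(x) + 2x\,p(x)\bigr)\theta .
% Matching q' = x\,\theta with the bound \deg p \le 0 gives p = 1/2, hence
\int x\,e^{x^{2}}\,dx = \tfrac{1}{2}\,e^{x^{2}} .
```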

For a reaction-dominated diffusion problem we study a primal and a dual hybrid finite element method in which weak continuity conditions are enforced by Lagrange multipliers. Uniform robustness of the discrete methods is achieved by enriching the local discretization spaces with modified face bubble functions that decay exponentially in the interior of an element, depending on the ratio of the singular perturbation parameter and the local mesh-size. A posteriori error estimators are derived using Fortin operators; they are robust with respect to the singular perturbation parameter. Numerical experiments are presented that show that oscillations, if present, are significantly smaller than those observed in common finite element methods.
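
For orientation, the prototypical reaction-dominated problem and the layer scale that drives such exponential decay can be sketched in a simplified 1D caricature (not the paper's precise construction of the modified face bubbles):

```latex
% 1D caricature of a reaction-dominated diffusion problem, 0 < \varepsilon \ll 1.
-\varepsilon\,u'' + u = f \ \text{ in } (0,1), \qquad u(0) = u(1) = 0,
% solutions exhibit boundary layers of width O(\sqrt{\varepsilon}) with decay
\sim \exp\!\bigl(-\,\mathrm{dist}(x,\{0,1\})/\sqrt{\varepsilon}\,\bigr).
```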

This paper studies the convergence of a spatial semidiscretization of a three-dimensional stochastic Allen-Cahn equation with multiplicative noise. For non-smooth initial values, the regularity of the mild solution is investigated, and an error estimate is derived in the spatial $ L^2 $-norm. For smooth initial values, two error estimates in general spatial $ L^q $-norms are established.
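
For reference, a standard form of the stochastic Allen-Cahn equation with multiplicative noise (the paper's precise assumptions on $g$, the noise $W$ and the domain may differ) reads:

```latex
% Standard form; the exact assumptions are those of the paper.
\mathrm{d}u = \bigl(\Delta u + u - u^{3}\bigr)\,\mathrm{d}t + g(u)\,\mathrm{d}W(t)
\quad \text{in } \mathcal{D} \subset \mathbb{R}^{3}, \qquad u(0) = u_{0}.
```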
