
In this paper we develop explicit and semi-implicit second-order high-resolution finite difference schemes for a structured coagulation-fragmentation model formulated on the space of Radon measures. We prove the convergence of each of the two schemes to the unique weak solution of the model. We perform numerical simulations to demonstrate that second-order accuracy is achieved by both schemes.
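As a concrete illustration (not the authors' scheme), the sketch below implements one explicit second-order time step, via the two-stage Heun/SSP-RK2 method, for a discrete pure-coagulation model with a generic kernel `K`; the kernel and the size grid are placeholder assumptions.

```python
import numpy as np

# Discrete Smoluchowski coagulation:
# du_i/dt = 0.5 * sum_{j+k=i} K_{jk} u_j u_k - u_i * sum_j K_{ij} u_j,
# with sizes 1..n stored in a zero-based array u.

def coag_rhs(u, K):
    n = len(u)
    gain = np.zeros(n)
    for i in range(n):
        for j in range(i):
            # pair of sizes (j+1, i-j) merging into size i+1
            gain[i] += 0.5 * K[j, i - 1 - j] * u[j] * u[i - 1 - j]
    loss = u * (K @ u)
    return gain - loss

def heun_step(u, K, dt):
    # two-stage, second-order strong-stability-preserving Runge-Kutta
    k1 = coag_rhs(u, K)
    k2 = coag_rhs(u + dt * k1, K)
    return u + 0.5 * dt * (k1 + k2)
```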

Related Content

In many fields of the biomedical sciences, it is common for random variables to be measured repeatedly across different subjects. In such a repeated measurement setting, the between-subject and within-subject dependence structures among the random variables may differ and should be estimated separately. Ignoring this fact may lead to questionable or even erroneous scientific conclusions. In this paper, we study the problem of sparse and positive-definite estimation of between-subject and within-subject covariance matrices for high-dimensional repeated measurements. Our estimators are defined as solutions to convex optimization problems that can be solved efficiently. We establish estimation error rates for the proposed estimators of the two target matrices and demonstrate their favorable performance through theoretical analysis and comprehensive simulation studies. We further apply our methods to recover two covariance graphs of clinical variables from hemodialysis patients.
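A minimal sketch of one common recipe in this spirit, assuming off-diagonal soft-thresholding for sparsity and an eigenvalue floor for positive-definiteness; this is an illustrative stand-in, not the paper's estimator, and `lam`, `eps`, and the alternating loop are assumptions.

```python
import numpy as np

def soft_threshold_offdiag(S, lam):
    # shrink off-diagonal entries toward zero, keep the diagonal
    T = np.sign(S) * np.maximum(np.abs(S) - lam, 0.0)
    np.fill_diagonal(T, np.diag(S))
    return T

def project_pd(S, eps=1e-6):
    # project the symmetric matrix onto {M : eigenvalues >= eps}
    w, V = np.linalg.eigh((S + S.T) / 2.0)
    return (V * np.maximum(w, eps)) @ V.T

def sparse_pd_cov(X, lam, n_iter=10):
    S = np.cov(X, rowvar=False)      # sample covariance
    for _ in range(n_iter):          # crude alternating projections
        S = project_pd(soft_threshold_offdiag(S, lam))
    return S
```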

We present a novel deep learning-based framework, Embedded Feature Correlation Optimization with Specific Parameter Initialization (COSPI), for 2D/3D registration, which is among the most challenging registration problems due to difficulties such as dimensional mismatch, heavy computational load, and the lack of a gold-standard evaluation. The framework includes a parameter specification module that efficiently chooses an initial pose and a fine-registration network that aligns the images. The proposed framework extracts multi-scale features using a novel composite connection encoder with special training techniques. We compare the method with both learning-based and optimization-based methods to further evaluate its performance. Our experiments demonstrate that the proposed method improves registration performance and outperforms existing methods in terms of accuracy and running time. We also show the potential of the proposed method as an initial pose estimator.
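A purely hypothetical skeleton of such a two-stage coarse-then-fine pipeline; every name here (`initializer`, `refiner`, `renderer`) is illustrative and not from the paper.

```python
import numpy as np

def register(fixed_2d, volume_3d, initializer, refiner, renderer, n_iter=10):
    # Stage 1: a coarse module proposes an initial pose parameter vector.
    pose = initializer(fixed_2d, volume_3d)
    # Stage 2: a fine-registration network iteratively refines the pose
    # by comparing the 2D image with a simulated projection of the volume.
    for _ in range(n_iter):
        drr = renderer(volume_3d, pose)      # simulated 2D projection
        pose = pose + refiner(fixed_2d, drr)  # predicted pose update
    return pose
```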

Let $G$ be a graph on $n$ vertices of maximum degree $\Delta$. We show that, for any $\delta > 0$, the down-up walk on independent sets of size $k \leq (1-\delta)\alpha_c(\Delta)n$ mixes in time $O_{\Delta,\delta}(k\log{n})$, thereby resolving a conjecture of Davies and Perkins in an optimal form. Here, $\alpha_{c}(\Delta)n$ is the NP-hardness threshold for the problem of counting independent sets of a given size in a graph on $n$ vertices of maximum degree $\Delta$. Our mixing time has optimal dependence on $k,n$ for the entire range of $k$; previously, even polynomial mixing was not known. In fact, for $k = \Omega_{\Delta}(n)$ in this range, we establish a log-Sobolev inequality with optimal constant $\Omega_{\Delta,\delta}(1/n)$. At the heart of our proof are three new ingredients, which may be of independent interest. The first is a method for lifting $\ell_\infty$-independence from a suitable distribution on the discrete cube -- in this case, the hard-core model -- to the slice by proving stability of an Edgeworth expansion using a multivariate zero-free region for the base distribution. The second is a generalization of the Lee-Yau induction to prove log-Sobolev inequalities for distributions on the slice with considerably less symmetry than the uniform distribution. The third is a sharp decomposition-type result which provides a lossless comparison between the Dirichlet form of the original Markov chain and that of the so-called projected chain in the presence of a contractive coupling.
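For concreteness, here is a minimal sketch of one step of the down-up walk described above, with the graph stored as a dictionary mapping each vertex to its neighbour set.

```python
import random

def down_up_step(G, S):
    """One down-up step on independent sets of fixed size k.

    G: dict vertex -> set of neighbours; S: current independent set.
    Down: drop a uniformly random vertex. Up: add a uniformly random
    vertex that keeps the set independent (possibly the one dropped).
    """
    S = set(S)
    S.remove(random.choice(tuple(S)))                       # "down" move
    candidates = [v for v in G if v not in S and not (G[v] & S)]
    S.add(random.choice(candidates))                        # "up" move
    return S
```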

Port-Hamiltonian (PH) systems provide a framework for modeling, analysis and control of complex dynamical systems, where the complexity might result from multi-physical couplings, non-trivial domains and diverse nonlinearities. A major benefit of the PH representation is the explicit formulation of power interfaces, so-called ports, which allow for a power-preserving interconnection of subsystems to compose flexible multibody systems in a modular way. In this work, we present a PH representation of geometrically exact strings with nonlinear material behaviour. Furthermore, using structure-preserving discretization techniques a corresponding finite-dimensional PH state space model is developed. Applying mixed finite elements, the semi-discrete model retains the PH structure and the ports (pairs of velocities and forces) on the discrete level. Moreover, discrete derivatives are used in order to obtain an energy-consistent time-stepping method. The numerical properties of the newly devised model are investigated in a representative example. The developed PH state space model can be used for structure-preserving simulation and model order reduction as well as feedforward and feedback control design.
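As a small illustration of energy-consistent time stepping in the PH setting, the sketch below applies the implicit midpoint rule to a generic linear PH state-space model $\dot{x} = (J - R)Qx + Bu$; for a quadratic Hamiltonian $H(x) = \frac{1}{2}x^\top Q x$ this reproduces the discrete power balance exactly, whereas the paper's nonlinear string model would require discrete derivatives (discrete gradients) instead.

```python
import numpy as np

def ph_midpoint_step(x, u, J, R, Q, B, dt):
    # implicit midpoint rule for dx/dt = (J - R) Q x + B u;
    # J skew-symmetric (interconnection), R symmetric PSD (dissipation)
    A = (J - R) @ Q
    n = len(x)
    lhs = np.eye(n) - 0.5 * dt * A
    rhs = (np.eye(n) + 0.5 * dt * A) @ x + dt * (B @ u)
    return np.linalg.solve(lhs, rhs)
```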

This paper introduces Distribution-Flexible Subset Quantization (DFSQ), a post-training quantization method for super-resolution networks. Our motivation for developing DFSQ is based on the distinctive activation distributions of current super-resolution models, which exhibit significant variance across samples and channels. To address this issue, DFSQ conducts channel-wise normalization of the activations and applies distribution-flexible subset quantization (SQ), wherein the quantization points are selected from a universal set consisting of multi-word additive log-scale values. To expedite the selection of quantization points in SQ, we propose a fast quantization points selection strategy that uses K-means clustering to select the quantization points closest to the centroids. Compared to the common iterative exhaustive search algorithm, our strategy avoids the enumeration of all possible combinations in the universal set, reducing the time complexity from exponential to linear. Consequently, the constraint of time costs on the size of the universal set is greatly relaxed. Extensive evaluations of various super-resolution models show that DFSQ effectively retains performance even without fine-tuning. For example, when quantizing EDSRx2 on the Urban benchmark, DFSQ achieves comparable performance to full-precision counterparts on 6- and 8-bit quantization, and incurs only a 0.1 dB PSNR drop on 4-bit quantization.
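A rough sketch of the K-means-based selection step described above; the universal set below (sums of two signed power-of-two "words") is an illustrative assumption rather than the paper's exact construction, and nearby centroids may snap to the same point.

```python
import numpy as np

def universal_set(exponents=range(0, 5)):
    # sums of two "words" drawn from {0} U {+/- 2^-e}: a toy stand-in
    words = [0.0] + [s * 2.0 ** -e for e in exponents for s in (1, -1)]
    return np.array(sorted({round(a + b, 8) for a in words for b in words}))

def select_quant_points(x, n_points, n_iter=20):
    # channel-wise normalization, 1-D k-means, then snap each centroid
    # to its nearest universal-set value (avoids exhaustive search)
    x = np.asarray(x, float).ravel()
    x = (x - x.mean()) / (x.std() + 1e-8)
    centroids = np.quantile(x, np.linspace(0.05, 0.95, n_points))
    for _ in range(n_iter):
        labels = np.abs(x[:, None] - centroids[None, :]).argmin(axis=1)
        for c in range(n_points):
            if np.any(labels == c):
                centroids[c] = x[labels == c].mean()
    U = universal_set()
    return U[np.abs(U[:, None] - centroids[None, :]).argmin(axis=0)]
```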

The heterogeneous distributed quickest change detection (HetDQCD) problem with 1-bit feedback is studied, in which a fusion center monitors an abrupt change through a group of heterogeneous sensors via anonymous 1-bit feedback. Two fusion rules, the one-shot rule and the voting rule, are considered. We analyze the performance of the two fusion rules in terms of the worst-case expected detection delay and the average run length to false alarm. Our analysis unveils the mixed impact of involving more sensors in the decision and enables us to find near-optimal choices of parameters in the two schemes. Notably, it is shown that, in contrast to the homogeneous setting, the first-alarm rule may no longer lead to the best performance among one-shot schemes. The non-anonymous setting is then investigated, for which a novel weighted voting rule is proposed that assigns different weights to votes from different types of sensors. Simulation results show that the proposed scheme outperforms all the above schemes as well as the mixture CUSUM scheme for the anonymous HetDQCD, hinting at the price of anonymity.
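A minimal sketch of the voting rule with 1-bit feedback, assuming each sensor runs a local CUSUM on a stream of log-likelihood ratios; the local thresholds and the LLR streams are assumptions supplied by the caller.

```python
import numpy as np

def run_voting_fusion(llr_streams, local_thresholds, v):
    # llr_streams: (n_sensors, horizon) array of per-sample LLRs;
    # each sensor sends a single bit once its CUSUM crosses its
    # threshold; the fusion center alarms after v votes.
    n_sensors, horizon = llr_streams.shape
    stats = np.zeros(n_sensors)
    voted = np.zeros(n_sensors, dtype=bool)
    for t in range(horizon):
        stats = np.maximum(stats + llr_streams[:, t], 0.0)  # CUSUM update
        voted |= stats >= local_thresholds                  # 1-bit votes
        if voted.sum() >= v:                                # fusion rule
            return t                                        # alarm time
    return None                                             # no alarm
```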

Probability predictions are essential to inform decision making in medicine, economics, image classification, sports analytics, entertainment, and many other fields. Ideally, probability predictions are (i) well calibrated, (ii) accurate, and (iii) bold, i.e., far from the base rate of the event. Predictions that satisfy these three criteria are informative for decision making. However, there is a fundamental tension between calibration and boldness, since calibration metrics can be high when predictions are overly cautious, i.e., non-bold. The purpose of this work is to develop a hypothesis test and Bayesian model selection approach to assess calibration, and a strategy for boldness-recalibration that enables practitioners to responsibly embolden predictions subject to their required level of calibration. Specifically, we allow the user to pre-specify their desired posterior probability of calibration, then maximally embolden predictions subject to this constraint. We verify the performance of our procedures via simulation, then demonstrate the breadth of applicability by applying these methods to real-world case studies in each of the fields mentioned above. We find that a very slight relaxation of the calibration probability (e.g., from 0.99 to 0.95) can often substantially embolden predictions (e.g., widening hockey predictions' range from .25-.75 to .10-.90).
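A heavily simplified sketch of the embolden-subject-to-calibration idea: predictions are pushed away from the base rate by scaling their log-odds, and the largest scaling that passes a crude calibration proxy (a log-likelihood tolerance, standing in for the paper's posterior probability of calibration) is retained. The grid and tolerance are assumptions.

```python
import numpy as np

def embolden(p, t):
    # scale log-odds by t > 1 to push predictions away from the base rate
    logit = np.log(p) - np.log1p(-p)
    return 1.0 / (1.0 + np.exp(-t * logit))

def max_boldness(p, y, grid=np.linspace(1.0, 5.0, 81), tol=2.0):
    # keep the largest t whose log-likelihood drop (vs. t = 1) is < tol;
    # a crude stand-in for the posterior calibration constraint
    def ll(q):
        return np.sum(y * np.log(q) + (1 - y) * np.log1p(-q))
    base, best = ll(p), 1.0
    for t in grid:
        if base - ll(embolden(p, t)) < tol:
            best = t
    return best
```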

In this article, we address the challenge of solving the ill-posed reconstruction problem in computed tomography using a translation-invariant diagonal frame decomposition (TI-DFD). First, we review the concept of a TI-DFD for general linear operators and the corresponding filter-based regularization concept. We then introduce the TI-DFD for the Radon transform on $L^2(\mathbb{R}^2)$ and provide an exemplary construction using the TI wavelet transform. The presented numerical results clearly demonstrate the benefits of our approach over non-translation-invariant counterparts.
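The sketch below conveys the flavor of translation-invariant, filter-based wavelet regularization via cycle spinning (averaging over shifts), applied to a generic 2D image; it is not the authors' TI-DFD of the Radon transform, and the wavelet, level, and threshold are assumptions.

```python
import numpy as np
import pywt

def ti_wavelet_denoise(img, lam, shifts=4, wavelet="db4", level=3):
    # cycle spinning: shift, soft-threshold detail coefficients,
    # reconstruct, unshift, and average over all shifts
    out = np.zeros_like(img, dtype=float)
    for sx in range(shifts):
        for sy in range(shifts):
            shifted = np.roll(np.roll(img, sx, 0), sy, 1)
            coeffs = pywt.wavedec2(shifted, wavelet, level=level)
            thresh = [coeffs[0]] + [
                tuple(pywt.threshold(c, lam, mode="soft") for c in d)
                for d in coeffs[1:]
            ]
            rec = pywt.waverec2(thresh, wavelet)
            rec = rec[: img.shape[0], : img.shape[1]]
            out += np.roll(np.roll(rec, -sx, 0), -sy, 1)
    return out / shifts**2
```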

Several task and motion planning algorithms have been proposed recently to design paths for mobile robot teams with collaborative high-level missions specified using formal languages, such as Linear Temporal Logic (LTL). However, the designed paths often lack reactivity to failures of robot capabilities (e.g., sensing, mobility, or manipulation) that can occur due to unanticipated events (e.g., human intervention or system malfunctioning) which in turn may compromise mission performance. To address this novel challenge, in this paper, we propose a new resilient mission planning algorithm for teams of heterogeneous robots with collaborative LTL missions. The robots are heterogeneous with respect to their capabilities while the mission requires applications of these skills at certain areas in the environment in a temporal/logical order. The proposed method designs paths that can adapt to unexpected failures of robot capabilities. This is accomplished by re-allocating sub-tasks to the robots based on their currently functioning skills while minimally disrupting the existing team motion plans. We provide experiments and theoretical guarantees demonstrating the efficiency and resiliency of the proposed algorithm.
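A toy sketch of the skill-based re-allocation idea: sub-tasks assigned to a robot whose required capability has failed are handed to another robot that still has that capability. The data layout and the greedy choice are hypothetical simplifications of the paper's algorithm, which additionally minimizes disruption to existing motion plans.

```python
def reallocate(assignment, skills, required, failed):
    # assignment: task -> robot; skills: robot -> set of capabilities;
    # required: task -> capability; failed: robot -> set of failed ones
    for task, robot in list(assignment.items()):
        if required[task] in failed.get(robot, set()):
            # hand the task to any robot whose working skills cover it
            for r, s in skills.items():
                if required[task] in s - failed.get(r, set()):
                    assignment[task] = r
                    break
    return assignment
```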

Stochastic Klein--Gordon--Schr\"odinger (KGS) equations are important mathematical models that describe the interaction between scalar nucleons and neutral scalar mesons in a stochastic environment. In this paper, we propose novel structure-preserving schemes to numerically solve stochastic KGS equations with additive noise, which preserve the averaged charge evolution law, the averaged energy evolution law, symplecticity, and multi-symplecticity. By applying the central difference, sine pseudo-spectral, or finite element method in space and a modified finite difference method in time, we present charge- and energy-preserving fully discrete schemes for the original system. In addition, combining the symplectic Runge-Kutta method in time with finite differences in space, we propose a class of multi-symplectic discretizations preserving the geometric structure of the stochastic KGS equation. Finally, numerical experiments confirm the theoretical findings.
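As a small related illustration (not the paper's discretization), the sketch below implements a stochastic midpoint step for an SDE with additive noise, $\mathrm{d}X = f(X)\,\mathrm{d}t + \sigma\,\mathrm{d}W$, solved by fixed-point iteration; for a Hamiltonian drift this one-step map is known to be stochastically symplectic.

```python
import numpy as np

def stochastic_midpoint_step(x, f, sigma, dt, rng, n_fp=20):
    # implicit midpoint with additive noise, via fixed-point iteration
    dw = rng.normal(0.0, np.sqrt(dt), size=x.shape)
    x_new = x.copy()
    for _ in range(n_fp):
        x_new = x + dt * f(0.5 * (x + x_new)) + sigma * dw
    return x_new
```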
