
We study a class of McKean--Vlasov Stochastic Differential Equations (MV-SDEs) whose drifts and diffusions have super-linear growth in measure and space: the maps have a general polynomial form but also satisfy a certain monotonicity condition. The combination of the drift's super-linear growth in measure (by way of a convolution) and the super-linear growth in space and measure of the diffusion coefficient requires novel technical elements in order to obtain the main results. We establish well-posedness and propagation of chaos (PoC), and, under further assumptions on the model parameters, we show an exponential ergodicity property alongside the existence of an invariant distribution. No differentiability or non-degeneracy conditions are required. Further, we present a particle-system-based Euler-type split-step scheme (SSM) for the simulation of this type of MV-SDE. The scheme attains, in stepsize, the strong error rate $1/2$ in the non-path-space root-mean-square error metric, and we demonstrate a mean-square contraction property. Our results are illustrated by numerical examples including: estimation of PoC rates across dimensions, preservation of periodic phase-space, and the observation that taming appears not to be a suitable method unless strong dissipativity is present.
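
As a hedged illustration (the notation below is generic and not taken from the paper), a split-step Euler scheme for an interacting particle approximation of an MV-SDE is often written as an implicit drift step followed by an explicit diffusion step:
$$ Y^{i}_{n} = X^{i}_{n} + h\, b\big(Y^{i}_{n}, \mu^{X,N}_{n}\big), \qquad X^{i}_{n+1} = Y^{i}_{n} + \sigma\big(Y^{i}_{n}, \mu^{X,N}_{n}\big)\, \Delta W^{i}_{n}, \qquad \mu^{X,N}_{n} = \frac{1}{N}\sum_{j=1}^{N}\delta_{X^{j}_{n}}, $$
where $h$ is the stepsize, $N$ the number of particles, $b$ and $\sigma$ the drift and diffusion coefficients, and $\Delta W^{i}_{n}$ independent Brownian increments. The implicit step is what allows a super-linear drift to be handled stably without taming; whether the measure argument is evaluated at the $X$-particles or at the intermediate values is a design choice this sketch does not pin down.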

Related content

Without a human writing a single line of code, an example Monte Carlo simulation application for stochastic dependence modeling with copulas is developed using a state-of-the-art large language model (LLM) fine-tuned for conversations. This includes interaction with ChatGPT in natural language and in mathematical formalism, which, under careful supervision by a human expert, led to working code in MATLAB, Python and R for sampling from a given copula model, evaluating the model's density, performing maximum likelihood estimation, optimizing the code for parallel computing on CPUs as well as GPUs, and visualizing the computed results. In contrast to other emerging studies that assess the accuracy of LLMs like ChatGPT on tasks from a selected area, this work investigates how to achieve a successful solution of a standard statistical task through collaboration between a human expert and artificial intelligence (AI). In particular, through careful prompt engineering, we separate successful solutions generated by ChatGPT from unsuccessful ones, resulting in a comprehensive list of related pros and cons. It is demonstrated that if the typical pitfalls are avoided, we can substantially benefit from collaborating with an AI partner. For example, we show that if ChatGPT is not able to provide a correct solution due to missing or incorrect knowledge, the human expert can feed it the correct knowledge, e.g., in the form of mathematical theorems and formulas, and make it apply the gained knowledge to provide a correct solution. Such ability presents an attractive opportunity to achieve a programmed solution even for users with rather limited knowledge of programming techniques.
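
The statistical task itself is easy to illustrate. The sketch below is not the code generated in the study; it is a minimal Python example, under the assumption of a bivariate Gaussian copula with correlation parameter rho, of two of the listed steps: sampling from the copula and maximum likelihood estimation of its parameter.

```python
import numpy as np
from scipy import optimize, stats

def sample_gaussian_copula(n, rho, seed=None):
    """Draw n samples from a bivariate Gaussian copula with correlation rho."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(np.zeros(2), cov, size=n)
    return stats.norm.cdf(z)                       # map to uniform margins on [0, 1]

def neg_log_lik(rho, u):
    """Negative log-likelihood of the bivariate Gaussian copula density."""
    z = stats.norm.ppf(u)
    det = 1.0 - rho ** 2
    quad = (rho ** 2 * (z[:, 0] ** 2 + z[:, 1] ** 2)
            - 2.0 * rho * z[:, 0] * z[:, 1]) / (2.0 * det)
    return np.sum(0.5 * np.log(det) + quad)

u = sample_gaussian_copula(5_000, rho=0.6, seed=42)
fit = optimize.minimize_scalar(neg_log_lik, bounds=(-0.99, 0.99),
                               args=(u,), method="bounded")
print("maximum likelihood estimate of rho:", round(fit.x, 3))
```

The same two functions are the natural starting point for the other steps mentioned in the abstract (density evaluation is the exponential of the negative of `neg_log_lik` per observation, and the sampler vectorizes trivially for parallel execution).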

Classical model order reduction (MOR) for parametric problems may become computationally inefficient due to the large size of the required projection bases, especially for problems with slowly decaying Kolmogorov n-widths. Additionally, the Hamiltonian structure of a dynamical system may be available and should be preserved during the reduction. In the current presentation, we address these two aspects by proposing a corresponding dictionary-based, online-adaptive MOR approach. The method requires dictionaries for the state variable, the non-linearities, and the discrete empirical interpolation (DEIM) points. During the online simulation, local basis extensions/simplifications are performed in an online-efficient way, i.e. the runtime complexity of basis modifications and of the online simulation of the reduced models does not depend on the full state dimension. Experiments on a linear wave equation and a non-linear Sine-Gordon example demonstrate the efficiency of the approach.
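
For context, the sketch below shows the standard greedy DEIM point-selection algorithm in Python (not the paper's dictionary-based, online-adaptive variant); the basis matrix and its size are illustrative assumptions.

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM interpolation-point selection for a column basis U (n x m)."""
    n, m = U.shape
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, m):
        # coefficients that make the current basis match the new column at the chosen points
        c = np.linalg.solve(U[idx, :j], U[idx, j])
        r = U[:, j] - U[:, :j] @ c               # residual of the new basis vector
        idx.append(int(np.argmax(np.abs(r))))    # next point: largest residual entry
    return np.array(idx)

# toy usage on a random orthonormal basis
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((200, 10)))
print(deim_indices(U))
```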

A block Markov chain is a Markov chain whose state space can be partitioned into a finite number of clusters such that the transition probabilities only depend on the clusters. Block Markov chains thus serve as a model for Markov chains with communities. This paper establishes limiting laws for the singular value distributions of the empirical transition matrix and empirical frequency matrix associated to a sample path of the block Markov chain whenever the length of the sample path is $\Theta(n^2)$ with $n$ the size of the state space. The proof approach is split into two parts. First, we introduce a class of symmetric random matrices with dependent entries called approximately uncorrelated random matrices with variance profile. We establish their limiting eigenvalue distributions by means of the moment method. Second, we develop a coupling argument to show that this general-purpose result applies to the singular value distributions associated with the block Markov chain.
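
A hedged numerical illustration of the setting (the cluster structure, transition matrix, and sizes below are arbitrary choices, not taken from the paper): simulate a block Markov chain, form the empirical frequency matrix from a sample path of length of order $n^2$, and inspect its singular values.

```python
import numpy as np

def block_markov_path(n, K, T, seed=None):
    """Sample path of a block Markov chain with n states in K equal-size clusters;
    transition probabilities depend only on the clusters."""
    rng = np.random.default_rng(seed)
    labels = np.arange(n) % K                      # cluster of each state
    P = rng.dirichlet(np.ones(K), size=K)          # K x K cluster-level transition matrix
    x = int(rng.integers(n))
    path = [x]
    for _ in range(T - 1):
        k = rng.choice(K, p=P[labels[x]])                   # pick the next cluster
        x = int(rng.choice(np.flatnonzero(labels == k)))    # uniform state within it
        path.append(x)
    return np.array(path)

n = 200
path = block_markov_path(n, K=3, T=n * n)          # path length of order n^2
N_hat = np.zeros((n, n))
np.add.at(N_hat, (path[:-1], path[1:]), 1.0)       # empirical frequency matrix
singular_values = np.linalg.svd(N_hat / (len(path) - 1), compute_uv=False)
print(singular_values[:5])
```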

In tug-of-war, two players compete by moving a counter along edges of a graph, each winning the right to move at a given turn according to the flip of a possibly biased coin. The game ends when the counter reaches the boundary, a fixed subset of the vertices, at which point one player pays the other an amount determined by the boundary vertex. Economists and mathematicians have independently studied tug-of-war for many years, focussing respectively on resource-allocation forms of the game, in which players iteratively spend precious budgets in an effort to influence the bias of the coins that determine the turn victors; and on PDE arising in fine mesh limits of the constant-bias game in a Euclidean setting. In this article, we offer a mathematical treatment of a class of tug-of-war games with allocated budgets: each player is initially given a fixed budget which she draws on throughout the game to offer a stake at the start of each turn, and her probability of winning the turn is the ratio of her stake and the sum of the two stakes. We consider the game played on a tree, with boundary being the set of leaves, and the payment function being the indicator of a single distinguished leaf. We find the game value and the essentially unique Nash equilibrium of a leisurely version of the game, in which the move at any given turn is cancelled with constant probability after stakes have been placed. We show that the ratio of the players' remaining budgets is maintained at its initial value $\lambda$; game value is a biased infinity harmonic function; and the proportion of remaining budget that players stake at a given turn is given in terms of the spatial gradient and the $\lambda$-derivative of game value. We also indicate examples in which the solution takes a different form in the non-leisurely game.
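
For orientation, one common way to write a $\lambda$-biased infinity harmonic function $h$ (the authors' precise definition on trees with a distinguished leaf may differ) is as the solution, at every internal vertex $v$ with neighbours $w$, of
$$ h(v) \;=\; \frac{\lambda\,\max_{w \sim v} h(w) \;+\; \min_{w \sim v} h(w)}{1+\lambda}, $$
subject to the prescribed boundary values at the leaves; when $\lambda = 1$ this reduces to the usual (unbiased) infinity harmonic equation of constant-bias tug-of-war.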

We investigate the linear stability analysis of a pathway-based diffusion model (PBDM), which characterizes the dynamics of engineered Escherichia coli populations [X. Xue, C. Xue, and M. Tang, PLoS Computational Biology, 14 (2018), e1006178]. This stability analysis considers small perturbations of the density and chemical concentration around two non-trivial steady states, and the linearized equations are transformed into a generalized eigenvalue problem. By formal analysis, when the internal variable responds to the outside signal fast enough, the PBDM converges to an anisotropic diffusion model, for which the probability density distribution in the internal variable becomes a delta function. We introduce an asymptotic preserving (AP) scheme for the PBDM that converges to a stable limit scheme consistent with the anisotropic diffusion model. Further numerical simulations demonstrate the theoretical results of the linear stability analysis, i.e., pattern formation, and the convergence of the AP scheme.
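
The final reduction step is a standard computation. The sketch below is a toy Python example (the matrices are placeholders, not the linearized PBDM): for perturbations proportional to $e^{\omega t}$, stability is decided by the sign of the largest real part among the generalized eigenvalues $\omega$ of $A v = \omega B v$.

```python
import numpy as np
from scipy.linalg import eig

def max_growth_rate(A, B):
    """Largest real part among the generalized eigenvalues omega of A v = omega B v,
    i.e. the fastest growth rate of perturbations proportional to exp(omega * t)."""
    omega = eig(A, B, right=False)
    omega = omega[np.isfinite(omega)]            # discard infinite eigenvalues
    return float(np.max(omega.real))

# toy matrices (not the PBDM): a damped system and one with a growing mode
B = np.eye(2)
A_stable = np.array([[-1.0, 0.5], [0.0, -2.0]])
A_unstable = np.array([[0.3, 0.0], [1.0, -1.0]])
print(max_growth_rate(A_stable, B) < 0, max_growth_rate(A_unstable, B) < 0)
```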

To improve how neural networks function, it is crucial to understand their learning process. The information bottleneck theory of deep learning proposes that neural networks achieve good generalization by compressing their representations to disregard information that is not relevant to the task. However, empirical evidence for this theory is conflicting, as compression was only observed when networks used saturating activation functions; networks with non-saturating activation functions achieved comparable levels of task performance but did not show compression. In this paper we develop more robust mutual information estimation techniques that adapt to the hidden activity of neural networks and produce more sensitive measurements of activations from all functions, especially unbounded ones. Using these adaptive estimation techniques, we explore compression in networks with a range of different activation functions. With two improved estimation methods, we first show that saturation of the activation function is not required for compression, and that the amount of compression varies between different activation functions. We also find a large amount of variation in compression between different network initializations. Second, we see that L2 regularization leads to significantly increased compression while preventing overfitting. Finally, we show that only compression of the last layer is positively correlated with generalization.
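
As a hedged illustration of the kind of estimator involved (a simple quantile-binned plug-in estimate, not the authors' technique), the sketch below adapts its bins to the empirical distribution of the activations, so unbounded activities do not require a fixed range to be chosen in advance.

```python
import numpy as np

def adaptive_mi(x, y, n_bins=30):
    """Plug-in mutual information estimate (in nats) with equal-frequency bins."""
    qx = np.quantile(x, np.linspace(0, 1, n_bins + 1))     # data-driven bin edges
    qy = np.quantile(y, np.linspace(0, 1, n_bins + 1))
    ix = np.clip(np.searchsorted(qx, x, side="right") - 1, 0, n_bins - 1)
    iy = np.clip(np.searchsorted(qy, y, side="right") - 1, 0, n_bins - 1)
    joint = np.zeros((n_bins, n_bins))
    np.add.at(joint, (ix, iy), 1.0)                        # joint histogram
    p = joint / joint.sum()
    px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

# toy usage: unbounded, correlated "activations"
rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)
y = x + 0.5 * rng.standard_normal(10_000)
print(adaptive_mi(x, y))
```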

Existing conversational models rely on database (DB) and API based systems. However, users' questions very often require information that cannot be handled by such systems. Nonetheless, answers to these questions are available in the form of customer reviews and FAQs. DSTC-11 proposes a three-stage pipeline consisting of knowledge-seeking turn detection, knowledge selection and response generation to create a conversational model grounded on this subjective knowledge. In this paper, we focus on improving the knowledge selection module to enhance the overall system performance. In particular, we propose entity retrieval methods which result in an accurate and faster knowledge search. Our proposed Named Entity Recognition (NER) based entity retrieval method results in a 7X faster search compared to the baseline model. Additionally, we also explore a potential keyword extraction method which can improve the accuracy of knowledge selection. Preliminary results show a 4% improvement in exact match score on the knowledge selection task. The code is available at //github.com/raja-kumar/knowledge-grounded-TODS.
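
The speed-up from entity retrieval is easy to see in a sketch: indexing the knowledge snippets by entity turns knowledge search into a hash lookup on the entities found in the user's turn. The Python below is illustrative only, not the released implementation; the data, the `extract_entities` tagger, and all names are assumptions.

```python
from collections import defaultdict

def build_entity_index(knowledge_base):
    """Map each entity name to the FAQ/review snippets that mention it."""
    index = defaultdict(list)
    for doc in knowledge_base:
        index[doc["entity"].lower()].append(doc)
    return index

def retrieve(utterance, index, extract_entities):
    """Look up knowledge by the entities found in the user's turn.
    `extract_entities` stands in for any NER tagger (e.g. a spaCy pipeline)."""
    hits = []
    for ent in extract_entities(utterance):
        hits.extend(index.get(ent.lower(), []))
    return hits

# hypothetical toy knowledge base and a naive stand-in for a real NER model
kb = [{"entity": "Hilton Bath City", "text": "Q: Is parking available? A: Yes, on site."},
      {"entity": "A and B Guest House", "text": "Review: quiet rooms, great breakfast."}]
index = build_entity_index(kb)
naive_ner = lambda s: [e for e in index if e in s.lower()]
print(retrieve("Does the Hilton Bath City have parking?", index, naive_ner))
```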

When is heterogeneity in the composition of an autonomous robotic team beneficial and when is it detrimental? We investigate and answer this question in the context of a minimally viable model that examines the role of heterogeneous speeds in perimeter defense problems, where defenders share a total allocated speed budget. We consider two distinct problem settings and develop strategies based on dynamic programming and on local interaction rules. We present a theoretical analysis of both approaches and our results are extensively validated using simulations. Interestingly, our results demonstrate that the viability of heterogeneous teams depends on the amount of information available to the defenders. Moreover, our results suggest a universality property: across a wide range of problem parameters the optimal ratio of the speeds of the defenders remains nearly constant.

Object detectors usually achieve promising results with the supervision of complete instance annotations. However, their performance is far from satisfactory with sparse instance annotations. Most existing methods for sparsely annotated object detection either re-weight the loss of hard negative samples or convert the unlabeled instances into ignored regions to reduce the interference of false negatives. We argue that these strategies are insufficient since they can at most alleviate the negative effect caused by missing annotations. In this paper, we propose a simple but effective mechanism, called Co-mining, for sparsely annotated object detection. In our Co-mining, the two branches of a Siamese network predict pseudo-label sets for each other. To enhance multi-view learning and better mine unlabeled instances, the original image and the corresponding augmented image are used as the inputs of the two branches of the Siamese network, respectively. Co-mining can serve as a general training mechanism applicable to most modern object detectors. Experiments are performed on the MS COCO dataset with three different sparsely annotated settings using two typical frameworks: the anchor-based detector RetinaNet and the anchor-free detector FCOS. Experimental results show that our Co-mining with RetinaNet achieves 1.4%~2.1% improvements compared with different baselines and surpasses existing methods under the same sparsely annotated setting.

To address the sparsity and cold start problem of collaborative filtering, researchers usually make use of side information, such as social networks or item attributes, to improve recommendation performance. This paper considers the knowledge graph as the source of side information. To address the limitations of existing embedding-based and path-based methods for knowledge-graph-aware recommendation, we propose Ripple Network, an end-to-end framework that naturally incorporates the knowledge graph into recommender systems. Similar to actual ripples propagating on the surface of water, Ripple Network stimulates the propagation of user preferences over the set of knowledge entities by automatically and iteratively extending a user's potential interests along links in the knowledge graph. The multiple "ripples" activated by a user's historically clicked items are thus superposed to form the preference distribution of the user with respect to a candidate item, which could be used for predicting the final clicking probability. Through extensive experiments on real-world datasets, we demonstrate that Ripple Network achieves substantial gains in a variety of scenarios, including movie, book and news recommendation, over several state-of-the-art baselines.
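
A simplified sketch of the propagation-and-superposition idea in plain numpy (the attention and scoring forms below are illustrative assumptions, not necessarily the exact model): starting from the entities of a user's clicked items, each hop follows knowledge-graph triples outward, weights the tail entities by their relevance to the candidate item, and accumulates them into a preference vector. Here `ent_emb` is assumed to map entities to d-dimensional vectors and `rel_emb` to map relations to d x d matrices.

```python
import numpy as np

def ripple_score(item_vec, seed_entities, kg_triples, ent_emb, rel_emb, hops=2):
    """Propagate interest from clicked-item entities along KG triples for a few
    hops, superpose the hop responses, and score a candidate item."""
    heads = set(seed_entities)
    user_vec = np.zeros_like(item_vec)
    for _ in range(hops):
        hop = [(h, r, t) for (h, r, t) in kg_triples if h in heads]
        if not hop:
            break
        # relevance of each (head, relation) pair to the candidate item
        logits = np.array([item_vec @ rel_emb[r] @ ent_emb[h] for h, r, _ in hop])
        weights = np.exp(logits - logits.max())
        weights /= weights.sum()
        # weighted sum of tail-entity embeddings = this hop's response
        user_vec += sum(w * ent_emb[t] for w, (_, _, t) in zip(weights, hop))
        heads = {t for _, _, t in hop}           # the next ripple starts from the tails
    return 1.0 / (1.0 + np.exp(-(item_vec @ user_vec)))   # predicted click probability
```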
