
We characterize the Schr\"odinger bridge problems by a family of McKean-Vlasov stochastic control problems with no terminal time distribution constraint. In doing so, we use the theory of Hilbert space embeddings of probability measures and describe the constraint as penalty terms, defined by the maximum mean discrepancy, in the control problems. A sequence of the probability laws of the state processes resulting from $\epsilon$-optimal controls converges to the unique solution of Schr\"odinger's problem under mild conditions on the given initial and terminal time distributions and the underlying diffusion process. We propose a neural-SDE-based deep learning algorithm for the McKean-Vlasov stochastic control problems. Several numerical experiments validate our methods.
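
To make the penalty concrete: the terminal constraint is relaxed into a penalty given by the (squared) maximum mean discrepancy (MMD) between the law of the terminal state and the target distribution. The snippet below is a minimal NumPy sketch of an empirical MMD penalty with a Gaussian kernel; the function names, the bandwidth choice, and the way such a penalty would enter a particular control objective are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def gaussian_kernel(x, y, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between samples of shape (n, d) and (m, d)."""
    sq_dists = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def mmd_penalty(terminal_samples, target_samples, bandwidth=1.0):
    """Biased empirical estimate of the squared maximum mean discrepancy
    between the empirical law of the terminal state and the target distribution."""
    k_xx = gaussian_kernel(terminal_samples, terminal_samples, bandwidth)
    k_yy = gaussian_kernel(target_samples, target_samples, bandwidth)
    k_xy = gaussian_kernel(terminal_samples, target_samples, bandwidth)
    return k_xx.mean() + k_yy.mean() - 2.0 * k_xy.mean()

# Toy usage: terminal states of simulated controlled paths vs. draws from the target law.
x_T = np.random.randn(512, 1) * 0.8 + 0.5   # stand-in for simulated terminal states
y = np.random.randn(512, 1)                 # stand-in for target-distribution samples
print(mmd_penalty(x_T, y))
```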

Related content

Neural networks with quadratic decision functions have been introduced as alternatives to standard neural networks with affine linear ones. They are advantageous when the objects to be identified have compact basic geometries such as circles or ellipses. In this paper we investigate the use of such ansatz functions for classification. In particular, we test and compare the algorithm on the MNIST dataset for the classification of handwritten digits and for the classification of subspecies. We also show that the implementation can be based on the neural network structures of the software frameworks TensorFlow and Keras, respectively.
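
As a rough illustration of a quadratic decision function in this setting, the custom Keras layer below computes $x^\top A_u x + b_u^\top x + c_u$ for each output unit $u$; the layer name, the parameterization, and the toy MNIST-sized classifier are assumptions made for this sketch, not the authors' implementation.

```python
import tensorflow as tf

class QuadraticDecision(tf.keras.layers.Layer):
    """Per-unit quadratic decision function x^T A x + b^T x + c (illustrative sketch)."""

    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        d = int(input_shape[-1])
        # One full quadratic form per output unit.
        self.A = self.add_weight(shape=(self.units, d, d), initializer="glorot_uniform", name="A")
        self.b = self.add_weight(shape=(d, self.units), initializer="glorot_uniform", name="b")
        self.c = self.add_weight(shape=(self.units,), initializer="zeros", name="c")

    def call(self, x):
        quad = tf.einsum("ni,uij,nj->nu", x, self.A, x)   # batched x^T A_u x
        lin = tf.matmul(x, self.b)                        # batched b_u^T x
        return quad + lin + self.c

# Example: a small classifier for flattened 28x28 MNIST images.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(784,)),
    QuadraticDecision(10),
    tf.keras.layers.Softmax(),
])
```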

Multivariate spatio-temporal models are widely applicable, but specifying their structure is complicated and may inhibit wider use. We introduce the R package tinyVAST from two viewpoints: the software user and the statistician. From the user viewpoint, tinyVAST adapts a widely used formula interface to specify generalized additive models, and combines this with arguments to specify spatial and spatio-temporal interactions among variables. These interactions are specified using arrow notation (from structural equation models), or an extended arrow-and-lag notation that allows simultaneous, lagged, and recursive dependencies among variables over time. The user also specifies a spatial domain for areal (gridded), continuous (point-count), or stream-network data. From the statistician viewpoint, tinyVAST constructs sparse precision matrices representing multivariate spatio-temporal variation, and parameters are estimated by specifying a generalized linear mixed model (GLMM). This expressive interface encompasses vector autoregressive, empirical orthogonal function, spatial factor analysis, and ARIMA models. To demonstrate, we fit models to data from two survey platforms sampling corals, sponges, rockfishes, and flatfishes in the Gulf of Alaska and the Aleutian Islands. We then compare eight alternative model structures using different assumptions about habitat drivers and survey detectability. Model selection suggests that towed-camera and bottom-trawl gears have spatial variation in detectability but sample the same underlying density of flatfishes and rockfishes, and that rockfishes are positively associated with sponges while flatfishes are negatively associated with corals. We conclude that tinyVAST can be used to test complicated dependencies representing alternative structural assumptions for research and real-world policy evaluation.

We generalize the Poisson limit theorem to binary functions of random objects whose law is invariant under the action of an amenable group. Examples include stationary random fields, exchangeable sequences, and exchangeable graphs. A celebrated result of E. Lindenstrauss shows that normalized sums over certain increasing subsets of such groups approximate expectations. Our results clarify that the corresponding unnormalized sums of binary statistics are asymptotically Poisson, provided suitable mixing conditions hold. They extend further to randomly subsampled sums and also show that strict invariance of the distribution is not needed if the requisite mixing condition defined by the group holds. We illustrate the results with applications to random fields, Cayley graphs, and Poisson processes on groups.
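
As a purely classical sanity check of the phenomenon being generalized, the simulation below counts rare threshold exceedances of a short-range dependent stationary Gaussian sequence and compares the empirical count distribution with a Poisson law of the same mean; the field construction, threshold, and sample sizes are arbitrary choices for illustration and are not taken from the paper.

```python
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(0)

def stationary_field(n, window=3):
    """Stationary 1-D Gaussian field: moving average of i.i.d. normals (short-range dependence)."""
    z = rng.standard_normal(n + window - 1)
    return np.convolve(z, np.ones(window) / window, mode="valid")

n, threshold, trials = 20_000, 2.0, 2_000
# Unnormalized sums of rare binary statistics (threshold exceedances) over many realizations.
counts = np.array([int(np.sum(stationary_field(n) > threshold)) for _ in range(trials)])

lam = counts.mean()
for k in range(8):
    emp = np.mean(counts == k)
    poi = exp(-lam) * lam ** k / factorial(k)
    print(f"k={k}: empirical {emp:.3f} vs Poisson({lam:.2f}) {poi:.3f}")
```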

While deep neural networks have achieved remarkable performance, data augmentation has emerged as a crucial strategy to mitigate overfitting and enhance network performance. These techniques are particularly significant in industrial manufacturing contexts. Recently, image mixing-based methods have been introduced, exhibiting improved performance on public benchmark datasets. However, their application to industrial tasks remains challenging. Manufacturing environments generate massive amounts of unlabeled data daily, with only a few instances of abnormal data, leading to severe data imbalance. Creating well-balanced datasets is therefore not straightforward due to the high cost of labeling, yet it is a crucial step for enhancing productivity. For this reason, we introduce ContextMix, a method tailored for industrial applications and benchmark datasets. ContextMix generates novel data by resizing entire images and integrating them into other images within the batch. This approach enables our method to learn discriminative features from the varying sizes of resized images and to train informative secondary features for object recognition using occluded images. With the minimal additional computation cost of image resizing, ContextMix improves performance compared to existing augmentation techniques. We evaluate its effectiveness across classification, detection, and segmentation tasks using various network architectures on public benchmark datasets. Our proposed method demonstrates improved results across a range of robustness tasks. Its efficacy in real industrial environments is particularly noteworthy, as demonstrated using the passive component dataset.
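
The batch-level mixing idea can be sketched as follows: shrink each image of a shuffled copy of the batch and paste it into another image, mixing labels by pasted area. The helper name `context_mix_batch`, the label-mixing rule, and all parameter choices below are assumptions for illustration, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def context_mix_batch(images, labels, num_classes, scale_range=(0.3, 0.6)):
    """Illustrative batch-level mixing: shrink each image of a shuffled copy of the
    batch and paste it at a random location of another image. Labels are mixed in
    proportion to the pasted area (an assumption, not necessarily the paper's rule)."""
    b, _, h, w = images.shape
    perm = torch.randperm(b)
    scale = float(torch.empty(1).uniform_(*scale_range))
    nh, nw = int(h * scale), int(w * scale)

    small = F.interpolate(images[perm], size=(nh, nw), mode="bilinear", align_corners=False)
    top = torch.randint(0, h - nh + 1, (1,)).item()
    left = torch.randint(0, w - nw + 1, (1,)).item()

    mixed = images.clone()
    mixed[:, :, top:top + nh, left:left + nw] = small

    area_ratio = (nh * nw) / (h * w)
    one_hot = F.one_hot(labels, num_classes).float()
    mixed_labels = (1 - area_ratio) * one_hot + area_ratio * one_hot[perm]
    return mixed, mixed_labels

# Example usage with a dummy batch of 8 RGB images and 10 classes.
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, 10, (8,))
mx, my = context_mix_batch(x, y, num_classes=10)
```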

In biomedical studies, we are often interested in the association between different types of covariates and the times to disease events. Because the relationship between the covariates and event times is often complex, standard survival models that assume a linear covariate effect are inadequate. A flexible class of models for capturing complex interaction effects between types of covariates is the class of varying coefficient models, in which the effects of one type of covariate can be modified by another type of covariate. In this paper, we study kernel-based estimation methods for varying coefficient additive hazards models. Unlike many existing kernel-based methods that use a local neighborhood of subjects to estimate the varying coefficient function, we propose a novel global approach that is generally more efficient. We establish theoretical properties of the proposed estimators and demonstrate their superior performance compared with existing local methods through large-scale simulation studies. To illustrate the proposed method, we provide an application to a motivating cancer genomic study.
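
For concreteness, a varying coefficient additive hazards model of the kind described can be written in the following generic form (our notation, with covariates $X$ whose effects are modified by an index covariate $W$, not necessarily the paper's exact specification):
\[
\lambda(t \mid X, W) = \lambda_0(t) + \beta(W)^\top X,
\]
where $\lambda_0(\cdot)$ is an unspecified baseline hazard and $\beta(\cdot)$ is an unknown smooth coefficient function to be estimated with kernel smoothing in $W$.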

In recent years, the Adaptive Antoulas-Anderson (AAA) algorithm has established itself as the method of choice for solving rational approximation problems. Data-driven Model Order Reduction (MOR) of large-scale Linear Time-Invariant (LTI) systems is one of the many applications in which this algorithm has proven successful, since it typically generates reduced-order models (ROMs) efficiently and in an automated way. Despite its effectiveness and numerical reliability, the classical AAA algorithm is not guaranteed to return a ROM that retains the same structural features as the underlying dynamical system, such as the stability of the dynamics. In this paper, we propose a novel algebraic characterization of the stability of ROMs whose transfer function obeys the AAA barycentric structure. We use this characterization to formulate a set of convex constraints on the free coefficients of the AAA model that, whenever satisfied, guarantee by construction the asymptotic stability of the resulting ROM. We suggest how to embed such constraints within the AAA optimization routine, and we experimentally validate the effectiveness of the resulting algorithm, named stabAAA, on a set of relevant MOR applications.
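
For reference, an AAA-type ROM has a transfer function in barycentric form: with support points $\lambda_k$, data values $h_k$, and weights $w_k$ (our notation),
\[
r(s) = \frac{\displaystyle\sum_{k=1}^{m} \frac{w_k\, h_k}{s - \lambda_k}}{\displaystyle\sum_{k=1}^{m} \frac{w_k}{s - \lambda_k}},
\]
so the poles of $r$ are the zeros of the denominator and depend jointly on the support points and the weights, which is why asymptotic stability becomes a nontrivial constraint on the $w_k$.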

We initiate the study of Boolean function analysis on high-dimensional expanders. We give a random-walk based definition of high-dimensional expansion, which coincides with the earlier definition in terms of two-sided link expanders. Using this definition, we describe an analog of the Fourier expansion and the Fourier levels of the Boolean hypercube for simplicial complexes. Our analog is a decomposition into approximate eigenspaces of random walks associated with the simplicial complexes. Our random-walk definition and the decomposition have the additional advantage that they extend to the more general setting of posets, encompassing both high-dimensional expanders and the Grassmann poset, which appears in recent work on the unique games conjecture. We then use this decomposition to extend the Friedgut-Kalai-Naor theorem to high-dimensional expanders. Our results demonstrate that a constant-degree high-dimensional expander can sometimes serve as a sparse model for the Boolean slice or hypercube, and quite possibly additional results from Boolean function analysis can be carried over to this sparse model. Therefore, this model can be viewed as a derandomization of the Boolean slice, containing only $|X(k-1)|=O(n)$ points in contrast to $\binom{n}{k}$ points in the $k$-slice (which consists of all $n$-bit strings with exactly $k$ ones).

Open sets are central to mathematics, especially analysis and topology, in ways few notions are. In most, if not all, computational approaches to mathematics, open sets are only studied indirectly via their 'codes' or 'representations'. In this paper, we study how hard it is to compute, given an arbitrary open set of reals, the most common representation, i.e. a countable set of open intervals. We work in Kleene's higher-order computability theory, which was historically based on the S1-S9 schemes and which now has an intuitive lambda calculus formulation due to the authors. We establish many computational equivalences between, on one hand, the 'structure' functional that converts open sets to the aforementioned representation, and, on the other hand, functionals arising from mainstream mathematics, like basic properties of semi-continuous functions, the Urysohn lemma, and the Tietze extension theorem. We also compare these functionals to known operations on regulated and bounded variation functions, and to the Lebesgue measure restricted to closed sets. We obtain a number of natural computational equivalences for the latter involving theorems from mainstream mathematics.

We tackle the challenging task of monitoring, on unstable HPC platforms, the performance of CFD applications throughout their development. We have designed and implemented a monitoring framework integrated at the end of a CI/CD pipeline. Measurements retrieved during the automatic execution of production simulations are analyzed within a visual analytics interface we developed, which provides advanced visualizations and interaction. We validated this approach by monitoring the CFD code Alya over two years, detecting and resolving issues related to the platform, and highlighting performance improvements.

In recent years, object detection has experienced impressive progress. Despite these improvements, there is still a significant gap between the performance on small and large objects. We analyze the current state-of-the-art model, Mask R-CNN, on a challenging dataset, MS COCO. We show that the overlap between small ground-truth objects and the predicted anchors is much lower than the expected IoU threshold. We conjecture this is due to two factors: (1) only a few images contain small objects, and (2) small objects do not appear often enough even within the images that contain them. We therefore propose to oversample images with small objects and to augment each of those images by copy-pasting small objects many times. This allows us to trade off the quality of the detector on large objects against that on small objects. We evaluate different pasting augmentation strategies and ultimately achieve a 9.7\% relative improvement on instance segmentation and 7.1\% on object detection of small objects, compared to the current state-of-the-art method on MS COCO.
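
The copy-paste step can be sketched in a few lines of NumPy; the helper `paste_small_objects`, its area threshold, and the number of copies below are illustrative assumptions rather than the exact augmentation pipeline used in the paper.

```python
import numpy as np

def paste_small_objects(image, masks, max_area_frac=0.01, copies=3, rng=None):
    """Illustrative copy-paste augmentation: duplicate small object instances
    (boolean masks covering at most `max_area_frac` of the image) at random
    locations. `image` is (h, w, 3); `masks` is a list of (h, w) boolean arrays.
    Returns the augmented image and the extended list of instance masks."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = image.shape[:2]
    out = image.copy()
    new_masks = []
    for mask in masks:
        if mask.sum() > max_area_frac * h * w:
            continue  # only small objects are duplicated
        ys, xs = np.where(mask)
        y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
        patch, patch_mask = image[y0:y1, x0:x1], mask[y0:y1, x0:x1]
        ph, pw = patch_mask.shape
        for _ in range(copies):
            ty = rng.integers(0, h - ph + 1)
            tx = rng.integers(0, w - pw + 1)
            region = out[ty:ty + ph, tx:tx + pw]
            region[patch_mask] = patch[patch_mask]  # paste only the object pixels
            new_mask = np.zeros((h, w), dtype=bool)
            new_mask[ty:ty + ph, tx:tx + pw] = patch_mask
            new_masks.append(new_mask)
    return out, list(masks) + new_masks
```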
