
Sampling from generative models has become a crucial tool for applications such as data synthesis and augmentation. Diffusion, Flow Matching, and Continuous Normalizing Flows have shown effectiveness across various modalities, and all rely on Gaussian latent variables for generation. For search-based or creative applications that require additional control over the generation process, it has become common to manipulate the latent variable directly. However, existing approaches for performing such manipulations (e.g. interpolation or forming low-dimensional representations) only work well in special cases or are specific to a network or data modality. We propose Combination of Gaussian variables (COG), a general-purpose interpolation method that is easy to implement yet outperforms recent, more sophisticated methods. Moreover, COG naturally addresses the broader task of forming general linear combinations of latent variables, allowing the construction of subspaces of the latent space and dramatically simplifying the creation of expressive low-dimensional spaces of high-dimensional objects.
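
The abstract gives no code, but the core idea admits a compact illustration. Below is a minimal numpy sketch of the variance-preserving linear combination we take COG to describe: a weighted sum of independent standard Gaussian latents is again Gaussian, so normalizing the weights to unit squared norm keeps the combination distributed like the model's training latents. The function name `cog` and the latent dimension are our own placeholders, not the paper's.

```python
import numpy as np

def cog(latents, weights):
    """Combine standard Gaussian latent vectors so the result is again
    (approximately) standard Gaussian.

    A weighted sum w1*z1 + ... + wk*zk of independent N(0, I) vectors
    is Gaussian with variance sum(w_i^2), so rescaling the weights to
    unit squared norm keeps the combination on-distribution.
    """
    w = np.asarray(weights, dtype=float)
    w = w / np.linalg.norm(w)          # enforce sum of squared weights = 1
    return sum(wi * z for wi, z in zip(w, latents))

# Interpolation between two latents is the two-term special case:
rng = np.random.default_rng(0)
z1, z2 = rng.standard_normal(512), rng.standard_normal(512)
z_mid = cog([z1, z2], [0.5, 0.5])      # variance-preserving midpoint
```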

Related content

The Segment Anything Model (SAM) and similar models form a family of promptable foundation models (FMs) for image and video segmentation. The object of interest is identified using prompts, such as bounding boxes or points. As these FMs become part of medical image segmentation, extensive evaluation studies are required to assess their strengths and weaknesses in clinical settings. Since performance is highly dependent on the chosen prompting strategy, it is important to investigate different prompting techniques to define guidelines that ensure effective use in medical image segmentation. Currently, no dedicated evaluation studies exist specifically for bone segmentation in CT scans, leaving a gap in understanding the performance for this task. Thus, we use non-iterative, ``optimal'' prompting strategies composed of bounding boxes, points, and their combinations to test the zero-shot capability of SAM-family models for bone CT segmentation on three different skeletal regions. Our results show that the best settings depend on the model type and size, dataset characteristics, and the objective to optimize. Overall, SAM and SAM2 prompted with a bounding box in combination with the center point of each component of an object yield the best results across all tested settings. As the results depend on multiple factors, we provide a guideline for informed decision-making in 2D prompting with non-iterative, ``optimal'' prompts.
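
As a concrete illustration of the best-performing strategy (bounding box plus center point), here is a hedged sketch using the public `segment_anything` API; the checkpoint path, placeholder image, and box coordinates are our own stand-ins, and the study's actual evaluation protocol may differ.

```python
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Placeholder checkpoint path; a real run would load an official SAM
# checkpoint and a windowed, RGB-converted CT slice.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")
predictor = SamPredictor(sam)
image = np.zeros((512, 512, 3), dtype=np.uint8)   # placeholder CT slice
predictor.set_image(image)

x0, y0, x1, y1 = 100, 120, 300, 360               # placeholder bone bbox
box = np.array([x0, y0, x1, y1])
center = np.array([[(x0 + x1) / 2, (y0 + y1) / 2]])
labels = np.array([1])                            # 1 = foreground point

masks, scores, _ = predictor.predict(
    point_coords=center, point_labels=labels,
    box=box, multimask_output=False)
```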

We address the problem of identifying functional interactions among stochastic neurons with variable-length memory from their spiking activity. The neuronal network is modeled by a stochastic system of interacting point processes with variable-length memory, in which each component chain describes the activity of a single neuron, indicating whether it spikes at a given time. One neuron's influence on another can be either excitatory or inhibitory. To identify the existence and nature of an interaction between a neuron and its postsynaptic counterpart, we propose a model selection procedure based on observing the spiking activity of a finite set of neurons over a finite time, built on the maximum likelihood estimator of the network model's synaptic weight matrix. We prove the consistency of this maximum likelihood estimator and, in turn, the consistency of the neighborhood interaction estimation procedure. The effectiveness of the proposed model selection procedure is demonstrated on simulated data, which validates the underlying theory. We also apply the method to spike train data recorded from hippocampal neurons in rats during a visual attention task, where a computational model reconstructs the spiking activity and the results reveal interesting and biologically relevant information.
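
The full model class (variable-length memory chains) is beyond a short snippet, but a drastically simplified one-step-memory analogue conveys the estimation idea: simulate a binary spiking network, fit the synaptic weight matrix by maximum likelihood (here, per-neuron logistic regression), and threshold the estimates to read off excitatory (+1) and inhibitory (-1) interactions. All constants below are illustrative, not the paper's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, T = 5, 20000
W = np.zeros((n, n))
W[0, 1], W[2, 3] = 1.5, -2.0     # one excitatory, one inhibitory synapse
b = -1.0 * np.ones(n)            # baseline firing propensity

# Simulate: neuron i spikes at time t with prob. sigmoid(X[t-1] @ W + b)[i]
X = np.zeros((T, n), dtype=int)
for t in range(1, T):
    p = 1.0 / (1.0 + np.exp(-(X[t - 1] @ W + b)))
    X[t] = rng.random(n) < p

# Maximum likelihood fit of incoming weights, one neuron at a time
W_hat = np.zeros((n, n))
for i in range(n):
    clf = LogisticRegression(penalty=None, max_iter=1000)
    clf.fit(X[:-1], X[1:, i])
    W_hat[:, i] = clf.coef_.ravel()

# Model selection by thresholding: +1 excitatory, -1 inhibitory, 0 none
graph = np.where(np.abs(W_hat) > 0.5, np.sign(W_hat), 0)
```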

Conditional independence and graphical models are crucial concepts for sparsity and statistical modeling in higher dimensions. For L\'evy processes, a widely applied class of stochastic processes, these notions have not been studied. By the L\'evy-It\^o decomposition, a multivariate L\'evy process can be decomposed into the sum of a Brownian motion part and an independent jump process. We show that conditional independence statements between the marginal processes can be studied separately for these two parts. While the Brownian part is well understood, we derive a novel characterization of conditional independence between the sample paths of the jump process in terms of the L\'evy measure. We define L\'evy graphical models as L\'evy processes that satisfy undirected or directed Markov properties. We prove that the graph structure is invariant under changes of the univariate marginal processes. L\'evy graphical models allow the construction of flexible, sparse dependence models for L\'evy processes in high dimensions, which are interpretable thanks to the underlying graph. For trees, we develop statistical methodology to learn the underlying structure from low- or high-frequency observations of the L\'evy process and show consistent graph recovery. We apply our method to model stock returns from U.S. companies to illustrate the advantages of our approach.
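
To make the L\'evy-It\^o decomposition the argument rests on concrete, here is a small simulation sketch (our own toy example, not the paper's methodology): a bivariate L\'evy path is generated as the sum of a correlated Brownian part and an independent compound-Poisson jump part, the two pieces whose conditional independence structure the paper shows can be analyzed separately.

```python
import numpy as np

rng = np.random.default_rng(2)
d, T, dt = 2, 1.0, 1e-3
n = int(T / dt)

# Brownian part: correlated across coordinates via a covariance matrix
Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
L_chol = np.linalg.cholesky(Sigma)
dB = rng.standard_normal((n, d)) @ L_chol.T * np.sqrt(dt)

# Jump part: compound Poisson with Gaussian jump sizes, here only
# affecting coordinate 0 (an illustrative choice of Levy measure)
rate = 5.0
jumps = rng.random(n) < rate * dt
dJ = np.zeros((n, d))
dJ[jumps, 0] = rng.normal(0.0, 1.0, jumps.sum())

X = np.cumsum(dB + dJ, axis=0)   # sample path of the Levy process
```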

Conditional simulation is a fundamental task in statistical modeling: generate samples from the conditionals of a joint distribution given finitely many data points from it. One promising approach is to construct conditional Brenier maps, where the components of the map push forward a reference distribution to the conditionals of the target. While many estimators exist, few, if any, come with statistical or algorithmic guarantees. To this end, we propose a non-parametric estimator for conditional Brenier maps based on the computational scalability of \emph{entropic} optimal transport. Our estimator leverages a result of Carlier et al. (2010), which shows that optimal transport maps under a rescaled quadratic cost converge asymptotically to conditional Brenier maps; our estimator is precisely the entropic analogue of these converging maps. We provide heuristic justifications for choosing the scaling parameter in the cost as a function of the number of samples by fully characterizing the Gaussian setting. We conclude by comparing the performance of the estimator to other machine learning and non-parametric approaches on benchmark datasets and Bayesian inference problems.
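
A rough sketch of an estimator in this spirit, using the POT library's Sinkhorn solver with a rescaled quadratic cost (the scaling parameter `lam`, the regularization `eps`, and the toy joint distribution are our own choices, not the paper's):

```python
import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(3)
n = 300
x = rng.uniform(-1, 1, (n, 1))
y = np.sin(3 * x) + 0.1 * rng.standard_normal((n, 1))  # toy joint (x, y)
z = rng.standard_normal((n, 1))                         # reference samples

lam, eps = 0.1, 0.1   # cost rescaling and entropic regularization
# Rescaled quadratic cost between source points (x_i, z_i) and target
# points (x_j, y_j): ||x_i - x_j||^2 / lam + ||z_i - y_j||^2; small lam
# forces matching in x, recovering conditional structure.
M = ot.dist(x, x) / lam + ot.dist(z, y)
a = np.full(n, 1.0 / n)
P = ot.sinkhorn(a, a, M, reg=eps, method="sinkhorn_log")

# Barycentric projection: estimated conditional map z_i -> y at each x_i
y_hat = (P @ y) / P.sum(axis=1, keepdims=True)
```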

Stochastic gradient descent (SGD) is a workhorse algorithm for solving large-scale optimization problems in data science and machine learning, so understanding its convergence is of fundamental importance. In this work we examine the convergence of SGD (with various step sizes) applied to unconstrained convex quadratic programming (essentially least-squares (LS) problems), and in particular analyze the error components with respect to the eigenvectors of the Hessian. The main message is that convergence depends largely on the corresponding eigenvalues (singular values of the coefficient matrix in the LS context): the components associated with the large singular values converge faster in the initial phase. We then show that the convergence exhibits a phase transition, after which the convergence speed of the components, especially those corresponding to the larger singular values, decreases. Finally, we show that the convergence rate of the overall error (in the solution) tends to decay as more iterations are run; that is, the initial convergence is faster than the asymptotic rate.
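
The phenomenon is easy to reproduce on a synthetic least-squares problem. The sketch below (our own construction, not the paper's experiments) tracks the error components along the eigenvectors of the Hessian A^T A; the components belonging to the largest eigenvalues shrink fastest in the initial phase while those for small eigenvalues barely move.

```python
import numpy as np

rng = np.random.default_rng(4)
m, d = 200, 20
A = rng.standard_normal((m, d)) @ np.diag(np.logspace(0, -2, d))
x_star = rng.standard_normal(d)
b = A @ x_star                       # consistent LS problem

evals, V = np.linalg.eigh(A.T @ A)   # Hessian eigenpairs (ascending order)
eta = 1.0 / (2 * (A ** 2).sum(axis=1).max())  # safe constant step size

x = np.zeros(d)
for t in range(5001):
    if t % 1000 == 0:
        comp = np.abs(V.T @ (x - x_star))     # error along eigenvectors
        print(t, "large-eig comps:", comp[-3:].round(4),
              "small-eig comps:", comp[:3].round(4))
    i = rng.integers(m)                       # sample one row of (A, b)
    x -= eta * (A[i] @ x - b[i]) * A[i]       # stochastic gradient step
```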

We investigate the systematic design of compliant morphing structures composed of materials that react to an external stimulus. We add a perimeter penalty term to ensure the existence of solutions. We propose a phase-field approximation of this sharp-interface problem, prove its convergence as the regularization length approaches zero, and present an efficient numerical implementation. We illustrate the strengths of our approach through a series of numerical examples.
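
The abstract does not state the functional, but phase-field approximations of a perimeter penalty are typically of Modica-Mortola type; for orientation, a standard form (which may differ from the paper's exact choice) is:

```latex
% Modica-Mortola-type approximation of the perimeter term
% (a standard choice; the paper's exact functional may differ).
\[
  \mathrm{Per}(\Omega) \;\approx\;
  P_\varepsilon(\varphi) \;=\;
  \int_D \Big( \varepsilon\,|\nabla \varphi|^2
  \;+\; \frac{1}{\varepsilon}\,\varphi^2 (1-\varphi)^2 \Big)\, dx ,
\]
where $\varphi \colon D \to [0,1]$ is the phase field and
$P_\varepsilon$ $\Gamma$-converges to a constant multiple of the
perimeter as the regularization length $\varepsilon \to 0$.
```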

Cirquent calculus is a proof system with an inherent ability to account for the sharing of subcomponents in logical expressions. Within its framework, this article constructs an axiomatization, CL18, of the basic propositional fragment of computability logic, the game-semantically conceived logic of computational resources and tasks. The nonlogical atoms of this fragment represent arbitrary so-called static games, and the connectives of its logical vocabulary are negation and the parallel and choice versions of conjunction and disjunction. The main technical result of the article is a proof of the soundness and completeness of CL18 with respect to the semantics of computability logic.

This study examines the effect that different feature selection methods have on models created with XGBoost, a popular machine learning algorithm with strong built-in regularization. It shows that three different ways of reducing the dimensionality of the feature set produce no statistically significant change in the prediction accuracy of the model. This suggests that the traditional idea of removing noisy features from the training data to keep models from overfitting may not apply to XGBoost, although feature selection may still be worthwhile for reducing computational complexity.
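
A minimal experiment in this spirit (our own construction: synthetic data, a univariate filter method from scikit-learn, near-default XGBoost settings) looks as follows; in line with the study's finding, the two cross-validated accuracies typically come out statistically indistinguishable, while the reduced feature set trains faster.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

# Synthetic task with many uninformative features
X, y = make_classification(n_samples=2000, n_features=100,
                           n_informative=10, random_state=0)

model = XGBClassifier(n_estimators=200, max_depth=4)

# Accuracy with all features vs. a filtered subset
acc_full = cross_val_score(model, X, y, cv=5).mean()
X_sel = SelectKBest(f_classif, k=20).fit_transform(X, y)
acc_sel = cross_val_score(model, X_sel, y, cv=5).mean()
print(f"all features: {acc_full:.3f}  selected: {acc_sel:.3f}")
```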

In practice, non-destructive testing (NDT) procedures tend to treat experiments (and their respective models) as distinct, conducted in isolation and associated with independent data. In contrast, this work seeks to capture the interdependencies between acoustic emission (AE) experiments (as meta-models) and then use the resulting functions to predict the model hyperparameters for previously unobserved systems. We utilise a Bayesian multilevel approach (similar to deep Gaussian processes) in which a higher-level meta-model captures the inter-task relationships. Our key contribution is showing how knowledge of the experimental campaign can be encoded between tasks as well as within tasks. We present an example of AE time-of-arrival mapping for source localisation to illustrate how multilevel models naturally lend themselves to representing aggregate systems in engineering. We constrain the meta-model based on domain knowledge, then use the inter-task functions for transfer learning, predicting hyperparameters for models of previously unobserved experiments (for a specific design).
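
A much-simplified, point-estimate caricature of the two-level idea (the full approach is Bayesian, similar to deep Gaussian processes; the names and the scalar "design" descriptor below are hypothetical): fit one GP per experiment, then regress the learned kernel hyperparameters on the design variable to predict hyperparameters for an unobserved experiment.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(5)

# Lower level: one GP per experiment; 'designs' is a hypothetical
# scalar descriptor of each experiment (e.g. specimen size).
designs = np.array([1.0, 2.0, 3.0, 4.0])
lengthscales = []
for s in designs:
    X = rng.uniform(0, 10, (40, 1))
    y = np.sin(X[:, 0] / s) + 0.05 * rng.standard_normal(40)
    gp = GaussianProcessRegressor(kernel=RBF(1.0), normalize_y=True)
    gp.fit(X, y)
    lengthscales.append(gp.kernel_.length_scale)  # fitted hyperparameter

# Upper level: meta-model of hyperparameters across tasks, used for
# transfer to a previously unobserved design.
meta = GaussianProcessRegressor(kernel=RBF(1.0), normalize_y=True)
meta.fit(designs[:, None], np.log(lengthscales))
ls_new = np.exp(meta.predict(np.array([[2.5]])))  # unseen design
```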

Most algorithms for representation learning and link prediction in relational data have been designed for static data. However, the data they are applied to usually evolves with time, such as friend graphs in social networks or user interactions with items in recommender systems. This is also the case for knowledge bases, which contain facts such as (US, has president, B. Obama, [2009-2017]) that are valid only at certain points in time. For the problem of link prediction under temporal constraints, i.e., answering queries such as (US, has president, ?, 2012), we propose a solution inspired by the canonical decomposition of tensors of order 4. We introduce new regularization schemes and present an extension of ComplEx (Trouillon et al., 2016) that achieves state-of-the-art performance. Additionally, we propose a new dataset for knowledge base completion constructed from Wikidata, larger than previous benchmarks by an order of magnitude, as a new reference for evaluating temporal and non-temporal link prediction methods.
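
The order-4 decomposition underlying such a scoring function can be sketched in a few lines (untrained random embeddings; the rank and shapes are placeholders, and a trained model would add the regularization schemes described above):

```python
import numpy as np

rng = np.random.default_rng(6)
n_ent, n_rel, n_time, r = 1000, 50, 100, 32

# Complex embeddings for entities, relations, and timestamps
E = rng.standard_normal((n_ent, r)) + 1j * rng.standard_normal((n_ent, r))
R = rng.standard_normal((n_rel, r)) + 1j * rng.standard_normal((n_rel, r))
T = rng.standard_normal((n_time, r)) + 1j * rng.standard_normal((n_time, r))

def score(s, p, o, t):
    """Order-4 ComplEx-style score Re(<e_s, w_p, conj(e_o), w_t>)."""
    return np.real(np.sum(E[s] * R[p] * np.conj(E[o]) * T[t]))

def rank_objects(s, p, t):
    """Answer a query (s, p, ?, t) by scoring all candidate objects."""
    scores = np.real((E[s] * R[p] * T[t]) @ np.conj(E).T)
    return np.argsort(-scores)
```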
