
Random probabilities are a key component of many nonparametric methods in Statistics and Machine Learning. To quantify comparisons between different laws of random probabilities, several works have begun to use the elegant Wasserstein over Wasserstein distance. In this paper we prove that the infinite-dimensionality of the space of probabilities drastically deteriorates its sample complexity, which is slower than any polynomial rate in the sample size. We thus propose a new distance that preserves many desirable properties of the former while achieving a parametric rate of convergence. In particular, our distance 1) metrizes weak convergence; 2) can be estimated numerically from samples with low complexity; 3) can be bounded analytically from above and below. The main ingredients are integral probability metrics, which lead to the name hierarchical IPM.
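For orientation, here is a schematic of the objects involved, using notation introduced here for illustration (the paper's exact test-function classes are not reproduced): an integral probability metric (IPM) compares two probability measures through a class of test functions, and a hierarchical IPM applies the same idea one level up, to laws of random probabilities, i.e. probability measures on the space of probability measures.

```latex
% Schematic only; \mathcal{F} and \mathcal{G} are illustrative test-function classes.
\begin{align*}
  \mathrm{IPM}_{\mathcal F}(P, Q)
    &= \sup_{f \in \mathcal F} \Big| \int f \,\mathrm{d}P - \int f \,\mathrm{d}Q \Big|,
    && P, Q \in \mathcal P(\mathbb X), \\
  \mathrm{hIPM}_{\mathcal G}(\mathscr P, \mathscr Q)
    &= \sup_{G \in \mathcal G} \Big| \int G \,\mathrm{d}\mathscr P - \int G \,\mathrm{d}\mathscr Q \Big|,
    && \mathscr P, \mathscr Q \in \mathcal P\big(\mathcal P(\mathbb X)\big),
\end{align*}
% The Wasserstein over Wasserstein distance corresponds to the special case of an
% outer W_1 on \mathcal P(\mathcal P(\mathbb X)) whose ground metric is W_1 on \mathcal P(\mathbb X).
```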

Related content


Generative diffusion models have achieved spectacular performance in many areas of generative modeling. While the fundamental ideas behind these models come from non-equilibrium physics, variational inference and stochastic calculus, in this paper we show that many aspects of these models can be understood using the tools of equilibrium statistical mechanics. Using this reformulation, we show that generative diffusion models undergo second-order phase transitions corresponding to symmetry-breaking phenomena. We show that these phase transitions are always in a mean-field universality class, as they are the result of a self-consistency condition in the generative dynamics. We argue that the critical instability arising from these phase transitions lies at the heart of the models' generative capabilities, which are characterized by a set of mean-field critical exponents. Furthermore, using the statistical physics of disordered systems, we show that memorization can be understood as a form of critical condensation corresponding to a disordered phase transition. Finally, we show that the dynamic equation of the generative process can be interpreted as a stochastic adiabatic transformation that minimizes the free energy while keeping the system in thermal equilibrium.
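As a concrete, hedged illustration of the symmetry-breaking picture (a toy model constructed here, not the paper's derivation), the following sketch runs the exact reverse-time diffusion for a symmetric two-mode Gaussian mixture in one dimension, where the score is available in closed form; trajectories started from pure noise split between the two modes as the reverse dynamics passes the critical regime.

```python
# Toy reverse-time diffusion for p_0 = (1/2) N(-a, s0^2) + (1/2) N(+a, s0^2),
# forward process dx = -x dt + sqrt(2) dW (Ornstein-Uhlenbeck noising).
import numpy as np

a, s0 = 2.0, 0.1                     # mode locations +/- a and mode width (assumed values)
T, n_steps, n_samples = 4.0, 400, 10_000
dt = T / n_steps
rng = np.random.default_rng(0)

def score(x, t):
    """Exact score of the OU-noised symmetric mixture at time t."""
    m = a * np.exp(-t)
    s2 = s0**2 * np.exp(-2 * t) + 1.0 - np.exp(-2 * t)
    return (m * np.tanh(m * x / s2) - x) / s2

# Start from (approximately) the fully noised marginal p_T ~ N(0, s_T^2).
sT2 = s0**2 * np.exp(-2 * T) + 1.0 - np.exp(-2 * T)
x = np.sqrt(sT2) * rng.standard_normal(n_samples)

for k in range(n_steps, 0, -1):
    t = k * dt
    # Reverse SDE for the forward OU process: dx = [-x - 2 * score(x, t)] dt + sqrt(2) dW,
    # integrated backwards in time with Euler-Maruyama.
    x = x + dt * (x + 2.0 * score(x, t)) + np.sqrt(2.0 * dt) * rng.standard_normal(n_samples)

print("fraction of samples ending in the +a mode:", np.mean(x > 0))  # ~ 1/2 by symmetry
```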

Significant research efforts have been dedicated to designing cryptographic algorithms that are quantum-resistant. The motivation is clear: robust quantum computers, once available, will render current cryptographic standards vulnerable. Thus, we need new Post-Quantum Cryptography (PQC) algorithms, and, due to the inherent complexity of such algorithms, there is also a demand to accelerate them in hardware. In this paper, we show that PQC hardware accelerators can be backdoored by two different adversaries located in the chip supply chain. We propose REPQC, a sophisticated reverse engineering algorithm that can be employed to confidently identify hashing operations (i.e., Keccak) within a PQC accelerator; their location serves as an anchor for finding secret information to be leaked. Armed with REPQC, an adversary proceeds to insert malicious logic in the form of a stealthy Hardware Trojan Horse (HTH). Using Dilithium as a case study, our results demonstrate that HTHs that increase the accelerator's layout density by as little as 0.1% can be inserted without any impact on the circuit's performance and with only a marginal increase in power consumption. An essential aspect is that the entire reverse engineering in REPQC is automated, and so is the HTH insertion that follows it, empowering adversaries to explore multiple HTH designs and identify the most suitable one.

Mission-critical operations, particularly Search-and-Rescue (SAR) and emergency response situations, demand optimal performance and efficiency from every component involved to maximize the probability of success. In these settings, cellular-enabled collaborative robotic systems have emerged as invaluable assets, assisting first responders in several tasks, ranging from victim localization to hazardous area exploration. However, a critical limitation in the deployment of cellular-enabled collaborative robots in SAR missions is their energy budget, primarily supplied by batteries, which directly impacts their task execution and mobility. This paper tackles this problem and proposes a search-and-rescue framework for cellular-enabled collaborative robot use cases that, taking as input the size of the area to be explored, the robot fleet size, the robots' energy profiles, the required exploration rate, and the target response time, finds the minimum number of robots able to meet the SAR mission goals and the paths they should follow to explore the area. Our results i) show that first responders can rely on a SAR cellular-enabled robotics framework when planning mission-critical operations to make informed decisions with limited resources, and ii) illustrate the trade-off between the number of robots, the explored area, and the response time, depending on the type of robot: wheeled vs. quadruped.
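As a rough, hedged illustration of the kind of trade-off such a framework exposes (a back-of-the-envelope calculation with made-up robot profiles, not the paper's optimizer or data), one can lower-bound the fleet size needed to cover an area within a deadline, given a per-robot exploration rate and battery life:

```python
# Simplified fleet-sizing estimate: each robot explores at `explore_rate` (m^2/min)
# for at most `battery_min` minutes, and the mission must cover a `coverage`
# fraction of `area_m2` within `response_min` minutes. All numbers below are hypothetical.
import math

def min_robots(area_m2, explore_rate, battery_min, response_min, coverage=1.0):
    usable_min = min(battery_min, response_min)        # a robot works until battery or deadline
    per_robot_area = explore_rate * usable_min          # area one robot can cover
    return math.ceil(coverage * area_m2 / per_robot_area)

print(min_robots(10_000, explore_rate=60, battery_min=90, response_min=120))  # "wheeled" profile
print(min_robots(10_000, explore_rate=35, battery_min=45, response_min=120))  # "quadruped" profile
```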

Simulations play a crucial role in the modern scientific process. Yet despite (or due to) their ubiquity, the Data Science community shares neither a comprehensive definition of a "high-quality" simulation study nor a consolidated guide to designing one. Inspired by the Predictability-Computability-Stability (PCS) framework for 'veridical' Data Science, we propose six MERITS that a Data Science simulation should satisfy. Modularity and Efficiency support the Computability of a study, encouraging clean and flexible implementation. Realism and Stability address the conceptualization of the research problem: how well does a study Predict reality, such that its conclusions generalize to new data/contexts? Finally, Intuitiveness and Transparency encourage good communication and trustworthiness of study design and results. Drawing an analogy between simulation and cooking, we moreover offer (a) a conceptual framework for thinking about the anatomy of a simulation 'recipe'; (b) a baker's dozen of guidelines to aid the Data Science practitioner in designing one; and (c) a case study deconstructing a simulation through the lens of our framework to demonstrate its practical utility. By contributing this "PCS primer" for high-quality Data Science simulation, we seek to distill and enrich the best practices of simulation across disciplines into a cohesive recipe for trustworthy, veridical Data Science.

Convolutional Neural Networks (CNNs) are nowadays the model of choice in Computer Vision, thanks to their ability to automate the feature extraction process in visual tasks. However, the knowledge acquired during training is fully subsymbolic, and hence difficult to understand and explain to end users. In this paper, we propose a new technique called HOLMES (HOLonym-MEronym based Semantic inspection) that decomposes a label into a set of related concepts and provides component-level explanations for an image classification model. Specifically, HOLMES leverages ontologies, web scraping and transfer learning to automatically construct meronym (part)-based detectors for a given holonym (class). It then produces heatmaps at the meronym level and finally, by probing the holonym CNN with occluded images, highlights the importance of each part for the classification output. Compared to state-of-the-art saliency methods, HOLMES takes a step further and provides information about both where and what the holonym CNN is looking at, without relying on densely annotated datasets and without forcing concepts to be associated with single computational units. Extensive experimental evaluation on different categories of objects (animals, tools and vehicles) shows the feasibility of our approach. On average, HOLMES explanations include at least two meronyms, and the ablation of a single meronym roughly halves the holonym model's confidence. The resulting heatmaps were quantitatively evaluated using deletion/insertion/preservation curves. All metrics were comparable to those achieved by GradCAM, while offering the advantage of further decomposing the heatmap into human-understandable concepts, thus highlighting both the relevance of meronyms to object classification and HOLMES' ability to capture it. The code is available at https://github.com/FrancesC0de/HOLMES.
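A minimal sketch of the occlusion-probing step described above, assuming a generic PyTorch classifier and a precomputed meronym mask (the function name and signature are illustrative, not taken from the HOLMES codebase):

```python
# Occlusion-based part importance: hide one meronym region and measure the drop
# in the holonym class confidence. `model`, `image`, `part_mask`, `class_idx`
# are assumed inputs (image: (1, C, H, W) tensor; part_mask: (H, W) bool tensor).
import torch

def part_importance(model, image, part_mask, class_idx, fill=0.0):
    model.eval()
    with torch.no_grad():
        p_full = torch.softmax(model(image), dim=1)[0, class_idx]
        occluded = image.clone()
        occluded[:, :, part_mask] = fill          # blank out the meronym region
        p_occ = torch.softmax(model(occluded), dim=1)[0, class_idx]
    # Importance = relative confidence drop when the part is hidden.
    return ((p_full - p_occ) / p_full).item()
```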

We show convergence rates for a sparse grid approximation of the distribution of solutions of the stochastic Landau-Lifshitz-Gilbert equation. Beyond being a frequently studied equation in engineering and physics, the stochastic Landau-Lifshitz-Gilbert equation poses many interesting challenges that do not appear simultaneously in previous works on uncertainty quantification: the equation is strongly non-linear, time-dependent, and has a non-convex side constraint. Moreover, the parametrization of the stochastic noise features countably many unbounded parameters and low regularity compared to other elliptic and parabolic problems studied in uncertainty quantification. We use a novel technique to establish uniform holomorphic regularity of the parameter-to-solution map based on a Gronwall-type estimate and the implicit function theorem. This method is very general and based on a set of abstract assumptions; thus, it can be applied beyond the Landau-Lifshitz-Gilbert equation as well. We numerically demonstrate the feasibility of the sparse grid approximation and show a clear advantage of a multi-level sparse grid scheme.

The random batch method (RBM) proposed in [Jin et al., J. Comput. Phys., 400 (2020), 108877] for large interacting particle systems is an efficient and highly scalable algorithm, with linear complexity in the number of particles, for $N$-particle interacting systems and their mean-field limits when $N$ is large. In this work we consider the quantitative error estimate of RBM toward its mean-field limit, the Fokker-Planck equation. Under mild assumptions, we obtain a uniform-in-time $O(\tau^2 + 1/N)$ bound on the scaled relative entropy between the joint law of the random batch particles and the tensorized law at the mean-field limit, where $\tau$ is the time step size and $N$ is the number of particles. Therefore, we improve the existing rate in the discretization step size from $O(\sqrt{\tau})$ to $O(\tau)$ in terms of the Wasserstein distance.
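For readers unfamiliar with RBM, a minimal sketch of one time step is given below (the kernel, noise level, and batch size are illustrative assumptions, not the setting analyzed in the paper): at each step the particles are randomly partitioned into small batches, and each particle interacts only with the other members of its batch, so the per-step cost is linear in the number of particles.

```python
# One step of the random batch method for the first-order system
# dX_i = (1/(N-1)) sum_{j != i} K(X_i - X_j) dt + sigma dW_i. Assumes K(0) = 0.
import numpy as np

def rbm_step(X, K, tau, sigma, p=2, rng=np.random.default_rng()):
    N, d = X.shape
    idx = rng.permutation(N)                      # random partition into batches of size p
    for start in range(0, N, p):
        batch = idx[start:start + p]
        Y = X[batch]
        noise = sigma * np.sqrt(tau) * rng.standard_normal(Y.shape)
        if len(batch) < 2:                        # leftover singleton batch: diffusion only
            X[batch] = Y + noise
            continue
        diff = Y[:, None, :] - Y[None, :, :]      # pairwise differences within the batch
        drift = K(diff).sum(axis=1) / (len(batch) - 1)   # 1/(p-1) replaces 1/(N-1)
        X[batch] = Y + tau * drift + noise
    return X

# Example: linear attraction K(r) = -r, 1000 particles in 2D.
K = lambda r: -r
X = np.random.default_rng(1).standard_normal((1000, 2))
for _ in range(200):
    X = rbm_step(X, K, tau=0.01, sigma=0.3)
```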

We propose an operator learning approach to accelerate geometric Markov chain Monte Carlo (MCMC) for solving infinite-dimensional nonlinear Bayesian inverse problems. While geometric MCMC employs high-quality proposals that adapt to posterior local geometry, it requires computing local gradient and Hessian information of the log-likelihood, incurring a high cost when the parameter-to-observable (PtO) map is defined through expensive model simulations. We consider a delayed-acceptance geometric MCMC method driven by a neural operator surrogate of the PtO map, where the proposal is designed to exploit fast surrogate approximations of the log-likelihood and, simultaneously, its gradient and Hessian. To achieve a substantial speedup, the surrogate needs to be accurate in predicting both the observable and its parametric derivative (the derivative of the observable with respect to the parameter). Training such a surrogate via conventional operator learning using input--output samples often demands a prohibitively large number of model simulations. In this work, we present an extension of derivative-informed operator learning [O'Leary-Roseberry et al., J. Comput. Phys., 496 (2024)] using input--output--derivative training samples. Such a learning method leads to derivative-informed neural operator (DINO) surrogates that accurately predict the observable and its parametric derivative at a significantly lower training cost than the conventional method. Cost and error analyses for reduced basis DINO surrogates are provided. Numerical studies on PDE-constrained Bayesian inversion demonstrate that DINO-driven MCMC generates effective posterior samples 3--9 times faster than geometric MCMC and 60--97 times faster than prior geometry-based MCMC. Furthermore, the training cost of DINO surrogates breaks even after collecting merely 10--25 effective posterior samples compared to geometric MCMC.
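The surrogate-screening mechanism can be summarized by generic two-stage delayed-acceptance Metropolis-Hastings, sketched below with a symmetric proposal (the paper's proposals additionally exploit surrogate gradients and Hessians, which this sketch does not reproduce): a cheap surrogate log-posterior screens proposals, and the expensive exact log-posterior is evaluated only for proposals that pass the first stage.

```python
# Generic delayed-acceptance Metropolis-Hastings with a symmetric proposal.
# Stage 1 uses a cheap surrogate; stage 2 corrects so that the exact posterior
# remains the invariant distribution.
import numpy as np

def delayed_acceptance_mh(x0, propose, log_post_exact, log_post_surr,
                          n_steps, rng=np.random.default_rng()):
    x = np.asarray(x0, dtype=float)
    lp_exact, lp_surr = log_post_exact(x), log_post_surr(x)
    chain = [x.copy()]
    for _ in range(n_steps):
        y = propose(x, rng)                       # symmetric proposal assumed
        lq_surr = log_post_surr(y)
        # Stage 1: screen with the surrogate (no expensive model run).
        if np.log(rng.uniform()) < lq_surr - lp_surr:
            lq_exact = log_post_exact(y)          # expensive evaluation
            # Stage 2: correction factor for the surrogate's screening.
            log_alpha2 = (lq_exact - lp_exact) - (lq_surr - lp_surr)
            if np.log(rng.uniform()) < log_alpha2:
                x, lp_exact, lp_surr = y, lq_exact, lq_surr
        chain.append(x.copy())
    return np.array(chain)
```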

Algorithms for initializing particle distributions in SPH simulations of complex geometries have proven essential for improving the accuracy of SPH simulations. However, no such algorithms exist for boundary integral SPH models, which can model complex geometries without needing layers of virtual particles. This study introduces a Boundary Integral based Particle Initialization (BIPI) algorithm. It consists of a particle-shifting technique carefully designed to redistribute particles to fit the boundary, using the boundary integral formulation for particles adjacent to it. The proposed BIPI algorithm gives special consideration to particles adjacent to the boundary to prevent artificial volume compression. It can automatically produce a "uniform" particle distribution with a reduced and stabilized concentration gradient for domains with complex geometrical shapes. Finally, a number of examples are presented to demonstrate the effectiveness of the proposed algorithm.
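For intuition, here is a generic particle-shifting pass of the kind such initialization algorithms build on (a simplified sketch with a Gaussian kernel; it does not include BIPI's boundary-integral treatment of near-boundary particles):

```python
# Generic particle shifting: move each particle down the gradient of a
# kernel-estimated particle concentration, which evens out clustering.
import numpy as np

def shift_particles(X, h, beta=0.05):
    """X: (N, d) particle positions, h: smoothing length, beta: shift strength."""
    shift = np.zeros_like(X)
    for i in range(len(X)):
        r = X[i] - X                                  # separations x_i - x_j, shape (N, d)
        w = np.exp(-(r ** 2).sum(axis=1) / h ** 2)    # Gaussian kernel weights W_ij
        w[i] = 0.0                                    # exclude self-contribution
        grad_c = (-2.0 / h ** 2) * (w[:, None] * r).sum(axis=0)  # grad of C_i = sum_j W_ij
        shift[i] = -beta * h ** 2 * grad_c            # move from crowded to sparse regions
    return X + shift
```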

We establish a coding theorem and a matching converse theorem for separate encodings and joint decoding of individual sequences using finite-state machines. The achievable rate region is characterized in terms of the Lempel-Ziv (LZ) complexities, the conditional LZ complexities, and the joint LZ complexity of the two source sequences. An important ingredient needed to this end, which may be interesting in its own right, is a certain asymptotic form of a chain rule for LZ complexities, which we establish in this work. The main emphasis in the achievability scheme is on the universal decoder and its properties. We then show that the achievable rate region is universally attainable by a modified version of Draper's universal incremental Slepian-Wolf (SW) coding scheme, provided that there exists a low-rate reliable feedback link.
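Schematically, and using notation introduced here rather than the paper's, the region has the familiar Slepian-Wolf shape with normalized LZ complexities playing the role of entropies, glued together by the asymptotic chain rule mentioned above:

```latex
% Schematic form only (notation ours); \rho_{\mathrm{LZ}} denotes a normalized
% (conditional / joint) LZ complexity of the source sequences.
\begin{align*}
  R_X \;\gtrsim\; \rho_{\mathrm{LZ}}(\boldsymbol{x} \mid \boldsymbol{y}), \qquad
  R_Y \;\gtrsim\; \rho_{\mathrm{LZ}}(\boldsymbol{y} \mid \boldsymbol{x}), \qquad
  R_X + R_Y \;\gtrsim\; \rho_{\mathrm{LZ}}(\boldsymbol{x}, \boldsymbol{y}),
\end{align*}
% with a chain rule holding asymptotically in the form
% \rho_{\mathrm{LZ}}(\boldsymbol{x}, \boldsymbol{y}) \approx
% \rho_{\mathrm{LZ}}(\boldsymbol{x}) + \rho_{\mathrm{LZ}}(\boldsymbol{y} \mid \boldsymbol{x}).
```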
