It is known that results on universal sampling discretization of the square norm are useful in sparse sampling recovery with error measured in the square norm. In this paper we demonstrate how known results on universal sampling discretization of the uniform norm, together with recent results on universal sampling representation, allow us to provide good universal methods of sampling recovery for anisotropic Sobolev and Nikol'skii classes of periodic functions of several variables. The sharpest results are obtained in the case of functions of two variables, where the Fibonacci point sets are used for recovery.
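
A minimal sketch of the sampling nodes mentioned above (illustrative only, not the paper's recovery operator): the two-dimensional Fibonacci point set places b equispaced nodes along the first coordinate and rotates the second by the ratio of consecutive Fibonacci numbers.

```python
# Minimal sketch (illustrative, not the paper's recovery operator): the
# Fibonacci point set in [0, 1)^2 with b nodes, where b is a Fibonacci number
# and b' the preceding one: nodes (i/b, {i*b'/b}), i = 0, ..., b-1.

def fibonacci_point_set(n_iter):
    fib = [1, 1]
    for _ in range(n_iter):
        fib.append(fib[-1] + fib[-2])
    b_prev, b = fib[-2], fib[-1]
    return [(i / b, (i * b_prev / b) % 1.0) for i in range(b)]

nodes = fibonacci_point_set(9)     # b = 89 nodes
print(len(nodes), nodes[:3])
```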

Related content

Normalizing flows are a class of deep generative models for efficient sampling and density estimation. In practice, the flow often appears as a chain of invertible neural network blocks; to facilitate training, existing works have regularized flow trajectories and designed special network architectures. The current paper develops a neural ODE flow network inspired by the Jordan-Kinderlehrer-Otto (JKO) scheme, which allows efficient block-wise training of the residual blocks without sampling SDE trajectories or inner loops of score matching or variational learning. As the JKO scheme unfolds the dynamics of the gradient flow, the proposed model naturally stacks residual network blocks one by one, reducing the memory load and the difficulty of end-to-end training of deep flow networks. We also develop an adaptive time reparameterization of the flow network with a progressive refinement of the trajectory in probability space, which improves the model's training efficiency and accuracy in practice. Using numerical experiments with synthetic and real data, we show that the proposed JKO-iFlow model achieves similar or better performance in generating new samples compared with existing flow and diffusion models, at a significantly reduced computational and memory cost.
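
For reference (standard background following the original Jordan-Kinderlehrer-Otto construction, not necessarily the exact objective used in the paper), one JKO step with time step $h>0$ is the Wasserstein proximal update
\[
\rho_{k+1} \in \operatorname*{arg\,min}_{\rho}\; \frac{1}{2h}\, W_2^2(\rho,\rho_k) + F(\rho),
\qquad
F(\rho) = \int V \,\mathrm{d}\rho + \int \rho \log\rho \,\mathrm{d}x ,
\]
whose iterates converge, as $h \to 0$, to the Fokker-Planck gradient flow of $F$; each residual block in the proposed model corresponds, conceptually, to one such proximal step.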

We describe an algorithm that allows one to find dense packing configurations of a number of congruent disks in arbitrary domains in two or more dimensions. We have applied it to a large class of two-dimensional domains such as rectangles, ellipses, crosses, multiply connected domains, and even the cardioid. For many of the cases that we have studied, no previous result was available. The fundamental idea in our approach is the introduction of "image" disks, which allows one to work with a fixed container, thus lifting the limitations of the packing algorithms of \cite{Nurmela97,Amore21,Amore23}. We believe that the extension of our algorithm to three- (or higher-) dimensional containers (not considered here) can be done straightforwardly.
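
As a generic illustration only (this is a plain penalty-based relaxation, not the authors' image-disk construction), the feasibility of packing N equal disks of radius r in the unit square can be checked by driving overlap and boundary penalties to zero with a local optimizer and random restarts; the largest feasible r can then be found by bisection.

```python
# Generic penalty-based packing sketch (NOT the image-disk method of the paper):
# place N disks of radius r in the unit square by minimizing a smooth penalty
# on pairwise overlaps and boundary violations, restarting from random layouts.
import numpy as np
from scipy.optimize import minimize

def penalty(x, n, r):
    c = x.reshape(n, 2)
    p = 0.0
    for i in range(n):                       # centers must be >= 2r apart
        for j in range(i + 1, n):
            d = np.linalg.norm(c[i] - c[j])
            p += max(0.0, 2 * r - d) ** 2
    p += np.sum(np.maximum(0.0, r - c) ** 2)        # keep r away from walls
    p += np.sum(np.maximum(0.0, c - (1 - r)) ** 2)
    return p

def try_pack(n, r, restarts=20, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(restarts):
        x0 = rng.uniform(r, 1 - r, size=2 * n)
        res = minimize(penalty, x0, args=(n, r), method="L-BFGS-B")
        if res.fun < 1e-10:                  # near-zero penalty => feasible
            return res.x.reshape(n, 2)
    return None

print(try_pack(5, 0.2) is not None)          # 5 disks of radius 0.2 fit
```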

As the development of formal proofs is a time-consuming task, it is important to devise ways of sharing already written proofs to avoid wasting time redoing them. One of the challenges in this domain is to translate proofs written in proof assistants based on impredicative logics to proof assistants based on predicative logics, whenever impredicativity is not used in an essential way. In this paper we present a transformation for sharing proofs with a core predicative system supporting prenex universe polymorphism (as in Agda). It consists in trying to elaborate each term into a predicative universe-polymorphic term that is as general as possible. The use of universe polymorphism is justified by the fact that mapping each universe to a fixed one in the target theory is not sufficient in most cases. During the elaboration, we need to solve unification problems in the equational theory of universe levels. To this end, we give a complete characterization of when a single equation admits a most general unifier. This characterization is then employed in an algorithm which uses a constraint-postponement strategy to solve unification problems. The proposed translation is of course partial, but in practice it allows one to translate many proofs that do not use impredicativity in an essential way. Indeed, it was implemented in the tool Predicativize and then used to translate, semi-automatically, many non-trivial developments from Matita's library to Agda, including proofs of Bertrand's Postulate and Fermat's Little Theorem, which, as far as we know, were not previously available in Agda.
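
As a small illustration (not taken from the paper) of unification over universe levels, where $+$ denotes iterated successor, consider the equation
\[
x + 1 \doteq y + 2 ,
\]
whose solutions must satisfy $\sigma(x) = \sigma(y) + 1$, so it admits the most general unifier $\sigma = \{\, x \mapsto y + 1 \,\}$; equations mixing $\max$ and successor need not admit a single most general unifier, which is precisely what the paper's characterization addresses.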

Superpositions of plane waves are known to approximate the solutions of the Helmholtz equation well. Their use in discretizations is typical of Trefftz methods for Helmholtz problems, which aim to achieve high accuracy with a small number of degrees of freedom. However, Trefftz methods lead to ill-conditioned linear systems, and it is often impossible to obtain the desired accuracy in floating-point arithmetic. In this paper we show that a judicious choice of plane waves can ensure high-accuracy solutions in a numerically stable way, in spite of having to solve such ill-conditioned systems. The numerical accuracy of plane wave methods is linked not only to the approximation space, but also to the size of the coefficients in the plane wave expansion. We show that the use of plane waves can lead to exponentially large coefficients, regardless of the orientations and the number of plane waves, and that this causes numerical instability. We prove that all Helmholtz fields are continuous superpositions of evanescent plane waves, i.e., plane waves with complex propagation vectors associated with exponential decay, and show that this leads to bounded representations. We provide a constructive scheme to numerically select a set of real- and complex-valued propagation vectors. This results in an explicit selection of plane waves and an associated Trefftz method that achieves both accuracy and stability. The theoretical analysis is provided for a two-dimensional domain of circular shape. However, the principles are general, and we conclude the paper with a numerical experiment demonstrating practical applicability also for polygonal domains.
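
For concreteness (standard facts, not the paper's specific construction): a plane wave $e^{i\mathbf{d}\cdot\mathbf{x}}$ solves the Helmholtz equation $\Delta u + k^2 u = 0$ exactly when $\mathbf{d}\cdot\mathbf{d} = k^2$. Allowing a complex propagation vector $\mathbf{d} = \boldsymbol{\xi} + i\boldsymbol{\eta}$, this condition and the resulting modulus read
\[
\boldsymbol{\xi}\cdot\boldsymbol{\eta} = 0, \qquad |\boldsymbol{\xi}|^2 - |\boldsymbol{\eta}|^2 = k^2,
\qquad
\bigl|e^{i\mathbf{d}\cdot\mathbf{x}}\bigr| = e^{-\boldsymbol{\eta}\cdot\mathbf{x}} ,
\]
so for $\boldsymbol{\eta} \neq 0$ the wave oscillates along $\boldsymbol{\xi}$ and decays exponentially along $\boldsymbol{\eta}$: an evanescent plane wave.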

This work investigates multiple testing by considering minimax separation rates in the sparse sequence model, when the testing risk is measured as the sum FDR+FNR (False Discovery Rate plus False Negative Rate). First, using the popular beta-min separation condition, with all nonzero signals separated from $0$ by at least some amount, we determine the sharp minimax testing risk asymptotically and thereby explicitly describe the transition from "achievable multiple testing with vanishing risk" to "impossible multiple testing". Adaptive multiple testing procedures achieving the corresponding optimal boundary are provided: the Benjamini--Hochberg procedure with a properly tuned level, and an empirical Bayes $\ell$-value (`local FDR') procedure. We prove that the FDR and FNR make non-symmetric contributions to the testing risk for most optimal procedures, with the FNR part being dominant at the boundary. The hardness of multiple testing is then investigated for classes of arbitrary sparse signals. A number of extensions, including results for classification losses and convergence rates in the case of large signals, are also considered.
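
Using one common convention (the paper may define these slightly differently), for a multiple testing procedure $\varphi = (\varphi_i)_{i \le n}$ on a sparse mean vector $\theta$, with $S_0 = \{i : \theta_i = 0\}$ and $S_1 = \{i : \theta_i \neq 0\}$, the two error rates are
\[
\mathrm{FDR}(\theta,\varphi) = \mathbb{E}_\theta\!\left[\frac{\#\{i \in S_0 : \varphi_i = 1\}}{\max\bigl(\#\{i : \varphi_i = 1\},\, 1\bigr)}\right],
\qquad
\mathrm{FNR}(\theta,\varphi) = \mathbb{E}_\theta\!\left[\frac{\#\{i \in S_1 : \varphi_i = 0\}}{\max\bigl(\# S_1,\, 1\bigr)}\right],
\]
and the testing risk studied above is their sum $\mathrm{FDR} + \mathrm{FNR}$.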

One of the fundamental problems in shape analysis is to align curves or surfaces before computing geodesic distances between their shapes. Finding the optimal reparametrization realizing this alignment is a computationally demanding task, typically done by solving an optimization problem on the diffeomorphism group. In this paper, we propose an algorithm for constructing approximations of orientation-preserving diffeomorphisms by composition of elementary diffeomorphisms. The algorithm is implemented in PyTorch and is applicable to both unparametrized curves and surfaces. Moreover, we show universal approximation properties for the constructed architectures, and obtain bounds on the Lipschitz constants of the resulting diffeomorphisms.
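
A minimal PyTorch-style sketch of the idea (the class names and the specific sine basis are illustrative assumptions, not the paper's architecture): each elementary map is an orientation-preserving diffeomorphism of $[0,1]$ fixing the endpoints, and richer reparametrizations are obtained by composing several of them.

```python
# Sketch: elementary diffeomorphisms of [0, 1] and their composition.
# phi(x) = x + a*sin(n*pi*x)/(n*pi) fixes 0 and 1 and has derivative
# 1 + a*cos(n*pi*x) > 0 whenever |a| < 1, hence it is orientation-preserving.
import math
import torch

class ElementaryDiffeo(torch.nn.Module):
    def __init__(self, n):
        super().__init__()
        self.n = n
        self.raw = torch.nn.Parameter(torch.zeros(()))   # unconstrained parameter

    def forward(self, x):
        a = 0.99 * torch.tanh(self.raw)                  # keep |a| < 1
        return x + a * torch.sin(self.n * math.pi * x) / (self.n * math.pi)

class DiffeoStack(torch.nn.Module):
    def __init__(self, depth):
        super().__init__()
        self.layers = torch.nn.ModuleList([ElementaryDiffeo(n + 1) for n in range(depth)])

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

phi = DiffeoStack(depth=4)
x = torch.linspace(0.0, 1.0, 5)
print(phi(x))      # endpoints 0 and 1 stay fixed; the map is strictly increasing
```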

Widely used pipelines for the analysis of high-dimensional data rely on two-dimensional visualizations, created, e.g., via t-distributed stochastic neighbor embedding (t-SNE). For large data sets, applying these visualization techniques produces suboptimal embeddings, as the hyperparameters are not suitable for large data. Simply increasing these parameters usually does not work, because the computations become too expensive for practical workflows. In this paper, we argue that a sampling-based embedding approach can circumvent these problems. We show that the hyperparameters must be chosen carefully, depending on the sampling rate and the intended final embedding. Further, we show how this approach speeds up the computation and increases the quality of the embeddings.
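
A rough sketch of a sampling-based embedding with scikit-learn (the nearest-neighbour placement of the non-sampled points and the parameter defaults are illustrative choices, not the paper's exact procedure):

```python
# Sketch: run t-SNE on a random sample, then place the remaining points at the
# centroid of the embeddings of their k nearest sampled neighbours in the
# original space.  The perplexity must be tuned jointly with the sampling rate.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.neighbors import NearestNeighbors

def sampled_tsne(X, sample_rate=0.1, perplexity=30.0, k=5, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    idx = rng.choice(n, size=max(int(sample_rate * n), 50), replace=False)

    Y_sample = TSNE(n_components=2, perplexity=perplexity,
                    random_state=seed).fit_transform(X[idx])

    nn = NearestNeighbors(n_neighbors=k).fit(X[idx])
    _, nbrs = nn.kneighbors(X)               # indices into the sampled subset
    return Y_sample[nbrs].mean(axis=1), idx

X = np.random.default_rng(1).normal(size=(2000, 50))
Y, sample_idx = sampled_tsne(X)
print(Y.shape)                               # (2000, 2)
```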

Context: Software model optimization is a process that automatically generates design alternatives, typically to enhance quantifiable non-functional properties of software systems, such as performance and reliability. Multi-objective evolutionary algorithms have been shown to be effective in this context for assisting the designer in identifying trade-offs between the desired non-functional properties. Objective: In this work, we investigate the effects of imposing a time budget to limit the search for design alternatives, which inevitably affects the quality of the resulting alternatives. Method: The effects of time budgets are analyzed by investigating both the quality of the generated design alternatives and their structural features when varying the budget and the genetic algorithm (NSGA-II, PESA2, SPEA2). This is achieved by employing multi-objective quality indicators and a tree-based representation of the search space. Results: The study reveals that the time budget significantly affects the quality of Pareto fronts, especially for performance and reliability. NSGA-II is the fastest algorithm, while PESA2 generates the highest-quality solutions. The imposition of a time budget results in structurally distinct models compared to those obtained without a budget, indicating that the search process is influenced by both the budget and the algorithm selection. Conclusions: In software model optimization, imposing a time budget can be effective in saving optimization time, but designers should carefully consider the trade-off between time and solution quality in the Pareto front, along with the structural characteristics of the generated models. By making informed choices about the specific genetic algorithm, designers can achieve different trade-offs.
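
A small sketch of imposing a wall-clock budget on a multi-objective evolutionary search, here with pymoo's NSGA-II on a standard benchmark problem rather than a software-model search space; the module paths assume a pymoo 0.6-style API and may differ across versions.

```python
# Sketch: run NSGA-II under a 30-second wall-clock budget (pymoo 0.6-style API).
# The benchmark problem ZDT1 stands in for the software-model design space.
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.problems import get_problem
from pymoo.termination import get_termination
from pymoo.optimize import minimize

problem = get_problem("zdt1")
algorithm = NSGA2(pop_size=100)
budget = get_termination("time", "00:00:30")    # hh:mm:ss time budget

res = minimize(problem, algorithm, budget, seed=1, verbose=False)
print(len(res.F), "non-dominated solutions found within the budget")
```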

In large-scale systems, there are fundamental challenges when centralised techniques are used for task allocation: the number of interactions is limited by resource constraints such as computation, storage, and network communication. We can increase scalability by implementing the system as a distributed task-allocation system, sharing tasks across many agents. However, this also increases the resource cost of communication and synchronisation, and remains difficult to scale. In this paper we present four algorithms to address these problems. Their combination enables each agent to improve its task allocation strategy through reinforcement learning, while adjusting how much it explores the system in response to how close to optimal it believes its current strategy is, given its past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource usage limits, restricting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks. We also simulate real-life system effects such as networking instability. Our solution is shown to solve the task allocation problem to within 6.7% of the theoretical optimum for the system configurations considered. It provides 5x better performance recovery than approaches without knowledge retention when system connectivity is impacted, and it has been tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
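
A toy sketch of the core idea (generic, not the paper's four algorithms): an agent learns the value of delegating to each peer from observed rewards and explores more when its best estimate looks weak relative to an aspiration level. The class, the aspiration parameter, and the reward model are illustrative assumptions.

```python
# Toy sketch: an agent picks which peer to delegate a subtask to, learns peer
# quality from rewards, and raises its exploration rate when its estimated
# best option falls short of an aspiration level (clamped to [0.05, 1.0]).
import random

class AdaptiveAllocator:
    def __init__(self, peers, aspiration=0.9, lr=0.1):
        self.q = {p: 0.0 for p in peers}      # estimated value of each peer
        self.aspiration = aspiration
        self.lr = lr

    def exploration_rate(self):
        best = max(self.q.values())
        return min(1.0, max(0.05, self.aspiration - best))

    def choose(self):
        if random.random() < self.exploration_rate():
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def update(self, peer, reward):
        self.q[peer] += self.lr * (reward - self.q[peer])

# Usage: rewards would come from the environment (e.g., task completion quality).
agent = AdaptiveAllocator(peers=["a", "b", "c"])
for _ in range(200):
    p = agent.choose()
    reward = {"a": 0.5, "b": 0.8, "c": 0.3}[p] + random.uniform(-0.05, 0.05)
    agent.update(p, reward)
print(agent.q)
```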

Hashing has been widely used in approximate nearest neighbor search for large-scale database retrieval, owing to its computational and storage efficiency. Deep hashing, which devises convolutional neural network architectures to exploit and extract the semantic information or features of images, has received increasing attention recently. In this survey, several deep supervised hashing methods for image retrieval are evaluated, and I identify three main directions for deep supervised hashing methods. Several comments are made at the end. Moreover, to break through the bottleneck of existing hashing methods, I propose a Shadow Recurrent Hashing (SRH) method as an attempt. Specifically, I devise a CNN architecture to extract the semantic features of images and design a loss function to encourage similar images to be projected close to each other. To this end, I propose a concept: the shadow of the CNN output. During the optimization process, the CNN output and its shadow guide each other so as to approach the optimal solution as closely as possible. Several experiments on the CIFAR-10 dataset show the satisfactory performance of SRH.
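
A generic deep supervised hashing sketch in PyTorch (a plain pairwise similarity loss with a tanh relaxation and a quantization term; this is not the proposed SRH "shadow" mechanism, whose details go beyond the abstract, and the resnet18 backbone and all names are illustrative assumptions):

```python
# Generic deep supervised hashing sketch: a CNN backbone maps images to
# continuous codes, tanh relaxes the binary constraint, and the loss pulls
# similar pairs together, pushes dissimilar pairs apart, and penalizes the
# gap to exact binary values.  Uses torchvision >= 0.13 weights API.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

class DeepHash(torch.nn.Module):
    def __init__(self, n_bits=48):
        super().__init__()
        self.backbone = resnet18(weights=None)
        self.backbone.fc = torch.nn.Linear(self.backbone.fc.in_features, n_bits)

    def forward(self, x):
        return torch.tanh(self.backbone(x))              # codes in (-1, 1)

def pairwise_hash_loss(codes, labels, margin=2.0, quant_weight=0.1):
    sim = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()   # 1 if same class
    dist = torch.cdist(codes, codes)                             # pairwise L2
    pull = sim * dist.pow(2)                                     # similar: close
    push = (1 - sim) * F.relu(margin - dist).pow(2)              # dissimilar: apart
    quant = (codes.abs() - 1).pow(2).mean()                      # near +/- 1
    return (pull + push).mean() + quant_weight * quant

model = DeepHash(n_bits=48)
images = torch.randn(8, 3, 32, 32)                # CIFAR-10-sized dummy batch
labels = torch.randint(0, 10, (8,))
loss = pairwise_hash_loss(model(images), labels)
loss.backward()
print(float(loss))
```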
