
Lattices with minimal normalized second moments are designed using a new numerical optimization algorithm. Starting from a random lower-triangular generator matrix, the algorithm applies stochastic gradient descent, updating all elements in the direction of the negative gradient; this makes it the most efficient algorithm proposed so far for this purpose. A graphical illustration of the theta series, called the theta image, is introduced and shown to be a powerful tool for converting numerical lattice representations into their underlying exact forms. As a proof of concept, optimized lattices are designed in dimensions up to 16. In every dimension, the algorithm converges to either the previously best known lattice or a better one. The dual of the 15-dimensional laminated lattice is conjectured to be optimal in its dimension.
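As an illustration of the objective being minimized (not the paper's optimizer itself), the normalized second moment of a lattice can be estimated by Monte Carlo: draw uniform samples from the fundamental cell, quantize each to its nearest lattice point by brute-force search, and normalize the mean squared error by the dimension and covolume. A minimal 2-D sketch, with all parameter choices ours:

```python
import numpy as np

def nsm_estimate(B, n_samples=20000, search_radius=2, seed=0):
    """Monte Carlo estimate of the normalized second moment (NSM)
    of the lattice with generator matrix B (rows are basis vectors).

    A point is drawn uniformly from the fundamental parallelotope,
    quantized to the nearest lattice point by brute-force search over
    a small box of integer coefficients, and the mean squared error
    is normalized by the lattice dimension and covolume.
    """
    rng = np.random.default_rng(seed)
    n = B.shape[0]
    V = abs(np.linalg.det(B))                       # covolume of the lattice
    # candidate lattice points z @ B for small integer vectors z
    coeffs = np.arange(-search_radius, search_radius + 1)
    grid = np.array(np.meshgrid(*[coeffs] * n)).reshape(n, -1).T
    lattice_pts = grid @ B                          # (m, n) candidates
    u = rng.random((n_samples, n))                  # uniform in [0,1)^n
    x = u @ B                                       # uniform in the cell
    # squared distance from each sample to its nearest lattice point
    d2 = ((x[:, None, :] - lattice_pts[None, :, :]) ** 2).sum(-1).min(1)
    return d2.mean() / (n * V ** (2.0 / n))

# NSM of the integer lattice Z^2 is exactly 1/12 ~ 0.0833; the
# hexagonal lattice achieves the known 2-D optimum ~ 0.0802.
Z2 = np.eye(2)
hexagonal = np.array([[1.0, 0.0], [0.5, np.sqrt(3) / 2]])
print(nsm_estimate(Z2), nsm_estimate(hexagonal))
```

The estimate is scale-invariant, so any generator matrix of the same lattice gives the same value up to Monte Carlo noise.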

Related content

A Bayesian nonparametric method of James, Lijoi & Prunster (2009), used to predict future values of observations from normalized random measures with independent increments, is modified for a class of models based on negative binomial processes, in which the increments are not independent but are conditionally independent given an underlying gamma variable. As in James et al., the new algorithm is formulated in terms of two variables: one a function of the past observations, and the other an update based on a new observation. We outline an application of the procedure to population genetics, namely the construction of realisations of genealogical trees and coalescents from samples of alleles.

The optimization of expensive-to-evaluate black-box functions is prevalent in various scientific disciplines. Bayesian optimization is an automatic, general and sample-efficient method to solve these problems with minimal knowledge of the underlying function dynamics. However, the ability of Bayesian optimization to incorporate prior knowledge or beliefs about the function at hand in order to accelerate the optimization is limited, which reduces its appeal for knowledgeable practitioners with tight budgets. To allow domain experts to customize the optimization routine, we propose ColaBO, the first Bayesian-principled framework for incorporating prior beliefs beyond the typical kernel structure, such as the likely location of the optimizer or the optimal value. The generality of ColaBO makes it applicable across different Monte Carlo acquisition functions and types of user beliefs. We empirically demonstrate ColaBO's ability to substantially accelerate optimization when the prior information is accurate, and to retain approximately default performance when it is misleading.
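ColaBO's actual mechanism reweights Monte Carlo acquisition functions; as a loose, hypothetical illustration of the underlying idea (closer in spirit to a density-weighted rule, with every number below made up), one can bias candidate selection by multiplying an acquisition surface by the user's prior density over the optimizer's location:

```python
import numpy as np

# Hypothetical 1-D illustration: fold a user's belief about the
# optimizer's location into candidate selection by weighting an
# acquisition surface with the prior density. This is a sketch of
# the general idea only, not the ColaBO algorithm itself.
xs = np.linspace(0.0, 1.0, 501)

# Stand-in acquisition values from a surrogate model (assumed given):
# a bump peaked at x = 0.3.
acq = np.exp(-0.5 * ((xs - 0.3) / 0.1) ** 2)

# User prior: the optimizer is believed to lie near x = 0.7.
prior = np.exp(-0.5 * ((xs - 0.7) / 0.1) ** 2)

plain_choice = xs[np.argmax(acq)]           # ignores the belief
colab_choice = xs[np.argmax(acq * prior)]   # belief-weighted choice

print(plain_choice, colab_choice)           # weighted choice moves toward 0.7
```

When the prior is accurate, the weighted choice reaches good regions sooner; when it is flat (uninformative), the product reduces to the plain acquisition rule, matching the "retain default performance" behaviour described above.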

Bayesian optimization is a widely used technique for optimizing black-box functions, with Expected Improvement (EI) being the most commonly utilized acquisition function in this domain. While EI is often viewed as distinct from information-theoretic acquisition functions such as entropy search (ES) and max-value entropy search (MES), our work reveals that EI can be considered a special case of MES when approached through variational inference (VI). In this context, we develop the Variational Entropy Search (VES) methodology and the VES-Gamma algorithm, which adapts EI by incorporating principles from information-theoretic concepts. The efficacy of VES-Gamma is demonstrated across a variety of test functions and real datasets, highlighting its theoretical and practical utility in Bayesian optimization scenarios.
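The closed form of EI under a Gaussian posterior, the quantity the abstract builds on, can be checked against Monte Carlo in a few lines (this is the standard EI identity, not the VES derivation):

```python
import numpy as np
from math import erf, exp, pi, sqrt

def expected_improvement(mu, sigma, y_best):
    """Closed-form EI for minimization with Gaussian posterior N(mu, sigma^2):
    E[max(y_best - Y, 0)] = sigma * (z * Phi(z) + phi(z)), z = (y_best - mu) / sigma."""
    z = (y_best - mu) / sigma
    Phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))      # standard normal CDF
    phi = exp(-0.5 * z * z) / sqrt(2.0 * pi)    # standard normal PDF
    return sigma * (z * Phi + phi)

# Monte Carlo check of the identity at one (made-up) posterior.
rng = np.random.default_rng(0)
mu, sigma, y_best = 0.2, 0.5, 0.0
samples = rng.normal(mu, sigma, 1_000_000)
mc = np.maximum(y_best - samples, 0.0).mean()
print(expected_improvement(mu, sigma, y_best), mc)  # the two values agree closely
```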

Motivated by the fact that input distributions are often unknown in advance, distribution-free property testing considers a setting where the algorithmic task is to accept functions $f : [n] \to \{0,1\}$ with a certain property P and reject functions that are $\eta$-far from P, where the distance is measured according to an arbitrary and unknown input distribution $D$ over $[n]$. As usual in property testing, the tester can only make a sublinear number of input queries, but as the distribution is unknown, we also allow a sublinear number of samples from the distribution $D$. In this work we initiate the study of distribution-free interactive proofs of proximity (df-IPPs), in which the distribution-free testing algorithm is assisted by an all-powerful but untrusted prover. Our main result is that for any problem P $\in$ NC, any proximity parameter $\eta > 0$, and any (trade-off) parameter $t\leq\sqrt{n}$, we construct a df-IPP for P with respect to $\eta$ that has query and sample complexities $t+O(1/\eta)$ and communication complexity $\tilde{O}(n/t + 1/\eta)$. For $t$ as above and sufficiently large $\eta$ (namely, when $\eta > t/n$), this result matches the parameters of the best-known general-purpose IPPs in the standard uniform setting. Moreover, for such $t$, its parameters are optimal up to poly-logarithmic factors under reasonable cryptographic assumptions for the same regime of $\eta$ as the uniform setting, i.e., when $\eta \geq 1/t$. For small $\eta$ (i.e., $\eta< t/n$), our protocol has communication complexity $\Omega(1/\eta)$, which is worse than the $\tilde{O}(n/t)$ communication complexity of the uniform IPPs (with the same query complexity). To narrow this gap, we show that for IPPs over specialised but large distribution families, such as sufficiently smooth distributions and product distributions, the communication complexity reduces to $\tilde{O}(n/t^{1-o(1)})$.

Computational problems are classified into computable and uncomputable problems. If there exists an effective procedure (algorithm) to compute a problem, then the problem is computable; otherwise it is uncomputable. Turing machines can execute any algorithm, so every computable problem is Turing-computable. Some variants of the Turing machine appear computationally more powerful, but all of these variants have been proven to be equally powerful. The main objective of this work is to revisit and examine the computational power of different variants of Turing machines at a very fine-grained level. We achieve this objective by constructing a transformation technique for Turing-computable problems that converts computable problems into another type of problem, and we then attempt to compute the transformed problems on different variants of the Turing machine. This paper shows the existence of a realizable computational scheme that establishes a framework for analyzing the computational characteristics of different Turing machine variants at an infinitesimal scale.

The Quantum Alternating Operator Ansatz (QAOA) is a prominent variational quantum algorithm for solving combinatorial optimization problems. Its effectiveness depends on identifying input parameters that yield high-quality solutions. However, understanding the complexity of training QAOA remains an under-explored area. Previous results have given analytical performance guarantees for a small, fixed number of parameters. At the opposite end of the spectrum, barren plateaus are likely to emerge at $\Omega(n)$ parameters for $n$ qubits. In this work, we study the difficulty of training in the intermediate regime, which is the focus of most current numerical studies and near-term hardware implementations. Through extensive numerical analysis of the quality and quantity of local minima, we argue that QAOA landscapes can exhibit a superpolynomial growth in the number of low-quality local minima even when the number of parameters scales logarithmically with $n$. This means that the common technique of gradient descent from randomly initialized parameters is doomed to fail beyond small $n$, and emphasizes the need for good initial guesses of the optimal parameters.
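The experimental methodology above, gradient descent from random initializations followed by tallying the distinct local minima reached, can be sketched on a toy landscape (this is not a QAOA landscape; the surface and all constants below are illustrative):

```python
import numpy as np

# Toy illustration of the multistart methodology: run gradient descent
# from many random initializations on a periodic 2-D surface
# f(x, y) = cos(3x) + cos(3y) and tally the distinct local minima found.
def grad(p):
    # gradient of f(x, y) = cos(3x) + cos(3y)
    return -3.0 * np.sin(3.0 * p)

rng = np.random.default_rng(1)
starts = rng.uniform(0.0, 2.0 * np.pi, size=(200, 2))
minima = []
for p in starts:
    for _ in range(500):            # plain gradient descent
        p = p - 0.02 * grad(p)
    minima.append(tuple(np.round(p, 2)))

distinct = len(set(minima))
print(distinct)   # f has 3 minima per axis on [0, 2*pi), so 9 distinct basins
```

On QAOA landscapes the abstract's point is precisely that this count grows superpolynomially while most basins are low quality, so random restarts stop finding good parameters beyond small sizes.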

A new method is explored for analyzing the performance of coset codes over the binary erasure wiretap channel (BEWC) by decomposing the code over subspaces of the code space. This technique leads to an improved algorithm for calculating equivocation loss. It also provides a continuous-valued function for equivocation loss, permitting proofs of local optimality for certain finite-blocklength code constructions, including a code formed by excluding from the generator matrix all columns which lie within a particular subspace. Subspace decomposition is also used to explore the properties of an alternative secrecy code metric, the chi-squared divergence. The chi-squared divergence is shown to be far simpler to calculate than equivocation loss. Additionally, the codes which are shown to be locally optimal in terms of equivocation are also proved to be globally optimal in terms of chi-squared divergence.

Large Language Models (LLMs) have shown excellent generalization capabilities, which has led to the development of numerous models. These models propose new architectures, tweak existing architectures with refined training strategies, increase context length, use higher-quality training data, and increase training time to outperform baselines. Analyzing new developments is crucial for identifying changes that enhance training stability and improve generalization in LLMs. This survey comprehensively analyses LLM architectures and their categorization, training strategies, training datasets, and performance evaluations, and discusses future research directions. Moreover, the paper discusses the basic building blocks and concepts behind LLMs, followed by a complete overview of LLMs, including their important features and functions. Finally, the paper summarizes significant findings from LLM research and consolidates essential architectural and training strategies for developing advanced LLMs. Given the continuous advancements in LLMs, we intend to update this paper regularly by incorporating new sections and featuring the latest LLM models.

Game theory has by now found numerous applications in various fields, including economics, industry, jurisprudence, and artificial intelligence, where each player cares only about its own interest, in either a noncooperative or a cooperative manner, but without obvious malice toward other players. However, in many practical applications, such as poker, chess, pursuit-evasion, drug interdiction, coast guard operations, cyber-security, and national defense, players often take apparently adversarial stances; that is, the selfish actions of each player inevitably or intentionally inflict loss on, or wreak havoc upon, other players. Along this line, this paper provides a systematic survey of the three main game models widely employed in adversarial games, i.e., zero-sum normal-form and extensive-form games, Stackelberg (security) games, and zero-sum differential games, from an array of perspectives, including basic knowledge of the game models, (approximate) equilibrium concepts, problem classifications, research frontiers, (approximate) optimal strategy seeking techniques, prevailing algorithms, and practical applications. Finally, promising future research directions are discussed for relevant adversarial games.
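As a small, self-contained example of one of the strategy-seeking techniques surveyed here, regret matching (the self-play scheme underlying counterfactual-regret methods used for poker) drives time-averaged strategies in a zero-sum normal-form game toward a Nash equilibrium; the 2x2 payoff matrix below is our own toy choice:

```python
import numpy as np

# Regret matching on a 2x2 zero-sum normal-form game: each player plays
# proportionally to its positive cumulative regrets; the time-averaged
# strategies converge to a Nash equilibrium.
A = np.array([[2.0, -1.0],
              [-1.0, 1.0]])    # row player maximizes, column minimizes
# This game's unique equilibrium is (0.4, 0.6) for both players, value 0.2.

def rm_strategy(regrets):
    pos = np.maximum(regrets, 0.0)
    return pos / pos.sum() if pos.sum() > 0 else np.full(len(regrets), 1.0 / len(regrets))

reg_row = np.zeros(2); reg_col = np.zeros(2)
avg_row = np.zeros(2); avg_col = np.zeros(2)
for t in range(100000):
    p = rm_strategy(reg_row); q = rm_strategy(reg_col)
    u_row = A @ q                   # row's expected payoff per action
    u_col = -(p @ A)                # column's expected payoff per action
    reg_row += u_row - p @ u_row    # regret for not playing each action
    reg_col += u_col - q @ u_col
    avg_row += p; avg_col += q

print(avg_row / avg_row.sum(), avg_col / avg_col.sum())  # both approach (0.4, 0.6)
```

The same averaging argument is what makes such no-regret dynamics a workhorse for the approximate-equilibrium computation discussed in the survey.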

Cold-start problems are a long-standing challenge for practical recommendation. Most existing recommendation algorithms rely on extensive observed data and are brittle in recommendation scenarios with few interactions. This paper addresses such problems using few-shot learning and meta-learning. Our approach is based on the insight that good generalization from a few examples relies on both a generic model initialization and an effective strategy for adapting this model to newly arising tasks. To accomplish this, we combine scenario-specific learning with model-agnostic sequential meta-learning and unify them into an integrated end-to-end framework, the Scenario-specific Sequential Meta learner (s^2 meta). Our meta-learner produces a generic initial model by aggregating contextual information from a variety of prediction tasks, while effectively adapting to specific tasks by leveraging learning-to-learn knowledge. Extensive experiments on various real-world datasets demonstrate that the proposed model achieves significant gains over state-of-the-art methods for cold-start problems in online recommendation. The model is deployed in the "Guess You Like" section of the Mobile Taobao front page.
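The insight about a generic initialization plus fast adaptation can be sketched with a first-order MAML-style loop on toy one-dimensional regression tasks; this is only a schematic stand-in, not the s^2 meta architecture (its scenario-specific modules and sequential model are omitted):

```python
import numpy as np

# First-order MAML sketch on toy 1-D regression tasks y = a * x:
# meta-train an initialization w so that one gradient step on a new
# task's data already fits that task well.
rng = np.random.default_rng(0)
xs = rng.uniform(-1.0, 1.0, 50)            # shared inputs for all tasks

def task_loss(w, a):
    return np.mean((w * xs - a * xs) ** 2)  # squared error for slope a

def task_grad(w, a):
    return 2.0 * (w - a) * np.mean(xs ** 2)

def adapt(w, a, alpha=0.5, steps=1):
    for _ in range(steps):                  # inner-loop adaptation
        w = w - alpha * task_grad(w, a)
    return w

tasks = [-1.0, 1.0, 0.5, -0.5]              # each task is a slope a
w, beta = 2.0, 0.2                          # meta-initialization, meta-lr

def meta_loss(w):
    return np.mean([task_loss(adapt(w, a), a) for a in tasks])

before = meta_loss(w)
for _ in range(200):                        # first-order outer loop:
    g = np.mean([task_grad(adapt(w, a), a) for a in tasks])
    w = w - beta * g                        # gradient at the adapted weights
print(before, meta_loss(w), w)              # post-adaptation loss drops
```

In the cold-start analogy, each slope plays the role of a new scenario with few interactions, and the meta-trained initialization is what lets one adaptation step suffice.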
