
We initiate a formal investigation into the design and analysis of LLM-based algorithms, i.e., algorithms that contain one or more calls to large language models (LLMs) as sub-routines and critically rely on the capabilities of LLMs. While LLM-based algorithms, ranging from basic LLM calls with prompt engineering to complicated LLM-powered agent systems and compound AI systems, have achieved remarkable empirical success, their design and optimization have mostly relied on heuristics and trial and error, largely due to a lack of formal and analytical study of these algorithms. To fill this gap, we start by identifying the computational-graph representation of LLM-based algorithms, the design principle of task decomposition, and some key abstractions, which then facilitate our formal analysis of the accuracy and efficiency of LLM-based algorithms, despite the black-box nature of LLMs. Through extensive analytical and empirical investigation in a series of case studies, we demonstrate that the proposed framework is broadly applicable to a wide range of scenarios and diverse patterns of LLM-based algorithms, such as parallel, hierarchical and recursive task decomposition. Our proposed framework holds promise for advancing LLM-based algorithms by revealing the reasons behind curious empirical phenomena, guiding the choices of hyperparameters, predicting the empirical performance of algorithms, and inspiring new algorithm designs. To promote further study of LLM-based algorithms, we release our source code at https://github.com/modelscope/agentscope/tree/main/examples/paper_llm_based_algorithm.
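
To make the parallel-decomposition pattern concrete, here is a minimal Python sketch of an LLM-based algorithm viewed as a computational graph: independent sub-task nodes, each a single LLM call, followed by an aggregation node. The function `llm_call` and the word-counting task are hypothetical placeholders of our own, not part of the paper's released code.

```python
# A minimal sketch (ours) of an LLM-based algorithm as a computational graph
# with parallel task decomposition. `llm_call` is a hypothetical stand-in
# for any LLM client, and the word-counting task is purely illustrative.
from concurrent.futures import ThreadPoolExecutor

def llm_call(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def solve_chunk(chunk: str) -> str:
    # One node of the graph: a single LLM call on one sub-task.
    return llm_call(f"Count occurrences of the word 'cat' in: {chunk}")

def parallel_decomposition(text: str, chunk_size: int = 1000) -> str:
    # Decompose the task into m independent sub-tasks (parallel pattern),
    # solve each with one LLM call, then aggregate with a final call.
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(solve_chunk, chunks))
    return llm_call("Sum these per-chunk counts: " + ", ".join(partials))
```

The framework's analysis then proceeds over such graphs, e.g., trading off the chunk size against per-call accuracy and total cost.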

Related Content

We study the computational complexity of computing Bayes-Nash equilibria in first-price auctions with discrete value distributions and discrete bidding space, under general subjective beliefs. It is known that such auctions do not always have pure equilibria. In this paper, we prove that the problem of deciding their existence is NP-complete, even for approximate equilibria. On the other hand, it can be shown that mixed equilibria are guaranteed to exist; however, their computational complexity has not been studied before. We establish the PPAD-completeness of computing a mixed equilibrium, and we complement this with an efficient algorithm for finding symmetric approximate equilibria in the special case of iid priors. En route to these results, we develop a computational equivalence framework between continuous and discrete first-price auctions, which may be of independent interest and which allows us to transfer existing positive and negative results from one setting to the other. Finally, we show that correlated equilibria of the auction can be computed in polynomial time.
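
The following sketch (ours, not from the paper) illustrates the basic primitive behind such equilibrium computations: a bidder's expected utility and best response on a discrete bid grid against a known opponent bid distribution, with ties broken uniformly at random. The helper names `expected_utility` and `best_response` are our own.

```python
# Expected utility and best response in a two-bidder discrete first-price
# auction, given the opponent's (possibly mixed) bid distribution.
# Ties are broken uniformly at random.

def expected_utility(value: float, bid: float, opp_dist: dict) -> float:
    # P(win) = P(opponent bids lower) + (1/2) * P(opponent ties).
    p_lower = sum(p for b, p in opp_dist.items() if b < bid)
    p_tie = opp_dist.get(bid, 0.0)
    return (value - bid) * (p_lower + 0.5 * p_tie)

def best_response(value: float, bids: list, opp_dist: dict) -> float:
    return max(bids, key=lambda b: expected_utility(value, b, opp_dist))

# Example: value 3, bid grid {0, 1, 2}, opponent bids 0 or 1 with equal odds.
print(best_response(3.0, [0.0, 1.0, 2.0], {0.0: 0.5, 1.0: 0.5}))  # -> 1.0
```

Checking whether every bid in a candidate profile's support is such a best response is the verification step underlying both the NP-hardness result for pure equilibria and the PPAD membership argument for mixed ones.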

The template design problem (TDP) is a hard combinatorial problem with a high number of symmetries, which makes solving it more complicated. A number of techniques have been proposed in the literature to optimise its resolution, ranging from complete methods to stochastic ones. However, although metaheuristics are considered efficient methods that can find solutions of sufficient quality at a reasonable computational cost, these techniques have not yet proven truly efficient at dealing with this problem. This paper explores and analyses a wide range of metaheuristics to tackle the problem, with the aim of assessing their suitability for finding template designs. We tackle the problem with a wide set of metaheuristics whose implementation is guided by a number of issues, such as problem formulation, solution encoding, the symmetrical nature of the problem, and distinct forms of hybridisation. For the TDP, we also propose a slot-based alternative problem formulation (distinct from other slot-based proposals), which offers an option beyond the classical variation-based formulation of the problem. An empirical analysis assessing the performance of all the metaheuristics (i.e., basic, integrative and collaborative algorithms working on different search spaces, with and without symmetry breaking) shows that some of our proposals can be considered state-of-the-art when applied to specific problem instances.
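
As a rough illustration of what a slot-based encoding has to evaluate (a generic sketch under our own assumptions, not necessarily the paper's exact formulation), the following scores a candidate assignment of variations to template slots, together with press-run lengths, by total overproduction against the demands:

```python
# Hedged sketch: each template is a tuple of slot assignments (variation
# indices); run lengths determine how much of each variation is produced.
# Fitness = total overproduction, infinite if some demand is unmet.

def production(templates, runs, n_variations):
    # produced[i] = sum over templates of (run length * slots holding i)
    produced = [0] * n_variations
    for slots, r in zip(templates, runs):
        for i in slots:
            produced[i] += r
    return produced

def overproduction(templates, runs, demands):
    prod = production(templates, runs, len(demands))
    if any(p < d for p, d in zip(prod, demands)):
        return float("inf")  # infeasible: demand not met
    return sum(p - d for p, d in zip(prod, demands))

# Example: 3 variations with demands (250, 255, 260), two 3-slot templates,
# run lengths chosen by hand for illustration.
templates = [(0, 1, 2), (1, 2, 2)]
print(overproduction(templates, runs=[255, 10], demands=[250, 255, 260]))  # 30
```

Permuting slots within a template (or whole templates) leaves this score unchanged, which is exactly the kind of symmetry that symmetry-breaking variants of the metaheuristics exploit.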

All machine learning algorithms use a loss, cost, utility or reward function to encode the learning objective and oversee the learning process. This function that supervises learning is a frequently unrecognized hyperparameter that determines how incorrect outputs are penalized, and it can be tuned to improve performance. This paper shows that the training speed and final accuracy of neural networks can significantly depend on the loss function used to train them. In particular, derivative values can differ significantly across loss functions, leading to significantly different performance after gradient-descent-based backpropagation (BP) training. This paper explores the effect on performance of new loss functions that are also convex but penalize errors differently from the popular cross-entropy loss. Two new classification loss functions that significantly improve performance on a wide variety of benchmark tasks are proposed. A new loss function called smooth absolute error, which outperforms the squared error, Huber and log-cosh losses on datasets with many outliers, is also proposed. This smooth absolute error loss function is infinitely differentiable and approximates the absolute error loss more closely than the Huber and log-cosh losses used for robust regression.
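
The abstract does not give the paper's exact formula for the smooth absolute error, so the sketch below uses a standard infinitely differentiable approximation of |e|, the Charbonnier loss sqrt(e^2 + eps^2) - eps, purely as a stand-in, and compares its values with Huber and log-cosh both near and far from the kink at e = 0:

```python
# Hedged sketch: three robust-regression losses on the residual e. The
# Charbonnier form below is our stand-in for "an infinitely differentiable
# loss that closely tracks |e|"; it is not the paper's definition.
import math

def smooth_abs(e, eps=0.1):           # C-infinity; -> |e| as eps -> 0
    return math.sqrt(e * e + eps * eps) - eps

def huber(e, delta=1.0):              # quadratic near 0, linear in tails
    a = abs(e)
    return 0.5 * e * e if a <= delta else delta * (a - 0.5 * delta)

def log_cosh(e):
    return math.log(math.cosh(e))

for e in (0.05, 0.5, 5.0):
    print(f"e={e:4}:  smooth_abs={smooth_abs(e):.4f}  "
          f"huber={huber(e):.4f}  log_cosh={log_cosh(e):.4f}")
```

The point of the comparison is the derivative behaviour the abstract highlights: near zero, the quadratic-like losses shrink gradients toward zero, while an |e|-like loss keeps them close to constant magnitude, which changes how BP weights small versus large residuals.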

We consider the problem of estimating the number of clusters (k) in a dataset. We propose a non-parametric approach to the problem that utilizes similarity graphs to construct a robust statistic that effectively captures similarity information among observations. This graph-based statistic is applicable to datasets of any dimension, is computationally efficient to obtain, and can be paired with any kind of clustering technique. Asymptotic theory is developed to establish the selection consistency of the proposed approach. Simulation studies demonstrate that the graph-based statistic outperforms existing methods for estimating k, especially in the high-dimensional setting. We illustrate its utility on an imaging dataset and an RNA-seq dataset.
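
As a hedged illustration of the "similarity graph + any clustering technique" pattern (ours, not the paper's statistic, which comes with its own selection-consistency theory), the following builds a k-NN graph and, for each candidate number of clusters, scores the fraction of graph edges that stay within clusters:

```python
# Illustrative only: the within-cluster edge fraction below is a simplistic
# stand-in for the paper's graph-based statistic and selection rule.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import kneighbors_graph

def within_cluster_edge_fraction(X, labels, n_neighbors=5):
    G = kneighbors_graph(X, n_neighbors=n_neighbors, mode="connectivity")
    rows, cols = G.nonzero()
    return np.mean(labels[rows] == labels[cols])

rng = np.random.default_rng(0)
# 3 well-separated clusters of 50 points each, in 10 dimensions.
X = np.vstack([rng.normal(c, 0.3, size=(50, 10)) for c in (0.0, 2.0, 4.0)])
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, round(within_cluster_edge_fraction(X, labels), 3))
```

Because the graph only records neighborhood relations, a statistic of this kind remains computable and informative even when the ambient dimension is high, which is where the paper reports the largest gains.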

The quantum no-cloning theorem gives rise to the intriguing possibility of quantum copy protection, where we encode a program or functionality in a quantum state such that a user in possession of k copies cannot create k+1 copies, for any k. Introduced by Aaronson (CCC'09) over a decade ago, copy protection has proven to be notoriously hard to achieve. Previous work has been able to achieve copy-protection for various functionalities only in restricted models: (i) in the bounded collusion setting, where k -> k+1 security is achieved for an a-priori fixed collusion bound k (in the plain model with the same computational assumptions as ours, by Liu, Liu, Qian, Zhandry [TCC'22]), or (ii) where only k -> 2k security is achieved (relative to a structured quantum oracle, by Aaronson [CCC'09]). In this work, we give the first unbounded collusion-resistant (i.e., multiple-copy secure) copy-protection schemes, answering the long-standing open question of constructing such schemes, raised by multiple previous works starting with Aaronson (CCC'09). More specifically, we obtain the following results.
- We construct (i) public-key encryption, (ii) public-key functional encryption, (iii) signature and (iv) pseudorandom function schemes whose keys are copy-protected against unbounded collusions in the plain model (i.e., without any idealized oracles), assuming (post-quantum) subexponentially secure iO and LWE.
- We show that any unlearnable functionality can be copy-protected against unbounded collusions, relative to a classical oracle.
- As a corollary of our results, we rule out the existence of hyperefficient quantum shadow tomography, (a) even given non-black-box access to the measurements, assuming subexponentially secure iO and LWE, or (b) unconditionally relative to a quantumly accessible classical oracle, and hence answer an open question by Aaronson (STOC'18).

In this paper, we consider interpolation by \textit{completely monotonous} polynomials (CMPs for short), that is, polynomials with non-negative real coefficients. In particular, given a finite set $S\subset \mathbb{R}_{>0} \times \mathbb{R}_{\geq 0}$, we consider \textit{the minimal polynomial} of $S$, introduced by Berg [1985], which is `minimal' in the sense that it is eventually majorized by all the other CMPs interpolating $S$. We give an upper bound on the degree of the minimal polynomial of $S$ when it exists. Furthermore, we give another algorithm for computing the minimal polynomial of a given $S$, which utilizes an order structure on sign sequences. Applying the upper bound above, we also analyze the computational complexity of algorithms for computing minimal polynomials, including ours.
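
A small worked example of minimality (ours, not from the paper) for the one-point set $S=\{(1,2)\}$:

```latex
% Every CMP $p(x)=\sum_{i\ge 0} a_i x^i$ with $a_i \ge 0$ has nonnegative
% derivative on $[0,\infty)$, so it is nondecreasing there. Hence any CMP
% interpolating $S=\{(1,2)\}$ satisfies
\[
  p(1) = 2, \quad a_i \ge 0
  \;\Longrightarrow\; p(x) \;\ge\; p(1) \;=\; 2
  \quad \text{for all } x \ge 1 .
\]
% The constant CMP $q(x) \equiv 2$ also interpolates $S$ and is therefore
% eventually majorized by every other interpolating CMP, so $q$ is the
% minimal polynomial of $S$ in Berg's sense.
```

For larger sets $S$ the minimal polynomial need not exist, which is why the degree bound above is stated conditionally.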

The prophet secretary problem is a combination of the prophet inequality and the secretary problem, where elements are drawn from known independent distributions and arrive in uniformly random order. In this work, we design 1) a $0.688$-competitive algorithm that breaks the $0.675$ barrier of blind strategies (Correa, Saona, Ziliotto, 2021), and 2) a $0.641$-competitive algorithm for the prophet secretary matching problem that breaks the $1-1/e\approx 0.632$ barrier for the first time. Our second result also applies to the query-commit model of weighted stochastic matching and improves the state-of-the-art ratio (Derakhshan and Farhadi, 2023).
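
For intuition, here is a simplified simulation (ours) of a time-decreasing threshold rule in the spirit of blind strategies: with n iid U[0,1] values, accept the first arrival at time t whose value reaches the (1 - t)-quantile of the maximum, tau(t) = (1 - t)^(1/n). The specific threshold curves achieving the 0.675 and 0.688 guarantees in the cited works are more carefully engineered and are not reproduced here.

```python
# Empirical competitive ratio of a simple blind-style threshold rule
# against the prophet's value E[max], for n iid U[0,1] items arriving
# at independent uniform times.
import random

def simulate(n=20, trials=20000):
    total_alg = total_opt = 0.0
    for _ in range(trials):
        items = sorted((random.random(), random.random()) for _ in range(n))
        best = max(v for _, v in items)
        picked = 0.0
        for t, v in items:                    # arrival-time order
            if v >= (1.0 - t) ** (1.0 / n):   # tau(t) = F_max^{-1}(1 - t)
                picked = v
                break
        total_alg += picked
        total_opt += best
    return total_alg / total_opt

print(round(simulate(), 3))
```

The threshold starts at the top quantile (accept almost nothing early) and decays to zero, trading off the risk of stopping too early against the risk of ending empty-handed.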

We consider a matching problem in a bipartite graph $G$ where every vertex has a capacity and a strict preference order on its neighbors. Furthermore, there is a cost function on the edge set. We assume $G$ admits a perfect matching, i.e., one that fully matches all vertices. Only perfect matchings are feasible for us, and we are interested in those that are popular within the set of perfect matchings. It is known that such matchings (called popular perfect matchings) always exist and can be efficiently computed. What we seek here is not any popular perfect matching, but a min-cost one. We show a polynomial-time algorithm for finding such a matching; this is via a characterization of popular perfect matchings in $G$ in terms of stable matchings in a colorful auxiliary instance. This is a generalization of such a characterization that was known in the one-to-one setting.
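
The popularity comparison underlying this problem is a head-to-head vote, sketched below in the one-to-one case (a generic illustration, not the paper's algorithm): each vertex votes for the matching that gives it a preferred partner, and a matching is popular if it never collects strictly fewer votes than a rival.

```python
# Count the votes when matching M faces matching M_prime. pref[v] is v's
# strict preference list (earlier = better); an unmatched vertex is
# treated as worst off. Both M and M_prime map each vertex to its partner.

def votes(M, M_prime, pref):
    for_m = for_mp = 0
    for v, ranking in pref.items():
        a, b = M.get(v), M_prime.get(v)
        if a == b:
            continue  # v is indifferent between the two matchings
        ra = ranking.index(a) if a in ranking else len(ranking)
        rb = ranking.index(b) if b in ranking else len(ranking)
        if ra < rb:
            for_m += 1
        else:
            for_mp += 1
    return for_m, for_mp

pref = {"a": ["x", "y"], "b": ["y", "x"], "x": ["b", "a"], "y": ["a", "b"]}
M  = {"a": "x", "b": "y", "x": "a", "y": "b"}
Mp = {"a": "y", "b": "x", "x": "b", "y": "a"}
print(votes(M, Mp, pref))  # (2, 2): neither matching beats the other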

Solving societal problems on a global scale requires the collection and processing of ideas and methods from diverse sets of international experts. As the number and diversity of human experts increase, so does the likelihood that elements in this collective knowledge can be combined and refined to discover novel and better solutions. However, it is difficult to identify, combine, and refine complementary information in an increasingly large and diverse knowledge base. This paper argues that artificial intelligence (AI) can play a crucial role in this process. An evolutionary AI framework, termed RHEA, fills this role by distilling knowledge from diverse models created by human experts into equivalent neural networks, which are then recombined and refined in a population-based search. The framework was implemented in a formal synthetic domain, demonstrating that it is transparent and systematic. It was then applied to the results of the XPRIZE Pandemic Response Challenge, in which over 100 teams of experts across 23 countries submitted models based on diverse methodologies to predict COVID-19 cases and suggest non-pharmaceutical intervention policies for 235 nations, states, and regions across the globe. Building upon this expert knowledge, by recombining and refining the 169 resulting policy suggestion models, RHEA discovered a broader and more effective set of policies than either AI or human experts alone, as evaluated based on real-world data. The results thus suggest that AI can play a crucial role in realizing the potential of human expertise in global problem-solving.
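
Schematically, the "recombine and refine in a population-based search" step can be pictured as a standard evolutionary loop over weight vectors, as in the hedged sketch below (ours; the real system's distillation, crossover and fitness-evaluation details are in the paper):

```python
# Generic evolutionary search over flat weight vectors: keep an elite,
# create children by averaging two elite parents (crossover) plus Gaussian
# noise (mutation), and return the fittest individual found.
import random

def evolve(population, fitness, generations=50, elite=0.2, mut_sigma=0.05):
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        keep = scored[: max(2, int(elite * len(scored)))]
        children = []
        while len(keep) + len(children) < len(population):
            p1, p2 = random.sample(keep, 2)
            child = [(a + b) / 2 + random.gauss(0, mut_sigma)
                     for a, b in zip(p1, p2)]
            children.append(child)
        population = keep + children
    return max(population, key=fitness)

# Toy usage: maximize -||w||^2 over 5-dimensional weight vectors.
pop = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(20)]
print(evolve(pop, fitness=lambda w: -sum(x * x for x in w))[:3])
```

In the RHEA setting the individuals are networks distilled from the experts' heterogeneous models, so the search starts from, and recombines, human knowledge rather than random initializations.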

Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, Percy Liang

AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character. This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles (e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications (e.g., law, healthcare, education) and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations). Though foundation models are based on standard deep learning and transfer learning, their scale results in new emergent capabilities, and their effectiveness across so many tasks incentivizes homogenization. Homogenization provides powerful leverage but demands caution, as the defects of the foundation model are inherited by all the adapted models downstream. Despite the impending widespread deployment of foundation models, we currently lack a clear understanding of how they work, when they fail, and what they are even capable of, due to their emergent properties. To tackle these questions, we believe much of the critical research on foundation models will require deep interdisciplinary collaboration commensurate with their fundamentally sociotechnical nature.
