
Solving societal problems on a global scale requires the collection and processing of ideas and methods from diverse sets of international experts. As the number and diversity of human experts increase, so does the likelihood that elements in this collective knowledge can be combined and refined to discover novel and better solutions. However, it is difficult to identify, combine, and refine complementary information in an increasingly large and diverse knowledge base. This paper argues that artificial intelligence (AI) can play a crucial role in this process. An evolutionary AI framework, termed RHEA, fills this role by distilling knowledge from diverse models created by human experts into equivalent neural networks, which are then recombined and refined in a population-based search. The framework was implemented in a formal synthetic domain, demonstrating that it is transparent and systematic. It was then applied to the results of the XPRIZE Pandemic Response Challenge, in which over 100 teams of experts across 23 countries submitted models based on diverse methodologies to predict COVID-19 cases and suggest non-pharmaceutical intervention policies for 235 nations, states, and regions across the globe. Building upon this expert knowledge, by recombining and refining the 169 resulting policy suggestion models, RHEA discovered a broader and more effective set of policies than either AI or human experts alone, as evaluated based on real-world data. The results thus suggest that AI can play a crucial role in realizing the potential of human expertise in global problem-solving.
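The recombine-and-refine loop at the heart of such an evolutionary framework can be sketched in a few lines. The snippet below is a generic illustration under stated assumptions, not RHEA's actual implementation: it assumes expert models have already been distilled into flat weight vectors, and the names `crossover`, `mutate`, `evolve`, and `fitness` are placeholders for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def crossover(a, b):
    """Uniform crossover: each weight is inherited from either parent."""
    mask = rng.random(a.shape) < 0.5
    return np.where(mask, a, b)

def mutate(w, sigma=0.01):
    """Gaussian mutation: local refinement of the recombined weights."""
    return w + rng.normal(0.0, sigma, w.shape)

def evolve(population, fitness, generations=100):
    """population: list of flat weight vectors (assumed here to be distilled
    from expert models); fitness: maps a weight vector to a score, where
    higher means a better real-world evaluation."""
    for _ in range(generations):
        ranked = sorted(population, key=fitness)
        parents = ranked[len(ranked) // 2:]   # truncation selection
        children = []
        while len(parents) + len(children) < len(population):
            i, j = rng.choice(len(parents), size=2, replace=False)
            children.append(mutate(crossover(parents[i], parents[j])))
        population = parents + children
    return max(population, key=fitness)
```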

Related Content

Middleware, third-party software intermediaries between users and platforms, has been broached as a means to decentralize the power of social media platforms and enhance user agency. Middleware may enable a more user-centric and democratic approach to shaping digital experiences, offering a flexible architecture as an alternative to both centrally controlled, opaque platforms and an unmoderated, uncurated internet. The widespread adoption of open middleware has long hinged on the cooperation of established major platforms; however, the recent growth of federated platforms, such as Mastodon and Bluesky, has led to increased offerings and user awareness. In this report we consider the potential of middleware as a means of enabling greater user control over curation and moderation - two aspects of the social media experience that are often mired in controversy. We evaluate the trade-offs and negative externalities it might create, and discuss the technological, regulatory, and market dynamics that could either support or hinder its implementation.

We consider the optimal experimental design problem of allocating subjects to treatment or control when subjects participate in multiple, separate controlled experiments within a short time frame and subject covariate information is available. Here, in addition to subject covariates, we consider the dependence among responses from the same subject across experiments induced by the subject's random effect. In this setting, the goal of the allocation is to provide precise estimates of the treatment effects for each experiment. Deriving the precision matrix of the treatment effects and using D-optimality as our allocation criterion, we demonstrate the advantage of collaboratively designing and analyzing multiple experiments over traditional independent design and analysis, and we propose two randomized algorithms to solve the D-optimality problem for collaborative design. The first algorithm decomposes the D-optimality problem into a sequence of subproblems, where each subproblem is a quadratic binary program that can be solved through a semi-definite-relaxation-based randomized algorithm with performance guarantees. The second algorithm involves solving a single semi-definite program and randomly generating allocations for each experiment from the solution of this program. We showcase the performance of these algorithms through a simulation study, finding that our algorithms outperform covariate-agnostic methods when there are a large number of covariates.
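To make the allocation criterion concrete, the following toy sketch scores balanced treatment/control allocations by the log-determinant of a joint information matrix under a simple linear model. It is an illustration only: the paper's precision matrix additionally models subject random effects shared across experiments, and the naive random search below merely stands in for the proposed SDP-based randomized algorithms.

```python
import numpy as np

rng = np.random.default_rng(1)

def d_criterion(z, X):
    """log-det of the joint information matrix for (tau, beta) under the toy
    model y = tau*z + X @ beta + noise, where z is a +/-1 allocation vector.
    Simplification: the paper's precision matrix also captures subject
    random effects shared across experiments."""
    zX = z @ X
    M = np.block([[np.array([[len(z)]]), zX[None, :]],
                  [zX[:, None], X.T @ X]])
    sign, logdet = np.linalg.slogdet(M)
    return logdet if sign > 0 else -np.inf

def best_of_random(X, n_draws=2000):
    """Naive baseline: sample balanced +/-1 allocations (n assumed even) and
    keep the best; a stand-in for the SDP-based randomized algorithms."""
    base = np.repeat([1.0, -1.0], X.shape[0] // 2)
    return max((rng.permutation(base) for _ in range(n_draws)),
               key=lambda z: d_criterion(z, X))
```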

We present Bluebell, a program logic for reasoning about probabilistic programs in which unary and relational styles of reasoning come together to create new reasoning tools. Unary-style reasoning is very expressive and is powered by foundational mechanisms for reasoning about probabilistic behaviour such as independence and conditioning. The relational style of reasoning, on the other hand, naturally shines when the properties of interest compare the behaviour of similar programs (e.g., when proving differential privacy), avoiding the need to characterize the output distributions of the individual programs. So far, the two styles of reasoning have largely remained separate in the many program logics designed for the deductive verification of probabilistic programs. In Bluebell, we unify these styles of reasoning through the introduction of a new modality called "joint conditioning" that can encode and illuminate the rich interaction between conditional independence and relational liftings, the two powerhouses of the two styles of reasoning.

We study the Boolean Satisfiability problem (SAT) in the framework of diversity, where one asks for multiple solutions that are mutually far apart (i.e., sufficiently dissimilar from each other) for a suitable notion of distance/dissimilarity between solutions. Interpreting assignments as bit vectors, we take their Hamming distance to quantify dissimilarity, and we focus on the problem of finding two solutions. Specifically, we define the problem MAX DIFFER SAT (resp. EXACT DIFFER SAT) as follows: Given a Boolean formula $\phi$ on $n$ variables, decide whether $\phi$ has two satisfying assignments that differ on at least (resp. exactly) $d$ variables. We study the classical and parameterized (in the parameters $d$ and $n-d$) complexities of MAX DIFFER SAT and EXACT DIFFER SAT when restricted to formula classes on which SAT is known to be polynomial-time solvable. In particular, we consider affine formulas, $2$-CNF formulas, and hitting formulas. For affine formulas, we show the following: Both problems are polynomial-time solvable when each equation has at most two variables. EXACT DIFFER SAT is NP-hard, even when each equation has at most three variables and each variable appears in at most four equations. Also, MAX DIFFER SAT is NP-hard, even when each equation has at most four variables. Both problems are W[1]-hard in the parameter $n-d$. In contrast, when parameterized by $d$, EXACT DIFFER SAT is W[1]-hard, but MAX DIFFER SAT admits a single-exponential FPT algorithm and a polynomial kernel. For 2-CNF formulas, we show the following: Both problems are polynomial-time solvable when each variable appears in at most two clauses. Also, both problems are W[1]-hard in the parameter $d$ (and therefore, it turns out, also NP-hard), even on monotone inputs (i.e., formulas with no negative literals). Finally, for hitting formulas, we show that both problems are polynomial-time solvable.
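A brute-force reference implementation makes the problem statement concrete. The sketch below enumerates the satisfying assignments of a small CNF formula and searches for a pair at Hamming distance at least $d$; it runs in exponential time and serves as a specification of MAX DIFFER SAT, not as the polynomial-time or FPT algorithms discussed above.

```python
from itertools import product, combinations

def satisfies(assign, cnf):
    """cnf: list of clauses; a clause is a list of ints, +i for x_i and -i
    for its negation (variables are 1-indexed, DIMACS style)."""
    return all(any((lit > 0) == assign[abs(lit) - 1] for lit in clause)
               for clause in cnf)

def max_differ_sat(cnf, n, d):
    """Return two satisfying assignments with Hamming distance >= d, or None.
    Exponential in n; intended only as an executable specification."""
    sols = [a for a in product([False, True], repeat=n) if satisfies(a, cnf)]
    for a, b in combinations(sols, 2):
        if sum(x != y for x, y in zip(a, b)) >= d:
            return a, b
    return None

# Example: (x1 or x2) and (not x1 or x3), with n=3 and d=2
print(max_differ_sat([[1, 2], [-1, 3]], n=3, d=2))
```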

Traditional data influence estimation methods, such as the influence function, assume that learning algorithms are permutation-invariant with respect to the training data. However, modern training paradigms, especially for foundation models trained with stochastic algorithms and multi-stage curricula, are sensitive to data ordering and thus violate this assumption. This mismatch renders influence functions inadequate for answering a critical question in machine learning: How can we capture the dependence of data influence on the optimization trajectory during training? To address this gap, we formalize the concept of trajectory-specific leave-one-out (LOO) influence, which quantifies the impact of removing a data point from a specific iteration during training, accounting for the exact sequence of data encountered and the model's optimization trajectory. However, exactly evaluating the trajectory-specific LOO presents a significant computational challenge. To overcome this challenge, we propose data value embedding, a novel technique enabling efficient approximation of the trajectory-specific LOO. Specifically, we compute a training data embedding that encapsulates the cumulative interactions between the data and the evolving model parameters. The LOO can then be efficiently approximated through a simple dot product between the data value embedding and the gradient of the given test data. As data value embedding captures training data ordering, it offers valuable insights into model training dynamics. In particular, we uncover distinct phases of data influence, revealing that data points in the early and late stages of training exert a greater impact on the final model. These insights translate into actionable strategies for managing the computational overhead of data selection by strategically timing the selection process, potentially opening new avenues in data curation research.
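The scoring step can be illustrated with a minimal first-order sketch. The snippet below accumulates a per-example embedding from gradients along the training trajectory and scores influence with a single dot product; it is a simplified stand-in (closer to summed gradient dot-products) and omits the corrections for interactions with later updates that the actual data value embedding performs.

```python
import torch

def grad_vector(model, loss_fn, x, y):
    """Flatten the per-example gradient into a single vector."""
    model.zero_grad()
    loss_fn(model(x), y).backward()
    return torch.cat([p.grad.reshape(-1) for p in model.parameters()
                      if p.grad is not None])

# During training: after the step that consumed example i at learning rate lr,
#     embedding[i] += lr * grad_vector(model, loss_fn, x_i, y_i)
# At evaluation: the (approximate) influence of example i on test point
# (x_t, y_t) is one dot product:
#     embedding[i] @ grad_vector(model, loss_fn, x_t, y_t)
```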

The evaluation of image generators remains a challenge due to the limitations of traditional metrics in providing nuanced insights into specific image regions. This is a critical problem, as not all regions of an image may be learned with similar ease. In this work, we propose a novel approach to disentangle the cosine similarity of mean embeddings into a product of cosine similarities for individual pixel clusters via centered kernel alignment. Consequently, we can quantify the contribution of each cluster's performance to the overall image generation performance. We demonstrate how this enhances explainability and the likelihood of identifying pixel regions of model misbehavior across various real-world use cases.
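A simplified view of the cluster-wise score can be written down directly. The sketch below restricts mean-embedding cosine similarity to each pixel cluster so that per-cluster scores can be inspected; the paper's contribution is stronger, deriving an exact product decomposition via centered kernel alignment, which this illustration does not reproduce.

```python
import numpy as np

def clusterwise_cosine(real, fake, labels):
    """real, fake: (n_images, n_pixels) feature arrays for real and generated
    images; labels: (n_pixels,) cluster index per pixel. Returns the cosine
    similarity of the mean embeddings restricted to each cluster's pixels."""
    scores = {}
    for c in np.unique(labels):
        mr = real[:, labels == c].mean(axis=0)
        mf = fake[:, labels == c].mean(axis=0)
        scores[c] = mr @ mf / (np.linalg.norm(mr) * np.linalg.norm(mf))
    return scores
```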

With the rapid development of artificial intelligence, robotics, and the Internet of Things, multi-robot systems are progressively acquiring human-like environmental perception and understanding capabilities, empowering them to complete complex tasks through autonomous decision-making and interaction. However, the Internet of Robotic Things (IoRT) faces significant challenges in terms of spectrum resources, sensing accuracy, communication latency, and energy supply. To address these issues, a reconfigurable intelligent surface (RIS)-aided IoRT network is proposed to enhance the overall performance of robotic communication, sensing, computation, and energy harvesting. In the case studies, by jointly optimizing parameters such as transceiver beamforming, robot trajectories, and RIS coefficients, solutions based on multi-agent deep reinforcement learning and multi-objective optimization are proposed for problems such as beamforming design, path planning, target sensing, and data aggregation. Numerical results demonstrate the effectiveness of the proposed solutions in improving the communication quality, sensing accuracy, and energy efficiency of RIS-aided IoRT networks while reducing computation error.
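One ingredient above, the role of the RIS coefficients, admits a compact numerical illustration. For a single-antenna link with cascaded channels h_n (transmitter to element) and g_n (element to receiver), the received gain |sum_n h_n e^{j theta_n} g_n| is maximized by the phase-aligning choice theta_n = -arg(h_n g_n); the toy sketch below compares this to a random configuration. The joint beamforming, trajectory, and sensing designs in the paper are far richer than this.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64                                    # number of RIS elements
h = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
g = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)

random_phase = np.exp(1j * rng.uniform(0, 2 * np.pi, N))
aligned_phase = np.exp(-1j * np.angle(h * g))   # coherent combining

print("random RIS gain :", abs(np.sum(h * random_phase * g)))
print("aligned RIS gain:", abs(np.sum(h * aligned_phase * g)))
```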

Recent improvements in the quality of text generated by large language models have spurred research into identifying machine-generated text. Such work often presents high-performing detectors. However, humans and machines can produce text in different styles and domains, and the impact of such variation on the performance of machine-generated text detection systems remains unclear. In this paper, we audit the classification performance of machine-generated text detectors by evaluating them on texts with varying writing styles. We find that classifiers are highly sensitive to stylistic changes and differences in text complexity, and in some cases degrade entirely to random classifiers. We further find that detection systems are particularly prone to misclassifying easy-to-read texts, while performing well on complex texts, raising concerns about the reliability of detection systems. We recommend that future work attend to the stylistic factors and reading difficulty levels of human-written and machine-generated text.
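The kind of audit described above can be prototyped in a few lines. The sketch below buckets detector accuracy by Flesch Reading Ease using the textstat package; `detector`, `texts`, and `labels` are hypothetical placeholders for a real detection system and evaluation set, and the band thresholds are arbitrary choices for this illustration.

```python
import textstat  # provides the standard flesch_reading_ease scorer

def accuracy_by_readability(detector, texts, labels, edges=(50.0, 70.0)):
    """Split texts into difficult/standard/easy bands by Flesch Reading Ease
    (higher = easier) and report detector accuracy within each band.
    detector: callable returning 1 for machine-generated, 0 for human."""
    bands = {"difficult": [], "standard": [], "easy": []}
    for text, label in zip(texts, labels):
        score = textstat.flesch_reading_ease(text)
        band = ("difficult" if score < edges[0]
                else "standard" if score < edges[1] else "easy")
        bands[band].append(detector(text) == label)
    return {b: (sum(v) / len(v) if v else None) for b, v in bands.items()}
```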

Finding unambiguous diagrammatic representations for first-order logical formulas and relational queries with arbitrarily nested disjunctions has been a surprisingly long-standing unsolved problem. We refer to this problem as the disjunction problem (of diagrammatic query representations). This work solves the disjunction problem. Our solution unifies, generalizes, and overcomes the shortcomings of prior approaches for disjunctions. It extends the recently proposed Relational Diagrams and is identical to them for disjunction-free queries. However, it can preserve the relational patterns and the safety of all well-formed Tuple Relational Calculus (TRC) queries, even with arbitrary disjunctions. Additionally, its size is proportional to that of the original TRC query and can thus be exponentially more succinct than Relational Diagrams.

Reasoning, a crucial ability for complex problem-solving, plays a pivotal role in various real-world settings such as negotiation, medical diagnosis, and criminal investigation. It also serves as a fundamental methodology in the field of Artificial General Intelligence (AGI). With the ongoing development of foundation models, e.g., Large Language Models (LLMs), there is growing interest in exploring their abilities in reasoning tasks. In this paper, we introduce seminal foundation models proposed or adaptable for reasoning, highlighting the latest advancements in various reasoning tasks, methods, and benchmarks. We then delve into potential future directions concerning the emergence of reasoning abilities within foundation models. We also discuss the relevance of multimodal learning, autonomous agents, and superalignment in the context of reasoning. By discussing these future research directions, we hope to inspire researchers in their exploration of this field, stimulate further advancements in reasoning with foundation models, and contribute to the development of AGI.
