
Let $W$ be a finite set which simultaneously serves as the universe of any poset $(W,\preceq)$ and as the vertex set of any graph $G$. Our algorithm, abbreviated A-I-I, enumerates (in a compressed format using don't-care symbols) all $G$-independent order ideals of $(W,\preceq)$. For certain instances the high-end Mathematica implementation of A-I-I compares favorably to the hardwired Mathematica commands {\tt BooleanConvert} and {\tt SatisfiabilityCount}. A-I-I can be parallelized, and it can be adapted into a polynomial total-time algorithm that enumerates the model set of any Boolean 2-CNF.
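For intuition, the following brute-force sketch enumerates the same family explicitly rather than in compressed form; it assumes a comparison function {\tt leq} for $\preceq$ and an edge set {\tt adj} for $G$ (names are illustrative, and this exhaustive search is exponential, unlike A-I-I):

\begin{verbatim}
from itertools import combinations

def independent_order_ideals(W, leq, adj):
    # Brute-force reference: yield every subset S of W that is
    # (i) an order ideal (down-closed under leq) and
    # (ii) G-independent (contains no edge of adj).
    # A-I-I outputs these compressed with don't-care symbols;
    # here they are listed one by one for small instances.
    W = list(W)
    for r in range(len(W) + 1):
        for T in combinations(W, r):
            S = set(T)
            down_closed = not any(leq(a, b) and a not in S
                                  for a in W for b in S)
            independent = not any(frozenset((u, v)) in adj
                                  for u in S for v in S if u != v)
            if down_closed and independent:
                yield frozenset(S)

# Toy instance: the chain 1 <= 2 <= 3 with a single edge {1, 3}.
print(list(independent_order_ideals({1, 2, 3}, lambda a, b: a <= b,
                                    {frozenset((1, 3))})))
# -> [frozenset(), frozenset({1}), frozenset({1, 2})]
\end{verbatim}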


There are many examples where access to improved models of human behavior and cognition has enabled the creation of robots that can interact better with humans, and this is a rapidly growing area of research, not least in road vehicle automation. Human-robot interaction (HRI) therefore provides an important applied setting for human behavior modeling, but given the vast complexity of human behavior, how complete and accurate do these models need to be? Here, we outline some possible ways of thinking about this problem, starting from the suggestion that modelers need to keep the right end goal in sight: a successful human-robot interaction, in terms of safety, performance, and human satisfaction. Efforts toward model completeness and accuracy should be focused on those aspects of human behavior to which interaction success is most sensitive. We emphasize that identifying which those aspects are is a difficult scientific objective in its own right, distinct for each given HRI context. We propose and exemplify an approach to formulating a priori hypotheses on this matter, in cases where robots are to be involved in interactions which currently take place between humans, such as in automated driving. Our perspective also highlights some possible risks of overreliance on machine-learned models of human behavior in HRI, and how to mitigate those risks.

Classification systems are often deployed in resource-constrained settings where labels must be assigned to inputs on a budget of time, memory, etc. Budgeted, sequential classifiers (BSCs) address these scenarios by processing inputs through a sequence of partial feature acquisition and evaluation steps with early-exit options. This allows for an efficient evaluation of inputs that prevents unneeded feature acquisition. To approximate an intractable combinatorial problem, current approaches to budgeted classification rely on well-behaved loss functions that account for two primary objectives (processing cost and error). These approaches offer improved efficiency over traditional classifiers but are limited by analytic constraints in formulation and do not manage additional performance objectives. Notably, such methods do not explicitly account for an important aspect of real-time detection systems -- the fraction of "accepted" predictions satisfying a confidence criterion imposed by a risk-averse monitor. We propose a problem-specific genetic algorithm to build budgeted, sequential classifiers with confidence-based reject options. Three objectives -- accuracy, processing time/cost, and coverage -- are considered. The algorithm emphasizes Pareto efficiency while accounting for a notion of aggregate performance via a unique scalarization. Experiments show our method can quickly find globally Pareto optimal solutions in very large search spaces and is competitive with existing approaches while offering advantages for selective, budgeted deployment scenarios.
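As an illustration of the control flow (the staging, costs, and names here are assumed, not taken from the paper), a BSC with a confidence-based reject option evaluates an input roughly as follows:

\begin{verbatim}
def bsc_predict(x, stages, conf_threshold):
    # stages: list of (acquire, classify) pairs; acquire returns
    # (new_features, cost), classify returns (label, confidence).
    # Exit early once the confidence clears the threshold; if no
    # stage is confident enough, reject and defer to the monitor.
    features, cost = {}, 0.0
    for acquire, classify in stages:
        new_features, stage_cost = acquire(x)   # partial acquisition
        features.update(new_features)
        cost += stage_cost
        label, confidence = classify(features)
        if confidence >= conf_threshold:        # accepted prediction
            return label, cost, True
    return label, cost, False                   # rejected prediction
\end{verbatim}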

Electric machine design optimization is a computationally expensive multi-objective optimization problem. While the objectives require time-consuming finite element analysis, optimization constraints can often be based on mathematical expressions, such as geometric constraints. This article investigates this optimization problem of mixed computational expense by proposing an optimization method incorporated into a widely used evolutionary multi-objective optimization algorithm, NSGA-II. The proposed method exploits the inexpensiveness of geometric constraints to generate feasible designs by using a custom repair operator. The proposed method also addresses the time-consuming objective functions by incorporating surrogate models for predicting machine performance. The article establishes the superiority of the proposed method over the conventional optimization approach. This study demonstrates how a complex engineering design can be optimized for multiple objectives and constraints with heterogeneous evaluation times, and how the resulting optimal solutions can be analyzed to select a single preferred design and, importantly, harnessed to reveal vital design features common to optimal solutions as design principles.
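A repair operator of this kind can be sketched as follows (the bisection scheme and the known-feasible anchor design are assumptions for illustration; the actual operator is problem-specific):

\begin{verbatim}
import numpy as np

def repair(design, lower, upper, geometric_ok, anchor, max_tries=50):
    # Clip the candidate to its variable bounds, then bisect toward
    # a known-feasible anchor design until the cheap geometric
    # constraints hold. Only inexpensive checks are used, so
    # feasible designs are produced without finite element calls.
    x = np.clip(design, lower, upper)
    for _ in range(max_tries):
        if geometric_ok(x):
            return x
        x = 0.5 * (x + anchor)
    return anchor
\end{verbatim}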

The problem of constrained Markov decision processes is considered. An agent aims to maximize the expected accumulated discounted reward subject to multiple constraints on its costs (the number of constraints is relatively small). A new dual approach is proposed, integrating two ingredients: an entropy-regularized policy optimizer and Vaidya's dual optimizer, both of which are critical to achieving faster convergence. A finite-time error bound for the proposed approach is provided. Despite the challenge of a nonconcave objective subject to nonconcave constraints, the proposed approach is shown to converge (at a linear rate) to the global optimum. The complexity, expressed in terms of the optimality gap and the constraint violation, significantly improves upon that of existing primal-dual approaches.
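Schematically, the dual structure looks as follows; for readability this sketch substitutes plain projected subgradient ascent for Vaidya's cutting-plane method, and {\tt inner_solve} stands for an assumed oracle implementing the entropy-regularized policy optimizer:

\begin{verbatim}
import numpy as np

def dual_cmdp(inner_solve, budgets, tau=0.1, lr=0.5, iters=100):
    # For dual variables lam >= 0, the inner optimizer maximizes
    # the entropy-regularized Lagrangian r - lam . c and reports
    # the policy's expected costs; lam then ascends on the
    # constraint violation. The paper replaces this plain ascent
    # with Vaidya's method, which is key to its faster convergence.
    lam = np.zeros(len(budgets))
    for _ in range(iters):
        _, exp_costs = inner_solve(lam, tau)  # regularized solve
        lam = np.maximum(0.0, lam + lr * (exp_costs - budgets))
    return lam
\end{verbatim}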

A frequent problem in statistical science is how to properly handle missing data in matched paired observations. A large body of literature addresses the univariate case, yet ongoing technological progress in measuring biological systems raises the need to address more complex data such as graphs, strings, and probability distributions. To fill this gap, this paper proposes new estimators of the maximum mean discrepancy (MMD) to handle complex matched pairs with missing data. These estimators can detect differences in data distributions under different missingness mechanisms. The validity of this approach is proven, results on statistical consistency are provided, and the approach is further examined in an extensive simulation study. Data from continuous glucose monitoring in a longitudinal population-based diabetes study are used to illustrate the application of this approach. By employing the new distributional representations together with cluster analysis, new clinical criteria on how glucose changes vary at the distributional level over five years can be explored.
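The statistic underlying these estimators, computed on fully observed samples and with a Gaussian kernel (the kernel choice is assumed for illustration), is the standard unbiased squared-MMD estimate; the paper's contribution is to correct such estimates for the missingness mechanism in matched pairs:

\begin{verbatim}
import numpy as np

def mmd2_unbiased(X, Y, sigma=1.0):
    # Unbiased estimate of MMD^2 between samples X (n x d) and
    # Y (m x d) under a Gaussian kernel with bandwidth sigma.
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    n, m = len(X), len(Y)
    return ((Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))
            + (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
            - 2.0 * Kxy.mean())
\end{verbatim}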

A long line of research on connectivity in the Massively Parallel Computation model has culminated in the seminal works of Andoni et al. [FOCS'18] and Behnezhad et al. [FOCS'19]. They provide a randomized algorithm for low-space MPC with conjecturally optimal round complexity $O(\log D + \log \log_{\frac m n} n)$ and $O(m)$ space, for graphs on $n$ vertices with $m$ edges and diameter $D$. Surprisingly, a recent result of Coy and Czumaj [STOC'22] shows how to achieve the same deterministically. Their algorithm, however, suffers from large local computation time. We present a deterministic connectivity algorithm that matches all the parameters of the randomized algorithm and, in addition, significantly reduces the local computation time to nearly linear. Our derandomization method is based on reducing the amount of randomness needed to allow for a simpler efficient search. While similar randomness-reduction approaches have been used before, our result is not only strikingly simpler but also the first to achieve efficient local computation. We therefore believe it can serve as a starting point for the systematic development of computation-efficient derandomization approaches in low-memory MPC.
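For readers unfamiliar with the round structure such algorithms simulate, here is a sequential toy version of connectivity via alternating hooking and pointer jumping (a didactic sketch only, not the paper's MPC algorithm):

\begin{verbatim}
def components(n, edges):
    # Hooking: each edge pulls both endpoints to the smaller label.
    # Pointer jumping: labels are compressed until stable.
    # Repeating the two phases converges to per-component minimum
    # labels, mirroring the rounds of MPC connectivity algorithms.
    label = list(range(n))
    changed = True
    while changed:
        changed = False
        for u, v in edges:                      # hooking phase
            if label[u] != label[v]:
                label[u] = label[v] = min(label[u], label[v])
                changed = True
        for v in range(n):                      # pointer jumping
            while label[v] != label[label[v]]:
                label[v] = label[label[v]]
    return label

print(components(5, [(0, 1), (1, 2), (3, 4)]))  # [0, 0, 0, 3, 3]
\end{verbatim}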

We study identifiability of the parameters in autoregressions defined on a network. Most identification conditions that are available for these models either rely on the network being observed repeatedly, are only sufficient, or require strong distributional assumptions. This paper derives conditions that apply even when the individuals composing the network are observed only once, are necessary and sufficient for identification, and require weak distributional assumptions. We find that the model parameters are generically, in the measure-theoretic sense, identified even without repeated observations, and we analyze the combinations of the interaction matrix and the regressor matrix causing identification failures. This is done both in the original model and after certain transformations in the sample space, the latter case being relevant, for example, in some fixed-effects specifications.
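For concreteness, a standard specification of such a model (the exact form in the paper may differ) is

\[
  y \;=\; \rho A y + X\beta + \varepsilon,
\]

where $A$ is the interaction matrix, $X$ the regressor matrix, and $(\rho,\beta)$ the parameters to be identified from a single observation of $y$ on the network.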

We consider the problem of controlling an unknown linear dynamical system under adversarially changing convex costs and full feedback of both the state and the cost function. We present the first computationally efficient algorithm that attains an optimal $\smash{\sqrt{T}}$-regret rate compared to the best stabilizing linear controller in hindsight, while avoiding stringent assumptions on the costs such as strong convexity. Our approach is based on a careful design of non-convex lower confidence bounds for the online costs, and uses a novel technique for computationally efficient regret minimization of these bounds that leverages their particular non-convex structure.
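In the notation suggested by the abstract (the symbols here are our assumption), the benchmark is

\[
  \mathrm{Regret}_T \;=\; \sum_{t=1}^{T} c_t(x_t, u_t)
  \;-\; \min_{K \in \mathcal{K}} \sum_{t=1}^{T} c_t\bigl(x_t^K, u_t^K\bigr),
\]

where $\mathcal{K}$ is the set of stabilizing linear controllers $u_t = -K x_t$, $x_t^K$ is the state sequence induced by $K$, and the algorithm guarantees $\mathrm{Regret}_T = O(\sqrt{T})$ without strong convexity of the costs $c_t$.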

Transformer-based pretrained language models (T-PTLMs) have achieved great success in almost every NLP task. The evolution of these models started with GPT and BERT. These models are built on top of transformers, self-supervised learning, and transfer learning. T-PTLMs learn universal language representations from large volumes of text data using self-supervised learning and transfer this knowledge to downstream tasks. They provide good background knowledge for downstream tasks, which avoids training downstream models from scratch. In this comprehensive survey paper, we initially give a brief overview of self-supervised learning. Next, we explain various core concepts like pretraining, pretraining methods, pretraining tasks, embeddings, and downstream adaptation methods. We then present a new taxonomy of T-PTLMs and give a brief overview of various benchmarks, both intrinsic and extrinsic. We present a summary of various useful libraries for working with T-PTLMs. Finally, we highlight some future research directions which will further improve these models. We strongly believe that this survey will serve as a good reference for learning the core concepts as well as staying updated with recent developments in T-PTLMs.
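The "pretrain once, adapt downstream" workflow can be illustrated with the Hugging Face Transformers library (the library and checkpoint are our choice for illustration, not prescribed by the survey):

\begin{verbatim}
from transformers import (AutoTokenizer,
                          AutoModelForSequenceClassification)

# Load a pretrained encoder and attach a fresh 2-class task head;
# the head (and optionally the encoder) is then fine-tuned,
# avoiding training a downstream model from scratch.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

inputs = tokenizer("Transfer learning avoids training from scratch.",
                   return_tensors="pt")
logits = model(**inputs).logits  # shape (1, 2): fine-tune on task data
\end{verbatim}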

This paper focuses on the expected difference in a borrower's repayment when there is a change in the lender's credit decisions. Classical estimators overlook confounding effects, and hence their estimation error can be substantial. We therefore propose an alternative approach to constructing the estimators such that this error is greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the classical and proposed estimators in terms of their power to estimate the causal quantities. The comparison is tested across a wide range of models, including linear regression models, tree-based models, and neural network-based models, under different simulated datasets that exhibit different levels of causality, degrees of nonlinearity, and distributional properties. Most importantly, we apply our approach to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction in estimation error is strikingly substantial when the causal effects are accounted for correctly.
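The abstract does not spell out the construction, so as a stand-in for the general idea, here is a minimal inverse-propensity-weighting correction for confounded credit decisions (a common technique for this problem; all names are illustrative and this is not the paper's estimator):

\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_effect(X, decision, repayment):
    # X: observed confounders; decision: 0/1 lending decision;
    # repayment: observed outcome. Reweighting by the estimated
    # propensity removes confounding that a naive group-mean
    # comparison (the "classical estimator") would suffer from.
    p = LogisticRegression().fit(X, decision).predict_proba(X)[:, 1]
    w1 = decision / p
    w0 = (1 - decision) / (1 - p)
    return np.mean(w1 * repayment) - np.mean(w0 * repayment)
\end{verbatim}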
