
We investigate logics and classes of problems below Fagin's existential second-order logic (ESO) and above Feder and Vardi's logic for constraint satisfaction problems (CSP), the so called monotone monadic SNP without inequality (MMSNP). It is known that MMSNP has a dichotomy between P and NP-complete but that the removal of any of these three restrictions imposed on SNP yields a logic that is Ptime equivalent to ESO: so by Ladner's theorem we have three stronger sibling logics that are nondichotomic above MMSNP. In this paper, we explore the area between these four logics, mostly by considering guarded extensions of MMSNP, with the ultimate goal being to obtain logics above MMSNP that exhibit such a dichotomy.
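
For orientation (a standard textbook illustration, not an example taken from this paper), graph 3-colourability is the classic problem expressible in MMSNP: the existentially quantified predicates are monadic (the three colours), the input relation $E$ occurs only inside negated conjuncts (monotonicity), and no inequality is used. One way to write such a sentence is:

    \exists R\,\exists G\,\exists B\ \forall x\,\forall y\;
        \neg\bigl(\neg R(x)\wedge\neg G(x)\wedge\neg B(x)\bigr)
        \wedge\neg\bigl(E(x,y)\wedge R(x)\wedge R(y)\bigr)
        \wedge\neg\bigl(E(x,y)\wedge G(x)\wedge G(y)\bigr)
        \wedge\neg\bigl(E(x,y)\wedge B(x)\wedge B(y)\bigr)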

Related content

App extensions introduced in iOS 8 for functional interaction between apps, and between apps and the system:
  • Today (iOS and OS X): widgets for the Today view of Notification Center
  • Share (iOS and OS X): post content to web services or share content with others
  • Actions (iOS and OS X): app extensions to view or manipulate content inside another app
  • Photo Editing (iOS): edit a photo or video within Apple's Photos app using extensions from third-party apps
  • Finder Sync (OS X): remote file storage in the Finder with support for Finder content annotation
  • Storage Provider (iOS): an interface between files inside an app and other apps on a user's device
  • Custom Keyboard (iOS): system-wide alternative keyboards


We examine the effect of noise on societies of agents using an agent-based model of evolutionary norm emergence. Generally, we see that noisy societies are more selfish, smaller, and more discontent, and are caught in rounds of perpetual punishment that prevent them from flourishing. Surprisingly, despite the effect of noise on the population, it does not seem to evolve away; in fact, in some cases the level of noise increases. We carry out further analysis and provide reasons for why this may be the case. Furthermore, we claim that our framework, which evolves the noise/ambiguity of norms, may be a new way to model the tight/loose framework of norms, suggesting that despite the detrimental effect of ambiguous norms on society, evolution does not favour clarity.

In today's world, many technologically advanced countries have realized that real power lies not in physical strength but in educated minds. As a result, every country has embarked on restructuring its education system to meet the demands of technology. As a country in the midst of these developments, we cannot remain indifferent to this transformation in education. In the Information Age of the 21st century, rapid access to information is crucial for the development of individuals and societies. To take our place among the knowledge societies in a world moving rapidly towards globalization, we must closely follow technological innovations and meet the requirements of technology. This can be achieved by providing learning opportunities to anyone interested in acquiring education in their area of interest. This study focuses on the advantages and disadvantages of internet-based learning compared to traditional teaching methods, the importance of computer usage in internet-based learning, negative factors affecting internet-based learning, and the necessary recommendations for addressing these issues. In today's world, it is impossible to talk about education without technology or technology without education.

The sensitivity of loss reserving techniques to outliers in the data or deviations from model assumptions is a well-known challenge. It has been shown that the popular chain-ladder reserving approach is highly sensitive to such aberrant observations, in that reserve estimates can be shifted substantially by even a single outlier. As a consequence, the chain-ladder reserving technique is non-robust. In this paper we investigate the sensitivity of reserves and mean squared errors of prediction under Mack's Model (Mack, 1993). This is done through the derivation of impact functions, which are calculated by taking the first derivative of the relevant statistic of interest with respect to an observation. We also provide and discuss the impact functions for quantiles when total reserves are assumed to be lognormally distributed. Additionally, comparisons are made between the impact functions for individual accident year reserves under Mack's Model and the Bornhuetter-Ferguson methodology. It is shown that the impact of incremental claims on these statistics of interest varies widely throughout a loss triangle and is heavily dependent on other cells in the triangle. Results are illustrated using data from a Belgian non-life insurer.
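
To make the idea of an impact function concrete, a minimal numerical sketch (not the paper's exact-derivative derivation under Mack's Model) is to perturb one incremental claim in a cumulative run-off triangle and observe the shift in the total chain-ladder reserve. The triangle below is made-up data and the finite-difference step is an illustrative choice.

    import numpy as np

    def chain_ladder_reserve(tri):
        """Total chain-ladder reserve for a cumulative run-off triangle
        (NaN below the diagonal)."""
        n = tri.shape[0]
        # volume-weighted development factors f_j over the observed rows
        f = np.array([
            np.nansum(tri[:n - j - 1, j + 1]) / np.nansum(tri[:n - j - 1, j])
            for j in range(n - 1)
        ])
        reserve = 0.0
        for i in range(1, n):
            latest = tri[i, n - i - 1]
            ultimate = latest * np.prod(f[n - i - 1:])
            reserve += ultimate - latest
        return reserve

    # made-up cumulative triangle (rows: accident years, cols: development years)
    tri = np.array([
        [100., 160., 180., 190.],
        [110., 170., 200., np.nan],
        [120., 190., np.nan, np.nan],
        [130., np.nan, np.nan, np.nan],
    ])

    def impact_incremental(tri, i, j, h=1e-4):
        """Finite-difference sensitivity of the total reserve to the
        incremental claim in cell (i, j): bumping an incremental amount
        raises every later cumulative value in that accident year."""
        bumped = tri.copy()
        bumped[i, j:] = bumped[i, j:] + h   # NaNs below the diagonal stay NaN
        return (chain_ladder_reserve(bumped) - chain_ladder_reserve(tri)) / h

    print(round(impact_incremental(tri, 0, 1), 4))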

Repeated use of a data sample via adaptively chosen queries can rapidly lead to overfitting, wherein the empirical evaluation of queries on the sample significantly deviates from their mean with respect to the underlying data distribution. It turns out that simple noise addition algorithms suffice to prevent this issue, and differential privacy-based analysis of these algorithms shows that they can handle an asymptotically optimal number of queries. However, differential privacy's worst-case nature entails scaling such noise to the range of the queries even for highly-concentrated queries, or introducing more complex algorithms. In this paper, we prove that straightforward noise-addition algorithms already provide variance-dependent guarantees that also extend to unbounded queries. This improvement stems from a novel characterization that illuminates the core problem of adaptive data analysis. We show that the harm of adaptivity results from the covariance between the new query and a Bayes factor-based measure of how much information about the data sample was encoded in the responses given to past queries. We then leverage this characterization to introduce a new data-dependent stability notion that can bound this covariance.
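
As a rough illustration of the kind of noise-addition mechanism discussed here (a generic sketch; the noise scale sigma below is an arbitrary tuning knob, not the calibration analysed in the paper), each adaptively chosen statistical query is answered with its empirical mean on the sample plus Gaussian noise:

    import numpy as np

    rng = np.random.default_rng(0)

    def noisy_answer(sample, query, sigma=0.05):
        """Empirical mean of the query on the sample, plus Gaussian noise."""
        return float(np.mean(query(sample)) + rng.normal(0.0, sigma))

    # toy example: a sample from an unknown distribution and two adaptive queries
    sample = rng.normal(loc=0.3, scale=1.0, size=1000)

    a1 = noisy_answer(sample, lambda x: x)               # estimate the mean
    a2 = noisy_answer(sample, lambda x: (x > a1) * 1.0)  # second query chosen using a1
    print(a1, a2)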

The Plackett--Luce model is a popular approach for ranking data analysis, where a utility vector is employed to determine the probability of each outcome based on Luce's choice axiom. In this paper, we investigate the asymptotic theory of utility vector estimation by maximizing different types of likelihood, such as the full-, marginal-, and quasi-likelihood. We provide a rank-matching interpretation for the estimating equations of these estimators and analyze their asymptotic behavior as the number of items being compared tends to infinity. In particular, we establish the uniform consistency of these estimators under conditions characterized by the topology of the underlying comparison graph sequence and demonstrate that the proposed conditions are sharp for common sampling scenarios such as the nonuniform random hypergraph model and the hypergraph stochastic block model; we also obtain the asymptotic normality of these estimators and discuss the trade-off between statistical efficiency and computational complexity for practical uncertainty quantification. Both results allow for nonuniform and inhomogeneous comparison graphs with varying edge sizes and different asymptotic orders of edge probabilities. We verify our theoretical findings by conducting detailed numerical experiments.
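
For concreteness, the Plackett--Luce probability of a ranking factorises into sequential softmax choices over the items not yet placed. A minimal full-likelihood sketch (plain gradient ascent on toy rankings, not the marginal- or quasi-likelihood estimators or the sampling regimes analysed in the paper) might look like:

    import numpy as np

    def log_likelihood_grad(u, rankings):
        """Gradient of the Plackett-Luce log-likelihood for full rankings.
        Each ranking lists item indices from most to least preferred."""
        grad = np.zeros_like(u)
        for r in rankings:
            for t in range(len(r) - 1):
                remaining = np.array(r[t:])
                w = np.exp(u[remaining] - u[remaining].max())
                w /= w.sum()                  # softmax over the remaining items
                grad[r[t]] += 1.0
                grad[remaining] -= w
        return grad

    # toy data: rankings of 4 items
    rankings = [[0, 1, 2, 3], [0, 2, 1, 3], [1, 0, 3, 2], [0, 1, 3, 2]]
    u = np.zeros(4)
    for _ in range(500):                      # plain gradient ascent
        u += 0.1 * log_likelihood_grad(u, rankings)
        u -= u.mean()                         # utilities are identifiable only up to a shift
    print(np.round(u, 2))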

Cuckoo hashing is a powerful primitive that enables storing items using small space with efficient querying. At a high level, cuckoo hashing maps $n$ items into $b$ entries storing at most $\ell$ items such that each item is placed into one of $k$ randomly chosen entries. Additionally, there is an overflow stash that can store at most $s$ items. Many cryptographic primitives rely upon cuckoo hashing to privately embed and query data where it is integral to ensure small failure probability when constructing cuckoo hashing tables as it directly relates to the privacy guarantees. As our main result, we present a more query-efficient cuckoo hashing construction using more hash functions. For construction failure probability $\epsilon$, the query overhead of our scheme is $O(1 + \sqrt{\log(1/\epsilon)/\log n})$. Our scheme has quadratically smaller query overhead than prior works for any target failure probability $\epsilon$. We also prove lower bounds matching our construction. Our improvements come from a new understanding of the locality of cuckoo hashing failures for small sets of items. We also initiate the study of robust cuckoo hashing where the input set may be chosen with knowledge of the hash functions. We present a cuckoo hashing scheme using more hash functions with query overhead $\tilde{O}(\log \lambda)$ that is robust against poly$(\lambda)$ adversaries. Furthermore, we present lower bounds showing that this construction is tight and that extending previous approaches of large stashes or entries cannot obtain robustness except with $\Omega(n)$ query overhead. As applications of our results, we obtain improved constructions for batch codes and PIR. In particular, we present the most efficient explicit batch code and blackbox reduction from single-query PIR to batch PIR.
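
As background on the primitive itself (a textbook-style toy in which the class name, parameter defaults, and hashing via seeded tuples are illustrative choices, not the construction or parameters proposed in the paper), cuckoo hashing with k choices per item, buckets of capacity ell, and a small stash can be sketched as follows:

    import random

    class CuckooTable:
        """Toy cuckoo hash table: k random choices per item, buckets of
        capacity ell, and a small overflow stash."""

        def __init__(self, num_buckets, k=2, ell=1, stash_size=4, max_kicks=500):
            self.b, self.k, self.ell = num_buckets, k, ell
            self.buckets = [[] for _ in range(num_buckets)]
            self.stash, self.stash_size = [], stash_size
            self.max_kicks = max_kicks
            self.seeds = [random.random() for _ in range(k)]

        def _choices(self, item):
            return [hash((s, item)) % self.b for s in self.seeds]

        def insert(self, item):
            for _ in range(self.max_kicks):
                for idx in self._choices(item):
                    if len(self.buckets[idx]) < self.ell:
                        self.buckets[idx].append(item)
                        return True
                # all k candidate buckets are full: evict an occupant and retry with it
                idx = random.choice(self._choices(item))
                pos = random.randrange(len(self.buckets[idx]))
                item, self.buckets[idx][pos] = self.buckets[idx][pos], item
            if len(self.stash) < self.stash_size:
                self.stash.append(item)
                return True
            return False   # construction failure

        def query(self, item):
            return any(item in self.buckets[i] for i in self._choices(item)) or item in self.stash

    t = CuckooTable(num_buckets=64, k=3, ell=1)
    ok = all(t.insert(x) for x in range(40))
    print(ok, t.query(7), t.query(999))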

Cross-entropy is a widely used loss function in applications. It coincides with the logistic loss applied to the outputs of a neural network, when the softmax is used. But, what guarantees can we rely on when using cross-entropy as a surrogate loss? We present a theoretical analysis of a broad family of loss functions, comp-sum losses, that includes cross-entropy (or logistic loss), generalized cross-entropy, the mean absolute error and other cross-entropy-like loss functions. We give the first $H$-consistency bounds for these loss functions. These are non-asymptotic guarantees that upper bound the zero-one loss estimation error in terms of the estimation error of a surrogate loss, for the specific hypothesis set $H$ used. We further show that our bounds are tight. These bounds depend on quantities called minimizability gaps. To make them more explicit, we give a specific analysis of these gaps for comp-sum losses. We also introduce a new family of loss functions, smooth adversarial comp-sum losses, that are derived from their comp-sum counterparts by adding in a related smooth term. We show that these loss functions are beneficial in the adversarial setting by proving that they admit $H$-consistency bounds. This leads to new adversarial robustness algorithms that consist of minimizing a regularized smooth adversarial comp-sum loss. While our main purpose is a theoretical analysis, we also present an extensive empirical analysis comparing comp-sum losses. We further report the results of a series of experiments demonstrating that our adversarial robustness algorithms outperform the current state-of-the-art, while also achieving a superior non-adversarial accuracy.
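
For concreteness, the cross-entropy (logistic) loss applied to softmax outputs, and the generalized cross-entropy family mentioned above, can be written in a few lines. This is a hedged sketch of the standard definitions only; it does not cover the adversarial comp-sum variants introduced in the paper, and the parameter q and the toy scores are illustrative.

    import numpy as np

    def softmax(scores):
        z = np.exp(scores - scores.max(axis=-1, keepdims=True))
        return z / z.sum(axis=-1, keepdims=True)

    def cross_entropy(scores, y):
        """Logistic (cross-entropy) loss of the softmax output for labels y."""
        p = softmax(scores)
        return -np.log(p[np.arange(len(y)), y])

    def generalized_cross_entropy(scores, y, q=0.7):
        """Generalized cross-entropy (1 - p_y^q) / q: recovers the logistic
        loss as q -> 0 and, up to a constant factor, the mean absolute
        error at q = 1."""
        p = softmax(scores)[np.arange(len(y)), y]
        return (1.0 - p ** q) / q

    scores = np.array([[2.0, 0.5, -1.0], [0.1, 0.2, 0.3]])
    y = np.array([0, 2])
    print(cross_entropy(scores, y), generalized_cross_entropy(scores, y))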

In 1975 the first author proved that every finite tight two-person game form $g$ is Nash-solvable, that is, for all payoffs $u$ and $w$ of the two players the resulting game $(g;u,w)$, in normal form, has a Nash equilibrium (NE) in pure strategies. This result was extended in several directions; here we strengthen it further. We construct two special NE, each realized by a lexicographically safe (lexsafe) strategy of one player and a best response of the other. We obtain a polynomial algorithm computing these lexsafe NE. This is trivial when the game form $g$ is given explicitly. Yet, in applications $g$ is frequently realized by an oracle $\mathcal{O}$ such that the size of $g$ is exponential in the size $|\mathcal{O}|$ of $\mathcal{O}$. We assume that the game form $g = g(\mathcal{O})$ generated by $\mathcal{O}$ is tight and that an arbitrary {\em win-lose game} $(g;u,w)$ (in which the payoffs $u$ and $w$ are zero-sum and take only values $\pm 1$) can be solved in time polynomial in $|\mathcal{O}|$. These assumptions allow us to construct an algorithm computing two lexsafe NE (one for each player) in time polynomial in $|\mathcal{O}|$. We consider four types of oracles known in the literature and show that all four satisfy the above assumptions.
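
As a point of reference for the solvability notion used here (a generic brute-force check over an explicitly given normal form, not the lexsafe construction or the oracle model of the paper; the payoff matrices below are made up), a pure-strategy Nash equilibrium of a two-person game can be found by a double loop over strategy profiles:

    import numpy as np

    def pure_nash_equilibria(U, W):
        """All pure-strategy Nash equilibria of a bimatrix game:
        U[i, j] is player 1's payoff, W[i, j] is player 2's payoff."""
        eqs = []
        for i in range(U.shape[0]):
            for j in range(U.shape[1]):
                if U[i, j] >= U[:, j].max() and W[i, j] >= W[i, :].max():
                    eqs.append((i, j))
        return eqs

    # a small 2x2 example with made-up payoffs
    U = np.array([[3, 0], [5, 1]])
    W = np.array([[3, 5], [0, 1]])
    print(pure_nash_equilibria(U, W))   # [(1, 1)] for this prisoner's-dilemma-like game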

Rationality is often related to optimal decision making, and humans are known to be boundedly rational agents. However, recent advances in computing and other scientific and technical fields, together with large amounts of data, have led to a feeling that the limits of bounded rationality in humans could be extended through augmented machine intelligence. In this paper, results from a computational model show that as more agents, each solving the same problem independently, reach global optimality faster thanks to enhanced computing and related advances, the outcome is an accelerated "tragedy of the commons" due to quicker resource consumption. Thus, from a sustainability standpoint, bounded rationality could be seen as a blessing in disguise, since it provides diversity among the solutions to the same problem.

We examine the problem of question answering over knowledge graphs, focusing on simple questions that can be answered by the lookup of a single fact. Adopting a straightforward decomposition of the problem into entity detection, entity linking, relation prediction, and evidence combination, we explore simple yet strong baselines. On the popular SimpleQuestions dataset, we find that basic LSTMs and GRUs plus a few heuristics yield accuracies that approach the state of the art, and techniques that do not use neural networks also perform reasonably well. These results show that gains from sophisticated deep learning techniques proposed in the literature are quite modest and that some previous models exhibit unnecessary complexity.
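
To make the decomposition concrete, here is a minimal sketch of one component, a GRU-based relation classifier over the question tokens. The PyTorch code, the vocabulary and relation-inventory sizes, and the network dimensions are all illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    class RelationClassifier(nn.Module):
        """Encode the question with a bidirectional GRU and predict the KB relation."""

        def __init__(self, vocab_size, num_relations, emb_dim=128, hidden=256):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
            self.gru = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
            self.out = nn.Linear(2 * hidden, num_relations)

        def forward(self, token_ids):
            x = self.emb(token_ids)              # (batch, seq, emb_dim)
            _, h = self.gru(x)                   # h: (2, batch, hidden)
            h = torch.cat([h[0], h[1]], dim=-1)  # concatenate both directions
            return self.out(h)                   # relation logits

    # toy forward pass with made-up sizes
    model = RelationClassifier(vocab_size=10_000, num_relations=500)
    logits = model(torch.randint(1, 10_000, (4, 12)))  # batch of 4 questions, 12 tokens each
    print(logits.shape)                                # torch.Size([4, 500])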
