Over the course of the last 50 years, many questions in the field of computability have been left surprisingly unanswered. One example is the question of $P$ vs $NP\cap coNP$. In loose terms it can be phrased as: "If a person has the ability to verify a proof and a disproof of a problem, does this person know a solution to that problem?" When talking about people, one can of course see that the answer depends on the knowledge the specific person has of this problem. Our main goal is to extend this observation to formal models of set theory $ZFC$: given a model $M$ and a specific problem $L$ in $NP\cap coNP$, we can show that $L$ is in $P$ if we have "knowledge" of $L$. In this paper, we define the concept of knowledge and explain why it agrees with the intuitive notion of knowledge. Next, we construct a model in which we have knowledge of many functions. From the existence of that model, we deduce that in any model with a worldly cardinal we have knowledge of a broad class of functions. As a result, we show that if a worldly cardinal is assumed to exist, then the statement "a given definable language which is provably in $NP\cap coNP$ is also in $P$" is provable. Assuming a worldly cardinal, a simple application of these theorems shows that one can factor numbers in poly-logarithmic time. This article does not solve the $P$ vs $NP\cap coNP$ question, but its main result brings us one step closer to deciding it.
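
For context, the relevant classes can be stated in standard notation (these are textbook definitions, not the paper's own formalism), and the decision version of integer factoring is the canonical member of $NP\cap coNP$ to which the final claim applies:
\[
coNP = \{\, L : \overline{L} \in NP \,\}, \qquad P \subseteq NP \cap coNP, \qquad \textsc{Factoring} = \{\,(N,k) : N \text{ has a non-trivial divisor } \le k\,\} \in NP \cap coNP,
\]
since a divisor $d \le k$ certifies membership, while the full prime factorization of $N$ (with primality checkable in polynomial time) certifies non-membership.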

Modern shock-capturing schemes often suffer from numerical shock anomalies when the flow field contains strong shocks, which may limit their further application in hypersonic flow computations. In the current study, we devote our efforts to exploring the primary numerical characteristics and the underlying mechanism of shock instability for second-order finite-volume schemes. To this end, we develop, for the first time, a matrix stability analysis method for the finite-volume MUSCL approach. This linearized analysis method makes it possible to investigate the shock instability problem of finite-volume shock-capturing schemes in a quantitative and efficient manner. Results of the stability analysis demonstrate that the shock stability of second-order schemes is strongly related to the Riemann solver, the Mach number, the limiter function, the numerical shock structure, and the computational grid. Unique stability characteristics associated with these factors for second-order methods are revealed quantitatively with the established method. The source location of the instability is also identified by the matrix stability analysis, and the results show that the shock instability originates from the numerical shock structure. These conclusions pave the way to a better understanding of the shock instability problem and may shed new light on developing more reliable shock-capturing methods for compressible flows at high Mach numbers.
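
As a rough illustration of the matrix stability analysis idea (not the paper's 2-D MUSCL setup), one can linearize a semi-discrete finite-volume residual about a base state, assemble the stability matrix by finite differences, and inspect its spectrum; an eigenvalue with positive real part signals linear growth of perturbations. The residual below is a toy first-order upwind discretisation of linear advection, standing in for the actual shock-capturing scheme.

```python
import numpy as np

def residual(U, dx=1.0, a=1.0):
    # First-order upwind residual for linear advection u_t + a*u_x = 0
    # with periodic boundaries; a stand-in for the full MUSCL scheme.
    return -(a / dx) * (U - np.roll(U, 1))

def stability_matrix(residual_fn, U0, eps=1e-7):
    # Assemble A = dR/dU about the base state U0 by one-sided finite differences.
    n = U0.size
    A = np.zeros((n, n))
    R0 = residual_fn(U0)
    for j in range(n):
        Up = U0.copy()
        Up[j] += eps
        A[:, j] = (residual_fn(Up) - R0) / eps
    return A

U0 = np.ones(50)                               # base state (here: uniform flow)
A = stability_matrix(residual, U0)
lam = np.linalg.eigvals(A)
print("max Re(lambda):", lam.real.max())       # > 0 would indicate linear instability
```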

While transformer-based systems have enabled greater accuracies with fewer training examples, data acquisition obstacles still persist for rare-class tasks -- those where the class label is very infrequent (e.g. < 5% of samples). Active learning has in general been proposed to alleviate such challenges, but the choice of selection strategy, i.e., the criteria by which rare-class examples are chosen, has not been systematically evaluated. Further, transformers enable iterative transfer-learning approaches. We propose and investigate transfer- and active-learning solutions to the rare-class problem of dissonance detection by utilizing models trained on closely related tasks and by evaluating acquisition strategies, including a proposed probability-of-rare-class (PRC) approach. We perform these experiments for a specific rare-class problem: collecting language samples of cognitive dissonance from social media. We find that PRC is a simple and effective strategy for guiding annotation that ultimately improves model accuracy, while transfer learning in a specific order can improve the cold-start performance of the learner but does not benefit iterations of active learning.
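
A minimal sketch of what a probability-of-rare-class acquisition step could look like, assuming a classifier with a scikit-learn-style predict_proba; the names model, vectorize, and rare_class_index are placeholders, and this illustrates the strategy rather than the authors' implementation.

```python
import numpy as np

def prc_select(model, unlabeled_texts, vectorize, rare_class_index=1, k=100):
    # Rank unlabeled examples by the model's predicted probability of the rare
    # class and return the top-k candidates for annotation.
    X = vectorize(unlabeled_texts)                        # placeholder featurizer
    p_rare = model.predict_proba(X)[:, rare_class_index]  # scikit-learn-style API assumed
    top = np.argsort(-p_rare)[:k]
    return [unlabeled_texts[i] for i in top], p_rare[top]
```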

We study the query version of the approximate heavy hitter and quantile problems. In the former problem, the input is a parameter $\varepsilon$ and a set $P$ of $n$ points in $\mathbb{R}^d$ where each point is assigned a color from a set $C$, and we want to build a structure such that, given any geometric range $\gamma$, we can efficiently find a list of approximate heavy hitters in $\gamma\cap P$, i.e., colors that appear at least $\varepsilon |\gamma \cap P|$ times in $\gamma \cap P$, as well as their frequencies with an additive error of $\varepsilon |\gamma \cap P|$. In the latter problem, each point is assigned a weight from a totally ordered universe and the query must output a sequence $S$ of $1+1/\varepsilon$ weights such that the $i$-th weight in $S$ has approximate rank $i\varepsilon|\gamma\cap P|$, i.e., rank $i\varepsilon|\gamma\cap P|$ up to an additive error of $\varepsilon|\gamma\cap P|$. Previously, optimal results were only known in 1D [WY11], but a few sub-optimal methods were available in higher dimensions [AW17, ACH+12]. We study the problems for 3D halfspace and dominance queries. We consider the real RAM model with integer registers of size $w=\Theta(\log n)$ bits. For dominance queries, we show optimal solutions for both heavy hitter and quantile problems: using linear space, we can answer both queries in time $O(\log n + 1/\varepsilon)$. Note that as the output size is $\frac{1}{\varepsilon}$, after investing the initial $O(\log n)$ searching time, our structure takes on average $O(1)$ time to find a heavy hitter or a quantile! For more general halfspace heavy hitter queries, the same optimal query time can be achieved by increasing the space by an extra $\log_w\frac{1}{\varepsilon}$ (resp. $\log\log_w\frac{1}{\varepsilon}$) factor in 3D (resp. 2D). By spending extra $\log^{O(1)}\frac{1}{\varepsilon}$ factors in time and space, we can also support quantile queries.
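
For intuition only, the quantity being approximated can be computed exactly by a linear scan of the points in the query range; the paper's contribution is answering such queries in $O(\log n + 1/\varepsilon)$ time with a precomputed structure, which the naive baseline below makes no attempt at.

```python
from collections import Counter

def heavy_hitters_in_range(points, colors, in_range, eps):
    # Linear-scan baseline: exact frequencies of colors among the points that
    # fall inside the query range, filtered at the eps * |gamma ∩ P| threshold.
    hits = [c for p, c in zip(points, colors) if in_range(p)]
    threshold = eps * len(hits)
    freq = Counter(hits)
    return {c: f for c, f in freq.items() if f >= threshold}

# Example query: colors of 2-D points inside the halfplane x + y <= 1.
pts = [(0.1, 0.2), (0.3, 0.3), (0.9, 0.8), (0.2, 0.1)]
cols = ["red", "red", "blue", "green"]
print(heavy_hitters_in_range(pts, cols, lambda p: p[0] + p[1] <= 1, eps=0.5))
```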

In this paper we discuss potentially practical ways to produce expander graphs with good spectral properties and a compact description. We focus on several classes of uniform and bipartite expander graphs defined as random Schreier graphs of the general linear group over the finite field of size two. We perform numerical experiments and show that such constructions produce spectral expanders that can be useful for practical applications. To find a theoretical explanation of the observed experimental results, we use the method of moments to prove upper bounds on the expected second-largest eigenvalue of the random Schreier graphs used in our constructions. We focus on bounds whose asymptotic behaviour is difficult to study but which yield non-trivial conclusions for relatively small graphs with parameters matching our numerical experiments (e.g., with fewer than $2^{200}$ vertices and degree at least logarithmic in the number of vertices).
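
A toy-sized illustration of this kind of construction, assuming the group acts on the nonzero vectors of $\mathbb{F}_2^n$: pick a few random invertible matrices over $\mathbb{F}_2$ as generators, form the Schreier graph, and measure the second-largest eigenvalue of the random-walk matrix. The parameters here are far smaller than the $2^{200}$-vertex graphs discussed above, and this is not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_dim, n_gens = 6, 4                 # toy dimension over F_2 and number of generators

def invertible_gf2(n):
    # Rejection-sample a random invertible 0/1 matrix over GF(2).
    while True:
        M = rng.integers(0, 2, size=(n, n))
        A, r = M.copy(), 0
        for c in range(n):           # Gaussian elimination over GF(2) to get the rank
            piv = next((i for i in range(r, n) if A[i, c]), None)
            if piv is None:
                continue
            A[[r, piv]] = A[[piv, r]]
            for i in range(n):
                if i != r and A[i, c]:
                    A[i] = (A[i] + A[r]) % 2
            r += 1
        if r == n:
            return M

gens = [invertible_gf2(n_dim) for _ in range(n_gens)]
verts = list(range(1, 2 ** n_dim))   # nonzero vectors of F_2^n, encoded as integers
idx = {v: i for i, v in enumerate(verts)}
to_vec = lambda v: np.array([(v >> i) & 1 for i in range(n_dim)])
to_int = lambda x: int(sum(int(b % 2) << i for i, b in enumerate(x)))

A = np.zeros((len(verts), len(verts)))
for v in verts:                      # Schreier graph: edge {v, g v} for every generator g
    for g in gens:
        w = to_int(g @ to_vec(v))
        A[idx[v], idx[w]] += 1
        A[idx[w], idx[v]] += 1

P = A / A.sum(axis=1, keepdims=True) # random-walk normalization (all degrees equal 2 * n_gens)
eigs = np.sort(np.linalg.eigvalsh(P))
print("second-largest eigenvalue:", eigs[-2])
```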

In 2013, Marcus, Spielman, and Srivastava resolved the famous Kadison-Singer conjecture. It states that for $n$ independent random vectors $v_1,\cdots, v_n$ that have expected squared norm bounded by $\epsilon$ and are in isotropic position in expectation, there is a positive probability that the determinant polynomial $\det(xI - \sum_{i=1}^n v_iv_i^\top)$ has roots bounded by $(1 + \sqrt{\epsilon})^2$. An interpretation of the Kadison-Singer theorem is that we can always find a partition of the vectors $v_1,\cdots,v_n$ into two sets with low discrepancy in terms of the spectral norm (in other words, in terms of the determinant polynomial). In this paper, we provide two results for a broader class of polynomials, the hyperbolic polynomials. Furthermore, our results hold in two generalized settings: $\bullet$ The first one shows that the Kadison-Singer result holds under the weaker assumption that the vectors have a bounded sum of hyperbolic norms. $\bullet$ The second one relaxes the Kadison-Singer result's distribution assumption to Strongly Rayleigh distributions. To the best of our knowledge, previous results only support determinant polynomials [Anari and Oveis Gharan'14, Kyng, Luh and Song'20], and it is unclear whether they can be generalized to a broader class of polynomials. In addition, we provide a sub-exponential time algorithm that makes our results constructive.

Defining the effect of the exposure of interest and selecting an appropriate estimation method are prerequisites for causal inference. Understanding of the ways in which the association between heatwaves (i.e., consecutive days of extremely high temperature) and an outcome depends on whether adjustment was made for temperature, and on how such adjustment was conducted, is limited. This paper aims to investigate this dependency, demonstrate that temperature is a confounder in heatwave-outcome associations, and introduce a new modeling approach to estimate a new heatwave-outcome relation: $E[R(Y)\mid HW=1, Z]/E[R(Y)\mid T=OT, Z]$, where $HW$ is a daily binary variable indicating the presence of a heatwave; $R(Y)$ is the risk of an outcome $Y$; $T$ is a temperature variable; $OT$ is the optimal temperature; and $Z$ is a set of confounders that includes the typical confounders as well as certain forms of $T$. We recommend careful characterization of heatwave-outcome relations and careful selection of modeling approaches to understand the impacts of heatwaves under climate change. We demonstrate our approach using real-world data for Seoul, which suggests that the effect of heatwaves may be larger than what may be inferred from the extant literature. An R package, HEAT (Heatwave effect Estimation via Adjustment for Temperature), was developed and made publicly available.
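
A minimal sketch of how the ratio estimand might be computed with a simple Poisson regression on simulated data; the variable names, the crude heatwave indicator, and the quadratic temperature term are placeholders, and this is not the modelling approach implemented in the HEAT package.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
temp = rng.normal(20, 8, n)                                    # daily mean temperature (simulated)
heatwave = (temp > np.quantile(temp, 0.95)).astype(float)      # crude HW indicator for illustration
counts = rng.poisson(np.exp(1.5 + 0.002 * (temp - 22) ** 2 + 0.3 * heatwave))

# Poisson regression of daily counts on HW and a simple temperature curve
# (a real analysis would add the confounder set Z and a flexible spline for T).
X = np.column_stack([np.ones(n), heatwave, temp, temp ** 2])
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()

# Optimal temperature OT: the temperature minimizing predicted risk with HW = 0.
grid = np.linspace(temp.min(), temp.max(), 200)
Xg = np.column_stack([np.ones_like(grid), np.zeros_like(grid), grid, grid ** 2])
OT = grid[np.argmin(fit.predict(Xg))]

risk_hw = fit.predict(X[heatwave == 1]).mean()                 # E[R(Y) | HW = 1]
risk_ot = fit.predict(np.array([[1.0, 0.0, OT, OT ** 2]]))[0]  # E[R(Y) | T = OT]
print("estimated heatwave risk ratio:", risk_hw / risk_ot)
```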

An edge $e$ of a graph $G$ is called deletable for some orientation $o$ if the restriction of $o$ to $G-e$ is a strong orientation. In 2021, H\"orsch and Szigeti proposed a new parameter for $3$-edge-connected graphs, called the Frank number, which refines $k$-edge-connectivity. The Frank number is defined as the minimum number of orientations of $G$ for which every edge of $G$ is deletable in at least one of them. They showed that every $3$-edge-connected graph has Frank number at most $7$, and that if such a graph is also $3$-edge-colourable then the parameter is at most $3$. Here we strengthen the latter result by showing that such graphs have Frank number $2$, which also confirms a conjecture of Bar\'at and Bl\'aszik. Furthermore, we prove two sufficient conditions for cubic graphs to have Frank number $2$ and use them in an algorithm to show computationally that the Petersen graph is the only cyclically $4$-edge-connected cubic graph on up to $36$ vertices with Frank number greater than $2$.
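
The basic definitions translate directly into a check one can run on small graphs; below is a sketch using networkx (function names are ours, and the paper's computation up to 36 vertices relies on far more refined techniques than this naive test).

```python
import networkx as nx

def is_deletable(D, e):
    # e is deletable for the orientation D of G if D restricted to G - e is
    # still a strong orientation, i.e. strongly connected.
    H = D.copy()
    u, v = e
    if H.has_edge(u, v):
        H.remove_edge(u, v)
    else:
        H.remove_edge(v, u)
    return nx.is_strongly_connected(H)

def witnesses_frank_number_two(G, D1, D2):
    # D1, D2 are orientations (DiGraphs) of G; they witness Frank number <= 2
    # if every edge of G is deletable in at least one of them.
    return all(is_deletable(D1, e) or is_deletable(D2, e) for e in G.edges())
```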

SARSA, a classical on-policy control algorithm for reinforcement learning, is known to chatter when combined with linear function approximation: SARSA does not diverge but oscillates in a bounded region. However, little is known about how fast SARSA converges to that region and how large the region is. In this paper, we make progress towards this open problem by showing the convergence rate of projected SARSA to a bounded region. Importantly, the region is much smaller than the region that we project into, provided that the magnitude of the reward is not too large. Existing works regarding the convergence of linear SARSA to a fixed point all require the Lipschitz constant of SARSA's policy improvement operator to be sufficiently small; our analysis instead applies to arbitrary Lipschitz constants and thus characterizes the behavior of linear SARSA for a new regime.
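
A minimal sketch of projected SARSA with linear function approximation: the standard semi-gradient SARSA(0) update followed by projection of the weight vector onto an $\ell_2$ ball (the region "we project into"). The environment interface and feature map are placeholders, not taken from the paper.

```python
import numpy as np

def project(w, R):
    # Project the weight vector onto the l2 ball of radius R.
    norm = np.linalg.norm(w)
    return w if norm <= R else w * (R / norm)

def epsilon_greedy(w, phi, state, n_actions, eps=0.1):
    if np.random.rand() < eps:
        return np.random.randint(n_actions)
    return int(np.argmax([w @ phi(state, a) for a in range(n_actions)]))

def projected_sarsa(env, phi, n_actions, d, R=10.0, alpha=0.05, gamma=0.99, episodes=100):
    # env.reset() -> state and env.step(a) -> (next_state, reward, done) are
    # assumed placeholder interfaces; phi(state, action) returns a feature
    # vector of dimension d.
    w = np.zeros(d)
    for _ in range(episodes):
        s = env.reset()
        a = epsilon_greedy(w, phi, s, n_actions)
        done = False
        while not done:
            s2, r, done = env.step(a)
            a2 = epsilon_greedy(w, phi, s2, n_actions)
            target = r + (0.0 if done else gamma * (w @ phi(s2, a2)))
            w = w + alpha * (target - w @ phi(s, a)) * phi(s, a)
            w = project(w, R)                   # projection step
            s, a = s2, a2
    return w
```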

We examine the complexity of the ``Texas Hold'em'' variant of poker from a topological perspective. We show that there exists a natural simplicial complex governing the multi-way winning probabilities between various hands, and that this simplicial complex contains $4$-dimensional spheres as induced subcomplexes. We deduce that evaluating the strength of a pair of cards in Texas Hold'em is an intricate problem, and that even the notion of who is bluffing against whom is ill-defined in some situations.

The conjoining of dynamical systems and deep learning has become a topic of great interest. In particular, neural differential equations (NDEs) demonstrate that neural networks and differential equations are two sides of the same coin. Traditional parameterised differential equations are a special case. Many popular neural network architectures, such as residual networks and recurrent networks, are discretisations of differential equations. NDEs are suitable for tackling generative problems, dynamical systems, and time series (particularly in physics, finance, ...) and are thus of interest to both modern machine learning and traditional mathematical modelling. NDEs offer high-capacity function approximation, strong priors on model space, the ability to handle irregular data, memory efficiency, and a wealth of available theory on both sides. This doctoral thesis provides an in-depth survey of the field. Topics include: neural ordinary differential equations (e.g. for hybrid neural/mechanistic modelling of physical systems); neural controlled differential equations (e.g. for learning functions of irregular time series); and neural stochastic differential equations (e.g. to produce generative models capable of representing complex stochastic dynamics, or sampling from complex high-dimensional distributions). Further topics include: numerical methods for NDEs (e.g. reversible differential equation solvers, backpropagation through differential equations, Brownian reconstruction); symbolic regression for dynamical systems (e.g. via regularised evolution); and deep implicit models (e.g. deep equilibrium models, differentiable optimisation). We anticipate this thesis will be of interest to anyone interested in the marriage of deep learning with dynamical systems, and we hope it will provide a useful reference for the current state of the art.
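
As a concrete instance of the "residual networks are discretisations" remark, a neural ODE $\mathrm{d}y/\mathrm{d}t = f_\theta(y)$ integrated with the explicit Euler method reproduces a residual-network forward pass $y_{k+1} = y_k + h\,f_\theta(y_k)$. The tiny fixed MLP below is a stand-in for a trained vector field; in practice $\theta$ would be learned by backpropagating through the solver or via the adjoint method.

```python
import numpy as np

rng = np.random.default_rng(0)
d, hidden = 4, 16
W1, b1 = rng.normal(0, 0.3, (hidden, d)), np.zeros(hidden)
W2, b2 = rng.normal(0, 0.3, (d, hidden)), np.zeros(d)

def f_theta(y):
    # Vector field dy/dt = f_theta(y): a small two-layer MLP with fixed weights.
    return W2 @ np.tanh(W1 @ y + b1) + b2

def odeint_euler(y0, t0=0.0, t1=1.0, steps=32):
    # Explicit Euler integration; each step is exactly one "residual block".
    y, h = y0.copy(), (t1 - t0) / steps
    for _ in range(steps):
        y = y + h * f_theta(y)
    return y

y0 = rng.normal(size=d)
print("y(1) =", odeint_euler(y0))
```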
