
Can you decide if there is a coincidence in the numbers counting two different combinatorial objects? For example, can you decide if two regions in $\mathbb{R}^3$ have the same number of domino tilings? There are two versions of the problem, with $2\times 1 \times 1$ and $2\times 2 \times 1$ boxes. We prove that in both cases the coincidence problem is not in the polynomial hierarchy unless the polynomial hierarchy collapses to a finite level. While the conclusions are the same, the proofs are notably different and generalize in different directions. We proceed to explore the coincidence problem for counting independent sets and matchings in graphs, matroid bases, order ideals and linear extensions in posets, permutation patterns, and the Kronecker coefficients. We also make a number of conjectures for counting other combinatorial objects such as plane triangulations, contingency tables, standard Young tableaux, reduced factorizations and the Littlewood--Richardson coefficients.
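In a hedged formalization (our notation; the paper may set this up differently), for a counting function $T$, such as the number of domino tilings of a region, the coincidence problem is the decision problem

\[
  C_T \;:=\; \bigl\{ (\Gamma_1, \Gamma_2) \;:\; T(\Gamma_1) = T(\Gamma_2) \bigr\},
\]

for example, deciding whether two given regions in $\mathbb{R}^3$ admit the same number of tilings by $2\times 1\times 1$ boxes.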

Related content

CC focuses on computational complexity. The subject sits at the intersection of mathematics and theoretical computer science, with a clear mathematical profile and a rigorous mathematical style. Official website link:
October 9, 2023

While well-established methods for time-to-event data are available when the proportional hazards assumption holds, there is no consensus on the best inferential approach under non-proportional hazards (NPH). However, a wide range of parametric and non-parametric methods for testing and estimation in this scenario have been proposed. To provide recommendations on the statistical analysis of clinical trials where non-proportional hazards are expected, we conducted a comprehensive simulation study under different scenarios of non-proportional hazards, including delayed onset of treatment effect, crossing hazard curves, subgroups with different treatment effects, and changing hazards after disease progression. We assessed type I error rate control, power, and confidence interval coverage, where applicable, for a wide range of methods, including weighted log-rank tests, the MaxCombo test, summary measures such as the restricted mean survival time (RMST), average hazard ratios, and milestone survival probabilities, as well as accelerated failure time regression models. We found a trade-off between interpretability and power when choosing an analysis strategy under NPH scenarios. While analysis methods based on weighted log-rank tests were typically favorable in terms of power, they do not provide an easily interpretable treatment effect estimate. Moreover, depending on the weight function, they test a narrow null hypothesis of equal hazard functions, and rejection of this null hypothesis may not allow a direct conclusion of treatment benefit in terms of the survival function. In contrast, non-parametric procedures based on well-interpretable measures such as the RMST difference had lower power in most scenarios. Model-based methods relying on specific survival distributions had larger power but often gave biased estimates and lower-than-nominal confidence interval coverage.
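As a hedged, self-contained illustration of one of the summary measures compared above, the sketch below (our code, not the authors'; the toy data and the truncation time tau are invented) estimates the RMST difference between two arms as the area under the Kaplan-Meier curve up to tau, using numpy only.

import numpy as np

def kaplan_meier(time, event):
    """Kaplan-Meier survival estimate, returned at the distinct event times."""
    time = np.asarray(time, dtype=float)
    event = np.asarray(event, dtype=bool)
    order = np.argsort(time)
    time, event = time[order], event[order]
    times, surv = [], []
    s, n_at_risk, i = 1.0, len(time), 0
    while i < len(time):
        t, d, c = time[i], 0, 0
        while i < len(time) and time[i] == t:
            d += int(event[i])   # events at time t
            c += 1               # everyone leaving the risk set at time t
            i += 1
        if d > 0:
            s *= 1.0 - d / n_at_risk
            times.append(t)
            surv.append(s)
        n_at_risk -= c
    return np.array(times), np.array(surv)

def rmst(time, event, tau):
    """Restricted mean survival time: area under the KM curve on [0, tau]."""
    times, surv = kaplan_meier(time, event)
    keep = times <= tau
    grid = np.concatenate(([0.0], times[keep], [tau]))   # interval endpoints
    vals = np.concatenate(([1.0], surv[keep]))           # S(t) on each interval
    return float(np.sum(np.diff(grid) * vals))

# invented toy trial: exponential event times, independent censoring
rng = np.random.default_rng(0)
t_trt, t_ctl = rng.exponential(12.0, 200), rng.exponential(8.0, 200)
cens = rng.exponential(20.0, 200)
tau = 15.0
diff = (rmst(np.minimum(t_trt, cens), t_trt <= cens, tau)
        - rmst(np.minimum(t_ctl, cens), t_ctl <= cens, tau))
print("RMST difference (treatment - control) up to tau:", round(diff, 3))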

We construct a new family of permutationally invariant codes that correct $t$ Pauli errors for any $t\ge 1$. We also show that codes in the new family correct spontaneous decay errors as well as deletion errors. In many cases the codes in this family are shorter than the best previously known explicit families of permutationally invariant codes for Pauli errors, for deletions, and for the amplitude damping channel. As a separate result, we generalize the conditions for permutationally invariant codes to correct $t$ Pauli errors from the previously known results for $t=1$ to any number of errors. For small $t$, these conditions can be used to construct new examples of codes by computer.
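For background (the standard Knill-Laflamme conditions, not the paper's specialized permutation-invariant criterion), a code with logical states $|i_L\rangle$ corrects an error set $\mathcal{E}$ if and only if

\[
  \langle i_L |\, E_a^{\dagger} E_b \,| j_L \rangle \;=\; C_{ab}\, \delta_{ij}
  \qquad \text{for all } E_a, E_b \in \mathcal{E},
\]

where the Hermitian matrix $(C_{ab})$ is independent of $i$ and $j$; for correcting $t$ Pauli errors, $\mathcal{E}$ is the set of Pauli operators of weight at most $t$.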

In this work, we tackle the problem of minimising the Conditional Value-at-Risk (CVaR) of output quantities of complex differential models with random input data, using gradient-based approaches in combination with the Multi-Level Monte Carlo (MLMC) method. In particular, we consider the framework of multi-level Monte Carlo for parametric expectations and propose modifications of the MLMC estimator, the error estimation procedure, and the adaptive MLMC parameter selection to ensure the estimation of the CVaR and sensitivities for a given design with a prescribed accuracy. We then propose combining the MLMC framework with an alternating inexact minimisation-gradient descent algorithm, for which we prove exponential convergence in the optimisation iterations under the assumptions of strong convexity and Lipschitz continuity of the gradient of the objective function. We demonstrate the performance of our approach on two numerical examples of practical relevance, which exhibit the same optimal asymptotic cost-tolerance behaviour as standard MLMC methods for fixed-design computations of output expectations.
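As a hedged sketch of the quantity being optimised (our simplification; the paper uses MLMC, whereas the toy below is single-level Monte Carlo with an invented output distribution), the CVaR at level $\beta$ can be estimated from samples via the Rockafellar-Uryasev formulation $\mathrm{CVaR}_\beta(Q) = \min_t \{\, t + \mathbb{E}[(Q - t)_+]/(1-\beta) \,\}$, whose minimiser is the $\beta$-quantile.

import numpy as np

def cvar_estimate(samples, beta):
    """Monte Carlo estimate of CVaR_beta via the Rockafellar-Uryasev formula.
    The inner minimisation over t is attained at the beta-quantile, so no
    numerical optimiser is needed."""
    samples = np.asarray(samples, dtype=float)
    t = np.quantile(samples, beta)              # VaR_beta estimate
    excess = np.maximum(samples - t, 0.0)
    return t + excess.mean() / (1.0 - beta)     # CVaR_beta estimate

# invented toy output of a "model" with random input data
rng = np.random.default_rng(1)
q = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)
print("VaR_0.95 :", round(float(np.quantile(q, 0.95)), 4))
print("CVaR_0.95:", round(float(cvar_estimate(q, 0.95)), 4))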

Electrical circuits are present in a variety of technologies, making their design an important part of computer-aided engineering. The growing number of tunable parameters that affect the final design leads to a need for new approaches to quantifying their impact. Machine learning may play a key role in this regard; however, current approaches often make suboptimal use of existing knowledge about the system at hand. For circuits, the description via modified nodal analysis is well understood. This particular formulation leads to systems of differential-algebraic equations (DAEs), which bring with them a number of peculiarities, e.g., hidden constraints that the solution needs to fulfill. We aim to use the recently introduced dissection concept for DAEs, which decouples a given system into ordinary differential equations, depending only on the differential variables, and purely algebraic equations that describe the relations between the differential and algebraic variables. The idea is then to learn only the differential variables and reconstruct the algebraic ones using the relations from the decoupling. This approach guarantees that the algebraic constraints are fulfilled up to the accuracy of the nonlinear system solver, which is the main benefit highlighted in this article.
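A minimal sketch of the reconstruction idea for a semi-explicit DAE $x' = f(x, y)$, $0 = g(x, y)$: given the differential variables $x$ (learned or, as in this invented toy, integrated), the algebraic variables $y$ are recovered by solving the constraint with a nonlinear solver, so the constraint holds up to solver accuracy. The system and names below are placeholders and much simpler than the dissection-based decoupling of a general circuit DAE.

import numpy as np
from scipy.optimize import fsolve

# invented toy semi-explicit DAE:
#   x'(t) = -x(t) + y(t)            (differential part)
#   0     = y(t)**3 + y(t) - x(t)   (algebraic constraint)
def f(x, y):
    return -x + y

def g(y, x):
    return y**3 + y - x

def reconstruct_y(x):
    """Recover the algebraic variable from g(y, x) = 0 with a nonlinear solver."""
    return fsolve(g, x0=0.0, args=(x,))[0]

# pretend x(t) comes from a learned model; here we simply integrate it (forward Euler)
ts = np.linspace(0.0, 2.0, 201)
h = ts[1] - ts[0]
xs = np.empty_like(ts); xs[0] = 1.0
ys = np.empty_like(ts); ys[0] = reconstruct_y(xs[0])
for k in range(len(ts) - 1):
    xs[k + 1] = xs[k] + h * f(xs[k], ys[k])
    ys[k + 1] = reconstruct_y(xs[k + 1])

# the algebraic constraint is satisfied up to the accuracy of the nonlinear solver
print("max constraint residual:", np.max(np.abs(g(ys, xs))))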

Stress testing refers to the application of adverse financial or macroeconomic scenarios to a portfolio. For this purpose, financial or macroeconomic risk factors are linked with asset returns, typically via a factor model. We expand the range of risk factors by adapting dimension-reduction techniques from unsupervised learning, namely PCA and autoencoders. This results in aggregated risk factors, encompassing a global factor, factors representing broad geographical regions, and factors specific to cyclical and defensive industries. As the adapted PCA and autoencoders provide an interpretation of the latent factors, this methodology is also valuable in other areas where dimension-reduction and explainability are crucial.
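A minimal sketch of the PCA step (ours; the paper's region- and industry-specific construction is more involved, and the toy panel below is invented), extracting an aggregated "global" risk factor from asset returns with scikit-learn:

import numpy as np
from sklearn.decomposition import PCA

# invented toy panel: T daily returns for N assets driven by one common factor
rng = np.random.default_rng(42)
T, N = 500, 30
global_factor = rng.normal(0.0, 0.01, size=T)
loadings = rng.uniform(0.5, 1.5, size=N)
returns = np.outer(global_factor, loadings) + rng.normal(0.0, 0.005, size=(T, N))

# the first principal component serves as an aggregated "global" risk factor
pca = PCA(n_components=3)
factors = pca.fit_transform(returns)
print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
print("correlation of PC1 with the true global factor:",
      round(abs(np.corrcoef(factors[:, 0], global_factor)[0, 1]), 3))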

A central challenge in the verification of quantum computers is benchmarking their performance as a whole and demonstrating their computational capabilities. In this work, we find a universal model of quantum computation, Bell sampling, that can be used for both of these tasks and thus provides an ideal stepping stone towards fault tolerance. In Bell sampling, we measure two copies of a state prepared by a quantum circuit in the transversal Bell basis. We show that the Bell samples are classically intractable to produce and at the same time constitute what we call a circuit shadow: from the Bell samples we can efficiently extract information about the quantum circuit preparing the state, as well as diagnose circuit errors. In addition to known properties that can be efficiently extracted from Bell samples, we give two new and efficient protocols: a test for the depth of the circuit and an algorithm to estimate a lower bound on the number of T gates in the circuit. With some additional measurements, our algorithm learns a full description of states prepared by circuits with low T-count.
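For reference (standard background, not a result of this paper), the transversal Bell measurement pairs the $i$-th qubit of the first copy with the $i$-th qubit of the second copy and measures every pair in the Bell basis

\[
  |\beta_{ab}\rangle \;=\; \tfrac{1}{\sqrt{2}}\bigl( |0, b\rangle + (-1)^{a}\, |1, 1 \oplus b\rangle \bigr),
  \qquad a, b \in \{0, 1\},
\]

implemented by a CNOT from the first qubit to the second, a Hadamard on the first, and a computational-basis measurement of both. Since $|\beta_{ab}\rangle = (I \otimes X^{b}Z^{a})\,|\beta_{00}\rangle$, each $n$-qubit Bell sample is a $2n$-bit string that can be read as a Pauli label.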

We couple the L1 discretization of the Caputo fractional derivative in time with a Galerkin scheme to devise a linear numerical method for the semilinear subdiffusion equation. Two important features of our setting are nonsmooth initial data and a time-dependent diffusion coefficient. We prove the stability and convergence of the method under weak assumptions on the regularity of the diffusivity. We establish optimal pointwise-in-space and global-in-time error estimates, which are verified with several numerical experiments.
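A minimal sketch of the L1 discretization itself (the standard uniform-grid formula, stated without the Galerkin part; grid sizes below are invented): for $0<\alpha<1$ and $t_n = n\tau$, the Caputo derivative at $t_n$ is approximated by $\frac{\tau^{-\alpha}}{\Gamma(2-\alpha)} \sum_{j=1}^{n} b_{n-j}\,(u^j - u^{j-1})$ with weights $b_k = (k+1)^{1-\alpha} - k^{1-\alpha}$. The check compares against the exact Caputo derivative of $u(t) = t^2$, namely $2\,t^{2-\alpha}/\Gamma(3-\alpha)$.

import numpy as np
from math import gamma

def l1_caputo(u, tau, alpha):
    """L1 approximation of the Caputo derivative of order alpha in (0, 1)
    on a uniform grid t_n = n * tau, given samples u[0..N]."""
    N = len(u) - 1
    b = (np.arange(1, N + 1) ** (1 - alpha)
         - np.arange(0, N) ** (1 - alpha))            # b_k for k = 0..N-1
    d = np.zeros(N + 1)
    for n in range(1, N + 1):
        jumps = np.diff(u[: n + 1])                   # u^j - u^{j-1}, j = 1..n
        d[n] = np.sum(b[n - 1 :: -1] * jumps) / (gamma(2 - alpha) * tau ** alpha)
    return d

alpha, tau, N = 0.6, 1e-3, 1000
t = tau * np.arange(N + 1)
u = t ** 2
exact = 2.0 * t ** (2 - alpha) / gamma(3 - alpha)     # Caputo derivative of t^2
approx = l1_caputo(u, tau, alpha)
print("max error of the L1 approximation:", np.max(np.abs(approx - exact)))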

We show how the relatively initial or relatively terminal fixed points for a well-behaved functor $F$ form a pair of adjoint functors between $F$-coalgebras and $F$-algebras. We use the language of locally presentable categories to find sufficient conditions for the existence of this adjunction. We show that relative fixed points may be characterized as (co)equalizers of the free (co)monad on $F$. In particular, when $F$ is a polynomial functor on $\mathsf{Set}$, the relative fixed points are a quotient or subset of the free term algebra or the cofree term coalgebra. We give examples of the relative fixed points for polynomial functors, including an example given by the Sierpinski carpet. Lastly, we prove a general preservation result for relative fixed points.
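For orientation (a standard example of absolute, not relative, fixed points): for the polynomial functor $F(X) = 1 + A \times X$ on $\mathsf{Set}$,

\[
  \mu F \;\cong\; A^{*}
  \qquad\text{and}\qquad
  \nu F \;\cong\; A^{*} \cup A^{\omega},
\]

the finite lists over $A$ and the finite-or-infinite lists over $A$, respectively; as described above, the relative fixed points arise from such term (co)algebras as quotients and subsets.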

The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. The lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inference conditioned on explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
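A minimal sketch of the similarity computation at the core of the theory (ours; the embedding space, the distance, and the decay parameter are invented placeholders): Shepard's universal law models generalization from one stimulus to another as an exponential decay of their distance in a psychological similarity space.

import numpy as np

def shepard_generalization(x, y, decay=1.0):
    """Shepard's universal law: generalization decays exponentially
    with distance in a psychological similarity space."""
    d = np.linalg.norm(np.asarray(x, dtype=float) - np.asarray(y, dtype=float))
    return float(np.exp(-decay * d))

# invented example: feature embeddings of an explained image and two comparisons
explained = np.array([0.2, 0.7, 0.1])
candidates = {"similar image": np.array([0.25, 0.65, 0.12]),
              "dissimilar image": np.array([0.9, 0.1, 0.8])}
for name, emb in candidates.items():
    print(name, "->", round(shepard_generalization(explained, emb, decay=2.0), 3))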

Forecasting has always been at the forefront of decision making and planning. The uncertainty that surrounds the future is both exciting and challenging, with individuals and organisations seeking to minimise risks and maximise utilities. The large number of forecasting applications calls for a diverse set of forecasting methods to tackle real-life challenges. This article provides a non-systematic review of the theory and the practice of forecasting. We provide an overview of a wide range of theoretical, state-of-the-art models, methods, principles, and approaches to prepare, produce, organise, and evaluate forecasts. We then demonstrate how such theoretical concepts are applied in a variety of real-life contexts. We do not claim that this review is an exhaustive list of methods and applications. However, we hope that our encyclopedic presentation will offer a point of reference for the rich work that has been undertaken over the last decades, with some key insights for the future of forecasting theory and practice. Given its encyclopedic nature, the intended mode of reading is non-linear. We offer cross-references to allow readers to navigate through the various topics. We complement the theoretical concepts and applications covered here with large lists of free or open-source software implementations and publicly available databases.
