
In this paper we explore the concept of sequential inductive prediction intervals using theory from sequential testing. We furthermore introduce a three-parameter PAC definition of prediction intervals that allows us, via simulation, to achieve nearly sharp bounds with high probability.
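
As a toy illustration of the distribution-free coverage idea behind PAC prediction intervals (not the paper's sequential procedure), the sketch below checks by simulation that the interval spanned by the min and max of n i.i.d. draws contains a fresh draw with probability (n-1)/(n+1); the function name and parameters are our own.

```python
import random

def coverage_of_minmax(n, trials, rng):
    """Fraction of trials in which a fresh draw lands inside [min, max]
    of n i.i.d. samples from the same (continuous) distribution."""
    hits = 0
    for _ in range(trials):
        sample = [rng.random() for _ in range(n)]
        lo, hi = min(sample), max(sample)
        if lo <= rng.random() <= hi:
            hits += 1
    return hits / trials

# Theory: coverage = (n - 1)/(n + 1) = 0.90 for n = 19, for any continuous law.
cov = coverage_of_minmax(n=19, trials=20000, rng=random.Random(0))
print(round(cov, 3))
```

The PAC twist is to add a confidence parameter: with probability at least 1 - delta over the training sample, the interval's coverage is at least 1 - epsilon.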

Related content

PAC learning theory is not concerned with the hypothesis-selection algorithm; it is concerned with whether a good hypothesis h can be learned from the hypothesis space H at all. The theory does not ask how to search the hypothesis space for a good hypothesis, only whether one can be found. So what counts as a "good hypothesis"? It only needs to satisfy two conditions (the PAC identification conditions).
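
The two conditions amount to requiring that, with probability at least 1 - delta, the learned hypothesis h has generalization error at most epsilon. For a finite hypothesis space H and a consistent learner, a standard textbook sample-complexity bound (not specific to any paper above) makes this concrete:

```python
from math import log, ceil

def pac_sample_size(h_size, eps, delta):
    """Number of samples sufficient for a consistent learner over a finite
    class H to be PAC: P[error(h) <= eps] >= 1 - delta.
    Standard bound: m >= (ln|H| + ln(1/delta)) / eps."""
    return ceil((log(h_size) + log(1.0 / delta)) / eps)

# e.g. |H| = 1000, eps = 0.05, delta = 0.01
print(pac_sample_size(h_size=1000, eps=0.05, delta=0.01))  # -> 231
```

The bound grows only logarithmically in |H| and 1/delta, but linearly in 1/epsilon.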

Human participants play a central role in the development of modern artificial intelligence (AI) technology, in psychological science, and in user research. Recent advances in generative AI have attracted growing interest in the possibility of replacing human participants in these domains with AI surrogates. We survey several such "substitution proposals" to better understand the arguments for and against substituting human participants with modern generative AI. Our scoping review indicates that the recent wave of these proposals is motivated by goals such as reducing the costs of research and development work and increasing the diversity of collected data. However, these proposals ignore and ultimately conflict with foundational values of work with human participants: representation, inclusion, and understanding. This paper critically examines the principles and goals underlying human participation to help chart out paths for future work that truly centers and empowers participants.

In this paper, a direct finite element method is proposed for solving interface problems on simple unfitted meshes. The fact that the two interface conditions form an $H^{\frac12}(\Gamma)\times H^{-\frac12}(\Gamma)$ pair leads to a simple and direct weak formulation, with an integral term for the mutual interaction over the interface, and the well-posedness of this weak formulation is proved. Based on this formulation, a direct finite element method is proposed to solve the problem on the two adjacent subdomains separated by the interface, using a conforming finite element method and a conforming mixed finite element method, respectively. Well-posedness and an optimal a priori error analysis are proved for this direct finite element method under some reasonable assumptions. A simple lowest-order direct finite element method, using the linear element and the lowest-order Raviart-Thomas element, is proposed and shown to admit the optimal a priori error estimate by verifying the aforementioned assumptions. Numerical tests are also conducted to verify the theoretical results and the effectiveness of the direct finite element method.
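
A full unfitted finite element method is beyond a short sketch, but the role of the interface pair can be illustrated in one dimension: for $-(\beta u')' = 0$ on $(0,1)$ with $\beta$ piecewise constant, a conservative difference scheme that enforces continuity of $u$ and of the flux $\beta u'$ at $x = 0.5$ reproduces the exact piecewise-linear solution at the nodes. This toy analogue is our own, not the paper's method:

```python
def solve_interface_1d(n, beta_left, beta_right):
    """Solve -(beta u')' = 0 on (0,1), u(0)=0, u(1)=1, beta jumping at x=0.5.
    Conservative finite differences with per-cell beta enforce flux continuity;
    the tridiagonal system is solved with the Thomas algorithm."""
    h = 1.0 / n
    beta = [beta_left if (i + 0.5) * h < 0.5 else beta_right for i in range(n)]
    m = n - 1                      # interior unknowns u_1 .. u_{n-1}
    a = [0.0] * m; bdiag = [0.0] * m; c = [0.0] * m; d = [0.0] * m
    for i in range(m):             # node j = i + 1; cells i (left), i+1 (right)
        bl, br = beta[i], beta[i + 1]
        a[i], bdiag[i], c[i] = -bl, bl + br, -br
    d[m - 1] = beta[n - 1] * 1.0   # boundary condition u(1) = 1
    for i in range(1, m):          # forward elimination
        w = a[i] / bdiag[i - 1]
        bdiag[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    u = [0.0] * m                  # back substitution
    u[m - 1] = d[m - 1] / bdiag[m - 1]
    for i in range(m - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / bdiag[i]
    return [i * h for i in range(n + 1)], [0.0] + u + [1.0]

# beta = 1 on (0, 0.5), beta = 4 on (0.5, 1): constant flux q = 1/(0.5 + 0.125) = 1.6,
# so the exact solution is u(x) = 1.6 x for x <= 0.5, e.g. u(0.5) = 0.8.
xs, u = solve_interface_1d(10, 1.0, 4.0)
```

Because the scheme conserves the discrete flux across the interface node, it is nodally exact for this piecewise-linear solution.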

This paper redefines the foundations of asymmetric cryptography's homomorphic cryptosystems through the application of the Yoneda Lemma. It explicitly illustrates that widely adopted systems, including ElGamal, RSA, Benaloh, Regev's LWE, and NTRUEncrypt, directly derive from the principles of the Yoneda Lemma. This synthesis gives rise to a holistic homomorphic encryption framework named the Yoneda Encryption Scheme. Within this scheme, encryption is elucidated through the bijective maps of the Yoneda Lemma Isomorphism, and decryption seamlessly follows from the naturality of these maps. This unification suggests a conjecture for a unified model theory framework, providing a basis for reasoning about both homomorphic and fully homomorphic encryption (FHE) schemes. As a practical demonstration, the paper introduces an FHE scheme capable of processing arbitrary finite sequences of encrypted multiplications and additions without the need for additional tweaking techniques, such as squashing or bootstrapping. This not only underscores the practical implications of the proposed theoretical advancements but also introduces new possibilities for leveraging model theory and forcing techniques in cryptography to facilitate the design of FHE schemes.
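
The homomorphic structure referred to above can be demonstrated concretely with textbook ElGamal over $\mathbb{Z}_p^*$: component-wise multiplication of two ciphertexts decrypts to the product of the plaintexts. The tiny parameters below are illustrative only, not secure:

```python
p = 1019            # small prime, demo only
g = 2
x = 77              # secret key
h = pow(g, x, p)    # public key

def enc(m, r):
    """ElGamal encryption with explicit randomness r (normally random)."""
    return (pow(g, r, p), (m * pow(h, r, p)) % p)

def dec(ct):
    """Decrypt: v / u^x mod p, inverse via Fermat's little theorem."""
    u, v = ct
    return (v * pow(u, p - 1 - x, p)) % p

m1, m2 = 12, 34
ca, cb = enc(m1, 5), enc(m2, 9)
# Multiplicative homomorphism: component-wise ciphertext product
prod = ((ca[0] * cb[0]) % p, (ca[1] * cb[1]) % p)
print(dec(prod), (m1 * m2) % p)   # both 408
```

The product ciphertext is $(g^{r_1+r_2}, m_1 m_2 h^{r_1+r_2})$, i.e. a valid encryption of $m_1 m_2$, which is the bijectivity-plus-naturality pattern the paper abstracts via the Yoneda Lemma.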

We propose a novel algorithm for the support estimation of partially known Gaussian graphical models that incorporates prior information about the underlying graph. In contrast to classical approaches that provide a point estimate based on a maximum likelihood or a maximum a posteriori criterion using (simple) priors on the precision matrix, we consider a prior on the graph and rely on annealed Langevin diffusion to generate samples from the posterior distribution. Since the Langevin sampler requires access to the score function of the underlying graph prior, we use graph neural networks to effectively estimate the score from a graph dataset (either available beforehand or generated from a known distribution). Numerical experiments demonstrate the benefits of our approach.
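
The Langevin step itself only needs the score of the target distribution. A minimal one-dimensional sketch (a toy stand-in for the graph posterior and the GNN-estimated score; here the target is a standard normal, whose score is $-x$):

```python
import random, math

def langevin_samples(score, steps, step_size, x0, rng, burn_in):
    """Unadjusted Langevin dynamics:
    x <- x + (eps/2) * score(x) + sqrt(eps) * N(0, 1).
    With score(x) = d/dx log p(x), iterates approximately sample from p."""
    x, out = x0, []
    for t in range(steps):
        x += 0.5 * step_size * score(x) + math.sqrt(step_size) * rng.gauss(0.0, 1.0)
        if t >= burn_in:
            out.append(x)
    return out

# Target N(0,1): score(x) = -x.  Samples should have mean ~0 and variance ~1.
xs = langevin_samples(lambda x: -x, steps=20000, step_size=0.05,
                      x0=3.0, rng=random.Random(1), burn_in=1000)
mean = sum(xs) / len(xs)
var = sum((v - mean) ** 2 for v in xs) / len(xs)
```

In the paper's setting the scalar `score` is replaced by a graph-neural-network estimate of the score of the graph prior, and the noise level is annealed over the run.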

In this paper we consider online multiple testing with familywise error rate (FWER) control, where the probability of committing at least one type I error shall remain under control while testing a possibly infinite sequence of hypotheses over time. Currently, Adaptive-Discard (ADDIS) procedures seem to be the most promising online procedures with FWER control in terms of power. Our main contribution is a uniform improvement of the ADDIS principle and thus of all ADDIS procedures: the methods we propose reject at least as many hypotheses as ADDIS procedures, and in some cases even more, while maintaining FWER control. In addition, we show that no other FWER-controlling procedure enlarges the event of rejecting any hypothesis. Finally, we apply the new principle to derive uniform improvements of ADDIS-Spending and ADDIS-Graph.
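
For context, the simplest online FWER procedure just spends a summable sequence of test levels; ADDIS-type procedures improve on it by discarding large p-values and adapting to the proportion of nulls. The sketch below shows only this naive baseline, with our own function name:

```python
import math

def online_bonferroni(pvals, alpha=0.05):
    """Naive online FWER control: spend alpha_i = alpha * 6 / (pi^2 * i^2),
    so that sum_i alpha_i = alpha, and reject H_i iff p_i <= alpha_i.
    (ADDIS procedures uniformly improve on this kind of static spending.)"""
    rejections = []
    for i, p in enumerate(pvals, start=1):
        alpha_i = alpha * 6.0 / (math.pi ** 2 * i ** 2)
        rejections.append(p <= alpha_i)
    return rejections

print(online_bonferroni([0.001, 0.2, 0.004, 0.0001]))
# [True, False, False, True]: levels ~0.0304, 0.0076, 0.0034, 0.0019
```

By a union bound the probability of any false rejection is at most the sum of the spent levels, i.e. at most alpha, regardless of how many hypotheses arrive.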

Let $R_\epsilon$ denote randomized query complexity with error probability $\epsilon$, and $R := R_{1/3}$. In this work we investigate whether a perfect composition theorem $R(f \circ g^n) = \Omega(R(f) \cdot R(g))$ holds for a relation $f \subseteq \{0,1\}^n \times S$ and a total inner function $g : \{0,1\}^m \to \{0,1\}$. Let $D^{\mathrm{prod}}$ denote the maximum distributional query complexity with respect to any product (over variables) distribution. We show the composition theorem $R(f \circ g^n) = \Omega(R(f) \cdot D^{\mathrm{prod}}(g))$ up to logarithmic factors. In light of the minimax theorem, which states that $R(g)$ is the maximum distributional complexity of $g$ over any distribution, our result makes progress towards answering the composition question. We prove our result by means of a complexity measure $R^{\mathrm{prod}}_\epsilon$ that we define for total Boolean functions, and show it to be equivalent, up to logarithmic factors, to the sabotage complexity measure $\mathrm{RS}$ defined by Ben-David and Kothari (ICALP 2019): $\mathrm{RS}(g) = \Theta(R^{\mathrm{prod}}_{1/3}(g))$ up to log factors. We then ask whether our bound $\mathrm{RS}(g) = \Omega(D^{\mathrm{prod}}(g))$ (up to log factors) is tight, and answer this question in the negative by showing that for the NAND tree function, sabotage complexity is polynomially larger than $D^{\mathrm{prod}}$. Our proof yields an alternative derivation of the tight lower bound on the bounded-error randomized query complexity of the NAND tree function (originally proved by Santha in 1985), which may be of independent interest. Our result gives an explicit polynomial separation between $R$ and $D^{\mathrm{prod}}$ which, to our knowledge, was not known prior to our work.
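
The NAND tree function admits the classical randomized short-circuiting evaluation: evaluate the two subtrees in random order, and skip the second whenever the first already determines the answer (a first child equal to 0 forces NAND = 1). This is the algorithm whose expected query count, roughly $O(n^{0.753})$, matches Santha's lower bound mentioned above; the sketch is ours:

```python
import random

def nand_tree(leaves, rng):
    """Randomized short-circuit evaluation of a complete binary NAND tree.
    `leaves` has power-of-two length.  The two subtrees are evaluated in
    random order; if the first evaluates to 0, NAND is 1 and the second
    subtree is never queried."""
    n = len(leaves)
    if n == 1:
        return leaves[0]
    halves = [leaves[:n // 2], leaves[n // 2:]]
    rng.shuffle(halves)                      # random evaluation order
    if nand_tree(halves[0], rng) == 0:
        return 1                             # NAND(0, _) = 1, short-circuit
    return 1 - nand_tree(halves[1], rng)     # NAND(1, y) = 1 - y

print(nand_tree([1, 0, 1, 1], random.Random(0)))
```

Since NAND is symmetric, the random order never changes the computed value, only how many leaves are read.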

Statistical inference for high dimensional parameters (HDPs) can exploit their intrinsic correlation: parameters that are close spatially or temporally tend to have similar values. This is why nonlinear mixed-effects models (NMMs) are commonly (and appropriately) used for models with HDPs. Conversely, in many practical applications of NMMs, the random effects (REs) are actually correlated HDPs that should remain constant during repeated sampling for frequentist inference. In both scenarios, inference should be conditional on the REs, rather than marginal inference obtained by integrating them out. In this paper, we first summarize recent theory of conditional inference for NMMs, and then propose a bias-corrected RE predictor and confidence interval (CI). We also extend this methodology to the case where some REs are not associated with data. Simulation studies indicate that the new approach substantially improves the conditional coverage rate of RE CIs, including CIs for smooth functions in generalized additive models, compared to the existing method based on marginal inference.
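
A toy analogue of RE prediction under a Gaussian two-level model $y_i = \theta_i + e_i$ illustrates the shrinkage whose conditional bias the paper corrects: the standard (BLUP-type) predictor pulls each observation toward the grand mean. The moment estimators and function name below are our own, not the paper's bias-corrected procedure:

```python
def shrinkage_predictor(y, sigma2):
    """BLUP-type predictor of random effects theta_i from y_i = theta_i + e_i,
    e_i ~ N(0, sigma2), theta_i ~ N(mu, tau2), with mu and tau2 estimated by
    method of moments:  theta_hat_i = mu + (tau2 / (tau2 + sigma2)) * (y_i - mu)."""
    n = len(y)
    mu = sum(y) / n
    s2 = sum((v - mu) ** 2 for v in y) / (n - 1)   # estimates tau2 + sigma2
    tau2 = max(s2 - sigma2, 0.0)
    shrink = tau2 / (tau2 + sigma2)
    return [mu + shrink * (v - mu) for v in y]

# Extreme observations are pulled toward the grand mean (here mu = 3).
preds = shrinkage_predictor([1.0, 2.0, 3.0, 6.0], sigma2=1.0)
```

Conditionally on the realized $\theta_i$, this shrinkage is biased (extreme effects are systematically under-predicted), which is exactly why naive marginal CIs under-cover and a bias correction is needed.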

This note shows how to compute, to high relative accuracy under mild assumptions, complex Jacobi rotations for diagonalization of Hermitian matrices of order two, using the correctly rounded functions $\mathtt{cr\_hypot}$ and $\mathtt{cr\_rsqrt}$, proposed for standardization in the C programming language as recommended by the IEEE-754 floating-point standard. The rounding to nearest (ties to even) and the non-stop arithmetic are assumed. The numerical examples compare the observed with theoretical bounds on the relative errors in the rotations' elements, and show that the maximal observed departure of the rotations' determinants from unity is smaller than that of the transformations computed by LAPACK.
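
A sketch of the computation for a 2x2 Hermitian matrix $[[a, \bar b], [b, c]]$, using Python's `math.hypot` and `math.copysign` as stand-ins for the correctly rounded `cr_hypot` (and without `cr_rsqrt`); this follows the standard stable Jacobi formulas rather than the note's exact algorithm:

```python
import math

def jacobi_rotation(a, b, c):
    """Unitary U with U^H A U diagonal, for the 2x2 Hermitian matrix
    A = [[a, conj(b)], [b, c]] with a, c real and b complex.
    math.hypot stands in for the correctly rounded cr_hypot."""
    if b == 0:
        return [[1.0, 0.0], [0.0, 1.0]]
    ab = math.hypot(b.real, b.imag)              # |b|
    phase = b / ab                               # e^{i arg b}, modulus 1
    tau = (c - a) / (2.0 * ab)
    t = 1.0 / (tau + math.copysign(math.hypot(tau, 1.0), tau))  # small tangent
    cs = 1.0 / math.hypot(t, 1.0)
    sn = t * cs
    # U = diag(1, phase) @ [[cs, sn], [-sn, cs]]
    return [[cs, sn], [-phase * sn, phase * cs]]

def mm(X, Y):  # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a, c, b = 2.0, -1.0, 1 + 2j
A = [[a, b.conjugate()], [b, c]]
U = jacobi_rotation(a, b, c)
UH = [[complex(U[j][i]).conjugate() for j in range(2)] for i in range(2)]
M = mm(mm(UH, A), U)          # numerically diagonal
print(abs(M[0][1]), abs(M[1][0]))
```

The phase factor first reduces the problem to the real symmetric case $[[a, |b|], [|b|, c]]$, to which the usual small-tangent rotation formula is applied; the trace $a + c$ is preserved by the similarity.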

In this work, a Generalized Finite Difference (GFD) scheme is presented for effectively computing the numerical solution of a parabolic-elliptic system modelling a bacterial strain with density-suppressed motility. The GFD method is a meshless method known for its simplicity in solving non-linear boundary value problems over irregular geometries. The paper first introduces the basic elements of the GFD method, and then an explicit-implicit scheme is derived. Convergence of the method is proven under a bound on the time step, and an algorithm is provided for its computational implementation. Finally, some examples are considered, comparing the results obtained with a regular mesh and with an irregular cloud of points.
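
In one dimension, the weighted least-squares Taylor fit underlying the GFD method reduces to the classical 3-point formula on irregularly spaced points, which is exact for quadratics. A small sketch (our own function, not the paper's 2D scheme):

```python
def gfd_second_derivative(x_left, x_mid, x_right):
    """GFD-style weights for u'' at x_mid from two irregularly spaced
    neighbours.  In 1D the GFD least-squares Taylor fit reduces to
      u'' ~ 2*[ u_l/(h1*(h1+h2)) - u_m/(h1*h2) + u_r/(h2*(h1+h2)) ],
    with h1 = x_mid - x_left and h2 = x_right - x_mid."""
    h1, h2 = x_mid - x_left, x_right - x_mid
    return (2.0 / (h1 * (h1 + h2)),
            -2.0 / (h1 * h2),
            2.0 / (h2 * (h1 + h2)))

# The weights are exact for quadratics: u(x) = x^2 has u'' = 2 everywhere.
wl, wm, wr = gfd_second_derivative(0.1, 0.35, 0.8)
approx = wl * 0.1 ** 2 + wm * 0.35 ** 2 + wr * 0.8 ** 2
print(approx)   # close to 2.0
```

In 2D or on an irregular cloud of points, the same idea uses more neighbours than Taylor unknowns and solves a small weighted least-squares system per star, which is what makes the method meshless.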

In this paper, we focus on the self-supervised learning of visual correspondence using unlabeled videos in the wild. Our method simultaneously considers intra- and inter-video representation associations for reliable correspondence estimation. The intra-video learning transforms the image contents across frames within a single video via the frame pair-wise affinity. To obtain the discriminative representation for instance-level separation, we go beyond the intra-video analysis and construct the inter-video affinity to facilitate the contrastive transformation across different videos. By forcing the transformation consistency between intra- and inter-video levels, the fine-grained correspondence associations are well preserved and the instance-level feature discrimination is effectively reinforced. Our simple framework outperforms the recent self-supervised correspondence methods on a range of visual tasks including video object tracking (VOT), video object segmentation (VOS), pose keypoint tracking, etc. It is worth mentioning that our method also surpasses the fully-supervised affinity representation (e.g., ResNet) and performs competitively against the recent fully-supervised algorithms designed for the specific tasks (e.g., VOT and VOS).
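
The frame pair-wise affinity at the heart of both the intra- and inter-video transformations is a row-softmax over feature similarities. A minimal sketch with plain Python lists (the temperature value and function name are our own choices):

```python
import math

def affinity(feats_a, feats_b, temperature=0.07):
    """Row-softmax affinity between the features of two frames:
    A[i][j] = softmax_j( <a_i, b_j> / T ) over L2-normalised vectors.
    Row i says how much query feature i attends to each feature of the
    other frame; such affinities drive the transformations above."""
    def norm(v):
        n = math.sqrt(sum(x * x for x in v)) or 1.0
        return [x / n for x in v]
    A = []
    for a in map(norm, feats_a):
        logits = [sum(x * y for x, y in zip(a, norm(b))) / temperature
                  for b in feats_b]
        mx = max(logits)                       # stabilise the softmax
        exps = [math.exp(l - mx) for l in logits]
        s = sum(exps)
        A.append([e / s for e in exps])
    return A

# Each query feature should attend most strongly to its matching feature.
A = affinity([[1.0, 0.0], [0.0, 1.0]], [[0.9, 0.1], [0.1, 0.9]])
```

Intra-video learning uses such an affinity between frames of one video to reconstruct content across time; the inter-video variant contrasts it against affinities to frames of other videos.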
