We propose a technique called Rotate-and-Kill for solving the polygon inclusion and circumscribing problems. By applying this technique, we obtain $O(n)$ time algorithms for computing (1) the maximum area triangle in a given $n$-sided convex polygon $P$, (2) the minimum area triangle enclosing $P$, (3) the minimum area triangle enclosing $P$ touching edge-to-edge, i.e., the minimum area triangle that is the intersection of three of the $n$ half-planes defining $P$, and (4) the minimum perimeter triangle enclosing $P$ touching edge-to-edge. Our algorithm for computing the maximum area triangle is simpler than the alternatives given in [Chandran and Mount, IJCGA'92] and [Kallus, arXiv'17]. Our algorithms for computing the minimum area or perimeter triangle enclosing $P$ touching edge-to-edge improve upon the $O(n\log n)$ and $O(n\log^2 n)$ time algorithms given in [Boyce \emph{et al.}, STOC'82], [Aggarwal \emph{et al.}, Algorithmica'87], [Aggarwal and Park, FOCS'88], [Aggarwal \emph{et al.}, DCG'94], and [Schieber, SODA'95].
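
For orientation, here is a hedged brute-force baseline for problem (1): an $O(n^3)$ check of all vertex triples, using the standard fact that a maximum area triangle inscribed in a convex polygon can be assumed to have its vertices at polygon vertices. This is only a correctness reference, not the paper's $O(n)$ Rotate-and-Kill algorithm; the function names are illustrative.

```python
from itertools import combinations

def triangle_area(a, b, c):
    # Absolute triangle area from the cross product of (b - a) and (c - a).
    return abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])) / 2.0

def max_area_triangle_bruteforce(P):
    # P: list of (x, y) vertices of a convex polygon.
    # O(n^3) enumeration of all vertex triples, versus the paper's O(n).
    return max(combinations(P, 3), key=lambda t: triangle_area(*t))
```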

We present a numerical scheme for the solution of the initial-value problem for the ``bad'' Boussinesq equation. The accuracy of the scheme is tested by comparison with exact soliton solutions as well as with recently obtained asymptotic formulas for the solution.
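
For reference, in one common normalization (conventions vary across the literature) the ``bad'' Boussinesq equation reads
$$u_{tt} = u_{xx} + u_{xxxx} + (u^2)_{xx},$$
where the sign of the fourth-order term makes the linearized initial-value problem ill-posed at high wavenumbers, which is the origin of the ``bad'' label.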

In this paper, we propose novel proper orthogonal decomposition (POD)-based model reduction methods that effectively address the issue of inverse crime in solving parabolic inverse problems. Both inverse initial value problems and inverse source problems are studied. By leveraging the inherent low-dimensional structures present in the data, our approach reduces the complexity of the forward model without compromising the accuracy of the inverse problem solution. In addition, we provide a convergence analysis of the proposed methods for solving parabolic inverse problems. Through extensive experimentation and comparative analysis, we demonstrate the effectiveness of our method in overcoming inverse crime and achieving improved inverse problem solutions. The proposed POD model reduction method offers a promising direction for improving the reliability and applicability of inverse problem-solving techniques in various domains.
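
As a hedged sketch of the generic POD ingredient only (the paper's reduction of the parabolic forward model and the inverse solve are not reproduced here; the energy threshold is an illustrative parameter), a reduced basis can be extracted from forward-solver snapshots via an SVD:

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    # snapshots: (n_dof, n_snapshots) array of forward-model states.
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cumulative = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cumulative, energy)) + 1  # smallest rank capturing `energy`
    return U[:, :r]  # reduced basis: project the forward model onto these r modes
```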

We study interpolation inequalities between H\"older Integral Probability Metrics (IPMs) in the case where the measures have densities on closed submanifolds. Precisely, it is shown that if two probability measures $\mu$ and $\mu^\star$ have $\beta$-smooth densities with respect to the volume measures of submanifolds $\mathcal{M}$ and $\mathcal{M}^\star$ respectively, then the H\"older IPMs $d_{\mathcal{H}^\gamma_1}$ of smoothness $\gamma\geq 1$ and $d_{\mathcal{H}^\eta_1}$ of smoothness $\eta>\gamma$ satisfy $d_{\mathcal{H}_1^{\gamma}}(\mu,\mu^\star)\lesssim d_{\mathcal{H}_1^{\eta}}(\mu,\mu^\star)^\frac{\beta+\gamma}{\beta+\eta}$, up to logarithmic factors. We provide an application of this result to high-dimensional inference. These functional inequalities turn out to be a key tool for density estimation on an unknown submanifold. In particular, they allow one to build the first estimator attaining optimal rates of estimation for all distances $d_{\mathcal{H}_1^\gamma}$, $\gamma \in [1,\infty)$, simultaneously.
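
For readers unfamiliar with the notation, the integral probability metric generated by a function class $\mathcal{H}$ is
$$d_{\mathcal{H}}(\mu,\nu) = \sup_{f\in\mathcal{H}}\left|\int f\,d\mu - \int f\,d\nu\right|,$$
so that $d_{\mathcal{H}_1^{\gamma}}$ above corresponds to taking $\mathcal{H}$ to be the unit ball of the $\gamma$-H\"older class, as the subscript $1$ suggests.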

Symplectic integrators are widely used numerical integrators for Hamiltonian mechanics that preserve the Hamiltonian structure (symplecticity) of the system. Although a symplectic integrator does not conserve the energy of the system, it is well known that there exists a conserved modified Hamiltonian, called the shadow Hamiltonian. For Nambu mechanics, a generalization of Hamiltonian mechanics, structure-preserving integrators can be constructed by the same procedure used to construct symplectic integrators. For such structure-preserving integrators, however, the existence of shadow Hamiltonians is nontrivial: Nambu mechanics is driven by multiple Hamiltonians, and it is not obvious whether the time evolution generated by the integrator can be cast as a Nambu mechanical time evolution driven by multiple shadow Hamiltonians. In this paper we present a general procedure for calculating the shadow Hamiltonians of structure-preserving integrators for Nambu mechanics, and we give an example in which the shadow Hamiltonians exist. This is the first attempt to determine the concrete forms of the shadow Hamiltonians for a Nambu mechanical system. We show that the fundamental identity, which corresponds to the Jacobi identity in Hamiltonian mechanics, plays an important role when the shadow Hamiltonians are calculated via the Baker-Campbell-Hausdorff formula. It turns out that the resulting shadow Hamiltonians are not uniquely determined, their forms depending on how the fundamental identities are applied. This is not a technical artifact: the exact shadow Hamiltonians, obtained independently, exhibit the same indefiniteness.
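
As a point of reference, this is the standard Hamiltonian-mechanics version of the computation (the paper's Nambu-mechanical analogue is more subtle). For a splitting integrator $e^{\tau\hat A}e^{\tau\hat B}$ whose Liouville operators $\hat A,\hat B$ are generated by the two parts of $H = H_A + H_B$ via $\hat A f = \{f, H_A\}$, the Baker-Campbell-Hausdorff formula gives
$$e^{\tau\hat A}e^{\tau\hat B} = \exp\!\Big(\tau(\hat A+\hat B) + \tfrac{\tau^2}{2}[\hat A,\hat B] + O(\tau^3)\Big),$$
and translating the commutator back into a Poisson bracket yields the shadow Hamiltonian $\tilde H = H_A + H_B - \tfrac{\tau}{2}\{H_A,H_B\} + O(\tau^2)$. The point of the paper is that the analogous translation step in Nambu mechanics hinges on the fundamental identity rather than the Jacobi identity.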

Multivariate Cryptography is one of the main candidates for Post-quantum Cryptography. Multivariate schemes are usually constructed by applying two secret invertible affine transformations $\mathcal S,\mathcal T$ to a set of multivariate polynomials $\mathcal{F}$ (often quadratic). The secret polynomials $\mathcal{F}$ possess a trapdoor that allows the legitimate user to find a solution of the corresponding system, while the public polynomials $\mathcal G=\mathcal S\circ\mathcal F\circ\mathcal T$ look like random polynomials. The polynomials $\mathcal G$ and $\mathcal F$ are said to be affine equivalent. In this article, we present a more general way of constructing a multivariate scheme by considering CCZ equivalence, which has been introduced and studied in the context of vectorial Boolean functions.
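
A hedged toy illustration of the classical affine-equivalence construction only (not the paper's CCZ variant): the field size, names, and the trivial central map below are illustrative, and no real trapdoor is modeled.

```python
import numpy as np

p = 7  # toy prime field GF(7); real schemes use cryptographic parameters

def affine(M, c, x):
    # Invertible affine transformation x -> M x + c over GF(p).
    return (M @ x + c) % p

def central_map_F(x):
    # Placeholder quadratic central map (coordinate-wise squaring).
    # In a real scheme, F hides a trapdoor that makes it easy to invert.
    return (x * x) % p

def public_map_G(S, cS, T, cT, x):
    # Public-key evaluation G(x) = (S o F o T)(x); S and T stay secret.
    return affine(S, cS, central_map_F(affine(T, cT, x)))
```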

We consider the nonparametric regression problem when the covariates are located on an unknown smooth compact submanifold of a Euclidean space. Defining a random geometric graph structure over the covariates, we analyze the asymptotic frequentist behaviour of the posterior distribution arising from Bayesian priors designed through random basis expansions in the graph Laplacian eigenbasis. Under H\"older smoothness assumptions on the regression function and the density of the covariates over the submanifold, we prove that the posterior contraction rates of such methods are minimax optimal (up to logarithmic factors) for any positive smoothness index.
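
A minimal sketch of the graph Laplacian eigenbasis ingredient, under simplifying assumptions (an unweighted $\varepsilon$-neighbourhood graph and an unnormalized Laplacian; the paper's graph construction and prior may differ, and `eps`, `k` are illustrative parameters):

```python
import numpy as np
from scipy.spatial.distance import cdist

def graph_laplacian_eigenbasis(X, eps, k):
    # X: (n, d) covariate matrix sampled from the unknown submanifold.
    W = (cdist(X, X) < eps).astype(float)  # epsilon-neighbourhood adjacency
    np.fill_diagonal(W, 0.0)               # no self-loops
    L = np.diag(W.sum(axis=1)) - W         # unnormalized graph Laplacian
    _, vecs = np.linalg.eigh(L)            # eigenvalues in ascending order
    return vecs[:, :k]                     # low-frequency basis for the regression function
```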

We study three systems of equations, together with a way to count the number of their solutions. One of the results was used in the recent computation of D(9), the ninth Dedekind number; the others have the potential to speed up existing techniques in the future.

This study explores the impact of adversarial perturbations on Convolutional Neural Networks (CNNs) with the aim of enhancing the understanding of their underlying mechanisms. Despite the numerous defense methods proposed in the literature, the phenomenon remains incompletely understood. Instead of treating the entire model as vulnerable, we propose that specific feature maps learned during training contribute to the overall vulnerability. To investigate how the hidden representations learned by a CNN affect its vulnerability, we introduce the Adversarial Intervention framework. Experiments were conducted on models trained on three well-known computer vision datasets, subjecting them to attacks of different natures. We focus on the effects that adversarial perturbations of a model's initial layer have on its overall behavior. The empirical results reveal compelling insights: a) perturbing selected channel combinations in shallow layers causes significant disruptions; b) the channel combinations most responsible for the disruptions are common across different types of attacks; c) despite these shared vulnerable channel combinations, different attacks affect the hidden representations with varying magnitudes; d) there is a positive correlation between a kernel's magnitude and its vulnerability. In conclusion, this work introduces a novel framework for studying the vulnerability of a CNN to adversarial perturbations, revealing insights that contribute to a deeper understanding of the phenomenon. The identified properties pave the way for the development of efficient ad-hoc defense mechanisms in future applications.
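
A minimal sketch of one way such a channel-level intervention can be instrumented in PyTorch; the paper's Adversarial Intervention framework is not reproduced here, so the hook below is only an assumed mechanization of "perturb selected channels of an early layer", and `model.conv1` is a hypothetical layer name.

```python
import torch

def channel_perturbation_hook(channels, noise_scale=0.5):
    # Returns a forward hook that adds Gaussian noise only to the
    # selected feature-map channels of a layer's output.
    def hook(module, inputs, output):
        out = output.clone()
        out[:, channels] += noise_scale * torch.randn_like(out[:, channels])
        return out  # returning a tensor replaces the layer's output
    return hook

# Hypothetical usage on a model whose first layer is `model.conv1`:
# handle = model.conv1.register_forward_hook(channel_perturbation_hook([3, 7]))
# ... run inference, measure the disruption ...
# handle.remove()
```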

We consider the application of the generalized Convolution Quadrature (gCQ) to approximate the solution of an important class of sectorial problems. The gCQ is a generalization of Lubich's Convolution Quadrature (CQ) that allows for variable time steps. The available stability and convergence theory for the gCQ requires unrealistically strong regularity assumptions on the data, which fail in many applications of interest, such as the approximation of subdiffusion equations. It is well known that for insufficiently smooth data the original CQ, with uniform steps, suffers an order reduction close to the singularity. We generalize the analysis of the gCQ to data satisfying realistic regularity assumptions and provide sufficient conditions for stability and convergence on arbitrary sequences of time points. We consider the particular case of graded meshes and show how to choose them optimally, according to the behaviour of the data. An important advantage of the gCQ method is that it allows for a fast, memory-reduced implementation. We describe how the fast and oblivious gCQ can be implemented and illustrate our theoretical results with several numerical experiments.
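
For concreteness, a standard graded mesh of the kind analyzed above; the optimal grading exponent depends on the regularity of the data and is the subject of the paper, so `r` below is a free parameter, not the paper's prescription.

```python
import numpy as np

def graded_mesh(T, N, r):
    # Time points t_n = T * (n/N)**r for n = 0, ..., N.
    # r = 1 gives uniform steps; r > 1 clusters points near t = 0,
    # where subdiffusion-type data and solutions are typically singular.
    return T * (np.arange(N + 1) / N) ** r
```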

The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. This lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inference conditioned on explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
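
For reference, Shepard's universal law of generalization states that the probability of generalizing from a stimulus $x$ to a stimulus $y$ decays exponentially with their distance in an internal similarity space,
$$g(x,y) = \exp(-\lambda\, d(x,y)),$$
where $d$ is a metric on the psychological space and $\lambda>0$ a scaling constant; how this quantity enters the inference model is specific to the paper.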
