
We revisit a natural variant of geometric set cover, called minimum-membership geometric set cover (MMGSC). In this problem, the input consists of a set $S$ of points and a set $\mathcal{R}$ of geometric objects, and the goal is to find a subset $\mathcal{R}^*\subseteq\mathcal{R}$ that covers all points in $S$ such that the \textit{membership} of $S$ with respect to $\mathcal{R}^*$, denoted by $\mathsf{memb}(S,\mathcal{R}^*)$, is minimized, where $\mathsf{memb}(S,\mathcal{R}^*)=\max_{p\in S}|\{R\in\mathcal{R}^*: p\in R\}|$. We achieve the following two main results.

* We give the first polynomial-time constant-approximation algorithm for MMGSC with unit squares. This answers a question left open since the work of Erlebach and van Leeuwen [SODA'08], who gave a constant-approximation algorithm with running time $n^{O(\mathsf{opt})}$, where $\mathsf{opt}$ is the optimum of the problem (i.e., the minimum membership).
* We give the first polynomial-time approximation scheme (PTAS) for MMGSC with halfplanes. Prior to this work, it was not even known whether the problem can be approximated within a factor of $o(\log n)$ in polynomial time, whereas it is well known that the minimum-size set cover problem with halfplanes can be solved in polynomial time.

We also consider a problem closely related to MMGSC, called minimum-ply geometric set cover (MPGSC), in which the goal is to find $\mathcal{R}^*\subseteq\mathcal{R}$ covering $S$ such that the ply of $\mathcal{R}^*$ is minimized, where the ply is defined as the maximum number of objects in $\mathcal{R}^*$ that have a nonempty common intersection. Very recently, Durocher et al. gave the first constant-approximation algorithm for MPGSC with unit squares, which runs in $O(n^{12})$ time. We give a significantly simpler constant-approximation algorithm with near-linear running time.
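To make the two objectives concrete, here is a minimal Python sketch (not taken from the paper) that evaluates the membership and the ply of a candidate cover by axis-aligned unit squares; the representation of points and squares (a square is given by its lower-left corner) is our own assumption.

```python
def covers(sq, p):
    """The closed unit square with lower-left corner sq = (x, y) contains p = (px, py)."""
    x, y = sq
    px, py = p
    return x <= px <= x + 1 and y <= py <= y + 1

def membership(points, squares):
    """memb(S, R*) = max over p in S of the number of chosen squares containing p."""
    return max(sum(covers(sq, p) for sq in squares) for p in points)

def ply(squares):
    """ply(R*) = max number of chosen squares sharing a common point.
    For closed axis-aligned squares, a deepest point can be taken with x equal to the left
    edge of some square and y equal to the bottom edge of some (possibly different) square,
    so a brute-force check over these candidate points suffices for a small sketch."""
    cands = [(x, y) for x, _ in squares for _, y in squares]
    return max(sum(covers(sq, c) for sq in squares) for c in cands)

S = [(0.5, 0.5), (1.5, 0.5)]
R_star = [(0.0, 0.0), (1.0, 0.0)]          # two unit squares, given by lower-left corners
print(membership(S, R_star), ply(R_star))  # -> 1 2: the squares overlap only on the edge x = 1,
                                           #    which contains no input point
```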

Related content

We propose a novel approach that utilizes fuzzification theory to perform feature selection on website content for encryption purposes. Our objective is to identify and select the most relevant features of the website content by harnessing the principles of fuzzy logic. Fuzzification allows us to transform crisp website content into fuzzy representations, enabling a more nuanced analysis of its characteristics. By considering the degree of membership of each feature in different fuzzy categories, we can evaluate its importance and relevance for encryption. This approach enables us to prioritize and focus on the features that exhibit higher membership degrees, indicating their significance in the encryption process. By employing fuzzification-based feature selection, we aim to enhance the effectiveness and efficiency of website content encryption, ultimately improving overall internet security.
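The abstract does not specify the membership functions or the feature set, so the sketch below is only a hypothetical illustration of the general idea: crisp per-feature relevance scores are fuzzified with triangular membership functions, and the features whose degree of membership in a "high relevance" fuzzy set exceeds a threshold are kept. All names, scores, and thresholds here are our own assumptions.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_feature_selection(scores, threshold=0.6):
    """scores: dict mapping feature name -> crisp relevance score in [0, 1].
    Fuzzify each score and keep the features whose membership in the
    'high relevance' fuzzy set exceeds the threshold."""
    selected = {}
    for name, s in scores.items():
        high = triangular(s, 0.5, 1.0, 1.5)   # degree of membership in 'high relevance'
        if high >= threshold:
            selected[name] = high
    return selected

# Hypothetical crisp relevance scores for website-content features.
scores = {"title_entropy": 0.9, "script_ratio": 0.4, "form_count": 0.75}
print(fuzzy_feature_selection(scores))  # keeps only the high-membership features
```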

In this study, we focus on learning Hamiltonian systems, which involves predicting the coordinate (q) and momentum (p) variables generated by a symplectic mapping. Following Chen & Tao (2021), the symplectic mapping is represented by a generating function. To extend the prediction time period, we develop a new learning scheme by splitting the time series (q_i, p_i) into several partitions. We then train a large-step neural network (LSNN) to approximate the generating function between the first partition (i.e., the initial condition) and each of the remaining partitions. This partition approach allows our LSNN to effectively suppress the accumulation of error when predicting the system evolution. We then train the LSNN to learn the motions of the 2:3 resonant Kuiper belt objects over a long time period of 25000 yr. The results show two significant improvements over the neural network constructed in our previous work (Li et al. 2022): (1) the conservation of the Jacobi integral, and (2) highly accurate predictions of the orbital evolution. Overall, we propose that the designed LSNN has the potential to considerably improve predictions of the long-term evolution of more general Hamiltonian systems.
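The following is a schematic Python sketch of the partitioned training scheme (our own simplification, with PyTorch and a toy trajectory standing in for the Kuiper-belt data): the trajectory (q_i, p_i) is split into partitions, and a single network learns the map from the initial partition to partition j, with the partition index supplied as an extra input. The generating-function parameterization of Chen & Tao (2021) used by the actual LSNN is omitted here.

```python
import numpy as np
import torch
import torch.nn as nn

def make_pairs(q, p, num_partitions):
    """Split a trajectory into partitions and pair the initial condition with every
    state in each of the later partitions (the partition index is an extra input)."""
    qp = np.concatenate([q, p], axis=1)          # shape (T, 2 * dof)
    parts = np.array_split(qp, num_partitions)
    x0 = parts[0][0]                             # initial condition from the first partition
    X, Y = [], []
    for j, part in enumerate(parts[1:], start=1):
        for state in part:
            X.append(np.append(x0, j / num_partitions))
            Y.append(state)
    return (torch.tensor(np.array(X), dtype=torch.float32),
            torch.tensor(np.array(Y), dtype=torch.float32))

# Toy harmonic-oscillator trajectory standing in for the resonant-object data.
t = np.linspace(0.0, 50.0, 2000)
q, p = np.cos(t)[:, None], -np.sin(t)[:, None]
X, Y = make_pairs(q, p, num_partitions=10)

net = nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for epoch in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(X), Y)     # large-step prediction error
    loss.backward()
    opt.step()
```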

We consider nonlinear delay differential and renewal equations with infinite delay. We extend the work of Gyllenberg et al., Appl. Math. Comput. (2018) by introducing a unifying abstract framework, and we derive a finite-dimensional approximating system via pseudospectral discretization. For renewal equations, we consider, via integration, a reformulation in a space of absolutely continuous functions, which ensures that point evaluation is well defined. We prove the one-to-one correspondence of equilibria between the original equation and its approximation, and that linearization and discretization commute. Our most important result is a proof of convergence of the characteristic roots of the pseudospectral approximation of the linear(ized) equations, which ensures that the finite-dimensional system correctly reproduces the stability properties of the original linear equation if the dimension of the approximation is large enough. This result is illustrated with several numerical tests, which also demonstrate the effectiveness of the approach for the bifurcation analysis of equilibria of nonlinear equations.
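As a simplified illustration of the pseudospectral idea (our own sketch, for the finite-delay scalar equation $x'(t)=a\,x(t)+b\,x(t-\tau)$ rather than the infinite-delay setting of the paper), the characteristic roots solving $\lambda=a+b\,e^{-\lambda\tau}$ can be approximated by the eigenvalues of a collocation matrix built from a Chebyshev differentiation matrix:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and nodes on [-1, 1] (Trefethen, Spectral Methods)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

def characteristic_roots(a, b, tau, N=20):
    """Approximate the characteristic roots of x'(t) = a x(t) + b x(t - tau) by the
    eigenvalues of the pseudospectral collocation matrix."""
    D, _ = cheb(N)
    D = (2.0 / tau) * D          # rescale d/dtheta from [-1, 1] to [-tau, 0]; node 0 is theta = 0
    M = D.copy()
    M[0, :] = 0.0
    M[0, 0] = a                  # collocation of the delay equation at theta = 0 ...
    M[0, -1] = b                 # ... uses x(0) and x(-tau) (node N is theta = -tau)
    return np.sort_complex(np.linalg.eigvals(M))[::-1]

roots = characteristic_roots(a=-1.0, b=-2.0, tau=1.0)
print(roots[:3])  # rightmost approximated roots; stability <=> all real parts negative
```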

We focus on decentralized stochastic non-convex optimization, where $n$ agents work together to optimize a composite objective function which is the sum of a smooth term and a non-smooth convex term. To solve this problem, we propose two single-timescale algorithms: Prox-DASA and Prox-DASA-GT. These algorithms can find $\epsilon$-stationary points in $\mathcal{O}(n^{-1}\epsilon^{-2})$ iterations using constant batch sizes (i.e., $\mathcal{O}(1)$). Unlike prior work, our algorithms achieve comparable complexity without requiring large batch sizes, more complex per-iteration operations (such as double loops), or stronger assumptions. Our theoretical findings are supported by extensive numerical experiments, which demonstrate the superiority of our algorithms over previous approaches. Our code is available at //github.com/xuxingc/ProxDASA.

Generative diffusion models, including Stable Diffusion and Midjourney, can generate visually appealing, diverse, and high-resolution images for various applications. These models are trained on billions of internet-sourced images, raising significant concerns about the potential unauthorized use of copyright-protected images. In this paper, we examine whether it is possible to determine if a specific image was used in the training set, a problem known in the cybersecurity community and referred to as a membership inference attack. Our focus is on Stable Diffusion, and we address the challenge of designing a fair evaluation framework to answer this membership question. We propose a methodology to establish a fair evaluation setup and apply it to Stable Diffusion, enabling potential extensions to other generative models. Utilizing this evaluation setup, we execute membership attacks (both known and newly introduced). Our research reveals that previously proposed evaluation setups do not provide a full understanding of the effectiveness of membership inference attacks. We conclude that the membership inference attack remains a significant challenge for large diffusion models (often deployed as black-box systems), indicating that related privacy and copyright issues will persist in the foreseeable future.
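For reference, a generic loss-threshold baseline (not the attacks evaluated in the paper) scores each candidate image by the target model's per-image loss and predicts "member" when the loss falls below a calibrated threshold; the helper producing those losses (e.g., a denoising error at a fixed timestep) is hypothetical here, and only the decision rule and its calibration are sketched.

```python
import numpy as np

def loss_threshold_attack(candidate_losses, threshold):
    """Predict membership for each candidate: lower loss => predicted member."""
    return np.asarray(candidate_losses, dtype=float) < threshold

def calibrate_threshold(member_losses, nonmember_losses):
    """Pick the threshold maximizing balanced accuracy on a small labeled calibration set."""
    member_losses = np.asarray(member_losses, dtype=float)
    nonmember_losses = np.asarray(nonmember_losses, dtype=float)
    best_t, best_acc = None, -1.0
    for t in np.unique(np.concatenate([member_losses, nonmember_losses])):
        tpr = float(np.mean(member_losses < t))       # members correctly flagged
        tnr = float(np.mean(nonmember_losses >= t))   # non-members correctly rejected
        acc = 0.5 * (tpr + tnr)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

# Hypothetical calibration losses: members tend to be reconstructed with lower error.
t, acc = calibrate_threshold([0.10, 0.12, 0.15], [0.14, 0.20, 0.25])
print(t, acc, loss_threshold_attack([0.11, 0.22], t))  # -> 0.14 0.833... [ True False]
```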

We propose a label poisoning attack on geometric data sets against $k$-nearest neighbor classification. We provide an algorithm that can compute an $\varepsilon n$-additive approximation of the optimal poisoning in $n\cdot 2^{2^{O(d+k/\varepsilon)}}$ time for a given data set $X \in \mathbb{R}^d$, where $|X| = n$. Our algorithm achieves its objectives through the application of multi-scale random partitions.

The algebraic degree is an important parameter of Boolean functions used in cryptography. When a function in a large number of variables is not given explicitly in algebraic normal form, it might not be feasible to compute its degree. Instead, one can try to estimate the degree using probabilistic tests. We propose a probabilistic test for deciding whether the algebraic degree of a Boolean function $f$ is below a certain value $k$. The test involves picking an affine space of dimension $k$ and testing whether the values of $f$ on that space sum to zero. If $\deg(f)<k$, then $f$ will always pass the test; otherwise it will sometimes pass and sometimes fail the test, depending on which affine space was chosen. The probability of failing the proposed test is closely related to the number of monomials of degree $k$ in a polynomial $g$, averaged over all the polynomials $g$ which are affine equivalent to $f$. We initiate the study of the probability of failing the proposed ``$\deg(f)<k$'' test. We show that in the particular case when the degree of $f$ is actually equal to $k$, this probability lies in the interval $(0.288788, 0.5]$, and therefore a small number of runs of the test is sufficient to give, with very high probability, the correct answer. Exact values of this probability for all the polynomials in 8 variables were computed using the representatives listed by Hou and by Langevin and Leander.
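A direct Python sketch of the test (our own illustration): pick a random affine space $a+\mathrm{span}(b_1,\dots,b_k)$ of dimension $k$ and check whether the values of $f$, XORed over the $2^k$ points of that space, equal zero.

```python
import random
from itertools import product

def is_independent(vectors):
    """Check GF(2) linear independence of bit-mask vectors via an XOR basis."""
    basis = {}                               # highest set bit -> reduced vector
    for v in vectors:
        while v:
            h = v.bit_length() - 1
            if h not in basis:
                basis[h] = v
                break
            v ^= basis[h]
        if v == 0:
            return False
    return True

def random_affine_space(n, k):
    """Random offset a and k linearly independent directions b_1..b_k in GF(2)^n (bit masks)."""
    while True:
        basis = [random.getrandbits(n) for _ in range(k)]
        if is_independent(basis):
            return random.getrandbits(n), basis

def degree_less_than_k_test(f, n, k):
    """One run of the 'deg(f) < k' test: True means the test passed."""
    a, basis = random_affine_space(n, k)
    total = 0
    for coeffs in product([0, 1], repeat=k):
        x = a
        for c, b in zip(coeffs, basis):
            if c:
                x ^= b
        total ^= f(x)                        # XOR f over all 2^k points of the affine space
    return total == 0

# Example: f(x) = x_0 x_1 x_2 on 6 variables has degree 3, so the 'deg(f) < 4' test always
# passes, while the 'deg(f) < 3' test fails on some affine spaces.
f = lambda x: (x & 1) & ((x >> 1) & 1) & ((x >> 2) & 1)
print(all(degree_less_than_k_test(f, 6, 4) for _ in range(50)))  # True
print(all(degree_less_than_k_test(f, 6, 3) for _ in range(50)))  # False with overwhelming probability
```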

Knowledge graph embedding (KGE) is an increasingly popular technique that aims to represent the entities and relations of knowledge graphs in low-dimensional semantic spaces for a wide spectrum of applications such as link prediction, knowledge reasoning and knowledge completion. In this paper, we provide a systematic review of existing KGE techniques based on their representation spaces. In particular, we build a fine-grained classification to categorise the models based on three mathematical perspectives on the representation spaces: (1) the algebraic perspective, (2) the geometric perspective, and (3) the analytical perspective. We introduce rigorous definitions of the fundamental mathematical spaces before diving into KGE models and their mathematical properties. We further discuss different KGE methods across the three categories, and summarise how the advantages of each space serve different embedding needs. By collating the experimental results from downstream tasks, we also explore the advantages of each mathematical space in different scenarios and the reasons behind them. We further state some promising research directions from a representation-space perspective, with which we hope to inspire researchers to design their KGE models, as well as the related applications, with more consideration of the mathematical properties of the space.

Over the past few years, the rapid development of deep learning technologies for computer vision has greatly promoted the performance of medical image segmentation (MedISeg). However, recent MedISeg publications usually focus on presenting their major contributions (e.g., network architectures, training strategies, and loss functions) while unwittingly ignoring some marginal implementation details (also known as "tricks"), leading to a potential problem of unfair experimental comparisons. In this paper, we collect a series of MedISeg tricks for different model implementation phases (i.e., pre-training model, data pre-processing, data augmentation, model implementation, model inference, and result post-processing), and experimentally explore the effectiveness of these tricks on consistent baseline models. Compared to paper-driven surveys that only focus on analysing the advantages and limitations of segmentation models, our work provides a large number of solid experiments and is more technically operable. With extensive experimental results on both representative 2D and 3D medical image datasets, we explicitly clarify the effect of these tricks. Moreover, based on the surveyed tricks, we have also open-sourced a strong MedISeg repository, where each component has the advantage of being plug-and-play. We believe that this milestone work not only completes a comprehensive and complementary survey of the state-of-the-art MedISeg approaches, but also offers a practical guide for addressing future medical image processing challenges, including but not limited to small-dataset learning, class-imbalance learning, multi-modality learning, and domain adaptation. The code has been released at: //github.com/hust-linyi/MedISeg

It is widely believed that modeling the relationships between objects helps in representing, and eventually describing, an image. Nevertheless, there has been little evidence in support of this idea for image description generation. In this paper, we introduce a new design that explores the connections between objects for image captioning under the umbrella of the attention-based encoder-decoder framework. Specifically, we present a Graph Convolutional Networks plus Long Short-Term Memory (dubbed GCN-LSTM) architecture that integrates both semantic and spatial object relationships into the image encoder. Technically, we build graphs over the detected objects in an image based on their spatial and semantic connections. The representation of each region proposed on the objects is then refined by leveraging the graph structure through a GCN. With the learnt region-level features, our GCN-LSTM capitalizes on an LSTM-based captioning framework with an attention mechanism for sentence generation. Extensive experiments are conducted on the COCO image captioning dataset, and superior results are reported when compared to state-of-the-art approaches. More remarkably, GCN-LSTM increases CIDEr-D performance from 120.1% to 128.7% on the COCO testing set.
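A minimal, generic GCN refinement step (our own sketch in PyTorch, not the paper's exact relation-conditioned GCN): given the detected regions' features and an adjacency matrix built from their spatial/semantic connections, each region representation is refined by aggregating its neighbours before being passed to the attention-based LSTM decoder.

```python
import torch
import torch.nn as nn

class RegionGCNLayer(nn.Module):
    """One graph-convolution step that refines region features using the object graph."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, V, A):
        """V: (num_regions, dim) region features; A: (num_regions, num_regions) adjacency."""
        A_hat = A + torch.eye(A.size(0))                 # add self-loops
        deg = A_hat.sum(dim=1, keepdim=True).clamp(min=1)
        return torch.relu(self.proj(A_hat @ V) / deg)    # normalized neighbourhood aggregation

# Example with 36 detected regions of dimension 2048 (Faster R-CNN-style features).
V = torch.randn(36, 2048)
A = (torch.rand(36, 36) > 0.8).float()                   # stand-in for spatial/semantic edges
refined = RegionGCNLayer(2048)(V, A)                     # (36, 2048), input to the LSTM decoder
```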
