
Biedl et al. introduced the minimum ply cover problem in CG 2021, following the seminal work of Erlebach and van Leeuwen in SODA 2008. They showed that determining the minimum ply cover number of a given set of points by a given set of axis-parallel unit squares is NP-hard, and gave a polynomial-time $2$-approximation algorithm for instances in which the minimum ply cover number is bounded by a constant. Durocher et al. recently presented a polynomial-time $(8 + \epsilon)$-approximation algorithm for the general case in which the minimum ply cover number is $\omega(1)$, for every fixed $\epsilon > 0$. They divide the problem into subproblems using a standard grid decomposition, design an involved dynamic programming scheme to solve the subproblem associated with each unit-side grid cell, and then merge the subproblem solutions to obtain the final ply cover. We instead use a horizontal slab decomposition to divide the problem into subproblems. Our algorithm uses a simple greedy heuristic and achieves a $(27+\epsilon)$-approximation for the general problem, for a small constant $\epsilon>0$, while running considerably faster than the algorithm of Durocher et al. We also give a fast $2$-approximation algorithm for the special case in which the input squares are intersected by a horizontal line; the hardness of this special case is still open. Our approach is potentially extendable to minimum ply covering with other geometric objects, such as unit disks and identical rectangles.
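To make the objective concrete, here is a minimal sketch (not from any of the cited papers) that computes the ply of a candidate cover: for closed axis-parallel unit squares, the maximum overlap depth is attained at a point whose $x$-coordinate is the left side of some square and whose $y$-coordinate is the bottom side of some square, so a brute-force scan over such candidate points suffices. The corner-based square representation and the function name are illustrative assumptions.

```python
from itertools import product

def ply(squares):
    """Ply of a set of closed axis-parallel unit squares, each given by its
    lower-left corner: the maximum number of squares sharing a common point."""
    if not squares:
        return 0
    xs = [x for x, _ in squares]           # left-side x-coordinates
    ys = [y for _, y in squares]           # bottom-side y-coordinates
    best = 0
    for px, py in product(xs, ys):         # candidate deepest points
        depth = sum(1 for x, y in squares
                    if x <= px <= x + 1 and y <= py <= y + 1)
        best = max(best, depth)
    return best

# Two overlapping unit squares and one far away: the ply of this cover is 2.
print(ply([(0.0, 0.0), (0.5, 0.5), (10.0, 10.0)]))  # -> 2
```

A minimum ply cover then asks for a subset of the given squares that still covers all input points while minimizing this quantity.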

Related content

We investigate the parameterized complexity of several problems that formalize cluster identification in graphs; in other words, we ask whether a graph contains a large enough and sufficiently connected subgraph. We study three relaxations of CLIQUE: $s$-CLUB and $s$-CLIQUE, in which the relaxation concerns distances within the cluster and within the original graph, respectively, and $\gamma$-COMPLETE SUBGRAPH, in which the relaxation concerns the minimum degree within the cluster. As these three problems are known to be NP-hard, we study their parameterized complexity. We prove that $s$-CLUB and $s$-CLIQUE are NP-hard even when restricted to graphs of degeneracy $\le 3$ whenever $s \ge 3$, and to graphs of degeneracy $\le 2$ whenever $s \ge 5$, a strictly stronger result than W[1]-hardness parameterized by the degeneracy. We also show that these problems are solvable in polynomial time on graphs of degeneracy $1$. Concerning $\gamma$-COMPLETE SUBGRAPH, we prove that it is W[1]-hard parameterized by the degeneracy, which implies W[1]-hardness parameterized by the number of vertices in the $\gamma$-complete subgraph, and also W[1]-hard parameterized by the number of vertices outside the $\gamma$-complete subgraph.
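To illustrate the difference between the two distance-based relaxations, the sketch below checks, for a candidate vertex set $S$, the $s$-clique condition (pairwise distance at most $s$ in the original graph) and the $s$-club condition (diameter at most $s$ in the subgraph induced by $S$). The adjacency-list representation and helper names are illustrative assumptions.

```python
from collections import deque

def bfs_dist(adj, src):
    """Breadth-first-search distances from src in an adjacency-list graph (dict of sets)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def is_s_clique(adj, S, s):
    """Every pair of vertices of S lies within distance s in the *original* graph."""
    return all(bfs_dist(adj, u).get(v, float("inf")) <= s for u in S for v in S)

def is_s_club(adj, S, s):
    """The subgraph *induced* by S has diameter at most s."""
    sub = {u: adj[u] & S for u in S}
    return all(bfs_dist(sub, u).get(v, float("inf")) <= s for u in S for v in S)

# Vertices a and c are joined only through e, which lies outside S = {a, c}:
adj = {"a": {"e"}, "c": {"e"}, "e": {"a", "c"}}
S = {"a", "c"}
print(is_s_clique(adj, S, 2), is_s_club(adj, S, 2))  # True False
```

The example is a set that is a $2$-clique but not a $2$-club, which is exactly the gap between relaxing distances in the original graph and in the cluster itself.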

Over-the-air federated edge learning (Air-FEEL) is a communication-efficient framework for distributed machine learning using training data distributed at edge devices. This framework enables all edge devices to transmit model updates simultaneously over the entire available bandwidth, allowing for over-the-air aggregation. A one-bit digital over-the-air aggregation (OBDA) scheme has recently been proposed, featuring one-bit gradient quantization at the edge devices and majority-voting based decoding at the edge server. However, the low-resolution one-bit gradient quantization slows down model convergence and leads to performance degradation. Moreover, the aggregation errors caused by channel fading in Air-FEEL remain unresolved. To address these issues, we propose error-feedback one-bit broadband digital aggregation (EFOBDA) together with an optimized power control policy. To this end, we first provide a theoretical analysis to evaluate the impact of error feedback on the convergence of federated learning with EFOBDA. The analytical results show that, with an appropriately chosen feedback strength, EFOBDA is comparable to Air-FEEL without quantization, thus enhancing the performance of OBDA. We then introduce a power control policy obtained by maximizing the convergence rate under instantaneous power constraints. The convergence analysis and the optimized power control policy are verified by experiments, which show that the proposed scheme achieves significantly faster convergence and higher test accuracy on image classification tasks than the one-bit quantization scheme without error feedback or optimized power control.
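The device-side mechanism the scheme builds on, one-bit (sign) quantization with an error-feedback memory, can be sketched as follows; the actual EFOBDA update, its scaling, the over-the-air aggregation, and the power control are more involved, so the function below is a generic illustration and its feedback-strength knob is only an assumed analogue of the paper's parameter.

```python
import numpy as np

def error_feedback_sign_step(grad, residual, beta=1.0):
    """One device-side round of sign quantization with error feedback.

    grad     : local gradient vector
    residual : quantization error accumulated in previous rounds
    beta     : feedback strength (illustrative knob)
    Returns the scaled one-bit message and the updated residual.
    """
    corrected = grad + beta * residual        # re-inject past quantization error
    scale = np.mean(np.abs(corrected))        # one magnitude shared by all entries
    message = np.sign(corrected)              # the one-bit payload per entry
    residual = corrected - scale * message    # error kept locally for the next round
    return scale * message, residual

rng = np.random.default_rng(0)
grad = rng.normal(size=6)
update, residual = error_feedback_sign_step(grad, np.zeros(6))
print(update, residual)
```

Carrying the residual forward is what compensates, over many rounds, for the information lost to one-bit quantization.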

In the first part of this article, we study feedback stabilization of a coupled parabolic system using localized interior controls. The system is feedback stabilizable with exponential decay $-\omega<0$ for any $\omega>0$, and a stabilizing control is found in feedback form by solving a suitable algebraic Riccati equation. In the second part, a conforming finite element method is employed to approximate the continuous system by a finite-dimensional discrete system. The approximated system is also (uniformly) feedback stabilizable, with exponential decay $-\omega+\epsilon$ for any $\epsilon>0$, and the feedback control is obtained by solving a discrete algebraic Riccati equation. Error estimates for the stabilized solutions as well as for the stabilizing feedback controls are obtained. We validate the theoretical results through numerical experiments.
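For a finite-dimensional surrogate of the discrete problem, the Riccati-based feedback can be sketched with SciPy: prescribing a decay rate $\omega$ amounts to solving the algebraic Riccati equation for the shifted matrix $A + \omega I$ and using the gain $K = R^{-1}B^{\top}P$. The matrices below are placeholders, not the finite element discretization of the coupled parabolic system studied in the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def stabilizing_feedback(A, B, omega, Q=None, R=None):
    """Gain K such that A - B K has spectrum with real parts below -omega
    (under the usual stabilizability assumptions)."""
    n, m = B.shape
    Q = np.eye(n) if Q is None else Q
    R = np.eye(m) if R is None else R
    P = solve_continuous_are(A + omega * np.eye(n), B, Q, R)  # shifted Riccati equation
    return np.linalg.solve(R, B.T @ P)                        # K = R^{-1} B^T P

A = np.array([[0.5, 1.0], [0.0, -0.2]])    # placeholder unstable system
B = np.array([[0.0], [1.0]])               # localized (single) control input
K = stabilizing_feedback(A, B, omega=1.0)
print(np.linalg.eigvals(A - B @ K))        # all real parts below -1
```

The feedback control is then $u = -Kx$, mirroring the feedback form obtained from the continuous and discrete Riccati equations above.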

A linear arrangement is a mapping $\pi$ from the $n$ vertices of a graph $G$ to $n$ distinct consecutive integers. Linear arrangements can be represented by drawing the vertices along a horizontal line and drawing the edges as semicircles above that line. In this setting, the length of an edge is defined as the absolute value of the difference between the positions of its two vertices in the arrangement, and the cost of an arrangement as the sum of all edge lengths. Here we study two variants of the Maximum Linear Arrangement problem (MaxLA), which consists of finding an arrangement that maximizes the cost. In the planar variant for free trees, vertices have to be arranged so that there are no edge crossings. In the projective variant for rooted trees, arrangements have to be planar and the root of the tree cannot be covered by any edge. In this paper we present algorithms that are linear in time and space for solving planar and projective MaxLA on trees. We also prove several properties of maximum projective and planar arrangements, and show that caterpillar trees maximize planar MaxLA over all trees of a fixed size, thereby generalizing a previous extremal result on trees.
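The two ingredients of the problem, the cost of an arrangement and the no-crossing (planarity) condition, are simple to compute; a minimal sketch is given below, using integer positions and representing an arrangement as a dictionary from vertices to positions.

```python
from itertools import combinations

def cost(arrangement, edges):
    """Sum of edge lengths |pi(u) - pi(v)| under the given arrangement."""
    return sum(abs(arrangement[u] - arrangement[v]) for u, v in edges)

def is_planar(arrangement, edges):
    """No two semicircles cross: two edges cross exactly when their position
    intervals strictly interleave, i.e. a < c < b < d with (a, b) and (c, d)
    the sorted endpoint positions of the two edges."""
    spans = [tuple(sorted((arrangement[u], arrangement[v]))) for u, v in edges]
    for (a, b), (c, d) in combinations(spans, 2):
        if a < c < b < d or c < a < d < b:
            return False
    return True

# A star on 4 vertices (a tiny caterpillar): hub 0 with leaves 1, 2, 3.
edges = [(0, 1), (0, 2), (0, 3)]
pi = {0: 1, 1: 2, 2: 3, 3: 4}   # hub placed at one end of the line
print(cost(pi, edges), is_planar(pi, edges))  # 6, True
```

Planar and projective MaxLA then ask for the arrangement maximizing this cost subject to the stated constraints.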

A class of implicit Milstein-type methods is introduced and analyzed in the present article for stochastic differential equations (SDEs) with non-globally Lipschitz drift and diffusion coefficients. By incorporating a pair of method parameters $\theta, \eta \in [0, 1]$ into both the drift and diffusion parts, the new schemes form a family of drift-diffusion double-implicit methods. Within a general framework, we offer upper mean-square error bounds for the proposed schemes based on error terms that involve only the exact solution process. Such error bounds allow us to analyze the mean-square convergence rates of the schemes without relying on a priori high-order moment estimates of the numerical approximations. Under an additional globally polynomial growth condition, we recover the expected mean-square convergence rate of order one for the considered schemes with $\theta \in [\tfrac12, 1], \eta \in [0, 1]$. Some of the proposed schemes are then applied to solve three SDE models evolving in the positive domain $(0, \infty)$. More specifically, the fully drift-diffusion implicit Milstein method ($\theta = \eta = 1$) is used to approximate the Heston $\tfrac32$-volatility model and the stochastic Lotka-Volterra competition model, while the semi-implicit Milstein method ($\theta =1, \eta = 0$) is used to solve the Ait-Sahalia interest rate model. Thanks to the previously obtained error bounds, we establish the optimal mean-square convergence rate of these positivity-preserving schemes under conditions more relaxed than those in existing results in the literature. Numerical examples are reported to confirm these findings.
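As a concrete illustration of the drift-implicit end of this family, the sketch below performs one step of the semi-implicit ($\theta = 1$, $\eta = 0$) Milstein scheme for a scalar SDE $dX = f(X)\,dt + g(X)\,dW$, solving the implicit drift equation with a bracketing root finder; the double-implicit variants with $\eta > 0$ also treat part of the diffusion implicitly and are not reproduced here, and the toy model used is only a stand-in for the positive-domain models named above.

```python
import numpy as np
from scipy.optimize import brentq

def semi_implicit_milstein_step(x, h, dW, f, g, dg):
    """One step of the theta = 1, eta = 0 Milstein scheme
        X_{n+1} = X_n + f(X_{n+1}) h + g(X_n) dW + 0.5 g(X_n) g'(X_n) (dW**2 - h),
    with the scalar implicit equation solved by bracketing (assumes a root near the predictor)."""
    explicit = x + g(x) * dW + 0.5 * g(x) * dg(x) * (dW**2 - h)
    F = lambda y: y - f(y) * h - explicit
    return brentq(F, explicit - 10.0, explicit + 10.0)

# Toy mean-reverting model with square-root diffusion, just to exercise the step.
f  = lambda x: 1.0 * (0.5 - x)
g  = lambda x: 0.3 * np.sqrt(max(x, 0.0))
dg = lambda x: 0.15 / np.sqrt(max(x, 1e-12))
rng = np.random.default_rng(1)
h = 0.01
print(semi_implicit_milstein_step(0.4, h, rng.normal(scale=np.sqrt(h)), f, g, dg))
```

Iterating such steps over the time interval gives the numerical approximation whose mean-square error the bounds above control.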

An $(n,k,\ell)$ array code has $k$ information coordinates and $r = n-k$ parity coordinates, where each coordinate is a vector in $\mathbb{F}_q^{\ell}$ for some field $\mathbb{F}_q$. An $(n,k,\ell)$ MDS array code has the additional property that any $k$ out of the $n$ coordinates suffice to recover the whole codeword. Dimakis et al. considered the problem of repairing the erasure of a single coordinate and proved a lower bound on the amount of data transmission needed for the repair. A minimum storage regenerating (MSR) array code with repair degree $d$ is an MDS array code that achieves this lower bound for the repair of any single erased coordinate from any $d$ of the $n-1$ remaining coordinates. An MSR code has the optimal access property if the amount of accessed data equals the amount of transmitted data in the repair procedure. The sub-packetization $\ell$ and the field size $q$ are of paramount importance in MSR array code constructions. For optimal-access MSR codes, Balaji et al. proved that $\ell\geq s^{\left\lceil n/s \right\rceil}$, where $s = d-k+1$. Rawat et al. showed that this lower bound is attainable for all admissible values of $d$ when the field size is exponential in $n$. Since then, tremendous effort has been devoted to reducing the field size; however, to date, a reduction to linear field size is available only for $d\in\{k+1,k+2,k+3\}$ and $d=n-1$. In this paper, we construct optimal-access MSR codes with linear field size and the smallest sub-packetization $\ell = s^{\left\lceil n/s \right\rceil}$ for all $d$ between $k+1$ and $n-1$. We also construct another class of MSR codes that are not optimal-access but have even smaller sub-packetization $s^{\left\lceil n/(s+1)\right\rceil}$; this second class also has linear field size and works for all admissible values of $d$.
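For orientation, the parameter relations quoted above can be evaluated directly: given $(n, k, d)$, the repair parameter is $s = d-k+1$, the optimal-access sub-packetization lower bound is $s^{\lceil n/s\rceil}$, and the second construction attains $s^{\lceil n/(s+1)\rceil}$. A small illustrative computation:

```python
from math import ceil

def msr_subpacketization(n, k, d):
    """Sub-packetization figures from the abstract for an (n, k, l) MSR code with repair degree d."""
    s = d - k + 1
    return {
        "s": s,
        "optimal_access_lower_bound": s ** ceil(n / s),   # Balaji et al. bound, attained by the first construction
        "second_construction": s ** ceil(n / (s + 1)),    # smaller, but not optimal-access
    }

print(msr_subpacketization(n=14, k=10, d=12))  # s = 3, sub-packetizations 3**5 = 243 and 3**4 = 81
```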

Large-scale dynamics of the oceans and the atmosphere are governed by the primitive equations (PEs). Due to their nonlinearity and nonlocality, the numerical study of the PEs is generally challenging, and neural networks have been shown to be a promising machine learning tool for tackling this challenge. In this work, we employ physics-informed neural networks (PINNs) to approximate solutions to the PEs and study the resulting error estimates. We first establish higher-order regularity for the global solutions to the PEs with either full viscosity and diffusivity, or with only horizontal viscosity and diffusivity. The result for the latter case is new and is required for the analysis within the PINNs framework. We then prove the existence of two-layer tanh PINNs whose training error can be made arbitrarily small by taking the network width sufficiently large, and show that the error between the true solution and its approximation can be made arbitrarily small provided the training error is small enough and the sample set is large enough. In particular, all the estimates are a priori, and our analysis includes higher-order (in spatial Sobolev norm) error estimates. Numerical results on prototype systems are presented to further illustrate the advantage of using the $H^s$ norm during training.
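For reference, the two-layer tanh ansatz and a schematic form of the PINN training objective referred to above are

$$u_{\theta}(x) \;=\; W_2\,\tanh\!\big(W_1 x + b_1\big) + b_2, \qquad \mathcal{L}(\theta) \;=\; \frac{1}{N_r}\sum_{i=1}^{N_r}\big\|\mathcal{R}[u_{\theta}](x_r^{\,i})\big\|^2 \;+\; \frac{1}{N_b}\sum_{j=1}^{N_b}\big\|u_{\theta}(x_b^{\,j}) - u(x_b^{\,j})\big\|^2,$$

where $\theta = (W_1, b_1, W_2, b_2)$, $\mathcal{R}$ denotes the PDE residual, the first sum runs over interior collocation points, and the second over initial/boundary samples. This is a generic sketch with illustrative notation; the actual loss in the paper is assembled from the primitive equations and their initial and boundary conditions.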

The coresets approach, also called subsampling or subset selection, aims to select a subsample as a surrogate for the observed sample. Such an approach has been used pervasively in large-scale data analysis. Existing coresets methods construct the subsample using a subset of rows from the predictor matrix. Such methods can be significantly inefficient when the predictor matrix is sparse or numerically sparse. To overcome the limitation, we develop a novel element-wise subset selection approach, called core-elements, for large-scale least squares estimation in classical linear regression. We provide a deterministic algorithm to construct the core-elements estimator, only requiring an $O(\mbox{nnz}(\mathbf{X})+rp^2)$ computational cost, where $\mathbf{X}$ is an $n\times p$ predictor matrix, $r$ is the number of elements selected from each column of $\mathbf{X}$, and $\mbox{nnz}(\cdot)$ denotes the number of non-zero elements. Theoretically, we show that the proposed estimator is unbiased and approximately minimizes an upper bound of the estimation variance. We also provide an approximation guarantee by deriving a coresets-like finite sample bound for the proposed estimator. To handle potential outliers in the data, we further combine core-elements with the median-of-means procedure, resulting in an efficient and robust estimator with theoretical consistency guarantees. Numerical studies on various synthetic and open-source datasets demonstrate the proposed method's superior performance compared to mainstream competitors.
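A minimal sketch of the element-wise idea, keeping the $r$ largest-magnitude entries of each column of $\mathbf{X}$ and solving least squares on the resulting sparse surrogate, is given below; the actual core-elements selection rule and estimator in the paper are more refined (they come with unbiasedness and variance guarantees), so this only contrasts element-wise with row-wise subsetting.

```python
import numpy as np

def column_top_r(X, r):
    """Zero out all but the r largest-magnitude entries of each column of X."""
    Xs = np.zeros_like(X)
    for j in range(X.shape[1]):
        keep = np.argsort(-np.abs(X[:, j]))[:r]   # rows of the r largest entries in column j
        Xs[keep, j] = X[keep, j]
    return Xs

def element_wise_ls(X, y, r):
    """Ordinary least squares on the element-wise sparsified predictor matrix."""
    Xs = column_top_r(X, r)
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    return beta

rng = np.random.default_rng(0)
n, p = 1000, 5
X = rng.normal(size=(n, p)) * (rng.random((n, p)) < 0.1)   # numerically sparse design
beta_true = np.arange(1.0, p + 1.0)
y = X @ beta_true + 0.1 * rng.normal(size=n)
print(element_wise_ls(X, y, r=50))   # roughly recovers beta_true from ~50 entries per column
```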

Approximating convex bodies is a fundamental question in geometry and has a wide variety of applications. Consider a convex body $K$ of diameter $\Delta$ in $\mathbb{R}^d$ for fixed $d$. The objective is to minimize the number of vertices (alternatively, the number of facets) of an approximating polytope for a given Hausdorff error $\varepsilon$. It is known from classical results of Dudley (1974) and Bronshteyn and Ivanov (1976) that $\Theta((\Delta/\varepsilon)^{(d-1)/2})$ vertices (alternatively, facets) are both necessary and sufficient. While this bound is tight in the worst case, that of Euclidean balls, it is far from optimal for skinny convex bodies. A natural way to characterize a convex object's skinniness is in terms of its relationship to the Euclidean ball. Given a convex body $K$, define its \emph{volume diameter} $\Delta_d$ to be the diameter of a Euclidean ball of the same volume as $K$, and define its \emph{surface diameter} $\Delta_{d-1}$ analogously for surface area. It follows from generalizations of the isoperimetric inequality that $\Delta \geq \Delta_{d-1} \geq \Delta_d$. Arya, da Fonseca, and Mount (SoCG 2012) demonstrated that the diameter-based bound could be made surface-area sensitive, improving the above bound to $O((\Delta_{d-1}/\varepsilon)^{(d-1)/2})$. In this paper, we strengthen this by proving the existence of an approximation with $O((\Delta_d/\varepsilon)^{(d-1)/2})$ facets.
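Written out, with $\omega_d$ denoting the volume of the unit Euclidean ball in $\mathbb{R}^d$ (so that a ball of radius $\rho$ has volume $\omega_d\rho^d$ and surface area $d\,\omega_d\rho^{d-1}$), the two skinniness parameters defined above are

$$\Delta_d \;=\; 2\left(\frac{\mathrm{vol}(K)}{\omega_d}\right)^{1/d}, \qquad \Delta_{d-1} \;=\; 2\left(\frac{\mathrm{area}(\partial K)}{d\,\omega_d}\right)^{1/(d-1)}.$$

All three quantities $\Delta$, $\Delta_{d-1}$, $\Delta_d$ coincide when $K$ is itself a Euclidean ball, so the new bound matches the classical one in the worst case of a ball and improves on it for skinny bodies.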

This paper focuses on the expected difference in a borrower's repayment when there is a change in the lender's credit decisions. Classical estimators overlook confounding effects, and hence their estimation error can be substantial. We therefore propose an alternative approach to constructing the estimators so that this error is greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the classical and proposed estimators in terms of their power to estimate the causal quantities. The comparison is carried out across a wide range of models, including linear regression models, tree-based models, and neural-network-based models, on simulated datasets that exhibit different levels of causality, degrees of nonlinearity, and distributional properties. Most importantly, we apply our approach to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction in estimation error is substantial when the causal effects are accounted for correctly.
