
In this paper we study the problem of maximizing the distance to a given point $C_0$ over a polytope $\mathcal{P}$. Assuming that the polytope is circumscribed by a known ball, we construct an intersection of balls which preserves the vertices of the polytope lying on the boundary of this ball, and we show that this intersection of balls approximates the polytope arbitrarily well. We then use known results on maximizing the distance to a given point over an intersection of balls to create a new polytope which preserves the maximizers of the original problem. A new intersection of balls is then obtained in the same fashion, and we conjecture that after a finite number of such iterations we arrive at an intersection of balls over which the distance to the given point can be maximized directly. The distance obtained in this way is shown to be a nontrivial upper bound on the original distance. We test the approach by maximizing the distance to a random point over the unit hypercube in dimensions up to $n = 100$, and we also present several detailed two-dimensional examples.
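
As a point of reference for the hypercube experiments mentioned above, the exact maximum over the unit hypercube has a closed form: the maximum of the (convex) distance function is attained at a vertex, chosen coordinate-wise. The following sketch (in Python with NumPy, illustrative and not taken from the paper) computes this baseline, which any upper bound produced by the ball-intersection scheme must dominate.

```python
import numpy as np

def max_distance_over_unit_hypercube(c):
    """Exact maximizer of ||x - c||_2 over the unit hypercube [0,1]^n.

    The maximum of a convex function over a polytope is attained at a vertex,
    and for a box the problem separates per coordinate: each coordinate of the
    maximizer is whichever of {0, 1} lies farther from the corresponding
    coordinate of c.
    """
    c = np.asarray(c, dtype=float)
    x = np.where(np.abs(1.0 - c) >= np.abs(c), 1.0, 0.0)
    return x, np.linalg.norm(x - c)

rng = np.random.default_rng(0)
c0 = rng.uniform(-1.0, 2.0, size=100)     # random point, dimension n = 100
x_star, d_star = max_distance_over_unit_hypercube(c0)
print(d_star)   # any valid upper bound from the ball-intersection scheme must be >= this value
```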

Related content

Given a graph $G=(V,E)$ and an integer $k\in \mathbb{N}$, we study {\sc 2-Eigenvalue Vertex Deletion} (2-EVD), where the goal is to remove at most $k$ vertices such that the adjacency matrix of the resulting graph has at most 2 eigenvalues. It is known that the adjacency matrix of a graph has at most 2 eigenvalues if and only if the graph is a collection of equal-sized cliques. So {\sc 2-Eigenvalue Vertex Deletion} amounts to removing a set of at most $k$ vertices such that the resulting graph is a collection of equal-sized cliques. The {\sc 2-Eigenvalue Edge Editing} (2-EEE), {\sc 2-Eigenvalue Edge Deletion} (2-EED) and {\sc 2-Eigenvalue Edge Addition} (2-EEA) problems are defined analogously. We provide a kernel of size $\mathcal{O}(k^{3})$ for {\sc $2$-EVD}. For the problems {\sc $2$-EEE} and {\sc $2$-EED}, we provide kernels of size $\mathcal{O}(k^{2})$. Finally, we provide a linear kernel of size $6k$ for {\sc $2$-EEA}. We thereby resolve three open questions listed by Misra et al. (ISAAC 2023) concerning the complexity of these problems parameterized by the solution size.
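
For intuition on the spectral characterization used above, a disjoint union of equal-sized cliques $K_p$ has adjacency eigenvalues $p-1$ and $-1$ only. A minimal NumPy check (illustrative, not part of the paper):

```python
import numpy as np

def cluster_graph_adjacency(num_cliques, clique_size):
    """Adjacency matrix of a disjoint union of equal-sized cliques K_p."""
    n = num_cliques * clique_size
    A = np.zeros((n, n))
    block = np.ones((clique_size, clique_size)) - np.eye(clique_size)
    for i in range(num_cliques):
        s = i * clique_size
        A[s:s + clique_size, s:s + clique_size] = block
    return A

A = cluster_graph_adjacency(num_cliques=3, clique_size=4)
eigs = np.linalg.eigvalsh(A)
print(np.unique(np.round(eigs, 8)))   # [-1. 3.]: exactly two distinct eigenvalues, p-1 and -1
```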

While the success of edge and fog computing has grown with the proliferation of Internet of Things (IoT) solutions, this novel computing paradigm, which moves compute resources closer to the source of data and services, must address many challenges, such as reducing the communication overhead to and from datacenters, the latency to compute and receive results, and the energy consumption at mobile and IoT devices. Fog-to-fog (f2f) cooperation has recently been proposed to increase the computation capacity at the network edge through cooperation across multiple stakeholders. In this paper we adopt an analytical approach to studying the f2f cooperation paradigm. We highlight the benefits of this new paradigm in comparison with traditional three-tier fog computing paradigms. We use a Continuous Time Markov Chain (CTMC) model for the $N$ cooperating f2f nodes and cast cooperation as an optimization problem, which we solve using the proposed model.
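
As a toy illustration of the CTMC machinery (not the paper's $N$-node cooperation model; all rates and the buffer size below are hypothetical), one can solve $\pi Q = 0$ for a single fog node modeled as a finite-buffer birth-death chain:

```python
import numpy as np

def stationary_distribution(Q):
    """Stationary distribution pi of a CTMC generator Q (pi Q = 0, sum(pi) = 1)."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])   # append the normalization constraint
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Toy birth-death chain: tasks arrive at rate lam, are served locally at rate mu,
# buffer capacity K; states count the number of queued tasks (hypothetical parameters).
lam, mu, K = 4.0, 5.0, 10
Q = np.zeros((K + 1, K + 1))
for i in range(K + 1):
    if i < K:
        Q[i, i + 1] = lam
    if i > 0:
        Q[i, i - 1] = mu
    Q[i, i] = -Q[i].sum()

pi = stationary_distribution(Q)
print("blocking probability:", pi[-1])   # fraction of time the buffer is full
```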

We consider the constrained sampling problem where the goal is to sample from a target distribution $\pi(x)\propto e^{-f(x)}$ when $x$ is constrained to lie on a convex body $\mathcal{C}$. Motivated by penalty methods from continuous optimization, we propose penalized Langevin Dynamics (PLD) and penalized underdamped Langevin Monte Carlo (PULMC) methods that convert the constrained sampling problem into an unconstrained sampling problem by introducing a penalty function for constraint violations. When $f$ is smooth and gradients are available, we get $\tilde{\mathcal{O}}(d/\varepsilon^{10})$ iteration complexity for PLD to sample the target up to an $\varepsilon$-error where the error is measured in the TV distance and $\tilde{\mathcal{O}}(\cdot)$ hides logarithmic factors. For PULMC, we improve the result to $\tilde{\mathcal{O}}(\sqrt{d}/\varepsilon^{7})$ when the Hessian of $f$ is Lipschitz and the boundary of $\mathcal{C}$ is sufficiently smooth. To our knowledge, these are the first convergence results for underdamped Langevin Monte Carlo methods in the constrained sampling setting that handle non-convex $f$ and provide guarantees with the best dimension dependency among existing methods with deterministic gradient. If unbiased stochastic estimates of the gradient of $f$ are available, we propose PSGLD and PSGULMC methods that can handle stochastic gradients and are scalable to large datasets without requiring Metropolis-Hastings correction steps. For PSGLD and PSGULMC, when $f$ is strongly convex and smooth, we obtain $\tilde{\mathcal{O}}(d/\varepsilon^{18})$ and $\tilde{\mathcal{O}}(d\sqrt{d}/\varepsilon^{39})$ iteration complexity in the $W_2$ distance. When $f$ is smooth and can be non-convex, we provide finite-time performance bounds and iteration complexity results. Finally, we illustrate the performance on Bayesian LASSO regression and Bayesian constrained deep learning problems.
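
A minimal sketch of the penalized Langevin idea, assuming a quadratic penalty on the distance to the constraint set (taken here to be the unit ball) and a simple Gaussian target; the step size, penalty parameter, and the penalty itself are illustrative choices, not the paper's exact scheme or guarantees:

```python
import numpy as np

def pld_sample(grad_f, n_steps, eta, delta, dim, rng):
    """Unadjusted Langevin on the penalized potential f(x) + dist(x, C)^2 / (2*delta),
    with C the unit Euclidean ball for illustration."""
    x = np.zeros(dim)
    samples = []
    for _ in range(n_steps):
        r = np.linalg.norm(x)
        # gradient of the quadratic distance penalty to the unit ball
        grad_pen = (max(r - 1.0, 0.0) / delta) * (x / r) if r > 0 else np.zeros(dim)
        x = x - eta * (grad_f(x) + grad_pen) + np.sqrt(2 * eta) * rng.standard_normal(dim)
        samples.append(x.copy())
    return np.array(samples)

rng = np.random.default_rng(1)
grad_f = lambda x: x                      # f(x) = ||x||^2 / 2, a smooth (Gaussian) target
xs = pld_sample(grad_f, n_steps=20000, eta=1e-3, delta=1e-2, dim=2, rng=rng)
print(np.mean(np.linalg.norm(xs[5000:], axis=1) <= 1.0))   # mass inside the constraint set
```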

If $A$ and $B$ are sets such that $A \subset B$, generalisation may be understood as the inference from $A$ of a hypothesis sufficient to construct $B$. One might infer any number of hypotheses from $A$, yet only some of those may generalise to $B$. How can one know which are likely to generalise? One strategy is to choose the shortest, equating the ability to compress information with the ability to generalise (a proxy for intelligence). We examine this in the context of a mathematical formalism of enactive cognition. We show that compression is neither necessary nor sufficient to maximise performance (measured in terms of the probability of a hypothesis generalising). We formulate a proxy unrelated to length or simplicity, called weakness. We show that if tasks are uniformly distributed, then there is no choice of proxy that performs at least as well as weakness maximisation in all tasks while performing strictly better in at least one. In experiments comparing maximum weakness and minimum description length in the context of binary arithmetic, the former generalised at between $1.1$ and $5$ times the rate of the latter. We argue this demonstrates that weakness is a far better proxy, and explains why DeepMind's Apperception Engine is able to generalise effectively.
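
A toy Monte Carlo illustration of the weakness argument (a deliberate simplification, not the paper's enactive-cognition formalism): if tasks $B$ with $A \subseteq B$ are drawn uniformly, a hypothesis with a larger extension contains a random $B$ more often than a stronger (smaller) one.

```python
import random

# Statements form a finite universe U, the data A is a small subset, a task B is a
# uniformly random set with A ⊆ B ⊆ U, and a hypothesis H "generalises" to B if B ⊆ H.
random.seed(0)
U = set(range(12))
A = {0, 1, 2}
extra = sorted(U - A)

def random_task():
    # uniform over all B with A ⊆ B ⊆ U
    return A | {s for s in extra if random.random() < 0.5}

weak_H = U                              # weakest consistent hypothesis: largest extension
strong_H = A | set(extra[:3])           # a stronger (smaller) consistent hypothesis

trials = 100_000
hits_weak = sum(random_task() <= weak_H for _ in range(trials))
hits_strong = sum(random_task() <= strong_H for _ in range(trials))
print(hits_weak / trials, hits_strong / trials)   # the weaker hypothesis generalises more often
```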

We devise a polynomial-time algorithm for partitioning a simple polygon $P$ into a minimum number of star-shaped polygons. The question of whether such an algorithm exists has been open for more than four decades [Avis and Toussaint, Pattern Recognit., 1981] and it has been repeated frequently, for example in O'Rourke's famous book [Art Gallery Theorems and Algorithms, 1987]. In addition to its strong theoretical motivation, the problem is also motivated by practical domains such as CNC pocket milling, motion planning, and shape parameterization. The only previously known algorithm for a non-trivial special case is for $P$ being both monotone and rectilinear [Liu and Ntafos, Algorithmica, 1991]. For general polygons, an algorithm was only known for the restricted version in which Steiner points are disallowed [Keil, SIAM J. Comput., 1985], meaning that each corner of a piece in the partition must also be a corner of $P$. Interestingly, the solution size for the restricted version may be linear for instances where the unrestricted solution has constant size. The covering variant in which the pieces are star-shaped but allowed to overlap--known as the Art Gallery Problem--was recently shown to be $\exists\mathbb R$-complete and is thus likely not in NP [Abrahamsen, Adamaszek and Miltzow, STOC 2018 & J. ACM 2022]; this is in stark contrast to our result. Arguably the most related work to ours is the polynomial-time algorithm to partition a simple polygon into a minimum number of convex pieces by Chazelle and Dobkin~[STOC, 1979 & Comp. Geom., 1985].
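
A classical building block related to this problem is testing whether a polygon is star-shaped at all: its kernel is the intersection of the left half-planes of its directed edges (for a counter-clockwise polygon), and the polygon is star-shaped iff the kernel is non-empty. The sketch below computes the kernel by successive half-plane clipping; it is standard background for illustration, not the paper's partition algorithm.

```python
def clip_halfplane(poly, a, b):
    """Clip a convex region (list of CCW vertices) to the half-plane left of segment a->b."""
    inside = lambda p: (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= -1e-12
    def intersect(p, q):
        # intersection of line p-q with line a-b
        x1, y1, x2, y2 = p[0], p[1], q[0], q[1]
        x3, y3, x4, y4 = a[0], a[1], b[0], b[1]
        den = (x1-x2)*(y3-y4) - (y1-y2)*(x3-x4)
        t = ((x1-x3)*(y3-y4) - (y1-y3)*(x3-x4)) / den
        return (x1 + t*(x2-x1), y1 + t*(y2-y1))
    out, n = [], len(poly)
    for i in range(n):
        p, q = poly[i], poly[(i+1) % n]
        if inside(p):
            out.append(p)
            if not inside(q):
                out.append(intersect(p, q))
        elif inside(q):
            out.append(intersect(p, q))
    return out

def kernel(polygon):
    """Kernel of a simple polygon given as CCW vertices: intersection of the left
    half-planes of its directed edges. Empty kernel => not star-shaped."""
    xs = [p[0] for p in polygon]; ys = [p[1] for p in polygon]
    m = max(max(xs) - min(xs), max(ys) - min(ys)) + 1.0
    region = [(min(xs)-m, min(ys)-m), (max(xs)+m, min(ys)-m),
              (max(xs)+m, max(ys)+m), (min(xs)-m, max(ys)+m)]
    n = len(polygon)
    for i in range(n):
        region = clip_halfplane(region, polygon[i], polygon[(i+1) % n])
        if not region:
            return []
    return region

# An L-shaped (non-convex but star-shaped) polygon, vertices in CCW order
L_shape = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]
print(kernel(L_shape))   # non-empty => the polygon is star-shaped, so one piece suffices
```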

Let $\Omega = [0,1]^d$ be the unit cube in $\mathbb{R}^d$. We study the problem of how efficiently, in terms of the number of parameters, deep neural networks with the ReLU activation function can approximate functions in the Sobolev spaces $W^s(L_q(\Omega))$ and Besov spaces $B^s_r(L_q(\Omega))$, with error measured in the $L_p(\Omega)$ norm. This problem is important when studying the application of neural networks in a variety of fields, including scientific computing and signal processing, and has previously been solved only when $p=q=\infty$. Our contribution is to provide a complete solution for all $1\leq p,q\leq \infty$ and $s > 0$ for which the corresponding Sobolev or Besov space compactly embeds into $L_p$. The key technical tool is a novel bit-extraction technique which gives an optimal encoding of sparse vectors. This enables us to obtain sharp upper bounds in the non-linear regime where $p > q$. We also provide a novel method for deriving $L_p$-approximation lower bounds based upon VC-dimension when $p < \infty$. Our results show that very deep ReLU networks significantly outperform classical methods of approximation in terms of the number of parameters, but that this comes at the cost of parameters which are not encodable.
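
To make the quantity being studied concrete, the sketch below measures sup-norm error against parameter count for a one-hidden-layer ReLU network that exactly represents a piecewise-linear interpolant in one dimension; this is a classical shallow baseline for illustration only, not the deep constructions or bit-extraction technique of the paper.

```python
import numpy as np

def relu_interpolant(f, knots):
    """Weights of a one-hidden-layer ReLU network that equals the piecewise-linear
    interpolant of f at the given knots (valid on [knots[0], knots[-1]])."""
    y = f(knots)
    slopes = np.diff(y) / np.diff(knots)
    coeffs = np.concatenate(([slopes[0]], np.diff(slopes)))   # one ReLU unit per knot except the last
    def g(x):
        return y[0] + np.maximum(x[:, None] - knots[:-1][None, :], 0.0) @ coeffs
    n_params = 2 * len(coeffs) + 1     # (weight, bias) per hidden unit + output bias
    return g, n_params

f = lambda x: np.sin(2 * np.pi * x)    # a smooth target on [0, 1]
xg = np.linspace(0.0, 1.0, 20001)
for n in (8, 16, 32, 64, 128):
    g, n_params = relu_interpolant(f, np.linspace(0.0, 1.0, n + 1))
    err = np.max(np.abs(f(xg) - g(xg)))
    print(n_params, err)               # sup-norm error decays roughly like n_params^{-2} here
```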

$\bf{Purpose}$: To assess whether Growth & Remodeling (G&R) theory could explain staphyloma formation from a local scleral weakening. $\bf{Methods}$: A finite element model of a healthy eye was reconstructed, including the following connective tissues: the lamina cribrosa, the peripapillary sclera, and the peripheral sclera. The scleral shell was modelled as a constrained mixture, consisting of an isotropic ground matrix and two collagen fiber families (circumferential and meridional). The homogenized constrained mixture model was employed to simulate the adaptation of the sclera to alterations in its biomechanical environment over a duration of 13.7 years. G&R processes were triggered by reducing the shear stiffness of the ground matrix in the peripapillary sclera and lamina cribrosa by 85%. Three distinct G&R scenarios were investigated: (1) low mass turnover rate in combination with transmural volumetric growth; (2) high mass turnover rate in combination with transmural volumetric growth; and (3) high mass turnover rate in combination with mass density growth. $\bf{Results}$: In scenario 1, we observed a significant outpouching of the posterior pole, closely resembling the shape of a Type-III staphyloma. Additionally, we found a notable change in scleral curvature and a thinning of the peripapillary sclera by 84%. In contrast, scenarios 2 and 3 exhibited less drastic deformations, with stable posterior staphylomas after approximately 7 years. $\bf{Conclusions}$: Our framework suggests that local scleral weakening is sufficient to trigger staphyloma formation under normal intraocular pressure. With patient-specific scleral geometries (obtainable via wide-field optical coherence tomography), our framework could aid in identifying individuals at risk of developing posterior staphylomas.

We give a constructive characterization of matrices satisfying the reverse-order law for the Moore--Penrose pseudoinverse. In particular, for a given matrix $A$ we construct another matrix $B$, of arbitrary compatible size and chosen rank, in terms of the right singular vectors of $A$, such that the reverse-order law for $AB$ is satisfied. Moreover, we show that any matrix satisfying this law arises from a similar construction. As a consequence, several conditions equivalent to $B^+ A^+$ being a pseudoinverse of $AB$ are given, for example $\mathcal{C}(A^*AB)=\mathcal{C}(BB^*A^*)$, or $B\left(AB\right)^+A$ being an orthogonal projection. In addition, we parameterize all possible singular value decompositions of a fixed matrix and give Greville-like equivalent conditions for $B^+A^+$ being a $\{1,2\}$-inverse of $AB$, with geometric insight in terms of the principal angles between $\mathcal{C}(A^*)$ and $\mathcal{C}(B)$.
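
A quick numerical illustration of the law (assuming NumPy; this is not the paper's construction): for generic $A$ and $B$ the reverse-order law fails, while the classical choice $B = A^*$ always satisfies it, and in that case $B(AB)^+A$ is indeed an orthogonal projection, matching the equivalent condition stated above.

```python
import numpy as np
rng = np.random.default_rng(2)

def reverse_order_law_holds(A, B, tol=1e-10):
    """Numerically check whether (AB)^+ = B^+ A^+ (the reverse-order law)."""
    lhs = np.linalg.pinv(A @ B)
    rhs = np.linalg.pinv(B) @ np.linalg.pinv(A)
    return np.allclose(lhs, rhs, atol=tol)

A = rng.standard_normal((3, 5))
B = rng.standard_normal((5, 4))
print(reverse_order_law_holds(A, B))            # typically False for generic A, B
print(reverse_order_law_holds(A, A.conj().T))   # True: B = A^* always satisfies the law

P = A.conj().T @ np.linalg.pinv(A @ A.conj().T) @ A     # B (AB)^+ A with B = A^*
print(np.allclose(P, P @ P) and np.allclose(P, P.conj().T))   # an orthogonal projection, as expected
```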

We assume that we observe $N$ independent copies of a diffusion process on a time interval $[0,2T]$. For a given time $t$, we estimate the transition density $p_t(x,y)$, namely the conditional density of $X_{t + s}$ given $X_s = x$, under conditions on the diffusion coefficients ensuring that this quantity exists. We use a least squares projection method on a product of finite-dimensional spaces, prove risk bounds for the estimator, and propose an anisotropic model selection method relying on several reference norms. A simulation study illustrates the theoretical results for Ornstein-Uhlenbeck and square-root (Cox-Ingersoll-Ross) processes.
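
As a sanity-check baseline (not the least squares projection estimator of the paper), the Ornstein-Uhlenbeck transition density is available in closed form, so simulated copies of the process can be compared against it directly; the sketch below uses Euler-Maruyama and a histogram.

```python
import numpy as np

# Exact transition density of the Ornstein-Uhlenbeck process dX = -theta*X dt + sigma dW:
# X_t | X_0 = x  ~  N( x e^{-theta t},  sigma^2 (1 - e^{-2 theta t}) / (2 theta) ).
def ou_transition_density(x, y, t, theta=1.0, sigma=1.0):
    mean = x * np.exp(-theta * t)
    var = sigma**2 * (1.0 - np.exp(-2.0 * theta * t)) / (2.0 * theta)
    return np.exp(-(y - mean)**2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def simulate_ou(x0, t, n_paths, n_steps, theta=1.0, sigma=1.0, rng=None):
    """Euler-Maruyama simulation of n_paths independent OU paths started at x0."""
    rng = rng or np.random.default_rng(0)
    dt = t / n_steps
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        x += -theta * x * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    return x

x0, t = 0.5, 0.3
end = simulate_ou(x0, t, n_paths=200_000, n_steps=300)
hist, edges = np.histogram(end, bins=80, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - ou_transition_density(x0, centers, t))))   # close to zero: histogram vs exact p_t(x0, .)
```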

We consider the task of locally correcting, and locally list-correcting, multivariate linear functions over the domain $\{0,1\}^n$ over arbitrary fields and more generally Abelian groups. Such functions form error-correcting codes of relative distance $1/2$ and we give local-correction algorithms correcting up to nearly $1/4$-fraction errors making $\widetilde{\mathcal{O}}(\log n)$ queries. This query complexity is optimal up to $\mathrm{poly}(\log\log n)$ factors. We also give local list-correcting algorithms correcting $(1/2 - \varepsilon)$-fraction errors with $\widetilde{\mathcal{O}}_{\varepsilon}(\log n)$ queries. These results may be viewed as natural generalizations of the classical work of Goldreich and Levin, whose work addresses the special case where the underlying group is $\mathbb{Z}_2$. By extending to the case where the underlying group is, say, the reals, we give the first non-trivial locally correctable codes (LCCs) over the reals (with query complexity being sublinear in the dimension, also known as the message length). The central challenge in constructing the local corrector is constructing ``nearly balanced vectors'' over $\{-1,1\}^n$ that span $1^n$ -- we show how to construct $\mathcal{O}(\log n)$ vectors that do so, with entries in each vector summing to $\pm1$. The challenge for the local list-correction algorithms, given the local corrector, is principally combinatorial, i.e., in proving that the number of linear functions within any Hamming ball of radius $(1/2-\varepsilon)$ is $\mathcal{O}_{\varepsilon}(1)$. Obtaining this general result, covering every Abelian group, requires integrating a variety of known methods with some new combinatorial ingredients analyzing the structural properties of codewords that lie within small Hamming balls.
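
For context, the $\mathbb{Z}_2$ special case attributed to Goldreich and Levin above admits the classical two-query self-corrector $L(x) = f(r) + f(x \oplus r)$ with majority voting over random $r$; the sketch below (using an idealised independent-corruption model) illustrates that special case only, not the paper's algorithm for general Abelian groups.

```python
import numpy as np
rng = np.random.default_rng(3)

n, delta = 20, 0.1
a = rng.integers(0, 2, n)                        # hidden linear function L(x) = <a, x> mod 2
def L(x): return int(np.dot(a, x) % 2)

# Corrupt L on a delta fraction of queries, simulated by flipping each answer independently
# (an idealisation; a fixed corrupted table behaves similarly in expectation).
def f(x): return L(x) ^ int(rng.random() < delta)

def locally_correct(f, x, trials=25):
    """Classical Z_2 self-corrector: L(x) = f(r) + f(x + r) for random r, majority-voted."""
    votes = 0
    for _ in range(trials):
        r = rng.integers(0, 2, n)
        votes += f(r) ^ f(x ^ r)
    return int(votes > trials / 2)

x = rng.integers(0, 2, n)
print(locally_correct(f, x), L(x))   # the corrected value matches L(x) with high probability
```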
