
The case$^2$ study, also referred to as the case-case study design, is a valuable approach for conducting inference about treatment effects. Unlike traditional case-control studies, the case$^2$ design compares treatment in two types of cases with the same disease. A key quantity of interest is the attributable effect, the number of cases of disease among treated units that are caused by the treatment. Two assumptions are usually made when drawing inferences about the attributable effect in case$^2$ studies: (1) treatment does not cause the second type of case, and (2) treatment does not alter an individual's case type. However, these assumptions are not realistic in many applications. In this article, we present a sensitivity analysis framework to scrutinize the impact of deviations from these assumptions on the obtained results. We also include a sensitivity analysis for unmeasured confounding, recognizing the potential bias introduced by unobserved covariates. The proposed methodology is illustrated with an investigation of whether violent behavior in the last year of life increases suicide risk, using the 1993 National Mortality Followback Survey.
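To fix ideas about the estimand, the following toy simulation (not from the paper) generates both potential outcomes for every unit and counts the attributable effect directly, i.e., the number of treated units whose case status was caused by the treatment. In a real case$^2$ study only one potential outcome per unit is observed, which is exactly why inference and the sensitivity analyses above are needed; all probabilities below are arbitrary illustration values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
treated = rng.random(n) < 0.3

# Hypothetical potential outcomes (never jointly observed in practice):
case_if_control = rng.random(n) < 0.05                       # would be a case regardless
case_if_treated = case_if_control | (rng.random(n) < 0.02)   # treatment may add cases

observed_case = np.where(treated, case_if_treated, case_if_control)

# Attributable effect: cases among treated units caused by the treatment.
attributable = int(np.sum(treated & case_if_treated & ~case_if_control))
print(f"observed cases: {observed_case.sum()}, attributable effect: {attributable}")
```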

Related content

Error-bounded lossy compression is a critical technique for significantly reducing scientific data volumes. Compared to CPU-based compressors, GPU-based compressors offer substantially higher throughput and are therefore a better fit for today's HPC applications. However, existing GPU-based compressors suffer from low compression ratios and reconstruction quality, which severely restricts their applicability. To overcome these limitations, we introduce a new GPU-based error-bounded scientific lossy compressor named cuSZ-$i$, with the following contributions: (1) a novel GPU-optimized interpolation-based prediction method that significantly improves the compression ratio and the quality of the decompressed data; (2) an optimized Huffman encoding module for better efficiency; (3) the first integration of NVIDIA's Bitcomp lossless compressor as an additional compression-ratio-enhancing module. Evaluations show that cuSZ-$i$ significantly outperforms other recent GPU-based lossy compressors in compression ratio under the same error bound (and hence the same quality), showcasing a 476% advantage over the second best. This leads to cuSZ-$i$'s optimized performance in several real-world use cases.
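As a rough illustration of the interpolation-plus-quantization idea behind SZ-style compressors (a schematic NumPy sketch, not cuSZ-$i$'s GPU implementation; the error bound `eb` and the 1D setting are illustrative assumptions), the following predicts each value by linear interpolation of already-reconstructed neighbors, quantizes the residual, and checks that the pointwise error bound holds. In a full pipeline the integer codes would then be entropy-coded, e.g., with Huffman coding as in cuSZ-$i$.

```python
import numpy as np

def compress_1d(x, eb):
    """Toy 1D error-bounded compressor: interpolation-based prediction plus
    uniform quantization of the prediction residual.  Predictions only use
    already-reconstructed values, so a decompressor can reproduce them."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    rec = np.zeros(n)                         # running reconstruction
    codes = np.zeros(n, dtype=np.int64)       # quantization codes (to be entropy-coded)
    codes[0] = int(np.round(x[0] / (2 * eb)))  # anchor point, predicted as 0
    rec[0] = 2 * eb * codes[0]
    done = np.zeros(n, dtype=bool)
    done[0] = True
    stride = 1
    while stride * 2 < n:
        stride *= 2
    while stride >= 1:                        # coarse-to-fine interpolation levels
        for i in range(stride, n, stride):
            if done[i]:
                continue
            if i + stride < n and done[i + stride]:
                pred = 0.5 * (rec[i - stride] + rec[i + stride])  # linear interpolation
            else:
                pred = rec[i - stride]                            # boundary: previous value
            codes[i] = int(np.round((x[i] - pred) / (2 * eb)))
            rec[i] = pred + 2 * eb * codes[i]
            done[i] = True
        stride //= 2
    assert np.all(np.abs(x - rec) <= eb * (1 + 1e-12))  # error bound holds pointwise
    return codes, rec

codes, rec = compress_1d(np.sin(np.linspace(0, 20, 1000)), eb=1e-3)
print("distinct codes:", len(np.unique(codes)))  # fewer symbols -> better entropy coding
```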

In the metric distortion problem there is a set of candidates $C$ and a set of voters $V$ in the same metric space. The goal is to select a candidate minimizing the social cost: the sum of distances of the selected candidate from all the voters. The challenge arises from the algorithm receiving only ordinal input (each voter's ranking of the candidates), while the objective function is cardinal, determined by the underlying metric. The distortion of an algorithm is its worst-case approximation factor with respect to the optimal social cost. A key concept here is the $(p,q)$-veto core, with $p\in \Delta(V)$ and $q\in \Delta(C)$ being normalized weight vectors representing voters' veto power and candidates' support, respectively. The $(p,q)$-veto core corresponds to the set of winners produced by a specific class of deterministic algorithms. Notably, the optimal distortion of $3$ is obtained from this class by selecting veto core candidates using a uniform $p$ and a $q$ proportional to candidates' plurality scores. Bounding the distortion of other algorithms from this class is an open problem. Our contribution is twofold. First, we establish upper bounds on the distortion of candidates from the $(p,q)$-veto core for arbitrary weight vectors $p$ and $q$. Second, we revisit the metric distortion problem through the \emph{learning-augmented} framework, which equips the algorithm with a (machine-learned) prediction of the optimal candidate. The quality of this prediction is unknown, and the goal is to optimize the algorithm's performance under accurate predictions (consistency), while simultaneously providing worst-case guarantees under arbitrarily inaccurate predictions (robustness). We propose an algorithm that chooses candidates from the $(p,q)$-veto core using a prediction-guided $q$ vector and, leveraging our distortion bounds, we prove that this algorithm achieves the optimal robustness-consistency trade-off.
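To make the setup concrete, here is a small sketch (not the $(p,q)$-veto core algorithm itself) on a hypothetical line metric: it derives the ordinal input the algorithm actually sees, the plurality scores that would define a $q$ proportional to them, and the hidden cardinal social costs. Note that per-instance cost ratios only lower-bound distortion, which is a worst case over all metrics consistent with the rankings.

```python
import numpy as np

# Toy instance on the real line: the positions are the hidden metric.
voters = np.array([0.0, 0.2, 0.5, 3.0, 3.1])
candidates = {"a": 0.4, "b": 2.9, "c": 5.0}

dist = {c: np.abs(voters - x) for c, x in candidates.items()}

# Ordinal input the algorithm sees: each voter's ranking of the candidates.
rankings = [sorted(candidates, key=lambda c: dist[c][v]) for v in range(len(voters))]

# Plurality scores (the optimal-distortion rule sets q proportional to these).
plurality = {c: sum(r[0] == c for r in rankings) for c in candidates}

# Cardinal objective hidden from the algorithm: social cost of each candidate.
cost = {c: float(dist[c].sum()) for c in candidates}
best = min(cost.values())
ratio = {c: cost[c] / best for c in candidates}
print(rankings, plurality, cost, ratio, sep="\n")
```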

The $n$-way number partitioning problem is a classic problem in combinatorial optimization, with applications to diverse settings such as fair allocation and machine scheduling. All these problems are NP-hard, but various approximation algorithms are known. We consider three closely related kinds of approximation. The first two variants optimize the partition: in the first, some fixed number $s$ of items can be \emph{split} between two or more bins; in the second, we allow at most a fixed number $t$ of \emph{splittings}. The third variant is a decision problem: the largest bin sum must be within a pre-specified interval, parameterized by a fixed rational number $u$ times the largest item size. When the number of bins $n$ is unbounded, we show that every variant is strongly {\sf NP}-complete. When the number of bins $n$ is fixed, the running time depends on the fixed parameters $s,t,u$. For each variant, we give a complete picture of its running time. For $n=2$, the running time is easy to identify. Our main results consider any fixed integer $n \geq 3$. Using a two-way polynomial-time reduction between the first and the third variant, we show that $n$-way number partitioning with $s$ split items can be solved in polynomial time if $s \geq n-2$, and is {\sf NP}-complete otherwise. Similarly, $n$-way number partitioning with $t$ splittings can be solved in polynomial time if $t \geq n-1$, and is {\sf NP}-complete otherwise. Finally, we show that the third variant can be solved in polynomial time if $u \geq (n-2)/n$, and is {\sf NP}-complete otherwise. Our positive results for the optimization problems cover both min-max and max-min versions. Using the same reduction, we also provide a fully polynomial-time approximation scheme for the case where the number of split items is smaller than $n-2$.
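For intuition about why $n-1$ splittings always suffice to equalize the bin sums, the classical wrap-around (McNaughton-style) construction below fills bins of capacity $\mathrm{sum}/n$ in order and cuts an item whenever it crosses a bin boundary. This is only a sketch motivating the $t \geq n-1$ regime, not the algorithms or reductions of the paper.

```python
def wrap_around_split(items, n):
    """Fill n bins of capacity sum(items)/n in order, cutting an item whenever
    it crosses a bin boundary.  Yields a perfectly balanced (min-max optimal)
    partition with at most n-1 cuts ("splittings")."""
    cap = sum(items) / n
    bins, cuts = [[] for _ in range(n)], 0
    b, room = 0, cap
    for size in items:
        remaining = size
        while remaining > 1e-9:
            take = min(remaining, room)
            bins[b].append(take)
            remaining -= take
            room -= take
            if remaining > 1e-9:
                cuts += 1                  # item continues into the next bin
            if room <= 1e-9 and b + 1 < n:
                b, room = b + 1, cap
    return bins, cuts

print(wrap_around_split([5, 4, 3, 3, 3], 3))   # ([[5, 1], [3, 3], [3, 3]], 1)
```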

We design and implement two single-pass semi-streaming algorithms for the maximum weight $k$-disjoint matching ($k$-DM) problem. Given an integer $k$, the $k$-DM problem is to find $k$ pairwise edge-disjoint matchings such that the sum of the weights of the matchings is maximized. For $k \geq 2$, this problem is NP-hard. Our first algorithm is based on the primal-dual framework of a linear programming relaxation of the problem and is $\frac{1}{3+\varepsilon}$-approximate. We also develop an approximation-preserving reduction from $k$-DM to the maximum weight $b$-matching problem. Leveraging this reduction and an existing semi-streaming $b$-matching algorithm, we design a $(\frac{1}{2+\varepsilon})(1 - \frac{1}{k+1})$-approximate semi-streaming algorithm for $k$-DM. For any constant $\varepsilon > 0$, both algorithms require $O(nk \log_{1+\varepsilon}^2 n)$ bits of space. To the best of our knowledge, this is the first study of semi-streaming algorithms for the $k$-DM problem. We compare our two algorithms to state-of-the-art offline algorithms on 95 real-world and synthetic test problems, including thirteen graphs generated from data center network traces. On these instances, our streaming algorithms used significantly less memory (from 6$\times$ to 512$\times$ less) and ran faster than the offline algorithms, while their solutions were often within 5% of the best weights from the offline algorithms. We highlight that the existing offline algorithms exceed 1 TB of memory on most of the large instances ($>1$ billion edges), whereas our streaming algorithms can solve these problems using only 100 GB of memory for $k=8$.
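As a point of reference for the problem itself (and not a substitute for the streaming algorithms above), a simple offline greedy baseline repeatedly extracts a maximum-weight matching and removes its edges. The NetworkX-based sketch below assumes edge weights are stored under the key "weight".

```python
import networkx as nx

def greedy_k_disjoint_matchings(G, k):
    """Simple offline greedy baseline for k-DM: repeatedly take a maximum-weight
    matching of the remaining graph and delete its edges."""
    H = G.copy()
    matchings, total = [], 0.0
    for _ in range(k):
        M = nx.max_weight_matching(H, weight="weight")
        if not M:
            break
        matchings.append(M)
        total += sum(H[u][v]["weight"] for u, v in M)
        H.remove_edges_from(M)
    return matchings, total

G = nx.Graph()
G.add_weighted_edges_from([(0, 1, 5.0), (1, 2, 4.0), (2, 3, 6.0), (3, 0, 1.0)])
print(greedy_k_disjoint_matchings(G, k=2)[1])   # total weight of the 2 matchings
```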

For a skew polynomial ring $R=A[X;\theta,\delta]$, where $A$ is a commutative Frobenius ring, $\theta$ an endomorphism of $A$, and $\delta$ a $\theta$-derivation of $A$, we consider cyclic left module codes $\mathcal{C}=Rg/Rf\subset R/Rf$, where $g$ is a left and right divisor of $f$ in $R$. In this paper, we derive a parity check matrix when $A$ is a finite commutative Frobenius ring, using only the framework of skew polynomial rings. We consider rings $A=B[a_1,\ldots,a_s]$ that are free $B$-modules, where the restrictions of $\theta$ and $\delta$ to $B$ are polynomial maps. If a Gr\"obner basis can be computed over $B$, then we show that all Euclidean and Hermitian dual-containing codes $\mathcal{C}=Rg/Rf\subset R/Rf$ can be computed using a Gr\"obner basis. We also give an algorithm to test whether the dual code is again a cyclic left module code. We illustrate our approach for rings of order $4$ with a non-trivial endomorphism and for the Galois ring of characteristic $4$.
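For readers less familiar with the framework, recall the standard commutation rule that determines multiplication in $R=A[X;\theta,\delta]$ (textbook material, not a result of the paper):
$$X a = \theta(a)\,X + \delta(a) \quad \text{for all } a \in A, \qquad \text{where } \delta(ab) = \theta(a)\delta(b) + \delta(a)b.$$
Products of arbitrary skew polynomials are computed by repeatedly pushing $X$ past coefficients with this rule; in general $R$ is non-commutative, which is why left and right divisors of $f$ must be distinguished.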

We present the first definition of a strictly associative and unital $\infty$-category. Our proposal takes the form of a type theory whose terms describe the operations of such structures, and whose definitional equality relation enforces the desired strictness conditions. The key technical device is a new computation rule in the definitional equality of the theory, which we call insertion, defined in terms of a universal property. On terms for which it is defined, this operation "inserts" one of the arguments of a substituted coherence into the coherence itself, appropriately modifying the pasting diagram and result type, and simplifying the syntax in the process. We generate an equational theory from this reduction relation and study its properties in detail, showing that it yields a decision procedure for equality. Expressed as a type theory, our model is well adapted to generating and verifying efficient proofs of higher categorical statements. We illustrate this via an OCaml implementation, and give a number of examples, including a short encoding of the syllepsis, a 5-dimensional homotopy that plays an important role in the homotopy groups of spheres.

In this paper, we present a high-order finite element method for the biharmonic equation based on a reconstructed approximation space. In our construction, the space is reconstructed from nodal values by solving a local least-squares fitting problem per element. We show that the space can achieve an arbitrarily high order of accuracy while sharing the same nodal degrees of freedom as the $C^0$ linear space. The interior penalty discontinuous Galerkin scheme can be applied directly to the reconstructed space to solve the biharmonic equation. We prove that the numerical solution converges with optimal orders in suitable error measures. More importantly, we establish a norm equivalence between the reconstructed space and the continuous linear space. This property allows us to precondition the linear system arising from the high-order space by the linear space on the same mesh. The preconditioner is shown to be optimal in the sense that the condition number of the preconditioned system admits a uniform upper bound independent of the mesh size. Numerical examples in two and three dimensions illustrate the accuracy of the scheme and the efficiency of the preconditioning method.
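The core of the reconstruction step is an ordinary least-squares polynomial fit over a patch of nodes. The sketch below is schematic, with an illustrative quadratic basis and hypothetical patch input rather than the paper's exact operator; it only shows how element-wise coefficients would be obtained from nodal values.

```python
import numpy as np

def reconstruct_quadratic(patch_xy, nodal_values):
    """Least-squares fit of a quadratic polynomial p(x, y) to the nodal values
    on a patch of nodes around an element.  Returns the coefficients for the
    monomial basis [1, x, y, x^2, x*y, y^2]."""
    x, y = patch_xy[:, 0], patch_xy[:, 1]
    V = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coeffs, *_ = np.linalg.lstsq(V, nodal_values, rcond=None)
    return coeffs

# Example: 8 patch nodes sampling a known quadratic are recovered exactly.
pts = np.random.default_rng(1).random((8, 2))
vals = 1 + 2 * pts[:, 0] - pts[:, 1] ** 2
print(np.round(reconstruct_quadratic(pts, vals), 6))   # approx [1, 2, 0, 0, 0, -1]
```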

While grasp detection is an important part of any robotic manipulation pipeline, reliable and accurate grasp detection in $SE(3)$ remains a research challenge. Many robotics applications in unstructured environments such as the home or warehouse would benefit greatly from improved grasping performance. This paper proposes a novel framework for detecting $SE(3)$ grasp poses from point cloud input. Our main contribution is an $SE(3)$-equivariant model that maps each point in the cloud to a continuous grasp quality function over the 2-sphere $S^2$ using a spherical harmonic basis. Compared with reasoning about a finite set of samples, this formulation improves the accuracy and efficiency of our model when a large number of samples would otherwise be needed. To accomplish this, we propose a novel variation of EquiFormerV2 that leverages a UNet-style backbone to enlarge the number of points the model can handle. Our resulting method, which we name $\textit{OrbitGrasp}$, significantly outperforms baselines in both simulation and physical experiments.
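To illustrate what a "continuous grasp quality function over $S^2$ in a spherical harmonic basis" means in practice, the sketch below evaluates such a function from a set of coefficients and picks the best direction on a sample grid. The coefficients here are random placeholders; in the actual method they would be predicted per point by the equivariant network.

```python
import numpy as np
from scipy.special import sph_harm

def grasp_quality(coeffs, theta, phi, lmax):
    """Evaluate a real-valued function on S^2 from spherical harmonic
    coefficients coeffs[(l, m)] at azimuth theta and polar angle phi."""
    return sum((coeffs[(l, m)] * sph_harm(m, l, theta, phi)).real
               for l in range(lmax + 1) for m in range(-l, l + 1))

lmax = 3
rng = np.random.default_rng(0)
coeffs = {(l, m): rng.normal() + 1j * rng.normal()
          for l in range(lmax + 1) for m in range(-l, l + 1)}

# Score a grid of candidate directions and keep the best one.
dirs = [(t, p) for t in np.linspace(0, 2 * np.pi, 24, endpoint=False)
               for p in np.linspace(0.1, np.pi - 0.1, 12)]
best = max(dirs, key=lambda d: grasp_quality(coeffs, *d, lmax))
print("best direction (theta, phi):", np.round(best, 3))
```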

We examine the problem of question answering over knowledge graphs, focusing on simple questions that can be answered by the lookup of a single fact. Adopting a straightforward decomposition of the problem into entity detection, entity linking, relation prediction, and evidence combination, we explore simple yet strong baselines. On the popular SimpleQuestions dataset, we find that basic LSTMs and GRUs plus a few heuristics yield accuracies that approach the state of the art, and techniques that do not use neural networks also perform reasonably well. These results show that gains from sophisticated deep learning techniques proposed in the literature are quite modest and that some previous models exhibit unnecessary complexity.
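As an example of how small these baselines can be, here is a minimal sketch of one component of such a pipeline: a bidirectional GRU that classifies the question's relation (PyTorch; the sizes are placeholders, and neither the authors' exact configurations nor the other pipeline stages are reproduced here).

```python
import torch
import torch.nn as nn

class RelationPredictor(nn.Module):
    """Minimal GRU baseline: encode the question, classify its relation."""
    def __init__(self, vocab_size, num_relations, emb_dim=300, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, num_relations)

    def forward(self, token_ids):             # (batch, seq_len)
        h, _ = self.gru(self.embed(token_ids))
        pooled, _ = h.max(dim=1)               # max-pool over time
        return self.out(pooled)                # relation logits

model = RelationPredictor(vocab_size=20_000, num_relations=1_000)  # placeholder sizes
logits = model(torch.randint(1, 20_000, (4, 12)))  # 4 questions, 12 tokens each
print(logits.shape)                                # torch.Size([4, 1000])
```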

Dynamic programming (DP) solves a variety of structured combinatorial problems by iteratively breaking them down into smaller subproblems. In spite of their versatility, DP algorithms are usually non-differentiable, which hampers their use as a layer in neural networks trained by backpropagation. To address this issue, we propose to smooth the max operator in the dynamic programming recursion using a strongly convex regularizer. This allows us to relax both the optimal value and the solution of the original combinatorial problem, and turns a broad class of DP algorithms into differentiable operators. Theoretically, we provide a new probabilistic perspective on backpropagating through these DP operators, and relate them to inference in graphical models. We derive two particular instantiations of our framework: a smoothed Viterbi algorithm for sequence prediction and a smoothed DTW algorithm for time-series alignment. We showcase these instantiations on two structured prediction tasks and on structured and sparse attention for neural machine translation.
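To make the construction concrete, the sketch below smooths the min operator with the negative-entropy regularizer, $\min_\gamma(v) = -\gamma \log \sum_i e^{-v_i/\gamma}$, and uses it inside a DTW recursion, giving a value that is differentiable in the cost matrix and recovers classical DTW as $\gamma \to 0$. This is a NumPy illustration in the spirit of the smoothed DTW instantiation, not the paper's full framework or its backward pass.

```python
import numpy as np
from scipy.special import logsumexp

def softmin(values, gamma):
    """Smoothed min via the negative-entropy regularizer:
    min_gamma(v) = -gamma * log sum_i exp(-v_i / gamma)."""
    return -gamma * logsumexp(-np.asarray(values) / gamma)

def soft_dtw(D, gamma=1.0):
    """Smoothed DTW value for a pairwise cost matrix D of shape (n, m).
    Differentiable in D; approaches classical DTW as gamma -> 0."""
    n, m = D.shape
    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            R[i, j] = D[i - 1, j - 1] + softmin(
                [R[i - 1, j], R[i, j - 1], R[i - 1, j - 1]], gamma)
    return R[n, m]

a, b = np.sin(np.linspace(0, 3, 20)), np.sin(np.linspace(0.2, 3.2, 25))
D = (a[:, None] - b[None, :]) ** 2
print(soft_dtw(D, gamma=1.0), soft_dtw(D, gamma=0.01))  # smooth vs. near-hard DTW
```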
