
We exhibit combinatorial results on Christoffel words and binary balanced words that are motivated by their geometric interpretation as approximations of digital segments. We show that for every pair $(a,b)$ of positive integers, all the binary balanced words with $a$ zeroes and $b$ ones are good approximations of the Euclidean segment from $(0,0)$ to $(a,b)$, in the sense that they encode paths that are contained within the region of the grid delimited by the lower and the upper Christoffel words of slope $b/a$. We then give a closed formula for counting the exact number of balanced words with $a$ zeroes and $b$ ones. We also study minimal non-balanced words and prefixes of Christoffel words.
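As an illustration of the objects involved, the following sketch builds the lower Christoffel word of slope $b/a$ via the standard mechanical-word formula and checks balancedness by brute force; the function names `lower_christoffel` and `is_balanced` are illustrative, not taken from the paper.

```python
def lower_christoffel(a, b):
    """Lower Christoffel word with a zeroes and b ones (slope b/a),
    via the standard mechanical-word formula."""
    n = a + b
    return ''.join(str((i + 1) * b // n - i * b // n) for i in range(n))

def is_balanced(w):
    """A binary word is balanced if the number of ones in any two
    factors of the same length differs by at most 1."""
    for k in range(1, len(w)):
        counts = {w[i:i + k].count('1') for i in range(len(w) - k + 1)}
        if max(counts) - min(counts) > 1:
            return False
    return True
```

For example, `lower_christoffel(3, 2)` yields `'00101'`, which is balanced, whereas `'0011'` (same letter counts) is not.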


We study a variant of the subgraph isomorphism problem that is of high interest to the quantum computing community. Our results give an algorithm to perform pattern matching in quantum circuits for many patterns simultaneously, independently of the number of patterns. After a pre-computation step in which the patterns are compiled into a decision tree, the running time is linear in the size of the input quantum circuit. More generally, we consider connected port graphs, in which every edge $e$ incident to a vertex $v$ has a label $L_v(e)$ unique in $v$. Jiang and Bunke showed that the subgraph isomorphism problem $H \subseteq G$ for such graphs can be solved in time $O(|V(G)| \cdot |V(H)|)$. We show that if, in addition, the graphs are directed acyclic, then the subgraph isomorphism problem can be solved for an unbounded number of patterns simultaneously. We enumerate all $m$ pattern matches in time $O(P)^{P+3/2} \cdot |V(G)| + O(m)$, where $P$ is the number of vertices of the largest pattern. In the case of quantum circuits, we can express the bound obtained in terms of the maximum number of qubits $N$ and depth $\delta$ of the patterns: $O(N)^{N + 1/2} \cdot \delta \log \delta \cdot |V(G)| + O(m)$.
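The uniqueness of port labels at each vertex is what makes matching fast: once a single pattern vertex is anchored, the rest of the embedding is forced. A minimal single-pattern sketch of this anchored traversal (not the paper's multi-pattern decision-tree algorithm; the representation and names are assumptions) is:

```python
def match_at(G, H, g0, h0):
    """Try to embed pattern H into host G, anchoring pattern vertex h0
    at host vertex g0. Port graphs are dicts: vertex -> {port: neighbour}.
    Unique port labels at each vertex make the traversal deterministic."""
    mapping = {h0: g0}
    stack = [h0]
    while stack:
        h = stack.pop()
        g = mapping[h]
        for port, h_next in H[h].items():
            if port not in G[g]:
                return None            # host lacks the required port
            g_next = G[g][port]
            if h_next in mapping:
                if mapping[h_next] != g_next:
                    return None        # inconsistent embedding
            else:
                mapping[h_next] = g_next
                stack.append(h_next)
    return mapping
```

Each pattern vertex is processed once and each of its ports looked up in constant time, so a single anchored attempt costs $O(|V(H)|)$ up to the vertex degrees.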

Error-correcting codes over the real field are studied which can locate outlying computational errors when performing approximate computing of real vector--matrix multiplication on resistive crossbars. Prior work has concentrated on locating a single outlying error and, in this work, several classes of codes are presented which can handle multiple errors. It is first shown that one of the known constructions, which is based on spherical codes, can in fact handle multiple outlying errors. A second family of codes is then presented with $\{0,1\}$~parity-check matrices which are sparse and disjunct; such matrices have been used in other applications as well, especially in combinatorial group testing. In addition, a certain class of the codes that are obtained through this construction is shown to be efficiently decodable. As part of the study of sparse disjunct matrices, this work also contains improved lower and upper bounds on the maximum Hamming weight of the rows in such matrices.
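The disjunctness property referred to here is the standard one from group testing: a $0/1$ matrix is $d$-disjunct if no column is covered by the union of any $d$ other columns. A brute-force checker (illustrative only, not the paper's constructions) makes the definition concrete:

```python
from itertools import combinations

def is_disjunct(cols, d):
    """Check whether a 0/1 matrix, given as a list of column supports
    (sets of row indices), is d-disjunct: no column is contained in
    the union of any d other columns."""
    for j, col in enumerate(cols):
        others = [c for i, c in enumerate(cols) if i != j]
        for group in combinations(others, d):
            if col <= set().union(*group):
                return False
    return True
```

For instance, three disjoint singleton columns are 2-disjunct, while a column equal to the union of two others is not.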

Creating large-scale and well-annotated datasets to train AI algorithms is crucial for automated tumor detection and localization. However, with limited resources, it is challenging to determine the best type of annotations when annotating massive amounts of unlabeled data. To address this issue, we focus on polyps in colonoscopy videos and pancreatic tumors in abdominal CT scans; both applications require significant effort and time for pixel-wise annotation due to the high dimensional nature of the data, involving either temporal or spatial dimensions. In this paper, we develop a new annotation strategy, termed Drag&Drop, which simplifies the annotation process to drag and drop. This annotation strategy is more efficient, particularly for temporal and volumetric imaging, than other types of weak annotations, such as per-pixel, bounding boxes, scribbles, ellipses, and points. Furthermore, to exploit our Drag&Drop annotations, we develop a novel weakly supervised learning method based on the watershed algorithm. Experimental results show that our method achieves better detection and localization performance than alternative weak annotations and, more importantly, achieves similar performance to that trained on detailed per-pixel annotations. Interestingly, we find that, with limited resources, allocating weak annotations from a diverse patient population can foster models more robust to unseen images than allocating per-pixel annotations for a small set of images. In summary, this research proposes an efficient annotation strategy for tumor detection and localization that is less accurate than per-pixel annotations but useful for creating large-scale datasets for screening tumors in various medical modalities.
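The watershed algorithm mentioned above grows labeled regions outward from seed markers, flooding pixels in order of increasing "height". A toy marker-based version on a 2-D grid (a pure-Python sketch, not the paper's method; the grid representation is an assumption) is:

```python
import heapq

def watershed(height, markers):
    """Toy marker-based watershed. height: 2-D list of lists;
    markers: dict (row, col) -> region label. Unlabeled pixels are
    flooded from the markers in order of increasing height."""
    rows, cols = len(height), len(height[0])
    label = [[0] * cols for _ in range(rows)]
    heap = []
    for (r, c), m in markers.items():
        label[r][c] = m
        heapq.heappush(heap, (height[r][c], r, c))
    while heap:
        h, r, c = heapq.heappop(heap)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and label[nr][nc] == 0:
                label[nr][nc] = label[r][c]   # inherit the flooding region
                heapq.heappush(heap, (height[nr][nc], nr, nc))
    return label
```

High-"height" ridges (e.g. image gradients) are flooded last, so they end up forming the boundaries between regions seeded by different markers.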

A general setup for deterministic system identification problems on graphs with Dirichlet and Neumann boundary conditions is introduced. When control nodes are available along the boundary, we apply a discretize-then-optimize method to estimate an optimal control. A key component of the architecture is our boundary-injected message passing neural network, which produces more accurate predictions that are considerably more stable in the vicinity of the boundary. We also introduce a regularization technique based on graph distance that helps stabilize the predictions at nodes far from the boundary.
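The simplest analogue of boundary injection is a message-passing scheme in which the known Dirichlet boundary values are re-injected (clamped) at every step, as in Jacobi iteration for a graph Laplace problem. This minimal sketch (illustrative only, not the paper's neural architecture) shows the idea:

```python
def jacobi_dirichlet(adj, boundary, iters=200):
    """Averaging message passing on a graph, with Dirichlet boundary
    values re-injected at every step. adj: node -> list of neighbours;
    boundary: node -> fixed value."""
    x = {v: boundary.get(v, 0.0) for v in adj}
    for _ in range(iters):
        x_new = {}
        for v, nbrs in adj.items():
            if v in boundary:
                x_new[v] = boundary[v]                    # boundary injection
            else:
                x_new[v] = sum(x[u] for u in nbrs) / len(nbrs)
        x = x_new
    return x
```

On a path graph with clamped endpoints the interior values converge to the discrete harmonic interpolation of the boundary data, which is exactly the stabilizing effect the clamping provides.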

We propose a new joint mean and correlation regression model for correlated multivariate discrete responses, that simultaneously regresses the mean of each response against a set of covariates, and the correlations between responses against a set of similarity/distance measures. A set of joint estimating equations are formulated to construct an estimator of both the mean regression coefficients and the correlation regression parameters. Under a general setting where the number of responses can tend to infinity, the joint estimator is demonstrated to be consistent and asymptotically normally distributed, with differing rates of convergence due to the mean regression coefficients being heterogeneous across responses. An iterative estimation procedure is developed to obtain parameter estimates in the required, constrained parameter space. We apply the proposed model to a multivariate abundance dataset comprising overdispersed counts of 38 Carabidae ground beetle species sampled throughout Scotland, along with information about the environmental conditions of each site and the traits of each species. Results show in particular that the relationships between the mean abundances of various beetle species and environmental covariates are different and that beetle total length has a statistically important effect in driving the correlations between the species. Simulations demonstrate the strong finite sample performance of the proposed estimator in terms of point estimation and inference.

A multi-joint robot requires extensive mathematical calculation to determine the end-effector's position with respect to the other connected joints and their respective frames in a specific coordinate system. If a control algorithm employs fewer constraints than the cases necessary to explicitly determine the leg's position, the robot is generally underconstrained. Consequently, only a subset of the end effector's degrees of freedom (DoF) can be assigned for the robot's leg position for pose and trajectory estimation purposes. This paper introduces a fully functional algorithm that considers all the cases of the robot's leg position in a coordinate system, so the robot's degrees of freedom are not limited. The joint angles are derived mathematically via forward and inverse kinematics, and Python-based simulation has been done to verify and simulate the robot's locomotion. Using Python-based code for serial communication with a micro-controller unit makes this approach effective for demonstrating its application on a prototype leg.
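For a planar two-link leg, the forward and inverse kinematics referred to above take the familiar closed form below. This is a generic textbook sketch, not the paper's full multi-case algorithm; the `elbow` flag selects between the two solution branches, which is the simplest instance of covering all configuration cases:

```python
from math import atan2, acos, cos, sin

def forward(l1, l2, t1, t2):
    """Forward kinematics: joint angles -> foot (end-effector) position."""
    x = l1 * cos(t1) + l2 * cos(t1 + t2)
    y = l1 * sin(t1) + l2 * sin(t1 + t2)
    return x, y

def inverse(l1, l2, x, y, elbow=+1):
    """Inverse kinematics: foot position -> joint angles.
    elbow=+1 or -1 picks one of the two solution branches."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    t2 = elbow * acos(max(-1.0, min(1.0, c2)))   # clamp against round-off
    t1 = atan2(y, x) - atan2(l2 * sin(t2), l1 + l2 * cos(t2))
    return t1, t2
```

A quick sanity check is the round trip `forward(l1, l2, *inverse(l1, l2, x, y))`, which should recover the target `(x, y)` for either elbow branch.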

Although robust statistical estimators are less affected by outlying observations, their computation is usually more challenging. This is particularly the case in high-dimensional sparse settings. The availability of new optimization procedures, mainly developed in the computer science domain, offers new possibilities for the field of robust statistics. This paper investigates how such procedures can be used for robust sparse association estimators. The problem can be split into a robust estimation step followed by an optimization for the remaining decoupled, (bi-)convex problem. A combination of the augmented Lagrangian algorithm and adaptive gradient descent is implemented to also include suitable constraints for inducing sparsity. We provide results concerning the precision of the algorithm and show the advantages over existing algorithms in this context. High-dimensional empirical examples underline the usefulness of this procedure. Extensions to other robust sparse estimators are possible.
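The sparsity-inducing step in such optimization procedures is typically a proximal (soft-thresholding) update. The sketch below shows proximal gradient descent (ISTA) for a lasso-type problem in pure Python; it illustrates only the generic sparsity mechanism, not the paper's augmented Lagrangian algorithm or its robust estimation step:

```python
def soft_threshold(x, lam):
    """Proximal operator of lam*|x|: shrink towards zero, then clip."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def ista(A, b, lam, step, iters=500):
    """Proximal gradient descent for min 0.5*||Ax - b||^2 + lam*||x||_1.
    A: list of rows, b: list, both plain Python for self-containment."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i]
             for i in range(len(A))]                      # residual Ax - b
        g = [sum(A[i][j] * r[i] for i in range(len(A)))
             for j in range(n)]                           # gradient A^T r
        x = [soft_threshold(x[j] - step * g[j], step * lam) for j in range(n)]
    return x
```

With `A` the identity the solution is the componentwise soft-threshold of `b`: small coefficients are driven exactly to zero, which is the sparsity the abstract refers to.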

Synthetic control (SC) methods have gained rapid popularity in economics recently, where they have been applied in the context of inferring the effects of treatments on standard continuous outcomes assuming linear input-output relations. In medical applications, conversely, survival outcomes are often of primary interest, a setup in which both commonly assumed data-generating processes (DGPs) and target parameters are different. In this paper, we therefore investigate whether and when SCs could serve as an alternative to matching methods in survival analyses. We find that, because SCs rely on a linearity assumption, they will generally be biased for the true expected survival time in commonly assumed survival DGPs -- even when taking into account the possibility of linearity on another scale as in accelerated failure time models. Additionally, we find that, because SC units follow distributions with lower variance than real control units, summaries of their distributions, such as survival curves, will be biased for the parameters of interest in many survival analyses. Nonetheless, we also highlight that using SCs can still improve upon matching whenever the biases described above are outweighed by extrapolation biases exhibited by imperfect matches, and investigate the use of regularization to trade off the shortcomings of both approaches.
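At its core, a synthetic control is a convex combination of donor units fitted to the treated unit's pre-treatment trajectory. A deliberately minimal two-donor sketch (illustrative of the SC idea only; the grid-search solver and names are assumptions, not the methods studied in the paper) is:

```python
def sc_weight(y_treat, y_c1, y_c2, grid=10001):
    """Two-donor synthetic control: find w in [0,1] such that
    w*y_c1 + (1-w)*y_c2 best matches the treated unit's
    pre-treatment outcomes in squared error."""
    best_w, best_err = 0.0, float('inf')
    for k in range(grid):
        w = k / (grid - 1)
        err = sum((t - (w * a + (1 - w) * b)) ** 2
                  for t, a, b in zip(y_treat, y_c1, y_c2))
        if err < best_err:
            best_w, best_err = w, err
    return best_w
```

Because the synthetic unit is an average of donors, its trajectory has lower variance than any individual control; this is precisely the mechanism behind the bias in distributional summaries such as survival curves discussed above.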

We revisit the problems of pitch spelling and tonality guessing with a new algorithm for their joint estimation from a MIDI file including information about the measure boundaries. Our algorithm identifies not only a global key but also local ones all along the analyzed piece. It uses Dynamic Programming techniques to search for an optimal spelling in terms, roughly, of the number of accidental symbols that would be displayed in the engraved score. The evaluation of this number is coupled with an estimation of the global key and some local keys, one for each measure. Each of these three pieces of information is used for the estimation of the others, in a multi-step procedure. An evaluation conducted on a monophonic and a piano dataset, comprising 216,464 notes in total, shows a high degree of accuracy, both for pitch spelling (99.5% on average on the Bach corpus and 98.2% on the whole dataset) and global key signature estimation (93.0% on average, 95.58% on the piano dataset). Designed originally as a backend tool in a music transcription framework, this method should also be useful in other tasks related to music notation processing.
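The Dynamic Programming search over spellings can be pictured as a Viterbi-style recursion: each pitch class has a few candidate spellings, each spelling carries an accidental cost, and a transition cost discourages incoherent sequences (e.g. mixing sharps and flats). The following is a heavily simplified sketch under those assumptions, not the paper's algorithm:

```python
def spell(notes, candidates, trans_cost):
    """Pick one spelling per note minimising total accidental count
    plus transition penalties. notes: list of pitch classes;
    candidates: pitch class -> list of (name, n_accidentals)."""
    best = {s: acc for s, acc in candidates[notes[0]]}
    back = [{s: None for s in best}]
    for pc in notes[1:]:
        new_best, new_back = {}, {}
        for s, acc in candidates[pc]:
            prev = min(best, key=lambda p: best[p] + trans_cost(p, s))
            new_best[s] = best[prev] + trans_cost(prev, s) + acc
            new_back[s] = prev
        best, back = new_best, back + [new_back]
    # trace back the optimal spelling sequence
    s = min(best, key=best.get)
    out = [s]
    for bk in reversed(back[1:]):
        s = bk[s]
        out.append(s)
    return list(reversed(out))
```

With candidates `C#/Db` followed by `F#/Gb` and a penalty for mixing sharps and flats, the DP returns a coherent all-sharp or all-flat spelling rather than a mixed one.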

Dynamic programming (DP) solves a variety of structured combinatorial problems by iteratively breaking them down into smaller subproblems. In spite of their versatility, DP algorithms are usually non-differentiable, which hampers their use as a layer in neural networks trained by backpropagation. To address this issue, we propose to smooth the max operator in the dynamic programming recursion, using a strongly convex regularizer. This allows us to relax both the optimal value and solution of the original combinatorial problem, and turns a broad class of DP algorithms into differentiable operators. Theoretically, we provide a new probabilistic perspective on backpropagating through these DP operators, and relate them to inference in graphical models. We derive two particular instantiations of our framework, a smoothed Viterbi algorithm for sequence prediction and a smoothed DTW algorithm for time-series alignment. We showcase these instantiations on two structured prediction tasks and on structured and sparse attention for neural machine translation.
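With entropic regularization, the smoothed max is the log-sum-exp $\gamma \log \sum_i e^{x_i/\gamma}$, which recovers the hard max as $\gamma \to 0$ and whose gradient is a softmax. The sketch below shows this operator and a smoothed DTW recursion built from the corresponding smoothed min; it is an illustrative pure-Python instance of the idea, not the paper's implementation:

```python
from math import exp, log

def smoothed_max(xs, gamma):
    """Entropic smoothing of max: gamma*log(sum(exp(x/gamma))).
    Computed with the max subtracted for numerical stability."""
    m = max(xs)
    return m + gamma * log(sum(exp((x - m) / gamma) for x in xs))

def soft_dtw(a, b, gamma):
    """DTW recursion with min replaced by its entropic smoothing,
    making the alignment cost differentiable in the inputs."""
    def smin(xs):
        m = min(xs)
        return m - gamma * log(sum(exp(-(x - m) / gamma) for x in xs))
    n, k = len(a), len(b)
    D = [[float('inf')] * (k + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, k + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i][j] = cost + smin([D[i - 1][j], D[i][j - 1], D[i - 1][j - 1]])
    return D[n][k]
```

For small $\gamma$ the smoothed values track the hard ones closely (e.g. the soft-DTW cost of a sequence against itself is near zero), while remaining differentiable everywhere.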
