
Studies of alcohol and drug use are often interested in the number of days that people use the substance of interest over an interval, such as the 28 days before a survey date. Although count models are often used for this purpose, they are not strictly appropriate for this type of data because the response variable is bounded above. Furthermore, if some people's substance use behaviors are characterized by various weekly patterns of use, summaries of substance days of use over longer periods can exhibit multiple modes. These characteristics of substance days-of-use data are not easily fitted with conventional parametric model families. We propose a continuation ratio ordinal model for substance days-of-use data. Instead of grouping the set of possible response values into a small set of ordinal categories, each possible value is assigned its own category. This allows the exact numeric distribution implied by the predicted ordinal response to be recovered. We demonstrate the proposed model using survey data reporting days of alcohol use over 28-day intervals. We show that the continuation ratio model is better able to capture the complexity of the drinking days dataset than binomial, hurdle negative binomial and beta-binomial models.
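
To make the construction concrete, the sketch below shows how the full distribution over 0 to 28 drinking days can be recovered from continuation-ratio probabilities h_k = P(Y = k | Y >= k); the uniform values of h_k used here are purely illustrative, not output from a fitted model.

```python
import numpy as np

def pmf_from_continuation_ratios(hazards):
    """Recover P(Y = k) from continuation ratios h_k = P(Y = k | Y >= k),
    k = 0..K-1; the final category K absorbs the remaining mass."""
    hazards = np.asarray(hazards, dtype=float)
    surv = 1.0                      # running value of P(Y >= k)
    pmf = []
    for h in hazards:
        pmf.append(surv * h)        # P(Y = k) = P(Y >= k) * h_k
        surv *= (1.0 - h)
    pmf.append(surv)                # P(Y = K): all remaining mass
    return np.array(pmf)

# Hypothetical continuation ratios for 0..27 days, e.g. from
# per-category logistic regressions; day 28 is the final category.
h = np.full(28, 0.08)
p = pmf_from_continuation_ratios(h)
print(p.sum())          # ~1.0
print(p[:3], p[-1])     # P(0), P(1), P(2) days and P(28 days)
```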

Related content

A new feature that makes switching between iOS 8 and OS X Yosemite seamless. > Apple products have always been designed to work together beautifully. But now they may really surprise you. With iOS 8 and OS X Yosemite, you’ll be able to do more wonderful things than ever before.


A canonical problem in social choice is how to aggregate ranked votes: given $n$ voters' rankings over $m$ candidates, what voting rule $f$ should we use to aggregate these votes into a single winner? One standard method for comparing voting rules is by their satisfaction of axioms - properties that we want a "reasonable" rule to satisfy. Unfortunately, this approach leads to several impossibilities: no voting rule can simultaneously satisfy all the properties we want, at least in the worst case over all possible inputs. Motivated by this, we consider a relaxation of these worst-case requirements. We do so using a "smoothed" model of social choice, where votes are perturbed with small amounts of noise. If, no matter which input profile we start with, the probability (post-noise) of an axiom being satisfied is large, we consider the axiom as good as satisfied - called "smoothed-satisfied" - even if it may be violated in the worst case. Our model is a mild restriction of Lirong Xia's, and corresponds closely to that in Spielman and Teng's original work on smoothed analysis. Xia has already studied axiom satisfaction under such noise in several papers. In our paper, we aim to give a more cohesive overview of when smoothed analysis of social choice is useful. Within our model, we give simple sufficient conditions for smoothed-satisfaction or smoothed-violation of several previously unstudied axioms and paradoxes, plus many of those studied by Xia. We then observe that, in a practically important subclass of noise models, although convergence eventually occurs, known rates may require an extremely large number of voters. Motivated by this observation, we prove bounds specifically within a canonical noise model from this subclass - the Mallows model. Here, we present a more nuanced picture of exactly when smoothed analysis can help.
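
As a rough numerical illustration of this smoothed setting (a sketch under assumed parameters, not the paper's formal model or results), the following perturbs a worst-case profile with Mallows noise via the standard repeated-insertion sampler and estimates the post-noise probability that a Condorcet winner exists.

```python
import random

def mallows_sample(reference, phi, rng):
    """Draw one ranking from a Mallows model centered at `reference`
    with dispersion phi in (0, 1], using repeated insertion."""
    ranking = []
    for i, item in enumerate(reference):
        # Insert the i-th item at position j (0..i) with probability
        # proportional to phi ** (i - j); j = i keeps the reference order.
        weights = [phi ** (i - j) for j in range(i + 1)]
        j = rng.choices(range(i + 1), weights=weights)[0]
        ranking.insert(j, item)
    return ranking

def has_condorcet_winner(profile, candidates):
    """True if some candidate beats every other in a pairwise majority."""
    n = len(profile)
    for c in candidates:
        if all(2 * sum(r.index(c) < r.index(d) for r in profile) > n
               for d in candidates if d != c):
            return True
    return False

rng = random.Random(0)
candidates = ["a", "b", "c"]
# Worst-case starting profile: a perfect Condorcet cycle, repeated three times.
base = [["a", "b", "c"], ["b", "c", "a"], ["c", "a", "b"]] * 3

trials, hits, phi = 2000, 0, 0.8
for _ in range(trials):
    noisy = [mallows_sample(vote, phi, rng) for vote in base]
    hits += has_condorcet_winner(noisy, candidates)
print(f"Estimated post-noise probability of a Condorcet winner: {hits / trials:.3f}")
```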

For many statistical experiments, there exists a multitude of optimal designs. If we consider models with uncorrelated observations and adopt the approach of approximate experimental design, the set of all optimal designs typically forms a multivariate polytope. In this paper, we mathematically characterize the polytope of optimal designs. In particular, we show that its vertices correspond to the so-called minimal optimum designs. Consequently, we compute the vertices for several classical multifactor regression models of the first and the second degree. To this end, we use software tools based on rational arithmetic; therefore, the computed list is accurate and complete. The polytope of optimal experimental designs, and its vertices, can be applied in several ways. For instance, it can aid in constructing cost-efficient and efficient exact designs.

Networks with hop-by-hop flow control occur in several contexts, from data centers to systems architectures (e.g., wormhole-routing networks on chip). A worst-case end-to-end delay in such networks can be computed using Network Calculus (NC), an algebraic theory where traffic and service guarantees are represented as curves in a Cartesian plane. NC uses transformation operations, e.g., the min-plus convolution, to model how the traffic profile changes with the traversal of network nodes. NC allows one to model flow-controlled systems, hence one can compute the end-to-end service curve describing the minimum service guaranteed to a flow traversing a tandem of flow-controlled nodes. However, while the algebraic expression of such an end-to-end service curve is quite compact, its computation is often intractable from an algorithmic standpoint: data structures tend to grow quickly to unfeasibly large sizes, making operations intractable, even with as few as three hops. In this paper, we propose computational and algebraic techniques to mitigate the above problem. We show that existing techniques (such as reduction to compact domains) cannot be used in this case, and propose an arsenal of solutions, which include methods to mitigate the data representation space explosion as well as computationally efficient algorithms for the min-plus convolution operation. We show that our solutions allow a significant speedup, enable analysis of previously unfeasible case studies, and -- since they do not rely on any approximation -- still provide exact results.
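
As a point of reference for the operation at the heart of these computations, the snippet below implements a naive discretized min-plus convolution, (f ⊗ g)(t) = inf over 0 <= s <= t of {f(s) + g(t - s)}. It only illustrates the operator itself, not the exact piecewise-curve algorithms the paper develops.

```python
import numpy as np

def min_plus_convolution(f, g):
    """Naive (min, +) convolution of two curves sampled on the same
    uniform time grid: h[t] = min over s <= t of f[s] + g[t - s].
    Runs in O(n^2); exact NC tools work on symbolic piecewise curves."""
    n = len(f)
    h = np.empty(n)
    for t in range(n):
        h[t] = min(f[s] + g[t - s] for s in range(t + 1))
    return h

# Example: a token-bucket arrival curve and a rate-latency service curve.
t = np.arange(0, 50)
arrival = 10 + 2.0 * t                     # burst 10, rate 2
service = np.maximum(0.0, 3.0 * (t - 5))   # rate 3, latency 5
print(min_plus_convolution(arrival, service)[:10])
```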

Statistical shape models (SSM) are well established as an excellent tool for identifying variations in the morphology of anatomy across an underlying population. Shape models use a consistent shape representation across all the samples in a given cohort, which helps to compare shapes and identify the variations that can detect pathologies and help in formulating treatment plans. In medical imaging, computing these shape representations from CT/MRI scans requires time-intensive preprocessing operations, including but not limited to anatomy segmentation annotations, registration, and texture denoising. Deep learning models have demonstrated exceptional capabilities in learning shape representations directly from volumetric images, giving rise to highly effective and efficient Image-to-SSM networks. Nevertheless, these models are data-hungry, and due to the limited availability of medical data, deep learning models tend to overfit. Offline data augmentation techniques that use kernel density estimation (KDE) based methods for generating shape-augmented samples have successfully aided Image-to-SSM networks in achieving accuracy comparable to traditional SSM methods. However, these augmentation methods focus on shape augmentation, whereas deep learning models exhibit texture bias, which results in sub-optimal models. This paper introduces a novel strategy for on-the-fly data augmentation for the Image-to-SSM framework by leveraging data-dependent noise generation, or texture augmentation. The proposed framework is trained as an adversary to the Image-to-SSM network, generating diverse and challenging noisy samples. Our approach achieves improved accuracy by encouraging the model to focus on the underlying geometry rather than relying solely on pixel values.
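
The general flavor of data-dependent, loss-increasing noise can be sketched with a single FGSM-style gradient step, as below; this is a simplified stand-in using hypothetical tensor shapes and a toy regressor, not the paper's adversarially trained augmentation network.

```python
import torch
import torch.nn as nn

# Minimal stand-in for an Image-to-SSM regressor (illustrative only).
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 16))
loss_fn = nn.MSELoss()

def adversarial_noise(image, target, epsilon=0.03):
    """Data-dependent noise that increases the shape-regression loss:
    one FGSM-style step standing in for a learned adversarial augmenter."""
    image = image.clone().requires_grad_(True)
    loss = loss_fn(model(image), target)
    loss.backward()
    return (image + epsilon * image.grad.sign()).detach()

images = torch.rand(4, 1, 32, 32)   # dummy batch of image slices
targets = torch.rand(4, 16)         # dummy correspondence-point coordinates
noisy = adversarial_noise(images, targets)
loss = loss_fn(model(noisy), targets)   # train on the harder, augmented batch
```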

Most state-of-the-art techniques for medical image segmentation rely on deep-learning models. These models, however, are often trained on narrowly defined tasks in a supervised fashion, which requires expensive labeled datasets. Recent advances in several machine learning domains, such as natural language generation, have demonstrated the feasibility and utility of building foundation models that can be customized for various downstream tasks with little to no labeled data. This likely represents a paradigm shift for medical imaging, where we expect that foundation models may shape the future of the field. In this paper, we consider a recently developed foundation model for medical image segmentation, UniverSeg. We conduct an empirical evaluation study in the context of prostate imaging and compare it against the conventional approach of training a task-specific segmentation model. Our results and discussion highlight several factors that will likely be important in the development and adoption of foundation models for medical image segmentation.

Examination of the umbilical artery with Doppler ultrasonography is performed to investigate blood supply to the fetus through the umbilical cord, which is vital for the monitoring of fetal health. Such examination involves several steps that must be performed correctly: identifying suitable sites on the umbilical artery for the measurement, acquiring the blood flow curve in the form of a Doppler spectrum, and ensuring compliance with a set of quality standards. These steps rely heavily on the operator's skill, and the shortage of experienced sonographers has thus created a demand for machine assistance. In this work, we propose an automatic system to fill this gap. By using a modified Faster R-CNN network, we obtain an algorithm that can suggest locations suitable for Doppler measurement. We have also developed a method for assessing the quality of the Doppler spectrum. The proposed system is validated on 657 images from a national ultrasound screening database, with results demonstrating its potential as a guidance system.

Polytomous categorical data are frequent in studies and can be collected with an individual or grouped structure. In both structures, the generalized logit model is commonly used to relate the covariates to the response variable. After fitting a model, one of the challenges is the definition of an appropriate residual and the choice of diagnostic techniques. Since the polytomous variable is multivariate, the raw, Pearson, or deviance residuals are vectors and their asymptotic distribution is generally unknown, which leads to difficulties in graphical visualization and interpretation. Therefore, the definition of appropriate residuals and the choice of the correct analysis in diagnostic tools are important, especially for nominal data, for which few methods are available. This paper proposes the use of randomized quantile residuals for individual and grouped nominal data, as well as Euclidean and Mahalanobis distance measures, as an alternative to reduce the dimension of the residuals. We developed simulation studies for both data structures. Half-normal plots with simulation envelopes were used to assess model performance. These studies demonstrated good performance of the quantile residuals, and the distance measures allowed a better interpretation of the graphical techniques. We illustrate the proposed procedures with two applications to real data.
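
For the grouped (multinomial) case, the distance-based dimension reduction can be illustrated as follows: each observation's raw residual vector is collapsed to a single Mahalanobis distance using the multinomial covariance restricted to the first K - 1 categories (the full K-category covariance is singular). This is a sketch of the distance idea with made-up numbers, not the paper's full residual-analysis procedure.

```python
import numpy as np

def mahalanobis_residuals(counts, probs):
    """One Mahalanobis distance per grouped observation.

    counts : (n, K) array of observed category counts
    probs  : (n, K) array of fitted category probabilities from the
             generalized logit model
    The last category is dropped because the covariance of all K
    multinomial counts is singular.
    """
    counts = np.asarray(counts, dtype=float)
    probs = np.asarray(probs, dtype=float)
    m = counts.sum(axis=1)                                # trials per observation
    resid = counts[:, :-1] - m[:, None] * probs[:, :-1]   # raw residual vectors
    d = np.empty(len(counts))
    for i in range(len(counts)):
        p = probs[i, :-1]
        cov = m[i] * (np.diag(p) - np.outer(p, p))        # multinomial covariance
        d[i] = np.sqrt(resid[i] @ np.linalg.solve(cov, resid[i]))
    return d

# Hypothetical grouped data: two observations, three response categories.
counts = np.array([[12, 5, 3], [7, 9, 4]])
probs = np.array([[0.55, 0.30, 0.15], [0.35, 0.45, 0.20]])
print(mahalanobis_residuals(counts, probs))
```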

Robots are notoriously difficult to design because of complex interdependencies between their physical structure, sensory and motor layouts, and behavior. Despite this, almost every detail of every robot built to date has been manually determined by a human designer after several months or years of iterative ideation, prototyping, and testing. Inspired by evolutionary design in nature, the automated design of robots using evolutionary algorithms has been attempted for two decades, but it too remains inefficient: days of supercomputing are required to design robots in simulation that, when manufactured, exhibit desired behavior. Here we show for the first time de-novo optimization of a robot's structure to exhibit a desired behavior, within seconds on a single consumer-grade computer, and the manufactured robot's retention of that behavior. Unlike other gradient-based robot design methods, this algorithm does not presuppose any particular anatomical form; starting instead from a randomly-generated apodous body plan, it consistently discovers legged locomotion, the most efficient known form of terrestrial movement. If combined with automated fabrication and scaled up to more challenging tasks, this advance promises near instantaneous design, manufacture, and deployment of unique and useful machines for medical, environmental, vehicular, and space-based tasks.

Knowledge plays a critical role in artificial intelligence. Recently, the extensive success of pre-trained language models (PLMs) has raised significant attention to how knowledge can be acquired, maintained, updated and used by language models. Despite the enormous amount of related studies, we still lack a unified view of how knowledge circulates within language models throughout the learning, tuning, and application processes, which may prevent us from further understanding the connections between current progress or realizing existing limitations. In this survey, we revisit PLMs as knowledge-based systems by dividing the life cycle of knowledge in PLMs into five critical periods and investigating how knowledge circulates when it is built, maintained and used. To this end, we systematically review existing studies of each period of the knowledge life cycle, summarize the main challenges and current limitations, and discuss future directions.

Deep convolutional neural networks (CNNs) have recently achieved great success in many visual recognition tasks. However, existing deep neural network models are computationally expensive and memory intensive, hindering their deployment on devices with low memory resources or in applications with strict latency requirements. Therefore, a natural thought is to perform model compression and acceleration in deep networks without significantly decreasing the model performance. During the past few years, tremendous progress has been made in this area. In this paper, we survey the recently developed techniques for compacting and accelerating CNN models. These techniques are roughly categorized into four schemes: parameter pruning and sharing, low-rank factorization, transferred/compact convolutional filters, and knowledge distillation. Methods of parameter pruning and sharing are described first, followed by the other techniques. For each scheme, we provide insightful analysis regarding the performance, related applications, advantages, and drawbacks. We then go through a few recent additional successful methods, for example, dynamic capacity networks and stochastic depth networks. After that, we survey the evaluation metrics, the main datasets used for evaluating model performance, and recent benchmarking efforts. Finally, we conclude this paper, discuss the remaining challenges, and suggest possible directions on this topic.
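
As a minimal concrete example of the first scheme, parameter pruning, the snippet below applies global unstructured magnitude pruning to a small PyTorch model; the surveyed methods include far more elaborate structured, iterative, and sharing-based variants, and pruning is typically followed by fine-tuning to recover accuracy.

```python
import torch
import torch.nn as nn

def global_magnitude_prune(model, sparsity=0.5):
    """Zero out the smallest-magnitude weights across all layers so that
    roughly `sparsity` of the weights are removed (unstructured pruning)."""
    weights = [p for name, p in model.named_parameters() if "weight" in name]
    all_vals = torch.cat([w.detach().abs().flatten() for w in weights])
    threshold = torch.quantile(all_vals, sparsity)
    with torch.no_grad():
        for w in weights:
            w.mul_((w.abs() > threshold).float())   # apply binary mask in place

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Conv2d(8, 8, 3))
global_magnitude_prune(model, sparsity=0.5)
zeros = sum((p == 0).sum().item() for p in model.parameters())
total = sum(p.numel() for p in model.parameters())
print(f"fraction of zero parameters: {zeros / total:.2f}")
```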
