
We study the paradoxical aspects of closed time-like curves (CTCs) and their impact on the theory of computation. After introducing the $\text{TM}_\text{CTC}$, a classical Turing machine that exploits CTCs for backward time travel, Aaronson et al. proved that, within this computational model, $\text{P} = \text{PSPACE}$ and the $\Delta_2$ sets, such as the halting problem, are computable. Our critique concerns the physical consistency of this model, which leads us to propose the strong axiom, stating that every particle traversing a CTC is destroyed before returning to its starting time, and the weak axiom, stating the same notion specifically for Turing machines. We claim that in a universe containing CTCs the two axioms must hold; otherwise, infinitely many copies of any particle traversing a CTC would accumulate in the universe. An immediate consequence of the weak axiom is that a Turing machine cannot carry information through a full round trip on a CTC, so the $\text{TM}_\text{CTC}$ programs proposed for the aforementioned corollaries fail to function. We suggest a solution to this problem, the data-transferring hypothesis, which employs another $\text{TM}_\text{CTC}$ as a means of storing data. Its prerequisite is that the concept of Turing machines exists throughout time, which makes it appear infeasible in our universe. We then discuss the physical conditions that could hold for a universe containing CTCs and conclude that if returning via a CTC to an approximately equivalent universe were conceivable, the above corollaries would remain valid.
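The computational power discussed above comes from requiring causal consistency: whatever the machine sends around the CTC must come back unchanged, so only self-consistent histories survive. A minimal sketch of this guess-and-verify view of CTC computation, with a purely illustrative step function and register width (not the paper's construction):

```python
from itertools import product

def ctc_fixed_points(step, width):
    """Enumerate all contents of a `width`-bit CTC register that are
    causally consistent, i.e. fixed points of one trip around the loop.
    `step` maps the register's contents at the loop's start to the
    contents the computation sends back in time."""
    return [bits for bits in product((0, 1), repeat=width)
            if step(bits) == bits]

# Toy example: a machine "solves" a search problem by requiring the CTC
# register to already contain a verifiable answer; the step function
# keeps a correct guess and perturbs a wrong one.
def step(bits):
    target = (1, 0, 1)                # hypothetical solution to verify
    return bits if bits == target else tuple(1 - b for b in bits)

print(ctc_fixed_points(step, 3))      # [(1, 0, 1)] -- the only consistent history
```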

Related content


We consider a high-dimensional dynamic pricing problem under non-stationarity, where a firm sells products to $T$ sequentially arriving consumers who behave according to an unknown demand model with potential changes at unknown times. The demand model is assumed to be a high-dimensional generalized linear model (GLM), allowing for a feature vector in $\mathbb R^d$ that encodes product and consumer information. To achieve optimal revenue (i.e., least regret), the firm needs to learn and exploit the unknown GLMs while monitoring for potential change-points. To tackle this problem, we first design a novel penalized likelihood-based online change-point detection algorithm for high-dimensional GLMs, which is the first algorithm in the change-point literature to achieve the optimal minimax localization error rate for high-dimensional GLMs. A change-point detection assisted dynamic pricing (CPDP) policy is further proposed and achieves a near-optimal regret of order $O(s\sqrt{\Upsilon_T T}\log(Td))$, where $s$ is the sparsity level and $\Upsilon_T$ is the number of change-points. This regret is accompanied by a minimax lower bound, demonstrating the optimality of CPDP (up to logarithmic factors). In particular, the optimality with respect to $\Upsilon_T$ appears for the first time in the dynamic pricing literature and is achieved via a novel accelerated exploration mechanism. Extensive simulation experiments and a real-data application on online lending illustrate the efficiency of the proposed policy and the importance and practical value of handling non-stationarity in dynamic pricing.
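To make the detection step concrete, the sketch below scans a window for the split that most improves a penalized likelihood fitted separately on each side. It is a simplified offline analogue of the paper's online detector, with sklearn's l1-penalized logistic regression standing in for a general sparse GLM and a caller-calibrated `threshold`:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def penalized_nll(X, y, C=0.1):
    """Fit an l1-penalized logistic GLM and return its in-sample negative
    log-likelihood (a stand-in for the penalized-likelihood score)."""
    m = LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(X, y)
    p = np.clip(m.predict_proba(X)[:, 1], 1e-12, 1 - 1e-12)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def detect_change(X, y, threshold, min_seg=30):
    """Scan a window for a split whose two separately fitted sparse GLMs
    beat a single fit by more than `threshold`; return the split or None.
    Assumes both outcomes occur in every candidate segment."""
    n = len(y)
    full = penalized_nll(X, y)
    best_t, best_gain = None, 0.0
    for t in range(min_seg, n - min_seg):
        gain = full - penalized_nll(X[:t], y[:t]) - penalized_nll(X[t:], y[t:])
        if gain > best_gain:
            best_t, best_gain = t, gain
    return best_t if best_gain > threshold else None
```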

This paper studies a house allocation problem in a networked housing market, where agents can invite others to join the system in order to enrich their own options. Top Trading Cycle (TTC) is a well-known matching mechanism that achieves a set of desirable properties in a market without invitations. Under a tree-structured networked market, however, existing agents must propagate the barter market strategically, since their invitees may compete with them for the same house. Our impossibility result shows that TTC cannot work properly in a networked housing market. We therefore characterize the possible competitions between inviters and invitees that cause agents to fail to refer others truthfully (violating strategy-proofness). We then present a novel mechanism based on TTC that avoids this competition and ensures that all agents report their preferences and propagate the barter market truthfully. Unlike in existing mechanisms, the agents' preferences are less restricted under our mechanism. Furthermore, we show by simulations that our mechanism outperforms existing matching mechanisms in terms of the number of swaps and agents' satisfaction.
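For reference, a compact sketch of the classic non-networked TTC that the paper builds on: each agent points to the owner of her favourite remaining house, a cycle (which must exist) is removed, and its members trade. The networked, invitation-aware variant is the paper's contribution and is not reproduced here.

```python
def top_trading_cycle(owners, prefs):
    """Classic TTC. `owners` maps each agent to her house; `prefs` maps
    each agent to her preference list over houses (best first), assumed
    to cover every house she may still face."""
    assignment = {}
    remaining = set(owners)
    while remaining:
        # each agent points to the current owner of her top remaining house
        house_owner = {owners[a]: a for a in remaining}
        point = {a: house_owner[next(h for h in prefs[a] if h in house_owner)]
                 for a in remaining}
        # walk the pointers until a node repeats: that closes a cycle
        a = next(iter(remaining))
        seen = []
        while a not in seen:
            seen.append(a)
            a = point[a]
        cycle = seen[seen.index(a):]
        for b in cycle:                       # everyone in the cycle receives
            assignment[b] = owners[point[b]]  # the house she points at
            remaining.discard(b)
    return assignment

# e.g. agents 1 and 2 each prefer the other's house; agent 3 keeps her own
print(top_trading_cycle({1: "h1", 2: "h2", 3: "h3"},
                        {1: ["h2", "h1"], 2: ["h1", "h2"], 3: ["h3"]}))
```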

Physiological responses to pain have received increasing attention among researchers developing automated pain-recognition sensing systems. Though less explored, Blood Volume Pulse (BVP) is one of the candidate physiological measures that could help objective pain assessment. In this study, we applied machine learning techniques to BVP signals to devise a non-invasive modality for pain sensing. Thirty-two healthy subjects participated in this study. First, we investigated a novel set of time-domain, frequency-domain, and nonlinear-dynamics features that could potentially be sensitive to pain. These include 24 features from BVP signals and 20 additional features from Inter-beat Intervals (IBIs) derived from the same BVP signals. Utilizing these features, we built machine learning models for detecting the presence of pain and its intensity. We explored different machine learning models, including Logistic Regression, Random Forest, Support Vector Machines, Adaptive Boosting (AdaBoost), and Extreme Gradient Boosting (XGBoost). Among them, XGBoost offered the best model performance for both the pain classification and pain intensity estimation tasks. The ROC-AUCs of the XGBoost model for detecting low, medium, and high pain, with no pain as the baseline, were 80.06%, 85.81%, and 90.05%, respectively. Moreover, the XGBoost classifier distinguished medium pain from high pain with a ROC-AUC of 91%. For the multi-class classification among the three pain levels, XGBoost offered the best performance with an average F1-score of 80.03%. Our results suggest that BVP signals combined with machine learning algorithms are a promising physiological measure for automated pain assessment. This work can contribute to accurate pain assessment, effective pain management, reduced drug-seeking behavior among patients, and addressing the national opioid crisis.
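As an illustration of such a pipeline, the sketch below computes a few standard IBI features of the kind the study draws on and fits an XGBoost classifier on a placeholder feature matrix; the data, hyperparameters, and feature set are illustrative, not the paper's exact 44 features:

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import cross_val_score

def ibi_features(ibis_ms):
    """A few standard inter-beat-interval features (sketch, not the
    paper's full set); a real pipeline would stack these per window."""
    diffs = np.diff(ibis_ms)
    return {
        "mean_ibi": float(np.mean(ibis_ms)),            # time domain
        "sdnn": float(np.std(ibis_ms, ddof=1)),         # overall variability
        "rmssd": float(np.sqrt(np.mean(diffs ** 2))),   # beat-to-beat variability
    }

# placeholder feature matrix: one row per BVP window, 44 columns
rng = np.random.default_rng(0)
X = rng.normal(size=(320, 44))
y = rng.integers(0, 2, size=320)   # 0 = no pain, 1 = pain (binary task)

clf = xgb.XGBClassifier(n_estimators=200, max_depth=4,
                        learning_rate=0.05, eval_metric="auc")
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```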

This paper proposes a criterion for detecting change structures in tensor data. Because a structural mode of a tensor should not be treated on an equal footing with the other modes when the difference between two adjacent tensors is summarized in a single distance, we define a mode-based, signal-screening Frobenius distance on moving sums of tensor slices that handles both dense and sparse model structures of the tensors. As a general distance, it can also handle the case without a structural mode. Based on this distance, we construct signal statistics using ratios with adaptive-to-change ridge functions. The number of changes and their locations can then be consistently estimated in certain senses, and confidence intervals for the locations of the change points are constructed. The results hold when the size of the tensor and the number of change points diverge at certain rates. Numerical studies examine the finite-sample performance of the proposed method, and two real-data examples are analyzed for illustration.
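A stripped-down version of the scan statistic may help fix ideas: the mode-agnostic sketch below compares moving averages of the tensor sequence on either side of each time point in Frobenius norm, omitting the paper's mode-based signal screening and ridge-ratio construction:

```python
import numpy as np

def mosum_frobenius(tensors, h):
    """Moving-sum scan: for each time t, compare the averages of the h
    tensors before and after t in Frobenius norm. `tensors` has shape
    (T, ...); returns a length-T array of distances (zero near the
    boundary). Peaks above a threshold mark candidate change points."""
    T = tensors.shape[0]
    stats = np.zeros(T)
    for t in range(h, T - h):
        left = tensors[t - h:t].mean(axis=0)
        right = tensors[t:t + h].mean(axis=0)
        stats[t] = np.linalg.norm(left - right)   # Frobenius norm
    return stats
```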

The continuously growing number of objects orbiting the Earth is expected to be accompanied by an increasing frequency of objects re-entering the Earth's atmosphere. Many of these re-entries will be uncontrolled, making their prediction challenging and subject to several uncertainties. Traditionally, re-entry predictions are based on propagating the object's dynamics using state-of-the-art models of the forces acting on the object. However, modelling errors, particularly those related to the prediction of atmospheric drag, may result in poor prediction accuracy. In this context, we explore a paradigm shift from a physics-based to a data-driven approach. To this aim, we present a deep learning model for the re-entry prediction of uncontrolled objects in Low Earth Orbit (LEO). The model is based on a modified version of the Sequence-to-Sequence architecture and is trained on the average altitude profile derived from Two-Line Element (TLE) data of over 400 bodies. The novelty of the work lies in introducing, alongside the average altitude, three new input features into the deep learning model: a drag-like coefficient (B*), the average solar index, and the area-to-mass ratio of the object. The developed model is tested on a set of objects studied in the Inter-Agency Space Debris Coordination Committee (IADC) campaigns. The results show that the best performance is obtained on bodies whose drag-like coefficient and eccentricity distribution match those of the training set.
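The sketch below shows the overall shape of such an encoder-decoder in PyTorch, with each input step carrying the average altitude plus the three extra features (B*, average solar index, area-to-mass ratio); the layer choices and hyperparameters are illustrative, not the paper's exact modified Sequence-to-Sequence model:

```python
import torch
import torch.nn as nn

class ReentrySeq2Seq(nn.Module):
    """Sketch of an encoder-decoder for altitude-decay forecasting."""
    def __init__(self, n_features=4, hidden=64, horizon=30):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        self.decoder = nn.GRU(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, past):               # past: (batch, t_in, 4)
        _, state = self.encoder(past)      # summarize the observed history
        y = past[:, -1:, :1]               # seed with last observed altitude
        outputs = []
        for _ in range(self.horizon):      # autoregressive decoding
            out, state = self.decoder(y, state)
            y = self.head(out)
            outputs.append(y)
        return torch.cat(outputs, dim=1)   # (batch, horizon, 1)

model = ReentrySeq2Seq()
pred = model(torch.randn(8, 60, 4))        # 60 past steps -> 30 forecast steps
```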

In many medical subfields, there is a call for greater interpretability in the machine learning systems used for clinical work. In this paper, we design an interpretable deep learning model to predict the presence of 6 types of brainwave patterns (Seizure, LPD, GPD, LRDA, GRDA, other) commonly encountered in ICU EEG monitoring. Each prediction is accompanied by a high-quality explanation delivered with the assistance of a specialized user interface. This novel model architecture learns a set of prototypical examples (``prototypes'') and makes decisions by comparing a new EEG segment to these prototypes. These prototypes are either single-class (affiliated with only one class) or dual-class (affiliated with two classes). We present three main ways of interpreting the model: 1) Using global-structure preserving methods, we map the 1275-dimensional cEEG latent features to a 2D space to visualize the ictal-interictal-injury continuum and gain insight into its high-dimensional structure. 2) Predictions are made using case-based reasoning, inherently providing explanations of the form ``this EEG looks like that EEG.'' 3) We map the model decisions to a 2D space, allowing a user to see how the current sample prediction compares to the distribution of predictions made by the model. Our model performs better than the corresponding uninterpretable (black box) model with $p<0.01$ for discriminatory performance metrics AUROC (area under the receiver operating characteristic curve) and AUPRC (area under the precision-recall curve), as well as for task-specific interpretability metrics. We provide videos of the user interface exploring the 2D embedded space, providing the first global overview of the structure of ictal-interictal-injury continuum brainwave patterns. Our interpretable model and specialized user interface can act as a reference for practitioners who work with cEEG patterns.
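The case-based reasoning step can be sketched as a prototype layer: latent features are compared to learned prototypes, and the similarities are pooled into class logits through a fixed prototype-to-class membership matrix (one nonzero entry per row for single-class prototypes, two for dual-class ones). Dimensions below are illustrative, and the similarity function follows the common ProtoPNet-style form, which may differ from the paper's:

```python
import torch
import torch.nn as nn

class PrototypeHead(nn.Module):
    """Sketch of case-based classification over learned prototypes."""
    def __init__(self, latent_dim=1275, n_protos=30, n_classes=6):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(n_protos, latent_dim))
        # fixed 0/1 membership matrix, filled in by the model designer:
        # row i marks the class(es) prototype i is affiliated with
        self.register_buffer("proto_class", torch.zeros(n_protos, n_classes))

    def forward(self, z):                       # z: (batch, latent_dim)
        d = torch.cdist(z, self.prototypes)     # distance to each prototype
        sim = torch.log((d + 1) / (d + 1e-4))   # small distance -> high similarity
        return sim @ self.proto_class           # (batch, n_classes) logits
```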

With the increasing complexity of software permeating critical domains such as autonomous driving, new challenges are emerging in how the engineering of these systems needs to be rethought. Autonomous driving is expected to gradually take over all critical driving functions, which adds to the complexity of certifying autonomous driving systems. In response, certification authorities have already started introducing strategies for the certification of autonomous vehicles and their software. But even with these new approaches, the certification procedures do not fully keep up with the dynamism and unpredictability of future autonomous systems, and thus may not guarantee compliance with all requirements imposed on these systems. In this paper, we identify a number of issues with the proposed certification strategies that may impact the systems substantially. For instance, we emphasize the lack of adequate reflection on software changes in constantly evolving systems and the limited support for the inter-system cooperation needed to manage coordinated maneuvers. Other shortcomings concern the narrow focus of the awarded certification, which neglects aspects such as the ethical behavior of autonomous software systems. The contribution of this paper is threefold. First, we discuss the motivation for modifying the current certification processes for autonomous driving systems. Second, we analyze the international standards currently used in certification processes against requirements derived from those placed on dynamic software ecosystems and on autonomous systems themselves. Third, we outline a concept for incorporating the missing parts into the certification procedure.

Most comparisons of treatments or doses against a control are performed with the original Dunnett single-step procedure \cite{Dunnett1955}, which provides both adjusted p-values and simultaneous confidence intervals for the differences to the control. Motivated by power arguments, unbalanced designs with a higher sample size in the control are recommended. When a higher variance occurs in the treatment of interest or in the control, the related per-pair power is reduced, as expected. However, if the variance is increased in a non-affected treatment group, e.g. in the highest dose (which is highly significant), the per-pair power is also reduced in the remaining treatment groups of interest. That is, decisions about the significance of certain comparisons may be seriously distorted. To avoid this undesirable property, three modifications for heterogeneous variances are compared with the original Dunnett procedure in a simulation study. For small and medium sample sizes, a Welch-type modification can be recommended. For medium to high sample sizes, using a sandwich estimator instead of the common mean-square estimator is useful. Related CRAN packages are provided. In summary, we recommend not using the original Dunnett procedure routinely, but replacing it with a robust modification. Particular care is needed in small-sample studies.
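For orientation, recent versions of SciPy ship `scipy.stats.dunnett` for the classical procedure; the sketch below shows the Welch-type idea of not pooling variances across groups, with a plain Bonferroni adjustment standing in for Dunnett's multivariate-t adjustment (so it is conservative, and not the recommended CRAN implementation):

```python
import numpy as np
from scipy import stats

def welch_many_to_one(control, treatments):
    """Treatment-vs-control comparisons without pooled variances.
    Sketch: Welch t-tests with a Bonferroni correction."""
    results = {}
    for name, x in treatments.items():
        t, p = stats.ttest_ind(x, control, equal_var=False)  # Welch t-test
        results[name] = {"t": float(t),
                         "p_adj": min(1.0, float(p) * len(treatments))}
    return results

rng = np.random.default_rng(1)
ctrl = rng.normal(0, 1, 20)
trts = {"low": rng.normal(0.2, 1, 10),
        "high": rng.normal(1.0, 3, 10)}   # heterogeneous variance
print(welch_many_to_one(ctrl, trts))
```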

The development of privacy-enhancing technologies has made immense progress in reducing trade-offs between privacy and performance in data exchange and analysis. Similar tools for structured transparency could be useful for AI governance by offering capabilities such as external scrutiny, auditing, and source verification. It is useful to view these different AI governance objectives as a system of information flows in order to avoid partial solutions and significant gaps in governance, since the software stacks needed for the AI governance use cases mentioned in this text may overlap substantially. When viewing the system as a whole, the importance of interoperability between these different AI governance solutions becomes clear. It is therefore urgently important to treat these problems in AI governance as a system before the standards, auditing procedures, software, and norms settle into place.

Out-of-distribution (OOD) detection is critical to ensuring the reliability and safety of machine learning systems. For instance, in autonomous driving, we would like the driving system to issue an alert and hand control over to humans when it detects unusual scenes or objects that it has never seen before and cannot make a safe decision about. This problem first emerged in 2017 and has since received increasing attention from the research community, leading to a plethora of methods, ranging from classification-based to density-based to distance-based approaches. Meanwhile, several other problems are closely related to OOD detection in terms of motivation and methodology. These include anomaly detection (AD), novelty detection (ND), open set recognition (OSR), and outlier detection (OD). Despite having different definitions and problem settings, these problems often confuse readers and practitioners, and as a result some existing studies misuse the terms. In this survey, we first present a generic framework called generalized OOD detection, which encompasses the five aforementioned problems, i.e., AD, ND, OSR, OOD detection, and OD. Under our framework, these five problems can be seen as special cases or sub-tasks and are easier to distinguish. We then conduct a thorough review of each of the five areas by summarizing their recent technical developments. We conclude this survey with open challenges and potential research directions.
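As a concrete anchor for the classification-based family, the maximum softmax probability (MSP) baseline scores an input by its most confident class probability and flags low scores as OOD; a minimal version, with the threshold chosen on validation data:

```python
import numpy as np

def msp_ood_score(logits):
    """Maximum softmax probability, the classic classification-based OOD
    score: in-distribution inputs tend to receive confident (high)
    scores, OOD inputs lower ones."""
    z = logits - logits.max(axis=1, keepdims=True)   # stabilize softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return p.max(axis=1)

# flag as OOD when the score falls below a validation-chosen threshold
scores = msp_ood_score(np.array([[4.0, 0.5, 0.2], [0.9, 1.0, 1.1]]))
print(scores < 0.5)    # the second input looks far less confident
```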
