Activity

  • Paul Mercer posted an update 1 week, 6 days ago

Gastric motility disorders are associated with bioelectrical abnormalities in the stomach. Recently, gastric ablation has emerged as a potential therapy for correcting gastric dysrhythmias; however, its tissue-level effects have not yet been evaluated. In this study, radiofrequency ablation was performed in vivo in pigs (n=7) in temperature-control mode (55-80°C, 5-10 s per point). Tissue was excised from the ablation site and a routine H&E staining protocol was performed. To assess tissue damage, we developed an automated technique using a fully convolutional neural network to segment healthy tissue and ablated lesion sites within the muscle and mucosa layers of the stomach. The tissue segmentation achieved an overall Dice score of 96.18 ± 1.0% and a Jaccard score of 92.77 ± 1.9% after 5-fold cross-validation. The ablation lesion was detected with an overall Dice score of 94.16 ± 0.2%. This method can be used in combination with high-resolution electrical mapping to define the optimal ablation dose for gastric ablation.

Clinical Relevance: This work presents an automated method to quantify ablation lesions in the stomach, which can be applied to determine optimal energy doses for gastric ablation and so enable clinical translation of this promising emerging therapy.

The progression of cells through the cell cycle is a tightly regulated process and is key to maintaining normal tissue architecture and function. Disruption of these orchestrated phases results in alterations that can lead to many diseases, including cancer. Regrettably, reliable automatic tools to evaluate the cell cycle stage of individual cells are still lacking, particularly at interphase. The development of new tools for proper classification is therefore urgently needed and will be of critical importance for cancer prognosis and predictive therapeutic purposes.
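Both segmentation studies in this feed report overlap metrics. As a point of reference, a minimal NumPy sketch of the Dice and Jaccard scores for binary masks (the function names here are illustrative, not taken from either paper):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks (illustrative helper)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def jaccard_score(pred, target, eps=1e-7):
    """Jaccard index (intersection over union) between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

# Example: one of two predicted foreground pixels is correct.
pred = np.array([[1, 1], [0, 0]])
truth = np.array([[1, 0], [0, 0]])
print(dice_score(pred, truth))     # ~0.667
print(jaccard_score(pred, truth))  # ~0.5
```

Note that the two are monotonically related (Dice = 2J / (1 + J)), and that the Dice coefficient on sets coincides with the F1-score used later in this post, which is why papers often report either one.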
Thus, in this work, we investigated three deep learning approaches for interphase cell cycle staging in microscopy images: 1) joint detection and cell cycle classification of nuclei patches; 2) detection of cell nuclei patches followed by classification of the cycle stage; and 3) detection and segmentation of cell nuclei followed by classification of the cell cycle stage. Our methods were applied to a dataset of microscopy images of nuclei stained with DAPI. The best results (0.908 F1-score) were obtained with approach 3, in which the segmentation step allows for an intensity normalization that takes into account the intensities of all nuclei in a given image. These results show that, for correct cell cycle staging, it is important to consider the relative intensities of the nuclei. Herein, we have developed a new deep learning method for interphase cell cycle staging at the single-cell level, with potential implications for cancer prognosis and therapeutic strategies.

Segmentation of cell nuclei in fluorescence microscopy images provides valuable information about the shape and size of nuclei, their chromatin texture, and DNA content. It has many applications, such as cell tracking, counting, and classification. In this work, we extended our recently proposed deep learning approach for nuclei segmentation by adding handcrafted features to its input. Our handcrafted features introduce the additional domain knowledge that nuclei are expected to have an approximately round shape. For round shapes, the gradient vectors at border points converge toward the center. To convey this information, we compute a map of gradient convergence to be used by the CNN as a new channel, in addition to the fluorescence microscopy image. We applied our method to a dataset of microscopy images of cells stained with DAPI.
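A gradient convergence map as just described can be computed in several ways; one simple formulation, assumed here (the paper's exact definition may differ), is the negative divergence of the unit gradient field, which is positive wherever local gradient vectors point toward a common center, as they do inside a bright round nucleus:

```python
import numpy as np

def gradient_convergence_map(image, eps=1e-8):
    """One possible gradient-convergence measure: negative divergence
    of the unit gradient field (positive where gradients converge)."""
    gy, gx = np.gradient(image.astype(float))   # row, column derivatives
    mag = np.sqrt(gx**2 + gy**2) + eps
    ux, uy = gx / mag, gy / mag                 # direction only, magnitude discarded
    # Divergence of the unit field; negated so convergence is positive.
    return -(np.gradient(ux, axis=1) + np.gradient(uy, axis=0))
```

The map would then be stacked with the fluorescence image to form a two-channel network input, e.g. `np.stack([img, gradient_convergence_map(img)])`.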
Our results show that with this approach we are able to decrease the number of misdetections and, therefore, increase the F1-score compared to our previously proposed approach. Moreover, the results show that faster convergence is obtained when handcrafted features are combined with deep learning.

Major depressive disorder (MDD) is a complex mental disorder characterized by persistent sadness and depressed mood. Recent studies have reported differences between healthy controls (HC) and MDD patients by examining brain networks, including the default mode and cognitive control networks. More recently, there has been interest in studying the brain using advanced machine learning-based classification approaches. However, interpreting the models used in classification between MDD and HC has not yet been explored. In the current study, we classified MDD from HC by estimating whole-brain connectivity and using several classification methods, including support vector machines, random forests, XGBoost, and convolutional neural networks. In addition, we leveraged the SHapley Additive exPlanations (SHAP) approach as a feature learning method to model the difference between the two groups. We found consistent results across all classification methods with respect to classification accuracy and feature learning. We also highlighted the role of other brain networks, particularly the visual and sensorimotor networks, in the classification between MDD and HC subjects.

Alzheimer's disease is characterized by complex changes in brain tissue, including the accumulation of tau-containing neurofibrillary tangles (NFTs) and dystrophic neurites (DNs) within neurons. The distribution and density of tau pathology throughout the brain is evaluated at autopsy as one component of Alzheimer's disease diagnosis. Deep neural networks (DNNs) have been shown to be effective in the quantification of tau pathology when trained on fully annotated images.
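Returning briefly to the MDD study above: whole-brain connectivity features for such classifiers are commonly built by correlating regional time series and vectorizing the upper triangle of the resulting matrix. A hedged sketch of that preprocessing step (the study's exact pipeline is not specified; names here are illustrative):

```python
import numpy as np

def connectivity_features(timeseries):
    """Vectorize whole-brain connectivity: correlate regional time series
    (regions x timepoints) and keep the upper triangle of the correlation
    matrix, excluding the self-connections on the diagonal."""
    corr = np.corrcoef(timeseries)        # regions x regions
    iu = np.triu_indices_from(corr, k=1)  # strictly upper triangle
    return corr[iu]

# 5 toy "regions" with 100 timepoints -> 5*4/2 = 10 connectivity features.
rng = np.random.default_rng(0)
features = connectivity_features(rng.standard_normal((5, 100)))
print(features.shape)  # (10,)
```

The resulting vector is what would be fed to the SVM, random forest, or XGBoost classifiers, and per-feature SHAP values can then be mapped back to specific region pairs.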
In this paper, we examine the effectiveness of three DNNs for the segmentation of tau pathology when trained on noisily labeled data. We train FCN, SegNet, and U-Net on the same set of training images. Our results show that, using noisily labeled data, these networks are capable of segmenting tau pathology as well as nuclei in as few as 40 training epochs, with varying degrees of success. SegNet, FCN, and U-Net achieve Dice losses of 0.234, 0.297, and 0.272, respectively, on the task of segmenting regions of tau. We also apply these networks to the task of segmenting whole-slide images of tissue sections and discuss their practical applicability for processing gigapixel-sized images.
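Applying patch-trained networks to gigapixel whole-slide images typically means tiling the slide, running the network per tile, and stitching the predictions back together. A minimal non-overlapping sketch of that bookkeeping (real pipelines usually add tile overlap, blending, and a slide-reading library such as OpenSlide; function names here are illustrative):

```python
import numpy as np

def tile_image(img, tile=512, stride=512):
    """Yield (row, col, patch) tiles covering a 2-D image; border tiles
    are zero-padded so every patch has the same shape for the network."""
    H, W = img.shape
    for r in range(0, H, stride):
        for c in range(0, W, stride):
            patch = img[r:r + tile, c:c + tile]
            ph, pw = patch.shape
            if ph < tile or pw < tile:
                patch = np.pad(patch, ((0, tile - ph), (0, tile - pw)))
            yield r, c, patch

def stitch(pred_tiles, shape, tile=512):
    """Reassemble per-tile predictions into a full-size mask,
    cropping away the zero-padding added at the borders."""
    out = np.zeros(shape, dtype=pred_tiles[0][2].dtype)
    for r, c, p in pred_tiles:
        h, w = min(tile, shape[0] - r), min(tile, shape[1] - c)
        out[r:r + h, c:c + w] = p[:h, :w]
    return out
```

In practice each `patch` would be passed through the trained FCN, SegNet, or U-Net before stitching; with identity "predictions", stitching the tiles reproduces the original image exactly.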