Tele-medicine, or tele-health, refers to the practice of medicine at a spatial and/or temporal distance through the exchange of medical information via electronic communications. Tele-health and artificial intelligence are gaining ground as innovations shaping the future of medicine. The practice of ophthalmology lends itself to tele-medicine through its heavy reliance on imaging. With advancing technology and high-speed connectivity, tele-ophthalmology and artificial intelligence are poised to transform ophthalmology into large interconnected systems and to improve the efficiency, quality, outcomes, and accessibility of healthcare, while decreasing costs.
Access to care is problematic in high-risk and underserved communities in New York City for a multitude of reasons. In our study published in 2017 (Screening for glaucoma in populations at high risk: The eye screening New York project), we found that 57% of screened individuals had never seen an eye doctor in their lifetime, regardless of whether they had insurance. Subsequently, we initiated a study to screen for the four leading causes of blindness using a mobile tele-ophthalmology unit equipped with state-of-the-art devices, staffed with technicians, and linked in real time to a reading center. Our pilot study had a high rate of disease detection when used in high-risk communities. This initial experience establishes the feasibility of mobile tele-ophthalmology as a method of facilitating access to care. Furthermore, it highlights the importance of an active blindness prevention program in the context of population management.
Artificial intelligence algorithms using deep learning are showing great promise in medicine. An algorithm based on deep machine learning had high sensitivity and specificity for detecting referable diabetic retinopathy. The FDA recently approved IDx-DR, an AI-based diagnostic system for the autonomous detection of diabetic retinopathy. Recently, we validated the use of the Pegasus Deep Learning System (PDLS) in identifying glaucomatous optic damage on disc photos against a reference standard comprised of expert graders. The PDLS achieved an AUC-ROC of 83% (P<0.05), with sensitivity of 96.1% and specificity of 58.3%. Its high sensitivity suggests potential utility for glaucoma screening in settings where specialists are not available, including tele-medicine programs.
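To make the sensitivity and specificity figures above concrete, the following minimal sketch shows how these screening metrics are computed from binary classifier output against an expert-grader reference standard. The labels and predictions below are synthetic examples, not data from the PDLS study.

```python
# Illustrative sketch: computing screening metrics from binary predictions.
# All labels/predictions here are synthetic, for illustration only.

def confusion_counts(y_true, y_pred):
    """Return (tp, fp, tn, fn) for binary labels (1 = disease present)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, fp, tn, fn

def sensitivity(tp, fn):
    # Fraction of truly diseased eyes the classifier catches.
    return tp / (tp + fn)

def specificity(tn, fp):
    # Fraction of healthy eyes the classifier correctly passes.
    return tn / (tn + fp)

# Synthetic example: 10 eyes graded by experts (truth) vs. a classifier.
y_true = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 0, 0, 0, 1, 1, 0]

tp, fp, tn, fn = confusion_counts(y_true, y_pred)
print(f"sensitivity = {sensitivity(tp, fn):.2f}")  # 4/5 = 0.80
print(f"specificity = {specificity(tn, fp):.2f}")  # 3/5 = 0.60
```

A screening tool like the one described above trades specificity for sensitivity: missing disease (a false negative) is costlier than an unnecessary referral (a false positive).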
Dr. Lama A. Al-Aswad is an ophthalmologist with a subspecialty in glaucoma and cataract and a strong interest in disease prevention and population health management. She is an Associate Professor of Ophthalmology at Columbia University, the Glaucoma fellowship director, Chair of Quality Assurance of the Eye Institute, and the Director of the Tele-ophthalmology Initiative. Dr. Al-Aswad received her medical degree from Damascus University Medical School and completed a glaucoma research fellowship at the Massachusetts Eye and Ear Infirmary, Harvard Medical School. She completed her residency in ophthalmology at SUNY Downstate and her glaucoma fellowship at UT Memphis. Dr. Al-Aswad is a board-certified ophthalmologist, the past president of the NY Glaucoma Society, and the current president of Women in Ophthalmology. She is an active member of several professional societies. In 2015, Dr. Al-Aswad was also conferred the degree of Master of Public Health by Columbia University, Mailman School of Public Health, for her work in healthcare policy and management.
Dr. Al-Aswad’s dedication to science and the scientific method is evident from her list of scientific publications, book chapters, invited articles, and a large number of scientific presentations at national and international meetings. Dr. Al-Aswad is a firm believer in the prevention of blindness, as evidenced by her large-scale glaucoma screening project in NYC, in which she screened more than 8,500 individuals for glaucoma. She recently launched a tele-ophthalmology screening project for the four leading causes of blindness using a mobile tele-ophthalmology unit equipped with state-of-the-art devices, staffed with technicians, and linked in real time to a reading center. Dr. Al-Aswad is currently working on validating artificial intelligence systems for glaucoma and diabetic retinopathy screening as a tool in blindness prevention.
In this presentation we will review the latest developments in deep learning, look at their recent applications and future impact on retinal image analysis in tasks such as prognosis and diagnosis, as well as other AI applications to healthcare and medical image analysis, and conclude by providing perspectives on clinical deployment.
Dr. Phil Burlina holds joint faculty positions at the Johns Hopkins University School of Medicine Wilmer Eye Institute, the Malone Center for Healthcare Engineering, and the Department of Computer Science. He is a principal scientist with the Johns Hopkins University Intelligent Systems Center at the Applied Physics Laboratory. Dr. Burlina’s research spans several areas of machine intelligence, including machine learning, deep learning, machine vision, object detection and recognition, deep reinforcement learning, medical image diagnostics, and the problems of making AI work in the wild, such as zero/one/adaptive-shot learning and unsupervised learning. His interests are in the development of AI algorithms that are impactful for problem areas in medicine, robotics, and autonomous navigation.
A total of 5% of newborns have potentially treatable pathology present at birth that can result in vision loss and blindness. These diseases are diverse, ranging from retinal detachment to blastomas and many others. We detect abnormalities related to these conditions in images of the newborn retina and eye.
Newborn eye screening can be performed photographically in the first 48-72 hours after birth, while the infant is still in the hospital. In fact, we know that fundus hemorrhages are most easily detected in this period, dropping off rapidly over the next 4 weeks. The same is true for the non-hemorrhagic, non-amblyogenic, vision-threatening pathology with a defined diagnosis found in 1.5-2.5% of newborns.
For universal screening, ~4 million newborns, i.e. ~50 million images, must be screened annually in the US. We are deploying an AI system that can perform such screening with high accuracy. Upon detection of an abnormality, the system alerts the clinic to initiate referral within 1 week of birth, comfortably within the effective treatment window to prevent vision loss.
Deep learning architectures were optimized to develop the Pr3Novo™ Classifier to detect abnormalities in retinal images of healthy term newborns. Deep learning using neural networks has been successfully applied to image analysis and has been used to classify adult retinal images for macular degeneration, glaucoma, and diabetes. With a training set of 5,000 scored images, we identified more than 89% of abnormal images. Retina specialists will curate a larger portion of Pr3vent’s in-house database of ~250,000+ newborn eye examination images to enable a significant increase in accuracy. Further work has added the ability to detect abnormalities in images of healthy term newborn eyes (anterior segment images). Similar approaches requiring a curated training set of 3,000 patient samples with >6,000 curated images have proved effective.
The AI system presented here is scalable and follows a single regulatory pathway to provide a nationwide service. It performs at a level comparable to a specialized pediatric ophthalmologist, nearly eliminates human error, and has significant economic and health impact.
Jochen Kumm is the CEO of Pr3vent, a universal screening company. He is a Visiting Scholar in the Department of Ophthalmology at the Stanford University School of Medicine. Originally a geneticist and mathematician, he is focused on healthcare applications of deep learning and AI. He is a serial founder/advisor of start-ups operating in Europe, North America, and Asia, including NextBio, Pathogenica, Veracyte, Pinpoint Science, and insightAI, and holds multiple patents in diagnostics and AI.
Educated at Harvard University and Stanford University, he worked in the Department of Statistics at the University of Washington and at the UW Genome Center. At Roche Pharmaceuticals he was, as head of Computational Biology, the global technology lead for genomics. Subsequently, he led biomathematics at the Stanford Genome Technology Center for a decade. In collaborations with Harvard, MIT, UCSF, UCSC, the CDC, and others, Dr. Kumm has developed tools used by the CDC, biotech, financial institutions, and the British Home Office. More recently, he worked for IBM Research and IBM Watson developing large-scale healthcare solutions.
Retinopathy of prematurity (ROP) is an important cause of blindness in premature infants throughout the world. Current clinical management consists of screening, decisions for which are typically based on only birth weight and gestational age at birth; diagnosis by ophthalmologist examination, which in some hospitals is triggered by preceding retinal image grading; and treatment with laser photocoagulation or intravitreal injection of anti-vascular endothelial growth factor agents, to prevent progression to retinal detachment. Current ROP screening guidelines, based on studies of high-risk infants and expert opinion, have low specificity for disease requiring treatment. Based upon advances in the understanding of the pathogenesis of ROP, numerous postnatal-weight-gain-based models have been developed to improve the specificity of ROP screening, but these models have been limited by complexity and small development cohorts, which result in model overfitting and resultant decreased sensitivity in validation studies. To overcome these limitations, the postnatal growth and ROP (G-ROP) collaborative study group has recently carried out two large-scale multicenter studies to develop and validate a clinically implementable birth-weight, gestational-age, and weight-gain prediction model, which takes the form of modified ROP screening criteria. In this presentation, we will discuss principles of clinical predictive models and demonstrate these principles using the G-ROP Studies and preceding ROP predictive model studies. The story of these models demonstrates how large amounts of detailed clinical data can help to guide clinical practice, not only improving the efficiency of care but also potentially allowing screening practices to be updated in response to changes in care that directly impact the profile of infants at risk for ROP, if data registries contain sufficiently detailed medical information.
The models highlight the fundamental importance of dataset size in the development of clinical prediction tools. The G-ROP Studies datasets also can be used to study ophthalmologist practice patterns and produce evidence-based examination schedules. Finally, these approaches can be integrated into a hybrid system, in which predictive modelling is combined with telemedicine to accurately, promptly, and more efficiently identify infants who develop severe ROP and require referral to an ophthalmologist for possible treatment.
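The screening-criteria form of such a model can be sketched as a simple rule set: the infant is flagged for examination if any of the birth-weight, gestational-age, or postnatal-weight-gain criteria is met. The thresholds below are hypothetical placeholders for illustration only, NOT the published G-ROP criteria.

```python
# Hypothetical sketch of modified ROP screening criteria combining birth
# weight, gestational age, and postnatal weight gain. All thresholds are
# invented placeholders, not the published G-ROP study values.

BW_THRESHOLD_G = 1100       # hypothetical birth-weight cutoff (grams)
GA_THRESHOLD_WEEKS = 28     # hypothetical gestational-age cutoff (weeks)
GAIN_THRESHOLD_G = 120      # hypothetical weight-gain cutoff per window (grams)

def needs_rop_screening(birth_weight_g, gestational_age_weeks, window_gains_g):
    """Flag the infant if ANY hypothetical criterion is met.

    window_gains_g: list of weight gains (grams) over successive
    postnatal windows; slow gain in any window triggers screening.
    """
    if birth_weight_g < BW_THRESHOLD_G:
        return True
    if gestational_age_weeks < GA_THRESHOLD_WEEKS:
        return True
    return any(gain < GAIN_THRESHOLD_G for gain in window_gains_g)

print(needs_rop_screening(1400, 30, [150, 90, 200]))   # True: slow gain in window 2
print(needs_rop_screening(1400, 30, [150, 160, 200]))  # False: no criterion met
```

The design point this illustrates: adding weight-gain windows lets the criteria pass larger, more mature infants who gain weight normally, raising specificity without lowering sensitivity for treatment-requiring disease.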
Gil Binenbaum MD MSCE is the Richard Shafritz Endowed Chair of Ophthalmology Research and Attending Surgeon in the Division of Ophthalmology at The Children’s Hospital of Philadelphia, and Associate Professor of Ophthalmology at the Perelman School of Medicine of the University of Pennsylvania, in the United States. He completed medical school, residency, fellowship, and graduate studies in clinical epidemiology and biostatistics at these same institutions. One of his primary research interests is retinopathy of prematurity, for which he is Chair of an international, multicenter ROP research group funded by the National Institutes of Health; their goal is to use biomarkers such as postnatal growth to improve prediction of ROP risk and increase the efficiency of ROP screening. Another major research interest is mechanisms and patterns of intraocular injury in pediatric head trauma, the goal of which is to improve the accuracy of the diagnosis of child abuse. Dr. Binenbaum is a research and clinical mentor for students, at all levels of training, from college and medical school to residency and fellowship. His clinical practice specializes in pediatric and adult strabismus surgery, retinopathy of prematurity diagnosis and treatment, and consultative pediatric ophthalmology.
Michael F. Chiang, MD, is Knowles Professor of Ophthalmology & Medical Informatics and Clinical Epidemiology at the Oregon Health & Science University (OHSU) Casey Eye Institute, and is Vice-Chair (Research) in the ophthalmology department. His clinical practice focuses on pediatric ophthalmology and strabismus. He is board-certified in clinical informatics, and is an elected Fellow of the American College of Medical Informatics. His research has been NIH-funded since 2003, and involves applications of telemedicine, clinical information systems, computer-based image analysis, and genotype-phenotype correlation to improve delivery of health care. His group has published over 140 peer-reviewed journal papers. He directs an NIH-funded T32 training program in visual science for graduate students & postdoctoral fellows at OHSU, directs an NIH-funded K12 mentored clinician-scientist program in ophthalmology, and teaches in both the ophthalmology & biomedical informatics departments. Before coming to OHSU in 2010, he spent 9 years at Columbia University, where he was Anne S. Cohen Associate Professor of Ophthalmology & Biomedical Informatics, director of medical student education in ophthalmology, and director of the introductory graduate student course in biomedical informatics.
Dr. Chiang received a B.S. in Electrical Engineering & Biology from Stanford University, and an M.D. from Harvard Medical School & the Harvard-MIT Division of Health Sciences and Technology. He received an M.A. in Biomedical Informatics from Columbia University, where he was an NLM fellow in biomedical informatics. He completed residency and pediatric ophthalmology fellowship training at the Johns Hopkins Wilmer Eye Institute. He is past Chair of the American Academy of Ophthalmology (AAO) Medical Information Technology Committee, Chair of the AAO IRIS Registry Data Analytics Committee, member of the AAO IRIS Registry Executive Committee, and member of the AAO Board of Trustees. He is Associate Editor for the Journal of the American Medical Informatics Association (JAMIA), Associate Editor for the Journal of the American Association for Pediatric Ophthalmology & Strabismus, and serves on the Editorial Boards for Ophthalmology, Ophthalmology Retina, Asia-Pacific Journal of Ophthalmology, and EyeNet. He has received “Top Doctor” awards from Castle Connolly, Best Doctors in America, and Portland Monthly magazine, and has received numerous research and teaching awards.
One of the major areas in ophthalmology where the application of telemedicine and artificial intelligence has been proposed is retinopathy of prematurity (ROP). This talk will discuss challenges in traditional ROP management, ways in which telemedicine has potential to improve the quality and delivery of care by addressing these challenges, and the published evidence to date. We will then discuss challenges in ROP diagnosis, ways in which artificial intelligence methods have potential to make diagnosis more objective and quantitative, and the published evidence to date. This talk will conclude by summarizing how ROP care is evolving because of these technologies.
I’m a non-practicing physician and a product manager for a team working on applying deep learning to medical data, especially medical imaging. Here is some of our team’s recent work in diabetic eye disease (JAMA & TensorFlow Dev Summit) and pathology.
Before Google, I was a product manager at Doximity, the “LinkedIn” for physicians, and a co-founder of Nano Precision Medical (NPM), a medical device start-up developing a small implantable drug delivery device. I completed my M.D. and Ph.D. in Bioengineering at the University of California, San Francisco and Berkeley. I received my B.S. with honors and distinction in Chemical Engineering from Stanford University.
Deep learning is a family of machine learning techniques in which multiple computational units, organized in layers, work together to model complex systems with high accuracy by learning from examples. The deep convolutional neural network is a subtype of deep learning optimized for images. This technique has produced algorithms that can diagnose melanoma, breast cancer lymph node metastases, and diabetic retinopathy from medical images with accuracy comparable to human experts. This talk covers work applying deep learning to retinal imaging for diabetic retinopathy, including recent work using different reference standards and techniques to improve explainability. It will also cover how retinal images and deep learning can be leveraged to make novel predictions, such as cardiovascular risk factors.
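The core operation that makes convolutional networks suited to images can be illustrated in a few lines: a small learned kernel slides over the image, and each output value is a weighted sum of the pixels under it, followed by a nonlinearity. The toy "patch" and edge-detecting kernel below are synthetic illustrations; real networks learn many such kernels across many layers.

```python
# Minimal sketch of a single convolutional-layer operation: sliding a
# small kernel over an image to produce a feature map, then applying ReLU.
# The 4x4 "patch" and 3x3 kernel below are synthetic, for illustration.

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation on nested lists of numbers."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            # Weighted sum of the pixels under the kernel window.
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

def relu(feature_map):
    # Nonlinearity: keep positive responses, zero out the rest.
    return [[max(0, v) for v in row] for row in feature_map]

# Toy patch with a sharp vertical edge, and a vertical-edge kernel.
patch = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
edge_kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]
print(relu(conv2d(patch, edge_kernel)))  # → [[27, 27], [27, 27]]
```

The strong responses mark where the edge falls within the kernel's window; stacking many such layers, with learned rather than hand-set kernels, is what lets a deep network build up from edges to lesions to a whole-image diagnosis.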