Every time a tumor is removed, whether a small portion in the form of a biopsy or the entire mass at the time of surgery, the specimen is sent for evaluation by a team of pathologists. Their job involves examining the tissue under a microscope and performing tests to answer a broad range of questions, starting with the most fundamental: Is it cancer? What type of cancer is it? What may sound like a simple task can prove quite difficult in many tumor types. The pathologist provides further information regarding the current stage of the tumor, for example: Is it contained locally or is it invasive? Their mandate is to gather as much information as possible about specific tumor characteristics, providing biomarkers based on the cells’ physical and molecular features in order to make predictions about how the tumor is likely to behave. Much of this work is accomplished by placing tissue on glass slides, using different types of stains to highlight tumor features, and examining the results under a high-power microscope. Slide interpretation relies on the complex pattern recognition capabilities of a human brain that has been trained through years of intensive study of thousands of microscopy images.

Most of us are familiar with the term artificial intelligence, in which computers perform tasks that normally require the processing capacity of a human brain. Some will have heard of its subfield machine learning, in which computers learn tasks without the programmer supplying explicit instructions. The term deep learning may be less familiar, yet we are increasingly inundated with the products of this relatively new computer science field. Deep learning refers to the use of algorithms called neural networks, which were inspired by the structure of the human brain and can be employed in applications such as self-driving cars, natural language processing, finance and trading strategies, and facial detection and recognition. Odds are good that you are currently holding a device with powerful deep learning capability embedded in its software, whether it uses your facial features to unlock your phone, confirms your identity as you access your banking app, or automatically detects faces and accurately identifies the people in your photos. Deep learning is ubiquitous in our daily lives, with ever increasing applications, including in cancer research and patient care.

Recent advances in our ability to create and store high-resolution digitized versions of glass microscopy slides allow computers to accept these images as input, translating features that can be seen by eye into numerical representations from which an algorithm can learn. This ability allows the application of powerful deep learning techniques to the complex task of recognizing patterns and extracting meaningful information from tumor specimens. Your phone has software capable not just of recognizing a human face, but of accurately distinguishing thousands of specific people from one another, using pictures taken from any angle, in any light, against any background. Applying this same approach to digital pathology images, early efforts have already shown significant promise in tasks such as distinguishing tumor from non-tumor, classifying tumors by type, and even predicting the presence of specific changes to the tumor's DNA known to underlie malignant behavior. The increasing availability of large libraries of digitized tumor images continually improves the performance of deep learning algorithms by offering larger and more diverse training sets from which to learn. In turn, the sophistication of the analysis expands, providing automatic identification of ever richer and more complex biomarkers that will have significant impact on patient care in terms of making accurate diagnoses, prognosticating, and predicting the likelihood of response to specific treatments. Deep learning from digitized images shows great potential to augment the already remarkable capacity of the pathologist to help guide treatment decisions for cancer patients.
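For readers curious what "translating features into numerical representations" means in practice, the toy sketch below illustrates the idea. The tile, layer sizes, and weights are all hypothetical placeholders, and the network is untrained; real pathology models are trained on millions of labeled image patches, but the basic data flow, from pixels to a feature vector to class probabilities, is the same.

```python
import math
import random

# Illustrative sketch only: to a computer, a digitized slide is just numbers.
# A hypothetical 8x8 grayscale "tile" stands in for a patch of a slide image.
random.seed(0)
tile = [[random.random() for _ in range(8)] for _ in range(8)]

# Flatten the image into a numeric feature vector the model can consume.
x = [pixel for row in tile for pixel in row]   # 64 values in [0, 1]

# A minimal one-hidden-layer neural network with random, untrained weights,
# shown purely to demonstrate the data flow (not a working classifier).
hidden_size, n_classes = 4, 2                  # classes: non-tumor vs tumor
w1 = [[random.gauss(0, 0.1) for _ in range(len(x))] for _ in range(hidden_size)]
w2 = [[random.gauss(0, 0.1) for _ in range(hidden_size)] for _ in range(n_classes)]

hidden = [max(0.0, sum(w * v for w, v in zip(row, x))) for row in w1]  # ReLU
logits = [sum(w * h for w, h in zip(row, hidden)) for row in w2]
exps = [math.exp(score) for score in logits]
probs = [e / sum(exps) for e in exps]          # softmax: scores -> probabilities

print(len(probs), round(sum(probs), 6))        # two class probabilities summing to 1.0
```

Training adjusts the weights so that, across many labeled examples, the probability assigned to the correct class increases; with enough diverse examples, the same mechanism scales to the image-recognition feats described above.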

Information about the tumor is only useful if we can directly relate it to the outcomes experienced by the patient. One challenge facing researchers in the field is ensuring that clinical data, describing specific patient features and the details of each patient's disease course, are available to accompany the large volume of digital imaging data. As such, the goal of creating deep learning-driven tools to advance cancer care can only be realized through close collaboration between clinicians on the front lines of treatment and researchers with extensive computational and mathematical expertise. Through collaboration with the Jackson Laboratory for Genomic Medicine, the Hartford Healthcare Cancer Institute has assembled just such a team. In the near future, we hope to bring the benefits of this cutting-edge technology to the clinical realm as part of our commitment not just to provide excellence in precision oncology care, but to continually drive that care to the next level.