Jimbo Wilson
Authored Publications
The What-If Tool: Interactive Probing of Machine Learning Models
A key challenge in developing and deploying Machine Learning (ML) systems is understanding their performance across a wide range of inputs. To address this challenge, we created the What-If Tool, an open-source application that allows practitioners to probe, visualize, and analyze ML systems, with minimal coding. The What-If Tool lets practitioners test performance in hypothetical situations, analyze the importance of different data features, and visualize model behavior across multiple models and subsets of input data. It also lets practitioners measure systems according to multiple ML fairness metrics. We describe the design of the tool, and report on real-life usage at different organizations.
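For readers who want to try the probing workflow the abstract describes, a minimal notebook sketch follows. It assumes the open-source `witwidget` package (the tool's notebook integration); the DataFrame, label vocabulary, and predict_fn are hypothetical placeholders for a real evaluation set and model, and the exact builder methods may differ across tool versions.

```python
# Minimal What-If Tool notebook sketch, assuming the open-source `witwidget`
# package. The data and predict_fn below are illustrative placeholders.
import pandas as pd
import tensorflow as tf
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

# Hypothetical evaluation data.
test_df = pd.DataFrame({"age": [39.0, 52.0], "education": ["Bachelors", "HS-grad"]})

def df_to_examples(df):
    """Convert a DataFrame into the tf.Example protos the tool consumes."""
    examples = []
    for _, row in df.iterrows():
        ex = tf.train.Example()
        for col, value in row.items():
            if isinstance(value, str):
                ex.features.feature[col].bytes_list.value.append(value.encode())
            else:
                ex.features.feature[col].float_list.value.append(float(value))
        examples.append(ex)
    return examples

def predict_fn(examples):
    """Stand-in for a real model: return [negative, positive] scores per example."""
    return [[0.5, 0.5] for _ in examples]

config = (WitConfigBuilder(df_to_examples(test_df))
          .set_custom_predict_fn(predict_fn)
          .set_label_vocab(["rejected", "approved"]))  # hypothetical labels
WitWidget(config, height=800)  # renders the interactive tool in the notebook
```

Once rendered, the widget's datapoint editor supports the counterfactual "what-if" edits, and its performance view exposes the slice-level fairness metrics mentioned above.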
Scalable and accurate deep learning for electronic health records
Alvin Rishi Rajkomar
Eyal Oren
Nissan Hajaj
Mila Hardt
Peter J. Liu
Xiaobing Liu
Jake Marcus
Patrik Per Sundberg
Kun Zhang
Yi Zhang
Gerardo Flores
Gavin Duggan
Jamie Irvine
Kurt Litsch
Alex Mossin
Justin Jesada Tansuwan
De Wang
Dana Ludwig
Samuel Volchenboum
Kat Chou
Michael Pearson
Srinivasan Madabushi
Nigam Shah
Atul Butte
npj Digital Medicine (2018)
Predictive modeling with electronic health record (EHR) data is anticipated to drive personalized medicine and improve healthcare quality. Constructing predictive statistical models typically requires extraction of curated predictor variables from normalized EHR data, a labor-intensive process that discards the vast majority of information in each patient’s record. We propose a representation of patients’ entire raw EHR records based on the Fast Healthcare Interoperability Resources (FHIR) format. We demonstrate that deep learning methods using this representation are capable of accurately predicting multiple medical events from multiple centers without site-specific data harmonization. We validated our approach using de-identified EHR data from two U.S. academic medical centers with 216,221 adult patients hospitalized for at least 24 hours. In the sequential format we propose, this volume of EHR data unrolled into a total of 46,864,534,945 data points, including clinical notes. Deep learning models achieved high accuracy for tasks such as predicting in-hospital mortality (AUROC across sites 0.93-0.94), 30-day unplanned readmission (AUROC 0.75-0.76), prolonged length of stay (AUROC 0.85-0.86), and all of a patient’s final discharge diagnoses (frequency-weighted AUROC 0.90). These models outperformed state-of-the-art traditional predictive models in all cases. We also present a case study of a neural-network attribution system, which illustrates how clinicians can gain some transparency into the predictions. We believe that this approach can be used to create accurate and scalable predictions for a variety of clinical scenarios, complete with explanations that directly highlight evidence in the patient’s chart.
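The sequential representation is easier to picture with a toy example. The sketch below is not the paper's model (which combines several architectures over complete FHIR records at scale); it only illustrates, with invented event tokens and time offsets, how a timestamped record can be embedded and summarized by a recurrent network into a single risk score.

```python
# Toy sketch of a sequential FHIR-like representation: timestamped event
# tokens embedded and summarized by an LSTM into one risk score. The record,
# vocabulary, and model are placeholders, not the paper's ensemble.
import tensorflow as tf

# Hypothetical patient record: (hours since admission, event token) pairs
# derived from FHIR resources such as Observation or MedicationAdministration.
record = [
    (0.5, "Observation:heart_rate:high"),
    (2.0, "MedicationAdministration:vancomycin"),
    (6.0, "Observation:lactate:high"),
]
vocab = {tok: i + 1 for i, tok in enumerate(sorted({t for _, t in record}))}

token_ids = tf.constant([[vocab[t] for _, t in record]])   # shape (1, seq_len)
delta_t = tf.constant([[[dt] for dt, _ in record]])        # shape (1, seq_len, 1)

tokens_in = tf.keras.Input(shape=(None,), dtype=tf.int32)
times_in = tf.keras.Input(shape=(None, 1), dtype=tf.float32)
embedded = tf.keras.layers.Embedding(len(vocab) + 1, 32)(tokens_in)
events = tf.keras.layers.Concatenate()([embedded, times_in])  # token + time offset
summary = tf.keras.layers.LSTM(64)(events)                    # summarize the sequence
risk = tf.keras.layers.Dense(1, activation="sigmoid")(summary)  # e.g. mortality risk

model = tf.keras.Model([tokens_in, times_in], risk)
print(model([token_ids, delta_t]))  # untrained probability, shown only for shape
```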
Visualizing Dataflow Graphs of Deep Learning Models in TensorFlow
Kanit Wongsuphasawat
Daniel Smilkov
Dandelion Mane
Doug Fritz
Dilip Krishnan
IEEE Transactions on Visualization and Computer Graphics (2017)
We present a design study of the TensorFlow Graph Visualizer, part of the TensorFlow machine intelligence platform. This tool helps users understand complex machine learning architectures by visualizing their underlying dataflow graphs. The tool works by applying a series of graph transformations that enable standard layout techniques to produce a legible interactive diagram. To declutter the graph, we decouple non-critical nodes from the layout. To provide an overview, we build a clustered graph using the hierarchical structure annotated in the source code. To support exploration of nested structure on demand, we perform edge bundling to enable stable and responsive cluster expansion. Finally, we detect and highlight repeated structures to emphasize a model’s modular composition. To demonstrate the utility of the visualizer, we describe example usage scenarios and report user feedback. Overall, users find the visualizer useful for understanding, debugging, and sharing the structures of their models.
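The clustered overview exploits the hierarchy already encoded in TensorFlow op names, where name scopes are separated by "/". The snippet below is a toy illustration of that grouping idea, not the visualizer's own code; the op names are invented.

```python
# Group op names by their leading name-scope components to form cluster nodes.
from collections import defaultdict

op_names = [
    "input/Placeholder",
    "layer1/weights/Variable",
    "layer1/MatMul",
    "layer1/Relu",
    "layer2/weights/Variable",
    "layer2/MatMul",
    "loss/SoftmaxCrossEntropy",
]

def cluster_by_scope(names, depth=1):
    """Group op names by the first `depth` name-scope components."""
    clusters = defaultdict(list)
    for name in names:
        scope = "/".join(name.split("/")[:depth])
        clusters[scope].append(name)
    return dict(clusters)

for scope, ops in cluster_by_scope(op_names).items():
    print(f"{scope}: {len(ops)} ops")  # one collapsed node per scope in the overview
```

Expanding a cluster in the tool corresponds conceptually to re-grouping its members at a greater depth.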
No Classification without Representation: Assessing Geodiversity Issues in Open Data Sets for the Developing World
Shreya Shankar
Yoni Halpern
NIPS 2017 workshop: Machine Learning for the Developing World
Modern machine learning systems such as image classifiers rely heavily on large scale data sets for training. Such data sets are costly to create, so in practice a small number of freely available, open source data sets are widely used. Such strategies may be particularly important for ML applications in the developing world, where resources may be constrained and the cost of creating suitable large scale data sets may be a blocking factor. However, we suggest that examining the geo-diversity of open data sets is critical before adopting a data set for such use cases. In particular, we analyze two large, publicly available image data sets to assess geo-diversity and find that these data sets appear to exhibit an observable amerocentric and eurocentric representation bias. Further, we perform targeted analysis on classifiers that use these data sets as training data to assess the impact of these training distributions, and find strong differences in the relative performance on images from different locales. These results emphasize the need to ensure geo-representation when constructing data sets for use in the developing world.
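Both analyses in the abstract, measuring where a data set's images come from and comparing classifier performance by locale, reduce to simple grouped tallies. The sketch below illustrates them on invented country labels and prediction outcomes rather than the actual data sets studied.

```python
# Toy geo-diversity and per-locale accuracy tallies on hypothetical data.
from collections import Counter, defaultdict

# Hypothetical (country, prediction_correct) pairs for a held-out evaluation set.
evaluations = [
    ("US", True), ("US", True), ("US", False),
    ("GB", True), ("IN", False), ("IN", True), ("NG", False),
]

# Geo-diversity: share of images per country of origin.
counts = Counter(country for country, _ in evaluations)
total = sum(counts.values())
for country, n in counts.most_common():
    print(f"{country}: {n / total:.0%} of images")

# Relative performance: per-locale classification accuracy.
by_country = defaultdict(list)
for country, correct in evaluations:
    by_country[country].append(correct)
for country, results in by_country.items():
    print(f"{country}: accuracy {sum(results) / len(results):.2f}")
```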