Art Pope
Art Pope received BS and PhD degrees from the University of British Columbia and an SM from Harvard, all in computer science. He is a software engineer at Google in Mountain View, CA. Prior to joining Google in 2011, he worked at BBN, GE, Sarnoff, and SAIC. His research interests include computer vision, machine learning, and artificial intelligence.
Authored Publications
A connectomic study of a petascale fragment of human cerebral cortex
Alex Shapson-Coe
Daniel R. Berger
Yuelong Wu
Richard L. Schalek
Shuohong Wang
Neha Karlupia
Sven Dorkenwald
Evelina Sjostedt
Dongil Lee
Luke Bailey
Angerica Fitzmaurice
Rohin Kar
Benjamin Field
Hank Wu
Julian Wagner-Carena
David Aley
Joanna Lau
Zudi Lin
Donglai Wei
Hanspeter Pfister
Adi Peleg
Jeff W. Lichtman
bioRxiv (2021)
We acquired a rapidly preserved human surgical sample from the temporal lobe of the cerebral cortex. We stained a 1 mm³ volume with heavy metals, embedded it in resin, cut more than 5000 slices at ∼30 nm and imaged these sections using a high-speed multibeam scanning electron microscope. We used computational methods to render the three-dimensional structure containing 57,216 cells, hundreds of millions of neurites and 133.7 million synaptic connections. The 1.4 petabyte electron microscopy volume, the segmented cells, cell parts, blood vessels, myelin, inhibitory and excitatory synapses, and 104 manually proofread cells are available to peruse online. Many interesting and unusual features were evident in this dataset. Glia outnumbered neurons 2:1 and oligodendrocytes were the most common cell type in the volume. Excitatory spiny neurons comprised 69% of the neuronal population, and excitatory synapses also were in the majority (76%). The synaptic drive onto spiny neurons was biased more strongly toward excitation (70%) than was the case for inhibitory interneurons (48%). Despite incompleteness of the automated segmentation caused by split and merge errors, we could automatically generate (and then validate) connections between most of the excitatory and inhibitory neuron types both within and between layers. In studying these neurons we found that deep layer excitatory cell types can be classified into new subsets, based on structural and connectivity differences, and that chandelier interneurons not only innervate excitatory neuron initial segments as previously described, but also each other's initial segments. Furthermore, among the thousands of weak connections established on each neuron, there exist rarer highly powerful axonal inputs that establish multi-synaptic contacts (up to ∼20 synapses) with target neurons. Our analysis indicates that these strong inputs are specific, and allow small numbers of axons to have an outsized role in the activity of some of their postsynaptic partners.
High-Precision Automated Reconstruction of Neurons with Flood-Filling Networks
Jörgen Kornfeld
Larry Lindsey
Winfried Denk
Nature Methods (2018)
Reconstruction of neural circuits from volume electron microscopy data requires the tracing of cells in their entirety, including all their neurites. Automated approaches have been developed for tracing, but their error rates are too high to generate reliable circuit diagrams without extensive human proofreading. We present flood-filling networks, a method for automated segmentation that, similar to most previous efforts, uses convolutional neural networks, but contains in addition a recurrent pathway that allows the iterative optimization and extension of individual neuronal processes. We used flood-filling networks to trace neurons in a dataset obtained by serial block-face electron microscopy of a zebra finch brain. Using our method, we achieved a mean error-free neurite path length of 1.1 mm, and we observed only four mergers in a test set with a path length of 97 mm. The performance of flood-filling networks was an order of magnitude better than that of previous approaches applied to this dataset, although with substantially increased computational costs.
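The core loop described in the abstract — seed an object, predict which nearby voxels belong to it, and iterate until the mask stops growing — can be sketched in miniature. This is an illustrative toy only, not the paper's implementation: the real flood-filling network uses a recurrent 3D convolutional network as the predictor and a moving field of view, whereas here `toy_predictor` is a hypothetical stand-in that grows the mask by one pixel and accepts pixels of similar intensity.

```python
import numpy as np

def flood_fill_segment(image, seed, predict, threshold=0.9, max_iters=100):
    """Toy sketch of flood-filling segmentation.

    Starting from a seed pixel, repeatedly call `predict` on the image
    and the current object mask, accept pixels whose predicted object
    probability clears `threshold`, and stop when the mask converges.
    """
    mask = np.zeros(image.shape, dtype=bool)
    mask[seed] = True
    for _ in range(max_iters):
        probs = predict(image, mask)
        new_mask = probs >= threshold
        new_mask[seed] = True  # the seed always belongs to the object
        if np.array_equal(new_mask, mask):
            break  # converged: no pixels were added or removed
        mask = new_mask
    return mask

def toy_predictor(image, mask):
    """Hypothetical stand-in for the recurrent CNN: dilate the mask by
    one pixel, then keep only pixels with intensity close to the object
    mean. (np.roll wraps at edges; fine here because the object is
    interior.)"""
    grown = mask.copy()
    for axis in (0, 1):
        grown |= np.roll(mask, 1, axis) | np.roll(mask, -1, axis)
    mean = image[mask].mean()
    return np.where(grown & (np.abs(image - mean) < 0.2), 1.0, 0.0)

# Usage: segment a bright 4x4 square from a single interior seed.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
mask = flood_fill_segment(img, (3, 3), toy_predictor)
# mask covers exactly the bright square
```

The iterative re-prediction over the current mask, rather than a single feed-forward pass, is what the recurrent pathway in the paper's architecture provides; it lets the network extend one neurite at a time instead of labeling the whole volume at once.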