Arjun Gopalan

I am a software engineer at Google Research. My areas of interest include graph-based machine learning, label propagation, and data mining. I currently work on Google's large-scale semi-supervised machine learning platform and on Neural Structured Learning in TensorFlow.

Prior to Google, I spent close to four years developing enterprise storage technologies at Tintri, where I was one of the principal contributors to the design and implementation of logical synchronous replication with automatic, transparent failover. A paper on logical synchronous replication appeared in FAST'18.

I completed my Master's in Computer Science with a distinction in research at Stanford University in 2014. At Stanford, I was part of the Platform Lab, working with Dr. John Ousterhout on RAMCloud, a low-latency, DRAM-based distributed data center storage system. A paper on RAMCloud appeared in TOCS'15. My Master's thesis was on managing objects and secondary indexes in RAMCloud, part of a larger effort to design and implement scalable, low-latency secondary indexes (SLIK) in RAMCloud. A paper on SLIK appeared in ATC'16.

Authored Publications
    Recognizing Multimodal Entailment (tutorial at ACL 2021)
    Afsaneh Hajiamin Shirazi
    Blaž Bratanič
    Christina Liu
    Gabriel Fedrigo Barcik
    Georg Fritz Osang
    Jared Frank
    Lucas Smaira
    Ricardo Abasolo Marino
    Roma Patel
    Vaiva Imbrasaite
    (2021) (to appear)
    Abstract: How information is created, shared, and consumed has changed rapidly in recent decades, in part thanks to new social platforms and technologies on the web. With ever-larger amounts of unstructured data and limited labels, organizing and reconciling information from different sources and modalities is a central challenge in machine learning. This cutting-edge tutorial introduces the multimodal entailment task, which can be useful for detecting semantic alignments when a single modality alone does not suffice for full content understanding. Starting with a brief overview of natural language processing, computer vision, structured data, and neural graph learning, we lay the foundations for the multimodal sections to follow. We then discuss recent multimodal learning literature covering visual, audio, and language streams, and explore case studies focusing on tasks that require fine-grained understanding of visual and linguistic semantics: question answering, veracity classification, and hatred classification. Finally, we introduce a new dataset for recognizing multimodal entailment and explore it in a hands-on collaborative section. Overall, this tutorial gives an overview of multimodal learning, introduces a multimodal entailment dataset, and encourages future research on the topic.
    Abstract: We present Neural Structured Learning (NSL) in TensorFlow, a new learning paradigm to train neural networks by leveraging structured signals in addition to feature inputs. Structure can be explicit, as represented by a graph, or implicit, either induced by adversarial perturbation or inferred using techniques like embedding learning. NSL is open-sourced as part of the TensorFlow ecosystem and is widely used in Google across many products and services. In this tutorial, we provide an overview of the NSL framework, including various libraries, tools, and APIs, and demonstrate the practical use of NSL in different applications. The NSL website is hosted at www.tensorflow.org/neural_structured_learning, which includes details about the theoretical foundations of the technology, extensive API documentation, and hands-on tutorials.