Two Views from the 2009 Google Faculty Summit

August 3, 2009

Posted by Alfred Spector, Vice President of Research and Special Initiatives



[cross-posted with the Official Google Blog]

We held our fifth Computer Science Faculty Summit at our Mountain View campus last week. About 100 faculty members from schools across the Western Hemisphere attended the summit, which focused on a collection of technologies that serve to connect and empower people. The agenda included presentations on technologies for automated translation of human language, voice recognition, crisis response, power monitoring and collaborative data management. We also talked about technologies to make personal systems more secure, and how to teach programming — even using Android phones. You can see a more complete list of the topics in the Faculty Summit Agenda or check out my introductory presentation for more information.

I asked a few of the faculty to give us their perspectives on the summit, thinking their views might be more valuable than our own: Deborah Estrin, Professor of Computer Science at UCLA and an expert in large-scale sensing of environmental and other information, and John Ousterhout, an expert in distributed operating systems and scripting languages.

Professor Estrin's perspective:

We all know that Google has produced a spectacular array of technologies and services that have changed the way we create, access, manage, share and curate information. A very broad range of people samples and experiences Google's enhancements and new services on a daily basis. I, of course, am one of those minions, but last week I had the special opportunity to get a glimpse inside the hive while attending the 2009 Google Faculty Summit. I still haven't processed all of the impressions, facts, figures and URLs that I jotted down over the packed day-and-a-half-long gathering, but here are a few of the things that impressed me most:

  • The way Google launches production services while simultaneously making great advances in really hard technical areas such as machine translation and voice search, and how these two threads are fully intertwined and feed off of one another.
  • Their embrace of open source activities, particularly in the Android operating system and programming environment for mobile devices. They also seed and sponsor all sorts of creative works, from K-12 computer science learning opportunities to the Open Data Kit, which supports data-gathering projects worldwide.
  • The company's commitment to thinking big and to supporting its employees in acting on their concerns and cares in the larger geopolitical sphere. From the creation of Flu Trends to the support of a new "Crisis Response Hackathon" (an event that Google, Microsoft and Yahoo are planning to jointly sponsor to help programmers find opportunities to use their technical skills to solve societal problems), Googlers are not just encouraged to donate dollars to important causes — they are encouraged to use their technical skills to create new solutions and tools to address the world's all-too-many challenges.

This was my second Google Faculty Summit — I previously attended in 2007. I was impressed by the 2007 summit, but not as deeply as I was this year. Among other things, this year I felt that Googlers talked to us like colleagues instead of just visitors. The conversations flowed: not once did I run up against the "Sorry, can't talk about that... you know our policy on early announcements" response. I left quite excited about Google's expanded role in the CS research ecosystem. Thanks for changing that API!

Professor Ousterhout's perspective:

I spent Thursday and Friday this week at Google for their annual Faculty Summit. After listening to descriptions of several Google projects and talking with Googlers and the other faculty attendees, I left with two overall takeaways. First, it's becoming clear that information at scale is changing science and engineering. Access to enormous datasets opens up whole new avenues for scientific discovery and for solving problems. For example, Google's machine translation tools take advantage of "parallel texts": documents that have been translated by humans from one language to another, with both forms available. By comparing the sentences from enormous numbers of parallel texts, machine translation systems can learn effective translations using simple probabilistic approaches. The results are better than any previous attempt at computerized translation, but only if there are billions of words available in parallel texts. Another example of using large-scale information is Flu Trends, which tracks the spread of flu by counting the frequency of certain search terms in Google's search engine; the data is surprisingly accurate and available more quickly than data from traditional approaches.
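To make the parallel-text idea concrete, here is a minimal sketch of one such simple probabilistic approach: IBM Model 1, a classic word-alignment model from the statistical machine translation literature. The tiny corpus below is invented for illustration, and this is nothing like Google's production system (which trains far richer models over billions of words); it only shows how translation probabilities can be estimated from sentence-aligned parallel text with little more than counting and normalization.

```python
from collections import defaultdict

# A tiny, invented parallel corpus (English -> Spanish), sentence-aligned.
parallel_corpus = [
    ("the house".split(), "la casa".split()),
    ("the green house".split(), "la casa verde".split()),
    ("a house".split(), "una casa".split()),
    ("the book".split(), "el libro".split()),
]

# t[(e, f)] approximates P(foreign word f | English word e).
# Every pair starts at the same value (a uniform initialization); the
# normalization in the E-step makes the absolute starting value irrelevant.
t = defaultdict(lambda: 1.0)

for _ in range(20):  # a few EM iterations suffice on a corpus this small
    pair_counts = defaultdict(float)  # expected count of (e, f) alignments
    e_totals = defaultdict(float)     # expected count of e aligning at all
    for english, foreign in parallel_corpus:
        for f in foreign:
            # E-step: distribute f's alignment probability across the
            # English words appearing in the same sentence pair.
            norm = sum(t[(e, f)] for e in english)
            for e in english:
                frac = t[(e, f)] / norm
                pair_counts[(e, f)] += frac
                e_totals[e] += frac
    # M-step: re-estimate translation probabilities from expected counts.
    for (e, f), count in pair_counts.items():
        t[(e, f)] = count / e_totals[e]

# The model concentrates probability on plausible pairs:
print(round(t[("house", "casa")], 2))   # high: casa usually co-occurs with house
print(round(t[("green", "verde")], 2))  # high: green and verde occur only together
print(round(t[("house", "verde")], 2))  # low
```

The striking point from the talks is that exactly this kind of counting-based estimation, scaled up to billions of words of parallel text and far larger models, is what makes statistical translation work: the algorithm stays simple while the data does the heavy lifting.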

My second takeaway is that it's crucial to keep as much information as possible publicly available. It used to be that much of science and engineering was driven by technology: whoever had the biggest particle accelerator or the fastest computer had an advantage. From now on, information will be just as important as technology: whoever has access to the most information will make the most discoveries and create the most exciting new products. If we want to maintain the leadership position of the U.S., we must find ways to make as much information as possible freely available. There will always be vested commercial interests that want to restrict access to information, but we must fight these interests. The overall benefit to society of publishing information outweighs the benefit to individual companies from restricting it.