Sergey Brin

Sergey Brin, a native of Moscow, received a bachelor of science degree with honors in mathematics and computer science from the University of Maryland at College Park. He is currently on leave from the Ph.D. program in computer science at Stanford University, where he received his master's degree. Sergey is a recipient of a National Science Foundation Graduate Fellowship and holds an honorary MBA from Instituto de Empresa. It was at Stanford that he met Larry Page and worked on the project that became Google. Together they founded Google Inc. in 1998, and Sergey continues to share responsibility for day-to-day operations with Larry Page and Eric Schmidt.

Sergey's research interests include search engines, information extraction from unstructured sources, and data mining of large text collections and scientific data. He has published more than a dozen academic papers, including Extracting Patterns and Relations from the World Wide Web; Dynamic Data Mining: A New Architecture for Data with High Dimensionality, which he published with Larry Page; Scalable Techniques for Mining Causal Structures; Dynamic Itemset Counting and Implication Rules for Market Basket Data; and Beyond Market Baskets: Generalizing Association Rules to Correlations.

Sergey has been a featured speaker at several international academic, business and technology forums, including the World Economic Forum and the Technology, Entertainment and Design Conference. He has shared his views on the technology industry and the future of search on the Charlie Rose Show, CNBC, and CNNfn. In 2004, he and Larry Page were named "Persons of the Week" by ABC World News Tonight.

Authored Publications
    Mining Optimized Gain Rules for Numeric Attributes
    Rajeev Rastogi
    Kyuseok Shim
    IEEE Trans. Knowl. Data Eng., 15 (2003), pp. 324-338
    Query-Free News Search
    Monika Henzinger
    Bay-Wei Chang
    Proceedings of the 12th International World Wide Web Conference (WWW-2003), Budapest, Hungary
    Many daily activities present information in the form of a stream of text, and often people can benefit from additional information on the topic discussed. TV broadcast news can be treated as one such stream of text; in this paper we discuss finding news articles on the web that are relevant to news currently being broadcast. We evaluated a variety of algorithms for this problem, looking at the impact of inverse document frequency, stemming, compounds, history, and query length on the relevance and coverage of news articles returned in real time during a broadcast. We also evaluated several postprocessing techniques for improving the precision, including reranking using additional terms, reranking by document similarity, and filtering on document similarity. For the best algorithm, 84%-91% of the articles found were relevant, with at least 64% of the articles being on the exact topic of the broadcast. In addition, a relevant article was found for at least 70% of the topics.
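    The query construction and reranking described in the abstract above can be illustrated with a short sketch: terms from a window of broadcast transcript are weighted by tf-idf to form a query, and candidate articles are then reranked by cosine similarity to the window. This is a hedged illustration, not the authors' implementation; the stopword list, background corpus, window text, and candidate articles below are invented placeholders.

```python
# Illustrative sketch (not the authors' code): build a query from a window of
# broadcast transcript using tf-idf term weights, then rerank candidate
# articles by cosine similarity to that window. All data here is made up.
import math
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "in", "on", "and", "to", "is", "are", "for"}

def tokenize(text):
    return [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]

def idf(term, docs):
    # Smoothed inverse document frequency over a background corpus.
    df = sum(1 for d in docs if term in d)
    return math.log((1 + len(docs)) / (1 + df)) + 1.0

def build_query(window_text, background_docs, k=3):
    """Pick the k highest tf-idf terms from the current transcript window."""
    tf = Counter(tokenize(window_text))
    docs = [set(tokenize(d)) for d in background_docs]
    scored = {t: c * idf(t, docs) for t, c in tf.items()}
    return sorted(scored, key=scored.get, reverse=True)[:k]

def cosine(a, b):
    # Cosine similarity between two term-count vectors (Counters).
    common = set(a) & set(b)
    num = sum(a[t] * b[t] for t in common)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def rerank(window_text, articles):
    """Order candidate articles by similarity to the transcript window."""
    w = Counter(tokenize(window_text))
    return sorted(articles, key=lambda art: cosine(w, Counter(tokenize(art))), reverse=True)

if __name__ == "__main__":
    background = ["good evening and welcome to the evening news",
                  "we will be right back after the break"]
    window = "the senate debated the budget bill late into the night"
    print(build_query(window, background))   # query terms from the window, e.g. ['senate', 'debated', 'budget']
    candidates = ["Senate reaches budget deal", "Local team wins championship"]
    print(rerank(window, candidates))         # topical article ranked first
```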
    Scalable Techniques for Mining Causal Structures
    Craig Silverstein
    Rajeev Motwani
    Jeffrey D. Ullman
    VLDB (1998), pp. 594-605
    The Anatomy of a Large-Scale Hypertextual Web Search Engine
    Lawrence Page
    Computer Networks and ISDN Systems, 30 (1998), pp. 107-117
    In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.
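    The hypertext-structure signal this abstract refers to centers on PageRank, the link-analysis score described in the paper. The sketch below is a minimal, illustrative power-iteration formulation on an invented four-page link graph; it is not Google's code, and the damping factor, iteration count, and graph are placeholder values.

```python
# Minimal sketch of PageRank on a toy link graph (illustrative only).
# Each page's score is redistributed along its outlinks on every iteration,
# with a damping factor modeling the "random surfer" jumping to any page.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

if __name__ == "__main__":
    toy_web = {            # hypothetical link structure
        "A": ["B", "C"],
        "B": ["C"],
        "C": ["A"],
        "D": ["C"],
    }
    for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
        print(f"{page}: {score:.3f}")   # pages with more incoming rank score higher
```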