Frank Stajano

Before his academic appointments he worked in industry, gaining firsthand experience of startup companies, technology transfer, entrepreneurship, and patents. His academic research therefore maintains a strong practical orientation. He has been an employee of Google, Toshiba, AT&T, Oracle, and Olivetti. He was elected a Toshiba Fellow in 2000.

Interviewed by David Harper, Head of University Relations, EMEA

DH: Tell us about your background.

FS: I hold a PhD in computer security from Cambridge, following several years in industry and a previous degree in electronic engineering. I’ve always been at the boundary between engineering and computing, and at the boundary between university and industry. When I did my engineering degree, some of my lecturers suggested I remain in academia; but I said to myself: no, I’m studying this because I want to be an engineer. It would feel like cheating to teach people how to be engineers without having been one myself! So I spent several years in industrial research at Olivetti, Oracle, AT&T and Toshiba. After gaining hands-on experience of engineering and software development I took up a full-time academic post at Cambridge, where I now supervise PhD students in Computer Security.

DH: Did your experience with industry prove valuable once you returned to academia?

FS: Of course! I like to work on practical problems that have a connection with the real world. Even after I returned to university full-time, I still maintained connections with industry, mainly through consultancy jobs. I feel this helps keep my research honest, as it makes me work on problems that have practical relevance. At the same time I still have to filter this useful input from industry to distinguish what’s relevant in the long term from what’s only an immediate concern---because industry people (especially in computing) are often under tremendous pressure to deliver something by the end of the quarter, with little scope for long-term strategic vision. A complementary source of research topics, therefore, is to look for ways in which the world isn’t quite how I’d like it to be: this, for example, drives my interest towards privacy in the electronic society, which you wouldn’t normally list as an industry concern, and it was great that this long-term goal was of strategic interest for Google, too.

DH: Can you tell us a bit about your current work at Cambridge?

FS: On the teaching side, I have the privilege of instructing undergraduates in one of the most fundamental areas of computer science: algorithms. That’s an exciting course to teach because, out of all programming, it’s when you work on core algorithms that the really smart stuff happens. Before that, I taught a dozen other courses---from computer architecture and operating systems to programming (all the way down to low-level stuff such as assembly language and even Verilog), security and related areas. On the research side, I focus on various aspects of security, particularly three main threads: systems security, privacy in the electronic society and ubiquitous computing---all of them examined with a cross-cutting interest in human psychology. For example, I’ve recently partnered with a co-author, Paul Wilson, who is a magician and a sleight-of-hand expert. We’re working to understand the human aspects of security and the psychology of fraud---what goes on in the minds of fraudsters and, especially, of the victims of fraud.

DH: So, you take these themes and apply them to problems faced by the computer science community?

FS: Yes. Engineers who design security systems have their own mental models, and they implicitly assume that users will behave in a particular way. But your typical engineers, programmers and security geeks are notoriously poor at “people” skills, so they aren’t the best at anticipating how non-engineers will behave, particularly in the face of scams or, more generally, security threats. Engineers (I can joke about engineers since I am one) are the kind of guys who make up elaborate security rules that are too hard to follow and then blame users for not following them. Those who understand the users’ psychology much better than engineers do are the fraudsters themselves, who have learned to exploit those psychological insights for their own nefarious purposes. So our thesis is that we ought to learn from the fraudsters, who really know what makes users tick. My co-author and I have documented and studied hundreds of scams, isolating psychological principles that are common to them all. Based on this research, we can explain and predict the behaviour of users and work to build stronger system defences.

DH: Not only are you enunciating these principles, but you’re also operationalizing them: figuring out how to design systems around them at the architecture and user-interaction levels?

FS: Exactly! We need to make both users and designers aware of these issues. Lazy programmers, when exposed to this work, tend to want checklists: “You have these principles,” they ask, “so how do I map them into things I can do to make my system secure?” But my answer is: even if I had a checklist, I wouldn’t want to give it to you, because it can quickly become an excuse for turning your brain off. If you blindly follow a checklist of countermeasures, the fraudsters will simply find a new way of getting around them. Instead, learn to think like the fraudsters and their victims and understand why they behave as they do. If you rely on pre-made solutions you’ll always “fight the previous war” and get there too late.

DH: Switching topics a bit, what sparked your interest in Google’s Visiting Faculty program? You mentioned that you like to keep your research informed by industry---perhaps you can speak about your specific motivation.

FS: Several things attracted me to Google. The first is the ability to develop systems that people around the world will use every day: a great opportunity to have real impact on the world. Then, for someone like me with a strong interest in privacy in the electronic society, there was no better place to visit than Google, whose useful, if not indispensable, services have a tremendous impact on society in terms of privacy. It was an excellent opportunity to get to know the “Big G” from the inside: to understand the situation from the viewpoint of the entity that provides these services, and what Google can do about it. Once I joined, Google employees could treat me as a colleague: everything was much more open and it was possible to get into sensitive and technical privacy discussions as a peer. Things that may appear black and white from an external perspective often look different when you know what goes on behind the scenes: I gained better insights into Google’s stance on privacy and into the trade-offs that accompany each policy decision. Although I cannot say that my views on privacy always coincide with Google’s, I nonetheless witnessed, from many smart and dedicated individuals, a number of coordinated initiatives and a genuine commitment to stronger privacy, which I respected. The dialogue was mutually stimulating and instructive even when we disagreed.

DH: Can you tell us about your work at Google and the impact it might have on our users?

FS: When I joined Google, we envisaged I’d be working mostly on privacy in social networks, an area I am exploring with my graduate students at Cambridge. But, in the second half of my time there, we shifted towards a different topic. I was in the unusual but lucky circumstance that my PhD student Jonathan Anderson was approved to join the Munich team for an internship while I was still there as visiting faculty. Before his arrival, my Google hosts and I threw around half a dozen potential project suggestions and selected one that Jon and I could work on together, relating to privacy annotations for web search results, based on previous research by Professor Lorrie Cranor of Carnegie Mellon University whom I had also visited as part of my sabbatical. In brief, this project involved researching potential ways to provide web users with clear and concise information about how web sites handle their personal data.
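
As background: Professor Cranor worked extensively on P3P, the W3C’s machine-readable privacy-policy standard, in which a site publishes a policy reference file at a well-known location. The Python sketch below, assuming a P3P-style check, illustrates how a tool might derive a coarse annotation for a search result; it is a hypothetical illustration, not the actual project’s design, and the label wording is invented.

```python
# Hypothetical sketch: annotate a search result according to whether the
# site publishes a P3P policy reference file at its well-known location.
# The labels and the simple found/not-found heuristic are illustrative;
# a real tool would parse and score the policy itself.
import urllib.request
from urllib.parse import urljoin

def privacy_annotation(site_url: str) -> str:
    """Return a short privacy label to show next to a search result."""
    ref_url = urljoin(site_url, "/w3c/p3p.xml")  # P3P well-known location
    try:
        with urllib.request.urlopen(ref_url, timeout=5) as resp:
            if resp.status == 200:
                return "machine-readable privacy policy available"
    except OSError:
        pass  # no policy file, HTTP error, or network failure
    return "no machine-readable privacy policy found"

print(privacy_annotation("https://www.example.com/"))
```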

DH: How might your work at Google advance your research agenda once you’re back at university?

FS: It has been very interesting to see the way Google engineers tackle problems of scale and real-world emergencies. For example, members of an internal team that looks after the security of Google web sites came from Zurich to offer a two-day course to the engineers in Munich, and I took part in it. It was very instructive to hear the war stories from the people whose pagers go off when there’s a breach! They are obviously very knowledgeable about all the latest hacks, and they let us try some of them on a sample site. It would be fun to run that kind of hands-on “hack this web site” class with my students. It was also interesting to experience Google’s software infrastructure, processes and tools, including Google’s offline version of “pair programming”---each piece of code must be reviewed and approved by one of your peers before you can check it in. I wrote some short programs in Sawzall, a Google-made language for analyzing very large data sets. It’s fascinating to see how a complex problem is tackled with a tool that is at once versatile and specialized enough to make the task easier. I can share this approach with my algorithms students as an example of how industry teams solve problems.
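
To convey the flavour of Sawzall: a script describes what to do with a single record and “emits” values into aggregator tables (sums, counts and so on), which the framework then combines across the entire data set. Below is a minimal Python sketch of that per-record/aggregate split, not actual Sawzall; the record field is hypothetical, and the real system runs the aggregation distributed over many machines.

```python
# Minimal sketch (in Python, not Sawzall) of the emit-to-aggregator model.
# The "latency_ms" field is a hypothetical example, not a real Google log.
from collections import defaultdict

def process_record(record: dict, tables: defaultdict) -> None:
    """Per-record logic, analogous to a Sawzall script run once per input."""
    x = float(record["latency_ms"])
    tables["count"] += 1             # cf. Sawzall: emit count <- 1;
    tables["total"] += x             # cf. Sawzall: emit total <- x;
    tables["sum_of_squares"] += x * x

records = [{"latency_ms": 12.0}, {"latency_ms": 8.5}, {"latency_ms": 15.2}]
tables = defaultdict(float)
for r in records:
    process_record(r, tables)

mean = tables["total"] / tables["count"]
print(f"records={int(tables['count'])}, mean latency = {mean:.2f} ms")
```

The point of the split is that the per-record part is trivially parallelizable, while only the small set of aggregator tables ever needs to be merged, which is what lets such a short script scale to enormous data sets.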

DH: What’s your advice to students or fellow academics who are considering roles or placements at Google?

FS: Go for it! If you do it right, spending time here can be extremely useful---particularly from a networking perspective. Once you join Google, you find people with common interests and it’s a great way to establish connections that you can keep up later. Plus, you’ll have the chance to work with some of the largest-scale computing infrastructure on the planet. This experience is invaluable for computer science students, teachers and researchers.