
Faculty Spotlight: Elisabeth Paté-Cornell

[Portrait of Professor Elisabeth Paté-Cornell. Photo credit: Wikipedia]

February 5, 2019

Meet Elisabeth Paté-Cornell, Professor of MS&E. She specializes in risk management, a broad field that has led to her work with cybersecurity, space shuttles, and the President's Intelligence Advisory Board. We caught up with Prof. Paté-Cornell to dive into these stories and the paths she has followed.

"Risk analysis was perfect for me because it could truly be applied to anything I wanted."

LAYING THE GROUNDWORK: EARLY LIFE

Describe your early life—what was it like for you and how did it inform your path to Stanford?

I grew up in many different places because my father was an officer of the French Marine Corps. I was born in Dakar, Senegal, where he was deployed at the time. We moved to France when I was two, but returned to Dakar when I was ten. There, my parents made one of the best decisions they could have made: they sent me to the local public school, mostly with African students, as opposed to a private one.

I loved that time of my life. We lived in Dakar for three years; I was in a great high school, belonged to a judo club, and spent a lot of time swimming. But then we had to return to Europe in the middle of a school year. I was placed for a quarter in a high school in the south of France where, for historical reasons, most of the kids spoke Spanish. So I had to build quickly on the small amount of Spanish I had learned before, which turned out to be very helpful later on when I arrived in California.

Then we lived in a magnificent town called La Rochelle, a historic harbor on the Atlantic coast of France, where my father was the head of the military district. There, I learned how to sail and went through the local lyceum, but I missed the diversity of my Dakar school. In the different schools I attended, I enjoyed math very much and I was the best at it in my high school class (later, it was a bit more challenging!).

After high school, I went to Nantes in the south of Brittany to study math and physics as an undergraduate, preparing for the top schools of engineering in France. My father then retired, and my family moved to Marseille, where two years later, I graduated from that program.

For graduate school, I first went to Grenoble in the Alps, to study applied mathematics and computer science for an Engineering Degree. I then arrived at Stanford in 1971 and decided, on the spot, that I liked both the institution and the climate. I completed my Master's in Operations Research in 1972, and in 1978, a PhD in Engineering-Economic Systems (now part of Management Science and Engineering), focusing on risk analysis.

MIT offered me a job as an Assistant Professor in Civil Engineering six months before I finished my PhD, and I accepted. And of course, walking twice every day in winter across the Harvard Bridge, I almost froze to death! I was about to marry Allin Cornell, who was a Professor there. We agreed that we were not going to stay in the Northeast, and he quickly decided to go back with me to Stanford, where he had done his PhD. So, Allin and I got married in 1981 and moved to Stanford together. Our son Phil was born shortly after, and our daughter Ariane in 1984. Tragically, Allin died of lymphoma in 2007.

PAVING HER PATH: EARLY CAREER

How did you choose your fields of research?

Risk analysis was very interesting to me because I had chosen math and physics as my major and wanted to apply what I knew to important problems. My parents were not enthusiastic about it, because in their view it was not feminine enough. My sisters went into medicine; that was more acceptable. But I stuck to my plans because that's what I liked to do.

Then I made my case worse—I studied computer science! That's where my father was somewhat frustrated. He was an electrical engineer and an officer of the Signal Corps of the French Marines, but he did not know computers. Then I made my case even worse when I announced that I had been accepted at Stanford and was going to the US. Nonetheless, we were always on very good terms, and I suspect that deep down they were proud of what I was doing.

An alternative path, which I seriously considered, was history; I loved reading about it. But when faced with the choice between math and science or classics and literature, I picked science, with the argument that I could always go to the opera and read all the books I wanted, but that if I left science I would become disconnected from it.

I chose applied mathematics and computer science because it was still new at the time, and I attended what must have been one of the first schools in Europe in that field. And when I arrived at Stanford with an engineering degree under my belt, I found it helpful because I was much more easily integrated into the local culture.

How did you choose to do your PhD in EES, instead of continuing with computer science?

My interests were quite varied, and I thought that computer science was a bit narrow for what I was trying to achieve. I picked Operations Research for my master's because I had the best grade in OR in my class, and my OR professor (Dr. Kaufman) was a friend of George Dantzig, then a Professor of OR at Stanford. I obtained my Master's, but quickly realized that EES could give me the opportunity to apply my research to a whole spectrum of important, tangible problems. Risk analysis was perfect for me because it could truly be applied to anything I wanted, starting with earthquakes, which I had never experienced and which scared me! After an accident that prevented me from skiing for a while, I had some extra time in Grenoble, and I completed most of a degree in economics. It was a perfect fit with my interests and with the diversity of topics that I was curious about.

AHEAD OF THE CURVE: CYBERSECURITY

How did you decide to do focused research in cybersecurity?

It was an interesting sequence of events that led me to it. In the late 1980s, I did a major study of the tiles of the heat shield of the US space shuttle, which was my first introduction to NASA. Following that, I became a member of the NASA Advisory Council (I am now a member of it for the second time). I have been working closely with NASA ever since, and in particular with the Jet Propulsion Laboratory, where I am now a Distinguished Visiting Scientist.

I was elected to the National Academy of Engineering in 1995, and to their council shortly after. This platform probably contributed to my appointment, after the attacks of September 11, 2001, to what is now the President's Intelligence Advisory Board. At that time at Stanford, I started doing work on counterterrorism, counterinsurgency, and the kinds of risk analysis where one is not up against Mother Nature, but against other people. That's what eventually led me into the questions of cybersecurity, as it became clear that many of the problems one encounters in those domains are of a similar nature.

A few other things led me to cybersecurity. Earlier in my career, I was a member of the Army Science Board, the Air Force Science Board, and later, the board of In-Q-Tel, which funds companies in support of the intelligence community. At that time, I was also on the Board of Aerospace, and the Board of Advisors of the Naval Postgraduate School, which I chaired once, and of which I am still a member. In all these places, cyber problems are real and their study is a focal point.

Tell us about your current work in cybersecurity:

I am currently involved at Stanford in the two centers that I know of that study cybersecurity. One is based at the Freeman Spogli Institute and focuses on policies. The other is based in the School of Engineering and led by Dan Boneh, a respected specialist in cryptography. Before that, I was part of the cybersecurity initiative.

I've also advised multiple PhD theses in cybersecurity. The first one involved the study of a large database of attacks on a space organization. It first allowed us to do a statistical analysis of the kinds of problems they had experienced. We then extended that analysis to more serious attacks that had not occurred yet, which is the most challenging part of the story. In another thesis, we looked at the optimal timing for replacing the software inside a system, because we had found, for example, that some of the hospitals that had been attacked by ransomware had not updated their software in ten years or more.

I've also had four or five students from the military services, officers at the rank of major and lieutenant colonel. Major Matthew Smith, who is now with the National Security Agency, finished his thesis with me on the optimal level of connectivity in a system such as a smart electric grid. One can introduce new connections between elements of a physical system and the cloud for a benefit, but each new connection also opens a vulnerability to cyber attacks. The question is thus whether it is worth it.
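
(In the simplest terms, and only as an illustration of that tradeoff rather than the actual model developed in the thesis: a new connection with expected benefit B is worth adding roughly when B > p × L, where p is the additional probability of a successful attack through that link and L is the expected damage if it succeeds.)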

Currently, one of my students, Major Isaac Faber, is finishing his thesis on warnings of cyber attacks. It's a very interesting study that considers how systems can be monitored by robots that can decide, for instance, to cut off links, up to the point where a robot realizes that the situation is beyond its capabilities and passes the baton to a human being with more knowledge of the context. At that point, the human can intervene and decide to close additional links to protect the system as a whole, considering all aspects of that decision.

Another of my students, Lieutenant Colonel Travis Trammel, is looking at the cyber aspects of fake news. He is focusing on US elections, particularly political divisions, and the way certain groups, mostly from Russia, are trying to use social networks to exacerbate these divisions in the country.

What do you see as the most pressing issue or research question in cybersecurity today?

We have to stay ahead of the curve and keep our eyes open for new adversaries and new kinds of attacks. Another pressing issue is convincing organizations to gather attack data and share that information whenever they can. I understand why a company that has been targeted is not too keen on disclosing the details, but that information would be a significant advantage to other institutions. But first, organizations need to figure out the true architecture and structure of their own systems, and to monitor not only the vulnerabilities they already know of but also constantly look for new ones.

In addition, there is a major problem of insider threats. Detecting someone inside the organization who is likely to do damage to a cyber system is very difficult. And clearly, one should make sure not to give too much information to people who don't really need it.

These are the few topics that come to my mind at the moment, but I am sure I could find ten others if I thought more deeply about it…

What issues do you think the field of cybersecurity will address in the next five years, or even 20 years?

First, we'll need to address policy concerns: what can we do, from a policy point of view, to protect the country and our political system while also addressing the issues of freedom and privacy? I also think that there will be major progress in the mechanisms of protection of cyber systems. Cryptography is one of the most effective protection tools, but there are many others. Again, sharing information is important in that respect.

Another important part of the story will be how we respond to attacks. Currently, if threatened, one can try to protect one's systems, but in most cases, cannot legally get back at the attackers. I think that unfortunately, we're going to have to move toward a legal model of retaliation or cyber warfare, which we have tried to avoid so far because we did not want an escalation similar, for instance, to what we have seen with nuclear weapons. But that position may no longer be tenable.

RISKS IN SPACE

Another area of your work has revolved around outer space. Tell us more about your current work with NASA:

On the NASA Advisory Council, right now, one focus is going to Mars. There is no question that we want to do it, but when is anybody's guess. The current idea is to first go to the moon, and there are two ways to do it. One is a straightforward landing; the other is to establish some kind of orbiting station, called the Gateway, as a stepping stone. But I think that one of the main difficulties of going to Mars will be human physiological resistance, perhaps even more than some of the technical challenges.

Can you give more context to that comment about the difficulty of living in space?

It will take about four months to get there. The astronauts will then live on Mars for some time and will take four more months to come back. They will live on whatever they are able to bring or to grow, and they will be subjected to all kinds of challenges. Hopefully they will be protected against radiation, but they will live in microgravity for a long time, where the human body loses some of its bone mass. And there are other secondary effects that I am hearing about from my astronaut colleagues. In that respect, it was interesting to meet, years ago, a Russian cosmonaut who had been in orbit for about a year and who said to me: "Problem? I don't see a problem." So it will probably vary widely among individuals. But there are some serious concerns about the effect of such a trip on the heart, bone mass, and psychology. Jokingly, I've said we should take a group of Trappist monks and send them there, because they have practiced living for long periods without talking!

You did a lot of work and received press around your risk analysis of the space shuttle and the failure risks of its heat shield. Can you tell us about that experience?

I was sitting on a plane one day next to a gentleman and we started talking. It turned out he was a professor at Stanford heading a lab where they were working on the tiles of the shuttle. And although I was doing risk analysis at the time, I had not done anything specifically in that area. He invited me to come the following day to NASA headquarters in Washington, where they asked me if I would like to study the engines of the shuttle. I said it was too big a problem—I was not a consulting firm and I only had one or two doctoral students. Then they asked me if I would study the tiles of the heat shield, and I agreed.

I went to Johnson Space Center to see how the tiles had been designed. There was an older gentleman about to retire who was happy to open his files for me. I learned that the tiles were essentially glued to the orbiter's surface. The astronauts were not enthusiastic about this—I talked to Sally Ride and John Young—but the gaps between the tiles were the easiest way to allow the surface to curve slightly depending on the temperature.

Then I put on my jeans and my sneakers and went to Kennedy Space Center to see how the tiles were maintained between flights. I spent a few days there watching the technicians, and in the bars of Cocoa Beach I heard many of the stories that gave me the information I needed to do a risk analysis. With the support of one of my assistants, Paul Fischbeck, I figured out that the risk of losing the mission because of the tiles was about 1 in 1,000 per flight, whereas the overall risk of losing a mission was about 1 in 100. So about ten percent of the risk was attributable to the tiles.
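
(Spelling out the arithmetic behind that figure, using the rough probabilities quoted above: the tiles' share of the total risk is (1/1,000) ÷ (1/100) = 0.10, or roughly ten percent of the overall chance of losing a mission on any given flight.)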

I presented my report to the head of Kennedy, who was very pleased with it and asked me how I got that information, to which I answered: "Sir, I don't think that you could have gotten it yourself the same way—I just went under the orbiter and talked to the people involved."

I then sent the report to Johnson Space Center, where it was mostly ignored (they had not funded it). That was unfortunate. We wrote it in 1990, and in 2003 one of the failure scenarios that we had identified caused the Columbia shuttle accident. I was among the first people to testify in front of the investigation team, and I was asked why NASA had not taken one of my critical recommendations into account. I responded half-jokingly that perhaps a man with a German accent instead of a woman with a French one would have been listened to more readily… Shortly after, I was named to the NASA Advisory Council.

I should add, however, that NASA did implement several of our other recommendations, but this one—about the attachment of the insulation of the external tank—was ignored and turned out to be critical.

A LASTING LEGACY

You were the first chair of the newly-formed Management Science and Engineering department in 2000. What was the vision for the new department?

The main reason why it was created was that we had three small departments, some of which had difficulties hiring faculty, and some that had difficulties projecting their image. So John Hennessy, who was then the Dean of Engineering, decided to pull them together with the help of Bill Perry, now Professor Emeritus of MS&E, and he asked me to chair that new department.

The vision was to make sure that we had a true department of engineering that addressed management issues with all the tools that engineering provides. One of the difficulties at the time was to find a title, and it had to have "engineering" in it. One of the legacy departments was named Industrial Engineering and Engineering Management, and thank goodness we had secured that term, so we could call the new department Management Science and Engineering.

What lasting impact would you like to see your research have in the field and in the world?

What I would like to leave behind is a framework of risk analysis that communicates uncertainties well, and a quantitative characterization of risk that includes probabilities, technical issues, and human and organizational factors. Risk involves not only what can go wrong, but also the probability that it happens and the consequences if it does. Then, risk management decisions have to involve the way people feel about those consequences, which economists capture in the utility function. But the risk analysis itself has to be as unbiased and free of preferences as possible.
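
(In the standard quantitative formulation of that idea, given here as a textbook sketch rather than a summary of her specific framework: risk is characterized as a set of scenarios s_i, each with a probability p_i and consequences c_i, and risk management then compares options by their expected utility, the sum over scenarios of p_i × u(c_i), where u is the decision maker's utility function.)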

I think that the communication of uncertainties has been one of the major challenges that I have encountered. I have often had to explain what the chances were that a failure might happen—and how we had come to the answer—so that we could compare various risk management solutions and their cost-effectiveness.

What do you enjoy doing in your free time?

I love to travel with my husband to warm places! I like sailing—I just got my American sailing license—and I love swimming. I read a lot of things that have nothing to do with my work—novels, history, biographies, etc. I read in French as much as in English. I also really like walking at night in the hills near our home, in spite of the steep roads and the horror stories about mountain lions that people love to tell me! But it is good exercise. I play—badly but it is fun and relaxing—the piano and the guitar. And I spend a lot of time with my husband Jim Ellis, a retired Navy Admiral and now a fellow at Hoover, and with my children, talking about what they do in space (Ariane works for Blue Origin), energy policies (Phil is with the Atlantic Council) and everything that's on our minds…

Interviewed by Jim Fabry and Tim Keely.
