
Innovating in AI: Four stories of MS&E faculty

MS&E faculty are influencing how artificial intelligence integrates into society by working on the speed, efficiency, profitability, and transparency of AI-powered decisions.
Four MS&E faculty share how their work impacts and utilizes artificial intelligence | Images by Stanford

Lately, media headlines are full of the latest AI marvels. Machine learning advances in business, healthcare, and government are attracting public attention and stimulating questions about what’s possible in the future. 

MS&E faculty are major contributors to the AI revolution and are decisively shaping the next wave of AI advancements. Below, meet four MS&E professors who are innovating in AI.

Ben Van Roy: Teaching AI to learn faster and better
Madeleine Udell: Using AI to help ordinary people solve extraordinary problems
Markus Pelger: Using AI to make better financial investments
Kay Giesecke: Making AI-powered decisions more transparent

Ben Van Roy: Teaching AI to learn faster and better

While it seems like ChatGPT and "AI-powered everything" happened overnight, according to MS&E professor Ben Van Roy, it's actually the culmination of decades of work. "I've been telling people for some time that this is coming. A lot of them seemed to think I was making this up."

Image courtesy of Ben Van Roy

Van Roy would certainly know; he has been a pioneer in reinforcement learning since the early 1990s and joined the Stanford faculty in 1998. He laughed as he described how another prominent reinforcement learning researcher once introduced him at a conference, saying she was inspired because he was proof that a reinforcement learning researcher could actually get a job.

At the time, reinforcement learning was considered an obscure subject, while other approaches to AI took the spotlight. Mainstream thought in the field of AI was somewhat dismissive of the possibility that learning from massive data, as is done by today's artificial neural networks, or learning from human interaction, as is done by reinforcement learning, could give rise to intelligence.

We now know that artificial neural networks and reinforcement learning power the leading approach to AI, producing astounding capabilities. This has, for example, driven the mind-blowing improvements in large language model-based chatbots like ChatGPT.

Van Roy points to a period of about a decade ago when there was a lot of momentum and investment in the industry, which led people to develop hardware and software that enabled the efficient use of training algorithms. "When you can bring the amount of time it takes to train a model down from a full week to 20 minutes, it suddenly stimulates a lot of research. People can play around and try stuff, and this has fueled a lot of progress over the past 10 years," said Van Roy.

During this period of intense innovation, Van Roy founded DeepMind's Mountain View research team. Van Roy and his team continue to work on efficiency so computer agents can learn faster from even less interaction with their environments. This is still the reinforcement learning "holy grail," so to speak. For his contributions to the field, Van Roy received the Frederick W. Lanchester Prize from INFORMS.
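The learn-from-interaction loop at the heart of reinforcement learning can be sketched in a few lines. The toy below is a generic tabular Q-learning agent on a made-up five-state chain, purely illustrative and not Van Roy's research code: the agent starts knowing nothing, and through repeated trial and error learns that moving right reaches the rewarded goal state.

```python
import numpy as np

# Toy tabular Q-learning on a hypothetical 5-state chain: the agent must
# learn, from interaction alone, that moving right reaches the goal.
rng = np.random.default_rng(0)
n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.1   # learning rate, discount, exploration rate

for episode in range(200):
    s = 0
    while s != n_states - 1:
        # epsilon-greedy: mostly exploit current estimates, sometimes explore
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0  # reward only at the goal
        # Q-learning update: bootstrap from the best next-state value
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

greedy_policy = Q.argmax(axis=1)  # learned policy: "right" in every non-goal state
```

The research question Van Roy describes is exactly how to shrink the number of environment interactions a loop like this needs before the learned policy becomes reliable.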

When talking about what’s next for AI, Van Roy highlighted topics his students are interested in, like continual learning, where an AI could learn and interact over a very long lifetime. "Once you get it to work well," he said, "it just continues, developing increasingly sophisticated skills and capabilities that we haven't seen yet."

When it comes to AI, his suggestion to all of us is to avoid getting caught up in today's technologies. "AI is a field defined by aspirations, not by methods," said Van Roy.

Basically, we don't know what AI will look like or be capable of in the future, but it will likely be very different from what we see today.

Madeleine Udell: Using AI to help ordinary people solve extraordinary problems

For decades, MS&E has been at the forefront of operations research, with our faculty creating optimization models that support sound decision-making in complex situations: minimize the undesirable outcomes, maximize the desired ones. For example, planning vaccine distribution, designing a safe pre-trial release plan for criminal justice, and constructing an investment portfolio with a predetermined level of risk can all be modeled as optimization problems.

Image courtesy of Madeleine Udell

Here's the issue: most of us don't have the technical know-how or the math skills to do it. That includes the people tasked with high-stakes decisions, like county health officers, grant administrators, and leaders of government agencies.

Enter MS&E professor Madeleine Udell, who wants to make automated optimization modeling tools to help non-techies determine the best set of choices for complicated situations.

"A fraction of all difficult problems actually get modeled and solved using optimization solvers," said Udell. "And that's because it's really difficult to model a problem and go from the vague things that you know about the world to the math that you need to solve and find the provably optimal decisions."

Consider a vaccine distribution problem as an example. Let's say you can make 100 million doses of vaccine and need to distribute it in a way that minimizes bad health outcomes and product waste. You'll have to consider transmission dynamics, demographics, distribution capabilities, a limited healthcare staff, and on, and on, and on.

For a modeling project like this, the first layer is the optimization modeler, who might sit down with the state health authority to flesh out the variables, build a model, and yes, do some math. And then you have a separate optimization solver to do the ongoing calculations for an optimal solution (more math, plus computer software).

What if a tool like ChatGPT, which has become possible through advances in large language models, could do some of this (or—gasp!—all of it) and the decision-maker could just talk directly to the chatbot to create their model? "I love the idea of being able to design tools for a broader range of people and to get the expert out of the loop," said Udell.

Putting powerful modeling tools in the hands of those in charge of fixing serious societal problems could facilitate faster and better solutions at a larger scale.

"What I've always enjoyed about optimization modeling is that it interacts with facts about the real world and problems that impact people's lives," said Udell.

If Udell's research and that of her students could put this superpower of optimized decision-making into the hands of a much larger group of decision-makers, just imagine what's possible for society's messiest, most complicated problems.

Markus Pelger: Using AI to make better financial investments

MS&E professor Markus Pelger is working to perfect machine learning techniques to help finance professionals make better investments. “If you have a better understanding of risk and a better understanding of how to manage information, then you should be able to make better financial decisions,” said Pelger.

Image courtesy of Markus Pelger

Pelger is part of the Stanford Advanced Financial Technologies Lab (AFTLab), a group dedicated to advancing next-generation financial technologies using big data, machine learning, and computation.

In his latest research, Pelger is building a smarter AI to tackle the classic problem of predicting stock market returns. According to Pelger, market predictions are incredibly tricky because the amount of available data is fixed, and that data has lots of variables to consider. There may be 1,000 to 8,000 companies to analyze, which sounds like a lot, but it's a minuscule amount compared with the kind of data sets AI is capable of parsing.

How does Pelger deal with the variables and not-so-big data in his model? He carefully applies constraints in the form of domain-specific knowledge. His biggest innovation is applying a "no-arbitrage" constraint.

To understand what that is and how it's applied, Pelger suggests that you look at two companies in the same industry, like Ford and GM. You can expect that their market performance should be similar because they're in the same industry and have a similar business model. The no-arbitrage constraint would declare that companies that are exposed to the same sorts of risks should, on average, have the same returns. So by applying a no-arbitrage constraint on the model, you prevent too much deviation between closely related companies.
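One simple way a constraint like this can enter a model is as a penalty term in the fitting objective. The sketch below is a toy regularizer, not Pelger's actual model: it fits return predictions from made-up firm characteristics while penalizing any gap between the predictions for designated pairs of "similar" firms, the Ford-and-GM idea from above.

```python
import numpy as np

# Illustrative toy of a "no-arbitrage"-style penalty: firms with the same
# risk exposures are pushed toward the same predicted return. All data and
# firm pairs are invented; this is not Pelger's actual model.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))                 # firm characteristics
true_w = np.array([1.0, -0.5, 0.0, 0.0])
y = X @ true_w + 0.1 * rng.normal(size=200)   # noisy observed returns

# Pairs of "similar" firms whose predicted returns should not diverge.
pairs = [(0, 1), (2, 3)]
lam = 5.0  # penalty strength

# Objective: ||X w - y||^2 + lam * sum over pairs (x_i w - x_j w)^2.
# Stacking the pair differences into D gives the closed-form solution
# (X'X + lam * D'D) w = X'y.
D = np.array([X[i] - X[j] for i, j in pairs])
w_pen = np.linalg.solve(X.T @ X + lam * D.T @ D, X.T @ y)
w_ols = np.linalg.solve(X.T @ X, X.T @ y)     # unconstrained fit, for comparison

# The penalized fit keeps similar firms' predictions closer together.
gap_pen = np.sum((D @ w_pen) ** 2)
gap_ols = np.sum((D @ w_ols) ** 2)
```

Raising `lam` tightens the no-arbitrage discipline at the cost of fitting the raw returns less closely, which is the domain-knowledge-versus-data trade-off the article describes.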

The result? A more profitable model. Measured by risk-adjusted performance, Pelger's model is three times as profitable as the standard, off-the-shelf machine learning methods in use today.

Kay Giesecke: Making AI-powered decisions more transparent

According to MS&E professor Kay Giesecke, AI can be a tool for better decision-making in a variety of industries, but there’s an elephant in the room: trust. "People often prefer more transparent but inferior algorithms that have been around for decades. They're using them because they understand them," said Giesecke.

Image courtesy of Kay Giesecke

Giesecke's solution to this problem is what he calls "explainability tools." Based on rigorous statistical tests, Giesecke's tools can identify the underlying variables and attributes of a problem that have a statistically meaningful influence on the output of the algorithm. Once those are defined, you can determine how the AI is making its decisions, fostering trust and enabling a human gut-check of that decision-making process.

Increasingly, machine learning tools are being used for a variety of decisions impacting people on a daily basis, for example, getting approved for a loan, determining bail at a hearing, and even healthcare screenings. But it's hard to trust a computer when you can't look it in the eye or hear the "why" behind the decision.

This is increasingly the case in the financial space. According to Giesecke, "Machine learning algorithms are making predictions about the likelihood of a loan applicant repaying their loans for a home mortgage or a credit card. Also, this is a highly regulated space, and regulators actually require the decision to be transparent and to be justified to the applicant. So we need methods to look under the hood of these new machine learning algorithms producing predictions."

His goal is to develop methods that are broadly applicable to many machine learning algorithms and to application areas beyond finance. "The lack of explainability is holding back the adoption of machine learning methods in the financial field and beyond. I want to give the user insight into the behavior of the machine, making it transparent and auditable, so the user can trust it," said Giesecke.

Opening the black box could help ensure that decisions follow appropriate rules and are fair and unbiased. That, in turn, promotes trust in these technologies, enabling both technological advancement and better decision-making.

Conclusion

The research efforts of these four professors, who are working on the speed, efficiency, profitability, and transparency of AI-powered decisions, are influencing how AI integrates into our society. They are joined by other MS&E faculty and students currently conducting foundational research in AI and machine learning. These innovations in computer-aided problem-solving are poised to change how we use AI next year, next decade, and beyond.