Toward the Future: GWSB and Trustworthy AI


April 23, 2024


A panel discussion focused on Trustworthy AI was held as part of the 2024 GW Business & Policy Forum: Imagining the Future with AI.

Trustworthy AI will be shaped by high-performance algorithms, effective governance, and the input of people from different cultural backgrounds, according to the high-profile participants at the 2024 GW Business & Policy Forum. It will also involve the expertise of academics and researchers who value an interdisciplinary approach to understanding the world’s challenges.

In other words, it will come from the vision and principles that already underpin the mission of the GW School of Business, where several faculty members are engaged in research and teaching focused on AI.

The April 2 Business & Policy Forum marked the second year in a row that the School of Business spearheaded the organization of the event. This year’s theme, Imagining the Future with AI, underscored the university’s leading-edge role in shaping policy on consequential issues, an area in which the School of Business has deepening expertise.

Among GW Business faculty, Patrick Hall, assistant professor of decision sciences, conducts research in support of the National Institute of Standards and Technology (NIST) AI Risk Management Framework, sits on the advisory board of the AI Incident Database, and teaches graduate and undergraduate courses that examine responsible AI. Donna Hoffman, the Louis Rosenfeld Distinguished Scholar, and Thomas Novak, Denit Trust Distinguished Scholar, co-direct the school’s Center for the Connected Consumer and are engaged in research that looks at consumer interaction with AI. 

The school offers a Graduate Certificate in Artificial Intelligence, while courses in several disciplines—among them marketing, and information systems and technology management—examine the role of AI. The GW Center for International Business Education and Research (GW-CIBER) has included AI in its podcast discussions on global careers, and Microsoft’s chief Responsible AI officer has taken part in the George Talks Business interview series. Internally, an ad hoc group of faculty members tracks the use of AI in the classroom.

Responding to AI’s Impact

During the GW Business & Policy Forum, Erwin Gianchandani, assistant director in the Directorate for Technology, Innovation and Partnerships at the National Science Foundation (NSF), described AI as “one of the most, if not the most, interdisciplinary spaces that we work in at this time.” As moderator of a discussion on Trustworthy AI, he tapped the insights of panelists Jill Crisman, the vice president and executive director of the Digital Safety Research Institute at UL Research Institutes; Andy Henson, the senior vice president of the Digital Innovation Factory at SAIC; and Elham Tabassi, the chief technology officer at the U.S. AI Safety Institute and NIST’s chief AI advisor.

Gianchandani engaged the panelists on issues related to the trustworthiness of AI technologies and the perceived positives and negatives of generative AI.

As with the development and adoption of electricity, Crisman said, trust is a crucial component of AI’s evolution. Tabassi noted that NIST treats trustworthy AI and responsible AI as related but separate concepts.

“Responsible [AI] takes in the human elements. Trustworthy is the systems. How private is private, how safe is safe, how secure is secure depends on the context,” Tabassi said. She also said the development of measurement standards, part of NIST’s role as a nonregulatory research agency under the U.S. Department of Commerce, is vital to the advancement of U.S. innovation and industrial competitiveness. “If you cannot measure it, you cannot improve it,” Tabassi explained, referring to AI’s impact.

SAIC’s Henson said AI’s influence will be felt across people’s lives, from how and what their children are taught in school to the jobs they hold.

“How do I wrap my arms around this? In companies that are struggling, do I ban it? Do I not ban it? Do I allow my data to be used? I think that the interdisciplinary sprawl is huge,” he said. “We’re hyper-focused on how you use the technology to solve problems. [But] it’s got to work for the person, it’s got to solve their problem.”

Elevating Trust Amid Rapid Innovation 

The panelists agreed that the community that develops trustworthy AI must extend beyond computer scientists, mathematicians and technology experts to include psychologists, sociologists and even English majors and philosophers.

“AI is a foundational technology that other types of domains can be built on top of—finance, health care,” Tabassi said. “It is a vertical approach … that AI risk managers need to work with. It is messy and complex but also wonderful.”

She said many perspectives must be built into AI work to ensure that diverse voices are included. For his part, Gianchandani predicted that AI will become more understandable, and doubts about it will lessen, if a broader cross section of interests and disciplines works on its evolution. He advocated bringing more public and private stakeholders together, including in academic settings like the forum, to deepen the national conversation.

Henson agreed, pointing out that generative AI has dramatically stepped up the pace at which the AI space is developing. “It’s happening so much faster than we realized,” he said. “We on the operational side have to bring the real-world problems to researchers [now].”

Tabassi, too, said the rapid pace is challenging professionals engaged with AI.

“We don’t know how to evaluate AI. We’re trying to do all these things in a space where the speed and change of technology, and the time from which the item comes to market for widespread adoption, are shorter and shorter,” she said. “We have to find out how to do the cycle of operational research. Every component, from the community… to those using technology, to regular citizens, has to come together.”

Gianchandani pointed to the National Artificial Intelligence Research Resource (NAIRR) pilot, a new program that seeks to build a shared research infrastructure for AI innovation. The undertaking is led by the NSF in partnership with 10 other federal agencies and 25 nongovernmental partners. The pilot program, which launched on Jan. 24, 2024, and will run for two years, broadly supports fundamental, translational and use-inspired AI-related research, with a particular emphasis on societal challenges. Priority topics include safe, secure and trustworthy AI; human health; and environment and infrastructure. The pilot also supports educators in training students on the responsible use and development of AI technologies.

Broadening AI’s Stakeholders 

AI forum panelists shared their ideas for elevating trust in generative AI. 

“AI has made creativity, software programming, all kinds of things available to everyone. As you play with these new technologies … think, ‘How could I use AI more responsibly?’” Crisman said. “Get it to help you on things you already know about before you start exploring areas where you have less input.”

Henson agreed, noting that many AI applications—such as ride-sharing services and restaurant reservation systems—are already viewed as helpful and trusted. Tabassi said the dissemination of more science-based data could also help maximize an understanding of AI’s benefits while helping to minimize its risks. The panelists also discussed the need to train and educate everyone about AI.

“Who decides the positives and failures of AI systems? Who decides about the impacts? I think everyone has to decide,” Crisman said. “What if there was an AI model that could look at treatment for rare diseases, but flipping a switch could make it one of the worst solutions?

“We have to really think about what we want as a society,” she added.

Tabassi characterized the same challenge in a different way: “We need to change the conversation from ‘can be’ to ‘must be.’”

GW used the daylong forum as an opportunity to announce the launch of the GW Trustworthy AI Initiative, an umbrella entity for the Institute for Trustworthy AI in Law & Society (TRAILS) and the Co-Design of Trustworthy AI Systems (GW DTAIS) program, an NSF-funded effort focused on PhD students.

TRAILS is designed to transform AI from a practice powered by technological innovation to one that is also driven by ethics, human rights and the input of previously marginalized communities. It is funded by a $20 million award from the NSF and NIST and is the first organization to integrate participation, technology and governance during the design, development, deployment and oversight of AI systems.

The initiative operates as a collaboration among GW, the University of Maryland and Morgan State University. Its first nonacademic partner is SAIC.

GW DTAIS, meanwhile, is a research traineeship program that also offers a Graduate Certificate in Trustworthy AI for Decision-Making Systems, giving professionals and graduate students the skills to address challenges in the AI space and to lead initiatives at their organizations.