Faculty Research

BYU Professor, Partners Create First-Ever Risk Guide to Help Organizations Safely Adopt AI

BYU Marriott professor teams with ASU, University of Duisburg-Essen, industry coauthors to create GenAI Governance Framework


June 20, 2024 — Ever since ChatGPT went live in November 2022, artificial intelligence has flooded virtually every industry on the planet. And while generative AI has forever changed the way people work and do business, organizations and institutions worldwide are scrambling to safely adopt the evolving technology.

Hoping to help companies of all sizes responsibly harness the power of AI while also managing its risks, professors from Brigham Young University, Arizona State University and the University of Duisburg-Essen have collaborated with software company Boomi and consulting firm Connor Group to create the first-ever enterprise risk framework for generative artificial intelligence.

“GenAI pushes the boundaries of current governance structures by creating entirely new information that did not exist before,” said framework author David Wood, a BYU professor in the Marriott School of Business. “As such, it introduces a multitude of new possibilities and risks that organizations of all sizes must confront. We designed the framework to help any size organization think through the most important risks that they will face from GenAI.”

Wood and fellow authors Scott Emett (Arizona State University), Marc Eulerich (University of Duisburg-Essen) and Jason Pikoos (Connor Group) say that while individuals frequently use GenAI directly through purchased products like ChatGPT or Google’s Gemini, they may also engage with it unwittingly. For example, employees might unknowingly use GenAI through software programs containing embedded GenAI components, such as Microsoft’s Copilot.

Built in collaboration with over 1,000 business leaders, academics, and industry contacts, the 20-page GenAI Governance Framework and accompanying GenAI Maturity Model provide a detailed guide and comprehensive methodology for organizations to assess their AI readiness, identify and manage risks associated with GenAI technologies, and move into responsible GenAI adoption.

Wood, who has published a number of high-impact academic papers on AI since 2022 and has led efforts to introduce its use in course curriculum at BYU’s Marriott School, coordinated with coauthors to make sure the framework allows for easy customization to align with an organization’s objectives, needs, and risk appetite. The guide breaks down governance into five major domains, as outlined in the image below:

Infographic explains the components of GenAI Governance Framework

Provided as a free resource by the team of collaborators, the framework also offers an approach to pinpointing vulnerabilities and implementing controls, ensuring that GenAI technologies are deployed securely and responsibly.

“The governance of GenAI is an important topic for all internal and external assurance providers,” said Eulerich, Dean and professor at the University of Duisburg-Essen’s Mercator School of Management. “Our framework gives helpful guidance to create an overall governance structure that will help organizations to minimize the risks of GenAI.”

Project partner Jeff Pickett said that effective AI adoption will be a massive competitive advantage, but many don’t know where to start or how to apply it. “A few AI tools exist now, with many on the way, and they are coming fast,” said Pickett, Connor Group Chair and Founder. “Having a smart AI adoption strategy with underlying controls, data, and processes that are ready for AI takes time. The most competitive companies are doing these things now.”

Learn more about the GenAI Governance Framework here.

Written by Todd Hollingshead