
The Role of AI in Corporate Social Responsibility

According to a national survey of company officials, revealed in the report Responsible AI Management: Evolving Practice, Growing Value, corporate efforts to use artificial intelligence in a more socially responsible manner have an unexpected benefit: they can frequently improve product quality.


Image Credit: Anggalih Prasetya/Shutterstock.com

The survey respondents ranked product quality as the area of their businesses that benefited the most from implementing responsible AI management (RAIM) practices, surpassing more obvious choices such as reducing regulatory and legal risk.

That response surprised one of the survey’s leaders, Dennis Hirsch, faculty director of The Ohio State University’s Program on Data and Governance.

We did not expect that the primary response for how AI governance would create value would be by improving product quality. That is very interesting and encouraging.

Dennis Hirsch, Survey Leader and Faculty Director, Program on Data and Governance, The Ohio State University

The Program on Data and Governance at Ohio State's Moritz College of Law and Translational Data Analytics Institute created the report.

The corporate rush to use AI has raised concerns regarding possible harm and misuse, including privacy violations, discrimination, and misinformation.

The survey, distributed in early 2023, examined RAIM procedures at companies that create and apply AI. Those identified as data governance officials at U.S. companies received the survey via email. A total of 75 people completed the survey, most of whom were employed by large corporations with 1,000 or more employees and $10 million or more in yearly revenue.

The survey respondents came from a wide range of business sectors, including consumer goods, financial, health care, and information technology.

Hirsch said that few companies today have effective RAIM programs in place, as evidenced by the survey’s comparatively low response rate and the majority of responses coming from large corporations.

Hirsch added, “We think the largest companies have the most resources and are most engaged in AI governance.”

What, precisely, are the responsible AI management practices that companies are employing?

According to the study, assessing regulatory risk, determining stakeholder risk, creating a RAIM management structure, and implementing standards like RAIM policies and AI ethics principles were the most often reported RAIM activities.

According to the findings, 68% of those surveyed stated that RAIM was either very important or important to their business. Even among the large businesses that responded, however, implementation lagged behind that stated importance: the majority of respondents reported that their RAIM programs were still in their infancy.

This survey was conducted prior to the explosion of generative AI and the widespread use of tools such as ChatGPT, so Hirsch believes the situation may be changing. More companies are probably realizing that they need to regulate the use of AI, but it is still not as widespread as it should be, he said.

He said more companies could make investments in RAIM if they are aware of the larger companies’ experience and the value they believe it brings to their businesses.

Nearly 40% of respondents said that their company’s responsible AI management programs provided “a lot” or “a great deal” of value. An additional 38% said the programs generated “a moderate amount” of value. No respondent said they provided no value at all.

Perhaps most remarkably, however, survey respondents believed that RAIM was most valuable in the area of product quality. How RAIM enhanced product quality was not a question in this survey, and Hirsch stated that further research is necessary to fully understand this finding.

He stated, “Our preliminary take is that it improves product quality by promoting AI innovation and better meeting customer expectations.”

This could be explained by a 2018 study conducted by the Program on Data and Governance, which included interviews with corporate AI governance professionals.

Data governance is often perceived as stifling innovation because it limits what people can do. Those interviewed for the 2018 study, however, said the opposite was true.

Hirsch further added, “If employees have standards and policies and guidelines about how they can use AI, they can innovate with a lot more confidence. It can actually unleash innovation, rather than dampen it.”

The new report stated, “These results suggest an important, new way of thinking about AI management – as a source of value and competitiveness, and not just a way of mitigating risks and costs.”

While this is good news for companies implementing responsible AI management practices, Hirsch emphasized that the survey included mostly large corporations.

He added, “I think if we looked more broadly at businesses around the country that use AI, you wouldn’t see such an optimistic picture of the view of the importance of AI governance.”

Most businesses are still in the early stages of implementing AI management.

According to Hirsch, more companies should conduct algorithmic impact assessments to determine whether their use of AI will harm customers or others. They also need to create a management structure and substantive policies that will assist their employees in determining how to use AI.

He concluded, “We need to do a lot more with companies to help them understand how to responsibly use AI.”
