
AI backlash and what the fight is all about


The AI arms race, including the meteoric rise of GenAI, has captured the world’s attention, for reasons both good and bad.

First, some of the good: The global nonprofit Radiology Without Borders is using AI to help healthcare providers in developing nations improve breast cancer diagnosis and treatment. A tech-based nonprofit, Beyond 12, uses machine learning with its student coaching app to analyze data and deliver insights that help higher-education students succeed. And a nonprofit focused on immigration, Justicia Lab, is developing AI-powered technology solutions to reduce the workload for legal and social services providers and streamline the immigration process.

And now some of the bad: Despite the enormous promise of AI, many of the humans tasked with using it are concerned about the trustworthiness of AI-generated output and potential job displacement—and whether the significant investments many organizations are making in the technology will pan out.

Consider the growing threat of deepfakes: AI-generated images, audio, and video created to spread disinformation. As AI technology improves and deepfakes become harder to detect, concerns rise about how they can be weaponized in politics and about their risk to businesses that could find their brands or executives featured in deceptive content.

“Generative AI is impacting business and society in a way we have not witnessed before,” says Pierpaolo Vezzosi, vice president of solution management, artificial intelligence at SAP. “In many ways, we can no longer trust everything we see or hear.”

Job disruption is another ongoing concern. Goldman Sachs predicts that GenAI could automate nearly one-quarter of current work across industries, exposing up to 300 million jobs globally. High-risk roles include copywriting, coding, and customer service.

“A lot of employees are very concerned that they will be replaced by AI,” says Jacqui Irwin, a California State Assembly member who earlier this year introduced a bill that would require software developers to disclose what data they use to train their AI models. The bill was one of several AI measures Governor Gavin Newsom signed into law in September, reflecting growing concern among legislators and the public about AI’s effect on consumers and businesses.

AI advocates in both the public and private sectors are already taking steps to mitigate some of the backlash. They’ll need to do more to keep the AI bubble from collapsing under the weight of disappointing outcomes and unrealized expectations.

“The collapse of the generative AI bubble—in a financial sense—appears imminent,” writes Gary Marcus, a scientist, best-selling author, and frequent critic of the current state of AI. “To be sure, generative AI itself won’t disappear. But investors may well stop forking out money at the rates they have [been], enthusiasm may diminish, and a lot of people may lose their shirts.”

A matter of trust

Every innovative, powerful technology inevitably passes through a period of resistance, the phase Gartner famously dubbed the “trough of disillusionment” in its technology hype cycle. AI has pushed the hype cycle to the extreme, offering game-changing innovation but also opportunities for deal-breaking disappointment. Case in point: a pilot program for AI-based order-taking that McDonald’s launched at more than 100 drive-through locations. After a string of AI-induced errors with customers’ food orders, the fast-food giant abandoned the program.

The success or failure of AI may boil down to one important point: whether people trust the technology. In a 2023 global study from KPMG and the University of Queensland that examined the public’s trust in AI, 61% of respondents said they are wary of trusting AI systems, and 67% reported low to moderate acceptance of AI.

“Trust is a very complicated topic, and it's not going to have a very straightforward solution,” says Ilana Golbin Blumenfeld, a director at PwC Labs who focuses on new tech and AI. “You can’t press a button and make something trustworthy, especially when you’re talking about adopting [AI] applications on a massive scale in ways that literally touch everyone in the business. So we’re seeing a lot of emphasis on redesigning operating models, organizational practices, governance, testing, and training to develop a trusted mindset around AI.”

Concerns about AI in business are attributable in some ways to a growing expectation gap between employers and employees. In a study by The Upwork Research Institute, 96% of C-suite leaders say they expect the use of AI tools to increase their companies’ overall productivity levels. But 47% of employees using AI say they have no idea how to reach the productivity gains their employers expect, and 77% say these tools have actually decreased their productivity and added to their workloads.

That disparity seems to imply that AI training is imperative to its success. “AI will create opportunities for new jobs and new roles, so it will be really important to properly train your employees,” says Vezzosi. “The faster they learn, the faster the impact will be absorbed into the organization.”

Success with AI in business “has to be a story about empowerment,” says Dan Diasio, global artificial intelligence consulting leader at global consultancy EY. “There is a risk that if we don't think about people as part of the solution, we're going to run straight into a backlash of people thinking that AI is there to take their jobs or make their work even more challenging, as opposed to improving their experience.”

Assembly member Irwin is even more blunt: “We need to make sure that companies realize they are responsible for training their employees on these AI tools,” she says. “If not, we are heading toward a disaster.”

 

In search of ROI

Ongoing trust issues and concerns about the effect of AI on the workforce have not dampened enthusiasm for AI technology investments—at least not yet. Worldwide spending on AI-enabled applications, infrastructure, and related IT and business services will surpass $630 billion by 2028—more than double the current spending levels, according to IDC.

What’s more likely to drive backlash from boards of directors and senior leadership teams is an age-old challenge: ROI. We’re already seeing signs of resistance, as unfettered investment has left some CEOs and CFOs asking, “Where’s the payback?”

“Proving AI works is actually not hard,” says Bret Greenstein, a partner at PwC who leads the firm’s GenAI go-to-market strategy. “Showing how AI improves your ability to respond to a customer is a much more important challenge,” he adds, citing a common use case for AI. His point is that organizations need to do a better job of tying AI initiatives to business outcomes rather than investing in one-off experiments that don’t correlate to business value. “Companies that are still building a lot of [proofs of concept] and a science lab on the side are going to be continually frustrated about a lack of progress,” Greenstein says.

Analyst firm Gartner predicts that at least 30% of GenAI projects will be abandoned after proof of concept by the end of 2025, attributing the fallout to poor data quality, inadequate risk controls, escalating costs, or unclear business value. The firm estimates that companies can spend $750,000 to $1 million to integrate GenAI into existing applications, and a whopping $5 million to $20 million for big-ticket initiatives such as customizing GenAI models or building their own, with additional recurring costs.

“It’s still unclear to many CIOs... of the value they are getting out of AI experimenting, so we’re seeing the costs of AI spiraling,” J. R. Storment, executive director of the FinOps Foundation, tells CIO.

The resulting backlash from runaway spending and disappointing results could cause many organizations to pull back on AI investments—which is not necessarily bad if it causes them to narrow the focus of their efforts. But taking risks to potentially gain an edge is part of business.

“CFOs consistently tell me they need to go from 200 or 500 use cases down to about 5,” says EY’s Diasio. “Right now [investments are focused] on a whole bunch of small things instead of the use cases that drive systemic value.”

 

AI regulation is ramping up

Legislators are tackling concerns about AI privacy, transparency, and security with a variety of bills and frameworks. AI regulation is still in its early days, but we’re seeing change on several fronts. For example:

  • In March 2024, the European Union adopted the AI Act, a comprehensive legal framework that defines requirements for data quality, transparency, oversight, and accountability for the development, distribution, and use of AI systems. The legislation will take full effect in 2026.
  • At least 16 countries in the Asia-Pacific region have introduced or adopted various types of AI regulation or non-binding standards for responsible use, data security, end-user protection, and human autonomy, according to Sidley, a global law firm.
  • In October 2023, the Biden administration released an executive order outlining non-binding principles for the “safe, secure, and trustworthy development and use of artificial intelligence.” The White House also released the Blueprint for an AI Bill of Rights, a set of five principles and practices to help guide the design, use, and deployment of automated systems to protect the rights of the American public in the age of AI.
  • The U.S. Congress has yet to introduce comprehensive federal legislation or regulations for AI, but 34 individual state legislatures have enacted or proposed legislation addressing AI oversight. In California, Irwin’s bill “doesn’t prohibit or constrain AI systems from being developed,” she says. “It just reminds developers that their choices in training a model are important to users, and if they want to gain trust, they need to be open and communicative about what the training choices are.”

Irwin believes more collaboration among legislatures to enact uniform policies will be crucial to overcoming ongoing trust concerns about AI.

“How we regulate AI is going to be an ongoing question,” says Irwin, who worked as a systems engineer at Teledyne Systems and at Johns Hopkins University’s Applied Physics Lab before transitioning to public service. “We're trying to strike a balance between enabling innovation and putting guardrails around a technology that we don't even know the limits of.”

 

Responsible AI requirements

Many businesses are taking their own steps to address concerns and potential pitfalls of AI deployment, giving rise to the concept of “responsible AI.” A variety of frameworks and best practices are being launched, including SAP’s “3 Rs,” which the company follows for its AI development efforts. The framework’s guiding principles—making AI relevant, reliable, and responsible—are imperative for building a foundation of trust for AI that can drive usage and successful results across an organization.

To help businesses address real-world problems, AI models must be grounded in each organization’s own enterprise data rather than relying on purely public large language models. Using data that is specific to the business gives AI tools better context for delivering useful outputs, Vezzosi notes.
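In practice, grounding a model in business data often means retrieving the relevant records and placing them alongside the question before the model answers. The Python sketch below illustrates the idea with a hypothetical in-memory record store and a naive keyword lookup; the record fields and the `call_model` function mentioned in the closing comment are assumptions for illustration, not any particular vendor’s API.

```python
# Minimal sketch of grounding a prompt in enterprise data (illustrative only).
# RECORDS and call_model (mentioned at the end) are hypothetical stand-ins,
# not a specific product API.

RECORDS = [
    {"id": "INV-1042", "text": "Invoice INV-1042 for Acme Corp is 45 days overdue."},
    {"id": "INV-1060", "text": "Invoice INV-1060 for Acme Corp was paid on 2024-09-12."},
]

def retrieve(question: str, records: list, top_k: int = 2) -> list:
    """Naive keyword retrieval: rank records by word overlap with the question."""
    q_words = set(question.lower().split())
    return sorted(
        records,
        key=lambda r: len(q_words & set(r["text"].lower().split())),
        reverse=True,
    )[:top_k]

def grounded_prompt(question: str) -> str:
    """Pair the user's question with company-specific context before calling a model."""
    context = "\n".join(r["text"] for r in retrieve(question, RECORDS))
    return (
        "Answer using only the company data below.\n"
        f"Company data:\n{context}\n\n"
        f"Question: {question}"
    )

print(grounded_prompt("Which Acme Corp invoices are overdue?"))
# The assembled prompt would then go to whichever model the business uses, e.g.:
# answer = call_model(grounded_prompt(question))  # call_model is hypothetical
```

The design point is that the model only sees questions already framed by current business records, rather than answering from its general training data alone.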

Relevance also means deploying AI where employees and customers actually need it.

"You don’t want people to start using GenAI in company business processes for things it was not designed for,” says Vezzosi. “You’ll end up with bad results, and people will lose trust.”

Making AI relevant for the workforce requires addressing its “black box” concerns, too. “If we are going to fully utilize the benefits of AI, we need to know how it works or people will reject it,” says assembly member Irwin. Organizations will need to provide education and training to help people understand AI, its use cases, and how it can help them do their jobs better.

EY’s Diasio concurs: “Organizations need to empower all of their employees with the capabilities of AI and the opportunity to learn and contribute to what that company can be in the future,” he says.

The corollary is also true: AI needs to understand how humans work, which requires close collaboration between data scientists and business users to effectively add AI capabilities to existing processes and workflows.

“The only people who know how to do a certain job today are not in data science. They're sitting in the business doing the work every day,” says PwC’s Greenstein. “You have to engage them so you can understand how they work and then identify how you can apply AI in a way that drives the future of that work.”

He offers one telling example: “I was working with a creative writing teacher who had never used AI. I prompted AI to write a complete paper in front of her in about five seconds. She immediately thought she was out of a job. But then I asked her about the critical thinking that went into her idea and whether she could assess whether the output was good and told the story she wanted to tell.” At that point, he says, the teacher “realized that AI could be a valuable tool for creative writing but that she was the key to unlocking the creative output by shaping the writing and coaching the tool.”

 

Reliable AI data promotes trust

AI models are only as good as the data they ingest. That’s why high-quality, up-to-date data is table stakes for trustworthy AI. But many users of the technology remain skeptical of the data being used to train AI models. Ensuring the reliability of AI inputs—and building trust in AI outputs—requires modern practices and policies for collecting, ingesting, analyzing, and storing data. Models need to be trained with data that gives them a complete and current picture of the business.
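What such practices look like in code can be as simple as automated checks that run before data ever reaches a model. The sketch below, with assumed field names and an arbitrary 90-day freshness threshold, flags records that are incomplete or out of date; it is illustrative only, not a standard.

```python
# Illustrative sketch of basic reliability checks on data destined for an AI model.
# Field names and thresholds are assumptions for the example, not a standard.
from datetime import date, timedelta

REQUIRED_FIELDS = {"customer_id", "region", "revenue", "updated_on"}
MAX_AGE = timedelta(days=90)  # assumed freshness threshold

def check_record(record: dict, today: date) -> list:
    """Return a list of data-quality problems found in one record."""
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    updated = record.get("updated_on")
    if updated and today - updated > MAX_AGE:
        problems.append(f"stale: last updated {updated}")
    return problems

records = [
    {"customer_id": "C-17", "region": "EMEA", "revenue": 120_000,
     "updated_on": date(2024, 2, 1)},
    {"customer_id": "C-18", "revenue": 80_000, "updated_on": date(2024, 9, 30)},
]

for r in records:
    issues = check_record(r, today=date(2024, 10, 15))
    if issues:
        print(r.get("customer_id", "<unknown>"), "->", "; ".join(issues))
```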

Organizations also need practices and tools to identify when AI is used to generate content and insights and validate whether those outcomes are accurate. A new crop of AI detection tools can help organizations determine whether AI was used to create content, but organizations should not rely solely on those tools because they are prone to error.

“Recent studies suggest that [AI detection] tools are not foolproof, so you may end up penalizing something that’s a false negative,” says Diasio. “That can create just as much backlash as inaccurate results.”
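Simple base-rate arithmetic shows why even a seemingly accurate detector can misfire at scale. The figures in the sketch below are assumptions chosen for illustration, not measured error rates for any real tool.

```python
# Back-of-the-envelope illustration of why an imperfect AI-content detector
# misfires at scale. All rates and volumes below are assumptions for illustration.
human_docs = 9_000          # documents actually written by people
ai_docs = 1_000             # documents actually generated with AI
false_positive_rate = 0.05  # share of human work wrongly flagged as AI
true_positive_rate = 0.90   # share of AI work correctly flagged

wrongly_flagged_humans = human_docs * false_positive_rate   # 450 documents
correctly_flagged_ai = ai_docs * true_positive_rate         # 900 documents

flagged_total = wrongly_flagged_humans + correctly_flagged_ai
print(f"Share of flagged documents that are actually human-written: "
      f"{wrongly_flagged_humans / flagged_total:.0%}")       # roughly one-third
```

Under these assumed rates, about a third of everything the detector flags would be human work, which is why relying on such tools alone risks penalizing exactly the people an organization is trying to reassure.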

Greenstein believes that as AI technology evolves, the discussion will shift from whether AI generated the output to whether the results are trustworthy. “Instead of asking whether AI generated a certain piece of content,” he says, “the more important questions to establish trust will be ‘Is it accurate?’ or ‘Is it useful?’”

It’s important to ask those questions now because AI and machine learning models have limitations that can significantly affect the quality of their output, despite the hype. For example, Marcus, the scientist and author, writes that “current approaches to machine learning are lousy at outliers, which is to say that when they encounter unusual circumstances, they often say and do things that are absurd.”

As an example, he cites an investigation by The Wall Street Journal into crashes involving Tesla’s Autopilot driver-assistance feature. Machine learning models are trained by ingesting many different versions of common scenarios, such as multiple images of a stop sign that teach Autopilot when to stop the vehicle. If a model is not trained on an unusual circumstance, such as an overturned tractor trailer blocking the road, it may not recognize the situation and therefore may not instruct the vehicle to stop or navigate around the obstacle, or it may produce the “absurd” outputs that Marcus calls “discomprehensions.”

To help root out these inaccuracies, organizations need processes for testing AI models on different datasets and checking the outputs for errors and anomalies. They’ll also need to continuously monitor and fine-tune their models to ensure the validity of results as the models and the underlying technology evolve.
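A minimal version of that testing loop can be sketched in a few lines of Python. The placeholder `predict` function, the toy datasets, and the 0.85 accuracy threshold below are all assumptions for illustration; the point is simply that evaluating the same model against multiple labeled datasets makes weak spots visible.

```python
# Minimal sketch of checking a model's outputs across several datasets and
# flagging drops in quality. `predict` is a hypothetical stand-in for the
# model under test; the datasets and threshold are assumptions.

def predict(text: str) -> str:
    """Placeholder model: classifies support tickets as 'billing' or 'other'."""
    return "billing" if "invoice" in text.lower() else "other"

DATASETS = {
    "2023_tickets": [("Invoice is wrong", "billing"), ("App crashes", "other")],
    "2024_tickets": [("Where is my invoice?", "billing"),
                     ("Refund the charge", "billing"),
                     ("Login fails", "other")],
}
ACCURACY_FLOOR = 0.85  # assumed acceptance threshold

for name, examples in DATASETS.items():
    correct = sum(predict(text) == label for text, label in examples)
    accuracy = correct / len(examples)
    status = "OK" if accuracy >= ACCURACY_FLOOR else "NEEDS REVIEW"
    print(f"{name}: accuracy={accuracy:.2f} [{status}]")
```

Here the newer dataset surfaces a phrasing the placeholder model never learned, the kind of outlier that routine monitoring is meant to catch before customers do.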

 

Making AI responsible

A recent PwC study found that responsible AI practices, which create safeguards for AI’s inherent risks, help AI initiatives succeed by building trust with stakeholders. The study found that responsible AI practices can drive a number of business benefits, including better customer experience, enhanced cybersecurity and risk management, and accelerated innovation.

PwC’s Golbin Blumenfeld believes responsible AI practices complement existing security and risk-management practices. “Leadership teams will need to reconcile who owns what with the deployment of AI systems in the context of how they manage risk,” she says. “It’s important to account for the diversity of risks in this space, including the need for transparency and traceability about how AI is being used across the enterprise.”

Responsible AI practices also require different functions, including software development teams, to coordinate and work more cohesively. “The champions of responsible AI will not just sit in the risk-management function,” she says. Among other steps toward responsible AI, PwC recommends creating an AI risk taxonomy and tools to help assess potential risks and guide mitigation practices.
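What a starting-point risk taxonomy might look like as a working artifact is sketched below in Python; the categories, owners, and severities are illustrative assumptions, not PwC’s actual taxonomy.

```python
# Illustrative sketch of an AI risk taxonomy as a simple data structure.
# Categories, owners, and severities are assumptions, not PwC's actual taxonomy.
from dataclasses import dataclass

@dataclass
class Risk:
    category: str      # e.g. privacy, bias, transparency, security
    description: str
    owner: str         # function accountable for mitigation
    severity: str      # low / medium / high

TAXONOMY = [
    Risk("privacy", "Model exposes personal data in outputs", "Data protection", "high"),
    Risk("bias", "Training data under-represents key customer segments", "Data science", "medium"),
    Risk("transparency", "No record of where AI is used in a workflow", "Risk management", "medium"),
]

# A simple view an oversight team might pull: high-severity risks and their owners.
for risk in TAXONOMY:
    if risk.severity == "high":
        print(f"{risk.category}: {risk.description} (owner: {risk.owner})")
```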

Other practices to build trust in AI include:

  • Aligning responsible AI practices and systems with the organization’s values
  • Embedding responsible practices at every stage of AI development and deployment
  • Establishing policies and protections for everyone who will be working with AI
  • Updating governance policies to account for AI capabilities that handle sensitive data, ensuring they are implemented and managed responsibly

 

Beyond the backlash

Resistance to a new technology is vital to helping it develop and making it beneficial to the people using it. It’s a standard component of technology innovation that dates back centuries.

“Soon after the first automobiles were on the road, there was the first car crash,” Bill Gates writes in a 2023 blog post. “But we didn’t ban cars—we adopted speed limits, safety standards, licensing requirements, drunk-driving laws, and other rules of the road.... History shows that it’s possible to solve the challenges created by new technologies.”

Even the acknowledged AI critic Marcus sees a positive path forward for the technology. “AI will of course survive... even if it idles for a few years,” he writes. “And this year’s bubble doesn’t spell the end of AI for all time; eventually AI will advance well beyond what has been possible in 2024. In time, new innovations will come.”

What those innovations will be remains unclear. The key to capturing them, says Vezzosi, is a disciplined method that builds trust in the technology.

“It can be difficult to see what lies beyond the hype around AI,” he says. “We’re likely to see new features and other innovations across every discipline that will blow our minds. But it’s important to deploy AI carefully and responsibly because it will impact not just your employees but your customers as well.”