
Instead of AI sentience, focus on the current risks of large language models

Recently, a Google engineer made international headlines when he asserted that LaMDA, the company’s system for building chatbots, was sentient. Since his initial post, public debate has raged over whether artificial intelligence (AI) exhibits consciousness and experiences feelings as acutely as humans do.

While the topic is undoubtedly fascinating, it’s also overshadowing other, more pressing risks posed by large language models (LLMs), such as unfairness and privacy loss, especially for companies that are racing to integrate these models into their products and services. These risks are further amplified by the fact that the companies deploying these models often lack insight into the specific data and methods used to create them, which can lead to issues of bias, hate speech and stereotyping.

What are LLMs?

LLMs are massive neural nets that learn from huge corpora of free text (think books, Wikipedia, Reddit and the like). Although they are designed to generate text, such as summarizing long documents or answering questions, they have been found to excel at a variety of other tasks, from generating websites to prescribing medicine to basic arithmetic.
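To make this concrete, here is a minimal sketch of generating text with a pretrained LLM using the open-source Hugging Face transformers library. The library, model choice and prompt are illustrative assumptions on our part, not details from any system discussed here.

```python
# A minimal sketch: load a small pretrained language model and generate
# text from a prompt. Requires `pip install transformers torch`.
# The model ("gpt2") and the prompt are illustrative assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Large language models can summarize long documents by"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(outputs[0]["generated_text"])
```

The same next-word-prediction interface underlies summarization, question answering and the other tasks mentioned above: the model simply continues whatever text it is conditioned on.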

It’s this ability to generalize to tasks for which they were not originally designed that has propelled LLMs into a major area of research. Commercialization is occurring across industries as companies tailor base models built and trained by others (e.g., OpenAI, Google, Microsoft and other technology companies) to their own specific tasks.
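As a rough illustration of what such tailoring looks like in practice, the sketch below fine-tunes a small pretrained model for a classification task with the Hugging Face Trainer API. The base model, dataset and hyperparameters are all assumptions chosen for illustration, not anyone’s production recipe.

```python
# A sketch of tailoring a pretrained base model to a specific task:
# fine-tuning a small encoder for sentiment classification.
# Requires `pip install transformers datasets torch`.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"  # small base model, assumed
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Small public dataset, used purely for illustration.
dataset = load_dataset("imdb", split="train[:2000]").train_test_split(test_size=0.1)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
)
trainer.train()
```

The expensive pretraining has already been done upstream; the company merely adapts the weights to its task, which is precisely why any risks baked into the base model travel with it.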

Researchers at Stanford coined the term “foundation models” to characterize the fact that these pretrained models underlie countless other applications. Unfortunately, these massive models also bring with them substantial risks.

The downside of LLMs

Chief among those risks: the environmental toll, which can be massive. One well-cited paper from 2019 found that training a single large model can produce as much carbon as five cars over their lifetimes, and models have only gotten larger since then. This environmental toll has direct implications for how well a business can meet its sustainability commitments and, more broadly, its ESG targets. Even when businesses rely on models trained by others, the carbon footprint of training those models cannot be ignored, just as a company must track emissions across its entire supply chain.
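The arithmetic behind such estimates is simple enough to sanity-check in a few lines. The sketch below follows the common energy-times-carbon-intensity approach used in that line of research; every input value is a placeholder assumption, not a figure from the 2019 paper.

```python
# Back-of-the-envelope training-emissions estimate. All inputs are
# illustrative assumptions, not figures from any specific paper.

gpu_count = 512            # accelerators used for training (assumed)
gpu_power_kw = 0.3         # average draw per accelerator, in kW (assumed)
training_hours = 24 * 14   # two weeks of training (assumed)
pue = 1.5                  # datacenter power usage effectiveness (assumed)
kg_co2_per_kwh = 0.4       # grid carbon intensity (assumed)

energy_kwh = gpu_count * gpu_power_kw * training_hours * pue
emissions_tonnes = energy_kwh * kg_co2_per_kwh / 1000

print(f"Energy: {energy_kwh:,.0f} kWh; emissions: {emissions_tonnes:,.1f} t CO2e")
```

Even rough numbers like these let a business fold model training into the same emissions accounting it already applies to the rest of its supply chain.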

Then there’s the issue of bias. The internet data sources commonly used to train these models have been found to contain bias against a number of groups, including people with disabilities and women. They also over-represent younger users from developed countries, perpetuating that worldview and diminishing the influence of under-represented populations.

This has a direct impact on the DEI commitments of businesses. Their AI systems might continue to perpetuate biases even while they strive to correct for those biases elsewhere in their operations, such as in their hiring practices. They may also create customer-facing applications that fail to produce consistent or reliable results across geographies, ages or other customer subgroups. 
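One practical way to catch such failures is to disaggregate evaluation metrics by subgroup rather than reporting a single aggregate number. A minimal sketch, with hypothetical column names and data:

```python
# Disaggregate a quality metric by subgroup instead of reporting only an
# aggregate. Column names and data are hypothetical.
import pandas as pd

results = pd.DataFrame({
    "age_band": ["18-29", "18-29", "30-49", "30-49", "50+", "50+"],
    "correct":  [1, 1, 1, 0, 0, 1],
})

# The aggregate number can hide subgroup gaps...
print("overall accuracy:", results["correct"].mean())

# ...while a per-group breakdown exposes them.
print(results.groupby("age_band")["correct"].mean())
```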

LLMs can also produce unpredictable and scary results that can pose real dangers. Take, for example, the artist who used an LLM to re-create his childhood imaginary friend, only to have his imaginary friend ask him to put his head in the microwave. While this may be an extreme example, businesses cannot ignore these risks, particularly in cases where LLMs are applied in inherently high-risk areas like healthcare.

Compounding these risks, there is often little transparency into all the ingredients that go into creating a modern, production-grade AI system: the data pipelines, model inventories, optimization metrics and broader design choices in how the systems interact with humans. Companies should not blindly integrate pretrained models into their products and services without carefully considering their intended use, source data and the myriad other considerations that lead to the risks described earlier.

The promise of LLMs is exciting, and under the right circumstances, they can deliver impressive business results. The pursuit of these benefits, however, cannot mean ignoring the risks that can lead to customer and societal harms, litigation, regulatory violations and other corporate implications. 

The promise of responsible AI

More broadly, companies pursuing AI must put in place a robust responsible AI (RAI) program to ensure their AI systems are consistent with their corporate values. This begins with an overarching strategy that includes principles, risk taxonomies and a definition of AI-specific risk appetite.

Also important in such a program is putting in place the governance and processes to identify and mitigate risks. This includes clear accountability, escalation and oversight, and direct integration into broader corporate risk functions.

At the same time, employees must have mechanisms to raise ethical concerns without fear of reprisal, and those concerns must be evaluated in a clear and transparent way. A cultural change that aligns the RAI program with the organization’s mission and values increases the chance of success. Finally, embedding RAI into the key processes for product development (KPIs, portfolio monitoring and controls, and program steering and design) further improves the odds of success.

Meanwhile, it’s important to develop processes to build responsible AI expertise into product development. This includes a structured risk assessment process in which teams identify all relevant stakeholders, consider the second- and third-order impacts that could inadvertently occur and develop mitigation plans.

Given the sociotechnical nature of many of these issues, it’s also important to integrate RAI experts into inherently high-risk efforts to help with this process. Teams also need new technology, tools and frameworks to accelerate their work while enabling them to implement solutions responsibly. This includes software toolkits, playbooks for responsible development and documentation templates to enable auditing and transparency. 
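As one concrete example of such a documentation template, a “model card” records a model’s intended use, training data and known limitations in a structured form an auditor can review. The sketch below loosely follows the model-card idea from the research literature; every field and value is illustrative.

```python
# A minimal "model card" documentation template, loosely following the
# model-cards idea from the research literature. All fields and values
# are illustrative.
from dataclasses import asdict, dataclass, field
import json

@dataclass
class ModelCard:
    name: str
    intended_use: str
    training_data: str
    evaluation: str
    known_limitations: list = field(default_factory=list)
    out_of_scope_uses: list = field(default_factory=list)

card = ModelCard(
    name="support-reply-generator-v1",  # hypothetical model
    intended_use="Drafting customer-support replies for human review.",
    training_data="Licensed support transcripts, 2020-2022 (assumed).",
    evaluation="Quality and toxicity metrics, disaggregated by customer segment.",
    known_limitations=["English only", "Degrades on legal questions"],
    out_of_scope_uses=["Medical or legal advice without human review"],
)

print(json.dumps(asdict(card), indent=2))
```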

Leading with RAI from above

Business leaders should be prepared to communicate their RAI commitment and processes internally and externally, for example, by developing an AI code of conduct that goes beyond high-level principles to articulate the company’s approach to responsible AI.

In addition to preventing inadvertent harm to customers and, more broadly, society in general, RAI can be a real source of value for companies. Responsible AI leaders report higher customer retention, market differentiation, accelerated innovation and improved employee recruiting and retention. External communication about a company’s RAI efforts helps create the transparency that is needed to elevate customer trust and realize these benefits.

LLMs are powerful tools that are poised to create incredible business impact. Unfortunately, they also bring real risks that need to be identified and managed. With the right steps, corporate leaders can balance the benefits and the risks to deliver transformative impact while minimizing harm to customers, employees and society. We should not, however, let the discussion around sentient AI become a distraction that keeps us from focusing on these important and current issues.

Steven Mills is chief AI ethics officer and Abhishek Gupta is senior responsible AI leader & expert at BCG.
