Who’s Responsible for Safeguarding Our Future? Everything Companies Need To Know About A.I. Safety

BY: JOHN MCGRANE | APRIL 2023

The artificial intelligence (AI) floodgates are open, and every week hundreds of new applications are released to the public. It’s not just the sheer number of AI tools available that has experts worried; the rate at which those tools are advancing is sounding an alarm as well. In fact, artificial intelligence is evolving so fast that just months after OpenAI famously released ChatGPT, dozens of industry leaders signed an open letter calling for a six-month pause on developing the most powerful AI systems. The letter emphasizes that as AI technologies continue to advance, it is crucial that companies can ensure their safety and reliability.

In this article, we’ll look at the AI risks and challenges, key principles for AI safety, collaborative initiatives, regulatory frameworks, industry best practices, and the importance of public awareness and education.

WHO CREATES AND CONTROLS AI?

A new report from Stanford University highlights a major shift taking place in the world of AI. While academia was largely responsible for producing significant machine learning systems up until 2014, industry is now increasingly the one building complex machine learning systems and other AI tools. In 2022, there were 32 significant industry-produced machine learning systems compared to just three from academia. One reason for this is that creating leading-edge AI systems increasingly requires large amounts of data, computing power, and money, which industry players have in far greater supply than academia or nonprofits. Academia, however, still plays a major part in the research behind these AI tools, with the number of academic journal publications mentioning AI doubling since 2017.

Consequently, especially as meaningful legislation seems to be lacking, it will be companies rather than institutions of learning who will decide the best way to balance the risk and opportunity of these tools in the fast-moving field of AI evolution. That means companies will bear the majority of responsibility in keeping us all safe.

Source: The AI Index 2023 Annual Report, Stanford University

UNDERSTANDING AI RISKS & CHALLENGES

Artificial intelligence is playing a bigger part in shaping our world, transforming industries such as healthcare, finance, and transportation. According to a 2023 Ad Age and Harris Poll study, two-thirds (67%) of US adults said they are concerned about the safety of artificial intelligence technologies. AI will undoubtedly have massive positive impacts on humankind, and some of the early stories of just how revolutionary it can be are downright jaw-dropping; however, like every technology before it, AI also has the potential to cause great harm without proper safeguards. Let’s look at some of the biggest risks:

  • Bias and fairness: The outputs that AI technologies generate will only be as fair and unbiased as the data they receive. Dr. Nicol Turner Lee, Senior Fellow at the Center for Technology Innovation at The Brookings Institution, summed up this point nicely:

“Bias in algorithms can emanate from unrepresentative or incomplete training data or the reliance on flawed information that reflects historical inequalities. If left unchecked, biased algorithms can lead to decisions which can have a collective, disparate impact on certain groups of people even without the programmer’s intention to discriminate.”

To give a concrete example, in 2018 the ACLU ran the official headshots of every member of Congress through Rekognition, an AI facial recognition surveillance tool that Amazon had been selling to local law enforcement agencies across the country. The tool compared those headshots against a database of 25,000 publicly available criminal mugshots and falsely matched 28 members of Congress to people who had been arrested. The false matches disproportionately affected people of color: nearly 40% of Rekognition’s false matches were members of color, even though they make up only about 20% of Congress.

  • Ethical use: According to the AIAAIC database, which tracks incidents related to the ethical misuse of AI, the number of controversial AI incidents has increased 26-fold since 2012. Staying with surveillance as an example: even if facial recognition were unbiased and fair, is law enforcement’s use of it ethical? Is it an invasion of the right to privacy? Should the same standards apply to other industries, say, a social media company? And who decides where the lines get drawn? These questions will become critical in the coming years.
  • Manipulation: In April 2023, the UK newspaper The Guardian reported that ChatGPT had been fabricating entire Guardian articles that were never actually written, showing that AI can not only get the facts wrong, it can invent its own stories and sources. Last year, a deepfake video spread through social media depicted Ukrainian President Volodymyr Zelenskyy telling his country’s soldiers to surrender in the war with Russia. Needless to say, these incidents exemplify the dire need to have safeguards in place.

FOUR KEY PRINCIPLES FOR AI SAFETY & RELIABILITY

To ensure AI safety and reliability, companies must address four key principles: transparency, robustness, interpretability, and accountability. Transparency in AI development and decision-making processes is vital for fostering trust. Robust systems should be designed to handle uncertainties and adverse conditions, and should not be easily hacked or controlled by bad actors. Interpretability focuses on keeping AI systems understandable to humans, and accountability emphasizes the need for responsibility in AI design and deployment. If you’re a company, now is the time to start discussing these key principles internally. It would be wise to create an “ethical and responsible use” policy that clearly outlines how your company plans to apply these principles to its use of AI technologies. Here are some questions to consider:

  • Transparency: How does your company think about transparency in the tools you use while conducting business? How should disclosures about the use of AI in your advertising work? Should people be notified when they’re chatting with an AI chatbot and not a human, for example? Should we work with third-party auditors?
  • Robustness: What are the ways we can protect against the weaponization of AI in our tools? How can we navigate extraordinary circumstances or conditions? How do we avoid misinformation, harmful deepfakes, fraud, and abuse?
  • Interpretability: What are the minimum and optimal levels of visibility into AI decision-making? Are we comfortable with the amount of information that remains hidden in the black box of machine learning as it executes decisions on the company’s behalf?
  • Accountability: How do we build accountability into algorithmic incentives? How do we protect at-risk users, and all consumers, from AI that exploits dark patterns or behavioral “hacks”? How do we respond to reports of tools being misused or hijacked?

SAFETY INITIATIVES & COLLABORATIONS

It’s hard to overstate just how impactful AI will be on society. It will undoubtedly have major social, psychological, and environmental impacts. We must also consider the impacts on trust, as well as the legal and financial ramifications. Fortunately, a growing number of groups and organizations are coming together to address AI safety concerns. Let’s look at some examples:

● A group of 10 companies, including OpenAI, TikTok, Adobe, the BBC, and the dating app Bumble, signed on to the Partnership on AI’s (PAI) Responsible Practices for Synthetic Media, a set of guidelines on how to build, create, and share AI-generated content responsibly. The framework calls on three primary groups to adopt and implement its recommendations: those who build the technology, those who create synthetic media, and those who distribute it.

● In an effort to increase AI interpretability, the enterprise AI company C3 AI deployed LIME (Local Interpretable Model-Agnostic Explanations), which offers a generic framework for opening up machine learning black boxes and provides the “why” behind AI-generated predictions or recommendations (a brief illustrative sketch of the technique appears after this list).

● Organizations like the IEEE Standards Association bring together individuals and organizations from 160 different countries to create and implement ethical standards for AI deployment. The IEEE Global Initiative’s mission is “to ensure every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritize ethical considerations so that these technologies are advanced for the benefit of humanity.”

● The Coalition for Content Provenance and Authenticity (C2PA) attempts to tackle the prevalence of misleading information online by certifying the source and edit history of media content, attaching a digital signature that authenticates the content (a conceptual sketch of this kind of signing appears just below). C2PA is a Joint Development Foundation project that was formed through an alliance between Adobe, Arm, Intel, Microsoft, and Truepic. Technologies like this could also fundamentally change how companies ensure brand safety and authenticity in our new world of advanced generative AI tools.
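
To make the provenance idea concrete, here is a minimal conceptual sketch in Python of signing and then verifying a media asset, assuming the widely used cryptography package. The manifest layout and the sign_asset/verify_asset helpers are hypothetical illustrations of the digital-signature concept behind C2PA, not the real C2PA manifest format or any official C2PA SDK.

# Conceptual sketch of provenance signing; not the real C2PA manifest format.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_asset(media_bytes, creator, private_key):
    # Bind a claim about the asset's origin to a cryptographic signature.
    claim = {
        "creator": creator,
        "sha256": hashlib.sha256(media_bytes).hexdigest(),  # content fingerprint
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": private_key.sign(payload).hex()}

def verify_asset(media_bytes, manifest, public_key):
    # Re-hash the content and check the signature; any edit breaks the match.
    if hashlib.sha256(media_bytes).hexdigest() != manifest["claim"]["sha256"]:
        return False
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(manifest["signature"]), payload)
        return True
    except InvalidSignature:
        return False

key = Ed25519PrivateKey.generate()
manifest = sign_asset(b"example image bytes", "Example Newsroom", key)
print(verify_asset(b"example image bytes", manifest, key.public_key()))   # True
print(verify_asset(b"tampered image bytes", manifest, key.public_key()))  # False

Any edit to the underlying bytes changes the hash, so the signature no longer verifies; that tamper-evidence is the basic property content provenance schemes like C2PA build on.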

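And here is a minimal sketch of the kind of interpretability described in the LIME bullet above, using the open-source lime Python package together with scikit-learn on a toy dataset; the model and data are illustrative placeholders rather than anything C3 AI has published.

# Illustrative LIME example on a toy dataset; not C3 AI's implementation.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

iris = load_iris()
model = RandomForestClassifier(random_state=0).fit(iris.data, iris.target)

# Wrap the training data in an explainer, then ask "why" for one prediction.
explainer = LimeTabularExplainer(
    iris.data,
    feature_names=iris.feature_names,
    class_names=list(iris.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(
    iris.data[25],          # the single prediction we want explained
    model.predict_proba,    # the black-box model's probability function
    num_features=4,
)

# Each tuple pairs a feature condition with its weight toward the prediction.
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")

The printed output lists the feature conditions that pushed the model toward its prediction, which is exactly the kind of “why” interpretability aims to surface.
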
REGULATORY FRAMEWORKS & GUIDELINES

Around the globe, governments are scrambling to keep up with the ever-evolving AI landscape. Stanford University’s AI Index analysis of the legislative records of 127 countries shows that the number of bills containing “artificial intelligence” that were passed into law grew from just one in 2016 to 37 in 2022. Likewise, an analysis of 81 countries shows that mentions of AI in global legislative proceedings have increased nearly 6.5 times since 2016.

The United States has yet to implement federal legislation for AI, but the Biden Administration and the National Institute of Standards and Technology (NIST) recently released broad guidance for AI safety. The Blueprint for an AI Bill of Rights, although not legally binding, addresses AI misuse concerns and provides recommendations for the public and private sectors. NIST has also published standards for managing AI bias and tracks public sector AI integrations. In 2022, 15 states and localities proposed or passed AI-related legislation. For example, New York City introduced a law to prevent AI bias in employment, and Colorado and Vermont created task forces to study AI applications like facial recognition.

While the US seems to be taking an approach that responds to specific cases of AI misuse, Europe is taking a different tack. In April 2021, the European Union introduced the Artificial Intelligence Act (AIA). The proposed law takes a risk-based approach to guide the use of AI in both the private and public sectors and defines three risk categories: applications posing an unacceptable risk, high-risk applications, and applications that are not explicitly banned. The use of AI in critical services that could threaten livelihoods is prohibited, but the technology can be used in some sensitive sectors, like healthcare, as long as regulators’ transparency requirements are met.

PUBLIC AWARENESS & EDUCATION

“AI is fast, accurate and stupid; humans are slow, inaccurate and brilliant;
together they are powerful beyond imagination.”

It’s not just companies, creators, and platforms that need to focus on AI safety. The general public, as users, must take a level of responsibility as well. In 2022, generative AI broke into the public consciousness, but even with the widespread release of text-to-image models like DALL-E 2, text-to-video systems like Make-A-Video, and chatbots like ChatGPT, public knowledge remains limited, and that lack of knowledge creates a lack of trust. A survey cited in Stanford University’s 2023 AI Index showed that only 35% of sampled Americans agreed that “products and services using AI had more benefits than drawbacks.” This was a stark contrast to countries at the higher end of those surveyed, like China (78%), Saudi Arabia (76%), and India (71%).

Currently, a number of efforts are dedicated to expanding public knowledge of AI capabilities. Nonprofits like AI4ALL work with young adults in all 50 states, teaching them the fundamentals of AI safety and helping them apply those principles in real-world scenarios to positively influence AI. The Marketing AI Institute offers AI education and safety courses specifically for marketers, guided by its own Responsible AI Principles manifesto. Similarly, AI safety courses are becoming increasingly common at private and public universities alike across the country. There are also a growing number of conferences that invite the public as well as industry experts to engage in the important discussions needed to move AI forward in a safe and reliable way. Examples include the AI for Good Global Summit hosted by the United Nations every year and London’s The AI Summit, which will take place in June 2023.

Increasing the public’s understanding of AI is crucial to keeping AI safe and reliable. As users, we must have the knowledge needed to avoid creating, posting, or sharing deceptive, harmful, or sensitive information. Companies should consider integrating AI best-practices training for all of their team members, whether through new-hire onboarding, a university course, or an annual AI conference.

INDUSTRY BEST PRACTICES

At Media Matters Worldwide, we believe the best thing companies can do to safeguard AI’s future is to educate and train employees on how to use the technologies according to internal policies and procedures that center the four key safety principles. These policies and procedures should be updated regularly to make sure all products and services comply with any new or evolving regulations. Having specific but flexible policies in place can help avoid potentially damaging mistakes. For example, one key provision of MMWW’s internal AI policy prohibits uploading any company or client data into GPT technologies, a tough lesson that Samsung recently learned by experience. If your guidelines are easily accessible and understood by all employees, you’ll be able to embrace these technologies successfully rather than hide from them.

Further, companies should perform routine audits of their technologies and explore real-life use cases to ensure they align with company policies. If you can use trusted third parties in your audits, even better. Be sure to share your findings publicly to add to the collective knowledge that benefits everyone. Finally, consider engaging with external stakeholders, including industry experts, regulators, and advocacy groups, to stay up to date on the latest AI-related regulations, advancements, and best practices.

CONCLUSION

“With great power comes great responsibility.”

As AI continues to advance at breakneck speed, it is the duty of all stakeholders – industry, academia, governments, and the public – to work together to ensure the safe and responsible development, deployment, and use of AI technologies. By understanding the risks and challenges, adhering to key principles of AI safety, embracing collaborative initiatives, staying abreast of regulatory frameworks, and fostering public awareness and education, we can collectively shape a future where AI is not only powerful and transformative but also reliable, ethical, and beneficial to all. The responsibility we all bear in navigating the complex landscape of AI evolution is immense, and the actions we take today will shape the world of tomorrow. As we stand at the precipice of this technological revolution, it is crucial that we actively engage in developing and implementing best practices, fostering a culture of transparency, and continuously refining our approach to ensure the safe and sustainable growth of AI for generations to come.