
Responsible AI: Applying AI Principles with Google Cloud

Embracing the growth of enterprise Artificial Intelligence and Machine Learning requires a commitment to building responsible AI practices. While discussing responsible AI is crucial, translating theory into practice can be a significant challenge. If you’re eager to operationalize responsible AI in your organization, this course is designed to guide you through the process.

Through this course, you’ll gain insights into Google Cloud’s current approach to responsible AI, along with best practices and lessons learned. By the end of the course, you’ll have a comprehensive framework for developing your own responsible AI strategy, tailored to your organization’s unique needs and goals, and the knowledge and tools to integrate responsible AI practices into your workflows, ensuring the ethical and effective use of AI.


Module 1: Introduction

In this module, you will learn about the impact of AI technology and Google’s approach to responsible AI, and also be introduced to Google’s AI Principles.

Learning Objectives

  • Describe why AI technology development requires a responsible approach.
  • Recognize Google’s AI Principles.

Video: Course introduction

The course focuses on the practice of responsible AI, highlighting the importance of building AI systems that are transparent, fair, accountable, and private. The narrators, Marcus and Caitlin, introduce the concept of responsible AI, emphasizing that AI is not infallible and can replicate existing biases and issues if not developed responsibly.

The course will explore Google Cloud’s approach to responsible AI, including its AI principles, practices, governance processes, and tools. Google’s approach is rooted in a commitment to building AI that is accountable, safe, and respects privacy, driven by scientific excellence.

The course aims to provide a framework for organizations to develop their own responsible AI strategy, emphasizing the importance of human decision-making in AI development. The goal is to provide insights and lessons learned from Google Cloud’s journey towards responsible AI development and use.

The introduction also clarifies that the course will not delve into the definitions of AI, machine learning, and deep learning, but instead focus on the importance of human decision-making in technology development. The course will explore how to operationalize responsible AI in practice, providing a window into Google Cloud’s approach to responsible AI development and use.

Hi there, and welcome to Applying AI Principles with Google Cloud, a course focused on the practice of responsible AI. My name is Marcus. I’m Caitlin. We’ll be your narrators
throughout this course. Many of us already have
daily interactions with artificial
intelligence or AI. From predictions for
traffic and weather to recommendations of TV shows
you might like to watch next. As AI, especially generative
AI, becomes more common, many technologies that aren’t AI enabled may start
to seem inadequate. Such powerful, far-reaching technology raises equally powerful questions about its development and use. Historically, AI was not
accessible to ordinary people. The vast majority
of those trained and capable of developing AI were specialty
engineers who were scarce in number and
expensive to hire. But the barriers to
entry are being lowered, allowing more
people to build AI, even those without AI expertise. Now, AI systems are
enabling computers to see, understand, and
interact with the world in ways that were unimaginable
just a decade ago. These systems are developing
at an extraordinary pace. According to Stanford University’s 2019 AI Index Report, before 2012 AI results tracked closely with Moore’s Law, with compute doubling every two years. The report states that since 2012, compute has been doubling approximately every three and a half months. To put this in perspective, over this time vision AI technologies have only become more accurate and powerful. For example, the error rate for ImageNet, an image classification dataset, has declined significantly. In 2011, the error
rate was 26 percent. By 2020, that number
was two percent. For reference, the error
rate of people performing the same task is five percent. And yet, despite these remarkable
advancements, AI is not infallible. Developing responsible
AI requires an understanding of
the possible issues, limitations, or
unintended consequences. Technology is a reflection
of what exists in society. Without good practices, AI may replicate existing issues
or bias and amplify them. But there isn’t a universal
definition of responsible AI, nor is there a simple
checklist or formula that defines how responsible AI practices should be implemented. Instead, organizations
are developing their own AI principles that reflect their
mission and values. While these principles are
unique to every organization, if you look for common themes, you find a consistent set of
ideas across transparency, fairness, accountability,
and privacy. At Google, our approach to responsible AI is rooted in a commitment to strive towards AI that is built for everyone, that is accountable and safe, that respects privacy, and that is driven by
scientific excellence. We’ve developed our
own AI principles, practices, governance processes, and tools that together embody our values and guide our
approach to responsible AI. We’ve incorporated
responsibility by design into our products and even more
importantly, our organization. Like many companies, we use our AI principles as a framework to guide responsible
decision-making. We’ll explore how we do this in detail later in this course. It’s important to
emphasize here that we don’t pretend to have
all of the answers. We know this work is never finished and we
want to share while learning to collaborate and help others on
their own journeys. We all have a role to play in how responsible AI is applied. Whatever stage in the AI
process you’re involved with, from design to deployment
or application, the decisions you
make have an impact. It’s important that you too have a defined and repeatable process for using AI responsibly. Google is not only
committed to building socially valuable
advanced technologies, but also to promoting
responsible practices by sharing our insights and lessons learned with the
wider community. This course represents one
piece of these efforts. The goal of this
course is to provide a window into Google
and more specifically, Google Cloud’s journey toward the responsible
development and use of AI. Our hope is that
you’ll be able to take the information and
resources we’re sharing and use them to help shape your organization’s own
responsible AI strategy. But before we get any further, let’s clarify what we mean
when we talk about AI. Often people want to know the differences between
artificial intelligence, machine learning,
and deep learning. However, there is no universally agreed
upon definition of AI. Critically, this
lack of consensus around how AI should
be defined has not stopped technical
advancement underscoring the need for ongoing dialogue about how to responsibly create and
use these systems. At Google, we say our AI Principles apply to advanced technology development, an umbrella term that encapsulates all of these technologies. Becoming bogged down in
semantics can distract from the central goal to develop
technology responsibly. As a result, we’re
not going to do a deep dive into the definitions
of these technologies. Instead we’ll focus on
the importance of human decision-making in
technology development. There is a common
misconception with artificial intelligence
that machines play the central
decision-making role. In reality, it’s
people who design and build these machines and
decide how they are used. People are involved in each aspect of AI development. They collect or create the data that the model is trained on. They control the deployment of the AI and how it is applied in a given context. Essentially, human decisions are threaded throughout our technology products. Every time a person
makes a decision, they’re actually making a
choice based on their values. Whether it’s the decision to use generative AI to solve a problem as opposed to other methods, or any other decision throughout the machine learning lifecycle, they introduce their own sets of values. This means that every
decision point requires consideration and
evaluation to ensure that choices have been
made responsibly from concept through deployment
and maintenance.
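To make the compute-growth comparison quoted above from the Stanford AI Index concrete, here is a small, illustrative back-of-the-envelope calculation. The 72-month window (roughly 2012 to 2018) and the exact 3.5-month doubling period are assumptions chosen for this sketch, not figures taken from the course.

```python
# Compare total compute growth under two doubling rates: Moore's Law
# (doubling roughly every 24 months) versus the post-2012 trend in AI
# training compute (doubling roughly every 3.5 months, per the transcript).

def growth_factor(months_elapsed: float, doubling_period_months: float) -> float:
    """Multiplicative growth after months_elapsed at the given doubling period."""
    return 2 ** (months_elapsed / doubling_period_months)

window_months = 72  # ~2012-2018, an illustrative assumption

moores_law_growth = growth_factor(window_months, 24)   # 2**3 = 8x
ai_compute_growth = growth_factor(window_months, 3.5)  # 2**20.57, roughly 1.6 million x

print(f"Moore's Law over {window_months} months:       ~{moores_law_growth:,.0f}x")
print(f"3.5-month doubling over {window_months} months: ~{ai_compute_growth:,.0f}x")
```

The gap between a single-digit multiple and a multiple in the millions is what the transcript means when it says AI compute growth has outpaced Moore’s Law.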

Video: Google and responsible AI

The article emphasizes the importance of responsible AI development and deployment, highlighting the potential risks and unintended consequences of AI innovation, such as perpetuating biases, job displacement, and lack of accountability. It stresses that ethics and responsibility are crucial in AI development, not just for controversial use cases, but for all AI applications, to ensure they benefit people’s lives. Google’s approach to responsible AI involves a series of assessments and reviews to ensure alignment with AI principles, and a commitment to transparency, trust, and community collaboration. The article encourages organizations of all sizes to start their responsible AI journey, acknowledging that it’s an iterative process that requires dedication, discipline, and a willingness to learn and adjust over time. Key takeaways include:

  1. Responsible AI is essential for successful AI deployment.
  2. Ethics and responsibility should be integrated into AI development from the start.
  3. AI innovation can have unintended consequences, and it’s crucial to address these risks.
  4. Community collaboration and collective values are essential for responsible AI development.
  5. Robust processes and transparency are necessary for building trust in AI decision-making.
  6. Starting small and taking incremental steps towards responsible AI is better than doing nothing.
  7. Responsible AI is an ongoing process that requires continuous learning and improvement.

Many of us rely on technological
innovation to help live happy and healthy lives. Whether it’s navigating
the best route home or finding the right information
when we don’t feel well. The opportunity for
innovation is incredible. But it’s accompanied by
a deep responsibility for technology providers to get it right. There is a growing concern
surrounding some of the unintended or undesired impacts of AI innovation. These include concerns around ML fairness
and the perpetuation of historical biases at scale, the future of work and
AI driven unemployment, and concerns around the accountability and
responsibility for decisions made by AI. We’ll explore these in more
detail later in the course. Because there is potential to impact many
areas of society, not to mention people’s daily lives, it’s important to develop
these technologies with ethics in mind. Responsible AI is not meant to focus just
on the obviously controversial use cases. Without responsible AI practices,
even seemingly innocuous AI use cases or those with good intent could
still cause ethical issues or unintended outcomes, or
not be as beneficial as they could be. Ethics and responsibility are important, not least because they represent
the right thing to do, but also because they can guide AI design
to be more beneficial for people’s lives. At Google,
we’ve learned that building responsibility into any AI deployment makes better models
and builds trust with our customers and our customers’ customers. If at any point that trust is broken, we run the risk of AI deployments
being stalled, unsuccessful, or at worst, harmful to the stakeholders those products affect. This all fits into our belief at Google
that responsible AI equals successful AI. We make our product and business decisions around AI through
a series of assessments and reviews. These instill rigor and consistency in
our approach across product areas and geographies. These assessments and reviews begin with ensuring that any
project aligns with our AI principles. During this course,
you’ll see how we approach building our responsible AI process at Google and
specifically within Google Cloud. At times you might think, well, it’s easy for you with substantial resources and a small army of people; there are only a few of us, and our resources are limited. You may also feel overwhelmed or
intimidated by the need to grapple with thorny new philosophical and
practical problems. And this is where we assure you that no
matter what size your organization is, this course is here to guide you. Responsible AI is an iterative practice. It requires dedication, discipline, and a
willingness to learn and adjust over time. The truth is that it’s not easy,
but it’s important to get right. So starting the journey even
with small steps, is key. Whether you’re already on a responsible AI
journey or just getting started, spending time on a regular basis simply reflecting
on your company values and the impact you want to make with your products will go
a long way in building AI responsibly. Finally, before we get any further,
we’d like to make one thing clear. At Google, we know that we represent just
one voice in the community of AI users and developers. We approach the development and deployment
of this powerful technology with a recognition that we do not and cannot
know and understand all that we need to. We will only be at our best when we
collectively tackle these challenges together. The true ingredient to ensuring
that AI is developed and used responsibly is community. We hope that this course will
be the starting point for us to collaborate together
on this important topic. While AI principles help ground a group
in shared commitments, not everyone will agree with every decision made on how
products should be designed responsibly. This is why it’s important to develop
robust processes that people can trust. So even if they don’t agree
with the end decision, they trust the process
that drove the decision. In short, and in our experience, a culture based on a collective value
system that is accepting of healthy deliberation must exist to guide
the development of responsible AI. By completing this course, you yourself
are contributing to the culture by advancing the practice of responsible AI
development as AI continues to experience incredible adoption and innovation.

Video: An introduction to Google’s AI Principles

Google announced 7 principles to guide their AI development and use, recognizing the significant impact on society. The principles are:

  1. AI should be socially beneficial, considering broad social and economic factors.
  2. AI should avoid unfair bias, particularly against sensitive characteristics.
  3. AI should be built and tested for safety, with strong safety and security practices.
  4. AI should be accountable to people, with opportunities for feedback and appeal.
  5. AI should incorporate privacy design principles, with notice, consent, and transparency.
  6. AI should uphold high scientific standards, with rigorous and multi-disciplinary approaches.
  7. AI should be used for beneficial purposes, limiting harmful or abusive applications.

Additionally, Google will not pursue AI applications that:

  • Cause overall harm
  • Are weapons or facilitate injury to people
  • Violate international surveillance norms
  • Contravene international law and human rights

These principles serve as a foundation for Google’s AI development, and they encourage other organizations to develop their own AI principles.

How AI is developed and used will have a significant effect on society for many years to come.
As a leader in AI, we at Google and Google Cloud recognize that we have a responsibility to do this well and to get it right. In June 2018, we announced seven principles to guide our work.
These are concrete standards that actively govern our research and product development, and affect our business decisions. We’re going to cover these principles and their development in some depth later in the course, so for now, here’s an overview of each one:

  1. AI should be socially beneficial. Any project should take into account a broad range of social and economic factors, and will proceed only where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides.
  2. AI should avoid creating or reinforcing unfair bias. We seek to avoid unjust effects on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.
  3. AI should be built and tested for safety. We will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm.
  4. AI should be accountable to people. We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal.
  5. AI should incorporate privacy design principles. We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.
  6. AI should uphold high standards of scientific excellence. We will work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multi-disciplinary approaches, and we will responsibly share AI knowledge by publishing educational materials, best practices, and research that enable more people to develop useful AI applications.
  7. AI should be made available for uses that accord with these principles. Many technologies have multiple uses, so we will work to limit potentially harmful or abusive applications.

In addition to these seven principles, there are certain AI applications we will not pursue. We will not design or deploy AI in these four application areas:

  • Technologies that cause or are likely to cause overall harm.
  • Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  • Technologies that gather or use information for surveillance that violates internationally accepted norms.
  • Technologies whose purpose contravenes widely accepted principles of international law and human rights.

Establishing principles was a starting point, rather than an end. What remains true is that our AI Principles rarely give us direct answers to our questions on how to build our products. They don’t, and shouldn’t, allow us to sidestep hard conversations. They are a foundation that establishes what we stand for, what we build and why we build it, and they are core to the success of our enterprise AI offerings. Later in the course, we’ll give some suggestions for developing your own set of AI principles within your organization.

Module 2: The Business Case for Responsible AI

In this module, you will learn about how to make a business case for responsible AI, based on the report ‘The Business Case for Ethics by Design’ by the Economist Intelligence Unit.

Learning Objectives

  • Generate a strong case for why responsible AI is positive for your business.
  • Recognize the impacts and challenges that ML is likely to surface in your business.

Video: The Economist Intelligence Unit report

AI’s Potential Impact on Global GDP:

  • PricewaterhouseCoopers estimates AI could boost global GDP by 14% ($15.7 trillion) by 2030
  • Google believes responsible AI deployment is crucial to realizing this projection

The Business Case for Responsible AI:

  • Google sponsored a report by The Economist Intelligence Unit (EIU) titled “Staying Ahead of the Curve: The Business Case for Responsible AI”
  • The report highlights the value of responsible AI practices in an AI-driven world
  • It presents the impact of responsible AI on an organization’s core business considerations

Report Findings:

  • The report is based on extensive research, industry-expert interviews, and an executive survey program
  • It reflects the sentiment of developers, industry leaders, and end users of AI
  • The report is divided into 7 sections, covering how responsible AI:
    1. Enhances product quality
    2. Improves talent acquisition, retention, and engagement
    3. Contributes to better data management, security, and privacy
    4. Prepares organizations for current and future AI regulations
    5. Improves top- and bottom-line growth
    6. Strengthens relationships with stakeholders and investors
    7. Maintains strong trust and branding

Next Steps:

  • The next video will explore each of these 7 sections in detail
  • The report is available in the resources section of the course

One leading estimate from PricewaterhouseCoopers
suggests that AI could boost global GDP by 14%,
or up to $15.7 trillion, by 2030. At Google, we believe that the responsible,
inclusive, and fair deployment of AI is a critical factor in realizing this projection. Simply put, we believe that responsible AI
is synonymous with successful AI that can be deployed for the long term with trust. We also believe that responsible AI programs
and practices afford business leaders a strategic
and competitive advantage. To explore the business benefits of responsible
AI in depth, we sponsored an original report titled “Staying Ahead of the Curve: The
Business Case for Responsible AI,” which was developed by The Economist Intelligence
Unit (EIU), the research and analysis division of The Economist Group. The report showcases the value of responsible
AI practices in an increasingly AI-driven world. It comprehensively presents the
impact that Responsible AI can have on an organization’s
core business considerations. It’s important to emphasize that the data
collected to create this report came from extensive data-driven research, industry-expert
interviews, and an executive survey program. The report reflects the sentiment of developers,
industry leaders deploying AI, and end users of AI. We will share the main findings, and we encourage
you to read the full report, available in the resources section of this course. We hope you’ll use these highlights to draw
a connection between your business goals and responsible AI initiatives,
which can empower you to influence stakeholders in your own organization. The report is subdivided into seven sections
and includes data on how responsible AI: enhances product quality; improves the outlook on acquisition, retention,
and engagement of talent; contributes to better data management, security
and privacy; leads to readiness for current and future
AI regulations; leads to improvements to the top- and bottom-line
growth; helps strengthen relationships with stakeholders
and investors; and maintains strong trust and branding. In the next video, we’ll explore each of
these seven sections in detail.

Video: The business case for responsible innovation

The Economist Intelligence Unit’s report “Staying Ahead of the Curve: The Business Case for Responsible AI” highlights seven key points on the importance of incorporating responsible AI practices in business. Here’s a summary of the seven highlights:

  1. Incorporating responsible AI practices is a smart investment in product development: 97% of survey respondents agree that ethical AI reviews are important for product innovation. Responsible AI practices can reduce development costs, improve product quality, and increase trust with stakeholders.
  2. Responsible AI trailblazers attract and retain top talent: Top workers are attracted to companies that prioritize ethical issues, including responsible AI practices. This can lead to increased productivity, reduced turnover rates, and cost savings.
  3. Safeguarding the promise of data is crucial: Cybersecurity and data privacy concerns are major obstacles to AI adoption. Organizations must prioritize data protection to build trust with customers, reduce the risk of data breaches, and improve AI outcomes.
  4. Prepare in advance of AI regulation: As AI technology advances, governments are implementing regulations to ensure responsible AI practices. Organizations that develop responsible AI can expect a significant advantage when new regulations come into force.
  5. Responsible AI can improve revenue growth: Responsible AI can result in a larger target market, competitive advantage, and improved engagement with existing customers. Ethical considerations are increasingly important in business decisions.
  6. Responsible AI is powering up partnerships: Investors are looking to align their portfolios with their personal values, and responsible AI practices can influence an organization’s corporate strategy and financial performance.
  7. Maintaining strong trust and branding is essential: Responsible AI practices can boost trust and branding, while a lack of oversight can lead to unfavorable public opinion, brand erosion, and negative press cycles.

Overall, the report emphasizes the importance of incorporating responsible AI practices in business to reduce risks, improve products, attract top talent, and increase revenue growth, while also maintaining strong trust and branding.

Let’s explore the seven highlights of the
Economist Intelligence Unit’s report titled “Staying Ahead of the Curve: The Business Case for Responsible AI.” The first of the seven highlights from the
EIU report says that incorporating responsible AI practices is a smart investment in product
development. 97% of EIU survey respondents agree that ethical
AI reviews are important for product innovation. Ethical reviews
examine the potential opportunities and harms associated with new technologies to better
align products with Responsible AI design. These reviews closely examine data sets, model
performance across sub-groups, and consider the impact of both intended and unintended
outcomes. When organizations
aren’t working to incorporate responsible AI practices, they expose themselves to multiple
risks, including delaying product launches,
halting work, and in some cases pulling generally available
products off the market. By incorporating responsible AI practices
early and providing space to identify and mitigate harms,
organizations can reduce development costs through a reduction in downstream ethical
breaches. According to CCS Insight’s 2019 IT Decision-Maker
Workplace Technology Survey, trusting AI systems remains the biggest barrier
to adoption for enterprises. And in one study by Capgemini,
90% of organizations reported encountering ethical issues. Of those companies, 40% went on to
abandon the AI project instead of solving for those issues. In many reported cases, AI hasn’t shifted
out of labs and into production because of the real-world risks inherent in the technology,
so it makes sense that companies that have reached scale with AI
are 1.7 times more likely to be guided by responsible AI. If implemented properly, Responsible AI makes
products better by uncovering and working to reduce the harm that unfair bias can cause,
improving transparency, and increasing security. These are all key components to fostering
trust with your product’s stakeholders, which boosts both a product’s value to users
and your competitive advantage. The second highlight from the EIU report states
that responsible AI trailblazers attract and retain top talent. The world’s top workers now seek much more
than a dynamic job and a good salary. As demand for tech talent becomes increasingly
competitive and expensive, research shows that getting the right employees
is worth it. One study found that top workers are 400%
more productive than average, less-skilled individuals,
and 800% more productive in highly complex occupations, such as software development. Research also shows that it’s important
to retain top talent when you have it. It can cost organizations
around $30,000 to replace entry-level tech employees, and up to $312,000 when a tech
expert or leader leaves. So what can be done to keep great talent? The Deloitte Global Millennial survey showed
that workers have stronger loyalty to employers who tackle the issues that resonate with them, especially ethical issues. Organizations that build
shared commitments and responsible AI practices are best positioned to build trust
and engagement with employees, which helps to invigorate and retain top talent. The third EIU report highlight is the importance
of safeguarding the promise of data. According to The EIU’s executive survey,
cybersecurity and data privacy concerns represent the biggest obstacles to AI adoption. Organizations need to think very carefully
about how they collect, use,
and protect data. Today, over 90% of consumers will not buy
from a company if they have concerns about how their data will be used. Data breaches
are very costly for a business. IBM and the Ponemon Institute reported that,
globally, the average data breach involved 25,575 records
and cost an average of US$3.92m, with the United States having the highest
country average cost, at US$8.19m. The research also found that lost business
was the most financially harmful aspect of a data breach, accounting for 36% of the total
average cost. Consumers are also more
likely to blame companies for data breaches rather than the hackers themselves, which
highlights the impact that safeguarding data can have on customer engagement with firms. Enterprise customers also need to be confident
that the company itself is a trustworthy host of their data. At Google, we know that privacy plays a critical
role in earning and maintaining customer trust. With a public privacy policy, we want to be
clear about how we proactively protect our customers’ data. And when an organization can be
trusted with data, it can result in larger, more diverse data sets, which will in turn
improve AI outcomes. Cisco research reports that for every $1 invested
in strengthening data privacy, the average company will see a return of $2.70. All these findings are clear indicators that
using responsible AI practices to address data concerns will lead to greater adoption
and business value of AI tech. The fourth EIU report highlight is the importance
of preparing in advance of AI regulation. As AI technology
advances, so do global calls for its regulation from broader society
and the business community and from within the technology sector itself. Governments have realized the importance of
AI regulations and have started working towards implementing
them. For example,
to ensure a human-centric and ethical development of AI in Europe,
members of the European Parliament endorsed new transparency and risk-management rules
for AI systems. Once approved, they will be the world’s
first rules on artificial intelligence. This is a good start. However, it still takes significant
time and effort to have robust and mature AI
regulations globally. EIU executive survey data shows that 92% of
US business executives from the five surveyed sectors believe that technology companies
must be proactive to ensure responsible AI practices in the absence of official AI regulation. Organizations
developing responsible AI can expect to experience a significant advantage when new regulations come into force. This might mean a reduced risk of non-compliance
when regulation does take effect, or even being able to productively contribute
to conversations about regulation to ensure that it is appropriately scoped. The challenge is to develop regulations in
a way that is proportionately tailored to mitigate risks and promote reliable and trustworthy
AI applications while still enabling innovation and the promise
of AI for societal benefit. Take the General Data Protection Regulation,
or GDPR, in the European Union, for example. When it was first adopted, only 31% of businesses
believed that their organization was already GDPR-compliant before the law was enacted. The cost of non-compliance with GDPR was found
to outweigh the costs of compliance by a factor of 2.71. Although regulatory penalties are a well-known
risk of non-compliance, they accounted for just 13% of total non-compliance costs, with
disruption to business operations causing 34%, followed by productivity loss and revenue
loss. Reflection on that experience has prompted
many organizations to begin planning ahead of AI regulations. The fifth highlight from the EIU report says
that responsible AI can improve revenue growth. For AI vendors, responsible AI can result
in a larger target market, a competitive advantage, and improved engagement with existing customers. Of executives surveyed by The EIU, 91% said
that ethical considerations are included as part of their company’s request for proposal
process, and 91% also said they would be more willing
to work with a vendor if they offered guidance around the responsible use of AI. Furthermore, 66% of executives say their organization
has actually decided against working with an AI vendor due to ethical concerns. There is mounting evidence of a positive relationship
between an organization’s ethical behavior and its core financial performance. For example, companies that
invest in environmental, social, and corporate governance measures, or ESG, perform better on the stock market,
while recent data shows that the World’s Most Ethical Companies outperformed the Large
Cap index by 14.4% over 5 years. Customer behaviour is also influenced by ethics. A Nielsen survey of 30,000 consumers across
60 countries found that 66% of respondents were willing to pay more
for sustainable, socially responsible, and ethically designed goods and services. Next, the EIU report highlights that responsible
AI is powering up partnerships. Investors are increasingly looking to align
their portfolios to their personal values, reflected in interest in sustainable, long-term
investing. This stakeholder relationship
can influence an organization’s corporate strategy
and financial performance. The broadest definition of sustainable investing
includes any investment that screens out unsavory investees or explicitly takes ESG factors
and risks into account, such as greenhouse gas emissions, diversity initiatives, and
pay structures. Although ESG assessment criteria don’t traditionally
include Responsible AI, this trend toward investment in socially responsible
firms indicates that funds will be reallocated toward companies that
prioritize responsible AI. One UK investment firm, Hermes Investment
Management, made clear in its 2019 report, “Investors’ Expectations on Responsible
Artificial Intelligence and Data Governance,” that it evaluates investees against a set
of responsible AI principles. More recent research has shown much the same
trend. Forrester research shows that investors are
increasingly interested in nurturing responsible AI startups. In 2013, there was $8m in funding for responsible
AI startups, and that grew to $335m in 2020. There was even a 93% increase in funding from
2018 to 2019. The final highlight from the EIU report relates
to maintaining strong trust and branding. Just as a lack of responsible AI practices can weaken customer trust and loyalty, evidence confirms that organizations that take the lead on responsible AI can expect to reap rewards related to public opinion, trust, and branding. For technology firms, the connection between trust and branding has never been stronger. Experts say that without strong oversight of AI, companies that are developing or implementing AI are opening themselves up to risks, including unfavorable public opinion,
brand erosion and negative press cycles. And brand erosion doesn’t stop at the door
of the company that committed the misdeed. Organizations can mitigate these types of
trust and branding risks through the implementation of responsible AI practices, which have the
potential to boost the organizations and brands they are associated with. As the report by The Economist Intelligence
Unit emphasizes, responsible AI brings undeniable value to firms, along with
a clear moral imperative to embrace it. Although identifying the full spectrum of
negative outcomes that could result from irresponsible AI practices is impossible,
companies have a unique opportunity to make decisions today that will prevent these outcomes
in the future. We hope this video will provide data and discussion
points for you to use when engaging with your own business stakeholders and customers. Our intention is that it will serve as a means
to promote responsible AI practices and equip you with the tools to develop your own business
case for investment.

Module 3: AI’s Technical Considerations and Ethical Concerns

In this module, you will learn about ethical dilemmas and how emerging technology such as generative AI can surface ethical concerns that need to be addressed.

Learning Objectives

  • Describe ethical dilemmas and concerns with AI, and learn how emerging technology can raise difficult questions.
  • Identify some of the key ethical concerns related to artificial intelligence.

Video: AI’s technical considerations and ethical concerns

The text discusses the importance of ethics in Artificial Intelligence (AI) development. It starts by explaining what an ethical dilemma is, using a scenario where a person has to choose between keeping a confidence and warning a friend about an impending layoff. This highlights the complexity of ethical decision-making.

The text then emphasizes that building AI raises many ethical dilemmas due to its potential impact on society. It cites headlines that stress the need for responsible AI and digital trust. A Capgemini report is mentioned, which shows a growing demand for companies to develop robust ethical values and processes.

The text defines ethics as an ongoing process of articulating values and questioning decisions based on those values. It acknowledges that ethics can be subjective and culturally relative, but emphasizes the importance of diverse perspectives and experiences in ethical deliberation.

The text also notes that ethics cannot be reduced to rules or checklists, especially when dealing with new moral challenges created by groundbreaking technology. It requires humility, a willingness to confront difficult questions, and a willingness to change opinions in the face of new evidence.

Finally, the text highlights the need for organizations to define what ethics means to them, in order to build trust with users, teams, and society. It cites the rapid expansion of technology and the need for a thoughtful, careful approach to avoid unintentionally replicating harms. The text concludes by mentioning the increasing awareness of AI-related issues and the growth of organizations defining ethical charters for AI development.

Imagine a scenario where
you have a best friend, someone you’ve known since you were a kid, that you also work with
today. One day your manager, whom you are also very
close to, confides in you that your childhood friend will soon be laid off from their job,
and they ask you to keep it confidential for now. Later that day your friend calls you to share
their excitement that they are planning on buying a new house! Oh no, what do you do!? An ethical dilemma is a situation
where a difficult choice has to be made between different courses of action, each of which entails transgressing a moral
principle. Not making a decision is the same as making
a decision to do nothing. They are uncertain and complicated and require
a close examination of your values to solve. It’s important to note that an ethical dilemma
is different from a moral temptation. A temptation is a choice between a right and
wrong, and specifically, when doing something wrong
is advantageous to you. Imagine
you’re leaving a movie theater after seeing a film and
notice that another movie you’d love to also see is starting. No one is around to check tickets. Do you go? This would not be considered an ethical dilemma,
but a moral temptation. So in our first scenario,
do you share the information with your friend despite the request from your manager to keep
it quiet? Do you pretend you don’t know anything and
keep the confidence of your manager? Do you find some other way to warn them without
crossing that line? Different options are justifiable and could
be considered ethical, depending upon who you ask. Despite the lack of a right answer, a difficult
choice has to be made. When building AI,
there are many ethical dilemmas that may need to be confronted,
due to the impact AI can have on society. This is why the focus on ethics must remain
at the forefront in the AI community. Consider these headlines: “Building digital trust will be essential to adoption of AI tools.” “Great promise but potential for peril: Ethical concerns mount as AI takes bigger decision-making role in more industries.” “Responsible AI becomes critical in 2021.” These headlines highlight the importance of
Responsible AI for companies and for society. According to a Capgemini report in 2020, there
is a growing demand from a wide range of stakeholders, both internal and external, for companies
to develop more robust ethical values, processes, expertise, corporate culture, and leadership. But what exactly do we mean when we talk about
ethics? In general terms, ethics is an ongoing process of articulating values,
and in turn, questioning and justifying decisions based on those values,
usually in terms of rights, obligations, benefits to society, or specific virtues. Ultimately, ethics is what allows everyone
to flourish together as a society. This isn’t to say that there aren’t elements
of subjectivity and cultural relativity that need to be acknowledged and confronted. When looking at ethical frameworks and theories
from around the world, the various approaches can often be contradictory, but regardless
of the approach you align with, ethics is the art of living well with others. As such, it is crucial that ethical deliberation
draws on a diverse set of perspectives and experiences. However, ethics
doesn’t lend itself well to rules or checklists, especially when trying to wade through moral
challenges that have never existed before, like those created through groundbreaking
technology. There’s
an element of ingenuity needed to help solve new moral challenges that haven’t yet been
faced. It requires humility, a willingness to confront difficult questions, and a willingness to change opinions in the face of new evidence and valid objections. It’s also important to understand that ethics
should not be viewed as law and policy. Ethics reflect values and expectations we
have of one another–most of which have not been written down or enforced by a formal
system. While laws and policies often do draw insight
from ethics, many unethical acts are legal, and some ethical acts are illegal. For example, most types of lying, breaking
promises, or cheating are generally recognized as being unethical but are often legal, while
some of the most heroic acts of civil disobedience were illegal at the time. At the end of the day, defining what ethics
means to your organization should compel you to think about the bonds
of trust you want to have with your users, your teams,
and the wider society through the work they do. Without that trust, strong customer relationships
won’t exist. Organizations are rapidly recognizing the
need for responsible AI. The challenges with advanced technologies
are multiplying as the social, political, and environmental impact of 21st-century
technology rapidly expands. Technologies using AI have the power to unintentionally
replicate harms at incredible speed and scale, making the need for a thoughtful, careful
approach even more important. Capgemini’s “AI and the Ethical Conundrum” survey in 2020 shows that twice as many executives are aware of AI-related issues as there were in 2019, and the percentage of organizations that have
defined an ethical charter to provide guidelines on AI development has increased from 5% to
45% in that same time period.

Video: Concerns about artificial intelligence

The article discusses the ethical concerns surrounding Artificial Intelligence (AI) and its applications. The main concerns are:

  1. Transparency: AI systems can be complex and difficult to understand, making it hard for users to know how decisions are made.
  2. Unfair bias: AI can perpetuate and amplify existing biases in society, leading to unfair outcomes.
  3. Security: AI systems can be vulnerable to exploitation by bad actors, and their data-driven nature makes them attractive targets.
  4. Privacy: AI can gather and analyze vast amounts of data, leading to risks of data exploitation, identification, and tracking.
  5. AI pseudoscience: AI practitioners may promote unscientific and ineffective systems that can cause harm.
  6. Accountability: AI systems should be designed to meet the needs of all people and enable human direction and control.
  7. AI-driven unemployment and deskilling: AI may lead to job displacement and deskilling, and society needs to adapt to these changes.

Additionally, there are concerns specific to generative AI, such as:

  1. Hallucinations: AI models generating unrealistic or fabricated content.
  2. Factuality: The accuracy or truthfulness of generated information.
  3. Anthropomorphization: Attributing human-like qualities to non-human entities.

The article suggests that these concerns arise from a lack of resources, diverse teams, and ethical AI codes of conduct. It emphasizes the importance of responsible AI practices to avoid harm and promote human flourishing.
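As a minimal illustration of the unfair bias concern listed above, the sketch below compares a classifier’s error rates across groups defined by a sensitive attribute, the kind of sub-group review described elsewhere in this course. The column names (“group”, “label”, “prediction”), the toy data, and the choice of metrics are assumptions made for this example, not part of the course material.

```python
# Per-group evaluation sketch: compare accuracy and false-positive rate across
# groups. A large gap between groups is the kind of signal an ethical review
# would flag for deeper investigation before deployment.
import pandas as pd

def per_group_metrics(df: pd.DataFrame) -> pd.DataFrame:
    """Accuracy and false-positive rate for each value of the 'group' column."""
    rows = []
    for group, sub in df.groupby("group"):
        acc = (sub["prediction"] == sub["label"]).mean()
        negatives = sub[sub["label"] == 0]
        fpr = (negatives["prediction"] == 1).mean() if len(negatives) else float("nan")
        rows.append({"group": group, "n": len(sub), "accuracy": acc, "false_positive_rate": fpr})
    return pd.DataFrame(rows)

# Toy data: group "b" receives far more false positives than group "a".
toy = pd.DataFrame({
    "group":      ["a", "a", "a", "a", "b", "b", "b", "b"],
    "label":      [0,   1,   0,   1,   0,   1,   0,   0],
    "prediction": [0,   1,   0,   1,   1,   1,   1,   0],
})
print(per_group_metrics(toy))
```

In practice the metrics, the grouping attributes, and the thresholds for concern would all depend on the use case and its societal context, which is exactly the point the transcript below makes.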

So far we’ve explored ethics
at a high level. However there are some ethical concerns that
are especially relevant to the advanced technologies
AI enables. While this module may not be exhaustive, we’ll
take a look at some of the themes that receive a lot of attention when discussing ethical concerns with AI. Each use case for AI raises unique challenges that Google works to address, but as an industry, we must
grow our awareness of these concerns so we can
develop approaches to tackle them together. So what are the main AI concerns being raised? The first is transparency. As AI systems become more complex, it can
be increasingly difficult to establish enough transparency for people to understand how
AI systems make decisions. In many situations, being able to understand
how an AI system works is critical to an end user’s autonomy or ability to make informed
choices. A lack of transparency can also make it harder
for a developer to predict when and how these systems might fail or cause unintended harm. Models that allow a human to understand
the factors contributing to a decision can help
stakeholders of AI systems to better collaborate with an AI. This might mean knowing when to intervene
if the AI is underperforming, strengthening a strategy for using the results of an AI
system, and identifying how the AI can be improved. A second concern is unfair bias. AI doesn’t create unfair bias on its own;
it exposes biases present in existing social systems and amplifies them. A major pitfall of AI is that its ability
to scale can reinforce and perpetuate unfair biases which can lead to further unintentional harms. The
unfair biases that shape society also shape every stage of AI,
from datasets and problem formulation to model creation and validation. AI is a direct reflection of the societal
context in which it’s designed and deployed. To mitigate harms, the societal context and
probable biases need to be recognized and addressed. For instance, vision systems are
being adopted in critical areas of public safety
and physical security to monitor building activity or public demonstrations. Here bias can make surveillance systems more
likely to misidentify marginalized groups as criminals. These challenges
stem from many root causes, such as the underrepresentation of some groups
and overrepresentation of others in training data, a lack of critical data needed to fully
understand a system’s impact, or a lack of societal context in product development. A third AI concern is security. Like any computer system,
there is the potential for bad actors to exploit vulnerabilities in AI systems for malicious
purposes. As AI systems
become embedded in critical components of society,
these attacks represent vulnerabilities, with the potential to significantly affect
safety and security. Safe and secure AI involves traditional concerns
in information security, as well as new ones. The data-driven nature of AI
makes the training data more valuable to exfiltrate, plus, AI can allow for greater scale and speed
of attacks. We’re also seeing new techniques of manipulation
unique to AI, like deepfakes, which can impersonate someone’s voice
or biometrics. A fourth AI concern is privacy. AI presents the ability to quickly and easily
gather, analyze, and combine vast quantities of data from different sources. The potential impact of AI on privacy is immense,
leading to risks of data exploitation, unwanted identification and tracking, intrusive voice
and facial recognition, and profiling. The expanded use of AI comes with the need
to take a responsible approach to privacy. Another concern is AI pseudoscience, where
AI practitioners promote systems that lack scientific foundation. Examples include face analysis algorithms
that claim the ability to measure the criminal tendency of a person
based on facial features and the shape and size of their head, or models used for emotion detection
to determine if someone is trustworthy from their facial expressions. These practices are considered unscientific
and ineffective by the scientific community and can cause harm. However, they have been repackaged with AI
in a way that can make pseudoscience seem more credible. These pseudoscientific uses of AI not only
harm individuals and communities, but they can undercut appropriate and beneficial use
cases of AI. A sixth concern is accountability to people. AI systems should be designed to ensure that
they are meeting the needs and objectives of all types of people,
while enabling appropriate human direction and control. We strive to achieve accountability in AI
systems in different ways, through clearly defined goals and operating parameters for
the system, transparency about when and how AI is being used, and the ability for people
to intervene or provide feedback to the system. The final AI concern is AI-driven unemployment
and deskilling. While AI brings efficiency and speed to common
tasks, there is a more general concern that AI drives
unemployment and deskilling. Further, there is a concern that human abilities
will decline as we depend more on technology. Society has seen technological innovation
in the past and we’ve adjusted accordingly, like when cars replaced horses but created new industries and jobs previously unimagined. Today, innovation and technology advances
are happening faster and at a scale unlike previous times. If generative AI delivers on its promised
capabilities, the labor market could face significant disruption. However,
jobs will shift, as they always do during any major technological advances. For example, who could have imagined flight
attendants before commercial air travel? While many jobs might be complemented by generative
AI, entirely new jobs we can’t imagine today
will be created as well. This challenge is accompanied with opportunities. We need to work together on programs
that help people make a living and find meaning in work,
facing the challenge and seizing the opportunity. In addition to the list of concerns for generic
AI applications and models, there are concerns unique to generative AI. As a well-known type of generative AI, large
language models generate creative combinations of text in the form of natural-sounding language. There are three main concerns with large language models: hallucinations, factuality, and anthropomorphization. In generative AI, hallucinations refer to
instances where the AI model generates content that is unrealistic,
fictional, or completely fabricated. Factuality relates to
the accuracy or truthfulness of the information generated by a generative AI model. Anthropomorphization refers to the attribution
of human-like qualities, characteristics, or behaviors to non-human entities, such as
machines or AI models. These are just a selection of some of
the common concerns related to AI and generative AI development and deployment. Your awareness of these unique technological
challenges can guide you when developing approaches to
tackle them. So what’s causing these concerns? The executive respondents of a Capgemini survey
cited a number of reasons for these reported ethical issues: A lack of resources dedicated to ethical AI
systems. So funds, people, and technology. A lack of diverse teams when developing AI
systems, with respect to race, gender, and geography. And a lack of an ethical AI code of conduct
or the ability to assess deviation from it. In the report, executives also identify the
pressure to urgently implement AI as the top reason why ethical issues arise from the use
of AI. This pressure could stem from the urgency
to gain a first-mover advantage, the need to acquire an edge over competitors
with an innovative application of AI, or the pressure simply to harness the benefits
that AI has to offer. It’s also worth noting that 33% of respondents
in the survey stated that ethical issues were not actually considered while constructing
AI systems, which is concerning in itself. But ethics isn’t just about the things we
don’t want to do or shouldn’t do. There are plenty of socially beneficial uses
for AI and emerging technology to help contribute positively to life and society. AI and new technology can help solve complex
problems by improving materials, designs, and processes; developing new medical and scientific breakthroughs; allowing more reliable forecasting of complex dynamic systems; providing more affordable goods and services; and offering freedom from routine or repetitive tasks. Even for these very socially beneficial solutions,
responsible AI is critical to ensuring those benefits are realized by all and not
just small subsets of stakeholders. The key benefit of ethical practices in an
organization is that they can help to avoid bringing harm to customers,
users, and society at large. Ethical practices promote human flourishing. This is what we need to focus on the most. At Google, the goal of our AI governance is
to try to address these concerns that are fueling ethical issues,
and responsible AI practices can help achieve this. Implementing your own responsible AI governance
and process can help address these ethical concerns in your business.
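As a minimal illustration of the transparency concern discussed in this video, that is, models that let a human see which factors contribute to a decision, the sketch below uses permutation importance on a toy classifier. The synthetic dataset, the model choice, and the scoring setup are assumptions made for this example; real interpretability work would be tailored to the actual model and its stakeholders.

```python
# Permutation importance: measure how much held-out accuracy drops when each
# feature is shuffled. Larger drops indicate features the model relies on more
# heavily, which gives stakeholders one window into how decisions are made.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=5, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: mean importance {score:.3f}")
```

A summary like this does not make a model fully interpretable, but it is one simple way to support the human oversight, intervention, and feedback that the transcript describes.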

Module 4: Creating AI Principles

In this module, you will learn about how Google’s AI Principles were developed and explore the ethical aims of each of these Principles.

Learning Objectives

  • Describe the process for creating AI principles and how your mission and values play a key role.
  • Recognize ethical issue spotting as a core part of AI governance.

Video: How Google’s AI principles were developed

The article discusses Google’s journey in developing its AI principles, which are a set of guidelines that govern the company’s research and product development processes to ensure responsible AI development. Here are the key points:

  1. Google recognized the need for AI principles to guide its development and use of AI, and assembled a cross-functional team of experts from diverse backgrounds to develop these principles.
  2. The team conducted research to identify concerns around AI, including user and academic research, media representation, and cultural references.
  3. The team drafted a set of principles aimed at addressing the major concerns and themes identified in the research, and refined them through an iterative process of feedback and refinement.
  4. The resulting AI principles include seven objectives for AI applications and four AI applications that Google will not pursue.
  5. The principles are incorporated into daily conversations, product development processes, and provide a shared ethical commitment for all Googlers when making decisions.
  6. Google’s AI principles are not unique, and many organizations have developed their own AI principles, with a growing body of research on ethical requirements, standards, and practices in AI.
  7. The article highlights the importance of incorporating diverse voices and perspectives in developing AI principles, and encourages other organizations to learn from Google’s experience and develop their own AI principles that align with their mission, values, and business context.

Overall, the article emphasizes the need for responsible AI development and the importance of developing AI principles that guide the development and use of AI in a way that aligns with an organization’s values and mission.

Google’s mission
and values have been our guiding principles for many
years. However, when thinking about how
to responsibly develop and use AI, we needed to design a set of AI principles
to actively govern our research and product development processes and guide our business
decisions. When we set out to write our AI Principles,
there were some pioneers in the field, but there wasn’t a lot of industry guidance
to help set the course. Things have changed a lot since then, and the list of organizations that have developed guidelines for the responsible use of AI has grown considerably. As illustrated by the Berkman Klein Center report "Principled Artificial Intelligence," many organizations have defined their own AI principles, and, according to Capgemini's research, that number grew by 40% between 2019 and 2020. Capgemini also found that ethically sound
AI requires a strong foundation of leadership, governance,
and internal practices around audits, training, and operationalization of ethics. As you work to create AI principles in your
organization, we’d like to share the process we took to
create ours. We want to acknowledge we are not the only
ones implementing AI Principles, and you should do your research and take
the best learnings from various initiatives. It’s our hope that you can learn from our
process, challenges and experiences, and ultimately create and use your own AI Principles as a foundation for your development process. From our mission statement and values defined
at the beginning, to ongoing work by teams on topics such as
ethics and compliance, trust and safety,
and privacy, Google has had many different initiatives over the years to help guide our
work responsibly. As AI emerged as a more prominent component
of our business, there were many teams
advocating for a responsible AI approach through a growing awareness of the importance
of ML fairness, specifically. What we did not have was a formal and comprehensive
approach to the broader goals of responsible AI that all Googlers could unite behind. Work on Google’s AI Principles started in
the summer of 2017, when our CEO, Sundar Pichai, designated Google
an AI-first company. With this company-wide
vision as our foundation we set out to design an “AI ethical charter”
for Google’s future technology; this effort would evolve into our AI principles. It’s important to note that this journey
was not always smooth. It is the result of several years of work
from many different groups of people, with learning and iterating along the way. We understand this is a complex and evolving
topic, and we will share some of our lessons learned
in a later module. But what has shown itself to be true is that
it takes ongoing commitment to work toward developing AI responsibly. Today, and in the future,
we fully expect to continue iterating on our methods and interpretations as we learn and
the field evolves. We believe that organizations and communities
flourish when, in addition to individuals’ ethical values,
there are also shared ethical commitments that each person plays a part in fulfilling. Having a set of shared and codified AI principles
keeps us motivated by a common purpose in addition to the values we all individually
hold. Google recognized the need to not only focus
on technical development and innovation, but also ensure that development aligned with
our mission and values. A cross-functional group of experts was assembled
to determine what guidelines were needed to address the important challenges raised by
AI. When creating the team, we didn’t just rely
on functional expertise in artificial intelligence. Instead, individuals were chosen who represented
different skills, backgrounds, and demographics across Google. From a skill perspective, we sought people
for the core group with backgrounds in user research, law, public policy, privacy, online
safety, sustainability, and nonprofits. We also sought input from experts in AI, human
rights, and civil rights, and product experts who weren’t strictly in the core working group. We incorporated input across a broad range
of diverse voices, including people from different countries, genders, races, ethnicities and
age groups. We also developed ways for those not directly
in the working group to have a voice. For example, we asked every member to solicit
discussion and feedback from other teams and external experts,
and to bring back ideas to the core group. Having a small group charged with taking action
based on input from as many stakeholders as possible
was key to our success. Incorporating a broad range of voices when
creating your AI principles makes the principles more inclusive, and also fosters trust in
the process. The team started by conducting research. We wanted to document what concerns people
had regarding AI. What did people consider irresponsible AI? The team scoured user and academic research
from a wide range of sources and analyzed how AI was represented in the media. We even researched cultural and pop-cultural
AI references, like how AI was being portrayed on TV shows and in sci-fi books,
to gain a better understanding of how consumers might perceive AI. All of this research would help us discover
the standards that we wanted to guide our work. After that, the team began an iterative process
to draft a set of principles aimed at addressing the major concerns and themes identified in
the research. We started by aggregating and organizing all
of the research into categories, which produced a long list of potential principles. To refine this list: We first asked outside experts in AI, policy,
law, and civil society, without seeing our draft principles, to come up with their own
shortlist. We then shared our draft principles created
from research to compare. And finally, we gathered their reactions and
highlighted gaps to bring back to the internal working group for further consolidation.
We engaged in a continuous feedback and refinement process to
further consolidate the list of principles while maintaining a wide breadth of coverage,
and recognizing anything we may have overlooked. What resulted was Google's AI Principles, including seven "Objectives for AI applications," which guide our AI aspirations, as well as a list of four "AI applications we will not pursue." The goal of identifying the AI applications
we will not pursue was to provide clear guardrails around highly sensitive AI application areas
we will not design for across all parts of our business. Acknowledging what we explicitly won’t build
at all is just as critical as outlining what we will. This work culminated when we published our
AI principles in June of 2018. As a company, we remain dedicated to putting
these principles into practice every day. They are incorporated into daily conversations,
they form the foundation for opportunity and harm reviews in the product development process,
and most importantly they provide a shared ethical commitment for all Googlers when making
decisions. What we’ve described here is the journey
Google took to codify our principles, while the field of responsible AI was in its
early stages. The body of research on ethical requirements,
standards, and practices in AI has grown a lot since then,
especially thanks to the pioneering work of scholars of color and communities of advocates. There has been a relative convergence in the
AI community around what AI principles should encompass to be useful. While your company’s mission, values, geographic
presence and organizational goals will influence your approach,
making some principles more relevant to your particular business context than others, there
are a clear set of themes that apply across uses and industries to help you get started. For example, if your company is involved specifically in creating chatbots for customer support, then while there may be core themes, some of your AI principles may look different from, and more specific to your context than, those of a consulting company involved in a very wide range of use cases for different customers. We hope this insight into our approach is
helpful to your organization, providing a scaffold to build upon. The challenges you face, and your organization’s
values, will define your process for identifying and creating your own AI principles that both
convey the ethos of your organization and serve as a foundation for your AI governance.

Video: Ethical issue spotting

The article discusses the importance of issue spotting in AI governance, which is the process of recognizing potential ethical concerns in an AI project. The author argues that checklists and decision trees are not effective in identifying ethical issues, as each use case, customer, and social context is unique and requires a nuanced approach. Instead, the author suggests that a robust issue spotting practice is needed, which involves:

  1. Becoming sensitized to ethical issues, similar to how a trained birdwatcher becomes more aware of different bird species.
  2. Having multiple reviewers to identify and classify ethical issues and risks.
  3. Leveraging ethical lenses, such as philosophical frameworks, to provide a structured way of considering issues from multiple angles and perspectives.
  4. Learning when and how to use these lenses to assess the consequences of decisions, their impact on human rights and duties, and their alignment with virtuous character.

The author emphasizes that issue spotting is not a one-time task, but rather an ongoing process that requires continuous learning and adaptation to new technologies and use cases. By developing a robust issue spotting practice, organizations can better identify and address ethical concerns in their AI projects.

A core part of AI governance
is a robust issue spotting practice. Issue spotting is the process of recognizing
the potential for ethical concerns in an AI project. The AI principles
your organization develops can be a guide for
spotting these issues. To address ethical concerns, we first need
to identify them. It may be tempting to try and make this process
more efficient through checklists, outlining what is and isn’t acceptable for each principle. We know it’s tempting because we tried it. We tried to create decision trees and checklists
that would ensure our technology would be ethical. That didn’t work. The reality is that we need to address ethical
issues, not just in familiar products or use cases,
but also by recognizing new risks that we have never seen before emerging from
new technologies. Each use case, customer, and social context
is unique. A tool or solution aligned with our AI principles
in one context could be misaligned in another. Technologies we’ve never imagined are being
developed at rapid speed and scale that require an adaptive process, not rigid, prescriptive
yes or no answers. It’s not feasible to expect to create a simple
checklist for each use case to meet your AI principles. We’ve learned that there’s no replacement
for careful review of the facts of each case. A useful analogy is
to think of ethical issues like birds. They are all around us, often unseen,
they tend to be found in some areas more than others,
and they range from big to small, exotic to ordinary. Noticing them gets easier with practice—
on your way to work, you probably passed a lot of birds but you
probably didn’t really notice them. Now imagine you were a trained birdwatcher. You’d be more sensitized to the species
you encounter along your route and much more likely to notice the intricate differences
in birds you pass everyday. Similarly, in issue spotting, the goal is
to become more sensitized, to be able to quickly and accurately identify and classify ethical
issues and risks. As with birdwatching, you see more as a team,
so having multiple reviewers helps. No one individual can see everything that's there, and there are special tools that enhance spotting abilities. In the case of ethical issue spotting,
moral philosophers have spent thousands of years developing lenses
to help identify ethical issues. While you may look at the various philosophical
lenses and wonder which you should choose to align with,
we have discovered that it really isn’t about choosing one approach over another for
all scenarios. In practice, leveraging ethical lenses provides
a structured way of considering issues from multiple angles and perspectives to make sure
we are surveying and surfacing what is important to consider. Learning when and how to use such lenses
allows you to switch between assessing the consequences of your decisions,
their impact on human rights and duties, as well as their alignment with what it means
to have a virtuous character. If you want to learn more about ethical lenses,
see the materials from the Markkula Center for Applied Ethics.
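
To make the idea of switching between lenses slightly more concrete, here is a minimal sketch of turning the three perspectives above (consequences, rights and duties, virtuous character) into structured review prompts. The questions and the function are purely illustrative assumptions, not material from Google or the Markkula Center.

# Minimal sketch: three ethical lenses expressed as structured review prompts.
# The wording of each question is illustrative only.
ETHICAL_LENSES = {
    "consequences": "Who could be helped or harmed by this use case, and how severely?",
    "rights and duties": "Whose rights are affected, and what duties do we owe those people?",
    "virtuous character": "Would building this reflect the character and values we want to embody?",
}

def lens_prompts(use_case):
    """Produce one issue-spotting prompt per ethical lens for a given use case."""
    return [f"[{lens}] {question} (use case: {use_case})"
            for lens, question in ETHICAL_LENSES.items()]

for prompt in lens_prompts("a celebrity recognition API"):
    print(prompt)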

Video: The ethical aims of Google’s AI principles

Google has established seven core ethical aims, known as “Objectives for AI applications,” to guide the development and use of artificial intelligence (AI) in a responsible and ethical manner. These aims are:

  1. Be socially beneficial: Support healthy social systems and institutions, prevent unfair denial of essential services, and reduce the risk of social harm to vulnerable groups.
  2. Avoid creating or reinforcing unfair bias: Promote fair, just, and equitable treatment of people and groups, limit the influence of historical bias in training data, and ensure global representation in datasets.
  3. Be built and tested for safety: Ensure the safety and security of people, communities, and systems, with effective oversight and testing of safety-critical applications.
  4. Be accountable to people: Respect people’s rights and independence, limit power inequities, and ensure informed user consent and a path to report and redress misuse.
  5. Incorporate privacy design principles: Protect the privacy and safety of individuals and groups, handle personally identifiable information with care, and ensure clear expectations and informed consent.
  6. Uphold high standards of scientific excellence: Advance the state of knowledge in AI, follow scientifically rigorous approaches, and ensure feature claims are scientifically credible.
  7. Be made available for uses that accord with these principles: Limit potentially harmful or abusive applications, ensure accountability for Google’s unique impact on society, and make beneficial AI technologies widely available.

Additionally, Google has outlined four areas of AI applications that it will not pursue, including those that are likely to cause overall harm, weapons, surveillance violating internationally accepted norms, and those that contravene international law and human rights.

To operationalize these principles, Google has established a formal review process with a governance structure to assess multifaceted ethical issues that arise in new projects, products, and deals. This includes associated programs and initiatives to ensure responsible AI decisions are made.

At Google we have identified some core ethical
aims which can be seen as an explanation of the
ethos behind each of the AI principles. These ethical aims help us assess against
the principles in a consistent way. Ethical aims can help be a guide for what
ethical issues may exist, but don’t represent a checklist. Let’s walk through some of the core ethical
aims, known as ‘Objectives for AI applications’ for each of the AI Principles: The first, be socially beneficial,
seeks to support healthy social systems and institutions. For example, this might mean preventing automated
systems from unfairly denying essential services for people’s well-being like employment, housing, insurance, or education. The principle also aims to reduce the risk of social harm in terms of quantity, severity, likelihood,
and extent. It aims to diminish risk to vulnerable groups. And it also aims to reduce the risk of unintended
harms. The second principle, avoid creating or reinforcing unfair bias, aims to promote AI that creates fair, just,
and equitable treatment of people and groups. It should limit the influence of historical
bias against marginalized groups in training data both through data that’s
included and data that is absent or invisible due to
historical exclusion. Through this principle we pay close attention
to the impact that technology discrimination might have on the
usefulness of the product for all users. Take, for example, someone doing an image
search for ‘wedding’. An AI image classifier trained on biased data
may only apply the label ‘wedding’ to images of couples wearing traditional western
wedding attire. However, an image where the couple is wearing
traditional wedding attire from another culture, may just be labeled as “people” instead of
“wedding.” What this demonstrates is how this image classifier
may not recognize wedding images from different parts of the
world or cultures. These are not the kinds of labels and distinctions
we want to be seeing and this would be an example
of a data set that doesn’t reflect our global user-base. Underrepresentation, as depicted in this example,
is harmful. Recognizing this, Google is committed to building
global products that are intended to work for everybody. To bring greater representation across the
full range of diversity, Google ran a competition that invited global citizens to add their images to an extended data set, because training data has to be able to represent societies as they are, not as a limited data set might represent them. What's important to recognize is that unfairness can enter into the system at any
point in the ML lifecycle, from how you define the problem originally, how you collect and prepare data, how the model is trained and evaluated, and on to how the model is integrated and
used. At each stage in this flow, developers face different responsible
AI questions and considerations. Within that lifecycle,
the way we sample data, the way we label it, how the model was trained and whether or not
the objective leaves out a particular set of users, can all work together to create biased systems. Rarely can you identify a single cause of,
or a single solution to, these problems. The work of machine learning fairness is
to disentangle these root causes and interactions, and to find ways forward with the most fair
solutions possible. How we do that is by getting clarity on the questions that need answering at each stage of the lifecycle, such as:
What problems will the model solve? Who are the intended users? What other groups may be impacted? What groups are invisible today? How was the training data collected, sampled
and labeled? Is the training data skewed? How was the model tested and validated? Is the model behaving as expected? While not meant to be a comprehensive list
we’ve found questions like these, asked at each stage, to be helpful in guiding our
investigations and recognizing possible unfair bias. While these questions can be hard to answer
and require a range of sociotechnical inputs, there are foundational tools to help in that
process. Some of them are open source via the TensorFlow
ecosystem. Some of them are managed products from Google
Cloud. We won’t focus on the tools here, but you
can check out the resources for links to them. These questions have a huge impact
on how datasets and models get developed. Some of these questions seem simple but are
in fact often underestimated, which can result in significant last-minute
changes to projects or even canceling projects altogether. At its core, doing AI responsibly is about
asking hard questions. The third principle, be built and tested for safety, seeks to promote the safety (both bodily integrity and overall health) of people and
communities, as well as the security of places, systems, properties, and infrastructures from
attack or disruption. This principle also aims to ensure that there
is effective oversight and testing of safety-critical applications, that there is control of AI systems behavior, and that there is a limit to the reliance
on machine intelligence. The fourth principle, be accountable to people,
aims to respect people’s rights and independence. This means limiting
power inequities, and limiting situations where people lack
the ability to opt out of an AI interaction. The principle aims to promote informed user
consent, and it seeks to ensure that there is a path to report and redress misuse, unjust
use, or malfunction. The goal is meaningful human control
and oversight of AI systems to promote explainable and trustworthy AI
decisions. With the fifth of Google’s AI Principles, incorporate privacy design principles,
the aim is to protect the privacy and safety of both individuals and
groups. To do so, we want to ensure that personally
identifiable information and sensitive data are handled with special care through robust
security. It is also the goal of the principle to ensure
that users have clear expectations of how data will be used, and that they feel informed
and have the ability to give consent to that use. The sixth principle, uphold high standards of scientific excellence,
seeks to advance the state of knowledge in AI. This means to follow scientifically rigorous
approaches and ensure that feature claims are scientifically credible. This principle aims to do this through a commitment
to open inquiry, intellectual rigor,
integrity, and collaboration. Responsibly sharing AI knowledge by publishing
educational materials, best practices,
and research enables more people to develop useful and beneficial AI applications, while
avoiding AI pseudoscience. The last ‘Objective for AI applications’
in the AI Principles, be made available for uses that accord with
these principles, seeks accountability for Google’s unique
impact on society. Many technologies
have multiple uses. This principle aims to limit potentially harmful
or abusive applications. This includes how closely a technology’s
solution is related to, or adaptable to, a harmful use. The principle aims for the widest availability
and impact of our beneficial AI technologies, while discouraging harmful or abusive AI applications. It takes into account the fact that Google doesn’t just build and control technology
for its own use, but makes that technology available to others
to use. Google wants to ensure that it’s not just
the technology that we own and operate that aligns with our AI Principles, but the technology that we make available
to customers and partners. We use various factors to accurately define
our scope of responsibility for a particular AI application. As well as these seven ethical aims which
represent our commitment to how we will use AI responsibly, Google has also outlined the four areas of
AI applications we will not pursue. These are: AI applications that are likely to cause overall harm; weapons or other technologies whose principal purpose is to cause injury to people; surveillance violating internationally accepted norms; and applications whose purpose contravenes international law and human rights. These seven aims and four areas together make
up Google’s AI principles and succinctly communicate our values in developing advanced
technologies. We believe these principles are the right
foundation for our company and the future development
of AI. However, establishing AI principles is just
one step. To use these principles to interpret issues
and make decisions requires a process. Responsible AI decisions require careful consideration of how the AI Principles
should apply, how to make tradeoffs when principles come
into conflict, and how to mitigate risks for a given circumstance. To operationalize the AI principles we’ve established a formal review process with a governance structure to assess the
multifaceted ethical issues that arise in new projects, products and deals. We also have several associated programs and
initiatives. This is how the AI Principles are put into
practice.
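
As a concrete illustration of the representation questions raised under the avoid-unfair-bias aim above, the following is a minimal sketch of checking whether a training set reflects the groups a product is meant to serve. The column names, example rows, and the 10% threshold are hypothetical assumptions, not Google data or tooling.

# Minimal sketch: checking training-data representation before training,
# one of the lifecycle questions raised under the unfair-bias aim.
import pandas as pd

train_df = pd.DataFrame({
    "label":  ["wedding", "wedding", "wedding", "wedding", "wedding"],
    "region": ["north_america", "north_america", "europe", "north_america", "south_asia"],
})

share_by_region = train_df["region"].value_counts(normalize=True)
underrepresented = share_by_region[share_by_region < 0.10]  # hypothetical 10% floor

print("Share of training examples by region:")
print(share_by_region)
print("Slices that may need more data:")
print(underrepresented)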

Module 5: Operationalizing AI Principles: Setting Up and Running Reviews


Video: Google’s AI Governance

Establishing AI Governance: A Framework for Responsible AI

The Importance of AI Principles and Review Process

Having defined AI principles is not enough; a review process is necessary to put them into practice. AI principles provide a starting point for establishing values and assessing technology development, but they don’t immediately answer all ethical concerns. A dedicated process promotes a culture of responsible AI, which is critical in establishing goals and evaluating technical tools.

Common Misconceptions about Responsible AI Governance

  1. Hiring Ethical People is Not Enough: Even people with strong ethics can have different conclusions based on their experiences and backgrounds. Research suggests that even the most ethical people can be subject to ethical blind spots.
  2. Checklists are Ineffective: Checklists or decision trees can place boundaries on critical thinking and lead to ethical blind spots. Each product requires its own evaluation, considering its technical details and context.

Key Components of AI Governance

  1. Programs and Practices: Establish procedures to support the review of technologies, allowing teams to exercise moral imagination and build issue-spotting practice.
  2. Diverse Perspectives: Encourage participation from a diverse range of people to ensure robust and trustworthy outcomes.
  3. Psychological Safety: Foster an environment of psychological safety for discussion and debate to succeed.

Google’s AI Governance Committee Structure

  1. Responsible Innovation Team: Provides guidance, establishes common interpretation of AI principles, and ensures calibrated decision-making.
  2. Senior Experts: Inform strategy and guidelines around emerging technologies and themes, and consult on reviews when required.
  3. Council of Senior Executives: Handles complex and difficult issues, makes precedent-setting decisions, and provides accountability.
  4. Customized AI Governance Committees: Embedded within product areas, taking into account unique circumstances, technology, use case, training data, societal context, and AI integration.

Best Practices for Reviews

  1. Seek Diverse Participation: Ensure robust and trustworthy outcomes based on deliberation.
  2. Foster Psychological Safety: Create an environment for successful discussion and debate.

Once you have your AI principles defined,
the next step in establishing AI governance is to set up a review process to put them
into practice. Having AI principles as guidance won’t immediately
answer all of your questions around AI’s ethical concerns, and they don’t relieve
you from having hard conversations. What the Principles provide is a starting
point for establishing the values you stand for and what you need to assess in technology
development. Applying those AI principles then takes concerted
and ongoing effort. While responsible AI technical tools are helpful
to examine how a particular ML model is performing, having robust AI governance processes is a
critical first step in establishing what your goals are. Technical tools are only useful if you have
clear responsibility goals. A dedicated process promotes a culture of
responsible AI often not present in traditional product development lifecycles. Let’s quickly address some common misconceptions
about responsible AI governance. One misconception is that hiring ethical people
will guarantee ethical AI products. The reality is that two people considered
to have strong ethics could evaluate the same situation, or AI solution, and come to very
different conclusions based on their experiences and backgrounds. Research in the World Economic Forum report "Ethics By Design" suggests that even the most ethical people can be subject
to ethical blind spots. This is why it is important to build practices around ethical decision making and to make space for ethical deliberation. Both are big factors in achieving ethical
outcomes. Another common misconception is that it’s
possible to create a checklist for responsible AI. Checklists or decision trees can feel comforting,
but in our experience checklists are ineffective at governance for such nascent technologies. For every product, both the technical details
and context in which it’s used are unique and require its own evaluation. Following a checklist can place boundaries
on critical thinking and lead to ethical blind spots. Essential to AI governance is having programs
and practices to support the review of your technologies. These procedures allow your teams to exercise
moral imagination, which is envisioning the full range of possibilities in a particular
situation in order to solve an ethical challenge. It also encourages people to build their issue
spotting practice in a way that prescribed checklists and more rigid rules could not
achieve. So let’s get into some detail on how we operationalize
the review process. Google created a formal review committee structure
to assess new projects, products and deals for alignment with our AI Principles. The committee structure consists of the following
AI governance teams: A central ‘Responsible Innovation team’ provides guidance to teams
across different Google product areas that are implementing AI Principles reviews, establishing
a common interpretation of our AI principles, and ensures calibrated decision-making across
the company. They handle the day-to-day operations and
initial assessments. This group includes: user researchers, social
scientists, ethicists, human rights specialists, policy and privacy advisors, and legal experts,
among many others, which allows for diversity of perspectives
and disciplines. The second AI governance team in our committee
structure is a group of senior experts from a range of disciplines across Google who provide
technological, functional, and application expertise. These experts inform strategy and guidelines
around emerging technologies and themes, and consult on reviews when required. The third AI governance team in our committee
structure is a council of senior executives who handle the most complex and difficult
issues, including decisions that affect multiple products and technologies. They serve as the escalation body, make complex,
precedent-setting decisions, and provide accountability at the highest level of the
company. Finally there are customized AI governance
and review committees embedded within certain product areas that work closely with the Responsible
Innovation team. These take into account their unique circumstances regarding the technology, use case, training data, societal context, and how the AI is
integrated in production. A best practice for reviews across all teams
is to seek participation from a diverse range of people, which ensures robust and trustworthy
outcomes based on deliberation. An environment of psychological safety needs
to be fostered for this discussion and debate to succeed. To give you a better idea of how Google Cloud
approaches responsible AI, we will go through their custom AI governance process next.
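
To make the tiered structure above a little more concrete, here is a hedged sketch of the escalation logic it implies. The tier names come from this section; the routing function and its inputs are hypothetical assumptions, not Google's internal process.

# Hedged sketch of the escalation flow implied by the committee structure above.
def route_review(precedent_setting_or_cross_product, needs_specialist_expertise):
    """Pick which governance body should handle a review (illustrative only)."""
    if precedent_setting_or_cross_product:
        # Escalation body: complex, precedent-setting decisions, top-level accountability.
        return "council of senior executives"
    if needs_specialist_expertise:
        # Senior experts consult on emerging technologies and themes.
        return "senior experts"
    # Day-to-day operations and initial assessments, working with any
    # product-area review committees.
    return "Responsible Innovation team"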

Video: Google Cloud’s review process

This article describes Google Cloud’s rigorous review process for ensuring ethical and responsible AI development, both for customer-specific projects and for publicly available AI products.

Two key review processes are highlighted:

  1. Customer AI Deal Review: This process scrutinizes early-stage customer projects involving custom AI solutions. It aims to identify and mitigate potential conflicts with Google’s AI Principles before the project proceeds. This involves a multi-stage review process including deal submission, preliminary review by the AI Principles team, and final decision-making by a diverse committee.
  2. Cloud AI Product Development Review: This process focuses on evaluating Google Cloud’s own AI products throughout their development lifecycle. It emphasizes an “ethics by design” approach, incorporating responsible AI considerations from the outset. This involves pipeline tracking, preliminary reviews, in-depth reviews with a prepared brief, and the creation of a product-specific “alignment plan” to address potential harms.

Both review processes prioritize:

  • Alignment with Google’s AI Principles: Ensuring that AI development and deployment align with ethical guidelines.
  • Proactive Risk Mitigation: Identifying and addressing potential harms before they occur.
  • Contextualized Evaluation: Recognizing that ethical implications can vary depending on the specific use case and societal context.
  • Continuous Improvement: The review processes are iterative and evolve based on lessons learned and emerging challenges.

The article emphasizes that ethical AI development requires careful consideration, open discussion, and a commitment to responsible innovation. Google Cloud’s review processes serve as a potential framework for other organizations developing their own AI governance practices.

Google Cloud develops a wide range of technologies
for enterprises that build and implement AI through our AI platform, Vertex AI, as well as MLOps capabilities, APIs, and end-to-end solutions. Google Cloud has implemented its own custom
AI Principles review processes, as we believe that ethical evaluation of the impacts of
the products we're creating is critical for trust and success. Two connected but deliberately distinct review bodies exist in Google Cloud to ensure AI is developed responsibly: a customer AI deal review and a Cloud AI product development review. The customer AI deal review looks at early-stage
customer projects, which involve custom work above and beyond our generally available products,
to determine whether a proposed use case conflicts with our AI principles. The Cloud AI product development review focuses
on how we assess, scope, build and govern the products Google Cloud creates using advanced
technologies before a product can become available to the public. These review processes answer two big questions:
Is the proposed use case aligned with our AI principles? And, if it is, how should we approach the
design and integration of this solution to ensure the intended benefit is realized and
harms are mitigated? Even the most socially beneficial use cases
need to follow responsible design practices or they risk not fulfilling their intended
benefit. So how do Google Cloud’s review processes
work in practice? Let’s start with the Google Cloud customer
AI deal review. The goal is to identify any use cases that
risk not being aligned with our principles before the deal moves forward. This review happens in several stages: Sales
Deal Submission is the intake process that can be achieved in two ways to ensure coverage. Field Sales representatives are trained to
submit their AI customer opportunities for review. Additionally, an automated process flags deals
for review in our company-wide sales tool. In the Preliminary Review stage members of
the Cloud AI Principles team, with help from the central Responsible Innovation team, review
deals submitted via the intake process and prioritize deals needing a deeper review. During this preliminary review, they apply
any relevant historical precedent, discuss and debate potential AI principles risks,
and request additional information where required. This analysis sets the review agenda for the
AI principles deal review committee, which is the group directly responsible for making
final decisions. At the Review, Discuss and Decide stages,
the deal review committee meets to discuss the customer deals. This committee is composed of leaders across
multiple functions in the organization, such as product, policy, sales, AI ethics, and
legal. Careful consideration is given to how the
AI principles apply to the specific deal and use case, and the committee decides whether
the deal can proceed. The range of decisions this group makes can
include: go forward, don't go forward, cannot go forward until certain conditions or metrics are met, or escalate the decision. The decisions are made by consensus. If a clear decision cannot be agreed upon,
the deal review committee can escalate to the council of senior executives. Now let’s walk through the Cloud AI product
development review. It also consists of several different stages:
For Pipeline Development, the Cloud AI Principles team tracks the product pipeline and plans
reviews so they happen early on in the product development lifecycle. This is important when seeking to ensure an "ethics by design" approach to development, ensuring responsible AI considerations are incorporated in the design of the product, as opposed to tacked on at the end. Preliminary review is where a team works to
prioritize the AI products for review, based on launch timelines unless a particular use
case is deemed more risky. With a healthy product pipeline we aim for
in-depth reviews every two weeks. Before a review meeting, members of the Cloud
AI Principles team evaluate the product and draft a Review brief. They work hand in hand with the product managers,
engineers, other members of the Cloud AI Principles team, and fairness experts to deeply understand
and scope the product review. The review brief includes:
the intended goals and social benefits of the product, what business problem the product
will solve, the data being used, how the model is trained and monitored, the societal context
in which the product is going to be integrated, and its potential risks and harms. In this evaluation, the teams collaborate
to think through each of the stakeholder groups affected by the AI system. They discuss the ethical dilemmas and value
tensions that exist when deciding on one course of action versus another. Finally, a key aspect of the review brief
is a proposed alignment plan to align the product development with the AI principles
by addressing potential harms. This review brief is the basis for the review
meeting. Committee members spend time in advance of
the meeting familiarizing themselves with the product and potential issues, in order
to be ready to discuss them from their specialized perspective. Providing the review brief in advance allows
for the review time to be focused, effective and efficient. Discuss & Align is where the team actually
meets to review the AI product from a responsible AI perspective. These in-depth, live product reviews are a
critical component of our review process. They allow us to spot and discuss additional
ethical issues as a team and make decisions that incorporate responsible AI into a product’s
design, development and future roadmap. Over time, these reviews have been effective
at normalizing tough conversations about risky technologies and preventing potentially adverse
outcomes. After the review meeting takes place, the
AI Principles team works to synthesize the relevant content from the review brief and
adds new issues, mitigations or decisions brought up in the review meeting to update
and finalize an alignment plan. At the approval stage, the alignment plan
is sent to the committee and product leaders for sign-off. With this sign-off, the alignment plan is
incorporated into the product development roadmap and the AI Principles team tracks
the execution and completion of the alignment plan. The alignment plan is unique to each product
or solution. It’s important to note that not all paths
forward involve technical solutions or fixes. Ethical risks and harms aren’t always a
result of technological lapses, but can be a result of the context in which the product
is being integrated. The path forward could include, among other
things: Narrowing down the scope of the technology’s purpose. Launching with an allow-list, meaning the
product is not available generally and needs a customer deal review prior to use. Or launching with education materials packaged
with the product, such as an associated model card or implementation guide with information
on using a solution responsibly. Over time, reviewing products with similar
issues has surfaced some findings that can be leveraged across multiple reviews. This has allowed for the creation of certain
generalized policies, which then become precedents, that simplify the process for product teams. Every review needs to be conducted with the
same level of care, as each new case brings up new considerations, highlighting why the
process and in-depth discussions are so important. This process of how we put our AI principles
into practice has grown and evolved over time, and we expect that to continue. As you think about developing your own AI
governance process, we hope this serves as a helpful framework that you can adapt to
fit the mission, values and goals of your organization. Later in the course we’ll explore more lessons
we’ve learned that have made our reviews at Google more effective.
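
The review brief and deal-review decisions described above map naturally onto a simple structured template. The following is a hedged sketch only: the field and decision names follow the descriptions in this section, while the dataclass, enum, and everything else are hypothetical, not Google Cloud's internal tooling.

# Hedged sketch of the review artifacts described above; illustrative only.
from __future__ import annotations
from dataclasses import dataclass, field
from enum import Enum

class DealDecision(Enum):
    GO_FORWARD = "go forward"
    DO_NOT_GO_FORWARD = "don't go forward"
    CONDITIONAL = "cannot go forward until certain conditions or metrics are met"
    ESCALATE = "escalate to the council of senior executives"

@dataclass
class ReviewBrief:
    product_name: str
    intended_goals_and_social_benefits: str
    business_problem: str
    data_used: str
    training_and_monitoring: str        # how the model is trained and monitored
    societal_context: str               # where the product will be integrated
    risks_and_harms: list[str] = field(default_factory=list)
    alignment_plan: list[str] = field(default_factory=list)  # mitigations tracked to completion

An alignment plan entry might be as simple as launching behind an allow list or packaging a model card with the product, echoing the non-technical paths forward mentioned above.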

Video: Celebrity Recognition Case Study

This case study details Google Cloud’s journey towards launching a tightly scoped facial recognition API for celebrity recognition, demonstrating how their AI principles and review processes shaped the development.

Early Concerns and Review:

  • Despite high customer demand, Google initially excluded facial recognition from its Cloud Vision API due to concerns about potential bias.
  • An early iteration of Google’s AI Principles review process led to a critical examination of the technology’s research, societal context, and challenges.

Addressing Concerns and Scoping the Solution:

  • Google recognized the potential for misuse of facial recognition, particularly regarding fairness, surveillance, and privacy.
  • They opted for a narrowly focused approach, developing a celebrity recognition API designed for media and entertainment clients.

External Expertise and Human Rights Assessment:

  • Google sought external expertise from civil rights leaders and a human rights consultancy (BSR) to ensure their approach aligned with diverse perspectives.
  • BSR conducted a Human Rights Impact Assessment, revealing areas requiring additional oversight and validating Google’s decision to avoid general-purpose facial recognition APIs.

Fairness Analysis and Improvements:

  • Google conducted fairness analyses to evaluate the API’s performance across various skin tones and gender groups.
  • Initial tests revealed discrepancies, particularly affecting darker-skinned individuals.
  • Further investigation found inaccuracies in skin tone labels within the training dataset and a lack of representation for some celebrities at different ages.
  • Google addressed these issues by refining the skin tone labels, expanding the training dataset to include images from various career stages, and implementing safeguards like an allow list for qualifying customers and an opt-out policy for celebrities.

Outcome and Lessons Learned:

  • Google’s responsible development process, including rigorous testing and external review, enabled them to launch a celebrity recognition API that aligns with their AI principles.
  • The experience highlights the importance of considering the broader context of representation in media and the need for comprehensive fairness analyses to mitigate potential bias.
  • Google’s approach demonstrates how responsible AI development can lead to successful integration of the technology.

Key Takeaways:

  • Responsible AI development requires careful consideration of ethical and societal implications.
  • External expertise and human rights assessments are crucial for ensuring inclusivity and minimizing potential harm.
  • Rigorous fairness analyses and ongoing improvement processes are essential for mitigating bias.
  • Tightly scoped applications can be a responsible way to introduce potentially sensitive technologies while mitigating risks.

The following case study outlines how our
AI principles and review processes shaped Google Cloud’s approach to facial recognition
technology. Let’s start with the outcome of our review. In 2019 Google Cloud launched Celebrity
Recognition, a tightly scoped API to Media & Entertainment customers looking to tag celebrities
in their professional licensed media content. Searching through video content has been a
difficult and time-intensive task without expensive tagging processes. This makes it difficult for creators to organize
their content and offer personalized experiences. The Celebrity Recognition API is a pre-trained AI model (meaning it's not customizable) that's able to recognize thousands of popular
actors and athletes from around the world based on licensed images. This is Google Cloud’s first enterprise
product with facial recognition. So, how did we get here? Facial recognition was identified as a key
concern for potential unfair bias. In early 2016, Cloud leadership decided facial
recognition would not be a part of the Cloud Vision API offering, despite it being a top
request from our customers. To explore this further, we took facial recognition
through an early iteration of our AI Principles review process. These reviews gave us the open forum and time
to think critically about the research, societal context, and challenges of the technology.
We’ve seen how useful the spectrum of face-related technologies can be for people and for society
overall. They can make products safer and more secure,
like using face authentication to control access to sensitive information. There are
uses with tremendous social good, such as nonprofits using facial recognition to fight the trafficking of minors. But it's important that these technologies are developed thoughtfully
and responsibly. Google shares many of the widely-discussed
concerns over the misuse of facial recognition technology, namely: It needs to be fair, so
it doesn’t reinforce or amplify existing biases, especially where this might impact
underrepresented groups. It should not be used in surveillance that violates internationally accepted norms. And it needs to protect people's privacy, providing
the right level of transparency and control. To reduce the potential for misuse and make
the technology available for an enterprise use case aligned with our AI principles, Google decided to pursue a tightly scoped facial recognition application for celebrity recognition. To prepare for launch readiness of the Celebrity
Recognition API, along with our own internal review processes, we sought help from external
experts and civil rights leaders. We recognized that our lived experience wouldn’t
necessarily align with the lived experience of impacted people, and we needed help incorporating
those experiences and concerns into our review. Systemic underrepresentation of black and
minoritized actors in society was a key factor in our evaluation given the product’s intended
use. To focus even further on potential impacts
we engaged with an external human rights consultancy, Business for Social Responsibility (BSR), to conduct an in-depth Human Rights Impact Assessment. Engaging with BSR played an essential role
in shaping the API’s capabilities and policies, integrating human rights considerations throughout
the product development lifecycle. It also revealed where the solution needed
additional oversight and validated our earlier decision not to offer general purpose facial
recognition APIs. Their full report is publicly available and
can be found in the resources section of this course. Based on BSR’s recommendations Google implemented
a number of safeguards, including: Making the Celebrity Recognition API available only
to qualifying customers behind an allow list. The database of “Celebrity” individuals
is carefully defined and restricted to a predefined list. An opt-out policy is implemented to enable
celebrities to remove themselves from the list. And expanded terms of service apply to
the API. These measures serve to avoid and mitigate
potential harms and provide Google with a firm basis to reduce risks to human rights. Another key step in Google’s review of the
Celebrity Recognition API was a series of fairness analyses. Fundamentally, these fairness tests sought
to evaluate the performance of the API in terms of Recall and Precision. In other words, we evaluated the performance
of the API both for individual skin tone and gender groups, but also for the combination
of those groups —for example, for women with darker skin tones, or men with lighter
skin tones. Over three separate fairness tests, we found
errors between our training datasets and one of the benchmarks based on skin tone. Those errors gave us pause and we decided
to take a deeper look at the root causes. The first thing we checked was whether the
skin tone labels in our dataset were accurate. It was discovered they weren’t completely
accurate for medium- and darker-skinned people. We relabelled the skin tones according to the Fitzpatrick skin type scale, as used in the seminal "Gender Shades" research
by Joy Buolamwini and Timnit Gebru. This research evaluated bias present in automated
facial analysis algorithms and datasets with respect to skin tone and gender. Relabelling the skin tones reduced error rates,
but we found further discrepancies. A small subset of actors represented a significant
proportion of the total missed identifications in the evaluation datasets, especially for
darker-skinned men. Knowing that the majority of errors affected a select few actors, we looked at the actors with the largest number of errors and found they had nearly a 100% false rejection rate. Due to the reduced scope of the Celebrity
Recognition API we were able to go one by one through the test set and gallery to determine
what the problem was. We found that for three black actors our celebrity
gallery had images of them as adults while the training set had images of them as much
younger actors. Our model could not recognize the adult actors
as the younger characters they had played years prior. In this instance, we were able to correct
that problem by expanding the training dataset to include images of celebrities at many different
points in their careers and at different ages. This removed the discrepancy between error
rates. This experience drove home the importance
of taking the time to look at the overall context of the solution, namely, the issues
of representation in media. Only with an appreciation of that context,
tightly scoping the solution, and after rigorously testing and improving the API for fairness
were we able to get comfortable launching the API. This is an example of why responsible development
of AI leads to successful integration of AI. In mid 2020 we welcomed the news that other
technology companies were limiting or exiting their facial recognition business given the
wider concerns about the technology. Ultimately, our AI governance process allowed us to research and scope a product that aligned with our AI principles. Today, Google has released the Monk Skin Tone
(MST) Scale, a more refined skin tone scale that will help us better understand representation
in imagery.
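
To make the sliced evaluation described in this case study concrete, here is a minimal sketch of computing per-subgroup performance with pandas. The example data, skin-tone buckets, and column names are hypothetical placeholders, not the actual evaluation sets or tooling used for the Celebrity Recognition API.

# Minimal sketch of a per-subgroup fairness check: recall and false rejection
# rate for each intersectional slice (e.g. men with darker skin tones).
import pandas as pd

# Each row is one test image of a known celebrity; all values are hypothetical.
eval_df = pd.DataFrame({
    "skin_tone":  ["I-II", "I-II", "V-VI", "V-VI", "III-IV", "V-VI"],  # Fitzpatrick-style buckets
    "gender":     ["female", "male", "male", "female", "male", "male"],
    "correct_id": [True, True, False, True, True, False],  # model returned the right identity
})

by_slice = (
    eval_df.groupby(["skin_tone", "gender"])["correct_id"]
           .agg(n="count", recall="mean")
)
by_slice["false_rejection_rate"] = 1.0 - by_slice["recall"]

# Large gaps between slices are the signal to dig into root causes, as the
# team did above with the gallery and training images.
print(by_slice.sort_values("false_rejection_rate", ascending=False))

A real evaluation would cover far more identities and slices and would track precision alongside recall, but the slicing pattern is the same.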

Module 6: Operationalizing AI Principles: Issue Spotting and Lessons Learned


Video: Issue spotting process

The speaker is discussing Google’s AI governance and review process, which includes “issue spotting” to identify potential ethical issues with AI use cases. This process involves asking critical thinking questions to uncover potential issues that may not be immediately apparent. The questions cover various topics, including:

  1. Product definition and purpose
  2. Intended users and data used
  3. Model training and testing
  4. Context and potential for misuse
  5. Fairness, safety, privacy, and accountability

The speaker uses a hypothetical use case, the Autism Spectrum Disorder (ASD) Carebot, to demonstrate how issue spotting questions can help identify potential ethical issues. The questions asked include:

  • Who are the stakeholders and what do they hope to gain?
  • How might the AI Principles be fulfilled or violated?
  • Is this the best or right way to offer therapy for ASD?
  • How might the team involved influence the fairness of the model?
  • Are there potential privacy risks and considerations?
  • How will human oversight and informed consent be ensured?
  • Have the necessary experts been consulted to develop the tool responsibly?

By asking these questions, teams can think critically about the potential benefits and harms of a use case and take a responsible approach to developing AI applications.

At Google, when we put our AI Principles into
practice, a key part of the AI governance and review process is issue spotting. This is the process of identifying possible
ethical issues with the AI use case in question. Google realized that people needed a guide
to help spot ethical issues, but that guide couldn’t be a simple checklist, which can
hinder critical analysis. Instead, our approach to issue spotting is
based on providing questions that require people to think critically about the technology
that they’ve developed. These questions are rooted in well-established
ethical decision making frameworks that emphasize the importance of seeking out additional information
and considering best- and worst-case scenarios. This helps to uncover potential ethical issues
that may have otherwise gone unnoticed. The questions cover a variety of topics, including:
overall product definition, what problem is being solved, its intended users, what data
is used, and how the model is trained and tested. There are also questions that focus on context
such as the purpose and importance of the use case, the socially beneficial applications
of the use case, and the potential for misuse. These questions are intended to highlight
the implications of design decisions that could impact fair and responsible use of the
AI model. We assess all AI use cases starting from an
assumption that there are always issues we can address, even if a use case seems obviously
socially beneficial. If issues arise in this critical thinking
process that may conflict with the ethical aims of the AI principles, then a more in-depth
review takes place. Over the course of conducting AI Principles
reviews, we have recognized certain complex areas during AI development that warrant a
closer ethical review. Identifying the areas of risk and harm for
use cases relevant to your business context is critical, and particular care should be
taken when building AI applications in these areas. These could, for example, be use cases involving
surveillance or synthetic media, among many others. Areas that are complex for your business will depend a lot on your domain and customers. Identifying emerging areas of risk and social
harm is all part of an active discussion within the industry, and we can expect forthcoming
standards and policies in this area. Let’s take a look at a hypothetical use
case using issue spotting questions to assess whether any AI Principles are being, or are
at risk of being, infringed. We will review a fictional product called
the Autism Spectrum Disorder (ASD) Carebot, adapted from a case study created by the Markkula
Center for Applied Ethics at Santa Clara University. The causes of ASD are deeply debated, though
research indicates that its prevalence is increasing and that children diagnosed early
and provided with key services are more likely to reach their fullest potential. Some schools have found success with robots
that help students practice verbal skills and social interactions under the supervision
of a trained therapist, but this is not yet an affordable or widely accessible resource. As a result, not all schools provide such
support. Now imagine that an AI product team proposes
to build an affordable ASD Carebot aimed at preschool-age children and intended for use
in children’s homes. They envision a cloud-based AI chatbot with
speech, gesture, facial sentiment analysis, and personalized learning modules to reinforce
positive social interactions. In issue spotting, it’s useful to first
identify the questions needed to think critically about the use case. Questions such as, who are the stakeholders
for this product? What do they hope to gain from it? Do different stakeholders have different needs? How might each of the AI Principles either
be fulfilled or violated with the development and use of the ASD Carebot? Your AI Principles review may ask more questions
than you can immediately answer, but the analysis will uncover areas for exploration that will
ultimately impact how your team proceeds. From a social benefit standpoint, the aim
of the product is to expand access to a form of therapy not currently available for all
who could benefit from it. However, is this the best or right way to
offer this therapy and should ASD be treated as something that needs this type of intervention
at all? The goal of avoiding the creation or reinforcement
of unfair bias may lead your team to ask: How might the team involved influence the
fairness of the model? Reviewing the product design and integration,
where should fairness be closely considered and evaluated? Do we have the necessary input from the people
who will be directly affected by the Carebot? Where will the training data be sourced to
develop the underlying models for the Carebot? Who will that data represent, and are there
people, or groups of people, with ASD who may not be well represented? In terms of safety, what could happen if this
model does not perform as expected or is subject to model drift or decay over time? Could human safety be endangered? Taking a look at potential privacy risks and
considerations, the home can be seen as a highly sensitive and shared environment, even
more so than the classroom. What kind of data will this Carebot be collecting? Are there datasets that could pose special
privacy risks? What design principles could help ensure appropriate
privacy protections for this highly sensitive use case? To evaluate how accountable the system is,
the developers may want to know how human oversight of the system will be ensured and
determine what kind of informed consent is appropriate for those engaging with this system. For example, should the Carebot be allowed
to present itself as a “friend”? Are there possible positive and negative impacts
to consider? Scientific excellence urges product owners to evaluate whether they have the necessary expertise to develop such a tool, or whether they should consider engaging an external partner who specializes in ASD or education therapy to develop a deep understanding of students' needs. Some questions to help determine this include:
What kinds of testing and review would be appropriate for this use case to ensure that
it performs and delivers the desired benefits? What are the technical and scientific criteria
for doing this responsibly? Lastly, the AI principle “be made available for uses that accord with these principles” suggests that product owners think about whether the solution will be broadly available to users, for example by being affordable and accessible. By asking issue-spotting questions, teams can
think critically to assess the potential benefits and harms of a use case. Only with a thorough review can a responsible
approach to a new AI application be formed.
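
To make this kind of review concrete, a team could capture its issue-spotting questions as a lightweight, structured checklist and record answers per use case. The Python sketch below is only an illustration under assumed names (IssueSpottingItem, UseCaseReview, and the sample answers are hypothetical placeholders, not part of any Google process or tooling).

```python
from dataclasses import dataclass, field

@dataclass
class IssueSpottingItem:
    """One issue-spotting question, grouped under an AI principle."""
    principle: str          # e.g. "Avoid unfair bias"
    question: str
    answer: str = ""        # filled in during the review
    follow_up_needed: bool = False

@dataclass
class UseCaseReview:
    """A lightweight record of an AI Principles issue-spotting pass."""
    use_case: str
    items: list[IssueSpottingItem] = field(default_factory=list)

    def open_questions(self) -> list[IssueSpottingItem]:
        """Questions that still need input or a deeper review."""
        return [i for i in self.items if not i.answer or i.follow_up_needed]

# Hypothetical example: a first pass over the fictional ASD Carebot.
review = UseCaseReview(
    use_case="ASD Carebot (fictional)",
    items=[
        IssueSpottingItem(
            principle="Be socially beneficial",
            question="Who are the stakeholders and what do they hope to gain?",
            answer="Children with ASD, parents, therapists, schools.",
        ),
        IssueSpottingItem(
            principle="Avoid creating or reinforcing unfair bias",
            question="Who does the training data represent, and who may be missing?",
            follow_up_needed=True,  # flag for in-depth review
        ),
        IssueSpottingItem(
            principle="Incorporate privacy design principles",
            question="What data is collected in the home, and how is it protected?",
            follow_up_needed=True,
        ),
    ],
)

for item in review.open_questions():
    print(f"[{item.principle}] {item.question}")
```

Flagging an item for follow-up mirrors the point above: if issue spotting surfaces something that may conflict with the ethical aims of the AI Principles, a more in-depth review takes place.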

Video: What we’ve learned from operationalizing AI Principles: Challenges

Here are the 4 key challenges of operationalizing Google’s AI principles:

Challenge 1: Measuring Effectiveness
Measuring the effectiveness of responsible AI is difficult, especially when it comes to mitigating ethical issues. Traditional metrics may not be sufficient, and new metrics are needed to track impact, identify trends, and establish precedents.

Challenge 2: Ethical Dilemmas
Applying AI principles can lead to ethical dilemmas, where different values and interpretations create tension and debate. Open and honest conversations are necessary to work through these dilemmas and identify trade-offs.

Challenge 3: Subjectivity
Applying AI principles can seem subjective or culturally relative. To reduce subjectivity, it’s essential to have a well-defined review process, ground the review in technical and business realities, document decisions, and keep a record of prior precedents.

Challenge 4: Getting External Input
Getting direct input from external domain experts and affected groups is critical but challenging. It’s essential to hear a wide range of voices to ensure products are made for everyone, but it’s difficult to represent the viewpoints of an entire group.

Our journey to operationalize Google’s AI
principles required the collaboration and diligent work of many. We continue to learn a lot about this process, both from our successes and our challenges, and are committed to iterating, evolving, and sharing along the way to help you on your journey. Next, we'll explore some challenges we often encounter during the AI Principles process, none of which, we suspect, are unique to Google. The first key challenge is that measuring
the effectiveness of responsible AI is not straightforward. Assessing how mitigations address ethical
issues can be more difficult than assessing technical performance. In a sector that values impact metrics and
quantifiable results, measuring the effectiveness of mitigations that prevent a potential harm
or issue from happening is not easy. Because of this, the metrics that indicate
success for responsible innovation may look a bit different from traditional business
metrics. For example, we track issues and their related
mitigations, and how they are implemented in the product. We also look at the impact our AI governance
has on building customer trust and accelerating deal success. Another measure of effectiveness is gathering
end users’ experiences and perceptions through surveys and customer feedback. These types of metrics help to track impact,
identify trends, and establish precedents. Another challenge is around ethical dilemmas. When applying our principles, ethical dilemmas
often arise rather than clear decisions between right and wrong. Members of the review committee, each with
their individual interpretation of the AI principles, lived experiences and expertise,
apply their own values to ethical issues. This can create a tension between different
values that fosters a lot of debate. It’s important to remember that these dilemmas,
and resulting deliberations, are a core goal of an AI Principles review. Working through these dilemmas requires open
and honest conversations, and an understanding that these aren’t easy decisions to make. These conversations ultimately help identify
and assess the trade-offs between our choices. Yet another challenge is that applying our
AI Principles can seem subjective or culturally relative. A few ways we reduce this subjectivity include: Having a well-defined AI Principles review
and decision-making process to foster trust in that process. Grounding the review in technical, research,
and business realities connects the mitigations to real world issues. Documenting how decisions were made can provide
necessary transparency and ensure accountability for the review team and beyond. Keeping a comprehensive record of prior precedents
is important to ensure consistency, by assessing whether the case at hand is relevantly different
from cases in the past. An additional challenge we face is getting
direct input from external domain experts and affected groups. This is critical but not easy, and we want to recognize that the process of doing so can be difficult. No one person can represent the viewpoints of an entire group. The goal is to hear as wide a range of voices as possible, so that products are made for everyone. These are just a few examples of the many
challenges that can be faced when developing responsible AI. On the responsible AI journey, there will
always be issues and challenges. Striving to minimize and mitigate them starts
with that recognition.
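
One way to make the measurement challenge more tangible is to pair qualitative judgment with a few simple counts, such as issues raised per review, mitigations actually implemented, and end-user survey scores. The sketch below is a hypothetical illustration of that idea (the ReviewOutcome fields and sample numbers are invented), not a description of Google's internal metrics.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class ReviewOutcome:
    """Hypothetical per-review record used to derive responsible-AI metrics."""
    product: str
    issues_raised: int
    mitigations_implemented: int
    survey_scores: list[float]   # end-user feedback, e.g. on a 1-5 scale

def summarize(outcomes: list[ReviewOutcome]) -> dict[str, float]:
    """Aggregate simple effectiveness signals across reviews."""
    total_issues = sum(o.issues_raised for o in outcomes)
    total_mitigated = sum(o.mitigations_implemented for o in outcomes)
    all_scores = [s for o in outcomes for s in o.survey_scores]
    return {
        "reviews": len(outcomes),
        "issues_raised": total_issues,
        "mitigation_rate": total_mitigated / total_issues if total_issues else 0.0,
        "avg_survey_score": mean(all_scores) if all_scores else 0.0,
    }

# Hypothetical data for two reviewed products.
print(summarize([
    ReviewOutcome("chat assistant", issues_raised=6, mitigations_implemented=5,
                  survey_scores=[4.2, 3.8, 4.5]),
    ReviewOutcome("image classifier", issues_raised=3, mitigations_implemented=3,
                  survey_scores=[4.0]),
]))
```

Counts like these do not capture harms that were prevented from ever occurring, which is exactly why they are best read alongside customer feedback and documented precedents rather than on their own.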

Video: What we’ve learned from operationalizing AI Principles: Best practices

The speaker shares 10 best practices learned from operationalizing AI Principles at Google, which can be adapted and evolved to fit the needs of other organizations. These best practices are:

  1. Diverse review committee: Assemble a review committee that is diverse in cultural identity, expertise, and seniority to ensure informed decisions.
  2. Top-down and bottom-up support: Get both senior leadership endorsement and organization-wide adoption to embed responsible AI into the company culture.
  3. Education and training: Educate product and technical teams on tech ethics and encourage non-technical stakeholders to understand AI’s technological, business, and societal impacts.
  4. Align business and responsible AI goals: Recognize that responsible AI adoption is crucial for successful AI products and align business and responsible AI team motivations.
  5. Transparency in governance: Strive for transparency in the responsible AI governance process to build trust, while maintaining confidentiality on individual review details.
  6. Track alignment plans and decisions: Develop a system to keep track of alignment plans, issues, mitigations, and precedents to inform future work and reviews.
  7. Humility and openness to evolution: Maintain a humble approach, recognizing that AI is rapidly changing, and be open to new research and inputs to improve responsible AI practices.
  8. Psychological safety: Invest in psychological safety to encourage teams to take risks, explore “what if” questions, and surface potential issues.
  9. Balance efficiency and comprehensiveness: Balance product development goals with the time needed for comprehensive AI reviews, avoiding analysis paralysis and ensuring thorough consideration of risks and mitigations.
  10. Assume all AI applications need attention: Start with the assumption that each AI application needs attention, exploring all possible scenarios to develop comprehensive mitigations.

These best practices aim to help organizations develop a responsible AI process that is transparent, inclusive, and adaptable to the rapidly changing AI landscape.

We will now explore some of the lessons we
have learned from operationalizing our AI Principles and how these led us to develop
a set of best practices. We encourage you to select, adapt, and evolve
these best practices to fit the needs of your organization. (1) Research shows the benefit of assembling
a review committee that is diverse in cultural identity, expertise and seniority, and at
Google, we have found this to be key. All AI principles require interpretation,
making it important to have a review committee that more closely represents your current,
or potential, user base. It’s critical to include Diversity, Equity
& Inclusion (or DEI) considerations when building our multidisciplinary teams. Bringing together a diverse group encourages
more informed decisions, which in turn, results in more actionable, and feasible solutions. (2) With regard to the adoption of our AI
Principles, we learned that it is important to get both top-down and bottom-up support
and engagement. A top-down mandate where senior leadership
endorses the adoption of AI Principles is necessary, but it’s not enough. We’ve learned that a true cultural transformation
requires organization-wide adoption. Our experience is that bottom-up engagement
from teams helps normalize it and is a critical step when embedding responsible AI into a
company’s culture. It’s also our experience that teams are
likely to be very interested in the topic of responsible AI, often with their own opinions and beliefs. Harnessing that drive and knowledge is beneficial
to overall adoption. (3) At Google, responsible AI adoption comes
from educating our teams. Therefore, we suggest that you train your
product and technical teams on tech ethics, and encourage non-technical stakeholders to
develop an understanding of the technological, business and societal impacts of AI. This helps to build a company culture that
embraces responsible AI, where ethics are directly tied to technology development and
product excellence. (4) It’s important to recognize that the goals and motivations of
the business and the responsible AI team align, since responsible AI equals successful AI. Building responsible AI products means confronting
ethical issues and dilemmas and, at times, slowing down to find the right path forward. Here the motivations of the business and responsible
AI could be perceived to be in conflict, when in reality releasing a product that works
well for everyone is good for business and good for the world. (5) We strive for transparency in our responsible
AI governance process. We believe that transparency around our process,
and the people involved, builds trust. Confidentiality on the details of individual
reviews is often required but transparency in the governance process can help build trust
and credibility. (6) At Google we’ve also learned that the
work we do now can affect our future decisions. Therefore, we suggest developing a system
to keep track of alignment plans including issues, mitigations and precedents. One specific goal of Google’s review team
is to identify patterns and maintain records to track decisions and how they were made
to inform future work and reviews. This system also helps provide transparency
to stakeholders that the review team followed a tried and trusted path to reach their decisions. With this kind of documentation that provides
consistent information throughout an organization, we found that a responsible AI initiative
can scale to reach more people. (7) On our responsible AI journey, we’ve
also recognized the importance of a humble approach. AI is changing rapidly and our world is not
static. We try to consciously remember we are always
learning and we can always improve. We believe we must maintain a delicate balance
between ensuring consistency in our interpretations and remaining open and responsive to new research
and inputs. As we implement our responsible AI practices,
we believe an openness to evolve will allow us to make the best, most informed decisions. (8) We’ve learned the benefit of investing
in psychological safety. When a team has psychological safety, they
often feel safe to take risks and be vulnerable with one another. In the review process, teams need to feel
comfortable to explore “what if” questions and areas of misuse in order to work together
to surface potential issues. However, while exploring all potential issues
is an important step in this process, in order to avoid analysis paralysis you must ground
your issue spotting in technical, business and societal realities before developing a
comprehensive set of guardrails. (9) Another best practice is that efficiency is not the
primary goal of an AI principles process. A balance is needed between the product development
goals and the time needed for a comprehensive AI review. If you focus too much on being efficient,
you may miss potential issues that cause downstream harms for your customers. While our AI Principles require interpretation
and an element of trial and error, they still need to support the speed and scale of the
business. Deliberation and healthy disagreement allow
people the space to explore risks and mitigations, but a thoughtful and robust ethical process
also means supporting product development goals. (10) Start with the assumption that each AI
application needs attention. Ethical issues do not always arise from the
most obviously controversial use cases and AI products. Even seemingly beneficial or innocuous AI
use cases can have issues and risks associated with them. This assumption pushes us to imagine “what
if” and explore all the possible scenarios in order to develop a comprehensive set of
mitigations. Our AI Principles reviews are a framework
to guide those conversations. These are some of the best practices Google
has learned from operationalizing our AI Principles, and we know these will evolve further with
time. We hope that these best practices can be helpful
as you create and implement your own responsible AI process.
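
Best practice 6 above recommends a system for tracking alignment plans, issues, mitigations, and precedents so that future reviews can check whether a new case is relevantly different from past ones. Below is a minimal sketch of such a decision log; the ReviewDecision fields and the tag-based precedent lookup are assumptions made for illustration, not Google's internal system.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ReviewDecision:
    """One documented outcome of an AI Principles review."""
    use_case: str
    decision: str                                  # e.g. "approved with conditions"
    issues: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)
    decided_on: date = field(default_factory=date.today)
    tags: set[str] = field(default_factory=set)    # e.g. {"surveillance", "minors"}

class DecisionLog:
    """Append-only log of review decisions, searchable for precedents."""

    def __init__(self) -> None:
        self._records: list[ReviewDecision] = []

    def record(self, decision: ReviewDecision) -> None:
        self._records.append(decision)

    def precedents(self, tags: set[str]) -> list[ReviewDecision]:
        """Return prior decisions sharing at least one tag with a new case."""
        return [r for r in self._records if r.tags & tags]

# Hypothetical usage: consult precedents before reviewing a new proposal.
log = DecisionLog()
log.record(ReviewDecision(
    use_case="In-home carebot for children",
    decision="approved with conditions",
    issues=["sensitive home audio/video", "consent for minors"],
    mitigations=["on-device processing where possible", "parental consent flow"],
    tags={"minors", "home", "sentiment-analysis"},
))

for prior in log.precedents({"minors", "education"}):
    print(prior.use_case, "->", prior.decision)
```

Keeping the log append-only and searchable is one simple way to support the consistency, transparency, and accountability goals described in the subjectivity challenge earlier in this module.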

Module 7: Continuing the Journey Towards Responsible AI


Video: Continuing the journey towards responsible AI

The speaker from Google emphasizes the importance of building AI responsibly and encourages others to do the same. They highlight the role of Google’s AI Principles in guiding the company’s decisions and actions, and share the hope that others will take the lessons learned from this training to develop their own AI principles and review processes.

The key points are:

  • Rigorous evaluations of responsible AI are crucial for creating successful AI products that work for everyone.
  • Google’s AI Principles serve as a common purpose, guiding the company’s use of advanced technologies in the best interest of people worldwide.
  • The goal is to encourage others to develop their own AI principles and review processes, tailored to their own business context.
  • The speaker acknowledges that no system is perfect, and improving responsible AI is an ongoing task.
  • Google will continue to share updates on their progress and learning on responsible AI through their website and cloud pages.
  • Those interested in working with Google on responsible AI projects or seeking guidance can reach out to their local account representative or the Google Cloud responsible AI team.

Overall, the message is one of collaboration and commitment to responsible AI, with the goal of creating a better future for everyone.

At Google, we believe that rigorous evaluations
of how to build AI responsibly are not only the right thing to do, they are a critical
component of creating successful AI. Products and technology should work for everyone. Our AI Principles keep us motivated by a common
purpose, guide us to use advanced technologies in the best interest of people around the
world, and help us make decisions that are aligned with Google’s mission and core values. We all have a role to play in how responsible
AI is applied. Our aim is that through taking this course you’ve gained an understanding
of how we at Google developed our AI Principles and how we’ve operationalized them within
the organization. Our hope is you can take the lessons learned
and best practices from this training as a starting point to work with your teams to
further your Responsible AI strategy. Our challenge to you is that you now take
this knowledge and develop your own AI principles and accompanying review process. Wherever you are in your AI journey, a valuable
goal can be to talk with your teams about what responsible AI means in the context of
your own business. Those discussions can help you when outlining
your own AI principles. We know no system, whether human or AI-powered,
will ever be perfect, so we don’t consider the task of improving it to ever be finished. We look forward to continuing to update you
on what we’re learning and on our progress. We share these on the Google and Google Cloud
Responsible AI pages. If you want to take the next step and work
with Google on your next project or business goal, you can always reach out to your local
Google Cloud account representative or a Google Cloud ML Specialized Partner. If you have specific questions around responsible
AI, you can reach out to the Google Cloud responsible AI team directly. We are unwavering in our commitment to Responsible
AI. Thank you for joining us on this journey and learning with us.

