
Responsible AI: Applying AI Principles with Google Cloud

Embracing the growth of enterprise Artificial Intelligence and Machine Learning requires a commitment to building responsible AI practices. While discussing responsible AI is crucial, translating theory into practice can be a significant challenge. If you’re eager to operationalize responsible AI in your organization, this course is designed to guide you through the process.

Through this course, you'll gain insight into Google Cloud's current approach to responsible AI, along with best practices and lessons learned. By the end, you'll have a comprehensive framework for developing your own responsible AI strategy tailored to your organization's unique needs and goals, and you'll be equipped with the knowledge and tools to integrate responsible AI practices into your organization's workflow, ensuring the ethical and effective use of AI.


Module 1: Introduction


Video: Course introduction

The course focuses on the practice of responsible AI, highlighting the importance of building AI systems that are transparent, fair, accountable, and private. The narrators, Marcus and Caitlin, introduce the concept of responsible AI, emphasizing that AI is not infallible and can replicate existing biases and issues if not developed responsibly.

The course will explore Google Cloud’s approach to responsible AI, including its AI principles, practices, governance processes, and tools. Google’s approach is rooted in a commitment to building AI that is accountable, safe, and respects privacy, driven by scientific excellence.

The course aims to provide a framework for organizations to develop their own responsible AI strategy, emphasizing the importance of human decision-making in AI development. The goal is to provide insights and lessons learned from Google Cloud’s journey towards responsible AI development and use.

The introduction also clarifies that the course will not delve into the definitions of AI, machine learning, and deep learning, focusing instead on the importance of human decision-making in technology development and on how to operationalize responsible AI in practice.

Hi there, and welcome to Applying AI Principles with Google Cloud, a course focused on the practice of responsible AI. My name is Marcus. And I'm Caitlin. We'll be your narrators throughout this course.

Many of us already have daily interactions with artificial intelligence, or AI, from predictions for traffic and weather to recommendations of TV shows you might like to watch next. As AI, especially generative AI, becomes more common, many technologies that aren't AI enabled may start to seem inadequate. Such powerful, far-reaching technology raises equally powerful questions about its development and use.

Historically, AI was not accessible to ordinary people. The vast majority of those trained and capable of developing AI were specialty engineers who were scarce in number and expensive to hire. But the barriers to entry are being lowered, allowing more people to build AI, even those without AI expertise.

Now, AI systems are enabling computers to see, understand, and interact with the world in ways that were unimaginable just a decade ago, and these systems are developing at an extraordinary pace. According to Stanford University's 2019 AI Index Report, before 2012 AI results tracked closely with Moore's Law, with compute doubling every two years. The report states that since 2012, compute has been doubling approximately every three and a half months. Over the same period, vision AI technologies have become ever more accurate and powerful. For example, the error rate on ImageNet, an image classification dataset, has declined significantly: in 2011 the error rate was 26 percent, and by 2020 it was two percent. For reference, the error rate of people performing the same task is five percent.
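To put those doubling rates in perspective, here is a quick back-of-the-envelope sketch, not part of the course itself, that computes how much total compute growth each regime implies over a fixed span. The six-year span is an illustrative assumption.

```python
def growth_factor(years: float, doubling_months: float) -> float:
    """Total multiplicative growth over a span of years,
    given a fixed doubling time in months."""
    doublings = years * 12 / doubling_months
    return 2 ** doublings

# Moore's Law regime: compute doubling roughly every 24 months.
print(f"{growth_factor(6, 24):,.0f}x")   # 8x over six years

# Post-2012 regime described in the report: doubling every ~3.5 months.
print(f"{growth_factor(6, 3.5):,.0f}x")  # about 1.56 million-fold over six years
```

The gap between an 8-fold increase and a roughly 1.6-million-fold increase over the same six years is what makes the post-2012 trend so striking.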
And yet, despite these remarkable advancements, AI is not infallible. Developing responsible AI requires an understanding of the possible issues, limitations, or unintended consequences. Technology is a reflection of what exists in society, and without good practices, AI may replicate existing issues or biases and amplify them.

But there isn't a universal definition of responsible AI, nor is there a simple checklist or formula that defines how responsible AI practices should be implemented. Instead, organizations are developing their own AI principles that reflect their mission and values. While these principles are unique to every organization, if you look for common themes, you find a consistent set of ideas across transparency, fairness, accountability, and privacy.

At Google, our approach to responsible AI is rooted in a commitment to strive towards AI that is built for everyone, that is accountable and safe, that respects privacy, and that is driven by scientific excellence. We've developed our own AI principles, practices, governance processes, and tools that together embody our values and guide our approach to responsible AI. We've incorporated responsibility by design into our products and, even more importantly, our organization. Like many companies, we use our AI principles as a framework to guide responsible decision-making. We'll explore how we do this in detail later in this course.

It's important to emphasize here that we don't pretend to have all of the answers. We know this work is never finished, and we want to share what we're learning, to collaborate with and help others on their own journeys. We all have a role to play in how responsible AI is applied. Whatever stage of the AI process you're involved with, from design to deployment or application, the decisions you make have an impact. It's important that you, too, have a defined and repeatable process for using AI responsibly.

Google is not only committed to building socially valuable advanced technologies, but also to promoting responsible practices by sharing our insights and lessons learned with the wider community. This course represents one piece of these efforts. The goal of this course is to provide a window into Google's, and more specifically Google Cloud's, journey toward the responsible development and use of AI. Our hope is that you'll be able to take the information and resources we're sharing and use them to help shape your organization's own responsible AI strategy.

But before we get any further, let's clarify what we mean when we talk about AI. Often people want to know the differences between artificial intelligence, machine learning, and deep learning. However, there is no universally agreed-upon definition of AI. Critically, this lack of consensus around how AI should be defined has not stopped technical advancement, underscoring the need for ongoing dialogue about how to responsibly create and use these systems. At Google, we say our AI principles apply to advanced technology development, as an umbrella that encapsulates all of these technologies. Becoming bogged down in semantics can distract from the central goal: to develop technology responsibly. As a result, we're not going to do a deep dive into the definitions of these technologies. Instead, we'll focus on the importance of human decision-making in technology development.

There is a common misconception with artificial intelligence that machines play the central decision-making role. In reality, it's people who design and build these machines and decide how they are used. People are involved in every aspect of AI development: they collect or create the data that the model is trained on, and they control the deployment of the AI and how it is applied in a given context. Essentially, human decisions are threaded throughout our technology products. And every time a person makes a decision, they're actually making a choice based on their values. Whether it's the decision to use generative AI to solve a problem as opposed to other methods, or any other decision made throughout the machine learning lifecycle, people introduce their own sets of values. This means that every decision point requires consideration and evaluation to ensure that choices have been made responsibly, from concept through deployment and maintenance.

Video: Google and responsible AI

This video emphasizes the importance of responsible AI development and deployment, highlighting potential risks and unintended consequences of AI innovation such as the perpetuation of biases, job displacement, and a lack of accountability. It stresses that ethics and responsibility are crucial for all AI applications, not just obviously controversial use cases, to ensure they benefit people's lives. Google's approach to responsible AI involves a series of assessments and reviews to ensure alignment with its AI principles, along with a commitment to transparency, trust, and community collaboration. The video encourages organizations of all sizes to start their responsible AI journey, acknowledging that it's an iterative process requiring dedication, discipline, and a willingness to learn and adjust over time. Key takeaways include:

  1. Responsible AI is essential for successful AI deployment.
  2. Ethics and responsibility should be integrated into AI development from the start.
  3. AI innovation can have unintended consequences, and it’s crucial to address these risks.
  4. Community collaboration and collective values are essential for responsible AI development.
  5. Robust processes and transparency are necessary for building trust in AI decision-making.
  6. Starting small and taking incremental steps towards responsible AI is better than doing nothing.
  7. Responsible AI is an ongoing process that requires continuous learning and improvement.

Many of us rely on technological innovation to help us live happy and healthy lives, whether it's navigating the best route home or finding the right information when we don't feel well. The opportunity for innovation is incredible, but it's accompanied by a deep responsibility for technology providers to get it right.

There is growing concern surrounding some of the unintended or undesired impacts of AI innovation. These include concerns around ML fairness and the perpetuation of historical biases at scale, the future of work and AI-driven unemployment, and concerns around accountability and responsibility for decisions made by AI. We'll explore these in more detail later in the course. Because there is potential to impact many areas of society, not to mention people's daily lives, it's important to develop these technologies with ethics in mind.

Responsible AI is not meant to focus just on the obviously controversial use cases. Without responsible AI practices, even seemingly innocuous AI use cases, or those with good intent, could still cause ethical issues or unintended outcomes, or not be as beneficial as they could be. Ethics and responsibility are important not only because they represent the right thing to do, but also because they can guide AI design to be more beneficial for people's lives.

At Google, we've learned that building responsibility into any AI deployment makes better models and builds trust with our customers and our customers' customers. If at any point that trust is broken, we run the risk of AI deployments being stalled, unsuccessful, or, at worst, harmful to the stakeholders those products affect. This all fits into our belief at Google that responsible AI equals successful AI.

We make our product and business decisions around AI through a series of assessments and reviews. These instill rigor and consistency in our approach across product areas and geographies. These assessments and reviews begin with ensuring that any project aligns with our AI principles.
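The course doesn't detail how such a review is implemented, but as a purely hypothetical sketch, a lightweight, repeatable principles-alignment gate might look something like the following. The criteria are borrowed from the common themes named earlier (transparency, fairness, accountability, privacy), and every name in the code is invented for illustration; this is not Google's actual review tooling.

```python
from dataclasses import dataclass, field

# Hypothetical review criteria, loosely modeled on common responsible AI
# themes. Real review processes would define their own criteria.
CRITERIA = ["transparency", "fairness", "accountability", "privacy"]

@dataclass
class PrinciplesReview:
    project: str
    findings: dict[str, bool] = field(default_factory=dict)

    def record(self, criterion: str, passed: bool) -> None:
        """Record the outcome of reviewing one criterion."""
        if criterion not in CRITERIA:
            raise ValueError(f"Unknown criterion: {criterion}")
        self.findings[criterion] = passed

    def approved(self) -> bool:
        # A project proceeds only once every criterion has been
        # explicitly reviewed and passed; unreviewed items block by default.
        return all(self.findings.get(c, False) for c in CRITERIA)

review = PrinciplesReview(project="demo-classifier")
review.record("transparency", True)
review.record("fairness", True)
review.record("accountability", True)
review.record("privacy", False)  # a flagged finding blocks approval
print(review.approved())  # False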
During this course, you'll see how we approach building our responsible AI process at Google, and specifically within Google Cloud. At times you might think, "Well, it's easy for you with substantial resources and a small army of people; there are only a few of us, and our resources are limited." You may also feel overwhelmed or intimidated by the need to grapple with thorny new philosophical and practical problems. This is where we assure you that no matter what size your organization is, this course is here to guide you.

Responsible AI is an iterative practice. It requires dedication, discipline, and a willingness to learn and adjust over time. The truth is that it's not easy, but it's important to get right, so starting the journey, even with small steps, is key. Whether you're already on a responsible AI journey or just getting started, spending time on a regular basis simply reflecting on your company's values and the impact you want to make with your products will go a long way in building AI responsibly.

Finally, before we get any further, we'd like to make one thing clear. At Google, we know that we represent just one voice in the community of AI users and developers. We approach the development and deployment of this powerful technology with a recognition that we do not and cannot know and understand all that we need to. We will only be at our best when we collectively tackle these challenges together. The true ingredient to ensuring that AI is developed and used responsibly is community. We hope that this course will be the starting point for us to collaborate on this important topic.

While AI principles help ground a group in shared commitments, not everyone will agree with every decision made on how products should be designed responsibly. This is why it's important to develop robust processes that people can trust, so that even if they don't agree with the end decision, they trust the process that drove the decision. In short, and in our experience, a culture based on a collective value system that is accepting of healthy deliberation must exist to guide the development of responsible AI. By completing this course, you yourself are contributing to that culture by advancing the practice of responsible AI development as AI continues to experience incredible adoption and innovation.

