
Introduction to Responsible AI

This is an introductory-level microlearning course aimed at explaining what responsible AI is, why it’s important, and how Google implements responsible AI in its products. It also introduces Google’s 7 AI principles.

Learning Objectives

  • Identify the need for a responsible AI practice within an organization.
  • Understand why Google has put AI principles in place.
  • Recognize that decisions made at all stages of a project have an impact on responsible AI.
  • Recognize that organizations can design AI to fit their own business needs and values.

Video: Introduction to Responsible AI

The speaker, Manny, a security engineer at Google, discusses the importance of responsible AI practices within an organization. He explains that AI is not infallible and can replicate existing biases and amplify them if not developed responsibly. Google has developed its own AI principles, which include:

  1. AI should be socially beneficial
  2. AI should avoid creating or reinforcing unfair bias
  3. AI should be built and tested for safety
  4. AI should be accountable to people
  5. AI should incorporate privacy design principles
  6. AI should uphold high standards of scientific excellence
  7. AI should be made available for uses that accord with these principles

Additionally, Google has identified four areas where they will not pursue AI applications, including:

  1. Technologies that cause or are likely to cause overall harm
  2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people
  3. Technologies that gather or use information for surveillance that violates internationally accepted norms
  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights

Manny emphasizes that responsible AI practices are essential to ensure that AI systems are developed with ethics in mind, and that it’s crucial to have a defined and repeatable process for using AI responsibly. He also highlights the importance of transparency, fairness, accountability, and privacy in AI development.
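The summary above stresses having a defined and repeatable process, with assessments that check each project against the principles. As a purely illustrative sketch, and not Google’s actual review tooling, the Python snippet below shows one way an organization might encode such a checklist in code so that every project is walked through the same questions; the seven principle names come from the course, while the `ReviewItem` structure and `run_review` helper are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical sketch of a repeatable responsible-AI review checklist.
# The seven principle names come from the course; everything else
# (ReviewItem, the questions, run_review) is illustrative only.

PRINCIPLES = [
    "Be socially beneficial",
    "Avoid creating or reinforcing unfair bias",
    "Be built and tested for safety",
    "Be accountable to people",
    "Incorporate privacy design principles",
    "Uphold high standards of scientific excellence",
    "Be made available for uses that accord with these principles",
]


@dataclass
class ReviewItem:
    principle: str            # which AI principle this check maps to
    question: str             # the question reviewers must answer
    answered_yes: bool = False
    notes: str = ""


def build_checklist() -> list[ReviewItem]:
    """Create one review item per principle so no principle is skipped."""
    return [
        ReviewItem(p, f"Does the project show evidence that it will '{p.lower()}'?")
        for p in PRINCIPLES
    ]


def run_review(checklist: list[ReviewItem]) -> bool:
    """A project 'passes' this toy review only if every item is answered yes."""
    unresolved = [item for item in checklist if not item.answered_yes]
    for item in unresolved:
        print(f"Unresolved: {item.principle} -> {item.question}")
    return not unresolved


if __name__ == "__main__":
    checklist = build_checklist()
    # In a real process, reviewers at the design, deployment, and application
    # stages would fill these in; here we mark a couple for demonstration.
    checklist[0].answered_yes = True
    checklist[3].answered_yes = True
    print("Review passed:", run_review(checklist))
```

The point of the sketch is only that encoding the review as data makes it repeatable and auditable across projects, which is the property the course emphasizes.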

AI is being discussed a lot, but what does it mean to use AI responsibly? Not sure? That’s great, that’s what I’m here for. I’m Manny and I’m a security engineer at Google. I’m going to teach you how to understand why Google has put AI principles in place, identify the need for responsible AI practice within an organization, recognize that responsible AI affects all decisions made at all stages of a project, and recognize that organizations can design their AI tools to fit their own business needs and values. Sounds good, let’s get into it.

You might not realize it, but many of us already have daily interactions with artificial intelligence, or AI, from predictions for traffic and weather, to recommendations for TV shows you might like to watch next. As AI becomes more common, many technologies that aren’t AI enabled start to seem inadequate, like having a phone that can’t access the internet. Now, AI systems are enabling computers to see, understand, and interact with the world in ways that were unimaginable just a decade ago. And these systems are developing at an extraordinary pace. What we’ve got to remember, though, is that despite these remarkable advancements, AI is not infallible. Developing responsible AI requires an understanding of the possible issues, limitations, or unintended consequences. Technology is a reflection of what exists in society. So without good practices, AI may replicate existing issues or bias and amplify them.

This is where things get tricky, because there isn’t a universal definition of responsible AI, nor is there a simple checklist or formula that defines how responsible AI practices should be implemented. Instead, organizations are developing their own AI principles that reflect their mission and values. Luckily for us, though, while these principles are unique to every organization, if you look for common themes, you find a consistent set of ideas across transparency, fairness, accountability, and privacy.

Let’s get into how we view things at Google. Our approach to responsible AI is rooted in a commitment to strive towards AI that’s built for everyone, that’s accountable and safe, that respects privacy, and that is driven by scientific excellence. We’ve developed our own AI principles, practices, governance processes, and tools that together embody our values and guide our approach to responsible AI. We’ve incorporated responsibility by design into our products, and even more importantly, our organization. Like many companies, we use our AI principles as a framework to guide responsible decision-making.

We all have a role to play in how AI is applied. Whatever stage in the AI process you’re involved with, from design to deployment or application, the decisions you make have an impact, and that’s why it’s so important that you too have a defined and repeatable process for using AI responsibly.

There’s a common misconception with artificial intelligence that machines play the central decision-making role. In reality, it’s people who design and build these machines and decide how they’re used. Let me explain. People are involved in each aspect of AI development. They collect or create the data that the model is trained on. They control the deployment of the AI and how it’s applied in a given context. Essentially, human decisions are threaded throughout our technology products, and every time a person makes a decision, they’re actually making a choice based on their own values. Whether it’s a decision to use generative AI to solve a problem as opposed to other methods, or anywhere throughout the machine learning lifecycle, that person introduces their own set of values. This means that every decision point requires consideration and evaluation to ensure that choices have been made responsibly from concept through deployment and maintenance.

Because there’s a potential to impact many areas of society, not to mention people’s daily lives, it’s important to develop these technologies with ethics in mind. Responsible AI doesn’t mean focusing only on the obviously controversial use cases. Without responsible AI practices, even seemingly innocuous AI use cases, or those with good intent, could still cause ethical issues or unintended outcomes, or not be as beneficial as they could be. Ethics and responsibility are important, not just because they represent the right thing to do, but also because they can guide AI design to be more beneficial for people’s lives.

So how does this relate to Google? We’ve learned that building responsibility into any AI deployment makes better models and builds trust with our customers, and our customers’ customers. If at any point that trust is broken, we run the risk of AI deployments being stalled, unsuccessful, or at worst, harmful to the stakeholders those products affect. And tying it all together, this all fits into our belief at Google that responsible AI equals successful AI.

We make our product and business decisions around AI through a series of assessments and reviews. These instill rigor and consistency in our approach across product areas and geographies. These assessments and reviews begin with ensuring that any project aligns with our AI principles. While AI principles help ground a group in shared commitments, not everyone will agree with every decision made about how products should be designed responsibly. This is why it’s important to develop robust processes that people can trust, so even if they don’t agree with the end decision, they trust the process that drove the decision.

So we’ve talked a lot about just how important guiding principles are for AI in theory, but what are they in practice? Let’s get into it. In June 2018, we announced seven AI principles to guide our work. These are concrete standards that actively govern our research and product development, and affect our business decisions. Here’s an overview of each one.

One, AI should be socially beneficial. Any project should take into account a broad range of social and economic factors, and will proceed only where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides.

Two, AI should avoid creating or reinforcing unfair bias. We seek to avoid unjust effects on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.

Three, AI should be built and tested for safety. We’ll continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm.

Four, AI should be accountable to people. We’ll design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal.

Five, AI should incorporate privacy design principles. We’ll give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.

Six, AI should uphold high standards of scientific excellence. We’ll work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches, and we will responsibly share AI knowledge by publishing educational materials, best practices, and research that enable more people to develop useful AI applications.

Seven, AI should be made available for uses that accord with these principles. Many technologies have multiple uses, so we’ll work to limit potentially harmful or abusive applications.

So those are the seven principles we have. But in addition to these seven principles, there are certain AI applications we will not pursue. We will not design or deploy AI in these four application areas: technologies that cause or are likely to cause overall harm; weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people; technologies that gather or use information for surveillance that violates internationally accepted norms; and technologies whose purpose contravenes widely accepted principles of international law and human rights.

Establishing principles was a starting point rather than an end. What remains true is that our AI principles rarely give us direct answers to our questions about how to build our products. They don’t, and shouldn’t, allow us to sidestep hard conversations. They’re a foundation that establishes what we stand for, what we build, and why we build it, and they’re core to the success of our enterprise AI offerings. Thanks for watching, and if you want to learn more about AI, make sure to check out our other videos.

Quiz: Introduction to Responsible AI

Why is responsible AI practice important to an organization?
Responsible AI practice can help improve operational efficiency.
Responsible AI practice can help build trust with customers and stakeholders.
Responsible AI practice can help drive revenue.
Responsible AI practice can improve communication efficiency.

Organizations are developing their own AI principles that reflect their mission and values. What are the common themes among these principles?
A consistent set of ideas about transparency, fairness, accountability, and privacy.
A consistent set of ideas about transparency, fairness, and diversity.
A consistent set of ideas about transparency, fairness, and equity.
A consistent set of ideas about fairness, accountability, and inclusion.

Which of these is correct with regard to applying responsible AI practices?
Decisions made at an early stage in a project do not make an impact on responsible AI.
Decisions made at a late stage in a project do not make an impact on responsible AI.
Decisions made at all stages in a project make an impact on responsible AI.
Only decisions made by the project owner at any stage in a project make an impact on responsible AI.

Which of the following is one of Google’s 7 AI principles?
AI should uphold high standards of operational excellence.
AI should uphold high standards of scientific excellence.
AI should create unfair bias.
AI should gather or use information for surveillance.