This is an introductory-level microlearning course aimed at explaining what responsible AI is, why it's important, and how Google implements responsible AI in its products. It also introduces Google's 7 AI principles.
Learning Objectives
- Identify the need for a responsible AI practice within an organization.
- Understand why Google has put AI principles in place.
- Recognize that decisions made at all stages of a project have an impact on responsible AI.
- Recognize that organizations can design AI to fit their own business needs and values.
Video: Introduction to Responsible AI
The speaker, Manny, a security engineer at Google, discusses the importance of responsible AI practices within an organization. He explains that AI is not infallible and can replicate existing biases and amplify them if not developed responsibly. Google has developed its own AI principles, which include:
- AI should be socially beneficial
- AI should avoid creating or reinforcing unfair bias
- AI should be built and tested for safety
- AI should be accountable to people
- AI should incorporate privacy design principles
- AI should uphold high standards of scientific excellence
- AI should be made available for uses that accord with these principles
Additionally, Google has identified four application areas in which it will not pursue AI:
- Technologies that cause or are likely to cause overall harm
- Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people
- Technologies that gather or use information for surveillance that violates internationally accepted norms
- Technologies whose purpose contravenes widely accepted principles of international law and human rights
Manny emphasizes that responsible AI practices are essential to ensure that AI systems are developed with ethics in mind, and that it’s crucial to have a defined and repeatable process for using AI responsibly. He also highlights the importance of transparency, fairness, accountability, and privacy in AI development.
AI is being discussed a lot, but what does it mean to use AI responsibly? Not sure? That's great, that's what I'm here for. I'm Manny, and I'm a security engineer at Google. I'm going to teach you how to understand why Google has put AI principles in place, identify the need for responsible AI practice within an organization, recognize that responsible AI affects all decisions made at all stages of a project, and recognize that organizations can design their AI tools to fit their own business needs and values. Sounds good, let's get into it.

You might not realize it, but many of us already have daily interactions with artificial intelligence, or AI, from predictions for traffic and weather to recommendations for TV shows you might like to watch next. As AI becomes more common, many technologies that aren't AI-enabled start to seem inadequate, like having a phone that can't access the internet. AI systems are now enabling computers to see, understand, and interact with the world in ways that were unimaginable just a decade ago, and these systems are developing at an extraordinary pace.

What we've got to remember, though, is that despite these remarkable advancements, AI is not infallible. Developing responsible AI requires an understanding of the possible issues, limitations, or unintended consequences. Technology is a reflection of what exists in society, so without good practices, AI may replicate existing issues or bias and amplify them.

This is where things get tricky, because there isn't a universal definition of responsible AI, nor is there a simple checklist or formula that defines how responsible AI practices should be implemented. Instead, organizations are developing their own AI principles that reflect their mission and values. Luckily for us, though, while these principles are unique to every organization, if you look for common themes, you find a consistent set of ideas across transparency, fairness, accountability, and privacy.

Let's get into how we view things at Google. Our approach to responsible AI is rooted in a commitment to strive towards AI that's built for everyone, that's accountable and safe, that respects privacy, and that is driven by scientific excellence. We've developed our own AI principles, practices, governance processes, and tools that together embody our values and guide our approach to responsible AI. We've incorporated responsibility by design into our products and, even more importantly, our organization. Like many companies, we use our AI principles as a framework to guide responsible decision-making.

We all have a role to play in how AI is applied. Whatever stage of the AI process you're involved with, from design to deployment or application, the decisions you make have an impact. That's why it's so important that you too have a defined and repeatable process for using AI responsibly.
There's a common misconception with artificial intelligence that machines play the central decision-making role. In reality, it's people who design and build these machines and decide how they're used. Let me explain. People are involved in each aspect of AI development. They collect or create the data that the model is trained on. They control the deployment of the AI and how it's applied in a given context. Essentially, human decisions are threaded throughout our technology products, and every time a person makes a decision, they're actually making a choice based on their own values. Whether it's a decision to use generative AI to solve a problem as opposed to other methods, or anywhere throughout the machine learning lifecycle, that person introduces their own set of values. This means that every decision point requires consideration and evaluation to ensure that choices have been made responsibly from concept through deployment and maintenance.

Because there's a potential to impact many areas of society, not to mention people's daily lives, it's important to develop these technologies with ethics in mind. Responsible AI doesn't mean focusing only on the obviously controversial use cases. Without responsible AI practices, even seemingly innocuous AI use cases, or those with good intent, could still cause ethical issues or unintended outcomes, or not be as beneficial as they could be. Ethics and responsibility are important not just because they represent the right thing to do, but also because they can guide AI design to be more beneficial for people's lives.

So how does this relate to Google? We've learned that building responsibility into any AI deployment makes better models and builds trust with our customers, and our customers' customers. If at any point that trust is broken, we run the risk of AI deployments being stalled, unsuccessful, or at worst, harmful to the stakeholders those products affect. And tying it all together, this all fits into our belief at Google that responsible AI equals successful AI.

We make our product and business decisions around AI through a series of assessments and reviews. These instill rigor and consistency in our approach across product areas and geographies. These assessments and reviews begin with ensuring that any project aligns with our AI principles. While AI principles help ground a group in shared commitments, not everyone will agree with every decision made about how products should be designed responsibly. This is why it's important to develop robust processes that people can trust, so even if they don't agree with the end decision, they trust the process that drove the decision.
So we've talked a lot about just how important guiding principles are for AI in theory, but what are they in practice? Let's get into it. In June 2018, we announced seven AI principles to guide our work. These are concrete standards that actively govern our research and product development and affect our business decisions. Here's an overview of each one.

One: AI should be socially beneficial. Any project should take into account a broad range of social and economic factors, and will proceed only where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides.

Two: AI should avoid creating or reinforcing unfair bias. We seek to avoid unjust effects on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.

Three: AI should be built and tested for safety. We'll continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm.

Four: AI should be accountable to people. We'll design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal.

Five: AI should incorporate privacy design principles. We'll give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.

Six: AI should uphold high standards of scientific excellence. We'll work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches, and we will responsibly share AI knowledge by publishing educational materials, best practices, and research that enable more people to develop useful AI applications.

Seven: AI should be made available for uses that accord with these principles. Many technologies have multiple uses, so we'll work to limit potentially harmful or abusive applications.

So those are the seven principles we have. But in addition to these seven principles, there are certain AI applications we will not pursue. We will not design or deploy AI in these four application areas: technologies that cause or are likely to cause overall harm; weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people; technologies that gather or use information for surveillance that violates internationally accepted norms; and technologies whose purpose contravenes widely accepted principles of international law and human rights.

Establishing principles was a starting point rather than an end. What remains true is that our AI principles rarely give us direct answers to our questions about how to build our products. They don't and shouldn't allow us to sidestep hard conversations. They're a foundation that establishes what we stand for, what we build, and why we build it, and they're core to the success of our enterprise AI offerings. Thanks for watching, and if you wanna learn more about AI, make sure to check out our other videos.
Quiz: Introduction to Responsible AI
Why is responsible AI practice important to an organization?
Responsible AI practice can help improve operational efficiency.
Responsible AI practice can help build trust with customers and stakeholders.
Responsible AI practice can help drive revenue.
Responsible AI practice can improve communication efficiency.
Responsible AI practice can help build trust with customers and stakeholders.
Gemini
Responsible AI practice is important to an organization for several reasons, but the most relevant to trust and reputation is:
- Responsible AI practice can help build trust with customers and stakeholders.
Here’s why building trust is crucial:
- Ethical Considerations: AI systems can raise ethical concerns around bias, fairness, and transparency. Responsible AI practices address these concerns and promote ethical use of AI.
- Transparency and Explainability: Responsible AI aims to make AI models understandable and explainable. This builds trust with users who can see how the AI arrives at its decisions.
- Reduced Risk: By proactively managing bias and ensuring data privacy, responsible AI practices can help mitigate risks associated with AI deployment, ultimately leading to a more trustworthy system.
While responsible AI can have other benefits, building trust is the most foundational; here is how the other options compare:
- Operational Efficiency: Responsible AI practices might improve efficiency by avoiding rework due to biased models or fixing issues caused by irresponsible data collection. However, building trust is a more direct benefit.
- Revenue: While responsible AI can lead to improved products and potentially higher revenue, trust is a more fundamental factor influencing customer adoption.
- Communication Efficiency: Clear communication about responsible AI practices can improve communication, but trust is the core outcome sought through such communication.
Organizations are developing their own AI principles that reflect their mission and values. What are the common themes among these principles?
A consistent set of ideas about transparency, fairness, accountability, and privacy.
A consistent set of ideas about transparency, fairness, and diversity.
A consistent set of ideas about transparency, fairness, and equity.
A consistent set of ideas about fairness, accountability, and inclusion.
A consistent set of ideas about transparency, fairness, accountability, and privacy.
Gemini
The most common themes among organizations' AI principles are:
- Transparency – Users should understand how AI systems work and the basis for their decisions.
- Fairness – AI systems should avoid bias and ensure fair outcomes for all.
- Accountability – There should be a clear understanding of who is responsible for the development, deployment, and use of AI systems.
- Privacy – Data should be collected and used with appropriate safeguards, transparency, and user control.
Of the options provided, the correct answer is:
- A consistent set of ideas about transparency, fairness, accountability, and privacy.
While diversity, equity, and inclusion are important, they are largely aspects of fairness; the themes that recur consistently across organizations' principles are transparency, fairness, accountability, and privacy. For instance, a diverse development team can help reduce bias (fairness), and transparency helps identify and address potential biases.
Which of these is correct with regard to applying responsible AI practices?
Decisions made at an early stage in a project do not make an impact on responsible AI.
Decisions made at a late stage in a project do not make an impact on responsible AI.
Decisions made at all stages in a project make an impact on responsible AI.
Only decisions made by the project owner at any stage in a project make an impact on responsible AI.
Decisions made at all stages in a project make an impact on responsible AI.
Gemini
The most responsible answer is:
- Decisions made at all stages in a project make an impact on responsible AI.
Here’s why:
- Responsible AI is an ongoing process, not a one-time decision. It requires considering ethical implications throughout the entire project lifecycle.
- Early Stage Considerations: The data used for training, the chosen algorithms, and how the AI system will be deployed all have ethical implications that need to be addressed early on.
- Development and Implementation: Monitoring for bias, ensuring transparency, and mitigating risks are crucial during development and implementation phases.
- Ongoing Monitoring: Responsible AI requires ongoing monitoring of the AI system after deployment to identify and address any emerging issues.
Therefore, considering responsible AI practices at every stage of the project is essential.
Here’s why the other options are not quite right:
- Early Stage Decisions: As mentioned above, decisions made early on regarding data, algorithms, and deployment can significantly impact the responsible use of AI.
- Late Stage Decisions: While some might think responsible AI is only about monitoring after deployment, decisions made late in the development process, such as how to address bias identified during testing, can also be crucial.
- Project Owner Decisions: While leadership plays a vital role, responsible AI is a collaborative effort. Decisions from data scientists, engineers, and other stakeholders throughout the project lifecycle all influence the responsible development and use of AI.
Llama3
The correct answer is:
Decisions made at all stages in a project make an impact on responsible AI.
According to the text, “responsible AI affects all decisions made at all stages of a project”. This means that decisions made at any stage of a project, from design to deployment, can have an impact on responsible AI practices.
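The answers above keep coming back to monitoring for bias as something to check at several stages, from training data through post-deployment. As a purely illustrative sketch (not part of the course material), the Python snippet below computes a simple demographic parity gap with pandas; the toy data, column names, and 0.10 threshold are all hypothetical assumptions, and a real fairness review involves far more than a single metric.

```python
# Minimal, hypothetical sketch of one concrete bias check that could be
# repeated at several project stages (training data, validation predictions,
# post-deployment logs). Column names, data, and threshold are assumptions.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Largest difference in positive-outcome rates across groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Toy predictions for two groups (hypothetical example).
predictions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})

gap = demographic_parity_gap(predictions, "group", "approved")
print(f"Demographic parity gap: {gap:.2f}")

THRESHOLD = 0.10  # assumed value; a real team would agree on this in review
if gap > THRESHOLD:
    print("Gap exceeds threshold -- escalate for a fairness review.")
```

A check like this is only a starting point; the course's broader message is that the surrounding process of assessments, reviews, and clear accountability is what makes such measurements meaningful.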
Which of the below is one of Google’s 7 AI principles?
AI should uphold high standards of operational excellence.
AI should uphold high standards of scientific excellence.
AI should create unfair bias.
AI should gather or use information for surveillance.
AI should uphold high standards of scientific excellence.
Gemini
Out of the options you listed, the one that aligns with Google’s 7 AI principles is:
- AI should uphold high standards of scientific excellence.
Here’s why this is a Google AI principle:
- Google’s AI Principles emphasize responsible development and use of AI. Scientific excellence ensures the AI system is well-founded in research and avoids making claims beyond its capabilities.
Let’s explore why the other options are not part of Google’s AI principles:
- Operational Excellence: While important, Google’s AI principles focus more on ethical considerations than operational efficiency.
- Create Unfair Bias: Google’s AI principles explicitly aim to avoid creating or reinforcing unfair bias.
- Gather Information for Surveillance: Google’s AI principles promote privacy and discourage using AI for surveillance purposes.
claude-3-opus-20240229
The correct answer is:
AI should uphold high standards of scientific excellence.
This is one of Google's 7 AI principles, which guide the company's AI development and use. The full set of principles is:
- Be socially beneficial
- Avoid creating or reinforcing unfair bias
- Be built and tested for safety
- Be accountable to people
- Incorporate privacy design principles
- Uphold high standards of scientific excellence
- Be made available for uses that accord with these principles
The other options given are contrary to Google’s AI principles. Google explicitly states that AI should avoid creating unfair bias and should not be used for surveillance purposes.
reka-flash-preview-20240611
The correct answer is AI should uphold high standards of scientific excellence.
Google's 7 AI principles are:
- AI should be socially beneficial
- AI should avoid creating or reinforcing unfair bias
- AI should be built and tested for safety
- AI should be accountable to people
- AI should incorporate privacy design principles
- AI should uphold high standards of scientific excellence
- AI should be made available for uses that accord with these principles
The other options are not part of Google's 7 AI principles:
- “AI should uphold high standards of operational excellence” is not mentioned in the list.
- “AI should create unfair bias” is the opposite of the principle “AI should avoid creating or reinforcing unfair bias”.
- “AI should gather or use information for surveillance” falls under the applications Google will not pursue, namely technologies that gather or use information for surveillance that violates internationally accepted norms.