
Week 2: Social and Economic Impact and Responsible Generative AI

In this module, you will examine the importance and considerations for responsible development and use of generative AI. You will discover the perspectives of different key players, including IBM, on the ethical use of AI. You will also understand how corporations can use generative AI beyond the profit motive to safeguard the interests of all involved stakeholders. Furthermore, you will explore the economic and social impact of generative AI. You will understand the potential economic growth that businesses can achieve with generative AI and how generative AI can benefit society and social well-being. Finally, you will identify the impact of generative AI on the workforce.

Learning Objectives

  • Describe how the workforce can be modified with generative AI.
  • Explain and apply the considerations for the responsible use of generative AI.
  • Identify the ethical responsibility of corporations regarding generative AI.
  • Discuss the economic and social impact of generative AI.
  • Explain the impact of generative AI on the workforce.

Considerations for Responsible AI


Video: Considerations for Responsible Generative AI

The video discusses the critical considerations for the responsible use of generative AI. To ensure that generative AI operates responsibly, four vital considerations are essential:

  1. Transparency: Openness and clarity in how AI models work, make decisions, and generate content, allowing users to understand and trust the technology. Users must have access to non-technical explanations of generative AI, its limits, capabilities, and risks.

Example: A streaming service provides transparency reports explaining how recommendations were generated, empowering users to understand and control their content recommendations.

  2. Accountability: Holding individuals, organizations, and AI models responsible for the ethical and legal consequences of their AI-driven actions and decisions. Since AI models lack autonomy and intent, humans must be accountable for the consequences of generative AI.

Example: A news agency is accountable for the quality and integrity of articles generated by a content generation AI tool.

  3. Privacy: Safeguarding personal data and ensuring that AI-generated content does not disclose sensitive or confidential information. Generative AI models must use privacy-preserving algorithms during training to prevent privacy risks.

Example: An AI-powered chatbot must protect customer data and avoid inadvertently revealing personal information.

  4. Safety Guardrails: Measures, policies, and controls to ensure the safe and responsible use of generative AI models. These guardrails aim to mitigate risks and prevent potential harm or misuse.

Some critical aspects of Safety Guardrails include:

  • Content filtering
  • Security controls
  • Ethical usage
  • Legal compliance
  • Monitoring and reporting
  • User education
  • Access controls

By considering these four vital aspects, we can ensure that generative AI is used responsibly and benefits humanity while minimizing risks.
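
To make the guardrail ideas above concrete, here is a minimal Python sketch of an output-filtering wrapper. Everything in it (the blocklist, the refusal messages, and the `generate` callable) is an illustrative assumption, not part of the course or any specific product; production guardrails typically rely on trained moderation classifiers rather than keyword lists.

```python
import re

# Illustrative blocklist; real systems use trained moderation models.
BLOCKED_PATTERNS = [r"\bcredit card number\b", r"\bbuild a weapon\b"]

def is_safe(text: str) -> bool:
    """Return False if the text matches any blocked pattern."""
    return not any(re.search(p, text, re.IGNORECASE) for p in BLOCKED_PATTERNS)

def guarded_generate(prompt: str, generate) -> str:
    """Check both the user prompt and the model output before returning."""
    if not is_safe(prompt):
        return "Request declined: the prompt violates the usage policy."
    output = generate(prompt)  # call the underlying generative model
    if not is_safe(output):
        return "Response withheld: the generated content failed moderation."
    return output

# Usage with a stand-in model:
print(guarded_generate("Tell me a story", lambda p: f"Once upon a time: {p}"))
```

In the same spirit, monitoring and reporting could be layered on by logging every filtered request, and access controls by authenticating callers before `guarded_generate` is ever reached.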

[MUSIC] Welcome to considerations for
responsible generative AI. After watching this video, you’ll be able
to describe the critical considerations for the responsible use of generative AI. You’ll also be able to discuss how
these considerations can be applied. Billions of people use generative AI
powered technology to improve their lives. As AI systems become increasingly capable,
they can benefit society greatly, but also bring ethical and safety challenges. Ensuring that generative AI operates
responsibly is essential to harness its potential while minimizing risks and
ensuring it benefits all humanity. How do we ensure that generative
AI is used responsibly? By implementing four vital considerations:
transparency, accountability, privacy, and safety guardrails. Let’s take a closer look at each. Transparency in generative
AI refers to the openness and clarity in how AI models work,
make decisions, and generate content, allowing users to understand and
trust the technology. End users often lack a deep understanding
of AI and large language models, or LLMs. That is why transparency cannot be
achieved only through disclaimers regarding potential inaccuracies
in generative AI models. To ensure transparency and ethical
decision making, users must have access to nontechnical explanations of generative
AI, its limits, capabilities, and risks. Let’s look at an example. A streaming service using an AI driven
recommendation system lets users view the suggestion criteria. Users can access a transparency report
that explains how the recommendations were generated, including factors like user
history, content relevance, and diversity. This transparency empowers
users to understand and control their content
recommendations better. Accountability in generative AI means
holding individuals, organizations and AI models responsible for the ethical and legal consequences of their AI
driven actions and decisions. As generative AI becomes better
able to mimic human creativity, we must carefully consider
the human side of this equation. Because unlike a human, an AI model
does not possess autonomy or intent, it cannot be held accountable
in any meaningful sense. Everyone will be impacted by generative
AI in one way or another, from outsourced labor to layoffs, changing professional
roles, and even potentially legal issues. Since we cannot know the consequences
that may result from the mass adoption of generative AI,
we need analysis, scrutiny, context awareness, and humanity at
the center of all AI endeavors. This means linking the generative AI
product and its outcomes with its creator, the enterprise. Here’s an example that illustrates
the role of accountability in responsible generative AI. A news agency uses a content
generation AI tool to produce articles. If the generative AI tool generates
an article that is factually incorrect or biased, and the article goes to print, who
should be held responsible for this error? The news agency. That the agency used an AI tool in this
example doesn’t absolve it from being accountable for the quality and
integrity of the articles it publishes. Privacy in generative AI involves
safeguarding personal data and ensuring that AI generated content
does not disclose sensitive or confidential information. Without using privacy preserving
algorithms during training, generative AI models become
vulnerable to privacy risks. Generative AI can inadvertently generate
content that reveals personal information, as it learns from large databases that
often contain sensitive data without explicit consent. LLMs are particularly at risk as they can
memorize and associate sensitive data, leading to privacy breaches. The acceptance of generative AI
apps has raised privacy concerns, as sometimes responses inadvertently
include sensitive data. Further, integrating unvetted generative
AI apps into business systems can cause compliance violations. Let’s look at an example. An AI powered chatbot is used for
customer support. When responding to inquiries,
it occasionally generates responses that inadvertently reveal a customer’s
personal information, such as contact details or
purchase history. This privacy lapse is a concern, as the chatbot’s responses
should protect customer data. The final consideration is
the use of Safety Guardrails. Safety Guardrails in generative
AI are measures, policies, and controls to ensure the safe and
responsible use of generative AI models. These guardrails aim to mitigate risks and
prevent potential harm or misuse. They help maintain ethical and
legal standards and protect against unintended consequences. Some critical aspects of Safety Guardrails
in generative AI include content filtering, which refers to implementing filters
to prevent harmful or offensive outputs; security controls that protect
generative AI models from misuse and potential cybersecurity threats. Ethical usage to ensure AI generates
content without harm, discrimination, or bias. Legal compliance to comply with
relevant laws and regulations, including data protection and
intellectual property rights. Monitoring and reporting to continuously
oversee AI model behavior and provide issue reporting and
resolution mechanisms. User education to ensure comprehension of
AI model capabilities, limitations, and users’ own responsibilities in usage. Access controls to manage
access to AI models, particularly in contexts
where sensitive or controversial content may be generated,
to control usage and mitigate potential risks. In this video, you learned that as the
role of generative AI in making our lives better increases, one must also
consider how to use it responsibly. There are certain key
considerations toward this. The first is transparency,
which refers to the openness and clarity in how AI models work,
make decisions, and generate content. Then there’s accountability,
which means holding individuals, organizations and
AI models responsible for the ethical and legal consequences of their AI
driven actions and decisions. The third consideration is privacy, which
refers to protecting personal data and ensuring that AI generated content
does not disclose sensitive or confidential information. Finally, you learned about the importance
of Safety Guardrails in generative AI, which are measures, policies, and controls
put in place to ensure the safe and responsible use of generative AI models. [MUSIC]

Video: Implementing Responsible Generative AI Across Domains

Ethical Concerns in Different Domains:

  • Content creation: accuracy, authenticity, copyright infringement, and data security
  • Customer service: transparency, monitoring, and control for customers
  • Software development: transparency, explainability, human oversight, and safety and security of generated code

Mitigating Ethical Concerns:

  • Verify and validate generated content
  • Include human review and authentication
  • Use other AI or third-party tools to verify authenticity
  • Ensure originality of generated content
  • Provide appropriate attribution to original creators
  • Understand ownership and rights for input and generated content
  • Be mindful of data privacy concerns and avoid providing sensitive information (a prompt-masking sketch follows this list)
  • Understand AI platform’s policies regarding data retention, usage, and sharing
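
The data-privacy points above can be partly automated before a prompt ever leaves the organization. The sketch below is a minimal illustration (the regex patterns and placeholder tokens are assumptions for demonstration; real PII detection usually adds a trained named-entity recognizer):

```python
import re

# Illustrative patterns; production systems combine regexes with NER models.
PII_PATTERNS = {
    "[EMAIL]": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "[PHONE]": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def anonymize_prompt(prompt: str) -> str:
    """Mask obvious PII with placeholders before sending text to an AI tool."""
    for placeholder, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(anonymize_prompt("Contact Jane at jane.doe@example.com or +1 555-123-4567."))
# -> Contact Jane at [EMAIL] or [PHONE].
# Note: names like "Jane" are untouched; regexes alone only catch
# structured identifiers, which is why NER is usually layered on top.
```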

Best Practices for Implementing Responsible AI:

  • Provide training to employees on ethical implications and potential risks
  • Use individually trained AI models to avoid bias and hallucinations
  • Comply with relevant laws governing data protection, privacy, and AI usage
  • Educate employees on guidelines for using AI tools and generated content

Key Takeaways:

  • Generative AI poses numerous ethical challenges
  • Organizations should adopt best practices and considerations to leverage the power of generative AI ethically
  • Transparency, monitoring, and control are crucial in customer service
  • Transparency and explainability are essential in software development
  • Responsible AI implementation requires understanding of ethical implications and potential risks.

Welcome to implementing responsible generative
AI across domains. After watching this video, you’ll be able to describe common ethical issues
in different domains, and explain the
considerations for mitigating ethical issues
in different domains. Generative AI has created new opportunities
for professionals across different domains, such as IT, marketing, customer service, learning
and development, and others. Organizations in different
domains adopting generative AI experience
enhanced productivity, creativity, and task automation. However, there are also
concerns regarding biases, inaccuracies, data
privacy and security, and copyright infringement. Any organization or
professional leveraging generative AI has to consider the
ethical implications of generative AI and take responsibility for
navigating the concerns associated with the
use of generative AI. Let’s explore the
ethical implications of generative AI in a few domains that are broadly
using generative AI. A prominent and widespread
use of generative AI is content creation.
Professionals in different
domains and industries, including marketing, human resources, learning
and development, software documentation, and entertainment, use generative
AI for content creation. The major ethical concerns
surrounding the usage of generative AI for
content creation include content accuracy, content authenticity, copyright infringement,
and data security. Generative AI may
produce inaccurate, inconsistent, or
hallucinated content. To mitigate the concerns
regarding content accuracy, professionals must
verify and validate the content they generate
using generative AI systems, and correct any errors
or inconsistencies. Many generative tools,
including ChatGPT, encourage users to review
the generated information. The content generated
by generative AI can be plagiarized with
no clear citations. In addition, the AI tool may cite fake or
nonexistent sources. There is a risk of
professionals using deceptive AI generated content, resulting in tarnishing
the credibility of their work or the reputation
of their organization. For example, a lawyer
may unwittingly cite fake cases based
on their research through a generative AI tool. To mitigate the concerns
regarding content authenticity, organizations can
consider including a step of human review to assess and authenticate AI
generated content before its implementation. They can also consider using other AI or third party
tools or systems, to verify the authenticity of content produced
by generative AI. Now let’s analyze the concerns regarding copyright
infringement. AI development
organizations may use copyrighted material
from other organizations to train the AI models. For example, Getty Images
sued Stability AI for using its image library to train its
AI image-generation model. If an AI model is trained
on copyrighted material, there’s a risk that
the generated content may replicate copyrighted work. Professionals using
generative AI should ensure the originality
of the generated text, images, videos, or
any other assets. If the generative AI tool produces content based
on specific sources, consider providing
appropriate attribution to the original creators as per
their terms and conditions. Professionals must also ensure that the data provided
to the AI tool, that is, the input content, is not copyrighted material. Using copyrighted
material as input without permission could
lead to legal issues. Understanding ownership
and rights for input and generated
content is crucial. Some AI tools and services
assign ownership and legal responsibility
of the content to the users as seen in
OpenAI’s terms of use. However, other tools
may claim ownership or impose specific terms
on content usage. As AI tools tend to store or use the data you
feed as a prompt, it’s crucial to be mindful of data privacy concerns
associated with its use. Organizations or professionals
should avoid providing any sensitive or
confidential information as input to the
generative AI tools. Use anonymized data
when possible to minimize the risk of
personal identification. Another important consideration
is to understand the AI platform’s policies regarding data retention,
usage, and sharing. Understand how the generative AI platform collects and
utilizes your data. Inquire about the platform’s
data security measures. Ensure that data collection is transparent and aligned
with your consent. Another popular domain that can leverage generative AI
is customer service. Let’s try to understand
the ethical considerations that ensure responsible AI
in customer service. Firstly, clearly communicate
to customers when they are interacting with a generative AI chatbot instead of a human. Transparency builds trust and helps manage customer
expectations. Also continuously monitor the generative AI
system’s performance and impact on customer
interactions. It is important to
safeguard customer data and ensure compliance with
relevant privacy regulations. Further, it’s crucial to
empower customers with control, allowing them to easily
switch to human assistance, seek clarifications
or escalate concerns. Let’s explore a
few considerations and implications
that developers or software engineers must evaluate when using generative
AI in their work. Firstly, ensure that
the code generated by generative AI is transparent
and understandable. Accordingly, include inline
comments in the code. Include a step of human review and comprehension of
the generated code. Regarding the safety and
security of the generated code, it’s important to implement rigorous testing and
validation procedures. Verify that the generated code does not introduce
vulnerabilities, bugs, or security risks that
could harm users or systems. While generative AI demonstrates great capabilities for use in different domains
and industries, it also poses numerous
ethical challenges. Accordingly,
organizations should adopt relevant
best practices and considerations to
leverage the power of generative AI ethically, it’s crucial to
provide training to employees regarding the
ethical implications, potential risks and
limitations of generative AI. Further, organizations
can consider using their own individually
trained AI models to avoid bias and
hallucinations and to ensure the protection of the
organization and consumer data. It’s crucial for
organizations to comply with relevant laws
governing data protection, privacy and AI usage. Also, employees should be educated on the guidelines
regarding the usage of AI tools and AI generated
content. Let’s summarize. In this video, you learned about the common
ethical concerns regarding the use of generative
AI in different domains. The major ethical concerns
around the usage of generative AI for
content creation include content accuracy, content authenticity, copyright infringement,
and data security. Organizations and
professionals should adopt relevant measures
to mitigate these issues. To implement responsible
AI in customer service, organizations should consider implications regarding
transparency, monitoring and control
for customers. To implement responsible AI
in software development, consider implications regarding transparency
and explainability, human oversight, and the safety and security of
the generated code.

Ethical Considerations for Generative AI in Different Domains

Exercise 1: Evaluate generative AI’s ability to generate output for the human resource (HR) domain

Exercise 2: Use generative AI for content creation in the marketing domain

Exercise 3: Use generative AI for data analysis based on customer data

Video: AI Ethics: Perspective of Key Players

The video discusses the ethical standards and principles established by key players in the AI industry, including IBM, Google, OpenAI, and Microsoft, for responsible AI development. The main points are:

IBM:

  • Guidelines for responsible AI development: explainability, fairness, robustness, transparency, and privacy
  • Products and services for a responsible AI workflow, such as IBM Watson Studio and IBM watsonx.governance

Google:

  • Guiding principles: social benefit, safety and design, accountability to people, privacy-centric designs, and principle-aligned use
  • Tools for recognizing and addressing bias in generative AI models, such as Fairness Turnaround and Explainable AI

OpenAI:

  • Safety guidelines for responsible AI development, including alignment research, misuse prevention, transparency, and collaboration
  • Tools for detecting and preventing malicious AI systems, such as GPT classifier

Microsoft:

  • Guiding principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability
  • Tools for identifying and mitigating bias in AI models, such as Fairness Checklists and Safety Analysis

The video highlights the importance of ethical considerations in AI development and the commitment of these companies to responsible and ethical AI practices.
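
Both the Google and Microsoft summaries above mention differential privacy. As a rough, self-contained illustration of the idea (the dataset, query, and epsilon value are invented for demonstration and are not any company's implementation), the Laplace mechanism adds calibrated noise to a query result so that no single individual's record can be inferred:

```python
import numpy as np

def dp_count(values, threshold, epsilon=1.0):
    """Differentially private count via the Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one person
    changes the true count by at most 1), so noise with scale 1/epsilon
    masks any individual's contribution.
    """
    true_count = sum(v > threshold for v in values)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

salaries = [48_000, 52_000, 61_000, 75_000, 90_000]  # toy data
print(dp_count(salaries, threshold=60_000, epsilon=0.5))  # noisy value near 3
```

Smaller epsilon values give stronger privacy but noisier answers; that trade-off is the core design decision in any differentially private analysis.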

Welcome to AI ethics, perspective of key players. After watching this video, you’ll be able to identify the open ethical standards and principles established
by key players, like IBM, Google, OpenAI, and Microsoft for
responsible AI development. You’ll also be able
to evaluate and compare the practical
strategies employed by the key players to
actively pursue and implement their
respective AI principles. As generative AI models become integral to various aspects
of business and society, ethical considerations
become paramount. Open ethical standards provide a transparent framework that
guides the development, deployment, and utilization
of these models, ensuring responsible
and fair practices. Let’s discuss how key players
like IBM, Google, OpenAI, and Microsoft have established a set of principles to promote responsible and ethical
development and usage of Gen AI. To start with, IBM has created the following guidelines for the responsible use
of AI technologies. Explainability. An AI
system must clearly explain the factors influencing the recommendations
behind its algorithms, in terms relevant to
various stakeholders with various objectives. Fairness. This
means an AI system treats everyone or every
group of individuals fairly. When calibrated in
the right manner, AI can assist humans in
making fairer choices, countering human biases
and promoting inclusivity. Robustness. AI-powered
systems must be actively defended from
adversarial attacks, minimizing security risks and enabling confidence
in system outcomes. Transparency. To
reinforce trust, users must be able to see
how the service works, evaluate its functionality, and comprehend its
strengths and limitations. Privacy. AI systems must prioritize and safeguard consumers’ privacy
and data rights, and provide explicit
assurances to users about how their data
will be used and protected. IBM offers its
customers products and services for responsible, transparent, and
ethical AI workflow. IBM watsonx.governance is
a cost-effective platform enhancing an organization’s
risk mitigation abilities, regulatory compliance, and ethical management
for AI activities, even for models developed
using third-party tools. IBM Watson Studio enables the development of advanced
machine learning models, using both notebooks
and code-free tools, facilitating the infusion of
AI into business operations. Moving forward,
Google embraces the following guiding
principles. Social benefit. AI must contribute to the
well-being of society and avoid creating or
reinforcing unfair bias. Safety and design.
AI systems should prioritize safety and security and refrain from causing harm. Accountability to people. AI systems should be accountable
to people and should be designed in a way that allows for transparency and oversight. Privacy-centric designs. AI systems must
safeguard user privacy, refraining from harmful or unfair
data practices. Principle-aligned use. AI technology
should only be made available for uses that are consistent with
these principles. Now, let’s explore how
Google is actively pursuing the objectives
outlined in its AI principles. Google has developed
various tools to assist developers and users in recognizing
and addressing bias in generative AI models, help users understand the
decision-making process within generative AI systems, and safeguard the privacy
of user data and prevent the misuse
of Gen AI models. Fairness Turnaround is
designed to aid developers in identifying and eliminating
biases from these models. Explainable AI is designed to elucidate how Gen AI
models make predictions. Differential privacy is a method that enables the collection and analysis of data
without compromising the privacy of individual users, and Safety Net is
designed to identify and prevent malicious uses
of Gen AI models. OpenAI’s safety
guidelines constitute a framework of
principles and practices formulated to guarantee secure and advantageous
utilization of Gen AI technologies. These guidelines are based
on the company’s belief that Gen AI has great potential
to benefit society, but it’s important to develop and use it in a way
that’s responsible, safe, and aligned
with human values. These guidelines
cover a wide range of topics, including
alignment research. OpenAI is dedicated to aligning
Gen AI with human values. The company is heavily
investing in research to develop Gen AI
systems that are safe, reliable, and beneficial.
Misuse prevention. OpenAI is actively
working to prevent the misuse of generative
AI technologies by developing tools that
can detect and prevent the creation of
malicious AI systems. The company has
developed a tool called GPT classifier that can be used to detect text generated
by an AI system. Transparency.
OpenAI is committed to transparency in
research and development. They openly publish
research papers and code, allowing others to review
and build upon their work. This ensures ethical and
transparent research practices. Collaboration. OpenAI
emphasizes collaboration for the safe and beneficial
development of AI. The company is actively
engaged in collaborations with other researchers, policymakers,
and organizations. OpenAI is a member of
the Partnership on AI, which is a group of companies
and organizations working together to promote the safe and beneficial
development of AI. Microsoft has been
at the forefront of AI development
for many years, and is dedicated to using AI
ethically and responsibly. Let’s see how Microsoft
is actively pursuing the objectives outlined in
the guiding principles. Fairness. AI should be designed and used in a way
that is fair and unbiased. Microsoft has developed a tool called Fairness
Checklists that can help developers identify
potential sources of bias in their AI models. Reliability and safety. AI systems should be
reliable and safe and should not be used in a
way that could cause harm. Safety analysis is a tool
that can help developers identify and mitigate
potential safety risks in their AI models. Privacy and security. AI systems should be designed to safeguard both user
privacy and security. A technique called differential
privacy can be used to collect and analyze data without compromising
user privacy. Inclusiveness. AI systems
should be designed and employed in a manner that is accessible and
inclusive for everyone. The Inclusive Design Toolkit has been developed to
help developers design AI systems that are accessible to people
with disabilities. Transparency. AI systems should be transparent and
easy to understand. Microsoft openly publishes
its AI ethics principles and AI bias mitigation toolkit. Accountability. AI systems
should be accountable, and those who develop
and use them should take responsibility
for their impacts. Explainable AI is a
tool developed to help users understand how AI
models make predictions. In this video, you gained insights into the
ethical standards and principles established by major industry
players such as IBM, Google, OpenAI, and Microsoft for responsible
AI development. You explored the key principles
including explainability, fairness, robustness,
transparency, and privacy, and discussed the
strategies and tools employed by each
company to actively implement these principles
in the development and usage of
generative AI models. The video emphasized the
increasing importance of ethical considerations as generative AI
becomes integral to various aspects of
business and society. There are specific
guidelines and tools developed by IBM, Google, OpenAI, and
Microsoft, showcasing their commitment to responsible
and ethical AI practices.

Video: Trustworthy AI: An IBM Perspective

Here is a summary of the conversation between Kate Soule and Kush Varshney:

Introduction
Kate Soule, a Business Strategy Senior Manager at IBM Research and the MIT-IBM Watson AI Lab, interviews Kush Varshney, a Distinguished Research Scientist at IBM Research, about trustworthy AI.

Trustworthy AI
Kush emphasizes that trust is the most important aspect of AI, and that without trust, AI models cannot be successfully implemented in enterprises. He highlights the importance of transparency, fairness, and governance in AI development.

Chatbot Experiment
Kate asks chatbots to learn about Kush’s work in trustworthy AI, but the results are inaccurate, demonstrating the concept of “hallucination” in AI, where AI systems make up information that doesn’t exist.

Risks of Generative AI
Kush discusses the risks associated with generative AI, including hallucination, leakage of private information, bullying, and copyright infringement. He emphasizes the need for governance and safeguards to mitigate these risks.

Defining Trust
Kush defines trust in AI as not only accuracy and quality but also reliability, robustness, fairness, and transparency. He emphasizes the need for openness and understanding of AI systems to build trust.

Transparency and Fairness
Kush discusses the importance of transparency in AI development, using the analogy of an open-concept kitchen to illustrate the need for understanding of AI systems. He also emphasizes the need for fairness in AI, particularly in generative AI, to prevent stereotyping and toxicity.
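
To make the fairness discussion concrete, here is a minimal, generic disparate-impact check. It is not IBM's AI Fairness 360 toolkit, just a hand-rolled sketch of one metric such toolkits compute: the ratio of favorable-outcome rates between two groups, where values well below 1.0 flag potential bias. The toy data and group labels are assumptions for illustration.

```python
def disparate_impact(outcomes, groups, protected="B", reference="A"):
    """Ratio of favorable-outcome rates: protected group vs. reference group.

    A common rule of thumb treats ratios below 0.8 as a red flag.
    """
    def rate(g):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(selected) / len(selected)
    return rate(protected) / rate(reference)

# Toy hiring data: 1 = favorable outcome (e.g., interview offered)
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(disparate_impact(outcomes, groups))  # ~0.33 -> well below 0.8
```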

Adopting Generative AI Responsibly
Kush advises clients to adopt generative AI in a safe, responsible, and ethical way by implementing governance, testing, and continuous monitoring to ensure trustworthy AI systems.

Conclusion
Kate and Kush conclude the conversation, emphasizing the importance of trust, transparency, and governance in AI development.

[MUSIC] Hello and welcome to AI Academy. My name is Kate Soule, I’m a Business
Strategy Senior Manager at IBM Research and the MIT-IBM Watson AI Lab. And this is my colleague, Distinguished
Research Scientist Kush Varshney. Kush is an AI researcher with
a focus on trustworthy AI. Kush, I’m really excited we get
to have this conversation today. I’ve been working with clients and thinking about trustworthy AI from
a business perspective for a while now. But I know you’ve been innovating
in trustworthy AI from a research perspective for a number of years. When it comes to AI, I think you and I can both agree trust is
the number one important thing. >> Speaker 2: Yeah, it has to be. If we don’t have that
trust in those models, they have billions of parameters and
they’re really huge. But until we have that trust, we can’t really get the benefit
of that AI in enterprises. >> Speaker 1: Now, you have quite a few
accomplishments to your name in this space, right? You’ve published hundreds of papers, you have algorithms that are working
in labs around the world. You’re a sought after speaker, right? >> Speaker 2: Yeah. >> Speaker 1: And I say this to emphasize
that you have a big footprint in this space, a public footprint in this space. And given your public accomplishments, I thought it might be interesting if I
asked some consumer chatbots to learn a little bit more about some of the work
that you’re doing in trustworthy AI. >> Speaker 2: Yeah,
that sounds like a fun thing to do. >> Speaker 1: So you published a book
on trustworthy machine learning. >> Speaker 2: Yep,
that’s absolutely correct. >> Speaker 1: You were named an Elevate
Fellow by the government of Ontario, Canada. >> Speaker 2: I’ve never
heard of that fellowship. >> Speaker 1: You’re a co-founder
of the Machine Learning for Good Social foundation. >> Speaker 2: That’s almost right. So I did found the IBM science for
Social Good initiative, so we’re close. >> Speaker 1: You’ve created
many open source toolkits. >> Speaker 2: So we created the 360
toolkits: AI Fairness 360, AI Explainability 360,
and some others, yep. >> Speaker 1: You have
a PhD in electrical and computer engineering from the University
of Illinois at Urbana Champaign. >> Speaker 2: I went to MIT. >> Speaker 1: So, Kush, what’s going
on with these chatbot responses here? Some of these are right, and
some of them are complete fiction. What’s going on? >> Speaker 2: So
I would call that hallucination. And so that means that these AI systems,
they’ll make some things up, they’ll make associations
that aren’t exactly correct. And I think that’s what
happened in our last example. So kind of created this association
that didn’t exactly exist. >> Speaker 1: Got it, I think everyone is
feeling the pressure of operationalizing generative AI as fast as possible. But when companies hear
about AI hallucinating or other toxic behaviors like bullying or
gaslighting, and there’s other concerns around generative
AI and copyright infringements. Or the revealing of personal or
private information, and it makes companies concerned and
nervous and even fearful about adopting
generative AI in their organization. >> Speaker 2: Yeah, and what we have
to remember is that AI is not a race, it’s a journey, we have to be careful. And as anything that we want
to get into enterprise AI, it has to have these principles of
trust and transparency throughout. We have to slow down, put in all of
these governance aspects, make sure that we’re putting in safeguards,
guardrails and just doing the right thing. [MUSIC] >> Speaker 1: I know you and your team
have worked on this for a while, right? How have the risks changed with
the advent of generative AI compared to the risks we were seeing before
with traditional machine learning? >> Speaker 2: Yeah, so predictive
machine learning and generative AI, they’re kind of two
sides of the same coin. So a lot of the techniques are very
similar, but there are differences. So the hallucination that you mentioned,
the leakage of private information, the bullying, all of those are new
risks that we haven’t seen before. We still have a lot of other risks
as well that kind of carry over, but the difference mainly is
around the solutions. How do we address these issues? And a lot of the reason we can’t apply
the same techniques from before is because of the huge data
that we’re dealing with now. Just humongous, humongous data sets. >> Speaker 1: Yeah, can you talk a little
bit more about that specifically? So when we have these
huge volumes of data, how does that impact our
ability to trust a model? >> Speaker 2: Yeah, the data is so huge,
we can put in data governance techniques, we can ensure that certain
sites are not scraped, that certain filtering is done and
so forth. But it’s beyond the ability
of any individual human or a team of humans to even read through
every single piece of content. So that’s where the challenge comes from. [MUSIC] >> Speaker 1: Now,
let’s take a step back for a second and talk about trust as a concept. When I talk to clients about trust, most of the time their minds jump straight
to accuracy, thinking about quality. And can they trust the model in the use
case that they’re trying to deploy it in. How do you define trust? >> Speaker 2: Yeah, so
I think the starting point is that. So the quality, the accuracy, just
the general performance of these models, because without that,
nothing else follows. But that’s just the starting point, right? >> Speaker 1: Yeah.

>> Speaker 2: So there’s all sorts of other considerations, whether it’s
reliability and robustness or fairness. Can we, as humans,
understand how the model is working? Can we understand the entire
process of how it came together? Can we ensure that the models,
these AI systems, are working for our benefit, not doing something else? >> Speaker 1: Yeah, I think a valid
criticism of AI in general, including generative AI,
is that it can be a bit of a black box. Can you speak a little bit more
about transparency as a dimension of trustworthy AI? >> Speaker 2: Transparency says it,
I mean, already, right? So we think of these AI systems,
they’re black boxes in some capacity, and what we need is more openness. We need to shed light on them. And what transparency allows us to do is
kind of understand what’s going on from beginning to end. So an analogy to that is,
let’s say you’re at a restaurant and it has an open concept kitchen. You can see all the ingredients
before they’re chopped up. You can see what the chef is doing, and all of that gives you confidence that
there’s just general goodness happening. And the same thing applies to AI systems. If we can know where the data came from,
what sort of processing steps were performed, what sort of testing was done,
what sort of auditing was done, all of that together gives us
the understanding of what’s going on. [MUSIC] >> Speaker 1: Now, Kush, you and your team have also spent a lot
of time thinking about fairness. Can you speak a little
bit more about that? >> Speaker 2: Yeah, fairness is
a topic I’m really passionate about. And in the traditional machine learning
sense, we talked about fairness for hiring algorithms, for
lending algorithms, these sort of things. But when we move to the generative AI
world, things are a little bit different. So the thing that we’re most concerned
about is stereotyping and other toxicity. Because it’s the most vulnerable members
of society that suffer the most when these systems are actually
doing things in a harmful way. >> Speaker 1: And this is one of the areas
where I feel like generative AI and machine learning have a lot in common. At the end of the day,
if they’re trained on biased data, they’re going to create biased outputs. And generative AI, for better or worse,
is trained on human created data. And humans have conscious and
unconscious biases. And the data that they
create can reflect that. >> Speaker 2: Yeah, absolutely. And it’s the algorithms that just
amplify all of those societal and cognitive biases as well. >> Speaker 1: So with all these risks and
considerations around trust, how can clients adopt generative AI in
a safe, responsible, and ethical way? >> Speaker 2: Yeah, I think the only
word I need to say is governance. And AI governance really
starts at the beginning. What is the intended use of these
systems that we’re creating? Where is the data coming from? Where is it sourced? Where are we processing it? Putting in all these different checks and
balances and doing all of the testing
in deployment as well. Can we continuously monitor
how they’re performing and step in if they go
beyond those guardrails? >> Speaker 1: Absolutely,
I think you put it really well. When the stakes are high,
you need to be able to trust, but have that trust validated and verified,
and not just trust for trust’s sake. >> Speaker 2: Yeah. >> Speaker 1: Okay, it’s time to wrap up. Thank you so much, Kush. And for everyone else, thank you for
watching this episode of AI Academy. Please join us again for future episodes as we unpack some of the
most important topics in AI for Business. [MUSIC]

Reading: Generative AI and Corporate Social Responsibility

Reading

Practice Quiz – Considerations for Responsible AI

________________ in generative AI refers to the openness and clarity in how AI models work.

What is a recommended approach to mitigate concerns regarding content authenticity when using generative AI?

Which of the following tools developed by Google is designed to assist developers in identifying and eliminating biases from generative AI models?

Social and Economic Impact


Video: Economic Implications of Generative AI

Introduction

  • Generative AI is transforming the economy, with potential benefits including increased efficiency, new job roles, and professional growth
  • The video explores the potential economic growth that businesses can achieve with generative AI

Benefits of Generative AI

  • Four functions are poised to grow the most with assistance from generative AI: customer operations, marketing and sales, software engineering, and research and development
  • Example: a company using generative AI for customer service resolved 14% more requests per hour and reduced issue handling time by 9% (a worked example follows this list)
  • Goldman Sachs research predicts a 7% (almost $7 trillion) increase in global GDP and a 1.5% increase in productivity within 10 years
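
A quick worked example of those customer-service figures; the baseline numbers are invented purely to show the arithmetic:

```python
baseline_requests_per_hour = 10.0   # assumed baseline throughput per agent
baseline_handling_minutes = 6.0     # assumed baseline time per issue

with_ai_requests = baseline_requests_per_hour * 1.14       # 14% more resolved
with_ai_handling = baseline_handling_minutes * (1 - 0.09)  # 9% faster handling

print(f"{with_ai_requests:.2f} requests resolved per hour")  # 11.40
print(f"{with_ai_handling:.2f} minutes per issue")           # 5.46
```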

Challenges of Generative AI

  • Job market shifts: the rise of the prompt engineer, a highly skilled role that requires no particular degree
  • Gender bias in AI professionals: women make up only 22% of AI professionals globally
  • Automation of jobs: approximately 300 million full-time jobs to be partially automated by generative AI, with jobs performed typically by women (e.g. office secretary, human resources manager) at risk of being displaced

Impact on Job Market

  • Job seekers need to be aware that their future role in the workplace may be determined by AI, with algorithm-driven hiring software vetting resumes and applications
  • Concerns about AI filters: biased and unverified data used to train algorithms, leading to controversy (e.g., flagging students wearing headscarves or struggling to read the skin tone of nonwhite students)

Conclusion

  • The global workforce is certifying itself to become relevant in an AI-first economy
  • The economy will benefit from increased efficiency and performance, new job roles, and professional growth
  • However, there are concerns about widening the income gap along gender lines and displacement of customer service representatives, among others.

[MUSIC] Welcome to the Economic Implications
of Generative AI. After watching this video, you’ll be able
to describe the potential economic growth that businesses can achieve
with generative AI. List the expected benefits and challenges
associated with too much AI too soon. And identify how generative AI tools are
shaping job functions and job profiles. Businesses are happy with the many
benefits that generative AI applications are delivering. According to a McKinsey report, four
functions are poised to grow the most with assistance from generative AI:
customer operations, marketing and sales, software engineering, and
research and development. In one example, a company with 5,000
customer service representatives uses generative AI for
handling customer complaints. They are able to resolve 14%
more requests per hour and reduce their issue handling time by 9%. Maybe you’ve interacted with one of these
generative AI customer service chatbots, it’s tough to know whether you’re
talking to a machine or a human, but truth be told, today’s chatbots
are able to give prompt resolutions. The ability of generative AI tools to
respond like humans is at the center of this economic revolution. Here are two examples. In a landmark
moment, GPT-4 scored in the 90th percentile on the Uniform Bar Exam, exceeding the
score of all prior large language models. Google’s AI translated Bengali
without ever learning the language, according to Google’s Senior Vice
President for Technology and Society. No doubt the economic impact of generative
AI is being felt across the world. Goldman Sachs research predicts
that in 10 years’ time, generative AI could result in a 7%, or almost
$7 trillion, increase in global GDP and a 1.5% increase in productivity. Is such economic growth beneficial for
everyone? History has shown that each time the world
discovers a powerful technology, such as the steam engine that drove
the Industrial Revolution, the Internet that is driving
the social media revolution, or the foundation models driving
the generative AI revolution, we see increased efficiency and
performance, new job roles and
professional growth, better distribution and revenue, and a general excitement as businesses and
consumers access each other more easily. The job market is seeing dramatic shifts, the rise of the prompt
engineer is one such surprise. A highly skilled job requiring a person
with no particular degree to gently and strategically audit, test and
train large language models so that they improve their responses. And in general,
people working in AI are much in demand. However, there is an existing
gender bias here, as, according to the World Economic Forum, women make up
only 22% of AI professionals globally. For the non-AI workforce, Goldman Sachs
expects approximately 300 million full-time jobs to be partially
automated by generative AI. According to a human
resources analytics firm, these are the jobs performed typically
by women, including but not limited to, office secretary or administrator, human
resources manager, teacher, writer and customer service representative. Historically, jobs lost have been replaced
with new jobs, did you know that 60% of today’s workers are employed in
occupations that didn’t exist in 1940. So it’s possible that those who
will be displaced will innovate and create new opportunities. But job seekers today need to be aware,
your future role in the workplace may also be determined by AI as
algorithm starts screening resumes. Today, most companies use AI-driven hiring
software to vet resumes and applications. When you send out yours, ensure to include the keywords
that the algorithm is looking for. Those taking tests to earn their degrees
must also contend with AI filters. Unfortunately, some filters
are coming across as controversial, as algorithms are often trained on
limited, biased and unverified data. For example, AI software used to
conduct online examinations for universities has been known to flag
students wearing headscarves or struggle to read skin tone of nonwhite
students while screening test takers. It may even stop students with visual
impairment from accessing screen readers. Furthermore, these algorithms are trained
to track students’ keystrokes and/or access their laptop camera to
determine if they’re cheating. For instance, students at California
State University have filed a petition to discourage the use of Proctorio, which
analyzes audio and video recordings and screen monitoring to identify
potential cheating behavior. This is partly due to a lack of adequate
governance, as AI solutions are being implemented in a hurry and therefore are
poorly implemented and regulated. With a strong desire for economic growth, many industries are leveraging generative
AI to increase their reach and revenue. The global workforce is cleverly
certifying itself to become relevant in an AI-first economy. Will a possible fallout of too much AI
too soon include widening the income gap along gender lines? That’s something to think about. In this video, you explored the impact
of generative AI on the global economy. Customer operations, marketing and sales,
software engineering and research and development will feel an immediate impact. The economy will benefit from increased
efficiency and performance, new job roles, and professional growth,
such as the rise of the prompt engineer. We can also expect to see
gender-based income inequity and displacement of customer service
representatives, among others. [MUSIC]

Video: Social Implications of Generative AI

Benefits:

  • Enhanced advocacy: Generative AI can help governments, NGOs, and civil society access and analyze data faster, forecast scenarios, and prepare compelling advocacy material.
  • Increased inclusion: Generative AI can help people with disabilities, language barriers, and other challenges to access information and communicate more effectively.
  • Improved healthcare: Generative AI has expedited medical research, improved clinical decision making, and increased access to personalized health information and services.

Challenges:

  • Digital exclusion: 37% of the world’s population lacks internet access, which can exacerbate existing social and economic inequalities.
  • Biased algorithms: Generative AI systems can perpetuate racial and gender biases present in training data, leading to inaccurate clinical decision making.
  • Emotional isolation: Excessive online content consumption can lead to loneliness, which can have negative health consequences.
  • Environmental impact: Generative AI requires significant hardware and energy resources, contributing to e-waste and carbon emissions.

Solutions:

  • Increase diversity and accuracy of training data to mitigate bias.
  • Use generative AI responsibly to minimize environmental impact.
  • Develop more inclusive and accessible AI systems to reduce digital exclusion.
  • Explore ways to use generative AI to combat loneliness and promote social connection.

Overall, the video highlights the need for responsible use of generative AI to maximize its benefits while mitigating its risks and challenges.

Welcome to the social
implications of generative AI. After watching this video, you’ll be able to describe how generative AI can benefit
society and social well-being, identify the emerging challenges associated with the widespread
use of generative AI, and explore solutions to balance the benefits versus the risks
of using generative AI. To start with, what do we
mean by social implications? We mean the impact that
generative AI has or can have on society and its well-being, beyond indicators
such as productivity, economic growth, profitability,
and return on investment. Social impact
considers indicators such as advocacy,
inclusion, healthcare, and the environment,
all of which contribute to a well
structured, equitable society. Advocacy is all
about outreach and generative AI tools
can help governments, intergovernmental institutions, non governmental organizations, and civil society access
and analyze data faster, forecast scenarios to
list preventive actions, and prepare creative and
compelling advocacy material. By using generative AI, we will see an increase in international
collaboration leading to increased debate
and discussions. But can all people access and
use generative AI equally? According to the United Nations, an estimated 37% of the world’s population,
or 2.9 billion people, still did not have Internet
access as of 2021. This concept is known
as digital exclusion, and 96% of these
digitally excluded people live in developing countries. This means that the
population that is slow to adopt generative AI will get further economically displaced and socially
marginalized. Advocates and lawmakers need to work to include them
as soon as possible before generative AI further widens the gap in performance
and qualifications. Societies over centuries have struggled with social
inclusion to create a world in which all
feel represented and not discriminated against
because of their gender, ethnicity, race, disability,
or sexual orientation. Because generative AI
tools are multimodal, they allow people the
opportunity to learn and communicate in customized
and preferred formats. Easy translations into
multiple languages, quick text-to-speech conversion, AI voices for
increased anonymity and the creation
of AI portraits. These are some of the generative
capabilities that can help people represent themselves
and feel more included. More people are thinking
along these lines. The Massachusetts Institute
of Technology gave grants to 27 finalists to explore generative AI’s
impact on democracy, education, sustainability,
communications, and more. Think about this: how will this technology
impact our climate, the environment, our music, and our literature?
What about healthcare? Generative AI has had a very positive
impact on healthcare. We see expedited medical
research and drug discovery, early detection and diagnosis with improved clinical
decision making, increased capacity to manage
healthcare worker shortages, and access to personalized health information and services. However, one concern
has emerged. The data on which
generative AI systems are trained has inherent
racial and gender bias, and this affects clinical
decision making. Here’s one example. In 2019, the American
Civil Liberties Union flagged that AI algorithms were misinterpreting
patient data, which led to false assumptions that African American patients need less care as compared to white patients for the
same set of symptoms. To correct this
error, the algorithm was restructured to focus on a patient’s symptoms rather than historical records of
patient treatments. The question that emerges
here is how can we continually increase
the diversity and accuracy of training data? Another concern related
to health care is the possible emotional
isolation that comes with excessive
consumption of online content. In May 2023, the US Department of
Health and Human Services officially stated that the lack of social
connection may increase susceptibility to viruses
and respiratory illness. According to the US
Surgeon General, loneliness has become
an epidemic and represents an urgent
public health concern. Two interesting
questions then emerge. As generative AI
makes people more digitally dependent
and self sufficient, will it lead to
increased loneliness? Or can generative
AI tools such as ChatGPT help people
cope with loneliness? What about the environmental
impact of generative AI? Foundation models such as GPT-4, ChatGPT, DALL-E 2, Midjourney, Stable Diffusion,
LaMDA, and BERT require a large amount of
hardware and Cloud space and use rare minerals. As they process a
large amount of data, the hardware needs to
be replaced often, generating e-waste
more frequently. Given this large
carbon footprint, generative AI is not a
friend of the environment. According to the Harvard
Business Review, organizations can take steps to make these systems greener, such as fine-tuning existing
models for downstream tasks rather than building
models from scratch, evaluating the energy sources of cloud providers or data centers, and using generative
AI only when needed. Responsible use of generative
AI always comes first. In this video, you explored the impact of generative
AI on society, specifically its
contribution to advocacy, inclusion, healthcare,
and the environment. Benefits include increased
advocacy and inclusion, better healthcare
systems and services, and chatbots to
fight loneliness. Challenges include digital
exclusion, biased algorithms, digital dependency-induced
loneliness, and a sizable carbon footprint. We must take steps to
use generative AI tools responsibly to
maximize the benefits and mitigate the
associated risks.

Video: A Reimagined Workforce with Generative AI

Title: A Reimagined Workforce with Generative AI

Key Takeaways:

  1. Generative AI is revolutionizing the role of knowledge workers, augmenting rather than destroying jobs.
  2. The biggest challenge to workforce readiness is not employee replacement, but employee upskilling.
  3. Organizations must take steps to ensure workforce transformation to minimize business disruptions and maximize the available talent pool.

Impact of Generative AI on Knowledge Workers:

  • Partial automation of tasks, not entire roles
  • Human expertise still needed for nuanced and complex decisions
  • Examples: accountant Josh uses AI for data entry and financial reports, but still needs to interpret reports and provide personalized advice; PR manager Banona uses AI for predictive analytics, but still needs to make evidence-based decisions and build client relations.

Steps for Workforce Transformation:

  1. Redesign the workflow to align with updated priorities and technology exposure.
  2. Assess employee skills against three factors: resourcing requirements, automation, and people-centered roles.
  3. Hire for AI roles only to fill critical talent gaps, and prioritize in-house training and upskilling for existing employees.
  4. Identify roles most impacted by generative AI and prioritize them for coaching and upskilling, such as clerical roles and organizational leadership.

Conclusion:

Generative AI is not a replacement for human workers, but rather a tool to augment their abilities. By upskilling and reskilling employees, organizations can leverage their existing talent pool and ensure a high return on investment in AI technology.

[MUSIC] Welcome to the video. A reimagined workforce with generative AI. After watching this video, you’ll be able to understand how
generative AI impacts knowledge workers. Identify an organization’s biggest
challenge to workforce readiness in the context of generative AI. And list steps organizations can take to
ensure workforce transformation. At a global level, generative AI is
helping automate many business functions such as marketing and sales, customer
service, legal, procurement, operations, and research and development. Gartner predicts that the AI
software market will reach nearly $134.8 billion by 2025. This means that more organizations will
increasingly rely on large language models for basic and repetitive tasks. But do they have the right people
to work with this technology? At the heart of the generative AI
revolution is the knowledge worker. Just as AI revolutionized
the role of factory workers, generative AI is revolutionizing
the role of knowledge workers. Therefore, workforce considerations
are critical to successful generative AI adoption if organizations want to derive
maximum value from their AI investments. According to
the International Labour Organization, generative AI is more likely to
augment rather than destroy jobs. It’ll automate some tasks,
not an entire role. This is because generative AI has limited
ethical and emotional intelligence and therefore cannot make intricate,
context-dependent decisions like humans. While generative AI automates routine and
analytical tasks, human expertise is still needed to
make nuanced and complex decisions. When only certain functions are automated,
this is known as partial automation. Let’s explore the impact of partial
automation with a few examples. Meet Josh, he’s an accountant with
the multinational firm, generative AI will help him automate processes such as data
entry, bookkeeping, and reconciliations. He can query algorithms to
generate financial reports, and use AI-driven anomaly detection
software to detect fraud in the system. However, his input is still needed to
interpret reports, analyze complex financial data, interact with clients,
and provide personalized advice. Is every action marked as fraud in
the AI system a compliance breach? He’ll need to apply his ethical
judgment to decide this. Meet Banona. She’s the manager of
a small public relations firm. The generative AI tool her firm has
onboarded performs predictive analytics to plan resource allocation. It also analyzes data to generate
insights for managing risk. With this support, she’s able to
make evidence-based decisions, solve complex problems intuitively,
plan for the future, negotiate with stakeholders,
and build client relations. As generative AI does not automate
an entire role, jobs need not be lost. People need not be displaced. Organizations must therefore realize that the biggest challenge to workforce readiness in the context of generative AI is not employee replacement but employee upskilling. The World Economic Forum
says 44% of workers must be upskilled/reskilled
over the next five years. This way, generative AI and human expertise can collaborate to
draw on each other’s strengths. While hiring new talent to perform
AI-specific tasks is necessary in the short term, organizations can begin
a workforce transformation initiative to upskill their existing employee pool for
long-term alignment. This will ensure that there
are minimal business disruptions while leaders get a generative
AI-friendly workforce ready. Let’s explore the possible steps
involved in making this happen. Step one, redesign the workflow. Step two, assess skills. Step three, hire for AI roles. And step four, prioritize training. Organizations must redesign business
workflows and align employees’ roles with the organization’s updated priorities and
technology exposure. Next, managers must assess the level of
employee skills against three factors, the resourcing requirements
as per the new workflow. While automation is inevitable,
people-centered roles are still important. Current employees must complement
generative AI outputs, not compete with them. For instance, in June 2023, the
recruitment of AI-skilled members was nine times higher than in
January 2016 globally. However, there was a simultaneous
increase in the demand for soft skills such as communication and
flexibility. A LinkedIn survey revealed that 72% of
US executives agree that soft skills are more valuable for
their organizations than AI skills. Managers must hire talent from outside
only to fill the critical talent gaps. Everyone else must be considered for
in-house training and upskilling. Organizations must identify the roles
that will be most impacted in-house and prioritize them for
coaching and upskilling. For example, clerical roles,
mostly held by women, will be automated the most
with generative AI. These vulnerable employees must be
trained to perform higher level tasks, while generative AI tools perform
repetitive and basic tasks. Another example is
organizational leadership. What is the expected role of leaders
toward the economy, society, and their employees with the widespread
use of generative AI? For instance,
a financial advisory company is using a generative AI model trained
on its proprietary data. The model generates market trends and
competitive analyses, helps in scenario planning and
risk assessment, and facilitates collaboration
between departments. Kumar is the CTO of
the financial advisory company. To ensure that the firm’s foundation
models are used ethically, he’s coached on the principles of
AI ethics, such as accountability, transparency, privacy, and security. With this training,
Kumar ensures that decisions guided by the company’s proprietary model impact
the company and society positively. With upskilling, organizations can leverage the rich pool
of talent already available to them. This is the true impact of
generative AI on the workforce. Job losses, the risk of increased technology exposure, a lack of human touch, and the proliferation of biased data are all signs that organizations are not investing in workforce transformation. In this video, you identified the impact
of generative AI on the global workforce. Generative AI will partially automate
job roles and not displace people. The biggest challenge to
preparing a workforce for generative AI is employee upskilling. Organizations must engage in
workforce transformation to minimize business disruptions,
maximize the available talent pool, and ensure a high return
on investment in AI tech. They can follow four steps to ensure this
transformation: redesign the workflow, assess skills, hire for AI roles,
and prioritize training. [MUSIC]
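
The Josh example above pairs an anomaly detector with human judgment: the model automates the routine scoring of transactions, while the accountant decides what each flag means. Below is a minimal sketch of that division of labor, assuming scikit-learn’s IsolationForest; the transaction data is synthetic and purely illustrative.

```python
# Partial automation, sketched: an anomaly detector flags unusual transactions,
# and a human reviews every flag before any compliance decision is made.
# Assumes scikit-learn and NumPy; all data below is synthetic and illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Synthetic transaction features: [amount_in_dollars, hour_of_day].
routine = rng.normal(loc=[120.0, 13.0], scale=[40.0, 3.0], size=(500, 2))
unusual = np.array([[5000.0, 3.0], [4200.0, 2.0]])  # large late-night transfers
transactions = np.vstack([routine, unusual])

# The model automates the routine task: scoring every transaction.
detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(transactions)  # -1 = flagged as anomalous

# The human keeps the nuanced task: deciding what a flag actually means.
for idx in np.where(labels == -1)[0]:
    amount, hour = transactions[idx]
    print(f"Flagged for review: ${amount:,.2f} around hour {hour:.0f} -- "
          "the accountant decides if this is fraud, an error, or legitimate.")
```

Note that nothing in this loop escalates a flag automatically; as the video stresses, the context-dependent judgment stays with the person.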

Reading: Lesson Summary: Social and Economic Impact


Practice Assignment: Practice Quiz – Social and Economic Impact

Which of the following is a growing concern as industries start leveraging the potential of generative AI?  
Creation of new job roles and professional growth 
The ability of GPT-4 to excel in the Uniform Bar Exam 
Widening income gap along gender lines 
Better distribution and revenue for businesses 

What is a serious challenge associated with using generative AI in healthcare?  
Accelerated medical research and drug discovery 
Improved health capacity to counter worker shortages 
Biased training data that influences algorithms 
Increased access to health information and services 

What is the biggest challenge for companies to prepare their workforce for generative AI?
Employee upskilling
Employee replacement
New workflows 
Job losses

Graded Quiz – Social and Economic Impact and Responsible Generative AI

In her report titled ‘Generative AI Insights’, Anita has identified ____ as the business function that is expected to see the highest growth, but also cause the maximum job displacement due to generative AI.
Customer operations
Research and development
Marketing and sales
Software engineering

Charlie is interested in working in the field of generative AI. His colleague informs him that the job role of a ______ has gained prominence due to the increasing use of foundation models.
Teacher
Customer service representative
Human resources manager
Prompt engineer

As the vice president of a big AI firm, what action can Amar take to reduce the firm’s carbon footprint?
Increase the diversity and accuracy of data used for training foundation models
Get people who are digitally excluded to access and use generative AI
Use generative AI apps to increase collaboration across the firm
Fine-tune existing foundation models for downstream tasks

As the lead developer in her organization, Tina is tasked with ensuring that their large language model (LLM) protects the privacy of user data. How can she accomplish this?
Train the LLM using privacy-preserving algorithms
Add a disclaimer that says the AI model can’t be held accountable
Filter the AI content to prevent harmful or offensive outputs
Generate a ‘Transparency Report’ to explain how AI models work

