Module 3 – Lesson 4: Beware of Bias

We’ve talked a lot about bias already, and how bias and fairness impact AI systems. It’s such an important topic that we’ll take a much deeper dive here and provide strategies to address these challenges. The topics are framed to help you understand the key concepts you need to pass the Salesforce AI Associate certification exam.

The AI Fundamentals Podcast

Episode 13: Beware of Bias

What is Bias in Machine Learning?

  • Salesforce Definition of Bias: Bias in machine learning refers to “systematic and repeatable errors in a computer system that create unfair outcomes, in ways different from the intended function of the system, due to inaccurate assumptions in the machine learning process.”
  • Statistics Definition: In statistics, bias is a systematic deviation from the truth (a systematic error) that can distort the outcomes of machine learning models.

AI systems can be prone to bias if the underlying data used for training, or the assumptions made during model development, reflect unfair practices. These biases may lead to unintended consequences, such as favoring one group over another or reinforcing societal inequalities.
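
To see the statistics definition in action, here’s a minimal sketch (hypothetical numbers, not part of the exam material) of how an unrepresentative sample produces a systematically wrong estimate, no matter how many times you rerun it:

```python
# A minimal sketch (hypothetical numbers) of systematic deviation: a sampling
# process that mostly sees one group produces an estimate that is consistently
# wrong, however many times it is repeated.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical population: two groups with different average values.
group_a = rng.normal(loc=50, scale=5, size=10_000)
group_b = rng.normal(loc=70, scale=5, size=10_000)
true_mean = np.concatenate([group_a, group_b]).mean()

# A biased sampling process: 90% of the sample comes from group A.
biased_sample = np.concatenate([
    rng.choice(group_a, size=900),  # overrepresented
    rng.choice(group_b, size=100),  # underrepresented
])

print(f"True population mean: {true_mean:.1f}")             # about 60
print(f"Biased estimate:      {biased_sample.mean():.1f}")  # about 52, systematically low
```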

Fairness in AI Decisions

Fairness in AI refers to the development and deployment of AI systems that provide equitable outcomes for all users, regardless of their background or characteristics. Fair AI systems should avoid making decisions that disproportionately affect certain groups of people.
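
As a rough illustration (made-up decisions, not Salesforce tooling), one common fairness check is to compare a model’s positive-decision rate across groups, often called demographic parity:

```python
# A minimal sketch of a demographic-parity check: compare the rate of positive
# decisions the model makes for each group.
import numpy as np

decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])                  # 1 = approved
group     = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rate_a = decisions[group == "A"].mean()
rate_b = decisions[group == "B"].mean()

print(f"Approval rate, group A: {rate_a:.0%}")   # 60%
print(f"Approval rate, group B: {rate_b:.0%}")   # 40%
print(f"Ratio (B / A): {rate_b / rate_a:.2f}")   # a ratio far below 1 suggests disparate impact
```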

Types of Bias in AI

Understanding the different types of bias that can plague an AI system is essential for recognizing and mitigating unfair outcomes. So what are the various types of bias?

  • Association Bias: When the model correlates unrelated factors, such as associating certain job roles with specific genders.
  • Confirmation Bias: When data or AI models favor outcomes that confirm pre-existing beliefs or hypotheses.
  • Automation Bias: Over-reliance on automated systems, assuming that they are inherently accurate and unbiased.
  • Societal Bias: Bias that reflects societal stereotypes or prejudices embedded in the training data.
  • Survivorship Bias: Only focusing on successful outcomes, while ignoring those that did not make it into the dataset.
  • Interaction Bias: Bias introduced through interactions between humans and AI systems, such as users training chatbots with biased inputs.
  • Data Leakage (Hindsight Bias): When future information inadvertently influences the model’s predictions, resulting in an unfair advantage.
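
To make the last item concrete, here’s a minimal sketch of data leakage using hypothetical churn data: a “days since cancellation email” feature only exists after a customer has churned, so training on it inflates evaluation scores in a way that can never hold in production.

```python
# A minimal sketch (hypothetical churn data and column names) of data leakage:
# a feature only known AFTER the outcome sneaks into training, so evaluation
# looks far better than anything achievable in production.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1_000

churned = rng.integers(0, 2, size=n)                              # target: did the customer churn?
monthly_usage = rng.normal(loc=10, scale=3, size=n) - churned     # legitimate, weakly predictive
days_since_cancel_email = churned * rng.integers(1, 30, size=n)   # only exists after churn (leaky)

X_leaky = np.column_stack([monthly_usage, days_since_cancel_email])
X_clean = monthly_usage.reshape(-1, 1)

for name, X in [("with leakage", X_leaky), ("without leakage", X_clean)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, churned, random_state=0)
    accuracy = LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te)
    print(f"Test accuracy {name}: {accuracy:.2f}")  # leakage inflates the score
```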

How Bias Enters AI Systems

Bias can enter AI systems through multiple channels:

  • Assumptions: Incorrect assumptions made during model design.
  • Training Data: Biased or unrepresentative training data that reinforces unfair patterns (see the data-audit sketch after this list).
  • Model Development: Algorithms used to create AI that inadvertently favor certain outcomes.
  • Human Intervention (or Lack Thereof): Human oversight that fails to recognize or correct bias in the system.
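
Here’s the data-audit sketch mentioned above, using hypothetical loan-application data: before any model is built, compare each group’s share of the training set with an assumed real-world share, and check whether historical outcomes already differ by group.

```python
# A minimal sketch (hypothetical loan-application data) of a training-data
# audit: compare each group's share of the training set to an assumed
# real-world share, and inspect historical outcome rates by group.
import pandas as pd

training_data = pd.DataFrame({
    "applicant_group": ["A"] * 850 + ["B"] * 150,
    "approved":        [1] * 600 + [0] * 250 + [1] * 50 + [0] * 100,
})

observed_share = training_data["applicant_group"].value_counts(normalize=True)
expected_share = pd.Series({"A": 0.5, "B": 0.5})  # assumed population share, for illustration

print(pd.DataFrame({"observed": observed_share, "expected": expected_share}))

# Historical approval rates already differ by group - a pattern the model
# would learn and repeat at scale.
print(training_data.groupby("applicant_group")["approved"].mean())
```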

AI systems have the potential to magnify bias, amplifying unfairness on a larger scale if unchecked.

Removing Bias from Data and Algorithms

To create fairer AI systems, it’s essential to take active measures to identify and remove bias. Here are some steps to mitigate bias:

  • Conduct Pre-Mortems: Anticipate how bias could enter the AI model before development. This involves brainstorming potential failure points and the ways bias may manifest.
  • Identify Excluded or Overrepresented Factors: Evaluate the dataset for underrepresented or overrepresented groups. Make sure the data represents diverse populations to avoid skewed outcomes (a reweighting sketch follows this list).
  • Regularly Evaluate Training Data: Continuously monitor and update training data to reflect changes in society and reduce outdated or unfair patterns.
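
And here’s the reweighting sketch mentioned in the second step. It uses made-up data and shows one common mitigation, not a prescribed Salesforce approach: once an underrepresented group is identified, give its examples proportionally more weight during training.

```python
# A minimal sketch (made-up data) of one mitigation: weight each example
# inversely to its group's frequency so the model does not optimize only
# for the majority group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

group = np.array(["A"] * 900 + ["B"] * 100)   # group B is underrepresented
X = rng.normal(size=(1_000, 3))               # placeholder features
y = rng.integers(0, 2, size=1_000)            # placeholder labels

group_counts = {g: int(np.sum(group == g)) for g in np.unique(group)}
sample_weight = np.array([len(group) / group_counts[g] for g in group])

model = LogisticRegression()
model.fit(X, y, sample_weight=sample_weight)  # minority-group examples count more per sample
```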

By following these practices, you can build AI systems that are more equitable and reliable.

Understanding bias and fairness in data is essential when designing responsible AI systems. You’re now well equipped with knowledge of responsible AI development, and you’ve seen the roles that ethics, bias, and fairness play. Up next, we’ll move on to Salesforce’s Trusted AI Principles!

More Reading on Bias: Trailhead Module on Recognizing Bias in AI

Now Drop In To Focus

What is Bias in Machine Learning?
Bias in machine learning refers to systematic and repeatable errors that create unfair outcomes due to inaccurate assumptions in the learning process. This can distort outcomes, favor certain groups, or reinforce inequalities.
What is Fairness in AI Decisions?
Fairness in AI ensures that systems provide equitable outcomes for all users, avoiding decisions that disproportionately affect certain groups.
What Are the Types of Bias in AI?
Types of bias in AI include:
  • Association Bias
  • Confirmation Bias
  • Automation Bias
  • Societal Bias
  • Survivorship Bias
  • Interaction Bias
  • Data Leakage (Hindsight Bias)
How Does Bias Enter AI Systems?
Bias can enter through assumptions, biased training data, algorithms favoring certain outcomes, or insufficient human oversight.
Why is Bias in AI Harmful?
Bias in AI can amplify unfairness, reinforce societal inequalities, and result in decisions that favor certain groups over others.
How Can Bias Be Removed from AI Systems?
Steps to remove bias include conducting pre-mortems, identifying excluded or overrepresented factors, and regularly evaluating and updating training data.
What Is Data Leakage in AI?
Data leakage, or hindsight bias, occurs when future information influences model predictions, creating an unfair advantage.
Why Is Responsible AI Important?
Responsible AI ensures fairness, equity, and trustworthiness in AI systems by addressing bias and ethical considerations.

Quiz Time!

Take this quiz to test your knowledge!
