CIA.Bias
Reading: “Bias and Fairness in Pricing and Underwriting of Property and Casualty (P&C) Risks”, April 2023, Sections 1, 2 and 3.
Author: Canadian Institute of Actuaries
BA Quick-Summary: Bias & Fairness in Pricing
The purpose of this reading is to guide actuaries in detecting, evaluating, and mitigating potential bias and unfairness in the pricing and underwriting of P&C risks.
Contents
- 1 Pop Quiz
- 2 Study Tips
- 3 Section 1: Intent, Scope and Cross-references
- 4 Section 2: Historical Issues and Current Evolution
- 5 Section 3: Definitions
- 6 Practice Questions
- 7 Practice Questions Answer Key
- 7.1 Q: What is the key difference between bias and fairness in P&C pricing?
- 7.2 Q: How do direct and indirect discrimination differ in the context of insurance rating?
- 7.3 Q: Why might a pricing model be biased but not unfair (or vice versa)?
- 7.4 Q: An insurer discovers their auto pricing model charges higher premiums in postal codes with high immigrant populations. What steps should they take to evaluate if this is problematic?
- 7.5 Q: How would you apply the three ethical frameworks to a situation where territorial rates disadvantage a protected group?
- 7.6 Q: What sources of bias should actuaries check for when developing a new predictive model?
- 8 🎯 Study Tips Summary
- 9 POP QUIZ ANSWERS
Pop Quiz
What are the proposed reforms to the Canadian tort system?
Study Tips
💡 Key Insight:
This paper provides P&C practitioners with tools to detect, evaluate, and mitigate potential bias in actuarial risk assessment models. Understanding bias and fairness is crucial as rating algorithms become more complex and data-driven. The paper serves as a starting point for actuaries to ensure their pricing models meet ethical standards while remaining actuarially sound.
📚 Study Strategy Summary:
Focus on understanding the definitions of bias and fairness, recognizing how they differ, and grasping the historical context that makes these considerations increasingly important. Pay special attention to how bias can arise from multiple sources throughout the pricing process.
Estimated study time: 1-2 days
Section 1: Intent, Scope and Cross-references
Intent
The insurance industry has faced growing scrutiny over potential bias in pricing algorithms. As data volumes expand and rating algorithms become more sophisticated, P&C insurers increasingly rely on automated processes, models, and machine learning techniques to set premiums. This evolution brings both opportunities and challenges.
The Core Challenge
While data-driven algorithms appear objective, they depend heavily on subjective decisions about:
- Which characteristics to include as rating factors
- How to categorize observations
- What data sources to use
- How to handle missing or incomplete data
These decisions can introduce unintended bias, as illustrated by examples like LinkedIn's search algorithm that inadvertently favored male names due to their higher frequency in the dataset. The intent of this paper is to equip practitioners with tools to evaluate bias and fairness in their actuarial pricing and modelling work.
Scope
🎯 Application Areas
The paper provides guidance for actuaries performing various services:
- Development of risk segmentation or tiers - Creating meaningful risk groups while avoiding discriminatory classifications
- Measurement of price differentials, discounts and surcharges - Ensuring rate variations are actuarially justified
- Predictive analytics - Determining periodic cost levels or growth potential without perpetuating historical biases
- Other models - Any actuarial work where bias and fairness concepts apply
Important clarification: This paper does not address societal determinations of fairness, which remain outside the actuarial domain. The document should be considered holistically, with all sections contributing to a comprehensive understanding.
Cross-references
The concepts in this paper align closely with existing professional standards. When applying this guidance, practitioners should consider how it interacts with other regulatory and professional requirements. The paper's principles remain applicable even as specific regulations evolve over time.
⚖️ Key Connection: These concepts are closely linked with Section 1400 of the CIA Standards of Practice
mini BattleQuiz 1
Section 2: Historical Issues and Current Evolution
Fairness represents a social construct that evolves over time and varies across societal contexts. Practitioners must adapt their interpretation of fairness to current circumstances while recognizing that what society deems fair today may change tomorrow.
The Spectrum of Risk Segmentation
Traditional insurance work inherently involves fairness considerations across the risk segmentation spectrum - from no segmentation to extremely granular differentiation. The segmentation process aims to:
- Recognize characteristics that differentiate risk levels
- Group individuals with similar risk profiles
- Set appropriate prices for coverage
However, data collection and model development processes can introduce bias and perpetuate unfair outcomes, as demonstrated by several high-profile examples.
The Correctional Services Example
⚠️ Case Study: Systemic Bias in Risk Assessment
In 2020, The Globe and Mail exposed systemic bias in Correctional Service Canada's risk assessment system. The assessment determined security classifications, reintegration potential, and program access for federal inmates. Key findings revealed:
- Indigenous and Black inmates received disproportionately high "maximum" security ratings
- These classifications limited access to treatment programs
- Lower reintegration scores led to negative parole decisions
- The system created a feedback loop perpetuating discrimination
Most troublingly, the data showed that Indigenous and Black men were actually less likely to reoffend than white men over a seven-year period, indicating the risk scores systematically overestimated their likelihood of recidivism. This example illustrates how data-to-score-to-outcome loops can perpetuate systemic discrimination even when based on "objective" actuarial scoring.
Insurance-Specific Examples
The insurance industry has its own history of grappling with fairness issues:
Redlining and Neighborhood Risk
The practice of redlining denied financial services, including insurance, to residents of certain neighborhoods, often defined by racial or ethnic composition. While technically based on mathematical loss-cost analysis, this practice created devastating feedback loops:
- Disadvantaged communities were denied coverage
- Lack of insurance led to property deterioration
- Declining conditions reinforced the "technical justification"
- Communities spiraled into further decline
Gender in Rating Algorithms
A Tale of Two Jurisdictions
The use of gender as a rating factor illustrates how fairness interpretations vary:
- Canada: Supreme Court allowed gender-based rating (though not unanimously)
- European Union: Banned gender in insurance pricing in 2012 as inherently unfair
The Supreme Court of Canada's judgment in Zurich Insurance Co. v. Ontario (Human Rights Commission) noted: "Human rights values cannot be overridden by business expediency alone. To allow 'statistically supportable' discrimination would undermine the intent of human rights legislation which attempts to protect individuals from collective fault." These examples demonstrate how fairness concepts evolve and vary across jurisdictions, requiring actuaries to remain adaptable in their approach.
mini BattleQuiz 2
Section 3: Definitions
This section establishes key definitions that form the foundation for bias and fairness analysis in P&C pricing. Understanding these distinctions is crucial for practical application.
3.1 Bias
📖 Working Definition for P&C Pricing
Multiple definitions of bias exist across disciplines. For clarity in actuarial applications, this paper adopts the definition from Bill C-27 (Artificial Intelligence and Data Act): biased output means content that is generated, or a decision, recommendation or prediction that is made, by an artificial intelligence system and that adversely differentiates, directly or indirectly and without justification, in relation to an individual on one or more of the prohibited grounds of discrimination set out in section 3 of the Canadian Human Rights Act, or on a combination of such prohibited grounds.
For P&C pricing purposes, we define bias as:
- Situations where ratemaking model outcomes are systematically less favorable to individuals within a particular group
- Where no relevant difference between groups justifies the difference in premiums or rates
In practical terms, biased outcomes assign higher or lower premiums for reasons not justified by differences in the cost of providing insurance. The justification typically comes from statistical correlation between rating variables and underlying risk; a causal understanding, while ideal, is often difficult to achieve.
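To make the definition concrete, here is a minimal sketch (not from the paper) that compares each group's premium relativity to its expected-loss relativity; all data, column names, and values are hypothetical:

```python
import pandas as pd

# Hypothetical policy-level data: premium charged, modelled expected loss,
# and a group label (e.g., derived from external census data).
policies = pd.DataFrame({
    "group":         ["A", "A", "A", "B", "B", "B"],
    "premium":       [1000, 1100,  900, 1300, 1250, 1350],
    "expected_loss": [ 700,  760,  640,  710,  690,  730],
})

# Compare each group's average premium relativity to its expected-loss
# relativity: a gap flags a premium difference that the cost of providing
# insurance does not explain (the working definition of a biased outcome).
summary = policies.groupby("group").mean()
summary["premium_rel"] = summary["premium"] / policies["premium"].mean()
summary["loss_rel"] = summary["expected_loss"] / policies["expected_loss"].mean()
summary["unexplained_gap"] = summary["premium_rel"] - summary["loss_rel"]
print(summary[["premium_rel", "loss_rel", "unexplained_gap"]].round(3))
```

In this toy data, group B's premiums run about 13% above average while its expected losses sit only about 1% above average, so the gap column flags an unjustified differential.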
💡 Important Distinction: This definition of bias differs from the statistical definition (where an estimator's expected value differs from the true value); the two are unrelated concepts.
3.2 Direct and Indirect Discrimination
Human rights legislation clearly prohibits using certain variables (race, disability status, sexual orientation, etc.) for risk classification. However, the challenge of indirect or proxy discrimination has become more pressing with AI evolution.
Direct Discrimination
- A pricing model avoids direct discrimination if no discriminatory features protected by human rights legislation are used as rating factors
- This is the clearest and most straightforward requirement
Indirect Discrimination
- More complex: occurs when neutral data serves as a proxy for protected characteristics
- A model avoids indirect discrimination if it avoids direct discrimination AND non-discriminatory features cannot implicitly infer discriminatory features
- Can happen intentionally or unintentionally
⚖️ Practical Challenge: Meeting requirements for avoiding implicit inference can still result in differential outcomes between groups
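One way to probe the "cannot implicitly infer" condition is to test how well the rating variables themselves predict the protected characteristic. The paper does not prescribe a specific test; the sketch below is one assumed approach, with hypothetical file and column names:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical dataset: rating variables plus a protected-group label that
# is used only to audit the model, never as a rating factor.
df = pd.read_csv("policies.csv")                      # illustrative file name
rating_vars = ["territory_density", "vehicle_age", "annual_km"]

X = df[rating_vars]
y = (df["protected_group"] == 1).astype(int)          # 1 = protected-group member

# If the rating variables predict the protected characteristic well above
# chance, they may jointly act as a proxy for it (indirect discrimination),
# even though none of them is discriminatory on its face.
clf = LogisticRegression(max_iter=1000)
proxy_auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
print(f"Proxy AUC: {proxy_auc:.2f} (0.50 = no proxy power)")
```

What level of AUC counts as "too predictive" is a judgment call that the practitioner would need to set and document.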
3.3 Fairness
No single definition of fairness exists - it is dynamic, social, and context-dependent rather than purely statistical. As noted by AI ethics scholars, fairness constantly evolves through democratic debate and adaptation. When evaluating fairness, practitioners should consider:
- Who is harmed by potential pricing bias?
- How significant is the harm to affected individuals?
- How large is the pool of people harmed?
- Is the product/service essential?
- Does society view the price discrimination as egregious?
Two Categories of Fairness:
Type | Focus | Current Regulatory Emphasis |
---|---|---|
Procedural Fairness | How insureds are treated throughout the pricing process (e.g., handling missing data, variable selection) | Higher |
Distributive Fairness | Distribution of pricing outcomes across insureds (results and impacts) | Lower |
3.4 Contrasting Bias and Fairness
🔄 Critical Distinction: Bias and fairness are related but separate concepts
Understanding the relationship between bias and fairness is essential:
Bias characteristics:
- Arises from data, model parameters, model type, and practitioner assumptions
- Static concept - biased today remains biased unless corrected
- Measurable property of predictive models
Fairness characteristics:
- Depends on model outcomes AND context of application
- Includes external factors beyond the model
- Dynamic concept - fair today may be unfair tomorrow
In P&C pricing context:
- Bias does not necessarily imply unfairness
- Lack of fairness does not necessarily imply bias
- Both must be evaluated independently
3.5 Ethics
The CIA Rules of Professional Conduct require members to uphold professional and ethical standards that serve the public interest. Practitioners must respect both the letter and intent of the law across all jurisdictions where they provide services.
📋 Legal Obligations by Jurisdiction
Protected characteristics vary by location:
- Quebec: Prohibits discrimination based on social condition
- New Brunswick: Age is a protected ground
- Ontario: Auto insurers cannot use credit information
Practitioners must familiarize themselves with:
- CIA Rules of Professional Conduct
- All applicable laws in their jurisdiction
- How ethical principles enhance understanding of legal requirements
The ethical framework discussed later provides tools for navigating these complex requirements, but does not replace existing legal and professional obligations.
mini BattleQuiz 3
Full BattleQuiz
Practice Questions
Conceptual Questions:
- What is the key difference between bias and fairness in P&C pricing?
- How do direct and indirect discrimination differ in the context of insurance rating?
- Why might a pricing model be biased but not unfair (or vice versa)?
Application Questions:
- An insurer discovers their auto pricing model charges higher premiums in postal codes with high immigrant populations. What steps should they take to evaluate if this is problematic?
- How would you apply the three ethical frameworks (utilitarian, deontological, virtue) to a situation where territorial rates disadvantage a protected group?
- What sources of bias should actuaries check for when developing a new predictive model?
Practice Questions Answer Key
Conceptual
Q: What is the key difference between bias and fairness in P&C pricing?
Answer: Understanding Bias vs. Fairness
Bias is a measurable property of predictive models:
- Static concept - remains constant unless corrected
- Arises from data, model parameters, assumptions
- Exists when outcomes systematically disfavor a group without actuarial justification
- Can be objectively measured using statistical techniques
Fairness is about how model outcomes are applied in context:
- Dynamic concept - evolves with societal values
- Depends on both outcomes AND external factors
- Evaluated based on harm, essentiality of service, societal views
- Cannot be reduced to a single metric
💡 Key Insight: A model can be biased without being unfair (if the bias is actuarially justified), and a model can be unfair without being biased (if societal standards have evolved)
Q: How do direct and indirect discrimination differ in the context of insurance rating?
Type | Definition | Example | Detection Difficulty |
---|---|---|---|
Direct Discrimination | Using prohibited characteristics explicitly as rating factors | Using race, religion, or sexual orientation in pricing | Easy - prohibited variables are clearly identified |
Indirect Discrimination | Using neutral variables that serve as proxies for prohibited characteristics | Using postal code that correlates with ethnicity | Difficult - requires analysis of correlations and outcomes |
Key Considerations:
Direct discrimination is straightforward - insurers simply cannot use protected characteristics listed in human rights legislation.
Indirect discrimination is more complex because:
- Superficially neutral data may capture protected status
- Can occur unintentionally through correlations
- May result from historical biases in data
- Requires ongoing monitoring to detect
⚠️ Important: Meeting technical requirements to avoid implicit inference can still result in differential outcomes between groups
Q: Why might a pricing model be biased but not unfair (or vice versa)?
Answer: The Bias-Fairness Distinction
Biased but Fair:
- A model charges different rates to groups defined by age
- This is "biased" as it systematically differentiates
- But if age correlates with accident risk, it may be actuarially justified
- Society generally accepts age-based pricing as fair (where legal)
Unbiased but Unfair:
- A model treats all customers identically (no bias)
- But fails to recognize legitimate differences in risk
- Example: Charging same price regardless of driving record
- Technically unbiased but unfair to safe drivers
Biased and Unfair:
- A model uses postal codes that correlate with race
- Higher premiums not justified by actual loss experience
- Both biased (systematic differentiation) and unfair (no actuarial justification)
Unbiased and Fair:
- A model appropriately differentiates based on risk
- No systematic disadvantage to any protected group
- The ideal state for insurance pricing
- As an illustration, rating an insured purely by telematics (i.e., only on their observed driving habits) could come close to this ideal, though even telematics data should be checked for proxy effects
💡 Key Insight: Bias (systematic differentiation) can be justified if based on genuine risk differences, making it fair. Conversely, treating everyone identically (no bias) can be unfair if it ignores legitimate risk factors.
Application Questions
Q: An insurer discovers their auto pricing model charges higher premiums in postal codes with high immigrant populations. What steps should they take to evaluate if this is problematic?
Answer: Systematic Evaluation Process
Step 1: Measure the Bias (see the sketch after this list)
- Calculate average premiums by postal code
- Overlay demographic data to identify affected populations
- Quantify the premium differential (e.g., 15% higher)
- Determine statistical significance of differences
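A minimal sketch of Step 1, assuming a hypothetical policy file with premium, postal code, and an externally sourced demographic flag (all names illustrative):

```python
import pandas as pd
from scipy import stats

# Hypothetical policy data; the demographic flag comes from external census
# data and is used only to audit outcomes, not as a rating input.
df = pd.read_csv("auto_policies.csv")                 # illustrative file name
df["flagged"] = df["high_immigrant_flag"].astype(bool)

# Average premium by postal code, carrying the demographic flag along.
by_code = df.groupby("postal_code").agg(
    avg_premium=("premium", "mean"),
    flagged=("flagged", "first"),
)

# Quantify the premium differential between the two sets of postal codes.
a = by_code.loc[by_code["flagged"], "avg_premium"]
b = by_code.loc[~by_code["flagged"], "avg_premium"]
print(f"Premium differential: {a.mean() / b.mean() - 1:+.1%}")

# Welch t-test for the statistical significance of the difference.
t, p = stats.ttest_ind(a, b, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
```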
Step 2: Analyze Actuarial Justification (see the sketch after this list)
- Review loss costs by postal code
- Check if higher premiums reflect higher claims experience
- Examine other risk factors in these areas:
  - Traffic density
  - Road conditions
  - Vehicle theft rates
  - Weather patterns
- Determine if territorial factors fully explain the differential
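For Step 2, one quick diagnostic (an assumed approach, continuing the sketch above) is to compare loss ratios across the two groups of postal codes: if premiums merely track loss costs, the loss ratios should be roughly level:

```python
# Continuing the Step 1 sketch: a materially lower loss ratio in the
# flagged postal codes would mean premiums there exceed what claims
# experience alone justifies ("incurred_loss" is a hypothetical column).
lr = df.groupby("flagged").agg(
    premium=("premium", "sum"),
    incurred=("incurred_loss", "sum"),
)
lr["loss_ratio"] = lr["incurred"] / lr["premium"]
print(lr["loss_ratio"].round(3))
```

A level loss ratio is suggestive rather than conclusive; the other territorial risk factors listed above still need to be examined.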
Step 3: Evaluate Data Quality and Age
- When were territorial factors last updated?
- Has the demographic composition changed significantly?
- Are you using current loss experience?
- Could historical biases be embedded in old data?
Step 4: Consider Fairness Dimensions
- Is auto insurance essential in these communities?
- Are there transportation alternatives?
- What is the socioeconomic impact of higher rates?
- How would media/public/regulators view this?
Step 5: Document and Decide
- Create bias assessment documentation
- If differential is actuarially justified → document thoroughly
- If not justified → develop remediation plan
- If partially justified → consider capping techniques
⚠️ Red Flag: If territorial factors haven't been updated in 5+ years, historical biases may be perpetuated
Q: How would you apply the three ethical frameworks to a situation where territorial rates disadvantage a protected group?
Answer: Three Ethical Perspectives
Scenario: Postal code rating results in 20% higher premiums for areas with predominantly Indigenous populations.
1. Utilitarian Framework (Greatest Good)
Considerations:
- Impact on majority vs. minority populations
- Business sustainability and ability to serve all customers
- Societal benefits of risk-based pricing
- Costs of cross-subsidization
Possible conclusions:
- Keep current structure: Accurate pricing for majority outweighs minority impact
- Modify structure: Long-term societal harm from discrimination exceeds short-term business benefits
2. Deontological Framework (Rules & Duties)
Considerations:
- Legal requirements prohibit discrimination
- Professional obligations under CIA standards
- Contractual duties to shareholders
- Regulatory compliance requirements
Possible conclusions:
- Keep if compliant: Not using race directly = following the rules
- Must change: Indirect discrimination still violates duty to treat fairly
3. Virtue Ethics Framework (Character)
Considerations:
- Company values and mission
- Professional reputation
- "Would I be proud of this decision?"
- Role model for industry
Possible conclusions:
- Change needed: Good companies don't perpetuate inequality
- Depends on intent: If trying to be actuarially accurate, that's virtuous
💡 Best Practice: Use all three frameworks to get a complete picture, then document your reasoning
Q: What sources of bias should actuaries check for when developing a new predictive model?
Answer: Comprehensive Bias Checklist
1. Data Generation & Collection Biases
- Historical inequities: Does past discrimination affect your data?
- Selection bias: Who's included/excluded from dataset?
- Reporting bias: Are claims reported equally across groups?
- Survival bias: Are you only seeing "successful" risks?
2. Data Preparation Biases
- Missing data patterns: Do certain groups have more missing values? (see the sketch after this list)
- Categorization choices: How are continuous variables binned?
- Outlier treatment: Which observations are excluded?
- Time period selection: Does your data period advantage/disadvantage groups?
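A minimal sketch of the missing-data check above, assuming a hypothetical modelling dataset with an audit-only group label (names illustrative):

```python
import pandas as pd

# Hypothetical modelling dataset; "group" is an audit label used only to
# test the data, never as a rating factor.
df = pd.read_csv("model_data.csv")                    # illustrative file name

# Share of missing values per variable, split by group: materially higher
# rates for one group mean imputation and exclusion choices will fall
# disproportionately on that group.
missing_by_group = df.drop(columns=["group"]).isna().groupby(df["group"]).mean()
print(missing_by_group.round(3))
```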
3. Model Development Biases
- Variable selection: Are you using appropriate predictors?
- Interaction effects: Do variables combine to create proxies?
- Model type limitations: Can your model capture non-linear relationships?
- Performance metrics: Are you optimizing for the right objective?
4. Implementation Biases
- Threshold settings: Where do you draw lines for tiers/categories?
- Capping and floors: Do limits affect groups differently?
- Transition rules: How do changes impact existing customers?
- Override practices: Are manual adjustments applied consistently?
📋 Testing Approach: Check for bias at each stage - don't wait until the model is complete
🎯 Study Tips Summary
Key Takeaways for Exam Success
- Definitions Matter: Bias is measurable and static; fairness is contextual and dynamic
- Two Types of Discrimination: Direct (using prohibited variables) vs. Indirect (proxies)
- Multiple Frameworks: Utilitarian, deontological, and virtue ethics offer different perspectives
- Sources of Bias: Can arise from data, models, assumptions, or implementation
- Documentation Critical: Always document bias assessments and remediation decisions
- Evolving Standards: What's acceptable today may not be tomorrow
POP QUIZ ANSWERS
- Joint & several liability (eliminate & replace with proportional liability)
- Collateral source rule (eliminate)
- Compensation basis (change from gross to net)
- Vicarious liability (eliminate)