There are well-defined metrics of fairness in AI/ML. These primarily concern how AI systems make predictions or generate text about specific groups of people. Fairness generally relates to ideas of equality and equity with respect to gender, sex, religion, race, and other components of identity.
Some of these approaches can also, at least potentially, be applied to the concept of fairness when analyzing the financial performance of public companies. This does not need to be framed in terms of equality: the art of active investing is building a qualitative understanding of whether a company has a specific value, and whether the market recognizes it.
Fairness as a concept, as it applies to public companies, therefore relates to building an understanding of the true value of a company and of the value the market puts on that company.
Types of fairness in AI
- **Demographic parity** - This requires that outcomes or assessments be independent of a set of protected or identified characteristics. If, for example, you chose country as a protected characteristic, then the proportion of positive and negative assessments should be the same across countries. Demographic parity can lead to ignoring factors that genuinely correlate with the protected characteristic (e.g. one country's economic capability and business stability may make outcomes worse even for a really good company based there). It is still useful, however, as a tool for understanding why and how an assessment is made.
- **Equalized odds** - False positive and false negative rates should be equal across groups. This approach shifts fairness from outcomes to errors: a system should not make more mistakes for one demographic than another. For example, predictions about companies based in different countries should have the same error rates, even if the AI system's assessments are generally more negative for companies in one country than another.
- **Equal opportunity** - Like equalized odds, but relaxed to require only that true positive rates be equal across groups. Applied to people, this concept is generally used to adjust for bias in underwriting or lending criteria. In financial and investment settings, it may be most useful in the opportunity pipeline or investment-discovery stage.
- **Predictive parity** - This form of fairness requires that predictions be equally well calibrated across groups. That means predictions for each group need to be back-tested against results: for example, if companies in group A and companies in group B are each given a 70% chance of a positive earnings beat, then roughly 70% of each group should actually beat earnings.
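The four checks above can be reduced to comparing a handful of confusion-matrix rates between groups. Below is a minimal sketch, assuming binary assessments (1 = positive outlook) back-tested against binary outcomes (1 = earnings beat); the two country groups and all the prediction/outcome data are made up for illustration.

```python
def rates(preds, actuals):
    """Return (positive_rate, tpr, fpr, precision) for one group of companies."""
    tp = sum(p and a for p, a in zip(preds, actuals))
    fp = sum(p and not a for p, a in zip(preds, actuals))
    fn = sum((not p) and a for p, a in zip(preds, actuals))
    tn = sum((not p) and (not a) for p, a in zip(preds, actuals))
    pos_rate = (tp + fp) / len(preds)                 # demographic parity compares this
    tpr = tp / (tp + fn) if (tp + fn) else 0.0        # equal opportunity compares this
    fpr = fp / (fp + tn) if (fp + tn) else 0.0        # equalized odds compares TPR and FPR
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # predictive parity compares this
    return pos_rate, tpr, fpr, precision

# Hypothetical assessments and realized outcomes for companies in two countries.
group_a = ([1, 1, 0, 1, 0, 0, 1, 0], [1, 1, 0, 0, 0, 1, 1, 0])
group_b = ([1, 0, 0, 1, 0, 0, 0, 0], [1, 0, 1, 1, 0, 0, 0, 0])

for name, (preds, actuals) in [("A", group_a), ("B", group_b)]:
    pos, tpr, fpr, prec = rates(preds, actuals)
    print(f"country {name}: positive rate {pos:.2f}, TPR {tpr:.2f}, "
          f"FPR {fpr:.2f}, precision {prec:.2f}")
```

Which rate you compare is the whole choice: equal positive rates is demographic parity, equal TPRs is equal opportunity, equal TPRs and FPRs is equalized odds, and equal precision (calibration) across groups is predictive parity. These generally cannot all hold at once when base rates differ between groups.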
AI and algorithmic bias can skew the gap between the true value of a company and the market's understanding of that value.
Can you use this as a trade-able signal? Probably... if you're good at that stuff, but at a minimum you should understand how these biases [[A flawed parrot|play out]].