Systemic Bias
What is Systemic Bias? Systemic bias is the inherent prejudice embedded within the design, training data, or deployment of an AI system that consistently and unjustly disadvantages specific, protected social groups over time. While data bias is the…
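To make the idea concrete, the following is a minimal, illustrative Python sketch rather than a method taken from the definition above: it compares approval rates across two hypothetical groups in a decision log, which is one simple way a consistent group-level disadvantage becomes visible in a deployed system. The group names, the audit log, and the 0.8 "four-fifths" rule of thumb in the comments are assumptions for illustration.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Approval rate per group from an iterable of (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest selection rate divided by the highest (1.0 means parity)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit log of (group, was_approved) decisions.
log = [("group_a", True), ("group_a", True), ("group_a", False),
       ("group_b", True), ("group_b", False), ("group_b", False)]

rates = selection_rates(log)
print(rates)                          # group_a ~0.67, group_b ~0.33
print(disparate_impact_ratio(rates))  # 0.5, below the common 0.8 rule of thumb
```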
Security
What is Security (in the Context of AI)? Security, in the context of AI, refers to the defensive measures taken to protect the entire AI ecosystem—including the training data, the AI model, the API connections, and the user-facing tools—from cyberattacks,…
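As one narrow, illustrative example of protecting the model artifact itself, the sketch below verifies a file's SHA-256 digest before it is loaded. The file name and the expected digest are placeholders; a real deployment would combine integrity checks like this with access control, encryption, and monitoring across the whole ecosystem.

```python
import hashlib

# Expected digest published alongside the model artifact (placeholder value).
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def verify_model_file(path: str) -> bool:
    """Check that a model artifact has not been tampered with before loading it."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == EXPECTED_SHA256

# if not verify_model_file("model.bin"):
#     raise RuntimeError("model artifact failed integrity check; refusing to load")
```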
Responsible AI
What is Responsible AI? Responsible AI is an organizational framework encompassing the practices, policies, and tools necessary to design, develop, and deploy artificial intelligence systems that are transparent, fair, trustworthy, accountable, and legally compliant. This concept shifts the focus from…
Representational Harm
What is Representational Harm? Representational harm is a specific category of social harm where an AI system reinforces harmful stereotypes, creates cultural insensitivity, or unfairly subordinates specific social groups through the way it portrays or speaks about them. This type…
Quality-of-service Harm
What is Quality-of-service Harm? Quality-of-service harm is a category of social harm that occurs when an AI system performs its intended function, but does so poorly, inconsistently, or unreliably for specific groups of users, leading to substandard outcomes or denial…
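A minimal sketch of how this harm can be surfaced in practice: compute accuracy separately for each user group on an evaluation set. The group tags, labels, and records below are invented for illustration.

```python
def per_group_accuracy(records):
    """Accuracy per group from an iterable of (group, prediction, truth) tuples."""
    hits, counts = {}, {}
    for group, pred, truth in records:
        counts[group] = counts.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + int(pred == truth)
    return {g: hits[g] / counts[g] for g in counts}

# Hypothetical evaluation records tagged with a user attribute (e.g. dialect or age band).
eval_records = [
    ("group_a", "yes", "yes"), ("group_a", "no", "no"), ("group_a", "yes", "yes"),
    ("group_b", "yes", "no"),  ("group_b", "no", "no"), ("group_b", "no", "yes"),
]

print(per_group_accuracy(eval_records))
# group_a ~1.00 vs group_b ~0.33: the same feature delivers unequal quality of service.
```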
Privacy
What is Privacy (in the Context of AI)? Privacy, in the context of AI, refers to the policies, mechanisms, and legal frameworks designed to govern the collection, storage, processing, and use of personally identifiable information (PII) to prevent unauthorized access,…
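One small, concrete mechanism in this space is masking obvious PII before text is stored or used for training. The sketch below is illustrative only; the regular expressions are simplistic assumptions, and real PII handling needs far broader detection plus policy and legal review.

```python
import re

# Simplistic patterns for illustration; real PII detection needs broader coverage.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact_pii(text: str) -> str:
    """Mask obvious email addresses and phone numbers before storage or training."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

print(redact_pii("Contact Jane at jane.doe@example.com or 555-123-4567."))
# -> "Contact Jane at [EMAIL] or [PHONE]."
```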
Transparency
What is Transparency (in the Context of AI)? Transparency, in the context of AI, refers to the practice of providing clear, understandable, and accessible information about how an AI system functions, what data it was trained on, and the logic…
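A lightweight way to put this into practice is to publish a structured record of those facts alongside the model, in the spirit of a model card. The sketch below is hypothetical: the field names and the "loan-screening-v2" system are assumptions, and real documentation typically covers more, such as evaluation results, ethical considerations, and contact points.

```python
from dataclasses import dataclass, asdict, field
import json

@dataclass
class ModelCard:
    """A minimal record of the facts a transparency practice would disclose."""
    model_name: str
    intended_use: str
    training_data: str
    decision_logic: str
    known_limitations: list = field(default_factory=list)

card = ModelCard(
    model_name="loan-screening-v2",  # hypothetical system
    intended_use="Pre-screening of loan applications; final decisions remain with a human reviewer.",
    training_data="Internal application records, 2018-2023, from documented sources only.",
    decision_logic="Gradient-boosted trees over 24 documented financial features.",
    known_limitations=["Sparse data for applicants under 21", "No self-employed income features"],
)

print(json.dumps(asdict(card), indent=2))
```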
Interpersonal Harm
What is Interpersonal Harm? Interpersonal harm is a specific type of social harm caused by AI systems that affects individual relationships, groups, or communities through acts like defamation, harassment, erosion of trust, manipulation, or discrimination. This category of harm is…
Deepfakes
What are Deepfakes? Deepfakes are synthetic media, primarily video or audio, that have been manipulated or entirely generated by deep learning models to convincingly portray a person saying or doing something they never actually did. The term deepfake is a portmanteau…
Data Bias
What is Data Bias? Data bias is the systematic tendency of the data used to train an AI model to disproportionately reflect or favour specific values, outcomes, or demographic groups, producing skewed and unfair results when the model is put…
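A simple way to surface this skew is to compare group shares in the training sample against a reference population. The sketch below is illustrative, with invented group tags and reference shares.

```python
from collections import Counter

def representation_gap(sample_labels, population_shares):
    """Difference between a sample's group shares and known population shares."""
    counts = Counter(sample_labels)
    total = sum(counts.values())
    return {g: counts.get(g, 0) / total - share
            for g, share in population_shares.items()}

# Hypothetical training-set group tags and reference population shares.
sample = ["group_a"] * 80 + ["group_b"] * 20
population = {"group_a": 0.5, "group_b": 0.5}

print(representation_gap(sample, population))
# ~{'group_a': 0.3, 'group_b': -0.3}: group_b is heavily under-represented in the data.
```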