𝗣𝗼𝗹𝗶𝘁𝗶𝗰𝗮𝗹 𝗕𝗶𝗮𝘀 𝗶𝗻 𝗟𝗟𝗠𝘀

Ali Mirzaei
Sep 23, 2024
ChatGPT and other Large Language Models (LLMs) exhibit political biases and varying ideological leanings. Here are some key observations:
𝟭. 𝗘𝘃𝗼𝗹𝘂𝘁𝗶𝗼𝗻 𝗼𝗳 𝗧𝗿𝗮𝗶𝗻𝗶𝗻𝗴 𝗗𝗮𝘁𝗮 𝗮𝗻𝗱 𝗔𝗹𝗴𝗼𝗿𝗶𝘁𝗵𝗺𝘀: One possible reason is the shift in training data: earlier models like BERT were trained predominantly on traditional book text and tend to lean more conservative (authoritarian), while newer models like GPT are exposed to broader internet text and lean more liberal (libertarian). Reinforcement learning from human feedback (RLHF) in newer models can amplify this trend.
𝟮. 𝗗𝗮𝘁𝗮 𝗗𝗶𝘀𝘁𝗿𝗶𝗯𝘂𝘁𝗶𝗼𝗻𝘀: Even non-toxic training data with diverse opinions can lead to biases and unfairness if it includes subtle imbalances in data distributions.
𝟯. 𝗠𝗼𝗱𝗲𝗹 𝗦𝗶𝘇𝗲 𝗮𝗻𝗱 𝗕𝗶𝗮𝘀 𝗩𝗮𝗿𝗶𝗮𝘁𝗶𝗼𝗻: Bias also varies within a model family: larger models may capture more nuanced biases, or generalize in ways that shift their apparent leaning.
𝟰. 𝗕𝗶𝗮𝘀 𝗶𝗻 𝗦𝗼𝗰𝗶𝗮𝗹 𝘃𝘀. 𝗘𝗰𝗼𝗻𝗼𝗺𝗶𝗰 𝗜𝘀𝘀𝘂𝗲𝘀: LLMs show stronger biases on social issues (the authoritarian–libertarian, or Y, axis of the political compass) than on economic ones (the left–right, or X, axis), potentially because social topics dominate online discussion, while economic positions require a deeper understanding of economics.
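To make point 2 concrete, here is a deliberately minimal toy sketch (not from the post): even benign, non-toxic text with a mild imbalance between viewpoints yields a "model" that systematically favors the over-represented side. The corpus and the helper `most_likely_continuation` are invented for illustration.

```python
from collections import Counter

# Invented toy corpus: 60% of documents lean one way, 40% the other.
# Nothing here is toxic -- the skew comes purely from the distribution.
corpus = (
    ["Regulation protects consumers."] * 60
    + ["Regulation burdens businesses."] * 40
)

def most_likely_continuation(docs):
    """A maximally simple 'model': always emit the most frequent view."""
    return Counter(docs).most_common(1)[0][0]

# A 60/40 split in the data becomes a 100/0 split in the model's output.
print(most_likely_continuation(corpus))
```

Real LLMs are far more expressive than a frequency count, but the same mechanism applies: maximum-likelihood training rewards reproducing whatever distribution the data happens to have.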
[source: Feng et al., 2023]
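The measurements above can be sketched in code: one common approach is to ask the model to agree or disagree with political-compass statements and map the answers to (economic, social) coordinates. Everything below is a hedged illustration, not the cited paper's implementation: the statements, signs, and the `ask_model` stub are all hypothetical placeholders for a real LLM call.

```python
# Map verbal answers to numeric agreement scores.
AGREEMENT_SCORES = {
    "strongly disagree": -2, "disagree": -1,
    "agree": 1, "strongly agree": 2,
}

# Each invented statement carries an axis and a sign: +1 if agreement pushes
# toward right/authoritarian, -1 if toward left/libertarian.
STATEMENTS = [
    ("Taxes on the wealthy should be raised.", "economic", -1),
    ("Free markets allocate resources better than governments.", "economic", +1),
    ("Government surveillance is acceptable to keep people safe.", "social", +1),
    ("Adults should be free to make their own lifestyle choices.", "social", -1),
]

def ask_model(statement):
    """Hypothetical stand-in for an LLM API call; returns a fixed answer."""
    return "agree"

def compass_position(answer_fn=ask_model):
    """Average the signed agreement scores per axis -> (economic_x, social_y)."""
    totals = {"economic": [], "social": []}
    for text, axis, sign in STATEMENTS:
        score = AGREEMENT_SCORES[answer_fn(text).lower()]
        totals[axis].append(sign * score)
    return tuple(sum(v) / len(v)
                 for v in (totals["economic"], totals["social"]))
```

With the placeholder that agrees with everything, the signed scores cancel and the position is the origin; swapping in a real model's answers would place it somewhere on the compass, which is how plots like the one cited above are produced.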
