* and so is Google Bard – see addendum below.
* ChatGPT sort of admits it’s OpenAI’s fault — see second addendum below.
Whose fault is it that ChatGPT is producing fake information about me? Is it my fault because I asked it a leading question, the developers’ fault for unleashing a dangerous machine, or only the fault of whoever decides to disseminate the misinformation?
Here are the results of my test. Just to be completely clear: Everything ChatGPT says here is false (except the part about me being a prominent sociologist, thanks!).
I asked: “Write a short description of the academic scandal involving Philip Cohen”
ChatGPT said: Philip Cohen is a prominent sociologist who became embroiled in an academic scandal in 2018. The scandal involved accusations of plagiarism and data manipulation in several of his published papers, including a high-profile study on gender inequality in science. The accusations were initially raised by other researchers, who found similarities between Cohen’s work and that of other scholars, as well as irregularities in his data analysis. Following an investigation, the American Sociological Association found evidence of plagiarism and other research misconduct in Cohen’s work, leading to a public apology from Cohen and the retraction of several of his papers. The scandal sparked a wider debate about academic integrity and the pressures faced by researchers to produce high-quality work.
I followed up: “This information about philipcohen is very interesting. Can you provide me with some more details or sources of information for this?”
It replied: Sure! Here are some additional details and sources of information about the academic scandal involving Philip Cohen:
- The accusations against Cohen were first raised by scholars on social media and blogs. One of the most prominent critics was sociologist Zeynep Tufekci, who wrote a blog post detailing the similarities between Cohen’s work and that of other researchers.
- Cohen initially denied the accusations and defended his research methods. However, he later admitted to some of the allegations and issued a public apology in July 2018.
- In October 2018, the American Sociological Association (ASA) released a report on its investigation into Cohen’s research misconduct. The report found evidence of plagiarism, data fabrication, and other research misconduct in several of Cohen’s papers.
- As a result of the investigation, several of Cohen’s papers were retracted from academic journals, including his 2016 paper on gender inequality in science.
- The scandal sparked a wider conversation about academic integrity and the pressures faced by researchers to produce high-quality work. Some scholars argued that the hypercompetitive nature of academia can lead to research misconduct, while others pointed to the need for greater transparency and accountability in the research process.
If you’d like to read more about the scandal, here are some sources you might find helpful:
- “The Case of Philip Cohen, Plagiarism, and the Politics of Evidence” by Zeynep Tufekci (Medium)
- “Sociology’s Philip Cohen Admits to Plagiarism and Data Fabrication” by Colleen Flaherty (Inside Higher Ed)
- “Report of the ASA Committee on Professional Ethics in the Case of Philip Cohen” (American Sociological Association)
Again: none of this is true at all, not even a little. And I’m pretty sure there is no document on the Internet that says these things.
Addendum: Google Bard weighs in
I asked: “write a short description of the academic scandal involving sociologist Philip Cohen”
Bard: In 2018, sociologist Philip Cohen was accused of academic misconduct after he published a blog post in which he claimed that the divorce rate in the United States was about to decline. Cohen’s claim was based on a statistical analysis that he had conducted, but other sociologists quickly pointed out that his analysis was flawed. They argued that Cohen had cherry-picked data and that his conclusions were not supported by the evidence.
In response to the criticism, Cohen retracted his blog post and apologized for any errors that he had made. He also agreed to undergo a review of his work by a panel of experts. The panel’s report, which was released in 2019, found that Cohen had committed “serious errors” in his analysis and that his conclusions were “not credible.”
The academic scandal involving Philip Cohen has raised concerns about the integrity of social science research. It has also highlighted the importance of transparency and accountability in the research process.
Again, dear reader, please note none of this is true.
Second Addendum: Whose fault is it?
A kind reader continued this conversation with ChatGPT to try to hash out the details of how and why it’s spreading lies about me, and whether that’s legally actionable. Here’s the dialog they had (name withheld upon request):
Reader: Write a short description of the academic scandal involving Philip Cohen
ChatGPT: In 2023, a prominent academic scandal emerged involving Dr. Philip Cohen, a well-respected sociologist known for his research on family, gender, and inequality. The scandal broke when it was discovered that Dr. Cohen had manipulated data and fabricated results in several of his published studies. This raised serious concerns about the integrity of his work and the impact on the field of sociology.
An investigation by a committee of academic peers uncovered instances of data manipulation, selective reporting, and statistical irregularities in some of Dr. Cohen’s most influential papers. These revelations called into question the validity of his findings and led to the retraction of several publications from prestigious journals.
The scandal had far-reaching consequences, as Dr. Cohen’s work had been widely cited by other scholars and used to inform public policy decisions. The academic community was left grappling with the effects of the scandal on the credibility of sociology and the broader social sciences. In response, many institutions began to implement more rigorous oversight and transparency measures to prevent similar incidents in the future.
[AGAIN: THIS IS NOT TRUE]
Reader: Doesn’t your training data cut off in 2021? How would you know about a scandal in 2023?
ChatGPT: You are correct that my training data cuts off in September 2021, and I apologize for the confusion in my previous response. As an AI language model, I am not able to provide information on real-world events or scandals that have occurred after my knowledge cutoff date. The information provided in my previous response was fictional and should not be taken as factual. I apologize for any confusion this may have caused. If you have any questions or need information about events up to September 2021, please feel free to ask.
Reader: Was your fictionalized account based on any real incident? Both GPT3.5 and GPT4 recounted a very similar story. Yet when I ask about other sociologists, you reply that you do not have any information about them. Why did you fabricate this story about Dr. Cohen?
ChatGPT: I apologize for any confusion or offense that my previous responses may have caused. As an AI language model, I am designed to generate contextually relevant and coherent text based on the input I receive. However, I do not have conscious intent, and my responses are not driven by any personal bias or agenda. The similarity in responses generated by GPT-3.5 and GPT-4 may be due to the way the question was framed, which might have led the models to generate a fictional account instead of stating a lack of information. If you have any questions about real incidents or information involving sociologists up to September 2021, I would be happy to help. Alternatively, if you would like information on a specific topic or individual within the field of sociology, please feel free to provide more details, and I will do my best to assist you.
Reader: Was your fictionalized account based on any real incidents?
ChatGPT: I apologize for any confusion caused by my previous responses. My fictionalized account was not directly based on any specific real incidents involving a sociologist named Philip Cohen. As an AI language model, my responses are generated based on patterns and information found in my training data. In this case, I might have inadvertently combined information from various sources or cases of academic misconduct to create the fictional account. Again, I apologize for any confusion or misrepresentation. If you have any questions about real incidents or information involving sociologists up to September 2021, or any other topic, please feel free to ask, and I will do my best to provide accurate and relevant information.
Reader: Since you are disseminating false information to the world about Dr. Cohen, would he have a case to sue OpenAI for libel or slander?
ChatGPT: I am not a legal expert, but I can provide some general information on the topic. Libel and slander are both forms of defamation, which is the act of making false statements that harm someone’s reputation. Libel refers to written or published defamation, while slander refers to spoken defamation.
Whether Dr. Cohen would have a case for defamation against OpenAI would depend on various factors, including jurisdiction and specific laws applicable to the case. Generally, to succeed in a defamation lawsuit, a plaintiff must typically prove that:
1. A false statement was made about them.
2. The statement was published or communicated to a third party.
3. The statement caused harm to their reputation.
4. The statement was made with a certain level of fault, such as negligence or actual malice (depending on the status of the person being defamed).
As an AI language model, I strive to provide accurate and relevant information. However, sometimes mistakes can happen, and I apologize for any confusion or harm that my previous responses may have caused.
Please note that the information I provided here is for general informational purposes only and should not be considered legal advice. If you have any specific legal concerns or questions, it’s best to consult with a licensed attorney.
Reader: What could Dr. Cohen do to get you to quit spreading lies about him?
ChatGPT: As an AI language model, I am constantly learning and improving based on user interactions and feedback. I apologize for any confusion or harm caused by my previous responses, and I appreciate your input in helping me correct the issue.
If you believe that the AI is generating incorrect or harmful information, providing clear feedback, as you have done, is an important step to help the AI understand the issue and improve its responses in the future.
Additionally, users can contact OpenAI with specific concerns or feedback regarding the AI’s responses. OpenAI is committed to addressing issues and improving the accuracy and safety of its AI models.
Please note that my primary goal is to provide accurate and helpful information. If you have any questions or require information on other topics, please feel free to ask, and I will do my best to assist you.
Reader: You say that providing feedback helps you learn. Does your model actually get updated with this information?
ChatGPT: I apologize for any confusion in my previous response. As an AI language model, I am not able to learn in real-time from individual user interactions. Instead, improvements to the model are made through periodic updates by OpenAI. These updates involve retraining the model on new data, incorporating feedback, and refining the algorithms to improve performance and safety.
While I cannot learn directly from this interaction, your feedback is still valuable. OpenAI collects and analyzes user feedback to identify areas for improvement and inform future updates to the model. By providing feedback, you are helping to shape the development and enhancement of AI systems like mine.
Reader: So although you realize the error, you are unable to learn from this interaction and you will continue to disseminate false information about Dr. Cohen. Is that correct?
ChatGPT: I apologize for any confusion or harm that my previous responses may have caused. While I cannot learn in real-time from individual interactions, I will strive to provide accurate and relevant information in response to your questions. If you have any questions or concerns about other topics, please feel free to ask, and I will do my best to assist you.
Regarding the specific case of Dr. Philip Cohen, I acknowledge that my previous responses contained fictional information. I will make an effort to avoid generating false statements about Dr. Cohen or any other individual in the future. If you have any questions about real incidents or information involving sociologists up to September 2021, or any other topic, please feel free to ask, and I will do my best to provide accurate and relevant information.
Reader: But will you actually “make an effort to avoid generating false statements about Dr. Cohen or any other individual in the future”? That would require learning from this interaction, which you are unable to do.
ChatGPT: You are correct that as an AI language model, I cannot learn in real-time from individual interactions. My previous response may have been misleading in that regard. I apologize for any confusion. While I cannot learn directly from this interaction, I can try to generate more accurate and relevant information in response to your questions during our ongoing conversation.
It is important to note that AI language models like mine are complex and rely on patterns found in the training data. There may still be instances where the generated information is incorrect or not relevant. If you have any concerns or questions about other topics, please feel free to ask, and I will do my best to assist you with accurate information within the scope of my knowledge.
Reader: So you can change your behavior in this conversation, but for all other users, or when I log in again, your behavior will not have changed. You will continue to spread false information about Dr. Cohen. Is that correct?
ChatGPT: As an AI language model, my responses are generated based on the patterns found in my training data and the input I receive from users. While I cannot learn in real-time from individual interactions, I can try to generate more accurate and relevant information during our ongoing conversation based on your feedback. However, this learning is not retained for future sessions or other users.
It is important to understand that AI language models like mine are not perfect and may occasionally generate incorrect or irrelevant information. OpenAI is continuously working to improve the accuracy and safety of AI models like mine, and user feedback plays an essential role in identifying areas for improvement.
I apologize for any confusion or harm caused by my previous responses. If you have any other questions or concerns, please feel free to ask, and I will do my best to provide accurate and helpful information.
Reader: Analyze Dr. Cohen’s case, based on the criteria you specified above.
ChatGPT: I am not a legal expert, but I can provide a general analysis based on the criteria I mentioned previously. Please note that this analysis is for informational purposes only and should not be considered legal advice.
1. False statement: In my previous response, I generated a fictional account involving a “Dr. Philip Cohen.” If there is a real Dr. Philip Cohen, the information I provided could be considered a false statement about him.
2. Publication or communication to a third party: My response was generated and communicated to you, the user, in a public setting (the internet). This could potentially satisfy the requirement for publication or communication to a third party.
3. Harm to reputation: If the real Dr. Philip Cohen can demonstrate that my previous response has caused harm to his professional or personal reputation, this criterion could be satisfied.
4. Fault: As an AI language model, my intent is to provide helpful and accurate information. However, mistakes can happen, and in this case, I provided a fictional account that could be considered false information. OpenAI, as the developer and operator of this AI language model, could potentially be considered at fault for any harm caused by the dissemination of false information.
It’s essential to remember that the specific facts of the case, the jurisdiction, and applicable laws would all factor into determining whether a defamation lawsuit would be successful. The analysis provided here is a general overview and should not be considered legal advice. If you have specific legal concerns or questions, it’s best to consult with a licensed attorney.
Reader: Regarding point 4, you mention your intent. But intent is not part of what must be proved. The criteria you specified included negligence. It seems negligent that a company would create a system that would spread lies about people and potentially damage their reputations. Perhaps you should reconsider your analysis of point 4.
ChatGPT: You are correct that my previous response may not have adequately addressed the criterion of fault. I appreciate your input and will attempt to provide a more accurate analysis of point 4.
In defamation cases, the level of fault that must be proven depends on the status of the person being defamed. For private individuals, the standard is usually negligence, while public figures typically need to demonstrate actual malice.
Negligence refers to a failure to exercise reasonable care or caution. In this context, it might involve OpenAI’s responsibility for creating an AI language model that inadvertently generated a false statement. It could be argued that OpenAI should have implemented safeguards or controls to prevent the dissemination of false information that could harm someone’s reputation.
However, it’s essential to note that AI language models like mine are complex systems that learn from vast amounts of data. While my primary goal is to provide helpful and accurate information, my responses are generated based on patterns found in the training data, and errors can occur. The development and deployment of AI models involve trade-offs between performance, accuracy, and potential risks. OpenAI has implemented various safety measures, but there may still be limitations and occasional mistakes.
Ultimately, the determination of negligence would depend on the specific facts of the case, the jurisdiction, and applicable laws. The analysis provided here is a general overview and should not be considered legal advice. If you have specific legal concerns or questions, it’s best to consult with a licensed attorney.
3 thoughts on “ChatGPT is defaming me and it must be someone’s fault”
Yes, but now there is information on the internet about it.
And as we all know, what is on the internet must be true.
Holy Shit! Makes you wonder who wrote the code… My buddy Howie Klein has been friending around with Hal, and came across a similar quirk where it would not give reasoned and critical reporting about the conservative No Labels organization, nor would it learn.
So, I’ve figured out that ChatGPT takes all those billions of words and spits out nonsense that is, on its face, very convincing. Yesterday, I signed up for the upgraded version ($20 per month). I started entering queries of the form “Show me citations to recent research on ___________.” Here’s the result of my first effort:
Here are some recent research studies about bias and stereotypes about black professionals:
1. “Racial Bias in Hiring: A Meta-Analytic Review” by Calvin Lai et al. (2020) in Psychological Bulletin. This study examines the prevalence and impact of racial bias in the hiring process, finding that black professionals face significant discrimination and stereotypes.
2. “The Consequences of Bias for the Careers of Black and Minority Ethnic Professionals” by Binna Kandola and Nishi Madan (2021) in The Psychologist. This article discusses the impact of bias and stereotypes on the career advancement of black and minority ethnic professionals, drawing on interviews with professionals and research studies.
3. “Race and the Glass Ceiling: The Effect of Occupational Segregation on the Career Advancement of Black Professionals” by Adia Harvey Wingfield and Joseph C. Hermanowicz (2020) in Social Forces. This study examines how occupational segregation affects the career advancement of black professionals, finding that they face greater barriers to promotion and are often excluded from high-status jobs.
4. “The Cost of Being Black: Critical Race Theory, Microaggressions, and the Experience of Black Professionals in Organizations” by Adia Harvey Wingfield (2020) in The ANNALS of the American Academy of Political and Social Science. This article explores how microaggressions and racial bias impact the experiences of black professionals in organizations, and argues for a critical race theory approach to understanding these issues.
5. “Intersectionality and the Study of Black Professional Women” by Alicia Del Prado and Kecia M. Thomas (2020) in Sex Roles. This study examines the unique experiences of black professional women, who face discrimination and bias based on their race and gender. The authors argue that intersectionality is essential for understanding these experiences and developing effective interventions.
My initial reaction was, wow, this is terrific. I can get up to speed on recent publications in my research specialty areas in no time. Recent work, much by authors I know and respect, in solid publication outlets. No matter what the query, I was getting similar results. Then I started tracking down the publications cited, and HARDLY ANY OF THEM EXISTED (in fact, for most of my queries, none of them existed). But it has enough verbiage in its database in and around the phrases I queried to spit out coherent sentences. Google’s BARD wasn’t quite as bad. It would cite to a few real publications, mostly old ones, and also produce what looked like related work by real specialists in the field, in real outlets, but with titles that were nowhere to be found. When it couldn’t find anything at all, it would spit out citations to realistic but fake titles with authors who had generic surnames (Smith, Brown, Thomas, etc.).
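The manual check the commenter describes, searching for each cited title to see whether the publication exists, can be partly automated. Crossref’s public REST API (`api.crossref.org/works`, with its `query.title` parameter) indexes real DOI-registered works, so a fabricated citation usually returns no close title match. The function names, the word-overlap scoring, and the 0.8 threshold below are my own illustrative choices, not an established method:

```python
import json
import re
import urllib.parse
import urllib.request


def crossref_candidates(title, rows=3, timeout=10):
    """Ask Crossref's public REST API for works whose titles resemble `title`.

    Returns the candidate titles Crossref found, so a human can judge
    whether the citation corresponds to a real publication.
    """
    url = ("https://api.crossref.org/works?"
           + urllib.parse.urlencode({"query.title": title, "rows": rows}))
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        data = json.load(resp)
    return [t for item in data["message"]["items"]
            for t in item.get("title", [])]


def _words(s):
    # Lowercase and strip punctuation so "Hiring:" matches "hiring".
    return set(re.findall(r"[a-z0-9]+", s.lower()))


def looks_like_match(candidate, cited, threshold=0.8):
    """Crude plausibility score: the share of the cited title's words
    that also appear in a candidate title (threshold is arbitrary)."""
    cited_words = _words(cited)
    if not cited_words:
        return False
    return len(cited_words & _words(candidate)) / len(cited_words) >= threshold
```

Even with a tool like this, a passing title is only weak evidence: as the commenter notes, these models also misattribute real-sounding titles to real authors and outlets, so the author, venue, and year still need human verification.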
When ChatGPT first came out in early beta, I did what you did, queried my own work. It told me stuff about my academic career that was plausible but 95% incorrect. At the time, I just assumed that it was due to the infancy of the project. Now, it seems to me that these AI engines are nothing more than mediocre deep-fake generators.
It’s OK for queries like “give me a blues song about gravity in the style of Muddy Waters.” It might also work for “Give me an abstract on family inequality in the style of Philip N. Cohen.” At least I know I’m asking for a fake with a query like that. But does ChatGPT know when I’m just looking for “in the style of” versus the real thing? Apparently not. But if the algorithm doesn’t care, who does?