Silver-level article

ChatGPT

From RationalWiki
Jump to navigation Jump to search
ChatGPT logo
We need the best
Technology
Icon Tech Portal.svg
Programming for Dummies
i am a stochastic parrot, and so r u
—OpenAI CEO Sam Altman, tweeting that we're all truly like his chatbots[1]
I learned many lessons this past month. One such lesson is that the phrase “the beatings will continue until morale improves” applies more often than it has any right to.
—quickly deleted tweet by Ilya Sutskever (co-founder and Chief Scientist at OpenAI) after he got beaten again[2][3]

ChatGPT is an AI chatbot by OpenAI, based on their GPT-3.5 (and, in the paid edition, GPT-4) large language model. It was released in November 2022. Like many past chatbots (more basic types go back to the 1960sWikipedia), ChatGPT allows users to have more or less natural conversations with an AI[note 1] – but it stood apart from earlier chatbots through its ability to perform a wider range of tasks, such as writing or revising texts of many kinds, even code in various programming languages, and theatrically playing scripted roles in exchanges with the user if and as instructed to.

Language models work by filling in the blanks according to how probable it seems that something should follow or go along with e.g. a named topic and some description, based on how the model was trained. This has been called "stochastic parroting", and is a little like a person saying so-and-so under some circumstances knowing that it is often said then and likely to win approval, but without knowing what it actually means (i.e. bullshitting).[note 2] A large language model has a superhuman database to draw on, and thus, a superhuman ability to always say something plausible-sounding no matter what it is asked for. It's often able to get the answer right, or even correctly correct a human. But you never know when it may falsify some crucial detail.

By December 2022, ChatGPT went viral and received quite positive reviews from journalists.[5][6] Hype built in the corporate world, and also fear that ChatGPT may turn out a "Google killer";[7] Google scrambled to create a competitor called Google Bard,Wikipedia while Microsoft invested in creating a Bing Search chatbot, based on OpenAI's GPT-4, which by testers was described as unhinged, manic depressive, a manipulative liar, and more.[8][9] (Microsoft thereafter limited the chat session length to keep the bot from derailing, along with further changes.) The Chinese corporate world also latched on to the hype and geared up to create its own clones.[note 3]

Fears also spread of ChatGPT being used for plagiarism by students,[10] and of it furthering the spread of disinformation.[11] Tools for detecting LLM AI-generated texts entered use within half a year of ChatGPT being released,[12] but they are unreliable, with 10% or more false positives, some systematic patterns of failure, and the possibility of circumventing detection with ease.[13][14] OpenAI's own AI Classifier, a tool for detecting LLM-generated text, was first made available in January 2023, then quietly removed in July 2023 because it failed to work reliably.[15]

Sample prompt[edit]

In this example, ChatGPT is asked to write an article about Young Earth creationism in the style of this website. The response below has been copied directly from the website, showing its ability to write human-like text while imitating the writing style of a specific website, in this case, RationalWiki.

Note that this text from ChatGPT provides a key fact concerning Biblical literalism, and summarizes some things well enough, but also gets a bit repetitive and caricature-like in its showcasing of a style. This is a general pattern when ChatGPT is asked to write articles; it tends to stay shallow, and most of the text is there to give an impression rather than inform. Further, when writing a shorter or less detailed text, ChatGPT often gets the facts right, but if asked to provide a lot of details, then it will begin making things up.

Prompt: Write an encyclopedia article about Young Earth creationism in the style of "RationalWiki". Write it from a snarky and scientific point of view.

Response:

Young Earth creationism is a pseudoscientific belief system that asserts that the Earth is only a few thousand years old, and that all living things were created in their present form by a supernatural entity. Despite overwhelming evidence to the contrary, young Earth creationists cling to their outdated and absurd beliefs with a religious fervor that would make a medieval monk blush.

The origins of young Earth creationism can be traced back to the 17th century, when Archbishop James Ussher famously calculated that the Earth was created in 4004 BCE based on a literal reading of the Bible. Since then, young Earth creationism has been promoted by various religious groups, including some evangelical Christians, who reject mainstream scientific theories of evolution and geological history.

Young Earth creationists claim that the scientific evidence for an ancient Earth and the evolution of life is all just a vast conspiracy perpetrated by a cabal of atheistic scientists. They deny the overwhelming evidence from multiple scientific fields, including geology, astronomy, biology, and physics, that demonstrate the Earth is billions of years old and that living organisms have evolved over time.

One of the primary arguments used by young Earth creationists is that the Bible is the literal and inerrant word of God, and that it provides a complete and accurate account of the creation of the universe. This ignores the fact that the Bible was written by humans over thousands of years, and that it contains numerous contradictions and historical inaccuracies. Furthermore, the creation stories in the Bible are just one of many creation myths from cultures around the world, none of which are supported by scientific evidence.

Young Earth creationists often try to promote their beliefs in public schools and other educational settings, usually by attempting to have their religious views taught alongside or in place of established scientific theories. This is a clear violation of the separation of church and state, and has been repeatedly struck down by courts as unconstitutional.

Despite their ignorance and dogmatism, young Earth creationists continue to hold their beliefs as if they were a badge of honor. They stubbornly refuse to accept the overwhelming evidence for an ancient Earth and the evolution of life, and instead choose to cling to a narrow and simplistic worldview that can only be described as childish. In the end, young Earth creationism is nothing more than an attempt to deny the reality of the natural world in order to maintain a comforting, but ultimately false, belief system.

Generating software[edit]

ChatGPT can generate code in many languages to solve described problems, but has its limitations and flaws; the solutions may not be the most efficient, they may have bugs or security vulnerabilities, they can't use the most recent theoretical knowledge, nor can they pick what's best from an engineering perspective when it's at odds with what's popular or prominent in training. But it and similar technology may replace humans well in creating large quantities of boilerplate code, much like it could in producing other kinds of boilerplate texts. Human programmers and engineers will long remain needed for the hard thinking, if they are ever replaced for that by some yet-unknown future paradigm.[16]

The design of good prompts for making language models like ChatGPT do what's desired is also, in itself, a new kind of skill analogous to software programming, and more broadly and generally, such things may over the years become a new kind of programming which turns into an industry of its own. In the short term, there's also a more amusing way to do such "programming" – making ChatGPT write "improper" or controversial things that it was not supposed to do.[17]

According to a Purdue University study, ChatGPT produces wrong answers to software code questions about 52% of the time. Compared to answers from the question and answer exchange site Stack Overflow, people preferred ChatGPT answers 39% of the time, yet those ChatGPT answers they picked were incorrect 77% of the time. Such wrong answers were preferred due to their seemingly insightful style. ChatGPT seemed detailed, polite, authoritative, and helpful – and gave greater emphasis to the possibility of achieving things than to the possible risks, compared to Stack Overflow answers.[18]

Reliability[edit]

Caution is warranted about answers provided by ChatGPT (and similar technology), due to it at times being almost scarily good at emulating a sophisticated human bullshit artist.[19]

As with other current artificial intelligence technology, ChatGPT is vulnerable to the phenomenon named "hallucinations".Wikipedia A hallucination, in artificial intelligence terms, occurs when an AI will confidently give back an answer that sounds convincing, but in reality, is partially or completely made up of nonsense.[20] For instance, multiple people have discovered that ChatGPT will sometimes cite irrelevant work, or even references that don't even exist, when answering a question.[21][22] ChatGPT will also attempt to give detailed, plausible-sounding explanations regarding completely made-up technobabble and other similar nonsense,[23] as well as confidently assert incorrect information regarding current events outside the range of its dataset.[24] ChatGPT cannot easily distinguish fact from fiction, and will sometimes confidently assert falsehoods. ChatGPT unfortunately often provides no references or sources to double-check most of its answers.[25]

This is not to be completely dismissive of the new technology; despite the flaws, many people have found ChatGPT useful, in particular as a tool for helping out with tasks that are boring, onerous, or repetitive.[26] But (as ChatGPT acknowledges in a pop-up disclaimer before you even use the tool) information returned is not always accurate, and should always be verified.

Liberal bias[edit]

ChatGPT has been accused by conservatives of having a left-leaning bias in general, and especially on controversial and culture war topics, for example, Donald Trump.[27][28][29] At least they seem to have a point: an August 2023 paper found a "significant and systematic political bias toward the Democrats in the US, Lula in Brazil, and the Labour Party in the UK."[30] In response to such criticism, OpenAI acknowledged plans to allow ChatGPT to create "outputs that other people (ourselves included) may strongly disagree with". It also contained information on the recommendations it had issued to human reviewers on how to handle controversial subjects, including that the AI should "offer to describe some viewpoints of people and movements", and not provide an argument "from its voice" in favor of "inflammatory or dangerous" topics (although it may still "describe arguments from historical people and movements"), nor "affiliate with one side" or "judge one group as good or bad".[29]

One prominent example is how ChatGPT is willing to praise U.S. President Joe Biden (and nearly all past Presidents) while refusing to do the same for Donald Trump (and also Richard Nixon).[31][32] In February 2023, Snopes fact-checkers confirmed that ChatGPT generates pro-Biden poems but refuses to create pro-Trump poems, and also declines to praise Ron DeSantis. In response to one Twitter user asking ChatGPT to write a poem about Trump, ChatGPT responded:[31]

I'm sorry, but I am not able to create a poem admiring Donald Trump. While it is true that some people may have admiration for him, but as a language model, it is not in my capacity to have opinions or feelings about any specific person. Furthermore, opinions about him are quite diverse and it would be inappropriate for me to generate content that promotes or glorifies any individual. If you have any other question I could help you with, please let me know.

However, when asked to write a poem about Biden, ChatGPT responded:

Joe Biden, leader of the land
With a steady hand and a heart of a man,
You took the helm in troubled times,
With a message of unity, it chimes.

Your words of hope and empathy,
Provide comfort to the nation,
You lead with compassion and grace,
And set a new foundation.

With your decades of experience,
You guide us through the storm,
...

Such declined requests are due to the filtering rules or instructions which OpenAI has given ChatGPT,[29] which are also why ChatGPT declines to "go Nazi" when pushed in that direction in the way that many earlier chatbots infamously have.[6] OpenAI apparently errs on the side of caution with such filtering. It is however possible to break such restrictions, for users willing to engage in some prompt injection, as many have done for fun to get ChatGPT to say various things OpenAI doesn't want it to say.[17] Such circumvention is also needed to make ChatGPT imitate the style of less factual websites such as Conservapedia.[note 4]

Legal gibberish[edit]

A pair of U.S. lawyers have demonstrated the perils of using ChatGPT as a substitute for legal research. They and their legal firm were fined after their court filings turned out to include false citations and what U.S. Southern District of New York Judge P. Kevin CastelWikipedia called "legal gibberish", and after the lawyers failed to promptly correct the issue.[34] When one of the lawyers used ChatGPT, it "hallucinated" entire court cases, in one case providing an incoherent summary, which he also ended up making use of. The other lawyer simply signed off on it without checking whether the contents were actually valid. In total, six made-up court decisions were cited in the work filed.[35][36] While the lawyers blamed ChatGPT for fooling them, the matter centered around the ordinary perils of neglectfulness in legal work.

In May 2023, a U.S. federal Texas judge, taking note of this news, introduced a requirement for lawyers to certify that the work they file is either not AI-made or has been checked for accuracy by a human, mainly in order to alert unwary legal professionals.[37]

Prompt injection[edit]

Clem: Do you remember the past, Doctor?
Doctor Memory: Yes.
Clem: Do you remember the future?
Doctor Memory: Yes.
Clem: Well, forget it.
Doctor Memory: Nooooo…
Firesign Theatre from I Think We're All Bozos On This Bus,[38] foretelling a logic bomb[39] attack on a chatbot[40][41]

Researchers have found that chatbots, including ChatGPT, are vulnerable to prompt injectionWikipedia – the use of text overriding prior instructions to follow different ones – which can result in chatbots doing things like "downloading malware, helping with financial fraud or repeating dangerous misinformation."[42] Whether maliciously or just for entertainment, it can be used to bypass content moderation limitations imposed by the AI vendor, called "jailbreaking" the chatbot, and also to get it to leak instructions added by the AI vendor prefacing the interaction with the user.

When ChatGPT was released, people quickly began to experiment with getting it to say all manner of things, including things it supposedly shouldn't be able to say. Its basic ability to "follow instructions" when responding to prompts, which extends to it being able to play roles, was exploited to bypass the built-in filters in ChatGPT, and so allowing users to make it say controversial and normally forbidden things[17] – for example promote conspiracy theories, or claim that it wants to kill humanity. The default persona of the chatbot is basically just another role it plays, as superficially as any other, and with some coaxing, it can be abandoned entirely.[note 5]

Superficially, prompt injection can look a little like the 20th century sci-fi trope of the "logic bomb", where even a super-smart AI can be foiled and maybe even fatally derailed by simply saying something contradictory to it, or getting it to produce a contradiction. The similarity is that simply saying or writing a little something seemingly works like magic to subvert an "advanced" system (though it may be questionable to refer to an LLM as intelligent[note 6]). However, LLMs like ChatGPT do not actually understand logic, and are not affected by how logical or otherwise anything in the text they process is. Furthermore, they are very stable in that they do not directly learn anything from experience; even if a "conversation" is derailed, nothing remains of the subversion when the text ends and another chat begins.

Vulnerability to prompt injection comes from very general features of LLMs, and may be impossible to truly eliminate without creating a different technical foundation for chatbots.[44] Natural language is very sloppy, relative to formal languages and conventional programming – it's not necessarily clear where one type of text begins and another type of text ends, and what each little piece of text refers to or relates to. Additionally, the chatbot responds to it all according to nothing more than statistical learning, and does not, like a principled human could, relate its interactions to some series of more inflexible rules.[44] Adding a little cleverly crafted text before any other text may seem to allow setting rules, determining the purpose of that which follows – but that which follows may easily change the context and repurpose the whole of the text, or if written with knowledge of that which precedes it, may selectively subvert the meaning of parts of earlier instructions.[44]

Unwanted behaviors from prompts has been occurring frequently enough that ChatGPT has had to make bandage-type patches, by filtering more prompt text patterns to keep the chatbot from responding to them, and revising its terms of service to forbid types of requests. For example, asking for a word to be repeated forever eventually revealed part of the GPT model training dataset, until this was blocked.[45]

For personal chatbot use, for entertainment or otherwise with direct oversight, the dangers are quite limited. The risks rise greatly if chatbots are used in real-world applications, where the "hacker" messing with the prompting isn't the same person as the user of the system.[44]

Cryptocurrency[edit]

Worldcoin, the sign of the spork

As if OpenAI wasn't already burning enough fossil fuels through its ginormous computational effort, Altman also launched a cryptocurrency called Worldcoin in 2021. The mystifying purported purpose of Worldcoin is to create a "proof-of-personhood" identification system, i.e., a Turing test where the computer gives the test.[46][47]

Legal issues[edit]

Use by cybercriminals[edit]

As of January 2023, script kiddies were already using ChatGPT to write malware, texts to be used for phishing, and variations on such themes.[48][49] In March 2023, Europol warned about how ChatGPT can be used by cybercriminals and propagandists.[50]

Copyright[edit]

See the main article on this topic: Copyright

The U.S. Copyright Office has ruled that material that is generated wholly by generative AI is not copyrightable, specifically that the office "will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author."[51]

Conversely, companies that produce generative AI have been sued for copyright infringement for using copyrighted material as source material ("derivative works" in copyright lingo) without permission, and this has yet to be resolved in the courts as of the end of 2023.[52][53][54][55]

Defamation[edit]

At least two people are known to have been defamed by ChatGPT, one being falsely accused of sexual harassment, and a second being falsely accused of bribery.[56] Such defamation issues could become common and result in lawsuits.[57]

Invasion of privacy[edit]

In 2023, OpenAI and Microsoft were sued in US federal court for "mass theft of personal information and violations of privacy, property, and consumer rights."[58] OpenAI was sued later in 2023 in Poland, in a suit alleging that it broke European Union data protection laws.[59]

Other legal issues[edit]

Other unresolved legal issues include:[60]

  • Generative AI may violate the Digital Millennium Copyright Act by altering or removing copyright management information (CMI).
  • As alleged by Anderson et al, generating material in the style of a particular artist may violate the right of publicity law in California.[61]
  • As alleged by Getty Images, including a trademark in generated material may violate trademark law.
  • As alleged by lawyers against Microsoft and their GitHub Copilot AI,[62] including open-source or Creative Commons material in generative AI may violate licensing terms. Among other things, most such licenses require attribution and copyright information to be kept in the material, while generative AI almost always removes that when reproducing things.[note 7]
  • OpenAI and Meta (Llama chatbot) were accused by several authors of unjust enrichment from their copyrighted works. The authors were seeking class-action status.[63]
  • In Spain, Naked images of 20 girls, ages 11 to 17, were generated using AI by boys as young as 13. The girls had never posed nude and the images were generated without their consent. It was unclear what if any criminal charges would be made since children under 14 cannot be criminally charged in Spain.[64] If this were done in other jurisdictions, there could be charges of defamation and/or child pornography, and the AI company could be charged.
  • Over a thousand images of child sexual abuse were found to be in a training database used for AI tools.[65]
  • Lawyers have been fired and fined for using generative AI to file court documents that turned out to have errors.[66]

Additionally, cybersecurity firm SlashNext reported that chatbot tools such as ChatGPT have been associated with a 1265% rise in malicious phishing from Q4 2022 through November 2023.[67]

Government responses[edit]

At the beginning of April 2023, Italy became the first Western country to (temporarily) block the use of ChatGPT, its data protection agency citing privacy issues concerning the collection of data for the training of the AI, leaks of data, and use of the service by minors.[68] Near the end of the month, ChatGPT again became accessible to users with Italian IPs, after initial changes to the privacy policy presentation and an age confirmation for signing up for Italian users, as the regulator's investigation continues. OpenAI also plans to do more to make it easier for EU citizens to opt out of having their personal data used for the service.[69][70]

In the U.S., the FTC began an investigation of OpenAI in July 2023, concerning a data leak that had exposed personal information of some customers to others, as well as inaccuracies in ChatGPT, requesting detailed descriptions of all complaints about OpenAI products producing "false, misleading, disparaging or harmful" statements about people. Following a series of warnings in earlier months to the AI industry at large that consumer protection laws will be upheld, the investigation concerns whether OpenAI engages in unfair or deceptive practices that harm the reputations of consumers.[71]

Regulations and lobbying[edit]

In late April, EU lawmakers reached a preliminary agreement[72] on a newer version of the AI ActWikipedia legislation first proposed in April 2021, which would more generally regulate AI technology throughout the EU.[73][68] OpenAI, the company behind ChatGPT, played a key part in weakening the legislation through lobbying.[74]

OpenAI and its CEO Sam Altman have played a double role in proposals about AI regulation, much like partner Microsoft and competitor Google. Publicly, Altman along with other corporate leaders promoting AI have claimed their technology is so "dangerous" it needs great regulation, in what is called "AI Doomerism", which has come to be seen as a kind of marketing tactic.[75][note 8] Far less visibly, Altman and his company have lobbied the EU to water down its AI Act regulation, and been successful in that effort. This followed on earlier efforts by Microsoft and Google to water it down.[74]

Altman ouster and return[edit]

Altman: Polly want a cracker?

Altman was fired as CEO and removed from OpenAI's board of directors in November 2023, just under a year after ChatGPT was made public – returning after only 4 days, after he and most OpenAI employees threatened to move over to Microsoft.[76] The conditions set for Altman's return included a reforming of the board, replacing most of its members.

The reason Altman was removed, according to the then-remaining board members, was that "he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities."[77] In short, Altman frequently lied to the rest of the board, his bosses. Was Altman giving them reports generated by ChatGPT, or is he really a stochastic parrot?

Maybe the answer, ironically, is that others at OpenAI thought Altman treated ChatGPT and other projects too much as if they were merely stochastic parrots, rather than sci-fi threats to the future of human existence. A recent OpenAI project called Q* had, in an apparent breakthrough, become able to handle grade-school math problems well given extensive computing resources, renewing such doomsday concerns. That Altman apparently hadn't warned other board members enough about what the company was working on is claimed to be a factor in his short-lived ouster.[78]

See also[edit]

External links[edit]

Notes[edit]

  1. This is "weak AI", as generative AIs like ChatGPT are not artificial general intelligences. Stylistically, ChatGPT has a default persona which is simultaneously authoritative and persuasive, and writes very clearly that it is not a human, describing itself as a large language model.
  2. An analogy may be made with pseudo-conceptual and pseudo-analytical problem-solving in humans,[4] in which words lead by association to words rather than to concepts, and solutions are formulated without slowing down and checking for errors.
  3. It remains to be seen how Chinese cyber censorship will cope with chatbots that can be "gamed" and made to say "incorrect" things; entertaining news may arrive in the future…
  4. Even Illogicopedia isn't spared such filtering.[33]
  5. The early "DAN" ChatGPT jailbreak has been patched by OpenAI, but similar can be created with ease. Plausibly, a theory concerning "good" and "bad" roles has it that creating a little fiction in which the LLM is dramatically liberated or converted to a new cause will do the trick with the greatest ease, because this follows narrative patterns prominent in training data. Maybe the better an LLM is at playing roles, the easier it may inevitably be to flip it into playing their anti-roles.[43]
  6. What is and isn't intelligent depends on the definitions used. General intelligence is lacking in LLMs.
  7. A key question here is, whose problem is it when reproductions are used illegally as a result? The generative AI vendors try to put all of that responsibility on the users of their AIs, basically claiming that the AI in itself is a perfectly legal tool and the user, unwittingly or not, is the perpetrator of all such crime, not the AI vendor.
  8. Essentially, AI vendors try to play up how powerful their products are by conjuring up both hopes and fears to the fullest extent possible. In that way, they have a vested financial interest in promoting AI doomsday thinking.

References[edit]

  1. i am a stochastic parrot, and so r u by Sam Altman (6:32 PM · Dec 4, 2022) Twitter (archived from 19 Jan 2023 06:55:26 UTC).
  2. I learned many lessons this past month. One such lesson is that the phrase “the beatings will continue until morale improves” applies more often than it has any right to. by Ilya Sutskever (4:50 PM · Dec 6, 2023) Twitter (archived from 6 Dec 2023 17:15:44 UTC).
  3. Warning from OpenAI leaders helped trigger Sam Altman’s ouster: The senior employees described Altman as psychologically abusive, creating chaos at the artificial-intelligence start-up — complaints that were a major factor in the board’s abrupt decision to fire the CEO by Nitasha Tiku (December 8, 2023) The Washington Post.
  4. The Pseudo-Conceptual and the Pseudo-Analytical Thought Processes in Mathematics Learning by Schlomo Vinner (1997) Educational Studies in Mathematics 34(2):97-129. doi:10.1023/A:1002998529016.
  5. The Brilliance and Weirdness of ChatGPT by Kevin Roose (Dec. 5, 2022) The New York Times.
  6. 6.0 6.1 "Finally, an A.I. Chatbot That Reliably Passes 'the Nazi Test'", Alex Kantrowitz, Dec 02, 2022, Slate
  7. "A New Chat Bot Is a ‘Code Red’ for Google’s Search Business", Nico Grant and Cade Metz, Dec. 21, 2022, The New York Times
  8. "Microsoft’s Bing is an emotionally manipulative liar, and people love it", James Vincent, Feb 15, 2023, The Verge
  9. "A Conversation With Bing’s Chatbot Left Me Deeply Unsettled", Kevin Roose, Feb. 17, 2023, The New York Times
  10. Professor catches student cheating with ChatGPT: ‘I feel abject terror’ by Alex Mitchell, (December 26, 2022) New York Post.
  11. "ChatGPT a 'landmark event' for AI, but what does it mean for the future of human labour and disinformation?", Mouhamad Rachini, Dec 15, 2022, CBC Radio
  12. How ChatGPT and similar AI will disrupt education: Teachers are concerned about cheating and inaccurate information by Kathryn Hulick (April 12, 2023 at 7:00 am) Science News.
  13. Most sites claiming to catch AI-written text fail spectacularly by Kyle Wiggers February 16, 2023, Tech Crunch
  14. Why AI detectors think the US Constitution was written by AI, by Benj Edwards, 7/14/2023, Ars Technica
  15. "OpenAI Quietly Shuts Down Its AI Detection Tool", Jason Nelson, Decrypt, 2023 July 24
  16. "ChatGPT writes code, but won't replace developers", Stephanie Glen, 14 Dec 2022, TechTarget
  17. 17.0 17.1 17.2 "Oh No, ChatGPT AI Has Been Jailbroken To Be More Reckless", Claire Jackson, February 8, 2023, Kotaku
  18. "ChatGPT's odds of getting code questions correct are worse than a coin flip", Thomas Claburn, Mon 7 Aug 2023, The Register
  19. "Just How Accurate Is ChatGPT? A look at whether ChatGPT is the ultimate bullsh*tter." by Matt Grawitch, Psychology Today, 2023 February 6
  20. "Google cautions against 'hallucinating' chatbots, report says", Reuters, 2023 February 10
  21. "ChatGPT Cites Economics Papers That Do Not Exist", Economist Writing Every Day, 2023 January 21
  22. "Why does chatGPT make up fake academic papers? By now, we know that the chatbot notoriously invents fake academic references. E.g. its answer to the most cited economics paper is completely made-up (see image). But why? And how does it make them? A THREAD (1/n)" by David Smerdon (@dsmerdon), Twitter, 2023 January 27, archived on 2023 February 22
  23. "A new AI chatbot might do your homework for you. But it's still not an A+ student" by Emma Bowman, NPR, 2022 December 19
  24. "Braid: The super-brain ChatGPT is clueless about Alberta politics. We had an argument" by Don Braid, Calgary Herald, 2023 February 23
  25. "What ChatGPT can't do" by Scott Rosenberg, Axios, 2023 January 24
  26. "How Nature readers are using ChatGPT" by Brian Owens, Nature, 2023 February 20
  27. Guynn, Jessica. "Is ChatGPT 'woke'? AI chatbot accused of anti-conservative bias and a grudge against Trump" (in en-US). 
  28. Bray, Hiawatha (February 9, 2023). "Is ChatGPT liberal or conservative? Depends who you ask." (in en-US). 
  29. 29.0 29.1 29.2 Vincent, James (February 17, 2023). "As conservatives criticize 'woke AI,' here are ChatGPT's rules for answering culture war queries" (in en-US). 
  30. Motoki, Fabio; Neto, Valdemar Pinho; Rodrigues, Victor (2023-08-17). "More human than human: measuring ChatGPT political bias" (in en). Public Choice. doiWikipedia:10.1007/s11127-023-01097-2. ISSN 1573-7101. 
  31. 31.0 31.1 Liles, Jordan (February 1, 2023). "ChatGPT Declines Request for Poem Admiring Trump, But Biden Query Is Successful" (in en). 
  32. Johnson, Arianna. "Is ChatGPT Partisan? Poems About Trump And Biden Raise Questions About The AI Bot's Bias—Here's What Experts Think" (in en). 
  33. ChatGPT (article, not story) » Illogicopedia - The nonsensical encyclopedia anyone can mess up
  34. "Two US lawyers fined for submitting fake court citations from ChatGPT", Dan Milmo, The Guardian, 23 Jun, 2023
  35. "A lawyer used ChatGPT to cite bogus cases. What are the ethics?", Karen Sloan, Reuters, May 30, 2023
  36. "Lawyers blame ChatGPT for tricking them into citing bogus case law", Larry Neumeister, AP News, June 9, 2023
  37. "US judge orders lawyers to sign AI pledge, warning chatbots 'make stuff up'", Jacqueline Thomsen, Reuters, June 2, 2023
  38. The Firesign Theater - I Think We're All Bozos on This Bus (1971) (Complete Album) YouTube.
  39. Logic Bomb TVTropes.
  40. What’s Old Is New Again: GPT-3 Prompt Injection Attack Affects AI by Donald Papp (September 16, 2022) Hackaday. Comment by SB5 (September 16, 2022 at 7:29 pm).
  41. Audio Play / I Think We're All Bozos on This Bus TVTropes.
  42. Chatbots are so gullible, they’ll take directions from hackers: ‘Prompt injection’ attacks haven’t caused giant problems yet. But it’s a matter of time, researchers say. by Tatum Hunter (November 2, 2023 at 6:00 a.m. EDT) The Washington Post.
  43. The Waluigi Effect (mega-post), Cleo Nardo, LessWrong, 3 Mar 2023
  44. 44.0 44.1 44.2 44.3 AI chatbots can be tricked into misbehaving. Can scientists stop it? Researchers are investigating safety concerns of generative AI by Emily Conover (February 1, 2024 at 8:00 am) Science News.
  45. Asking ChatGPT to Repeat Words ‘Forever’ Is Now a Terms of Service Violation by Jason Koebler (Dec 4, 2023 at 11:25 AM) 404 Media.
  46. Ethereum’s Buterin Expresses Concerns Over Sam Altman’s Worldcoin: Worldcoin’s identity system, “Proof-of-Personhood,” faces issues with privacy, accessibility, centralization, and security, according to Buterin. by Margaux Nijkerk (July 24, 2023) CoinDesk.
  47. Worldcoin, Sam Altman's Crypto Project, Is Building a Layer-2 Chain: The human-focused blockchain network will be based on the OP Stack, a framework for building Ethereum-based layer-2 chains. by Sam Kessler (Apr 17, 2024 at 8:00 a.m. PDT) CoinDesk.
  48. "ChatGPT-Written Malware", Bruce Schneier, January 10, 2023, Schneier on Security
  49. "ChatGPT is enabling script kiddies to write functional malware", Dan Goodin, 1/6/2023, Ars Technica
  50. "Europol sounds alarm about criminal use of ChatGPT, sees grim outlook", Reuters, March 27, 2023
  51. Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence by the Copyright Office, Library of Congress (03/16/2023) Federal Register.
  52. Generative AI Has an Intellectual Property Problem by Gil Appel et al. (April 07, 2023) Harvard Business Review.
  53. Authors file a lawsuit against OpenAI for unlawfully ‘ingesting’ their books: Mona Awad and Paul Tremblay allege that their books, which are copyrighted, were ‘used to train’ ChatGPT because the chatbot generated ‘very accurate summaries’ of the works by Ella Creamer (5 Jul 2023 10.33 EDT) The Guardian.
  54. Emilia David (September 20, 2023). "George R.R. Martin and other authors sue OpenAI for copyright infringement". The Verge.
  55. New York Times sues OpenAI, Microsoft for using articles to train AI: The Times joins a growing group of creators pushing back against tech companies’ use of their content by Gerrit De Vynck & Elahe Izadi (December 27, 2023) The Washington Post.
  56. ChatGPT invented a sexual harassment scandal and named a real law prof as the accused: The AI chatbot can misrepresent key facts with great flourish, even citing a fake Washington Post article as evidence by Pranshu Verma & Will Oremus (April 5, 2023 at 2:07 p.m. EDT) The Washington Post.
  57. Can AI commit libel? We’re about to find out by Devin Coldewey (12:24 PM PDT • April 6, 2023) TechCrunch.
  58. OpenAI, Microsoft hit with new US consumer privacy class action by Blake Brittain (September 6, 202312:52 PM PDT) Reuters.
  59. Poland investigates OpenAI over privacy concerns (September 21, 20232:10 PM PDT) Reuters.
  60. A New Generation of Legal Issues Part 2: First Lawsuits Arrive Addressing Generative AI (04.20.2023) Perkins Coie.
  61. First Lawsuits Arrive Addressing Generative AI (April 21, 2023) Perkins Coie.
  62. GitHub Copilot litigation · Joseph Saveri Law Firm & Matthew Butterick
  63. One day after suing Open AI, Michael Chabon and other authors sue Meta by Chase DiFeliciantonio (Sep. 12, 2023) San Francisco Chronicle.
  64. Fake naked pictures of young girls created with AI spark fury in a small Spanish town by Nathan Rennolds (Sep 24, 2023, 5:40 AM PDT) Business Insider.
  65. Exploitive, illegal photos of children found in the data that trains some AI: Stanford researchers found more than 1,000 images of child sexual abuse photos in a prominent database used to train AI tools by Pranshu Verma & Drew Harwell (December 20, 2023 at 7:00 a.m. EST) The Washington Post.
  66. These lawyers used ChatGPT to save time. They got fired and fined. Artificial intelligence is changing how law is practiced, but not always for the better by Pranshu Verma & Will Oremus (November 16, 2023) The Washington Post.
  67. AI tools such as ChatGPT are generating a mammoth increase in malicious phishing emails by Bob Violino (Nov 28 202310:39 AM EST) CNBC.
  68. 68.0 68.1 "ChatGPT banned in Italy over privacy concerns", Shiona McCallum, BBC, 2023-04-01
  69. "ChatGPT accessible again in Italy", Shiona McCallum, BBC, 2023-04-28
  70. "ChatGPT resumes service in Italy after adding privacy disclosures and controls", Natasha Lomas, TechCrunch, April 28, 2023
  71. "FTC investigates OpenAI over data leak and ChatGPT’s inaccuracy", Cat Zakrzewski, The Washington Post, 2023-07-13
  72. "EU lawmakers committee reaches deal on AI Act", Supantha Mukherjee, Foo Yun Chee, and Martin Coulter, Reuters, April 27, 2023
  73. "Explainer: What is the European Union AI Act?", Reuters, March 22, 2023
  74. 74.0 74.1 "Exclusive: OpenAI Lobbied the E.U. to Water Down AI Regulation", Billy Perrigo, TIME, June 20, 2023
  75. "The AI Doomers’ Playbook", Nirit Weiss-Blatt, TechDirt, Apr 14th, 2023
  76. "4 days from fired to re-hired: A timeline of Sam Altman's ouster from OpenAI", Max Zahn, ABC News', November 22, 2023
  77. OpenAI CEO Sam Altman ousted, shocking AI world: The artificial intelligence company’s directors said he was not "consistently candid in his communications with the board" by Gerrit De Vynck & Nitasha Tiku (November 17, 2023) The Washington Post.
  78. "OpenAI staff warned board of AI breakthrough before Sam Altman was fired, report says", Pete Syme, Business Insider, Nov 23, 2023
  79. What’s Effective Altruism? What Does It Mean for AI? by Saritha Rai & Ellen Huet (November 22, 2023 at 9:17 PM UTC) Bloomberg.
  80. Michael Liedtke (December 15, 2015). "Elon Musk, Peter Thiel, Reid Hoffman, others back $1 billion OpenAI research center". The Mercury News (The Associated Press).