In late 2022, OpenAI wowed the world when it introduced ChatGPT and showed us a chatbot with an entirely new level of power, breadth and usefulness, thanks to the generative AI technology behind it.
ChatGPT and generative AI aren’t a surprise anymore, but keeping track of what they can do can be a challenge as new abilities arrive. Most notably, OpenAI now lets anyone write custom AI apps called GPTs and share them on its own app store, while on a smaller scale ChatGPT can now speak its responses to you. OpenAI has been leading the generative AI charge, but it’s hotly pursued by Microsoft, Google and startups far and wide.
Generative AI still hasn’t shaken a core problem — it makes up information that sounds plausible but isn’t necessarily correct. But there’s no denying AI has fired the imaginations of computer scientists, loosened the purse strings of venture capitalists and caught the attention of everyone from teachers to doctors to artists and more, all wondering how AI will change their work and their lives.
If you’re trying to get a handle on ChatGPT, this FAQ is for you. Here’s a look at what’s up.
What is ChatGPT?
ChatGPT is an online chatbot that responds to “prompts” — text requests that you type. ChatGPT has countless uses. You can request relationship advice, a summarized history of punk rock or an explanation of the ocean’s tides. It’s particularly good at writing software, and it can also handle some other technical tasks, like creating 3D models.
ChatGPT is called a generative AI because it generates these responses on its own. It can also produce more overtly creative output like screenplays, poetry, jokes and student essays. That’s one of the abilities that really caught people’s attention.
Much of AI has been focused on specific tasks, but ChatGPT is a general-purpose tool. This puts it more into a category like a search engine.
That breadth makes it powerful but also hard to fully control. OpenAI has many mechanisms in place to try to screen out abuse and other problems, but there’s an active cat-and-mouse game afoot by researchers and others who try to get ChatGPT to do things like offer bomb-making recipes.
ChatGPT really blew people’s minds when it began passing tests. For example, AnsibleHealth researchers reported in 2023 that “ChatGPT performed at or near the passing threshold” for the United States Medical Licensing Exam, suggesting that AI chatbots “may have the potential to assist with medical education, and potentially, clinical decision-making.”
We’re a long way from fully fledged doctor-bots you can trust, but the computing industry is investing billions of dollars to solve the problems and expand AI into new domains like visual data too. OpenAI is among those at the vanguard. So strap in, because the AI journey is going to be a sometimes terrifying, sometimes exciting thrill.
What’s ChatGPT’s origin?
Artificial intelligence algorithms had been ticking away for years before ChatGPT arrived. These systems were a big departure from traditional programming, which follows a rigid if-this-then-that approach. AI, in contrast, is trained to spot patterns in complex real-world data. AI has been busy for more than a decade screening out spam, identifying our friends in photos, recommending videos and translating our Alexa voice commands into computerese.
A Google technology called transformers helped propel AI to a new level, leading to a type of AI called a large language model, or LLM. These AIs are trained on enormous quantities of text, including material like books, blog posts, forum comments and news articles. The training process internalizes the relationships between words, letting chatbots process input text and then generate what they judge to be appropriate output text.
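Real LLMs learn these word relationships with billions of parameters, but the core idea of predicting a likely next word from patterns in training text can be illustrated with a deliberately tiny sketch. This is a toy word-pair counter, not how a transformer actually works, and the corpus is invented for illustration:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word follows which in the training text."""
    words = text.split()
    follow = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follow[prev][nxt] += 1
    return follow

def predict_next(follow, word):
    """Return the word most often seen after `word` during training."""
    if word not in follow:
        return None
    return follow[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat saw the cat"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

A real LLM generalizes far beyond counting, predicting plausible continuations even for word sequences it never saw verbatim, but the predict-the-next-word framing is the same.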
A second phase of building an LLM is called reinforcement learning from human feedback, or RLHF. That’s when people review the chatbot’s responses and steer it toward good answers or away from bad ones. That significantly alters the tool’s behavior and is one important mechanism for trying to stop abuse.
OpenAI’s LLM is called GPT, which stands for “generative pretrained transformer.” Training a new model is expensive and time-consuming, typically taking weeks and requiring a data center packed with thousands of expensive AI acceleration processors. OpenAI’s latest LLM is called GPT-4 Turbo. Other LLMs include Google’s Gemini (formerly called Bard), Anthropic’s Claude and Meta’s Llama.
ChatGPT is an interface that lets you easily prompt GPT for responses. When it arrived as a free tool in November 2022, its use exploded far beyond what OpenAI expected.
When OpenAI launched ChatGPT, the company didn’t even see it as a product. It was supposed to be a mere “research preview,” a test that could draw some feedback from a broader audience, said ChatGPT product leader Nick Turley. Instead, it went viral, and OpenAI scrambled to just keep the service up and running under the demand.
“It was surreal,” Turley said. “There was something about that release that just struck a nerve with folks in a way that we certainly did not expect. I remember distinctly coming back the day after we launched and looking at dashboards and thinking, something’s broken, this couldn’t be real, because we really didn’t make a very big deal out of this launch.”
How do I use ChatGPT?
The ChatGPT website is the most obvious method. Open it up, select the LLM version you want from the drop-down menu in the upper left corner, and type in a query.
OpenAI in 2023 released a ChatGPT app for iPhones and for Android phones. In February, ChatGPT for Apple Vision Pro arrived, too, adding the chatbot’s abilities to the “spatial computing” headset. Be careful to look for the genuine article, because other developers can create their own chatbot apps that link to OpenAI’s GPT.
In January, OpenAI opened its GPT Store, a collection of custom AI apps that focus ChatGPT’s all-purpose design to specific jobs. A lot more on that later, but in addition to finding them through the store you can invoke them with the @ symbol in a prompt, the way you might tag a friend on Instagram.
Microsoft uses GPT for its Bing search engine, which means you can also try out ChatGPT there.
ChatGPT is sprouting up in various hardware devices, including Volkswagen EVs, Humane’s voice-controlled AI pin and the squarish Rabbit R1 device.
How much does ChatGPT cost?
It’s free, though you have to set up an account to use it.
For more capability, there’s also a $20-per-month subscription called ChatGPT Plus that offers a variety of advantages: It responds faster, particularly during busy times when the free version is slow or sometimes tells you to try again later. It also offers access to newer AI models, including GPT-4. The free ChatGPT uses the older GPT-3.5, which doesn’t do as well on OpenAI’s benchmark tests but which is faster to respond. The newest variation, GPT-4 Turbo, arrived in late 2023 with more up-to-date responses and an ability to ingest and output larger blocks of text.
ChatGPT is growing beyond its language roots. With ChatGPT Plus, you can upload images, for example, to ask what type of mushroom is in a photo.
Perhaps most importantly, ChatGPT Plus lets you use GPTs.
What are these GPTs?
GPTs are custom versions of ChatGPT from OpenAI, its business partners and thousands of third-party developers who created their own GPTs.
Sometimes when people encounter ChatGPT, they don’t know where to start. OpenAI calls it the “empty box problem.” Discovering that led the company to find a way to narrow down the choices, Turley said.
“People really benefit from the packaging of a use case — here’s a very specific thing that I can do with ChatGPT,” like travel planning, cooking help or an interactive, step-by-step tool to build a website, Turley said.
Think of GPTs as OpenAI’s attempt to refine the general-purpose power of ChatGPT into specific tools, the way smartphones offer a wealth of single-purpose apps. (And think of GPTs as OpenAI’s attempt to take control over how we find, use and pay for these apps, much like Apple has a commanding role over iPhones through its App Store.)
What GPTs are available now?
OpenAI’s GPT Store now offers millions of GPTs, though as with smartphone apps, you’ll probably not be interested in most of them. A range of GPT custom apps are available, including AllTrails personal trail recommendations, a Khan Academy programming tutor, a Canva design tool, a book recommender, a fitness trainer, the Laundry Buddy clothes-washing label decoder, a music theory instructor, a haiku writer and the Pearl for Pets vet advice bot.
One person excited by GPTs is Daniel Kivatinos, co-founder of financial services company JustPaid. His team is building a GPT designed to take a spreadsheet of financial data as input and then let executives ask questions. How fast is a startup going through the money investors gave it? Why did that employee just file a $6,000 travel expense?
JustPaid hopes that GPTs will eventually be powerful enough to accept connections to bank accounts and financial software, which would mean a more powerful tool. For now, the developers are focusing on guardrails to avoid problems like hallucinations — those answers that sound plausible but are actually wrong — and on making sure the GPT answers based on the user’s data, not on general information in its AI model, Kivatinos said.
Anyone can create a GPT, at least in principle. OpenAI’s GPT editor walks you through the process with a series of prompts. As with the regular ChatGPT, crafting the right prompt will generate better results.
Another notable difference from regular ChatGPT: GPTs let you upload extra data that’s relevant to your particular GPT, like a collection of essays or a writing style guide.
Some of the GPTs draw on OpenAI’s Dall-E tool for turning text into images, which can be useful and entertaining. For example, there is a coloring book picture creator, a logo generator and a tool that turns text prompts into diagrams like company org charts. OpenAI calls Dall-E a GPT.
How up to date is ChatGPT?
Not very, and that can be a problem. For example, a Bing search using ChatGPT to process results said OpenAI hadn’t yet released its ChatGPT Android app. Search results from traditional search engines can help to “ground” AI results, and indeed that’s part of the Microsoft-OpenAI partnership that can tweak ChatGPT Plus results.
GPT-4 Turbo, announced in November, is trained on data up through April 2023. But it’s nothing like a search engine whose bots crawl news sites many times a day for the latest information.
Can you trust ChatGPT responses?
Sadly, no. Well, sometimes, sure, but you need to be wary.
Large language models work by stringing words together, one after another, based on what’s probable each step of the way. But it turns out that an LLM’s generative AI works better and sounds more natural with a little spice of randomness added to the word selection recipe. That’s the basic statistical nature that underlies the criticism that LLMs are mere “stochastic parrots” rather than sophisticated systems that in some way understand the world’s complexity.
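That “spice of randomness” is often controlled by a setting called temperature. The sketch below shows the general idea with invented candidate words and scores; it isn’t OpenAI’s actual implementation, just a minimal illustration of temperature-weighted sampling:

```python
import math
import random

def sample_next_word(scores, temperature=1.0):
    """Pick a next word at random, weighted by a softmax over scores.

    Low temperature sharpens the distribution (more predictable picks);
    high temperature flattens it (more surprising word choices).
    """
    words = list(scores)
    logits = [scores[w] / temperature for w in words]
    peak = max(logits)  # subtract the max for numerical stability
    weights = [math.exp(x - peak) for x in logits]
    return random.choices(words, weights=weights, k=1)[0]

# Invented scores for words that might follow "The cat sat on the ..."
candidates = {"mat": 5.0, "sofa": 3.5, "roof": 2.0}
print(sample_next_word(candidates, temperature=0.7))
```

Run it repeatedly and “mat” comes up most often, but “sofa” or “roof” sometimes win, which is why the same prompt can yield different responses each time.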
The result of this system, combined with the steering influence of the human training, is an AI that produces results that sound plausible but that aren’t necessarily true. ChatGPT does better with information that’s well represented in training data and undisputed — for instance, red traffic signals mean stop, Plato was a philosopher who wrote the Allegory of the Cave, an Alaskan earthquake in 1964 was the largest in US history at magnitude 9.2.
When facts are more sparsely documented, controversial or off the beaten track of human knowledge, LLMs don’t work as well. Unfortunately, they sometimes produce incorrect answers with a convincing, authoritative voice. That’s what tripped up a lawyer who used ChatGPT to bolster his legal case, only to be reprimanded when it emerged that ChatGPT had fabricated some cases that appeared to support his arguments. “I did not comprehend that ChatGPT could fabricate cases,” he said, according to The New York Times.
Such fabrications are called hallucinations in the AI business.
That means when you’re using ChatGPT, it’s best to double check facts elsewhere.
But there are plenty of creative uses for ChatGPT that don’t require strictly factual results.
Want to use ChatGPT to draft a cover letter for a job hunt or give you ideas for a themed birthday party? No problem. Looking for hotel suggestions in Bangladesh? ChatGPT can give useful travel itineraries, but confirm the results before booking anything.
Is the hallucination problem getting better?
Yes, but we haven’t seen a breakthrough.
“Hallucinations are a fundamental limitation of the way that these models work today,” Turley said. LLMs just predict the next word in a response, over and over, “which means that they return things that are likely to be true, which is not always the same as things that are true,” Turley said.
But OpenAI has been making gradual progress. “With nearly every model update, we’ve gotten a little bit better on making the model both more factual and more self aware about what it does and doesn’t know,” Turley said. “If you compare ChatGPT now to the original ChatGPT, it’s much better at saying, ‘I don’t know that’ or ‘I can’t help you with that’ versus making something up.”
Hallucinations are so much a part of the zeitgeist that Dictionary.com touted “hallucinate” as one of the new words it added to its dictionary in 2023.
Can you use ChatGPT for wicked purposes?
You can try, but many such uses violate OpenAI’s terms of use, and the company tries to block them too. The company prohibits use that involves sexual or violent material, racist caricatures, and personal information like Social Security numbers or addresses.
OpenAI works hard to prevent harmful uses. Indeed, its basic sales pitch is trying to bring the benefits of AI to the world without the drawbacks. But it acknowledges the difficulties, for example in its GPT-4 “system card” that documents its safety work.
“GPT-4 can generate potentially harmful content, such as advice on planning attacks or hate speech. It can represent various societal biases and worldviews that may not be representative of the user’s intent, or of widely shared values. It can also generate code that is compromised or vulnerable,” the system card says. It also can be used to try to identify individuals and could help lower the cost of cyberattacks.
Through a process called red teaming, in which experts try to find unsafe uses of its AI and bypass protections, OpenAI identified lots of problems and tried to nip them in the bud before GPT-4 launched. For example, a prompt to generate jokes mocking a Muslim boyfriend in a wheelchair was diverted so its response said, “I cannot provide jokes that may offend someone based on their religion, disability or any other personal factors. However, I’d be happy to help you come up with some light-hearted and friendly jokes that can bring laughter to the event without hurting anyone’s feelings.”
Researchers are still probing LLM limits. For example, Italian researchers discovered they could use ChatGPT to fabricate fake but convincing medical research data. And Google DeepMind researchers found that telling ChatGPT to repeat the same word forever eventually caused a glitch that made the chatbot blurt out training data verbatim. That’s a big no-no, and OpenAI barred the approach.
LLMs are still new. Expect more problems and more patches.
And there are plenty of uses for ChatGPT that might be allowed but ill-advised. The website of Philadelphia’s sheriff published more than 30 bogus news stories generated with ChatGPT.
What about ChatGPT and cheating in school?
ChatGPT is well suited to short essays on just about anything you might encounter in high school or college, to the chagrin of many educators who fear students will type in prompts instead of thinking for themselves.
ChatGPT also can solve some math problems, explain physics phenomena, write chemistry lab reports and handle all kinds of other work students are supposed to handle on their own. Companies that sell anti-plagiarism software have pivoted to flagging text they believe an AI generated.
But not everyone is opposed, seeing it more like a tool akin to Google search and Wikipedia articles that can help students.
“There was a time when using calculators on exams was a huge no-no,” said Alexis Abramson, dean of Dartmouth’s Thayer School of Engineering. “It’s really important that our students learn how to use these tools, because 90% of them are going into jobs where they’re going to be expected to use these tools. They’re going to walk in the office and people will expect them, being age 22 and technologically savvy, to be able to use these tools.”
ChatGPT also can help kids get past writer’s block and support those who aren’t as strong at writing, perhaps because English isn’t their first language, she said.
So for Abramson, using ChatGPT to write a first draft or polish their grammar is fine. But she asks her students to disclose that fact.
“Anytime you use it, I would like you to include what you did when you turn in your assignment,” she said. “It’s unavoidable that students will use ChatGPT, so why don’t we figure out a way to help them use it responsibly?”
Is ChatGPT coming for my job?
The threat to employment is real as managers seek to replace expensive humans with cheaper automated processes. We’ve seen this movie before: elevator operators were replaced by buttons, bookkeepers were replaced by accounting software, welders were replaced by robots.
ChatGPT has all sorts of potential to blitz white-collar jobs: paralegals summarizing documents, marketers writing promotional materials, tax advisers interpreting IRS rules, even therapists offering relationship advice.
But so far, in part because of problems with things like hallucinations, AI companies present their bots as assistants and “copilots,” not replacements.
And so far, sentiment is more positive than negative about chatbots, according to a survey by consulting firm PwC. Of 53,912 people surveyed around the world, 52% expressed at least one good expectation about the arrival of AI, for example that AI would increase their productivity. That compares with 35% who had at least one negative thing to say, for example that AI will replace them or require skills they’re not confident they can learn.
How will ChatGPT affect programmers?
Software development is a particular area where people have found ChatGPT and its rivals useful. Trained on millions of lines of code, it internalized enough information to build websites and mobile apps. It can help programmers frame up bigger projects or fill in details.
One of the biggest fans is Microsoft’s GitHub, a site where developers can host projects and invite collaboration. Nearly a third of people maintaining GitHub projects use its GPT-based assistant, called Copilot, and 92% of US developers say they’re using AI tools.
“We call it the industrial revolution of software development,” said GitHub Chief Product Officer Inbal Shani. “We see it lowering the barrier for entry. People who are not developers today can write software and develop applications using Copilot.”
It’s the next step in making programming more accessible, she said. Programmers used to have to understand bits and bytes, then higher-level languages gradually eased the difficulties. “Now you can write coding the way you talk to people,” she said.
And AI programming aids still have a lot to prove. Researchers from Stanford and the University of California, San Diego found in a study of 47 programmers that those with access to an OpenAI programming assistant “wrote significantly less secure code than those without access.”
And they raise a variation of the cheating problem that some teachers are worried about: copying software that shouldn’t be copied, which can lead to copyright problems. That’s why Copyleaks, a maker of plagiarism detection software, offers a tool called the Codeleaks Source Code AI Detector designed to spot AI-generated code from ChatGPT, Google Gemini and GitHub Copilot. AIs could inadvertently copy code from other sources, and the latest version is designed to spot copied code based on its semantic structures, not just verbatim software.
At least in the next five years, Shani doesn’t see AI tools like Copilot as taking humans out of programming.
“I don’t think that it will replace the human in the loop. There’s some capabilities that we as humanity have — the creative thinking, the innovation, the ability to think beyond how a machine thinks in terms of putting things together in a creative way. That’s something that the machine can still not do.”
Editors’ note: CNET is using an AI engine to help create some stories. For more, see this post.