
Generative AI: ChatGPT and Beyond

A guide to the artificial intelligence (AI) algorithms that use deep learning techniques and massive data sets to understand, summarize, generate, and predict new content.

What are LLMs?

Large language models (LLMs) are large general-purpose language models that can be pre-trained and then fine-tuned for specific purposes. They are trained to solve common language problems, such as text classification, question answering, document summarization, and text generation. The models can then be adapted to solve specific problems in different fields by fine-tuning them on relatively small domain-specific datasets.

Transfer learning is what enables LLMs to take the knowledge learned from one task and apply it to another. Given an input string of text, an LLM predicts the probability of the next word (token) based on the language in its training data. In addition, instruction-tuned language models predict a response to the instructions given in the input, such as "summarize a text", "generate a poem in the style of X", or "give a list of keywords based on semantic similarity for X".
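To make the next-token idea concrete, here is a toy sketch in Python: a bigram model that estimates the probability of the next word from word counts in a tiny made-up corpus. The corpus and names here are illustrative assumptions only; real LLMs use neural networks over billions of tokens, but the core task of assigning probabilities to the next token is the same.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus standing in for real training data (an assumption for illustration).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigrams: how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_token_probs(prev_word):
    """Return P(next token | previous token) estimated from the bigram counts."""
    counts = following[prev_word]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

probs = next_token_probs("the")
# In this corpus "the" is followed by cat (2x), mat (1x), fish (1x),
# so "cat" gets probability 0.5.
```

An LLM does the same kind of prediction, but conditions on the whole preceding context rather than just one word, and learns the probabilities with a neural network instead of raw counts.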

LLMs are large not only because of the size of their training data, but also because of their large number of parameters. They display different behaviors from smaller models, which has important implications for those who develop and use AI systems. To develop effective LLMs, researchers must address complex engineering issues and work alongside engineers or have engineering expertise themselves.

Want to learn more about AI terms? Take a look at the AI Glossary.

Using Generative AI & LLMs

Remember, you'll always need to verify the information, because LLMs sometimes make things up (known as "hallucination").

What is it good for?

  • First drafts of writing projects
  • Brainstorming ideas
  • Coming up with topic ideas for a research paper and keywords for searching library databases
  • Explaining information in ways that are easy to understand
  • Summarizing and outlining
  • Asking questions (be sure to fact check the results). You can ask a million questions without fear of being judged.
  • Translating text to different languages (not completely fluent in every language)
  • Helping write or debug computing code
  • Humor and improvisation

What is it not so good for?

  • Library research (not yet). For now, it's best to use Library search, Library databases, or Google Scholar. This may change in the future with more specialized search tools based on LLMs.
  • Asking for any information that would have dire consequences if it were incorrect (such as health, financial, or legal advice). This is because of the tendency of LLMs to sometimes make up answers while still sounding very confident.