What Is Generative AI & How Does It Work?

Large Language Models (LLMs), the technology behind much of today's generative AI, are loosely inspired by the human brain and promise large productivity gains in analyzing and writing text and in creating images and video. Goldman Sachs estimates that the technology could expose the equivalent of around 300 million full-time jobs to automation.
There are many applications of this technology, which raises the question: how does it actually work?

A simplified explanation for the case of text follows:

1. Every word first needs to be encoded in a form computers can work with.
In this case each word corresponds to a word vector, a numerical representation of that word. How these vectors are created is described further down; a toy version of such a word-to-vector lookup follows this point.
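
The snippet below is only a rough sketch, not how production tokenizers or embedding layers work: it assigns each word of a tiny, invented vocabulary a small random vector, which training would later adjust.

    import numpy as np

    # Toy vocabulary; real models use tens of thousands of sub-word tokens.
    vocab = ["the", "process", "runs", "slowly"]
    embedding_dim = 4  # real models use hundreds or thousands of dimensions

    # One random vector per word; training later adjusts these values.
    rng = np.random.default_rng(0)
    word_vectors = {word: rng.normal(size=embedding_dim) for word in vocab}

    print(word_vectors["process"])  # prints a 4-dimensional vector for "process"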

2. To learn a word's meaning, an LLM observes a specific word, e.g. "process", across billions of words of training data from the internet and analyzes the words that appear near it.

3. Based on the large set of words found alongside "process" in the training data, as well as the words that rarely appear near it, the model produces a numerical vector and adjusts it according to each word's proximity to "process". The resulting vector is known as a word embedding, and it represents the word's linguistic features. A deliberately simplified, count-based version of this idea is sketched below.
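
Real embeddings are dense vectors learned by a neural network; the sketch below merely counts which words co-occur within a small window of an invented toy corpus, to convey the idea that a word's representation is shaped by its neighbours.

    from collections import Counter, defaultdict

    # Toy corpus; real models train on billions of words.
    corpus = "the process runs slowly and the process completes the task".split()
    window = 2  # how many neighbouring words on each side count as context

    cooccurrence = defaultdict(Counter)
    for i, word in enumerate(corpus):
        for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
            if i != j:
                cooccurrence[word][corpus[j]] += 1

    # Here a word's "embedding" is simply its row of co-occurrence counts.
    print(cooccurrence["process"])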

4. Words with similar meanings, which we expect to be used in comparable ways, often end up with similar word vectors, as illustrated below.
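
Similarity between word vectors is typically measured with cosine similarity. The three vectors below are invented purely for illustration; real embeddings have hundreds or thousands of dimensions.

    import numpy as np

    def cosine_similarity(a, b):
        # 1.0 means the vectors point in the same direction; values near 0 mean unrelated.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Hypothetical 3-dimensional embeddings.
    v_process = np.array([0.9, 0.1, 0.3])
    v_procedure = np.array([0.8, 0.2, 0.35])
    v_banana = np.array([0.1, 0.9, -0.4])

    print(cosine_similarity(v_process, v_procedure))  # close to 1: similar usage
    print(cosine_similarity(v_process, v_banana))     # much lower: different usage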

5. Until recently the state of the art for analyzing text was recurrent neural networks (RNNs), which scan a sentence word by word and process it sequentially. In 2017, Google researchers published the paper "Attention Is All You Need", which introduced the Transformer architecture. Transformers process an entire sequence of words at once, relating every part of it to every other part instead of handling words in isolation. This lets the software capture context and patterns better and analyze or generate text more accurately. A minimal sketch of the underlying self-attention mechanism follows.
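
At the core of the Transformer is self-attention: every word's vector is updated as a weighted mix of all the vectors in the sequence. The sketch below shows scaled dot-product attention on invented inputs and omits the learned query/key/value projection matrices a real model would use.

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    def self_attention(X):
        # X has one row per word. A real Transformer first projects X into
        # separate query, key and value matrices; here we use X directly.
        d = X.shape[-1]
        scores = X @ X.T / np.sqrt(d)  # how strongly each word attends to every other word
        weights = softmax(scores)      # each row sums to 1
        return weights @ X             # each output is a weighted mix of all input vectors

    # Four "words", each represented by a 3-dimensional vector (values invented).
    X = np.array([[1.0, 0.0, 0.2],
                  [0.9, 0.1, 0.1],
                  [0.0, 1.0, 0.5],
                  [0.2, 0.8, 0.3]])
    print(self_attention(X))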

6. Understanding context, i.e. the meaning of a word within a sentence or paragraph, is crucial for advanced text generation. Without it, words that have similar meanings in some contexts but not in others can be used incorrectly; "bank", for example, means something different in "river bank" than in "bank account", and only the surrounding words disambiguate it.
Transformer technology allows LLMs to take context from beyond sentence boundaries, giving the model a better understanding of how and when a word is used.

7. During text generation, a Transformer does not choose each next word in isolation; every prediction is conditioned on the entire sequence produced so far, so the model effectively scores larger stretches of text as a whole. This produces more coherent, human-like output. GPT-4, for example, exhibits human-level performance on several professional and academic benchmarks, such as the US bar exam and the SAT. A toy sketch of the next-word step is shown below.
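
Each generation step ultimately selects the next word from a probability distribution the model produces. The vocabulary and probabilities below are invented; real systems also apply settings such as temperature to control how adventurous the sampling is.

    import numpy as np

    # Hypothetical probabilities for the word following "The process runs ...".
    candidates = ["smoothly", "slowly", "out", "banana"]
    probs = np.array([0.55, 0.30, 0.13, 0.02])

    greedy_choice = candidates[int(np.argmax(probs))]  # always pick the most likely word
    rng = np.random.default_rng()
    sampled_choice = rng.choice(candidates, p=probs)   # sampling adds variety

    print(greedy_choice, sampled_choice)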

8. Because of this inherently predictive nature, LLMs are not always accurate and can fabricate information, a behavior researchers call "hallucination". They can generate made-up numbers, names, dates, quotes, even web links or entire articles.

Any comments?

#AI #GPT4 #GenerativeAI #LLMs
