The TechCrunch AI glossary

Artificial intelligence is a deep and complicated world. The scientists who work in this field often rely on jargon and lingo to explain what they are working on. As a result, we frequently have to use those technical terms in our coverage of the artificial intelligence industry. That is why we thought it would be useful to put together a glossary with definitions of some of the most important words and phrases we use in our articles.
We will regularly update this glossary to add new entries, as researchers are continually finding new methods to push the frontier of artificial intelligence while also identifying emerging safety risks.
An AI agent refers to a tool that uses AI technologies to perform a series of tasks on your behalf, going beyond what a more basic AI chatbot could do, such as filing expenses, booking tickets or a table at a restaurant, or even writing and maintaining code. However, as we have explained before, there are lots of moving pieces in this emerging space, so different people may mean different things when they refer to an AI agent. Infrastructure is also still being built out to deliver on its envisaged capabilities. But the basic concept implies an autonomous system that may draw on multiple AI systems to carry out multistep tasks.
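To make that concrete, here is a minimal sketch of the kind of loop an agent might run; `plan_next_step` and `call_tool` are hypothetical placeholders standing in for calls to underlying AI systems and external services, not any particular vendor's API.

```python
# Sketch of an agent loop: plan, act with a tool, observe, repeat.
# plan_next_step and call_tool are hypothetical placeholders, not a real API.

def run_agent(goal, tools, max_steps=10):
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        # Ask an underlying model to pick the next action given the history so far.
        action = plan_next_step(history, tools)
        if action["type"] == "finish":
            return action["answer"]
        # Execute the chosen tool (expense filing, booking, code editing, ...).
        result = call_tool(tools[action["tool"]], action["args"])
        history.append(f"Used {action['tool']}, got: {result}")
    return "Stopped after hitting the step limit."
```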
Given a simple question, a human brain can answer it without even thinking too much about it, for instance, "Which animal is bigger, a giraffe or a cat?" But in many cases, you often need a pen and paper to come up with the correct answer because there are intermediary steps. For instance, if a farmer has chickens and cows, and together they have 40 heads and 120 legs, you might need to write down a simple equation to come up with the answer (20 chickens and 20 cows).
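Written out, those intermediate steps look something like this; a quick Python sketch of the same pen-and-paper working:

```python
# The pen-and-paper intermediate steps for the heads-and-legs puzzle.
heads = 40   # every chicken and cow has one head
legs = 120   # chickens have 2 legs, cows have 4

# If all 40 animals were chickens, there would be 80 legs.
# Every cow swapped in for a chicken adds 2 extra legs.
extra_legs = legs - 2 * heads   # 120 - 80 = 40
cows = extra_legs // 2          # 40 / 2 = 20
chickens = heads - cows         # 40 - 20 = 20

print(chickens, cows)  # 20 20
```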
In an AI context, chain-of-thought reasoning for large language models means breaking down a problem into smaller, intermediate steps to improve the quality of the end result. It usually takes longer to get an answer, but the answer is more likely to be correct, especially in a logic or coding context. So-called reasoning models are developed from traditional large language models and optimized for chain-of-thought thinking through reinforcement learning.
(See: Large language model)
A subset of self-improving machine learning in which AI algorithms are designed with a multi-layered, artificial neural network (ANN) structure. This allows them to make more complex correlations compared to simpler machine learning-based systems, such as linear models or decision trees. The structure of deep learning algorithms draws inspiration from the interconnected pathways of neurons in the human brain.
Deep learning AIs can identify important characteristics in data themselves, rather than requiring human engineers to define those features. The structure also supports algorithms that can learn from errors and, through a process of repetition and adjustment, improve their own outputs. However, deep learning systems require a lot of data points to deliver good results (millions or more). They also usually take longer to train than simpler machine learning algorithms, so development costs tend to be higher.
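As a rough illustration of that layered structure, the toy NumPy sketch below runs a two-layer forward pass; the layer sizes and random weights are arbitrary and purely for illustration, not a recipe for a real system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network: 4 inputs -> 8 hidden units -> 1 output.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def forward(x):
    # Each layer is a weighted sum of its inputs plus a bias,
    # passed through a non-linearity (ReLU here).
    hidden = np.maximum(0, x @ W1 + b1)
    return hidden @ W2 + b2

print(forward(np.array([1.0, 0.5, -0.2, 2.0])))
```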
(See: Neural network)
This means the further training of an AI model to optimize its performance for a more specific task or area than was previously a focal point of its training, typically by feeding in new, specialized (i.e., task-oriented) data.
Many AI startups take large language models as a starting point for building a commercial product, but they vie to amp up the utility for a target sector or task by supplementing earlier training cycles with fine-tuning based on their own domain-specific knowledge and expertise.
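In code, fine-tuning usually looks like an ordinary training loop that simply starts from weights a model already has. The PyTorch-style sketch below uses a randomly initialized stand-in for a pretrained model and synthetic stand-in data, and it freezes the earlier layers as one common, though not universal, choice.

```python
import torch
from torch import nn

# Stand-in for a pretrained model: in practice you would load real pretrained
# weights; random initialization is used here purely for illustration.
pretrained = nn.Sequential(
    nn.Linear(16, 32), nn.ReLU(),   # "general" layers learned during pre-training
    nn.Linear(32, 1),               # "head" we adapt to the new task
)

# Freeze the early layers so only the head is updated during fine-tuning.
for param in pretrained[0].parameters():
    param.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in pretrained.parameters() if p.requires_grad), lr=1e-3
)
loss_fn = nn.MSELoss()

# Tiny synthetic stand-in for new, specialized (task-oriented) data.
x = torch.randn(64, 16)
y = torch.randn(64, 1)

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(pretrained(x), y)
    loss.backward()
    optimizer.step()
```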
(See: Large language model (LLM))
Large language models, or LLMs, are the AI models used by popular AI assistants, such as ChatGPT, Claude, Google's Gemini, Meta's AI Llama, Microsoft Copilot, or Mistral's Le Chat. When you chat with an AI assistant, you interact with a large language model that processes your request directly or with the help of different available tools, such as web browsing or code interpreters.
AI assistants and LLMs can have different names. For instance, GPT is OpenAI's large language model, and ChatGPT is the AI assistant product.
LLMs are deep neural networks made of billions of numerical parameters (or weights, see below) that learn the relationships between words and phrases and create a representation of language, a kind of multidimensional map of words.
Those models are created from encoding the patterns they find in billions of books, articles, and transcripts. When you prompt an LLM, the model generates the most likely pattern that fits the prompt. It then evaluates the most probable next word after the last one based on what was said before. Repeat, repeat, and repeat.
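In rough pseudocode, that repeat-the-most-likely-next-word loop looks something like the Python sketch below; `model` and `tokenizer` are hypothetical placeholders for a real LLM and its tokenizer, not a specific library's API.

```python
# Rough sketch of autoregressive generation: keep appending the most likely
# next token until an end-of-text token appears.
# `model` and `tokenizer` are hypothetical placeholders, not a real API.

def generate(prompt, model, tokenizer, max_tokens=50):
    tokens = tokenizer.encode(prompt)
    for _ in range(max_tokens):
        # The model scores every possible next token given everything so far.
        scores = model(tokens)
        next_token = max(range(len(scores)), key=lambda t: scores[t])
        if next_token == tokenizer.end_of_text:
            break
        tokens.append(next_token)  # then repeat, repeat, repeat
    return tokenizer.decode(tokens)
```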
(See: Neural network)
A neural network refers to the multi-layered algorithmic structure that underpins deep learning and, more broadly, the whole boom in generative AI tools following the emergence of large language models.
Although the idea of taking inspiration from the densely interconnected pathways of the human brain as a design structure for data-processing algorithms dates all the way back to the 1940s, it was the much more recent rise of graphical processing hardware (GPUs), via the video game industry, that really unlocked the power of the theory. These chips proved well suited to training algorithms with many more layers than was possible in earlier eras, allowing neural network-based AI systems to achieve far better performance across many domains, whether for voice recognition, autonomous navigation, or drug discovery.
(See: Large language model (LLM))
Weights are core to AI training, as they determine how much importance (or weight) is given to different features (or input variables) in the data used for training the system, thereby shaping the AI model's output.
To put it another way, weights are numerical parameters that define what is most salient in a dataset for the given training task. They achieve their function by applying multiplication to inputs. Model training typically begins with weights that are randomly assigned, but as the process unfolds, the weights adjust as the model seeks to arrive at an output that more closely matches the target.
For example, an AI model for predicting house prices that is trained on historical real estate data for a target location could include weights for features such as the number of bedrooms and bathrooms, whether a property is detached or semi-detached, whether it has parking, a garage, and so on.
Ultimately, the weights the model attaches to each of these inputs reflect how much they influence the value of a property, based on the given dataset.
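A toy version of that house-price model makes the role of weights visible: the prediction is just each feature multiplied by its weight and summed, and training nudges randomly assigned weights toward values that better match known prices. The feature values and prices below are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Features per home: [bedrooms, bathrooms, detached (1/0), has parking (1/0)]
homes = np.array([
    [3, 1, 1, 1],
    [2, 1, 0, 0],
    [4, 2, 1, 1],
    [1, 1, 0, 1],
], dtype=float)
prices = np.array([350_000, 220_000, 520_000, 180_000], dtype=float)

weights = rng.normal(size=4)   # training starts from randomly assigned weights
bias = 0.0

for step in range(200):
    predictions = homes @ weights + bias   # weighted sum of the features
    error = predictions - prices
    # Nudge each weight in the direction that reduces the error.
    weights -= 0.01 * (homes.T @ error) / len(prices)
    bias -= 0.01 * error.mean()

print(weights)   # how much each feature influences the predicted price
```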