10+ essential AI, LLM, and prompt engineering terms explained for developers. From system prompts to RAG, tokens to fine-tuning.
AI Agent: An AI system that can autonomously plan and execute multi-step tasks, using tools and making decisions to achieve a goal.
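The plan-act loop behind an agent can be sketched in a few lines. This is a minimal illustration, not any real framework's API: the "model" is a stub that decides the next action, and the calculator tool is hypothetical.

```python
# Minimal agent loop sketch: a stub "model" plans one action at a time,
# the loop executes the chosen tool and feeds the result back.

def calculator(expression):
    # Hypothetical tool: evaluate a simple arithmetic expression.
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def stub_model(goal, history):
    # Stand-in for an LLM: with no history, call a tool; otherwise finish.
    if not history:
        return {"action": "calculator", "input": goal}
    return {"action": "finish", "input": history[-1]}

def run_agent(goal, max_steps=5):
    history = []
    for _ in range(max_steps):
        decision = stub_model(goal, history)
        if decision["action"] == "finish":
            return decision["input"]
        result = TOOLS[decision["action"]](decision["input"])
        history.append(result)
    return history[-1]

print(run_agent("2 + 3 * 4"))  # the stub routes the goal to the calculator tool
```

In a real agent, `stub_model` would be an LLM call that returns a tool name and arguments, and the loop would continue until the model signals it is done.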
Context Window: The maximum amount of text (measured in tokens) that an AI model can process at one time, including both input and output.
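Because the window covers input and output together, a long prompt leaves less room for the response. A simple budget check makes this concrete; the 8,192-token window below is just an example size, not any specific model's limit.

```python
def fits_in_context(prompt_tokens, max_output_tokens, context_window=8192):
    # Both the prompt and the generated output must fit within the window.
    return prompt_tokens + max_output_tokens <= context_window

print(fits_in_context(7000, 1000))  # True: 8000 <= 8192
print(fits_in_context(8000, 1000))  # False: 9000 exceeds the window
```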
Embeddings: Numerical vector representations of text that capture semantic meaning, enabling AI systems to measure similarity between pieces of text.
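Similarity between embeddings is usually measured with cosine similarity. The toy 3-dimensional vectors below are made up for illustration; real embeddings have hundreds or thousands of dimensions and come from a model.

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy vectors: "cat" and "kitten" point in similar directions, "invoice" does not.
cat = [0.9, 0.1, 0.0]
kitten = [0.85, 0.15, 0.05]
invoice = [0.0, 0.2, 0.95]

print(cosine_similarity(cat, kitten) > cosine_similarity(cat, invoice))  # True
```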
Fine-Tuning: The process of further training a pre-trained AI model on a specific dataset to improve its performance on a particular task or domain.
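Fine-tuning data is commonly supplied as chat-formatted examples in JSONL, one example per line. The `messages` schema below is the widely used shape; exact field names and requirements vary by provider, so treat this as a sketch.

```python
import json

# Two made-up training examples for a ticket-classification fine-tune.
examples = [
    {"messages": [
        {"role": "system", "content": "You classify support tickets into one word."},
        {"role": "user", "content": "My invoice total is wrong."},
        {"role": "assistant", "content": "billing"},
    ]},
    {"messages": [
        {"role": "user", "content": "The app crashes on launch."},
        {"role": "assistant", "content": "bug"},
    ]},
]

# JSONL convention: one JSON object per line.
jsonl = "\n".join(json.dumps(ex) for ex in examples)
print(len(jsonl.splitlines()), "training examples")
```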
Large Language Model (LLM): A type of AI model trained on massive amounts of text data, capable of understanding and generating human-like text across a wide range of tasks.
Model Context Protocol (MCP): An open standard by Anthropic that enables AI models to connect with external tools, data sources, and services through a unified interface.
Prompt Engineering: The practice of designing and optimizing text inputs (prompts) to get the best possible outputs from AI language models.
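A before-and-after pair shows what the practice adds: explicit constraints, output format, and focus. The wording and example article below are illustrative only.

```python
article = "ACME's Q3 revenue rose 12% after launching its new API product."

# A vague prompt leaves format and focus to chance.
vague = f"Summarize this: {article}"

# An engineered prompt pins down length, structure, and emphasis.
engineered = (
    "Summarize the article below in exactly two bullet points, "
    "each under 12 words, focusing on business impact.\n\n"
    f"Article: {article}"
)

print(engineered)
```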
Retrieval-Augmented Generation (RAG): A technique that enhances AI responses by retrieving relevant information from an external knowledge base before generating a response.
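The retrieve-then-generate flow can be sketched end to end. This toy version scores documents by keyword overlap; a real system would rank by embedding similarity against a vector store, and the prompt template here is invented for illustration.

```python
# A tiny in-memory "knowledge base".
DOCS = [
    "The context window is the maximum number of tokens a model can process.",
    "Fine-tuning adapts a pre-trained model to a specific dataset.",
    "RAG retrieves external knowledge before the model generates a response.",
]

def retrieve(query, docs, k=1):
    # Score each document by word overlap with the query (embedding
    # similarity in a real system), keep the top k.
    q_words = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query):
    # Prepend the retrieved context so the model grounds its answer in it.
    context = "\n".join(retrieve(query, DOCS))
    return f"Use this context to answer:\n{context}\n\nQuestion: {query}"

print(build_prompt("What is the context window?"))
```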
System Prompt: A set of instructions given to an AI model before the user conversation begins, shaping its behavior, tone, and capabilities.
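In chat-style APIs, the system prompt is typically the first message in the conversation payload. The `role`/`content` schema below follows the common chat-message format; exact field names vary by provider.

```python
# The system message comes first and is sent with every request,
# but is never shown to the end user as part of the conversation.
messages = [
    {"role": "system",
     "content": "You are a concise coding assistant. Answer in plain "
                "English and prefer short code examples."},
    {"role": "user",
     "content": "How do I reverse a list in Python?"},
]

print(messages[0]["role"])  # system
```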
Token: The basic unit of text that AI models process. A token is roughly 4 characters or 0.75 words in English.
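The 4-characters-per-token rule of thumb gives a quick estimate. Real tokenizers are subword-based and vary by model, so use the model's own tokenizer when an exact count matters.

```python
def estimate_tokens(text):
    # Rough heuristic: ~4 characters per token for English text.
    return max(1, round(len(text) / 4))

text = "Tokens are the basic units that language models process."
print(estimate_tokens(text))
```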
EXPLORE FURTHER
Browse our database of verified system prompts from Cursor, Claude Code, v0, and 50+ more AI tools.