AI Basics: A Conceptual Map

When people say “AI” today, they often mean systems that recognise objects in images, understand and generate text, answer questions, write code, or summarise documents, and combine text, images, audio, video… Most of these systems are based on modern machine learning, especially deep learning, and more recently on large language models (LLMs) and multimodal models. This post builds a conceptual map of modern AI: how we got here, what the key building blocks are, and where backpropagation, transformers, LLMs and multimodal models fit in. ...

December 6, 2025 · 6 min

Underfitting and Overfitting: How Models Go Wrong

When we train a model, we want it to generalise: to perform well not only on the data it has seen, but also on new, unseen data. Two of the most common ways this can go wrong are underfitting, where the model is too simple to capture the underlying pattern, and overfitting, where the model fits the training data too closely, including its noise, and fails to generalise. Understanding these two behaviours, and how to recognise them in practice, is one of the most important skills in machine learning; a short code sketch of both failure modes follows below. ...

December 6, 2025 · 6 min
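
The excerpt above is truncated, but the underfitting/overfitting contrast it describes can be made concrete with a small experiment. The sketch below is not taken from the post: it fits polynomials of a few illustrative degrees (1, 4, 15) to noisy samples of a sine curve, an assumed toy dataset, and compares training and test error using plain NumPy.

```python
# Minimal sketch (not from the post): fit polynomials of increasing degree
# to noisy data to make underfitting and overfitting visible.
# The degrees, noise level, and train/test split are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a smooth underlying function.
x = np.sort(rng.uniform(0.0, 1.0, 40))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.shape)

# Simple holdout split: every third point goes to the test set.
test_mask = np.arange(x.size) % 3 == 0
x_train, y_train = x[~test_mask], y[~test_mask]
x_test, y_test = x[test_mask], y[test_mask]

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

for degree in (1, 4, 15):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = mse(y_train, np.polyval(coeffs, x_train))
    test_err = mse(y_test, np.polyval(coeffs, x_test))
    print(f"degree {degree:2d}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")

# Expected pattern: degree 1 underfits (both errors high), degree 4 generalises
# reasonably well, degree 15 overfits (train error near zero, test error much larger).
```

Polynomial degree stands in for model capacity here; the same train-versus-test comparison applies to any model family.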