From Functions to Skills: The Full Story Behind Two Years of AI Terminology
1/16/2026
Have you noticed how new AI terms change faster than phone models?
You finally understand Function Calling, and then "Skills" shows up. Someone mentions MCP, and before you have digested it, they are talking about Agents. Every time, your first thought is: "Am I already behind?"
Do not panic. Today we will unpack the last couple of years clearly.
More importantly, I will show how these concepts relate to each other. You will see this is not a random pile of new words. It is a layered progression. Once you understand the evolution, new terms become much easier to place.

Layer 1: The starting point - โfunctionsโ in programming
Let us start with the most familiar thing: functions in code.
A function is like a helper. You give it an input and it returns an output. Like a restaurant waiter: you order, they deliver, every time through a fixed process.
For example, a developer writes a function called calculate_tax(income). You pass in the income, it returns the tax. Need it again? Call it again. No need to re-implement the logic each time.
The value of functions is three words: encapsulation, reuse, standardization.
You package a procedure once, anyone can use it, and the way to use it is consistent. This has been a core productivity tool for developers for decades.
But functions have a limit: they live only in the code world.
In code, getWeather() will always run. But normal users do not write code, and AI does not directly run your code either. So how can AI use those helpers?
Layer 2: Building the bridge - Function Calling (2023)
Around 2023, the idea of Function Calling took off.
Think of it as giving AI a task dispatch center.
Before that, if you asked "What is the weather in Beijing today?", AI either guessed based on training data or said "I do not know." It had no hands.
With Function Calling, it changes.
Developers tell the model: "You now have a dispatch center. There is a helper called get_weather. When you need weather, send the job to that helper." When AI sees "What is the weather in Beijing today?", it decides: "I should dispatch get_weather."
Then it outputs a standard note (usually JSON) like:
{
  "function": "get_weather",
  "arguments": {
    "city": "Beijing"
  }
}
An external program receives this note, executes the real call, and returns the result. The AI then replies in natural language: "Sunny, 15°C."
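That round trip can be sketched without any real API. Here, a hardcoded JSON string stands in for the model's note, and get_weather is a hypothetical stand-in for a real weather lookup:

```python
import json

# Registry of helpers the host program exposes to the model.
# get_weather is a hypothetical stand-in; a real one would hit a weather API.
def get_weather(city: str) -> str:
    return f"Sunny, 15°C in {city}"

TOOLS = {"get_weather": get_weather}

# The model's "note": JSON it emits instead of answering directly.
model_output = json.dumps({
    "function": "get_weather",
    "arguments": {"city": "Beijing"},
})

# The host program, not the model, performs the actual call.
call = json.loads(model_output)
result = TOOLS[call["function"]](**call["arguments"])
print(result)  # this result goes back to the model for the natural-language reply
```

Note where the division of labor sits: the model only writes the note; execution always happens outside it.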
Key shift: deterministic to probabilistic
There is a subtle but important detail:
- Traditional functions are deterministic: getWeather() runs 100 percent of the time if you call it.
- LLM function calling is probabilistic: the model decides whether to call a tool based on its understanding. It can make mistakes.
The essence: AI can dispatch helpers, but it decides whether and whom to dispatch.
This was a big leap - AI became an actor, not just a knowledge base.
But Function Calling had limits: it was ad-hoc and single-step.
You might have 10 tools. Each time the model can only dispatch one. If a task needs multiple steps, branching logic, or referencing docs, Function Calling alone is not enough.
Worse, each vendor used different formats. OpenAI, Google, Anthropic - all different. Developers had to write multiple adapters.
Layer 3: Paving the road - MCP (late 2024)
In November 2024, Anthropic introduced MCP (Model Context Protocol).
Think of MCP as a universal socket standard.
Previously, to integrate tools you had to:
- build an adapter for OpenAI
- build another for Claude
- build yet another for Google
Like having different plug shapes for every device.
MCP defines a standard for AI-tool communication. If you build a tool with MCP, any MCP-capable model can use it. Like USB-C for everything.
What MCP solves
- Standardization: build once, use across models
- Two-way communication: models can call tools, read files, query databases
- Extensibility: anyone can build MCP servers and share them
Example: you build an MCP server that queries internal docs. It works in Claude Code and other MCP-enabled tools, without rewriting integrations.
MCP gives AI and tools a shared language.
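That shared language is JSON-RPC 2.0. A minimal sketch of the two core requests, discovering tools and calling one, looks like this (the id values and the query_docs tool name are illustrative):

```python
import json

# MCP messages are JSON-RPC 2.0. A client first discovers what a server offers...
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# ...then invokes a tool by name. "query_docs" is a hypothetical tool name.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "query_docs",
        "arguments": {"query": "vacation policy"},
    },
}

# Every MCP-capable client and server speaks this same envelope,
# which is why one server works across models.
print(json.dumps(call_request, indent=2))
```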
But MCP focuses on connection, not workflow.
If a task requires:
- read docs
- write code
- run tests
- generate a report
MCP can call each tool, but the "what to do next" logic still sits with the model or hardcoded rules.
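A sketch makes the gap visible. Every step below could be an MCP tool call, but the ordering and the stop-on-failure rule live in client code, not in the protocol (all four steps are hypothetical stubs):

```python
# Hypothetical pipeline: MCP could serve each call below, but the sequence
# itself is hardcoded here, outside both the model and the protocol.
def read_docs() -> str:
    return "spec text"

def write_code(spec: str) -> str:
    return f"code for: {spec}"

def run_tests(code: str) -> bool:
    return True  # pretend the tests pass

def report(ok: bool) -> str:
    return "all tests passed" if ok else "tests failed"

def pipeline() -> str:
    spec = read_docs()
    code = write_code(spec)
    ok = run_tests(code)
    return report(ok)

print(pipeline())
```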
So can we package "how to do the work" itself?
Layer 4: The leap - Skills (Oct 2025)
On Oct 16, 2025, Anthropic released Claude Skills.
Think of Skills as a combination of a playbook and a toolbox.
- The playbook tells the model: when to use this skill, what steps to take, which tools to use.
- The toolbox provides scripts and resources for those steps.
Concretely, a Skill is a folder with three parts:
1. SKILL.md (the playbook)
Written in plain language. It tells the model:
- what the skill does
- when to use it
- how to use it
- cautions and constraints
2. Scripts (the toolbox)
Python, JavaScript, or other code the model can run when it needs to act.
3. Resources (reference materials)
Docs, templates, configs. The model can consult them during execution.
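A hypothetical skill folder, say pdf-report/ with scripts/ and resources/ subfolders, might carry a playbook like the one below. The YAML frontmatter with name and description follows Anthropic's documented SKILL.md convention; the pdf-report example itself is invented:

```markdown
---
name: pdf-report
description: Generate a filled PDF report from a template. Use when the user
  asks for a report delivered as a PDF.
---

## Steps
1. Read resources/template.pdf to see which fields exist.
2. Run scripts/fill_form.py with the user's data.
3. Return the generated file and list any fields left blank.

## Cautions
- Never overwrite the original template.
```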
Skills vs Function Calling
Function Calling is a single tool. Skills are a full solution.
Analogy:
- Function Calling gives you a hammer, screwdriver, and wrench.
- Skills give you an IKEA manual that includes steps, tools, and parts.
Progressive disclosure
Skills also rely on progressive disclosure.
Models have limited working memory (context window). If you load everything at once, the model gets overwhelmed.
Skills keep it lightweight: the model knows the playbook exists, and only opens it when needed.
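A toy sketch of that idea, with two hypothetical skills: only one-line summaries stay resident, and the full playbook is read only when a task matches.

```python
# Hypothetical registry: the model's context holds only one-line summaries.
SKILL_INDEX = {
    "pdf-report": "Generate a filled PDF report from a template.",
    "tax-review": "Check a tax filing against current rules.",
}

# Full playbooks live on disk; here a dict stands in for the SKILL.md files.
SKILL_BODIES = {
    "pdf-report": "1. Read the template...\n2. Run fill_form.py...",
    "tax-review": "1. Load the filing...\n2. Compare line items...",
}

def load_skill(task: str):
    """Open the full playbook only when a summary matches the task."""
    for name in SKILL_INDEX:
        if name.split("-")[0] in task.lower():
            return SKILL_BODIES[name]  # loaded only now, not up front
    return None

print(load_skill("make me a pdf report"))
```

The naive keyword match is just a placeholder; in practice the model itself decides, from the summaries, which playbook to open.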
Four layers: from code to agents
Put together:
+----------------------------------------+
| Skills (workflow level)                |
| playbook + toolbox + knowledge         |
+----------------------------------------+
                    ^
+----------------------------------------+
| MCP (protocol level)                   |
| standard AI-tool dialogue              |
+----------------------------------------+
                    ^
+----------------------------------------+
| Function Calling (API level)           |
| AI dispatches helpers for tasks        |
+----------------------------------------+
                    ^
+----------------------------------------+
| Functions (code level)                 |
| encapsulation, reuse, standardization  |
+----------------------------------------+
From bottom to top, abstraction increases:
- Functions: code-level encapsulation
- Function Calling: interface bridge for AI
- MCP: protocol standardization
- Skills: workflow packaging
Skills can include Function Calling and MCP. Those are components inside a Skill.
Like a cookbook that includes not only steps, but also why each step matters and how to fix mistakes.
The last two years: AI moves from talking to doing
Tell the story this way:
Before 2022: AI as a talking advisor
It answered based on training data. It knew things, but could not act.
2023: AI gains dispatch power
Function Calling let AI call external tools. It could finally do things, not just talk.
But it could only dispatch one helper at a time. Complex workflows were still hard.
Late 2024: unified dispatch rules
MCP standardized tool communication. No more per-vendor adapters.
But the workflow problem remained: multi-step orchestration was still fragile.
2025: AI gains a project playbook
Skills packaged the workflow itself. AI could execute an entire process, step by step.
What is next?
You guessed it: Agents.
Agents are the autonomous version of Skills:
- Skills: you tell AI what to do, it follows the playbook
- Agents: you give a goal, it plans, executes, and adapts
If Skills are obedient employees, Agents are managers with autonomy.
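The difference can be sketched as a loop: instead of following a fixed playbook, a hypothetical agent re-plans after each observation until the goal is met (every step here is a stub).

```python
# Toy agent loop: plan -> act -> observe -> adapt, with hypothetical stubs.
def agent(goal: str, max_steps: int = 5):
    log = []
    done = False
    step = 0
    while not done and step < max_steps:
        action = f"step {step} toward: {goal}"  # plan the next action
        log.append(action)                      # act (stubbed out here)
        done = step >= 2                        # observe: pretend the goal is met
        step += 1                               # adapt and continue
    return log

print(agent("ship the quarterly report"))
```

The point of the sketch is the shape, not the stubs: a Skill's steps are written down in advance, while an agent chooses each next step at runtime.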
Summary: how to understand new terms
When you see a new term, ask:
1. Which layer is it on?
   - code? (functions, scripts)
   - interface? (Function Calling)
   - protocol? (MCP)
   - workflow? (Skills)
   - agent? (Agents)
2. What problem does it solve?
   - encapsulation and reuse? (functions)
   - enabling action? (Function Calling)
   - standardizing connections? (MCP)
   - packaging workflows? (Skills)
   - autonomous decisions? (Agents)
3. What is its limitation?
   - functions: only in code
   - Function Calling: ad-hoc, single-step
   - MCP: not a workflow solution
   - Skills: requires triggers
   - Agents: risks of autonomy
Once you have that frame, new terms will not scare you.
These technologies are not replacements. They are layers. Each layer solves the limits of the last, and gets AI closer to being a true productivity tool.
Final note
The AI field will keep producing new terms. Do not get distracted by labels.
From functions to Function Calling, from MCP to Skills, the evolution is clear:
AI is moving from a talking advisor to a manager who can lead work.
This road is just getting started.
Maybe next year you will see another wave of terms. Remember the layers: code, interface, protocol, workflow, agent.
No matter how fancy the term is, it fits somewhere in that stack.
Further reading:
Claude Code docs: https://docs.anthropic.com/claude-code
MCP spec: https://modelcontextprotocol.io
Claude Skills guide: https://docs.anthropic.com/claude-code/skills
Welcome to follow the WeChat official account FishTech Notes to exchange tips and experience.