
The Neuron (Uninitiated Series)

The Humble Beginning: McCulloch–Pitts Neuron #

Back in 1943, two researchers — Warren McCulloch and Walter Pitts — created one of the simplest models of a neuron. They weren’t trying to build modern AI; they were trying to understand how the brain could compute anything at all.

Their idea was beautifully simple:

  • A neuron takes inputs (like little signals)
  • It adds* them together
  • It compares the total to a threshold
  • If the total reaches or crosses the threshold → Output = 1
  • Otherwise → Output = 0
* In the basic MCP model, inputs are treated as 0/1 values, so adding them is correct. Later models introduce weights.

Let’s walk through the possible input combinations in a table below:

Let’s clarify the inputs

| | |
|---|---|
| Inputs | Three inputs (x1, x2, x3) |
| Possible values | Yes / No (0 or 1) |
| Threshold | 3 |

Let’s look at the output

| x1 | x2 | x3 | sum (s = x1 + x2 + x3) | threshold (t) | check (s ≥ t) | output |
|----|----|----|------------------------|---------------|---------------|--------|
| 0 | 0 | 0 | 0 + 0 + 0 = 0 | 3 | 0 ≥ 3? ❌ | 0 |
| 0 | 1 | 0 | 0 + 1 + 0 = 1 | 3 | 1 ≥ 3? ❌ | 0 |
| 0 | 1 | 1 | 0 + 1 + 1 = 2 | 3 | 2 ≥ 3? ❌ | 0 |
| 1 | 1 | 1 | 1 + 1 + 1 = 3 | 3 | 3 ≥ 3? ✔️ | 1 |
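The sum-and-threshold rule above can be sketched in a few lines of Python (the function name `mcp_neuron` is my own shorthand, not standard terminology):

```python
def mcp_neuron(inputs, threshold):
    """McCulloch-Pitts neuron: fire (1) if the sum of 0/1 inputs reaches the threshold."""
    return 1 if sum(inputs) >= threshold else 0

# Enumerate all eight combinations of three binary inputs, threshold = 3
for x1 in (0, 1):
    for x2 in (0, 1):
        for x3 in (0, 1):
            print(x1, x2, x3, "->", mcp_neuron([x1, x2, x3], threshold=3))
```

Running this reproduces the table: only the all-ones input reaches the threshold.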

[Image 01: The humble neuron showing a logic operation]

What do you think this is? Yes, you’re right: this is the logical AND operation. The trick is in the threshold and in the check.

💡 Think about a smarter way to turn this into an OR operation (the clue is in the threshold and in the check).

But there is a problem: the neuron couldn’t learn. A human had to choose the threshold. Everything was hand-crafted (sum and activation). That eventually led to the next big idea: learning from data.

But before we go there, let’s make this personal.


A Human Analogy: Decision Making in a Room Full of People #

Imagine you walk into a room where 10 people are waiting to advise you:

  • 🧑‍🤝‍🧑 3 family members
  • 👭 2 close friends
  • 🔬 5 AI experts

You ask a simple but important question:

“Do I need to learn deeper AI concepts to become a power user?”
Immediately, the room comes alive. Each person gives an answer — some say yes, some say no, some say “depends.” But here’s the important part:

You automatically process these opinions differently. #

Even before you consciously think:

  • Family members know your responsibilities, so their words hit deeper.
  • Friends know your personality, so they speak your language.
  • Experts know the technical reality, so they talk with confidence.

Each person’s voice carries a different influence on your mind.

When family speaks, you may think:

“They know me best.”
→ So their voice feels heavier.

When friends speak, you think:

“They get my vibe but not always the details.”
→ Their voice has moderate influence.

When an expert speaks, you think:

“Hmm… experts talk logically, but do they know my life?”
→ Sometimes their voice stabilizes you, sometimes it scares you 😜
→ So their influence could even be negative — meaning their YES pushes you toward NO.

Let’s say

“Even before anyone talks, YES.” This is your bias (your prejudice).

This emotional vetting is exactly what neural networks do mathematically.


Enter Weights and Biases #

In your real life (weights):

  • Some opinions count more
  • Some count less
  • Some count negatively

Your prejudice (bias)

  • Your bias value shifts the final decision point, just like the threshold in an MCP neuron.

In modern AI:

  • Inputs are multiplied by weights
  • A bias shifts the decision point (threshold)
  • Learning = adjusting weights and bias based on mistakes

A machine does the same thing — just with numbers.
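A minimal sketch of that “same thing” in Python (the function name `neuron` and the convention of folding the threshold into the bias are my own choices for illustration):

```python
def neuron(inputs, weights, bias):
    """Weighted neuron: fire (1) if the weighted sum plus bias is non-negative."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total >= 0 else 0

# Setting every weight to 1 and the bias to minus the threshold
# recovers the MCP neuron from earlier (three inputs, threshold 3):
print(neuron([1, 1, 1], weights=[1, 1, 1], bias=-3))  # -> 1
print(neuron([0, 1, 1], weights=[1, 1, 1], bias=-3))  # -> 0
```

Learning, in this picture, is nothing more than nudging `weights` and `bias` whenever the output is wrong.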


Why This Matters #

Once you understand this:

  • You understand how AI represents decisions
  • You understand how AI starts simple and becomes powerful
  • You see why learning is possible — weights change
  • You see why early neurons didn’t learn — weights were fixed

This foundation will help you make sense of everything that comes next: perceptrons, activation functions, deep learning, transformers — all of them build on this idea.


Mapping Human Intuition to Neural Network Concepts #

🏋️‍♂️ Let’s Put Numbers to Your Room #

Because this is where the magic clicks.

Let’s say you (the “neuron”) respond like this:

  • Family influence = 2
    (heavy impact)

  • Friends influence = 1
    (moderate impact)

  • Experts influence = –1
    (their opinions sometimes drag your decision backward 😆)

You decide to keep your bias = 3, which represents:

“Even before anyone talks, I’m already slightly leaning toward YES.”
(Your personal push toward growth, curiosity, ambition.)

| Human Scenario | AI Concept |
|---|---|
| **People’s opinions**: each person’s opinion | Inputs (F = Family, R = Friends, E = Experts) |
| Possible values: Yes / No | 0 or 1 |
| **Your trust**: how much you trust them (influence) | Weights (W): wf = 2, wr = 1, we = –1 |
| **Your bias**: your natural tendency even before listening (“Even before anyone talks, I’m already slightly leaning toward YES.”) | Bias (b): b = 3 |
| The rule you use to decide “yes/no” | Threshold (t): t = 4 |
| Final decision | Output (f) |

🧮 Let’s look at the computation #

Calculation #

Formula: y = wf · F + wr · R + we · E + b

Table #

| Family (F) | Friends (R) | Experts (E) | Calculation (wf · F + wr · R + we · E + b) | Final Value (f) | f ≥ t? | Output |
|---|---|---|---|---|---|---|
| 0 | 0 | 0 | (2×0) + (1×0) + (–1×0) + 3 | 3 | 3 ≥ 4? ❌ | 0 |
| 0 | 0 | 1 | (2×0) + (1×0) + (–1×1) + 3 | 2 | 2 ≥ 4? ❌ | 0 |
| 0 | 1 | 0 | (2×0) + (1×1) + (–1×0) + 3 | 4 | 4 ≥ 4? ✔️ | 1 |
| 0 | 1 | 1 | (2×0) + (1×1) + (–1×1) + 3 | 3 | 3 ≥ 4? ❌ | 0 |
| 1 | 0 | 0 | (2×1) + (1×0) + (–1×0) + 3 | 5 | 5 ≥ 4? ✔️ | 1 |
| 1 | 0 | 1 | (2×1) + (1×0) + (–1×1) + 3 | 4 | 4 ≥ 4? ✔️ | 1 |
| 1 | 1 | 0 | (2×1) + (1×1) + (–1×0) + 3 | 6 | 6 ≥ 4? ✔️ | 1 |
| 1 | 1 | 1 | (2×1) + (1×1) + (–1×1) + 3 | 5 | 5 ≥ 4? ✔️ | 1 |
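The formula and the table above can be reproduced in a few lines of Python (the function name `room_decision` is mine; the weights, bias, and threshold are the ones chosen in the scenario):

```python
def room_decision(F, R, E, wf=2, wr=1, we=-1, b=3, t=4):
    """The 'room full of people' neuron: weighted opinions plus bias, checked against a threshold."""
    y = wf * F + wr * R + we * E + b
    return 1 if y >= t else 0

# Enumerate every combination of opinions, as in the table
for F in (0, 1):
    for R in (0, 1):
        for E in (0, 1):
            print(F, R, E, "->", room_decision(F, R, E))
```

Try tweaking `we` to 0 or `b` to 0 and rerunning: the decision boundary moves, which is exactly what learning will later do automatically.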

What this shows (beautiful teaching moment) #

  • Even one friend’s encouragement (0,1,0) can push you to YES
  • Expert disapproval (E=1) lowers the total
  • Family influence (F=1) is the strongest because its weight is 2
  • Bias (3) shifts everything upward
  • Threshold forces a clear YES/NO decision

This is literally how a McCulloch–Pitts neuron works.

inputs → weighted sum + bias → activation (threshold check) → output #

[Image 02: The humble neuron showing a more complex operation]

What Comes Next? #

In the next post, we’ll move from:
“A neuron that cannot learn” → “A neuron that learns from data.”

That neuron is called the Perceptron, and it’s the next major step in the history of AI.


A Final Note About the “Uninitiated” #

If you’re wondering about the word uninitiated, here’s what I mean:

Someone who has not yet been introduced to a subject —
but is ready and curious to begin.

This series is my way of holding the door open for you.

This discussion is part of the AI For the Uninitiated series.