Machine Learning is NOT Artificial Intelligence

Oct 17, 2022 | Jagannath Rajagopal | 4 min read


When building an ML model, you may want to think about whether it’s for something specific or general. In NLP, the task may be specific: tagging emails within your company. On the other hand, an ML API for natural-language tagging is usually pre-trained on a large variety of text from different contexts so it can generate a range of tags across many tasks. If you want an analogy, the specific model is like a custom-made suit, whereas the general one is something you accessorize off the shelf.
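To make that contrast concrete, here is a minimal sketch. The email samples and tag names are invented for illustration, and it assumes scikit-learn and Hugging Face’s transformers are installed; the point is only the shape of the two approaches, not a production setup.

```python
# "Custom suit": a tagger trained only on your company's emails.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Please approve the Q3 budget by Friday.",
    "The staging server is down again.",
    "Reminder: all-hands meeting at 3pm.",
]
tags = ["finance", "it-support", "internal-comms"]

specific_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
specific_model.fit(emails, tags)
print(specific_model.predict(["Budget sign-off needed for the new vendor."]))

# "Off the shelf": a general pre-trained model that has never seen your
# emails, scored against whatever candidate tags you hand it at call time.
from transformers import pipeline

general_model = pipeline("zero-shot-classification")
result = general_model(
    "Budget sign-off needed for the new vendor.",
    candidate_labels=["finance", "sports", "cooking", "it-support"],
)
print(result["labels"][0])
```

The specific model will beat the general one on your emails but is useless elsewhere; the general one is serviceable almost anywhere but tailored to nothing.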


Regardless of how skilled either one is, i.e. how well it performs, neither may be considered intelligent.

Here lies the crux of the matter: most practical use cases of Machine Learning develop agents that are skilled at either a narrow task or a relatively wide set of highly similar tasks. The keyword here is skill. Skill is not intelligence, no matter how ubiquitously ML is called AI.


Broadly speaking, there is a step in between on the journey to intelligence. In human terms, this is ability. Can a singing model play the cello? Can it render a beautiful Mozart piece on the piano? Can it stand in for a sick band member on rhythm guitar? Can it support an existing band by adding a mix of music from different instruments as needed? Can a Western classical violin model play the mridangam?

A popular test in the AI mindspace addresses a simpler scenario. The Wozniak Coffee Test measures this: can an agent walk into any home, office, or other location and make a cup of coffee? For an artificial agent, even this is a complicated task: mixing ingredients in the right amounts, preparing the brew, selecting add-ons, and so on. The number of ways you can order a coffee at a Starbucks is something like 80,000. The possibilities are endless.


While there is a hierarchy involved, you can see that ability is broader than skill.

I’ve created a LOT of resources on topics like these. Here’s my course on Design Thinking for Hero Methods. Here’s my YouTube channel with previews of course videos. Here’s my website; navigate through the courses to find free previews & PDFs.

- - -

Intelligence, at least from a human perspective, refers to the capacity to elicit insights from what we already know and to gain new abilities. Can an artistic agent bake a pizza? Can it drive a car? Can it perform an eye exam?


While animals demonstrate limited generalization, and hence intelligence, the best example is of course people.

One of the ways to cheat at intelligence is to “buy” ability. Build a deeper neural net. Encode more “common sense” or “domain” knowledge into symbolic rules.

With the former, since the human brain is itself a neural net, why not build more and more complex architectures until they start to behave like their biological counterparts? This is facile: biology took its own path to evolve human brains, and it took a couple of billion years. There is a lot of specialization in the human brain that arose from the series of accidents we call evolution. So why not try the same thing in a lab and create the conditions to scale a neural network up to brain-level cognition? If you do, you quickly run into a bottleneck: not performance, not memory, but data. There is only so much data available on the planet, and it might just be that all of it would still be insufficient for the task.
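To put rough numbers on that bottleneck, here is a back-of-the-envelope sketch. Every figure in it is an assumption of mine, not a claim from this article: a Chinchilla-style heuristic of roughly 20 training tokens per parameter, a “brain-scale” network with about as many parameters as the brain has synapses (on the order of 100 trillion), and a generous guess of around 10 trillion tokens of usable text in the world.

```python
# Back-of-the-envelope sketch of the data bottleneck; all figures are
# assumptions for illustration, not measurements.

params_brain_scale = 100e12   # ~1e14 parameters, one per synapse (rough)
tokens_per_param = 20         # Chinchilla-style compute-optimal heuristic
tokens_needed = params_brain_scale * tokens_per_param

tokens_available = 10e12      # generous guess at usable text on the planet

print(f"Tokens needed:    {tokens_needed:.1e}")     # ~2.0e+15
print(f"Tokens available: {tokens_available:.1e}")  # ~1.0e+13
print(f"Shortfall: {tokens_needed / tokens_available:.0f}x")  # ~200x
```

Under these (debatable) assumptions, the required training data outstrips plausible supply by a couple of orders of magnitude, which is exactly the bottleneck described above.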


With the latter, as with CYC, it’s possible to reach near-human-level performance. It took them 34 years or so to encode human common sense into logical rules over symbols, along with a language to express that knowledge. CYC is estimated to contain approximately 10 million statements covering things like “if it rains, you get wet”. Here the bottleneck is clearly the representation. Assuming one spent the time to duplicate what CYC did (CYC is not available open source; you can of course get a license), it is only a little different from writing 100 million if-then statements that do the same thing. The difference is that the representation has smarts built into it: since it’s an Ontology, it can perform sophisticated logical reasoning.
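To see why the representation matters, here is a toy sketch, nothing like CYC’s actual language (CycL); the facts and the tiny is-a hierarchy are invented for the example. One rule over symbols, plus a small ontology, derives conclusions that were never explicitly written down.

```python
# A toy symbolic reasoner; illustrative only, not CYC or CycL.
facts = {("rain", "falls-on", "person")}
is_a = {"commuter": "person", "cyclist": "commuter"}  # tiny ontology

def generalize(kind):
    """Walk up the is-a hierarchy: a cyclist is a commuter is a person."""
    while kind is not None:
        yield kind
        kind = is_a.get(kind)

def gets_wet(kind):
    # One rule: if rain falls on X, or on anything X is a kind of, X gets wet.
    return any(("rain", "falls-on", k) in facts for k in generalize(kind))

print(gets_wet("cyclist"))   # True, derived via cyclist -> commuter -> person
print(gets_wet("umbrella"))  # False, no supporting fact
```

A flat if-then version would need one statement per kind of thing; here a single rule covers everything under “person” in the hierarchy, which is the sense in which the representation has smarts built in.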


No; for it to be considered intelligent, what you need in the CYC scenario is an agent that can elicit those 10 million statements on its own, from its own experience, based on some priors and a learning curriculum. That’s what the human mind does. We are born with a little prior knowledge, and most of what we need we learn from our own experience.

And because we can do that, and each of us does it differently, we are considered smart or dumb based on how well we do it. This is intelligence. You may end up with the same variation in intelligent machines, owing to different learning contexts and the randomness imbued in the learning process.

In other words, intelligence is an agent’s capacity, given some prior knowledge and a curriculum, to develop abilities — not skills — on its own. In doing so, it creates rich representations of the world — again by itself.


As you can see, what we have today in the skillful world of Machine Learning and Data Science, or in other Hero Methods like Optimization and Knowledge Representation, is only akin to the first layer of the onion. We are very shallow right now.

- - -

At Kado, we too are guilty of relating ML to AI in our content, since the analogy is everywhere and it helps us connect with our audience. But we are under no illusion that ML represents AI in any way. To be generous, if you want to refer to “skill” as “task intelligence”, you could consider an ML model to be intelligent, but only in that context.
