The quantified mind - humans vs AI

There's a really useful exercise any startup can do in the early days - work through what $1m, $10m or $100m in annual revenue actually means on a daily basis: how many products need to be shipped, customer interactions completed, and sales transactions performed in a single day? Many entrepreneurs never think this through in detail, and as a result they make wild growth predictions in investor pitches and immediately hit the credibility barrier.

So in a similar vein, I recently tried quantifying the human mind against the specs of AI large language models.

1. Neural connectivity

According to Wikipedia there are 86 billion neurons in the average human brain. Each neuron forms between 1,000 and 10,000 connections with other neurons (synaptic connections), so our brains could be viewed as a model with up to 860 trillion connections. In contrast, GPT-3 is reported to have 175 billion parameters (the closest analogue in a neural network to synaptic connections). The parameter count for GPT-4o is not publicly available but is rumoured to be around 1.7 trillion (i.e. roughly 10 times more than GPT-3). If that growth rate continued (and the rumour is accurate), we'd be about 3 GPT versions away from the same number of connections as exist in a human brain.
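To make the comparison concrete, here's a minimal Python sketch of that arithmetic. The figures are the rough estimates quoted above (86 billion neurons, up to 10,000 synapses each, a rumoured 1.7 trillion parameters, an assumed 10x growth per GPT version) - estimates, not measurements:

```python
import math

# Back-of-the-envelope comparison: human synapses vs model parameters.
# All values are rough estimates quoted in the article, not measurements.
NEURONS = 86e9                  # neurons in an average human brain
SYNAPSES_PER_NEURON = 10_000    # upper estimate of connections per neuron
GPT4_PARAMS_RUMOURED = 1.7e12   # rumoured parameter count (unconfirmed)
GROWTH_PER_VERSION = 10         # assumed 10x parameters per GPT version

brain_connections = NEURONS * SYNAPSES_PER_NEURON
print(f"Brain connections: {brain_connections:.1e}")          # 8.6e+14

# How many assumed 10x generations until a model matches the brain?
versions_to_parity = math.log(brain_connections / GPT4_PARAMS_RUMOURED,
                              GROWTH_PER_VERSION)
print(f"GPT versions to parity: {versions_to_parity:.1f}")    # ~2.7, i.e. about 3
```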

2. Processing ability

It turns out that humans reportedly have between 60,000 and 80,000 thoughts per day. We can read about 200-300 words per minute (432,000 words max if we read non-stop for 24 hours). If we assume that a picture conveys 1,000 times the information of a word, and that we could process 432,000 pictures in 24 hours, that would give us a maximum processing capacity of 432 million word-equivalents per day. Assuming an average word length of 4.7 characters in English (let's round up to 5 characters per word) and expressing that in bytes of data gives a processing capacity of a little over 2GB per day. This thought experiment completely ignores the memory retention issue of humans, of course.
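For anyone who wants to check the arithmetic, here's the same estimate as a few lines of Python - the 1,000-words-per-picture factor and the 5-bytes-per-word rounding are assumptions carried over from the paragraph above:

```python
# Reproducing the daily human "processing capacity" estimate above.
WORDS_PER_MINUTE = 300          # upper end of human reading speed
MINUTES_PER_DAY = 24 * 60
PICTURE_FACTOR = 1_000          # assumed: one picture conveys 1,000 words
BYTES_PER_WORD = 5              # avg English word ~4.7 chars, rounded up

words_per_day = WORDS_PER_MINUTE * MINUTES_PER_DAY            # 432,000
word_equivalents = words_per_day * PICTURE_FACTOR             # 432 million
bytes_per_day = word_equivalents * BYTES_PER_WORD
print(f"{bytes_per_day / 1e9:.2f} GB per day")                # 2.16 GB
```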

The context window of GPT-4 is reported to be 128K tokens - roughly 512KB of English text if we assume about 4 bytes per token. How many queries using a full context window could a single instance of GPT-4 process in a day? Even at one query per second (which seems low), GPT-4 would get through 512KB x 86,400 seconds ≈ 44GB per day - about 20 times the human estimate above.
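And the matching sketch for the GPT-4 side. The 4-bytes-per-token figure is a common rule of thumb for English text, and the one-query-per-second rate is an assumption, not a measured number:

```python
# GPT-4 daily throughput under the assumptions above: a 128K-token
# context window, fully used on every query, at one query per second.
CONTEXT_TOKENS = 128_000
BYTES_PER_TOKEN = 4             # assumed: ~4 characters of English per token
QUERIES_PER_SECOND = 1          # assumed, and probably conservative
SECONDS_PER_DAY = 86_400

bytes_per_day = (CONTEXT_TOKENS * BYTES_PER_TOKEN
                 * QUERIES_PER_SECOND * SECONDS_PER_DAY)
print(f"{bytes_per_day / 1e9:.1f} GB per day")                # ~44.2 GB
```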

So if total "mind power" can be approximated by the level of complexity (number of neural connections) and processing capacity (amount of information processed), AI is getting close on complexity and has arguably already passed us on raw throughput.

Mind power and total available intelligence

An interesting hypothesis in this context is the relationship between "mind power" (simplistically defined by neural connectivity and processing capacity) and the emergence of intelligence. We know that intelligence cannot be defined by a single dimension - there are many forms of it (e.g. as categorised by Howard Gardner’s theory of multiple intelligences).

Humans have biological limits to the amount of mind power we can develop (e.g. our eyes/brains can't process more than a few images per second discretely - at around 25 images per second they fuse into a "video" in our mind and we miss information contained in individual images). But these limits don't apply to AI.

Would a sufficiently large model trained on all available information and sensory input in the world automatically lead to maximum scores across all dimensions of intelligence?

Most humans probably have a mixed bag of scores across the various dimensions. Observably, the people with the highest scores in a single dimension (e.g. IQ) don't often have the biggest impact on the world (regardless of whether that impact is judged positive or negative). Most highly influential people likely need a good mix of forms of intelligence to make a big dent in the world. If an artificial model existed with higher levels of intelligence across all dimensions than is attainable by humans, it seems possible that such a model could create significant impact in the world (again, good or bad).

A model with maximum scores across all forms of intelligence would conceivably provide the most convincing, kind, smart, funny, creative, quirky, etc. interactions and suggestions to achieve a given goal. In this scenario, how do we avoid being at the whim of a single super-convincing and super-smart AI? The same way the natural world does: lots of AI models with different architectures, input/training data and objectives.

Balance and survival through diversity

The question of whether future AI models will inherently be "good" or "bad" is pointless. The answer is they will be both, mostly because the currently most intelligent lifeform - humans - is neither inherently good nor bad either. "Good" is a highly loaded concept because it depends on perspective (i.e. good for whom, over what timeframe, etc.), and good intentions don't always lead to good outcomes. No technology we've ever discovered was in and of itself beneficial or detrimental, and the same will apply to AI.

However - there is an important objective worth striving for, and that is overall balance. Preventing any one AI model from dominating all resources or interactions on the planet is crucial. Diversity is a fundamental concept observable everywhere in nature, and it exists because it provides a mechanism that maintains balance and maximises the richness of life. Balance in nature is achieved through a mix of constructive and destructive behaviour in ecosystems - we need the same in the AI landscape and AI ecosystem. Hackers and threat actors are already using AI today, so the destructive side of the ecosystem seems well covered. Let's make sure there's enough on the constructive side as well.

Our main goal over the next few years in the emerging AI ecosystem must be to strive for maximum diversity in models, approaches, data, tasks, system goals and objectives, ownership of AI models, etc.

This way, various AI models can argue among themselves about the best way of doing things, potentially fight each other, and maintain some overall balance that avoids domination by a single model. We live in interesting times.

Comments

Richard McCulloch

Managing Director KM Medical Ltd

Very Interesting Times!

Chris Karamea Insley

Chair 🌏 Leader 🌏 and, Influencer 🌏 Always Innovating and, always Delivering…

Useful analysis Stefan! The greatest use I am finding when posing issues or questions to ChatGPT is its ability to rapidly scan the available knowledge and information sets and generate a response in literally 'seconds'. The world is moving at such a fast pace, this response time is invaluable versus the days, weeks and months it would take to do the research manually - and the $1,000s or $10,000s it would cost to pay someone to do it. Again, useful analysis, and intuitively I expect 'you are right' - AI has a 'way to go yet' to match the human brain! But it is already making a massive contribution to day-to-day decision-making in the rapidly changing times we live in. Kind regards, Chris

Jonathan Usher

🔹 Chief Product & Marketing Officer | Prev. Managing Director @ Datacom & Global Product and Industry Lead @ Microsoft

Such an interesting topic! Another parameter to consider - energy-efficiency. The human brain requires approximately 20 watts equivalent power. Average power requirement per inference for contemporary foundation models would be, erm, somewhat higher. Orders of magnitude higher...

Barron Braden

Sales Leader | Sales & Strategy | SaaS

Humans vs AI... 😁
