What Is The Next Level Of AI Technology?

Artificial Intelligence (AI) has permeated every aspect of our lives, from the way we communicate to how we work, shop, play, and do business. AI tools are everywhere we look.

AI is already delivering tangible business benefits across just about every industry you can name, but it's clear we're only just getting started. The technology available today will no doubt look as antiquated as the pocket calculator in a decade's time. Computers will get smarter and quicker, and will increasingly become capable of tasks that traditionally could only be carried out by humans, such as making complex decisions or engaging in creative thought. Here's a rundown of some possibilities that might seem like science fiction today but could be part of everyday reality sooner than you think!

The road to general (strong) AI

Most AI applications today are classified as "narrow" or "weak" AI, meaning that, while in some ways they meet general criteria we have for intelligence – most prominently an ability to learn – they usually only carry out the specific task they are designed for. Truly intelligent (let's say, "naturally intelligent") entities are not "designed" for any task but have evolved to carry out whatever tasks they need to be able to do. The search for "general AI" is concerned with developing smart machines that can act in a similar way.

It helps to think about this in terms of how it would improve the AI applications we have available to us today. Amazon’s Alexa, for example, uses AI to understand what we are saying. That’s more or less the extent of its “smartness”, though – once it understands our instructions, it carries them out in a wholly programmatic way.

Moving towards more generalized applications of AI, home assistant devices will gain the ability to "think" more proactively. As natural language processing (NLP) technology improves, they will hold far more active and flowing conversations, and they will get better at predicting what we need or how we will act, and at taking action to accommodate that. This could mean anything from ordering shopping to monitoring our health, scheduling maintenance for our car when it's needed, or calling the police if an intruder is detected in our home. Crucially, a device would do all of this because it calculates that it's the best thing to do in a given situation, rather than because it has been explicitly told to do so.

A quantum-powered future

Compute power is the engine of AI, and the big leaps forward we've seen in the last ten years have largely been down to the growing amount of processing grunt we have had available. In particular, research at the start of the last decade into the use of graphics processing units (GPUs) has directly led to many of the deep learning techniques and applications that are so useful today.

Quantum computing, along with other next-level processing capabilities such as biological and neuromorphic computing, is likely to unlock even more possibilities.

Quantum computing is certainly not an easy concept to explain in a bite-sized segment of text like this one, but basically, it works by harnessing the strange and somewhat baffling (if you don't have a Ph.D. in physics!) ability of sub-atomic particles to exist in more than one state at the same time. For the purpose of this prediction, suffice to say that quantum computers are theoretically capable of completing certain specialized calculations up to 100 trillion times faster than today's fastest classical computers.
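To make the "more than one state at the same time" idea a little more concrete, here is a minimal sketch (in plain Python, using no quantum libraries) of how a single qubit is modeled mathematically: as a pair of amplitudes rather than a single 0 or 1. The function names `hadamard` and `probabilities` are illustrative, not drawn from any particular framework.

```python
import math

# A classical bit is 0 or 1; a qubit's state is a pair of amplitudes
# (a, b) with |a|^2 + |b|^2 = 1. |a|^2 is the probability of measuring
# 0 and |b|^2 the probability of measuring 1 - it is "both at once"
# until measured.

def hadamard(state):
    """Apply a Hadamard gate, putting a basis state into equal superposition."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    """Probabilities of measuring 0 and 1 from a qubit state."""
    a, b = state
    return (abs(a) ** 2, abs(b) ** 2)

zero = (1.0, 0.0)            # the |0> basis state
superposed = hadamard(zero)  # an equal superposition of 0 and 1
p0, p1 = probabilities(superposed)
print(p0, p1)  # each outcome is equally likely: 0.5 and 0.5
```

The power of quantum computing comes from the fact that n qubits hold 2^n such amplitudes simultaneously, which is why simulating even modest quantum systems overwhelms classical machines.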

To continually evolve and become smarter, machine learning models will inevitably become larger. One of today's most sophisticated "generative" AI models – OpenAI's GPT-3 – already contains over 175 billion parameters. This will require increasing amounts of processing power. Additionally, more processing power means we will be able to create larger amounts of "synthetic" data for training purposes, reducing the need to collect real data to feed into algorithms for many applications.
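A quick back-of-the-envelope calculation shows why parameter counts on this scale translate directly into hardware demands. This is an illustrative estimate of raw weight storage only, not an official figure, and it ignores activations, optimizer state, and other overheads that push real requirements higher.

```python
# Rough storage needed just to hold a model's weights in memory.
GPT3_PARAMS = 175_000_000_000  # the widely reported GPT-3 parameter count

def model_size_gb(num_params, bytes_per_param):
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return num_params * bytes_per_param / 1e9

print(model_size_gb(GPT3_PARAMS, 4))  # 32-bit floats: 700.0 GB
print(model_size_gb(GPT3_PARAMS, 2))  # 16-bit floats: 350.0 GB
```

Even at reduced 16-bit precision, the weights alone dwarf the memory of any single accelerator available today, which is why such models are split across many chips and why ever more processing power is needed as models grow.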

For a good example, think of the data needed to train a self-driving car. The algorithms need exposure to many hundreds of hours of driving experience in order to learn how to navigate safely on roads. More processing power means more accurate and realistic simulations can be built, so more and more of this learning can be carried out in simulated environments. Not only is this likely to be cheaper and safer, but it can also be carried out at a vastly accelerated rate – thousands of real-time driving hours could be compressed into a far shorter duration of computer run-time.

While truly useful quantum computing with applications outside of specialized academic research may still be a way off, other technologies like neuromorphic computing will create waves in the meantime. These aim to mimic the "elastic" capabilities of the human brain to adapt to processing new forms of information. One example is Intel's recently unveiled Loihi processing chip, packed with more than two billion transistors, which in one application was able to identify ten different types of hazardous material by smell alone – more quickly and accurately than trained sniffer dogs.

Creative AI

These days we can see art, music, poetry, and even computer code being created by AI. Much of this has been made possible by the ongoing development of “generative” AI (including the GPT-3 model mentioned above). This is a term used to describe AI when its function is to create new data rather than simply analyzing and understanding existing data.

With generative AI, analyzing and understanding is still the first step of the process. It then takes what it has learned and uses it to build further examples of the models that it has studied. The most impressive results available today are usually obtained when this is done via an “adversarial” model – effectively, two AIs are pitted against each other, with one tasked with creating something based on existing data and the other tasked with finding flaws in the new creation. When these flaws are discovered, the creative network (known as the “generator”) learns from its mistakes and eventually becomes capable of creating data that its opponent (the “discriminator” network) finds increasingly hard to distinguish from the existing data.
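The generator-versus-discriminator loop described above can be sketched in a few lines. This is a deliberately toy illustration, not a real GAN: here the "discriminator" is a fixed scoring function (how far a sample sits from the real data's average), and only the generator learns, via a crude gradient-free nudge. A real GAN trains both sides as neural networks with gradient descent, but the back-and-forth structure – generate, get scored, adjust to fool the critic – is the same.

```python
import random

random.seed(0)
# "Real" data: 1,000 samples drawn from a distribution centred at 5.0.
real_data = [random.gauss(5.0, 1.0) for _ in range(1000)]
real_mean = sum(real_data) / len(real_data)

def discriminator(x):
    """Higher score = looks more fake (farther from the real data)."""
    return abs(x - real_mean)

gen_param = 0.0  # the generator's single learnable parameter
step = 0.05

for _ in range(500):
    score = discriminator(gen_param)
    # Generator update: try a nudge in each direction and keep whichever
    # fools the discriminator better.
    if discriminator(gen_param + step) < score:
        gen_param += step
    elif discriminator(gen_param - step) < score:
        gen_param -= step

print(round(gen_param, 2))  # converges close to the real mean (~5.0)
```

Over the training loop, the generator's output drifts until the discriminator can barely distinguish it from real data – the same dynamic that, at vastly greater scale, lets adversarial models produce convincing images, audio, and text.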

While it’s amazing in its own right that computers can create, the quantifiable value lies in its ability to create synthetic data that can be used to train other machines. For example, facial recognition algorithms rely on having access to a huge library of pictures of people’s faces in order to learn how to recognize individuals, much the same way as self-driving cars need a lot of driving experience to learn how to drive. This ability to create synthetic data will lead us into an era where the headline-grabbing applications of AI aren’t simply those with the “wow” factor – machines doing things we simply haven't seen them do before. Instead, the focus will shift to the truly valuable and useful applications of AI that work towards solving major real-world challenges.    

Ethical and accountable AI

This leads on to the final, but in some ways most significant, way in which AI will evolve. At the moment, the inner workings of much of today's AI are obscured. Sometimes they're locked within proprietary algorithms that are treated as closely guarded corporate secrets, and sometimes they're simply too complex for most of us to understand.

Either way, this leads to a significant problem – we’re increasingly putting important decisions that could affect people’s lives in the hands of machines that we don’t fully understand. This has huge implications for trust – and if people don’t trust AI, they’re unlikely to feel great about the idea of letting it make decisions, even when the data clearly shows it’s likely to make the right ones.

If AI is going to live up to its potential, then the smart machines of the near future will have to be more transparent, explainable, and accountable than the ones we're familiar with now. We are seeing steps being taken to ensure this is the case, with the establishment of organizations like the Partnership on AI, OpenAI, and the UK Government's Alan Turing Institute. Of course, some people will see the unlimited opportunities that AI offers for turning a profit and be tempted to skirt around or simply ignore guidelines and recommendations put out by such groups. So we’re also likely to see legislative and regulatory changes being put in place to minimize the potential for damage that this could cause. All of these solutions, working together, can help ensure AI lives up to its potential.


For more on the topic of artificial intelligence, have a look at my book ‘The Intelligence Revolution: Transforming Your Business With AI’.

Thank you for reading my post. Here at LinkedIn and at Forbes I regularly write about management and technology trends. To read my future posts simply join my network here or click 'Follow'. Also feel free to connect with me via Twitter, Facebook, Instagram, Slideshare or YouTube.

About Bernard Marr

Bernard Marr is a world-renowned futurist, influencer and thought leader in the field of business and technology. He is the author of 18 best-selling books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations. He has over 2 million social media followers and was ranked by LinkedIn as one of the top 5 business influencers in the world and the No 1 influencer in the UK.
