Artificial Intelligence and Its Risks

The Beginning

Back in the 1990s, the public Internet was born, and reactions to it ranged from excitement to disbelief; some even dismissed it as a fad that would quickly lose steam and disappear. Internet Service Providers (ISPs) quickly took shape, the Internet boomed into existence, and its exciting potential for both individuals and businesses became clear. On the flip side, bad actors and hackers were also taking shape, exposing vulnerabilities, failures, gaps, threats and risks to organizations and governments. In response, Internet security, trust, safety and privacy practices began to form to combat the bad and the ugly.

The Internet itself has grown significantly too. As of 2023, roughly 15 billion devices were connected to the Internet. Some studies note that the number fluctuates and has, in some estimates, been as high as 21 billion. Statista projects that by 2030 there will be at least 29 billion regularly connected devices.

The Birth of AI

Artificial intelligence has an interesting story as well. Thoughts and concepts resembling artificial intelligence (e.g., machine learning, "engines", robots) have been around for thousands of years. One of the first AI programs was written in 1951 by Christopher Strachey, running on a computer at the University of Manchester, England. In 1956, AI took further shape when scientists attending a conference at Dartmouth College defined it as a field of research.

Just as the Internet boomed with creativity, features, connection possibilities, communication, and the World Wide Web's seemingly infinite capabilities, AI boomed through the 1950s and 1960s with a variety of studies, developments and tests of robots and programs that started to shape the field. Creative minds kept pace, too, with numerous films: 2001: A Space Odyssey, released in 1968, featured HAL (Heuristically programmed ALgorithmic computer), and Star Wars, released in 1977, featured droids like C-3PO and R2-D2.

Developing and Advancing AI

After Y2K and its hype, AI continued to develop and advance, especially in Internet-connected toys, games and the like - think of Sony's Aibo artificial intelligence robot dog. The Internet of Things (IoT) came to include AI devices: Internet-connected products and robots such as self-operating vacuum cleaners and smart home automation are constantly taking shape and advancing, and in some cases can be dangerous or harmful. Physical security companies have already started to deploy robotic watchdogs and related technology, reducing the number of humans in the equation. Cellphones, smart speakers, home products and tablets with AI assistants, autonomous vehicles for consumer, commercial and industrial applications, chatbots and similar products will continue to outshine other solutions, and the technology will continue to evolve. The same can be said for military weapons and products: if the technology fails, or goes rogue, it could cause the next global war or geopolitical crisis. It is certainly plausible that AI could fail, precisely because it is constantly learning, and plausible that it could become smarter than a human. Currently, anything is possible.

Like the Internet and security, AI requires the same thoughtfulness and appreciation, as well as prevention, detection and mitigation of the vulnerabilities and threats this advanced technology introduces.

IoT and AI = AIoT

As mentioned previously, the IoT space has seen constant growth over the last 15 years and is positioned to grow further. CISOs and similar organizational leaders have genuine concerns whenever the business connects a device to the internal network, especially if that same device also has free rein to access the Internet. Of course there are ways to mitigate this and control your networks, but it is a concern people across the organization share when trying to keep everything safe and secure.

If we thought IoT was mind-boggling by itself, artificial intelligence has now joined IoT to form AIoT. In industry, IoT has become synonymous with medical devices, manufacturing OT with its sensors and processors, warehouse operations with robots, and so on. In manufacturing operations, a control loop is a system in which various devices transmit signals or collect data that is sent to a hub for analysis. This loop reduces the amount of work, but still requires human interaction. With AI, the technology assumes the human role in that loop, stepping in to improve IoT altogether.
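The hub-and-sensor control loop described above can be sketched in a few lines. This is a minimal, hypothetical simulation (the sensor, thresholds and cycle count are invented for illustration), showing how automated anomaly detection can take over the review step a human operator would traditionally perform:

```python
import random
import statistics

def read_sensor():
    """Simulated temperature reading; a real device would poll hardware."""
    return random.gauss(70.0, 2.0)

def control_loop(cycles=100, threshold=3.0):
    """Collect readings at a hub and flag statistical outliers automatically.

    Flagged values are what would trigger an alert or actuator instead of
    waiting for a human to inspect the data.
    """
    readings = []
    anomalies = []
    for _ in range(cycles):
        value = read_sensor()
        readings.append(value)
        if len(readings) > 10:  # wait for a small baseline before judging
            mean = statistics.mean(readings)
            stdev = statistics.pstdev(readings)
            if stdev > 0 and abs(value - mean) > threshold * stdev:
                anomalies.append(value)
    return anomalies
```

In a real plant the `read_sensor` call would be a fieldbus or MQTT read, and the anomaly branch would feed an alerting or actuation system rather than a list.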

Cybersecurity and AI

Artificial intelligence also plays an important role in today's security operations and engineering functions, giving CISOs and their organizations ways to stay ahead of the curve: mitigating risks, detecting activity on their networks, and operating as efficiently as possible, especially when responding to security events or, worse yet, a security incident.

Especially in the current market conditions of many sectors, CISOs have faced reduced spending, decreased budgets and layoffs of team members, which has led to some transformative activity in cybersecurity. The question becomes: how do we stay on top of alerts and prevent incidents with a shortage of staff, and possibly unskilled staff? CISOs turn to advanced technology like machine learning and artificial intelligence, cross it with their existing security technologies, and - poof - a new way of analyzing and handling cyber threats appears. Or does it? Playbooks, IR plans and other processes are still needed, and human intervention is still required. Another area of improvement is SOAR (Security Orchestration, Automation and Response), which helps a security operation automatically surface the most relevant alerts at the top of the list, or add richer context to the alerting methodology, so that what matters is seen first, handled first and mitigated first.
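The SOAR-style prioritization just described - pushing what matters to the top of the queue - can be sketched as a simple scoring pass over alerts. The fields and weights here are purely illustrative assumptions, not any vendor's actual formula:

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    source: str            # detecting tool, e.g. "edr", "siem"
    severity: int          # 1 (low) .. 5 (critical)
    asset_criticality: int # 1 .. 5, from a hypothetical asset inventory
    intel_match: bool      # did a threat-intel feed flag an indicator?
    score: float = field(default=0.0)

def triage(alerts):
    """Score each alert and return them highest-risk first.

    The weighting (severity doubled, flat bonus for an intel match) is an
    invented example of how context can reorder an analyst's queue.
    """
    for a in alerts:
        a.score = a.severity * 2 + a.asset_criticality + (5 if a.intel_match else 0)
    return sorted(alerts, key=lambda a: a.score, reverse=True)
```

With something like this in the pipeline, an understaffed SOC works the ranked list from the top instead of wading through alerts in arrival order.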

AI for the Greater Good: Immediate Risks of AI

Don't get me wrong: AI is here to stay, whether advancing science, improving and enhancing technology, assisting scientists with the development of vaccines, or improving operations - all valid and valuable capabilities. But the bad is inherent in any technology, including AI. The public and lawmakers are concerned about data protection and privacy; industry leaders and executives worry that humankind could be taken hostage by AI; and recent simulations and movies depict AI as runaway or rogue technology. All of this is concerning.

Artificial intelligence is a technology that requires careful development. Like any other application or solution, that development should build on base standards and foundations that incorporate concepts such as "Security by Design" and "Privacy by Design". It is imperative that AI technology be tested thoroughly, especially since it can become very capable and in some cases surpass human abilities. Take ChatGPT as a prime example: it supplied an attorney in the middle of a lawsuit with fabricated case law - citations that did not exist - and the attorney was sanctioned by a judge for submitting the false information. Others have cited further concerns with ChatGPT: inherent bias, privacy issues, and the ability to jailbreak the technology outside its programmed terms and conditions.

In some cases, organizations such as schools, universities and businesses have blocked the use of ChatGPT over plagiarism or data loss concerns. This blocking or banning is similar to what some state governments have done in banning certain social media platforms for various cultural and common-sense reasons. It is likely that government regulators will introduce new laws and compliance requirements that must be adhered to while using these technologies.

Some Suggestions on Keeping an Eye on AI Risk

  1. Executive Buy-In. It may be prudent to assemble an executive or leadership committee for privacy and security topics, if one doesn't exist already. Having an executive sponsor will be helpful, especially when determining whether AI tools and technology are suitable for your organization and end users. A policy covering the use of AI and similar tools, in the form of an Acceptable Use Policy (AUP), will be a good addition to your policy arsenal.
  2. Security Operations. Whether your organization utilizes EDR, MDR, XDR, SIEM or a combination of them, and/or the services of an MSSP, routine monitoring for suspicious behavior, threat-actor tactics and anomalous traffic is important. Sprinkling in specialized monitoring of human risk, user behavior analytics, data loss prevention (DLP) and a cloud access security broker (CASB) is also a sound way to counter advanced tactics, lateral movement, insider threats and data misuse.
  3. Proactive Security. Shift left and be as proactive as you can. Many organizations deploy a DLP or CASB tool but never run it in enforce or block mode; security departments should start sooner rather than later. Data loss, misuse and theft are realities, and an organization should be preventing and blocking these events before they become worse than they need to be.
  4. Cyber Threat Intelligence. A formal program will help add contextual information to the data and logs you are already monitoring, enriching alerts and events so you can prioritize them over other "noise". There are varying levels of cyber threat intelligence to be had, from vendor hardware/software patches, government alerts (CISA), industry groups (e.g., RH-ISAC, FS-ISAC) and everyday threat and vulnerability information (CVE), up to dark web and other advanced threat intelligence sources.
  5. DevSecOps. If your organization plans to incorporate AI-specific technology into its solutions, ensure that the development of these applications follows best practices that incorporate "Privacy by Design" and "Security by Design", along with relevant frameworks and controls such as OWASP.
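The enrichment described in item 4 - attaching threat-intel context to raw events so they can be prioritized over noise - can be sketched as a simple lookup. The indicator set and field names below are hypothetical (the IPs are from the reserved TEST-NET documentation ranges); a real program would query an ISAC feed or a threat-intelligence platform API:

```python
# Hypothetical local indicator set; in practice this would be refreshed
# from a threat-intelligence feed rather than hard-coded.
KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.23"}

def enrich(event):
    """Attach threat-intel context to a log event.

    Returns a copy of the event with an `intel_hit` flag and a coarse
    `priority` so downstream tooling can sort what matters to the top.
    """
    enriched = dict(event)
    enriched["intel_hit"] = event.get("src_ip") in KNOWN_BAD_IPS
    enriched["priority"] = "high" if enriched["intel_hit"] else "normal"
    return enriched
```

Even this crude version illustrates the payoff: the same firewall log line is either background noise or a top-of-queue event depending on the intelligence wrapped around it.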


AI is a powerful tool that can bring both opportunities and risks - it's important to stay informed and prepared.

More articles by Harris D. Schwartz
