AI, Its Impact, and More…
Image credit: Forbes Magazine


With Microsoft inadvertently revealing yesterday that GPT-4 is what powers its Bing search engine, I felt this would be a beneficial piece to post. Humanity has seen artificial intelligence idealized for years in Hollywood productions such as Will Smith's I, Robot, Blade Runner, and The Matrix. An earlier, far simpler taste of the idea was Clippy, the assistant built into early versions of Microsoft Office. These technologies were initially implemented to help people learn, but in the years since the Y2K crisis we have seen them grow into an aspect of life that we are going to have to live with, adapt to, and, in the long run, rely upon in most facets of our daily existence. We have moved from a slightly intelligent model used for assistance to a complex intellect that is becoming a replacement for humans in many areas: where the person does not wish to do the job, where a machine could do it better, or where management would simply rather have a more efficient and reliable system to achieve a task. Artificial intelligence can be everything we feared from the movies and grow to be an intelligence superior to all others in the world.

 

Artificial intelligence, or what we currently call AI, is technically an advanced form of machine learning: the collection of data and the analysis of that data to produce a set of candidate answers. Mathematical analysis then narrows those candidates down to a specific answer, often using measures such as entropy. In data science, entropy measures how uncertain or mixed a set of outcomes is; by preferring the predictions and splits that reduce that uncertainty, an algorithm converges on the answer a careful human analyst would most likely arrive at without a machine. This learning style has been around for quite a while, but only recently has it made strides in enterprises as both an aid to and a complete replacement for people on tasks that were previously thought beyond what a computer could achieve.
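To make that idea concrete, here is a minimal sketch in Python (with made-up example data, not tied to any particular product) of the Shannon entropy measure that data-science methods such as decision trees use to gauge how mixed a set of outcomes is; the lower the entropy, the more confidently one answer stands out.

```python
import math
from collections import Counter

def shannon_entropy(labels):
    """Shannon entropy (in bits) of a list of observed outcomes.
    Low entropy means one answer dominates; high entropy means the
    outcomes are mixed and the "most logical answer" is less certain."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Hypothetical toy data: answers pulled from a dataset.
print(shannon_entropy(["spam", "spam", "spam", "ham"]))  # about 0.81 bits (fairly certain)
print(shannon_entropy(["spam", "ham", "spam", "ham"]))   # 1.00 bit (maximally uncertain)
```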

 

One of the first publicly prominent instances of AI dates to the 1990s with IBM's Deep Blue, a precursor to IBM's later system, Watson. Deep Blue was one of the first systems to compete against a human in an interactive series of games, pitting data-driven intellect against human-derived intelligence, most famously in chess matches against the reigning world champion, Garry Kasparov. Watson carried the idea further, competing on the television show "Jeopardy!". This model of competition continued with DeepMind's AlphaGo programs. The original AlphaGo was first taught from records of games played between humans, using analysis of those moves to build the computer's strength at the extremely complex game of Go, and it went on to defeat the champion it faced in a very memorable manner, winning 4 of the 5 games played. A newer iteration, AlphaGo Zero, was instead trained by having the machine play against itself millions upon millions of times, and it surpassed the earlier versions, showing that a self-teaching neural network is as effective, if not more effective, than learning from existing datasets. DeepMind has continued with these learning approaches and in later years has become publicly known for scientific breakthroughs, especially in complex areas such as protein folding (AlphaFold): the way a chain of amino acids curls into the compact microscopic structure we know rather than a miles-long strand.

 

Even with these advancements, the latest widely known iteration of artificial intelligence has been making waves around the world and in the news as the first AI that can truly mimic human intellect. The talk cannot be missed if you own a television, read a newspaper or magazine, or have simply been around the water cooler at work: ChatGPT, the publicly released AI created by OpenAI. The product is now in its third and early fourth phase of development, with ChatGPT updates rolled out on Jan 9th and Feb 4th, 2023 on top of the GPT-3 generation of models, and GPT-4 itself, as I noted above, released just yesterday.


The concept behind GPT (Generative Pre-trained Transformer) is that the intelligence is taught, or trained, in much the same way Google builds its results pages: by "crawling" data across the internet. Public data is scraped, the important facts are extracted, and the resulting datasets feed incredibly complex mathematical models that estimate the likelihood of each possible answer to the question being asked. The difference between ChatGPT and the others is that it is currently an open product; the world can utilize the system in a much more customizable manner than something like Watson, which can still be used only if you have an IBM Cloud account. The high usage rate of this model will also accelerate its learning: the more it is used, the more data it is exposed to and the more confirmation it receives about which answers were correct or incorrect. Training the system on a far wider scale than has EVER been attempted before gives the intelligence the potential to improve the likelihood of a correct answer exponentially, because it is shown many similar answers, and many routes to an answer, rather than the narrow examples earlier artificial learning methods were given.
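As a rough illustration of that likelihood idea, and emphatically not OpenAI's actual implementation, the sketch below shows the core mechanic a GPT-style model is built around: assign a probability to every candidate next word and favour the most likely one. The vocabulary and scores here are invented for the example.

```python
import numpy as np

# Toy vocabulary and hand-picked scores ("logits") standing in for what a
# trained transformer would actually produce; the numbers are purely illustrative.
vocab = ["Paris", "London", "banana", "blue"]
logits = np.array([4.0, 2.5, -1.0, 0.5])   # raw scores for the next word

# Softmax turns raw scores into a probability distribution over the vocabulary.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

for word, p in zip(vocab, probs):
    print(f"{word:>7}: {p:.3f}")

# The model's "answer" is simply the most probable continuation.
print("Predicted next word:", vocab[int(np.argmax(probs))])
```

A real model does this over a vocabulary of tens of thousands of tokens using billions of learned parameters, but the final step, picking the highest-probability continuation, is the same in spirit.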

 

What is this technology doing to the cybersecurity community and to the world as a whole? Decreasing the need for humans in the workplace. Since the news broke in January we have seen more and more publicly announced projects using deep learning to give the machine an advantage over humans in the workplace, as the machine needs no sleep, rest, or food, only a goal and a series of data to analyze. What is deep learning? It is the methodology most widely used in today's AI: data is taken in, possible outcomes are generated, mathematics is applied, and the most likely answer is returned as the result of the question or the application's goal (a minimal sketch of that learn-from-feedback loop follows this paragraph). As stated earlier, these datasets originally had to be provided to the machine by the programmer, complex sets of data the machine would then analyze. In the latest iterations, though, systems such as ChatGPT and GitHub's Copilot gather much of that material themselves. Copilot, a lesser-known but equally impressive technology, took existing bodies of programming code, analyzed them, and now returns the results to developers around the globe, not strictly to the teams that created the original code. And it is in this last point that we hit the first hiccup for the technology: it has been found to produce proprietary information for users who were not "privileged" and should not have had access to that data, and this has produced a lawsuit against the developers. Some of the information in the datasets the technology learned from was proprietary, and within the scraping or crawling that generates the technology's knowledge bank, the machine cannot determine whether material is protected; it simply takes that data and uses it to answer other, similar questions. This use of property has been taken to court, because it is in effect the use of one source's intellectual property to answer another's question, or, if so desired, to copy the original. Had the core code been manipulated into something unrecognizable and unique, this would be a less clear-cut case; here, it is evident enough that the result is derived from code the owner kept in a private GitHub repository, never meant to be distributed or reproduced by an intelligent response, and with the right question one could most likely get a result that very closely resembles the original product. This is where this style of learning has come under fire, but the sources of the intelligence, GitHub, OpenAI, and now, thanks to a substantial investment in OpenAI, Microsoft, have all asked for the case to be dismissed. The judgment will be quite impactful for the future of this method of deep learning, at least where private sites and intellectual property are used to train the intelligence's algorithms.
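For readers who want to see what "apply the mathematics and keep the most likely outcome" looks like at the smallest possible scale, here is a single-layer stand-in (in Python, with an invented four-row dataset) for the far deeper networks described above; the loop compares each prediction to the known answer and nudges the weights until the predictions line up.

```python
import numpy as np

# Made-up "dataset the programmer provides": inputs and their correct answers.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0.0, 0.0, 0.0, 1.0])          # learn a simple AND-style rule

rng = np.random.default_rng(0)
w, b = rng.normal(size=2), 0.0

for _ in range(5000):
    # Forward pass: squash weighted inputs into a probability between 0 and 1.
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    # Feedback: how far is each prediction from the known correct answer?
    error = p - y
    # Nudge the weights in the direction that reduces the error.
    w -= 0.5 * (X.T @ error) / len(y)
    b -= 0.5 * error.mean()

# Predictions drift toward [0, 0, 0, 1] as the "learning" accumulates.
print(np.round(1.0 / (1.0 + np.exp(-(X @ w + b))), 2))
```

Deep learning stacks many such layers and billions of weights, but the essential cycle of predict, compare to the data, and adjust is the same.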

 

Artificial intelligence has the potential to do many things, good and bad, but with the current implementation it takes a human to produce either result. In the last few weeks we have witnessed AI write books, poems, and applications for a myriad of uses, including, yes, malware, but also tools for protection and cybersecurity. The "good" lies in the creation of applications and data that enterprises and individuals consume in ways that benefit humans. But we have also seen that this technology can serve the "bad" side of software development: the creation of malware and ransomware that infects systems and penetrates the holes in existing architectures, exploiting them for purposes never outlined in their design. Ransomware, a type of software popularized by hackers because of the profits it generates, encrypts the data on the machines where the malware has been run, making that data completely unusable unless a "ransom" is paid to the author of the software, who then hands over a decryption key the victim can use to recover the data. The most widely felt early case came in 2017 with the global spread of a piece of software called WannaCry, and the models since have either copied it or exploited systems further, making it even more necessary to pay the threat actor in order to get one's data back.

 

But is this all that AI will do to the world? No. If all it did were reduce essay writing and make life a little easier, with less to learn and less work to do, that would be one thing. But it is going to be a revolution in which careers are amended or eliminated because the technology can do those jobs better, faster, and without breaks. That is a possibility, but at the current moment the technology is still based on a large database of material, using algorithms and mathematical measures such as entropy to determine the answer to what it is asked or told to do. So how will this affect us? In every way, from economics to raising children; I have even seen AR and AI used to analyze which diaper to choose. The whole world is going to evolve. Today the intelligence depends on the data within our current World Wide Web, but soon it will have produced enough data of its own that it resembles AlphaGo learning from games generated by its own logic. We are not at the point of Cyberdyne's Skynet, the Terminator AI that destroyed mankind, but with enough time and effort one could see the technology evolve into something that could indeed teach itself and manufacture its own units, and at that point where are we? That is the question I will leave everyone with. I myself am a strong proponent of AI, but with a strong emphasis on limiting its growth to the uses for which it is intended and keeping it away from weapons, healthcare, and other vital human needs. In the last year, emerging technologies have come into the world that will change it as much as the microprocessor did. It is up to mankind to take these changes and ensure we are not made into an organism completely reliant on this technology; otherwise we will have seen a major shift in humanity's current role and entered a new one, with machines as the dominant intelligence.

I want to thank those of you who helped read and edit this: John W. Gillette, Jr., Jerod D., Angelique "Q" Napoleon, Karl Pfefferle, Asher McInerney, Judd Jaffe, and my dad, Alan Lax.


I also want to thank the following for their comments: Aditya Ranjan Patro, Chuck Brooks, Steve Nouri, William (Bill) Kemp, Carmen Marsh, Anil Yendluri, Kevin Apolinario, Ronald van Loon, Yessenia Sembergman, of course Thinkers360, Bob Schiff, Bob Carver, and all Cybersecurity Insiders group members.

