Can Machine learning optimise my organisation's results?
Image generated by AI (MidJourney)


The last three years are a good reflection of the fast-paced and VUCA (Volatile, Uncertain, Complex and Ambiguous) world we live in. Change is the only constant, and external events happen faster and more frequently. If we analyse the root cause, technology and globalisation probably play a key role in it. So we can complain about it, thinking how good the old days were, or, alternatively, we can adapt, transform ourselves for these new times and up-skill in order to navigate better in the environment we have to deal with.

Opening a newspaper might be depressing, and even some of the conversations we have with our peers or friends don’t boost the mood either. Inflation, layoffs, war, several types of crisis, corruption or multiple conflicts are the usual headlines. However, it’s up to us to decide how to deal with this situation. It’s true that these are not easy times, but technology is helping us face challenges faster and with impressive results. We can look at how technology (artificial intelligence specifically) has impacted the discovery of new drugs and molecules, how that process has sped up in recent years, allowing us to have vaccines (COVID-19) faster, and how some cancer research has been accelerated as well. However, we should be conscious of the big responsibility involved and analyse the impact that the use of new technology brings. Technology is not good or bad; it all depends on the use we make of it. So leading with empathy, acting ethically, and thinking in the short and long term at the same time is pivotal. The actions we take today will have a faster and broader impact than they would have decades ago, and the potential harm is bigger, hence the importance of a technical-ethical approach to current and future initiatives (think of the polarisation of society via fake news, decision making with biased artificial intelligence models, or privacy concerns with hyper-personalisation).

In order to make it more tangible and specific, let’s look at the corporate world. Companies around the globe are scrutinising their budgets in order to improve their P&L by 1) improving their operating margin (reducing costs, improving efficiencies, ...) and 2) increasing revenue (finding new revenue streams, finding new customers, launching new offerings, ...). Not all companies have the same appetite for change or look for solutions of the same nature. The way risk is managed (even defined or assumed) will shape approaches and tactics. Despite this, the relevance of technology, and the role this ally is playing, is undeniable.

As described above, companies are looking for methods to improve their P&L: reducing costs/improving efficiencies and/or increasing revenue. Either way, it’s crucial to understand the business context in order to find solutions. Networking, industry forums and catch-ups with professionals help us understand the state of the art, and also make us aware that we are not alone in this journey and that we can find similarities across organisations and industries. It is very interesting to use Google Trends and search for terms like artificial intelligence and/or digital transformation (link); the former surpasses the latter. Aside from this dull competition between terms, both are misused and overused in corporate presentations at board meetings. They are misused because leaders feel they must demonstrate that their organisations are competitive and modern, when the reality is that bringing in this kind of innovation is about change. And change is very difficult. Technology is only effective when it is adopted, and this requires change in terms of processes, skills and roles. It requires a business transformation, using digital techniques.

The potential of these technologies is only beginning to be unveiled. Artificial intelligence is not new, but it has seen a boost in the last 18-24 months. This change has been accelerated, fundamentally, by more accessible computing power - GPUs (thanks to the graphics cards used in gaming) - and cheaper data storage (with the help of cloud computing). Another interesting factor is the close collaboration between research entities and corporations. All the big techs, like Meta, Google and others, have research teams in this space. They used to publish all their research, with full transparency about their models (training sets, model weights, ...), with OpenAI being the big reference. I said “used to” intentionally, as this is not what happens with newer models (GPT-4 in contrast with its predecessors), so this might be an inflection point leading to less collaboration and, as a result, less spillover innovation.

If we analyse the artificial intelligence Google search trend, we can see that GPT is one of the most searched terms. Indeed, this model is the new shiny object everyone is talking about, and it deserves a place in the hall of fame of disruptive technology. GPT is a model (based on the transformer architecture, one of the newest and highest-performing artificial neural network designs) that has taken the world by storm.

In a nutshell, GPT is an LLM (large language model), a model that processes natural language and provides a variety of results. Probably the best-known application is ChatGPT (a chatbot function), but it can also be used for classification, summarisation, or generating new text from a given input, supporting a multitude of use cases from copywriting and letter writing to content creation and analysis. This is not a new field of study: its beginnings can be traced back more than half a century, to a basic model called ELIZA. The applied techniques have changed a lot, from statistical pattern recognition used to generate responses, to pre-trained models based on the transformer architecture (where the attention layer, introduced by Google in 2017, was the breakthrough that changed these models).
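To make this tangible, here is a minimal sketch of consuming a pre-trained language model for one of those tasks (summarisation), using the open-source Hugging Face transformers library. The model name is just an illustrative choice; any summarisation checkpoint would be used the same way.

```python
# A minimal sketch of consuming a pre-trained language model for summarisation,
# using the open-source Hugging Face "transformers" library. The model name is
# an illustrative choice; any summarisation checkpoint would work the same way.
from transformers import pipeline

summariser = pipeline("summarization", model="facebook/bart-large-cnn")

report = (
    "Companies around the globe are scrutinising their budgets in order to improve "
    "their P&L, either by improving operating margins or by finding new revenue "
    "streams, new customers and new offerings. Technology, and artificial "
    "intelligence in particular, is playing a key role in both directions."
)

summary = summariser(report, max_length=30, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```

The same pipeline interface covers other tasks such as classification and text generation, which is part of what makes these models reusable across so many use cases.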

LLMs are top of mind: they are in the news, in blogs, and everyone is talking about them. Independently of the model, architecture or technique used, artificial intelligence and machine learning are unmatched in the automation and efficiency value they can provide. Progress in computing power (the above-mentioned GPUs, or graphics processing units, from companies like Nvidia), the availability of digital storage (on-premise or cloud) and public-private collaboration across organisations set the tone for these new models. Organisations looking to reduce costs, increase efficiency, streamline processes or find new revenue streams can get huge value from these technologies. There is no corporate pitch that doesn’t talk about technology, digital or AI. However, we should not forget that these models are not the solution to every problem. It is critical to understand the problem and determine, with experienced teams, how to address that challenge. It might be that a deep learning model is the solution, but we should look first at classical algorithms or even classical software engineering principles (software 1.0, as Andrej Karpathy coined it), as they could be a better fit - and that is assuming technology is the answer to the problem at all, as it might not be.

The typical sweet spot is repetitive tasks that are difficult for humans to process (because of a large set of variables, hidden correlations to be found, or computationally demanding probabilistic needs). We can also think of computer vision and object recognition, where models like YOLO and others are helping doctors to spot anomalies and detect potentially harmful elements, speeding up and strengthening health diagnostics (healthcare is one of the industries looking to provide tools that help its professionals make better and more precise decisions). The same principles are being applied in robotics or, for example, in the insurance world, where models are built to accelerate car-repair paperwork, detect fraud, and prevent machinery errors or breakdowns, improving the customer experience, reducing operational costs and speeding up resolution times. These image recognition models are also triggering use cases in agriculture, improving the quality of crops and making them more efficient in their use of water and nutrients.

The transformer architecture (the base for GPT, BERT and the LLMs in the hype now) has been disruptive and is the tipping point for this technology. I’m sure you have already played around with Midjourney, DALL-E or Runway (generative AI models). These diffusion models, and others, are changing the audiovisual industry, shortening the production process, sparking new ideas among creatives and generating a whole new way of working. As in the LLM case, diffusion models are based on a common architecture, with big research teams behind them, trained with millions and billions of parameters (typically on public datasets). We might not even need to go that far in adopting advanced models, because classic models such as regressions or clustering are helping a multitude of companies with their distribution efforts, improving pricing, analysing customer databases, optimising product portfolios, or prioritising generated leads.
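To illustrate that “classic” end of the spectrum, here is a minimal sketch of lead prioritisation with a plain logistic regression. It assumes scikit-learn is installed, and the features and labels are entirely synthetic and hypothetical.

```python
# A minimal sketch of lead scoring with a classic model (logistic regression).
# scikit-learn is assumed to be installed; the data below is synthetic and the
# feature meanings are hypothetical, purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical lead features: company size, number of past interactions,
# days since last contact (standardised values).
X = rng.normal(size=(500, 3))
# Hypothetical label: whether the lead eventually converted.
y = (X[:, 0] + 0.5 * X[:, 1] - 0.3 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Rank unseen leads by predicted conversion probability, highest first.
scores = model.predict_proba(X_test)[:, 1]
priority = np.argsort(scores)[::-1]
print("First five leads to contact:", priority[:5])
```

The value here is not the sophistication of the algorithm, but that a cheap, well-understood model can already deliver the prioritisation described above.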

The pace of innovation happening in the AI domain is surprising. There is not a single week without news and updates. Keeping up to speed is not easy, not only because of the amount of updates (new models, techniques, ...) but also because of the broad spectrum of fields. Artificial intelligence and deep learning are not a single, simple term; they form a very complex discipline that covers many areas. There are innumerable uses, businesses built around them, and more to come. However, there is a general pattern worth looking at:

State of the art: three different approaches:

  • Foundational models: the large models (LLaMA, Chinchilla, BERT, GPT, ...) powered by big corporations. They are trained mainly on large public datasets and can be used in different ways. In some cases they are open via a web interface (with some limitations), like ChatGPT (based on the GPT-3.5 version, trained as a chatbot), or we can consume them as services via an API (a minimal sketch of this API-based consumption follows the list).
  • Full stack: a technical stack that embeds an AI model in an app (or front end) solving a particular use case, providing end-to-end technology to address it.
  • Front end: an approach that uses foundational models (one or several) as the engine, in some cases fine-tuning them, and adds a front end that helps users interact and work on the designed use case.
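As an illustration of the first approach, here is a minimal sketch of consuming a foundational model as a hosted service over HTTP. The endpoint URL, payload fields, model name and credential are placeholders: each provider defines its own interface, so this only shows the general shape of the integration.

```python
# A minimal sketch of calling a hosted foundational model over HTTP.
# The endpoint, payload fields, model name and API key are placeholders;
# every provider defines its own contract, so check the relevant documentation.
import os
import requests

API_URL = "https://api.example-llm-provider.com/v1/completions"  # placeholder endpoint
API_KEY = os.environ["LLM_API_KEY"]  # placeholder credential

payload = {
    "model": "example-foundational-model",  # placeholder model name
    "prompt": "Summarise our Q3 cost-reduction initiatives in three bullet points.",
    "max_tokens": 150,
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=30,
)
response.raise_for_status()
print(response.json())
```

The full-stack and front-end approaches essentially wrap a call like this (or a self-hosted model) behind their own application layer.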


There is a lot of movement among startups and big tech, but the dynamic will probably stick to the above approaches for some time. Although we are talking about digital assets, there are several differences from regular/traditional software (software 1.0) that we should factor in when introducing an AI strategy in our organisations.


  • The use of foundational models provides quick results and is faster to implement. However, we should evaluate how to apply the model to our use case (is a fine-tune needed? A minimal sketch of what that involves follows this list). Another consideration is that models will/can change, and we are dependent on external organisations (if a model changes, how will we react to that?).
  • ChatGPT and the other GPT-family models are popular nowadays. Although the results are impressive, we shouldn’t forget that the big issue with these models is hallucination (a confident response by an AI that does not seem to be justified by its training data). The organisations behind these models are putting some mitigations in place, but it seems intrinsic to how the models work. Models also are, or could be, biased (sex, age, race, ...), so again the ethical (and legal) aspect is something to keep in mind. This is critical if we plan to put these models directly in front of end users, or use the results of ingested prompts directly with our customers.
  • If we choose to use a foundational model as the base for our app/backend, adding a self-built layer on top as the front end, we should check licence limitations (some of these models are open source).
  • Underestimating resources is a common pitfall. Success is driven by a proper use case and data preparation before putting the model into production, plus adjustment and maintenance. This requires specialised roles (data engineers, data scientists, ML engineers, ...). Not all organisations are ready, and getting external help can be crucial.
  • Not every problem requires an AI response. As with any other digital/technical innovation, looking at organisational vision and strategy, processes and resources should be done in combination with the technical elements. Innovation is about change, and change is impossible without people. Very interesting discussions are happening around GPT and the education sector: is it about introducing technology into the sector, or is it about changing the sector, using technology to achieve new goals?
  • The elephant in the room is potential mass job destruction. I think we are not yet at the stage where we can talk about AGI (Artificial General Intelligence). What is clear is that professionals who do not adopt new tools will be at a disadvantage. In any case, very important discussions should be held around this topic.
  • The ethical aspect of machine learning was mentioned previously, but we shouldn’t forget the legal angle either. Generative AI is the subject of an interesting debate due to some recent lawsuits. Can GitHub code be used to train models? Are the images generated by generative AI models art?
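On the fine-tuning question raised in the first point above, here is a minimal sketch of what adapting a pre-trained model to an organisation’s own labelled texts could look like, using the open-source Hugging Face transformers and datasets libraries. The base model, the toy dataset and the hyperparameters are purely illustrative.

```python
# A minimal sketch of fine-tuning a small pre-trained model on in-house labelled
# texts (e.g. classifying support tickets). The base model, toy data and
# hyperparameters are illustrative only; a real project needs far more data.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base_model = "distilbert-base-uncased"  # illustrative small base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=2)

# Toy in-house dataset: tickets labelled as complaint (1) or not (0).
data = Dataset.from_dict({
    "text": ["The invoice is wrong again", "Thanks, everything is sorted now"],
    "label": [1, 0],
})
data = data.map(lambda row: tokenizer(row["text"], truncation=True,
                                      padding="max_length", max_length=64))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetune-out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=data,
)
trainer.train()
```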


There are very interesting times ahead in technology in general, and artificial intelligence in particular. We will see fierce competition between the big players to show muscle in this space (e.g. Microsoft/OpenAI vs Google), more efficient models (requiring less computing power), new business models built around these big models, new models breaking records (in terms of accuracy/recall/F1 metrics), and more companies entering the field in order to streamline their operations and follow the mandate of P&L optimisation. Watch this space; the game has just started.

