Future Ready Digest Vol.1 No.4
Welcome to Vol.1 No.4 of Future Ready Digest - keeping you up to date with the latest research and thinking on building future-ready organisations. Please register to receive notifications of future digests.
Vol.1 No.4 Summary - going backward in employee engagement; bold and frequent upskilling required; AI threatens the middle classes; coding sucks; comeback of the humanities; cutting through the metaverse hype; advanced data and analytics practices; the problem with data literacy; disrupting Google; ChatGPT's implications for higher education; AI in banking; insidious disease in our scientific process; and others.
With employee engagement levels at their lowest since 2015, "the world is closer to colonizing Mars than it is to fixing our broken workplaces." The pandemic presented us with a once-in-a-lifetime opportunity to radically change "the way we do things around here." With many employers now mandating a return to the office and reimposing a command-and-control management style that was already outdated pre-pandemic, it looks like we have blown it. As Depeche Mode would say: We're going backward.
A new report by McKinsey looks at the skills revolution required in a world of AI and automation. With almost half of all current work activities being exposed to automation, bold and frequent upskilling will be required at all levels. "As companies and organizations in all sectors deploy new technologies - including automation and artificial intelligence - ensuring this evolution fosters shared, sustainable prosperity will likely hinge on how well societies prepare the workforces of tomorrow. Private and public-sector leaders have a critical role to play in helping to create family-sustaining jobs, close skills gaps, and ensure tech-fueled growth leaves no one behind."
"History suggests profound technological change presents significant challenges for policymakers. Each of the three previous industrial revolutions had a similar initial impact: it hollowed out jobs across the economy, led to an increase in inequality, and to a decline in the share of income going to labour......AI threatens to have precisely the same effects, but with one key difference. Left unchecked, owners of the new machines will make enormous sums of money out of their innovations. Capital will see its share of income rise at the expense of labour. There will be a hollowing out of some sectors of the economy but there will be employment growth in other sectors. The difference this time is that the jobs most at risk will be white-collar, middle-class jobs, while many of the jobs created might be of the low-paid, dead-end variety."
Are we seeing the beginning of the end of programming as we know it - from a job that humans do, to one that robots will do, thanks to technologies like ChatGPT and Copilot?
Can the decades-long decline of the humanities, due to the pro-STEM movement, be blamed for the proliferation of falsehoods on social media, crass political discourse, the rise in racism and the parlous state of democracy? Fair question to ask IMO.
With 30 percent of global businesses projected to have products and services ready for the metaverse by 2026, a timely summary of the current state of play via INSEAD.
Just another example of how far behind the curve we are in this country. Why are the obscene profits being made by energy companies in the UK not being reinvested in future-ready technologies and business models? Contrast that with Abu Dhabi, where the Hub71 tech hub has just launched a programme to back Web3 startups: "Speaking on the launch of the program, appointed Deputy Chief Executive Officer of the city’s tech ecosystem Hub71, Ahmad Ali Alwan, noted that the move represents Abu Dhabi’s openness to disruptive businesses that drive change and transformation on a global scale. Decentralization is the future of the blockchain-based internet, and Web3 startups will play a significant role in accelerating this transition."
Useful five-article download from MIT Sloan faculty sharing thought leadership, research, and insights on advanced data and analytics practices; applying the tools of modern data science, optimization, and machine learning to solve real-world business problems.
Yes, totally agree with this. As data literacy reaches the peak of its "hype cycle", three main problems remain: an excessive focus on the methods and techniques of data creation, which hurts productivity; the assumption that user illiteracy is the main reason so little value is being delivered from data, which creates a toxic divide between producers and consumers; and a failure to measure the business impact, effectiveness, or success rate of data programmes. IMO, focus on the actionable insights you need from the data first; don't be seduced by the nice visuals or spreadsheets.
Is the disruptor only a year or two away from being disrupted? "AI will eliminate the Search Engine Result Page, which is where they make most of their money. Even if they catch up on AI, they can't fully deploy it without destroying the most valuable part of their business!"
I would argue the complete opposite of this. IMO, AI and technology generally have been used by banks to reduce costs, close branches, reduce headcount, and maximise profits - often at the expense of the customer experience. "The impact of artificial intelligence in the banking sector has been transformative, with benefits ranging from improved customer experiences to enhanced efficiency and security. One of the most significant impacts of artificial intelligence in the banking industry has been the ability to provide more personalized and convenient experiences for customers. With AI-powered chatbots and virtual assistants, financial institutions can provide 24/7 customer support and reduce wait times, improving customer satisfaction."
At last, we seem to be moving from the "all students are cheats" debate. "The most immediate value in ChatGPT is as a stimulus for brainstorming. For example, if students task the tech to come up with five ideas in digital health care, in the worst-case scenario, you're going to get five OK ideas. But even better, [if] you feel like well, those ideas suck, I can do better than that - now you've started jumping into the water, being creative, right?" This is exactly how I am currently using AI in my Future Ready programmes. This week, Masters students will use ChatGPT, or similar, to assess the future readiness of five well-known companies - Red Bull, Nike, Tecnica, Ferrari, and Under Armour. Using the right prompt engineering, AI returns good enough answers that can be used as a foundation for class discussion, debate, and further research when necessary.
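For anyone who wants to try something similar with their own students, here is a minimal sketch of the kind of prompt I have in mind, wrapped in a short Python call to the OpenAI chat API. The model name, the four assessment dimensions, and the helper function are my own illustrative assumptions, not a published rubric or the exact prompt used in class.

    # Minimal sketch: asking a chat model to assess a company's "future readiness".
    # Assumes the openai Python package (pre-1.0 API) is installed and that an
    # API key is available via the OPENAI_API_KEY environment variable.
    import openai

    PROMPT_TEMPLATE = (
        "Act as a strategy analyst. Assess the future readiness of {company} "
        "across four dimensions: business model innovation, digital capability, "
        "talent and culture, and sustainability. For each dimension give a score "
        "out of 10, one sentence of justification, and one open question for "
        "class discussion."
    )

    def assess_future_readiness(company: str) -> str:
        # Illustrative model choice; any capable chat model would do.
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(company=company)}],
            temperature=0.7,
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        for company in ["Red Bull", "Nike", "Tecnica", "Ferrari", "Under Armour"]:
            print(f"--- {company} ---")
            print(assess_future_readiness(company))

The exact wording matters less than the structure: any prompt that forces the model to commit to scores and one-sentence justifications gives students something concrete to challenge, which is where the class discussion and further research start.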
I (with others) once had a paper turned down for an academic tourism conference. The paper evaluated the Web marketing activities of National Destination Marketing Organisations. One peer reviewer commented that "the authors claim that the research is based on 'National' Organisations, yet this includes Visit Scotland. Scotland is not a country." And that, my friends, was the last time I submitted anything for peer review. Better to get it out there and let the crowd comment rather than a couple of self-proclaimed experts. If the peer review system was not fit for purpose 15 years ago, it is even less so today. With scientific and academic journals facing a potential tsunami of AI-produced research papers, there are some hard-hitting comments in the paper below... "The problem is not so much a result of sophisticated computer mimicry but rather an insidious disease at the heart of our scientific process – a lack in quality control due to insufficient vetting by peers combined with increasing pressure on journals for bigger, faster results without necessarily improving standards." With academic job security, tenure, and career development dependent on the number and quality of your publications, this will be an interesting one to watch.
=============================================================
BEST OF THE REST
I have been a big fan of Noam Chomsky's work since the 1970s, so sad to hear him speak this way about ChatGPT and similar tools.... “basically high-tech plagiarism” and “a way of avoiding learning.” Sorry, but I totally disagree. Used effectively, generative AI can significantly enhance the student learning experience. The key term, of course, is "used effectively". There is no such thing as good or bad technology; only good or bad people.
The three main pitfalls of agility - hubris, impulsiveness, and resource fatigue - via MIT.
Scorched earth strategic thinking is a major part of our Future Ready Programmes. Here's one to think about. Would a "pay as you go" business model work for the car insurance industry?
Can creative design save the NHS?; an AI-produced glossary of emerging digital technologies; creating a culture of change - the six key components; don't focus on the symptoms, frame the problem better instead; VUCA thinking is no longer enough - scorched earth is required; an insidious disease at the heart of academic publishing; ChatGPT - depressing stuff from one of the world's leading Business Schools; MOD unable to deliver digital transformation; a game of monopoly, otherwise known as the English Premier League; the future in 2050; integrity anyone?; reimagining leadership, management and capitalism; tech CEOs screw up; AI can break down data silos; launch of our new Future Ready Digital Programme; and others.
You can register to receive our Future Ready Digest Updates below:
Take care.
Jim H