🤔 w/146: AI Lessons From Harvard Business School


⚡️ Supercharge your career, productivity and wisdom with Wiser! - the tech newsletter for professionals who want to know more than their competition.

Cover Image: AI generated with Magic Media by Canva.


w/Wiser! #146 - 5th Nov 2023

In this issue:

  • Harvard Business School Study tells us what we need to know about using AI in the workplace
  • UK hosts first of a kind global summit on AI
  • Apple’s AI long game.

Plus the top stories on what’s going on in AI and emerging technologies shaping tomorrow’s digital world and your workplace.

⚡️ Join the mailing list and never miss an issue of Wiser!


w/Insight

AI Lessons From Harvard Business School

I get asked a lot of questions about using AI. By far the biggest category is the “is it worth it and can I rely on it?” group of questions. The short answer is “yes, you can.” But it’s a conditional answer, because the slightly longer and more correct answer is “yes, you can, but be careful, know what it’s good at and what it’s not, and remember that the buck stops with you!”

To support this advice I lean on the data from a recent working paper from Harvard Business School. The study involved 758 consultants from Boston Consulting Group, and its purpose was to look at how best to use AI tools in the workplace.

The headline finding is that when you train people to use AI, they produce higher-quality results faster than people with no training and people who are not using AI at all.

BUT (there’s a “but”)… this only applied to tasks within the AI’s core capabilities. Which makes sense when you think about it, because when you give AI a task it’s not trained to do, the human wins hands down, every time. It’s like using a calculator to write words. After “hello,” “bill,” and “boobies” you’re running out of options.

For the study, Harvard used GPT-4 and split the BCG consultants into three groups: those who had been given “prompt” training, those who hadn’t but were left to use the AI anyway, and a third group that had no access to ChatGPT at all.

When given tasks to do that were within ChatGPT’s range of capabilities, the study found:

  • Consultants using AI produced higher quality results.
  • They were also faster and more productive.
  • Below-average performers got the biggest boost from using AI tools.
  • Low performers + AI performed better than high performers without AI.
  • Training in AI improved both the quality and the speed of output for all consultants.

When the consultants were asked to perform tasks outside of ChatGPT’s range of capabilities, the study found:

  • Those who used AI were more likely to trust ChatGPT even when the output was incorrect.
  • The AI was consistently wrong yet dangerously convincing, coming across as “confidently correct.”
  • Training in AI led to even greater trust in the output, even though it was consistently wrong.
  • Results showed less variety, with outputs that were more generic and similar across the groups.

➜ Here’s The Thing: When you play to AI’s strengths, it can raise your performance and enable you to compete above your pay grade. It will also improve your productivity, helping you do more with less.

On the flip side, the reverse is true. When you cross the line and take AI outside its comfort zone, you’re heading for trouble (see the case of the lawyer who used ChatGPT to prepare for a court case, didn’t check it, used it, and is now in a heap of trouble).

The issue is knowing where the line is.

This applies to all of the AIs, not just ChatGPT. They’re convincing and confident when they’re wrong. That’s because they don’t know they’re wrong!

So, the key takeaway is: use AI to raise your game, but do so with a critical eye.

See AI as your multi-purpose Swiss Army productivity tool. It can do many things, but at the end of the day, it’s in your hands.

There’s no doubt that AI can both supercharge your output AND help you compete above your pay grade. Just be careful, that’s all.

Read Harvard Business School Study


Join 15k Subscribers and Get Wiser! Every Week

➜ Subscribe for free here


w/News

Global Powers Sign Declaration to Jointly Govern AI Risks at UK’s Safety Summit

The United States, China, UK, EU and over 25 countries signed a declaration to collectively govern AI risks at the UK’s AI Safety Summit. Following on from President Biden’s 100+ page Executive Order on Monday, this gathering of political and tech leaders marked a milestone in the debate on AI oversight and regulation. Notably, China was included in this first major West-led effort on AI safety, agreeing to increased collaboration. Known as the ‘Bletchley Declaration’, the agreement lays the groundwork for joint risk assessment and policies to ensure safe, ethical AI development. Whilst many commentators, including me, remain skeptical about geopolitical rivals cooperating in the AI arms race, there’s no denying it was a significant moment/photo opportunity. Let’s see how long it lasts!

Sources: Reuters (Biden Exec Order) | AI Safety Summit | The King | Sunak Interviews Musk

Apple Is Playing The AI Long Game

Microsoft and Google have been the notable Big Tech front runners in the AI gold rush this year, closely followed by Meta. Last month Amazon joined the party and started to show its hand on AI, leaving a notable absentee: Apple. That’s not to say Apple aren’t in the game. Every product Apple has is packed with AI. Just wait until SiriGPT hits every device! It’s just that Apple don’t classify or report this tech capability discretely as “AI”. Having said that, this week we got a glimpse into how Apple are playing the AI long game with the announcement of their new AI-supercharged M3 chips powering the new MacBook Pro. Support for up to 128GB of memory on these chips unlocks workflows previously not possible on a laptop. The Neural Engine is up to 60% faster than in the M1 family of chips, which is super helpful for image and video work. Apple Press Release

Breakthrough AI Learns Concepts from Words, Like Humans

A recent study from New York University revealed an AI model that mimics the way a toddler learns a language. This is different to the conventional way that large language models learn. For example, a toddler can learn and understand that a prairie dog is not a dog, but that’s not easy for an AI. The model was trained to learn from its mistakes in much the same way humans learn a language as children. This breakthrough has significant implications for natural language processing and the development of more human-like AI.

Source: Singularity Hub

Sam Bankman-Fried Found Guilty Of Crypto Fraud

I know it’s not an AI story, but it’s a significant one in the world of crypto and Web3. This week a jury found Sam Bankman-Fried, founder of FTX, one of the largest cryptocurrency exchanges, guilty of defrauding customers and investors. In a nutshell, SBF used customers’ crypto assets for risky investments, buying things and paying millions of dollars for celebrity endorsements. The former crypto executive faces up to 110 years in prison, although he won’t be sentenced until March next year.

Source: The Information

Five Snippets Of News

66% of interactions with customer service chatbots were rated 1 out of 5. Full research paper.


w/Productivity

Supercharged Productivity Tips and Tricks

Raycast

  • If you’re an Apple user then you’re probably using Raycast on your Mac instead of the native Spotlight (why wouldn’t you be?). Raycast is free, but the app has just added a paid layer that integrates with ChatGPT and GPT-4’s real-time web results. For $8/month you can now search the Internet from your Mac without opening a browser.

Brave Browser

  • If privacy is your thing, then look at what Brave, the privacy-centric browser, have done with their new AI chatbot called Leo. It is designed to offer "anonymous and secure" assistance to its users, such as translating, answering questions, summarising web pages, and generating content. The unique selling point of Leo is that none of your conversations are recorded or used in AI training. Leo is based on Meta’s Llama 2 AI model and is free for all desktop users, but a premium version using Anthropic’s Claude Instant model (with faster responses) will be available for $15 per month. (NOTE: I’ve just ditched Microsoft Edge and moved back to Brave, I’ll let you know how I get on.)

Hal9

  • Are you an SME that wants instant answers from your data but doesn’t have data analysts to do it for you? Then meet Hal9, an AI tool that lets you chat with your enterprise databases using secure generative AI. Where Hal9 differs from using ChatGPT’s Code Interpreter for data analysis is that your data never leaves your database. Whereas ChatGPT requires you to upload your commercially sensitive data to be analysed, Hal9 brings the power of AI to you, not the other way round.
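Hal9’s exact internals aren’t public, but the general pattern behind “your data never leaves your database” tools is simple: send the model only the database schema plus your question, get SQL back, and run that SQL locally. Here’s a minimal Python sketch of that pattern using SQLite; `ask_model` is a hypothetical stand-in for the real generative-AI call:

```python
import sqlite3

def get_schema(conn):
    # Pull CREATE TABLE statements from sqlite_master; this is the only
    # thing the model ever sees - never the rows themselves.
    rows = conn.execute(
        "SELECT sql FROM sqlite_master WHERE type = 'table'"
    ).fetchall()
    return "\n".join(r[0] for r in rows)

def ask_model(schema, question):
    # Hypothetical stand-in: a real tool would send `schema` and
    # `question` to an LLM and get a SQL query back.
    return "SELECT COUNT(*) FROM orders WHERE total > 100"

def answer(conn, question):
    # The generated SQL executes locally, so the data stays put.
    sql = ask_model(get_schema(conn), question)
    return conn.execute(sql).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders (total) VALUES (?)",
                 [(50,), (150,), (200,)])
print(answer(conn, "How many orders are over 100?"))  # -> [(2,)]
```

The design choice to note: only metadata crosses the wire, which is why this approach suits commercially sensitive data better than uploading a file to ChatGPT’s Code Interpreter.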


Join the mailing list and get your free beginner's guide to ChatGPT

🙏 Support For Wiser!

Thank you for reading Wiser! If you got value and would like to support what I’m doing, do this:

  • Forward this email: send it to anyone you know who’s interested in the tech economy.
  • Make a donation: go to BuyMeACoffee to make a donation in the form of a virtual cup of coffee, they only cost €2 each. And who doesn’t love a coffee, right?
  • Check out my website: You’ll find links to everything I do at rickhuckstep.com.
