This AI newsletter is all you need #41
What happened this week in AI by Louie
This week, the focus was on AI safety, privacy, and regulation. The emergence of the next generation of AI models brings many advantages, but the democratization, accessibility, and affordability of generative AI tools, combined with the increased capabilities of LLMs, have created significant potential for misuse and misinformation, whether produced by users or by the systems themselves. The growing gap between the pace of capability growth and the development of safety measures and regulation makes the discussion of AI safety all the more pressing.
Last week, a radical step toward AI safety was taken with an open letter from the Future of Life Institute (FLI). The letter, signed by over 50,000 people, urges all AI labs to pause the training of AI systems more powerful than GPT-4 for at least six months. It highlights the risks of AI, criticizing the "out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control." It also emphasizes the lack of appropriate planning and management for this potentially highly disruptive technology.
The FLI letter has sparked a wide range of opinions on whether LLMs and AI should be regulated, and on what the actual risks are. Responses have come in both for and against, with individuals highlighting the risks they perceive. Eliezer Yudkowsky, a researcher at the Machine Intelligence Research Institute, wrote an article expressing support for the letter's intentions but explaining why he did not sign it: he believes the letter understates the severity of the situation and does not request enough action to address it. In contrast, Yann LeCun has been dismissive of the need for regulation, responding with an "Okay doomer..." attitude.
Another recent development related to AI safety is Italy's ban on the use of ChatGPT over privacy concerns, citing OpenAI's non-compliance with EU data protection regulations (GDPR). Consequently, OpenAI has been compelled to implement a complete geo-block on the usage of ChatGPT within Italy.
At Towards AI, we see several potential risks associated with AI:
1. The amplification of misinformation through automated propaganda, deepfakes, and other tools that empower bad actors.
2. Over-reliance and overconfidence in systems that still make mistakes.
3. Conflicts between AI systems and existing laws and regulations, such as copyright and GDPR.
4. Social and economic disruption resulting from rapid AI adoption and its impact on jobs and existing industries.
5. Existential risks from superintelligence or misaligned AGI.
While the open letter makes valid points about AI risks and the need for regulation, we don't believe a pause in AI development would be effective or desirable, as it's difficult to ensure that other countries, such as China, wouldn't continue to develop these models. AI progress and adoption are inevitable, and countries that limit its use are likely to fall behind. However, we do believe that more thought, care, and investment should go into optimizing the odds of positive AI outcomes while minimizing risks. We also think AI should be regulated, and that governments should establish new departments, policies, and internal expertise to preempt and manage some of these risks. Although it's difficult to estimate the likelihood and timescale of misaligned AGI risks, given what's at stake, it makes sense to invest heavily in researching and managing them, even if the odds are small.
- Louie Peters — Towards AI Co-founder and CEO
Hottest News
Twitter has made its tweet recommendation algorithm available on GitHub, providing insight into the factors that determine whether a tweet appears on a user's timeline. An accompanying blog post introduces how the algorithm selects tweets for a user's timeline.
Time magazine published an opinion piece by Eliezer Yudkowsky in response to the letter by FLI. He stated, “I refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it.” According to him, AI poses an existential threat, and we are not adequately prepared to deal with it. Therefore, he argued that it is necessary to "shut it all down."
OpenAI's plugins expand ChatGPT's capabilities to interact with the Internet, enabling functions like flight booking, grocery ordering, web browsing, and more. Each plugin describes an external API to ChatGPT, telling the model how and when to call online services. However, some AI researchers worry that giving AI models access to external systems can lead to harm, without any need for consciousness or sentience.
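For readers curious about the mechanics, below is a minimal sketch of the kind of manifest a plugin exposes, written as a Python dict for illustration. The field names follow OpenAI's plugin documentation at launch, but the grocery service, URLs, and descriptions are hypothetical placeholders, not a real plugin.

```python
import json

# Illustrative shape of a ChatGPT plugin manifest (ai-plugin.json).
# Field names follow OpenAI's launch documentation; the service itself,
# the URLs, and the descriptions are hypothetical placeholders.
plugin_manifest = {
    "schema_version": "v1",
    "name_for_human": "Grocery Helper",
    "name_for_model": "grocery_helper",
    "description_for_human": "Order groceries from your local store.",
    # The model reads this description to decide when to call the plugin.
    "description_for_model": "Search grocery items and place orders. "
                             "Use when the user asks about buying food.",
    "auth": {"type": "none"},
    # ChatGPT fetches this OpenAPI spec to learn the available endpoints,
    # their parameters, and their response shapes.
    "api": {"type": "openapi", "url": "https://example.com/openapi.yaml"},
    "logo_url": "https://example.com/logo.png",
    "contact_email": "support@example.com",
    "legal_info_url": "https://example.com/legal",
}

print(json.dumps(plugin_manifest, indent=2))
```

In other words, a plugin is less a program than a machine-readable description of someone else's API that the model learns to call.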
One of GPT-4's most prominent traits is the confidence with which it responds to queries. That confidence is both a feature and a bug: GPT-4's developers acknowledge in the technical report that the model can still make basic reasoning mistakes that are inconsistent with its proficiency across numerous domains.
The Italian data protection agency has ordered OpenAI to block ChatGPT in Italy, citing unlawful data gathering. Their main concern is privacy violations, arguing that OpenAI is non-compliant with EU data protection regulations (GDPR). OpenAI has complied with the order by disabling ChatGPT for users in Italy.
Three 5-minute reads/videos to keep you learning
In an experiment, AI was used to generate a comprehensive marketing campaign in just 30 minutes for a new educational game launch. The AI conducted market research, developed a website and social media campaign, and more. This post explores the potential and disruptive power of AI in marketing.
This article explores the significant changes that LLMs may enable in the creation and distribution of software, as well as in how people interact with software. It answers various questions on topics such as interaction models, software customization, intent specification, and more.
This is an interview with Sander Schulhoff, the creator of learnprompting.org, the largest online resource for prompting. It explores the exciting skill of prompting, which can lead to various opportunities and enhance productivity. It discusses the significance of learning this skill and provides tips for improving it.
This article provides a summary of recent work from Stanford aimed at significantly increasing the context window for language models. By enabling longer prompts and outputs, this advancement may lead to new possibilities in tasks such as summarizing entire books, editing entire repositories of code, and generating multimodal videos.
This article offers an overview of the history of Generative Pre-trained Transformer (GPT) research, emphasizing the latest state-of-the-art models and their distinctions. It showcases how the current GPT research is leading to significant advancements in the field.
Papers & Repositories
Vicuna-13B is an open-source chatbot trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT. In a preliminary evaluation using GPT-4 as a judge, Vicuna-13B achieves more than 90% of the quality of OpenAI's ChatGPT and Google's Bard while outperforming other models, such as LLaMA and Stanford Alpaca, in over 90% of cases.
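As a rough illustration of the "GPT-4 as a judge" setup, the sketch below asks GPT-4 to compare two answers via the openai package's chat interface (the 0.x API current at the time of writing). The prompt wording and scoring scale are our assumptions, not Vicuna's actual evaluation code.

```python
import openai  # pip install openai; assumes openai.api_key is set

JUDGE_PROMPT = """You are a helpful and precise assistant for checking the
quality of two AI answers. Rate each answer on a scale of 1-10 and explain.

Question: {question}
Answer A: {answer_a}
Answer B: {answer_b}"""

def judge(question: str, answer_a: str, answer_b: str) -> str:
    """Ask GPT-4 to compare two model answers. Illustrative only; the
    prompt and scoring scale are assumptions, not Vicuna's code."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": JUDGE_PROMPT.format(
            question=question, answer_a=answer_a, answer_b=answer_b)}],
        temperature=0,  # keep judgments as repeatable as possible
    )
    return response["choices"][0]["message"]["content"]
```

The appeal of this approach is scale: one strong model can rank thousands of answer pairs far faster and more cheaply than human raters, though judge biases remain an open question.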
This paper introduces a new pre-training paradigm that improves both training-data efficiency and the capabilities of LMs on the infilling task. Its effectiveness is demonstrated through extensive experiments on both programming and natural language models, where it outperforms strong baselines.
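For context on what infilling pre-training involves, here is a sketch of the fill-in-the-middle (FIM) transform, a common baseline in this line of work; it is not necessarily the paradigm this paper proposes, and the sentinel token names are placeholders.

```python
import random

# Placeholder sentinel tokens; real implementations reserve dedicated
# vocabulary entries for these.
PRE, SUF, MID = "<PRE>", "<SUF>", "<MID>"

def fim_transform(document: str, rng: random.Random) -> str:
    """Rearrange a document so a left-to-right LM learns to infill.

    The text is split into (prefix, middle, suffix); the model is shown
    the prefix and suffix first and trained to generate the middle.
    """
    a, b = sorted(rng.sample(range(len(document)), 2))
    prefix, middle, suffix = document[:a], document[a:b], document[b:]
    return f"{PRE}{prefix}{SUF}{suffix}{MID}{middle}"

rng = random.Random(0)
print(fim_transform("def add(x, y):\n    return x + y\n", rng))
```

Training on such rearranged sequences lets an ordinary autoregressive model fill holes in code or text, which is exactly the capability this research direction targets.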
LLaMA-Adapter is an efficient method for fine-tuning models into instruction-following ones using learnable prompts. With multi-modal inputs, it produces high-quality responses and achieves strong reasoning capabilities. Trained on 52K self-instruct demonstrations, LLaMA-Adapter introduces only 1.2M learnable parameters on top of the frozen LLaMA 7B model and takes less than one hour to fine-tune on 8 A100 GPUs.
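A rough sketch of the core idea, zero-initialized gating on learnable prompt tokens attached to a frozen attention layer, is shown below in PyTorch. The dimensions, names, and exact wiring are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ZeroInitPromptAttention(nn.Module):
    """Sketch of LLaMA-Adapter-style learnable prompts with a
    zero-initialized gate on one frozen attention layer. Dimensions,
    names, and wiring are illustrative, not the paper's code."""

    def __init__(self, dim: int = 512, prompt_len: int = 10, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        for p in self.attn.parameters():
            p.requires_grad = False  # base model weights stay frozen
        # The only new trainable parameters: the prompt tokens and a gate.
        # The gate starts at zero, so at initialization the layer behaves
        # exactly like the frozen model; the prompts phase in as training
        # moves the gate away from zero.
        self.prompt = nn.Parameter(torch.randn(1, prompt_len, dim) * 0.02)
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        base, _ = self.attn(x, x, x)     # standard self-attention
        p = self.prompt.expand(x.size(0), -1, -1)
        adapted, _ = self.attn(x, p, p)  # attend to the learnable prompts
        return base + torch.tanh(self.gate) * adapted

layer = ZeroInitPromptAttention()
out = layer(torch.randn(2, 16, 512))  # (batch, seq_len, dim)
```

Because only the prompts and gates train, the parameter count stays tiny, which is how the method fits 1.2M trainable parameters onto a 7B-parameter frozen backbone.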
This paper investigates the potential of large language models (LLMs) for text annotation tasks, specifically focusing on ChatGPT. The paper shows that ChatGPT zero-shot classifications, without any additional training, outperform MTurk annotations and achieve this at a significantly lower cost.
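A minimal sketch of this zero-shot annotation setup is shown below, again using the 0.x openai chat interface; the label set and prompt wording are hypothetical stand-ins, not the paper's exact materials.

```python
import openai  # pip install openai; assumes openai.api_key is set

LABELS = ["relevant", "irrelevant"]  # hypothetical annotation scheme

def annotate(tweet: str) -> str:
    """Zero-shot label a tweet with ChatGPT, illustrating the approach
    studied in the paper; the prompt and labels are assumptions."""
    prompt = (
        "Classify the following tweet as 'relevant' or 'irrelevant' "
        "to content moderation policy. Answer with one word.\n\n"
        f"Tweet: {tweet}"
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic-ish labels for annotation
    )
    return response["choices"][0]["message"]["content"].strip().lower()
```

At fractions of a cent per label, this is the cost advantage over crowd-workers that the paper quantifies.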
This paper provides a survey of Artificial Intelligence Generated Content (AIGC), highlighting recent advancements in complex modeling and large datasets, and exploring new ways to integrate technologies such as reinforcement learning. It also offers a comprehensive review of the history of generative models, covering both unimodal and multimodal interaction.
Enjoy these papers and news summaries? Get a daily recap in your inbox!
The Learn AI Together Community section!
Weekly AI Podcast
Louis Bouchard has launched a weekly podcast aimed at demystifying the various roles in the AI industry and discussing interesting AI topics with expert guests. The podcast is available on YouTube, Spotify, and Apple Podcasts. In the latest episode, Louis interviews Sander Schulhoff, the creator of Learn Prompting, the most comprehensive guide on prompt engineering. As shared in our learning section above, the interview demystifies prompting and condenses it into a one-hour discussion. That is the goal of the podcast: each week, an expert helps demystify a specific AI topic, sub-field, or role, sharing knowledge they have worked hard to gather.
A small teaser for the next episode: it will be about self-driving cars!
Meme of the week!
Meme shared by neuralink#7014
Featured Community post from the Discord
Oliver Z#1100 has created a Chrome extension called TwOp that can generate AI-powered social media posts by entering topics, keywords, themes, and desired tones. The extension is open-source and available for download on the Chrome Web Store and GitHub. Check it out and support a fellow community member. Share your feedback or questions in the thread here.
AI poll of the week!
TAI Curated section
Article of the week
In this article, the author provides an in-depth tour of PaLM-E, Google's latest publication, described as an embodied multimodal language model. This means it can comprehend various types of data, including text and images, building on the ViT vision model and the PaLM language model.
Our must-read articles
If you are interested in publishing with Towards AI, check our guidelines and sign up. We will publish your work in our network if it meets our editorial policies and standards.
Job offers
Interested in sharing a job opportunity here? Contact sponsors@towardsai.net.
If you are preparing for your next machine learning interview, don't hesitate to check out our leading interview preparation website, Confetti AI!