Ariella B.’s Post

The greatest detective in fiction, Sherlock Holmes, believed that memory is limited, so he restricted his knowledge of facts to those he considered relevant. It's debatable whether humans should approach learning this way, but there may be something to it when it comes to AI. Building selective forgetting into pretraining can push a model to focus on meaning independently of any particular language, so it picks up additional languages more easily, or so a recent study suggests.

From the abstract: "Pretrained language models (PLMs) are today the primary model for natural language processing. Despite their impressive downstream performance, it can be difficult to apply PLMs to new languages, a barrier to making their capabilities universally accessible. While prior work has shown it possible to address this issue by learning a new embedding layer for the new language, doing so is both data and compute inefficient. We propose to use an active forgetting mechanism during pretraining, as a simple way of creating PLMs that can quickly adapt to new languages. Concretely, by resetting the embedding layer every K updates during pretraining, we encourage the PLM to improve its ability of learning new embeddings within a limited number of updates, similar to a meta-learning effect. Experiments with RoBERTa show that models pretrained with our forgetting mechanism not only demonstrate faster convergence during language adaptation, but also outperform standard ones in a low-data regime, particularly for languages that are distant from English."

See link in comment.
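For readers who want a feel for the mechanism, here is a minimal sketch of what the "reset the embedding layer every K updates" step could look like in a PyTorch-style pretraining loop. The model, dataloader, optimizer, loss interface, the `model.embeddings` attribute, and the value of K are illustrative assumptions, not the authors' actual implementation.

```python
# Sketch of the "active forgetting" idea: periodically re-initialize the token
# embedding layer during pretraining so the transformer body learns to work
# with fresh embeddings, rather than depending on one fixed embedding layout.
from torch import nn

def pretrain_with_active_forgetting(model, dataloader, optimizer, K=1000, max_steps=100_000):
    """Assumes `model.embeddings` is an nn.Embedding and the forward pass
    returns an object with a .loss attribute (HuggingFace-style)."""
    step = 0
    for batch in dataloader:
        loss = model(**batch).loss      # standard masked-LM style objective
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

        step += 1
        if step % K == 0:
            # "Forget": reset the embedding weights to a fresh random init,
            # so the rest of the network must relearn to use new embeddings.
            nn.init.normal_(model.embeddings.weight, mean=0.0, std=0.02)

        if step >= max_steps:
            break
```

The same reset, applied later to a new language's embedding layer, is what the paper reports makes adaptation faster, especially with little data.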
