    Selective Forgetting Can Help AI Learn Better

    By News Room · March 11, 2024

    The original version of this story appeared in Quanta Magazine.

    A team of computer scientists has created a nimbler, more flexible type of machine learning model. The trick: It must periodically forget what it knows. And while this new approach won’t displace the huge models that undergird the biggest apps, it could reveal more about how these programs understand language.

    The new research marks “a significant advance in the field,” said Jea Kwon, an AI engineer at the Institute for Basic Science in South Korea.

    The AI language engines in use today are mostly powered by artificial neural networks. Each “neuron” in the network is a mathematical function that receives signals from other such neurons, runs some calculations, and sends signals on through multiple layers of neurons. Initially the flow of information is more or less random, but through training, the information flow between neurons improves as the network adapts to the training data. If an AI researcher wants to create a bilingual model, for example, she would train the model with a big pile of text from both languages, which would adjust the connections between neurons in such a way as to relate the text in one language with equivalent words in the other.
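
    To make the "neuron as a mathematical function" idea concrete, here is a toy sketch in Python (an illustration written for this story, not code from the research): a single layer of neurons, each of which weights its incoming signals, sums them, and applies a nonlinearity before passing the result on to the next layer.

        import numpy as np

        rng = np.random.default_rng(0)

        def layer(inputs, weights, biases):
            # Each row of `weights` holds one neuron's incoming connection
            # strengths; training a network means adjusting these values.
            return np.tanh(weights @ inputs + biases)

        inputs = rng.normal(size=4)        # signals from the previous layer
        weights = rng.normal(size=(3, 4))  # 3 neurons, 4 inputs each
        biases = np.zeros(3)

        print(layer(inputs, weights, biases))  # signals sent on to the next layer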

    But this training process takes a lot of computing power. If the model doesn’t work very well, or if the user’s needs change later on, it’s hard to adapt it. “Say you have a model that has 100 languages, but imagine that one language you want is not covered,” said Mikel Artetxe, a coauthor of the new research and founder of the AI startup Reka. “You could start over from scratch, but it’s not ideal.”

    Artetxe and his colleagues have tried to circumvent these limitations. A few years ago, Artetxe and others trained a neural network in one language, then erased what it knew about the building blocks of words, called tokens. These are stored in the first layer of the neural network, called the embedding layer. They left all the other layers of the model alone. After erasing the tokens of the first language, they retrained the model on the second language, which filled the embedding layer with new tokens from that language.
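
    The researchers' own code isn't reproduced here, but the recipe described above can be sketched in a few lines of PyTorch. Everything in this example is hypothetical (the TinyLM model, its sizes, and the assumption that both languages use tokenizers with the same vocabulary size); it only illustrates the key move: re-initialize the embedding layer while every deeper layer keeps its trained weights.

        import torch
        import torch.nn as nn

        class TinyLM(nn.Module):
            """Hypothetical stand-in for a language model, for illustration only."""
            def __init__(self, vocab_size=10_000, dim=64):
                super().__init__()
                self.embedding = nn.Embedding(vocab_size, dim)    # first layer: token embeddings
                self.deeper = nn.GRU(dim, dim, batch_first=True)  # stand-in for the deeper layers
                self.head = nn.Linear(dim, vocab_size)

            def forward(self, tokens):
                hidden, _ = self.deeper(self.embedding(tokens))
                return self.head(hidden)

        model = TinyLM()
        # ... train on text from the first language here ...

        # "Forget" the first language's tokens: re-initialize only the
        # embedding layer's weights; all deeper layers stay as trained.
        nn.init.normal_(model.embedding.weight, std=0.02)

        # ... retrain on text from the second language; the deeper layers'
        # more abstract knowledge is reused rather than relearned ...
        tokens = torch.randint(0, 10_000, (2, 5))  # dummy batch: 2 sequences of 5 tokens
        logits = model(tokens)                     # shape: (2, 5, vocab_size)

    The only parameters touched are those of the embedding layer; the GRU standing in for the deeper layers keeps whatever structure it learned from the first language, which is exactly the property the retraining step exploits.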

    Even though the model contained mismatched information, the retraining worked: The model could learn and process the new language. The researchers surmised that while the embedding layer stored information specific to the words used in the language, the deeper levels of the network stored more abstract information about the concepts behind human languages, which then helped the model learn the second language.

    “We live in the same world. We conceptualize the same things with different words” in different languages, said Yihong Chen, the lead author of the recent paper. “That’s why you have this same high-level reasoning in the model. An apple is something sweet and juicy, instead of just a word.”
