So-called “unlearning” techniques are used to make a generative AI model forget specific, undesirable information it picked up from its training data, such as sensitive private data or copyrighted material. But current unlearning techniques are a double-edged sword: they could make a model like OpenAI’s GPT-4o or Meta’s Llama 3.1 405B much less capable of answering […]
© 2024 TechCrunch. All rights reserved. For personal use only.
Source: https://techcrunch.com/2024/07/29/making-ai-models-forget-undesirable-data-hurts-their-performance/