st1nger :unverified: 🏴☠️ :linux: :freebsd:<p>Researchers have uncovered a new supply chain attack called <a href="https://infosec.exchange/tags/Slopsquatting" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Slopsquatting</span></a>, in which threat actors exploit hallucinated, non-existent package names generated by <a href="https://infosec.exchange/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> coding tools like <a href="https://infosec.exchange/tags/GPT4" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>GPT4</span></a> and <a href="https://infosec.exchange/tags/CodeLlama" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>CodeLlama</span></a></p><p>These believable yet fake packages -- 19.7% of those recommended in test samples, some 205,000 unique names -- do not actually exist and can be registered by attackers to distribute malicious code.</p><p>Open-source models -- like <a href="https://infosec.exchange/tags/DeepSeek" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>DeepSeek</span></a> and <a href="https://infosec.exchange/tags/WizardCoder" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>WizardCoder</span></a> -- hallucinated more frequently, at 21.7% on average, than commercial ones like GPT-4 (5.2%).</p><p>We Have a Package for You! A Comprehensive Analysis of Package Hallucinations by Code Generating LLMs (PDF): <a href="https://arxiv.org/pdf/2406.10279" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="">arxiv.org/pdf/2406.10279</span><span class="invisible"></span></a></p>
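<p>A practical first line of defense against slopsquatting is to verify that an LLM-suggested dependency actually exists in the registry before installing it. The Python sketch below is a minimal, hypothetical example (not from the paper) that checks a name against PyPI's public JSON API; the second package name in the demo loop is invented for illustration:</p><pre><code>import urllib.error
import urllib.request

# Real PyPI endpoint: returns HTTP 200 for registered packages, 404 otherwise.
PYPI_JSON_URL = "https://pypi.org/pypi/{name}/json"

def package_exists(name: str) -> bool:
    """Return True if `name` is a registered package on PyPI."""
    try:
        with urllib.request.urlopen(PYPI_JSON_URL.format(name=name), timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:  # unregistered name: a slopsquatting candidate
            return False
        raise  # surface other HTTP errors (e.g. rate limiting)

# Hypothetical LLM-suggested dependencies; the second name is made up.
for pkg in ["requests", "fastjson-utils-pro"]:
    if package_exists(pkg):
        print(f"{pkg}: exists on PyPI (still vet before installing)")
    else:
        print(f"{pkg}: NOT registered -- likely hallucinated, do not install")
</code></pre><p>Note that existence alone proves nothing about safety: an attacker may already have registered a commonly hallucinated name, so a package that does exist still warrants vetting (age, download counts, maintainer history) before use.</p>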