Microsoft AI Researchers Accidentally Exposed Terabytes of Internal Sensitive Data

by msmash from Slashdot on (#6EVRG)
Microsoft AI researchers accidentally exposed tens of terabytes of sensitive data, including private keys and passwords, while publishing a storage bucket of open source training data on GitHub. From a report: In research shared with TechCrunch, cloud security startup Wiz said it discovered a GitHub repository belonging to Microsoft's AI research division as part of its ongoing work into the accidental exposure of cloud-hosted data. Readers of the GitHub repository, which provided open source code and AI models for image recognition, were instructed to download the models from an Azure Storage URL. However, Wiz found that the URL was configured to grant permissions on the entire storage account, inadvertently exposing additional private data. That data comprised 38 terabytes of sensitive information, including backups of two Microsoft employees' personal computers, passwords to Microsoft services, secret keys, and more than 30,000 internal Microsoft Teams messages from hundreds of Microsoft employees.
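The over-broad URL described above was, per Wiz's write-up, an Azure Shared Access Signature (SAS) link: a storage URL whose query string encodes what it grants. As a rough illustration of how such a link can be sanity-checked, here is a minimal sketch (the `audit_sas_url` helper is hypothetical, not from the article) that inspects two documented SAS query parameters: `srt` (signed resource types), which appears only on account-level SAS tokens rather than single-blob ones, and `sp` (signed permissions), which lists what the bearer may do.

```python
from urllib.parse import urlparse, parse_qs

def audit_sas_url(url: str) -> list[str]:
    """Return warnings for an Azure Storage SAS URL that looks over-broad.

    Heuristics (based on Azure's documented SAS query parameters):
      - 'srt' is present only on *account* SAS tokens, which can reach the
        whole storage account rather than one blob or container.
      - 'sp' lists granted permissions (r=read, w=write, d=delete, l=list, ...);
        anything beyond read-only is flagged.
    """
    params = parse_qs(urlparse(url).query)
    warnings = []

    if params.get("srt", [""])[0]:
        warnings.append("account-scoped SAS: grants access beyond a single blob/container")

    extra = set(params.get("sp", [""])[0]) - {"r"}  # permissions beyond read
    if extra:
        warnings.append("non-read permissions granted: " + "".join(sorted(extra)))

    return warnings

# A hypothetical account-wide, read-write-list SAS URL (the kind Wiz describes)
broad = ("https://example.blob.core.windows.net/models"
         "?sv=2021-08-06&ss=b&srt=sco&sp=rwl&se=2049-01-01&sig=REDACTED")
# A hypothetical single-blob, read-only SAS URL
narrow = ("https://example.blob.core.windows.net/models/model.bin"
          "?sv=2021-08-06&sr=b&sp=r&se=2024-01-01&sig=REDACTED")

print(audit_sas_url(broad))   # flags both the account scope and the rwl permissions
print(audit_sas_url(narrow))  # no warnings
```

A narrowly scoped, read-only, short-lived service SAS for the specific model blobs would have let readers download the files without exposing the rest of the account.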



External Content
Source RSS or Atom Feed
Feed Location https://rss.slashdot.org/Slashdot/slashdotMain
Feed Title Slashdot
Feed Link https://slashdot.org/
Feed Copyright Copyright Slashdot Media. All Rights Reserved.