Microsoft Briefly Restricted Employee Access To OpenAI's ChatGPT, Citing Security Concerns
Microsoft has invested billions of dollars in OpenAI. But for a brief time on Thursday, employees of the software company weren't allowed to use the startup's most famous product, ChatGPT, CNBC reported. From a report: "Due to security and data concerns a number of AI tools are no longer available for employees to use," Microsoft said in an update on an internal website. "While it is true that Microsoft has invested in OpenAI, and that ChatGPT has built-in safeguards to prevent improper use, the website is nevertheless a third-party external service," Microsoft said. "That means you must exercise caution using it due to risks of privacy and security. This goes for any other external AI services, such as Midjourney or Replika, as well." The company initially said it was banning ChatGPT and the design software Canva, but later removed a line in the advisory that named those products.

After initial publication of this story, Microsoft reinstated access to ChatGPT. In a statement to CNBC, Microsoft said the temporary block on ChatGPT was a mistake that resulted from a test of systems for large language models. "We were testing endpoint control systems for LLMs and inadvertently turned them on for all employees," a spokesperson said. "We restored service shortly after we identified our error. As we have said previously, we encourage employees and customers to use services like Bing Chat Enterprise and ChatGPT Enterprise that come with greater levels of privacy and security protections."