Microsoft Employees Temporarily Lose Access to ChatGPT
In a move that raised suspicions of potential security issues, Microsoft briefly cut internal access to ChatGPT on Thursday. Employees trying to reach the AI chatbot from company devices were redirected to a message stating that Microsoft had blocked access.
"Due to security and data concerns, a number of AI tools are no longer available for employees to use", read an update on Microsoft's internal website.
However, the lockout was short-lived, and according to a clarification later shared by Microsoft, it was unintended.
Microsoft Quashes Security Fear Rumors
The sudden pulling of internal access led to widespread rumors that the tech giant had made the move over potential cybersecurity fears. This isn't too surprising considering the timing of the lockout, which came just a day after OpenAI announced that its APIs and ChatGPT had weathered a major DDoS attack.
At OpenAI's developer conference DevDay earlier this week, CEO Sam Altman called the company's collaboration with Microsoft the "best partnership in tech". However, Microsoft later quashed these theories, stating that the lockout had been enabled inadvertently while testing LLM endpoint control systems.
Microsoft cutting off internal access to ChatGPT over security fears would indeed have been ironic, considering the tech giant has already poured $13 billion into OpenAI.
The two companies work very closely together, with Microsoft constantly integrating OpenAI's generative AI tools into its vast portfolio of Windows and Office software.
The update posted on Microsoft's internal website reminded employees to exercise caution while using ChatGPT.
"While it is true that Microsoft has invested in OpenAI, and that ChatGPT has built-in safeguards to prevent improper use, the website is nevertheless a third-party external service", it read, adding that the same warning also applied to other AI services such as Midjourney or Replika.
However, when responding to queries about the accidental lockout, a Microsoft spokesperson said that the company encourages its customers and employees to use ChatGPT Enterprise and Bing Chat Enterprise for enhanced privacy and security.
The widespread suspicion that Microsoft was turning its back on ChatGPT further fueled rumors that OpenAI had blocked internal access to Office 365 in retaliation. However, Altman himself took to microblogging platform X to address these rumors, calling them "unfounded".
Potential Security Threats Remain
While this was a case of an accidental lockout rather than a response to a cybersecurity incident, potential threats still remain when it comes to the use of AI. Anonymous Sudan, a hacktivist group with links to Russia, claimed responsibility for the attack that brought ChatGPT down this week.
Back in January, a high-ranking engineer at Microsoft wrote that while employees could use ChatGPT, they should refrain from feeding the chatbot any confidential information.
The UK government's spy agency GCHQ issued similar warnings earlier this year, cautioning against feeding large language models (LLMs) with sensitive corporate data.
Samsung banned its staff from using ChatGPT in March after sensitive information, including details of an in-development semiconductor, was leaked to the AI service within a span of three weeks.
The post Microsoft Employees Temporarily Lose Access to ChatGPT appeared first on The Tech Report.