“Stop GPT” – Apple’s Curtain Call on ChatGPT’s Internal Use
Apple Inc. has joined the growing list of corporations banning the internal use of ChatGPT. The irony is palpable, as the decision coincides with OpenAI's recent launch of a ChatGPT mobile app for iOS.
An internal Apple document reviewed by a trusted source informed employees of the ban.

The Apple mandate aligns with concerns raised by corporations such as Amazon and leading banks including JPMorgan Chase, Bank of America, Citigroup, and Deutsche Bank, all of which have proscribed ChatGPT and similar products internally.
The apprehensions hinge on the possible leakage of sensitive corporate information, a risk compounded by the fact that data submitted to these AI systems may be used to improve the bots' proficiency.
ChatGPT is Not Alone, Nor is Apple

The ban is not limited to ChatGPT. Apple has extended it to Copilot, GitHub's AI-powered code-completion tool. Fueling the rumor mill, the move could signal Apple's venture into developing a proprietary large language model (LLM), one that may compete with the likes of ChatGPT and Google Bard.
Apple's competitor Samsung, on the other hand, has gone back and forth, imposing, lifting, and then reinstating a ban on ChatGPT.
According to sources, Samsung staff leveraged ChatGPT to fix bugs in proprietary source code and to turn internal meeting notes into minutes, effectively handing sensitive material to an external service. Samsung reestablished its ban earlier this month to preclude similar episodes.
Such incidents illustrate the conundrum facing corporations regarding LLM bots. The UK's spy agency, GCHQ, has flagged the risk that these models may inadvertently divulge confidential business data when other users later submit similar queries.

In addition, the agency warns that AI providers themselves could review the queries, further increasing the risk of exposing corporate secrets.
The Acknowledgement and Response

Notably, OpenAI acknowledged a bug in the redis-py library a few months back. The bug made parts of some users' conversations with ChatGPT visible to other users. The incident underlines the potential risks associated with LLM chatbots.
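For readers curious how a caching bug can leak one user's data to another, the sketch below is a simplified, hypothetical illustration of that class of failure. It does not reproduce the actual redis-py issue; the class and key names are purely illustrative.

```python
# Hypothetical sketch: a shared connection whose pending response is never read
# (e.g. because the original request was cancelled) can hand the *next* caller
# data that belongs to a different user.
from dataclasses import dataclass, field
from collections import deque


@dataclass
class SharedConnection:
    """Stands in for a pooled cache connection that queues responses in order."""
    _responses: deque = field(default_factory=deque)

    def send_request(self, key: str) -> None:
        # The backend eventually answers with the value stored under `key`.
        self._responses.append(f"cached data for {key}")

    def read_response(self) -> str:
        # Returns the oldest unread response left on this connection.
        return self._responses.popleft()


conn = SharedConnection()

# User A's request is sent, but the caller is cancelled before it reads the
# response, leaving that response queued on the shared connection.
conn.send_request("user_a:conversation")

# User B reuses the same pooled connection, sends a request, and reads what it
# assumes is its own response, but receives user A's data instead.
conn.send_request("user_b:conversation")
print(conn.read_response())  # prints "cached data for user_a:conversation"
```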
"ChatGPT warns on login that conversations 'may be reviewed by our AI trainers'... Users should have zero expectations of privacy when using the ChatGPT web demo."
Vlad Tushkanov, Lead Data Scientist at Kaspersky

OpenAI recently launched a new feature to disable chat history, a move that seemingly signals the organization's effort to handle privacy concerns more effectively.
The new feature is expected to keep those conversations from being used to train OpenAI's models. However, OpenAI will still retain conversations for 30 days, during which the company can review them to monitor for abuse.
Furthermore, OpenAI announced plans to introduce a business version of ChatGPT, which will likely give businesses more control over how their data is used.

Although OpenAI is developing several new measures to ensure data security, large organizations do not yet appear satisfied with these efforts.
Apple's recent move, at least, points prominently in that direction.
With the growing popularity of AI, issues surrounding privacy, data security, and ethical use are also intensifying. This underscores the need for robust safeguards if AI is to become an ethical and reliable tool.
How OpenAI chooses to address the security concerns and bolster ChatGPT's reliability remains to be seen.