Microsoft briefly blocks ChatGPT for its employees, attributing error to systems test
Microsoft briefly prevented its employees from using ChatGPT and other artificial intelligence (AI) tools on Nov. 9, CNBC reported the same day.
CNBC claimed to have seen a screenshot indicating that the AI-powered chatbot, ChatGPT, was inaccessible on Microsoft’s corporate devices at the time.
Microsoft also updated its internal site, stating that due to security and data concerns, “a number of AI tools are no longer available for employees to use.”
That notice alluded to Microsoft’s investments in ChatGPT parent OpenAI as well as ChatGPT’s own built-in safeguards. However, it warned company employees against using the service and its competitors, as the message continued:
“[ChatGPT] is … a third-party external service … That means you must exercise caution using it due to risks of privacy and security. This goes for any other external AI services, such as Midjourney or Replika, as well.”
CNBC said that Microsoft briefly named the AI-powered graphic design tool Canva in its notice as well, though it later removed that line from the message.
Microsoft blocked services accidentally
CNBC said that Microsoft restored access to ChatGPT after CNBC published its coverage of the incident. A representative from Microsoft told CNBC that the company unintentionally activated the restriction for all employees while testing endpoint control systems, which are designed to contain security threats.
The representative said that Microsoft encourages its employees to use ChatGPT Enterprise and its own Bing Chat Enterprise, noting that those services offer a high degree of privacy and security.
The news comes amid widespread privacy and security concerns around AI in the U.S. and abroad. While Microsoft's restriction initially appeared to signal the company's disapproval of the current state of AI security, the block was in fact an accidental byproduct of testing systems intended to protect against future security incidents.