According to Bloomberg News, Samsung is banning employees from using generative artificial intelligence tools such as ChatGPT on company-owned devices and on the company’s internal networks, citing the security risks involved.

A memo distributed to employees describes the restriction as temporary, in place until the company can “create a secure environment” in which generative AI tools can be used without risk.

Bloomberg reports that the company is concerned that data sent to AI platforms is stored on external servers, making it impossible to retrieve or delete.

ChatGPT is used worldwide to summarize reports, but that means disclosing sensitive information.


People around the world use ChatGPT to summarize reports, but doing so means sharing sensitive information that may then be accessible to OpenAI. How much privacy a user retains depends on how they access the service.

According to The Verge, when a company uses ChatGPT’s API, conversations with the chatbot are not visible to OpenAI’s support staff and are not used to train the company’s models. That is not the case, however, for text entered into the general web interface with its default settings.
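
For illustration, here is a minimal sketch of what API-based access looks like using OpenAI’s official Python client; the model name, prompt, and report text are placeholders, and the data-handling difference described above is a matter of OpenAI’s policy rather than anything enforced by the code itself.

```python
# Minimal sketch: sending a report-summarization request through OpenAI's API
# instead of the web interface. Assumes the `openai` Python package (v1+) is
# installed and the OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

report_text = "Q1 figures: revenue up 4%, costs flat..."  # placeholder report

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {"role": "system", "content": "Summarize the following report in three bullet points."},
        {"role": "user", "content": report_text},
    ],
)

print(response.choices[0].message.content)
```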

Samsung is not the only company moving in this direction. Bank of America, Citigroup, Deutsche Bank, Goldman Sachs, and Wells Fargo are keeping an eye on the use of such tools and have taken various steps in response, and JPMorgan Chase is watching the situation as well. New York City schools have banned ChatGPT over concerns about the spread of misinformation.
