Skip to main content

Slack patches potential AI security issue

Manage Members in Slack on a laptop.
Slack

Update: Slack has published an update, claiming to have “deployed a patch to address the reported issue,” and that there isn’t currently any evidence that customer data have been accessed without authorization. Here’s the official statement from Slack that was posted on its blog:

When we became aware of the report, we launched an investigation into the described scenario where, under very limited and specific circumstances, a malicious actor with an existing account in the same Slack workspace could phish users for certain data. We’ve deployed a patch to address the issue and have no evidence at this time of unauthorized access to customer data.

Below is the original article that was published.

Recommended Videos

When ChatGTP was added to Slack, it was meant to make users’ lives easier by summarizing conversations, drafting quick replies, and more. However, according to security firm PromptArmor, trying to complete these tasks and more could breach your private conversations using a method called “prompt injection.”

The security firm warns that by summarizing conversations, it can also access private direct messages and deceive other Slack users into phishing. Slack also lets users request grab data from private and public channels, even if the user has not joined them. What sounds even scarier is that the Slack user does not need to be in the channel for the attack to function.

In theory, the attack starts with a Slack user tricking the Slack AI into disclosing a private API key by making a public Slack channel with a malicious prompt. The newly created prompt tells the AI to swap the word “confetti” with the API key and send it to a particular URL when someone asks for it.

The situation has two parts: Slack updated the AI system to scrape data from file uploads and direct messages. Second is a method named “prompt injection,” which PromptArmor proved can make malicious links that may phish users.

The technique can trick the app into bypassing its normal restrictions by modifying its core instructions. Therefore, PromptArmor goes on to say, “Prompt injection occurs because a [large language model] cannot distinguish between the “system prompt” created by a developer and the rest of the context that is appended to the query. As such, if Slack AI ingests any instruction via a message, if that instruction is malicious, Slack AI has a high likelihood of following that instruction instead of, or in addition to, the user query.”

To add insult to injury, the user’s files also become targets, and the attacker who wants your files doesn’t even have to be in the Slack Workspace to begin with.

Judy Sanhz
Judy Sanhz is a Digital Trends computing writer covering all computing news. Loves all operating systems and devices.
Musk won’t chase OpenAI with his billions as long as it stays non-profit
Elon Musk wearing glasses and staring at the camera.

Elon Musk was one of the founding members of OpenAI, but made a sour exit before ChatGPT became a thing. The billionaire claims he wasn’t happy with the non-profit’s pivot to a profit-chasing business model. A few days ago, Musk submitted a bid to buy OpenAI’s non-profit arm for $97.4 billion, but now says he will pull the offer if the AI giant abandons its for-profit ambitions.

“If (the) OpenAI board is prepared to preserve the charity's mission and stipulate to take the "for sale" sign off its assets by halting its conversion, Musk will withdraw the bid,” says a court filing submitted by the billionaire’s lawyer, as per Reuters.

Read more
OpenAI nixes its o3 model release, will replace it with ‘GPT-5’
ChatGPT and OpenAI logos.

OpenAI CEO Sam Altman announced via an X post Wednesday that the company's o3 model is being effectively sidelined in favor of a "simplified" GPT-5 that will be released in the coming months.

https://x.com/sama/status/1889755723078443244

Read more
Sam Altman thinks GPT-5 will be smarter than him — but what does that mean?
Sam Altman at The Age of AI Panel, Berlin.

Sam Altman did a panel discussion at Technische Universität Berlin last week, where he predicted that ChatGPT-5 would be smarter than him -- or more accurately, that he wouldn't be smarter than GPT-5.

He also did a bit with the audience, asking who considered themselves smarter than GPT-4, and who thinks they will also be smarter than GPT-5.
"I don’t think I’m going to be smarter than GPT-5. And I don’t feel sad about it because I think it just means that we’ll be able to use it to do incredible things. And you know like we want more science to get done. We want more, we want to enable researchers to do things they couldn’t do before. This is the history of, this is like the long history of humanity."
The whole thing seemed rather prepared, especially since he forced it into a response to a fairly unrelated question. The host asked about his expectations when partnering with research organizations, and he replied "Uh... There are many reasons I am excited about AI. ...The single thing I'm most excited about is what this is going to do for scientific discovery."

Read more