Teknobu – UK Based Business Growth and IT Specialists

Security Update: AI Is Your New Co-worker—But Can You Trust It?

Generative AI tools like OpenAI’s ChatGPT and Microsoft’s Copilot are becoming essential in our daily work lives, but they also bring along significant privacy and security risks. As these tools rapidly evolve, concerns are growing about their potential to expose sensitive data, especially in workplace settings.

Recently, privacy advocates raised alarms about Microsoft’s new Recall tool, which takes frequent screenshots of everything on your screen. This feature has caught the attention of the UK’s Information Commissioner’s Office, which is now seeking more information from Microsoft regarding the safety measures of the upcoming Copilot+ PCs.

OpenAI’s ChatGPT has also come under scrutiny for the screenshotting capability in its soon-to-launch macOS app, which privacy experts warn could inadvertently capture sensitive data. Separately, the US House of Representatives has banned the use of Microsoft’s Copilot among staff over concerns that data could leak to unauthorised cloud services.

It has been noted that using Copilot for Microsoft 365 could expose sensitive data both internally and externally. Meanwhile, Google recently had to adjust its new search feature, AI Overviews, after it generated some bizarre and misleading results that quickly went viral.

The Risks of Overexposure

One of the biggest challenges of using generative AI at work is the risk of unintentionally sharing sensitive information. These AI systems absorb vast amounts of data to train their models, so anything you type into them may be retained and reused if not handled carefully.

AI companies are highly motivated to gather as much data as possible to improve their models, which can lead to sensitive information being stored in systems outside your control. This data could potentially be accessed through sophisticated prompting techniques.

There’s also the risk of hackers targeting the AI systems themselves. If an attacker gains access to a company’s AI models, they could steal sensitive data, manipulate outputs, or even spread malware.

Even AI tools considered “safe” for work, like Microsoft’s Copilot, carry risks if security settings aren’t properly configured. Employees might inadvertently access or expose sensitive information, such as confidential pay scales or merger and acquisition details, which could then be leaked or sold.

Another concern is the potential for AI tools to be used for monitoring staff, raising privacy issues. Microsoft claims that Recall’s screenshots remain local to your PC and are under your control, but there are fears that this technology could eventually be used for employee surveillance.

Staying Safe: Practical Tips

While generative AI presents several risks, there are steps you can take to protect your privacy and security. First, avoid putting confidential information into public AI tools like ChatGPT or Google’s Gemini. Be vague with your prompts and avoid sharing specific details.
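One way to reduce accidental exposure is to strip obvious identifiers from text before pasting it into a public AI tool. The sketch below is a minimal illustration of the idea, not a complete solution — the patterns and function name here are our own assumptions, and real data classification needs far more than a few regular expressions:

```python
import re

# Illustrative patterns only -- real sensitive-data detection is
# much harder than this and should use proper DLP tooling.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "uk_phone": re.compile(r"\b0\d{3}[ ]?\d{3}[ ]?\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

print(redact("Contact jane.doe@example.com or 0161 496 0000."))
```

A filter like this is a safety net, not a substitute for the basic habit above: if information is confidential, keep it out of the prompt entirely.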

When using AI for research, always validate the information it provides. Ask for references and links, and review any AI-generated content before using it to ensure accuracy and relevance.

Microsoft has highlighted the importance of correctly configuring Copilot and applying the “least privilege” principle, which limits user access to only what’s necessary. This approach is crucial, as organisations shouldn’t blindly trust the technology without implementing proper safeguards.
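The "least privilege" idea generalises beyond Copilot: grant each role only the permissions it needs, and deny everything by default, so an AI assistant acting on a user's behalf can only surface what that user was entitled to see anyway. A generic sketch of the principle — the role and permission names are our own illustrative examples, not Microsoft's API:

```python
# Deny-by-default permission model: each role is granted only an
# explicit set of permissions; anything not listed is refused.
ROLE_PERMISSIONS = {
    "hr": {"read:pay_scales"},
    "finance": {"read:pay_scales", "read:ma_details"},
    "staff": set(),  # no privileged access at all
}

def can_access(role: str, permission: str) -> bool:
    # Unknown roles fall through to an empty set, i.e. denied.
    return permission in ROLE_PERMISSIONS.get(role, set())
```

Under this model, a Copilot-style query from a general staff account simply has nothing privileged to retrieve — which is exactly the outcome least privilege is meant to guarantee.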

It’s also important to remember that unless you disable the setting or use an enterprise version, ChatGPT may use the data you input to train its models.

AI Companies Respond

The companies behind these AI tools assert that they are taking steps to protect security and privacy. Microsoft points out that you can control the Recall feature in your settings, and Google reassures users that generative AI in Workspace doesn’t alter its core privacy protections. OpenAI offers enterprise versions with additional controls and maintains that its models don’t learn from user data by default.

Conclusion: Your AI Co-worker Is Here to Stay

As generative AI continues to evolve and integrate into our work lives, the associated risks will only grow. The rise of multimodal AI, like GPT-4o, which can handle not just text but also images, audio, and video, means businesses will need to safeguard more than just text-based data.

With this in mind, it’s crucial to treat AI like any other third-party service—don’t share anything you wouldn’t want to be made public.

Callum
Content associate at Teknobu, Callum is responsible for writing blogs and creating our social media presence, and is also the editing executive for Teknobu's "Unnamed Podcast".