Cyber Safety: What Is a Deepfake and How You Can Spot It

The Basics

A deepfake is a video, photo, or audio recording that appears real but has been manipulated using artificial intelligence (AI). The underlying technology can swap faces, alter facial expressions, and synthesize speech, and it is most often used to depict people saying or doing things they never said or did. To learn about the dangers of deepfakes and how to spot one, watch the video below.

Safe Use of ChatGPT at Work

During this month’s Cyber Safety Campaign, please familiarize yourself with the risks associated with using ChatGPT:

  • Data Security – Using AI tools like ChatGPT can expose sensitive Company or personal data if not handled correctly. ChatGPT processes the information you provide to generate responses, which could lead to a data breach if sensitive information is inadvertently entered.
  • Accuracy of Information – ChatGPT may occasionally provide incorrect or outdated information. Relying on unverified data for decision-making or external communication can lead to errors and potentially harm our Company’s reputation.
  • Compliance and Ethical Concerns – Ensuring that our interactions with AI technologies adhere to regulatory requirements and ethical standards is crucial, especially when dealing with privacy, confidentiality, and data protection laws.

Stay vigilant!

  • Be Cautious with Sensitive Information – Avoid Entering Sensitive Data: When interacting with ChatGPT, do not enter or discuss sensitive or confidential information. Treat the AI as you would any public communication tool where data security cannot be guaranteed.
  • Verify Information Accuracy – Double-Check AI Responses: Before using information provided by ChatGPT in your work, especially in decision-making or external communication, verify its accuracy through trusted sources or consult with a supervisor.
  • Adhere to Ethics and Compliance – Follow Training and Guidelines: Engage with training sessions provided by Republic Services that outline the ethical use of AI tools. Familiarize yourself with the Company’s guidelines on technology use to ensure compliance with industry regulations.
  • Report Anomalies – Use Reporting Mechanisms: If you encounter any unusual behavior or suspect a security issue while using ChatGPT, report it immediately to infosec@republicservices.com. Quick reporting can help prevent potential data breaches or misuse.
  • Use Secure Platforms – Access ChatGPT via Authorized Channels: Only use Company-approved methods to access ChatGPT to ensure that your interactions are protected by our corporate cybersecurity measures.

To learn more, refer to the Acceptable Use of Gen AI Technologies Policy (ITD-106) and the Acceptable Use of Gen AI Technologies Procedures (ITD-106P).