What Is AI Safety at Work?
AI Safety means using AI tools (like ChatGPT) responsibly: being careful with company information, following our guidelines, and avoiding potential problems. As more of us adopt these helpful tools at work, we all need to understand how to use them safely.
Key Risks When Using AI at Work
- Data Leakage: Information you type into public AI tools (like ChatGPT) might be stored or seen by others, potentially exposing company secrets or customer information.
- Incorrect Information: AI can generate content that looks correct but contains errors or made-up facts, often called "hallucinations."
- Deceptive Content: AI can create fake videos or clone voices that look and sound like real people. These “deepfakes” can be used for scams or fraud.
- Unapproved AI Tools: Using AI applications not approved by our IT department can create security gaps.
How to Use AI Safely
- Stick to Approved Tools: Only use AI tools that Republic Services has reviewed and approved.
- Protect Sensitive Information: Never enter confidential data, customer information, or company secrets into unauthorized AI tools.
- Verify AI Outputs: Always check AI-generated information for accuracy before using it in your work.
- Report Concerns: If you notice suspicious behavior or potential misuse, let us know at InfoSec@republicservices.com.
Always check with your manager or IT team before using new AI tools, and make sure important decisions that affect privacy, compliance, or safety have human oversight.
Thank you for helping to keep Republic Services Cyber Safe.
Watch the video below to learn more about AI Safety!