Senior AI Security Analyst
AppDirect
- Buenos Aires
- Permanent
- Full-time
Responsibilities:
- Work within the Information Security team as an AI Security Analyst, owning the security and governance of AI tool usage across AppDirect's corporate environment.
- Define, operationalize, and continuously improve the corporate AI usage policy, including acceptable use guidelines, tool classification, and employee awareness.
- Lead the evaluation and ongoing monitoring of AI tools used by employees (e.g., ChatGPT, Copilot, Claude, Lovable), assessing their data handling practices and associated risks.
- Own and mature the company's data loss prevention (DLP) capabilities, with a focus on preventing sensitive data from being inadvertently exposed through AI tools and corporate SaaS platforms.
- Drive data governance initiatives, including classifying crown jewel data assets, defining handling requirements, and ensuring controls are operationalized across the organization.
- Collaborate with IT, Legal, Privacy, and Engineering to ensure corporate AI usage aligns with regulatory and compliance obligations (GDPR, HIPAA, SOC 2, etc.).
- Investigate incidents related to shadow AI usage, unauthorized data sharing, or policy violations involving AI tools.
- Monitor the evolving AI threat and tooling landscape and translate findings into actionable policy or control improvements.
- Contribute to AI governance documentation and support executive or board-level reporting on AI risk posture.
Requirements:
- 5 years of experience in information security, with demonstrated exposure to data protection, DLP, or AI governance.
- Strong expertise in DLP tools and platforms, including policy configuration, tuning, and incident triage, along with data classification and governance frameworks to identify and protect sensitive data assets.
- Experience evaluating SaaS and AI tools for security and privacy risks as part of vendor or tool onboarding processes.
- Solid understanding of how employees interact with AI tools in a corporate setting and the associated data leakage and shadow AI risks.
- Familiarity with CASB (cloud access security broker) solutions and access control mechanisms applied to AI and SaaS tool usage.
- Working knowledge of AI governance frameworks such as NIST AI RMF or ISO/IEC 42001.
- Understanding of relevant regulatory and compliance requirements (GDPR, HIPAA, SOC 2) and their implications for corporate AI usage.
- Proven ability to work cross-functionally and communicate data and AI risks clearly to non-technical stakeholders.
- Creative, risk-aware, and solution-oriented mindset: comfortable operating in a space where standards and tooling are still maturing.
- Any information security certification (CISSP, Security+, CIPP, CISM) is an asset.