Senior AI Data Engineer

Santex

  • Comodoro Rivadavia, Chubut
  • Permanent
  • Full-time
  • Posted 15 hours ago
Santex is a US-based global company founded in ****, with 26 years of experience in the software industry.
Headquartered in California with offices in Córdoba, Argentina, its talent network spans over 18 countries thanks to its flexible, remote-first culture.
Santex specializes in custom enterprise software development, operating through Hubs that include eCommerce, BIM, Mobility, Content Delivery, Integration, Web & Mobile Development, Cloud Computing, Artificial Intelligence (AI), Data Science, IT Consulting, and Services.
The company is committed to making a positive impact across three dimensions: economic, social, and environmental.
Job description
We are seeking an AI Data Engineer to design, build, and scale the data foundations that power AI, advanced analytics, and natural language interfaces across a global Quick Service Restaurant (QSR) enterprise.
This is a high-impact role at the intersection of data engineering and applied AI, where the pipelines you build become the building blocks for use cases like automated insight generation, conversational analytics, and intelligent decision support at restaurant scale.
A core focus of this role is data ingestion from POS systems and back-office vendor platforms spanning labor, inventory, finance, and supply chain.
You will ensure reliable, standardized, and scalable ingestion of high-volume, multi-source data into the enterprise data platform, transforming fragmented operational data into AI-ready assets.
You will also play a critical role in connecting raw data to business meaning, supporting governed KPI definitions and AI-driven context engineering through a robust semantic layer that enables both human analysts and AI agents to query, reason over, and act on trusted data.
Key Responsibilities – Data Ingestion & Integration (POS & Back Office)
Design and build ingestion pipelines for POS systems across global markets (transactions, orders, payments, speed of service)
Integrate data from back-office vendor systems (labor scheduling, inventory, supply chain, finance)
Handle diverse ingestion patterns (batch, streaming, APIs, file-based ingestion)
Normalize and standardize data across vendors, regions, and brands
Ensure data freshness, completeness, and consistency across all ingestion pipelines
Manage vendor-specific schemas and evolving data contracts
Key Responsibilities – Data Platform & Pipeline Engineering
Build scalable ELT/ETL pipelines using modern tools (dbt, Airflow, etc.)
Support real-time and batch processing for operational and analytical use cases
Optimize ingestion pipelines for performance, reliability, and cost
Ensure high availability across global data workloads
Key Responsibilities – Semantic Layer Enablement
Build data models supporting standardized KPIs (same-store sales, speed of service, traffic, labor productivity)
Align ingested raw data with business-friendly semantic abstractions
Support both governed (manual) and AI-driven semantic approaches
Key Responsibilities – Metadata & Context Engineering
Capture metadata, lineage, and schema details from ingestion pipelines
Enable AI systems using structured context (metadata, lineage, example queries)
Ensure AI systems can reliably interpret POS and operational data
Key Responsibilities – Governance & Data Quality
Implement validation and monitoring for ingestion pipelines
Ensure data accuracy across POS and vendor systems
Support governance for consistent metric definitions across regions
Maintain versioning and schema evolution processes
Required Qualifications
5+ years of experience in data engineering
Strong SQL and Python skills
Experience building ingestion pipelines from APIs, streaming, and batch sources
Experience with POS or operational data systems preferred
Experience with cloud data platforms and modern data stack
Preferred Qualifications
Familiarity with agentic development tools such as Codex, Cursor, or Claude Code
Experience in QSR or retail environments
Experience integrating vendor systems (labor, inventory, finance)
Familiarity with semantic layers and metadata tools
Experience enabling data for AI/LLM use cases
Success Metrics
Reliability and latency of POS and back-office data ingestion
Standardization across vendor data sources
Reduction in data inconsistencies
Improved downstream analytics and AI performance
Summary
This role ensures that high-quality, standardized data from POS and back-office systems flows into the enterprise platform—enabling trusted analytics, semantic consistency, and AI-driven insights at global scale.
Location
LATAM
