Hybrid or full remote
Full time

Role Overview
The SQL Senior / Data Engineer owns the data and analytics support queue and is responsible for monitoring, troubleshooting, and remediating failures across cloud data platforms, ETL/ELT pipelines, and Power BI reporting. The role also handles SQL-level data fixes and record corrections that overflow from the application support queues. A minimum of 5 years of relevant experience is required per contractual staffing requirements.

Responsibilities

- Own the data and analytics support queue: pipeline failures, ETL/ELT errors, Power BI dashboard issues
- Monitor and remediate data pipelines across Azure and AWS environments
- Write and execute SQL scripts for data fixes, record corrections, and diagnostic queries
- Support Snowflake and Databricks: query optimization, job failure diagnosis, cluster management
- Maintain CI/CD pipelines supporting data operations (Tier 1 monitoring, Tier 2 remediation)
- Handle SQL-heavy overflow tickets from the application support queues (HUB-Report, data corrections)
- Coordinate with the client data team on schema changes, pipeline updates, and capacity planning
- Perform minor data platform enhancements as assigned (up to 80 engineering hours)
- Support vehicle telematics data infrastructure and related ingestion pipelines

Required Technical Skills

- Azure: Data Factory, Azure SQL, Synapse Analytics — 5+ years required per contractual staffing requirements
- AWS: S3, RDS, Glue — data pipeline and infrastructure support
- Snowflake: query writing, data loading, performance troubleshooting, Snowpipe
- Databricks: notebook execution, cluster management, job failure diagnosis, Delta Lake basics
- Power BI: dashboard connectivity, data source troubleshooting, refresh failures, DAX basics
- SQL: advanced — complex queries, stored procedures, performance tuning across SQL Server and cloud databases
- Pipeline tools: Apache Airflow or Azure Data Factory — DAG/pipeline monitoring and repair
- Python: scripting for data transformation, automation, and diagnostic tasks
- Version control: GitLab — CI/CD pipeline basics for data workflows

Experience
5+ years in data engineering or data platform support, mandatory per contractual staffing requirements. Hands-on experience with at least 3 of: Azure, AWS, Snowflake, Databricks. Production pipeline support experience strongly preferred.

On-Call Requirements
⚠ ON-CALL ROTATION — data incidents

- Covers P1/P2 data pipeline and platform failures outside business hours
- Shared rotation between Ukraine- and Argentina-based Data Engineers — approximately every other week per person
- Activation expected when a critical pipeline failure impacts business operations or reporting delivery
- Response expected within 1 hour of activation
- On-call compensation applies per company policy

Language

- English — required (all tickets, escalations, client communication)
- Spanish

About Us
Established in 2011, Trinetix is a dynamic tech service provider supporting enterprise clients around the world. Headquartered in Nashville, Tennessee, we have a global team of over 1,000 professionals and delivery centers across Europe, the United States, and Argentina. We partner with leading global brands, delivering innovative digital solutions across Fintech, Professional Services, Logistics, Healthcare, and Agriculture.
Our operations are driven by a strong business vision, a people-first culture, and a commitment to responsible growth. We actively give back to the community through various CSR activities and adhere to international principles for sustainable development and business ethics.
To learn more about how we collect, process, and store your personal data, please review our Privacy Notice: https://www.trinetix.com/corporate-policies/privacy-notice