AI Data Engineer
Job Title: AI Data Engineer

Department: Data Engineering / AI Data Services

Location: Global

Reports to: VP of AI Ops

Contract: Permanent

Travel Requirements: No

Salary: Open Budget

Job Summary / Overview

The AI Data Engineer designs, develops, and maintains robust data pipelines to support AI data services operations, ensuring smooth ingestion, transformation, and extraction of large, multilingual, and multimodal datasets. This role collaborates with cross-functional teams to optimize data workflows, implement quality checks, and deliver scalable solutions that underpin our analytics and AI/ML initiatives.

Key Responsibilities And Accountabilities

Data Pipeline Development: Create and manage ETL workflows using Python and relevant libraries (e.g., Pandas, NumPy) for high-volume data processing.
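A minimal sketch of the extract-transform-load pattern this bullet describes, using Pandas. The column names and the inline sample data are purely illustrative; a real pipeline would read from a raw export (pd.read_csv) and write to durable storage (e.g., Parquet or a database).

```python
import pandas as pd

# Extract: in a real pipeline this would be pd.read_csv(...) on a raw
# export; a small inline frame stands in for the source here.
raw = pd.DataFrame({
    " User ID ": [1, 2, 2, 3],
    "Lang": ["en", "fr", "fr", "de"],
})

# Transform: normalize column names and drop exact duplicate rows.
raw.columns = [c.strip().lower().replace(" ", "_") for c in raw.columns]
clean = raw.drop_duplicates().reset_index(drop=True)

# Load: in production this would be clean.to_parquet(...) or a DB write.
print(clean.to_dict("records"))
```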

Data Optimization: Monitor and optimize data workflows to reduce latency, maximize throughput, and ensure high-quality data availability.

Collaboration: Work with Platform Operations, QA, and Analytics teams to guarantee seamless data integration and consistent data accuracy.

Quality Checks: Implement validation processes and address anomalies or performance bottlenecks in real time.
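The kind of validation pass this bullet refers to can be as simple as rule checks over each incoming batch. The schema below (doc_id, tokens) and the two injected defects are assumptions for illustration, not a real data contract.

```python
import pandas as pd

# Illustrative batch with two injected problems: a missing key and a
# negative count. Column names are hypothetical, not a real schema.
records = pd.DataFrame({
    "doc_id": [101, 102, None, 104],
    "tokens": [250, -3, 980, 412],
})

# Validation rules: every row needs a doc_id, token counts must be >= 0.
problems = []
if records["doc_id"].isna().any():
    problems.append("missing doc_id")
if (records["tokens"] < 0).any():
    problems.append("negative token count")

print(problems)
```

In practice checks like these would run inside the pipeline and route failing batches to a quarantine table or alert rather than just printing.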

Integration & Automation: Develop REST API integrations and Python scripts to automate data exchanges with internal systems and BI dashboards.
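One way such an integration might look, sketched with the standard library only. The endpoint URL, bearer-token auth, and JSON body shape are all assumptions about a hypothetical internal API; the function only builds the request, it does not send it.

```python
import json
import urllib.request

def build_push_request(endpoint, rows, token):
    """Prepare an authenticated POST shipping rows to a BI endpoint.

    `endpoint` and `token` are hypothetical; the wire format (JSON body,
    bearer auth) is an assumption about the internal API, not a spec.
    """
    body = json.dumps({"rows": rows}).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )

req = build_push_request(
    "https://bi.example.internal/ingest",
    [{"metric": "rows_loaded", "value": 12500}],
    token="demo-token",
)
print(req.get_method(), req.get_header("Content-type"))
```

A production script would send this with `urllib.request.urlopen(req)` (or a client like `requests`) and add retry and error handling around the call.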

Documentation: Maintain comprehensive technical documentation, data flow diagrams, and best-practice guidelines.

Main Job Requirements

Education and Specific Training

Bachelor's degree in Computer Science, Data Engineering, Information Technology, or a related field.

Relevant coursework in Python programming, database management, or data integration techniques.

Work Experience

3–5 years of professional experience in data engineering, ETL development, or similar roles.

Proven track record of building and maintaining scalable data pipelines.

Experience working with SQL databases (e.g., MySQL, PostgreSQL) and NoSQL solutions (e.g., MongoDB).
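As a small illustration of the SQL-from-Python work this experience implies, the sketch below uses an in-memory SQLite database as a stand-in for MySQL/PostgreSQL; the table and columns are invented for the example.

```python
import sqlite3

# In-memory SQLite stands in for MySQL/PostgreSQL; the `datasets`
# table and its columns are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE datasets (name TEXT, rows INTEGER)")
conn.executemany(
    "INSERT INTO datasets VALUES (?, ?)",
    [("wiki_en", 120000), ("wiki_fr", 45000)],
)
total = conn.execute("SELECT SUM(rows) FROM datasets").fetchone()[0]
print(total)
conn.close()
```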

Special Certifications (Non-Mandatory)

AWS Certified Data Analytics – Specialty, Google Cloud Professional Data Engineer, or similar certifications are a plus.

Required Skills

Technical Skills

Advanced Python proficiency with data libraries (Pandas, NumPy, etc.).

Familiarity with ETL/orchestration tools (e.g., Apache Airflow).

Understanding of REST APIs and integration frameworks.

Experience with version control (Git) and continuous integration practices.

Exposure to cloud-based data solutions (AWS, Azure, or GCP) is advantageous.

Soft Skills / Competencies

Communication Skills (verbal and written): 4

Planning and Organization: 4

Problem Solving: 5

Goal Orientation: 4

Proactivity: 4

Team Collaboration / Customer Orientation: 4

Qualified candidates should send a CV to [email protected]