Job Title: Big Data Developer
Company: Tavant
Years of Experience: 3–5 Years
Location: Bengaluru, Karnataka, India
Role Type: Full-Time
Salary: As per industry standards
Eligibility: Bachelor’s degree in Computer Science, IT, or a related field, with 3–5 years of experience in Data Engineering, ETL/ELT pipelines, and Azure-based data platforms.
Role Overview
We are seeking a Big Data Developer to design and implement data ingestion pipelines and governance workflows for an enterprise data platform on Azure. The role involves building scalable ETL/ELT pipelines, processing structured and unstructured data, implementing metadata extraction, and ensuring data security, compliance, and PII handling.
Key Responsibilities
- Build data ingestion pipelines using Azure Data Factory (ADF).
- Develop ETL/ELT pipelines for structured and unstructured data processing.
- Write Python and PySpark code for data transformation and processing.
- Implement metadata extraction, automated metadata generation, and enrichment workflows.
- Build change data capture (CDC) and incremental data load patterns.
- Develop data quality validation, monitoring, logging, and alerting solutions.
- Integrate data sources such as SharePoint, cloud data warehouses, blob storage, and relational databases.
- Implement data anonymization, PII detection, security controls, and compliance frameworks.
- Support enterprise data lake architectures and storage solutions.
- Troubleshoot and optimize pipeline performance and data quality issues.
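To give candidates a concrete feel for two of the responsibilities above — incremental (watermark-based) data loads and PII handling — here is a minimal, stdlib-only Python sketch. All field names, the record schema, and the masking rule are illustrative assumptions made for this sketch; in the actual role these patterns would be built with Azure Data Factory and PySpark against real sources, and PII detection would use a proper classifier rather than a single regex.

```python
import re
from datetime import datetime, timezone

# Assumption: records are dicts with a "modified_at" timestamp; the only
# PII handled here is email-like strings. Both are simplifications.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask_pii(record: dict) -> dict:
    """Replace email-like substrings in string fields with a fixed token."""
    return {
        key: EMAIL_RE.sub("<REDACTED>", value) if isinstance(value, str) else value
        for key, value in record.items()
    }

def incremental_load(rows: list[dict], watermark: datetime) -> tuple[list[dict], datetime]:
    """Keep only rows modified after the watermark, mask PII, advance the watermark."""
    new_rows = [mask_pii(r) for r in rows if r["modified_at"] > watermark]
    if new_rows:
        watermark = max(r["modified_at"] for r in new_rows)
    return new_rows, watermark

# Illustrative usage with made-up rows:
wm = datetime(2024, 1, 1, tzinfo=timezone.utc)
rows = [
    {"id": 1, "note": "contact a@b.com", "modified_at": datetime(2024, 1, 2, tzinfo=timezone.utc)},
    {"id": 2, "note": "already loaded", "modified_at": datetime(2023, 12, 1, tzinfo=timezone.utc)},
]
loaded, wm = incremental_load(rows, wm)
```

The watermark pattern shown is the same idea ADF implements with tumbling-window or lookup-driven incremental copies: persist the high-water mark between runs so only changed rows are reprocessed.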
Skills and Qualifications
- Azure Data Factory (ADF)
- ETL / ELT Pipeline Development
- Python, PySpark
- Structured & Unstructured Data Processing
- Metadata Extraction & Data Governance Tools
- Data Lake Architectures
- SharePoint Integration
- Cloud Data Warehouses
- Data Security, Compliance & PII Handling
Desirable:
- Microsoft Purview
- Experience with enterprise data platforms and governance frameworks