ETL Pipeline — Project Name
Brief description: e.g. batch ingestion from APIs into a raw layer, dbt transformations, and scheduled Airflow DAGs with data quality checks.
- Python
- Airflow
- PostgreSQL
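The "data quality checks" step in a pipeline like the one described above can be sketched in plain Python as a check an Airflow task might call after loading a raw-layer batch. Everything here is illustrative: `REQUIRED_COLUMNS`, the 5% null threshold, and the row shape are assumptions, not part of any specific project.

```python
"""Minimal data quality check a scheduled Airflow task might run on a
freshly loaded raw-layer batch. Column names and thresholds are
hypothetical placeholders."""

REQUIRED_COLUMNS = {"id", "event_ts", "amount"}  # hypothetical schema
MAX_NULL_RATE = 0.05  # fail the batch if more than 5% of a column is null


def check_batch(rows: list[dict]) -> list[str]:
    """Return human-readable failures; an empty list means the batch passes."""
    failures: list[str] = []
    if not rows:
        return ["batch is empty"]
    missing = REQUIRED_COLUMNS - set(rows[0])
    if missing:
        failures.append(f"missing columns: {sorted(missing)}")
    for col in sorted(REQUIRED_COLUMNS & set(rows[0])):
        null_rate = sum(r[col] is None for r in rows) / len(rows)
        if null_rate > MAX_NULL_RATE:
            failures.append(
                f"{col}: null rate {null_rate:.0%} exceeds {MAX_NULL_RATE:.0%}"
            )
    return failures
```

In an Airflow DAG, a check like this would typically sit in its own task downstream of the load, raising on a non-empty result so the run fails loudly instead of propagating bad data.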
Data Engineer
I design and operate scalable data infrastructure — ETL/ELT pipelines, warehouses, and orchestration — so teams can trust their data.
Replace this with your background — years of experience, domains you've worked in (finance, healthcare, SaaS), and what drives your work in data engineering.
Highlight certifications, education, or notable achievements. Keep it concise and focused on outcomes: latency reduced, cost saved, pipelines migrated, teams enabled.
Production-style pipelines, warehouses, and tooling. Each card links to its GitHub repository; update titles, descriptions, tech tags, and URLs below.

Brief description: e.g. batch ingestion from APIs into a raw layer, dbt transformations, and scheduled Airflow DAGs with data quality checks.
Brief description: e.g. Delta Lake on S3, Spark jobs, and medallion architecture with CI/CD for infrastructure and dbt models.
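The medallion architecture named in this card can be sketched as a layer-to-layer contract: bronze holds raw records, silver is cleaned and deduplicated, gold is aggregated for reporting. In the real project these would be Spark jobs writing Delta tables on S3; this stdlib stand-in, with made-up field names like `order_id` and `customer`, only shows the shape of each hop.

```python
"""Toy medallion flow: bronze (raw) -> silver (cleaned, deduped) ->
gold (aggregated). A stdlib sketch of the layer contract; the real
implementation would be Spark jobs over Delta tables."""

from collections import defaultdict


def to_silver(bronze: list[dict]) -> list[dict]:
    """Drop malformed raw rows and dedupe on a hypothetical `order_id`,
    keeping the latest `updated_at` version of each order."""
    latest: dict[str, dict] = {}
    for row in bronze:
        if row.get("order_id") is None or row.get("amount") is None:
            continue  # malformed raw record stays out of silver
        key = row["order_id"]
        if key not in latest or row["updated_at"] > latest[key]["updated_at"]:
            latest[key] = row
    return list(latest.values())


def to_gold(silver: list[dict]) -> dict[str, float]:
    """Aggregate cleaned rows into a reporting-ready revenue-per-customer table."""
    revenue: dict[str, float] = defaultdict(float)
    for row in silver:
        revenue[row["customer"]] += row["amount"]
    return dict(revenue)
```

The CI/CD angle mentioned in the card would apply around code like this: tests on each layer's contract, plus automated deploys of the infrastructure and dbt models.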
Brief description: e.g. Kafka consumers, Flink or Spark Structured Streaming, and sink to warehouse or OLAP for dashboards.
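The windowed aggregation a streaming job like this card's would maintain can be sketched in plain Python: a tumbling-window count of the kind Flink or Spark Structured Streaming computes before sinking to a warehouse or OLAP table. The 60-second window and the event shape (`ts` in epoch seconds, a `type` field) are assumptions for illustration.

```python
"""Tumbling-window event counts, as a stdlib stand-in for the aggregation
a Flink or Spark Structured Streaming job would compute before sinking
results downstream. Window size and event shape are illustrative."""

from collections import Counter

WINDOW_SECONDS = 60  # hypothetical tumbling-window size


def window_counts(events: list[dict]) -> Counter:
    """Count events per (window start, event type).

    Each event is assumed to carry epoch-seconds `ts` and a `type`;
    window start is `ts` floored to the nearest WINDOW_SECONDS boundary.
    """
    counts: Counter = Counter()
    for ev in events:
        window_start = (ev["ts"] // WINDOW_SECONDS) * WINDOW_SECONDS
        counts[(window_start, ev["type"])] += 1
    return counts
```

A real streaming engine adds what this sketch omits: event-time watermarks for late data, incremental state, and exactly-once delivery to the sink.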
More repos on GitHub
Open to data engineering roles, consulting, and collaboration on pipeline and platform work.