I design and build production-grade data engineering solutions that turn messy, siloed data into reliable, queryable assets your team can trust. Whether you need automated ETL/ELT pipelines, a scalable cloud data warehouse, or migration from legacy systems to modern platforms, I deliver systems that run smoothly and scale with your business.
My core expertise spans the full data engineering stack: pipeline orchestration with Apache Airflow and dbt; cloud data warehouses on Snowflake, BigQuery, and AWS Redshift; and database design across PostgreSQL, MongoDB, and SQL Server. I build modular, well-documented workflows covering data ingestion, cleaning, transformation, scheduling, and monitoring — so your analytics teams get accurate, timely insights without firefighting broken pipelines.
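As a small illustration of the modular workflow style described above, a pipeline can be sketched as composable, individually testable stages. Everything here is a hypothetical stand-in (field names, sources, logic), not code from a specific engagement:

```python
# Minimal sketch of a modular ETL flow: each stage is a small, testable
# function, and the pipeline composes them. All names are illustrative.

def extract(raw_records):
    """Ingest raw records (a list stands in for an API/file/DB source)."""
    return list(raw_records)

def clean(records):
    """Drop rows missing required fields and normalize casing."""
    return [
        {**r, "email": r["email"].strip().lower()}
        for r in records
        if r.get("email")
    ]

def transform(records):
    """Derive a new column (here, the domain of each email address)."""
    return [{**r, "domain": r["email"].split("@")[-1]} for r in records]

def run_pipeline(raw_records):
    """Compose the stages; an orchestrator such as Airflow would schedule
    each stage as its own task with retries and monitoring."""
    return transform(clean(extract(raw_records)))

if __name__ == "__main__":
    raw = [{"email": " Alice@Example.COM "}, {"email": None}]
    print(run_pipeline(raw))
```

In a real engagement, each stage maps to an orchestrated task (an Airflow operator or a dbt model) so failures are isolated and retryable rather than taking down the whole run.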
What sets me apart is a hybrid engineering background. Seven-plus years as a Principal Mechanical Engineer working on complex systems and power electronics, combined with hands-on Python development across 95+ repositories, gives me a systems-thinking approach to data architecture that pure software engineers often miss. I optimize for real-world performance — query tuning, partitioning, indexing, parallel processing — not just theoretical best practices.
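One concrete example of the parallel-processing angle: independent data partitions can be processed concurrently with nothing beyond Python's standard library. This is an illustrative sketch — the partition boundaries and per-chunk work are hypothetical stand-ins for real table partitions or date-ranged queries:

```python
# Illustrative sketch: process independent partitions in parallel.
# The per-partition work here (a sum) is a hypothetical stand-in for
# real work such as aggregating one date range of a partitioned table.
from concurrent.futures import ThreadPoolExecutor

def process_partition(rows):
    """Stand-in for per-partition work on one chunk of data."""
    return sum(rows)

def process_in_parallel(partitions, max_workers=4):
    """Fan the partitions out across a worker pool and collect results
    in the original partition order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(process_partition, partitions))

if __name__ == "__main__":
    partitions = [[1, 2], [3, 4], [5, 6]]
    print(process_in_parallel(partitions))  # → [3, 7, 11]
```

For I/O-bound database work a thread pool like this is usually sufficient; CPU-bound transforms would use processes instead.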
Every engagement includes CI/CD pipeline setup, version control, automated data validation checks, and monitoring with alerting so issues get caught before they reach stakeholders. I work across AWS, GCP, and Azure, and I'm equally comfortable building a greenfield warehouse from scratch or untangling an existing pipeline that's become unreliable. Whether your project is a one-time migration or an ongoing data infrastructure build-out, I deliver clean, scalable, maintainable systems that just work.
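The automated validation checks mentioned above can be as lightweight as declarative rule functions run before a batch is published. A minimal sketch — the specific rules, column names, and thresholds are hypothetical examples:

```python
# Minimal sketch of automated data validation: run declarative checks
# against a batch and collect failing row indices before publishing.
# The rules and columns shown are hypothetical examples.

def check_not_null(rows, column):
    bad = [i for i, r in enumerate(rows) if r.get(column) is None]
    return (f"not_null:{column}", bad)

def check_in_range(rows, column, lo, hi):
    bad = [i for i, r in enumerate(rows)
           if r.get(column) is not None and not (lo <= r[column] <= hi)]
    return (f"in_range:{column}", bad)

def validate(rows):
    """Run all checks and report failures; a real pipeline would fire an
    alert and halt the load whenever this dict is non-empty."""
    checks = [
        check_not_null(rows, "order_id"),
        check_in_range(rows, "amount", 0, 1_000_000),
    ]
    return {name: idx for name, idx in checks if idx}

if __name__ == "__main__":
    rows = [{"order_id": 1, "amount": 50},
            {"order_id": None, "amount": -5}]
    print(validate(rows))
```

In production this pattern is usually delegated to a framework (dbt tests, Great Expectations, or similar) wired into the orchestrator, with alerting attached so a failed check pages someone before stakeholders see bad numbers.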