Hadoop/Spark Developer

$22/hr Starting at $100

About

$22/hr Ongoing

5.1 years of overall IT experience, including 3 years of application development in Big Data (the Hadoop and Spark frameworks):

• 1.6 years of hands-on experience with Spark Core, Spark SQL, and Spark Streaming.
• Experience writing Spark programs in Scala.
• Around 1.2 years of hands-on experience with Hadoop and its components: HDFS, MapReduce (in Java), Hive, and Sqoop.
• Domain knowledge of the banking and finance sector.
• Experience in ingestion, storage, querying, processing, and analysis of Big Data across the Hadoop ecosystem, including MapReduce, HDFS, Hive, Pig, Sqoop, Flume, and Spark.
• Capable of processing large sets of structured and semi-structured data and supporting system application architecture.
• Experience writing Hive queries, with a strong understanding of partitioning; designed both managed and external tables in Hive for optimized performance.
• Maintaining and monitoring clusters; loading data into the cluster from dynamically generated files via the local file system (LFS) and from relational database management systems using Sqoop.
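As a minimal sketch of the map/reduce word-count pattern underlying the MapReduce and Spark work described above (plain Python over an in-memory list, no cluster; the function name is illustrative, not from any project listed here):

```python
from collections import Counter

def word_count(lines):
    """Map phase: tokenize each line into lowercase words;
    reduce phase: sum the occurrences of each word."""
    words = (w for line in lines for w in line.lower().split())  # map: emit words
    return dict(Counter(words))                                  # reduce: count per key

# Example usage:
# word_count(["Spark and Hadoop", "spark"])
# -> {"spark": 2, "and": 1, "hadoop": 1}
```

On a real cluster the same shape appears as a mapper emitting (word, 1) pairs and a reducer summing per key, or as a Spark `flatMap`/`reduceByKey` pipeline.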

Skills & Expertise

Apache Hadoop · Apache Hive · Python · Scala · Spark · SQL

0 Reviews

This Freelancer has not received any feedback.