

Engineering & Architecture

Cloud Solutions Engineer in Big Data

$20/hr Starting at $25

• 7 years of experience building complex software applications with Big Data, AWS Cloud, and ETL/ELT technologies in the banking, healthcare, and retail sectors.
• 5+ years of work experience as a Big Data Developer / Data Engineer, with sound knowledge of Hadoop, Spark, and AWS big-data services.
• Designed and developed new solutions on the AWS data lake platform.
• Expert in AWS Athena and AWS Glue.
• Proficient with Linux/Unix systems, including system monitoring and analysis.
• Hands-on experience with major Big Data components: HDFS, MapReduce, Hive, Impala, Presto, Oozie, Spark, and Kafka.
• Performed complex transformations on Spark DataFrames using Spark SQL.
• Worked with compression formats such as zlib, Snappy, and gzip for data storage and computation.
• Experience in data analysis, validation, verification, cleansing, completeness checks, and identifying data mismatches.
• Good understanding of distributed computing environments.
• Experience writing Pig scripts, shell scripts, MapReduce jobs, and Spark jobs to perform ETL operations.
• Expertise with IPsec, VPNs, load balancing, iperf, MTR, routing protocols, SSH, and network monitoring/troubleshooting tools.
• Development experience with RDBMSs such as Oracle, MySQL, Teradata, PostgreSQL, and SQL Server.
• Experience with build tools such as Maven and IDEs such as Eclipse.
• Quick learner and self-starter, eager to pick up any new technology.
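As a small generic illustration of the gzip/zlib compression formats mentioned above (a minimal Python sketch for context, not code from any client project):

```python
import gzip
import zlib

# Sample payload: repetitive text, which compresses well (much like
# columnar big-data file contents).
data = b"timestamp,account_id,amount\n" * 1000

# gzip: a standard file-level format widely supported by Hive/Spark readers.
gz_bytes = gzip.compress(data, compresslevel=6)

# zlib (DEFLATE with a lighter header): the codec underlying many block formats.
zl_bytes = zlib.compress(data, level=6)

# Both round-trip losslessly, and the repetitive input shrinks substantially.
assert gzip.decompress(gz_bytes) == data
assert zlib.decompress(zl_bytes) == data
assert len(gz_bytes) < len(data)
```

Snappy, by contrast, is not in the Python standard library; it trades a lower compression ratio for faster compression and decompression, which is why it is a common default for intermediate data in Spark and Hadoop pipelines.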

About

$20/hr Ongoing



Skills & Expertise

Apache Hive, Apache Maven, Athena, AWS Command Line Interface (CLI), AWS Key Management Service (KMS), AWS Virtual Private Cloud (VPC), Cloud Computing, Data Management, Developer, DynamoDB, Glue, Hadoop, Linux, Monitoring, PostgreSQL, S3, Software Development, SQL, Troubleshooting

0 Reviews

This Freelancer has not received any feedback.