Full Job Description
Role Responsibilities:
Collaborate with a team of Big Data Engineers, Big Data and Cloud Architects, and Domain SMEs to drive the product forward
Stay up to date with progress in the domain, since we work on cutting-edge technologies and are constantly trying new things out. Design and build solutions for massive scale; this requires extensive benchmarking to pick the right approach
Understand the data inside and out, and make sense of it. You will at times need to draw conclusions and present them to business users
Be independent, self-driven and highly motivated. While you will have the best people to learn from and access to various courses and training materials, we expect you to take charge of your own growth and learning
Requirements (Desired skills & experience)
3+ years of relevant experience in Big Data
Experience in the Big Data ecosystem is a must.
Hands-on experience with distributed computing and the Big Data ecosystem – Hadoop, MapReduce, HDFS, Spark, etc.
Good understanding of data lakes and their importance in a Big Data ecosystem
Hands-on knowledge of visualization tools and experience creating data dashboards for easy consumption of data.
Familiarity with search engines like Elasticsearch and Big Data warehouse systems like AWS Redshift, Google BigQuery, etc.
Experience working in a cloud environment (AWS, Azure, or GCP)
Skills
Python, Azure, PySpark, Scala, Spark, Data Analytics, Big Data ecosystem
Qualification
B.E/B.Tech