Responsibilities:
| - Work with business and technology teams to analyze and assess business requirements and develop technical blueprints.
- Develop cloud-enabled, highly scalable Big Data applications using technologies such as Spark, Scala, and open-source tools.
- Design data models using RDBMS and NoSQL databases.
- Leverage industry best practices and experience to recommend solutions, including solution architecture and design.
- Adhere to high design and coding standards, policies, and procedures.
- Be willing to explore and learn new technologies.
|
Qualifications:
| - 7+ years of hands-on expertise with Big Data technologies and frameworks.
- Minimum 4 years of relevant experience as a Big Data Engineer.
- Minimum 4 years of solid hands-on experience with Spark using Scala and Python.
- Solid experience implementing Spark RDD transformations and actions for business analysis (see the illustrative sketch after this list).
- Hands-on experience with the Hadoop framework and other Big Data technologies such as HDFS, Oozie, Airflow, Hive, or Impala.
- Good command of at least one cloud platform: AWS, Azure, or GCP.
- Good command of container technologies such as Docker and Kubernetes.
- Strong problem-solving, analytical, and programming skills using Spark, Scala, RDDs, DB2, Hadoop, and APIs.
- Experience debugging applications to ensure low latency and high availability.
- Solid working experience with databases such as PostgreSQL or MySQL, including writing optimized SQL queries.
- Well versed in design patterns and common architectural patterns.
- Good to have: exposure to writing APIs using Java-based frameworks such as Spring Boot.
- Good knowledge of agile software development methodology.
- Excellent communication and presentation skills.
- BS/MS in Computer Science or equivalent.
|
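Illustration (for candidates): a minimal sketch in Scala of the Spark RDD transformations and actions referenced above. This is not code from this role's projects; the object name, the local[*] master, and the sales records are hypothetical placeholders chosen only to show the lazy transformation/action pattern.

  import org.apache.spark.sql.SparkSession

  object RddSketch {
    def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder()
        .appName("rdd-sketch")
        .master("local[*]") // local mode for illustration only
        .getOrCreate()
      val sc = spark.sparkContext

      // Hypothetical sales records: (region, amount)
      val sales = sc.parallelize(Seq(
        ("east", 100.0), ("west", 250.0), ("east", 75.0), ("west", 30.0)
      ))

      // Transformations are lazy: nothing executes until an action runs.
      val totalsByRegion = sales
        .filter { case (_, amount) => amount > 50.0 } // drop small orders
        .reduceByKey(_ + _)                           // sum per region

      // collect() is the action that triggers the computation.
      totalsByRegion.collect().foreach { case (region, total) =>
        println(s"$region: $total")
      }

      spark.stop()
    }
  }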