

AWS Data Engineer 3851

Job ID: 25-10803
Daily Tasks Performed:
Develop and Maintain Data Integration Solutions:
  • Design and implement data integration workflows using AWS Glue, EMR, Lambda, and Redshift
  • Use PySpark, Spark, and Python to process large datasets
  • Ensure data is extracted, transformed, and loaded into target systems
  • Build ETL pipelines on Apache Iceberg tables (see the sketch below)
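For illustration, a minimal PySpark sketch of one such pipeline: raw JSON is read from S3, lightly transformed, and appended to an Apache Iceberg table registered in a Glue catalog. The bucket, catalog, and table names are hypothetical, and the session settings assume the Iceberg Spark runtime and AWS bundle are on the classpath.

    from pyspark.sql import SparkSession, functions as F

    # Spark session wired to an Iceberg catalog backed by AWS Glue.
    # Catalog name and warehouse path are illustrative.
    spark = (
        SparkSession.builder
        .appName("orders-etl")
        .config("spark.sql.catalog.glue_catalog",
                "org.apache.iceberg.spark.SparkCatalog")
        .config("spark.sql.catalog.glue_catalog.catalog-impl",
                "org.apache.iceberg.aws.glue.GlueCatalog")
        .config("spark.sql.catalog.glue_catalog.warehouse",
                "s3://example-bucket/warehouse/")
        .getOrCreate()
    )

    # Extract: raw JSON landed in S3 by an upstream process.
    raw = spark.read.json("s3://example-bucket/raw/orders/")

    # Transform: normalize the timestamp and derive a partition-friendly date.
    orders = (
        raw.withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("order_date", F.to_date("order_ts"))
    )

    # Load: append into the Iceberg table through the Glue catalog.
    orders.writeTo("glue_catalog.sales.orders").append()
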
Ensure Data Quality and Integrity:
  • Validate and cleanse incoming data
  • Implement monitoring, validation, and error-handling mechanisms within data pipelines (see the validation sketch below)
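A minimal validation-and-cleansing sketch in the same PySpark setting (column names are hypothetical): rows that pass basic checks continue downstream, while failures are tagged with a reason so the error-handling path can quarantine them.

    from pyspark.sql import DataFrame, functions as F

    def validate_orders(df: DataFrame):
        """Split a batch into valid rows and tagged rejects."""
        checks = (
            F.col("order_id").isNotNull()
            & F.col("amount").cast("decimal(12,2)").isNotNull()
            & (F.col("amount") >= 0)
        )
        # Cleanse: drop duplicates on the business key after filtering.
        valid = df.filter(checks).dropDuplicates(["order_id"])
        # Tag rejects so they can be quarantined and monitored.
        rejects = df.filter(~checks).withColumn(
            "reject_reason", F.lit("failed_basic_checks")
        )
        return valid, rejects
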
Optimize Data Integration Processes:
  • Enhance the performance and scalability of data integration workflows on AWS cloud infrastructure to meet SLAs
  • Apply data analysis and data warehousing concepts (star/snowflake schema design, dimensional modeling, and reporting enablement)
  • Resolve performance bottlenecks
  • Optimize data processing to improve Redshift performance (a maintenance sketch follows this list)
  • Refine integration processes
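As one concrete example of such tuning, a short sketch using the redshift_connector Python driver (connection details and table names are placeholders): after heavy deletes or updates, re-sorting a table and refreshing planner statistics keeps scans and joins fast.

    import redshift_connector

    conn = redshift_connector.connect(
        host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
        database="analytics",
        user="etl_user",
        password="...",  # placeholder; prefer Secrets Manager or IAM auth
    )
    conn.autocommit = True  # VACUUM cannot run inside a transaction block
    cur = conn.cursor()
    cur.execute("VACUUM FULL sales.orders;")  # reclaim space and re-sort rows
    cur.execute("ANALYZE sales.orders;")      # refresh stats for the planner
    cur.close()
    conn.close()
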
Support Business Intelligence and Analytics:
  • Translate business requirements into technical specifications and working data pipelines
  • Ensure integrated data is available for business intelligence and analytics
  • Meet downstream data requirements
Maintain Documentation and Compliance:
  • Document all data integration processes, workflows, and technical and system specifications.
  • Ensure compliance with data governance policies, industry standards, and regulatory requirements.

What will this person be working on?
  • Design, development, and management of data integration processes
  • Integrating data from diverse sources, transforming it to meet business requirements, and loading it into target systems such as data warehouses or data lakes

Position Success Criteria (Desired) - 'WANTS'
  • Bachelor's degree in computer science, information technology, or a related field. A master's degree can be advantageous.
  • 4-6+ years of experience in data engineering, database design, and ETL processes
  • Experience with Apache Iceberg
  • 5+ years of experience in programming languages such as PySpark, Python, and SQL
  • 5+ years of experience with AWS tools and technologies (S3, EMR, Glue, Athena, Redshift, Postgres, RDS, Lambda, PySpark)
  • 3+ years of experience working with databases, data marts, and data warehouses
  • Experience with ETL development, system integration, and CI/CD implementation
  • Experience building complex database objects to move changed data across multiple environments
  • Solid understanding of data security, privacy, and compliance
  • Participate in agile development processes, including sprint planning, stand-ups, and retrospectives
  • Provide technical guidance and mentorship to junior developers
