Data Engineer II
Aumni
You thrive on diversity and creativity, and we welcome individuals who share our vision of making a lasting impact. Your unique combination of design thinking and experience will help us achieve new heights.
As a Data Engineer II at JPMorgan Chase within the Employee Platforms team, you are part of an agile team that works to enhance, design, and deliver data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. As an emerging member of the data engineering team, you are responsible for driving Databricks adoption and supporting internal teams as they integrate with Databricks. You will collaborate with colleagues across the organization to deliver secure, scalable, and reliable technology solutions that drive business success.
Job responsibilities
- Design, develop, and troubleshoot software solutions with a focus on Databricks integration and adoption
- Build and maintain data pipelines using Databricks to ensure efficient and reliable data processing for internal teams
- Write secure, high-quality production code in Python and participate in code reviews and debugging
- Implement and support CI/CD pipelines to automate software delivery and improve operational stability
- Collaborate with internal teams to provide guidance and best practices for Databricks usage
- Participate in team discussions and activities to share knowledge and drive continuous improvement
- Contribute to a positive team culture that values diversity, inclusion, and respect
- Support the adoption of best practices in software engineering and data management
- Troubleshoot and resolve issues related to data pipelines and software integration
- Document technical processes, workflows, and solutions for team reference
- Engage in ongoing learning to stay updated with advancements in Databricks, Python, and related technologies
Required qualifications, capabilities, and skills
- Formal training or certification in software engineering concepts and 2+ years of applied experience
- Proficiency in Python programming
- Experience building and maintaining data pipelines using Databricks
- Practical experience with CI/CD tools and automation methods
- Familiarity with agile development practices, application resiliency, and security
- Experience participating in code reviews and debugging
- Ability to collaborate with teams to implement best practices for Databricks and data engineering
- Ability to troubleshoot and resolve technical issues in data pipelines
- Strong skills in documenting and communicating technical solutions
- Commitment to continuous improvement and learning within a team environment
Preferred qualifications, capabilities, and skills
- Experience with AWS services such as S3, EMR, Glue, ECS/EKS, and Athena
- Certifications in AWS, Databricks, or automation tools
- Exposure to open table formats such as Iceberg or Delta Lake and data catalog tools such as AWS Glue Data Catalog
- Interest in cloud computing, artificial intelligence, or mobile development