Data Transformation and Automation - Senior Associate
Aumni
Job Description
Become an integral part of our Data Automation & Transformation team! Each day will be unique - bring a positive attitude and an entrepreneurial spirit, and get ready to roll up your sleeves. This position is an essential part of the team and will give you exposure to several aspects of running a banking office.
Job Summary:
As a member of the Data Automation & Transformation team within Private Client Office Data & Analytics, you will focus on building curated data products, modernizing data, and moving it to Snowflake. A working knowledge of cloud databases (e.g., AWS, Snowflake) and of coding languages such as SQL, Python, and PySpark is required.
You will scan across large multi-platform data ecosystems, research data patterns, and build the automation solutions, analytics frameworks, and data consumption architecture used by the Decision Sciences, Product Strategy, Finance, Risk, and Modeling teams. You are analytically and technically strong, with experience in financial services, preferably serving the small business banking or commercial banking segments.
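For a sense of the day-to-day work, the sketch below shows one way a migration step of this kind might look in PySpark, assuming the Spark-Snowflake connector is available on the classpath. Every connection value, table name, and column here is a hypothetical placeholder, not a detail of this role or of JPMorgan's environment.

```python
# A minimal sketch: read a legacy table with PySpark, curate it, and land it
# in Snowflake. All names and credentials are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pco-data-migration").getOrCreate()

# Read a legacy warehouse table registered in the metastore (hypothetical name).
legacy_df = spark.table("legacy_db.accounts")

# Light transformation: keep only the curated columns the data product needs.
curated_df = legacy_df.select("account_id", "segment", "open_date", "balance")

# Snowflake connection options for the Spark-Snowflake connector; every value
# is a placeholder, and credentials would normally come from a secrets manager.
sf_options = {
    "sfURL": "myaccount.snowflakecomputing.com",
    "sfUser": "SVC_DATA_MIGRATION",
    "sfPassword": "***",
    "sfDatabase": "PCO_ANALYTICS",
    "sfSchema": "CURATED",
    "sfWarehouse": "TRANSFORM_WH",
}

# Write the curated product to Snowflake, replacing any prior load.
(curated_df.write
    .format("net.snowflake.spark.snowflake")
    .options(**sf_options)
    .option("dbtable", "ACCOUNTS_CURATED")
    .mode("overwrite")
    .save())
```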
Job Responsibilities:
- Migrate Private Client Office data to the public cloud (AWS and Snowflake)
- Partner closely with the Executive Director of Automation and Transformation to execute the new-build and conversion book of work for the cloud data consumption platform
- Partner with Data Owners and D&A Leads to understand the data roadmap that supports analytics needs
- Partner with CCB Architecture and technology scrum teams where needed on data modernization
- Identify, prioritize, develop, and coordinate the migration of legacy and new data needs to align with the Small Business Data Strategy
- Develop a consumption data model that bridges data within and across lines of business to increase scale, use, and value
- Streamline and automate data assets that support cross-product data sharing, self-serve analytics, and dashboards
- Research and identify technology data gaps (e.g., missing fields needed for analytical calculations) and partner with product technology to build them into upstream systems to support analytics needs
- Define, develop, and establish the Modeling Team roadmap and implementation plan for our Modeling, Machine Learning & AI ecosystems
- Support projects that integrate data from external providers into our systems
- Build and incorporate data audit checks and control standards (a minimal illustration follows this list)
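As a rough illustration of the data audit checks mentioned above, the sketch below validates a pandas DataFrame before it is loaded downstream. The thresholds, column names, and the particular check set are illustrative assumptions, not a prescribed control standard.

```python
# A minimal sketch of automated data audit checks, assuming a pandas DataFrame
# as input. The 1% null tolerance and the check set are illustrative only.
import pandas as pd

def audit_checks(df: pd.DataFrame, key: str, required: list[str]) -> list[str]:
    """Return human-readable audit failures; an empty list means all checks pass."""
    failures = []
    if df.empty:
        failures.append("row count is zero")
    if df[key].duplicated().any():
        failures.append(f"duplicate values in key column '{key}'")
    for col in required:
        null_rate = df[col].isna().mean()
        if null_rate > 0.01:  # illustrative 1% tolerance
            failures.append(f"null rate {null_rate:.1%} in '{col}' exceeds 1%")
    return failures

# Usage with a toy frame; in practice this would run inside the load pipeline.
frame = pd.DataFrame({"account_id": [1, 2, 2], "balance": [100.0, None, 250.0]})
for issue in audit_checks(frame, key="account_id", required=["balance"]):
    print("AUDIT FAIL:", issue)
```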
Required qualifications, capabilities and skills:
- 3+ years of analytics, business intelligence, data warehouse, data architecture, or data governance experience
- Master's or Bachelor's degree in a related field (e.g., Data Analytics, Computer Science, Math/Statistics, or Engineering) with 3+ years' experience in a related discipline
- Demonstrated understanding of programming languages such as SQL, SAS, Python, Spark, Java, or Scala
- Experience building relational data models on multiple technology platforms (Teradata, Oracle, Hadoop, or cloud platforms such as AWS, GCP, or Azure)
- Hands-on experience in researching, testing and developing automated data processes including building a common framework to drive consistency and coding best practices
- Excellent documentation and communication skills, both written and verbal
- Excellent time management, multitasking, and prioritization skills, including the ability to self-manage
- Experience with internal controls and compliance with regulatory and contractual obligations
- Experience working with data visualization and presentation tools
- Knowledge of how to appropriately classify data risk and tag data for access management
- Ideally, knowledge of Business/Commercial Banking products and services, including deposits, lending, cash management, credit cards, and merchant services
Preferred qualifications, capabilities and skills:
- Experience in Big Data and Cloud platforms (Hadoop, Teradata, AWS, GCP, Azure)
- Experience with data wrangling tools (SQL, SAS, Alteryx, Python, Spark, Java, Scala, Snowflake, Redshift, Databricks)
- Experience in dynamic and interactive reporting/visualization applications such as Tableau
- Proficient in standard data architecture, data extraction, load processing, data mining, and analytical methodologies (e.g., logistic regression, matched pairs, and neural networks)
- Proficient in scheduling job workflows using software such as Control-M or Alteryx Scheduler
- Working knowledge of code versioning software (e.g. Bitbucket) and document change management/workflow software (e.g. JIRA, Confluence)