Job: Senior / Expert AWS AI Engineer | Gdańsk, pomorskie; Gdynia, pomorskie; Łódź, łódzkie; Warszawa, mazowieckie

Nordea Bank ABP

Nordea is a leading Nordic universal bank. We are helping our customers realise their dreams and aspirations – and we have done that for 200 years. We want to make a real difference for our customers and the communities where we operate – by being a strong and personal financial partner.

 

Company: Nordea Bank ABP | Senior / Expert AWS AI Engineer

Location: Gdańsk, pomorskie; Gdynia, pomorskie; Łódź, łódzkie; Warszawa, mazowieckie

Ref. no. 30562

Job description

We are seeking an exceptionally skilled and visionary Senior / Expert AWS AI Engineer to spearhead our advanced DevOps, MLOps, and LLMOps initiatives. This is a senior technical role in which you will define and establish best practices and lead the implementation of robust, scalable, and secure operational pipelines for our software, machine learning models, and large language model applications across multiple high-impact project teams.

What you’ll be doing:

  • Cloud Infrastructure & Orchestration: Design, implement, and manage scalable and secure AWS-based infrastructure for AI/ML workloads, using services such as AWS Step Functions, Amazon EventBridge, Amazon Managed Workflows for Apache Airflow (MWAA), and AWS Lambda for workflow orchestration.
  • Data Processing & ETL: Develop, optimize, and maintain robust big data ETL and analytics pipelines using PySpark with Python and/or Spark with Scala on AWS Glue, Amazon EMR, and Amazon EKS.
  • Data Storage & Management: Implement efficient data storage solutions primarily on Amazon S3, ensuring data accessibility, security, and integrity for AI/ML applications.
  • Data Querying & Analysis: Utilize AWS Athena for ad-hoc querying and analysis of large datasets stored in S3, supporting data exploration and model development.
  • Hadoop Ecosystem Integration: Leverage expertise in the Hadoop ecosystem (Hive, Impala, Sqoop, HDFS, Oozie) for managing and processing large-scale datasets.
  • Programming & Data Transformation: Apply strong Python programming skills, including extensive experience with Pandas DataFrame transformations, for data manipulation and analysis.
  • Deployment & MLOps: Implement and maintain Continuous Integration (CI) and Continuous Delivery (CD) pipelines using Jenkins, and manage infrastructure as code (IaC) with Terraform for automated deployment of AI/ML solutions.
  • Performance Optimization: Continuously monitor, evaluate, and optimize the performance, cost-efficiency, and reliability of deployed AI/ML infrastructure and data pipelines.
  • Collaboration: Work closely with data scientists, product managers, and business stakeholders to translate requirements into scalable technical solutions.
  • Code Quality: Write clean, well-documented, and testable code, adhering to best practices in software development, MLOps, and cloud engineering.
  • Mentorship (Senior/Expert): Mentor junior engineers, share knowledge, and contribute to the overall growth and technical excellence of the team.
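
As a flavour of the Pandas DataFrame transformations mentioned above, here is a minimal sketch; the column names, values, and FX rate are purely hypothetical, not taken from the posting:

```python
import pandas as pd

# Hypothetical transaction data; columns and values are illustrative only.
df = pd.DataFrame({
    "account": ["A", "A", "B", "B"],
    "amount": [100.0, -40.0, 250.0, -10.0],
})

# A typical transformation chain: filter rows, derive a column, aggregate.
summary = (
    df[df["amount"] > 0]                                  # keep credits only
      .assign(amount_eur=lambda d: d["amount"] * 0.23)    # hypothetical FX rate
      .groupby("account", as_index=False)["amount_eur"]
      .sum()
)
```

The same filter/derive/aggregate pattern scales up to PySpark on AWS Glue or Amazon EMR, where `assign` and `groupby` have close analogues in `withColumn` and `groupBy`.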

Requirements

Collaboration. Ownership. Passion. Courage. These are the values that guide us in being at our best, and that we imagine you share with us.

To succeed in this role, you should have:

  • Education: Bachelor's or Master's degree in Computer Science, Software Engineering, Data Engineering, or a related technical field.

Experience:

  • Senior: 5+ years of professional experience in data engineering, cloud infrastructure, or AI/ML engineering, with a strong focus on AWS.
  • Expert: 8+ years of professional experience, including leading complex data/AI infrastructure projects and significant contributions to production systems on AWS.
  • AWS Expertise (Must Have):
  1. Orchestration Services: Hands-on experience with AWS Step Functions, Amazon EventBridge, Amazon Managed Workflows for Apache Airflow (MWAA), and AWS Lambda.
  2. Processing Services: Proven experience with AWS Glue, Amazon EMR, and Amazon EKS.
  3. Storage Service: Expert knowledge of Amazon S3.
  4. Querying & Analysis: Experience with AWS Athena.
  • Big Data & ETL:
  1. Extensive experience with big data ETL and analytics development using PySpark with Python and/or Spark with Scala.
  2. Working experience with the Hadoop ecosystem, including Hive, Impala, Sqoop, HDFS, and Oozie.
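
For context on the orchestration services listed above, a minimal AWS Step Functions workflow is defined in Amazon States Language JSON along these lines; the state names and the Lambda ARN below are placeholders for illustration, not details from the posting:

```json
{
  "Comment": "Sketch: run an ETL Lambda, then branch on its reported status.",
  "StartAt": "RunEtl",
  "States": {
    "RunEtl": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:eu-north-1:123456789012:function:etl-job",
      "Next": "CheckStatus"
    },
    "CheckStatus": {
      "Type": "Choice",
      "Choices": [
        { "Variable": "$.status", "StringEquals": "OK", "Next": "Done" }
      ],
      "Default": "Failed"
    },
    "Done": { "Type": "Succeed" },
    "Failed": { "Type": "Fail" }
  }
}
```

In practice such a definition is deployed via IaC (e.g. Terraform) and triggered on a schedule or by an Amazon EventBridge rule.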

Programming:

  • Expert proficiency in Python, including extensive experience with Pandas DataFrame transformations.
  • Proficiency in Scala for Spark development (highly preferred).
  • CI/CD & IaC:
  1. Strong knowledge of Continuous Integration (CI) and Continuous Delivery (CD) pipelines using Jenkins.
  2. Experience with Infrastructure as Code (IaC) using Terraform.
  • Problem-Solving: Excellent analytical and problem-solving skills, with the ability to design and implement robust, scalable data and AI solutions.
  • Communication: Strong communication skills, with the ability to articulate complex technical concepts to both technical and non-technical audiences.

Preferred Skills & Qualifications:

  • Experience with MLflow for ML lifecycle management.
  • Hands-on experience with Amazon SageMaker and/or Amazon Bedrock for model training and deployment.
  • Familiarity with other AWS AI/ML services (e.g., Amazon Rekognition, Amazon Comprehend).
  • Experience with real-time data processing and streaming technologies (e.g. Apache Kafka).
  • Certifications such as AWS Certified Solutions Architect, AWS Certified Data Analytics, or AWS Certified Machine Learning.

Additional information

Submit your application no later than 30/11/2025.
