Are you an experienced Data Engineer who has been working with the Azure technology stack for at least five years? If so, this could be the assignment for you!

Assignment description:

The Data & Analytics Team, in the IT Department, is building a Data Lakehouse to meet the needs of the business. Having defined both the Target and Solution Architecture, they are now expanding the team to increase the number of Data Pipelines they can deliver in the coming year.

In the day-to-day role, you will be expected to manage the full development life cycle of Data Pipelines, from working with the business to define their requirements through to turning them into live BI solutions. We are particularly interested in developers who have expertise in working with Data Lakehouse solutions.


The technologies we work with include:

– Azure services such as Data Factory, Databricks, Data Lake Storage, Purview, and DevOps.
– Languages such as SQL, Python, and PySpark.
– Tools such as Visual Studio, Visual Studio Code, and Redgate.


To be successful in this role, you should have:

– Experience in technical roles within cloud and data environments.
– Hands-on experience with best practices in implementing data-driven solutions using Azure and big data technologies.
– Knowledge of major big data solutions including Databricks, Synapse Analytics, Azure Event Hub, and Delta Lake.
– Proficiency in data modelling, ETL processes, and data lakehouse concepts.
– Familiarity with Azure DevOps and Infrastructure as Code (IaC) tools such as Terraform.

Estimated Workload: 100%

Working Language: English/Swedish

We make offers continuously, which means we sometimes remove assignments before the deadline. If you are interested, we recommend that you apply immediately.