
Data Engineer


Ovo Energy
Location: London
Job type: Permanent
Sector: Power & Energy
Category: General Engineering Jobs
Where in the world of OVO will I be working?

You will be joining our Data teams in our London office.

That sounds great! What will I be doing?

As a Data Engineer, you will work with all internal product, platform and data teams to create and maintain ETL pipelines.

You will collaborate with the Data Platform team to establish best practices for working with data and to jointly define requirements for the Data Platform. You will also keep track of both the ownership of implemented business logic and the ownership of the source data.

You will report to the Software Engineering Manager of the Data Platform team and will work closely with our Product, Tech and Analytics teams, along with a wide array of stakeholders across the organisation.

You will most likely have worked in businesses where data and analytics are at the core of the culture. You will place a high value on documentation and will use insights from data and related fields to continuously improve our products.

Main responsibilities:

* Create and maintain ETL pipelines, with tasks written in SQL or Python, and work closely with the data architect to maintain solid data definitions (a minimal illustrative sketch follows this list)

* Create domain-specific core tables, business-wide derived views, derived aggregated views, and pre-processed aggregated views for visualisations and analytics

* Build automated monitoring processes for data quality and integrity, and be responsible for the integrity and timely delivery of core data tables across the business.

* Team up with data scientists and analysts in the business to build optimised data views for algorithms or visualisation

* Productionise ML models and algorithms written by data scientists, and lend expertise across the business in data engineering best practices.

* Execute on the migration strategy and tasks to move existing data into the new platform, taking the above into account.

* Work with the data teams’ strategy for data stewardship and data championship programs, metadata management, master data management, and other lifecycle management programs, and coordinate with security and privacy programs for integrated approaches

* Identify gaps in data governance and information management practices and orchestrate the elimination of those gaps, leading to higher data management maturity

* Define metrics to measure and demonstrate information strategy and management value

* Weigh in on conversations about new technologies and tools to be used across the business to augment its data processing capabilities. Contribute to design discussions and challenge technology assumptions where required

* Maintain data product documentation.
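
By way of illustration only, a pipeline task of the kind described in the first two bullets might look like the minimal Python sketch below. The table names, source data and row-count threshold are hypothetical and not part of OVO's actual platform; the sketch simply builds a derived, aggregated view and runs a basic automated integrity check on it.

import sqlite3  # stand-in for a real warehouse client; illustrative only


def build_daily_usage(conn):
    """Create a pre-aggregated table for visualisation and analytics use."""
    conn.executescript(
        """
        DROP TABLE IF EXISTS daily_usage;
        CREATE TABLE daily_usage AS
        SELECT account_id,
               DATE(reading_ts) AS day,
               SUM(kwh)         AS total_kwh
        FROM meter_readings
        GROUP BY account_id, DATE(reading_ts);
        """
    )


def check_integrity(conn, min_rows=1):
    """Basic automated quality check: fail the run if the output table is empty."""
    (count,) = conn.execute("SELECT COUNT(*) FROM daily_usage").fetchone()
    if count < min_rows:
        raise ValueError(f"daily_usage has {count} rows; expected at least {min_rows}")


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    # Hypothetical source data so the sketch runs end to end.
    conn.executescript(
        """
        CREATE TABLE meter_readings (account_id INTEGER, reading_ts TEXT, kwh REAL);
        INSERT INTO meter_readings VALUES
            (1, '2019-05-01 09:00', 1.2),
            (1, '2019-05-01 18:00', 2.3),
            (2, '2019-05-01 12:00', 0.7);
        """
    )
    build_daily_usage(conn)
    check_integrity(conn)
    print(conn.execute("SELECT * FROM daily_usage ORDER BY account_id").fetchall())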

That sounds super exciting, but… Do I have what it takes?

* Proven data engineering background, having worked with both Python and SQL for a minimum of 3 years

* Exposure to other data science, BI and analytics tools, languages and concepts, such as Tableau, Scala and machine learning

* A solid background in the fields of data processing, data warehousing and business intelligence

* Team player with an entrepreneurial mindset and a roll-up-your-sleeves attitude. Detail-oriented, creative problem solver with excellent people management and stakeholder engagement skills.

* Proven experience with big data infrastructure. Experience with Kafka, Google Cloud Platform services and Spark is a definite plus.

* Previous experience with a data workflow tool (Airflow, Luigi, etc.) is a plus; a minimal illustrative sketch follows this list.

* A solid understanding of logging and tracking logistics, as well as the inner workings of A/B testing. Experience implementing such a platform is a definite plus.
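
Purely to illustrate the workflow-tool experience mentioned above, a minimal Airflow-style DAG might be wired up as in the sketch below. The DAG id, schedule and callables are hypothetical, and the import path shown is the Airflow 1.x one; this is not a description of OVO's actual setup.

from datetime import datetime

from airflow import DAG
from airflow.operators.python_operator import PythonOperator  # Airflow 1.x import path


def extract():
    # Hypothetical: pull yesterday's readings from a source system.
    print("extracting meter readings")


def load():
    # Hypothetical: write the transformed rows into the warehouse.
    print("loading daily_usage")


with DAG(
    dag_id="daily_usage_pipeline",
    start_date=datetime(2019, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Run the load only after the extract step has succeeded.
    extract_task >> load_task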