Talent Job Seeker

Mid/Senior Data Engineer

About the position

Get to know us better

CodiLime is a software and network engineering industry expert and the first-choice service partner for top global networking hardware providers, software providers and telecoms. We create proofs of concept, help our clients build new products, nurture existing ones and provide services in production environments. Our clients include both tech startups and big players across various industries and geographies (US, Japan, Israel, Europe). While no longer a startup - we have 250+ people on board and have been operating since 2011 - we have kept our people-oriented culture. Our values are simple: Act to deliver. Disrupt to grow. Team up to win.

The project and the team

You will join a data engineering team working on cloud-based solutions for clients in industries such as finance, banking, insurance, and healthcare. Typical projects include:
- Migration from on-premise systems to modern cloud data platforms
- Building modern and scalable data pipelines and analytics layers
- Designing data architectures (lakehouse / data warehouse)

You will work on real production systems, delivering reliable data solutions that power business decisions across entire organizations. You will also have real influence on technical decisions and the shape of the data platform.

Depending on the project, you will work with a subset of the following technologies:
- Databricks, Spark
- Python, SQL
- Snowflake
- Power BI / Tableau
- Microsoft Azure (Blob Storage, Azure Functions, Fabric, Databricks, AI Search)
- (Optional) Apache Airflow / Dagster

We work on multiple projects at the same time, so we may suggest a different project if it better matches your experience and profile.

Your role

As a Mid/Senior Data Engineer, you will focus on building and optimizing data platforms, with some exposure to analytics.
Responsibilities:
- Design and build scalable ETL / ELT pipelines
- Develop data transformations (SQL / Python / dbt)
- Model data using best practices
- Optimize performance and cost of data processing
- Ensure data quality (testing, validation, monitoring)
- Support cloud migration initiatives
- Collaborate with analytics teams to deliver clean, reliable datasets
- Support analytics teams in building dashboards and reporting layers

Do we have a match?

As a Data Engineer, you must meet the following criteria:
- Strong experience with Databricks (Spark)
- Advanced SQL skills (including query optimization)
- Experience with Python
- Experience with Snowflake
- Experience with BI tools (Power BI / Tableau)
- Experience with version control systems (Git)
- Experience with AWS or Azure
- Good knowledge of English (minimum C1 level)

Beyond the criteria above, we would appreciate the following nice-to-haves:
- Experience with Apache Airflow, Dagster, or similar orchestration tools
- Experience with Azure AI Search or AWS OpenSearch

More reasons to join us

- Flexible working hours and approach to work: fully remote, in the office, or hybrid
- Professional growth supported by internal training sessions and a training budget
- Solid onboarding with a hands-on approach to give you an easy start
- A great atmosphere among professionals who are passionate about their work

Place of work

Województwo mazowieckie
Poland

About the company

Identify the best Talent with Talent Job Seeker



Job ID: 10551791 / Ref: 3f949a8ae095ece2a854f699ec2ac3f9

Open application
