Versatility is the keyword when it comes to what we do in IT. Data Competency is one of our initiatives to support the digital transformation of the financial sector, enabling our customers to become truly data-driven companies. Our mission is to build the capabilities needed to address the data-related needs of the financial sector.
As a Spark Engineer, you will be responsible for building and optimizing our clients’ data pipelines and data flows. You will work on various data initiatives and ensure optimal data delivery. You are an ideal candidate if you have experience with the Spark ecosystem. If you enjoy building data systems from scratch and improving existing ones, you will have those opportunities at Sollers.
Key facts about Sollers Consulting:
We are a Team of over 900 professionals who build the Digital Future for the world’s largest insurance, banking and leasing organisations. Our history of business advisory and software implementation goes back to the year 2000. Sollers Consulting’s roots are in Europe, but the company’s footprint is visible around the world.
As an international company with offices & projects around the world and Sollers from 20+ nationalities, we thrive in our multicultural environment. We guarantee you will feel like you belong here, whether you are from Poland, the West, the East or another hemisphere.
Tools & technologies used on projects:
• Data architecture, data modeling, design patterns
• RDBMS and NoSQL databases
• DataOps, ETL technologies
• Real-time data streaming, Spark, Airflow, Kafka
• OLTP, OLAP, DWH, data lakes
• BI & predictive analytics; AI/ML
• Python, Java, Scala, R
You will have an opportunity to:
• Build scalable data processing pipelines and SQL database integrations.
• Advise on the use of appropriate tools and technologies.
• Recommend potential improvements to existing data architecture.
• Collaborate with analysts, experts and tech leads in Agile methodology to meet clients' needs.
• Address aspects like data privacy and security, compliance with regulations, integrity and availability, etc.
• Guide the team with good Spark development practices.
• Create Spark jobs for data transformation and aggregation.
• Perform Spark query tuning and performance optimization.
• Define feasible test strategies and troubleshoot failures.
We bet on you, so we expect you to:
• Have proven, hands-on experience with Spark.
• Deeply understand distributed systems (e.g. CAP theorem, partitioning, replication, consistency, and consensus).
• Be proficient in SQL as well as in Java or, preferably, Scala.
• Know how to write useful abstractions to process similarly formatted datasets in a generic way.
• Be experienced in defining a strategy for handling data schemas so that changes in the data don’t break the code.
• Speak English (min. B2).
• Communicate effortlessly with clients and team members.
• Be able to work in the European Union.
We offer you:
• The opportunity to quickly develop professionally.
• Clear career path and future salary projection.
• Individual learning & development budget.
• German and French language classes.
• Comprehensive health care, life insurance, travel insurance.
• Home office policy.
• Family support: wedding gifts, generous layette for newborns, family parties.
• Relocation package (if you come from another city).