We are looking for a Data Engineer to help build our client's reporting and analytics products.
About the role & team:
The team belongs to the group responsible for the company's entire big data pipeline. The hiring team transforms 3 petabytes of raw, unstructured data into a scalable, high-performance data warehouse for online analytical processing (OLAP). Besides serving clients, the team also supports internal functions with data insights and solutions.
Responsibilities:
- Own the data and business logic of the reporting platform
- Write Spark code that transforms granular data into pre-aggregates
- Build and support tools to share analytics data internally and to clients
- Load data into the Vertica analytical database and optimize and tune its performance
- Write and deploy code following standards and market best practices
- Proactively identify risks and escalate them within the team
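To illustrate the pre-aggregation responsibility above: the sketch below rolls hypothetical granular events up into one row per (date, campaign). The schema and field names are invented for illustration; in this role the rollup would typically be written as a Spark job rather than plain Python.

```python
from collections import defaultdict

# Hypothetical granular events: (date, campaign_id, impressions, clicks).
events = [
    ("2024-05-01", "c1", 100, 4),
    ("2024-05-01", "c1", 250, 9),
    ("2024-05-01", "c2", 80, 1),
    ("2024-05-02", "c1", 120, 3),
]

def pre_aggregate(rows):
    """Roll granular rows up to summed totals per (date, campaign_id)."""
    totals = defaultdict(lambda: [0, 0])
    for date, campaign, impressions, clicks in rows:
        totals[(date, campaign)][0] += impressions
        totals[(date, campaign)][1] += clicks
    return {key: tuple(sums) for key, sums in totals.items()}

# ("2024-05-01", "c1") aggregates to (350, 13): 100+250 impressions, 4+9 clicks.
print(pre_aggregate(events))
```

In Spark the same shape of job is a `groupBy` over the grain columns followed by sum aggregations, with the result written out as a pre-aggregate table for the reporting platform.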
We expect you to have:
- 3+ years of experience in data engineering, BI, or data analytics
- Good SQL knowledge and experience using databases for analytics
- Experience with Python, Scala, or Java
- Confidence using command-line interfaces
- Knowledge of contemporary software development practices
- Willingness to work in a highly collaborative environment
- Good sense of humor
It would be an advantage if you have knowledge of:
- Apache Spark
- Vertica
- Hadoop ecosystem
Company offers:
- A chance to influence how the Internet evolves worldwide
- The opportunity to compete with world-class companies beyond the Lithuanian market
- Work with high-load, highly available systems
- Friendly, fun and professional team
- A large server fleet, complex infrastructure, and a fast-growing product
- Possibilities to learn (conferences, training, knowledge sharing sessions)
- Fun extras: a leisure zone, team-building activities, and other events
- The monthly salary for this position starts from 2000 EUR gross; the final offer will depend on the candidate's experience and competencies