The salary range for this position (employment contract) is:
Mid role: 14 200 – 19 690 PLN gross
Junior role: 10 600 – 14 410 PLN gross
A hybrid work model agreed on with your leader and the team
About the job
We are seeking a skilled and motivated Data Engineer to join our team. The successful candidate will have a solid understanding of SQL and Python; knowledge of Spark, Google Cloud Platform, or Keboola is a nice-to-have. Your main responsibility will be to build data pipelines and enhance existing jobs, and you will also:
Design, develop, and maintain robust, scalable data pipelines using Python, SQL, Spark, and other relevant technologies
Develop and maintain Python/Flask-based microservices serving predictions from Machine Learning models (see the sketch after this list)
Respond to business data requests promptly and efficiently, prioritizing tasks as directed by the team leader and product owner
Carry out reverse engineering when necessary, especially in instances where documentation may be lacking
Ensure that the data solutions are aligned with the company's business requirements and strategic goals
Play an active role in decision-making processes regarding the selection and implementation of data frameworks
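For illustration only, here is a minimal sketch of what such a Flask prediction microservice might look like; the model artifact path, feature names, and endpoint are hypothetical placeholders, not Allegro's actual service:

```python
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

# Load a pre-trained model once at startup; the artifact path is an assumption.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)


@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)
    # Feature names below are hypothetical placeholders.
    features = [[payload["feature_a"], payload["feature_b"]]]
    prediction = model.predict(features)[0]
    return jsonify({"prediction": float(prediction)})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

In practice, a production service of this kind would also add input validation, logging, and health-check endpoints.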
We are looking for people who
Have a Bachelor's degree in Computer Science, Information Systems, or a related field; an advanced degree is a plus
Have proven experience as a Data Engineer or in a similar role
Can demonstrate proficiency in SQL, Python and other engineering tools
Can automate and test code delivery using DevOps principles (CI/CD); experience with GitHub Actions pipelines is a plus
Understand the concept of RESTful APIs
Are familiar with data pipeline orchestration tools such as Apache Airflow (see the sketch after this list)
Have strong communication skills, capable of conveying complex ideas in a clear, concise manner
Are detail-oriented and capable of working in a fast-paced, dynamic environment
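For reference, a minimal sketch of the kind of Apache Airflow DAG these orchestration tools are used for; the DAG id, schedule, and task are hypothetical, and it assumes Airflow 2.x:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_load():
    # Placeholder for the actual extract/load logic (e.g. SQL + Python).
    print("extracting and loading data")


# The DAG id, schedule, and single-task layout are illustrative assumptions.
with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(
        task_id="extract_and_load",
        python_callable=extract_and_load,
    )
```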
What we offer
A hybrid work model that you will agree on with your leader and the team. We have well-located offices (with fully equipped kitchens and bicycle parking facilities) and excellent working tools (height-adjustable desks, interactive conference rooms)
An annual bonus of up to 10% of your annual gross salary (depending on your annual assessment and the company's results)
A wide selection of fringe benefits in a cafeteria plan – you choose what you like (e.g. medical, sports or lunch packages, insurance, purchase vouchers)
English classes, paid for by us, related to the specific nature of your job
16" or 14" MacBook Pro with M1 processor and 32GB RAM or a corresponding Dell with Windows (if you don’t like Macs) and other gadgets that you may need
Working in a team you can always count on — we have on board top-class specialists and experts in their areas of expertise
A high degree of autonomy in terms of organizing your team’s work; we encourage you to develop continuously and try out new things
Hackathons, team trips, a training budget, and an internal educational platform, MindUp (including courses on work organization, communication, motivation, and various technologies and subject-matter topics)
If you want to learn more, check out this webpage or listen to the Allegro Tech Podcast episode about recent projects in the Data Science Hub
Why is it worth working with us
Data plays a key role in Allegro's operations: we are a data-driven technology company, and through the models and analyses you provide, you will have a significant impact on one of the largest e-commerce platforms in the world
Gain invaluable experience and deepen your skills through continuous learning and development opportunities
Collaborate with a network of industry experts, enhancing your professional growth and knowledge sharing
We are happy to share our knowledge. You can meet our speakers at hundreds of technology conferences, such as the Data Science Summit and the Big Data Technology Warsaw Summit, and we also publish content on the allegro.tech blog
Depending on the team and its needs, we use the latest versions of Java, Scala, Kotlin, Groovy, Go, Python, Spring, Reactive Programming, Spark, Kubernetes, and TensorFlow
Microservices: a few thousand of them handling 1.8M+ requests per second on our business data bus
In Data & AI you would join a team of over 200 data, ML, and product specialists who oversee dozens of products and a few hundred production ML models, and who govern all of Allegro's data (several dozen petabytes)
We practice Code Review, Continuous Integration, Scrum/Kanban, Domain-Driven Design, Test-Driven Development, and Pair Programming, depending on the team
GenAI tools (e.g. Copilot, internal LLM bots) support our everyday work
Our internal ecosystem is based on self-service and widely used tools such as Kubernetes, Docker, and GitHub (including CI/CD). From day one, this will allow you to develop software in any language, architecture, and scale, limited only by your creativity and imagination
We actively participate in the life of Poland's biggest user groups centered around the technologies we use at work (Java, Python, DevOps)
Technological autonomy: you get to choose which technology solves the problem at hand (no need for management's consent), and you are responsible for what you create
Apply to Allegro and see why it is #dobrzetubyć (#goodtobehere)