DevOps Automation Engineer (ML projects)

  • By Anna Bondar
  • 19 October 2021

Our client is the world’s largest peer-to-peer learning community for students, parents and teachers.

Every month they are proud to be home to 350 million users around the world. When you join the Search Infrastructure team, you will contribute to their mission of giving every student in the world straightforward access to the information they need. The team works in two significant areas:

The first is traditional search, where they focus on:

  • Providing users with new tools that help them express their search questions more precisely
  • Developing and enhancing search algorithms that improve retrieval precision and personalize the search

The second is what they call visual search, and in this area they focus on:

  • Improving and introducing new capabilities to visual search like math solving
  • Developing and enhancing search algorithms that improve retrieval precision and personalize the search
  • Leveraging mobile device capabilities to provide a better search experience

You will have the chance to work with top-class scientists, engineers, and domain experts and to drive the data science process of their technology end to end. Our client integrates R&D workflows in state-of-the-art machine intelligence into product features and internal services, aiming to understand and personalize the learning experiences of their users.

The ideal candidate is an enthusiast of educational technologies with a background in software development and a skillset that blends cloud infrastructure, machine learning, and DevOps.

Responsibilities:


  • Work with the ML infra team, Solutions Architects, and the Automation infra team to identify and architect infrastructure solutions that empower the team to move faster, more effectively, and with a higher level of automation.
  • Turn machine learning artifacts into production systems, integrated with other product features or business processes.
  • Deploy robust pipelines for training, evaluation, and inference at scale.
  • Implement platform-level machine learning operations workflows and solutions.
  • Build tools for supporting experiments, development, and debugging of machine learning models.
  • Build and maintain robust monitoring frameworks for the machine learning scheduled jobs and microservices.
  • Maintain and update platform-level machine learning capabilities and infrastructure.
  • Create automated workflows for building, testing, experiment tracking, versioning, and deployment using CI/CD tools.
  • Implement safe release and deployment models (e.g. canary release, blue/green deployment, load autoscaling) for achieving resilience in case of component failures or traffic bursts.
  • Create and maintain the infrastructure required for both development and production environments using infrastructure as code.
  • Promote DevOps culture and practices across the whole team.
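
Safe-release models like the canary release mentioned above come down to routing a small, controlled slice of traffic to the new version before a full rollout. A minimal sketch of that routing decision, with a hypothetical `pick_backend` helper and weight that are illustrative only, not part of any specific stack:

```python
import random

# Hypothetical canary routing: send a small, configurable fraction of
# traffic to the new model version and the rest to the stable one.
CANARY_WEIGHT = 0.05  # 5% of requests hit the canary

def pick_backend(rng=random.random):
    """Return 'canary' for a small slice of traffic, 'stable' otherwise."""
    return "canary" if rng() < CANARY_WEIGHT else "stable"

# Rough check that the observed split matches the configured weight.
sample = [pick_backend() for _ in range(100_000)]
canary_share = sample.count("canary") / len(sample)
print(f"canary share: {canary_share:.3f}")  # ≈ 0.05
```

In practice this weight would live in a load balancer or service mesh (e.g. ALB weighted target groups), and monitoring on the canary's error rate would gate promotion or rollback.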

Requirements:

  • 3+ years of experience building and operating production environments.
  • Experience operationalizing projects in the cloud – AWS, with services like VPC, ELK, EKS, ECR, ECS, EC2, S3, RDS, SNS, and Lambda.
  • Experience using infrastructure as code (with Terraform or other IaC frameworks) and other best practices.
  • Working experience with Python, Go, or other modern programming languages.
  • Knowledge of Bash and the Unix command-line toolkit.
  • Experience with CI/CD pipelines or other code automation techniques and tools like GitHub Actions, AWS CodePipeline, CircleCI, DVC/CML, or similar.
  • Experience with logging, debugging, monitoring, and alerting tools (e.g. the Elastic stack, Datadog, AWS CloudWatch, Sentry, AWS X-Ray, Thundra, New Relic, or similar).
  • Interest in, and at least a basic understanding of, the machine learning domain (e.g. SageMaker).
  • Team player attitude and clear communication skills.
  • Familiarity with agile development and lean principles.
  • A culture of DevOps and high-quality software standards.
  • Fluent English.

Nice to have:

  • Experience with large-volume ETL jobs or data streaming.
  • At least some of the data and cloud infrastructure technologies, such as Spark, Databricks, Glue, EMR, Docker, AWS Batch, Kubernetes, AWS Fargate, AWS Lambda, Postgres, key-value stores, Redshift, or Snowflake.
  • At least some of the deployment and orchestration technologies, such as AWS Step Functions, AWS SageMaker Pipelines, Seldon, Kubeflow, TensorFlow Extended, or Airflow.
  • At least some of the ML technologies, such as TensorFlow, PyTorch, Spark ML, scikit-learn, XGBoost, MLflow, Weights & Biases (wandb), or related frameworks.

Offer:

  • Location: Kraków or Barcelona, or remotely from Poland/Spain
  • Budget: up to 27 000 PLN gross monthly
  • Start date: as soon as possible (however, they’re happy to wait for the right person)
  • Some of the benefits – the final offer will depend on the location:
    • Flexible working hours and the possibility to work remotely
    • Personal development budget of $800 per year, plus an unlimited time-off policy for participation in conferences and workshops, and access to an online learning platform with courses from Udemy, Harvard ManageMentor, and many others
    • Fully paid private health care packages for you and your family (dental care included) provided by Luxmed
    • Fully paid life insurance provided by Warta
    • Multisport Plus card
    • Access to the Mental Health Helpline – virtual support from external psychologists, psychotherapists, and coaches
    • AskHenry services – a personal concierge to help you settle everyday matters (like IKEA shopping or a visit to the shoemaker)
    • Possibility to join one of the Employee Resource Groups and initiatives
    • If needed, additional budget for remote work accessories

Note: Prepare your CV in English (PDF), fill in the form and apply! 🙂

Please include in your CV the following clause necessary for the recruitment process: 

I agree to the processing of personal data that I have made available voluntarily in the recruitment process by the Administrator of personal data, i.e. Dotcommunity Spółka z ograniczoną odpowiedzialnością [Ltd.] based in Cracow, 15 Żabiniec Street, 31-215 Cracow, registered in Poland, the Cracow’s District Court – Śródmieście, XI Commercial Division of the National Court Register under number 0000468484, VAT number: 9452174499, (“Dotcommunity”) in order to carry out the recruitment process for DevOps Automation Engineer ML projects position on the basis of Art.6 item 1a of the Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation)
