The Machine Learning team at OpenTable operates under two opposing conditions, each of which presents an opportunity:

OpenTable is the world's leading provider of online restaurant reservations, seating more than 25 million diners per month via online bookings across approximately 60,000 restaurants. It has a wealth of diner and restaurant data going back more than 20 years.

OpenTable fields a lean organization, with just over 1,000 employees globally. Our team currently has 15 people, and we aspire to grow.

As a member of the team, you will benefit from both: your projects will have enough data and usage to be interesting and to make a meaningful impact, and you will have the opportunity to work on a variety of exciting projects across the company. However, you will need to think critically and ruthlessly prioritize, since the team has finite bandwidth. If these challenges sound appealing, we look forward to hearing from you!

OpenTable strives to provide a fair, collaborative, and balanced work environment.

Unfortunately, we do not provide visa sponsorship.

In this role, you will:

Our team is looking for someone passionate about Natural Language Processing. You will work within a team of scientists and engineers who apply NLP to improve the quality of OpenTable's products.

  • Conduct research and development of Natural Language Processing (NLP) models, and collaborate with engineers to bring these models into production. Responsibilities include, but are not limited to:
      • Applying prompt engineering and parameter-efficient fine-tuning methods to improve the performance of Large Language Models (LLMs) on specific tasks.
      • Building the data pipelines required to train and evaluate NLP models.
      • Implementing comprehensive evaluation suites to measure model performance and safety on specific tasks.
  • Develop applications that use large language models (LLMs), extending their capabilities through Retrieval-Augmented Generation (RAG) and the incorporation of tools.
  • Contribute to our internal ML packages and help the team build tools for training, evaluating, debugging, and interpreting NLP models for retrieval, reranking, and generation.
  • Stay current with NLP research, know when to apply it to your work, and communicate it clearly to your partners.

Please apply if:

  • Experience applying transformer architectures to NLP problems is crucial; we expect significant hands-on experience with encoder and, especially, decoder architectures.
  • Experience applying parameter-efficient fine-tuning to LLMs, as well as familiarity with training techniques such as SFT, DPO, and PPO.
  • Experience optimizing LLMs for inference.
  • Strong understanding of data structures, algorithms, and object-oriented design.
  • Strong knowledge of Python and hands-on experience with NLP and scientific computing packages: the Hugging Face ecosystem (Transformers, TRL, PEFT), DeepSpeed, PyTorch, NumPy, and SciPy.
  • Passion for continuous learning and self-development
  • Strong communication skills and the ability to work closely with others in a collaborative team.

Nice to have:

  • Contributions to an open-source Machine Learning (ML) package, showing your skills in researching, implementing, and evaluating academic papers.
  • Publications in the field of NLP.
  • Hands-on experience deploying Large Language Models (LLMs) in real-world products, particularly in latency-sensitive environments where the model serves many queries simultaneously.
  • Proficiency in Large Language Model (LLM) inference deployment, with knowledge of relevant technologies and packages such as ONNX, FasterTransformer/TensorRT-LLM, llama.cpp, Triton Inference Server, and vLLM.
  • Participation in Kaggle competitions focused on NLP, demonstrating an in-depth understanding of problems and data, as well as the ability to experiment with a diverse set of NLP techniques and models to find an effective solution.
  • Experience in building Retrieval-Augmented Generation (RAG) or other applications leveraging Large Language Models (LLMs).
  • A graduate degree or equivalent in Machine Learning, Mathematics, Statistics, Physics, Economics, or a related technical field is preferred.

Benefits:

  • Paid Time Off - 20 days a year
  • Birthday/celebration PTO - 1 day
  • Flexible sick time off
  • Paid volunteer time
  • Annual company week off
  • Parental Leave Benefits
  • Dental & Vision Insurance
  • Life & Disability Insurance
  • Group RRSP and DPSP
  • Major Medical Insurance (dependent care options)

A variety of factors go into determining a salary range, including but not limited to external market benchmark data, geographic location, and years of experience sought/required. The range for this remote, Canada-based role is $132,000-$188,000 CAD.

In addition to a competitive base salary, roles are eligible for additional compensation and benefits including: annual cash bonus, equity grant; health benefits; flexible spending account; retirement benefits; life insurance; paid time off (including PTO, paid sick leave, medical leave, bereavement leave, floating holidays and paid holidays); and parental leave and benefits.

Diversity, Equity, and Inclusion

OpenTable aspires to be a workplace that reflects the diverse communities we serve and a culture that is inclusive and welcoming. Hiring people with different backgrounds, experiences, perspectives, and ideas is essential to innovation and to how we deliver great experiences for our users and our partners. Representation matters.

We ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform job responsibilities, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Salary

$132,000 - $188,000 CAD per year

Location

Remote, Canada (PST Working Hours Preferred)

Job Type

Full Time
