Company Description

Renesas is one of the world’s top semiconductor companies. We strive to develop a safer, healthier, greener, and smarter world, and our goal is to make every endpoint intelligent by offering product solutions in the automotive, industrial, infrastructure, and IoT markets. Our robust product portfolio includes world-leading MCUs, SoCs, and analog and power products, plus Winning Combination solutions that curate these complementary products. We are a key supplier to the world’s leading manufacturers of the electronics you rely on every day; you may not see our products, but they are all around you.

Renesas employs roughly 21,000 people in more than 30 countries worldwide. As a global team, our employees actively embody the Renesas Culture, our guiding principles based on five key elements: Transparent, Agile, Global, Innovative, and Entrepreneurial. Renesas is committed to diversity and inclusion, with dedicated initiatives and a leadership team that champions them. At Renesas, we want to build a sustainable future where technology helps make our lives easier. Join us and build your future by being part of what’s next in electronics and the world.

Overview

We are seeking a talented and motivated AI Model Optimization, Quantization and Framework Engineer to join our team. In this role, you will be part of the AI & Cloud Engineering (ACE) Division and Hybrid Compiler team.

The team has been developing a comprehensive AI compiler strategy that delivers a highly flexible platform for exploring new DL/ML model architectures, combined with auto-tuned high performance for production environments across a wide range of hardware architectures. The compiler framework, ML graph optimizations, and hardware-specific kernel authoring impact the performance, developer efficiency, and deployment velocity of both AI training and inference platforms. You will develop AI compiler frameworks to accelerate machine learning workloads on the next generation of AI hardware. You will work closely with AI researchers to analyze deep learning models and determine how to lower them efficiently onto AI platforms. You will also partner with hardware design teams to develop compiler optimizations for high performance. You will apply software development best practices to design features, optimizations, and performance tuning techniques. You will gain valuable experience in developing machine learning compiler frameworks and will help drive next-generation hardware-software co-design for AI domain-specific problems.

Our division’s mission is to use the latest AI and cloud technologies to develop the best AI inference software for advanced driver safety engineers building self-driving vehicles and other high-performance compute products. Renesas is the leading automotive electronics supplier globally, and this is a rare opportunity to develop the infrastructure required to deploy our AI software to the billions of devices we ship to customers every year. You will join our newly formed AI & Cloud Engineering organization of around 100 software engineers. Due to strong demand for our AI-related products, we plan to triple in size over the next three years, so there is plenty of room for you to help grow the team while it is still small. Our team’s key locations are Tokyo, London, Paris, Dusseldorf, Beijing, Singapore, Ho Chi Minh City, and other metropolitan areas, but you can also join fully remotely from other locations globally or get our support to relocate to one of our key hubs such as Tokyo.

Job Description

  • Development of the AI compiler framework, high-performance kernel authoring, and acceleration on next-generation hardware architectures.
  • Contribution to the development of industry-leading machine learning framework core compilers to support new state-of-the-art inference and training ML/AI accelerators and optimize their performance.
  • Collaboration with AI research scientists to accelerate the next generation of deep learning models, such as recommendation systems, computer vision, and natural language processing.
  • Performance tuning and optimization of deep learning frameworks.
  • Model optimization through pruning and quantization algorithms and hardware-aware neural architecture search techniques.

Qualifications

  • Bachelor’s or Master’s degree in computer science, machine learning, mathematics, physics, electrical engineering, or a related field.
  • Experience in C/C++, Python, or other related programming languages.
  • Experience in accelerating deep learning models or libraries on hardware architectures.
  • Experience with Post-Training Quantization (PTQ), Quantization-Aware Training (QAT), and other quantization techniques and strategies.
  • Experience working with machine learning frameworks such as PyTorch, TensorFlow, and ONNX.
  • Ability to speak and write in English at a business level.
  • Experience as a Product Owner on a Scrum team is a plus.

Additional Information

Renesas is a leading Japanese semiconductor company building a sustainable future through technology that makes people’s lives easier. We provide advanced products and solutions in diverse fields such as autonomous driving and IoT. Our products are adopted by major electronics manufacturers around the world and are used in all kinds of everyday electronic devices essential to daily life. More than 20,000 employees work at our manufacturing, development, and sales sites in 25 countries worldwide. As a global team, all of our employees tackle a wide range of challenges every day, learning from one another, collaborating, growing, and advancing toward our goals, guided by our code of conduct, the Renesas Culture. Our business environment, including a recent series of large-scale M&A activity, is changing more dynamically than ever. We will continue to expand our share in fast-growing markets related to infrastructure and the data economy, and to strengthen our presence in the industrial/IoT and automotive fields. Amid this rapid change, we look forward to welcoming people who, as members of our global team, will build a sustainable future together with us.

Location

Tokyo, Japan

Remote Job

Job Overview
Job Type
Full Time
