Job Details:

Job Description: 

We are offering a student internship / master's thesis / student worker position (f/m/d) to enhance our capability to perform research in the area of Automated Driving.
You will be involved in research on explainable AI for LLM-based vision models and build tools that enhance safety-critical applications. In this position, you will sharpen your profile in the development of real-world applications and learn to work in an open development team. Please note that to be considered for this role you need to be enrolled in a study program.

Please note that the student needs to be enrolled for the full duration of the student employment.
Your responsibilities will include:

  • Conduct research on combining LLM-based vision models with explainable AI techniques to improve model interpretability and transparency.

  • Participate in experimental design and execution, including training and fine-tuning LLM-based vision models with explainable AI components.

  • Conduct literature reviews on the latest advancements in LLM-based vision models.

  • Analyze experimental results and contribute to the interpretation and discussion of findings.

  • Create novel architectures and tools to further explore explainable AI for safety-critical applications.

  • Collect and analyze data using existing research methodologies and tools.

Qualifications:

Required

  • Currently pursuing a master's degree in computer science, neuroscience, electrical engineering, or a related field.

  • Strong understanding of AI models and practical experience in programming with PyTorch.

  • Proficiency in advanced linear algebra, with a solid technical background in mathematical concepts.

  • Experience with Unix/Linux systems, including shell scripting and command-line operations.

  • Fluent in English; proficiency in German is a plus.


Advantageous

  • A background integrating neuroscience with explainable AI in computer vision, fostering insights into both biological vision systems and AI interpretability.

  • Experience with SLURM workload management.

  • Familiarity with uncertainty estimation techniques for AI models.

  • Previous involvement in computer vision research, familiarity with LLMs, and knowledge of explainable AI techniques.

Job Type:

Student / Intern

Shift:

Shift 1 (Germany)

Primary Location: 

Germany, Munich

Additional Locations:

Business group:

Intel Labs is the company's world-class, industry leading research organization, responsible for driving Intel's technology pipeline and creating new opportunities. The mission of Intel Labs is to deliver breakthrough technologies to fuel Intel's growth. This includes identifying and exploring compelling new technologies and high risk opportunities ahead of business unit investment and demonstrating first-to-market technologies and innovative new usages for computing technology. Intel Labs engages the leading thinkers in academia and industry in addition to partnering closely with Intel business units.

Posting Statement:

All qualified applicants will receive consideration for employment without regard to race, color, religion, religious creed, sex, national origin, ancestry, age, physical or mental disability, medical condition, genetic information, military and veteran status, marital status, pregnancy, gender, gender expression, gender identity, sexual orientation, or any other characteristic protected by local law, regulation, or ordinance.

Position of Trust

N/A

Work Model for this Role

This role will require an on-site presence.

Location

DEU - Neubiberg
