We are offering a student internship/master thesis/student worker position (f/m/d) to strengthen our research in the area of Automated Driving.
You will be involved in the research topic of explainable AI related to LLM-based vision models and will develop tools used to enhance safety-critical applications. In this position, you will be able to sharpen your profile in the development of real-world applications and learn to work in an open development team. Please note that to be considered for this role you need to be enrolled in a study program.
Enrollment must be maintained for the full duration of the student employment.
You will be working on
Conduct research on combining LLM-based vision models with explainable AI techniques to improve model interpretability and transparency.
Participate in experimental design and execution, including training and fine-tuning LLM-based vision models with explainable AI components.
Conduct literature reviews on the latest advancements in LLM-based vision models.
Analyze experimental results and contribute to the interpretation and discussion of findings.
Create novel architectures and tools to further explore explainable AI for safety-critical applications.
Collect and analyze data using existing research methodologies and tools.
Required
Currently pursuing a master's degree in computer science, neuroscience, electrical engineering, or a related field.
Strong understanding of AI models and practical experience in programming with PyTorch.
Proficiency in advanced linear algebra, with a solid technical background in mathematical concepts.
Experience with Unix/Linux systems, including shell scripting and command-line operations.
Fluent in English; proficiency in German is a plus.
Advantageous
Background at the intersection of neuroscience and explainable AI in computer vision, with an understanding of both biological vision systems and AI interpretability.
Experience running workloads on SLURM-managed clusters.
Familiarity with uncertainty estimation techniques for AI models.
Previous involvement in computer vision research, familiarity with large language models, and knowledge of explainable AI techniques.
Work Model for this Role
This role will require an on-site presence.