Master Thesis - LLMs as AI Model Explainers
In this master thesis, we would like to explore and implement techniques for using LLMs as AI model explainers, either by prompting LLMs to generate explanations directly or by translating the output of explainable AI (XAI) methods into natural language. The outcome of the selected or proposed technique will be evaluated with trustworthiness assessments such as consistency, among other metrics. The results of this master thesis will help bridge the knowledge gap that keeps non-AI experts from understanding AI models, using explainability.
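As a concrete illustration of the translation approach, the sketch below shows one possible way to turn precomputed feature attributions (e.g., SHAP values) into a natural-language prompt for an LLM. This is a minimal sketch, not a prescribed design: the `llm` callable is a hypothetical stand-in for any text-generation client, and the feature names and values are invented telecom-flavored examples.

```python
# Minimal sketch: translating precomputed feature attributions (e.g., SHAP
# values) into a natural-language prompt for an LLM-based explainer.
# NOTE: `llm` is a hypothetical placeholder for any text-generation client;
# the feature names and attribution values below are illustrative only.

from typing import Callable, Dict


def attributions_to_prompt(prediction: str,
                           attributions: Dict[str, float],
                           top_k: int = 3) -> str:
    """Format the top-k most influential features as an explanation request."""
    ranked = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = [f"- {name}: attribution {value:+.3f}" for name, value in ranked[:top_k]]
    return (
        f"A model predicted '{prediction}'. "
        "The most influential input features were:\n"
        + "\n".join(lines)
        + "\nExplain this prediction in plain language for a non-AI audience."
    )


def explain(prediction: str,
            attributions: Dict[str, float],
            llm: Callable[[str], str]) -> str:
    """Send the formatted prompt to an LLM and return its textual explanation."""
    return llm(attributions_to_prompt(prediction, attributions))


if __name__ == "__main__":
    shap_like = {"signal_strength": -0.42, "cell_load": 0.31, "latency": 0.08}
    print(attributions_to_prompt("handover failure", shap_like))
```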
The thesis is suitable for one or two students and involves the following tasks:
• Conduct a literature review on explainable AI, prompting techniques, and retrieval-augmented generation.
• Select a suitable technique for directly generating explanations or translating them into textual form.
• Evaluate the trustworthiness of the textual explanations, for example via consistency across repeated generations (see the sketch after this list).
• Optionally, implement and test the solution on a telecom use case.
• Write an article draft presenting the research findings.
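One way the consistency assessment mentioned above could be operationalized is to sample several explanations for the same instance and score their pairwise semantic similarity. The sketch below assumes the sentence-transformers library; the embedding model name and the sample explanations are illustrative choices, not requirements of the thesis.

```python
# Minimal sketch of one possible consistency metric: embed repeated
# explanations of the same prediction and average their pairwise cosine
# similarity. A higher score suggests a more self-consistent explainer.
# Assumes the sentence-transformers library; the model name is illustrative.

from itertools import combinations
from typing import List

from sentence_transformers import SentenceTransformer, util


def consistency_score(explanations: List[str],
                      model_name: str = "all-MiniLM-L6-v2") -> float:
    """Mean pairwise cosine similarity over two or more explanations."""
    model = SentenceTransformer(model_name)
    embeddings = model.encode(explanations, convert_to_tensor=True)
    pairs = list(combinations(range(len(explanations)), 2))
    sims = [float(util.cos_sim(embeddings[i], embeddings[j])) for i, j in pairs]
    return sum(sims) / len(sims)


if __name__ == "__main__":
    # Hypothetical explanations sampled for the same prediction.
    samples = [
        "The prediction is driven mainly by weak signal strength.",
        "Low signal strength is the dominant factor behind this prediction.",
        "The model reacted mostly to the poor signal strength.",
    ]
    print(f"consistency: {consistency_score(samples):.3f}")
```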
Qualifications
We are looking for an open-minded student who seeks a challenging research project with the freedom to propose and develop their own ideas. To be successful in this thesis work, candidates need the following:
• MSc studies in machine learning, computer science, electrical engineering, statistics, applied math, or a similar area.
• Good theoretical knowledge of deep learning and machine learning.
• Experience with generative AI / LLM projects is a plus.
• Familiarity with Git. Familiarity with Docker and Kubernetes is a plus.
• Excellent programming skills in Python, including PyTorch.
• Good writing skills and fluency in English.