2 PhD research fellowship positions within “Explainable AI systems for business-critical applications".

NTNU, Norway

OPPORTUNITY DETAILS

Host institution: NTNU (state university)
Host Country: Norway
Deadline: 01 Feb 2021
Opportunity type: PhD
Opportunity funding: Full funding
Eligible Countries: All countries
Eligible Region: All Regions

This is NTNU

At NTNU, creating knowledge for a better world is the vision that unites our 7 400 employees and 42 000 students.

We are looking for dedicated employees to join us.

You will find more information about working at NTNU and the application process here.

   



About the position

Explainable AI and the EXAIGON project

The recent rapid advances of Artificial Intelligence (AI) hold promise for multiple benefits to society in the near future. AI systems are becoming ubiquitous and disruptive to industries such as healthcare, transportation, manufacturing, robotics, retail, banking, and energy. According to a recent European study, AI could contribute up to EUR 13.3 trillion to the global economy by 2030: EUR 5.6 trillion from increased productivity and EUR 7.73 trillion from opportunities related to consumer experience. However, before AI systems can be deployed in social environments, industry, and business-critical applications, several challenges related to their trustworthiness must be addressed.

Most of the recent AI breakthroughs can be attributed to the subfield of Deep Learning (DL), based on Deep Neural Networks (DNNs), which has been fueled by the availability of high computing power and large datasets. Deep learning has received tremendous attention due to its state-of-the-art, or even superhuman, performance in tasks where humans were long considered far superior to machines, including computer vision and natural language processing. Since 2013, DeepMind has combined the power of DL with Reinforcement Learning (RL) to develop algorithms capable of learning to play Atari games from pixels, beating human champions at the game of Go, surpassing all previous approaches in chess, and learning to accomplish complex robotic tasks. Similarly, DL technology has been combined with Bayesian Networks (BNs), resulting in Deep Bayesian Networks (DBNs), a framework that dramatically increases the usefulness of probabilistic machine learning. Despite their impressive performance, DL models have drawbacks, some of the most important being lack of transparency and interpretability, lack of robustness, and inability to generalize to situations beyond their past experiences. These issues are difficult to tackle due to the black-box nature of DNNs, which often have millions of parameters, making the reasoning behind their predictions incomprehensible even to human experts. In addition, there is a need to better understand societal expectations and what elements are needed to ensure societal acceptance of these technologies.


Explainable AI (XAI) aims at remedying these problems by developing methods for understanding how black-box models make their predictions and what their limitations are. The call for such solutions comes from the research community, industry, and high-level policymakers, who are concerned about the impact of deploying AI systems in the real world in terms of efficiency, safety, and respect for human rights. For XAI to be useful in business-critical environments and applications, it should not be limited to algorithm design, because the experts who understand decision-making models best are not in the right position to judge the usefulness and structure of explanations. XAI research must therefore be enhanced by incorporating models of how people understand explanations and of when explanations are sufficient for trusting something or someone. Such models have been studied for many years by philosophers, social and cognitive psychologists, and cognitive scientists. It is evident that significant interdisciplinary contributions are needed for AI systems to be considered trustworthy enough for deployment in social environments and business-critical applications.
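To make the idea of a post-hoc, model-agnostic explanation concrete, the sketch below computes permutation feature importance for a trained black-box classifier: each input feature is shuffled in turn, and the resulting drop in accuracy indicates how strongly the model relies on it. This is a minimal illustration only; the dataset, estimator, and library choices are assumptions made for the example and are not tied to the EXAIGON project or its use cases.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Any opaque predictor could stand in here; a random forest is used purely
# as an example "black box" for this sketch.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
baseline = model.score(X_test, y_test)

# Permutation importance: shuffle one feature at a time and measure the drop
# in test accuracy. Large drops flag features the black box relies on, giving
# a coarse, global form of explanation of its behaviour.
rng = np.random.default_rng(0)
for j in range(X_test.shape[1]):
    X_perm = X_test.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    drop = baseline - model.score(X_perm, y_test)
    print(f"feature {j:2d}: accuracy drop {drop:+.3f}")
```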

The EXAIGON (Explainable AI systems for gradual industry adoption) project (2020-2024) will deliver research and competence building on XAI, including algorithm design and human-machine co-behaviour, to meet society's and industry's standards for deployment of trustworthy AI systems in social environments and business-critical applications. Extracting explanations from black-box models will enable model verification, model improvement, learning from the model, and compliance with legislation.

EXAIGON aims at creating an XAI ecosystem around the Norwegian Open AI-Lab, including researchers with diverse backgrounds and strong links to industry. The project is supported by 7 key industry players in Norway, who will provide the researchers with use cases, including data, models, and expert knowledge. All involved researchers will work closely with each other, the industry partners, and researchers already working on relevant topics at NTNU, hence maximizing the project's impact and relevance to the real world.

You will report to the Head of the Department.

