Published: 20-04-2026 10:54 | Updated: 20-04-2026 10:57

KI opens new centre to elevate AI from project to practice

On 22 April, Karolinska Institutet inaugurates the Centre for AI Innovation. Photo: Erika Bellander

The Karolinska Institutet Centre for AI Innovation is to serve as a support centre and skills hub. Here, two of its representatives argue that the healthcare profession needs to be in the driver’s seat so that the technology is adapted to the users rather than vice versa.

Clara Hellner. Photo: Ulf Sirborn

“There’s a lot of AI use at KI, but instead of doing everything piecemeal, we’ll be gathering skills from within different fields,” says Clara Hellner, adjunct professor at the Department of Clinical Neuroscience and presidential advisor in Life Science. She is also part of the new centre’s steering group, one task of which is to serve as a support function. “We’re there to support the researchers in their AI work as needed,” she explains.

The Centre for AI Innovation is also going to be developing policy in the field, which lies within Maja Fjaestad’s sphere of interest as senior advisor to the president. 

Maja Fjaestad. Photo: Andreas Andersson

She is also a member of the steering group and was previously at the European AI Office in Brussels. “I look forward to bringing the European policy perspective to the table,” she says. “KI excels at medical research, no one can slap our wrists there, but I can contribute a broader social science perspective.”

An important part of the centre’s activities will be to act as a collaborative node. According to Professor Hellner, collaboration with the healthcare sector, for example, will involve such issues as data management and intellectual property rights as well as behavioural science.

“How do we interact with AI? There’s so much to dig into there,” she says. “Especially making sure that the technology serves us and not vice versa. I’m sure we’ve all despaired over useless websites and apps.”

The new centre will also play an important part in KI’s collaborations with other universities.

“It’s often a strength to be a single-faculty university, but in this context it’s a weakness, too, as it’s further away from research environments that think in a different way,” she says. 

When AI meets healthcare

Dr Fjaestad says that incorporating AI into healthcare might appear simple, “but the people working in this area describe many bumps in the road – everything from which jurisdiction applies to how to ensure that data doesn’t go astray during a procurement process.”

She is surprised that the technology has not always reached maturity, and gives an example: “When running a post-infarction scan, it turned out that the patient had a broken collarbone, but that wasn’t what the AI was looking for. Teaching the AI that ‘if there’s a fracture, tell us’ is just a matter of one prompt,” she says. She goes on to stress that far from being neutral, AI is shaped by the data it is trained on. Previously, for instance, medical research was done mainly on men.

“We got really good at detecting heart attacks in men, but not in women, who presented other symptoms,” she says. “Developing algorithms on the basis of too narrow a data sample is a mistake we’ll not be making again.”

Dr Fjaestad also raises the question of what AI is to be used for in the healthcare sector: “It’s obvious that AI is good at spotting anomalies in images. But what unexpected things is AI good at? Can we use AI to organise healthcare better? The kind of complex information that AI excels at processing can be found in many other places than in just image recognition. I think we’d do well to keep the description open of what AI in healthcare is, rather than just quickly pigeonholing it for the obvious applications.”

Professor Hellner mentions that there are a number of AI projects on the go in Region Stockholm and Karolinska Institutet, from optimising care flows in the event of suspected sepsis to the development of new vaccines.

“The idea is for the centre to help enhance system capabilities when it comes to AI development,” she says. “KI is home to many small, independent units that work together in networks. It’s like an anthill. Certain issues require a concerted effort and we’re now in the middle of a huge technical leap forward, so we need to understand what the researchers can manage unaided and what they need help with.”

Multidisciplinary research

KI provides different kinds of innovation support, such as through KI Innovation. Some questions are specific to AI, however, such as those concerning safety, rights, sharing and access to data.

“In these respects, the centre will be identifying and working on recurring issues,” says Professor Hellner. “The entire centre has been set up to create system capability.”

Dr Fjaestad develops the point. “I envisage the centre as being well-suited to tying together multidisciplinary research that can lead to this very kind of innovation, which I think will be exciting.” 

The above can be described as “inside-out” innovation, but Professor Hellner points out that the Centre for AI Innovation will also be working with “outside-in” innovation.

“We get contacted by companies that want to use testing platforms here at KI or validate their algorithms on our data,” she says.

The centre supports SMEs: “Much of the innovation work is about achieving the ‘triple helix’ that allows academic, public sector and business actors to work constructively together,” explains Professor Hellner. She gives an example: “Some startup has an idea, but needs access to data or a test bed or partners to collaborate with. Or why not a skilled clinician who can say if they’re on the right track? We’ll guide the companies so that they can navigate the process as smoothly as possible.”

Professor Hellner cites freed-up time as an example of societal and patient benefit.

“When doctors or other medical personnel realise that an AI agent can handle data volumes so that they can increase the time they spend with patients, things will really take off,” she says.

For her, it’s a huge problem that “there aren’t enough hands and feet” when healthcare staff have to spend so much time in front of the computer. “But when building systems that genuinely make things easier for the users, you must make sure to base them on their needs.”

Dr Fjaestad also addresses this matter:

“The centre will put the healthcare profession in the driving seat when it comes to AI development, in that it’s taking place at a university and is thus rooted in knowledge.”

Responsible use

Dr Fjaestad works part time at Umeå University’s AI Policy Lab with Professor Virginia Dignum. The focus of the group is the responsible use of AI, which alongside its ethical aspects also includes system transparency and accountability.

“Virginia’s known for asking ‘question zero’, that is – is this a problem that we should use AI to solve?” she says. “Actively asking that question is important, not least in the healthcare sector. Having a foot in AI policy research makes me a better person here at KI and I believe that the centre can help improve that dialogue at KI.”

Implementing AI in healthcare is not without its challenges, however. Professor Hellner addresses the clinical angle: “How can we guarantee quality? What happens if you just press buttons and access the patient’s treatment plan – how are we to prevent digital dementia? There are going to be actors of differing degrees of professionality, so we’ll also have to work on the quality dimension. There’s enormous potential in handling large data volumes and seeing patterns, but we must also reserve the right to use AI as an aid and not get lazy.”

For her part, Dr Fjaestad stresses the importance of focusing on patient benefit:

“It’s so clear in healthcare that we can’t just introduce new techniques for fun. The question we should ask ourselves is whether or not it’s in the patients’ interests. But there are also risks involved in not digitising and not using AI. To opt out of digitisation is also an active decision.”