
News

Better AI models by incorporating user feedback into training

New research improves a popular method for fine-tuning AI models by up to 60% using visualization tools.
An interactive visualizer that lets users give feedback during training of AI models. Image from Kompatscher et al. (2025)

A new study shows that reinforcement learning from human feedback (RLHF), a common way to train artificial intelligence (AI) models, gets a boost when human users have an interactive dashboard for giving feedback during training. This approach led to better AI outputs and faster training. The study, recently published in the journal Computer Graphics Forum, also produced open-source code for improving RLHF.

Aligning human preferences with AI behavior is important for making AI tools more useful. One way to do this is with reinforcement learning, where users can, for example, compare two AI outputs side by side, rewarding one and punishing the other to guide the training of an AI system towards desired outputs. Gathering human feedback this way is not only slow, it also doesn't give users the full picture of possible outputs or an idea of what end goal they should aim for with their feedback, says Antti Oulasvirta, professor at Aalto University and the Finnish Center for Artificial Intelligence FCAI.
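The pairwise feedback loop described above can be illustrated with a minimal sketch of Bradley-Terry-style preference learning, a standard formulation behind RLHF reward models. All names and the toy one-dimensional setup below are illustrative, not taken from the study:

```python
# Sketch of learning a reward from pairwise preferences (Bradley-Terry style).
# A simulated user compares two behaviors at a time; we fit a linear reward
# by gradient ascent on the preference log-likelihood.
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy setup: each "behavior" is a single feature value; the hidden true
# reward prefers larger values, and the user labels pairs accordingly.
behaviors = [random.uniform(-1.0, 1.0) for _ in range(50)]

def user_prefers(a, b):
    return a > b  # simulated human feedback

# Learn a linear reward r(x) = w * x from repeated side-by-side comparisons.
w = 0.0
lr = 0.5
for _ in range(200):
    a, b = random.sample(behaviors, 2)
    winner, loser = (a, b) if user_prefers(a, b) else (b, a)
    # P(winner preferred over loser) = sigmoid(r(winner) - r(loser))
    p = sigmoid(w * (winner - loser))
    w += lr * (1.0 - p) * (winner - loser)  # log-likelihood gradient step

print(w)
```

Because every comparison yields only one bit of information, many such queries are needed, which is exactly the slowness the researchers set out to address.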

The team of researchers from Aalto, the University of Trento and KTH Royal Institute of Technology therefore created a way to augment traditional reinforcement learning. "Since humans have a great ability to compare and explore data visually, we developed an interactive visualizer that shows all the possible AI behaviors, in this case of a simple robotics simulation," explains doctoral researcher Jan Kompatscher. In the study, people were asked to compare the positions of on-screen robotic skeletons that were learning certain postures and movements, like walking or doing backflips.

With the visualizer, test subjects training the AI model were no longer confined to a side-by-side comparison of two items at a time. They could interactively explore comparisons they had already made, see suggested new comparisons, and access the entire catalog of possible simulated movements and positions. Test subjects reported that the training process with the visualizer felt more efficient and useful but was not as easy to use as the more familiar comparison method. The simulated skeletons also performed up to 60% better when trained by users with the interactive visualizer, compared to just side-by-side comparisons, though the number of test subjects in this study was limited.
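One way to see why groupwise feedback is more efficient: a single judgment over two sets of behaviors implies every cross-set pairwise label at once. The grouping rule and identifiers below are hypothetical, a sketch of the idea rather than the paper's method:

```python
# Sketch: one groupwise judgment expands into many pairwise labels.
from itertools import product

# Suppose the visualizer lets a user select two sets of simulated behaviors
# (here just IDs) and mark one whole group as preferred over the other.
preferred_group = ["walk_03", "walk_07", "walk_11"]
rejected_group = ["fall_01", "fall_05"]

# Expanding that single interaction yields every cross-group pairwise label,
# the kind of data pairwise RLHF would otherwise collect one query at a time.
pairwise_labels = list(product(preferred_group, rejected_group))

print(len(pairwise_labels))  # 3 x 2 = 6 comparisons from one interaction
```

Under this reading, each interaction with the visualizer carries more information than a single side-by-side query, consistent with the reported gains.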

"Giving people full control lets them express preferences over groups of behaviors, making the process more efficient," says Kompatscher. "In the same amount of time as the conventional comparison process, test subjects were able to give more informative feedback, leading to better learning for the model."

"We believe that harnessing humans' cognition and abilities, and giving them agency in the process, leads to better training of AI models," Oulasvirta concludes.

Publication and contacts:

Kompatscher, J., Shi, D., Varni, G., Weinkauf, T., & Oulasvirta, A. (2025). Interactive groupwise comparison for reinforcement learning from human feedback. Computer Graphics Forum, DOI:

FCAI

The Finnish Center for Artificial Intelligence FCAI is a research hub initiated by Aalto University, the University of Helsinki, and the Technical Research Centre of Finland VTT. The goal of FCAI is to develop new types of artificial intelligence that can work with humans in complex environments, and help modernize Finnish industry. FCAI is one of the national flagships of the Academy of Finland.

Professor Antti Oulasvirta. Photo: Aalto University / Jaakko Kahilaniemi


