How researchers use AI: results of a current survey
Artificial Intelligence (AI) is increasingly making its way into scientific practice. But how exactly is it being used in everyday research—and what opportunities and challenges do researchers see in it? A recent survey among DaFNE project participants provides insightful answers.
The survey examined how researchers use AI-based applications (such as chatbots, text generators, or translation tools) within the scope of their research activities. The goal was to capture the current state of AI usage, highlight potential benefits, and identify risks and support needs.
The survey was conducted online in April 2025 and drew 58 participants, mostly project leaders from completed or ongoing DaFNE projects. The majority have extensive scientific experience: about 60 percent hold a doctoral or PhD degree. The most strongly represented fields were the agricultural, forestry, natural, and animal sciences. Participants work at universities, research institutes, and agricultural and forestry educational institutions.
AI in Research: Generally Positive—but with Reservations
The impact of AI on research work is overall rated positively. At the same time, there is a strong awareness of potential issues: two-thirds of respondents expressed concerns regarding ethical questions, such as transparency, data quality, authorship, or reproducibility of scientific results.
Many researchers see AI clearly as a tool, not a replacement for scientific expertise. Efficiency gains, particularly for routine tasks, are appreciated. However, respondents emphasize that AI-generated content must always be critically evaluated, especially in technically sensitive areas.
The survey shows a reflective, but not fully confident, understanding of AI. While many participants are familiar with basic terms and definitions, more than half report difficulty recognizing AI applications. Distinguishing between AI- and human-generated content is also challenging for many.
There is strong agreement, however, that AI usage in scientific work should be labeled. Transparency, good scientific practice, and accountability are highlighted as key principles. A clear need for training, guidelines, and standards is also identified.
Most commonly used tools:
- ChatGPT/OpenAI is used by over 80 percent of respondents.
- DeepL is used by about 75 percent.
Other applications, such as tools for research, literature management, or data analysis, play a much smaller role.
The actual usage intensity is generally low: the median use of AI in a research context is about one hour per week. Some heavy users raise the average, but for most, AI is not yet a fixed part of their daily work.
Areas of Use: Focus on Language and Research
AI is mainly used for language support, such as translations, corrections, or stylistic improvements. It is also increasingly used for literature reviews, abstracts, and explaining complex content.
AI is much less often used for conceptual or creative tasks, such as developing research questions, hypotheses, or project ideas. Technically demanding applications, like simulations or software development, currently play only a minor role.
AI as a Useful Tool with Development Potential
The results clearly show that AI has arrived in research, primarily as a supportive tool. Researchers use it to gain efficiency while remaining cautious and reflective. At the same time, further measures are clearly needed: training, clear guidelines, and a shared culture of responsible AI use are crucial to harnessing AI's potential in research effectively and in line with quality-oriented practices.