For example, we have used it to select topics from our articles, we have run an experiment that searched through the political party programmes for last year's national elections, and we are experimenting with classifying user needs with an LLM. However, we are very careful about which tasks we use an LLM for. We do not want an LLM to generate our journalistic content, although we do allow our journalists to use it to support their work. Within the editorial team, a working group has been set up to discuss and study how to use (or not use) generative AI, and we have added an AI entry to our professional NRC code for our journalistic work. In addition, we have set up courses in the NRC Academy, where we explain how a foundational LLM works - including bias and hallucinations - and we run hands-on prompting workshops for the organisation. In short, we try to be aware of the risks without losing sight of the possible rewards.
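As a rough illustration of what such a user-needs classification step could look like in practice, here is a minimal sketch assuming an OpenAI-compatible chat API; the label set, model name, prompt, and client library are placeholder assumptions for illustration only, not our actual pipeline.

```python
# Illustrative sketch of LLM-based "user needs" classification.
# The labels, model name, and prompt are assumptions, not NRC's real setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Example label set; the taxonomy actually used may differ.
USER_NEEDS = ["update me", "educate me", "give me perspective",
              "divert me", "connect me", "inspire me"]

def classify_user_need(article_text: str) -> str:
    """Ask the model to pick exactly one user-need label for an article."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        temperature=0,        # keep the classification output stable
        messages=[
            {"role": "system",
             "content": "You label news articles with exactly one user need "
                        f"from this list: {', '.join(USER_NEEDS)}. "
                        "Reply with the label only."},
            # Truncate very long articles to keep the request small.
            {"role": "user", "content": article_text[:4000]},
        ],
    )
    return response.choices[0].message.content.strip().lower()

if __name__ == "__main__":
    print(classify_user_need("The cabinet presented its new budget today ..."))
```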
Speaker: Tessel Boogard (NRC)