Description

Here's a first draft of the task description; maybe we can send something like this to Junction to start with. I haven't worked much on the actual task materials yet, but I don't think there are any major obstacles there.

There are still some details that would be good to figure out, like …

## Task (30 minutes): Tagging messages using an autoregressive Large Language Model

### Introduction

One important step FinUA takes to streamline its operations is *tagging* user messages. This helps in organizing, prioritizing, and distributing them to the experts who can help with them. Currently, this is a manual process.

Recently, autoregressive Large Language Models have emerged as flexible tools for quickly prototyping NLP applications.

Their combined language understanding, in-context learning, and open-ended text generation capabilities make them well suited to this purpose. One of the main ways to influence the model's output is ***prompt engineering***.

This can include writing instructions to the model, giving examples ("few-shot learning"), as well as more esoteric techniques that condition the model's outputs (e.g. offering a tip).
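For instance, a few-shot tagging prompt can be assembled with plain string formatting. The sketch below is illustrative only: the tag set and example messages are invented and are not part of the actual task data.

```python
# A minimal sketch of a few-shot tagging prompt. The tag set and the
# example messages below are invented for illustration only.
TAGS = ["legal", "housing", "employment", "other"]

FEW_SHOT_EXAMPLES = [
    ("How do I extend my residence permit?", "legal"),
    ("I am looking for a rental apartment in Espoo.", "housing"),
]

def build_prompt(message: str) -> str:
    """Combine an instruction, a few labelled examples, and the new message."""
    lines = [
        "You are an assistant that tags user messages.",
        f"Answer with exactly one tag from this list: {', '.join(TAGS)}.",
        "",
    ]
    for example_text, example_tag in FEW_SHOT_EXAMPLES:
        lines.append(f"Message: {example_text}")
        lines.append(f"Tag: {example_tag}")
        lines.append("")
    lines.append(f"Message: {message}")
    lines.append("Tag:")
    return "\n".join(lines)

print(build_prompt("Can someone help me translate a rental contract?"))
```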

### The task
In this task, we will use an autoregressive Large Language Model to tag messages by calling the model from Python code. The goal is not to create a perfect tagging system, but to get a feel for how to develop with LLMs and what kinds of limitations one might encounter.

If you use an on-device LLM or have a subscription to a hosted one, you are free to use that in this task. Just mention the model name along with your results.

If you are already an expert in NLP but tired of LLMs, you are also free to use *any other model or technique* for this task. For the sake of time, though, we recommend sticking with the LLM.

### Instructions
1. Fetch the code template and example data from GitHub: ...
2. Write the provided API key to the file ...
3. Test the code by running `python run.py`.
4. Implement your solution in the function `def process(text):` (a sketch of one possible implementation follows this list).
5. Finally, upload your solution text file to ...
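
To give an idea of what step 4 could look like, here is a hedged sketch of one possible `process(text)` implementation. It assumes the model is reachable through the official `openai` Python client and that the key is available as the `OPENAI_API_KEY` environment variable; the actual template may instead provide its own helper and read the key from the file mentioned in step 2. The model name and tag list below are placeholders.

```python
# A sketch of one possible process() implementation, not the reference solution.
# Assumptions: the `openai` client is installed, the API key is exposed via the
# OPENAI_API_KEY environment variable, and "gpt-4o-mini" stands in for whatever
# model the task actually provides.
from openai import OpenAI

client = OpenAI()  # picks up the key from OPENAI_API_KEY by default

TAGS = ["legal", "housing", "employment", "other"]  # placeholder tag set

def process(text: str) -> str:
    """Return a single tag for the given user message."""
    prompt = (
        "Tag the following user message with exactly one of these tags: "
        + ", ".join(TAGS) + ".\n\n"
        + f"Message: {text}\nTag:"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        max_tokens=5,         # the answer should be a single short tag
        temperature=0,        # deterministic output is easier to evaluate
    )
    tag = response.choices[0].message.content.strip().lower()
    # The model can still answer outside the allowed list, so fall back to "other".
    return tag if tag in TAGS else "other"
```

Constraining the answer to a fixed tag list and parsing defensively is a simple way to handle the open-ended nature of the generation; comparing a zero-shot prompt like this one with a few-shot variant is a good first experiment.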

Final Version

FinUA Workshop
Format: 2 workshops, 1 hour each
Theme: FinUA: Using Tech for Empathy and Connection
Time: 12:00 - 14:30, 09/11

Overview

FinUA embraces a holistic approach to supporting Ukrainians in Finland by integrating legal assistance, emotional support, and social adaptation—all through empathy and technology.

One important step FinUA takes to streamline its operations is tagging user messages. This helps in organizing, prioritizing, and distributing them to the experts who can help with them. Currently, this is a manual process.

Recently, autoregressive Large Language Models have emerged as flexible tools for quickly prototyping NLP applications. Their combined language understanding, in-context learning, and open-ended text generation capabilities make them well suited to this purpose. One of the main ways to influence the model's output is prompt engineering.

This can include writing instructions to the model, giving examples ("few-shot learning"), as well as more esoteric techniques that condition the model's outputs (e.g. offering a tip).

What You’ll Gain

In this workshop, we will use an autoregressive Large Language Model to tag messages by calling the model from Python code. The goal is not to create a perfect tagging system, but to get a feel for how to develop with LLMs and what kinds of limitations one might encounter.

We will also share practical insights from FinUA’s journey since 2022, highlighting key technological solutions and tools that have transformed our operations.