Süsstrunk, Norman and Weichselbraun, Albert and Waldvogel, Roger (2023) Large Language Models versus Foundation Models for Assessing the Future-Readiness of Skills. In: 17th International Symposium for Information Science, 6-9 November 2023, Chur.
PDF - Published Version, available under License Creative Commons Attribution (544kB)
Official URL: https://zenodo.org/records/10009338
Abstract
Automation, offshoring and the emerging “gig economy” further accelerate changes in the job market, leading to significant shifts in required skills. As automation and technology continue to advance, new technical proficiencies such as data analysis, artificial intelligence, and machine learning become increasingly valuable. Recent research, for example, estimates that 60% of occupations contain a significant portion of automatable skills.
The “Future of Work” project uses scientific literature, experts and deep learning to estimate the automatability and offshorability of skills, which are assumed to impact their future-readiness. This article investigates the performance of two deep learning methods for propagating expert and literature assessments of automatability and offshorability to previously unseen skills: (i) a Large Language Model (ChatGPT) with few-shot learning and a heuristic that maps results to the target variables, and (ii) foundation models (BERT, DistilBERT) trained on a gold standard dataset. An evaluation on expert data provides initial insights into the systems’ performance and outlines the strengths and weaknesses of both approaches.
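As a rough illustration of approach (ii), the sketch below fine-tunes DistilBERT as a binary classifier that propagates gold-standard labels to unseen skill descriptions. This is not the authors' code: the example skills, label encoding, model identifier and hyperparameters are all illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's implementation):
# fine-tune DistilBERT to classify skills as automatable vs. not automatable.
import torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# Hypothetical gold-standard examples: skill description -> automatability label
train_skills = ["data entry", "negotiating contracts", "operating a forklift"]
train_labels = [1, 0, 1]  # 1 = automatable, 0 = not automatable (assumed encoding)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

class SkillDataset(torch.utils.data.Dataset):
    """Wraps tokenized skill descriptions and their labels for the Trainer."""
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=SkillDataset(train_skills, train_labels),
)
trainer.train()
```

At inference time, `trainer.predict` (or a plain forward pass) would yield scores for new skill entries, which could then be compared against expert judgements as in the evaluation described above.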
| Item Type: | Conference or Workshop Item (Paper) |
| --- | --- |
| Subjects: | Q Science > QA Mathematics > QA75 Electronic computers. Computer science |
| Divisions: | Faculty of Engineering, Science and Mathematics > School of Electronics and Computer Science |
| ID Code: | 119 |
| Deposited By: | Dr Albert Weichselbraun |
| Deposited On: | 15 Mar 2024 12:59 |
| Last Modified: | 15 Mar 2024 12:59 |