Fine-tuning Large Language Models for Satellite Communications Knowledge Management: Challenges and Impacts

10 Nov 2025, 13:30
20m
Online

Speaker

Prof. Ioannis Christou (American College of Greece)

Description

The application of large language models (LLMs) to specialized fields, such as Satellite Communications (SatCom), presents unique challenges due to the extensive and cutting-edge knowledge required. We present a fine-tuning approach for adapting 7-billion-parameter instruction-tuned LLMs (Llama-3v and Mistral) to SatCom, using a proprietary corpus sourced from the European Space Agency (ESA) consisting of domain-specific PDF documents.

The confidential nature of this corpus imposes constraints on both model training and evaluation, demanding a robust text extraction pipeline capable of handling complex structures, such as tables, so that critical information is preserved.
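One way to preserve tabular information in a plain-text training corpus is to linearize each extracted table so that row and column relationships survive. The sketch below is illustrative only (it is not the authors' pipeline, and the sample SatCom table is invented); it assumes an upstream PDF library has already produced rows of cell strings.

```python
# Illustrative sketch (assumed, not the talk's actual pipeline): render
# rows extracted from a PDF table as Markdown-style text, so a fine-tuning
# corpus keeps the header/cell structure instead of a jumble of words.

def linearize_table(rows):
    """Render a list of rows (first row = header) as Markdown table text."""
    if not rows:
        return ""
    header, *body = rows
    lines = ["| " + " | ".join(header) + " |",
             "| " + " | ".join("---" for _ in header) + " |"]
    for row in body:
        lines.append("| " + " | ".join(row) + " |")
    return "\n".join(lines)

# Hypothetical example table (frequency bands are generic, not from ESA data).
table = [["Band", "Frequency"],
         ["Ku", "12-18 GHz"],
         ["Ka", "26.5-40 GHz"]]
print(linearize_table(table))
```

Each table then becomes a compact, self-describing text block that can be concatenated with the surrounding prose during corpus construction.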

Our fine-tuning methodology employs a carefully configured training process, followed by an automatic evaluation framework using a curated Q&A set tailored to SatCom. Models were produced in both non-quantized and 8-bit quantized formats, ensuring feasibility for desktop-level inference. The fine-tuned models demonstrated a 6.6% improvement over the baseline LLM, as well as significant gains when compared to retrieval-augmented generation (RAG) methods.
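The 8-bit formats mentioned above rest on a simple idea: map floating-point weights to int8 with a shared scale, trading a small rounding error for a roughly 4x memory reduction that makes desktop-level inference feasible. The pure-Python sketch below shows the absmax round-trip only; production frameworks quantize per-tensor or per-channel with optimized kernels, and nothing here reflects the talk's specific configuration.

```python
# Minimal illustration of absmax 8-bit quantization (an assumption about
# the general technique, not the talk's actual implementation).

def quantize_8bit(weights):
    """Map floats to int8 range [-127, 127] using a single absmax scale."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize_8bit(q, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return [v * scale for v in q]

w = [0.42, -1.30, 0.07, 0.91]          # toy weight values
q, s = quantize_8bit(w)
w_hat = dequantize_8bit(q, s)
max_err = max(abs(a - b) for a, b in zip(w, w_hat))
# Rounding error is bounded by half a quantization step.
assert max_err <= s / 2 + 1e-9
```

The evaluation side then compares the non-quantized and 8-bit variants on the same curated Q&A set to confirm the accuracy cost of quantization is acceptable.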
