Transforming Space Engineering with Generative AI
Within the space engineering field there is an ever-growing demand for more efficient and cost-effective ways to design, build, and operate spacecraft. Generative AI, and specifically large language models (LLMs) such as GPT-3, has shown the potential to address these challenges by automating design processes, freeing human experts to spend their time on value-added tasks, and improving the overall performance of space systems.
The Rise of Generative AI Models
In the past two years, generative AI models have seen an explosion in popularity, with many new models being published. Text-to-Text models such as ChatGPT, LaMDA, and PEER, as well as Text-to-Code models such as Codex and AlphaCode, have demonstrated the ability to propose high-quality solutions to technical tasks.
Self-Attention Mechanism and Fine-Tuning for Task-Specific Applications
In simple terms, LLMs are machine learning models that can understand and generate text. They take some text as input (a paragraph, lines of code, etc.) and predict what text is most likely to come next, based on the patterns they have learned from their training data.
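To make this concrete, the minimal sketch below (an illustration, not part of any cited work) uses the Hugging Face `transformers` library and the small, publicly available `gpt2` checkpoint to inspect the model's predicted distribution over the next token:

```python
# Minimal sketch: next-token prediction with a small causal language model.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The spacecraft thermal control subsystem shall"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# The distribution over the next token comes from the logits at the last position.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)]):>15s}  p={prob:.3f}")
```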
The most successful LLMs of late utilise a self-attention mechanism. Self-attention allows a model to weigh the importance of different parts of the input when making predictions. It works by looking at each word in the input and determining how much attention to pay to every other word when processing it. This allows the model to focus on the most relevant information and ignore irrelevant information when making its predictions. To be applied to a particular task, base LLMs are often fine-tuned on a task-specific dataset.
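For readers who want to see the mechanism itself, the following is a minimal single-head sketch of scaled dot-product self-attention in NumPy; it omits multi-head projections, masking, and learned parameters, and is purely illustrative:

```python
import numpy as np

def self_attention(X, W_q, W_k, W_v):
    """Scaled dot-product self-attention for a single head.

    X: (seq_len, d_model) token embeddings
    W_q, W_k, W_v: (d_model, d_k) projection matrices
    """
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    # Attention scores: how much each token attends to every other token.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V  # each output is a weighted mix of all token values

# Toy example: 4 tokens with 8-dimensional embeddings, 4-dimensional head.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
W_q, W_k, W_v = (rng.normal(size=(8, 4)) for _ in range(3))
print(self_attention(X, W_q, W_k, W_v).shape)  # (4, 4)
```

In a full transformer, many such heads run in parallel and their outputs are concatenated and projected back to the model dimension.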
Transformer language models for space engineering
Some research has already been conducted on applying transformer language models to space engineering:
- Researchers from the University of Strathclyde have published LLMs (SpaceBERT, SpaceRoBERTa, SpaceSciBERT) that have been fine-tuned on extracts from ECSS standards and other space systems documents (books, abstracts, Wikipedia pages, etc.); a brief sketch of querying such a model follows this list.
- Applications of space-domain LLMs have begun to be explored; for instance, utilising LLMs to map knowledge from satellite datasets.
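As an illustration, a fine-tuned masked language model of this kind can be queried with a few lines of code via the Hugging Face `transformers` pipeline. The Hub identifier used below is an assumption; substitute the name under which the Strathclyde models are actually published:

```python
# Sketch of querying a space-domain fine-tuned masked language model.
# NOTE: "icelab/spaceroberta" is an assumed Hub identifier, used only for illustration.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="icelab/spaceroberta")

# RoBERTa-style models use "<mask>" as the mask token.
for candidate in fill_mask("The spacecraft attitude is controlled by reaction <mask>."):
    print(f"{candidate['token_str']:>12s}  score={candidate['score']:.3f}")
```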
The innovation that arguably led to the success of ChatGPT is the use of Reinforcement Learning from Human Feedback (RLHF). With RLHF, the original language model is fine-tuned against a separate reward model, which is itself trained on preference feedback from human labellers.
Figure 1: RLHF Process Diagram (Source: Hugging Face)
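At the core of RLHF is the reward model, which is typically trained on pairwise human preferences: given two candidate responses to the same prompt, it should assign a higher score to the one labellers preferred. The sketch below illustrates that training step in PyTorch with placeholder hidden states standing in for language-model encodings; in a full RLHF pipeline, the trained reward model would then drive fine-tuning of the policy LM with an RL algorithm such as PPO:

```python
import torch
import torch.nn as nn

class RewardHead(nn.Module):
    """Maps a language model's pooled hidden state to a scalar reward."""
    def __init__(self, hidden_size: int = 768):
        super().__init__()
        self.score = nn.Linear(hidden_size, 1)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        return self.score(hidden).squeeze(-1)

reward_head = RewardHead()
optimizer = torch.optim.AdamW(reward_head.parameters(), lr=1e-5)

# Placeholder hidden states standing in for LM encodings of two responses
# to the same prompt, one preferred ("chosen") and one not ("rejected").
h_chosen = torch.randn(8, 768)
h_rejected = torch.randn(8, 768)

# Pairwise preference loss: the chosen response should score higher.
loss = -torch.nn.functional.logsigmoid(
    reward_head(h_chosen) - reward_head(h_rejected)
).mean()
loss.backward()
optimizer.step()
```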
One use case where LLMs could add significant value is within Model-Based Systems Engineering (MBSE) processes. LLMs offer a potential avenue to reduce both the manual work involved in modelling complex systems and the risk of modelling error. Such applications include the following (a brief sketch of the first two items follows the list):
- Generating system models: The LLM could be used to generate initial system models based on natural language descriptions of the system and its requirements.
- Verifying system models: The LLM could be used to verify that the generated system models adhere to the system requirements and engineering principles.
- Updating system models: The LLM could be used to update the system models as requirements change or new information becomes available (i.e., convert a design change described in natural language into a model update).
- Maintaining consistency: The LLM could be used to ensure consistency across the different models and requirements by cross-referencing them and identifying any discrepancies.
- Identifying missing information: The LLM could be used to identify any missing information in the system models and requirements, which would help to ensure completeness.
- Generating documentation: The LLM could be used to generate technical documentation, such as system design documents, based on the system models and requirements.
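The sketch below illustrates the first two items: generating an initial system model from a natural-language description and running a lightweight consistency check on it. The `call_llm` helper, the prompt template, and the JSON schema are hypothetical placeholders, not an established interface:

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around an LLM completion endpoint."""
    raise NotImplementedError("Plug in your LLM provider here.")

PROMPT_TEMPLATE = """You are a systems engineering assistant.
Convert the following mission description into a system model expressed as JSON
with the keys "blocks" (subsystems), "interfaces" (pairs of block names) and
"requirements" (id, text, allocated_block).

Mission description:
{description}

Return only valid JSON."""

def generate_system_model(description: str) -> dict:
    response = call_llm(PROMPT_TEMPLATE.format(description=description))
    model = json.loads(response)
    # Lightweight consistency check before the model reaches a human reviewer:
    # every requirement must be allocated to a block that actually exists.
    block_names = {b["name"] for b in model["blocks"]}
    for req in model["requirements"]:
        assert req["allocated_block"] in block_names, (
            f"Requirement {req['id']} allocated to an unknown block."
        )
    return model
```

Even with such automated checks, any generated model would still need to pass through the human verification step discussed below.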
Of course, issues remain with these models. LLMs have been shown to generate false information, or hallucinate. Ultimately, current models cannot be trusted to generate text that is always correct, and a human verification and validation step is essential for the safe application of these models.