ADAGIA 101-2025.05 
Artificial Intelligence: Introduction on challenges concerning LLM-based AI-tools development
Peter Kaczmarski and Fernand Vandamme
https://doi.org/10.57028/LOA-101-Z1085

LWT NL 101, 09/2024. Pages: 104
e-ISSN: 2953-1489
Summary
In recent years there have been many breakthroughs across Artificial Intelligence computing technologies, mainly in building general GPTs (Generative Pre-Trained Transformers) and their extensions. This book takes a snapshot of today's state of the art in the main areas of these developments, from the perspective of available AI capabilities and their possible utilization in custom cloud-based AI-tools. The first two parts focus on the online accessibility of the top-ranked Large Language Models (LLMs) from Anthropic, Google, and OpenAI, and on the assessment of their available capabilities such as streaming chat, Theory of Mind test analysis, reasoning on images, and LLM-based information retrieval using RAG, while the third part highlights the path from creating a custom local AI-tool to its cloud deployment and onward to its online use. On a more general level, the presented insights and code snippets form a reference for building more extensive Artificial Intelligence use cases using Python and/or .NET 8 (LTS).
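As a flavour of the RAG-style retrieval assessed in part two, the following is a minimal, self-contained sketch of similarity-based passage retrieval in Python. The toy `embed` function (bag-of-words term counts) is purely illustrative and stands in for a real embedding model; the passage texts are invented for the example.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy stand-in for an embedding model: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, passages: list[str], k: int = 1) -> list[str]:
    # Rank passages by similarity to the query and return the top k;
    # in a full RAG pipeline these passages would then be fed to an LLM.
    q = embed(query)
    ranked = sorted(passages, key=lambda p: cosine(q, embed(p)), reverse=True)
    return ranked[:k]

passages = [
    "Docker images can be deployed to Cloud Run.",
    "Retrieval-augmented generation grounds LLM answers in documents.",
]
print(retrieve("how does retrieval augmented generation work", passages))
```

In the book's actual implementation the retrieval step is backed by LLM embeddings rather than word counts, but the ranking logic follows the same shape.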


Table of contents

About this book  4

1 LLM-based AI-tools: case studies  8
1.1 Introduction  8
1.2 LLM-based AI-tools for knowledge assistance  10
1.3 Adapting the answering style of LLMs  14
1.4 LLM-based scenario analysis and Theory of Mind  17
1.5 Document summarisation  19
1.6 Advanced use cases for LLMs  21
1.7 Conclusions  29
1.8 Appendix A: Developing LLM chat clients in Python  30
1.9 Appendix B: Chat test results overview  34
1.10 Appendix C: Theory of Mind test  40
1.11 Appendix D: Document summarisation tests  42
1.12 References  48

2 LLM-based text search using RAG: implementation and assessment  52
2.1 Introduction  52
2.2 RAG procedure summary  53
2.3 Implementation overview  54
2.4 Experimental results  56
2.5 Discussion of limitations and challenges  59
2.6 Conclusions  60
2.7 Appendix A: Python RAG implementation  61
2.8 Appendix B: Test document contents  64
2.9 References  69

3 Building LLM-based AI-tools for the cloud  72
3.1 Introduction  72
3.2 Implementing a Blazor OpenAI LLM Chat in C#  73
3.2.1 Initial Blazor project  73
3.2.2 Adding a Razor chat component  74
3.2.3 Implementation of the LLM chat functionality  76
3.2.4 Testing the chat function in debug mode  81
3.3 Deploying the chat application to Google Cloud Platform  83
3.3.1 Publishing a release version of the web application  83
3.3.2 Docker image creation  85
3.3.3 Importing the Docker image to Google Artifact Registry  86
3.3.4 Securing the OpenAI API key in GCP Secret Manager  88
3.3.5 Deploying the Docker image from GAR to GCP Cloud Run  89
3.4 User experience: a simple test  91
3.5 Conclusions  94
3.6 Appendix A: Code listing (llmchat.razor)  95
3.7 References  98

4 Summary  102

About the authors  104