So far we have explored how to use prompts with LLMs. However, to really leverage the power of LLMs, it is essential that you can build a conversation by referring to previous questions and answers, and that you can manage concurrent interactions.
In this section, we'll cover how we can achieve this with the LangChain4j extension in Quarkus.
A `ChatMemoryProvider` tells the extension how to store and retrieve a conversation's history. Create a `ChatMemoryBean` class that keeps a separate `ChatMemory` per conversation ID:

[.console-input]
[source,java]
----
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import jakarta.annotation.PreDestroy;

import dev.langchain4j.memory.ChatMemory;
import dev.langchain4j.memory.chat.ChatMemoryProvider;
import dev.langchain4j.memory.chat.MessageWindowChatMemory;

public class ChatMemoryBean implements ChatMemoryProvider {

    private final Map<Object, ChatMemory> memories = new ConcurrentHashMap<>();

    @Override
    public ChatMemory get(Object memoryId) {
        return memories.computeIfAbsent(memoryId, id -> MessageWindowChatMemory.builder() //<1>
                .maxMessages(20) //<2>
                .id(memoryId)
                .build());
    }

    @PreDestroy
    public void close() {
        memories.clear();
    }
}
----
<1> If no chat memory exists yet, create a new instance
<2> Retain a maximum of 20 messages
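To see what the message window does in isolation, here is a minimal, dependency-free sketch. The `WindowMemory` class and its 3-message limit are illustrative stand-ins, not LangChain4j API: it keeps one memory per conversation ID, created lazily, and evicts the oldest entries once the window is full, just as `maxMessages(20)` configures above.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified stand-in for a message-window chat memory:
// keeps only the most recent maxMessages entries.
class WindowMemory {
    private final int maxMessages;
    private final Deque<String> messages = new ArrayDeque<>();

    WindowMemory(int maxMessages) {
        this.maxMessages = maxMessages;
    }

    synchronized void add(String message) {
        messages.addLast(message);
        while (messages.size() > maxMessages) {
            messages.removeFirst(); // evict the oldest message
        }
    }

    synchronized List<String> messages() {
        return List.copyOf(messages);
    }
}

public class MemorySketch {
    // One memory per conversation id, created lazily, as in ChatMemoryBean
    private final Map<Object, WindowMemory> memories = new ConcurrentHashMap<>();

    WindowMemory get(Object memoryId) {
        return memories.computeIfAbsent(memoryId, id -> new WindowMemory(3));
    }

    public static void main(String[] args) {
        MemorySketch provider = new MemorySketch();
        WindowMemory memory = provider.get(1);
        for (int i = 1; i <= 5; i++) {
            memory.add("msg" + i);
        }
        // Only the last 3 messages survive the window
        System.out.println(memory.messages()); // [msg3, msg4, msg5]
    }
}
```

With a window of 20, as in `ChatMemoryBean`, older turns simply fall out of the context sent to the model once the conversation grows past 20 messages.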
== Create a Developer resource
Now let's create a resource to help us write some code, and then ask the model to create a test for the code as well in a second request. Thanks to the memory feature, the model will remember what code we're referring to from the first request.
Create a new `DeveloperResource` Java class in `src/main/java` in the `com.redhat.developers` package with the following contents:
You can check your prompt implementation by pointing your browser to http://localhost:8080/memory[window=_blank]
You can also run the following command in your terminal:
[.console-input]
[source,bash]
----
curl localhost:8080/memory
----
An example of output (it can vary on each prompt execution):
[.console-output]
[source,text]
This class defines two REST endpoints: `/hello` for saying hello to the world, and `/hello/{name}` for saying hello to a specific name. You can access these endpoints at `http://localhost:8080/hello` and `http://localhost:8080/hello/{name}` respectively.
[User]: Create a test of the first step. Be short, 15 lines of code maximum.
[LLM]: Here's an example of a simple test for the `sayHello` endpoint in Quarkus using JUnit:
----
== How to index a conversation
We can use the LangChain4j extension to index a conversation so we can reuse it, and keep multiple, parallel conversations separated.
Let's add a new `guess()` method to our `DeveloperResource`:
[.console-input]
[source,java]
----
@GET
@Path("/guess")
@Produces(MediaType.TEXT_PLAIN)
public String guess() {

    String msg1FromUser1 = "Hello, my name is Klaus and I'm a doctor";
    String msg1FromUser2 = "Hi, I'm Francine and I'm a lawyer";

    // Two separate conversations, identified by memory IDs 1 and 2
    return "[User1]: " + msg1FromUser1 + "\n" +
           "[LLM]: " + assistant.chat(1, msg1FromUser1) + "\n\n" +
           "------------------------------------------\n\n" +
           "[User2]: " + msg1FromUser2 + "\n" +
           "[LLM]: " + assistant.chat(2, msg1FromUser2) + "\n\n" +
           "------------------------------------------\n\n" +
           "[User2]: What is my name?\n" +
           "[LLM]: " + assistant.chat(2, "What is my name?") + "\n\n" +
           "------------------------------------------\n\n" +
           "[User1]: What is my profession?\n" +
           "[LLM]: " + assistant.chat(1, "What is my profession?") + "\n\n" +
           "------------------------------------------";
}
----

You can check your implementation by pointing your browser to http://localhost:8080/guess[window=_blank]
You can also run the following command:
[.console-input]
[source,bash]
----
curl localhost:8080/guess
----
The result will appear in your terminal. An example of output (it can vary on each prompt execution):
[.console-output]
[source,text]
----
[User1]: Hello, my name is Klaus and I'm a doctor
[LLM]: Nice to meet you, Klaus! What field of medicine do you specialize in?
------------------------------------------
[User2]: Hi, I'm Francine and I'm a lawyer
[LLM]: Hello Francine, nice to meet you. How can I assist you today?
------------------------------------------
[User2]: What is my name?
[LLM]: Your name is Francine, and you mentioned earlier that you are a lawyer. How can I assist you today, Francine?
------------------------------------------
[User1]: What is my profession?
[LLM]: Your profession is being a doctor, Klaus. How can I assist you today?
------------------------------------------
----
NOTE: Take a close look at the IDs of our calls to the assistant. Do you notice that the last question was in fact directed to Klaus with ID=1? We were indeed able to maintain 2 separate and concurrent conversations with the LLM!
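The isolation shown above can be sketched without an LLM at all. In this minimal sketch, `FakeAssistant` is a hypothetical stand-in (not LangChain4j API) that simply records each conversation's history under its memory ID, demonstrating that IDs 1 and 2 never share state:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for an assistant with memory: instead of calling
// an LLM, it records each conversation's messages keyed by memory id.
class FakeAssistant {
    private final Map<Integer, List<String>> histories = new ConcurrentHashMap<>();

    String chat(int memoryId, String message) {
        List<String> history = histories.computeIfAbsent(memoryId, id -> new ArrayList<>());
        history.add(message);
        // "Reply" by reporting how many messages this conversation has seen
        return "conversation " + memoryId + " has " + history.size() + " message(s)";
    }
}

public class ConcurrentConversations {
    public static void main(String[] args) {
        FakeAssistant assistant = new FakeAssistant();
        assistant.chat(1, "Hello, my name is Klaus and I'm a doctor");
        assistant.chat(2, "Hi, I'm Francine and I'm a lawyer");
        // Each id keeps its own history: id 1 and id 2 are independent,
        // so both conversations report exactly 2 messages here.
        System.out.println(assistant.chat(1, "What is my profession?"));
        System.out.println(assistant.chat(2, "What is my name?"));
    }
}
```

A real LLM does the same bookkeeping through the `ChatMemoryProvider`: the memory ID selects which window of past messages is sent along with each new prompt.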