
Commit 642fdb0

docs: update README with more contextual eval info
Co-authored-by: Kelly Brown <kelbrown@redhat.com>
Signed-off-by: Nathan Weinberg <nweinber@redhat.com>
1 parent 83f9d95 commit 642fdb0

2 files changed: 73 additions & 1 deletion


.spellcheck-en-custom.txt

Lines changed: 5 additions & 0 deletions
@@ -4,15 +4,20 @@
 # SPDX-License-Identifier: Apache-2.0
 Backport
 backported
+benchmarking
 codebase
+dr
 eval
 gpt
 hoc
 instructlab
 jsonl
 justfile
+MMLU
 openai
+SDG
 Tatsu
+tl
 TODO
 venv
 vllm

README.md

Lines changed: 68 additions & 1 deletion
@@ -7,7 +7,74 @@

Python Library for Evaluation

-## MT-Bench / MT-Bench-Branch Testing Steps

## What is Evaluation?

Evaluation allows us to assess how a given model is performing against a set of specific tasks. This is done by running a set of standardized benchmark tests against the model. Running evaluation produces numerical scores across these various benchmarks, as well as logs containing excerpts/samples of the outputs the model produced during those benchmarks. Using a combination of these artifacts as a reference, along with a manual smoke test, gives us the best idea of whether or not a model has learned and improved on something we are trying to teach it. There are two stages of model evaluation in the InstructLab process:

### Inter-checkpoint Evaluation

This step occurs during multi-phase training. Each phase of training produces multiple different “checkpoints” of the model that are taken at various stages during the phase. At the end of each phase, we evaluate all the checkpoints in order to find the one that provides the best results. This is done as part of the [InstructLab Training](https://github.com/instructlab/training) library.

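The selection step itself is simple: score every checkpoint, keep the best one. The following is a minimal, hypothetical sketch of that logic only; it is not the Training library's actual code, and the `evaluate` callable stands in for whatever benchmark is run against each checkpoint.

```python
from collections.abc import Callable
from pathlib import Path

def pick_best_checkpoint(
    phase_output_dir: Path,
    evaluate: Callable[[Path], float],
) -> Path:
    """Score every checkpoint produced by a training phase with the given
    evaluation function and return the highest-scoring one."""
    checkpoints = sorted(phase_output_dir.glob("checkpoint-*"))
    if not checkpoints:
        raise ValueError(f"no checkpoints found under {phase_output_dir}")
    scores = {ckpt: evaluate(ckpt) for ckpt in checkpoints}
    return max(scores, key=scores.get)
```
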
### Full-scale final Evaluation

Once training is complete and we have picked the best checkpoint from the output of the final phase, we can run the full-scale evaluation suite, which runs MT-Bench, MMLU, MT-Bench Branch, and MMLU Branch.

## Methods of Evaluation

Below are more in-depth explanations of the suite of benchmarks we use to evaluate models.

### Multi-turn benchmark (MT-Bench)

**tl;dr** Full model evaluation of performance on **skills**

MT-Bench is a type of benchmarking that involves asking a model 80 multi-turn questions - i.e.

```text
<Question 1> → <model’s answer 1> → <Follow-up question> → <model’s answer 2>
```

A “judge” model reviews each multi-turn question and the candidate model’s answers, and rates each answer with a score out of 10. The scores are then averaged, and the resulting final score is the “MT-Bench score” for that model. This benchmark assumes no factual knowledge on the model’s part. The questions are static, but they do not become obsolete over time.

You can read more about MT-Bench [here](https://arxiv.org/abs/2306.05685).

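As a rough illustration of how the final number is produced, the sketch below aggregates per-turn judge ratings into a single average score. The `judge_rating` callable is hypothetical and stands in for the judge-model call; this is not the instructlab-eval API.

```python
from collections.abc import Callable, Sequence
from statistics import mean

def mt_bench_style_score(
    conversations: Sequence[Sequence[tuple[str, str]]],
    judge_rating: Callable[[str, str], float],
) -> float:
    """Average the judge's 1-10 ratings over every turn of every
    multi-turn conversation to produce a single MT-Bench-style score.

    Each conversation is a sequence of (question, model_answer) turns;
    `judge_rating` is a stand-in for the judge model's scoring call.
    """
    ratings = [
        judge_rating(question, answer)
        for turns in conversations
        for question, answer in turns
    ]
    return mean(ratings)
```
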
### MT-Bench Branch

MT-Bench Branch is an adaptation of MT-Bench that is designed to test custom skills that are added to the model with the InstructLab project. These new skills come in the form of question/answer pairs in a Git branch of the [taxonomy](https://github.com/instructlab/taxonomy).

MT-Bench Branch uses the user-supplied seed questions to have the candidate model generate answers, which are then judged by the judge model using the user-supplied seed answers as a reference.

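The difference from plain MT-Bench is that the judge is given a reference answer to compare against. A minimal, hypothetical sketch of that flow is below; the function names are illustrative only and do not reflect the library's actual interfaces.

```python
from collections.abc import Callable, Sequence
from statistics import mean

def mt_bench_branch_style_score(
    seed_examples: Sequence[tuple[str, str]],        # (seed question, seed answer)
    candidate_answer: Callable[[str], str],          # candidate model call
    judge_rating: Callable[[str, str, str], float],  # (question, answer, reference) -> 1-10
) -> float:
    """Generate candidate answers for taxonomy seed questions, then have the
    judge rate each one against the user-supplied seed answer as a reference."""
    ratings = []
    for question, reference in seed_examples:
        answer = candidate_answer(question)
        ratings.append(judge_rating(question, answer, reference))
    return mean(ratings)
```
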
### Massive Multitask Language Understanding (MMLU)

**tl;dr** Full model evaluation of performance on **knowledge**

MMLU is a type of benchmarking that involves a series of fact-based multiple choice questions, each with 4 answer options. It tests whether a model is able to interpret the questions correctly, reason over the provided answer options, and select the correct one. The questions are organized into a set of 57 “tasks”, and each task has a given domain. The domains cover a number of topics ranging from Chemistry and Biology to US History and Math.

The model’s selections are then compared against the set of known correct answers to determine how many questions it got right, producing a score per task. The final MMLU score is the average of those per-task scores. This benchmark does not involve any reference/critic model and is a completely objective benchmark. It does assume factual knowledge on the model’s part, and because the questions are static, MMLU cannot be used to gauge the model’s knowledge of more recent topics.

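To make the scoring concrete, here is a small, self-contained sketch of multiple-choice accuracy averaged across tasks, which is the general shape of an MMLU-style score. It is illustrative only and not the harness's actual implementation; the example data is fabricated.

```python
from statistics import mean

def mmlu_style_score(results: dict[str, list[tuple[str, str]]]) -> float:
    """Compute an MMLU-style score from per-task results.

    `results` maps a task name (e.g. "college_chemistry") to a list of
    (model_choice, correct_choice) pairs, where each choice is one of
    "A", "B", "C", or "D". The score is the accuracy of each task,
    averaged across all tasks.
    """
    per_task_accuracy = [
        mean(1.0 if picked == correct else 0.0 for picked, correct in answers)
        for answers in results.values()
    ]
    return mean(per_task_accuracy)

# Tiny illustrative example (made-up data, not real benchmark results):
example = {
    "us_history": [("A", "A"), ("C", "B")],
    "college_biology": [("D", "D"), ("B", "B")],
}
print(mmlu_style_score(example))  # 0.75
```
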
InstructLab uses an implementation found [here](https://github.com/EleutherAI/lm-evaluation-harness) for running MMLU.

You can read more about MMLU [here](https://arxiv.org/abs/2009.03300).

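For reference, recent versions of that harness can be driven from Python roughly as follows. This is a sketch based on the lm-evaluation-harness v0.4 API rather than instructlab-eval's own wrapper, and the exact arguments may differ between versions; the model path is a placeholder.

```python
import lm_eval

# Evaluate a local Hugging Face model on the MMLU task group
# (5-shot is the conventional MMLU setting).
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=/path/to/model",
    tasks=["mmlu"],
    num_fewshot=5,
)
print(results["results"]["mmlu"])
```
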
### MMLU Branch

MMLU Branch is an adaptation of MMLU that is designed to test custom knowledge that is being added to the model via a Git branch of the [taxonomy](https://github.com/instructlab/taxonomy).

A teacher model is used to generate new multiple choice questions based on the knowledge document included in the taxonomy Git branch. A “task” is then constructed that references the newly generated answer choices. These tasks are then used to score the model’s grasp of the new knowledge in the same way MMLU works. Generation of these tasks is done as part of the [InstructLab SDG](https://github.com/instructlab/sdg) library.

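Conceptually, each generated entry has the same shape as an MMLU question: a question, a set of answer choices, and the correct choice. The record below is a purely hypothetical illustration of that shape, not the actual format emitted by the SDG library.

```python
# A hypothetical MMLU-Branch-style record: a generated multiple choice
# question with four answer options and the index of the correct one.
generated_task_entry = {
    "question": "Which component of the example product handles authentication?",
    "choices": [
        "The gateway service",
        "The billing service",
        "The reporting dashboard",
        "The mobile client",
    ],
    "answer": 0,  # index into `choices`
}
```
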
## MT-Bench / MT-Bench Branch Testing Steps

> **⚠️ Note:** Must use Python version 3.10 or later.
