iteration_add_score.txt
#iteration - add a score to each clause in the input paragraph
#develop the functionality
we have an input paragraph
we separate the input paragraph into a list of clauses (text.split)
we need to define a score for a small text
we need to motivate the metrics
interface: provide all signatures (function names and parameters) needed to compute the metrics
we need to compute a value
we transform the value into a score (ML: dimensionality reduction from 10 dims to 2, plot, PCA)
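The interface step above asks for the signatures needed to compute the metrics. A minimal sketch, assuming illustrative names (none of these function names appear in the notes, and the weighted sum stands in for the PCA-style projection mentioned):

```python
import re
from typing import Dict, List

def split_into_clauses(paragraph: str) -> List[str]:
    """Separate a paragraph into clauses with simple punctuation splits."""
    return [c.strip() for c in re.split(r"[.;,]", paragraph) if c.strip()]

def compute_metrics(clause: str, keywords: List[str]) -> Dict[str, float]:
    """Compute raw metric values for one clause."""
    words = clause.split()
    return {
        "has_keyword": float(any(w in keywords for w in words)),
        "length": float(len(words)),
    }

def metrics_to_score(metrics: Dict[str, float]) -> float:
    """Collapse the metric vector into a single score in [0, 1].
    (A PCA-style projection could replace this ad-hoc weighted sum.)"""
    return min(1.0, 0.7 * metrics["has_keyword"]
               + 0.3 * min(metrics["length"] / 20.0, 1.0))
```

The split rule and the weights are placeholders; the point is the shape of the three functions, not their exact behavior.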
#integration/implementation into the current system
modify the server - new responsibility (adding the score)
modify the client - ask the server for the OpenAI API; ask for annotation (requesting the score)
metrics (motivate the choice / find references in the literature)
- contains terms from some domain
- whether it has keywords (0 or 1)
- it is said by GPT (optional)
- easy to implement
(more than 4 characters, belongs to a domain's dictionary)
[AI, AGI, LLM, OpenAI]
- a base one (everyone develops it once); a good one
- interesting results
in implementation, the OpenAI result is not reliable and cannot be reproduced
if possible, use other metrics
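The 0-or-1 keyword metric above is the "easy to implement" option. A hedged sketch, using the domain dictionary from the notes (the function name is an assumption, and the "more than 4 characters" filter for terms outside the fixed dictionary is noted but not implemented here):

```python
# Hypothetical implementation of the 0/1 keyword metric.
DOMAIN_DICTIONARY = {"AI", "AGI", "LLM", "OpenAI"}  # dictionary from the notes

def keyword_metric(clause: str, dictionary=DOMAIN_DICTIONARY) -> int:
    """Return 1 if the clause contains any term from the domain dictionary,
    else 0. Tokens are stripped of trailing punctuation before lookup."""
    tokens = clause.replace(",", " ").split()
    return int(any(t.strip(".;:!?") in dictionary for t in tokens))
```

Because the metric is binary and purely local, it is reproducible, which addresses the reliability concern raised about the OpenAI-based metric.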
#another generative AI (more specialized) / Midjourney is made to respond with images
if you have the keywords, you can build context
generate, navigate, implement
1) if I give you two words, generate a graph between the two words -- lm ()
consider that the result is not a dead end; find a dedicated AI.
2) find (reliable) algorithms to navigate the graph -- mapsec
3)
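Step 2 asks for a reliable algorithm to navigate the graph. A toy sketch using breadth-first search; the graph here is a hypothetical stand-in for whatever the language model generates between two words:

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Breadth-first search over an adjacency-list graph.
    Returns the shortest path from start to goal, or None if unreachable."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

Unlike the generation step, the navigation step is deterministic, which is the "reliable" part the notes ask for.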
next step: find a solution for something AI is not made for
potential solution: create a prompt that ensures a formatted response (literature review)
domain fine-tuned approaches: outperform GPT in terms of measurement
reduce the domain to outperform
read papers: write a review of the papers (choices)
more realistic option: propose a realistic metric
requirements: reproducible results; else: if not reproducible, find some way to get coherent results (not)
specification of the feature - use case diagram
input
reference paragraph -> a list of keywords
user text input
middle process - split into sentences/clauses
output
- json {keyword: value} based on a scale
- a color for each sentence
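The specification above can be sketched end to end: reference keywords plus user text in, per-sentence scores and colors out as JSON. All names and the color thresholds are assumptions, not part of the spec:

```python
import json

def annotate(user_text: str, keywords: list) -> str:
    """Split user text into sentences, score each against the reference
    keywords, and return a JSON annotation with a color per sentence."""
    sentences = [s.strip() for s in user_text.split(".") if s.strip()]
    result = []
    for s in sentences:
        hits = [k for k in keywords if k in s]
        score = len(hits) / max(len(keywords), 1)
        # illustrative color scale; thresholds would need tuning
        color = "green" if score > 0.5 else "yellow" if score > 0 else "red"
        result.append({"sentence": s, "keywords": hits,
                       "score": score, "color": color})
    return json.dumps(result)
```

This is the piece the client would render: the JSON carries both the keyword/value pairs and the per-sentence color from the output spec.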
where are the different functions; refactor the code to identify the modularization
which part highly depends on GPT; delimit the problem
take the code - make a function - modularization (a module that highly depends on GPT)
when adding the feature, add just one dependency on GPT
validation cycle
when we ask the AI for the criteria, we can specify the type of response we want, so the AI can check the criteria
we need a validation process to test whether the AI response meets the requirements
(in terms of format and limits).
- validated response
- adapt the function that validates that the text contains a list of specific values
- validate that it is a list of numbers - yes/no
- convert the text into a list of numbers with no errors
- go through each value and check that it belongs to a given range
then you can be sure of the list and the values it gives you
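The validation steps above (is it a list of numbers? does each value fall in the range?) can be sketched as two small functions; the names and the default [0, 1] range are assumptions:

```python
def parse_number_list(text: str) -> list:
    """Turn a text like '0.2, 0.8, 0.5' or '[0.2, 0.8]' into floats.
    Raises ValueError if any token is not a number."""
    cleaned = text.strip().strip("[]")
    return [float(tok) for tok in cleaned.split(",") if tok.strip()]

def validate_scores(text: str, low: float = 0.0, high: float = 1.0):
    """Return (is_valid, values): is_valid is True only if the text parses
    as a list of numbers and every value lies in [low, high]."""
    try:
        values = parse_number_list(text)
    except ValueError:
        return False, []
    return all(low <= v <= high for v in values), values
```

Splitting parsing from range-checking mirrors the notes: one step answers "is it a list of numbers, yes/no", the next walks the values against a given range.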
homework:
make sure we can ask GPT to give us a list of values - compute between one paragraph and another paragraph
- compute the scores
- add a new feature to validate the GPT response: that it is effectively a list of values
- use the new feature in the existing code (decide where to put this feature)
- color (user experience)
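One plausible way to wire the validation feature into the existing client, since the notes say the GPT result cannot be reproduced reliably: a retry loop that re-asks until the response validates. Everything here is hypothetical, including `call_gpt`, which stands in for whatever function the client already uses:

```python
def get_validated_scores(call_gpt, prompt: str, max_retries: int = 3) -> list:
    """Ask for scores and re-ask until the response is a valid list of
    numbers in [0, 1], up to max_retries attempts."""
    for _ in range(max_retries):
        response = call_gpt(prompt)
        try:
            values = [float(t) for t in response.strip("[] ").split(",")]
        except ValueError:
            continue  # not a list of numbers; ask again
        if all(0.0 <= v <= 1.0 for v in values):
            return values
    raise RuntimeError("no valid response after retries")
```

This keeps the "just one dependency on GPT" constraint: only `call_gpt` touches the API, and validation lives entirely on our side.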