diff --git a/data/xml/2024.clasp.xml b/data/xml/2024.clasp.xml
index ab186c991a..ad50e24e42 100644
--- a/data/xml/2024.clasp.xml
+++ b/data/xml/2024.clasp.xml
@@ -86,8 +86,10 @@
         SinaZarrieß
         39–55
         Syntactic learning curves in LMs are usually reported as relatively stable and power law-shaped. By analyzing the learning curves of different LMs on various syntactic phenomena using both small self-trained Llama models and larger pre-trained Pythia models, we show that while many phenomena do follow typical power law curves, others exhibit S-shaped, U-shaped, or erratic patterns. Certain syntactic paradigms remain challenging even for large models, resulting in a persistent preference for ungrammatical sentences. Most phenomena show similar curves for their paradigms, but the existence of diverging patterns and oscillations indicates that average curves mask important developments, underscoring the need for more detailed analyses of individual learning trajectories.
-        2024.clasp-1.7
+        2024.clasp-1.7
         bunzeck-zarriess-2024-fifty
+
+        This revision corrects an axis-scaling error for the three smaller Pythia models in Figure 2 and adjusts the description of the figure and its results accordingly.
         Not Just Semantics: Word Meaning Negotiation in Social Media and Spoken Interaction