QFiction

The project uses BERT fine-tuned on the Stanford Question Answering Dataset (SQuAD) and then further fine-tuned as a language model on a corpus of several fiction books concatenated into one large text file. For a question query, the pipeline is essentially:

Extract keywords from the question -> search the book for passages where those keywords occur closest together -> run the BERT QnA task on the extracted passages -> select the answer with the highest confidence
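
A minimal sketch of that pipeline is shown below. It assumes the book text sits in a book.txt file, uses a naive stopword filter for keyword extraction, and loads the SQuAD-tuned checkpoint via the Hub id mrm8488/bert-medium-finetuned-squadv2 (an assumption based on the model name mentioned later), so it is an illustration rather than this project's exact implementation.

```python
# Minimal sketch of the question pipeline described above (assumptions noted in the README text).
from transformers import pipeline

qa = pipeline("question-answering", model="mrm8488/bert-medium-finetuned-squadv2")

STOPWORDS = {"the", "a", "an", "is", "was", "who", "what", "where", "when",
             "why", "how", "did", "does", "do", "of", "in", "to", "and"}

def extract_keywords(question):
    # Naive keyword extraction: drop stopwords and punctuation.
    words = [w.strip("?.,!") for w in question.lower().split()]
    return [w for w in words if w and w not in STOPWORDS]

def find_passages(text, keywords, window=1500, step=750, top_k=5):
    # Slide a character window over the book and score each chunk by keyword hits
    # (a stand-in for "keywords closest to each other").
    scored = []
    for start in range(0, len(text), step):
        chunk = text[start:start + window]
        score = sum(chunk.lower().count(k) for k in keywords)
        if score:
            scored.append((score, chunk))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [chunk for _, chunk in scored[:top_k]]

def answer(question, book_path="book.txt"):
    text = open(book_path, encoding="utf-8").read()
    passages = find_passages(text, extract_keywords(question))
    # Run the BERT QnA task on every candidate passage and keep the most confident answer.
    results = [qa(question=question, context=p) for p in passages]
    return max(results, key=lambda r: r["score"]) if results else None

if __name__ == "__main__":
    print(answer("Who gave Harry the invisibility cloak?"))
```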

Right now it takes roughly 25-30 seconds to get an answer. Because BERT's maximum sequence length after tokenization is 512, if the answer is not located near the searched keywords within that window, the model may return a spurious answer or the [CLS] token, which means an answer could not be found.
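
One way to keep the question plus passage within that 512-token limit is sketched below, using the model's own tokenizer (the checkpoint id is an assumption, as above). The transformers question-answering pipeline also exposes a handle_impossible_answer option that returns an empty answer instead of a spurious span when nothing is found.

```python
# Sketch of trimming a passage so question + passage fit BERT's 512-token limit.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mrm8488/bert-medium-finetuned-squadv2")

def trim_passage(question, passage, max_len=512):
    # Reserve room for the question and the [CLS]/[SEP] special tokens,
    # then truncate the passage to whatever budget is left.
    q_ids = tokenizer.encode(question, add_special_tokens=False)
    budget = max_len - len(q_ids) - 3  # [CLS] question [SEP] passage [SEP]
    p_ids = tokenizer.encode(passage, add_special_tokens=False)[:budget]
    return tokenizer.decode(p_ids)
```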

The project works reasonably well on the Harry Potter books, but not on The Lord of the Rings or A Song of Ice and Fire. This is likely because the language in the Harry Potter books is comparatively simple, while the latter two are considerably more complex.

A Jupyter notebook for pretraining is included; run it on Colab with a GPU runtime.

The books used for the project, in .txt format, can be found in this repository. Merge all of them into a single corpus file, then train (a small sketch follows).
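
```python
# Sketch of merging the book .txt files into one training corpus
# (file names here are placeholders, not the actual files in the repository).
from pathlib import Path

books = ["harry_potter.txt", "lotr.txt", "asoiaf.txt"]
with open("corpus.txt", "w", encoding="utf-8") as out:
    for book in books:
        out.write(Path(book).read_text(encoding="utf-8"))
        out.write("\n")
```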

Screenshots

Screenshot captions (images not reproduced here):

  • Home
  • Questions
  • Epic Fails
  • "Wouldn't have disagreed if I asked this about the Starks"
  • "What a plot twist!"
  • "Cunning indeed!"

Video

https://youtu.be/N6OQ2bsTO2c

Technology Stack

  • Django
  • jQuery
  • HTML/CSS/Bootstrap 4

I used huggingface/transformers to run BERT in Python; the pre-trained SQuAD model, bert-medium-finetuned-squadv2, can be downloaded from the Hugging Face model hub.
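
For reference, here is a hedged example of loading that checkpoint directly with transformers and decoding the predicted answer span; the Hub id mrm8488/bert-medium-finetuned-squadv2 is an assumption based on the model name above.

```python
# Load the SQuAD-tuned checkpoint and decode the predicted answer span.
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

name = "mrm8488/bert-medium-finetuned-squadv2"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForQuestionAnswering.from_pretrained(name)

question = "Who gave Harry the invisibility cloak?"
context = ("At Christmas, Harry received the invisibility cloak, "
           "left to him by his father.")

inputs = tokenizer(question, context, return_tensors="pt",
                   truncation=True, max_length=512)
with torch.no_grad():
    outputs = model(**inputs)

# Take the most likely start/end positions and decode the tokens between them.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs["input_ids"][0][start:end + 1]))
```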

For better results, one can try BERT-large, which is roughly 1 GB in size, or, even better, the GPT-3 API if access can be obtained.