Tensorflow implementation and pre-trained models of QANet for machine reading comprehension
❓✔️ BERT-based model which returns "an answer", given a user question and a passage that includes the answer to the question
Pipeline for performing question-answering tasks using the Stanford Question Answering Dataset (SQuAD) 2.0.
A simple python script to crawl worldfootball.net and find club and national squad mates of any footballer
A repository for Extractive Question Answering, using a BERT model and the SQuAD dataset.
squad mod
NLU_NLG Winter Semester
Implementation of the Bi-Directional Attention Flow Model (BiDAF) in Python using Keras
Web service for neural answering of open-domain questions
This project uses BERT to build a QA system fine-tuned on the SQuAD dataset, improving the accuracy and efficiency of question-answering tasks. We address challenges in contextual understanding and ambiguity handling to enhance user experience and system performance.
Experiments related to reading comprehension datasets: SQuAD and NewsQA
In this repository a pretrained DistilBERT model was fine-tuned on a SQuAD-like dataset for the question-answering task, reaching 67 percent accuracy
#SQuADGoals Tensorflow Code
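Many of the repositories above perform extractive QA on SQuAD, where a model emits per-token start and end logits and the answer is the span maximizing their sum. As a rough illustration of that common decoding step (a minimal sketch, not taken from any of the listed projects; the function name and logit values are hypothetical):

```python
def best_span(start_logits, end_logits, max_len=30):
    """Pick the answer span (start, end) maximizing the sum of
    start and end logits, subject to start <= end and a cap on
    span length. This mirrors the decoding used in extractive
    SQuAD-style QA systems."""
    best = (0, 0)
    best_score = float("-inf")
    for s, s_logit in enumerate(start_logits):
        # Only consider end positions at or after the start,
        # within the maximum allowed span length.
        for e in range(s, min(s + max_len, len(end_logits))):
            score = s_logit + end_logits[e]
            if score > best_score:
                best_score = score
                best = (s, e)
    return best, best_score

# Toy logits over a 3-token passage: token 1 is the likeliest
# start, token 2 the likeliest end, so the span is (1, 2).
span, score = best_span([0.1, 5.0, 0.2], [0.0, 0.3, 4.0])
```

Real systems add refinements (no-answer thresholds for SQuAD 2.0, top-k candidate spans), but the core argmax over valid spans is the same.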