Implemented a SOTA model, fine-tuned on our tweet and Reddit data, for rumour prediction via SDQC
Updated Nov 11, 2019 · Jupyter Notebook
👨‍🎓 This repo is a supplement to my video on Transformers and Text Summarization as part of my series AI does AI (https://youtu.be/p_6xgrykPMQ)
Multi-Label Classification using BERT and XLNet
This repository houses three fine-tuned machine learning models for NLP tasks: Text Emotion Recognition, Sentiment Analysis, and Cyberbullying Detection.
Experiments comparing the performance of several pre-trained transformer models on a basic sentiment regression task
invos Data Project - https://www.youtube.com/watch?v=uSjCmNSwtIo
A novel hypernym discovery system based on XLNet that outperformed baselines in SemEval tasks, demonstrating expertise in innovative language models and ethical considerations in NLP.
Simple from-scratch implementations of transformer-based models that match the state of the art.
Affective Bias in Large Pre-trained Language Models
Exploring a collection of Jupyter notebooks showcasing a variety of Natural Language Processing (NLP) projects.
Affective Bias in PLM
Three transformer models for performing extractive summarization on news data
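For context on what extractive summarization means, here is a minimal dependency-free sketch of the idea: pick the highest-scoring sentences from the source text verbatim rather than generating new text. This toy version scores sentences by word frequency; it is an illustrative baseline, not the transformer-based approach the repository above implements.

```python
from collections import Counter
import re

def extractive_summary(text, n_sentences=2):
    """Keep the top-n sentences by summed word frequency, in original order.

    A toy frequency-based baseline for extractive summarization; real systems
    (e.g. transformer encoders) learn much richer sentence scores.
    """
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'[a-z]+', text.lower()))
    # Score each sentence by the summed corpus frequency of its words.
    scored = [(sum(freq[w] for w in re.findall(r'[a-z]+', s.lower())), i, s)
              for i, s in enumerate(sentences)]
    top = sorted(scored, reverse=True)[:n_sentences]
    # Restore original sentence order for readability.
    return ' '.join(s for _, i, s in sorted(top, key=lambda t: t[1]))
```

The key property of the extractive family is that every output sentence appears verbatim in the input, unlike abstractive summarization, which paraphrases.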
Sentiment Analysis on an Amazon English reviews dataset using various transformers from Hugging Face.
The aim is to train a model on movie reviews to predict the rating of a given review.
Testing the use of transformer models for various NLP tasks, leveraging a pretrained BERT model from Hugging Face
Sentiment analysis using pre-trained neural networks based on transformers.
Embeddings and language models