Finetuned FLAN-T5 to translate English to Hawaiian Pidgin
Updated Oct 31, 2023 - Jupyter Notebook
Performing prompt engineering on a dialogue summarization task using FLAN-T5 and the dialogsum dataset. Exploring how different prompts affect the model's output, and comparing zero-shot and few-shot inference.
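As a sketch of the zero-shot vs. few-shot comparison described above (the prompt template and dialogues here are illustrative, not taken from the dialogsum dataset):

```python
def make_prompt(dialogue, examples=None):
    """Build a FLAN-T5 style summarization prompt.

    With examples=None this is a zero-shot prompt; passing solved
    (dialogue, summary) pairs turns it into a few-shot prompt.
    """
    parts = []
    for ex_dialogue, ex_summary in (examples or []):
        parts.append(
            f"Summarize the following conversation.\n\n"
            f"{ex_dialogue}\n\nSummary: {ex_summary}\n"
        )
    parts.append(
        f"Summarize the following conversation.\n\n{dialogue}\n\nSummary:"
    )
    return "\n".join(parts)

# Zero-shot: the model sees only the instruction and the dialogue.
zero_shot = make_prompt("A: Is the report ready? B: Almost, one more day.")

# One-shot: a solved example is prepended to steer the output format.
one_shot = make_prompt(
    "A: Is the report ready? B: Almost, one more day.",
    examples=[("A: Lunch at noon? B: Sure, see you then.",
               "They agree to meet for lunch at noon.")],
)
```

The same strings can then be tokenized and passed to `model.generate` to compare outputs across prompt styles.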
Web app for a therapist chatbot. Uses a custom fine-tuned local FLAN-T5 model for summarisation and GPT-3.5 for chat.
Project based on PyTorch Lightning and Transformers for training Seq2SeqLM models, with a primary focus on MT5 and FLAN-T5, though not limited to them.
Discusses four LLM use cases and case studies: the approaches taken, why they matter, and how they differ from other LLM applications. Several approaches are possible with an LLM; the ones examined here offer useful insight into how each use case can be executed.
LLM projects
Repository for the T5 model family, where users can train their own model on any T5 version.
Multiple LLM-based models for NLP tasks, starting with question answering on custom data.
This project fine-tunes an LLM (FLAN-T5) for a text summarisation task using a PEFT approach: LoRA is used for fine-tuning, and all evaluation metrics are computed with ROUGE scoring.
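The LoRA idea behind that PEFT approach can be sketched in plain Python: instead of updating a full weight matrix W, train two small matrices A (r×k) and B (d×r) and add their scaled product to the frozen base weight. The dimensions and values below are illustrative, not taken from any of these repositories.

```python
def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_merge(W, A, B, alpha, r):
    """Return W + (alpha / r) * B @ A -- the merged LoRA weight."""
    delta = matmul(B, A)
    return [[w + (alpha / r) * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# Frozen 2x2 base weight and a rank-1 adapter (r = 1).
W = [[1.0, 0.0],
     [0.0, 1.0]]
B = [[1.0],          # d x r
     [2.0]]
A = [[0.5, 0.5]]     # r x k
merged = lora_merge(W, A, B, alpha=1.0, r=1)

# Why it is "parameter-efficient": a full update trains d*k weights,
# while LoRA trains only r*(d + k).
d = k = 512
full_params = d * k          # 262144 trainable weights
lora_params = 8 * (d + k)    # 8192 for rank r = 8
```

In practice a library such as Hugging Face PEFT wires these adapters into the attention projections automatically; only A and B receive gradients while the base FLAN-T5 weights stay frozen.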
Text-To-Text Textbots to Demonstrate Output Differences Between Models Trained on Filtered/Unfiltered Datasets for HSS4 - The Modern Context: Select Figures and Topics
Developed a generative large language model fine-tuned on Stack Overflow data for question answering.
Demonstration of LLM techniques such as prompt engineering, full fine-tuning, and PEFT (LoRA).
NLU_NLG Winter Semester
Official Code for Analysis Done in the Paper "Frugal Prompting for Dialog Models"
Fine-tuned FLAN-T5 using instruction fine-tuning (full), LoRA-based PEFT, and RLHF with PPO
MTP-FlanT5-SBERT-Model-for-NewsQA-and-Teacher-Student-Model
The Summarizer Module of the TURB
This repository covers zero- and few-shot inference and fine-tuning a large language model for document summarization, with more to come.
Dialogue Summary LLM - FLAN-T5: an implementation of the FLAN-T5 LLM to summarize dialogues. Prompt engineering, fine-tuning with PEFT, and fine-tuning with RL (PPO) are explored within this project.
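Summarization projects like those listed above are typically scored with ROUGE. A minimal ROUGE-1 F1 (unigram overlap between a reference and a candidate summary) can be computed in plain Python; the example strings are made up for illustration:

```python
from collections import Counter

def rouge1_f1(reference, candidate):
    """ROUGE-1 F1: harmonic mean of unigram precision and recall."""
    ref = Counter(reference.split())
    cand = Counter(candidate.split())
    overlap = sum((ref & cand).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

score = rouge1_f1("they agree to meet for lunch",
                  "they plan to meet for lunch")  # 5 of 6 unigrams match
```

Production code would normally use a maintained package (e.g. `rouge-score`), which adds stemming and ROUGE-2/ROUGE-L variants, but the overlap idea is the same.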