Collection of best practices, reference architectures, model training examples and utilities to train large models on AWS.
A benchmark for evaluating learning agents based on just language feedback
SORSA: Singular Value and Orthogonal Regularized Singular Vector Adaptation of Large Language Models
H2O LLM Studio - a framework and no-code GUI for fine-tuning LLMs. Documentation: https://h2oai.github.io/h2o-llmstudio/
An efficient, flexible and full-featured toolkit for fine-tuning LLM (InternLM2, Llama3, Phi3, Qwen, Mistral, ...)
DLRover: An Automatic Distributed Deep Learning System
A Comparison of LLM Chat Bot Implementation Methods with Travel Use Case
SkyPilot: Run LLMs, AI, and Batch jobs on any cloud. Get maximum savings, highest GPU availability, and managed execution—all with a simple interface.
Nvidia GPU exporter for prometheus using nvidia-smi binary
Implementation of LLM ✨from scratch✨
Popular Large Language Models from scratch - 2024
Official implementation for the paper *🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving*
Computing algorithms to increase the context windows of LLMs at a smaller scale
Low-code framework for building custom LLMs, neural networks, and other AI models
Collection of text2cypher datasets, evaluations, and fine-tuning instructions
Fine-tuning of the Flan-T5 LLM for text classification
LLM-PowerHouse: Unleash LLMs' potential through curated tutorials, best practices, and ready-to-use code for custom training and inferencing.
Collection of best practices, reference architectures, examples, and utilities for foundation model development and deployment on AWS.