Abacus.AI's Projects
Run the pretraining-data detection tests from https://swj0419.github.io/detect-pretrain.github.io/ to check for benchmark contamination
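The linked page presents the Min-K% Prob pretraining-data detection test: score a text by the average log-probability of its k% least-likely tokens under the model. A minimal sketch, assuming a Hugging Face causal LM; the model name and k = 20% are placeholders, and the repo's exact scoring may differ:

```python
# Hedged sketch of the Min-K% Prob contamination test (Shi et al.);
# the linked repo's exact scoring may differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def min_k_prob(text, model, tokenizer, k=0.2):
    """Average log-prob of the k% lowest-probability tokens in `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Log-probability assigned to each actual next token.
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    token_lp = logprobs[torch.arange(ids.size(1) - 1), ids[0, 1:]]
    n = max(1, int(k * token_lp.numel()))
    return token_lp.topk(n, largest=False).values.mean().item()

tok = AutoTokenizer.from_pretrained("gpt2")   # placeholder model
lm = AutoModelForCausalLM.from_pretrained("gpt2")
score = min_k_prob("Example benchmark passage ...", lm, tok)
# Higher (less negative) scores suggest the text was seen in pretraining.
```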
Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware.
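One common ingredient in such context-extension recipes is RoPE position interpolation, which compresses position indices so a model trained at length L can attend over roughly scale × L tokens. This is a sketch of the general idea, not necessarily what this repo implements; `scale` and the dimensions are placeholders:

```python
# Hedged illustration of linear RoPE position interpolation, one common
# context-extension technique; not necessarily this repo's recipe.
import torch

def rope_frequencies(head_dim, positions, base=10000.0, scale=8.0):
    """Rotary angles with positions compressed by `scale`, letting a
    model trained at length L cover roughly scale * L tokens."""
    inv_freq = 1.0 / base ** (torch.arange(0, head_dim, 2) / head_dim)
    angles = (positions / scale)[:, None] * inv_freq[None, :]
    return torch.cos(angles), torch.sin(angles)

# Placeholder head_dim=64 and a 32k-token position range.
cos, sin = rope_frequencies(64, torch.arange(32768).float())
```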
Packages and instructions for training and running inference with LLMs on NVIDIA's GH200 machines
Code to accompany the NeurIPS paper https://arxiv.org/abs/2006.08564
A framework for few-shot evaluation of language models.
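This description matches EleutherAI's lm-evaluation-harness. A minimal sketch of its Python entry point in recent versions; the checkpoint and task name are placeholders:

```python
# Hedged sketch of lm-evaluation-harness's Python entry point (recent
# versions; the `lm_eval` CLI wraps the same call).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                    # Hugging Face backend
    model_args="pretrained=gpt2",  # placeholder checkpoint
    tasks=["hellaswag"],           # any registered task names
    num_fewshot=0,
)
print(results["results"]["hellaswag"])
```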
This repository contains code and tooling for the Abacus.AI LLM Context Expansion project. Also included are evaluation scripts and benchmark tasks that evaluate a model's information retrieval capabilities with context expansion. We also include key experimental results and instructions for reproducing and building on them.
A lightweight python gRPC client to communicate with TensorFlow Serving
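The blurb does not spell out the client's own API, so as an illustration here is the raw TensorFlow Serving PredictionService call that such a client typically wraps; the host, model name, and input tensor are placeholders:

```python
# Hedged sketch of the underlying TensorFlow Serving gRPC call a
# lightweight client typically wraps; names are placeholders.
import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

channel = grpc.insecure_channel("localhost:8500")  # default gRPC port
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = "my_model"               # served model name
request.inputs["inputs"].CopyFrom(
    tf.make_tensor_proto([[1.0, 2.0, 3.0]])        # example input tensor
)
response = stub.Predict(request, timeout=10.0)
print(response.outputs)
```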
Use Case Notebooks
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
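A minimal sketch of attaching a LoRA adapter with PEFT; the base checkpoint and hyperparameters are placeholders:

```python
# Hedged sketch: wrapping a causal LM with a LoRA adapter via PEFT.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder model
config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,             # low-rank update dimension
    lora_alpha=16,   # scaling factor
    lora_dropout=0.05,
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only adapter weights are trainable
```

Only the low-rank adapter weights receive gradients, which is what makes this style of fine-tuning parameter-efficient.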
Puzzle generator and solvers for Einstein's Riddle, the Zebra Puzzle, and the Blood Donation Puzzle. For non-commercial use only!
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
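A minimal sketch of the Transformers pipeline API; the model name and prompt are placeholders:

```python
# Hedged minimal usage of the Transformers pipeline API.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # placeholder model
out = generator("Context extension lets models", max_new_tokens=20)
print(out[0]["generated_text"])
```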
Fast Inference Solutions for BLOOM
A high-throughput and memory-efficient inference and serving engine for LLMs
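This matches vLLM. A minimal sketch of its offline batch-inference API; the checkpoint, prompt, and sampling settings are placeholders:

```python
# Hedged sketch of vLLM's offline batch inference API.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # any HF-compatible checkpoint
params = SamplingParams(temperature=0.8, max_tokens=64)
outputs = llm.generate(["Summarize PagedAttention in one line:"], params)
for out in outputs:
    print(out.outputs[0].text)
```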
XAI-Bench is a library for benchmarking feature attribution explainability techniques
Hackable and optimized Transformers building blocks, supporting a composable construction.
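This matches xFormers. A minimal sketch of its memory-efficient attention op, which avoids materializing the full attention matrix; the shapes are placeholders, laid out as (batch, seq_len, heads, head_dim), and a CUDA device is assumed:

```python
# Hedged sketch of xFormers' memory-efficient attention op; requires
# a CUDA device, shapes are (batch, seq_len, heads, head_dim).
import torch
import xformers.ops as xops

q = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)
out = xops.memory_efficient_attention(q, k, v)  # same shape as q
```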